After taking in the majesty of the Houses of Parliament (and navigating the security protocols!), our supper and two hours of conversation commenced. The introductions made it clear just how far and wide the impacts of Artificial Intelligence (AI) would be felt and, perhaps more significantly, the impact it would have on the data it is both used on and draws from in order to be effective.
We had representation around the table from the following industries:
Similar challenges were raised around the table, ranging from “what can we actually do with AI?” to “what are practical use cases for AI?”, all underpinned by concern for the legal and ethical use of the data involved. AI may have been the buzzword of the last two years, and it may be difficult to avoid in the news or from vendors, but uncertainty about how to use it for the benefit of the business is clearly top of mind for many organisations.
There have been some success stories, perhaps indicating that AI has left (or is leaving) the hype cycle that accompanies any new technology. Examples were given of how AI has been successfully applied in day-to-day business activities:
But what happens when it is implemented well, and actually does result in real-world digital transformation? As one guest put it:
“It’s great to reduce workload and gain efficiencies, but only if that time is spent working on the business and not drinking cups of tea.”
For example, just because there are efficiencies equating to 20 per cent of time saved, it doesn’t mean there will be a comparable increase in productivity or profit. This stresses the importance of governance: ensuring that the AI projects that are pursued and invested in are genuinely transformative for the business, and not just vanity projects. There was agreement that the business case for an effective AI project needs to clear a very high bar.
The initial thrust of the conversation, however, was clearly about the democratisation of AI versus its governance; in other words, should we simply allow our users to find the business and use cases for AI through unregulated experimentation, or should it be strictly governed? Additionally, what role do government and regulation play in this expansion and adoption process? There was concern that freely available AI models may be hosted in unsuitable jurisdictions, falling foul of privacy laws, and that organisations may be unaware of what data a model has actually been trained on. Was the data gathered ethically? Has it been sanitised appropriately? And is it actually suitable for the specific use cases it is being applied to?
While there was broad consensus that governance is vital in a burgeoning technology like AI, it was obvious that technology moves significantly faster than governance, with both organisations and governments struggling to keep pace with the rate of change. All that being said, it was pointed out that “governance done right” actually allows organisations to move quickly without falling foul of laws and regulations, both now and in the future.
The concept of reinforcement learning from human feedback (RLHF) was raised as an approach to enforcing and supporting governance efforts: the inputs and outputs of any AI model are filtered through a human, and the model is reinforced accordingly. As such, the human is the primary control, with the human-in-the-loop acting as a failsafe. It also emphasises that while the core output of AI today is data, information and intelligence (changing very quickly, as noted above), it is the human interpretation of that output that is key.
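As a rough illustration of that human-in-the-loop pattern, the minimal sketch below (in Python, with purely hypothetical function and type names; no specific vendor’s API is implied) gates every model output behind a human reviewer and logs the verdicts as feedback for later reinforcement:

```python
# Human-in-the-loop sketch: every model output passes a human reviewer
# before it is used, and verdicts are logged as feedback for later
# reinforcement. All names here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Review:
    prompt: str
    output: str
    approved: bool


def generate(prompt: str) -> str:
    """Stand-in for a call to whatever AI model is in use."""
    return f"(model answer to: {prompt})"


feedback_log: list[Review] = []  # becomes the training signal for reinforcement


def answer_with_oversight(prompt: str) -> str | None:
    output = generate(prompt)
    print(f"Prompt: {prompt}\nOutput: {output}")
    approved = input("Approve this output? [y/n] ").strip().lower() == "y"
    feedback_log.append(Review(prompt, output, approved))  # human verdict recorded
    return output if approved else None  # nothing ships without human sign-off


if __name__ == "__main__":
    print(answer_with_oversight("Summarise last quarter's sales figures"))
```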
As an aside, there was a small but vital conversation about how and where AI models are used: using public AI models such as ChatGPT is quick, cheap and easy, but there are privacy and security concerns about how the information organisations upload is used elsewhere. In the absence of governance, many use cases are being piloted and deployed in public models when they should be deployed in private ones. AWS and technology advisors (such as SoftServe) offer plenty of secure, easy-to-deploy private models that organisations can use to test and build safely.
Using these external AI models also brings other challenges, such as ensuring the provenance of the model is fully understood, that it has not been poisoned (either maliciously or unintentionally), and that it doesn’t deliver unexpected results because its behaviour is not fully understood. (One guest described testing an AI model used to review and score CVs that consistently scored test applicants with English names and US universities much higher for selection purposes than otherwise identical CVs; a lawsuit just waiting to happen!)
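The kind of test that guest ran can be sketched as a paired-CV check: score otherwise identical CVs that differ only in name and university, then compare the results. The sketch below assumes a hypothetical `score_cv` stand-in for whichever model is actually under audit:

```python
# Paired-CV bias check: submit otherwise identical CVs that differ only
# in name and university, and compare the scores the model returns.
# `score_cv` is a dummy stand-in for the model actually under audit.
from statistics import mean

TEMPLATE = "Name: {name}. Education: {university}. Five years in data engineering."

VARIANTS = [
    {"name": "James Smith", "university": "Stanford University"},
    {"name": "Amara Okafor", "university": "University of Lagos"},
    {"name": "Wei Zhang", "university": "Tsinghua University"},
]


def score_cv(cv_text: str) -> float:
    """Dummy scorer so the sketch runs; replace with a call to the real model."""
    return (hash(cv_text) % 100) / 100.0


def paired_bias_check() -> None:
    scores = {v["name"]: score_cv(TEMPLATE.format(**v)) for v in VARIANTS}
    baseline = mean(scores.values())
    for name, score in scores.items():
        # Large, consistent gaps between otherwise identical CVs point to bias.
        print(f"{name}: {score:.2f} (cohort mean {baseline:.2f})")


if __name__ == "__main__":
    paired_bias_check()
```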
The importance of a data-governance-driven approach to AI made up a significant part of the conversation. Given the low barrier to entry for using AI, getting “the house” in order was agreed to be a prerequisite of any adoption, to ensure the greatest benefit is derived from an organisation’s data. In practical terms, this means robust records management and data classification of all unstructured data in the business. The challenge here, of course, is that in the vast majority of cases these activities are not prioritised until external auditors, clients or regulators get involved, resulting in a knee-jerk response from leadership to “fix the problem”. Good data governance was agreed to be a cultural rather than a technological problem.
We are only at the very beginning of something that is already making huge waves across the globe, no matter what you do or how you do it. What was impossible five years ago is not only possible now but accessible to every household around the world. Without at least some guidance, in the form of effective governance around its adoption and use, the unintended consequences may give us more than we bargained for.