
The promise and peril of generative AI for supply chains

Sponsored by Kinaxis

“Our rules of thumb are no longer up to the challenge.” That was how a chief supply chain officer described the supply chain he found in his new role, and the phrase stuck with me. Interest in generative AI as something better able to meet today’s challenges than simple rules of thumb has accelerated – and ChatGPT’s emergence has added to that already explosive growth.

 

Generative AI’s remarkable capabilities seem like wizardry. So allow me to direct your attention to the technology behind the curtain, to highlight both its risks and its rewards, and to share the essential leadership skills AI cannot provide.

 

Generative AI is like a magic trick

 

Millions of people are using generative AI – ChatGPT had already reached 100 million users by January 2023, the fastest consumer adoption of a technology in history. Part of the draw is its remarkable capabilities, from conjuring up songs about anything, sung in anyone’s voice, to acing standardised tests or carrying on spookily human conversations.

 

But generative AI is less magic, more magic trick – it can do remarkable things, but only under carefully controlled circumstances. A magician can pull a rabbit out of a hat on stage but can’t repeat the trick on command in a casual setting. Magic works through a sequence of steps performed in a specific order to (mis)direct the audience’s attention – and unlike a supply chain, the trick depends on nothing disrupting that execution.

 

Under the hood of generative AI chatbots such as ChatGPT are statistical sentence-completion machines that compose content based on probabilities, not understanding, context or empathy. A French history professor illustrated this limitation with a lengthy interrogation he called “My dinners with GPT-4”, showing how the right questions forced ChatGPT into contradicting itself (while irritating its interlocutor by failing to comply with repeated, specific requests).
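
To make the “statistical sentence completion” point concrete, here is a minimal sketch in Python. It is not how ChatGPT is implemented – the context, vocabulary and probabilities are invented for illustration – but it shows the core move: pick the next word according to learned probabilities, with no understanding of shipments or suppliers.

```python
import random

# Toy illustration (not a real language model): for a given context, a
# generative model assigns a probability to each candidate next word and
# samples from that distribution. The probabilities below are made up.
context = "The shipment from our supplier was"
next_word_probs = {
    "delayed": 0.55,
    "on-time": 0.25,
    "cancelled": 0.15,
    "purple": 0.05,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sample a continuation in proportion to its probability; rerunning can give
# a different, equally confident answer - one root of "hallucinations".
print(context, random.choices(words, weights=weights, k=1)[0])
```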

 

The promise of generative AI in supply chains

 

What dazzles about generative AI is that it dips into realms we thought were a solely human domain, such as the creative arts. Anyone with a browser can produce results, since it requires no sophisticated coding skills or knowledge. Essays, poetry and paintings often indistinguishable from human creations can be generated with a few short prompts. But its applications in business are exploding beyond mere novelty: early test cases range from creating novel protein sequences to personalising marketing messages and answering complex legal questions. We are only at the nascent stage of this technology, and the explosion of generative AI start-ups will only expand the options.

 

I’ve seen our own R&D team test some very cool applications and have been following the news closely in search of other examples relevant to supply chain. While there is endless speculation, two supply chain-specific opportunities intrigued me. The first is MIT professor Yossi Sheffi’s idea for using generative AI to monitor all possible sources of risk for a given supplier to mitigate any disruptions. The second is Walmart’s use of chatbots to automatically negotiate relatively routine contracts.

 

Beyond these targeted examples, I see even more promise in broader applications: code generation, knowledge management and user interfaces. For many years P&G has run an initiative to foster supply chain “citizen developers” who extend its digital transformation by building their own tools rather than relying on IT. Generative AI can write code for someone without coding skills (and then even debug it), a capability with huge potential to expand P&G’s concept. For knowledge management, imagine generative AI primed to intelligently search all of a company’s internal documentation and answer all kinds of questions, from “how did we do this before?” to “how do I do this now?”
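
As a hedged sketch of that knowledge-management idea – retrieve the relevant internal documents, then let a generative model compose the answer – the Python below scores a handful of hypothetical document snippets against a question using crude word overlap and assembles a prompt. A real deployment would use embeddings, an enterprise search index and access controls; the document names, snippets and retrieve helper here are all invented for illustration.

```python
# Hypothetical snippets of internal company documentation.
internal_docs = {
    "sop-21": "How we expedited ocean freight during the 2021 port congestion.",
    "sop-34": "Current process for qualifying an alternate supplier in Asia.",
    "memo-7": "Lessons learned from last year's demand-planning cycle.",
}

def retrieve(question: str, docs: dict, top_k: int = 2) -> list:
    """Return the ids of the documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(q_words & set(docs[d].lower().split())),
                    reverse=True)
    return ranked[:top_k]

question = "What is the process for qualifying an alternate supplier?"
context_ids = retrieve(question, internal_docs)

# The retrieved passages would be placed in the generative model's prompt,
# grounding its answer in the company's own knowledge.
prompt = ("Answer using only these documents:\n"
          + "\n".join(internal_docs[i] for i in context_ids)
          + f"\n\nQuestion: {question}")
print(prompt)
```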

 

Perhaps the most useful way to think of generative AI is as the future way to interface with computers. Rather than fearing how it will replace us, we could consider how it can connect us more effectively to computers and all their cognitive capacity to search the terabytes of knowledge locked in documents. Generative AI may be the gateway to seamless computer collaboration with a user interface far beyond what we experience today.

 

The many perils of generative AI

 

But the risks are as daunting as the rewards are enticing. These perils are well covered elsewhere, so I’ll summarise my four biggest concerns. The first is bias and misinformation. Models trained on human data reflect us, our pride and our prejudices, and their output reflects statistical probabilities of occurrence, not accuracy. This orientation leads to generative AI “hallucinations”, which can range from citing research that doesn’t exist to telling a reporter to leave his wife. Second, generative AI can more easily arm bad actors with even more powerful tools for malfeasance. Both of these risks are amplifications of existing ones rather than new ones, but they are significant and must be mitigated.

 

Third is the concern about job losses – valid but, I hope, overstated. I share the view of the many economists who argue that, historically, new technologies cause some short-term displacement but create jobs on net. The fourth risk is what is sometimes called the “alignment problem” – AI that pursues its own interests, which don’t necessarily match ours, the fear behind the memes about Skynet and the Terminator. My belief is that we are a long way from the artificial general intelligence that would truly pose this risk.

 

The only honest answer to any question about the future of AI is “I don’t know”. Things are moving so quickly that no one can realistically keep up. So instead, I want to direct attention to what I do know.

 

The keys to succeeding in AI

 

As a trio of economists argue in a pair of books, most recently Power and Prediction: The Disruptive Economics of Artificial Intelligence, we seek predictions wherever there is uncertainty. Predictions can be powerful, but the authors also caution against planning in misaligned silos – what they term the AI bullwhip – because a highly accurate AI-generated demand forecast is pointless if there is no capacity to produce against it. Supply chains are rife with uncertainty and silos, but instead of producing unusable forecasts or buffering with inventory, we can adopt best practices such as concurrent planning.

 

What I do know is that guiding a supply chain in the age of AI requires three leadership skills, all of which are uniquely human.

 

The first is the ability to ask the right questions. Even generative AI needs our help here, in the form of the emerging discipline of prompt engineering – humans crafting the questions and instructions that draw better answers out of the software. People are often better at describing symptoms than problems, so the ability to ferret out the underlying issue for targeted inquiry is critical. Framing the business problem well is key to solving it, and that is something AI cannot do.

 

Directing attention to the right things is the second critical skill. Magic and leadership both depend on doing this properly, and the fear of missing the AI bandwagon may be distracting you. ChatGPT may be the shiny objet du jour, but a sure ROI means investing in the people and processes that enable technology investments. Many problems land on a leader’s desk – the challenge is recognising which ones really matter and sustaining attention on them. Generative AI has an “attention mechanism” of its own to help it weigh the importance of the words in a sentence, but it cannot match human judgement.
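
For readers curious about that “attention mechanism”, here is a stripped-down numerical sketch using NumPy. The tiny two-dimensional word vectors are invented for illustration, and real models apply many attention heads to learned representations, but the mechanics are the same: score each word against a query, turn the scores into weights that sum to one, and blend the words accordingly.

```python
import numpy as np

def attention(query, keys, values):
    scores = keys @ query / np.sqrt(query.size)       # how relevant is each word?
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax: importance weights
    return weights, weights @ values                  # weighted blend of the words

# Made-up vectors standing in for the words of a sentence.
keys = values = np.array([[1.0, 0.0],   # "shipment"
                          [0.9, 0.1],   # "delayed"
                          [0.0, 1.0]])  # "the"
query = np.array([1.0, 0.0])            # the word the model is currently focusing on

weights, blended = attention(query, keys, values)
print(weights.round(2))  # larger weight = more of the model's "attention"
```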

 

And judgement is an inarguably human leadership skill. As the economist trio of Agrawal, Gans and Goldfarb explain, AI makes predictions, and predictions are inputs into decisions. We create rules of thumb for situations where decisions are difficult, but AI can raise our productivity and the quality of our decisions by automating the obvious cases with a more precise prediction – of lead times, for example. As I noted at the start, many of the rules of thumb undergirding our supply chains are no longer up to the challenge.
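
As a hedged illustration of prediction feeding decision, the sketch below turns a deliberately naive lead-time prediction into a textbook reorder-point rule. The demand, safety-stock and inventory figures are invented; the point is simply that the prediction is an input that automates the obvious case, while the hard trade-offs stay with people.

```python
# Hypothetical lead-time history (days) and a naive "prediction" - an average.
recent_lead_times = [12, 14, 11, 15, 13]
predicted_lead_time = sum(recent_lead_times) / len(recent_lead_times)

daily_demand = 40     # units per day (assumed)
safety_stock = 120    # units held against uncertainty (assumed)
on_hand = 600         # current inventory (assumed)

# Standard reorder-point rule: demand over the lead time plus safety stock.
reorder_point = daily_demand * predicted_lead_time + safety_stock

if on_hand <= reorder_point:
    print(f"Reorder now: {on_hand} on hand <= reorder point {reorder_point:.0f}")
else:
    print(f"No action yet: {on_hand} on hand > reorder point {reorder_point:.0f}")
```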

 

We should strive to relieve planners groaning under the weight of tedious decisions, but not to drive a “lights out” supply chain. We need to keep the lights on, because the most important decisions still require judgement. There are ample situations where the trade-offs are complex and unclear, the outcomes risky. It is in these situations where people matter the most, and where we most need the ability to ask the right questions, pay attention to the right things and exercise the right judgement. And these are all capabilities that are very human in nature.


By Polly Mitchell-Guthrie, VP of Industry Outreach and Thought Leadership, Kinaxis

Polly Mitchell-Guthrie is VP of Industry Outreach and Thought Leadership at Kinaxis, the leader in empowering people to make confident supply chain decisions. Previously she served as Director of Analytical Consulting Services at the University of North Carolina Health Care System, senior manager of the Advanced Analytics Customer Liaison Group in SAS’s Research and Development Division, and Director of the SAS Global Academic Program.

 

Mitchell-Guthrie has an MBA from the Kenan-Flagler Business School of the University of North Carolina at Chapel Hill, where she also received her BA in political science as a Morehead Scholar. She has been active in many roles within INFORMS (the Institute for Operations Research and Management Sciences), including serving as the chair and vice chair of the Analytics Certification Board and secretary of the Analytics Society.
