On 11 June 2024, Digital Transformation host Kevin Crane was joined by Bryan Phillips, Senior Manager | Pre-Sales Consulting, TELUS International; Pat McGrew, Managing Director, McGrewGroup; and Ronald Gerlach, Chief Marketing Officer, Gaubert Oil Company.
Views on news
While the announcement that Apple is to launch generative AI across iPhone-specific applications such as Siri and iMessage may signal a new phase of the AI wars, the true star of the developer conference will be how Apple plans to roll out its AI features to its massive installed base of 2.2 billion iOS devices. An emphasis on building consumer trust in AI could be the differentiating factor that Apple leverages to avoid the pitfalls Google and others have encountered by deploying AI haphazardly in their messaging. For employers, it means that most of their workforce will soon have a ChatGPT tool in their pockets, which calls for company policies governing its use. Apple has a reputation for strong privacy protection; however, more computing-intensive ChatGPT features will require cloud servers, which may expose Apple devices to new threats and change the brand's security proposition. That said, AI has already been on iPhones for years; users were simply less aware of it than they are now.
The opportunities and pitfalls of wider AI adoption
Service providers were already using AI extensively, but they didn't feel the need to communicate this to customers until ChatGPT's explosion onto the market. There are currently no international standards for deploying AI. The EU's AI Act came into being about a month ago. The EU's aim has been to create clear requirements and obligations for specific uses of AI through a risk-based approach that sets up categories of unacceptable, high, limited and minimal risk. Non-compliance carries heavy fines of around 38 million dollars or 7 per cent of the company's annual revenue. Businesses with a presence in the EU can either adopt these guidelines or set their own. Accountability for data breaches is still not clear-cut.
The strongest proposition for AI is speed: it enables iterative failure testing at a pace we've never seen before, with better metrics and data. A GPT engine can be leveraged to test outbound communication for the appropriate reading grade level and sentiment. All in all, AI has the biggest potential behind the scenes, pulling data together and generating insights. Employees who learn how to leverage AI in their jobs won't be replaced – only those who refuse to learn how to use AI tools. Digital co-workers take on functions that were previously a human colleague's tasks; the people whose tasks were automated can, in turn, be trained to operate AI systems.
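The kind of grade-level check the panel describes can be automated without a GPT engine at all. A minimal sketch, using the standard Flesch-Kincaid grade formula and a crude syllable heuristic (both are well-known techniques, not anything the panellists specified):

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels, minimum one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level: 0.39*(words/sentences)
    + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)
```

An outbound message scoring well above the target grade level would be flagged for rewriting; a production pipeline would use a proper syllable dictionary rather than this vowel-group heuristic.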
An ethical AI governance policy should include standards for transparency and accountability: AI-related decisions must be explainable and justifiable; bias must be eliminated, with AI systems checked for it continuously; and compliance with data protection regulations is key, as is keeping a human in the loop.
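The policy components above lend themselves to a pre-deployment checklist. A minimal sketch; the field names are illustrative labels for the panel's points, not any standard or regulatory schema:

```python
from dataclasses import dataclass, fields

@dataclass
class AIGovernanceChecklist:
    """Illustrative checks drawn from the panel's policy components."""
    transparency_documented: bool = False
    accountability_assigned: bool = False
    decisions_explainable: bool = False
    bias_review_scheduled: bool = False   # checked continuously, not once
    data_protection_compliant: bool = False
    human_in_the_loop: bool = False

def outstanding_items(checklist: AIGovernanceChecklist) -> list[str]:
    """Return the names of checks that have not yet passed."""
    return [f.name for f in fields(checklist)
            if not getattr(checklist, f.name)]
```

A deployment gate could then block release while `outstanding_items` returns anything at all.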
The panel’s advice
© 2024, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543