Harnessing responsible AI

Ana Paula Assis at IBM EMEA illustrates how businesses can best harness AI, describes what responsible AI truly is, and explains the importance of ethics and an open ecosystem

 

The EU AI Act entered into force in August 2024, signalling a new chapter for organisations on their artificial intelligence (AI) journey. The first-ever comprehensive legal framework for AI worldwide, this landmark act takes a risk-based approach to regulation, applying different rules to AI systems according to the risk they pose and setting clear legal obligations for AI developers and deployers.

 

Over the last two years, businesses have been busy exploring the new frontier of generative AI (gen AI). In fact, 75% of CEOs believe that gaining a competitive advantage hinges on having the most advanced gen AI, using its large language models and rapid data-processing abilities to boost productivity, drive efficiency and improve customer and employee experience.

 

While this innovation shows no sign of slowing down, the introduction of the EU AI Act will undoubtedly realign priorities, requiring businesses to balance the speed of deployment with regulatory compliance. 

 

Looking ahead, businesses driving AI adoption need to ensure their AI strategies are not only efficient and valuable but also responsible across all areas of their operations.

 

What is responsible AI?  

Responsible AI refers to a set of principles that guide the design, development and deployment of AI. It is built on five pillars of trust – explainability, fairness, robustness, transparency and privacy – and requires companies to consider the broader societal impact of AI systems alongside stakeholder and ethical values.

 

Establishing responsible AI is not a one-and-done activity; it requires continuous effort that evolves alongside new AI developments and regulations. This means that organisations should not only focus on compliance with the EU AI Act as it stands today, but also ensure they are building the appropriate frameworks to accommodate and respond to further changes in the future.

 

The benefits of establishing responsible AI principles are significant. Responsible AI not only brings clear ethical and legal benefits to adoption, but also enhances a business’s reputation, allowing companies to position themselves as market leaders.

 

What’s more, with recent data showing that 75% of Gen Z and millennials see an organisation’s societal impact as an important factor when considering a potential employer, embracing responsible AI will be critical in attracting and retaining top talent too. 

 

Ethics, data and workflows 

Implementing responsible AI practices requires an end-to-end approach that addresses all stages of AI development and deployment. To achieve this, organisations should focus on three components: ethical principles, data governance and integrated workflows.

 

Developing a set of ethical AI principles is a critical initial step in this process. These principles must align with an organisation’s values and business objectives and should be developed by an interdisciplinary team of experts, including AI specialists, ethicists, legal experts, and business leaders. They should dictate all decisions related to AI, from data usage and training to third-party integration and procurement. 

 

Establishing an AI Ethics Board is one effective and popular strategy for achieving this. The Board should be a dedicated body of experts that acts as a leading authority in identifying, analysing and advising on the ethical risks of AI across every stage of its use. At IBM, our AI Ethics Board guides every aspect of our AI approach while engaging in critical research on its wider impact. For example, our board’s white paper, “Foundation models: Opportunities, risks and mitigations,” highlights that foundation models deliver substantial improvements in their ability to tackle challenging and intricate problems. Another recent focus is the development of energy-efficient methods to train, tune and run AI models sustainably.

 

The second key part of responsible AI is data governance. The accuracy and effectiveness of all AI applications hinge on the quality and integrity of the underlying data. The rise of gen AI has further heightened the importance of this: because gen AI creates brand-new content from its underlying data sets, any inaccuracies, biases or anomalies can be amplified, resulting in misleading and flawed outputs.

 

Establishing a robust data governance strategy must be a top priority, with clear rules on data collection, storage, and use, and procedures for data access and deletion. Without this stringent approach to data, organisations increase the likelihood of privacy, security and intellectual property issues as well as the risk of AI bias and drift. These incidents can have far-reaching financial and reputational consequences and undermine the core principles of explainability, fairness, and privacy.

 

The final critical component of a successful responsible AI strategy is the integration of workflows. Previously, responsibility for AI lay almost exclusively with engineering or data science teams. The onset of the EU AI Act will see more departments – including legal, risk, cyber-security and Human Resources (HR) – share a greater portion of this duty.

 

Building workflow management structures across these departments will help ensure compliance across the entire company, with AI systems undergoing regular audits to certify that they are functioning as intended and that human oversight is prioritised at every stage.

 

Naturally, this requires the education and training of employees to ensure teams are well versed in the legal and ethical aspects of AI. Those who engage directly with AI applications – whether in HR, finance or engineering – must have a good understanding of how these applications arrive at decisions, and have access to documentation about data sources and algorithms.

 

An open ecosystem 

While establishing responsible AI is often an internal process, businesses can contribute to the broader ethical AI landscape by supporting the open AI ecosystem. 

 

AI development has so far been driven by the energy and collaboration of the entire AI community – maintaining this is paramount for continued ethical innovation. An open ecosystem enables everyone to explore, test and study AI. This promotes competition, skilling and security, and cultivates a broader and more diverse pool of voices and perspectives, contributing to the development of more responsible AI models.

 

That’s why, last year, IBM and other technology players launched the AI Alliance, bringing together a group of now more than 100 leading organisations across industry, startups, academia, research and government to support open innovation and open science in AI.

 

By integrating a mix of the best open-source models, private models, and their own models, businesses can position themselves to capitalise on the wider landscape while fostering AI innovation and collaboration now and in the future.

 

Increasing scrutiny

As organisations prepare for the first round of EU AI Act provisions to take effect in February 2025, the use of AI and data will come under increasing scrutiny from stakeholders, customers, and regulators. Organisations should act now to harness responsible AI, seeking both short-term compliance and long-term competitiveness.

 


 

Ana Paula Assis is Chair and General Manager at IBM EMEA

 

Main image courtesy of iStockPhoto.com and Just_Super
