Preventing AI misuse

Jay Limburn at Ataccama asks whether the EU AI Act is really the answer to keeping society safe from rogue AI applications

AI is a once-in-a-decade technology disruption. Yet despite the rush to create and adopt AI-based tools, our understanding of the consequences of rapid implementation is still incomplete. That is about to change: on 2 February 2025, AI systems deemed to pose an unacceptable risk under the new EU AI Act will be prohibited.

The EU AI Act takes a risk-based approach to the development and use of AI, with the aim of ensuring AI is safe, transparent and ethical. It emphasizes the importance of human supervision and control in automated decision-making, and it highlights the need for robust data governance so that high-quality data is used to mitigate bias and discrimination.

A risk-based approach to AI development and usage

Legislation on data and AI usage has forced businesses to prioritize a data strategy. Such a strategy deals with all aspects of responsible data management: how data is collected and stored, what types of data are held, including sensitive data and personally identifiable information, and how all data is handled, secured and accessed across the organization.

Companies leveraging AI tools must comply with Article 5 of the Act and eliminate prohibited AI systems. However, history shows that retrospective regulation after rapid technology adoption leads to a patchwork of continuous fixes, as seen with privacy-focused legislation. To prevent this, the Act aims to establish a standardized framework for AI regulation across the EU, fostering legal clarity and promoting the responsible development and adoption of AI technologies that benefit society. 

AI has been heralded as the next great technology, one that will drive revenue for companies and economies. Yet the EU AI Act, which aims to promote AI innovation while ensuring that AI systems are safe, trustworthy and respectful of fundamental rights, has sparked mixed reactions from stakeholders. Some argue that, far from creating an environment conducive to AI innovation, it may impede the rapid implementation of new AI technologies. This dilemma highlights the need for a thorough evaluation of the trade-offs between fostering innovation and ensuring responsible AI development.

Defining acceptable risk 

The Act sets out four risk levels and outlines controls for each. The baseline levels are minimal and limited risk: the former includes applications such as spam filters and AI-enabled video games, while the latter covers chatbots and AI-generated text for public consumption, which carry transparency obligations.

High-risk and unacceptable-risk systems are more stringently controlled. The former are deemed potentially dangerous but remain permissible under close scrutiny and specific regulatory or legal requirements; examples include systems that determine access to education or operate critical infrastructure such as transport. The latter are prohibited outright, on the basis that they pose significant risks to personal rights and safety; examples include social scoring and the building of facial recognition databases through untargeted scraping.

Most obligations fall on high-risk AI vendors and developers, who will be subject to conformity assessments and will need to conduct internal checks for compliance purposes. In particular, the onus is very much on developers, who must perform regular evaluations, such as adversarial testing, to identify and address risks in both their data sources and the models themselves.
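
What such an evaluation looks like will vary from system to system, but the principle is to test, measure and gate. As a minimal sketch, the Python below perturbs a model's inputs with small amounts of random noise and measures how often its predictions flip; the dataset, model and 5% threshold are invented for illustration and are not prescribed by the Act.

    # A minimal robustness check: perturb inputs with small random noise
    # and measure how often the model's predictions change. The dataset,
    # model and threshold are illustrative assumptions only.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    rng = np.random.default_rng(0)
    baseline = model.predict(X)

    trials = 20
    flip_rates = []
    for _ in range(trials):
        perturbed = X + rng.normal(scale=0.05, size=X.shape)
        flip_rates.append(np.mean(model.predict(perturbed) != baseline))

    flip_rate = float(np.mean(flip_rates))
    print(f"Average prediction flip rate under noise: {flip_rate:.2%}")
    if flip_rate > 0.05:  # illustrative internal risk threshold
        print("Robustness check failed: investigate before deployment.")

A real evaluation programme would sit such checks alongside bias audits, data lineage reviews and documentation, but the pattern of automated, repeatable testing is the same.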

High data quality enables robust data governance

Data is central to the success of AI, so before businesses develop and adopt AI systems, it’s crucial that they ensure their data is AI-ready. That requires organizations to create structured, high-quality data sets to train and feed their AI systems. Without quality data inputs, there can be no trustworthy outputs.
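
What counts as AI-ready will differ between organizations, but a good starting point is an automated quality gate that data must pass before it reaches a model. The Python sketch below is purely illustrative: the columns, sample records and rules are invented for the example.

    # A minimal data quality gate run before data reaches a model:
    # count duplicate rows, missing values and out-of-range entries,
    # and block training if any check fails. Columns and rules are
    # illustrative assumptions.
    import pandas as pd

    df = pd.DataFrame({
        "customer_id": [1, 2, 2, 4],
        "age": [34, 29, 29, -1],
        "email": ["a@example.com", "b@example.com", "b@example.com", None],
    })

    checks = {
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_values": int(df.isna().sum().sum()),
        "ages_out_of_range": int(((df["age"] < 0) | (df["age"] > 120)).sum()),
    }

    if any(checks.values()):
        print("Data is not AI-ready:", checks)
    else:
        print("All quality checks passed.")

Here the gate reports one duplicate row, one missing value and one out-of-range age, which is exactly the point: problems are surfaced before they reach the model rather than after.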

Quality data is critical to comprehensive data governance: it allows companies to treat data appropriately by distinguishing between different kinds of data and applying the appropriate rules and protections to each. With trusted data, business, IT and security teams can better ensure compliance and reduce the risk of negative outcomes for the business. It also enables them to embed risk assessment into data handling and develop the necessary guardrails across the business, including for AI governance.

The consequences of failing to meet the EU AI Act’s requirements are hefty indeed: fines of up to 35 million euros or 7% of global annual turnover, whichever is higher, for prohibited practices. Getting compliance right, however, isn’t just about risk mitigation and avoiding financial penalties. Effective data governance is a vital component of a growth strategy. With an effective data governance framework in place, businesses can develop and deploy their models more confidently, aware of existing and planned regulations. This way, they can plan proactively and future-proof their business strategy and AI innovation roadmap to maximize early competitive advantage.

Balancing safeguarding and innovation

Technology regulation must balance safeguards that protect people and businesses against the risk of stifling innovation and halting the development of potentially transformative solutions. The risk scale implemented under the new legislation offers a reasonable answer to this, restricting only the highest-risk AI systems and models.

Additionally, the Act mandates that organizations keep thorough records and allow on-the-spot assessments by regulators when necessary, emphasizing the need for robust governance. This requirement is similar to the approach taken under the General Data Protection Regulation (GDPR). Organizations that have already implemented compliance measures for stringent privacy laws may find it easier to extend their data management, data governance and security practices to meet the Act’s requirements.

Although the EU AI Act is in part a retrospective response to the unchecked adoption of AI, it serves a useful purpose in forcing organizations to catalog and classify all their data. To meet its requirements, enterprises can initiate, continue or enhance the work needed to ensure that the data used for internal AI development is safe, high-quality and free of sensitive information.
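
As a simple sketch of what that classification work can involve, the Python below scans free-text records for two common identifier patterns before the records are cleared for model training. The patterns and sample documents are invented for illustration; real PII detection needs much broader, context-aware tooling.

    # A minimal scan for personally identifiable information in free text.
    # The regex patterns are illustrative only and will miss many real
    # identifiers.
    import re

    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{4}\b"),
    }

    def scan_for_pii(records):
        """Return (record index, PII type) pairs for every hit."""
        findings = []
        for index, text in enumerate(records):
            for label, pattern in PII_PATTERNS.items():
                if pattern.search(text):
                    findings.append((index, label))
        return findings

    documents = [
        "Contact me at jane@example.com about the renewal.",
        "Order 1042 shipped on Tuesday.",
        "Call 555-123-4567 if anything changes.",
    ]
    print(scan_for_pii(documents))  # [(0, 'email'), (2, 'phone')]

Records flagged this way can be routed for masking or exclusion, so that sensitive information never enters an internal training set in the first place.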

The bottom line is that AI is one of the most promising and transformative technologies ever developed, with the potential to create vast economic value worldwide. However, guarding against high-risk edge cases is vital. Regulation plays an important role in this, but businesses must adopt AI management strategies that go beyond mere compliance to foster the responsible and reasonable use of AI, both regionally and globally.

Jay Limburn is CPO at Ataccama

Main image courtesy of iStockPhoto.com and Parradee Kietsirikul
