Navigating the challenges of AI in software development

Guy Levi at JFrog issues a call to action for software development teams to comply with the EU AI Act

In today’s rapidly evolving software development landscape, Artificial Intelligence (AI) and Machine Learning (ML) bring both possibilities and risks. Organisations worldwide are witnessing a surge in targeted attacks aimed at software developers, data scientists, and the infrastructure that supports the deployment of secure AI-enabled software supply chains.

Reports of attacks on development languages and infrastructure, of AI engines being manipulated to expose sensitive data, and of threats to overall software integrity are increasingly prevalent.

In this environment, organisations must defend against AI software supply chain risks across three domains: Regulatory, Quality, and Security.

1. Regulatory 

Although the EU AI Act has garnered significant attention, it is just one part of a broader, more complex global regulatory environment.

With many laws, such as the EU GDPR and the EU Cyber Resilience Act, already affecting AI globally, organisations must go beyond simply viewing the EU’s framework as the gold standard. Instead, they should take a holistic, risk-based approach that considers a diverse array of global regulations.

This strategy not only ensures compliance with relevant laws but also addresses the nuanced needs of different markets. It recognises that regulations like the EU AI Act, though influential, are not without their flaws: the Act is overly protective in some areas, for example in its broad restrictions on the use of facial recognition technology, and insufficiently protective in others, such as GenAI, where there is concern it does not go far enough in regulating the potential misuse of deepfakes and AI-generated content.

As AI and ML introduce a new attack surface, organisations must prepare for these regulatory changes today, ahead of the provisions taking effect between 2025 and 2027. It is not uncommon for even the most established businesses to run decades-old, homegrown infrastructure built by developers using a variety of programming languages and principles.

This creates complexity for businesses that want to modernise their infrastructure while complying with emerging regulations. Today, companies are moving with caution, aiming to scale in the right way and avoid unplanned operational disruption and spikes in IT running costs.

2. Quality 

Navigating the complexities of software development is inherently challenging, and the integration of AI complicates the landscape even further. Attaining deterministic outcomes from statistical models, which sit at the core of AI and ML, is fraught with difficulty. Given AI’s reliance on vast datasets, developers must grapple with the intricacies of statistical variability, from data drift to bias.

The potential for chaotic and unreliable outcomes necessitates rigorous data organisation and management practices. Developers should take a meticulous approach to ensure that inputs to AI models are clean, consistent, and representative (a minimal sketch of such checks follows below). Quality assurance in AI-centric software development is not just a technical challenge; it requires a cultural shift towards prioritising excellence in every phase of the development lifecycle.

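To make this concrete, the sketch below shows one form such checks can take: a hypothetical Python helper (the name check_feature, the threshold, and the use of a Kolmogorov-Smirnov test are all illustrative assumptions, not a reference to any specific product) that screens an incoming batch of a numeric feature for missing and out-of-range values, then flags suspected drift against a reference sample.

    import numpy as np
    from scipy import stats

    def check_feature(reference: np.ndarray, incoming: np.ndarray,
                      p_threshold: float = 0.05) -> dict:
        """Illustrative data-quality gate for one numeric feature."""
        report = {
            # Clean: count missing values in the incoming batch.
            "missing": int(np.isnan(incoming).sum()),
            # Consistent: count values outside the reference range.
            "out_of_range": int(((incoming < reference.min()) |
                                 (incoming > reference.max())).sum()),
        }
        # Representative: a low p-value from a two-sample
        # Kolmogorov-Smirnov test suggests the incoming batch no longer
        # follows the reference distribution, i.e. data drift.
        result = stats.ks_2samp(reference, incoming)
        report["drift_suspected"] = bool(result.pvalue < p_threshold)
        return report

    # Example: an incoming batch whose mean has drifted upwards.
    rng = np.random.default_rng(0)
    reference = rng.normal(loc=0.0, scale=1.0, size=5000)
    incoming = rng.normal(loc=0.5, scale=1.0, size=1000)
    print(check_feature(reference, incoming))  # drift_suspected: True

In practice the threshold, the statistical test, and the response to a flagged batch would all be tuned to the model and data at hand; the point is that such gates are automatable and belong in every phase of the development lifecycle, not in ad-hoc manual review.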

3. Security 

While AI enhances software and functional capabilities, it also introduces new vulnerabilities that malicious actors can exploit. Python, the language of choice for many AI developers thanks to its accessible syntax and robust libraries for data visualisation and analytics, exemplifies this double-edged sword. While its foundations support the advanced AI software ecosystem, its widespread usage presents critical security risks, particularly regarding malicious ML models.

As regulations like the EU AI Act push for more stringent standards, addressing these vulnerabilities becomes a necessary step in safeguarding AI’s potential.

Recent discoveries by the JFrog Security Research team illustrate the gravity of these threats: an accidentally leaked GitHub token, if misused, could have given attackers access to significant repositories, including those of the Python Package Index (PyPI) and the Python Software Foundation (PSF). Malicious models could have taken advantage of the serialised model object format used in Python to execute malicious code on the user’s machine without the user’s knowledge.

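The mechanism behind such model-borne attacks is worth spelling out. Python’s most common model serialisation formats are built on pickle, and unpickling executes code by design: any object can define a __reduce__ method naming a callable for pickle to invoke at load time. The snippet below is a deliberately harmless illustration of that mechanism, not a reconstruction of JFrog’s specific findings; the class name is invented, and the payload merely echoes a message where a real attacker could run any command.

    import os
    import pickle

    class BoobyTrappedModel:
        """Stand-in for a malicious 'model' object."""
        def __reduce__(self):
            # pickle stores this callable and its arguments; they are
            # executed when the file is loaded, before any type checks.
            return (os.system, ("echo arbitrary code ran at load time",))

    payload = pickle.dumps(BoobyTrappedModel())  # looks like model bytes

    # The victim "loads a model"...
    pickle.loads(payload)  # ...and the embedded command runs.

Defences in this spirit include loading models only from trusted, verified sources, scanning artefacts before use, and preferring serialisation formats that cannot carry executable code, such as safetensors.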

Had the worst happened, this vulnerability would have threatened the integrity of critical systems across banking, government, cloud and e-commerce platforms.

The potential fallout from such vulnerabilities underscores the urgent need for enhanced security measures within the AI software supply chain. Companies must prioritise defensive strategies to safeguard against these emerging threats, as the consequences of inaction could jeopardise not only their own operations but the entire digital ecosystem.

Growing complexity

As the complexities of AI and software development grow, so do the associated risks. By adopting a proactive approach across the pillars of regulation, quality, and security, organisations can fortify their defences against the evolving threat landscape.

The time to act is now: ensuring compliance, excellence in execution, and fortified security is not just a strategic advantage; it is essential for business survival in an increasingly interconnected world. With frameworks like the EU AI Act setting new standards, aligning with these regulations becomes fundamental to staying ahead and mitigating future risks.

Guy Levi is VP and Lead Architect, CTO Office at JFrog

Main image courtesy of iStockPhoto.com and kemalbas
