Business Reporter

Exploring the future of AI through film

Andy Patel and Tom Van de Wiele at WithSecure explore the lines between reality and fictional portrayals of AI and ask whether films are challenging common assumptions

 

As advancements in artificial intelligence (AI) continue to make headlines and challenge our assumptions about technology, the medium of film has become an intriguing forum for exploring its societal implications.

 

The world of cinema gives us speculative scenarios that push the envelope, such as the recent movie "The Creator". Set in 2070, it imagines a future war between humans and AI, raising ethical questions around automation and empathetic AI designed to protect us. We've seen variations of this narrative in many popular Hollywood movies over the years, such as "The Matrix", "Ex Machina" and "Blade Runner", to name just a few.

 

Whilst these dramatic portrayals are pure fantasy, they are beginning to raise some interesting questions about the future of AI among experts, policymakers, and the general public. 

 

The fictional dystopia versus the reality 

In fictional interpretations of AI, machines and technology are often represented as sentient entities, with emotions and self-preservation instincts. In contrast, AI, as it stands, doesn’t possess feelings or intentions; it’s not seeking secret escape routes to self-improve. Such notions often come from a misunderstanding of what AI is capable of. 

 

The truth is that AI systems are neither evil nor good; they are simply tools created and shaped by humans. If AI systems were to act in harmful ways, it would only reflect the objectives programmed into them by their human creators.

 

The crux of films like "The Creator" is the idea that AI machines possess the ability to think—a big leap of imagination from the current reality. Today’s AI systems primarily engage in what psychologists refer to as "System 1" thinking: fast, automatic responses.

 

This is a far cry from "System 2" thinking, which is slow, deliberative and far more complex. AI's current capabilities are more akin to very advanced auto-complete engines than to sentient beings capable of rational judgment.

 

It's also worth noting that, as AI technology proliferates at a rapid pace, the focus of policymakers and the tech industry should not be on speculative scenarios of sentient AI but on the growing need for ethical frameworks.

 

After all, AI systems are a product of human design, constrained by our ethical and moral considerations. Any potential risk stemming from AI is not a question of the technology going rogue but rather a result of human intent and oversight.

 

The human element: shaping AI’s actions

Another layer to this complex conversation around AI's role and capabilities is how movies and media could contribute to what can be described as "AI doom mongering". These narratives inadvertently instil a collective perspective that AI is a ticking time bomb. As a result, they amplify a general fear among users, businesses and even regulators, and shift the focus away from practical solutions.

 

Ironically, the dramatised narrative of an "AI uprising" could itself contribute to scenarios where humans, fuelled by unfounded fears, misuse or mistreat AI systems, creating a self-fulfilling prophecy of sorts.

 

Further, the human tendency to anthropomorphise AI can have serious repercussions. Neama Dadkhahnikoo, a former AI scientist at IBM, warns that debates around AI sentience are premature, given that the technology cannot have novel thoughts or emotions.

 

Still, yearly threat reports on cyber-security reveal that humans can be deceived by AI systems employing social engineering tactics. This not only puts information security at risk but also underlines how misconceptions about AI could lead to poor judgment and decision-making.

 

The solution lies in human education and awareness, coupled with regulation. As technology evolves, so too should our understanding of it. By addressing the human factors that influence AI development and perception, we can pave the way for a more ethical and responsible future.

 

The real threat: a lack of effective guidance 

Today, the focus is often misdirected towards creating AI systems that are inherently "good", sidelining the crucial role humans play in shaping these technologies. It's humans who programme these systems, set their goals and decide their limitations.

 

The most urgent call to action is not merely preventing an imagined dystopia but constructing robust ethical frameworks that guide AI development and usage. The challenges of achieving genuine intelligence in machines are monumental, so immediate concerns about a "singularity" are not practical.

 

Even if we don’t know when or if AI will reach that point of hypothetical self-awareness, regulations governing the safe and ethical use of AI should be established before then.

 

For instance, the European Commission has proposed regulations governing AI's use, and the White House is proposing an AI-focused human rights bill that could turn into federal law. The UK is also hosting the AI Summit this year, where technologists and business leaders will explore the different real-world applications of AI. Such efforts should be recognised and accelerated across the globe.

 

The imminent threat is not AI itself but the lack of a moral and ethical structure governing it. The conversations spurred by fictional portrayals can be enriching, but they shouldn’t detract from actionable measures that need to be taken.

 

To secure a future where AI benefits humanity, the development of well-defined ethical guardrails is not an option; it’s a necessity.

 


 

Andy Patel is a Security Researcher at WithSecure and Tom Van de Wiele is Principal Technology and Threat Researcher at WithSecure    

 

Main image courtesy of iStockPhoto.com

Business Reporter

Winston House, 3rd Floor, Units 306-309, 2-4 Dollis Park, London, N3 1HF

23-29 Hendon Lane, London, N3 1RT

020 8349 4363

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543