Ayesha Iqbal at the Advanced Manufacturing Training Centre explores the risks and rewards of AI in 2024
Technological advancement is happening faster than ever, particularly in this decade. World leaders are coming together to deepen their cooperation on the risks and benefits of Artificial Intelligence (AI), as demonstrated by the first global AI summit.
More specifically, new applications and implementations of AI have been hitting the headlines almost daily since the start of 2023, to the extent that the term 'AI' was named the most notable word of 2023 by the dictionary publisher Collins.
According to a recent global survey conducted by the IEEE, AI in its many forms will be the most important area of technology in 2024. When asked about the top three technology trends for 2024, 65 percent of survey respondents chose AI, including predictive and generative AI, machine learning (ML) and natural language processing (NLP).
A further 28 percent of respondents chose extended reality (XR), including the metaverse, augmented reality (AR), virtual reality (VR) and mixed reality (MR), while 24 percent voted in favour of cloud computing. Other important technologies included 5G and electric vehicles (EVs).
AI has applications in almost every sector, including e-commerce, education, healthcare, autonomous vehicles, robotics, marketing, finance, travel and transport. In 2023 there were some impressive implementations of AI across various disciplines – the most popular being ChatGPT, a form of generative AI.
Beyond this, AI is already embedded in a number of devices and systems: smart voice assistants, automatic grocery checkouts, online recommendation systems in search engines, and chatbots. In some parts of the world, driverless cars, taxis and buses are hitting the roads. In one particularly striking development, a boarding school in West Sussex has appointed a new headteacher called Abigail Bailey, who is in fact an AI chatbot.
The scope of AI in other fields is also being explored. In healthcare, for example, AI can help with the early detection of cancer and speed up drug discovery, research and clinical trials.
Most importantly, AI can be applied to climate change, the biggest challenge society currently faces: it can improve climate predictions, monitor carbon emissions, track air quality, measure environmental footprints, optimise the allocation of renewable energy, support biodiversity and help build smart cities.
Looking ahead to 2024 and beyond, there are likely to be further interesting applications of and breakthroughs in the technology. For example, AI applications and algorithms that can optimise data, perform complex tasks and make decisions with human-like accuracy will be used in increasingly diverse ways.
According to the IEEE global survey, technology leaders identified a range of top potential applications for AI.
These findings indicate that the next generation of generative AI tools will go far beyond the chatbots and image generators that made such a significant impact in 2023.
In addition, understanding how human intelligence and capabilities can be augmented – so that jobs can be done faster, more efficiently and more safely – will be an important workplace skill in 2024. According to the IEEE survey, 41 percent of respondents expect that between 26 and 50 percent of jobs will be augmented by AI-driven software in 2024.
Of course, there are risks associated with the use of AI. According to Forbes, these include a lack of transparency, bias and discrimination, privacy and security concerns, ethical questions, concentration of power (with AI development dominated by a small number of organisations), dependence on AI, economic inequality, legal and regulatory challenges, loss of human connection (diminished empathy and social skills), misinformation and manipulation, and unintended consequences.
It is therefore imperative that AI is implemented with the correct measures in place.
A policy paper published by the Future of Life Institute offered recommendations for governing future AI development. These included mandating robust third-party auditing and certification for specific AI systems, regulating access to computational power, establishing capable AI agencies at national level, establishing liability for AI-caused harms, introducing measures to prevent and track AI model leaks, expanding funding for technical AI safety research, and developing standards for identifying and managing AI-generated content and recommendations.
Ultimately, while the risks and challenges cannot be eliminated entirely, they can be reduced and controlled by taking appropriate measures: ensuring safety, security and robustness, along with transparency, fairness, accountability and good governance.
Given the wide scope, promising applications and advantages of AI across all areas of life, it is essential to minimise the risks as we move forward with large-scale adoption of this ground-breaking technology.
Ayesha Iqbal is an IEEE senior member and an engineering trainer at the Advanced Manufacturing Training Centre