Many people worry about AI. A few take a hysterical “we are all going to die because of AI” approach. But there are genuine worries about the use of AI by business, such as an increase in inequality and bias, damage to democracy and effects on employment. It is therefore important for organisational leaders to have a very clear idea of the risks that AI can pose, and the ethical parameters that can act as guidelines for the implementation of AI in a responsible and trustworthy manner.
An ideal framework
A typical framework involves examining and attempting to achieve to the following desirable attributes:
Fairness and accessibility. AI systems should avoid reinforcing or amplifying biases that already exist in society, and should deliver outputs that are fair to all stakeholders. That isn’t the same as saying that all stakeholders should experience the same outputs, but the use of AI should not unfairly discriminate against any group of people. Discrimination can be caused if the system delivers biased outputs, for example refusing a certain type of person a home loan.
Unfairness is often the result of using biased or incomplete data sets to train an algorithm. Generally, except for very low risk applications (deciding what to watch on Netflix tonight, for example) having a human who reviews outputs to ensure they are fair is an essential part of achieving fairness. In addition, a collaborative development approach involving various stakeholders will help create an AI system that is fair and aligned with societal values and expectations.
Accountability and human oversight. In any organisation that uses AI, a senior individual or group of people should be accountable for the way the system operates. Accountability is not the same as responsibility. Someone who is responsible is tasked with an activity, perhaps ensuring to the best of their ability that the outputs of an AI system are accurate. The accountable person, however, has to explain “why” when things go wrong.
In many cases, an AI system’s outputs can have a serious impact on people, and it would be unacceptable for any faults to be blamed on the system. Designated people in organisations must own any technology disasters. A failure to do this will inevitably result in users and society in general losing trust in the system.
Transparency and understandability. AI systems should be designed to be transparent, providing clear insights into their functioning and decision-making processes. People who interacting with an AI system should always know that they are not dealing with a human. Pretending that an automated chatbot is really a customer service executive is unwise as inevitably there will be instances when the chatbot cannot answer a question and has to pass the customer over to a real person.
An important, and difficult, part of transparency is explainability – the ability of an AI system to explain to a user why a particular decision has been made and what factors influenced it. With some AI systems (particularly those that use artificial neural networks, sometimes known as “black boxes”) this can be problematic although even here efforts to explain outcomes can be made.
Safety. Some AI systems may have the potential to cause people physical or mental harm. A collaborative factory robot could cause injury to a human colleague if designed badly. A bank’s AI system that makes poor financial judgements could harm the life chances of a customer. Even a movie recommendation engine could cause emotional harm in some circumstances, perhaps recommending an adult film to a child. Great care should be taken to ensure that AI systems do no harm.
Autonomy and redress. As far as possible, AI systems should not take away people’s freedom of choice or force them to do things they are unwilling to. Systems should also allow users to contest any decisions they feel are wrong, biased or just unfair, with the right to seek redress if they are treated unfairly, and a clear pathway for them to do so.
Diligence and agility. Many of the risks from using an AI system will be unknown, especially before it has started to operate. Organisations must be diligent in their search for potential harms, not just when AI systems are being developed, but throughout their lifecycle.
Accuracy. Most AI systems work on data that is, like most data sets, incomplete. Their ability to provide correct decisions all the time can therefore be limited. While seeking to develop AI systems that are as accurate as possible (and as part of this employing techniques to measure accuracy), leaders should accept this fallibility and insist on agility and flexibility in the face of errors or emerging harms. As well as being accurate, AI systems should be tested to ensure that they are resilient (able to withstand a rapidly changing technological environment), reliable (capable of producing comparable results over time and in different expected circumstances) and robust (able to cope with unexpected situations such as ensuring an autonomous car can navigate in extremely harsh weather).
Security. AI systems not only use data, some of which may be commercially sensitive or personal, they also frequently underpin critical business systems. However, they can be susceptible to malicious interference. Appropriate information security must be in place to ensure data protection and the integrity of their processes. For the most part AI systems can be protected by the cyber-security protocols that protect all IT systems. However, there are some security risks that are unique to AI systems such as prompt injections.
Privacy and personal data governance. Some people fear that, as AI becomes ubiquitous, we will see the end of privacy. The problem is that AI can often generate new personal information from existing datasets. This information may be inaccurate, or it may be highly sensitive and compromising for the person involved. Either way, organisations need to put checks in place to prevent this from happening. AI systems should be designed to collect and use data in a manner that respects individuals’ privacy rights. Clear guidelines on data handling, storage and consent mechanisms need to be incorporated to ensure compliance with privacy regulations and uphold individuals’ rights to privacy. And if personal data is created deliberately then this must be done in compliance with privacy regulations.
Sustainability. AI systems run on computers. And many use considerable computing power, with consequences for sustainability, including the use of fossil fuels to power data centres and the pollution of water to cool them. The decision whether to use AI should therefore include a consideration of the environmental affects, compared with other possible solutions.
Achieving an ethical approach to AI
In theory it is simple to develop an ethical or responsible AI system. Concepts such as fairness, privacy and sustainability are not complicated. However there is a difference between understanding what must be done, and doing it.
Developing an ethical approach is something that should be led by management at the very top of an organisation, who should set the desired moral tone and culture. However there are also some practical approaches to developing trustworthy AI:
Ethical frameworks are not mandatory for any organisation – except insofar as they underpin and are written into laws. It is up to business to decide what framework they want to adopt. However, the points raised above form a comprehensive list of the issues that all organisations wanting to develop responsible AI applications will want to consider.
© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543