Professor Dame Wendy Hall, one of the first computer scientists to explore multimedia and hypermedia, considers how we can build more ethical, inclusive technologies for the future
As someone who has worked in computing for decades, what has your experience been as a woman navigating such a male-dominated field—particularly as AI evolves into an even more exclusive space?
Wendy: Well, it’s so frustrating, because I wrote my first paper about the lack of women in computing in 1987, when we were beginning to teach computer science degree courses at Southampton. We arrived at the university one October and realised we had no women registered on the courses.
So we women in computing started talking about why. There were lots of reasons. It was partly because personal computers had come out and they were seen as toys for the boys, and that really changed the culture.
The culture of computing since then in the West—it’s not so true in India, Malaysia and other Southeast Asian countries—but in the West, computing is very much seen as something rather nerdy that only the geeks do. A lot of girls don’t want to be involved in that, and when they come to make their GCSE choices, it goes wrong for them. Despite many attempts, we haven’t managed to change that culture.
So here we are, nearly 40 years later, and we still have a very male-dominated computing industry, even though more than half the world’s population is female. Women are very much not part of the design and development of computers and software. We apply them, we use them, but we’re not part of the design or the discussions about future technologies.
AI is getting even worse than that, because if you’re going to get into the world of machine learning programming, you’ve really got to have a mathematics or computer science degree. So we’re taking a very male-dominated sector and pushing it into an even more male-dominated pipeline to AI.
But, of course, I argue very much that AI is much broader than just machine learning programming. We have to apply it. We have to think about the ethics and the values and the wonderful opportunities, as well as the threats we need to mitigate. That needs a real diversity of voices. It needs the whole of society to be involved. It’s not just about gender balance; it’s about age and ethnicity and culture, and people with disabilities being able to use the technologies, and creating level playing fields.
We need so many disciplines involved in developing the way we use these systems. We need lawyers, we need philosophers, we need psychologists, we need the people who know how to run businesses, we need historians. We need so many different voices here.
And that’s why my take on this world is that we have to think of it as a socio-technical world to really understand what’s going on. So, we need diversity in every way.
With AI adoption accelerating across industries, what ethical considerations should businesses prioritise when implementing emerging technologies like facial recognition or surveillance tools?
Wendy: Well, take face recognition, for example. We still haven’t really worked out what the rules and regulations should be around when people can apply face recognition technology.
You know, did anyone ask you whether you wanted the face recognition technology on your phone? You get offered it as a system download and then you can choose to use it.
Face recognition in surveillance happens in China, as we all know, but it’s happening in a creeping way in Europe and the US as well. Our security forces are using it. But on the other hand, I like the fact that there’s a CCTV camera in the car park at night, so I feel safer.
And all these new technologies, along with the future AI technologies we don’t even know about yet and have no clue how they’ll be used, have a good and a bad side, the yin and the yang, right? Benefits and threats.
We have to learn how to make the best use of the benefits for the good of humanity, for the good of society, for the good of ourselves, for the good of the business, and how to mitigate the threats. And that’s what we’ve got to learn to do.
As generative AI becomes more integrated into creative and decision-making workflows, how can businesses harness its capabilities without compromising human authenticity, trust, or accountability?
Wendy: Generative AI is nothing to be frightened of, and I think we will all start to use it: software that helps us write things, summarise things, argue about things.
I liken it to when calculators first appeared and everyone said, shock horror, “How can we let calculators into the classroom? How can we trust the answers they come up with?” Well, of course, it’s garbage in, garbage out, as ever. And now we have a finance industry run by computers, so all the old ways of doing things by hand, the ledger systems, have all gone. But we’ve got more jobs than we ever had before in the finance industry, and we’ll see the same with generative AI.
I think we’re all going to be very relieved not to have to write essays about things. It will help us be more creative. But you need to view it as an adjunct, augmenting what we do and not taking over, because it’s not clever enough to take over.
But think of the advantages of having the system, in the legal industry say, summarising the huge amount of material that people have to absorb, and then predicting whether to take a case on or not, whether you think you can win in the circumstances. And I see a future where AI becomes part of the team that’s deciding how to deal with a legal case, or a medical diagnosis, or a problem pupil at school.
We’ll have teams, and AI will be part of the team; we’ll ask the AI questions, and it’ll come back with answers. But it’s really important that we see it as something that is augmenting human intelligence and not taking control, because at the moment it can’t possibly take control: we can’t trust the answers.
The data going into generative AI is very biased, and if it’s being trained on the internet, a lot of it is incorrect. We have this lovely term “hallucinating”: it will make things up if it doesn’t know the answer. We can’t trust it yet. So we have to think of it as part of the team, part of what we do, using it to help us be more productive, more creative, and to have a better working life. Maybe shorter working hours, too; maybe we’ll get a four-day week out of all this.
Dame Wendy Hall, DBE, FRS, FREng, is Regius Professor of Computer Science, Associate Vice President (International Engagement) and Director of the Web Science Institute at the University of Southampton. She is a leading voice in ethical AI. Dame Wendy was interviewed by Mark Matthews.
Main image courtesy of iStockPhoto.com and kutubQ
© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543