Petra Jenner at Splunk argues that promoting diversity in AI must be about more than just lip service
There is no doubt that Artificial Intelligence (AI) will change the way we live, work and interact. It will empower new ways to solve business challenges and deliver customer value.
Embedding trust into every aspect of a business’ AI strategy needs to be a top priority for organisations. This will require a fair, equitable and accountable approach to the way AI is integrated into workforces, how its rollout is communicated amongst employees and customers, and the reliability of the data sets upon which the technology is built. However, this raises a larger question around diversity.
As AI use increases, the concern around diversity in AI is twofold. First, there is a distinct lack of representative talent in the AI sector: the World Economic Forum reports that in 2023 only 22% of AI professionals globally were women. This lack of representation has clear economic and social implications.
Secondly, the ramifications of this imbalance for technology development are also concerning. AI is only as good as the data on which it is trained and, despite best efforts, it has proved near-impossible to eliminate human biases from AI datasets and applications.
Without a diverse workforce driving its development and adoption, we are at risk of creating AI frameworks that do not represent the society we live in, resulting in untrustworthy outcomes that further entrench the existing bias and discrimination in society.
New industries, old problems
Statistics from Women in Tech found that in 2023, women held just 26.7% of technology jobs overall, with leadership representation even lower (10.9%). The lack of diversity doesn’t just relate to gender. According to the sixth annual Diversity in Tech report from Tech Talent Charter, only 28% of the UK’s technology workers are gender minorities and 35% are from minority ethnic backgrounds.
As we move into the next phase of widespread AI deployment, it is concerning to see these problematic statistics repeated in AI-related roles. The World Economic Forum reports that only 22% of AI professionals globally in 2023 were women. What’s more, McKinsey’s State of AI 2022 revealed that fewer than 25% of AI employees identify as racial or ethnic minorities, and that only a third of companies have active programmes or initiatives to increase diversity in the field.
So, how can we ensure the AI workforce truly reflects the society we live in?
Purposeful AI hiring and training
The CIPD sets out clear guidelines on how to make hiring practices more inclusive, and these should apply to building AI teams. This involves placing job adverts where they are more likely to be seen by different demographics, and using school outreach programmes to encourage younger people from underrepresented backgrounds to join your team.
There is also a question of the language we use in job adverts. Research from Sage Journals has revealed that in the AI age, employers expect to place increasing value on “soft skills” that enhance human collaboration and creativity and foster rich, people-centred company cultures. Promoting these wider skills, beyond the traditional coding and data skills more commonly associated with the technology, will be important in driving successful AI adoption.
Another critical consideration is the role of extensive AI training and support across an organisation. Recent data from the CIPD revealed that 43% of HR managers think their company will face a skills gap in the near future as the rise of AI transforms roles.
Whilst the skills gap is a multifaceted issue, organisations can seek to future-proof their workforce by building internal training programmes that accommodate multiple knowledge levels, from AI basics to hands-on workshops. These should be available for synchronous and asynchronous participation, with an additional focus on training underrepresented groups and communities.
By fostering an inclusive environment where people from all backgrounds are equally encouraged to participate and lead, organisations will unlock new perspectives in the field of AI.
Diversity is critical for AI success
Diversity in AI cannot be solved merely through virtue signalling and by token hires and initiatives; true diversity of experiences and skill sets is critical to the overall success of any organisation’s AI strategy.
Studies show a consistent correlation between workforce diversity and outperformance: better, stronger products and greater success. McKinsey’s State of AI reports show that organisations where at least 25% of AI development employees identify as women are 3.2 times more likely than others to be ‘AI high performers’.
Teams where at least one-quarter of AI development employees are racial or ethnic minorities are more than twice as likely to be AI high performers. Similarly, McKinsey’s Diversity Matters report shows a 39% increased likelihood of outperformance for companies in the top quartile of ethnic representation versus the bottom quartile.
This isn’t about simply hiring diverse talent for the sake of it. Bringing in employees from a wider, more representative pool of the population will ensure you have the best team, with a diverse range of views, experience and knowledge. And this is what will set an organisation up for positive AI transformation.
Tackling bias in AI applications
Tackling bias in the workforce is paramount, not just from a socioeconomic perspective but from a technological one too. The contribution of women and people from different ethnic backgrounds to AI teams is essential to ensure a variety of perspectives and outcomes. Insufficient representation of large portions of our society results in knowledge gaps and human bias within the data AI is built on, which in turn produces biased AI outcomes.
The implications of bias in AI cannot be overestimated. As AI adoption expands into banks, insurers, mortgage providers, HR teams and beyond, it can have enormous real-world impact on people’s finances, jobs and lives, entrenching existing discrimination.
A cross-functional team that specialises in identifying bias in both humans and machines is best equipped to holistically tackle the challenge and ensure more equitable and inclusive AI systems.
Trusting AI
We can’t slow the adoption of AI, but its progress will be halted if we cannot trust its outcomes. We need to ensure that AI reflects all perspectives of the human experience, or it will only serve to perpetuate the existing issues within society.
Those leading the AI revolution have a vital responsibility to ensure that everyone can contribute to shaping its future.
Petra Jenner is EMEA General Manager at Splunk
© 2024, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543