Keiron Holyome at BlackBerry asks: Why is Generative AI being banned by so many organisations?
The growing influence of ChatGPT means many are wondering if it will have a net positive or negative influence on cyber-security. Will it help us to be more predictive and proactive in blocking attacks — or more of a hindrance by weaponising cyber-criminals and facilitating an increase in activity?
Questions have been raised not only about the danger of generative AI in the hands of cyber-criminals, but also about whether ChatGPT is safe for people – and the organisations they work for – to use, both creatively and defensively.
There are a number of things to consider when it comes to ChatGPT and cyber-security risks, including – but far from limited to – the legal, privacy and compliance considerations of AI technologies.
So where does this leave technology professionals? Should they take advantage of its potential or move to ban its use by employees in an attempt to mitigate exposure to risk?
New research recently released by BlackBerry reveals that 66% of IT decision-makers in UK-based organisations are currently considering or implementing bans on ChatGPT and other generative AI applications in the workplace.
The majority of those deploying or considering bans (69%) say the measures are intended to be long-term or permanent. Many (78%) say that the measures are reactions to concerns that unsecured apps pose a cyber-security threat to their corporate IT environment.
Why are organisations banning GenAI?
Potential risk to data security and privacy emerges as the primary reason that 73% of the survey respondents are making moves to block the use of ChatGPT and similar generative AI tools by employees.
The next greatest concerns, at 51% each, are risks to corporate reputation and previous experience of data breaches or cyber-security incidents.
Together, these concerns are prompting IT decision makers across the UK to consider the safe use of generative AI apps by the organisation, while taking due care of employee, organisational, customer and supplier information.
Regulating generative AI apps
Technical leadership within organisations is at the forefront of pushing through these bans, according to survey results, with CEOs also playing a leading role in almost half of the decisions.
In some cases, decisions are also being influenced by legal and compliance teams, HR and Finance, which illustrates the widespread concern for data privacy and security across all parts of the organisation in relation to generative AI apps.
The potential of tools like ChatGPT
Despite their inclination toward blocking widespread use of the burgeoning technology, most IT decision-makers in the BlackBerry research also acknowledge the potential opportunity for generative AI applications to have a positive impact in the workplace.
Possible advantages they foresee include increasing efficiency (53%), innovation (44%), and enhancing creativity (42%).
It’s interesting to see that respondents also tend to favour the use of AI tools for cyber-security, with 74% of UK organisations agreeing that it is useful in a defensive role. Harking back to the question of whether generative AI offers a net gain or a net risk for cyber-security, deploying the many positive qualities of AI – in a controlled manner – to power a predictive and proactive defence could be the industry’s greatest equaliser.
Expert views on Generative AI
As cyber-security experts, we are advising organisations that we work with to proceed with caution. Our own approach is to engage less with consumer-grade generative AI tools in the workplace, and instead focus on enterprise-grade generative AI within defensive innovation.
This is where taking a long-term or permanent approach to banning generative AI applications in the workplace could mean that a wealth of potential business benefits is quashed.
Organisations are advised instead to maintain a steady focus on value over hype and regularly revisit and re-evaluate positions on innovative AI apps in the workplace. As platforms mature and regulations take effect, flexibility can then be introduced into organisational policies and restrictions reviewed or revised.
The same research also revealed that, although 76% of IT decision-makers believe their organisations are within their rights to control the applications that employees use for business purposes, 66% think that outright bans signal “excessive control” over corporate and BYO devices.
For this reason, more than half of CIOs and CISOs surveyed (52%) are turning to unified endpoint management (UEM) platforms for granular control over which applications can connect to the corporate environment.
With an effective UEM solution in place, IT professionals can avoid measures that users may perceive as heavy-handed, such as removing or blocking the use of personal apps on work devices, while still ensuring that enterprise security is maintained. These solutions work by “containerising” corporate data, keeping it logically separate and insulated from a device owner’s private data and applications.
This is particularly useful for organisations that employ popular BYO (bring your own) device programmes or want to protect against personal use of company-issued devices.
The future for ChatGPT
Alongside generative AI experimentation, OpenAI promises updates and improvements to ChatGPT based on usage and feedback it receives from users. Subsequent enhancements could make the bot a more powerful ally to defenders, or a greater enemy – time will tell. The key will be in having the right tools in place for visibility, monitoring, and management of applications used in the workplace.
Businesses must watch closely to see what happens, while entrusting cyber-security experts to focus on the signs that indicate how threat actors are weaponising AI.
As we all know, any technology can be used for both good and bad. So the future of ChatGPT hinges on vigilant observation by businesses and the diligent efforts of cyber-security experts to stay a step ahead of potential threats arising from the evolving landscape of AI.
Keiron Holyome is VP UKI & Nordics, BlackBerry and will be speaking on “Using emerging tech and AI to monitor, analyse and respond to threats” at teissLondon 2023, on Tuesday 26th September 2023
© 2024, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543