In the digital age, change is a constant, and as technology advances, safeguarding our digital world becomes increasingly challenging. Amid this complex landscape, artificial intelligence (AI) is emerging as a beacon of untapped promise.
Despite a growing array of sophisticated security tools on offer, AI remains something of an enigma – a nascent technology whose precise role and effectiveness in bolstering security operations remain unclear. Recent research by Adarma reflects this uncertainty.
Of 500 security operations leaders from UK organisations with over 2,000 employees surveyed by Adarma, 74 per cent said they struggled to envisage how exactly AI will help them with their tasks. However, when asked about its potential applications, 61 per cent of respondents believed that automation could shoulder up to 30 per cent of security tasks usually dealt with by humans, while a further 17 per cent envisioned AI taking on up to 50 per cent of these responsibilities.
As with every technological stride forwards, multiple narratives unfold as to how the technology can be applied. When it comes to AI, experts believe there are two major areas of application. The first is the reduction of human error and false positives. Even though the vast majority of alerts sent to security teams are false positives, every one of them must currently be cleared by a human.
In an ideal scenario, AI could assess these alerts swiftly, making an instant determination of each one's validity in its specific context. Indeed, 53 per cent of respondents expressed a preference for eliminating the time spent on reporting – a task currently among the least automated, with 70 per cent admitting that they don't leverage automation for this.
Clearly, this gap presents an opportunity for AI to be deployed in automating reporting and other repetitive or mundane duties, thereby improving the satisfaction, efficiency and effectiveness of security teams. Furthermore, 42 per cent of security professionals believe that automation will provide superior contextual information, aiding in more informed decision-making.
The second area in which AI could be implemented is assisting in the isolation and containment of potential threats. Configured correctly, it has the potential to substantially reduce the risk of a threat spreading across an organisation's IT environment.
However, reaping these advantages hinges upon security leaders placing their trust in the technology. Given that AI is still in its infancy, it's entirely understandable that leaders might harbour reservations about entrusting this technology with the operation of critical systems.
This is a valid concern for any organisation, and it likely arises from the fact that the technology is so new that there is a general scarcity of expertise in AI within the cyber-security industry. Naturally, it will take time to build trust in the technology and for security professionals to acquire the necessary skills to deploy it safely and effectively.
Interestingly, respondents who had already started their automation journey reported moderate success in the implementation of their automation projects, although they did acknowledge the complexity and time-consuming nature of the process. Specifically, 42 per cent found automation implementation to be challenging and time-intensive, with an additional 21 per cent indicating that it was more demanding than initially anticipated. Nonetheless, an overwhelming majority (73 per cent) attested that the effort invested in automation was worthwhile.
It is clear from these findings that security leaders are still acclimatising to the concept of AI and are proceeding with justified caution. Embracing any emerging technology demands vigilant oversight from security leaders as they navigate the evolution of AI. The establishment of confidence and trust in AI’s capabilities remains a priority.
Thorough assessments and continuous monitoring are essential to ensure desired outcomes. However, it's crucial to approach this with the understanding that the goal is not to stifle innovation but to comprehensively understand and effectively manage associated risks. As organisations deal with cyber-threats going forward, AI's efficiency and precision are poised to be the linchpin of enduring protection.
Read the full report here: adarma.com/a-false-sense-of-cybersecurity/
*The survey was completed between 15 and 22 May 2023.
About Adarma
We are Adarma, leaders in detection and response services. We specialise in designing, building and managing cyber-security operations that deliver a measurable reduction in business risk. We are on a mission to make cyber-resilience a reality for organisations around the world.
Our team of passionate cyber-defenders work hand in hand with our customers to mitigate risk and maximise the value of their cyber-security investments. Powered by the Adarma Threat Management Platform and optimised to our customers' individual needs, our integrated set of services will improve your security posture and includes best-in-class Managed Detection and Response services.
We operate with transparency and visibility across today’s hybrid-SOC environments to protect our customers as they innovate, transform and grow their businesses. Adarma delivers the cyber-security outcomes you need to make a remarkable difference.
For more information, visit: www.adarma.com