The Impact of AI on Defensive Cybersecurity: From Machine Learning to Agentic Intelligence
By Insight UK / 4 Feb 2025 / Topics: Cybersecurity
The integration of machine learning (ML) in cybersecurity began many years ago with a simple yet ambitious idea: to harness the power of algorithms for identifying patterns in massive datasets. Traditionally, threat detection relied heavily on signature-based techniques - essentially digital fingerprints of known threats. These methods, while effective against familiar malware, struggled with zero-day attacks and the increasingly sophisticated tactics of cybercriminals. This gap led to a surge of interest in using ML to identify anomalies, recognise patterns indicative of malicious behaviour, and ultimately predict attacks before they could fully unfold.
One of the early successful applications of ML in cybersecurity was in spam detection, followed by anomaly-based intrusion detection systems (IDS). These early iterations relied heavily on supervised learning, where historical data - both benign and malicious - was fed to algorithms to help them differentiate between the two. Over time, ML-powered solutions grew in complexity, incorporating unsupervised learning and even reinforcement learning to adapt to the evolving nature of cyber threats.
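To make the supervised-learning approach concrete, here is a minimal sketch in Python (using scikit-learn) of a classifier trained on a handful of labelled benign and malicious sessions. The feature names and values are entirely illustrative, not drawn from any real dataset.

```python
# Minimal sketch: supervised learning on labelled benign/malicious examples.
# Features and values are illustrative placeholders, not real telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per network session:
# [bytes_sent, bytes_received, duration_seconds, failed_logins]
X_train = np.array([
    [1_200,   800,  3.0, 0],   # labelled benign
    [1_500,   900,  4.2, 0],   # labelled benign
    [90_000,  200, 60.0, 5],   # labelled malicious (large upload, repeated failures)
    [85_000,  150, 55.0, 7],   # labelled malicious
])
y_train = np.array([0, 0, 1, 1])  # 0 = benign, 1 = malicious

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score a new, unseen session
new_session = np.array([[70_000, 300, 48.0, 6]])
print(clf.predict(new_session))        # predicted label
print(clf.predict_proba(new_session))  # class probabilities
```

In practice these models are trained on millions of labelled events rather than a toy matrix, but the principle is the same: the algorithm learns the boundary between the two classes from historical examples.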
While ML-based approaches drastically improved detection rates and reduced the workload on security teams, they were not without challenges. False positives became a significant issue, creating "alert fatigue" among security analysts. Moreover, attackers began to adapt, leveraging adversarial techniques to mislead machine learning models. Nevertheless, the evolution of ML transformed cybersecurity, introducing more dynamic and adaptive forms of defence.
In recent years, the introduction of large language models (LLMs) like GPT-4 has shifted the conversation around AI in cybersecurity. These models excel in tasks like synthesising large volumes of information, summarising reports, and generating natural language content. In the cybersecurity space, LLMs have been used to parse threat intelligence feeds, generate executive summaries, and assist with documentation - all tasks that require handling vast amounts of data and presenting it in an understandable form.
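As a rough illustration of this kind of summarisation task, the sketch below uses the OpenAI Python SDK to condense a raw threat report into an executive summary. The model name, prompt wording and report text are placeholders rather than a recommended configuration.

```python
# Minimal sketch: using an LLM to turn a raw threat-intelligence item into an
# executive summary. Model name, prompt and report text are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

raw_report = """
Observed C2 beaconing from host FIN-LAP-042 to 203.0.113.55 over TCP/443.
Initial access appears to be a phishing attachment (ISO -> LNK -> DLL sideloading).
Persistence via a scheduled task named 'SvcUpdate'.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a SOC assistant. Summarise threat reports for executives "
                    "in three bullet points: what happened, business impact, recommended action."},
        {"role": "user", "content": raw_report},
    ],
)

print(response.choices[0].message.content)
```

The value here is speed and readability: the model reshapes technical detail into something a non-specialist can act on, while the underlying analysis still comes from humans and existing tooling.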
Yet, despite their strengths, LLMs have not yet found a "killer use case" in cybersecurity. Their value often lies in augmenting human analysts rather than replacing them. They can enhance productivity by automating mundane tasks, but they lack the deep contextual understanding and decision-making abilities required for incident response or threat hunting. LLMs can be helpful assistants, but they struggle to move beyond this role, limiting their impact in real-world defensive operations.
With the rise of AI in software development, the concept of a "copilot for security" emerged - a tool intended to assist security analysts much like coding copilots help developers write code. The idea was that an AI-driven copilot could act as a virtual Security Operations Center (SOC) analyst, helping to sift through alerts, contextualise incidents, and even propose response actions. However, this vision has largely fallen short.
The core issue is that security copilots have yet to fulfil their promise of transforming SOC operations. They neither replace the expertise of a seasoned analyst nor effectively address any glaring pain points that human analysts face today. Rather than serving as a dependable virtual analyst, these tools have often become a "solution looking for a problem" - adding another layer of technology that analysts need to understand and manage, without delivering commensurate value. For instance, Microsoft's Security Copilot, while promising, has struggled to effectively replace the role of a skilled SOC analyst, often providing suggestions that lack context or that require additional human intervention to be actionable.
Part of the challenge is that the nature of cybersecurity work is inherently complex and contextual. SOC analysts operate in a high-pressure environment, piecing together fragmented information, understanding the broader implications of a threat, and making decisions that require a nuanced understanding of the organisation’s unique context. Current AI copilots can assist in narrowing down options or summarising data, but they lack the situational awareness and deep understanding needed to make critical security decisions effectively.
While current implementations have struggled to find their stride, the future of AI in cybersecurity may lie in the development of agentic AI - systems capable of independently assessing situations and taking proactive, autonomous actions without human intervention, enabling a more dynamic and adaptive approach to cybersecurity. Agentic AI offers a more promising direction for defensive security by potentially allowing AI-driven entities to actively defend systems, engage in threat hunting, and adapt to novel threats without the constant need for human direction. Microsoft has ambitious plans in this area for the next stage of Security Copilot.
Agentic AI could bridge the gap between automation and autonomy in cybersecurity. Instead of waiting for an analyst to interpret data or issue commands, agentic AI could take action on its own: isolating a compromised endpoint, rerouting network traffic, or even engaging in deception techniques to mislead attackers. Such capabilities would mark a significant leap from the largely passive and assistive roles that AI currently plays.
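A simplified sketch of what such an agentic decide-and-act loop might look like is shown below. Every function name and the confidence threshold are hypothetical illustrations, not a real product API, but they show how autonomy could be bounded so that low-confidence verdicts still go to a human.

```python
# Minimal sketch of an agentic decide-and-act loop. All function names
# (isolate_endpoint, reroute_traffic, escalate_to_analyst) are hypothetical
# placeholders; the confidence threshold illustrates how autonomous action
# might be bounded to limit false-positive damage.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    verdict: str       # e.g. "ransomware", "lateral_movement", "benign"
    confidence: float  # model confidence between 0.0 and 1.0

AUTONOMY_THRESHOLD = 0.95  # only act without a human above this confidence

def isolate_endpoint(host: str) -> None:
    print(f"[ACTION] Isolating {host} from the network")

def reroute_traffic(host: str) -> None:
    print(f"[ACTION] Rerouting traffic away from {host}")

def escalate_to_analyst(alert: Alert) -> None:
    print(f"[ESCALATE] {alert.host}: {alert.verdict} ({alert.confidence:.0%}) needs human review")

def handle(alert: Alert) -> None:
    """Assess an alert and either act autonomously or hand it to a human."""
    if alert.verdict == "benign":
        return
    if alert.confidence >= AUTONOMY_THRESHOLD:
        if alert.verdict == "ransomware":
            isolate_endpoint(alert.host)
        elif alert.verdict == "lateral_movement":
            reroute_traffic(alert.host)
    else:
        escalate_to_analyst(alert)

for a in [Alert("FIN-LAP-042", "ransomware", 0.97),
          Alert("HR-WKS-007", "lateral_movement", 0.80)]:
    handle(a)
```

The design question is less about whether such a loop can be built and more about where to set the threshold: how much confidence an organisation demands before it lets software isolate a machine or block traffic on its own.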
Organisations have typically been slow to adopt any new security technology that is capable of taking action on its own. False positives are always a risk, and no-one wants to cause an outage in production or stop a senior executive from using their laptop based on a false assumption.
The attackers don't have this handicap - they will use AI to its fullest extent to steal data, cause outages and make money. Deepfake phone calls, deepfake video calls, hyper-personalised phishing emails - these threats are on the rise. It is likely that organisations in 2025 will face the bleakest threat landscape in cybersecurity history, driven by the malicious use of AI. According to a report by Gartner, the proliferation of AI-driven cyberattacks is expected to increase significantly by 2025, leading to a more challenging environment for defenders. The only way to combat this will be to join the arms race - using new AI agents to make decisions.
There will be collateral damage, of course: users will complain and security teams will be blamed. But it may finally be time to fight fire with fire and lean into the future of AI-driven control. I believe that in 2025 the risk of AI-driven security threats will outweigh the risk of AI-caused outages due to false positives - and we should start to reconsider our risk calculations in the very near future, as the technology becomes available.
Technology Lead EMEA CISO
Insight