A New Era is Emerging in Cybersecurity, but Only the Best Algorithms will Survive

Security teams must consider the best way to use defensive AI.

 

The industry recognised that basic fingerprinting could not keep up with the pace of these developments, and the need to be everywhere, at all times, drove the adoption of AI technology to handle the scale and complexity of modern business security.

Since then, the AI defence market has become crowded with vendors promising data analytics, looking for "fuzzy matches" (close matches to previously encountered threats) and eventually using machine learning to detect similar attacks. While this is an advance over basic signatures, using AI in this manner does not change the fact that it is still reactive: it may be capable of recognising attacks that closely resemble previous incidents, but it cannot stop new attack infrastructure and techniques the system has never seen before.
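To make "fuzzy matching" concrete, here is a minimal Python sketch: compare a new artifact against a list of previously seen indicators and flag near matches. The KNOWN_BAD list, the fuzzy_match helper, and the 0.85 threshold are all invented for illustration; real products match on far richer features than raw strings.

```python
import difflib

# Hypothetical indicators from previously observed attacks (invented examples).
KNOWN_BAD = [
    "invoice-update.exe",
    "payments-portal-login.com",
]

def fuzzy_match(artifact: str, threshold: float = 0.85) -> bool:
    """Flag an artifact that closely resembles a known-bad indicator."""
    return any(
        difflib.SequenceMatcher(None, artifact.lower(), bad.lower()).ratio() >= threshold
        for bad in KNOWN_BAD
    )

print(fuzzy_match("invoice-updates.exe"))   # True: near match to a past threat
print(fuzzy_match("quarterly-report.pdf"))  # False: nothing like it seen before
```

The limitation described above falls straight out of the code: an attacker who registers genuinely new infrastructure never comes close to the threshold.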

Whatever you call it, this system is still fed the same historical attack data, and it requires a "patient zero", or first victim, before it can succeed. "Pretraining" an AI on observed data in this way is known as supervised machine learning (ML). This method does have some clever applications in cybersecurity. In threat investigation, for example, supervised ML has been used to learn and mimic how a human analyst conducts investigations (asking questions, forming and revising hypotheses, and reaching conclusions) and can now carry out these investigations autonomously at speed and scale.
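As a rough illustration of what "pretraining on observed data" means, the sketch below fits a classifier to labelled historical telemetry. Every feature and label here is invented; the point is simply that the model can only score new events against patterns already present in its training set.

```python
from sklearn.ensemble import RandomForestClassifier

# Invented historical telemetry: (bytes_sent_kb, failed_logins, new_processes),
# labelled 1 = known attack, 0 = benign.
X_train = [
    [120, 0, 1], [80, 1, 0], [90, 0, 2],                  # benign sessions
    [50_000, 14, 9], [42_000, 20, 7], [61_000, 11, 12],   # known attacks
]
y_train = [0, 0, 0, 1, 1, 1]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# A session resembling past attacks is caught; a novel technique that looks
# nothing like the training data may well be scored as benign.
print(model.predict([[55_000, 16, 10]]))  # [1] -- similar to known attacks
print(model.predict([[130, 0, 1]]))       # [0] -- similar to known benign
```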

But what about tracking down the first traces of an attack? What about detecting the first indication that something is wrong?

The issue with using supervised ML in this area is that it is only as good as its historical training set; it cannot recognise what it has never seen. The model must therefore be constantly updated, and every update distributed to all customers. This method also requires sending the customer's data to a centralised data lake in the cloud to be processed and analysed. By the time an organisation becomes aware of a threat, it is frequently too late.

As a result, organisations suffer from a lack of tailored protection, a high number of false positives, and missed detections because this approach overlooks one critical factor: the context of the specific organisation it is tasked with protecting.

However, there is still hope for defenders in the war of algorithms. Today, thousands of organisations utilise a different application of AI in cyber defence, taking a fundamentally different approach to defending against the entire attack spectrum — including indiscriminate and known attacks, as well as targeted and unknown attacks.

With unsupervised machine learning, the AI learns the organisation rather than being trained on what an attack looks like. In this scenario, the AI learns its surroundings from the inside out, down to the smallest digital details, building an understanding of "normal" for the specific digital environment in which it is deployed in order to identify what is not normal.
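Here is a minimal sketch of that learn-normal, flag-deviations idea, using an off-the-shelf isolation forest fitted only on an organisation's own baseline activity. The telemetry is invented and there are no attack labels at all; the actual models behind products like Darktrace's are proprietary and far more sophisticated.

```python
from sklearn.ensemble import IsolationForest

# Invented baseline for one organisation: (hour_of_day, mb_uploaded,
# distinct_hosts_contacted) per user session. No labels: the model only
# learns what "normal" looks like in this environment.
baseline = [
    [9, 12, 3], [10, 8, 2], [11, 15, 4], [14, 10, 3],
    [15, 9, 2], [16, 14, 5], [10, 11, 3], [13, 7, 2],
]

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(baseline)

# predict() returns 1 for inliers ("normal here") and -1 for outliers.
print(detector.predict([[10, 11, 3]]))   # [ 1]: fits the learned baseline
print(detector.predict([[3, 900, 40]]))  # [-1]: 3 a.m. bulk upload is anomalous
```

Because the baseline is learned per environment, the same 3 a.m. upload might be perfectly normal for a backup server and alarming for a marketing laptop.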

This is AI that understands "you" in order to identify your adversary. Once thought radical, it now protects over 8,000 organisations worldwide by detecting, responding to, and even preventing the most sophisticated cyberattacks.

Consider last year's widespread Hafnium attacks on Microsoft Exchange servers. Darktrace's unsupervised ML identified and disrupted a series of new, unattributed campaigns in real time across many of its customer environments, with no prior threat intelligence associated with these attacks. Other organisations, by contrast, were caught off guard and remained vulnerable until Microsoft disclosed the attacks a few months later.

This is where unsupervised ML excels: autonomously detecting, investigating, and responding to advanced and previously unseen threats based on a unique understanding of the organisation in question. Darktrace's AI research centre in Cambridge, UK, has tested this technology against offensive AI prototypes. Much like ChatGPT, these prototypes can create hyper-realistic, contextualised phishing emails and even select a suitable sender to spoof before firing the emails off.

The conclusion is clear: as attackers begin to weaponise AI for nefarious ends, security teams will need AI to combat AI. Unsupervised machine learning will be critical because it learns on the fly, constructing a complex, evolving understanding of every user and device across the organisation. With this bird's-eye view of the digital business, unsupervised AI that recognises "you" can detect offensive AI as soon as it begins to manipulate data, and take appropriate action.
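As a toy stand-in for that "learning on the fly", the sketch below keeps an exponentially weighted baseline per device, updates it with every new observation, and scores each value against the evolving norm. The device name, metric, and thresholds are all invented; a real system models thousands of signals per entity.

```python
import math
from collections import defaultdict

ALPHA = 0.1  # weight of each new observation; smaller = slower-drifting baseline

class RunningBaseline:
    """Exponentially weighted mean/variance, updated one observation at a time."""

    def __init__(self):
        self.mean, self.var, self.seen = 0.0, 0.0, 0

    def update_and_score(self, x: float) -> float:
        """Return how many sigmas x sits from this entity's evolving norm."""
        if self.seen == 0:                 # first observation seeds the baseline
            self.mean, self.seen = x, 1
            return 0.0
        std = math.sqrt(self.var) or 1.0
        score = abs(x - self.mean) / std if self.seen >= 5 else 0.0  # warm-up
        delta = x - self.mean
        self.mean += ALPHA * delta
        self.var = (1 - ALPHA) * (self.var + ALPHA * delta * delta)
        self.seen += 1
        return score

baselines = defaultdict(RunningBaseline)  # one evolving model per device

# Invented per-session upload volumes (MB) for one laptop; the last is anomalous.
for mb in [10, 12, 9, 11, 10, 13, 11, 950]:
    score = baselines["laptop-42"].update_and_score(mb)
    if score > 6:
        print(f"alert: {mb} MB is {score:.0f} sigma beyond laptop-42's norm")
```

Because the baseline never stops updating, there is no signature to ship and no central retraining step: the model drifts with the business it protects.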

Attackers may exploit offensive AI for its speed, but defensive AI is a counterweight in the arms race. In the war of algorithms, the right approach to ML could mean the difference between a strong security posture and disaster.