Blog / AI / February 10, 2025

From Hype to Reality: How AI is Transforming Cybersecurity Practices

AI hype is everywhere, but not many vendors are getting specific. Darktrace’s multi-layered AI combines various machine learning techniques for behavioral analytics, real-time threat detection, investigation, and autonomous response.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Nicole Carignan
SVP, Security & AI Strategy, Field CISO

AI is everywhere, predominantly because it has changed the way humans interact with data. AI is a powerful tool for data analytics, predictions, and recommendations, but accuracy, safety, and security are paramount for operationalization.

In cybersecurity, AI-powered solutions are becoming increasingly necessary to keep up with modern business complexity and this new age of cyber-threat, marked by attacker innovation, use of AI, speed, and scale. The emergence of these new threats calls for a varied and layered approach in AI security technology to anticipate asymmetric threats.

While many cybersecurity vendors are adding AI to their products, they are not always communicating the capabilities or data used clearly. This is especially the case with Large Language Models (LLMs). Many products are adding interactive and generative capabilities which do not necessarily increase the efficacy of detection and response but rather are aligned with enhancing the analyst and security team experience and data retrieval.

Consequently, many people erroneously conflate generative AI with other types of AI. Similarly, only 31% of security professionals report that they are “very familiar” with supervised machine learning, the type of AI most often applied in today’s cybersecurity solutions to identify threats using attack artifacts and facilitate automated responses. This confusion around AI and its capabilities can result in suboptimal cybersecurity measures, overfitting, inaccuracies due to ineffective methods or data, inefficient use of resources, and heightened exposure to advanced cyber threats.

Vendors must cut through the AI market and demystify the technology in their products for safe, secure, and accurate adoption. To that end, let’s discuss common AI techniques in cybersecurity as well as how Darktrace applies them.

Modernizing cybersecurity with AI

Machine learning has presented a significant opportunity to the cybersecurity industry, and many vendors have been using it for years. Despite the high potential benefit of applying machine learning to cybersecurity, not every AI tool or machine learning model is equally effective; effectiveness depends on the technique used, how it is applied, and the data it was trained on.

Supervised machine learning and cybersecurity

Supervised machine learning models are trained on labeled, structured data to automate tasks that humans have defined and labeled. Some cybersecurity vendors have been experimenting with supervised machine learning for years, with most automating threat detection based on reported attack data using big data science, shared cyber-threat intelligence, known or reported attack behavior, and classifiers.
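As a hedged illustration of this supervised approach, the sketch below trains a toy nearest-centroid classifier on labeled feature vectors. The features (e.g., normalized bytes sent, failed logins, rare-port connections) and labels are invented for demonstration and are far simpler than production attack data.

```python
# Toy supervised detection: learn one centroid per label from labeled
# feature vectors, then classify new activity by nearest centroid.
# Features and labels are hypothetical examples.

def train_centroids(samples):
    """Compute one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

training = [
    ([0.1, 0.0, 0.2], "benign"),
    ([0.2, 0.1, 0.1], "benign"),
    ([0.9, 0.8, 0.7], "malicious"),
    ([0.8, 0.9, 0.9], "malicious"),
]
centroids = train_centroids(training)
print(classify(centroids, [0.85, 0.9, 0.8]))  # -> malicious
```

The key property, and limitation, is that the model can only recognize patterns resembling the labeled examples it was trained on.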

In the last several years, however, more vendors have expanded into the behavior analytics and anomaly detection side. In many applications, this method separates the learning, when the behavioral profile is created (baselining), from the subsequent anomaly detection. As such, it does not learn continuously and requires periodic updating and re-training to try to stay up to date with dynamic business operations and new attack techniques. Unfortunately, this opens the door for a high rate of daily false positives and false negatives.

Unsupervised machine learning and cybersecurity

Unlike supervised approaches, unsupervised machine learning does not require labeled training data or human-led training. Instead, it independently analyzes data to detect compelling patterns without relying on knowledge of past threats. This removes the dependency of human input or involvement to guide learning.

However, it is constrained by its input parameters, requiring thoughtful consideration of technique and feature selection to ensure accurate outputs. Additionally, while these anomaly-focused models can discover patterns in the data, some of those patterns may be irrelevant and distracting.

When using models for behavior analytics and anomaly detection, the outputs come in the form of anomalies rather than classified threats, requiring additional modeling for threat behavior context and prioritization. Anomaly detection performed in isolation can render resource-wasting false positives.
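This distinction can be sketched with a simple z-score rule over invented metric data (real unsupervised models are far richer): the output is a score of how anomalous a value is relative to learned behavior, not a threat classification.

```python
# Toy unsupervised anomaly scoring: the output is a degree of deviation
# from learned behavior, not a classified threat. Data is hypothetical.
from statistics import mean, stdev

def anomaly_scores(history, new_values):
    """Score new observations by distance from the historical mean,
    in units of standard deviation (z-score)."""
    mu, sigma = mean(history), stdev(history)
    return [abs(v - mu) / sigma for v in new_values]

history = [100, 110, 95, 105, 98, 102, 107, 99]  # e.g., daily login counts
scores = anomaly_scores(history, [103, 400])
print([round(s, 1) for s in scores])  # the second value is highly anomalous
```

Note that a high score alone says nothing about intent; downstream modeling is still needed to decide whether the anomaly is risky or benign.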

LLMs and cybersecurity

LLMs are a major aspect of mainstream generative AI, and they can be used in both supervised and unsupervised ways. They are pre-trained on massive volumes of data and can be applied to human language, machine language, and more.

With the recent explosion of LLMs in the market, many vendors are rushing to add generative AI to their products, using it for chatbots, Retrieval-Augmented Generation (RAG) systems, agents, and embeddings. Generative AI in cybersecurity can optimize data retrieval for defenders, summarize reporting, or emulate sophisticated phishing attacks for preventative security.

But because LLMs perform semantic analysis, they can struggle to consistently deliver the reasoning necessary for security analysis and detection. If not applied responsibly, generative AI can cause confusion by “hallucinating” (referencing invented data) when additional post-processing is not in place to reduce the impact, or by providing conflicting responses due to confirmation bias in the prompts written by different security team members.
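One hedged example of such post-processing is a grounding check: accept an LLM-generated summary only if every indicator it cites actually appears in the retrieved source records. The records, summary text, and IP-matching regex below are illustrative, not any vendor's implementation.

```python
# Toy grounding check for LLM output: verify that every IP address the
# summary cites is present in the retrieved records, to catch one common
# form of hallucination. All inputs here are hypothetical.
import re

def grounded(summary, source_records):
    """Return True only if every IP cited in the summary appears in the records."""
    corpus = " ".join(source_records)
    cited_ips = re.findall(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", summary)
    return all(ip in corpus for ip in cited_ips)

records = ["Login from 191.96.106.219 flagged as rare for this account."]
print(grounded("Suspicious login from 191.96.106.219.", records))  # True
print(grounded("Beaconing observed to 10.0.0.99.", records))       # False
```

Real post-processing would check many more entity types (hashes, domains, hostnames), but the principle is the same: never let generated text introduce data the sources do not contain.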

Combining techniques in a multi-layered AI approach

Each type of machine learning technique has its own set of strengths and weaknesses, so a multi-layered, multi-method approach is ideal to enhance functionality while overcoming the shortcomings of any one method.

Darktrace’s multi-layered AI engine is powered by multiple machine learning approaches, which operate in combination for cyber defense. This allows Darktrace to protect the entire digital estates of the organizations it secures, including corporate networks, cloud computing services, SaaS applications, IoT, Industrial Control Systems (ICS), and email systems.

Plugged into the organization’s infrastructure and services, our AI engine ingests and analyzes the raw data and its interactions within the environment and forms an understanding of the normal behavior, right down to the granular details of specific users and devices. The system continually revises its understanding about what is normal based on evolving evidence, continuously learning as opposed to baselining techniques.
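The difference from periodic baselining can be sketched with an exponentially weighted running average that revises its estimate of "normal" on every observation. This is a deliberately simplified stand-in, not Darktrace's actual learning:

```python
# Toy continuous learning: the baseline updates with every observation,
# rather than being frozen and periodically retrained.

class ContinuousBaseline:
    def __init__(self, alpha=0.1):
        self.alpha = alpha   # how quickly older evidence is discounted
        self.mean = None

    def update(self, value):
        """Blend the new observation into the running estimate of normal."""
        if self.mean is None:
            self.mean = float(value)
        else:
            self.mean = (1 - self.alpha) * self.mean + self.alpha * value
        return self.mean

baseline = ContinuousBaseline(alpha=0.2)
for traffic in [100, 100, 120, 130, 140]:  # behavior drifts upward
    baseline.update(traffic)
print(round(baseline.mean, 1))  # the baseline has tracked the drift
```

Because the estimate moves with every observation, gradual changes in legitimate business behavior are absorbed instead of generating a steady stream of false positives against a stale baseline.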

This dynamic understanding of normal partnered with dozens of anomaly detection models means that the AI engine can identify, with a high degree of precision, events or behaviors that are both anomalous and unlikely to be benign. Understanding anomalies through the lens of many models as well as autonomously fine-tuning the models’ performances gives us a higher understanding and confidence in anomaly detection.

The next layer provides event correlation and threat behavior context to understand the risk level of anomalous events. Every anomalous event is investigated by Cyber AI Analyst, which uses a combination of unsupervised machine learning models to analyze logs and supervised machine learning trained on how to investigate. This provides anomaly and risk context, along with investigation outcomes and explainability.

The ability to identify activity that represents the first footprints of an attacker, without any prior knowledge or intelligence, lies at the heart of the AI system’s efficacy in keeping pace with threat actor innovations and changes in tactics and techniques. It helps the human team detect subtle indicators that can be hard to spot amid the immense noise of legitimate, day-to-day digital interactions. This enables advanced threat detection with full domain visibility.

Digging deeper into AI: Mapping specific machine learning techniques to cybersecurity functions

Visibility and control are vital for the practical adoption of AI solutions, as they build trust between human security teams and their AI tools. That is why we want to share some specific applications of AI across our solutions, moving beyond hype and buzzwords to provide grounded, technical explanations.

Darktrace’s technology helps security teams cover every stage of the incident lifecycle with a range of comprehensive analysis and autonomous investigation and response capabilities.

  1. Behavioral prediction: Our AI understands your unique organization by learning normal patterns of life. It accomplishes this with multiple clustering algorithms, anomaly detection models, a Bayesian meta-classifier for autonomous fine-tuning, graph theory, and more.
  2. Real-time threat detection: With a true understanding of normal, our AI engine connects anomalous events to risky behavior using probabilistic models. 
  3. Investigation: Darktrace performs in-depth analysis and investigation of anomalies, in particular automating Level 1 of a SOC team and augmenting the rest of the SOC team through prioritization for human-led investigations. Some of these methods include supervised and unsupervised machine learning models, semantic analysis models, and graph theory.
  4. Response: Darktrace calculates the proportional action to take in order to neutralize in-progress attacks at machine speed. As a result, organizations are protected 24/7, even when the human team is out of the office. Through understanding the normal pattern of life of an asset or peer group, the autonomous response engine can isolate the anomalous or risky behavior and surgically block it. The autonomous response engine can also enforce the peer group’s pattern of life when rare and risky behavior continues.
  5. Customizable model editor: This layer of customizable logic models tailors our AI’s processing to give security teams more visibility as well as the opportunity to adapt outputs, therefore increasing explainability, interpretability, control, and the ability to modify the operationalization of the AI output with auditing.
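As a simplified illustration of how the first two layers above might combine, the sketch below weights anomaly scores by threat-behavior risk context to prioritize alerts. The behavior names and weights are invented for demonstration; this is not Darktrace's actual scoring.

```python
# Toy prioritization: an anomaly score alone is not an alert; it is
# weighted by invented threat-behavior risk context before ranking.

RISK_WEIGHTS = {                    # hypothetical behaviors and weights
    "new_admin_credential": 0.9,
    "unusual_data_upload": 0.7,
    "rare_external_domain": 0.4,
}

def prioritize(events):
    """Score each (anomaly_score, behavior) pair and sort highest-risk first."""
    scored = [
        (anomaly * RISK_WEIGHTS.get(behavior, 0.1), behavior)
        for anomaly, behavior in events
    ]
    return sorted(scored, reverse=True)

alerts = prioritize([
    (0.95, "rare_external_domain"),
    (0.80, "new_admin_credential"),
    (0.60, "unusual_data_upload"),
])
print(alerts[0][1])  # -> new_admin_credential
```

Note how the most anomalous event is not the top alert: a moderately anomalous credential event outranks a highly anomalous but low-risk domain lookup once context is applied.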

See the complete AI architecture in the paper “The AI Arsenal: Understanding the Tools Shaping Cybersecurity.”

Figure 1: Alerts can be customized in the Model Editor in many ways, such as editing the thresholds for the rarity and unusualness scores shown above.

Machine learning is the fundamental ally in cyber defense

Traditional security methods, even those that use a small subset of machine learning, are no longer sufficient, as these tools can neither keep up with all possible attack vectors nor respond fast enough to the variety of machine-speed attacks, given their complexity compared to known and expected patterns.

Security teams require advanced detection capabilities, using multiple machine learning techniques to understand the environment, filter the noise, and take action where threats are identified.

Darktrace’s multi-layered AI comes together to achieve behavioral prediction, real-time threat detection and response, and incident investigation, all while empowering your security team with visibility and control.

Download the full report

Discover specifically how Darktrace applies different types of AI to improve cybersecurity efficacy and operations in this technical paper.



Blog / May 8, 2025

Anomaly-based threat hunting: Darktrace's approach in action


What is threat hunting?

Threat hunting in cybersecurity involves proactively and iteratively searching through networks and datasets to detect threats that evade existing automated security solutions. It is an important component of a strong cybersecurity posture.

There are several frameworks that Darktrace analysts use to guide how threat hunting is carried out, some of which are:

  • MITRE ATT&CK (Tactics, Techniques, and Procedures (TTPs))
  • Diamond Model of Intrusion Analysis (Adversary, Infrastructure, Victims, Capabilities)
  • Threat Hunt Model – Six Steps (Purpose, Scope, Equip, Plan, Execute, Feedback)
  • Pyramid of Pain

These frameworks are important in baselining how to run a threat hunt. There is also a combination of different methods that gives defenders diversity, regardless of whether the threat hunt is proactive or reactive. Some of these are:

  • Hypothesis-based threat hunting
  • Analytics-driven threat hunting
  • Automated/machine learning hunting
  • Indicator of Compromise (IoC) hunting
  • Victim-based threat hunting

Threat hunting with Darktrace

At its core, Darktrace relies on anomaly-based detection methods. It combines various machine learning types that allow it to characterize what constitutes ‘normal’, based on the analysis of many different measures of a device or actor’s behavior. Those types of learning are then curated into what are called models.

Darktrace models leverage anomaly detection and integrate outputs from Darktrace Deep Packet Inspection, telemetry inputs, and additional modules, creating tailored activity detection.

This dynamic understanding allows Darktrace to identify, with a high degree of precision, events or behaviors that are both anomalous and unlikely to be benign. On top of the machine learning models used for detection, teams also have the ability to change and create models, showcasing the tool’s flexibility. The Model Editor allows security teams to specify the values, priorities, thresholds, and actions they want to detect. That means a team can create custom detection models based on specific use cases or business requirements. Teams can also increase the priority of existing detections based on their own risk assessments of their environment.
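A hedged sketch of what threshold-based custom model logic might look like is below; the class, fields, and values are invented for illustration and do not reflect the Model Editor's actual schema.

```python
# Hypothetical custom detection model: alert only when both the rarity
# and unusualness scores exceed team-chosen thresholds. All names and
# values are invented for illustration.
from dataclasses import dataclass

@dataclass
class CustomModel:
    name: str
    rarity_threshold: float       # alert when rarity score meets or exceeds this
    unusualness_threshold: float  # same, for the unusualness score
    priority: int                 # team-assigned triage priority

    def matches(self, rarity, unusualness):
        return (rarity >= self.rarity_threshold
                and unusualness >= self.unusualness_threshold)

model = CustomModel("SaaS / Rare Consent Grant", 0.8, 0.7, priority=5)
print(model.matches(rarity=0.9, unusualness=0.75))  # True
print(model.matches(rarity=0.9, unusualness=0.5))   # False
```

Raising or lowering these thresholds is exactly the kind of tuning that lets a team trade alert volume against sensitivity for their own environment.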

This level of dexterity is particularly useful when conducting a threat hunt. As described above, and in previous ‘Inside the SOC’ blogs, such a threat hunt can focus on a specific threat actor or sector, or take a hypothesis-based approach combined with ‘experimenting’ with some of Darktrace’s models.

Conducting a threat hunt in the energy sector with experimental models

In Darktrace’s recent Threat Research report, “AI & Cybersecurity: The state of cyber in UK and US energy sectors,” the Threat Research team crafted hypothesis-driven threat hunts, building experimental models and investigating existing models to test them and detect malicious activity across Darktrace customers in the energy sector.

For one of the hunts, which hypothesized the use of PerfectData software and multi-factor authentication (MFA) bypass to compromise user accounts and destroy data, an experimental model was created to detect a Software-as-a-Service (SaaS) user performing activity relating to ‘PerfectData Software’, known to allow a threat actor to exfiltrate whole mailboxes as a PST file. Experimental model alerts caused by this anomalous activity were analyzed in conjunction with existing SaaS and email-related models that would indicate a multi-stage attack in line with the hypothesis.

While hunting, Darktrace researchers found multiple alerts for this experimental model associated with PerfectData software usage among energy sector customers, including an oil and gas investment company, as well as in other sectors. Upon further investigation, it was also found that in June 2024, a malicious actor had targeted a renewable energy infrastructure provider via a PerfectData Software attack and demonstrated intent to conduct an Operational Technology (OT) attack.

The actor logged into Azure AD from a rare US IP address. They then granted consent to ‘eM Client’ from the same IP. Shortly after, the actor granted ‘AddServicePrincipal’ via Azure to PerfectData Software. Two days later, the actor created a new email rule from a London IP to move emails to an RSS Feed Folder, stop processing rules, and mark emails as read. They then accessed mail items in the ‘\Sent’ folder from a malicious IP belonging to the anonymization network Private Internet Access Virtual Private Network (PIA VPN) [1]. The actor then conducted mass email deletions, deleting multiple instances of emails with the subject ‘[Name] shared "[Company Name] Proposal" With You’ from the ‘\Sent’ folder. The subject suggests the emails likely contained a link to file storage for phishing purposes, and the mass deletion likely represented an attempt to obfuscate a potential outbound phishing email campaign.

Figure 1: The Darktrace Model Alert that triggered for the mass deletes of the likely phishing email containing a file storage link.

A month later, the same user was observed mass downloading mLog CSV files related to proprietary and Operational Technology information. In September, three months after the initial attack, the actor performed another mass download of operational files pertaining to operating instructions and measurements. The observed patience and specific file downloads seemingly demonstrated an intent to conduct or research possible OT attack vectors. An attack on OT could have significant impacts, including operational downtime, reputational damage, and harm to everyday operations. Darktrace alerted the impacted customer once findings were verified, and the internal security team took subsequent actions to prevent further malicious activity.

Conclusion

Harnessing the power of different tools in a security stack is a key element of cyber defense. The above hypothesis-based threat hunt and custom experimental model creation demonstrate different threat hunting approaches, show how Darktrace’s approach can be operationalized, and illustrate that proactive threat hunting is a valuable complement to traditional security controls and essential for organizations facing increasingly complex threat landscapes.

Credit to Nathaniel Jones (VP, Security & AI Strategy, Field CISO at Darktrace) and Zoe Tilsiter (EMEA Consultancy Lead)

References

  1. https://spur.us/context/191.96.106.219

About the author
Nathaniel Jones
VP, Security & AI Strategy, Field CISO

Blog / May 6, 2025

Combatting the Top Three Sources of Risk in the Cloud


With cloud computing, organizations are storing data like intellectual property, trade secrets, Personally Identifiable Information (PII), proprietary code and statistics, and other sensitive information in the cloud. If this data were to be accessed by malicious actors, it could incur financial loss, reputational damage, legal liabilities, and business disruption.

Last year, data breaches in solely public cloud deployments were the most expensive type of data breach, costing an average of $5.17 million USD, a 13.1% increase from the year before.

So, as cloud usage continues to grow, the teams in charge of protecting these deployments must understand the associated cybersecurity risks.

What are cloud risks?

Cloud threats come in many forms, with one of the key types consisting of cloud risks. These arise from challenges in implementing and maintaining cloud infrastructure, which can expose the organization to potential damage, loss, and attacks.

There are three major types of cloud risks:

1. Misconfigurations

As organizations struggle with complex cloud environments, misconfiguration is one of the leading causes of cloud security incidents. These risks occur when cloud settings leave gaps between cloud security solutions, exposing data and services to unauthorized access. If discovered by a threat actor, a misconfiguration can be exploited to allow infiltration, lateral movement, escalation, and damage.

With the scale and dynamism of cloud infrastructure and the complexity of hybrid and multi-cloud deployments, security teams face a major challenge in exerting the required visibility and control to identify misconfigurations before they are exploited.

Common causes of misconfiguration include skill shortages, outdated practices, and manual workflows. For example, potential misconfigurations can occur around firewall zones, isolated file systems, and mount systems, which all require specialized skill to set up and diligent monitoring to maintain.
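As an illustration, a misconfiguration scan can be sketched as a set of checks applied over resource configurations. The field names and checks below are invented examples, not a real cloud provider's schema.

```python
# Toy misconfiguration scan: apply named checks to each resource's
# configuration and collect findings. All fields are hypothetical.

RISKY_CHECKS = {
    "public_access": lambda cfg: cfg.get("public_access") is True,
    "no_encryption": lambda cfg: not cfg.get("encrypted", False),
    "open_ingress":  lambda cfg: "0.0.0.0/0" in cfg.get("ingress", []),
}

def find_misconfigurations(resources):
    """Return (resource_name, issue) pairs for every failed check."""
    findings = []
    for name, cfg in resources.items():
        for issue, check in RISKY_CHECKS.items():
            if check(cfg):
                findings.append((name, issue))
    return findings

resources = {
    "bucket-a": {"public_access": True, "encrypted": True},
    "db-1": {"encrypted": False, "ingress": ["0.0.0.0/0"]},
}
print(find_misconfigurations(resources))
```

Automating checks like these is what allows security teams to keep pace with the scale and dynamism of cloud infrastructure that manual review cannot.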

2. Identity and Access Management (IAM) failures

IAM has only increased in importance with the rise of cloud computing and remote working. It allows security teams to control which users can and cannot access sensitive data, applications, and other resources.

Cybersecurity professionals ranked IAM skills as the second most important security skill to have, just behind general cloud and application security.

There are four parts to IAM: authentication, authorization, administration, and auditing and reporting. Within these, there are a lot of subcomponents as well, including but not limited to Single Sign-On (SSO), Two-Factor Authentication (2FA), Multi-Factor Authentication (MFA), and Role-Based Access Control (RBAC).
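Role-Based Access Control, for example, can be sketched as a mapping from roles to permissions, with access granted only through a role. The role and permission names below are invented for illustration.

```python
# Toy RBAC: users hold roles, roles carry permissions, and an action is
# allowed only if some role grants it. Names are hypothetical.

ROLE_PERMISSIONS = {
    "analyst": {"read_alerts", "run_queries"},
    "admin":   {"read_alerts", "run_queries", "edit_models", "manage_users"},
    "auditor": {"read_alerts"},
}

def is_allowed(user_roles, permission):
    """Grant access if any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

print(is_allowed(["analyst"], "run_queries"))  # True
print(is_allowed(["auditor"], "edit_models"))  # False
```

The security benefit is that permissions are reviewed per role rather than per user, which makes over-provisioned access far easier to spot and revoke.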

Security teams are faced with the challenge of allowing enough access for employees, contractors, vendors, and partners to complete their jobs while restricting enough to maintain security. They may struggle to track what users are doing across the cloud, apps, and on-premises servers.

When IAM is misconfigured, it increases the attack surface and can leave accounts with access to resources they do not need to perform their intended roles. This type of risk creates the possibility for threat actors or compromised accounts to gain access to sensitive company data and escalate privileges in cloud environments. It can also allow malicious insiders and users who accidentally violate data protection regulations to cause greater damage.

3. Cross-domain threats

The complexity of hybrid and cloud environments can be exploited by attacks that cross multiple domains, such as traditional network environments, identity systems, SaaS platforms, and cloud environments. These attacks are difficult to detect and mitigate, especially when a security posture is siloed or fragmented.  

Some attack types inherently involve multiple domains, like lateral movement and supply chain attacks, which target both on-premises and cloud networks.  

Challenges in securing against cross-domain threats often come from a lack of unified visibility. If a security team does not have unified visibility across the organization’s domains, gaps between various infrastructures and the teams that manage them can leave organizations vulnerable.

Adopting AI cybersecurity tools to reduce cloud risk

For security teams to defend against misconfigurations, IAM failures, and cross-domain threats, they require a combination of enhanced visibility into cloud assets and architectures, better automation, and more advanced analytics. These capabilities can be achieved with AI-powered cybersecurity tools.

Such tools use AI and automation to help teams maintain a clear view of all their assets and activities and consistently enforce security policies.

Darktrace / CLOUD is a Cloud Detection and Response (CDR) solution that makes cloud security accessible to all security teams and SOCs by using AI to identify and correct misconfigurations and other cloud risks in public, hybrid, and multi-cloud environments.

It provides real-time, dynamic architectural modeling, which gives SecOps and DevOps teams a unified view of cloud infrastructures to enhance collaboration and reveal possible misconfigurations and other cloud risks. It continuously evaluates architecture changes and monitors real-time activity, providing audit-ready traceability and proactive risk management.

Figure 1: Real-time visibility into cloud assets and architectures built from network, configuration, and identity and access roles. In this unified view, Darktrace / CLOUD reveals possible misconfigurations and risk paths.

Darktrace / CLOUD also offers attack path modeling for the cloud. It can identify exposed assets and highlight internal attack paths, giving a dynamic view of the riskiest paths across cloud environments, network environments, and the connections between them, enabling security teams to prioritize based on unique business risk and address gaps to prevent future attacks.

Darktrace’s Self-Learning AI ensures continuous cloud resilience, helping teams move from reactive to proactive defense.

[related-resource]

About the author
Pallavi Singh
Product Marketing Manager, OT Security & Compliance