Will AI also revolutionize cybersecurity?
Today, there’s every reason to believe so!
After a decade of massive investment in cybersecurity, we are in a period of consolidation. Optimization has become the watchword: automate repetitive tasks, rationalize resources, detect ever faster and respond ever better.
AI, among other things, is a response to these objectives.
But in concrete terms, what changes has it already brought? What use cases are transforming the daily lives of cyber teams? And how far can we go?
Let’s explore together how AI will revolutionize cybersecurity.
Raising awareness: AI is changing the game!
In a nutshell: 20% of cyber incidents are related to phishing and the use of stolen accounts (according to the CERT-Wavestone 2024 report: trends, analyses and lessons for 2025).
Training teams is therefore essential. But it’s an onerous task, requiring time, resources and the right approach to capture attention and guarantee real impact. AI is changing the game by automating awareness campaigns, making them more interactive and engaging.
There’s no longer any excuse for excluding an entity from your campaign because they don’t speak English, or for failing to tailor your communications to the issues faced by different departments (HR, Finance, IT…).
With a little background on the different teams targeted, and an initial version of your awareness campaign, GenAI1 models can quickly break your campaign down into customized copies for each target group. AI makes it possible to create, with minimal effort, content tailored to the concerns of the awareness program’s targets, increasing employee engagement and interest thanks to a message that speaks directly to them and deals with their own issues. The result is a gain in time, performance and quality, enabling you to transform massive, generic awareness campaigns into targeted, personalized ones that are undeniably more relevant.
Two possibilities are emerging for implementing this use case:
- Use your company’s trusted GenAI models to help you generate your campaign elements. The advantage here is, of course, the low cost involved.
- Use an external supplier. Many service providers who assist companies with standard phishing campaigns use GenAI internally to deliver a customized solution quickly.
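As a toy illustration of the first option, the sketch below breaks one generic awareness message into per-department variants. In practice a GenAI model would do the actual rewriting; here a simple template stands in, and the departments and lures are hypothetical examples.

```python
# Hypothetical generic campaign message with per-department customization.
BASE = ("Reminder: never open attachments from unknown senders. "
        "This quarter, attackers are targeting {dept} teams with fake {lure}.")

# Illustrative mapping of target groups to the lures they most often see.
DEPARTMENTS = {
    "HR": "CV attachments",
    "Finance": "invoice approvals",
    "IT": "password-reset requests",
}

for dept, lure in DEPARTMENTS.items():
    # One customized copy per target group, derived from the single base message.
    print(f"[{dept}] " + BASE.format(dept=dept, lure=lure))
```

The point is the workflow, not the template: one source message, many audience-specific copies, at near-zero marginal cost.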
In short, AI will reduce the cost and time of rolling out awareness programs, while improving their buy-in and effectiveness, making security a responsibility shared by all.
These same AI models can also be customized and used by cybersecurity teams for other purposes, such as facilitating access to cybersecurity repositories.
CISO GPT: simplified access to the cyber repository for the business
Internal cybersecurity documents and regulations are generally comprehensive and well mastered by the teams involved in drawing them up. However, they remain little known to other company departments.
These documents are full of useful information for the business, but due to a lack of visibility, policies are not applied. Cyber teams are called upon to respond to recurring requests for information, even though these are well documented.
With AI chatbots, this information becomes easily accessible. No need to scroll through entire pages: a simple question provides clear, instant answers, making it easier to apply best practices and react quickly in the event of an incident.
More and more companies are adopting chatbots based on generative AI to answer users’ questions and guide them to the right information. These tools, powered by models such as ChatGPT, Gemini or LLaMA, access up-to-date, high-quality internal data.
Result: users quickly find the answers they need.
At Wavestone, we have developed CISO GPT. This chatbot, connected to internal security repositories, becomes a veritable cybersecurity assistant. It answers common questions, facilitates access to best practices and relieves cyber teams of repetitive requests.
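The retrieval step behind such an assistant can be sketched in a few lines. This is a deliberately naive keyword lookup over hypothetical policy snippets; a real chatbot would use embeddings for retrieval and an LLM to phrase the answer.

```python
# Hypothetical policy snippets a chatbot could be connected to.
POLICY_SNIPPETS = {
    "password": "Passwords must be at least 14 characters and rotated yearly.",
    "usb": "USB storage devices are forbidden on production workstations.",
    "incident": "Report suspected incidents to the SOC within one hour.",
}

def answer(question: str) -> str:
    """Return the first policy snippet whose keyword appears in the question."""
    q = question.lower()
    for keyword, snippet in POLICY_SNIPPETS.items():
        if keyword in q:
            return snippet
    return "No matching policy found; please contact the cyber team."

print(answer("What is the password policy?"))
# Passwords must be at least 14 characters and rotated yearly.
```

Even this toy version shows the value: the user asks in plain language and gets the relevant rule back, instead of a ticket landing on the cyber team’s desk.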
Answering business questions with AI is all well and good. But it’s possible to do so much more!
As well as providing rapid access to information, AI can also automate time-consuming tasks. Incident management, alert analysis, reporting… these are all processes that consume time and resources. What if AI could speed them up, or even take them over?
Save time with AI: Automate time-consuming tasks
Everyday business life is full of time-consuming tasks. AI can certainly automate many of them, but which ones should you focus on first for maximum value?
Automating data classification with AI
Here’s a first answer, with another figure: 77% of recorded cyber-attacks resulted in data theft (according to the CERT-Wavestone 2024 report: trends, analyses and lessons for 2025).
And this trend is unlikely to slow down. The explosion in data volumes, accelerated by the rise of AI, makes securing them more complex.
Faced with this challenge, Data Classification remains an essential pillar in building effective DLP (Data Loss Prevention) rules. The aim: to identify and categorize data according to its sensitivity, and apply the appropriate protection measures.
But classifying data by hand is impossible on a large scale. Fortunately, machine learning can automate the process. No need for GenAI here: specialized algorithms can analyze immense volumes of documents, understand their nature and predict their level of sensitivity.
These models are based on several criteria:
- The presence of sensitive indicators (bank numbers, personal data, strategic information, etc.).
- User behavior to detect anomalies and report abnormally exposed files.
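The first criterion, indicator matching, can be illustrated with a minimal rule-based sketch. In production this would be a trained model over many more signals; the patterns and labels below are purely hypothetical.

```python
import re

# Illustrative sensitivity indicators (hypothetical patterns, not production rules).
PATTERNS = {
    "iban": re.compile(r"\b[A-Z]{2}\d{2}(?: ?\w{4}){4,7}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "keyword": re.compile(r"\b(confidential|salary|strategic)\b", re.IGNORECASE),
}

def classify(text: str) -> str:
    """Return a coarse sensitivity label based on matched indicators."""
    hits = {name for name, pat in PATTERNS.items() if pat.search(text)}
    if "iban" in hits:
        return "restricted"      # financial identifiers present
    if hits:
        return "confidential"    # personal data or sensitive keywords
    return "internal"            # no indicator matched
```

A document mentioning a bank account would land in “restricted”, one containing personal e-mail addresses in “confidential”, and the rest in “internal”, feeding directly into DLP rules.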
By combining Data Classification and AI, companies can finally regain control of their data and drastically reduce the risk of data leakage.
This is where DSPM (Data Security Posture Management) comes in. These solutions go beyond simple classification, offering complete visibility of data exposure in cloud and hybrid environments. They can detect poorly protected data, monitor access and automate compliance.
And compliance is another time-consuming process!
Simplify compliance: automate it with AI
Complying with standards and regulations is a tedious task. With every new standard comes a new compliance process!
For an international player, subject to several regulatory authorities, it’s a never-ending loop.
Good news: AI can automate much of the work. GenAI-based solutions can verify and anticipate compliance deviations.
AI excels at analyzing and comparing structured data. For example, a GenAI model can compare a document with an internal or external repository to validate its compliance. Need to check an information security policy against NIST recommendations? AI can identify discrepancies and suggest adjustments.
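The reporting end of that comparison is simple to picture. In the sketch below, the semantic matching (deciding which controls a policy actually covers) is assumed to have been done by a GenAI model; the control IDs and descriptions are hypothetical, NIST-style placeholders.

```python
# Hypothetical control checklist and the control IDs a model found in the policy.
required_controls = {
    "AC-2": "account management",
    "IA-5": "authenticator management",
    "SI-4": "system monitoring",
}
policy_covers = {"AC-2", "SI-4"}  # IDs judged covered by the policy document

def compliance_gaps(required: dict, covered: set) -> list:
    """List controls the policy does not yet address."""
    return sorted(cid for cid in required if cid not in covered)

print(compliance_gaps(required_controls, policy_covers))  # ['IA-5']
```

Each gap in the output becomes a concrete remediation item, which is exactly the “identify discrepancies and suggest adjustments” step described above.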
Simplify vulnerability management
AI has no shortage of solutions when it comes to vulnerability management. It can automate several key tasks:
- Verification of firewall rules: GenAI can analyze a flow matrix and compare it with the rules actually implemented. It detects inconsistencies and can even anticipate the impact of a rule change.
- Code review: AI scans code for security flaws and suggests optimizations.
With these tools, teams reduce the risk of error, speed up processes and free up time to concentrate on higher value-added tasks.
Automating compliance and vulnerability management reinforces upstream security and anticipates threats. But sometimes it’s already too late!
Faced with ever more innovative attackers, how can AI help to better detect and respond to incidents?
Incident detection and response: AI on the front line
Let’s start with a clear observation: cyberthreats are constantly evolving!
Attackers are adapting and innovating, and it is imperative to react quickly and effectively to increasingly sophisticated incidents. Security Operations Centers (SOCs) are at the forefront of incident management.
With AI on their side, they now have a new ally!
AI at the heart of the SOC: detect faster…
One of the most widespread and damaging attack vectors in recent years is phishing, and attempts are not only more frequent, but also more elaborate than in the past: QR codes, BEC (Business Email Compromise)…
As mentioned above, awareness-raising campaigns are essential to deal with this threat, but it is now possible to reinforce the first lines of defense against this type of attack thanks to deep learning.
NLP (natural language processing) algorithms don’t just analyze the raw content of e-mails. They also detect subtle signals such as an alarmist tone, an urgent request or an unusual style. By comparing each message with usual patterns, AI can more effectively spot fraud attempts. These solutions go much further than traditional anti-spam tools, which often rely solely on indicators of compromise.
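A crude way to picture those “subtle signals” is a weighted cue list: each urgency or pressure marker contributes to a score. A real NLP model learns these cues and weights from data; the ones below are invented for illustration.

```python
import re

# Illustrative "weak signal" cues a model might learn (hypothetical weights).
CUES = {
    r"\burgent(ly)?\b": 2,
    r"\bimmediately\b": 2,
    r"\bverify your (account|password)\b": 3,
    r"\bgift card\b": 3,
    r"\bwire transfer\b": 2,
}

def phishing_score(email_body: str) -> int:
    """Sum the weights of all cues found in the message body."""
    text = email_body.lower()
    return sum(w for pat, w in CUES.items() if re.search(pat, text))

def is_suspicious(email_body: str, threshold: int = 4) -> bool:
    return phishing_score(email_body) >= threshold
```

A message like “Please verify your account immediately” trips two cues at once and crosses the threshold, while routine mail scores zero; the model-based version does the same thing with learned features instead of hand-written patterns.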
Beyond this specific case, AI is becoming indispensable for detecting deviant behavior (UEBA, User and Entity Behavior Analytics). The ever-increasing size and diversity of information systems make it impossible to build individual rules to detect anomalies. Thanks to machine learning, we can continuously analyze the activities of users and systems to identify significant deviations from normal behavior. This makes it possible to detect threats that are difficult to identify with static rules, such as a compromised account suddenly accessing sensitive resources, or a user acting unusually outside their normal working hours.
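The core idea of a behavioral baseline can be reduced to a one-function sketch: flag any event that deviates too far from a user’s own history. Real UEBA products model many dimensions at once; here a single feature (login hour) and invented data stand in.

```python
from statistics import mean, stdev

def is_anomalous(history: list, new_value: float, z_max: float = 3.0) -> bool:
    """Flag new_value if it sits more than z_max standard deviations
    from the user's historical baseline (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > z_max

# Hypothetical login hours for one user over two weeks.
logins = [9, 9.5, 8.75, 9.25, 10, 9, 8.5, 9.5, 9, 9.25]

print(is_anomalous(logins, 3.0))   # 3 a.m. login -> True
print(is_anomalous(logins, 9.5))   # usual morning login -> False
```

Because the baseline is per user, the same 3 a.m. login that is anomalous here would be perfectly normal for a night-shift operator, which is exactly what static rules fail to capture.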
These solutions are not new: as early as 2015, vendors were already incorporating behavioral analysis algorithms into their products!
AI also plays a key role in accelerating and automating response. Faced with ever faster and more sophisticated attacks, let’s see how AI enables SOC teams to react with greater efficiency and precision.
… respond better
SOC analysts, overwhelmed by a growing volume of alerts, have to deal with ever more of them with teams that are not growing. To help them, new GenAI assistants dedicated to the SOC are emerging on the market, optimizing the entire incident-processing chain. The aim is to do more with less, by redirecting analysts towards higher value-added tasks and limiting the well-known syndrome of “alert fatigue”.
It starts with prioritization: operational teams are swamped with alerts, and must constantly distinguish true positives from false ones, priority from low priority. Of the 20 alerts in front of me, which represent a real attack on my IS? AI’s strength lies precisely in ensuring better alert processing by correlating current events. In an instant, AI excludes false positives and returns the list of priority incidents to investigate.
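A stripped-down version of that correlation step: boost the score of alerts whose host keeps reappearing in the feed, since repeated hits on one machine often indicate a real attack rather than noise. The alert data and scoring formula are hypothetical; real SOC assistants correlate far richer context.

```python
from collections import Counter

# Hypothetical alert feed: (alert_id, host, severity 1-5).
alerts = [
    ("a1", "srv-db01", 3),
    ("a2", "wks-042", 1),
    ("a3", "srv-db01", 4),
    ("a4", "srv-db01", 2),
    ("a5", "wks-107", 1),
]

def prioritize(alerts):
    """Boost alerts on hosts that appear repeatedly (naive correlation),
    then return alert IDs sorted highest score first."""
    host_counts = Counter(host for _, host, _ in alerts)
    scored = [(sev + host_counts[host] - 1, aid) for aid, host, sev in alerts]
    return [aid for score, aid in sorted(scored, reverse=True)]

print(prioritize(alerts))  # ['a3', 'a1', 'a4', 'a5', 'a2']
```

All three alerts on srv-db01 rise to the top of the queue, even the low-severity one, which is the behavior an analyst wants when a single host is being worked over by an attacker.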
The analyst can then rely on this feedback to launch their investigation. And here again, AI supports them in their research: the GenAI assistant can generate queries from natural language, making it easy to interrogate all network equipment. Based on its knowledge, the AI can also suggest the steps to follow for the investigation: who should I question? What should I check?
The results returned will not be comparable to the analysis of an expert SOC engineer. On the other hand, they will enable more junior analysts to begin their investigation before escalating it in the event of difficulty.
But the job doesn’t stop there: you need to be able to take the necessary remediation actions once an attack is discovered. Once again, the AI assistant keeps the user at the center of decision-making, quickly providing a set of actions to contain the threat: hosts to isolate, IPs to block…
The power of these use cases also lies in the ability of AI assistants to provide structured feedback, which makes it much easier not only for analysts to understand, but also to archive and explain incidents to a third party.
Of course, these are not the only use cases to date, and many more will emerge in the years to come. For incident response teams, the next step is clear: automate remediation and protection actions. We are already seeing this for our most mature customers, and the arrival of AI agents2 will only accelerate this trend.
The next use cases are clear: give AI active rights over corporate resources to enable a real-time response that blocks the spread of a threat. Following an autonomous investigation, the AI will be able to decide on its own whether to adapt firewall rules, revoke a user’s access on the fly, or trigger a new strong-authentication request. Of course, such advanced autonomy is still some way off, but it’s clear that we’re heading in that direction…
Finally, integrating these use cases raises another major challenge: cost. In a tense economic climate, security team budgets are not being revised upwards, quite the contrary. The next step will be to find the right balance between security gains and financial costs.
Conclusion
Cybersecurity teams are faced with a plethora of AI solutions on offer, making the choice a complex one. To move forward effectively, it’s essential to adopt a pragmatic and structured approach. Our recommendations:
- Get trained in AI to better assess the added value of certain products, and avoid ‘gimmicky’ solutions.
- Choose the right use cases according to their added value (optimization of resources, economies of scale, improved risk coverage) and complexity (technology base, data management, HR and financial costs).
- Define the right development strategy, choosing between an in-house approach or using existing market solutions.
- Focus on impact rather than completeness, aiming for efficient deployment of use cases.
- Anticipate the challenges of securing AI, including model robustness, bias management and resistance to adversarial attacks.
Twenty years ago, DARPA launched a challenge on autonomous cars. What was then science fiction is now reality. In 2025, AI will transform cybersecurity. We’re only at the beginning: how far will AI agents go in 10 years’ time?
–
1: GenAI (Generative Artificial Intelligence) refers to a branch of AI capable of creating original content (text, images, code, etc.) based on models trained on large datasets.
2: AI agent refers to an artificial intelligence capable of acting autonomously to achieve complex goals, by planning, making decisions and interacting with its environment without constant human supervision.