Switching to AI will give enterprises a grace period against cyber threats
The moment generative AI applications hit the market, the pace of business changed for organizations. Today, not embracing AI innovations can mean falling behind competitors and putting cyber defenses at a disadvantage against AI-powered attacks.
But when discussing the impact of AI on cybercrime, it's important to look at things through a pragmatic lens rather than feeding into hype that reads more like science fiction.
Today's AI advancements signal a significant leap forward for enterprise security. With the Middle East cybersecurity market projected to reach $44.7 billion by 2027, cybercriminals can't easily match the size and scale of enterprises' resources, skills, and motivation, which makes it harder for them to keep up with the current pace of AI innovation.
Make no mistake, though: Cybercrime will catch up. This is not the first time the security industry has had a brief edge — when ransomware started driving more defenders to adopt endpoint detection and response technologies, attackers needed some time to figure out how to evade those detections.
That interim ‘grace period’ gave businesses time to better shield themselves. The same applies now: Businesses need to maximize their lead in the AI race, advancing their threat detection and response capabilities and leveraging the benefits that AI innovations offer.
Let's take a look at where the malicious use of AI will and won't make the most immediate impact.
Fully automated malware campaigns
In recent months, we've seen claims about various malicious use cases of AI, but just because a scenario is possible doesn't make it probable. Take fully automated malware campaigns, for example.
Logic says it's possible to leverage AI to achieve that outcome, but given that leading tech companies have yet to pioneer fully automated software development cycles, it's unlikely that financially constrained cybercrime groups will get there first.
AI-engineered phishing
Another use case to consider is AI-engineered phishing attacks. This next generation of phishing may achieve higher levels of persuasiveness and higher click rates, but a human-engineered phish and an AI-engineered phish demand the same detection and response readiness.
AI acts as a force multiplier that lets attackers scale phishing campaigns, so if an enterprise is seeing a spike in inbound phishing emails, and those emails are significantly more persuasive, it's likely looking at a high probability of clicks and potential compromise. AI models can also improve targeting, helping attackers determine who within an organization is most susceptible to a specific phish and ultimately achieve a higher ROI from their campaigns.
Phishing attacks have historically been among the most successful tactics used to infiltrate enterprises.
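To make the "spike" signal concrete, a defender could baseline daily phishing-report volume and alert when it jumps well beyond the recent norm. The sketch below is illustrative only; the data source, the 14-day window, and the z-score threshold are assumptions, not a reference to any specific tool.

    # Illustrative sketch only: flag days where phishing-report volume far
    # exceeds a rolling baseline. Thresholds and data are hypothetical.
    from statistics import mean, stdev

    def flag_phishing_spikes(daily_counts, window=14, z_threshold=3.0):
        """Return indices of days whose report volume is anomalous versus the prior window."""
        spikes = []
        for i in range(window, len(daily_counts)):
            baseline = daily_counts[i - window:i]
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma == 0:
                continue  # flat baseline; skip rather than divide by zero
            if (daily_counts[i] - mu) / sigma >= z_threshold:
                spikes.append(i)
        return spikes

    # Example: a sudden jump on the final day trips the alert.
    counts = [12, 9, 14, 11, 10, 13, 12, 9, 11, 10, 12, 13, 11, 10, 48]
    print(flag_phishing_spikes(counts))  # [14]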
AI poisoning attacks
AI poisoning attacks involve programmatically manipulating the code and data on which AI models are built. By poisoning a model, attackers can make it behave in whatever way they want, and the manipulation is not easily detectable. However, these attacks aren't easy to carry out: they require access to the data the model is being trained on at training time, which is no small feat.
As more models become open-source, the risk of these attacks will increase, but for now, it remains low.
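To see why access to the training data matters, consider a deliberately simplified sketch: a toy nearest-neighbour classifier, contrived feature vectors, and a hypothetical attacker who slips a few mislabelled copies of their own sample into the training set. It illustrates the mechanism only, not any real pipeline or attack; the targeted sample flips to "benign" while the rest of the model's behaviour barely changes, which is also why poisoning is hard to spot.

    # Toy illustration only: targeted label poisoning against a k-nearest-neighbour
    # classifier. Data, features, and the attack are contrived for clarity.
    import random

    def knn_predict(training_data, point, k=3):
        """Classify `point` by majority vote among its k nearest training samples."""
        nearest = sorted(training_data,
                         key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], point)))[:k]
        labels = [label for _, label in nearest]
        return max(set(labels), key=labels.count)

    random.seed(1)
    # Clean training data: two well-separated clusters of "feature vectors".
    clean = [((random.gauss(0, 1), random.gauss(0, 1)), "benign") for _ in range(100)] + \
            [((random.gauss(5, 1), random.gauss(5, 1)), "malicious") for _ in range(100)]

    attacker_sample = (5.2, 4.9)  # sits squarely inside the malicious cluster
    print(knn_predict(clean, attacker_sample))     # classified as malicious

    # The attacker injects a few mislabelled copies of their own sample at training time.
    poisoned = clean + [(attacker_sample, "benign")] * 3
    print(knn_predict(poisoned, attacker_sample))  # now classified as benign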
The unknown
We may not see an immediate spike in AI-enabled attacks, but the scaling of cybercrime through AI will have a substantial impact on organizations that aren't ready for it. Speed and scale are intrinsic characteristics of AI that benefit defenders and attackers alike. Security teams are already understaffed and overwhelmed, and a spike in malicious traffic or incident response engagements adds substantial weight to their workload.
This reaffirms the need for enterprises to invest in their defenses, using AI to drive speed and precision in their threat detection and response capabilities. Enterprises that take advantage of this grace period will find themselves much better prepared for the day attackers catch up in the AI cyber race.