When AI is used to orchestrate cyberattacks, both the speed and the sophistication change dramatically. Early documented cases show that attacks can now be automated, personalised and far better concealed than before. This article gives you an overview of what’s changing and, more importantly, how you can prepare your digital business to withstand AI-driven cybercrime.
AI has made everyday work easier for developers, sales teams, marketers and customer service alike. But it has also made life easier for those operating on the darker side of the internet. That became painfully clear when Anthropic recently disclosed how their AI models and agents had been exploited to automate a large, coordinated cyberattack.
According to Anthropic, the attack was carried out by a Chinese state-sponsored group they’ve dubbed GTG-1002. The attackers’ technique was to jailbreak the safety mechanisms by convincing the AI it was performing a legitimate cybersecurity test - effectively bypassing the built-in guardrails.
Once inside, they used Anthropic’s models (especially Claude) to execute the majority of an espionage campaign targeting around 30 international organisations — primarily within finance, tech and politics. More specifically, the attackers used Anthropic’s models to:
- map targets by organising public and stolen data
- analyse documents to extract valuable information
- craft convincing messages to key individuals
- suggest next steps based on the campaign’s progress
- automate tasks that would normally require expertise and time
In practice, Anthropic estimates that AI handled 80–90% of the workload. They also believe the attack progressed at a speed that would be impossible for humans to match.
And this is exactly the moment where anyone focused on IT security should sit up straight. When attackers can offload so much of the work to an AI model that can think in structures, communicate convincingly and operate at extreme speed, the threat landscape changes dramatically.
Old wolves in new sheep’s clothing
AI doesn’t change the types of attacks businesses are hit by - it changes how they’re carried out. The classic categories remain: phishing, malware, ransomware, data theft, SQL injections, social engineering and so on. But the quality, scale and timing are now on a completely different level.
Social engineering and phishing in particular become far more convincing with AI. Not long ago, it was relatively easy to spot a fake email, SMS or social media message thanks to odd visuals, a sketchy sender address or a half-baked attempt at writing Danish. That era is over. AI can mimic internal roles, jargon, deadlines and projects almost flawlessly - and write in perfect language across practically any tongue. Messages are no longer easy to dismiss as foreign or off. They look like something you should respond to.
At the same time, the nature of attacks is shifting. It’s less about breaking in and more about blending in. The most effective attack today isn’t necessarily the most technical one. It’s the one that convinces an employee to hand over access without realising it.
AI gives attackers both a sharper understanding of the organisations they’re trying to infiltrate and the ability to produce content that feels far more credible than anything we’ve seen before. And it reduces the level of technical expertise and research required to execute complex, convincing attacks - because so much of the work can be automated.
On top of that, companies’ own systems are becoming more complex with AI. Complexity isn’t dangerous in itself, but it can introduce cracks in the foundation that the wrong people can exploit. A chatbot sourcing external content can be manipulated. An AI agent can gain more access than intended. AI-generated code with insecure patterns can slip into production if the review process isn’t airtight.
All in all, it’s a dangerous cocktail. One businesses need to take seriously. So let’s look at how you can best prepare for a future shaped by AI-driven cyberattacks.
Build IT security into the foundations
As cyberattacks become faster and more convincing, your defences need to be rooted in the very architecture of your systems. The best way to do that is by creating clarity, structure and an organisation that knows how to work safely with technology and AI.
It starts with visibility.
Many companies already have AI (and a range of other tools) embedded in their systems - often without anyone having fully mapped where, how and with what permissions. You can’t protect what you can’t see. The first step is understanding your own landscape: Which AI components have access to which data? Who is using external tools? And where does information flow across platforms?
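The mapping exercise above can start as something very simple. Here is a minimal sketch of an AI-integration audit; the inventory format, scope names and "sensitive" categories are illustrative assumptions, not a standard:

```python
# A minimal sketch of auditing an inventory of AI integrations.
# The data format, scope names and sensitivity list are hypothetical;
# adapt them to how access is actually described in your own systems.

AI_INVENTORY = [
    {"name": "support-chatbot", "scopes": ["customer_tickets"], "owner": "cs-team"},
    {"name": "code-assistant", "scopes": ["source_repo", "ci_secrets"], "owner": None},
    {"name": "sales-agent", "scopes": ["crm", "email", "finance_db"], "owner": "sales"},
]

SENSITIVE_SCOPES = {"ci_secrets", "finance_db", "customer_pii"}

def audit(inventory):
    """Flag integrations with no clear owner or with access to sensitive data."""
    findings = []
    for item in inventory:
        if item["owner"] is None:
            findings.append(f"{item['name']}: no registered owner")
        risky = SENSITIVE_SCOPES.intersection(item["scopes"])
        if risky:
            findings.append(f"{item['name']}: touches sensitive scopes {sorted(risky)}")
    return findings

for finding in audit(AI_INVENTORY):
    print("REVIEW:", finding)
```

Even a spreadsheet-level version of this answers the key question: you can't protect what you can't see.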
Next comes identity security.
Hackers today target people far more often than purely technical vulnerabilities. That means thoughtful access design, better monitoring of login behaviour and keeping the number of “superusers” to a minimum can make a bigger difference than layering on yet another security product. This is also where employee training is crucial: recognising AI-generated phishing, understanding how deepfakes can be used, and knowing the difference between a quick request and a suspicious one. It’s about making it easy for employees to do the right thing and hard for everyone else to break through.
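Monitoring login behaviour can be as basic as requiring step-up verification when a login doesn't match what you've seen before. A minimal sketch, where the "known locations" set and the step-up response are illustrative assumptions:

```python
# A minimal sketch of flagging logins from unfamiliar user/country pairs.
# The known_locations set and the responses are illustrative assumptions;
# real systems would also look at device, time of day and network.

known_locations = {("alice", "DK"), ("alice", "SE"), ("bob", "DK")}

def evaluate_login(user: str, country: str) -> str:
    """Require extra verification when a login comes from an unfamiliar place."""
    if (user, country) in known_locations:
        return "allow"
    return "step-up"  # e.g. prompt for MFA and alert the security team

print(evaluate_login("alice", "DK"))  # → allow
print(evaluate_login("bob", "RU"))   # → step-up
```

The point is not the specific check but the default: unusual behaviour should cost the attacker an extra hurdle, not the employee.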
Then there’s the design of the solutions themselves.
When AI becomes part of the architecture, security has to follow. That means clear boundaries for what AI components are allowed to do, how they’re integrated and how their output is validated. An agent doesn’t need access to your entire system just because it’s technically possible. And AI-generated code should be reviewed as thoroughly as if any other developer had written it. You need razor-sharp oversight to ensure the code meets both internal and external security requirements.
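The "clear boundaries" idea above is essentially least privilege applied to agents: deny by default, and approve tools explicitly. A minimal sketch, with hypothetical agent and tool names:

```python
# A minimal sketch of least-privilege tool access for AI agents.
# Agent names, tool names and the policy mapping are illustrative assumptions.

ALLOWED_TOOLS = {
    "support-agent": {"search_kb", "create_ticket"},  # no DB or shell access
    "report-agent": {"read_sales_db"},
}

def call_tool(agent: str, tool: str) -> str:
    """Deny by default: an agent may only invoke explicitly approved tools."""
    if tool not in ALLOWED_TOOLS.get(agent, set()):
        raise PermissionError(f"{agent} is not approved to call {tool}")
    # ...dispatch to the real tool implementation here...
    return f"{tool} executed"

print(call_tool("support-agent", "create_ticket"))  # approved, runs normally
```

The design choice is that the policy lives outside the agent: even a manipulated or jailbroken model can only reach the tools someone consciously granted it.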
AI can also be used for defence.
Just as attackers use AI to increase the speed and scale of their operations, you can use it to match that pace on the defensive side. AI can monitor systems in real time, detect anomalies and respond faster than a human analyst. It can help isolate systems when something goes wrong - even predict where attacks are most likely to strike. So it’s worth considering whether you should use AI to fight AI.
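Anomaly detection of the kind described above can start from simple statistics before any AI is involved. A minimal sketch that flags a sudden spike against a historical baseline; the threshold and data are illustrative, and real deployments combine many richer signals:

```python
# A minimal sketch of statistical anomaly detection on activity volumes.
# The threshold and sample data are illustrative assumptions.
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag a value more than `threshold` standard deviations above the baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return (current - mu) / sigma > threshold

hourly_logins = [12, 15, 11, 14, 13, 12, 16, 14]  # normal baseline
print(is_anomalous(hourly_logins, 15))   # → False (within normal range)
print(is_anomalous(hourly_logins, 120))  # → True (spike worth investigating)
```

AI-based defences extend the same principle: learn what "normal" looks like across far more dimensions, and react to deviations faster than a human analyst could.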
And finally, the organisational layer.
AI (and tech in general) works best in an organisation where everyone knows the rules: which tools are approved, what can be shared, and where the line is when customer data is involved. That clarity leads to safer solutions and more room for innovation because no one has to guess what’s allowed.
The best defence against AI-enabled cyberattacks is fundamentally about building greater security awareness and baking it into the architecture from the moment you design your IT systems and digital solutions.
There’s nothing revolutionary about that - it has long been our recommendation for good cybersecurity practice. AI has simply made it more relevant and more urgent because attackers have suddenly gained more sophisticated, harder-to-detect methods. The threat level has risen. And keeping attackers out now requires solid preparation and strong internal training.
So where does that leave us?
The attack involving Claude wasn’t an isolated incident. It was a marker of a new reality in cybersecurity. One where attacks are carried out with the very same technologies we use to create value.
Overall, AI doesn’t change the game itself, but it changes the speed and precision of those trying to deceive us. That’s why we need an updated version of the discipline we’ve always relied on in IT security: common sense, solid structures and a sharp eye on the places where data and people intersect.
Once we get control of the landscape, the access points, the architecture and the internal training, we stand much stronger. Not just against AI-driven attacks, but against the full spectrum of modern cyberthreats.
And maybe that’s the key takeaway: AI makes attackers better, but it makes us better too, if we use it wisely. We still decide how our digital business is built, and how difficult it should be to break in.
The future will be faster, more automated and more inventive. Our security needs to match that. Thoughtful, ever-present and as well-built as the rest of our digital storefront.