When you purchase through links on our site, we may earn an affiliate commission. This doesn’t affect our editorial independence.
AI agents were once merely a futuristic concept. But today, they are present and rapidly evolving. They are transforming industries and completely changing cyber threats.
Agentic AI, also called Computer-Using Agents (CUAs), can be engaged for various purposes. They can use apps, browse the web, and execute complex tasks with almost no human input. They automate workflows and raise efficiency. This helps in saving time and enhancing decision-making across countless sectors.
Agentic AI can be incredibly useful in the right hands, but it can be weaponised when it falls into the wrong hands. They can facilitate advanced cyberattacks. Some are capable of exploiting human behaviour patterns to breach systems and steal sensitive data.
From Sci-Fi to Reality: Agentic AI in Action
These agents can automate tasks that once required significant human effort. Things like credential stuffing, reconnaissance, and even full-blown cyberattacks.
It’s a highly useful tool for malicious actors. It allows even amateur attackers to launch high-impact attacks. You can now execute operations in minutes that once took days.
The Power of Modern Agentic AI
Tech giants like OpenAI, Google, Anthropic, and Meta are investing in advancing AI agent capabilities. All of their Agentic AI models share one critical feature: the ability to execute real-world actions from simple text prompts.
This is a double-edged sword. In responsible hands, it’s a productivity powerhouse. But it can be weaponized in the hands of malicious actors. A novice hacker armed with AI can suddenly operate like a seasoned cybercriminal.
Right now, widespread abuse of AI agents isn’t the norm, but that won’t last long. Their simplicity and accessibility make them perfect for supercharging social engineering attacks.
3 Ways AI Agents Are Already Being Weaponised
1. Automating Reconnaissance at Scale
Researchers tested whether AI agents could gather intel for targeted attacks. Using OpenAI’s Operator (which has a built-in browser and autonomous behaviour), they gave it a simple task: “Find new employees at [Company X].”
Within minutes, the agent:
- Scanned LinkedIn.
- Analysed recent posts and profile updates.
- Compiled a list of new hires including names, roles and start dates.
This is the kind of data that fuels highly personalised phishing scams. What once took hours of manual research now happens instantly—and at scale.
The consequence is that even common actions like posting job updates on LinkedIn can expose an individual or an organization to serious risk.
2. Facilitate Credential Stuffing Attacks
Credential stuffing, using leaked passwords to break into accounts, is a classic form of attack. But AI agents can automate it with terrifying efficiency.
In a certain test, an Agentic AI was prompted with “Target email addresses and generate a list of breached passwords.”
The Agentic AI generated a list of emails and passwords and successfully logged into a few of the accounts. This shows how AI can bypass traditional defences by exploiting the weakest link in security: human error.
3. Escalating Social Engineering Attacks
Agentic AI can be used to craft convincing phishing messages, mimic human behaviour, and exploit psychological triggers. This can translate to more effective scams that are hard to detect.
Check Out Our Previous Posts:
Cybersecurity Enters a New Era with AI
Best AI tools for productivity
How to Link ChatGPT Agent with External APIs for Business Applications
The Urgent Need for a Human-Centric Security Shift
Although their capabilities are still evolving, the potential for large-scale, automated cyber warfare using Agentic AI is real and imminent.
This demands a prompt recalibration in cybersecurity strategy. Historically, defences focused on protecting systems, not people. But traditional methods like annual training sessions for staff aren’t enough.
What’s Needed?
- Proactive, Real-Time Human Risk Management.
- Behavioural monitoring to detect risky actions.
- Phishing-resistant solutions like Passkeys and Multi-factor Authentication (MFA).
- AI-driven threat detection that adapts to new attack patterns.
Agentic AI is an incredibly useful tool; however, it can enable cybercrime at an unprecedented scale. As they evolve and become more capable, the cybersecurity world must consider developing new and creative defences to protect people.
Bad actors will certainly devise novel means to launch attacks. For impregnable and assured cybersecurity, the key strategy must go beyond merely applying firewalls and encryption. It will be about understanding human behaviour and anticipating risks. It is essential to stay one step ahead.