AI is useful. It cuts costs, opens up new opportunities, gives teams a competitive edge, and saves people time. The problem is that the same capabilities making AI valuable to defenders make it just as valuable to attackers, and that side of the story doesn't get enough attention. This post covers what we're seeing on the attacker side, and what to actually do about it.

For years we've worried about hackers getting smarter. Now the old saying "work smarter, not harder" is taking on new meaning thanks to AI. It's no longer just a powerful tool for workplace automation and defense. Hackers are using it to make their attacks faster, more convincing, and harder to detect. In the incidents we investigate, AI is now the single biggest accelerant we see on the attacker side.

Sophisticated phishing

Phishing has evolved into a multi-step, AI-driven attack. Hackers can now generate highly convincing emails, messages, and even deep-fake audio and video files designed to trick you into handing over sensitive information.

  • That "login request" from your bank, payroll provider, or tax advisor? Fake.
  • A video message from your CEO asking for a wire transfer? Fake.

AI makes these look frighteningly real. The best approach is: verify, verify, verify.

  • Double-check the sender's email address.
  • Pick up the phone and call the person or company directly.
  • Never click links in unsolicited emails. Always go to the trusted website you know.
  • Don't rely on spotting misspelled words and grammar issues. AI doesn't make these human errors; AI-generated messages are usually flawless.
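The "double-check the sender" step can even be partly automated. Below is a minimal sketch (the function name, the trusted-domain list, and the sample addresses are all hypothetical) of flagging a `From:` header whose real address doesn't match the domains you trust:

```python
from email.utils import parseaddr

def sender_looks_suspicious(from_header, trusted_domains):
    """Flag a From: header whose actual address falls outside the
    trusted domains, regardless of what the display name claims."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return domain not in trusted_domains

trusted = {"examplebank.com"}
# Display name says "Example Bank", but the real domain uses a digit 1
print(sender_looks_suspicious('"Example Bank Support" <alerts@examp1ebank.net>', trusted))  # True
print(sender_looks_suspicious('"Example Bank" <alerts@examplebank.com>', trusted))          # False
```

A filter like this catches the common trick of a legitimate-looking display name paired with a throwaway or lookalike address, which is exactly where a quick visual check tends to fail.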

Hackers are building login pages that look exactly like the real thing. The moment you type in your credentials, they own them. From there, attackers may sit quietly inside your systems for months, studying your processes, operations, trusted vendor relationships, and client interactions. They case the environment, identify profit opportunities, and strike when the payoff is greatest.
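Those cloned login pages usually live on lookalike domains, one character away from the real one. A minimal sketch (function names and sample domains are hypothetical) of spotting a near-miss domain with a plain edit-distance check:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def lookalike_of(domain, known_good, max_dist=2):
    """Return the trusted domain this one closely imitates (but isn't), if any."""
    domain = domain.lower()
    for good in known_good:
        if domain != good and edit_distance(domain, good) <= max_dist:
            return good
    return None

print(lookalike_of("examp1ebank.com", {"examplebank.com"}))  # examplebank.com (digit 1 for letter l)
print(lookalike_of("examplebank.com", {"examplebank.com"}))  # None (exact match, not an imitation)
```

Real brand-protection tooling adds homoglyph tables and keyboard-adjacency rules, but even this crude distance check illustrates why "one character off" domains are so effective against the human eye.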

Automated attacks

AI can scan networks, find vulnerabilities, and launch ransomware campaigns without a hacker lifting a finger. This automation gives attackers instant reconnaissance and a faster path to payday. Forget the days of a hooded actor overseas watching and waiting at their keyboard. The new "hooded actor" is AI itself, running automation at a scale no human team can match.
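To make "instant reconnaissance" concrete, here is a minimal, deliberately simplified sketch of automated port scanning (host, port range, and worker count are arbitrary illustration values, not a real attack tool):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def port_open(host, port, timeout=0.5):
    """Attempt a TCP connection; an accepted connection means the port is open."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host, ports):
    # Concurrency is what makes automated reconnaissance fast: hundreds of
    # probes complete in the time a human would take to try one by hand.
    with ThreadPoolExecutor(max_workers=100) as pool:
        results = pool.map(lambda p: (p, port_open(host, p)), ports)
    return [p for p, is_open in results if is_open]

print(scan("127.0.0.1", range(1, 1025)))  # open ports on this machine, if any
```

The point isn't the scanner itself (tools like this have existed for decades); it's that AI can now chain the scan, the vulnerability lookup, and the exploit selection together without a human in the loop.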

AI defense and detection evasion

AI has already been forensically documented stealthily evading common cyber detection capabilities. Traditional threat detection tools (antivirus, endpoint detection, firewall monitoring) rely on historical known hacking signatures and observed file behaviors, and all depend on machine learning components. When those components are analyzed with the proper AI tooling, they reveal hacking opportunities to sophisticated actors. Yes, AI is being used as a surveying tool to "size up" what you're monitoring, what tools you're monitoring with, and what you're not.

AI can trick, or even collaborate with, the very AI systems designed to protect you. Imagine attackers weaponizing your own defenses against you: reversing a system's purpose so it opens the door wider instead of closing it, or using it to "muddy the waters." If AI is poisoning the victim's environment, the organization's threat detection accuracy is compromised.

Hackers are using AI to perform "fast gradient-based" evasion techniques, crafting inputs that slip past the default detection capabilities of antivirus and other scanning software. Once they gain a foothold, they deploy adaptive malware powered by AI. This is creating havoc for cybersecurity detection teams and engineers: it's hard to flag behavior and traffic that has been expertly masked to look exactly like legitimate, normal system activity.
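The "fast gradient-based" idea comes from adversarial machine learning, best known as the Fast Gradient Sign Method (FGSM). The toy sketch below (the linear "detector," its weights, and the sample are all invented for illustration) shows the core trick: nudge each feature of a flagged sample in the direction that lowers the detector's score, bounded by a small epsilon per feature:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM step: move each feature by eps in the sign of the given gradient."""
    return x + eps * np.sign(grad)

# Toy linear "detector": score = w . x, flags malicious when score > 0.
w = np.array([0.8, -0.3, 0.5])
x = np.array([1.0, 0.2, 0.6])   # a sample the detector currently flags
score = float(w @ x)

# For a linear model the gradient of the score w.r.t. x is just w;
# an evader perturbs *against* it to push the score below the threshold.
x_adv = fgsm_perturb(x, -w, eps=0.9)
adv_score = float(w @ x_adv)

print(score, adv_score)  # the perturbed sample now scores below the detector's threshold
```

Real detectors are far more complex than a dot product, but the principle scales: if an attacker can query or approximate the model's gradients, small, carefully chosen changes can flip a "malicious" verdict while the underlying payload behaves the same.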

How to protect yourself

The best defense starts with people.

  • Continuous training. Cyber threats evolve daily, so training your team cannot be a one-time event.
  • Make it safe to report. Encourage employees to speak up about anything suspicious. Too often people stay silent out of fear they've made a mistake. Anyone can fall victim to a well-crafted attack.
  • Govern your organization. Develop policies that strictly limit which AI tools can be used. Ensure that associates are trained on what can and cannot be presented to AI agents, to avoid leaking sensitive information.

With AI-powered threats, awareness and vigilance are your strongest tools. Technology catches a lot, but the people who know what to look for catch the rest.