Phishing With Sniper Rifles
I remember seeing Forrest Gump in the theaters and telling myself I'm never going to know what's real or not on-screen. That feather and the "cameos". How long ago was that? Not sure I want to know.
I'm starting to wonder where that line of make-believe is today.
In just one week I've been a direct witness to three scams, and in each one the involvement of AI is hard to ignore.
419 Problems
In the early days of the internet, back when you installed AOL on a computer from a CD-ROM distributed inside a magazine delivered to your mailbox (three things a couple of generations won't know first-hand, if at all), not everyone had email. That changed, of course, gradually ramping up to the point that email is just as much a necessity as a birth certificate or a Social Security Number.
And just like a new neighborhood in a growing city, crime inevitably arrived on the scene.
For about twenty years, phishing meant email, and I watched most of it happen from inside IT. Facepalming the whole time.
The canonical phishing email was unmistakable: "Dear Valued Customer," written in the approximate English of someone who learned the language from a dictionary, sent from an address that was almost but not quite the bank's domain. Urgent, slightly threatening, and just wrong enough to spot if you were paying even a little attention. Security training across two decades taught people to look for those signals, and that training worked for the threat it was built to counter. The grammar was the canary.
Ironically, between scammers and endless marketing campaigns, the average person's inbox became a sea of hooks waiting for a bite. Most people stopped checking their messages unless they were expecting something specific. Other than hooks, of course.
They're gonna need a bigger hook.
Who Ya Gonna Call?
Phone scams predate the internet by decades, but the one that set the modern template was the tech support call. "This is Microsoft support calling about a virus detected on your computer." You probably know someone who got that call. You might know someone who followed the instructions.
The play was simple: create urgency around something the target doesn't fully understand, then walk them through granting remote access to their own machine. "Just go to this website, download this tool, and click Allow." Once they're in, it's over. They can install whatever they want, copy whatever they find, and lock whatever they choose. The victim opened the door, handed over the keys, and thanked them for their help.
These calls targeted seniors overwhelmingly, not because seniors are less intelligent, but because they grew up in a world where a phone call from a company meant the company was actually calling. Trust in the telephone was earned over fifty years of it being trustworthy. Scammers didn't exploit stupidity; they exploited a generational contract that no longer applies.
The calls were human. A person in a call center, working a script, dialing numbers. The economics limited the scale; you could only scam as many people as you had callers.
Just last week, someone I know got a phone call. Similar setup: a caller representing a financial institution, something that needed urgent attention, a resolution that made sense in the moment. He followed it. I don't know every detail, and I won't speculate on what he lost, because what he lost is real and isn't mine to characterize. What I know is that the call was convincing enough that a reasonable person acted on it, and the consequences were real.
He told me via text. You can probably guess, but don't let him know I had no choice but to facepalm when I read it.
SMS Sending an SOS
How many of you have received text messages that only say "Hi"?
Or "Hey, are you free this afternoon?" from a number you don't recognize. Or the classic: a USPS tracking notification for a package you never ordered, with a link you're supposed to click to "reschedule delivery." I've gotten three toll violation notices this month from roads I've never driven on. My favorite was a text telling me my FasTrak account was suspended, which was easily checked online.
Text message phishing has exploded over the last few years and now makes up a significant chunk of all phishing attempts. The mechanics follow the same economics as email: high volume, low craft, getting smarter the way email phishing did before it. You can catch most of it the same way you always could.
What changed is the follow-through.
The random "Hi" isn't the scam. It's the probe. They're looking for anyone who responds, because a response means a live number with a person willing to engage. That's a qualified lead. From there it might be a romance scam, a crypto pitch, or just harvesting your number for the next phase. The "Hi" costs nothing to send and the return on even a small response rate makes it worth blasting to millions.
People caught on. Apple even added a "Report Junk" option to Messages.
Return To Sender
Enter AI. Large language models (the tech behind ChatGPT and tools like it) are getting genuinely good. The output is grammatically flawless. It can match a specific person's writing style from years of public emails, produce a vendor invoice follow-up indistinguishable from the real thing, or write a phishing campaign that would have taken a human team an entire day in about five minutes. IBM tested that in 2024: five prompts, five minutes, done.
Remember the canary? It's dead. The majority of phishing emails now contain AI-generated content. These aren't broken English with a suspicious link anymore. They read like a message from someone you work with, referencing a project you're actually on, following up on an invoice you're actually expecting. Because the AI scraped your LinkedIn, your company's website, and whatever else you've made publicly available before it wrote a single word.
When building a convincing email goes from craft to commodity, the math changes completely. You don't target ten people and hope for the best. You target ten thousand, each with a personalized message. It got bad enough that the FBI gave AI its own section in the IC3 annual report for the first time. That's not a footnote; that's a new chapter.
And that's just email. The same technology that writes a perfect phishing message can also speak one.
Stealing What Can't Be Taken
I almost kept walking. Someone I know was on their phone, on speakerphone so they could use the computer, which is not unusual. They were having what sounded like a bank conversation, which is also not unusual. Turns out their account had been compromised to the tune of five figures. Yikes!
What caught my attention was a specific kind of wrongness: the caller was giving incorrect instructions. The kind of thing a person at a bank shouldn't screw up. And the resolution path, what the caller was trying to walk them through, set off alarm bells.
"Go to Zelle. You're going to notify support, so address it to the security team. Type this number in the account field. Now go to the amount field and enter the amount you're disputing..."
No. Absolutely not.
It was unsolicited, but I couldn't stand idly by while it happened, so I exclaimed, "There's no way on Earth I would be doing that."
I even heard the caller ask, "What was that?" My friend agreed, thanked the caller for their time, and hung up. That was rapidly followed by a drive to the bank, where staff confirmed what I'd suspected: the call was fraud.
No money was lost, but it was only a couple of keystrokes and a mouse click away. My friend did lose something, however... ownership of their voice. That call was more than likely recording their responses, and those recordings now feed a newer type of attack: with enough voice samples and some social engineering across the internet and dark web, actors can identify contacts and call them with a cloned voice. It's been done, and it gets worse.
Having a third party helped this time. Does it always?
Modern Day Real-Time Forrest Gump
Remember that feather?
In February 2024, a finance employee in Arup's Hong Kong office received what appeared to be a message from the company's CFO requesting confidential financial transactions. He suspected phishing.
He joined a video call to verify. On the call, he saw the CFO. He saw colleagues he recognized. They confirmed the request. He made fifteen wire transfers totaling $25 million to five Hong Kong bank accounts.
Every person on that call was AI-generated. The CFO was a deepfake constructed from publicly available footage with a cloned voice to match. The colleagues were fabricated. Arup acknowledged the incident publicly in May 2024.
This guy didn't fail to verify. He verified, and the attack had anticipated that exact move and compromised the verification process. He was more careful than most people would have been, and it didn't matter.
The system is evolving to the point that it can subvert caution. While it's a safe bet there was an insider in this case, it's still terrifying; the tech required to pull it off is insane.
Is there anywhere you can call safe?
Me Too
Then it happened to me. Didn’t see it coming at first, but I caught it fairly quickly. It's easy enough to spot the hookup invites on LinkedIn (I mean, I guess it's work, right?), but I wasn't expecting it when a recruiter reached out to me about a position. It sounded too good to be true: remote or on-prem, part-time or full-time. A company that was looking to consult with industry experts but no hands-on-keyboard required. Consulting. For an AI company. With no AI knowledge required.
My curiosity was piqued, so I replied. The response: blah blah blah blah blah, what's your Signal? LOL. Say WHAT? I responded that Signal seemed a bit out of the ordinary and gave them my phone number instead.
This time the response was something to the effect of "here are links to download and install Signal; my client has limited availability". That was followed just a couple of hours later with "I await your response". On a Friday afternoon.
The job was suspicious, the disappearing-message platform for communications raised some flags, and the urgency cemented my suspicion. I replied "you won't get one" and that was the end of it.
Common Threads
Across all of these playbooks, urgency is how these scammers work, and with AI handling the social engineering, layering in fake audio and even video, that urgency can be heightened and doubts suppressed.
In one week I was targeted, as were two people in my orbit. One was compromised, one almost, and I dodged a bullet entirely.
The people executing these scams are spending a lot of money on AI, and my guess is that it’s paying returns. AI is the new battleground on many fronts. AI doesn’t get tired. AI doesn’t discriminate. It does what it’s told (mostly; if you know you know) and the crime that only used to haunt your Hotmail inbox is now attacking you on more fronts than you may even realize.
We were trained to spot mistakes. Some signals are still there, but the mistakes have changed. Spelling errors became real-time spoken hallucinations, and yet most people can't even spot written ones.
I mean, have you heard what people have done with AI and smart glasses?