Phishing Simulations and Security Culture Building
We’ve been running phishing simulations for three years. Once a month, our security team sends fake phishing emails to random groups of employees to see who clicks suspicious links or submits credentials to fake login pages.
The click-through rate has gone from about 25% in the first campaign to around 8% now. That sounds like success, and in some ways it is. But what I’ve learned is that phishing simulations are just a small part of security culture, and you can actually undermine security culture if you run them badly.
Why We Started
The trigger was an incident where an employee’s credentials were compromised through a phishing attack. The attacker sent an email that looked like it came from IT requesting password verification. The employee clicked the link, entered their credentials, and within 30 minutes the attacker was in our systems trying to access customer data.
We caught it quickly because we had monitoring in place that flagged unusual login patterns. But it was close. If the attacker had been slightly more sophisticated about covering their tracks, we might not have noticed until real damage was done.
The post-incident review concluded that we needed better security awareness. Phishing simulations were the obvious solution. Test employees regularly, provide training when they fail, gradually improve awareness over time.
The Early Campaigns
We started with relatively obvious phishing emails. Suspicious sender addresses, generic greetings, links to domains that weren’t our company’s domain. About 25% of recipients clicked through anyway.
Some people just click everything. They’re trained to be helpful and responsive. An email that says “urgent: review this document” triggers an automatic click response. They don’t pause to check the sender or hover over links.
Other people clicked because they were busy and not paying attention. Asked about it afterward, they could tell the email was suspicious. But in the moment, they were in the middle of something else and clicked without thinking.
A smaller group fell for more sophisticated social engineering. We sent an email that appeared to come from HR about updated benefits information during the benefits enrollment period. That one got clicks from people who normally would have been more careful.
The Training Response
When someone fails a phishing test, they get an automatic message explaining that it was a simulation and providing tips for identifying phishing. Then they’re required to complete a short training module.
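The mechanics of that workflow are simple to automate. Here's a minimal sketch; the helper functions, platform event shape, and module name are placeholders for whatever simulation platform and training system you use, not any real vendor API:

```python
# Minimal sketch of the post-click workflow: immediate feedback,
# then a required training module. send_email and assign_module
# are placeholder stubs, not a real vendor API.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ClickEvent:
    employee_email: str
    campaign_id: str
    clicked_at: datetime

FEEDBACK = (
    "This was a simulated phishing email from the security team.\n"
    "Tips: check the sender's domain, hover over links before\n"
    "clicking, and be wary of urgency ('verify immediately')."
)

def send_email(to: str, subject: str, body: str) -> None:
    print(f"mail -> {to}: {subject}")          # stand-in for a real mailer

def assign_module(employee: str, module: str, due_days: int) -> None:
    print(f"assigned {module} to {employee}")  # stand-in for the training system

def handle_click(event: ClickEvent) -> None:
    """Send feedback immediately, while the mistake is still fresh,
    then assign the short follow-up module."""
    send_email(event.employee_email, "That was a phishing simulation", FEEDBACK)
    assign_module(event.employee_email, "phishing-awareness-basics", due_days=7)
```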
This works for some people. Immediate feedback and training right after a mistake are more effective than generic annual security training. They remember the embarrassment and pay more attention next time.
But it also creates resentment. Some people feel tricked or embarrassed. They argue that the security team is wasting their time with “gotcha” exercises instead of helping them do their jobs. This is especially true when simulations use stressful scenarios like “your account will be locked unless you verify immediately.”
The resentment is a problem because it undermines what we’re actually trying to build: a culture where people feel comfortable reporting suspicious emails and asking questions. If people see security as adversarial, they won’t engage.
The Sophisticated Simulations
As click-through rates dropped, we made the simulations harder. We spoofed sender addresses from real companies so the domain looked legitimate. We crafted messages referencing actual projects. We timed campaigns to coincide with real company events.
These sophisticated simulations got clicks from people who had been careful about obvious phishing. And they taught us something important: with enough effort, you can trick almost anyone. Social engineering works. Even security-aware people will click given the right circumstances.
But I’m not convinced this is valuable. The point isn’t to prove we can trick people. The point is to build security awareness. Running simulations that are intentionally hard to detect just makes people anxious without teaching them much.
What Actually Improves Security
After three years, here’s what I think actually matters for security culture:
Making it easy to report suspicious emails is more important than testing whether people click them. We have a button in Outlook that forwards suspicious emails to the security team with one click. We get about 50 reports per month. Most are legitimate emails that someone wasn’t sure about, but we’d rather have false positives than have people not report genuine threats.
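The button itself is just a one-click forward to a shared security mailbox. Here's a sketch of how triage on that mailbox might look, using only the standard library; the domain allowlist and the verdict rules are illustrative, not our production logic:

```python
# Sketch of triaging reports forwarded to the shared security mailbox.
# Standard library only; the allowlist and verdict rules are
# illustrative, not production logic.
from email import message_from_bytes
from email.utils import parseaddr

KNOWN_DOMAINS = {"example.com", "example-benefits.com"}  # hypothetical

def triage(raw: bytes) -> str:
    msg = message_from_bytes(raw)
    _, sender = parseaddr(msg.get("From", ""))
    domain = sender.rsplit("@", 1)[-1].lower() if "@" in sender else ""
    auth = msg.get("Authentication-Results", "")  # RFC 8601 header

    if domain in KNOWN_DOMAINS and "dmarc=pass" in auth:
        return "likely-legitimate"  # still reply and thank the reporter
    if "dmarc=fail" in auth or "spf=fail" in auth:
        return "likely-phish"       # escalate and hunt for other copies
    return "needs-analyst"          # a human makes the call within the hour
```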
Responding to reports quickly builds trust. When someone reports a suspicious email, they get a response within an hour saying whether it’s legitimate or a threat. This reinforces that reporting is valued and security is responsive, not just punitive.
Celebrating people who report actual phishing attempts is more motivating than shaming people who click simulations. We announce in our all-hands meetings when someone reports a real phishing attempt and it gets blocked before spreading. This creates positive association with security awareness.
Working with AI consultants in Melbourne on an email filtering project, we learned that technology can catch most phishing attempts before they reach employees. Better email security controls reduce the number of phishing emails that make it through in the first place. That’s more effective than relying on employees to be perfect.
Training needs to be practical and specific. Generic “be careful about phishing” messages don’t work. But teaching people to check sender domains, hover over links before clicking, and watch for urgency tactics does work.
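Those same three checks can be mechanized, which is a useful way to make the training concrete. A toy scorer, standard library only; the keywords, weights, and example domain are invented for illustration, not a production filter:

```python
# Toy scorer for the three habits we teach: sender domain, link
# targets, urgency language. Keywords, weights, and the example
# domain are invented for illustration.
import re
from urllib.parse import urlparse

URGENCY = re.compile(r"\b(urgent|immediately|verify now|locked)\b", re.I)
OUR_DOMAIN = "example.com"  # hypothetical company domain

def internal(host: str) -> bool:
    return host == OUR_DOMAIN or host.endswith("." + OUR_DOMAIN)

def suspicion_score(sender: str, body: str) -> int:
    score = 0
    _, _, sender_domain = sender.rpartition("@")
    if not internal(sender_domain.lower()):
        score += 1  # external sender claiming internal business
    for url in re.findall(r"https?://\S+", body):
        if not internal((urlparse(url).hostname or "").lower()):
            score += 1  # link points somewhere other than our domain
    if URGENCY.search(body):
        score += 1  # urgency tactics: "act now or else"
    return score  # 0 is probably fine; 2 or more means report it

print(suspicion_score(
    "it-support@examp1e.com",
    "URGENT: your account will be locked. Verify now: http://examp1e.com/login",
))  # prints 3
```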
The Gaming Problem
Some employees have figured out patterns in our phishing simulations and optimized for not clicking anything suspicious. This seems like success until you realize they're treating it like a game rather than actually learning a security mindset.
We had one person who stopped clicking any links in emails entirely. They would visit websites by typing URLs manually rather than clicking email links. This technically protects against phishing, but it’s not practical long-term. They’re treating email as adversarial rather than learning to distinguish legitimate from malicious.
Others have developed overly paranoid behaviors. They report legitimate company emails as suspicious because they’re trying to avoid failing simulations. This creates noise for the security team and erodes their confidence in making judgment calls.
The goal should be good judgment, not zero risk tolerance. Employees need to be able to identify suspicious emails while still using email normally for actual work.
When Simulations Backfire
The worst outcome from phishing simulations is when they undermine trust in legitimate company communications. We had an incident where IT sent an email about a required password reset. Multiple people thought it was a phishing simulation because it asked them to click a link and change their password.
IT had to follow up with phone calls to convince people it was legitimate. The security team’s simulation program created skepticism about actual IT communications. That’s counterproductive.
We’ve had to coordinate better between IT and security to avoid this. Legitimate emails that involve authentication or unusual requests now get flagged with a banner that explicitly says “this is not a phishing test.” This helps, but it’s extra complexity we have to manage.
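One way to implement that banner is to stamp legitimate internal automated mail before it leaves the relay, using a header the simulation platform never sets. A sketch with the standard library; the header name and banner text are our own convention, not a standard:

```python
# Sketch of stamping legitimate IT mail at the internal relay. The
# header name and banner text are our own convention, not a standard;
# the simulation platform never sets this header.
from email.message import EmailMessage

BANNER = "[IT NOTICE] This is a real message from IT, not a phishing test.\n\n"

def stamp_legitimate(msg: EmailMessage) -> EmailMessage:
    msg["X-Internal-Verified"] = "it-operations"
    body = msg.get_content()
    msg.clear_content()            # required before setting content again
    msg.set_content(BANNER + body)
    return msg

msg = EmailMessage()
msg["From"] = "it@example.com"     # hypothetical addresses
msg["To"] = "staff@example.com"
msg["Subject"] = "Required password reset"
msg.set_content("Reset your password at https://sso.example.com/reset")
print(stamp_legitimate(msg)["X-Internal-Verified"])  # it-operations
```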
The Diminishing Returns Question
At some point, continued phishing simulations show diminishing returns. We're at an 8% click-through rate. Getting that to 5% or 3% would require either running more frequent simulations or making them even more sophisticated. Both approaches have costs.
Running monthly simulations already generates some resentment. Increasing frequency would make it worse. Making simulations more sophisticated means we’re just proving that social engineering works, not teaching practical skills.
I think we’re at the point where the value of continued simulations is marginal. The people who still click are either not paying attention (which more simulations won’t fix) or being fooled by genuinely sophisticated attacks (which means we need better email filtering, not more training).
What I’d Do Differently
If I were starting over, I’d focus less on testing and more on building culture. Phishing simulations should be one tool among many, not the primary security awareness program.
I’d run simulations quarterly instead of monthly. Frequent enough to maintain awareness, infrequent enough not to be annoying. I’d keep them moderately difficult but not intentionally tricky. The goal is education, not proving we can fool people.
I’d invest more in making reporting easy and rewarding people who report threats. Building a culture where security concerns are taken seriously matters more than individual test performance.
And I’d focus more on technical controls. Email filtering, multi-factor authentication, zero-trust architecture. These protect against phishing even when employees make mistakes. Relying on human perfection is not a good security strategy.
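Email authentication is the cheapest of those controls to verify. A sketch that checks whether a sender domain publishes an enforcing DMARC policy, assuming the third-party dnspython package; mail failing DMARC from a p=reject domain can be dropped before any employee has to exercise judgment:

```python
# Check whether a domain publishes an enforcing DMARC policy.
# Assumes the third-party dnspython package (pip install dnspython).
import dns.resolver

def dmarc_policy(domain: str) -> str:
    """Return the p= policy tag ('none', 'quarantine', 'reject') or 'missing'."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return "missing"
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            for tag in record.split(";"):
                key, _, value = tag.strip().partition("=")
                if key.lower() == "p":
                    return value.lower()
    return "missing"

print(dmarc_policy("example.com"))
```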
The Uncomfortable Truth
Phishing simulations are popular because they’re measurable. You can show executives a graph of click-through rates declining over time. This makes it look like security is improving.
But what are we actually measuring? Whether employees can identify fake emails in low-stakes situations. That’s not the same as whether they’ll recognize real phishing attempts when they’re stressed, busy, or dealing with sophisticated attackers.
The real test of security culture is what happens during an actual incident. Do people report suspicious activity? Do they follow security protocols even when it’s inconvenient? Do they trust the security team enough to ask questions instead of guessing?
Those outcomes are harder to measure, but they matter more than phishing simulation click rates. Security culture is about relationships, trust, and judgment, not just test performance. I wish we had figured that out earlier.