> Would love to hear your war stories on phishing scams, and how you train your teams!
I was working on anti-phishing in 2003, before it had the name phishing. We were trying to teach our users not to fall for the scams.
It didn't work. People will fall for the same scam over and over.
The conclusion we came to was that the only solution to phishing was education, and that getting education to 100% coverage was nearly impossible.
I wish you luck, but don't get discouraged if it doesn't work. We've been trying to educate people about phishing for 17+ years. :)
We shifted our focus to tracking the phishing sites, tying them back to which user accounts had been compromised, then disabling those accounts and notifying the users before damage could be done.
PayPal actually holds the patent on what we built, along with a ton of other anti-phishing and phishing site tracking patents.
The only way to pass the phishing tests at my employer is to never click links in email. But then we also have a number of official systems sending emails with links in them (bug tracking, code review, Zoom invites, HR portal, etc.).
The only way this kind of policy makes sense is if you have to actually give the phishing site some kind of credential in order to fail, vs. merely opening it.
If someone has a Chrome zero-day, we're done anyway. Just post it on HN.
This is my major concern. Heaps of legitimate companies send emails with links to things like 'http://dh380.<third party server>.com'. We're being trained to accept this sort of silliness.
I don't think it's realistic to live in constant fear of browser sandbox escapes, or to consider visiting an arbitrary URL "silliness." If your threat model includes people willing to burn Chrome 0-days on you, you need an air gap.
The much more relevant battle is preventing credential theft, which you can solve completely at the technical level with U2F. And if you can't, user education on "check the URL before typing your password" is a little more realistic than "don't open links from email ever."
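Since U2F/WebAuthn came up, here's a minimal sketch of why it kills credential phishing (the relying party "example.com" and the placeholder variables are assumptions for illustration, not anything from the thread): the browser only signs for the domain the user is actually on, and the server checks the signed origin, so whatever a user "enters" on a lookalike site can't be replayed.

```typescript
// A minimal sketch, assuming a hypothetical relying party "example.com"; the
// placeholders below stand in for data a real server would have issued.
declare const challengeFromServer: Uint8Array;      // random bytes from the server
declare const credentialIdFromServer: Uint8Array;   // credential ID registered earlier

// Browser side: ask the security key to sign the challenge for this origin.
// The browser refuses if rpId doesn't match the domain the page is served from.
const assertion = (await navigator.credentials.get({
  publicKey: {
    challenge: challengeFromServer,
    rpId: "example.com",                             // the legitimate domain
    allowCredentials: [{ type: "public-key", id: credentialIdFromServer }],
    userVerification: "preferred",
  },
})) as PublicKeyCredential;

// Server side (shown inline for brevity): the browser embeds the page's real
// origin in the signed clientDataJSON, so an assertion produced on a lookalike
// domain fails this check -- there's no reusable secret for the phisher to steal.
const response = assertion.response as AuthenticatorAssertionResponse;
const clientData = JSON.parse(new TextDecoder().decode(response.clientDataJSON));
if (clientData.origin !== "https://example.com") {
  throw new Error("assertion signed for the wrong origin; reject the login");
}
```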
While I agree with you, I'm far less concerned for my family/friends/colleagues about a sandbox escape compared to accidentally putting information into a malicious site.
Yes, and "consider the URL and how you got there before typing in your password or credit card" is a lot more realistic than "don't click links." Still, clicking the link fails the phishing test all by itself.
Then I would have gotten fired. That's a ridiculous policy. Do they fire people for making mistakes too?
As a security engineer in a previous life, I always open the links in phishing emails (in an isolated and secure VM). I would fail the tests at work every time, but luckily the person in charge of them knew what I was doing and didn't care.
A better approach is to turn it into a game: reward those who report suspected phishing emails, security breaches, tailgating into secure areas, USB devices left around, etc. and have red teams doing this stuff periodically. Punitive measures don't really work. Friendly competition with rewards does work, though.
In our case we were educating and protecting our customers. It's usually bad policy to take punitive measures against your customers. :)
In fact, the worst offenders were actually rewarded. They were the only ones who had two factor auth for their eBay accounts. Back then we didn't have soft tokens -- the only way to do 2 factor was to get a physical RSA token, which cost about $10 at the time. So only the "best" customers were worth the cost.
The term was coined in the 90s, but didn't get widespread usage until the mid-2000s. So yes, technically it had that name already, but no one used it then.
I'd have to think about it more, but it feels overly complex. You've essentially taken the idea of a DMZ network and put it in an individual computing device.
DMZ networks are hard to get right and hard to admin, and almost always end up getting some sort of exception for certain business needs.
Asking a user to admin that, or having no admin at all, feels almost impossible.