Users Rejecting Security Advice Considered Rational
WeeBit writes "Researchers have different ideas as to why people fail to use security measures. Some feel that regardless of what happens, users will only do the minimum required. Others believe security tasks are rejected because users consider them to be a pain. A third group maintains user education is not working. [Microsoft Research's Cormac] Herley offers a different viewpoint. He contends that user rejection of security advice is based entirely on the economics of the process." Here is Dr. Herley's paper, So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users (PDF).
Some security measures don't seem practical. (Score:5, Interesting)
I have to remember something like 70 passwords as a multiplatform software developer, and some of those hosts have passwords which expire every 30 days, can't repeat for at least a dozen iterations, and must contain at least one numeric, at least one upper-case and one lower-case alpha, and at least one non-alphanumeric symbol.
I understand the reasoning, and if it were only a handful of boxes, or rarely used boxes, I would understand. But I'm logging into 25 or 30 of these machines or applications on a daily basis.
I can use a password manager like KeePass, and it's okay, but I can see how some folks would resort to other means, try to use password patterns, etc.
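If you do go the password-manager route, the generation side is simple enough to sketch. Here's a minimal Python example (not KeePass's actual generator, just the idea) that produces random passwords satisfying the kind of policy described above: at least one lowercase, one uppercase, one digit, and one symbol.

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password satisfying a typical complexity policy:
    at least one lowercase, one uppercase, one digit, one symbol."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = ''.join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

The rejection loop is the lazy way to enforce the policy; since most random 16-character strings already satisfy it, it rarely loops more than once or twice.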
Microsoft Researcher using TeX. (Score:5, Interesting)
They aren't kidding when they say that Microsoft Research is autonomous. I would have assumed that Microsoft would at least make its researchers use MS Word.
Re:Wasted time (Score:4, Interesting)
Ya see, there's no way to make my soundcard work in *nix, from what I, and my friend who damn well *lives* in *nix, can find.
You don't say what kind of card it is, I notice...
There's no way to make my sound card work in Windows. Well, I could download a couple of gigabytes of Windows updates and a driver, and then download a couple of gigabytes of software updates, and eventually I'd have two of the ten channels working. Or, I could just use Linux, where my Delta 1010LT is supported perfectly.
good advice versus bad advice; costs to others (Score:5, Interesting)
The paper is not entirely unreasonable. However, there are at least some holes in it.
It lumps good and bad security advice together. The economic benefit of following bad security advice (e.g., buying antivirus software) is zero or negative, so of course anybody would be rational to ignore such advice. That doesn't mean it should be lumped together with *good* security advice. The paper hypothesizes that people act like the idealized economic free agents beloved of economists: agents with perfect information, acting rationally. But under that hypothesis, people would also have perfect information about which security advice is good and which is bad.
The article doesn't talk about costs to others. People who get their computers owned by a botnet aren't only suffering economic harm themselves, they're inflicting harm on other people. On p. 5 Herley talks about how Wells Fargo limits customers' liability to $50 if they're victims of fraud. That doesn't mean *nobody* pays the cost of the fraud. We all pay those costs, indirectly.
Another problem is that in many cases Herley relies on back-of-the-envelope estimates of the damage caused by security failures. E.g., on p. 2 he estimates the economic costs of a particular exploit. But these estimates aren't based on any actual data. That particular calculation is also kind of stupid, because he says that a user shouldn't spend more than "0.98 seconds" (doesn't he understand significant figures?) protecting against a particular exploit. What his analysis ignores is that there may be hundreds of such exploits out there, and that anything you do that protects against one exploit (e.g., not using a dictionary word as your password) will also help to protect you against all the others. And forgive me if I'm a little skeptical of low-ball estimates originating from MS of the economic damage of computer security failures. That's like trusting GM to estimate the economic effects of global warming.
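To be fair about what's being criticized: Herley's style of calculation is easy to reproduce. Here's the shape of it in a few lines, with *invented* numbers (these are not his figures, just placeholders to show the arithmetic): expected loss per user per year, divided by the value of the user's time, gives a "break-even" effort budget.

```python
# All numbers below are invented for illustration -- not Herley's data.
users = 200e6               # population exposed to some exploit
victims_per_year = 1e6      # users actually exploited per year
damage_per_victim = 100.0   # dollars lost per victim
hourly_wage = 20.0          # assumed value of a user's time, $/hour

# Expected loss per user per year, in dollars
expected_loss = victims_per_year / users * damage_per_victim

# Seconds of user effort per year that "break even" against that loss
break_even_seconds = expected_loss / hourly_wage * 3600
```

And that's exactly where the per-exploit accounting misleads: one precaution (a non-dictionary password, say) defends against hundreds of such exploits at once, so the budgets should be summed, not computed one exploit at a time.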
Re:Wasted time (Score:5, Interesting)
Personally, I buy things with the intent of running Linux on them. That means I have to take more care in researching before purchase, but in the end, it makes so many things so much easier.
I never have to hunt down drivers. 99% of my software comes from one place, and the updates are handled automatically. Frankly, when you buy the right hardware, everything just works far better than Windows.
Re:Wasted time (Score:3, Interesting)
Except that when a torrent is bad, usually a person will not reseed it. Though it is possible to "fake" seeds, generally I've found that a high number of seeds from a tracker you trust is a good sign.
Uhhhh what do I torrent? Linux DVD ISOs, duh!
6. Change often (Score:5, Interesting)
TFA:
Rule 6 will help only if the attacker waits weeks before exploiting the password. So this amplifies the burden for little gain. Only if it is changed between the time of the compromise and the time of the attempted exploit does Rule 6 help.
IANASE, but last time I checked, this rule was meant to deny attackers the time to brute-force the password and profit from it. It had nothing to do with the attacker discovering the password and then waiting quietly until nobody's looking to profit from it.
In theory, if you change your password often enough that it changes before a brute-force search completes, the attacker has to start all over again.
That said, it's an extremely difficult rule to enforce and comply with, unless you have a wonderful "I forgot my password" system.
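The arithmetic behind that "in theory" is worth spelling out, because it shows when rotation helps and when it doesn't. A rough sketch (the guess rates are illustrative assumptions, not measurements):

```python
def exhaust_time_days(alphabet_size, length, guesses_per_second):
    """Days needed to try every password of the given length
    over the given alphabet, at a fixed guessing rate."""
    keyspace = alphabet_size ** length
    return keyspace / guesses_per_second / 86400

# 8 lowercase letters against an online attacker throttled
# to ~100 guesses/second by the login service
slow = exhaust_time_days(26, 8, 100)

# the same keyspace against an offline cracker with a stolen
# hash database, at an assumed 1e9 guesses/second
fast = exhaust_time_days(26, 8, 1e9)
```

Against the throttled online attacker, exhausting the keyspace takes decades, so a 30-day rotation genuinely outruns the search. Against the offline cracker it takes minutes, and no plausible rotation schedule helps, which is roughly Herley's point about the compromise-to-exploit window.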
Re:Some security measures don't seem practical. (Score:4, Interesting)
This is slightly off-topic, but I have to question how useful it is to require people to change their passwords often. Chances are, when someone breaks into your computer, they're going to leave a back door so they can get back in regardless of what the password is changed to. Anyone have any thoughts on that?
I used to agree with you ... (Score:5, Interesting)
I used to hate expiring passwords on the financial data systems where I used to work. Then one day the Comptroller was locked out of his own account because he had tried his old password too many times. But it turned out the Comptroller was on vacation and hadn't even tried to log in.
It turned out that an inside person had put a physical keylogger (USB pass-through device between computer and keyboard, ordered straight from China) on the Comptroller's computer one night and collected it a week later, and then subtly tampered with her own salary. She had also stolen the e-mail passwords of any employee who would have been alerted about the change, and instantly deleted the e-mail notifications as soon as she modified the system. She was sophisticated enough to alter other logs and alerts as well.
We might have locked down our internal systems better to begin with, but I have to say that she might have gotten away with it if it hadn't been for those darn password changes.
Good article! (Score:5, Interesting)
I have to say, the linked [microsoft.com] article is the best article on security that I have ever read; and, for that matter, just about the first one that ever considers the radical concept that the user's time is of value.
"Third, the claimed benefits are not based on evidence: we have a real scarcity of data on the frequency and severity of attacks."
This is a very good point. What fraction of attacks are frustrated by making users change their passwords from one which is chosen from a set of 1E12 possible passwords, to one which is one of 1E20 possible passwords? How much safer do they get if you then say they have to have a symbol as well?
When they make me jump through hoops, I'd like to know what exactly I'm gaining.
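Those 1E12 and 1E20 figures are easy to check, and the comparison is more intuitive in bits. A quick sketch (character-class sizes are the usual assumptions: 26 lowercase, 92-ish printable characters with symbols):

```python
import math

def search_space(alphabet_size, length):
    """Number of possible passwords of a given length over an alphabet."""
    return alphabet_size ** length

# 8 lowercase letters: 26**8, on the order of the 1E12 figure above
weak = search_space(26, 8)

# 10 characters over upper+lower+digits+symbols (~92 characters):
# on the order of the 1E20 figure above
strong = search_space(92, 10)

# how many extra bits of work the larger set costs a brute-force attacker
extra_bits = math.log2(strong) - math.log2(weak)
```

But the comment's question still stands: extra bits of keyspace only matter against brute-force guessing, and nobody publishes data on what fraction of real attacks are brute-force in the first place.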
Taking a harder line on phishing-friendly sites (Score:3, Interesting)
On the phishing front, it's useful to stop blaming the end user, and blame the site that hosted the phishing page.
For some time, I've encouraged taking a harder line on phishing-friendly sites, sites that host phishing pages. I had a paper [sitetruth.com] on this at the 2008 MIT Spam Conference. At SiteTruth, we take the position that one phishing page blacklists the whole second-level domain. Here's the current list of major domains being exploited by active phishing scams [sitetruth.com].
The free hosting sites and the "short URL" sites show up on the blacklist regularly. After much nagging and some press coverage, most of them are now very aggressive about kicking off phishing pages, and they don't stay on for long. The better ones now read PhishTank and the APWG blacklist automatically and kick off anything that shows up. Currently, Google is in the doghouse, because they've recently entered the "free hosting business" without adequate phishing defenses. See this abuse of Google Spreadsheets. [phishtank.com]
At the moment, "t35.com", a free hosting service, is the site most abused in this way, by a large margin. I've contacted their people. The problem is that they're being attacked by a program, and they're cleaning up by hand. Right now, they're hosting 545 known phishing pages. Nobody else is even in double digits. "piczo.com" (a social network/free hosting service for teenage girls) was the last big victim, but they're gradually getting the problem under control.
A Draconian blacklisting policy may seem harsh, but it encourages site operators of easily-exploited sites to be very aggressive about dealing with the problem. We're seeing more free hosting sites with a "click here if this is abuse" button on every page. The number of people who have to be educated to deal with the problem in this way is in the hundreds, not the hundreds of millions. So it's a solvable problem.
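The blacklist-the-whole-domain policy is simple enough to sketch. This is a toy illustration of the idea, not SiteTruth's actual code (which isn't shown here), and the domain names are hypothetical. A production version would need the Public Suffix List so that, e.g., a site under .co.uk isn't collapsed to "co.uk" itself.

```python
from urllib.parse import urlparse

blacklist = set()

def second_level_domain(url):
    """Naive second-level-domain extraction: last two host labels.
    Real code must consult the Public Suffix List instead."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def report_phish(url):
    """One confirmed phishing page blacklists the whole domain."""
    blacklist.add(second_level_domain(url))

def is_blocked(url):
    return second_level_domain(url) in blacklist

# hypothetical free-hosting domain, for illustration only
report_phish("http://phish-pages.example-host.com/paypal-login.html")
```

The point of collapsing to the second-level domain is exactly the incentive described above: every innocent subdomain on the host gets blocked too, so the operator has a strong reason to police abuse quickly.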
If you're going to blame the victim, this is the way to go about it.