Password complexity rules alone won’t keep an account secure. Business Reporter’s resident U.S. blogger Keil Hubert argues that password reset ‘security questions’ undermine account security measures … so long as the answers to those security questions are true.
In the beginning, there were passwords, and passwords were … better than nothing. When computers were new, boffins warned users to avoid words that could be easily associated with them (e.g., the user’s name or birthday). Then boffins advocated for password complexity: we were warned off using common dictionary words. Then, they advised users to substitute numbers for letters (e.g., the ‘0’ for ‘o’ swap). Finally, they insisted that actual words were forbidden; users should form passwords out of strings of numbers, random letters, special characters, and hieroglyphics. Still, baddies manage to keep compromising accounts because passwords can never be complex ‘enough.’ Or so we’ve been told …
In reality, a user’s information system access credentials are only as strong as the weakest password in a series of passwords that grant access. Setting a password to something insanely complicated like ‘q&9-V@m-t)(cc,’ isn’t likely to keep an account safe if the baddies can trick a user into sharing that password with them (via a phishing attack) or if they can simply reset the user’s password to one of their choosing by forcing a password reset. It’s this latter tactic that tends to let the baddies in.
As password complexity rules started getting ridiculous, exasperated users began to flood their service desks with demands to reset forgotten passwords. It was inevitable; it’s darned hard to remember the exact sequence of a super-complicated password like ‘q&9-V@m-t)(cc.’ Get just one digit wrong, and the account gets locked out. A frustrated user has to request a reset and start over again with a completely different impossible string that’s also insanely difficult to remember.
Remember, too, that every user has a slightly different idea of what constitutes a ‘complex’ password.
Baddies figured this out and realized that they could easily compromise accounts by impersonating put-upon users. They’d access a system’s login prompt, lock out the account with failed attempts, then phone the service desk and ask for a reset. A sympathetic support tech would reset the account with a temporary password and tell the caller what it was … allowing the imposter to seize control.
Security boffins eventually implemented stricter user authentication methods to counter this attack. They would no longer simply trust that a caller was who he or she claimed to be. Some demanded that a user physically show up at the service desk to authenticate his or her identity with a valid ID card. That worked well for organisations with only one location, but didn’t work for distributed entities.
To compensate for remote users, security boffins developed special questions that a user could answer in order to validate their identity. The concept was simple: ask the user a question that only they would know the answer to. If the caller knew the secret answer, proceed! If not, hang up.
Eventually, though, the manual password reset burden got annoying enough that service desks started automating the process. Instead of clogging the service desk phone line, a user could go to a password reset web page and answer their security questions there. Self-service resets are convenient … but they undermine all of the advantages of password complexity rules, since the answers to users’ ‘security questions’ don’t have the same complexity rules as system passwords do. That makes sense; after all, how many people can say that their mother’s maiden name was spelled ‘q&9-V@m-t)(cc’?
Besides her, obviously.
These days, baddies ignore cracking your regular system password and simply get it reset. All they have to do is answer your security questions, toggle a password change, and they’re in. But … those security questions are supposed to be super secure, right? The user is the only person who knows the answers, right? In theory, yes; in practice, the answers are easy to crack because users post the answers online for everyone to read. All it takes is a little online detective work and a baddie can discover all of the information that he or she requires to wrest control of an account.
For starters, most ‘security questions’ aren’t very complicated. Many can be guessed quickly through a simple process of elimination. Consider the question ‘which member of the Beatles was your favourite?’ A baddie only has to select from four names in descending order of probability until the metaphorical lock pops open. A little common sense applied to the puzzle helps, too. A foreign-born Millennial hacker who’s never heard of the Fab Four can Google fan surveys to see who the most-adored Beatle was and start with the most popular name when taking a guess.
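That guessing strategy can be sketched in a few lines. This is an illustrative toy, not a real attack tool, and the popularity ordering is a hypothetical stand-in for the fan-survey data the column mentions:

```python
# A four-option security question falls to simple ordered guessing:
# try candidates in descending order of (assumed) popularity.
GUESSES_BY_POPULARITY = ["Paul", "John", "George", "Ringo"]

def crack_beatle_question(check_answer):
    """Try each candidate until check_answer accepts one.

    Returns (answer, attempts_used) or (None, attempts_used)."""
    for attempts, guess in enumerate(GUESSES_BY_POPULARITY, start=1):
        if check_answer(guess):
            return guess, attempts
    return None, len(GUESSES_BY_POPULARITY)

# Even in the worst case, the 'lock' pops open in four attempts.
answer, tries = crack_beatle_question(lambda g: g == "Ringo")
```

The point of the sketch is the ceiling: with only four possible answers, no cleverness is required at all.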
A better security question is ‘what was the make of the first car you ever owned?’ That’s better, since there are about fifty or so makes on the market today. Depending on the age of the user being targeted, the baddie can eliminate newer makes (that didn’t exist when the user was a teenager or young adult). For example, a user that graduated university in 1992 might have had a Pontiac or a Volkswagen as their first car, but couldn’t have had a SMART City-Coupé (that debuted in 1998).
But … how could a baddie possibly know when a person bought his or her first car? It’s not really that hard to deduce; all a baddie needs to do is pull up a public LinkedIn profile and review the user’s education. Users who list their high school graduation date (US users), the dates when they got their A-levels (UK users), or when they started university (darned near everyone) hand the baddie everything needed to make an educated guess about the user’s age.
Most people graduate at or close to the expected age for their culture. The small number of people who graduate earlier or later than expected have a minor security advantage over everyone else.
A baddie can drill further down with information that we all regularly share online: based on where a user grew up or had his or her first job, a baddie can further narrow the list of probable security question answers. Eliminate the marques that aren’t exported to a given country (e.g., Škodas aren’t sold in the USA) and de-prioritize makes that a young driver probably couldn’t afford (e.g., Bentleys). Simple analysis helps to eliminate a surprising number of wasteful guesses.
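The narrowing process above is just set subtraction. A minimal sketch, with an invented candidate list (the makes and eliminations are illustrative, not market data):

```python
# Start with the full candidate pool, then subtract makes ruled out
# by public facts about the target (age, country, student budget).
all_makes = {"Ford", "Pontiac", "Volkswagen", "Skoda", "Smart", "Bentley"}

# Target graduated university in 1992 and lives in the USA.
not_yet_launched = {"Smart"}           # debuted 1998, after their first car
not_sold_in_usa = {"Skoda"}            # not exported to the target's country
unaffordable_for_a_student = {"Bentley"}

plausible = (all_makes
             - not_yet_launched
             - not_sold_in_usa
             - unaffordable_for_a_student)
# Three cheap eliminations halve the guessing space before the first try.
```

Each publicly shared fact is one more subtraction; the 'staggering' answer space shrinks fast.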
Then we have unique string questions. The ‘mother’s maiden name’ security question is a classic, because there are a staggering number of possible names to guess from, featuring nearly infinite spelling variations. These are still easy questions to beat, though, because a baddie doesn’t have to guess the answers. Why bother? Users love to post family details on social media for everyone to see. Search the user’s Facebook profile for photos and posts tagged as ‘family.’ All it takes is an old post that tags an unmarried maternal aunt and BLAM! There’s the user’s ‘impossible to guess’ answer. A baddie can also look up a user on genealogy sites, search online wedding registries, and so on. A little target research goes a long way towards uncovering seemingly ‘impossible’ answers.
Making things worse, many different organisations tend to use the same security question prompts … meaning that a baddie can test out possible answer combinations on a low-value, unmonitored site until the right combination is unlocked, and then take those discovered answers to the desired target site and trigger an online password reset on the first try.
So, the takeaway from this depressing topic is that everything’s hopeless, baddies are going to crack all of our accounts, and the world’s going to end in a rain of flaming toads, right? There’s nothing that a normal person can do, so why bother trying? Actually … no. It’s not nearly as bad as it seems, once you factor in one blindingly obvious rule: there’s absolutely no requirement at all that a user tell the truth when answering password recovery security questions. The best way to protect oneself and send the baddies running in endless futile circles is to lie.
Your security questions aren’t like a job application or an online dating app profile. You’re allowed to be far more interesting in security question fiction than you are in real life.
The automated password reset system falls apart when the user answers their own security question(s) on social media where anyone can discover the answer silently and anonymously. It’s insanely difficult to convince normal people to not share their intimate life details on social media; that’s become our global public square. You may as well ask people to not breathe.
The only real requirement for a ‘security question,’ however, is that the user is the only person who knows the answer. That’s it. The most effective countermeasure to security question guessing, then, is to teach users to never use publicly-disclosed data as an answer to a security question. Consider that information to be permanently compromised. Instead, use nonsensical and impossible-to-guess answers for security questions. What’s your mother’s maiden name? Peugeot! What’s your favourite colour? Ennui! The make of the first car you ever owned? Walrus!
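One way to generate that kind of nonsense is to pick cryptographically random words that have nothing to do with the question. A minimal sketch using Python’s standard `secrets` module; the word list here is a tiny illustrative stand-in (a real one would be a full dictionary):

```python
import secrets

# A hypothetical nonsense vocabulary; any large word list would do.
NONSENSE_WORDS = ["Peugeot", "Ennui", "Walrus", "Marmalade", "Gravy", "Sprocket"]

def nonsense_answer(words=NONSENSE_WORDS, count=2):
    """Join a couple of unrelated, randomly chosen words into an
    answer that no amount of social-media research can uncover."""
    return "-".join(secrets.choice(words) for _ in range(count))

# e.g. 'What's your mother's maiden name?' might become 'Gravy-Sprocket'
maiden_name = nonsense_answer()
```

Because the answer is random rather than merely obscure, there is nothing for a baddie’s ‘target research’ to find.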
Teach your users that whatever they do, they shouldn’t ever use real answers for these questions. Remembering their answers is all that matters. Therefore, they should keep an encrypted list of all their security question answers someplace safe – preferably someplace that can’t be accessed from across the Internet.
Also, teach them to vary their answers from site to site, and never, ever let the answers make logical sense. Expanding the list of possible options from a short list of rational choices to a dictionary full of insane and inappropriate options increases the research burden on the baddie immensely. This tactic isn’t a bulletproof guarantee that a baddie won’t (or can’t) compromise their account(s), but it does make the baddies work for it. Often, that’s enough disincentive to make the baddies give up and go after an easier target somewhere else.
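The per-site variation advice can be sketched as a tiny answer vault: mint a fresh random answer for every site-and-question pair, so cracking one site’s answer reveals nothing about any other. This is an illustrative toy; the storage here is a plain in-memory dict, whereas a real vault would be encrypted and, per the advice above, kept off the Internet:

```python
import secrets

vault = {}  # (site, question) -> answer; a stand-in for an encrypted store

def answer_for(site, question):
    """Return the stored answer for this site/question pair,
    minting a new random one the first time it's asked for."""
    key = (site, question)
    if key not in vault:
        vault[key] = secrets.token_urlsafe(12)  # 16-character random string
    return vault[key]

a = answer_for("bank.example", "mother's maiden name")
b = answer_for("shop.example", "mother's maiden name")
# Same question, different sites: the answers never match, so an answer
# phished from one low-value site is useless everywhere else.
```

This directly defeats the cross-site replay attack described earlier: there is no shared answer to carry from a low-value site to a high-value one.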
Finally, there’s something immensely satisfying about crafting deliberately meaningless gibberish that sends adversary researchers into a frothing fit of frustration trying to decipher it. History suggests that Mr. Lennon would approve. ‘Let the [deleted]’s work that one out’ indeed.
 I’m sticking with the generic ‘baddie’ appellation throughout this column because ‘hacker’ is too often associated with amateurs, pranksters, and unaffiliated malcontents. Most hacking against companies and governments is performed by professional criminals (organized crime types) and nation-state actors (government spies and military operatives).
Title Allusion: John Lennon and Paul McCartney, I Am the Walrus (1967 title track of the EP of the same name)
Photographs under licence from thinkstockphotos.co.uk, copyright: walrus in Canadian Arctic, JohnPitcher, note with password, BeeBright, happy biker, aspenrock, portrait happy girl, diego_cervo, bare chested man, Photologue.
POC is Keil Hubert, firstname.lastname@example.org
Follow him on Twitter at @keilhubert.
Keil Hubert is a retired U.S. Air Force ‘Cyberspace Operations’ officer, with over ten years of military command experience. He currently consults on business, security and technology issues in Texas. He’s built dot-com start-ups for KPMG Consulting, created an in-house consulting practice for Yahoo!, and helped to launch four small businesses (including his own).
Keil’s experience creating and leading IT teams in the defense, healthcare, media, government and non-profit sectors has afforded him an eclectic perspective on the integration of business needs, technical services and creative employee development. This serves him well as Business Reporter’s resident U.S. blogger.