Beating the bots, streamlining security
In addition to eliminating friction caused by bots, the online tests introduce friction of their own

“Type the two words.” It was probably the easiest thing I’d be asked to do all day, certainly easier than reviewing the marketing plan that had landed in my inbox minutes before, or even picking up milk on my way home. Moreover, the request hadn’t even come from a person, but from the website where I was trying to register for an event. I paused to ponder the appropriate answer.
The twisted text waited patiently for me to decipher it. WVNEST BLOCKS? Or was it WWNEST BLOCKS? I leaned in toward the screen for a better look. It wasn’t the first time I’d been asked to solve a problem like this, nor, I was certain, would it be the last. I reckoned that, like most people, I had performed such a task hundreds of times, maybe even thousands, although who was counting? At any rate, I ought to be getting good at it.
Of course, I knew the reason for this challenge. It was a CAPTCHA—a security measure designed to filter out any bots that might be trying to penetrate the site’s defenses. The website wanted to know if I was human.
It turns out that almost half of all online traffic comes not from people but from bots, applications that run automated scripts over the Internet. Of these bots, only around a third perform useful, productive functions; the rest are engaged in some form of malicious or criminal activity, which means roughly a third of all online traffic consists of hostile bots. Good bots do things like help search engines index the web and monitor websites for weaknesses. Bad bots, on the other hand, often masquerade as good bots and exploit the access they gain to do destructive deeds.
In addition to wreaking havoc and undermining the trust of the online community, bad bots illustrate that it’s possible to automate bad behavior as well as good. And when automation is used for criminal purposes, the mistrust it spawns generates friction in the community rather than removing it.
Hence the CAPTCHA, or “Completely Automated Public Turing test to tell Computers and Humans Apart.” As the name suggests, CAPTCHAs were designed to identify which users are bots and which are humans. Over the years, variations on the CAPTCHA have emerged to stay ahead of the bots, but the basic premise remains the same: ask users to do something that computers can’t.
Unfortunately, in addition to eliminating friction caused by bots, the online tests introduce friction of their own. By reducing the test for humanity to a routine act of reading and typing, CAPTCHAs require us to perform just the sort of menial, manual tasks that digital technology was supposed to free us from. They’ve given us more friction.
Fortunately, Google now claims to have solved this frictional problem with a newer, streamlined test called Invisible reCAPTCHA. The new test works behind the scenes, invisible to the user, presumably by analyzing browsing behavior (exactly how remains a closely guarded secret), and presents a visible challenge only when it determines that the user is likely not human.
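For readers curious about the mechanics, the division of labor is worth a glance. Whether the challenge is visible or invisible, a successful check leaves the browser holding a token, and the website’s own server must confirm that token with Google before trusting the request. A minimal TypeScript sketch of that server-side step might look like the following; the siteverify endpoint and response fields come from Google’s reCAPTCHA documentation, while the isHuman helper and the RECAPTCHA_SECRET environment variable are illustrative placeholders:

```typescript
// Minimal sketch: confirming a reCAPTCHA token on the server.
// The browser obtains the token after the (possibly invisible)
// challenge; the server forwards it to Google for a verdict.

const VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify";

interface VerifyResponse {
  success: boolean;          // Google's judgment: human or not
  hostname?: string;         // site where the challenge was solved
  "error-codes"?: string[];  // present when verification fails
}

// Hypothetical helper; RECAPTCHA_SECRET holds the site's secret key.
async function isHuman(token: string): Promise<boolean> {
  const params = new URLSearchParams({
    secret: process.env.RECAPTCHA_SECRET ?? "",
    response: token,
  });
  const res = await fetch(VERIFY_URL, { method: "POST", body: params });
  const data = (await res.json()) as VerifyResponse;
  return data.success;
}
```

The notable design choice is that the human-facing test can shrink to nothing while this machine-to-machine handshake stays exactly the same.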
If it proves effective, the new test could be a small victory for the forces of online order, as well as proof that the best way to fight friction generated by automation is with even better automation, by streamlining the connection between our machines and our selves.
Regrettably, Invisible reCAPTCHA had not yet made its way onto this particular website, and so I typed in “WVNEST BLOCKS” and waited. Success! I was registered, and human after all.