I admit that I have always been suspicious of CAPTCHAs of any kind. Apart from the repeated concerns voiced by various experts over the years, I have always felt that they cause more problems than they actually solve.
Consequently, I steer clear of those little critters as best I can. That is, I refuse to comment on web content or generally interact with websites employing this “mechanism” to separate the digital wheat from the tares — regardless of their actual quality.
Refusal Is Not the Solution?
“Not much of a solution, either”, you say? Well, you are right — perhaps. Yet refusal is the only means of opposition I have at my command as a random, quasi–anonymous individual in the presence of this “digital authority”.
After all, I cannot tell the software to either replace itself with something that makes sense for every possible human actor or buzz off; I cannot turn to some higher authority and demand exemption be granted; and I cannot negotiate a different, more suitable methodology, right there and then. Now, can I?
CAPTCHAs are neither illegal nor popularly frowned upon (yet), and I am — fortunately — not a member of a legally recognised minority, which would make suing for discrimination (at least in theory) worthwhile.
Hence, all I can do is consider everyone employing this software ignorant of its capabilities and shortcomings, and otherwise avoid it.
This approach worked quite well until two weeks ago, when someone turned to me seeking confirmation that there actually is something wrong with this “bloody nuisance”. I promised to look into the matter …
To be clear: No, I do not think that any attempt to keep spammers and bots at bay is per se wrong, or that the software in question is flawed. Yet I do think that the entire concept is in desperate need of thorough consideration — and revision.
Yes, technically speaking, the software called CAPTCHA does work, but that’s not the point. It’s the basic design that’s abominable. Somehow, I doubt that Alan Turing would feel honoured to see his name abused to promote this poor show of logic.
What’s the Point of CAPTCHAs?
Well, the acronym itself should be fairly self–explanatory. Its commonly propagated expansion, however, indicates that those who coined this term are not quite familiar with the concept of acronyms.
(What the heck is the “P” for? “Bungled” starts with “B” as in “Bravo”.)
This software should “capture” bots roaming the vast expanses of the Internet, and thereby keep them from gaining access to certain areas thereof. In other words, CAPTCHAs are supposed to “protect” honest website owners and genuine (human) users from being bothered by those “digital bastards”. If only …
How, for Athena’s Sake, Are They Supposed to Accomplish Their Mission?
That’s a fine question, actually. One that should have been asked before the first CAPTCHA was even conceived — and, make no mistake, one that should have been raised frequently ever since.
After all, technology has made rapid progress over the past fifteen years — at least as far as data harvesting and spamming are concerned — while the self–anointed defenders against the rage of the machine appear to have enjoyed an extended slumber in some remote ivory tower.
Logic has it that those parties who face the easier task are more likely to succeed than those whose task is more difficult — particularly so when all parties have access to the same resources. And this is exactly the point here.
“A machine will always be more successful at solving a machine–related problem than the average human.”
This is the one and only epigram developers of “Completely Automated Turing tests to tell Computers and Humans Apart” need to print in large bold letters, frame, and hang on the wall facing their desks.
After two weeks of relentlessly besieging a number of those fancy fortresses, I’m prepared to tell you this:
- Your strategy is wrong
- You lie about the purpose of your “tests”
- Your approach is wrong
- Your tactics are (highly) questionable
- Your “artificial intelligence” is not based on natural (human) intelligence … and therefore flawed
- You have no idea what you are doing (but you are good at pretending)
Let’s pretend for a moment the goal of those pestering unsuspecting human users with their “protective devices” against bots and spammers actually were exactly this: protection against bots and spammers.
If this were the case, developers (who are worth their salt) would not rely on CAPTCHAs (as we know them). This strategy is useless against spammers, as large–scale spamming is an industry. That is, serious spammers employ the same technology as those allegedly fighting them, and they employ cheaper — and therefore more — but not necessarily less skilled human resources than their “adversaries”.
And what of protecting against bots? Seriously? Go check your logs. Whether or not you employ a CAPTCHA, I bet you a dollar to a button that you will find among the ten “users” most frequently visiting your website at least one Google bot (not to mention any of the others).
Even though I haven’t used any form of CAPTCHA in years, not once has their (or any other) bot attempted to contact me or subscribe to my newsletter by way of “exploiting” my unprotected forms. And I have yet to meet another human being who was actually approached by a friendly bot even once, digitally or otherwise.
I deem it more likely that, one of these days, I will receive an e–mail from Elvis’ ghost with an as–yet–unreleased song attached as an MP3–encoded file.
The simple logic behind this “riddle” is: There is no pertinent information to gain from using such forms. Whether or not “submit” or “subscribe” buttons on websites work as intended is relevant to human users only. Bots don’t have (and monitor) working e–mail addresses at which they receive replies.
And wouldn’t it be a bit ironic, if Google, arguably one of the biggest players in the data harvesting game, helped you fight the very technology their success relies upon?
So it may safely be stated that CAPTCHAs are not the strategy of choice when it comes to keeping your mailbox from being flooded with nonsensical messages.
But what is? As far as I can see, your best option to keep spammers from exploiting your online forms is to employ the login services of well–known (and tested) platforms. I dare venture the guess that everyone who owns a website also has at least one account on one of the better–known social media platforms, and thus access to these particular technologies.
Of course, one may create an account as “Donald Duck” or “Jane Doe” with any of them, but one would have to provide a working e–mail address or the registration would not be completed. You may “troll” Facebook or Disqus or even Google’s own (and all the others not mentioned here, of course), creating fake accounts, but entirely unidentifiable you are not. There simply is no way of sending messages “anonymously” with any of them.
Employing Disqus for comments on this blog instantly reduced spam attacks from two or three waves a year to zero. The CAPTCHA I used to employ on an older blog for years — requiring the capability to solve rather simple equations — did not fare remotely as well.
The real purpose of CAPTCHAs has been widely (and heatedly) discussed over the years. For the longest time, I have suspected that the critics are right. After intensively testing them, I have no reason to change my mind. CAPTCHAs are used to exploit free human labour (yours, in case you failed to get my drift) to identify information that could not be identified by machines.
You may have recognised over the years that with every new version of CAPTCHA the types of visual objects (or audio data) to be identified have changed in nature. This appears strange, as bots used to be (and arguably still are) “blind and deaf”.
They don’t sit in front of monitors and read the visual (or listen to audio) content of websites. That’s what humans do. Consequently, it makes no sense to use visual or acoustic traps to “capture” them.
Let’s again pretend, for the time you take to read the next few paragraphs, that you are provided with this technology — “free of charge” — in an effort to protect you against bots and spammers. Then why do we have to identify all sections of a particular image that contain a certain object, or alternatively all images containing certain objects?
You may have recognised over the years that, while the overall theme has changed, the basic routine remained the same. That is, the longer you “sit the exam” the more difficult it gets. That’s illogical in more than one respect.
First, it’s not reasonable to expect humans who fail to properly identify objects in photos with adequate resolution to fare any better when presented with grainy images.
Second, if the software were to tell computers and humans apart, it could dismiss everyone as a bot after the first mistake. True to its (alleged) purpose, it would have to terminate the session (rather than telling you that you failed and reloading the CAPTCHA) and kick you out.
And third, if you are considered a bot after the first go, you cannot possibly be considered a (fallible) human, and given another chance — and another, and yet another. If, on the other hand, you are still considered human after the first go, you cannot possibly be a computer at the same time. After all, you can only be this (computer equals 0) or that (human equals 1).
Being “not quite a computer” (and thus probably human) or “almost human” (and thus probably not a computer) are characteristics defined by human imagination and experience — too subtle by far for a machine to consider. If one coded software to make these distinctions, this person would inevitably face a logical dilemma.
Let’s pretend bots do actually try to solve CAPTCHAs to gain access to whatever treasure might await them once they got past this “digital sentry”. The candidate would have to be granted access regardless of the test results: software capable of making the mentioned subtle distinctions would have to consider it either “human” (if it passes the test without a mistake) or still “almost human” — and thus probably not a computer — if it happens to fail the test several times. Either way, computers and humans would succeed (or fail) with equal probability.
Hence, it is safe to conclude that either the traditional CAPTCHA fails its purpose or its real purpose is not quite what we are supposed to think it is.
Unfortunately, the latest version (at the time of writing), supposed to be less obtrusive, is no more promising than any of its predecessors. The software is said to determine the candidate’s status based on behavioural patterns. That definitely sounded like something I wanted to try. And try I did … hard.
I launched websites by entering URLs manually into the browser’s address bar. I launched the same websites, following referring links. I acted quickly once they had loaded the content or took my time. I launched them in different browser windows simultaneously, constantly switching between instances. And I terminated all sessions and diligently flushed the cache between sessions.
After two weeks of trying all shenanigans that came to mind, the results were still perfectly inconclusive. As often as not, I had to solve CAPTCHAs. Apparently, the software failed to decide whether I am human or machine.
Sometimes, I had to tick off a box to confirm that “I’m not a robot” (which is what I most certainly would say, if I were one). This action either triggered a series of rounds of CAPTCHAs I had to solve or caused the software to let me pass without further ado. Yet at other times, the “sentry” literally vanished before I could even raise a finger.
Sometimes, I would pass the exam even though deliberately offering wrong answers. At others, none of my answers would be good enough. My highest count was eight spins — without passing the test.
Yet the most frustrating experience was the few times when the software simply died on me after several rounds, while processing my (correct) answer. I was asked to come back later and try again. Seriously? Are Bits and Bytes entitled to coffee breaks and power naps of late?
Human vs Artificial Intelligence
I can see the point of trying to develop artificial intelligence. And to some extent, I do even understand some actors’ ambition and eagerness to gain an advantage over their contenders in this field. Yet I seem to recognise a serious lack of orientation as to purpose and goal of this aspect of technology.
Basically, computers are idiot savants; tools we employ to support our own creativity. Simply put, their only skill is to tell “0” from “1” — and they are admittedly much better than humans at doing just that.
Yet to expect a computer to tell “0” from “10” (the binary equivalent of decimal “2”) or “11” (decimal “3”) of its own accord (which would translate to “completely automated”) is a folly. A computer returning results like “Zero, or not” or “Zero, or One, or Two” cannot possibly be any scientist’s declared goal, as it would leave us none the wiser. To this end, we could simply ask any random person walking this planet — with equal probability of success.
To Know or Not To Know: That’s the Question
I find it difficult to believe that people who are able to code CAPTCHAs don’t know how computers — or humans — work. After all, this knowledge is paramount, if you want to write (nearly) flawless software (to write “perfect” software, you also need beer, coffee, and chocolate bananas — vast amounts thereof).
The plausible conclusion would be that the critics of CAPTCHAs have been right, and Google and their ilk merely try to “sell” us this nuisance under the pretext of fighting bots and spammers, but really exploit the unsuspecting (human) user. Yet this would prove they actually don’t know how computers and humans work — a similarly disturbing thought, as far as I’m concerned.
Let’s give the superfluous “P” (see remark above) a bit of meaning: “P” as in “Professional”. Shall we? Let’s “hack” them all, the machines and the humans.
If you want humans to do some dirty work, “offer” them to “let’s do it together”. Tell them, “we strive to provide ‘the world’ (including you) with useful information, but we need your contribution (because our computers cannot tell ‘hold’ from ‘bold’ or apples from oranges). Help us identify hard–to–read words in books we have scanned to be available in our online library (you may use free of charge)”.
For crying out loud, people agree to “sacrifice” parts of their CPU’s power to help perfect strangers mine cryptocurrencies, they offer resources to help (again) perfect strangers explore outer space, they spend quality time hunting down fictional creatures conceived by a (again) perfect stranger IRL the world over for no conceivable purpose, and they take the time to participate in polls and write reviews for the benefit of (yet again) perfect strangers — without any immediate compensation at all.
Why should they refuse to help Google (a company, an incredible number of people cannot tell from the Internet itself) identify poorly printed words and phrases in literary classics or low–quality images, if kindly asked? A little bit of honesty (and humbleness) may go a long way here.
Now, with that settled, let’s take care of the machine issue. By now, word should have made the rounds in — and even reached the remotest corners of — certain circles: Computers suck at reading with comprehension.
To them, “words” are merely strings comprising random characters — and sentences composed of these words are merely extensions thereof.
For each word a computer seems to “recognise”, it takes a reference list, created in advance by some “data monkey”, that contains this particular string for comparison. That’s basically how all the “big data” mumbo jumbo works, and likewise the filter mechanism you may or may not employ in your e–mail agent and the word correction in your document editor.
That’s how “trigger words” in communication between supposed terrorists and other assumed troublemakers, “bad words” to keep spammers and trolls from constantly annoying you with nonsensical or inappropriate comments on your blog, and also misspelt words (and poor grammar) in your documents to avoid embarrassment, are identified.
Someone created “whitelists” and “blacklists” and wrote a routine for the computer to check all strings in a document “under investigation” against their content. Without these, “luck” and “fuck” or “foe” and “for” would mean the same — basically, nothing at all — to the machine.
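The list-checking routine just described can be sketched in a few lines of Python. The word list and the sample messages here are purely illustrative assumptions, not taken from any real filter; the point is merely that the machine compares strings against a pre-compiled reference, with no “understanding” involved.

```python
# A minimal sketch of list-based word filtering. The blocklist and the
# sample messages are hypothetical, chosen only for illustration.

BLOCKLIST = {"buy", "cheap", "viagra"}  # a tiny stand-in for a real "bad words" list


def is_suspicious(message, blocklist=BLOCKLIST):
    """Flag a message if any of its words appears on the blocklist.

    To the machine each word is just a string; flagging happens purely
    by comparison against the reference list, never by comprehension.
    """
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not blocklist.isdisjoint(words)


print(is_suspicious("Buy cheap watches now!"))   # True
print(is_suspicious("Lovely article, thanks."))  # False
```

Swap in a “whitelist” and invert the test, and you have the other half of the mechanism; either way, without the list, “luck” and “fuck” are equally meaningless strings to the machine.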
To keep bots from exploiting online forms — which would only be a useful venture, if bots used for spamming were considered the actual target — and spare honest human users frustration, the human factor would have to be taken out of the equation entirely.
There used to be a nifty approach I haven’t seen taken for years (and even back in the day it wasn’t outrageously popular, it seems). It was called a “honeypot”, and it worked considerably more reliably than any visible CAPTCHA, but required immediate access to the source code of both the website and the software processing online forms.
You would add a hidden (that is, not displayed on the screen) field to your form. Then you created a filter to keep the mailing script from delivering a message unless this particular field was empty. It didn’t actually matter much how you labelled this field or whether you marked it as “mandatory” as computers didn’t care (and filled every field they came across, mandatory or not) and humans didn’t even know it was there (as assistive software wasn’t even “a thing” then).
This approach should work quite well, even today. Yet one would have to be a hint more careful. At any rate, this field would have to be marked as “mandatory” (as “teaching” a bot to avoid a field not marked with an asterisk is not much of a herculean task), but not processed as such (or the mailing script would not deliver it, if left empty).
For some time, the real problem was not a technical, but a logical one. How to keep a screen reader (an assistive technology for the blind and visually impaired, but also for users with reading difficulties) from confusing matters? This issue appears to have been settled meanwhile. It is relatively safe to assume that the vast majority of these agents do actually ignore form fields marked as “hidden”.
So basically, one could decide to label this field with any kind of string and mark the field as “mandatory”. It really wouldn’t matter whether or not developers of bots had created reference lists, as they couldn’t possibly anticipate strings that form developers may create at their own discretion.
The only relevant information needed to process the form properly would be the content (value) of a single hidden field with an inconspicuous name — calling it “honeypot” or “captcha” would be rather silly; “grandma” or “email2” would do considerably better. An empty field upon submission would cause the form to be delivered, while any other value would cause it to be instantly “shredded”.
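The server-side half of this honeypot check can be sketched in a few lines of Python. The field name “email2” follows the suggestion in the text, and the plain dict stands in for whatever structure your actual form handler receives; both are assumptions for illustration.

```python
# A minimal sketch of the honeypot check: deliver only if the hidden
# field came back empty. Field name and form data are illustrative.

HONEYPOT_FIELD = "email2"  # hidden from humans via CSS, labelled "mandatory" for bots


def should_deliver(form_data):
    """Return True if the message may be passed on to the mailing script.

    Humans never see the hidden field and leave it blank; naive bots
    fill in every field they encounter, so any value means "shred it".
    """
    return form_data.get(HONEYPOT_FIELD, "") == ""


# A human submission leaves the hidden field untouched:
print(should_deliver({"name": "Jane", "message": "Hi!", "email2": ""}))       # True
# A bot dutifully fills every field, mandatory or not:
print(should_deliver({"name": "x", "message": "spam", "email2": "x@x.com"}))  # False
```

Note that the field is only *marked* as mandatory in the HTML; the script above deliberately treats an empty value as the success case, exactly as described.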
Of course, I don’t want to send hosts of software developers to the breadline. Therefore, here’s an approach for an alternative CAPTCHA that would actually tell computers and humans apart (without nagging human actors too much).
Why not use the vast selection of pictures and comprehensible audio files already available and display one randomly chosen set (that is, one picture and one audio alternative) on the website in question?
If you gave this set a different, randomly generated temporary name each time around, machines could not identify them by this name for later use — and human visitors wouldn’t care for these names anyway.
If a “user” cannot solve either, it is relatively safe to conclude that you are dealing with a computer — and everyone else is likely to be human.
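A rough sketch of this scheme, assuming a pre-existing pool of picture/audio pairs; the file names, URL layout, and answer handling are all hypothetical, chosen only to show the random temporary naming:

```python
# A sketch of the randomly named picture/audio challenge proposed above.
# The challenge pool and URL scheme are illustrative assumptions.
import random
import secrets

# Hypothetical pool of matching picture/audio sets already on hand.
CHALLENGE_SETS = [
    ("cat.jpg", "cat.ogg"),
    ("bridge.jpg", "bridge.ogg"),
    ("piano.jpg", "piano.ogg"),
]


def pick_challenge():
    """Choose one set and assign it a throwaway name for this session only.

    Bots cannot build a reference list keyed on names that never repeat,
    while human visitors never notice the names at all.
    """
    picture, audio = random.choice(CHALLENGE_SETS)
    token = secrets.token_hex(8)  # fresh, unguessable name each time around
    return {
        "picture_url": f"/captcha/{token}-img",    # served from `picture`
        "audio_url": f"/captcha/{token}-audio",    # served from `audio`
        "answer_key": picture.split(".")[0],       # e.g. "cat", checked server-side
    }


challenge = pick_challenge()
print(challenge["picture_url"])  # different temporary name on every call
```

The server would keep the token-to-file mapping for the duration of the session, then discard it, so there is nothing stable for a machine to memorise.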
While this may seem to be an outrageous approach to some (and perhaps utterly contradict the real purpose of CAPTCHAs as we know them), it would simplify the entire process for everyone involved and still lead to the (allegedly) desired effect.
All things considered, the point of the internet is (still) to make gathering and sharing information a relatively inclusive experience (for humans) — or so I would assume.