Trading stocks, targeting ads, steering political campaigns, arranging dates, besting people on “Jeopardy” and even choosing bra sizes: Computer algorithms are doing all this work and more.
But increasingly, behind the curtain there is a decidedly retro helper — a human being.
Although algorithms are becoming ever more powerful, fast and precise, the computers themselves are literal-minded, and context and nuance often elude them. Capable as these machines are, they are not always up to deciphering the ambiguity of human language and the mystery of human reasoning. Yet these days they are being asked to be more humanlike.
Even at Google, where algorithms and engineers reign supreme in the company’s business and culture, the human contribution to search results is increasing, if subtly. So while programming experts still write the step-by-step instructions of computer code, additional people are needed to make subtler contributions as the work the computers do grows more involved.
People evaluate, edit or correct an algorithm’s work. Or they assemble online databases of knowledge and check and verify them — creating, essentially, a crib sheet the computer can call on for a quick answer. Humans can interpret and tweak information in ways that are understandable to both computers and other humans.
Question-answering technologies like Apple’s Siri and IBM’s Watson rely particularly on this emerging human-machine collaboration. Algorithms alone are not enough.
Google uses human helpers in two ways. Several months ago, it began presenting summaries of information on the right side of a search page when a user typed in the name of a well-known person or place, like “Barack Obama” or “New York City.” These summaries draw from databases of knowledge like Wikipedia, the CIA World Factbook and Freebase, whose parent company, Metaweb, Google acquired in 2010. These databases are edited by humans.
When Google’s algorithm detects a search term for which this distilled information is available, the search engine is trained to go fetch it rather than merely present links to websites.
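The fetch-or-fall-back behavior described above can be sketched in a few lines. This is an illustrative assumption, not Google’s actual implementation; the knowledge-base dictionary, the `search` function and its entries are invented for the example:

```python
# Illustrative sketch (NOT Google's code): when a query matches a
# human-curated entity, return its summary alongside the links;
# otherwise return only the ordinary web links.

# Hypothetical hand-curated knowledge base keyed by entity name.
KNOWLEDGE_BASE = {
    "barack obama": "44th president of the United States.",
    "new york city": "Most populous city in the United States.",
}

def search(query, web_links):
    """Return a curated summary if one exists, plus the usual links."""
    entry = KNOWLEDGE_BASE.get(query.strip().lower())
    if entry is not None:
        return {"summary": entry, "links": web_links}
    return {"links": web_links}

known = search("Barack Obama", ["example.com/obama"])
unknown = search("what does king hold", ["example.com/kings"])
```

A well-known name triggers the curated summary; an ambiguous phrase falls through to plain links, which is the distinction the article draws.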
“There has been a shift in our thinking,” said Scott Huffman, an engineering director in charge of search quality at Google. “A part of our resources are now more human curated.”
Other human helpers, known as evaluators or raters, help Google develop tweaks to its search algorithm, a powerhouse of automation that fields 100 billion queries a month. “Our engineers evolve the algorithm, and humans help us see if a suggested change is really an improvement,” Huffman said.
Katherine Young, 23, is a Google rater — a contract worker and a college student in Macon, Ga. She is shown an ambiguous search query like “what does king hold,” presented with two sets of Google search results and asked to rate their relevance, accuracy and quality. The current search result for that imprecise phrase starts with links to Web pages saying that kings typically hold ceremonial scepters, a reasonable inference.
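The side-by-side evaluation Young performs can be imagined as a simple comparison of rater scores. The 1-to-5 scale, the minimum-gap threshold and the sample ratings below are assumptions made for illustration, not Google’s actual methodology:

```python
# Illustrative sketch: aggregate human ratings of two candidate result
# sets (current algorithm vs. a proposed tweak) on an assumed 1-5 scale.

def mean(ratings):
    return sum(ratings) / len(ratings)

def compare(current, proposed, min_gap=0.25):
    """Declare a winner only when the average gap is meaningful."""
    gap = mean(proposed) - mean(current)
    if gap > min_gap:
        return "proposed change wins"
    if gap < -min_gap:
        return "current algorithm wins"
    return "no clear improvement"

# Hypothetical scores from four raters for one ambiguous query.
verdict = compare(current=[3, 4, 3, 4], proposed=[4, 5, 4, 5])
```

The threshold reflects the point Huffman makes: a tweak ships only when human raters confirm it is “really an improvement,” not merely different.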
IBM’s Watson, the powerful question-answering computer that defeated “Jeopardy” champions two years ago, is in training these days to help doctors make diagnoses. But it, too, is turning to humans for help.
To prepare for its role in assisting doctors, Watson is being fed medical texts, scientific papers and digital patient records stripped of personal identifying information. Instead of answering questions, however, Watson is asking them of clinicians at the Cleveland Clinic and medical school students. They are giving answers and correcting the computer’s mistakes, using a “Teach Watson” feature.
Ben Taylor, 25, is a product manager at FindTheBest, a fast-growing startup in Santa Barbara, Calif. The company calls itself a “comparison engine” for finding and comparing more than 100 topics and products, from universities to nursing homes, smartphones to dog breeds. Its website went up in 2010, and the company now has 60 full-time employees.
Taylor helps design and edit the site’s education pages. He is not an engineer, but an English major who has become a self-taught expert in the arcane data found in Education Department studies and elsewhere. His research methods include talking to and emailing educators. He is an information sleuth.
On FindTheBest, more than 8,500 colleges can be searched quickly according to geography, programs and tuition costs, among other criteria.
Taylor and his team write the summaries and design the initial charts and graphs. From hundreds of data points on college costs, for example, they select the most relevant ones to college students and their parents. But much of their information is prepared in templates and tagged with code a computer can read.
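The template-and-tag workflow described above might look something like the following. The field names, the sample record and the JSON schema are invented for illustration; the article does not describe FindTheBest’s actual format:

```python
import json

# Illustrative sketch: a human editor picks the handful of data points
# most relevant to readers, then emits them in a machine-readable
# template the site's software can consume.

raw_record = {  # hundreds of fields in reality; trimmed here
    "name": "Example College",
    "tuition_in_state": 9800,
    "tuition_out_of_state": 24500,
    "mascot": "Owls",  # accurate, but rarely decision-relevant
}

# Fields a human editor judged most relevant to students and parents.
RELEVANT_FIELDS = ["name", "tuition_in_state", "tuition_out_of_state"]

def to_template(record):
    """Keep only the curated fields, as a machine-readable JSON string."""
    return json.dumps({k: record[k] for k in RELEVANT_FIELDS})

template = to_template(raw_record)
```

The human judgment lives in `RELEVANT_FIELDS`: the computer can read the output, but deciding which of hundreds of data points matter is the part Taylor says still needs a person.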
The algorithms are getting better. But they cannot do it alone.
“You need judgment, and to be able to intuitively recognize the smaller sets of data that are most important,” Taylor said. “To do that, you need some level of human involvement.”