About Search Engine Fail

Search Engine FAIL Blog

Although this blog is about search engine failures, I’d like to begin by acknowledging the truly amazing things search engines do. Google alone conducts more than 100 billion searches per month – in each case creating, ranking and displaying search engine results pages (SERPs) in just a fraction of a second. The scale, complexity and speed of these operations are mind-boggling. Google has crawled more than 30 trillion URLs, indexed about 50 billion of them, and revisits 20+ billion every day. Although smaller, the Bing (13 billion) and Yahoo! (10 billion) indexes are still enormous.

Every minute of every day — without fail — Bing and Yahoo! compete to capture some of Google’s user base, yet the market-share proportions have been remarkably steady over the past year. Although the other search engines have failed to take share from Google, they do not seem to be losing their audience either. Experian/Hitwise continues to report that more than 20% of all queries are ‘unsuccessful’ (the searcher does not click on a search result, and either searches again or abandons the search); but, of course, this means that about 80% of searches are successful in some sense. These statistics are skewed by the fact that 30% of all searches use just one word. Clearly many of these people are using a search engine in lieu of bookmarks – the most popular one-word search term was ‘facebook’ (to Google’s annoyance, no doubt).

Sources of Search Engine Failures

There are three basic reasons why search engines fail to provide the most relevant possible results pages.

1) Their web crawlers (also known as ‘web bots’) do not comprehend the web pages they crawl and index. They dutifully find new pages, record the number and prominence of words, and rate and tally the hyperlinks to and from each page. But they really don’t understand the meaning of what they’re reading. [Pick any search engine and look for ‘best Italian restaurant’. Then ‘worst Italian restaurant’. Pretty much the same Page One results for both queries – Hmmm.]
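
To see why the ‘best’/‘worst’ results come out so similar, here is a deliberately naive ranking sketch in Python (purely illustrative code and numbers, not any engine’s actual algorithm): it scores a page on raw keyword counts plus inbound links, so the sentiment words ‘best’ and ‘worst’ barely move the score.

    # Naive bag-of-words scorer: counts query-term occurrences and adds inbound links.
    # Purely illustrative; real ranking algorithms use many more signals.
    from collections import Counter

    def naive_score(query, page_text, inbound_links):
        words = Counter(page_text.lower().split())
        term_hits = sum(words[t] for t in query.lower().split())
        return term_hits + inbound_links  # no notion of what the words actually mean

    page = "Luigi's Trattoria: the best italian restaurant reviews, italian restaurant guide"
    print(naive_score("best italian restaurant", page, inbound_links=120))   # 125
    print(naive_score("worst italian restaurant", page, inbound_links=120))  # 124, nearly identical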

The crawlers also check for ‘black hat’ SEO: cloaking, keyword stuffing, link schemes and other nefarious techniques used to artificially enhance the web crawler’s scoring of the page. A large number of search engine fail situations are attributable to SEO (white hat and black hat), and the web crawlers and ranking algorithms are periodically updated to try to counteract these effects.
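
As a rough illustration of one such countermeasure, here is a toy keyword-density check of the sort an indexer might run; the 5% threshold and the logic are my own assumptions for the sketch, not a published Google or Bing rule.

    # Toy keyword-stuffing check: flag pages where a single term dominates the text.
    # The 5% density threshold is an illustrative assumption, not a documented rule.
    from collections import Counter

    def looks_stuffed(page_text, density_threshold=0.05):
        words = page_text.lower().split()
        if not words:
            return False
        top_term, top_count = Counter(words).most_common(1)[0]
        return top_count / len(words) > density_threshold

    spammy = "cheap flights cheap flights book cheap flights now cheap flights deals"
    print(looks_stuffed(spammy))  # True: 'cheap' and 'flights' dominate the page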

2) Search engines fail to understand and interpret search queries in any meaningful way. Even if the web bots did a perfect job of indexing and grading the pages (and there is plenty of room for discussion about what ‘perfect’ means), the user would still not be presented with the best possible SERPs, because the search engine does not always understand what the user is looking for. Some of this is due to user error (typos, poor grammar, poorly formed queries), and we won’t call it Search Engine FAIL in those cases. But there are broad classes of search query that search engines have trouble interpreting. For example, if you do a Google or Yahoo! query for ‘superbowl score’, the actual final score of the most recent Super Bowl game doesn’t appear until Page 2 of the SERPs! [Try it and see!] Bing, on the other hand, shows the score in the first result.
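
One way to see the gap: a literal keyword matcher treats ‘superbowl score’ as just two terms to find in documents, while an intent-aware engine would recognize it as a request for a fresh, structured fact. The sketch below uses made-up trigger words to show the idea; it is not how any real engine classifies queries.

    # Toy query-intent classifier: decide whether a query wants a live fact
    # (score, time, price) or ordinary ranked documents. The trigger list is
    # an illustrative assumption.
    FRESH_FACT_TRIGGERS = {"score", "time", "schedule", "price", "weather"}

    def classify_intent(query):
        terms = set(query.lower().split())
        if terms & FRESH_FACT_TRIGGERS:
            return "answer-box"   # fetch a structured, up-to-date fact
        return "web-results"      # fall back to ranked document matching

    print(classify_intent("superbowl score"))    # answer-box
    print(classify_intent("superbowl history"))  # web-results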

3) Financial and other business motivations induce the search engine companies to modify the SERPs. Google doesn’t make any money providing organic search results; most of its revenue comes from the AdWords program – selling ads positioned on the results pages (usually 3 at the top, 7 in the right margin, and they are now experimenting with 3 at the bottom). The ads take up quite a bit of the SERP real estate, and the light background color sometimes makes them difficult to distinguish from the organic search results. [Note: This is arguably a violation of Google’s own AdSense guidelines.]

However, if a search engine fails to deliver useful responses, users will turn elsewhere to find what they want – to social media sites or offline sources. So there is a strong motivation to adjust the ranking algorithm to get the results ‘right’ (i.e., meeting users’ expectations). A relatively recent adjustment is the inclusion of ‘Local’ results in the SERPs (usually a ‘3-pack’ or ‘7-pack’, positioned near the top of Page 1). Google, Yahoo! and Bing (but not Ask, interestingly) assume people searching for certain keywords (restaurant, barber, car repair, piano teacher, pet store, etc.) are looking for local information. This may be the case most of the time; but when it’s not, we get another search engine fail.
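
The mechanism behind that assumption is easy to caricature: if the query contains a ‘local-intent’ keyword, a local pack gets injected above the organic results. A minimal sketch, with trigger terms and listings invented for illustration:

    # Toy local-pack injection: if a query looks local, prepend a 3-pack of
    # nearby listings to the organic results. The trigger list is an assumption.
    LOCAL_TRIGGERS = {"restaurant", "barber", "car repair", "piano teacher", "pet store"}

    def build_serp(query, organic_results, local_listings):
        if any(trigger in query.lower() for trigger in LOCAL_TRIGGERS):
            return local_listings[:3] + organic_results  # the 3-pack pushes organics down
        return organic_results

    organics = ["history-of-barbering.example.com", "barber-colleges.example.org"]
    nearby = ["Joe's Barbershop (0.4 mi)", "Main St Barbers (1.1 mi)", "Clip Joint (2.0 mi)"]
    print(build_serp("barber", organics, nearby))

Whenever a trigger fires on a query that wasn’t actually local, legitimate organic results get pushed below the fold – which is exactly the kind of failure described above.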

Is Your Search Engine Smarter than a 5th Grader?

Bing, Ask, Yahoo! and Google have huge databases and indexes; they sift through billions of entries to assemble and deliver the SERPs in under a second. But they’re really not that smart – how can a search engine fail to produce the answers to these simple queries? (A quick sketch after the list shows just how little computation some of them require.)

  • how many onions in two dozen
  • how many socks is two pair
  • how many toes do i have
  • four score and seven
  • number of people to tango
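
Several of these need nothing beyond a small lookup table and a multiplication. Here is a toy answer engine for the first two queries above (the word-to-number tables are my own illustrative stand-ins, not anything a real engine uses):

    # Toy 'direct answer' engine: map number words and group words to values,
    # then multiply. The word lists are purely illustrative.
    NUMBER_WORDS = {"one": 1, "two": 2, "three": 3}
    GROUP_WORDS = {"dozen": 12, "pair": 2, "score": 20}

    def tiny_answer(query):
        total = 1
        for word in query.lower().split():
            total *= NUMBER_WORDS.get(word, GROUP_WORDS.get(word, 1))
        return total

    print(tiny_answer("how many onions in two dozen"))  # 24
    print(tiny_answer("how many socks is two pair"))    # 4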

Remarkably, most search engines fail to display the answers to the above questions on Page One. They are also unable to produce useful responses to the following rather simple queries:

  • stars faceoff time (produces movie info, not the hockey game start time)
  • 2010 sonata tire pressure (recommended value not given)
  • state department address (street address does not appear on Page One)
  • us company with most employees (company not named)
  • smallest west virginia university
  • worlds fastest swimmer (#1 result is about the world’s slowest swimmer)
  • cheapest iphone 5 (no stores listed)
  • austin pediatrician (no doctors listed, just directories)
  • best search engine (Google not listed on Page One!)

Categories of Search Engine Failure

There are several types of queries that pose challenging technical problems for search engines.

  • Homograph phrases (words with multiple meanings/pronunciations, e.g., ‘bat’, ‘bear’, ‘can’, ‘down’, ‘effect’, ‘fine’, ‘lead’, ‘mind’, ‘object’, ‘park’, ‘recess’, ‘rock’, ‘wave’, ‘wound’ and ‘yard’)
  • Ambiguous terms and phrases (‘kids make delicious snacks’ or the classic ‘fruit flies like a banana’)
  • Idiomatic terms (‘brush with death’, ‘pulling my leg’, ‘kicking the bucket’)
  • Date ordering problems (e.g., ‘first’, ‘latest’, ‘oldest’, ‘new’, etc. – the selection algorithm needs to work out the most recent thing, frequently in the presence of misleading date/time info; see the sketch below)
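
A minimal sketch of why ‘latest’ is harder than it sounds: pages carry several candidate dates (publication date, last crawl, dates mentioned in the text), and they frequently disagree. The preference order below is an assumption for illustration, not a documented algorithm.

    # Toy 'latest' selector: each page exposes several, possibly conflicting,
    # date signals. Preferring 'published' over 'crawled' is an illustrative
    # assumption; a bad proxy here is exactly how date-ordering failures happen.
    from datetime import date

    pages = [
        {"url": "a.example.com", "published": date(2012, 9, 1), "crawled": date(2013, 1, 5)},
        {"url": "b.example.com", "published": None,             "crawled": date(2013, 2, 1)},
    ]

    def best_date(page):
        return page["published"] or page["crawled"] or date.min

    latest = max(pages, key=best_date)
    print(latest["url"])  # b.example.com, but only because 'crawled' stood in for freshness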

The above are fairly difficult to do correctly; but most search engine fail examples are the result of arbitrary preferences specified in the ranking algorithms:

  • Preference for large/old websites and major brands
  • Reliance on social signals (easily spammed)
  • Inclusion of Local results pushes relevant organic results further down the rankings
  • Local results dominated by directories, ratings/reviews sites (due to PageRank)
  • Tolerance of machine-generated content (e.g., real estate listings, product catalogs)
  • Concentration on short-tail results (=> vulnerability to long-tail spamming and Google bombing)

So search engines still have a lot of work to do. And, with Bing increasing in credibility and usage, Google will need to innovate to maintain its dominance in this field. Although upstarts like Blekko, DuckDuckGo and Shodan have promising new approaches and technologies, it is unlikely they will break into the top four. But if the leading search engines fail to maintain and improve the quality of their SERPs, they will be vulnerable to answer and social-information sites like Facebook Questions, Mahalo, Quora and Wolfram Alpha.