This is a follow-up to my previous post on "improvements in search" technology and the question of whether these improvements are actually beneficial.
I have read at least a half dozen books about Google or search (pretty much the same subject at this point). The data-oriented approach Google uses to make decisions has always intrigued me. The origins of search and the Google way made sense to me as an academic: the quality of publications is sometimes judged by citations and citation indices – how often you are cited, the prestige value of different journals, etc. That background offered the analogy through which I understood what Google was doing. I just finished a new book on Google (Steven Levy's In the Plex), and this account offered some new tidbits concerning the data Google uses. For example, if I submit a similar search shortly after reviewing the top "hits" from a previous search, that suggests I was not pleased with the previous results. What I came to understand was that my personal evaluation of the information Google prioritized for me was itself of interest to Google, and that Google would attempt to alter the suggestions I was given accordingly.
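The idea that a quickly reformulated query signals dissatisfaction can be sketched in a few lines. Everything below is a hypothetical illustration, not Google's actual algorithm: the function names, the word-overlap similarity measure, and the penalty value are all my own invented placeholders for the general technique of treating reformulation as negative implicit feedback.

```python
# Toy sketch of implicit relevance feedback from query reformulation.
# Hypothetical illustration only -- not Google's actual algorithm.

def similar(q1, q2):
    """Crude similarity: fraction of words shared between two queries."""
    w1, w2 = set(q1.lower().split()), set(q2.lower().split())
    return len(w1 & w2) / max(len(w1 | w2), 1)

def demote_after_reformulation(session, scores, threshold=0.5, penalty=0.2):
    """If consecutive queries in a session look like reformulations,
    treat that as a sign the earlier results failed and lower their scores.

    session: list of (query, urls_shown) pairs in time order.
    scores:  dict mapping url -> relevance score.
    """
    adjusted = dict(scores)
    for (q1, shown), (q2, _) in zip(session, session[1:]):
        if similar(q1, q2) >= threshold:  # likely a reformulation
            for url in shown:
                if url in adjusted:
                    adjusted[url] -= penalty
    return adjusted
```

A session like searching "best laptop 2011" and then, moments later, "best laptop reviews 2011" would demote whatever was shown for the first query.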
The problem with this (see previous post) is that Google and other companies have found ways to show us what we want to see rather than what we should see. In some situations, this difference matters. By feeding our biases, Google offers us the search equivalent of viewing only Fox News or only MSNBC. We are likely to accept what we find, but we are not encouraged to consider challenging positions. Maybe we really do need to worry that Google has made us stupid.
Could Google fix this? What if I wanted an option that would ignore my personal biases? One solution would be to offer a search option that says "ignore my personal history."
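Conceptually, such an option is cheap to offer: if personalization is just a re-ranking step applied on top of a base ranking, then "ignore my personal history" simply skips that step. The sketch below is my own hypothetical illustration of that design (the `rank` function and profile format are invented, not any real search API):

```python
# Hypothetical sketch of an "ignore my personal history" option.
# Personalization is modeled as an optional boost applied during ranking,
# so passing profile=None yields the unpersonalized result order.

def rank(results, profile=None):
    """Return urls sorted by score, optionally boosted by a user profile.

    results: list of (url, base_score, topic) tuples.
    profile: dict mapping topic -> boost, or None to ignore history.
    """
    def score(item):
        url, base, topic = item
        boost = profile.get(topic, 0.0) if profile else 0.0
        return base + boost
    return [url for url, _, _ in sorted(results, key=score, reverse=True)]
```

With a profile that boosts one viewpoint, the matching site rises to the top; with `profile=None`, the user sees the base ranking and, with it, the challenging positions the personalized list would have buried.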