
Misinterpreting search results as what we need to know

When you conduct an online search, how do you evaluate the quality of the hits your search generates? Eli Pariser (2011) argues that we may be misled when search results seem to provide the information we hoped to find.

The complexities of how companies that provide search services rank the many web pages a search returns are not fully shared with the public. Originally, Google made use of the links among web pages, reasoning that pages that were the target of many links must be useful. This is similar to the approach academics sometimes use to evaluate the significance of journal publications: articles that are frequently cited are assumed to be more important than articles that are seldom cited. By a similar logic, the sites that were most frequently linked were assumed to be the most valuable.
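The link-counting idea described above was formalized in Google's original PageRank algorithm, in which a page's importance depends on how many pages link to it and how important those linking pages are. A minimal sketch of that logic, using a made-up four-page toy graph (the page names and numbers are illustrative, not a description of Google's actual system):

```python
# A minimal sketch of link-based ranking in the spirit of the original
# PageRank idea: pages linked to by many (important) pages score higher.
# The toy graph below is hypothetical.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # Each page splits its rank evenly among the pages it links to.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # Dangling page: distribute its rank evenly to all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # "C" — the page with the most incoming links
```

In this toy graph, page C is linked by three of the four pages and therefore ends up ranked highest, which mirrors the citation-counting intuition in the paragraph above.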

Pariser argues that as the competition among online companies to provide better search and recommendation services has continued, these companies have sought additional information that would improve our satisfaction and keep us using a given search service rather than trying others. What if recommendations could be personalized? Personalization is possible when a service can collect data about you. For example, you may log in to use Google services such as Gmail. When a service can identify who you are, it can collect more information about your personal priorities and use this information when you conduct searches. To be clear, Google is quite open about the various services from which data might be extracted and provides a convenient way to view these sources, read privacy statements related to each service, and sometimes manage settings related to a service. This summary, called the Dashboard, is worth exploring if you use Google services.

Pariser suggests that how you respond to the results from a search can itself provide information; the company providing the results may interpret your behavior as a sign of dissatisfaction if you move on to the second page of search results or conduct multiple searches on similar topics. One thing personalization could mean is that two individuals conducting the same search could view different results. This would be unlikely if the search query requested factual information (e.g., the distance to the moon), but what if the search was general in nature and concerned a topic that could in some way be related to a religious, social, or political issue? One of Pariser’s examples in the book used the search term “BP”. He contended that when two of his friends conducted this search, one encountered priority hits that focused on the “oil spill” while the other’s first page of hits provided no such information. He concluded that the recipients of different search results could form very different impressions as a consequence of receiving different information. There is probably an interesting class activity you might imagine based on Pariser’s contention. What queries might generate different search results among members of your class? Are the results more variable if class members log in to Google on a regular basis, thus allowing Google to collect personal information (perhaps because they make use of Gmail)? What happens when other search services are used to conduct the same searches?
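One way a class could make the comparison concrete is to measure how much two students’ result lists for the same query actually overlap. The sketch below uses Jaccard similarity (shared results divided by all distinct results); the URLs are invented for illustration and do not come from a real search:

```python
# Hypothetical sketch of the class activity: compare two classmates'
# top search results for the same query and quantify the overlap.
# The result lists below are made up for illustration.

def result_overlap(results_a, results_b):
    """Jaccard similarity between two lists of result URLs (0 = disjoint, 1 = identical)."""
    a, b = set(results_a), set(results_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

user1 = ["bp.com", "news.example/oil-spill", "wiki/BP"]
user2 = ["bp.com", "bp-investor.example", "wiki/BP"]
print(round(result_overlap(user1, user2), 2))  # 0.5 — half the distinct results are shared
```

A class could repeat this for several queries (factual vs. politically charged) and for several search services, then compare the overlap scores to see where personalization appears to matter most.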

Pariser’s concern suggests that in some situations personalized search returns what we want to know rather than what we need to know. In other words, personalized search may feed existing biases rather than expose us to information that might challenge those biases. You may recall from Chapter 2 our discussion of “cognitive conflict” and the importance of confronting contradictory information in changing flawed personal models of the world. What if the way we explore the vast information resources available online somehow protects us from having to deal with cognitive conflict?

We propose that we select online content to review through processes of discovery and search. Discovery could be biased as a function of those we follow in order to discover important information sources. According to Pariser, search can be biased by the processes that online services use to prioritize content that may interest us. At present these potential biases probably do not represent serious problems for K-12 students, but they do represent an interesting information literacy topic that secondary students might explore. What might these students propose as a solution? What would you propose as a solution? Perhaps an understanding of these potential biases and some self-discipline can ensure that we proceed in a more critical manner. We can purposefully follow individuals we respect but who we know have perspectives different from our own. We can also take the time to explore search results that appear further down the list and to seek out alternative views on a topic.

One service for exploring alternative views is News360, which uses what the company describes as “semantic analysis” to identify trending news topics from a large number of sources and to identify the stories from these sources that address those topics. As a user of this service, you select a topic and then can explore the various ways in which different sources treat it. The value is not in the redundancy but in recognizing the variability with which a topic is presented. For example, what “spin” do the New York Times and the Wall Street Journal put on what News360 identifies as the same political story?
