Dark ads

“Fake news” has gained a lot of attention in the wake of the recent election, and the problem may run even deeper than the headlines suggest. I could argue that you bring fake news on yourself. You receive fake news from a site such as Facebook or Twitter because you follow someone who posted the fake story, and you contribute to the problem if you retweet or reshare it. You may end up as a victim of such falsehoods, but at least, in this case, you can blame the individual you followed for leading you astray. It may eventually be possible to flag suspect stories much in the way Wikipedia now attaches notices to articles that fail to satisfy certain standards.

What you may not realize is that you may be targeted in an effort to manipulate you by a completely independent source. Facebook allows what are called “dark posts”. As I understand it, a dark post is essentially an ad the source shows only to a selected subset of users, and what counts as an “ad” appears to be much more open to interpretation than you might expect. This NYTimes opinion piece by McKenzie Funk claims dark posts were used by the Republican presidential campaign to “micro-target” users, either to encourage a vote for their candidate or to discourage those opposing their candidate from voting. The detail in the Funk article is helpful in explaining how this was done. This micro-targeting was based on a massive database accumulated on millions of us by Cambridge Analytica. Forbes takes a similar position. The Forbes article provides greater detail on the different approaches taken by the Democrats and Republicans but, while noting the Republicans’ greater use of micro-targeting, provides less information regarding how this was done.

I assume most of us recognize that the social media ads we see are based on our own behavior; in theory, we see the ads we want to see. The dark ad feels different to me. I wonder what disclosure is required and whether, if you or I received these ads, we would recognize the source. The disclosure message required at the end of television ads is certainly absent, or far less prominent, when we are targeted online. Without an awareness of the source, we have less information with which to interpret intent.

So, as educators, we attempt to develop critical thinking skills in preparing students for what they will encounter in the “less friendly” real world. How distrustful must we assume future citizens will need to be?


Free speech should require you say something

Shortly after the conclusion of the uniquely contentious 2016 Presidential election, Buzzfeed released a disturbing report on the prevalence and popularity of fake stories related to the election.

You may have seen this chart somewhere.

[Chart: Facebook engagement for the top 20 fake election stories vs. the top 20 mainstream election stories]

The chart describes the shares, reactions, and comments for the top 20 fake election stories and the top 20 stories originating from actual news services in the run-up to the election. In the critical period before the election, fake stories generated more “engagement” (the term Buzzfeed used to describe their composite variable). A large proportion of the popular fake stories were pro-Trump or anti-Hillary (17 of 20), and this disparity, in combination with the contentious election, led to public outcry directed at Facebook and concern that the public had been manipulated by the content they encounter online. Take a look at the article for links to some of the fake articles.
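To make the composite variable concrete, here is a minimal sketch of how such an “engagement” score might be computed, assuming it is simply the sum of shares, reactions, and comments; the story names and counts are invented, not Buzzfeed’s actual data.

```python
# Minimal sketch: an "engagement" composite as the sum of shares,
# reactions, and comments. All values below are hypothetical.
stories = [
    {"title": "Fake story A", "shares": 500_000, "reactions": 300_000, "comments": 160_000},
    {"title": "Mainstream story B", "shares": 250_000, "reactions": 180_000, "comments": 70_000},
]

for story in stories:
    engagement = story["shares"] + story["reactions"] + story["comments"]
    print(f"{story['title']}: {engagement:,} total engagement")
```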

In an era in which social media has become a major part of the political battlefield, the credibility of content raises serious questions about how voters make decisions (see a similar NYTimes story on Twitter bots and fake information). Given the outlandish comments made by the candidates, perhaps no one should be surprised. Social media promises to open political conversation to everyone at low cost, but more and more the opportunity for individuals is overpowered by the promotion of falsehoods.

Social media users are partly at fault. Some months ago the Washington Post described a study demonstrating that approximately 6 in 10 individuals shared a story consisting of gibberish. The implication is that shared stories are often not read by the individuals recommending them. Consider this in combination with the Buzzfeed study. As the election neared, more and more individuals made their personal decisions and attempted to influence others, likely assuming that more extreme stories would be more persuasive. The title of an article was probably as far as many so motivated got, and sharing is so easy: no fuss, little effort, and completely fabricated content. It is difficult to know whether anyone really read these articles, but the titles may have carried the message.

Facebook and other services question whether popular fake news had any impact on voting behavior but promise to address the problem. Fake news may be protected as free speech, but some ad providers say they will withhold ad revenue from the offending sites.

As educators, we go on and on about the importance of information literacy. We try to teach learners what to look for and how to be critical thinkers. Here is what I think is a new concern. The issue with social media is a little different from the issue with search. It is one thing to find resources on your own and then evaluate their credibility, but this is different from the challenge of encountering resources endorsed by someone you may trust. I wonder if this difference between found and endorsed resources has been considered.

I am beginning to develop a personal perspective on this problem. I think sharing is far too easy. Amazon flags product reviews to indicate whether the reviewer is known to have actually purchased the product. It is too bad there is no comparable way to indicate whether a shared source has actually been read. My suggestion would be to ignore any recommendation that does not include some message of justification from the individual promoting the resource. Free speech should at least require you say something yourself.
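As a thought experiment, here is a minimal sketch of what enforcing that rule might look like if a sharing service required a justification before passing a link along; the function name and word threshold are hypothetical, not any platform’s actual API.

```python
# Hypothetical sketch: refuse a share unless the sharer says something themselves.
def share_with_justification(url: str, justification: str, min_words: int = 10) -> str:
    """Return a share message only if it includes a personal justification."""
    if len(justification.split()) < min_words:
        raise ValueError(
            f"Please say at least {min_words} words about why you recommend this."
        )
    return f"{justification}\n{url}"

# A bare reshare would be rejected; an annotated one goes through.
print(share_with_justification(
    "https://example.com/story",
    "I actually read this piece and found the data in its second section convincing.",
))
```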



Media and/or personal bias

As I remember the focus of Pariser’s “The Filter Bubble”, the author was concerned that, as search services learn our priorities, the content appearing at the top of the hits returned in response to a search would tell us what we want to hear or feed our biases. Two individuals with different beliefs could conduct the same search and be told different things.

I admit that I tried various ways to demonstrate this potential bias and was unable to come up with a demonstration that worked. Pariser describes having two individuals he knew with different political leanings conduct the same search and observing that the results were different. I attempted to conduct anonymous and self-identified searches (logged into my Google account) for the word “apple”, assuming that revealing who I was to the search service would bias my results toward technology and the anonymous searches toward the fruit. No luck.
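For anyone who wants to repeat this informal test, here is a minimal sketch of one way to quantify what I was eyeballing: collect the top hits from the two searches and compute their overlap. The URL lists below are invented stand-ins for whatever the searches actually return.

```python
# Hypothetical sketch: compare the top hits from a logged-in search and an
# anonymous search for the same term using Jaccard similarity.
logged_in_results = [
    "apple.com", "apple.com/iphone", "macrumors.com",
    "9to5mac.com", "apple.com/support",
]
anonymous_results = [
    "apple.com", "en.wikipedia.org/wiki/Apple", "healthline.example/apples",
    "applesfromny.example", "macrumors.com",
]

a, b = set(logged_in_results), set(anonymous_results)
jaccard = len(a & b) / len(a | b)  # shared hits / all distinct hits
print(f"{len(a & b)} shared hits, Jaccard similarity = {jaccard:.2f}")
```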

Researchers using Facebook data have approached the “filter bubble” issue in a different way. They have identified users along a conservative/liberal continuum and then examined the links included in posts from these groups. In the aftermath of the election, they are presenting related data graphically through what they describe as the blue feed/red feed. Assuming both sources of media bias are real, the argument would be that we receive different slants on the facts both through who we are (our search history) and through who we friend. It seems possible these two forms of bias interact to compound the effect.
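Here is a minimal sketch of the kind of analysis I understand the researchers performed: place users on a continuum, then tally which domains each group links to. The users, scores, and links are invented for illustration, not the researchers’ data.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical sketch: scores run from -1 (liberal) to +1 (conservative).
users = {
    "alice": {"score": -0.8, "links": ["https://leftnews.example/a", "https://centrist.example/b"]},
    "bob": {"score": 0.9, "links": ["https://rightnews.example/c", "https://rightnews.example/d"]},
    "carol": {"score": -0.6, "links": ["https://leftnews.example/e"]},
}

blue, red = Counter(), Counter()
for user in users.values():
    bucket = red if user["score"] > 0 else blue
    for link in user["links"]:
        bucket[urlparse(link).netloc] += 1  # tally the linked domain

print("Blue feed domains:", blue.most_common())
print("Red feed domains:", red.most_common())
```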
