My original connection to Section 230 grew out of my experience as a blogger (since 2002). I would find that individuals had commented on old posts, adding unrelated links to services or products. I had not turned on moderation, so these additions escaped my attention. I wondered why this was a useful activity and why anyone who happened to encounter such content, unrelated to the individual post, would find it of interest. It seemed pointless and a violation of some unwritten rule of online behavior. I turned on moderation, and after I deleted such content and marked it as spam, the practice stopped. I thought of this experience when I first encountered Section 230. The spam links were seldom to inappropriate content or services, but they easily could have been.
I first made an effort to understand Section 230 when I read Jeff Kosseff’s “The Twenty-Six Words That Created the Internet”. My recollection of his analysis is that it explained two benefits, one of which was a surprise. The obvious advantage to big social media platforms and to small-platform me was that we were protected from being held responsible for content we hosted but did not create. This was the situation I had experienced as a blogger and just described. The second provision, which I thought nonobvious, was that it protected hosts should they attempt to do something about content they decided was inappropriate or simply not what they wanted on their sites. They could delete such content and not be held accountable, and, more than that, they could not be held accountable even if the pattern of control they exerted were regarded as biased or unfair. The argument that such provisions created the Internet was based on limiting the litigation that would otherwise be constant if anyone could claim their rights had been violated.
I was a defender of Section 230 because these provisions made sense. I read the complaints and was largely unfazed. I believe this was the case because I thought about the protections from my own position as a long-time blogger and not as a user of major social media platforms.
When I thought about solutions to the complaints, the one remedy that occurred to me was to remove the protection allowed by anonymity. If those who in some way wronged others online could be identified, go after them rather than the host platform. What is wrong with this expectation? Early counterarguments focused on the value of the Internet as an outlet for those who had legitimate reasons to hide their identities – e.g., people persecuted in other countries, those needing protection from hostile spouses, or teens who felt their parents did not understand their feelings or desires. The mixture of good and evil is a constant struggle with technology. Several conflicting goods always seem to be present. What do you do then? Make a decision based on the number of individuals who benefit or are harmed? Perhaps you could promote multiple platforms that work in different ways. There never seems to be a clear best solution.
A second reality related to anonymity has recently become obvious. How can identity be established in a way that meets legal requirements? Consider the issue of age requirements for online services. Many platforms require an individual to be 13 to use the platform. Seems clear enough, and yet can a platform be held responsible when younger individuals somehow sign up? Kids lie. Parents ignore the requirements so their kids can be part of a family group and share images with grandma. Consider what it takes to secure a passport. It is no easy feat, and it is this level of documentation some want to require before an individual is allowed to vote – a legal level of proof. Given this challenge, I don’t see politicians establishing an acceptable level of proof for legal online participation, so the burden seems to have shifted to the platforms should identity be required.
No solution seems possible here. I give up.
The other component: an abdicated responsibility
What now seems lost is the second opportunity Section 230 allows. Platforms can make good faith efforts to moderate content and their efforts do not have to be perfect. The potential seems similar to how I think of privacy as a selling point. Some companies make a big issue of their commitment to privacy and others don’t. Why don’t some social media platforms make an issue of their commitment to truthful or appropriate content? For all of the “let the market decide” advocates, why has quality moderation not become a selling point? Section 230 certainly provides the means for this opportunity.
The recent political arguments about bias seem to hold the answer to my question. Moderation or algorithmic prioritization (proven or not) is labeled woke or, since woke is vague, something worse: a violation of free speech. Political action is demanded or expected despite the clear provisions of Section 230. It is clear large social media corporations are concerned about their immediate futures should they not do what the present party in power suggests is the right thing. They seem cowed into relaxing the responsibility Section 230 allows them, declining to address even obvious factual flaws (e.g., who won the 2020 election, whether vaccines are effective).
The Punt
Two platforms have decided to punt. Twitter (X) and Facebook have decided to take no direct responsibility for moderation (although Section 230 allows it) and to make a form of moderation the responsibility of users. First rolled out by Twitter, the approach both companies have endorsed provides the appearance of quality control. Community Notes is an approach by which certified participants can attach a visible note to a post. Of course, anyone can counter claims made on X or Facebook in responses/comments, but this is not the same as a note that appears attached to the original post and is visible to all. The community note process is complicated and, I would argue, largely ineffective. If you are interested, here is the best description I can find of how Twitter’s Community Notes process works.
There is considerable evidence that Twitter’s process is ineffective. Without going into the details of who qualifies as a rater and what biases are built in by the qualification process, here is one obvious problem any actual user of Twitter or Facebook should understand. The process of collecting input from those who have established themselves as raters takes some time. By the time sufficient data has been collected, few users ever see the note that has been added. How many of you examine tweets or Facebook posts from last week? In addition, the proportion of posts that generate a note varies greatly. Flawed posts about factual matters (e.g., whether vaccines reduce deaths from COVID) are much more likely to result in a note, while politically charged claims (e.g., that Trump really won the 2020 election) go without one. The difference is in whether conditions set by Twitter are met. BTW – X does not rate notes on the basis of truthfulness; raters respond on a three-level scale of “helpfulness”.
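To make the delay and the uneven coverage concrete, here is a minimal sketch, in Python, of how a Community Notes–style rule might withhold a note until raters from different viewpoint clusters agree it is helpful. The scoring values, thresholds, cluster labels, and the note_status function are my own illustrative assumptions, not X’s actual implementation, which reportedly relies on a more elaborate matrix-factorization model.

```python
# Hypothetical sketch of a "bridging" rule: a note earns a public label only
# when raters who usually disagree both find it helpful. All numbers and
# labels below are assumptions for illustration, not X's real algorithm.

from dataclasses import dataclass

# Three-level helpfulness scale mapped to numeric scores (assumed values).
SCORE = {"helpful": 1.0, "somewhat": 0.5, "not_helpful": 0.0}

@dataclass
class Rating:
    rater_cluster: str   # crude stand-in for a rater's inferred viewpoint
    label: str           # "helpful" | "somewhat" | "not_helpful"

def note_status(ratings, min_ratings=5, threshold=0.7):
    """Return a display status for one note.

    The note is shown only when every viewpoint cluster that rated it
    averages above the threshold -- helpfulness must bridge clusters
    rather than come from one side alone.
    """
    if len(ratings) < min_ratings:
        return "Needs more ratings"      # the delay most users experience

    by_cluster: dict[str, list[float]] = {}
    for r in ratings:
        by_cluster.setdefault(r.rater_cluster, []).append(SCORE[r.label])

    if len(by_cluster) < 2:
        return "Needs more ratings"      # no cross-viewpoint agreement yet

    cluster_means = [sum(v) / len(v) for v in by_cluster.values()]
    return ("Currently rated helpful"
            if min(cluster_means) >= threshold
            else "Not shown")

if __name__ == "__main__":
    ratings = [
        Rating("cluster_a", "helpful"),
        Rating("cluster_a", "helpful"),
        Rating("cluster_b", "somewhat"),
        Rating("cluster_b", "helpful"),
        Rating("cluster_b", "helpful"),
    ]
    print(note_status(ratings))   # -> "Currently rated helpful"
```

Even in this toy version, the two problems described above are visible: a note sits in “Needs more ratings” until enough raters respond, and it is withheld entirely unless agreement crosses viewpoint lines, which is exactly where politically charged claims tend to stall.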
Does it matter?
I think so. From personal experience (and I urge you to check your own), I have relatives who seldom read a quality newspaper and never a book. They believe they are informed because they spend hours scrolling through online media. I think it unlikely their reading habits will change, so I regard this as a serious problem. I am more concerned because any government intervention is now unlikely. The migration of people away from X and Facebook may be great for personal mental health, but it again does nothing to improve the quality (truthfulness) of the content consumed by those who remain. I made a personal decision to leave X more because the newest algorithm made tweets containing links less likely to appear near the top of my feed than because I could not deal with the content I encountered. Without links I can use to reach more expansive and well-argued content, what is the point? I am considering returning to post rather than to explore and discover.
I seldom write a post without recommending an opportunity or a course of action. I really am at a loss here. Perhaps some of you might respond with recommendations.