Online social media providers (YouTube, Facebook, Twitter) have recently come under scrutiny over free speech and appropriateness, and they have responded in different ways. At issue is the transmission of factually inaccurate content by prominent individuals (e.g., President Trump). Twitter has tagged a few tweets as factually inaccurate, setting off a firestorm regarding free speech. Twitter does not block the false claims; it simply attaches a label. Facebook has decided not to get involved in the accuracy issue.
I understand that these platforms are in a very difficult situation and could not possibly fact-check all posts. The platforms protect themselves by claiming that those adding content must be responsible for the legitimacy of what is posted. This is the platform argument; the publisher argument, in contrast, acknowledges some responsibility for what appears.
Here is an issue I think is important and not acknowledged by Facebook. What I as an individual experience is not simply the stream of what has been posted by those I follow. The algorithms Facebook employs decide what I see, and they are designed to encourage greater attention to Facebook content. By definition, this negates Facebook's argument that it is a neutral party. One suspected way to encourage greater attention (more time spent on the site means more ad views) is to display more content likely to generate an emotional response.
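To make the distinction concrete, here is a minimal sketch of the difference between a neutral chronological feed and an engagement-ranked one. This is a hypothetical illustration, not Facebook's actual system: the Post fields, the scoring weights, and the example scores are all assumptions chosen to show how optimizing for predicted attention can change what a user sees.

```python
# A minimal sketch of engagement-based feed ranking (hypothetical; the
# fields and weights are assumptions, not Facebook's real algorithm).
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    posted_order: int            # position in the raw chronological stream
    predicted_engagement: float  # hypothetical model score, 0.0-1.0
    emotional_charge: float      # hypothetical sentiment-intensity score, 0.0-1.0

def chronological_feed(posts):
    """A neutral feed: show exactly what was posted, newest first."""
    return sorted(posts, key=lambda p: p.posted_order, reverse=True)

def engagement_ranked_feed(posts, charge_weight=0.5):
    """A curated feed: rank by predicted attention, which (under these
    assumed weights) systematically favors emotionally charged content."""
    def score(p):
        return p.predicted_engagement + charge_weight * p.emotional_charge
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("friend_a", "Photos from our hike.", 1, 0.2, 0.1),
    Post("friend_b", "Outrageous claim you must share!", 2, 0.6, 0.9),
    Post("friend_c", "Local library reopens Monday.", 3, 0.3, 0.2),
]

# The same posts appear in different orders; the ranked feed is no
# longer a neutral record of what your friends posted.
print([p.author for p in chronological_feed(posts)])      # c, b, a
print([p.author for p in engagement_ranked_feed(posts)])  # b, c, a
```

The point of the sketch is that once any scoring function other than recency is applied, the platform is making editorial choices about prominence, whatever the actual weights may be.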
In this sense, Twitter is more neutral than either YouTube or Facebook because it does not control or recommend what you see.