Social Media Can Be Social, If We Force It
The Washington Post has an article about a social media company that allows a wide range of debate on all sorts of topics, ranging from local levies to the most hot-button of culture-war issues. And it has none of the toxicity that drives other social media sites. People are, if not exactly nice, at least mostly respectful. How? Moderation.
The site, Front Porch Forum, has aggressive moderation. Each post is read by a human moderator before it is allowed on the site. The moderators do not censor positions or politics or opinions. They are tone-focused. Basically, anyone who posts like an asshole has their posts blocked. It has been a success, with “… 235,000 active members in a state with about 265,000 households.”
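For the mechanically inclined, the model is easy to state: it is classic pre-moderation, a hold-for-review queue with no auto-publish path. Here is a minimal sketch in Python (the structure and all names are my own illustration; Front Porch Forum has not published its internals):

```python
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    PENDING = "pending"      # waiting for a human moderator
    APPROVED = "approved"    # visible to the community
    BLOCKED = "blocked"      # rejected for tone, never published


@dataclass
class Post:
    author: str
    body: str
    status: Status = Status.PENDING


@dataclass
class ModerationQueue:
    """Hold-for-review queue: nothing is published until a human approves it."""
    pending: list[Post] = field(default_factory=list)
    published: list[Post] = field(default_factory=list)

    def submit(self, post: Post) -> None:
        # Every submission waits for human review; there is no auto-publish path.
        self.pending.append(post)

    def review(self, post: Post, acceptable_tone: bool) -> None:
        # The judgment call (is the tone acceptable?) stays with a person.
        self.pending.remove(post)
        if acceptable_tone:
            post.status = Status.APPROVED
            self.published.append(post)
        else:
            post.status = Status.BLOCKED
```

The point of the sketch is the shape of the pipeline, not the code: the tone judgment is a human input to the system, not something the system computes.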
That success should not be a surprise. Most people are not shit-posters. Most people listen to others and try to understand them. And even those who do not tend to adhere to the social rules around them. Enforce a no-assholes rule, and most people will try not to be assholes. The site is popular, vibrant, and economically successful. It is what all social media could be, if we forced it to be.
Big social media platforms like Instagram, Facebook, and Threads (I am putting Twitter to the side, since at this point it is nothing more than the personal grievance machine of its owner; its moderation is designed to push far-right, racist news and topics into the conversation) drive their success by trying to trigger the reward centers of the brain to keep people “engaged.” This, in turn, leads to practices like encouraging heated rhetoric and misinformation, since anger drives more engagement and more time on the site. It obviously does not have to be this way, but we as a society just as obviously have allowed it to be.
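To make the incentive concrete, here is a toy version of the ranking logic critics describe. The weights are invented for the example (no platform publishes its real ones), but the dynamic they produce is the point: whatever provokes reactions, especially angry ones, floats to the top.

```python
# Illustrative only: a toy engagement-ranking rule of the kind critics describe.
# The weights are invented for the example, not any platform's real values.
ENGAGEMENT_WEIGHTS = {
    "like": 1.0,
    "comment": 3.0,   # arguments generate long comment threads...
    "angry": 5.0,     # ...and anger reactions predict more of them
    "share": 4.0,
}


def engagement_score(reactions: dict[str, int]) -> float:
    """Rank posts by predicted engagement; inflammatory posts win by design."""
    return sum(ENGAGEMENT_WEIGHTS.get(kind, 0.0) * count
               for kind, count in reactions.items())


calm_post = {"like": 120, "comment": 4}
angry_post = {"like": 30, "comment": 60, "angry": 40, "share": 25}

# The angrier post scores 510 vs. 132, so the feed surfaces it first.
assert engagement_score(angry_post) > engagement_score(calm_post)
```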
We could change that, though. We could force companies to move away from their engagement algorithms and to moderate better. The big companies say that such moderation does not scale. But these are among the wealthiest companies in the world, in the history of the world. Meta, Instagram and Facebook's parent company, spent ten billion dollars to build a legless utopia and call it the Metaverse. Just imagine how many moderators those missing legs could have paid for.
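As a back-of-envelope estimate (using my own assumed figure, not anything Meta has published): at a fully loaded cost of $100,000 per moderator per year, ten billion dollars funds 100,000 moderator-years, which is a standing staff of 10,000 moderators for a decade.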
Companies, especially corporations that are granted enormous benefits and protections by the government, do not have to be left alone to do things that harm society. Democratic control of the economy demands that we rein in companies that harm society. Forcing social media companies to stop padding their bottom line with algorithms that encourage anger and misinformation is no different from preventing a power plant from polluting the air.
Maybe the Front Porch model doesn't entirely scale. Maybe there need to be other controls, rules, and/or processes to achieve a similar outcome. But we won't know until we give up the idea that these companies should be allowed to do whatever harm they want in order to line their own pockets. That, in the parlance of Front Porch, is asshole behavior, and we should not be expected to tolerate it.


"The site, Front Porch Forum, has aggressive moderation. Each post is read by a human moderator before it is allowed on the site. They do not censor positions or politics or opinions. They are tone focused. Basically, anyone who posts like and asshole has their posts blocked."
The example you’ve given us – human moderators evaluating the tone of a post – highlights a fundamental weakness in automated moderation that I seriously doubt even AI will be able to overcome. At the heart of the issue is the capacity to make a judgment about meaning (in the post) that depends almost entirely on grasping a context: the reach and range of the discussion or topic in question, and the group of participants.
To have any chance of dealing with tone, that grasp of context must include a sense of what the context's standard of appropriateness should be, which very much includes how people should relate to each other and how they should talk about the topic. A grasp of appropriateness is the basis for deciding that a post is inappropriate. Tone, in this setting, communicates an emotional attitude toward another person or toward a topic. This kind of human evaluation and judgment is what computers cannot do (credit to Hubert Dreyfus for the inspiration), whether algorithmically or through machine-learning next-word prediction.
I will check this out! I see so painfully often (when I scratch the surface, or even just glance in the direction of a thing) that money and growth are never spent on actual humans or humanity, only on things that leave us in the lurch (and always some humans way, way more than others, of course).