Bluesky, Threads and Moderating on the Cheap
Bluesky, a social network that is a riff on Twitter, has recently come out of invite-only mode and opened its doors to anyone who wants to join. It seems to have added about two million users on the first day, bumping its count to about five million. That is peanuts compared to the big services, but Bluesky is a nice service that a lot of the people I used to follow on Twitter have migrated to (you can find me at kcraybould.bsky.social if you want to). It is not perfect, and it really needs more hockey reporters to join, but it works for me. Wired has an interesting interview with Bluesky’s CEO, however, that makes me wonder how much longer Bluesky can be viable. Combine that with Threads’ recent decision not to promote political and news content by default, and I am afraid the new social media companies are repeating the mistakes of the old ones.
The Bluesky CEO, when pressed on moderation, has some good points and some concerning points:
Not all those posts will be playful, though. What’s your vision for moderation?
We have community guidelines to prevent harassment and hate speech, and we use moderation to try to create a baseline of a healthy, welcoming social space on the default Bluesky app. Then because it's built on this open protocol, anyone can set up and run their own infrastructure and start labeling or annotating content and accounts in the network. That's something that users can directly install to piece together their own community norms.
Moderation has proven to be a weak spot of just about every social network, even those that are very profitable. Do you think you’ll ever reach a point where you’re unable to moderate efficiently?
Our goal is to combine both approaches—to run a moderation service that tries to provide a baseline and to also have an open ecosystem where anyone who wants to innovate can come in and start building. I think this is particularly useful around cases where information is really fast moving and there's specialized knowledge. There are organizations out there already in the business of fact-checking, or figuring out if a verified account is actually a politician or not. They can start annotating and putting that information into the network, and we can build off that collective intelligence.
Recently there was a very high-profile incident on X where deepfake porn of Taylor Swift started spreading and the platform was not super prompt at clamping down. What’s your approach to moderating deepfakes?
From the start we've been using some AI-detection services—image labeling services—but this is an area where there's a lot of innovation and we've been looking at other alternatives.
This is also where a third-party labeling system could really come into use. We can move faster as an open collective of people—she has lots of fans who could help identify content like this very proactively.
There is a lot of emphasis in her answers on the community creating its own moderation. When she talks about third-party labeling and letting anyone build their own moderation tools, that is what she is referring to. Feeling harassed? There’s a third-party tool, she seems to be saying, that can help you. On the one hand, that is largely good. If the company itself isn’t moving fast enough to shut down harassment or hate speech, someone out there can probably step in and help you. But this kind of reliance on the kindness of strangers also seems problematic.
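To make that concrete, here is a rough sketch of what the labeling model she describes seems to boil down to. Everything in it is hypothetical (the type names, label values, and policy shape are mine, not the actual AT Protocol lexicons or Bluesky’s client code); it just illustrates the idea of third-party labelers annotating posts while each user decides which labelers to trust and what to do with their labels.

```typescript
// Hypothetical sketch, not the real AT Protocol schema or Bluesky client API.
// It models the idea from the interview: independent labelers annotate posts,
// and each user decides which labelers to trust and what their labels mean.

type LabelAction = "hide" | "warn" | "ignore";

// A label is just an annotation some labeler attaches to a post.
interface Label {
  src: string; // identifier of the labeler that issued the label
  uri: string; // the post being labeled
  val: string; // e.g. "harassment", "spam", "deepfake"
}

interface Post {
  uri: string;
  text: string;
}

// Each user composes their own policy: which labelers they trust,
// and what to do when a trusted labeler applies a given label value.
interface UserPolicy {
  trustedLabelers: Set<string>;
  actions: Map<string, LabelAction>;
}

// Filter a feed according to the user's chosen labelers and rules.
function applyPolicy(posts: Post[], labels: Label[], policy: UserPolicy): Post[] {
  return posts.filter((post) => {
    const relevant = labels.filter(
      (l) => l.uri === post.uri && policy.trustedLabelers.has(l.src)
    );
    // Drop the post if any trusted labeler applied a label the user hides.
    return !relevant.some((l) => policy.actions.get(l.val) === "hide");
  });
}

// Example: a user who subscribes to a (hypothetical) anti-harassment labeler.
const policy: UserPolicy = {
  trustedLabelers: new Set(["did:example:anti-harassment-labeler"]),
  actions: new Map<string, LabelAction>([
    ["harassment", "hide"],
    ["spam", "warn"],
  ]),
};

const posts: Post[] = [
  { uri: "at://example/post/1", text: "Nice goal last night!" },
  { uri: "at://example/post/2", text: "(something abusive)" },
];

const labels: Label[] = [
  { src: "did:example:anti-harassment-labeler", uri: "at://example/post/2", val: "harassment" },
];

console.log(applyPolicy(posts, labels, policy)); // only post/1 survives
```

On paper that is an appealing division of labor: Bluesky provides the baseline, and everyone layers their own rules on top. In practice, I see problems.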
First, I was unaware that these tools existed, despite being on the site for several months. Even after reading the interview, I am not sure how to find these tools or how to use them. A system that is invisible to users is not much of a system. Which highlights the problem: Bluesky wants to do moderation on the cheap. They effectively want to Uber-ize their moderation, tossing the problem onto the backs of unpaid hobbyists to solve for them. It is almost certainly not going to work.
I understand their thinking here, at least to a certain extent. Moderation is hard, both in terms of the cost to do it effectively and the decisions you have to make. Every platform/media company/place on the internet with any audience goes through these cycles. First, they do nothing and pretend that they are free speech heroes. Then they look around at who is using their site and how, and realize “Wait, when we said free speech über alles, we didn’t mean the porn and the Nazis and the death threats!” and try to do something to make their site attractive to normal people. And then they find that broad rules don’t really work, that they have to actually spend time and resources moderating things, and that the process is going to irritate people, making the job even harder and more resource-intensive. It is the circle of internet comments life.
Bluesky seems to think they can break this circle by enforcing the broad rules themselves and leaving tighter moderation to a series of third-party tools that people can pick and choose from. I don’t think it’s going to work. First, because as I noted, the third-party tools are largely invisible right now. Second, installing and using the third-party tools is not as simple as just using Bluesky itself. There will be some level of friction to the process, and friction when you are already mad or upset is not going to be well received. People are quite reasonably going to ask why this harassment is allowed and why the company won’t do something about it without making them jump through hoops and read instruction manuals. People, in other words, are going to want the company to moderate at a detailed level. And when Bluesky doesn’t? People are going to get angry.
Threads is trying a slightly different approach. They seem to think that if they keep politics and news out of the main feeds, they will have very little to moderate, as people won’t be angry enough at each other to be jerks online. I doubt this approach is going to work. First, you can choose to opt in to news and politics, so the arguments are going to happen anyway, and therefore the calls for moderation are going to happen anyway. Second, and more importantly, history has taught us that merely existing as a certain kind of person — woman, trans, person of color, to name the most notable, though not exclusive, examples — invites harassment. Threads is still going to have to do something about that kind of abuse if it doesn’t want to be another Truth Social or the website formerly known as Twitter.
Moderation is hard; it always has been and always will be. It requires nuance, a deft touch, and the proper level of trained staffing to make the hard decisions. It is a thankless job that will not make everyone happy and will likely, if you want normal people to stick around, make the most vocal subsets of the internet furious. Bluesky and Threads know that if they don’t moderate, they will be overrun with Nazis, alt-right hate speech, porn, and harassment. They further know that if they allow that to happen, they will drive away most normal people, killing their services. Their mistake is in thinking, like everyone that came before them, that there is a simple, cost-free, magical solution that will make the hard decisions go away. There isn’t, and the decisions will always remain. Both services would be better off admitting this well-known truth and building out their moderation capabilities accordingly.

