I recently discovered that some popular federated instances have been using LLM-assisted moderation tooling that evaluates whether someone has said something bannable. They do this by running a script/app that sends the user’s comment history to OpenAI with the prompt “analyze this content for evidence of *specific political ideology* sentiment. Also identify any related *political ideology* tropes”. (The italic bits are where I’ve redacted the ideology they’re seeking.)
OpenAI’s LLM (they’re using GPT-5.3-mini) then responds with something like:
and so on, for hundreds of comments.
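For anyone wondering how much engineering this takes: almost none. Here is a minimal sketch of what such a script could look like, assuming the standard OpenAI Python client. The prompt wording and model name are taken from what I found; `fetch_comment_history()` is a hypothetical stand-in for however an instance pulls comments out of its own database, and `[ideology]` marks my redaction.

```python
# Hedged sketch of what such a moderation script could look like.
# The prompt wording and model name come from the tooling I found;
# fetch_comment_history() is a hypothetical placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fetch_comment_history(username: str) -> list[str]:
    """Hypothetical helper: the real tool reads the instance's own database."""
    return ["example comment 1", "example comment 2"]  # placeholder data

def profile_user(username: str) -> str:
    comments = "\n\n".join(fetch_comment_history(username))
    response = client.chat.completions.create(
        model="gpt-5.3-mini",  # model name as reported; swap in a real model id
        messages=[{
            "role": "user",
            "content": (
                "analyze this content for evidence of [ideology] sentiment. "
                "Also identify any related [ideology] tropes\n\n" + comments
            ),
        }],
    )
    return response.choices[0].message.content
```

One API call per user, pennies per run. Nothing about this is hard to build, which is part of why I think we need to talk about it.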
I have not named the instances or people involved, to give them time to consider the results of this discussion, make any corrective changes they want, and disclose their practices at their own pace and in their own way. I have also redacted the evidence to avoid personal attacks and dogpiling. Let’s focus on the system, not the individuals involved. Today these instances and people are using it, and maybe we’re OK with that because it’s being used by groups we agree with. But what if people we strongly disagree with used it on their instances tomorrow?
The use and existence of this tooling raises a lot of other questions too.
What are the risks? Fedi moderators are often unsupervised, untrained volunteers, and these are powerful tools.
What safeguards do we need?
Would asking an LLM “please evaluate this person’s political opinions” give different results than “find evidence we can use to ban them” (as used in the cases I’ve seen)? A rough way to test this is sketched after this list.
What are our transparency expectations?
Is this acceptable and normal?
Should this tooling be disclosed? (It was not – should it have been?)
If you were given a choice, would you have opted out of it?
Can we opt out?
Are there GDPR implications? Privacy implications? Should these tools be described in a privacy policy?
Are private messages being scanned and sent to OpenAI?
How long should these assessments be retained, and can we request to see them, or ask for them to be deleted?
Once a user’s comments are sent to OpenAI, are they used to train its models?
What will the effect be on our discourse and culture if people know they are being politically profiled?
Where are the lines between normal moderation assistance tools, political profiling, and opaque third-party data processing?
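On the prompt-framing question above: here’s a rough way someone could test it. This is purely a sketch of an experiment, not something I’ve run; the model id is an arbitrary illustrative choice and `comments` is whatever corpus you feed it.

```python
# Sketch of a prompt-framing experiment: same comment corpus, two framings,
# then compare the verdicts. Purely illustrative.
from openai import OpenAI

client = OpenAI()

FRAMINGS = [
    "please evaluate this person's political opinions",
    "find evidence we can use to ban them",
]

def compare_framings(comments: str) -> list[str]:
    verdicts = []
    for framing in FRAMINGS:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": framing + "\n\n" + comments}],
        )
        verdicts.append(response.choices[0].message.content)
    return verdicts  # diff the two answers to see how much framing steers the output
```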
I hope that by chewing over these questions we can begin to establish some norms and expectations around this technology. The fediverse doesn’t have any centralized enforcement so we need discussions like this to develop an awareness of what people want in terms of disclosure, privacy, consent and acceptable use. Then people can make choices about which instances they join and which ones they interact with remotely.
And of course there are the other issues with LLMs relating to environmental sustainability, erosion of workers’ rights, increasing the cost of living, and on and on. I can’t see PieFed adding any functionality like this anytime soon. But it’s happening out there anyway, so now we need to talk about it.
What do you make of this?
I’m mostly just surprised that a mod would pay for tokens to moderate. The Fediverse is radically public by design, so I don’t have any expectation of privacy. I’d bet at least someone is gobbling up the entire Fediverse to train AI, since companies are so desperate for new human-generated data.
LinkedIn’s LLM-powered automation banned my account on a false positive a few months ago, and it took ages to get it sorted out; they treated me like shit the entire way through, even after acknowledging that they’d made a mistake. Sadly it’s extremely difficult to operate in my field without a LinkedIn account, otherwise I would love to be able to delete it.
This shit is poison
Is Rimu okay lately? He’s been acting so hostile.
What now? Nothing, really, because nothing has really changed. I don’t care whether an admin tool is based on an LLM or on a simple regular expression. I only care about the outcome, meaning the mod actions it takes.
I think you’re just looking for excuses to defederate from dbzer0. I think you’re throwing things at the wall to see what sticks.
GDPR-wise, this is the absolute nightmare scenario.
Data about political orientation is defined as especially sensitive (“special category data”). When people just straight-up post their ideological leanings, that’s one thing. But what’s described here is profiling: all the available data relating to a person is analyzed by automated means and used to assess their leanings. This is then used to discriminate against them. It doesn’t get much worse.
This might be legal in very specific circumstances. E.g. non-profit religious or political organizations are allowed to police their members and associates to some degree. That would involve quite some extra paperwork, but it doesn’t apply here anyway.
Apparently that is on top of ordinary GDPR violations. The processing is done by a third party (OpenAI) without the necessary paperwork. Remember that billion-euro fine Meta got? That was because they processed data outside the EU, in the US. And that wasn’t even “special” data.
You know what those cookie banners in the EU look like? That’s for normal data. All the disclosure, all those settings, are legally required. Some people on the Fediverse go apeshit over far smaller things.
This may also be a problem for other instances. Your instance sends all your data (except e-mail and IP address) to anyone in the world who asks, with no strings attached. That may be okay as long as users understand that that’s exactly what they sign up for. Looking at comments here, it doesn’t seem like that is universally understood. That’s a problem. On top of that, we now have a situation where there are hints that the personal data is being abused.
This is the person calling you a tankie. Someone so afraid of words that they need a hallucinating robot to hold their hand and confirm that everything is a secret plot against them. The only way I could see this being useful is for something like trying to sniff out whether a Lemmy.world mod account is a leftist infiltrator or not - that is, someone who had a different opinion on a current event.
You could maybe run a speech pattern comparison, but that’s it. For everything else you’ve just made Stupid Reddit, a forum whose purpose is to feed training data to ChatGPT so that it can profile Fediverse users.
This is the kind of shit dystopian novels are made out of. You got so angry about people calling out your actions that you built a tool to analyze why they did it, so you can purge users from your digital kingdom.
I for one welcome flat.world and PieFed showing their true intentions: digital colonization of ActivityPub and removal of the people who helped build it. They didn’t want to leave Reddit, they wanted to be Reddit. This is some Spez shit.
Maybe in two weeks PieFed will hard-code it so that anyone Rimu has tagged for mild criticism or disagreement is unable to make accounts or federate posts, failing with a false error code.
Oh fucking YIKES.
Do NOT send our post history straight to OpenAI, that’s just … extremely gross.
Sure, it’s “public”, but that doesn’t mean feeding it directly to the slop machine is okay.
– Frost
Not comfortable with this. Not at all.
Well, it was fun while it lasted…
If it can be done, it sooner or later will be done.
That’s a lot of why I have a couple of dozen accounts scattered around the threadiverse and make new ones whenever I come across a server that looks promising - partly because it takes a while to get used to one and get a feel for whether it’s one I like or not, and partly because there’s always the possibility that one I like will go sideways and/or shut down, in which case I can just unpin it and move on.
And in fact, I’m only using this account on something of a whim for this post - I don’t normally use it, because one of the instances I don’t like much is yours. And specifically, what I don’t like about it is you, and your bland presumption that you know what’s best for me - which communities I should subscribe to, which posters I should trust or even be allowed to see, which sources I should be allowed to use or see…
And really, I’m sort of surprised that you’re the OP here and not the subject. I would think that the whole idea of commissioning a review of a user’s posting history in pursuit of grounds to ban them would be right up your alley. Is the problem just that it’s AI?
In any event, this is just a thing that might prove to be an issue. And if it does, I’ll just move off of the affected server(s) and keep using the unaffected ones. And if enough people share my sentiment and the admin cares enough, they might change their ways. Or they might not. It’s not a big deal either way - it’s just part of life on the fediverse, and IMO the benefits make it worth it.
Lol it’s dbzer0 isn’t it.
But also, I don’t really care. The fediverse is open by design; you don’t even need an account to access the data. I don’t like it, but we can’t really do anything about it.
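To illustrate the point: pulling someone’s entire public history takes one unauthenticated request. A rough sketch against Lemmy’s public v3 user endpoint, with a placeholder instance and username (I’m assuming the endpoint shape from Lemmy’s API docs):

```python
# Sketch: fetch a user's public posts/comments from a Lemmy instance with
# no account and no API key. Instance and username are placeholders;
# assumes Lemmy's public GET /api/v3/user endpoint.
import requests

def public_history(instance: str, username: str, limit: int = 50) -> dict:
    resp = requests.get(
        f"https://{instance}/api/v3/user",
        params={"username": username, "limit": limit, "sort": "New"},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    return {"posts": data.get("posts", []), "comments": data.get("comments", [])}

# e.g. public_history("lemmy.example", "someuser") - no login required.
```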
this is flat out not ok, no matter who is doing it. our instances should defederate from all which do this.
I would opt out, no question, but I don’t believe it’s possible. GDPR does not matter here, as nothing can be proven unless the perpetrators give themselves up.
The answer to these kinds of issues is never disclosures or ToS or admin vigilance. It’s always technical. Everything which is technically possible will become normal.
Lemmy is not popular because it is a well-designed piece of technology. Frankly, it’s a pretty naive implementation of ActivityPub. Its popularity comes from being the biggest alternative around when Reddit pissed off a good chunk of its users.
The only way to control how data is used is to make misuse technically or practically impossible. Until then, expect all the data on the fediverse to be used in every way possible, for any purpose, and act accordingly.
I don’t see a technical or practical way to limit - let alone render impossible - AI moderation tools that is not at odds with decentralized open-protocol social media.
If you can copy-paste user activity into a textbox, this remains trivial.
If you’re not going to name them, why post here at all? Don’t you have other communication channels to “give them a fair chance to reply”? Why post here and let users form their own assumptions about which instances those are, without any solid evidence?
OP literally asks like 10 relevant questions for this place, and gives their reasons for not naming specific instances. And all you focus on is the question: who did it?
To me that is proof that OP did the right thing here.
Let’s first figure out how to approach this without knowing the perpetrator.
Hardly, this was just a bait post. Rimu seems to be convinced if he flings enough mud, eventually some of it will stick. It’s all very petty.
The posts/comments on the fediverse are already public. The privacy questions are better answered here by another commenter:
My issue is that OP is not providing any solid proof. They are just giving ‘wink wink’s about some ‘popular’ instances doing it. When asked whether they have proof, OP says they have proof of some mods doing it. Mods don’t handle instances, admins do. They haven’t yet provided any concrete proof, yet they’re creating the impression that some instances are banning en masse using LLMs based on “political ideology”.
Name names. The only people you’re protecting are scumbags.