I think using LLMs to HELP with moderation makes sense. The problem with all these companies is they appear to think it’ll be perfect and lay off all the humans.
Yeah, LLMs could really help, and other tools without AI are helpful too. The problem with all those companies is that they don't want to moderate for the public good at all. Reddit could kill a lot of fake news on its platform, prevent reposts of revenge porn, or kick idiots just by implementing a few rules. They don't want to.
I don't think this is about LLMs. That's not synonymous with AI.
I mean, what people refer to as AI today isn't really synonymous with actual AI.
It’s been cheapened
I don't think it's that. LLMs very much are actual AI. Most people just take that term to mean something more than it actually does. A simple chess engine is an AI as well.
Cool. I think he should piss on the 3rd rail.
🔥
<letsUsersPreventFreedomOfScreechFromHittingTheirOwnFeed>
“Cool. I think he should piss on the 3rd rail.”
¿What the hell? It's right there in the title: letting users OPT INTO IT, meaning it's not forced on everyone at the company's discretion, but lets each user set their own tolerance levels. As long as it can be set to 0, why is this a bad thing?
Forcing moderation onto everyone is what people vehemently oppose; this is the opposite.
¿Why the fuck would anyone want to prevent an AI from filtering nazi/CSAM content out of their own feed?
He's thought of a clever way to offload the responsibility/burden the platform/service carries for allowing speech on it. It lets people who don't want to see triggering content avoid it, without involving some third party who gets PTSD from filtering out all the vile shit humanity has to offer.
… that's not moderation then, dipshit. Blocking things from your personal feed is what we call a FILTER. It's not moderation.
Except the AI will still need to be trained on data, which requires the very labor you believe will be eliminated.
Why don't we get AI to moderate Alexis? He stopped being relevant 10 years ago.
Is he spez, or is that someone different?
No. Reddit has 3 co-founders: Steve Huffman, the current CEO (/u/spez); Aaron Swartz (/u/aaronsw); and Alexis Ohanian (/u/kn0thing).
Great idea dipshit, who’s gonna foot the power bill, you?
Fuck spez
Fuck /u/kn0thing
RIP /u/aaronsw
No.
It is simple enough as it is to confuse an AI, make it forget its directives, or work around them. Not least of the concerns would be malicious actors such as Musk censoring our thoughts.
AI is not something humanity should, in any way, be subjugated by or subordinate to.
Ever.
To think we lost Aaron Swartz and this shitstain and Huffman are still with us. I don’t believe in the supernatural but this kind of shit makes a good case for the existence of a devil.
And you’d be in charge of the AI, right Alexis? What a cunt.
In my opinion, AI should only cover the worst content: the kind that harms people just by looking at it. Anything that's up for debate is a big no; however, there's plenty of content where merely seeing it is disturbing to anyone.
Yeah, but who decides what content is disturbing? I mean, there is CSAM, but the fact that it even exists shows that not everyone is disturbed by it.
You'll never be able to get a definition that covers your question. The world isn't black and white; it's gray, and because of that a line has to be drawn, and yes, it will always be considered arbitrary by some. But a line must be drawn nonetheless.
Agreed 100%, a line absolutely should be drawn.
That said, as a parent of 5 kids, I'm more concerned about false positives. I've heard enough horror stories about parents getting arrested over completely innocent pics of their kids as toddlers or infants that may have genitalia showing. Like them at 6 months old doing something silly in the tub, or what have you. I don't trust a computer program that doesn't understand context to accurately handle those kinds of photos. Frankly, parents shouldn't be posting those pics on social media to begin with, but I digress. It sets a bad precedent.
There’s a vast gulf between automated moderation systems deleting posts and calling the cops on someone.
You would be shocked.
My ex has called the cops on me more than a few times in the past, because she just didn't like how I was doing things. After I remarried and moved in with my now-wife, my ex called the cops and CPS on us for abusing our kids (we don't; we just have reasonable rules). That was a fun one, and it ended with me getting a lawyer and dragging her to court. The judge was not happy with her. Also, my neighbor called the cops on us a few months ago because one of my kids was having a temper tantrum, and then again because my two older kids (and some neighborhood friends) purposely excluded her kids from whatever game they were playing. We had a talk with our kids about excluding others and why you shouldn't do that, but between you and me, those kids are brats who bully a lot of the other neighborhood kids.
That’s just a taste of crazy. It’s not out of the realm of possibility for someone to be batshit enough to call the cops over an innocent baby-in-the-bathtub picture (though like I said, parents shouldn’t be sharing that on social media anyway, but here we are).
This is a fucking wild take
I mean I’m not defending CSAM, just to be clear. I just disagree with any usage of AI that could turn somebody’s life upside down based on a false positive. Plus you also get idiots who report things they just don’t like.
I couldn't agree more. Human moderators, especially unpaid ones, simply aren't the way to go, and Lemmy is a perfect example of this. Blocking users and communities and using content filters works to some extent, but it's an extremely blunt tool with a ton of collateral damage. I'd much rather tell an AI moderator what I am and am not interested in seeing and have it analyze the content to decide what gets filtered out.
Take this thread for example:
Cool. I think he should piss on the 3rd rail.
This pukebag is just as bad as Steve. Fuck both of them.
What a cunt.
How else is anyone going to filter out hateful, zero-value content like this without an intelligent moderation system? People are coming up with new insults faster than I can keep adding them to the filter list. AI could easily filter out 95% of toxic content like this.
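Something like this minimal sketch is all I mean. `toxicity_score` here is a hypothetical stand-in for any real text classifier (an off-the-shelf toxicity model would fill that role), and the threshold would be whatever the user picks:

```python
# Minimal sketch of a per-user toxicity filter.
# `toxicity_score` is a hypothetical placeholder for a trained model
# returning a score in [0.0, 1.0]; the word list is just for demo purposes.

def toxicity_score(text: str) -> float:
    # Placeholder heuristic; a real deployment would call a classifier.
    insults = {"dipshit", "cunt", "pukebag", "shitstain"}
    words = {w.strip(".,!?").lower() for w in text.split()}
    return 1.0 if words & insults else 0.0

def filter_feed(comments: list[str], tolerance: float) -> list[str]:
    """Hide comments whose toxicity exceeds the user's tolerance."""
    return [c for c in comments if toxicity_score(c) <= tolerance]

feed = [
    "This pukebag is just as bad as Steve.",
    "Opt-in filtering seems reasonable to me.",
]
print(filter_feed(feed, tolerance=0.5))  # only the second comment survives
```

The point is that the blocklist part is exactly what doesn't scale by hand, while the user keeps control of the threshold.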
Translation: An AI would allow me to maybe have an echo chamber since human moderators won’t work for me for free.
deleted by creator
Look, Reddit bad, AI bad. Engaging with anything more than the most surface-level reactions is hard, so why bother?
At a recent conference in Qatar, he said AI could even “unlock” a system where people use “sliders” to “choose their level of tolerance” about certain topics on social media.
That combined with a level of human review for people who feel they have been unfairly auto-moderated seems entirely reasonable to me.
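To be concrete about the slider idea, here's a hypothetical sketch; the categories, scores, and function names are all made up for illustration, not anything Ohanian actually specified:

```python
# Hypothetical sketch of per-topic tolerance sliders. Per-category scores
# would come from whatever classifier the platform runs on each post.

def visible(scores: dict[str, float], sliders: dict[str, float]) -> bool:
    """Show a post only if every category score is within the user's tolerance."""
    return all(scores.get(cat, 0.0) <= limit for cat, limit in sliders.items())

user_sliders = {"violence": 0.2, "politics": 0.8, "profanity": 0.5}
post_scores = {"violence": 0.7, "politics": 0.3}

print(visible(post_scores, user_sliders))  # False: violence 0.7 > slider 0.2
# A slider at 0 hides anything the classifier flags in that category at all,
# matching the "as long as it can be set to 0" case discussed above.
```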
Interesting fact: many bigger Lemmy instances are already using AI systems to filter out dangerous content in pictures before they even get uploaded.
Context: Last year there was a big spam attack of CSAM and gore on multiple instances. Some had to shut down temporarily because they couldn't keep up with moderation. I don't remember the name of the tool, but some people made a program that uses AI to try to recognize these types of images and filter them out. This heavily reduced the amount of moderation needed during those attacks.
Early AI moderation systems are actually something more platforms should use. Human moderators, even paid ones, shouldn't need to wade through large amounts of violent content every day. Moderators at Facebook have been arguing these points for a while now, many of whom have developed mental health issues through their work and don't get any medical support. So no matter what you think of AI and whether it's moral, this is actually one of the few good applications, in my opinion.
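As a rough sketch of how a pre-upload gate like the one described above might work (the function name and threshold are assumptions on my part; I don't know the actual tool's internals):

```python
# Rough sketch of a pre-upload image gate. The image is scanned before it
# is stored, so neither users nor moderators ever see rejected material.
# `prohibited_probability` is hypothetical; plug in a real vision model.

THRESHOLD = 0.9  # high bar, to keep false positives rare

def prohibited_probability(image_bytes: bytes) -> float:
    """Stand-in for a real classifier returning P(image is prohibited)."""
    raise NotImplementedError("replace with an actual model call")

def accept_upload(image_bytes: bytes) -> bool:
    # Reject before storage; flagged uploads never enter the media store.
    return prohibited_probability(image_bytes) < THRESHOLD
```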
Old-school AI, like automod, or LLM/genAI mod and image-recognition tools?
I’d need to see some kind of proof Lemmy instances were using LLM mod tools; I’d be very interested.
Moderators at Facebook have been arguing these points for a while now, many of whom have developed mental health issues through their work and don't get any medical support
How in the actual hell can Facebook not provide medical support to these people, after putting them through actual hell? That is actively evil of them.
The real answer? They use people in countries like Nigeria that have fewer worker-protection laws.
I agree, but it's also not surprising. I think somebody else posted the article about Kenyan Facebook moderators in this comment section somewhere, if you want to know more.
1984 is getting closer than ever!
I dread to think about the amount of doublespeak this would create as people try to get around the AI so they can say what they want.
I think I am for this use of AI. Specifically for image moderation, not really community moderation. Yes, it would be subject to whatever bias they want, but they already moderate with a bias.
If they could create this technology, situations like the one in the linked article could be avoided: https://www.cnn.com/2024/12/22/business/facebook-content-moderators-kenya-ptsd-intl/index.html
Edit: To be clear, not to replace existing reddit mods, but to be a supplemental tool.
I agree. AI could be a good first line of defense, specifically for sorting out traumatizing gore and the like.
For normal moderation, I think it's only useful in the same way as spell check: a second set of eyes, but a human makes the final call.
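Roughly this pattern, sketched here with a hypothetical `flag_score` coming from whatever model the platform runs; nothing gets removed automatically:

```python
# Sketch of the "spell check" pattern: the model only flags, a human decides.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    pending: list[tuple[str, float]] = field(default_factory=list)

    def triage(self, comment: str, flag_score: float, threshold: float = 0.7) -> None:
        """Queue flagged comments for a human; never auto-remove."""
        if flag_score >= threshold:
            self.pending.append((comment, flag_score))

queue = ReviewQueue()
queue.triage("borderline comment", flag_score=0.85)
queue.triage("clearly fine comment", flag_score=0.05)
for comment, score in queue.pending:
    print(f"needs a human decision ({score:.2f}): {comment}")
```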
Hotdog / Not Hotdog
But yeah, a semantic image filter could be a good first line, of course with human oversight.
And frankly, seeing the mod abuse that goes on in many communities, having AI moderators helping with text moderation would be nice too. At least they’d be more consistent.
Oh yeah, let's do that and watch everything descend into chaos.
Pinterest lets its AI run checks on pins, and images that don't violate the ToS at all get deleted. Accounts get permanently banned because the AI claims their images violate the ToS (I guess plants and houses are violent).
What could go wrong? Nothing, eh? /sarcasm
No thanks.