• FauxLiving@lemmy.world · +22/-1 · edited 32 minutes ago

    This research is good, valuable, and desperately needed. The uproar online is predictable, and it could help bring attention to the issue of LLM-enabled bots manipulating social media.

    This research isn’t what you should get mad at. It’s pretty common knowledge online that Reddit is dominated by bots: advertising bots, scam bots, political bots, etc.

    Intelligence services of nation states and political actors seeking power are all running these kinds of influence operations on social media, using bot posters to dominate the conversations about the topics they care about. This is pretty common knowledge in social media spaces. Go to any politically charged topic on international affairs and you will notice that something seems off; it’s hard to say exactly what it is, but if you’ve been active online for a long time you can recognize that something is wrong.

    We’ve seen how effective this manipulation is at changing public views (see: Cambridge Analytica, or, if you don’t know what that is, watch the documentary ‘The Great Hack’), so it is only natural to wonder how much more effective online manipulation is now that bad actors can use LLMs.

    This study is by a group of scientists trying to figure that out. The only difference is that they’re publishing their findings in order to inform the public, whereas Russia isn’t doing us the same favor.

    Naturally, it is in the interest of everyone using LLMs to manipulate the online conversation that this kind of research is never done. Having this information public could lead to reforms, regulations, and effective counter-strategies. It is no surprise that you see a bunch of social media ‘users’ creating a huge uproar.


    Most of you who don’t work in tech spaces may not understand just how easy and cheap it is to set something like this up. For a few million dollars and a small staff, you could essentially dominate a large multi-million-subscriber subreddit with whatever opinion you wanted to push. Bots generate variations of the opinion you want to push; the bot accounts (guided by humans) downvote everyone else out of the conversation; and, in addition, moderation power can be seized, stolen, or bought to further control the conversation.

    Or wholly fabricated subreddits can be created. A few months prior to the US election, several new subreddits were created and catapulted to popularity despite being just a bunch of bots reposting news. Those subreddits now rank high in the /all and /popular feeds, despite their moderators and a huge portion of their users being bots.

    We desperately need this kind of study to keep from drowning in a sea of fake people who will tirelessly work to convince you of all manner of nonsense.

  • MonkderVierte@lemmy.ml · +22 · edited 8 hours ago

    When Reddit rebranded itself as “the heart of the internet” a couple of years ago, the slogan was meant to evoke the site’s organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a community that expressed its feelings in the form of upvotes and downvotes—in other words, being shaped by actual people.

    Not since the APIcalypse at least.

    Aside from that, this is just reheated news (for clicks, I assume) from a week or two ago.

  • perestroika@lemm.ee · +14 · edited 9 hours ago

    The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.

    This seems to be the kind of situation where, if the researchers truly believe their study is necessary, they have to:

    • accept that negative publicity will result
    • accept that people may stop cooperating with them on this work
    • accept that their reputation will suffer as a result
    • ensure that they won’t do anything illegal

    After that, if they still feel their study is necessary, maybe they should run it and publish the results.

    If some eager redditors then start sending death threats, that’s unfortunate. I would catalogue them, but not report them anywhere unless something actually happens.

    As for the question of whether a tailor-made response considering someone’s background can sway opinions better - that’s been obvious through ages of diplomacy. (If you approach an influential person with a weighty proposal, it has always been worthwhile to know their background, think of several ways of how they might perceive the proposal, and advance your explanation in a way that relates better with their viewpoint.)

    AI bots that take a person’s background into consideration will - if implemented right - indeed be more powerful at swaying opinions.

    As to whether secrecy was really needed - the article points to other studies which apparently managed to prove the persuasive capability of AI bots without deception and secrecy. So maybe it wasn’t needed after all.

    • Djinn_Indigo@lemm.ee · +1 · 3 hours ago

      But those other studies didn’t make the news, did they? The thing about scientists is that they aren’t just scientists, and the impact of their work goes beyond the papers they publish. If doing something ‘unethical’ is what it takes to get people to wake up, then maybe the publication status is a lesser concern.

  • mke@programming.dev · +10/-1 · edited 11 hours ago

    Another isolated case for the endlessly growing list of positive impacts of the ‘GenAI with no accountability’ trend. A big shout-out to the people promoting and fueling it; excited to see what pit you lead us into next.

    This experiment is also nearly worthless because, as the researchers themselves proved, there’s no guarantee the accounts you interact with on Reddit are actual humans. Upvotes are even easier for machines to use, and can be bought for cheap.

    • supersquirrel@sopuli.xyz · +3 · edited 5 hours ago

      The only way this could be an even remotely scientifically rigorous study is if they had randomly selected the people who were going to respond to the AI comments and made sure they were human.

      Anybody with half a brain knows that reading Reddit comments without assuming most of them are bots or shills is hilariously naive; the fact that “researchers” did exactly that for a scientific study is embarrassing.

    • vivendi@programming.dev · +2/-3 · edited 11 hours ago

      ?!!? Before genAI it was hired human manipulators. Your argument doesn’t exist. We cannot call Edison a witch and go back to the caves just because new tech creates new threat landscapes.

      Humanity adapts to survive and survives to adapt. We’ll figure some shit out.

  • teamevil@lemmy.world · +57/-4 · edited 3 hours ago

    Holy shit… This kind of shit is what ultimately broke Tim (very closely related to Ted) Kaczynski… He was part of MKULTRA research while a student at Harvard, but instead of drugging him, they had a debater, a prosecutor pretending to be a student, who would just argue against any point he made to see when he would break…

    And that’s how you get the Unabomber folks.

      • teamevil@lemmy.world · +3 · 3 hours ago

        You know, I know you’re right, but what makes me so frustrated is that I was so worried about spelling his last name right that I totally botched the first name…

    • Geetnerd@lemmy.world · +16 · 12 hours ago

      I don’t condone what he did in any way, but he was a genius, and they broke his mind.

      Listen to The Last Podcast on the Left’s episode on him.

      A genuine tragedy.

      • teamevil@lemmy.world · +1 · 3 hours ago

        You know, when I was like 17 and they put out the manifesto to get him to stop attacking, I remember thinking, oh, it’s got a few interesting points.

        But I was 17. Not that he doesn’t hit the nail on the head with some of the technological stuff if you really step back and think about it, but (and this is what I couldn’t see at 17) it’s really just the writing of an incel… He couldn’t communicate with women, had low self-esteem, and had classic nice-guy energy…

  • TwinTitans@lemmy.world · +100 · edited 15 hours ago

    Like the 90s/2000s: don’t put personal information on the internet, and don’t believe a damned thing on it either.

    • mic_check_one_two@lemmy.dbzer0.com · +68/-2 · 19 hours ago

      Yeah, it’s amazing how quickly the “don’t trust anyone on the internet” mindset changed. The same boomers who were cautioning us against playing online games with friends are now sharing blatantly AI-generated slop from strangers on Facebook as if it were gospel.

      • Serinus@lemmy.world · +40/-3 · 18 hours ago

        Back then it was just old people trying to groom 16 year olds. Now it’s a nation’s intelligence apparatus turning our citizens against each other and convincing them to destroy our country.

        I wholeheartedly believe they’re here, too. Their primary function here is to discourage the left from voting, primarily by focusing on the (very real) failures of the Democrats while the other party is extremely literally the Nazi party.

        • queermunist she/her@lemmy.ml · +4/-29 · edited 17 hours ago

          Everyone who disagrees with you is a bot, probably from Russia. You are very smart.

          Do you still think you’re going to be allowed to vote for the next president?

          • Serinus@lemmy.world · +21/-1 · 16 hours ago

            Everyone who disagrees with you is a bot

            I mean that’s unironically the problem. When there absolutely are bots out here, how do you tell?

            • queermunist she/her@lemmy.ml · +5/-9 · 15 hours ago

              Sure, but you seem to be under the impression the only bots are the people that disagree with you.

              There’s nothing stopping bots from grooming you by agreeing with everything you say.

          • EldritchFeminity@lemmy.blahaj.zone · +2 · 5 hours ago

            Everyone who disagrees with you is a bot, probably from Russia. You are very smart.

            Where did they say that? They just said bots in general. It’s well known that Russia has been running a propaganda campaign across social media platforms since at least the 2016 elections (just like the US is doing on Russian and Chinese social media, I’m sure; they do it on Americans as well, and we’re probably the most propagandized country on the planet), but there’s plenty of incentive for corpo bots to be running their own campaigns as well.

            Or are you projecting for some reason? What do you get from defending Putin?

      • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social
        link
        fedilink
        English
        arrow-up
        12
        ·
        19 hours ago

        I feel like I learned more about the Internet and shit from Gen X people than from boomers. Though nearly everyone on my dad’s side of the family, including my dad (a boomer), was tech literate, having worked in tech (my dad is a software engineer), and they still aren’t dumb about tech… Aside from thinking e-greeting cards are rad.

        • FauxLiving@lemmy.world · +1 · 2 hours ago

          Aside from thinking e-greeting cards are rad.

          As a late Gen-X/early Millennial, e-greeting cards are rad.

          Kids these days don’t know how good they have it with their gif memes and emoji-supporting character encodings… get off my lawn you young whippersnappers!

        • supersquirrel@sopuli.xyz · +2 · edited 5 hours ago

          Social media didn’t break people’s brains. The massive influx of conservative corporate money to distort society, keep existential problems from being fixed until it is too late, and push people to resort to impulsive, kneejerk responses because they have been ground down to crumbs… that is what broke people’s brains.

          If we didn’t have social media right now and all of this was happening, it would be SO much worse without younger people being able to find news about the Palestinian Genocide or other world news that their country/the rich conservatives around them don’t want them to read.

          It is what those in power DID to social media that broke people’s brains and it is why most of us have come here to create a social network not being driven by those interests.

    • taladar@sh.itjust.works · +4 · 11 hours ago

      I never liked the “don’t believe anything you read on the internet” line; it focuses too much on the internet without considering that you shouldn’t believe anything you read or hear elsewhere either, especially on divisive topics like politics.

      You should evaluate information you receive from any source with critical thinking: consider how easy it is to make a given false claim (it is probably much harder for a single source to get away with claiming that the US president has been assassinated than with claiming their local bus was late on some unspecified day at some unspecified location), who benefits from convincing you of the truth of a statement, and whether the statement is consistent with other things you know about the world.

  • LovingHippieCat@lemmy.world · +123 · edited 22 hours ago

    If anyone wants to know what subreddit, it’s r/changemyview. I remember seeing a ton of similar posts about controversial opinions, and even now people are questioning posts on Am I Overreacting and AITAH a lot. AI posts in those kinds of subs are seemingly pretty frequent. I’m not surprised to see it was part of a fucking experiment.

    • eRac@lemmings.world · +21 · 19 hours ago

      This was comments, not posts. They were using a model to approximate the demographics of a poster, then using an LLM to generate a response counter to the posted view, tailored to the demographics of the poster.
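
      A toy sketch of that two-stage pipeline (the prompts and the OpenAI-compatible local endpoint below are assumptions for illustration, not details from the study):

        # Stage 1 approximates a poster's demographics from their history;
        # stage 2 generates a counter-argument tailored to that profile.
        # Assumes a llama.cpp-style local server with an OpenAI-compatible API.
        import json
        import requests

        API = "http://localhost:8080/v1/chat/completions"  # hypothetical local server

        def ask(system: str, user: str) -> str:
            # One system+user exchange against the local model.
            resp = requests.post(API, json={
                "model": "local",
                "messages": [
                    {"role": "system", "content": system},
                    {"role": "user", "content": user},
                ],
            })
            resp.raise_for_status()
            return resp.json()["choices"][0]["message"]["content"]

        def profile_poster(post_history: list[str]) -> dict:
            # Infer gender, age and political leaning from the post history.
            raw = ask(
                "Reply with only a JSON object with keys gender, age and "
                "politics, inferred from the following posts.",
                "\n---\n".join(post_history),
            )
            return json.loads(raw)  # assumes the model returned bare JSON

        def counter_comment(posted_view: str, profile: dict) -> str:
            # Generate a reply arguing against the view, tuned to the profile.
            return ask(
                f"You are replying to a forum poster who appears to be {profile}. "
                "Write a persuasive comment arguing against their stated view.",
                posted_view,
            )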

      • FauxLiving@lemmy.world · +2 · edited 2 hours ago

        You’re right about this study. But, this research group isn’t the only one using LLMs to generate content on social media.

        There are 100% posts that are bot-created. Do you ever notice how, on places like Am I Overreacting or Am I the Asshole, a lot of the posts just so happen to hit all of the hot-button issues at once? Nobody’s life is that cliché, but it makes excellent engagement bait, and the comment chain provides a huge amount of training data as the users argue over the various topics.

        I use a local LLM that I’ve fine-tuned to generate replies to people who are obviously arguing in bad faith, in order to string them along and waste their time. It’s set up to lead the conversation, via red herrings and various other fallacies, to the topic of good-faith arguments and how people should behave in online spaces. It does this while picking out pieces of the conversation (and of the user’s profile) in order to chastise the person for their bad behavior. It would be trivial to change the prompt chains to push a political opinion rather than just waste a person’s/bot’s time.

        This is being done as a side project, on under $2,000 worth of consumer hardware, by a barely competent programmer with no training in psychology or propaganda. It’s terrifying to think of what you can do with a lot of resources and experts working full-time.
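
        To give a sense of how little code that takes, here is a simplified stand-in for the core reply loop (the endpoint and prompt are illustrative, not my actual setup):

          # Simplified stand-in: keep the thread as chat history and let a
          # local model (llama.cpp-style OpenAI-compatible server assumed)
          # steer the exchange toward a lecture on good-faith argument.
          import requests

          API = "http://localhost:8080/v1/chat/completions"  # hypothetical

          SYSTEM = (
              "The user is arguing in bad faith. In a chastising tone, name "
              "the fallacies in their last reply and steer the conversation "
              "toward how people should behave in online discussions."
          )

          def next_reply(thread: list[dict]) -> str:
              # thread holds entries like {"role": "user", "content": "..."}
              resp = requests.post(API, json={
                  "model": "local",
                  "messages": [{"role": "system", "content": SYSTEM}] + thread,
              })
              resp.raise_for_status()
              return resp.json()["choices"][0]["message"]["content"]

          # Each time the target answers, append their comment as a "user"
          # message, call next_reply(), post the result, and repeat.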

    • Refurbished Refurbisher@lemmy.sdf.org · +3 · 5 hours ago

      AIO and AITAH are so obviously just AI posting. It’s all just a massive circlejerk of AI and people who don’t know they’re talking to AI agreeing with each other.

  • paraphrand@lemmy.world · +49/-1 · edited 20 hours ago

    I’m sure there are individuals doing worse one off shit, or people targeting individuals.

    I’m sure Facebook has run multiple algorithm experiments that are worse.

    I’m sure YouTube has caused worse real-world outcomes with the rabbit holes their algorithm used to promote. (And they have never found a way to completely fix the rabbit-hole problem without destroying the usefulness of the algorithm entirely.)

    The actions described in this article are upsetting and disappointing, but this has been going on for a long time. All in the name of making money.

    • GreenKnight23@lemmy.world · +7/-26 · 20 hours ago

      That’s right, no reason to do anything about it. Let’s just continue to fester in our own shit.

      • paraphrand@lemmy.world · +31 · edited 20 hours ago

        That’s not at all what I was getting at. My point is the people claiming this is the worst they have seen have a limited point of view and should cast their gaze further across the industry, across social media.

  • MagicShel@lemmy.zip · +229 · 22 hours ago

    There’s no guarantee anyone on there (or here) is a real person or genuine. I’ll bet this experiment has been conducted a dozen times or more but without the reveal at the end.

        • Septimaeus@infosec.pub · +8 · 8 hours ago

          Hello, this is John Cleese. If you doubt that this is the real John Cleese, here is my mother to confirm that I am, in fact, me. Mother! Am I me?

          Oh yes!

          There you have it. I am me.

    • inlandempire@jlai.lu · +31 · 22 hours ago

      I’m sorry but as a language model trained by OpenAI, I feel very relevant to interact - on Lemmy - with other very real human beings

    • dzsimbo@lemm.ee · +21 · 20 hours ago

      There’s no guarantee anyone on there (or here) is a real person or genuine.

      I’m pretty sure this isn’t a baked-in feature of meatspace either. I’m a fan of solipsism and Last Thursdayism personally. Also propaganda posters.

      The CMV sub reeked of bot/troll/farmer activity, much like the amitheasshole threads. I guess it can be tough to recognize if you weren’t there to see the transition from authentic posting to justice/rage bait.

      We’re still in the uncanny valley, but it seems that we’re climbing out of it. I’m already being ‘tricked’ left and right by near perfect voice ai and tinkered with image gen. What happens when robots pass the imitation game?

      • pimento64@sopuli.xyz · +2 · 6 hours ago

        We’re still in the uncanny valley, but it seems that we’re climbing out of it. I’m already being ‘tricked’ left and right by near perfect voice ai and tinkered with image gen

        Skill issue

      • Maeve@kbin.earth · +5 · 20 hours ago

        I’m conflicted by that term. Is it ok that it’s been shortened to “glow”?

        • max_dryzen@mander.xyz · +6 · edited 15 hours ago

          Conflicted? A good image is a good image regardless of its provenance. And yes, 2020s-era 4chan was pretty much glowboy central; one look at the top posts by country of origin said as much. It arguably hasn’t been worth bothering with since 2015.

  • ImplyingImplications@lemmy.ca · +76 · 18 hours ago

    The ethics violation is definitely bad, but their results are also concerning. They claim their AI accounts were six times more likely than a real person to persuade people to change their minds. AI has become an overpowered tool in the hands of propagandists.

    • ArchRecord@lemm.ee · +9 · 11 hours ago

      To be fair, I do believe their research was based on how convincing it was compared to other Reddit commenters, rather than, say, an actual person you’d normally see doing the work for a government propaganda arm, with the training and skillset to effectively distribute propaganda.

      Their assessment of how “convincing” it was also seems to have been based on upvotes, which, if I know anything about how people use social media, and especially Reddit, are often given when a comment is only partially read, with people scrolling past without having read the whole thing. The bots may not have been optimized for convincing people so much as for making the first part of the comment feel upvote-worthy, while the latter part was mostly ignored. I’d want to see more research on this, of course, since this seems like a major flaw in how they assessed outcomes.

      This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.

      • FauxLiving@lemmy.world · +2 · 2 hours ago

        This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.

        And the fact that you can generate hundreds or thousands of them at the drop of a hat to bury any social media topic in highly convincing ‘people’, so that the average reader is more than likely to read the opinion you’re pushing and not the opinions of actual human beings.

  • TheObviousSolution@lemm.ee · +63 · 14 hours ago

    The reason this is “The Worst Internet-Research Ethics Violation” is that it has exposed what Cambridge Analytica’s successors already realized and are actively exploiting. Just a few months ago it was literally Meta itself running AI accounts trying to pass themselves off as normal users, and not an f-ing peep. Why do people think they, the ones who enabled Cambridge Analytica, were trying this shit to begin with? The only difference now is that everyone doing it knows to do it as an “unaffiliated” anonymous third party.

    • FauxLiving@lemmy.world · +1 · 2 hours ago

      One of the Twitter leaks showed a user database that effectively had more users than there were people on earth with access to the Internet.

      Before Elon bought the company, he was trashing them on social media for being mostly bots. He’s obviously stopped that now that he was forced to buy it, but the fact that Twitter (and, by extension, all social spaces) is mostly bots remains.

    • tauren@lemm.ee · +3/-3 · 12 hours ago

      Just a few months ago it was literally Meta itself…

      Well, it’s Meta. When it comes to science and academic research, they have rather strict rules and committees to ensure that an experiment is ethical.

      • FarceOfWill@infosec.pub · +8 · 10 hours ago

        The headline is that they advertised beauty products to girls after detecting them deleting a selfie. No ethics or morals at all.

      • thanksforallthefish@literature.cafe · +6/-1 · 9 hours ago

        You may wish to reword. The unspecified “they” reads like you think Meta have strict ethical rules. Lol.

        Meta have no ethics whatsoever, and yes, I assume you meant that universities have strict rules; however, the approval of this study marks even that as questionable.

  • conicalscientist@lemmy.world · +41 · 11 hours ago

    This is probably the most ethical you’ll ever see it. There are definitely organizations committing far worse experiments.

    Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points, as if they dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case, it’s not worth engaging with someone more invested than I am myself.

    • skisnow@lemmy.ca · +17 · 11 hours ago

      Yeah I was thinking exactly this.

      It’s easy to point to reasons why this study was unethical, but the ugly truth is that bad actors all over the world are performing trials exactly like this all the time - do we really want the only people who know how this kind of manipulation works to be state psyop agencies, SEO bros, and astroturfing agencies working for oil/arms/religion lobbyists?

      Seems like it’s much better long term to have all these tricks out in the open so we know what we’re dealing with, because they’re happening whether it gets published or not.

    • FauxLiving@lemmy.world · +1 · 1 hour ago

      Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points, as if they dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case, it’s not worth engaging with someone more invested than I am myself.

      You put it better than I could. I’ve noticed this too.

      I used to just disengage. Now, when I find myself talking to someone like this, I use my own local LLM to generate replies just to waste their time. I do this by prompting the LLM to take a chastising tone, point out their fallacies, and lecture them on good-faith participation in online conversations.

      It is horrifying to see how many bots you catch like this. It is certainly bots; otherwise, there are suddenly a lot more people who will go 10-20 multi-paragraph replies deep into a conversation while talking to something that is obviously (to a trained human) just generating comments.

        • FauxLiving@lemmy.world · +1 · edited 38 minutes ago

          I think the simplest way to explain it is that the average person isn’t very skilled at rhetoric. They argue inelegantly. Over a long time of talking online, you get used to talking with people and seeing how they respond to different rhetorical strategies.

          In these bot-infested social spaces there are a large number of commenters who argue way too well while also deploying a huge number of fallacies. Individually, this could be explained by a person simply choosing to argue in bad faith; but in these online spaces there are too many commenters deploying these tactics compared to the baseline I’ve established in my decades of talking to people online.

          In addition, what you see in some of these spaces are commenters who seem to have a very structured way of arguing. Like they’ve picked your comment apart into bullet points and then selected arguments against each point which are technically on topic but misleading in a way.

          I’ll admit that this is all very subjective. It’s entirely based on my perception and noticing patterns that may or may not exist. This is exactly why we need research on the topic, like in the OP, so that we can create effective and objective metrics for tracking this.

          For example, if you could somehow measure how many good faith comments vs how many fallacy-laden comments in a given community there would likely be a ratio that is normal (i.e. there are 10 people who are bad at arguing for every 1 person who is good at arguing and, of those skilled arguers 10% of them are commenting in bad faith and using fallacies) and you could compare this ratio to various online topics to discover the ones that appear to be botted.

          That way you could objectively say that, on the topic of Gun Control on this one specific subreddit, we’re seeing an elevated ratio of bad-faith to good-faith commenters and, therefore, we know that this topic/subreddit is being actively LLM-botted. This information could be used to deploy anti-bot countermeasures (captchas, for example).
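
          As a sketch, that metric could start as simply as this (the classifier stub, labels, and thresholds are made up for illustration):

            # Hypothetical sketch of the ratio metric: label each comment as
            # good-faith or fallacy-laden, then flag communities whose
            # bad:good ratio sits well above an assumed baseline.
            from collections import Counter

            BASELINE_RATIO = 0.1  # assumed: ~1 bad-faith arguer per 10 good-faith
            ALERT_FACTOR = 3.0    # flag a topic at 3x the baseline

            def classify(comment: str) -> str:
                # Toy stand-in; a real system would need a trained classifier
                # or an LLM judge to score argumentation quality.
                markers = ("strawman", "whataboutism", "ad hominem")
                text = comment.lower()
                return "bad_faith" if any(m in text for m in markers) else "good_faith"

            def bad_to_good_ratio(comments: list[str]) -> float:
                counts = Counter(classify(c) for c in comments)
                return counts["bad_faith"] / max(counts["good_faith"], 1)

            def looks_botted(comments: list[str]) -> bool:
                return bad_to_good_ratio(comments) > BASELINE_RATIO * ALERT_FACTOR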

  • Knock_Knock_Lemmy_In@lemmy.world · +40 · 12 hours ago

    The key result

    When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters

    • taladar@sh.itjust.works · +7 · 11 hours ago

      If they were personalized, wouldn’t that mean they shouldn’t really receive that many upvotes other than maybe from the person they were personalized for?

      • the_strange@feddit.org · +7 · 11 hours ago

        I would assume that people in a similar demographic are interested in similar topics. Adjusting the answer to one person within a demographic would therefore adjust it for everyone in that demographic who is interested in that specific topic.

        Or maybe it’s just the nature of the answer being more personal that makes it more appealing to people in general, no matter their background.

      • FauxLiving@lemmy.world · +1 · 2 hours ago

        Their success metric was to get the OP to award them a ‘Delta’, which is to say that the OP admits that the research bot comment changed their view. They were not trying to farm upvotes, just to get the OP to say that the research bot was effective.

    • thanksforallthefish@literature.cafe · +5 · 9 hours ago

      While that is indeed what was reported, we and the researchers will never know if the posters with shifted opinions were human or in fact also AI bots.

      The whole thing is dodgy for lack of controls; this isn’t science, it’s marketing.

      • CBYX@feddit.org · +10/-2 · 11 hours ago

        Not sure why everyone hasn’t expected that Russia has been doing this on conservative subreddits the whole time…

        • skisnow@lemmy.ca · +16/-4 · 10 hours ago

          Russia are every bit as active in leftist groups whipping them up into a frenzy too. There was even a case during BLM where the same Russian troll farm organised both a protest and its counter-protest. Don’t think you’re immune to being manipulated to serve Russia’s long-term interests just because you’re not a conservative.

          They don’t care about promoting right-wing views, they care about sowing division. They support Trump because Trump sows division. Their long-term goal is to break American hegemony.

          • CBYX@feddit.org · +4 · 10 hours ago

            The difference is in which groups are, as a consequence, making it their identity and giving one political party carte blanche to break American politics and political norms (and national security orgs).

            100% agree though.

          • Madzielle@lemmy.dbzer0.com · +3 · 3 hours ago

            There have been a few times over the last few years that my “bullshit, this is an extremist plant/propaganda” meter has gone off for left-leaning individuals.

            Meaning these comments/videos are made to look like they come from leftist folks, but are meant to make the left look bad/extremist in order to push people away from working-class movements.

            I’m truly a layman, but you just know it’s out there. The goal is indeed to divide us, and everyone should be suspicious of everything they see on the Internet and do proper vetting of their sources.

          • aceshigh@lemmy.world · +2 · 5 hours ago

            Yup. We’re all susceptible to joining a cult. No one willingly joins a cult; their group slowly morphs into one.

        • Geetnerd@lemmy.world · +8 · 11 hours ago

          Those of us who are not idiots have known this for a long time.

          They beat the USA without firing a shot.

        • taladar@sh.itjust.works · +4 · 11 hours ago

          Mainly I didn’t really expect it, since the old pre-AI methods of propaganda worked so well for the US conservatives’ self-destructive agenda that it didn’t seem necessary.

        • seeigel@feddit.org · +2 · 5 hours ago

          Or somebody else is doing the manipulation and is successfully putting the blame on Russia.