• bedrooms@kbin.social · 9 months ago

    As I always write, trying to restrict AI training on copyright grounds will only backfire. The sad truth is that malicious parties (dictatorships) will get more training material because they won’t abide by the rules. The end result is that dictators would outperform democracies in future generations of AI, unless we treat AI training like human reading.

    • zaphod@lemmy.ca · 9 months ago

      You know what?

      I’m fine with that hypothetical risk.

      “The bad guys will do it anyway so we need to do it, too” is the worst kind of fatalism. That kind of logic can be used to justify any number of heinous acts, and I refuse to live in a world where the worst of us are allowed to drag down the rest of us.

      • Blisterexe@lemmy.zip · 9 months ago

        But if we make training AI without copyright permission illegal, it will hamper open-source models while not affecting closed-source ones, because they could just buy training data from big social media conglomerates.

      • bedrooms@kbin.social · 9 months ago

        The consequences of falling behind are gravely different from those of most heinous acts. They can impact the military, elections, espionage, or whatever.

        • zaphod@lemmy.ca · 9 months ago

          Really? I’m supposed to believe AI is somehow more existentially risky than, say, chemical or biological weapons, or human cloning and genetic engineering (all of which are banned or heavily regulated in developed nations)? Please.

          I understand the AI hype artists have done a masterful job convincing everyone that their tech is so insanely powerful (and thus incredibly valuable to prospective investors) that it’ll wipe out humanity, but let’s try to be realistic.

          But you know, let’s take your premise as a given. Even despite that risk, I refuse to let an unknowable hypothetical be used to hold our better natures hostage. There are countless examples of governments and corporations using vague threats to get us to accept bad deals at the barrel of a virtual gun. Sorry, I will not play along.

          • davehtaylor@beehaw.org · 9 months ago

            If you don’t see how even the most basic AI images, videos, deepfakes, etc. can manipulate the public, the electorate, popular opinion, or even sow just enough doubt to cause a problem, then I don’t know what to tell you.

            People are already dying because of deepfakes and fake AI porn. We know that most people who see a headline on Facebook will never click through to read it; they will just accept the headline and/or the synopsis as fact. They will accept whatever a 1000x-reshared image says, without sources or verification. The fact that a picture or video might have a person with 8 fingers on one hand in the background isn’t going to prevent them from taking in the message. And we’ve all literally seen people around the web say, explicitly, something to the effect of “I don’t care if the story is true or not, it’s a real issue we need to consider” when we know for a fact that it is not.

            Yes, mis- and disinformation are far more of an existential threat than chemical or biological weapons, and we know this because we are already seeing the consequences. If you refuse to see that, then you are lost.

    • davehtaylor@beehaw.org · 9 months ago

      “Bad guys are going to do bad things, so we shouldn’t even bother trying to make things better, and should just let the dystopia happen” is not the answer.