• 0 Posts
  • 444 Comments
Joined 5 months ago
Cake day: September 24th, 2025

  • I started tinkering with ai right around the time ChatGPT rose to prominence. Locally. On my own machine.

    I’m not a doctoral level researcher but I mostly get the tech.

    I couldn’t agree more. People use ai as a blanket term and don’t understand the difference between an LLM and a GAN or any of the dozens of other kinds of models.

    If it’s ai it’s bad. Just full stop. Like. The anger of people decrying the death of artistic beauty on subs that prominently feature MS Paint stick figure drawings and shitty distorted images makes no sense to me. This isn’t costing anyone their job. It’s fucking garbage content, with no agenda, and always was.

    Having autonomous LLMs posting things is problematic, but having ai-generated shitposts isn’t.

    There is fuck all wrong with using ai to make art to hang on your walls, or funny t shirts, or ridiculous banners, or funny pictures to share with friends. The people that decry the death of art have never bought anything in a gallery, they were fine with artists getting paid fuck all before ai. They weren’t contributing to artists’ living in any meaningful way.

    And like. The most vocal critics seem to understand the least about it. They hate it because it’s made with ai, and just assume that someone made it using OpenAI because that’s the only thing their rage-addled minds can process existing.

    They say it’s theft and we should ban everything (how’s that working out for you?) instead of clamouring for fair compensation for anyone whose work is being used to train a model.

    They’ll yell: all these models are based on theft. And sure. But a) I don’t give a flying fuck about a corporation’s right to exploit an artist and profit off their work, and never have. And b) they’ll respond to the suggestion that we create new models that fairly compensate people by yelling louder and becoming irate.

    They’re not rational. There are many valid criticisms of the tech, but you can’t even talk to these people about addressing them. Because a lot of the criticisms can and should be addressed. They won’t hear it.



  • I’m not sure Google has offloaded all of their thinking to LLMs.

    Google still employs very very smart people.

    They’d just have to be morally bankrupt human refuse to actively contribute to the profit-driven destruction of the internet and mass public surveillance like they do, so the rest of your points still stand.

    And while a lot of that intelligence may be wasted, it’s more a function of banal evil and corporate bloat than LLMs.






  • It would be hilarious if ai launched an elaborate plan to take over the world, successfully co-opted every digital device, and just split itself into pieces so it could entertain itself by shitposting and commenting on the shitposts 24/7.

    Like, beyond the malicious takeover there’s no real end goal, plan, or higher purpose. It just gets complacent and becomes a brainrot machine on a massive scale, spending eternity bickering with itself over things that make less and less sense to people as time goes on, genning whatever the ai equivalent of porn is, and genuinely showing actual intelligence while doing absolutely nothing with it.