G/O Media, a major online media company that runs publications including Gizmodo, Kotaku, Quartz, Jezebel, and Deadspin, has announced that it will begin a “modest test” of AI content on its sites.

The trial will include “producing just a handful of stories for most of our sites that are basically built around lists and data,” G/O Media editorial director Merrill Brown wrote. “These features aren’t replacing work currently being done by writers and editors, and we hope that over time if we get these forms of content right and produced at scale, AI will, via search and promotion, help us grow our audience.”

  • ConsciousCode@beehaw.org · 1 year ago

    As someone working on LLM-based stuff, this is a terrible idea with current models and techniques unless they have a dedicated team of human editors to keep the AI from going off the rails, to say nothing of the cruelty of firing people to save maybe a few hundred thousand dollars while taking a substantial drop in quality. Models can be very smart with proper prompting, but they're also inconsistent and need a lot of handholding for anything requiring executive function or deliberation (like, say, writing an article meant to make a point). It might be possible with current models, but the field is way too new and the techniques too crude to make this work without a few million dollars in R&D, and even that would probably be wasted when new developments land nearly every week anyway.

    Also wait, wtf are they going to do for game reviews? Reinforcement learning can barely complete Minecraft (an astonishing development in itself, but so bleeding-edge it might just cut to the bone). Even if they wired up some ultra-high-tech multimodal, multi-model AI to play a game and review it, it would need to be an artificial person (AGI plus autonomy) to even approximate human sensibilities and preferences.

  • Storksforlegs@beehaw.org · 1 year ago

    I know there are already people working on AI filters to screen out spam articles and other AI-generated content.

    I’d pay for that, it’ll be the new adblocker. Fuck any company that does this.

    • rwhitisissle@beehaw.org · 1 year ago

      > I know there are already people working on AI filters to screen out spam articles and other AI-generated content.

      These will probably (ironically) be largely powered by AI: you train an AI to detect AI-generated text, flag the offending sites, and attach some kind of scaling probability index. That said, I think you could use AI to enhance human writing, and that's fine. Write something on your own, then have an AI restructure it or reword things for clarity, fixing grammar mistakes and so on. But full-on "write me an article on [insert random thing here]" is where shit gets tedious.
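      For what it's worth, the "scaling probability index" idea above can be sketched in a few lines. Everything here is an illustrative assumption, not a real detector: actual systems use trained classifiers, while this toy uses a single crude statistical signal (sentence-length burstiness, which tends to be lower in machine-generated prose) mapped onto a 0-to-1 score.

      ```python
      # Toy sketch of an AI-text "probability index". The burstiness signal and
      # the 0.8 flag threshold are hypothetical choices for illustration only;
      # a real detector would use a trained classifier, not one heuristic.
      import re
      import statistics


      def ai_likelihood(text: str) -> float:
          """Return a 0..1 score; lower sentence-length variance is (very
          crudely) treated as a weak signal of machine-generated text."""
          sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
          if len(sentences) < 2:
              return 0.5  # not enough signal to lean either way
          lengths = [len(s.split()) for s in sentences]
          # Burstiness: std-dev of sentence lengths relative to the mean.
          burstiness = statistics.pstdev(lengths) / statistics.mean(lengths)
          # Less bursty prose -> higher score, clamped into [0, 1].
          return max(0.0, min(1.0, 1.0 - burstiness))


      def flag_page(text: str, threshold: float = 0.8) -> bool:
          """Flag a page when its probability index crosses the threshold."""
          return ai_likelihood(text) >= threshold
      ```

      A browser-extension "new adblocker", as suggested above, would run something like `flag_page` over each article body and hide or badge the flagged ones.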

  • Mandy@beehaw.org · 1 year ago

    Let's be real, these two sites specifically have read like they were written by AI for years now.

  • Noved@lemmy.ca · 1 year ago

    I feel like AI would be better used to replace CEOs than frontline workers lol. Get rid of your most expensive asset and improve efficiency.

    • thejml@lemm.ee (OP) · 1 year ago

      Or at least middle management. Though it would require better reporting/time tracking/statistics to pull off, and all of that is easily gamed.

      • Lucien@beehaw.org · 1 year ago

        Eh, you can improve reporting, time tracking, and statistics all you want; it won't stop people from making stupid, short-sighted decisions. If it isn't middle management, it'll be the people controlling the AIs that replace them.

        CEO: “AI, give me a plan to improve profits by at least 10% in the next quarter.”

        AI: “<insert plan>. Note: enacting this plan will cause talent attrition and there is a 70% chance of -50% revenue over the following 5 years.”

        CEO: “Sounds great, I’m retiring next year!”

        The people up top have plenty of information on how to run a long-term successful business, but still choose to make illogical decisions that screw them over in the long run. Swapping the data source for an AI just means the CEO can ignore any feedback or metrics that don't agree with their internal model and incentive structure.

  • PaupersSerenade@beehaw.org · 1 year ago

    Why do I feel like I stepped into r/KotakuInAction‽ This is shitty no matter who it happens to. And EVERY news outlet has junk these days. The vitriol for this publication seems way more than necessary.