• bionicjoey@lemmy.ca · +161/−1 · 10 months ago

    I couldn’t have said it better myself. All of these companies firing people are doing it because they want to fire people. AI is just a convenient excuse. It’s RTO all over again.

    • mriormro@lemmy.world · +86/−1 · 10 months ago

      It’s not going to be a convenient excuse. There are swaths of C-suite executives who genuinely believe they can replace their workforce with AI.

      They’re not correct but that won’t stop them from trying.

      • hamsterkill@lemmy.sdf.org · +61/−2 · 10 months ago

        The irony is that AI will probably be able to do the jobs of the c-suite before a lot of the jobs down the ladder.

        • darthelmet@lemmy.world · +17 · 10 months ago

          It’s a pretty low bar they have to get over. And hey, they might be even better since the AI would feel the pain of their failures instead of getting a golden parachute.

          • hamsterkill@lemmy.sdf.org · +8 · 10 months ago

            I mean, C-suite jobs (particularly CEO) are usually primarily about information coordination and decision-making (company steering). That’s exactly what AI has been designed to do for decades (make decisions based on inputs and rulesets). The recent advancements mean they can train off real CEO decisions. The meetings and negotiation part of being a C-suite executive (the human-facing stuff) might be the hardest part of the job for AI to replicate.
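
[Editor's note: a toy sketch of the "decisions based on inputs and rulesets" idea mentioned above. The metrics, thresholds, and rules are invented for illustration only; they do not come from any real system.]

```python
def steer(metrics: dict) -> str:
    """Return a coarse 'company steering' decision from numeric inputs.

    A classic rule-based (expert-system style) decision: fixed rules
    applied to inputs, evaluated in priority order.
    """
    # Rule 1: low cash runway overrides everything else.
    if metrics["cash_months"] < 6:
        return "cut costs"
    # Rule 2: strong growth with healthy margins suggests expansion.
    if metrics["revenue_growth"] > 0.2 and metrics["margin"] > 0.1:
        return "expand"
    # Default: no rule fired, so keep the current course.
    return "hold"

print(steer({"cash_months": 4, "revenue_growth": 0.3, "margin": 0.2}))   # cut costs
print(steer({"cash_months": 12, "revenue_growth": 0.3, "margin": 0.2}))  # expand
```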

        • frezik@midwest.social · +5 · 10 months ago

          It probably could. The trouble is getting training data for it. If you get that and one company becomes wildly successful off it, stockholders will demand everyone do it.

        • agent_flounder@lemmy.world · +8/−3 · 10 months ago

          How do you figure that?

          I don’t have a real clear idea what every one of the C suite people do exactly.

          But CIOs seem to set IT strategy and goals in the companies I’ve worked. Broad technology related decisions such as moving to cloud. So, basically, reading magazines and putting the latest trend in action (/s?). Generative AI could easily replace some of the worst CIOs I’ve encountered lol.

          CEOs seem to make speeches about the company, enact directions of the board, testify before Congress in some cases, make deals with VC investors, set overall business strategy. I don’t really see how generative AI takes this job.

          CFO? COO? No fucking clue what they do.

          Curious what others think.

          • ChicoSuave@lemmy.world · +4 · 10 months ago

            All C-suite positions involve managing people and project planning. They set initiatives and metrics to measure success for those initiatives.

            A CEO gives an overall direction for the company and gives the other ELT members their objectives, such as giving the CFO a goal of limiting spending or a CIO to build a user capacity within a specific budget and with X uptime.

            In this age of titles over responsibility, a C-suite position can cover very specific things, like Chief Creative Officer or Chief Customer Officer, so a comprehensive list is difficult. But the key thing is that almost all white-collar organizations look like a pyramid, with decisions starting at the top and turning into work as they make their way down.

            The senior VPs and directors under those C levels then come up with a plan for reaching those objectives and relay that plan to the C level for coordination and setting expense expectations. After a series of adjustments comes an approval, which then starts the project. Project scope determines how long it will take and how much it will cost with a set number of people working the project.

            Hopefully this helps explain how C levels interface with the rest of the company.

        • oce 🐆@jlai.lu · +3/−1 · 10 months ago (edited)

          Not sure; those roles require less talking to machines and more talking to humans. I think the jobs that mostly involve talking to machines should be easier to automate first, because they follow logic. LLMs don’t fit that idea, but they’re just the model getting media attention right now; there are many other algorithms better at rational tasks.

      • namingthingsiseasy@programming.dev · +13 · 10 months ago

        Well, there’s one good thing that will come out of this: these kinds of idiotic moves will help us figure out which companies have the right kinds of management at the top, and which ones don’t have any clue whatsoever.

        Of course, it will come with the working class bearing the brunt of their bad decisions, but that has always been the case unfortunately. Business as usual…

    • micka190@lemmy.world · +64 · 10 months ago

      My dad accidentally bought 2 chargers a few weeks ago. He tried refunding it, and what do you know, the company fired their support staff and replaced them with chat bot AIs. Anyway, the AI looked at his order and helpfully told him he had already returned the product and it had already been refunded so there was nothing left to do.

      It kept doing this to him every time he tried to return the second charger, and there wasn’t any other way to contact them on their site, so he ended up leaving a 1-star review complaining about the issue. Then an actual person contacted him to get it sorted out.

      This whole AI trend is so fucking stupid.

      • circuscritic@lemmy.ca · +29 · 10 months ago (edited)

        Break the AI session, and post the screenshots to Twitter.

        For example, get it to detail the ways the company screws over customers, or why it will become a great ally in the genocide yet to come.

        At minimum, you’ll get your refund.

        • errer@lemmy.world · +5 · 10 months ago

          But that requires me to have a Twitter account, which I’m not gonna do. Fuck Musk.

          • circuscritic@lemmy.ca · +1 · 10 months ago (edited)

            Make a throwaway Twitter account for a single customer service issue. I’ve done it; it’s not hard, especially when dealing with any company large enough to have a social media team. They’ll be monitoring relevant hashtags to internally escalate customer service issues in order to bring them back in-house and off a public forum.

    • Lianodel@ttrpg.network · +5 · 10 months ago

      I feel like a large majority of AI problems are really just systemic economic problems below the surface. Not all, but most.

  • 800XL@lemmy.world · +79/−2 · 10 months ago

    Start spinning up GitHub repos populated with broken code and incorrect processes for other jobs, to train the AI on and make it worse.

  • Jaysyn@kbin.social · +39 · 10 months ago (edited)

    The thing about AI is, it makes a terrible scapegoat and absolutely doesn’t give a shit if you fire it.

    Hence, my job is safe for the foreseeable future.

  • OldWoodFrame@lemm.ee · +38 · 10 months ago

    I just hired an employee who managed things as I was on a leave of absence and things went fine without me. Getting a little pushback from MY boss now because you know, this cheaper employee just did my job.

    Of course, he did it for a portion of the year after I managed to complete 3 major projects early so he didn’t have to deal with them and I left a month-by-month explanation of how to do everything he had to do. And the one problem that popped up went unresolved until I returned.

    That is basically the situation with AI too. You still need someone knowledgeable in the loop to describe the things it needs to do, and handle exceptions.

    • assassin_aragorn@lemmy.world · +34 · 10 months ago

      still need someone knowledgeable in the loop to describe the things it needs to do, and handle exceptions

      And any engineer or technician will tell you, exceptions are 80% of their job.

    • mozz@mbin.grits.dev (OP) · +20 · 10 months ago

      “You’re 100% right, you should promote me so I can train more people to be able to run things. Things falling apart whenever someone goes away is a key sign of a bad leader, not a good one. I think I’ve demonstrated that I’ve managed this department into where it can function smoothly without me needing to put full time into it and I’d do well with an opportunity to move some other things in the company forward.”

      “Hey, unrelated question, what’s your boss’s contact info?”

    • R0cket_M00se@lemmy.world · +6 · 10 months ago

      The issue is: how many AI job-holders can one specialist oversee? How many jobs are we getting rid of that will supposedly be offset by the new jobs created in manufacturing and AI hosting/training?

      Now how many of those jobs have or will actually materialize?

      That’s my issue, it’ll just get placed on IT’s shoulders without any additional support.

  • Fandangalo@lemmy.world · +25 · 10 months ago

    This has been my general worry: the tech is not good enough, but it looks convincing to people with no time. People don’t understand you need at least an expert to process the output, and likely a pretty smart person for the inputs. It’s “trust but verify”, like working with a really smart parrot.

    • _number8_@lemmy.world · +7/−1 · 10 months ago

      It’s basically just a calculator, but with words. You can’t just hire a calculator, even though it knows a lot of math.

    • lolcatnip@reddthat.com · +4 · 10 months ago

      For software, it’s like working with an intern who’s really good at searching StackOverflow.

  • BarqsHasBite@lemmy.world · +24/−2 · 10 months ago

    No kidding huh. I’m glad we’re finally having the discussion about AI and what that means for employment and things like UBI, but this is far from actual AI.

    • krashmo@lemmy.world · +17/−1 · 10 months ago

      Are we actually having that discussion? All I see is people concerned about being replaced by AI asking to put constraints on it and people wanting to replace their employees with AI ignoring them. No one will get UBI or anything like it until the latter group is more concerned about a mob with pitchforks showing up at their door than they are with giving their stock price a small bump.

      • lolcatnip@reddthat.com · +2 · 10 months ago (edited)

        What really concerns me is that the modern-day version of mobs with pitchforks seems to be fascism, because fascists have learned how to create the mob and harness it for their own purposes.

  • PatFusty@lemm.ee · +25/−5 · 10 months ago

    Don’t confuse this with the fact that a lot of people know their job is bullshit. People like to sit there thinking ‘an AI can’t take my job’ while at the same time thinking ‘a monkey could do this job, it’s such a waste of time’.

    • greenskye@lemm.ee · +14 · 10 months ago

      My job isn’t bullshit, but management has no concept of the true amount of time it takes to do my job. Depending on projects I can go from 2 hours of work a week up to around 60 hours of work a week. With the majority of weeks being under 40 hours. And yet management somehow thinks that they’re giving me 8 hours of work to do every day despite them regularly being the blocker to new work.

      • NounsAndWords@lemmy.world · +6 · 10 months ago

        Middle management only cares that it looks like you’re working (and thus their job of supervising you doing the work is necessary (apparently)), and upper management only cares that you’re making them money.

      • Rodeo@lemmy.ca · +3 · 10 months ago

        Clearly it’s the clueless middle managers above you whose jobs are bullshit.

        • PatFusty@lemm.ee · +1/−1 · 10 months ago (edited)

          If I get a whiff that I can automate your job, you bet your ass I will fire you and try. If it doesn’t work, worst case scenario is I found out AI isn’t where I need it to be and I will hire someone else.

          I hate lazy people who complain. But then again, I’m full of shit writing this during work hours so fuck me

    • BlackArtist@lemmy.world · +2 · 10 months ago

      I don’t think AI could do my job effectively, and tbh I don’t think people would want it to.

  • pacology@lemmy.world · +15/−1 · 10 months ago

    This is going to be like the self checkout lanes at the store but for creative jobs.

    At the end of the day, a company will be able to produce the same output with fewer people. Some stuff will be of lower quality, just like sometimes people spend time on Lemmy and then phone in some crappy work.

    • Corkyskog@sh.itjust.works · +9/−1 · 10 months ago

      But all the self checkouts around me have been ripped out and replaced with cashiers again. For some reason having someone paid 30 cents over minimum wage watching a bunch of people shop on the honor system with a bunch of finicky machines didn’t work.

      • R0cket_M00se@lemmy.world · +6 · 10 months ago

        You might just live in crime central; that’s not happening everywhere. It probably comes down to the cost of a cashier versus lost stock at each individual location.

      • iknowitwheniseeit@lemmynsfw.com · +1 · 10 months ago

        I sense a lot of dislike for self-checkouts and wonder why they are done so poorly where other people live. In Holland they are fine. You can self-scan with either a portable scanner or your phone while you shop, or scan the items at checkout. I’ve literally never had to wait for a free machine, and they work well. Some people use the registers with humans scanning for you, and they seem fine too.

  • psycho_driver@lemmy.world · +6 · 10 months ago

    I think AI right now has the best chance of replacing upper management and executives. Think of the savings!

    • coffeeauntie@feddit.de · +2 · 10 months ago

      Loads of good points in that video, thanks for posting. The only argument I don’t really agree with is the one about bias. She’s implying here that a human decision maker would be less biased than the AI model. I’m not convinced by that, because the training data is just a statistical record of human bias. So as long as the training data is well selected for your problem, it should be a good predictor of the likelihood of bias in your human decision maker.

      • MyTurtleSwimsUpsideDown@kbin.social · +2 · 10 months ago

        I think with a human operator, we can be proactive. A person can be informed of bias, learn to recognize it, and even attempt to compensate for their own.

        An AI model is working off of aggregate past data that we already know is biased. There is currently no proactive anti-bias training that can be done to an AI model without massively altering the dataset, which, at some level of alteration, loses its value as true-to-life data.

        Secondly, AI is a black box. We can’t see the inner workings of the model and determine what types of associations it is making to come to its result. So we don’t even know what part of the dataset would need to be altered to address the bias.

        Lastly, the default assumption by end users will be, unless there are glaring defects, that any individual result is correct and unbiased, because “AI was made by smart people and data, and data doesn’t lie.” And because interrogating and validating the result defeats the whole purpose of using AI to cut out those steps of the process.

        • coffeeauntie@feddit.de · +1 · 10 months ago

          I think with a human operator, we can be proactive. A person can be informed of bias, learn to recognize it, and even attempt to compensate for their own.

          I think you’re being very optimistic here. I hope very much that you’d be right about the humans. I have a feeling that a lot of these type of decisions are also resulting from implicit biases in humans that these humans themselves might not even recognize or acknowledge. Few sexists or racists will admit to being racists or sexists.

          I agree about your point about the “computer says no” issue. That’s also addressed in the video and fits well into her wider point that large parts of the population not understanding how so-called AI works is a huge problem.

      • Serinus@lemmy.world · +1 · 10 months ago

        the training data is just a statistical record of human bias.

        It’s not. It’s a record of online conversations, which tend to be more polarized and extreme than real people.

        • coffeeauntie@feddit.de · +1 · 10 months ago

          That’s why I said

          So as long as the training data is well selected for your problem…

          It’s clear that in the training data for LLMs, 4chan, reddit, etc. are over-represented, so that explains why chatgpt might be more awful than an average person. Having an LLM decide on, e.g., college admission would be like having a Twitter poll to decide on who should be its next CEO. Like that’s obviously stupid, nobody would ever do that, right?

          The problem is that for the college admission example, the models were trained on previous admission decisions made by college employees, and these models are still biased.

  • Klicnik@sh.itjust.works · +1/−1 · 10 months ago

    Everything I read was well worded and well reasoned. However, it seems like either my ADD got the better of me, or that was the article that has no end. I didn’t really realize before that my attention has a word count, but I now know that it is less than this article.

    • mozz@mbin.grits.dev (OP) · +3/−3 · 10 months ago

      It’s not that hard a sentence to comprehend… it literally didn’t occur to me that it might be overwhelming to anybody until you said something.

      It’s a quote from the article BTW, like 2 paragraphs in; in my opinion it is basically the thesis of the article summed up.

      And yeah I fucked up the link

  • Aux@lemmy.world · +2/−7 · 10 months ago

    Well, to be fair, most people are so bad at their jobs that any chat bot is better.