• BlameThePeacock@lemmy.ca

    Another idiot writer missing how AI works… along with every other automation and productivity increase.

    I literally automate jobs for a living.

    My job isn’t to eliminate the role of every staff member in a department; it’s to take the headcount from 40 to 20 while having the remaining 20 produce the same results. I’ve successfully done this dozens of times in my career, and generative AI is now just another tool we can use to get that number a little lower, or get there more easily than we could before.

    Will I be able to take a unit of 2 people down to 0 people? No, I’ve never seen a process where I could eliminate every human.

    • mozz@mbin.grits.devOP

      Cory Doctorow is an idiot writer? Do you know of him and have reached this conclusion, or do you not know who he is and are just throwing shade?

      I am curious: how much follow-up do you do a year after your automations to see how the department’s profit-and-loss picture has worked out once your work is done?

      (Not that that’s the point; I think you’ll get very little sympathy here for “I help the already-rich to keep more of the productive output of the world and make sure workers keep less” even if you can make an argument that you can do it effectively.)

      • BlameThePeacock@lemmy.ca

        I’ve been following Doctorow for decades now (BoingBoing) and yes, he’s an idiot in this situation.

        I’m still working with the organizations I started automating for more than a decade ago; I’m sitting in the office of one of them right now. It’s worked out great, and nobody is complaining that this office now has people at separate desks instead of crammed together like they were when I started. If it makes you feel any better, I almost exclusively do this for government and public organizations (I’m at a post-secondary institution right now), though I really don’t care.

        Stopping or stalling productivity improvements is stupid. If a job can be automated, it’s effectively useless, and keeping it around is nothing more than make-work. We should pass laws to redistribute wealth to solve that problem, not keep people in useless jobs by preventing automation.

        • mozz@mbin.grits.devOP

          You’re still working simultaneously with dozens of different organizations? Maybe I’m misunderstanding something.

          Stopping or stalling productivity improvements is stupid. If a job can be automated, it’s effectively useless, and keeping it around is nothing more than make-work. We should pass laws to redistribute wealth to solve that problem, not keep people in useless jobs by preventing automation.

          Like a lot of things, the devil is in the details. Almost everyone’s firsthand experience with consultants coming in and enacting “efficiency” is that it’s bad not only for the employees, obviously, but also for the business. I’m not saying that’s the impact of what you’re doing, just that it’s what most people’s experience is going to be.

          So there’s a central question in AI: Once the machines can do everything for us, does that mean everyone eats for free? Or no one eats? What would your answer to that question be?

          • BlameThePeacock@lemmy.ca

            No, I have worked with a dozen or so organizations, but I’ve done multiple jobs for each. I’m a freelancer.

            As for your second question, I’d like to see a basic income implemented for all citizens in my country. I’ve talked to my local politicians about it multiple times. It’s something that people now know about, which is good progress in my opinion. I don’t expect it to happen soon, but hopefully we’ll get there before we start to have too many social problems.

    • jcarax@beehaw.org

      As someone who works for a very large company, on a team of around 500 people around the world, this is what concerns me. Our team will not be 500 people in a few years, and if it is, it will be because usage of our product has grown substantially. We are buying heavily into AI, and yet people believe it when our leadership teams claim it will not impact jobs.

      Will I be able to take a unit of 2 people down to 0 people? No, I’ve never seen a process where I could eliminate every human.

      Socially speaking, this is also very concerning to me. I’m afraid that implementation of AI will be yet another thing that makes it difficult for smaller businesses to compete in a global marketplace. Yes, a tech-minded company can leverage a smaller head count into more capabilities, but that typically requires either expensive, limiting turnkey solutions or a major investment in developers for a customized solution.

      • mozz@mbin.grits.devOP

        I honestly have no idea what the solution is. To me the issue is that, with technology where it is, only about 20% of us actually have to do any work to keep all the wheels turning and provide for everyone. So far, in the western world, the solution has been to occupy people with increasingly-bullshit jobs (and, for some reason, not to give a lot of the people who do the actual work enough to live on), but as technology keeps getting more powerful, we are increasingly running up against the limits of “you have to work to live” as a way to set things up.

    • Overzeetop@beehaw.org

      I sat in a room of probably 400 engineers last spring, and they all laughed and jeered when the presenter asked if AI could replace them. With the right framework and dataset, ML almost certainly could replace about 2/3 of the people there; I know the work they do (I’m one of them), and the bulk of my time is spent recreating documentation, using 2-3 computer programs to facilitate calculations, and looking up and applying manufacturers’ data to the situation. Mine is an industry of high repeatability, and the human judgement part is, at most, 10% of the job.
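
      To give a sense of how repeatable that lookup-and-apply step is, here is a minimal sketch of it as a script; the load table, member names, and safety factor are all invented for illustration, not anyone’s actual catalogue or code:

```python
# Hypothetical sketch of the repeatable "look up manufacturer data and apply it
# to a calculation" step. Table values, member names, and the safety factor are
# invented for illustration only.

# member name -> (allowable uniform load in kN/m, mass in kg/m)
LOAD_TABLE = {
    "J-200": (4.0, 6.5),
    "J-250": (5.5, 8.1),
    "J-300": (7.2, 9.8),
}

def select_member(design_load_kn_m: float, safety_factor: float = 1.25) -> str:
    """Return the lightest member whose allowable load covers the factored demand."""
    demand = design_load_kn_m * safety_factor
    candidates = [
        (mass, name)
        for name, (allowable, mass) in LOAD_TABLE.items()
        if allowable >= demand
    ]
    if not candidates:
        raise ValueError("no catalogue member satisfies the factored demand")
    return min(candidates)[1]

print(select_member(4.2))  # 4.2 * 1.25 = 5.25 kN/m -> "J-250"
```

      The judgement part, choosing the right table and the right load case in the first place, is the 10% that doesn’t script this easily.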

      Here’s the real problem. The people who will be fully automatable are those with less than 10 years of experience. They’re the ones doing the day-to-day layout and design, and their work is monitored, guided, and checked by an experienced senior engineer to catch their mistakes. Replacing all of those people with AI will save a ton of money, right up until all of the senior engineers retire. In a system which maximizes corporate/partner profit, that will come at the expense of training the future senior engineers until, at some point, there won’t be any (/enough), and yet there will still be a substantial fraction of oversight that will be needed.

      Unfortunately, ML is based on human learning and replacing the “learning” stage of human practitioners with machines is going to eventually create a gap in qualified human oversight. That may not matter too much for marketing art departments, but for structural engineers it’s going to result in a safety or reliability issue for society as a whole. And since failures in my profession only occur in marginal situations (high loads: wind, snow, rain, mass gatherings), my suspicion is that it will be decades before we really find out that we’ve been whistling through the graveyard.

      • jarfil@beehaw.org

        that will come at the expense of training the future senior engineers until, at some point, there won’t be any (/enough)

        Anything a human can be trained to do, a neural network can be trained to do.

        Yes, there will be a lack of trained humans for those positions… but spinning up enough “senior engineers” will be as easy as moving a slider on a cloud computing interface… or remote API… done by whichever NN comes to replace the people from HR.

        ML is based on human learning and replacing the “learning” stage of human practitioners with machines is going to eventually create a gap in qualified human oversight

        Cue the humanoid robots.

        Better yet: outsource the creation of “qualified oversight”, and just download/subscribe to some when needed.

        • noxfriend@beehaw.org

          Anything a human can be trained to do, a neural network can be trained to do.

          Come on. This is a gross exaggeration. Neural nets are incredibly limited. Try getting them to even open a door. If we someday come up with a true general AI that really can do what you say, it will be as similar to today’s neural nets as a space shuttle is to a paper aeroplane.