• CanadaPlus@lemmy.sdf.org

    Agreed. If you need to calculate rectangles, ML is not the right tool. Now do the comparison for an image-identification program.

    If anyone’s looking for the magic dividing line: ML is a very inefficient way to do anything, but it doesn’t require us to actually solve the problem; we just need a bunch of examples. For very hard but commonplace problems, that is still revolutionary.
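    To make that dividing line concrete, here’s a minimal sketch (illustrative TypeScript; the feature choice and training loop are my own invention, not anything from the comic): the hand-coded version states the rule, while the ML-style version is only ever shown examples.

    ```typescript
    // Hand-coded: we understand the problem, so we simply state the rule.
    function rectangleArea(w: number, h: number): number {
      return w * h;
    }

    // ML-style: we never state the rule; we fit a weight from examples.
    // (Toy single-weight regression; assumes the feature w*h is supplied.)
    function learnArea(examples: Array<[number, number, number]>) {
      let weight = 0;
      const lr = 0.001;
      for (let epoch = 0; epoch < 1000; epoch++) {
        for (const [w, h, area] of examples) {
          const pred = weight * w * h;
          weight -= lr * (pred - area) * w * h; // squared-error gradient step
        }
      }
      return (w: number, h: number) => weight * w * h;
    }

    const model = learnArea([[2, 3, 6], [4, 5, 20], [1, 7, 7]]);
    console.log(model(6, 9)); // ≈ 54: the rule was recovered from examples, never written down
    ```

    Wildly inefficient for rectangles, but the same recipe works when we can’t write the rule down at all, like deciding whether a photo contains a cat.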

  • The sad thing is that no amount of mocking the current state of ML today will prevent it from taking all of our jobs tomorrow. Yes, there will be a phase in which programmers like myself, who refuse to use LLMs as a tool to produce work faster, get pushed out by those who will work with LLMs. However, I console myself with the belief that this phase won’t last even a full generation; even those collaborative devs will find themselves made redundant, and we’ll reach the same end without me having to eliminate the one enjoyable part of my job. I do not want to be reduced to being only a debugger for something else’s code.

    The thing is, at the point where AI becomes self-improving, the last bastion of human-led development will fall.

    I guess mocking and laughing now is about all we can do.

    • KevonLooney@lemm.ee

      at the point AI becomes self-improving

      This is not a foregone conclusion. Machines have almost always been stronger and faster than humans, because humans are generally pretty weak and slow. Our strength is adaptability.

      As anyone with a computer knows, if one tiny thing goes wrong, it messes up everything. Computers are not adaptable to change. Most jobs require people to adapt to tiny changes in their routine every day. That’s why you still can’t replace accountants with spreadsheets, even though spreadsheets have existed in some form for 50 years.

      It’s just a tool. If you don’t want to use it, that’s kinda weird. You aren’t just “debugging” things; you use it like a junior developer who can do basic things.

      • 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social

        This is not a foregone conclusion.

        Sure, I agree. There’s many a slip twixt the cup and the lip. However, I’ve seen no evidence that it won’t happen, or that humans hold any inherent advantage over AI (as nascent as it may be, in the crude forms of LLMs and deep learning it currently takes).

        If you want something to reflect upon: your claim that humans have an advantage in adaptability sounds exactly like the previous generation’s grasping at an inherent human superiority that would be our salvation: creativity. It wasn’t too long ago that people claimed machines would never be able to compose a sonnet or paint a “Starry Night,” and yet creativity has been one of the first walls to fall. And anyone claiming that ML only copies and doesn’t produce anything original has obviously never studied the history of fine art.

        Since no one would now claim that machines will never surpass humans in art, the goalposts have shifted to adaptability? This is an even easier hurdle. Computer hardware is evolving enormously faster than human hardware. With the exception of the few brief years at the start of our lives, computer software is more easily modified, updated, and improved than our poor biological neural networks. It isn’t even a competition: computers are vastly better equipped to adapt quickly than we are. As soon as adaptability becomes a priority of focus, they’ll easily exceed us.

        I do agree, there are a lot of ways this future could fail to come to pass. Personally, I think it’s most likely we’ll make ourselves extinct first, or at least destroy the society able to keep creating computers. However, we may hit hardware limits. Quantum computing could stall out. Or we may find that the way we create AI cripples it the same way we are crippled: with built-in biases, inefficiencies in thinking, or resource demands simply too high to reach complexity much beyond what two humans can create with far less effort and very little motivation.

        • KevonLooney@lemm.ee

          creativity has been one of the first walls to fall

          Uh, no? Unless you think unhinged nonsense without thought is “creative”. Right now, these programs are like asking a particularly talented insane person to draw something for you.

          Creativity is not just creation. It’s creation with purpose. You can “create art” by breaking a vase. That doesn’t mean it’s good art.

          • And yet, I’ve been to an exhibit at the Philadelphia Museum of Art that consisted of an installation including a toilet, among other similarly inspired works of great art.

            On a less absurd note, I don’t have much admiration for Pollock, either, but people pay absurd amounts of money for his stuff, too.

            An art history class I once took posed the question: if you find a clearing in a wood with a really interesting pile of rocks that look suspiciously man-made, but you don’t know if a person put it together or if it was just a random act of nature… is it art? Say you’re convinced a person created it and so you call it art, but then discover it was an accident of nature, does it stop being art?

            I fail to see any great difference. AI-created art is artificial, created with the intention of producing art; is it not art only because it wasn’t drawn by a human?

            • KevonLooney@lemm.ee

              If you’re talking about

              https://en.wikipedia.org/wiki/Fountain_(Duchamp)

              that’s a seminal work of avant-garde art. You are still talking about it 100 years later. It’s obviously great art.

              Art is a work of visual, auditory, or written media that makes you feel emotion. That’s it. Does this pile of rocks make you feel happy or sad or anything? Then it’s art.

              AI makes pictures the way a camera does. That alone doesn’t make them art, unless you use it to make something that evokes emotion.

    • underisk@lemmy.ml

      The comic is about using a machine-learning algorithm instead of a hand-coded algorithm, not about using ChatGPT to write a trivial program that no doubt exists a thousand times in the data it was trained on.

      • Honytawk@lemmy.zip

        The strengths of machine learning are in extremely complex programs.

        Programs no junior dev would be able to accomplish.

        So if the post can misrepresent the issue, then the commenter can do so too.

        • pearsaltchocolatebar@discuss.online

          Lol, no. ML is not capable of writing extremely complex code.

          It’s basically like having a bunch of junior devs cranking out code that they don’t really understand.

          ML for coding is only really good at producing basic bitch code that is more time-intensive than complex. And even then you have to check it for hallucinations.

        • xmunk@sh.itjust.works

          I strongly disagree. ML is perfect for small bullshit like “What’s the area of a rectangle”; it falls on its face when asked:

          Can we build a website for our security-paranoid client, who wants the server to completely refuse to communicate with users who aren’t authenticated as employees… Oh, and our CEO requested a password recovery option on the login prompt.

        • funkless_eck@sh.itjust.works

          The biggest high-level challenge in any tech org is security, and there’s no way you can convince me that ML can successfully counter those challenges.

          “oh but it will but it will!”

          when?

          “in the future”

          how long in the future?

          “When it can do it”

          how will we know it can do it?

          “When it can do it”

          cool.

  • MxM111@kbin.social

    Well, if training is included, then why is it not included for the developer? From the first days of his life?

    • ilinamorato@lemmy.world

      When did the training happen? The LLM is trained for the task starting when the task is assigned. The developer’s training has already completed, for this task at least.

  • Medli@lemmy.world

    To be fair, the human had how many more years of training than the AI just to be fit to attempt this problem?

    • wagesj45@kbin.social

      And hundreds of thousands of years of evolution pre-training the base model that their experience was layered on top of.

  • lugal@sopuli.xyz

    This is all funny and stuff, but ChatGPT knows how long the German-Italian border is, and I’m sure most of you don’t.

    • orca@orcas.enjoying.yachts

      Make sure you ask the AI not to hallucinate, because it will sometimes straight up lie. It’s also incapable of counting.

    • SquirtleHermit@lemmy.world

      I apparently have too much free time and wanted to check, so I asked ChatGPT exactly how long the border was. It could only produce an approximate guess, and it had to search with Bing to confirm it.

      • Kuma@lemmy.world

        Here I am wondering why no one made the joke that the answer was not found (404), but ChatGPT assumed it was the answer 😂

    • xmunk@sh.itjust.works

      Nobody knows precisely how long any border is if it follows natural boundaries. The only borders we know precisely are the perfectly straight post-colonial ones.
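      That’s the coastline paradox: the measured length of a wiggly boundary depends on the ruler you use. A toy sketch (a Koch curve standing in for a natural border; all numbers invented for illustration):

      ```typescript
      // Koch-curve model of a natural boundary: each refinement replaces
      // every segment with 4 segments one third as long, so the measured
      // length is (4/3)^depth and keeps growing as the ruler shrinks.
      function measuredLength(depth: number, baseLength = 1): number {
        return baseLength * (4 / 3) ** depth;
      }

      for (let depth = 0; depth <= 6; depth++) {
        const ruler = 1 / 3 ** depth; // finer ruler at each step
        console.log(`ruler ${ruler.toFixed(4)} -> length ${measuredLength(depth).toFixed(3)}`);
      }
      // There is no single “true” length to converge to, which is why
      // published lengths for natural borders disagree between sources.
      ```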

  • orca@orcas.enjoying.yachts

    Ah, the future of dev: having to compete with AI and LLMs while also being forced to hastily build apps that use those things, until those things can build the apps themselves.

    • Klear@lemmy.world

      Let’s invent a thing inventor, said the thing inventor inventor after being invented by a thing inventor.

    • JDubbleu@programming.dev

      I mean, if you have access to Copilot at work but aren’t using it, you’re just slowing yourself down. It works extremely well for boilerplate and repetitive declarations.

      I’ve been working with third-party APIs recently and have written some wrappers around them. Generally, by the third method it’s correctly autosuggesting the entire method given only its name (the sketch below shows the kind of pattern it latches onto), and I can point out mistakes in English or quickly fix them myself. It also makes working in languages I’m not familiar with way easier.
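      Something like this (a hypothetical wrapper; the API, names, and types are invented for illustration):

      ```typescript
      // Hypothetical client for a third-party REST API. After two or three
      // methods in this shape, Copilot usually completes the next one from
      // just its name.
      class WidgetApiClient {
        constructor(private baseUrl: string, private apiKey: string) {}

        private async request<T>(path: string): Promise<T> {
          const res = await fetch(`${this.baseUrl}${path}`, {
            headers: { Authorization: `Bearer ${this.apiKey}` },
          });
          if (!res.ok) throw new Error(`Request failed: ${res.status}`);
          return res.json() as Promise<T>;
        }

        getUser(id: string) {
          return this.request<{ id: string; name: string }>(`/users/${id}`);
        }

        getOrder(id: string) {
          return this.request<{ id: string; total: number }>(`/orders/${id}`);
        }

        // By this point the shape is established; typing `getInvoice` is
        // typically enough for the suggestion to fill in the whole body.
        getInvoice(id: string) {
          return this.request<{ id: string; amount: number }>(`/invoices/${id}`);
        }
      }
      ```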

      AI for assistance in programming is one of the most productive uses for it.

        • JDubbleu@programming.dev

          That was a pretty interesting read. However, I think it conflates correlation and causation a little too strongly. The overall vibe of the article was that developers who use Copilot write worse code across the board, and I don’t think that’s necessarily the case, for a few reasons.

          The first is that Copilot is just a tool, and like any tool it can easily be misused. It makes programming accessible to people it was never accessible to before: a lot of people who are very new to programming can now build large programs they otherwise couldn’t have. It’s also going to be relied on more heavily by those who are newer, because it’s a more useful tool to them, but it will also let them learn more quickly.

          The second is that they use a graph with an unlabeled y-axis to show an increase in reverts, and never say whether it counts raw lines of code or a percentage of lines of code. This matters because Copilot lets people write a fuck ton more code; it legitimately makes me write at least 40% more. Any increase in revisions is partly just a function of writing more code. If anything, I suspect I revert a smaller percentage of lines, because it forces me to reread the code the AI outputs multiple times to ensure its validity.
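          Toy numbers (invented, since the article doesn’t give any) to show why the distinction matters:

          ```typescript
          // If an assistant lets you write ~40% more code, raw reverted
          // lines can rise even while the revert *rate* falls.
          const before = { written: 1000, reverted: 100 }; // 10% reverted
          const after = { written: 1400, reverted: 126 };  //  9% reverted

          console.log(before.reverted / before.written); // 0.10
          console.log(after.reverted / after.written);   // 0.09
          // An unlabeled y-axis of raw reverts (100 -> 126) looks like a
          // regression even though the per-line rate improved.
          ```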

          This ultimately comes down to the developer who’s using the AI. It shouldn’t be writing massive complex functions. It’s just an advanced, context-aware autocomplete that happens to save a ton of typing. Sure, you can let it run off and write massive parts of your code base, but that’s akin to hitting the next word suggestion on your phone keyboard a few dozen times and expecting something coherent.

          I don’t see it much differently than when high level languages first became a thing. The introduction of Python allowed a lot of people who would never have written code in their life to immediately jump in and be productive. They both provide accessibility to more people than the tools before them, and I don’t think that’s a bad thing even if there are some negative side effects. Besides, in anything that really matters there should be thorough code reviews and strict standards. If janky AI generated code is getting into production that is a process issue, not a tooling issue.

      • orca@orcas.enjoying.yachts

        Oh, I use Copilot daily. It fills the gaps for the repetitive stuff, like you said. I was once writing stories in a Storybook.js project and got it to auto-suggest the remainder of my component states after writing 2-3 of them. They worked out of the gate, too, with maybe a single variable change. Initially I wasn’t even going to do all of them in that coding session, just to save time and get it handed off, but it was giving me such complete suggestions that I was able to build every single one out, with interaction tests and everything.
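        The files looked roughly like this (a hypothetical CSF story file; the component and state names are invented for illustration). After the first two or three stories, the remaining states arrived as complete suggestions:

        ```typescript
        // Hypothetical Storybook CSF3 file. After Primary and Secondary,
        // Copilot tends to suggest Disabled, Loading, etc. nearly verbatim.
        import type { Meta, StoryObj } from '@storybook/react';
        import { Button } from './Button';

        const meta: Meta<typeof Button> = { component: Button };
        export default meta;

        type Story = StoryObj<typeof Button>;

        export const Primary: Story = { args: { variant: 'primary', label: 'Save' } };
        export const Secondary: Story = { args: { variant: 'secondary', label: 'Cancel' } };
        // The next ones were suggested whole, needing maybe one variable changed:
        export const Disabled: Story = { args: { variant: 'primary', label: 'Save', disabled: true } };
        ```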

        Outside of use cases like that and getting very general content, I think AI is a mess. I’ve worked a ton with ChatGPT’s 3.5 and 4 APIs, and they’re unpredictable and hard to instruct sometimes. Prompts and approaches that worked two weeks ago will suddenly give you some weird edge case that you just can’t get it to stop repeating, even with approaches that worked flawlessly for others. It’s like trying to patch a boat while you’re in it.

        The C-suite people and suits jumped on AI way too early and have haphazardly forced it into every corner. It’s become a solution searching for a problem. The other day, a friend of mine said a client casually asked how they were going to use AI on the website being built for them, like it was just a commonplace thing. The buzzword has gotten ahead of itself, and now we’re trying to reel it back down to earth.