I’ve gone down a rabbit hole here.

I’ve been looking into LK-99, the potential room-temperature superconductor, lately. Then I came across an AI chat and decided to test it: I asked it to propose a room-temperature superconductor, and it suggested (NdBaCaCuO)_7(SrCuO_2)_2 along with a means of production, which got me thinking. It’s just a system for spotting patterns and answering questions. I’m not saying it has made anything new, but it seems to me that eventually a chat AI would be able to suggest a new material fairly easily.

Has AI actually discovered or invented anything outside of its own computer industry, and how close are we to it doing things humans haven’t done before?

  • Maharashtra@lemmy.world · 11 months ago

    The AIs we have at our disposal can’t invent a thing - yet - because they aren’t true AIs - again: yet.

    They are merely tools, and should be perceived as such, nothing more. It’s the people who use them who may apply them to tasks that result in invention; on their own, they are closer to the Chinese Room principle than to thinking, inventive constructs.

    • Archpawn@lemmy.world · 11 months ago

      I agree with the basic idea, but there’s not some fundamental distinction between what we have now and true AI. Maybe we’ll find breakthroughs that help, but the systems we’re using now would work given enough computing power and training. There’s nothing the human brain can do that they can’t, so with enough resources they can imitate the human brain.

      Making one smarter than a human wouldn’t be completely trivial, but I doubt it would be all that difficult, given an AI powerful enough to imitate something smarter than a human.

      • Maharashtra@lemmy.world · 11 months ago

        > I agree with the basic idea, but there’s not some fundamental distinction between what we have now and true AI.

        Are the AIs we have at our disposal able, and allowed, to self-improve on their own? That is: can they modify their own internal procedures, possibly reshaping their own code to better themselves, thus becoming more than their creators predicted them to be?

        > There’s nothing the human brain can do that they can’t, so with enough resources they can imitate the human brain.

        A human brain can:

        • interfere with any of its “hardware” and break it
        • go insane
        • preoccupy itself with absolutely pointless stuff
        • create for the sake of creation itself
        • develop and maintain illusions it will come to trust as real
        • choose to act against undeniable proof given to it

        These are, of course, tongue-in-cheek examples of what a human brain can do, but - from the perspective of neuroscience, psychology, and a few adjacent fields of study - it is absolutely incorrect to say that AIs can do what a human brain can, because we’re still not sure how our brains work, or what they are capable of.

        Based on the dramatic articles we see in the news promising us “trauma-erasing pills” or “a new breakthrough in treating Alzheimer’s”, we may be tempted to believe that we know what this funny blob in our heads is capable of, and that we have but a few small secrets left to uncover; but the fact is, we can’t even be sure just how much there is to discover.