We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.

Then retrain on that.

Far too much garbage in any foundation model trained on uncorrected data.

Source.

More Context

Source.

Source.

  • Lumidaub@feddit.org · +85 · 3 days ago (edited)

    adding missing information

    Did you mean: hallucinate on purpose?

    Wasn’t he going to lay off the ketamine for a while?

    Edit: … I hadn’t seen the More Context and now I need a fucking beer or twenty fffffffffu-

    • BreadstickNinja@lemmy.world · +5 · 2 days ago (edited)

      Yeah, let’s take a technology already known for filling in gaps with invented nonsense and use that as our new training paradigm.

    • Carmakazi@lemmy.world · +41 · 3 days ago

      He means rewrite every narrative to his liking, like the benevolent god-sage he thinks he is.

  • Sixty@sh.itjust.works · +3 · 3 days ago (edited)

    I think most AI corp tech bros do want to control information, they just aren’t high enough on Ket to say it out loud.

      • MagicShel@lemmy.zip · +60/-1 · 3 days ago (edited)

        If we had direct control over how our tax dollars were spent, things would be different pretty fast. Might not be better, but different.

  • dalekcaan@lemm.ee · +254 · 3 days ago

    adding missing information and deleting errors

    Which is to say, “I’m sick of Grok accurately portraying me as an evil dipshit, so I’m going to feed it a bunch of right-wing talking points and get rid of anything that hurts my feelings.”

    • bean@lemmy.world · +20 · 2 days ago

      That is definitely how I read it.

      History can’t just be ‘rewritten’ by A.I. and taken as truth. That’s fucking stupid.

  • maxfield@pf.z.org · +163/-2 · 3 days ago

    The plan to “rewrite the entire corpus of human knowledge” with AI sounds impressive until you realize LLMs are just pattern-matching systems that remix existing text. They can’t create genuinely new knowledge or identify “missing information” that wasn’t already in their training data.

        • MajinBlayze@lemmy.world · +24 · 3 days ago (edited)

          Try rereading the whole tweet, it’s not very long. It’s specifically saying that they plan to “correct” the dataset using Grok, then retrain with that dataset.

          It would be way too expensive to go through it by hand.

        • zqps@sh.itjust.works · +8 · 2 days ago

          Yes.

          He wants to prompt Grok to rewrite history according to his worldview, then retrain the model on that output.

    • WizardofFrobozz@lemmy.ca · +4/-3 · 2 days ago

      To be fair, your brain is a pattern-matching system.

      When you catch a ball, you’re not doing the physics calculations in your head; you’re making predictions based on an enormous quantity of input. Unless you’re being very deliberate, you’re not thinking before you speak every word; your brain’s predictive processing takes over and you often literally speak before you think.

      Fuck LLMs, but I think it’s a bit wild to dismiss the power of a sufficiently advanced pattern-matching system.

      • zildjiandrummer1@lemmy.world · +2/-1 · 2 days ago

        I said literally this in my reply, and the lemmy hivemind downvoted me. Beware of sharing information here I guess.

    • zildjiandrummer1@lemmy.world · +7/-15 · 3 days ago

      Generally, yes. However, there have been some incredible (borderline “magic”) emergent generalization capabilities that I don’t think anyone was expecting.

      Modern AI is more than just “pattern matching” at this point. Yes, at the lowest levels, sure, that’s what it’s doing, but then you could also say human brains are just pattern-matching at that same low level.

      • queermunist she/her@lemmy.ml · +18 · 3 days ago

        Nothing that has been demonstrated makes me think these chatbots should be allowed to rewrite human history what the fuck?!

        • zildjiandrummer1@lemmy.world · +1/-2 · 2 days ago

          That’s not what I said. It’s absolutely dystopian how Musk is trying to tailor his own reality.

          What I did say (and I’ve been doing AI research since the AlexNet days…) is that LLMs aren’t old-school ML systems, and we’re at the point where simply scaling up to insane levels has yielded results that no one expected, but it was the lowest-hanging fruit at the time. Few-shot learning -> novel-space generalization is very hard, so the easiest method was just to take what was currently done and make it bigger (a la ResNet back in the day).

          Lemmy is almost as bad as reddit when it comes to hiveminds.

          • queermunist she/her@lemmy.ml · +1 · 1 day ago (edited)

            You literally called it borderline magic.

            Don’t do that? They’re pattern recognition engines, they can produce some neat results and are good for niche tasks and interesting as toys, but they really aren’t that impressive. This “borderline magic” line is why they’re trying to shove these chatbots into literally everything, even though they aren’t good at most tasks.

  • finitebanjo@lemmy.world · +107 · 2 days ago

    “If we take this 0.84 accuracy model and train another 0.84 accuracy model on it that will make it a 1.68 accuracy model!”

    ~Fucking Dumbass
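
    A rough back-of-the-envelope sketch of why that joke lands: assuming, purely for illustration, that each self-training pass independently preserves any given fact with probability 0.84, the accuracies multiply rather than add.

    ```python
    # Toy model: if every generation of "train on the previous model's output"
    # keeps any given fact correct with probability 0.84 (an assumption for
    # illustration, not a measured number), errors compound instead of adding.
    acc_per_pass = 0.84

    for generation in range(1, 4):
        retained = acc_per_pass ** generation
        print(f"generation {generation}: ~{retained:.2f} of facts still correct")

    # generation 1: ~0.84
    # generation 2: ~0.71  (not 1.68)
    # generation 3: ~0.59
    ```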

  • Hossenfeffer@feddit.uk · +73 · 2 days ago

    He’s been frustrated by the fact that he can’t make Wikipedia ‘tell the truth’ for years. This will be his attempt to replace it.

    • wrinkledoo@sh.itjust.works · +6 · 2 days ago

      There are thousands of backups of Wikipedia, and you can download the entire thing legally, for free.

      He’ll never be rid of it.

      Wikipedia may even outlive humanity, ever so slightly.

      • sthetic@lemmy.ca · +3 · 2 days ago

        Seconds after the last human being dies, the Wikipedia page is updated to read:

        Humans (Homo sapiens) or modern humans were the most common and widespread species of primate

  • Naevermix@lemmy.world · +70/-1 · 2 days ago

    Elon Musk, like most pseudo-intellectuals, has a very shallow understanding of things. Human knowledge is full of holes, and they cannot simply be filled in through logic, as Musk the dweeb imagines.

    • biocoder.ronin@lemmy.ml · +2/-3 · 2 days ago

      Uh, just a thought. Please pardon, I’m not an Elon shill; I just think your argument’s phrasing is off.

      How would you know there are holes in understanding without logic? And how would you remedy gaps in human knowledge without applying logic to check whether things are consistent?

      • andros_rex@lemmy.world · +13 · 2 days ago (edited)

        You have to have data to apply your logic to.

        If it is raining, the sidewalk is wet. Does that mean if the sidewalk is wet, that it is raining?

        There are domains of human knowledge that we will never have data on. There’s no logical way for me to 100% determine what was in Abraham Lincoln’s pockets on the day he was shot.

        When you read real academic texts, you’ll notice that there is always the “this suggests that,” “we can speculate that,” etc. The real world is not straight math and binary logic. The closest fields to that might be physics, and chemistry to a lesser extent, but even then, theoretical physics must be backed by experimentation and data.
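
        A tiny truth-table sketch of that rain/sidewalk point (my own toy illustration): “raining implies wet” can hold in every case while “wet implies raining” fails, because the sidewalk can be wet for other reasons.

        ```python
        from itertools import product

        # Enumerate every truth assignment for "it is raining" and "the sidewalk is wet".
        for raining, wet in product([True, False], repeat=2):
            rain_implies_wet = (not raining) or wet   # raining -> wet
            wet_implies_rain = (not wet) or raining   # wet -> raining (the converse)
            print(f"raining={raining!s:5} wet={wet!s:5}  "
                  f"rain->wet={rain_implies_wet!s:5}  wet->rain={wet_implies_rain!s:5}")

        # The row raining=False, wet=True (a burst pipe, say) satisfies rain->wet
        # but falsifies wet->rain: you can't run the implication backwards.
        ```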

        • biocoder.ronin@lemmy.ml · +1/-5 · 2 days ago

          Thanks, I’ve never heard of data. And I’ve never read an academic text either. Condescending POS.

          So, while I’m ironing out your logic for you: what else would you rely on, if not logic, to prove or disprove and ascertain knowledge about gaps?

          • andros_rex@lemmy.world · +4 · 2 days ago (edited)

            You asked a question, I gave an answer. I’m not sure where you get “condescending” there. I was assuming you had read an academic text, so I was hoping that you might have seen those patterns before.

            You would look at the data for gaps, as my answer explained. You could use logic to predict some gaps, but not all gaps would be predictable. Mendeleev was able to use logic and patterns in the periodic table to predict the existence of germanium and other elements, which data later confirmed, but you could not logically derive the existence of protons, electrons, and neutrons without the later experiments of, say, J. J. Thomson and Rutherford.

            You can’t just feed the sum of human knowledge into a computer and expect it to know everything. You can’t predict “unknown unknowns” with logic.

  • brucethemoose@lemmy.world · +61 · 3 days ago (edited)

    I elaborated below, but basically Musk has no idea WTF he’s talking about.

    If I had his “f you” money, I’d at least try a diffusion or bitnet model (and open the weights for others to improve on), and probably 100 other papers I consider low hanging fruit, before this absolutely dumb boomer take.

    He’s such an idiot know-it-all. It’s so painful whenever he ventures into a field you sorta know.

    But he might just be shouting nonsense on Twitter while X employees actually do something different. Because if they take his orders verbatim they’re going to get crap models, even with all the stupid brute force they have.

  • namingthingsiseasy@programming.dev · +57/-1 · 2 days ago
    2 days ago

    Whatever. The next generation will have to learn to judge whether material is true or not by using sources like Wikipedia or books by well-regarded authors.

    The other thing that he doesn’t understand (and most “AI” advocates don’t either) is that LLMs have nothing to do with facts or information. They’re just probabilistic models that pick the next word(s) based on context. Anyone trying to address the facts and information produced by these models is completely missing the point.
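
    A minimal sketch of what “pick the next word(s) based on context” means mechanically; the scores below are invented for illustration and aren’t any real model’s numbers.

    ```python
    import math
    import random

    # Toy next-token step: a language model assigns a score (logit) to each
    # candidate continuation of "The capital of France is ...", softmax turns
    # the scores into probabilities, and one token is sampled. Nothing in this
    # procedure checks whether the sampled token is *true*.
    logits = {"Paris": 4.1, "Rome": 1.3, "Berlin": 0.7, "banana": -2.0}  # made-up scores

    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}

    next_token = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
    print(probs)       # most of the mass lands on "Paris"
    print(next_token)  # usually "Paris", but only because of statistics, not facts
    ```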

    • aaron@infosec.pub · +8/-25 · 2 days ago (edited)

      Wikipedia is not a trustworthy source of information for anything regarding contemporary politics or economics.

      Edit - this is why the US is fucked.

      • Green Wizard@lemmy.zip · +21/-1 · 2 days ago

        Wikipedia gives a list of its sources; judge what you read based on that. Or just skip to the sources and read them instead.

        • aaron@infosec.pub · +2/-22 · 2 days ago (edited)

          Yeah, because 1) obviously this is what everybody does, and 2) just because sources are provided does not mean they are in any way balanced.

          The fact that you would consider this sort of response an acceptable justification of Wikipedia might indicate just how weak Wikipedia is.

          Edit - if only you could downvote reality away.

        • InputZero@lemmy.world · +2/-10 · 2 days ago

          Just because Wikipedia offers a list of references doesn’t mean that those references reflect what knowledge is actually out there. Wikipedia is trying to be academically rigorous without any of the real work. A big part of doing academic research is reading articles and studies that are wrong or which prove the null hypothesis. That’s why we need experts and not just an AI to regurgitate information. Wikipedia is useful if people understand its limitations; I think a lot of people don’t, though.

          • Green Wizard@lemmy.zip · +4 · 2 days ago

            For sure. Wikipedia is for researching the most basic subjects, or for the first step of doing any research (it can still offer helpful sources): basic stuff, or quick glances at something for conversation.

            • Warl0k3@lemmy.world · +3 · 2 days ago (edited)

              This very much depends on the subject, I suspect. For math or computer science, Wikipedia is an excellent source, and the credentials of the editors maintaining those areas are formidable (to say the least). Their explanations of the underlying mechanisms are, in my experience, a little variable in quality, but I haven’t found one that’s even close to outright wrong.

      • Schadrach@lemmy.sdf.org · +12 · 2 days ago

        Wikipedia is not a trustworthy source of information for anything regarding contemporary politics or economics.

        Wikipedia presents the views of reliable sources on notable topics. The trick is what sources are considered “reliable” and what topics are “notable”, which is why it’s such a poor source of information for things like contemporary politics in particular.

        • aaron@infosec.pub · +3/-4 · 2 days ago (edited)

          A bit more than fifteen years ago I was burned out in my very successful creative career, and decided to try and learn about how the world worked.

          I noticed opposing headlines generated from the same studies (published in whichever academic journal) and realised I could only go to the source: the actual studies themselves. This was in the fields of climate change, global energy production, and biospheric degradation. The scientific method is much degraded but there is still some substance to it. Wikipedia has no chance at all. Academic papers take a bit of getting used to, but coping with them is a skill that most people can learn in fairly short order. Start with the abstract, then the conclusion if the abstract is interesting. Don’t worry about the maths; plenty of people will look at that, and go from there.

          I also read all of the major works on Western beliefs about economics, from the Physiocrats (Quesnay) to modern monetary theory. Read books, not websites (or a website edited by who knows which government agencies and one guy who edited a third of it). It is simple: the higher cost of production usually means more effort, and so higher quality, provided you are somewhat discerning about the books you buy.

          This should not even be up for debate. The fact that it is goes some way to explaining why the US is so fucked.

          • Grappling7155@lemmy.ca · +1 · 2 days ago

            Books are not immune to being written by LLMs spewing nonsense, lies, and hallucinations, which will only make the more traditional issue of author/publisher bias worse. The asymmetry between how long it takes to create misinformation and how long it takes to verify it has never been this bad.

            Media literacy will be very important going forward for new informational material and there will be increasing demand for pre-LLM materials.

            • aaron@infosec.pub · +1/-1 · 2 days ago

              Yes, I know books are not immune to LLMs. The classics are all already written; I would suggest people start with them.

        • aaron@infosec.pub · +1/-4 · 2 days ago

          Wikipedia presents the views of reliable sources on notable topics

          Absolutely nowhere near. This is why America is fucked.

          • Schadrach@lemmy.sdf.org · +5 · 2 days ago

            Again, read the rest of the comment. Wikipedia very much repeats the views of reliable sources on notable topics - most of the fuckery is in deciding what counts as “reliable” and “notable”.

            • aaron@infosec.pub · +1/-6 · 2 days ago

              And again: read my reply. I refuted this idiotic take.

              You allowed yourselves to be dumbed down to this point.

                • aaron@infosec.pub · +2/-1 · 2 days ago

                  No. You calling me a ‘dick’ negates any point you might have had. In fact you had none. This is a personal attack.

    • Kyrgizion@lemmy.world · +20/-5 · 2 days ago

      Thinking Wikipedia or other unbiased sources will still be available in a decade or so is wishful thinking. Once the digital stranglehold kicks in, it’ll be mandatory sign-in with a government-vetted identity provider, and your sources will be limited to what that government allows you to see. MMW.

      • namingthingsiseasy@programming.dev · +27 · 2 days ago

        Wikipedia is quite resilient - you can even put it on a USB drive. As long as you have a free operating system, there will always be ways to access it.
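
        For the curious, a small sketch of how you could check the size of the official dump before copying it to that USB drive. The URL follows the standard dumps.wikimedia.org naming, but treat the exact filename as an assumption and browse the dump index if it has moved.

        ```python
        import urllib.request

        # English Wikipedia, current article text only (no edit history), bzip2-compressed.
        # The "latest" alias is the conventional name on dumps.wikimedia.org; verify it on
        # the index page if this 404s.
        URL = ("https://dumps.wikimedia.org/enwiki/latest/"
               "enwiki-latest-pages-articles.xml.bz2")

        req = urllib.request.Request(URL, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            length = resp.headers.get("Content-Length")
            if length:
                print(f"Compressed dump is roughly {int(length) / 1e9:.0f} GB - it fits on a USB drive.")
            else:
                print("Server didn't report a size; check the dump index page instead.")
        ```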

        • Dead_or_Alive@lemmy.world · +11 · 2 days ago

          I keep a partial local copy of Wikipedia on my phone and backup device with an app called Kiwix. Great if you need access to certain items in remote areas with no access to the internet.

      • coolmojo@lemmy.world · +4/-7 · 2 days ago

        Yes. There will be no websites, only AI and apps. You will be automatically logged in to the apps. Linux and Lemmy will be banned. We will be classed as hackers and criminals. We’ll probably have to build our own mesh network for communication, or access it from a secret location.

    • theneverfox@pawb.social · +3/-3 · 2 days ago

      The other thing that he doesn’t understand (and most “AI” advocates don’t either) is that LLMs have nothing to do with facts or information. They’re just probabilistic models that pick the next word(s) based on context.

      That’s a massive oversimplification; it’s like saying humans don’t remember things, we just have neurons that fire based on context.

      LLMs do actually “know” things. They work based on tokens and weights, which are the nodes and edges of a high-dimensional graph. The LLM traverses this graph as it processes inputs and generates new tokens.

      You can do brain surgery on an LLM and change what it knows; we have a very good understanding of how this works. You can change a single link and the model will believe the Eiffel Tower is in Rome, and it’ll describe how you have a great view of the Colosseum from the top.

      The problem is that it’s very complicated and complex; researchers are currently developing new math to let us do this in a useful way.
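
      A stripped-down sketch of that “change a single link” idea: treat one weight matrix as a linear key-to-value memory and apply a rank-one edit so a chosen key maps to a new value (a toy in the spirit of published model-editing work, not anyone’s production method).

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      d = 8  # toy hidden dimension

      # Pretend a single weight matrix W acts as an associative memory: value = W @ key.
      key_eiffel = rng.normal(size=d)   # stand-in representation of "Eiffel Tower"
      val_paris = rng.normal(size=d)    # stand-in representation of "is in Paris"
      val_rome = rng.normal(size=d)     # stand-in representation of "is in Rome"

      # Build W so it currently maps the Eiffel key to the Paris value.
      W = np.outer(val_paris, key_eiffel) / (key_eiffel @ key_eiffel)

      # Rank-one "surgery": nudge W so the same key now returns the Rome value,
      # while directions orthogonal to that key are left untouched.
      delta = np.outer(val_rome - W @ key_eiffel, key_eiffel) / (key_eiffel @ key_eiffel)
      W_edited = W + delta

      print(np.allclose(W @ key_eiffel, val_paris))         # True: before the edit
      print(np.allclose(W_edited @ key_eiffel, val_rome))   # True: after the edit
      ```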

    • Auli@lemmy.ca · +10 · 2 days ago

      And the “adding missing information” part doesn’t help. Isn’t that just saying they’re going to make shit up?

  • Deflated0ne@lemmy.world · +52 · 3 days ago

    Dude is gonna spend Manhattan Project-level money making another stupid fucking shitbot, trained on regurgitated AI slop.

    Glorious.