OpenAI does not want anyone to know what o1 is “thinking" under the hood.

    • paf0@lemmy.world · 2 months ago

      Gotta wonder if that would work. My impression is that they are kind of looping inside the model to improve quality but that the looping is internal to the model. Can’t wait for someone to make something similar for Ollama.
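      The kind of external loop described above can be sketched as a "draft, critique, revise" cycle. This is a minimal sketch, not Ollama's actual API usage: the `generate` function is a stub standing in for a real model call (e.g. via Ollama's chat endpoint), so only the loop logic is shown.

      ```python
      # Sketch of an external "draft, critique, revise" refinement loop.
      # `generate` is a hypothetical stand-in for a real LLM call
      # (e.g. ollama.chat(model=..., messages=[...])); stubbed so the
      # loop logic itself is runnable anywhere.

      def generate(prompt: str) -> str:
          # Placeholder for a real model call.
          return f"answer to: {prompt}"

      def refine(question: str, rounds: int = 2) -> str:
          answer = generate(question)
          for _ in range(rounds):
              # Ask the model to critique its own draft, then revise it.
              critique = generate(f"Critique this answer: {answer}")
              answer = generate(
                  f"Improve the answer.\nQuestion: {question}\n"
                  f"Previous answer: {answer}\nCritique: {critique}"
              )
          return answer
      ```

      Whether o1 does something like this internally or in a single forward pass is exactly what OpenAI isn't saying; the point is that the loop can live outside the model.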

      • jacksilver@lemmy.world · 2 months ago

        This approach has been around for a while, and a number of applications/systems already use it. The thing is, it's not a different model, it's just a different use case.

        It's the same way OpenAI handles math: they recognize the prompt is asking for a math solution, have the model produce a Python solution, and run it. You can't integrate that into the model itself, because these are engineering solutions to make up for the model's limitations.
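        That "route math to code" pattern can be sketched roughly like this. This is a hypothetical illustration, not OpenAI's actual implementation: `model_write_code` is a stub standing in for a real LLM call that would be prompted to emit Python.

        ```python
        # Sketch of routing a math question to generated code:
        # instead of asking the model to compute, ask it to write
        # Python, then execute that code and return the result.

        def model_write_code(question: str) -> str:
            # Hypothetical stand-in for a real model call that is
            # prompted to answer with executable Python; stubbed here.
            return "result = 17 * 23"

        def answer_math(question: str) -> int:
            code = model_write_code(question)
            scope: dict = {}
            # Restricted exec for the sketch; a real system would
            # sandbox untrusted model-generated code far more carefully.
            exec(code, {"__builtins__": {}}, scope)
            return scope["result"]
        ```

        The arithmetic is done by the Python interpreter, not the model, which is the whole point of the workaround.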