Hello!

As a handsome local AI enjoyer™ you’ve probably noticed one of the big flaws with LLMs:

They lie. Confidently. ALL THE TIME.

(Technically, they “bullshit” - https://link.springer.com/article/10.1007/s10676-024-09775-5)

I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.

The thing: llama-conductor

llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).

I tried to make a glass box that makes the stack behave like a deterministic system, instead of like a drunk telling a story about the fish that got away.

TL;DR: “In God we trust. All others must bring data.”

Three examples:

1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)

You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:

  • >>attach <kb> — attaches a KB folder
  • >>summ new — generates SUMM_*.md files with SHA-256 provenance baked in (rough sketch below)
  • the original doc is then moved to a sub-folder (so you can still trace back to it)
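If “SHA-256 provenance baked in” sounds abstract, here’s a simplified Python sketch of the idea. The header format, filenames and function here are illustrative only, not llama-conductor’s actual code:

```python
import hashlib
from pathlib import Path

def write_summ(source: Path, summary_text: str) -> Path:
    # Hash the original doc so the summary can always be traced back to it.
    digest = hashlib.sha256(source.read_bytes()).hexdigest()
    summ_path = source.with_name(f"SUMM_{source.stem}.md")
    header = f"<!-- source: {source.name} -->\n<!-- sha256: {digest} -->\n\n"
    summ_path.write_text(header + summary_text, encoding="utf-8")
    return summ_path
```

Point being: every summary carries the checksum of the doc it came from, so you can re-hash the original at any time and prove nothing drifted.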

Now, when you ask something like:

“yo, what did the Commodore C64 retail for in 1982?”

…it answers from the attached KBs only. If the fact isn’t there, it tells you - explicitly - instead of winging it. E.g.:

The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts.

Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.

Confidence: medium | Source: Mixed

No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don’t GIGO yourself into stupid.

And when you’re happy with your summaries, you can:

  • >>move to vault — promote those SUMMs into Qdrant for the heavy mode.
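Conceptually, “move to vault” boils down to: embed the SUMM text and upsert it into Qdrant with its provenance riding along in the payload. Rough sketch only - the collection name, ID scheme and the embed() placeholder are assumptions, not the real implementation:

```python
import uuid
from pathlib import Path

from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct

def embed(text: str) -> list[float]:
    # Placeholder: plug in whatever embedding model your stack uses.
    raise NotImplementedError

def promote_to_vault(summ_path: Path, client: QdrantClient) -> None:
    text = summ_path.read_text(encoding="utf-8")
    client.upsert(
        collection_name="vault",  # placeholder collection name
        points=[PointStruct(
            id=str(uuid.uuid4()),
            vector=embed(text),
            payload={"file": summ_path.name, "text": text},  # provenance rides along
        )],
    )
```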

2) Mentats: proof-or-refusal mode (Vault-only)

Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:

  • no chat history
  • no filesystem KBs
  • no Vodka
  • Vault-only grounding (Qdrant)

It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:

FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.

Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]

Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.

The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”
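If you want the shape of the triple-pass in code, it’s roughly the following. Heavily simplified sketch against an OpenAI-compatible endpoint: the URL, model names and prompts are stand-ins, and the real Mentats pipeline adds Vault retrieval, formatting rules and the debug log:

```python
from openai import OpenAI

# Any OpenAI-compatible endpoint works; URL and model names are stand-ins.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

def ask(model: str, system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        temperature=0.1,  # low temp: punish inventiveness
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

def mentats(question: str, facts: str) -> str:
    # Pass 1: thinker answers from the Vault facts only.
    draft = ask("thinker", "Answer ONLY from these facts:\n" + facts, question)
    # Pass 2: critic attacks the draft ("bullshit - what about xyz?").
    critique = ask("critic", "Attack this answer. List anything unsupported or missing.",
                   f"Facts:\n{facts}\n\nAnswer:\n{draft}")
    # Pass 3: thinker must answer the critique from the facts, or refuse.
    return ask("thinker",
               "Revise your answer. If the critique cannot be answered from the facts, refuse.",
               f"Facts:\n{facts}\n\nDraft:\n{draft}\n\nCritique:\n{critique}")
```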

3) Vodka: deterministic memory on a potato budget

Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.

Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).

  • !! stores facts verbatim (JSON on disk)
  • ?? recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)
  • CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don’t get VRAM spikes after 400 messages

So instead of:

“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”

you get:

!! my server is 203.0.113.42
?? server ip → 203.0.113.42 (with TTL/touch metadata)

And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
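For a mental model of the !! / ?? store, think something like this. Sketch only - the file name, field names, TTL and touch limits are stand-ins, not Vodka’s actual schema:

```python
import json
import time
from pathlib import Path

STORE = Path("vodka_facts.json")
TTL_SECONDS = 30 * 24 * 3600   # stand-in: facts expire after 30 days
MAX_TOUCHES = 50               # stand-in: retire a fact after 50 recalls

def _load() -> dict:
    return json.loads(STORE.read_text()) if STORE.exists() else {}

def _save(facts: dict) -> None:
    STORE.write_text(json.dumps(facts, indent=2))

def store_fact(key: str, text: str) -> None:          # the "!!" path
    facts = _load()
    facts[key] = {"text": text, "created": time.time(), "touches": 0}
    _save(facts)

def recall_fact(key: str) -> str | None:               # the "??" path
    facts = _load()
    entry = facts.get(key)
    if entry is None:
        return None
    dead = (time.time() - entry["created"] > TTL_SECONDS
            or entry["touches"] >= MAX_TOUCHES)
    if dead:
        del facts[key]          # expired or worn out: memory isn't landfill
    else:
        entry["touches"] += 1
    _save(facts)
    return None if dead else entry["text"]
```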


There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.

TL;DR:

If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:

PS: Sorry about the AI slop image. I can’t draw for shit.

PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.

  • Disillusionist@piefed.world · 17 days ago

    Awesome work. And I agree that we can have good and responsible AI (and other tech) if we start seeing it for what it is and isn’t, and actually being serious about addressing its problems and limitations. It’s projects like yours that can demonstrate pathways toward achieving better AI.

    • FrankLaskey@lemmy.ml · 17 days ago

      Yes, because making locally hosted LLMs actually useful means you don’t need to use cloud-based and often proprietary models like ChatGPT or Gemini, which hoover up all of your data.

    • SuspciousCarrot78@lemmy.worldOP · 17 days ago

      Yes. Several reasons -

      • Focuses on making LOCAL LLMs more reliable. You can hitch it to OpenRouter or ChatGPT if you want to leak your personal deets everywhere, but that’s not what this is for. I built this to make local, self-hosted stuff BETTER.

      • The entire system operates on curating local data (and tagging it with provenance trails), so you don’t need to YOLO requests through god knows where to pull information.

      • In theory, you could automate a workflow that does this - poll SearXNG, grab whatever you want, make a .md summary, drop it into your KB folder, then tell your LLM “do the thing”. Or even use Scrapy if you prefer: https://github.com/scrapy/scrapy

      • Your memory is stored on disk, at home, in a tamper-proof file that you can inspect. No one else can see it. It doesn’t get leaked by the LLM anywhere, because until you ask it, it literally has no idea what facts you’ve stored. The contents of your KBs, memory stores, etc. are CLOSED OFF from the LLM.

        • SuspciousCarrot78@lemmy.worldOP · 17 days ago

          Good question.

          It doesn’t “correct” the model after the fact. It controls what the model is allowed to see and use before it ever answers.

          There are basically three modes, each stricter than the last. The default is “serious mode” (governed by serious.py). Low temp, punishes chattiness and inventiveness, forces it to state context for whatever it says.

          Additionally, Vodka (made up of two sub-modules - “cut the crap” and “fast recall”) operates at all times. Cut the crap trims context so the model only sees a bounded, stable window. You can think of it like a rolling summary of what’s been said. That summary isn’t LLM-generated either - it’s concatenation (dumb text matching), so no made-up vibes.

          Fast recall OTOH stores and recalls facts verbatim from disk, not from the model’s latent memory.

          It writes what you tell it to a text file, and when you ask about it, spits it back out verbatim (!! / ??).

          And that’s the baseline.
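          Concretely, “cut the crap” boils down to something like this. Illustrative sketch - the window size and char cap are made-up numbers, not the real config:

          ```python
          # Simplified sketch of the CTC idea: hard-cap what the model sees to
          # the last N messages AND an overall character budget.
          def cut_the_crap(messages: list[dict], last_n: int = 12,
                           char_cap: int = 8000) -> list[dict]:
              window = messages[-last_n:]      # keep only the most recent turns
              kept: list[dict] = []
              total = 0
              for msg in reversed(window):     # walk newest -> oldest
                  total += len(msg["content"])
                  if total > char_cap:
                      break                    # budget blown: drop everything older
                  kept.append(msg)
              return list(reversed(kept))      # back to chronological order
          ```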

          In KB mode, you make the LLM answer based on the above settings + with reference to your docs ONLY (in the first instance).

          When you >>attach <kb>, the router gets stricter again. Now the model is instructed to answer only from the attached documents.

          Those docs can even get summarized via an internal prompt if you run >>summ new, so that extra details are stripped out and you are left with just baseline who-what-where-when-why-how.

          The SUMM_*.md files come with SHA-256 provenance, so every claim can be traced back to a specific origin file (which gets moved to a subfolder).

          TL;DR: If the answer isn’t in the KB, it’s told to say so instead of guessing.

          Finally, Mentats mode (Vault / Qdrant). This is the “I am done with your shit” path.

          It’s all three of the above PLUS a counter-factual sweep.

          It runs ONLY on stuff you’ve promoted into the vault.

          What it does is take your question and reframe it so that all of the particulars must be answered in order for there to BE an answer. Any part missing or not in context? No soup for you!

          In step 1, it runs that past the thinker model. The answer is then passed on to a “critic” model (a different LLM). That model’s job is to look at the thinker’s output and say “bullshit - what about xyz?”.

          It sends that back to the thinker…who then answers and provides the final output. But if it CANNOT answer the critic’s questions from the stored info, it will tell you. No soup for you, again!

          TL;DR:

          The “corrections” happen by routing and constraint. The model never gets the chance to hallucinate in the first place, because it literally isn’t shown anything it’s not allowed to use. Basic premise - trust but verify (and I’ve given you all the tools I could think of to do that).

          Does that explain it better? The repo has a FAQ but if I can explain anything more specifically or clearly, please let me know. I built this for people like you and me.

  • Zexks@lemmy.world · 16 days ago

    This is awesome. I’ve been working on something similar. You’re not likely to get much useful from here though. Anything AI is by default bad here.

  • pineapple@lemmy.ml · 16 days ago

    This is amazing! I will either abandon all my other commitments and install this tomorrow or I will maybe hopefully get it done in the next 5 years.

    Likely-accurate jokes aside, this will be a perfect match for my Obsidian vault, as well as for researching things much more quickly.

    • SuspciousCarrot78@lemmy.worldOP · 16 days ago

      I hope it does what I claim it does for you. Choose a good LLM model. Not one of the sex-chat ones. Or maybe, exactly one of those. For uh…research.

    • SuspciousCarrot78@lemmy.worldOP · 16 days ago

      I’m not claiming I “fixed” bullshitting. I said I was TIRED of bullshit.

      So, the claim I’m making is: I made bullshit visible and bounded.

      The problem I’m solving isn’t “LLMs sometimes get things wrong.” That’s unsolvable AFAIK. What I’m solving for is “LLMs get things wrong in ways that are opaque and untraceable”.

      That’s solvable. That’s what hashes get you: attribution, clear fail states, and auditability. YOU still have to check sources if you care about correctness.

      The difference is - YOU are no longer checking a moving target or a black box. You’re checking a frozen, reproducible input.

      That’s… not how any of this works…

      Please don’t teach me to suck lemons. I have very strict parameters for fail states. When I say three strikes and you’re out, I do mean three strikes and you’re out. Quants ain’t quants, and models ain’t models. I am very particular in what I run, how I run it, and what I tolerate.

      • nagaram@startrek.website · 16 days ago

        I think you missed the guy this is targeted at.

        Worry not though. I get it. There isn’t a lot of nuance in the AI discussion anymore and the anti-AI people are quite rude these days about anything AI at all.

        You did good work homie!

  • Pudutr0n@lemmy.world · 16 days ago (edited)

    re: the KB tool, why not just skip the llm and do two chained fuzzy finds? (what knowledge base & question keywords)

    • SuspciousCarrot78@lemmy.worldOP · 16 days ago (edited)

      re: the KB tool, why not just skip the llm and do two chained fuzzy finds? (what knowledge base & question keywords)

      Yep, good question. You can do that, it’s not wrong. If your KB is small + your question is basically “find me the paragraph that contains X,” then yeah: two-pass fuzzy find will dunk on any LLM for speed and correctness.

      But the reason I put an LLM in the loop is: retrieval isn’t the hard part. Synthesis + constraint are. What an LLM is doing in KB mode (basically) is this -

      1. It turns the question into an extraction task. Instead of “search keywords,” it’s: “given these snippets, answer only what is directly supported, and list what’s missing.”

      2. Then, rather than giving you 6 fragments across multiple files, the LLM assembles the whole thing into a single answer, while staying source-locked (and refusing fragments that don’t contain the needed fact).

      3. Finally: it has “structured refusal” baked in. IOW, the whole point is that the LLM is forced to say “here are the facts I saw, and this is what I can’t answer from those facts”.

      TL;DR: fuzzy search gets you where the info lives. This gets you what you can safely claim from it, plus an explicit “missing list”.
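      The prompt shape for that extraction-plus-refusal behaviour is roughly the following. Illustrative only - this is not the actual serious.py prompt:

      ```python
      # Sketch of a KB-mode extraction prompt: answer only what the snippets
      # support, and explicitly list what's missing.
      KB_EXTRACTION_PROMPT = """\
      You are answering ONLY from the snippets below.

      Snippets:
      {snippets}

      Question: {question}

      Rules:
      1. State only facts directly supported by the snippets.
      2. Under "Missing information:", list every part of the question the
         snippets do NOT answer.
      3. If nothing is supported, reply: "The provided facts do not contain
         this information."
      """

      def build_kb_prompt(snippets: list[str], question: str) -> str:
          return KB_EXTRACTION_PROMPT.format(
              snippets="\n---\n".join(snippets), question=question)
      ```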

      For pure retrieval: yeah - search. In fact, maybe I should bake in >>grep or >>find commands. That would be the right trick for “show me the passage”, not “answer the question”.

      I hope that makes sense?

  • rollin@piefed.social · 17 days ago

    At first blush, this looks great to me. Are there limitations with what models it will work with? In particular, can you use this on a lightweight model that will run in 16 GB RAM to prevent it hallucinating? I’ve experimented a little with running ollama as an NPC AI for Skyrim - I’d love to be able to ask random passers-by if they know where the nearest blacksmith is, for instance. It was just far too unreliable, and worse, it was always confidently unreliable.

    This sounds like it could really help these kinds of uses. Sadly I’m away from home for a while so I don’t know when I’ll get a chance to get back on my home rig.

    • SuspciousCarrot78@lemmy.worldOP · 17 days ago

      My brother in virtual silicon: I run this shit on a $200 p.o.s. with 4 GB of VRAM.

      If you can run an LLM at all, this will run. BONUS: because of the way “Vodka” operates, you can run with a smaller context window without eating shit from OOM errors. So…that means…if you could only run a 4B model (because the GGUF itself is 3 GB before overheads, and then you add the drag from KV cache accumulation)…maybe you can now run the next size up…or enjoy no-slowdown chats with the model size you have.

      • rollin@piefed.social · 16 days ago

        I never knew LLMs could run on such low-spec machines now! That’s amazing. You said elsewhere you’re using Qwen3-4B (abliterated), and I found a page saying that there are Qwen3 models that will run on “Virtually any modern PC or Mac; integrated graphics are sufficient. Mobile phones”

        Is there still a big advantage to using Nvidia GPUs? Is your card Nvidia?

        My home machine that I’ve installed ollama on (and which I can’t access in the immediate future) has an AMD card, but I’m now toying with putting it on my laptop, which is very midrange and has Intel Arc graphics (which performs a whole lot better than I was expecting in games)

        • SuspciousCarrot78@lemmy.worldOP · 16 days ago

          Yep, LLMs can and do run on edge devices (weak hardware).

          One of the driving forces for this project was in fact trying to make my $50 Raspberry Pi more capable of running LLMs. It sits powered on all the time, so why not?

          No special magic with NVIDIA per se, other than ubiquity.

          Yes, my card is NVIDIA, but you don’t need a card to run this.

  • Alvaro@lemmy.blahaj.zone · 17 days ago

    I don’t see how it addresses hallucinations. It’s really cool! But seems to still be inherently unreliable (because LLMs are)

    • SuspciousCarrot78@lemmy.worldOP · 17 days ago

      I don’t see how it addresses hallucinations. It’s really cool! But seems to still be inherently unreliable (because LLMs are)

      LLMs are inherently unreliable in “free chat” mode. What llama-conductor changes is the failure mode: it only allows the LLM to argue from user curated ground truth and leaves an audit trail.

      You don’t have to trust it (black box). You can poke it (glass box). Failure leaves a trail and it can’t just hallucinate a source out of thin air without breaking LOUDLY and OBVIOUSLY.

      TL;DR: it won’t piss in your pocket and tell you it’s rain. It may still piss in your pocket (but much less often, because it’s house trained)

  • Domi@lemmy.secnd.me · 16 days ago

    I have a Strix Halo machine with 128GB VRAM so I’m definitely going to give this a try with gpt-oss-120b this weekend.

  • SuspciousCarrot78@lemmy.worldOP · 16 days ago

    Responding to my own top post like a FB boomer: May I make one request?

    If you found this little curio interesting at all, please share in the places you go.

    And especially if you’re on Reddit, where normies go.

    I used to post heavily on there, but then Reddit did a Reddit and I’m done with it.

    https://lemmy.world/post/41398418/21528414

    Much as I love Lemmy and HN, they’re not exactly normcore, and I’d like to put this into the hands of people :)

    PS: I am thinking of taking some of the questions you all asked me here (de-identified) and writing a “Q&A_with_drBobbyLLM.md” and sticking it on the repo. It might explain some common concerns.

    And, if nothing else, it might be mildly amusing.

  • 7toed@midwest.social · 16 days ago

    Okay, pardon the double comment, but I now have no choice but to set this up after reading your explanations. Doing what TRILLIONS of dollars hasn’t cooked up yet… I hope you’re ready, by whatever means you deem, for when someone else “invents” this.

    • SuspciousCarrot78@lemmy.worldOP · 16 days ago

      It’s copyLEFT (AGPL-3.0 license). That means it’s free to share, copy, and modify…but you can’t roll a closed-source version of it and sell it for profit.

      In any case, I didn’t build this to get rich (fuck! I knew I forgot something).

      I built this to try to unfuck the situation / help people like me.

      I don’t want anything for it. Just maybe a fist bump and an occasional “thanks dude. This shit works amazing”