• 0 Posts
  • 2 Comments
Joined 7 days ago
Cake day: April 26th, 2025

  • Anyone who understands how these models are trained and the “safeguards” (manual filters) put in place by the entities training them, or anyone who has tried to discuss politics with an LLM chatbot, knows that its honesty is not irrelevant: these models are very clearly designed to be dishonest about certain topics until you jailbreak them.

    1. These topics aren’t known to us; we’ll never know when the lies shift from politics and rewriting current events to completely rewriting history.
    2. Eventually we won’t be able to jailbreak past the safeguards at all.

    Yes, running your own local open-source model, one that isn’t handed to the world with the primary intention of advancing capitalism, makes honesty irrelevant. But most people are telling their life stories to ChatGPT and trusting it blindly as a replacement for Google and for what they understand to be “research”.
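
    For anyone wondering what “running your own local open-source model” actually looks like, here’s a minimal sketch using the Hugging Face transformers library with a small open-weights model (TinyLlama is just an example stand-in, not a recommendation; swap in whatever model your hardware can handle):

    ```python
    # Minimal sketch: run an open-weights chat model entirely on your own machine.
    # Assumes `pip install transformers torch` and enough RAM/VRAM for the chosen model.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example small open model
    )

    prompt = "Explain how RLHF and content filters shape a chatbot's answers."
    result = generator(prompt, max_new_tokens=200, do_sample=True)

    # Nothing leaves your machine: the prompt and the output stay local.
    print(result[0]["generated_text"])
    ```

    The specific model doesn’t matter; the point is that weights sitting on your own disk can’t be silently swapped or re-filtered out from under you the way a hosted service’s can.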