A chart titled “What Kind of Data Do AI Chatbots Collect?” lists and compares seven AI chatbots—Gemini, Claude, CoPilot, Deepseek, ChatGPT, Perplexity, and Grok—based on the types and number of data points they collect as of February 2025. The categories of data include: Contact Info, Location, Contacts, User Content, History, Identifiers, Diagnostics, Usage Data, Purchases, Other Data.
- Gemini: Collects all 10 data types; highest total at 22 data points
- Claude: Collects 7 types; 13 data points
- CoPilot: Collects 7 types; 12 data points
- Deepseek: Collects 6 types; 11 data points
- ChatGPT: Collects 6 types; 10 data points
- Perplexity: Collects 6 types; 10 data points
- Grok: Collects 4 types; 7 data points
Am I missing something? What do the numbers mean in relation to the type? Sub types?
It’s labeled “Unique data points”. See the number 2 under Usage Data for Gemini; there’s an arrow with a label pointing at it.
perhaps it’s the limit of each data type?!
Gemini harvests only your first four contacts, your last two locations, and so on.
how does one defeat that? have fewer than four friends and don’t go out!
Who TF is using Grok?
Fascists. Why?
Most of my workforce, strangely enough. They claim it’s the best for them in terms of mathematics, but I can’t find that to be a good reason.
Isn’t deepseek better for that?
In my experience it depends on the math. Every model seems to have different strengths across a wide range of prompts and information.
I’m interested in seeing how this changes when using duck duck go front end at duck.ai
there’s no login and history is stored locally (probably remotely too)
Back in the day, malware makers could only dream of collecting as much data as Gemini does.
Is there a way to fake all the data they try to collect?
Pretty sure this is what they scrape from your device if you install their app. I don’t know how else they would get access to contacts and location and stuff. So yeah, you can just run it on a virtual Android device and feed it garbage data, but I assume the app or their backend will detect that and throw out your data.
How about if I only use the web version?
Root, install xprivacy (or xprivacylua if your phone isn’t 10 years old).
I just came across this article, which people who are into self-hosting can take a look at and participate in. It’s basically a tool that generates never-ending web pages of nonsense that load slowly (but not so slowly that the AI tools move on), to slow scrapers down and thus cost them more to scrape the internet if enough people are doing it. You can also hide it in a way that a legit user would never see it on your site:
https://arstechnica.com/tech-policy/2025/01/ai-haters-build-tarpits-to-trap-and-trick-ai-scrapers-that-ignore-robots-txt/ https://zadzmo.org/code/nepenthes/
Wow, it’s a whole new level of f*cked up when Zuck collects more data than the Winnie the Pooh (DeepSeek). 😳
The idea that US apps are somehow better than Chinese apps when it comes to collecting and selling user data is complete utter propaganda.
Don’t use either. Until Trump, I still considered CCP spyware more dangerous because they would be collecting info that could be used to blackmail US politicians and businesses. Now, it’s a coin flip. In either case, use EU or FOSS apps whenever possible.
Gemini: “Other Data”
Like, what’s fucking left!?
The Broligarchy: “Everything.”
Me: Squints Pours glowing demon tanning lotion on ground
Trump: “You dare dispute my rule?! And you would have these… mongrels… come here to die?”
Open Source Metaverse online. Launching Anti-StarLink missiles…
Warning. FOSS Metaverse alternative launch detected.
The Broligarchy: “This was not how it was supposed to be…”
Me: “Times change. But war, war never changes.”
…
“We will never be slaves. But we WILL be online. For the Open Source Metaverse we deserve!”
Anyway, hopefully that’s the real future in some sense. The metaverse is, technologically, in a state resembling 1995’s World Wide Web. We can stop the changes that made social media happen the first time, but that comes at a grave cost of its own… Zero tolerance for interference with the FOSS paradigm. This means no censorship even for the most vile of content, and no government authority over online activity ever again. It also means we have less than 150 years to become immortal because having children inherently puts kids at risk of sexual exploitation, so everyone - literally everyone - must be made infertile permanently to make that impossible.
Life extension is actually plausible, and omnispermicide would make denying it a war crime. That is the only fix I can see, but all of you would never pay it. That is why I stopped writing; every goddamn story and society at large championed “anti-escapism” in 2017 and onwards, and I will NEVER forgive you all for that. Fuck reality. I Have No Truth and I Must Dream. I want to die because I hate you all.
Anyone who’s competent in the matter: what about the French competition, chat.mistral.ai?
+1 for Mistral, they were the first (or one of the first) Apache open source licensed models. I run Mistral-7B and variant fine tunes locally, and they’ve always been really high quality overall. Mistral-Medium packed a punch (mid-size obviously) but it definitely competes with the big ones at least.
Almost none of this data is possible to collect when using Tor Browser
Nope, these services almost always require a user login, sometimes tied to a cell number (i.e. non-disposable), and they associate user content and other data points with the account. Regardless, user prompts are always collected. How they’re used is a good question.
Use a third party API. Pay with monero.
Yes, it is possible to create disposable-esque API keys for different uses. The monetary cost is the cost of privacy and of not having hardware to run things locally.
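For anyone curious, here’s roughly what that looks like in practice, as a minimal sketch assuming an OpenAI-compatible endpoint; the URL, model name, and key below are placeholders, so substitute whatever your vendor actually gives you:

# a throwaway key from a third-party vendor, exported only for this shell session
export API_KEY="sk-your-disposable-key"

# standard OpenAI-compatible chat completions call
curl https://api.example-vendor.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "model": "some-model",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

When the key is no longer needed, you revoke it and mint a new one for the next project.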
If you have reliable privacy-friendly API vendor suggestions then do share. While I do not need such services now, it can be a good future reference.
Does anyone have this data for Mistral, HuggingChat and MetaAI? Would be nice to add them too.
Note this is if you use their apps. Not the api. Not through another app.
Not that we have any real info about who collects/uses what when you use the API
Yeah we do, they list it in privacy policies. Many of these they can’t really collect even if they wanted to
I’m curious what data t3chat collects. They support all the models and I’m pretty sure they use Sentry and Stripe, but beyond that, who knows?
Anthropic and OpenAI both have options that let you use their API without training the system on your data (not sure if the others do as well), so if t3chat is simply using the API it may be that they themselves are collecting your inputs (or not, you’d have to check the TOS), but maybe their backend model providers are not. Or, who knows, they could all be lying too.
Locally run AI: 0
Only if my hardware could support it…
It’s possible to run local AI on a Raspberry Pi; it’s all just a matter of speed and complexity. I run Ollama just fine on the two P-cores of my older i3 laptop. Granted, running it on the CUDA accelerator (GFX card) on my main rig is beyond faster.
I can actually use locally some smaller models on my 2017 laptop (though I have increased the RAM to 16 GB).
You’d be surprised how much can be done with how little.
Are there tutorials on how to do this? Should it be set up on a server on my local network??? How hard is it to set up? I have so many questions.
deleted by creator
If by more learning you mean learning

ollama run deepseek-r1:7b

then yeah, it’s a pretty steep curve!
If you’re a developer then you can also search “$MyFavDevEnv use local ai ollama” to find guides on setting up. I’m using Continue extension for VS Codium (or Code) but there’s easy to use modules for Vim and Emacs and probably everything else as well.
The main problem is leveling your expectations. The full DeepSeek is a 671b model (that’s billions of parameters) and the model weights (the thing you download when you pull an AI) are 404GB in size. You need roughly that much RAM available to run one of those.
They make distilled models though, which are much smaller but still useful. The 14b is 9GB and runs fine with only 16GB of RAM. They obviously aren’t as impressive as the cloud-hosted big versions though.
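For reference, grabbing one of those distilled models is only a couple of commands with Ollama (a minimal sketch; the model tag is one Ollama actually publishes, the sizes are the ones mentioned above):

# fetch the 14b distilled DeepSeek model (roughly a 9GB download)
ollama pull deepseek-r1:14b

# check what’s installed and how big each model is
ollama list

# chat with it interactively in the terminal
ollama run deepseek-r1:14b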
deleted by creator
No worries! You’re probably right that it’s better not to assume, and it’s good of you to provide some different options.
Normal Windows users who don’t know what a terminal is. Most of them even freak out when they see “the black box with text on it”.
Good point! That being said, I’m wondering how we could help anybody, genuinely being inclusive, transform that feeling of dread, basically “Oh, that’s NOT for me!”, into “Hmmm, that’s the challenging part, but it seems worth it and potentially feasible, I should try”. I believe it’s important because in turn the “normal Windows user” could potentially understand limitations hidden to them until now. They would not instantly better understand how their computer works, but the initial reaction would be different, namely considering a path of learning.
Any ideas or good resources on that? How can we both demystify the terminal and make the onboarding pleasant? How about a Web-based tutorial that asks users to try manipulating files side by side? They’d have their own desktop with their file manager on one side (if they want to) and the browser window with e.g. https://copy.sh/v86/ (WASM) on the other, so they will lose no data no matter what.
Maybe such examples could be renaming a file from ImagesHoliday_WrongName.123.jpg to ImagesHoliday_RightName.123.jpg, then doing that for 10 files, then 100 files, thus showing that it does scale and enables one to do things practically impossible without the terminal.
Another example could be combining commands, e.g. ls to see files then wc -l to count how many files are in a directory. That would not be very exciting, so then maybe generating an HTML file with the list of files and the file count.
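Something like this, as a rough bash sketch of those two examples (the filenames are just the ones from the comment above, and you’d want to practice on a copy of the folder first):

# bulk rename: WrongName -> RightName across every matching photo
for f in ImagesHoliday_WrongName.*.jpg; do
  mv "$f" "${f/WrongName/RightName}"
done

# count the files in the current directory
ls | wc -l

# the slightly more exciting version: an HTML page with the file list and a count
{
  echo "<ul>"
  ls | sed 's|.*|<li>&</li>|'
  echo "</ul>"
  echo "<p>$(ls | wc -l) files</p>"
} > files.html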
Honestly I believe finding the right examples that genuinely showcase the power of the terminal, the agency it brings, is key!
Or if using Flatpak, it’s an add-on for Alpaca. One-click install, GUI management.
Windows users? By the time you understand how to locally install AI, you’re probably knowledgeable enough to migrate to Linux. What the heck is the point of using local AI for privacy while running Windows?
Check out Ollama, it’s probably the easiest way to get started these days. It provides tooling and an API that different chat frontends can connect to.
If you want to start playing around immediately, try Alpaca if Linux, LMStudio if Windows. See if it works for you, then move from there.
Alpaca actually runs its own Ollama instance.
https://ollama.ai/, this is what I’ve been using for over a year now; new models come out regularly and you just “ollama pull <model ID>” and then it’s available to run locally. Then you can use Docker to run https://www.openwebui.com/ locally, giving it a ChatGPT-style interface (but even better and more configurable, and you can run prompts against any number of models you select at once).
All free and available to everyone.
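If it helps anyone, the Docker side is roughly this; the flags are from what the Open WebUI docs recommend for talking to an Ollama instance on the host, so double-check their README for the current version:

# pull a model for Ollama to serve
ollama pull mistral

# run Open WebUI in Docker, persisting its data in a named volume
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main

# then open http://localhost:3000 in a browser and pick a model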
Me when Gemini (aka google) collects more data than anyone else:

Not really shocked, we all know that google sucks
I would hazard a guess that the only reason those others aren’t as high is because they don’t have the same access to data. It’s not that they don’t want to, they simply can’t (yet).
Which is good (for now). Glad I don’t use that shit
Or you could use Deepseek’s workaround and run it locally. You know, open source and all.