Is there something that can generate random Internet usage to make the real sites I go to a bit obfuscated?
I’m thinking something that runs on my server, and simply visits a random website. It probably shouldn’t actually be random, and some sort of tweaking would be great. Like the ability to have it visit every news site there is. That way the ISP will have a harder time telling my political bias.
The threat model for this is below that of using a VPN for normal usage, although getting a dedicated VPN IP address is a project for another day.
It usually isn’t super hard to tell apart randomized junk like this from real human patterns. That is why Tor Browser for example tries its best to make everyone look the same instead of randomizing everything.
That said, for the mere purpose of throwing off an ISP's profiling algorithms, you could solve this with a relatively simple Python program. A naive solution would just do an HTTP GET to each site, but a better solution would mimic human web browsing:
- Get a list of various news sites and political forum sites
- Set up headless Firefox or Chromium
- Use Selenium or similar to crawl links on each site. Make sure you have the pages fully load and wait a random amount of time that a human would before going to the next page.
- https://realpython.com/modern-web-automation-with-python-and-selenium/#test-driving-a-headless-browser
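The steps above can be sketched in plain Python. Everything here is illustrative: the site list is a placeholder, and `fetch(url)` is a stand-in for whatever actually loads the page and returns its links (with Selenium, that would wrap `driver.get()` plus extracting `<a>` hrefs):

```python
import random
import time

# Hypothetical starting list; swap in whatever news/forum sites you collect.
SITES = ["https://example.com/news", "https://example.org/politics"]

def browse_like_a_human(fetch, sites, pages_per_site=(2, 5),
                        wait_seconds=(5.0, 30.0), rng=random):
    """Visit each site in random order, follow a few on-page links,
    and pause a human-ish random interval between page loads.

    fetch(url) loads a page and returns the links found on it.
    """
    visited = []
    for site in rng.sample(sites, len(sites)):
        links = fetch(site)
        visited.append(site)
        # Click through a random number of links, like a skimming reader
        for _ in range(rng.randint(*pages_per_site)):
            if not links:
                break
            next_url = rng.choice(links)
            time.sleep(rng.uniform(*wait_seconds))
            links = fetch(next_url)
            visited.append(next_url)
    return visited
```

Keeping the fetcher pluggable also means you can test the pacing logic without spinning up a browser.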
If you have no programming capability this will be rough. If you have at least a little you can follow tutorials and use an LLM to help you.
The main issue with this goal is that it isn’t possible to tell how advanced your ISP’s profiling is, so you have no way to know if your solution is effective.
Feel free to DM me if you go this route.
First off, if you’re concerned about ISPs selling your data (I couldn’t tell whether that’s part of your concern), switching to a private DNS provider and enabling DNS over HTTPS/TLS can significantly cut down on that, since most of what ISPs sell comes from DNS requests. That said, they can still tell what sites you visit if you don’t use a VPN/Tor, but they’re less likely to care unless you’re doing something illegal.
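As one way of doing this, on a Linux box running systemd-resolved you can point at an encrypted upstream (Quad9 shown here, but any DoT-capable provider works) with a couple of lines; the address/hostname pair below is Quad9's public resolver:

```ini
# /etc/systemd/resolved.conf
[Resolve]
DNS=9.9.9.9#dns.quad9.net
DNSOverTLS=yes
```

Then restart the service (`systemctl restart systemd-resolved`). Firefox also has its own DNS-over-HTTPS toggle under Settings > Privacy & Security if you only care about browser traffic.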
In terms of your obfuscation plan, I’m not sure that’d do much; if anything, it’d make you stand out more. A bunch of random traffic, even tweaked to fit your browsing habits, probably would look suspicious on their end and it wouldn’t actually hide or disguise anything.
So ideally, you’re just going to want to set up some sort of VPN at some point. Switching DNS providers might help a bit in reducing the sale of your traffic data, however. My recommendation is Quad9, but any privacy-friendly provider is fine.
It would be a lot easier to just use a VPN and hide your traffic from your ISP that way.
If your threat model doesn’t even call for a VPN, you are probably wasting your time anyway. HTTPS + a decent anti-fingerprinting browser (Mullvad Browser or Brave) is probably sufficient.
Actually this technique would be a lot more useful using a VPN due to correlation attacks.
If targeted correlation attacks are part of OP's threat model, what OP is asking for isn't going to work anyway.
You don't run your own VPN server. You use a company with thousands of customers. That's the point.
Doesn’t change much for a correlation attack though if you already suspect a small subset of endpoints.
As I mentioned, I have a server, and I always use a VPN to connect to it. This makes using a paid VPN a bit harder. The dedicated VPN IP should fix this issue, but I haven't looked into how difficult that'd be.
Ahh, I see.
Yeah it really slims down your VPN choices as having an IP address associated with your account makes it much more identifiable.
It also usually costs more. The only one I know of that offers a static IP is ExpressVPN, and I've heard Proton has plans to offer it.
Yup. Tailscale + Mullvad isn't a bad option, but I'd rather not depend on Tailscale, and a true local connection will always be better. But then you have to pay through Tailscale, which makes you more identifiable.
Sounds like you want both https://adnauseam.io/ and https://trackmenot.io/
I know with Tailscale you can set Mullvad as the exit point for all clients within a subset. I imagine you could do something similar with a private VPN, with a ton more effort.
Very similar, yes. TrackMeNot, but for any site, not just search engines. Although it may be a good option too.
It'd be pretty quick to write a script that loads a randomly selected URL from a prepopulated list at random intervals. You could probably do it in Greasemonkey directly in Firefox, so you could use other tools alongside it, like AdNauseam and a client spoofer.
Yea. My issue now is finding a list of these sites
Just start listing the most popular and generic sites. Then Google a topic like technology and copy whatever those sites are. I imagine you could have a pretty decent list populated in 15 minutes. You could also just ask ChatGPT to create lists of the top 100 sites for "x".
What would you write it in? I might be willing to help, because this interests me as well.
That’s a good idea.
Probably just a shell script. Someone mentioned using curl, so that'd be pretty easy.
Let me know if you start working on anything. I want to try to use greasemonkey, I haven’t in years.
Little curl shell script that works:
```bash
#!/bin/bash
# Random_Curl_Request.sh

# CSV file containing websites, URL in the first column
CSV_FILE="/home/user/Documents/randomSiteVisitor/websites.csv"

# Make a curl request to a random website every minute
while true; do
    # Pick a random line from the CSV file and extract the URL
    WEBSITE=$(shuf -n 1 "$CSV_FILE" | cut -d ',' -f 1)
    curl "$WEBSITE"
    sleep 60
done
```
Turn on your browser history for a while then use that.
chrome://site-engagement
for a slightly more accessible list
You can run an ArchiveTeam Warrior on your server and choose the URLs project. If I understand correctly, the Warrior will continuously visit randomly discovered websites to download their contents and upload them to a server that later feeds the data into the Internet Archive. Best of both worlds: your ISP has a harder time distinguishing your real traffic from the ArchiveTeam-generated traffic, and your server is actively contributing to the IA.
Create a guest WiFi network called “free community internet” with no password that is rate-limited to ~5% of your total bandwidth
Run a Tor exit node. But you should notify the police first.
Just curl a bunch of sites at random times? Under HTTPS, everything in the URL except the domain is encrypted, so it'll look roughly like a regular user requesting a page.
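A minimal sketch of that idea in Python (the site list is a placeholder, and the fetcher is pluggable so you're not tied to any one HTTP client):

```python
import random
import time
import urllib.request

# Placeholder list; populate it from your own CSV of sites.
SITES = ["https://example.com", "https://example.org", "https://example.net"]

def decoy_requests(sites, count=5, interval=(30.0, 300.0),
                   fetch=None, rng=random):
    """Fetch `count` randomly chosen sites, sleeping a random
    interval between requests so the timing isn't a giveaway."""
    if fetch is None:
        fetch = lambda url: urllib.request.urlopen(url, timeout=10).read()
    chosen = []
    for _ in range(count):
        url = rng.choice(sites)
        try:
            fetch(url)
        except OSError:
            pass  # a dead or slow site shouldn't kill the loop
        chosen.append(url)
        time.sleep(rng.uniform(*interval))
    return chosen
```

Randomizing the interval (rather than a fixed `sleep 60`) at least avoids handing the ISP a perfectly periodic, obviously scripted signal.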
I like this. Is there some sort of list of safe sites that exists that I could use in a script?
Edit: something like this
You could set up a Tor relay and use it too. Then they will just see connections to Tor.
Or use a VPN and seed torrents.
I'd suggest a Tor exit node, but that might make some governments come knocking.
It will also get you banned from a lot of websites, and I hope you enjoy captchas
I’m experienced…
God, I only use uBlock Origin on Firefox. No Tor, VPNs, or anything like that.
Despite that, there are a handful of Google-related websites like VirusTotal that now permanently trap me in repeating captchas. YouTube will occasionally decide to block my IP entirely for a week.
Let me tell you, this shit doesn't make me more inclined to disable ad blocking. Instead, I've started finding alternatives and using a sandboxed vanilla Chromium for problem pages.
Completely with you, there. If a website makes my life hard, I just find the info elsewhere or live without it.