← Back to Library

How to use any AI privately - the most private LLM

Most guides on artificial intelligence privacy stop at running a model on your laptop, but The Hated One argues this is a dangerous illusion for anyone without a dedicated graphics card. This piece stands out because it refuses to accept the binary choice between total privacy and cloud convenience, instead offering a granular, tiered strategy for de-identification that treats data protection as a continuous workflow rather than a single setting. In an era where every chat is potentially training the next generation of surveillance tools, understanding how to navigate the gap between local hardware limitations and cloud-based necessity is no longer optional.

The Illusion of Local Privacy

The Hated One begins by dismantling the assumption that "local" is synonymous with "private" for the average user. They note that "not everyone is going to have the hardware to run a capable enough model on their own device," and that even small open-source models often lack the nuance of their cloud-based counterparts. The author's core argument is that the race for capability has forced a collapse of safeguards, where major providers prioritize data collection over user confidentiality. "Anything you say or upload to ChatGPT, Copilot, or Gemini will be collected, stored, and retained for a long time," The Hated One warns, highlighting that deletion requests often only strip credentials while the underlying data profile remains intact.


This framing is effective because it shifts the focus from user error to systemic design. The author points out that human reviewers still process conversations, meaning confidential information is read by third parties. Furthermore, they warn that large language models can be attacked to extract their training data, meaning "whatever you tell these cloud-based AI tools could potentially be extracted by attackers from around the globe." Critics might argue that the risk of prompt extraction is overstated for casual users, but the author's insistence that "the larger the model the more vulnerable it is" aligns with emerging academic research on model inversion attacks. The takeaway is clear: if you use a cloud service, you must assume your data is already compromised.

"There are different privacy techniques you need to implement to protect either of these individually in some cases it might be enough to just protect your identity while letting them steal your chat in other cases you might want the reverse that your identity is not really that important but provider should not know what prompts you're standing their way"

The Architecture of De-Identification

Moving from diagnosis to prescription, the text outlines a specific workflow for users who must rely on major platforms. The Hated One insists that protection starts before account creation, specifically by masking the IP address and using an alias email. "The best way to do this is either with a VPN or Tor," they advise, explicitly steering readers away from popular, sponsored services toward more reputable options like Proton VPN. The author emphasizes that a native mobile app is a "no-go" due to invasive permissions, suggesting instead that users access services via a web browser over a secure connection.
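The IP-masking step can be sketched in code. This is a minimal, hedged example assuming a Tor daemon is already running on its default SOCKS port (9050); it builds a `requests` session whose traffic is tunneled through Tor, which is one way to keep the provider from seeing your real address.

```python
import requests

# Tor's default local SOCKS5 proxy; "socks5h" resolves DNS through Tor too,
# so hostname lookups do not leak to your ISP. Assumes Tor is running locally.
TOR_PROXY = "socks5h://127.0.0.1:9050"

def make_tor_session() -> requests.Session:
    """Return a requests session whose HTTP(S) traffic goes through Tor."""
    session = requests.Session()
    session.proxies = {"http": TOR_PROXY, "https": TOR_PROXY}
    # Use a generic User-Agent to reduce client fingerprinting.
    session.headers.update({"User-Agent": "Mozilla/5.0"})
    return session
```

With Tor running, `make_tor_session().get("https://check.torproject.org")` should report a Tor exit address rather than your own; a VPN achieves the same separation at the network level without per-session setup.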

The strategy relies on compartmentalization. By combining a Virtual Private Network with a temporary email alias and a unique password, a user creates a "pseudonymous account that will still collect your prompts but your identity will be separated from them." This is a pragmatic, if imperfect, solution for those who need the capability of top-tier models without linking the output to their real-world identity. The author's tone here is urgent and instructional, treating privacy as a series of technical hurdles to be cleared rather than a philosophical stance.

When the Provider is the Product

For higher-stakes uses, such as brainstorming business ideas or discussing mental health, the author pivots to services whose business model does not rely on data mining. The Hated One identifies three specific alternatives: Venice AI, HuggingChat, and Brave's Leo. "One of them is going to run your prompts through a proxy and the other two will erase your prompts upon fulfilling the request," they explain. Venice AI is highlighted for routing prompts through a proxy and refusing to retain conversations, while HuggingChat offers the ability to delete data and avoid sharing it with third parties.

The most distinct recommendation is Brave's Leo, which promises to preserve records only until a response is generated, after which they are erased. The author notes that while Brave claims not to log IP addresses, they still recommend a VPN as a safety net. This section is particularly valuable because it moves beyond the generic "use Tor" advice to specific, actionable tools that balance usability with privacy. However, a counterargument worth considering is that these smaller providers may lack the robust security infrastructure of the tech giants they seek to replace, potentially introducing new vectors for data loss.

The Ultimate Isolation: Local and GrapheneOS

The piece culminates in a discussion of absolute privacy, where the user must run models locally or create an entirely isolated digital environment. For local execution, The Hated One recommends tools like Open WebUI and Jan, which allow users to run models downloaded from Hugging Face without an internet connection. "Open WebUI will run from a Docker container as a localhost server, which you can open up in your default web browser, but it's all local; you don't have to worry about it," they write, emphasizing the ability to customize models and analyze confidential documents without external exposure.
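Local runners of this kind commonly expose an OpenAI-compatible chat endpoint on localhost. The sketch below assumes such a setup (the URL uses Ollama's default port and the model name is a placeholder); nothing in it leaves the machine.

```python
import json
import urllib.request

# Assumption: a local runner (e.g. Ollama, or Open WebUI's backend) serves an
# OpenAI-compatible API here. Adjust URL and model name to your own setup.
LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ask_local(model: str, prompt: str) -> str:
    """Send the prompt to the local endpoint and return the reply text."""
    data = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        LOCAL_ENDPOINT, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint is bound to localhost, prompts and documents analyzed this way never transit the network, which is the property the author is after.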

For the most extreme scenarios, the author introduces GrapheneOS, a hardened mobile operating system. "This is only possible on GrapheneOS, so I included this at the end of this tutorial," they state, describing a workflow where apps are installed in a separate, isolated user profile running behind a full-device VPN. The author details using anonymous Google accounts and gift cards to purchase services, ensuring that even if data is collected, it cannot be traced back to the user. "I know my information is still being collected, but I don't care, because it's fake and isolated and doesn't lead to any identifiable information about me," The Hated One concludes. This level of operational security is demanding, but it represents the only true method for bypassing identification in a hostile digital landscape.

"Take control of your privacy because it matters"

Bottom Line

The Hated One provides a rare, technically rigorous roadmap for navigating the privacy paradox of modern AI, successfully arguing that capability and anonymity can coexist through layered de-identification strategies. The piece's greatest strength is its refusal to offer a silver bullet, instead providing a spectrum of solutions from simple IP masking to full system isolation. Its primary vulnerability lies in the steep technical barrier to entry for the most secure options, which may limit their utility to a niche of highly motivated users rather than the general public.

Sources

How to use any AI privately - the most private LLM

by The Hated One

I want to teach you how to use AI privately: not just how to run an LLM locally on your device, but how to use any AI out there actually privately. By now there are more than enough tutorials on how to run AI on your laptop, which is private but insufficient. Not everyone is going to have the hardware to run a capable enough model on their own device. You're going to need a dedicated GPU and plenty of RAM in order to run something like a 7 billion parameter Llama 3, which is a very small model compared to GPT-4 or Gemini. And even if you do have the hardware, sometimes the small open-source models are just not capable enough, and in any case you probably want to hop between multiple models that are going to excel at different tasks. So what can you do if you are in that situation? In this private AI tutorial, I'm going to show you some really cool things you can do to take control of your data while using cloud-based LLMs, and I'm going to teach you not only how but, most importantly, when you should hop between local and cloud-based services. You're going to learn what to do and what to avoid. There is so much to show you, so be sure to stick till the end, where I'll share my secret methods I use to bypass all forms of identification with these online services. Okay, ready? Let's begin.

Hey, if you enjoy the sponsor-free content, support me on Patreon and unlock access to all of my podcasts, get early access to everything ad-free, and you can even get my merch. My content is very adversarial to social media algorithms and it clearly shows in my YouTube analytics. This work cannot survive without your support, so please become a paid member on Patreon and join me in this fight from within. Thank you kindly.

Before I can teach you about the privacy techniques, you need to understand why you need to protect your privacy from AI in the first place, and the short answer is: because the situation is really bad. AI companies like OpenAI, Microsoft, or Google have really gone all in on the race to the top, to the point where all safeguards went to the side, which includes any basic protection of ...