On behalf of the Human Rights Foundation, I am pleased to welcome you to the first edition of HRF’s monthly newsletter on the intersection of artificial intelligence and individual rights.
Each month we’ll bring you a concise roundup of some of the world’s most important AI stories in two areas:
- How dictators are using AI tools to repress, and
- How open-source AI tools are being developed to allow individuals to protect themselves, strengthen their networks, and push back against tyranny.
In this first edition, I’ll dive into China’s methods for harvesting data from Large Language Models (LLMs, aka AI chatbots) and look at the global impact of this activity, while also including a few notes on other global news, some scary, some exciting.
We look forward to showcasing some pioneering AI content at the Oslo Freedom Forum in Norway on May 26-28 and to introducing our webinars, trainings, grants, and AI-specific events later this year. Stay tuned for more!
Best regards,
Craig Vachon
Director, AI for Individual Rights

P.S. This might be the first time you’ve seen my name. I just joined HRF this month, filling the newly created Director of AI for Individual Rights role. I’m terrifically excited to dive in and help HRF advance this program. I have a background in leading AI companies, investing in AI startups, and advancing freedom technology. It might interest you that I served in several executive roles at Anchorfree, growing the Hotspot Shield VPN to more than 700 million monthly active users, mostly inside dictatorships. This is the kind of global impact we’d like to have in the coming months and years with HRF. If you’d like to chat, please reach out at [email protected].
China’s Methods for Capturing LLM Data and Global Parallels
Why should I care?
Data harvested from LLM chats (with, for example, ChatGPT, Grok, or, most relevant here, DeepSeek) can yield insights into user behavior that may be exploited to craft effective, hyper-personalized cyberattacks or other nefarious manipulation campaigns.
Legal Mandates & State Access
China’s National Intelligence Law requires all Chinese companies to cooperate with state intelligence services. This means any data collected by DeepSeek (including user prompts, chat histories, account registration details, device data, and potentially even keystroke patterns) can be accessed by government authorities without a warrant.
Technical Integration & Surveillance Partnerships
Many low-cost LLMs like DeepSeek are directly integrated into services provided by major Chinese surveillance and security firms (TopSec, QAX, NetEase), which explicitly use LLMs to enhance cyber censorship and state monitoring capabilities. Researchers have also discovered hidden backdoors in DeepSeek’s code, linking user data directly to servers controlled by the CCP, ByteDance (owner of TikTok), and telecoms like China Mobile. This infrastructure is designed to facilitate mass data collection and surveillance.
Broad Data Collection Practices
DeepSeek’s privacy policy and technical analysis confirm it collects a wide range of information: chat content, search queries, device fingerprints, IP addresses, and even Internet activity from other apps. The platform stores this data on servers in China, making it fully subject to Chinese state access.
Use in Influence and Espionage Operations
The Chinese government has deployed DeepSeek and other low-cost AI models for mass surveillance, biometric data collection, and social media monitoring, both domestically and internationally. These tools are used aggressively to track dissidents, monitor protests, and conduct information and influence operations abroad.
Are Other Countries Using Similar Techniques?
Other Authoritarian States
Autocracies with strong state surveillance apparatuses — such as Russia, Iran, Saudi Arabia, and other Gulf regimes — have adopted similar methods, leveraging AI models and internet platforms to monitor citizens, censor content, and collect data for intelligence purposes. These practices often involve mandatory data localization, legal requirements for company cooperation with state security, and integration of AI tools into national surveillance systems.
Democratic Countries
In contrast, most democracies (United States, European Union, etc.) require judicial oversight for government access to user data. While LLM providers in these countries do collect prompt and usage data, legal and regulatory frameworks (like GDPR or CCPA) impose limits on government access, mandate transparency, and grant users rights over their data. But there are significant ongoing debates about the adequacy of these protections and the potential for overreach under national security laws.
What can you do?
- Use a reputable VPN (like Mullvad or Obscura) to tunnel your traffic to a reputable LLM (a few are mentioned below in the AI for Freedom section).
- Review Privacy Policies: Carefully read the privacy policies of any LLM service or website you use to understand what data they collect and how it is used.
- Limit Information Sharing: Avoid putting personal information in your prompts to LLMs, and don’t share sensitive details (health, finances, identity) unless it’s absolutely necessary and you trust the platform’s data handling practices (a minimal redaction sketch follows this list).
- Use Privacy-Focused Tools: Consider using privacy-enhancing browser extensions, VPNs, and ad blockers to limit tracking by websites.
- Adjust Privacy Settings: Review and adjust the privacy settings on your web browsers, social media accounts, and other online services.
- Delete Old Accounts: Close accounts for services you no longer use to reduce the amount of your data stored online.
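To make the “limit information sharing” advice above concrete, here is a minimal, illustrative Python sketch of stripping obvious identifiers from a prompt before it ever leaves your machine. The `redact` helper and its two patterns are hypothetical examples, not an HRF tool; real scrubbing of sensitive details requires far more than a couple of regexes.

```python
import re

# Hypothetical example: strip obvious identifiers from a prompt before
# sending it to any third-party LLM. Two patterns are nowhere near
# exhaustive -- treat this as a starting point, not a guarantee.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Reach me at [email protected] or +1 555 867 5309 about my case."))
# -> Reach me at [EMAIL REDACTED] or [PHONE REDACTED] about my case.
```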

Iran | AI Tools Used to Enforce Hijab Compliance
To enforce hijab compliance, Iranian authorities have deployed facial recognition systems at Amirkabir University in Tehran, as well as aerial drones and surveillance cameras on the capital’s major roads. Women identified inside a car as not wearing a hijab have received text messages from authorities warning that the car could be impounded. These upgraded enforcement systems (complemented by bank account freezes and other punishments for female dress code non-compliance) come on the heels of reports of systemic intimidation of those in custody, including torture, mock executions, and sexual abuse. The introduction of AI tools to enforce repressive laws allows Iranian authorities to further tighten control over their citizens.
Russia | Leaked Details on Putin’s AI Surveillance Operation
Leaked Kremlin documents reveal that Vladimir Putin’s regime is funding a vast AI-driven surveillance system, with a particular focus on facial recognition and real-time tracking of citizens. The system is designed to identify “disloyal and destructive” individuals and has already been used to arrest people at politically sensitive events, such as Alexei Navalny’s funeral and protests across Russia. The documents show that Putin’s administration is allocating at least 11.2 billion rubles ($124 million) for the system’s development from 2022 to 2026. These surveillance upgrades can serve as tools to further clamp down on political opposition within the country.
United Arab Emirates | Program in Dubai Uses 300,000 Cameras for Facial Recognition
The UAE’s rulers have deployed AI-powered surveillance systems with the publicly stated goal of increasing national security and public safety. National security advisor Sheikh Tahnoun bin Zayed al Nahyan, brother of President Mohamed bin Zayed al Nahyan, oversees $1.5 trillion in assets and has integrated facial recognition technology into more than 300,000 cameras. The UAE has a history of human rights violations, including criminalizing online activities that oppose its fundamental principles of governance and imprisoning a human rights defender based on information extracted via spyware, which makes these latest actions a concern for human rights activists.
Turkey | Turkish Government Uses AI Tools to ID “Terrorists” without Evidence
The Turkish government has put forward an AI initiative to automatically identify potential associations between new case entries in the national judiciary database and previously classified terrorist organizations. This AI-powered system algorithmically links individuals or cases to terrorist organizations, “tagging” them before any judicial process has occurred. Such automated tagging risks introducing bias into proceedings, violating the presumption of innocence and the right to a fair trial. Critics fear the approach will have a chilling effect on political and civil freedoms.

AI for Freedom
Maple AI | OpenSecret Releases Maple AI: A Private and Secure AI
OpenSecret recently released Maple AI, an end-to-end encrypted chatbot built for enhanced privacy. Maple AI’s server code is open source, fully auditable, and readable by anyone. A technology called “confidential computing” provides the user with mathematical (cryptographic) proof that this same open-source, privacy-preserving code is what is actually running on Maple AI’s servers. Encrypted, open-source AI, in conjunction with confidential computing, will allow activists and journalists to use AI chat tools with greater confidence that their data is secure. For more information, check out this recent podcast interview with OpenSecret’s cofounder Marks.
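To illustrate the idea behind that cryptographic proof (this is not Maple AI’s actual protocol; the function names and flow below are a simplified, hypothetical sketch), remote attestation boils down to comparing a code “measurement” reported by the server’s secure hardware against a measurement you compute yourself from the published open-source build:

```python
import hashlib

def measure(code: bytes) -> str:
    """Compute the expected measurement (hash) of the published build."""
    return hashlib.sha256(code).hexdigest()

def verify_attestation(reported_measurement: str, published_build: bytes) -> bool:
    """Accept the server only if it proves it runs the audited code.

    Real confidential-computing attestation (hardware enclaves) also
    checks the chip vendor's signature over the measurement, which
    this sketch omits.
    """
    return reported_measurement == measure(published_build)

# All values hypothetical: an honest server reports the hash of its
# reproducibly built binary; the client checks it against the source.
published_build = b"...reproducibly built server binary..."
assert verify_attestation(measure(published_build), published_build)
```

The key design point is that trust shifts from the operator’s promises to hardware-backed math: if the server swapped in modified code, its measurement would no longer match the audited build.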
Ollama | How to Run Your Own AI on Your Own Device
Design News offers a guide to running an open-source AI chat tool, Ollama, on one’s local computer. Self-hosting AI removes the need to share sensitive information with third-party AI providers, maximizing privacy and ensuring that one’s data isn’t captured by hostile corporations or governments. The guide walks readers through hardware requirements and the step-by-step installation process. Learning to self-host AI builds digital independence and supports the open-source ecosystem.
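As a taste of what self-hosting looks like in practice, here is a short Python sketch that queries a locally running Ollama server over its REST API. The port (11434) and endpoint are Ollama’s defaults; the model name is just an example of one you might have pulled.

```python
import json
import urllib.request

# Ask a locally running Ollama server a question; the prompt never
# leaves your machine. Swap "llama3" for any model you've pulled
# with `ollama pull <model>`.
payload = json.dumps({
    "model": "llama3",
    "prompt": "What are the privacy benefits of running an LLM locally?",
    "stream": False,  # return one complete JSON object, not a stream
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```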
Presidio Bitcoin | AI and Freedom Tech Hackathon Takes Place in San Francisco
Presidio Bitcoin, a technology coworking and event space in San Francisco, recently hosted a 24-hour hackathon in which developers used AI tools to build on open-source tech. Developers flew in from around the world to participate, dozens of students came from top local universities, and more than $10k in prizes was awarded to the top three projects. The three finalist presentations can be viewed here; their quality underscored the advantage AI can give activists and digital builders as part of a freedom tech stack.
Harvard Kennedy School | A Primer on Using AI to Improve Your Movement’s Effectiveness
Harvard Kennedy School’s Ash Center for Democratic Governance and Innovation released a playbook for social activists on enhancing their efforts with AI. The guide recommends using AI to strategize, analyze feedback, and generate content such as images, video, music, voiceovers, and subtitles. As AI tools become more prevalent, human rights activists and developers would be well served to explore how the technology can best serve their mission. Look out for an HRF webinar in the coming months teaching activists and nonprofits how to harness open-source AI tools.
AI Meetups | Regular Open-Source AI Meetups Taking Place in Austin
Bitcoin Park Austin has begun holding weekly meetups exploring the intersection of AI and Bitcoin development. Sessions are technically focused, with attendees teaching each other how to build AI agents and challenging one another to code with AI. These meetups offer a dual benefit: exposing freedom tech and Bitcoin developers to cutting-edge AI tools that enhance their work, and welcoming those interested in AI into the Bitcoin and freedom tech community.