Dear reader,
Welcome to the fifth edition of the Human Rights Foundation’s (HRF) newsletter on the intersection of AI and human rights!
This issue explores how to use AI more privately and examines the tradeoffs between power, convenience, and privacy for dissidents seeking to use AI without fear of surveillance.
We highlight a powerful talk from HRF’s first AI for Individual Rights Summit, in which Ramez Naam, an award-winning science fiction author, challenges dominant fears about centralized, omnipotent AI and makes the case for decentralized AI as a way to strengthen individual rights. This riveting, sweeping talk is well worth your time. Watch here!
This month, two pioneering HRF-sponsored reports were published. “The Party’s AI,” published by the Australian Strategic Policy Institute (ASPI), analyzes in detail how the Chinese Communist Party (CCP) weaponizes AI for surveillance, censorship, and suppression of dissent. Meanwhile, “Shared Labs, Shared Harms,” by business intelligence firm Strategy Risks, exposes for the first time how Western research institutions collaborated with Chinese state-backed AI laboratories deeply embedded in China’s surveillance state, while AI ethics groups remained silent. The reports were covered by The Washington Post and Fox News, respectively. HRF is proud to advance cutting-edge research that is shaping the global conversation about freedom and technology.
On January 17-18 in Austin, Texas, HRF will sponsor “AI Hack for Freedom,” an AI-focused hackathon to build practical tools for human rights defenders working under authoritarian pressure. Read more later in the newsletter.
How Can Dissidents Use AI More Privately?
Governments and companies can view conversations with their AI systems, train models on personal data, and learn details and patterns that can be used for repression. For human rights defenders, nonprofits, and anyone handling sensitive information, this raises an urgent question: how can you use AI without exposing yourself to surveillance?
A range of AI tools offer greater privacy protections. Here are three categories of tools worth exploring for dissidents at risk of surveillance, ordered from the fewest to the most privacy protections:
1) Proxies
A proxy is a middle-layer service that sits between you and the AI model. Instead of sending prompts directly to ChatGPT, for example, proxies bundle your queries with those of other users, helping you “blend into the herd.” This provides partial anonymity while still granting access to frontier closed models. Tools like PPQ (PayPerQ) and Routstr let users pay in Bitcoin rather than with a credit card or another payment method linked to their identity, and users can avoid sharing sensitive personal information when signing up. A short code sketch of what this looks like in practice follows the examples below.
Examples: OpenRouter, PayPerQ, Routstr
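For technically inclined readers, here is a minimal sketch of routing a prompt through a proxy rather than contacting the model provider directly. It assumes Python with the openai client library installed and an OpenRouter account; the API key and model ID are illustrative placeholders, not recommendations.

```python
# Minimal sketch: querying a model through a proxy (here OpenRouter,
# which exposes an OpenAI-compatible API). Placeholders are illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # the proxy endpoint, not the model provider
    api_key="YOUR_OPENROUTER_KEY",            # placeholder; never hard-code real keys
)

# The proxy forwards this request alongside those of many other users,
# so the upstream provider sees the proxy's traffic rather than yours.
response = client.chat.completions.create(
    model="mistralai/mistral-small",  # illustrative model ID
    messages=[{"role": "user", "content": "Summarize this press release."}],
)
print(response.choices[0].message.content)
```

Note that the proxy itself can still see your prompts, which is why paying in Bitcoin and keeping signup details unlinked from your identity matter.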
2) Secure Enclaves
Secure enclaves are sealed hardware environments that enable encrypted data to be processed securely. When paired with AI, they allow you to have encrypted conversations. Services make this simple by offering a ChatGPT-like interface that routes your prompt through open-source AI models running inside the enclave. This shields user data from the model provider. Secure-enclave AI is currently available only with open models, limiting users’ options, although open models are now catching up to proprietary closed-source ones.
Example: Maple AI
3) Running AI Locally
Running AI locally means the AI model operates entirely on your own device — letting you prompt it even without an internet connection. This offers the highest degree of privacy since no queries ever leave your computer. However, running AI locally limits model selection, as personal hardware can only run smaller and less capable models.
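As a concrete illustration, here is a minimal sketch of prompting a model that runs entirely on your own machine. It assumes the Ollama runtime is installed and running locally, a small open model (here “llama3.2,” illustrative) has already been downloaded, and the ollama Python package is installed.

```python
# Minimal sketch: prompting a local model via Ollama. The request goes
# to the Ollama server on localhost; no query ever leaves the device,
# so this works even with networking disabled.
import ollama

response = ollama.chat(
    model="llama3.2",  # illustrative; any locally downloaded model works
    messages=[{"role": "user", "content": "Draft a statement on press freedom."}],
)
print(response["message"]["content"])
```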
What Science Fiction Can Teach Us About AI
During HRF’s first AI for Individual Rights Summit last month, Ramez Naam, an award-winning science fiction author, challenged the dominant sci-fi trope of a centralized, all-powerful AI future. Naam explained that today’s artificial intelligence landscape is increasingly competitive and multipolar. As more models advance and costs fall, ordinary people gain access to cutting-edge AI tools once attainable only by companies and governments. While AI in the hands of dictators poses real dangers, the decentralization of intelligence creates a future that Naam says is “fundamentally pro-democracy.” Watch his talk here.
The Latest in AI for Repression
Chinese AI Firms Showcase Surveillance Tools to Police Forces
The New York Times reported that, at an industry conference in Beijing, companies pitched AI tools designed for state power, including speech recognition software able to decipher minority languages, robots that flag protest banners, and software claiming to understand people’s states of mind. Vendors openly advertised their ability to target “potential troublemakers” — a category that includes migrants and individuals who submit complaints to the government. This deep collaboration between Chinese tech firms and the state pushes the CCP closer to an all-seeing surveillance system.
Hong Kong Activist Carmen Lau Targeted with Deepfakes
Carmen Lau, a leading Hong Kong pro-democracy activist living in exile, was targeted by explicit deepfakes intended to degrade and intimidate her. At least a half dozen of her former neighbors in the United Kingdom received letters containing seemingly AI-generated sexualized images along with text inviting them to visit her former home address. The materials were sent anonymously from Macau, a semi-autonomous Chinese territory near Hong Kong. “This latest incident reflects the evolving playbook of transnational repression and how artificial intelligence can be weaponised by authoritarian actors to export repression across borders,” said Lau.
North Korea’s Advancing AI Capabilities
The Institute for National Security Strategy (INSS), a South Korean think tank, published a report revealing significant developments in North Korea’s AI capabilities. It documents technologies such as advanced facial recognition systems, automated multi-person tracking, and voice synthesis models for impersonation and psychological control. While the report warns of military and cybercrime applications, these same AI systems could intensify domestic surveillance and crush dissent.
Chinese State-Sponsored Group Used Anthropic to Conduct Cyberattacks
According to AI company Anthropic, a Chinese state-sponsored group performed the first known AI-orchestrated cyber espionage campaign. Hackers utilized Anthropic’s Claude Code to attempt to infiltrate roughly 30 “high-value” global entities, including major technology corporations and government agencies across many countries. With AI performing 80 to 90 percent of the operation, the group, designated GTG-1002, executed several intrusions, uncovering network weaknesses and exploiting them to exfiltrate data. This case illustrates an alarming reality: authoritarian actors can now harness AI to conduct large-scale cyberattacks with minimal human effort.
Burma’s AI-Enhanced Surveillance Deepens Risk for Women
Human Rights Myanmar (HRM) submitted a formal report to the UN Working Group on Discrimination Against Women and Girls, condemning the military regime’s repressive use of AI. The junta has built a “digital dictatorship” powered by AI-enhanced surveillance systems from Chinese firms that disproportionately harm women and girls. Women who speak out against the junta face increased risk of state violence, online doxxing campaigns, and sexual assault in detention. The expansion of AI surveillance further deters peaceful assembly and open dissent. HRM calls for targeted sanctions on companies supplying the junta with AI surveillance tools and urgent protection for women journalists and human rights defenders targeted by the regime.
Generative AI and Political Manipulation in South Asia
In India, Pakistan, and Bangladesh, political parties are intensifying political manipulation and disinformation campaigns with AI. By flooding social media feeds with deepfakes and propaganda, political actors manipulate algorithms to amplify divisive narratives. AI-generated images depict anti-Muslim and anti-migrant stereotypes, and fabricated photos of political supporters suggest false popularity. These tactics demonstrate the potential for authoritarian regimes to weaponize AI to supercharge propaganda campaigns and divide populations in order to maintain power.
Recommended Content
Two ground-breaking HRF-sponsored research reports released this month expose China’s expanding AI-driven human rights abuses.
“The party’s AI: How China’s new AI systems are reshaping human rights,” published by the Australian Strategic Policy Institute and covered in The Washington Post, shows how the Chinese Communist Party deploys LLMs and other AI tools to escalate censorship, surveillance, and social control.
“Shared Labs, Shared Harms: Global AI Research Partnerships and China’s Rights Abuses,” published by Strategy Risks and covered by Fox News, exposes how Western research institutions have collaborated with Chinese AI labs linked to the country’s surveillance and security apparatus.
The Latest in AI for Freedom
Maple AI Releases Maple 2.0 with Live Data and Anonymous Accounts
Maple AI, an end-to-end encrypted AI assistant that enables users to converse with AI privately, launched Maple 2.0. The update enables anonymous web searches that integrate up-to-date information into conversations and introduces fully anonymous accounts. Instead of creating an account with an email address, users can now generate a unique ID, create a password, and pay with Bitcoin, ensuring no personal information is linked to their activity. In addition to encrypting user conversations, Maple retains no information about the identities of users who sign up this way.
Sylvanus Olympio Virtual Museum
Togolese human rights activist Farida Nabourema vibe-coded a virtual museum to honor Sylvanus Epiphanio Olympio, Togo’s only democratically elected president. His assassination during a 1963 military coup marked the beginning of ongoing authoritarian rule dominated by the Gnassingbé family. The site features a detailed biography, educational quizzes, lesson plans for teachers, and curated content on Olympio’s life and Togo’s history. AI helped Nabourema rapidly and efficiently create an accessible way for Togolese citizens to reconnect with their past and imagine a democratic future.
Mistral AI Releases Advanced Open Models
French AI lab Mistral launched Mistral 3, a suite of top-tier open AI models that compete with AI systems from OpenAI and DeepSeek. Mistral’s models can understand text and images, handle complex reasoning tasks, and operate across more than 40 languages. While the flagship model requires expensive hardware to run locally, the smaller models can run offline on personal devices, reducing surveillance risks for human rights defenders. This is notable, as Mistral 3 is a leading open model family to emerge from a democracy rather than from China.
Ai2 Announces Fully Open-Source Olmo 3
Ai2 announced Olmo 3, a family of fully open-source AI models with new capabilities. Ai2 reported that these models match or outperform other open models on key reasoning benchmarks. Ai2’s release included the models’ complete training data and code, which enables researchers to inspect training processes for bias or harmful patterns. Olmo 3 marks progress towards more transparent, accountable, and independent use of AI.
Gebeya Launches Vibe-Coding Platform Dala
African startup Gebeya released a beta version of Dala, a new AI-powered app builder designed for software developers across the continent. While vibe coding grows globally, many tools remain inaccessible to African users due to language barriers and the high costs of hardware. Dala offers a mobile-friendly option that supports Amharic, Swahili, Hausa, and other local languages. Dala empowers users across contexts and technical backgrounds to build tools for their communities.
Chaincode Labs Invites AI Developers Into Bitcoin Open Source Ecosystem
Chaincode Labs, a Bitcoin research and development center based in New York, is launching the ₿OSS Challenge, a free program to help software developers (including those under authoritarian regimes) begin their careers in Bitcoin open-source software. This challenge offers participants a month of hands-on curriculum and exercises, with the opportunity to extend for two more months. Top participants will receive mentorship and introductions to organizations that can sponsor full-time ₿OSS contributors. Chaincode Labs aims to reach curious minds from a wide range of backgrounds, so if this is of interest, apply here.
AI Hack for Freedom: HRF Sponsors Hackathon Pairing Dissidents and Developers
HRF is sponsoring “AI Hack for Freedom,” the first AI-focused hackathon designed and driven by human rights advocates from countries ruled by authoritarian regimes. Hosted by Bitcoin Park on Jan. 17-18, 2026 in Austin, Texas, the hackathon will unite global human rights defenders from HRF’s network with top open-source AI developers to contribute to the freedom technology ecosystem.