Newsletter
Oct 27, 2025

HRF’s AI for Individual Rights Newsletter #4


Dear reader, 

Welcome to the fourth edition of the Human Rights Foundation’s (HRF) newsletter exploring the intersection of AI and human rights. 

I am Jason Hsu, the AI for Individual Rights Research Fellow at HRF. My career has taken me from Taiwan’s parliament to a position as Chief Initiative Officer of Taiwan AI Labs. I have held research fellowships where I studied how governments deploy AI to monitor and control their citizens. At HRF, I’ll be continuing my work investigating how authoritarian regimes weaponize AI and considering ways to use open-source AI to protect civil liberties.

This newsletter is part of that mission. Each edition explores how AI can both entrench state power and empower individual freedom. Today’s issue shares our latest educational video for activists and nonprofits who want to use AI safely. 

We highlight China’s surveillance technology exports, North Korea’s AI-powered hacking attempts targeting South Korean journalists, and the United Arab Emirates’ censorship of AI-generated content. We then turn to movements using vibe-coded tools to advance freedom from Nepal to Togo. 

Thank you for reading and for standing with us to empower activists and movements with privacy-preserving AI.

Jason Hsu
Research Fellow, AI for Individual Rights

Privacy-Preserving AI for Human Rights

HRF’s AI for Individual Rights technical lead Justin Moon and Anna Chekhovich, financial manager of the Anti-Corruption Foundation, share ways for nonprofits and activists to use AI safely and effectively. In this second installment of our video series on AI for activists, they demonstrate how activists can leverage AI tools to streamline grant applications. But since mainstream AI platforms like ChatGPT may expose confidential data to third parties, the session highlights alternative tools like Ollama, Maple AI, PayPerQ, and Routstr that help activists work more privately under repression.
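As a rough sketch of what the local-first option looks like in practice, the snippet below calls Ollama’s documented local HTTP API (it listens on http://localhost:11434 by default), so prompts never leave the activist’s machine. The model name and prompt are illustrative, and this assumes Ollama is installed and running locally.

```python
import json
import urllib.request

# Ollama's default local endpoint; requests stay on the machine,
# so no prompt text is sent to any third party.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Assemble the JSON body Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(model: str, prompt: str) -> str:
    """Send a prompt to the locally running Ollama server and return its reply."""
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires `ollama serve` with a pulled model, e.g. `ollama pull llama3`):
#   ask_local_model("llama3", "Draft a one-paragraph summary of our grant proposal.")
```

The same pattern works for any self-hosted model server; the key design choice is that the inference endpoint is on hardware the activist controls.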

The Latest in AI for Repression

China Offers AI Surveillance to Authoritarian Police Forces

At China’s Global Public Security Cooperation Forum, vendors showcased a suite of AI-powered surveillance tools to visiting officials from authoritarian states including Belarus, the Democratic Republic of Congo, Nicaragua, and Ethiopia. The forum displayed body cameras with AI-enhanced facial recognition capabilities, enabling police to identify “high-risk” individuals and transmit location data to command centers. Exporting this kind of surveillance technology strengthens the capacity of repressive regimes to dismantle civil society, carry out rights abuses, and further entrench authoritarian rule.

North Korean Hackers Use AI Deepfakes to Target Journalists

South Korean cybersecurity firm Genians reported that Kimsuky, a North Korean state-sponsored hacking group, is using AI to bolster its spear-phishing campaigns targeting South Korean journalists, researchers, and human rights activists focused on North Korea. The group used ChatGPT to create a deepfake image of a South Korean military identification document. By impersonating an official, the hackers lured targets into opening malicious files able to exfiltrate data from targets’ devices. This chilling case shows how AI can increase the effectiveness of hacking attempts targeting human rights defenders.

United Arab Emirates (UAE) Media Council Bans Unauthorized AI-Generated Images of Public Figures

The UAE Media Council banned AI-generated images of national figures and symbols without government approval. The ban came after a woman posted an AI-generated image of herself with the country’s founding father, Sheikh Zayed Al Nahyan. Social media users who fail to comply could face fines or other penalties. Earlier this year, the Media Council signed an agreement with AI company Presight to build tools to analyze and filter media for compliance with laws and “national values” in real time. Combined, these actions reveal how the UAE is weaponizing AI to police digital expression while reinforcing state censorship.

DeepSeek Produces Less Secure Code for Politically Sensitive Groups

A Washington Post investigation found that China’s AI model DeepSeek generates code with major security flaws when asked to produce code for projects linked to Tibet, Taiwan, or politically sensitive groups like Falun Gong. Researchers offered several explanations for DeepSeek’s variation in code quality, but results suggest that the model could put sensitive groups at greater risk for hacking.

Russian AI-Powered Disinformation Seeks to Tilt Elections in Moldova and Undermine Support for Ukraine

Ahead of Moldova’s September parliamentary elections, Russian-linked actors launched AI-powered disinformation campaigns to boost candidates aligned with Moscow. Using off-the-shelf AI software, they created spoofed news sites that flooded social media with anti-EU narratives. Pro-Russian forces have also deployed AI-generated deepfakes and fabricated maps to manipulate reality and undermine support for Ukraine, according to a recent report by the Foundation for Defense of Democracies. These operations illustrate how AI is becoming central to Russia’s efforts to distort democratic processes and control information.

Recommended Content

The October issue of the Journal of Democracy features essays on AI and authoritarianism. Valentin Weber’s essay, “China’s AI-Powered Surveillance State,” examines the Chinese Communist Party’s enormous AI-based surveillance apparatus, while noting that human unpredictability and the desire for freedom remain enduring obstacles to complete control. Dean Jackson and Samuel Woolley’s essay, “AI’s Real Dangers for Democracy,” warns that AI may endanger democracy by automating political control, widening inequality, and centralizing power in the hands of tech elites.

The Latest in AI for Freedom

Bitchat Downloads Spike amid Protests Worldwide

Bitchat is rapidly gaining traction as governments worldwide crack down on online communications. The peer-to-peer encrypted messaging app works offline over Bluetooth and was developed by Jack Dorsey using Goose AI, an open-source vibe coding tool. After Nepal’s government blocked 26 social media platforms, tens of thousands of protesters downloaded the censorship-resistant communication tool. Similar spikes have been reported during recent protests in Indonesia and Madagascar. The increasing adoption of Bitchat worldwide reminds us that censorship-resistant social networks can keep communities connected when traditional channels are cut off.

Free Togo Empowers Their Movement with Vibe Coding

For more than five decades, Togolese citizens have been subjected to authoritarian rule, first under Gnassingbé Eyadéma and now under his son, Faure Gnassingbé, who recently cemented indefinite rule through constitutional changes. Protesters demanding democratic reforms have faced violent crackdowns, mass arrests, censorship, and social media restrictions. But Togolese activists are turning to freedom tech to sustain their movement and outmaneuver state repression. Human rights defender Farida Nahbourema used Lovable to build a Free Togo website. In under an hour, she was able to guide the AI to build and publish the site without needing to hire developers. “AI has become an incredible resource for activists like myself,” said Farida. “It understands what we need, builds it instantly, and gives us the freedom to manage our own tools without dependency.”

Maple AI Releases Maple Proxy

Maple AI, an open-source, end-to-end encrypted large language model (LLM) service, has introduced Maple Proxy, a new feature that routes OpenAI-style API requests through encrypted channels to keep prompts and data private. Normally, when you use an app powered by OpenAI’s models, your activity is visible to OpenAI. Maple Proxy ensures that no prompts, files, or other sensitive pieces of information are exposed. Maple AI has also re-enabled its document and image upload features, making it easier to handle real-world workflows (such as uploading grant applications, translating long texts, and summarizing important materials) without leaking data to third parties. These updates position Maple AI to empower activists and nonprofits operating under constant surveillance.
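The announcement does not spell out Maple Proxy’s exact configuration, but the general pattern behind OpenAI-style proxies is simple: the client keeps speaking the standard chat-completions wire format and just points its base URL at the proxy instead of the upstream provider. A minimal sketch of that pattern, with a hypothetical proxy address and key:

```python
import json
import urllib.request

# Hypothetical address -- substitute whatever endpoint your proxy exposes.
PROXY_BASE_URL = "https://proxy.example.invalid/v1"

def build_chat_request(model: str, user_message: str) -> dict:
    """Assemble an OpenAI-style chat-completions body; any
    OpenAI-compatible client or proxy accepts this shape."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

def send_via_proxy(model: str, user_message: str, api_key: str) -> str:
    """POST the request to the proxy instead of api.openai.com, so the
    prompt transits the proxy's channel rather than going to the
    provider directly."""
    body = json.dumps(build_chat_request(model, user_message)).encode()
    req = urllib.request.Request(
        f"{PROXY_BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Because so many tools speak this wire format, swapping the base URL is often the only change an existing app needs to route its traffic through a privacy-preserving intermediary.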

Soapbox Launches Shakespeare Act 2

Soapbox released the second version of Shakespeare, an AI-powered website builder built on nostr, a decentralized and censorship-resistant social network protocol. With Shakespeare, anyone can build and publish websites directly from their browser. No technical experience is required. Shakespeare Act 2 now stores data locally and eliminates centralized servers. Only the services users choose to connect to (e.g., GitHub or an AI provider) see their data; Shakespeare itself sees nothing. Users can now choose from any AI provider – “this is the most choice of any AI web builder we’re aware of” – including local models to enhance privacy. The update also brings built-in Git integration, allowing users to clone, commit, and manage projects in-browser. Combined with better control of the development environment, users gain more privacy, autonomy, and flexibility.

Switzerland Launches Open-Source LLM

A team of Swiss universities has launched an open-source LLM, Apertus. While many LLMs keep their training processes opaque, Apertus aims to make the whole system open and reproducible. It also expands multilingual representation by training on over 1,800 languages and dialects and evaluating the model’s performance on underrepresented languages. Apertus advances the frontier of open-source AI development.

Code Orange Dev School Hosts Vibe Coding Workshops

Based in Bali, Indonesia, the Code Orange Dev School equips software developers and individuals across authoritarian-leaning regimes in Asia with technical Bitcoin education. Their monthly vibe coding workshops teach students to build freedom-focused, censorship-resistant apps on Nostr using Shakespeare, faster and cheaper than traditional development. To advertise, they vibe-coded their own website, showing how easily groups can expand outreach without hiring external web designers. Stay tuned for more in-person and online workshops here.

Recommended Content

In this episode of Citadel Dispatch, host Matt Odell interviews Mark Suman, the founder of Maple AI. The conversation dives into why Maple AI matters: it is an open-source, privacy-first large language model service that uses end-to-end encryption so that no one, not even the company itself, can view user data. They unpack the tradeoffs between different AI tools and explore the risks of closed-source AI, from censorship to threats against freedom of thought. They also highlight how Maple offers a different path: giving people the ability to harness AI without sacrificing their privacy. Watch the episode here.

Join our AI for Individual Rights Newsletter

Each month we’ll share a concise roundup of the world’s most important AI stories exploring how dictators are using AI tools to repress and how open-source AI tools are being developed to resist tyranny.

Contribute

Visit our Website


Join the cause by subscribing to our newsletter.

Email Us