Dear reader,
Welcome to the seventh edition of the Human Rights Foundation’s AI for Individual Rights newsletter.
This month: Russia is pouring $29 million into AI-powered censorship. Chinese AI models are trained to parrot CCP propaganda, while Chinese police gain new robotic powers. And an AI-driven hacking campaign targeted Iranians who documented the regime’s killing of protesters.
On the tools side, the news is just as significant. The rapid spread of AI agents like OpenClaw and the release of new agentic models are reshaping workflows and accelerating software development. Developers are building AI agents that are more capable, more secure, and more useful for people fighting for freedom. For a deeper dive, listen to HRF’s Alex Gladstein and Justin Moon on the latest episode of Infinite Tech by The Investor’s Podcast Network.
Looking ahead, the AI for Individual Rights program will be at SXSW in Austin, Texas from March 16–18. Hosted at the Fairmont Austin, our booth, ESC TYRANNY, will display stories of activists and resistance movements powering their work with AI. Stop by to explore the tools driving digital freedom and vibe code your own resistance website on the spot.
Maple AI for Human Rights
In the latest installment of our AI video series, HRF’s Justin Moon sits down with Maple AI CTO Anthony Ronning to show how activists can use encrypted AI without any technical setup. They walk through downloading Maple’s desktop app, creating an anonymous account, and using it for real human rights work, from drafting sensitive communications to analyzing documents privately.
The Latest in AI for Repression
Russia to Scale Internet Filtering with AI
Roskomnadzor, Russia’s internet regulator, is building an AI-powered censorship system with a 2.27 billion ruble ($29 million) budget. According to analysts, the system will use AI to instantly block mirror sites hosting banned content and, more ominously, to identify the people creating those mirrors. The move would further restrict the space for dissent and independent online information in Russia.
Researchers Raise Alarm Over Chinese AI Models’ Censorship
New research from the China Media Project confirms what many suspected: Chinese AI models are systematically trained to censor and propagandize. Using chain-of-thought prompting to expose a model’s hidden reasoning, researchers found that Alibaba’s Qwen models, for instance, were instructed to portray China’s human rights record positively while treating other countries with studied neutrality. Separately, Estonia’s Foreign Intelligence Service flagged DeepSeek for concealing information and parroting CCP talking points on security questions — even refusing to answer when asked about a Chinese official who challenged former Soviet states’ sovereignty.
AI-Powered Cyberattack Targets Iranian Protesters
Cybersecurity firm HarfangLab identified a new threat actor called “Redkitten” that used AI to build a hacking campaign targeting people who documented the Iranian regime’s violent crackdown on protesters in January. The attackers, linked to the IRGC, spread malware disguised as forensic reports and lists of people killed during the protests: bait designed to lure the very people searching for information about the dead and missing. Once opened, the files gave attackers access to personal data, system details, and the ability to run additional malware, helping identify and surveil anyone documenting the regime’s abuses.
OpenAI Reportedly in Talks with United Arab Emirates for Censored Chatbot
OpenAI is reportedly exploring a partnership with UAE-based AI company G42 to develop a version of ChatGPT fine-tuned for the country’s local Arabic dialect and political censorship. OpenAI would still offer a global version of ChatGPT in the country, but both versions would tailor their answers to comply with local content regulations and censorship laws. This is exactly why open models matter: tools like Mistral and gpt-oss can be downloaded and run locally, free from any government’s content restrictions.
China Pursues Embodied AI from Robots to Drones
China is deploying robotic police officers that use cameras and AI to monitor and manage its population. Paired with other surveillance technologies, these authoritarian applications of AI can surveil, analyze, and act with minimal human intervention. Meanwhile, China’s military seeks to integrate AI into drone swarms and “robot wolves” designed for coordinated operations. These developments in China’s push into physical AI pose risks to both domestic civil liberties and global stability.
Tightening the Net: China’s Infrastructure of Oppression in Iran
Free speech watchdog ARTICLE 19 released a report documenting how Chinese technology companies, through long-term partnerships, built the backbone of Iran’s digital surveillance and censorship network. Since 2010, Chinese exports have enabled the Iranian regime to scale its internet surveillance, content filtering, and AI-enhanced facial recognition. This architecture, along with Russian military hardware that could disrupt satellite internet connectivity, made the regime’s January 2026 near-total internet shutdown possible. Read it here.
The Latest in AI for Freedom
OpenClaw’s Explosive Growth and Development Continues
Last month, Austrian developer Peter Steinberger open-sourced OpenClaw (formerly Clawdbot and Moltbot), and it has taken the AI world by storm. OpenClaw gives users an AI agent that runs directly on their device, capable of managing communications, organizing files, and even building custom applications, all without technical expertise. It is the connective tissue that lets people choose what intelligence they will use, what machine they will use, and how they will talk to it. Steinberger has since joined OpenAI, though he says he is committed to keeping the tool “open and independent.” While OpenClaw should be used carefully, and should not be connected to your personal computer or emails, its potential to dramatically scale the work of activists and nonprofits is impossible to ignore.
Caution: Giving an AI agent full access to your system carries real security risks, and OpenClaw is no exception. A vulnerability could let attackers steal data, impersonate users, or hijack devices. Consider setting up OpenClaw on its own dedicated machine and treating it like a colleague, with its own accounts and email addresses.
Anthropic and OpenAI Launch Major Upgrades
Anthropic and OpenAI both released major upgrades to their coding models this month: Anthropic’s Claude Opus 4.6 and OpenAI’s GPT-5.3-Codex. These advanced systems significantly accelerate AI-assisted software development, with developers saying the new models “made everything before them feel like a different era.” Less than two weeks later, Anthropic launched another upgrade, Claude Sonnet 4.6. Designed for everyday research, drafting, and analysis, the new Sonnet also brings stronger coding capabilities to a wider audience. Both models are closed-source, which means activists working on sensitive projects should weigh the privacy tradeoffs. But the capability leap is real. The practical result: a human rights defender with no coding experience can now use AI to build a censorship-circumvention tool or a secure communications app in hours rather than months.
Maple AI Enhances Private Agentic Capabilities
Maple AI, which encrypts every interaction with open-source AI models end-to-end, now integrates with developer tools like OpenCode and OpenClaw. OpenCode, an AI-powered coding assistant, can now route all its requests through Maple’s encrypted layer, meaning activists can build software with AI help without exposing their work to surveillance. Maple also released an OpenClaw plug-in that connects AI agents to encrypted models with minimal setup. (Overall security still depends on the agent’s configuration.) The pattern is significant: open-source tools are making it possible to use powerful AI privately, without trusting a third party with sensitive data.
Lightning Labs Brings Bitcoin Payments to AI Agents
Lightning Labs, a Bitcoin infrastructure company, released tools that let AI agents transact in bitcoin over the Lightning Network, enabling fast, cheap, and private payments. Using bitcoin, AI agents can now make instant payments to other agents or services without identity verification, bank accounts, or API keys: a critical capability for operating in environments where traditional payment infrastructure is censored or surveilled. Imagine an AI agent that autonomously pays for VPN access, cloud hosting, or translation services on behalf of a human rights defender, all without revealing who’s paying.
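To make the mechanics concrete, here is a minimal sketch of how an agent could pay a BOLT11 Lightning invoice through the REST API of LND, Lightning Labs’ node software. The host, port, macaroon, and invoice values are placeholders, and this sketch illustrates the general pattern rather than Lightning Labs’ new agent tooling itself.

```python
# Sketch: an AI agent paying a BOLT11 Lightning invoice via an LND
# node's REST API. Host, macaroon, and invoice are placeholders.
import json
import urllib.request


def build_payment_body(bolt11_invoice: str) -> bytes:
    """Encode the JSON body LND's REST payment endpoint expects."""
    return json.dumps({"payment_request": bolt11_invoice}).encode()


def pay_invoice(bolt11_invoice: str, macaroon_hex: str,
                host: str = "https://localhost:8080") -> dict:
    """POST the invoice to the node's synchronous payment endpoint.

    Authentication is a macaroon credential in a header; no bank
    account or identity verification is involved.
    """
    req = urllib.request.Request(
        f"{host}/v1/channels/transactions",
        data=build_payment_body(bolt11_invoice),
        headers={
            "Grpc-Metadata-macaroon": macaroon_hex,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Because the only credential is a macaroon, an agent can be provisioned with a narrowly scoped one that permits payments and nothing else, limiting the damage if the agent is ever compromised.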
Clawi.ai Enables OpenClaw in the Cloud
Open-source developer Calle launched Clawi.ai, which hosts OpenClaw in the cloud so users don’t need to set up their own server. The advantage: AI agents run in an isolated environment rather than on your personal machine, reducing the risk of an agent accessing sensitive local files. Clawi.ai says it doesn’t log or analyze interactions. While still experimental, Clawi.ai could make personal AI agents accessible to human rights defenders who lack the technical skills to self-host.
Mistral Launches Voxtral Transcribe 2
French AI company Mistral released Voxtral Transcribe 2, a pair of speech-to-text models supporting thirteen languages. The key model, Voxtral Realtime, is open-source and runs entirely on-device, with no internet connection required and no data leaving the machine. For human rights defenders recording testimony or documenting abuses, Realtime offers fast, private transcription without sending audio to external servers. It’s free to download on Hugging Face.
OpenClaw and Self-Sovereign AI with Alex Gladstein and Justin Moon
On the latest Infinite Tech episode, host Preston Pysh talks with HRF’s Alex Gladstein and Justin Moon about how open-source AI tools like OpenClaw are advancing individual freedom. They cover how LLMs and AI agents work, why open source matters, and how “vibe-coding” is letting non-technical activists build their own freedom tools. Essential listening for anyone wanting to harness AI for human rights work.