Jan 26, 2026

HRF’s AI for Individual Rights Newsletter #6

Dear reader,

Welcome to the sixth edition of the Human Rights Foundation’s (HRF) AI for Individual Rights newsletter.

This month captures the spirit of the moment: AI is being used to repress, but people are pushing back in remarkable ways, reclaiming these tools for personal empowerment.

In Nepal, an AI-powered surveillance system is suppressing Tibetan dissent. We also examine how authoritarian regimes like Russia and Iran use generative AI to manipulate public narratives and obscure the truth during crackdowns on peaceful protests.

On the other hand, we highlight the growing use of Bitchat, a vibe-coded, encrypted offline messaging app, in Uganda and Iran, where it gives citizens a way to communicate during nationwide internet shutdowns. We also explore the release of TranslateGemma, a set of open-source AI translation models, and the rise of AI tools like Claude Code, Clawdbot, and OpenCode that are accelerating the development of freedom technologies and driving a new revolution in personal computing and, potentially, digital sovereignty.

To that end, this month, HRF sponsored the first AI Hack for Freedom, which demonstrated how vibe-coding tools can directly accelerate the work of advocates for freedom. The hackathon paired eight dissidents with skilled open-source developers to build practical tools for movements operating under repressive regimes. By utilizing the power of tools like Claude Code, OpenCode, and more, teams rapidly prototyped and deployed solutions in just 28 hours. Learn more about the event and the winning tools here.

Before we jump into the news, let’s learn how to run an LLM locally.

Running LLMs Locally for Human Rights

In this educational video, Justin Moon, HRF’s AI for Individual Rights technical lead, teaches Win Ko Ko Aung, an exiled Burmese activist and HRF’s global Bitcoin adoption fellow, how to run AI locally using Ollama. Moon explains how running AI directly on a personal computer can defend against surveillance by authoritarian eyes and enables offline conversations with AI when regimes restrict internet connectivity. Watch to learn how local AI can support activists in their work.
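For readers who want to try this themselves, the basic Ollama workflow looks roughly like the sketch below (the model name is illustrative; check ollama.com for current options and hardware requirements):

```shell
# Download an open model's weights to local disk (one-time step, requires internet)
ollama pull llama3.2

# Start an interactive chat session that runs entirely on your own machine
ollama run llama3.2
```

Because inference happens on-device, prompts and responses never touch a remote server, which is what makes this approach resistant to surveillance and usable during internet shutdowns once the model has been downloaded.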

The Latest in AI for Repression

AI-Enhanced Surveillance System Silences Tibetan Refugees in Nepal

A recent Associated Press (AP) investigation revealed how Tibetans who fled Chinese repression now live beneath an expansive, AI-powered surveillance system in Nepal. Thousands of cameras, many equipped with night vision, facial recognition, and AI tracking, monitor people’s daily movements. “Even though we are free, the surveillance cameras mean we’re actually living in a big prison,” Tibetan activist Namkyi told the AP. Nepali police use Chinese predictive policing technologies to identify and preemptively detain Tibetans deemed likely to protest. Along the border, sensors and AI-powered drones line traditional escape routes. This surveillance has stifled the once-vibrant Free Tibet movement and driven many Tibetans to silence their advocacy or to flee Nepal altogether.

AI-Generated Protest Videos Spread During Iran’s Internet Shutdown

During widespread anti-regime protests, Iranian state media accounts spread videos that purported to show aerial footage of counterprotests in support of the dictatorship. However, visual inconsistencies in the videos led many users to question whether the clips had been altered using AI photo- and video-editing tools. As the regime’s nationwide internet shutdown continues, limiting access to reliable information from within the country, both pro- and anti-regime social media accounts based abroad have shared AI-generated images and videos that advance their preferred protest narratives, according to NewsGuard.

Russian Propaganda Hijacks AI Models

Moscow-based news networks are producing massive amounts of disinformation designed to align AI models with pro-Kremlin narratives. The state-aligned Pravda network uses artificial intelligence to generate tens of thousands of articles containing false claims about Ukraine. Leading AI models that are trained on news articles scraped from the internet often then echo these narratives, downplaying Russia’s invasion and spreading false information when prompted with questions about Ukraine. Generative AI has created a feedback loop in which pro-Russian actors can cheaply produce large volumes of misleading online content, which then contaminates the data used to train and refine future AI models.

Egypt’s Council of Ministers Plans to Adopt AI to Crack Down on “Fake News”

Egyptian officials announced a plan to adopt AI tools to monitor the accuracy of information published in the media and on social media platforms. The regime is also exploring harsher legal penalties for those accused of reporting false information. Egypt has routinely prosecuted journalists and activists for reporting or posting information online that the regime claims is false. The deployment of AI risks accelerating the regime’s ability to monitor and punish dissent.

Vietnam Targets Journalist for Deepfake Political Satire

Vietnamese officials charged Berlin-based journalist Le Trung Khoa with “disseminating information against the state” after he allegedly used AI to create deepfake voices and images of Communist Party and state officials. In a separate case filed in Germany by Vingroup, a Vietnamese conglomerate, a Berlin court rejected attempts to fine the exiled journalist for defamation, affirming his right to free expression.

The Latest in AI for Freedom

Bitchat Downloads Surge in Uganda and Iran During Internet Shutdowns

Bitchat, a vibe-coded application that enables encrypted, offline chats over Bluetooth mesh networks formed between nearby devices, has enabled people to stay connected even as authoritarian regimes cut off internet access nationwide. In Uganda, where dictator Yoweri Museveni sought his seventh term in elections on Jan. 15, the regime shut off the internet on the eve of the vote. Before service was disrupted, opposition leader and presidential candidate Bobi Wine urged Ugandans to download Bitchat, prompting hundreds of thousands of people to install the app. Usage of the app also surged more than threefold in Iran as the regime cut internet access and violently repressed demonstrations. These cases underscore the resilience freedom technologies provide in the face of authoritarian repression.

TranslateGemma Enables Private, Offline AI Translation

Google released TranslateGemma, a collection of open translation models that support 55 languages. TranslateGemma is freely downloadable, and its rapid translation of text and documents provides an essential resource for activists who face language barriers and limited access to reliable translation tools. Because the models are open, dissidents can run them locally on their own computers, protecting any sensitive queries from outside surveillance and enabling translation without an internet connection. Try it here.

Clawdbot Marks a Step Towards Self-Sovereign AI

Clawdbot is a new, open-source personal AI assistant that runs locally with access to the entire computer. It can organize files, install software, send emails, and execute system-level tasks on hardware controlled by the user rather than a company’s servers. While this marks a step toward greater self-sovereignty, such broad access also introduces significant risk: if misconfigured or poorly secured, Clawdbot could become a point of compromise and expose the entire computer it controls. It can, however, be paired with local models or encrypted AI interfaces like Maple to reduce surveillance exposure. Used safely, this kind of tool could give human rights defenders a way to automate their work without exposing communications, files, and information to centralized platforms.

Claude Code Redefines Software Development

Anthropic’s Claude Code is revolutionizing coding for developers and nontechnical users alike, providing large productivity gains, as skills that once took years to learn can be replicated almost instantly. In response to the surge of interest in the tool, Anthropic launched Cowork, designed to make these capabilities accessible beyond engineers. Anthropic is making software creation accessible to dissidents worldwide and, in doing so, accelerating freedom technology: most of the teams participating in HRF’s AI Hack for Freedom hackathon utilized Claude Code to build freedom technology tools in just 28 hours.

Caution: While tools like Claude Code are powerful accelerants, users should be cautious with sensitive information, as queries and computer files may be visible to the provider. Users should not rely on AI systems to design or audit security or privacy protections on their own. Any tools intended for high-risk environments should be carefully reviewed by experienced developers.

OpenCode Gains Rapid Adoption

HRF grantee OpenCode is an open-source AI agent that enables developers to quickly build digital tools like websites or full-stack web applications. Unlike closed-source, proprietary agents, OpenCode gives users the freedom to use any AI model, including local models. Launched in June, OpenCode reported rapid adoption with more than 650,000 active users and nearly one million downloads by the end of 2025, signaling a growing demand for open-source AI tools that preserve user autonomy and remain accessible even under censorship or repression.

OpenAI’s Codex Responds to Claude Code Shutdown

Earlier this month, Anthropic blocked third-party apps from its Claude Code subscriptions, which had previously powered OpenCode and other open-source coding agents. In response, OpenAI announced support for direct integration between Codex, its own open-source agent, and tools like OpenCode. Codex integration restores developers’ ability to utilize agentic coding through user-controlled tools. These developments also show the resilience of open-source tools when proprietary AI tools restrict access.

Signal Creator Releases Confer for Private AI

Moxie Marlinspike, the creator of the private messaging app Signal, has released Confer, an open-source interface designed to make private AI use simple and accessible. Confer claims to encrypt prompts and send them into a trusted environment, where they are privately decrypted and processed by AI. The AI’s response is then encrypted and sent back to the user’s computer. Neither Marlinspike nor the server can see users’ prompts. Confer can reduce surveillance risks for human rights defenders, pro-democracy activists, journalists, and others who rely on AI while operating under authoritarian repression.

HRF x PubKey: How Open Source AI Supports Freedom

In this interview, Alex Hancock, senior software engineer at Block, sits down with Harrison Friedes, HRF’s AI for Individual Rights program associate, to explore the power of Goose AI. Goose enables activists and nontechnical users to build functional tools in minutes while giving users control over model choice and where their data is stored. Watch to learn more about the privacy trade-offs of AI systems and how open-source agents can empower human rights activists.

Join our AI for Individual Rights Newsletter

Each month, we’ll share a concise roundup of the world’s most important AI stories, exploring how dictators are using AI tools to repress and how open-source AI tools are being developed to resist tyranny.
