Welcome to the ninth edition of the Human Rights Foundation’s (HRF) AI for Individual Rights newsletter.
We start this month in Russia, where officials proposed new legislation aimed at restricting or banning Russian access to foreign AI models. These laws could block tools like ChatGPT, Claude, and Gemini that transmit Russian user data, queries, and conversations abroad, unless their providers agree to run them on state-controlled infrastructure. If enacted, these laws would give the Kremlin intrusive oversight of AI access and user data.
This authoritarian crackdown on AI is why the next wave of tools must be open and resilient to centralized control. This month’s developments showed AI moving in that direction. Google’s Gemma 4, one of the most capable open-weight AI model suites to date, can now run locally on everyday devices like phones and laptops, giving activists powerful, private, and offline AI capabilities without relying on centralized providers. Hermes, a new open-source AI agent, pushes this trend even further, allowing users to automate workflows, build apps, and create infrastructure through simple prompts while learning and improving over time.
HRF is actively working to deliver these new AI tools and capabilities to activists on the front lines. On April 14–15, it hosted an AI Camp for HRF leadership, close supporters, and prominent human rights defenders. The graduates used secure open-source technology to launch websites, automate workflows with simple prompts, and deploy personal AI agents that will, over time, grow to understand their needs and preferences.
This is what the future of digital resistance looks like: activists equipped with intelligent systems that amplify their reach and help them move faster than the regimes trying to silence them.
The Latest in AI for Repression
Russia Proposes Rules To Restrict and Control Foreign AI
Russia’s Ministry for Digital Development proposed laws that could restrict or ban foreign AI models. This proposal would block tools like ChatGPT, Claude, and Gemini that send Russian user data, queries, and conversations abroad. But not all foreign AI would be excluded. Open models like China’s Qwen or DeepSeek could still run, so long as they operate on infrastructure controlled by Russian state organizations and companies and keep data within the country. Officials claim the rules protect against foreign influence and defend “traditional Russian spiritual and moral values.” In reality, the rules fit the Kremlin’s broader pattern of digital repression: restrict outside tools, keep citizens’ data within its reach, and expand its ability to monitor and suppress dissent.
Bitchat Pulled From China’s App Store
Bitchat, an open-source encrypted messaging application that works offline using Bluetooth, has been removed from Apple’s App Store in China. Officials claim it violates laws related to services with “public opinion or social mobilization capabilities.” Bitchat’s offline feature lets users bypass internet censorship and communicate during network shutdowns. Millions have downloaded it across countries like Nepal, Uganda, and Iran amid civil unrest and regime-imposed blackouts. The app was built by Block CEO Jack Dorsey in just a weekend through vibe coding, an approach in which ideas are described in plain language, while AI handles the actual code. While AI accelerates the creation of freedom-enhancing tools, authoritarian regimes continue seeking to shut down technologies outside their control.
Research Reveals Use of AI-Powered Surveillance in Orbán’s Hungary
A joint report by the investigative outlet VSquare and the research group Citizen Lab uncovered that Viktor Orbán’s regime deployed secret AI-powered surveillance tools to monitor citizens without consent. Among the most intrusive is Webloc, a surveillance system that collects location data from smartphone apps and digital advertising to track people’s physical locations. Then there is Tangles, an AI-powered tool that monitors digital activity across the entire internet. Layered on top is Full AI, a sinister upgrade to Tangles that adds facial recognition and other automated capabilities to accelerate digital monitoring. The regime began quietly acquiring these systems in 2021, with its most recent license renewal in March. Together, these tools fuse physical and digital surveillance into a system capable of detecting, tracking, and silencing dissent.
Read Citizen Lab’s full research paper to see how Webloc targeted populations across both authoritarian regimes and democratic governments.
Chinese Qwen AI Models Go Closed Source
Alibaba, a major Chinese tech company, has introduced three proprietary, closed-source Qwen AI models that are available only through its official cloud and chatbot services. Users are required to send their queries through Chinese servers, risking surveillance by the Chinese Communist Party (CCP). Previously, all Qwen models were open, meaning anyone could download them and run them locally, privately, and offline. Their adoption has been massive: by March 2026, Alibaba’s models accounted for more than half of all global open model downloads, totaling nearly 1 billion. There are still Qwen open-weight models, but Alibaba’s most advanced systems are now proprietary and available exclusively through Alibaba Cloud, subject to the control of the CCP.
India To Use AI for Crime Prediction
India’s National Crime Records Bureau, a central police data agency, plans to incorporate AI into its upcoming version of the national police database. This system, called Crime and Criminal Tracking Network and Systems (CCTNS) 2.0, will link data from 17,000 police stations nationwide to a centralized platform. The database will do more than just store criminal records. It will employ AI to develop behavioral profiles and predict those likely to commit future crimes. Simultaneously, officials revealed plans to use AI for crime detection, including facial recognition cameras and algorithms to monitor cybercrime and banking transactions. Such an automated policing system could be activated against anyone critical of the government.
Recommended Content
“The Party’s AI”: China’s Use of Artificial Intelligence To Protect the State
In an episode of the podcast Stop the World, researchers Bethany Allen and Fergus Ryan from the Australian Strategic Policy Institute (ASPI) discuss their HRF-sponsored report, “The party’s AI: How China’s new AI systems are reshaping human rights.” The report covers topics such as predictive policing, automated justice, and accelerated censorship enabled by algorithms and Chinese AI models. It also highlights how the CCP uses AI to reinforce and sustain its power. You can read the full report here or listen to the podcast here.
Join Us at the 18th Annual Oslo Freedom Forum
Join HRF this year at the 18th annual Oslo Freedom Forum (OFF), hosted in Oslo, Norway, from June 1–3. This year’s OFF theme of “Dismantling Dictatorship” celebrates the activists, thinkers, technologists, and artists who take tyranny apart with ingenuity, creativity, and solidarity. On June 2, HRF will host a dedicated Freedom Tech Track to explore how authoritarian regimes weaponize technology and how dissidents resist with digital tools like Bitcoin, open-source AI, and decentralized communication.
The Latest in AI for Freedom
HRF Hosts AI Camp
HRF hosted an AI Camp for Activists where HRF leadership, top supporters, and leading human rights defenders came together to safely implement AI agents. Participants worked with HRF grantee OpenCode, an open-source coding agent that enables anyone to create websites and digital tools through straightforward prompts. Participants then went further by deploying an open-source personal AI agent with Hermes (profiled below). These agents function like personal digital employees, capable of sending emails and messages, automating workflows, and learning user preferences over time. Each agent was set up with guidance from experienced developers and linked to dedicated email accounts to protect and silo any personal data. Within just two days, graduates had built websites, automated workflows, and deployed 24/7 AI agents to enhance their work. HRF is eager to continue sponsoring these camps and expanding the impact of AI agents for frontline users.
Google Advances Open-Weight AI With Gemma 4
Google released Gemma 4, a suite of four open-weight models designed to deliver greater intelligence while using less computing power. According to the company, they pack stronger reasoning, coding, and multimodal capabilities, giving users frontier-level performance with significantly less hardware. Human rights defenders can run the smaller Gemma models locally on phones or laptops for private, offline AI use. And for those without the expensive hardware needed for the largest model, Maple AI now offers encrypted access to its full power. Gemma 4 is already becoming the go-to local model for open-source agents like OpenClaw, a step closer to truly private AI agents that can serve dissidents, not surveil them.
Open-Source AI Agent Hermes Grows
Hermes, an emerging open-source AI agent, is quickly gaining traction. In the last month, it has grown from about 22,000 to more than 120,000 developer endorsements on GitHub, the world’s largest open-source software platform, making it one of the fastest-growing open-source tools ever. Hermes is an autonomous agent that operates on a computer or server, helping users automate tasks, launch websites, or build applications. Users simply give instructions in plain language via platforms like Telegram or WhatsApp. What distinguishes Hermes from other agents is its ability to learn and improve over time. It remembers users’ projects, preferences, and past conversations, continuously refining its skills. Used safely, it gives human rights defenders a personal assistant that evolves to enhance their efforts. Hermes is now available on Clawi.ai for easy, cloud-based setup.
HRF Hosts the Second AI Hack for Freedom
HRF’s AI for Individual Rights program is proud to sponsor the second AI Hack for Freedom, scheduled for May 9–10 in Nashville, Tennessee. This AI-powered hackathon will connect nine dissidents from Afghanistan, Rwanda, Nicaragua, Hong Kong, Iran, Palestine, Bolivia, and beyond with innovative open-source developers. Over 27 hours, teams will build digital tools to bypass censorship, evade surveillance, or dismantle the repression of authoritarian regimes. Participants will split a $50,000 Bitcoin prize pool, in addition to walking away with tools ready for immediate use on the front lines of the struggle for freedom. Together, the experiences of activists, technical expertise of engineers, and the power of AI will create tools supercharging resistance movements across the globe. Find out more about the event here.
This event follows the success of HRF’s first AI Hack for Freedom. Learn more about the previous event and winners here.
Mesh-LLM Enables Local AI With Spare Devices
Michael Neale, an engineer at payments company Block, introduced Mesh-LLM, an open-source platform that enables multiple devices to collaboratively run advanced AI models locally. Old laptops, PCs, or phones can run portions of the AI models and combine their outputs into a single response. Mesh-LLM keeps all data private and local on each piece of hardware. Now, dissidents with a few spare devices can ask the most powerful models about campaigning, planning peaceful protests, or authoritarian corruption, all completely privately and without expensive hardware.
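Mesh-LLM’s actual protocol and API are not detailed here, but the idea it describes, splitting a model’s layers across several spare devices so each computes one stage and passes its output along, can be sketched roughly as follows. All names and the toy “model” below are hypothetical illustrations, not Mesh-LLM code.

```python
# Rough sketch (hypothetical, not Mesh-LLM's API) of pipeline-style
# inference: each "device" holds a contiguous slice of a model's
# layers and forwards its activations to the next device.

from typing import Callable, List

Layer = Callable[[float], float]

class Device:
    """Stand-in for one spare laptop or phone holding part of the model."""
    def __init__(self, layers: List[Layer]):
        self.layers = layers  # this device's portion of the model

    def forward(self, activation: float) -> float:
        # Run only this device's slice of the model.
        for layer in self.layers:
            activation = layer(activation)
        return activation

def split_model(layers: List[Layer], n_devices: int) -> List[Device]:
    """Partition the model's layers into contiguous chunks, one per device."""
    chunk = -(-len(layers) // n_devices)  # ceiling division
    return [Device(layers[i:i + chunk]) for i in range(0, len(layers), chunk)]

def run_pipeline(devices: List[Device], x: float) -> float:
    """Each device computes its stage, then hands the result to the next."""
    for device in devices:
        x = device.forward(x)
    return x

# A toy 4-layer "model"; real layers would be tensor operations,
# and real devices would exchange activations over the local network.
model: List[Layer] = [
    lambda x: x + 1,
    lambda x: x * 2,
    lambda x: x + 3,
    lambda x: x * 4,
]

devices = split_model(model, 2)   # two spare devices, two layers each
result = run_pipeline(devices, 1.0)  # ((1 + 1) * 2 + 3) * 4 = 28.0
```

The key property this sketch mirrors is that no single device ever holds the whole model, and raw inputs and activations stay on local hardware rather than passing through a cloud provider.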
Acree.ai Launches New Open-Weight AI Model
Acree.ai, a US-based AI research lab specializing in open-weight models, has launched a robust new model named Trinity Large Thinking. This model is tailored for advanced reasoning tasks, as it processes problems step-by-step instead of providing immediate answers. Acree claims that, on many measures, “it is the strongest open model ever released outside of China.” And that distinction matters. Many of today’s leading open-weight models come from China and are embedded with CCP bias. For human rights defenders who need strong reasoning capabilities without risking authoritarian bias or data exposure, Trinity Large Thinking is one of the top options developed in a democratic country.
Recommended Content
Pubkey x HRF: How Freedom Tech Wins
In this interview, Calle, a pioneering open-source developer, sits down with Alex Li, HRF’s Bitcoin Development Lead, to explore the future of freedom tech. Calle created Cashu, a privacy-focused solution built on top of the Bitcoin network; Bitchat, an encrypted peer-to-peer messaging app that works offline via Bluetooth; and Clawi.ai, a tool that lets users easily set up open-source AI agents in the cloud. Learn how these tools can help protect human rights and how recent advances in AI are accelerating the creation of freedom technology.