Welcome back to the Human Rights Foundation’s (HRF) monthly newsletter giving you the low-down on how AI and individual rights converge. This month, we’re continuing to explore the dual nature of AI, which can be a powerful force for individual liberty or an instrument of state surveillance and control, depending on who wields it.
First, we invite you to learn more about vibe coding, an incredibly useful technique that allows anyone to build a website or an app in minutes. HRF’s AI for Individual Rights technical advisor Justin Moon, chief strategy officer Alex Gladstein, and Freedom Fellowship director Jhanisse Vaca Daza break it down in a new educational video, “Vibe Coding for Human Rights 101.”
In this 30-minute segment, the three explain why vibe coding is so useful for dissidents and civil society groups, then spin up and deploy a brand-new website on the spot. Bear in mind this is just an example of what is possible, and of how simple it is.

Navigating the explosion of new AI tools can be overwhelming. To help, we’re launching a new resource on our website: a small, curated list of the AI tools we currently use. Visit HRF.org/AI to learn more. Given the rapid pace of change, this list is a living document. While we’ll strive to keep it updated, the landscape shifts constantly, making your own judgment essential.

This brings us to a critical takeaway for the AI era: assume that any information you share online can be, and likely has been, scraped for AI training data. This makes personal data hygiene and a commitment to critical thinking more urgent than ever.
Finally, as we’ll do with each newsletter, we provide a deep dive into a particular topic. This time, as you’ll find at the bottom of this email, it’s a detailed look into the Shanghai Cooperation Organization and how the Chinese government is exporting AI surveillance tech to its allies.
Your feedback and suggestions are always welcome. Submit tips, stories, news, and ideas by emailing [email protected]
Best,
Craig Vachon
Director, AI for Individual Rights
Vibe Coding for Human Rights 101
Join HRF’s AI for Individual Rights Technical Advisor Justin Moon, HRF Chief Strategy Officer Alex Gladstein, and HRF Freedom Fellowship Director Jhanisse Vaca Daza as they vibe code a website in a matter of minutes using Replit. See the website draft they vibe coded in minutes here. Stay tuned for future iterations of this educational video series teaching activists how to use AI tools.
The Latest in AI for Repression
China | Government’s DeepSeek Adoption May Be Less Substantial Than Reported
Leiphone, a Chinese online tech media platform, contends that local Chinese government adoption of DeepSeek, an AI-powered chatbot similar to ChatGPT, may be less substantial than originally reported. The technology-focused think tank Zhiding, which provides services to the Chinese government, claims that 72 provincial and city governments had fully deployed DeepSeek as of early March, but it omitted the specifics of those deployments. In practice, the local governments deployed DeepSeek via all-in-one machines preloaded with the AI tool, and sales of these machines have remained low after the first wave of early adopters. Leiphone explains that organizations such as hospitals were having trouble integrating application-specific data with DeepSeek, indicating that claims of widespread adoption may be exaggerated. Even if local governments’ use of AI is overstated, there are still concerning reports of the Chinese Communist Party’s use of AI, such as deepfakes deployed to discredit pro-democracy advocates. Carmen Lau, an exiled pro-democracy Hong Kong activist, discovered deepfaked videos of herself and colleagues expressing distress over a proposed amendment to the Extradition Act 2003 that would allow Hong Kongers to be extradited on a case-by-case basis. The campaign succeeded in its goal of making dissidents living in the United Kingdom feel unsafe. These episodes highlight the importance of remaining skeptical about both the claimed and the actual use of AI by authoritarian governments.
Russia | AI Tools Create Sophisticated Propaganda Campaigns
A recent study released by CheckFirst revealed Russia’s use of consumer-grade AI tools to create sophisticated propaganda campaigns, an effort dubbed “Operation Overload.” The campaign doctored videos of prominent figures to make it appear they had made statements they never did, generated images of riots that never occurred, and used bots to propagate the content across platforms such as Telegram, X, and Bluesky, garnering millions of views. In an unusual twist, the campaign then sent hundreds of thousands of emails to fact-checking organizations asking them to verify its content, in a bid to further amplify its reach. Operation Overload is the latest example of an authoritarian regime using AI to manipulate public opinion and spread propaganda.
United Arab Emirates and Qatar | May Fund Major AI Company Anthropic
In a leaked memo published by Wired, Anthropic CEO Dario Amodei revealed that the company is seeking investment from the United Arab Emirates and Qatar, noting that there is “$100 [billion] or more” of capital available in the region. Anthropic is known for Claude, an AI chatbot popular among software developers. Regarding the sought-after investment from these authoritarian governments, Amodei wrote: “The implicit promise of investing in future rounds can create a situation where they have some soft power, making it a bit harder to resist these things in the future. In fact, I actually am worried that getting the largest possible amounts of investment might be difficult without agreeing to some of these other things. But I think the right response to this is simply to see how much we can get without agreeing to these things (which I think are likely still many billions), and then hold firm if they ask.” In short, in exchange for investment, Anthropic would likely become subject to the censorious or otherwise oppressive demands of these two authoritarian regimes.
Turkey | Bans Selected Content from AI Chatbot Grok
On the pretext of protecting public order, a Turkish court ordered a ban on content from xAI’s chatbot Grok. Authorities allegedly identified 50 posts in which the chatbot insulted President Recep Tayyip Erdogan, who is notoriously sensitive to criticism of any kind. To justify the ban, authorities cited laws that make such insults a criminal offense punishable by up to four years in prison. Transport and Infrastructure Minister Abdulkadir Uraloglu told national media that Turkey has not yet implemented a total access ban on Grok but would do so if necessary. Turkey has a long history of censoring content online and is now the first government to formally censor Grok, highlighting the need for open-source and censorship-resistant AI tools.
The Latest in AI for Freedom
PayPerQ | Enables Users to Pay for LLMs with Bitcoin
PayPerQ is a chatbot service that lets users pay per query, in bitcoin and without an account, to use leading chatbots. Users can pay per use with bitcoin on-chain or over Lightning and can switch models on the fly, without providing personal information such as an email address, phone number, billing address, or credit card number. PayPerQ, whose credits can also be purchased with traditional payment methods, offers leading LLMs such as OpenAI’s GPT-4o, xAI’s Grok 4, Anthropic’s Claude Sonnet 4, Meta’s Llama 4, and more, with the average PPQ user spending only $4 per month. This service could be of use to political dissidents who want to use cutting-edge models without tying their identity to sensitive queries, as well as to unbanked individuals who otherwise couldn’t pay with a credit card or bank account.
Soapbox | Launches Shakespeare.diy to Easily Vibe Code on Nostr
Soapbox recently launched Shakespeare, an AI-powered website builder that works on top of Nostr, a decentralized and censorship-resistant social media protocol that lets users post without permission from any centralized party. Shakespeare allows users to quickly and easily build Nostr-based websites with nothing more than natural language prompts. Users sign up with a pseudonymous Nostr account and can pay via Lightning or Stripe to buy credits for the tool. The generated code is handed to the user, and Shakespeare itself is fully open source, ensuring users retain full sovereignty over the websites they develop. By combining AI vibe coding and Nostr, Shakespeare lets activists and developers alike create tools that empower free speech and resist censorship.
OpenAI | Releases Open-Source AI Models
OpenAI released its first open-source models, gpt-oss-120b and gpt-oss-20b. The larger gpt-oss-120b is designed to run in data centers and on high-end desktops and laptops, while gpt-oss-20b is a medium-sized model that can run on most desktops and laptops. Users can set the reasoning effort to low, medium, or high to conserve processing power, inspect a model’s full chain of thought for easier debugging, and use the models for agentic tasks such as executing Python code and performing web searches with built-in browsing tools. Both models are available on Hugging Face, the leading platform for open-source AI tools. Amid a concerning trend of Western AI companies abandoning open-source releases, this move by OpenAI is a promising step in the other direction, ensuring that a more ideologically diverse range of open-source AI models is accessible to everyone.
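For the technically curious: the adjustable reasoning effort is selected through the model’s system prompt rather than a separate API switch. The minimal sketch below shows what a prompt builder for this could look like; the helper name and the prompt wording are illustrative assumptions for this newsletter, not OpenAI’s official prompt format.

```python
# Illustrative sketch: gpt-oss selects its reasoning effort (low /
# medium / high) via the system prompt. The exact wording below is an
# assumption for illustration, not OpenAI's official format.
VALID_EFFORTS = {"low", "medium", "high"}

def build_system_prompt(effort: str = "medium") -> str:
    """Compose a system prompt that selects a reasoning-effort level."""
    if effort not in VALID_EFFORTS:
        raise ValueError(f"effort must be one of {sorted(VALID_EFFORTS)}")
    # Lower effort trades answer depth for speed and processing power.
    return f"You are a helpful assistant.\nReasoning: {effort}"
```

A user wanting quick, cheap answers on a laptop would pass `"low"`; a user debugging a hard problem would pass `"high"` and read the fuller chain of thought.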
Bitchat | Vibe Coded with Goose AI Assistant
Twitter and Square founder Jack Dorsey recently released Bitchat, a peer-to-peer encrypted messaging tool that works entirely offline over Bluetooth. To create the app, Dorsey used Block’s AI coding assistant Goose, an open-source vibe coding tool that writes code and debugs errors as prompted by the user, and built Bitchat in just a few days. Messages sent on Bitchat are designed to be end-to-end encrypted and hop locally from one phone to another until they reach their destination. Bitchat shows how decentralized freedom-tech communication tools that empower the individual can be built far more efficiently with AI.
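The hop-by-hop relaying described above can be pictured as store-and-forward flooding with a hop limit: each phone passes a message to peers in radio range until it reaches the recipient, and a time-to-live (TTL) stops messages from circulating forever. The toy simulation below illustrates that general idea only; it is not Bitchat’s actual protocol, and the function names and default TTL are assumptions.

```python
from collections import deque

def relay(links, source, dest, ttl=7):
    """Flood a message outward from `source` over a mesh of phones.

    `links` maps each phone to the phones currently in its radio range.
    Returns the number of hops needed to reach `dest`, or None if the
    message expires (TTL) or no path exists.
    """
    frontier = deque([(source, 0)])  # (phone, hops taken so far)
    seen = {source}                  # don't re-relay to phones already reached
    while frontier:
        node, hops = frontier.popleft()
        if node == dest:
            return hops
        if hops == ttl:
            continue  # hop limit reached; message is dropped here
        for peer in links.get(node, ()):
            if peer not in seen:
                seen.add(peer)
                frontier.append((peer, hops + 1))
    return None  # recipient unreachable within the hop limit
```

For example, with phones A, B, C, and D in a chain where only neighbors are in Bluetooth range, a message from A reaches D in three hops, but not if the TTL is capped at two.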
Cactus | Launches Open-Source Interface to Run Chatbots Locally
The open-source interface Cactus allows users to run open-source chatbots such as Llama, Qwen, and Gemma locally on their own devices. By cutting out the cloud, Cactus lets users work with chatbots offline, without sharing their data with third parties or paying API costs. Cactus supports any LLM or vision language model (VLM) available on Hugging Face, the leading platform for open-source AI models. So far, the application has seen use in medical and other privacy-sensitive industries, as an offline fallback for large cloud-hosted AI models, and more. A private, offline, free-to-use interface for running open-source LLMs ensures access to the technology for anyone in challenging circumstances, including activists and political dissidents.
Join our AI for Individual Rights Newsletter
Deep Dive:
SCO Expansion and the Rise of Digital Expansionism
The Shanghai Cooperation Organisation (SCO) is a Eurasian political, economic, and security alliance led by China and Russia whose member states have adopted various methods of digital authoritarianism. It has significantly expanded its economic and political influence, integrating countries such as Iran, Belarus, Turkey, and Myanmar, all ruled by authoritarian regimes, into its framework. Belarus officially became a full SCO member in 2024, following a period as an observer, while Turkey and Myanmar have increased their engagement as dialogue partners or through enhanced cooperation agreements. Iran joined as a full member in 2023, further extending the bloc’s geographic reach and economic integration. The enlarged SCO now represents a considerable share of global energy resources, economic output, and population, marking a shift in geopolitical and economic alliances away from democratic models.
These integrations are part of larger efforts to enhance cross-border trade, infrastructural connectivity (such as new railway and energy corridors), and digital economies under the leadership of the Chinese and Russian authoritarian regimes.
Digital Authoritarianism: Models, Practices, and Expansion
Digital authoritarianism refers to the use of digital technology by governments to monitor, repress, and shape the behavior of populations, often at the expense of individual liberties and privacy. It is increasingly promoted as a governing model among SCO members by China and Russia:
- China provides an advanced example through its “Great Firewall,” mass surveillance systems, facial recognition, and extensive social media censorship. It exports these systems and accompanying governance models to other autocratic states, especially those in the SCO and its partner network.
- Russia adopts a lower-cost approach, characterized by strong legal restrictions, aggressive disinformation campaigns, mandatory software preinstallations, and cyber sovereignty laws. While not as technologically sophisticated as China, Russia’s methods are easily exportable and adaptable for other authoritarian contexts.
Countries entering the SCO fold, like Iran, have rapidly adopted or deepened these digital authoritarian techniques, acquiring Chinese surveillance and censorship technology. Myanmar seeks tighter trade and digital links, while Belarus is positioned to benefit from expanded digital collaboration with China and Russia. Turkey’s pursuit of closer ties signals the appeal of these governance models beyond Asia, even into NATO-aligned states.
Global Risks and Implications
Spread of Repressive Digital Governance
- Proliferation of surveillance and censorship: Chinese and Russian digital platforms, AI-enabled surveillance, and “smart city” technologies are now being widely exported to SCO states and beyond, often with government training and expertise provided to entrench these systems.
- Weakening of digital rights: Freedom House findings indicate that no SCO member is rated as having a “free” Internet, with chronic and worsening abuses against privacy and expression across the bloc.
- Transnational policy influence: China and Russia use the SCO and other multisectoral forums to influence global digital governance, seeking to shift cyber norms and standards toward their authoritarian vision.
Future Trajectories
- Institutionalization: The SCO is formalizing its digital cooperation, with official action plans targeting unified cybersecurity, coordinated responses to cyber threats, and collective standard-setting in AI and data analysis.
- Expansion: Digital authoritarianism is likely to keep spreading through these new partnerships, especially as more countries, even those with democratic histories, see economic and technological benefits in adopting SCO-driven systems.
The convergence of dictatorial governance, poor human rights records, economic integration into the SCO, and the explicit promotion of digital authoritarianism by China and Russia is accelerating the rise of a global web of digital repression.
Want to contribute to the newsletter? Submit tips, stories, news, and ideas by emailing [email protected]