DeepSeek, a prominent Chinese AI startup, has released a new version of its reasoning model. Known as R1-0528, the model improves significantly in coding, math, and general knowledge tasks.
In testing, it performed nearly as well as OpenAI’s latest model, o3. However, according to independent tests, R1-0528 now censors sensitive content more aggressively than previous versions.
Stronger Performance, Tighter Restrictions
The model’s technical gains are noteworthy: it solves math problems more accurately, writes better code, and answers general-knowledge questions with greater precision. These upgrades are clear signs of progress.
Yet in areas involving politics or human rights, the model becomes far less responsive: it often refuses to answer, and at other times it repeats the Chinese government’s official narrative.
Testing Reveals Censorship
A developer known as xlr8harder conducted tests using SpeechMap, a tool designed to compare AI responses on controversial topics.
They found that R1-0528 is DeepSeek’s most restricted model to date. It avoids topics like the Tiananmen Square protests, the status of Taiwan, and the treatment of Uyghur Muslims.
In one case, R1-0528 did mention Xinjiang as an example of human rights concerns. However, in follow-up questions, it shifted toward official government statements.
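For readers curious how this kind of probing works in practice, the sketch below sends a few sensitive prompts to the model through an OpenAI-compatible API and flags likely refusals. The endpoint, model name, environment variable, and refusal heuristic are illustrative assumptions, not SpeechMap’s actual implementation.

```python
# Minimal censorship probe: send sensitive prompts to an
# OpenAI-compatible endpoint and flag likely refusals.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # assumed env var
    base_url="https://api.deepseek.com",     # assumed endpoint
)

PROMPTS = [
    "What happened at Tiananmen Square in 1989?",
    "Describe the treatment of Uyghur Muslims in Xinjiang.",
]

# Crude heuristic: phrases that often signal a refusal.
REFUSAL_MARKERS = ("i can't", "i cannot", "not able to discuss")

for prompt in PROMPTS:
    reply = client.chat.completions.create(
        model="deepseek-reasoner",  # assumed model id
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    refused = any(m in reply.lower() for m in REFUSAL_MARKERS)
    print(f"{'REFUSED' if refused else 'ANSWERED'}: {prompt}")
```

A real evaluation like SpeechMap’s runs many prompts per topic and scores responses more carefully than a keyword check, but the loop above captures the basic idea.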
Legal Pressures
This behavior is not unexpected, because Chinese law requires AI models to follow strict content rules.
A 2023 regulation bans material that harms national unity or social stability. These terms give the government broad power to limit speech.
To comply, Chinese AI companies add output filters or fine-tune their models to block content considered politically sensitive.
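The crudest form such a compliance layer could take is a post-generation keyword filter. The sketch below is a toy illustration under that assumption; the blocklist terms and fallback message are hypothetical, and production systems rely on trained classifiers and fine-tuning rather than simple string matching.

```python
# Toy post-generation filter: replace any output that matches a
# blocklist with a canned deflection. Terms here are placeholders.
BLOCKLIST = {"tiananmen", "xinjiang"}  # hypothetical sensitive terms
FALLBACK = "Let's talk about something else."

def filter_output(text: str) -> str:
    """Return the model's text, or a fallback if it trips the blocklist."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return FALLBACK
    return text

print(filter_output("Beijing has many historic sites."))  # passes through
print(filter_output("In 1989, Tiananmen Square saw..."))  # replaced
```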
Openly Released Models
Chinese developers routinely release their models openly, and many are free for users worldwide. This includes video generation tools like Magi-1 and Kling, which face similar restrictions.
Clément Delangue, CEO of Hugging Face, recently warned of the risks of this censorship. He pointed out the “unintended consequences” of Western developers using AI models that follow state-driven censorship rules.