Chinese tech company Alibaba on Monday released Qwen 3, a family of AI models the company claims matches and in some cases outperforms the best models available from Google and OpenAI.
Most of the models are — or soon will be — available for download under an “open” license from the AI dev platform Hugging Face and from GitHub. They range in size from 0.6 billion parameters to 235 billion parameters. Parameters roughly correspond to a model’s problem-solving skills, and models with more parameters generally perform better than those with fewer parameters.
The rise of China-originated model series like Qwen has increased the pressure on American labs such as OpenAI to deliver more capable AI technologies. It has also led policymakers to implement restrictions aimed at limiting the ability of Chinese AI companies to obtain the chips necessary to train models.
According to Alibaba, Qwen 3 models are “hybrid” models in the sense that they can take time and “reason” through complex problems or answer simpler requests quickly. Reasoning enables the models to effectively fact-check themselves, similar to models like OpenAI’s o3, but at the cost of higher latency.
“We have seamlessly integrated thinking and non-thinking modes, offering users the flexibility to control the thinking budget,” wrote the Qwen team in a blog post. “This design enables users to configure task-specific budgets with greater ease.”
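To make the hybrid design concrete, here is an illustrative sketch of how a caller might toggle between the two modes. The `/think` and `/no_think` soft switches follow Qwen's published usage conventions, but the helper function and message shapes below are assumptions for illustration, not code from Alibaba:

```python
# Hypothetical sketch: routing a request to "thinking" or "non-thinking" mode
# by appending Qwen 3's documented soft switch to the user turn. The helper
# name and message format are illustrative assumptions.

def build_messages(question: str, think: bool) -> list[dict]:
    """Tag a user turn so one deployment can serve both quick answers
    and slower step-by-step reasoning."""
    switch = "/think" if think else "/no_think"
    return [{"role": "user", "content": f"{question} {switch}"}]

# Simple request: skip reasoning to keep latency low.
fast = build_messages("What is the capital of France?", think=False)

# Hard request: let the model spend its thinking budget.
slow = build_messages("Prove there are infinitely many primes.", think=True)

print(fast[0]["content"])  # -> "What is the capital of France? /no_think"
```

In practice the message list would then be passed to the model's chat template; the point is that mode selection happens per request, not per deployment.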
The Qwen 3 models support 119 languages, Alibaba says, and were trained on a data set of nearly 36 trillion tokens. Tokens are the raw bits of data that a model processes; 1 million tokens is equivalent to about 750,000 words. Alibaba says that Qwen 3 was trained on a combination of textbooks, “question-answer pairs,” code snippets, AI-generated data, and more.
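Using the article's own ratio of roughly 750,000 words per million tokens, a quick back-of-envelope calculation shows the scale of that training set:

```python
# Back-of-envelope arithmetic using the figures cited above.
WORDS_PER_TOKEN = 750_000 / 1_000_000  # ~0.75 words per token
training_tokens = 36e12                # ~36 trillion tokens

approx_words = training_tokens * WORDS_PER_TOKEN
print(f"{approx_words:.1e}")  # -> 2.7e+13, i.e. roughly 27 trillion words
```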
These improvements, along with others, greatly boosted Qwen 3’s capabilities compared to its predecessor, Qwen 2, says Alibaba. None of the Qwen 3 models are head and shoulders above top-of-the-line recent models like OpenAI’s o3 and o4-mini, but they’re strong performers nonetheless.
On Codeforces, a platform for programming contests, the largest Qwen 3 model — Qwen-3-235B-A22B — just beats out OpenAI’s o3-mini and Google’s Gemini 2.5 Pro. Qwen-3-235B-A22B also bests o3-mini on the latest version of AIME, a challenging math benchmark, and on BFCL, a benchmark that assesses a model’s tool- and function-calling abilities.
But Qwen-3-235B-A22B isn’t publicly available — at least not yet.
The largest public Qwen 3 model, Qwen3-32B, is still competitive with a number of proprietary and open AI models, including Chinese AI lab DeepSeek’s R1. Qwen3-32B surpasses OpenAI’s o1 model on several tests, including the coding benchmark LiveCodeBench.
Alibaba says Qwen 3 “excels” in tool-calling capabilities as well as in following instructions and adhering to specific data formats. In addition to the models for download, Qwen 3 is available from cloud providers including Fireworks AI and Hyperbolic.
Tuhin Srivastava, co-founder and CEO of AI cloud host Baseten, said that Qwen 3 is another point in the trend line of open models keeping pace with closed-source systems such as OpenAI’s.
“The U.S. is doubling down on restricting sales of chips to China and purchases from China, but models like Qwen 3 that are state-of-the-art and open […] will undoubtedly be used domestically,” he told TechCrunch. “It reflects the reality that businesses are both building their own tools [as well as] buying off the shelf via closed-model companies like Anthropic and OpenAI.”