CHINA – Alibaba has launched Qwen3, the latest generation of its open-source large language model (LLM) family, setting a new benchmark for AI innovation.
The Qwen3 series features six dense models and two Mixture-of-Experts (MoE) models, offering developers flexibility to build next-generation applications across mobile devices, smart glasses, autonomous vehicles, robotics and beyond.
All Qwen3 models – including dense models (0.6B, 1.7B, 4B, 8B, 14B, and 32B parameters) and MoE models (30B with 3B active, and 235B with 22B active) – are now open-sourced and available globally.
Hybrid Reasoning Combining Thinking and Non-thinking Modes
Qwen3 marks Alibaba’s debut of hybrid reasoning models, combining traditional LLM capabilities with advanced, dynamic reasoning.
Qwen3 models can seamlessly switch between thinking mode (for complex, multi-step tasks such as mathematics, coding, and logical deduction) and non-thinking mode (for fast, general-purpose responses).
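As a rough illustration, the switch can be toggled when building the prompt locally with Hugging Face Transformers. This is a minimal sketch based on the published Qwen3 model cards; the checkpoint name and the `enable_thinking` flag should be treated as indicative rather than authoritative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-8B"  # any Qwen3 checkpoint with hybrid reasoning
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Solve 37 * 48 step by step."}]

# Thinking mode: the chat template adds a reasoning block before the final
# answer, useful for math, coding, and logical deduction. Set the flag to
# False for fast, non-thinking responses.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```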
For developers accessing Qwen3 through the API, the models offer granular control over thinking duration (up to 38K tokens), enabling an optimized balance between intelligent performance and compute efficiency.
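For illustration only, a request through an OpenAI-compatible endpoint might cap the thinking budget as sketched below. The endpoint URL, model identifier, and the `enable_thinking` / `thinking_budget` fields are assumptions modeled on typical Qwen deployments, not parameter names confirmed by the announcement.

```python
from openai import OpenAI

# Hypothetical OpenAI-compatible endpoint serving Qwen3; replace the URL and key
# with your provider's values.
client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="qwen3-235b-a22b",  # assumed model identifier
    messages=[{"role": "user", "content": "Prove that the sum of two even numbers is even."}],
    extra_body={
        "enable_thinking": True,  # assumed flag: turn thinking mode on
        "thinking_budget": 4096,  # assumed field: cap thinking tokens (the announcement cites up to 38K)
    },
)
print(response.choices[0].message.content)
```

A smaller budget trades some reasoning depth for lower latency and cost, which is the balance the announcement describes.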