For a long time, there was an unwritten rule in the world of artificial intelligence: closed models lead, and open models try to keep up. Z.ai — the Chinese company formerly known as Zhipu AI — just broke that rule in a rather loud way.
On April 7, 2026, GLM-5.1 was released as open source and immediately took first place on SWE-Bench Pro with 58.4 points. That means, for the first time in the history of that benchmark, an open-source model surpassed every closed model in the world at the task of solving real software problems.
What SWE-Bench Pro Is
SWE-Bench Pro is not a multiple-choice test. It takes real problems that were opened by developers in public GitHub repositories — bugs, failures, unexpected behaviors — and asks the model to write code that actually solves the problem.
No cheat sheet. No hints. The model needs to understand the problem, analyze the existing code, propose a solution, and the solution needs to work. It's the kind of test that most closely mirrors what a developer does day to day.
With 58.4 points, GLM-5.1 ranked above GPT-5.4 (57.7) and Claude Opus 4.6 (57.3). A small numerical difference, but enormous in symbolic significance.
The Architecture Behind the Result
GLM-5.1 is a Mixture-of-Experts model with 744 billion total parameters, of which only about 40 billion are active for each token processed. That distinction matters because the active parameter count, not the total, is what determines the real computational cost of inference.
It works like a team of specialists: instead of activating the entire team for every task, the model activates only the experts relevant to that specific situation. Result: far greater efficiency than dense models of equivalent size.
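The routing idea can be sketched in a few lines of Python. This is a toy illustration of top-k expert routing, not GLM-5.1's actual architecture: the expert count, hidden size, and router weights here are invented purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # toy value; production MoE models use far more experts
TOP_K = 2         # experts activated per token
D_MODEL = 16      # toy hidden dimension

# Router: a linear layer that scores every expert for a given token.
W_router = rng.normal(size=(D_MODEL, NUM_EXPERTS))

# Each "expert" is reduced here to a single weight matrix.
experts = [rng.normal(size=(D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]

def moe_forward(token: np.ndarray) -> np.ndarray:
    """Route one token through only its top-k experts; the rest stay idle."""
    logits = token @ W_router                 # one score per expert
    top = np.argsort(logits)[-TOP_K:]         # indices of the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over the selected experts only
    # Weighted combination of the chosen experts' outputs.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=D_MODEL)
out = moe_forward(token)
```

The efficiency gain comes from the fact that only `TOP_K` of the `NUM_EXPERTS` matrix multiplications are ever executed per token, even though all experts contribute to the model's total parameter count.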
The context window is 200,000 tokens — enough to process extensive technical documents, complete codebases from medium-sized projects, or very long conversations without losing the thread.
MIT: The License That Changes Everything
The detail that transforms GLM-5.1 from good news into important news is the MIT license.
The MIT license is one of the most permissive in the software world. You can download the model, use it in production, modify it, include it in commercial products, redistribute it — without paying anything, without asking permission, without significant restrictions.
This contrasts with other "open" models that, in practice, have commercial use restrictions or require special agreements for certain applications. GLM-5.1 is genuinely free to use.
For companies that need to keep data internal — healthcare, legal, finance, defense — having access to a cutting-edge model that can run locally, without data leaving to third-party servers, is strategically relevant.
What This Means for the Industry
There's an argument circulating in Silicon Valley that closed models will always lead because they have more resources. GLM-5.1 is the most concrete counter-argument the market has ever seen.
Z.ai is not a startup: it's a company with years of research and solid academic partnerships in China. GLM-5.1 is the result of a long-term bet on large language models, with a focus on code and technical reasoning.
Where to Find It
The model weights are available on HuggingFace under the MIT license, and the maximum output length is 131,072 tokens. For those with the infrastructure to run a model of this scale, access is immediate and free.
The message GLM-5.1 sends to the market is direct: open source has reached the top. It may not stay there forever. But it got there.