
By Chen Jiahui and Cheng Manqi
“me stepping down. Bye my beloved qwen.” This brief comment, posted on X by Lin Junyang just after midnight Beijing time on March 4, sent shockwaves through the global AI community.
His departure as the technical lead of Alibaba’s Qwen, the open-source large language model that has taken the world by storm, came less than three weeks after the company unveiled a major upgrade to its flagship model, Qwen3.5.
Lin had formally submitted his resignation the previous afternoon, and on the same day, Yu Bowen, head of post-training at Qwen, also quit. His role will be taken over by Zhou Hao, a former senior research scientist at DeepMind who recently joined Alibaba’s Tongyi Laboratory and now reports to Alibaba Cloud CTO Zhou Jingren.
The departures follow the exit in January of Hui Binyuan, the head of Qwen Code, who has reportedly joined Meta.
Lin’s exit hints at division over restructuring
Lin’s ultimate decision to leave appears closely linked to the ongoing organizational restructuring within the Qwen team. The team, which Lin directly led under Tongyi Laboratory, had operated as a vertically integrated unit spanning pre-training, post-training, multimodal research and infrastructure. Recent plans called for splitting this structure horizontally into separate teams focused on specific stages and modalities. All would remain under Tongyi Laboratory, but Lin’s managerial scope would narrow considerably.
The shift also ran counter to Lin’s technical philosophy. Over the past year, he had advocated tighter integration between pre-training, post-training, infrastructure and training teams, arguing that deep collaboration across stages was essential to long-term competitiveness. Qwen had even begun building its own infrastructure unit, taking over functions previously handled by Alibaba Cloud’s AI platform PAI, which supports multiple Tongyi teams. The restructuring suggested a move in the opposite direction.
Over the past year or two, the AI model teams of the major Chinese tech companies have undergone several rounds of restructuring. ByteDance’s Seed team now uses a “horse racing” approach, fielding multiple groups working on similar directions and modalities. Its Doubao main model series utilizes a structure divided by training stages. Tencent last year consolidated model training and infrastructure teams under a more unified structure. Compared with its peers, Alibaba had made relatively fewer structural changes — until now.
Navigating internal tensions
Prior to this latest development, the Qwen team had managed to navigate internal tensions within Alibaba.
Externally, it had built a strong reputation in the global open-source community. Its wide range of model sizes proved popular with startups and small and medium-sized enterprises. Companies including Cursor have used Qwen models for fine-tuning and post-training, while its open-source multimodal models became base models for several Chinese embodied AI startups.
At the same time, Qwen’s expanding capabilities increasingly overlapped with other Tongyi Laboratory teams. It was developing vision-language-action embodied models, an area also pursued elsewhere within Tongyi. It was building text-to-image and voice models that intersected with other multimodal and speech-focused groups. As Qwen strengthened its own infrastructure and technical stack, it began to resemble a “mini full-stack AI lab” inside a much larger organization.
Alongside these overlaps came internal scrutiny. Senior management evaluated the commercial efficiency of open-source models, which enhance technical influence but can complicate direct monetization through APIs. There were also assessments of individual releases. People familiar with internal discussions said some executives were not fully satisfied with the Qwen3.5 model unveiled earlier this year, viewing it as incomplete.
Strategic goals are paramount to winning the tech war
From Alibaba’s broader perspective, open-source leadership and technical prestige are not ends in themselves, but the means to achieve strategic and commercial goals such as AI Cloud and consumer-facing AI applications. In the AI Cloud arena, Alibaba faces fierce competition from ByteDance’s Volcano Engine, which follows a closed-source model strategy. In the race for flagship AI apps, Qwen’s consumer product struggled to narrow the gap with ByteDance’s Doubao during the intensive marketing promotions over the Lunar New Year in mid-February.
These tensions point to a deeper misalignment between commercial imperatives and technical ideals: top-down strategic goals and clearer divisions of labor versus bottom-up exploration and integrated experimentation within smaller teams like Qwen.
A Tongyi Laboratory source previously told LatePost that before the 2023 generative AI boom placed large language models under intense scrutiny, the Qwen team operated under the radar. Its isolation insulated it from internal friction and allowed it to focus intensely on core research.
However, as AI has evolved into an existential “war” that major tech companies feel they cannot afford to lose, core model R&D teams have faced growing pressure, disruption and restructuring, often following setbacks. Yet Alibaba’s latest shift came at a moment when Qwen enjoyed solid external reviews and high internal morale.
But in large organizations, structural priorities ultimately outweigh individual roles. Lin’s departure underscores how, in the escalating AI race, even respected teams and rising stars are not immune to reorganization.
Source: LatePost