# Ling-1T

**Repository Path**: hf-models/Ling-1T

## Basic Information

- **Project Name**: Ling-1T
- **Description**: Mirror of https://huggingface.co/inclusionAI/Ling-1T
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-10-12
- **Last Updated**: 2025-10-15

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

---
license: mit
pipeline_tag: text-generation
library_name: transformers
---
🤗 Hugging Face | 🤖 ModelScope | 🚀 Experience Now
## Introduction

**Ling-1T** is the first flagship *non-thinking* model in the Ling 2.0 series, featuring **1 trillion total parameters** with **≈ 50 billion active parameters per token**. Built on the Ling 2.0 architecture, Ling-1T is designed to push the limits of *efficient reasoning* and *scalable cognition*.

Pre-trained on **20 trillion+ high-quality, reasoning-dense tokens**, Ling-1T-base supports up to **128K context length** and adopts an **evolutionary chain-of-thought (Evo-CoT)** process across mid-training and post-training. This curriculum greatly enhances the model's efficiency and reasoning depth, allowing Ling-1T to achieve **state-of-the-art performance** on multiple complex reasoning benchmarks, balancing **accuracy** and **efficiency**.

### Flagship-Level Efficient Reasoning
We comprehensively evaluated Ling-1T against leading flagship models, including both **open-source giants** (e.g., *DeepSeek-V3.1-Terminus*, *Kimi-K2-Instruct-0905*) and **closed-source APIs** (*GPT-5-main*, *Gemini-2.5-Pro*). Across code generation, software development, competition-level mathematics, professional math, and logical reasoning, Ling-1T consistently demonstrates **superior complex reasoning ability** and overall advantage. In the **AIME 25** benchmark, Ling-1T extends the **Pareto frontier** of reasoning accuracy vs. reasoning length, showcasing its strength in "efficient thinking and precise reasoning."
### Aesthetic Understanding and Front-End Generation

Ling-1T excels in visual reasoning and front-end code generation tasks, combining deep semantic understanding with precise code synthesis. We introduce a hybrid *Syntax-Function-Aesthetics* reward mechanism, enabling the model to not only generate correct and functional code but also demonstrate a refined sense of **visual aesthetics**. On **ArtifactsBench**, [Ling-1T](https://zenmux.ai/inclusionai/ling-1t?utm_source=hf_inclusionAI) ranks **first among open-source models**, and the benchmark visualizations in this card were, in fact, *generated by Ling-1T itself*.

### Emergent Intelligence at Trillion Scale

Scaling to the trillion-parameter level has revealed strong **emergent reasoning and transfer capabilities**. For example, in the **BFCL V3** tool-use benchmark, Ling-1T achieves **≈ 70% tool-call accuracy** with only light instruction tuning, despite having seen no large-scale trajectory data during training. [Ling-1T](https://zenmux.ai/inclusionai/ling-1t?utm_source=hf_inclusionAI) can:

* Interpret complex natural-language instructions
* Transform abstract logic into functional visual components
* Generate cross-platform compatible front-end code
* Create stylistically controlled marketing copy and multilingual text

These capabilities form the foundation for **general, collaborative human-AI intelligence**, which we aim to advance together with the open-source community through Ling-1T's release.

### Pre-Training at Trillion Scale

The Ling 2.0 architecture was designed from the ground up for trillion-scale efficiency, guided by the **Ling Scaling Law** ([arXiv:2507.17702](https://arxiv.org/abs/2507.17702)). This ensures architectural and hyperparameter scalability even under **1e25–1e26 FLOPs** of compute.

Key architectural innovations include:

* **1T total / 50B active parameters** with a **1/32 MoE activation ratio**
* **MTP layers** for enhanced compositional reasoning
* **Aux-loss-free**, **sigmoid-scoring expert routing** with **zero-mean updates** (see the sketch below)
* **QK Normalization** for fully stable convergence
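To make the routing bullet concrete, here is a minimal sketch of sigmoid-scoring, aux-loss-free top-k routing with a zero-mean bias update. Everything in it (expert count, top-k, the bias-update rule, and all names) is a hypothetical reconstruction in the spirit of aux-loss-free load balancing, not Ling-1T's actual routing code:

```python
import torch
import torch.nn as nn

class SigmoidRouter(nn.Module):
    """Illustrative MoE router: sigmoid scores, bias-steered top-k selection."""

    def __init__(self, hidden_size: int, num_experts: int = 256, top_k: int = 8,
                 bias_update_rate: float = 1e-3):
        super().__init__()
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)
        # Routing-only bias: it steers expert *selection* but never scales expert
        # outputs, so balancing needs no auxiliary loss and adds no gradient noise.
        self.register_buffer("expert_bias", torch.zeros(num_experts))
        self.top_k = top_k
        self.bias_update_rate = bias_update_rate

    def forward(self, x: torch.Tensor):
        # Sigmoid scoring: each expert gets an independent score in (0, 1),
        # unlike softmax, where expert scores compete with one another.
        scores = torch.sigmoid(self.gate(x))                      # [tokens, experts]
        # Pick top-k experts by bias-adjusted scores.
        _, expert_idx = (scores + self.expert_bias).topk(self.top_k, dim=-1)
        weights = scores.gather(-1, expert_idx)
        weights = weights / weights.sum(dim=-1, keepdim=True)     # renormalize
        # Zero-mean balancing update: (load.mean() - load) sums to zero by
        # construction, so the bias drifts toward balance without net drift.
        with torch.no_grad():
            load = torch.zeros_like(self.expert_bias)
            load.scatter_add_(0, expert_idx.flatten(),
                              torch.ones(expert_idx.numel(), device=x.device))
            self.expert_bias += self.bias_update_rate * (load.mean() - load)
        return expert_idx, weights

router = SigmoidRouter(hidden_size=2048)
idx, w = router(torch.randn(16, 2048))   # route a batch of 16 token states
```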
Ling-1T is the **largest FP8-trained foundation model** known to date. FP8 mixed-precision training yields **15%+ end-to-end speedup**, improved memory efficiency, and maintains **≤ 0.1% loss deviation** from BF16 across **1T tokens**. A fine-grained, **heterogeneous 1F1B interleaved pipeline** further boosts utilization by 40%+. System-level optimizations (fused kernels, communication scheduling, recomputation, checkpointing, simulation, and telemetry) ensure stable trillion-scale training.
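For intuition about the ≤ 0.1% deviation claim, a tiny round-trip experiment can measure how much a scaled FP8 (E4M3) cast loses relative to BF16. This is a minimal sketch assuming PyTorch ≥ 2.1's `torch.float8_e4m3fn` dtype; real FP8 training applies scaled casts inside fused GEMM kernels (e.g., via Transformer Engine), not a naive round-trip:

```python
import torch

def fp8_roundtrip(x: torch.Tensor) -> torch.Tensor:
    """Cast a BF16 tensor to FP8 E4M3 with per-tensor scaling, then back."""
    # Scale so the max magnitude maps near E4M3's representable limit (~448).
    scale = 448.0 / x.abs().max().clamp(min=1e-12)
    x_fp8 = (x * scale).to(torch.float8_e4m3fn)
    return (x_fp8.to(torch.float32) / scale).to(torch.bfloat16)

x = torch.randn(4096, 4096, dtype=torch.bfloat16)
rel_err = (fp8_roundtrip(x) - x).abs().mean() / x.abs().mean()
print(f"mean relative round-trip error: {rel_err.item():.4%}")
```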
Pre-training used over **20T high-quality tokens**, with **> 40% reasoning-dense data** in later stages. Mid-training introduced **curated chain-of-thought corpora** for "**reasoning pre-activation**", improving downstream reasoning stability. A custom **WSM (Warmup-Stable-Merge)** LR scheduler ([arXiv:2507.17634](https://arxiv.org/abs/2507.17634)) with mid-train checkpoint merging simulates LR decay and boosts generalization.

### Post-Training and Evo-CoT Optimization

Built upon mid-training reasoning activation, post-training adopts **Evo-CoT (Evolutionary Chain-of-Thought)** for progressive reasoning enhancement under controllable cost. This approach continually expands the **Pareto frontier** of reasoning accuracy vs. efficiency, ideal for reflexive non-thinking models.

For reinforcement learning, we introduce **LPO (Linguistics-Unit Policy Optimization)**, a novel sentence-level policy optimization method. Unlike GRPO (token-level) or GSPO (sequence-level) algorithms, LPO treats *sentences* as the natural semantic action units, enabling precise alignment between rewards and reasoning behavior. Empirically, LPO offers superior **training stability** and **generalization** across reasoning tasks.
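The card does not spell out LPO's objective, but the sentence-as-action-unit idea can be sketched as a PPO-style clipped loss whose importance ratios are aggregated per sentence rather than per token (as in GRPO) or per whole sequence (as in GSPO). The function name, the clipping form, and the aggregation below are illustrative assumptions, not the published algorithm:

```python
import torch

def sentence_level_clipped_loss(logp_new: torch.Tensor,   # [T] token log-probs, current policy
                                logp_old: torch.Tensor,   # [T] token log-probs, behavior policy
                                sent_ids: torch.Tensor,   # [T] sentence index per token (long)
                                advantages: torch.Tensor, # [S] one advantage per sentence
                                clip_eps: float = 0.2) -> torch.Tensor:
    """Hypothetical sentence-level policy-gradient loss in the spirit of LPO."""
    num_sents = int(sent_ids.max().item()) + 1
    # Per-sentence importance ratio:
    #   ratio_s = exp( sum_{t in sentence s} (logp_new_t - logp_old_t) )
    diff = torch.zeros(num_sents).scatter_add_(0, sent_ids, logp_new - logp_old)
    ratios = diff.exp()
    # Standard PPO-style clipping, applied at sentence granularity.
    unclipped = ratios * advantages
    clipped = ratios.clamp(1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.minimum(unclipped, clipped).mean()
```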
## Evaluation

Ling-1T has been extensively evaluated across **knowledge**, **code**, **math**, **reasoning**, **agent**, and **alignment** benchmarks. It currently stands as the **best open-source flagship non-thinking model**, rivaling closed-source APIs in complex reasoning while maintaining exceptional efficiency and interpretability.
## Model Downloads

You can download Ling-1T from the table below. If you are located in mainland China, we also provide the model on ModelScope.cn to speed up the download process.

| Model   | Context Length | Download |
|---------|:--------------:|:--------:|
| Ling-1T | 128K | [🤗 Hugging Face](https://huggingface.co/inclusionAI/Ling-1T) |
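Since the card's `library_name` is `transformers`, a minimal inference sketch using the standard causal-LM API is shown below. Whether the checkpoint needs `trust_remote_code=True`, and the hardware it fits on, are assumptions; a 1T-parameter MoE will in practice require a multi-GPU or multi-node setup:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ling-1T"

# Assumption: the repo may ship custom modeling code, hence trust_remote_code.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # shard across available devices
    trust_remote_code=True,
)

# Ling-1T is a non-thinking chat model, so the chat template is used directly.
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```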