Nexus of Agentic Intelligence

Full-Stack AI Agent Platform

Shanghai Innovation Institute, Shanghai Qiji Zhifeng Co., Ltd., Mosi Intelligence Co., Ltd., KuafuAI Co., Ltd., and many entrepreneurial partners have jointly unveiled Nex, the next-generation end-to-end agentic solution. The project is building a sustainable, closed-loop open ecosystem that powers industry upgrades and truly ushers in the era of AI agents.

As a full-stack AI agent solution spanning models, data, agent frameworks, and infrastructure, Nex dramatically lowers development and deployment barriers. It gives researchers and entrepreneurs a high-performance, stable, low-cost, ready-to-use agentic system, helping teams put AI agents into real production scenarios.

    Nex vs Frontier Models

    Nex Ecosystem

    Models & Open Release

    Building on our agentic end-to-end solution, we are releasing four post-trained Nex-N1 models optimized for agent workflows.

    The checkpoints are now live on GitHub, Hugging Face, and ModelScope; deploy them locally, on-prem, or via hosted inference.
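    For a quick local smoke test, a minimal sketch using the standard Hugging Face transformers interface is shown below; the repository id nex-ai/Nex-N1 is a placeholder, not the confirmed model card name.

    ```python
    # Minimal local inference sketch using the standard Hugging Face transformers API.
    # NOTE: "nex-ai/Nex-N1" is a placeholder repo id, not the confirmed model card name.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "nex-ai/Nex-N1"  # replace with the actual HuggingFace / ModelScope id

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Agent-tuned chat checkpoints normally ship a chat template with the tokenizer.
    messages = [{"role": "user", "content": "Outline three steps to triage a failing CI job."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```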

    Fully Open Source

    Beyond the core models, we are open-sourcing the NexAU agent framework, the training data and production pipelines, the NexRL reinforcement learning stack, and NexVenusCL, the EaaS MoE inference core, unlocking end-to-end deployment and optimization.

    Agent Framework

    NexAU Agent Framework

    NexAU (Agent Universe) is designed with the core philosophy of "low barrier to entry, high efficiency, and flexible customization," comprehensively covering all foundational capabilities required for AI agent development.
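    NexAU's concrete interfaces are not shown on this page; as a framework-agnostic illustration of the tool-calling loop such a framework manages for you, the sketch below alternates between model calls and tool execution. Every name in it (run_agent, TOOLS, search_docs) is illustrative, not NexAU's API.

    ```python
    # Framework-agnostic sketch of the tool-calling loop an agent framework like NexAU
    # wraps for developers; every name here is illustrative and NOT NexAU's actual API.
    import json
    from typing import Callable, Dict, List


    def search_docs(query: str) -> str:
        """Toy tool; a real registry would expose MCP servers, shells, browsers, etc."""
        return f"(stub) top result for: {query}"


    TOOLS: Dict[str, Callable[[str], str]] = {"search_docs": search_docs}


    def run_agent(task: str, llm: Callable[[List[dict]], dict], max_steps: int = 8) -> str:
        """Alternate model calls and tool executions until the model returns an answer."""
        messages: List[dict] = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            # The model is expected to reply with either a tool call or a final answer.
            reply = llm(messages)
            if "answer" in reply:
                return reply["answer"]
            observation = TOOLS[reply["tool"]](reply["args"])
            messages.append({"role": "tool", "content": json.dumps({"observation": observation})})
        return "step budget exhausted"
    ```

    A full framework typically layers tool registries, context management, error handling, and orchestration on top of this bare loop so that developers do not have to rebuild it per project.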

    Data & Pipelines

    End-to-End Agentic Data

    Powered by NexAU + Agent4Agent, the open pipelines span prompt/issue synthesis, trajectory generation, and processing for tool use, MCP, and agentic coding.

    • Synthesis pipeline: covers essential agent skills, tool / MCP calling, and agentic coding. Primary resources: NexGAP.
    • Data coverage: we are opening 70K+ high-quality trajectories for full-stack dev, code rewrites, game dev, analytics, crawling, testing, ML, vector graphics, and mini-program building; a hypothetical record layout is sketched after this list.
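    This page does not spell out the released record format, so the schema below is only a hypothetical sketch of what a tool-use trajectory tends to contain; consult the NexGAP resources for the actual layout.

    ```python
    # Hypothetical shape of a single agent trajectory record. The released data's
    # real schema may differ; see the NexGAP resources for the authoritative format.
    from dataclasses import dataclass, field
    from typing import List, Optional


    @dataclass
    class Step:
        thought: Optional[str]      # model reasoning, if recorded
        tool_name: Optional[str]    # e.g. an MCP tool, shell command, or code edit
        tool_args: Optional[dict]   # arguments passed to the tool
        observation: Optional[str]  # tool output fed back to the model


    @dataclass
    class Trajectory:
        task: str                   # synthesized prompt or issue
        domain: str                 # e.g. "full-stack dev", "analytics", "testing"
        steps: List[Step] = field(default_factory=list)
        final_answer: str = ""
    ```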

    Infra

    NexRL & EaaS

    Service-oriented infrastructure purpose-built for agentic workloads: it decouples rollout from training and enables high-throughput MoE inference via GPU peer-to-peer (P2P) communication.

    • NexRL: a modular RL stack that exposes training, inference, tool interaction, and reward computation as services behind standard APIs for quick adaptation (see the sketch after this list). Repo: NexRL.
    • EaaS / NexVenusCL: an MoE inference system with elastic experts and IBGDA-based GPU P2P communication; we open-source the NexVenusCL core, which keeps throughput loss under 2% even under faults. Repo: NexVenusCL.
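    To make the rollout/training decoupling concrete, here is a minimal, framework-agnostic sketch in which rollout workers and the trainer run as separate components exchanging trajectories through a queue. The names are illustrative and do not reflect NexRL's actual service APIs.

    ```python
    # Illustrative pattern only: rollout and training are separate services that
    # exchange trajectories through a queue, so each side can scale and fail
    # independently. None of these names come from NexRL's real API.
    import queue
    import threading


    def rollout_worker(trajectories: "queue.Queue[dict]", num_episodes: int) -> None:
        """Stands in for an inference + tool-interaction service producing trajectories."""
        for episode in range(num_episodes):
            trajectories.put({"episode": episode, "reward": 1.0, "tokens": [1, 2, 3]})


    def trainer(trajectories: "queue.Queue[dict]", num_updates: int) -> None:
        """Stands in for the training service consuming trajectories asynchronously."""
        for _ in range(num_updates):
            batch = trajectories.get()
            # ... compute the RL loss and update the policy here ...
            print(f"updated policy on trajectory from episode {batch['episode']}")


    q: "queue.Queue[dict]" = queue.Queue(maxsize=64)
    threading.Thread(target=rollout_worker, args=(q, 4), daemon=True).start()
    trainer(q, num_updates=4)
    ```

    In a real service-oriented deployment the queue boundary becomes a network boundary (RPC or a message bus), which is what lets rollout, training, tool interaction, and reward services scale and recover independently.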

    Use Cases


    Evaluation

    DeepResearch

    Global Deep Research Benchmark

    We reference the public DeepResearch Bench leaderboard to compare Nex-N1 against mainstream offerings on complex research tasks.