author:GCC
On October 28, the inaugural all-member meeting of the Global Computing Alliance Open AI Infra community and the second all-member meeting of the Open Liquid Cooling Professional Committee were held at Beijing Jiangsu Tower. Nearly a hundred member representatives and experts attended, witnessing the official transition of the Open Liquid Cooling Professional Committee into the newly established Open AI Infra community.

The meeting reviewed and voted on the proposal to transition from the Open Liquid Cooling Professional Committee to the Open AI Infra community, along with changes to the business scope and community guidelines, providing standardized guidance for the community's future operations.
At the conference, Jin Hai, Chairman of the Global Computing Alliance, stated that since its establishment, the alliance has actively promoted the development of the computing industry. With the rapid iteration of AI technology, open source and open collaboration have become an inevitable trend. Through community-driven business proposals, the alliance has pioneered the transformation of the Open Liquid Cooling Professional Committee. Jin Hai emphasized that the community should continue to adhere to the principles of fairness and openness, focus on key technologies, drive industry consensus, and contribute to breakthroughs in AI computing infrastructure in service of future industrial development.
Chairman Zhang Chun of the Open AI Infra Community Management Committee first summarized the committee's achievements over the past year, including the establishment of a comprehensive liquid cooling standard system and the ongoing focus on compatibility challenges through the "Dual Zero Initiative," carried out in collaboration with testing institutions to build a shared testing platform. Regarding the transformation, he emphasized that member demands, the maturity of the open-source environment, and institutional safeguards have created the conditions for the Open Liquid Cooling Professional Committee's transition. Moving forward, the Open AI Infra community will position itself as "open, leading, and innovative," focusing on developing an open ecosystem, achieving technological breakthroughs, and reconstructing application scenarios. At the same time, its business scope will expand to cover full-stack intelligent computing infrastructure, with an efficient governance framework established to drive coordinated industrial development.
The meeting discussed three key directions for the community's future work. The complete rack system specification standardizes rack dimensions, node layouts, and a three-bus blind-mate architecture, and clarifies power supply, cooling, and high-speed interconnect requirements to achieve high-density deployment and efficient operation as AI computing scales. The AI Data Center (AIDC) infrastructure specification addresses pain points in AI data center planning and construction, establishing technical parameters and standards across power distribution, liquid cooling, and architectural layout; it unifies specifications for CDUs (coolant distribution units) and secondary pipelines to ensure the efficiency and reliability of high-density computing deployment. The computing performance benchmark focuses on building a diversified evaluation system for varied computing scenarios, with tools such as CPUBench (single-machine performance), ClusterBench (cluster performance), and AISBench (AI servers); it defines layered metrics and testing modes to overcome the limitations of traditional benchmarking, supporting product optimization and procurement decisions.
Wang Zhiqiang, a member of the Open AI Infra Community Management Committee, pledged to drive the continuous evolution of open communities from standards to specifications, making these standards a bridge between technology and value. He cordially invited industry partners to join the ongoing development of AIDC construction specifications and collaborate on project groups, harnessing the power of open co-creation to shape the future of AI infrastructure.
Professor Fan Chun, Chair of the Open AI Infra Community Advisory Committee, emphasized that openness and innovation are the soul of the community. In setting standards and regulations, the community will drive cross-framework and cross-platform compatibility design, link computing facilities with data center infrastructure to lower innovation barriers, and collaborate with universities and academic institutions to cultivate next-generation AI infrastructure leaders. The Advisory Committee will establish an industry-academia-research-application collaboration platform, promoting co-construction of the industrial chain from proprietary models to open ecosystems, while delivering systematic insights that shape the computing power ecosystem across technical, industrial, and ecological dimensions.
The conference concurrently hosted the inaugural meetings of the Community Management Committee and the Technical Advisory Committee, with key outcomes announced during the plenary sessions. Representatives from core user enterprises unveiled the membership lists of the Community Management Committee, the Advisory Committee, and the Technical Advisory Committee, along with the appointment of the Community Secretary-General. The Technical Advisory Committee unveiled four major project clusters, including AI Complete Rack Systems and Data Center Infrastructure, and announced the member representatives serving as project directors and team leaders. On the implementation side, initiatives such as the GCC Liquid-Cooled Complete Rack Interface Specifications and the AIDC Infrastructure Specifications were launched, laying the groundwork for standardized, large-scale AIDC infrastructure deployment.