Sunnyvale: Artificial intelligence chip startup Cerebras Systems and G42 have launched the first of nine interconnected AI supercomputers.
The Condor Galaxy is a network of nine interconnected supercomputers, offering a new approach to AI compute that promises to significantly reduce AI model training time. The first AI supercomputer on this network, Condor Galaxy 1 (CG-1), delivers 4 exaFLOPs of AI compute across 54 million cores.
Cerebras and G42 are planning to deploy two more such supercomputers, CG-2 and CG-3, in the US in early 2024, the companies said in a statement on Thursday.
Located in Santa Clara, California, CG-1 links 64 Cerebras CS-2 systems together into a single, easy-to-use AI supercomputer with an AI training capacity of 4 exaFLOPs. Cerebras and G42 offer CG-1 as a cloud service, allowing customers to access the performance of an AI supercomputer without having to manage or distribute models over physical systems.
CG-1 marks the first time Cerebras has partnered not only to build a dedicated AI supercomputer but also to manage and operate it. CG-1 is designed to enable G42 and its cloud customers to train large, ground-breaking models quickly and easily, thereby accelerating innovation. The Cerebras-G42 strategic partnership has already advanced state-of-the-art AI models in Arabic bilingual chat, healthcare and climate studies.
Training large models requires huge amounts of compute, vast datasets, and specialized AI expertise. The partnership between G42 and Cerebras delivers on all three of these elements. With the Condor Galaxy supercomputing network, the two companies are democratizing AI, enabling simple and easy access to the industry’s leading AI compute.
G42’s work with diverse datasets across healthcare, energy and climate studies will enable users of the systems to train new cutting-edge foundation models.
These models and derived applications are a powerful force for good. Finally, Cerebras and G42 bring together a team of hardware engineers, data engineers, AI scientists, and industry specialists to deliver a full-service AI offering to solve customers’ problems. This combination will produce ground-breaking results and turbocharge hundreds of AI projects globally.
Fed by 72,704 AMD EPYC processor cores, CG-1 packs 54 million AI-optimized compute cores and 388 terabits per second of fabric bandwidth. Unlike any known GPU cluster, it delivers near-linear performance scaling from 1 to 64 CS-2 systems using simple data parallelism.
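The near-linear scaling claim rests on standard data parallelism: each system trains on its own slice of a batch, and the gradients are averaged across systems before a single weight update. A minimal, hypothetical Python sketch (a toy one-parameter linear model, not Cerebras software) shows why the math works out, since the averaged shard gradients equal the full-batch gradient:

```python
# Toy illustration of data parallelism (assumed example, not Cerebras code):
# splitting a batch across workers and averaging their gradients reproduces
# the full-batch update, so extra systems add throughput, not extra steps.

def gradient(w, batch):
    """Mean-squared-error gradient for a 1-D linear model y = w * x."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def data_parallel_step(w, batch, n_workers, lr=0.01):
    """Shard the batch, compute per-shard gradients, average, update once."""
    shard = len(batch) // n_workers
    shards = [batch[i * shard:(i + 1) * shard] for i in range(n_workers)]
    grads = [gradient(w, s) for s in shards]  # run in parallel in practice
    avg_grad = sum(grads) / n_workers         # the "all-reduce" step
    return w - lr * avg_grad

batch = [(x, 3.0 * x) for x in range(1, 65)]           # 64 samples, true w = 3
w_single = 0.0 - 0.01 * gradient(0.0, batch)           # one worker, full batch
w_parallel = data_parallel_step(0.0, batch, n_workers=64)  # 64 equal shards
assert abs(w_single - w_parallel) < 1e-9  # identical update; the win is speed
```

The caveat in practice is the all-reduce: averaging gradients requires communication between systems, which is why the fabric bandwidth figure matters for keeping the scaling near-linear.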