Cisco’s UCS business unit has announced a new server designed for AI and machine learning use cases.
The UCS C480 ML M5 is a 4U rackmount server with eight NVIDIA Tesla V100 32GB GPUs.
These GPUs are connected via NVIDIA’s NVLink interconnect instead of a traditional PCIe bus. A Cisco blog post on the new server says NVLink provides ten times the bandwidth of PCIe.
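That ten-times figure is roughly consistent with public spec sheets. As a sanity check, here is a back-of-the-envelope calculation; the per-link NVLink and PCIe numbers below are generally published figures for NVLink 2.0 and PCIe 3.0, not taken from Cisco’s post:

```python
# Rough check of the "10x" claim using public spec figures (assumptions,
# not Cisco's numbers): NVLink 2.0 on the Tesla V100 offers 6 links at
# 50 GB/s each (bidirectional), while a PCIe 3.0 x16 slot offers roughly
# 32 GB/s bidirectional.
nvlink_links = 6
nvlink_gbps_per_link = 50  # GB/s per link, bidirectional
nvlink_total = nvlink_links * nvlink_gbps_per_link  # 300 GB/s per GPU

pcie3_x16 = 32  # GB/s, bidirectional, approximate

ratio = nvlink_total / pcie3_x16
print(f"NVLink total: {nvlink_total} GB/s; "
      f"roughly {ratio:.1f}x PCIe 3.0 x16")
```

At about 9.4x, the aggregate NVLink bandwidth per GPU is in the ballpark of the ten-times claim.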
The C480 ML supports up to 24 SAS/SATA SSDs for a total of 182TB of storage; six of the bays can also take NVMe drives. Other hardware specs include two Intel Xeon processors with up to 28 cores each, up to 3TB of memory, and four 100G network ports.
Cisco makes a big deal of the box’s airflow design, which includes “height differential” heat sinks that cool the front four and rear four GPUs simultaneously, so all eight processors can run at full throttle at the same time.
According to Cisco, the C480 ML can run in the same rack as other UCS and HyperFlex models.
The company is partnering with Cloudera, Hortonworks, and others to validate machine-learning software on the new UCS model. Cisco says it will also work with partners to provide professional services for companies that want to undertake data science initiatives.
Cisco isn’t the only server vendor packing eight GPUs into one box. HPE’s Apollo 6500 Gen10 server also sports up to eight NVIDIA Tesla V100s and uses the NVLink interconnect. Other server manufacturers, including Huawei, Cray, and Supermicro, also support up to eight Tesla V100s, according to NVIDIA.
Cisco’s C480 ML can be managed via Cisco’s cloud-based Intersight management platform or UCS Manager. Customers can purchase the new server from partners starting in Q4 of 2018.