
GPU titan Nvidia on Monday morning unveiled[1] what it calls "AI computing on your desktop": the DGX Station A100, which will be sold through a variety of partners and is expected to be available "this quarter," the company said.

The announcement comes at the start of SC20[2], the annual supercomputing conference usually held in San Diego, this year run as a virtual event because of the COVID-19 pandemic.

Nvidia calls the DGX Station A100 an "AI appliance you can place anywhere." The box, measuring 25 inches high, 10 inches across, and 20 inches deep, comes with four GPUs: either the existing 40-gigabyte A100 GPUs, or a newly unveiled 80-gigabyte version[3]. The base system weighs 91 lbs; fully outfitted, it tops out at 127 lbs. The total system has a maximum GPU memory of 320 gigabytes. More information is available in the spec sheet[4].

Nvidia touts the throughput of the 80-gigabyte version of the A100 for large workloads:

"The A100 80GB also enables training of the largest models with more parameters fitting within a single HGX-powered server such as GPT-2, a natural language processing model with superhuman generative text capability. This eliminates the need for data or model parallel architectures that can be time consuming to implement and slow to run across multiple nodes."
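The point about avoiding model parallelism comes down to simple arithmetic: a model fits on one GPU only if its parameters, gradients, and optimizer state fit in that GPU's memory. The sketch below is purely illustrative (not an Nvidia tool); it assumes a common mixed-precision Adam setup of roughly 16 bytes of training state per parameter, a figure that varies with the actual training recipe.

```python
# Illustrative back-of-the-envelope estimate: does a model's training
# state fit in a single GPU's memory? Assumes mixed-precision Adam:
# fp16 weights (2 B) + fp16 gradients (2 B) + fp32 optimizer state and
# master weights (~12 B), i.e. ~16 bytes per parameter. Activation
# memory is ignored for simplicity.
BYTES_PER_PARAM = 2 + 2 + 12  # weights + grads + optimizer state

def fits_on_gpu(num_params: float, gpu_gb: float,
                bytes_per_param: int = BYTES_PER_PARAM) -> bool:
    """Return True if the estimated training footprint fits in gpu_gb."""
    needed_gb = num_params * bytes_per_param / 1e9
    return needed_gb <= gpu_gb

# GPT-2 (full size) has ~1.5 billion parameters -> ~24 GB of state,
# which fits on either A100 variant without model parallelism.
print(fits_on_gpu(1.5e9, gpu_gb=40))   # True
print(fits_on_gpu(1.5e9, gpu_gb=80))   # True

# A hypothetical 10-billion-parameter model -> ~160 GB of state,
# which would force model parallelism even on the 80 GB A100.
print(fits_on_gpu(10e9, gpu_gb=80))    # False
```

Under these assumptions, the larger 80-gigabyte card roughly doubles the parameter count that can be trained on a single device before sharding the model across GPUs becomes unavoidable.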

The A100 chips, based on Nvidia's "Ampere" GPU architecture, were first unveiled in May[5] of this year. 

With today's announcement, the building of complete AI systems by chip makers is now officially a trend. Nvidia's box comes a year after startup Cerebras Systems unveiled a workstation-sized AI computer[6] containing its WSE chip, the world's largest computer chip. At the time, Cerebras made significant mention of

Read more from our friends at ZDNet