

There's little debate that graphics processing unit (GPU) maker NVIDIA is the de facto standard when it comes to providing the silicon that powers machine learning (ML) and artificial intelligence (AI) based systems. As important as Intel was to general-purpose computing, NVIDIA is the same to accelerated computing. Its GPUs can be found in everything from big data-center systems to cars to desktop video devices, and even consumer endpoints.
NVIDIA is best known for GPUs but also makes systems
A growing part of NVIDIA's business is its systems group, which builds fully functioning, turnkey servers and desktop PCs for accelerated computing. An example is the NVIDIA DGX server line, a set of engineered systems built specifically for the rigors of AI/ML. This week at the virtual Supercomputing conference, NVIDIA announced the latest member of its DGX family, the DGX A100 Station.
This "workstation" is a beast of a computer and features four of the recently announced A100 GPUs. These GPUs were designed for data centers and come with either 40 GB or 80 GB of GPU memory, giving the workstation up to 320 GB of GPU memory for data scientists to infer, learn and analyze with. The DGX A100 Station delivers a whopping 2.5 petaflops of AI performance and uses NVIDIA's NVLink as the high-performance backbone connecting the GPUs with no inter-chip latency, effectively creating one big GPU.
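For readers who want to see how that pooled memory shows up to a data scientist, here is a minimal sketch, assuming a framework such as PyTorch with CUDA support is installed on the machine (an assumption, not something NVIDIA specified in the announcement). It simply enumerates the visible GPUs and sums their memory; on a four-GPU station the total should land near 160 GB or 320 GB depending on the 40 GB or 80 GB configuration.

```python
# Sketch: enumerate visible GPUs and total their memory with PyTorch.
# Assumes PyTorch with CUDA support is installed on the workstation.
import torch

if torch.cuda.is_available():
    total_bytes = 0
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB")
        total_bytes += props.total_memory
    print(f"Aggregate GPU memory: {total_bytes / 1e9:.0f} GB")
else:
    print("No CUDA-capable GPU visible to this process")
```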
MIG allows workgroups to leverage a single system
I put the term "workstation" in quotes because it is really a workstation in form factor only; even at 2.5 petaflops compared with the 5 petaflops of the DGX A100 server, it is still a beast of a machine. The benefit of the DGX Station is that it brings AI/ML out of the data center and lets workgroups plug it in and run it anywhere. The workstation is the only workgroup server I'm aware of that supports NVIDIA's Multi-Instance GPU (MIG) technology. With MIG, the GPUs in the A100 can be virtualized; each A100 can be partitioned into as many as seven instances, so a single workstation can provide 28 GPU instances to run parallel jobs and support multiple users without impacting system performance.
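To make that concrete, here is a minimal sketch of how one user or job might be pointed at a single MIG slice. With MIG enabled, each slice appears as its own device with a "MIG-..." identifier; the UUID below is a placeholder, not a real value, and in practice would be copied from the output of `nvidia-smi -L` on the machine.

```python
# Sketch: target one MIG slice so a job only sees its own share of the GPUs.
# The UUID below is a hypothetical placeholder; real MIG UUIDs come from
# `nvidia-smi -L` once MIG has been enabled on the A100s.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# Import after setting the environment variable so the CUDA runtime honors it.
import torch

# The job now sees exactly one device: its MIG slice, isolated from the
# other 27 instances other users or jobs may be running on.
print(torch.cuda.device_count())
```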
As mentioned previously, the workstation form factor makes the A100 Station ideal for workgroups, and it can be procured directly by the lines of business. Juxtapose this with the DGX A100 server, which is deployed in a data center and typically purchased and managed by the IT organization. Most line-of-business people, such as data scientists, don't have the technical acumen or even the data-center access to buy a server, rack and stack it, connect it to the network and do the IT things that need to be done to keep it running.
A100 Station is designed for simplicity
The A100 Station looks like a big PC. It sits upright on or under a desk and simply requires the user to plug in the power cord and network. The simple design makes it ideal for agile data science teams that work in a lab, a traditional office or even at home. The DGX Station was designed for simplicity and doesn't require any IT support or advanced technical skills. My first job out of college was working with a group of data scientists as an IT person, and I can attest to the importance of simplicity with that audience.
Without something like the A100 that was purpose-built for accelerated computing, workgroups would be forced to buy CPU-based desktop servers, which are severely underpowered for this type of use case. Sure, the average Intel-based workgroup server can run Word and Google Docs, but it can take months to run AI-based analytic models. With GPU-powered systems, what took months can often be done in just a few hours or even minutes.
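The difference comes down to where the heavy matrix math runs. As a minimal sketch, the same model code can run on CPU cores or on an A100; the only change is the device it is placed on, which is what the hours-versus-months gap above reflects. PyTorch is assumed here purely for illustration.

```python
# Sketch: identical code runs on CPU or GPU; only the device changes.
# A large batched matrix multiply stands in for the core operation behind
# most deep-learning training and inference workloads.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b

print(f"Ran on {device}; result shape {tuple(c.shape)}")
```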
Although NVIDIA didn't announce a price for the DGX A100 Station, I'm guessing it's approaching six figures, and that might seem high for a workstation. But considering the compensation level of data scientists, keeping them working instead of sitting around waiting for models to run on CPU systems makes that cost a bargain. If one factors in the lost opportunity costs of not having an AI/ML-optimized system, the Station is a no-brainer for workgroups that need this kind of compute power.
Some companies might turn all AI infrastructure over to the IT organization, and that's a perfectly fine model. Those companies will likely leverage one of the server form factors.
For those that leave infrastructure decisions and purchasing within the lines of business, the DGX A100 Station is ideally suited. GPU power at the desk might sound a bit sci-fi-ish, but NVIDIA announced it this week.
Zeus Kerravala is a regular eWEEK contributor and the founder and principal analyst with ZK Research. He spent 10 years at Yankee Group and prior to that held a number of corporate IT positions.