THE ART OF TAILORING THE SYSTEM TO THE APP


The applications and algorithms that make up AI come in various flavors, and different flavors lend themselves to different hardware. At the Stanford HPC Conference 2019, Brett Newman of Microway offered some general guidelines for tailoring the design of a system to the nature of the software.

Although Newman’s official title is vice president of marketing and customer engagement at Microway, a provider of HPC and now AI iron, he is part of the technical team that matches big iron to buyers.

The team, Newman said, is full of people who “spend days designing and delivering a large quantity, and more importantly, a great diversity, of HPC and AI systems.”

Over the years, Newman has been involved in hundreds of deployments of these systems, working with customers to determine what kind of hardware they should buy.

He also served a stint at IBM, where he worked on the Power Systems side, helping to develop Big Blue’s PowerAI machine learning pitch and to sell Power9 GPU-accelerated systems like those used in the Summit supercomputer at Oak Ridge National Laboratory.

In the context of Newman’s Stanford presentation, it is worth noting the pedigree of Microway, a 35-year-old HPC system integrator. The company jumped on the GPU bandwagon early, back in 2007, the same year Nvidia released the first version of the CUDA runtime and software development toolkit.

The AI market came along only later, but Microway quickly recognized that its GPU-based systems would benefit from that larger market. “We are not the only vendor out there,” Newman said. “But we are one of the better shops when it comes to AI hardware solutions.”

In his presentation at the Stanford conference, Newman talked about the key criteria AI customers should consider when choosing a system to run their workloads, and how to map those criteria onto hardware.

In doing so, he laid out a procedure for users to follow as they weigh the various options. Note that the options described here all center on GPUs supplied by Nvidia, which, as is well known, has become the dominant player in machine learning training.

Setting vendor choice aside for a moment, Newman believes there are two initial questions customers should ask when considering an AI system purchase: what does your AI workload’s dataset look like, and what computation do you need to run on it?

The nature of the dataset is key to GPU selection, especially with regard to memory capacity. Does the data fit into GPU memory, and if so, into how many GPUs? (The top-of-the-line Nvidia V100 GPU comes with either 16 GB or 32 GB of local HBM2 memory, so the options there are straightforward.) In most cases, the dataset will be larger than 32 GB, so it will have to be spread across the GPUs in a server, or even across a group of servers.
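As a back-of-envelope illustration of that first check (a sketch of my own, not a Microway tool; the 50 GB dataset size is an arbitrary assumption, while the 16 GB and 32 GB capacities are the V100 options cited above):

```python
# Rough fit check: does a dataset fit in a single GPU's memory, and if
# not, how many GPUs are needed just to hold it? The V100 capacities
# are the options cited in the article; the dataset size is illustrative.
import math

V100_MEMORY_GB = (16, 32)
DATASET_GB = 50

def gpus_to_hold(dataset_gb: float, gpu_mem_gb: float) -> int:
    """Minimum number of GPUs whose combined memory covers the dataset."""
    return math.ceil(dataset_gb / gpu_mem_gb)

for mem in V100_MEMORY_GB:
    n = gpus_to_hold(DATASET_GB, mem)
    verdict = "fits on one GPU" if n == 1 else f"needs {n} GPUs"
    print(f"{DATASET_GB} GB dataset with {mem} GB GPUs: {verdict}")
```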

This leads to the secondary consideration of how the data can be subdivided into logical chunks that can be loaded into a GPU or group of GPUs, which ultimately determines the scale of the AI model training setup. You should also consider the size of the individual data items (assuming you have a more or less uniform dataset) and how many of those items will fit into a GPU of a given memory capacity.

For example, if you have a 128 GB dataset made up of 8 GB images, you could use a four-GPU server, with each GPU loaded with four images. That would be an unusually small dataset, but you get the general idea.
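Here is a minimal sketch of that sizing arithmetic, assuming uniformly sized data items and counting only the raw data (no headroom for the model, activations, or framework overhead); the helper name plan_gpu_partition is hypothetical:

```python
# Partition planning sketch: how many whole items fit per GPU, and how
# many GPUs are needed to hold the whole dataset. Assumes uniform item
# sizes and ignores memory needed for the model itself.
import math

def plan_gpu_partition(dataset_gb: float, item_gb: float, gpu_mem_gb: float):
    items_total = math.ceil(dataset_gb / item_gb)   # items in the dataset
    items_per_gpu = int(gpu_mem_gb // item_gb)      # whole items per GPU
    if items_per_gpu == 0:
        raise ValueError("a single item does not fit in one GPU's memory")
    gpus_needed = math.ceil(items_total / items_per_gpu)
    return items_per_gpu, gpus_needed

# The example above: a 128 GB dataset of 8 GB images on 32 GB V100s.
per_gpu, gpus = plan_gpu_partition(128, 8, 32)
print(f"{per_gpu} images per GPU across {gpus} GPUs")  # 4 images per GPU across 4 GPUs
```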

For AI model training, Newman recommends using NVLink-enabled Nvidia GPUs. NVLink connects one GPU to another at speeds of up to 300 GB/sec (if you use Nvidia’s NVSwitch interconnect) and supports cache coherence and atomic operations between GPUs. For this work, NVLink delivers better price/performance than the 32 GB/sec PCI-Express 3.0 connection to the GPU.
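As a toy illustration of why the link matters, here is the best-case transfer time for a fixed payload at the peak rates quoted above; real transfers reach only a fraction of peak, and the 32 GB payload is an arbitrary assumption:

```python
# Peak-bandwidth comparison using the figures from the talk: NVLink at
# up to 300 GB/sec (with NVSwitch) versus PCI-Express 3.0 at about
# 32 GB/sec. This shows only best-case transfer time for one payload.
LINK_GB_PER_SEC = {"NVLink (via NVSwitch)": 300.0, "PCI-Express 3.0": 32.0}
PAYLOAD_GB = 32.0  # e.g. one full 32 GB V100's worth of data

for name, bandwidth in LINK_GB_PER_SEC.items():
    print(f"{name}: {PAYLOAD_GB / bandwidth:.2f} sec to move {PAYLOAD_GB} GB")
# NVLink (via NVSwitch): 0.11 sec to move 32.0 GB
# PCI-Express 3.0: 1.00 sec to move 32.0 GB
```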

Newman said that on multi-GPU servers with NVLink, you can expect a performance dividend of 10 to 20 percent compared to running the work over the slower PCI-Express links. Keep in mind that the NVLink V100 GPUs also provide 11 percent more raw computing capacity than their PCI-Express counterparts (125 versus 112 peak tensor teraflops, respectively), so it is hard to say how much of the boost comes from the faster interconnect alone.
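That 11 percent figure is just the ratio of the two peak ratings; a one-liner makes the arithmetic explicit:

```python
# NVLink V100 vs. PCI-Express V100 peak tensor throughput, per the talk.
nvlink_tflops, pcie_tflops = 125.0, 112.0
advantage = (nvlink_tflops / pcie_tflops - 1) * 100
print(f"NVLink V100 offers {advantage:.1f}% more raw tensor throughput")  # ~11.6%
```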
