IBM’s new AI-friendly server adopts Nvidia’s NVLink for faster memory

Author
stalinx20
CLASSIFIED Member
  • Total Posts : 4656
  • Reward points : 0
  • Joined: 2009/01/03 08:56:23
  • Location: U.S., Indiana
  • Ribbons : 0
2016/09/08 16:29:49 (permalink)

GPUs are a proven way to speed up the time-consuming task of machine learning, a crucial element of the recent rapid expansion of AI across many industries. The result has been an explosively growing new market for GPU vendors Nvidia and AMD. IBM’s newly announced Power Systems S822LC aims to push machine learning performance even further — with two IBM POWER8 CPUs and four Nvidia Tesla P100 GPUs.

NVLink dramatically improves memory access over PCIe

Nvidia announced NVLink at last year’s GTC, and its Pascal-based GPUs are the first to support it. It is used both for communication between CPUs and GPUs, and between multiple GPUs. In raw data rates, Nvidia says it is 5 to 12 times faster than PCIe Gen 3 interconnects — yielding as much as a doubling in real-world performance for data-intensive GPU applications.
 
As part of the announcement, IBM cited raw interconnect performance improvements from 16 GB/s over PCIe to 40 GB/s using NVLink. IBM has made a huge investment in what it calls cognitive computing, so it makes perfect sense that it would implement a version of its POWER8 processor with the highest-performance interconnect possible. IBM says some of the early units will ship to high-profile customers, including Oak Ridge National Laboratory and Lawrence Livermore National Laboratory. The systems will be test beds in preparation for IBM’s Summit and Sierra supercomputers due in 2017.
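To put those interconnect figures in perspective, here is a back-of-envelope sketch comparing idealized transfer times at the 16 GB/s (PCIe) and 40 GB/s (NVLink) rates IBM cited. The 32 GB payload size is a hypothetical example chosen for illustration, not a number from the announcement, and the calculation ignores latency and protocol overhead:

```python
# Bandwidth figures cited by IBM in the announcement (GB/s)
PCIE_GBPS = 16.0
NVLINK_GBPS = 40.0

def transfer_seconds(payload_gb: float, bandwidth_gbps: float) -> float:
    """Idealized host-to-GPU transfer time, ignoring latency and overhead."""
    return payload_gb / bandwidth_gbps

payload = 32.0  # GB of data to move to the GPUs (hypothetical)
print(transfer_seconds(payload, PCIE_GBPS))    # 2.0 s over PCIe
print(transfer_seconds(payload, NVLINK_GBPS))  # 0.8 s over NVLink
print(NVLINK_GBPS / PCIE_GBPS)                 # 2.5x raw speedup
```

For workloads that repeatedly stream training data between host memory and the GPUs, that 2.5x raw-bandwidth gap is the headroom behind the "doubling in real-world performance" claim above.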


http://www.extremetech.com/extreme/235213-ibms-new-ai-friendly-server-adopts-nvidias-nvlink-for-faster-memory
