The Nervana ‘Neural Network Processor’ uses a parallel, clustered computing approach and is laid out much like a conventional GPU. It carries 32 GB of HBM2 memory split across four 8 GB stacks, all connected to 12 processing clusters that contain further cores (the exact core count is unknown at this point). Total memory access speeds combine to a whopping 8 terabits per second. An interposer has been used to full effect, and Intel’s homegrown interconnect seals the deal.
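A quick back-of-the-envelope check puts that bandwidth figure in context. This sketch assumes (the article does not say so explicitly) that the 8 Tb/s number is the aggregate across all four HBM2 stacks:

```python
# Back-of-the-envelope check of the quoted memory bandwidth.
# Assumption (not stated in the article): 8 Tb/s is the aggregate
# figure across all four HBM2 stacks.
TOTAL_TBPS = 8   # quoted aggregate bandwidth, terabits per second
STACKS = 4       # four 8 GB HBM2 stacks

total_gb_per_s = TOTAL_TBPS * 1000 / 8       # terabits/s -> gigabytes/s
per_stack_gb_per_s = total_gb_per_s / STACKS

print(f"aggregate: {total_gb_per_s:.0f} GB/s")      # 1000 GB/s
print(f"per stack: {per_stack_gb_per_s:.0f} GB/s")  # 250 GB/s
```

Roughly 250 GB/s per stack lands in the same ballpark as first-generation HBM2, which makes the headline number plausible rather than exotic.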
Unfortunately, apart from the pictures of the complete board, the only information we have is that the card uses two 8-pin power connectors, as can be seen in the picture.
Intel is claiming big numbers with the Nervana AI chip and has also revealed that it will be highly scalable, something their CEO, Brian Krzanich, has already described as the path forward for AI learning. The chip will feature 12 bidirectional high-bandwidth links for seamless data transfer between chips. These proprietary inter-chip links will provide up to 20 times the bandwidth of PCI Express links.
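To get a feel for the scale of that "20x PCIe" claim, here is a rough sketch. The baseline is an assumption on our part (the article does not name a PCIe generation or lane count): a PCIe 3.0 x16 link at roughly 15.75 GB/s of usable bandwidth:

```python
# Rough scale of the "20x PCIe" claim.
# Assumption (not stated in the article): the baseline is a
# PCIe 3.0 x16 link at about 15.75 GB/s usable bandwidth.
PCIE3_X16_GB_S = 15.75   # usable bandwidth of PCIe 3.0 x16
CLAIMED_FACTOR = 20      # Intel's claimed speedup over PCIe

implied_link_gb_s = PCIE3_X16_GB_S * CLAIMED_FACTOR
print(f"implied per-link bandwidth: {implied_link_gb_s:.0f} GB/s")  # 315 GB/s
```

That puts each inter-chip link in the hundreds-of-GB/s range, which is why scaling across multiple chips is the headline pitch rather than single-chip performance.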
https://wccftech.com/inte...mplete-board-pictured/ -------------------
The more you look at it, the more it resembles an Nvidia GPU with its NVLink-style interconnect system.