Tensor Memory Compression: NVCache is interesting, but Tensor Memory Compression is also expected on Ampere, and will reportedly use Tensor Cores to both compress and decompress data stored in VRAM. This could mean a 20-40% reduction in VRAM usage, or, alternatively, next-gen games using higher-resolution textures with Tensor Memory Compression offsetting the extra 20-40% of VRAM footprint.
Read more: https://www.tweaktown.com...e-24gb-vram/index.html
Memory-wise, Nvidia appears to be focusing on improved memory compression to deliver increased effective memory bandwidth with Ampere. This would allow Nvidia to boost memory performance without significantly increasing the VRAM capacities, and therefore the build costs, of its next-generation graphics cards. A new technology called Tensor Accelerated VRAM Compression is also said to be in the works.
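The core idea behind any such scheme is storing data at rest in a compressed form and decompressing it on access. The rumoured hardware details (Tensor Core acceleration, the actual codec) are unknown, but the principle can be sketched in plain Python with zlib standing in for whatever compressor Nvidia might use:

```python
import zlib

# Conceptual illustration only: Tensor Memory Compression is rumoured to run
# on Tensor Cores in hardware; here CPU-side zlib stands in to show how
# compressing data at rest shrinks its memory footprint.

# Fake "texture" data: smooth gradients compress well, like many real assets.
texture = bytes((x * 7 + y) % 256 for y in range(256) for x in range(256))

compressed = zlib.compress(texture, level=6)
ratio = 1 - len(compressed) / len(texture)
print(f"raw: {len(texture)} bytes, compressed: {len(compressed)} bytes "
      f"({ratio:.0%} smaller)")

# Decompression must be lossless: the GPU needs the exact original bytes back.
assert zlib.decompress(compressed) == texture
```

The trade-off, as with any transparent compression, is spending compute (here, the rumoured Tensor Core work) to save capacity and bandwidth; how well it pays off depends entirely on how compressible the data in VRAM actually is.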
Nvidia also plans to create a technology called DLSS 3.0, which is designed to be usable in any game that implements TAA. TAA can be implemented in several ways, so at this time it is unclear exactly how DLSS 3.0 will hook into it, or what level of graphical quality gamers can expect. The technology is rumoured to double the performance of DLSS 2.0 while delivering even better image quality.
Another feature that's reportedly coming to Ampere is NVCache, a new technology designed to allow Ampere graphics cards to better utilise data in system memory and storage to speed up memory-constrained workloads. In effect, Nvidia has created an alternative to AMD's HBCC (High Bandwidth Cache Controller), which allows AMD GPUs to use system memory or fast storage as additional VRAM, overcoming the limitations of frame buffer sizes. That is something Nvidia hopes to replicate with Ampere.
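Both NVCache and HBCC boil down to tiered memory: keep hot data in the small, fast pool (VRAM) and spill colder data into a larger, slower one (system RAM or SSD), paging it back on demand. Nvidia has published no NVCache API, so all names below are illustrative; this is just a minimal sketch of that tiering idea:

```python
from collections import OrderedDict

# Hypothetical sketch of the tiered-memory idea behind NVCache/HBCC: a small,
# fast tier (standing in for VRAM) spills least-recently-used entries into a
# larger, slower tier (standing in for system RAM or SSD storage).

class TieredBuffer:
    def __init__(self, vram_capacity):
        self.vram = OrderedDict()          # fast tier, limited capacity
        self.system_memory = {}            # slow tier, effectively unlimited
        self.vram_capacity = vram_capacity

    def put(self, key, data):
        self.vram[key] = data
        self.vram.move_to_end(key)         # newest entry is most recently used
        while len(self.vram) > self.vram_capacity:
            old_key, old_data = self.vram.popitem(last=False)
            self.system_memory[old_key] = old_data  # spill LRU to slow tier

    def get(self, key):
        if key in self.vram:
            self.vram.move_to_end(key)     # mark as recently used
            return self.vram[key]
        data = self.system_memory.pop(key) # "page in" from the slow tier
        self.put(key, data)                # may evict something else
        return data

buf = TieredBuffer(vram_capacity=2)
buf.put("tex_a", b"...")
buf.put("tex_b", b"...")
buf.put("tex_c", b"...")                   # tex_a spills to system memory
print(sorted(buf.vram), sorted(buf.system_memory))
# → ['tex_b', 'tex_c'] ['tex_a']
buf.get("tex_a")                           # tex_a pages back in, tex_b spills
```

The real engineering question, and the reason frame-buffer spillover is hard in practice, is that the slow tier is orders of magnitude slower than GDDR, so the eviction policy has to be good enough that the GPU rarely stalls waiting for a page-in.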
And then there is the question of the rumoured traversal co-processor, about which nothing concrete is known yet.