NVIDIA may be best known to the general public (or at least the geeky set of the general public) for its PC graphics cards used for gaming, content creation, and cryptocurrency mining, among other things. But the first products based on the company’s new Ampere technology will be high-performance (and high-priced) solutions aimed at data centers, autonomous vehicles, and related applications.

It’ll be a while before you see Ampere-based graphics solutions for personal computers.


NVIDIA is kicking things off with the NVIDIA A100 data center GPU. NVIDIA says the A100 GPU is designed for “data analytics, scientific computing, and cloud graphics.”

According to NVIDIA, the A100 GPU offers up to a 20X improvement in AI applications when compared with the previous-gen V100 GPU based on the Volta architecture. It’s a 7nm chip with 54 billion transistors, third-generation Tensor Cores, and support for multi-instance GPU (MIG) technology, which allows a single A100 GPU to be partitioned into as many as seven independent GPU instances.
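For a sense of how MIG partitioning works in practice, it’s driven through NVIDIA’s `nvidia-smi` tool. The sketch below is illustrative rather than definitive: the available instance profiles and their IDs depend on the driver version and the specific A100 SKU (profile 19, `1g.5gb`, is the smallest slice on a 40GB A100), so treat the exact numbers as assumptions.

```shell
# Enable MIG mode on GPU 0 (requires root; takes effect after a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this particular GPU supports
sudo nvidia-smi mig -lgip

# Create seven of the smallest GPU instances (profile 19 = 1g.5gb on a
# 40GB A100) and a default compute instance on each (-C)
sudo nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C

# Each MIG instance now shows up as its own device
nvidia-smi -L
```

Once created, each instance has its own dedicated slice of memory and compute, so separate workloads (or separate users in a cloud setting) can share one physical A100 with hardware-level isolation.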

There are several related products. The NVIDIA DGX A100 is an AI system delivering 5 petaflops of performance, which NVIDIA expects to be used for things like biomedical research, computer vision, speech recognition, and other machine learning/AI applications.


The NVIDIA EGX A100 Edge AI platform, meanwhile, is a smaller, lower-power, lower-cost system that can bring real-time AI processing to local hardware.


NVIDIA is also announcing the availability of its previously unveiled developer kit, the Jetson Xavier NX. Interestingly, this $399 mini computer is powered by a removable system-on-a-module with a GPU based on NVIDIA’s previous-gen Volta technology rather than Ampere.

NVIDIA Jetson Xavier NX

The Jetson Xavier NX module has a 6-core NVIDIA Carmel ARMv8 CPU, a 384-core NVIDIA Volta GPU with 48 tensor cores, 8GB of RAM, 16GB of eMMC storage, and support for 21 TOPS of AI performance.


3 replies on “NVIDIA launches Ampere GPU architecture (coming first to data centers)”

  1. Impressive.
    It looks like Nvidia is doubling down on actual real-time ray tracing, since heaps of transistors are dedicated to it. Remember how the RTX 2060 was a joke with “RTX On” compared to the RTX 2080 Ti (which still tanked framerates)? Well, the RTX 2080 Ti and its ability to do “RTX On” are going to feel like a toddler compared to the RTX 3000-series.

    However, there are impressive performance gains for regular (rasterised) graphics too.
    It looks like AMD’s flagship RX 5700 XT, which is on the same 7nm node, is going to be competing against Nvidia’s low-end RTX 3060.

    That’s just how it looks now, but we’ll get a proper idea once trusted reviewers get their hands on some units to test.

  2. The one big thing that bothers me about all this neural network stuff is how running the programs on consumer hardware is (usually) such a pain in the butt, if the people who wrote the software are willing to let you run them at all.
    It’s a matter of a power difference. Not computing power, but rather how you can’t legally “prove” a video was faked, because you don’t have permission to download the terabytes of real and faked videos needed to sort that out yourself (since I would imagine judges will only accept expert testimony or computer proof of something like that).

Comments are closed.