This month AMD introduced a new set of Ryzen multi-core desktop processors ranging from the 16-core Ryzen 9 3950X to the 32-core Ryzen Threadripper 3970X. They’re available for purchase starting today… and AMD is also confirming a new high-end chip coming next year.
The upcoming AMD Ryzen Threadripper 3990X processor features 64 CPU cores and 128 threads.
AMD hasn’t mentioned the price or exact release date yet. But odds are that it won’t be cheap. The only target market the chip maker calls out in its press release is “Hollywood creators” looking for a chip designed to handle “highly scalable workloads of a VFX pro.”
According to a leaked product slide shared to Chinese social media, the Ryzen Threadripper 3990X is expected to be a 280-watt processor with 288MB of total cache. Interestingly, that’s the same TDP as AMD’s 24-core and 32-core Threadripper chips, despite having twice the core count.
That probably means the individual CPU cores will run at lower clock speeds. But the sheer number of cores could help… VFX pros, I guess. Most users would be hard-pressed to find tasks that require more than 32 cores… but I’m sure it’s just a matter of time before developers come up with applications that can leverage all the resources of AMD’s new high-end processors.
via VideoCardz
Even for highly parallel jobs, like 3D video creation, the bottleneck quickly becomes how fast the memory subsystem can feed that many cores. And sure enough, look at that insane cache. The next step will be gluing a 16GB HBM memory module on top to try to feed these beasts. But for a lot of jobs we are already past the scalability point where adding more cores slows the program down, as cross-core communication begins to chew up so much time that more cores literally make it run slower. For every job there exists a magic number of maximum cores, beyond which adding another doesn’t just yield a vanishingly small gain, it actually slows the thing down. Careful optimization and hand tuning can push that number up a little, but it is always looming as a finite limit.
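The “magic number” effect described above can be sketched with Gunther’s Universal Scalability Law, which models speedup as cores minus contention and cross-core coordination costs. The sigma and kappa coefficients below are made-up illustrative values, not measurements of any real workload:

```python
def speedup(n, sigma=0.05, kappa=0.001):
    """USL model: throughput gain on n cores relative to 1 core.

    sigma: contention penalty (serialized work, shared locks)
    kappa: crosstalk penalty (pairwise cross-core coordination)
    Both values here are hypothetical, chosen only to illustrate the curve.
    """
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

# Find the core count where adding one more core stops helping.
best_n = max(range(1, 129), key=speedup)
print(best_n, round(speedup(best_n), 2))

# With these coefficients the curve peaks near 30 cores, so on this
# hypothetical workload a 64-core part would actually run slower.
print(speedup(64) < speedup(best_n))
```

The kappa term grows with n squared, which is why the curve eventually bends downward instead of merely flattening: past the peak, every extra core adds more coordination cost than compute.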
Once we hit that limit for most common tasks, they will need to find another way to keep pushing performance up. The clock-speed wars hit a limit, IPC is hitting a limit as speculative tricks leak information through the cache, and adding cores will max out. It will be fun to see what comes next.
Clearly, while your work is compiling (or rendering, or processing) you should be writing your report on what the outputs will be, so that your own efficiency rises?
Just kidding. There has to be some way to have swordfights while compiling.