Intel’s Arc A750 and A770 desktop graphics cards are designed to offer performance comparable to what you’d get from an NVIDIA GeForce RTX 3060 GPU at a more competitive price. And early reviews suggested they largely deliver on that promise… for newer games. But they struggled with some older titles.

Now Intel has released a driver update that the company says should bring big performance improvements for older games.

The problem, in a nutshell, stems from the fact that the initial graphics drivers were designed for games using the Vulkan and DirectX 12 APIs. But since they weren’t optimized for older APIs like DX9, performance was underwhelming for some older, but still popular games like Counter-Strike: Global Offensive and League of Legends.

Intel says that starting with its v3959 graphics driver, performance for DX9 titles should be much, much better. For example, frame rates in CS:GO are twice as high with the new driver. And while other games saw somewhat more modest improvements, Intel measured significant boosts in frame rates for titles including Stellaris, Starcraft 2, Guild Wars 2, and Payday 2 when playing at 1080p or 1440p resolutions.

The company says the improvements come from “a combination of API techniques.” One of those appears to be DXVK, an open source translation layer that Valve developed as part of its Proton software, which allows Windows games to run on Linux PCs. In this case, Intel appears to be using the translation layer to let DX9 games use the Vulkan API.

In other words, Intel’s graphics cards still don’t have native support for DirectX 9. But they don’t need it in order to get better performance while running DX9 games.
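For readers curious how a translation layer like this is normally wired in: on Windows, DXVK is typically deployed by placing its d3d9.dll next to a game’s executable, so the game loads that DLL and its DirectX 9 calls get translated to Vulkan. Here’s a minimal sketch with purely illustrative paths and a stand-in file (Intel bundles the translation inside its driver, so Arc owners don’t actually need to do any of this):

```shell
# Sketch: how DXVK's DX9-to-Vulkan layer is normally dropped into a game folder.
# All paths below are illustrative stand-ins, not real installs.
GAME_DIR="./my_dx9_game"    # hypothetical DX9 game install folder
DXVK_DIR="./dxvk-x64"       # hypothetical extracted DXVK release (64-bit DLLs)

mkdir -p "$GAME_DIR" "$DXVK_DIR"
touch "$DXVK_DIR/d3d9.dll"  # stand-in for the real DXVK d3d9.dll

# Back up any existing d3d9.dll, then put DXVK's copy in its place;
# the game will then load this DLL instead of Microsoft's native one.
if [ -f "$GAME_DIR/d3d9.dll" ]; then
    mv "$GAME_DIR/d3d9.dll" "$GAME_DIR/d3d9.dll.bak"
fi
cp "$DXVK_DIR/d3d9.dll" "$GAME_DIR/d3d9.dll"
```

Uninstalling is just as simple: delete the copied DLL (and restore the backup, if one was made) and the game falls back to the system’s native DirectX 9.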

Intel says the new graphics driver will also bring improvements to lower-cost, lower-performance products like the Arc A380 GPU, which began shipping earlier this year. But the company hasn’t spelled out what kind of performance gains to expect from that entry-level solution.

via GamingOnLinux 


3 Comments


  1. Nice that they’ve got some backwards compatibility hammered down. I’m seeing a lot of comments on the internet saying that the Arc is not yet ready for VR, though. Seems like the kinda thing you’d want to do first, to me. Soon enough I guess.

    1. I’m guessing they went for the low-hanging fruit first. High DX9 performance is something everyone would take for granted on a recent GPU, so it was just plain embarrassing for Intel not to at least have that sorted out to begin with. VR is far more complex, especially if Arc GPUs are just barely as good as Nvidia and AMD’s midrange cards.

    2. I dunno.
      Hindsight is 20/20 and the way Intel has moved has always been two steps too short, two hours too late.

      Hypothetically, think of an alternative timeline. Intel is the first in the industry with 16nm silicon in 2014. They take what was great about the x7-Z8750 chipset and what was great about the i7-6567U and combine them. So in 2015, we have an x86 version of big.LITTLE: 2x Atom cores that are efficient, optimised, and just fast enough to run Windows 10 and background processes, leaving the 2x Skylake cores to run unfettered on more demanding tasks. And as Intel improves its 16nm node into its 14nm++ node, it can double that to a 4+4 design in 2017. This transitions over to its desktop processors as well, with 8x small cores (Atom) healthily running the background stuff while 8x big cores (Core-i) run free. And it gives them a launchpad to move in this direction early, putting pressure on AMD.

      Secondly, Intel moves towards GPUs even earlier. The current Xe dGPU we have would have been competitive in 2018, it would have been decent in 2019, and with the 2020-2022 problems it would’ve been a sellout. They missed that boat. And even more crucially, had they been early, their business could have pivoted. Their troubled fabs could have been used for midrange applications like dGPUs, radios, motherboards, interconnects, and low-end CPUs, while they outsourced to the likes of TSMC for high-end stuff like CPUs and APUs.

      This would’ve kept them competitive against Nvidia, AMD, Apple, and the looming threat of ARM. But as they say, hindsight is 20/20.