Chip makers have been combining CPUs and GPUs on a single chip for more than a decade at this point. But we’re increasingly seeing companies add other dedicated features, and as the tech world looks toward AI as the next big thing, it’s unsurprising to see Intel putting a dedicated AI accelerator into its next-gen processors for mainstream PCs.

The company says its 14th-gen Intel Core chips based on “Meteor Lake” architecture will combine a CPU and GPU with a dedicated VPU (Vision Processing Unit) for hardware-accelerated AI.

Intel is no stranger to the VPU space. The company acquired computer vision company Movidius in 2016 and began cranking out products using Movidius technologies soon after.

Now the company says it’s incorporating a VPU based on Movidius’s third-gen architecture into Meteor Lake chips set to launch later this year. Intel says this will allow the chips to handle sustained AI workloads while consuming less power, freeing up CPU and graphics resources for other tasks. The new VPU block will be part of all of Intel’s 14th-gen chips, from processors for low-cost laptops to high-performance chips for gaming and workstation-class desktops.
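
Intel hasn’t said much yet about how software will reach the new VPU, but its OpenVINO toolkit is how developers target Movidius hardware today. The sketch below shows what device-targeted inference looks like with that toolkit, on the assumption that the Meteor Lake VPU simply shows up as another OpenVINO device; the “MYRIAD” device name, the preference order, and the model file here are illustrative, not confirmed details.

    # A minimal sketch of device-targeted inference with Intel's OpenVINO
    # toolkit. "MYRIAD" is the plugin name today's Movidius hardware uses;
    # the name the Meteor Lake VPU will use is an assumption at this point.
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    print(core.available_devices)            # e.g. ['CPU', 'GPU', 'MYRIAD']

    # Hypothetical preference order: use the low-power accelerator when
    # present, otherwise fall back to integrated graphics or the CPU.
    preferred = ["MYRIAD", "GPU", "CPU"]
    device = next(d for d in preferred if d in core.available_devices)

    model = core.read_model("model.xml")     # a model converted to OpenVINO IR
    compiled = core.compile_model(model, device)

    # Run one dummy inference (assumes the model has a static input shape).
    request = compiled.create_infer_request()
    dummy = np.zeros(list(compiled.input(0).shape), dtype=np.float32)
    result = request.infer({0: dummy})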

But it’s telling that Intel is calling this a VPU rather than an AI accelerator. Movidius’s expertise is in computer vision, and until recently that was one of the most important consumer-oriented applications for artificial intelligence. We’ve seen computer vision-based AI used to enhance photographs snapped on smartphones like Google’s Pixel phones. And we’ve seen it used to add features to video conferencing software, like Microsoft’s Windows Studio Effects, which enable automatic framing, background noise reduction, and a kind of creepy “eye contact” feature on devices like the Surface Pro 9.

Intel says its VPU will enable similar features, like improved background blurring and noise suppression in video calls… features that currently require ARM-based PCs like the Surface Pro 9.

But since the release of generative AI tools like ChatGPT, Bard, DALL-E, Midjourney, and Stable Diffusion, it’s become clear that AI isn’t just about photography. And it’s unclear whether the specific AI accelerator Intel is building into its next-gen PC chips will help with those sorts of applications.

via AnandTech

5 Comments

  1. So current AMD processors support AVX-512 acceleration, which Intel used to include until they nixed it in everything except their server-class processors.

    I sort of wonder: is this news all marketing hype (it is Intel, after all) re-introducing some sort of AVX-512 acceleration in their consumer chips? (A quick way to check which AVX-512 flags a given CPU actually reports is sketched after the comments.)

    I’ve only had the pleasure of experimenting with AVX2, and it did speed things up for me a bit versus non-AVX2. I’ve heard AVX-512 is a further improvement for AI inference performance.

    Beware of Intel marketing, however. Without further details it’s hard to say exactly what this is, or whether it can be easily incorporated into open-source solutions.

    I think someguy kind of hinted at what I’m talking about, so I’ll leave it at that. (P.S. I have long suspected that DRAM bandwidth has a greater impact on AI inference performance than the number of processor cores one has.)

    1. And yes, I know the discussion is around the “VPU”, but I sort of wonder if this is some kind of hybrid design where one could also use it for non-video applications, and that’s what I meant. I’m just curious whether they will reintroduce AVX-512 acceleration, because that would benefit running local instruct models.

      Personally, I have gotten bored with AI. Don’t know about anybody else.

      1. I still like to mess around with Stable Diffusion, and chatbots like CharacterAI would be funny to use if they weren’t censored to oblivion and back.

        On Meteor Lake and AVX-512, there are rumours about it
        https://twitter.com/OneRaichu/status/1651945951857344512
        but it’s also rumoured that the E-cores used in Meteor Lake won’t support those instructions. If both rumours are true, Meteor Lake would be similar to Alder Lake at launch, when AVX-512 wasn’t physically fused off and it was possible to use it after deactivating the E-cores.

  2. It would be nice if chip makers started quoting AI performance with a standard measurement, like TOPS or something, just to give us an idea of what ballpark to expect. (A rough back-of-envelope showing what a TOPS figure would actually tell you is sketched after the comments.)

    We often hear about an upcoming chip offering some AI features, but it’s always difficult to understand what that means for end users.

    I usually take it as meaning that the chip will offer a very entry-level amount of performance, just enough for small applications to handle very low-demand AI functions on the chip itself (rather than offloading that work to a server somewhere). For example, virtual-assistant services (Google Assistant, Alexa, etc.) might be able to do lightweight work on the system for faster responses.

    But that level of performance probably wouldn’t be interesting to people who tinker with AI software themselves. They likely want something at the level of an Nvidia 40-series GPU.

    In this specific example, it would be nice to know whether we should expect performance similar to a current-gen discrete GPU, or much lower. Given the prices of current-gen GPUs, it would be cool to hear that a 14th-gen Intel laptop CPU might offer a reasonable alternative.

  3. Well, given that everything noteworthy except Stable Diffusion, LLaMA, Whisper, and Mozilla TTS is a remotely hosted service, I really doubt this will help with any of that. If they expect Stable Diffusion to run on this thing, they’ll need to do what they did to get it running on their own graphics cards and give the VPU access to basically the entirety of system RAM.
    And that would be a lot of work for something everyone thinks you’re a cold-hearted misanthropic monster (at best, a smart cold-hearted misanthropic monster who’s accurately predicting how much worse everything is going to get) for using. So I think this is entirely going to be about blurring backgrounds in your video conferencing.
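
On the AVX-512 question in the first comment thread: there’s no way to know what Meteor Lake will report until it ships, but checking what a given CPU exposes today is easy. A quick sketch using the third-party py-cpuinfo package (flag names follow /proc/cpuinfo conventions):

    # Quick check of which AVX-512 extensions (if any) this machine's CPU
    # reports. Requires the third-party py-cpuinfo package:
    #   pip install py-cpuinfo
    import cpuinfo

    flags = cpuinfo.get_cpu_info()["flags"]
    avx512 = sorted(f for f in flags if f.startswith("avx512"))
    print(avx512 or "no AVX-512 extensions reported")
    # avx512_vnni is the extension most relevant to int8 AI inference.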
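
And on the second comment’s wish for TOPS figures: a published rating would at least allow back-of-envelope math along these lines. Every number below is an illustrative assumption, not an Intel spec:

    # Rough translation of a TOPS rating into image-classification throughput.
    # All numbers are illustrative assumptions, not Meteor Lake specs.
    tops = 10                # hypothetical accelerator rating (int8 TOPS)
    ops_per_image = 8e9      # ~ResNet-50: ~4 billion MACs, 2 ops per MAC
    utilization = 0.5        # sustained workloads rarely hit peak throughput

    fps = (tops * 1e12) * utilization / ops_per_image
    print(f"~{fps:.0f} images/sec")   # ~625/sec under these assumptions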