The Google Pixel 2 and Pixel 2 XL have some of the best cameras available on a smartphone. They’re so good that they let a mediocre photographer like me take some pretty stunning shots. Part of that is due to the high-quality image sensors, and part comes from Google’s software, which helps with everything from image stabilization to HDR processing.

But it turns out the cameras have a previously undisclosed ace up their sleeves: a custom co-processor called Pixel Visual Core.

It’s the first custom co-processor Google has designed for a consumer product, and right now it’s exclusively available in Pixel 2 phones. At launch, it helps enhance photos taken using the stock camera app. But Google plans to let third-party applications make use of Pixel Visual Core soon.

The co-processor features 8 custom cores capable of performing 3 trillion operations per second. Google says that lets the Pixel 2 process HDR+ photos 5 times faster than a device without the chip, while using less than a tenth of the energy.
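
For a sense of what that parallelism buys: HDR+ works by merging a burst of frames into a single low-noise shot, a workload that splits naturally across cores. Purely as an illustration, here is a deliberately naive Kotlin sketch that averages pre-aligned grayscale frames across eight worker threads; Google’s actual pipeline (alignment, per-tile merging, tone mapping) is far more sophisticated and runs on the co-processor itself, and every name below is invented.

```kotlin
import java.util.concurrent.Executors

// Toy stand-in for burst merging: average pre-aligned grayscale frames.
// Eight worker threads mirror the chip's eight cores, but this is only a
// conceptual sketch of the parallelism, not how the IPU is programmed.
fun mergeBurst(frames: List<IntArray>, width: Int, height: Int): IntArray {
    require(frames.isNotEmpty()) { "need at least one frame" }
    val out = IntArray(width * height)
    val workers = 8
    val pool = Executors.newFixedThreadPool(workers)
    val rowsPerWorker = (height + workers - 1) / workers
    val tasks = (0 until workers).map { w ->
        pool.submit {
            val startRow = w * rowsPerWorker
            val endRow = minOf(startRow + rowsPerWorker, height)
            for (y in startRow until endRow) {
                for (x in 0 until width) {
                    val i = y * width + x
                    var sum = 0L
                    for (frame in frames) sum += frame[i] // accumulate across the burst
                    out[i] = (sum / frames.size).toInt()  // averaging suppresses noise
                }
            }
        }
    }
    tasks.forEach { it.get() } // wait for all stripes to finish
    pool.shutdown()
    return out
}
```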

Google plans to launch a developer preview of Android 8.1 Oreo “in the coming weeks,” which will enable developer support for Pixel Visual Core. Eventually, the company says, any third-party app that uses the Android Camera API will be able to take advantage of Pixel Visual Core to offer the same HDR+ features as the stock camera app.
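
The article doesn’t say what that developer support will look like, but third-party camera apps already drive the camera through android.hardware.camera2, so presumably the HDR+ acceleration would kick in behind an ordinary capture request. Here’s a rough sketch of such a request in Kotlin (callback wiring trimmed, error handling omitted, and nothing in it is Pixel-specific):

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CameraManager
import android.os.Handler
import android.view.Surface

// Sketch of an ordinary Camera2 still capture. If Pixel Visual Core is
// exposed through the standard Camera API as described, HDR+ acceleration
// would presumably be applied by the platform behind a request like this,
// with no chip-specific code in the app. Requires the CAMERA permission.
fun captureStill(context: Context, surface: Surface, handler: Handler) {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    val cameraId = manager.cameraIdList.first() // assume the first camera

    manager.openCamera(cameraId, object : CameraDevice.StateCallback() {
        override fun onOpened(device: CameraDevice) {
            device.createCaptureSession(listOf(surface),
                object : CameraCaptureSession.StateCallback() {
                    override fun onConfigured(session: CameraCaptureSession) {
                        val request = device
                            .createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE)
                            .apply { addTarget(surface) }
                        session.capture(request.build(), null, handler)
                    }
                    override fun onConfigureFailed(session: CameraCaptureSession) {}
                }, handler)
        }
        override fun onDisconnected(device: CameraDevice) = device.close()
        override fun onError(device: CameraDevice, error: Int) = device.close()
    }, handler)
}
```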

But since the chip is programmable, it’s not limited to processing HDR+ photos. Google says more applications that take advantage of the coprocessor are on the way, including “machine learning and imaging applications.”
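
Those “machine learning and imaging applications” largely boil down to stencil operations like 2D convolution, the workhorse of both image filters and convolutional neural networks, which is exactly what an image processing unit is built for. For illustration only, here’s that operation written in plain Kotlin; on Pixel Visual Core an equivalent kernel would be expressed in a higher-level language and compiled for the chip rather than hand-written like this:

```kotlin
// Illustration only: a single-channel 2D convolution, the stencil operation
// at the heart of both image filtering and convolutional neural networks.
fun convolve2d(
    input: FloatArray, width: Int, height: Int,
    kernel: FloatArray, kSize: Int // kSize is odd, e.g. 3 for a 3x3 kernel
): FloatArray {
    val half = kSize / 2
    val out = FloatArray(width * height)
    for (y in 0 until height) {
        for (x in 0 until width) {
            var acc = 0f
            for (ky in 0 until kSize) {
                for (kx in 0 until kSize) {
                    // clamp-to-edge border handling
                    val sy = (y + ky - half).coerceIn(0, height - 1)
                    val sx = (x + kx - half).coerceIn(0, width - 1)
                    acc += input[sy * width + sx] * kernel[ky * kSize + kx]
                }
            }
            out[y * width + x] = acc
        }
    }
    return out
}
```

Calling `convolve2d(img, w, h, FloatArray(9) { 1f / 9f }, 3)` would apply a simple 3×3 box blur, for example; swapping in learned weights turns the same loop into a neural-network layer.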


2 replies on “Pixel 2 smartphones have a custom camera chip called Pixel Visual Core”

  1. I don’t get it; the timing of this is simply weird.
    Huawei just launched its Kirin 970 SoC with a similar co-processor dedicated to AI (an NPU).
    I get the feeling some of Google Assistant’s feature set will be powered by this unit.

    Overall, it sounds like Google beat Huawei to the punch with a (probably) weaker subunit, but with a real practical application: study the photographs taken and use algorithms to enhance the images.

    While regular CPUs can do this work, they are general-purpose processors, so they are slower, use more power, or both. That’s why these GPU-like co-processors come in handy. Though I think that instead of a hammer looking for a nail (Huawei), it’s better to have a working solution to a feature and then allow third-party access, i.e. Pixel Visual Core.

    I’m only concerned about the API, since there’s no standard… it’s going to be a problem for developers to take advantage of these features from multiple vendors. This is where Apple’s ecosystem shines through, as future SoCs can use the same A.I. platform, making things much simpler for developers.

    1. I think Apple and Google have the best approach in putting a separate processing module inside the camera module. I suspect that when the camera is on, pictures are being sampled and given to the processing units to (1) rate how good each image is and (2) suggest additional camera settings to improve the image. Before the take-picture button is pressed, the camera is already processing image samples. The final picture might be taken slightly before the take-picture button is pressed, or it might be taken slightly after. If there are 8 processing units, then perhaps 8 different camera settings could be evaluated, one in each processing unit. All of this would be done locally in the camera module (see the sketch below the comments).

      The weird thing about this is that the camera feature is being developed independently of the phone. Almost like miniaturizing a dedicated digital camera and bolting it into a phone.
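
For what it’s worth, the pre-shutter sampling the commenter describes resembles the zero-shutter-lag technique real camera pipelines use: keep a ring buffer of scored preview frames and, when the shutter fires, return the best recent one. Here is a toy Kotlin sketch of that idea; the Frame type and scoring callback are invented for illustration, and there’s no claim that Pixel Visual Core works this way.

```kotlin
import java.util.ArrayDeque

// Toy sketch of zero-shutter-lag capture: frames are scored continuously
// before the button is pressed, and the shutter selects the best recent
// frame, possibly one captured slightly *before* the press. The Frame type
// and the score function are hypothetical, purely for illustration.
data class Frame(val timestampMs: Long, val pixels: IntArray)

class ZslBuffer(private val capacity: Int = 8) {
    private val ring = ArrayDeque<Pair<Frame, Double>>()

    // Called for every preview frame, before any button press.
    fun submit(frame: Frame, score: (Frame) -> Double) {
        if (ring.size == capacity) ring.removeFirst() // drop the oldest frame
        ring.addLast(frame to score(frame))
    }

    // Called when the shutter button is pressed: return the best-scored
    // frame already in the buffer instead of waiting for a new exposure.
    fun capture(): Frame? = ring.maxByOrNull { it.second }?.first
}
```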
