LG’s next flagship smartphone will have a 3D depth-sensing camera on the front with support for face recognition, augmented reality, and other features.

The company plans to launch the LG G8 ThinQ smartphone at Mobile World Congress later this month, but LG has a habit of building a bit of hype by pre-announcing specific features. This time LG is starting with the phone’s front camera.

LG says it’s worked with Infineon Technologies to include a selfie camera with depth-sensing Time of Flight (ToF) technology.

That means it calculates distance by measuring how long it takes emitted light to bounce off the subject and return. The camera works with infrared light, which allows it to capture 3D data indoors or outdoors without interference from other light sources.
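The underlying distance math is simple to sketch: light travels to the subject and back, so the one-way distance is half the round trip. This is a minimal illustration of the general ToF principle, not LG's or Infineon's actual pipeline, and the function name is hypothetical:

```python
# Time-of-Flight distance: the sensor measures the round-trip time of
# emitted light, and the subject sits at half that round-trip distance.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the subject given the measured round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A subject half a meter away reflects light back in roughly 3.3 nanoseconds.
print(tof_distance(3.336e-9))  # roughly 0.5 meters
```

The nanosecond-scale timings involved are why ToF needs dedicated sensor hardware rather than an ordinary camera.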

While this will hardly be the first smartphone to feature a 3D camera with support for face recognition, LG says its ToF camera uses less power than other solutions.

Apple’s Face ID technology, for example, relies on the company’s TrueDepth camera system, which includes a dot projector, a flood illuminator, and an infrared camera.


One reply on “LG G8 ThinQ smartphone will have depth-sensing selfie camera”

  1. Depth cameras have been in a pretty strange place since the untimely death of Tango.

    A depth camera basically allows the phone to “sense” real objects instead of running this or that photogrammetry solution, and potentially allows for pretty accurate 6DoF tracking.
    Combined with the advent of NPUs, the possibilities are quite far-reaching (for example, natural navigation for the blind, like the “look” command in text quests of old, or scanning a real-life object by waving a phone around it).

    However, depth cameras at the moment are primarily placed on the front of the phone and used for boring things like identification and AR funnies.
    To my knowledge, there are two phones on the market with a rear-facing depth camera (the Oppo R17 Pro with the Snapdragon 710 and the Vivo Nex Dual with the Snapdragon 845), and neither manufacturer provides a publicly available SDK for its depth camera at the moment.
    So it’s kinda close, but so far away.
