Google has announced that it’s bringing a new feature to Google Assistant that uses computer vision to recognize real-world items and suggest appropriate actions.
It’s sort of like the all-but-defunct Google Goggles, in that you can point your phone’s camera at an item to find out what it is. But it goes further by allowing you to take the next step: for example, pointing your phone at a piece of paper with a Wi-Fi network SSID and password will allow your device to automatically log onto that network.
While introducing the feature at the Google I/O developer conference, Google showed a few other uses, including:
- Pointing your phone’s camera at a flower to identify it and get more information
- Pointing at a restaurant sign to look up user reviews and other information on your phone
- Translating a restaurant menu into another language in real time, and then asking Google Assistant follow-up questions based on what you see
At launch, Google Lens will be available in the Google Assistant and Google Photos apps on mobile devices, but eventually it should be available across more platforms.
Google CEO Sundar Pichai says Google Lens takes advantage of advances in deep learning, which allow Google’s software to better identify objects and context in an image. Similar technology is helping improve speech recognition in Google products: the company says deep learning is what enabled it to ship its Google Home smart speaker with 2 microphones rather than 8, as originally planned.
Computer vision advances also allow for some pretty cool image editing features: Pichai showed an example today of removing obstructions from a photo, demonstrating with a picture taken through a fence… with the fence removed.