Google’s Visual Search Can Now Answer Even More Complex Questions
When Google Lens was introduced in 2017, the search feature accomplished a feat that not long ago would have seemed like the stuff of science fiction: point your phone's camera at an object and Google Lens can identify it, show some context, and maybe even let you buy it. As Google increasingly uses its foundational generative AI models to generate summaries of information in response to text searches, Google Lens' visual search has been evolving, too. Lens search is now multimodal, a hot word in AI these days, meaning people can search with a combination of video, images, and voice inputs. First announced at I/O, the feature is considered experimental and is available only to people who have opted into Google's Search Labs, says Rajan Patel, an 18-year Googler and a cofounder of Lens.