Google’s Visual Search Can Now Answer Even More Complex Questions

When Google Lens was introduced in 2017, the search feature accomplished a feat that not too long ago would have seemed like the stuff of science fiction: Point your phone's camera at an object and Google Lens can identify it, show some context, maybe even let you buy it.

As Google increasingly uses its generative AI models to produce summaries of information in response to text searches, Google Lens' visual search has been evolving, too. Lens search is now multimodal, a hot word in AI these days, meaning people can search with a combination of video, images, and voice inputs. First announced at Google I/O, the feature is considered experimental and is available only to people who have opted into Google's Search Labs, says Rajan Patel, an 18-year Googler and a cofounder of Lens.