Last month, Google launched Multisearch, an update to the Google Search app that lets users get answers by snapping a photo. The feature works by suggesting similar products based on the image and text you give the search engine – for example, a photo of a hat followed by the question "Can I buy this in green?"
Let's say you've just finished watching one of those five-minute crafts or budget cake-decorating videos that pop up on Facebook, and you'd love to get something inexpensive for your friend's birthday.
The idea is that Multisearch will find exactly what you need, based on the image you provide, from the millions of retailers and local businesses that Google serves. See the video keynote below for full details on how Multisearch and future updates will work.
Google's multimodal understanding technology can recognize the visual intricacies of the image you use and combine them with the intent expressed in your text query. The tool then scans millions of images and reviews, along with help from Maps contributors, to surface results for nearby food spots and retailer locations.
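Google hasn't published how this works under the hood, but the general idea – combining an image's features with the intent of a text query to rank results – can be illustrated with a toy sketch. Everything below is hypothetical: the two-dimensional "embeddings", the catalog items, and the blending weight `alpha` are hand-picked for illustration, whereas a real system would use high-dimensional learned vectors.

```python
import math

# Toy two-dimensional "embeddings": [hat-ness, green-ness].
# Hand-picked purely for illustration.
CATALOG = [
    {"name": "green hat",   "vec": [1.0, 1.0]},
    {"name": "blue hat",    "vec": [1.0, 0.0]},
    {"name": "green shirt", "vec": [0.0, 1.0]},
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def multisearch(image_vec, text_vec, catalog, alpha=0.5):
    """Blend image and text features, then rank catalog items by similarity."""
    query = [alpha * i + (1 - alpha) * t for i, t in zip(image_vec, text_vec)]
    return sorted(catalog, key=lambda item: cosine(query, item["vec"]),
                  reverse=True)

# A hat photo ([1, 0]) plus the query "in green" ([0, 1])
# ranks the green hat first.
results = multisearch([1.0, 0.0], [0.0, 1.0], CATALOG)
```

Here `alpha` controls how much weight the image gets relative to the text – with `alpha=0.5`, an item matching both signals (the green hat) outranks items matching only one.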
Currently this technology can only recognize objects captured within a single frame, though there are situations where you need information about an entire scene in front of you – such as a pharmacy shelf.
Google is working on an advancement to Multisearch called Scene Exploration, whereby users can pan the camera, ask a question, and instantly receive insights about multiple objects within a wider scene.
The applications of this tool are broad: it can help with day-to-day tasks like building shopping lists, finding minority-owned products to support, exploring a museum or art gallery, scouting photography locations, helping conservationists identify endangered plant species, and helping disaster relief workers quickly sift through the donations they have received.
In Scene Exploration mode, helpful insights will be overlaid on your smartphone camera view, with labels hovering over particular objects or products as you pan across a shop shelf – identifying, for example, whether something is gluten- or dairy-free or contains nuts.
Scene Exploration operates by using computer vision to instantly connect the multiple frames that make up the scene you are capturing, and identify all of the objects within it. Using Google's Knowledge Graph, the software will then work to surface the most helpful results based on keywords or questions that you ask while scanning with your smartphone camera.
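That two-step pipeline – merging the objects detected across overlapping frames, then filtering them through a knowledge base using your question – can be sketched in miniature. This is an assumption-laden toy, not Google's implementation: the `KNOWLEDGE_GRAPH` dictionary stands in for the real Knowledge Graph, frames are just lists of already-detected labels, and a question is reduced to a simple predicate.

```python
# Hypothetical mini "knowledge graph": product facts keyed by label.
KNOWLEDGE_GRAPH = {
    "oat bar":   {"gluten_free": True,  "contains_nuts": True},
    "rice cake": {"gluten_free": True,  "contains_nuts": False},
    "pretzels":  {"gluten_free": False, "contains_nuts": False},
}

def merge_frames(frames):
    """Connect overlapping frames by taking the union of detected labels."""
    seen = []
    for frame in frames:
        for label in frame:
            if label not in seen:
                seen.append(label)
    return seen

def scene_insights(frames, predicate):
    """Return the objects in the scanned scene that satisfy the question."""
    return [obj for obj in merge_frames(frames)
            if predicate(KNOWLEDGE_GRAPH.get(obj, {}))]

# Panning across a shelf yields overlapping frames; asking
# "which snacks are gluten-free and nut-free?" becomes a predicate.
shelf_frames = [["oat bar", "rice cake"], ["rice cake", "pretzels"]]

def gluten_and_nut_free(info):
    return bool(info.get("gluten_free")) and not info.get("contains_nuts")

matches = scene_insights(shelf_frames, gluten_and_nut_free)
```

In this sketch, the rice cake appears in both frames but is surfaced only once, mirroring how the real feature must deduplicate objects seen across the frames of a camera pan.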
Later this year, Google will update Multisearch to allow for local information gathering, a feature it refers to as Multisearch Near Me. Users can long-press on any image or take a picture in the moment, and Multisearch will do the rest.
Google says it aims to make search more natural and helpful, so that users can ask questions about the world around them, anywhere. This tech is certainly a step in the right direction and a great tool for browsing in person and online simultaneously, extending the Google Lens tools we already have.
Check out the video below for a summarized look at the innovations and products announced from Google I/O 2022.