Google Lens and AI transform visual-auditory search

New updates to Google Search with AI and the Gemini model allow you to perform visual and video searches, answer questions in real time, and search for products with greater precision.

Google has taken a step forward in the evolution of its search engine by integrating new artificial intelligence (AI) capabilities into Google Lens and its traditional search engine.

These innovations, powered by the Gemini model, allow users to search using text, images, video and even sound. With these new tools, Google not only makes it easier to find information, but also enriches the user experience, especially on mobile devices.

The evolution of Google Search with AI

Google has always sought to reimagine the way people find information. With the help of artificial intelligence, the company has managed to expand the ways in which people can interact with search, whether by typing a query, using the phone’s camera or even humming a song to find it.

According to Google, the integration of these tools has been so successful that more people are using Google Search to further explore their curiosities.

Google has highlighted the positive impact of AI on user satisfaction with Search. According to the company, users of the AI feature called “AI Overviews” are more satisfied with the results they get and use Search more frequently.

In addition, the Google Lens tool has grown significantly, reaching more than 20 billion monthly visual searches.

New Google Lens capabilities

Since its launch, Google Lens has changed the way users interact with the visual world around them. With the latest updates, Lens can now perform video-based searches in real time and answer questions about what appears in the video.

For example, if a user is in an aquarium and wonders why certain fish swim in groups, they can record a video and ask the question out loud. Google’s AI analyzes the video and the query to provide an answer in the form of an “AI Overview”, along with links to more information.

These capabilities are available globally to Google Search Labs users in English. Additionally, Google has added the option to ask questions using your voice whenever you take a photo with Lens. This functionality makes Lens a powerful tool for querying in real time naturally, without having to type.

Visual shopping with Lens

Another innovation of Google Lens is the improvement of product search. For years, users have been able to find similar products visually through the tool, but new updates provide much more useful results.

When you point the camera at a product, Lens now shows key information, such as reviews, prices, and places to buy it.

For example, if a user sees an interesting backpack at the airport and wants to know more, they only need to take a photo. Google Lens will use its Shopping Graph, which contains information on more than 45 billion products, to identify the exact item and offer all the relevant information to facilitate the purchase.

Song recognition anywhere

Google has also improved song search with “Circle to Search”, which lets you identify songs heard in any context: from a video on social networks to a movie or a website.

This functionality has been expanded to more than 150 million Android devices, allowing users to quickly identify music without having to switch apps.

AI to organize search results

With the rise of queries that have no definitive answer, Google has started using AI to organize results on its search page. An example of this is the new organization for queries related to recipes and food on mobile devices.


Instead of displaying a traditional list of links, the new results page uses AI to organize relevant information from articles, videos and forums in a format that makes it easy to navigate.

This reorganization of results is now available in the United States and, according to Google, has received positive feedback from users who value the ability to access a greater variety of formats and perspectives in one place.

A focus on source diversity

One of the constant concerns with the integration of AI in searches has been the possible limitation in access to different sources of information. To address this, Google has redesigned its “AI Overviews” to include prominent links to reference sites directly in the AI text.

This change has led to an increase in traffic to the aforementioned websites, as users find it easier to visit pages of interest.

Ads in “AI Overviews”

In parallel with these innovations, Google has been testing the inclusion of ads in its “AI Overviews”. According to the company, the ads help users quickly connect with relevant products and services, making it easier to search for businesses and brands that might be of interest.

In conclusion, with these improvements to visual, auditory, and textual search, Google continues to reimagine how we interact with information. By integrating advanced AI capabilities into tools like Lens and its search engine, they not only answer users’ questions, but anticipate their needs, making search more intuitive and accessible. Over time, these innovations can radically change the way we find information on the Internet, and even more so, how we interact with the world around us.




Source: geeksroom.com