NIX Solutions: Google AI Search Faces Challenges with Strange Responses

In recent days, social networks have filled with examples of Google's recently launched AI search engine giving strange answers to user queries, including recommendations to add glue to pizza and suggestions to eat stones. In response, Google has manually disabled its AI-generated overviews feature for certain queries, so many of the peculiar responses should soon disappear.

This situation is puzzling because Google has been testing AI-generated overviews for a year now. The Search Generative Experience feature launched in beta last May, and Google CEO Sundar Pichai stated that it has processed more than a billion queries since then. During the testing period, Pichai said the company had reduced the cost of providing AI-generated answers by 80% thanks to "hardware, engineering, and technical breakthroughs." It seems this optimization was implemented prematurely, before the technology was ready for commercial use.

Google’s Response and Future Improvements

Google continues to assert that the company's AI search engine provides users with "high-quality information" in most cases. "Many of the examples we've seen involve non-standard queries, and we've also seen examples that were spoofed or that we were unable to reproduce," a Google spokesperson said. The spokesperson also confirmed that the company is taking steps to remove AI-generated overviews for certain queries "where necessary in accordance with our content policies," adds NIX Solutions. Examples of unsuccessful AI responses serve as the basis for developing improvements, some of which have "already begun to be implemented." We'll keep you updated as Google continues to refine and enhance its search engine.

Artificial intelligence expert and New York University professor Gary Marcus believes that many AI companies are "selling dreams" that the accuracy of their algorithms will climb from 80% to 100%. According to Marcus, achieving 80% accuracy is relatively easy because it only requires training the algorithm on a large amount of pre-existing data. Closing the remaining 20% gap, however, is an extremely challenging task: Marcus noted that the algorithm must "reason" about the credibility of information and sources, essentially behaving like a fact-checker.