Google debuts AI-based search for US users
During the company’s annual I/O developer conference yesterday (14 May), Alphabet and Google CEO Sundar Pichai said that AI Overviews will expand from the US to billions of users across various countries by year-end.
AI Overviews uses Google’s Gemini AI model to create search summaries that appear alongside traditional link-based search results.
At the conference, Liz Reid, head of Google Search, said “generative AI search will do more for you than you ever imagined”.
“Whatever is on your mind, whatever you need to get done, just ask and Google will do the googling for you,” she added.
She explained that AI Overviews provides a range of possible answers, along with links for deeper dives, in response to a general query. It can also answer more complex questions and sub-questions in seconds using “multi-step reasoning” in search.
At the event, Google also announced improvements to its Gemini 1.5 Pro model for consumers. Pichai said the model’s context window is doubling to 2 million tokens, meaning it can take in far longer inputs, such as lengthy documents or extended video, in a single prompt.
Pichai also highlighted Project Astra, a forthcoming AI assistant that will use a smartphone camera to find items such as eyeglasses or identify locations. Demis Hassabis, head of Google DeepMind, described Project Astra as a multimodal, universal AI agent “that can be truly helpful in everyday life”.
Hassabis also introduced Gemini 1.5 Flash, a model for applications that need lower latency at a reduced cost compared with Gemini 1.5 Pro. He said Flash is “designed to be fast and cost-efficient to serve at scale, while still featuring multimodal reasoning capabilities”.
In another announcement, Google unveiled Veo, a text-to-video AI model that can create computer-generated footage from written prompts.