The "AI Overview" is often sufficient and is served very quickly. (Sometimes nearly instant. I assume Google is caching responses for common searches).
"Deep Mode" is just one click away. And the responses are much, much faster. A question that might take 10 or 15 seconds in ChatGPT (with the default GPT5) takes <1 second to first token with Google. And then remaining tokens stream in at a noticeably faster rate.
Is Google just throwing more hardware at the problem than OpenAI?
Or is it playing other tricks to look faster? (E.g., using a smaller, faster, non-reasoning model to serve the first part of the response while a slower reasoning model works on the more detailed later part.)
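If that second trick is in play, the serving logic could look something like the sketch below: kick off the slow reasoning model immediately, but show the fast model's output while you wait. Both model functions are made-up stand-ins, with sleeps in place of real API latency:

```python
import asyncio

# Made-up stand-ins for two models; the sleeps fake their relative latency.
async def fast_model(query: str) -> str:
    await asyncio.sleep(0.2)  # small non-reasoning model: near-instant
    return f"Quick take on: {query}"

async def reasoning_model(query: str) -> str:
    await asyncio.sleep(5.0)  # large reasoning model: thinks for seconds
    return f"Detailed analysis of: {query}"

async def answer(query: str) -> None:
    # Start the slow model right away, but don't block on it.
    detailed = asyncio.create_task(reasoning_model(query))
    # The user sees output after ~0.2s instead of ~5s...
    print(await fast_model(query))
    # ...and the reasoning model's part is appended when it lands.
    print(await detailed)

asyncio.run(answer("why does Google feel so fast?"))
```

The perceived latency is the fast model's, even though the total work is unchanged.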
Web search tool calls are much faster too, presumably powered by Google's 30 years of web search.
AI Overviews can also pull the content of potential source URLs directly from Google's cache. That makes them very fast, but it also leads to a ton of hallucinations: if you ask about non-existent things (like the cats.txt protocol), AI Overviews consistently fabricate facts.
ChatGPT is slow because OpenAI has to make an external API call to Bing or, even worse, to a scraping provider like SerpApi/DataForSEO/Oxylabs to crawl regular Google search results. That introduces two delays: the hop to the provider, and the provider's own crawl. OpenAI then has to fetch some of these potential source URLs in real time, which introduces another delay. And then OpenAI also uses a better (but slower) model than Google to generate the answer.
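To see how those delays stack up, here is a toy version of that retrieval pipeline. The endpoints are placeholders (in reality the source URLs would be parsed out of the search response), but the shape of the latency is the point: nothing overlaps with the search hop, and the model cannot start until the fetches finish:

```python
import asyncio
import time

import aiohttp  # pip install aiohttp

# Placeholder endpoints; a real pipeline would hit Bing/SerpApi and then
# the source URLs returned by the search call.
SEARCH_API = "https://example.com/search"
CANDIDATE_URLS = [
    "https://example.com/a",
    "https://example.com/b",
    "https://example.com/c",
]

async def fetch(session: aiohttp.ClientSession, url: str) -> str:
    async with session.get(url) as resp:
        return await resp.text()

async def retrieve(query: str) -> list[str]:
    start = time.perf_counter()
    async with aiohttp.ClientSession() as session:
        # Delay 1: the external search call has to complete first.
        await fetch(session, f"{SEARCH_API}?q={query}")
        # Delay 2: fetch candidate sources in real time (parallel, at least).
        pages = await asyncio.gather(*(fetch(session, u) for u in CANDIDATE_URLS))
    elapsed = time.perf_counter() - start
    print(f"{elapsed:.1f}s of retrieval before the (slower) model emits a token")
    return list(pages)

asyncio.run(retrieve("cats.txt protocol"))
```

Google skips the first hop entirely and can often skip the second by reading from its own cache.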
Over time, OpenAI should be able to catch up on speed by building out their own web/search index.
If you try more complex questions, you might find AI Overviews less to your liking.
Google gets away with this because its users are used to typing simple queries, often just a few keywords. Any kind of AI answer feels like magic.
OpenAI cannot do the same. Their users are used to having multi-turn conversations and receiving thoughtful answers to complex questions.