Using Google now feels completely lackluster in comparison.
I've noticed the same thing happening in my circle of friends as well—and they don’t even have a technical background.
How about you?
You hear about this new programming language called "Frob", and you assume it must have a website. So you google "Frob language". You hear that there was a plane crash in DC, and assume (CNN/AP/your_favorite_news_site) has almost certainly written an article about it. You google "DC plane crash."
LLMs aren't ever going to replace search for that use case, simply because they're never going to be as convenient.
Where LLMs will take over from search is when it comes to open-ended research - where you don't know in advance where you're going or what you're going to find. I don't really have frequent use cases of this sort, but depending on your occupation it might revolutionize your daily work.
I don't like LLMs for two reasons:
* I can't really get a feel for the veracity of the information without double-checking it. A lot of context I get from just reading results from a traditional search engine is lost when I get an answer from an LLM. I find it somewhat uncomfortable to just accept the answer, and if I have to double-check it anyway, the LLM's answer is kind of meaningless and I might as well use a traditional search engine.
* I'm missing out on learning opportunities that I would usually get by reading or skimming through a larger document while trying to find the answer. I appreciate that I skim through a lot of documentation on a regular basis and can recall things that I just happened to read when looking for a solution to another problem. I would hate it if an LLM dropped random tidbits of information when I was looking for concrete answers, but since this serendipity is a side effect of my own information-gathering process, I like it.
I could see myself using an AI assistant that helped me search and curate the results instead of trying to answer my question directly. Hopefully in a sleeker way than Perplexity does with its sources feature.
Really, these days, either I know some resource exists and I want to find it, in which case a search engine makes much more sense than an LLM which might hallucinate, or I want to know if something is possible / how to do it, and the LLM will again hallucinate an incorrect way to do it.
I've only found LLMs useful for translation, transcription, natural language interface, etc.
"Search" can mean a lot of things. Sometimes I just want a website but can't remember the URL (traditional); other times I want an answer (LLMs); and other times, I want a bunch of resources to learn more (search+LLMs).
Instead I use a search engine and do my own reading and filtering. This way I learn about what I'm researching, too, so I don't fall into the vicious cycle of drug abu ^H^H^H^H^H laziness. Otherwise I'll inevitably rely more and more on that thing, and be a prisoner of my own making by increasingly offloading my tasks to a black box and becoming dependent on it.
As for AI search, I do find it extremely useful when I don't know the right words to search for. The LLM will instantly figure out what I'm trying to say.
I use LLMs for what they are good at: generative stuff. I know some tasks take me a long time, and I can shortcut them with LLMs easily.
So here's a ChatGPT example query* which is completely off:
https://chatgpt.com/share/67f5a071-53bc-8013-9c32-25cc2857e5...
* It's intentionally bad, to be able to compare with Google.
And here's the web result, which is spot on:
So yeah, I do still use search engines, specifically Kagi and (as a fallback) DuckDuckGo. From either of them I might tack on a !g if I'm dissatisfied with the results, but it's pretty rare for Google's results to be any better.
When I do use an LLM, it's specifically for churning through some unstructured text for specific answers about it, with the understanding that I'll want to verify those answers myself. An LLM's great for taking queries like "What parts of this document talk about $FOO?" and spitting out a list of excerpts that discuss $FOO that I can then go back and spot-check myself for accuracy.
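One nice property of that workflow is that the spot-check can be partially automated: if you ask the model to quote verbatim rather than paraphrase, any excerpt that doesn't appear in the source text can be rejected mechanically. A minimal sketch of that check (the function and variable names here are mine, not any particular tool's):

```python
def verify_excerpts(document: str, excerpts: list[str]) -> list[str]:
    # Keep only excerpts that appear verbatim in the source document;
    # anything the model paraphrased or invented gets filtered out
    # and can be flagged for manual review.
    return [e for e in excerpts if e.strip() and e.strip() in document]

doc = "The loader reads $FOO from the config file. $FOO defaults to 3."
answers = ["$FOO defaults to 3.", "a quote the model made up"]
print(verify_excerpts(doc, answers))  # only the verbatim excerpt survives
```

This doesn't verify that an excerpt is *relevant*, only that it is real, but it cheaply catches the most dangerous failure mode: quotes that were never in the document at all.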
For example Jeep consistently lands at the bottom of the reliability ratings. Try asking GPT if Jeeps are reliable. The response reads like Jeep advertising.
For me, searches fall into one of three categories, none of which are a good fit for LLMs:
1. A single business, location, object, or concept (I really just want the Google Maps or Wikipedia page, and I'm too lazy to go straight to the site). For these queries, LLMs are either overkill or outdated.
2. Product reviews, setup instructions, and other real-world blog posts. LLMs want to summarize these, and I don't want that.
3. Really specific knowledge in a limited domain ("2017 Kia Sedona automatic sliding door motor replacement steps," "Can I exit a Queue-Triggered Azure Function without removing it from the queue?"). In these cases, the LLMs are so prone to hallucination that I can't trust them.
Even without much customization (lenses, scoring, etc.) it's so much better (for my use cases) that I happily pay for it.
Recently I have also started to use Perplexity more for "research for a few minutes and get back to me" type of things.
Queries like "what was that Python package for X" I usually ask an AI right from my editor, or ChatGPT if I'm in the browser already.
2 recent success stories:
I was toying around with an ESP32: I was experimenting with turning it into a Bluetooth remote control device. The online guides help to an extent, setting up and running sample projects, but the segue into deploying my own code was less clear. LLMs are "expert beginners", so this was a perfect request for one. I was able to jump from demos to deploying my own code very quickly.
Another time I was tinkering with OPNsense and setting up VLANs. The router config is easy enough, but what I didn't realize before diving in was that the switch and access point require configuration too. What's difficult about searching this kind of problem is that most of the info is buried in old blog posts and forum threads and requires a lot of digging and piecing together disparate details. I wasn't lucky enough to find someone who did a writeup with my exact setup, but since LLMs are trained on all these old message boards, this was again a perfect prompt playing to their strengths.
The results from LLMs are still too slow, vary too much in quality and still frequently hallucinate.
My typical use-case is that when I'm looking for an answer I make a search query, sometimes a few. Then scan through the list of results and open tabs for the most promising of them - often recognising trusted, or at least familiar, sites. I then scan through those tabs for the best results. It turns out I can scan rapidly - that whole process only takes a few seconds, maybe a minute for the more complex queries.
I've found LLMs are good when you have open-ended questions, when you're not really sure what you're looking for. They can help narrow the search space.
At most I use AI now to speed up my research phase dramatically. AI is also pretty good at showing what is in the ballpark for more popular tools.
However I am missing forum style communities more and more, sometimes I don't want the correct answer, I want to know what someone that has been in the trenches for 10 years has to say, for my day job I can just make a phone call but for hobbies, side projects etc I don't have the contacts built up and I don't always have local interest groups that I can tap for knowledge.
LLMs can't be trusted, you have no way to tell between a correct answer and a hallucination. Which means I often end up searching what the LLM told me just to check, and it is often wrong.
Search engines can also lead you to false information, but you have a lot more context. For example, a StackOverflow answer has comments, and often, they point out important nuances and inaccuracies. You can also cross-reference different websites, and gauge how reliable the information is (ex: primary source vs Reddit post). A well trained LLM can do that implicitly, but you have no idea how it did for your particular case.
What are the specs for the new Google Pixel 9a? An LLM can't answer this; maybe after a year it can.
Last night, I asked Claude 3.7 Sonnet to obtain historical gold prices in AUD and the ASX200 TR index values and plot the ratio of them, it got all of the tickers wrong - I had to google (it then got a bunch of other stuff wrong in the code).
Also yesterday, I was preparing a brief summary of forecasting metrics/measures for a stakeholder and it incorrectly described the properties of SMAPE (easily validated by checking Wikipedia).
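For reference, SMAPE is easy to check by hand. Below is one common definition (several variants exist; some omit the factor of two and yield a 0-100 range), plus a quick computation demonstrating the property models often get wrong: despite the name, it is not symmetric between over- and under-forecasts.

```python
def smape(actual, forecast):
    # Symmetric mean absolute percentage error, in percent (0-200 range).
    pairs = list(zip(actual, forecast))
    return 100.0 / len(pairs) * sum(
        abs(f - a) / ((abs(a) + abs(f)) / 2.0) for a, f in pairs
    )

# An under-forecast of the same absolute size scores worse than an
# over-forecast, because the denominator shrinks with the forecast.
print(round(smape([100], [110]), 2))  # over-forecast by 10  -> 9.52
print(round(smape([100], [90]), 2))   # under-forecast by 10 -> 10.53
```

That asymmetry is exactly the kind of detail worth validating against Wikipedia before it goes into a stakeholder summary.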
I constantly have issues with my direct reports writing code using LLMs. They constantly hallucinate things for some of the SDKs we use.
But now, the veracity of most LLMs' responses is terrible. They often include “sources” unrelated to what they say and make up hallucinations when I search for what I'm an expert in. Even Gemini in Google Search told me yesterday that Ada Lovelace invented the first programming language in the 18th century. The trust is completely gone.
So, I'm back to the plain old search. At least it doesn't obscure its sources, and I can get a sense of the veracity of what I find.
I recently upgraded my video card, and I run a 4K display. Suddenly the display was randomly disconnecting until I restarted the monitor. I googled my brains out trying to figure out the issue, and got nowhere.
So I gave ChatGPT a shot. I told it exactly what I upgraded from/to, and which monitor I have, and it said "Oh, your HDMI 2.0 cable is specced to work, but AMD cards love HDMI2.1 especially ones that are grounded, so go get one of those even if it's overspecced for your setup."
So I did what it said, and it worked.
For other topics, exact pedantic correctness may not always be as important, but I definitely do want to be able to evaluate my sources nevertheless, for other obvious reasons.
Search is actually pretty much what I want: a condensed list of possible sources of information for whatever I'm looking for. I can then build my own understanding of the topic by checking the sources and judging their credibility. Search seems to have been getting worse lately, sadly, but it's still useful.
If they get rid of those operators, then that would be really bad. But I have a feeling that’s what a lot of search engine people are itching to do.
Conversely it’s a huge mistake to rely on LLMs for anything that requires authoritative content. It’s not good at appropriately discounting low quality sources in my experience. Google can have a similar problem, but I find it easier to find good sources there first for many topics.
Where LLMs really replace modern google is for topics you only kind of think should exist. Google used to show some pretty tenuously related links by the time you got to page 5 results and there you might find terms that bring you closer to what you’re looking for. Google simply doesn’t do that anymore. So for me, one of the joys is being able to explore topics in a way I haven’t been able to for over a decade
Search engines tend not to over-summarize, and they provide lots of references, something LLM researchers have worked hard to achieve.
If they feel lackluster for you, maybe you are not interested in those specific use cases in which they shine.
Similarly, the reason could be that you don't want to check references for yourself, and you prefer to trust the selection of cross references provided by your LLM of choice.
It is likely that your close circle of friends share an identity similar to yours. That is, by many, considered a defining characteristic of friendship. Although it can be a sign of the rising popularity of LLMs, one must take it as an anecdote and not a statistically significant fact.
I do prefer a soft selection of queries on different search engines and different LLM models. Since you asked for an opinion and self-declared an ability to do searches and questions yourself, I don't feel obligated to cite sources for this answer.
I don't have a circle of friends, so I have no idea what other people are doing, outside of what I read online.
I use an LLM a lot for coding. However, I was never as much into doing web searches for programming problems anyway, I used docs more and rarely needed sites like SO. I haven't therefore moved away from search engines for that side of things.
With chatbots I first need to formulate a question (or, I feel like I do), then wait for it to slowly churn out an overly wordy response. Or I need to prompt it first to keep it short.
I suppose this calculus is different if you already used a search engine by asking it a fully formulated question like "What is an html fieldset and how do I use it?" instead of typing "html fieldset" and clicking through to MDN.
I would use the analogy of consuming a perfectly tasty and nutritious meal crafted by chef ChatGPT vs. visiting a few restaurants around your neighborhood and tasting different cuisines. Neither approach is wrong, but you get different things and value out of each. Do what you feel like doing!
Last week, there was a specific coding problem I needed help with; I asked ChatGPT, which gave me a great answer. Except I then spent a few hours trying to figure out why the function ChatGPT was using wasn't being included, despite the #include directives all being correct. Neither ChatGPT nor Google were helpful. The solution was to just take a different approach in my code. If I had only googled, I wouldn't have spent that time chasing the wrong solution.
Also consider this: when you ask a question, there are a bunch of rude (but well-meaning) people who ask things like "what are you really trying to do?" and who criticize a bunch of unrelated things about your code/approach/question. A lot of the time that's just annoying, but sometimes it gives you really good insights into the problem domain.
If it is more of an open ended question that I am not sure there'll be a page with an answer for, I am more likely to use ChatGPT/Claude.
Same with my wife (non-technical) and teenage daughter.
Someone at work yesterday asked me if I knew which bus lines would be active today due to the ongoing strike. Googled, got a result, shared back in under 10 seconds.
Out of curiosity I just checked with various LLMs through t3.chat, with all kinds of features, none had anything more than a vague "check with local news" to say. Last one I tried Gemini with Deep Research and what do you know, it actually found the information and it was correct!
It also took nearly 5 minutes...
Like I feel if your search is about _reality_ (what X product should I buy, is this restaurant good, when is event A in city B, recipes, etc.) then LLMs are severely lacking.
Too slow, with almost always incomplete if not outright incorrect answers; deep research tends to work only if you have 20 minutes to spare, both to get an initial answer and to manually vet the sources and look for more information in them.
People should do what makes them feel good, but I think we're all going to get a bit dumber if we rely too much on LLMs for our information.
I personally still use search engines daily when I know what it is that I am searching for. I am actually finding that I am reaching less for LLMs even though it is getting easier and cheaper (I pay for T3 Chat at $8USD p/m).
Where I find LLMs useful is when I am trying to unpack a concept or I can't remember the name of something. The result of these chats often lead to their own Google searches. Even after all this development, the best LLMs still hallucinate constantly. The best way that I've found to reduce hallucinations is to use better prompts. I have used https://promptcowboy.ai/ to some success for this.
- If I am seeking technical information, I would rather get it from the original source. It is often possible to do that with a search. The output from an LLM is not going to be the original source. Even when dealing with secondary sources, it is typically easier to spot red flags in a secondary source than in the output of an LLM.
- I often perform image searches. I have no desire for generated images, though I'm not going to object to one if someone else "curated" the outputs of an AI model.
That said, I will use an LLM for things that aren't strictly factual. i.e. I can judge if it is good enough for my needs by simply reading it over.
As an example, someone typo'd an abbreviation, so I asked GPT and it gladly made up something for me. So I gave it a random abbreviation, and it did the same (using its knowledge of the game).
Even when I tell it the specific version I'm playing it gets so much wrong it's basically useless. Item stats, where mobs are located, how to do a certain quest - anything. So I'm back to using websites like wowhead and google.
Until LLMs stop responding with over confident “MBA talk” that sounds impressive but doesn’t really say much, I’ll continue to use search engines.
Image searches without having to describe every minute detail of what I'm looking for?
Bah, even some searches that are basically looking for wikipedia/historical lookups....so much easier UI in Google Search than chatgpt's endless paragraphs with unclear sources etc.
For some things Google's AI results are helpful too, if not to just narrow down the results to certain sources.
There's no chat interface helping any of this
Search is for finding specific websites and products. Totally different things.
Basically, there’s a lot of good and specific information on the web, but not necessarily combined in the way I want. LLMs can help break apart my specific combination at a high level but struggle with the human ability to get to solutions quickly.
Or maybe I just suck at asking questions haha
For programming stuff that can be immediately verified LLMs are good. They also cover many cases where search engines can't go (e.g. "what was that song where X did Y?"). But looking up facts? Not yet. Burned many times and not trying it again until I hear something changed fundamentally.
The serendipity of doing search with your own eyes and brain on page 34 of the results cannot be overstated. Web surfing is good and does things that curated results (i.e., Google's <400, Bing's <900, Kagi's <200, an LLM's very limited single result) cannot.
1. questions where I expect SEO crap, like for cooking recipes, are for LLMs. I use the best available LLM for those to avoid hallucinations as much as possible, 2.5 pro these days. With so much blogspam, LLMs are actually less likely to hallucinate at this point than the real internet IMO.
2. Questions whose answer I can immediately verify, like "how do I do x in language y", also go to an LLM. If the suggestion doesn't work, then I google. My stackoverflow usage has fallen to almost 0.
3. General overviews / "how is this algorithm called" / "is there a library that does x" are LLMs, usually followed by Googling about the solutions discussed.
4. When there's no answer to my exact question anywhere, or when I need a more detailed overview of a new library / language, I still read tutorials and reference docs.
5. Local / company stuff, things like "when is this place open and how do I call them" or "what is the refund policy of this store" are exclusively Google. Same for shopping (not an American, so LLM shopping comparisons aren't very useful to me). Sadly, online reviews are still a cesspool.
Google wants to show me products to buy, which I'm almost never searching for, or they're "being super helpful" by removing/modifying my search terms, or they demonstrate that the decision makers simply don't care (or understand) what search is intended to accomplish for the user (ex: ever-present notices that there "aren't many results" for my search).
Recently tried to find a singer and song title based on lyrics. Google wouldn't present either of those, despite giving it the exact lyrics. ChatGPT gave me nonsense until I complained that it was giving me worse results than Google, at which point it gave me the correct singer but the wrong song, and then the correct song after pointing out that it was wrong about that.
Still can't get Google to do it unless my search is for the singer's name and song title, which is a bit late to the party.
I use gemini more on my phone, where I feel like going through search results and reading is more effort, but I'll fall back to searching on duck duck go fairly often.
On a desktop I generally start at duck duck go, and if it's not there, then I don't bother with AI. (I use copilot in my editor, and it's usually helpful, but not really "search").
ddg is often faster for when I want to get to an actual web site and find up-to-date info, for "search as navigation".
llm's are often faster for finding answers to slightly vague questions (where you know you're going to have to burn at least as much climate on wading through blogspam and ads and videos-that-didn't-need-to-be-videos if you do a search).
Yes, I still use search engines and almost always find what I need in long form if I can’t figure it out on my own.
When I need to search, I use a search engine and try to find a trustworthy source, assuming one is available.
I won't deny LLMs can be useful, but they're like the news: double-check and form your own conclusions.
I’m mostly using my personal SearXNG instance and am still finding what I’m looking for.
On systems where I don’t have access to that, I’m currently trying Mojeek and experiment with Marginalia. Both rather traditional search engines.
I’m not a big fan of using LLMs for this. I rather punch in 3-5 keywords instead of explaining to some LLM what I’m looking for.
I use perplexity pro + Claude a lot as well. Maybe too much but mostly for coding and conversations about technical topics.
It really depends on intent.
I have noticed that I’ve started reading a lot more. Lots of technical books in the iPad based on what I’m interested in at the moment.
These tools are useful, but in my view the level of trust commonly being placed in them far exceeds their capabilities. They're not capable of distinguishing confidently worded but woefully incorrect Reddit posts from well-verified authoritative pages, which, combined with their inclination toward hallucination and their overeagerness to please the user, makes them dangerous in an insidious way.
Why would I want to have a conversation in a medium of ambiguity when I could quickly type in a few keywords instead? If we'd invented the former first, we'd build statues of whoever invented the latter.
Why would I want to use a search service that strips privacy by forcing me to be logged in and is following the Netflix model of giving away a service cheap now to get you to rely on it so much that you'll have no choice but to keep paying for it later when it's expensive and enshittified?
When I do, it's because either I can't think of good terms to use, and the LLM helps me figure out what I'm looking for, or I want to keep asking follow-up questions.
Even then, I probably use an LLM every other week at most.
Given my time dedicated to researching things, I feel like I am "more productive" because I waste less time.
But I do my due diligence to double-check what ChatGPT suggests. So if I ask ChatGPT to recommend a list of books, I double-check with Goodreads and Amazon reviews/ratings. Like that. I guess it's like having a pair-research-sesson with an AI librarian friend? I am not sure.
But I know that I am appreciative. Does anyone remember how bad chatbots were before the arrival of low-hanging-AI-fruits like generative AI? Intel remembers.
This can be very difficult, if there's a lot of semantic overlap with a more commonly-searched mainstream topic, or if the date-range-filtering is unreliable.
Sometimes I'll look for a recipe for banana bread or something, and searching "banana bread recipe" will get me to something acceptable. Then I just have to scroll down through 10 paragraphs of SEO exposition about how much everyone loves homemade banana bread.
Searching for suppliers for products that I want to buy is, ironically, extremely difficult.
I don't trust LLMs for any kind of factual information retrieval yet.
Specific searches expecting one answer. This type of search is enhanced by ChatGPT. Google is losing here.
Wild goose chase / brainstorming. For this, I need a broad set of answers. I am looking for a radically different solution. Here, today's Google is inferior to the OG Google. That is for 2 reasons.
1. SEO has screwed up the results. A famous culprit is Pinterest, along with many other irrelevant sites that fill the first couple of pages.
2. Self-censoring & shadow banning. Banning of torrent sites, politically motivated manipulation. Though the topics I search are not political, there is some issue with the results. I can see the difference when I try the same in Bing or DuckDuckGo.
No, I don't use the hallucination machines to search, and I never will.
I use search engines to search. I use the "make shit up" machine when I want shit made up. Modern voice models are great for IVR menus and other similar tasks. Image generation models have entirely taken over from clipart when I want a meaningless image to represent an idea. LLMs are even fun to make up bogus news articles, boilerplate text to fill a template, etc. They're not search engines though and they can't replace search engines.
If I want to find real information I use a search engine to find primary sources containing the keywords I'm looking for, or well referenced secondary sources like Wikipedia which can lead me to primary sources.
I echo what others say, Kagi is a joy to use and feels just like Google used to be - useful
But a lot of my classic ADHD "let's dive into this rabbit hole" google sessions have definitely been replaced by AI deep searches like Perplexity. Instead of me going down a rabbit hole personally for all the random stuff that comes across my mind, I'll just let perplexity handle it and I come back a few minutes later and read whatever it came up with.
And sometimes, I don't even read that, and that's also fine. Just being able to hand that "task" off to an AI to handle it for me is very liberating in a way. I still get derailed a bit of course, but instead of losing half an hour, it's just a few seconds of typing out my question, and then getting back to what I've been doing.
Just now for example I wanted to know how Emma Goldman was deported despite being a US citizen. Or whether she was a citizen to begin with. If an LLM gave me an answer I for sure would not trust it to be factual.
My search was simple: Emma Goldman citizenship. I got a Wikipedia article claiming it was argued that her citizenship was considered void after her ex-husband's citizenship was revoked. Now I needed to confirm it from a different source and also find out why her ex's citizenship was revoked. So I searched his name + citizenship and got a New Yorker article claiming it was revoked because of some falsified papers. Done.
If an LLM told me that, I simply wouldn’t trust it and would need to search for it anyway.
But if I then click the Google search text box at the top, and start typing, it takes 20 seconds for my text to start appearing (the screen is clearly lagged by whatever Google is doing in the background), and then somehow it starts getting jumbled. Google is the only web page this happens to.
I actually like their results, they just don't want me to see their results. Weird business model.
The more you trust the models, the less cognitive load you spend checking and verifying, which will lead to what people call AI but which is actually nothing more than a for loop over data loaded in memory. Those who still think that "for message in messages..." can represent any sort of intelligence have already been brainwashed by a new iteration of the one-armed bandit, where you click regenerate indefinitely with a random seed while being distracted from what is going on around you.
Hence, search still remains my hope until SO and the likes decay.
Additionally, many search engines now already generate quick summaries or result snippets without a lot of prompt-fu, so my day-to-day usage has settled at around a 40:60 (LLM:search) ratio.
Of course, I have used Phind and other LLMs, and the results sometimes are useful, but in general the information they give back feels like a summary written for the “Explain Like I'm Five” crowd, it just gives me more questions than answers, and frustrates me more than it helps me.
Where LLMs excel is when I don't know the exact search term to use for some particular concept. I ask the LLM about something, it answers with the right terms I can use in a search engine to find what I want, then I use these terms instead of my own words, and what I want is in the search results, in the first page.
The question is: are you searching for answers to something, or are you searching for a site/article/journal/whatever in order to consume the actual content? If you are searching for a page/article/journal/ in order to find an answer, then the journal/article itself was just a detour, if the LLM could give you the answer and you could trust it. But if you were looking for the page/article itself, not some piece of information IN the article then ChatGPT can (at best) give you the same URL google did, but 100x slower?
Still have a trust issue with LLM/ChatGPT for facts. Maybe in a couple years my mindset will shift and trust LLM/chatgpt more.
I use ChatGPT for text summarization and translation, and Midjourney for slide decks and graphic design ideation.
I just tried ChatGPT and saw that you can ask it to search the web and also can see its sources now. I still remembered how it was last time I used it, where it specifically refused to link out to external sources (looks like they changed it around last November). That's a pretty good improvement for using it as search.
I'd rank kagi > chatgpt > google any day.
But in fact I overwhelmingly use search over llm because it's an order of magnitude quicker (I also have google search's ai bobbins turned off by auto-using "web" instead of "all".)
I've used llm "for real" about 3 times in the last two months, twice to get a grounding in an area where I lacked any knowledge, so I could make better informed web searches, and once in a (failed) attempt to locate a piece of music where web search was unsuccessful.
- I use RSS to see 'what's new', and to search it. My RSS client supports search
- I maintain a list of domains, so when I want to find a particular place I check my list (I can search domain titles, descriptions, etc.). I have 1 million domains [0]
- If I want more precise information I try to google it
- I also may ask chatgpt
So in fact I am not using one tool to find information. I use many tools, and often narrowing it down to tools that most likely will have the answer.
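A personal domain list like the one described above doesn't need much machinery to be searchable. A minimal sketch, assuming a made-up three-field layout of domain, title, and description (not the commenter's actual format):

```python
# Hypothetical records; a real list would be loaded from a file or database.
domains = [
    ("example.org", "Example", "Placeholder domain used in documentation"),
    ("news.ycombinator.com", "Hacker News", "Tech news aggregator"),
]

def search(records, query):
    # Case-insensitive substring match against domain, title, or description.
    q = query.lower()
    return [r[0] for r in records if any(q in field.lower() for field in r)]

print(search(domains, "news"))  # -> ['news.ycombinator.com']
```

At a million entries a linear scan is still fast enough for interactive use, though an index (e.g. SQLite full-text search) would be the natural next step.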
The biggest issue is when GPT returns something that doesn’t match your knowledge, experience, or intuition and you ask the “are you sure?” question, it seems to inevitably come back with “you’re right!”. But then why/how did it get it wrong the first time? Which one is actually true? So I go back to search (Kagi).
So for me, LLMs are about helping to process and collate large bodies of information, but they're not final answers on their own.
I use Claude pretty much exclusively, with GPT as a backup, because GPT errors out too much, tries to train on you too much, and has a lackluster search feature. The web UIs are not these companies' priority, as they focus more on other offerings and API behavior. Which means any gripe will not be addressed, and you have to just go for the differentiating UX.
For a second opinion from Claude, I use ChatGPT and Google pretty much the same amount. Raw google searches are just my glorified reddit search engine.
I also use offline LLMs a lot. But my reliance on multimodal behavior brings me back to cloud offerings.
On the flip side, any time I'm searching for something programming-related (FE, JavaScript in my case), a search engine is the last resort, for when an LLM isn't giving me the answer I'm looking for.
This is still shocking to me, I really never thought I would replace my reliance on Google with something new.
Operator words still work in Google, albeit less reliably than in the past, but they do the job.
I see the AI as being there to do the major leg work. But the devil's in the details and we can't simply take their word that something is fact without scrutinizing the data.
One interesting trend that I like is that I started using local LLMs way more in the last couple of months. They are good enough that I was able to cancel my personal ChatGPT subscription. I'm still using ChatGPT on work machines since the company is paying for it.
Keep in mind that my 75% figure doesn't count queries where I get my answer from Google Gemini. I'm just guessing, but if you added those in, it would rise to 85-90%.
My thought is if browsers and phones started pushing queries over to an LLM, search (and search revenue) would virtually disappear.
There is some room for optimism, though. There's been a rise in smaller search engines with different funding models that are more aligned with user needs. Kagi is the only one that comes to mind (I use it), but I'm sure there are others.
Though lately for more in-depth research I've been enjoying working with the LLM to have it do the searching for me and provide me links back to the sources.
That’s if they can swing the immense ads machine (and by that I mean the ads organisation not the tech) and point it at a new world and a different GTM strategy.
They still haven’t figured out how to properly incentivise content producers. A lazy way would be to display ads that the source websites would display alongside the summary or llm generated response and pass on any CPM to the source.
- Specific documentation
- Datasets
- Shopping items
- Product reviews
But for the search engines I use, their branded LLM response takes up half of the first page. So that 25% figure may actually be a lot smaller.
It's important to note that these search engine LLM responses are often ludicrously incorrect -- at least, in my experience. So now I'm in this weird phase where I visit Google and debate whether I need to enter search terms or some prompt engineering in the search box.
For example I asked it about rear springs for a 3rd gen 4runner and it recommended springs for a 5th gen.
I was very surprised to hear this, and it made me wonder how much of traditional SEO will be bypassed through LLM search results. How do you leverage trying to get ranked by an LLM? Do you just provide real value? Or do you get featured on a platform like Chrome Extensions Store to improve your chances? I don't know, but it is fun to think about.
For the people who say they've reduced their search engine use by some large percentage, do you never need to find a particular document on the web or look for reference material?
Learning is fun! Reading is good for you! Being spoon-fed likely-inaccurate/incomplete info or unmaintainable code is not why I got into computers.
And yes, just plain old Google search is completely lackluster in comparison to the perplexity.ai search I get to do today.
Earlier today I was trying to remember the name of the lizard someone tweeted about seeing in a variety store. Google search yielded nothing. Gemini immediately gave me precise details of what I was talking about, it linked to web resources about it.
I use ChatGPT at home constantly, for history questions, symptoms of an illness, identification of a plant hiking, remembering a complex term or idea I can't articulate, tips for games, and this list goes on.
At work it's Copilot.
I've come to loathe and mock Google search and I can't be the only one.
If I want to play with ideas, I chat with AI. If I need facts, I use search.
Unlike Google, or Duck Duck Go, which serve up links that we can instantly judge are relevant to us, LLM spin stories that sound pretty good but may be and often are insidiously wrong. It’s too much effort to fact check them, so people don’t.
I'm still using Google for searches on Reddit these days because Reddit's own search engine is terrible.
These are the things I usually search for:
* lazy spell check
* links to resources/services
* human-made content (e.g. reviews, suggestions, communities)
Genuinely curious - those who use chatbots regularly in lieu of search, what kinds of things are you prompting it for?
I mostly use Perplexity for search, sometimes ChatGPT. Only when I am looking for something _very_ specific do I use a traditional search engine.
Dropping usage of search engines compounded by lack of support led to me cancelling my Kagi subscription and now I just stick with Google in the very rare occasions that I use a search engine at all. For a dozen searches or so a month, it wasn't worth it to keep paying for Kagi.
The only advantage Google and other traditional search engines have over AIs is that they're very fast. If I know for certain I can get what I want in under 1s I might as well use Google. For everything else, Perplexity or ChatGPT is going to be faster.
Exploratory/introductory/surface-level queries are the ones that get handed to auto-complete.
I like how Kagi lets me control whether AI should be involved by adding or omitting a question mark from my search query. Best of both worlds.
But I appreciate and read the Google Gemini AI generated response at the top of the page.
Also, I'm an iPhone user. But I have a Google Pixel phone for dev work.
I find myself now using 'Hey Google' a lot more because of the Gemini responses.
It's particularly fun playing with it with the kids on road trips as we ask it weird questions, and get it to reply in olde english, or speak backwards in French and so on!
LLMs are amazing for technical research or getting a quick overview and a clear explanation without clicking through ten links. But for everyday searches — checking restaurant hours, finding recent news, digging into niche forums, or comparing product — search engines are still way better.
I don’t think it’s a matter of one replacing the other — they serve different purposes.
We're in a bubble here.
I used to use DDG for syntax problems (so many programming languages....) and it usually sent me to SO.
Now I use DeepSeek. Much friendlier, I can ask it stupid questions without getting shut down by the wankers on SO. Very good
I still use DDG to interface with current events and/or history. For history, DDG is primarily, though not only, an interface to Wikipedia.
Here's the difference as per chatgpt search https://chatgpt.com/share/67f5ae28-5700-800d-b241-386462a307...
I feel like Google search will become obsolete in a short time, and they will have to make big changes to their UX and search engine.
Although I guess most of their user base still relies on the old ways, so changing it right now would have huge impacts on older users.
Websites have all kinds of extra context and links to other stuff on them. If I want to learn/discover stuff then they are still the best place to go.
For simple informational questions, all of that extra context is noise; asking gpt "what's the name of that cpp function that does xyz" is much faster than having to skim over several search results, click one, wait for 100 JavaScript libraries to load, click no on a cookies popup and then actually read the page to find the information only to realise the post is 15 years old and no longer relevant.
There are times where I know exactly what website to go to and where information is on that site and so I prefer that over AI. DDGs bangs are excellent for this: "!cpp std::string" and you are there.
Then there's the verifiability thing. Most information I am searching for is code which is trivial to verify: sometimes AI hallucinates a function but the compiler immediately tells me this and the end result is I've wasted 30 seconds which is more than offset by the time saved not scrolling through search.
Examples of things that aren't easy to verify: when's this deprecated function going to be removed, how mature is tool xyz.
Of course, there are also questions about things that happened after the AI's knowledge cutoff date. I know there are some that can access the internet now, but I don't think any are free.
I'd also happily turn off several other search features, more directly tied to revenue, which is probably why they don't like adding options. I'm sure their AI will be selling products soon enough. Got to make those billions spent back somehow.
https://chromewebstore.google.com/detail/comparative-chatgpt...
The more time goes by, the more I use both ChatGPT and Claude to search (at the same time, to cross-check the results), with Kagi used either to check the results when I know strictly nothing of the subject or for specific searches (restaurants, movie showings…).
I’ve almost completely stopped using Google.
This constrains the search space to whatever training data set used for the LLM. A commercial search engine includes resources outside this data set.
Using a search engine for responses to natural language questions is of dubious value as that is not their intended purpose.
I use LLMs for things where accuracy ranging anywhere between 0% and 100% is not a problem: when I need to get a feel for something, or a pointer to some resource.
Until the false results rate drops, it can't be trusted.
I use ChatGPT for learning about topics I don't know much about. For example, I could spend 15 minutes reading wikipedia, or I could ask it to use Wikipedia and summarize for me.
Having said that, I use ChatGPT exactly like a search engine. If I want to find info I will explicitly enable the web search mode and usually just read the sources, not the actual summary provided by the LLM.
Why do this? I find if I don't quite know the exact term I am looking for I can describe my problem/situation and let ChatGPT make the relevant searches on my behalf (and presumably also do some kind of embedding lookup).
This is particularly useful in new domains, e.g. I've been helping my wife do some legal research and I can explain my layman's understanding of a situation and ask for legal references, and sure enough it will produce cases and/or gov.uk sources that I can check. She has been impressed enough to buy a subscription.
I have also noticed that my years (decades!) of search engine skills have atrophied quicker than expected. I find myself typing into Google as I would to ChatGPT, in a much more human way, then catch myself and realise I can actually write much more tersely (and use, e.g. site:).
The most important part for me is understanding how to communicate with each system, whether it's google-fu or prompting.
* adult cat sleep time -> search engines
* my cat drops his toy into his water and brings it to me -> GPT
Besides, Google has some convenient features that I frequently use, e.g., currency/unit/timezone conversion, stock chart.
- What other people think of product XYZ: reddit
- Subject specific/Historical: Wikipedia
- News specific: My favored news sources
- Coding related: I start with ChatGPT. To validate those answers I use Google
It will also help get rid of the antitrust issues that the Chrome browser has created.
They can be very useful, especially when looking for something closely adjacent to a popular topic, but you got to check carefully what they say.
Personally, I don't want an LLM synthesized result to a query. I want to read original source material on websites, preferably written by experts, in the field in which my search is targeted.
What I find in serious regression in search, is interpretation of the search query. If I search for something like "Systems containing A but not B" I just get results that contain the words A and B. The logical semantics of asking for "not B" is completely ignored. Using "-B" doesn't work, since many discussions of something that doesn't have B, will mention the word B. These errors didn't seem to be so egregious historically. There seemed to be more correct semantic interpretation of the query.
I don't know if this has to do with applying LLMs in the backend of search, but if LLMs could more accurately interpret what I'm asking for, then I would be happy to have them parse my queries and return links that meet my query specifications more accurately.
But again, I don't want a synthesized result, I want to read original source material. I see the push to make everything LLM synthesized prose, to be just another attempt to put "Big Tech" between me and the info I'm trying to access.
Just link me to the original info please...
p.s. Something like the "semantic web" which would eliminate any 3rd party search agent completely would be the ideal solution.
If I need something more complex like programming, talk therapy, or finding new music then I’ll hop on over to Chat.
Like I could interrogate an LLM about something technical “X” or I could just search “X documentation” and get to the ground truth.
Our projects heavily use platform tools so I am looking there rather than Googling.
I started using Kagi in an attempt to de-googlify, but it turns out that it's just downright good and now I prefer it.
For everything else, I still use search.
On the other hand, Google search is starting to be useless without curating my queries. And their AI suggestions are full of lies.
I use Kagi as my search engine and GitHub code search for searching for code examples.
I haven't found a reason to use AI yet.
I average around 1400-1600 searches per month.
Twitter and reddit are garbage.
I sometimes use youtube search then fast forward with the subs on and the sound off.
The internet has ended. It's been a fun ride, thanks everyone.
6510 slaps hn with a large trout
I often search for solutions to specific (often exotic) problems, and LLMs are not the best at handling them.
DDG does not have the best results; I'm not sure if they're better than Google's. It definitely has a different set of issues.
Finally seeing another positive comment on HN about Kagi, I decided to pull out my wallet and try it. And it's great. It feels like Google from the 2000s.
I decided to replace my subscriptions to Anthropic and ChatGPT with Kagi, where I have access to both providers and also Gemini, Meta, and others. So, bottom line, it's actually saving me money.
Their Ki assistant (an LLM that iterates with multiple search queries when looking for answers) is actually neat. In general it's the best of both worlds: depending on what you need, you can use the LLM interface or classic search, and both work great.
Boils down to the fact that the internet is full of shitty blogspam that search happily returns if your question is vague.
Sparktoro (no affiliation) had a post or video about this somewhere very recently.
What I tend to use LLMs for is rubber ducking or the opening of research on a topic.
It is easy to filter them when you're working within a familiar domain, but when trying to learn something completely new, it is better to ask DeepSeek for a summary and then decide what to explore.
Until when, I don't know.
LLM is okay for some use cases, but the amount of times it hallucinates bullshit makes it not trustworthy.
But I will say I have started to just use the AI summary at the top of Google, though it is sometimes wrong. For example, I searched "why is the nose of a supra so long" and it started talking about people's faces vs. the car. Granted, yeah, it's not technically a nose, but still.
With LLM being good enough, I go to LLM for what I used to go for Wikipedia and StackOverflow.
If an AI answer leaves me unsure, or I suspect AI crafted it, I'll search for cross-validation.
No, just joking. I use libraries to read books.
Perplexity for anything complex
Yandex for pics (Google pics got ridiculously bad)
I think this also stems from a new design paradigm emerging in the search domain of tech. The content results and conversational answers are merging – be it Google or your Algolia search within your documentation, a hybrid model is on the rise.
I usually search for specific terms, often in quotes. My extra terms are variations on how people might word the question or answer.
Over time, I notice many sites are reliable for specific topics. I'll use site: operator to limit the search to them initially. If it's a research paper, adding "paper" and PDF usually links to it immediately. If government, it's often on a .gov page. And so on.
Search works well for me with these techniques for most of my needs. There has certainly been a drop in quality, with an increase in work, due to them optimizing for what generates ad clicks. That gives me a lot of sites that appear to be helpful but actually aren't. I can usually spot and weed them out in one session for a given topic, though, since click-farm sites are recognizable (intuitable) once you're used to them.
Finally, I try to follow the law since my Savior, Jesus Christ, requires it where possible. A.I.'s are usually trained with massive copyright infringement with outputs that may be copyright infringement. Search engines link me to the content creator to use directly. The creator also often says if they want it shared or how which I try to respect when I see it mentioned.
1. Bookmark manager. I can write "maven download sources", click on Baeldung, and copy & paste the command from there. I did that 100 times and I'll do it a 101st time. I have plenty of webpages that I know exist and know how to find. I'm too lazy to actually bookmark and organize them, and Google works just fine for me.
2. Search for new knowledge in general domains. For this category of queries I sometimes use ChatGPT, but not always. It's hard to formulate the rules, but I have a feeling for which tool is more useful for a given question. Sometimes it's just quicker to type a few keywords into Google rather than ask ChatGPT a full-fledged question and wait for it to return a full-fledged answer.
3. Search for new knowledge in arcane domains. For example I might need new info about my city, where to buy a laptop, etc. ChatGPT might know about the US, but its knowledge about Kazakhstan is definitely limited, shallow, and outdated, so the real web comes to the rescue here.
4. Using Google-specific functions. For example I very often use queries like "site:bla.com something", because website-hosted searches are braindead in 100% of cases and I wouldn't even bother with them.
For me, ChatGPT's main function is to create new content. For example I might want to start a new Go project, so I'll ask ChatGPT to write a hello-world Golang tutorial for me, then ask follow-up questions like what identifier naming I should follow, etc. There are, of course, hundreds of similar articles on the Internet, but ChatGPT delivers tailored data much faster.
If I find that I really need to be more extensively descriptive, or to actually ask a question, I'll use an LLM, but that's pretty rare.
Here's an example of the latter that I did yesterday (this is Google Gemini):
"Hello! There is a poetic device where one line of a poem can be read both as the continuation of the previous line and as the first part of the following line. What is that called?"
"Hello! That poetic device is called an anastrophe or sometimes a hyperbaton, but these terms are more generally used for the inversion of the usual order of words or clauses in a sentence.
"While anastrophe or hyperbaton can create the effect you're describing, there isn't one single, widely recognized term specifically for a line that functions as both the end of one thought and the beginning of another in the way you've described.
"However, the effect you're describing is closely related to the idea of enjambment (also known as "run-on lines"). Enjambment is when a line of poetry ends without punctuation or a natural pause, and the thought continues onto the next line.
"..."
I proceeded to provide an example, saying "Points if you know what song this is ;-)", and Gemini confidently misidentified the song. (The correct song is "Black Cow" by Steely Dan; Gemini told me that the lyrics I'd quoted were from Bob Dylan's "Positively 4th Street," which doesn't contain any lyrics that even resemble what I'd quoted.)
1. No prompt about decline/accepting cookies every time I want to look something up.
2. No ads.
The results are mediocre in the same way using Google is.
The other day I was also searching for something dumb: how to hammer a nail into concrete.
Google will find me instructions for a hammer-drill... no I just have a regular hammer. There's a link from wikiHow, which is okay, but I feel like it hallucinates as much as AI. Actually I just opened the link and the first instruction involves a hammer drill too. The second one is what I wanted, more wordy than ChatGPT.
Google then shows YouTube which has a 6 minute video. Then reddit which has bad advice half the time. I'm an idiot searching for how to hammer nails into a wall. I do not have the skill level to know when it's BS. Reddit makes me think I need a hammer drill and a fastener. Quora is next and it's even worse. It says concrete nails bend when hit, which even I know is false. It also convinces me that I need safety equipment to hit a nail with a hammer.
I just want a checklist to know that I'm not forgetting anything. ChatGPT gives me an accurate 5-step plan and it went perfectly.
For more general searches, depending on the topic, DDG is close to useless because of link farms, AI slop, and returning results that aren't really what I'm looking for (some of the keywords weigh too much). But I suspect this is a common problem across all search engines, so I'm not looking for a replacement. It is frustrating though. I can't believe the information doesn't exist; it's just unreachable.
I don't search using AI. Generally I'm not looking for information that can be distilled into an "answer"; and there's also that DDG is not feeding me an AI answer (I think? Maybe I'm not paying attention).
I use an LLM to generate regular expressions.
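This is one of the safer uses of generated output, since a regex is cheap to sanity-check before trusting it. A minimal sketch in Python — the ISO-date pattern here is a hypothetical example of what an LLM might hand back, not any particular model's answer:

```python
import re

# Hypothetical LLM output: a regex for ISO-8601 dates (YYYY-MM-DD).
# Before using a generated pattern, run it against a few hand-written
# positive and negative cases instead of trusting it on sight.
pattern = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

should_match = ["2024-01-31", "1999-12-01"]
should_not_match = ["2024-13-01", "2024-00-10", "24-01-31"]

for s in should_match:
    assert pattern.match(s), f"expected match: {s}"
for s in should_not_match:
    assert not pattern.match(s), f"unexpected match: {s}"

print("all regex checks passed")
```

A few assertions like this catch the common failure mode where the generated pattern is subtly too loose or too strict.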
I have not been impressed by the results. In my experience, LLMs used this way generally output confident-sounding information but have one of two problems the majority of the time:
- The information is blatantly wrong, from a source that doesn't exist.
- The information is subtly wrong, and generated a predictive chain that doesn't exist from part of a source.
I have found them about on-par with the reliability of a straightforward Google search with no constraints, but that is more of a condemnation of how poor Google's modern performance as a search engine is, than an accolade for using LLMs for search.
Oh, and a major reason why Google sucks now? AI enshittification. They basically jettisoned their finely tuned algorithm in favor of "just run it through the LLM sausage grinder".
As for those AI chatbots - they are anything but useful for general search purposes beyond surface-level answers, which you can't fully trust because they (still) hallucinate a lot. I tell ChatGPT: "Give me a list of good X. (And don't you make anything up!!!)" - yeah, with those bangs; and it still makes shit up.
Liking it a lot.
Rest? Still search engines
AI is a better search for now because SEO and paid prioritization in search hasn't infested that ecosystem yet but it's only a matter of time.
I dropped Google search years ago, but every engine is experiencing enshittification.
I'm very disappointed in Apple that changing the default search engine in Safari requires you to install a Safari extension. Super lame stuff.
Which is kind of a problem, especially for Google, because their incentive to limit AI slop in search results is reduced when AI is one of their products, and they stand to benefit from search quality declining across the board in relation to AI answers.
On the other hand every time I've used language models to find information I've gotten back generic or incorrect text + "sources" that have nothing to do with my query.
For political stuff, I avoid wikipedia and just search engines in general and ask Grok/ChatGPT, specifying the specific biases I want it to filter out and know pieces of misinformation for it to ignore.
Gemini is similar.
I sometimes use phind and find myself jumping directly to the sources.
Consider paying for kagi.
Kagi is like Google in its prime - fast, relevant, and giving a range of results.
1. *Browsing*
This can be completely avoided. Here is a thing you can do in Firefox, with some tweaks, in order to achieve no-search browsing:
- Remove search suggestions in (about:preferences#search)
- Use the [History AutoDelete](https://addons.mozilla.org/en-US/firefox/addon/history-autod...) addon to remove searches from your History. This will avoid searches from your history to pollute the results
- Go to (about:config) and set `browser.urlbar.resultMenu.keyboardAccessible` to `false`
Now when you Ctrl + L into the tab, you will get results from your history, bookmarks and even open tabs. And the results are only a few Tab presses away, no need to move your hands off the keyboard.
If you don't like the results and want to launch a search anyway, just press Enter instead and it will launch a search with the default search engine. A cool trick is to type % + space in the awesome bar to move around opened tabs. You can also specifically look into bookmarks with * and history with ^.
P.S : Ctrl + L, Ctrl + T, Ctrl + W and Ctrl + Alt + T are your best friends.
P.P.S: Now you can also learn more about custom search engines : https://askubuntu.com/a/1534489
2. *Quick answer* on a topic. This is the second most common use case and what Google has been trying to optimize for for a long time. Say you want to know how many people there are in Nepal, or what the actual percentage of blue-eyed people in Germany is. This is where LLMs shine, I think, but to be fair Google is just as good for this job.
3. *Finding resources* to work with. This one is a bit on the way out because it's what people who want to understand want, but we are probably few. This is valuable because those resources do not just give an answer but also provide the rationale/context/sources for the answer. But.
On the one hand, most people just want the answer, and most people can be you if, even though you deem yourself a curious person, you don't have the time right now to actually make the effort to understand. On the other hand, LLMs can craft tutorials and break down subjects for you, which makes those resources much less valuable. I kind of feel like the writing is on the wall, and the future for this use case is "curating" search engines that will give you the best resources and won't be afraid to tell you "Nothing of value turned up" instead of giving you trash. Curious to hear your thoughts about that.
Sadly, search is massively enshittified by AI-generated SEO'd crap...