Comment by d0liver

1 day ago

I need some information/advice -> I feed that into an imprecise aggregator/generator of some kind -> I apply my engineering judgement to evaluate the result and save time by reusing someone's existing work

This _is_ something that you can do with AI, but it's something that a search engine is better suited to because the search engine provides context that helps you do the evaluation, and it doesn't smash up results in weird and unpredictable ways.

Y'all think that AI is "thinking" because it's right sometimes, but it ain't thinking.

If I search for "refactor <something> to <something else>" and I get good results, that doesn't make the search engine capable of abstract thought.

AI is usually a better search engine than a search engine.

  • AI alone can't replace a search engine very well at all.

    AI with access to a search engine may present a more useful solution to some problems than a bare search engine, but the AI isn't replacing the search engine; it's using one.

    • The "Deep Research" modes in web-based LLMs are quite useful. They can take a day's worth of reading forums and social media sites and compress it into about 10 minutes.

      For example, I found a perfect 4K 120 Hz capable HDMI switch by using ChatGPT's research mode. It did suggest the generic, randomly named Chinese ones off Amazon, but there was one brand with an actual website and a history, based in Germany.

      I hadn't seen it recommended anywhere in my earlier searches, but when I searched for the brand specifically, I found only good reviews. Bought it, love it.

    • I did say "usually" and it was meant in reference to the use case you originally gave.

      Around 30% to 50% of things I used to google for, I now go to ChatGPT for. Often because the available context is better.

This seems like a great example of someone reasoning from first principles that X is impossible, while someone else doing some simple experiments with an open mind can easily see that X is both possible and easily demonstrated to be so.

Y'all think that AI is "thinking" because it's right sometimes, but it ain't thinking.

I know the principles of how LLMs work, I know the difference between anthropomorphizing them and not. It's not complicated. And yet I still find them wildly useful.

YMMV, but it's just lazy to declare that anyone who sees it differently than you just doesn't understand how LLMs work.

Anyway, I couldn't care less if others avoid coding with LLMs, I'll just keep getting shit done.

  • If you observe it at the right time, a broken clock will appear to be working, because it's right twice a day.