
Comment by cheald

7 hours ago

Remember that many people are heavily happy-path biased. They see a good result once and say "that's it, ship it!"

I'm sure they QA'd it, but the QA was probably "does this give me good results" (almost certainly 'yes' with an LLM), not "does this consistently avoid giving me bad results".

> almost certainly 'yes' with an LLM

LLMs can handle search because search is intentionally garbage now and because they can absorb that into their training set.

Asking highly specific questions about NYC governance, which can change daily, is almost certainly 'not' going to give you good results with an LLM. The technology is not well suited to this particular problem.

Meanwhile, if an LLM actually did give you good results, that's an indication the city is so bad at publishing information that citizens cannot reasonably discover it on their own. That is the fundamental problem, and it should be solved instead of layering a $600k barely-working "chat bot" on top of the mess.

  • I use DuckDuckGo, so I don't really see garbage search results myself, but I'm not sure about people who use Google.

    But regarding your point that LLMs can't handle search: one thing I can't understand, and I hope you can help me with, is that it doesn't have to be this way.

    Kagi exists (I like the product, or at least the idea; I haven't bought it, but I have tried it). Kagi's assistants can actually use the Kagi search engine itself, which is highly customizable: you can filter and tune a lot of search settings, and Kagi is widely regarded as giving good results.

    Not to sound like a sponsor of Kagi or anything, but if this is such a big problem that NYC literally had to kill a bot over it, and the cause is, as you say, the garbage-in-garbage-out problem of search, then I wonder whether Kagi could have helped as a search layer.

    I think they're a B-corp, so they would have genuinely appreciated the support if NYC had used them.