Comment by philipwhiuk

2 months ago

> You can’t just prompt your way out of a systemic flaw

You don't need to, as long as you only use LLMs like these in cases where incorrect output isn't of any consequence. If you're using an LLM to generate some placeholder bullshit to fill out a proof-of-concept website, you don't care if it claims strawberries have tails; you just need it to generate some vaguely coherent crap.

For things where factuality is even a little important, you need to treat these things like a toddler that got their hands on a thesaurus and an encyclopaedia (one that's a few years out of date): go through everything they produce and fact-check any statement you're not already confident about.

Unfortunately, more and more people seem to be mistaking LLMs for search engines (no doubt thanks to LLM companies' attempts to make people think exactly that), so this will only get worse. For now we can still catch these models out with simple examples, but as the fuckups grow sparser, more people will believe these things tell the actual truth.