
Comment by hehehheh

19 hours ago

It's the same as with all AI: you need someone thorough to check what it did.

LLM-generated code needs to be read line by line. That's still a net win, because reading code is faster than googling and then typing it yourself.

You can't detect hallucinations in general.

A (costly) partial workaround is to compare responses from different models, since they don't hallucinate in exactly the same way.
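
Roughly what that cross-check could look like (a minimal sketch; the ask_model_* helpers are hypothetical stand-ins for whatever model APIs you actually call, and the similarity score is a crude proxy for comparing real claims):

```python
import difflib

def ask_model_a(prompt: str) -> str:
    # Hypothetical placeholder: imagine this calls model A's API.
    return "The capital of Australia is Canberra."

def ask_model_b(prompt: str) -> str:
    # Hypothetical placeholder: imagine this calls model B's API.
    return "The capital of Australia is Sydney."

def agreement_ratio(a: str, b: str) -> float:
    # Crude textual similarity; a serious check would compare extracted claims.
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def cross_check(prompt: str, threshold: float = 0.8) -> None:
    a = ask_model_a(prompt)
    b = ask_model_b(prompt)
    score = agreement_ratio(a, b)
    if score < threshold:
        # Disagreement is the signal: flag for a human to read closely.
        print(f"Disagreement (similarity {score:.2f}), review by hand:")
        print(f"  model A: {a}")
        print(f"  model B: {b}")
    else:
        print(f"Models roughly agree (similarity {score:.2f}): {a}")

if __name__ == "__main__":
    cross_check("What is the capital of Australia?")
```

The point isn't that the similarity metric is good (it isn't); it's that disagreement between independent models is a cheap-to-compute flag telling you where the line-by-line human review should focus.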