Comment by ane
3 hours ago
Do the research yourself, the old-fashioned way? Search for things, write them down, summarize?
The problem with LLMs outputting English is that they're very good at bullshit, and it can be really hard to see through the nonsense. The output can also be skewed by the model's parameters, and that skew is really hard to spot.
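To make the "skewed by parameters" point concrete, here's a toy sketch (the logits are made up, nothing model-specific) of how a single sampling parameter, temperature, reshapes a token distribution:

```c
#include <math.h>
#include <stdio.h>

/* Toy illustration: temperature rescales logits before softmax.
 * Compile with: cc temp.c -lm && ./a.out
 * The logits below are invented for the example. */
int main(void) {
    double logits[3] = {2.0, 1.0, 0.5}; /* made-up scores for three tokens */
    double temps[3]  = {0.2, 1.0, 2.0}; /* low, default, high temperature */

    for (int t = 0; t < 3; t++) {
        double p[3], sum = 0.0;
        for (int i = 0; i < 3; i++) {
            p[i] = exp(logits[i] / temps[t]); /* divide logits by temperature */
            sum += p[i];
        }
        printf("T=%.1f:", temps[t]);
        for (int i = 0; i < 3; i++)
            printf(" %.2f", p[i] / sum); /* normalized probability */
        printf("\n");
    }
    return 0;
}
```

At T=0.2 the top token dominates; at T=2.0 the distribution flattens. Either way the text comes out fluent, which is exactly why the skew is hard to spot by reading.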
The compiler analogy doesn't work: compilers are (mostly) deterministic, and I can verify their output if I want to: just ask the compiler to emit assembly.
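For instance, with gcc (clang's -S flag works the same way), you can compile a trivial file straight to assembly and read what it actually emitted:

```c
/* add.c — a hypothetical example file.
 * Emit assembly with: gcc -S -O2 add.c -o add.s
 * then open add.s to verify the generated code. */
int add(int a, int b) {
    return a + b;
}
```

Run the compiler twice on the same input with the same flags and you get the same add.s, which is the determinism an LLM doesn't give you.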
With code generation I can also more or less instantly see whether the code is correct, because code uses far fewer words than human language. The same applies to images: it would take even less time to see whether a generated image is correct. That said, I don't use AI for image generation, since I have no use for it.