Comment by lionkor

1 day ago

Not only are you wrong (LLMs are horrible at reproducing anything that isn't fairly ABUNDANT in the training data), but it's also quite sad.

An AI can write a whole book on anything. You can even make up a phenomenon out of thin air and have it produce a factual-sounding book about it.

How that isn't a clear indicator to you that it produces loads and loads of BS, I really don't know.

It works because if you want information on, say, React, Python, or Prolog, whatever ChatGPT generates is quickly verifiable: you have to write code to test it anyway.
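
To make that concrete, here's a minimal sketch of that verification loop. The stability claim below is just a hypothetical example of the kind of thing an LLM might tell you:

```python
# Hypothetical example: say an LLM tells you that Python's sorted()
# is stable, i.e. items that compare equal keep their original order.
# Verifying that claim takes seconds:

pairs = [("b", 1), ("a", 2), ("b", 0), ("a", 1)]
result = sorted(pairs, key=lambda p: p[0])

# If sorted() is stable, the "a" items and the "b" items must each
# appear in their original relative order.
assert result == [("a", 2), ("a", 1), ("b", 1), ("b", 0)]
print("claim checks out: sorted() is stable")
```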

Even better, it often shows me new ways of doing things.

I haven't bought a book in a while, but I'm reading a lot, like really a lot.