Comment by senordevnyc

1 day ago

This seems like a great example of someone reasoning from first principles that X is impossible, while someone else, running a few simple experiments with an open mind, can see that X is both possible and easy to demonstrate.

> Y'all think that AI is "thinking" because it's right sometimes, but it ain't thinking.

I know the principles of how LLMs work, and I know the difference between anthropomorphizing them and not. It's not complicated. And yet I still find them wildly useful.

YMMV, but it's lazy to declare that anyone who sees it differently from you just doesn't understand how LLMs work.

Anyway, I couldn't care less if others avoid coding with LLMs; I'll just keep getting shit done.

> If you observe it at the right time, a broken clock will appear to be working, because it's right twice a day.