Comment by dmos62
1 day ago
This discussion might be a bit more grounded if we were to discuss a concrete LLM response. Seems pretty freaking good to me:
https://chatgpt.com/share/6955a171-e7a4-8012-bd78-9848087058...
You've prompted it by giving it the learning sequence from the post you're replying to, which somebody who actually needs the tutorial wouldn't be able to specify, and it replied with a bunch of bullets and lists that, speaking as a person with general programming knowledge but almost no experience writing raytracing algorithms (i.e. presumably the target audience here), look like they have zero value to me in learning the subject.
> zero value to me in learning the subject
Perplexing how different our perspectives are. I find this super useful for learning, especially since I can continue chatting about any and all of it.
Now compare that to the various books people mentioned at [0]. It isn't even remotely close.
You spoonfed ChatGPT, and it returned a bunch of semi-relevant formulas and code snippets. But a tutorial? Absolutely not. For starters, it never explains what it is doing, or why! It is missing some crucial concepts, and it doesn't even begin to describe how the various parts fit together.
If this counts as "pretty freaking good" already, I am afraid to ask what you think average educational material looks like.
Sure, it's a nice trick that we can now get an LLM to semi-coherently stitch some StackOverflow answers together, but let's not get ahead of ourselves: there's still a lot of room for improvement before it is on par with human writing.
[0]: https://news.ycombinator.com/item?id=46448544