Comment by crote
1 day ago
> Why buy this book when ChatGPT can generate the same style of tutorial for ANY project that is customized to you?
Isn't it obvious? Because the ChatGPT output wouldn't be reviewed!
You buy books like these exactly because they are written by a professional, who has taken the time to divide it up into easily digestible chunks which form a coherent narrative, with runnable intermediate stages in-between.
For example, I expect a raytracing project to start with simple ray casting of single-color objects. After that it can add things like lights and Blinn-Phong shading, continue with Whitted-style recursive raytracing for shiny reflections and transparent objects, progress to modern path tracing with things like BRDFs, and end with BVHs to make it not horribly slow.
You can stop at any point and still end up with a functional raytracer, and the added value of each step is immediately obvious to the reader. There's just no way in hell ChatGPT at its current level is going to guide you flawlessly through all of that if you start with a simple "I want to build a raytracer" prompt!
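To make that first stage concrete, here's a minimal sketch in Python of "simple ray casting of single-color objects": one ray per pixel, one hard-coded sphere, flat colors. The scene setup and the PPM output are my own throwaway choices, not anything from a particular book.

    WIDTH, HEIGHT = 320, 240
    CAMERA = (0.0, 0.0, 0.0)          # camera at the origin, looking down -z
    SPHERE_CENTER = (0.0, 0.0, -3.0)  # one hard-coded sphere in front of it
    SPHERE_RADIUS = 1.0

    def hit_sphere(origin, direction):
        # Solve |o + t*d - c|^2 = r^2 for t: the ray hits the sphere
        # iff the discriminant of the resulting quadratic is >= 0.
        oc = tuple(o - c for o, c in zip(origin, SPHERE_CENTER))
        a = sum(d * d for d in direction)
        b = 2.0 * sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - SPHERE_RADIUS ** 2
        return b * b - 4.0 * a * c >= 0.0

    with open("out.ppm", "w") as f:       # plain-text PPM, viewable anywhere
        f.write(f"P3\n{WIDTH} {HEIGHT}\n255\n")
        for y in range(HEIGHT):
            for x in range(WIDTH):
                # Map the pixel onto a view plane at z = -1.
                u = (2.0 * (x + 0.5) / WIDTH - 1.0) * (WIDTH / HEIGHT)
                v = 1.0 - 2.0 * (y + 0.5) / HEIGHT
                ray = (u, v, -1.0)
                r, g, b = (200, 60, 60) if hit_sphere(CAMERA, ray) else (25, 25, 35)
                f.write(f"{r} {g} {b}\n")

Everything after this stage (shading, recursion, BRDFs, BVHs) is layered onto that same loop.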
I heard the other day that LLMs won't replace writers, just mediocre writing.
On the one hand, I can see the point: you'll never get ChatGPT to come up with something on par with the venerable Crafting Interpreters.
On the other hand, that means that all the hard-won lessons from writing poorly and improving with practice will be eliminated for most. When a computer can do something better than you right now, why bother trying to get better on your own? You never know if you'll end up surpassing it or not. Much easier to just put out mediocre crap and move on.
Which, I think, means that we will see fewer and fewer masters of crafts as more people are content with drudgery.
After all, it is cheaper and generally healthier and tastier to cook at home, yet for many people fast food or ordering out is a daily thing.
I have to disagree. My brother-in-law has started to use ChatGPT to punch up his personal letters and they’ve become excerpts from lesser 70s sitcoms. From actually personal and relevant to disturbingly soulless.
Right? If I could get the same output by just talking to AI myself, what's the point of the human connection? Be something, be someone. Be wrong or a little rude from time to time, it's still more genuine.
Your whole point is disproven by woodworking as a craft, and many other crafts for that matter. There are still craftspeople doing good work with wood even though IKEA and such have captured the furniture industry.
There will still be fine programmers developing software by hand after AI is good enough for most.
> There will still be fine programmers developing software by hand after AI is good enough for most.
This fallacy gets brought up very frequently: yes, there are still blacksmiths, people who ride horses, people who use typewriters, even people who use fountain pens, but they don't really exist in any practical or economic sense outside of the Portland, OR of ten years ago.
No technological advancement that I'm aware of completely eliminates one's ability to pursue a discipline as a hobbyist or as a niche for rich people. It's rarely impossible, but I don't think that's ever anyone's point. Sometimes they even make a comeback, like vinyl records.
The actual scope of the topic is the usual one: the chain of incentives that enables pursuing something as a viable exchange of value, particularly in a market that needs a certain amount of volume and doesn't have shady protectionism working for it the way standard textbooks do.
Writing, like the other liberal arts, has long been a target of parental scrutiny, and my impression is that those disciplines have long been the pursuit of people who can largely get away with not needing a viable source of income, particularly during the apprentice and journeyman stages.
Programming has largely been exempt from that, but if I were in the midst of a traditional comp sci program, facing the existential dread that is the U.S. and Canadian economies (at least), along with the effective collapse of a path to financial stability, I'd be stupid not to be considering a major pivot; to what, I don't know.
Yet woodworking is usually not a viable business. As a craft - sure. As a day job to provide for your family - not really. The guys who built custom tables for me five years ago are out of business.
Pretty much the same story with any craft.
This comparison is hardly apt as formulated, but it does fit tailors and seamstresses. A few decades ago, numerous tailors made clothes to measure and skilled seamstresses repaired them. Today, since clothes are made by machines and the cost of production has fallen significantly, making bespoke clothes has become a niche job, almost extinct, and instead of repairing clothes, people prefer to buy new ones.
These jobs have not disappeared, but they have become much less common and attractive.
High-quality carpentry has a market of people who pay a lot of money for one-off projects.
There is not really a similar market in software.
I'm not saying there won't be fine programmers, etc., but with woodworking I can see how a market exists that will support you while you develop your skills; I don't see that with software, so the path seems much less clear to me.
LLMs replace bad writing with mediocre writing.
One thing getting better at writing does is make you better at organizing your thoughts.
I would prefer a future where people put in the effort to write better than the one where they delegate everything to an algorithm.
LLMs will make mediocre and bad writers think they are good writers. It will also make consumers of said mediocre and bad writing think they are consuming worthwhile stuff. Not only will writing get worse but expectations for it will sink as well.
(I’ve written this in the future tense but this is all in fact happening already. Amid the slop, decent writing stands out more.)
What is the inherent value of being able to write well?
Is it not possible that people consider their craft to exist at a higher level than the written word? For example, writing facile prose is very different from being a good storyteller. How many brilliant stories has the world missed out on because the people who imagined them didn't have the confidence with prose to share them?
Your last sentence is one answer to the question you ask in the first sentence.
100% right. I buy lots of Japanese cookbooks secondhand. I found an Okinawa cookbook for $8. When I received it, it was clear the author was just a content farmer pumping out recipe books with copied online recipes. Once I looked up their name, I saw hundreds of books across cooking, baking, etc. There was no way they even tried all of the recipes.
So yes, review and “narrative voice” will be more valuable than ever.
Agreed. Still amazed that people keep trusting a service with something like a 60% failure rate. Who would want to buy something that fails over half the time?
Shame OP stopped their book; it would easily have found an audience. I know many programmers who love this style of book.
Unfortunately (fortunately?) it does not have anything like a 60% failure rate. Yes, there's some non-negligible error rate, but it's lower than the threshold that would make the average user throw it in the bin. We can pretend otherwise, but that doesn't pass the real-life sniff test.
You don't know when it's wrong.
This reminds me of the VHS vs Betamax debate.
VHS offered longer but lower-quality playback; Betamax was shorter but higher quality.
It wasn't clear when VCRs came out which version consumers would prefer. Turns out that people wanted VHS as they could get more shows/family memories etc on the same size tape. In other words, VHS "won".
Most people have heard the above version, but Betamax was widely adopted in TV news. The reason: news preferred shorter, higher-quality video, since segments rarely lasted more than 5-10 minutes.
My point being, the market is BIG and is really made up of many "mini-markets". I can see folks who are doing work on projects with big downside risk (e.g. finance, rockets etc) wanting to have code that is tested, reviewed etc. People needing one off code probably don't care if the failure rate is high especially if failure cases are obvious and the downside risk is low.
The version I heard was that Sony wouldn't allow porn on Betamax, but VHS was less prudish. Hence VHS won the format war. Don't know how true this is.
In my dream world, you take that book plus information about yourself (how good of a programmer you already are), feed that into AI and get a customized version that is much better for you. Possibly shorter. Skips boring stuff you know. And slows down for stuff you have never been exposed to. Everyone wins.
Yes, and you get to ask questions. So, interactive.
Why do I need a machine to do that?
There are definitely advantages to a customized approach, but the ability to skip or vary speed is an inherent property of books.
Many conveniences in life are not truly needed.
> There's just no way in hell ChatGPT at its current level is going to guide you flawlessly through all of that if you start with a simple "I want to build a raytracer" prompt!
This is the entire crux of your argument. If it's false, then everything else you wrote falls apart, because all the consumer of the book cares about is the quality of the output.
I'd be pretty surprised if you couldn't get a tutorial exactly as good as you want, if you're willing to write a prompt that's a bit better than just "I want to build a ray tracer". I'd be even more surprised if LLMs can't do this within 6 months. And that's not even counting the benefits of using an LLM (something unclear in the tutorial? Ask and it's answered).
Indeed. The top-level comment is pretty much wishful thinking. At this point, if you tell a frontier LLM to explain things bottom up, with motivation and background, you usually get something that’s better than 95% of the openly available material written by human domain experts.
Of course, if you just look at top posts on forums like this one, you might get the impression that humans are still way ahead, but that’s only because you’re looking at the best of the best of the best stuff, made by the very best humans. As far as teaching goes, the vast majority of humans are already obsolete.
> ...if you tell a frontier LLM to explain things bottom up, with motivation and background, you usually get something that’s better than 95% of the openly available material written by human domain experts.
That's an extraordinary claim. Are there examples of this?
Why buy the book when Big AI can add it to their training data? Multitudes of people can then enjoy slightly higher-quality output without you being compensated a single cent.
Yes, this is the big change. There’s no more financial incentive to create such a work because Big AI captures all the views it would have gotten and thus all the revenue potential.
It would actually be nice to have a book-LLM. That is, an LLM that embodies a single (human-written) book, like an interactive book. With a regular book, you can get stuck when the author didn't think of some possible stumbling block, or thinks along slightly different lines than the reader. An LLM could fill in the gaps and elaborate on details when needed.
Of course, nowadays you can ask an LLM separately. But that isn’t the same as if it were an integrated feature, focused on (and limited to) the specific book.
I've not used it, but isn't this kind of what NotebookLM does?
You drag a source into it, such as a book's PDF, and then you have a discussion with it.
https://notebooklm.google
What I'm imagining is an LLM that is strongly tied to the book, in the sense of being RLHF'd for the book (or something along those lines), like an author able to cater to any reader interested in the book, but also confined to what the book is about. An LLM embodiment and generalization of the book. Not an LLM you can talk to about anything, where you just happen to be talking about some random book right now. The LLM should be clearly specific to the book. LLMs for different books would feel as distinct from each other as the books do, and you couldn't prompt-engineer the LLM to go outside the context of the book.
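For what it's worth, a cheap approximation is buildable today without the RLHF part: retrieve the most relevant passages from the book and instruct a general-purpose model to answer only from them. A rough Python sketch, where `chat` is a hypothetical stand-in for whatever LLM API you use and the retrieval is naive keyword overlap, just to show the shape:

    def chat(system: str, user: str) -> str:
        raise NotImplementedError("plug in a real chat API here")

    def split_into_chunks(book_text: str, size: int = 1000) -> list[str]:
        # Naive fixed-size chunking; a real system would split on sections.
        return [book_text[i:i + size] for i in range(0, len(book_text), size)]

    def retrieve(chunks: list[str], question: str, k: int = 3) -> list[str]:
        # Rank chunks by keyword overlap with the question (a stand-in
        # for proper embedding search).
        words = set(question.lower().split())
        ranked = sorted(chunks, key=lambda c: -len(words & set(c.lower().split())))
        return ranked[:k]

    def ask_book(book_text: str, question: str) -> str:
        context = "\n---\n".join(retrieve(split_into_chunks(book_text), question))
        system = ("You are an interactive companion to one specific book. "
                  "Answer only from the excerpts below; if the question is "
                  "outside the book's scope, say so.\n\n" + context)
        return chat(system, question)

It wouldn't feel as distinct per book as what you describe, but it does get you the "confined to the book" property.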
Tyler Cowen's GOAT book explores this in depth. Try it out! https://goatgreatesteconomistofalltime.ai/en
Genuinely have no idea what the novelty here is versus just uploading a PDF to ChatGPT. In terms of UX it is incredibly limited for working through a book.
Absolutely. And furthermore, when you prompt ChatGPT as you write your ray tracer, you don't know what the important things to ask are. Sure, you can get there with enough prompts of "what should I be asking you" or "explain the basics of" so-and-so. But the point of the book is that all of that work has already been done for you, in a vetted way.
If something is not clear in the book, can I ask it to explain it to me?
No, but an LLM is a great assistant while reading a book. In fact, some eReader interfaces have started to include them. Best of both worlds.
This discussion might be a bit more grounded if we were to discuss a concrete LLM response. Seems pretty freaking good to me:
https://chatgpt.com/share/6955a171-e7a4-8012-bd78-9848087058...
You've prompted it by giving it the learning sequence from the post you're replying to, which somebody who actually needs the tutorial wouldn't be able to specify. And it replied with a bunch of bullets and lists that, to a person with general programming knowledge but almost no experience writing raytracing algorithms (i.e. presumably the target audience here), look like they have zero value for learning the subject.
> zero value to me in learning the subject
Perplexing how different our perspectives are. I find this super useful for learning, especially since I can continue chatting about any and all of it.
Now compare that to the various books people mentioned at [0]. It isn't even remotely close.
You spoonfed ChatGPT, and it returned a bunch of semi-relevant formulas and code snippets. But a tutorial? Absolutely not. For starters, it never explains what it is doing, or why! It is missing some crucial concepts, and it doesn't even begin to describe how the various parts fit together.
If this counts as "pretty freaking good" already, I am afraid to ask what you think average educational material looks like.
Sure, it's a nice trick that we can now get an LLM to semi-coherently stitch some StackOverflow answers together, but let's not get ahead of ourselves: there's still a lot of improvement to be done before it is on par with human writing.
[0]: https://news.ycombinator.com/item?id=46448544
> Isn't it obvious? Because the ChatGPT output wouldn't be reviewed!
Reviewed by a human. It's trivial to take the output from one LLM and have another LLM review it.
Also, often mediocrity is enough, especially if it is cheap.
> There's just no way in hell ChatGPT at its current level is going to guide you flawlessly through all of that if you start with a simple "I want to build a raytracer" prompt!
I mean, maybe not "flawlessly", and not in a single prompt, but it absolutely can.
I've gone deep in several areas, essentially consuming around a book's worth of content from ChatGPT over the course of several days, each day consisting of about 20 prompts and replies. It's an astonishingly effective way to learn, because you get to ask it to go simpler when you're confused and to explain more, in whatever mode you want (e.g. focus on the math, focus on the geometry, focus on the code, focus on the intuition). And then whenever you feel like you've "got" the current stage, ask it what to move on to next, and whether there are choices.
This isn't going to work for cutting-edge stuff that you need a PhD advisor to guide you through. But for most stuff up to about a master's-degree level where there's a pretty "established" progression of things and enough examples in its training data (which ray-tracing will have plenty of), it's phenomenal.
If you haven't tried it, you may be very surprised. Does it make mistakes? Yes, occasionally. Do human-authored books also make mistakes? Yes, and often probably at about the same rate. But you're stuck adapting yourself to their organization and style and content, whereas with ChatGPT it adapts its teaching and explanations and content to you and your needs.
How did you verify that it wasn't bogus? Like, when it says "most of the time", or "commonly", or "always", how do you know that's accurate? How do those terms shape your thinking?
> when it says "most of the time", or "commonly", or "always", how do you know that's accurate?
Do you get those words a lot? If you're learning ray-tracing, it's math and code that either works or doesn't. There isn't a lot of "most of the time"?
Same with learning history. Events happened or they didn't. Economies grew at certain rates. Something that is factually "most of the time" is generally expressed as a frequency based on data.
You’re hitting at the core problem. Experts have done the intensive research to create guides on the Internet which ChatGPT is trained on. For example, car repairs. ChatGPT can guide you through a lot of issues. But who is going to take the time to seriously investigate and research a brand new issue in a brand new model of car? An expert. Not an AI model. And as many delegate thinking to AI models, we end up with fewer experts.
ChatGPT is not an expert, it’s just statistically likely to regurgitate something very similar to what existing experts (or maybe amateurs or frauds!) have already said online. It’s not creating any information for itself.
So if we end up with fewer people willing to do the hard work of creating the underlying expert information these AI models are so generously trained on, we see stagnation in progress.
So encouraging people to write books and do real investigative research, digging for the truth, is even more important than ever. A chatbot’s value proposition is repackaging that truth in a way you can understand, surfacing it when you might not have found it. Without people researching the truth, that already fragile foundation crumbles.
> You’re hitting at the core problem.
Are you writing in the style of an LLM as a gag, or just interacting with LLMs so much it's become ingrained?
I wouldn’t be surprised if publishers today delegated some of the reviewing to LLMs.
Does this type of ray tracing book exist? It's something I've never learned about, and I'd love to know what courses or books others have found valuable.
I really enjoyed this one as an introduction: https://www.gabrielgambetta.com/computer-graphics-from-scrat...
It touches on both ray traced and raster graphics. It lets you use whatever language and graphics library you want as long as you can create a canvas/buffer and put pixels on it so you can target whatever platform you want. It includes links to JavaScript for that if you want. (I didn’t want to use a new-to-me language so I used python and Pygame at the expense of speed.)
Happy New Year my friend and welcome to this wonderful world: https://raytracing.github.io/books/RayTracingInOneWeekend.ht...
And once you get beyond the books the sibling comments here have mentioned (I'd suggest starting with the Ray Tracing In One Weekend minibook series first before Physically Based Rendering), there's the Ray Tracing Gems series (https://www.realtimerendering.com/raytracinggems/) which is open access online with print editions for purchase.
(Disclosure: I contributed a chapter.)
Try Ray Tracing in One Weekend and its sequels [1], perhaps.
[1] https://raytracing.github.io/
In addition to the other suggestions, see also https://pbr-book.org/4ed/contents
“The Ray Tracer Challenge” by Jamis Buck is really good as well
> a professional, who has taken the time to divide it up into easily digestible chunks which form a coherent narrative, with runnable intermediate stages in-between.
Tangentially related, but I think the way to get to this is to build a "learner model" that LLMs could build and update through frequent integrated testing during instruction.
One thing that books can't do is go back and forth with you, having you demonstrate understanding before moving on, or noticing when you forget something you've already learned. That's what tutors do. The best books can do is put exercises at the end of a chapter, and pitch the next chapter at someone who can complete those exercises successfully. An LLM could drop a single-question quiz in as soon as you ask a weird question that doesn't jibe with the model, and fall back into review if you blow it.
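A rough sketch of what that loop might look like in Python, with `ask_llm` as a hypothetical stand-in for whatever chat API you use and a crude per-topic mastery score standing in for the learner model:

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug in a real chat API here")

    # topic -> estimated mastery in [0, 1]; this dict is the "learner model"
    learner = {"ray-sphere intersection": 0.9, "BRDFs": 0.3}

    def handle_question(topic: str, question: str) -> str:
        if learner.get(topic, 0.0) < 0.5:
            # Low confidence: drop in a single-question quiz before moving on.
            quiz = ask_llm(f"Write one short check-for-understanding question about {topic}.")
            answer = input(quiz + "\n> ")
            verdict = ask_llm(f"Question: {quiz}\nStudent answer: {answer}\n"
                              "Is it correct? Reply yes or no.")
            correct = verdict.strip().lower().startswith("yes")
            # Update mastery as a moving average of quiz results.
            learner[topic] = 0.5 * learner.get(topic, 0.0) + 0.5 * (1.0 if correct else 0.0)
            if not correct:
                # Blew the quiz: fall back into review before answering.
                return ask_llm(f"Re-explain {topic} from the basics, then answer: {question}")
        return ask_llm(question)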
> having you demonstrate understanding before moving on
Isn't that what the exercises are for?
> There's just no way in hell ChatGPT at its current level is going to guide you flawlessly through all of that if you start with a simple "I want to build a raytracer" prompt!
Have you tried? Lately? I'd be amazed if the higher-end models didn't do just that. Ray-tracing projects and books on 3D graphics in general are both very well-represented in any large training set.
Isn't the whole point to learn and challenge yourself? If you just wanted to render a 3-dimensional scene, there are already hundreds of open-source raytracers on GitHub.
Asking ChatGPT to "guide" you through the process is a strange middle ground between making your own project and using somebody else's, in which nothing new is created and nothing new is learned.
Ridiculous. If you go through this process with ChatGPT and don't learn anything, that's all on you.
Given the lack of a CS professor looking over your shoulder, what's more powerful than a textbook that you can hold a conversation with?
Claude is a better explainer, but yes they're all capable of teaching you to write a raytracer.
It has nothing to do with "raytracers are well-represented in the training set" though. I find it so strange when people get overly specific in an attempt to sound savvy. You should be able to easily think of like five other ways it could work.
> It has nothing to do with "raytracers are well-represented in the training set" though. I find it so strange when people get overly specific in an attempt to sound savvy. You should be able to easily think of like five other ways it could work.
Can you elaborate? Your first sentence seems to be saying that it's basically irrelevant whether they have been trained on text and code related to raytracing, and I have no idea why that would be true.
> Because the ChatGPT output wouldn't be reviewed!
So what? If it's not already, frontier LLM one-shot output will be as good as heavily edited human output soon.