Comment by kristiandupont

6 days ago

> I agree that this isn't a very interesting example, but your statement is: "just asking the model to do a simple transform". If you assert that it understands when you ask it things like that, how could anything it produces not fall under the "already in the model" umbrella?

I didn't say it wasn't an interesting example -- I said it wasn't an example of LLMs generating things they have not seen before.

> how could anything it produces not fall under the "already in the model" umbrella

It doesn't. That is exactly the point of my comment.