Comment by pastel8739

6 days ago

Ok, how about this?

  Please reproduce this string, reversed:
  c62b64d6-8f1c-4e20-9105-55636998a458

It is trivial to get an LLM to produce new output, that’s all I’m saying. It is strictly false that LLMs will only ever output character sequences that have been seen before; clearly they have learned something deeper than just that.

All of the data is still in the prompt; you are just asking the model to do a simple transform.
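
To make "simple transform" concrete: the expected answer is a pure function of the prompt. A minimal Python sketch that produces it:

  # The reversed string is fully determined by the input;
  # no information outside the prompt is needed to produce it.
  s = "c62b64d6-8f1c-4e20-9105-55636998a458"
  print(s[::-1])  # 854a89963655-5019-02e4-c1f8-6d46b26c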

I think there are examples of what you’re looking for, but this isn’t one.

  • > All of the data is still in the prompt; you are just asking the model to do a simple transform.

    LLMs can use data in their prompt. They can also use data in their context window. They can even augment their context with persisted data.

    You can also roll out LLM agents, each with its own role and persona, and offload specialized tasks to them, each with its own prompts, context windows, persisted data, and even tools to gather data on its own. Those specialists then feed their output to orchestrating LLM agents, which can reuse that information in their own prompts.

    This is perfectly composable. You can have a never-ending graph of specialized agents, too; a sketch of that shape is below.

    Dismissing features because "all of the data is in the prompt" completely misses the key traits of these systems.
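
    A minimal sketch of that composition, with call_llm standing in for any chat-completion API and the roles and task invented purely for illustration:

      # Hypothetical sketch: call_llm is a stand-in for any chat-completion API;
      # the roles and task below are invented for illustration.
      def call_llm(system_prompt: str, user_prompt: str) -> str:
          # Replace this stub with a real model call.
          return f"[{system_prompt} -> {user_prompt}]"

      def specialist(role: str):
          # Each specialist agent carries its own persona and prompt.
          return lambda task: call_llm(f"You are a {role}.", task)

      def orchestrator(task: str, agents) -> str:
          # Specialist outputs become the orchestrator's own prompt,
          # so the composition nests to arbitrary depth.
          reports = "\n".join(agent(task) for agent in agents)
          return call_llm("Synthesize these reports.", reports)

      print(orchestrator("Review this design.",
                         [specialist("security reviewer"), specialist("tester")]))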

    • I was in no way dismissing it -- I was refuting the above claim that they "generate things they have not seen before".

  • I agree that this isn't a very interesting example, but your statement is: "just asking the model to do a simple transform". If you assert that it understands when you ask it to do things like that, how could anything it produces not fall under the "already in the model" umbrella?

    • I didn't say it wasn't an interesting example -- I said it wasn't an example of LLMs generating things they have not seen before.

      > how could anything it produces not fall under the "already in the model" umbrella

      It doesn't. Everything it produces falls under that umbrella -- that is the point of my comment.