Comment by weitendorf

1 year ago

So many of these examples simply overlook that LLMs experience the world through a 1-dimensional stream of tokens, while we experience those same tokens in 2 dimensions.

Try this: take all those ASCII representations of games and replace every newline with the letter Q, to convert the encoding into something approximating what LLMs "see" (not a table, but a single stream interspersed with Qs at a regular interval). Pretty hard, right?
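
Here's roughly what I mean, as a quick Python sketch (the tic-tac-toe board is just a made-up example):

```python
# A board that's trivial to read in 2D becomes much harder to parse once
# the newlines are flattened away.
board = (
    "X|O|X\n"
    "-+-+-\n"
    "O|X|O\n"
    "-+-+-\n"
    "X| |O\n"
)

# Roughly what the model "sees": one long stream, with the 2D structure
# encoded only as a Q appearing at a regular interval.
flattened = board.replace("\n", "Q")
print(flattened)  # X|O|XQ-+-+-QO|X|OQ-+-+-QX| |OQ
```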

> LLMs cannot reset their own context

If you have a model hooked up to something agentic, I don't see why it couldn't perform context manipulation on itself, or even selective real-time finetuning. Think you'll need some info for the long haul? Kick off some finetuning. Think you'd rather have one page of documentation in context than another? Swap them out in a single iteration. When you call LLMs over APIs, you usually provide the entire context with each invocation anyway...
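
To make that concrete, here's a rough Python sketch of the kind of loop I mean (call_llm, the doc names, and the SWAP_TO marker are all made-up placeholders, not any particular vendor's API):

```python
# Hypothetical sketch: the caller owns the context and rebuilds it on every
# invocation, so an agent loop is free to edit its own context between calls.
def call_llm(messages):
    """Stand-in for a real chat-completion API call."""
    return "SWAP_TO: style_guide"  # canned reply so the sketch runs end to end

docs = {
    "api_reference": "...full text of the API reference...",
    "style_guide": "...full text of the style guide...",
}

history = []                  # turns the agent chooses to carry forward
active_doc = "api_reference"  # which doc page is currently "in context"

for step in range(3):
    # Rebuild the entire context from scratch each iteration: system prompt,
    # whichever doc page is currently selected, plus the retained history.
    messages = [
        {"role": "system", "content": "You are a coding agent."},
        {"role": "system", "content": docs[active_doc]},
    ] + history

    reply = call_llm(messages)
    history.append({"role": "assistant", "content": reply})

    # Swap documentation pages in and out between iterations; nothing about
    # the API forces the old page to stay in context.
    if "SWAP_TO: style_guide" in reply:
        active_doc = "style_guide"
```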

> Devin

It's not that Devin is massively smarter or more agentic, just that it has the opportunity to correct its mistakes rather than committing to the first thing that comes out of it (and it's being handheld by a vastly more knowledgeable SWE in its demos). You're seeing cherry-picked examples (I also work on GenAI-for-coding). Just like a tragically incompetent employee can waste literal years on a project while diligently plugging away at some task, so too can agentic models go off on a wild goose chase that accomplishes nothing besides making Nvidia more money. Just because something is highly persistent doesn't mean it will "converge" on a correct outcome.