
Comment by t55

2 days ago

What do you mean, that the LLM abstracts things for us while we speak to it?

No, I meant something else. As you said, we humans love clean abstractions, and we love building on top of them. LLMs are trained on data produced by us, so I wonder whether they will inherit this trait: end up loving good abstractions too, and find it easier to build on top of them. The other possibility is that they move-37 the whole abstraction shebang, and find that always building something bespoke, from low-level primitives, beats constraining yourself to some general-purpose abstraction.
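
To make the contrast concrete, here is a minimal, hypothetical sketch (the task and names are illustrative, not from the thread): the same computation expressed once through a general-purpose abstraction and once as bespoke low-level code.

```python
# Hypothetical illustration (not from the thread): the same computation
# written two ways, via a general-purpose abstraction and bespoke.

from functools import reduce

data = [3, 1, 4, 1, 5, 9]

# "Clean abstraction" style: compose a general-purpose building block.
sum_of_squares = reduce(lambda acc, x: acc + x * x, data, 0)

# "Bespoke, low-level" style: a hand-rolled loop specialized to this
# one task, with no reusable abstraction to constrain (or help) the author.
total = 0
for x in data:
    total += x * x

assert total == sum_of_squares == 133  # 9 + 1 + 16 + 1 + 25 + 81
```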

  • It's an interesting idea.

    If code is only ever updated by an LLM, does it still benefit from abstractions? After all, they're really a tool for us lowly sapients, a way to break complex problems down. Maybe LLMs will create their own class of abstractions, different from our own but useful for their tasks.

  • Ah, gotcha. I think that with the new trend of RL'ing models, move 37 may come sooner than we think: just give a pretrained model some outcome goal, and the way it gets there may well use low-level code without clean abstractions.