Comment by dgb23

12 hours ago

Thinking and reasoning cannot be abstracted away from the individual who experiences them and is changed by them.

LLMs are amazing, but they represent a very narrow slice of what thinking is. Living beings are extremely dynamic, and at once far more complex and far simpler.

There is a reason for:

- companies releasing new versions every couple of months

- LLMs needing massive amounts of training data produced by us, rather than generated by the model itself interacting with the world

- a massive amount of manual labor being required, both for data labeling and for reinforcement learning

- them not being able to guide themselves through a solution, but ultimately needing guidance at every decision point