Comment by falcor84

5 hours ago

As Yegge himself would agree, there's likely nothing particularly good about this specific architecture, but I think there's something massive here as a proof of concept for something bigger, beyond the realm of software development.

Over the last few years, people have been playing around with trying to integrate LLMs into cognitive architectures like ACT-R or Soar, with not much to show for it. But I think that here we actually have an example of a working cognitive architecture that is capable of autonomous long-term action planning, with the ability to course-correct and stay on task.

I wouldn't be surprised if future historians of science look back on this as an early precursor of what will eventually be adapted to give AIs full agentic executive functioning.