Comment by yannyu
2 days ago
It's not just the pretraining; it's the entire scaffolding between the user and the LLM itself that contributes to the illusion. How many people would continue assuming that these chatbots were conscious or intelligent if they had to build their own context manager, memory manager, system prompt, personality prompt, and interface?
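To make the point concrete, here is a minimal sketch of the kind of scaffolding a typical chat app assembles around a single model call. All names and structure here are illustrative assumptions, not any particular product's code; the model itself is passed in as a plain callable.

```python
from dataclasses import dataclass, field

@dataclass
class ChatScaffold:
    """Hypothetical wrapper around a stateless LLM call (illustrative only)."""
    system_prompt: str                                  # behavioral instructions the user never sees
    personality_prompt: str                             # tone/persona layered on top
    memory: list[str] = field(default_factory=list)     # long-term facts recalled across sessions
    history: list[dict] = field(default_factory=list)   # rolling conversation context

    def build_context(self, user_message: str, max_turns: int = 20) -> list[dict]:
        """Assemble the full message list the model actually receives."""
        recalled = "\n".join(self.memory)
        messages = [
            {"role": "system", "content": self.system_prompt},
            {"role": "system", "content": self.personality_prompt},
            {"role": "system", "content": f"Known facts about the user:\n{recalled}"},
        ]
        # Context manager: keep only the most recent turns that fit.
        messages += self.history[-max_turns:]
        messages.append({"role": "user", "content": user_message})
        return messages

    def chat(self, user_message: str, llm) -> str:
        """One round trip: the 'chatbot' is this scaffold plus a stateless model."""
        messages = self.build_context(user_message)
        reply = llm(messages)  # the model only ever sees the assembled list
        self.history += [
            {"role": "user", "content": user_message},
            {"role": "assistant", "content": reply},
        ]
        return reply
```

The model call itself is stateless; every appearance of memory, continuity, or persona comes from what the wrapper chooses to put into `messages` on each turn.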