Comment by jacquesm

1 day ago

I think a key insight from your comment is that before we give anything permanent billing in our brains, we test it against our world model, and if it doesn't fit we reject it. LLMs accept anything in the training set, so curation of the training set is a big factor in the quality of an LLM's output. That's an incremental improvement, not a massive leap forward, but it will definitely help reduce the percentage of bullshit created.