Comment by ajross

7 hours ago

LLMs can write and run tests though.

You're cherry picking my little bit of wordsmithing. Obviously we aren't always wrong. I'm saying that our thought processes stem from hallucinatory connections and are routinely wrong on first cut, just like those of an LLM.

Actually I'm going further than that and saying that the first-cut token stream out of an AI is significantly more reliable than our personal thoughts. Certainly more reliable than mine, and I like to think I'm pretty good at this stuff.

I don't think the complaint about cherry picking is quite fair. Most of your original comment consists of claims that we're bullshit machines, our internal dialog is almost 100% fantasy, we're hallucinating, etc. Those claims may be true. But I'm hardly curating them out of nowhere.