Comment by simonw

1 year ago

When ChatGPT came out, one of the things we learned is that human society generally assumes a strong correlation between intelligence and the ability to string together grammatically correct sentences. As a result, many people assumed that even GPT-3.5 was wildly more "intelligent" than it actually was.

I think Deep Research (and tools like it) offers an even stronger illustration of that same effect. Anything that can produce a well-formatted multi-page report with headings and citations surely must be of PhD-level intelligence, right?

(Clearly not.)

In some ways, it's a good tool for teaching yourself to suss out the real clues to reliability, which aren't formatting and an authoritative tone.