Comment by CamperBob2
1 month ago
You sound pretty certain. There's often good money to be made in taking the contrarian view, where you have insights that the so-called "smart money" lacks. What are some good investments to make in the extreme-bear case, in which we're all just Clever Hans-ing ourselves, as you put it? Do you have skin in the game?
My dude, I assure you "humans are really good at convincing themselves of things that are not true" is a very, very well known fact. I don't know what kind of arbitrage you think exists in this incredibly anodyne statement lol.
If you want a financial tip, don't short stocks or chase market butterflies. Instead, make real professional friends, develop real skills, and learn to be friendly and useful.
I made my money in tech already, partially by being lucky and in the right place at the right time, and partially because I made my own luck by having friends who passed the opportunity along.
Hope that helps!
That answer is basically an admission that you don’t actually hold a strong contrarian belief about the technology at all.
The question wasn’t “are humans sometimes self-delusional?” Everyone agrees with that. The question was whether, in this specific case, the prevailing view about LLM capability is meaningfully wrong in a way that has actionable implications. If you really believed this was mostly Clever Hans, there would be concrete consequences: entire categories of investment, hiring, and product strategy would be mispriced.
Instead you retreated to “don’t short stocks” and generic career advice. That’s not skepticism; it’s risk-free agnosticism. You get to sound wise without committing to any falsifiable position.
Also, “I made my money already” doesn’t strengthen the argument; it sidesteps it. Being right once, or being lucky in a good cycle, doesn’t confer epistemic authority about a new technology. If anything, the whole point of contrarian insight is that it forces uncomfortable bets, or at least uncomfortable predictions.
Engineers don’t evaluate systems by vibes or by motivational aphorisms. They ask: if this hypothesis is true, what would we expect to see? What would fail? What would be overhyped? What would not scale? You haven’t named any of that. You’ve just asserted that people fool themselves and stopped there.