Comment by 0points
6 days ago
> Then I actually read the code.
This is my experience in general. People seem to be impressed by the LLM output until they actually comprehend it.
The fastest way to have someone break out of this illusion is to tell them to chat with the LLM about their own area of expertise. They will quickly start to notice errors in the output.
You know who else does that? Humans. I read shitty, broken, amazing, useful code every day, but you don’t see me complaining online that people earning 100-200k salaries don’t produce ideal output right away. And believe me, I spend way more time fixing their shit than fixing the LLM’s.
If I can reduce this even by 10% for 20 dollars it’s a bargain.
But no one is hyping the fact that Bob the mediocre coder is going to replace us.
What no one is reckoning with right here:
The AI skeptics are mostly reacting, correctly, to the AI hype men, who are usually shitty LinkedIn-influencer-type dudes crowing about how they'll never have to pay anyone again. It's very natural, even intelligent, not to trust this now that it's inflating the same kind of bubble NFTs did a few years ago. I think it's okay to stay skeptical and see where the chips fall in a few years.
But Bob isn’t getting better every 6 months
Offshoring / nearshoring has been here for decades!
/0
That has not been my experience at all with networking and cryptography.
Your comment is ambiguous; what exactly do you refer to by "that"?
You put people into nice little drawers: the skeptics and the non-skeptics. It's reductive and, most of all, polarizing. This is how US politics got where it is, and we should avoid that here.
As someone who has followed Thomas' writing on HN for a long time... this is the funniest thing I've ever read here! You clearly have no idea about him at all.
One would hope the experience leads to the position, and not vice-versa.
... you think tptacek has no expertise in cryptography?
That is no different from pretty much any other person in the world. If I interview people to catch them making mistakes, I will be able to do exactly that. Sure, there are some exceptions, like if you were to interview Linus about Linux. Other than that, you'll always be able to find a gap in someone's knowledge.
None of this makes me 'snap out' of anything. Accepting that LLMs aren't perfect just means you keep that limitation in mind. For me, they're still a knowledge multiplier, and they allow me to be more productive in many areas of life.
That proves nothing about the LLM's usefulness; all it means is that you are still useful.