Comment by epolanski
21 hours ago
I'll add a few things at which Cursor with Claude is better than us (at least in terms of time and effort):
- explaining code. Point it at some legacy part of your codebase nobody understands: unlike us, LLMs aren't limited to keeping just a few things in working memory at once. Even if the code is heavily obfuscated and poorly written, they can work out what it does and why, and suggest refactors to make it understandable (there's a before/after sketch at the end of this comment).
- explaining and fixing bugs. Just the other day Antirez posted about debugging a Redis segfault in some C code by giving the model the relevant context and stack trace. This can be hit or miss at times, but more often than not it saves you hours.
- writing tests. It often comes up with far more examples and edge cases than I would have thought of, and if it doesn't, you can always ask it to (see the sketch right after this list).
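To make the tests point concrete, here's a minimal sketch of the kind of edge cases it will typically enumerate unprompted; `normalize_whitespace` and the test values are made up for illustration, not taken from any real codebase:

```python
import pytest

def normalize_whitespace(text: str) -> str:
    """Collapse runs of whitespace into single spaces and trim the ends."""
    return " ".join(text.split())

# The sort of edge cases an LLM tends to list without being asked:
# empty input, whitespace-only input, tabs/newlines, already-clean input.
@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("hello  world", "hello world"),  # internal double space
        ("", ""),                         # empty string
        ("   ", ""),                      # whitespace only
        ("\tfoo\n bar ", "foo bar"),      # tabs, newlines, trailing space
        ("already clean", "already clean"),
    ],
)
def test_normalize_whitespace(raw, expected):
    assert normalize_whitespace(raw) == expected
```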
In any case, I want to stress that LLMs are only as good as your data and prompts. They lack the nuance that comes from holding lots of context, yet I see people talking to them as if they were humans who understand the business, best practices and so on.
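And to make the first point (explaining and refactoring opaque code) concrete, a hedged before/after sketch; `proc` and `unique_normalized` are invented names, and real legacy code is obviously far messier, but this is the shape of what the model hands back:

```python
# Before: the kind of terse legacy helper nobody wants to touch.
def proc(xs):
    return sorted({s.strip().lower() for s in xs if s and s.strip()})

# After: the readable rewrite an LLM will typically propose,
# preserving the behaviour but spelling it out step by step.
def unique_normalized(entries):
    """Trim and lowercase entries, drop blanks, de-duplicate, and sort."""
    cleaned = set()
    for entry in entries:
        if not entry or not entry.strip():
            continue  # skip None, empty, and whitespace-only entries
        cleaned.add(entry.strip().lower())
    return sorted(cleaned)
```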
That first one has always felt super crazy to me: since LLMs became a thing, I've figured out what lots of "arcane magic, don't touch" functions genuinely do.
Even if it's slightly wrong, it's usually at least in the right ballpark, so it gives you a very good starting point to work from. Almost everything is explainable now.
I can relate. More than once I've been genuinely amazed by how it could "understand" some very complex code nobody dared to touch, just like you mention.
Kinda reminds me of that GLaDOS quote, haha:
"These next tests require cooperation. Consequently, they have never been solved by a human. That's where you come in. You don't know pride, you don't know fear, you don't know anything. You'll be perfect."
It takes someone with no ego, no preconceptions, and infinite patience to delve in and come back alive.
Agreed. AI has been a godsend for understanding snippets of Perl code in our codebase that were basically unreadable before unless you were an expert.