Comment by al_borland
9 days ago
If your assistant is causing you to work more hours and only sleeping 4 hours per night, is it really a good assistant? I think I'd be firing an assistant that did that to my life.
I'm a big believer in not just doing something because I can. Could AI build me a personal suite of apps to manage my life in the exact way I want... maybe? Should I spend my time doing that, even if AI is writing 100% of the code? Probably not. Will it be better enough to justify the investment? No. When it breaks or has bugs, who has to deal with it? Me. What about the infrastructure? Another thing to do.
You can say AI is writing all the code, but if someone has to be there to babysit and guide it the whole way, it's still work. Less engaging and rewarding work. I mostly find vibe coding to be boring and frustrating, unless it can one-shot it, which it can only do for small stuff.
I use AI, but I use it in the same way I would use a search engine or a hammer. It's a tool to help do what I was already doing. Sure, it grows my capacity to some degree, but pushing that too far ends up being problematic, as I lose my ability to properly oversee it.
Yeah, there could also be an issue with learning how to hand-hold an AI vs. learning how to actually engineer good solutions. Maybe one feeds into the other, since we're not getting off the AI train...
> When it breaks or has bugs, who has to deal with it? Me. What about the infrastructure? Another thing to do.
Wait, why are you doing it? If you're already that far, have the AI agent do that for you as well.
Who is having the AI do it? It would be me, correct? I have to tell the AI to fix the bugs and make sure they are fixed. I have to tell the AI to build the infrastructure and hope it works. Then I have to pay for it and hope it didn't do something stupid that will cost me a small fortune.
The point is, none of this stuff just happens. I would have to be involved in all of it, and the more it does, the more I need to do from a guidance and oversight perspective.
Is this what I want my life to be? It sounds absolutely awful.
Because of course the thing that persistently fails to make it work will somehow fix it?
If my brain got a 100x overclock (both on speed and endurance), I'd be excited to use it all the time too.
The issue is, obviously, that AI isn't that; it's a simulation of it that often fails...
> If my brain got a 100x overclock (both on speed and endurance), I'd be excited to use it all the time too.
Ted Chiang's Understand comes to mind.
https://en.wikipedia.org/wiki/Understand_(story)
Brilliant, highly recommended short story! Ted Chiang rarely disappoints.
I can't tell, do you think Karpathy is actually making progress, or is he spinning wheels into arbitrary dimensions that have no fundamental connection to progress in the field?
The interesting thing about the LLM is that it can spit out so many near-misses that it becomes a powerful gambling mechanism; but it's only gambling if you're not winning. Just like the stock market can be a rational exercise in finding undervalued stocks and providing liquidity by extracting the value.
So that's probably the ensuing argument: are the majority of people using LLMs to "play the stocks" as gamblers, or as rational analysts?