Comment by atleastoptimal
3 days ago
I think much of HN has a blind spot that prevents them from engaging with the facts.
Yes, AI currently has limitations and isn't a panacea for cognitive tasks. But in many specific use cases it is enormously useful, and the rapid growth of ChatGPT, AI startups, etc. is evidence of that. Many will argue that it's all fake, that it's all artificial hype to prop up VC valuations, etc. They will literally see the billions in revenue as not real, same with all the real people upskilled via LLMs in ways that are entirely unique to the utility of AI.
I would trust many people's evaluations of the impacts of AI if they could at least engage with reality first.
Some people see it as a political identity issue.
One person told me the other day that for the rest of time people will see using an AI as equivalent to crossing a picket line.
Well, if you use LLMs to write a book and claim to be a writer, that is crossing a line. A writer doesn't just write the book; they do a lot of thinking about the book's concepts, and their style is somewhat unique to themselves. It's deceptive to pretend you're not using AI when you are. Using it is fine, as long as you disclose it.
To me the progress achieved so far has been overhyped in many respects. The numbers out of Google that 25% of their code is AI-generated, or some similarly high number? BS. It's gamified statistics that count accepted autocompletions (not AI solving a problem), and it's likely hyperinflated even then.
It works better than you at UI prototypes when you don't know how to do UI (and maybe faster even if you do). It doesn't work at all on problems it hasn't seen. I literally just watched a coworker stare at code for hours, getting completely off track trying to correct AI output, instead of stepping through the problem the way we thought the algorithm should work.
There's a very real difference between where it could be useful in the future and what you can usefully do with it today, and you have to be very careful to use it correctly. If you don't know what you're doing and AI helps you get it done, cool. But keep in mind that you also won't know whether it has catastrophic bugs, because you don't understand the problem and the conceptual shape of the solution well enough to judge whether what it did is correct. For most people there's not much difference, but for those of us who care, it's a huge problem.
I'm not sure if this post is ragebait or not but I'll bite...
If anything, HN is in general very much on the LLM hype train. The contrarian takes tend to come from more experienced folks working on difficult problems, who very much see the fundamental flaws in how we're talking about AI.
> Many will argue that it's all fake, that it's all artificial hype to prop up VC valuations, etc. They will literally see the billions in revenue as not real
That's not what people are saying. They're noting that revenue is meaningless without looking at cost. And it's true: investor money is propping up extremely costly ventures in AI. These services operate at a substantial loss. The only way they can hope to survive is by promising future pricing power, on the claim that they can one day (the proverbial next week) replace human labor.
> same with all the real people upskilled via LLMs in ways that are entirely unique to the utility of AI.
Again, no one really denies that LLMs can be useful in learning.
This all feels like a strawman -- it's important to approach these topics with nuance.
I was talking to a friend today about where AI would actually be useful in my personal life, but it would require much higher reliability.
This is very basic stuff, not rewriting a codebase, creating a video game from a text prompt, or generating imagery.
Simply - I would like to be able to verbally prompt my phone with something like "make sure the lights and AC are set so I'll be comfortable when I get home, follow up with that plumber if they haven't gotten back to us, place my usual grocery order plus some berries plus anything my wife put on our shared grocery list, and schedule a haircut for the end of next week, some time after 5pm".
Basically 15-30 minutes of stupid daily personal time-sucks that can all be accomplished via smartphone.
Given the promise of IoT, smart home, LLMs, voice assistants, etc.. this should be possible.
This would require it having access to my calendar and location, plus the ability to navigate apps on my phone, read and send email and texts, and spend money. Given the current state of the tools, if there is even a 0.1% chance it changes my contact card photo to Hitler, replies to an email from my boss with an insult, purchases $100,000 in bananas, or sets the thermostats to 99F... then I couldn't imagine giving an LLM access to all those things.
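For what it's worth, the gating that would make me comfortable is easy to describe, even if the reliability isn't there yet. A minimal sketch in TypeScript, with hypothetical tool names (not any real assistant's API): tag every tool with a risk level, and require an explicit human yes/no before anything irreversible runs.

    // Hypothetical agent tool dispatcher. Every tool is tagged with a
    // risk level; anything irreversible (spending money, sending mail)
    // must be confirmed by the user before it runs.
    type Risk = "safe" | "confirm";

    interface Tool {
      name: string;
      risk: Risk;
      run: (args: Record<string, string>) => Promise<string>;
    }

    // Placeholder implementations; a real assistant would call actual
    // smart-home / commerce / email APIs here.
    const tools: Tool[] = [
      { name: "set_thermostat", risk: "confirm", run: async (a) => `Set to ${a.temp}F` },
      { name: "read_grocery_list", risk: "safe", run: async () => "berries, milk" },
      { name: "place_order", risk: "confirm", run: async (a) => `Ordered: ${a.items}` },
    ];

    async function dispatch(
      name: string,
      args: Record<string, string>,
      confirm: (msg: string) => Promise<boolean>, // human in the loop
    ): Promise<string> {
      const tool = tools.find((t) => t.name === name);
      if (!tool) return `Unknown tool: ${name}`;
      // Gate risky actions on an explicit yes/no, so a 0.1% model error
      // can't silently buy $100,000 in bananas.
      if (tool.risk === "confirm" && !(await confirm(`Run ${name}(${JSON.stringify(args)})?`))) {
        return `Skipped ${name} (user declined)`;
      }
      return tool.run(args);
    }

The hard part isn't this plumbing, of course; it's getting the model to pick the right tools with the right arguments reliably enough that the confirmations aren't constant.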
Are we 3 months, 5 years, or never away from that being achievable? These feel like the kind of things previous voice assistants promised 10 years ago.
because the preachers preach how amazing it is on their greenfield 'i built a todo list app in 5 minutes from scratch' demos, and then you use it on an established codebase with a bigger context than the llm could ever possibly consume, spend 5x more time debugging the slop than it would've taken you to do the task yourself, and become jaded
stop underestimating the amount of internalized knowledge people can have about projects in the real world; it's so annoying.
an llm can't ever possibly get close to it. there's some guy on a team in another building who knows why a certain weird piece of critical business logic was put there 6 years ago. the llm will never know this, and wouldn't understand it even if it consumed the whole repository, because it would have to work there for years to understand how the business works
A completely non-technical saleslady on our team prototyped a whole JS web app that generated some data based on user inputs (and even generated PDFs), which solved a problem our customers were having and that our devs didn't have time to build yet.
This obviously was a temporary tool we'd never let touch our GitHub repo, but it still very much worked and solved a niche problem. It even looked like our app, because the LLM could consume screenshots to copy our designs.
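For a sense of scale, the whole thing can be tiny. A minimal sketch of that kind of throwaway tool, assuming a browser form and the jsPDF library for the export step (the actual prototype's stack isn't stated):

    // Throwaway "inputs -> computed result -> PDF" tool, the shape of
    // thing a non-dev can vibe-code in an afternoon.
    import { jsPDF } from "jspdf";

    // Stand-in for whatever domain calculation the prototype did.
    function buildQuote(units: number, pricePerUnit: number): string[] {
      const total = units * pricePerUnit;
      return [
        `Units: ${units}`,
        `Price per unit: $${pricePerUnit.toFixed(2)}`,
        `Total: $${total.toFixed(2)}`,
      ];
    }

    function exportPdf(): void {
      // Assumes <input id="units">, <input id="price">, <button id="export">.
      const units = Number((document.getElementById("units") as HTMLInputElement).value);
      const price = Number((document.getElementById("price") as HTMLInputElement).value);
      const doc = new jsPDF();
      doc.text(buildQuote(units, price), 10, 10); // jsPDF accepts string[]
      doc.save("quote.pdf"); // triggers a browser download
    }

    document.getElementById("export")?.addEventListener("click", exportPdf);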
I'm on board with vibe coding = non-maintainable, non-tested, mostly useless code by non-devs. But on the plus side, it will expose many, many people to basic programming and fill many tiny gaps not addressed by bigger, more serious pieces of software. Especially once people start building infrastructure and tooling around these non-devs: hosting, deployment, webhook integrations, etc.
Do people actually learn when using these tools though? I mean, I'm sure they can be used to learn, just like TikTok could be used to read John Stuart Mill. But I doubt that's what they'll be used for in real life.
But that's not good. You don't want Bob to be the gatekeeper for why a process is the way it is.
In my experience, working with agents helps eliminate that crap, because you have to bring the agent along as it reads your code (or process, or whatever) for it to be effective. Just like human co-workers need to be brought along, so it's not all on poor Bob.
Totally. Especially when I'm debugging something for colleagues or friends: given a domain and a handful of ways of doing things, if I'm familiar with it I generally already have a sense of why it's failing or falling short. This has nothing to do with the codebase, any given language notwithstanding. It comes from years and decades of exposure to systems, their idiosyncratic behaviors, and examples that strangely rear their heads in notable ways.
These notable ways may not be commonly known or put into words, but they persist nevertheless.