
Comment by slopinthebag

15 hours ago

> I didn’t write any piece of code there. There are several known issues, which I will task the agent to resolve, eventually. Meanwhile, I strongly advise against using it for anything beyond a studying exercise.

Months of effort and three separate tries to get something kind of working but which is buggy and untested and not recommended for anyone to use, but unfortunately some folks will just read the headline and proclaim that AI has solved programming. "Ubiquitous hardware support in every OS is going to be a solved problem"! Or my favourite: instead of software we will just have the LLM output bespoke code for every single computer interaction.

Actually a great article and well worth reading, just ignore the comments because it's clear a lot of people have just read the headline and are reading their own opinions into it.

The author specifically said that they did not read the code or even test the output very thoroughly. It was intentionally just a naive toy they wanted to play around with.

Nothing to do with AI, or even the capabilities of AI. The person intentionally didn't put in much effort.

  • > Nothing to do with AI, or even the capabilities of AI. The person intentionally didn't put in much effort.

    The part to do with AI is that it was not able to produce a comprehensive, bug-free driver with minimal effort from the human.

    That is the point.

    • Why is that the metric? In my job, I get drafts from junior employees that require major revisions, often rewriting significant parts. It’s still faster to have someone take the first pass. Why can’t AI coding be used the same way? Especially if AIs are capable of following your own style and design choices, as well as testing code against a test suite, why isn’t it easier to start from a kind-of-working baseline than to rebuild from scratch?

      1 reply →

  • Seems like they did put in quite a bit of effort, but were not knowledgeable enough on wifi drivers to go further.

    So hardware drivers are not a solved problem where you can just ask chatgpt for a driver and it spits one out for you.

  • > The author specifically said that they did not read the code or even test the output very thoroughly. It was intentionally just a naive toy they wanted to play around with.

    Yes, and that's what I'm pointing out: they vibe coded it and the headline is somewhat misleading, although it's not the author's fault if you don't go read the article before commenting.

    But it does have to do with AI (obviously), and specifically the capabilities of AI. If you need to be knowledgeable about how wifi drivers work and put in effort to get a decent result, that obviously speaks volumes about the capabilities of the vibe coding approach.

    • I strongly suspect that somebody with domain knowledge of Wi-Fi drivers and OS kernel drivers could prompt the LLM to spit out much more robust code than this guy was able to. That's not a knock on him; he was just trying to see what he could do. It's impressive what he actually accomplished given how little effort he put forth and how little knowledge he had about the subject.

      9 replies →

  • > The person intentionally didn't put in much effort.

    Aren't you just describing every vibe code ever?

    Come to think of it, that is probably my main issue with AI art/books etc. They never put in any effort. In fact, even the competition is about putting in the least effort.

You're validly critiquing where it is now.

The hype people are excited because they're guessing where it's going.

This is notable because it's a milestone that was not previously possible: a driver that works, from someone who spent ~zero effort learning the hardware or driver programming themselves.

It's not production ready, but neither is the first working version of anything. Do you see any reason that progress will stop abruptly here?

  • >> Do you see any reason that progress will stop abruptly here?

    I do. When someone thinks they are building next-generation super software for $20 a month using AI, they conveniently forget someone else is paying the remaining $19,980 for them in compute power and electricity.

  • People extrapolate from new leaps in invention way too early, though, believing those leaps will become the standard. Look at cars, airplanes, phones, etc.

    After we landed on the moon people were hyped for casual space living within 50 years.

    The reality is it often takes much much longer as invention isn't isolated to itself. It requires integration into the real world and all the complexities it meets.

    Even more so: we may get AI models that can do anything perfectly, but they will require so much compute that only the richest of the rich can use them, and they will effectively not exist for most people.

  • > Do you see any reason progress will stop abruptly here?

    Yeah, money and energy. And fundamental limitations of LLMs. I mean, I'm obviously guessing as well because I'm not an expert, but it's a view shared by some of the biggest experts in the field ¯\_(ツ)_/¯

    I just don't really buy the idea that we're going to have near-infinite linear or exponential progress until we reach AGI. Reality rarely works like that.

    • So far the people who bet against scaling laws have all lost money. That does not mean that their luck won’t change, but we should at least admit the winning streak.

      5 replies →

    • At the very least, computers are still getting faster. Models will get faster and cheaper to run over time, allowing them more time to "think", and we know that helps. Might be slow progress, but it seems inevitable.

      I do agree that exponential progress to AGI is speculation.

    • You think all the AI companies will suddenly stop releasing better models, days after they all just released better models?

      That is a position to take.

    • I know some proponents have AGI as their target, but to me it seems to be unrelated to the steadily increasing effectiveness of using LLMs to write computer code.

      I think of it as just another leap in human-computer interface for programming, and a welcome one at that.

      1 reply →

I don’t get this response. This is amazing! What percentage of programmers can even write a buggy FreeBSD kernel driver? If you were tasked at developing this yourself, wouldn’t it be a huge help to have something that already kind of works to get things started?

  • Fairly high, I'd say, but it varies: some could start today, while others would need a few months of study before they knew how to begin (and would then take 10x longer than the first group to get it working).

Programmers have always been in search of an additional layer of abstraction. LLM coding feeds exactly into this impulse.

> instead of software we will just have the LLM output bespoke code for every single computer interaction.

That's sort of the idea behind GPU upscaling: you increase gaming performance and visual sharpness by rendering games at a lower resolution and using algorithms to upscale to the monitor's native resolution. Somehow cheaper than actually rendering at high resolution: let the GPU hallucinate the difference at a lower cost.
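The cheap half of that trick is easy to sketch. Here's a minimal nearest-neighbor upscaler in Python, purely as illustration: real GPU upscalers (DLSS, FSR, etc.) use far more sophisticated, often learned, reconstruction filters, but the economics are the same, since filling in pixels is cheaper than rendering them.

```python
def upscale_nearest(frame, factor):
    """Upscale a 2D grid of pixel values by an integer factor,
    repeating each source pixel factor x factor times."""
    height = len(frame)
    width = len(frame[0])
    out = []
    for y in range(height * factor):
        # Each output pixel maps back to the nearest low-res pixel.
        row = [frame[y // factor][x // factor] for x in range(width * factor)]
        out.append(row)
    return out

# A 2x2 "rendered frame" becomes a 4x4 output for a fraction
# of the cost of rendering 4x4 directly.
low_res = [[10, 20],
           [30, 40]]
for row in upscale_nearest(low_res, 2):
    print(row)
```

The GPU renders 4 pixels and the filter invents the other 12; the quality gap between this toy filter and what shipping upscalers produce is exactly where the "hallucination" happens.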