
Comment by SchemaLoad

3 months ago

Everything is, unless your app is a React todo list or leetcode questions.

People say this like it's a criticism, but damn is it ever nice to start writing a simple CRUD form and just have Copilot autocomplete the whole thing for me.

  • Yep. I find the hype around AI to be wildly overblown, but that doesn’t mean that what it can do right now isn’t interesting & useful.

    If you told me a decade ago that I could have a fuzzy search engine on my desktop that I could use to vaguely describe some program that I needed & it would go out into the universe of publicly available source code & return something that looks as close to the thing I’ve asked for as it can find then that would have been mindblowing. Suddenly I have (slightly lossy) access to all the code ever written, if I can describe it.

    Same for every other field of human endeavour! Who cares if AI can “think” or “do new things”? What it can do is amazing & sometimes extremely powerful. (Sometimes not, but that’s the joy of new technology!)

    • Why do you think what you describe being excited about does not warrant the current level of AI hype? I agree with your assessment and sometimes I think there is too much cynicism and not enough excitement.


    • They go beyond merely "return something that looks as close to the thing I’ve asked for as it can find". Eg: Say we asked for "A todo app that has 4 buttons on the right that each play a different animal sound effect for no good reason and also you can spin a wheel and pick a random task to do". That isn't something that already exists, so in order to build that, the LLM has to break that down, look for appropriate libraries and source and decide on a framework to use, and then glue those pieces together cohesively. That didn't come from a singular repo off GitHub. The machine had to write new code in order to fulfill my request. Yeah, some if it existed in the training data somewhere, but not arranged exactly like that. The LLM had to do something in order to glue those together in that way.

      Some people can't see past how the trick is done (take training data and do a bunch of math/statistics on it), but the fact that LLMs are able to build the thing is in-and-of-itself interesting and useful (and fun!).


  • Back in the 90s you could drag and drop a VB6 applet in Microsoft Word. Somehow we've regressed.

    Edit: for the young, WYSIWYG (what you see is what you get) editors were common for all sorts of languages, from C++ to Delphi to HTML. You could draw up anything you wanted, and many had native bindings to data sources of all kinds. My favourite was actually HyperCard, because I learned it in grade school.

  • I agree. I am "writing" simple CRUD apps for my own convenience and entertainment. I can use unfamiliar frameworks and languages for extra fun and education.

    Good times!

  • Before copilot what I'd do is diagnose and identify the feature that resembles the one that I'm about to build, and then I'd copy the files over before I start tweaking.

    Boilerplate generation was never, ever the bottleneck.

  • I've been using AI like this as well. The code-complete / 'randomly pop up a block of code while typing' feature was cool for a bit but soon became annoying. I just use it to generate a block of boilerplate code or to ask it questions. I do 90% of the 'typing the code' bit myself, but that's not where most programmers' time is spent.

    • i'm not sure when you tried it, but if you've had copilot disabled it might be worth giving it another go. in my totally anecdotal experience, over the last few months it's gotten significantly better at shutting up when it can't provide anything useful.

  • It is, because the frontend ecosystem is not just React. There are plenty of projects where LLMs still give weird suggestions just because the app is not written in React.

  • I've probably commented the same thing like 20 times, but my rule of thumb for using AI / "vibe coding" is two-fold:

    * Scaffolding, first and foremost. It's usually fine for this; I typically ask "give me the industry-standard project structure for X language as designed by a Staff-level engineer", blah blah. Just give me a sane project structure to follow and maintain, so I don't have to wonder after switching to yet another programming language (I'm a geek, sue me).

    * Code that makes sense at first glance and is easy to maintain and manage, because if you blindly take code you don't understand, you'll regret it the moment you get called in for a production outage and don't know your own codebase.

HN's cynicism towards AI coding (and everything else ever) is exhausting. Karpathy would probably cringe reading this.

  • First, it's not cynicism but a more realistic approach than blindly following SV marketing; and second, it's not "everything else", just GenAI, NFTs/ICOs/Web3, the "Metaverse" (or Zuck's interpretation of it), self-driving cars that are "ready today", and maybe a bit of Theranos.

    • I’ve recently written a message queue <> database connector in Go using Claude Code, checkpointing, recovery, all that stuff built in.

      I’d say it made me around 2x as productive.

      I don’t think the cynicism of HN is justified, but I think what people forget is that it takes several months of really investing time into learning how to use AI well. If I see some of the prompts people give while expecting them to work, yeah, no wonder it only works for React-like apps.


    • The thing is cryptocurrency and metaverse stuff was obvious bullshit from day one while even GPT-3 was clearly a marvel from day one. It's a false pattern match.

  • okay but he literally does have a bridge that non-deterministically might take you to the wrong place to sell you

    • The original context of this sub-thread was Karpathy saying how AI coding tools were pretty useless for him when working on this particular project.


  • I mean Karpathy himself wrote that he could not use the AI tools for the project, so he had to handwrite most of it. I wonder why.

    • One of my hobby projects is an esoteric game engine oriented towards expressing simulation mechanics. I simply do not use agentic tools when editing the core code for this project (mostly rust and wgsl). It always stumbles, and leaves code that I need to fix up manually, and even then feel unsure about. I've tried a few different agents, including the current top of the line. The power is just not there yet.

      At the same time, these tools have helped me reduce the development time on this project by orders of magnitude. There are two prominent examples.

      --- Example 1:

      The first relates to internal tooling. I was debugging a gnarly problem in an interpreter. At some point I had written code to do a step-by-step dump of the entire machine state to file (in json) and I was looking through it to figure out what was going wrong.

      In a flash of insight, I asked my AI service (I'll leave names out since I'm not trying to promote one over another) to build a react UI for this information. Over the course of a single day, I (definitely not a frontend dev by history) worked with it to build out a beautiful, functional, easy to use interface for browsing step-data for my VM, with all sorts of creature comforts (like if you hover over a memory cell, and the memory cell's value happens to be a valid address to another memory cell, the target memory cell gets automatically highlighted).

      This single tool has reduced my debugging time from hours or days to minutes. I never would have built it without AI support, because I'm simply not experienced enough in frontend work to build a functional UI quickly, and this thing built an advanced UI for me based on a conversation. I was truly impressed.
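      The hover-highlight behaviour described above comes down to a small piece of pure logic that can sit behind the UI. A minimal sketch in TypeScript, where the `StepDump` shape and `cellsToHighlight` name are hypothetical stand-ins for the commenter's actual JSON schema:

```typescript
// Hypothetical shape for one step of the dumped VM state
// (illustrative only, not the commenter's actual schema).
interface StepDump {
  memory: number[]; // value stored at each memory cell
}

// Given the cell the user hovers over, return the set of cells to
// highlight: the hovered cell itself, plus the cell it points to if
// its value happens to be a valid address into memory.
function cellsToHighlight(step: StepDump, hovered: number): Set<number> {
  const highlight = new Set<number>([hovered]);
  const value = step.memory[hovered];
  if (Number.isInteger(value) && value >= 0 && value < step.memory.length) {
    highlight.add(value); // value is a valid address: also highlight its target
  }
  return highlight;
}
```

      A React component would then just call this on hover and apply a CSS class to every returned cell index.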

      --- Example 2:

      As part of verifying correctness for my project, I wanted to generate a set of tests that validated the runtime behaviour. The task here consists of writing a large set of reference programs and verifying that their behaviour is identical between a reference implementation and the real implementation.

      Half decent coverage meant at least a hundred or so tests were required.

      Here I was able to use agentic AI to reduce the testcase construction time from a month to about a week. I asked the AI to come up with a coverage plan and write the test case ideas to a markdown file in an organized, categorized way. Then I went through each category in the test case markdown and had the AI generate the test cases and integrate them into the code.
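      The comparison step in this workflow is plain differential testing. A minimal sketch in TypeScript, where `Interpreter`, `Trace`, and `findDivergences` are hypothetical stand-ins for the actual reference and real implementations:

```typescript
// Hypothetical stand-ins: a program is just source text, and running it
// yields an observable trace (output plus step count here, illustrative).
type Program = string;
type Trace = { output: string; steps: number };
type Interpreter = (program: Program) => Trace;

// Run every generated program through both implementations and collect
// the programs whose observable behaviour diverged.
function findDivergences(
  reference: Interpreter,
  real: Interpreter,
  programs: Program[],
): Program[] {
  const failures: Program[] = [];
  for (const p of programs) {
    const a = reference(p);
    const b = real(p);
    if (a.output !== b.output || a.steps !== b.steps) {
      failures.push(p);
    }
  }
  return failures;
}
```

      An empty result means the real implementation matched the reference on every test program; anything in the list is a concrete, reproducible bug report.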

      ---

      I was and remain a strong skeptic of the hype around this tech. It's not the singularity, it's not "thinking". It's all pattern matching and pattern extension, but in ways so sophisticated that it feels like magic sometimes.

      But while the skeptical perspective is something I value, I can't deny that there is core utility in this tech that has a massive potential to contribute to efficiency of software development.

      This is a tool that we as industry are still figuring out the shape of. In that landscape you have all sorts of people trying to evangelize these tools along their particular biases and perspectives. Some of them clearly read more into the tech than is there. Others seem to be allergically reacting to the hype and going in the other direction.

      I can see that there is both noise, and fundamental value. It's worth it to try to figure out how to filter the noise out but still develop a decent sense of what the shape of that fundamental value is. It's a de-facto truth that these tools are in the future of every mainstream developer.

    • That's exactly why I said he would cringe at it. Seeing someone look at him saying "it's not able to make a good GPT clone" and going "yeah it's useless for anything besides React todo list demos" would definitely evoke some kind of reaction. He understands AI coding agents are neither geniuses nor worthless CRUD monkeys.


or a typical CRUD app architecture, or a common design pattern, or unit/integration test scaffolding, or standard CI/CD pipeline definitions, or one-off utility scripts, etc...

Like 80% of writing code is just being a glorified autocomplete, and AI is exceptional at automating those aspects. Yes, there is a lot more to being a developer than writing code, but in those instances AI really does make a difference in the amount of time one can spend focusing on domain-specific deliverables.

  • And even for "out of distribution" code, you can still ask questions: how to do the same thing but more optimized, whether a library could help, why a piece of code gives unexpected output, etc.

  • It has gotten to the point that I don't modify or write SQL by hand. Instead I throw in some schema and related queries and use natural language to rubber-duck the change, by which point the LLM can already get it right.

I've had some success with a multi-threaded software defined radio (SDR) app in Rust that does signal processing. It's been useful for trying something out that's beyond my experience. Which isn't to say it's been easy. It's been a learning experience to figure out how to work around Claude's limitations.

Generative AI for coding isn't your new junior programmer, it's the next generation of app framework.

  • I wish such sentiments prevailed in upper management, because it is true. Much like owning a car that can drive itself: you still need to pass a driving test to be allowed to use it.

Really such an annoying genre of comment. Yes I’m sure your groundbreaking bespoke code cannot be written by LLMs, however for the rest of us that build and maintain 99% of the software people actually use, they are quite useful.

Simple CRUD is common in many business applications and backend portals, and it's a good fit for AI assistance, IMHO. Same for fixing some designs here and there, when you can't be bothered to keep track of the latest JS/CSS framework.