Comment by gyomu

3 months ago

> the repo is too far off the data distribution

ah, this explains why these models have been useless to me this whole time. everything i do is just too far off the data distribution!

Everything is, unless your app is a React todo list or leetcode questions.

  • people say this like it's a criticism, but damn is it ever nice to start writing a simple CRUD form and just have Copilot autocomplete the whole thing for me.

    • Yep. I find the hype around AI to be wildly overblown, but that doesn’t mean that what it can do right now isn’t interesting & useful.

      If you told me a decade ago that I could have a fuzzy search engine on my desktop that I could use to vaguely describe some program that I needed & it would go out into the universe of publicly available source code & return something that looks as close to the thing I’ve asked for as it can find then that would have been mindblowing. Suddenly I have (slightly lossy) access to all the code ever written, if I can describe it.

      Same for every other field of human endeavour! Who cares if AI can “think” or “do new things”? What it can do is amazing & sometimes extremely powerful. (Sometimes not, but that’s the joy of new technology!)

    • Back in the 90s you could drag and drop a VB6 applet into Microsoft Word. Somehow we’ve regressed...

      Edit: for the young, WYSIWYG (what you see is what you get) tooling was common for all sorts of languages, from C++ to Delphi to HTML. You could draw up anything you wanted. Many had native bindings to data sources of all kinds. My favourite was actually HyperCard because I learned it in grade school.

    • I agree. I am "writing" simple CRUD apps for my own convenience and entertainment. I can use unfamiliar frameworks and languages for extra fun and education.

      Good times!

    • Before Copilot, what I'd do is identify the feature that resembles the one I'm about to build, and then copy the files over before I start tweaking.

      Boilerplate generation was never, ever the bottleneck.

    • I've been using AI like this as well. The code-complete / 'randomly pop up a block of code while typing' feature was cool for a bit but soon became annoying. I just use it to generate a block of boilerplate code or to ask it questions; I do 90% of the 'typing the code' bit myself, but that's not where most programmers' time is spent.

    • It is, because the frontend ecosystem is not just React. There are plenty of projects where LLMs still give weird suggestions just because the app is not written in React.

    • I've probably commented the same thing like 20 times, but my rule of thumb for using AI / "vibe coding" is two-fold:

      * Scaffolding, first and foremost - it's usually fine for this. I typically ask something like "give me the industry-standard project structure for X language as designed by a Staff-level engineer", blah blah: just give me a sane project structure to follow and maintain, so I don't have to wonder what's idiomatic after switching around to yet another programming language (I'm a geek, sue me).

      * Code that makes sense at first glance and is easy to maintain / manage, because if you blindly take code you don't understand, you'll regret it the moment you get called in for a production outage and don't know your own codebase.

  • HN's cynicism towards AI coding (and everything else ever) is exhausting. Karpathy would probably cringe reading this.

    • First, it's not cynicism but a more realistic approach than just blindly following SV marketing, and second, it's not "everything else", just GenAI, NFTs/ICOs/Web3, the "Metaverse" (or Zuck's interpretation of it), self-driving cars being ready today, and maybe a bit of Theranos.

  • or a typical CRUD app architecture, or a common design pattern, or unit/integration test scaffolding, or standard CI/CD pipeline definitions, or one-off utility scripts, etc...

    Like 80% of writing code is just being a glorified autocomplete, and AI is exceptional at automating those aspects. Yes, there is a lot more to being a developer than writing code, but, in those instances, AI really does make a difference in the amount of time one is able to spend focusing on domain-specific deliverables.

    • And even for "out of distribution" code you can still ask questions: how could the same thing be done but more optimized, could a library help with this, why is that piece of code giving this unexpected output, etc.

    • It has gotten to the point that I don't modify or write SQL. Instead I throw some schema and related queries in and use natural language to rubber duck the change, by which point the LLM can already get it right.

  • I've had some success with a multi-threaded software defined radio (SDR) app in Rust that does signal processing. It's been useful for trying something out that's beyond my experience. Which isn't to say it's been easy. It's been a learning experience to figure out how to work around Claude's limitations.

  • Generative AI for coding isn't your new junior programmer, it's the next generation of app framework.

    • I wish such sentiments prevailed in upper management, because it's true. Much like owning a car that can drive itself - you still need to pass a driving test to be allowed to use it.

  • Really such an annoying genre of comment. Yes, I’m sure your groundbreaking bespoke code cannot be written by LLMs; however, for the rest of us who build and maintain 99% of the software people actually use, they are quite useful.

  • Simple CRUD, as is common in many business applications or backend portals, is a good fit for AI assistance imho. And fixing some designs here and there, when you can't be bothered to keep track of the latest JS/CSS framework.

I wonder if the new GenAI architecture being discussed recently, namely DDN or Discrete Distribution Networks, can outperform the conventional GAN and VAE architectures. As the name suggests, it can provide a multitude of distributions for training and inference purposes [1].

[1] Show HN: I invented a new generative model and got accepted to ICLR (90 comments): https://news.ycombinator.com/item?id=45536694

I work on this typed Lua language written in Lua, and sometimes use LLMs to help fix internal analyzer stuff, which works maybe 30% of the time for complex problems, and sometimes not at all, but it helps me find a solution in the end.

However, when I ask an LLM to generate my typed Lua code, with examples and all of how the syntax is supposed to look, it mostly gets it wrong.

my syntax for tables/objects is: local x: {foo = boolean}

but an LLM will most likely gloss over this and always use : instead of =, producing: local x: {foo: boolean}

  • I've had success in the past with getting it to write YueScript/Moonscript (which is not a very large part of its training data) by pointing it to the root URL for the language docs and thus making that part of the context.

    If your typed version of Lua has a syntax checker, you could also have it try to use that first on any code it's generated.
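
    A minimal sketch of that loop in plain Lua, assuming a hypothetical command-line checker called tlcheck for the typed dialect (the name and invocation are made up for illustration):

      -- check_generated.lua: refuse LLM output that doesn't pass the
      -- (hypothetical) syntax checker before it touches the codebase.
      local function check_syntax(path)
        -- Run the checker and capture its diagnostics.
        local proc = io.popen("tlcheck " .. path .. " 2>&1")
        local diagnostics = proc:read("*a")
        -- On Lua 5.2+ close() reflects the exit status; on 5.1 you may
        -- need to parse the output instead.
        local ok = proc:close()
        return ok == true, diagnostics
      end

      local ok, diagnostics = check_syntax("llm_output.lua")
      if not ok then
        -- Feed the diagnostics back into the prompt and ask for a retry.
        print("syntax check failed:\n" .. diagnostics)
      end

    Hooked in as an agent tool or a pre-commit step, that catches the : vs = confusion mechanically instead of relying on the model to remember the examples.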

  • Are you using a coding agent or just an LLM chat interface? Do you have a linter or compiler hooked up to the agent that will catch the misuse?

    • I've dabbled with Claude Code in this particular project, but not much. My short experience with it is that it's slow, costly and goes off the rails easily.

      I prefer to work with more isolated parts of the code. But again, I don't really know all that much about agents.

      One thing I wanted to do on my project is reorganize all the tests, which sounds like an agent job. But I'd imagine I need to define some hard programmatic constraints to make sure tests are not lost or changed in the process.
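
      One cheap constraint is a test-count invariant: record how many cases exist before the agent touches anything and compare afterwards. A rough sketch in Lua, assuming tests live under tests/ and each case is declared with an it("...") call (both of those are assumptions about the project layout):

        -- count_tests.lua: count test cases so an agent-driven
        -- reorganization can be checked for lost tests.
        local function count_cases(dir)
          local total = 0
          -- List test files via the shell; adjust the glob to the real layout.
          local list = io.popen("find " .. dir .. " -name '*.lua'")
          for path in list:lines() do
            local f = assert(io.open(path, "r"))
            local src = f:read("*a")
            f:close()
            -- Count it(...) declarations; swap in whatever marks a test case.
            for _ in src:gmatch("it%s*%(") do
              total = total + 1
            end
          end
          list:close()
          return total
        end

        -- Run once before the reorganization to record the count, then run
        -- again afterwards with that number as an argument to verify it.
        local expected = tonumber(arg and arg[1])
        local actual = count_cases("tests")
        if expected and actual ~= expected then
          error(("test count changed: expected %d, found %d"):format(expected, actual))
        end
        print(actual)

      That only guards against tests being dropped outright; catching silently changed test bodies would need something stronger, like diffing the collected test names or the files themselves against a snapshot.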
