I let LLMs write an Elixir NIF in C; it mostly worked

15 hours ago (overbring.com)

"it mostly worked" is just a more nuanced way of saying "it didn't work". Apparently the author did eventually get something working, but it is false to say that the LLMs produced a working project.

  • What is your definition of "a working project"? It does what it says on the tin (actually it probably does more, because splint throws some warnings...)

  • I dunno. Depending on the writer and their particular axe to grind, the definition can vary widely. I would like it to mean, "any fixes I needed to make were minimal and not time-intensive."

    • It's more of "yeah it worked, but I had to do a lot of hand-holding" and "it passes the tests but I cannot tell if the code has memory leaks".

      Actually, I can tell; I ran splint on the C source and got things like this:

      disk_space.c:144:16: Only storage bin.ref_bin (type void *) derived from variable declared in this scope is not released (memory leak)

      So I'm looking into a Rust version with Rustler now.
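      The splint diagnostic above points at a classic C bug class: storage allocated early in a function that isn't released on every exit path. A minimal, self-contained sketch of the pattern (plain malloc/free here, not the actual NIF binary API; `dup_upper` is a made-up helper for illustration):

      ```c
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      /* Hypothetical helper: uppercase a lowercase ASCII string into a
       * freshly allocated buffer. The bug class splint flagged is an
       * early-error return that skips the free(). */
      char *dup_upper(const char *src) {
          size_t len = strlen(src);
          char *buf = malloc(len + 1);
          if (buf == NULL)
              return NULL;
          for (size_t i = 0; i < len; i++) {
              if (src[i] < 'a' || src[i] > 'z') {
                  free(buf);      /* without this, every bad input leaks buf */
                  return NULL;
              }
              buf[i] = (char)(src[i] - ('a' - 'A'));
          }
          buf[len] = '\0';
          return buf;
      }

      int main(void) {
          char *ok = dup_upper("leak");
          printf("%s\n", ok ? ok : "(null)");   /* prints LEAK */
          free(ok);
          char *bad = dup_upper("no-no");       /* '-' hits the error path */
          printf("%s\n", bad ? bad : "(null)"); /* prints (null), nothing leaked */
          return 0;
      }
      ```

      In the NIF case the resource would be released with the corresponding enif call rather than free(), but the shape of the fix is the same: every return path after the allocation must release it.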

      1 reply →

  • Ok. But what are you even reacting to? Who is saying that it produced a working product?

    As you said, the very title of the article acknowledged that it didn’t produce a working product.

    This is just outrage for the sake of outrage.

    • > As you said, the very title of the article acknowledged that it didn’t produce a working product.

      Then why not say "mostly didn't work"? I read the article and that's the impression I got.

      The OP's comment isn't an outrage; it's more that you've intentionally painted it as one, with a reply that itself reads more like outrage.

    • Amen, thank you for noticing. The goal here was not to produce something of stellar quality, which is anyway out of the question as I don't have the skills/knowledge to evaluate anything other than "it returns the Elixir map I wanted". It was to see if this is feasible at all.

I would never ever let an LLM anywhere near C code. If you need help from an LLM to write a NIF that performs basic C calls to the OS, you probably can't check whether it's safe. I mean, it needs at least to pass valgrind.

  • You can use something like Claude Code or Codex CLI and tell it to run valgrind as part of iterating on the code.

  • Security is a spectrum. If you totally control the input going into a program, it can be safe even if you didn't test it for memory leaks. The only errors that occur will be truly erroneous, not malicious, and for many solutions that's fine.

    At the very least, it's fine for personal projects which is something I'm getting into more and more: remembering that computers were meant to create convenience, so writing small programs to make life easier.

    • For personal projects, OK, security is different. But outside of that (and I'd argue even for personal projects), you need defense in depth. You think you sanitized your input, but your C program has a bug and a vulnerability, or your Java program or whatever has bugs. Almost everything has some bugs, so the vulnerabilities in your C program will hit eventually, even if you were careful.

      I'd say that, absent some temporary hack, my bad experiences won't let me call anything low risk. I worked at Microsoft years ago, and after the zillions of vulnerabilities were attacked around the time of Windows 95 and computers going on the net, my team did serious code reviews of the data access libraries. There were vast numbers of vulnerabilities.

      A group of 3 or 4 of us would sit in a room for 3 hours a day, one person as scribe, and we'd go over C code that was ancient even then. We found problems everywhere; it was exhausting and shocking. The entire data access infrastructure was riddled with memory leaks, strings that were not length-limited, input parameters that were not checked or sanitized, etc. I'm sure it was endemic across all components, not just there. We fixed some things, but we found so much shit.

      Thank God I wasn't on the team trying to figure out what to do about those problems. I think they end-of-lifed a lot of stuff.

      2 replies →

    • Outside personal projects, my take is that security really just comes in two flavors: CVE vs no CVE. I pick the former.

    • > Security is a spectrum.

      It's less a spectrum and more that it's relative: it depends on the attacker and what they seek to gain.

      An unsecured server is an unsecured server, but there's a world of difference between being attacked by the CIA and by local script kiddies.

I've done this. The NIF worked in the sense that it ran and was a correct-enough NIF. It did not work in terms of solving what I needed it to do. Iteration was a bit painful because it was tangled with a nasty library that needed to be cross-compiled, so when I made a change it segfaulted, and I bailed.

I essentially ran out of patience and tried another approach. It involved an LLM running C code so I could compare the library's output against my implementation's and make sure they matched byte for byte.

The C will never ship. I don't have practice writing C, so I am very inefficient at it. I read it okay. LLMs are pretty decent help for this type of scrap code.

I once wrote a little generalized yaml templating processor in Python by using an LLM for assistance. It was working pretty well and passing a lot of the tests that I was throwing at it!

Then I noticed that some of the tests that failed were failing in really odd ways. Upon closer inspection, the generated processor had made lots of crazy assumptions about what it should be doing based upon specific values in yaml keys that were obviously unrelated to instructions.

Yeah, I agree with the author. This stuff can be incredibly useful, but it definitely isn't anything like an AGI in its current form.

For anyone wondering, the article clarifies that "A NIF is a function that is implemented in C instead of Erlang".

I had a bunch of fun getting ChatGPT Code Interpreter to write (and compile and test) C extensions for SQLite last year: https://simonwillison.net/2024/Mar/23/building-c-extensions-...

  • Not only C. It can be done in any compiled language (C, Rust, Zig, etc.). Not sure if it can be done with a garbage-collected language.

    • BEAM loads a shared object, that opens the door to anything.

      If you want to use a GC language for NIFs, you'd need to hook up your runtime somehow.

      IMHO, it makes more sense to lean into the BEAM and use its resource management... my NIFs have all been pretty straightforward to write. All the boilerplate is what it is, and figuring out how to cooperate with the scheduler for long-running code or I/O can be a bit tricky, but if you can do a lot in a BEAM language, the native code ends up being:

      Check the arguments, do the thing, return the results.
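      For the curious, that "check the arguments, do the thing, return the results" shape looks roughly like this against the erl_nif.h API. This is a compile-only sketch (the function and module names are made up, and it obviously needs the Erlang/OTP headers to build):

      ```c
      #include <erl_nif.h>

      /* Hypothetical NIF: add two integers. */
      static ERL_NIF_TERM add_nif(ErlNifEnv *env, int argc,
                                  const ERL_NIF_TERM argv[])
      {
          int a, b;
          /* 1. Check the arguments. */
          if (argc != 2 ||
              !enif_get_int(env, argv[0], &a) ||
              !enif_get_int(env, argv[1], &b))
              return enif_make_badarg(env);
          /* 2. Do the thing. */
          int sum = a + b;
          /* 3. Return the results as an Erlang term. */
          return enif_make_int(env, sum);
      }

      static ErlNifFunc nif_funcs[] = {
          {"add", 2, add_nif}
      };

      /* The module name must match the Erlang/Elixir module that
       * calls :erlang.load_nif/2. */
      ERL_NIF_INIT(Elixir.MyNif, nif_funcs, NULL, NULL, NULL, NULL)
      ```

      Everything outside the three numbered steps is boilerplate, which is why the hard parts end up being scheduler cooperation and resource lifetimes rather than the function body itself.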

Built my startup in Elixir. Love it, but NIFs are one of the few ways you can crash the VM. I don't trust myself to write a NIF in production; no way I'd do it with AI in C. Thank god there's projects like Rustler which can catch panics before they crash the main VM.

I tried to do this a few weeks ago, I tried to build a NIF around an existing C lib. I was using Claude Opus and burned over $300 (I didn't have Pro) on tokens with no usable results.

Why C instead of Rust or Zig? Rustler and Zigler exist. I feel like a Vibecoded NIF in C is the absolute last thing I would want to expose the BEAM to.

  • Given the amount of issues the code had when I ran splint on the C file, I agree. The question was for me whether I can get something working to get over the "speed bump" of lacking such a function for the API client I'm writing.

    I'm now re-vibe-coding it into Rust with the same process, but also using Grok 4 to get better results. It now builds and passes the tests on Elixir 1.14 to 1.18 on macOS and Ubuntu, but I'm still trying to get Grok 3 and 4 to fix the Windows-specific parts of the Rust code.

  • Why not C? It made no difference, we're talking about a few function calls.

      because the author self-admittedly doesn't know C! One of the reasons people use the BEAM VM is that it's robust and fault-tolerant.

      A lot of the choices here are made at the expense of the VM's health.

      Also, why wouldn't anyone just use :disksup.get_disk_info/1? (That's immediate.) Calling :disksup.get_disk_info/1 won't mess with the scheduler the way a custom NIF or a big blocking port might.

      I see the above code/lib and just see red flags all over the place.

      1 reply →

It's interesting that the author used weaker models (Grok 3 when 4 is available, and Gemini 2.5 Flash when Pro is), since the difference in coding quality between these models is significant; the results could have been much better.

This was built by copy-pasting results from chats? Not using an IDE or CLI like Claude Code or Amp? Why such a manual process? This isn't 2023…

  • Because what difference would it make, given the bad quality of code?

    Also, is Claude Code free to use?

    The manual process has the upside that you get to see how the sausage is (badly) made. Otherwise, just YOLO it and put your trust in GenAI completely.

    Furthermore, if there is the interim step of pushing to GitHub to trigger the build & test workflow and see if it works on something other than Linux, is the choice of Vibe-Coding IDE really the limiting factor in the entire process?

So all this arose because you didn't read the docs and note that get_disk_info/1 immediately fetches the data when called? The every-30-minutes-by-default checks are for generating "disk usage is high" event conditions.

  • Thanks, that was not clear to me from skimming the docs.

    However, this NIF also returns more fields than the disksup function.