Comment by kgeist
11 days ago
Interesting how times have changed. Back in 2015, the entire Go runtime (already a mature codebase) was rewritten from C to Go semi-automatically: one of the maintainers wrote a C-to-Go conversion tool (for the subset of C the runtime used) so that the converted code compiled and produced identical output, and then the result was manually refactored to make the Go code more idiomatic and optimized. And now you can just ask a language model.
The slides: https://go.dev/talks/2015/gogo.slide#3
An interesting similarity:
>We had our own C compiler just to compile the runtime.
The Bun team maintains their own fork of Zig too.
The big difference here is that the C-to-Go tool was presumably deterministic: running it over and over again should produce the exact same result. You can trust that result because the human wrote the conversion tool, understood it, tested it, and worked the bugs out.
The LLM is non-deterministic. You could have it independently do the conversion 10 times, and you'd get 10 different results, some of them possibly wildly different. There's no way to validate that without reviewing each result in its entirety.
That's not to say the human-written deterministic conversion tool is going to be perfect or infallible. But you can certainly build much more confidence with it than you can with the LLM.
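For what it's worth, the "same output every run" property is cheap to check mechanically. A minimal sketch in Go, assuming a hypothetical converter binary named c2go: run it twice on the same file and byte-compare the outputs.

```go
// determinism_check.go: run a hypothetical converter twice on the same
// input and verify the outputs are byte-identical. The "c2go" binary
// name and the "proc.c" input are assumptions for illustration.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

// convert runs the converter on one input file and returns its output.
func convert(input string) ([]byte, error) {
	return exec.Command("c2go", input).Output()
}

func main() {
	first, err := convert("proc.c")
	if err != nil {
		log.Fatal(err)
	}
	second, err := convert("proc.c")
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(first, second) {
		log.Fatal("converter is not deterministic for proc.c")
	}
	fmt.Println("identical output on both runs")
}
```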
I'm not convinced by this argument. If you put 10 senior devs on a problem, you'd get 10 solutions. Maybe even 12. If one engineer solves the same problem 10 times, you'll also get 10 solutions.
The problem is not that we get 10 solutions. I think you should draw out your implications and state them directly, because they're already either solved or being actively iterated on by industry. And we (well, not me) can address them if you're willing to spell them out.
It's more about knowing that the tool will always produce the same result, like a compiler. There is also a difference: the LLM may use different solutions within a file and across files.
A viable approach might be to vibe code the translation tool itself and verify that for every input it produces the expected output. Then, once the translation is done, the translation tool can be discarded.
This would require a robust test suite, though; something like the golden-file sketch below.
One of the cases where vibe coding might actually be useful: writing a throwaway tool.
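A minimal sketch of such a suite in Go, assuming a hypothetical Translate function (the vibe-coded tool under test) and a testdata/ directory pairing each C input with a .go.golden expected output:

```go
// translate_test.go: golden-file tests for a hypothetical Translate
// function. Each testdata/*.c input is paired with a *.go.golden file
// holding the expected output; names and layout are assumptions.
package translator

import (
	"os"
	"path/filepath"
	"strings"
	"testing"
)

func TestTranslateGolden(t *testing.T) {
	inputs, err := filepath.Glob("testdata/*.c")
	if err != nil {
		t.Fatal(err)
	}
	for _, in := range inputs {
		t.Run(filepath.Base(in), func(t *testing.T) {
			src, err := os.ReadFile(in)
			if err != nil {
				t.Fatal(err)
			}
			got := Translate(string(src)) // the vibe-coded tool under test
			want, err := os.ReadFile(strings.TrimSuffix(in, ".c") + ".go.golden")
			if err != nil {
				t.Fatal(err)
			}
			if got != string(want) {
				t.Errorf("output for %s does not match golden file", in)
			}
		})
	}
}
```

With `go test`, every change to the throwaway tool gets re-checked against the whole corpus before you trust its output.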
I see this dilemma with LLMs all of the time.
Should you use the LLM to do the thing directly, or use the LLM to implement a tool that does the thing?
I tend to reach for the latter; it's easier to reason about.
Why does the deterministic nature matter? The interesting part is having oracle tests, not determinism. If something is deterministic and wrong, you use oracle tests to catch that.
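To make "oracle tests" concrete, a sketch of a differential check in Go, where the original build is the oracle and the converted one must agree with it on every input. The binary names ./original_c and ./converted_go are assumptions:

```go
// oracle_check.go: differential testing sketch. The original binary
// serves as the oracle; the converted one must match its output on
// every test input, however the conversion was produced.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

// runWith feeds input on stdin and captures stdout.
func runWith(binary, input string) ([]byte, error) {
	cmd := exec.Command(binary)
	cmd.Stdin = bytes.NewBufferString(input)
	return cmd.Output()
}

func main() {
	inputs := []string{"", "hello", "edge\x00case"}
	for _, in := range inputs {
		want, err := runWith("./original_c", in)
		if err != nil {
			log.Fatal(err)
		}
		got, err := runWith("./converted_go", in)
		if err != nil {
			log.Fatal(err)
		}
		if !bytes.Equal(got, want) {
			log.Fatalf("mismatch on input %q", in)
		}
	}
	fmt.Println("all inputs agree with the oracle")
}
```

Note this check is indifferent to whether the conversion came from a hand-written tool or an LLM.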
People keep saying "non-deterministic" when they mean "probabilistic". For illustration, a bloom filter is deterministic, but it's also probabilistic. LLMs are the same.
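To make that concrete, here's a toy bloom filter in Go: every query is deterministic (same key, same answer, every run), yet a positive answer is only probabilistically correct.

```go
// bloom.go: a toy bloom filter. Lookups are deterministic, but a
// "maybe present" answer can be a false positive, i.e. the structure
// is deterministic and probabilistic at the same time.
package main

import (
	"fmt"
	"hash/fnv"
)

type bloom struct {
	bits []bool
	k    int // number of hash probes per key
}

func newBloom(m, k int) *bloom { return &bloom{bits: make([]bool, m), k: k} }

// probe derives the i-th bit index for a key.
func (b *bloom) probe(key string, i int) int {
	h := fnv.New64a()
	fmt.Fprintf(h, "%d:%s", i, key)
	return int(h.Sum64() % uint64(len(b.bits)))
}

func (b *bloom) add(key string) {
	for i := 0; i < b.k; i++ {
		b.bits[b.probe(key, i)] = true
	}
}

// mayContain never gives a false negative, but may give a false positive.
func (b *bloom) mayContain(key string) bool {
	for i := 0; i < b.k; i++ {
		if !b.bits[b.probe(key, i)] {
			return false
		}
	}
	return true
}

func main() {
	b := newBloom(64, 3)
	b.add("go")
	b.add("zig")
	// Deterministic: repeating a query always gives the same answer.
	fmt.Println(b.mayContain("go"), b.mayContain("go")) // true true
	// Probabilistic: an absent key can still hash to all-set bits.
	fmt.Println(b.mayContain("rust")) // usually false, occasionally true
}
```

Run it as many times as you like: the answers never change, but whether they're right is a matter of probability.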
You could also use the LLM to create a program to do the conversion, then review the program and use it to deterministically perform the actual conversion.
Have the best of both worlds.