Comment by tptacek
1 year ago
I use LLMs for three things:
* To catch passive voice and nominalizations in my writing.
* To convert Linux kernel subsystems into Python so I can quickly understand them (I'm a C programmer but everyone reads Python faster).
* To write dumb programs using languages and libraries I haven't used much before; for instance, I'm an ActiveRecord person and needed to do some SQLAlchemy stuff today, and GPT 4o (and o1) kept me away from the SQLAlchemy documentation.
OpenAI talks about o1 going head to head with PhDs. I couldn't care less. But for the specific problem we're talking about on this subthread: o1 seems materially better.
> * To convert Linux kernel subsystems into Python so I can quickly understand them (I'm a C programmer but everyone reads Python faster).
Do you have an example chat of this output? Sounds interesting. Do you just dump the C source code into the prompt and ask it to convert to Python?
No, ChatGPT is way cooler than that. It's already read every line of kernel code ever written. I start with a subsystem: the device mapper is a good recent example. I ask things like "explain the linux device mapper. if it was a class in an object-oriented language, what would its interface look like?" and "give me dm_target as a python class". I get stuff like:
(A bunch of stuff elided). I don't care at all about the correctness of this code, because I'm just using it as a roadmap for the real Linux kernel code. The example use-case code is something GPT-4o provides that I didn't even know I wanted.
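For flavor, a Python rendering along the lines described might look like the sketch below. This is an illustration only, not the elided ChatGPT output: the method names mirror the real kernel's `struct target_type` callbacks (`ctr`, `dtr`, `map`), but the `LinearTarget` toy and its dict-based "bio" are invented simplifications.

```python
# Illustrative sketch only -- NOT the elided ChatGPT output. Method names
# mirror the kernel's struct target_type callbacks (ctr, dtr, map); the
# dict-based "bio" and LinearTarget toy are invented simplifications.
class DmTarget:
    """Rough Python analogue of struct dm_target plus its target_type ops."""

    def __init__(self, begin, length):
        self.begin = begin      # first sector of the mapped range
        self.len = length       # number of sectors this target covers
        self.private = None     # per-target state, set up by ctr()

    def ctr(self, args):
        """Constructor: parse table-line args, allocate private state."""
        raise NotImplementedError

    def dtr(self):
        """Destructor: release whatever ctr() allocated."""
        raise NotImplementedError

    def map(self, bio):
        """Remap an incoming I/O request to the underlying device."""
        raise NotImplementedError


class LinearTarget(DmTarget):
    """Toy analogue of dm-linear: shift I/O by a fixed sector offset."""

    def ctr(self, args):
        device, offset = args
        self.private = {"device": device, "offset": int(offset)}

    def dtr(self):
        self.private = None

    def map(self, bio):
        bio["device"] = self.private["device"]
        bio["sector"] += self.private["offset"]
        return bio


t = LinearTarget(begin=0, length=1024)
t.ctr(["/dev/sda1", "2048"])
print(t.map({"device": None, "sector": 10}))  # → {'device': '/dev/sda1', 'sector': 2058}
```

The point of a sketch like this is exactly what the comment says: it's a roadmap. You read `ctr`/`dtr`/`map` here, then go find the real `struct target_type` in `drivers/md/` knowing what shape to expect.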
That's awesome. Have you tried asking it to convert Python (pseudo-ish) code back into C that interfaces with the kernel?
It's all placeholders; that's my experience with GPT trying to write slop code.