Comment by tptacek

1 year ago

I've had the opposite experience with some coding samples. After reading Nick Carlini's post, I've gotten into the habit of powering through coding problems with GPT (where previously I'd just laugh and immediately give up) by just presenting it the errors in its code and asking it to fix them. o1 seems to be effectively screening out some of those errors (only some, I assume, but the o1 outputs I've gotten haven't had obvious dumb errors like missing imports, while all my 4o attempts have).
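
A minimal sketch of that feedback loop, assuming the official openai Python client; the file name, round count, and prompt wording here are my own invention, and it assumes the model replies with bare code rather than a fenced markdown block:

    import subprocess
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    code = open("attempt.py").read()  # hypothetical script being repaired

    for _ in range(3):  # a few repair rounds
        result = subprocess.run(["python", "attempt.py"],
                                capture_output=True, text=True)
        if result.returncode == 0:
            break  # it runs; stop iterating
        # hand the model its own traceback and ask for a corrected version
        reply = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user",
                       "content": f"Fix this program:\n{code}\n\n"
                                  f"It fails with:\n{result.stderr}"}],
        )
        code = reply.choices[0].message.content  # assumes a bare-code reply
        open("attempt.py", "w").write(code)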

My experience is likely colored by the fact that I tend to turn to LLMs for problems I have trouble solving by myself. I typically don't use them for the low-hanging fruit.

That's the frustrating thing. LLMs don't materially reduce the set of problems where I'm running against a wall or have trouble finding information.

  • I use LLMs for three things:

    * To catch passive voice and nominalizations in my writing.

    * To convert Linux kernel subsystems into Python so I can quickly understand them (I'm a C programmer but everyone reads Python faster).

    * To write dumb programs using languages and libraries I haven't used much before; for instance, I'm an ActiveRecord person and needed to do some SQLAlchemy stuff today, and GPT-4o (and o1) kept me away from the SQLAlchemy documentation (see the sketch after this list).

    OpenAI talks about o1 going head to head with PhDs. I couldn't care less. But for the specific problem we're talking about on this subthread: o1 seems materially better.
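
    To make the SQLAlchemy point concrete, here's a minimal sketch of the kind of glue code I mean, using the 2.0-style ORM API; the table, columns, and in-memory SQLite URL are hypothetical stand-ins:

      from sqlalchemy import create_engine, select
      from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session

      class Base(DeclarativeBase):
          pass

      class User(Base):
          __tablename__ = "users"  # hypothetical table
          id: Mapped[int] = mapped_column(primary_key=True)
          name: Mapped[str]

      engine = create_engine("sqlite:///:memory:")
      Base.metadata.create_all(engine)

      with Session(engine) as session:
          session.add(User(name="alice"))
          session.commit()
          # roughly User.where(name: "alice") in ActiveRecord terms
          users = session.scalars(select(User).where(User.name == "alice")).all()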

    • > * To convert Linux kernel subsystems into Python so I can quickly understand them (I'm a C programmer but everyone reads Python faster).

      Do you have an example chat of this output? Sounds interesting. Do you just dump the C source code into the prompt and ask it to convert to Python?
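
      For illustration, a hypothetical hand-written sketch (not actual model output) of what that kind of translation tends to look like: a kernel-style intrusive list walk flattened into an ordinary Python loop over plain objects.

        # Hypothetical sketch of task_struct + for_each_process() in Python.
        class Task:
            def __init__(self, pid, comm):
                self.pid = pid    # was: pid_t pid;
                self.comm = comm  # was: char comm[TASK_COMM_LEN];

        def for_each_process(tasks):
            # was: the for_each_process(p) macro walking init_task's list
            for task in tasks:
                yield task

        for t in for_each_process([Task(1, "init"), Task(42, "kworker/0:1")]):
            print(t.pid, t.comm)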

  • LLMs are not for expanding the sphere of human knowledge, but for speeding up the auto-correct of higher-order processing, helping you reach the shell of the sphere more quickly and make progress with your own mind :)

  • It's funny because I'm very happy with the productivity boost from LLMs, but I use them in a way that is pretty much diametrically opposite to yours.

    I can't think of many situations where I would use them for a problem that I tried and failed to solve, not only because they would probably fail too, but because in many cases it would be difficult even to know that they had failed.

    I use them for things that are not hard, things that could be done by anyone without a specialized degree who put in the effort to learn some knowledge or skill, but that would take too much work to do myself. And there are a lot of those, even in my highly specialized job.

  • > That's the frustrating thing. LLMs don't materially reduce the set of problems where I'm running against a wall or have trouble finding information.

    As you step outside regular Stack Overflow questions about the top three languages, you run into the limitations of these predictive models.

    There's no "reasoning" behind them. They are still, largely, bullshit machines.

    • You're both on the wrong wavelength. No one has claimed it is better than an expert human yet. Be glad: for now, your jobs are safe. Why not use it as a tool to boost your productivity, even though you'll get proportionally less out of it than people in other, perhaps less "expert", jobs?
