Comment by sottol

2 days ago

Interesting, I find the exact opposite, although to a much lesser extent (maybe a 50% boost).

I ended up shoehorned into backend dev in Ruby/Py/Java and don't find it improves my day-to-day a lot.

Specifically in C, it can bang out complicated but mostly common data structures without fault, where I would surely make off-by-one errors. I guess since I do C as a hobby, I tend to solve more interesting and complicated problems, like generating a whole array of dynamic C dispatchers from a UI-library spec in JSON that allows parsing and rendering a UI specified in YAML. Gemini Pro even spat out a YAML-dialect parser after a few attempts/fixes.
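To give a rough idea of what I mean by a dispatcher array, here's a hand-written miniature of that kind of table (the widget names and render functions are made up for illustration; the real table is generated from the JSON spec, not written by hand):

    /* Minimal sketch: map a node type string from the YAML to a render
     * function via a lookup table, instead of a hard-coded switch. */
    #include <stdio.h>
    #include <string.h>

    typedef struct UiNode {
        const char *type;   /* e.g. "button", "label" -- from the YAML */
        const char *text;
    } UiNode;

    typedef void (*RenderFn)(const UiNode *);

    static void render_button(const UiNode *n) { printf("[ %s ]\n", n->text); }
    static void render_label(const UiNode *n)  { printf("%s\n", n->text); }

    /* In the real project this array would be code-generated from the
     * UI library's JSON spec, one entry per widget type. */
    static const struct { const char *type; RenderFn fn; } dispatch[] = {
        { "button", render_button },
        { "label",  render_label  },
    };

    static void render(const UiNode *n) {
        for (size_t i = 0; i < sizeof dispatch / sizeof dispatch[0]; i++) {
            if (strcmp(dispatch[i].type, n->type) == 0) {
                dispatch[i].fn(n);
                return;
            }
        }
        fprintf(stderr, "unknown node type: %s\n", n->type);
    }

    int main(void) {
        UiNode nodes[] = { { "label", "Hello" }, { "button", "OK" } };
        for (size_t i = 0; i < sizeof nodes / sizeof nodes[0]; i++)
            render(&nodes[i]);
        return 0;
    }

Trivial at this size, but with dozens of widget types and argument-marshalling per entry, it's exactly the kind of mechanical, error-prone code an LLM churns out reliably.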

Maybe it's a function of familiarity and the problems you end up using the AI for.

As in, it seems to be best at problems that you’re unfamiliar with in domains where you have trouble judging the quality?

  • >it seems to be best at problems that you’re unfamiliar with

    Yes.

    >in domains where you have trouble judging the quality

    Sure, possibly. Kind of like how you think the news is accurate until you read a story that's in your field.

    But not necessarily. Might just be more "I don't know how to do <basic task> in <domain that I don't spend a lot of time in>", and LLMs are good at doing basic tasks.