Comment by rel_ic

1 day ago

This is kind of like saying a kid can never become a better programmer than the average of his teachers.

IMHO, the reasons not to use AI are social, not logical.

The kid can learn and become better over time, while "AI" can only be retrained using better training data.

I'm not against using AI by any means, but I know what to use it for: stuff where I'd do worse than half the population because I can't be bothered to learn it properly. I don't want to toot my own horn, but I'd say I'm definitely better at my niche than 50% of people. There are plenty of other niches where I'm not.

  • Yeah, but it's been trained on the boring, repetitive stuff, and A LOT of code that needs to be written is just boring, repetitive stuff.

    Leaving the busywork to the drones frees up the mind to solve the interesting and unsolved problems.

The AI doesn't know what good or bad code is. It doesn't know what surpassing someone means. It's been trained to generate text similar to its training data, and that's what it does.

If you fed it only good code, you'd expect a better result, but currently we're feeding it average code. The cost of evaluating code quality across such a huge data set is too high.

  • The training data includes plenty of examples of labelled good and bad code. And comparisons between two implementations plus trade-offs and costs and benefits. I think it absolutely does "know" good code, in the sense that it can know anything at all.

    • There does exist some text making comparisons like that, but compared to the raw quantity of totally unlabeled code out there, it's tiny.

      You can do some basic checks like "does it actually compile", but for the most part you'd really need to go out and do manual categorization, which would be brutally expensive.
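The "does it actually compile" filter mentioned above is cheap to sketch. Here is a minimal illustration (the function name `compiles_ok` and the sample snippets are made up for this example), using Python's built-in `compile()` to reject syntactically invalid training samples:

```python
def compiles_ok(source: str) -> bool:
    """Cheap automated filter: does this Python snippet at least compile?

    This catches syntax errors only -- it says nothing about whether the
    code is *good*, which is the expensive manual part discussed above.
    """
    try:
        compile(source, "<sample>", "exec")
        return True
    except SyntaxError:
        return False


# Hypothetical scraped samples: one valid, one broken.
samples = [
    "def add(a, b):\n    return a + b\n",
    "def broken(:\n    pass\n",
]

# Keep only the samples that pass the basic check.
kept = [s for s in samples if compiles_ok(s)]
```

This kind of automated pass scales to huge corpora, but as the comment notes, it only filters out the obviously broken; ranking the remainder by quality would still require expensive manual categorization.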