Comment by armchairhacker

4 months ago

The problem domain is part of the solution domain: writing a good specification and tests is a skill.

Moreover, I suspect the second kind of quality won't completely go away: a smart machine will develop new techniques to organize its code (making it "neat and clear" to the machine), which may resemble human techniques. I wouldn't bet much on it, but maybe even, buried within the cryptic code output by a machine, there will be patterns resembling popular design patterns.

Brute force can get results faster than careful planning, but brute force combined with planning gets results faster than either alone. AI will keep being optimized (even if one day it starts optimizing itself), and organization is presumably a good optimization.

Furthermore: LLMs think differently from humans, e.g. they seem to have a much larger "context" (analogous to short-term memory), but their training (analogous to long-term memory) is immutable. Yet there are similarities, as demonstrated in LLM responses: e.g. they reason in English, and they reach conclusions without understanding the steps they took. Assuming this holds for later AIs, the structures those AIs organize their code into, to make it easier for themselves to understand, probably won't be the structures humans would create, but they'll be similar.

Although auto-encoders are a different and much smaller type of model, they show evidence of this: they work via compression, which is a form of organization, and their weights roughly correspond to human concepts like specific digits (MNIST) or facial characteristics (https://www.youtube.com/watch?v=4VAkrUNLKSo&t=352).
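To make the "compression is organization" point concrete, here's a minimal sketch (my own illustration, not from the linked video): a tiny linear auto-encoder in NumPy, trained by plain gradient descent on synthetic 5-D data that secretly lies on a 2-D plane. The 2-unit bottleneck forces the model to discover and exploit that structure. All names and hyperparameters are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 points that really live on a 2-D plane inside R^5.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing  # shape (200, 5)

# Encoder/decoder weights: 5 -> 2 -> 5, no nonlinearity (so this is PCA-like).
W_enc = rng.normal(scale=0.1, size=(5, 2))
W_dec = rng.normal(scale=0.1, size=(2, 5))

def recon_error(X, W_enc, W_dec):
    """Mean squared error between the data and its reconstruction."""
    return np.mean((X - X @ W_enc @ W_dec) ** 2)

lr = 0.01
initial = recon_error(X, W_enc, W_dec)
for _ in range(5000):
    Z = X @ W_enc            # compressed 2-D codes
    R = Z @ W_dec            # reconstruction back in 5-D
    G = R - X                # gradient of 0.5 * squared error w.r.t. R
    W_dec -= lr * Z.T @ G / len(X)
    W_enc -= lr * X.T @ (G @ W_dec.T) / len(X)
final = recon_error(X, W_enc, W_dec)

print(initial, final)  # the error drops sharply: the bottleneck found the 2-D structure
```

The point of the toy: nothing tells the model about the 2-D plane, but squeezing through the bottleneck makes recovering it the cheapest way to reconstruct the input, which is the same pressure that makes the MNIST/face-attribute weights interpretable.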