What does that mean? Can you elaborate?
Sorry, yes. LLMs write code that's then checked by human reviewers. Maybe it will be checked less in the future, but I'm not seeing fully autonomous AI on the horizon.
At that point, the legibility of the code, and the prevalence of humans who can read it, become almost more important than which language the machine "prefers."
Well, verification is generally easier than creation (that's the intuition behind the P ≠ NP conjecture). I think humans who can quickly verify that something works will be in more demand than those who know how to write it. Even better: since LLMs aren't as creative as humans (they think in-distribution), test-writers, who have to think out-of-distribution, will be in even more demand. Both points mean humans will still be needed, just for different reasons.
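To make the asymmetry concrete, here's a minimal sketch in Python (llm_sort is a hypothetical stand-in for whatever the model produced). The checker is a property test: it verifies the output without knowing anything about how the implementation works, which is exactly why verifiers are cheaper to produce than implementations.

    import random

    def llm_sort(xs):
        # Hypothetical stand-in for code the model wrote.
        return sorted(xs)

    def verify_sort(f, trials=1000):
        # Property test: writing this checker needs no knowledge of
        # how f works -- verification is easier than creation.
        for _ in range(trials):
            xs = [random.randint(-99, 99) for _ in range(random.randint(0, 20))]
            ys = f(list(xs))
            assert all(a <= b for a, b in zip(ys, ys[1:]))  # output is ordered
            assert sorted(xs) == sorted(ys)  # same multiset of elements
        return True

    print(verify_sort(llm_sort))  # True

Getting llm_sort right is the hard part; verify_sort is a dozen mechanical lines that a human can sanity-check in a minute.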
The future belongs to generalists!