Comment by jgraham
4 hours ago
> it's foolish to fight the future
And yet the premise of the question assumes that fighting it is possible in this case.
Historically having produced a piece of software to accomplish some non-trivial task implied weeks, months, or more of developing expertise and painstakingly converting that expertise into a formulation of the problem precise enough to run on a computer.
One could safely assume that any plausible-looking submission was in fact the result of someone putting in the time to refine their understanding of the problem and express it in code. By discussing the project, one could hope to learn more about their understanding of the problem domain, or about the choices they made when reifying that understanding into an artifact useful for computation.
Now that no longer appears to be the case.
Which isn't to say there's no longer any skill involved in producing well-engineered software that continues to function over time, or that there aren't classes of software requiring interesting novel approaches that AI tooling can't generate. But now anyone with an idea, some high-level understanding of the domain, and a few hundred dollars a month to spend can write out a plan and ask an AI provider to generate software implementing it. That software may or may not be good, but determining which requires a significant investment of time.
That shift fundamentally changes the dynamics of "Show HN" (and probably much else besides).
It's essentially the same problem that art forums had with AI-generated work. Except they have an advantage: people generally agree that there's some value to art being artisan; the skill and effort that went into producing it are — in most cases — part of the reason people enjoy consuming it. That makes it rather easy to at least develop a policy to exclude AI, even if it's hard to implement in practice.
But the most common position here is that the value of software is what it does. Whilst people might intellectually prefer 100 lines of elegant Lisp to 10,000 lines of spaghetti PHP for solving a problem, the majority view here is that if the latter provides more economic value (e.g. as the basis of a successful business) then it's better.
So now the cost of verifying things for interestingness is higher than the cost of generating plausibly-interesting things, and you can't even have a blanket policy that tries to enforce a minimum level of effort on the submitter.
To engage with the original question: if one were serious about extracting the human understanding from the generated code, one would probably take a leaf from the standards world, where the important artifact is a specification that allows multiple parties to produce unique but functionally equivalent implementations of an idea. In the LLM case, that would presumably be a plan detailed enough to reliably one-shot an implementation across several models.
However I can't see any incentive structure that might cause that to become a common practice.