Comment by durch
16 days ago
LLMs find the center of the distribution: the typical pattern, the median opinion. Tailwind was an edge bet. It required metis, the tacit competence to know that the consensus (semantic classes, separation of concerns, the cascade) was a local maximum worth escaping. That judgment, knowing what the center is wrong about, doesn't emerge from interpolation. It emerges from the recognition loop where you try something, feel "that's not quite it," and refine.
The bottleneck was never typing. It was judgment. Tailwind is crystallized judgment. AI can consume it endlessly. Producing the next version requires the loop that creates metis, and that loop isn't in the training data.