
Comment by xnx

7 days ago

"I'm not joking and this isn't funny. We have been trying to build distributed agent orchestrators at Google since last year. There are various options, not everyone is aligned... I gave Claude Code a description of the problem, it generated what we built last year in an hour."

https://x.com/rakyll/status/2007239758158975130

The real question is "could she have written that problem statement a year ago, or is a year's learning by the team baked into that prompt?"

There are lots of distributed agent orchestrators out there. Rarely will any of them do exactly what you want. The hard part is usually defining the problem to be solved.

  • There’s definitely a strong Clever-Hans-the-counting-horse effect: language models perform vastly better on questions whose answers the question writer already knew.

  • Even the Google engineer points out that this isn't an engineering problem, but an alignment problem among the Google devs themselves.