Comment by nikcub
1 day ago
from my understanding Anthropic are now hiring a lot of experts in different fields who write content used to post-train models to make these decisions, and those decisions are constantly adjusted by the Anthropic team themselves
this is why the stacks in the report, and what cc suggests, closely match the latest developer "consensus"
your suggestion would degrade user experience and be noticed very quickly
I guess that’s why I’m not seeing anyone trying to build a marketplace for agent skills files. The LLM API will read any skills you want to add to context as plain text, and can then use your content to help populate its own skills files.
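Mechanically, that ingestion step is trivial, which is part of the problem. A minimal sketch of what "reading skills into context" amounts to (the directory layout and heading format here are hypothetical, not any vendor's actual spec):

```python
from pathlib import Path

def load_skills(skills_dir: str) -> str:
    """Concatenate plain-text skill files into one context block.

    Hypothetical layout: a directory of .md files, one skill each.
    The point is that any skill text you publish can be read straight
    into a model's context and reused, so there's nothing to gate.
    """
    parts = []
    for path in sorted(Path(skills_dir).glob("*.md")):
        parts.append(f"## Skill: {path.stem}\n{path.read_text()}")
    return "\n\n".join(parts)
```

Anything you could sell on a marketplace is just this: text that the buyer's agent reads once and can paraphrase forever after.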
So I wonder about shareable skills. If it's a problem that lots of people have, I find the base model knows about it already.
But how to do things in your environment? The conventions your team follow? Super useful but not very shareable.
What's left over between those extremes doesn't seem big enough to build an ecosystem around.
A final problem: it seems difficult to monetise what is effectively a repo of LLM-generated text files.
isn't that https://lobehub.com/ ?
That sounds too expensive to be viable when the giveaway phase ends.
That's how Google search worked back when it was at its most useful. They had a large "editorial team" that manually tweaked page ranks on a site-by-site basis.
The core reputation-based graph ranking algorithm lasted for a hot second before people started gaming it. No idea what they do these days.
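For reference, the original algorithm is just power iteration over the link graph: a page's rank is the damped sum of rank flowing in from pages that link to it. A toy sketch (illustrative only, nothing like the production system):

```python
def pagerank(graph, damping=0.85, iterations=50):
    """Power-iteration PageRank over a dict mapping page -> outbound links."""
    pages = list(graph)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # every page starts each round with the "random jump" share
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, links in graph.items():
            if links:
                # split this page's rank evenly across its outbound links
                share = damping * rank[page] / len(links)
                for target in links:
                    new_rank[target] += share
            else:
                # dangling page: distribute its rank evenly to everyone
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)  # "c" ends up highest: it's linked from both a and b
```

The gaming is obvious from the code: every inbound link adds rank, so link farms work until someone manually intervenes — hence the editorial team.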
Yeah but you can farm that out very cheap, and I don’t think they were even manually reviewing more than a small fraction of sites.
If you’re hiring experts to manually rank programming libraries, that’s a much more expensive position.