Comment by donatj

8 hours ago

I run a tiny site that basically gave a point-at-able definition to an existing ad hoc standard. As part of that effort, the homepage has a list of software and libraries that follow the standard. Initially I would accept just about anything, but as the list grew I started wanting to set a sort of notability baseline.

Specifically, someone submitted a library that was only a few days old, clearly entirely AI-generated, and not particularly well built.

I noted my concerns with listing said library in my reply declining to do so, among them that it had "zero stars". The author was very aggressive, and in his rant of a reply he asked how many stars he needed. I declined to answer; that's not how this works. Stars are a consideration, not the be-all and end-all.

You need real world users and more importantly real notability. Not stars. The stars are irrelevant.

This conversation happened on GitHub and since then I have had other developers wander into that conversation and demand I set a star count definition for my "vague notability requirement". I'm not going to, it's intentionally vague. When a metric becomes a target it ceases to be a good metric as they say.

I don't want the page to get overly long, and if I simply listed everything above some star count I'd certainly end up listing some sort of malware.

I am under no obligation to list your library. Stop being rude.

> When a metric becomes a target it ceases to be a good metric as they say.

https://en.wikipedia.org/wiki/Goodhart's_law

  • Nice to know the name for this — Goodhart's Law. And I think the core reason is that the cost to fake these metrics is far less than what they claim to represent. Stars, reviews, ratings, trading volumes — all cheap to manufacture, and only getting cheaper with AI.

    I've been thinking about this a lot. These metrics are all just marketing signals to draw people's attention, trying to make some kind of deal. So the fix should be: make the cost of the signal match what it claims to represent. I'm obsessed with something called DUKI /djuːki/ (Decentralized Universal Kindness Income, a form of UBI): the idea is that instead of stars or reviews, trust comes from deals that pledge real money to everyone as the deal happens. You can't fake that cheaply.

    So the metric becomes the money itself — if you fake X amount, it costs you X, and the world will thank you by paying attention...

    Imagine if GitHub let you back a star with real money: the more you put in, the more credible the star. And that money goes out as UBI for everyone. Attention makers can star anything they want, as much as they want; attention takers just follow the money to filter out all the noise that's otherwise so easy to manipulate...
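    A minimal sketch of that mechanism (entirely hypothetical, not a real GitHub feature; all names invented): weight each star by the money pledged behind it, so manufacturing X worth of credibility costs exactly X, and free bot stars count for nothing.

    ```python
    from dataclasses import dataclass

    @dataclass
    class BackedStar:
        user: str
        pledge: float  # money pledged behind the star, paid out as UBI

    def credibility(stars):
        # A repo's credibility is the total money behind its stars,
        # so faking X of credibility costs X.
        return sum(s.pledge for s in stars)

    repos = {
        "real-project": [BackedStar("alice", 5.0), BackedStar("bob", 2.5)],
        "bot-farm": [BackedStar(f"bot{i}", 0.0) for i in range(1000)],
    }

    # A thousand free stars rank below two money-backed ones.
    ranked = sorted(repos, key=lambda r: credibility(repos[r]), reverse=True)
    ```

    Under this scoring, a Sybil attack with free accounts is pointless; the only way to look credible is to actually pay.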

    • > make the cost of the signal match what it claims to represent.

      Well, that's the WHOLE problem of trust. There has been so much work on blockchains (proof of work, proof of stake, etc.) precisely to protect ourselves from attacks, e.g. https://en.wikipedia.org/wiki/Sybil_attack

      If you do find a way, it would apply to a lot more than "just" GitHub stars for VCs.