Comment by whatisthiseven
4 hours ago
I don't think I have ever used stars in making a decision to use a library and I don't understand why anyone would.
Here are the things I look at in order:
* last commit date. Newer is better
* age. Old is best if it's still updated. New is not great, but tolerable if commits aren't too rapid
* issues. Not the count, mind you, just looking at them. How are they handled, what kind of issues are lingering open.
* some of the code. No one is evaluating all of the code of libraries they use. You can certainly check some!
What do stars tell me? They are an indirect variable caused by the things above (driving real engagement and third-party interest), or otherwise fraud. The only way to tell which is to look at the things I listed anyway.
I always treated stars as a bookmark ("I'll come back to this project") and never thought of them as a quality metric. Years ago, when this problem first surfaced, I was surprised (though in retrospect I should not have been) that they had become a substitute for quality.
I hope the FTC comes down hard on this.
Edit:
* commit history: just browse the history to see what's there. What kind of changes are made and at what cadence.
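For what it's worth, the first two items on the list (plus an issue count as a pointer to go read) can be derived mechanically from the response of the GitHub REST API's `GET /repos/{owner}/{repo}` endpoint. A rough sketch in Python; the field names (`created_at`, `pushed_at`, `open_issues_count`) are standard API fields, but the helper itself is hypothetical:

```python
from datetime import datetime, timezone

def quick_signals(meta, now=None):
    """Derive checklist signals from a GitHub /repos/{owner}/{repo} response dict."""
    now = now or datetime.now(timezone.utc)

    def parse(s):
        # GitHub timestamps look like "2015-01-01T00:00:00Z"
        return datetime.strptime(s, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

    created = parse(meta["created_at"])
    pushed = parse(meta["pushed_at"])
    return {
        "age_days": (now - created).days,          # old + still updated is best
        "days_since_push": (now - pushed).days,    # newer last push is better
        "open_issues": meta["open_issues_count"],  # just a pointer: go read them, don't count them
    }
```

You'd feed it the JSON from e.g. `https://api.github.com/repos/OWNER/REPO`. Of course the checklist's real work (reading the issues and the code) can't be automated; this only filters out the obviously dead.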
> I don't think I have ever used stars in making a decision to use a library and I don't understand why anyone would
I do it all the time, whenever there are competing libraries to choose among.
It's a heuristic that saves me time.
If one library has 1,000 stars and the other has 15, I'm going to default to the 1,000 stars.
I also look at download count and release frequency. Basically I don't want to use some obscure dependency for something critical.
> If one library has 1,000 stars and the other has 15, I'm going to default to the 1,000 stars.
There are clearly inflection points where stars become useful, with "nobody has ever used this package" and "Meta/Alphabet pays to develop/maintain this package" on the two extremes.
I'm less sure what the signal says in-between those extremes. We have 2 packages, one has 5,000 stars, the other has 10,000 stars - what does this actually tell me, apart from how many times each has gone viral on HN?
> If one library has 1,000 stars and the other has 15, I'm going to default to the 1,000 stars.
Will you continue to do this after reading TFA?
> It's a heuristic that saves me time.
A bad one.
I listed many other useful heuristics. Do you not find value in them? Do you find stars more valuable than them?
Take a moment to consider that stars may only be useful for packages created prior to ~2015, before they became such a strong vanity metric, and which are already very well established. This preconditions you to think "stars can still sometimes be useful, because I took a look at Facebook's React GH and it has a quarter million stars".
Sure, it's useful for that. But you aren't going to need to evaluate whether the "React" package is safe. You already trivially know it is.
You'll be evaluating packages like "left-pad". Or any number of packages involved in the latest round of supply chain attacks.
For that matter, VCs are the ones stars are being targeted at, and potential employers (something this article doesn't cover, but some potential hires do hope to leverage on their resume).
If you are a VC, or an employer, it is a negative metric. If you are a dev evaluating packages, it is a vacuous metric that either tells you what you already know, or would be better answered looking at literally anything else within that repo.
The article also called out how download count can be trivially faked. I admit I have mistakenly relied on it in the past. Release frequency I do use as one metric.
When I care about making decisions for a system that will ingest 50k-250k TPS or need to respond in sub-second timings (systems I have worked on multiple times), you can bet "looking at stars" is a useless metric.
For personal projects, it is equally useless.
I care about how many tutorials are online. And today, I care more about whether there are enough textual artifacts for the LLMs to usefully build it into their memory and to search on. I care whether the docs are good, so I spend fewer tokens burning through the codebase for APIs. I care whether they resolve issues in a timely manner. I care whether they have meaningful releases and not just garbage nothings every week.
I didn't mean for this to sound like a rant. But seriously, I just can't imagine any scenario where "I look at stars" is a useful metric. You want to add it to the list? Sure. That is fine. But it should not be a deciding factor. I have chosen libraries with fewer stars because they had better metrics on the things I cared about, and it was the correct choice (I ended up needing to evaluate both anyhow, but I had my preference from the start).
Choosing the wrong package will waste so much more of your time. Spend the five minutes evaluating the stuff that is important to your project.
This behavior is similar to something from the time I played a very popular MMORPG. When people selected others for their groups, their criteria defaulted to the candidate's analyzed gameplay records (their 'logs') on a website, which boiled down to a number showing their damage dealt and the color of its text.
There was nothing about going into the logs to see whether they could handle the game's mechanical challenges or minimize their damage taken. It made for a worse environment, yet the players couldn't stop themselves from using such criteria.
In short, humans are lazy and default to numbers and colors when given the chance. And when others question them on it, they have an easy default answer: they were just part of the herd.
I use stars to try and protect myself from dependency confusion attacks.
For example, let’s say I want to run some piece of software that I’ve heard about, and let’s say I trust that the software isn’t malware because of its reputation.
Most of the time, I’d be installing the software from somewhere that’s not GitHub. A lot of package managers will let anyone upload malware with a name that’s very similar to the software I’m looking for, designed to fool people like me. I need to defend against that. If I can find a GitHub repo that has a ton of stars, I can generally assume that it’s the software I’m looking for, and not a fake imitator, and I can therefore trust the installation instructions in its readme.
Except this is also not 100% safe, because as mentioned in TFA, stars can be bought.
Sure, I suppose that is one solution, but given that buying stars has been around for at least 5 years, and I have been aware of people faking stars for longer than that, I am not sure why you would rely on stars as a primary metric.
There are many other far more useful metrics to look at first, and to focus on first, and to think about. Every time you think about stars, you'll forget the other stuff, or discount it in favor of stars.
Forget stars. They now no longer mean anything. Even if they did before, they don't anymore.
Interesting that 5 years ago is exactly when this page showed up according to the Wayback Machine: https://docs.github.com/en/get-started/exploring-projects-on...
In it they explicitly call it out as a ranking metric
> Many of GitHub's repository rankings depend on the number of stars a repository has. In addition, Explore GitHub shows popular repositories based on the number of stars they have.
Yet another case of metric -> target -> useless metric
What does "TFA" mean here please?
The article. Pick whatever adjective you like beginning with F!
I think it's "The fucking article".
The featured article.
The fucking article.
You call these baubles, well, it is with baubles that men are led... Do you think that you would be able to make men fight by reasoning? Never. That is only good for the scholar in his study. The soldier needs glory, distinctions, and rewards.
https://en.wikiquote.org/wiki/Napoleon
Totally agree with you. I think GitHub "stars" are a relic of the past. They should be renamed to "Bookmarks" and exist as a tool for users to just mark interesting repositories. By no means should a repository keep a count of how many people bookmarked it. It makes no practical sense. Active maintainers and commit dates are much better metrics.
Agree! My longstanding metric uses just two values:
* Most recent commit
* Total number of commits
This might have to die in the era of AI, but it's served me well for a long time. Rather than how many people are paying attention, it tries to measure the effort put in.
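Both of those values are cheap to compute once you have a clone: `git log -1 --format=%cI` prints the most recent commit date and `git rev-list --count HEAD` prints the total commit count. A toy sketch of the metric itself, as a pure function over commit timestamps (the helper and its return keys are hypothetical):

```python
from datetime import datetime, timezone

def effort_signals(commit_times, now=None):
    """Summarize 'effort put in' from a list of commit datetimes.

    In a real clone the inputs would come from git plumbing, e.g.:
        git log -1 --format=%cI    (most recent commit date, ISO 8601)
        git rev-list --count HEAD  (total number of commits)
    """
    now = now or datetime.now(timezone.utc)
    latest = max(commit_times)
    return {
        "total_commits": len(commit_times),              # rough proxy for effort invested
        "days_since_last_commit": (now - latest).days,   # rough proxy for liveness
    }
```

As the comment notes, a total-commit count is gameable too (and LLM-assisted churn will inflate it), so it is a proxy for effort, not a guarantee of quality.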
I usually use stars as bookmarks to maybe come back to some repo I thought looked interesting a year later. Terrible metric to invest based on!
> I don't think I have ever used stars in making a decision to use a library and I don't understand why anyone would.
You might not have, but the makers of the dependencies you use might, so it's still problematic.
True, but that is beyond my control. I am not evaluating every package within a dependency tree unless something happens, out of practicality.
I have limited time on this Earth and at my employer. My job is not critical to life. I am comfortable with this level of pragmatism.
But to someone else, it is a meaningful metric that you bookmarked something. It doesn't matter that the star isn't you saying you liked something. It's already telling enough merely that you wanted to bookmark it.
It's only not meaningful because of how other people can game it and fabricate it, but everything you just said, if it was only people like you, that would be a very meaningful number.
It doesn't even matter why you bookmarked it, and it doesn't matter that, whatever the reason was, it doesn't prove the project as a whole is good or useful. Maybe you bookmarked it because you hate it and want to keep track of it for reference in your TED talk about all the worst stuff you hate, but by the numbers, adding up everyone's bookmarks, the most likely reason is that you found something interesting. It doesn't even matter what was interesting or why. The entire project could be worthless and the thing you're bookmarking nothing more than some markdown trick in the readme. That's fine. That counts. Or it's all terrible, not a single thing of value, and the only reason to bookmark it is that it's the only thing that turned up in a search. Even that counts, because it still shows they tried to work on something no one else even tried to work on.
It's like, it doesn't matter how little a given star means, it still does mean something, and the aggregation does actually mean something, except for the fact of fakes.
> it still does mean something
Yes...which is why I said it is an indirect variable, as caused by the other things I pointed out above. Age, quality, code, utility, whether issues are addressed, interest, etc. Or fraud. Pretty cut and dry.
FWIW, I almost never star repos. Even ones I use or like. I don't see the utility for myself.
Aim for a more concise post and don't couch your statements in doubt next time if you want a productive conversation, because I don't know what you are trying to say.
I also never in my career have consciously looked at the GH star counter on a repo, let alone used it to make decisions.
Instead I look at (in addition to the above):
1. Who is the author? Is it just some person chasing Internet clout by making tons of 'cool' libraries across different domains? Or are they someone senior working in an industry sector whose expertise might actually benefit the project?
2. Is the author working alone? Are there regular contributors? Is there an established governance structure? Is the project going to survive one person getting bored / burning out / signing an NDA / dying?
3. Is the project style over substance? Did it introduce logos, discord channels, mascots too early? Is it trying too hard to become The New Hot Thing?
4. What are the project's dependencies? Is its dependency set conservative or is it going to cause supply chain problems down the line?
5. What's the project's development cadence? Is it shipping features and breaking APIs too fast? Has it ever done a patch release or backported fixes, or does it always live at the bleeding edge?
6. NEW ARRIVAL 2026! Is the project actually carefully crafted and well designed, or is it just LLM slop? Am I about to discover that even though it's a bunch of code it doesn't actually work?
7. If the project is security critical (handles auth, public facing protocol parsing, etc.): do a deeper dive into the code.