GitHub CLI now collects pseudoanonymous telemetry

8 hours ago (cli.github.com)

    Why we collect telemetry

    ...our team needs visibility into how features are being used in practice. We use this data to prioritize our work and evaluate whether features are meeting real user needs.

I'm curious why corporate development teams always feel the need to spy on their users? Is it not sufficient to employ good engineering and design practices? Git has served us well for 20+ years without detailed analytics over who exactly is using which features and commands. Would Git have been significantly better if it had collected telemetry, or would the data not have just been a distraction?

  • I used to believe that it was not necessary until I started building my own startup. If you don't have analytics you are flying blind. You don't know what your users actually care about and how to optimize a successful user journey. The difference between what people tell you when asked directly and how they actually use your software is actually shocking.

    • You're only flying blind if you make decisions without looking and thinking. Analytics isn't the only way to figure out "what your users actually care about"; you can also try the old-school way, commonly referred to as "Talking with people". After taking notes, you think about it, maybe discuss it with others. Don't take what people say at face value, but weigh it together with your knowledge and experience, and you'll make even better product decisions than the people who are only making "data driven decisions" all the time.

      58 replies →

    • > The difference between what people tell you when asked directly and how they actually use your software is actually shocking.

      And the difference between what they do and what they want is equally shocking. If what they want isn’t in your app, they can’t do it and it won’t show up in your data.

      Quantitative data doesn’t tell you what your users want or care about. It tells you only what they are doing. You can get similar data without spying on your users.

      I don’t necessarily think all data gathering is equivalent to spying, but if it’s not entirely opt-in, I think it is effectively spying no matter what you’re collecting, varying only along a dimension of invasiveness.

      4 replies →

    • It's not like they don't own the APIs that those CLIs are hitting. They have all the stats they need.

    • > If you don't have analytics you are flying blind.

      We... we are talking about a CLI tool. A CLI tool that directly uses the API. A tool which already identifies itself with a User-Agent[0].

      A tool which obviously knows who is using it. What information are you gathering by running telemetry on my machine that couldn't.. just. be. a. database. query?

      Reading the justification the main thing they seem to want to know is if gh is being driven by a human or an agent... Which, F off with your creepy nonsense.

      Please don't just use generic "but ma analytics!" when this obviously doesn't apply here?

      [0]: https://github.com/cli/cli/blob/3ad29588b8bf9f2390be652f46ee...
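
      For what it's worth, here is a sketch of the "just a database query" point: with a toy access log (the format here is an assumption; real GitHub logs surely differ), the server side can already count which clients hit the API using only the User-Agent header it records anyway:

```shell
# Toy access log; the format is made up for illustration, but any
# HTTP server already records the User-Agent of each request.
cat > access.log <<'EOF'
203.0.113.7 "GET /repos/cli/cli HTTP/1.1" 200 "GitHub CLI 2.63.0"
203.0.113.7 "POST /repos/cli/cli/issues HTTP/1.1" 201 "GitHub CLI 2.63.0"
198.51.100.2 "GET /user HTTP/1.1" 200 "curl/8.5.0"
EOF

# Usage per client, most popular first -- no client-side telemetry involved.
awk -F'"' '{print $(NF-1)}' access.log | sort | uniq -c | sort -rn
```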

    • Analytics is wrong. I never click any ads, but they keep showing them. I avoid registering or enter fake emails, but they keep showing full-screen popups asking for an email. I always reject cookies but they still ask me to accept them. And YouTube keeps pushing those vertical videos for alternately gifted kids despite me never watching them. What's the point of this garbage analytics? It seems that their only goal is to annoy people.

    • Wow, it really is sad how literally unthinkable it is to you and so much of the industry that you could actually talk to your users and customers like human beings instead of just data points.

      And you know what happens when you reach out to talk to your customers like human beings instead of spying on them like animals? They like you more and they raise issues that your telemetry would never even think to measure.

      It's called user research and client relationship management.

      11 replies →

    • The totality of Microsoft's products is proof that this is false. If telemetry and analytics actually mattered for usability, every product Microsoft puts out would be good instead of garbage.

    • > If you don't have analytics you are flying blind

      More like flying based on your knowledge as a pilot and not by the whims of your passengers.

      For many CLIs and developer tooling, principled decisions need to reign. Accepting the unquantifiability of usage in a principled product is often difficult for those that are not the target demographic, but for developer tools specifically (be they programming languages, CLIs, APIs, SDKs, etc), cohesion and common sense are usually enough. It also seems real hard for product teams to accept the value of the status quo with these existing, heavily used tools.

      1 reply →

    • It makes me wonder: what `gh` features don't generate some activity against the GitHub API that could just as easily guide feature development without adding extra telemetry?

      1 reply →

    • Game developers benefit tremendously from streams where they get to see people's webcams _and_ screens as they use their software.

      This would be _absolutely insane_ telemetry to request from a user for any other piece of software, but it would be fantastically useful in identifying where people get frustrated and why.

      That said, I do not trust Microsoft with any telemetry, I am not invested in helping them improve their product, and I am happy not to rely on the GitHub CLI.

    • I agree with you in that regard. That said, knowing that this is Microsoft, the data will be used to extract value from the customers, not provide any to them.

    • You could, I don't know, do user interviews with the various customer segments that use your product.

    • How did GitHub ever survive without this telemetry? Was it a web application buried in obscurity?

    • This got me thinking: Are there prominent examples of open source projects that 1. collect telemetry, 2. without a way to opt-out (or obfuscating / making it difficult to opt-out)? This practice seems to be specific to corporate software development.

      Why is it that startups and commercial software developers seem to be the only ones obsessed with telemetry? Why do they need it to "optimize user journeys" but open source projects do just fine while flying blind?

      1 reply →

    • You can "optimize a successful user journey" by making the software easy to use, making it load so fast people are surprised by it, and talking to your customers. Telemetry doesn't help you do any of that, but it does help you squeeze more money out of them, or find out where you can pop an interstitial ad to goose your ad revenue, and what features you can move up a tier level to increase revenue without providing any additional value.

    • I think there's room for a distinction between "not using metrics" and "not using data".

      Unthinkingly leaning on metrics is likely to help you build a faster, stronger horse, while at the same time avoiding building a car, a bus or a tractor.

    • Teams that do this need to just dogfood internally. Once you start collecting telemetry from external users on an opt-out basis, you're not a good-faith actor in the ecosystem.

    • You have all the info you need on the server side; I don't believe that you're totally blind without client tracking.

  • > I'm curious why corporate development teams always feel the need to spy on their users? Is it not sufficient to employ good engineering and design practices?

    No, because users have different needs and thoughts from the developers. And because sometimes it's hard to get good feedback from people. Maybe everyone loves the concept of feature X, but then never uses it in practice for some reason. Or a given feature has a vocal fan base that won't actually translate to sales/real usage.

    > Would Git have been significantly better if it had collected telemetry, or would the data not have just been a distraction?

    I think yes, because git famously has a terrible UI, and any amount of telemetry would quickly tell you people fumble around a lot at first.

    I imagine that in an alternate world, a git with telemetry would have come out with a less confusing UI because somebody would have looked at the stats and for instance have added "git restore" right from the very start, because "git checkout -- foo.txt" is an absolutely unintuitive command.
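
    To make the comparison concrete, here's a throwaway-repo sketch showing that the two spellings do the same thing when discarding an unstaged edit (`git restore` needs git >= 2.23):

```shell
set -e
# Throwaway repo to compare the old and new spellings.
cd "$(mktemp -d)"
git init -q
echo original > foo.txt
git add foo.txt
git -c user.email=you@example.com -c user.name=you commit -q -m 'add foo'

echo scribble > foo.txt
git checkout -- foo.txt   # old, overloaded spelling
cat foo.txt               # prints: original

echo scribble > foo.txt
git restore foo.txt       # newer, purpose-named spelling; same effect
cat foo.txt               # prints: original
```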

    • I think the big problem with telemetry is that it's too much of a black box. There is zero transparency on how that data is really used, and we have a long history of large corporations using this data to build prediction products that track people by fingerprinting behavior through these signals. There is too much at stake right now around this topic for people to trust any implementation.

    • Didn't Go propose opt-out telemetry but then the community said no?

      Compilers and whatnot seem to suffer from the same problem that programs like git(1) do. Once you've put it out there in the world you have no idea if someone will still use some corner of it thirty years from now.

    • > because git famously has a terrible UI

      Thankfully, GitHub has zero control over git. If they did have control they would have sunk the whole operation in year one

      > because somebody would have looked at the stats and for instance have added "git restore" right from the very start, because "git checkout -- foo.txt" is an absolutely unintuitive command.

      How is git restore any better? Restoring what from when? At least git checkout is clear in what it does.

      11 replies →

    • A more intuitive git UI would reduce engagement. Do you really want to cut a 30 minute git session down to five minutes by introducing things like 'git restore' or 'git undo'? /s

    • > I think yes, because git famously has a terrible UI, and any amount of telemetry would quickly tell you people fumble around a lot at first.

      1. git doesn’t have a UI, it’s a program run in a terminal environment. the terminal is the interface for the user.

      2. git has a specific design that was intended to solve a specific problem in a specific way. mostly for linux kernel development. so, the UX might seem terrible to you — but remember that it wasn’t built for you, nor was it designed for people in their first ever coding boot camp. that was never git’s purpose.

      3. the fact that every other tool was designed so poorly that everyone (eventually, mostly) jumped on git as a new standard is an expression of the importance of designing systems well.

      11 replies →

  • > I'm curious why corporate development teams always feel the need to spy on their users

    Unfortunately this is due to a large part of "decision makers" being non-technical folks who can't understand how the tools are actually used, as they don't use such tools themselves. So some product manager "responsible" for development tooling needs this sort of stuff to be able to perform in their job, just as some clueless product manager in e-commerce absolutely has to overload your frontend with scripts tracking your behaviour, also to be able to perform in their job. Of course the question remains why those jobs exist in the first place, as engineers were perfectly capable of designing interaction with their users before the VCs imposed the unfortunate paradigm of a deeply non-technical person somehow leading the design and development of highly technical products... So here we are, sharing our data with them, because how else will Joe collect his PM paycheck in between prompting the AI for his slides and various "very important" meetings...

    • Man, if I had a nickel for every time a PM asked me to violate user privacy for the purposes of making a slide that will be shown to their boss for 2.5 seconds, I'd probably make enough to actually retire someday.

      1 reply →

  • > Would Git have been significantly better if it had collected telemetry, or would the data not have just been a distraction?

    I'm not sure if you're implying it's obvious but it's not obvious to me that it would be unhelpful.

    • Just anecdotally, I get the feeling telemetry often does more harm than good, because it's too easy to misinterpret or lie with statistics. There needs to be proper statistical methodology, and biases need to be considered, but this doesn't always happen. Maybe a contrived example, but say someone wants to show high impact on their next performance review: implement the new feature in such a way that everyone easily misclicks it, then show the extremely high engagement as demonstration that their work is a huge success. For Git, I'm not sure it would be widely adopted today if the development process had been mainly telemetry-driven rather than Torvalds developing it based solely on his expertise and intuition.

      2 replies →

    • I think seeing the underutilized commands and flags (with real data, not just a hunch) would have helped identify where users were not understanding why they should use them, and could have helped refine the interface and docs to make it gradually more usable.

      I mean no solution is perfect, and some underused things are just only sometimes extremely useful, but data used smartly is not a waste of time.

  • > Is it not sufficient to employ good engineering and design practices? Git...

    Git has horrible design and ergonomics.

    It is an excellent example of engineers designing interfaces for engineers without a good feedback loop.

    Ironically, you just proved your point that engineers need to better understand how users are actually using their product, because their mental visualizations of how their product gets used is usually poor.

    • > Git has horrible design and ergonomics.

      People say this and never write about the supposed failure of design. Git has a very good conceptual model, and then provides operations (aptly named once you know the model) to manipulate it.

      Most people who complain about git only think of it as code storage (folder {v1,v2,...}) instead of version control.

      2 replies →

  • It isn't only corporate development teams — open source development teams want to spy on their users, too. For instance, Homebrew: "Anonymous analytics allow us to prioritise fixes and features based on how, where and when people use Homebrew." [1]

    [1] https://docs.brew.sh/Analytics

  • > Is it not sufficient to employ good engineering and design practices?

    It's not that it's insufficient; it's that new developers, product people, and designers literally don't know how to make tasteful and useful decisions without first "asking users" by experimenting on them.

    It used to be that you built up an intuition for your user base, but considering everyone is changing jobs every year, I guess people don't have time for that anymore. So literally every decision is "data driven", and no user is super happy or unhappy anymore; everyone is just "OK, that's fine".

  • The impact of a few more network calls and decreased privacy is basically never felt by users beyond this abstract "they're spying on me" realization. The impact of this telemetry for a product development team is material.

    Not saying that telemetry is more valuable than privacy, just that it's a straightforward decision for a company to make when real benefits are only counterbalanced by abstract privacy concerns. This is why it's so universally applied across apps and tools developed commercially.

    • For most CLIs, I definitely feel extra network calls because they translate to real latency for commands that _should_ be quick.

      If I run "gh alias set foo bar", and that takes even a marginally perceptible amount of time, I'll feel like the tool I'm using is poorly built since a local alias obviously doesn't need network calls.

      I do see that `gh` is spawning a child to do sending in the background (https://github.com/cli/cli/blob/3ad29588b8bf9f2390be652f46ee...), which also is something I'd be annoyed at since having background processes lingering in a shell's session is bad manners for a command that doesn't have a very good reason to do so.

  • The people who write any individual feature want to be able to prove usage in order to get good performance reviews and promotions. It's so awful that it's become normalized. Back in The Day we had the term “spyware” to refer to any piece of software that phoned home to report user behavior, but now that's just All Software.

  • Anonymous telemetry isn't necessarily spying, though "pseudoanonymous" sounds about as well protected as distinguishing between free speech and "absolutism." Github also wouldn't be tracking git use here, but the `gh` CLI that you don't need to install.

    All that said, having been in plenty of corporate environments I would be surprised if the data is anonymized and wouldn't be surprised if the primary motivator boils down to something like internal OKRs and politics.

  • You have three features, A, B, and C. They are core features. Two of the features break. How do you prioritize which feature gets fixed first? With telemetry it's obvious; without it, you're guessing.

    Also, the gh CLI is not about git, it's about the GitHub API. In theory the app has its own user agent and of course their LB is tracking all HTTP requests, so it was never anonymous.

  • > I'm curious why corporate development teams always feel the need to spy on their users?

    Cause the alternative is viewing all of your app as one opaque blob - you don't know exactly how it's being used, which features actually need your attention, especially if you're spread thin. If you're in consulting or something like that and the clients haven't let you configure and/or access analytics (and the same goes for APM and log shipping), it's like flying blind. Couple that with vague bug reports instead of automated session recording and if you need to maintain that, you'll have gray hairs appearing by the age of 30.

    Take that disregard of measurement and spread it all across the development culture and you'll get errors in the logs that nobody is seeing and no insights into application performance - with the system working okay at a load X, but falling over at X+1 and you having to spend late evenings trying to refactor it, knowing that it needs to be shipped in less than a week because of client deadlines. Unless the data is something that's heavily regulated and more trouble than it's worth, more data will be better than less data, if you do something meaningful with it.

    > Would Git have been significantly better if it had collected telemetry, or would the data not have just been a distraction?

    Knowing the most common fuck-ups and footguns might inform better CLI design. Otherwise, people saying that it's good have about as much grounds as people saying that it's bad (at least in regards to UX), without knowing the ground-level truth about what 90% of the users experience.

    • > you don't know exactly how it's being used, which features actually need your attention, especially if you're spread thin.

      Why not conduct a survey?

      > vague bug reports instead of automated session recording and if you need to maintain that, you'll have gray hairs appearing by the age of 30.

      If it's a customer, why not reach out to them directly?

      > with the system working okay at a load X, but falling over at X+1 and you having to spend late evenings trying to refactor it,

      No one is talking about telemetry on your servers. We're talking about telemetry on client's computers.

  • Perhaps the more interesting question is why these companies feel the need to "explain" why they are collecting telemetry or "disclose" how the data is used

    The software user has no means to verify the explanation or disclosure is accurate or complete. Once the data is transferred to the company then the user has no control over where it goes, who sees it or how it is used

    When the company states "We use the data for X" it is not promising to use the data for X in the future, nor does it prevent the company, or one of its "business partners", from using the data additionally for something else besides X

    Why "explain" the reason for collecting telemetry

    Why "disclose" how the data is used

    What does this accomplish

  • Git relatively recently got an `--i-still-use-this` option for two deprecated commands that you have to pass if you want to use them. The error you get tells you about it and says you should "please email us here" if you really are unable to figure out an alternative.

    I guess that's the price of regular and non-invasive software.

  • > I'm curious why corporate development teams always feel the need to spy on their users?

    I've repeatedly talked about this on HN; I call it Marketing Driven Development. It's when some Marketing manager goes to your IT manager and starts asking for things that no customer wants or needs, so they can track if their initiatives justify their job, aka are they bringing in more people to x feature?

    Honestly, with something as sensitive as software developer tools, I think any sort of telemetry should ALWAYS be off by default.

  • > I'm curious why corporate development teams always feel the need to spy on their users?

    This isn't that surprising to me. Having usage data is important for many purposes. Even Debian has an opt-in usage tracker (popcon) to see what packages they should keep supporting.

    What I’m curious about is why this is included in the CLI. Why aren’t they measuring this at the API level where they wouldn’t need to disclose it to anyone? What is done locally with the GH CLI tool that doesn’t interact with the GitHub servers?

  • While I agree, I personally always opt out if I'm aware, and hate it when a tool suddenly gets telemetry, I don't think Git is comparable, same with Linux.

    Linux and Git are fully open source, and have big companies contribute to it. If a company like Google, Microsoft etc need a feature, they can usually afford to hire someone and develop _and_ maintain this feature.

    Something like gh is the opposite. It's maintained by a single organisation, and the team maintaining it has finite resources. I don't think it's much to ask to want to understand what features are being used, what errors might come up, etc.

    • Good news! gh is actually a client of a web API so they can just read their logs to know what's being used!

  • Product work can be counterintuitive. An engineer / PM might think that a design or feature “makes sense”, but you don’t actually know that unless you measure usage.

  • When allocating engineering spend you need to predict impact. If you know how the features of GitHub CLI are used, you can do this more easily.

  • I'm curious as well. GitHub is one of the rare products out there that gets actual valuable user feedback. So why not just ask the users for specific feedback instead of tracking all of them?

  • It's not the devs themselves, but the team/project/product management show that needs to pretend they are data driven, but then resort to the silliest metrics that are easy to measure.

  • The current AI boom is entirely based on data. The more data you have, the more you can train and the more money you make.

  • > Would Git have been significantly better if it had collected telemetry

    Yes, probably. Git is seriously hard to use beyond basic tasks. It has a byzantine array of commands, and the "porcelain" feels a lot closer to "plumbing" than it should. You and I are used to it, but that doesn't make it good.

    I mean, it took 14 years before it gained a `switch` command! `checkout` and `reset` can do like six different things depending on how your arguments resolve, from nondestructive to very, very destructive; safe(r) operations like --force-with-lease are made harder to find than their more dangerous counterparts; it's a mess.

    Analytics alone wouldn't solve the problem - you also need a team of developers who are willing to listen to their users, pore through usage data, and prioritize UX - but it would be something.

  • git is terrible from a UX perspective

    >Would Git have been significantly better if it had collected telemetry, or would the data not have just been a distraction?

    Definitely

  • > always feel the need to spy on their users?

    If it's truly pseudoanonymous then it's hardly spying, just sayin'...

    Others have answered your actual question better than I could have.

  • Arguably yes. git has a terrible developer experience and we've only gotten to this point where everyone embraces it through Stockholm syndrome. If someone had been looking at analytics from git, they'd have seen millions of confused people trying to find the right incantation in a forest of confusing poorly named flags.

    Sincerely, a Mercurial user from way back.

  • I'm curious why people think this is in the same ballpark as what something like a private investigator can do. This isn't spying at all.

    "oh no, they're aware of someone at the computer 19416146-F56B-49E4-BF16-C0D8B337BF7F running `gh api` a lot! that's spying!"

  • > I'm curious why corporate development teams always feel the need to spy on their users?

    Because they're too shy, lazy, or socially awkward to actually ask their users questions.

    They cover up this anxiety and laziness by saying that it costs too much, or it doesn't "scale." Both of these are false.

    My company requires me to actually speak to the people who use the web sites I build; usually about every ten to twelve months. The company pays for my time, travel, and other expenses.

    The company does this because it cares about the product. It has to, because it is beholden to the customers for its financial position, not to anonymous stock market trading bots a continent away.

    • Respectfully, I think your argument defeats itself. If you can only speak to your users once every 10-12 months, your process doesn't scale, by definition. Good analytics (not useless vanity metrics) should allow you to spot a problem days after launch, not wait three quarters for a user to air their grievances.

      3 replies →

  • This is where (surprise surprise) I respect Valve. The hardware survey is opt in and transparent. They get useful info out of it and it’s just..not scummy.

    There are all sorts of best practices for getting info without vacuuming up everyone’s data in opaque ways.

  • Git has notoriously had performance issues, did not scale, and has had a horrible user interface. Both of these problems can be measured using telemetry, and improvements can be tracked once telemetry is in place.

    • How was it notorious if git has no telemetry? According to you without telemetry nothing can be known, and nothing can become notorious.

      2 replies →

If you have 3 of your developers spending 80% of their time in an area of the codebase that gets no usage and you don't see a path forward that realistically is likely to increase usage, it can be a better use of developer time to focus them elsewhere or even rethink the feature.

The problem I have with a lot of these analytics is that while there are harmless ways to use it, there is this understanding that they could be tying your unique identifier to behavioral patterns which could be used to reconstruct your identity with machine learning. It's even worse if they include timestamps.

Why not just expose exactly what telemetry is being sent when it's sent? Like add an option that makes telemetry verbose, but doesn't send it unless you enable it. That way you can evaluate it before you decide to turn it on. Whenever you do the Steam Hardware survey it'll show you what gets sent. This is the right way to do it.
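
A minimal sketch of that model (all names below are made up for illustration, not real gh settings): print the exact payload when asked, and transmit nothing unless the user has explicitly enabled it.

```shell
# Hypothetical preview-before-sending gate; TELEMETRY_VERBOSE and
# TELEMETRY_ENABLED are illustrative names, not actual gh configuration.
send_telemetry() {
    payload=$1
    if [ "${TELEMETRY_VERBOSE:-0}" = "1" ]; then
        # Show the user exactly what would leave the machine.
        echo "telemetry payload: $payload"
    fi
    if [ "${TELEMETRY_ENABLED:-0}" = "1" ]; then
        : # the actual upload (e.g. an HTTP POST) would go here
    fi
}

TELEMETRY_VERBOSE=1 send_telemetry '{"command":"alias set","os":"linux"}'
```

With both variables unset, the function prints nothing and sends nothing, which is the point: you can inspect before you consent.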

> you're going to have to opt out of a lot more than this one setting

The opt-out situation for gh CLI telemetry is actually trickier than it sounds. gh runs in CI/CD pipelines and server environments where you may not want any outbound connections to github.com at all, not because of privacy but because of networking constraints. In those environments, telemetry being on by default means extra calls that your CI job or bastion host may not be able to make.

Compare this to git itself, which is entirely local until you explicitly push. The trust model is different: git will never phone home unless you configure it to. gh, being a wrapper around the GitHub API, has to make those calls to function - but that's separate from whether it should also be collecting and uploading your command patterns.

  • > In those environments, the telemetry being on by default means your CI fails or your Bastion host can't reach GitHub at all.

    I'd be surprised if the inability to submit telemetry is a hard error that crashes the program

  • Isn't the gh CLI useless if it can't connect to GitHub.com? Or does it work with enterprise GitHub, and is that the use case you're talking about?

Do people think that GitHub isn't already collecting and aggregating all the requests sent to their servers, which is after all the entire point of the gh CLI?

If you don't want your requests tracked, you're going to have to opt out of a lot more than this one setting.

  • The data is on their server, so obviously they are already doing it; they just want to increase tracking with additional client-side metrics, knowing what transits to GitLab, Codeberg, and such as well.

    • I did not get that impression from these docs or from a brief look through the gh CLI codebase. Can you point to evidence that makes you believe this is used to collect metrics about requests to other services?

So happy I deployed Gitea to my homelab last month. It's got an import feature from GitHub, and honestly it's just faster and has better uptime than GitHub. Claude can use it just fine with the tea CLI and git. It's pretty much a knockoff GitHub, but I think it's better so far.

  • I'm running Forgejo, which has the same core code, and yeah, it's amazing. Faster and better uptime indeed. It even works when my internet goes down, because it's on a Pi 4 here in the cabinet next to my desk. Backups are done with borg and syncthing to an offsite location. It takes a bit of work setting it up, but after that maintenance time is near zero. I just manually SSH in once every two weeks to check SSD space and RAM usage, run apt update and upgrade, and do major version bumps.

Good for GitHub. All companies need this. Some use it to improve products, some use it for less commendable goals. I know HN crowd is allergic to telemetry but if you've ever developed a software as a service, telemetry is indispensable.

  • GitHub CLI is not a SaaS. It's a commandline utility.

    • That doesn't mean it doesn't have usage patterns or other things telemetry would be useful for. And, at the rate these tools are being updated (multiple times a week, multiple times a day in some cases), they practically _are_ SaaS.

  • Thinking out loud: what are the best practices to vet a tool's telemetry details? The devil is in the details.

    A quick summary of my Claude-assisted research at the Gist below. Top of mind is some kind of trusted intermediary service with a vested interest in striking a definable middle ground that is good enough for both sides (users and product-builders)

    Gist: WIP 31 minutes in still cookin'

Do they mean "pseudonymous" telemetry meaning "non-identifying telemetry", or do they mean "pseudoanonymous" telemetry meaning telemetry is that not really anonymous?

Those two words have almost exactly opposite meanings, and as stated, they are literally saying they are collecting identifiable data.

  • The page only uses the term “pseudonymous”. “Pseudoanonymous” seems to be an invention of the HN submitter.

  • It means they can see all the telemetry from a single machine, but the identity of the machine is not tied to any human identity or GitHub account. Each machine appears to get its own UUID, and that's how they "identify" machines.
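    A minimal sketch of how such a per-machine ID scheme typically works (the directory, file name, and event name below are hypothetical illustrations, not GitHub's actual implementation):

```shell
# Hypothetical per-machine pseudonymous ID: generated once, reused for
# every telemetry event. Nothing ties it to a user account, but all
# events from one machine correlate under the same UUID.
CONF_DIR="$(mktemp -d)"        # stand-in for something like ~/.config/sometool
ID_FILE="$CONF_DIR/machine-id"
if [ ! -f "$ID_FILE" ]; then
  # uuidgen exists on macOS and most Linux; /proc fallback is Linux-only
  (uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid) > "$ID_FILE"
fi
MACHINE_ID="$(cat "$ID_FILE")"
echo "would send: {\"machine_id\": \"$MACHINE_ID\", \"event\": \"pr_create\"}"
```

    The ID never names you, but every event it is attached to is linkable to every other event from the same machine, which is exactly the "pseudonymous, not anonymous" distinction being debated here.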

Remember that thing Microsoft does?

Embrace, extend, extinguish.

The first two have been done.

I give it five years before the GH CLI is the only way to interact with GitHub repos.

Then the third will also be done, and the cycle is complete.

  • > I give it five years before the GH CLI is the only way to interact with GitHub repos.

    I'll take that bet. How much are you willing to put on it?

  • >I give it five years before the GH CLI is the only way to interact with GitHub repos.

    I do not doubt this; already it seems to be a pain to deal with some repos on GitHub without using gh. I do not know what gh buys you, and I have never used it, so I do not know if it is "better". To me the standard git commands are fine. But yes, I think the trend of forcing gh upon us has already begun.

    • I do use a command-line program as the only way to interact with GitHub (using the GitHub API), but I do not use GH CLI; I have my own implementation (which is much smaller than theirs). (They can see that I use my own, because of the User-Agent header, and they can also see what APIs are accessed.) (Git can also be used, but only for the functions of Git rather than the functions of GitHub.)

  • people that say things like this are exhausting. exhausting. You make it so very easy to classify you straight into the "looney" bin. People said that WSL was EEE for Linux. when that didn't happen, people said that WSL gaining GPU support was EEE. When that didn't happen, people said that WSLg was EEE for Linux. People said that Powershell was EEE for Windows.

    None of these happened. none of them even appear to have happened, and none of them appear to have even been planned. It's all a hallucination by people that talk like this. It's all imaginary. Show me any evidence of anything like this. ANY AT ALL. Not a hunch, not something that could be interpreted that way, show me the very clear and repeatable steps that Microsoft used in the 90s to EEE something in anything they're doing today.

    They're too busy copiloting everything and arguing with each other to do this. Show me Microsoft Git with extra features over the open source version. Show me Microsoft Linux with extra features over the open source version. Show me Microsoft ANYTHING with extra features over the open source version they copied, and show me the doors slowly closing behind me. You can't. Because it isn't happening.

    git repos can't be locked up in the way you're describing. github is a wrapper around git. it would take an enormous amount of work for microsoft to change this fundamental decision in the design of github. GitHub is a git server, over both HTTP and SSH. These are core decisions of the software that everything else sits on top of. If pulling git repos over HTTP or SSH ever stops being supported, so many things are going to stop being supported, that it just won't be useful at all after that point.

    the gh cli makes api calls, that's all. it just makes api calls easier. it exposes a lot of api calls as friendly commands. it's not something that most of github users are even aware of, much less use. gh is not going to lock someone into a github "ecosystem" A) because such a thing doesn't exist and B) again, most people don't use it.

    Microsoft is far more likely to kill GitHub because of people with MBAs (aka "morons") who somehow call the shots in business these days. They are not going to pilot it into the ground by EEE. They are going to pilot it into the ground because they don't know what they're doing, and they don't know what users want or what they like. That will be the fate of GitHub; incompetence will kill it, not cold, calculating, nefarious competence.

    • I think the down-votes on this comment are too bad. It's legitimately funny to write a multi-paragraph rant in high dudgeon calling other people "exhausting".

I built a small link shortener and faced this exact decision. I track click counts, top country, and top referrer — nothing else. No fingerprinting, no user profiles, no persistent identifiers. The data gets attached to a link, not to a person.

The line I drew: if deleting the short link removes all the data, it's analytics. If deleting the link leaves a profile somewhere, it's surveillance.

GitHub CLI is the opposite case — the data follows the user, not the artifact they created. That's the part that feels off.
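That deletion property can be sketched concretely, assuming a SQLite-backed shortener (the schema here is illustrative, not the commenter's actual one, and `sqlite3` must be installed): click stats reference the link row with ON DELETE CASCADE, so removing the link removes every stat with it.

```shell
# Illustrative schema: analytics rows are owned by the link row.
DB="$(mktemp)"
sqlite3 "$DB" <<'SQL'
CREATE TABLE links(id INTEGER PRIMARY KEY, slug TEXT);
CREATE TABLE clicks(
  link_id INTEGER REFERENCES links(id) ON DELETE CASCADE,
  country TEXT, referrer TEXT);
SQL
# With foreign keys enabled, deleting the link cascades to its clicks.
sqlite3 "$DB" "PRAGMA foreign_keys=ON;
INSERT INTO links VALUES(1,'abc');
INSERT INTO clicks VALUES(1,'DE','news.ycombinator.com');
DELETE FROM links WHERE id=1;
SELECT COUNT(*) FROM clicks;"
```

The final SELECT reports zero remaining click rows: no orphaned profile survives the link, which is the "analytics, not surveillance" line the comment draws.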

Well, that validates my decision not to install it. Of course Microsoft will eventually abuse any trust you place in them and any access you give them. They always do. Don't let Microsoft run code on your machine and don't give them your data.

*pseudonymous

The article doesn’t use the word “pseudoanonymous”, only “pseudonymous”.

can someone explain why github has a CLI? why wouldn't you just use git?

  • You use gh to interact with the forge, git to interact with the repo.

    For example

      gh pr checks --watch
    

    will run and poll the CI checks of a PR and exit 0 once they all pass

  • At my last job they used gh features heavily: pull requests, issues, and GHA most of all. So having the CLI made automating GitHub-specific tasks (or letting agents interact with them) possible.

  • At my current job, I sometimes set up a Nix shell with the GitHub CLI, since that lets Claude Code associate a feature branch with a pull request. The LLM can then retrieve the PR description, workflow results, review comments, etc.

    Also, I believe GitHub Actions cache cannot be bulk deleted outside of the CLI. The first time I [hesitantly] used the gh CLI was to empty GitHub Actions cache. At the time it wasn't possible with the REST API or web interface.

  • gh is insanely powerful, especially if you let your coding agent use it. It’s one of my top tools. gh lets you use GitHub features such as issues, pull requests, reading CI pipelines, creating CI pipelines, etc. git is just for code version control.

  • PRs, and managing repos, and other things that aren't git features. You can use it to auth with GITHUB_TOKEN instead of ssh or http. Which is how my agents get access. I've switched to gitea, it's got all the same features.

    • ah, that's probably why I've never had any use for it. I don't really contribute to any large open source projects and prefer the sourcehut/lkml style of using git

This should be opt-in. Force their employees to opt-in if they want. That's plenty of data to make informed decisions.

    # Telemetry FUCK OFF
    export DOTNET_CLI_TELEMETRY_OPTOUT=1
    export ASTRO_TELEMETRY_DISABLED=1
    export GATSBY_TELEMETRY_DISABLED=1
    export HOMEBREW_NO_ANALYTICS=1
    export NEXT_TELEMETRY_DISABLED=1
    export DISABLE_ZAPIER_ANALYTICS=1
    export TELEMETRY_DISABLED=1
    export GH_TELEMETRY=false

  • Also:

      # Atlas
      export DISABLE_TELEMETRY=1
      # Cloudflare
      export WRANGLER_SEND_METRICS=false
      # Vercel
      export VERCEL_PLUGIN_TELEMETRY=off
      # AWS
      export SAM_CLI_TELEMETRY=0
      export CDK_DISABLE_CLI_TELEMETRY=true
      # Console Do Not Track standard
      export DO_NOT_TRACK=true

Fuck GitHub, man, fuck 'em. I mean, what even is the point? You lost the AI race or whatever it was; build a good product and features for developers, like you tried to once.

And less social media shit; maybe add a better LFS alternative, similar to Hugging Face and the like.

Git isn't the popular choice in game dev because of this nonsense around hosting assets in-tree. Why haven't we fixed it yet?

Similarly, there are many edge cases. And they finally built stacked PRs, but man does it feel underbaked, and it's what, 2+ years late?

Please just improve GitHub. Make me feel like I'd be missing out if I weren't on GitHub because of the features, not because I have to be there for work.

I suggest anyone who cares, and certainly anybody in the EU, emails privacy@github.com and also opens a support ticket to let them know exactly what you think.

  • Wouldn't telemetry solve this problem automatically? I mean: they should get some signal back when people opt-out no? :)

dev tools and especially libraries must not have telemetry unless absolutely strictly necessary (and even then!).

* Dev tools, because you need to be able to trust that they don't leak while you're working. Not all sites/locations/customers/projects allow leaks, and it's easier to just blacklist anything that does leak, so you know you can trust your tools, and the same habits, justfiles, etc. work everywhere.

* Libraries that leak deserve a special kind of hell. You add a library to your project, and now it might be leaking without warning. If a lot of libraries decide to leak, your application becomes an unmanageable sieve.

If you do need to run telemetry, make it opt in or end user only. But if you as developer don't even have control then that's the worst.

There is no such thing as "pseudoanonymous"; it's not a thing, it does not exist, it's an oxymoron.

Microsoft really wants people to stop using GitHub.

Hopefully the Codeberg people can improve their UI; the UI is the single reason I still use GitHub (and filing issues there is super simple). I could never handle GitLab because I hate their UI.

Note that GitHub has been in the news negatively over the last few months. I think we are seeing the first signs of wear and tear. If people are smart, this would be a time when real competition to Microsoft GitHub could work. GitHub without users would be dead. Microsoft seems unable to care; they sold their soul to AI. It is make-it-or-break-it for them now.

Today I learned GitHub has a CLI. I guess that's like Pornhub having a CLI

  • Before GitHub had a CLI, I used cURL (via zsh aliases/functions) to open PRs and find what remote/branch a PR is associated with.

    Today I use a Golang CLI made with ~200K LOC to do essentially the same thing. Yay, efficiency?
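    For the curious, the PR-opening half of that needs only one REST call. A hedged sketch: the endpoint and JSON fields are GitHub's documented "create a pull request" API, but `OWNER`, `REPO`, and the branch names are placeholders, and the actual request is left commented out since it needs a real `$GITHUB_TOKEN`.

```shell
# Build the JSON that GitHub's create-a-pull-request endpoint expects,
# then (commented out) POST it with curl.
open_pr() {
  owner="$1"; repo="$2"; title="$3"; head="$4"; base="$5"
  payload=$(printf '{"title":"%s","head":"%s","base":"%s"}' "$title" "$head" "$base")
  echo "$payload"   # shown instead of sent, for the sketch
  # curl -s -X POST \
  #   -H "Authorization: Bearer $GITHUB_TOKEN" \
  #   -H "Accept: application/vnd.github+json" \
  #   "https://api.github.com/repos/$owner/$repo/pulls" \
  #   -d "$payload"
}
open_pr OWNER REPO "Fix typo" my-branch main
```

    A zsh alias wrapping roughly this is plausibly all the commenter's pre-gh workflow was, which puts the ~200K LOC comparison in perspective.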

  • Seeing how annoying their website interfaces are, I'd actually be open to paying for API/CLI access to porn.

Do you know what doesn't collect telemetry?

the old git command in your terminal

I think I'll keep using that

pseudoanonymous, meaning not anonymous? lol

  • Yes, that's exactly what pseudoanonymous means. It's fake-anonymous. It can be trivially de-anonymized.

    • No, "pseudoanonymous" doesn't mean anything, because it is not a word.

      https://en.wiktionary.org/w/index.php?search=pseudoanonymous...

      It is interesting how GitHub sort of prominently features this non-word in their article. Perhaps some South Asian or European person for whom English is a struggle.

      There is no word that means "fake-anonymous". I would assume that the author of this article intended to write "pseudonymous" which is a real word with a real definition.

      https://en.wiktionary.org/wiki/pseudonymous

      But it would also be interesting if they very much intended the ambiguity of using a non-word that is more than it seems on the surface.

The current century is the century of enshittification. Like a cancer, there is now a whole generation of PMs who think it is totally OK and legitimate to update your product to add spying on your users' usage.

It might seem legit to them, but I'm quite sure that just listening to your users is enough. It is not like they lack a user base ready to interact with them, or that they lack bugs and features to work on.

In most cases, telemetry is more of a vanity metric that is rarely used: "Congrats to the team whose flag is the most used in the CLI." But even for product decisions, it is hard to extract conclusions from current usage, because what you can and will do today already depends on the way the CLI is built. A feature might not be used a lot because it is not convenient, or not available in as good a form as some alternative, but a usage report will not tell you whether it was useful. In the same way, when I buy a product, there are often a lot of features that I will never use but that I'm happy to have, and I might not have bought the product, or might have bought another one, if they were not available. The worst outcome would be the manufacturer removing or disabling a feature because it is not used...

pseudoanonymous = euphemism for not anonymous.

Regulators should wake up and fine them hard, so hard it becomes existential. Make an example for others not to follow.

  • Being a good regulator is about solving a nearly impossible satisficing problem. You have to follow the law and achieve results with a limited budget and political constraints. Given the priorities of, say, the FTC or state AGs or the SEC, I don't think GitHub is even a blip on their radar. Of all the regulators, I would hazard a guess that the California Privacy Protection Agency is the most likely to prioritize a look, but I still doubt it.

    I know lots of idealists -- I went to a public policy school. And in some areas, I am one myself. We need them; they can push for their causes.

    But if you ever find yourself working as a regulator, you'll find the world is complicated and messy. Regulators that overreach often make things worse for their very causes they support.

    If you haven't yet, go find some regulators who have to take companies all the way to court and win. I have known some in certain fields. Learn from them. Some would probably really enjoy getting to talk to a disinterested third party to learn the domain. There are even ways to get involved as a sort of citizen journalist if you want.

    But these sort of blanket calls for "make an example of GitHub" are probably a waste of time. I think a broader view is needed here. Think about the causal chain of problems and find a link where you have leverage. Then focus your effort on that link.

    I live in the DC area, where ignorance of how the government works leads to people walking away and not taking you seriously. When tech people put comparable effort into understanding the machinery of government that they do into technology, that is awesome. There are some amazing examples of this if you look around.

    There are no excuses. Tech people readily accept that they have to work around the warts of their infrastructure. (We are often lucky because we get to rebuild so much software ourselves.) But we forget what it's like to work with systems that have to resist change because they are coordination points between multiple stakeholders. The conflict is by design!

    Anyhow, we have no excuse to blame the warts in our governmental system. You either fix them or work around them or both.

    The world is a big broken machine. Almost no individual person is to blame. You just have to understand where to turn the wrench.

I mean, it makes sense, of course. How else could they possibly know what users want? Run a bug tracker? Use their own software? Have more than one 9 of uptime? /s

Corporations can and will do every scummy thing permitted to them by law, so here we are. Until the US grows a backbone on issues of privacy, we shouldn't be surprised, I suppose. But the US won't be growing such a backbone anytime in the near future.

tl;dr for opt-out as per https://cli.github.com/telemetry#how-to-opt-out (any of these works individually):

    export GH_TELEMETRY=false
    export DO_NOT_TRACK=true
    gh config set telemetry disabled

(the last one starting from version 2.91.0, which this announcement refers to)

  • > gh config set telemetry false
    > ! warning: 'telemetry' is not a known configuration key

    What's strange is if you check your `~/.config/gh/config.yml` it will put `telemetry: disabled` in there. But it will put anything in that `config.yml` lol.

    > gh config set this-is-some-random-bullshit aww-shucks
    > ! warning: 'this-is-some-random-bullshit' is not a known configuration key

    But in my config.yml there is

      this-is-some-random-bullshit: aww-shucks

  • ...don't forget to recheck this info every update, restore flags that have been "accidentally" reset, and set any new flags that they added for "different" telemetry.