
Comment by pcthrowaway

2 years ago

OK but say a company has a private, closed source internal tool, and they want to open-source some part of it. They fork it and start working on cleaning up the history to make it publishable.

After some changes which include deleting sensitive information and proprietary code, and squashing all the history to one commit, they change the repo to public.

According to this article, any commit on either repo that was made before the second repo was made public can still be accessed on the public repo.

> After some changes which include deleting sensitive information and proprietary code, and squashing all the history to one commit, they change the repo to public.

I know this might look like a valid approach at first glance but... it is stupid for anyone who knows how git or GitHub API works. The remote (GitHub's) reflog is not GC'd immediately: you can try to get commit hashes from the events history via the API, and then try to fetch those commits from the reflog.
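The events-history approach described above can be sketched in shell. OWNER/REPO are placeholders, and the actual API call (commented out) needs network access and usually a token — this illustrates the idea, not a turnkey tool:

```shell
# Placeholders: OWNER/REPO. The real call (commented out) needs network access.
api="https://api.github.com/repos/OWNER/REPO/events"
# PushEvent payloads carry commit SHAs, including ones later force-pushed away:
#   curl -s "$api" | jq -r '.[].payload.commits[]?.sha' | sort -u
# Any recovered SHA can then be viewed directly at:
#   https://github.com/OWNER/REPO/commit/<sha>
echo "$api"
```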

  • > it is stupid for anyone who knows how git or GitHub API works?

    You need to know how git works and GitHub's API. I would say I have a pretty good understanding about how (local) git works internally, but was deeply surprised about GitHub's brute-forceable short commit IDs and the existence of a public log of all reflog activity [1].

    When the article said "You might think you’re protected by needing to know the commit hash. You’re not. The hash is discoverable. More on that later." I was not able to deduce what would come later. Meanwhile, data access by hash seemed like a non-issue to me – how would you compute the hash without having the data in the first place? Checking that a certain file exists in a private branch might be an information disclosure, but is not usually problematic.

    And in any case, GitHub has grown so far away from its roots as a simple git host that implicit expectations change as well. If I self-host my git repository, my mental model is very close to git internals. If I use GitHub's web interface to click together a repository with complex access rights, I assume they have concepts in place to thoroughly enforce those access rights. I mean, GitHub organizations are not a git concept.

    [1] https://www.gharchive.org/
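    Short-hash resolution itself is plain git and can be demonstrated in a throwaway local repo (a sketch; the GitHub-specific part is the public, unauthenticated endpoint against which such prefixes can be enumerated):

    ```shell
    # Throwaway repo; shows that git itself resolves abbreviated hashes.
    set -e
    tmp=$(mktemp -d); cd "$tmp"
    git init -q -b main
    git config user.email demo@example.com; git config user.name demo
    echo hello > f.txt; git add f.txt; git commit -qm "c1"
    full=$(git rev-parse HEAD)
    short=$(git rev-parse --short=4 HEAD)  # 4 hex chars: at most 65,536 guesses
    resolved=$(git rev-parse "$short")     # git expands the abbreviation itself
    echo "$short resolves to $resolved"
    ```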

    • > You need to know how git works and GitHub's API.

      No; just knowing how git works is enough to understand that force-pushing squashed commits or removing branches on the remote will not necessarily remove the actual data on the remote.

      The GitHub API (or just the web UI) only makes these features more obvious. For example, you can find and view a commit referenced in PR comments even if it was force-pushed away.

      > was deeply surprised about GitHub's brute-forceable short commit IDs

      Short commit IDs are not a GitHub feature; they are a git feature.

      > If I use GitHub's web interface to click myself a repository with complex access rights, I assume they have concepts in place to thoroughly enforce these access rights.

      Have you ever tried to make a private GitHub repository public? There is a clear warning that code, logs, and activity history will become public. Maybe they should include an additional clause about forks there.


  • Yes, even though I expect there are people who do exactly what the GP describes, if you know git it has severe "do not do that!" vibes.

    Do not squash your commits and make the repository public. Instead, make a new repository and add the code there.
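    The underlying failure mode is easy to reproduce locally. The sketch below uses a bare repo as a stand-in for GitHub, with `uploadpack.allowAnySHA1InWant` enabled to mimic GitHub's serve-any-object-by-hash behaviour (an assumption about GitHub's setup, based on observed behaviour): after squashing and force-pushing, the "deleted" commit is still fetchable by hash.

    ```shell
    set -e
    tmp=$(mktemp -d); cd "$tmp"
    # Local stand-in for GitHub; this config mimics its serve-by-hash behaviour.
    git init -q --bare -b main remote.git
    git -C remote.git config uploadpack.allowAnySHA1InWant true
    git init -q -b main work; cd work
    git config user.email demo@example.com; git config user.name demo
    git remote add origin ../remote.git
    echo "hunter2" > creds.txt; git add .; git commit -qm "add secret"
    leaked=$(git rev-parse HEAD)
    git push -q origin main
    # "Clean up": one fresh orphan commit without the secret, force-pushed.
    git checkout -q --orphan clean
    git rm -q -rf .
    echo "code" > main.c; git add .; git commit -qm "initial import"
    git push -qf origin clean:main
    # The secret commit is on no branch, yet still retrievable by hash:
    cd ..; git clone -q remote.git probe; cd probe
    git fetch -q origin "$leaked"
    recovered=$(git cat-file -p "$leaked:creds.txt")
    echo "$recovered"
    ```

    Note that a fresh clone never sees the secret; only fetching the exact (or brute-forced) hash recovers it, which is precisely the gap discussed above.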

ChatGPT, given the following repo, create a plausible, perfect commit history for this repository.