After years of using Git to run a web service, the repository has grown to several gigabytes. It contains program code, resources such as images, and text configuration files; assume there are also tens of thousands of commits and many branches.
Cloning this bloated repository takes a long time every time, so I would like to do something about it.
Under these conditions, how should I approach making the repository quick and light to clone?
git gc compresses past commits and reduces the repository size. If you configure gc.auto and gc.autopacklimit, it will run automatically, so you don't have to remember to run it regularly.
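As a minimal sketch, a manual run and the automatic thresholds might look like this (the values shown are Git's defaults, given only as examples):

    # Compress loose objects into packs and prune unreachable ones
    git gc

    # Trigger gc automatically once about 6700 loose objects accumulate
    git config gc.auto 6700

    # Consolidate packs automatically once more than 50 packs exist
    git config gc.autopacklimit 50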
You can't erase files committed in the past, since they remain in the history, so I think it's better to use git-media to keep the files you consider large in external storage and sync them from there.
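For reference, a minimal git-media setup might look like the sketch below. It follows the clean/smudge filter convention from the git-media README; the *.psd pattern and the assumption that external storage is already configured are illustrative, not part of the answer:

    # Route matching files through git-media's clean/smudge filters
    git config filter.media.clean "git-media filter-clean"
    git config filter.media.smudge "git-media filter-smudge"
    echo "*.psd filter=media -crlf" >> .gitattributes

    # Upload the actual contents to the configured external storage
    git media sync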
mattn has already written about git gc, but have you tried adding the --aggressive option, or running git repack -a -d --depth=250 --window=250?
What both of these ultimately do is recompute deltas: --aggressive re-selects the computed deltas, and the repack command computes deltas at a greater-than-normal depth. The repository doesn't necessarily shrink, but since Git normally reuses existing deltas wherever possible, it can help. Incidentally, the second command is what Linus suggested on the gcc mailing list: it takes a long time, but unlike --aggressive, this is the way to go if you want a proper recalculation. If you add the -f option, Git will not reuse the old deltas at all.
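Spelled out, the two variants would be run as something like this (the depth and window values are the ones quoted above):

    # Recompute deltas at a greater-than-normal depth, replacing old packs
    git repack -a -d --depth=250 --window=250

    # With -f, existing deltas are not reused and everything is recomputed
    git repack -a -d -f --depth=250 --window=250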
Since you can keep those files outside the repository, why not use filter-branch to remove them from the repository's history completely? Note, however, that the commit IDs will change, so everyone will need to clone again.
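As a sketch, removing one large file from every commit could look like this; path/to/bigfile.zip is a placeholder, and the cleanup commands afterwards are what actually reclaims the space locally:

    # Rewrite all refs, deleting the file from every commit's index
    git filter-branch --index-filter \
      'git rm --cached --ignore-unmatch path/to/bigfile.zip' \
      --prune-empty -- --all

    # Remove the backup refs, expire the reflog, and repack
    rm -rf .git/refs/original/
    git reflog expire --expire=now --all
    git gc --prune=now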
Alternatively, you could split things out with git submodule and manage them as several repositories (although it may be difficult to separate them at this point).
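If you do go the submodule route, the split might look like this sketch; the repository URLs and the assets path are hypothetical:

    # Move the large assets into their own repository, then reference it
    git submodule add https://example.com/assets.git assets
    git commit -m "Track large assets as a submodule"

    # A fresh clone then pulls the submodule contents only when asked to
    git clone --recurse-submodules https://example.com/main.git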