Message ID | pull.1043.v3.git.git.1626724399377.gitgitgadget@gmail.com (mailing list archive)
---|---
State | Superseded
Series | [v3] packfile: freshen the mtime of packfile by configuration
On Mon, Jul 19, 2021 at 07:53:19PM +0000, Sun Chao via GitGitGadget wrote: > From: Sun Chao <16657101987@163.com> > > Commit 33d4221c79 (write_sha1_file: freshen existing objects, > 2014-10-15) avoid writing existing objects by freshen their > mtime (especially the packfiles contains them) in order to > aid the correct caching, and some process like find_lru_pack > can make good decision. However, this is unfriendly to > incremental backup jobs or services rely on cached file system > when there are large '.pack' files exists. > > For example, after packed all objects, use 'write-tree' to > create same commit with the same tree and same environments > such like GIT_COMMITTER_DATE and GIT_AUTHOR_DATE, we can > notice the '.pack' file's mtime changed. Git servers > that mount the same NFS disk will re-sync the '.pack' files > to cached file system which will slow the git commands. > > So if add core.freshenPackfiles to indicate whether or not > packs can be freshened, turning off this option on some > servers can speed up the execution of some commands on servers > which use NFS disk instead of local disk. Hmm. I'm still quite unconvinced that we should be taking this direction without better motivation. We talked about your assumption that NFS seems to be invalidating the block cache when updating the inodes that point at those blocks, but I don't recall seeing further evidence. Regardless, a couple of idle thoughts: > + if (!core_freshen_packfiles) > + return 1; It is important to still freshen the object mtimes even when we cannot update the pack mtimes. That's why we return 0 when "freshen_file" returned 0: even if there was an error calling utime, we should still freshen the object. This is important because it impacts when unreachable objects are pruned. So I would have assumed that if a user set "core.freshenPackfiles=false" that they would still want their object mtimes updated, in which case the only option we have is to write those objects out loose. 
...and that happens by the caller of freshen_packed_object (either write_object_file() or hash_object_file_literally()) then calling write_loose_object() if freshen_packed_object() failed. So I would have expected to see a "return 0" in the case that packfile freshening was disabled. But that leads us to an interesting problem: how many redundant objects do we expect to see on the server? It may be a lot, in which case you may end up having the same IO problems for a different reason. Peff mentioned to me off-list that he suspected write-tree was overeager in how many trees it would try to write out. I'm not sure. > +test_expect_success 'do not bother loosening old objects without freshen pack time' ' > + obj1=$(echo three | git hash-object -w --stdin) && > + obj2=$(echo four | git hash-object -w --stdin) && > + pack1=$(echo $obj1 | git -c core.freshenPackFiles=false pack-objects .git/objects/pack/pack) && > + pack2=$(echo $obj2 | git -c core.freshenPackFiles=false pack-objects .git/objects/pack/pack) && > + git -c core.freshenPackFiles=false prune-packed && > + git cat-file -p $obj1 && > + git cat-file -p $obj2 && > + test-tool chmtime =-86400 .git/objects/pack/pack-$pack2.pack && > + git -c core.freshenPackFiles=false repack -A -d --unpack-unreachable=1.hour.ago && > + git cat-file -p $obj1 && > + test_must_fail git cat-file -p $obj2 > +' I had a little bit of a hard time following this test. AFAICT, it proceeds as follows: - Write two packs, each containing a unique unreachable blob. - Call 'git prune-packed' with packfile freshening disabled, then check that the object survived. - Then repack while in a state where one of the pack's contents would be pruned. - Make sure that one object survives and the other doesn't. This doesn't really seem to be testing the behavior of disabling packfile freshening so much as it's testing prune-packed, and repack's `--unpack-unreachable` option. 
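The freshening behavior discussed above can be observed with stock Git. A throwaway sketch (the /tmp path and blob content are arbitrary): writing an object that already exists in a pack bumps the pack's mtime instead of writing the object loose.

```shell
#!/bin/sh
# Demonstrate pack freshening: "writing" an object that already exists in a
# pack updates the pack's mtime rather than creating a loose copy.
set -e
rm -rf /tmp/freshen-demo && git init -q /tmp/freshen-demo && cd /tmp/freshen-demo

obj=$(echo freshen-me | git hash-object -w --stdin)
pack=$(echo "$obj" | git pack-objects .git/objects/pack/pack)
git prune-packed -q                                   # drop the loose copy

touch -d '1 day ago' ".git/objects/pack/pack-$pack.pack"
old=$(stat -c %Y ".git/objects/pack/pack-$pack.pack")

echo freshen-me | git hash-object -w --stdin >/dev/null   # write the same object again
new=$(stat -c %Y ".git/objects/pack/pack-$pack.pack")

test "$new" -gt "$old" && echo "pack was freshened"
```

This is the utime() path that freshen_packed_object() takes when it finds the object in a pack.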
I would probably have expected to see something more along the lines of: - Write an unreachable object, pack it, and then remove the loose copy you wrote in the first place. - Then roll the pack's mtime to some fixed value in the past. - Try to write the same object again with packfile freshening disabled, and verify that: - the pack's mtime was unchanged, - the object exists loose again But I'd really like to get some other opinions (especially from Peff, who brought up the potential concerns with write-tree) as to whether or not this is a direction worth pursuing. My opinion is that it is not, and that the bizarre caching behavior you are seeing is out of Git's control. Thanks, Taylor
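As a plain-script sketch, the suggested test outline might look like the following. It assumes the proposed core.freshenPackFiles knob; stock git silently ignores the unknown key, so the mtime/loose-object expectations (left as comments) hold only with the patch applied.

```shell
#!/bin/sh
# Sketch of the suggested test outline (hypothetical core.freshenPackFiles knob).
set -e
rm -rf /tmp/freshen-off && git init -q /tmp/freshen-off && cd /tmp/freshen-off

obj=$(echo unreachable-blob | git hash-object -w --stdin)
pack=$(echo "$obj" | git pack-objects .git/objects/pack/pack)
rm -f ".git/objects/$(echo "$obj" | sed 's|^..|&/|')"      # remove the loose copy

touch -d '2021-01-01 00:00:00' ".git/objects/pack/pack-$pack.pack"

echo unreachable-blob | git -c core.freshenPackFiles=false hash-object -w --stdin >/dev/null
# With the patch (and a "return 0" from freshen_packed_object): the pack's
# mtime stays at the fixed value, and the object exists loose again.
# Either way, the object must remain readable:
git cat-file -p "$obj"
```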
Taylor Blau <me@ttaylorr.com> writes: > Hmm. I'm still quite unconvinced that we should be taking this direction > without better motivation. We talked about your assumption that NFS > seems to be invalidating the block cache when updating the inodes that > point at those blocks, but I don't recall seeing further evidence. Me neither. Not touching the pack and not updating the "most recently used" time of individual objects smells like a recipe for repository corruption. > My opinion is that it is not, and that the bizarre caching behavior you > are seeing is out of Git's control.
On Mon, Jul 19 2021, Taylor Blau wrote: > On Mon, Jul 19, 2021 at 07:53:19PM +0000, Sun Chao via GitGitGadget wrote: >> From: Sun Chao <16657101987@163.com> >> >> Commit 33d4221c79 (write_sha1_file: freshen existing objects, >> 2014-10-15) avoid writing existing objects by freshen their >> mtime (especially the packfiles contains them) in order to >> aid the correct caching, and some process like find_lru_pack >> can make good decision. However, this is unfriendly to >> incremental backup jobs or services rely on cached file system >> when there are large '.pack' files exists. >> >> For example, after packed all objects, use 'write-tree' to >> create same commit with the same tree and same environments >> such like GIT_COMMITTER_DATE and GIT_AUTHOR_DATE, we can >> notice the '.pack' file's mtime changed. Git servers >> that mount the same NFS disk will re-sync the '.pack' files >> to cached file system which will slow the git commands. >> >> So if add core.freshenPackfiles to indicate whether or not >> packs can be freshened, turning off this option on some >> servers can speed up the execution of some commands on servers >> which use NFS disk instead of local disk. > > Hmm. I'm still quite unconvinced that we should be taking this direction > without better motivation. We talked about your assumption that NFS > seems to be invalidating the block cache when updating the inodes that > point at those blocks, but I don't recall seeing further evidence. I don't know about Sun's setup, but what he's describing is consistent with how NFS works, or can commonly be made to work. See e.g. "lookupcache" in nfs(5) on Linux, but also a lot of people use some random vendor's proprietary NFS implementation, and commonly tweak various options that make it anywhere between "I guess that's not too crazy" and "are you kidding me?" levels of non-POSIX compliant. 
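For reference, the nfs(5) caching behavior mentioned here is controlled by mount options. An illustrative fstab entry might look like the following; the server, paths, and values are made up, not recommendations:

```
# /etc/fstab (illustrative): attribute/lookup cache tuning for a git NFS mount.
# lookupcache= and actimeo= are documented in nfs(5).
nfsserver:/export/git  /srv/git  nfs  rw,lookupcache=all,actimeo=60  0  0
```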
> Regardless, a couple of idle thoughts:
>
>> + if (!core_freshen_packfiles)
>> + return 1;
>
> It is important to still freshen the object mtimes even when we cannot update the pack mtimes. That's why we return 0 when "freshen_file" returned 0: even if there was an error calling utime, we should still freshen the object. This is important because it impacts when unreachable objects are pruned.
>
> So I would have assumed that if a user set "core.freshenPackfiles=false" that they would still want their object mtimes updated, in which case the only option we have is to write those objects out loose.
>
> ...and that happens by the caller of freshen_packed_object (either write_object_file() or hash_object_file_literally()) then calling write_loose_object() if freshen_packed_object() failed. So I would have expected to see a "return 0" in the case that packfile freshening was disabled.
>
> But that leads us to an interesting problem: how many redundant objects do we expect to see on the server? It may be a lot, in which case you may end up having the same IO problems for a different reason. Peff mentioned to me off-list that he suspected write-tree was overeager in how many trees it would try to write out. I'm not sure.

In my experience with NFS the thing that kills you is anything that needs to do iteration, i.e. recursive readdir() and the like; for reading a lot of data, throughput was excellent. It's why I hacked up that core.checkCollisions patch.

Jeff improved the situation I was mainly trying to fix with the loose objects cache. I never got around to benchmarking the two in production, and now that setup is at an ex-job...
>> +test_expect_success 'do not bother loosening old objects without freshen pack time' ' >> + obj1=$(echo three | git hash-object -w --stdin) && >> + obj2=$(echo four | git hash-object -w --stdin) && >> + pack1=$(echo $obj1 | git -c core.freshenPackFiles=false pack-objects .git/objects/pack/pack) && >> + pack2=$(echo $obj2 | git -c core.freshenPackFiles=false pack-objects .git/objects/pack/pack) && >> + git -c core.freshenPackFiles=false prune-packed && >> + git cat-file -p $obj1 && >> + git cat-file -p $obj2 && >> + test-tool chmtime =-86400 .git/objects/pack/pack-$pack2.pack && >> + git -c core.freshenPackFiles=false repack -A -d --unpack-unreachable=1.hour.ago && >> + git cat-file -p $obj1 && >> + test_must_fail git cat-file -p $obj2 >> +' > > I had a little bit of a hard time following this test. AFAICT, it > proceeds as follows: > > - Write two packs, each containing a unique unreachable blob. > - Call 'git prune-packed' with packfile freshening disabled, then > check that the object survived. > - Then repack while in a state where one of the pack's contents would > be pruned. > - Make sure that one object survives and the other doesn't. > > This doesn't really seem to be testing the behavior of disabling > packfile freshening so much as it's testing prune-packed, and repack's > `--unpack-unreachable` option. I would probably have expected to see > something more along the lines of: > > - Write an unreachable object, pack it, and then remove the loose copy > you wrote in the first place. > - Then roll the pack's mtime to some fixed value in the past. > - Try to write the same object again with packfile freshening > disabled, and verify that: > - the pack's mtime was unchanged, > - the object exists loose again > > But I'd really like to get some other opinions (especially from Peff, > who brought up the potential concerns with write-tree) as to whether or > not this is a direction worth pursuing. 
>
> My opinion is that it is not, and that the bizarre caching behavior you are seeing is out of Git's control.

Thanks for this; I found the test hard to follow too, but didn't have time to really think about it. This makes sense.

Back to the topic: I share your sentiment of trying to avoid complexity in this area.

Sun: Have you considered passing --keep-unreachable to "git repack"? It's orthogonal to what you're trying here, but I wonder if being more aggressive about keeping objects, plus some improvements to perhaps skip this "touch" entirely if we have that in effect, wouldn't be more viable, and something e.g. Taylor would be more comfortable having as part of git.git.
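For what it's worth, the --keep-unreachable behavior is easy to try with stock Git. A throwaway sketch (the /tmp path and blob content are arbitrary): with `-a -d -k`, loose unreachable objects are packed rather than left around to be pruned.

```shell
#!/bin/sh
# Sketch of --keep-unreachable: repack keeps an unreachable object in the
# new pack instead of dropping it or leaving it loose.
set -e
rm -rf /tmp/keep-demo && git init -q /tmp/keep-demo && cd /tmp/keep-demo

obj=$(echo keep-me | git hash-object -w --stdin)      # unreachable loose blob
git repack -a -d -k -q                                # -k == --keep-unreachable

# The object is still retrievable afterwards:
git cat-file -p "$obj"
```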
> On Jul 20, 2021, at 04:51, Taylor Blau <me@ttaylorr.com> wrote:
>
> On Mon, Jul 19, 2021 at 07:53:19PM +0000, Sun Chao via GitGitGadget wrote:
>> From: Sun Chao <16657101987@163.com>
>>
>> Commit 33d4221c79 (write_sha1_file: freshen existing objects, 2014-10-15) avoid writing existing objects by freshen their mtime (especially the packfiles contains them) in order to aid the correct caching, and some process like find_lru_pack can make good decision. However, this is unfriendly to incremental backup jobs or services rely on cached file system when there are large '.pack' files exists.
>>
>> For example, after packed all objects, use 'write-tree' to create same commit with the same tree and same environments such like GIT_COMMITTER_DATE and GIT_AUTHOR_DATE, we can notice the '.pack' file's mtime changed. Git servers that mount the same NFS disk will re-sync the '.pack' files to cached file system which will slow the git commands.
>>
>> So if add core.freshenPackfiles to indicate whether or not packs can be freshened, turning off this option on some servers can speed up the execution of some commands on servers which use NFS disk instead of local disk.
>
> Hmm. I'm still quite unconvinced that we should be taking this direction without better motivation. We talked about your assumption that NFS seems to be invalidating the block cache when updating the inodes that point at those blocks, but I don't recall seeing further evidence.
Yes, these days I'm trying to ask our SREs for help running tests in our production environments (where we can get the real traffic reports of the NFS server), such as:

- set up a repository with some large packfiles on an NFS disk
- keep running 'git pack-objects --all --stdout --delta-base-offset >/dev/null' on multiple git servers that mount the same NFS disk above
- 'touch' the packfiles on another server, and check (a) if the IOPS and IO traffic of the NFS server changes and (b) if the IO traffic of the network interfaces from the git servers to the NFS server changes

I'd like to share the data when I receive the reports. Meanwhile I found the description of the cached file system for NFS clients: https://www.ibm.com/docs/en/aix/7.2?topic=performance-cache-file-system

It mentions that:

3. To ensure that the cached directories and files are kept up to date, CacheFS periodically checks the consistency of files stored in the cache. It does this by comparing the current modification time to the previous modification time.
4. If the modification times are different, all data and attributes for the directory or file are purged from the cache, and new data and attributes are retrieved from the back file system.

It looks reasonable, but perhaps I should verify it against my test reports ;)

>
> Regardless, a couple of idle thoughts:
>
>> + if (!core_freshen_packfiles)
>> + return 1;
>
> It is important to still freshen the object mtimes even when we cannot update the pack mtimes. That's why we return 0 when "freshen_file" returned 0: even if there was an error calling utime, we should still freshen the object. This is important because it impacts when unreachable objects are pruned.
>
> So I would have assumed that if a user set "core.freshenPackfiles=false" that they would still want their object mtimes updated, in which case the only option we have is to write those objects out loose.
>
> ...and that happens by the caller of freshen_packed_object (either write_object_file() or hash_object_file_literally()) then calling write_loose_object() if freshen_packed_object() failed. So I would have expected to see a "return 0" in the case that packfile freshening was disabled.
>
> But that leads us to an interesting problem: how many redundant objects do we expect to see on the server? It may be a lot, in which case you may end up having the same IO problems for a different reason. Peff mentioned to me off-list that he suspected write-tree was overeager in how many trees it would try to write out. I'm not sure.

You are right, I hadn't thought it through in detail: if we update the mtime of neither the packfiles nor the loose files, 'prune' may delete them by accident.

>
>> +test_expect_success 'do not bother loosening old objects without freshen pack time' '
>> +	obj1=$(echo three | git hash-object -w --stdin) &&
>> +	obj2=$(echo four | git hash-object -w --stdin) &&
>> +	pack1=$(echo $obj1 | git -c core.freshenPackFiles=false pack-objects .git/objects/pack/pack) &&
>> +	pack2=$(echo $obj2 | git -c core.freshenPackFiles=false pack-objects .git/objects/pack/pack) &&
>> +	git -c core.freshenPackFiles=false prune-packed &&
>> +	git cat-file -p $obj1 &&
>> +	git cat-file -p $obj2 &&
>> +	test-tool chmtime =-86400 .git/objects/pack/pack-$pack2.pack &&
>> +	git -c core.freshenPackFiles=false repack -A -d --unpack-unreachable=1.hour.ago &&
>> +	git cat-file -p $obj1 &&
>> +	test_must_fail git cat-file -p $obj2
>> +'
>
> I had a little bit of a hard time following this test. AFAICT, it proceeds as follows:
>
> - Write two packs, each containing a unique unreachable blob.
> - Call 'git prune-packed' with packfile freshening disabled, then check that the object survived.
> - Then repack while in a state where one of the pack's contents would be pruned.
> - Make sure that one object survives and the other doesn't.
> > This doesn't really seem to be testing the behavior of disabling > packfile freshening so much as it's testing prune-packed, and repack's > `--unpack-unreachable` option. I would probably have expected to see > something more along the lines of: > > - Write an unreachable object, pack it, and then remove the loose copy > you wrote in the first place. > - Then roll the pack's mtime to some fixed value in the past. > - Try to write the same object again with packfile freshening > disabled, and verify that: > - the pack's mtime was unchanged, > - the object exists loose again > > But I'd really like to get some other opinions (especially from Peff, > who brought up the potential concerns with write-tree) as to whether or > not this is a direction worth pursuing. > > My opinion is that it is not, and that the bizarre caching behavior you > are seeing is out of Git's control. OK, I will try this. And I will try to get the test reports from our SRE and check how the mtime impacts the caches. > > Thanks, > Taylor
> On Jul 20, 2021, at 08:07, Junio C Hamano <gitster@pobox.com> wrote:
>
> Taylor Blau <me@ttaylorr.com> writes:
>
>> Hmm. I'm still quite unconvinced that we should be taking this direction without better motivation. We talked about your assumption that NFS seems to be invalidating the block cache when updating the inodes that point at those blocks, but I don't recall seeing further evidence.
>
> Me neither. Not touching the pack and not updating the "most recently used" time of individual objects smells like a recipe for repository corruption.
>
>> My opinion is that it is not, and that the bizarre caching behavior you are seeing is out of Git's control.

Thanks Junio, I will try to get a more detailed report on the NFS caches and share it if it is valuable. Not touching the mtime of packfiles really does have potential problems, just as Taylor said.
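The corruption worry is concrete: `git prune` decides what to delete purely by comparing loose-object mtimes against the expiry grace period, as a quick throwaway sketch with stock Git shows (the /tmp path and blob content are arbitrary):

```shell
#!/bin/sh
# Show that prune's expiry is driven by mtime: backdating an unreachable
# loose object makes it eligible for deletion.
set -e
rm -rf /tmp/prune-demo && git init -q /tmp/prune-demo && cd /tmp/prune-demo

obj=$(echo doomed | git hash-object -w --stdin)       # unreachable loose blob
path=".git/objects/$(echo "$obj" | sed 's|^..|&/|')"

git prune --expire=2.weeks.ago                        # too young: survives
test -e "$path"

touch -d '30 days ago' "$path"                        # fake a stale mtime
git prune --expire=2.weeks.ago                        # now past the grace period
test ! -e "$path" && echo pruned
```

This is why skipping the freshening step without writing the object loose risks losing objects that are still in use.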
> On Jul 20, 2021, at 14:19, Ævar Arnfjörð Bjarmason <avarab@gmail.com> wrote:
>
> On Mon, Jul 19 2021, Taylor Blau wrote:
>
>> On Mon, Jul 19, 2021 at 07:53:19PM +0000, Sun Chao via GitGitGadget wrote:
>>> From: Sun Chao <16657101987@163.com>
>>>
>>> Commit 33d4221c79 (write_sha1_file: freshen existing objects, 2014-10-15) avoid writing existing objects by freshen their mtime (especially the packfiles contains them) in order to aid the correct caching, and some process like find_lru_pack can make good decision. However, this is unfriendly to incremental backup jobs or services rely on cached file system when there are large '.pack' files exists.
>>>
>>> For example, after packed all objects, use 'write-tree' to create same commit with the same tree and same environments such like GIT_COMMITTER_DATE and GIT_AUTHOR_DATE, we can notice the '.pack' file's mtime changed. Git servers that mount the same NFS disk will re-sync the '.pack' files to cached file system which will slow the git commands.
>>>
>>> So if add core.freshenPackfiles to indicate whether or not packs can be freshened, turning off this option on some servers can speed up the execution of some commands on servers which use NFS disk instead of local disk.
>>
>> Hmm. I'm still quite unconvinced that we should be taking this direction without better motivation. We talked about your assumption that NFS seems to be invalidating the block cache when updating the inodes that point at those blocks, but I don't recall seeing further evidence.
>
> I don't know about Sun's setup, but what he's describing is consistent with how NFS works, or can commonly be made to work.
>
> See e.g. "lookupcache" in nfs(5) on Linux, but also a lot of people use some random vendor's proprietary NFS implementation, and commonly tweak various options that make it anywhere between "I guess that's not too crazy" and "are you kidding me?" levels of non-POSIX compliant.
>
>> Regardless, a couple of idle thoughts:
>>
>>> + if (!core_freshen_packfiles)
>>> + return 1;
>>
>> It is important to still freshen the object mtimes even when we cannot update the pack mtimes. That's why we return 0 when "freshen_file" returned 0: even if there was an error calling utime, we should still freshen the object. This is important because it impacts when unreachable objects are pruned.
>>
>> So I would have assumed that if a user set "core.freshenPackfiles=false" that they would still want their object mtimes updated, in which case the only option we have is to write those objects out loose.
>>
>> ...and that happens by the caller of freshen_packed_object (either write_object_file() or hash_object_file_literally()) then calling write_loose_object() if freshen_packed_object() failed. So I would have expected to see a "return 0" in the case that packfile freshening was disabled.
>>
>> But that leads us to an interesting problem: how many redundant objects do we expect to see on the server? It may be a lot, in which case you may end up having the same IO problems for a different reason. Peff mentioned to me off-list that he suspected write-tree was overeager in how many trees it would try to write out. I'm not sure.
>
> In my experience with NFS the thing that kills you is anything that needs to do iteration, i.e. recursive readdir() and the like; for reading a lot of data, throughput was excellent. It's why I hacked up that core.checkCollisions patch.

I have read your patch; it looks like a good idea to reduce expensive operations like readdir(). In my production environment it is the IOPS load on the NFS server that bothers me and makes git commands slow.

>
> Jeff improved the situation I was mainly trying to fix with the loose objects cache. I never got around to benchmarking the two in production, and now that setup is at an ex-job...
> >>> +test_expect_success 'do not bother loosening old objects without freshen pack time' ' >>> + obj1=$(echo three | git hash-object -w --stdin) && >>> + obj2=$(echo four | git hash-object -w --stdin) && >>> + pack1=$(echo $obj1 | git -c core.freshenPackFiles=false pack-objects .git/objects/pack/pack) && >>> + pack2=$(echo $obj2 | git -c core.freshenPackFiles=false pack-objects .git/objects/pack/pack) && >>> + git -c core.freshenPackFiles=false prune-packed && >>> + git cat-file -p $obj1 && >>> + git cat-file -p $obj2 && >>> + test-tool chmtime =-86400 .git/objects/pack/pack-$pack2.pack && >>> + git -c core.freshenPackFiles=false repack -A -d --unpack-unreachable=1.hour.ago && >>> + git cat-file -p $obj1 && >>> + test_must_fail git cat-file -p $obj2 >>> +' >> >> I had a little bit of a hard time following this test. AFAICT, it >> proceeds as follows: >> >> - Write two packs, each containing a unique unreachable blob. >> - Call 'git prune-packed' with packfile freshening disabled, then >> check that the object survived. >> - Then repack while in a state where one of the pack's contents would >> be pruned. >> - Make sure that one object survives and the other doesn't. >> >> This doesn't really seem to be testing the behavior of disabling >> packfile freshening so much as it's testing prune-packed, and repack's >> `--unpack-unreachable` option. I would probably have expected to see >> something more along the lines of: >> >> - Write an unreachable object, pack it, and then remove the loose copy >> you wrote in the first place. >> - Then roll the pack's mtime to some fixed value in the past. >> - Try to write the same object again with packfile freshening >> disabled, and verify that: >> - the pack's mtime was unchanged, >> - the object exists loose again >> >> But I'd really like to get some other opinions (especially from Peff, >> who brought up the potential concerns with write-tree) as to whether or >> not this is a direction worth pursuing. 
>>
>> My opinion is that it is not, and that the bizarre caching behavior you are seeing is out of Git's control.
>
> Thanks for this, I found the test hard to follow too, but didn't have time to really think about it; this makes sense.
>
> Back to the topic: I share your sentiment of trying to avoid complexity in this area.
>
> Sun: Have you considered passing --keep-unreachable to "git repack"? It's orthogonal to what you're trying here, but I wonder if being more aggressive about keeping objects, plus some improvements to perhaps skip this "touch" entirely if we have that in effect, wouldn't be more viable, and something e.g. Taylor would be more comfortable having as part of git.git.

Yes, I will try to create some more useful test cases with both `--keep-unreachable` and `--unpack-unreachable` if I still believe I need to do something with the mtime; in any case I need to get my test reports first, thanks ;)
On Tue, Jul 20, 2021 at 11:00:18PM +0800, Sun Chao wrote: > Meanwhile I find the description of the cached file system for NFS Client: > https://www.ibm.com/docs/en/aix/7.2?topic=performance-cache-file-system > It is mentioned that: > > 3. To ensure that the cached directories and files are kept up to date, > CacheFS periodically checks the consistency of files stored in the cache. > It does this by comparing the current modification time to the previous > modification time. > 4. If the modification times are different, all data and attributes > for the directory or file are purged from the cache, and new data and > attributes are retrieved from the back file system. > > It looks like reasonable, but perhaps I should check it from my test reports ;) That seems reasonable, assuming that you have CacheFS mounted and that's what you're interacting with (instead of talking to NFS directly). It seems reasonable that CacheFS would also have some way to tune how often the "purge cached blocks because of stale mtimes" would kick in, and what the grace period for determining if an mtime is "stale" is. So hopefully those values are (a) configurable and (b) you can find values that result in acceptable performance. Thanks, Taylor
diff --git a/Documentation/config/core.txt b/Documentation/config/core.txt
index c04f62a54a1..1e7cf366628 100644
--- a/Documentation/config/core.txt
+++ b/Documentation/config/core.txt
@@ -398,6 +398,17 @@ the largest projects. You probably do not need to adjust this value.
 +
 Common unit suffixes of 'k', 'm', or 'g' are supported.
 
+core.freshenPackFiles::
+	Normally we avoid writing an existing object by freshening the mtime
+	of the *.pack file which contains it, in order to aid processes
+	such as prune. Turning off this option on some servers can speed
+	up the execution of some commands like 'git-upload-pack' (e.g. some
+	servers that mount the same NFS disk will re-sync the *.pack files
+	to the cached file system if the mtime changes).
++
+The default is true, which means the *.pack file will be freshened if we
+want to write an existing object within it.
+
 core.deltaBaseCacheLimit::
 	Maximum number of bytes per thread to reserve for caching base objects
 	that may be referenced by multiple deltified objects.
diff --git a/cache.h b/cache.h
index ba04ff8bd36..46126c6977c 100644
--- a/cache.h
+++ b/cache.h
@@ -956,6 +956,7 @@ extern size_t packed_git_limit;
 extern size_t delta_base_cache_limit;
 extern unsigned long big_file_threshold;
 extern unsigned long pack_size_limit_cfg;
+extern int core_freshen_packfiles;
 
 /*
  * Accessors for the core.sharedrepository config which lazy-load the value
diff --git a/config.c b/config.c
index f9c400ad306..02dcc8a028e 100644
--- a/config.c
+++ b/config.c
@@ -1431,6 +1431,10 @@ static int git_default_core_config(const char *var, const char *value, void *cb)
 		return 0;
 	}
 
+	if (!strcmp(var, "core.freshenpackfiles")) {
+		core_freshen_packfiles = git_config_bool(var, value);
+	}
+
 	if (!strcmp(var, "core.deltabasecachelimit")) {
 		delta_base_cache_limit = git_config_ulong(var, value);
 		return 0;
diff --git a/environment.c b/environment.c
index 2f27008424a..397525609a8 100644
--- a/environment.c
+++ b/environment.c
@@ -73,6 +73,7 @@ int core_sparse_checkout_cone;
 int merge_log_config = -1;
 int precomposed_unicode = -1; /* see probe_utf8_pathname_composition() */
 unsigned long pack_size_limit_cfg;
+int core_freshen_packfiles = 1;
 enum log_refs_config log_all_ref_updates = LOG_REFS_UNSET;
 
 #ifndef PROTECT_HFS_DEFAULT
diff --git a/object-file.c b/object-file.c
index f233b440b22..884c3e92c38 100644
--- a/object-file.c
+++ b/object-file.c
@@ -1974,6 +1974,10 @@ static int freshen_loose_object(const struct object_id *oid)
 static int freshen_packed_object(const struct object_id *oid)
 {
 	struct pack_entry e;
+
+	if (!core_freshen_packfiles)
+		return 1;
+
 	if (!find_pack_entry(the_repository, oid, &e))
 		return 0;
 	if (e.p->freshened)
diff --git a/t/t7701-repack-unpack-unreachable.sh b/t/t7701-repack-unpack-unreachable.sh
index 937f89ee8c8..b6a0b6c9695 100755
--- a/t/t7701-repack-unpack-unreachable.sh
+++ b/t/t7701-repack-unpack-unreachable.sh
@@ -112,6 +112,20 @@ test_expect_success 'do not bother loosening old objects' '
 	test_must_fail git cat-file -p $obj2
 '
 
+test_expect_success 'do not bother loosening old objects without freshen pack time' '
+	obj1=$(echo three | git hash-object -w --stdin) &&
+	obj2=$(echo four | git hash-object -w --stdin) &&
+	pack1=$(echo $obj1 | git -c core.freshenPackFiles=false pack-objects .git/objects/pack/pack) &&
+	pack2=$(echo $obj2 | git -c core.freshenPackFiles=false pack-objects .git/objects/pack/pack) &&
+	git -c core.freshenPackFiles=false prune-packed &&
+	git cat-file -p $obj1 &&
+	git cat-file -p $obj2 &&
+	test-tool chmtime =-86400 .git/objects/pack/pack-$pack2.pack &&
+	git -c core.freshenPackFiles=false repack -A -d --unpack-unreachable=1.hour.ago &&
+	git cat-file -p $obj1 &&
+	test_must_fail git cat-file -p $obj2
+'
+
 test_expect_success 'keep packed objects found only in index' '
 	echo my-unique-content >file &&
 	git add file &&