[8/9] maintenance: auto-size incremental-repack batch

Message ID c3487fb8e3e72949080f51f341655f37b5f2f03f.1596731425.git.gitgitgadget@gmail.com (mailing list archive)
State Superseded
Series Maintenance II: prefetch, loose-objects, incremental-repack tasks

Commit Message

Derrick Stolee via GitGitGadget Aug. 6, 2020, 4:30 p.m. UTC
From: Derrick Stolee <dstolee@microsoft.com>

When repacking during the 'incremental-repack' task, we use the
--batch-size option in 'git multi-pack-index repack'. The initial setting
used --batch-size=0 to repack everything into a single pack-file. This is
not sustainable for a large repository. The amount of work required is
also likely to use too many system resources for a background job.

Update the 'incremental-repack' task by dynamically computing a
--batch-size option based on the current pack-file structure.
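Concretely, where the task previously spawned

    git multi-pack-index repack --batch-size=0

it now spawns something like

    git multi-pack-index repack --batch-size=2147483647

where the exact value depends on the pack sizes found at run time
(2147483647 is the capped maximum described below).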

The dynamic default size is computed with this idea in mind for a client
repository that was cloned from a very large remote: there is likely one
"big" pack-file that was created at clone time. Thus, do not try
repacking it as it is likely packed efficiently by the server.

Instead, we select the second-largest pack-file, and create a batch size
that is one larger than that pack-file. If there are three or more
pack-files, then this guarantees that at least two will be combined into
a new pack-file.

Of course, this means that the second-largest pack-file size is likely
to grow over time and may eventually surpass the initially-cloned
pack-file. Recall that the pack-file batch is selected in a greedy
manner: the packs are considered from oldest to newest and are selected
if they have size smaller than the batch size until the total selected
size is larger than the batch size. Thus, that oldest "clone" pack will
be first to repack after the new data creates a pack larger than that.
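As a concrete (hypothetical) example, suppose the pack directory
contains

    pack-clone.pack  10 GB   (from the initial clone)
    pack-b.pack     120 MB
    pack-c.pack     300 MB

The computed batch size is 300 MB plus one byte. Scanning oldest to
newest, the clone pack is larger than the batch size and is skipped,
while pack-b and pack-c are each small enough to be selected; together
they exceed the batch size, so the scan stops and those two packs are
combined into a single new pack-file.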

We also want to place some limits on how large these pack-files become,
in order to bound the amount of time spent repacking. A maximum
batch-size of two gigabytes means that large repositories will never be
packed into a single pack-file using this job, but also that repack is
rather expensive. This is a trade-off that is valuable to have if the
maintenance is being run automatically or in the background. Users who
truly want to optimize for space and performance (and are willing to pay
the upfront cost of a full repack) can use the 'gc' task to do so.
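For example, running

    git maintenance run --task=gc

by hand performs that full repack outside of this batched strategy.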

Create a test for this two gigabyte limit by creating an EXPENSIVE test
that generates two pack-files of roughly 2.5 gigabytes in size, then
performs an incremental repack. Check that the --batch-size argument in
the subcommand uses the hard-coded maximum.

Helped-by: Chris Torek <chris.torek@gmail.com>
Reported-by: Son Luong Ngoc <sluongng@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
---
 builtin/gc.c           | 43 +++++++++++++++++++++++++++++++++++++++++-
 t/t7900-maintenance.sh | 36 +++++++++++++++++++++++++++++++++--
 2 files changed, 76 insertions(+), 3 deletions(-)

Comments

Derrick Stolee Aug. 6, 2020, 6:13 p.m. UTC | #1
On 8/6/2020 1:02 PM, Son Luong Ngoc wrote:
> Hi Derrick,
> 
>> On Aug 6, 2020, at 18:30, Derrick Stolee via GitGitGadget <gitgitgadget@gmail.com> wrote:
>>
>> From: Derrick Stolee <dstolee@microsoft.com>
>>
>> When repacking during the 'incremental-repack' task, we use the
>> --batch-size option in 'git multi-pack-index repack'. The initial setting
>> used --batch-size=0 to repack everything into a single pack-file. This is
>> not sustainable for a large repository. The amount of work required is
>> also likely to use too many system resources for a background job.
>>
>> Update the 'incremental-repack' task by dynamically computing a
>> --batch-size option based on the current pack-file structure.
>>
>> The dynamic default size is computed with this idea in mind for a client
>> repository that was cloned from a very large remote: there is likely one
>> "big" pack-file that was created at clone time. Thus, do not try
>> repacking it as it is likely packed efficiently by the server.
>>
>> Instead, we select the second-largest pack-file, and create a batch size
>> that is one larger than that pack-file. If there are three or more
>> pack-files, then this guarantees that at least two will be combined into
>> a new pack-file.
> 
> I have been using this strategy with git-care.sh [1] with great success.
> However, it is worth noting that there are still edge cases where I
> observed the pack count keep increasing, because using
> '--batch-size=<second-biggest-pack>+1' did not result in any repacking.
> In one case, I observed a local copy go up to 160+ packs without being
> able to repack.
> 
> I have been considering a strategy of falling back to '(3rd biggest
> pack size) + 1', then the 4th and 5th and so on, when the midx repack
> call results in a no-op, as that was how I fixed my repo when the edge
> case happened.
> 
> Such a strategy would require a way for midx repack to signal when a
> no-op happens, so something like 'git multi-pack-index repack
> --batch-size=123456 --exit-code' would be very desirable.
> 
>>
>> Of course, this means that the second-largest pack-file size is likely
>> to grow over time and may eventually surpass the initially-cloned
>> pack-file. Recall that the pack-file batch is selected in a greedy
>> manner: the packs are considered from oldest to newest and are selected
>> if they have size smaller than the batch size until the total selected
>> size is larger than the batch size. Thus, that oldest "clone" pack will
>> be first to repack after the new data creates a pack larger than that.
>>
>> We also want to place some limits on how large these pack-files become,
>> in order to bound the amount of time spent repacking. A maximum
>> batch-size of two gigabytes means that large repositories will never be
>> packed into a single pack-file using this job, but also that repack is
>> rather expensive. This is a trade-off that is valuable to have if the
>> maintenance is being run automatically or in the background. Users who
>> truly want to optimize for space and performance (and are willing to pay
>> the upfront cost of a full repack) can use the 'gc' task to do so.
>>
>> Create a test for this two gigabyte limit by creating an EXPENSIVE test
>> that generates two pack-files of roughly 2.5 gigabytes in size, then
>> performs an incremental repack. Check that the --batch-size argument in
>> the subcommand uses the hard-coded maximum.
>>
>> Helped-by: Chris Torek <chris.torek@gmail.com>
>> Reported-by: Son Luong Ngoc <sluongng@gmail.com>
>> Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
> 
> Generally, I have found working with '--batch-size' to be a bit unpredictable.
> I wonder if we could tweak the behavior somewhat so that it's more consistent
> to use and test?

Thanks for continuing to test with this model. My experience has been limited
to the --batch-size=2g option on repos that typically have >15gb of commit
and tree data in their pack directory.

One bit of unpredictability that we've seen is that --batch-size uses
the "expected size" of a pack-file to decide whether it should be
repacked. The expected size only counts the objects that are still
referenced in that pack, so when we have new packs that were
"un-thinned" by duplicating delta-bases, those duplicates affect the
expected size of a repack.

Here is a great reason to have this series be split. While part I stabilizes,
we can take the time to re-evaluate this strategy. It might require updating
the multi-pack-index builtin itself.

One thing to think about is the different possible sizes of a
repository. If the entire pack directory is small, then we might as
well repack everything. (What should the limit be? 2gb? configurable?)
In the case of a larger directory, should we just use the --batch-size
logic with that limit, or should we create a new option for selecting
which packs to repack?

For example, the recent simulation [1] of downloading fetch packs and
running this maintenance did see extra space that could be recovered with
the current logic. However, what if we could specify "repack everything
except the largest pack" exactly? That would do exactly what this task
is _intending_, as long as that resulting pack-file does not exceed the
2gb maximum by too much.

[1] https://lore.kernel.org/git/d50fbb33-9be3-1c48-2277-8bf894df734f@gmail.com/

I will think more on this. I'm open to alternate strategies.

Thanks,
-Stolee

Patch

diff --git a/builtin/gc.c b/builtin/gc.c
index 35c6d7ce82..c09bc1381c 100644
--- a/builtin/gc.c
+++ b/builtin/gc.c
@@ -1094,6 +1094,46 @@ static int multi_pack_index_expire(struct maintenance_opts *opts)
 	return 0;
 }
 
+#define TWO_GIGABYTES (INT32_MAX)
+
+static off_t get_auto_pack_size(void)
+{
+	/*
+	 * The "auto" value is special: we optimize for
+	 * one large pack-file (i.e. from a clone) and
+	 * expect the rest to be small and they can be
+	 * repacked quickly.
+	 *
+	 * The strategy we select here is to select a
+	 * size that is one more than the second largest
+	 * pack-file. This ensures that we will repack
+	 * at least two packs if there are three or more
+	 * packs.
+	 */
+	off_t max_size = 0;
+	off_t second_largest_size = 0;
+	off_t result_size;
+	struct packed_git *p;
+	struct repository *r = the_repository;
+
+	reprepare_packed_git(r);
+	for (p = get_all_packs(r); p; p = p->next) {
+		if (p->pack_size > max_size) {
+			second_largest_size = max_size;
+			max_size = p->pack_size;
+		} else if (p->pack_size > second_largest_size)
+			second_largest_size = p->pack_size;
+	}
+
+	result_size = second_largest_size + 1;
+
+	/* But limit ourselves to a batch size of 2g */
+	if (result_size > TWO_GIGABYTES)
+		result_size = TWO_GIGABYTES;
+
+	return result_size;
+}
+
 static int multi_pack_index_repack(struct maintenance_opts *opts)
 {
 	struct child_process child = CHILD_PROCESS_INIT;
@@ -1104,7 +1144,8 @@ static int multi_pack_index_repack(struct maintenance_opts *opts)
 	if (opts->quiet)
 		strvec_push(&child.args, "--no-progress");
 
-	strvec_push(&child.args, "--batch-size=0");
+	strvec_pushf(&child.args, "--batch-size=%"PRIuMAX,
+				  (uintmax_t)get_auto_pack_size());
 
 	close_object_store(the_repository->objects);
 
diff --git a/t/t7900-maintenance.sh b/t/t7900-maintenance.sh
index be19ac7623..1c5f44f2b3 100755
--- a/t/t7900-maintenance.sh
+++ b/t/t7900-maintenance.sh
@@ -179,10 +179,42 @@ test_expect_success 'incremental-repack task' '
 	test_line_count = 4 packs-between &&
 
 	# the job deletes the two old packs, and does not write
-	# a new one because only one pack remains.
+	# a new one because the batch size is not high enough to
+	# pack the largest pack-file.
 	git maintenance run --task=incremental-repack &&
 	ls .git/objects/pack/*.pack >packs-after &&
-	test_line_count = 1 packs-after
+	test_line_count = 2 packs-after
+'
+
+test_expect_success EXPENSIVE 'incremental-repack 2g limit' '
+	for i in $(test_seq 1 5)
+	do
+		test-tool genrandom foo$i $((512 * 1024 * 1024 + 1)) >>big ||
+		return 1
+	done &&
+	git add big &&
+	git commit -m "Add big file (1)" &&
+
+	# ensure any possible loose objects are in a pack-file
+	git maintenance run --task=loose-objects &&
+
+	rm big &&
+	for i in $(test_seq 6 10)
+	do
+		test-tool genrandom foo$i $((512 * 1024 * 1024 + 1)) >>big ||
+		return 1
+	done &&
+	git add big &&
+	git commit -m "Add big file (2)" &&
+
+	# ensure any possible loose objects are in a pack-file
+	git maintenance run --task=loose-objects &&
+
+	# Now run the incremental-repack task and check the batch-size
+	GIT_TRACE2_EVENT="$(pwd)/run-2g.txt" git maintenance run \
+		--task=incremental-repack 2>/dev/null &&
+	test_subcommand git multi-pack-index repack \
+		 --no-progress --batch-size=2147483647 <run-2g.txt
 '
 
 test_done