[0/1] builtin/pack-objects.c: avoid iterating all refs

Message ID: 20210119143348.27535-1-jacob@gitlab.com

Jacob Vosmaer Jan. 19, 2021, 2:33 p.m. UTC
This is a small patch for git-pack-objects which will help server side
performance on repositories with lots of refs. I will post a related
but slightly larger patch for ls-refs.c in a separate thread.

The back story is in
https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/746 but I
will try to summarize it here.

We have a particular Gitaly (Git RPC) server at GitLab that has a very
homogeneous workload, dominated by CI. While trying to reduce CPU
utilization on the server we configured CI to fetch with the
'--no-tags' option. This had an unexpectedly large impact so I started
looking closer at why that may be.

What I learned is that by default, a fetch ends up using the
'--include-tag' command line option of git-pack-objects. This causes
git-pack-objects to iterate through all the tags of the repository to
see if any should be included in the pack because they point to packed
objects. The problem is that this "iterate through all the tags" uses
for_each_ref which iterates through all references in the repository,
and in doing so loads each associated object into memory to check if
the ref is broken. But all we need for '--include-tag' is to iterate
through refs/tags/.

In the repository we were testing, there are about
500,000 refs but only 2,000 tags. So we had to load a lot of objects
just for the sake of '--include-tag'. It was common to see more than
half the CPU time in git-pack-objects being spent in do_for_each_ref,
and that in turn was dominated by ref_resolves_to_object.

So I think it would be nice to iterate over just those 2,000 tags,
and not load objects for the 500,000 refs outside refs/tags/ that we
already know we don't care about.
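
For reference, the change boils down to switching the '--include-tag'
iteration from the all-refs helper to the tags-only helper. A sketch
of what the one-line hunk looks like (see the actual patch for the
exact context lines):

```diff
--- a/builtin/pack-objects.c
+++ b/builtin/pack-objects.c
@@ (in the code handling --include-tag) @@
 	if (include_tag)
-		for_each_ref(add_ref_tag, NULL);
+		for_each_tag_ref(add_ref_tag, NULL);
```

for_each_tag_ref() restricts the iteration to refs/tags/, so the
callback only ever sees the ~2,000 tags instead of all ~500,000 refs.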

Jacob Vosmaer (1):
  builtin/pack-objects.c: avoid iterating all refs

 builtin/pack-objects.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Jeff King Jan. 19, 2021, 11:26 p.m. UTC | #1
On Tue, Jan 19, 2021 at 03:33:47PM +0100, Jacob Vosmaer wrote:

> What I learned is that by default, a fetch ends up using the
> '--include-tag' command line option of git-pack-objects. This causes
> git-pack-objects to iterate through all the tags of the repository to
> see if any should be included in the pack because they point to packed
> objects. The problem is that this "iterate through all the tags" uses
> for_each_ref which iterates through all references in the repository,
> and in doing so loads each associated object into memory to check if
> the ref is broken. But all we need for '--include-tag' is to iterate
> through refs/tags/.
> 
> In the repository we were testing, there are about
> 500,000 refs but only 2,000 tags. So we had to load a lot of objects
> just for the sake of '--include-tag'. It was common to see more than
> half the CPU time in git-pack-objects being spent in do_for_each_ref,
> and that in turn was dominated by ref_resolves_to_object.

Some of these details may be useful in the commit message, too. :)

Your "load a lot of objects" had me worried for a moment. We try hard
not to load objects during such an iteration, even when peeling them
(because the packed-refs format has a magic shortcut there). But I think
that is all working as intended. What you were seeing was just tons of
has_object_file() to make sure the ref was not corrupt (so finding the
entry in a packfile, but not actually inflating the object contents).
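
To illustrate: the check Peff describes lives in the ref iteration
code in refs.c. A simplified from-memory sketch (the exact shape
varies across git versions, so treat this as illustrative only):

```c
/*
 * Sketch of the per-ref existence check: no object contents are
 * inflated, but has_object_file() still has to locate the object,
 * e.g. by binary-searching pack indexes. Done 500,000 times, that
 * lookup cost dominates.
 */
static int ref_resolves_to_object(const char *refname,
				  const struct object_id *oid,
				  unsigned int flags)
{
	if (flags & REF_ISBROKEN)
		return 0;
	if (!has_object_file(oid)) {
		error(_("%s does not point to a valid object!"), refname);
		return 0;
	}
	return 1;
}
```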

Arguably both upload-pack and pack-objects could use the INCLUDE_BROKEN
flag to avoid even checking this. We'd notice the problem when somebody
actually tried to fetch the object in question. That would speed things
up further on top of your patch, because we wouldn't need to check the
existence of even the tags. But it's definitely orthogonal, and should
be considered separately.
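
[A sketch of that orthogonal follow-up, for concreteness: at the time
of this thread, the "fullref" iterators took a flag to include broken
refs, which skips the existence check above. The exact function name
and signature here are from memory and may differ by version:

```c
/* Hypothetical variant of the --include-tag iteration that also
 * skips the per-ref object-existence check; a broken tag would
 * instead surface later, if a client actually fetched it. */
if (include_tag)
	for_each_fullref_in("refs/tags/", add_ref_tag, NULL,
			    1 /* include broken refs */);
```
]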

-Peff