
midx: use buffered I/O to talk to pack-objects

Message ID c5920e08-b7dd-e870-f99e-225d0aafc663@web.de (mailing list archive)
State Superseded
Series midx: use buffered I/O to talk to pack-objects

Commit Message

René Scharfe Aug. 2, 2020, 2:38 p.m. UTC
Like f0bca72dc77 (send-pack: use buffered I/O to talk to pack-objects,
2016-06-08), significantly reduce the number of system calls and
simplify the code for sending object IDs to pack-objects by using
stdio's buffering and handling errors after the loop.

Signed-off-by: René Scharfe <l.s.r@web.de>
---
 midx.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

--
2.28.0

Comments

Chris Torek Aug. 2, 2020, 4:11 p.m. UTC | #1
On Sun, Aug 2, 2020 at 7:40 AM René Scharfe <l.s.r@web.de> wrote:
> @@ -1443,10 +1446,15 @@ int midx_repack(struct repository *r, const char *object_dir, size_t batch_size,
>                         continue;
>
>                 nth_midxed_object_oid(&oid, m, i);
> -               xwrite(cmd.in, oid_to_hex(&oid), the_hash_algo->hexsz);
> -               xwrite(cmd.in, "\n", 1);
> +               fprintf(cmd_in, "%s\n", oid_to_hex(&oid));
> +       }
> +
> +       if (fclose(cmd_in)) {
> +               error_errno(_("could not close stdin of pack-objects"));
> +               result = 1;
> +               finish_command(&cmd);
> +               goto cleanup;
>         }
> -       close(cmd.in);
>
>         if (finish_command(&cmd)) {
>                 error(_("could not finish pack-objects"));
> --
> 2.28.0

Here, we don't have any explicit errno checking, but
of course error_errno() uses errno.  This too needs
an ferror() (or fflush()) test before the final fclose(),
and then we just need to use plain error().  Otherwise
you'll need the clumsier test-after-each-fprintf() and
an explicit final fflush()-and-test.
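
Something along these lines, say (just a sketch, and the message
wording is only for illustration):

	if (ferror(cmd_in)) {
		/* errno may be stale here, so report without it */
		error(_("could not write to pack-objects"));
		fclose(cmd_in);
		result = 1;
		finish_command(&cmd);
		goto cleanup;
	}

	if (fclose(cmd_in)) {
		error_errno(_("could not close stdin of pack-objects"));
		result = 1;
		finish_command(&cmd);
		goto cleanup;
	}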

Chris
Derrick Stolee Aug. 3, 2020, 12:39 p.m. UTC | #2
On 8/2/2020 10:38 AM, René Scharfe wrote:
> Like f0bca72dc77 (send-pack: use buffered I/O to talk to pack-objects,
> 2016-06-08), significantly reduce the number of system calls and
> simplify the code for sending object IDs to pack-objects by using
> stdio's buffering and handling errors after the loop.

Good find. Thanks for doing this important cleanup.

Outside of Chris's other feedback, this looks like an obviously
correct transformation.

Thanks,
-Stolee
Johannes Sixt Aug. 3, 2020, 6:10 p.m. UTC | #3
Am 02.08.20 um 18:11 schrieb Chris Torek:
> On Sun, Aug 2, 2020 at 7:40 AM René Scharfe <l.s.r@web.de> wrote:
>> @@ -1443,10 +1446,15 @@ int midx_repack(struct repository *r, const char *object_dir, size_t batch_size,
>>                         continue;
>>
>>                 nth_midxed_object_oid(&oid, m, i);
>> -               xwrite(cmd.in, oid_to_hex(&oid), the_hash_algo->hexsz);
>> -               xwrite(cmd.in, "\n", 1);
>> +               fprintf(cmd_in, "%s\n", oid_to_hex(&oid));
>> +       }
>> +
>> +       if (fclose(cmd_in)) {
>> +               error_errno(_("could not close stdin of pack-objects"));
>> +               result = 1;
>> +               finish_command(&cmd);
>> +               goto cleanup;
>>         }
>> -       close(cmd.in);
>>
>>         if (finish_command(&cmd)) {
>>                 error(_("could not finish pack-objects"));
>> --
>> 2.28.0
> 
> Here, we don't have any explicit errno checking, but
> of course error_errno() uses errno.  This too needs
> an ferror() (or fflush()) test before the final fclose(),
> and then we just need to use plain error().  Otherwise
> you'll need the clumsier test-after-each-fprintf() and
> an explicit final fflush()-and-test.

We need this explicit test after each fprintf anyway because SIGPIPE may
be ignored, and then writing fails with EPIPE. On Windows, this is
doubly important because we do not have SIGPIPE at all (and always see
EPIPE), but we see EPIPE only on the first failed write; subsequent
writes produce EINVAL.
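
In code that would be something like (sketch only; the stream's error
indicator records the failure, so it can still be reported once after
the loop):

	if (fprintf(cmd_in, "%s\n", oid_to_hex(&oid)) < 0)
		break;	/* error indicator of cmd_in is set; check ferror() later */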

-- Hannes
René Scharfe Aug. 3, 2020, 10:27 p.m. UTC | #4
Am 03.08.20 um 20:10 schrieb Johannes Sixt:
> Am 02.08.20 um 18:11 schrieb Chris Torek:
>> On Sun, Aug 2, 2020 at 7:40 AM René Scharfe <l.s.r@web.de> wrote:
>>> @@ -1443,10 +1446,15 @@ int midx_repack(struct repository *r, const char *object_dir, size_t batch_size,
>>>                         continue;
>>>
>>>                 nth_midxed_object_oid(&oid, m, i);
>>> -               xwrite(cmd.in, oid_to_hex(&oid), the_hash_algo->hexsz);
>>> -               xwrite(cmd.in, "\n", 1);
>>> +               fprintf(cmd_in, "%s\n", oid_to_hex(&oid));
>>> +       }
>>> +
>>> +       if (fclose(cmd_in)) {
>>> +               error_errno(_("could not close stdin of pack-objects"));
>>> +               result = 1;
>>> +               finish_command(&cmd);
>>> +               goto cleanup;
>>>         }
>>> -       close(cmd.in);
>>>
>>>         if (finish_command(&cmd)) {
>>>                 error(_("could not finish pack-objects"));
>>> --
>>> 2.28.0
>>
>> Here, we don't have any explicit errno checking, but
>> of course error_errno() uses errno.  This too needs
>> an ferror() (or fflush()) test before the final fclose(),
>> and then we just need to use plain error().  Otherwise
>> you'll need the clumsier test-after-each-fprintf() and
>> an explicit final fflush()-and-test.

OK, the implicit fflush() called by fclose() and thus fclose() itself
can succeed even if the error indicator is set, in particular if that
fflush() has nothing to do.  So we need to check ferror() before calling
fclose().

If ferror() tells us there was an error, errno might contain some random
error code, but not necessarily the root cause.  Thus we better keep
quiet about it and only use error() to tell the user we failed to talk
to our child but we don't know why.

We could fflush() explicitly before fclose(), but fclose() reports any
failure of its implicit fflush() anyway, so we don't gain anything by
doing so.

Did I get that right?

> We need this explicit test after each fprintf anyway because SIGPIPE may
> be ignored, and then writing fails with EPIPE. On Windows, this is
> doubly important because we do not have SIGPIPE at all (and always see
> EPIPE), but we see EPIPE only on the first failed write; subsequent
> writes produce EINVAL.

Why is this important?  The current code doesn't care about it, at
least.  It does care about EINTR, though.

René
René Scharfe Aug. 4, 2020, 4:31 a.m. UTC | #5
Am 04.08.20 um 00:27 schrieb René Scharfe:
> Am 03.08.20 um 20:10 schrieb Johannes Sixt:
>> Am 02.08.20 um 18:11 schrieb Chris Torek:
>>> On Sun, Aug 2, 2020 at 7:40 AM René Scharfe <l.s.r@web.de> wrote:
>>>> @@ -1443,10 +1446,15 @@ int midx_repack(struct repository *r, const char *object_dir, size_t batch_size,
>>>>                         continue;
>>>>
>>>>                 nth_midxed_object_oid(&oid, m, i);
>>>> -               xwrite(cmd.in, oid_to_hex(&oid), the_hash_algo->hexsz);
>>>> -               xwrite(cmd.in, "\n", 1);
>>>> +               fprintf(cmd_in, "%s\n", oid_to_hex(&oid));
>>>> +       }
>>>> +
>>>> +       if (fclose(cmd_in)) {
>>>> +               error_errno(_("could not close stdin of pack-objects"));
>>>> +               result = 1;
>>>> +               finish_command(&cmd);
>>>> +               goto cleanup;
>>>>         }
>>>> -       close(cmd.in);
>>>>
>>>>         if (finish_command(&cmd)) {
>>>>                 error(_("could not finish pack-objects"));
>>>> --
>>>> 2.28.0

>> We need this explicit test after each fprintf anyway because SIGPIPE may
>> be ignored, and then writing fails with EPIPE. On Windows, this is
>> doubly important because we do not have SIGPIPE at all (and always see
>> EPIPE), but we see EPIPE only on the first failed write; subsequent
>> writes produce EINVAL.
>
> Why is this important?  The current code doesn't care about it, at
> least.  It does care about EINTR, though.

Ah, that's the point, right?  You want to *ignore* EPIPE, because the
failed pack-objects process at the other end will have died with a
(hopefully) useful error message already.

OK, so we also need an fprintf() wrapper that retries on EINTR, ignores
EPIPE and exits early if the error indicator is set?

Somehow staying with write(2) and its friends and just adding strbuf
based buffering looks attractive to me now. :-/
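
Roughly like this, i.e. collecting hex IDs in a strbuf and flushing it
with write_in_full() once it grows past some threshold (sketch only;
the 8192 limit and the write_err flag are just placeholders):

	struct strbuf buf = STRBUF_INIT;
	int write_err = 0;

	for (i = 0; i < m->num_objects; i++) {
		struct object_id oid;

		/* ... skip objects from packs we don't include, as before ... */

		nth_midxed_object_oid(&oid, m, i);
		strbuf_addf(&buf, "%s\n", oid_to_hex(&oid));

		/* flush once the buffer has grown big enough */
		if (buf.len >= 8192) {
			if (write_in_full(cmd.in, buf.buf, buf.len) < 0)
				write_err = 1;
			strbuf_reset(&buf);
		}
	}

	if (buf.len && write_in_full(cmd.in, buf.buf, buf.len) < 0)
		write_err = 1;
	strbuf_release(&buf);
	close(cmd.in);
	/* report write_err after the loop, as with the stdio version */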

René
Junio C Hamano Aug. 4, 2020, 4:37 a.m. UTC | #6
René Scharfe <l.s.r@web.de> writes:

> Somehow staying with write(2) and its friends and just adding strbuf
> based buffering looks attractive to me now. :-/

Indeed :-/
René Scharfe Aug. 11, 2020, 4:08 p.m. UTC | #7
Am 03.08.20 um 14:39 schrieb Derrick Stolee:
> On 8/2/2020 10:38 AM, René Scharfe wrote:
>> Like f0bca72dc77 (send-pack: use buffered I/O to talk to pack-objects,
>> 2016-06-08), significantly reduce the number of system calls and
>> simplify the code for sending object IDs to pack-objects by using
>> stdio's buffering and handling errors after the loop.
>
> Good find. Thanks for doing this important cleanup.
>
> Outside of Chris's other feedback, this looks like an obviously
> correct transformation.

I spent a surprising amount of time trying to find a solution that is
easy to use and allows precise error handling.  But now I'm having second
thoughts.  The main selling point of buffering is better performance,
which is achieved by reducing the number of system calls.  How much
better is it, actually?

So I get this in my Git repo clone without this patch:

  $ strace --summary-only --trace=write git multi-pack-index repack --no-progress
  % time     seconds  usecs/call     calls    errors syscall
  ------ ----------- ----------- --------- --------- ----------------
  100.00    2.237478           2    921650           write
  ------ ----------- ----------- --------- --------- ----------------
  100.00    2.237478                921650           total

And here's the same with the patch:

  % time     seconds  usecs/call     calls    errors syscall
  ------ ----------- ----------- --------- --------- ----------------
  100.00    0.013293           2      4613           write
  ------ ----------- ----------- --------- --------- ----------------
  100.00    0.013293                  4613           total

Awesome, right?  write(2) calls are down by a factor of almost 200 and
the time spent on them is reduced significantly, as advertised.  Let's
ask hyperfine for a second opinion though.  Without this patch:

  Benchmark #1: git multi-pack-index repack --no-progress
    Time (mean ± σ):      1.652 s ±  0.206 s    [User: 1.383 s, System: 0.317 s]
    Range (min … max):    1.426 s …  1.890 s    10 runs

And the same with this patch applied:

    Time (mean ± σ):      1.635 s ±  0.199 s    [User: 1.363 s, System: 0.204 s]
    Range (min … max):    1.430 s …  1.871 s    10 runs

OK, so system time is down by ca. 50%, but the total duration is
basically unchanged.  It seems strace added quite some overhead to our
measurement above.

Anyway, now I wonder if adding our own buffer on top of the
OS-internal pipe buffer is actually worth it.  The numbers above are
from Debian testing, by the way.  Perhaps buffering still pays off on
operating systems with slower pipes...

René
Derrick Stolee Aug. 11, 2020, 5:14 p.m. UTC | #8
On 8/11/2020 12:08 PM, René Scharfe wrote:
> Am 03.08.20 um 14:39 schrieb Derrick Stolee:
>> On 8/2/2020 10:38 AM, René Scharfe wrote:
>>> Like f0bca72dc77 (send-pack: use buffered I/O to talk to pack-objects,
>>> 2016-06-08), significantly reduce the number of system calls and
>>> simplify the code for sending object IDs to pack-objects by using
>>> stdio's buffering and handling errors after the loop.
>>
>> Good find. Thanks for doing this important cleanup.
>>
>> Outside of Chris's other feedback, this looks like an obviously
>> correct transformation.
> 
> I spent a surprising amount of time trying to find a solution that is
> easy to use and allows precise error handling.  But now I'm having second
> thoughts.  The main selling point of buffering is better performance,
> which is achieved by reducing the number of system calls.  How much
> better is it, actually?
> 
> So I get this in my Git repo clone without this patch:
> 
>   $ strace --summary-only --trace=write git multi-pack-index repack --no-progress
>   % time     seconds  usecs/call     calls    errors syscall
>   ------ ----------- ----------- --------- --------- ----------------
>   100.00    2.237478           2    921650           write
>   ------ ----------- ----------- --------- --------- ----------------
>   100.00    2.237478                921650           total
> 
> And here's the same with the patch:
> 
>   % time     seconds  usecs/call     calls    errors syscall
>   ------ ----------- ----------- --------- --------- ----------------
>   100.00    0.013293           2      4613           write
>   ------ ----------- ----------- --------- --------- ----------------
>   100.00    0.013293                  4613           total
> 
> Awesome, right?  write(2) calls are down by a factor of almost 200 and
> the time spent on them is reduced significantly, as advertised.  Let's
> ask hyperfine for a second opinion though.  Without this patch:
> 
>   Benchmark #1: git multi-pack-index repack --no-progress
>     Time (mean ± σ):      1.652 s ±  0.206 s    [User: 1.383 s, System: 0.317 s]
>     Range (min … max):    1.426 s …  1.890 s    10 runs
> 
> And the same with this patch applied:
> 
>     Time (mean ± σ):      1.635 s ±  0.199 s    [User: 1.363 s, System: 0.204 s]
>     Range (min … max):    1.430 s …  1.871 s    10 runs
> 
> OK, so system time is down by ca. 50%, but the total duration is
> basically unchanged.  It seems strace added quite some overhead to our
> measurement above.
> 
> Anyway, now I wonder if adding our own buffer on top of the
> OS-internal pipe buffer is actually worth it.  The numbers above are
> from Debian testing, by the way.  Perhaps buffering still pays off on
> operating systems with slower pipes...

For what it's worth, I took your patch and applied it on Git for Windows
and tested 'git multi-pack-index repack' on my copy of the Git repo
(which includes Git for Windows and microsoft/git for a total of
1.7 million objects) and saw the time improve from 22.3s to 16.6s!

The "Enumerating objects" progress bar was visibly faster when I was
watching the command.

I was not expecting such a huge speed bump, seeing how the objects
are being repacked, and so this command includes complicated processes
like delta compression and zlib compression.

Thanks! This is definitely worth the speed boost on Windows.

-Stolee

Patch

diff --git a/midx.c b/midx.c
index 6d1584ca51d..742638c3e51 100644
--- a/midx.c
+++ b/midx.c
@@ -1383,6 +1383,7 @@  int midx_repack(struct repository *r, const char *object_dir, size_t batch_size,
 	uint32_t i;
 	unsigned char *include_pack;
 	struct child_process cmd = CHILD_PROCESS_INIT;
+	FILE *cmd_in;
 	struct strbuf base_name = STRBUF_INIT;
 	struct multi_pack_index *m = load_multi_pack_index(object_dir, 1);

@@ -1435,6 +1436,8 @@  int midx_repack(struct repository *r, const char *object_dir, size_t batch_size,
 		goto cleanup;
 	}

+	cmd_in = xfdopen(cmd.in, "w");
+
 	for (i = 0; i < m->num_objects; i++) {
 		struct object_id oid;
 		uint32_t pack_int_id = nth_midxed_pack_int_id(m, i);
@@ -1443,10 +1446,15 @@  int midx_repack(struct repository *r, const char *object_dir, size_t batch_size,
 			continue;

 		nth_midxed_object_oid(&oid, m, i);
-		xwrite(cmd.in, oid_to_hex(&oid), the_hash_algo->hexsz);
-		xwrite(cmd.in, "\n", 1);
+		fprintf(cmd_in, "%s\n", oid_to_hex(&oid));
+	}
+
+	if (fclose(cmd_in)) {
+		error_errno(_("could not close stdin of pack-objects"));
+		result = 1;
+		finish_command(&cmd);
+		goto cleanup;
 	}
-	close(cmd.in);

 	if (finish_command(&cmd)) {
 		error(_("could not finish pack-objects"));