
index-pack: spawn threads atomically

Message ID: 20240105085034.GA3078476@coredump.intra.peff.net (mailing list archive)
State: New, archived
Series: index-pack: spawn threads atomically

Commit Message

Jeff King Jan. 5, 2024, 8:50 a.m. UTC
The t5309 script triggers a racy false positive with SANITIZE=leak on a
multi-core system. Running with "--stress --run=6" usually fails within
10 seconds or so for me, complaining with something like:

    + git index-pack --fix-thin --stdin
    fatal: REF_DELTA at offset 46 already resolved (duplicate base 01d7713666f4de822776c7622c10f1b07de280dc?)

    =================================================================
    ==3904583==ERROR: LeakSanitizer: detected memory leaks

    Direct leak of 32 byte(s) in 1 object(s) allocated from:
        #0 0x7fa790d01986 in __interceptor_realloc ../../../../src/libsanitizer/lsan/lsan_interceptors.cpp:98
        #1 0x7fa790add769 in __pthread_getattr_np nptl/pthread_getattr_np.c:180
        #2 0x7fa790d117c5 in __sanitizer::GetThreadStackTopAndBottom(bool, unsigned long*, unsigned long*) ../../../../src/libsanitizer/sanitizer_common/sanitizer_linux_libcdep.cpp:150
        #3 0x7fa790d11957 in __sanitizer::GetThreadStackAndTls(bool, unsigned long*, unsigned long*, unsigned long*, unsigned long*) ../../../../src/libsanitizer/sanitizer_common/sanitizer_linux_libcdep.cpp:598
        #4 0x7fa790d03fe8 in __lsan::ThreadStart(unsigned int, unsigned long long, __sanitizer::ThreadType) ../../../../src/libsanitizer/lsan/lsan_posix.cpp:51
        #5 0x7fa790d013fd in __lsan_thread_start_func ../../../../src/libsanitizer/lsan/lsan_interceptors.cpp:440
        #6 0x7fa790adc3eb in start_thread nptl/pthread_create.c:444
        #7 0x7fa790b5ca5b in clone3 ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

    SUMMARY: LeakSanitizer: 32 byte(s) leaked in 1 allocation(s).
    Aborted
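
For reference, reproducing this looks something like the following
(assuming a git.git checkout on a multi-core Linux machine, and
mirroring the command Taylor uses later in this thread):

    $ make SANITIZE=leak
    $ cd t
    $ GIT_TEST_PASSING_SANITIZE_LEAK=true \
      ./t5309-pack-delta-cycles.sh --stress --run=6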

What happens is this:

  0. We construct a bogus pack with a duplicate object in it and trigger
     index-pack.

  1. We spawn a bunch of worker threads to resolve deltas (on my system
     it is 16 threads).

  2. One of the threads sees the duplicate object and bails by calling
     exit(), taking down all of the threads. This is expected and is the
     point of the test.

  3. At the time exit() is called, we may still be spawning threads from
     the main process via pthread_create(). LSan hooks thread creation
     to update its book-keeping; it has to know where each thread's
     stack is (so it can find entry points for reachable memory). So it
     calls pthread_getattr_np() to get information about the new thread.
     That may allocate memory that must be freed with a matching call to
     pthread_attr_destroy() (see the sketch after this list). Probably
     LSan does that immediately, but if you're unlucky enough, the
     exit() will happen while it's between those two calls, and the
     allocated pthread_attr_t appears as a leak.
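
Here is a rough sketch of that window, paraphrasing what the LSan
thread-start hook does per thread (illustrative C only, not the actual
sanitizer source):

    #define _GNU_SOURCE
    #include <pthread.h>

    static void lsan_like_thread_start(void)
    {
            pthread_attr_t attr;
            void *stack_base;
            size_t stack_size;

            /* glibc may realloc() an internal buffer in here... */
            pthread_getattr_np(pthread_self(), &attr);
            pthread_attr_getstack(&attr, &stack_base, &stack_size);

            /*
             * ...and only this call releases it. If some other thread
             * calls exit() between the two, the final leak check sees
             * the buffer as still allocated and reports it.
             */
            pthread_attr_destroy(&attr);
    }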

This isn't a real leak. It's not even in our code, but rather in the
LSan instrumentation code. So we could just ignore it. But the false
positive can cause people to waste time tracking it down.

It's possibly something that LSan could protect against (e.g., cover the
getattr/destroy pair with a mutex, and then in the final post-exit()
check for leaks try to take the same mutex). But I don't know enough
about LSan to say if that's a reasonable approach or not (or if my
analysis is even completely correct).

In the meantime, it's pretty easy to avoid the race by making creation
of the worker threads "atomic". That is, we'll spawn all of them before
letting any of them start to work. That's easy to do because we already
have a work_lock() mutex for handing out that work. If the main process
takes it, then all of the threads will immediately block until we've
finished spawning and released it.
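
As a minimal standalone sketch of the idea (not the index-pack code
itself; there, work_lock()/work_unlock() wrap the same primitive):

    #include <pthread.h>

    static pthread_mutex_t work_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *data)
    {
            (void)data;
            /* every worker takes the work lock before claiming work... */
            pthread_mutex_lock(&work_mutex);
            /* ...claim some deltas to resolve... */
            pthread_mutex_unlock(&work_mutex);
            /* ...then do the work, possibly exit()-ing on a bad pack */
            return NULL;
    }

    static void spawn_all(pthread_t *threads, int n)
    {
            int i;

            /* hold the lock so no worker can proceed mid-spawn */
            pthread_mutex_lock(&work_mutex);
            for (i = 0; i < n; i++)
                    pthread_create(&threads[i], NULL, worker, NULL);
            pthread_mutex_unlock(&work_mutex);
    }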

This shouldn't make any practical difference for non-LSan runs. The
thread spawning is quick, and could happen before any worker thread gets
scheduled anyway.

Probably other spots that use threads are subject to the same issues.
But since we have to manually insert locking (and since this really is
kind of a hack), let's not bother with them unless somebody experiences
a similar racy false-positive in practice.

Signed-off-by: Jeff King <peff@peff.net>
---
Rescuing this from:

  https://lore.kernel.org/git/20231221105124.GD570888@coredump.intra.peff.net/

where it was buried deep in a thread. I still think it's kind of gross,
but it may be the least-bad thing.

 builtin/index-pack.c | 2 ++
 1 file changed, 2 insertions(+)

Comments

Taylor Blau Jan. 5, 2024, 4:33 p.m. UTC | #1
On Fri, Jan 05, 2024 at 03:50:34AM -0500, Jeff King wrote:
> The t5309 script triggers a racy false positive with SANITIZE=leak on a
> multi-core system. Running with "--stress --run=6" usually fails within
> 10 seconds or so for me, complaining with something like:
>
>     + git index-pack --fix-thin --stdin
>     fatal: REF_DELTA at offset 46 already resolved (duplicate base 01d7713666f4de822776c7622c10f1b07de280dc?)
>
>     =================================================================
>     ==3904583==ERROR: LeakSanitizer: detected memory leaks
>
>     Direct leak of 32 byte(s) in 1 object(s) allocated from:
>         #0 0x7fa790d01986 in __interceptor_realloc ../../../../src/libsanitizer/lsan/lsan_interceptors.cpp:98
>         #1 0x7fa790add769 in __pthread_getattr_np nptl/pthread_getattr_np.c:180
>         #2 0x7fa790d117c5 in __sanitizer::GetThreadStackTopAndBottom(bool, unsigned long*, unsigned long*) ../../../../src/libsanitizer/sanitizer_common/sanitizer_linux_libcdep.cpp:150
>         #3 0x7fa790d11957 in __sanitizer::GetThreadStackAndTls(bool, unsigned long*, unsigned long*, unsigned long*, unsigned long*) ../../../../src/libsanitizer/sanitizer_common/sanitizer_linux_libcdep.cpp:598
>         #4 0x7fa790d03fe8 in __lsan::ThreadStart(unsigned int, unsigned long long, __sanitizer::ThreadType) ../../../../src/libsanitizer/lsan/lsan_posix.cpp:51
>         #5 0x7fa790d013fd in __lsan_thread_start_func ../../../../src/libsanitizer/lsan/lsan_interceptors.cpp:440
>         #6 0x7fa790adc3eb in start_thread nptl/pthread_create.c:444
>         #7 0x7fa790b5ca5b in clone3 ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
>
>     SUMMARY: LeakSanitizer: 32 byte(s) leaked in 1 allocation(s).
>     Aborted

We discussed this in another thread (beginning here [1]), and I would be
fine with this approach. I share your feeling that it is a little gross
to have to work around LSan's implementation by tweaking production
code, but I think that this doesn't have to be the most pristine patch
ever written, either ;-).

Just playing devil's advocate for a moment, I wonder if another approach
might be to disable the threading altogether for the purposes of this
test. The performance difference is negligible, and I don't think we're
exercising any interesting paths in this particular test that have to do
with pack.threads > 1 that aren't covered extensively elsewhere.

So, in other words, I think a reasonable approach would be to do
something like:

--- 8< ---
diff --git a/t/t5309-pack-delta-cycles.sh b/t/t5309-pack-delta-cycles.sh
index 4e910c5b9d..1d132b6324 100755
--- a/t/t5309-pack-delta-cycles.sh
+++ b/t/t5309-pack-delta-cycles.sh
@@ -73,7 +73,7 @@ test_expect_success 'failover to a duplicate object in the same pack' '
 		pack_obj $A
 	} >recoverable.pack &&
 	pack_trailer recoverable.pack &&
-	test_must_fail git index-pack --fix-thin --stdin <recoverable.pack
+	test_must_fail git index-pack --threads=1 --fix-thin --stdin <recoverable.pack
 '

 test_done
--- >8 ---

And call it a day. I built with SANITIZE=leak, and then ran:

    $ GIT_TEST_PASSING_SANITIZE_LEAK=true ./t5309-pack-delta-cycles.sh --stress --run=6

for a while and didn't see any failures. That could be luck, of course,
but without the above patch I was seeing failures within a few seconds.
I'm reasonably confident that this would do the trick.

For what it's worth, I'm fine with either approach, mostly to avoid
tying up more of the list's time discussing the options. But I have a
vague preference towards `--threads=1` since it doesn't require us to
touch production code.

> Rescuing this from:
>
>   https://lore.kernel.org/git/20231221105124.GD570888@coredump.intra.peff.net/
>
> where it was buried deep in a thread. I still think it's kind of gross,
> but it may be the least-bad thing.

In either case, thanks for digging it back up :-).

Thanks,
Taylor

[1]: https://lore.kernel.org/git/xmqqbkasnxba.fsf@gitster.g/
Jeff King Jan. 10, 2024, 11:44 a.m. UTC | #2
On Fri, Jan 05, 2024 at 11:33:23AM -0500, Taylor Blau wrote:

> -	test_must_fail git index-pack --fix-thin --stdin <recoverable.pack
> +	test_must_fail git index-pack --threads=1 --fix-thin --stdin <recoverable.pack
> [...]
> For what it's worth, I'm fine with either approach, mostly to avoid
> tying up more of the list's time discussing the options. But I have a
> vague preference towards `--threads=1` since it doesn't require us to
> touch production code.

That's quite tempting, actually. The flip side, though, is that the test
no longer reflects the production code as well. That is, in the real
world we'd still call exit() from a thread. That obviously works OK now
(modulo LSan), but if we ever had a regression where that left us in an
inconsistent state, we'd be less likely to notice it. Feels kind of
unlikely in practice, though.

I dunno. I guess the real least-bad thing is seeing if LSan can be
fixed to handle this atomically. I haven't even reported it upstream.

If we do go with "--threads=1", I suspect several tests in that file
need it.

-Peff
Taylor Blau Jan. 10, 2024, 5:34 p.m. UTC | #3
On Wed, Jan 10, 2024 at 06:44:56AM -0500, Jeff King wrote:
> On Fri, Jan 05, 2024 at 11:33:23AM -0500, Taylor Blau wrote:
>
> > -	test_must_fail git index-pack --fix-thin --stdin <recoverable.pack
> > +	test_must_fail git index-pack --threads=1 --fix-thin --stdin <recoverable.pack
> > [...]
> > For what it's worth, I'm fine with either approach, mostly to avoid
> > tying up more of the list's time discussing the options. But I have a
> > vague preference towards `--threads=1` since it doesn't require us to
> > touch production code.
>
> That's quite tempting, actually. The flip side, though, is that the test
> no longer reflects the production code as well. That is, in the real
> world we'd still call exit() from a thread. That obviously works OK now
> (modulo LSan), but if we ever had a regression where that left us in an
> inconsistent state, we'd be less likely to notice it. Feels kind of
> unlikely in practice, though.
>
> I dunno. I guess the real least-bad thing is seeing if LSan can be
> fixed to handle this atomically. I haven't even reported it upstream.

In the meantime, I think that the `--threads=1` approach feels less
invasive. I tend to agree that neither option is ideal, but that
`--threads=1` is probably the least bad, and that failing to catch a
regression there feels rather unlikely.

> If we do go with "--threads=1", I suspect several tests in that file
> need it.

Yeah, there are a couple of others. I think the ones that need modifying
are at the intersection of "expected to fail" and "in a test which is
expected to pass leak-free":

    $ grep -l 'TEST_PASSES_SANITIZE_LEAK=true' t????-*.sh |
      xargs grep -l 'test_must_fail git index-pack'
    t5302-pack-index.sh
    t5308-pack-detect-duplicates.sh
    t5309-pack-delta-cycles.sh
    t5313-pack-bounds-checks.sh
    t5325-reverse-index.sh

I'll send a series shortly to tweak those test scripts to avoid this
issue, in case you want to notify the LSan folks of it more generally.

> -Peff

Thanks,
Taylor
Jeff King Jan. 11, 2024, 6:53 a.m. UTC | #4
On Wed, Jan 10, 2024 at 12:34:09PM -0500, Taylor Blau wrote:

> > If we do go with "--threads=1", I suspect several tests in that file
> > need it.
> 
> Yeah, there are a couple of others. I think the ones that need modifying
> are at the intersection of "expected to fail" and "in a test which is
> expected to pass leak-free":
> 
>     $ grep -l 'TEST_PASSES_SANITIZE_LEAK=true' t????-*.sh |
>       xargs grep -l 'test_must_fail git index-pack'
>     t5302-pack-index.sh
>     t5308-pack-detect-duplicates.sh
>     t5309-pack-delta-cycles.sh
>     t5313-pack-bounds-checks.sh
>     t5325-reverse-index.sh

I think that is more than we need. It's only a problem when we hit a
die() inside a thread, which happens only during delta resolution. So
your patch 2, for example, touches a test which triggers the
--max-input-size check. But we would find that out on the initial
unthreaded pass over the pack.

The one in patch 3 seems at first glance like it might be a problem
(it's another duplicate-object case, like the ones in t5309). But
it isn't a problem because the duplicate object isn't a delta, so we
notice the problem in write_idx_file() from the main thread (which I
verified by running it under gdb and setting a breakpoint at die()).
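
For anyone repeating that check, it is roughly the following
(hypothetical invocation; the real test feeds index-pack a generated
pack, here called dup.pack):

    $ gdb --args ./git index-pack --fix-thin --stdin
    (gdb) break die
    (gdb) run <dup.pack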

I suspect patch 4 is the same, but didn't run gdb on each case. And
patch 5 is about a corrupt reverse index, so almost certainly the main
thread. So I suspect that patch 1 is the only one that matters here (and
probably all of those are needed, because it is all about broken
deltas).

All that said, I am on the fence between the two approaches. If Junio
prefers the atomic-spawn direction, I'm fine with that, and there's not
much point in polishing the --threads=1 approach further.

-Peff

Patch

diff --git a/builtin/index-pack.c b/builtin/index-pack.c
index dda94a9f46..0e94819216 100644
--- a/builtin/index-pack.c
+++ b/builtin/index-pack.c
@@ -1257,13 +1257,15 @@ static void resolve_deltas(void)
 	base_cache_limit = delta_base_cache_limit * nr_threads;
 	if (nr_threads > 1 || getenv("GIT_FORCE_THREADS")) {
 		init_thread();
+		work_lock();
 		for (i = 0; i < nr_threads; i++) {
 			int ret = pthread_create(&thread_data[i].thread, NULL,
 						 threaded_second_pass, thread_data + i);
 			if (ret)
 				die(_("unable to create thread: %s"),
 				    strerror(ret));
 		}
+		work_unlock();
 		for (i = 0; i < nr_threads; i++)
 			pthread_join(thread_data[i].thread, NULL);
 		cleanup_thread();