[2/3] check_stream_sha1(): handle input underflow

Message ID 20181030232312.GB32038@sigill.intra.peff.net (mailing list archive)
State New, archived
Series [1/3] t1450: check large blob in trailing-garbage test

Commit Message

Jeff King Oct. 30, 2018, 11:23 p.m. UTC
Fix an infinite loop when fscking large truncated loose
objects.

The check_stream_sha1() function takes an mmap'd loose
object buffer and streams 4k of output at a time, checking
its sha1. The loop quits when we've output enough bytes (we
know the size from the object header), or when zlib tells us
anything except Z_OK or Z_BUF_ERROR.

The latter is expected because zlib may run out of room in
our 4k buffer, and that is how it tells us to process the
output and loop again.
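
To make the shape concrete, here is a condensed sketch against
plain zlib (not the verbatim code; the real loop uses git's
git_inflate() wrapper and feeds each chunk to the hash, elided
here):

  /*
   * Sketch of the check_stream_sha1() loop. "size" comes from the
   * parsed object header; the whole deflated input is already set
   * up in next_in/avail_in from the mmap'd buffer.
   */
  static int stream_object(z_stream *stream, unsigned long size)
  {
	unsigned char buf[4096];
	unsigned long total_read = 0;
	int status = Z_OK;

	while (total_read <= size &&
	       (status == Z_OK || status == Z_BUF_ERROR)) {
		stream->next_out = buf;
		stream->avail_out = sizeof(buf);
		if (size - total_read < stream->avail_out)
			stream->avail_out = size - total_read;
		status = inflate(stream, Z_FINISH);
		/* ...hash the bytes in buf here... */
		total_read += stream->next_out - buf;
	}
	return (status == Z_STREAM_END && total_read == size) ? 0 : -1;
  }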

But Z_BUF_ERROR also covers another case: one in which zlib
cannot make forward progress because it needs more _input_.
This should never happen in this loop, because though we're
streaming the output, we have the entire deflated input
available in the mmap'd buffer. But since we don't check
this case, we'll just loop infinitely if we do see a
truncated object, thinking that zlib is asking for more
output space.
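
The failure mode is easy to see with zlib alone. This standalone
program (a hypothetical demo, not git code) compresses a buffer,
truncates the result, and inflates: the second call returns
Z_BUF_ERROR even though plenty of output space remains, which is
exactly what the unfixed loop mistakes for "need more output room":

  #include <stdio.h>
  #include <string.h>
  #include <zlib.h>

  int main(void)
  {
	static unsigned char plain[8192], zbuf[16384], out[16384];
	uLongf zlen = sizeof(zbuf);
	z_stream s;
	int status;

	memset(plain, 'x', sizeof(plain));
	if (compress(zbuf, &zlen, plain, sizeof(plain)) != Z_OK)
		return 1;

	memset(&s, 0, sizeof(s));
	if (inflateInit(&s) != Z_OK)
		return 1;
	s.next_in = zbuf;
	s.avail_in = (uInt)(zlen / 2);	/* simulate a truncated loose object */
	s.next_out = out;
	s.avail_out = sizeof(out);

	status = inflate(&s, Z_FINISH);	/* consumes the truncated input */
	status = inflate(&s, Z_FINISH);	/* no progress possible */
	printf("status=%d (Z_BUF_ERROR=%d), avail_out=%u\n",
	       status, Z_BUF_ERROR, s.avail_out);

	inflateEnd(&s);
	return 0;
  }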

It's tempting to fix this by checking stream->avail_in as
part of the loop condition (and quitting if all of our bytes
have been consumed). But that assumes that once zlib has
consumed the input, there is nothing left to do.  That's not
necessarily the case: it may have read our input into its
internal state, but still have bytes to output.
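
(In loop-condition form, that rejected idea would have looked
something like this hypothetical variant:)

	/*
	 * Hypothetical rejected fix: quit once all input is consumed.
	 * This can exit too early, because inflate() may have pulled
	 * the final input bytes into its internal state while still
	 * owing us decompressed output.
	 */
	while (total_read <= size &&
	       stream->avail_in &&
	       (status == Z_OK || status == Z_BUF_ERROR)) {
		/* ... same body as in the sketch above ... */
	}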

Instead, let's continue on Z_BUF_ERROR only when we see the
case we're expecting: the previous round filled our output
buffer completely. If it didn't (and we still saw
Z_BUF_ERROR), we know something is wrong and should break
out of the loop.

The bug comes from commit f6371f9210 (sha1_file: add
read_loose_object() function, 2017-01-13), which
reimplemented some of the existing loose object functions.
So it's worth checking if this bug was inherited from any of
those. The answer seems to be no. The two obvious
candidates are both OK:

  1. unpack_sha1_rest(); this doesn't need to loop on
     Z_BUF_ERROR at all, since it allocates the expected
     output buffer in advance (which we can't do since we're
     explicitly streaming here)

  2. check_object_signature(); the streaming path relies on
     the istream interface, which uses read_istream_loose()
     for this case. That function uses a similar "is our
     output buffer full" check with Z_BUF_ERROR (which is
     where I stole it from for this patch!)

Reported-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Jeff King <peff@peff.net>
---
 sha1-file.c     |  3 ++-
 t/t1450-fsck.sh | 19 +++++++++++++++++++
 2 files changed, 21 insertions(+), 1 deletion(-)

Comments

Junio C Hamano Oct. 31, 2018, 4:23 a.m. UTC | #1
Jeff King <peff@peff.net> writes:

> The bug comes from commit f6371f9210 (sha1_file: add
> read_loose_object() function, 2017-01-13), which
> reimplemented some of the existing loose object functions.
> So it's worth checking if this bug was inherited from any of
> those. The answer seems to be no. The two obvious
> candidates are both OK:
>
>   1. unpack_sha1_rest(); this doesn't need to loop on
>      Z_BUF_ERROR at all, since it allocates the expected
>      output buffer in advance (which we can't do since we're
>      explicitly streaming here)
>
>   2. check_object_signature(); the streaming path relies on
>      the istream interface, which uses read_istream_loose()
>      for this case. That function uses a similar "is our
>      output buffer full" check with Z_BUF_ERROR (which is
>      where I stole it from for this patch!)

See 692f0bc7 to find who did the fix you stole from, and for what
kind of breakage the original fix was made.

By the way, a very similar loop for pack_non_delta istream iterates
while total_read is smaller than sz, but it does not have the same
check upon BUF_ERROR to see if we've read everything.
Jeff King Oct. 31, 2018, 4:30 a.m. UTC | #2
On Wed, Oct 31, 2018 at 01:23:54PM +0900, Junio C Hamano wrote:

> Jeff King <peff@peff.net> writes:
> 
> > The bug comes from commit f6371f9210 (sha1_file: add
> > read_loose_object() function, 2017-01-13), which
> > reimplemented some of the existing loose object functions.
> > So it's worth checking if this bug was inherited from any of
> > those. The answer seems to be no. The two obvious
> > candidates are both OK:
> >
> >   1. unpack_sha1_rest(); this doesn't need to loop on
> >      Z_BUF_ERROR at all, since it allocates the expected
> >      output buffer in advance (which we can't do since we're
> >      explicitly streaming here)
> >
> >   2. check_object_signature(); the streaming path relies on
> >      the istream interface, which uses read_istream_loose()
> >      for this case. That function uses a similar "is our
> >      output buffer full" check with Z_BUF_ERROR (which is
> >      where I stole it from for this patch!)
> 
> See 692f0bc7 to find who did the fix you stole from, and for what
> kind of breakage the original fix was made.

Heh. I did not dig into it, but actually thought "I'll bet Junio had to
get this right when he wrote the streaming code. No wonder he spotted my
mistake so quickly!".

> By the way, a very similar loop for pack_non_delta istream iterates
> while total_read is smaller than sz, but it does not have the same
> check upon BUF_ERROR to see if we've read everything.

Indeed. Did you find that one by inspection, or did you peek at:

  https://public-inbox.org/git/20130325202114.GD16019@sigill.intra.peff.net/

? :)

-Peff
Junio C Hamano Oct. 31, 2018, 4:44 a.m. UTC | #3
Jeff King <peff@peff.net> writes:

>> See 692f0bc7 to find who did the fix you stole from, and for what
>> kind of breakage the original fix was made.
>
> Heh. I did not dig into it, but actually thought "I'll bet Junio had to
> get this right when he wrote the streaming code. No wonder he spotted my
> mistake so quickly!".
>
>> By the way, a very similar loop for pack_non_delta istream iterates
>> while total_read is smaller than sz, but it does not have the same
>> check upon BUF_ERROR to see if we've read everything.
>
> Indeed. Did you find that one by inspection, or did you peek at:
>
>   https://public-inbox.org/git/20130325202114.GD16019@sigill.intra.peff.net/

I looked for BUF_ERROR in streaming.c and found two instances in
a very similar-looking loop with a subtle difference, and the
difference was due to one of them getting fixed in the past while
the other one was left intact as written at its inception.

I should have looked for that message to read the part below the
three-dash mark.  Or we may want to transplant that comment somehow
to the function so the next person will not be puzzled like I was?
Jeff King Oct. 31, 2018, 5:03 a.m. UTC | #4
On Wed, Oct 31, 2018 at 01:44:25PM +0900, Junio C Hamano wrote:

> Jeff King <peff@peff.net> writes:
> 
> >> See 692f0bc7 to find who did the fix you stole from, and for what
> >> kind of breakage the original fix was made.
> >
> > Heh. I did not dig into it, but actually thought "I'll bet Junio had to
> > get this right when he wrote the streaming code. No wonder he spotted my
> > mistake so quickly!".
> >
> >> By the way, a very similar loop for pack_non_delta istream iterates
> >> while total_read is smaller than sz, but it does not have the same
> >> check upon BUF_ERROR to see if we've read everything.
> >
> > Indeed. Did you find that one by inspection, or did you peek at:
> >
> >   https://public-inbox.org/git/20130325202114.GD16019@sigill.intra.peff.net/
> 
> I looked for BUF_ERROR in streaming.c and found two instances in
> a very similar-looking loop with a subtle difference, and the
> difference was due to one of them getting fixed in the past while
> the other one was left intact as written at its inception.
> 
> I should have looked for that message to read the part below the
> three-dash mark.  Or we may want to transplant that comment somehow
> to the function so the next person will not be puzzled like I was?

Hmm. Reading that function, I am not sure whether it actually needs
fixing.

Might we actually get Z_BUF_ERROR asking for more input if zlib reads to
the end of the pack window? That is probably quite unlikely in practice,
but in theory you could feed a very large buffer for the output and use
a very small pack window.

So I do not think we can use the same logic in that loop. But at the
same time, what prevents use_pack() from getting to the very end of the
pack and saying "I have no bytes left for you"? And then we'd loop
infinitely, feeding zlib nothing.

I'm not sure what the solution is. I do not think this works:

diff --git a/streaming.c b/streaming.c
index d1e6b2dce6..a92a85ed10 100644
--- a/streaming.c
+++ b/streaming.c
@@ -394,6 +394,9 @@ static read_method_decl(pack_non_delta)
 		mapped = use_pack(st->u.in_pack.pack, &window,
 				  st->u.in_pack.pos, &st->z.avail_in);
 
+		if (!st->z.avail_in)
+			break;
+
 		st->z.next_out = (unsigned char *)buf + total_read;
 		st->z.avail_out = sz - total_read;
 		st->z.next_in = mapped;

because we may have read to the very end but still have bytes to output.

Though hrm. I think use_pack() will always tell us about the trailing
20-byte hash in the "avail" window. Which means we should never
legitimately get to 0 there, because it means that either:

  1. We're reading the trailing hash, which cannot possibly be right (in
     most cases I'd expect zlib to barf at that point anyway, but of
     course it's possible to have a hash that is valid zlib data ;) ).

  2. We're truncated _before_ the hash, so we really did read to EOF,
     and there are no more bytes. I suspect we may actually detect this
     case upon opening the pack (since we do peek at the trailer then),
     but again that could be fooled by coincidence.

I guess that's not the whole story, though. use_pack() tries to promise
at least 20 bytes (to simplify some of the other parsing routines). So
we shouldn't actually ever get "0" here. If we really are that close to
the end of the pack, we'd hit this logic in use_pack:

  if (offset > (p->pack_size - the_hash_algo->rawsz))
	die("offset beyond end of packfile (truncated pack?)");

So actually, I think this code is OK as-is. We will always have at least
20 bytes of input, or use_pack() will die.

Phew. I almost just deleted all of the above, because now I think I'm
ready to write that comment you asked for. ;) But I left it since maybe
it makes sense to follow my thought process.

-Peff
Jeff King Oct. 31, 2018, 5:13 a.m. UTC | #5
On Wed, Oct 31, 2018 at 01:03:39AM -0400, Jeff King wrote:

> Phew. I almost just deleted all of the above, because now I think I'm
> ready to write that comment you asked for. ;) But I left it since maybe
> it makes sense to follow my thought process.

So here it is in a more succinct form.

-Peff

-- >8 --
Subject: [PATCH] read_istream_pack_non_delta(): document input handling

Twice now we have scratched our heads about why the loose streaming code
needs the protection added by 692f0bc7ae (avoid infinite loop in
read_istream_loose, 2013-03-25), but the similar code in its pack
counterpart does not.

The short answer is that use_pack() will die before it lets us run out
of bytes. Note that this could mean reading garbage (including the
trailing hash) from the packfile in some cases of corruption, but that's
OK. zlib will notice and complain (and if not, certainly the end result
will not match the object hash we expect).

Let's leave a comment this time to document our findings.

Signed-off-by: Jeff King <peff@peff.net>
---
 streaming.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/streaming.c b/streaming.c
index d1e6b2dce6..ac7c7a22f9 100644
--- a/streaming.c
+++ b/streaming.c
@@ -408,6 +408,15 @@ static read_method_decl(pack_non_delta)
 			st->z_state = z_done;
 			break;
 		}
+
+		/*
+		 * Unlike the loose object case, we do not have to worry here
+		 * about running out of input bytes and spinning infinitely. If
+		 * we get Z_BUF_ERROR due to too few input bytes, then we'll
+		 * replenish them in the next use_pack() call when we loop. If
+		 * we truly hit the end of the pack (i.e., because it's corrupt
+		 * or truncated), then use_pack() catches that and will die().
+		 */
 		if (status != Z_OK && status != Z_BUF_ERROR) {
 			git_inflate_end(&st->z);
 			st->z_state = z_error;
Junio C Hamano Oct. 31, 2018, 5:31 a.m. UTC | #6
Jeff King <peff@peff.net> writes:

> On Wed, Oct 31, 2018 at 01:03:39AM -0400, Jeff King wrote:
>
>> Phew. I almost just deleted all of the above, because now I think I'm
>> ready to write that comment you asked for. ;) But I left it since maybe
>> it makes sense to follow my thought process.
>
> So here it is in a more succinct form.

Thanks.

> +
> +		/*
> +		 * Unlike the loose object case, we do not have to worry here
> +		 * about running out of input bytes and spinning infinitely. If
> +		 * we get Z_BUF_ERROR due to too few input bytes, then we'll
> +		 * replenish them in the next use_pack() call when we loop. If
> +		 * we truly hit the end of the pack (i.e., because it's corrupt
> +		 * or truncated), then use_pack() catches that and will die().
> +		 */
>  		if (status != Z_OK && status != Z_BUF_ERROR) {
>  			git_inflate_end(&st->z);
>  			st->z_state = z_error;

Reads well.  Will apply.

Patch

diff --git a/sha1-file.c b/sha1-file.c
index dd0b6aa873..2daf7d9935 100644
--- a/sha1-file.c
+++ b/sha1-file.c
@@ -2199,7 +2199,8 @@ static int check_stream_sha1(git_zstream *stream,
 	 * see the comment in unpack_sha1_rest for details.
 	 */
 	while (total_read <= size &&
-	       (status == Z_OK || status == Z_BUF_ERROR)) {
+	       (status == Z_OK ||
+		(status == Z_BUF_ERROR && !stream->avail_out))) {
 		stream->next_out = buf;
 		stream->avail_out = sizeof(buf);
 		if (size - total_read < stream->avail_out)
diff --git a/t/t1450-fsck.sh b/t/t1450-fsck.sh
index 3421f12e8a..b5677d26a4 100755
--- a/t/t1450-fsck.sh
+++ b/t/t1450-fsck.sh
@@ -683,6 +683,25 @@ test_expect_success 'fsck detects trailing loose garbage (large blob)' '
 	test_i18ngrep "garbage.*$blob" out
 '
 
+test_expect_success 'fsck detects truncated loose object' '
+	# make it big enough that we know we will truncate in the data
+	# portion, not the header
+	test-tool genrandom truncate 4096 >file &&
+	blob=$(git hash-object -w file) &&
+	file=$(sha1_file $blob) &&
+	test_when_finished "remove_object $blob" &&
+	test_copy_bytes 1024 <"$file" >tmp &&
+	rm "$file" &&
+	mv -f tmp "$file" &&
+
+	# check both regular and streaming code paths
+	test_must_fail git fsck 2>out &&
+	test_i18ngrep corrupt.*$blob out &&
+
+	test_must_fail git -c core.bigfilethreshold=128 fsck 2>out &&
+	test_i18ngrep corrupt.*$blob out
+'
+
 # for each of type, we have one version which is referenced by another object
 # (and so while unreachable, not dangling), and another variant which really is
 # dangling.