
[RFC] Modify fetch-pack to no longer die on error?

Message ID 20200724223844.2723397-1-jonathantanmy@google.com (mailing list archive)
State New, archived

Commit Message

Jonathan Tan July 24, 2020, 10:38 p.m. UTC
We've had a few instances where a lazy fetch in a partial clone fails,
leading to a fatal error, when the calling code could have easily
recovered - in other words, the severity of the bug should have been just a
wasted fetch instead of stopping the whole command. Part of the issue
(and possibly the whole issue - I haven't looked at this beyond
fetch-pack yet) is that fetch-pack dies whenever it encounters an
error, so I took a look at fixing that.

(Note that fetch-pack is sometimes run through a remote helper, meaning
that we could leave the die() invocations in and just make sure that we
handle failure in the separate process correctly. But when the promisor
remote is HTTP protocol v2 or SSH protocol v0/v2, this is not true -
fetch_pack() is run in-process.)

I think the best way for easy authorship and review is to convert each
possibly-dying function into a function that either returns a
possibly-null error string or returns success/failure and writes the
error string into an "out" parameter. In this way, the change is rather
mechanical and should be easy to review. In the patch below I chose the
former approach, and I modified 2 functions (one that returns no value
and one that returns a value) to demonstrate what it would look like.
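
To make the two shapes concrete, here is what consume_shallow_list()
(converted in the patch below) would look like in each style; the
second signature is only a sketch of the "out" parameter variant:

  /* former: return a possibly-NULL error string */
  static error_string consume_shallow_list(struct fetch_pack_args *args,
                                           struct packet_reader *reader);

  /* latter: return success/failure, write the error to an "out" parameter */
  static int consume_shallow_list(struct fetch_pack_args *args,
                                  struct packet_reader *reader,
                                  char **err);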

We could also take this further and have a "struct error" for type
safety and macros - e.g. THROW() to return a "struct error", TRY() to
execute what's inside the parentheses and return the error if there is
one, and OR_DIE() to execute what's inside the parentheses and die if
there is an error.
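
A rough sketch of what I have in mind (the struct layout and the
temporary variable names are illustrative; xstrfmt() and die() are the
existing helpers):

  struct error {
          char *message; /* NULL means success */
  };

  #define THROW(...) ((struct error){ .message = xstrfmt(__VA_ARGS__) })

  /* propagate the error out of the enclosing function, if there is one */
  #define TRY(expr) do { \
          struct error try_err_ = (expr); \
          if (try_err_.message) \
                  return try_err_; \
  } while (0)

  /* for the outermost callers that still want to die on failure */
  #define OR_DIE(expr) do { \
          struct error or_die_err_ = (expr); \
          if (or_die_err_.message) \
                  die("%s", or_die_err_.message); \
  } while (0)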

Any opinions before I continue working on this?

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
---
 fetch-pack.c | 78 +++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 53 insertions(+), 25 deletions(-)

Comments

Junio C Hamano July 24, 2020, 11:07 p.m. UTC | #1
Jonathan Tan <jonathantanmy@google.com> writes:

> We've had a few instances where a lazy fetch in a partial clone fails,
> leading to a fatal error, when the calling code could have easily
> recovered - in other words, the severity of the bug should have been just a
> wasted fetch instead of stopping the whole command. Part of the issue
> (and possibly the whole issue - I haven't looked at this beyond
> fetch-pack yet) is that fetch-pack dies whenever it encounters an
> error, so I took a look at fixing that.
>
> (Note that fetch-pack is sometimes run through a remote helper, meaning
> that we could leave the die() invocations in and just make sure that we
> handle failure in the separate process correctly. But when the promisor
> remote is HTTP protocol v2 or SSH protocol v0/v2, this is not true -
> fetch_pack() is run in-process.)

That is easily fixable ;-)  After all, we are talking about network
latency once we "lazily fetch" during a main operation, so why not
take advantage of the isolation that the operating system gives us?

> I think the best way for easy authorship and review is to convert each
> possibly-dying function into a function that either returns a
> possibly-null error string or returns success/failure and writes the
> error string into an "out" parameter.

Ugly.
Junio C Hamano July 24, 2020, 11:11 p.m. UTC | #2
Jonathan Tan <jonathantanmy@google.com> writes:

> I think the best way for easy authorship and review is to convert each
> possibly-dying function into a function that either returns a
> possibly-null error string or returns success/failure and writes the
> error string into an "out" parameter.

Ugly.

Rather, if we were to pass an extra parameter through the entire
callchain, I'd rather see that parameter to be what to do when we
see an error.  Depending on that, the callee can die(), return
error(), or silently return -1, or do something else.
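
For example (a sketch; the enum and helper names are made up for
illustration):

  enum on_error {
          ON_ERROR_DIE,    /* die() with the message */
          ON_ERROR_REPORT, /* error() and return -1 */
          ON_ERROR_SILENT  /* silently return -1 */
  };

  static int report_failure(enum on_error action, const char *msg)
  {
          if (action == ON_ERROR_DIE)
                  die("%s", msg);
          if (action == ON_ERROR_REPORT)
                  return error("%s", msg);
          return -1;
  }

so a callee would do "return report_failure(action, _("git
fetch-pack: expected shallow list"));" instead of calling die()
directly.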

I really do not think it is a good idea to pass a text string back to
the caller, which invites sentence lego and untranslatable mess.
Jeff King July 25, 2020, 9:41 p.m. UTC | #3
On Fri, Jul 24, 2020 at 04:11:31PM -0700, Junio C Hamano wrote:

> Jonathan Tan <jonathantanmy@google.com> writes:
> 
> > I think the best way for easy authorship and review is to convert each
> > possibly-dying function into a function that either returns a
> > possibly-null error string or returns success/failure and writes the
> > error string into an "out" parameter.
> 
> Ugly.
> 
> Rather, if we were to pass an extra parameter through the entire
> callchain, I'd rather see that parameter to be what to do when we
> see an error.  Depending on that, the callee can die(), return
> error(), or silently return -1, or do something else.

Agreed. I had a proposal a while ago to pass a "struct error_context"
which was basically a function pointer and any accompanying data. That
lets the caller instruct us to die(), call error(), stuff the error in a
strbuf owned by the caller, and so on.

I think it was this one:

  https://lore.kernel.org/git/20160927191955.mympqgylrxhkp24n@sigill.intra.peff.net/
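
Roughly, the shape was something like this (sketching from memory
based on the description above, not the exact code in that patch):

  struct error_context {
          void (*error_fn)(struct error_context *ctx,
                           const char *fmt, va_list ap);
          void *data; /* accompanying data, e.g. a strbuf to fill */
  };

  /* callees report through the context instead of calling die() */
  static int report(struct error_context *ctx, const char *fmt, ...)
  {
          va_list ap;

          va_start(ap, fmt);
          ctx->error_fn(ctx, fmt, ap);
          va_end(ap);
          return -1;
  }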

I still think it's a good idea, but the hard part is lib-ifying all of
the functions that call die() to instead return an error up the stack
(and handle cleanup, etc). Which is why I never really pushed it
further. But if we're going to be lib-ifying such calls anyway, I think
it's nice to do this flexible thing (from their perspective it's no more
work to trigger the callback than it is to call error() and return).

That said, I do think for this particular case, your "just run it in a
sub-process" suggestion may be simpler and more robust.

-Peff
Junio C Hamano July 25, 2020, 11:01 p.m. UTC | #4
Jeff King <peff@peff.net> writes:

> I think it was this one:
>
>   https://lore.kernel.org/git/20160927191955.mympqgylrxhkp24n@sigill.intra.peff.net/
>
> I still think it's a good idea, but the hard part is lib-ifying all of
> the functions that call die() to instead return an error up the stack
> (and handle cleanup, etc). Which is why I never really pushed it
> further. But if we're going to be lib-ifying such calls anyway, I think
> it's nice to do this flexible thing (from their perspective it's no more
> work to trigger the callback than it is to call error() and return).

Yeah, I almost 80%+ agree.

The remainder of 20% is that I am not so convinced that (fmt_string
+ va_list), which isn't usable for much beyond generating
human-readable error messages, is enough.  It is certainly a good
enough replacement for (and much better than) the approach to
prepare an error string in a "struct strbuf err" that was passed by
the caller, but I am not sure if that is a good bar to aim to pass
to begin with ;-).

> That said, I do think for this particular case, your "just run it in a
> sub-process" suggestion may be simpler and more robust.

For this particular case, with the performance and all, I agree that
the stupid and robust approach would be the best.
Jeff King July 27, 2020, 5:11 p.m. UTC | #5
On Sat, Jul 25, 2020 at 04:01:25PM -0700, Junio C Hamano wrote:

> Jeff King <peff@peff.net> writes:
> 
> > I think it was this one:
> >
> >   https://lore.kernel.org/git/20160927191955.mympqgylrxhkp24n@sigill.intra.peff.net/
> >
> > I still think it's a good idea, but the hard part is lib-ifying all of
> > the functions that call die() to instead return an error up the stack
> > (and handle cleanup, etc). Which is why I never really pushed it
> > further. But if we're going to be lib-ifying such calls anyway, I think
> > it's nice to do this flexible thing (from their perspective it's no more
> > work to trigger the callback than it is to call error() and return).
> 
> Yeah, I almost 80%+ agree.
> 
> The remainder of 20% is that I am not so convinced that (fmt_string
> + va_list), which isn't usable for much beyond generating
> human-readable error messages, is enough.  It is certainly a good
> enough replacement for (and much better than) the approach to
> prepare an error string in a "struct strbuf err" that was passed by
> the caller, but I am not sure if that is a good bar to aim to pass
> to begin with ;-).

Yeah, I certainly agree that just passing around strings does not
provide as much information as some callers would like. But I think
introducing a system of machine-readable error tokens would be
cumbersome for a lot of cases, to the point that it may tip this from
"seems like an easy enough change" to something that nobody wants to
take on.

One in-between step that would be pretty easy is to record errno when
available (i.e., if error_errno() becomes error_ctx_errno(), then we
just stuff that errno value into the error_context struct). That would
let us reliably propagate errno up the stack, or even give intermediate
callers the opportunity to munge it (e.g., if a ref lookup failed with
EISDIR while opening a file, it might convert that to ENOENT as it
propagates up the stack).
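
I.e., something like this, building on the earlier sketch (names
illustrative):

  struct error_context {
          void (*error_fn)(struct error_context *ctx,
                           const char *fmt, va_list ap);
          void *data;
          int saved_errno; /* the new bit */
  };

  static int error_ctx_errno(struct error_context *ctx,
                             const char *fmt, ...)
  {
          va_list ap;

          ctx->saved_errno = errno; /* capture before it gets clobbered */
          va_start(ap, fmt);
          ctx->error_fn(ctx, fmt, ap);
          va_end(ap);
          return -1;
  }

An intermediate caller could then rewrite ctx->saved_errno (e.g.
EISDIR to ENOENT) before letting the error continue up the stack.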

-Peff
Jonathan Tan July 28, 2020, 7:23 p.m. UTC | #6
> Jeff King <peff@peff.net> writes:
> 
> > I think it was this one:
> >
> >   https://lore.kernel.org/git/20160927191955.mympqgylrxhkp24n@sigill.intra.peff.net/
> >
> > I still think it's a good idea, but the hard part is lib-ifying all of
> > the functions that call die() to instead return an error up the stack
> > (and handle cleanup, etc). Which is why I never really pushed it
> > further. But if we're going to be lib-ifying such calls anyway, I think
> > it's nice to do this flexible thing (from their perspective it's no more
> > work to trigger the callback than it is to call error() and return).
> 
> Yeah, I almost 80%+ agree.

Thanks Peff for the pointer. That looks like a good idea, and more
comprehensive than my suggested approach.

> The remainder of 20% is that I am not so convinced that (fmt_string
> + va_list), which isn't usable for much beyond generating
> human-readable error messages, is enough.  It is certainly a good
> enough replacement for (and much better than) the approach to
> prepare an error string in a "struct strbuf err" that was passed by
> the caller, but I am not sure if that is a good bar to aim to pass
> to begin with ;-).

I think that functions that inform their callers about different errors
already do so through return values, but in any case I think that Peff's
idea can be extended once we need it (e.g. by adding error reporting
functions that can fall back to the string function if the special
function is not present).

> > That said, I do think for this particular case, your "just run it in a
> > sub-process" suggestion may be simpler and more robust.
> 
> For this particular case, with the performance and all, I agree that
> the stupid and robust approach would be the best.

I'm concerned that we will be painting ourselves into a corner here - I
have been appreciating the richer interface that a C call provides us,
compared to sub-processes where we have to communicate through a single
input stream and a single output stream. For example, "wanted-refs" and
packfile URIs (a.k.a. CDN offloading) were made much simpler because it
was quite easy to communicate back the updated hashes for refs (for
"wanted-refs") and more than one lockfile (for packfile URIs). If we
were doing it with "fetch" in remote helpers (like in HTTP protocol v0),
we would have to augment the textual protocol to communicate back the
updated hashes and the list of lockfiles - doable, but more cumbersome.
(That is also why I only implemented those for protocol v2, because
protocol v2 had "stateless-connect" for HTTP.)

Being limited to sub-processes also stopped me from implementing an
improvement to how fetch-pack calls index-pack - when I added debugging
information (list of refs and hashes) when writing the .promisor file,
what I wanted to do was to pass a callback (function pointer + data
pointer) to index-pack to tell it what to write to .promisor, but I
couldn't do that. I also couldn't write the list to the stdin of
index-pack, because the stdin was already used to communicate the
packfile. I could write the list to a temporary file and pass that name
to index-pack, but by that stage I thought it would be simpler to just
have fetch-pack write the .promisor file itself, even though I think
that this should be an index-pack concern, not a fetch-pack one.

(Also, I think that debugging within a process is easier than debugging
across processes, but that might not be a concern that other people
share.)

In this particular case, when doing the lazy fetch, we pass a special
flag that skips some steps - among other things, negotiation. Admittedly
we could add this flag to "fetch" (or whatever command we're going to
invoke to implement this sub-process part), or add the equivalent
general-purpose flags/configs (e.g. fetch.negotiationAlgorithm=null)
(but I haven't looked thoroughly at what flags would be needed). These
seem surmountable and I can't think of anything unsurmountable, but I'm
still worried that we're painting ourselves into a corner.
Jeff King July 28, 2020, 8:08 p.m. UTC | #7
On Tue, Jul 28, 2020 at 12:23:50PM -0700, Jonathan Tan wrote:

> > For this particular case, with the performance and all, I agree that
> > the stupid and robust approach would be the best.
> 
> I'm concerned that we will be painting ourselves into a corner here - I
> have been appreciating the richer interface that a C call provides us,
> compared to sub-processes where we have to communicate through a single
> input stream and a single output stream. For example, "wanted-refs" and
> [...]

Yeah, that's a compelling reason. I'd have thought for this use case you
could just say "hey, make sure these objects exist", which doesn't
require a lot of communication. But often when I think things like that
and start coding, it turns out to be much more complicated. So I am
perfectly willing to believe you that it doesn't apply here. And even
if it did, you're right that we may run into other spots that do need to
pass back more information, but benefit from more lib-ified code that
doesn't die().

Just to play devil's advocate for a moment...

> (Also, I think that debugging within a process is easier than debugging
> across processes, but that might not be a concern that other people
> share.)

This is definitely true sometimes, but I think is sometimes the
opposite. When we push things out to a sub-process, then the interface
between the two processes has to be well-defined (e.g., writing results
to a file with a particular format). And that can make debugging easier,
because you can pick up from that intermediate state (munging it in the
middle, or even generating it from scratch for testing).

Likewise, that can result in a more flexible and robust system from the
perspective of users. If we had invented "git log" first, we probably
wouldn't have "git rev-list | git diff-tree --stdin" at all. But having
that as two separate tools is sometimes useful for people doing things
_besides_ log, since it gives different entry points to the code.

That said, I think I could buy the argument that "fetch" works pretty
well as a basic building block for users. It's pretty rare to actually
use fetch-pack as a distinct operation. This is all a monolith vs module
tradeoff question, and the tradeoff around modularity for fetch isn't
that compelling.

-Peff
Jonathan Tan July 29, 2020, 6:53 p.m. UTC | #8
> On Tue, Jul 28, 2020 at 12:23:50PM -0700, Jonathan Tan wrote:
> 
> > > For this particular case, with the performance and all, I agree that
> > > the stupid and robust approach would be the best.
> > 
> > I'm concerned that we will be painting ourselves into a corner here - I
> > have been appreciating the richer interface that a C call provides us,
> > compared to sub-processes where we have to communicate through a single
> > input stream and a single output stream. For example, "wanted-refs" and
> > [...]
> 
> Yeah, that's a compelling reason. I'd have thought for this use case you
> could just say "hey, make sure these objects exist", which doesn't
> require a lot of communication. But often when I think things like that
> and start coding, it turns out to be much more complicated. So I am
> perfectly willing to believe you that it doesn't apply here. And even
> if it did, you're right that we may run into other spots that do need to
> pass back more information, but benefit from more lib-ified code that
> doesn't die().

Thanks. Just to be clear, I can't think of any hard stops to
implementing lazy fetch in a sub-process right now, but (as you said)
while programming I could discover something, or we could need something
in the future.

> Just to play devil's advocate for a moment...
> 
> > (Also, I think that debugging within a process is easier than debugging
> > across processes, but that might not be a concern that other people
> > share.)
> 
> This is definitely true sometimes, but I think is sometimes the
> opposite. When we push things out to a sub-process, then the interface
> between the two processes has to be well-defined (e.g., writing results
> to a file with a particular format). And that can make debugging easier,
> because you can pick up from that intermediate state (munging it in the
> middle, or even generating it from scratch for testing).

Well, unless there is some sort of interactivity like in remote helpers
:-)

> Likewise, that can result in a more flexible and robust system from the
> perspective of users. If we had invented "git log" first, we probably
> wouldn't have "git rev-list | git diff-tree --stdin" at all. But having
> that as two separate tools is sometimes useful for people doing things
> _besides_ log, since it gives different entry points to the code.

That's true. And the lazy fetch might be one of them - after looking at
the code, I think that I can get to where I want just by implementing a
null negotiator, which will be useful for end users. (In particular,
simulating a lazy fetch might be useful sometimes, and re-fetching a
packfile could be a crude way of repairing object corruption.)
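
A null negotiator should be tiny - something like this, against the
fetch_negotiator interface (a sketch, not compiled; the init function
name is whatever we would hook up to fetch.negotiationAlgorithm):

  static void known_common(struct fetch_negotiator *n, struct commit *c)
  {
          /* do nothing */
  }

  static void add_tip(struct fetch_negotiator *n, struct commit *c)
  {
          /* do nothing */
  }

  static const struct object_id *next(struct fetch_negotiator *n)
  {
          return NULL; /* never send a "have" line */
  }

  static int ack(struct fetch_negotiator *n, struct commit *c)
  {
          return 0;
  }

  static void release(struct fetch_negotiator *n)
  {
          /* nothing to free */
  }

  void null_negotiator_init(struct fetch_negotiator *negotiator)
  {
          negotiator->known_common = known_common;
          negotiator->add_tip = add_tip;
          negotiator->next = next;
          negotiator->ack = ack;
          negotiator->release = release;
  }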

> That said, I think I could buy the argument that "fetch" works pretty
> well as a basic building block for users. It's pretty rare to actually
> use fetch-pack as a distinct operation. This is all a monolith vs module
> tradeoff question, and the tradeoff around modularity for fetch isn't
> that compelling.

If we are going the sub-process route, I was planning to use "fetch" as
the sub-process, actually, not "fetch-pack" - among other things,
"fetch" allows us to specify a fetch negotiator. So it seems like you
are saying that if we had to use "fetch-pack", you have no problem with
libifying it and calling it in the same process, but if we can use
"fetch", we should use it as a sub-process?

In any case, I'll look into using "fetch" as a sub-process and see if it
works.
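
(For illustration, the sub-process version could boil down to
something like this - a sketch only, with remote_name/oids assumed to
be in scope and the null negotiator config from above:)

  static int lazy_fetch_in_subprocess(const char *remote_name,
                                      const struct object_id *oids,
                                      int oid_nr)
  {
          struct child_process child = CHILD_PROCESS_INIT;
          int i;

          child.git_cmd = 1;
          argv_array_pushl(&child.args,
                           "-c", "fetch.negotiationAlgorithm=null",
                           "fetch", remote_name, "--no-tags", NULL);
          for (i = 0; i < oid_nr; i++)
                  argv_array_push(&child.args, oid_to_hex(&oids[i]));

          /* a failed lazy fetch is no longer fatal to the caller */
          if (run_command(&child))
                  return error(_("lazy fetch from %s failed"), remote_name);
          return 0;
  }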
Junio C Hamano July 29, 2020, 7:02 p.m. UTC | #9
Jeff King <peff@peff.net> writes:

> Just to play devil's advocate for a moment...
>
>> (Also, I think that debugging within a process is easier than debugging
>> across processes, but that might not be a concern that other people
>> share.)
>
> This is definitely true sometimes, but I think is sometimes the
> opposite. When we push things out to a sub-process, then the interface
> between the two processes has to be well-defined (e.g., writing results
> to a file with a particular format).

I agree that it forces us to be more disciplined to separate them
into two processes and clearly define the way they communicate with
each other.  When done poorly, it tends to make a bigger mess.

> That said, I think I could buy the argument that "fetch" works pretty
> well as a basic building block for users. It's pretty rare to actually
> use fetch-pack as a distinct operation. This is all a monolith vs module
> tradeoff question, and the tradeoff around modularity for fetch isn't
> that compelling.

Yup.  

The "more naive and stupid way to run them as subprocesses, just
like we do for cURL based transport" was primarily because I thought
it would be easier to get right (as the transport API and interface
has long been established) than hunting down all the callers of
die() and touch all the functions involved in the callchain.  I
won't complain or block if a harder-but-more-correct approach is
taken ;-)
Jeff King July 29, 2020, 7:29 p.m. UTC | #10
On Wed, Jul 29, 2020 at 11:53:47AM -0700, Jonathan Tan wrote:

> > This is definitely true sometimes, but I think is sometimes the
> > opposite. When we push things out to a sub-process, then the interface
> > between the two processes has to be well-defined (e.g., writing results
> > to a file with a particular format). And that can make debugging easier,
> > because you can pick up from that intermediate state (munging it in the
> > middle, or even generating it from scratch for testing).
> 
> Well, unless there is some sort of interactivity like in remote helpers
> :-)

Yes, debugging remote helpers is its own special hell. :)

> > That said, I think I could buy the argument that "fetch" works pretty
> > well as a basic building block for users. It's pretty rare to actually
> > use fetch-pack as a distinct operation. This is all a monolith vs module
> > tradeoff question, and the tradeoff around modularity for fetch isn't
> > that compelling.
> 
> If we are going the sub-process route, I was planning to use "fetch" as
> the sub-process, actually, not "fetch-pack" - among other things,
> "fetch" allows us to specify a fetch negotiator. So it seems like you
> are saying that if we had to use "fetch-pack", you have no problem with
> libifying it and calling it in the same process, but if we can use
> "fetch", we should use it as a sub-process?

No, I just meant that in general fetching is a monolithic operation from
the user's perspective, and we don't often need to break it down further.
So if you have to build more options into "fetch" (that _could_ have
been broken out into separate sub-commands) I don't think anybody will
be sad.

I guess that is kind of orthogonal to your original problem, though,
which is "should fetch be triggered in-process by other processes". So
one could still make the argument that whatever features are needed by
that calling code (e.g., "use a different negotiation scheme") might
also be needed by other (script) callers of git-fetch, so there may be
some benefit to users in making the cross-process interface more rich
(of course in the case of negotiation schemes, it is not "make it more
rich now" but "earlier efforts to make it rich are now paying off").

-Peff
Jonathan Tan July 29, 2020, 10:55 p.m. UTC | #11
> The "more naive and stupid way to run them as subprocesses, just
> like we do for cURL based transport" was primarily because I thought
> it would be easier to get right (as the transport API and interface
> has long been established) than hunting down all the callers of
> die() and touch all the functions involved in the callchain.  I
> won't complain or block if a harder-but-more-correct approach is
> taken ;-)

Well, I was prepared to do the harder approach (as you can see from my
first email) but it will be harder not just for the author but also for
the reviewer and future readers of the code; now I think that I should
try the subprocess way first, especially since there is another part of
Git that will do that too (the maintenance prefetch [1]). I'll try that
and write back about how it goes.

[1] https://lore.kernel.org/git/3165b8916d2d80bf72dac6596a42c871ccd4cbe6.1595527000.git.gitgitgadget@gmail.com/

Patch

diff --git a/fetch-pack.c b/fetch-pack.c
index 80fb3bd899..20a7e05ea8 100644
--- a/fetch-pack.c
+++ b/fetch-pack.c
@@ -24,6 +24,8 @@ 
 #include "fsck.h"
 #include "shallow.h"
 
+typedef char * error_string;
+
 static int transfer_unpack_limit = -1;
 static int fetch_unpack_limit = -1;
 static int unpack_limit = 100;
@@ -136,8 +138,8 @@  enum ack_type {
 	ACK_ready
 };
 
-static void consume_shallow_list(struct fetch_pack_args *args,
-				 struct packet_reader *reader)
+static error_string consume_shallow_list(struct fetch_pack_args *args,
+					 struct packet_reader *reader)
 {
 	if (args->stateless_rpc && args->deepen) {
 		/* If we sent a depth we will get back "duplicate"
@@ -149,41 +151,54 @@  static void consume_shallow_list(struct fetch_pack_args *args,
 				continue;
 			if (starts_with(reader->line, "unshallow "))
 				continue;
-			die(_("git fetch-pack: expected shallow list"));
+			return xstrdup(_("git fetch-pack: expected shallow list"));
 		}
 		if (reader->status != PACKET_READ_FLUSH)
-			die(_("git fetch-pack: expected a flush packet after shallow list"));
+			return xstrdup(_("git fetch-pack: expected a flush packet after shallow list"));
 	}
+	return NULL;
 }
 
-static enum ack_type get_ack(struct packet_reader *reader,
-			     struct object_id *result_oid)
+static error_string get_ack(struct packet_reader *reader,
+			    enum ack_type *result_ack,
+			    struct object_id *result_oid)
 {
 	int len;
 	const char *arg;
 
 	if (packet_reader_read(reader) != PACKET_READ_NORMAL)
-		die(_("git fetch-pack: expected ACK/NAK, got a flush packet"));
+		return xstrdup(_("git fetch-pack: expected ACK/NAK, got a flush packet"));
 	len = reader->pktlen;
 
-	if (!strcmp(reader->line, "NAK"))
-		return NAK;
+	if (!strcmp(reader->line, "NAK")) {
+		*result_ack = NAK;
+		return NULL;
+	}
 	if (skip_prefix(reader->line, "ACK ", &arg)) {
 		const char *p;
 		if (!parse_oid_hex(arg, result_oid, &p)) {
 			len -= p - reader->line;
-			if (len < 1)
-				return ACK;
-			if (strstr(p, "continue"))
-				return ACK_continue;
-			if (strstr(p, "common"))
-				return ACK_common;
-			if (strstr(p, "ready"))
-				return ACK_ready;
-			return ACK;
+			if (len < 1) {
+				*result_ack = ACK;
+				return NULL;
+			}
+			if (strstr(p, "continue")) {
+				*result_ack = ACK_continue;
+				return NULL;
+			}
+			if (strstr(p, "common")) {
+				*result_ack = ACK_common;
+				return NULL;
+			}
+			if (strstr(p, "ready")) {
+				*result_ack = ACK_ready;
+				return NULL;
+			}
+			*result_ack = ACK;
+			return NULL;
 		}
 	}
-	die(_("git fetch-pack: expected ACK/NAK, got '%s'"), reader->line);
+	return xstrfmt(_("git fetch-pack: expected ACK/NAK, got '%s'"), reader->line);
 }
 
 static void send_request(struct fetch_pack_args *args,
@@ -394,7 +409,8 @@  static int find_common(struct fetch_negotiator *negotiator,
 		print_verbose(args, "have %s", oid_to_hex(oid));
 		in_vain++;
 		if (flush_at <= ++count) {
-			int ack;
+			enum ack_type ack;
+			error_string err;
 
 			packet_buf_flush(&req_buf);
 			send_request(args, fd[1], &req_buf);
@@ -409,9 +425,11 @@  static int find_common(struct fetch_negotiator *negotiator,
 			if (!args->stateless_rpc && count == INITIAL_FLUSH)
 				continue;
 
-			consume_shallow_list(args, &reader);
+			if ((err = consume_shallow_list(args, &reader)))
+				die("%s", err);
 			do {
-				ack = get_ack(&reader, result_oid);
+				if ((err = get_ack(&reader, &ack, result_oid)))
+					die("%s", err);
 				if (ack)
 					print_verbose(args, _("got %s %d %s"), "ack",
 						      ack, oid_to_hex(result_oid));
@@ -457,6 +475,9 @@  static int find_common(struct fetch_negotiator *negotiator,
 						got_ready = 1;
 					break;
 					}
+				case NAK:
+					/* nothing */
+					break;
 				}
 			} while (ack);
 			flushes--;
@@ -481,10 +502,17 @@  static int find_common(struct fetch_negotiator *negotiator,
 	}
 	strbuf_release(&req_buf);
 
-	if (!got_ready || !no_done)
-		consume_shallow_list(args, &reader);
+	if (!got_ready || !no_done) {
+		error_string err;
+		if ((err = consume_shallow_list(args, &reader)))
+			die("%s", err);
+	}
 	while (flushes || multi_ack) {
-		int ack = get_ack(&reader, result_oid);
+		error_string err;
+		enum ack_type ack;
+
+		if ((err = get_ack(&reader, &ack, result_oid)))
+			die("%s", err);
 		if (ack) {
 			print_verbose(args, _("got %s (%d) %s"), "ack",
 				      ack, oid_to_hex(result_oid));