| Message ID | 370b608f9007abe9c0562d76894e2475d19867a1.1709843663.git.ps@pks.im (mailing list archive) |
|---|---|
| State | Accepted |
| Commit | fffd981ec2d7965733a4a15f9071e3734f7654a6 |
| Series | reftable/block: fix binary search over restart counter |
Patrick Steinhardt <ps@pks.im> writes:

> The consequence is that `binsearch()` essentially always returns 0,
> indicating to us that we must start searching right at the beginning of
> the block. This works by chance because we now always do a linear scan
> from the start of the block, and thus we would still end up finding the
> desired record. But needless to say, this makes the optimization quite
> useless.
>
> Fix this bug by returning whether the current key is smaller than the
> searched key. As the current behaviour was correct, it is not possible to
> write a test. Furthermore it is also not really possible to demonstrate
> in a benchmark that this fix speeds up seeking records.

This is an amusing bug. I wonder if we inherited it from the original
implementation---this was imported from jgit, right?

Thanks for a detailed write-up. The "it is a fix, but the breakage is
well hidden and cannot be observed only by checking for correctness"
aspect of the bug deserves the unusually large "number of paragraphs
explaining the change divided by number of changed lines" ratio ;-).

Applied.

> Signed-off-by: Patrick Steinhardt <ps@pks.im>
> ---
>  reftable/block.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/reftable/block.c b/reftable/block.c
> index 72eb73b380..1663030386 100644
> --- a/reftable/block.c
> +++ b/reftable/block.c
> @@ -302,7 +302,7 @@ static int restart_key_less(size_t idx, void *args)
>
>  	result = strbuf_cmp(&a->key, &rkey);
>  	strbuf_release(&rkey);
> -	return result;
> +	return result < 0;
>  }
>
>  void block_iter_copy_from(struct block_iter *dest, struct block_iter *src)
Junio C Hamano <gitster@pobox.com> writes:

> Patrick Steinhardt <ps@pks.im> writes:
>
>> The consequence is that `binsearch()` essentially always returns 0,
>> indicating to us that we must start searching right at the beginning of
>> the block. This works by chance because we now always do a linear scan
>> from the start of the block, and thus we would still end up finding the
>> desired record. But needless to say, this makes the optimization quite
>> useless.
>>
>> Fix this bug by returning whether the current key is smaller than the
>> searched key. As the current behaviour was correct, it is not possible to
>> write a test. Furthermore it is also not really possible to demonstrate
>> in a benchmark that this fix speeds up seeking records.
>
> This is an amusing bug.

Having said all that, I have to wonder if the problem is the custom
implementation of binsearch() that the reftable/basics.c file has, not
this particular comparison callback. It places an unusual expectation on
the comparison function, unlike bsearch(3), whose compar(a, b) is
expected to return an answer with the same sign as "a - b".

I just checked the binary search loops we have in the core part of the
system, like the one in hash-lookup.c (which takes advantage of the
random and uniform nature of hashed values to converge faster than
log2) and the ones in builtin/pack-objects.c (both of which are
absolutely bog-standard). Luckily, we do not use such an unusual
convention (well, we avoid the overhead of compar callbacks to begin
with, so it is a bit of an apples-to-oranges comparison).
On Thu, Mar 07, 2024 at 04:40:46PM -0800, Junio C Hamano wrote:
> Junio C Hamano <gitster@pobox.com> writes:
>
> > Patrick Steinhardt <ps@pks.im> writes:
> >
> >> The consequence is that `binsearch()` essentially always returns 0,
> >> indicating to us that we must start searching right at the beginning of
> >> the block. This works by chance because we now always do a linear scan
> >> from the start of the block, and thus we would still end up finding the
> >> desired record. But needless to say, this makes the optimization quite
> >> useless.
> >
> >> Fix this bug by returning whether the current key is smaller than the
> >> searched key. As the current behaviour was correct, it is not possible to
> >> write a test. Furthermore it is also not really possible to demonstrate
> >> in a benchmark that this fix speeds up seeking records.
> >
> > This is an amusing bug.
>
> Having said all that, I have to wonder if the problem is the custom
> implementation of binsearch() that the reftable/basics.c file has, not
> this particular comparison callback. It places an unusual expectation on
> the comparison function, unlike bsearch(3), whose compar(a, b) is
> expected to return an answer with the same sign as "a - b".
>
> I just checked the binary search loops we have in the core part of the
> system, like the one in hash-lookup.c (which takes advantage of the
> random and uniform nature of hashed values to converge faster than
> log2) and the ones in builtin/pack-objects.c (both of which are
> absolutely bog-standard). Luckily, we do not use such an unusual
> convention (well, we avoid the overhead of compar callbacks to begin
> with, so it is a bit of an apples-to-oranges comparison).

Very true, this behaviour caught me by surprise as well, and I do think
it's quite easy to get wrong. Now I would've understood it if
`binsearch()` were able to handle and forward errors to the caller by
passing -1.
And I almost thought that was the case, because `restart_key_less()` can
indeed fail, and it would return a negative value if so. But that error
return code is then not taken as an indicator of failure; instead it
will cause us to treat the current value as smaller than the comparison
key. But we do know to bubble the error up via the passed-in args by
setting `args->error = -1`. Funny thing though: I just now noticed that
we check for `args.error` _before_ we call `binsearch()`. Oops.

I will send a follow-up patch that addresses these issues.

Patrick
Patrick Steinhardt <ps@pks.im> writes:

> But we do know to bubble the error up via the passed-in args by
> setting `args->error = -1`. Funny thing though: I just now noticed that
> we check for `args.error` _before_ we call `binsearch()`. Oops.
>
> I will send a follow-up patch that addresses these issues.

Thanks, that is doubly amusing ;-)
diff --git a/reftable/block.c b/reftable/block.c
index 72eb73b380..1663030386 100644
--- a/reftable/block.c
+++ b/reftable/block.c
@@ -302,7 +302,7 @@ static int restart_key_less(size_t idx, void *args)
 
 	result = strbuf_cmp(&a->key, &rkey);
 	strbuf_release(&rkey);
-	return result;
+	return result < 0;
 }
 
 void block_iter_copy_from(struct block_iter *dest, struct block_iter *src)
Records store their keys prefix-compressed. As many records will share a
common prefix (e.g. "refs/heads/"), this can end up saving quite a bit
of disk space. The downside of this is that it is not possible to just
seek into the middle of a block and consume the corresponding record
because it may depend on prefixes read from preceding records.

To help with this use case, the reftable format writes every n'th record
without using prefix compression, which is called a "restart". The list
of restarts is stored at the end of each block so that a reader can
figure out entry points at which to read a full record without having to
read all preceding records. This allows us to do a binary search over
the records in a block when searching for a particular key by iterating
through the restarts until we have found the section in which our record
must be located. From there on we perform a linear search to locate the
desired record.

This mechanism is broken though. In `block_reader_seek()` we call
`binsearch()` over the count of restarts in the current block. The
function we pass to compare records with each other computes the key at
the current index and then compares it to our search key by calling
`strbuf_cmp()`, returning its result directly. But `binsearch()` expects
us to return a truthy value that indicates whether the current index is
smaller than the searched-for key. And unless our key exactly matches
the value at the restart counter, we always end up returning a truthy
value.

The consequence is that `binsearch()` essentially always returns 0,
indicating to us that we must start searching right at the beginning of
the block. This works by chance because we now always do a linear scan
from the start of the block, and thus we would still end up finding the
desired record. But needless to say, this makes the optimization quite
useless.

Fix this bug by returning whether the current key is smaller than the
searched key.
As the current behaviour was correct, it is not possible to write a
test. Furthermore, it is also not really possible to demonstrate in a
benchmark that this fix speeds up seeking records. This may cause the
reader to question whether this binary search makes sense in the first
place if it doesn't even help with performance. But it would end up
helping if we were to read a reftable with a much larger block size.
Blocks can be up to 16MB in size, in which case it will become much more
important to avoid the linear scan. We are not yet ready to read or
write such large blocks though, so we have to live without a benchmark
demonstrating this.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
---
 reftable/block.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)