Message ID | 20210629205305.7100-6-e@80x24.org (mailing list archive) |
---|---|
State | New, archived |
Am 29.06.21 um 22:53 schrieb Eric Wong: > This saves 8K per `struct object_directory', meaning it saves > around 800MB in my case involving 100K alternates (half or more > of those alternates are unlikely to hold loose objects). > > This is implemented in two parts: a generic, allocation-free > `cbtree' and the `oidtree' wrapper on top of it. The latter > provides allocation using alloc_state as a memory pool to > improve locality and reduce free(3) overhead. > > Unlike oid-array, the crit-bit tree does not require sorting. > Performance is bound by the key length, for oidtree that is > fixed at sizeof(struct object_id). There's no need to have > 256 oidtrees to mitigate the O(n log n) overhead like we did > with oid-array. > > Being a prefix trie, it is natively suited for expanding short > object IDs via prefix-limited iteration in > `find_short_object_filename'. Sounds like a good match. > > On my busy workstation, p4205 performance seems to be roughly > unchanged (+/-8%). Startup with 100K total alternates with no > loose objects seems around 10-20% faster on a hot cache. > (800MB in memory savings means more memory for the kernel FS > cache). > > The generic cbtree implementation does impose some extra > overhead for oidtree in that it uses memcmp(3) on > "struct object_id" so it wastes cycles comparing 12 extra bytes > on SHA-1 repositories. I've not yet explored reducing this > overhead, but I expect there are many places in our code base > where we'd want to investigate this. > > More information on crit-bit trees: https://cr.yp.to/critbit.html > > v2: make oidtree test hash-agnostic > > Signed-off-by: Eric Wong <e@80x24.org> > --- > Makefile | 3 + > alloc.c | 6 ++ > alloc.h | 1 + > cbtree.c | 167 ++++++++++++++++++++++++++++++++++++++++ > cbtree.h | 56 ++++++++++++++ > object-file.c | 17 ++-- > object-name.c | 28 +++---- > object-store.h | 5 +- > oidtree.c | 94 ++++++++++++++++++++++ > oidtree.h | 29 +++++++ > t/helper/test-oidtree.c | 47 +++++++++++ > t/helper/test-tool.c | 1 + > t/helper/test-tool.h | 1 + > t/t0069-oidtree.sh | 52 +++++++++++++ > 14 files changed, 478 insertions(+), 29 deletions(-) > create mode 100644 cbtree.c > create mode 100644 cbtree.h > create mode 100644 oidtree.c > create mode 100644 oidtree.h > create mode 100644 t/helper/test-oidtree.c > create mode 100755 t/t0069-oidtree.sh > > diff --git a/Makefile b/Makefile > index c3565fc0f8..a1525978fb 100644 > --- a/Makefile > +++ b/Makefile > @@ -722,6 +722,7 @@ TEST_BUILTINS_OBJS += test-mergesort.o > TEST_BUILTINS_OBJS += test-mktemp.o > TEST_BUILTINS_OBJS += test-oid-array.o > TEST_BUILTINS_OBJS += test-oidmap.o > +TEST_BUILTINS_OBJS += test-oidtree.o > TEST_BUILTINS_OBJS += test-online-cpus.o > TEST_BUILTINS_OBJS += test-parse-options.o > TEST_BUILTINS_OBJS += test-parse-pathspec-file.o > @@ -845,6 +846,7 @@ LIB_OBJS += branch.o > LIB_OBJS += bulk-checkin.o > LIB_OBJS += bundle.o > LIB_OBJS += cache-tree.o > +LIB_OBJS += cbtree.o > LIB_OBJS += chdir-notify.o > LIB_OBJS += checkout.o > LIB_OBJS += chunk-format.o > @@ -940,6 +942,7 @@ LIB_OBJS += object.o > LIB_OBJS += oid-array.o > LIB_OBJS += oidmap.o > LIB_OBJS += oidset.o > +LIB_OBJS += oidtree.o > LIB_OBJS += pack-bitmap-write.o > LIB_OBJS += pack-bitmap.o > LIB_OBJS += pack-check.o > diff --git a/alloc.c b/alloc.c > index 957a0af362..ca1e178c5a 100644 > --- a/alloc.c > +++ b/alloc.c > @@ -14,6 +14,7 @@ > #include "tree.h" > #include "commit.h" > #include "tag.h" > +#include "oidtree.h" > #include "alloc.h" > > #define BLOCKING 1024 > @@ -123,6 +124,11 
@@ void *alloc_commit_node(struct repository *r) > return c; > } > > +void *alloc_from_state(struct alloc_state *alloc_state, size_t n) > +{ > + return alloc_node(alloc_state, n); > +} > + Why extend alloc.c instead of using mem-pool.c? (I don't know which fits better, but when you say "memory pool" and not use mem-pool.c I just have to ask..) > diff --git a/oidtree.c b/oidtree.c > new file mode 100644 > index 0000000000..c1188d8f48 > --- /dev/null > +++ b/oidtree.c > @@ -0,0 +1,94 @@ > +/* > + * A wrapper around cbtree which stores oids > + * May be used to replace oid-array for prefix (abbreviation) matches > + */ > +#include "oidtree.h" > +#include "alloc.h" > +#include "hash.h" > + > +struct oidtree_node { > + /* n.k[] is used to store "struct object_id" */ > + struct cb_node n; > +}; > + > +struct oidtree_iter_data { > + oidtree_iter fn; > + void *arg; > + size_t *last_nibble_at; > + int algo; > + uint8_t last_byte; > +}; > + > +void oidtree_destroy(struct oidtree *ot) > +{ > + if (ot->mempool) { > + clear_alloc_state(ot->mempool); > + FREE_AND_NULL(ot->mempool); > + } > + oidtree_init(ot); > +} > + > +void oidtree_insert(struct oidtree *ot, const struct object_id *oid) > +{ > + struct oidtree_node *on; > + > + if (!ot->mempool) > + ot->mempool = allocate_alloc_state(); > + if (!oid->algo) > + BUG("oidtree_insert requires oid->algo"); > + > + on = alloc_from_state(ot->mempool, sizeof(*on) + sizeof(*oid)); > + oidcpy_with_padding((struct object_id *)on->n.k, oid); > + > + /* > + * n.b. we shouldn't get duplicates, here, but we'll have > + * a small leak that won't be freed until oidtree_destroy > + */ Why shouldn't we get duplicates? That depends on the usage of oidtree, right? The current user is fine because we avoid reading the same loose object directory twice using the loose_objects_subdir_seen bitmap. The leak comes from the allocation above, which is not used in case we already have the key in the oidtree. So we need memory for all candidates, not just the inserted candidates. That's probably acceptable in most use cases. We can do better by keeping track of the unnecessary allocation in struct oidtree and recycling it at the next insert attempt, however. That way we'd only waste at most one slot. > + cb_insert(&ot->t, &on->n, sizeof(*oid)); > +} > + > +int oidtree_contains(struct oidtree *ot, const struct object_id *oid) > +{ > + struct object_id k = { 0 }; > + size_t klen = sizeof(k); > + oidcpy_with_padding(&k, oid); Why initialize k; isn't oidcpy_with_padding() supposed to overwrite it completely? > + > + if (oid->algo == GIT_HASH_UNKNOWN) { > + k.algo = hash_algo_by_ptr(the_hash_algo); > + klen -= sizeof(oid->algo); > + } This relies on the order of the members hash and algo in struct object_id to find a matching hash if we don't actually know algo. It also relies on the absence of padding after algo. Would something like this make sense? BUILD_ASSERT_OR_ZERO(offsetof(struct object_id, algo) + sizeof(k.algo) == sizeof(k)); And why set k.algo to some arbitrary value if we ignore it anyway? I.e. why not keep it GIT_HASH_UNKNOWN, as set by oidcpy_with_padding()? > + > + return cb_lookup(&ot->t, (const uint8_t *)&k, klen) ? 
1 : 0; > +} > + > +static enum cb_next iter(struct cb_node *n, void *arg) > +{ > + struct oidtree_iter_data *x = arg; > + const struct object_id *oid = (const struct object_id *)n->k; > + > + if (x->algo != GIT_HASH_UNKNOWN && x->algo != oid->algo) > + return CB_CONTINUE; > + > + if (x->last_nibble_at) { > + if ((oid->hash[*x->last_nibble_at] ^ x->last_byte) & 0xf0) > + return CB_CONTINUE; > + } > + > + return x->fn(oid, x->arg); > +} > + > +void oidtree_each(struct oidtree *ot, const struct object_id *oid, > + size_t oidhexlen, oidtree_iter fn, void *arg) > +{ > + size_t klen = oidhexlen / 2; > + struct oidtree_iter_data x = { 0 }; > + > + x.fn = fn; > + x.arg = arg; > + x.algo = oid->algo; > + if (oidhexlen & 1) { > + x.last_byte = oid->hash[klen]; > + x.last_nibble_at = &klen; > + } > + cb_each(&ot->t, (const uint8_t *)oid, klen, iter, &x); > +} Clamp oidhexlen at GIT_MAX_HEXSZ? Or die? René
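To make the recycling idea above concrete, here is a rough sketch of how a spare node could be kept and reused on the next insert attempt. It assumes a hypothetical `spare` pointer added to `struct oidtree`; the member name and overall shape are illustrative only and not part of the posted patch:

```c
/*
 * Hypothetical sketch only: reuse the node wasted by a duplicate insert.
 * Assumes "struct oidtree" gains a "struct oidtree_node *spare" member.
 */
void oidtree_insert(struct oidtree *ot, const struct object_id *oid)
{
	struct oidtree_node *on;

	if (!ot->mempool)
		ot->mempool = allocate_alloc_state();
	if (!oid->algo)
		BUG("oidtree_insert requires oid->algo");

	if (ot->spare) {	/* recycle a node wasted by an earlier duplicate */
		on = ot->spare;
		ot->spare = NULL;
	} else {
		on = alloc_from_state(ot->mempool, sizeof(*on) + sizeof(*oid));
	}
	oidcpy_with_padding((struct object_id *)on->n.k, oid);

	/* cb_insert returns the existing node on a duplicate; keep ours for later */
	if (cb_insert(&ot->t, &on->n, sizeof(*oid)))
		ot->spare = on;
}
```

With something like that, at most one allocation sits idle at any time, at the cost of one extra pointer per tree.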
On Tue, Jun 29 2021, Eric Wong wrote: > +struct alloc_state; > +struct oidtree { > + struct cb_tree t; s/t/tree/? Too short a name for an interface IMO. > + struct alloc_state *mempool; > +}; > + > +#define OIDTREE_INIT { .t = CBTREE_INIT, .mempool = NULL } Let's use designated initilaizers for new code. Just: #define OIDTREE_init { \ .tere = CBTREE_INIT, \ } Will do, no need for the ".mempool = NULL" > +static inline void oidtree_init(struct oidtree *ot) > +{ > + cb_init(&ot->t); > + ot->mempool = NULL; > +} You can use the "memcpy() a blank" trick/idiom here: https://lore.kernel.org/git/patch-2.5-955dbd1693d-20210701T104855Z-avarab@gmail.com/ Also, is this even needed? Why have the "destroy" re-initialize it? > +void oidtree_destroy(struct oidtree *); Maybe s/destroy/release/, or if you actually need that reset behavior oidtree_reset(). We've got > +void oidtree_insert(struct oidtree *, const struct object_id *); > +int oidtree_contains(struct oidtree *, const struct object_id *); > + > +typedef enum cb_next (*oidtree_iter)(const struct object_id *, void *arg); An "arg" name for some arguments, but none for others, if there's a name here call it "data" like you do elswhere? > +void oidtree_each(struct oidtree *, const struct object_id *, > + size_t oidhexlen, oidtree_iter, void *arg); s/oidhexlen/hexsz/, like in git_hash_algo.a > + > +#endif /* OIDTREE_H */ > diff --git a/t/helper/test-oidtree.c b/t/helper/test-oidtree.c > new file mode 100644 > index 0000000000..e0da13eea3 > --- /dev/null > +++ b/t/helper/test-oidtree.c > @@ -0,0 +1,47 @@ > +#include "test-tool.h" > +#include "cache.h" > +#include "oidtree.h" > + > +static enum cb_next print_oid(const struct object_id *oid, void *data) > +{ > + puts(oid_to_hex(oid)); > + return CB_CONTINUE; > +} > + > +int cmd__oidtree(int argc, const char **argv) > +{ > + struct oidtree ot = OIDTREE_INIT; > + struct strbuf line = STRBUF_INIT; > + int nongit_ok; > + int algo = GIT_HASH_UNKNOWN; > + > + setup_git_directory_gently(&nongit_ok); > + > + while (strbuf_getline(&line, stdin) != EOF) { > + const char *arg; > + struct object_id oid; > + > + if (skip_prefix(line.buf, "insert ", &arg)) { > + if (get_oid_hex_any(arg, &oid) == GIT_HASH_UNKNOWN) > + die("insert not a hexadecimal oid: %s", arg); > + algo = oid.algo; > + oidtree_insert(&ot, &oid); > + } else if (skip_prefix(line.buf, "contains ", &arg)) { > + if (get_oid_hex(arg, &oid)) > + die("contains not a hexadecimal oid: %s", arg); > + printf("%d\n", oidtree_contains(&ot, &oid)); > + } else if (skip_prefix(line.buf, "each ", &arg)) { > + char buf[GIT_MAX_HEXSZ + 1] = { '0' }; > + memset(&oid, 0, sizeof(oid)); > + memcpy(buf, arg, strlen(arg)); > + buf[hash_algos[algo].hexsz] = 0; = '\0' if it's the intent to have a NULL-terminated string is more readable. > + get_oid_hex_any(buf, &oid); > + oid.algo = algo; > + oidtree_each(&ot, &oid, strlen(arg), print_oid, NULL); > + } else if (!strcmp(line.buf, "destroy")) > + oidtree_destroy(&ot); > + else > + die("unknown command: %s", line.buf); Missing braces. 
> + } > + return 0; > +} > diff --git a/t/helper/test-tool.c b/t/helper/test-tool.c > index c5bd0c6d4c..9d37debf28 100644 > --- a/t/helper/test-tool.c > +++ b/t/helper/test-tool.c > @@ -43,6 +43,7 @@ static struct test_cmd cmds[] = { > { "mktemp", cmd__mktemp }, > { "oid-array", cmd__oid_array }, > { "oidmap", cmd__oidmap }, > + { "oidtree", cmd__oidtree }, > { "online-cpus", cmd__online_cpus }, > { "parse-options", cmd__parse_options }, > { "parse-pathspec-file", cmd__parse_pathspec_file }, > diff --git a/t/helper/test-tool.h b/t/helper/test-tool.h > index e8069a3b22..f683a2f59c 100644 > --- a/t/helper/test-tool.h > +++ b/t/helper/test-tool.h > @@ -32,6 +32,7 @@ int cmd__match_trees(int argc, const char **argv); > int cmd__mergesort(int argc, const char **argv); > int cmd__mktemp(int argc, const char **argv); > int cmd__oidmap(int argc, const char **argv); > +int cmd__oidtree(int argc, const char **argv); > int cmd__online_cpus(int argc, const char **argv); > int cmd__parse_options(int argc, const char **argv); > int cmd__parse_pathspec_file(int argc, const char** argv); > diff --git a/t/t0069-oidtree.sh b/t/t0069-oidtree.sh > new file mode 100755 > index 0000000000..0594f57c81 > --- /dev/null > +++ b/t/t0069-oidtree.sh > @@ -0,0 +1,52 @@ > +#!/bin/sh > + > +test_description='basic tests for the oidtree implementation' > +. ./test-lib.sh > + > +echoid () { > + prefix="${1:+$1 }" > + shift > + while test $# -gt 0 > + do > + echo "$1" > + shift > + done | awk -v prefix="$prefix" -v ZERO_OID=$ZERO_OID '{ > + printf("%s%s", prefix, $0); > + need = length(ZERO_OID) - length($0); > + for (i = 0; i < need; i++) > + printf("0"); > + printf "\n"; > + }' > +} Looks fairly easy to do in pure-shell, first of all you don't need a length() on $ZERO_OID, use $(test_oid hexsz) instead. That applies for the awk version too. But once you have that and the N arguments just do a wc -c on the argument, use $(()) to compute the $difference, and a loop with: printf "%s%s%0${difference}d" "$prefix" "$shortoid" "0" > + > +test_expect_success 'oidtree insert and contains' ' > + cat >expect <<EOF && > +0 > +0 > +0 > +1 > +1 > +0 > +EOF use "<<-\EOF" and indent it. > + { > + echoid insert 444 1 2 3 4 5 a b c d e && > + echoid contains 44 441 440 444 4440 4444 > + echo destroy > + } | test-tool oidtree >actual && > + test_cmp expect actual > +' > + > +test_expect_success 'oidtree each' ' > + echoid "" 123 321 321 >expect && > + { > + echoid insert f 9 8 123 321 a b c d e > + echo each 12300 > + echo each 3211 > + echo each 3210 > + echo each 32100 > + echo destroy > + } | test-tool oidtree >actual && > + test_cmp expect actual > +' > + > +test_done
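For reference, the "memcpy() a blank" idiom mentioned above boils down to something like the following sketch (assuming `OIDTREE_INIT` is kept around); whether it is worth using here depends on how the reroll ends up initializing the struct:

```c
static inline void oidtree_init(struct oidtree *ot)
{
	struct oidtree blank = OIDTREE_INIT;	/* the designated-initializer constant */
	memcpy(ot, &blank, sizeof(*ot));	/* one copy instead of per-member assignments */
}
```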
René Scharfe <l.s.r@web.de> wrote: > Am 29.06.21 um 22:53 schrieb Eric Wong: > > --- a/alloc.c > > +++ b/alloc.c > > @@ -14,6 +14,7 @@ > > #include "tree.h" > > #include "commit.h" > > #include "tag.h" > > +#include "oidtree.h" > > #include "alloc.h" > > > > #define BLOCKING 1024 > > @@ -123,6 +124,11 @@ void *alloc_commit_node(struct repository *r) > > return c; > > } > > > > +void *alloc_from_state(struct alloc_state *alloc_state, size_t n) > > +{ > > + return alloc_node(alloc_state, n); > > +} > > + > > Why extend alloc.c instead of using mem-pool.c? (I don't know which fits > better, but when you say "memory pool" and not use mem-pool.c I just have > to ask..) I didn't know mem-pool.c existed :x (And I've always known about alloc.c). Perhaps we could merge them in another series to avoid further confusion. > > +void oidtree_insert(struct oidtree *ot, const struct object_id *oid) > > +{ > > + struct oidtree_node *on; > > + > > + if (!ot->mempool) > > + ot->mempool = allocate_alloc_state(); > > + if (!oid->algo) > > + BUG("oidtree_insert requires oid->algo"); > > + > > + on = alloc_from_state(ot->mempool, sizeof(*on) + sizeof(*oid)); > > + oidcpy_with_padding((struct object_id *)on->n.k, oid); > > + > > + /* > > + * n.b. we shouldn't get duplicates, here, but we'll have > > + * a small leak that won't be freed until oidtree_destroy > > + */ > > Why shouldn't we get duplicates? That depends on the usage of oidtree, > right? The current user is fine because we avoid reading the same loose > object directory twice using the loose_objects_subdir_seen bitmap. Yes, it reflects the current caller. > The leak comes from the allocation above, which is not used in case we > already have the key in the oidtree. So we need memory for all > candidates, not just the inserted candidates. That's probably > acceptable in most use cases. Yes, I think the small, impossible-due-to-current-usage leak is an acceptable trade off. > We can do better by keeping track of the unnecessary allocation in > struct oidtree and recycling it at the next insert attempt, however. > That way we'd only waste at most one slot. It'd involve maintaining a free list; which may be better suited to being in alloc_state or mem_pool. That would also increase the size of a struct *somewhere* and add a small amount of code complexity, too. > > + cb_insert(&ot->t, &on->n, sizeof(*oid)); > > +} > > + > > +int oidtree_contains(struct oidtree *ot, const struct object_id *oid) > > +{ > > + struct object_id k = { 0 }; > > + size_t klen = sizeof(k); > > + oidcpy_with_padding(&k, oid); > > Why initialize k; isn't oidcpy_with_padding() supposed to overwrite it > completely? Ah, I only added oidcpy_with_padding later into the development of this patch. > > + > > + if (oid->algo == GIT_HASH_UNKNOWN) { > > + k.algo = hash_algo_by_ptr(the_hash_algo); > > + klen -= sizeof(oid->algo); > > + } > > This relies on the order of the members hash and algo in struct > object_id to find a matching hash if we don't actually know algo. It > also relies on the absence of padding after algo. Would something like > this make sense? > > BUILD_ASSERT_OR_ZERO(offsetof(struct object_id, algo) + sizeof(k.algo) == sizeof(k)); Maybe... I think a static assertion that object_id.hash be the first element of "struct object_id" is definitely needed, at least. > And why set k.algo to some arbitrary value if we ignore it anyway? I.e. > why not keep it GIT_HASH_UNKNOWN, as set by oidcpy_with_padding()? Good point, shortening klen would've been all that was needed. 
> > +void oidtree_each(struct oidtree *ot, const struct object_id *oid, > > + size_t oidhexlen, oidtree_iter fn, void *arg) > > +{ > > + size_t klen = oidhexlen / 2; > > + struct oidtree_iter_data x = { 0 }; > > + > > + x.fn = fn; > > + x.arg = arg; > > + x.algo = oid->algo; > > + if (oidhexlen & 1) { > > + x.last_byte = oid->hash[klen]; > > + x.last_nibble_at = &klen; > > + } > > + cb_each(&ot->t, (const uint8_t *)oid, klen, iter, &x); > > +} > > Clamp oidhexlen at GIT_MAX_HEXSZ? Or die? I think an assertion would be enough, here. init_object_disambiguation already clamps to the_hash_algo->hexsz
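Putting those two points together, a sketch of `oidtree_contains()` with the layout assertions and without touching `k.algo` might look like this; it is meant as a drop-in variant of the function in oidtree.c, and the exact set of assertions is still under discussion:

```c
int oidtree_contains(struct oidtree *ot, const struct object_id *oid)
{
	struct object_id k;
	size_t klen = sizeof(k);

	oidcpy_with_padding(&k, oid);	/* fully overwrites k, no zero-init needed */

	if (oid->algo == GIT_HASH_UNKNOWN)
		klen -= sizeof(oid->algo);	/* compare the hash bytes only */

	/*
	 * The prefix trick above assumes "hash" leads the struct and
	 * "algo" ends it; BUILD_ASSERT_OR_ZERO evaluates to 0, so klen
	 * is unchanged and the checks happen at compile time.
	 */
	klen += BUILD_ASSERT_OR_ZERO(offsetof(struct object_id, hash) == 0);
	klen += BUILD_ASSERT_OR_ZERO(offsetof(struct object_id, algo) +
				     sizeof(k.algo) == sizeof(k));

	return cb_lookup(&ot->t, (const uint8_t *)&k, klen) ? 1 : 0;
}
```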
Ævar Arnfjörð Bjarmason <avarab@gmail.com> wrote: > > On Tue, Jun 29 2021, Eric Wong wrote: > > > +struct alloc_state; > > +struct oidtree { > > + struct cb_tree t; > > s/t/tree/? Too short a name for an interface IMO. Done. I was keeping `t' to match agl's published version (and it remains that way in cbtree.[ch]) > > + struct alloc_state *mempool; > > +}; > > + > > +#define OIDTREE_INIT { .t = CBTREE_INIT, .mempool = NULL } > > Let's use designated initilaizers for new code. Just: > > #define OIDTREE_init { \ > .tere = CBTREE_INIT, \ > } > > Will do, no need for the ".mempool = NULL" > > +static inline void oidtree_init(struct oidtree *ot) > > +{ > > + cb_init(&ot->t); > > + ot->mempool = NULL; > > +} > > You can use the "memcpy() a blank" trick/idiom here: > https://lore.kernel.org/git/patch-2.5-955dbd1693d-20210701T104855Z-avarab@gmail.com/ > > Also, is this even needed? Why have the "destroy" re-initialize it? I'm using mem_pool, now. With the way mem_pool_init works, I've decided to do away with OIDTREE_INIT and only use oidtree_init (and lazy-malloc the entire loose_objects_cache) > > +void oidtree_destroy(struct oidtree *); > > Maybe s/destroy/release/, or if you actually need that reset behavior > oidtree_reset(). We've got I'm renaming it oidtree_clear to match oid_array_clear. > > +void oidtree_insert(struct oidtree *, const struct object_id *); > > +int oidtree_contains(struct oidtree *, const struct object_id *); > > + > > +typedef enum cb_next (*oidtree_iter)(const struct object_id *, void *arg); > > An "arg" name for some arguments, but none for others, if there's a name > here call it "data" like you do elswhere? OK, using "data". To reduce noise, I prefer to only name variables in prototypes if the usage can't be easily inferred from its type and function name. > > +void oidtree_each(struct oidtree *, const struct object_id *, > > + size_t oidhexlen, oidtree_iter, void *arg); > > s/oidhexlen/hexsz/, like in git_hash_algo.a done > > +++ b/t/helper/test-oidtree.c > > @@ -0,0 +1,47 @@ > > +#include "test-tool.h" > > +#include "cache.h" > > +#include "oidtree.h" > > + > > +static enum cb_next print_oid(const struct object_id *oid, void *data) > > +{ > > + puts(oid_to_hex(oid)); > > + return CB_CONTINUE; > > +} > > + > > +int cmd__oidtree(int argc, const char **argv) > > +{ > > + struct oidtree ot = OIDTREE_INIT; > > + struct strbuf line = STRBUF_INIT; > > + int nongit_ok; > > + int algo = GIT_HASH_UNKNOWN; > > + > > + setup_git_directory_gently(&nongit_ok); > > + > > + while (strbuf_getline(&line, stdin) != EOF) { > > + const char *arg; > > + struct object_id oid; > > + > > + if (skip_prefix(line.buf, "insert ", &arg)) { > > + if (get_oid_hex_any(arg, &oid) == GIT_HASH_UNKNOWN) > > + die("insert not a hexadecimal oid: %s", arg); > > + algo = oid.algo; > > + oidtree_insert(&ot, &oid); > > + } else if (skip_prefix(line.buf, "contains ", &arg)) { > > + if (get_oid_hex(arg, &oid)) > > + die("contains not a hexadecimal oid: %s", arg); > > + printf("%d\n", oidtree_contains(&ot, &oid)); > > + } else if (skip_prefix(line.buf, "each ", &arg)) { > > + char buf[GIT_MAX_HEXSZ + 1] = { '0' }; > > + memset(&oid, 0, sizeof(oid)); > > + memcpy(buf, arg, strlen(arg)); > > + buf[hash_algos[algo].hexsz] = 0; > > = '\0' if it's the intent to have a NULL-terminated string is more > readable. 
done > > + get_oid_hex_any(buf, &oid); > > + oid.algo = algo; > > + oidtree_each(&ot, &oid, strlen(arg), print_oid, NULL); > > + } else if (!strcmp(line.buf, "destroy")) > > + oidtree_destroy(&ot); > > + else > > + die("unknown command: %s", line.buf); > > Missing braces. Added. > > + } > > + return 0; > > +} > > diff --git a/t/helper/test-tool.c b/t/helper/test-tool.c > > index c5bd0c6d4c..9d37debf28 100644 > > --- a/t/helper/test-tool.c > > +++ b/t/helper/test-tool.c > > @@ -43,6 +43,7 @@ static struct test_cmd cmds[] = { > > { "mktemp", cmd__mktemp }, > > { "oid-array", cmd__oid_array }, > > { "oidmap", cmd__oidmap }, > > + { "oidtree", cmd__oidtree }, > > { "online-cpus", cmd__online_cpus }, > > { "parse-options", cmd__parse_options }, > > { "parse-pathspec-file", cmd__parse_pathspec_file }, > > diff --git a/t/helper/test-tool.h b/t/helper/test-tool.h > > index e8069a3b22..f683a2f59c 100644 > > --- a/t/helper/test-tool.h > > +++ b/t/helper/test-tool.h > > @@ -32,6 +32,7 @@ int cmd__match_trees(int argc, const char **argv); > > int cmd__mergesort(int argc, const char **argv); > > int cmd__mktemp(int argc, const char **argv); > > int cmd__oidmap(int argc, const char **argv); > > +int cmd__oidtree(int argc, const char **argv); > > int cmd__online_cpus(int argc, const char **argv); > > int cmd__parse_options(int argc, const char **argv); > > int cmd__parse_pathspec_file(int argc, const char** argv); > > diff --git a/t/t0069-oidtree.sh b/t/t0069-oidtree.sh > > new file mode 100755 > > index 0000000000..0594f57c81 > > --- /dev/null > > +++ b/t/t0069-oidtree.sh > > @@ -0,0 +1,52 @@ > > +#!/bin/sh > > + > > +test_description='basic tests for the oidtree implementation' > > +. ./test-lib.sh > > + > > +echoid () { > > + prefix="${1:+$1 }" > > + shift > > + while test $# -gt 0 > > + do > > + echo "$1" > > + shift > > + done | awk -v prefix="$prefix" -v ZERO_OID=$ZERO_OID '{ > > + printf("%s%s", prefix, $0); > > + need = length(ZERO_OID) - length($0); > > + for (i = 0; i < need; i++) > > + printf("0"); > > + printf "\n"; > > + }' > > +} > > Looks fairly easy to do in pure-shell, first of all you don't need a > length() on $ZERO_OID, use $(test_oid hexsz) instead. That applies for > the awk version too. Ah, I didn't know about test_oid, using it, now. > But once you have that and the N arguments just do a wc -c on the > argument, use $(()) to compute the $difference, and a loop with: > > printf "%s%s%0${difference}d" "$prefix" "$shortoid" "0" I also wanted to avoid repeated 'wc -c' and figured awk was portable enough since we use it elsewhere in tests. I've now noticed "${#var}" is portable and we're already relying on it in packetize(), so I'm using that. > > + > > +test_expect_success 'oidtree insert and contains' ' > > + cat >expect <<EOF && > > +0 > > +0 > > +0 > > +1 > > +1 > > +0 > > +EOF > > use "<<-\EOF" and indent it. done Thanks all for the reviews.
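Based on the replies above, the mem-pool-backed reroll would presumably look roughly like the following sketch, written against mem-pool.h's `mem_pool_init()`/`mem_pool_discard()`; the details of the actual v3 may differ:

```c
#include "mem-pool.h"
#include "cbtree.h"

struct oidtree {
	struct cb_tree tree;		/* renamed from "t" per review */
	struct mem_pool mem_pool;
};

void oidtree_init(struct oidtree *ot)
{
	cb_init(&ot->tree);
	mem_pool_init(&ot->mem_pool, 0);
}

void oidtree_clear(struct oidtree *ot)	/* renamed from oidtree_destroy */
{
	if (ot) {
		mem_pool_discard(&ot->mem_pool, 0);
		oidtree_init(ot);
	}
}
```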
On 29/06/2021 22:53, Eric Wong wrote: > [...snip...] > diff --git a/oidtree.c b/oidtree.c > new file mode 100644 > index 0000000000..c1188d8f48 > --- /dev/null > +++ b/oidtree.c > @@ -0,0 +1,94 @@ > +/* > + * A wrapper around cbtree which stores oids > + * May be used to replace oid-array for prefix (abbreviation) matches > + */ > +#include "oidtree.h" > +#include "alloc.h" > +#include "hash.h" > + > +struct oidtree_node { > + /* n.k[] is used to store "struct object_id" */ > + struct cb_node n; > +}; > + > [... snip ...] > + > +void oidtree_insert(struct oidtree *ot, const struct object_id *oid) > +{ > + struct oidtree_node *on; > + > + if (!ot->mempool) > + ot->mempool = allocate_alloc_state(); > + if (!oid->algo) > + BUG("oidtree_insert requires oid->algo"); > + > + on = alloc_from_state(ot->mempool, sizeof(*on) + sizeof(*oid)); > + oidcpy_with_padding((struct object_id *)on->n.k, oid); I think this object_id cast introduced undefined behaviour - here's my layperson's interepretation of what's going on (full UBSAN output is pasted below): cb_node.k is a uint8_t[], and hence can be 1-byte aligned (on my machine: offsetof(struct cb_node, k) == 21). We're casting its pointer to "struct object_id *", and later try to access object_id.hash within oidcpy_with_padding. My compiler assumes that an object_id pointer needs to be 4-byte aligned, and reading from a misaligned pointer means we hit undefined behaviour. (I think the 4-byte alignment requirement comes from the fact that object_id's largest member is an int?) I'm not sure what an elegant and idiomatic fix might be - IIUC it's hard to guarantee misaligned access can't happen with a flex array that's being used for arbitrary data (you would presumably have to declare it as an array of whatever the largest supported type is, so that you can guarantee correct alignment even when cbtree is used with that type) - which might imply that k needs to be declared as a void pointer? That in turn would make cbtree.c harder to read. 
Anyhow, here's the UBSAN output from t0000 running against next:

hash.h:277:14: runtime error: member access within misaligned address 0x7fcb31c4103d for type 'struct object_id', which requires 4 byte alignment
0x7fcb31c4103d: note: pointer points here
 5a 5a 5a 5a 5a 5a 5a 5a  5a 5a 5a 5a 5a 5a 5a 5a  5a 5a 5a 5a 5a 5a 5a 5a  5a 5a 5a 5a 5a 5a 5a 5a
 ^
    #0 0xc76d9d in oidcpy_with_padding hash.h:277:14
    #1 0xc768f5 in oidtree_insert oidtree.c:44:2
    #2 0xc418e3 in append_loose_object object-file.c:2398:2
    #3 0xc3fbdc in for_each_file_in_obj_subdir object-file.c:2316:9
    #4 0xc41785 in odb_loose_cache object-file.c:2424:2
    #5 0xc50336 in find_short_object_filename object-name.c:103:16
    #6 0xc50e04 in repo_find_unique_abbrev_r object-name.c:712:2
    #7 0xc519a9 in repo_find_unique_abbrev object-name.c:727:2
    #8 0x9b6ce2 in diff_abbrev_oid diff.c:4208:10
    #9 0x9f13d0 in fill_metainfo diff.c:4286:8
    #10 0x9f02d6 in run_diff_cmd diff.c:4322:3
    #11 0x9efbef in run_diff diff.c:4422:3
    #12 0x9c9ac9 in diff_flush_patch diff.c:5765:2
    #13 0x9c9e74 in diff_flush_patch_all_file_pairs diff.c:6246:4
    #14 0x9be33e in diff_flush diff.c:6387:3
    #15 0xb8864e in log_tree_diff_flush log-tree.c:895:2
    #16 0xb8987b in log_tree_diff log-tree.c:933:4
    #17 0xb88c9a in log_tree_commit log-tree.c:988:10
    #18 0x5b4257 in cmd_log_walk log.c:426:8
    #19 0x5b6224 in cmd_show log.c:698:10
    #20 0x42ec48 in run_builtin git.c:461:11
    #21 0x4295e0 in handle_builtin git.c:714:3
    #22 0x42d043 in run_argv git.c:781:4
    #23 0x428cc2 in cmd_main git.c:912:19
    #24 0x7791ce in main common-main.c:52:11
    #25 0x7fcb30aab349 in __libc_start_main (/lib64/libc.so.6+0x24349)
    #26 0x4074a9 in _start start.S:120

SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior hash.h:277:14 in

> +
> +	/*
> +	 * n.b. we shouldn't get duplicates, here, but we'll have
> +	 * a small leak that won't be freed until oidtree_destroy
> +	 */
> +	cb_insert(&ot->t, &on->n, sizeof(*oid));
> +}
> +

ATB,
Andrzej
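One conventional way to avoid this particular report is to never form a `struct object_id *` into `n.k[]` at all and instead copy through an aligned local. Below is a sketch of the insert path only (the read side in `iter()` would need the same treatment, and this does not address the flexible-array nesting issue raised elsewhere in the thread); it is an illustration of the idiom, not a proposed patch:

```c
void oidtree_insert(struct oidtree *ot, const struct object_id *oid)
{
	struct oidtree_node *on;
	struct object_id k;

	if (!ot->mempool)
		ot->mempool = allocate_alloc_state();
	if (!oid->algo)
		BUG("oidtree_insert requires oid->algo");

	on = alloc_from_state(ot->mempool, sizeof(*on) + sizeof(*oid));

	oidcpy_with_padding(&k, oid);	/* fill a properly aligned local first */
	memcpy(on->n.k, &k, sizeof(k));	/* plain byte copy into the flex array */

	cb_insert(&ot->t, &on->n, sizeof(*oid));
}
```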
Am 06.08.21 um 17:31 schrieb Andrzej Hunt: > > > On 29/06/2021 22:53, Eric Wong wrote: >> [...snip...] >> diff --git a/oidtree.c b/oidtree.c >> new file mode 100644 >> index 0000000000..c1188d8f48 >> --- /dev/null >> +++ b/oidtree.c >> @@ -0,0 +1,94 @@ >> +/* >> + * A wrapper around cbtree which stores oids >> + * May be used to replace oid-array for prefix (abbreviation) matches >> + */ >> +#include "oidtree.h" >> +#include "alloc.h" >> +#include "hash.h" >> + >> +struct oidtree_node { >> + /* n.k[] is used to store "struct object_id" */ >> + struct cb_node n; >> +}; >> + >> [... snip ...] >> + >> +void oidtree_insert(struct oidtree *ot, const struct object_id *oid) >> +{ >> + struct oidtree_node *on; >> + >> + if (!ot->mempool) >> + ot->mempool = allocate_alloc_state(); >> + if (!oid->algo) >> + BUG("oidtree_insert requires oid->algo"); >> + >> + on = alloc_from_state(ot->mempool, sizeof(*on) + sizeof(*oid)); >> + oidcpy_with_padding((struct object_id *)on->n.k, oid); > > I think this object_id cast introduced undefined behaviour - here's > my layperson's interepretation of what's going on (full UBSAN output > is pasted below): > > cb_node.k is a uint8_t[], and hence can be 1-byte aligned (on my > machine: offsetof(struct cb_node, k) == 21). We're casting its > pointer to "struct object_id *", and later try to access > object_id.hash within oidcpy_with_padding. My compiler assumes that > an object_id pointer needs to be 4-byte aligned, and reading from a > misaligned pointer means we hit undefined behaviour. (I think the > 4-byte alignment requirement comes from the fact that object_id's > largest member is an int?) > > I'm not sure what an elegant and idiomatic fix might be - IIUC it's > hard to guarantee misaligned access can't happen with a flex array > that's being used for arbitrary data (you would presumably have to > declare it as an array of whatever the largest supported type is, so > that you can guarantee correct alignment even when cbtree is used > with that type) - which might imply that k needs to be declared as a > void pointer? That in turn would make cbtree.c harder to read. C11 has alignas. We could also make the member before the flex array, otherbits, wider, e.g. promote it to uint32_t. A more parsimonious solution would be to turn the int member of struct object_id, algo, into an unsigned char for now and reconsider the issue once we support our 200th algorithm or so. This breaks notes, though. Its GET_PTR_TYPE seems to require struct leaf_node to have 4-byte alignment for some reason. That can be ensured by adding an int member. Anyway, with either of these fixes UBSan is still unhappy about a different issue. Here's a patch for that: --- >8 --- Subject: [PATCH] object-file: use unsigned arithmetic with bit mask 33f379eee6 (make object_directory.loose_objects_subdir_seen a bitmap, 2021-07-07) replaced a wasteful 256-byte array with a 32-byte array and bit operations. The mask calculation shifts a literal 1 of type int left by anything between 0 and 31. UndefinedBehaviorSanitizer doesn't like that and reports: object-file.c:2477:18: runtime error: left shift of 1 by 31 places cannot be represented in type 'int' Make sure to use an unsigned 1 instead to avoid the issue. 
Signed-off-by: René Scharfe <l.s.r@web.de>
---
 object-file.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/object-file.c b/object-file.c
index 3d27dc8dea..a8be899481 100644
--- a/object-file.c
+++ b/object-file.c
@@ -2474,7 +2474,7 @@ struct oidtree *odb_loose_cache(struct object_directory *odb,
 	struct strbuf buf = STRBUF_INIT;
 	size_t word_bits = bitsizeof(odb->loose_objects_subdir_seen[0]);
 	size_t word_index = subdir_nr / word_bits;
-	size_t mask = 1 << (subdir_nr % word_bits);
+	size_t mask = 1u << (subdir_nr % word_bits);
 	uint32_t *bitmap;
 
 	if (subdir_nr < 0 ||
--
2.32.0
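A tiny standalone program showing the difference the `u` suffix makes; the values are arbitrary, chosen so the shift count reaches 31:

```c
#include <stdio.h>

int main(void)
{
	size_t subdir_nr = 255;		/* e.g. oid->hash[0] == 0xff */
	size_t word_bits = 32;		/* bits per bitmap word */

	/*
	 * size_t bad = 1 << (subdir_nr % word_bits);
	 * The literal 1 is a signed int, so shifting it by 31 overflows
	 * and is undefined behaviour on platforms with 32-bit int.
	 */
	size_t mask = 1u << (subdir_nr % word_bits);	/* unsigned: well defined */

	printf("%zx\n", mask);		/* prints 80000000 */
	return 0;
}
```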
René Scharfe <l.s.r@web.de> wrote: > Am 06.08.21 um 17:31 schrieb Andrzej Hunt: > > On 29/06/2021 22:53, Eric Wong wrote: > >> [...snip...] > >> diff --git a/oidtree.c b/oidtree.c > >> new file mode 100644 > >> index 0000000000..c1188d8f48 > >> --- /dev/null > >> +++ b/oidtree.c > >> +struct oidtree_node { > >> + /* n.k[] is used to store "struct object_id" */ > >> + struct cb_node n; > >> +}; > >> + > >> [... snip ...] > >> + > >> +void oidtree_insert(struct oidtree *ot, const struct object_id *oid) > >> +{ > >> + struct oidtree_node *on; > >> + > >> + if (!ot->mempool) > >> + ot->mempool = allocate_alloc_state(); > >> + if (!oid->algo) > >> + BUG("oidtree_insert requires oid->algo"); > >> + > >> + on = alloc_from_state(ot->mempool, sizeof(*on) + sizeof(*oid)); > >> + oidcpy_with_padding((struct object_id *)on->n.k, oid); > > > > I think this object_id cast introduced undefined behaviour - here's > > my layperson's interepretation of what's going on (full UBSAN output > > is pasted below): > > > > cb_node.k is a uint8_t[], and hence can be 1-byte aligned (on my > > machine: offsetof(struct cb_node, k) == 21). We're casting its > > pointer to "struct object_id *", and later try to access > > object_id.hash within oidcpy_with_padding. My compiler assumes that > > an object_id pointer needs to be 4-byte aligned, and reading from a > > misaligned pointer means we hit undefined behaviour. (I think the > > 4-byte alignment requirement comes from the fact that object_id's > > largest member is an int?) I seem to recall struct alignment requirements being architecture-dependent; and x86/x86-64 are the most liberal w.r.t alignment requirements. > > I'm not sure what an elegant and idiomatic fix might be - IIUC it's > > hard to guarantee misaligned access can't happen with a flex array > > that's being used for arbitrary data (you would presumably have to > > declare it as an array of whatever the largest supported type is, so > > that you can guarantee correct alignment even when cbtree is used > > with that type) - which might imply that k needs to be declared as a > > void pointer? That in turn would make cbtree.c harder to read. > > C11 has alignas. We could also make the member before the flex array, > otherbits, wider, e.g. promote it to uint32_t. Ugh, no. cb_node should be as small as possible and (for our current purposes) ->byte could be uint8_t. > A more parsimonious solution would be to turn the int member of struct > object_id, algo, into an unsigned char for now and reconsider the issue > once we support our 200th algorithm or so. Yes, making struct object_id smaller would benefit all git users (at least for the next few centuries :P). > This breaks notes, though. > Its GET_PTR_TYPE seems to require struct leaf_node to have 4-byte > alignment for some reason. That can be ensured by adding an int member. Adding a 4-byte int to leaf_node after shaving 6-bytes off two object_id structs would mean a net savings of 2 bytes; sounds good to me. I don't know much about notes nor the associated code, but I also wonder if crit-bit tree can be used there, too. > Anyway, with either of these fixes UBSan is still unhappy about a > different issue. Here's a patch for that: Thanks <snip>
On Sat, Aug 7, 2021 at 3:51 PM Eric Wong <e@80x24.org> wrote: > > René Scharfe <l.s.r@web.de> wrote: > > Am 06.08.21 um 17:31 schrieb Andrzej Hunt: > > > On 29/06/2021 22:53, Eric Wong wrote: > > >> [...snip...] > > >> diff --git a/oidtree.c b/oidtree.c > > >> new file mode 100644 > > >> index 0000000000..c1188d8f48 > > >> --- /dev/null > > >> +++ b/oidtree.c > > > >> +struct oidtree_node { > > >> + /* n.k[] is used to store "struct object_id" */ > > >> + struct cb_node n; > > >> +}; > > >> + > > >> [... snip ...] > > >> + > > >> +void oidtree_insert(struct oidtree *ot, const struct object_id *oid) > > >> +{ > > >> + struct oidtree_node *on; > > >> + > > >> + if (!ot->mempool) > > >> + ot->mempool = allocate_alloc_state(); > > >> + if (!oid->algo) > > >> + BUG("oidtree_insert requires oid->algo"); > > >> + > > >> + on = alloc_from_state(ot->mempool, sizeof(*on) + sizeof(*oid)); > > >> + oidcpy_with_padding((struct object_id *)on->n.k, oid); > > > > > > I think this object_id cast introduced undefined behaviour - here's > > > my layperson's interepretation of what's going on (full UBSAN output > > > is pasted below): > > > > > > cb_node.k is a uint8_t[], and hence can be 1-byte aligned (on my > > > machine: offsetof(struct cb_node, k) == 21). We're casting its > > > pointer to "struct object_id *", and later try to access > > > object_id.hash within oidcpy_with_padding. My compiler assumes that > > > an object_id pointer needs to be 4-byte aligned, and reading from a > > > misaligned pointer means we hit undefined behaviour. (I think the > > > 4-byte alignment requirement comes from the fact that object_id's > > > largest member is an int?) > > I seem to recall struct alignment requirements being > architecture-dependent; and x86/x86-64 are the most liberal > w.r.t alignment requirements. I think the problem here is not the alignment though, but the fact that the nesting of structs with flexible arrays is forbidden by ISO/IEC 9899:2011 6.7.2.1¶3 that reads : 6.7.2.1 Structure and union specifiers ¶3 A structure or union shall not contain a member with incomplete or function type (hence, a structure shall not contain an instance of itself, but may contain a pointer to an instance of itself), except that the last member of a structure with more than one named member may have incomplete array type; such a structure (and any union containing, possibly recursively, a member that is such a structure) shall not be a member of a structure or an element of an array. and it will throw a warning with clang 12 (-Wflexible-array-extensions) or gcc 11 (-Werror=pedantic) when using DEVOPTS=pedantic My somewhat naive suggestion was to avoid the struct nesting by removing struct oidtree_node and using a struct cb_node directly. Will reply with a small series of patches that fix pedantic related warnings in ew/many-alternate-optim on top of next. Carlo
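The constraint being quoted is easy to reproduce in isolation; compiling something like the following with `-std=c11 -pedantic` (gcc) or `-Wflexible-array-extensions` (clang) triggers the diagnostic:

```c
struct flex {
	int len;
	unsigned char data[];	/* flexible array member */
};

struct wrapper {
	struct flex f;	/* constraint violation: a struct ending in a flexible
			 * array may not itself be a member of another struct */
};

int main(void)
{
	return 0;
}
```

This is the same shape as `struct oidtree_node` wrapping `struct cb_node`, which is why the suggestion above is to drop the wrapper and use `struct cb_node` directly.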
Am 09.08.21 um 03:35 schrieb Carlo Arenas: > On Sat, Aug 7, 2021 at 3:51 PM Eric Wong <e@80x24.org> wrote: >> >> René Scharfe <l.s.r@web.de> wrote: >>> Am 06.08.21 um 17:31 schrieb Andrzej Hunt: >>>> On 29/06/2021 22:53, Eric Wong wrote: >>>>> [...snip...] >>>>> diff --git a/oidtree.c b/oidtree.c >>>>> new file mode 100644 >>>>> index 0000000000..c1188d8f48 >>>>> --- /dev/null >>>>> +++ b/oidtree.c >> >>>>> +struct oidtree_node { >>>>> + /* n.k[] is used to store "struct object_id" */ >>>>> + struct cb_node n; >>>>> +}; >>>>> + >>>>> [... snip ...] >>>>> + >>>>> +void oidtree_insert(struct oidtree *ot, const struct object_id *oid) >>>>> +{ >>>>> + struct oidtree_node *on; >>>>> + >>>>> + if (!ot->mempool) >>>>> + ot->mempool = allocate_alloc_state(); >>>>> + if (!oid->algo) >>>>> + BUG("oidtree_insert requires oid->algo"); >>>>> + >>>>> + on = alloc_from_state(ot->mempool, sizeof(*on) + sizeof(*oid)); >>>>> + oidcpy_with_padding((struct object_id *)on->n.k, oid); >>>> >>>> I think this object_id cast introduced undefined behaviour - here's >>>> my layperson's interepretation of what's going on (full UBSAN output >>>> is pasted below): >>>> >>>> cb_node.k is a uint8_t[], and hence can be 1-byte aligned (on my >>>> machine: offsetof(struct cb_node, k) == 21). We're casting its >>>> pointer to "struct object_id *", and later try to access >>>> object_id.hash within oidcpy_with_padding. My compiler assumes that >>>> an object_id pointer needs to be 4-byte aligned, and reading from a >>>> misaligned pointer means we hit undefined behaviour. (I think the >>>> 4-byte alignment requirement comes from the fact that object_id's >>>> largest member is an int?) >> >> I seem to recall struct alignment requirements being >> architecture-dependent; and x86/x86-64 are the most liberal >> w.r.t alignment requirements. > > I think the problem here is not the alignment though, but the fact that > the nesting of structs with flexible arrays is forbidden by ISO/IEC > 9899:2011 6.7.2.1¶3 that reads : > > 6.7.2.1 Structure and union specifiers > > ¶3 A structure or union shall not contain a member with incomplete or > function type (hence, a structure shall not contain an instance of > itself, but may contain a pointer to an instance of itself), except > that the last member of a structure with more than one named member > may have incomplete array type; such a structure (and any union > containing, possibly recursively, a member that is such a structure) > shall not be a member of a structure or an element of an array. > > and it will throw a warning with clang 12 > (-Wflexible-array-extensions) or gcc 11 (-Werror=pedantic) when using > DEVOPTS=pedantic That's an additional problem. UBSan still reports the alignment error with your patches. René
Am 08.08.21 um 00:49 schrieb Eric Wong: > René Scharfe <l.s.r@web.de> wrote: >> Am 06.08.21 um 17:31 schrieb Andrzej Hunt: >>> On 29/06/2021 22:53, Eric Wong wrote: >>>> [...snip...] >>>> diff --git a/oidtree.c b/oidtree.c >>>> new file mode 100644 >>>> index 0000000000..c1188d8f48 >>>> --- /dev/null >>>> +++ b/oidtree.c > >>>> +struct oidtree_node { >>>> + /* n.k[] is used to store "struct object_id" */ >>>> + struct cb_node n; >>>> +}; >>>> + >>>> [... snip ...] >>>> + >>>> +void oidtree_insert(struct oidtree *ot, const struct object_id *oid) >>>> +{ >>>> + struct oidtree_node *on; >>>> + >>>> + if (!ot->mempool) >>>> + ot->mempool = allocate_alloc_state(); >>>> + if (!oid->algo) >>>> + BUG("oidtree_insert requires oid->algo"); >>>> + >>>> + on = alloc_from_state(ot->mempool, sizeof(*on) + sizeof(*oid)); >>>> + oidcpy_with_padding((struct object_id *)on->n.k, oid); >>> >>> I think this object_id cast introduced undefined behaviour - here's >>> my layperson's interepretation of what's going on (full UBSAN output >>> is pasted below): >>> >>> cb_node.k is a uint8_t[], and hence can be 1-byte aligned (on my >>> machine: offsetof(struct cb_node, k) == 21). We're casting its >>> pointer to "struct object_id *", and later try to access >>> object_id.hash within oidcpy_with_padding. My compiler assumes that >>> an object_id pointer needs to be 4-byte aligned, and reading from a >>> misaligned pointer means we hit undefined behaviour. (I think the >>> 4-byte alignment requirement comes from the fact that object_id's >>> largest member is an int?) > > I seem to recall struct alignment requirements being > architecture-dependent; and x86/x86-64 are the most liberal > w.r.t alignment requirements. > >>> I'm not sure what an elegant and idiomatic fix might be - IIUC it's >>> hard to guarantee misaligned access can't happen with a flex array >>> that's being used for arbitrary data (you would presumably have to >>> declare it as an array of whatever the largest supported type is, so >>> that you can guarantee correct alignment even when cbtree is used >>> with that type) - which might imply that k needs to be declared as a >>> void pointer? That in turn would make cbtree.c harder to read. >> >> C11 has alignas. We could also make the member before the flex array, >> otherbits, wider, e.g. promote it to uint32_t. > > Ugh, no. cb_node should be as small as possible and (for our > current purposes) ->byte could be uint8_t. Well, we can make both byte and otherbits uint16_t. That would require a good comment explaining the reasoning and probably some rework later, but might be the least intrusive solution for now. >> A more parsimonious solution would be to turn the int member of struct >> object_id, algo, into an unsigned char for now and reconsider the issue >> once we support our 200th algorithm or so. > > Yes, making struct object_id smaller would benefit all git users > (at least for the next few centuries :P). True, we're currently using 4 bytes to distinguish between SHA-1 and SHA-256, i.e. to represent a single bit. Reducing the size of struct object_id from 36 bytes to 33 bytes seems quite significant. I don't know how important the 4-byte alignment is, though. cf0983213c (hash: add an algo member to struct object_id, 2021-04-26) doesn't mention it, but the notes code seems to rely on it -- strange. Overall this seems to be a good way to go -- after the next release. René
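For concreteness, here is what the uint16_t variant discussed above would do to the layout on a typical LP64 ABI; the offsets in the comments are the point of the exercise, and this is a sketch for discussion rather than a tested change:

```c
struct cb_node {
	struct cb_node *child[2];	/* offset  0, 16 bytes                  */
	uint16_t byte;			/* offset 16 (currently uint32_t)       */
	uint16_t otherbits;		/* offset 18 (currently uint8_t)        */
	uint8_t k[FLEX_ARRAY];		/* offset 20: 4-byte aligned, unlike the
					 * offset 21 Andrzej measured today     */
};
```

That keeps `sizeof(struct cb_node)` the same while giving `k[]` the 4-byte alignment that `struct object_id` needs as long as its widest member is an `int` (and would still suffice with an `unsigned char` algo).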
diff --git a/Makefile b/Makefile index c3565fc0f8..a1525978fb 100644 --- a/Makefile +++ b/Makefile @@ -722,6 +722,7 @@ TEST_BUILTINS_OBJS += test-mergesort.o TEST_BUILTINS_OBJS += test-mktemp.o TEST_BUILTINS_OBJS += test-oid-array.o TEST_BUILTINS_OBJS += test-oidmap.o +TEST_BUILTINS_OBJS += test-oidtree.o TEST_BUILTINS_OBJS += test-online-cpus.o TEST_BUILTINS_OBJS += test-parse-options.o TEST_BUILTINS_OBJS += test-parse-pathspec-file.o @@ -845,6 +846,7 @@ LIB_OBJS += branch.o LIB_OBJS += bulk-checkin.o LIB_OBJS += bundle.o LIB_OBJS += cache-tree.o +LIB_OBJS += cbtree.o LIB_OBJS += chdir-notify.o LIB_OBJS += checkout.o LIB_OBJS += chunk-format.o @@ -940,6 +942,7 @@ LIB_OBJS += object.o LIB_OBJS += oid-array.o LIB_OBJS += oidmap.o LIB_OBJS += oidset.o +LIB_OBJS += oidtree.o LIB_OBJS += pack-bitmap-write.o LIB_OBJS += pack-bitmap.o LIB_OBJS += pack-check.o diff --git a/alloc.c b/alloc.c index 957a0af362..ca1e178c5a 100644 --- a/alloc.c +++ b/alloc.c @@ -14,6 +14,7 @@ #include "tree.h" #include "commit.h" #include "tag.h" +#include "oidtree.h" #include "alloc.h" #define BLOCKING 1024 @@ -123,6 +124,11 @@ void *alloc_commit_node(struct repository *r) return c; } +void *alloc_from_state(struct alloc_state *alloc_state, size_t n) +{ + return alloc_node(alloc_state, n); +} + static void report(const char *name, unsigned int count, size_t size) { fprintf(stderr, "%10s: %8u (%"PRIuMAX" kB)\n", diff --git a/alloc.h b/alloc.h index 371d388b55..4032375aa1 100644 --- a/alloc.h +++ b/alloc.h @@ -13,6 +13,7 @@ void init_commit_node(struct commit *c); void *alloc_commit_node(struct repository *r); void *alloc_tag_node(struct repository *r); void *alloc_object_node(struct repository *r); +void *alloc_from_state(struct alloc_state *, size_t n); void alloc_report(struct repository *r); struct alloc_state *allocate_alloc_state(void); diff --git a/cbtree.c b/cbtree.c new file mode 100644 index 0000000000..b0c65d810f --- /dev/null +++ b/cbtree.c @@ -0,0 +1,167 @@ +/* + * crit-bit tree implementation, does no allocations internally + * For more information on crit-bit trees: https://cr.yp.to/critbit.html + * Based on Adam Langley's adaptation of Dan Bernstein's public domain code + * git clone https://github.com/agl/critbit.git + */ +#include "cbtree.h" + +static struct cb_node *cb_node_of(const void *p) +{ + return (struct cb_node *)((uintptr_t)p - 1); +} + +/* locate the best match, does not do a final comparision */ +static struct cb_node *cb_internal_best_match(struct cb_node *p, + const uint8_t *k, size_t klen) +{ + while (1 & (uintptr_t)p) { + struct cb_node *q = cb_node_of(p); + uint8_t c = q->byte < klen ? 
k[q->byte] : 0; + size_t direction = (1 + (q->otherbits | c)) >> 8; + + p = q->child[direction]; + } + return p; +} + +/* returns NULL if successful, existing cb_node if duplicate */ +struct cb_node *cb_insert(struct cb_tree *t, struct cb_node *node, size_t klen) +{ + size_t newbyte, newotherbits; + uint8_t c; + int newdirection; + struct cb_node **wherep, *p; + + assert(!((uintptr_t)node & 1)); /* allocations must be aligned */ + + if (!t->root) { /* insert into empty tree */ + t->root = node; + return NULL; /* success */ + } + + /* see if a node already exists */ + p = cb_internal_best_match(t->root, node->k, klen); + + /* find first differing byte */ + for (newbyte = 0; newbyte < klen; newbyte++) { + if (p->k[newbyte] != node->k[newbyte]) + goto different_byte_found; + } + return p; /* element exists, let user deal with it */ + +different_byte_found: + newotherbits = p->k[newbyte] ^ node->k[newbyte]; + newotherbits |= newotherbits >> 1; + newotherbits |= newotherbits >> 2; + newotherbits |= newotherbits >> 4; + newotherbits = (newotherbits & ~(newotherbits >> 1)) ^ 255; + c = p->k[newbyte]; + newdirection = (1 + (newotherbits | c)) >> 8; + + node->byte = newbyte; + node->otherbits = newotherbits; + node->child[1 - newdirection] = node; + + /* find a place to insert it */ + wherep = &t->root; + for (;;) { + struct cb_node *q; + size_t direction; + + p = *wherep; + if (!(1 & (uintptr_t)p)) + break; + q = cb_node_of(p); + if (q->byte > newbyte) + break; + if (q->byte == newbyte && q->otherbits > newotherbits) + break; + c = q->byte < klen ? node->k[q->byte] : 0; + direction = (1 + (q->otherbits | c)) >> 8; + wherep = q->child + direction; + } + + node->child[newdirection] = *wherep; + *wherep = (struct cb_node *)(1 + (uintptr_t)node); + + return NULL; /* success */ +} + +struct cb_node *cb_lookup(struct cb_tree *t, const uint8_t *k, size_t klen) +{ + struct cb_node *p = cb_internal_best_match(t->root, k, klen); + + return p && !memcmp(p->k, k, klen) ? p : NULL; +} + +struct cb_node *cb_unlink(struct cb_tree *t, const uint8_t *k, size_t klen) +{ + struct cb_node **wherep = &t->root; + struct cb_node **whereq = NULL; + struct cb_node *q = NULL; + size_t direction = 0; + uint8_t c; + struct cb_node *p = t->root; + + if (!p) return NULL; /* empty tree, nothing to delete */ + + /* traverse to find best match, keeping link to parent */ + while (1 & (uintptr_t)p) { + whereq = wherep; + q = cb_node_of(p); + c = q->byte < klen ? k[q->byte] : 0; + direction = (1 + (q->otherbits | c)) >> 8; + wherep = q->child + direction; + p = *wherep; + } + + if (memcmp(p->k, k, klen)) + return NULL; /* no match, nothing unlinked */ + + /* found an exact match */ + if (whereq) /* update parent */ + *whereq = q->child[1 - direction]; + else + t->root = NULL; + return p; +} + +static enum cb_next cb_descend(struct cb_node *p, cb_iter fn, void *arg) +{ + if (1 & (uintptr_t)p) { + struct cb_node *q = cb_node_of(p); + enum cb_next n = cb_descend(q->child[0], fn, arg); + + return n == CB_BREAK ? n : cb_descend(q->child[1], fn, arg); + } else { + return fn(p, arg); + } +} + +void cb_each(struct cb_tree *t, const uint8_t *kpfx, size_t klen, + cb_iter fn, void *arg) +{ + struct cb_node *p = t->root; + struct cb_node *top = p; + size_t i = 0; + + if (!p) return; /* empty tree */ + + /* Walk tree, maintaining top pointer */ + while (1 & (uintptr_t)p) { + struct cb_node *q = cb_node_of(p); + uint8_t c = q->byte < klen ? 
kpfx[q->byte] : 0; + size_t direction = (1 + (q->otherbits | c)) >> 8; + + p = q->child[direction]; + if (q->byte < klen) + top = p; + } + + for (i = 0; i < klen; i++) { + if (p->k[i] != kpfx[i]) + return; /* "best" match failed */ + } + cb_descend(top, fn, arg); +} diff --git a/cbtree.h b/cbtree.h new file mode 100644 index 0000000000..fe4587087e --- /dev/null +++ b/cbtree.h @@ -0,0 +1,56 @@ +/* + * crit-bit tree implementation, does no allocations internally + * For more information on crit-bit trees: https://cr.yp.to/critbit.html + * Based on Adam Langley's adaptation of Dan Bernstein's public domain code + * git clone https://github.com/agl/critbit.git + * + * This is adapted to store arbitrary data (not just NUL-terminated C strings + * and allocates no memory internally. The user needs to allocate + * "struct cb_node" and fill cb_node.k[] with arbitrary match data + * for memcmp. + * If "klen" is variable, then it should be embedded into "c_node.k[]" + * Recursion is bound by the maximum value of "klen" used. + */ +#ifndef CBTREE_H +#define CBTREE_H + +#include "git-compat-util.h" + +struct cb_node; +struct cb_node { + struct cb_node *child[2]; + /* + * n.b. uint32_t for `byte' is excessive for OIDs, + * we may consider shorter variants if nothing else gets stored. + */ + uint32_t byte; + uint8_t otherbits; + uint8_t k[FLEX_ARRAY]; /* arbitrary data */ +}; + +struct cb_tree { + struct cb_node *root; +}; + +enum cb_next { + CB_CONTINUE = 0, + CB_BREAK = 1 +}; + +#define CBTREE_INIT { .root = NULL } + +static inline void cb_init(struct cb_tree *t) +{ + t->root = NULL; +} + +struct cb_node *cb_lookup(struct cb_tree *, const uint8_t *k, size_t klen); +struct cb_node *cb_insert(struct cb_tree *, struct cb_node *, size_t klen); +struct cb_node *cb_unlink(struct cb_tree *t, const uint8_t *k, size_t klen); + +typedef enum cb_next (*cb_iter)(struct cb_node *, void *arg); + +void cb_each(struct cb_tree *, const uint8_t *kpfx, size_t klen, + cb_iter, void *arg); + +#endif /* CBTREE_H */ diff --git a/object-file.c b/object-file.c index 91183d1297..6c397fb4f1 100644 --- a/object-file.c +++ b/object-file.c @@ -1175,7 +1175,7 @@ static int quick_has_loose(struct repository *r, prepare_alt_odb(r); for (odb = r->objects->odb; odb; odb = odb->next) { - if (oid_array_lookup(odb_loose_cache(odb, oid), oid) >= 0) + if (oidtree_contains(odb_loose_cache(odb, oid), oid)) return 1; } return 0; @@ -2454,11 +2454,11 @@ int for_each_loose_object(each_loose_object_fn cb, void *data, static int append_loose_object(const struct object_id *oid, const char *path, void *data) { - oid_array_append(data, oid); + oidtree_insert(data, oid); return 0; } -struct oid_array *odb_loose_cache(struct object_directory *odb, +struct oidtree *odb_loose_cache(struct object_directory *odb, const struct object_id *oid) { int subdir_nr = oid->hash[0]; @@ -2474,24 +2474,21 @@ struct oid_array *odb_loose_cache(struct object_directory *odb, bitmap = &odb->loose_objects_subdir_seen[word_index]; if (*bitmap & mask) - return &odb->loose_objects_cache[subdir_nr]; + return &odb->loose_objects_cache; strbuf_addstr(&buf, odb->path); for_each_file_in_obj_subdir(subdir_nr, &buf, append_loose_object, NULL, NULL, - &odb->loose_objects_cache[subdir_nr]); + &odb->loose_objects_cache); *bitmap |= mask; strbuf_release(&buf); - return &odb->loose_objects_cache[subdir_nr]; + return &odb->loose_objects_cache; } void odb_clear_loose_cache(struct object_directory *odb) { - int i; - - for (i = 0; i < ARRAY_SIZE(odb->loose_objects_cache); i++) - 
oid_array_clear(&odb->loose_objects_cache[i]);
+	oidtree_destroy(&odb->loose_objects_cache);
 	memset(&odb->loose_objects_subdir_seen, 0,
 	       sizeof(odb->loose_objects_subdir_seen));
 }
diff --git a/object-name.c b/object-name.c
index 64202de60b..3263c19457 100644
--- a/object-name.c
+++ b/object-name.c
@@ -87,27 +87,21 @@ static void update_candidates(struct disambiguate_state *ds, const struct object
 
 static int match_hash(unsigned, const unsigned char *, const unsigned char *);
 
+static enum cb_next match_prefix(const struct object_id *oid, void *arg)
+{
+	struct disambiguate_state *ds = arg;
+	/* no need to call match_hash, oidtree_each did prefix match */
+	update_candidates(ds, oid);
+	return ds->ambiguous ? CB_BREAK : CB_CONTINUE;
+}
+
 static void find_short_object_filename(struct disambiguate_state *ds)
 {
 	struct object_directory *odb;
 
-	for (odb = ds->repo->objects->odb; odb && !ds->ambiguous; odb = odb->next) {
-		int pos;
-		struct oid_array *loose_objects;
-
-		loose_objects = odb_loose_cache(odb, &ds->bin_pfx);
-		pos = oid_array_lookup(loose_objects, &ds->bin_pfx);
-		if (pos < 0)
-			pos = -1 - pos;
-		while (!ds->ambiguous && pos < loose_objects->nr) {
-			const struct object_id *oid;
-			oid = loose_objects->oid + pos;
-			if (!match_hash(ds->len, ds->bin_pfx.hash, oid->hash))
-				break;
-			update_candidates(ds, oid);
-			pos++;
-		}
-	}
+	for (odb = ds->repo->objects->odb; odb && !ds->ambiguous; odb = odb->next)
+		oidtree_each(odb_loose_cache(odb, &ds->bin_pfx),
+				&ds->bin_pfx, ds->len, match_prefix, ds);
 }
 
 static int match_hash(unsigned len, const unsigned char *a, const unsigned char *b)
diff --git a/object-store.h b/object-store.h
index 8fcddf3e65..b507108d18 100644
--- a/object-store.h
+++ b/object-store.h
@@ -9,6 +9,7 @@
 #include "thread-utils.h"
 #include "khash.h"
 #include "dir.h"
+#include "oidtree.h"
 
 struct object_directory {
 	struct object_directory *next;
@@ -23,7 +24,7 @@ struct object_directory {
 	 * Be sure to call odb_load_loose_cache() before using.
 	 */
 	uint32_t loose_objects_subdir_seen[8]; /* 256 bits */
-	struct oid_array loose_objects_cache[256];
+	struct oidtree loose_objects_cache;
 
 	/*
 	 * Path to the alternative object store. If this is a relative path,
@@ -69,7 +70,7 @@ void add_to_alternates_memory(const char *dir);
  * Populate and return the loose object cache array corresponding to the
  * given object ID.
  */
-struct oid_array *odb_loose_cache(struct object_directory *odb,
+struct oidtree *odb_loose_cache(struct object_directory *odb,
 				  const struct object_id *oid);
 
 /* Empty the loose object cache for the specified object directory. */
diff --git a/oidtree.c b/oidtree.c
new file mode 100644
index 0000000000..c1188d8f48
--- /dev/null
+++ b/oidtree.c
@@ -0,0 +1,94 @@
+/*
+ * A wrapper around cbtree which stores oids
+ * May be used to replace oid-array for prefix (abbreviation) matches
+ */
+#include "oidtree.h"
+#include "alloc.h"
+#include "hash.h"
+
+struct oidtree_node {
+	/* n.k[] is used to store "struct object_id" */
+	struct cb_node n;
+};
+
+struct oidtree_iter_data {
+	oidtree_iter fn;
+	void *arg;
+	size_t *last_nibble_at;
+	int algo;
+	uint8_t last_byte;
+};
+
+void oidtree_destroy(struct oidtree *ot)
+{
+	if (ot->mempool) {
+		clear_alloc_state(ot->mempool);
+		FREE_AND_NULL(ot->mempool);
+	}
+	oidtree_init(ot);
+}
+
+void oidtree_insert(struct oidtree *ot, const struct object_id *oid)
+{
+	struct oidtree_node *on;
+
+	if (!ot->mempool)
+		ot->mempool = allocate_alloc_state();
+	if (!oid->algo)
+		BUG("oidtree_insert requires oid->algo");
+
+	on = alloc_from_state(ot->mempool, sizeof(*on) + sizeof(*oid));
+	oidcpy_with_padding((struct object_id *)on->n.k, oid);
+
+	/*
+	 * n.b. we shouldn't get duplicates, here, but we'll have
+	 * a small leak that won't be freed until oidtree_destroy
+	 */
+	cb_insert(&ot->t, &on->n, sizeof(*oid));
+}
+
+int oidtree_contains(struct oidtree *ot, const struct object_id *oid)
+{
+	struct object_id k = { 0 };
+	size_t klen = sizeof(k);
+	oidcpy_with_padding(&k, oid);
+
+	if (oid->algo == GIT_HASH_UNKNOWN) {
+		k.algo = hash_algo_by_ptr(the_hash_algo);
+		klen -= sizeof(oid->algo);
+	}
+
+	return cb_lookup(&ot->t, (const uint8_t *)&k, klen) ? 1 : 0;
+}
+
+static enum cb_next iter(struct cb_node *n, void *arg)
+{
+	struct oidtree_iter_data *x = arg;
+	const struct object_id *oid = (const struct object_id *)n->k;
+
+	if (x->algo != GIT_HASH_UNKNOWN && x->algo != oid->algo)
+		return CB_CONTINUE;
+
+	if (x->last_nibble_at) {
+		if ((oid->hash[*x->last_nibble_at] ^ x->last_byte) & 0xf0)
+			return CB_CONTINUE;
+	}
+
+	return x->fn(oid, x->arg);
+}
+
+void oidtree_each(struct oidtree *ot, const struct object_id *oid,
+		  size_t oidhexlen, oidtree_iter fn, void *arg)
+{
+	size_t klen = oidhexlen / 2;
+	struct oidtree_iter_data x = { 0 };
+
+	x.fn = fn;
+	x.arg = arg;
+	x.algo = oid->algo;
+	if (oidhexlen & 1) {
+		x.last_byte = oid->hash[klen];
+		x.last_nibble_at = &klen;
+	}
+	cb_each(&ot->t, (const uint8_t *)oid, klen, iter, &x);
+}
diff --git a/oidtree.h b/oidtree.h
new file mode 100644
index 0000000000..73399bb978
--- /dev/null
+++ b/oidtree.h
@@ -0,0 +1,29 @@
+#ifndef OIDTREE_H
+#define OIDTREE_H
+
+#include "cbtree.h"
+#include "hash.h"
+
+struct alloc_state;
+struct oidtree {
+	struct cb_tree t;
+	struct alloc_state *mempool;
+};
+
+#define OIDTREE_INIT { .t = CBTREE_INIT, .mempool = NULL }
+
+static inline void oidtree_init(struct oidtree *ot)
+{
+	cb_init(&ot->t);
+	ot->mempool = NULL;
+}
+
+void oidtree_destroy(struct oidtree *);
+void oidtree_insert(struct oidtree *, const struct object_id *);
+int oidtree_contains(struct oidtree *, const struct object_id *);
+
+typedef enum cb_next (*oidtree_iter)(const struct object_id *, void *arg);
+void oidtree_each(struct oidtree *, const struct object_id *,
+		  size_t oidhexlen, oidtree_iter, void *arg);
+
+#endif /* OIDTREE_H */
diff --git a/t/helper/test-oidtree.c b/t/helper/test-oidtree.c
new file mode 100644
index 0000000000..e0da13eea3
--- /dev/null
+++ b/t/helper/test-oidtree.c
@@ -0,0 +1,47 @@
+#include "test-tool.h"
+#include "cache.h"
+#include "oidtree.h"
+
+static enum cb_next print_oid(const struct object_id *oid, void *data)
+{
+	puts(oid_to_hex(oid));
+	return CB_CONTINUE;
+}
+
+int cmd__oidtree(int argc, const char **argv)
+{
+	struct oidtree ot = OIDTREE_INIT;
+	struct strbuf line = STRBUF_INIT;
+	int nongit_ok;
+	int algo = GIT_HASH_UNKNOWN;
+
+	setup_git_directory_gently(&nongit_ok);
+
+	while (strbuf_getline(&line, stdin) != EOF) {
+		const char *arg;
+		struct object_id oid;
+
+		if (skip_prefix(line.buf, "insert ", &arg)) {
+			if (get_oid_hex_any(arg, &oid) == GIT_HASH_UNKNOWN)
+				die("insert not a hexadecimal oid: %s", arg);
+			algo = oid.algo;
+			oidtree_insert(&ot, &oid);
+		} else if (skip_prefix(line.buf, "contains ", &arg)) {
+			if (get_oid_hex(arg, &oid))
+				die("contains not a hexadecimal oid: %s", arg);
+			printf("%d\n", oidtree_contains(&ot, &oid));
+		} else if (skip_prefix(line.buf, "each ", &arg)) {
+			char buf[GIT_MAX_HEXSZ + 1] = { '0' };
+			memset(&oid, 0, sizeof(oid));
+			memcpy(buf, arg, strlen(arg));
+			buf[hash_algos[algo].hexsz] = 0;
+			get_oid_hex_any(buf, &oid);
+			oid.algo = algo;
+			oidtree_each(&ot, &oid, strlen(arg), print_oid, NULL);
+		} else if (!strcmp(line.buf, "destroy"))
+			oidtree_destroy(&ot);
+		else
+			die("unknown command: %s", line.buf);
+	}
+	return 0;
+}
diff --git a/t/helper/test-tool.c b/t/helper/test-tool.c
index c5bd0c6d4c..9d37debf28 100644
--- a/t/helper/test-tool.c
+++ b/t/helper/test-tool.c
@@ -43,6 +43,7 @@ static struct test_cmd cmds[] = {
 	{ "mktemp", cmd__mktemp },
 	{ "oid-array", cmd__oid_array },
 	{ "oidmap", cmd__oidmap },
+	{ "oidtree", cmd__oidtree },
 	{ "online-cpus", cmd__online_cpus },
 	{ "parse-options", cmd__parse_options },
 	{ "parse-pathspec-file", cmd__parse_pathspec_file },
diff --git a/t/helper/test-tool.h b/t/helper/test-tool.h
index e8069a3b22..f683a2f59c 100644
--- a/t/helper/test-tool.h
+++ b/t/helper/test-tool.h
@@ -32,6 +32,7 @@ int cmd__match_trees(int argc, const char **argv);
 int cmd__mergesort(int argc, const char **argv);
 int cmd__mktemp(int argc, const char **argv);
 int cmd__oidmap(int argc, const char **argv);
+int cmd__oidtree(int argc, const char **argv);
 int cmd__online_cpus(int argc, const char **argv);
 int cmd__parse_options(int argc, const char **argv);
 int cmd__parse_pathspec_file(int argc, const char** argv);
diff --git a/t/t0069-oidtree.sh b/t/t0069-oidtree.sh
new file mode 100755
index 0000000000..0594f57c81
--- /dev/null
+++ b/t/t0069-oidtree.sh
@@ -0,0 +1,52 @@
+#!/bin/sh
+
+test_description='basic tests for the oidtree implementation'
+. ./test-lib.sh
+
+echoid () {
+	prefix="${1:+$1 }"
+	shift
+	while test $# -gt 0
+	do
+		echo "$1"
+		shift
+	done | awk -v prefix="$prefix" -v ZERO_OID=$ZERO_OID '{
+		printf("%s%s", prefix, $0);
+		need = length(ZERO_OID) - length($0);
+		for (i = 0; i < need; i++)
+			printf("0");
+		printf "\n";
+	}'
+}
+
+test_expect_success 'oidtree insert and contains' '
+	cat >expect <<EOF &&
+0
+0
+0
+1
+1
+0
+EOF
+	{
+		echoid insert 444 1 2 3 4 5 a b c d e &&
+		echoid contains 44 441 440 444 4440 4444
+		echo destroy
+	} | test-tool oidtree >actual &&
+	test_cmp expect actual
+'
+
+test_expect_success 'oidtree each' '
+	echoid "" 123 321 321 >expect &&
+	{
+		echoid insert f 9 8 123 321 a b c d e
+		echo each 12300
+		echo each 3211
+		echo each 3210
+		echo each 32100
+		echo destroy
+	} | test-tool oidtree >actual &&
+	test_cmp expect actual
+'
+
+test_done
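
For readers skimming the diff, the shape of the new API may be easier to see pulled out of patch context. The following is only an illustrative sketch, not part of the patch: demo_oidtree() and show_match() are made-up names, error handling is omitted, and it simply strings together the calls the patch introduces.

#include "cache.h"
#include "oidtree.h"

static enum cb_next show_match(const struct object_id *oid, void *arg)
{
	printf("%s\n", oid_to_hex(oid));
	return CB_CONTINUE;	/* return CB_BREAK to stop the walk early */
}

static void demo_oidtree(const struct object_id *oids, size_t nr)
{
	struct oidtree ot = OIDTREE_INIT;
	size_t i;

	/* oidtree_insert() requires oid->algo to be set (it BUG()s otherwise) */
	for (i = 0; i < nr; i++)
		oidtree_insert(&ot, &oids[i]);

	/* exact-match lookup, returns 1 or 0 */
	printf("contains: %d\n", oidtree_contains(&ot, &oids[0]));

	/*
	 * prefix-limited iteration: the length is given in hex digits,
	 * so 7 means "visit every entry sharing the first 7 digits of
	 * oids[0]"; find_short_object_filename() passes ds->len here.
	 */
	oidtree_each(&ot, &oids[0], 7, show_match, NULL);

	/* releases the alloc_state pool and reinitializes the tree */
	oidtree_destroy(&ot);
}

Note that the length argument to oidtree_each() is counted in hex digits rather than bytes, which is what lets the object-name.c caller pass ds->len straight through.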
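
One more observation on the odd-length case in oidtree_each(), offered as reviewer commentary rather than anything stated in the patch itself: cb_each() can only walk whole-byte prefixes, so only oidhexlen / 2 bytes take part in the tree walk; when oidhexlen is odd, the trailing hex digit is checked afterwards in iter(), which rejects any entry whose next byte differs in the high nibble, i.e. (oid->hash[*x->last_nibble_at] ^ x->last_byte) & 0xf0. The "each 32100" case in t0069 exercises exactly this: the walk is limited to entries whose first two bytes are 0x32 0x10, and the iterator then requires the high nibble of hash[2] to be zero, so the zero-padded "321" entry still matches.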