Message ID | 20230705160139.19967-4-aspsk@isovalent.com (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | BPF |
Series | bpf: add percpu stats for bpf_map |
On Wed, Jul 5, 2023 at 9:00 AM Anton Protopopov <aspsk@isovalent.com> wrote:
>
> Initialize and utilize the per-cpu insertions/deletions counters for hash-based
> maps. Non-trivial changes apply to preallocated maps for which the
> {inc,dec}_elem_count functions are not called, as there's no need in counting
> elements to sustain proper map operations.
>
> To increase/decrease percpu counters for preallocated hash maps we add raw
> calls to the bpf_map_{inc,dec}_elem_count functions so that the impact is
> minimal. For dynamically allocated maps we add corresponding calls to the
> existing {inc,dec}_elem_count functions.
>
> For LRU maps bpf_map_{inc,dec}_elem_count added to the lru pop/free helpers.
>
> Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
> ---
>  kernel/bpf/hashtab.c | 23 +++++++++++++++++++++--
>  1 file changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index 56d3da7d0bc6..c23557bf9a1a 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -302,6 +302,7 @@ static struct htab_elem *prealloc_lru_pop(struct bpf_htab *htab, void *key,
>         struct htab_elem *l;
>
>         if (node) {
> +               bpf_map_inc_elem_count(&htab->map);
>                 l = container_of(node, struct htab_elem, lru_node);
>                 memcpy(l->key, key, htab->map.key_size);
>                 return l;
> @@ -581,10 +582,17 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
>                 }
>         }
>
> +       err = bpf_map_init_elem_count(&htab->map);
> +       if (err)
> +               goto free_extra_elements;
> +
>         return &htab->map;
>
> +free_extra_elements:
> +       free_percpu(htab->extra_elems);
>  free_prealloc:
> -       prealloc_destroy(htab);
> +       if (prealloc)
> +               prealloc_destroy(htab);

This is a bit difficult to read.
I think the logic would be easier to understand if bpf_map_init_elem_count
was done right before htab->buckets = bpf_map_area_alloc()
and if (err) goto free_htab
where you would add bpf_map_free_elem_count.
On Wed, Jul 05, 2023 at 06:24:44PM -0700, Alexei Starovoitov wrote:
> On Wed, Jul 5, 2023 at 9:00 AM Anton Protopopov <aspsk@isovalent.com> wrote:
> >
> > Initialize and utilize the per-cpu insertions/deletions counters for hash-based
> > maps. Non-trivial changes apply to preallocated maps for which the
> > {inc,dec}_elem_count functions are not called, as there's no need in counting
> > elements to sustain proper map operations.
> >
> > To increase/decrease percpu counters for preallocated hash maps we add raw
> > calls to the bpf_map_{inc,dec}_elem_count functions so that the impact is
> > minimal. For dynamically allocated maps we add corresponding calls to the
> > existing {inc,dec}_elem_count functions.
> >
> > For LRU maps bpf_map_{inc,dec}_elem_count added to the lru pop/free helpers.
> >
> > Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
> > ---
> >  kernel/bpf/hashtab.c | 23 +++++++++++++++++++++--
> >  1 file changed, 21 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> > index 56d3da7d0bc6..c23557bf9a1a 100644
> > --- a/kernel/bpf/hashtab.c
> > +++ b/kernel/bpf/hashtab.c
> > @@ -302,6 +302,7 @@ static struct htab_elem *prealloc_lru_pop(struct bpf_htab *htab, void *key,
> >         struct htab_elem *l;
> >
> >         if (node) {
> > +               bpf_map_inc_elem_count(&htab->map);
> >                 l = container_of(node, struct htab_elem, lru_node);
> >                 memcpy(l->key, key, htab->map.key_size);
> >                 return l;
> > @@ -581,10 +582,17 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
> >                 }
> >         }
> >
> > +       err = bpf_map_init_elem_count(&htab->map);
> > +       if (err)
> > +               goto free_extra_elements;
> > +
> >         return &htab->map;
> >
> > +free_extra_elements:
> > +       free_percpu(htab->extra_elems);
> >  free_prealloc:
> > -       prealloc_destroy(htab);
> > +       if (prealloc)
> > +               prealloc_destroy(htab);
>
> This is a bit difficult to read.
> I think the logic would be easier to understand if bpf_map_init_elem_count
> was done right before htab->buckets = bpf_map_area_alloc()
> and if (err) goto free_htab
> where you would add bpf_map_free_elem_count.

Thanks, I will fix this.
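For context, the restructuring Alexei suggests would look roughly like the sketch below: bpf_map_init_elem_count() is called just before the buckets are allocated, a failure reuses the existing free_htab unwinding path, and bpf_map_free_elem_count() is added to the error labels. This is only an illustration of the idea, not the actual follow-up patch; most of htab_map_alloc() is elided and the free_elem_count label name is an assumption.

/* Sketch only: suggested ordering inside htab_map_alloc(); most of the
 * function is elided and the free_elem_count label is hypothetical.
 */
static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
{
        struct bpf_htab *htab;
        int err;

        /* ... allocate htab, set up map attributes, lockdep key, ... */

        /* Allocate the per-cpu element counter next to the other per-map
         * allocations so that a failure can reuse the common unwinding
         * path instead of needing a special free_extra_elements label.
         */
        err = bpf_map_init_elem_count(&htab->map);
        if (err)
                goto free_htab;

        err = -ENOMEM;
        htab->buckets = bpf_map_area_alloc(htab->n_buckets *
                                           sizeof(struct bucket),
                                           htab->map.numa_node);
        if (!htab->buckets)
                goto free_elem_count;

        /* ... percpu counters, prealloc_init(), alloc_extra_elems(), ... */

        return &htab->map;

        /* ... error labels for the later allocations unwind first ... */
free_elem_count:
        bpf_map_free_elem_count(&htab->map);
free_htab:
        /* ... existing cleanup of htab itself ... */
        return ERR_PTR(err);
}

With this ordering the error paths stay strictly nested: anything allocated after the counter unwinds through free_elem_count, so bpf_map_free_elem_count() runs exactly once and no conditional cleanup is needed.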
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 56d3da7d0bc6..c23557bf9a1a 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -302,6 +302,7 @@ static struct htab_elem *prealloc_lru_pop(struct bpf_htab *htab, void *key,
 	struct htab_elem *l;
 
 	if (node) {
+		bpf_map_inc_elem_count(&htab->map);
 		l = container_of(node, struct htab_elem, lru_node);
 		memcpy(l->key, key, htab->map.key_size);
 		return l;
@@ -581,10 +582,17 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 		}
 	}
 
+	err = bpf_map_init_elem_count(&htab->map);
+	if (err)
+		goto free_extra_elements;
+
 	return &htab->map;
 
+free_extra_elements:
+	free_percpu(htab->extra_elems);
 free_prealloc:
-	prealloc_destroy(htab);
+	if (prealloc)
+		prealloc_destroy(htab);
 free_map_locked:
 	if (htab->use_percpu_counter)
 		percpu_counter_destroy(&htab->pcount);
@@ -804,6 +812,7 @@ static bool htab_lru_map_delete_node(void *arg, struct bpf_lru_node *node)
 		if (l == tgt_l) {
 			hlist_nulls_del_rcu(&l->hash_node);
 			check_and_free_fields(htab, l);
+			bpf_map_dec_elem_count(&htab->map);
 			break;
 		}
 
@@ -900,6 +909,8 @@ static bool is_map_full(struct bpf_htab *htab)
 
 static void inc_elem_count(struct bpf_htab *htab)
 {
+	bpf_map_inc_elem_count(&htab->map);
+
 	if (htab->use_percpu_counter)
 		percpu_counter_add_batch(&htab->pcount, 1, PERCPU_COUNTER_BATCH);
 	else
@@ -908,6 +919,8 @@ static void inc_elem_count(struct bpf_htab *htab)
 
 static void dec_elem_count(struct bpf_htab *htab)
 {
+	bpf_map_dec_elem_count(&htab->map);
+
 	if (htab->use_percpu_counter)
 		percpu_counter_add_batch(&htab->pcount, -1, PERCPU_COUNTER_BATCH);
 	else
@@ -920,6 +933,7 @@ static void free_htab_elem(struct bpf_htab *htab, struct htab_elem *l)
 	htab_put_fd_value(htab, l);
 
 	if (htab_is_prealloc(htab)) {
+		bpf_map_dec_elem_count(&htab->map);
 		check_and_free_fields(htab, l);
 		__pcpu_freelist_push(&htab->freelist, &l->fnode);
 	} else {
@@ -1000,6 +1014,7 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
 			if (!l)
 				return ERR_PTR(-E2BIG);
 			l_new = container_of(l, struct htab_elem, fnode);
+			bpf_map_inc_elem_count(&htab->map);
 		}
 	} else {
 		if (is_map_full(htab))
@@ -1168,6 +1183,7 @@ static long htab_map_update_elem(struct bpf_map *map, void *key, void *value,
 static void htab_lru_push_free(struct bpf_htab *htab, struct htab_elem *elem)
 {
 	check_and_free_fields(htab, elem);
+	bpf_map_dec_elem_count(&htab->map);
 	bpf_lru_push_free(&htab->lru, &elem->lru_node);
 }
 
@@ -1357,8 +1373,10 @@ static long __htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key,
 err:
 	htab_unlock_bucket(htab, b, hash, flags);
 err_lock_bucket:
-	if (l_new)
+	if (l_new) {
+		bpf_map_dec_elem_count(&htab->map);
 		bpf_lru_push_free(&htab->lru, &l_new->lru_node);
+	}
 	return ret;
 }
 
@@ -1523,6 +1541,7 @@ static void htab_map_free(struct bpf_map *map)
 		prealloc_destroy(htab);
 	}
 
+	bpf_map_free_elem_count(map);
 	free_percpu(htab->extra_elems);
 	bpf_map_area_free(htab->buckets);
 	bpf_mem_alloc_destroy(&htab->pcpu_ma);
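The bpf_map_{init,free,inc,dec}_elem_count() helpers used above are introduced by an earlier patch in this series, outside kernel/bpf/hashtab.c, and are not part of this diff. As a rough mental model only, not a copy of the actual definitions, they behave like a plain per-cpu counter attached to struct bpf_map:

/* Approximation of the per-map element-count helpers added earlier in
 * the series; the real definitions live in the BPF core headers and may
 * differ in detail.
 */
static inline int bpf_map_init_elem_count(struct bpf_map *map)
{
        /* One s64 counter per possible CPU. */
        map->elem_count = alloc_percpu_gfp(s64, GFP_KERNEL | __GFP_NOWARN);
        return map->elem_count ? 0 : -ENOMEM;
}

static inline void bpf_map_free_elem_count(struct bpf_map *map)
{
        free_percpu(map->elem_count);
}

static inline void bpf_map_inc_elem_count(struct bpf_map *map)
{
        /* Lock-free: each CPU only updates its own counter. */
        this_cpu_inc(*map->elem_count);
}

static inline void bpf_map_dec_elem_count(struct bpf_map *map)
{
        this_cpu_dec(*map->elem_count);
}

This is why the commit message describes the calls as "raw": for preallocated maps the update is a single this_cpu_inc/dec with no shared cacheline or atomic, keeping the overhead on the update path minimal.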
Initialize and utilize the per-cpu insertions/deletions counters for hash-based
maps. Non-trivial changes apply to preallocated maps, for which the
{inc,dec}_elem_count functions are not called, as there is no need to count
elements to sustain proper map operations.

To increase/decrease the percpu counters for preallocated hash maps, we add raw
calls to the bpf_map_{inc,dec}_elem_count functions so that the impact is
minimal. For dynamically allocated maps, we add corresponding calls to the
existing {inc,dec}_elem_count functions.

For LRU maps, bpf_map_{inc,dec}_elem_count calls are added to the lru pop/free
helpers.

Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
---
 kernel/bpf/hashtab.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)
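Because each CPU only increments or decrements its own counter, reading a map-wide element count means summing over all possible CPUs, along the lines of the sketch below. This is illustrative only; the helper name and placement are assumptions, and the in-tree code that exposes such a sum is added by other patches in the series, not by this one.

/* Illustrative: derive a map-wide element count from the per-cpu
 * insert/delete counters. Individual per-cpu values can be negative
 * (an element may be inserted on one CPU and deleted on another);
 * only the sum is meaningful, and it is only approximately consistent
 * while the map is being updated.
 */
static s64 map_sum_elem_count(const struct bpf_map *map)
{
        s64 sum = 0;
        int cpu;

        for_each_possible_cpu(cpu)
                sum += READ_ONCE(*per_cpu_ptr(map->elem_count, cpu));

        return sum;
}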