Message ID: 20250321204105.1898507-5-kees@kernel.org (mailing list archive)
State:      New
Series:     slab: Set freed variables to NULL by default
On Fri, 21 Mar 2025, Kees Cook wrote:

> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 2717ad238fa2..a4740c8b6ccb 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -469,6 +469,8 @@ void __kfree(const void *objp);
>  void __kfree_sensitive(const void *objp);
>  size_t __ksize(const void *objp);
>
> +extern atomic_t count_nulled;

That is a scalability issue. Use a vmstat counter instead?
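For reference, vmstat event counters are kept per-CPU and only folded together when read, so the hot path never bounces a shared cache line the way a global atomic_t does. A minimal sketch of that suggestion, using the existing count_vm_event() API; the NULLED_PTR event name is hypothetical:

/* Hypothetical sketch: count NULLifications via a vmstat event.
 * NULLED_PTR is an invented name; a real patch would also add a
 * matching string to vmstat_text[] in mm/vmstat.c.
 */

/* include/linux/vm_event_item.h */
enum vm_event_item {
	/* ... existing events ... */
	NULLED_PTR,
	NR_VM_EVENT_ITEMS,
};

/* at the instrumentation point, replacing atomic_inc(&count_nulled): */
#include <linux/vmstat.h>

static inline void note_nulled(void)
{
	count_vm_event(NULLED_PTR);	/* per-CPU increment, no atomics */
}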
On Mon, Mar 24, 2025 at 09:16:47AM -0700, Christoph Lameter (Ampere) wrote:
> On Fri, 21 Mar 2025, Kees Cook wrote:
>
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index 2717ad238fa2..a4740c8b6ccb 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -469,6 +469,8 @@ void __kfree(const void *objp);
> >  void __kfree_sensitive(const void *objp);
> >  size_t __ksize(const void *objp);
> >
> > +extern atomic_t count_nulled;
>
> That is a scalability issue. Use a vmstat counter instead?

Yeah, this patch (marked "DEBUG") isn't intended for upstreaming. It was
just a quick hack to get a ballpark statistic. :)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 2717ad238fa2..a4740c8b6ccb 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -469,6 +469,8 @@ void __kfree(const void *objp);
 void __kfree_sensitive(const void *objp);
 size_t __ksize(const void *objp);
 
+extern atomic_t count_nulled;
+
 static inline void kfree_and_null(void **ptr)
 {
 	__kfree(*ptr);
@@ -487,6 +489,7 @@ static inline void kfree_sensitive_and_null(void **ptr)
 ({									\
 	typeof(x) *__ptr = &(x);					\
 	__how ## _and_null((void **)__ptr);				\
+	atomic_inc(&count_nulled);					\
 })
 #define __free_and_maybe_null(__how, x)					\
 	__builtin_choose_expr(__is_lvalue(x),				\
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 9a82952ec266..0412cbab81f9 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -42,6 +42,9 @@ LIST_HEAD(slab_caches);
 DEFINE_MUTEX(slab_mutex);
 struct kmem_cache *kmem_cache;
 
+atomic_t count_nulled = ATOMIC_INIT(0);
+EXPORT_SYMBOL(count_nulled);
+
 /*
  * Set of flags that will prevent slab merging
  */
@@ -1084,6 +1087,7 @@ static void print_slabinfo_header(struct seq_file *m)
  * without _too_ many complaints.
  */
 	seq_puts(m, "slabinfo - version: 2.1\n");
+	seq_printf(m, "# nulled: %d\n", atomic_read(&count_nulled));
 	seq_puts(m, "# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>");
 	seq_puts(m, " : tunables <limit> <batchcount> <sharedfactor>");
 	seq_puts(m, " : slabdata <active_slabs> <num_slabs> <sharedavail>");
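To put the hunk in context: earlier patches in this series route kfree() through __free_and_maybe_null(), which sets the freed variable to NULL whenever the argument is an lvalue, and this debug hunk counts each such NULLification. A caller's-eye sketch of the intended effect; struct foo is a placeholder type, not from the patch:

	/* Sketch of the call-site behavior this series provides. */
	struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);

	if (!p)
		return -ENOMEM;

	kfree(p);	/* frees, then sets p = NULL; count_nulled++ */
	kfree(p);	/* now kfree(NULL): a harmless no-op, not a double-free */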
Just to get a sense of what's happening, report the number of NULL
assignments that have been done. After booting an otherwise standard
Ubuntu image, this shows about 240,000 NULLifications have been
performed.

Signed-off-by: Kees Cook <kees@kernel.org>
---
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org
---
 include/linux/slab.h | 3 +++
 mm/slab_common.c     | 4 ++++
 2 files changed, 7 insertions(+)
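With the print_slabinfo_header() change, the counter surfaces at the top
of /proc/slabinfo, so the ~240,000 figure above can be read back after
boot; illustrative output (the number is approximate):

$ sudo head -2 /proc/slabinfo
slabinfo - version: 2.1
# nulled: 240000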