kmemleak: survive in a low-memory situation

Message ID 20190102160849.11480-1-cai@lca.pw
State New

Commit Message

Qian Cai Jan. 2, 2019, 4:08 p.m. UTC
Kmemleak can quickly fail to allocate an object structure and then
disable itself in a low-memory situation, for example, when running an
mmap() workload that triggers swapping and OOM [1].

First, it unnecessarily attempts to allocate metadata even when the
object returned by kmem_cache_alloc() is NULL. For example,

alloc_io
  bio_alloc_bioset
    mempool_alloc
      mempool_alloc_slab
        kmem_cache_alloc
          slab_alloc_node
            __slab_alloc <-- could return NULL
            slab_post_alloc_hook
              kmemleak_alloc_recursive
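
For reference, the slab hook passes the freshly allocated pointer,
NULL or not, straight into kmemleak through a thin wrapper; a sketch
paraphrased from include/linux/kmemleak.h:

static inline void kmemleak_alloc_recursive(const void *ptr, size_t size,
					    int min_count, slab_flags_t flags,
					    gfp_t gfp)
{
	/* caches marked SLAB_NOLEAKTRACE are skipped entirely */
	if (!(flags & SLAB_NOLEAKTRACE))
		kmemleak_alloc(ptr, size, min_count, gfp);
}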

Second, the kmemleak metadata allocation can fail even though the
allocation of the tracked object succeeded. In that case, as a
last-ditch effort, kmemleak can retry with direct reclaim if it is not
running in an atomic context (spinlock, IRQ handler, etc.), or with a
high-priority GFP_ATOMIC allocation if it is.
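
For context, kmemleak already clamps the caller's GFP flags before
allocating its metadata, which is why that allocation can fail while
the tracked allocation succeeds; paraphrased from mm/kmemleak.c:

/*
 * Only the GFP_KERNEL/GFP_ATOMIC bits of the caller's flags are kept,
 * and "fail fast" modifiers are added, so the metadata allocation
 * gives up quickly under memory pressure.
 */
#define gfp_kmemleak_mask(gfp)	(((gfp) & (GFP_KERNEL | GFP_ATOMIC)) | \
				 __GFP_NORETRY | __GFP_NOMEMALLOC | \
				 __GFP_NOWARN)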

[1]
https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/mem/oom/oom01.c

Signed-off-by: Qian Cai <cai@lca.pw>
---
 mm/kmemleak.c | 10 ++++++++++
 mm/slab.h     | 17 +++++++++--------
 2 files changed, 19 insertions(+), 8 deletions(-)

Comments

Catalin Marinas Jan. 2, 2019, 4:59 p.m. UTC | #1
Hi Qian,

On Wed, Jan 02, 2019 at 11:08:49AM -0500, Qian Cai wrote:
> Kmemleak can quickly fail to allocate an object structure and then
> disable itself in a low-memory situation, for example, when running an
> mmap() workload that triggers swapping and OOM [1].
> 
> First, it unnecessarily attempts to allocate metadata even when the
> object returned by kmem_cache_alloc() is NULL. For example,
> 
> alloc_io
>   bio_alloc_bioset
>     mempool_alloc
>       mempool_alloc_slab
>         kmem_cache_alloc
>           slab_alloc_node
>             __slab_alloc <-- could return NULL
>             slab_post_alloc_hook
>               kmemleak_alloc_recursive

kmemleak_alloc() only continues with the kmemleak_object allocation if
the given pointer is not NULL.
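
For reference, the guard in question, paraphrased (and trimmed) from
mm/kmemleak.c:

void __ref kmemleak_alloc(const void *ptr, size_t size, int min_count,
			  gfp_t gfp)
{
	/* a NULL or error pointer never reaches create_object() */
	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
		create_object((unsigned long)ptr, size, min_count, gfp);
}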

> diff --git a/mm/slab.h b/mm/slab.h
> index 4190c24ef0e9..51a9a942cc56 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -435,15 +435,16 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
>  {
>  	size_t i;
>  
> -	flags &= gfp_allowed_mask;
> -	for (i = 0; i < size; i++) {
> -		void *object = p[i];
> -
> -		kmemleak_alloc_recursive(object, s->object_size, 1,
> -					 s->flags, flags);
> -		p[i] = kasan_slab_alloc(s, object, flags);
> +	if (*p) {
> +		flags &= gfp_allowed_mask;
> +		for (i = 0; i < size; i++) {
> +			void *object = p[i];
> +
> +			kmemleak_alloc_recursive(object, s->object_size, 1,
> +						 s->flags, flags);
> +			p[i] = kasan_slab_alloc(s, object, flags);
> +		}
>  	}

This is not necessary for kmemleak.

Patch

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index f9d9dc250428..9e1aa3b7df75 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -576,6 +576,16 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
 	struct rb_node **link, *rb_parent;
 
 	object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
+#ifdef CONFIG_PREEMPT_COUNT
+	if (!object) {
+		/* last-ditch effort in a low-memory situation */
+		if (irqs_disabled() || is_idle_task(current) || in_atomic())
+			gfp = GFP_ATOMIC;
+		else
+			gfp = gfp_kmemleak_mask(gfp) | __GFP_DIRECT_RECLAIM;
+		object = kmem_cache_alloc(object_cache, gfp);
+	}
+#endif
 	if (!object) {
 		pr_warn("Cannot allocate a kmemleak_object structure\n");
 		kmemleak_disable();
diff --git a/mm/slab.h b/mm/slab.h
index 4190c24ef0e9..51a9a942cc56 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -435,15 +435,16 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
 {
 	size_t i;
 
-	flags &= gfp_allowed_mask;
-	for (i = 0; i < size; i++) {
-		void *object = p[i];
-
-		kmemleak_alloc_recursive(object, s->object_size, 1,
-					 s->flags, flags);
-		p[i] = kasan_slab_alloc(s, object, flags);
+	if (*p) {
+		flags &= gfp_allowed_mask;
+		for (i = 0; i < size; i++) {
+			void *object = p[i];
+
+			kmemleak_alloc_recursive(object, s->object_size, 1,
+						 s->flags, flags);
+			p[i] = kasan_slab_alloc(s, object, flags);
+		}
 	}
-
 	if (memcg_kmem_enabled())
 		memcg_kmem_put_cache(s);
 }
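
A note on the CONFIG_PREEMPT_COUNT guard in the kmemleak.c hunk:
without preempt counting, in_atomic() cannot reliably detect atomic
context (it cannot see spinlocks held on non-preemptible kernels), so
the __GFP_DIRECT_RECLAIM retry could otherwise sleep where sleeping is
forbidden. A minimal sketch of the intended policy, using a
hypothetical helper name:

/* hypothetical helper, not part of the patch */
static bool kmemleak_can_sleep(void)
{
	return !irqs_disabled() && !is_idle_task(current) && !in_atomic();
}

The fallback then retries with gfp_kmemleak_mask(gfp) |
__GFP_DIRECT_RECLAIM when kmemleak_can_sleep() is true, and with
GFP_ATOMIC otherwise.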