From patchwork Wed Oct 30 14:22:29 2019
X-Patchwork-Submitter: Alexander Potapenko <glider@google.com>
X-Patchwork-Id: 11219639
Date: Wed, 30 Oct 2019 15:22:29 +0100
In-Reply-To: <20191030142237.249532-1-glider@google.com>
Message-Id: <20191030142237.249532-18-glider@google.com>
References: <20191030142237.249532-1-glider@google.com>
X-Mailer: git-send-email 2.24.0.rc0.303.g954a862665-goog
Subject: [PATCH RFC v2 17/25] kmsan: mm: call KMSAN hooks from SLUB code
From: glider@google.com
To: Andrew Morton <akpm@linux-foundation.org>, Vegard Nossum <vegard.nossum@oracle.com>,
    Dmitry Vyukov <dvyukov@google.com>, linux-mm@kvack.org
Cc: viro@zeniv.linux.org.uk, aryabinin@virtuozzo.com, luto@kernel.org,
    ard.biesheuvel@linaro.org, arnd@arndb.de, hch@lst.de,
    dmitry.torokhov@gmail.com, edumazet@google.com, ericvh@gmail.com,
    gregkh@linuxfoundation.org, harry.wentland@amd.com,
    herbert@gondor.apana.org.au, mingo@elte.hu, axboe@kernel.dk,
    martin.petersen@oracle.com, schwidefsky@de.ibm.com, mst@redhat.com,
    monstr@monstr.eu, pmladek@suse.com, sergey.senozhatsky@gmail.com,
    rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de,
    wsa@the-dreams.de, gor@linux.ibm.com, iii@linux.ibm.com,
    mark.rutland@arm.com, willy@infradead.org, rdunlap@infradead.org,
    andreyknvl@google.com, elver@google.com, Alexander Potapenko <glider@google.com>

In order to report uninitialized memory coming from heap allocations,
KMSAN has to poison them unless they're created with __GFP_ZERO.
Conveniently, the KMSAN hooks are needed in the same places where the
init_on_alloc/init_on_free initialization is performed, so they can be
added next to the existing initialization code.
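For readers unfamiliar with the series: below is a rough sketch of the
semantics the allocation hook is expected to have, assuming the
kmsan_poison_shadow()/kmsan_unpoison_shadow() primitives introduced earlier
in the series. This is an illustration of the poisoning rule described
above, not the actual KMSAN implementation:

	/* Sketch only: intended semantics of the allocation hook. */
	static void kmsan_slab_alloc_sketch(struct kmem_cache *s,
					    void *object, gfp_t flags)
	{
		if (!object)
			return;
		if (flags & __GFP_ZERO)
			/* A zeroed object starts out fully initialized. */
			kmsan_unpoison_shadow(object, s->object_size);
		else
			/* Mark the object uninitialized until written to. */
			kmsan_poison_shadow(object, s->object_size, flags);
	}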
Signed-off-by: Alexander Potapenko <glider@google.com>
To: Alexander Potapenko <glider@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vegard Nossum <vegard.nossum@oracle.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: linux-mm@kvack.org
---
Change-Id: I51103b7981d3aabed747d0c85cbdc85568665871
---
 mm/slub.c | 37 +++++++++++++++++++++++++++++++------
 1 file changed, 31 insertions(+), 6 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index b25c807a111f..8b7069812801 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -21,6 +21,8 @@
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
 #include <linux/kasan.h>
+#include <linux/kmsan.h>
+#include <linux/kmsan-checks.h> /* KMSAN_INIT_VALUE */
 #include <linux/cpu.h>
 #include <linux/cpuset.h>
 #include <linux/mempolicy.h>
@@ -285,17 +287,27 @@ static void prefetch_freepointer(const struct kmem_cache *s, void *object)
 	prefetch(object + s->offset);
 }
 
+/*
+ * When running under KMSAN, get_freepointer_safe() may return an uninitialized
+ * pointer value in the case the current thread loses the race for the next
+ * memory chunk in the freelist. In that case this_cpu_cmpxchg_double() in
+ * slab_alloc_node() will fail, so the uninitialized value won't be used, but
+ * KMSAN will still check all arguments of cmpxchg because of imperfect
+ * handling of inline assembly.
+ * To work around this problem, use KMSAN_INIT_VALUE() to force initialize the
+ * return value of get_freepointer_safe().
+ */
 static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
 {
 	unsigned long freepointer_addr;
 	void *p;
 
 	if (!debug_pagealloc_enabled())
-		return get_freepointer(s, object);
+		return KMSAN_INIT_VALUE(get_freepointer(s, object));
 
 	freepointer_addr = (unsigned long)object + s->offset;
 	probe_kernel_read(&p, (void **)freepointer_addr, sizeof(p));
-	return freelist_ptr(s, p, freepointer_addr);
+	return KMSAN_INIT_VALUE(freelist_ptr(s, p, freepointer_addr));
 }
 
 static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
@@ -1390,6 +1402,7 @@ static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 	ptr = kasan_kmalloc_large(ptr, size, flags);
 	/* As ptr might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ptr, size, 1, flags);
+	kmsan_kmalloc_large(ptr, size, flags);
 	return ptr;
 }
 
@@ -1397,6 +1410,7 @@ static __always_inline void kfree_hook(void *x)
 {
 	kmemleak_free(x);
 	kasan_kfree_large(x, _RET_IP_);
+	kmsan_kfree_large(x);
 }
 
 static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x)
@@ -1453,6 +1467,12 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 		} while (object != old_tail);
 	}
 
+	do {
+		object = next;
+		next = get_freepointer(s, object);
+		kmsan_slab_free(s, object);
+	} while (object != old_tail);
+
 /*
  * Compiler cannot detect this function can be removed if slab_free_hook()
  * evaluates to nothing. Thus, catch all relevant config debug options here.
@@ -2776,6 +2796,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
 		memset(object, 0, s->object_size);
 
+	kmsan_slab_alloc(s, object, gfpflags);
 	slab_post_alloc_hook(s, gfpflags, 1, &object);
 
 	return object;
@@ -2804,6 +2825,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
 	ret = kasan_kmalloc(s, ret, size, gfpflags);
+
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2816,7 +2838,6 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 
 	trace_kmem_cache_alloc_node(_RET_IP_, ret,
 				    s->object_size, s->size, gfpflags, node);
-
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node);
@@ -2832,6 +2853,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			   size, s->size, gfpflags, node);
 
 	ret = kasan_kmalloc(s, ret, size, gfpflags);
+
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -3157,7 +3179,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 			  void **p)
 {
 	struct kmem_cache_cpu *c;
-	int i;
+	int i, j;
 
 	/* memcg and kmem_cache debug support */
 	s = slab_pre_alloc_hook(s, flags);
@@ -3198,11 +3220,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 
 	/* Clear memory outside IRQ disabled fastpath loop */
 	if (unlikely(slab_want_init_on_alloc(flags, s))) {
-		int j;
-
 		for (j = 0; j < i; j++)
 			memset(p[j], 0, s->object_size);
 	}
+	for (j = 0; j < i; j++)
+		kmsan_slab_alloc(s, p[j], flags);
 
 	/* memcg and kmem_cache debug support */
 	slab_post_alloc_hook(s, flags, size, p);
@@ -3803,6 +3825,7 @@ static int __init setup_slub_min_objects(char *str)
 
 __setup("slub_min_objects=", setup_slub_min_objects);
 
+__no_sanitize_memory
 void *__kmalloc(size_t size, gfp_t flags)
 {
 	struct kmem_cache *s;
@@ -5717,6 +5740,7 @@ static char *create_unique_id(struct kmem_cache *s)
 	p += sprintf(p, "%07u", s->size);
 
 	BUG_ON(p > name + ID_STR_LENGTH - 1);
+	kmsan_unpoison_shadow(name, p - name);
 	return name;
 }
 
@@ -5866,6 +5890,7 @@ static int sysfs_slab_alias(struct kmem_cache *s, const char *name)
 	al->name = name;
 	al->next = alias_list;
 	alias_list = al;
+	kmsan_unpoison_shadow(al, sizeof(struct saved_alias));
 	return 0;
 }
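Note on KMSAN_INIT_VALUE() used above: the macro comes from
<linux/kmsan-checks.h> elsewhere in this series. Conceptually it launders a
value through a helper that marks its shadow as fully initialized, roughly
like the following (illustration only, not the actual definition):

	/* Sketch: make KMSAN treat |val| as fully initialized. */
	#define KMSAN_INIT_VALUE(val)					\
		({							\
			typeof(val) __ret = (val);			\
			kmsan_unpoison_shadow(&__ret, sizeof(__ret));	\
			__ret;						\
		})

Under this reading, the freepointer that is uninitialized only when the
thread loses the race no longer trips KMSAN's argument check on the
cmpxchg in slab_alloc_node(), while genuinely used uninitialized values
elsewhere are still reported.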