From patchwork Wed Jul 9 11:30:08 2014
X-Patchwork-Submitter: Andrey Ryabinin <a.ryabinin@samsung.com>
X-Patchwork-Id: 4513951
From: Andrey Ryabinin <a.ryabinin@samsung.com>
To: linux-kernel@vger.kernel.org
Subject: [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when
 touching unaccessible memory
Date: Wed, 09 Jul 2014 15:30:08 +0400
Message-id: <1404905415-9046-15-git-send-email-a.ryabinin@samsung.com>
X-Mailer: git-send-email 1.8.5.5
In-reply-to: <1404905415-9046-1-git-send-email-a.ryabinin@samsung.com>
References: <1404905415-9046-1-git-send-email-a.ryabinin@samsung.com>
Cc: Michal Marek, Christoph Lameter, x86@kernel.org, Russell King,
 Andrew Morton, linux-kbuild@vger.kernel.org, Andrey Ryabinin,
 Joonsoo Kim, David Rientjes, linux-mm@kvack.org, Pekka Enberg,
 Konstantin Serebryany, Yuri Gribov, Dmitry Vyukov, Sasha Levin,
 Andrey Konovalov, Thomas Gleixner, Alexey Preobrazhensky,
 Ingo Molnar, Konstantin Khlebnikov, linux-arm-kernel@lists.infradead.org

Some code in slub can validly touch memory that kasan has marked as
inaccessible. Even though slub.c itself is not instrumented, the functions
it calls are, so to avoid false positive reports such places are protected
by kasan_disable_local()/kasan_enable_local() calls.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 mm/slub.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 6ddedf9..c8dbea7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -560,8 +560,10 @@ static void print_tracking(struct kmem_cache *s, void *object)
 	if (!(s->flags & SLAB_STORE_USER))
 		return;
 
+	kasan_disable_local();
 	print_track("Allocated", get_track(s, object, TRACK_ALLOC));
 	print_track("Freed", get_track(s, object, TRACK_FREE));
+	kasan_enable_local();
 }
 
 static void print_page_info(struct page *page)
@@ -604,6 +606,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	unsigned int off;	/* Offset of last byte */
 	u8 *addr = page_address(page);
 
+	kasan_disable_local();
+
 	print_tracking(s, p);
 
 	print_page_info(page);
@@ -632,6 +636,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 		/* Beginning of the filler is the free pointer */
 		print_section("Padding ", p + off, s->size - off);
 
+	kasan_enable_local();
+
 	dump_stack();
 }
 
@@ -1012,6 +1018,8 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
 					struct page *page,
 					void *object, unsigned long addr)
 {
+
+	kasan_disable_local();
 	if (!check_slab(s, page))
 		goto bad;
 
@@ -1028,6 +1036,7 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
 	set_track(s, object, TRACK_ALLOC, addr);
 	trace(s, page, object, 1);
 	init_object(s, object, SLUB_RED_ACTIVE);
+	kasan_enable_local();
 	return 1;
 
 bad:
@@ -1041,6 +1050,7 @@ bad:
 		page->inuse = page->objects;
 		page->freelist = NULL;
 	}
+	kasan_enable_local();
 	return 0;
 }
 
@@ -1052,6 +1062,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
 
 	spin_lock_irqsave(&n->list_lock, *flags);
 	slab_lock(page);
+	kasan_disable_local();
 
 	if (!check_slab(s, page))
 		goto fail;
@@ -1088,6 +1099,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
 	trace(s, page, object, 0);
 	init_object(s, object, SLUB_RED_INACTIVE);
 out:
+	kasan_enable_local();
 	slab_unlock(page);
 	/*
 	 * Keep node_lock to preserve integrity
@@ -1096,6 +1108,7 @@ out:
 	return n;
 
 fail:
+	kasan_enable_local();
 	slab_unlock(page);
 	spin_unlock_irqrestore(&n->list_lock, *flags);
slab_fix(s, "Object at 0x%p not freed", object); @@ -1371,8 +1384,11 @@ static void setup_object(struct kmem_cache *s, struct page *page, void *object) { setup_object_debug(s, page, object); - if (unlikely(s->ctor)) + if (unlikely(s->ctor)) { + kasan_disable_local(); s->ctor(object); + kasan_enable_local(); + } } static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node) @@ -1425,11 +1441,12 @@ static void __free_slab(struct kmem_cache *s, struct page *page) if (kmem_cache_debug(s)) { void *p; - + kasan_disable_local(); slab_pad_check(s, page); for_each_object(p, s, page_address(page), page->objects) check_object(s, page, p, SLUB_RED_INACTIVE); + kasan_enable_local(); } kmemcheck_free_shadow(page, compound_order(page));