From patchwork Wed Jul 5 12:44:02 2023
X-Patchwork-Submitter: andrey.konovalov@linux.dev
X-Patchwork-Id: 13302135
From: andrey.konovalov@linux.dev
To: Marco Elver, Mark Rutland
Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov, Andrey Ryabinin,
    Vincenzo Frascino, kasan-dev@googlegroups.com, Andrew Morton,
    linux-mm@kvack.org, Catalin Marinas, Peter Collingbourne, Feng Tang,
    stable@vger.kernel.org, Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Vlastimil Babka, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-kernel@vger.kernel.org,
    Andrey Konovalov
Subject: [PATCH] kasan, slub: fix HW_TAGS zeroing with slub_debug
Date: Wed, 5 Jul 2023 14:44:02 +0200
Message-Id: <678ac92ab790dba9198f9ca14f405651b97c8502.1688561016.git.andreyknvl@google.com>
MIME-Version: 1.0
From: Andrey Konovalov

Commit 946fa0dbf2d8 ("mm/slub: extend redzone check to extra allocated
kmalloc space than requested") added precise kmalloc redzone poisoning
to the slub_debug functionality.

However, this commit didn't account for HW_TAGS KASAN fully
initializing the object via its built-in memory initialization feature.
Even though HW_TAGS KASAN memory initialization contains special memory
initialization handling for when slub_debug is enabled, it does not
account for in-object slub_debug redzones. As a result, HW_TAGS KASAN
can overwrite these redzones and cause false-positive slub_debug reports.

To fix the issue, avoid HW_TAGS KASAN memory initialization when
slub_debug is enabled altogether. Implement this by moving the
__slub_debug_enabled check to slab_post_alloc_hook. Common slab code
seems like a more appropriate place for a slub_debug check anyway.

Fixes: 946fa0dbf2d8 ("mm/slub: extend redzone check to extra allocated kmalloc space than requested")
Cc:
Reported-by: Mark Rutland
Signed-off-by: Andrey Konovalov
Acked-by: Marco Elver
Reported-by: Will Deacon
Tested-by: Will Deacon
Acked-by: Vlastimil Babka
---
 mm/kasan/kasan.h | 12 ------------
 mm/slab.h        | 16 ++++++++++++++--
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index b799f11e45dc..2e973b36fe07 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -466,18 +466,6 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
 
-	/*
-	 * Explicitly initialize the memory with the precise object size to
-	 * avoid overwriting the slab redzone. This disables initialization in
-	 * the arch code and may thus lead to performance penalty. This penalty
-	 * does not affect production builds, as slab redzones are not enabled
-	 * there.
-	 */
-	if (__slub_debug_enabled() &&
-	    init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
-		init = false;
-		memzero_explicit((void *)addr, size);
-	}
 	size = round_up(size, KASAN_GRANULE_SIZE);
 
 	hw_set_mem_tag_range((void *)addr, size, tag, init);

diff --git a/mm/slab.h b/mm/slab.h
index 6a5633b25eb5..9c0e09d0f81f 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -723,6 +723,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 					unsigned int orig_size)
 {
 	unsigned int zero_size = s->object_size;
+	bool kasan_init = init;
 	size_t i;
 
 	flags &= gfp_allowed_mask;
@@ -739,6 +740,17 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	    (s->flags & SLAB_KMALLOC))
 		zero_size = orig_size;
 
+	/*
+	 * When slub_debug is enabled, avoid memory initialization integrated
+	 * into KASAN and instead zero out the memory via the memset below with
+	 * the proper size. Otherwise, KASAN might overwrite SLUB redzones and
+	 * cause false-positive reports. This does not lead to a performance
+	 * penalty on production builds, as slub_debug is not intended to be
+	 * enabled there.
+	 */
+	if (__slub_debug_enabled())
+		kasan_init = false;
+
 	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_slab_alloc and initialization memset must be
@@ -747,8 +759,8 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
 	 */
 	for (i = 0; i < size; i++) {
-		p[i] = kasan_slab_alloc(s, p[i], flags, init);
-		if (p[i] && init && !kasan_has_integrated_init())
+		p[i] = kasan_slab_alloc(s, p[i], flags, kasan_init);
+		if (p[i] && init && (!kasan_init || !kasan_has_integrated_init()))
 			memset(p[i], 0, zero_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);