From patchwork Fri Oct 21 03:24:03 2022
From: Feng Tang
To: Andrew Morton, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
 Dmitry Vyukov, Andrey Konovalov, Kees Cook
Cc: Dave Hansen, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kasan-dev@googlegroups.com, Feng Tang
Subject: [PATCH v7 1/3] mm/slub: only zero requested size of buffer for
 kzalloc when debug enabled
Date: Fri, 21 Oct 2022 11:24:03 +0800
Message-Id: <20221021032405.1825078-2-feng.tang@intel.com>
In-Reply-To: <20221021032405.1825078-1-feng.tang@intel.com>
References: <20221021032405.1825078-1-feng.tang@intel.com>

kzalloc/kmalloc will round up the requested size to a fixed size
(mostly a power of 2), so the allocated memory can be larger than
requested. Currently the kzalloc family of APIs zeroes all of the
allocated memory. To detect out-of-bounds usage of the extra allocated
memory, zero only the requested part, so that a redzone sanity check
can be added for the extra space later.

kzalloc users who call ksize() later and utilize this extra space
should be aware that the space is no longer zeroed when debug is
enabled. (Thanks to Kees Cook's effort to sanitize all ksize() call
sites [1], this won't be a big issue.)

[1]. https://lore.kernel.org/all/20220922031013.2150682-1-keescook@chromium.org/#r

Signed-off-by: Feng Tang
Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Andrey Konovalov
---
 mm/slab.c |  7 ++++---
 mm/slab.h | 18 ++++++++++++++++--
 mm/slub.c | 10 +++++++---
 3 files changed, 27 insertions(+), 8 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index a5486ff8362a..4594de0e3d6b 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3253,7 +3253,8 @@ slab_alloc_node(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 	init = slab_want_init_on_alloc(flags, cachep);
 
 out:
-	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
+	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init,
+			     cachep->object_size);
 	return objp;
 }
 
@@ -3506,13 +3507,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * Done outside of the IRQ disabled section.
 	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
-				slab_want_init_on_alloc(flags, s));
+			slab_want_init_on_alloc(flags, s), s->object_size);
 
 	/* FIXME: Trace call missing. Christoph would like a bulk variant */
 	return size;
 error:
 	local_irq_enable();
 	cache_alloc_debugcheck_after_bulk(s, flags, i, p, _RET_IP_);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false);
+	slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
 	kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
diff --git a/mm/slab.h b/mm/slab.h
index 0202a8c2f0d2..8b4ee02fc14a 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -720,12 +720,26 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
 
 static inline void slab_post_alloc_hook(struct kmem_cache *s,
 					struct obj_cgroup *objcg, gfp_t flags,
-					size_t size, void **p, bool init)
+					size_t size, void **p, bool init,
+					unsigned int orig_size)
 {
+	unsigned int zero_size = s->object_size;
 	size_t i;
 
 	flags &= gfp_allowed_mask;
 
+	/*
+	 * For kmalloc object, the allocated memory size(object_size) is likely
+	 * larger than the requested size(orig_size). If redzone check is
+	 * enabled for the extra space, don't zero it, as it will be redzoned
+	 * soon. The redzone operation for this extra space could be seen as a
+	 * replacement of current poisoning under certain debug option, and
+	 * won't break other sanity checks.
+	 */
+	if (kmem_cache_debug_flags(s, SLAB_STORE_USER) &&
+	    (s->flags & SLAB_KMALLOC))
+		zero_size = orig_size;
+
 	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_slab_alloc and initialization memset must be
@@ -736,7 +750,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	for (i = 0; i < size; i++) {
 		p[i] = kasan_slab_alloc(s, p[i], flags, init);
 		if (p[i] && init && !kasan_has_integrated_init())
-			memset(p[i], 0, s->object_size);
+			memset(p[i], 0, zero_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
 		kmsan_slab_alloc(s, p[i], flags);
diff --git a/mm/slub.c b/mm/slub.c
index 12354fb8d6e4..17292c2d3eee 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3395,7 +3395,11 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
 	init = slab_want_init_on_alloc(gfpflags, s);
 
 out:
-	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init);
+	/*
+	 * When init equals 'true', like for kzalloc() family, only
+	 * @orig_size bytes will be zeroed instead of s->object_size
+	 */
+	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init, orig_size);
 
 	return object;
 }
@@ -3852,11 +3856,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * Done outside of the IRQ disabled fastpath loop.
 	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
-				slab_want_init_on_alloc(flags, s));
+			slab_want_init_on_alloc(flags, s), s->object_size);
 	return i;
 error:
 	slub_put_cpu_ptr(s->cpu_slab);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false);
+	slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
 	kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
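
[The zeroing change in patch 1/3 can be illustrated with a small,
self-contained userspace sketch. This is an analogy only, assuming a
power-of-2 bucket allocator: bucket_size(), debug_kzalloc() and the
0xcc fill byte are made-up stand-ins, not kernel APIs. In SLUB itself
the same effect comes from slab_post_alloc_hook() zeroing only
orig_size bytes when SLAB_STORE_USER and SLAB_KMALLOC are set.]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define REDZONE_BYTE 0xcc	/* stand-in for SLUB's redzone pattern */

/* Round up to the next power of 2, like kmalloc's size buckets. */
static size_t bucket_size(size_t n)
{
	size_t b = 8;

	while (b < n)
		b <<= 1;
	return b;
}

/* Zero only the requested bytes; redzone the rounded-up tail. */
static unsigned char *debug_kzalloc(size_t orig_size, size_t *bucket)
{
	unsigned char *p;

	*bucket = bucket_size(orig_size);
	p = malloc(*bucket);
	if (!p)
		return NULL;
	memset(p, 0, orig_size);
	memset(p + orig_size, REDZONE_BYTE, *bucket - orig_size);
	return p;
}

int main(void)
{
	size_t bucket;
	unsigned char *p = debug_kzalloc(10, &bucket);	/* 10 -> 16-byte bucket */

	if (!p)
		return 1;
	/* Any write past byte 9 lands in the redzone and is detectable. */
	printf("requested 10, bucket %zu, byte 12 = 0x%02x\n", bucket, p[12]);
	free(p);
	return 0;
}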

From patchwork Fri Oct 21 03:24:04 2022
From: Feng Tang
To: Andrew Morton, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
 Dmitry Vyukov, Andrey Konovalov, Kees Cook
Cc: Dave Hansen, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kasan-dev@googlegroups.com, Feng Tang, kernel test robot,
 Andrey Ryabinin, Alexander Potapenko, Vincenzo Frascino
Subject: [PATCH v7 2/3] mm: kasan: Extend kasan_metadata_size() to also
 cover in-object size
Date: Fri, 21 Oct 2022 11:24:04 +0800
Message-Id: <20221021032405.1825078-3-feng.tang@intel.com>
In-Reply-To: <20221021032405.1825078-1-feng.tang@intel.com>
References: <20221021032405.1825078-1-feng.tang@intel.com>

When KASAN is enabled for slab/slub, it may save its free_meta data in
the first part of the slab object's data area on the object's free
path, which works fine. There is an ongoing effort to extend slub's
debug facilities to redzone the latter part of a kmalloc object's
area; when both debug features are enabled, they can conflict,
especially for small kmalloc objects, as caught by the 0Day bot [1].

To solve this, the slub code needs to know the size of KASAN's
in-object metadata. The existing kasan_metadata_size() returns the
size of KASAN's metadata inside slub's metadata area, so extend it to
also cover the in-object metadata size by adding a boolean flag
'in_object'. There is no functional change to the existing code logic.
[1]. https://lore.kernel.org/lkml/YuYm3dWwpZwH58Hu@xsang-OptiPlex-9020/

Reported-by: kernel test robot
Suggested-by: Andrey Konovalov
Signed-off-by: Feng Tang
Reviewed-by: Andrey Konovalov
Cc: Andrey Ryabinin
Cc: Alexander Potapenko
Cc: Dmitry Vyukov
Cc: Vincenzo Frascino
---
 include/linux/kasan.h |  5 +++--
 mm/kasan/generic.c    | 19 +++++++++++++------
 mm/slub.c             |  4 ++--
 3 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index d811b3d7d2a1..96c9d56e5510 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -302,7 +302,7 @@ static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
 
 #ifdef CONFIG_KASAN_GENERIC
 
-size_t kasan_metadata_size(struct kmem_cache *cache);
+size_t kasan_metadata_size(struct kmem_cache *cache, bool in_object);
 slab_flags_t kasan_never_merge(void);
 void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 			slab_flags_t *flags);
@@ -315,7 +315,8 @@ void kasan_record_aux_stack_noalloc(void *ptr);
 #else /* CONFIG_KASAN_GENERIC */
 
 /* Tag-based KASAN modes do not use per-object metadata. */
-static inline size_t kasan_metadata_size(struct kmem_cache *cache)
+static inline size_t kasan_metadata_size(struct kmem_cache *cache,
+						bool in_object)
 {
 	return 0;
 }
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index d8b5590f9484..b076f597a378 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -450,15 +450,22 @@ void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
 		__memset(alloc_meta, 0, sizeof(*alloc_meta));
 }
 
-size_t kasan_metadata_size(struct kmem_cache *cache)
+size_t kasan_metadata_size(struct kmem_cache *cache, bool in_object)
 {
+	struct kasan_cache *info = &cache->kasan_info;
+
 	if (!kasan_requires_meta())
 		return 0;
-	return (cache->kasan_info.alloc_meta_offset ?
-		sizeof(struct kasan_alloc_meta) : 0) +
-		((cache->kasan_info.free_meta_offset &&
-		  cache->kasan_info.free_meta_offset != KASAN_NO_FREE_META) ?
-		sizeof(struct kasan_free_meta) : 0);
+
+	if (in_object)
+		return (info->free_meta_offset ?
+			0 : sizeof(struct kasan_free_meta));
+	else
+		return (info->alloc_meta_offset ?
+			sizeof(struct kasan_alloc_meta) : 0) +
+			((info->free_meta_offset &&
+			  info->free_meta_offset != KASAN_NO_FREE_META) ?
+			sizeof(struct kasan_free_meta) : 0);
 }
 
 static void __kasan_record_aux_stack(void *addr, bool can_alloc)
diff --git a/mm/slub.c b/mm/slub.c
index 17292c2d3eee..adff7553b54e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -910,7 +910,7 @@ static void print_trailer(struct kmem_cache *s, struct slab *slab, u8 *p)
 	if (slub_debug_orig_size(s))
 		off += sizeof(unsigned int);
 
-	off += kasan_metadata_size(s);
+	off += kasan_metadata_size(s, false);
 
 	if (off != size_from_object(s))
 		/* Beginning of the filler is the free pointer */
@@ -1070,7 +1070,7 @@ static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p)
 		off += sizeof(unsigned int);
 	}
 
-	off += kasan_metadata_size(s);
+	off += kasan_metadata_size(s, false);
 
 	if (size_from_object(s) == off)
 		return 1;
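
[The effect of the new 'in_object' flag can be seen in a standalone
sketch. The struct sizes and the kasan_cache_info type below are
simplified, assumed stand-ins for KASAN's real structures, chosen only
to show how the out-of-object and in-object metadata sizes split.]

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define NO_FREE_META ((size_t)-1)	/* analogue of KASAN_NO_FREE_META */

struct alloc_meta { char stacks[48]; };	/* sizes are illustrative only */
struct free_meta  { char stack[24]; };

struct kasan_cache_info {
	size_t alloc_meta_offset;	/* offset in slub's metadata area */
	size_t free_meta_offset;	/* 0 == stored in-object at offset 0 */
};

static size_t metadata_size(const struct kasan_cache_info *info, bool in_object)
{
	if (in_object)
		/* free_meta lives inside the object only when its offset is 0 */
		return info->free_meta_offset ? 0 : sizeof(struct free_meta);

	/* otherwise count what sits in slub's out-of-object metadata area */
	return (info->alloc_meta_offset ? sizeof(struct alloc_meta) : 0) +
	       ((info->free_meta_offset &&
		 info->free_meta_offset != NO_FREE_META) ?
			sizeof(struct free_meta) : 0);
}

int main(void)
{
	/* a small cache whose free_meta would occupy offset 0 of the object */
	struct kasan_cache_info small = {
		.alloc_meta_offset = 64,
		.free_meta_offset = 0,
	};

	/* the in-object part is what can overlap a kmalloc redzone */
	printf("out-of-object: %zu, in-object: %zu\n",
	       metadata_size(&small, false), metadata_size(&small, true));
	return 0;
}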

From patchwork Fri Oct 21 03:24:05 2022
From: Feng Tang
To: Andrew Morton, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
 Dmitry Vyukov, Andrey Konovalov, Kees Cook
Cc: Dave Hansen, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kasan-dev@googlegroups.com, Feng Tang
Subject: [PATCH v7 3/3] mm/slub: extend redzone check to extra allocated
 kmalloc space than requested
Date: Fri, 21 Oct 2022 11:24:05 +0800
Message-Id: <20221021032405.1825078-4-feng.tang@intel.com>
In-Reply-To: <20221021032405.1825078-1-feng.tang@intel.com>
References: <20221021032405.1825078-1-feng.tang@intel.com>

kmalloc will round up the requested size to a fixed size (mostly a
power of 2), so there can be extra space beyond what is requested,
whose size is the actual buffer size minus the original request size.
To better detect out-of-bounds access or abuse of this space, add a
redzone sanity check for it.

In the current kernel, some kmalloc users already know about the
existence of this space and utilize it after calling 'ksize()' to
learn the real size of the allocated buffer. So skip the sanity check
for objects on which ksize() has been called, treating them as
legitimate users.

In some cases, the free pointer can be saved inside the latter part of
the object's data area and may overlap the redzone (for small kmalloc
objects). As suggested by Hyeonggon Yoo, force the free pointer into
the metadata area when kmalloc redzone debugging is enabled, so that
all kmalloc objects are covered by the redzone check.
Suggested-by: Vlastimil Babka
Signed-off-by: Feng Tang
Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slab.h        |  4 ++++
 mm/slab_common.c |  4 ++++
 mm/slub.c        | 51 ++++++++++++++++++++++++++++++++++++++++++++----
 3 files changed, 55 insertions(+), 4 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 8b4ee02fc14a..1dd773afd0c4 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -885,4 +885,8 @@ void __check_heap_object(const void *ptr, unsigned long n,
 }
 #endif
 
+#ifdef CONFIG_SLUB_DEBUG
+void skip_orig_size_check(struct kmem_cache *s, const void *object);
+#endif
+
 #endif /* MM_SLAB_H */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 33b1886b06eb..0bb4625f10a2 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1037,6 +1037,10 @@ size_t __ksize(const void *object)
 		return folio_size(folio);
 	}
 
+#ifdef CONFIG_SLUB_DEBUG
+	skip_orig_size_check(folio_slab(folio)->slab_cache, object);
+#endif
+
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
diff --git a/mm/slub.c b/mm/slub.c
index adff7553b54e..76581da6b9df 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -829,6 +829,17 @@ static inline void set_orig_size(struct kmem_cache *s,
 	if (!slub_debug_orig_size(s))
 		return;
 
+#ifdef CONFIG_KASAN_GENERIC
+	/*
+	 * KASAN could save its free meta data in object's data area at
+	 * offset 0, if the size is larger than 'orig_size', it will
+	 * overlap the data redzone in [orig_size+1, object_size], and
+	 * the check should be skipped.
+	 */
+	if (kasan_metadata_size(s, true) > orig_size)
+		orig_size = s->object_size;
+#endif
+
 	p += get_info_end(s);
 	p += sizeof(struct track) * 2;
 
@@ -848,6 +859,11 @@ static inline unsigned int get_orig_size(struct kmem_cache *s, void *object)
 	return *(unsigned int *)p;
 }
 
+void skip_orig_size_check(struct kmem_cache *s, const void *object)
+{
+	set_orig_size(s, (void *)object, s->object_size);
+}
+
 static void slab_bug(struct kmem_cache *s, char *fmt, ...)
 {
 	struct va_format vaf;
@@ -966,13 +982,27 @@ static __printf(3, 4) void slab_err(struct kmem_cache *s, struct slab *slab,
 static void init_object(struct kmem_cache *s, void *object, u8 val)
 {
 	u8 *p = kasan_reset_tag(object);
+	unsigned int orig_size = s->object_size;
 
-	if (s->flags & SLAB_RED_ZONE)
+	if (s->flags & SLAB_RED_ZONE) {
 		memset(p - s->red_left_pad, val, s->red_left_pad);
 
+		if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
+			orig_size = get_orig_size(s, object);
+
+			/*
+			 * Redzone the extra allocated space by kmalloc
+			 * than requested.
+			 */
+			if (orig_size < s->object_size)
+				memset(p + orig_size, val,
+				       s->object_size - orig_size);
+		}
+	}
+
 	if (s->flags & __OBJECT_POISON) {
-		memset(p, POISON_FREE, s->object_size - 1);
-		p[s->object_size - 1] = POISON_END;
+		memset(p, POISON_FREE, orig_size - 1);
+		p[orig_size - 1] = POISON_END;
 	}
 
 	if (s->flags & SLAB_RED_ZONE)
@@ -1120,6 +1150,7 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
 {
 	u8 *p = object;
 	u8 *endobject = object + s->object_size;
+	unsigned int orig_size;
 
 	if (s->flags & SLAB_RED_ZONE) {
 		if (!check_bytes_and_report(s, slab, object, "Left Redzone",
@@ -1129,6 +1160,17 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
 		if (!check_bytes_and_report(s, slab, object, "Right Redzone",
 			endobject, val, s->inuse - s->object_size))
 			return 0;
+
+		if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
+			orig_size = get_orig_size(s, object);
+
+			if (s->object_size > orig_size &&
+				!check_bytes_and_report(s, slab, object,
+					"kmalloc Redzone", p + orig_size,
+					val, s->object_size - orig_size)) {
+				return 0;
+			}
+		}
 	} else {
 		if ((s->flags & SLAB_POISON) && s->object_size < s->inuse) {
 			check_bytes_and_report(s, slab, p, "Alignment padding",
@@ -4206,7 +4248,8 @@ static int calculate_sizes(struct kmem_cache *s)
 	 */
 	s->inuse = size;
 
-	if ((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
+	if (slub_debug_orig_size(s) ||
+	    (flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
 	    ((flags & SLAB_RED_ZONE) && s->object_size < sizeof(void *)) ||
 	    s->ctor) {
 		/*
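
[Taken together, the series behaves roughly like the following
userspace sketch, assuming a 16-byte object holding a 10-byte request:
the tail is redzoned at allocation, checked later, and a ksize() call
widens the stored orig_size so legitimate users of the full buffer are
not flagged. obj_init(), obj_ksize() and obj_check() are hypothetical
helpers loosely mirroring init_object(), __ksize() with
skip_orig_size_check(), and check_object().]

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define REDZONE_BYTE 0xcc	/* stand-in for SLUB_RED_ACTIVE */

struct obj {
	unsigned char data[16];	/* object_size = 16 */
	size_t orig_size;	/* tracked like slub's stored orig_size */
};

/* init_object() analogue: zero the request, redzone the tail */
static void obj_init(struct obj *o, size_t orig_size)
{
	o->orig_size = orig_size;
	memset(o->data, 0, orig_size);
	memset(o->data + orig_size, REDZONE_BYTE,
	       sizeof(o->data) - orig_size);
}

/* ksize() analogue: the caller may now legally use all bytes */
static size_t obj_ksize(struct obj *o)
{
	o->orig_size = sizeof(o->data);	/* like skip_orig_size_check() */
	return sizeof(o->data);
}

/* check_object() analogue for the "kmalloc Redzone" area */
static bool obj_check(const struct obj *o)
{
	for (size_t i = o->orig_size; i < sizeof(o->data); i++)
		if (o->data[i] != REDZONE_BYTE)
			return false;	/* out-of-bounds write detected */
	return true;
}

int main(void)
{
	struct obj o;

	obj_init(&o, 10);
	o.data[12] = 0;		/* out-of-bounds write past orig_size */
	printf("after OOB write: %s\n",
	       obj_check(&o) ? "ok" : "redzone overwritten");

	obj_init(&o, 10);
	obj_ksize(&o);		/* legitimizes use of the full buffer */
	o.data[12] = 0;
	printf("after ksize():   %s\n",
	       obj_check(&o) ? "ok" : "redzone overwritten");
	return 0;
}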