From patchwork Tue Sep 13 06:54:23 2022
X-Patchwork-Submitter: Feng Tang <feng.tang@intel.com>
X-Patchwork-Id: 12974478
From: Feng Tang <feng.tang@intel.com>
To: Andrew Morton, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Dmitry Vyukov, Jonathan Corbet,
    Andrey Konovalov
Cc: Dave Hansen, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kasan-dev@googlegroups.com, Feng Tang
Subject: [PATCH v6 4/4] mm/slub: extend redzone check to extra allocated
 kmalloc space than requested
Date: Tue, 13 Sep 2022 14:54:23 +0800
Message-Id: <20220913065423.520159-5-feng.tang@intel.com>
In-Reply-To: <20220913065423.520159-1-feng.tang@intel.com>
References: <20220913065423.520159-1-feng.tang@intel.com>
kmalloc will round up the requested size to a fixed bucket size (mostly
a power of 2), so there can be extra space beyond what was requested,
whose size is the actual buffer size minus the original request size.
To better detect out-of-bounds accesses to, or abuse of, this space,
add a redzone sanity check for it.

In the current kernel, some kmalloc users already know this extra
space exists and make use of it after calling ksize() to learn the
real size of the allocated buffer. So skip the sanity check for
objects on which ksize() has been called, treating them as legitimate
users.

In some cases, the free pointer can be saved inside the latter part of
the object's data area, where it may overlap the redzone (for small
kmalloc objects). As suggested by Hyeonggon Yoo, force the free
pointer into the metadata area when kmalloc redzone debugging is
enabled, so that all kmalloc objects are covered by the redzone check.
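
To illustrate the round-up and the ksize() escape hatch, here is a
minimal sketch (not part of the diff below; the demo function is
hypothetical and the bucket size assumes the default kmalloc size
tables):

	#include <linux/slab.h>

	static int kmalloc_redzone_demo(void)
	{
		/*
		 * A 52-byte request is served from the kmalloc-64 cache,
		 * leaving 12 bytes of extra, unrequested space.
		 */
		char *buf = kmalloc(52, GFP_KERNEL);
		size_t full;

		if (!buf)
			return -ENOMEM;

		/*
		 * With this patch, bytes 52..63 are redzoned: writing
		 * them here would be reported at the next object check.
		 */

		full = ksize(buf);	/* 64; also disables the check */
		buf[full - 1] = 0;	/* legitimate use after ksize() */

		kfree(buf);
		return 0;
	}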
Suggested-by: Vlastimil Babka
Signed-off-by: Feng Tang <feng.tang@intel.com>
Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slab.h        |  4 ++++
 mm/slab_common.c |  4 ++++
 mm/slub.c        | 51 ++++++++++++++++++++++++++++++++++++++++++++----
 3 files changed, 55 insertions(+), 4 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 3cf5adf63f48..5ca04d9c8bf5 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -881,4 +881,8 @@ void __check_heap_object(const void *ptr, unsigned long n,
 }
 #endif
 
+#ifdef CONFIG_SLUB_DEBUG
+void skip_orig_size_check(struct kmem_cache *s, const void *object);
+#endif
+
 #endif /* MM_SLAB_H */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 8e13e3aac53f..5106667d6adb 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1001,6 +1001,10 @@ size_t __ksize(const void *object)
 		return folio_size(folio);
 	}
 
+#ifdef CONFIG_SLUB_DEBUG
+	skip_orig_size_check(folio_slab(folio)->slab_cache, object);
+#endif
+
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
 
diff --git a/mm/slub.c b/mm/slub.c
index 6f823e99d8b4..546b30ed5afd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -812,12 +812,28 @@ static inline void set_orig_size(struct kmem_cache *s,
 	if (!slub_debug_orig_size(s))
 		return;
 
+#ifdef CONFIG_KASAN_GENERIC
+	/*
+	 * KASAN can save its free meta data in the object's data area
+	 * at offset 0; if that is larger than 'orig_size', it would
+	 * overlap the data redzone (from 'orig_size+1' to 'object_size'),
+	 * in which case the check should be skipped.
+	 */
+	if (s->kasan_info.free_meta_size > orig_size)
+		orig_size = s->object_size;
+#endif
+
 	p += get_info_end(s);
 	p += sizeof(struct track) * 2;
 
 	*(unsigned int *)p = orig_size;
 }
 
+void skip_orig_size_check(struct kmem_cache *s, const void *object)
+{
+	set_orig_size(s, (void *)object, s->object_size);
+}
+
 static unsigned int get_orig_size(struct kmem_cache *s, void *object)
 {
 	void *p = kasan_reset_tag(object);
@@ -949,13 +965,27 @@ static __printf(3, 4) void slab_err(struct kmem_cache *s, struct slab *slab,
 static void init_object(struct kmem_cache *s, void *object, u8 val)
 {
 	u8 *p = kasan_reset_tag(object);
+	unsigned int orig_size = s->object_size;
 
-	if (s->flags & SLAB_RED_ZONE)
+	if (s->flags & SLAB_RED_ZONE) {
 		memset(p - s->red_left_pad, val, s->red_left_pad);
 
+		if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
+			orig_size = get_orig_size(s, object);
+
+			/*
+			 * Redzone the extra space kmalloc allocated
+			 * beyond the requested size.
+			 */
+			if (orig_size < s->object_size)
+				memset(p + orig_size, val,
+				       s->object_size - orig_size);
+		}
+	}
+
 	if (s->flags & __OBJECT_POISON) {
-		memset(p, POISON_FREE, s->object_size - 1);
-		p[s->object_size - 1] = POISON_END;
+		memset(p, POISON_FREE, orig_size - 1);
+		p[orig_size - 1] = POISON_END;
 	}
 
 	if (s->flags & SLAB_RED_ZONE)
@@ -1103,6 +1133,7 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
 {
 	u8 *p = object;
 	u8 *endobject = object + s->object_size;
+	unsigned int orig_size;
 
 	if (s->flags & SLAB_RED_ZONE) {
 		if (!check_bytes_and_report(s, slab, object, "Left Redzone",
@@ -1112,6 +1143,17 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
 		if (!check_bytes_and_report(s, slab, object, "Right Redzone",
 			endobject, val, s->inuse - s->object_size))
 			return 0;
+
+		if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
+			orig_size = get_orig_size(s, object);
+
+			if (s->object_size > orig_size &&
+				!check_bytes_and_report(s, slab, object,
+					"kmalloc Redzone", p + orig_size,
+					val, s->object_size - orig_size)) {
+				return 0;
+			}
+		}
 	} else {
 		if ((s->flags & SLAB_POISON) && s->object_size < s->inuse) {
 			check_bytes_and_report(s, slab, p, "Alignment padding",
@@ -4187,7 +4229,8 @@ static int calculate_sizes(struct kmem_cache *s)
 	 */
 	s->inuse = size;
 
-	if ((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
+	if (slub_debug_orig_size(s) ||
+	    (flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
 	    ((flags & SLAB_RED_ZONE) && s->object_size < sizeof(void *)) ||
 	    s->ctor) {
 		/*
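
A usage note (not part of the patch; the gating follows the earlier
patches in this series, where orig_size is only recorded when user
tracking is enabled): the new "kmalloc Redzone" report fires when both
red zoning and user tracking are turned on for the kmalloc caches,
e.g. by booting with something like:

	slub_debug=ZU,kmalloc-*

With that, an out-of-bounds write into the rounded-up tail, such as
writing byte 60 of a kmalloc(52) buffer without calling ksize() first,
should be flagged as a "kmalloc Redzone" overwrite at the next check
of the object (e.g. on kfree()).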