From patchwork Tue Sep 13 06:54:21 2022
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12974476
From: Feng Tang <feng.tang@intel.com>
To: Andrew Morton, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Dmitry Vyukov, Jonathan Corbet,
    Andrey Konovalov
Cc: Dave Hansen, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kasan-dev@googlegroups.com, Feng Tang
Subject: [PATCH v6 2/4] mm/slub: only zero the requested size of buffer for
 kzalloc
Date: Tue, 13 Sep 2022 14:54:21 +0800
Message-Id: <20220913065423.520159-3-feng.tang@intel.com>
In-Reply-To: <20220913065423.520159-1-feng.tang@intel.com>
References: <20220913065423.520159-1-feng.tang@intel.com>
kzalloc/kmalloc will round up the requested size to a fixed size
(mostly a power of 2), so the allocated memory can be larger than
what was requested. Currently the kzalloc family of APIs zeroes all
of the allocated memory.

To detect out-of-bounds usage of the extra allocated memory, zero
only the requested part, so that sanity checks can be added for the
extra space later.

Performance-wise, the smaller zeroing length also brings shorter
execution time, as shown by test data on various server/desktop
platforms.

For kzalloc users who will call ksize() later and utilize this
extra space, please be aware that the space is no longer zeroed.

Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/slab.c |  7 ++++---
 mm/slab.h |  5 +++--
 mm/slub.c | 10 +++++++---
 3 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index a5486ff8362a..4594de0e3d6b 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3253,7 +3253,8 @@ slab_alloc_node(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 	init = slab_want_init_on_alloc(flags, cachep);
 
 out:
-	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
+	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init,
+				cachep->object_size);
 	return objp;
 }
 
@@ -3506,13 +3507,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * Done outside of the IRQ disabled section.
	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
-				slab_want_init_on_alloc(flags, s));
+			slab_want_init_on_alloc(flags, s), s->object_size);
 	/* FIXME: Trace call missing. Christoph would like a bulk variant */
 	return size;
 error:
 	local_irq_enable();
 	cache_alloc_debugcheck_after_bulk(s, flags, i, p, _RET_IP_);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false);
+	slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
 	kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
diff --git a/mm/slab.h b/mm/slab.h
index d0ef9dd44b71..3cf5adf63f48 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -730,7 +730,8 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
 
 static inline void slab_post_alloc_hook(struct kmem_cache *s,
 					struct obj_cgroup *objcg, gfp_t flags,
-					size_t size, void **p, bool init)
+					size_t size, void **p, bool init,
+					unsigned int orig_size)
 {
 	size_t i;
 
@@ -746,7 +747,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	for (i = 0; i < size; i++) {
 		p[i] = kasan_slab_alloc(s, p[i], flags, init);
 		if (p[i] && init && !kasan_has_integrated_init())
-			memset(p[i], 0, s->object_size);
+			memset(p[i], 0, orig_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
 	}
diff --git a/mm/slub.c b/mm/slub.c
index c8ba16b3a4db..6f823e99d8b4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3376,7 +3376,11 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
 	init = slab_want_init_on_alloc(gfpflags, s);
 
 out:
-	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init);
+	/*
+	 * When init equals 'true', like for kzalloc() family, only
+	 * @orig_size bytes will be zeroed instead of s->object_size
+	 */
+	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init, orig_size);
 
 	return object;
 }
@@ -3833,11 +3837,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * Done outside of the IRQ disabled fastpath loop.
 	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
-			slab_want_init_on_alloc(flags, s), s->object_size);
 	return i;
 error:
 	slub_put_cpu_ptr(s->cpu_slab);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false);
+	slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
 	kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
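
[Editor's note] To make the behavior change concrete, here is a small
userspace sketch, not kernel code: bucket_size() is an illustrative
stand-in for the kmalloc size classes, and only the relationship
between orig_size and object_size follows the patch.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Round a request up to the next power of two, as the kmalloc
 * buckets (8, 16, 32, ...) effectively do for most request sizes.
 */
static size_t bucket_size(size_t n)
{
	size_t b = 8;

	while (b < n)
		b <<= 1;
	return b;
}

int main(void)
{
	size_t orig_size = 52;		/* what the caller asked for */
	size_t object_size = bucket_size(orig_size);	/* 64: what the slab hands out */
	char *p = malloc(object_size);

	if (!p)
		return 1;

	/* Before the patch: the whole object is zeroed. */
	/* memset(p, 0, object_size); */

	/*
	 * After the patch: only the requested bytes are zeroed, so
	 * bytes [orig_size, object_size) keep whatever pattern the
	 * allocator (or a later sanity/redzone check) puts there.
	 */
	memset(p, 0, orig_size);

	printf("requested %zu, allocated %zu, %zu bytes left unzeroed\n",
	       orig_size, object_size, object_size - orig_size);
	free(p);
	return 0;
}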
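
And a hypothetical example of the ksize() pattern called out in the
commit message: struct buf and buf_create() are made up for
illustration, while kzalloc()/ksize()/struct_size() are the existing
kernel APIs.

#include <linux/slab.h>
#include <linux/overflow.h>

struct buf {
	size_t cap;
	char data[];
};

static struct buf *buf_create(size_t len, gfp_t gfp)
{
	struct buf *b = kzalloc(struct_size(b, data, len), gfp);

	if (!b)
		return NULL;

	/*
	 * ksize() may report more usable space than was requested.
	 * With this patch the extra bytes are no longer zeroed by
	 * kzalloc(), so a caller growing into them must clear them
	 * explicitly before relying on zero-initialization.
	 */
	b->cap = ksize(b) - sizeof(*b);
	memset(b->data + len, 0, b->cap - len);
	return b;
}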