From patchwork Tue Mar 8 11:41:28 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Marco Elver, Matthew Wilcox,
 Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [RFC PATCH v1 01/15] mm/slab: cleanup slab_alloc() and slab_alloc_node()
Date: Tue, 8 Mar 2022 11:41:28 +0000
Message-Id: <20220308114142.1744229-2-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-1-42.hyeyoo@gmail.com>

Make slab_alloc_node() available for non-NUMA configurations and make
slab_alloc() a wrapper of slab_alloc_node(). This is necessary for
further cleanup.

Do not check the availability of the node when allocating from locally
cached objects; the check is redundant there.

This patch was tested with both CONFIG_NUMA=y and CONFIG_NUMA=n.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slab.c | 116 +++++++++++++++++++++++-------------------------
 1 file changed, 50 insertions(+), 66 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index ddf5737c63d9..5d102aaf1629 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3200,60 +3200,6 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 	return obj ? obj : fallback_alloc(cachep, flags);
 }
 
-static __always_inline void *
-slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
-		unsigned long caller)
-{
-	unsigned long save_flags;
-	void *ptr;
-	int slab_node = numa_mem_id();
-	struct obj_cgroup *objcg = NULL;
-	bool init = false;
-
-	flags &= gfp_allowed_mask;
-	cachep = slab_pre_alloc_hook(cachep, &objcg, 1, flags);
-	if (unlikely(!cachep))
-		return NULL;
-
-	ptr = kfence_alloc(cachep, orig_size, flags);
-	if (unlikely(ptr))
-		goto out_hooks;
-
-	cache_alloc_debugcheck_before(cachep, flags);
-	local_irq_save(save_flags);
-
-	if (nodeid == NUMA_NO_NODE)
-		nodeid = slab_node;
-
-	if (unlikely(!get_node(cachep, nodeid))) {
-		/* Node not bootstrapped yet */
-		ptr = fallback_alloc(cachep, flags);
-		goto out;
-	}
-
-	if (nodeid == slab_node) {
-		/*
-		 * Use the locally cached objects if possible.
-		 * However ____cache_alloc does not allow fallback
-		 * to other nodes. It may fail while we still have
-		 * objects on other nodes available.
-		 */
-		ptr = ____cache_alloc(cachep, flags);
-		if (ptr)
-			goto out;
-	}
-	/* ___cache_alloc_node can fall back to other nodes */
-	ptr = ____cache_alloc_node(cachep, flags, nodeid);
-out:
-	local_irq_restore(save_flags);
-	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
-	init = slab_want_init_on_alloc(flags, cachep);
-
-out_hooks:
-	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr, init);
-	return ptr;
-}
-
 static __always_inline void *
 __do_cache_alloc(struct kmem_cache *cache, gfp_t flags)
 {
@@ -3283,14 +3229,24 @@ __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
 {
 	return ____cache_alloc(cachep, flags);
 }
-
 #endif /* CONFIG_NUMA */
 
+static __always_inline bool node_match(int nodeid, int slab_node)
+{
+#ifdef CONFIG_NUMA
+	if (nodeid != NUMA_NO_NODE && nodeid != slab_node)
+		return false;
+#endif
+	return true;
+}
+
 static __always_inline void *
-slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned long caller)
+slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
+		unsigned long caller)
 {
 	unsigned long save_flags;
-	void *objp;
+	void *ptr;
+	int slab_node = numa_mem_id();
 	struct obj_cgroup *objcg = NULL;
 	bool init = false;
 
@@ -3299,21 +3255,49 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned lo
 	if (unlikely(!cachep))
 		return NULL;
 
-	objp = kfence_alloc(cachep, orig_size, flags);
-	if (unlikely(objp))
-		goto out;
+	ptr = kfence_alloc(cachep, orig_size, flags);
+	if (unlikely(ptr))
+		goto out_hooks;
 
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
-	objp = __do_cache_alloc(cachep, flags);
+
+	if (node_match(nodeid, slab_node)) {
+		/*
+		 * Use the locally cached objects if possible.
+		 * However ____cache_alloc does not allow fallback
+		 * to other nodes. It may fail while we still have
+		 * objects on other nodes available.
+		 */
+		ptr = ____cache_alloc(cachep, flags);
+		if (ptr)
+			goto out;
+	}
+#ifdef CONFIG_NUMA
+	else if (unlikely(!get_node(cachep, nodeid))) {
+		/* Node not bootstrapped yet */
+		ptr = fallback_alloc(cachep, flags);
+		goto out;
+	}
+
+	/* ___cache_alloc_node can fall back to other nodes */
+	ptr = ____cache_alloc_node(cachep, flags, nodeid);
+#endif
+out:
 	local_irq_restore(save_flags);
-	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
-	prefetchw(objp);
+	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
+	prefetchw(ptr);
 	init = slab_want_init_on_alloc(flags, cachep);
-out:
-	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
-	return objp;
+out_hooks:
+	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr, init);
+	return ptr;
+}
+
+static __always_inline void *
+slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned long caller)
+{
+	return slab_alloc_node(cachep, flags, NUMA_NO_NODE, orig_size, caller);
+}
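[Editorial illustration, not part of the patch: after this change both public
entry points funnel into the same slab_alloc_node(); cachep here stands for
any previously created kmem_cache.]

	/* Both calls now share slab_alloc_node(). NUMA_NO_NODE (and the
	 * local node) pass node_match() and try the per-CPU array cache
	 * first; an explicit remote node goes to ____cache_alloc_node(),
	 * which may fall back to other nodes.
	 */
	void *a = kmem_cache_alloc(cachep, GFP_KERNEL);          /* nodeid == NUMA_NO_NODE */
	void *b = kmem_cache_alloc_node(cachep, GFP_KERNEL, 1);  /* explicit node 1 */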
From patchwork Tue Mar 8 11:41:29 2022

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Marco Elver, Matthew Wilcox,
 Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [RFC PATCH v1 02/15] mm/sl[auo]b: remove CONFIG_NUMA ifdefs for common functions
Date: Tue, 8 Mar 2022 11:41:29 +0000
Message-Id: <20220308114142.1744229-3-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-1-42.hyeyoo@gmail.com>

Now that slab_alloc_node() is available on SLAB regardless of
CONFIG_NUMA, remove the CONFIG_NUMA ifdefs and make the non-NUMA
variants of these functions wrappers of the NUMA variants. This makes
the slab allocators use the NUMA variants of the tracepoints; the
tracepoints themselves will be cleaned up in a later patch.

Remove the now-unused __do_kmalloc() in SLAB.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h | 85 ++++++++++++++++++--------------------------
 mm/slab.c            | 63 --------------------------------
 mm/slob.c            | 22 ------------
 mm/slub.c            | 62 --------------------------------
 4 files changed, 35 insertions(+), 197 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 37bde99b74af..df8e5dca00a2 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -414,8 +414,31 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 #define kmalloc_index(s) __kmalloc_index(s, true)
 #endif /* !CONFIG_SLOB */
 
-void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1);
-void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags) __assume_slab_alignment __malloc;
+void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment
+							 __alloc_size(1);
+void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment
+									 __malloc;
+
+static __always_inline void *__kmalloc(size_t size, gfp_t flags)
+{
+	return __kmalloc_node(size, flags, NUMA_NO_NODE);
+}
+
+/**
+ * kmem_cache_alloc - Allocate an object
+ * @s: The cache to allocate from.
+ * @flags: See kmalloc().
+ *
+ * Allocate an object from this cache. The flags are only relevant
+ * if the cache has no available objects.
+ *
+ * Return: pointer to the new object or %NULL in case of error
+ */
+static __always_inline void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags)
+{
+	return kmem_cache_alloc_node(s, flags, NUMA_NO_NODE);
+}
+
 void kmem_cache_free(struct kmem_cache *s, void *objp);
 
 /*
@@ -437,38 +460,13 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-#ifdef CONFIG_NUMA
-void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment
-							 __alloc_size(1);
-void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment
-									 __malloc;
-#else
-static __always_inline __alloc_size(1) void *__kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	return __kmalloc(size, flags);
-}
-
-static __always_inline void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node)
-{
-	return kmem_cache_alloc(s, flags);
-}
-#endif
-
 #ifdef CONFIG_TRACING
 extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
 				    __assume_slab_alignment __alloc_size(3);
 
-#ifdef CONFIG_NUMA
 extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 					 int node, size_t size) __assume_slab_alignment
 								__alloc_size(4);
-#else
-static __always_inline __alloc_size(4) void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
-	gfp_t gfpflags, int node, size_t size)
-{
-	return kmem_cache_alloc_trace(s, gfpflags, size);
-}
-#endif /* CONFIG_NUMA */
 
 #else /* CONFIG_TRACING */
 static __always_inline __alloc_size(3) void *kmem_cache_alloc_trace(struct kmem_cache *s,
@@ -652,19 +650,6 @@ static inline __alloc_size(1, 2) void *kcalloc(size_t n, size_t size, gfp_t flag
 	return kmalloc_array(n, size, flags | __GFP_ZERO);
 }
 
-/*
- * kmalloc_track_caller is a special version of kmalloc that records the
- * calling function of the routine calling it for slab leak tracking instead
- * of just the calling function (confusing, eh?).
- * It's useful when the call to kmalloc comes from a widely-used standard
- * allocator where we care about the real place the memory allocation
- * request comes from.
- */
-extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller)
-				    __alloc_size(1);
-#define kmalloc_track_caller(size, flags) \
-	__kmalloc_track_caller(size, flags, _RET_IP_)
-
 static inline __alloc_size(1, 2) void *kmalloc_array_node(size_t n, size_t size, gfp_t flags,
 							   int node)
 {
@@ -682,21 +667,21 @@ static inline __alloc_size(1, 2) void *kcalloc_node(size_t n, size_t size, gfp_t
 	return kmalloc_array_node(n, size, flags | __GFP_ZERO, node);
 }
 
-
-#ifdef CONFIG_NUMA
 extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
 					 unsigned long caller) __alloc_size(1);
 #define kmalloc_node_track_caller(size, flags, node) \
 	__kmalloc_node_track_caller(size, flags, node, \
 				    _RET_IP_)
-
-#else /* CONFIG_NUMA */
-
-#define kmalloc_node_track_caller(size, flags, node) \
-	kmalloc_track_caller(size, flags)
-
-#endif /* CONFIG_NUMA */
-
+/*
+ * kmalloc_track_caller is a special version of kmalloc that records the
+ * calling function of the routine calling it for slab leak tracking instead
+ * of just the calling function (confusing, eh?).
+ * It's useful when the call to kmalloc comes from a widely-used standard
+ * allocator where we care about the real place the memory allocation
+ * request comes from.
+ */
+#define kmalloc_track_caller(size, flags) \
+	__kmalloc_node_track_caller(size, flags, NUMA_NO_NODE, _RET_IP_)
 
 /*
  * Shortcuts
  */
diff --git a/mm/slab.c b/mm/slab.c
index 5d102aaf1629..b41124a1efd9 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3468,27 +3468,6 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
 		__free_one(ac, objp);
 }
 
-/**
- * kmem_cache_alloc - Allocate an object
- * @cachep: The cache to allocate from.
- * @flags: See kmalloc().
- *
- * Allocate an object from this cache. The flags are only relevant
- * if the cache has no available objects.
- *
- * Return: pointer to the new object or %NULL in case of error
- */
-void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
-{
-	void *ret = slab_alloc(cachep, flags, cachep->object_size, _RET_IP_);
-
-	trace_kmem_cache_alloc(_RET_IP_, ret,
-			       cachep->object_size, cachep->size, flags);
-
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc);
-
 static __always_inline void
 cache_alloc_debugcheck_after_bulk(struct kmem_cache *s, gfp_t flags,
 				  size_t size, void **p, unsigned long caller)
@@ -3556,7 +3535,6 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #endif
 
-#ifdef CONFIG_NUMA
 /**
  * kmem_cache_alloc_node - Allocate an object on the specified node
  * @cachep: The cache to allocate from.
@@ -3630,7 +3608,6 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
 	return __do_kmalloc_node(size, flags, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_PRINTK
 void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
@@ -3654,46 +3631,6 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 }
 #endif
 
-/**
- * __do_kmalloc - allocate memory
- * @size: how many bytes of memory are required.
- * @flags: the type of memory to allocate (see kmalloc).
- * @caller: function caller for debug tracking of the caller
- *
- * Return: pointer to the allocated memory or %NULL in case of error
- */
-static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
-					  unsigned long caller)
-{
-	struct kmem_cache *cachep;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return NULL;
-	cachep = kmalloc_slab(size, flags);
-	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
-		return cachep;
-	ret = slab_alloc(cachep, flags, size, caller);
-
-	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc(caller, ret,
-		      size, cachep->size, flags);
-
-	return ret;
-}
-
-void *__kmalloc(size_t size, gfp_t flags)
-{
-	return __do_kmalloc(size, flags, _RET_IP_);
-}
-EXPORT_SYMBOL(__kmalloc);
-
-void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller)
-{
-	return __do_kmalloc(size, flags, caller);
-}
-EXPORT_SYMBOL(__kmalloc_track_caller);
-
 /**
  * kmem_cache_free - Deallocate an object
  * @cachep: The cache the allocation was from.
diff --git a/mm/slob.c b/mm/slob.c
index 60c5842215f1..c4f9c83900b0 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -522,26 +522,12 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 	return ret;
 }
 
-void *__kmalloc(size_t size, gfp_t gfp)
-{
-	return __do_kmalloc_node(size, gfp, NUMA_NO_NODE, _RET_IP_);
-}
-EXPORT_SYMBOL(__kmalloc);
-
-void *__kmalloc_track_caller(size_t size, gfp_t gfp, unsigned long caller)
-{
-	return __do_kmalloc_node(size, gfp, NUMA_NO_NODE, caller);
-}
-EXPORT_SYMBOL(__kmalloc_track_caller);
-
-#ifdef CONFIG_NUMA
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfp,
 				  int node, unsigned long caller)
 {
 	return __do_kmalloc_node(size, gfp, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif
 
 void kfree(const void *block)
 {
@@ -629,13 +615,6 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 	return b;
 }
 
-void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
-{
-	return slob_alloc_node(cachep, flags, NUMA_NO_NODE);
-}
-EXPORT_SYMBOL(kmem_cache_alloc);
-
-#ifdef CONFIG_NUMA
 void *__kmalloc_node(size_t size, gfp_t gfp, int node)
 {
 	return __do_kmalloc_node(size, gfp, node, _RET_IP_);
@@ -647,7 +626,6 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t gfp, int node)
 	return slob_alloc_node(cachep, gfp, node);
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node);
-#endif
 
 static void __kmem_cache_free(void *b, int size)
 {
diff --git a/mm/slub.c b/mm/slub.c
index 261474092e43..74369cadc243 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3238,17 +3238,6 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
 	return slab_alloc_node(s, gfpflags, NUMA_NO_NODE, addr, orig_size);
 }
 
-void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags)
-{
-	void *ret = slab_alloc(s, gfpflags, _RET_IP_, s->object_size);
-
-	trace_kmem_cache_alloc(_RET_IP_, ret, s->object_size,
-			       s->size, gfpflags);
-
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc);
-
 #ifdef CONFIG_TRACING
 void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
@@ -3260,7 +3249,6 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #endif
 
-#ifdef CONFIG_NUMA
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
 	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_, s->object_size);
@@ -3287,7 +3275,6 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
 #endif
-#endif /* CONFIG_NUMA */
 
 /*
  * Slow path handling. This may still be called frequently since objects
@@ -4404,30 +4391,6 @@ static int __init setup_slub_min_objects(char *str)
 
 __setup("slub_min_objects=", setup_slub_min_objects);
 
-void *__kmalloc(size_t size, gfp_t flags)
-{
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large(size, flags);
-
-	s = kmalloc_slab(size, flags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc(s, flags, _RET_IP_, size);
-
-	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
-
-	ret = kasan_kmalloc(s, ret, size, flags);
-
-	return ret;
-}
-EXPORT_SYMBOL(__kmalloc);
-
-#ifdef CONFIG_NUMA
 static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	struct page *page;
@@ -4474,7 +4437,6 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
-#endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
@@ -4910,29 +4872,6 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 	return 0;
 }
 
-void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
-{
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large(size, gfpflags);
-
-	s = kmalloc_slab(size, gfpflags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc(s, gfpflags, caller, size);
-
-	/* Honor the call site pointer we received. */
-	trace_kmalloc(caller, ret, size, s->size, gfpflags);
-
-	return ret;
-}
-EXPORT_SYMBOL(__kmalloc_track_caller);
-
-#ifdef CONFIG_NUMA
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 				  int node, unsigned long caller)
 {
@@ -4962,7 +4901,6 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif
 
 #ifdef CONFIG_SYSFS
 static int count_inuse(struct slab *slab)
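[Editorial usage sketch, not from the series: the surviving tracked entry
point still attributes allocations to a wrapper's caller, which is what
_RET_IP_ in the macro is for. my_alloc() is a hypothetical helper.]

	/* Hypothetical mempool-style wrapper: slab leak tracking reports
	 * the caller of my_alloc() rather than my_alloc() itself, because
	 * _RET_IP_ expands to this function's return address. After this
	 * patch the macro routes through the NUMA-aware
	 * __kmalloc_node_track_caller() with NUMA_NO_NODE.
	 */
	static void *my_alloc(size_t size)
	{
		return kmalloc_track_caller(size, GFP_KERNEL | __GFP_ZERO);
	}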
From patchwork Tue Mar 8 11:41:30 2022

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Marco Elver, Matthew Wilcox,
 Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [RFC PATCH v1 03/15] mm/sl[au]b: remove CONFIG_TRACING ifdefs for tracing functions
Date: Tue, 8 Mar 2022 11:41:30 +0000
Message-Id: <20220308114142.1744229-4-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-1-42.hyeyoo@gmail.com>

The CONFIG_TRACING ifdefs are unnecessary because tracepoints compile
to nothing on kernels without CONFIG_TRACING. These functions will be
removed entirely in a later cleanup.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h | 29 -----------------------------
 mm/slab.c            |  4 ----
 mm/slab_common.c     |  2 --
 mm/slub.c            |  4 ----
 4 files changed, 39 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index df8e5dca00a2..a5e3ad058817 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -460,7 +460,6 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-#ifdef CONFIG_TRACING
 extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
 				    __assume_slab_alignment __alloc_size(3);
 
@@ -468,39 +467,11 @@ extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 					 int node, size_t size) __assume_slab_alignment
 								__alloc_size(4);
 
-#else /* CONFIG_TRACING */
-static __always_inline __alloc_size(3) void *kmem_cache_alloc_trace(struct kmem_cache *s,
-								    gfp_t flags, size_t size)
-{
-	void *ret = kmem_cache_alloc(s, flags);
-
-	ret = kasan_kmalloc(s, ret, size, flags);
-	return ret;
-}
-
-static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
-							  int node, size_t size)
-{
-	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
-
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
-	return ret;
-}
-#endif /* CONFIG_TRACING */
-
 extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment
 									 __alloc_size(1);
 
-#ifdef CONFIG_TRACING
 extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
 				 __assume_page_alignment __alloc_size(1);
-#else
-static __always_inline __alloc_size(1) void *kmalloc_order_trace(size_t size, gfp_t flags,
-								  unsigned int order)
-{
-	return kmalloc_order(size, flags, order);
-}
-#endif
 
 static __always_inline __alloc_size(1) void *kmalloc_large(size_t size, gfp_t flags)
 {
diff --git a/mm/slab.c b/mm/slab.c
index b41124a1efd9..1f3195344bdf 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3519,7 +3519,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_bulk);
 
-#ifdef CONFIG_TRACING
 void *
 kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 {
@@ -3533,7 +3532,6 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
-#endif
 
 /**
  * kmem_cache_alloc_node - Allocate an object on the specified node
@@ -3560,7 +3558,6 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node);
 
-#ifdef CONFIG_TRACING
 void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 				  gfp_t flags,
 				  int nodeid,
@@ -3577,7 +3574,6 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
-#endif
 
 static __always_inline void *
 __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 23f2ab0713b7..2edb77056adc 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -954,7 +954,6 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 }
 EXPORT_SYMBOL(kmalloc_order);
 
-#ifdef CONFIG_TRACING
 void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
 {
 	void *ret = kmalloc_order(size, flags, order);
@@ -962,7 +961,6 @@ void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order_trace);
-#endif
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
diff --git a/mm/slub.c b/mm/slub.c
index 74369cadc243..267f700abac1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3238,7 +3238,6 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
 	return slab_alloc_node(s, gfpflags, NUMA_NO_NODE, addr, orig_size);
 }
 
-#ifdef CONFIG_TRACING
 void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_, size);
@@ -3247,7 +3246,6 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
-#endif
 
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
@@ -3260,7 +3258,6 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node);
 
-#ifdef CONFIG_TRACING
 void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 				  gfp_t gfpflags,
 				  int node, size_t size)
@@ -3274,7 +3271,6 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
-#endif
 
 /*
  * Slow path handling. This may still be called frequently since objects
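[The claim above rests on how tracepoints degrade when tracing is configured
out. A simplified editorial sketch of the pattern, not the real
<linux/tracepoint.h> machinery:]

	/* Simplified sketch: with tracing compiled out, the trace call is
	 * an empty static inline, so the compiler discards it and an
	 * #ifdef CONFIG_TRACING around the call site buys nothing.
	 */
	#ifdef CONFIG_TRACING
	void trace_kmalloc(unsigned long call_site, const void *ptr,
			   size_t bytes_req, size_t bytes_alloc, gfp_t gfp_flags);
	#else
	static inline void trace_kmalloc(unsigned long call_site, const void *ptr,
					 size_t bytes_req, size_t bytes_alloc,
					 gfp_t gfp_flags) { }
	#endif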
From patchwork Tue Mar 8 11:41:31 2022

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Marco Elver, Matthew Wilcox,
 Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [RFC PATCH v1 04/15] mm/sl[auo]b: fold kmalloc_order() into kmalloc_large()
Date: Tue, 8 Mar 2022 11:41:31 +0000
Message-Id: <20220308114142.1744229-5-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-1-42.hyeyoo@gmail.com>

There is no caller of kmalloc_order() except kmalloc_large(). Fold it
into kmalloc_large() and remove kmalloc_order{,_trace}().
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 13 ++-----------
 mm/slab_common.c     | 12 +++---------
 2 files changed, 5 insertions(+), 20 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index a5e3ad058817..aa14aba2b068 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -467,17 +467,8 @@ extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 					 int node, size_t size) __assume_slab_alignment
 								__alloc_size(4);
 
-extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment
-									 __alloc_size(1);
-
-extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
-				 __assume_page_alignment __alloc_size(1);
-
-static __always_inline __alloc_size(1) void *kmalloc_large(size_t size, gfp_t flags)
-{
-	unsigned int order = get_order(size);
-
-	return kmalloc_order_trace(size, flags, order);
-}
+extern void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
+						     __alloc_size(1);
 
 /**
  * kmalloc - allocate memory
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 2edb77056adc..1ba479f9d143 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -932,10 +932,11 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
  * directly to the page allocator. We use __GFP_COMP, because we will need to
  * know the allocation order to free the pages properly in kfree.
  */
-void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
+void *kmalloc_large(size_t size, gfp_t flags)
 {
 	void *ret = NULL;
 	struct page *page;
+	unsigned int order = get_order(size);
 
 	if (unlikely(flags & GFP_SLAB_BUG_MASK))
 		flags = kmalloc_fix_flags(flags);
@@ -950,17 +951,10 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	ret = kasan_kmalloc_large(ret, size, flags);
 	/* As ret might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ret, size, 1, flags);
-	return ret;
-}
-EXPORT_SYMBOL(kmalloc_order);
-
-void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
-{
-	void *ret = kmalloc_order(size, flags, order);
-
 	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << order, flags);
 	return ret;
 }
-EXPORT_SYMBOL(kmalloc_order_trace);
+EXPORT_SYMBOL(kmalloc_large);
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
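[A quick worked example of the order computation that moved inside
kmalloc_large(); editorial, assuming 4 KiB pages:]

	/* get_order() rounds up to a power-of-two number of pages, so a
	 * 12 KiB request becomes an order-2 (16 KiB) page-allocator request:
	 */
	void *buf = kmalloc_large(12 * 1024, GFP_KERNEL);
	/* internally: order = get_order(12288) == 2,
	 * then alloc_pages(flags | __GFP_COMP, 2) */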
From patchwork Tue Mar 8 11:41:32 2022

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Marco Elver, Matthew Wilcox,
 Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [RFC PATCH v1 05/15] mm/slub: move kmalloc_large_node() to slab_common.c
Date: Tue, 8 Mar 2022 11:41:32 +0000
Message-Id: <20220308114142.1744229-6-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-1-42.hyeyoo@gmail.com>

In a later patch, SLAB will also pass requests larger than an order-1
page to the page allocator, so move kmalloc_large_node() to
slab_common.c.

Fold kmalloc_large_node_hook() into kmalloc_large_node(), as there is
no other caller, and move the tracepoint into kmalloc_large_node().

Add the GFP flag fix-up code, which exists in kmalloc_large() but was
omitted from kmalloc_large_node().

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h |  3 +++
 mm/slab_common.c     | 26 ++++++++++++++++++++++++
 mm/slub.c            | 47 ++++----------------------------------
 3 files changed, 33 insertions(+), 43 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index aa14aba2b068..60d27635c13d 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -470,6 +470,9 @@ extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 extern void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
 						     __alloc_size(1);
 
+extern void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+				__assume_page_alignment __alloc_size(1);
+
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 1ba479f9d143..f61ac7458829 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -956,6 +956,32 @@ void *kmalloc_large(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(kmalloc_large);
 
+void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+{
+	struct page *page;
+	void *ptr = NULL;
+	unsigned int order = get_order(size);
+
+	if (unlikely(flags & GFP_SLAB_BUG_MASK))
+		flags = kmalloc_fix_flags(flags);
+
+	flags |= __GFP_COMP;
+	page = alloc_pages_node(node, flags, order);
+	if (page) {
+		ptr = page_address(page);
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      PAGE_SIZE << order);
+	}
+
+	ptr = kasan_kmalloc_large(ptr, size, flags);
+	/* As ptr might get tagged, call kmemleak hook after KASAN. */
+	kmemleak_alloc(ptr, size, 1, flags);
+	trace_kmalloc_node(_RET_IP_, ptr, size, PAGE_SIZE << order, flags,
+			   node);
+	return ptr;
+}
+EXPORT_SYMBOL(kmalloc_large_node);
+
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
 static void freelist_randomize(struct rnd_state *state, unsigned int *list,
diff --git a/mm/slub.c b/mm/slub.c
index 267f700abac1..cdbbf0e97637 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1678,14 +1678,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
  * Hooks for other subsystems that check memory allocations. In a typical
  * production configuration these hooks all should produce no code at all.
  */
-static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
-{
-	ptr = kasan_kmalloc_large(ptr, size, flags);
-	/* As ptr might get tagged, call kmemleak hook after KASAN. */
-	kmemleak_alloc(ptr, size, 1, flags);
-	return ptr;
-}
-
 static __always_inline void kfree_hook(void *x)
 {
 	kmemleak_free(x);
@@ -4387,37 +4379,13 @@ static int __init setup_slub_min_objects(char *str)
 
 __setup("slub_min_objects=", setup_slub_min_objects);
 
-static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
-{
-	struct page *page;
-	void *ptr = NULL;
-	unsigned int order = get_order(size);
-
-	flags |= __GFP_COMP;
-	page = alloc_pages_node(node, flags, order);
-	if (page) {
-		ptr = page_address(page);
-		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
-				      PAGE_SIZE << order);
-	}
-
-	return kmalloc_large_node_hook(ptr, size, flags);
-}
-
 void *__kmalloc_node(size_t size, gfp_t flags, int node)
 {
 	struct kmem_cache *s;
 	void *ret;
 
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
-		ret = kmalloc_large_node(size, flags, node);
-
-		trace_kmalloc_node(_RET_IP_, ret,
-				   size, PAGE_SIZE << get_order(size),
-				   flags, node);
-
-		return ret;
-	}
+	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
+		return kmalloc_large_node(size, flags, node);
 
 	s = kmalloc_slab(size, flags);
 
@@ -4874,15 +4842,8 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 	struct kmem_cache *s;
 	void *ret;
 
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
-		ret = kmalloc_large_node(size, gfpflags, node);
-
-		trace_kmalloc_node(caller, ret,
-				   size, PAGE_SIZE << get_order(size),
-				   gfpflags, node);
-
-		return ret;
-	}
+	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
+		return kmalloc_large_node(size, gfpflags, node);
 
 	s = kmalloc_slab(size, gfpflags);
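[For illustration, an editorial sketch; SZ_64K comes from <linux/sizes.h>.
The now-common helper can be called directly and benefits from the added
flag fix-up:]

	/* A 64 KiB allocation preferring node 1. If the caller passed a
	 * bit from GFP_SLAB_BUG_MASK, the newly added check rewrites the
	 * flags via kmalloc_fix_flags(), matching what kmalloc_large()
	 * already did.
	 */
	void *buf = kmalloc_large_node(SZ_64K, GFP_KERNEL, 1);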
From patchwork Tue Mar 8 11:41:33 2022

From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, Vlastimil Babka, Marco Elver, Matthew Wilcox,
 Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [RFC PATCH v1 06/15] mm/slab_common: cleanup kmalloc_large()
Date: Tue, 8 Mar 2022 11:41:33 +0000
Message-Id: <20220308114142.1744229-7-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-1-42.hyeyoo@gmail.com>

Now that kmalloc_large() and kmalloc_large_node() do the same job, make
kmalloc_large() a wrapper of kmalloc_large_node(). This makes the slab
allocators use the kmalloc_node tracepoint in kmalloc_large() as well.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h |  8 +++++---
 mm/slab_common.c     | 24 ------------------------
 2 files changed, 5 insertions(+), 27 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 60d27635c13d..8840b2d55567 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -467,12 +467,14 @@ extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 					 int node, size_t size) __assume_slab_alignment
 								__alloc_size(4);
 
-extern void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
-						     __alloc_size(1);
-
 extern void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 				__assume_page_alignment __alloc_size(1);
 
+static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
+{
+	return kmalloc_large_node(size, flags, NUMA_NO_NODE);
+}
+
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index f61ac7458829..1fe2f2a7326d 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -932,30 +932,6 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
  * directly to the page allocator. We use __GFP_COMP, because we will need to
  * know the allocation order to free the pages properly in kfree.
  */
-void *kmalloc_large(size_t size, gfp_t flags)
-{
-	void *ret = NULL;
-	struct page *page;
-	unsigned int order = get_order(size);
-
-	if (unlikely(flags & GFP_SLAB_BUG_MASK))
-		flags = kmalloc_fix_flags(flags);
-
-	flags |= __GFP_COMP;
-	page = alloc_pages(flags, order);
-	if (likely(page)) {
-		ret = page_address(page);
-		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
-				      PAGE_SIZE << order);
-	}
-	ret = kasan_kmalloc_large(ret, size, flags);
-	/* As ret might get tagged, call kmemleak hook after KASAN. */
*/ - kmemleak_alloc(ret, size, 1, flags); - trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << order, flags); - return ret; -} -EXPORT_SYMBOL(kmalloc_large); - void *kmalloc_large_node(size_t size, gfp_t flags, int node) { struct page *page; From patchwork Tue Mar 8 11:41:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com> X-Patchwork-Id: 12773568 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6C31BC433EF for ; Tue, 8 Mar 2022 11:42:42 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 0F0CC8D000C; Tue, 8 Mar 2022 06:42:42 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 0A0C08D0001; Tue, 8 Mar 2022 06:42:42 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id ED2738D000C; Tue, 8 Mar 2022 06:42:41 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0146.hostedemail.com [216.40.44.146]) by kanga.kvack.org (Postfix) with ESMTP id E05CF8D0001 for ; Tue, 8 Mar 2022 06:42:41 -0500 (EST) Received: from smtpin18.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 91FEF182EFDFB for ; Tue, 8 Mar 2022 11:42:41 +0000 (UTC) X-FDA: 79221031722.18.D956584 Received: from mail-pl1-f170.google.com (mail-pl1-f170.google.com [209.85.214.170]) by imf23.hostedemail.com (Postfix) with ESMTP id 04064140008 for ; Tue, 8 Mar 2022 11:42:40 +0000 (UTC) Received: by mail-pl1-f170.google.com with SMTP id t19so13267723plr.5 for ; Tue, 08 Mar 2022 03:42:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=dRBTUQACiQlx0LwCs+gvgSw5DzRBjLDB/xFsjZpjfn8=; b=eKhM8L1Q6v+MAnYmglapfX6kG3lc2plE0eACiYKlqVnXpgx6WVSGv+Lqf/5cDPEDP4 gZ+5uvrqB+Xqokx85qkhqLFzX0sgrCqmtk4511gissRzPIjOlt0zU1ZdG18sJp0iaLvk dG/RkxfbD4qUOGLzPuwQEuRcLFom7ajyiwfQ4gci/VBV3tV35Q2aEzhWb1efes/Dr+4c c5HyIVkQ50Ma/cEs0oXvFtYuVzYmY4Xo1KYScGZ2PNGP4NMB8k7wxOB4dHTuzUQiXPZE P8apwja9zdytSWgXOoIxtsKuiYTDApaTg3muHU4KoHFmZNWmvyAgrEoTAlbw2AtgduRq PuhQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=dRBTUQACiQlx0LwCs+gvgSw5DzRBjLDB/xFsjZpjfn8=; b=FV1kPEm8GTDh8rVA69SxL+DZbNXTjd8rPnfcsxQ9vybb73R7RZqAZm8yMpD4yzDE6R 1FtQPoC8XQ78YfQ+g+nBTEcKn0u/2ByIPi/4oSn7lNU/iQnmkayGaV5Oz+w7cTJHMH21 tmAUimeybN/WRqQtgOiywHIVmqg6sknhPkOoWE3nETS7u58+WZRRtkSL4DF08WZLIor4 EAUjRPqzdMSen/MYqzdC66aUXauQvzk/gBC4nUCCehr0Q9Ndc/9jmcsD1q0eB/y0wRt6 VPhMD3f4DIYg+4+cZZRFqJmB6JqgHPNvFouiIuReyYT9S0yrQV+V0vtlMV1wK10npIkx OuDA== X-Gm-Message-State: AOAM531dK/nXjwA3vuWMNXVqfyNtVIm/LMkhP9kfvur2cdmCEkFMJ/JM pNPx/JQ+/yineo8go6WcTcANITvboCQvOQ== X-Google-Smtp-Source: ABdhPJxPvhmUGL9XnU9afnShPr0BQ2rnu562TEMPr2BvMh14WzqH80Eyl1ZH6ljIydlfk5YeOt4KAQ== X-Received: by 2002:a17:90b:4b4f:b0:1bf:2381:b247 with SMTP id mi15-20020a17090b4b4f00b001bf2381b247mr4163411pjb.75.1646739759779; Tue, 08 Mar 2022 03:42:39 -0800 (PST) Received: from ip-172-31-19-208.ap-northeast-1.compute.internal (ec2-18-181-137-102.ap-northeast-1.compute.amazonaws.com. 
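[ Editor's note: the pattern in this patch recurs throughout the series:
keep one node-aware implementation and reduce the node-agnostic entry
point to a trivial inline that passes NUMA_NO_NODE. A minimal sketch of
the idea outside the kernel, with hypothetical names (buf_alloc_node,
buf_alloc, NODE_ANY):

	#include <stdlib.h>

	#define NODE_ANY (-1)	/* stand-in for the kernel's NUMA_NO_NODE */

	/* The one real implementation takes an explicit node. */
	static void *buf_alloc_node(size_t size, int node)
	{
		(void)node;	/* a real allocator would consult this */
		return malloc(size);
	}

	/* The node-agnostic API becomes a zero-cost inline wrapper. */
	static inline void *buf_alloc(size_t size)
	{
		return buf_alloc_node(size, NODE_ANY);
	}

Because the kernel wrapper is __always_inline, callers of kmalloc_large()
compile to a direct kmalloc_large_node(size, flags, NUMA_NO_NODE) call,
so the out-of-line copy and its export can be deleted. ]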
From patchwork Tue Mar 8 11:41:34 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Marco Elver, Matthew WilCox, Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [RFC PATCH v1 07/15] mm/sl[au]b: kmalloc_node: pass large requests to page allocator
Date: Tue, 8 Mar 2022 11:41:34 +0000
Message-Id: <20220308114142.1744229-8-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-1-42.hyeyoo@gmail.com>

Now that kmalloc_large_node() is in common code, pass large requests to
the page allocator in kmalloc_node() using kmalloc_large_node().

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/slab.h | 24 ++++++++++++++++++------
 1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 8840b2d55567..33d4260bce8b 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -551,23 +551,35 @@ static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 	return __kmalloc(size, flags);
 }
 
+#ifndef CONFIG_SLOB
 static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
-#ifndef CONFIG_SLOB
-	if (__builtin_constant_p(size) &&
-		size <= KMALLOC_MAX_CACHE_SIZE) {
-		unsigned int i = kmalloc_index(size);
+	if (__builtin_constant_p(size)) {
+		unsigned int index;
 
-		if (!i)
+		if (size > KMALLOC_MAX_CACHE_SIZE)
+			return kmalloc_large_node(size, flags, node);
+
+		index = kmalloc_index(size);
+
+		if (!index)
 			return ZERO_SIZE_PTR;
 
 		return kmem_cache_alloc_node_trace(
-				kmalloc_caches[kmalloc_type(flags)][i],
+				kmalloc_caches[kmalloc_type(flags)][index],
 				flags, node, size);
 	}
-#endif
 	return __kmalloc_node(size, flags, node);
 }
+#else
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
+		return kmalloc_large_node(size, flags, node);
+
+	return __kmalloc_node(size, flags, node);
+}
+#endif
 
 /**
 * kmalloc_array - allocate memory for an array.
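[ Editor's note: the rewritten kmalloc_node() leans on
__builtin_constant_p() so the large-size check costs nothing when the
size is a compile-time constant. A standalone sketch of that dispatch,
with a toy threshold and hypothetical names; with optimization enabled
(-O2) the branch folds away after inlining:

	#include <stdio.h>

	#define TOY_MAX_CACHE_SIZE 8192UL

	static inline const char *toy_path(unsigned long size)
	{
		/* Constant sizes above the limit go straight to the page
		 * allocator; everything else takes the slab path. */
		if (__builtin_constant_p(size) && size > TOY_MAX_CACHE_SIZE)
			return "kmalloc_large_node() path";
		return "__kmalloc_node() path";
	}

	int main(void)
	{
		printf("%s\n", toy_path(16384));	/* folds to the large path at -O2 */
		printf("%s\n", toy_path(64));		/* slab path */
		return 0;
	}
]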
From patchwork Tue Mar 8 11:41:35 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Marco Elver, Matthew WilCox, Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [RFC PATCH v1 08/15] mm/sl[auo]b: cleanup kmalloc()
Date: Tue, 8 Mar 2022 11:41:35 +0000
Message-Id: <20220308114142.1744229-9-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-1-42.hyeyoo@gmail.com>

Now that kmalloc() and kmalloc_node() do the same job, make kmalloc() a
wrapper of kmalloc_node(). Remove kmem_cache_alloc_trace(), which is now
unused. This patch makes the slab allocators use the kmalloc_node
tracepoint in kmalloc().

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/slab.h | 82 +++++++++++++++++---------------------------
 mm/slab.c            | 14 --------
 mm/slub.c            |  9 -----
 3 files changed, 31 insertions(+), 74 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 33d4260bce8b..dfcc8301d969 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -460,9 +460,6 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
-				    __assume_slab_alignment __alloc_size(3);
-
 extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 					 int node, size_t size) __assume_slab_alignment
 					 __alloc_size(4);
@@ -475,6 +472,36 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
 	return kmalloc_large_node(size, flags, NUMA_NO_NODE);
 }
 
+#ifndef CONFIG_SLOB
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	if (__builtin_constant_p(size)) {
+		unsigned int index;
+
+		if (size > KMALLOC_MAX_CACHE_SIZE)
+			return kmalloc_large_node(size, flags, node);
+
+		index = kmalloc_index(size);
+
+		if (!index)
+			return ZERO_SIZE_PTR;
+
+		return kmem_cache_alloc_node_trace(
+				kmalloc_caches[kmalloc_type(flags)][index],
+				flags, node, size);
+	}
+	return __kmalloc_node(size, flags, node);
+}
+#else
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
+		return kmalloc_large_node(size, flags, node);
+
+	return __kmalloc_node(size, flags, node);
+}
+#endif
+
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
@@ -531,55 +558,8 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
  */
 static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 {
-	if (__builtin_constant_p(size)) {
-#ifndef CONFIG_SLOB
-		unsigned int index;
-#endif
-		if (size > KMALLOC_MAX_CACHE_SIZE)
-			return kmalloc_large(size, flags);
-#ifndef CONFIG_SLOB
-		index = kmalloc_index(size);
-
-		if (!index)
-			return ZERO_SIZE_PTR;
-
-		return kmem_cache_alloc_trace(
-				kmalloc_caches[kmalloc_type(flags)][index],
-				flags, size);
-#endif
-	}
-	return __kmalloc(size, flags);
-}
-
-#ifndef CONFIG_SLOB
-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	if (__builtin_constant_p(size)) {
-		unsigned int index;
-
-		if (size > KMALLOC_MAX_CACHE_SIZE)
-			return kmalloc_large_node(size, flags, node);
-
-		index = kmalloc_index(size);
-
-		if (!index)
-			return ZERO_SIZE_PTR;
-
-		return kmem_cache_alloc_node_trace(
-				kmalloc_caches[kmalloc_type(flags)][index],
-				flags, node, size);
-	}
-	return __kmalloc_node(size, flags, node);
-}
-#else
-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
-		return kmalloc_large_node(size, flags, node);
-
-	return __kmalloc_node(size, flags, node);
+	return kmalloc_node(size, flags, NUMA_NO_NODE);
 }
-#endif
 
 /**
  * kmalloc_array - allocate memory for an array.
diff --git a/mm/slab.c b/mm/slab.c
index 1f3195344bdf..6ebf509bf2de 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3519,20 +3519,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_bulk);
 
-void *
-kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
-{
-	void *ret;
-
-	ret = slab_alloc(cachep, flags, size, _RET_IP_);
-
-	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc(_RET_IP_, ret,
-		      size, cachep->size, flags);
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_trace);
-
 /**
  * kmem_cache_alloc_node - Allocate an object on the specified node
  * @cachep: The cache to allocate from.
diff --git a/mm/slub.c b/mm/slub.c
index cdbbf0e97637..d8fb987ff7e0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3230,15 +3230,6 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
 	return slab_alloc_node(s, gfpflags, NUMA_NO_NODE, addr, orig_size);
 }
 
-void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
-{
-	void *ret = slab_alloc(s, gfpflags, _RET_IP_, size);
-	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
-	return ret;
-}
-EXPORT_SYMBOL(kmem_cache_alloc_trace);
-
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
 	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_, s->object_size);
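[ Editor's note: after this patch every constant-size kmalloc() compiles
down to the same lookup kmalloc_node() performs. That lookup is a chain
of size tests the optimizer collapses for constant sizes; a toy version
of the shape of kmalloc_index(), with an abbreviated table and a
hypothetical name:

	/* Returns a size-class index, or 0 meaning "no class: take the
	 * runtime __kmalloc_node() path". For a constant size the whole
	 * chain folds to a single constant at compile time. */
	static inline unsigned int toy_index(unsigned long size)
	{
		if (size <=   8) return 3;
		if (size <=  16) return 4;
		if (size <=  32) return 5;
		if (size <=  64) return 6;
		if (size <= 128) return 7;
		/* ... the real table continues up to KMALLOC_MAX_CACHE_SIZE ... */
		return 0;
	}

So kmalloc(64, GFP_KERNEL) becomes, in effect, a direct
kmem_cache_alloc_node_trace() on the kmalloc-64 cache with
NUMA_NO_NODE. ]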
From patchwork Tue Mar 8 11:41:36 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Marco Elver, Matthew WilCox, Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [RFC PATCH v1 09/15] mm/slab: kmalloc: pass requests larger than order-1 page to page allocator
Date: Tue, 8 Mar 2022 11:41:36 +0000
Message-Id: <20220308114142.1744229-10-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-1-42.hyeyoo@gmail.com>

There is not much benefit in serving large objects from kmalloc().
Pass large requests to the page allocator as SLUB does, for better
maintenance of common code.

[ vbabka@suse.cz: Enable and disable irq around free_large_kmalloc().
  Do not lose NUMA locality in __do_kmalloc_node().
  Use folio_slab(folio)->slab_cache instead of virt_to_cache().
  Remove large sizes in __kmalloc_index(). ]

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h | 23 +++++-----------------
 mm/slab.c            | 45 ++++++++++++++++++++++++++++--------------
 mm/slab.h            |  3 +++
 mm/slab_common.c     | 25 +++++++++++++++++-------
 mm/slub.c            | 19 -------------------
 5 files changed, 57 insertions(+), 58 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index dfcc8301d969..9ced225a3ea3 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -226,27 +226,17 @@ void kmem_dump_obj(void *object);
 
 #ifdef CONFIG_SLAB
 /*
- * The largest kmalloc size supported by the SLAB allocators is
- * 32 megabyte (2^25) or the maximum allocatable page order if that is
- * less than 32 MB.
- *
- * WARNING: Its not easy to increase this value since the allocators have
- * to do various tricks to work around compiler limitations in order to
- * ensure proper constant folding.
+ * SLAB and SLUB directly allocate requests fitting in to an order-1 page
+ * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
  */
-#define KMALLOC_SHIFT_HIGH	((MAX_ORDER + PAGE_SHIFT - 1) <= 25 ? \
-				(MAX_ORDER + PAGE_SHIFT - 1) : 25)
-#define KMALLOC_SHIFT_MAX	KMALLOC_SHIFT_HIGH
+#define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
+#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW	5
 #endif
 #endif
 
 #ifdef CONFIG_SLUB
-/*
- * SLUB directly allocates requests fitting in to an order-1 page
- * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
- */
 #define KMALLOC_SHIFT_HIGH	(PAGE_SHIFT + 1)
 #define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
@@ -398,10 +388,6 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 	if (size <=  512 * 1024) return 19;
 	if (size <= 1024 * 1024) return 20;
 	if (size <=  2 * 1024 * 1024) return 21;
-	if (size <=  4 * 1024 * 1024) return 22;
-	if (size <=  8 * 1024 * 1024) return 23;
-	if (size <= 16 * 1024 * 1024) return 24;
-	if (size <= 32 * 1024 * 1024) return 25;
 
 	if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES) && size_is_constant)
 		BUILD_BUG_ON_MSG(1, "unexpected size in kmalloc_index()");
@@ -411,6 +397,7 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 	/* Will never be reached. Needed because the compiler may complain */
 	return -1;
 }
+static_assert(PAGE_SHIFT <= 20);
 #define kmalloc_index(s) __kmalloc_index(s, true)
 #endif /* !CONFIG_SLOB */
 
diff --git a/mm/slab.c b/mm/slab.c
index 6ebf509bf2de..f0041f0125ba 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3568,7 +3568,7 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
 	void *ret;
 
 	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return NULL;
+		return kmalloc_large_node(size, flags, node);
 	cachep = kmalloc_slab(size, flags);
 	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
 		return cachep;
@@ -3642,15 +3642,25 @@ void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
 {
 	struct kmem_cache *s;
 	size_t i;
+	struct folio *folio;
 
 	local_irq_disable();
 	for (i = 0; i < size; i++) {
 		void *objp = p[i];
 
-		if (!orig_s) /* called via kfree_bulk */
-			s = virt_to_cache(objp);
-		else
+		if (!orig_s) {
+			folio = virt_to_folio(objp);
+			/* called via kfree_bulk */
+			if (!folio_test_slab(folio)) {
+				local_irq_enable();
+				free_large_kmalloc(folio, objp);
+				local_irq_disable();
+				continue;
+			}
+			s = folio_slab(folio)->slab_cache;
+		} else
 			s = cache_from_obj(orig_s, objp);
+
 		if (!s)
 			continue;
@@ -3679,20 +3689,25 @@ void kfree(const void *objp)
 {
 	struct kmem_cache *c;
 	unsigned long flags;
+	struct folio *folio;
+	void *x = (void *) objp;
 
 	trace_kfree(_RET_IP_, objp);
 
 	if (unlikely(ZERO_OR_NULL_PTR(objp)))
 		return;
-	local_irq_save(flags);
-	kfree_debugcheck(objp);
-	c = virt_to_cache(objp);
-	if (!c) {
-		local_irq_restore(flags);
+
+	folio = virt_to_folio(objp);
+	if (!folio_test_slab(folio)) {
+		free_large_kmalloc(folio, x);
 		return;
 	}
-	debug_check_no_locks_freed(objp, c->object_size);
+
+	c = folio_slab(folio)->slab_cache;
+
+	local_irq_save(flags);
+	kfree_debugcheck(objp);
+	debug_check_no_locks_freed(objp, c->object_size);
 	debug_check_no_obj_freed(objp, c->object_size);
 	__cache_free(c, (void *)objp, _RET_IP_);
 	local_irq_restore(flags);
@@ -4114,15 +4129,17 @@ void __check_heap_object(const void *ptr, unsigned long n,
 size_t __ksize(const void *objp)
 {
 	struct kmem_cache *c;
-	size_t size;
+	struct folio *folio;
 
 	BUG_ON(!objp);
 	if (unlikely(objp == ZERO_SIZE_PTR))
 		return 0;
 
-	c = virt_to_cache(objp);
-	size = c ? c->object_size : 0;
+	folio = virt_to_folio(objp);
+	if (!folio_test_slab(folio))
+		return folio_size(folio);
 
-	return size;
+	c = folio_slab(folio)->slab_cache;
+	return c->object_size;
 }
 EXPORT_SYMBOL(__ksize);
diff --git a/mm/slab.h b/mm/slab.h
index c7f2abc2b154..eb6e26784d69 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -664,6 +664,9 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
 	print_tracking(cachep, x);
 	return cachep;
 }
+
+void free_large_kmalloc(struct folio *folio, void *object);
+
 #endif /* CONFIG_SLOB */
 
 static inline size_t slab_ksize(const struct kmem_cache *s)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 1fe2f2a7326d..af67005a151f 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -759,8 +759,8 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
 
 /*
  * kmalloc_info[] is to make slub_debug=,kmalloc-xx option work at boot time.
- * kmalloc_index() supports up to 2^25=32MB, so the final entry of the table is
- * kmalloc-32M.
+ * kmalloc_index() supports up to 2^21=2MB, so the final entry of the table is
+ * kmalloc-2M.
  */
 const struct kmalloc_info_struct kmalloc_info[] __initconst = {
 	INIT_KMALLOC_INFO(0, 0),
@@ -784,11 +784,7 @@ const struct kmalloc_info_struct kmalloc_info[] __initconst = {
 	INIT_KMALLOC_INFO(262144, 256k),
 	INIT_KMALLOC_INFO(524288, 512k),
 	INIT_KMALLOC_INFO(1048576, 1M),
-	INIT_KMALLOC_INFO(2097152, 2M),
-	INIT_KMALLOC_INFO(4194304, 4M),
-	INIT_KMALLOC_INFO(8388608, 8M),
-	INIT_KMALLOC_INFO(16777216, 16M),
-	INIT_KMALLOC_INFO(33554432, 32M)
+	INIT_KMALLOC_INFO(2097152, 2M)
 };
 
 /*
@@ -913,6 +909,21 @@ void __init create_kmalloc_caches(slab_flags_t flags)
 	}
 #endif
 }
+
+void free_large_kmalloc(struct folio *folio, void *object)
+{
+	unsigned int order = folio_order(folio);
+
+	if (WARN_ON_ONCE(order == 0))
+		pr_warn_once("object pointer: 0x%p\n", object);
+
+	kmemleak_free(object);
+	kasan_kfree_large(object);
+
+	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
+			      -(PAGE_SIZE << order));
+	__free_pages(folio_page(folio, 0), order);
+}
 #endif /* !CONFIG_SLOB */
 
 gfp_t kmalloc_fix_flags(gfp_t flags)
diff --git a/mm/slub.c b/mm/slub.c
index d8fb987ff7e0..283c4ac92ffe 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1678,12 +1678,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
 * Hooks for other subsystems that check memory allocations. In a typical
 * production configuration these hooks all should produce no code at all.
 */
-static __always_inline void kfree_hook(void *x)
-{
-	kmemleak_free(x);
-	kasan_kfree_large(x);
-}
-
 static __always_inline bool slab_free_hook(struct kmem_cache *s,
 					   void *x, bool init)
 {
@@ -3501,19 +3495,6 @@ struct detached_freelist {
 	struct kmem_cache *s;
 };
 
-static inline void free_large_kmalloc(struct folio *folio, void *object)
-{
-	unsigned int order = folio_order(folio);
-
-	if (WARN_ON_ONCE(order == 0))
-		pr_warn_once("object pointer: 0x%p\n", object);
-
-	kfree_hook(object);
-	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
-			      -(PAGE_SIZE << order));
-	__free_pages(folio_page(folio, 0), order);
-}
-
 /*
 * This function progressively scans the array with free objects (with
 * a limited look ahead) and extract objects belonging to the same
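[ Editor's note: the new limits are easiest to see with concrete numbers.
With 4 KiB pages, PAGE_SHIFT = 12, so

	KMALLOC_SHIFT_HIGH     = PAGE_SHIFT + 1 = 13
	KMALLOC_MAX_CACHE_SIZE = 1 << 13 = 8192 bytes	(an order-1 page pair)

Anything larger now goes to the page allocator via kmalloc_large_node().
The new static_assert(PAGE_SHIFT <= 20) pins the other end: even on an
architecture with 1 MiB base pages, KMALLOC_SHIFT_HIGH is at most 21, so
the largest size class ever needed is 2^21 = 2 MiB, matching the trimmed
kmalloc_info[] table that now stops at kmalloc-2M. ]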
From patchwork Tue Mar 8 11:41:37 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Marco Elver, Matthew WilCox, Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [RFC PATCH v1 10/15] mm/sl[auo]b: print cache name in tracepoints
Date: Tue, 8 Mar 2022 11:41:37 +0000
Message-Id: <20220308114142.1744229-11-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-1-42.hyeyoo@gmail.com>

Print the cache name in tracepoints. If there is no corresponding cache
(kmalloc in SLOB, or kmalloc_large_node), use the KMALLOC_{,LARGE_}NAME
macros.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/trace/events/kmem.h | 34 +++++++++++++++++-----------------
 mm/slab.c                   |  6 +++---
 mm/slab.h                   |  4 ++++
 mm/slab_common.c            |  4 ++--
 mm/slob.c                   | 10 +++++-----
 mm/slub.c                   | 10 +++++-----
 6 files changed, 38 insertions(+), 30 deletions(-)

diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index ddc8c944f417..35e6887c6101 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -61,16 +61,18 @@ DEFINE_EVENT(kmem_alloc, kmem_cache_alloc,
 
 DECLARE_EVENT_CLASS(kmem_alloc_node,
 
-	TP_PROTO(unsigned long call_site,
+	TP_PROTO(const char *name,
+		 unsigned long call_site,
 		 const void *ptr,
 		 size_t bytes_req,
 		 size_t bytes_alloc,
 		 gfp_t gfp_flags,
 		 int node),
 
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node),
+	TP_ARGS(name, call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node),
 
 	TP_STRUCT__entry(
+		__string(	name,		name		)
 		__field(	unsigned long,	call_site	)
 		__field(	const void *,	ptr		)
 		__field(	size_t,		bytes_req	)
@@ -80,6 +82,7 @@ DECLARE_EVENT_CLASS(kmem_alloc_node,
 	),
 
 	TP_fast_assign(
+		__assign_str(name, name);
 		__entry->call_site	= call_site;
 		__entry->ptr		= ptr;
 		__entry->bytes_req	= bytes_req;
@@ -88,7 +91,8 @@ DECLARE_EVENT_CLASS(kmem_alloc_node,
 		__entry->node		= node;
 	),
 
-	TP_printk("call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d",
+	TP_printk("name=%s call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d",
+		__get_str(name),
 		(void *)__entry->call_site,
 		__entry->ptr,
 		__entry->bytes_req,
@@ -99,20 +103,20 @@ DECLARE_EVENT_CLASS(kmem_alloc_node,
 
 DEFINE_EVENT(kmem_alloc_node, kmalloc_node,
 
-	TP_PROTO(unsigned long call_site, const void *ptr,
-		 size_t bytes_req, size_t bytes_alloc,
+	TP_PROTO(const char *name, unsigned long call_site,
+		 const void *ptr, size_t bytes_req, size_t bytes_alloc,
 		 gfp_t gfp_flags, int node),
 
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node)
+	TP_ARGS(name, call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node)
 );
 
 DEFINE_EVENT(kmem_alloc_node, kmem_cache_alloc_node,
 
-	TP_PROTO(unsigned long call_site, const void *ptr,
-		 size_t bytes_req, size_t bytes_alloc,
+	TP_PROTO(const char *name, unsigned long call_site,
+		 const void *ptr, size_t bytes_req, size_t bytes_alloc,
 		 gfp_t gfp_flags, int node),
 
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node)
+	TP_ARGS(name, call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node)
 );
 
 TRACE_EVENT(kfree,
@@ -137,24 +141,24 @@ TRACE_EVENT(kfree,
 
 TRACE_EVENT(kmem_cache_free,
 
-	TP_PROTO(unsigned long call_site, const void *ptr, const char *name),
+	TP_PROTO(const char *name, unsigned long call_site, const void *ptr),
 
-	TP_ARGS(call_site, ptr, name),
+	TP_ARGS(name, call_site, ptr),
 
 	TP_STRUCT__entry(
+		__string(	name,		name	)
 		__field(	unsigned long,	call_site	)
 		__field(	const void *,	ptr		)
-		__string(	name,	name	)
 	),
 
 	TP_fast_assign(
+		__assign_str(name, name);
 		__entry->call_site = call_site;
 		__entry->ptr = ptr;
-		__assign_str(name, name);
 	),
 
-	TP_printk("call_site=%pS ptr=%p name=%s",
-		  (void *)__entry->call_site, __entry->ptr, __get_str(name))
+	TP_printk("name=%s call_site=%pS ptr=%p",
+		  __get_str(name), (void *)__entry->call_site, __entry->ptr)
 );
 
 TRACE_EVENT(mm_page_free,
diff --git a/mm/slab.c b/mm/slab.c
index f0041f0125ba..e451f8136066 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3536,7 +3536,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
 	void *ret = slab_alloc_node(cachep, flags, nodeid, cachep->object_size, _RET_IP_);
 
-	trace_kmem_cache_alloc_node(_RET_IP_, ret,
+	trace_kmem_cache_alloc_node(cachep->name, _RET_IP_, ret,
 				    cachep->object_size, cachep->size,
 				    flags, nodeid);
@@ -3554,7 +3554,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 	ret = slab_alloc_node(cachep, flags, nodeid, size, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc_node(_RET_IP_, ret,
+	trace_kmalloc_node(cachep->name, _RET_IP_, ret,
 			   size, cachep->size,
 			   flags, nodeid);
 	return ret;
@@ -3628,7 +3628,7 @@ void kmem_cache_free(struct kmem_cache *cachep, void *objp)
 	if (!cachep)
 		return;
 
-	trace_kmem_cache_free(_RET_IP_, objp, cachep->name);
+	trace_kmem_cache_free(cachep->name, _RET_IP_, objp);
 	local_irq_save(flags);
 	debug_check_no_locks_freed(objp, cachep->object_size);
 	if (!(cachep->flags & SLAB_DEBUG_OBJECTS))
diff --git a/mm/slab.h b/mm/slab.h
index eb6e26784d69..bfedfe3900bb 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -274,6 +274,10 @@ void create_kmalloc_caches(slab_flags_t);
 struct kmem_cache *kmalloc_slab(size_t, gfp_t);
 #endif
 
+/* cache names for tracepoints where it has no corresponding cache */
+#define KMALLOC_LARGE_NAME "kmalloc_large_node"
+#define KMALLOC_NAME "kmalloc_node"
+
 gfp_t kmalloc_fix_flags(gfp_t flags);
 
 /* Functions provided by the slab allocators */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index af67005a151f..03949445c5fc 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -962,8 +962,8 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 	ptr = kasan_kmalloc_large(ptr, size, flags);
 	/* As ptr might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ptr, size, 1, flags);
-	trace_kmalloc_node(_RET_IP_, ptr, size, PAGE_SIZE << order, flags,
-			   node);
+	trace_kmalloc_node(KMALLOC_LARGE_NAME, _RET_IP_, ptr, size,
+			   PAGE_SIZE << order, flags, node);
 	return ptr;
 }
diff --git a/mm/slob.c b/mm/slob.c
index c4f9c83900b0..d60175c9bb1b 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -505,7 +505,7 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 		*m = size;
 		ret = (void *)m + minalign;
 
-		trace_kmalloc_node(caller, ret,
+		trace_kmalloc_node(KMALLOC_NAME, caller, ret,
 				   size, size + minalign, gfp, node);
 	} else {
 		unsigned int order = get_order(size);
@@ -514,7 +514,7 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 			gfp |= __GFP_COMP;
 		ret = slob_new_pages(gfp, order, node);
 
-		trace_kmalloc_node(caller, ret,
+		trace_kmalloc_node(KMALLOC_LARGE_NAME, caller, ret,
 				   size, PAGE_SIZE << order, gfp, node);
 	}
@@ -596,12 +596,12 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 
 	if (c->size < PAGE_SIZE) {
 		b = slob_alloc(c->size, flags, c->align, node, 0);
-		trace_kmem_cache_alloc_node(_RET_IP_, b, c->object_size,
+		trace_kmem_cache_alloc_node(c->name, _RET_IP_, b, c->object_size,
 					    SLOB_UNITS(c->size) * SLOB_UNIT,
 					    flags, node);
 	} else {
 		b = slob_new_pages(flags, get_order(c->size), node);
-		trace_kmem_cache_alloc_node(_RET_IP_, b, c->object_size,
+		trace_kmem_cache_alloc_node(c->name, _RET_IP_, b, c->object_size,
 					    PAGE_SIZE << get_order(c->size),
 					    flags, node);
 	}
@@ -646,7 +646,7 @@ static void kmem_rcu_free(struct rcu_head *head)
 void kmem_cache_free(struct kmem_cache *c, void *b)
 {
 	kmemleak_free_recursive(b, c->flags);
-	trace_kmem_cache_free(_RET_IP_, b, c->name);
+	trace_kmem_cache_free(c->name, _RET_IP_, b);
 	if (unlikely(c->flags & SLAB_TYPESAFE_BY_RCU)) {
 		struct slob_rcu *slob_rcu;
 		slob_rcu = b + (c->size - sizeof(struct slob_rcu));
diff --git a/mm/slub.c b/mm/slub.c
index 283c4ac92ffe..8a23d1f9507d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3228,7 +3228,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
 	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_, s->object_size);
 
-	trace_kmem_cache_alloc_node(_RET_IP_, ret,
+	trace_kmem_cache_alloc_node(s->name, _RET_IP_, ret,
 				    s->object_size, s->size, gfpflags, node);
 
 	return ret;
@@ -3241,7 +3241,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 {
 	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_, size);
 
-	trace_kmalloc_node(_RET_IP_, ret,
+	trace_kmalloc_node(s->name, _RET_IP_, ret,
 			   size, s->size, gfpflags, node);
 
 	ret = kasan_kmalloc(s, ret, size, gfpflags);
@@ -3482,7 +3482,7 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
 	s = cache_from_obj(s, x);
 	if (!s)
 		return;
-	trace_kmem_cache_free(_RET_IP_, x, s->name);
+	trace_kmem_cache_free(s->name, _RET_IP_, x);
 	slab_free(s, virt_to_slab(x), x, NULL, 1, _RET_IP_);
 }
 EXPORT_SYMBOL(kmem_cache_free);
@@ -4366,7 +4366,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	ret = slab_alloc_node(s, flags, node, _RET_IP_, size);
 
-	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
+	trace_kmalloc_node(s->name, _RET_IP_, ret, size, s->size, flags, node);
 
 	ret = kasan_kmalloc(s, ret, size, flags);
@@ -4825,7 +4825,7 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 	ret = slab_alloc_node(s, gfpflags, node, caller, size);
 
 	/* Honor the call site pointer we received. */
-	trace_kmalloc_node(caller, ret, size, s->size, gfpflags, node);
+	trace_kmalloc_node(s->name, caller, ret, size, s->size, gfpflags, node);
 
 	return ret;
 }
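[ Editor's note: with the name argument threaded through, a single event
line now identifies the cache. Given the TP_printk format strings above,
output looks roughly like this (illustrative values only):

	kmem_cache_alloc_node: name=kmalloc-64 call_site=foo_init+0x20 ptr=00000000deadbeef bytes_req=48 bytes_alloc=64 gfp_flags=GFP_KERNEL node=-1
	kmem_cache_free: name=kmalloc-64 call_site=foo_exit+0x14 ptr=00000000deadbeef

Allocations that bypass the slab caches show up under the fallback
names, e.g. name=kmalloc_large_node for page-allocator-backed
kmalloc(). ]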
From patchwork Tue Mar 8 11:41:38 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Marco Elver, Matthew WilCox, Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [RFC PATCH v1 11/15] mm/sl[auo]b: use same tracepoint in kmalloc and normal caches
Date: Tue, 8 Mar 2022 11:41:38 +0000
Message-Id: <20220308114142.1744229-12-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-1-42.hyeyoo@gmail.com>

Now that tracepoints print cache names, we can distinguish kmalloc from
normal cache allocations. Use the same tracepoint for kmalloc and normal
caches. After this patch, there are only two tracepoints left in the
slab allocators: kmem_cache_alloc_node and kmem_cache_free. Remove all
tracepoints that are now unused.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/trace/events/kmem.h | 79 -------------------------------------
 mm/slab.c                   |  8 ++--
 mm/slab_common.c            |  5 ++-
 mm/slob.c                   | 14 ++++---
 mm/slub.c                   | 19 +++++----
 5 files changed, 27 insertions(+), 98 deletions(-)

diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index 35e6887c6101..ca67ba5fd76a 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -9,56 +9,6 @@
 #include <linux/tracepoint.h>
 #include <trace/events/mmflags.h>
 
-DECLARE_EVENT_CLASS(kmem_alloc,
-
-	TP_PROTO(unsigned long call_site,
-		 const void *ptr,
-		 size_t bytes_req,
-		 size_t bytes_alloc,
-		 gfp_t gfp_flags),
-
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags),
-
-	TP_STRUCT__entry(
-		__field(	unsigned long,	call_site	)
-		__field(	const void *,	ptr		)
-		__field(	size_t,		bytes_req	)
-		__field(	size_t,		bytes_alloc	)
-		__field(	gfp_t,		gfp_flags	)
-	),
-
-	TP_fast_assign(
-		__entry->call_site	= call_site;
-		__entry->ptr		= ptr;
-		__entry->bytes_req	= bytes_req;
-		__entry->bytes_alloc	= bytes_alloc;
-		__entry->gfp_flags	= gfp_flags;
-	),
-
-	TP_printk("call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s",
-		(void *)__entry->call_site,
-		__entry->ptr,
-		__entry->bytes_req,
-		__entry->bytes_alloc,
-		show_gfp_flags(__entry->gfp_flags))
-);
-
-DEFINE_EVENT(kmem_alloc, kmalloc,
-
-	TP_PROTO(unsigned long call_site, const void *ptr,
-		 size_t bytes_req, size_t bytes_alloc, gfp_t gfp_flags),
-
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags)
-);
-
-DEFINE_EVENT(kmem_alloc, kmem_cache_alloc,
-
-	TP_PROTO(unsigned long call_site, const void *ptr,
-		 size_t bytes_req, size_t bytes_alloc, gfp_t gfp_flags),
-
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags)
-);
-
 DECLARE_EVENT_CLASS(kmem_alloc_node,
 
 	TP_PROTO(const char *name,
@@ -101,15 +51,6 @@ DECLARE_EVENT_CLASS(kmem_alloc_node,
 		__entry->node)
 );
 
-DEFINE_EVENT(kmem_alloc_node, kmalloc_node,
-
-	TP_PROTO(const char *name, unsigned long call_site,
-		 const void *ptr, size_t bytes_req, size_t bytes_alloc,
-		 gfp_t gfp_flags, int node),
-
-	TP_ARGS(name, call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node)
-);
-
 DEFINE_EVENT(kmem_alloc_node, kmem_cache_alloc_node,
 
 	TP_PROTO(const char *name, unsigned long call_site,
@@ -119,26 +60,6 @@ DEFINE_EVENT(kmem_alloc_node, kmem_cache_alloc_node,
 	TP_ARGS(name, call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node)
 );
 
-TRACE_EVENT(kfree,
-
-	TP_PROTO(unsigned long call_site, const void *ptr),
-
-	TP_ARGS(call_site, ptr),
-
-	TP_STRUCT__entry(
-		__field(	unsigned long,	call_site	)
-		__field(	const void *,	ptr		)
-	),
-
-	TP_fast_assign(
-		__entry->call_site = call_site;
-		__entry->ptr = ptr;
-	),
-
-	TP_printk("call_site=%pS ptr=%p",
-		  (void *)__entry->call_site, __entry->ptr)
-);
-
 TRACE_EVENT(kmem_cache_free,
 
 	TP_PROTO(const char *name, unsigned long call_site, const void *ptr),
diff --git a/mm/slab.c b/mm/slab.c
index e451f8136066..702a78f64b44 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3554,9 +3554,9 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 	ret = slab_alloc_node(cachep, flags, nodeid, size, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc_node(cachep->name, _RET_IP_, ret,
-			   size, cachep->size,
-			   flags, nodeid);
+	trace_kmem_cache_alloc_node(cachep->name, _RET_IP_, ret,
+				    size, cachep->size,
+				    flags, nodeid);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -3692,7 +3692,6 @@ void kfree(const void *objp)
 	struct folio *folio;
 	void *x = (void *) objp;
 
-	trace_kfree(_RET_IP_, objp);
 	if (unlikely(ZERO_OR_NULL_PTR(objp)))
 		return;
@@ -3704,6 +3703,7 @@ void kfree(const void *objp)
 	}
 
 	c = folio_slab(folio)->slab_cache;
+	trace_kmem_cache_free(c->name, _RET_IP_, objp);
 
 	local_irq_save(flags);
 	kfree_debugcheck(objp);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 03949445c5fc..8a8330a777f5 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -917,6 +917,7 @@ void free_large_kmalloc(struct folio *folio, void *object)
 	if (WARN_ON_ONCE(order == 0))
 		pr_warn_once("object pointer: 0x%p\n", object);
 
+	trace_kmem_cache_free(KMALLOC_LARGE_NAME, _RET_IP_, object);
 	kmemleak_free(object);
 	kasan_kfree_large(object);
@@ -962,8 +963,8 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 	ptr = kasan_kmalloc_large(ptr, size, flags);
 	/* As ptr might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ptr, size, 1, flags);
-	trace_kmalloc_node(KMALLOC_LARGE_NAME, _RET_IP_, ptr, size,
-			   PAGE_SIZE << order, flags, node);
+	trace_kmem_cache_alloc_node(KMALLOC_LARGE_NAME, _RET_IP_, ptr, size,
+				    PAGE_SIZE << order, flags, node);
 	return ptr;
 }
diff --git a/mm/slob.c b/mm/slob.c
index d60175c9bb1b..3726b77a066b 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -505,8 +505,8 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 		*m = size;
 		ret = (void *)m + minalign;
 
-		trace_kmalloc_node(KMALLOC_NAME, caller, ret,
-				   size, size + minalign, gfp, node);
+		trace_kmem_cache_alloc_node(KMALLOC_NAME, caller, ret,
+					    size, size + minalign, gfp, node);
 	} else {
 		unsigned int order = get_order(size);
@@ -514,8 +514,9 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 			gfp |= __GFP_COMP;
 		ret = slob_new_pages(gfp, order, node);
 
-		trace_kmalloc_node(KMALLOC_LARGE_NAME, caller, ret,
-				   size, PAGE_SIZE << order, gfp, node);
+		trace_kmem_cache_alloc_node(KMALLOC_LARGE_NAME, caller,
+					    ret, size, PAGE_SIZE << order,
+					    gfp, node);
 	}
 
 	kmemleak_alloc(ret, size, 1, gfp);
@@ -533,8 +534,6 @@ void kfree(const void *block)
 {
 	struct folio *sp;
 
-	trace_kfree(_RET_IP_, block);
-
 	if (unlikely(ZERO_OR_NULL_PTR(block)))
 		return;
 	kmemleak_free(block);
@@ -543,10 +542,13 @@ void kfree(const void *block)
 	if (folio_test_slab(sp)) {
 		int align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
 		unsigned int *m = (unsigned int *)(block - align);
+
+		trace_kmem_cache_free(KMALLOC_NAME, _RET_IP_, block);
 		slob_free(m, *m + align);
 	} else {
 		unsigned int order = folio_order(sp);
 
+		trace_kmem_cache_free(KMALLOC_LARGE_NAME, _RET_IP_, block);
 		mod_node_page_state(folio_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
 				    -(PAGE_SIZE << order));
 		__free_pages(folio_page(sp, 0), order);
diff --git a/mm/slub.c b/mm/slub.c
index 8a23d1f9507d..c2e713bdb26c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3241,8 +3241,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 {
 	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_, size);
 
-	trace_kmalloc_node(s->name, _RET_IP_, ret,
-			   size, s->size, gfpflags, node);
+	trace_kmem_cache_alloc_node(s->name, _RET_IP_, ret,
+				    size, s->size, gfpflags, node);
 
 	ret = kasan_kmalloc(s, ret, size, gfpflags);
 	return ret;
@@ -4366,7 +4366,8 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	ret = slab_alloc_node(s, flags, node, _RET_IP_, size);
 
-	trace_kmalloc_node(s->name, _RET_IP_, ret, size, s->size, flags, node);
+	trace_kmem_cache_alloc_node(s->name, _RET_IP_, ret, size,
+				    s->size, flags, node);
 
 	ret = kasan_kmalloc(s, ret, size, flags);
@@ -4445,8 +4446,7 @@ void kfree(const void *x)
 	struct folio *folio;
 	struct slab *slab;
 	void *object = (void *)x;
-
-	trace_kfree(_RET_IP_, x);
+	struct kmem_cache *s;
 
 	if (unlikely(ZERO_OR_NULL_PTR(x)))
 		return;
@@ -4456,8 +4456,12 @@ void kfree(const void *x)
 		free_large_kmalloc(folio, object);
 		return;
 	}
+
 	slab = folio_slab(folio);
-	slab_free(slab->slab_cache, slab, object, NULL, 1, _RET_IP_);
+	s = slab->slab_cache;
+
+	trace_kmem_cache_free(s->name, _RET_IP_, x);
+	slab_free(s, slab, object, NULL, 1, _RET_IP_);
 }
 EXPORT_SYMBOL(kfree);
@@ -4825,7 +4829,8 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 	ret = slab_alloc_node(s, gfpflags, node, caller, size);
 
 	/* Honor the call site pointer we received. */
-	trace_kmalloc_node(s->name, caller, ret, size, s->size, gfpflags, node);
+	trace_kmem_cache_alloc_node(s->name, caller, ret, size,
+				    s->size, gfpflags, node);
 
 	return ret;
 }
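[ Editor's note: the consolidation pattern here is worth spelling out:
instead of one event per call path (kmalloc, kmalloc_node, kfree, ...),
a single event takes the cache name as its first argument and the call
paths differ only in what name they pass. A toy illustration in plain C,
with a hypothetical trace_alloc() helper standing in for
trace_kmem_cache_alloc_node():

	#include <stdio.h>

	/* One event, parameterized by name. */
	static void trace_alloc(const char *name, void *ptr, size_t req)
	{
		printf("alloc: name=%s ptr=%p bytes_req=%zu\n", name, ptr, req);
	}

	/* Call sites that used to fire distinct tracepoints now share it:
	 *   trace_alloc(cache->name, ret, size);           normal cache
	 *   trace_alloc("kmalloc_node", ret, size);        SLOB kmalloc
	 *   trace_alloc("kmalloc_large_node", ret, size);  page-backed
	 */

Consumers can then filter on the name field to recover the old
per-call-path views, rather than needing dedicated events. ]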
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Marco Elver, Matthew WilCox,
    Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [RFC PATCH v1 12/15] mm/sl[au]b: generalize kmalloc subsystem
Date: Tue, 8 Mar 2022 11:41:39 +0000
Message-Id: <20220308114142.1744229-13-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-1-42.hyeyoo@gmail.com>
References: <20220308114142.1744229-1-42.hyeyoo@gmail.com>

Now everything in the kmalloc subsystem can be generalized.
Let's do it!

Generalize __kmalloc_node_track_caller(), kfree() and __ksize(), and
move them to slab_common.c. Make __kmalloc_node() a wrapper of
__kmalloc_node_track_caller(), as the two are duplicates.

To keep the caller address unchanged in the kmalloc/kfree tracepoints,
implement __kmem_cache_{alloc_node,free}() variants that take the
caller address.
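The pattern used throughout this patch, in a condensed sketch of the
hunks below: capture _RET_IP_ in an always-inlined wrapper and thread it
down, so the tracepoints and leak tracking report the real call site
rather than a common helper:

	/* Out-of-line helper: does the work, reports 'caller' in traces. */
	void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
					  unsigned long caller);

	static __always_inline void *__kmalloc_node(size_t size, gfp_t flags,
						    int node)
	{
		/*
		 * Because the wrapper is always inlined, _RET_IP_ evaluates
		 * in the user's function, i.e. to the caller of
		 * __kmalloc_node(), not to an address inside mm/.
		 */
		return __kmalloc_node_track_caller(size, flags, node, _RET_IP_);
	}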
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h |  79 ++++++++++++++++++-------
 mm/slab.c            | 135 ++++---------------------------------------
 mm/slab_common.c     |  75 ++++++++++++++++++++++++
 mm/slob.c            |  32 +++++-----
 mm/slub.c            | 105 +++------------------------------
 5 files changed, 166 insertions(+), 260 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 9ced225a3ea3..6b632137f799 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -401,14 +401,53 @@ static_assert(PAGE_SHIFT <= 20);
 #define kmalloc_index(s) __kmalloc_index(s, true)
 #endif /* !CONFIG_SLOB */
 
-void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment
-							 __alloc_size(1);
-void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment
-									 __malloc;
+extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
+					 unsigned long caller) __alloc_size(1);
+#define kmalloc_node_track_caller(size, flags, node) \
+	__kmalloc_node_track_caller(size, flags, node, \
+				    _RET_IP_)
+/*
+ * kmalloc_track_caller is a special version of kmalloc that records the
+ * calling function of the routine calling it for slab leak tracking instead
+ * of just the calling function (confusing, eh?).
+ * It's useful when the call to kmalloc comes from a widely-used standard
+ * allocator where we care about the real place the memory allocation
+ * request comes from.
+ */
+#define kmalloc_track_caller(size, flags) \
+	__kmalloc_node_track_caller(size, flags, NUMA_NO_NODE, _RET_IP_)
+
+static __always_inline void *__kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	return __kmalloc_node_track_caller(size, flags, node, _RET_IP_);
+}
 
 static __always_inline void *__kmalloc(size_t size, gfp_t flags)
 {
-	return __kmalloc_node(size, flags, NUMA_NO_NODE);
+	return __kmalloc_node_track_caller(size, flags, NUMA_NO_NODE, _RET_IP_);
+}
+
+void __kmem_cache_free(struct kmem_cache *s, void *x, unsigned long caller);
+void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
+			      int node, unsigned long caller);
+
+/**
+ * kmem_cache_alloc_node - Allocate an object on the specified node
+ * @cachep: The cache to allocate from.
+ * @flags: See kmalloc().
+ * @nodeid: node number of the target node.
+ *
+ * Identical to kmem_cache_alloc but it will allocate memory on the given
+ * node, which can improve the performance for cpu bound structures.
+ *
+ * Fallback to other node is possible if __GFP_THISNODE is not set.
+ *
+ * Return: pointer to the new object or %NULL in case of error
+ */
+static __always_inline void *
+kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
+{
+	return __kmem_cache_alloc_node(s, gfpflags, node, _RET_IP_);
 }
 
 /**
@@ -423,10 +462,21 @@ static __always_inline void *__kmalloc(size_t size, gfp_t flags)
  */
 static __always_inline void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags)
 {
-	return kmem_cache_alloc_node(s, flags, NUMA_NO_NODE);
+	return __kmem_cache_alloc_node(s, flags, NUMA_NO_NODE, _RET_IP_);
 }
 
-void kmem_cache_free(struct kmem_cache *s, void *objp);
+/**
+ * kmem_cache_free - Deallocate an object
+ * @cachep: The cache the allocation was from.
+ * @objp: The previously allocated object.
+ *
+ * Free an object which was previously allocated from this
+ * cache.
+ */
+static __always_inline void kmem_cache_free(struct kmem_cache *s, void *x)
+{
+	__kmem_cache_free(s, x, _RET_IP_);
+}
 
 /*
  * Bulk allocation and freeing operations. These are accelerated in an
@@ -613,21 +663,6 @@ static inline __alloc_size(1, 2) void *kcalloc_node(size_t n, size_t size, gfp_t
 	return kmalloc_array_node(n, size, flags | __GFP_ZERO, node);
 }
 
-extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
-					 unsigned long caller) __alloc_size(1);
-#define kmalloc_node_track_caller(size, flags, node) \
-	__kmalloc_node_track_caller(size, flags, node, \
-				    _RET_IP_)
-/*
- * kmalloc_track_caller is a special version of kmalloc that records the
- * calling function of the routine calling it for slab leak tracking instead
- * of just the calling function (confusing, eh?).
- * It's useful when the call to kmalloc comes from a widely-used standard
- * allocator where we care about the real place the memory allocation
- * request comes from.
- */
-#define kmalloc_track_caller(size, flags) \
-	__kmalloc_node_track_caller(size, flags, NUMA_NO_NODE, _RET_IP_)
 /*
  * Shortcuts
  */
diff --git a/mm/slab.c b/mm/slab.c
index 702a78f64b44..2f4d13bb511b 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3519,30 +3519,19 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_bulk);
 
-/**
- * kmem_cache_alloc_node - Allocate an object on the specified node
- * @cachep: The cache to allocate from.
- * @flags: See kmalloc().
- * @nodeid: node number of the target node.
- *
- * Identical to kmem_cache_alloc but it will allocate memory on the given
- * node, which can improve the performance for cpu bound structures.
- *
- * Fallback to other node is possible if __GFP_THISNODE is not set.
- *
- * Return: pointer to the new object or %NULL in case of error
- */
-void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
+void *__kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
+			      int nodeid, unsigned long caller)
 {
-	void *ret = slab_alloc_node(cachep, flags, nodeid, cachep->object_size, _RET_IP_);
+	void *ret = slab_alloc_node(cachep, flags, nodeid,
+				    cachep->object_size, caller);
 
-	trace_kmem_cache_alloc_node(cachep->name, _RET_IP_, ret,
+	trace_kmem_cache_alloc_node(cachep->name, caller, ret,
 				    cachep->object_size, cachep->size,
 				    flags, nodeid);
 
 	return ret;
 }
-EXPORT_SYMBOL(kmem_cache_alloc_node);
+EXPORT_SYMBOL(__kmem_cache_alloc_node);
 
 void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep, gfp_t flags,
@@ -3561,36 +3550,6 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
 
-static __always_inline void *
-__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
-{
-	struct kmem_cache *cachep;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large_node(size, flags, node);
-	cachep = kmalloc_slab(size, flags);
-	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
-		return cachep;
-	ret = kmem_cache_alloc_node_trace(cachep, flags, node, size);
-	ret = kasan_kmalloc(cachep, ret, size, flags);
-
-	return ret;
-}
-
-void *__kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	return __do_kmalloc_node(size, flags, node, _RET_IP_);
-}
-EXPORT_SYMBOL(__kmalloc_node);
-
-void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
-				  int node, unsigned long caller)
-{
-	return __do_kmalloc_node(size, flags, node, caller);
-}
-EXPORT_SYMBOL(__kmalloc_node_track_caller);
-
 #ifdef CONFIG_PRINTK
 void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 {
@@ -3613,30 +3572,23 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 }
 #endif
 
-/**
- * kmem_cache_free - Deallocate an object
- * @cachep: The cache the allocation was from.
- * @objp: The previously allocated object.
- *
- * Free an object which was previously allocated from this
- * cache.
- */
-void kmem_cache_free(struct kmem_cache *cachep, void *objp)
+void __kmem_cache_free(struct kmem_cache *cachep, void *objp,
+		       unsigned long caller)
 {
 	unsigned long flags;
 
 	cachep = cache_from_obj(cachep, objp);
 	if (!cachep)
 		return;
 
-	trace_kmem_cache_free(cachep->name, _RET_IP_, objp);
+	trace_kmem_cache_free(cachep->name, caller, objp);
 	local_irq_save(flags);
 	debug_check_no_locks_freed(objp, cachep->object_size);
 	if (!(cachep->flags & SLAB_DEBUG_OBJECTS))
 		debug_check_no_obj_freed(objp, cachep->object_size);
-	__cache_free(cachep, objp, _RET_IP_);
+	__cache_free(cachep, objp, caller);
 	local_irq_restore(flags);
 }
-EXPORT_SYMBOL(kmem_cache_free);
+EXPORT_SYMBOL(__kmem_cache_free);
 
 void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
 {
@@ -3676,44 +3628,6 @@ void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
 }
 EXPORT_SYMBOL(kmem_cache_free_bulk);
 
-/**
- * kfree - free previously allocated memory
- * @objp: pointer returned by kmalloc.
- *
- * If @objp is NULL, no operation is performed.
- *
- * Don't free memory not originally allocated by kmalloc()
- * or you will run into trouble.
- */
-void kfree(const void *objp)
-{
-	struct kmem_cache *c;
-	unsigned long flags;
-	struct folio *folio;
-	void *x = (void *) objp;
-
-
-	if (unlikely(ZERO_OR_NULL_PTR(objp)))
-		return;
-
-	folio = virt_to_folio(objp);
-	if (!folio_test_slab(folio)) {
-		free_large_kmalloc(folio, x);
-		return;
-	}
-
-	c = folio_slab(folio)->slab_cache;
-	trace_kmem_cache_free(c->name, _RET_IP_, objp);
-
-	local_irq_save(flags);
-	kfree_debugcheck(objp);
-	debug_check_no_locks_freed(objp, c->object_size);
-	debug_check_no_obj_freed(objp, c->object_size);
-	__cache_free(c, (void *)objp, _RET_IP_);
-	local_irq_restore(flags);
-}
-EXPORT_SYMBOL(kfree);
-
 /*
  * This initializes kmem_cache_node or resizes various caches for all nodes.
  */
@@ -4116,30 +4030,3 @@ void __check_heap_object(const void *ptr, unsigned long n,
 		usercopy_abort("SLAB object", cachep->name, to_user, offset, n);
 }
 #endif /* CONFIG_HARDENED_USERCOPY */
-
-/**
- * __ksize -- Uninstrumented ksize.
- * @objp: pointer to the object
- *
- * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
- * safety checks as ksize() with KASAN instrumentation enabled.
- *
- * Return: size of the actual memory used by @objp in bytes
- */
-size_t __ksize(const void *objp)
-{
-	struct kmem_cache *c;
-	struct folio *folio;
-
-	BUG_ON(!objp);
-	if (unlikely(objp == ZERO_SIZE_PTR))
-		return 0;
-
-	folio = virt_to_folio(objp);
-	if (!folio_test_slab(folio))
-		return folio_size(folio);
-
-	c = folio_slab(folio)->slab_cache;
-	return c->object_size;
-}
-EXPORT_SYMBOL(__ksize);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 8a8330a777f5..6533026b4a6b 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -925,6 +925,81 @@ void free_large_kmalloc(struct folio *folio, void *object)
 			    -(PAGE_SIZE << order));
 	__free_pages(folio_page(folio, 0), order);
 }
+
+void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
+				  int node, unsigned long caller)
+{
+	struct kmem_cache *s;
+	void *ret;
+
+	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
+		return kmalloc_large_node(size, gfpflags, node);
+
+	s = kmalloc_slab(size, gfpflags);
+
+	if (unlikely(ZERO_OR_NULL_PTR(s)))
+		return s;
+
+	ret = __kmem_cache_alloc_node(s, gfpflags, node, caller);
+	ret = kasan_kmalloc(s, ret, size, gfpflags);
+
+	return ret;
+}
+EXPORT_SYMBOL(__kmalloc_node_track_caller);
+
+/**
+ * kfree - free previously allocated memory
+ * @objp: pointer returned by kmalloc.
+ *
+ * If @objp is NULL, no operation is performed.
+ *
+ * Don't free memory not originally allocated by kmalloc()
+ * or you will run into trouble.
+ */
+void kfree(const void *x)
+{
+	struct folio *folio;
+	void *object = (void *)x;
+	struct kmem_cache *s;
+
+	if (unlikely(ZERO_OR_NULL_PTR(x)))
+		return;
+
+	folio = virt_to_folio(x);
+	if (unlikely(!folio_test_slab(folio))) {
+		free_large_kmalloc(folio, object);
+		return;
+	}
+
+	s = folio_slab(folio)->slab_cache;
+	__kmem_cache_free(s, object, _RET_IP_);
+}
+EXPORT_SYMBOL(kfree);
+
+/**
+ * __ksize -- Uninstrumented ksize.
+ * @objp: pointer to the object
+ *
+ * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
+ * safety checks as ksize() with KASAN instrumentation enabled.
+ *
+ * Return: size of the actual memory used by @objp in bytes
+ */
+size_t __ksize(const void *object)
+{
+	struct folio *folio;
+
+	if (unlikely(object == ZERO_SIZE_PTR))
+		return 0;
+
+	folio = virt_to_folio(object);
+
+	if (unlikely(!folio_test_slab(folio)))
+		return folio_size(folio);
+
+	return slab_ksize(folio_slab(folio)->slab_cache);
+}
+EXPORT_SYMBOL(__ksize);
 #endif /* !CONFIG_SLOB */
 
 gfp_t kmalloc_fix_flags(gfp_t flags)
diff --git a/mm/slob.c b/mm/slob.c
index 3726b77a066b..836a7d1ae996 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -588,7 +588,8 @@ int __kmem_cache_create(struct kmem_cache *c, slab_flags_t flags)
 	return 0;
 }
 
-static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
+static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags,
+			     int node, unsigned long caller)
 {
 	void *b;
 
@@ -598,12 +599,12 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 
 	if (c->size < PAGE_SIZE) {
 		b = slob_alloc(c->size, flags, c->align, node, 0);
-		trace_kmem_cache_alloc_node(c->name, _RET_IP_, b, c->object_size,
+		trace_kmem_cache_alloc_node(c->name, caller, b, c->object_size,
 					    SLOB_UNITS(c->size) * SLOB_UNIT,
 					    flags, node);
 	} else {
 		b = slob_new_pages(flags, get_order(c->size), node);
-		trace_kmem_cache_alloc_node(c->name, _RET_IP_, b, c->object_size,
+		trace_kmem_cache_alloc_node(c->name, caller, b, c->object_size,
 					    PAGE_SIZE << get_order(c->size),
 					    flags, node);
 	}
@@ -617,19 +618,14 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 	return b;
 }
 
-void *__kmalloc_node(size_t size, gfp_t gfp, int node)
+void *__kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t gfp,
+			      int node, unsigned long caller)
 {
-	return __do_kmalloc_node(size, gfp, node, _RET_IP_);
+	return slob_alloc_node(cachep, gfp, node, caller);
 }
-EXPORT_SYMBOL(__kmalloc_node);
+EXPORT_SYMBOL(__kmem_cache_alloc_node);
 
-void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t gfp, int node)
-{
-	return slob_alloc_node(cachep, gfp, node);
-}
-EXPORT_SYMBOL(kmem_cache_alloc_node);
-
-static void __kmem_cache_free(void *b, int size)
+static void ___kmem_cache_free(void *b, int size)
 {
 	if (size < PAGE_SIZE)
 		slob_free(b, size);
@@ -642,23 +638,23 @@ static void kmem_rcu_free(struct rcu_head *head)
 	struct slob_rcu *slob_rcu = (struct slob_rcu *)head;
 	void *b = (void *)slob_rcu - (slob_rcu->size - sizeof(struct slob_rcu));
 
-	__kmem_cache_free(b, slob_rcu->size);
+	___kmem_cache_free(b, slob_rcu->size);
 }
 
-void kmem_cache_free(struct kmem_cache *c, void *b)
+void __kmem_cache_free(struct kmem_cache *c, void *b, unsigned long caller)
 {
 	kmemleak_free_recursive(b, c->flags);
-	trace_kmem_cache_free(c->name, _RET_IP_, b);
+	trace_kmem_cache_free(c->name, caller, b);
 	if (unlikely(c->flags & SLAB_TYPESAFE_BY_RCU)) {
 		struct slob_rcu *slob_rcu;
 		slob_rcu = b + (c->size - sizeof(struct slob_rcu));
 		slob_rcu->size = c->size;
 		call_rcu(&slob_rcu->head, kmem_rcu_free);
 	} else {
-		__kmem_cache_free(b, c->size);
+		___kmem_cache_free(b, c->size);
 	}
 }
-EXPORT_SYMBOL(kmem_cache_free);
+EXPORT_SYMBOL(__kmem_cache_free);
 
 void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
 {
diff --git a/mm/slub.c b/mm/slub.c
index c2e713bdb26c..f8fdb6b4fbd2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3224,16 +3224,17 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
 	return slab_alloc_node(s, gfpflags, NUMA_NO_NODE, addr, orig_size);
 }
 
-void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
+void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
+			      int node, unsigned long caller)
 {
-	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_, s->object_size);
+	void *ret = slab_alloc_node(s, gfpflags, node, caller, s->object_size);
 
-	trace_kmem_cache_alloc_node(s->name, _RET_IP_, ret,
+	trace_kmem_cache_alloc_node(s->name, caller, ret,
 				    s->object_size, s->size, gfpflags, node);
 
 	return ret;
 }
-EXPORT_SYMBOL(kmem_cache_alloc_node);
+EXPORT_SYMBOL(__kmem_cache_alloc_node);
 
 void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
@@ -3477,15 +3478,15 @@ void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
 }
 #endif
 
-void kmem_cache_free(struct kmem_cache *s, void *x)
+void __kmem_cache_free(struct kmem_cache *s, void *x, unsigned long caller)
 {
 	s = cache_from_obj(s, x);
 	if (!s)
 		return;
-	trace_kmem_cache_free(s->name, _RET_IP_, x);
-	slab_free(s, virt_to_slab(x), x, NULL, 1, _RET_IP_);
+	trace_kmem_cache_free(s->name, caller, x);
+	slab_free(s, virt_to_slab(x), x, NULL, 1, caller);
 }
-EXPORT_SYMBOL(kmem_cache_free);
+EXPORT_SYMBOL(__kmem_cache_free);
 
 struct detached_freelist {
 	struct slab *slab;
@@ -4351,30 +4352,6 @@ static int __init setup_slub_min_objects(char *str)
 
 __setup("slub_min_objects=", setup_slub_min_objects);
 
-void *__kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large_node(size, flags, node);
-
-	s = kmalloc_slab(size, flags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc_node(s, flags, node, _RET_IP_, size);
-
-	trace_kmem_cache_alloc_node(s->name, _RET_IP_, ret, size,
-				    s->size, flags, node);
-
-	ret = kasan_kmalloc(s, ret, size, flags);
-
-	return ret;
-}
-EXPORT_SYMBOL(__kmalloc_node);
-
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
  * Rejects incorrectly sized objects and objects that are to be copied
@@ -4425,46 +4402,6 @@ void __check_heap_object(const void *ptr, unsigned long n,
 }
 #endif /* CONFIG_HARDENED_USERCOPY */
 
-size_t __ksize(const void *object)
-{
-	struct folio *folio;
-
-	if (unlikely(object == ZERO_SIZE_PTR))
-		return 0;
-
-	folio = virt_to_folio(object);
-
-	if (unlikely(!folio_test_slab(folio)))
-		return folio_size(folio);
-
-	return slab_ksize(folio_slab(folio)->slab_cache);
-}
-EXPORT_SYMBOL(__ksize);
-
-void kfree(const void *x)
-{
-	struct folio *folio;
-	struct slab *slab;
-	void *object = (void *)x;
-	struct kmem_cache *s;
-
-	if (unlikely(ZERO_OR_NULL_PTR(x)))
-		return;
-
-	folio = virt_to_folio(x);
-	if (unlikely(!folio_test_slab(folio))) {
-		free_large_kmalloc(folio, object);
-		return;
-	}
-
-	slab = folio_slab(folio);
-	s = slab->slab_cache;
-
-	trace_kmem_cache_free(s->name, _RET_IP_, x);
-	slab_free(s, slab, object, NULL, 1, _RET_IP_);
-}
-EXPORT_SYMBOL(kfree);
-
 #define SHRINK_PROMOTE_MAX 32
 
 /*
@@ -4812,30 +4749,6 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 	return 0;
 }
 
-void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
-				  int node, unsigned long caller)
-{
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large_node(size, gfpflags, node);
-
-	s = kmalloc_slab(size, gfpflags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc_node(s, gfpflags, node, caller, size);
-
-	/* Honor the call site pointer we received. */
-	trace_kmem_cache_alloc_node(s->name, caller, ret, size,
-				    s->size, gfpflags, node);
-
-	return ret;
-}
-EXPORT_SYMBOL(__kmalloc_node_track_caller);
-
 #ifdef CONFIG_SYSFS
 static int count_inuse(struct slab *slab)
 {

From patchwork Tue Mar 8 11:41:40 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12773574
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Marco Elver, Matthew WilCox,
    Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [RFC PATCH v1 13/15] mm/sl[au]b: remove kmem_cache_alloc_node_trace()
Date: Tue, 8 Mar 2022 11:41:40 +0000
Message-Id: <20220308114142.1744229-14-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-1-42.hyeyoo@gmail.com>
References: <20220308114142.1744229-1-42.hyeyoo@gmail.com>

kmem_cache_alloc_node_trace() was introduced by commit 4a92379bdfb4
("slub tracing: move trace calls out of always inlined functions to
reduce kernel code size") to avoid inlining tracepoints for inlined
kmalloc calls.

Now that kmalloc and normal caches use the same tracepoint,
kmem_cache_alloc_node_trace() can be replaced with
__kmem_cache_alloc_node() and kasan_kmalloc().
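Condensed from the hunk below, the constant-size fast path becomes
(kmalloc_node_sketch is a hypothetical name used only for illustration;
error handling for a zero index is elided):

	static __always_inline void *kmalloc_node_sketch(size_t size,
							 gfp_t flags, int node)
	{
		unsigned int index = kmalloc_index(size);
		struct kmem_cache *s = kmalloc_caches[kmalloc_type(flags)][index];
		void *objp;

		/*
		 * Old: return kmem_cache_alloc_node_trace(s, flags, node, size);
		 * an out-of-line wrapper kept only so the kmalloc-specific
		 * tracepoint was not inlined into every call site.
		 *
		 * New: the generic entry point traces internally, so only the
		 * KASAN hook remains here.
		 */
		objp = __kmem_cache_alloc_node(s, flags, node, _RET_IP_);
		return kasan_kmalloc(s, objp, size, flags);
	}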
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 include/linux/slab.h | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 6b632137f799..8da8beff712f 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -497,10 +497,6 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
-					 int node, size_t size) __assume_slab_alignment
-					 __alloc_size(4);
-
 extern void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 			__assume_page_alignment __alloc_size(1);
 
@@ -512,6 +508,9 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
 #ifndef CONFIG_SLOB
 static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
+	struct kmem_cache *s;
+	void *objp;
+
 	if (__builtin_constant_p(size)) {
 		unsigned int index;
 
@@ -523,9 +522,11 @@ static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t fla
 		if (!index)
 			return ZERO_SIZE_PTR;
 
-		return kmem_cache_alloc_node_trace(
-				kmalloc_caches[kmalloc_type(flags)][index],
-				flags, node, size);
+		s = kmalloc_caches[kmalloc_type(flags)][index];
+
+		objp = __kmem_cache_alloc_node(s, flags, node, _RET_IP_);
+		objp = kasan_kmalloc(s, objp, size, flags);
+		return objp;
 	}
 	return __kmalloc_node(size, flags, node);
 }

From patchwork Tue Mar 8 11:41:41 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12773575
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Marco Elver, Matthew WilCox,
    Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [RFC PATCH v1 14/15] mm/sl[auo]b: move definition of __ksize() to
 mm/slab.h
Date: Tue, 8 Mar 2022 11:41:41 +0000
Message-Id: <20220308114142.1744229-15-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-1-42.hyeyoo@gmail.com>
References: <20220308114142.1744229-1-42.hyeyoo@gmail.com>

__ksize() is only called by KASAN. Remove the export symbol and move
its declaration to mm/slab.h, as we don't want to grow its callers.
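For context, the split looks roughly like this (an illustrative sketch
only, not the kernel's exact ksize() body):

	/* Public, instrumented entry point: safe for general use. */
	size_t ksize(const void *objp)
	{
		size_t size;

		if (unlikely(ZERO_OR_NULL_PTR(objp)))
			return 0;

		size = __ksize(objp);	/* bare size lookup, no KASAN checks */
		/* A real ksize() also unpoisons the whole object for KASAN. */
		return size;
	}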
[ willy@infradead.org: Move definition to mm/slab.h and reduce comments ]

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h |  1 -
 mm/slab.h            |  2 ++
 mm/slab_common.c     | 11 +----------
 mm/slob.c            |  1 -
 4 files changed, 3 insertions(+), 12 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 8da8beff712f..a3f8a103f318 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -182,7 +182,6 @@ int kmem_cache_shrink(struct kmem_cache *s);
 void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags) __alloc_size(2);
 void kfree(const void *objp);
 void kfree_sensitive(const void *objp);
-size_t __ksize(const void *objp);
 size_t ksize(const void *objp);
 #ifdef CONFIG_PRINTK
 bool kmem_valid_obj(void *object);
diff --git a/mm/slab.h b/mm/slab.h
index bfedfe3900bb..4fd4bd7bb4d7 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -673,6 +673,8 @@ void free_large_kmalloc(struct folio *folio, void *object);
 
 #endif /* CONFIG_SLOB */
 
+size_t __ksize(const void *objp);
+
 static inline size_t slab_ksize(const struct kmem_cache *s)
 {
 #ifndef CONFIG_SLUB
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 6533026b4a6b..07ed382ed5a9 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -976,15 +976,7 @@ void kfree(const void *x)
 }
 EXPORT_SYMBOL(kfree);
 
-/**
- * __ksize -- Uninstrumented ksize.
- * @objp: pointer to the object
- *
- * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
- * safety checks as ksize() with KASAN instrumentation enabled.
- *
- * Return: size of the actual memory used by @objp in bytes
- */
+/* Uninstrumented ksize. Only called by KASAN. */
 size_t __ksize(const void *object)
 {
 	struct folio *folio;
@@ -999,7 +991,6 @@ size_t __ksize(const void *object)
 
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
-EXPORT_SYMBOL(__ksize);
 #endif /* !CONFIG_SLOB */
 
 gfp_t kmalloc_fix_flags(gfp_t flags)
diff --git a/mm/slob.c b/mm/slob.c
index 836a7d1ae996..59ddf80e987c 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -576,7 +576,6 @@ size_t __ksize(const void *block)
 	m = (unsigned int *)(block - align);
 	return SLOB_UNITS(*m) * SLOB_UNIT;
 }
-EXPORT_SYMBOL(__ksize);
 
 int __kmem_cache_create(struct kmem_cache *c, slab_flags_t flags)
 {

From patchwork Tue Mar 8 11:41:42 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12773576
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org
Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Marco Elver, Matthew WilCox,
    Roman Gushchin, linux-kernel@vger.kernel.org, 42.hyeyoo@gmail.com
Subject: [RFC PATCH v1 15/15] mm/sl[au]b: check if large object is valid in
 __ksize()
Date: Tue, 8 Mar 2022 11:41:42 +0000
Message-Id: <20220308114142.1744229-16-42.hyeyoo@gmail.com>
In-Reply-To: <20220308114142.1744229-1-42.hyeyoo@gmail.com>
References: <20220308114142.1744229-1-42.hyeyoo@gmail.com>

__ksize() returns the size of an object allocated from the slab
allocator. When an invalid object is passed to __ksize(), returning
zero prevents further memory corruption and lets the caller detect the
error.

If the address of a large object is not the beginning of its folio, or
the folio is too small to hold a large kmalloc object, the object must
be invalid. Return zero in such cases.

Suggested-by: Vlastimil Babka
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slab_common.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 07ed382ed5a9..acb1d27fc9e3 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -986,8 +986,12 @@ size_t __ksize(const void *object)
 
 	folio = virt_to_folio(object);
 
-	if (unlikely(!folio_test_slab(folio)))
+	if (unlikely(!folio_test_slab(folio))) {
+		if (object != folio_address(folio) ||
+		    folio_size(folio) <= KMALLOC_MAX_CACHE_SIZE)
+			return 0;
 		return folio_size(folio);
+	}
 
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
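As a hypothetical illustration of the new semantics (example code, not
part of the patch): a kmalloc_large() object always starts at its folio
base and its folio is strictly larger than KMALLOC_MAX_CACHE_SIZE, so a
pointer that violates either property now reports size 0 instead of a
bogus folio_size():

	void *p = kmalloc(KMALLOC_MAX_CACHE_SIZE + 1, GFP_KERNEL); /* large kmalloc */

	if (p) {
		WARN_ON(__ksize(p) == 0);	/* valid: folio base, big folio */
		WARN_ON(__ksize(p + 16) != 0);	/* invalid: not the folio base */
		kfree(p);
	}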