From patchwork Tue Jul 12 13:39:31 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin,
    Matthew Wilcox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 1/15] mm/slab: move NUMA-related code to __do_cache_alloc()
Date: Tue, 12 Jul 2022 13:39:31 +0000
Message-Id: <20220712133946.307181-2-42.hyeyoo@gmail.com>
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>

To implement slab_alloc_node() independently of the NUMA configuration,
move the NUMA fallback/alternate allocation code into __do_cache_alloc().
One functional change is that node availability is no longer checked when
allocating from the local node.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
v3: Fixed an uninitialized variable bug caused by missing
    NULL-initialization of the variable objp.

 mm/slab.c | 68 +++++++++++++++++++++++++------------------------------
 1 file changed, 31 insertions(+), 37 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 764cbadba69c..3d83d17ff3b3 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3186,13 +3186,14 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 	return obj ? obj : fallback_alloc(cachep, flags);
 }
 
+static void *__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid);
+
 static __always_inline void *
 slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
 		unsigned long caller)
 {
 	unsigned long save_flags;
 	void *ptr;
-	int slab_node = numa_mem_id();
 	struct obj_cgroup *objcg = NULL;
 	bool init = false;
 
@@ -3207,30 +3208,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
-
-	if (nodeid == NUMA_NO_NODE)
-		nodeid = slab_node;
-
-	if (unlikely(!get_node(cachep, nodeid))) {
-		/* Node not bootstrapped yet */
-		ptr = fallback_alloc(cachep, flags);
-		goto out;
-	}
-
-	if (nodeid == slab_node) {
-		/*
-		 * Use the locally cached objects if possible.
-		 * However ____cache_alloc does not allow fallback
-		 * to other nodes. It may fail while we still have
-		 * objects on other nodes available.
-		 */
-		ptr = ____cache_alloc(cachep, flags);
-		if (ptr)
-			goto out;
-	}
-
-	/* ___cache_alloc_node can fall back to other nodes */
-	ptr = ____cache_alloc_node(cachep, flags, nodeid);
-out:
+	ptr = __do_cache_alloc(cachep, flags, nodeid);
 	local_irq_restore(save_flags);
 	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
 	init = slab_want_init_on_alloc(flags, cachep);
@@ -3241,31 +3219,46 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
 }
 
 static __always_inline void *
-__do_cache_alloc(struct kmem_cache *cache, gfp_t flags)
+__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
-	void *objp;
+	void *objp = NULL;
+	int slab_node = numa_mem_id();
 
-	if (current->mempolicy || cpuset_do_slab_mem_spread()) {
-		objp = alternate_node_alloc(cache, flags);
-		if (objp)
-			goto out;
+	if (nodeid == NUMA_NO_NODE) {
+		if (current->mempolicy || cpuset_do_slab_mem_spread()) {
+			objp = alternate_node_alloc(cachep, flags);
+			if (objp)
+				goto out;
+		}
+		/*
+		 * Use the locally cached objects if possible.
+		 * However ____cache_alloc does not allow fallback
+		 * to other nodes. It may fail while we still have
+		 * objects on other nodes available.
+		 */
+		objp = ____cache_alloc(cachep, flags);
+		nodeid = slab_node;
+	} else if (nodeid == slab_node) {
+		objp = ____cache_alloc(cachep, flags);
+	} else if (!get_node(cachep, nodeid)) {
+		/* Node not bootstrapped yet */
+		objp = fallback_alloc(cachep, flags);
+		goto out;
 	}
-	objp = ____cache_alloc(cache, flags);
 
 	/*
 	 * We may just have run out of memory on the local node.
 	 * ____cache_alloc_node() knows how to locate memory on other nodes
 	 */
 	if (!objp)
-		objp = ____cache_alloc_node(cache, flags, numa_mem_id());
-
+		objp = ____cache_alloc_node(cachep, flags, nodeid);
 out:
 	return objp;
 }
 
 #else
 
 static __always_inline void *
-__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
+__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid __maybe_unused)
 {
 	return ____cache_alloc(cachep, flags);
 }
@@ -3292,7 +3285,7 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
-	objp = __do_cache_alloc(cachep, flags);
+	objp = __do_cache_alloc(cachep, flags, NUMA_NO_NODE);
 	local_irq_restore(save_flags);
 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
 	prefetchw(objp);
@@ -3531,7 +3524,8 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	local_irq_disable();
 	for (i = 0; i < size; i++) {
-		void *objp = kfence_alloc(s, s->object_size, flags) ?: __do_cache_alloc(s, flags);
+		void *objp = kfence_alloc(s, s->object_size, flags) ?:
+			     __do_cache_alloc(s, flags, NUMA_NO_NODE);
 
 		if (unlikely(!objp))
 			goto error;
From patchwork Tue Jul 12 13:39:32 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin,
    Matthew Wilcox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 2/15] mm/slab: cleanup slab_alloc() and slab_alloc_node()
Date: Tue, 12 Jul 2022 13:39:32 +0000
Message-Id: <20220712133946.307181-3-42.hyeyoo@gmail.com>
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>

Make slab_alloc_node() available even when CONFIG_NUMA=n, and make
slab_alloc() a wrapper of slab_alloc_node(). This is necessary for
further cleanup.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 mm/slab.c | 50 +++++++++++++------------------------------------
 1 file changed, 13 insertions(+), 37 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 3d83d17ff3b3..5bcd2b62b5a2 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3186,38 +3186,6 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 	return obj ? obj : fallback_alloc(cachep, flags);
 }
 
-static void *__do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid);
-
-static __always_inline void *
-slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
-		unsigned long caller)
-{
-	unsigned long save_flags;
-	void *ptr;
-	struct obj_cgroup *objcg = NULL;
-	bool init = false;
-
-	flags &= gfp_allowed_mask;
-	cachep = slab_pre_alloc_hook(cachep, NULL, &objcg, 1, flags);
-	if (unlikely(!cachep))
-		return NULL;
-
-	ptr = kfence_alloc(cachep, orig_size, flags);
-	if (unlikely(ptr))
-		goto out_hooks;
-
-	cache_alloc_debugcheck_before(cachep, flags);
-	local_irq_save(save_flags);
-	ptr = __do_cache_alloc(cachep, flags, nodeid);
-	local_irq_restore(save_flags);
-	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
-	init = slab_want_init_on_alloc(flags, cachep);
-
-out_hooks:
-	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr, init);
-	return ptr;
-}
-
 static __always_inline void *
 __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
@@ -3266,8 +3234,8 @@ __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags, int nodeid __maybe_unus
 #endif /* CONFIG_NUMA */
 
 static __always_inline void *
-slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
-	   size_t orig_size, unsigned long caller)
+slab_alloc_node(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
+		int nodeid, size_t orig_size, unsigned long caller)
 {
 	unsigned long save_flags;
 	void *objp;
@@ -3285,7 +3253,7 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
-	objp = __do_cache_alloc(cachep, flags, NUMA_NO_NODE);
+	objp = __do_cache_alloc(cachep, flags, nodeid);
 	local_irq_restore(save_flags);
 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
 	prefetchw(objp);
@@ -3296,6 +3264,14 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 	return objp;
 }
 
+static __always_inline void *
+slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
+	   size_t orig_size, unsigned long caller)
+{
+	return slab_alloc_node(cachep, lru, flags, NUMA_NO_NODE, orig_size,
+			       caller);
+}
+
 /*
  * Caller needs to acquire correct kmem_cache_node's list_lock
  * @list: List of detached free slabs should be freed by caller
@@ -3584,7 +3560,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_trace);
  */
 void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
-	void *ret = slab_alloc_node(cachep, flags, nodeid, cachep->object_size, _RET_IP_);
+	void *ret = slab_alloc_node(cachep, NULL, flags, nodeid, cachep->object_size, _RET_IP_);
 
 	trace_kmem_cache_alloc_node(_RET_IP_, ret, cachep,
 				    cachep->object_size, cachep->size,
@@ -3602,7 +3578,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 {
 	void *ret;
 
-	ret = slab_alloc_node(cachep, flags, nodeid, size, _RET_IP_);
+	ret = slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc_node(_RET_IP_, ret, cachep,
From patchwork Tue Jul 12 13:39:33 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin,
    Matthew Wilcox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 03/15] mm/slab_common: remove CONFIG_NUMA ifdefs for common kmalloc functions
Date: Tue, 12 Jul 2022 13:39:33 +0000
Message-Id: <20220712133946.307181-4-42.hyeyoo@gmail.com>
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>

Now that slab_alloc_node() is available for SLAB when CONFIG_NUMA=n,
remove the CONFIG_NUMA ifdefs for the common kmalloc functions.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 28 ----------------------------
 mm/slab.c            |  2 --
 mm/slob.c            |  5 +----
 mm/slub.c            |  6 ------
 4 files changed, 1 insertion(+), 40 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 0fefdf528e0d..4754c834b0e3 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -456,38 +456,18 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 	kmem_cache_free_bulk(NULL, size, p);
 }
 
-#ifdef CONFIG_NUMA
 void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment
 							 __alloc_size(1);
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment
 									 __malloc;
-#else
-static __always_inline __alloc_size(1) void *__kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	return __kmalloc(size, flags);
-}
-
-static __always_inline void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node)
-{
-	return kmem_cache_alloc(s, flags);
-}
-#endif
 
 #ifdef CONFIG_TRACING
 extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
 				    __assume_slab_alignment __alloc_size(3);
 
-#ifdef CONFIG_NUMA
 extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
 					 int node, size_t size) __assume_slab_alignment
 								__alloc_size(4);
-#else
-static __always_inline __alloc_size(4) void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
-	gfp_t gfpflags, int node, size_t size)
-{
-	return kmem_cache_alloc_trace(s, gfpflags, size);
-}
-#endif /* CONFIG_NUMA */
-
 #else /* CONFIG_TRACING */
 static __always_inline __alloc_size(3) void *kmem_cache_alloc_trace(struct kmem_cache *s,
@@ -701,20 +681,12 @@ static inline __alloc_size(1, 2) void *kcalloc_node(size_t n, size_t size, gfp_t
 }
 
-#ifdef CONFIG_NUMA
 extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
 					 unsigned long caller) __alloc_size(1);
 #define kmalloc_node_track_caller(size, flags, node) \
 	__kmalloc_node_track_caller(size, flags, node, \
 				    _RET_IP_)
-
-#else /* CONFIG_NUMA */
-
-#define kmalloc_node_track_caller(size, flags, node) \
-	kmalloc_track_caller(size, flags)
-
-#endif /* CONFIG_NUMA */
-
 /*
  * Shortcuts
  */
diff --git a/mm/slab.c b/mm/slab.c
index 5ca8bb7335dc..81804921c538 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3535,7 +3535,6 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #endif
 
-#ifdef CONFIG_NUMA
 /**
  * kmem_cache_alloc_node - Allocate an object on the specified node
  * @cachep: The cache to allocate from.
@@ -3609,7 +3608,6 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
 	return __do_kmalloc_node(size, flags, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_PRINTK
 void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
diff --git a/mm/slob.c b/mm/slob.c
index 56421fe461c4..c54aad6b106c 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -536,14 +536,12 @@ void *__kmalloc_track_caller(size_t size, gfp_t gfp, unsigned long caller)
 }
 EXPORT_SYMBOL(__kmalloc_track_caller);
 
-#ifdef CONFIG_NUMA
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfp,
 				  int node, unsigned long caller)
 {
 	return __do_kmalloc_node(size, gfp, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif
 
 void kfree(const void *block)
 {
@@ -647,7 +645,7 @@ void *kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, gfp_
 	return slob_alloc_node(cachep, flags, NUMA_NO_NODE);
 }
 EXPORT_SYMBOL(kmem_cache_alloc_lru);
-#ifdef CONFIG_NUMA
+
 void *__kmalloc_node(size_t size, gfp_t gfp, int node)
 {
 	return __do_kmalloc_node(size, gfp, node, _RET_IP_);
@@ -659,7 +657,6 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t gfp, int node)
 	return slob_alloc_node(cachep, gfp, node);
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node);
-#endif
 
 static void __kmem_cache_free(void *b, int size)
 {
diff --git a/mm/slub.c b/mm/slub.c
index 26b00951aad1..a5b3a4484263 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3287,7 +3287,6 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #endif
 
-#ifdef CONFIG_NUMA
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->object_size);
@@ -3314,7 +3313,6 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
 #endif
-#endif /* CONFIG_NUMA */
 
 /*
  * Slow path handling. This may still be called frequently since objects
@@ -4427,7 +4425,6 @@ void *__kmalloc(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(__kmalloc);
 
-#ifdef CONFIG_NUMA
 static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	struct page *page;
@@ -4474,7 +4471,6 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
-#endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
@@ -4930,7 +4926,6 @@ void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
 }
 EXPORT_SYMBOL(__kmalloc_track_caller);
 
-#ifdef CONFIG_NUMA
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 				  int node, unsigned long caller)
 {
@@ -4960,7 +4955,6 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
-#endif
 
 #ifdef CONFIG_SYSFS
 static int count_inuse(struct slab *slab)
From patchwork Tue Jul 12 13:39:34 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin,
	Matthew Wilcox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 04/15] mm/slab_common: cleanup kmalloc_track_caller()
Date: Tue, 12 Jul 2022 13:39:34 +0000
Message-Id: <20220712133946.307181-5-42.hyeyoo@gmail.com>
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>
Make kmalloc_track_caller() a wrapper of kmalloc_node_track_caller().
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 17 ++++++++---------
 mm/slab.c            |  6 ------
 mm/slob.c            |  6 ------
 mm/slub.c            | 22 ----------------------
 4 files changed, 8 insertions(+), 43 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 4754c834b0e3..a0e57df3d5a4 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -651,6 +651,12 @@ static inline __alloc_size(1, 2) void *kcalloc(size_t n, size_t size, gfp_t flag
 	return kmalloc_array(n, size, flags | __GFP_ZERO);
 }
 
+void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
+				  unsigned long caller) __alloc_size(1);
+#define kmalloc_node_track_caller(size, flags, node) \
+	__kmalloc_node_track_caller(size, flags, node, \
+				    _RET_IP_)
+
 /*
  * kmalloc_track_caller is a special version of kmalloc that records the
  * calling function of the routine calling it for slab leak tracking instead
@@ -659,9 +665,9 @@ static inline __alloc_size(1, 2) void *kcalloc(size_t n, size_t size, gfp_t flag
  * allocator where we care about the real place the memory allocation
  * request comes from.
  */
-extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller);
 #define kmalloc_track_caller(size, flags) \
-	__kmalloc_track_caller(size, flags, _RET_IP_)
+	__kmalloc_node_track_caller(size, flags, \
+				    NUMA_NO_NODE, _RET_IP_)
 
 static inline __alloc_size(1, 2) void *kmalloc_array_node(size_t n, size_t size, gfp_t flags,
 							  int node)
@@ -680,13 +686,6 @@ static inline __alloc_size(1, 2) void *kcalloc_node(size_t n, size_t size, gfp_t
 	return kmalloc_array_node(n, size, flags | __GFP_ZERO, node);
 }
 
-
-extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
-					 unsigned long caller) __alloc_size(1);
-#define kmalloc_node_track_caller(size, flags, node) \
-	__kmalloc_node_track_caller(size, flags, node, \
-				    _RET_IP_)
-
 /*
  * Shortcuts
  */
diff --git a/mm/slab.c b/mm/slab.c
index 81804921c538..da2f6a5dd8fa 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3665,12 +3665,6 @@ void *__kmalloc(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(__kmalloc);
 
-void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller)
-{
-	return __do_kmalloc(size, flags, caller);
-}
-EXPORT_SYMBOL(__kmalloc_track_caller);
-
 /**
  * kmem_cache_free - Deallocate an object
  * @cachep: The cache the allocation was from.
diff --git a/mm/slob.c b/mm/slob.c
index c54aad6b106c..80cdbe4f0d67 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -530,12 +530,6 @@ void *__kmalloc(size_t size, gfp_t gfp)
 }
 EXPORT_SYMBOL(__kmalloc);
 
-void *__kmalloc_track_caller(size_t size, gfp_t gfp, unsigned long caller)
-{
-	return __do_kmalloc_node(size, gfp, NUMA_NO_NODE, caller);
-}
-EXPORT_SYMBOL(__kmalloc_track_caller);
-
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfp,
 					int node, unsigned long caller)
 {
diff --git a/mm/slub.c b/mm/slub.c
index a5b3a4484263..7c284535a62b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4904,28 +4904,6 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 	return 0;
 }
 
-void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
-{
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large(size, gfpflags);
-
-	s = kmalloc_slab(size, gfpflags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc(s, NULL, gfpflags, caller, size);
-
-	/* Honor the call site pointer we received.
 */
-	trace_kmalloc(caller, ret, s, size, s->size, gfpflags);
-
-	return ret;
-}
-EXPORT_SYMBOL(__kmalloc_track_caller);
-
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 					int node, unsigned long caller)
 {

From patchwork Tue Jul 12 13:39:35 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12915004
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin,
	Matthew Wilcox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 05/15] mm/sl[au]b: factor out __do_kmalloc_node()
Date: Tue, 12 Jul 2022 13:39:35 +0000
Message-Id: <20220712133946.307181-6-42.hyeyoo@gmail.com>
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>
__kmalloc(), __kmalloc_node() and __kmalloc_node_track_caller() mostly do
the same job. Factor the common code out into __do_kmalloc_node().

Note that this patch also fixes a missing kasan_kmalloc() in SLUB's
__kmalloc_node_track_caller().

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 mm/slab.c | 30 +----------------------
 mm/slub.c | 71 +++++++++++++++----------------------------------------
 2 files changed, 20 insertions(+), 81 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index da2f6a5dd8fa..ab34727d61b2 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3631,37 +3631,9 @@ void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 }
 #endif
 
-/**
- * __do_kmalloc - allocate memory
- * @size: how many bytes of memory are required.
- * @flags: the type of memory to allocate (see kmalloc).
- * @caller: function caller for debug tracking of the caller
- *
- * Return: pointer to the allocated memory or %NULL in case of error
- */
-static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
-					  unsigned long caller)
-{
-	struct kmem_cache *cachep;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return NULL;
-	cachep = kmalloc_slab(size, flags);
-	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
-		return cachep;
-	ret = slab_alloc(cachep, NULL, flags, size, caller);
-
-	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc(caller, ret, cachep,
-		      size, cachep->size, flags);
-
-	return ret;
-}
-
 void *__kmalloc(size_t size, gfp_t flags)
 {
-	return __do_kmalloc(size, flags, _RET_IP_);
+	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
 }
 EXPORT_SYMBOL(__kmalloc);
 
diff --git a/mm/slub.c b/mm/slub.c
index 7c284535a62b..2ccc473e0ae7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4402,29 +4402,6 @@ static int __init setup_slub_min_objects(char *str)
 
 __setup("slub_min_objects=", setup_slub_min_objects);
 
-void *__kmalloc(size_t size, gfp_t flags)
-{
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-		return kmalloc_large(size, flags);
-
-	s = kmalloc_slab(size, flags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc(s, NULL, flags, _RET_IP_, size);
-
-	trace_kmalloc(_RET_IP_, ret, s, size, s->size, flags);
-
-	ret = kasan_kmalloc(s, ret, size, flags);
-
-	return ret;
-}
-EXPORT_SYMBOL(__kmalloc);
-
 static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
 	struct page *page;
@@ -4442,7 +4419,8 @@ static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 	return kmalloc_large_node_hook(ptr, size, flags);
 }
 
-void *__kmalloc_node(size_t size, gfp_t flags, int node)
+static __always_inline
+void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
 {
 	struct kmem_cache *s;
 	void *ret;
 
@@ -4450,7 +4428,7 @@ void *__kmalloc_node(size_t
size, gfp_t flags, int node)
 	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
 		ret = kmalloc_large_node(size, flags, node);
 
-		trace_kmalloc_node(_RET_IP_, ret, NULL,
+		trace_kmalloc_node(caller, ret, NULL,
 				   size, PAGE_SIZE << get_order(size),
 				   flags, node);
 
@@ -4462,16 +4440,28 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	if (unlikely(ZERO_OR_NULL_PTR(s)))
 		return s;
 
-	ret = slab_alloc_node(s, NULL, flags, node, _RET_IP_, size);
+	ret = slab_alloc_node(s, NULL, flags, node, caller, size);
 
-	trace_kmalloc_node(_RET_IP_, ret, s, size, s->size, flags, node);
+	trace_kmalloc_node(caller, ret, s, size, s->size, flags, node);
 
 	ret = kasan_kmalloc(s, ret, size, flags);
 
 	return ret;
 }
+
+void *__kmalloc_node(size_t size, gfp_t flags, int node)
+{
+	return __do_kmalloc_node(size, flags, node, _RET_IP_);
+}
 EXPORT_SYMBOL(__kmalloc_node);
 
+void *__kmalloc(size_t size, gfp_t flags)
+{
+	return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_);
+}
+EXPORT_SYMBOL(__kmalloc);
+
+
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
  * Rejects incorrectly sized objects and objects that are to be copied
@@ -4905,32 +4895,9 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 }
 
 void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
-				  int node, unsigned long caller)
+				  int node, unsigned long caller)
 {
-	struct kmem_cache *s;
-	void *ret;
-
-	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
-		ret = kmalloc_large_node(size, gfpflags, node);
-
-		trace_kmalloc_node(caller, ret, NULL,
-				   size, PAGE_SIZE << get_order(size),
-				   gfpflags, node);
-
-		return ret;
-	}
-
-	s = kmalloc_slab(size, gfpflags);
-
-	if (unlikely(ZERO_OR_NULL_PTR(s)))
-		return s;
-
-	ret = slab_alloc_node(s, NULL, gfpflags, node, caller, size);
-
-	/* Honor the call site pointer we received.
 */
-	trace_kmalloc_node(caller, ret, s, size, s->size, gfpflags, node);
-
-	return ret;
+	return __do_kmalloc_node(size, gfpflags, node, caller);
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);

From patchwork Tue Jul 12 13:39:36 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12915005
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin,
	Matthew Wilcox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 06/15] mm/slab_common: fold kmalloc_order_trace() into kmalloc_large()
Date: Tue, 12 Jul 2022 13:39:36 +0000
Message-Id: <20220712133946.307181-7-42.hyeyoo@gmail.com>
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>
There is no caller of kmalloc_order_trace() except kmalloc_large(). Fold it
into kmalloc_large() and remove kmalloc_order{,_trace}().

Also add the tracepoint in kmalloc_large() that was previously in
kmalloc_order_trace().

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h | 22 ++--------------------
 mm/slab_common.c     | 14 +++-----------
 2 files changed, 5 insertions(+), 31 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index a0e57df3d5a4..15a4c59da59e 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -489,26 +489,8 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g
 }
 #endif /* CONFIG_TRACING */
 
-extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment
-									 __alloc_size(1);
-
-#ifdef CONFIG_TRACING
-extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
-				__assume_page_alignment __alloc_size(1);
-#else
-static __always_inline __alloc_size(1) void *kmalloc_order_trace(size_t size, gfp_t flags,
-								 unsigned int order)
-{
-	return kmalloc_order(size, flags, order);
-}
-#endif
-
-static __always_inline __alloc_size(1) void *kmalloc_large(size_t size, gfp_t flags)
-{
-	unsigned int order = get_order(size);
-
-	return kmalloc_order_trace(size, flags, order);
-}
-
+void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
+					      __alloc_size(1);
+
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 6c9aac5d8f4a..1f8af2106df0 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -932,10 +932,11 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
  * directly to the page allocator. We use __GFP_COMP, because we will need to
  * know the allocation order to free the pages properly in kfree.
  */
-void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
+void *kmalloc_large(size_t size, gfp_t flags)
 {
 	void *ret = NULL;
 	struct page *page;
+	unsigned int order = get_order(size);
 
 	if (unlikely(flags & GFP_SLAB_BUG_MASK))
 		flags = kmalloc_fix_flags(flags);
@@ -950,19 +951,10 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	ret = kasan_kmalloc_large(ret, size, flags);
 	/* As ret might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ret, size, 1, flags);
-	return ret;
-}
-EXPORT_SYMBOL(kmalloc_order);
-
-#ifdef CONFIG_TRACING
-void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
-{
-	void *ret = kmalloc_order(size, flags, order);
 	trace_kmalloc(_RET_IP_, ret, NULL, size, PAGE_SIZE << order, flags);
 	return ret;
 }
-EXPORT_SYMBOL(kmalloc_order_trace);
-#endif
+EXPORT_SYMBOL(kmalloc_large);
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */

From patchwork Tue Jul 12 13:39:37 2022
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12915006
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin,
	Matthew Wilcox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 07/15] mm/slub: move kmalloc_large_node() to slab_common.c
Date: Tue, 12 Jul 2022 13:39:37 +0000
Message-Id: <20220712133946.307181-8-42.hyeyoo@gmail.com>
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>
header.s=20210112 header.b=IUUUzFdI; spf=pass (imf06.hostedemail.com: domain of 42.hyeyoo@gmail.com designates 209.85.214.175 as permitted sender) smtp.mailfrom=42.hyeyoo@gmail.com; dmarc=pass (policy=none) header.from=gmail.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1657633217; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=5zXey38vmjUFb+GfMhB6B68P1VCpS6kNFrksbY6msJM=; b=yTMQ09VCRQQsrek3zijSjUQIGfOABOIAjibI3KrYirVMh1oTiYFMgQic49u65BDCUY+rgP VyPjCYhq22hQJsOWmWRgOJW1ItmYtsbV4YdQ0oaCz5iSz26vxH2TxNT+n5rAAtovlmnOuT ePwdNtMtlN9CmEsVfY6A+WQqdTgFUSY= X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 66A34180027 X-Rspam-User: Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=gmail.com header.s=20210112 header.b=IUUUzFdI; spf=pass (imf06.hostedemail.com: domain of 42.hyeyoo@gmail.com designates 209.85.214.175 as permitted sender) smtp.mailfrom=42.hyeyoo@gmail.com; dmarc=pass (policy=none) header.from=gmail.com X-Stat-Signature: eg9iac9xtxti5duaqm6dc66t7th4ema8 X-HE-Tag: 1657633217-463099 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: In later patch SLAB will also pass requests larger than order-1 page to page allocator. Move kmalloc_large_node() to slab_common.c. Fold kmalloc_large_node_hook() into kmalloc_large_node() as there is no other caller. 
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h |  4 ++++
 mm/slab_common.c     | 22 ++++++++++++++++++++++
 mm/slub.c            | 25 -------------------------
 3 files changed, 26 insertions(+), 25 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 15a4c59da59e..082499306098 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -491,6 +491,10 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g
 void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment
                                               __alloc_size(1);
+
+void *kmalloc_large_node(size_t size, gfp_t flags, int node) __assume_page_alignment
+                                                             __alloc_size(1);
+
 /**
  * kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 1f8af2106df0..6f855587b635 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -956,6 +956,28 @@ void *kmalloc_large(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(kmalloc_large);
 
+void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+{
+    struct page *page;
+    void *ptr = NULL;
+    unsigned int order = get_order(size);
+
+    flags |= __GFP_COMP;
+    page = alloc_pages_node(node, flags, order);
+    if (page) {
+        ptr = page_address(page);
+        mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+                              PAGE_SIZE << order);
+    }
+
+    ptr = kasan_kmalloc_large(ptr, size, flags);
+    /* As ptr might get tagged, call kmemleak hook after KASAN. */
+    kmemleak_alloc(ptr, size, 1, flags);
+
+    return ptr;
+}
+EXPORT_SYMBOL(kmalloc_large_node);
+
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
 static void freelist_randomize(struct rnd_state *state, unsigned int *list,
diff --git a/mm/slub.c b/mm/slub.c
index 2ccc473e0ae7..f22a84dd27de 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1704,14 +1704,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
  * Hooks for other subsystems that check memory allocations. In a typical
  * production configuration these hooks all should produce no code at all.
  */
-static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
-{
-    ptr = kasan_kmalloc_large(ptr, size, flags);
-    /* As ptr might get tagged, call kmemleak hook after KASAN. */
-    kmemleak_alloc(ptr, size, 1, flags);
-    return ptr;
-}
-
 static __always_inline void kfree_hook(void *x)
 {
     kmemleak_free(x);
@@ -4402,23 +4394,6 @@ static int __init setup_slub_min_objects(char *str)
 
 __setup("slub_min_objects=", setup_slub_min_objects);
 
-static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
-{
-    struct page *page;
-    void *ptr = NULL;
-    unsigned int order = get_order(size);
-
-    flags |= __GFP_COMP;
-    page = alloc_pages_node(node, flags, order);
-    if (page) {
-        ptr = page_address(page);
-        mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
-                              PAGE_SIZE << order);
-    }
-
-    return kmalloc_large_node_hook(ptr, size, flags);
-}
-
 static __always_inline void *__do_kmalloc_node(size_t size, gfp_t flags, int node,
                                                unsigned long caller)
 {

From patchwork Tue Jul 12 13:39:38 2022
X-Patchwork-Id: 12915007
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin, Matthew Wilcox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 08/15] mm/slab_common: kmalloc_node: pass large requests to page allocator
Date: Tue, 12 Jul 2022 13:39:38 +0000
Message-Id: <20220712133946.307181-9-42.hyeyoo@gmail.com>
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>

Now that kmalloc_large_node() is in common code, make kmalloc_node()
pass large requests to the page allocator using kmalloc_large_node().

One problem is that there is currently no tracepoint in
kmalloc_large_node(). Instead of simply putting a tracepoint in it,
provide kmalloc_large_node{,_notrace}() variants chosen by the caller,
so that a useful caller address is shown for both the inlined
kmalloc_node() and __kmalloc_node_track_caller() when large objects are
allocated.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
v3: This patch is new in v3; it avoids a missing caller address in
__kmalloc_large_node_track_caller() when kmalloc_large_node() is called.
 include/linux/slab.h | 26 +++++++++++++++++++-------
 mm/slab.h            |  2 ++
 mm/slab_common.c     | 17 ++++++++++++++++-
 mm/slub.c            |  2 +-
 4 files changed, 38 insertions(+), 9 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 082499306098..fd2e129fc813 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -571,23 +571,35 @@ static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
     return __kmalloc(size, flags);
 }
 
+#ifndef CONFIG_SLOB
 static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
-#ifndef CONFIG_SLOB
-    if (__builtin_constant_p(size) &&
-        size <= KMALLOC_MAX_CACHE_SIZE) {
-        unsigned int i = kmalloc_index(size);
+    if (__builtin_constant_p(size)) {
+        unsigned int index;
 
-        if (!i)
+        if (size > KMALLOC_MAX_CACHE_SIZE)
+            return kmalloc_large_node(size, flags, node);
+
+        index = kmalloc_index(size);
+
+        if (!index)
             return ZERO_SIZE_PTR;
 
         return kmem_cache_alloc_node_trace(
-                kmalloc_caches[kmalloc_type(flags)][i],
+                kmalloc_caches[kmalloc_type(flags)][index],
                 flags, node, size);
     }
-#endif
     return __kmalloc_node(size, flags, node);
 }
+#else
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+{
+    if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
+        return kmalloc_large_node(size, flags, node);
+
+    return __kmalloc_node(size, flags, node);
+}
+#endif
 
 /**
  * kmalloc_array - allocate memory for an array.
diff --git a/mm/slab.h b/mm/slab.h
index a8d5eb1c323f..7cb51ff44f0c 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -273,6 +273,8 @@ void create_kmalloc_caches(slab_flags_t);
 
 /* Find the kmalloc slab corresponding for a certain size */
 struct kmem_cache *kmalloc_slab(size_t, gfp_t);
+
+void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node);
 #endif
 
 gfp_t kmalloc_fix_flags(gfp_t flags);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 6f855587b635..dc872e0ef0fc 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -956,7 +956,8 @@ void *kmalloc_large(size_t size, gfp_t flags)
 }
 EXPORT_SYMBOL(kmalloc_large);
 
-void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+static __always_inline
+void *__kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
 {
     struct page *page;
     void *ptr = NULL;
@@ -976,6 +977,20 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 
     return ptr;
 }
+
+void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
+{
+    return __kmalloc_large_node_notrace(size, flags, node);
+}
+
+void *kmalloc_large_node(size_t size, gfp_t flags, int node)
+{
+    void *ret = __kmalloc_large_node_notrace(size, flags, node);
+
+    trace_kmalloc_node(_RET_IP_, ret, NULL, size,
+                       PAGE_SIZE << get_order(size), flags, node);
+    return ret;
+}
 EXPORT_SYMBOL(kmalloc_large_node);
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
diff --git a/mm/slub.c b/mm/slub.c
index f22a84dd27de..3d02cf44adf7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4401,7 +4401,7 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller
     void *ret;
 
     if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
-        ret = kmalloc_large_node(size, flags, node);
+        ret = kmalloc_large_node_notrace(size, flags, node);
 
         trace_kmalloc_node(caller, ret, NULL, size,
                            PAGE_SIZE << get_order(size),

From patchwork Tue Jul 12 13:39:39 2022
X-Patchwork-Id: 12915008
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin, Matthew Wilcox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 09/15] mm/slab_common: cleanup kmalloc_large()
Date: Tue, 12 Jul 2022 13:39:39 +0000
Message-Id: <20220712133946.307181-10-42.hyeyoo@gmail.com>
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>

Now that kmalloc_large() and kmalloc_large_node() do mostly the same
job, make kmalloc_large() a wrapper around __kmalloc_large_node_notrace().
In the meantime, add the missing flag-fixup code to
__kmalloc_large_node_notrace().

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 mm/slab_common.c | 36 +++++++++++++-----------------------
 1 file changed, 13 insertions(+), 23 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index dc872e0ef0fc..9c46e2f9589f 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -932,29 +932,6 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
  * directly to the page allocator. We use __GFP_COMP, because we will need to
  * know the allocation order to free the pages properly in kfree.
  */
-void *kmalloc_large(size_t size, gfp_t flags)
-{
-    void *ret = NULL;
-    struct page *page;
-    unsigned int order = get_order(size);
-
-    if (unlikely(flags & GFP_SLAB_BUG_MASK))
-        flags = kmalloc_fix_flags(flags);
-
-    flags |= __GFP_COMP;
-    page = alloc_pages(flags, order);
-    if (likely(page)) {
-        ret = page_address(page);
-        mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
-                              PAGE_SIZE << order);
-    }
-    ret = kasan_kmalloc_large(ret, size, flags);
-    /* As ret might get tagged, call kmemleak hook after KASAN. */
-    kmemleak_alloc(ret, size, 1, flags);
-    trace_kmalloc(_RET_IP_, ret, NULL, size, PAGE_SIZE << order, flags);
-    return ret;
-}
-EXPORT_SYMBOL(kmalloc_large);
 
 static __always_inline
 void *__kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
@@ -963,6 +940,9 @@ void *__kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
     void *ptr = NULL;
     unsigned int order = get_order(size);
 
+    if (unlikely(flags & GFP_SLAB_BUG_MASK))
+        flags = kmalloc_fix_flags(flags);
+
     flags |= __GFP_COMP;
     page = alloc_pages_node(node, flags, order);
     if (page) {
@@ -978,6 +958,16 @@ void *__kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
     return ptr;
 }
 
+void *kmalloc_large(size_t size, gfp_t flags)
+{
+    void *ret = __kmalloc_large_node_notrace(size, flags, NUMA_NO_NODE);
+
+    trace_kmalloc(_RET_IP_, ret, NULL, size,
+                  PAGE_SIZE << get_order(size), flags);
+    return ret;
+}
+EXPORT_SYMBOL(kmalloc_large);
+
 void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node)
 {
     return __kmalloc_large_node_notrace(size, flags, node);

From patchwork Tue Jul 12 13:39:40 2022
X-Patchwork-Id: 12915009
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin, Matthew Wilcox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 10/15] mm/slab: kmalloc: pass requests larger than order-1 page to page allocator
Date: Tue, 12 Jul 2022 13:39:40 +0000
Message-Id: <20220712133946.307181-11-42.hyeyoo@gmail.com>
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>

There is not much benefit to serving large objects from the kmalloc
caches. Let's pass large requests to the page allocator as SLUB does,
for better maintenance of common code.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
v2->v3: This patch is basically the same, but it now depends on
kmalloc_large_node_notrace().
 include/linux/slab.h | 23 ++++-------------
 mm/slab.c            | 60 +++++++++++++++++++++++++++++++-------------
 mm/slab.h            |  3 +++
 mm/slab_common.c     | 25 ++++++++++++------
 mm/slub.c            | 19 --------------
 5 files changed, 68 insertions(+), 62 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index fd2e129fc813..4ee5b2fed164 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -243,27 +243,17 @@ static inline unsigned int arch_slab_minalign(void)
 
 #ifdef CONFIG_SLAB
 /*
- * The largest kmalloc size supported by the SLAB allocators is
- * 32 megabyte (2^25) or the maximum allocatable page order if that is
- * less than 32 MB.
- *
- * WARNING: Its not easy to increase this value since the allocators have
- * to do various tricks to work around compiler limitations in order to
- * ensure proper constant folding.
+ * SLAB and SLUB directly allocates requests fitting in to an order-1 page
+ * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
  */
-#define KMALLOC_SHIFT_HIGH ((MAX_ORDER + PAGE_SHIFT - 1) <= 25 ? \
-                            (MAX_ORDER + PAGE_SHIFT - 1) : 25)
-#define KMALLOC_SHIFT_MAX  KMALLOC_SHIFT_HIGH
+#define KMALLOC_SHIFT_HIGH (PAGE_SHIFT + 1)
+#define KMALLOC_SHIFT_MAX  (MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW  5
 #endif
 #endif
 
 #ifdef CONFIG_SLUB
-/*
- * SLUB directly allocates requests fitting in to an order-1 page
- * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
- */
 #define KMALLOC_SHIFT_HIGH (PAGE_SHIFT + 1)
 #define KMALLOC_SHIFT_MAX  (MAX_ORDER + PAGE_SHIFT - 1)
 #ifndef KMALLOC_SHIFT_LOW
@@ -415,10 +405,6 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
     if (size <= 512 * 1024) return 19;
     if (size <= 1024 * 1024) return 20;
     if (size <= 2 * 1024 * 1024) return 21;
-    if (size <= 4 * 1024 * 1024) return 22;
-    if (size <= 8 * 1024 * 1024) return 23;
-    if (size <= 16 * 1024 * 1024) return 24;
-    if (size <= 32 * 1024 * 1024) return 25;
 
     if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES) && size_is_constant)
         BUILD_BUG_ON_MSG(1, "unexpected size in kmalloc_index()");
@@ -428,6 +414,7 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
     /* Will never be reached. Needed because the compiler may complain */
     return -1;
 }
+static_assert(PAGE_SHIFT <= 20);
 #define kmalloc_index(s) __kmalloc_index(s, true)
 #endif /* !CONFIG_SLOB */
diff --git a/mm/slab.c b/mm/slab.c
index ab34727d61b2..a2f43425a0ae 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3585,11 +3585,19 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
     struct kmem_cache *cachep;
     void *ret;
 
-    if (unlikely(size > KMALLOC_MAX_CACHE_SIZE))
-        return NULL;
+    if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
+        ret = kmalloc_large_node_notrace(size, flags, node);
+
+        trace_kmalloc_node(caller, ret, NULL, size,
+                           PAGE_SIZE << get_order(size),
+                           flags, node);
+        return ret;
+    }
+
     cachep = kmalloc_slab(size, flags);
     if (unlikely(ZERO_OR_NULL_PTR(cachep)))
         return cachep;
+
     ret = kmem_cache_alloc_node_trace(cachep, flags, node, size);
     ret = kasan_kmalloc(cachep, ret, size, flags);
@@ -3664,17 +3672,27 @@ EXPORT_SYMBOL(kmem_cache_free);
 
 void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
 {
-    struct kmem_cache *s;
-    size_t i;
 
     local_irq_disable();
-    for (i = 0; i < size; i++) {
+    for (int i = 0; i < size; i++) {
         void *objp = p[i];
+        struct kmem_cache *s;
 
-        if (!orig_s) /* called via kfree_bulk */
-            s = virt_to_cache(objp);
-        else
+        if (!orig_s) {
+            struct folio *folio = virt_to_folio(objp);
+
+            /* called via kfree_bulk */
+            if (!folio_test_slab(folio)) {
+                local_irq_enable();
+                free_large_kmalloc(folio, objp);
+                local_irq_disable();
+                continue;
+            }
+            s = folio_slab(folio)->slab_cache;
+        } else {
             s = cache_from_obj(orig_s, objp);
+        }
+
         if (!s)
             continue;
@@ -3703,20 +3721,24 @@ void kfree(const void *objp)
 {
     struct kmem_cache *c;
     unsigned long flags;
+    struct folio *folio;
 
     trace_kfree(_RET_IP_, objp);
 
     if (unlikely(ZERO_OR_NULL_PTR(objp)))
         return;
-    local_irq_save(flags);
-    kfree_debugcheck(objp);
-    c = virt_to_cache(objp);
-    if (!c) {
-        local_irq_restore(flags);
+
+    folio = virt_to_folio(objp);
+    if (!folio_test_slab(folio)) {
+        free_large_kmalloc(folio, (void *)objp);
         return;
     }
-    debug_check_no_locks_freed(objp, c->object_size);
 
+    c = folio_slab(folio)->slab_cache;
+
+    local_irq_save(flags);
+    kfree_debugcheck(objp);
+    debug_check_no_locks_freed(objp, c->object_size);
     debug_check_no_obj_freed(objp, c->object_size);
     __cache_free(c, (void *)objp, _RET_IP_);
     local_irq_restore(flags);
@@ -4138,15 +4160,17 @@ void __check_heap_object(const void *ptr, unsigned long n,
 size_t __ksize(const void *objp)
 {
     struct kmem_cache *c;
-    size_t size;
+    struct folio *folio;
 
     BUG_ON(!objp);
     if (unlikely(objp == ZERO_SIZE_PTR))
         return 0;
 
-    c = virt_to_cache(objp);
-    size = c ? c->object_size : 0;
+    folio = virt_to_folio(objp);
+    if (!folio_test_slab(folio))
+        return folio_size(folio);
 
-    return size;
+    c = folio_slab(folio)->slab_cache;
+    return c->object_size;
 }
 EXPORT_SYMBOL(__ksize);
diff --git a/mm/slab.h b/mm/slab.h
index 7cb51ff44f0c..c81c92d421f1 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -669,6 +669,9 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
     print_tracking(cachep, x);
     return cachep;
 }
+
+void free_large_kmalloc(struct folio *folio, void *object);
+
 #endif /* CONFIG_SLOB */
 
 static inline size_t slab_ksize(const struct kmem_cache *s)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 9c46e2f9589f..0dff46fb7193 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -771,8 +771,8 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
 
 /*
  * kmalloc_info[] is to make slub_debug=,kmalloc-xx option work at boot time.
- * kmalloc_index() supports up to 2^25=32MB, so the final entry of the table is
- * kmalloc-32M.
+ * kmalloc_index() supports up to 2^21=2MB, so the final entry of the table is
+ * kmalloc-2M.
*/ const struct kmalloc_info_struct kmalloc_info[] __initconst = { INIT_KMALLOC_INFO(0, 0), @@ -796,11 +796,7 @@ const struct kmalloc_info_struct kmalloc_info[] __initconst = { INIT_KMALLOC_INFO(262144, 256k), INIT_KMALLOC_INFO(524288, 512k), INIT_KMALLOC_INFO(1048576, 1M), - INIT_KMALLOC_INFO(2097152, 2M), - INIT_KMALLOC_INFO(4194304, 4M), - INIT_KMALLOC_INFO(8388608, 8M), - INIT_KMALLOC_INFO(16777216, 16M), - INIT_KMALLOC_INFO(33554432, 32M) + INIT_KMALLOC_INFO(2097152, 2M) }; /* @@ -913,6 +909,21 @@ void __init create_kmalloc_caches(slab_flags_t flags) /* Kmalloc array is now usable */ slab_state = UP; } + +void free_large_kmalloc(struct folio *folio, void *object) +{ + unsigned int order = folio_order(folio); + + if (WARN_ON_ONCE(order == 0)) + pr_warn_once("object pointer: 0x%p\n", object); + + kmemleak_free(object); + kasan_kfree_large(object); + + mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B, + -(PAGE_SIZE << order)); + __free_pages(folio_page(folio, 0), order); +} #endif /* !CONFIG_SLOB */ gfp_t kmalloc_fix_flags(gfp_t flags) diff --git a/mm/slub.c b/mm/slub.c index 3d02cf44adf7..6cb7ca27f3b7 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1704,12 +1704,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab, * Hooks for other subsystems that check memory allocations. In a typical * production configuration these hooks all should produce no code at all. 
*/ -static __always_inline void kfree_hook(void *x) -{ - kmemleak_free(x); - kasan_kfree_large(x); -} - static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x, bool init) { @@ -3550,19 +3544,6 @@ struct detached_freelist { struct kmem_cache *s; }; -static inline void free_large_kmalloc(struct folio *folio, void *object) -{ - unsigned int order = folio_order(folio); - - if (WARN_ON_ONCE(order == 0)) - pr_warn_once("object pointer: 0x%p\n", object); - - kfree_hook(object); - mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B, - -(PAGE_SIZE << order)); - __free_pages(folio_page(folio, 0), order); -} - /* * This function progressively scans the array with free objects (with * a limited look ahead) and extract objects belonging to the same

From patchwork Tue Jul 12 13:39:41 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin, Matthew Wilcox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 11/15] mm/sl[au]b: introduce common alloc/free functions without tracepoint
Date: Tue, 12 Jul 2022 13:39:41 +0000
Message-Id: <20220712133946.307181-12-42.hyeyoo@gmail.com>
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>

To unify the kmalloc functions in a later patch, introduce common alloc/free functions that do not have tracepoints.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka

---
v3: Tried to avoid affecting existing functions.
mm/slab.c | 36 +++++++++++++++++++++++++++++------- mm/slab.h | 4 ++++ mm/slub.c | 13 +++++++++++++ 3 files changed, 46 insertions(+), 7 deletions(-) diff --git a/mm/slab.c b/mm/slab.c index a2f43425a0ae..375e35c14430 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3560,6 +3560,14 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid) } EXPORT_SYMBOL(kmem_cache_alloc_node); +void *__kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, + int nodeid, size_t orig_size, + unsigned long caller) +{ + return slab_alloc_node(cachep, NULL, flags, nodeid, + orig_size, caller); +} + #ifdef CONFIG_TRACING void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep, gfp_t flags, @@ -3645,6 +3653,26 @@ void *__kmalloc(size_t size, gfp_t flags) } EXPORT_SYMBOL(__kmalloc); +static __always_inline +void __do_kmem_cache_free(struct kmem_cache *cachep, void *objp, + unsigned long caller) +{ + unsigned long flags; + + local_irq_save(flags); + debug_check_no_locks_freed(objp, cachep->object_size); + if (!(cachep->flags & SLAB_DEBUG_OBJECTS)) + debug_check_no_obj_freed(objp, cachep->object_size); + __cache_free(cachep, objp, caller); + local_irq_restore(flags); +} + +void __kmem_cache_free(struct kmem_cache *cachep, void *objp, + unsigned long caller) +{ + __do_kmem_cache_free(cachep, objp, caller); +} + /** * kmem_cache_free - Deallocate an object * @cachep: The cache the allocation was from. 
@@ -3655,18 +3683,12 @@ EXPORT_SYMBOL(__kmalloc); */ void kmem_cache_free(struct kmem_cache *cachep, void *objp) { - unsigned long flags; cachep = cache_from_obj(cachep, objp); if (!cachep) return; trace_kmem_cache_free(_RET_IP_, objp, cachep->name); - local_irq_save(flags); - debug_check_no_locks_freed(objp, cachep->object_size); - if (!(cachep->flags & SLAB_DEBUG_OBJECTS)) - debug_check_no_obj_freed(objp, cachep->object_size); - __cache_free(cachep, objp, _RET_IP_); - local_irq_restore(flags); + __do_kmem_cache_free(cachep, objp, _RET_IP_); } EXPORT_SYMBOL(kmem_cache_free); diff --git a/mm/slab.h b/mm/slab.h index c81c92d421f1..9193e9c1f040 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -275,6 +275,10 @@ void create_kmalloc_caches(slab_flags_t); struct kmem_cache *kmalloc_slab(size_t, gfp_t); void *kmalloc_large_node_notrace(size_t size, gfp_t flags, int node); +void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, + int node, size_t orig_size, + unsigned long caller); +void __kmem_cache_free(struct kmem_cache *s, void *x, unsigned long caller); #endif gfp_t kmalloc_fix_flags(gfp_t flags); diff --git a/mm/slub.c b/mm/slub.c index 6cb7ca27f3b7..74eb78678c98 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3262,6 +3262,14 @@ void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru, } EXPORT_SYMBOL(kmem_cache_alloc_lru); +void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, + int node, size_t orig_size, + unsigned long caller) +{ + return slab_alloc_node(s, NULL, gfpflags, node, + caller, orig_size); +} + #ifdef CONFIG_TRACING void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size) { @@ -3526,6 +3534,11 @@ void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr) } #endif +void __kmem_cache_free(struct kmem_cache *s, void *x, unsigned long caller) +{ + slab_free(s, virt_to_slab(x), x, NULL, &x, 1, caller); +} + void kmem_cache_free(struct kmem_cache *s, void *x) { s = cache_from_obj(s, x); From 
patchwork Tue Jul 12 13:39:42 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin, Matthew Wilcox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 12/15] mm/sl[au]b: generalize kmalloc subsystem
Date: Tue, 12 Jul 2022 13:39:42 +0000
Message-Id: <20220712133946.307181-13-42.hyeyoo@gmail.com>
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>

Now everything in the kmalloc subsystem can be generalized. Let's do it! Generalize __do_kmalloc_node(), __kmalloc_node_track_caller(), kfree(), __ksize(), __kmalloc(), __kmalloc_node() and move them to slab_common.c.

Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka

---
mm/slab.c | 108 -----------------------------------------------
mm/slab_common.c | 102 ++++++++++++++++++++++++++++++++++++++++++++
mm/slub.c | 87 --------------------------------------
3 files changed, 102 insertions(+), 195 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c index 375e35c14430..6407dad13d5c 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3587,44 +3587,6 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep, EXPORT_SYMBOL(kmem_cache_alloc_node_trace); #endif -static __always_inline void * -__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller) -{ - struct kmem_cache *cachep; - void *ret; - - if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) { - ret = kmalloc_large_node_notrace(size, flags, node); - - trace_kmalloc_node(caller, ret, NULL, size, - PAGE_SIZE << get_order(size), - flags, node); - return ret; - } - - cachep = kmalloc_slab(size, flags); - if (unlikely(ZERO_OR_NULL_PTR(cachep))) - return cachep; - - ret = kmem_cache_alloc_node_trace(cachep, flags, node, size); - ret = kasan_kmalloc(cachep, 
ret, size, flags); - - return ret; -} - -void *__kmalloc_node(size_t size, gfp_t flags, int node) -{ - return __do_kmalloc_node(size, flags, node, _RET_IP_); -} -EXPORT_SYMBOL(__kmalloc_node); - -void *__kmalloc_node_track_caller(size_t size, gfp_t flags, - int node, unsigned long caller) -{ - return __do_kmalloc_node(size, flags, node, caller); -} -EXPORT_SYMBOL(__kmalloc_node_track_caller); - #ifdef CONFIG_PRINTK void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab) { @@ -3647,12 +3609,6 @@ void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab) } #endif -void *__kmalloc(size_t size, gfp_t flags) -{ - return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_); -} -EXPORT_SYMBOL(__kmalloc); - static __always_inline void __do_kmem_cache_free(struct kmem_cache *cachep, void *objp, unsigned long caller) @@ -3730,43 +3686,6 @@ void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p) } EXPORT_SYMBOL(kmem_cache_free_bulk); -/** - * kfree - free previously allocated memory - * @objp: pointer returned by kmalloc. - * - * If @objp is NULL, no operation is performed. - * - * Don't free memory not originally allocated by kmalloc() - * or you will run into trouble. - */ -void kfree(const void *objp) -{ - struct kmem_cache *c; - unsigned long flags; - struct folio *folio; - - trace_kfree(_RET_IP_, objp); - - if (unlikely(ZERO_OR_NULL_PTR(objp))) - return; - - folio = virt_to_folio(objp); - if (!folio_test_slab(folio)) { - free_large_kmalloc(folio, (void *)objp); - return; - } - - c = folio_slab(folio)->slab_cache; - - local_irq_save(flags); - kfree_debugcheck(objp); - debug_check_no_locks_freed(objp, c->object_size); - debug_check_no_obj_freed(objp, c->object_size); - __cache_free(c, (void *)objp, _RET_IP_); - local_irq_restore(flags); -} -EXPORT_SYMBOL(kfree); - /* * This initializes kmem_cache_node or resizes various caches for all nodes. 
*/ @@ -4169,30 +4088,3 @@ void __check_heap_object(const void *ptr, unsigned long n, usercopy_abort("SLAB object", cachep->name, to_user, offset, n); } #endif /* CONFIG_HARDENED_USERCOPY */ - -/** - * __ksize -- Uninstrumented ksize. - * @objp: pointer to the object - * - * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same - * safety checks as ksize() with KASAN instrumentation enabled. - * - * Return: size of the actual memory used by @objp in bytes - */ -size_t __ksize(const void *objp) -{ - struct kmem_cache *c; - struct folio *folio; - - BUG_ON(!objp); - if (unlikely(objp == ZERO_SIZE_PTR)) - return 0; - - folio = virt_to_folio(objp); - if (!folio_test_slab(folio)) - return folio_size(folio); - - c = folio_slab(folio)->slab_cache; - return c->object_size; -} -EXPORT_SYMBOL(__ksize); diff --git a/mm/slab_common.c b/mm/slab_common.c index 0dff46fb7193..1000e05c77df 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -924,6 +924,108 @@ void free_large_kmalloc(struct folio *folio, void *object) -(PAGE_SIZE << order)); __free_pages(folio_page(folio, 0), order); } + +static __always_inline +void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller) +{ + struct kmem_cache *s; + void *ret; + + if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) { + ret = kmalloc_large_node_notrace(size, flags, node); + trace_kmalloc_node(caller, ret, NULL, + size, PAGE_SIZE << get_order(size), + flags, node); + return ret; + } + + s = kmalloc_slab(size, flags); + + if (unlikely(ZERO_OR_NULL_PTR(s))) + return s; + + ret = __kmem_cache_alloc_node(s, flags, node, size, caller); + ret = kasan_kmalloc(s, ret, size, flags); + trace_kmalloc_node(caller, ret, s, size, + s->size, flags, node); + return ret; +} + +void *__kmalloc_node(size_t size, gfp_t flags, int node) +{ + return __do_kmalloc_node(size, flags, node, _RET_IP_); +} +EXPORT_SYMBOL(__kmalloc_node); + +void *__kmalloc(size_t size, gfp_t flags) +{ + return __do_kmalloc_node(size, flags, 
NUMA_NO_NODE, _RET_IP_); +} +EXPORT_SYMBOL(__kmalloc); + +void *__kmalloc_node_track_caller(size_t size, gfp_t flags, + int node, unsigned long caller) +{ + return __do_kmalloc_node(size, flags, node, caller); +} +EXPORT_SYMBOL(__kmalloc_node_track_caller); + +/** + * kfree - free previously allocated memory + * @objp: pointer returned by kmalloc. + * + * If @objp is NULL, no operation is performed. + * + * Don't free memory not originally allocated by kmalloc() + * or you will run into trouble. + */ +void kfree(const void *object) +{ + struct folio *folio; + struct slab *slab; + struct kmem_cache *s; + + trace_kfree(_RET_IP_, object); + + if (unlikely(ZERO_OR_NULL_PTR(object))) + return; + + folio = virt_to_folio(object); + if (unlikely(!folio_test_slab(folio))) { + free_large_kmalloc(folio, (void *)object); + return; + } + + slab = folio_slab(folio); + s = slab->slab_cache; + __kmem_cache_free(s, (void *)object, _RET_IP_); +} +EXPORT_SYMBOL(kfree); + +/** + * __ksize -- Uninstrumented ksize. + * @objp: pointer to the object + * + * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same + * safety checks as ksize() with KASAN instrumentation enabled. 
+ * + * Return: size of the actual memory used by @objp in bytes + */ +size_t __ksize(const void *object) +{ + struct folio *folio; + + if (unlikely(object == ZERO_SIZE_PTR)) + return 0; + + folio = virt_to_folio(object); + + if (unlikely(!folio_test_slab(folio))) + return folio_size(folio); + + return slab_ksize(folio_slab(folio)->slab_cache); +} +EXPORT_SYMBOL(__ksize); #endif /* !CONFIG_SLOB */ gfp_t kmalloc_fix_flags(gfp_t flags) diff --git a/mm/slub.c b/mm/slub.c index 74eb78678c98..836292c32e58 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4388,49 +4388,6 @@ static int __init setup_slub_min_objects(char *str) __setup("slub_min_objects=", setup_slub_min_objects); -static __always_inline -void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller) -{ - struct kmem_cache *s; - void *ret; - - if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) { - ret = kmalloc_large_node_notrace(size, flags, node); - - trace_kmalloc_node(caller, ret, NULL, - size, PAGE_SIZE << get_order(size), - flags, node); - - return ret; - } - - s = kmalloc_slab(size, flags); - - if (unlikely(ZERO_OR_NULL_PTR(s))) - return s; - - ret = slab_alloc_node(s, NULL, flags, node, caller, size); - - trace_kmalloc_node(caller, ret, s, size, s->size, flags, node); - - ret = kasan_kmalloc(s, ret, size, flags); - - return ret; -} - -void *__kmalloc_node(size_t size, gfp_t flags, int node) -{ - return __do_kmalloc_node(size, flags, node, _RET_IP_); -} -EXPORT_SYMBOL(__kmalloc_node); - -void *__kmalloc(size_t size, gfp_t flags) -{ - return __do_kmalloc_node(size, flags, NUMA_NO_NODE, _RET_IP_); -} -EXPORT_SYMBOL(__kmalloc); - - #ifdef CONFIG_HARDENED_USERCOPY /* * Rejects incorrectly sized objects and objects that are to be copied @@ -4481,43 +4438,6 @@ void __check_heap_object(const void *ptr, unsigned long n, } #endif /* CONFIG_HARDENED_USERCOPY */ -size_t __ksize(const void *object) -{ - struct folio *folio; - - if (unlikely(object == ZERO_SIZE_PTR)) - return 0; - - folio = 
virt_to_folio(object); - - if (unlikely(!folio_test_slab(folio))) - return folio_size(folio); - - return slab_ksize(folio_slab(folio)->slab_cache); -} -EXPORT_SYMBOL(__ksize); - -void kfree(const void *x) -{ - struct folio *folio; - struct slab *slab; - void *object = (void *)x; - - trace_kfree(_RET_IP_, x); - - if (unlikely(ZERO_OR_NULL_PTR(x))) - return; - - folio = virt_to_folio(x); - if (unlikely(!folio_test_slab(folio))) { - free_large_kmalloc(folio, object); - return; - } - slab = folio_slab(folio); - slab_free(slab->slab_cache, slab, object, NULL, &object, 1, _RET_IP_); -} -EXPORT_SYMBOL(kfree); - #define SHRINK_PROMOTE_MAX 32 /* @@ -4863,13 +4783,6 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags) return 0; } -void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags, - int node, unsigned long caller) -{ - return __do_kmalloc_node(size, gfpflags, node, caller); -} -EXPORT_SYMBOL(__kmalloc_node_track_caller); - #ifdef CONFIG_SYSFS static int count_inuse(struct slab *slab) { From patchwork Tue Jul 12 13:39:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com> X-Patchwork-Id: 12915012 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7BE6CCCA47C for ; Tue, 12 Jul 2022 13:40:39 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 1A907940082; Tue, 12 Jul 2022 09:40:39 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 17EFE940063; Tue, 12 Jul 2022 09:40:39 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id F146C940082; Tue, 12 Jul 2022 09:40:38 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) 
by kanga.kvack.org (Postfix) with ESMTP id E1848940063 for ; Tue, 12 Jul 2022 09:40:38 -0400 (EDT) Received: from smtpin18.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id B8CE560D35 for ; Tue, 12 Jul 2022 13:40:38 +0000 (UTC) X-FDA: 79678557756.18.C96E59A Received: from mail-pl1-f173.google.com (mail-pl1-f173.google.com [209.85.214.173]) by imf06.hostedemail.com (Postfix) with ESMTP id 2E9C6180027 for ; Tue, 12 Jul 2022 13:40:38 +0000 (UTC) Received: by mail-pl1-f173.google.com with SMTP id w22so726859ply.12 for ; Tue, 12 Jul 2022 06:40:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=b4Q2DizgZQ7+TOz0Lfpsus9sxOwctEnm3EGP0luNKvc=; b=Acp7Bhbzyn+BRSc46G6x84g73Tw4ndAXZDLldyVmCJ+U20+G3ciC+yKKrfTvwDE6pG pQk1btbM2obsHmUCgSE/PTl7RxG55Ej0Pd4NjjG3vQrJfV87P9dgOMSJv9O2t4dHUbuh XoJVa5Be5JIoWCpPJ0T435LMyzlz7GuVTZCbgrPNY0yZlIdWzCyLnidk//LtPJEkXbBg 8VFWiaSyg96ki4IbCaIcOqag/dOsmzKkC8KgKeHROonDba2qt7L75wuLEoD/StNuCXrE 7L958AhQNtelYlCQrvewCcG/2Xa0YLETnPj3+OQtaPDIYn9TkGQ+YJ/IgrYNO5KkinEJ jkXg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=b4Q2DizgZQ7+TOz0Lfpsus9sxOwctEnm3EGP0luNKvc=; b=eWn3fmMh+OtPHdFjhSLGYKvAAUc4hqzOThUDSlLNj9P5d9H/K6bJnhzN+1gb/wv4kI /oN21SjAgMFQ6So5ABGgKLxNfoVr2O1eMJRQxfAUGPsLZlUa0MiAZmajvXgX2O5wxiiT Xjrjb5PtVWrc69b/sVRkK/C6Cs06rBUiNldNWT7qgKPngDrciWGPILUDWjSz1XFuRmNb /cfRoc2+GCl8uChsYFO0k0C9r9Va4L4IOUl42BqDXaOSx5gtLEMXW3t9CpAgX4EryTzK 2XVD451PGDJVG/uoppqdYwKUup0aDwXMLmsxJqVgpNBQf1jFnIBwn9qOGhnzvPS6v9Db X7pw== X-Gm-Message-State: AJIora+bDMsvWSmpiKnqVaAFKqtqDAAfKase/lS0FLhsjN9LPaeNTjtO LMutrMyBj6FtE9e07B3VJTw= X-Google-Smtp-Source: 
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin, Matthew Wilcox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 13/15] mm/slab_common: unify NUMA and UMA version of tracepoints
Date: Tue, 12 Jul 2022 13:39:43 +0000
Message-Id: <20220712133946.307181-14-42.hyeyoo@gmail.com>
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>
Drop the kmem_alloc event class, rename kmem_alloc_node to kmem_alloc, and remove the _node postfix from the NUMA version of the tracepoints. This will break some tools that depend on {kmem_cache_alloc,kmalloc}_node, but at this point maintaining both the kmem_alloc and kmem_alloc_node event classes no longer makes sense.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Reviewed-by: Vlastimil Babka --- include/trace/events/kmem.h | 60 ++----------------------------------- mm/slab.c | 18 +++++------ mm/slab_common.c | 18 +++++------ mm/slob.c | 20 ++++++------- mm/slub.c | 12 ++++---- 5 files changed, 35 insertions(+), 93 deletions(-) diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h index 4cb51ace600d..e078ebcdc4b1 100644 --- a/include/trace/events/kmem.h +++ b/include/trace/events/kmem.h @@ -11,62 +11,6 @@ DECLARE_EVENT_CLASS(kmem_alloc, - TP_PROTO(unsigned long call_site, - const void *ptr, - struct kmem_cache *s, - size_t bytes_req, - size_t bytes_alloc, - gfp_t gfp_flags), - - TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags), - - TP_STRUCT__entry( - __field( unsigned long, call_site ) - __field( const void *, ptr ) - __field( size_t, bytes_req ) - __field( size_t, bytes_alloc ) - __field( unsigned long, gfp_flags ) - __field( bool, accounted ) - ), - - TP_fast_assign( - __entry->call_site = call_site; - __entry->ptr = ptr; - __entry->bytes_req = bytes_req; - __entry->bytes_alloc = bytes_alloc; - __entry->gfp_flags = (__force unsigned long)gfp_flags; - __entry->accounted = IS_ENABLED(CONFIG_MEMCG_KMEM) ? - ((gfp_flags & __GFP_ACCOUNT) || - (s && s->flags & SLAB_ACCOUNT)) : false; - ), - - TP_printk("call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s accounted=%s", - (void *)__entry->call_site, - __entry->ptr, - __entry->bytes_req, - __entry->bytes_alloc, - show_gfp_flags(__entry->gfp_flags), - __entry->accounted ? 
"true" : "false") -); - -DEFINE_EVENT(kmem_alloc, kmalloc, - - TP_PROTO(unsigned long call_site, const void *ptr, struct kmem_cache *s, - size_t bytes_req, size_t bytes_alloc, gfp_t gfp_flags), - - TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags) -); - -DEFINE_EVENT(kmem_alloc, kmem_cache_alloc, - - TP_PROTO(unsigned long call_site, const void *ptr, struct kmem_cache *s, - size_t bytes_req, size_t bytes_alloc, gfp_t gfp_flags), - - TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags) -); - -DECLARE_EVENT_CLASS(kmem_alloc_node, - TP_PROTO(unsigned long call_site, const void *ptr, struct kmem_cache *s, @@ -109,7 +53,7 @@ DECLARE_EVENT_CLASS(kmem_alloc_node, __entry->accounted ? "true" : "false") ); -DEFINE_EVENT(kmem_alloc_node, kmalloc_node, +DEFINE_EVENT(kmem_alloc, kmalloc, TP_PROTO(unsigned long call_site, const void *ptr, struct kmem_cache *s, size_t bytes_req, size_t bytes_alloc, @@ -118,7 +62,7 @@ DEFINE_EVENT(kmem_alloc_node, kmalloc_node, TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags, node) ); -DEFINE_EVENT(kmem_alloc_node, kmem_cache_alloc_node, +DEFINE_EVENT(kmem_alloc, kmem_cache_alloc, TP_PROTO(unsigned long call_site, const void *ptr, struct kmem_cache *s, size_t bytes_req, size_t bytes_alloc, diff --git a/mm/slab.c b/mm/slab.c index 6407dad13d5c..a361d2f9d4d9 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3440,8 +3440,8 @@ void *__kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, { void *ret = slab_alloc(cachep, lru, flags, cachep->object_size, _RET_IP_); - trace_kmem_cache_alloc(_RET_IP_, ret, cachep, - cachep->object_size, cachep->size, flags); + trace_kmem_cache_alloc(_RET_IP_, ret, cachep, cachep->object_size, + cachep->size, flags, NUMA_NO_NODE); return ret; } @@ -3529,7 +3529,7 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size) ret = kasan_kmalloc(cachep, ret, size, flags); trace_kmalloc(_RET_IP_, ret, cachep, - size, cachep->size, flags); + size, cachep->size, 
flags, NUMA_NO_NODE); return ret; } EXPORT_SYMBOL(kmem_cache_alloc_trace); @@ -3552,9 +3552,9 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid) { void *ret = slab_alloc_node(cachep, NULL, flags, nodeid, cachep->object_size, _RET_IP_); - trace_kmem_cache_alloc_node(_RET_IP_, ret, cachep, - cachep->object_size, cachep->size, - flags, nodeid); + trace_kmem_cache_alloc(_RET_IP_, ret, cachep, + cachep->object_size, cachep->size, + flags, nodeid); return ret; } @@ -3579,9 +3579,9 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep, ret = slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_); ret = kasan_kmalloc(cachep, ret, size, flags); - trace_kmalloc_node(_RET_IP_, ret, cachep, - size, cachep->size, - flags, nodeid); + trace_kmalloc(_RET_IP_, ret, cachep, + size, cachep->size, + flags, nodeid); return ret; } EXPORT_SYMBOL(kmem_cache_alloc_node_trace); diff --git a/mm/slab_common.c b/mm/slab_common.c index 1000e05c77df..0e66b4911ebf 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -933,9 +933,9 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) { ret = kmalloc_large_node_notrace(size, flags, node); - trace_kmalloc_node(caller, ret, NULL, - size, PAGE_SIZE << get_order(size), - flags, node); + trace_kmalloc(_RET_IP_, ret, NULL, + size, PAGE_SIZE << get_order(size), + flags, node); return ret; } @@ -946,8 +946,8 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller ret = __kmem_cache_alloc_node(s, flags, node, size, caller); ret = kasan_kmalloc(s, ret, size, flags); - trace_kmalloc_node(caller, ret, s, size, - s->size, flags, node); + trace_kmalloc(_RET_IP_, ret, s, size, + s->size, flags, node); return ret; } @@ -1076,7 +1076,7 @@ void *kmalloc_large(size_t size, gfp_t flags) void *ret = __kmalloc_large_node_notrace(size, flags, NUMA_NO_NODE); trace_kmalloc(_RET_IP_, ret, NULL, size, - PAGE_SIZE << 
get_order(size), flags); + PAGE_SIZE << get_order(size), flags, NUMA_NO_NODE); return ret; } EXPORT_SYMBOL(kmalloc_large); @@ -1090,8 +1090,8 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node) { void *ret = __kmalloc_large_node_notrace(size, flags, node); - trace_kmalloc_node(_RET_IP_, ret, NULL, size, - PAGE_SIZE << get_order(size), flags, node); + trace_kmalloc(_RET_IP_, ret, NULL, size, + PAGE_SIZE << get_order(size), flags, node); return ret; } EXPORT_SYMBOL(kmalloc_large_node); @@ -1426,8 +1426,6 @@ EXPORT_SYMBOL(ksize); /* Tracepoints definitions. */ EXPORT_TRACEPOINT_SYMBOL(kmalloc); EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc); -EXPORT_TRACEPOINT_SYMBOL(kmalloc_node); -EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc_node); EXPORT_TRACEPOINT_SYMBOL(kfree); EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free); diff --git a/mm/slob.c b/mm/slob.c index 80cdbe4f0d67..a4d50d957c25 100644 --- a/mm/slob.c +++ b/mm/slob.c @@ -507,8 +507,8 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller) *m = size; ret = (void *)m + minalign; - trace_kmalloc_node(caller, ret, NULL, - size, size + minalign, gfp, node); + trace_kmalloc(caller, ret, NULL, + size, size + minalign, gfp, node); } else { unsigned int order = get_order(size); @@ -516,8 +516,8 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller) gfp |= __GFP_COMP; ret = slob_new_pages(gfp, order, node); - trace_kmalloc_node(caller, ret, NULL, - size, PAGE_SIZE << order, gfp, node); + trace_kmalloc(caller, ret, NULL, + size, PAGE_SIZE << order, gfp, node); } kmemleak_alloc(ret, size, 1, gfp); @@ -608,14 +608,14 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node) if (c->size < PAGE_SIZE) { b = slob_alloc(c->size, flags, c->align, node, 0); - trace_kmem_cache_alloc_node(_RET_IP_, b, NULL, c->object_size, - SLOB_UNITS(c->size) * SLOB_UNIT, - flags, node); + trace_kmem_cache_alloc(_RET_IP_, b, NULL, c->object_size, + SLOB_UNITS(c->size) * SLOB_UNIT, + flags, 
node); } else { b = slob_new_pages(flags, get_order(c->size), node); - trace_kmem_cache_alloc_node(_RET_IP_, b, NULL, c->object_size, - PAGE_SIZE << get_order(c->size), - flags, node); + trace_kmem_cache_alloc(_RET_IP_, b, NULL, c->object_size, + PAGE_SIZE << get_order(c->size), + flags, node); } if (b && c->ctor) { diff --git a/mm/slub.c b/mm/slub.c index 836292c32e58..f1aa51480dc4 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3244,7 +3244,7 @@ void *__kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru, void *ret = slab_alloc(s, lru, gfpflags, _RET_IP_, s->object_size); trace_kmem_cache_alloc(_RET_IP_, ret, s, s->object_size, - s->size, gfpflags); + s->size, gfpflags, NUMA_NO_NODE); return ret; } @@ -3274,7 +3274,7 @@ void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size) { void *ret = slab_alloc(s, NULL, gfpflags, _RET_IP_, size); - trace_kmalloc(_RET_IP_, ret, s, size, s->size, gfpflags); + trace_kmalloc(_RET_IP_, ret, s, size, s->size, gfpflags, NUMA_NO_NODE); ret = kasan_kmalloc(s, ret, size, gfpflags); return ret; } @@ -3285,8 +3285,8 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node) { void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->object_size); - trace_kmem_cache_alloc_node(_RET_IP_, ret, s, - s->object_size, s->size, gfpflags, node); + trace_kmem_cache_alloc(_RET_IP_, ret, s, + s->object_size, s->size, gfpflags, node); return ret; } @@ -3299,8 +3299,8 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s, { void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size); - trace_kmalloc_node(_RET_IP_, ret, s, - size, s->size, gfpflags, node); + trace_kmalloc(_RET_IP_, ret, s, + size, s->size, gfpflags, node); ret = kasan_kmalloc(s, ret, size, gfpflags); return ret; From patchwork Tue Jul 12 13:39:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com>
X-Patchwork-Id: 12915013
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin, Matthew Wilcox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 14/16] mm/slab_common: drop kmem_alloc & avoid dereferencing fields when not using
Date: Tue, 12 Jul 2022 13:39:44 +0000
Message-Id: <20220712133946.307181-15-42.hyeyoo@gmail.com>
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>
Drop the kmem_alloc event class and define kmalloc and kmem_cache_alloc using the TRACE_EVENT() macro. This patch also does the following:
- Do not pass a pointer to struct kmem_cache to trace_kmalloc; the gfp flags are enough to know whether the allocation is accounted.
- Avoid dereferencing s->object_size and s->size when not using the kmem_cache_alloc event.
- Avoid dereferencing s->name when not using the kmem_cache_free event.
Suggested-by: Vlastimil Babka Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Reviewed-by: Vlastimil Babka --- include/trace/events/kmem.h | 64 +++++++++++++++++++++++++------------ mm/slab.c | 16 ++++------ mm/slab_common.c | 17 ++++++---- mm/slob.c | 16 +++------- mm/slub.c | 13 +++----- 5 files changed, 70 insertions(+), 56 deletions(-) diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h index e078ebcdc4b1..0ded2c351062 100644 --- a/include/trace/events/kmem.h +++ b/include/trace/events/kmem.h @@ -9,17 +9,15 @@ #include #include -DECLARE_EVENT_CLASS(kmem_alloc, +TRACE_EVENT(kmem_cache_alloc, TP_PROTO(unsigned long call_site, const void *ptr, struct kmem_cache *s, - size_t bytes_req, - size_t bytes_alloc, gfp_t gfp_flags, int node), - TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags, node), + TP_ARGS(call_site, ptr, s, gfp_flags, node), TP_STRUCT__entry( __field( unsigned long, call_site ) @@ -34,8 +32,8 @@ DECLARE_EVENT_CLASS(kmem_alloc, TP_fast_assign( __entry->call_site = call_site; __entry->ptr = ptr; - __entry->bytes_req = bytes_req; - __entry->bytes_alloc = bytes_alloc; + __entry->bytes_req = s->object_size; + __entry->bytes_alloc = s->size; __entry->gfp_flags = (__force unsigned long)gfp_flags; __entry->node = node; __entry->accounted = IS_ENABLED(CONFIG_MEMCG_KMEM) ? @@ -53,22 +51,46 @@ DECLARE_EVENT_CLASS(kmem_alloc, __entry->accounted ? 
"true" : "false") ); -DEFINE_EVENT(kmem_alloc, kmalloc, +TRACE_EVENT(kmalloc, - TP_PROTO(unsigned long call_site, const void *ptr, - struct kmem_cache *s, size_t bytes_req, size_t bytes_alloc, - gfp_t gfp_flags, int node), + TP_PROTO(unsigned long call_site, + const void *ptr, + size_t bytes_req, + size_t bytes_alloc, + gfp_t gfp_flags, + int node), - TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags, node) -); + TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node), -DEFINE_EVENT(kmem_alloc, kmem_cache_alloc, + TP_STRUCT__entry( + __field( unsigned long, call_site ) + __field( const void *, ptr ) + __field( size_t, bytes_req ) + __field( size_t, bytes_alloc ) + __field( unsigned long, gfp_flags ) + __field( int, node ) + __field( bool, accounted ) + ), - TP_PROTO(unsigned long call_site, const void *ptr, - struct kmem_cache *s, size_t bytes_req, size_t bytes_alloc, - gfp_t gfp_flags, int node), + TP_fast_assign( + __entry->call_site = call_site; + __entry->ptr = ptr; + __entry->bytes_req = bytes_req; + __entry->bytes_alloc = bytes_alloc; + __entry->gfp_flags = (__force unsigned long)gfp_flags; + __entry->node = node; + __entry->accounted = IS_ENABLED(CONFIG_MEMCG_KMEM) ? + (gfp_flags & __GFP_ACCOUNT) : false; + ), - TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags, node) + TP_printk("call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d accounted=%s", + (void *)__entry->call_site, + __entry->ptr, + __entry->bytes_req, + __entry->bytes_alloc, + show_gfp_flags(__entry->gfp_flags), + __entry->node, + __entry->accounted ? 
"true" : "false") ); TRACE_EVENT(kfree, @@ -93,20 +115,20 @@ TRACE_EVENT(kfree, TRACE_EVENT(kmem_cache_free, - TP_PROTO(unsigned long call_site, const void *ptr, const char *name), + TP_PROTO(unsigned long call_site, const void *ptr, const struct kmem_cache *s), - TP_ARGS(call_site, ptr, name), + TP_ARGS(call_site, ptr, s), TP_STRUCT__entry( __field( unsigned long, call_site ) __field( const void *, ptr ) - __string( name, name ) + __string( name, s->name ) ), TP_fast_assign( __entry->call_site = call_site; __entry->ptr = ptr; - __assign_str(name, name); + __assign_str(name, s->name); ), TP_printk("call_site=%pS ptr=%p name=%s", diff --git a/mm/slab.c b/mm/slab.c index f9b74831e3f4..1685e5507a59 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3447,8 +3447,7 @@ void *__kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, { void *ret = slab_alloc(cachep, lru, flags, cachep->object_size, _RET_IP_); - trace_kmem_cache_alloc(_RET_IP_, ret, cachep, cachep->object_size, - cachep->size, flags, NUMA_NO_NODE); + trace_kmem_cache_alloc(_RET_IP_, ret, cachep, flags, NUMA_NO_NODE); return ret; } @@ -3537,8 +3536,9 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size) ret = slab_alloc(cachep, NULL, flags, size, _RET_IP_); ret = kasan_kmalloc(cachep, ret, size, flags); - trace_kmalloc(_RET_IP_, ret, cachep, - size, cachep->size, flags, NUMA_NO_NODE); + trace_kmalloc(_RET_IP_, ret, + size, cachep->size, + flags, NUMA_NO_NODE); return ret; } EXPORT_SYMBOL(kmem_cache_alloc_trace); @@ -3561,9 +3561,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid) { void *ret = slab_alloc_node(cachep, NULL, flags, nodeid, cachep->object_size, _RET_IP_); - trace_kmem_cache_alloc(_RET_IP_, ret, cachep, - cachep->object_size, cachep->size, - flags, nodeid); + trace_kmem_cache_alloc(_RET_IP_, ret, cachep, flags, nodeid); return ret; } @@ -3588,7 +3586,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep, ret = 
slab_alloc_node(cachep, NULL, flags, nodeid, size, _RET_IP_); ret = kasan_kmalloc(cachep, ret, size, flags); - trace_kmalloc(_RET_IP_, ret, cachep, + trace_kmalloc(_RET_IP_, ret, size, cachep->size, flags, nodeid); return ret; @@ -3652,7 +3650,7 @@ void kmem_cache_free(struct kmem_cache *cachep, void *objp) if (!cachep) return; - trace_kmem_cache_free(_RET_IP_, objp, cachep->name); + trace_kmem_cache_free(_RET_IP_, objp, cachep); __do_kmem_cache_free(cachep, objp, _RET_IP_); } EXPORT_SYMBOL(kmem_cache_free); diff --git a/mm/slab_common.c b/mm/slab_common.c index 0e66b4911ebf..c01c6b8f0d34 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -933,7 +933,7 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) { ret = kmalloc_large_node_notrace(size, flags, node); - trace_kmalloc(_RET_IP_, ret, NULL, + trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << get_order(size), flags, node); return ret; @@ -946,8 +946,9 @@ void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller ret = __kmem_cache_alloc_node(s, flags, node, size, caller); ret = kasan_kmalloc(s, ret, size, flags); - trace_kmalloc(_RET_IP_, ret, s, size, - s->size, flags, node); + trace_kmalloc(_RET_IP_, ret, + size, s->size, + flags, node); return ret; } @@ -1075,8 +1076,9 @@ void *kmalloc_large(size_t size, gfp_t flags) { void *ret = __kmalloc_large_node_notrace(size, flags, NUMA_NO_NODE); - trace_kmalloc(_RET_IP_, ret, NULL, size, - PAGE_SIZE << get_order(size), flags, NUMA_NO_NODE); + trace_kmalloc(_RET_IP_, ret, + size, PAGE_SIZE << get_order(size), + flags, NUMA_NO_NODE); return ret; } EXPORT_SYMBOL(kmalloc_large); @@ -1090,8 +1092,9 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node) { void *ret = __kmalloc_large_node_notrace(size, flags, node); - trace_kmalloc(_RET_IP_, ret, NULL, size, - PAGE_SIZE << get_order(size), flags, node); + trace_kmalloc(_RET_IP_, ret, + size, PAGE_SIZE << 
get_order(size), + flags, node); return ret; } EXPORT_SYMBOL(kmalloc_large_node); diff --git a/mm/slob.c b/mm/slob.c index a4d50d957c25..97a4d2407f96 100644 --- a/mm/slob.c +++ b/mm/slob.c @@ -507,8 +507,7 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller) *m = size; ret = (void *)m + minalign; - trace_kmalloc(caller, ret, NULL, - size, size + minalign, gfp, node); + trace_kmalloc(caller, ret, size, size + minalign, gfp, node); } else { unsigned int order = get_order(size); @@ -516,8 +515,7 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller) gfp |= __GFP_COMP; ret = slob_new_pages(gfp, order, node); - trace_kmalloc(caller, ret, NULL, - size, PAGE_SIZE << order, gfp, node); + trace_kmalloc(caller, ret, size, PAGE_SIZE << order, gfp, node); } kmemleak_alloc(ret, size, 1, gfp); @@ -608,14 +606,10 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node) if (c->size < PAGE_SIZE) { b = slob_alloc(c->size, flags, c->align, node, 0); - trace_kmem_cache_alloc(_RET_IP_, b, NULL, c->object_size, - SLOB_UNITS(c->size) * SLOB_UNIT, - flags, node); + trace_kmem_cache_alloc(_RET_IP_, b, c, flags, node); } else { b = slob_new_pages(flags, get_order(c->size), node); - trace_kmem_cache_alloc(_RET_IP_, b, NULL, c->object_size, - PAGE_SIZE << get_order(c->size), - flags, node); + trace_kmem_cache_alloc(_RET_IP_, b, c, flags, node); } if (b && c->ctor) { @@ -671,7 +665,7 @@ static void kmem_rcu_free(struct rcu_head *head) void kmem_cache_free(struct kmem_cache *c, void *b) { kmemleak_free_recursive(b, c->flags); - trace_kmem_cache_free(_RET_IP_, b, c->name); + trace_kmem_cache_free(_RET_IP_, b, c); if (unlikely(c->flags & SLAB_TYPESAFE_BY_RCU)) { struct slob_rcu *slob_rcu; slob_rcu = b + (c->size - sizeof(struct slob_rcu)); diff --git a/mm/slub.c b/mm/slub.c index f1aa51480dc4..a49c69469c64 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3243,8 +3243,7 @@ void *__kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru 
*lru, { void *ret = slab_alloc(s, lru, gfpflags, _RET_IP_, s->object_size); - trace_kmem_cache_alloc(_RET_IP_, ret, s, s->object_size, - s->size, gfpflags, NUMA_NO_NODE); + trace_kmem_cache_alloc(_RET_IP_, ret, s, gfpflags, NUMA_NO_NODE); return ret; } @@ -3274,7 +3273,7 @@ void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size) { void *ret = slab_alloc(s, NULL, gfpflags, _RET_IP_, size); - trace_kmalloc(_RET_IP_, ret, s, size, s->size, gfpflags, NUMA_NO_NODE); + trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, NUMA_NO_NODE); ret = kasan_kmalloc(s, ret, size, gfpflags); return ret; } @@ -3285,8 +3284,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node) { void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->object_size); - trace_kmem_cache_alloc(_RET_IP_, ret, s, - s->object_size, s->size, gfpflags, node); + trace_kmem_cache_alloc(_RET_IP_, ret, s, gfpflags, node); return ret; } @@ -3299,8 +3297,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s, { void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size); - trace_kmalloc(_RET_IP_, ret, s, - size, s->size, gfpflags, node); + trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, node); ret = kasan_kmalloc(s, ret, size, gfpflags); return ret; @@ -3544,7 +3541,7 @@ void kmem_cache_free(struct kmem_cache *s, void *x) s = cache_from_obj(s, x); if (!s) return; - trace_kmem_cache_free(_RET_IP_, x, s->name); + trace_kmem_cache_free(_RET_IP_, x, s); slab_free(s, virt_to_slab(x), x, NULL, &x, 1, _RET_IP_); } EXPORT_SYMBOL(kmem_cache_free); From patchwork Tue Jul 12 13:39:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hyeonggon Yoo <42.hyeyoo@gmail.com> X-Patchwork-Id: 12915014 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org 
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin, Matthew WilCox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 15/16] mm/slab_common: move definition of __ksize() to mm/slab.h
Date: Tue, 12 Jul 2022 13:39:45 +0000
Message-Id: <20220712133946.307181-16-42.hyeyoo@gmail.com>
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>
__ksize() is only called by KASAN. Remove export symbol and move
definition to mm/slab.h as we don't want to grow its callers.
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 include/linux/slab.h |  1 -
 mm/slab.h            |  2 ++
 mm/slab_common.c     | 11 +----------
 mm/slob.c            |  1 -
 4 files changed, 3 insertions(+), 12 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 4ee5b2fed164..701fe538650f 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -187,7 +187,6 @@ int kmem_cache_shrink(struct kmem_cache *s);
 void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags) __alloc_size(2);
 void kfree(const void *objp);
 void kfree_sensitive(const void *objp);
-size_t __ksize(const void *objp);
 size_t ksize(const void *objp);
 #ifdef CONFIG_PRINTK
 bool kmem_valid_obj(void *object);
diff --git a/mm/slab.h b/mm/slab.h
index 9193e9c1f040..ad634e02b3cb 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -678,6 +678,8 @@ void free_large_kmalloc(struct folio *folio, void *object);
 
 #endif /* CONFIG_SLOB */
 
+size_t __ksize(const void *objp);
+
 static inline size_t slab_ksize(const struct kmem_cache *s)
 {
 #ifndef CONFIG_SLUB
diff --git a/mm/slab_common.c b/mm/slab_common.c
index c01c6b8f0d34..1f8db7959366 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1003,15 +1003,7 @@ void kfree(const void *object)
 }
 EXPORT_SYMBOL(kfree);
 
-/**
- * __ksize -- Uninstrumented ksize.
- * @objp: pointer to the object
- *
- * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
- * safety checks as ksize() with KASAN instrumentation enabled.
- *
- * Return: size of the actual memory used by @objp in bytes
- */
+/* Uninstrumented ksize. Only called by KASAN. */
 size_t __ksize(const void *object)
 {
 	struct folio *folio;
@@ -1026,7 +1018,6 @@ size_t __ksize(const void *object)
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
-EXPORT_SYMBOL(__ksize);
 
 #endif /* !CONFIG_SLOB */
 
 gfp_t kmalloc_fix_flags(gfp_t flags)
diff --git a/mm/slob.c b/mm/slob.c
index 97a4d2407f96..91d6e2b19929 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -584,7 +584,6 @@ size_t __ksize(const void *block)
 	m = (unsigned int *)(block - align);
 	return SLOB_UNITS(*m) * SLOB_UNIT;
 }
-EXPORT_SYMBOL(__ksize);
 
 int __kmem_cache_create(struct kmem_cache *c, slab_flags_t flags)
 {

From patchwork Tue Jul 12 13:39:46 2022
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Joe Perches, Vasily Averin, Matthew WilCox
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 16/16] mm/sl[au]b: check if large object is valid in __ksize()
Date: Tue, 12 Jul 2022 13:39:46 +0000
Message-Id: <20220712133946.307181-17-42.hyeyoo@gmail.com>
In-Reply-To: <20220712133946.307181-1-42.hyeyoo@gmail.com>
References: <20220712133946.307181-1-42.hyeyoo@gmail.com>
__ksize() returns the size of objects allocated from the slab allocator.
When an invalid object is passed to __ksize(), returning zero prevents
further memory corruption and allows the caller to check for an error.

If the address of a large object is not the beginning of its folio, or
the folio is too small, the object must be invalid. Return zero in such
cases.

Suggested-by: Vlastimil Babka
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Vlastimil Babka
---
 mm/slab_common.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 1f8db7959366..0d6cbe9d7ad0 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1013,8 +1013,12 @@ size_t __ksize(const void *object)
 
 	folio = virt_to_folio(object);
 
-	if (unlikely(!folio_test_slab(folio)))
+	if (unlikely(!folio_test_slab(folio))) {
+		if (WARN_ON(object != folio_address(folio) ||
+			    folio_size(folio) <= KMALLOC_MAX_CACHE_SIZE))
+			return 0;
 		return folio_size(folio);
+	}
 
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
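[Editor's note] The check patch 16/16 adds can be modeled in plain
userspace C. Everything below is a hypothetical stand-in for
illustration only: `struct folio_model`, `ksize_model()` and the
`KMALLOC_MAX_CACHE_SIZE` value are not the kernel's definitions (a real
folio is not a plain struct, and the constant is configuration
dependent). The sketch only captures the predicate: a valid large
kmalloc object must start at the folio's address, and the folio must be
larger than the biggest slab-backed kmalloc size.

```c
#include <assert.h>
#include <stddef.h>

#define KMALLOC_MAX_CACHE_SIZE (8 * 1024)	/* assumed value, config dependent */

/* Hypothetical model of the folio fields the check reads. */
struct folio_model {
	char	*addr;		/* stands in for folio_address() */
	size_t	size;		/* stands in for folio_size()    */
	int	is_slab;	/* stands in for folio_test_slab() */
};

/* Returns the allocated size, or 0 when a large object looks invalid. */
static size_t ksize_model(const struct folio_model *folio, const void *object)
{
	if (!folio->is_slab) {
		/*
		 * Large kmalloc objects are folio-backed: a valid one
		 * points at the start of the folio, and the folio is
		 * bigger than any slab-backed kmalloc could be.
		 */
		if (object != (const void *)folio->addr ||
		    folio->size <= KMALLOC_MAX_CACHE_SIZE)
			return 0;
		return folio->size;
	}
	/* Slab-backed objects would go through slab_ksize() instead. */
	return 0;
}
```

An offset pointer into the folio, or a folio no bigger than
KMALLOC_MAX_CACHE_SIZE, both yield 0 instead of a bogus size.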