From patchwork Sun Dec 3 00:14:59 2023
From: sxwjean@me.com
To: vbabka@suse.cz, 42.hyeyoo@gmail.com, cl@linux.com, linux-mm@kvack.org
Cc: penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com,
    roman.gushchin@linux.dev, corbet@lwn.net, keescook@chromium.org,
    arnd@arndb.de, akpm@linux-foundation.org, gregkh@linuxfoundation.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, Xiongwei Song
Subject: [PATCH v2 1/3] Documentation: kernel-parameters: remove noaliencache
Date: Sun, 3 Dec 2023 08:14:59 +0800
Message-Id: <20231203001501.126339-2-sxwjean@me.com>
In-Reply-To: <20231203001501.126339-1-sxwjean@me.com>
References: <20231203001501.126339-1-sxwjean@me.com>
From: Xiongwei Song

Since the SLAB allocator has been removed, there are no users of
noaliencache left, so let's remove it.

Suggested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Kees Cook
Signed-off-by: Xiongwei Song
Reviewed-by: Vlastimil Babka
---
Hi Hyeonggon & Christoph,

I didn't pick up your Acked-by tags because I removed the changes for
slab_max_order. Would you allow me to add them to this patch?

Regards,
Xiongwei

v4: Collect Reviewed-by tag.
v3: Remove the changes for slab_max_order.
v2: Add changes for removing "noaliencache".
    https://lore.kernel.org/linux-mm/20231122143603.85297-1-sxwjean@me.com/
v1: Remove slab_max_order.
    https://lore.kernel.org/linux-mm/20231120091214.150502-2-sxwjean@me.com/
---
 Documentation/admin-guide/kernel-parameters.txt | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 65731b060e3f..9f94baeb2f82 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3740,10 +3740,6 @@
 	no5lvl		[X86-64,RISCV] Disable 5-level paging mode. Forces
 			kernel to use 4-level paging instead.
 
-	noaliencache	[MM, NUMA, SLAB] Disables the allocation of alien
-			caches in the slab allocator. Saves per-node memory,
-			but will impact performance.
-
 	noalign		[KNL,ARM]
 
 	noaltinstr	[S390] Disables alternative instructions patching
From patchwork Sun Dec 3 00:15:00 2023
From: sxwjean@me.com
To: vbabka@suse.cz, 42.hyeyoo@gmail.com, cl@linux.com, linux-mm@kvack.org
Cc: penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com,
    roman.gushchin@linux.dev, corbet@lwn.net, keescook@chromium.org,
    arnd@arndb.de, akpm@linux-foundation.org, gregkh@linuxfoundation.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, Xiongwei Song
Subject: [RFC PATCH v2 2/3] mm/slub: unify all sl[au]b parameters with "slab_$param"
Date: Sun, 3 Dec 2023 08:15:00 +0800
Message-Id: <20231203001501.126339-3-sxwjean@me.com>
In-Reply-To: <20231203001501.126339-1-sxwjean@me.com>
References: <20231203001501.126339-1-sxwjean@me.com>
From: Xiongwei Song

Since the SLAB allocator has been removed, we need to clean up the
sl[au]b_$param parameter names. However, the "slab/SLAB" terms are the
ones that should be kept long-term, rather than "slub/SLUB". Hence, we
should use "slab_$param" as the primary prefix, as pointed out by
Vlastimil Babka. For more information please see [1].

This patch changes the following slab parameters
       - slub_max_order
       - slub_min_order
       - slub_min_objects
       - slub_debug
to
       - slab_max_order
       - slab_min_order
       - slab_min_objects
       - slab_debug
as the primary slab parameters in
Documentation/admin-guide/kernel-parameters.txt and in the source, and
renames all of their setup functions accordingly. Meanwhile, the
"slub_$param" spellings can still be passed on the command line, to keep
backward compatibility. All "slub_$param" spellings are marked as legacy.

The function

       static int __init setup_slub_debug(char *str);

which sets up debug flags for slabs during kernel init, is renamed to
setup_slab_debug_flags() to prevent a name conflict: there is another
function,

       void setup_slab_debug(struct kmem_cache *s, struct slab *slab, void *addr);

which poisons slab space and would otherwise clash with the
setup_slab_debug name.

For the "slub_debug" parameter, besides replacing it with "slab_debug",
several related global variables, local variables, and functions are
renamed as well.

The separate descriptions for slub_[no]merge are removed; a legacy note
is appended at the end of the slab_[no]merge descriptions instead.

The parameters in Documentation/mm/slub.rst are left unchanged because
the file is still named "slub.rst", and the slub_$param spellings can
still be used on the kernel command line for backward compatibility.
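
As a rough illustration of the aliasing mechanism described above (a
minimal sketch, not a hunk from this patch; __setup() and __setup_param()
are the existing macros from include/linux/init.h):

	/*
	 * __setup(str, fn) expands to __setup_param(str, fn, fn, 0),
	 * using fn itself as the unique identifier. Invoking
	 * __setup_param() directly with a distinct identifier lets a
	 * second, legacy parameter name reuse the same parsing function
	 * without a symbol clash.
	 */
	static int __init setup_slab_debug_flags(char *str)
	{
		/* parse str and set the global slab debug flags ... */
		return 1;
	}
	__setup("slab_debug", setup_slab_debug_flags);		/* primary name */
	__setup_param("slub_debug", slub_debug, setup_slab_debug_flags, 0); /* legacy alias */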
[1] https://lore.kernel.org/linux-mm/7512b350-4317-21a0-fab3-4101bc4d8f7a@suse.cz/

Signed-off-by: Xiongwei Song
---
 .../admin-guide/kernel-parameters.txt         |  44 +++---
 drivers/misc/lkdtm/heap.c                     |   2 +-
 mm/Kconfig.debug                              |   6 +-
 mm/slab.h                                     |  16 +-
 mm/slab_common.c                              |   8 +-
 mm/slub.c                                     | 142 +++++++++---------
 6 files changed, 109 insertions(+), 109 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 9f94baeb2f82..d01c12e2a247 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5869,6 +5869,8 @@
 	slab_merge	[MM] Enable merging of slabs with similar size
 			when the kernel is built without
 			CONFIG_SLAB_MERGE_DEFAULT.
+			(slub_merge is accepted too, but it's supported for
+			legacy)
 
 	slab_nomerge	[MM] Disable merging of slabs with similar size.
 			May be
@@ -5882,47 +5884,41 @@
 			unchanged). Debug options disable merging on their
 			own. For more information see
 			Documentation/mm/slub.rst.
+			(slub_nomerge is accepted too, but it's supported for
+			legacy)
 
-	slab_max_order=	[MM, SLAB]
-			Determines the maximum allowed order for slabs.
-			A high setting may cause OOMs due to memory
-			fragmentation. Defaults to 1 for systems with
-			more than 32MB of RAM, 0 otherwise.
-
-	slub_debug[=options[,slabs][;[options[,slabs]]...]	[MM, SLUB]
-			Enabling slub_debug allows one to determine the
+	slab_debug[=options[,slabs][;[options[,slabs]]...]	[MM]
+			Enabling slab_debug allows one to determine the
 			culprit if slab objects become corrupted. Enabling
-			slub_debug can create guard zones around objects and
+			slab_debug can create guard zones around objects and
 			may poison objects when not in use. Also tracks the
 			last alloc / free. For more information see
-			Documentation/mm/slub.rst.
+			Documentation/mm/slub.rst. (slub_debug is accepted
+			too, but it's supported for legacy)
 
-	slub_max_order= [MM, SLUB]
+	slab_max_order=	[MM]
 			Determines the maximum allowed order for slabs.
 			A high setting may cause OOMs due to memory
 			fragmentation. For more information see
-			Documentation/mm/slub.rst.
+			Documentation/mm/slub.rst. (slub_max_order is
+			accepted too, but it's supported for legacy)
 
-	slub_min_objects=	[MM, SLUB]
+	slab_min_objects=	[MM]
 			The minimum number of objects per slab. SLUB will
-			increase the slab order up to slub_max_order to
+			increase the slab order up to slab_max_order to
 			generate a sufficiently large slab able to contain
 			the number of objects indicated. The higher the number
 			of objects the smaller the overhead of tracking slabs
 			and the less frequently locks need to be acquired.
 			For more information see Documentation/mm/slub.rst.
+			(slub_min_objects is accepted too, but it's supported
+			for legacy)
 
-	slub_min_order=	[MM, SLUB]
+	slab_min_order=	[MM]
 			Determines the minimum page order for slabs. Must be
-			lower than slub_max_order.
-			For more information see Documentation/mm/slub.rst.
-
-	slub_merge	[MM, SLUB]
-			Same with slab_merge.
-
-	slub_nomerge	[MM, SLUB]
-			Same with slab_nomerge. This is supported for legacy.
-			See slab_nomerge for more information.
+			lower than slab_max_order. For more information see
+			Documentation/mm/slub.rst. (slub_min_order is accepted
+			too, but it's supported for legacy)
 
 	smart2=		[HW]
 			Format: [,[,...,]]
diff --git a/drivers/misc/lkdtm/heap.c b/drivers/misc/lkdtm/heap.c
index 0ce4cbf6abda..076ca9b225de 100644
--- a/drivers/misc/lkdtm/heap.c
+++ b/drivers/misc/lkdtm/heap.c
@@ -47,7 +47,7 @@ static void lkdtm_VMALLOC_LINEAR_OVERFLOW(void)
  * correctly.
  *
  * This should get caught by either memory tagging, KASan, or by using
- * CONFIG_SLUB_DEBUG=y and slub_debug=ZF (or CONFIG_SLUB_DEBUG_ON=y).
+ * CONFIG_SLUB_DEBUG=y and slab_debug=ZF (or CONFIG_SLUB_DEBUG_ON=y).
  */
 static void lkdtm_SLAB_LINEAR_OVERFLOW(void)
 {
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 321ab379994f..afc72fde0f03 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -64,11 +64,11 @@ config SLUB_DEBUG_ON
 	help
 	  Boot with debugging on by default. SLUB boots by default with
 	  the runtime debug capabilities switched off. Enabling this is
-	  equivalent to specifying the "slub_debug" parameter on boot.
+	  equivalent to specifying the "slab_debug" parameter on boot.
 	  There is no support for more fine grained debug control like
-	  possible with slub_debug=xxx. SLUB debugging may be switched
+	  possible with slab_debug=xxx. SLUB debugging may be switched
 	  off in a kernel built with CONFIG_SLUB_DEBUG_ON by specifying
-	  "slub_debug=-".
+	  "slab_debug=-".
 
 config PAGE_OWNER
 	bool "Track page owner"
diff --git a/mm/slab.h b/mm/slab.h
index 54deeb0428c6..c78b3d24be2c 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -507,36 +507,36 @@ ssize_t slabinfo_write(struct file *file, const char __user *buffer,
 
 #ifdef CONFIG_SLUB_DEBUG
 #ifdef CONFIG_SLUB_DEBUG_ON
-DECLARE_STATIC_KEY_TRUE(slub_debug_enabled);
+DECLARE_STATIC_KEY_TRUE(slab_debug_enabled);
 #else
-DECLARE_STATIC_KEY_FALSE(slub_debug_enabled);
+DECLARE_STATIC_KEY_FALSE(slab_debug_enabled);
 #endif
 extern void print_tracking(struct kmem_cache *s, void *object);
 long validate_slab_cache(struct kmem_cache *s);
-static inline bool __slub_debug_enabled(void)
+static inline bool __slab_debug_enabled(void)
 {
-	return static_branch_unlikely(&slub_debug_enabled);
+	return static_branch_unlikely(&slab_debug_enabled);
 }
 #else
 static inline void print_tracking(struct kmem_cache *s, void *object)
 {
 }
-static inline bool __slub_debug_enabled(void)
+static inline bool __slab_debug_enabled(void)
 {
 	return false;
 }
 #endif
 
 /*
- * Returns true if any of the specified slub_debug flags is enabled for the
- * cache. Use only for flags parsed by setup_slub_debug() as it also enables
+ * Returns true if any of the specified slab_debug flags is enabled for the
+ * cache. Use only for flags parsed by setup_slab_debug() as it also enables
  * the static key.
  */
 static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t flags)
 {
 	if (IS_ENABLED(CONFIG_SLUB_DEBUG))
 		VM_WARN_ON_ONCE(!(flags & SLAB_DEBUG_FLAGS));
-	if (__slub_debug_enabled())
+	if (__slab_debug_enabled())
 		return s->flags & flags;
 	return false;
 }
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 238293b1dbe1..d7360a50633c 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -282,14 +282,14 @@ kmem_cache_create_usercopy(const char *name,
 
 #ifdef CONFIG_SLUB_DEBUG
 	/*
-	 * If no slub_debug was enabled globally, the static key is not yet
-	 * enabled by setup_slub_debug(). Enable it if the cache is being
+	 * If no slab_debug was enabled globally, the static key is not yet
+	 * enabled by setup_slab_debug(). Enable it if the cache is being
 	 * created with any of the debugging flags passed explicitly.
 	 * It's also possible that this is the first cache created with
 	 * SLAB_STORE_USER and we should init stack_depot for it.
 	 */
 	if (flags & SLAB_DEBUG_FLAGS)
-		static_branch_enable(&slub_debug_enabled);
+		static_branch_enable(&slab_debug_enabled);
 	if (flags & SLAB_STORE_USER)
 		stack_depot_init();
 #endif
@@ -766,7 +766,7 @@ EXPORT_SYMBOL(kmalloc_size_roundup);
 }
 
 /*
- * kmalloc_info[] is to make slub_debug=,kmalloc-xx option work at boot time.
+ * kmalloc_info[] is to make slab_debug=,kmalloc-xx option work at boot time.
 * kmalloc_index() supports up to 2^21=2MB, so the final entry of the table is
 * kmalloc-2M.
 */
diff --git a/mm/slub.c b/mm/slub.c
index 3f8b95757106..f67108ed2653 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -197,9 +197,9 @@ do {					\
 
 #ifdef CONFIG_SLUB_DEBUG
 #ifdef CONFIG_SLUB_DEBUG_ON
-DEFINE_STATIC_KEY_TRUE(slub_debug_enabled);
+DEFINE_STATIC_KEY_TRUE(slab_debug_enabled);
 #else
-DEFINE_STATIC_KEY_FALSE(slub_debug_enabled);
+DEFINE_STATIC_KEY_FALSE(slab_debug_enabled);
 #endif
 #endif		/* CONFIG_SLUB_DEBUG */
 
@@ -280,7 +280,7 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
 
 /*
  * Debugging flags that require metadata to be stored in the slab. These get
- * disabled when slub_debug=O is used and a cache's min order increases with
+ * disabled when slab_debug=O is used and a cache's min order increases with
  * metadata.
  */
 #define DEBUG_METADATA_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
@@ -765,12 +765,12 @@ static inline void *restore_red_left(struct kmem_cache *s, void *p)
 * Debug settings:
 */
 #if defined(CONFIG_SLUB_DEBUG_ON)
-static slab_flags_t slub_debug = DEBUG_DEFAULT_FLAGS;
+static slab_flags_t slab_debug = DEBUG_DEFAULT_FLAGS;
 #else
-static slab_flags_t slub_debug;
+static slab_flags_t slab_debug;
 #endif
 
-static char *slub_debug_string;
+static char *slab_debug_string;
 static int disable_higher_order_debug;
 
 /*
skipped\n", *str); } } check_slabs: @@ -1669,13 +1669,13 @@ parse_slub_debug_flags(char *str, slab_flags_t *flags, char **slabs, bool init) return NULL; } -static int __init setup_slub_debug(char *str) +static int __init setup_slab_debug_flags(char *str) { slab_flags_t flags; slab_flags_t global_flags; char *saved_str; char *slab_list; - bool global_slub_debug_changed = false; + bool global_slab_debug_changed = false; bool slab_list_specified = false; global_flags = DEBUG_DEFAULT_FLAGS; @@ -1687,11 +1687,11 @@ static int __init setup_slub_debug(char *str) saved_str = str; while (str) { - str = parse_slub_debug_flags(str, &flags, &slab_list, true); + str = parse_slab_debug_flags(str, &flags, &slab_list, true); if (!slab_list) { global_flags = flags; - global_slub_debug_changed = true; + global_slab_debug_changed = true; } else { slab_list_specified = true; if (flags & SLAB_STORE_USER) @@ -1702,31 +1702,32 @@ static int __init setup_slub_debug(char *str) /* * For backwards compatibility, a single list of flags with list of * slabs means debugging is only changed for those slabs, so the global - * slub_debug should be unchanged (0 or DEBUG_DEFAULT_FLAGS, depending + * slab_debug should be unchanged (0 or DEBUG_DEFAULT_FLAGS, depending * on CONFIG_SLUB_DEBUG_ON). We can extended that to multiple lists as * long as there is no option specifying flags without a slab list. */ if (slab_list_specified) { - if (!global_slub_debug_changed) - global_flags = slub_debug; - slub_debug_string = saved_str; + if (!global_slab_debug_changed) + global_flags = slab_debug; + slab_debug_string = saved_str; } out: - slub_debug = global_flags; - if (slub_debug & SLAB_STORE_USER) + slab_debug = global_flags; + if (slab_debug & SLAB_STORE_USER) stack_depot_request_early_init(); - if (slub_debug != 0 || slub_debug_string) - static_branch_enable(&slub_debug_enabled); + if (slab_debug != 0 || slab_debug_string) + static_branch_enable(&slab_debug_enabled); else - static_branch_disable(&slub_debug_enabled); + static_branch_disable(&slab_debug_enabled); if ((static_branch_unlikely(&init_on_alloc) || static_branch_unlikely(&init_on_free)) && - (slub_debug & SLAB_POISON)) + (slab_debug & SLAB_POISON)) pr_info("mem auto-init: SLAB_POISON will take precedence over init_on_alloc/init_on_free\n"); return 1; } -__setup("slub_debug", setup_slub_debug); +__setup("slab_debug", setup_slab_debug_flags); +__setup_param("slub_debug", slub_debug, setup_slab_debug_flags, 0); /* * kmem_cache_flags - apply debugging options to the cache @@ -1736,7 +1737,7 @@ __setup("slub_debug", setup_slub_debug); * * Debug option(s) are applied to @flags. In addition to the debug * option(s), if a slab name (or multiple) is specified i.e. - * slub_debug=,, ... + * slab_debug=,, ... * then only the select slabs will receive the debug option(s). */ slab_flags_t kmem_cache_flags(unsigned int object_size, @@ -1746,7 +1747,7 @@ slab_flags_t kmem_cache_flags(unsigned int object_size, size_t len; char *next_block; slab_flags_t block_flags; - slab_flags_t slub_debug_local = slub_debug; + slab_flags_t slab_debug_local = slab_debug; if (flags & SLAB_NO_USER_FLAGS) return flags; @@ -1757,13 +1758,13 @@ slab_flags_t kmem_cache_flags(unsigned int object_size, * but let the user enable it via the command line below. 
@@ -1736,7 +1737,7 @@ __setup("slub_debug", setup_slub_debug);
 *
 * Debug option(s) are applied to @flags. In addition to the debug
 * option(s), if a slab name (or multiple) is specified i.e.
- * slub_debug=,,	...
+ * slab_debug=,,	...
 * then only the select slabs will receive the debug option(s).
 */
 slab_flags_t kmem_cache_flags(unsigned int object_size,
@@ -1746,7 +1747,7 @@ slab_flags_t kmem_cache_flags(unsigned int object_size,
 	size_t len;
 	char *next_block;
 	slab_flags_t block_flags;
-	slab_flags_t slub_debug_local = slub_debug;
+	slab_flags_t slab_debug_local = slab_debug;
 
 	if (flags & SLAB_NO_USER_FLAGS)
 		return flags;
@@ -1757,13 +1758,13 @@ slab_flags_t kmem_cache_flags(unsigned int object_size,
 	 * but let the user enable it via the command line below.
 	 */
 	if (flags & SLAB_NOLEAKTRACE)
-		slub_debug_local &= ~SLAB_STORE_USER;
+		slab_debug_local &= ~SLAB_STORE_USER;
 
 	len = strlen(name);
-	next_block = slub_debug_string;
+	next_block = slab_debug_string;
 	/* Go through all blocks of debug options, see if any matches our slab's name */
 	while (next_block) {
-		next_block = parse_slub_debug_flags(next_block, &block_flags, &iter, false);
+		next_block = parse_slab_debug_flags(next_block, &block_flags, &iter, false);
 		if (!iter)
 			continue;
 		/* Found a block that has a slab list, search it */
@@ -1792,12 +1793,12 @@ slab_flags_t kmem_cache_flags(unsigned int object_size,
 		}
 	}
 
-	return flags | slub_debug_local;
+	return flags | slab_debug_local;
 }
 #else /* !CONFIG_SLUB_DEBUG */
 static inline void setup_object_debug(struct kmem_cache *s, void *object) {}
-static inline
-void setup_slab_debug(struct kmem_cache *s, struct slab *slab, void *addr) {}
+static inline void setup_slab_debug(struct kmem_cache *s,
+			struct slab *slab, void *addr) {}
 
 static inline bool alloc_debug_processing(struct kmem_cache *s,
 	struct slab *slab, void *object, int orig_size) { return true; }
@@ -1821,7 +1822,7 @@ slab_flags_t kmem_cache_flags(unsigned int object_size,
 {
 	return flags;
 }
-#define slub_debug 0
+#define slab_debug 0
 
 #define disable_higher_order_debug 0
 
@@ -3285,7 +3286,7 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
 		oo_order(s->min));
 
 	if (oo_order(s->min) > get_order(s->object_size))
-		pr_warn(" %s debugging increased min order, use slub_debug=O to disable.\n",
+		pr_warn(" %s debugging increased min order, use slab_debug=O to disable.\n",
 			s->name);
 
 	for_each_kmem_cache_node(s, node, n) {
@@ -3778,14 +3779,14 @@ void slab_post_alloc_hook(struct kmem_cache *s, struct obj_cgroup *objcg,
 		zero_size = orig_size;
 
 	/*
-	 * When slub_debug is enabled, avoid memory initialization integrated
+	 * When slab_debug is enabled, avoid memory initialization integrated
 	 * into KASAN and instead zero out the memory via the memset below with
 	 * the proper size. Otherwise, KASAN might overwrite SLUB redzones and
 	 * cause false-positive reports. This does not lead to a performance
-	 * penalty on production builds, as slub_debug is not intended to be
+	 * penalty on production builds, as slab_debug is not intended to be
 	 * enabled there.
 	 */
-	if (__slub_debug_enabled())
+	if (__slab_debug_enabled())
 		kasan_init = false;
 
 	/*
@@ -4638,10 +4639,10 @@ EXPORT_SYMBOL(kmem_cache_alloc_bulk);
 * and increases the number of allocations possible without having to
 * take the list_lock.
 */
-static unsigned int slub_min_order;
-static unsigned int slub_max_order =
+static unsigned int slab_min_order;
+static unsigned int slab_max_order =
 	IS_ENABLED(CONFIG_SLUB_TINY) ? 1 : PAGE_ALLOC_COSTLY_ORDER;
-static unsigned int slub_min_objects;
+static unsigned int slab_min_objects;
 
 /*
 * Calculate the order of allocation given an slab object size.
@@ -4658,8 +4659,8 @@ static unsigned int slub_min_objects;
 * activity on the partial lists which requires taking the list_lock. This is
 * less a concern for large slabs though which are rarely used.
 *
- * slub_max_order specifies the order where we begin to stop considering the
- * number of objects in a slab as critical. If we reach slub_max_order then
+ * slab_max_order specifies the order where we begin to stop considering the
+ * number of objects in a slab as critical. If we reach slab_max_order then
 * we try to keep the page order as low as possible. So we accept more waste
 * of space in favor of a small page order.
 *
@@ -4695,7 +4696,7 @@ static inline int calculate_order(unsigned int size)
 	unsigned int max_objects;
 	unsigned int min_order;
 
-	min_objects = slub_min_objects;
+	min_objects = slab_min_objects;
 	if (!min_objects) {
 		/*
 		 * Some architectures will only update present cpus when
@@ -4712,10 +4713,10 @@ static inline int calculate_order(unsigned int size)
 		min_objects = 4 * (fls(nr_cpus) + 1);
 	}
 	/* min_objects can't be 0 because get_order(0) is undefined */
-	max_objects = max(order_objects(slub_max_order, size), 1U);
+	max_objects = max(order_objects(slab_max_order, size), 1U);
 	min_objects = min(min_objects, max_objects);
 
-	min_order = max_t(unsigned int, slub_min_order,
+	min_order = max_t(unsigned int, slab_min_order,
 			  get_order(min_objects * size));
 	if (order_objects(min_order, size) > MAX_OBJS_PER_PAGE)
 		return get_order(size * MAX_OBJS_PER_PAGE) - 1;
@@ -4726,24 +4727,24 @@ static inline int calculate_order(unsigned int size)
 	 * and backing off gradually.
 	 *
 	 * We start with accepting at most 1/16 waste and try to find the
-	 * smallest order from min_objects-derived/slub_min_order up to
-	 * slub_max_order that will satisfy the constraint. Note that increasing
+	 * smallest order from min_objects-derived/slab_min_order up to
+	 * slab_max_order that will satisfy the constraint. Note that increasing
 	 * the order can only result in same or less fractional waste, not more.
 	 *
 	 * If that fails, we increase the acceptable fraction of waste and try
 	 * again. The last iteration with fraction of 1/2 would effectively
 	 * accept any waste and give us the order determined by min_objects, as
-	 * long as at least single object fits within slub_max_order.
+	 * long as at least single object fits within slab_max_order.
 	 */
 	for (unsigned int fraction = 16; fraction > 1; fraction /= 2) {
-		order = calc_slab_order(size, min_order, slub_max_order,
+		order = calc_slab_order(size, min_order, slab_max_order,
 				fraction);
-		if (order <= slub_max_order)
+		if (order <= slab_max_order)
 			return order;
 	}
 
 	/*
-	 * Doh this slab cannot be placed using slub_max_order.
+	 * Doh this slab cannot be placed using slab_max_order.
 	 */
 	order = get_order(size);
 	if (order <= MAX_ORDER)
@@ -5259,39 +5260,42 @@ void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 *	Kmalloc subsystem
 *******************************************************************/
 
-static int __init setup_slub_min_order(char *str)
+static int __init setup_slab_min_order(char *str)
 {
-	get_option(&str, (int *)&slub_min_order);
+	get_option(&str, (int *)&slab_min_order);
 
-	if (slub_min_order > slub_max_order)
-		slub_max_order = slub_min_order;
+	if (slab_min_order > slab_max_order)
+		slab_max_order = slab_min_order;
 
 	return 1;
 }
 
-__setup("slub_min_order=", setup_slub_min_order);
+__setup("slab_min_order=", setup_slab_min_order);
+__setup_param("slub_min_order=", slub_min_order, setup_slab_min_order, 0);
 
-static int __init setup_slub_max_order(char *str)
+static int __init setup_slab_max_order(char *str)
 {
-	get_option(&str, (int *)&slub_max_order);
-	slub_max_order = min_t(unsigned int, slub_max_order, MAX_ORDER);
+	get_option(&str, (int *)&slab_max_order);
+	slab_max_order = min_t(unsigned int, slab_max_order, MAX_ORDER);
 
-	if (slub_min_order > slub_max_order)
-		slub_min_order = slub_max_order;
+	if (slab_min_order > slab_max_order)
+		slab_min_order = slab_max_order;
 
 	return 1;
 }
 
-__setup("slub_max_order=", setup_slub_max_order);
+__setup("slab_max_order=", setup_slab_max_order);
+__setup_param("slub_max_order=", slub_max_order, setup_slab_max_order, 0);
 
-static int __init setup_slub_min_objects(char *str)
+static int __init setup_slab_min_objects(char *str)
 {
-	get_option(&str, (int *)&slub_min_objects);
+	get_option(&str, (int *)&slab_min_objects);
 
 	return 1;
 }
 
-__setup("slub_min_objects=", setup_slub_min_objects);
+__setup("slab_min_objects=", setup_slab_min_objects);
+__setup_param("slub_min_objects=", slub_min_objects, setup_slab_min_objects, 0);
 
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
@@ -5584,10 +5588,10 @@ void __init kmem_cache_init(void)
 	int node;
 
 	if (debug_guardpage_minorder())
-		slub_max_order = 0;
+		slab_max_order = 0;
 
 	/* Print slub debugging pointers without hashing */
-	if (__slub_debug_enabled())
+	if (__slab_debug_enabled())
 		no_hash_pointers_enable(NULL);
 
 	kmem_cache_node = &boot_kmem_cache_node;
@@ -5628,7 +5632,7 @@ void __init kmem_cache_init(void)
 	pr_info("SLUB: HWalign=%d, Order=%u-%u, MinObjects=%u, CPUs=%u, Nodes=%u\n",
 		cache_line_size(),
-		slub_min_order, slub_max_order, slub_min_objects,
+		slab_min_order, slab_max_order, slab_min_objects,
 		nr_cpu_ids, nr_node_ids);
 }
 
@@ -6702,7 +6706,7 @@ static int sysfs_slab_add(struct kmem_cache *s)
 	int unmergeable = slab_unmergeable(s);
 
 	if (!unmergeable && disable_higher_order_debug &&
-			(slub_debug & DEBUG_METADATA_FLAGS))
+			(slab_debug & DEBUG_METADATA_FLAGS))
 		unmergeable = 1;
 
 	if (unmergeable) {
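
For illustration only (an example command line of mine, not part of the
patch): after this series, the new and legacy spellings behave
identically at boot, e.g.

	slab_debug=FZ;P,kmalloc-64 slab_min_order=1
	slub_debug=FZ;P,kmalloc-64 slub_min_order=1

where, per Documentation/mm/slub.rst, F enables sanity checks and Z red
zoning globally, while P poisons objects of the kmalloc-64 cache only.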
From patchwork Sun Dec 3 00:15:01 2023
From: sxwjean@me.com
To: vbabka@suse.cz, 42.hyeyoo@gmail.com, cl@linux.com, linux-mm@kvack.org
Cc: penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com,
    roman.gushchin@linux.dev, corbet@lwn.net, keescook@chromium.org,
    arnd@arndb.de, akpm@linux-foundation.org, gregkh@linuxfoundation.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, Xiongwei Song
Subject: [PATCH v2 3/3] mm/slub: correct the default value of slub_min_objects in doc
Date: Sun, 3 Dec 2023 08:15:01 +0800
Message-Id: <20231203001501.126339-4-sxwjean@me.com>
In-Reply-To: <20231203001501.126339-1-sxwjean@me.com>
References: <20231203001501.126339-1-sxwjean@me.com>

From: Xiongwei Song

No value is assigned to slub_min_objects by default; it is always 0,
zero-initialized by the compiler, unless a value is passed on the
command line.
Instead, min_objects is calculated from the number of processors in
calculate_order(). For more details, see commit 9b2cd506e5f2 ("slub:
Calculate min_objects based on number of processors.").

Signed-off-by: Xiongwei Song
---
 Documentation/mm/slub.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/mm/slub.rst b/Documentation/mm/slub.rst
index be75971532f5..1f4399581449 100644
--- a/Documentation/mm/slub.rst
+++ b/Documentation/mm/slub.rst
@@ -150,7 +150,7 @@ list_lock once in a while to deal with partial slabs. That overhead is
 governed by the order of the allocation for each slab. The allocations
 can be influenced by kernel parameters:
 
-.. slub_min_objects=x		(default 4)
+.. slub_min_objects=x		(default 0)
 .. slub_min_order=x		(default 0)
 .. slub_max_order=x		(default 3 (PAGE_ALLOC_COSTLY_ORDER))
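
To make the zero default concrete, here is a worked example (mine, not
from the patch) of what happens when the parameter is left unset, based
on the calculate_order() code quoted in patch 2/3:

	/*
	 * With slub_min_objects (now also slab_min_objects) left at 0,
	 * calculate_order() derives the value from the CPU count:
	 *
	 *	min_objects = 4 * (fls(nr_cpus) + 1);
	 *
	 * e.g. nr_cpus = 16: fls(16) = 5 -> min_objects = 4 * 6 = 24
	 *      nr_cpus = 1:  fls(1)  = 1 -> min_objects = 4 * 2 = 8
	 */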