From patchwork Tue Sep 5 14:13:46 2023
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 13374491
From: Feng Tang <feng.tang@intel.com>
To: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Cc: Feng Tang
Subject: [RFC Patch 1/3] mm/slub: increase the maximum slab order to 4 for big systems
Date: Tue, 5 Sep 2023 22:13:46 +0800
Message-Id: <20230905141348.32946-2-feng.tang@intel.com>
In-Reply-To: <20230905141348.32946-1-feng.tang@intel.com>
References: <20230905141348.32946-1-feng.tang@intel.com>
There are reports of severe contention on SLUB's per-node 'list_lock' in
the 'hackbench' test on server systems [1][2], and similar contention is
seen when running the 'mmap1' case of will-it-scale on big systems. As
the trend is for one processor (socket) to carry more and more CPUs
(100+, 200+), the contention can become much more severe and turn into a
scalability issue.

One way to reduce the contention is to increase the maximum slab order
from 3 to 4 for big systems. Unconditionally increasing the order could
hurt client devices with very limited memory, which care more about
memory footprint; allocating an order-4 page is also harder under memory
pressure. So the increase is only done for big systems like servers,
which are usually equipped with plenty of memory and are more likely to
hit the lock contention issue.
Following is some performance data:

will-it-scale/mmap1
-------------------
Run will-it-scale benchmark's 'mmap1' test case on a 2-socket Sapphire
Rapids server (112 cores / 224 threads) with 256 GB DRAM, in 3
configurations with parallel test threads at 25%, 50% and 100% of the
number of CPUs (base is the vanilla v6.5 kernel):

                     base             base+patch
  wis-mmap1-25%     223670    +33.3%    298205    per_process_ops
  wis-mmap1-50%     186020    +51.8%    282383    per_process_ops
  wis-mmap1-100%     89200    +65.0%    147139    per_process_ops

From the perf-profile comparison of the 50% test case, the lock
contention is greatly reduced:

     43.80          -30.8      13.04         pp.self.native_queued_spin_lock_slowpath
      0.85           -0.2       0.65         pp.self.___slab_alloc
      0.41           -0.1       0.27         pp.self.__unfreeze_partials
      0.20 ± 2%      -0.1       0.12 ± 4%    pp.self.get_any_partial

hackbench
---------
Run the same hackbench testcase mentioned in [1], on the same HW/SW as
will-it-scale:

                     base             base+patch
  hackbench         759951    +10.5%    839601    hackbench.throughput

perf-profile diff:
     22.20 ± 3%    -15.2       7.05         pp.self.native_queued_spin_lock_slowpath
      0.82          -0.2       0.59         pp.self.___slab_alloc
      0.33          -0.2       0.13         pp.self.__unfreeze_partials

[1]. https://lore.kernel.org/all/202307172140.3b34825a-oliver.sang@intel.com/
[2]. https://lore.kernel.org/lkml/ZORaUsd+So+tnyMV@chenyu5-mobl2/

Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/slub.c | 51 ++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 38 insertions(+), 13 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index f7940048138c..09ae1ed642b7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4081,7 +4081,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_bulk);
  */
 static unsigned int slub_min_order;
 static unsigned int slub_max_order =
-	IS_ENABLED(CONFIG_SLUB_TINY) ? 1 : PAGE_ALLOC_COSTLY_ORDER;
+	IS_ENABLED(CONFIG_SLUB_TINY) ? 1 : 4;
 static unsigned int slub_min_objects;
 
 /*
@@ -4134,6 +4134,26 @@ static inline unsigned int calc_slab_order(unsigned int size,
 	return order;
 }
 
+static inline int num_cpus(void)
+{
+	int nr_cpus;
+
+	/*
+	 * Some architectures will only update present cpus when
+	 * onlining them, so don't trust the number if it's just 1. But
+	 * we also don't want to use nr_cpu_ids always, as on some other
+	 * architectures, there can be many possible cpus, but never
+	 * onlined. Here we compromise between trying to avoid too high
+	 * order on systems that appear larger than they are, and too
+	 * low order on systems that appear smaller than they are.
+	 */
+	nr_cpus = num_present_cpus();
+	if (nr_cpus <= 1)
+		nr_cpus = nr_cpu_ids;
+
+	return nr_cpus;
+}
+
 static inline int calculate_order(unsigned int size)
 {
 	unsigned int order;
@@ -4151,19 +4171,17 @@ static inline int calculate_order(unsigned int size)
 	 */
 	min_objects = slub_min_objects;
 	if (!min_objects) {
-		/*
-		 * Some architectures will only update present cpus when
-		 * onlining them, so don't trust the number if it's just 1. But
-		 * we also don't want to use nr_cpu_ids always, as on some other
-		 * architectures, there can be many possible cpus, but never
-		 * onlined. Here we compromise between trying to avoid too high
-		 * order on systems that appear larger than they are, and too
-		 * low order on systems that appear smaller than they are.
-		 */
-		nr_cpus = num_present_cpus();
-		if (nr_cpus <= 1)
-			nr_cpus = nr_cpu_ids;
+		nr_cpus = num_cpus();
 		min_objects = 4 * (fls(nr_cpus) + 1);
+
+		/*
+		 * If nr_cpus >= 32, the platform is likely to be a server
+		 * which usually has much more memory, and is easier to be
+		 * hurt by scalability issue, so enlarge it to reduce the
+		 * possible contention of the per-node 'list_lock'.
+		 */
+		if (nr_cpus >= 32)
+			min_objects *= 2;
 	}
 	max_objects = order_objects(slub_max_order, size);
 	min_objects = min(min_objects, max_objects);
@@ -4361,6 +4379,13 @@ static void set_cpu_partial(struct kmem_cache *s)
 	else
 		nr_objects = 120;
 
+	/*
+	 * Give larger system more buffer to reduce scalability issue, like
+	 * the handling in calculate_order().
+	 */
+	if (num_cpus() >= 32)
+		nr_objects *= 2;
+
 	slub_set_cpu_partial(s, nr_objects);
 #endif
 }
From patchwork Tue Sep 5 14:13:47 2023
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 13374493
From: Feng Tang <feng.tang@intel.com>
To: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Cc: Feng Tang
Subject: [RFC Patch 2/3] mm/slub: double per-cpu partial number for large systems
Date: Tue, 5 Sep 2023 22:13:47 +0800
Message-Id: <20230905141348.32946-3-feng.tang@intel.com>
In-Reply-To: <20230905141348.32946-1-feng.tang@intel.com>
References: <20230905141348.32946-1-feng.tang@intel.com>
There are reports of severe contention on SLUB's per-node 'list_lock' in
the 'hackbench' test on server systems [1][2], and similar contention is
seen when running the 'mmap1' case of will-it-scale on big systems. As
the trend is for one processor (socket) to carry more and more CPUs
(100+, 200+), the contention can become much more severe and turn into a
scalability issue.

One way to reduce the contention is to double the per-cpu partial number
for large systems.
Following is some performance data, which shows a big improvement in the
will-it-scale/mmap1 case, but no obvious change for the 'hackbench'
test. The patch itself only doubles (2X) the per-cpu partial number; for
better analysis, a 4X case was also profiled:

will-it-scale/mmap1
-------------------
Run will-it-scale benchmark's 'mmap1' test case on a 2-socket Sapphire
Rapids server (112 cores / 224 threads) with 256 GB DRAM, in 3
configurations with parallel test threads at 25%, 50% and 100% of the
number of CPUs (base is the vanilla v6.5 kernel):

                     base        base + 2X patch      base + 4X patch
  wis-mmap1-25      223670    +12.7%    251999    +34.9%    301749    per_process_ops
  wis-mmap1-50      186020    +28.0%    238067    +55.6%    289521    per_process_ops
  wis-mmap1-100      89200    +40.7%    125478    +62.4%    144858    per_process_ops

From the perf-profile comparison of the 50% test case, the lock
contention is greatly reduced:

     43.80    -11.5    32.27    -27.9    15.91    pp.self.native_queued_spin_lock_slowpath

hackbench
---------
Run the same hackbench testcase mentioned in [1], on the same HW/SW as
will-it-scale:

                     base        base + 2X patch      base + 4X patch
  hackbench         759951     +0.2%    761506     +0.5%    763972    hackbench.throughput

[1]. https://lore.kernel.org/all/202307172140.3b34825a-oliver.sang@intel.com/
[2]. https://lore.kernel.org/lkml/ZORaUsd+So+tnyMV@chenyu5-mobl2/

Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/slub.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index f7940048138c..51ca6dbaad09 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4361,6 +4361,13 @@ static void set_cpu_partial(struct kmem_cache *s)
 	else
 		nr_objects = 120;
 
+	/*
+	 * Give larger system more per-cpu partial slabs to reduce/postpone
+	 * contending per-node partial list.
+	 */
+	if (num_cpus() >= 32)
+		nr_objects *= 2;
+
 	slub_set_cpu_partial(s, nr_objects);
 #endif
 }
From patchwork Tue Sep 5 14:13:48 2023
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 13374492
From: Feng Tang <feng.tang@intel.com>
To: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Cc: Feng Tang
Subject: [RFC Patch 3/3] mm/slub: set up the maximum per-node partial according to cpu numbers
Date: Tue, 5 Sep 2023 22:13:48 +0800
Message-Id: <20230905141348.32946-4-feng.tang@intel.com>
In-Reply-To: <20230905141348.32946-1-feng.tang@intel.com>
References: <20230905141348.32946-1-feng.tang@intel.com>
Currently most slabs' min_partial is set to 5 (as MIN_PARTIAL is 5).
This is fine for older or small systems, but can be too small for a
large system with hundreds of CPUs, where the per-node 'list_lock' is
contended when allocating from and freeing to the per-node partial list.
So enlarge it based on the number of CPUs per node.

Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 include/linux/nodemask.h | 1 +
 mm/slub.c                | 9 +++++++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
index 8d07116caaf1..6e22caab186d 100644
--- a/include/linux/nodemask.h
+++ b/include/linux/nodemask.h
@@ -530,6 +530,7 @@ static inline int node_random(const nodemask_t *maskp)
 
 #define num_online_nodes()	num_node_state(N_ONLINE)
 #define num_possible_nodes()	num_node_state(N_POSSIBLE)
+#define num_cpu_nodes()		num_node_state(N_CPU)
 #define node_online(node)	node_state((node), N_ONLINE)
 #define node_possible(node)	node_state((node), N_POSSIBLE)

diff --git a/mm/slub.c b/mm/slub.c
index 09ae1ed642b7..984e012d7bbc 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4533,6 +4533,7 @@ static int calculate_sizes(struct kmem_cache *s)
 
 static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
 {
+	unsigned long min_partial;
 	s->flags = kmem_cache_flags(s->size, flags, s->name);
 #ifdef CONFIG_SLAB_FREELIST_HARDENED
 	s->random = get_random_long();
@@ -4564,8 +4565,12 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
 	 * The larger the object size is, the more slabs we want on the partial
 	 * list to avoid pounding the page allocator excessively.
 	 */
-	s->min_partial = min_t(unsigned long, MAX_PARTIAL, ilog2(s->size) / 2);
-	s->min_partial = max_t(unsigned long, MIN_PARTIAL, s->min_partial);
+
+	min_partial = rounddown_pow_of_two(num_cpus() / num_cpu_nodes());
+	min_partial = max_t(unsigned long, MIN_PARTIAL, min_partial);
+
+	s->min_partial = min_t(unsigned long, min_partial * 2, ilog2(s->size) / 2);
+	s->min_partial = max_t(unsigned long, min_partial, s->min_partial);
 
 	set_cpu_partial(s);