From patchwork Sat Feb 22 09:24:28 2020
X-Patchwork-Submitter: Wen Yang
X-Patchwork-Id: 11398033
From: Wen Yang
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton
Cc: Wen Yang, Roman Gushchin, Xunlei Pang, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH] mm/slub: improve count_partial() for CONFIG_SLUB_CPU_PARTIAL
Date: Sat, 22 Feb 2020 17:24:28 +0800
Message-Id: <20200222092428.99488-1-wenyang@linux.alibaba.com>
X-Mailer: git-send-email 2.23.0
MIME-Version: 1.0
In the cloud server scenario, reading "/proc/slabinfo" can possibly block slab allocation on another CPU for a while, 200ms in extreme cases. If the slab objects carry network packets targeting a far-end disk array, this causes block I/O jitter issues.

This happens because the list_lock, which protects the node partial list, is taken while counting the free objects resident in that list. It introduces lock contention when pages are moved between the CPU and node partial lists in the allocation path on another CPU.

We also observed that in this scenario CONFIG_SLUB_CPU_PARTIAL is turned on by default, and count_partial() is of little use because the number it returns is far from reality anyway.

Therefore, we can simply return 0: nr_free is then also 0, and eventually active_objects == total_objects. This does not introduce any regression, and it is preferable to show an unrealistic uniform 100% slab utilization rather than a very high but incorrect value.

Co-developed-by: Roman Gushchin
Signed-off-by: Roman Gushchin
Signed-off-by: Wen Yang
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc: Xunlei Pang
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/slub.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 17dc00e..d5b7230 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2411,14 +2411,16 @@ static inline unsigned long node_nr_objs(struct kmem_cache_node *n)
 static unsigned long count_partial(struct kmem_cache_node *n,
 					int (*get_count)(struct page *))
 {
-	unsigned long flags;
 	unsigned long x = 0;
+#ifndef CONFIG_SLUB_CPU_PARTIAL
+	unsigned long flags;
 	struct page *page;
 
 	spin_lock_irqsave(&n->list_lock, flags);
 	list_for_each_entry(page, &n->partial, slab_list)
 		x += get_count(page);
 	spin_unlock_irqrestore(&n->list_lock, flags);
+#endif
 	return x;
 }
 #endif	/* CONFIG_SLUB_DEBUG || CONFIG_SYSFS */
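
For context, below is a small standalone userspace sketch (not part of this patch; the struct and function names are made up for illustration and only mirror the slabinfo fields) showing how the free-object count produced by count_partial() feeds the active-object figure in /proc/slabinfo, and why forcing it to 0 makes active_objs equal num_objs:

#include <stdio.h>

/* Illustrative only: mimics the arithmetic that turns a free-object
 * count into the "active objects" column of /proc/slabinfo. */
struct slabinfo_sketch {
	unsigned long active_objs;	/* objects believed to be in use */
	unsigned long num_objs;		/* total objects in all slabs   */
};

static void fill_slabinfo_sketch(struct slabinfo_sketch *si,
				 unsigned long nr_objs,
				 unsigned long nr_free_on_partial)
{
	/* nr_free_on_partial stands in for the per-node sum of
	 * count_partial(n, count_free).  With this patch and
	 * CONFIG_SLUB_CPU_PARTIAL=y it is always 0, so
	 * active_objs == num_objs (a uniform 100% utilization). */
	si->active_objs = nr_objs - nr_free_on_partial;
	si->num_objs = nr_objs;
}

int main(void)
{
	struct slabinfo_sketch before, after;

	fill_slabinfo_sketch(&before, 10000, 1200);	/* old behaviour      */
	fill_slabinfo_sketch(&after, 10000, 0);		/* patched, PARTIAL=y */

	printf("before: %lu/%lu active\n", before.active_objs, before.num_objs);
	printf("after:  %lu/%lu active\n", after.active_objs, after.num_objs);
	return 0;
}

The trade-off shown above is the one described in the changelog: the patched kernel reports 100% utilization for every cache, which is knowingly unrealistic but avoids both the list_lock contention and the previously reported value, which was already inaccurate when CONFIG_SLUB_CPU_PARTIAL is enabled.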