From patchwork Thu Jul 2 08:32:07 2020
X-Patchwork-Submitter: Xunlei Pang
X-Patchwork-Id: 11638257
From: Xunlei Pang
To: Christoph Lameter, Andrew Morton, Wen Yang, Yang Shi, Roman Gushchin
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] mm/slub: Introduce two counters for the partial objects
Date: Thu, 2 Jul 2020 16:32:07 +0800
Message-Id: <1593678728-128358-1-git-send-email-xlpang@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1

count_partial() iterates the partial page lists while holding the node
list_lock, which takes a long time when the lists are large. This can
cause a thundering-herd effect on list_lock contention; for example, it
causes business response-time jitter when "/proc/slabinfo" is read in
our production environments.

This patch introduces two counters to maintain the actual number of
partial objects dynamically, so that readers no longer need to iterate
the partial page lists with list_lock held. The new kmem_cache_node
counters are pfree_objects and ptotal_objects.

The counters are updated in the slow paths, mostly under list_lock, so
the performance impact is minimal.

Co-developed-by: Wen Yang
Signed-off-by: Xunlei Pang
Acked-by: Pekka Enberg
---
 mm/slab.h |  2 ++
 mm/slub.c | 38 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/mm/slab.h b/mm/slab.h
index 7e94700..5935749 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -616,6 +616,8 @@ struct kmem_cache_node {
 #ifdef CONFIG_SLUB
 	unsigned long nr_partial;
 	struct list_head partial;
+	atomic_long_t pfree_objects; /* partial free objects */
+	atomic_long_t ptotal_objects; /* partial total objects */
 #ifdef CONFIG_SLUB_DEBUG
 	atomic_long_t nr_slabs;
 	atomic_long_t total_objects;
diff --git a/mm/slub.c b/mm/slub.c
index 6589b41..53890f3 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1775,10 +1775,24 @@ static void discard_slab(struct kmem_cache *s, struct page *page)
 /*
  * Management of partially allocated slabs.
  */
+
+static inline void
+__update_partial_free(struct kmem_cache_node *n, long delta)
+{
+	atomic_long_add(delta, &n->pfree_objects);
+}
+
+static inline void
+__update_partial_total(struct kmem_cache_node *n, long delta)
+{
+	atomic_long_add(delta, &n->ptotal_objects);
+}
+
 static inline void
 __add_partial(struct kmem_cache_node *n, struct page *page, int tail)
 {
 	n->nr_partial++;
+	__update_partial_total(n, page->objects);
 	if (tail == DEACTIVATE_TO_TAIL)
 		list_add_tail(&page->slab_list, &n->partial);
 	else
@@ -1798,6 +1812,7 @@ static inline void remove_partial(struct kmem_cache_node *n,
 	lockdep_assert_held(&n->list_lock);
 	list_del(&page->slab_list);
 	n->nr_partial--;
+	__update_partial_total(n, -page->objects);
 }
 
 /*
@@ -1842,6 +1857,7 @@ static inline void *acquire_slab(struct kmem_cache *s,
 		return NULL;
 
 	remove_partial(n, page);
+	__update_partial_free(n, -*objects);
 	WARN_ON(!freelist);
 	return freelist;
 }
@@ -2174,8 +2190,11 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 				"unfreezing slab"))
 		goto redo;
 
-	if (lock)
+	if (lock) {
+		if (m == M_PARTIAL)
+			__update_partial_free(n, page->objects - page->inuse);
 		spin_unlock(&n->list_lock);
+	}
 
 	if (m == M_PARTIAL)
 		stat(s, tail);
@@ -2241,6 +2260,7 @@ static void unfreeze_partials(struct kmem_cache *s,
 			discard_page = page;
 		} else {
 			add_partial(n, page, DEACTIVATE_TO_TAIL);
+			__update_partial_free(n, page->objects - page->inuse);
 			stat(s, FREE_ADD_PARTIAL);
 		}
 	}
@@ -2915,6 +2935,14 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 		head, new.counters,
 		"__slab_free"));
 
+	if (!was_frozen && prior) {
+		if (n)
+			__update_partial_free(n, cnt);
+		else
+			__update_partial_free(get_node(s, page_to_nid(page)),
+				cnt);
+	}
+
 	if (likely(!n)) {
 
 		/*
@@ -2944,6 +2972,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 	if (!kmem_cache_has_cpu_partial(s) && unlikely(!prior)) {
 		remove_full(s, n, page);
 		add_partial(n, page, DEACTIVATE_TO_TAIL);
+		__update_partial_free(n, page->objects - page->inuse);
 		stat(s, FREE_ADD_PARTIAL);
 	}
 	spin_unlock_irqrestore(&n->list_lock, flags);
@@ -2955,6 +2984,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 		 * Slab on the partial list.
 		 */
 		remove_partial(n, page);
+		__update_partial_free(n, page->inuse - page->objects);
 		stat(s, FREE_REMOVE_PARTIAL);
 	} else {
 		/* Slab must be on the full list */
@@ -3364,6 +3394,8 @@ static inline int calculate_order(unsigned int size)
 	n->nr_partial = 0;
 	spin_lock_init(&n->list_lock);
 	INIT_LIST_HEAD(&n->partial);
+	atomic_long_set(&n->pfree_objects, 0);
+	atomic_long_set(&n->ptotal_objects, 0);
 #ifdef CONFIG_SLUB_DEBUG
 	atomic_long_set(&n->nr_slabs, 0);
 	atomic_long_set(&n->total_objects, 0);
@@ -3437,6 +3469,7 @@ static void early_kmem_cache_node_alloc(int node)
 	 * initialized and there is no concurrent access.
	 */
 	__add_partial(n, page, DEACTIVATE_TO_HEAD);
+	__update_partial_free(n, page->objects - page->inuse);
 }
 
 static void free_kmem_cache_nodes(struct kmem_cache *s)
@@ -3747,6 +3780,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 	list_for_each_entry_safe(page, h, &n->partial, slab_list) {
 		if (!page->inuse) {
 			remove_partial(n, page);
+			__update_partial_free(n, page->objects - page->inuse);
 			list_add(&page->slab_list, &discard);
 		} else {
 			list_slab_objects(s, page,
@@ -4045,6 +4079,8 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 		if (free == page->objects) {
 			list_move(&page->slab_list, &discard);
 			n->nr_partial--;
+			__update_partial_free(n, -free);
+			__update_partial_total(n, -free);
 		} else if (free <= SHRINK_PROMOTE_MAX)
 			list_move(&page->slab_list, promote + free - 1);
 	}
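
As a side note for readers of the series: the intent is that a follow-up
change (patch 2/2) can report partial-object counts from these counters
instead of walking n->partial under list_lock. Below is a minimal sketch
of such a reader, assuming it sits in mm/slub.c next to count_partial();
the helper names are hypothetical and not part of this patch, and the
clamp only hides transient negative values caused by the unlocked
updates racing with readers.

/*
 * Illustrative sketch only -- not part of this patch.  Assumes the
 * pfree_objects/ptotal_objects counters added above are kept in sync
 * by the slow-path update sites.
 */
static unsigned long read_partial_free_approx(struct kmem_cache_node *n)
{
	long x = atomic_long_read(&n->pfree_objects);

	/* Unlocked updates may race, so clamp transient negative values. */
	return x < 0 ? 0 : x;
}

static unsigned long read_partial_total_approx(struct kmem_cache_node *n)
{
	long x = atomic_long_read(&n->ptotal_objects);

	return x < 0 ? 0 : x;
}

Since atomic_long_read() is a plain load, such a reader never takes
list_lock; the trade-off is that the reported numbers are only
approximate while updates are in flight.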