From patchwork Sat Oct 19 03:19:58 2019
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11199893
Date: Fri, 18 Oct 2019 20:19:58 -0700
From: akpm@linux-foundation.org
To: akpm@linux-foundation.org, dave.hansen@intel.com, guro@fb.com,
 hannes@cmpxchg.org, honglei.wang@oracle.com, linux-mm@kvack.org,
 mhocko@suse.com, mm-commits@vger.kernel.org, stable@vger.kernel.org,
 tim.c.chen@linux.intel.com, tj@kernel.org, torvalds@linux-foundation.org,
 vdavydov.dev@gmail.com
Subject: [patch 12/26] mm: memcg: get number of pages on the LRU list in
 memcgroup base on lru_zone_size
Message-ID: <20191019031958.4NBZo8bqy%akpm@linux-foundation.org>
Sender: owner-linux-mm@kvack.org

From: Honglei Wang
Subject: mm: memcg: get number of pages on the LRU list in memcgroup base on lru_zone_size

Commit 1a61ab8038e72 ("mm: memcontrol: replace zone summing with
lruvec_page_state()") made lruvec_page_state() use per-cpu counters
instead of calculating the size directly from lru_zone_size, with the
idea that this would be more efficient.

Tim has reported that this is not the case for their database benchmark,
which shows the opposite result: lruvec_page_state() takes up a huge
chunk of CPU cycles (about 25% of the system time, which is roughly 7%
of total CPU cycles) on 5.3 kernels.  The workload runs on a large
machine (96 CPUs), has many cgroups (500), and is heavily direct-reclaim
bound.

Tim Chen said:

: The problem can also be reproduced by running simple multi-threaded
: pmbench benchmark with a fast Optane SSD swap (see profile below).
:
:
:   6.15%  3.08%  pmbench  [kernel.vmlinux]  [k] lruvec_lru_size
:   |
:   |--3.07%--lruvec_lru_size
:   |          |
:   |          |--2.11%--cpumask_next
:   |          |          |
:   |          |           --1.66%--find_next_bit
:   |          |
:   |           --0.57%--call_function_interrupt
:   |                     |
:   |                      --0.55%--smp_call_function_interrupt
:   |
:   |--1.59%--0x441f0fc3d009
:   |          _ops_rdtsc_init_base_freq
:   |          access_histogram
:   |          page_fault
:   |          __do_page_fault
:   |          handle_mm_fault
:   |          __handle_mm_fault
:   |          |
:   |           --1.54%--do_swap_page
:   |                     swapin_readahead
:   |                     swap_cluster_readahead
:   |                     |
:   |                      --1.53%--read_swap_cache_async
:   |                                __read_swap_cache_async
:   |                                alloc_pages_vma
:   |                                __alloc_pages_nodemask
:   |                                __alloc_pages_slowpath
:   |                                try_to_free_pages
:   |                                do_try_to_free_pages
:   |                                shrink_node
:   |                                shrink_node_memcg
:   |                                |
:   |                                |--0.77%--lruvec_lru_size
:   |                                |
:   |                                 --0.76%--inactive_list_is_low
:   |                                           |
:   |                                            --0.76%--lruvec_lru_size
:   |
:    --1.50%--measure_read
:              page_fault
:              __do_page_fault
:              handle_mm_fault
:              __handle_mm_fault
:              do_swap_page
:              swapin_readahead
:              swap_cluster_readahead
:              |
:               --1.48%--read_swap_cache_async
:                         __read_swap_cache_async
:                         alloc_pages_vma
:                         __alloc_pages_nodemask
:                         __alloc_pages_slowpath
:                         try_to_free_pages
:                         do_try_to_free_pages
:                         shrink_node
:                         shrink_node_memcg
:                         |
:                         |--0.75%--inactive_list_is_low
:                         |          |
:                         |           --0.75%--lruvec_lru_size
:                         |
:                          --0.73%--lruvec_lru_size

The likely culprit is the cache traffic that lruvec_page_state_local()
generates.  Dave Hansen says:

: I was thinking purely of the cache footprint.  If it's reading
: pn->lruvec_stat_local->count[idx] is three separate cachelines, so 192
: bytes of cache *96 CPUs = 18k of data, mostly read-only.
: 1 cgroup would be 18k of data for the whole system and the caching would
: be pretty efficient and all 18k would probably survive a tight page fault
: loop in the L1.  500 cgroups would be ~90k of data per CPU thread which
: doesn't fit in the L1 and probably wouldn't survive a tight page fault
: loop if both logical threads were banging on different cgroups.
:
: It's just a theory, but it's why I noted the number of cgroups when I
: initially saw this show up in profiles

Fix the regression by partially reverting the said commit and calculating
the lru size explicitly.

Link: http://lkml.kernel.org/r/20190905071034.16822-1-honglei.wang@oracle.com
Fixes: 1a61ab8038e72 ("mm: memcontrol: replace zone summing with lruvec_page_state()")
Signed-off-by: Honglei Wang
Reported-by: Tim Chen
Acked-by: Tim Chen
Tested-by: Tim Chen
Acked-by: Michal Hocko
Cc: Vladimir Davydov
Cc: Johannes Weiner
Cc: Roman Gushchin
Cc: Tejun Heo
Cc: Dave Hansen
Cc: [5.2+]
Signed-off-by: Andrew Morton
---

 mm/vmscan.c |    9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

--- a/mm/vmscan.c~mm-vmscan-get-number-of-pages-on-the-lru-list-in-memcgroup-base-on-lru_zone_size
+++ a/mm/vmscan.c
@@ -351,12 +351,13 @@ unsigned long zone_reclaimable_pages(str
  */
 unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx)
 {
-	unsigned long lru_size;
+	unsigned long lru_size = 0;
 	int zid;
 
-	if (!mem_cgroup_disabled())
-		lru_size = lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
-	else
+	if (!mem_cgroup_disabled()) {
+		for (zid = 0; zid < MAX_NR_ZONES; zid++)
+			lru_size += mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
+	} else
 		lru_size = node_page_state(lruvec_pgdat(lruvec), NR_LRU_BASE + lru);
 
 	for (zid = zone_idx + 1; zid < MAX_NR_ZONES; zid++) {
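
As a rough illustration of why the revert helps, the self-contained
userspace model below (not kernel code; struct names, array sizes and the
fake page counts are invented for the example) contrasts the access
pattern of the old read path, which sums one per-cpu slot per possible
CPU as lruvec_page_state_local() does, with the per-zone summation this
patch restores via mem_cgroup_get_zone_lru_size().

/*
 * Illustrative userspace model only -- NOT kernel code.  Only the access
 * patterns mirror what the patch changes.
 */
#include <stdio.h>

#define NR_CPUS       96   /* machine size from Tim Chen's report */
#define MAX_NR_ZONES   4   /* assumed zone count for the example  */
#define NR_LRU_LISTS   5   /* assumed number of LRU lists         */

/* Old path: one counter slot per CPU, so a sum touches O(NR_CPUS) slots. */
struct percpu_counters {
	long count[NR_CPUS][NR_LRU_LISTS];
};

/* Restored path: one pre-aggregated total per zone, O(MAX_NR_ZONES) reads. */
struct zone_totals {
	long lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
};

static long sum_percpu(const struct percpu_counters *c, int lru)
{
	long lru_size = 0;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		lru_size += c->count[cpu][lru];
	return lru_size;
}

static long sum_zones(const struct zone_totals *z, int lru)
{
	long lru_size = 0;

	for (int zid = 0; zid < MAX_NR_ZONES; zid++)
		lru_size += z->lru_zone_size[zid][lru];
	return lru_size;
}

int main(void)
{
	static struct percpu_counters pc;
	static struct zone_totals zt;
	int lru = 0;

	/* Fill both views with the same 960 fake pages on LRU list 0. */
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		pc.count[cpu][lru] = 10;
	for (int zid = 0; zid < MAX_NR_ZONES; zid++)
		zt.lru_zone_size[zid][lru] = 240;

	printf("per-cpu sum : %ld pages (%d loads)\n", sum_percpu(&pc, lru), NR_CPUS);
	printf("per-zone sum: %ld pages (%d loads)\n", sum_zones(&zt, lru), MAX_NR_ZONES);
	return 0;
}

Both sums return the same LRU size; the difference is that the per-cpu
variant walks 96 slots per cgroup per call, which is what shows up as
cpumask_next/find_next_bit in the profile above.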
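
For readers reconstructing Dave's cache-footprint arithmetic, the short
program below reproduces the ~18k and ~90k figures.  The 3-cacheline,
96-CPU and 500-cgroup numbers come from the report; the 64-byte cacheline
size is an assumption typical of x86.

/* Re-derivation of the cache-footprint estimate quoted above. */
#include <stdio.h>

int main(void)
{
	const long cacheline_bytes = 64;   /* assumed x86 cacheline size        */
	const long lines_per_slot  = 3;    /* pn->lruvec_stat_local->count[idx] */
	const long cpus            = 96;
	const long cgroups         = 500;

	long bytes_per_cpu_slot = lines_per_slot * cacheline_bytes;   /* 192 B  */
	long one_cgroup_system  = bytes_per_cpu_slot * cpus;          /* ~18 KB */
	long per_thread_500cg   = bytes_per_cpu_slot * cgroups;       /* ~94 KB */

	printf("one cgroup, whole system   : %ld bytes (~18k)\n", one_cgroup_system);
	printf("500 cgroups, per CPU thread: %ld bytes (~90k)\n", per_thread_500cg);
	return 0;
}

With 500 cgroups the read-mostly data no longer fits in L1, which matches
the theory quoted above and the lruvec_lru_size() hotspots in the profile.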