From patchwork Fri Mar 20 08:32:14 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11448605
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner, Michal Hocko, Minchan Kim, Vlastimil Babka, Mel Gorman, kernel-team@lge.com, Ye Xiaolong, David Rientjes, Joonsoo Kim
Subject: [PATCH v3 1/2] mm/page_alloc: use ac->high_zoneidx for classzone_idx
Date: Fri, 20 Mar 2020 17:32:14 +0900
Message-Id: <1584693135-4396-2-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1584693135-4396-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1584693135-4396-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

Currently, we use the zone index of preferred_zone, which represents the best matching zone for the allocation, as classzone_idx.
This causes a problem on NUMA systems when the lowmem reserve protection exists for some zones on one node that do not exist on other nodes.

On a NUMA system, each node can have a different set of populated zones. For example, node 0 could have DMA/DMA32/NORMAL/MOVABLE zones while node 1 has only a NORMAL zone. In this setup, an allocation request initiated on node 0 and one initiated on node 1 get different classzone_idx values, 3 and 2 respectively, since their preferred_zones differ. If the allocation is served locally, there is no problem. However, if it is handled by a remote node due to memory shortage, the problem appears: in the following setup, an allocation initiated on node 1 gets an easier watermark check than an allocation initiated on node 0 when the former is processed on node 0 because node 1 is short of memory. The two requests see different lowmem reserves due to their different classzone_idx, so their watermark bars also differ.

root@ubuntu:/sys/devices/system/memory# cat /proc/zoneinfo
Node 0, zone      DMA
  per-node stats
  ...
  pages free     3965
        min      5
        low      8
        high     11
        spanned  4095
        present  3998
        managed  3977
        protection: (0, 2961, 4928, 5440)
  ...
Node 0, zone    DMA32
  pages free     757955
        min      1129
        low      1887
        high     2645
        spanned  1044480
        present  782303
        managed  758116
        protection: (0, 0, 1967, 2479)
  ...
Node 0, zone   Normal
  pages free     459806
        min      750
        low      1253
        high     1756
        spanned  524288
        present  524288
        managed  503620
        protection: (0, 0, 0, 4096)
  ...
Node 0, zone  Movable
  pages free     130759
        min      195
        low      326
        high     457
        spanned  1966079
        present  131072
        managed  131072
        protection: (0, 0, 0, 0)
  ...
Node 1, zone      DMA
  pages free     0
        min      0
        low      0
        high     0
        spanned  0
        present  0
        managed  0
        protection: (0, 0, 1006, 1006)
Node 1, zone    DMA32
  pages free     0
        min      0
        low      0
        high     0
        spanned  0
        present  0
        managed  0
        protection: (0, 0, 1006, 1006)
Node 1, zone   Normal
  per-node stats
  ...
  pages free     233277
        min      383
        low      640
        high     897
        spanned  262144
        present  262144
        managed  257744
        protection: (0, 0, 0, 0)
  ...
Node 1, zone  Movable
  pages free     0
        min      0
        low      0
        high     0
        spanned  262144
        present  0
        managed  0
        protection: (0, 0, 0, 0)

min watermark for the NORMAL zone on node 0:

  allocation initiated on node 0: 750 + 4096 = 4846
  allocation initiated on node 1: 750 + 0    = 750

This watermark difference can cause too many numa_miss allocations in some situations, and performance then degrades. Recently, there was a regression report about this problem on the CMA patches, since those patches place CMA memory in ZONE_MOVABLE. I checked that the problem disappears with this fix, which uses high_zoneidx for classzone_idx.

http://lkml.kernel.org/r/20180102063528.GG30397@yexl-desktop

Using high_zoneidx for classzone_idx is more consistent than the previous approach because the system's memory layout doesn't affect it at all. With this patch, both classzone_idx values in the example above will be 3, so both requests will have the same min watermark.

  allocation initiated on node 0: 750 + 4096 = 4846
  allocation initiated on node 1: 750 + 4096 = 4846

One could wonder if there is a side effect: an allocation initiated on node 1 would use a higher bar when the allocation is handled locally, since its classzone_idx could be higher than before. This will not happen, because a zone without managed pages doesn't contribute to lowmem_reserve at all.

Reported-by: Ye Xiaolong
Tested-by: Ye Xiaolong
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
 mm/internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/internal.h b/mm/internal.h
index c39c895..aebaa33 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -119,7 +119,7 @@ struct alloc_context {
 	bool spread_dirty_pages;
 };
 
-#define ac_classzone_idx(ac) zonelist_zone_idx(ac->preferred_zoneref)
+#define ac_classzone_idx(ac) (ac->high_zoneidx)
 
 /*
  * Locate the struct page for both the matching buddy in our