From patchwork Fri Aug 7 06:22:18 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11704857
Date: Thu, 06 Aug 2020 23:22:18 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, guro@fb.com, hannes@cmpxchg.org,
 linux-mm@kvack.org, mhocko@suse.com, mkoutny@suse.com,
 mm-commits@vger.kernel.org, stable@vger.kernel.org, tj@kernel.org,
 torvalds@linux-foundation.org
Subject: [patch 093/163] mm/page_counter.c: fix protection usage propagation
Message-ID: <20200807062218.U7OzBph9t%akpm@linux-foundation.org>
In-Reply-To: <20200806231643.a2711a608dd0f18bff2caf2b@linux-foundation.org>

From: Michal Koutný
Subject: mm/page_counter.c: fix protection usage propagation

When a workload runs in cgroups that aren't directly below the root cgroup
and their parent specifies reclaim protection, that protection may end up
ineffective.

The reason is that propagate_protected_usage() is not called all the way
up the hierarchy.  All the protected usage is incorrectly accumulated in
the workload's parent.  This means that siblings_low_usage is
overestimated and the effective protection underestimated.  Even though
this is a transitional phenomenon (the uncharge path propagates correctly
and fixes the wrong children_low_usage), it can undermine the intended
protection unexpectedly.

We noticed this problem when we saw swap-out in a descendant of a
protected memcg (an intermediate node) while the parent was comfortably
under its protection limit and the memory pressure was external to that
hierarchy.  Michal pinpointed this to the wrong siblings_low_usage, which
led to the unwanted reclaim.

The fix is simply to update children_low_usage in the respective
ancestors in the charging path as well.

Link: http://lkml.kernel.org/r/20200803153231.15477-1-mhocko@kernel.org
Fixes: 230671533d64 ("mm: memory.low hierarchical behavior")
Signed-off-by: Michal Koutný
Signed-off-by: Michal Hocko
Acked-by: Michal Hocko
Acked-by: Roman Gushchin
Cc: Johannes Weiner
Cc: Tejun Heo
Cc: [4.18+]
Signed-off-by: Andrew Morton
---

 mm/page_counter.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/mm/page_counter.c~mm-fix-protection-usage-propagation
+++ a/mm/page_counter.c
@@ -72,7 +72,7 @@ void page_counter_charge(struct page_cou
 		long new;
 
 		new = atomic_long_add_return(nr_pages, &c->usage);
-		propagate_protected_usage(counter, new);
+		propagate_protected_usage(c, new);
 		/*
 		 * This is indeed racy, but we can live with some
 		 * inaccuracy in the watermark.
@@ -116,7 +116,7 @@ bool page_counter_try_charge(struct page
 		new = atomic_long_add_return(nr_pages, &c->usage);
 		if (new > c->max) {
 			atomic_long_sub(nr_pages, &c->usage);
-			propagate_protected_usage(counter, new);
+			propagate_protected_usage(c, new);
 			/*
 			 * This is racy, but we can live with some
 			 * inaccuracy in the failcnt.
@@ -125,7 +125,7 @@ bool page_counter_try_charge(struct page
 			*fail = c;
 			goto failed;
 		}
-		propagate_protected_usage(counter, new);
+		propagate_protected_usage(c, new);
 		/*
 		 * Just like with failcnt, we can live with some
 		 * inaccuracy in the watermark.
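
For readers looking at the hunks in isolation: all three call sites sit
inside a loop that walks from the charged counter up through its ancestors
to the root, which is why passing counter instead of the loop iterator
mattered.  Below is a simplified sketch of page_counter_charge() after the
fix, paraphrased from the v5.8-era mm/page_counter.c with the watermark
update abbreviated, so treat it as illustrative rather than a verbatim
quote of the upstream source.

#include <linux/atomic.h>
#include <linux/page_counter.h>

/*
 * Simplified sketch of page_counter_charge() after the fix; not a
 * verbatim copy of mm/page_counter.c.
 */
void page_counter_charge(struct page_counter *counter, unsigned long nr_pages)
{
	struct page_counter *c;

	/* Walk from the charged counter up through every ancestor. */
	for (c = counter; c; c = c->parent) {
		long new;

		new = atomic_long_add_return(nr_pages, &c->usage);
		/*
		 * Propagate the usage of this level (c), not of the leaf
		 * (counter): the old call passed the leaf on every
		 * iteration, so only the leaf's parent was ever updated
		 * and the ancestors above it never saw the charge.
		 */
		propagate_protected_usage(c, new);
		if (new > c->watermark)
			c->watermark = new;	/* racy, but tolerable */
	}
}

page_counter_try_charge() has the same shape; with the fix, each ancestor's
children_low_usage (and children_min_usage) is updated against that
ancestor's own protection on the charge path, matching what the uncharge
path already did.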