From patchwork Fri Dec 13 19:21:56 2019
X-Patchwork-Submitter: Johannes Weiner <hannes@cmpxchg.org>
X-Patchwork-Id: 11291515
From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>, Roman Gushchin <guro@fb.com>,
    Tejun Heo <tj@kernel.org>, linux-mm@kvack.org, cgroups@vger.kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 1/3] mm: memcontrol: fix memory.low proportional distribution
Date: Fri, 13 Dec 2019 14:21:56 -0500
Message-Id: <20191213192158.188939-2-hannes@cmpxchg.org>
In-Reply-To: <20191213192158.188939-1-hannes@cmpxchg.org>
References: <20191213192158.188939-1-hannes@cmpxchg.org>

When memory.low is overcommitted - i.e. the children claim more
protection than their shared ancestor grants them - the allowance is
distributed in proportion to each sibling's utilized protection:

  low_usage = min(low, usage)
  elow = parent_elow * (low_usage / siblings_low_usage)

However, siblings_low_usage is not the sum of all low_usages. It sums
up the usages of *only those cgroups that are within their memory.low*.
That means that low_usage can be *bigger* than siblings_low_usage, and
consequently the total protection afforded to the children can be
bigger than what the ancestor grants the subtree.

Consider three groups where two are in excess of their protection:

  A/memory.low = 10G
  A/A1/memory.low = 10G, A/A1/memory.current = 20G
  A/A2/memory.low = 10G, A/A2/memory.current = 20G
  A/A3/memory.low = 10G, A/A3/memory.current = 8G

  siblings_low_usage = 8G (only A3 contributes)

  A1/elow = parent_elow(10G) * low_usage(10G) / siblings_low_usage(8G) = 12.5G

  The 12.5G is then capped to A1's own memory.low setting, i.e. 10G.
  The same is true for A2, and A3 would also receive 10G. The combined
  protection of A1, A2 and A3 is 30G, when A limits the tree to 10G.

What does this mean in practice? A1 and A2 would still be in excess of
their 10G allowance and would be reclaimed, whereas A3 would not.
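To replay the arithmetic above, here is a minimal userspace C sketch -
illustrative only, not kernel code; min_d() and all variable names are
made up for this example - that sums the effective protection of A1-A3
under the current siblings_low_usage rule and under the proposed
min()-based rule:

  #include <stdio.h>

  static double min_d(double a, double b) { return a < b ? a : b; }

  int main(void)
  {
          /* A1..A3 from the example above, values in GiB */
          double low[]   = { 10, 10, 10 };   /* memory.low     */
          double usage[] = { 20, 20,  8 };   /* memory.current */
          double parent_elow = 10;           /* A's effective protection */
          double old_siblings = 0, new_siblings = 0;
          double old_total = 0, new_total = 0;

          for (int i = 0; i < 3; i++) {
                  /* current rule: a sibling counts only while within its memory.low */
                  if (usage[i] <= low[i])
                          old_siblings += usage[i];
                  /* fixed rule: every sibling contributes min(memory.low, memory.current) */
                  new_siblings += min_d(low[i], usage[i]);
          }

          for (int i = 0; i < 3; i++) {
                  double low_usage = min_d(low[i], usage[i]);

                  old_total += min_d(low[i], parent_elow * low_usage / old_siblings);
                  new_total += min_d(low[i], parent_elow * low_usage / new_siblings);
          }

          printf("siblings_low_usage: old=%.0fG new=%.0fG\n",
                 old_siblings, new_siblings);
          printf("summed elow:        old=%.0fG new=%.0fG (parent grants %.0fG)\n",
                 old_total, new_total, parent_elow);
          return 0;
  }

Built with any C99 compiler, it reports siblings_low_usage of 8G vs.
28G and a summed effective protection of 30G vs. 10G, matching the
numbers above.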
As A1 and A2 eventually drop below their protection setting, they would
be counted in siblings_low_usage again and the error would right itself.

When reclaim is applied in a binary fashion - the cgroup is reclaimed
when it's above its protection, otherwise it's skipped - this could
actually work out just fine, although it's not quite clear to me why
we'd introduce this error in the first place. However, since
1bc63fb1272b ("mm, memcg: make scan aggression always exclude
protection"), reclaim pressure is scaled to how much a cgroup is above
its protection. As a result, this calculation error unduly skews
pressure away from A1 and A2 toward the rest of the system.

Fix this by making siblings_low_usage the sum of all protected memory
among siblings, including those that are in excess of their protection.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/memcontrol.c   |  4 +---
 mm/page_counter.c | 12 ++----------
 2 files changed, 3 insertions(+), 13 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c5b5f74cfd4d..874a0b00f89b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6236,9 +6236,7 @@ struct cgroup_subsys memory_cgrp_subsys = {
  * elow = min( memory.low, parent->elow * ------------------ ),
  *                                        siblings_low_usage
  *
- *             | memory.current, if memory.current < memory.low
- * low_usage = |
- *             | 0, otherwise.
+ * low_usage = min(memory.low, memory.current)
  *
  *
  * Such definition of the effective memory.low provides the expected
diff --git a/mm/page_counter.c b/mm/page_counter.c
index de31470655f6..75d53f15f040 100644
--- a/mm/page_counter.c
+++ b/mm/page_counter.c
@@ -23,11 +23,7 @@ static void propagate_protected_usage(struct page_counter *c,
                 return;
 
         if (c->min || atomic_long_read(&c->min_usage)) {
-                if (usage <= c->min)
-                        protected = usage;
-                else
-                        protected = 0;
-
+                protected = min(usage, c->min);
                 old_protected = atomic_long_xchg(&c->min_usage, protected);
                 delta = protected - old_protected;
                 if (delta)
@@ -35,11 +31,7 @@ static void propagate_protected_usage(struct page_counter *c,
         }
 
         if (c->low || atomic_long_read(&c->low_usage)) {
-                if (usage <= c->low)
-                        protected = usage;
-                else
-                        protected = 0;
-
+                protected = min(usage, c->low);
                 old_protected = atomic_long_xchg(&c->low_usage, protected);
                 delta = protected - old_protected;
                 if (delta)
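
For illustration, the effect of the page_counter change can be modeled
in userspace as below. This is a simplified stand-in, not the kernel
API: the struct layout, propagate_low_usage() and min_ul() are invented
for the sketch, and the plain assignment stands in for the kernel's
atomic_long_xchg(). The point it shows is that each counter now always
publishes min(usage, low) to its parent, so children_low_usage for the
example tree becomes 28G instead of 8G.

  #include <stdio.h>

  struct counter {
          unsigned long usage;              /* memory.current */
          unsigned long low;                /* memory.low */
          unsigned long low_usage;          /* last value published to the parent */
          unsigned long children_low_usage; /* sum of the children's low_usage */
          struct counter *parent;
  };

  static unsigned long min_ul(unsigned long a, unsigned long b)
  {
          return a < b ? a : b;
  }

  /* After the patch: always publish min(usage, low) instead of dropping to 0. */
  static void propagate_low_usage(struct counter *c, unsigned long usage)
  {
          unsigned long protected, old_protected;
          long delta;

          if (!c->parent)
                  return;

          protected = min_ul(usage, c->low);
          old_protected = c->low_usage;     /* kernel: atomic_long_xchg() */
          c->low_usage = protected;
          delta = protected - old_protected;
          if (delta)
                  c->parent->children_low_usage += delta;
  }

  int main(void)
  {
          struct counter a  = { 0 };
          struct counter a1 = { .low = 10, .parent = &a };
          struct counter a2 = { .low = 10, .parent = &a };
          struct counter a3 = { .low = 10, .parent = &a };

          /* usages from the changelog example, in "GiB" */
          a1.usage = 20; propagate_low_usage(&a1, a1.usage);
          a2.usage = 20; propagate_low_usage(&a2, a2.usage);
          a3.usage = 8;  propagate_low_usage(&a3, a3.usage);

          /* prints 28 (10 + 10 + 8); the old binary rule would have left 8 */
          printf("children_low_usage = %lu\n", a.children_low_usage);
          return 0;
  }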