From patchwork Tue Aug 17 18:05:06 2021
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 12441849
From: Johannes Weiner
To: Andrew Morton
Cc: Leon Yang, Chris Down, Roman Gushchin, Michal Hocko, linux-mm@kvack.org,
    cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH] mm: memcontrol: fix occasional OOMs due to proportional memory.low reclaim
Date: Tue, 17 Aug 2021 14:05:06 -0400
Message-Id: <20210817180506.220056-1-hannes@cmpxchg.org>

We've noticed occasional OOM killing when memory.low settings are in
effect for cgroups. This is unexpected and undesirable as memory.low
is supposed to express non-OOMing memory priorities between cgroups.

The reason for this is proportional memory.low reclaim. When cgroups
are below their memory.low threshold, reclaim passes them over in the
first round, and then retries if it couldn't find pages anywhere else.
But when cgroups are slightly above their memory.low setting, page
scan force is scaled down and diminished in proportion to the overage,
to the point where it can cause reclaim to fail as well - only in that
case we currently don't retry, and instead trigger OOM.

To fix this, hook proportional reclaim into the same retry logic we
have in place for when cgroups are skipped entirely. This way, if
reclaim fails and some cgroups were scanned with diminished pressure,
we'll try another full-force cycle before giving up and OOMing.

Reported-by: Leon Yang
Signed-off-by: Johannes Weiner
Reviewed-by: Rik van Riel
Reviewed-by: Shakeel Butt
Acked-by: Roman Gushchin
Acked-by: Chris Down
Acked-by: Michal Hocko
---
 include/linux/memcontrol.h | 29 +++++++++++++++--------------
 mm/vmscan.c                | 27 +++++++++++++++++++--------
 2 files changed, 34 insertions(+), 22 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index bfe5c486f4ad..24797929d8a1 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -612,12 +612,15 @@ static inline bool mem_cgroup_disabled(void)
 	return !cgroup_subsys_enabled(memory_cgrp_subsys);
 }
 
-static inline unsigned long mem_cgroup_protection(struct mem_cgroup *root,
-						  struct mem_cgroup *memcg,
-						  bool in_low_reclaim)
+static inline void mem_cgroup_protection(struct mem_cgroup *root,
+					 struct mem_cgroup *memcg,
+					 unsigned long *min,
+					 unsigned long *low)
 {
+	*min = *low = 0;
+
 	if (mem_cgroup_disabled())
-		return 0;
+		return;
 
 	/*
 	 * There is no reclaim protection applied to a targeted reclaim.
@@ -653,13 +656,10 @@ static inline unsigned long mem_cgroup_protection(struct mem_cgroup *root,
 	 *
 	 */
 	if (root == memcg)
-		return 0;
-
-	if (in_low_reclaim)
-		return READ_ONCE(memcg->memory.emin);
+		return;
 
-	return max(READ_ONCE(memcg->memory.emin),
-		   READ_ONCE(memcg->memory.elow));
+	*min = READ_ONCE(memcg->memory.emin);
+	*low = READ_ONCE(memcg->memory.elow);
 }
 
 void mem_cgroup_calculate_protection(struct mem_cgroup *root,
@@ -1147,11 +1147,12 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
 {
 }
 
-static inline unsigned long mem_cgroup_protection(struct mem_cgroup *root,
-						  struct mem_cgroup *memcg,
-						  bool in_low_reclaim)
+static inline void mem_cgroup_protection(struct mem_cgroup *root,
+					 struct mem_cgroup *memcg,
+					 unsigned long *min,
+					 unsigned long *low)
 {
-	return 0;
+	*min = *low = 0;
 }
 
 static inline void mem_cgroup_calculate_protection(struct mem_cgroup *root,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4620df62f0ff..701106e1829c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -100,9 +100,12 @@ struct scan_control {
 	unsigned int may_swap:1;
 
 	/*
-	 * Cgroups are not reclaimed below their configured memory.low,
-	 * unless we threaten to OOM. If any cgroups are skipped due to
-	 * memory.low and nothing was reclaimed, go back for memory.low.
+	 * Cgroup memory below memory.low is protected as long as we
+	 * don't threaten to OOM. If any cgroup is reclaimed at
+	 * reduced force or passed over entirely due to its memory.low
+	 * setting (memcg_low_skipped), and nothing is reclaimed as a
+	 * result, then go back for one more cycle that reclaims the
+	 * protected memory (memcg_low_reclaim) to avert OOM.
	 */
 	unsigned int memcg_low_reclaim:1;
 	unsigned int memcg_low_skipped:1;
@@ -2537,15 +2540,14 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 	for_each_evictable_lru(lru) {
 		int file = is_file_lru(lru);
 		unsigned long lruvec_size;
+		unsigned long low, min;
 		unsigned long scan;
-		unsigned long protection;
 
 		lruvec_size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
-		protection = mem_cgroup_protection(sc->target_mem_cgroup,
-						   memcg,
-						   sc->memcg_low_reclaim);
+		mem_cgroup_protection(sc->target_mem_cgroup, memcg,
+				      &min, &low);
 
-		if (protection) {
+		if (min || low) {
 			/*
 			 * Scale a cgroup's reclaim pressure by proportioning
 			 * its current usage to its memory.low or memory.min
@@ -2576,6 +2578,15 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 			 * hard protection.
 			 */
 			unsigned long cgroup_size = mem_cgroup_size(memcg);
+			unsigned long protection;
+
+			/* memory.low scaling, make sure we retry before OOM */
+			if (!sc->memcg_low_reclaim && low > min) {
+				protection = low;
+				sc->memcg_low_skipped = 1;
+			} else {
+				protection = min;
+			}
 
 			/* Avoid TOCTOU with earlier protection check */
 			cgroup_size = max(cgroup_size, protection);
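
Not part of the patch: below is a minimal userspace sketch of the behaviour
described in the changelog, under assumed numbers. The names (fake_memcg,
pick_protection, scaled_scan) and the exact scaling formula are illustrative
assumptions, not the kernel's code; the real logic lives in get_scan_count()
and mem_cgroup_protection().

/*
 * Illustrative sketch only -- not kernel code. Shows how scan pressure
 * shrinks in proportion to how far usage sits above the protection, and
 * why a full-force retry (memcg_low_reclaim) is needed before OOMing.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_memcg {
	unsigned long usage;	/* current memory usage (pages) */
	unsigned long emin;	/* effective memory.min */
	unsigned long elow;	/* effective memory.low */
};

/* Assumed scaling: pressure proportional to the overage above protection. */
static unsigned long scaled_scan(unsigned long lruvec_size,
				 unsigned long protection,
				 unsigned long usage)
{
	if (protection >= usage)
		return 0;
	return lruvec_size - lruvec_size * protection / (usage + 1);
}

static unsigned long pick_protection(const struct fake_memcg *m,
				     bool low_reclaim, bool *low_skipped)
{
	/* memory.low scaling, make sure we retry before OOM */
	if (!low_reclaim && m->elow > m->emin) {
		*low_skipped = true;
		return m->elow;
	}
	return m->emin;
}

int main(void)
{
	struct fake_memcg m = { .usage = 1050, .emin = 0, .elow = 1000 };
	unsigned long lruvec_size = 512;
	bool low_skipped = false;
	unsigned long p;

	/* First pass: slightly above memory.low, scan force is tiny. */
	p = pick_protection(&m, false, &low_skipped);
	printf("pass 1: scan %lu of %lu pages (low_skipped=%d)\n",
	       scaled_scan(lruvec_size, p, m.usage), lruvec_size, low_skipped);

	/* Retry pass: if nothing was reclaimed, ignore memory.low. */
	if (low_skipped) {
		p = pick_protection(&m, true, &low_skipped);
		printf("retry: scan %lu of %lu pages at full force\n",
		       scaled_scan(lruvec_size, p, m.usage), lruvec_size);
	}
	return 0;
}

With the assumed numbers, the first pass scans only ~25 of 512 pages; the
retry pass scans all 512, which is the full-force cycle the patch makes
reclaim attempt before falling back to OOM.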