From patchwork Tue Nov 22 23:27:21 2022
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 13052877
Date: Tue, 22 Nov 2022 23:27:21 +0000
Message-ID: <20221122232721.2306102-1-yosryahmed@google.com>
Subject: [PATCH] mm: memcg: fix stale protection of reclaim target memcg
From: Yosry Ahmed
To: Shakeel Butt, Roman Gushchin, Johannes Weiner, Michal Hocko, Yu Zhao, Muchun Song
Cc: "Matthew Wilcox (Oracle)", Vasily Averin, Vlastimil Babka, Chris Down, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Yosry Ahmed

During reclaim, mem_cgroup_calculate_protection() is used to determine
the effective protection (emin and elow) values of a memcg.
The protection of the reclaim target is ignored, but we cannot set its
effective protection to 0 due to a limitation of the current
implementation (see comment in mem_cgroup_protection()). Instead, we
leave its effective protection values unchanged, and rely on
mem_cgroup_protection() to ignore them later.

However, mem_cgroup_protection() is called later in
shrink_lruvec()->get_scan_count(), which is after the
mem_cgroup_below_{min/low}() checks in shrink_node_memcgs(). As a
result, the stale effective protection values of the target memcg may
lead us to skip reclaiming from the target memcg entirely, before
calling shrink_lruvec(). This can be even worse with recursive
protection, where the stale target memcg protection can be higher than
its standalone protection.

An example where this can happen is as follows. Consider the following
hierarchy with memory_recursiveprot:

	ROOT
	 |
	 A (memory.min = 50M)
	 |
	 B (memory.min = 10M, memory.high = 40M)

Consider the following scenario:
- B has memory.current = 35M.
- The system undergoes global reclaim (target memcg is NULL).
- B will have an effective min of 50M (all of A's unclaimed protection).
- B will not be reclaimed from.
- Now allocate 10M more memory in B, pushing it above its high limit.
- The system undergoes memcg reclaim from B (target memcg is B).
- In shrink_node_memcgs(), we call mem_cgroup_calculate_protection(),
  which immediately returns for B without doing anything, as B is the
  target memcg, relying on mem_cgroup_protection() to ignore B's stale
  effective min (still 50M).
- Directly after mem_cgroup_calculate_protection(), we will call
  mem_cgroup_below_min(), which will read the stale effective min for B
  and skip it (instead of ignoring its protection as intended).

In this case, the outcome is really bad because we are not just
considering B's standalone protection (10M); we are reading a much
higher stale protection (50M), which will cause us to not reclaim from
B at all.
This is an artifact of commit 45c7f7e1ef17 ("mm, memcg: decouple
e{low,min} state mutations from protection checks") which made
mem_cgroup_calculate_protection() only change the state without
returning any value. Before that commit, we used to return
MEMCG_PROT_NONE for the target memcg, which would cause us to skip the
mem_cgroup_below_{min/low}() checks. After that commit we do not return
anything and we end up checking the min & low effective protections for
the target memcg, which are stale.

Add mem_cgroup_ignore_protection() that checks if we are reclaiming
from the target memcg, and call it in mem_cgroup_below_{min/low}() to
ignore the stale protection of the target memcg.

Fixes: 45c7f7e1ef17 ("mm, memcg: decouple e{low,min} state mutations from protection checks")
Signed-off-by: Yosry Ahmed
---
 include/linux/memcontrol.h | 33 +++++++++++++++++++++++++++------
 mm/vmscan.c                | 11 ++++++-----
 2 files changed, 33 insertions(+), 11 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index e1644a24009c..22c9c9f9c6b1 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -625,18 +625,32 @@ static inline bool mem_cgroup_supports_protection(struct mem_cgroup *memcg)
 }
 
-static inline bool mem_cgroup_below_low(struct mem_cgroup *memcg)
+static inline bool mem_cgroup_ignore_protection(struct mem_cgroup *target,
+						struct mem_cgroup *memcg)
 {
-	if (!mem_cgroup_supports_protection(memcg))
+	/*
+	 * The target memcg's protection is ignored, see
+	 * mem_cgroup_calculate_protection() and mem_cgroup_protection()
+	 */
+	return target == memcg;
+}
+
+static inline bool mem_cgroup_below_low(struct mem_cgroup *target,
+					struct mem_cgroup *memcg)
+{
+	if (!mem_cgroup_supports_protection(memcg) ||
+	    mem_cgroup_ignore_protection(target, memcg))
 		return false;
 
 	return READ_ONCE(memcg->memory.elow) >=
 		page_counter_read(&memcg->memory);
 }
 
-static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
+static inline bool mem_cgroup_below_min(struct mem_cgroup *target,
+					struct mem_cgroup *memcg)
 {
-	if (!mem_cgroup_supports_protection(memcg))
+	if (!mem_cgroup_supports_protection(memcg) ||
+	    mem_cgroup_ignore_protection(target, memcg))
 		return false;
 
 	return READ_ONCE(memcg->memory.emin) >=
@@ -1209,12 +1223,19 @@ static inline void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 {
 }
 
-static inline bool mem_cgroup_below_low(struct mem_cgroup *memcg)
+static inline bool mem_cgroup_ignore_protection(struct mem_cgroup *target,
+						struct mem_cgroup *memcg)
+{
+	return false;
+}
+static inline bool mem_cgroup_below_low(struct mem_cgroup *target,
+					struct mem_cgroup *memcg)
 {
 	return false;
 }
 
-static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
+static inline bool mem_cgroup_below_min(struct mem_cgroup *target,
+					struct mem_cgroup *memcg)
 {
 	return false;
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 04d8b88e5216..79ef0fe67518 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4486,7 +4486,7 @@ static bool age_lruvec(struct lruvec *lruvec, struct scan_control *sc, unsigned
 
 	mem_cgroup_calculate_protection(NULL, memcg);
 
-	if (mem_cgroup_below_min(memcg))
+	if (mem_cgroup_below_min(NULL, memcg))
 		return false;
 
 	need_aging = should_run_aging(lruvec, max_seq, min_seq, sc, swappiness, &nr_to_scan);
@@ -5047,8 +5047,9 @@ static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *
 	DEFINE_MAX_SEQ(lruvec);
 	DEFINE_MIN_SEQ(lruvec);
 
-	if (mem_cgroup_below_min(memcg) ||
-	    (mem_cgroup_below_low(memcg) && !sc->memcg_low_reclaim))
+	if (mem_cgroup_below_min(sc->target_mem_cgroup, memcg) ||
+	    (mem_cgroup_below_low(sc->target_mem_cgroup, memcg) &&
+	     !sc->memcg_low_reclaim))
 		return 0;
 
 	*need_aging = should_run_aging(lruvec, max_seq, min_seq, sc, can_swap, &nr_to_scan);
@@ -6048,13 +6049,13 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 
 		mem_cgroup_calculate_protection(target_memcg, memcg);
 
-		if (mem_cgroup_below_min(memcg)) {
+		if (mem_cgroup_below_min(target_memcg, memcg)) {
 			/*
 			 * Hard protection.
 			 * If there is no reclaimable memory, OOM.
 			 */
 			continue;
-		} else if (mem_cgroup_below_low(memcg)) {
+		} else if (mem_cgroup_below_low(target_memcg, memcg)) {
 			/*
 			 * Soft protection.
 			 * Respect the protection only as long as