From patchwork Sat Aug 26 03:44:01 2023
X-Patchwork-Submitter: Liu Shixin
X-Patchwork-Id: 13366481
From: Liu Shixin <liushixin2@huawei.com>
To: Yosry Ahmed, Huang Ying, Michal Hocko, Johannes Weiner, Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton
Cc: Liu Shixin
Subject: [PATCH v3] mm: vmscan: try to reclaim swapcache pages if no swap space
Date: Sat, 26 Aug 2023 11:44:01 +0800
Message-ID: <20230826034401.640861-1-liushixin2@huawei.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
When the space of swap devices is exhausted, only file pages can be reclaimed. But there may still be swapcache pages on the anon LRU list, which can lead to a premature out-of-memory. Fix this by checking the number of swapcache pages in can_reclaim_anon_pages(). Add a new bit, swapcache_only, to struct scan_control to skip isolating anon pages that are not in the swap cache when only the swap cache can be reclaimed.
Signed-off-by: Liu Shixin
Tested-by: Yosry Ahmed
---
 include/linux/swap.h |  6 ++++++
 mm/memcontrol.c      |  8 ++++++++
 mm/vmscan.c          | 29 +++++++++++++++++++++++++++--
 3 files changed, 41 insertions(+), 2 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 456546443f1f..0318e918bfa4 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -669,6 +669,7 @@ static inline void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_p
 }
 
 extern long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg);
+extern long mem_cgroup_get_nr_swapcache_pages(struct mem_cgroup *memcg);
 extern bool mem_cgroup_swap_full(struct folio *folio);
 #else
 static inline void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
@@ -691,6 +692,11 @@ static inline long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg)
 	return get_nr_swap_pages();
 }
 
+static inline long mem_cgroup_get_nr_swapcache_pages(struct mem_cgroup *memcg)
+{
+	return total_swapcache_pages();
+}
+
 static inline bool mem_cgroup_swap_full(struct folio *folio)
 {
 	return vm_swap_full();
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e8ca4bdcb03c..c465829db92b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7567,6 +7567,14 @@ long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg)
 	return nr_swap_pages;
 }
 
+long mem_cgroup_get_nr_swapcache_pages(struct mem_cgroup *memcg)
+{
+	if (mem_cgroup_disabled())
+		return total_swapcache_pages();
+
+	return memcg_page_state(memcg, NR_SWAPCACHE);
+}
+
 bool mem_cgroup_swap_full(struct folio *folio)
 {
 	struct mem_cgroup *memcg;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7c33c5b653ef..5cb4adf6642b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -137,6 +137,9 @@ struct scan_control {
 	/* Always discard instead of demoting to lower tier memory */
 	unsigned int no_demotion:1;
 
+	/* Swap space is exhausted, only reclaim swapcache for anon LRU */
+	unsigned int swapcache_only:1;
+
 	/* Allocation order */
 	s8 order;
 
@@ -613,10 +616,20 @@ static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
 		 */
 		if (get_nr_swap_pages() > 0)
 			return true;
+		/* Is there any swapcache pages to reclaim? */
+		if (total_swapcache_pages() > 0) {
+			sc->swapcache_only = 1;
+			return true;
+		}
 	} else {
 		/* Is the memcg below its swap limit? */
 		if (mem_cgroup_get_nr_swap_pages(memcg) > 0)
 			return true;
+		/* Is there any swapcache pages in memcg to reclaim? */
+		if (mem_cgroup_get_nr_swapcache_pages(memcg) > 0) {
+			sc->swapcache_only = 1;
+			return true;
+		}
 	}
 
 	/*
@@ -2280,6 +2293,19 @@ static bool skip_cma(struct folio *folio, struct scan_control *sc)
 }
 #endif
 
+static bool skip_isolate(struct folio *folio, struct scan_control *sc,
+			 enum lru_list lru)
+{
+	if (folio_zonenum(folio) > sc->reclaim_idx)
+		return true;
+	if (skip_cma(folio, sc))
+		return true;
+	if (unlikely(sc->swapcache_only && !is_file_lru(lru) &&
+		     !folio_test_swapcache(folio)))
+		return true;
+	return false;
+}
+
 /*
  * Isolating page from the lruvec to fill in @dst list by nr_to_scan times.
  *
@@ -2326,8 +2352,7 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 		nr_pages = folio_nr_pages(folio);
 		total_scan += nr_pages;
 
-		if (folio_zonenum(folio) > sc->reclaim_idx ||
-		    skip_cma(folio, sc)) {
+		if (skip_isolate(folio, sc, lru)) {
 			nr_skipped[folio_zonenum(folio)] += nr_pages;
 			move_to = &folios_skipped;
 			goto move;