From patchwork Wed Mar 8 09:41:04 2023
X-Patchwork-Submitter: Mike Rapoport <rppt@kernel.org>
X-Patchwork-Id: 13165512
From: Mike Rapoport <rppt@kernel.org>
To: linux-mm@kvack.org
Cc: Andrew Morton, Dave Hansen, Mike Rapoport, Peter Zijlstra,
    Rick Edgecombe, Song Liu, Thomas Gleixner, Vlastimil Babka,
    linux-kernel@vger.kernel.org, x86@kernel.org
Subject: [RFC PATCH 3/5] mm/unmapped_alloc: add shrinker
Date: Wed, 8 Mar 2023 11:41:04 +0200
Message-Id: <20230308094106.227365-4-rppt@kernel.org>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20230308094106.227365-1-rppt@kernel.org>
References: <20230308094106.227365-1-rppt@kernel.org>
MIME-Version: 1.0
From: "Mike Rapoport (IBM)"

Allow shrinking the unmapped page caches under memory pressure.

Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
---
 mm/internal.h       |  2 ++
 mm/page_alloc.c     | 12 ++++++-
 mm/unmapped-alloc.c | 86 ++++++++++++++++++++++++++++++++++++++++++++-
 3 files changed, 98 insertions(+), 2 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 8d84cceab467..aa2934594dcf 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1064,4 +1064,6 @@ static inline struct page *unmapped_pages_alloc(gfp_t gfp, int order)
 static inline void unmapped_pages_free(struct page *page, int order) {}
 #endif
 
+void __free_unmapped_page(struct page *page, unsigned int order);
+
 #endif /* __MM_INTERNAL_H */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 01f18e7529b0..ece213fac27a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -123,6 +123,11 @@ typedef int __bitwise fpi_t;
  */
 #define FPI_SKIP_KASAN_POISON	((__force fpi_t)BIT(2))
 
+/*
+ * Free pages from the unmapped cache
+ */
+#define FPI_UNMAPPED		((__force fpi_t)BIT(3))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_HIGH_FRACTION (8)
@@ -1467,7 +1472,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 					  PAGE_SIZE << order);
 	}
 
-	if (get_pageblock_unmapped(page)) {
+	if (!(fpi_flags & FPI_UNMAPPED) && get_pageblock_unmapped(page)) {
 		unmapped_pages_free(page, order);
 		return false;
 	}
@@ -1636,6 +1641,11 @@ static void free_one_page(struct zone *zone,
 	spin_unlock_irqrestore(&zone->lock, flags);
 }
 
+void __free_unmapped_page(struct page *page, unsigned int order)
+{
+	__free_pages_ok(page, order, FPI_UNMAPPED | FPI_TO_TAIL);
+}
+
 static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 				unsigned long zone, int nid)
 {
diff --git a/mm/unmapped-alloc.c b/mm/unmapped-alloc.c
index f74640e9ce9f..89f54383df92 100644
--- a/mm/unmapped-alloc.c
+++ b/mm/unmapped-alloc.c
@@ -4,6 +4,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -16,6 +17,7 @@ struct unmapped_free_area {
 	spinlock_t lock;
 	unsigned long nr_free;
 	unsigned long nr_cached;
+	unsigned long nr_released;
 };
 
 static struct unmapped_free_area free_area[MAX_ORDER];
@@ -204,6 +206,82 @@ void unmapped_pages_free(struct page *page, int order)
 	__free_one_page(page, order, false);
 }
 
+static unsigned long unmapped_alloc_count_objects(struct shrinker *sh,
+						  struct shrink_control *sc)
+{
+	unsigned long pages_to_free = 0;
+
+	for (int order = 0; order < MAX_ORDER; order++)
+		pages_to_free += (free_area[order].nr_free << order);
+
+	return pages_to_free;
+}
+
+static unsigned long scan_free_area(struct shrink_control *sc, int order)
+{
+	struct unmapped_free_area *area = &free_area[order];
+	unsigned long nr_pages = (1 << order);
+	unsigned long pages_freed = 0;
+	unsigned long flags;
+	struct page *page;
+
+	spin_lock_irqsave(&area->lock, flags);
+	while (pages_freed < sc->nr_to_scan) {
+
+		page = list_first_entry_or_null(&area->free_list, struct page,
+						lru);
+		if (!page)
+			break;
+
+		del_page_from_free_list(page, order);
+		expand(page, order, order);
+
+		area->nr_released++;
+
+		if (order == pageblock_order)
+			clear_pageblock_unmapped(page);
+
+		spin_unlock_irqrestore(&area->lock, flags);
+
+		for (int i = 0; i < nr_pages; i++)
+			set_direct_map_default_noflush(page + i);
+
+		__free_unmapped_page(page, order);
+
+		pages_freed += nr_pages;
+
+		cond_resched();
+		spin_lock_irqsave(&area->lock, flags);
+	}
+
+	spin_unlock_irqrestore(&area->lock, flags);
+
+	return pages_freed;
+}
+
+static unsigned long unmapped_alloc_scan_objects(struct shrinker *shrinker,
+						 struct shrink_control *sc)
+{
+	sc->nr_scanned = 0;
+
+	for (int order = 0; order < MAX_ORDER; order++) {
+		sc->nr_scanned += scan_free_area(sc, order);
+
+		if (sc->nr_scanned >= sc->nr_to_scan)
+			break;
+
+		sc->nr_to_scan -= sc->nr_scanned;
+	}
+
+	return sc->nr_scanned ? sc->nr_scanned : SHRINK_STOP;
+}
+
+static struct shrinker shrinker = {
+	.count_objects = unmapped_alloc_count_objects,
+	.scan_objects = unmapped_alloc_scan_objects,
+	.seeks = DEFAULT_SEEKS,
+};
+
 int unmapped_alloc_init(void)
 {
 	for (int order = 0; order < MAX_ORDER; order++) {
@@ -237,6 +315,11 @@ static int unmapped_alloc_debug_show(struct seq_file *m, void *private)
 		seq_printf(m, "%5lu ", free_area[order].nr_cached);
 	seq_putc(m, '\n');
 
+	seq_printf(m, "%-10s", "Released:");
+	for (order = 0; order < MAX_ORDER; ++order)
+		seq_printf(m, "%5lu ", free_area[order].nr_released);
+	seq_putc(m, '\n');
+
 	return 0;
 }
 DEFINE_SHOW_ATTRIBUTE(unmapped_alloc_debug);
@@ -245,6 +328,7 @@ static int __init unmapped_alloc_init_late(void)
 {
 	debugfs_create_file("unmapped_alloc", 0444, NULL, NULL,
 			    &unmapped_alloc_debug_fops);
-	return 0;
+
+	return register_shrinker(&shrinker, "mm-unmapped-alloc");
 }
 late_initcall(unmapped_alloc_init_late);