From patchwork Thu Aug 1 06:50:35 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: BiscuitOS Broiler
X-Patchwork-Id: 13749895
From: BiscuitOS Broiler
Subject: [PATCH 1/1] mm: introduce MADV_DEMOTE/MADV_PROMOTE
Date: Thu, 1 Aug 2024 14:50:35 +0800
Message-ID: <20240801065035.15377-2-zhang.renze@h3c.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20240801065035.15377-1-zhang.renze@h3c.com>
References: <20240801065035.15377-1-zhang.renze@h3c.com>

In a tiered memory architecture, when a process does not access memory on
the fast nodes for a long time, the kernel demotes that memory to slower
memory through the reclaim mechanism. This frees up fast memory for other
processes. When the process accesses the demoted memory again, the tiered
memory system, following its policies, promotes it back to fast memory.

Since demotion and promotion in a tiered memory system are not
instantaneous but happen gradually, they can severely impact the
performance of programs in high-performance computing scenarios.

This patch introduces new MADV_DEMOTE and MADV_PROMOTE hints for the
madvise() syscall. MADV_DEMOTE marks a range of pages as cold and demotes
them to slow memory immediately; MADV_PROMOTE marks a range of pages as
hot and promotes them to fast memory immediately, allowing applications to
better balance large memory capacity against latency.
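For illustration, a minimal userspace sketch of the intended usage (not
part of the patch): it uses the MADV_DEMOTE/MADV_PROMOTE values from the
uapi change below and defines them locally in case the installed libc
headers do not carry them yet; the 64 MiB anonymous mapping is an
arbitrary example.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_DEMOTE
#define MADV_DEMOTE     26      /* Demote page into slow node (this patch) */
#endif
#ifndef MADV_PROMOTE
#define MADV_PROMOTE    27      /* Promote page into fast node (this patch) */
#endif

int main(void)
{
        size_t len = 64UL << 20;        /* arbitrary 64 MiB anonymous region */
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        memset(buf, 0x5a, len);         /* fault the pages in, on a fast node */

        /* Working set is cold for now: push it to slow memory right away. */
        if (madvise(buf, len, MADV_DEMOTE))
                perror("madvise(MADV_DEMOTE)");

        /* About to reuse the data: pull it back to fast memory. */
        if (madvise(buf, len, MADV_PROMOTE))
                perror("madvise(MADV_PROMOTE)");

        munmap(buf, len);
        return 0;
}

On a kernel without this patch, both madvise() calls simply fail with
EINVAL, since madvise_behavior_valid() rejects the unknown hints.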
Signed-off-by: BiscuitOS Broiler
---
 arch/alpha/include/uapi/asm/mman.h           |   3 +
 arch/mips/include/uapi/asm/mman.h            |   3 +
 arch/parisc/include/uapi/asm/mman.h          |   3 +
 arch/xtensa/include/uapi/asm/mman.h          |   3 +
 include/uapi/asm-generic/mman-common.h       |   3 +
 mm/internal.h                                |   1 +
 mm/madvise.c                                 | 251 +++++++++++++++++++
 mm/vmscan.c                                  |  57 +++++
 tools/include/uapi/asm-generic/mman-common.h |   3 +
 9 files changed, 327 insertions(+)
--
2.34.1

diff --git a/arch/alpha/include/uapi/asm/mman.h b/arch/alpha/include/uapi/asm/mman.h
index 763929e814e9..98e7609d51ab 100644
--- a/arch/alpha/include/uapi/asm/mman.h
+++ b/arch/alpha/include/uapi/asm/mman.h
@@ -78,6 +78,9 @@
 #define MADV_COLLAPSE   25              /* Synchronous hugepage collapse */
 
+#define MADV_DEMOTE     26              /* Demote page into slow node */
+#define MADV_PROMOTE    27              /* Promote page into fast node */
+
 /* compatibility flags */
 #define MAP_FILE        0
diff --git a/arch/mips/include/uapi/asm/mman.h b/arch/mips/include/uapi/asm/mman.h
index 9c48d9a21aa0..aae4cd01c20d 100644
--- a/arch/mips/include/uapi/asm/mman.h
+++ b/arch/mips/include/uapi/asm/mman.h
@@ -105,6 +105,9 @@
 #define MADV_COLLAPSE   25              /* Synchronous hugepage collapse */
 
+#define MADV_DEMOTE     26              /* Demote page into slow node */
+#define MADV_PROMOTE    27              /* Promote page into fast node */
+
 /* compatibility flags */
 #define MAP_FILE        0
diff --git a/arch/parisc/include/uapi/asm/mman.h b/arch/parisc/include/uapi/asm/mman.h
index 68c44f99bc93..8b50ac91d0c9 100644
--- a/arch/parisc/include/uapi/asm/mman.h
+++ b/arch/parisc/include/uapi/asm/mman.h
@@ -72,6 +72,9 @@
 #define MADV_COLLAPSE   25              /* Synchronous hugepage collapse */
 
+#define MADV_DEMOTE     26              /* Demote page into slow node */
+#define MADV_PROMOTE    27              /* Promote page into fast node */
+
 #define MADV_HWPOISON   100             /* poison a page for testing */
 #define MADV_SOFT_OFFLINE 101           /* soft offline page for testing */
diff --git a/arch/xtensa/include/uapi/asm/mman.h b/arch/xtensa/include/uapi/asm/mman.h
index 1ff0c858544f..8f820d4f5901 100644
--- a/arch/xtensa/include/uapi/asm/mman.h
+++ b/arch/xtensa/include/uapi/asm/mman.h
@@ -113,6 +113,9 @@
 #define MADV_COLLAPSE   25              /* Synchronous hugepage collapse */
 
+#define MADV_DEMOTE     26              /* Demote page into slow node */
+#define MADV_PROMOTE    27              /* Promote page into fast node */
+
 /* compatibility flags */
 #define MAP_FILE        0
diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
index 6ce1f1ceb432..52222c2245a8 100644
--- a/include/uapi/asm-generic/mman-common.h
+++ b/include/uapi/asm-generic/mman-common.h
@@ -79,6 +79,9 @@
 #define MADV_COLLAPSE   25              /* Synchronous hugepage collapse */
 
+#define MADV_DEMOTE     26              /* Demote page into slow node */
+#define MADV_PROMOTE    27              /* Promote page into fast node */
+
 /* compatibility flags */
 #define MAP_FILE        0
diff --git a/mm/internal.h b/mm/internal.h
index 7a3bcc6d95e7..105c2621e335 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1096,6 +1096,7 @@ extern unsigned long __must_check vm_mmap_pgoff(struct file *, unsigned long,
 extern void set_pageblock_order(void);
 struct folio *alloc_migrate_folio(struct folio *src, unsigned long private);
 unsigned long reclaim_pages(struct list_head *folio_list);
+unsigned long demotion_pages(struct list_head *folio_list);
 unsigned int reclaim_clean_pages_from_list(struct zone *zone,
                                            struct list_head *folio_list);
 
 /* The ALLOC_WMARK bits are used as an index to zone->watermark */
diff --git a/mm/madvise.c b/mm/madvise.c
index 89089d84f8df..9e41936a2dc5 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -31,6 +31,9 @@
 #include 
 #include 
 #include 
+#include 
+#include 
+#include 
 
 #include 
@@ -56,6 +59,8 @@ static int madvise_need_mmap_write(int behavior)
         case MADV_DONTNEED_LOCKED:
         case MADV_COLD:
         case MADV_PAGEOUT:
+        case MADV_DEMOTE:
+        case MADV_PROMOTE:
         case MADV_FREE:
         case MADV_POPULATE_READ:
         case MADV_POPULATE_WRITE:
@@ -639,6 +644,242 @@ static long madvise_pageout(struct vm_area_struct *vma,
         return 0;
 }
 
+static int madvise_demotion_pte_range(pmd_t *pmd,
+                                unsigned long addr, unsigned long end,
+                                struct mm_walk *walk)
+{
+        struct mmu_gather *tlb = walk->private;
+        struct vm_area_struct *vma = walk->vma;
+        struct mm_struct *mm = tlb->mm;
+        pte_t *start_pte, *pte, ptent;
+        struct folio *folio = NULL;
+        LIST_HEAD(folio_list);
+        spinlock_t *ptl;
+        int nid;
+
+        if (fatal_signal_pending(current))
+                return -EINTR;
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+        if (pmd_trans_huge(*pmd))
+                return 0;
+#endif
+        tlb_change_page_size(tlb, PAGE_SIZE);
+        start_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
+        if (!start_pte)
+                return 0;
+        flush_tlb_batched_pending(mm);
+        arch_enter_lazy_mmu_mode();
+        for (; addr < end; pte++, addr += PAGE_SIZE) {
+                ptent = ptep_get(pte);
+
+                if (pte_none(ptent))
+                        continue;
+
+                if (!pte_present(ptent))
+                        continue;
+
+                folio = vm_normal_folio(vma, addr, ptent);
+                if (!folio || folio_is_zone_device(folio))
+                        continue;
+
+                if (folio_test_large(folio))
+                        continue;
+
+                if (!folio_test_anon(folio))
+                        continue;
+
+                nid = folio_nid(folio);
+                if (!node_is_toptier(nid))
+                        continue;
+
+                /* no tiered memory node */
+                if (next_demotion_node(nid) == NUMA_NO_NODE)
+                        continue;
+
+                /*
+                 * Do not interfere with other mappings of this folio and
+                 * non-LRU folio. If we have a large folio at this point, we
+                 * know it is fully mapped so if its mapcount is the same as its
+                 * number of pages, it must be exclusive.
+                 */
+                if (!folio_test_lru(folio) ||
+                    folio_mapcount(folio) != folio_nr_pages(folio))
+                        continue;
+
+                folio_clear_referenced(folio);
+                folio_test_clear_young(folio);
+                if (folio_test_active(folio))
+                        folio_set_workingset(folio);
+                if (folio_isolate_lru(folio)) {
+                        if (folio_test_unevictable(folio))
+                                folio_putback_lru(folio);
+                        else
+                                list_add(&folio->lru, &folio_list);
+                }
+        }
+
+        if (start_pte) {
+                arch_leave_lazy_mmu_mode();
+                pte_unmap_unlock(start_pte, ptl);
+        }
+
+        demotion_pages(&folio_list);
+        cond_resched();
+
+        return 0;
+}
+
+static const struct mm_walk_ops demotion_walk_ops = {
+        .pmd_entry = madvise_demotion_pte_range,
+        .walk_lock = PGWALK_RDLOCK,
+};
+
+static void madvise_demotion_page_range(struct mmu_gather *tlb,
+                        struct vm_area_struct *vma,
+                        unsigned long addr, unsigned long end)
+{
+        tlb_start_vma(tlb, vma);
+        walk_page_range(vma->vm_mm, addr, end, &demotion_walk_ops, tlb);
+        tlb_end_vma(tlb, vma);
+}
+
+static long madvise_demotion(struct vm_area_struct *vma,
+                        struct vm_area_struct **prev,
+                        unsigned long start_addr, unsigned long end_addr)
+{
+        struct mm_struct *mm = vma->vm_mm;
+        struct mmu_gather tlb;
+
+        *prev = vma;
+        if (!can_madv_lru_vma(vma))
+                return -EINVAL;
+
+        if (!numa_demotion_enabled && !vma_is_anonymous(vma) &&
+            (vma->vm_flags & VM_MAYSHARE))
+                return 0;
+
+        lru_add_drain();
+        tlb_gather_mmu(&tlb, mm);
+        madvise_demotion_page_range(&tlb, vma, start_addr, end_addr);
+        tlb_finish_mmu(&tlb);
+
+        return 0;
+}
+
+static int madvise_promotion_pte_range(pmd_t *pmd,
+                                unsigned long addr, unsigned long end,
+                                struct mm_walk *walk)
+{
+        struct mmu_gather *tlb = walk->private;
+        struct vm_area_struct *vma = walk->vma;
+        struct mm_struct *mm = tlb->mm;
+        struct folio *folio = NULL;
+        LIST_HEAD(folio_list);
+        int nid, target_nid;
+        pte_t *pte, ptent;
+        spinlock_t *ptl;
+
+        if (fatal_signal_pending(current))
+                return -EINTR;
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+        if (pmd_trans_huge(*pmd))
+                return 0;
+#endif
+        tlb_change_page_size(tlb, PAGE_SIZE);
+        pte = pte_offset_map_nolock(vma->vm_mm, pmd, addr, &ptl);
+        if (!pte)
+                return 0;
+        flush_tlb_batched_pending(mm);
+        arch_enter_lazy_mmu_mode();
+        for (; addr < end; pte++, addr += PAGE_SIZE) {
+                ptent = ptep_get(pte);
+
+                if (pte_none(ptent))
+                        continue;
+
+                if (!pte_present(ptent))
+                        continue;
+
+                folio = vm_normal_folio(vma, addr, ptent);
+                if (!folio || folio_is_zone_device(folio))
+                        continue;
+
+                if (folio_test_large(folio))
+                        continue;
+
+                if (!folio_test_anon(folio))
+                        continue;
+
+                /* skip page on fast node */
+                nid = folio_nid(folio);
+                if (node_is_toptier(nid))
+                        continue;
+
+                if (!folio_test_lru(folio) ||
+                    folio_mapcount(folio) != folio_nr_pages(folio))
+                        continue;
+
+                /* force update folio last access time */
+                folio_xchg_access_time(folio, jiffies_to_msecs(jiffies));
+
+                target_nid = numa_node_id();
+                if (!should_numa_migrate_memory(current, folio, nid, target_nid))
+                        continue;
+
+                /* prepare to promote */
+                if (!folio_isolate_lru(folio))
+                        continue;
+
+                /* promote page directly */
+                migrate_misplaced_folio(folio, vma, target_nid);
+                tlb_remove_tlb_entry(tlb, pte, addr);
+        }
+
+        arch_leave_lazy_mmu_mode();
+        cond_resched();
+
+        return 0;
+}
+
+static const struct mm_walk_ops promotion_walk_ops = {
+        .pmd_entry = madvise_promotion_pte_range,
+        .walk_lock = PGWALK_RDLOCK,
+};
+
+static void madvise_promotion_page_range(struct mmu_gather *tlb,
+                        struct vm_area_struct *vma,
+                        unsigned long addr, unsigned long end)
+{
+        tlb_start_vma(tlb, vma);
+        walk_page_range(vma->vm_mm, addr, end, &promotion_walk_ops, tlb);
+        tlb_end_vma(tlb, vma);
+}
+
+static long madvise_promotion(struct vm_area_struct *vma,
+                        struct vm_area_struct **prev,
+                        unsigned long start_addr, unsigned long end_addr)
+{
+        struct mm_struct *mm = vma->vm_mm;
+        struct mmu_gather tlb;
+
+        *prev = vma;
+        if (!can_madv_lru_vma(vma))
+                return -EINVAL;
+
+        if (!numa_demotion_enabled && !vma_is_anonymous(vma) &&
+            (vma->vm_flags & VM_MAYSHARE))
+                return 0;
+
+        lru_add_drain();
+        tlb_gather_mmu(&tlb, mm);
+        madvise_promotion_page_range(&tlb, vma, start_addr, end_addr);
+        tlb_finish_mmu(&tlb);
+
+        return 0;
+}
+
 static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
                                 unsigned long end, struct mm_walk *walk)
@@ -1040,6 +1281,10 @@ static int madvise_vma_behavior(struct vm_area_struct *vma,
                 return madvise_cold(vma, prev, start, end);
         case MADV_PAGEOUT:
                 return madvise_pageout(vma, prev, start, end);
+        case MADV_DEMOTE:
+                return madvise_demotion(vma, prev, start, end);
+        case MADV_PROMOTE:
+                return madvise_promotion(vma, prev, start, end);
         case MADV_FREE:
         case MADV_DONTNEED:
         case MADV_DONTNEED_LOCKED:
@@ -1179,6 +1424,8 @@ madvise_behavior_valid(int behavior)
         case MADV_FREE:
         case MADV_COLD:
         case MADV_PAGEOUT:
+        case MADV_DEMOTE:
+        case MADV_PROMOTE:
         case MADV_POPULATE_READ:
         case MADV_POPULATE_WRITE:
 #ifdef CONFIG_KSM
@@ -1210,6 +1457,8 @@ static bool process_madvise_behavior_valid(int behavior)
         switch (behavior) {
         case MADV_COLD:
         case MADV_PAGEOUT:
+        case MADV_DEMOTE:
+        case MADV_PROMOTE:
         case MADV_WILLNEED:
         case MADV_COLLAPSE:
                 return true;
@@ -1391,6 +1640,8 @@ int madvise_set_anon_name(struct mm_struct *mm, unsigned long start,
  *              triggering read faults if required
  *  MADV_POPULATE_WRITE - populate (prefault) page tables writable by
  *              triggering write faults if required
+ *  MADV_DEMOTE - the application forces pages into a slow node.
+ *  MADV_PROMOTE - the application forces pages into a fast node.
 *
 * return values:
 *  zero    - success
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c89d0551655e..88d7a1dd05a0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2185,6 +2185,63 @@ unsigned long reclaim_pages(struct list_head *folio_list)
         return nr_reclaimed;
 }
 
+static unsigned int demotion_folio_list(struct list_head *folio_list,
+                                struct pglist_data *pgdat)
+{
+        struct reclaim_stat dummy_stat;
+        unsigned int nr_demoted;
+        struct folio *folio;
+        struct scan_control sc = {
+                .gfp_mask = GFP_KERNEL,
+                .may_writepage = 1,
+                .may_unmap = 1,
+                .may_swap = 1,
+                .no_demotion = 0,
+        };
+
+        nr_demoted = shrink_folio_list(folio_list, pgdat, &sc, &dummy_stat, true);
+        while (!list_empty(folio_list)) {
+                folio = lru_to_folio(folio_list);
+                list_del(&folio->lru);
+                folio_putback_lru(folio);
+        }
+
+        return nr_demoted;
+}
+
+unsigned long demotion_pages(struct list_head *folio_list)
+{
+        unsigned int nr_demoted = 0;
+        LIST_HEAD(node_folio_list);
+        unsigned int noreclaim_flag;
+        int nid;
+
+        if (list_empty(folio_list))
+                return nr_demoted;
+
+        noreclaim_flag = memalloc_noreclaim_save();
+
+        nid = folio_nid(lru_to_folio(folio_list));
+        do {
+                struct folio *folio = lru_to_folio(folio_list);
+
+                if (nid == folio_nid(folio)) {
+                        folio_clear_active(folio);
+                        list_move(&folio->lru, &node_folio_list);
+                        continue;
+                }
+
+                nr_demoted += demotion_folio_list(&node_folio_list, NODE_DATA(nid));
+                nid = folio_nid(lru_to_folio(folio_list));
+        } while (!list_empty(folio_list));
+
+        nr_demoted += demotion_folio_list(&node_folio_list, NODE_DATA(nid));
+
+        memalloc_noreclaim_restore(noreclaim_flag);
+
+        return nr_demoted;
+}
+
 static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
                                  struct lruvec *lruvec, struct scan_control *sc)
 {
diff --git a/tools/include/uapi/asm-generic/mman-common.h b/tools/include/uapi/asm-generic/mman-common.h
index 6ce1f1ceb432..52222c2245a8 100644
--- a/tools/include/uapi/asm-generic/mman-common.h
+++ b/tools/include/uapi/asm-generic/mman-common.h
@@ -79,6 +79,9 @@
 #define MADV_COLLAPSE   25              /* Synchronous hugepage collapse */
 
+#define MADV_DEMOTE     26              /* Demote page into slow node */
+#define MADV_PROMOTE    27              /* Promote page into fast node */
+
 /* compatibility flags */
 #define MAP_FILE        0
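Since the patch also accepts the new hints in
process_madvise_behavior_valid(), a management process could in principle
apply them to another task's memory. Below is a rough, untested sketch,
assuming a kernel carrying this patch and kernel headers that define
__NR_process_madvise; the target PID and address range are placeholders.

#include <stdio.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef MADV_DEMOTE
#define MADV_DEMOTE     26      /* from this patch's uapi headers */
#endif

/* Demote one address range in the process referred to by pidfd. */
static long demote_remote_range(int pidfd, void *addr, size_t len)
{
        struct iovec iov = { .iov_base = addr, .iov_len = len };

        /* process_madvise(pidfd, iovec, vlen, advice, flags) */
        return syscall(SYS_process_madvise, pidfd, &iov, 1, MADV_DEMOTE, 0);
}

int main(void)
{
        pid_t target = 1234;                    /* placeholder PID */
        void *addr = (void *)0x7f0000000000UL;  /* placeholder range */
        int pidfd = syscall(SYS_pidfd_open, target, 0);

        if (pidfd < 0 || demote_remote_range(pidfd, addr, 1UL << 20) < 0)
                perror("process_madvise(MADV_DEMOTE)");
        return 0;
}

As with the existing MADV_COLD/MADV_PAGEOUT hints, the caller needs
CAP_SYS_NICE and ptrace read access to the target for process_madvise()
to succeed.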