From patchwork Tue Mar 15 16:37:05 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: haoxin
X-Patchwork-Id: 12781644
From: Xin Hao
To: sj@kernel.org
Cc: xhao@linux.alibaba.com, rongwei.wang@linux.alibaba.com, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH V1 1/3] mm/damon: rename damon_evenly_split_region()
Date: Wed, 16 Mar 2022 00:37:05 +0800
Message-Id: <537ed6bc00ea35dbd73270477d77707891e97b0c.1647378112.git.xhao@linux.alibaba.com>
X-Mailer: git-send-email 2.31.0
In-Reply-To:
References:

This patch renames damon_va_evenly_split_region() to damon_evenly_split_region() so that it can also be called for the physical address space, and moves it to the "ops-common.c" file.
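The renamed helper size-evenly splits a monitoring region: each piece is sized to the region length divided by nr_pieces, aligned down to DAMON_MIN_REGION, and the last piece is stretched to absorb any rounding remainder. Below is a minimal user-space sketch of that arithmetic only, not the kernel code itself; the hard-coded DAMON_MIN_REGION value, the evenly_split() name, and the printf-based "regions" are assumptions made purely to keep the example self-contained and runnable.

#include <stdio.h>

/* Assumption for illustration: DAMON's real minimum region size is tied to
 * PAGE_SIZE; 4096 is hard-coded here only to keep the sketch self-contained. */
#define DAMON_MIN_REGION 4096UL
#define ALIGN_DOWN(x, a) ((x) / (a) * (a))

/*
 * Mirror of the splitting arithmetic: cut [start, end) into pieces of
 * sz_piece bytes and let the last piece cover any rounding remainder.
 */
static int evenly_split(unsigned long start, unsigned long end,
			unsigned int nr_pieces)
{
	unsigned long sz_piece, s, prev_end = 0;
	int n = 0;

	if (!nr_pieces || end <= start)
		return -1;

	sz_piece = ALIGN_DOWN((end - start) / nr_pieces, DAMON_MIN_REGION);
	if (!sz_piece)
		return -1;	/* region too small to split this finely */

	for (s = start; s + sz_piece <= end; s += sz_piece) {
		prev_end = s + sz_piece;
		printf("piece %d: [%#lx, %#lx)\n", n++, s, prev_end);
	}
	/* complement the last piece for possible rounding error */
	if (prev_end && prev_end < end)
		printf("(last piece is extended to %#lx)\n", end);

	return 0;
}

int main(void)
{
	/* e.g. a 10-page region split into 3 pieces */
	return evenly_split(0x100000, 0x100000 + 10 * DAMON_MIN_REGION, 3);
}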
Signed-off-by: Xin Hao --- mm/damon/ops-common.c | 39 +++++++++++++++++++++++++++++++++++++++ mm/damon/ops-common.h | 3 +++ mm/damon/vaddr-test.h | 6 +++--- mm/damon/vaddr.c | 41 +---------------------------------------- 4 files changed, 46 insertions(+), 43 deletions(-) diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c index e346cc10d143..fd5e98005358 100644 --- a/mm/damon/ops-common.c +++ b/mm/damon/ops-common.c @@ -131,3 +131,42 @@ int damon_pageout_score(struct damon_ctx *c, struct damon_region *r, /* Return coldness of the region */ return DAMOS_MAX_SCORE - hotness; } + +/* + * Size-evenly split a region into 'nr_pieces' small regions + * + * Returns 0 on success, or negative error code otherwise. + */ +int damon_evenly_split_region(struct damon_target *t, + struct damon_region *r, unsigned int nr_pieces) +{ + unsigned long sz_orig, sz_piece, orig_end; + struct damon_region *n = NULL, *next; + unsigned long start; + + if (!r || !nr_pieces) + return -EINVAL; + + orig_end = r->ar.end; + sz_orig = r->ar.end - r->ar.start; + sz_piece = ALIGN_DOWN(sz_orig / nr_pieces, DAMON_MIN_REGION); + + if (!sz_piece) + return -EINVAL; + + r->ar.end = r->ar.start + sz_piece; + next = damon_next_region(r); + for (start = r->ar.end; start + sz_piece <= orig_end; + start += sz_piece) { + n = damon_new_region(start, start + sz_piece); + if (!n) + return -ENOMEM; + damon_insert_region(n, r, next, t); + r = n; + } + /* complement last region for possible rounding error */ + if (n) + n->ar.end = orig_end; + + return 0; +} diff --git a/mm/damon/ops-common.h b/mm/damon/ops-common.h index e790cb5f8fe0..fd441016a2ae 100644 --- a/mm/damon/ops-common.h +++ b/mm/damon/ops-common.h @@ -14,3 +14,6 @@ void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr); int damon_pageout_score(struct damon_ctx *c, struct damon_region *r, struct damos *s); + +int damon_evenly_split_region(struct damon_target *t, + struct damon_region *r, unsigned int nr_pieces); diff --git a/mm/damon/vaddr-test.h b/mm/damon/vaddr-test.h index 1a55bb6c36c3..161906ab66a7 100644 --- a/mm/damon/vaddr-test.h +++ b/mm/damon/vaddr-test.h @@ -256,7 +256,7 @@ static void damon_test_split_evenly_fail(struct kunit *test, damon_add_region(r, t); KUNIT_EXPECT_EQ(test, - damon_va_evenly_split_region(t, r, nr_pieces), -EINVAL); + damon_evenly_split_region(t, r, nr_pieces), -EINVAL); KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 1u); damon_for_each_region(r, t) { @@ -277,7 +277,7 @@ static void damon_test_split_evenly_succ(struct kunit *test, damon_add_region(r, t); KUNIT_EXPECT_EQ(test, - damon_va_evenly_split_region(t, r, nr_pieces), 0); + damon_evenly_split_region(t, r, nr_pieces), 0); KUNIT_EXPECT_EQ(test, damon_nr_regions(t), nr_pieces); damon_for_each_region(r, t) { @@ -294,7 +294,7 @@ static void damon_test_split_evenly_succ(struct kunit *test, static void damon_test_split_evenly(struct kunit *test) { - KUNIT_EXPECT_EQ(test, damon_va_evenly_split_region(NULL, NULL, 5), + KUNIT_EXPECT_EQ(test, damon_evenly_split_region(NULL, NULL, 5), -EINVAL); damon_test_split_evenly_fail(test, 0, 100, 0); diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c index b2ec0aa1ff45..0870e178b1b8 100644 --- a/mm/damon/vaddr.c +++ b/mm/damon/vaddr.c @@ -56,45 +56,6 @@ static struct mm_struct *damon_get_mm(struct damon_target *t) * Functions for the initial monitoring target regions construction */ -/* - * Size-evenly split a region into 'nr_pieces' small regions - * - * Returns 0 on success, or negative error code otherwise. 
- */ -static int damon_va_evenly_split_region(struct damon_target *t, - struct damon_region *r, unsigned int nr_pieces) -{ - unsigned long sz_orig, sz_piece, orig_end; - struct damon_region *n = NULL, *next; - unsigned long start; - - if (!r || !nr_pieces) - return -EINVAL; - - orig_end = r->ar.end; - sz_orig = r->ar.end - r->ar.start; - sz_piece = ALIGN_DOWN(sz_orig / nr_pieces, DAMON_MIN_REGION); - - if (!sz_piece) - return -EINVAL; - - r->ar.end = r->ar.start + sz_piece; - next = damon_next_region(r); - for (start = r->ar.end; start + sz_piece <= orig_end; - start += sz_piece) { - n = damon_new_region(start, start + sz_piece); - if (!n) - return -ENOMEM; - damon_insert_region(n, r, next, t); - r = n; - } - /* complement last region for possible rounding error */ - if (n) - n->ar.end = orig_end; - - return 0; -} - static unsigned long sz_range(struct damon_addr_range *r) { return r->end - r->start; @@ -265,7 +226,7 @@ static void __damon_va_init_regions(struct damon_ctx *ctx, damon_add_region(r, t); nr_pieces = (regions[i].end - regions[i].start) / sz; - damon_va_evenly_split_region(t, r, nr_pieces); + damon_evenly_split_region(t, r, nr_pieces); } } From patchwork Tue Mar 15 16:37:06 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: haoxin X-Patchwork-Id: 12781646 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2D051C433FE for ; Tue, 15 Mar 2022 16:37:22 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C4E418D0001; Tue, 15 Mar 2022 12:37:18 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id AA0018D0007; Tue, 15 Mar 2022 12:37:18 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 742238D0001; Tue, 15 Mar 2022 12:37:18 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.27]) by kanga.kvack.org (Postfix) with ESMTP id 5C90B8D0005 for ; Tue, 15 Mar 2022 12:37:18 -0400 (EDT) Received: from smtpin07.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 1DE1661422 for ; Tue, 15 Mar 2022 16:37:18 +0000 (UTC) X-FDA: 79247175756.07.C19ADF6 Received: from out30-132.freemail.mail.aliyun.com (out30-132.freemail.mail.aliyun.com [115.124.30.132]) by imf29.hostedemail.com (Postfix) with ESMTP id D5C2E120007 for ; Tue, 15 Mar 2022 16:37:16 +0000 (UTC) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R101e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=e01e04357;MF=xhao@linux.alibaba.com;NM=1;PH=DS;RN=6;SR=0;TI=SMTPD_---0V7IiZER_1647362232; Received: from localhost.localdomain(mailfrom:xhao@linux.alibaba.com fp:SMTPD_---0V7IiZER_1647362232) by smtp.aliyun-inc.com(127.0.0.1); Wed, 16 Mar 2022 00:37:12 +0800 From: Xin Hao To: sj@kernel.org Cc: xhao@linux.alibaba.com, rongwei.wang@linux.alibaba.com, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [RFC PATCH V1 2/3] mm/damon/paddr: Move "paddr" relative func to ops-common.c file Date: Wed, 16 Mar 2022 00:37:06 +0800 Message-Id: <3b0c406efd961762e899e26978c010ed7746817b.1647378112.git.xhao@linux.alibaba.com> X-Mailer: git-send-email 2.31.0 In-Reply-To: References: MIME-Version: 1.0 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: D5C2E120007 X-Stat-Signature: 
f5kfppsxpsohtketz3fa6xn5gbmspi19 Authentication-Results: imf29.hostedemail.com; dkim=none; spf=pass (imf29.hostedemail.com: domain of xhao@linux.alibaba.com designates 115.124.30.132 as permitted sender) smtp.mailfrom=xhao@linux.alibaba.com; dmarc=pass (policy=none) header.from=alibaba.com X-Rspam-User: X-HE-Tag: 1647362236-39231 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: In the next patch, I will introduce the CMA monitoring support, because CMA is also based on physical addresses, so there are many functions can be shared with "paddr". Signed-off-by: Xin Hao --- mm/damon/ops-common.c | 247 ++++++++++++++++++++++++++++++++++++++++++ mm/damon/ops-common.h | 15 +++ mm/damon/paddr.c | 246 ----------------------------------------- 3 files changed, 262 insertions(+), 246 deletions(-) diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c index fd5e98005358..0e895c0034b1 100644 --- a/mm/damon/ops-common.c +++ b/mm/damon/ops-common.c @@ -9,8 +9,11 @@ #include #include #include +#include +#include #include "ops-common.h" +#include "../internal.h" /* * Get an online page for a pfn if it's in the LRU list. Otherwise, returns @@ -170,3 +173,247 @@ int damon_evenly_split_region(struct damon_target *t, return 0; } + +#ifdef CONFIG_DAMON_PADDR + +static bool __damon_pa_mkold(struct page *page, struct vm_area_struct *vma, + unsigned long addr, void *arg) +{ + struct page_vma_mapped_walk pvmw = { + .page = page, + .vma = vma, + .address = addr, + }; + + while (page_vma_mapped_walk(&pvmw)) { + addr = pvmw.address; + if (pvmw.pte) + damon_ptep_mkold(pvmw.pte, vma->vm_mm, addr); + else + damon_pmdp_mkold(pvmw.pmd, vma->vm_mm, addr); + } + return true; +} + +void damon_pa_mkold(unsigned long paddr) +{ + struct page *page = damon_get_page(PHYS_PFN(paddr)); + struct rmap_walk_control rwc = { + .rmap_one = __damon_pa_mkold, + .anon_lock = page_lock_anon_vma_read, + }; + bool need_lock; + + if (!page) + return; + + if (!page_mapped(page) || !page_rmapping(page)) { + set_page_idle(page); + goto out; + } + + need_lock = !PageAnon(page) || PageKsm(page); + if (need_lock && !trylock_page(page)) + goto out; + + rmap_walk(page, &rwc); + + if (need_lock) + unlock_page(page); + +out: + put_page(page); +} + +static void __damon_pa_prepare_access_check(struct damon_ctx *ctx, + struct damon_region *r) +{ + r->sampling_addr = damon_rand(r->ar.start, r->ar.end); + + damon_pa_mkold(r->sampling_addr); +} + +void damon_pa_prepare_access_checks(struct damon_ctx *ctx) +{ + struct damon_target *t; + struct damon_region *r; + + damon_for_each_target(t, ctx) { + damon_for_each_region(r, t) + __damon_pa_prepare_access_check(ctx, r); + } +} + +struct damon_pa_access_chk_result { + unsigned long page_sz; + bool accessed; +}; + +static bool __damon_pa_young(struct page *page, struct vm_area_struct *vma, + unsigned long addr, void *arg) +{ + struct damon_pa_access_chk_result *result = arg; + struct page_vma_mapped_walk pvmw = { + .page = page, + .vma = vma, + .address = addr, + }; + + result->accessed = false; + result->page_sz = PAGE_SIZE; + while (page_vma_mapped_walk(&pvmw)) { + addr = pvmw.address; + if (pvmw.pte) { + result->accessed = pte_young(*pvmw.pte) || + !page_is_idle(page) || + mmu_notifier_test_young(vma->vm_mm, addr); + } else { +#ifdef CONFIG_TRANSPARENT_HUGEPAGE + result->accessed = pmd_young(*pvmw.pmd) || + !page_is_idle(page) || + mmu_notifier_test_young(vma->vm_mm, addr); + result->page_sz = 
((1UL) << HPAGE_PMD_SHIFT); +#else + WARN_ON_ONCE(1); +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ + } + if (result->accessed) { + page_vma_mapped_walk_done(&pvmw); + break; + } + } + + /* If accessed, stop walking */ + return !result->accessed; +} + +bool damon_pa_young(unsigned long paddr, unsigned long *page_sz) +{ + struct page *page = damon_get_page(PHYS_PFN(paddr)); + struct damon_pa_access_chk_result result = { + .page_sz = PAGE_SIZE, + .accessed = false, + }; + struct rmap_walk_control rwc = { + .arg = &result, + .rmap_one = __damon_pa_young, + .anon_lock = page_lock_anon_vma_read, + }; + bool need_lock; + + if (!page) + return false; + + if (!page_mapped(page) || !page_rmapping(page)) { + if (page_is_idle(page)) + result.accessed = false; + else + result.accessed = true; + put_page(page); + goto out; + } + + need_lock = !PageAnon(page) || PageKsm(page); + if (need_lock && !trylock_page(page)) { + put_page(page); + return NULL; + } + + rmap_walk(page, &rwc); + + if (need_lock) + unlock_page(page); + put_page(page); + +out: + *page_sz = result.page_sz; + return result.accessed; +} + +static void __damon_pa_check_access(struct damon_ctx *ctx, + struct damon_region *r) +{ + static unsigned long last_addr; + static unsigned long last_page_sz = PAGE_SIZE; + static bool last_accessed; + + /* If the region is in the last checked page, reuse the result */ + if (ALIGN_DOWN(last_addr, last_page_sz) == + ALIGN_DOWN(r->sampling_addr, last_page_sz)) { + if (last_accessed) + r->nr_accesses++; + return; + } + + last_accessed = damon_pa_young(r->sampling_addr, &last_page_sz); + if (last_accessed) + r->nr_accesses++; + + last_addr = r->sampling_addr; +} + +unsigned int damon_pa_check_accesses(struct damon_ctx *ctx) +{ + struct damon_target *t; + struct damon_region *r; + unsigned int max_nr_accesses = 0; + + damon_for_each_target(t, ctx) { + damon_for_each_region(r, t) { + __damon_pa_check_access(ctx, r); + max_nr_accesses = max(r->nr_accesses, max_nr_accesses); + } + } + + return max_nr_accesses; +} + +unsigned long damon_pa_apply_scheme(struct damon_ctx *ctx, + struct damon_target *t, struct damon_region *r, + struct damos *scheme) +{ + unsigned long addr, applied; + LIST_HEAD(page_list); + + if (scheme->action != DAMOS_PAGEOUT) + return 0; + + for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) { + struct page *page = damon_get_page(PHYS_PFN(addr)); + + if (!page) + continue; + + ClearPageReferenced(page); + test_and_clear_page_young(page); + if (isolate_lru_page(page)) { + put_page(page); + continue; + } + if (PageUnevictable(page)) { + putback_lru_page(page); + } else { + list_add(&page->lru, &page_list); + put_page(page); + } + } + applied = reclaim_pages(&page_list); + cond_resched(); + return applied * PAGE_SIZE; +} + +int damon_pa_scheme_score(struct damon_ctx *context, + struct damon_target *t, struct damon_region *r, + struct damos *scheme) +{ + switch (scheme->action) { + case DAMOS_PAGEOUT: + return damon_pageout_score(context, r, scheme); + default: + break; + } + + return DAMOS_MAX_SCORE; +} + +#endif /* CONFIG_DAMON_PADDR */ diff --git a/mm/damon/ops-common.h b/mm/damon/ops-common.h index fd441016a2ae..bb62fd300ea9 100644 --- a/mm/damon/ops-common.h +++ b/mm/damon/ops-common.h @@ -17,3 +17,18 @@ int damon_pageout_score(struct damon_ctx *c, struct damon_region *r, int damon_evenly_split_region(struct damon_target *t, struct damon_region *r, unsigned int nr_pieces); + +#ifdef CONFIG_DAMON_PADDR + +void damon_pa_mkold(unsigned long paddr); +void 
damon_pa_prepare_access_checks(struct damon_ctx *ctx); +bool damon_pa_young(unsigned long paddr, unsigned long *page_sz); +unsigned int damon_pa_check_accesses(struct damon_ctx *ctx); +unsigned long damon_pa_apply_scheme(struct damon_ctx *ctx, + struct damon_target *t, struct damon_region *r, + struct damos *scheme); +int damon_pa_scheme_score(struct damon_ctx *context, + struct damon_target *t, struct damon_region *r, + struct damos *scheme); + +#endif diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c index 7c263797a9a9..c0a87c0bde9b 100644 --- a/mm/damon/paddr.c +++ b/mm/damon/paddr.c @@ -7,255 +7,9 @@ #define pr_fmt(fmt) "damon-pa: " fmt -#include -#include -#include -#include -#include - #include "../internal.h" #include "ops-common.h" -static bool __damon_pa_mkold(struct page *page, struct vm_area_struct *vma, - unsigned long addr, void *arg) -{ - struct page_vma_mapped_walk pvmw = { - .page = page, - .vma = vma, - .address = addr, - }; - - while (page_vma_mapped_walk(&pvmw)) { - addr = pvmw.address; - if (pvmw.pte) - damon_ptep_mkold(pvmw.pte, vma->vm_mm, addr); - else - damon_pmdp_mkold(pvmw.pmd, vma->vm_mm, addr); - } - return true; -} - -static void damon_pa_mkold(unsigned long paddr) -{ - struct page *page = damon_get_page(PHYS_PFN(paddr)); - struct rmap_walk_control rwc = { - .rmap_one = __damon_pa_mkold, - .anon_lock = page_lock_anon_vma_read, - }; - bool need_lock; - - if (!page) - return; - - if (!page_mapped(page) || !page_rmapping(page)) { - set_page_idle(page); - goto out; - } - - need_lock = !PageAnon(page) || PageKsm(page); - if (need_lock && !trylock_page(page)) - goto out; - - rmap_walk(page, &rwc); - - if (need_lock) - unlock_page(page); - -out: - put_page(page); -} - -static void __damon_pa_prepare_access_check(struct damon_ctx *ctx, - struct damon_region *r) -{ - r->sampling_addr = damon_rand(r->ar.start, r->ar.end); - - damon_pa_mkold(r->sampling_addr); -} - -static void damon_pa_prepare_access_checks(struct damon_ctx *ctx) -{ - struct damon_target *t; - struct damon_region *r; - - damon_for_each_target(t, ctx) { - damon_for_each_region(r, t) - __damon_pa_prepare_access_check(ctx, r); - } -} - -struct damon_pa_access_chk_result { - unsigned long page_sz; - bool accessed; -}; - -static bool __damon_pa_young(struct page *page, struct vm_area_struct *vma, - unsigned long addr, void *arg) -{ - struct damon_pa_access_chk_result *result = arg; - struct page_vma_mapped_walk pvmw = { - .page = page, - .vma = vma, - .address = addr, - }; - - result->accessed = false; - result->page_sz = PAGE_SIZE; - while (page_vma_mapped_walk(&pvmw)) { - addr = pvmw.address; - if (pvmw.pte) { - result->accessed = pte_young(*pvmw.pte) || - !page_is_idle(page) || - mmu_notifier_test_young(vma->vm_mm, addr); - } else { -#ifdef CONFIG_TRANSPARENT_HUGEPAGE - result->accessed = pmd_young(*pvmw.pmd) || - !page_is_idle(page) || - mmu_notifier_test_young(vma->vm_mm, addr); - result->page_sz = ((1UL) << HPAGE_PMD_SHIFT); -#else - WARN_ON_ONCE(1); -#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ - } - if (result->accessed) { - page_vma_mapped_walk_done(&pvmw); - break; - } - } - - /* If accessed, stop walking */ - return !result->accessed; -} - -static bool damon_pa_young(unsigned long paddr, unsigned long *page_sz) -{ - struct page *page = damon_get_page(PHYS_PFN(paddr)); - struct damon_pa_access_chk_result result = { - .page_sz = PAGE_SIZE, - .accessed = false, - }; - struct rmap_walk_control rwc = { - .arg = &result, - .rmap_one = __damon_pa_young, - .anon_lock = page_lock_anon_vma_read, - }; - bool 
need_lock; - - if (!page) - return false; - - if (!page_mapped(page) || !page_rmapping(page)) { - if (page_is_idle(page)) - result.accessed = false; - else - result.accessed = true; - put_page(page); - goto out; - } - - need_lock = !PageAnon(page) || PageKsm(page); - if (need_lock && !trylock_page(page)) { - put_page(page); - return NULL; - } - - rmap_walk(page, &rwc); - - if (need_lock) - unlock_page(page); - put_page(page); - -out: - *page_sz = result.page_sz; - return result.accessed; -} - -static void __damon_pa_check_access(struct damon_ctx *ctx, - struct damon_region *r) -{ - static unsigned long last_addr; - static unsigned long last_page_sz = PAGE_SIZE; - static bool last_accessed; - - /* If the region is in the last checked page, reuse the result */ - if (ALIGN_DOWN(last_addr, last_page_sz) == - ALIGN_DOWN(r->sampling_addr, last_page_sz)) { - if (last_accessed) - r->nr_accesses++; - return; - } - - last_accessed = damon_pa_young(r->sampling_addr, &last_page_sz); - if (last_accessed) - r->nr_accesses++; - - last_addr = r->sampling_addr; -} - -static unsigned int damon_pa_check_accesses(struct damon_ctx *ctx) -{ - struct damon_target *t; - struct damon_region *r; - unsigned int max_nr_accesses = 0; - - damon_for_each_target(t, ctx) { - damon_for_each_region(r, t) { - __damon_pa_check_access(ctx, r); - max_nr_accesses = max(r->nr_accesses, max_nr_accesses); - } - } - - return max_nr_accesses; -} - -static unsigned long damon_pa_apply_scheme(struct damon_ctx *ctx, - struct damon_target *t, struct damon_region *r, - struct damos *scheme) -{ - unsigned long addr, applied; - LIST_HEAD(page_list); - - if (scheme->action != DAMOS_PAGEOUT) - return 0; - - for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) { - struct page *page = damon_get_page(PHYS_PFN(addr)); - - if (!page) - continue; - - ClearPageReferenced(page); - test_and_clear_page_young(page); - if (isolate_lru_page(page)) { - put_page(page); - continue; - } - if (PageUnevictable(page)) { - putback_lru_page(page); - } else { - list_add(&page->lru, &page_list); - put_page(page); - } - } - applied = reclaim_pages(&page_list); - cond_resched(); - return applied * PAGE_SIZE; -} - -static int damon_pa_scheme_score(struct damon_ctx *context, - struct damon_target *t, struct damon_region *r, - struct damos *scheme) -{ - switch (scheme->action) { - case DAMOS_PAGEOUT: - return damon_pageout_score(context, r, scheme); - default: - break; - } - - return DAMOS_MAX_SCORE; -} - static int __init damon_pa_initcall(void) { struct damon_operations ops = { From patchwork Tue Mar 15 16:37:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: haoxin X-Patchwork-Id: 12781645 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 458FCC433EF for ; Tue, 15 Mar 2022 16:37:20 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8A3938D0006; Tue, 15 Mar 2022 12:37:18 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 7E3898D0005; Tue, 15 Mar 2022 12:37:18 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 67F928D0006; Tue, 15 Mar 2022 12:37:18 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id 45BDD8D0001 
From: Xin Hao
To: sj@kernel.org
Cc: xhao@linux.alibaba.com, rongwei.wang@linux.alibaba.com, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH V1 3/3] mm/damon/sysfs: Add CMA memory monitoring
Date: Wed, 16 Mar 2022 00:37:07 +0800
Message-Id: <0325c53c46291f96e6d99223fc4d2d8454de5d97.1647378112.git.xhao@linux.alibaba.com>
X-Mailer: git-send-email 2.31.0
In-Reply-To:
References:
MIME-Version: 1.0

Users can monitor CMA memory by writing the special keyword 'cma' to the 'operations' sysfs file. DAMON then recognizes the keyword and configures the monitoring context to run on the CMA reserved physical address space. Unlike other physical address space monitoring, the monitoring target regions are set automatically.
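As a usage illustration, selecting the new operations set boils down to writing the keyword into a context's 'operations' file. The sketch below does exactly that from user space; the sysfs path is an assumption based on the existing DAMON sysfs hierarchy (kdamond and context index 0) and is not introduced by this patch, and error handling is kept minimal.

#include <stdio.h>

int main(void)
{
	/* Assumed path following the existing DAMON sysfs layout; adjust the
	 * kdamond/context indexes to the ones actually in use. */
	const char *ops_file =
		"/sys/kernel/mm/damon/admin/kdamonds/0/contexts/0/operations";
	FILE *f = fopen(ops_file, "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	/* "cma" is the keyword this patch adds next to "vaddr" and "paddr" */
	if (fputs("cma", f) == EOF) {
		perror("fputs");
		fclose(f);
		return 1;
	}
	fclose(f);
	return 0;
}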
Signed-off-by: Xin Hao --- include/linux/damon.h | 1 + mm/damon/Makefile | 2 +- mm/damon/paddr-cma.c | 104 ++++++++++++++++++++++++++++++++++++++++++ mm/damon/sysfs.c | 1 + 4 files changed, 107 insertions(+), 1 deletion(-) create mode 100644 mm/damon/paddr-cma.c diff --git a/include/linux/damon.h b/include/linux/damon.h index f23cbfa4248d..27eaa6d6c43a 100644 --- a/include/linux/damon.h +++ b/include/linux/damon.h @@ -266,6 +266,7 @@ struct damos { enum damon_ops_id { DAMON_OPS_VADDR, DAMON_OPS_PADDR, + DAMON_OPS_CMA, NR_DAMON_OPS, }; diff --git a/mm/damon/Makefile b/mm/damon/Makefile index dbf7190b4144..d32048f70f6d 100644 --- a/mm/damon/Makefile +++ b/mm/damon/Makefile @@ -2,7 +2,7 @@ obj-y := core.o obj-$(CONFIG_DAMON_VADDR) += ops-common.o vaddr.o -obj-$(CONFIG_DAMON_PADDR) += ops-common.o paddr.o +obj-$(CONFIG_DAMON_PADDR) += ops-common.o paddr.o paddr-cma.o obj-$(CONFIG_DAMON_SYSFS) += sysfs.o obj-$(CONFIG_DAMON_DBGFS) += dbgfs.o obj-$(CONFIG_DAMON_RECLAIM) += reclaim.o diff --git a/mm/damon/paddr-cma.c b/mm/damon/paddr-cma.c new file mode 100644 index 000000000000..ad422854c8c6 --- /dev/null +++ b/mm/damon/paddr-cma.c @@ -0,0 +1,104 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * DAMON Primitives for The CMA Physical Address Space + * + * Author: Xin Hao + */ +#ifdef CONFIG_CMA + +#define pr_fmt(fmt) "damon-cma: " fmt + +#include + +#include "ops-common.h" +#include "../cma.h" + +static int damon_cma_area_regions(struct damon_addr_range *regions, int nr_cma_area) +{ + int i; + + if (!nr_cma_area || !regions) + return -EINVAL; + + for (i = 0; i < nr_cma_area; i++) { + phys_addr_t base = cma_get_base(&cma_areas[i]); + + regions[i].start = base; + regions[i].end = base + cma_get_size(&cma_areas[i]); + } + + return 0; +} + +static void __damon_cma_init_regions(struct damon_ctx *ctx, + struct damon_target *t) +{ + struct damon_target *ti; + struct damon_region *r; + struct damon_addr_range regions[MAX_CMA_AREAS]; + unsigned long sz = 0, nr_pieces; + int i, tidx = 0; + + if (damon_cma_area_regions(regions, cma_area_count)) { + damon_for_each_target(ti, ctx) { + if (ti == t) + break; + tidx++; + } + pr_err("Failed to get CMA regions of %dth target\n", tidx); + return; + } + + for (i = 0; i < cma_area_count; i++) + sz += regions[i].end - regions[i].start; + if (ctx->min_nr_regions) + sz /= ctx->min_nr_regions; + if (sz < DAMON_MIN_REGION) + sz = DAMON_MIN_REGION; + + /* Set the initial three regions of the target */ + for (i = 0; i < cma_area_count; i++) { + r = damon_new_region(regions[i].start, regions[i].end); + if (!r) { + pr_err("%d'th init region creation failed\n", i); + return; + } + damon_add_region(r, t); + + nr_pieces = (regions[i].end - regions[i].start) / sz; + damon_evenly_split_region(t, r, nr_pieces); + } +} + +static void damon_cma_init(struct damon_ctx *ctx) +{ + struct damon_target *t; + + damon_for_each_target(t, ctx) { + /* the user may set the target regions as they want */ + if (!damon_nr_regions(t)) + __damon_cma_init_regions(ctx, t); + } +} + +static int __init damon_cma_initcall(void) +{ + struct damon_operations ops = { + .id = DAMON_OPS_CMA, + .init = damon_cma_init, + .update = NULL, + .prepare_access_checks = damon_pa_prepare_access_checks, + .check_accesses = damon_pa_check_accesses, + .reset_aggregated = NULL, + .target_valid = NULL, + .cleanup = NULL, + .apply_scheme = damon_pa_apply_scheme, + .get_scheme_score = damon_pa_scheme_score, + }; + + return damon_register_ops(&ops); +}; + +subsys_initcall(damon_cma_initcall); + +#endif /* CONFIG_CMA */ diff --git 
a/mm/damon/sysfs.c b/mm/damon/sysfs.c index d39f74969469..8a34880cc2c4 100644 --- a/mm/damon/sysfs.c +++ b/mm/damon/sysfs.c @@ -1761,6 +1761,7 @@ static struct kobj_type damon_sysfs_attrs_ktype = { static const char * const damon_sysfs_ops_strs[] = { "vaddr", "paddr", + "cma", }; struct damon_sysfs_context {
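A note on the hunk above: the 'operations' keyword is resolved to an ops id by its position in damon_sysfs_ops_strs[], so the array has to keep the same order as enum damon_ops_id, which is why "cma" is appended directly after "paddr" in both files. The following is a simplified, self-contained sketch of that index-based mapping; the damon_sysfs_ops_id_of() helper and its exact matching logic are illustrative assumptions, not code from this patch.

#include <stdio.h>
#include <string.h>

/* Mirrors the enum damon_ops_id ordering from include/linux/damon.h */
enum damon_ops_id {
	DAMON_OPS_VADDR,
	DAMON_OPS_PADDR,
	DAMON_OPS_CMA,
	NR_DAMON_OPS,
};

/* Mirrors damon_sysfs_ops_strs[]; the order must match the enum above */
static const char * const damon_sysfs_ops_strs[] = {
	"vaddr",
	"paddr",
	"cma",
};

/*
 * Illustrative helper (not from the patch): map a keyword written to the
 * 'operations' file to its ops id, or -1 if the keyword is unknown.
 */
static int damon_sysfs_ops_id_of(const char *keyword)
{
	int id;

	for (id = 0; id < NR_DAMON_OPS; id++) {
		if (!strcmp(keyword, damon_sysfs_ops_strs[id]))
			return id;
	}
	return -1;
}

int main(void)
{
	printf("'cma' -> ops id %d\n", damon_sysfs_ops_id_of("cma"));
	return 0;
}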