From patchwork Fri Mar 18 09:23:13 2022
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 12785062
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: sj@kernel.org, akpm@linux-foundation.org
Cc: baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH] mm/damon: Make the sampling more accurate
Date: Fri, 18 Mar 2022 17:23:13 +0800
Message-Id: <1647595393-103185-1-git-send-email-baolin.wang@linux.alibaba.com>

When I tried to sample physical addresses with DAMON in order to migrate pages on a tiered memory system, I found that it mistakenly demotes some regions as cold. Currently we choose a physical address in the region at random, but if the corresponding page is not an online LRU page, we ignore the access status for this sampling cycle, so the region is effectively treated as not accessed. Thus a region containing some non-LRU pages is very likely to be treated as a cold region and may be merged with adjacent cold regions, even though some of its pages may actually be accessed and that access is missed.
So, instead of ignoring the access status of the region when no valid page is found at the current sampling address, we can fall back to the last valid sampling address, which makes the sampling more accurate and lets us make a better decision.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/damon.h |  2 ++
 mm/damon/core.c       |  2 ++
 mm/damon/paddr.c      | 15 ++++++++++++---
 3 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index f23cbfa..3311e15 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -38,6 +38,7 @@ struct damon_addr_range {
  * struct damon_region - Represents a monitoring target region.
  * @ar:			The address range of the region.
  * @sampling_addr:	Address of the sample for the next access check.
+ * @last_sampling_addr:	Last valid address of the sampling.
  * @nr_accesses:	Access frequency of this region.
  * @list:		List head for siblings.
  * @age:		Age of this region.
@@ -50,6 +51,7 @@ struct damon_addr_range {
 struct damon_region {
 	struct damon_addr_range ar;
 	unsigned long sampling_addr;
+	unsigned long last_sampling_addr;
 	unsigned int nr_accesses;
 	struct list_head list;
 
diff --git a/mm/damon/core.c b/mm/damon/core.c
index c1e0fed..957704f 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -108,6 +108,7 @@ struct damon_region *damon_new_region(unsigned long start, unsigned long end)
 	region->ar.start = start;
 	region->ar.end = end;
 	region->nr_accesses = 0;
+	region->last_sampling_addr = 0;
 	INIT_LIST_HEAD(&region->list);
 
 	region->age = 0;
@@ -848,6 +849,7 @@ static void damon_split_region_at(struct damon_ctx *ctx,
 		return;
 
 	r->ar.end = new->ar.start;
+	r->last_sampling_addr = 0;
 
 	new->age = r->age;
 	new->last_nr_accesses = r->last_nr_accesses;
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 21474ae..5f15068 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -31,10 +31,9 @@ static bool __damon_pa_mkold(struct folio *folio, struct vm_area_struct *vma,
 	return true;
 }
 
-static void damon_pa_mkold(unsigned long paddr)
+static void damon_pa_mkold(struct page *page)
 {
 	struct folio *folio;
-	struct page *page = damon_get_page(PHYS_PFN(paddr));
 	struct rmap_walk_control rwc = {
 		.rmap_one = __damon_pa_mkold,
 		.anon_lock = folio_lock_anon_vma_read,
@@ -66,9 +65,19 @@ static void damon_pa_mkold(unsigned long paddr)
 static void __damon_pa_prepare_access_check(struct damon_ctx *ctx,
 					    struct damon_region *r)
 {
+	struct page *page;
+
 	r->sampling_addr = damon_rand(r->ar.start, r->ar.end);
 
-	damon_pa_mkold(r->sampling_addr);
+	page = damon_get_page(PHYS_PFN(r->sampling_addr));
+	if (page) {
+		r->last_sampling_addr = r->sampling_addr;
+	} else if (r->last_sampling_addr) {
+		r->sampling_addr = r->last_sampling_addr;
+		page = damon_get_page(PHYS_PFN(r->last_sampling_addr));
+	}
+
+	damon_pa_mkold(page);
 }
 
 static void damon_pa_prepare_access_checks(struct damon_ctx *ctx)
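
A note for reviewers: the snippet below is a standalone userspace sketch of the fallback added to __damon_pa_prepare_access_check(), not kernel code. get_page_stub(), rand_in() and the address range used are made up for illustration only; in the kernel the real damon_get_page() and damon_rand() helpers are used exactly as in the diff above.

/*
 * Userspace sketch of the sampling fallback: if the randomly chosen
 * sampling address has no valid page, retry with the last address that
 * was known to be valid instead of treating the region as unaccessed.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct region {
	unsigned long start, end;
	unsigned long sampling_addr;
	unsigned long last_sampling_addr;	/* 0: no valid sample seen yet */
};

/* Pretend only [0x2000, 0x3000) is backed by online LRU pages. */
static bool get_page_stub(unsigned long addr)
{
	return addr >= 0x2000 && addr < 0x3000;
}

static unsigned long rand_in(unsigned long start, unsigned long end)
{
	return start + (unsigned long)rand() % (end - start);
}

static void prepare_access_check(struct region *r)
{
	bool valid;

	r->sampling_addr = rand_in(r->start, r->end);
	valid = get_page_stub(r->sampling_addr);
	if (valid) {
		r->last_sampling_addr = r->sampling_addr;	/* remember it */
	} else if (r->last_sampling_addr) {
		/* Fall back rather than ignoring this cycle entirely. */
		r->sampling_addr = r->last_sampling_addr;
		valid = get_page_stub(r->last_sampling_addr);
	}

	printf("sample 0x%lx -> %s\n", r->sampling_addr,
	       valid ? "access check done" : "skipped");
}

int main(void)
{
	struct region r = { .start = 0x1000, .end = 0x3000 };
	int i;

	for (i = 0; i < 8; i++)
		prepare_access_check(&r);
	return 0;
}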