From patchwork Wed Dec 16 09:42:15 2020
From: SeongJae Park
Subject: [RFC v10 07/13] mm/damon: Implement primitives for physical address space monitoring
Date: Wed, 16 Dec 2020 10:42:15 +0100
Message-ID: <20201216094221.11898-8-sjpark@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201216094221.11898-1-sjpark@amazon.com>
References: <20201216094221.11898-1-sjpark@amazon.com>

From: SeongJae Park

This commit implements the primitives for basic access monitoring of the
physical memory address space.  With these, users can easily monitor
accesses to physical memory.

Internally, it uses the PTE Accessed bit, similar to the virtual address
space support.  Like idle page tracking, and for the same reason, only user
memory pages are supported.  If the monitoring target physical address
range contains non-user memory pages, the access check does nothing for
those pages and simply treats them as not accessed.  Users who want to use
other access check primitives and/or monitor non-user memory regions could
implement and use their own callbacks.

Signed-off-by: SeongJae Park
---
A rough usage sketch for these primitives is appended after the patch.

 include/linux/damon.h |  10 ++
 mm/damon/Kconfig      |   9 ++
 mm/damon/Makefile     |   1 +
 mm/damon/paddr.c      | 222 ++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 242 insertions(+)
 create mode 100644 mm/damon/paddr.c

diff --git a/include/linux/damon.h b/include/linux/damon.h
index ed7e86207e53..ea2fd054b2ef 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -376,4 +376,14 @@ void damon_va_set_primitives(struct damon_ctx *ctx);
 
 #endif	/* CONFIG_DAMON_VADDR */
 
+#ifdef CONFIG_DAMON_PADDR
+
+/* Monitoring primitives for the physical memory address space */
+void damon_pa_prepare_access_checks(struct damon_ctx *ctx);
+unsigned int damon_pa_check_accesses(struct damon_ctx *ctx);
+bool damon_pa_target_valid(void *t);
+void damon_pa_set_primitives(struct damon_ctx *ctx);
+
+#endif	/* CONFIG_DAMON_PADDR */
+
 #endif	/* _DAMON_H */
diff --git a/mm/damon/Kconfig b/mm/damon/Kconfig
index 455995152697..89c06ac8c9eb 100644
--- a/mm/damon/Kconfig
+++ b/mm/damon/Kconfig
@@ -33,6 +33,15 @@ config DAMON_VADDR
 	  This builds the default data access monitoring primitives for DAMON
 	  that works for virtual address spaces.
 
+config DAMON_PADDR
+	bool "Data access monitoring primitives for the physical address space"
+	depends on DAMON && MMU
+	select PAGE_EXTENSION if !64BIT
+	select PAGE_IDLE_FLAG
+	help
+	  This builds the default data access monitoring primitives for DAMON
+	  that works for physical address spaces.
+
 config DAMON_VADDR_KUNIT_TEST
 	bool "Test for DAMON primitives" if !KUNIT_ALL_TESTS
 	depends on DAMON_VADDR && KUNIT=y
diff --git a/mm/damon/Makefile b/mm/damon/Makefile
index 99b1bfe01ff5..8d9b0df79702 100644
--- a/mm/damon/Makefile
+++ b/mm/damon/Makefile
@@ -2,4 +2,5 @@
 
 obj-$(CONFIG_DAMON)		:= core.o
 obj-$(CONFIG_DAMON_VADDR)	+= prmtv-common.o vaddr.o
+obj-$(CONFIG_DAMON_PADDR)	+= prmtv-common.o paddr.o
 obj-$(CONFIG_DAMON_DBGFS)	+= dbgfs.o
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
new file mode 100644
index 000000000000..b120f672cc57
--- /dev/null
+++ b/mm/damon/paddr.c
@@ -0,0 +1,222 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DAMON Primitives for The Physical Address Space
+ *
+ * Author: SeongJae Park
+ */
+
+#define pr_fmt(fmt) "damon-pa: " fmt
+
+#include 
+
+#include "prmtv-common.h"
+
+/*
+ * This file has no implementations for 'init_target_regions()' and
+ * 'update_target_regions()'.  Users should set the initial regions and
+ * update the regions by themselves in the 'before_start' and
+ * 'after_aggregation' callbacks, respectively.  Alternatively, they can
+ * implement and use their own version of the primitives.
+ */
+
+/*
+ * Get a page by pfn if it is in the LRU list.  Otherwise, returns NULL.
+ *
+ * The body of this function is stolen from 'page_idle_get_page()'.  We
+ * steal it rather than reusing it because the code is quite simple.
+ */
+static struct page *damon_pa_get_page(unsigned long pfn)
+{
+	struct page *page = pfn_to_online_page(pfn);
+	pg_data_t *pgdat;
+
+	if (!page || !PageLRU(page) ||
+	    !get_page_unless_zero(page))
+		return NULL;
+
+	pgdat = page_pgdat(page);
+	spin_lock_irq(&pgdat->lru_lock);
+	if (unlikely(!PageLRU(page))) {
+		put_page(page);
+		page = NULL;
+	}
+	spin_unlock_irq(&pgdat->lru_lock);
+	return page;
+}
+
+static bool __damon_pa_mkold(struct page *page, struct vm_area_struct *vma,
+		unsigned long addr, void *arg)
+{
+	damon_va_mkold(vma->vm_mm, addr);
+	return true;
+}
+
+static void damon_pa_mkold(unsigned long paddr)
+{
+	struct page *page = damon_pa_get_page(PHYS_PFN(paddr));
+	struct rmap_walk_control rwc = {
+		.rmap_one = __damon_pa_mkold,
+		.anon_lock = page_lock_anon_vma_read,
+	};
+	bool need_lock;
+
+	if (!page)
+		return;
+
+	if (!page_mapped(page) || !page_rmapping(page)) {
+		set_page_idle(page);
+		put_page(page);
+		return;
+	}
+
+	need_lock = !PageAnon(page) || PageKsm(page);
+	if (need_lock && !trylock_page(page)) {
+		put_page(page);
+		return;
+	}
+
+	rmap_walk(page, &rwc);
+
+	if (need_lock)
+		unlock_page(page);
+	put_page(page);
+}
+
+static void __damon_pa_prepare_access_check(struct damon_ctx *ctx,
+					    struct damon_region *r)
+{
+	r->sampling_addr = damon_rand(r->ar.start, r->ar.end);
+
+	damon_pa_mkold(r->sampling_addr);
+}
+
+void damon_pa_prepare_access_checks(struct damon_ctx *ctx)
+{
+	struct damon_target *t;
+	struct damon_region *r;
+
+	damon_for_each_target(t, ctx) {
+		damon_for_each_region(r, t)
+			__damon_pa_prepare_access_check(ctx, r);
+	}
+}
+
+struct damon_pa_access_chk_result {
+	unsigned long page_sz;
+	bool accessed;
+};
+
+static bool damon_pa_accessed(struct page *page, struct vm_area_struct *vma,
+		unsigned long addr, void *arg)
+{
+	struct damon_pa_access_chk_result *result = arg;
+
+	result->accessed = damon_va_young(vma->vm_mm, addr, &result->page_sz);
+
+	/* If accessed, stop walking */
+	return !result->accessed;
+}
+
+static bool damon_pa_young(unsigned long paddr, unsigned long *page_sz)
+{
+	struct page *page = damon_pa_get_page(PHYS_PFN(paddr));
+	struct damon_pa_access_chk_result result = {
+		.page_sz = PAGE_SIZE,
+		.accessed = false,
+	};
+	struct rmap_walk_control rwc = {
+		.arg = &result,
+		.rmap_one = damon_pa_accessed,
+		.anon_lock = page_lock_anon_vma_read,
+	};
+	bool need_lock;
+
+	if (!page)
+		return false;
+
+	if (!page_mapped(page) || !page_rmapping(page)) {
+		if (page_is_idle(page))
+			result.accessed = false;
+		else
+			result.accessed = true;
+		put_page(page);
+		goto out;
+	}
+
+	need_lock = !PageAnon(page) || PageKsm(page);
+	if (need_lock && !trylock_page(page)) {
+		put_page(page);
+		return false;
+	}
+
+	rmap_walk(page, &rwc);
+
+	if (need_lock)
+		unlock_page(page);
+	put_page(page);
+
+out:
+	*page_sz = result.page_sz;
+	return result.accessed;
+}
+
+/*
+ * Check whether the region was accessed after the last preparation
+ *
+ * ctx	the monitoring context
+ * r	the region of the physical address space to be checked
+ */
+static void __damon_pa_check_access(struct damon_ctx *ctx,
+				    struct damon_region *r)
+{
+	static unsigned long last_addr;
+	static unsigned long last_page_sz = PAGE_SIZE;
+	static bool last_accessed;
+
+	/* If the region is in the last checked page, reuse the result */
+	if (ALIGN_DOWN(last_addr, last_page_sz) ==
+			ALIGN_DOWN(r->sampling_addr, last_page_sz)) {
+		if (last_accessed)
+			r->nr_accesses++;
+		return;
+	}
+
+	last_accessed = damon_pa_young(r->sampling_addr, &last_page_sz);
+	if (last_accessed)
+		r->nr_accesses++;
+
+	last_addr = r->sampling_addr;
+}
+
+unsigned int damon_pa_check_accesses(struct damon_ctx *ctx)
+{
+	struct damon_target *t;
+	struct damon_region *r;
+	unsigned int max_nr_accesses = 0;
+
+	damon_for_each_target(t, ctx) {
+		damon_for_each_region(r, t) {
+			__damon_pa_check_access(ctx, r);
+			max_nr_accesses = max(r->nr_accesses, max_nr_accesses);
+		}
+	}
+
+	return max_nr_accesses;
+}
+
+bool damon_pa_target_valid(void *t)
+{
+	return true;
+}
+
+void damon_pa_set_primitives(struct damon_ctx *ctx)
+{
+	ctx->primitive.init_target_regions = NULL;
+	ctx->primitive.update_target_regions = NULL;
+	ctx->primitive.prepare_access_checks = damon_pa_prepare_access_checks;
+	ctx->primitive.check_accesses = damon_pa_check_accesses;
+	ctx->primitive.reset_aggregated = NULL;
+	ctx->primitive.target_valid = damon_pa_target_valid;
+	ctx->primitive.cleanup = NULL;
+	ctx->primitive.apply_scheme = NULL;
+}
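
Appendix (not part of the patch): a rough sketch of how a kernel module
could wire the primitives above into a DAMON monitoring context.  The core
API used here (damon_new_ctx(), damon_new_target(), damon_new_region(),
damon_add_region(), damon_add_target(), damon_start(), damon_stop(),
damon_destroy_ctx()) is assumed from other patches of this series, and the
exact signatures may differ; the target id and the physical address range
are made up for illustration.  Because the paddr primitives leave
'init_target_regions' unset, the caller provides the initial regions itself
(here at init time; doing it from the 'before_start' callback works as
well).

#include <linux/damon.h>
#include <linux/module.h>

static struct damon_ctx *ctx;

static int __init damon_pa_sample_init(void)
{
	struct damon_target *target;
	struct damon_region *region;

	/* Assumed core API: allocate a monitoring context */
	ctx = damon_new_ctx();
	if (!ctx)
		return -ENOMEM;

	/* Use the physical address space primitives from this patch */
	damon_pa_set_primitives(ctx);

	/*
	 * The paddr primitives do not construct target regions, so set one
	 * target with one (made-up) physical address range by hand.
	 */
	target = damon_new_target(0);	/* 0: arbitrary target id */
	region = damon_new_region(0x100000000UL, 0x140000000UL);
	if (!target || !region)
		return -ENOMEM;	/* error cleanup elided in this sketch */
	damon_add_region(region, target);
	damon_add_target(ctx, target);

	/* Start the monitoring kthread on this single context */
	return damon_start(&ctx, 1);
}

static void __exit damon_pa_sample_exit(void)
{
	damon_stop(&ctx, 1);
	damon_destroy_ctx(ctx);
}

module_init(damon_pa_sample_init);
module_exit(damon_pa_sample_exit);
MODULE_LICENSE("GPL");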