From patchwork Wed Jun 3 14:11:27 2020
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 11585721
From: SeongJae Park <sjpark@amazon.com>
Subject: [RFC v2 1/9] mm/damon: Use vm-independent address range concept
Date: Wed, 3 Jun 2020 16:11:27 +0200
Message-ID: <20200603141135.10575-2-sjpark@amazon.com>
In-Reply-To: <20200603141135.10575-1-sjpark@amazon.com>
References: <20200603141135.10575-1-sjpark@amazon.com>

From: SeongJae Park <sjpark@amazon.com>

DAMON's main idea is not limited to the virtual address space.  To
prepare for further expansion of support to other address spaces,
including physical memory, this commit modifies one of DAMON's core
structs, 'struct damon_region', to use an address range concept that is
independent of virtual memory.

Signed-off-by: SeongJae Park <sjpark@amazon.com>
---
 include/linux/damon.h | 20 +++++++++++------
 mm/damon-test.h       | 42 ++++++++++++++++++------------------
 mm/damon.c            | 50 +++++++++++++++++++++----------------------
 3 files changed, 59 insertions(+), 53 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index e77256cf30dd..b4b06ca905a2 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -16,11 +16,18 @@
 #include

 /**
- * struct damon_region - Represents a monitoring target region of
- * [@vm_start, @vm_end).
- *
- * @vm_start: Start address of the region (inclusive).
- * @vm_end: End address of the region (exclusive).
+ * struct damon_addr_range - Represents an address region of [@start, @end).
+ * @start: Start address of the region (inclusive).
+ * @end: End address of the region (exclusive).
+ */
+struct damon_addr_range {
+	unsigned long start;
+	unsigned long end;
+};
+
+/**
+ * struct damon_region - Represents a monitoring target region.
+ * @ar: The address range of the region.
  * @sampling_addr: Address of the sample for the next access check.
  * @nr_accesses: Access frequency of this region.
  * @list: List head for siblings.
@@ -33,8 +40,7 @@
  * region are set as region size-weighted average of those of the two regions.
  */
 struct damon_region {
-	unsigned long vm_start;
-	unsigned long vm_end;
+	struct damon_addr_range ar;
 	unsigned long sampling_addr;
 	unsigned int nr_accesses;
 	struct list_head list;
diff --git a/mm/damon-test.h b/mm/damon-test.h
index 5b18619efe72..9dd2061502cb 100644
--- a/mm/damon-test.h
+++ b/mm/damon-test.h
@@ -78,8 +78,8 @@ static void damon_test_regions(struct kunit *test)
 	struct damon_task *t;

 	r = damon_new_region(&damon_user_ctx, 1, 2);
-	KUNIT_EXPECT_EQ(test, 1ul, r->vm_start);
-	KUNIT_EXPECT_EQ(test, 2ul, r->vm_end);
+	KUNIT_EXPECT_EQ(test, 1ul, r->ar.start);
+	KUNIT_EXPECT_EQ(test, 2ul, r->ar.end);
 	KUNIT_EXPECT_EQ(test, 0u, r->nr_accesses);

 	t = damon_new_task(42);
@@ -267,7 +267,7 @@ static void damon_test_aggregate(struct kunit *test)
 	KUNIT_EXPECT_EQ(test, 3, it);

 	/* The aggregated information should be written in the buffer */
-	sr = sizeof(r->vm_start) + sizeof(r->vm_end) + sizeof(r->nr_accesses);
+	sr = sizeof(r->ar.start) + sizeof(r->ar.end) + sizeof(r->nr_accesses);
 	sp = sizeof(t->pid) + sizeof(unsigned int) + 3 * sr;
 	sz = sizeof(struct timespec64) + sizeof(unsigned int) + 3 * sp;
 	KUNIT_EXPECT_EQ(test, (unsigned int)sz, ctx->rbuf_offset);
@@ -350,8 +350,8 @@ static void damon_do_test_apply_three_regions(struct kunit *test,

 	for (i = 0; i < nr_expected / 2; i++) {
 		r = __nth_region_of(t, i);
-		KUNIT_EXPECT_EQ(test, r->vm_start, expected[i * 2]);
-		KUNIT_EXPECT_EQ(test, r->vm_end, expected[i * 2 + 1]);
+		KUNIT_EXPECT_EQ(test, r->ar.start, expected[i * 2]);
+		KUNIT_EXPECT_EQ(test, r->ar.end, expected[i * 2 + 1]);
 	}

 	damon_cleanup_global_state();
@@ -470,8 +470,8 @@ static void damon_test_split_evenly(struct kunit *test)

 	i = 0;
 	damon_for_each_region(r, t) {
-		KUNIT_EXPECT_EQ(test, r->vm_start, i++ * 10);
-		KUNIT_EXPECT_EQ(test, r->vm_end, i * 10);
+		KUNIT_EXPECT_EQ(test, r->ar.start, i++ * 10);
+		KUNIT_EXPECT_EQ(test, r->ar.end, i * 10);
 	}
 	damon_free_task(t);
@@ -485,11 +485,11 @@ static void damon_test_split_evenly(struct kunit *test)
 	damon_for_each_region(r, t) {
 		if (i == 4)
 			break;
-		KUNIT_EXPECT_EQ(test, r->vm_start, 5 + 10 * i++);
-		KUNIT_EXPECT_EQ(test, r->vm_end, 5 + 10 * i);
+		KUNIT_EXPECT_EQ(test, r->ar.start, 5 + 10 * i++);
+		KUNIT_EXPECT_EQ(test, r->ar.end, 5 + 10 * i);
 	}
-	KUNIT_EXPECT_EQ(test, r->vm_start, 5 + 10 * i);
-	KUNIT_EXPECT_EQ(test, r->vm_end, 59ul);
+	KUNIT_EXPECT_EQ(test, r->ar.start, 5 + 10 * i);
+	KUNIT_EXPECT_EQ(test, r->ar.end, 59ul);
 	damon_free_task(t);

 	t = damon_new_task(42);
@@ -499,8 +499,8 @@ static void damon_test_split_evenly(struct kunit *test)
 	KUNIT_EXPECT_EQ(test, nr_damon_regions(t), 1u);
 	damon_for_each_region(r, t) {
-		KUNIT_EXPECT_EQ(test, r->vm_start, 5ul);
-		KUNIT_EXPECT_EQ(test, r->vm_end, 6ul);
+		KUNIT_EXPECT_EQ(test, r->ar.start, 5ul);
+		KUNIT_EXPECT_EQ(test, r->ar.end, 6ul);
 	}
 	damon_free_task(t);
 }
@@ -514,12 +514,12 @@ static void damon_test_split_at(struct kunit *test)
 	r = damon_new_region(&damon_user_ctx, 0, 100);
 	damon_add_region(r, t);
 	damon_split_region_at(&damon_user_ctx, r, 25);
-	KUNIT_EXPECT_EQ(test, r->vm_start, 0ul);
-	KUNIT_EXPECT_EQ(test, r->vm_end, 25ul);
+	KUNIT_EXPECT_EQ(test, r->ar.start, 0ul);
+	KUNIT_EXPECT_EQ(test, r->ar.end, 25ul);

 	r = damon_next_region(r);
-	KUNIT_EXPECT_EQ(test, r->vm_start, 25ul);
-	KUNIT_EXPECT_EQ(test, r->vm_end, 100ul);
+	KUNIT_EXPECT_EQ(test, r->ar.start, 25ul);
+	KUNIT_EXPECT_EQ(test, r->ar.end, 100ul);

 	damon_free_task(t);
 }
@@ -539,8 +539,8 @@ static void damon_test_merge_two(struct kunit *test)
 	damon_add_region(r2, t);

 	damon_merge_two_regions(r, r2);
-	KUNIT_EXPECT_EQ(test, r->vm_start, 0ul);
-	KUNIT_EXPECT_EQ(test, r->vm_end, 300ul);
+	KUNIT_EXPECT_EQ(test, r->ar.start, 0ul);
+	KUNIT_EXPECT_EQ(test, r->ar.end, 300ul);
 	KUNIT_EXPECT_EQ(test, r->nr_accesses, 16u);

 	i = 0;
@@ -577,8 +577,8 @@ static void damon_test_merge_regions_of(struct kunit *test)
 	KUNIT_EXPECT_EQ(test, nr_damon_regions(t), 5u);
 	for (i = 0; i < 5; i++) {
 		r = __nth_region_of(t, i);
-		KUNIT_EXPECT_EQ(test, r->vm_start, saddrs[i]);
-		KUNIT_EXPECT_EQ(test, r->vm_end, eaddrs[i]);
+		KUNIT_EXPECT_EQ(test, r->ar.start, saddrs[i]);
+		KUNIT_EXPECT_EQ(test, r->ar.end, eaddrs[i]);
 	}
 	damon_free_task(t);
 }
diff --git a/mm/damon.c b/mm/damon.c
index ea6a8b6886b8..a9676b804b0b 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -80,7 +80,7 @@ static struct damon_ctx damon_user_ctx = {
  * Returns the pointer to the new struct if success, or NULL otherwise
  */
 static struct damon_region *damon_new_region(struct damon_ctx *ctx,
-				unsigned long vm_start, unsigned long vm_end)
+				unsigned long start, unsigned long end)
 {
 	struct damon_region *region;

@@ -88,8 +88,8 @@ static struct damon_region *damon_new_region(struct damon_ctx *ctx,
 	if (!region)
 		return NULL;

-	region->vm_start = vm_start;
-	region->vm_end = vm_end;
+	region->ar.start = start;
+	region->ar.end = end;
 	region->nr_accesses = 0;
 	INIT_LIST_HEAD(&region->list);

@@ -277,16 +277,16 @@ static int damon_split_region_evenly(struct damon_ctx *ctx,
 	if (!r || !nr_pieces)
 		return -EINVAL;

-	orig_end = r->vm_end;
-	sz_orig = r->vm_end - r->vm_start;
+	orig_end = r->ar.end;
+	sz_orig = r->ar.end - r->ar.start;
 	sz_piece = ALIGN_DOWN(sz_orig / nr_pieces, MIN_REGION);

 	if (!sz_piece)
 		return -EINVAL;

-	r->vm_end = r->vm_start + sz_piece;
+	r->ar.end = r->ar.start + sz_piece;
 	next = damon_next_region(r);
-	for (start = r->vm_end; start + sz_piece <= orig_end;
+	for (start = r->ar.end; start + sz_piece <= orig_end;
 			start += sz_piece) {
 		n = damon_new_region(ctx, start, start + sz_piece);
 		if (!n)
@@ -296,7 +296,7 @@ static int damon_split_region_evenly(struct damon_ctx *ctx,
 	}
 	/* complement last region for possible rounding error */
 	if (n)
-		n->vm_end = orig_end;
+		n->ar.end = orig_end;

 	return 0;
 }
@@ -509,7 +509,7 @@ static void damon_mkold(struct mm_struct *mm, unsigned long addr)
 static void damon_prepare_access_check(struct damon_ctx *ctx,
 			struct mm_struct *mm, struct damon_region *r)
 {
-	r->sampling_addr = damon_rand(r->vm_start, r->vm_end);
+	r->sampling_addr = damon_rand(r->ar.start, r->ar.end);

 	damon_mkold(mm, r->sampling_addr);
 }
@@ -709,12 +709,12 @@ static void kdamond_reset_aggregated(struct damon_ctx *c)
 		nr = nr_damon_regions(t);
 		damon_write_rbuf(c, &nr, sizeof(nr));
 		damon_for_each_region(r, t) {
-			damon_write_rbuf(c, &r->vm_start, sizeof(r->vm_start));
-			damon_write_rbuf(c, &r->vm_end, sizeof(r->vm_end));
+			damon_write_rbuf(c, &r->ar.start, sizeof(r->ar.start));
+			damon_write_rbuf(c, &r->ar.end, sizeof(r->ar.end));
 			damon_write_rbuf(c, &r->nr_accesses,
 					sizeof(r->nr_accesses));
 			trace_damon_aggregated(t->pid, nr,
-					r->vm_start, r->vm_end, r->nr_accesses);
+					r->ar.start, r->ar.end, r->nr_accesses);
 			r->last_nr_accesses = r->nr_accesses;
 			r->nr_accesses = 0;
 		}
@@ -742,8 +742,8 @@ static int damos_madvise(struct damon_task *task, struct damon_region *r,
 	if (!mm)
 		goto put_task_out;

-	ret = do_madvise(t, mm, PAGE_ALIGN(r->vm_start),
-			PAGE_ALIGN(r->vm_end - r->vm_start), behavior);
+	ret = do_madvise(t, mm, PAGE_ALIGN(r->ar.start),
+			PAGE_ALIGN(r->ar.end - r->ar.start), behavior);
 	mmput(mm);
 put_task_out:
 	put_task_struct(t);
@@ -790,7 +790,7 @@ static void damon_do_apply_schemes(struct damon_ctx *c, struct damon_task *t,
 	unsigned long sz;

 	damon_for_each_scheme(s, c) {
-		sz = r->vm_end - r->vm_start;
+		sz = r->ar.end - r->ar.start;
 		if ((s->min_sz_region && sz < s->min_sz_region) ||
 				(s->max_sz_region && s->max_sz_region < sz))
 			continue;
@@ -821,7 +821,7 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
 	}
 }

-#define sz_damon_region(r) (r->vm_end - r->vm_start)
+#define sz_damon_region(r) (r->ar.end - r->ar.start)

 /*
  * Merge two adjacent regions into one region
@@ -834,7 +834,7 @@ static void damon_merge_two_regions(struct damon_region *l,
 	l->nr_accesses = (l->nr_accesses * sz_l + r->nr_accesses * sz_r) /
 			(sz_l + sz_r);
 	l->age = (l->age * sz_l + r->age * sz_r) / (sz_l + sz_r);
-	l->vm_end = r->vm_end;
+	l->ar.end = r->ar.end;
 	damon_destroy_region(r);
 }

@@ -856,7 +856,7 @@ static void damon_merge_regions_of(struct damon_task *t, unsigned int thres)
 		else
 			r->age++;

-		if (prev && prev->vm_end == r->vm_start &&
+		if (prev && prev->ar.end == r->ar.start &&
 				diff_of(prev->nr_accesses, r->nr_accesses) <= thres)
 			damon_merge_two_regions(prev, r);
 		else
@@ -893,8 +893,8 @@ static void damon_split_region_at(struct damon_ctx *ctx,
 {
 	struct damon_region *new;

-	new = damon_new_region(ctx, r->vm_start + sz_r, r->vm_end);
-	r->vm_end = new->vm_start;
+	new = damon_new_region(ctx, r->ar.start + sz_r, r->ar.end);
+	r->ar.end = new->ar.start;

 	new->age = r->age;
 	new->last_nr_accesses = r->last_nr_accesses;
@@ -911,7 +911,7 @@ static void damon_split_regions_of(struct damon_ctx *ctx,
 	int i;

 	damon_for_each_region_safe(r, next, t) {
-		sz_region = r->vm_end - r->vm_start;
+		sz_region = r->ar.end - r->ar.start;

 		for (i = 0; i < nr_subs - 1 &&
 				sz_region > 2 * MIN_REGION; i++) {
@@ -985,7 +985,7 @@ static bool kdamond_need_update_regions(struct damon_ctx *ctx)
  */
 static bool damon_intersect(struct damon_region *r, struct region *re)
 {
-	return !(r->vm_end <= re->start || re->end <= r->vm_start);
+	return !(r->ar.end <= re->start || re->end <= r->ar.start);
 }

@@ -1024,7 +1024,7 @@ static void damon_apply_three_regions(struct damon_ctx *ctx,
 				first = r;
 			last = r;
 		}
-		if (r->vm_start >= br->end)
+		if (r->ar.start >= br->end)
 			break;
 	}
 	if (!first) {
@@ -1036,8 +1036,8 @@ static void damon_apply_three_regions(struct damon_ctx *ctx,
 				continue;
 			damon_insert_region(newr, damon_prev_region(r), r);
 		} else {
-			first->vm_start = ALIGN_DOWN(br->start, MIN_REGION);
-			last->vm_end = ALIGN(br->end, MIN_REGION);
+			first->ar.start = ALIGN_DOWN(br->start, MIN_REGION);
+			last->ar.end = ALIGN(br->end, MIN_REGION);
 		}
 	}
 }

From patchwork Wed Jun 3 14:11:28 2020
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 11585723
From: SeongJae Park <sjpark@amazon.com>
Subject: [RFC v2 2/9] mm/damon: Clean up code using 'struct damon_addr_range'
Date: Wed, 3 Jun 2020 16:11:28 +0200
Message-ID: <20200603141135.10575-3-sjpark@amazon.com>
In-Reply-To: <20200603141135.10575-1-sjpark@amazon.com>
References: <20200603141135.10575-1-sjpark@amazon.com>

From: SeongJae Park <sjpark@amazon.com>

There is unnecessarily duplicated code in DAMON that can be eliminated
by using the new 'struct damon_addr_range'.  This commit cleans up the
DAMON code in that way.
Signed-off-by: SeongJae Park <sjpark@amazon.com>
---
 mm/damon-test.h | 36 ++++++++++++++++++------------------
 mm/damon.c      | 46 ++++++++++++++++++++--------------------------
 2 files changed, 38 insertions(+), 44 deletions(-)

diff --git a/mm/damon-test.h b/mm/damon-test.h
index 9dd2061502cb..6d01f0e782d5 100644
--- a/mm/damon-test.h
+++ b/mm/damon-test.h
@@ -177,7 +177,7 @@ static void damon_test_set_recording(struct kunit *test)
  */
 static void damon_test_three_regions_in_vmas(struct kunit *test)
 {
-	struct region regions[3] = {0,};
+	struct damon_addr_range regions[3] = {0,};
 	/* 10-20-25, 200-210-220, 300-305, 307-330 */
 	struct vm_area_struct vmas[] = {
 		(struct vm_area_struct) {.vm_start = 10, .vm_end = 20},
@@ -331,7 +331,7 @@ static struct damon_region *__nth_region_of(struct damon_task *t, int idx)
  */
 static void damon_do_test_apply_three_regions(struct kunit *test,
 				unsigned long *regions, int nr_regions,
-				struct region *three_regions,
+				struct damon_addr_range *three_regions,
 				unsigned long *expected, int nr_expected)
 {
 	struct damon_task *t;
@@ -369,10 +369,10 @@ static void damon_test_apply_three_regions1(struct kunit *test)
 	unsigned long regions[] = {10, 20, 20, 30, 50, 55, 55, 57, 57, 59,
 				70, 80, 80, 90, 90, 100};
 	/* 5-27, 45-55, 73-104 */
-	struct region new_three_regions[3] = {
-		(struct region){.start = 5, .end = 27},
-		(struct region){.start = 45, .end = 55},
-		(struct region){.start = 73, .end = 104} };
+	struct damon_addr_range new_three_regions[3] = {
+		(struct damon_addr_range){.start = 5, .end = 27},
+		(struct damon_addr_range){.start = 45, .end = 55},
+		(struct damon_addr_range){.start = 73, .end = 104} };
 	/* 5-20-27, 45-55, 73-80-90-104 */
 	unsigned long expected[] = {5, 20, 20, 27, 45, 55,
 				73, 80, 80, 90, 90, 104};
@@ -391,10 +391,10 @@ static void damon_test_apply_three_regions2(struct kunit *test)
 	unsigned long regions[] = {10, 20, 20, 30, 50, 55, 55, 57, 57, 59,
 				70, 80, 80, 90, 90, 100};
 	/* 5-27, 56-57, 65-104 */
-	struct region new_three_regions[3] = {
-		(struct region){.start = 5, .end = 27},
-		(struct region){.start = 56, .end = 57},
-		(struct region){.start = 65, .end = 104} };
+	struct damon_addr_range new_three_regions[3] = {
+		(struct damon_addr_range){.start = 5, .end = 27},
+		(struct damon_addr_range){.start = 56, .end = 57},
+		(struct damon_addr_range){.start = 65, .end = 104} };
 	/* 5-20-27, 56-57, 65-80-90-104 */
 	unsigned long expected[] = {5, 20, 20, 27, 56, 57,
 				65, 80, 80, 90, 90, 104};
@@ -415,10 +415,10 @@ static void damon_test_apply_three_regions3(struct kunit *test)
 	unsigned long regions[] = {10, 20, 20, 30, 50, 55, 55, 57, 57, 59,
 				70, 80, 80, 90, 90, 100};
 	/* 5-27, 61-63, 65-104 */
-	struct region new_three_regions[3] = {
-		(struct region){.start = 5, .end = 27},
-		(struct region){.start = 61, .end = 63},
-		(struct region){.start = 65, .end = 104} };
+	struct damon_addr_range new_three_regions[3] = {
+		(struct damon_addr_range){.start = 5, .end = 27},
+		(struct damon_addr_range){.start = 61, .end = 63},
+		(struct damon_addr_range){.start = 65, .end = 104} };
 	/* 5-20-27, 61-63, 65-80-90-104 */
 	unsigned long expected[] = {5, 20, 20, 27, 61, 63,
 				65, 80, 80, 90, 90, 104};
@@ -440,10 +440,10 @@ static void damon_test_apply_three_regions4(struct kunit *test)
 	unsigned long regions[] = {10, 20, 20, 30, 50, 55, 55, 57, 57, 59,
 				70, 80, 80, 90, 90, 100};
 	/* 5-7, 30-32, 65-68 */
-	struct region new_three_regions[3] = {
-		(struct region){.start = 5, .end = 7},
-		(struct region){.start = 30, .end = 32},
-		(struct region){.start = 65, .end = 68} };
+	struct damon_addr_range new_three_regions[3] = {
+		(struct damon_addr_range){.start = 5, .end = 7},
+		(struct damon_addr_range){.start = 30, .end = 32},
+		(struct damon_addr_range){.start = 65, .end = 68} };
 	/* expect 5-7, 30-32, 65-68 */
 	unsigned long expected[] = {5, 7, 30, 32, 65, 68};
diff --git a/mm/damon.c b/mm/damon.c
index a9676b804b0b..f6dd34425185 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -301,19 +301,15 @@ static int damon_split_region_evenly(struct damon_ctx *ctx,
 	return 0;
 }

-struct region {
-	unsigned long start;
-	unsigned long end;
-};
-
-static unsigned long sz_region(struct region *r)
+static unsigned long sz_range(struct damon_addr_range *r)
 {
 	return r->end - r->start;
 }

-static void swap_regions(struct region *r1, struct region *r2)
+static void swap_ranges(struct damon_addr_range *r1,
+			struct damon_addr_range *r2)
 {
-	struct region tmp;
+	struct damon_addr_range tmp;

 	tmp = *r1;
 	*r1 = *r2;
@@ -324,7 +320,7 @@ static void swap_regions(struct region *r1, struct region *r2)
  * Find three regions separated by two biggest unmapped regions
  *
  * vma		the head vma of the target address space
- * regions	an array of three 'struct region's that results will be saved
+ * regions	an array of three address ranges that results will be saved
  *
  * This function receives an address space and finds three regions in it which
  * separated by the two biggest unmapped regions in the space. Please refer to
@@ -334,9 +330,9 @@ static void swap_regions(struct region *r1, struct region *r2)
  * Returns 0 if success, or negative error code otherwise.
  */
 static int damon_three_regions_in_vmas(struct vm_area_struct *vma,
-				struct region regions[3])
+				struct damon_addr_range regions[3])
 {
-	struct region gap = {0}, first_gap = {0}, second_gap = {0};
+	struct damon_addr_range gap = {0}, first_gap = {0}, second_gap = {0};
 	struct vm_area_struct *last_vma = NULL;
 	unsigned long start = 0;

@@ -349,20 +345,20 @@ static int damon_three_regions_in_vmas(struct vm_area_struct *vma,
 		}
 		gap.start = last_vma->vm_end;
 		gap.end = vma->vm_start;
-		if (sz_region(&gap) > sz_region(&second_gap)) {
-			swap_regions(&gap, &second_gap);
-			if (sz_region(&second_gap) > sz_region(&first_gap))
-				swap_regions(&second_gap, &first_gap);
+		if (sz_range(&gap) > sz_range(&second_gap)) {
+			swap_ranges(&gap, &second_gap);
+			if (sz_range(&second_gap) > sz_range(&first_gap))
+				swap_ranges(&second_gap, &first_gap);
 		}
 		last_vma = vma;
 	}

-	if (!sz_region(&second_gap) || !sz_region(&first_gap))
+	if (!sz_range(&second_gap) || !sz_range(&first_gap))
 		return -EINVAL;

 	/* Sort the two biggest gaps by address */
 	if (first_gap.start > second_gap.start)
-		swap_regions(&first_gap, &second_gap);
+		swap_ranges(&first_gap, &second_gap);

 	/* Store the result */
 	regions[0].start = ALIGN(start, MIN_REGION);
@@ -381,7 +377,7 @@ static int damon_three_regions_in_vmas(struct vm_area_struct *vma,
  * Returns 0 on success, negative error code otherwise.
  */
 static int damon_three_regions_of(struct damon_task *t,
-				struct region regions[3])
+				struct damon_addr_range regions[3])
 {
 	struct mm_struct *mm;
 	int rc;
@@ -443,7 +439,7 @@ static int damon_three_regions_of(struct damon_task *t,
 static void damon_init_regions_of(struct damon_ctx *c, struct damon_task *t)
 {
 	struct damon_region *r, *m = NULL;
-	struct region regions[3];
+	struct damon_addr_range regions[3];
 	int i;

 	if (damon_three_regions_of(t, regions)) {
@@ -977,13 +973,11 @@ static bool kdamond_need_update_regions(struct damon_ctx *ctx)
 }

 /*
- * Check whether regions are intersecting
- *
- * Note that this function checks 'struct damon_region' and 'struct region'.
+ * Check whether a region is intersecting an address range
  *
  * Returns true if it is.
  */
-static bool damon_intersect(struct damon_region *r, struct region *re)
+static bool damon_intersect(struct damon_region *r, struct damon_addr_range *re)
 {
 	return !(r->ar.end <= re->start || re->end <= r->ar.start);
 }
@@ -995,7 +989,7 @@ static bool damon_intersect(struct damon_region *r, struct region *re)
  * bregions	the three big regions of the task
  */
 static void damon_apply_three_regions(struct damon_ctx *ctx,
-		struct damon_task *t, struct region bregions[3])
+		struct damon_task *t, struct damon_addr_range bregions[3])
 {
 	struct damon_region *r, *next;
 	unsigned int i = 0;
@@ -1014,7 +1008,7 @@ static void damon_apply_three_regions(struct damon_ctx *ctx,
 	for (i = 0; i < 3; i++) {
 		struct damon_region *first = NULL, *last;
 		struct damon_region *newr;
-		struct region *br;
+		struct damon_addr_range *br;

 		br = &bregions[i];
 		/* Get the first and last regions which intersects with br */
@@ -1047,7 +1041,7 @@ static void damon_apply_three_regions(struct damon_ctx *ctx,
  */
 static void kdamond_update_regions(struct damon_ctx *ctx)
 {
-	struct region three_regions[3];
+	struct damon_addr_range three_regions[3];
 	struct damon_task *t;

 	damon_for_each_task(t, ctx) {

From patchwork Wed Jun 3 14:11:29 2020
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 11585725
From: SeongJae Park
Subject: [RFC v2 3/9] mm/damon: Make monitoring target regions init/update configurable
Date: Wed, 3 Jun 2020 16:11:29 +0200
Message-ID: <20200603141135.10575-4-sjpark@amazon.com>
In-Reply-To: <20200603141135.10575-1-sjpark@amazon.com>
References: <20200603141135.10575-1-sjpark@amazon.com>

From: SeongJae Park

This commit allows DAMON users to configure their own monitoring target region initializer and updater. Using these, users can confine the monitored address space as they want. For example, users can track only the stack, the heap, a shared memory area, or a specific file-backed area.

Signed-off-by: SeongJae Park
---
 include/linux/damon.h | 13 +++++++++++++
 mm/damon.c            | 17 ++++++++++-------
 2 files changed, 23 insertions(+), 7 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h index b4b06ca905a2..a1b6810ce0eb 100644 --- a/include/linux/damon.h +++ b/include/linux/damon.h @@ -158,9 +158,16 @@ struct damos { * @tasks_list: Head of monitoring target tasks (&damon_task) list. * @schemes_list: Head of schemes (&damos) list. * + * @init_target_regions: Constructs initial monitoring target regions. + * @update_target_regions: Updates monitoring target regions. * @sample_cb: Called for each sampling interval. * @aggregate_cb: Called for each aggregation interval.
* + * The monitoring thread calls @init_target_regions before starting the + * monitoring, @update_target_regions for each @regions_update_interval. By + * setting these callbacks to appropriate functions, therefore, users can + * monitor specific range of virtual address space. + * * @sample_cb and @aggregate_cb are called from @kdamond for each of the * sampling intervals and aggregation intervals, respectively. Therefore, * users can safely access to the monitoring results via @tasks_list without @@ -190,10 +197,16 @@ struct damon_ctx { struct list_head schemes_list; /* 'damos' objects */ /* callbacks */ + void (*init_target_regions)(struct damon_ctx *context); + void (*update_target_regions)(struct damon_ctx *context); void (*sample_cb)(struct damon_ctx *context); void (*aggregate_cb)(struct damon_ctx *context); }; +/* Reference callback implementations for virtual memory */ +void kdamond_init_vm_regions(struct damon_ctx *ctx); +void kdamond_update_vm_regions(struct damon_ctx *ctx); + int damon_set_pids(struct damon_ctx *ctx, int *pids, ssize_t nr_pids); int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int, unsigned long aggr_int, unsigned long regions_update_int, diff --git a/mm/damon.c b/mm/damon.c index f6dd34425185..2a3c1abb9b47 100644 --- a/mm/damon.c +++ b/mm/damon.c @@ -72,6 +72,9 @@ static struct damon_ctx damon_user_ctx = { .regions_update_interval = 1000 * 1000, .min_nr_regions = 10, .max_nr_regions = 1000, + + .init_target_regions = kdamond_init_vm_regions, + .update_target_regions = kdamond_update_vm_regions, }; /* @@ -324,7 +327,7 @@ static void swap_ranges(struct damon_addr_range *r1, * * This function receives an address space and finds three regions in it which * separated by the two biggest unmapped regions in the space. Please refer to - * below comments of 'damon_init_regions_of()' function to know why this is + * below comments of 'damon_init_vm_regions_of()' function to know why this is * necessary. 
* * Returns 0 if success, or negative error code otherwise. @@ -436,7 +439,7 @@ static int damon_three_regions_of(struct damon_task *t, * * */ -static void damon_init_regions_of(struct damon_ctx *c, struct damon_task *t) +static void damon_init_vm_regions_of(struct damon_ctx *c, struct damon_task *t) { struct damon_region *r, *m = NULL; struct damon_addr_range regions[3]; @@ -465,12 +468,12 @@ static void damon_init_regions_of(struct damon_ctx *c, struct damon_task *t) } /* Initialize '->regions_list' of every task */ -static void kdamond_init_regions(struct damon_ctx *ctx) +void kdamond_init_vm_regions(struct damon_ctx *ctx) { struct damon_task *t; damon_for_each_task(t, ctx) - damon_init_regions_of(ctx, t); + damon_init_vm_regions_of(ctx, t); } static void damon_mkold(struct mm_struct *mm, unsigned long addr) @@ -1039,7 +1042,7 @@ static void damon_apply_three_regions(struct damon_ctx *ctx, /* * Update regions for current memory mappings */ -static void kdamond_update_regions(struct damon_ctx *ctx) +void kdamond_update_vm_regions(struct damon_ctx *ctx) { struct damon_addr_range three_regions[3]; struct damon_task *t; @@ -1101,7 +1104,7 @@ static int kdamond_fn(void *data) unsigned int max_nr_accesses = 0; pr_info("kdamond (%d) starts\n", ctx->kdamond->pid); - kdamond_init_regions(ctx); + ctx->init_target_regions(ctx); kdamond_write_record_header(ctx); @@ -1124,7 +1127,7 @@ static int kdamond_fn(void *data) } if (kdamond_need_update_regions(ctx)) - kdamond_update_regions(ctx); + ctx->update_target_regions(ctx); } damon_flush_rbuffer(ctx); damon_for_each_task(t, ctx) { From patchwork Wed Jun 3 14:11:30 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: SeongJae Park X-Patchwork-Id: 11585729 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7B2FA618 for ; Wed, 3 Jun 2020 14:14:21 +0000 
From: SeongJae Park
Subject: [RFC v2 4/9] mm/damon/debugfs: Allow users to set initial monitoring target regions
Date: Wed, 3 Jun 2020 16:11:30 +0200
Message-ID: <20200603141135.10575-5-sjpark@amazon.com>
In-Reply-To: <20200603141135.10575-1-sjpark@amazon.com>
References: <20200603141135.10575-1-sjpark@amazon.com>

From: SeongJae Park

Some users would want to monitor only a part of the entire virtual memory address space. The '->init_target_regions' callback is provided for that purpose, but only the programming interface can use it. For this reason, this commit introduces a new debugfs file, 'init_regions'. Users can specify the initial monitoring target address regions they want by writing a special input to the file. The input should describe each region, one per line, in the below form:

    <pid> <start address> <end address>

This commit also makes the default '->init_target_regions' callback, 'kdamond_init_vm_regions()', do nothing if the user has already set the initial target regions. Note that the regions will be updated to cover the entire mapped memory after one 'regions update interval'. If you want the regions to not be updated after the initial setting, you could set the interval to a very long time, say, a few decades.
Signed-off-by: SeongJae Park --- mm/damon.c | 169 +++++++++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 163 insertions(+), 6 deletions(-) diff --git a/mm/damon.c b/mm/damon.c index 2a3c1abb9b47..c7806e81248a 100644 --- a/mm/damon.c +++ b/mm/damon.c @@ -472,8 +472,10 @@ void kdamond_init_vm_regions(struct damon_ctx *ctx) { struct damon_task *t; - damon_for_each_task(t, ctx) - damon_init_vm_regions_of(ctx, t); + damon_for_each_task(t, ctx) { + if (!nr_damon_regions(t)) + damon_init_vm_regions_of(ctx, t); + } } static void damon_mkold(struct mm_struct *mm, unsigned long addr) @@ -1685,6 +1687,154 @@ static ssize_t debugfs_record_write(struct file *file, return ret; } +/* This function should not be called while the monitoring is ongoing */ +static ssize_t sprint_init_regions(struct damon_ctx *c, char *buf, ssize_t len) +{ + struct damon_task *t; + struct damon_region *r; + int written = 0; + int rc; + + damon_for_each_task(t, c) { + damon_for_each_region(r, t) { + rc = snprintf(&buf[written], len - written, + "%d %lu %lu\n", + t->pid, r->ar.start, r->ar.end); + if (!rc) + return -ENOMEM; + written += rc; + } + } + return written; +} + +static ssize_t debugfs_init_regions_read(struct file *file, char __user *buf, + size_t count, loff_t *ppos) +{ + struct damon_ctx *ctx = &damon_user_ctx; + char *kbuf; + ssize_t len; + + kbuf = kmalloc(count, GFP_KERNEL); + if (!kbuf) + return -ENOMEM; + + mutex_lock(&ctx->kdamond_lock); + if (ctx->kdamond) { + mutex_unlock(&ctx->kdamond_lock); + return -EBUSY; + } + + len = sprint_init_regions(ctx, kbuf, count); + mutex_unlock(&ctx->kdamond_lock); + if (len < 0) + goto out; + len = simple_read_from_buffer(buf, count, ppos, kbuf, len); + +out: + kfree(kbuf); + return len; +} + +static int add_init_region(struct damon_ctx *c, + int pid, struct damon_addr_range *ar) +{ + struct damon_task *t; + struct damon_region *r, *prev; + int rc = -EINVAL; + + if (ar->start >= ar->end) + return -EINVAL; + + damon_for_each_task(t, c) { 
+ if (t->pid == pid) { + r = damon_new_region(c, ar->start, ar->end); + if (!r) + return -ENOMEM; + damon_add_region(r, t); + if (nr_damon_regions(t) > 1) { + prev = damon_prev_region(r); + if (prev->ar.end > r->ar.start) { + damon_destroy_region(r); + return -EINVAL; + } + } + rc = 0; + } + } + return rc; +} + +static int set_init_regions(struct damon_ctx *c, const char *str, ssize_t len) +{ + struct damon_task *t; + struct damon_region *r, *next; + int pos = 0, parsed, ret; + int pid; + struct damon_addr_range ar; + int err; + + damon_for_each_task(t, c) { + damon_for_each_region_safe(r, next, t) + damon_destroy_region(r); + } + + while (pos < len) { + ret = sscanf(&str[pos], "%d %lu %lu%n", + &pid, &ar.start, &ar.end, &parsed); + if (ret != 3) + break; + err = add_init_region(c, pid, &ar); + if (err) + goto fail; + pos += parsed; + } + + return 0; + +fail: + damon_for_each_task(t, c) { + damon_for_each_region_safe(r, next, t) + damon_destroy_region(r); + } + return err; +} + +static ssize_t debugfs_init_regions_write(struct file *file, const char __user + *buf, size_t count, loff_t *ppos) +{ + struct damon_ctx *ctx = &damon_user_ctx; + char *kbuf; + ssize_t ret; + int err; + + if (*ppos) + return -EINVAL; + + kbuf = kmalloc(count, GFP_KERNEL); + if (!kbuf) + return -ENOMEM; + + ret = simple_write_to_buffer(kbuf, count, ppos, buf, count); + if (ret < 0) + goto out; + + mutex_lock(&ctx->kdamond_lock); + if (ctx->kdamond) { + ret = -EBUSY; + goto unlock_out; + } + + err = set_init_regions(ctx, kbuf, ret); + if (err) + ret = err; + +unlock_out: + mutex_unlock(&ctx->kdamond_lock); +out: + kfree(kbuf); + return ret; +} static ssize_t debugfs_attrs_read(struct file *file, char __user *buf, size_t count, loff_t *ppos) @@ -1766,6 +1916,12 @@ static const struct file_operations record_fops = { .write = debugfs_record_write, }; +static const struct file_operations init_regions_fops = { + .owner = THIS_MODULE, + .read = debugfs_init_regions_read, + .write = 
debugfs_init_regions_write, +}; + static const struct file_operations attrs_fops = { .owner = THIS_MODULE, .read = debugfs_attrs_read, @@ -1776,10 +1932,11 @@ static struct dentry *debugfs_root; static int __init damon_debugfs_init(void) { - const char * const file_names[] = {"attrs", "record", "schemes", - "pids", "monitor_on"}; - const struct file_operations *fops[] = {&attrs_fops, &record_fops, - &schemes_fops, &pids_fops, &monitor_on_fops}; + const char * const file_names[] = {"attrs", "init_regions", "record", + "schemes", "pids", "monitor_on"}; + const struct file_operations *fops[] = {&attrs_fops, + &init_regions_fops, &record_fops, &schemes_fops, &pids_fops, + &monitor_on_fops}; int i; debugfs_root = debugfs_create_dir("damon", NULL);

From patchwork Wed Jun 3 14:11:31 2020
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 11585727
From: SeongJae Park
Subject: [RFC v2 5/9] Docs/damon: Document 'initial_regions' feature
Date: Wed, 3 Jun 2020 16:11:31 +0200
Message-ID: <20200603141135.10575-6-sjpark@amazon.com>
In-Reply-To: <20200603141135.10575-1-sjpark@amazon.com>
References: <20200603141135.10575-1-sjpark@amazon.com>

From: SeongJae Park

This commit documents the 'initial_regions' feature.
Signed-off-by: SeongJae Park
---
 Documentation/admin-guide/mm/damon/usage.rst | 34 ++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/Documentation/admin-guide/mm/damon/usage.rst b/Documentation/admin-guide/mm/damon/usage.rst
index 18a19c35b4f3..137ed770c2d6 100644
--- a/Documentation/admin-guide/mm/damon/usage.rst
+++ b/Documentation/admin-guide/mm/damon/usage.rst
@@ -326,6 +326,40 @@ having pids 42 and 4242 as the processes to be monitored and check it again::
 
 Note that setting the pids doesn't start the monitoring.
 
+Initial Monitoring Target Regions
+---------------------------------
+
+DAMON automatically sets and updates the monitoring target regions so that
+entire memory mappings of target processes can be covered. However, users
+might want to limit the monitoring region to specific address ranges, such as
+the heap, the stack, or a specific file-mapped area. Or, some users might know
+the initial access pattern of their workloads and therefore want to set optimal
+initial regions for the 'adaptive regions adjustment'.
+
+In such cases, users can explicitly set the initial monitoring target regions
+as they want, by writing proper values to the ``init_regions`` file. Each line
+of the input should represent one region in the below form::
+
+    <pid> <start address> <end address>
+
+The ``pid`` should already be in the ``pids`` file, and the regions should be
+passed in address order. For example, the below commands will set a couple of
+address ranges, ``1-100`` and ``100-200``, as the initial monitoring target
+regions of process 42, and another couple of address ranges, ``20-40`` and
+``50-100``, as those of process 4242::
+
+    # cd <debugfs>/damon
+    # echo "42 1 100
+    42 100 200
+    4242 20 40
+    4242 50 100" > init_regions
+
+Note that this sets the initial monitoring target regions only. DAMON will
+automatically update the boundaries of the regions after one ``regions update
+interval``. Therefore, users should set the ``regions update interval`` large
+enough.
+
+
 Record
 ------

From patchwork Wed Jun 3 14:11:32 2020
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 11585731
From: SeongJae Park
Subject: [RFC v2 6/9] mm/damon: Make access check primitive configurable
Date: Wed, 3 Jun 2020 16:11:32 +0200
Message-ID: <20200603141135.10575-7-sjpark@amazon.com>
In-Reply-To: <20200603141135.10575-1-sjpark@amazon.com>
References: <20200603141135.10575-1-sjpark@amazon.com>

From: SeongJae Park

DAMON assumes the target regions are in the virtual address space and therefore uses PTE Accessed bit checking for its access checks. However, some CPUs provide H/W-based memory access check features that are usually more accurate and lighter-weight than PTE Accessed bit checking, and some users would want to use those in special use cases. Also, some users might want to use DAMON for different address spaces, such as the physical memory space, which need different ways to check accesses. This commit therefore allows DAMON users to configure the low level access check primitives as they want.
Signed-off-by: SeongJae Park --- include/linux/damon.h | 13 +++++++++++-- mm/damon.c | 20 +++++++++++--------- 2 files changed, 22 insertions(+), 11 deletions(-) diff --git a/include/linux/damon.h b/include/linux/damon.h index a1b6810ce0eb..1a788bfd1b4e 100644 --- a/include/linux/damon.h +++ b/include/linux/damon.h @@ -160,13 +160,18 @@ struct damos { * * @init_target_regions: Constructs initial monitoring target regions. * @update_target_regions: Updates monitoring target regions. + * @prepare_access_checks: Prepares next access check of target regions. + * @check_accesses: Checks the access of target regions. * @sample_cb: Called for each sampling interval. * @aggregate_cb: Called for each aggregation interval. * * The monitoring thread calls @init_target_regions before starting the - * monitoring, @update_target_regions for each @regions_update_interval. By + * monitoring, @update_target_regions for each @regions_update_interval, and + * @prepare_access_checks and @check_accesses for each @sample_interval. By * setting these callbacks to appropriate functions, therefore, users can - * monitor specific range of virtual address space. + * monitor any address space with special handling. If these are not + * explicitly configured, the functions for virtual memory address space + * monitoring are used. * * @sample_cb and @aggregate_cb are called from @kdamond for each of the * sampling intervals and aggregation intervals, respectively. 
Therefore, @@ -199,6 +204,8 @@ struct damon_ctx { /* callbacks */ void (*init_target_regions)(struct damon_ctx *context); void (*update_target_regions)(struct damon_ctx *context); + void (*prepare_access_checks)(struct damon_ctx *context); + unsigned int (*check_accesses)(struct damon_ctx *context); void (*sample_cb)(struct damon_ctx *context); void (*aggregate_cb)(struct damon_ctx *context); }; @@ -206,6 +213,8 @@ struct damon_ctx { /* Reference callback implementations for virtual memory */ void kdamond_init_vm_regions(struct damon_ctx *ctx); void kdamond_update_vm_regions(struct damon_ctx *ctx); +void kdamond_prepare_vm_access_checks(struct damon_ctx *ctx); +unsigned int kdamond_check_vm_accesses(struct damon_ctx *ctx); int damon_set_pids(struct damon_ctx *ctx, int *pids, ssize_t nr_pids); int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int, diff --git a/mm/damon.c b/mm/damon.c index c7806e81248a..f5cbc97a3bbc 100644 --- a/mm/damon.c +++ b/mm/damon.c @@ -75,6 +75,8 @@ static struct damon_ctx damon_user_ctx = { .init_target_regions = kdamond_init_vm_regions, .update_target_regions = kdamond_update_vm_regions, + .prepare_access_checks = kdamond_prepare_vm_access_checks, + .check_accesses = kdamond_check_vm_accesses, }; /* @@ -507,7 +509,7 @@ static void damon_mkold(struct mm_struct *mm, unsigned long addr) #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ } -static void damon_prepare_access_check(struct damon_ctx *ctx, +static void damon_prepare_vm_access_check(struct damon_ctx *ctx, struct mm_struct *mm, struct damon_region *r) { r->sampling_addr = damon_rand(r->ar.start, r->ar.end); @@ -515,7 +517,7 @@ static void damon_prepare_access_check(struct damon_ctx *ctx, damon_mkold(mm, r->sampling_addr); } -static void kdamond_prepare_access_checks(struct damon_ctx *ctx) +void kdamond_prepare_vm_access_checks(struct damon_ctx *ctx) { struct damon_task *t; struct mm_struct *mm; @@ -526,7 +528,7 @@ static void kdamond_prepare_access_checks(struct damon_ctx *ctx) 
if (!mm) continue; damon_for_each_region(r, t) - damon_prepare_access_check(ctx, mm, r); + damon_prepare_vm_access_check(ctx, mm, r); mmput(mm); } } @@ -564,7 +566,7 @@ static bool damon_young(struct mm_struct *mm, unsigned long addr, * mm 'mm_struct' for the given virtual address space * r the region to be checked */ -static void damon_check_access(struct damon_ctx *ctx, +static void damon_check_vm_access(struct damon_ctx *ctx, struct mm_struct *mm, struct damon_region *r) { static struct mm_struct *last_mm; @@ -588,7 +590,7 @@ static void damon_check_access(struct damon_ctx *ctx, last_addr = r->sampling_addr; } -static unsigned int kdamond_check_accesses(struct damon_ctx *ctx) +unsigned int kdamond_check_vm_accesses(struct damon_ctx *ctx) { struct damon_task *t; struct mm_struct *mm; @@ -600,12 +602,12 @@ static unsigned int kdamond_check_accesses(struct damon_ctx *ctx) if (!mm) continue; damon_for_each_region(r, t) { - damon_check_access(ctx, mm, r); + damon_check_vm_access(ctx, mm, r); max_nr_accesses = max(r->nr_accesses, max_nr_accesses); } - mmput(mm); } + return max_nr_accesses; } @@ -1111,13 +1113,13 @@ static int kdamond_fn(void *data) kdamond_write_record_header(ctx); while (!kdamond_need_stop(ctx)) { - kdamond_prepare_access_checks(ctx); + ctx->prepare_access_checks(ctx); if (ctx->sample_cb) ctx->sample_cb(ctx); usleep_range(ctx->sample_interval, ctx->sample_interval + 1); - max_nr_accesses = kdamond_check_accesses(ctx); + max_nr_accesses = ctx->check_accesses(ctx); if (kdamond_aggregate_interval_passed(ctx)) { kdamond_merge_regions(ctx, max_nr_accesses / 10);

From patchwork Wed Jun 3 14:11:33 2020
From: SeongJae Park
Subject: [RFC v2 7/9] mm/damon: Implement callbacks for physical memory monitoring
Date: Wed, 3 Jun 2020 16:11:33 +0200
Message-ID:
<20200603141135.10575-8-sjpark@amazon.com>
In-Reply-To: <20200603141135.10575-1-sjpark@amazon.com>

From: SeongJae Park

This commit implements the four callbacks (->init_target_regions, ->update_target_regions, ->prepare_access_checks, and ->check_accesses) for basic access monitoring of the physical memory address space. By setting the callback pointers to these, users can easily monitor accesses to physical memory. Internally, it uses the PTE Accessed bit, similarly to the virtual memory support. Also, it supports only the page frames that idle page tracking supports; indeed, most of the code is stolen from idle page tracking. Users who want to use other access check primitives, or to monitor page frames not supported by this implementation, can implement their own callbacks.
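The callback indirection that this patch and the previous one build on can be sketched in plain user-space C. This is a hedged illustration, not kernel code: every toy_* name is a hypothetical stand-in, and the two stub callbacks merely count invocations where the real implementations would clear and test the PTE Accessed bit of each region's sampling address.

```c
#include <stddef.h>

/* Simplified model of struct damon_ctx: the monitoring loop calls
 * through function pointers instead of calling the virtual-memory
 * access check functions directly, so any address space can be
 * monitored by swapping the callbacks. */
struct toy_ctx {
	void (*prepare_access_checks)(struct toy_ctx *ctx);
	unsigned int (*check_accesses)(struct toy_ctx *ctx);
	unsigned int nr_samples;	/* stand-in for real state */
};

/* Default "virtual memory" callbacks, reduced to counting samples. */
static void toy_prepare_vm_checks(struct toy_ctx *ctx)
{
	/* the real callback would mkold each region's sampling address */
	ctx->nr_samples++;
}

static unsigned int toy_check_vm_accesses(struct toy_ctx *ctx)
{
	/* the real callback would test Accessed bits and return the
	 * maximum nr_accesses over all regions */
	return ctx->nr_samples;
}

/* One sampling iteration of the kdamond main loop, reduced to the two
 * callback invocations the patch introduces. */
static unsigned int toy_sample_once(struct toy_ctx *ctx)
{
	ctx->prepare_access_checks(ctx);
	return ctx->check_accesses(ctx);
}

/* Run a few iterations with the default (VM-style) callbacks. */
unsigned int toy_run(unsigned int iters)
{
	struct toy_ctx ctx = {
		.prepare_access_checks = toy_prepare_vm_checks,
		.check_accesses = toy_check_vm_accesses,
	};
	unsigned int i, max = 0;

	for (i = 0; i < iters; i++)
		max = toy_sample_once(&ctx);
	return max;
}
```

Swapping in a different pair of callbacks, as the physical-memory patch does, requires only changing the two initializers.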
Signed-off-by: SeongJae Park --- include/linux/damon.h | 5 ++ mm/damon.c | 184 ++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 189 insertions(+) diff --git a/include/linux/damon.h b/include/linux/damon.h index 1a788bfd1b4e..f96503a532ea 100644 --- a/include/linux/damon.h +++ b/include/linux/damon.h @@ -216,6 +216,11 @@ void kdamond_update_vm_regions(struct damon_ctx *ctx); void kdamond_prepare_vm_access_checks(struct damon_ctx *ctx); unsigned int kdamond_check_vm_accesses(struct damon_ctx *ctx); +void kdamond_init_phys_regions(struct damon_ctx *ctx); +void kdamond_update_phys_regions(struct damon_ctx *ctx); +void kdamond_prepare_phys_access_checks(struct damon_ctx *ctx); +unsigned int kdamond_check_phys_accesses(struct damon_ctx *ctx); + int damon_set_pids(struct damon_ctx *ctx, int *pids, ssize_t nr_pids); int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int, unsigned long aggr_int, unsigned long regions_update_int, diff --git a/mm/damon.c b/mm/damon.c index f5cbc97a3bbc..6a5c6d540580 100644 --- a/mm/damon.c +++ b/mm/damon.c @@ -19,7 +19,9 @@ #include #include #include +#include #include +#include #include #include #include @@ -480,6 +482,11 @@ void kdamond_init_vm_regions(struct damon_ctx *ctx) } } +/* Do nothing. 
Users should set the initial regions by themselves */ +void kdamond_init_phys_regions(struct damon_ctx *ctx) +{ +} + static void damon_mkold(struct mm_struct *mm, unsigned long addr) { pte_t *pte = NULL; @@ -611,6 +618,178 @@ unsigned int kdamond_check_vm_accesses(struct damon_ctx *ctx) return max_nr_accesses; } +/* access check functions for physical address based regions */ + +/* This code is stollen from page_idle.c */ +static struct page *damon_phys_get_page(unsigned long pfn) +{ + struct page *page; + pg_data_t *pgdat; + + if (!pfn_valid(pfn)) + return NULL; + + page = pfn_to_page(pfn); + if (!page || !PageLRU(page) || + !get_page_unless_zero(page)) + return NULL; + + pgdat = page_pgdat(page); + spin_lock_irq(&pgdat->lru_lock); + if (unlikely(!PageLRU(page))) { + put_page(page); + page = NULL; + } + spin_unlock_irq(&pgdat->lru_lock); + return page; +} + +static bool damon_page_mkold(struct page *page, struct vm_area_struct *vma, + unsigned long addr, void *arg) +{ + damon_mkold(vma->vm_mm, addr); + return true; +} + +static void damon_phys_mkold(unsigned long paddr) +{ + struct page *page = damon_phys_get_page(PHYS_PFN(paddr)); + struct rmap_walk_control rwc = { + .rmap_one = damon_page_mkold, + .anon_lock = page_lock_anon_vma_read, + }; + bool need_lock; + + if (!page) + return; + + if (!page_mapped(page) || !page_rmapping(page)) + return; + + need_lock = !PageAnon(page) || PageKsm(page); + if (need_lock && !trylock_page(page)) + return; + + rmap_walk(page, &rwc); + + if (need_lock) + unlock_page(page); + put_page(page); +} + +static void damon_prepare_phys_access_check(struct damon_ctx *ctx, + struct damon_region *r) +{ + r->sampling_addr = damon_rand(r->ar.start, r->ar.end); + + damon_phys_mkold(r->sampling_addr); +} + +void kdamond_prepare_phys_access_checks(struct damon_ctx *ctx) +{ + struct damon_task *t; + struct damon_region *r; + + damon_for_each_task(t, ctx) { + damon_for_each_region(r, t) + damon_prepare_phys_access_check(ctx, r); + } +} + +struct 
damon_phys_access_chk_result { + unsigned long page_sz; + bool accessed; +}; + +static bool damon_page_accessed(struct page *page, struct vm_area_struct *vma, + unsigned long addr, void *arg) +{ + struct damon_phys_access_chk_result *result = arg; + + result->accessed = damon_young(vma->vm_mm, addr, &result->page_sz); + + /* If accessed, stop walking */ + return !result->accessed; +} + +static bool damon_phys_young(unsigned long paddr, unsigned long *page_sz) +{ + struct page *page = damon_phys_get_page(PHYS_PFN(paddr)); + struct damon_phys_access_chk_result result = { + .page_sz = PAGE_SIZE, + .accessed = false, + }; + struct rmap_walk_control rwc = { + .arg = &result, + .rmap_one = damon_page_accessed, + .anon_lock = page_lock_anon_vma_read, + }; + bool need_lock; + + if (!page) + return false; + + if (!page_mapped(page) || !page_rmapping(page)) + return false; + + need_lock = !PageAnon(page) || PageKsm(page); + if (need_lock && !trylock_page(page)) + return false; + + rmap_walk(page, &rwc); + + if (need_lock) + unlock_page(page); + put_page(page); + + *page_sz = result.page_sz; + return result.accessed; +} + +/* + * Check whether the region was accessed after the last preparation + * + * mm 'mm_struct' for the given virtual address space + * r the region of physical address space that needs to be checked + */ +static void damon_check_phys_access(struct damon_ctx *ctx, + struct damon_region *r) +{ + static unsigned long last_addr; + static unsigned long last_page_sz = PAGE_SIZE; + static bool last_accessed; + + /* If the region is in the last checked page, reuse the result */ + if (ALIGN_DOWN(last_addr, last_page_sz) == + ALIGN_DOWN(r->sampling_addr, last_page_sz)) { + if (last_accessed) + r->nr_accesses++; + return; + } + + last_accessed = damon_phys_young(r->sampling_addr, &last_page_sz); + if (last_accessed) + r->nr_accesses++; + + last_addr = r->sampling_addr; +} + +unsigned int kdamond_check_phys_accesses(struct damon_ctx *ctx) +{ + struct damon_task *t; + 
struct damon_region *r; + unsigned int max_nr_accesses = 0; + + damon_for_each_task(t, ctx) { + damon_for_each_region(r, t) { + damon_check_phys_access(ctx, r); + max_nr_accesses = max(r->nr_accesses, max_nr_accesses); + } + } + + return max_nr_accesses; +} + /* * damon_check_reset_time_interval() - Check if a time interval is elapsed. * @baseline: the time to check whether the interval has elapsed since @@ -1058,6 +1237,11 @@ void kdamond_update_vm_regions(struct damon_ctx *ctx) } } +/* Do nothing. If necessary, users should update regions in other callbacks */ +void kdamond_update_phys_regions(struct damon_ctx *ctx) +{ +} + /* * Check whether current monitoring should be stopped *

From patchwork Wed Jun 3 14:11:34 2020
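The last-result caching used by damon_check_phys_access() in the patch above, where consecutive regions whose sampling addresses fall into the same page reuse the previous rmap-walk result, can be sketched in user-space C. Everything prefixed toy_ is a hypothetical stand-in; unlike the patch, which uses static variables, this sketch uses an explicit sentinel for "no previous check", and a counter stands in for the cost of the real rmap walk.

```c
#include <stdbool.h>
#include <stddef.h>

#define TOY_PAGE_SIZE	4096UL
#define TOY_ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

/* Stand-in for damon_phys_young(): pretend even-numbered page frames
 * were accessed, and count how often the (expensive) check runs. */
static bool toy_page_young(unsigned long paddr, unsigned long *walks)
{
	(*walks)++;
	return (paddr / TOY_PAGE_SIZE) % 2 == 0;
}

/* Count accesses over a batch of sampling addresses, reusing the last
 * page's result when two consecutive addresses share a page, as the
 * patch does. Returns the number of addresses counted as accessed;
 * *walks reports how many real checks were needed. */
unsigned int toy_check_batch(const unsigned long *addrs, size_t n,
			     unsigned long *walks)
{
	unsigned long last_addr = (unsigned long)-1;	/* sentinel */
	bool last_accessed = false;
	unsigned int nr_accesses = 0;
	size_t i;

	*walks = 0;
	for (i = 0; i < n; i++) {
		/* same page as the last check: reuse the cached result */
		if (last_addr != (unsigned long)-1 &&
		    TOY_ALIGN_DOWN(last_addr, TOY_PAGE_SIZE) ==
		    TOY_ALIGN_DOWN(addrs[i], TOY_PAGE_SIZE)) {
			if (last_accessed)
				nr_accesses++;
			continue;
		}
		last_accessed = toy_page_young(addrs[i], walks);
		if (last_accessed)
			nr_accesses++;
		last_addr = addrs[i];
	}
	return nr_accesses;
}
```

Four addresses spread over two pages thus cost only two checks, which is the point of the cache: regions are often smaller than a page after splitting, so neighbouring regions frequently sample the same frame.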
From: SeongJae Park
Subject: [RFC v2 8/9] mm/damon/debugfs: Support physical memory monitoring
Date: Wed, 3 Jun 2020 16:11:34 +0200
Message-ID: <20200603141135.10575-9-sjpark@amazon.com>
In-Reply-To: <20200603141135.10575-1-sjpark@amazon.com>

From: SeongJae Park

This commit makes the debugfs interface support physical memory monitoring, in addition to virtual memory monitoring. Users can start physical memory monitoring by writing a special keyword, 'paddr\n', to the 'pids' debugfs file.
Then, DAMON will detect the special keyword and configure the callbacks of the debugfs user's monitoring context for physical memory. This internally adds one fake monitoring target process, whose pid is -1. Unlike virtual memory monitoring, DAMON debugfs will not automatically set the monitoring target regions, so users should also set the monitoring target address regions using the 'init_regions' debugfs file, giving '-1' as the 'pid' in the input. Finally, physical memory monitoring will not be automatically terminated, because the fake monitoring target process never exits. The user should explicitly turn the monitoring off by writing 'off' to the 'monitor_on' debugfs file.

Signed-off-by: SeongJae Park --- mm/damon.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/mm/damon.c b/mm/damon.c index 6a5c6d540580..7361d5885118 100644 --- a/mm/damon.c +++ b/mm/damon.c @@ -1263,6 +1263,9 @@ static bool kdamond_need_stop(struct damon_ctx *ctx) return true; damon_for_each_task(t, ctx) { + if (t->pid == -1) + return false; + task = damon_get_task_struct(t); if (task) { put_task_struct(task); @@ -1796,6 +1799,23 @@ static ssize_t debugfs_pids_write(struct file *file, if (ret < 0) goto out; + if (!strncmp(kbuf, "paddr\n", count)) { + /* Configure the context for physical memory monitoring */ + ctx->init_target_regions = kdamond_init_phys_regions; + ctx->update_target_regions = kdamond_update_phys_regions; + ctx->prepare_access_checks = kdamond_prepare_phys_access_checks; + ctx->check_accesses = kdamond_check_phys_accesses; + + /* Set the fake target task pid as -1 */ + snprintf(kbuf, count, "-1"); + } else { + /* Configure the context for virtual memory monitoring */ + ctx->init_target_regions = kdamond_init_vm_regions; + ctx->update_target_regions = kdamond_update_vm_regions; + ctx->prepare_access_checks = kdamond_prepare_vm_access_checks; + ctx->check_accesses = kdamond_check_vm_accesses; + } + targets =
str_to_pids(kbuf, ret, &nr_targets); if (!targets) { ret = -ENOMEM;

From patchwork Wed Jun 3 14:11:35 2020
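The keyword dispatch added to debugfs_pids_write() above can be reduced to the following user-space sketch. toy_pids_write() is a hypothetical stand-in that returns a tag naming the selected callback set instead of assigning the real function pointers; as in the patch, the 'paddr' keyword is rewritten in place to the fake target pid "-1" so the ordinary pid parsing can run afterwards.

```c
#include <stdio.h>
#include <string.h>

/* Model of the 'paddr' keyword handling: writing "paddr\n" to the
 * 'pids' file selects the physical-memory callbacks and substitutes
 * the fake target pid -1; any other input keeps the virtual-memory
 * callbacks and is parsed as a pid list. */
const char *toy_pids_write(char *kbuf, size_t count)
{
	if (!strncmp(kbuf, "paddr\n", count)) {
		/* physical memory monitoring: fake target with pid -1,
		 * so the downstream pid parser still has input */
		snprintf(kbuf, count, "-1");
		return "phys";
	}
	/* default: virtual memory monitoring of the written pids */
	return "virt";
}
```

Rewriting the buffer rather than branching around the pid parser keeps the rest of the write handler unchanged, which is why reading the file back shows -1 for physical memory monitoring.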
From: SeongJae Park
Subject: [RFC v2 9/9] Docs/damon: Document physical memory monitoring support
Date: Wed, 3 Jun 2020 16:11:35 +0200
Message-ID: <20200603141135.10575-10-sjpark@amazon.com>
In-Reply-To: <20200603141135.10575-1-sjpark@amazon.com>

From: SeongJae Park

This commit adds a description of the physical memory monitoring usage to the DAMON document.

Signed-off-by: SeongJae Park --- Documentation/admin-guide/mm/damon/usage.rst | 42 ++++++++++++++------ 1 file changed, 29 insertions(+), 13 deletions(-) diff --git a/Documentation/admin-guide/mm/damon/usage.rst b/Documentation/admin-guide/mm/damon/usage.rst index 137ed770c2d6..359745f0dbfb 100644 --- a/Documentation/admin-guide/mm/damon/usage.rst +++ b/Documentation/admin-guide/mm/damon/usage.rst @@ -314,27 +314,42 @@ check it again:: Target PIDs ----------- -Users can get and set the pids of monitoring target processes by reading from -and writing to the ``pids`` file.
For example, below commands set processes -having pids 42 and 4242 as the processes to be monitored and check it again:: +To monitor the virtual memory address spaces of specific processes, users can +get and set the pids of monitoring target processes by reading from and writing +to the ``pids`` file. For example, below commands set processes having pids 42 +and 4242 as the processes to be monitored and check it again:: # cd /damon # echo 42 4242 > pids # cat pids 42 4242 +Users can also monitor the physical memory address space of the system by +writing a special keyword, "``paddr\n``" to the file. In this case, reading the +file will show ``-1``, as below:: + + # cd /damon + # echo paddr > pids + # cat pids + -1 + Note that setting the pids doesn't start the monitoring. Initla Monitoring Target Regions -------------------------------- -DAMON automatically sets and updates the monitoring target regions so that -entire memory mappings of target processes can be covered. However, users -might want to limit the monitoring region to specific address ranges, such as -the heap, the stack, or specific file-mapped area. Or, some users might know -the initial access pattern of their workloads and therefore want to set optimal -initial regions for the 'adaptive regions adjustment'. +In case of the virtual memory monitoring, DAMON automatically sets and updates +the monitoring target regions so that entire memory mappings of target +processes can be covered. However, users might want to limit the monitoring +region to specific address ranges, such as the heap, the stack, or specific +file-mapped area. Or, some users might know the initial access pattern of +their workloads and therefore want to set optimal initial regions for the +'adaptive regions adjustment'. + +In contrast, DAMON do not automatically sets and updates the monitoring target +regions in case of physical memory monitoring. Therefore, users should set the +monitoring target regions by themselves. 
In such cases, users can explicitly set the initial monitoring target regions as they want, by writing proper values to the ``init_regions`` file. Each line @@ -354,10 +369,11 @@ region of process 42, and another couple of address ranges, ``20-40`` and 4242 20 40 4242 50 100" > init_regions -Note that this sets the initial monitoring target regions only. DAMON will -automatically updates the boundary of the regions after one ``regions update -interval``. Therefore, users should set the ``regions update interval`` large -enough. +Note that this sets the initial monitoring target regions only. In case of +virtual memory monitoring, DAMON will automatically updates the boundary of the +regions after one ``regions update interval``. Therefore, users should set the +``regions update interval`` large enough in this case, if they don't want the +update. Record
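A minimal sketch of the three-field ``<pid> <start> <end>`` line format the documentation describes for the ``init_regions`` file, with -1 as the pid for physical memory monitoring. toy_parse_init_region() is illustrative only and is not the kernel's actual parser; it merely shows the shape of a valid line and the basic sanity check a parser would apply.

```c
#include <stdio.h>

/* Parse one init_regions-style line, "<pid> <start> <end>".
 * Returns 0 on success, -1 on a malformed line or an empty/inverted
 * address range. For physical memory monitoring, pid is the fake
 * target id -1. */
int toy_parse_init_region(const char *line, int *pid,
			  unsigned long *start, unsigned long *end)
{
	if (sscanf(line, "%d %lu %lu", pid, start, end) != 3)
		return -1;	/* malformed line */
	if (*start >= *end)
		return -1;	/* empty or inverted range */
	return 0;
}
```

For example, a user monitoring the first 16 MiB of physical memory would write a line such as "-1 0 16777216", which this sketch accepts, while an inverted range like "42 100 50" is rejected.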