From patchwork Thu Apr 9 09:42:29 2020
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 11481507
From: SeongJae Park <sjpark@amazon.com>
Subject: [RFC PATCH 1/4] mm/damon: Use vm-independent address range concept
Date: Thu, 9 Apr 2020 11:42:29 +0200
Message-ID: <20200409094232.29680-2-sjpark@amazon.com>
In-Reply-To: <20200409094232.29680-1-sjpark@amazon.com>
References: <20200409094232.29680-1-sjpark@amazon.com>

DAMON's main idea is not limited to the virtual address space.  To
prepare for further expansion of the support for other address spaces,
including physical memory, this commit modifies one of its core
structs, 'struct damon_region', to use a virtual-memory-independent
address range concept.
Signed-off-by: SeongJae Park <sjpark@amazon.com>
---
 include/linux/damon.h | 11 ++++---
 mm/damon-test.h       | 46 +++++++++++++-------------
 mm/damon.c            | 76 +++++++++++++++++++++----------------------
 3 files changed, 68 insertions(+), 65 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index b0fa898ed6d8..d72dd524924f 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -15,17 +15,20 @@
 #include
 #include

+struct damon_addr_range {
+	unsigned long start;
+	unsigned long end;
+};
+
 /* Represents a monitoring target region on the virtual address space */
 struct damon_region {
-	unsigned long vm_start;
-	unsigned long vm_end;
+	struct damon_addr_range ar;
 	unsigned long sampling_addr;
 	unsigned int nr_accesses;
 	struct list_head list;

 	unsigned int age;
-	unsigned long last_vm_start;
-	unsigned long last_vm_end;
+	struct damon_addr_range last_ar;
 	unsigned int last_nr_accesses;
 };

diff --git a/mm/damon-test.h b/mm/damon-test.h
index af6a1e84b8eb..7fd66df1e493 100644
--- a/mm/damon-test.h
+++ b/mm/damon-test.h
@@ -78,8 +78,8 @@ static void damon_test_regions(struct kunit *test)
 	struct damon_task *t;

 	r = damon_new_region(&damon_user_ctx, 1, 2);
-	KUNIT_EXPECT_EQ(test, 1ul, r->vm_start);
-	KUNIT_EXPECT_EQ(test, 2ul, r->vm_end);
+	KUNIT_EXPECT_EQ(test, 1ul, r->ar.start);
+	KUNIT_EXPECT_EQ(test, 2ul, r->ar.end);
 	KUNIT_EXPECT_EQ(test, 0u, r->nr_accesses);

 	t = damon_new_task(42);
@@ -255,7 +255,7 @@ static void damon_test_aggregate(struct kunit *test)
 	KUNIT_EXPECT_EQ(test, 3, it);

 	/* The aggregated information should be written in the buffer */
-	sr = sizeof(r->vm_start) + sizeof(r->vm_end) + sizeof(r->nr_accesses);
+	sr = sizeof(r->ar.start) + sizeof(r->ar.end) + sizeof(r->nr_accesses);
 	sp = sizeof(t->pid) + sizeof(unsigned int) + 3 * sr;
 	sz = sizeof(struct timespec64) + sizeof(unsigned int) + 3 * sp;
 	KUNIT_EXPECT_EQ(test, (unsigned int)sz, ctx->rbuf_offset);
@@ -325,8 +325,8 @@ static void damon_do_test_apply_three_regions(struct kunit *test,
 	for (i = 0; i < nr_expected / 2; i++) {
 		r = damon_nth_region_of(t, i);
-		KUNIT_EXPECT_EQ(test, r->vm_start, expected[i * 2]);
-		KUNIT_EXPECT_EQ(test, r->vm_end, expected[i * 2 + 1]);
+		KUNIT_EXPECT_EQ(test, r->ar.start, expected[i * 2]);
+		KUNIT_EXPECT_EQ(test, r->ar.end, expected[i * 2 + 1]);
 	}

 	damon_cleanup_global_state();
@@ -445,8 +445,8 @@ static void damon_test_split_evenly(struct kunit *test)

 	i = 0;
 	damon_for_each_region(r, t) {
-		KUNIT_EXPECT_EQ(test, r->vm_start, i++ * 10);
-		KUNIT_EXPECT_EQ(test, r->vm_end, i * 10);
+		KUNIT_EXPECT_EQ(test, r->ar.start, i++ * 10);
+		KUNIT_EXPECT_EQ(test, r->ar.end, i * 10);
 	}
 	damon_free_task(t);
@@ -460,11 +460,11 @@ static void damon_test_split_evenly(struct kunit *test)
 	damon_for_each_region(r, t) {
 		if (i == 4)
 			break;
-		KUNIT_EXPECT_EQ(test, r->vm_start, 5 + 10 * i++);
-		KUNIT_EXPECT_EQ(test, r->vm_end, 5 + 10 * i);
+		KUNIT_EXPECT_EQ(test, r->ar.start, 5 + 10 * i++);
+		KUNIT_EXPECT_EQ(test, r->ar.end, 5 + 10 * i);
 	}
-	KUNIT_EXPECT_EQ(test, r->vm_start, 5 + 10 * i);
-	KUNIT_EXPECT_EQ(test, r->vm_end, 59ul);
+	KUNIT_EXPECT_EQ(test, r->ar.start, 5 + 10 * i);
+	KUNIT_EXPECT_EQ(test, r->ar.end, 59ul);
 	damon_free_task(t);

 	t = damon_new_task(42);
@@ -474,8 +474,8 @@ static void damon_test_split_evenly(struct kunit *test)
 	KUNIT_EXPECT_EQ(test, nr_damon_regions(t), 1u);
 	damon_for_each_region(r, t) {
-		KUNIT_EXPECT_EQ(test, r->vm_start, 5ul);
-		KUNIT_EXPECT_EQ(test, r->vm_end, 6ul);
+		KUNIT_EXPECT_EQ(test, r->ar.start, 5ul);
+		KUNIT_EXPECT_EQ(test, r->ar.end, 6ul);
 	}
 	damon_free_task(t);
 }
@@ -489,12 +489,12 @@ static void damon_test_split_at(struct kunit *test)
 	r = damon_new_region(&damon_user_ctx, 0, 100);
 	damon_add_region(r, t);
 	damon_split_region_at(&damon_user_ctx, r, 25);
-	KUNIT_EXPECT_EQ(test, r->vm_start, 0ul);
-	KUNIT_EXPECT_EQ(test, r->vm_end, 25ul);
+	KUNIT_EXPECT_EQ(test, r->ar.start, 0ul);
+	KUNIT_EXPECT_EQ(test, r->ar.end, 25ul);

 	r = damon_next_region(r);
-	KUNIT_EXPECT_EQ(test, r->vm_start, 25ul);
-	KUNIT_EXPECT_EQ(test, r->vm_end, 100ul);
+	KUNIT_EXPECT_EQ(test, r->ar.start, 25ul);
+	KUNIT_EXPECT_EQ(test, r->ar.end, 100ul);

 	damon_free_task(t);
 }
@@ -514,8 +514,8 @@ static void damon_test_merge_two(struct kunit *test)
 	damon_add_region(r2, t);

 	damon_merge_two_regions(r, r2);
-	KUNIT_EXPECT_EQ(test, r->vm_start, 0ul);
-	KUNIT_EXPECT_EQ(test, r->vm_end, 300ul);
+	KUNIT_EXPECT_EQ(test, r->ar.start, 0ul);
+	KUNIT_EXPECT_EQ(test, r->ar.end, 300ul);
 	KUNIT_EXPECT_EQ(test, r->nr_accesses, 16u);

 	i = 0;
@@ -554,10 +554,10 @@ static void damon_test_merge_regions_of(struct kunit *test)
 	KUNIT_EXPECT_EQ(test, nr_damon_regions(t), 5u);
 	for (i = 0; i < 5; i++) {
 		r = damon_nth_region_of(t, i);
-		KUNIT_EXPECT_EQ(test, r->vm_start, saddrs[i]);
-		KUNIT_EXPECT_EQ(test, r->vm_end, eaddrs[i]);
-		KUNIT_EXPECT_EQ(test, r->last_vm_start, lsa[i]);
-		KUNIT_EXPECT_EQ(test, r->last_vm_end, lea[i]);
+		KUNIT_EXPECT_EQ(test, r->ar.start, saddrs[i]);
+		KUNIT_EXPECT_EQ(test, r->ar.end, eaddrs[i]);
+		KUNIT_EXPECT_EQ(test, r->last_ar.start, lsa[i]);
+		KUNIT_EXPECT_EQ(test, r->last_ar.end, lea[i]);
 	}

 	damon_free_task(t);
diff --git a/mm/damon.c b/mm/damon.c
index 3f93da898d72..f9958952d09e 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -72,7 +72,7 @@ static struct damon_ctx damon_user_ctx = {
  * Returns the pointer to the new struct if success, or NULL otherwise
  */
 static struct damon_region *damon_new_region(struct damon_ctx *ctx,
-				unsigned long vm_start, unsigned long vm_end)
+				unsigned long start, unsigned long end)
 {
 	struct damon_region *region;

@@ -80,14 +80,14 @@ static struct damon_region *damon_new_region(struct damon_ctx *ctx,
 	if (!region)
 		return NULL;

-	region->vm_start = vm_start;
-	region->vm_end = vm_end;
+	region->ar.start = start;
+	region->ar.end = end;
 	region->nr_accesses = 0;
 	INIT_LIST_HEAD(&region->list);

 	region->age = 0;
-	region->last_vm_start = vm_start;
-	region->last_vm_end = vm_end;
+	region->last_ar.start = start;
+	region->last_ar.end = end;

 	return region;
 }
@@ -282,16 +282,16 @@ static int damon_split_region_evenly(struct damon_ctx *ctx,
 	if (!r || !nr_pieces)
 		return -EINVAL;

-	orig_end = r->vm_end;
-	sz_orig = r->vm_end - r->vm_start;
+	orig_end = r->ar.end;
+	sz_orig = r->ar.end - r->ar.start;
 	sz_piece = sz_orig / nr_pieces;

 	if (!sz_piece)
 		return -EINVAL;

-	r->vm_end = r->vm_start + sz_piece;
+	r->ar.end = r->ar.start + sz_piece;
 	next = damon_next_region(r);
-	for (start = r->vm_end; start + sz_piece <= orig_end;
+	for (start = r->ar.end; start + sz_piece <= orig_end;
 			start += sz_piece) {
 		n = damon_new_region(ctx, start, start + sz_piece);
 		damon_insert_region(n, r, next);
@@ -299,7 +299,7 @@ static int damon_split_region_evenly(struct damon_ctx *ctx,
 	}
 	/* complement last region for possible rounding error */
 	if (n)
-		n->vm_end = orig_end;
+		n->ar.end = orig_end;

 	return 0;
 }
@@ -507,7 +507,7 @@ static void damon_mkold(struct mm_struct *mm, unsigned long addr)
 static void damon_prepare_access_check(struct damon_ctx *ctx,
 			struct mm_struct *mm, struct damon_region *r)
 {
-	r->sampling_addr = damon_rand(ctx, r->vm_start, r->vm_end);
+	r->sampling_addr = damon_rand(ctx, r->ar.start, r->ar.end);

 	damon_mkold(mm, r->sampling_addr);
 }
@@ -708,12 +708,12 @@ static void kdamond_reset_aggregated(struct damon_ctx *c)
 		nr = nr_damon_regions(t);
 		damon_write_rbuf(c, &nr, sizeof(nr));
 		damon_for_each_region(r, t) {
-			damon_write_rbuf(c, &r->vm_start, sizeof(r->vm_start));
-			damon_write_rbuf(c, &r->vm_end, sizeof(r->vm_end));
+			damon_write_rbuf(c, &r->ar.start, sizeof(r->ar.start));
+			damon_write_rbuf(c, &r->ar.end, sizeof(r->ar.end));
 			damon_write_rbuf(c, &r->nr_accesses,
 					sizeof(r->nr_accesses));
 			trace_damon_aggregated(t->pid, nr,
-					r->vm_start, r->vm_end, r->nr_accesses);
+					r->ar.start, r->ar.end, r->nr_accesses);
 			r->last_nr_accesses = r->nr_accesses;
 			r->nr_accesses = 0;
 		}
@@ -730,10 +730,10 @@ static void kdamond_reset_aggregated(struct damon_ctx *c)
  */
 static void damon_do_count_age(struct damon_region *r, unsigned int threshold)
 {
-	unsigned long sz_threshold = (r->vm_end - r->vm_start) / 5;
+	unsigned long sz_threshold = (r->ar.end - r->ar.start) / 5;

-	if (diff_of(r->vm_start, r->last_vm_start) +
-			diff_of(r->vm_end, r->last_vm_end) > sz_threshold)
+	if (diff_of(r->ar.start, r->last_ar.start) +
+			diff_of(r->ar.end, r->last_ar.end) > sz_threshold)
 		r->age = 0;
 	else if (diff_of(r->nr_accesses, r->last_nr_accesses) > threshold)
 		r->age = 0;
@@ -773,8 +773,8 @@ static int damos_madvise(struct damon_task *task, struct damon_region *r,
 	if (!mm)
 		goto put_task_out;

-	ret = do_madvise(t, mm, PAGE_ALIGN(r->vm_start),
-			PAGE_ALIGN(r->vm_end - r->vm_start), behavior);
+	ret = do_madvise(t, mm, PAGE_ALIGN(r->ar.start),
+			PAGE_ALIGN(r->ar.end - r->ar.start), behavior);
 	mmput(mm);
put_task_out:
 	put_task_struct(t);
@@ -819,7 +819,7 @@ static void damon_do_apply_schemes(struct damon_ctx *c, struct damon_task *t,
 	unsigned long sz;

 	damon_for_each_schemes(c, s) {
-		sz = r->vm_end - r->vm_start;
+		sz = r->ar.end - r->ar.start;
 		if ((s->min_sz_region && sz < s->min_sz_region) ||
 				(s->max_sz_region && s->max_sz_region < sz))
 			continue;
@@ -847,7 +847,7 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
 	}
 }

-#define sz_damon_region(r) (r->vm_end - r->vm_start)
+#define sz_damon_region(r) (r->ar.end - r->ar.start)

 /*
  * Merge two adjacent regions into one region
@@ -860,20 +860,20 @@ static void damon_merge_two_regions(struct damon_region *l,
 	l->nr_accesses = (l->nr_accesses * sz_l + r->nr_accesses * sz_r) /
 			(sz_l + sz_r);
 	l->age = (l->age * sz_l + r->age * sz_r) / (sz_l + sz_r);
-	l->vm_end = r->vm_end;
+	l->ar.end = r->ar.end;
 	damon_destroy_region(r);
 }

 static inline void set_last_area(struct damon_region *r, struct region *last)
 {
-	r->last_vm_start = last->start;
-	r->last_vm_end = last->end;
+	r->last_ar.start = last->start;
+	r->last_ar.end = last->end;
 }

 static inline void get_last_area(struct damon_region *r, struct region *last)
 {
-	last->start = r->last_vm_start;
-	last->end = r->last_vm_end;
+	last->start = r->last_ar.start;
+	last->end = r->last_ar.end;
 }

 /*
@@ -905,7 +905,7 @@ static void damon_merge_regions_of(struct damon_task *t, unsigned int thres)
 	unsigned long sz_mergee = 0;	/* size of current mergee */

 	damon_for_each_region_safe(r, next, t) {
-		if (!prev || prev->vm_end != r->vm_start ||
+		if (!prev || prev->ar.end != r->ar.start ||
 				diff_of(prev->nr_accesses, r->nr_accesses) > thres) {
 			if (sz_biggest)
 				set_last_area(prev, &biggest_mergee);
@@ -928,7 +928,7 @@ static void damon_merge_regions_of(struct damon_task *t, unsigned int thres)
 		 * If next region and current region is not originated from
 		 * same region, initialize the size of mergee.
 		 */
-		if (r->last_vm_start != next->last_vm_start)
+		if (r->last_ar.start != next->last_ar.start)
 			sz_mergee = 0;

 		damon_merge_two_regions(prev, r);
@@ -966,14 +966,14 @@ static void damon_split_region_at(struct damon_ctx *ctx,
 {
 	struct damon_region *new;

-	new = damon_new_region(ctx, r->vm_start + sz_r, r->vm_end);
+	new = damon_new_region(ctx, r->ar.start + sz_r, r->ar.end);
 	new->age = r->age;
-	new->last_vm_start = r->vm_start;
+	new->last_ar.start = r->ar.start;
 	new->last_nr_accesses = r->last_nr_accesses;

-	r->last_vm_start = r->vm_start;
-	r->last_vm_end = r->vm_end;
-	r->vm_end = new->vm_start;
+	r->last_ar.start = r->ar.start;
+	r->last_ar.end = r->ar.end;
+	r->ar.end = new->ar.start;

 	damon_insert_region(new, r, damon_next_region(r));
 }
@@ -989,7 +989,7 @@ static void damon_split_regions_of(struct damon_ctx *ctx, struct damon_task *t)
 		 * 10 percent and at most 90% of original region
 		 */
 		sz_left_region = (prandom_u32_state(&ctx->rndseed) % 9 + 1) *
-			(r->vm_end - r->vm_start) / 10;
+			(r->ar.end - r->ar.start) / 10;
 		/* Do not allow blank region */
 		if (sz_left_region == 0)
 			continue;
@@ -1034,7 +1034,7 @@ static bool kdamond_need_update_regions(struct damon_ctx *ctx)

 static bool damon_intersect(struct damon_region *r, struct region *re)
 {
-	return !(r->vm_end <= re->start || re->end <= r->vm_start);
+	return !(r->ar.end <= re->start || re->end <= r->ar.start);
 }

 /*
@@ -1073,7 +1073,7 @@ static void damon_apply_three_regions(struct damon_ctx *ctx,
 				first = r;
 			last = r;
 		}
-		if (r->vm_start >= br->end)
+		if (r->ar.start >= br->end)
 			break;
 	}
 	if (!first) {
@@ -1081,8 +1081,8 @@ static void damon_apply_three_regions(struct damon_ctx *ctx,
 		newr = damon_new_region(ctx, br->start, br->end);
 		damon_insert_region(newr, damon_prev_region(r), r);
 	} else {
-		first->vm_start = br->start;
-		last->vm_end = br->end;
+		first->ar.start = br->start;
+		last->ar.end = br->end;
 		}
 	}
 }

From patchwork Thu Apr 9 09:42:30 2020
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 11481511
From: SeongJae Park <sjpark@amazon.com>
Subject: [RFC PATCH 2/4] mm/damon: Clean up code using 'struct damon_addr_range'
Date: Thu, 9 Apr 2020 11:42:30 +0200
Message-ID: <20200409094232.29680-3-sjpark@amazon.com>
In-Reply-To: <20200409094232.29680-1-sjpark@amazon.com>
References: <20200409094232.29680-1-sjpark@amazon.com>

There is unnecessarily duplicated code in DAMON that can be eliminated
by using the new 'struct damon_addr_range'.  This commit cleans up the
DAMON code in that way.
Signed-off-by: SeongJae Park <sjpark@amazon.com>
---
 mm/damon-test.h | 36 ++++++++++++++--------------
 mm/damon.c      | 64 +++++++++++++++++++------------------------
 2 files changed, 42 insertions(+), 58 deletions(-)

diff --git a/mm/damon-test.h b/mm/damon-test.h
index 7fd66df1e493..7b2c903f1357 100644
--- a/mm/damon-test.h
+++ b/mm/damon-test.h
@@ -165,7 +165,7 @@ static void damon_test_set_pids(struct kunit *test)
  */
 static void damon_test_three_regions_in_vmas(struct kunit *test)
 {
-	struct region regions[3] = {0,};
+	struct damon_addr_range regions[3] = {0,};
 	/* 10-20-25, 200-210-220, 300-305, 307-330 */
 	struct vm_area_struct vmas[] = {
 		(struct vm_area_struct) {.vm_start = 10, .vm_end = 20},
@@ -306,7 +306,7 @@ static void damon_test_write_rbuf(struct kunit *test)
  */
 static void damon_do_test_apply_three_regions(struct kunit *test,
 				unsigned long *regions, int nr_regions,
-				struct region *three_regions,
+				struct damon_addr_range *three_regions,
 				unsigned long *expected, int nr_expected)
 {
 	struct damon_task *t;
@@ -344,10 +344,10 @@ static void damon_test_apply_three_regions1(struct kunit *test)
 	unsigned long regions[] = {10, 20, 20, 30, 50, 55, 55, 57, 57, 59,
				70, 80, 80, 90, 90, 100};
 	/* 5-27, 45-55, 73-104 */
-	struct region new_three_regions[3] = {
-		(struct region){.start = 5, .end = 27},
-		(struct region){.start = 45, .end = 55},
-		(struct region){.start = 73, .end = 104} };
+	struct damon_addr_range new_three_regions[3] = {
+		(struct damon_addr_range){.start = 5, .end = 27},
+		(struct damon_addr_range){.start = 45, .end = 55},
+		(struct damon_addr_range){.start = 73, .end = 104} };
 	/* 5-20-27, 45-55, 73-80-90-104 */
 	unsigned long expected[] = {5, 20, 20, 27, 45, 55,
				73, 80, 80, 90, 90, 104};
@@ -366,10 +366,10 @@ static void damon_test_apply_three_regions2(struct kunit *test)
 	unsigned long regions[] = {10, 20, 20, 30, 50, 55, 55, 57, 57, 59,
				70, 80, 80, 90, 90, 100};
 	/* 5-27, 56-57, 65-104 */
-	struct region new_three_regions[3] = {
-		(struct region){.start = 5, .end = 27},
-		(struct region){.start = 56, .end = 57},
-		(struct region){.start = 65, .end = 104} };
+	struct damon_addr_range new_three_regions[3] = {
+		(struct damon_addr_range){.start = 5, .end = 27},
+		(struct damon_addr_range){.start = 56, .end = 57},
+		(struct damon_addr_range){.start = 65, .end = 104} };
 	/* 5-20-27, 56-57, 65-80-90-104 */
 	unsigned long expected[] = {5, 20, 20, 27, 56, 57,
				65, 80, 80, 90, 90, 104};
@@ -390,10 +390,10 @@ static void damon_test_apply_three_regions3(struct kunit *test)
 	unsigned long regions[] = {10, 20, 20, 30, 50, 55, 55, 57, 57, 59,
				70, 80, 80, 90, 90, 100};
 	/* 5-27, 61-63, 65-104 */
-	struct region new_three_regions[3] = {
-		(struct region){.start = 5, .end = 27},
-		(struct region){.start = 61, .end = 63},
-		(struct region){.start = 65, .end = 104} };
+	struct damon_addr_range new_three_regions[3] = {
+		(struct damon_addr_range){.start = 5, .end = 27},
+		(struct damon_addr_range){.start = 61, .end = 63},
+		(struct damon_addr_range){.start = 65, .end = 104} };
 	/* 5-20-27, 61-63, 65-80-90-104 */
 	unsigned long expected[] = {5, 20, 20, 27, 61, 63,
				65, 80, 80, 90, 90, 104};
@@ -415,10 +415,10 @@ static void damon_test_apply_three_regions4(struct kunit *test)
 	unsigned long regions[] = {10, 20, 20, 30, 50, 55, 55, 57, 57, 59,
				70, 80, 80, 90, 90, 100};
 	/* 5-7, 30-32, 65-68 */
-	struct region new_three_regions[3] = {
-		(struct region){.start = 5, .end = 7},
-		(struct region){.start = 30, .end = 32},
-		(struct region){.start = 65, .end = 68} };
+	struct damon_addr_range new_three_regions[3] = {
+		(struct damon_addr_range){.start = 5, .end = 7},
+		(struct damon_addr_range){.start = 30, .end = 32},
+		(struct damon_addr_range){.start = 65, .end = 68} };
 	/* expect 5-7, 30-32, 65-68 */
 	unsigned long expected[] = {5, 7, 30, 32, 65, 68};
diff --git a/mm/damon.c b/mm/damon.c
index f9958952d09e..80fa3cab7720 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -304,19 +304,15 @@ static int damon_split_region_evenly(struct damon_ctx *ctx,
 	return 0;
 }

-struct region {
-	unsigned long start;
-	unsigned long end;
-};
-
-static unsigned long sz_region(struct region *r)
+static unsigned long sz_range(struct damon_addr_range *r)
 {
 	return r->end - r->start;
 }

-static void swap_regions(struct region *r1, struct region *r2)
+static void swap_ranges(struct damon_addr_range *r1,
+		struct damon_addr_range *r2)
 {
-	struct region tmp;
+	struct damon_addr_range tmp;

 	tmp = *r1;
 	*r1 = *r2;
@@ -327,7 +323,7 @@ static void swap_regions(struct region *r1, struct region *r2)
  * Find the three regions in an address space
  *
  * vma		the head vma of the target address space
- * regions	an array of three 'struct region's that results will be saved
+ * regions	an array of three address ranges that results will be saved
  *
 * This function receives an address space and finds three regions in it which
 * separated by the two biggest unmapped regions in the space. Please refer to
@@ -337,9 +333,9 @@ static void swap_regions(struct region *r1, struct region *r2)
 * Returns 0 if success, or negative error code otherwise.
 */
 static int damon_three_regions_in_vmas(struct vm_area_struct *vma,
-		struct region regions[3])
+		struct damon_addr_range regions[3])
 {
-	struct region gap = {0,}, first_gap = {0,}, second_gap = {0,};
+	struct damon_addr_range gap = {0,}, first_gap = {0,}, second_gap = {0,};
 	struct vm_area_struct *last_vma = NULL;
 	unsigned long start = 0;

@@ -352,20 +348,20 @@ static int damon_three_regions_in_vmas(struct vm_area_struct *vma,
 		}
 		gap.start = last_vma->vm_end;
 		gap.end = vma->vm_start;
-		if (sz_region(&gap) > sz_region(&second_gap)) {
-			swap_regions(&gap, &second_gap);
-			if (sz_region(&second_gap) > sz_region(&first_gap))
-				swap_regions(&second_gap, &first_gap);
+		if (sz_range(&gap) > sz_range(&second_gap)) {
+			swap_ranges(&gap, &second_gap);
+			if (sz_range(&second_gap) > sz_range(&first_gap))
+				swap_ranges(&second_gap, &first_gap);
 		}
 		last_vma = vma;
 	}

-	if (!sz_region(&second_gap) || !sz_region(&first_gap))
+	if (!sz_range(&second_gap) || !sz_range(&first_gap))
 		return -EINVAL;

 	/* Sort the two biggest gaps by address */
 	if (first_gap.start > second_gap.start)
-		swap_regions(&first_gap, &second_gap);
+		swap_ranges(&first_gap, &second_gap);

 	/* Store the result */
 	regions[0].start = start;
@@ -384,7 +380,7 @@ static int damon_three_regions_in_vmas(struct vm_area_struct *vma,
 * Returns 0 on success, negative error code otherwise.
 */
 static int damon_three_regions_of(struct damon_task *t,
-				struct region regions[3])
+				struct damon_addr_range regions[3])
 {
 	struct mm_struct *mm;
 	int rc;
@@ -446,7 +442,7 @@ static int damon_three_regions_of(struct damon_task *t,
 static void damon_init_regions_of(struct damon_ctx *c, struct damon_task *t)
 {
 	struct damon_region *r;
-	struct region regions[3];
+	struct damon_addr_range regions[3];
 	int i;

 	if (damon_three_regions_of(t, regions)) {
@@ -864,18 +860,6 @@ static void damon_merge_two_regions(struct damon_region *l,
 	damon_destroy_region(r);
 }

-static inline void set_last_area(struct damon_region *r, struct region *last)
-{
-	r->last_ar.start = last->start;
-	r->last_ar.end = last->end;
-}
-
-static inline void get_last_area(struct damon_region *r, struct region *last)
-{
-	last->start = r->last_ar.start;
-	last->end = r->last_ar.end;
-}
-
 /*
  * Merge adjacent regions having similar access frequencies
 *
@@ -900,7 +884,7 @@ static inline void get_last_area(struct damon_region *r, struct region *last)
 static void damon_merge_regions_of(struct damon_task *t, unsigned int thres)
 {
 	struct damon_region *r, *prev = NULL, *next;
-	struct region biggest_mergee;	/* the biggest region being merged */
+	struct damon_addr_range biggest_mergee;
 	unsigned long sz_biggest = 0;	/* size of the biggest_mergee */
 	unsigned long sz_mergee = 0;	/* size of current mergee */

@@ -908,11 +892,11 @@ static void damon_merge_regions_of(struct damon_task *t, unsigned int thres)
 		if (!prev || prev->ar.end != r->ar.start ||
 				diff_of(prev->nr_accesses, r->nr_accesses) > thres) {
 			if (sz_biggest)
-				set_last_area(prev, &biggest_mergee);
+				prev->last_ar = biggest_mergee;

 			prev = r;
 			sz_biggest = sz_damon_region(prev);
-			get_last_area(prev, &biggest_mergee);
+			biggest_mergee = prev->ar;
 			continue;
 		}

@@ -921,7 +905,7 @@ static void damon_merge_regions_of(struct damon_task *t, unsigned int thres)
 		sz_mergee += sz_damon_region(r);
 		if (sz_mergee > sz_biggest) {
 			sz_biggest = sz_mergee;
-			get_last_area(r, &biggest_mergee);
+			biggest_mergee = r->ar;
 		}

 		/*
@@ -934,7 +918,7 @@ static void damon_merge_regions_of(struct damon_task *t, unsigned int thres)
 		damon_merge_two_regions(prev, r);
 	}
 	if (sz_biggest)
-		set_last_area(prev, &biggest_mergee);
+		prev->last_ar = biggest_mergee;
 }

 /*
@@ -1032,7 +1016,7 @@ static bool kdamond_need_update_regions(struct damon_ctx *ctx)
 		ctx->regions_update_interval);
 }

-static bool damon_intersect(struct damon_region *r, struct region *re)
+static bool damon_intersect(struct damon_region *r, struct damon_addr_range *re)
 {
 	return !(r->ar.end <= re->start || re->end <= r->ar.start);
 }
@@ -1044,7 +1028,7 @@ static bool damon_intersect(struct damon_region *r, struct damon_addr_range *re)
 *
 * bregions	the three big regions of the task
 */
 static void damon_apply_three_regions(struct damon_ctx *ctx,
-		struct damon_task *t, struct region bregions[3])
+		struct damon_task *t, struct damon_addr_range bregions[3])
 {
 	struct damon_region *r, *next;
 	unsigned int i = 0;
@@ -1063,7 +1047,7 @@ static void damon_apply_three_regions(struct damon_ctx *ctx,
 	for (i = 0; i < 3; i++) {
 		struct damon_region *first = NULL, *last;
 		struct damon_region *newr;
-		struct region *br;
+		struct damon_addr_range *br;

 		br = &bregions[i];
 		/* Get the first and last regions which intersects with br */
@@ -1092,7 +1076,7 @@ static void damon_apply_three_regions(struct damon_ctx *ctx,
 */
 static void kdamond_update_regions(struct damon_ctx *ctx)
 {
-	struct region three_regions[3];
+	struct damon_addr_range three_regions[3];
 	struct damon_task *t;

 	damon_for_each_task(ctx, t) {

From patchwork Thu Apr 9 09:42:31 2020
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 11481513
kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id AB6B922250 for ; Thu, 9 Apr 2020 09:44:08 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=amazon.com header.i=@amazon.com header.b="E8jn4I9b" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org AB6B922250 Authentication-Results: mail.kernel.org; dmarc=fail (p=quarantine dis=none) header.from=amazon.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id CCC5C8E0012; Thu, 9 Apr 2020 05:44:07 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id CA4D98E0006; Thu, 9 Apr 2020 05:44:07 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BBB648E0012; Thu, 9 Apr 2020 05:44:07 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0228.hostedemail.com [216.40.44.228]) by kanga.kvack.org (Postfix) with ESMTP id AF2298E0006 for ; Thu, 9 Apr 2020 05:44:07 -0400 (EDT) Received: from smtpin24.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 741CE181AC212 for ; Thu, 9 Apr 2020 09:44:07 +0000 (UTC) X-FDA: 76687830534.24.ship47_410e88fb4a11d X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,prvs=3610f98c6=sjpark@amazon.com,,RULES_HIT:30003:30012:30034:30054:30064:30070,0,RBL:207.171.184.29:@amazon.com:.lbl8.mailshell.net-66.10.201.10 62.18.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:14,LUA_SUMMARY:none X-HE-Tag: ship47_410e88fb4a11d X-Filterd-Recvd-Size: 7219 Received: from smtp-fw-9102.amazon.com (smtp-fw-9102.amazon.com [207.171.184.29]) by imf36.hostedemail.com (Postfix) with ESMTP for ; Thu, 9 
Apr 2020 09:44:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1586425448; x=1617961448; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=DQOaqgUeRZ7W8RMsPCocm8k0cuRNjH4lt231zETKFlg=; b=E8jn4I9bMpRXPuCvjhiNe5w4kI4I1OvrXYF0o0z7L9u4z0nc3NI6Bsl4 v/xmeE8jaY99vSYnI9TjbtuSLLJy9lBUkpVI3ezLkDpYK00HTRuf6Hanm eCkHpoyoxrAUx4CYKxM9Apz6L2TH+X6Jc2LhdvYK+y4XAIg8WtpQMreSH I=; IronPort-SDR: H6WmciwMKvPvXMRrZo6To41lLKCMJ4ne5p/aVW5SfQm9/LK71jyq0F2lToYeWqvoo4Db5m5F0g hwN0S292SDYg== X-IronPort-AV: E=Sophos;i="5.72,362,1580774400"; d="scan'208";a="36177728" Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO email-inbound-relay-2c-6f38efd9.us-west-2.amazon.com) ([10.47.23.38]) by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP; 09 Apr 2020 09:44:05 +0000 Received: from EX13MTAUEA002.ant.amazon.com (pdx4-ws-svc-p6-lb7-vlan3.pdx.amazon.com [10.170.41.166]) by email-inbound-relay-2c-6f38efd9.us-west-2.amazon.com (Postfix) with ESMTPS id A6DFDA1CA8; Thu, 9 Apr 2020 09:44:02 +0000 (UTC) Received: from EX13D31EUA001.ant.amazon.com (10.43.165.15) by EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 9 Apr 2020 09:44:02 +0000 Received: from u886c93fd17d25d.ant.amazon.com (10.43.161.115) by EX13D31EUA001.ant.amazon.com (10.43.165.15) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 9 Apr 2020 09:43:48 +0000 From: SeongJae Park To: CC: SeongJae Park , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [RFC PATCH 3/4] mm/damon: Make monitoring target regions init/update configurable Date: Thu, 9 Apr 2020 11:42:31 +0200 Message-ID: <20200409094232.29680-4-sjpark@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200409094232.29680-1-sjpark@amazon.com> References: <20200409094232.29680-1-sjpark@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.161.115] X-ClientProxiedBy: 
EX13D15UWB003.ant.amazon.com (10.43.161.138) To EX13D31EUA001.ant.amazon.com (10.43.165.15) X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: SeongJae Park This commit allows DAMON users to configure their own monitoring target regions initializer / updater. Using this, users can confine the monitoring address spaces as they want. For example, users can track only stack, heap, or shared memory area, as they want. Signed-off-by: SeongJae Park --- include/linux/damon.h | 2 ++ mm/damon.c | 20 +++++++++++++------- 2 files changed, 15 insertions(+), 7 deletions(-) diff --git a/include/linux/damon.h b/include/linux/damon.h index d72dd524924f..a051b5d966ed 100644 --- a/include/linux/damon.h +++ b/include/linux/damon.h @@ -93,6 +93,8 @@ struct damon_ctx { struct list_head schemes_list; /* 'damos' objects */ /* callbacks */ + void (*init_target_regions)(struct damon_ctx *context); + void (*update_target_regions)(struct damon_ctx *context); void (*sample_cb)(struct damon_ctx *context); void (*aggregate_cb)(struct damon_ctx *context); }; diff --git a/mm/damon.c b/mm/damon.c index 80fa3cab7720..da0e7efdf1e1 100644 --- a/mm/damon.c +++ b/mm/damon.c @@ -57,6 +57,9 @@ /* Get a random number in [l, r) */ #define damon_rand(ctx, l, r) (l + prandom_u32_state(&ctx->rndseed) % (r - l)) +static void kdamond_init_vm_regions(struct damon_ctx *ctx); +static void kdamond_update_vm_regions(struct damon_ctx *ctx); + /* A monitoring context for debugfs interface users. 
*/ static struct damon_ctx damon_user_ctx = { .sample_interval = 5 * 1000, @@ -64,6 +67,9 @@ static struct damon_ctx damon_user_ctx = { .regions_update_interval = 1000 * 1000, .min_nr_regions = 10, .max_nr_regions = 1000, + + .init_target_regions = kdamond_init_vm_regions, + .update_target_regions = kdamond_update_vm_regions, }; /* @@ -327,7 +333,7 @@ static void swap_ranges(struct damon_addr_range *r1, * * This function receives an address space and finds three regions in it which * separated by the two biggest unmapped regions in the space. Please refer to - * below comments of 'damon_init_regions_of()' function to know why this is + * below comments of 'damon_init_vm_regions_of()' function to know why this is * necessary. * * Returns 0 if success, or negative error code otherwise. @@ -439,7 +445,7 @@ static int damon_three_regions_of(struct damon_task *t, * * */ -static void damon_init_regions_of(struct damon_ctx *c, struct damon_task *t) +static void damon_init_vm_regions_of(struct damon_ctx *c, struct damon_task *t) { struct damon_region *r; struct damon_addr_range regions[3]; @@ -463,12 +469,12 @@ static void damon_init_regions_of(struct damon_ctx *c, struct damon_task *t) } /* Initialize '->regions_list' of every task */ -static void kdamond_init_regions(struct damon_ctx *ctx) +static void kdamond_init_vm_regions(struct damon_ctx *ctx) { struct damon_task *t; damon_for_each_task(ctx, t) - damon_init_regions_of(ctx, t); + damon_init_vm_regions_of(ctx, t); } static void damon_mkold(struct mm_struct *mm, unsigned long addr) @@ -1074,7 +1080,7 @@ static void damon_apply_three_regions(struct damon_ctx *ctx, /* * Update regions for current memory mappings */ -static void kdamond_update_regions(struct damon_ctx *ctx) +static void kdamond_update_vm_regions(struct damon_ctx *ctx) { struct damon_addr_range three_regions[3]; struct damon_task *t; @@ -1126,7 +1132,7 @@ static int kdamond_fn(void *data) unsigned int max_nr_accesses = 0; pr_info("kdamond (%d) starts\n", 
ctx->kdamond->pid); - kdamond_init_regions(ctx); + ctx->init_target_regions(ctx); while (!kdamond_need_stop(ctx)) { kdamond_prepare_access_checks(ctx); if (ctx->sample_cb) @@ -1147,7 +1153,7 @@ static int kdamond_fn(void *data) } if (kdamond_need_update_regions(ctx)) - kdamond_update_regions(ctx); + ctx->update_target_regions(ctx); } damon_flush_rbuffer(ctx); damon_for_each_task(ctx, t) { From patchwork Thu Apr 9 09:42:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: SeongJae Park X-Patchwork-Id: 11481515 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1FA5714DD for ; Thu, 9 Apr 2020 09:44:37 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D9DBF20A8B for ; Thu, 9 Apr 2020 09:44:36 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=amazon.com header.i=@amazon.com header.b="nZ1R9RUr" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D9DBF20A8B Authentication-Results: mail.kernel.org; dmarc=fail (p=quarantine dis=none) header.from=amazon.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 10B518E0013; Thu, 9 Apr 2020 05:44:36 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 0BC3A8E0006; Thu, 9 Apr 2020 05:44:36 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EED038E0013; Thu, 9 Apr 2020 05:44:35 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0058.hostedemail.com [216.40.44.58]) by kanga.kvack.org (Postfix) with ESMTP id E3BB88E0006 for 
; Thu, 9 Apr 2020 05:44:35 -0400 (EDT) Received: from smtpin19.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id B6E52180AC483 for ; Thu, 9 Apr 2020 09:44:35 +0000 (UTC) X-FDA: 76687831710.19.soda46_4528822a18b28 X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,prvs=3610f98c6=sjpark@amazon.com,,RULES_HIT:30003:30054:30064:30070,0,RBL:72.21.198.25:@amazon.com:.lbl8.mailshell.net-62.18.0.100 66.10.201.10,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:15,LUA_SUMMARY:none X-HE-Tag: soda46_4528822a18b28 X-Filterd-Recvd-Size: 7862 Received: from smtp-fw-4101.amazon.com (smtp-fw-4101.amazon.com [72.21.198.25]) by imf25.hostedemail.com (Postfix) with ESMTP for ; Thu, 9 Apr 2020 09:44:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1586425476; x=1617961476; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=JM5sUrNXKRJcdMGMKqju9wpkLsnMHLh8mrvocthSOiA=; b=nZ1R9RUrc1vAKn1N4V61nW6sVZjYZXpt76VxYAGrg2dJEiBQpGIo11V5 aUzMINs5C4N9K4oa7aQbJefxSD+5nL+fRmW3rL+8970mRZKS05MisjELE gXv7Y5AygQKUmSAdz0eRxzHXjv514Lbf6Kc5nxoeSsVUtEdF9peXDmHG7 o=; IronPort-SDR: GznbAMVH7ewBpKdtsJ3sQUEMtalWl/P2BFIN4vgfkQARSVI5kXIJ4bsAcXOgyKormsKvROOcNu Dt2pMSgWnnSw== X-IronPort-AV: E=Sophos;i="5.72,362,1580774400"; d="scan'208";a="24913339" Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO email-inbound-relay-2a-6e2fc477.us-west-2.amazon.com) ([10.43.8.6]) by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP; 09 Apr 2020 09:44:20 +0000 Received: from EX13MTAUEA002.ant.amazon.com (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162]) by email-inbound-relay-2a-6e2fc477.us-west-2.amazon.com (Postfix) with ESMTPS id E2548A2391; Thu, 9 Apr 2020 09:44:17 +0000 (UTC) Received: from EX13D31EUA001.ant.amazon.com (10.43.165.15) by EX13MTAUEA002.ant.amazon.com (10.43.61.77) 
with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 9 Apr 2020 09:44:17 +0000 Received: from u886c93fd17d25d.ant.amazon.com (10.43.161.115) by EX13D31EUA001.ant.amazon.com (10.43.165.15) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 9 Apr 2020 09:44:03 +0000 From: SeongJae Park To: CC: SeongJae Park , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [RFC PATCH 4/4] mm/damon: Make access check configurable Date: Thu, 9 Apr 2020 11:42:32 +0200 Message-ID: <20200409094232.29680-5-sjpark@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200409094232.29680-1-sjpark@amazon.com> References: <20200409094232.29680-1-sjpark@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.161.115] X-ClientProxiedBy: EX13D15UWB003.ant.amazon.com (10.43.161.138) To EX13D31EUA001.ant.amazon.com (10.43.165.15) X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: SeongJae Park DAMON assumes the target region is in virtual address space and therefore uses PTE Accessed bit checking for access checking. However, some users might want to use architecture-specific, more accurate and light-weight access checking features. Also, some users might want to use DAMON for different address spaces such as physical memory space, which needs different ways to check the access. This commit allows DAMON users to configure the access check function to their own version. 
Signed-off-by: SeongJae Park
---
 include/linux/damon.h |  2 ++
 mm/damon.c            | 22 +++++++++++++---------
 2 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index a051b5d966ed..188d5b89b303 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -95,6 +95,8 @@ struct damon_ctx {
 	/* callbacks */
 	void (*init_target_regions)(struct damon_ctx *context);
 	void (*update_target_regions)(struct damon_ctx *context);
+	void (*prepare_access_checks)(struct damon_ctx *context);
+	unsigned int (*check_accesses)(struct damon_ctx *context);
 	void (*sample_cb)(struct damon_ctx *context);
 	void (*aggregate_cb)(struct damon_ctx *context);
 };
diff --git a/mm/damon.c b/mm/damon.c
index da0e7efdf1e1..20a66a6307d1 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -59,6 +59,8 @@
 
 static void kdamond_init_vm_regions(struct damon_ctx *ctx);
 static void kdamond_update_vm_regions(struct damon_ctx *ctx);
+static void kdamond_prepare_vm_access_checks(struct damon_ctx *ctx);
+static unsigned int kdamond_check_vm_accesses(struct damon_ctx *ctx);
 
 /* A monitoring context for debugfs interface users. */
 static struct damon_ctx damon_user_ctx = {
@@ -70,6 +72,8 @@ static struct damon_ctx damon_user_ctx = {
 
 	.init_target_regions = kdamond_init_vm_regions,
 	.update_target_regions = kdamond_update_vm_regions,
+	.prepare_access_checks = kdamond_prepare_vm_access_checks,
+	.check_accesses = kdamond_check_vm_accesses,
 };
 
@@ -506,7 +510,7 @@ static void damon_mkold(struct mm_struct *mm, unsigned long addr)
 #endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
 }
 
-static void damon_prepare_access_check(struct damon_ctx *ctx,
+static void damon_prepare_vm_access_check(struct damon_ctx *ctx,
 			struct mm_struct *mm, struct damon_region *r)
 {
 	r->sampling_addr = damon_rand(ctx, r->ar.start, r->ar.end);
@@ -514,7 +518,7 @@ static void damon_prepare_access_check(struct damon_ctx *ctx,
 	damon_mkold(mm, r->sampling_addr);
 }
 
-static void kdamond_prepare_access_checks(struct damon_ctx *ctx)
+static void kdamond_prepare_vm_access_checks(struct damon_ctx *ctx)
 {
 	struct damon_task *t;
 	struct mm_struct *mm;
@@ -525,7 +529,7 @@ static void kdamond_prepare_access_checks(struct damon_ctx *ctx)
 		if (!mm)
 			continue;
 		damon_for_each_region(r, t)
-			damon_prepare_access_check(ctx, mm, r);
+			damon_prepare_vm_access_check(ctx, mm, r);
 		mmput(mm);
 	}
 }
@@ -563,7 +567,7 @@ static bool damon_young(struct mm_struct *mm, unsigned long addr,
 * mm	'mm_struct' for the given virtual address space
 * r	the region to be checked
 */
-static void damon_check_access(struct damon_ctx *ctx,
+static void damon_check_vm_access(struct damon_ctx *ctx,
 			struct mm_struct *mm, struct damon_region *r)
 {
 	static struct mm_struct *last_mm;
@@ -587,7 +591,7 @@ static void damon_check_access(struct damon_ctx *ctx,
 	last_addr = r->sampling_addr;
 }
 
-static unsigned int kdamond_check_accesses(struct damon_ctx *ctx)
+static unsigned int kdamond_check_vm_accesses(struct damon_ctx *ctx)
 {
 	struct damon_task *t;
 	struct mm_struct *mm;
@@ -599,12 +603,12 @@ static unsigned int kdamond_check_accesses(struct damon_ctx *ctx)
 		if (!mm)
 			continue;
 		damon_for_each_region(r, t) {
-			damon_check_access(ctx, mm, r);
+			damon_check_vm_access(ctx, mm, r);
 			max_nr_accesses = max(r->nr_accesses, max_nr_accesses);
 		}
-
 		mmput(mm);
 	}
+
 	return max_nr_accesses;
 }
@@ -1134,13 +1138,13 @@ static int kdamond_fn(void *data)
 	pr_info("kdamond (%d) starts\n", ctx->kdamond->pid);
 	ctx->init_target_regions(ctx);
 	while (!kdamond_need_stop(ctx)) {
-		kdamond_prepare_access_checks(ctx);
+		ctx->prepare_access_checks(ctx);
 		if (ctx->sample_cb)
 			ctx->sample_cb(ctx);
 
 		usleep_range(ctx->sample_interval,
 			ctx->sample_interval + 1);
 
-		max_nr_accesses = kdamond_check_accesses(ctx);
+		max_nr_accesses = ctx->check_accesses(ctx);
 
 		if (kdamond_aggregate_interval_passed(ctx)) {
 			kdamond_merge_regions(ctx, max_nr_accesses / 10);