From patchwork Fri Jan 3 17:43:57 2025
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 13925765
From: SeongJae Park <sj@kernel.org>
To: Andrew Morton
Cc: SeongJae Park <sj@kernel.org>, damon@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 07/10] mm/damon/core: implement damos_walk()
Date: Fri, 3 Jan 2025 09:43:57 -0800
Message-Id: <20250103174400.54890-8-sj@kernel.org>
In-Reply-To: <20250103174400.54890-1-sj@kernel.org>
References: <20250103174400.54890-1-sj@kernel.org>
MIME-Version: 1.0

Introduce a new core layer interface, damos_walk().
It aims to replace some damon_callback usages that need additional
synchronization to access the regions that the ongoing kdamond's DAMOS
schemes are applied to.  It receives a function pointer and asks the kdamond
to invoke it for each region that any DAMOS action was tried to be applied
to, within one scheme apply interval for every scheme.  damos_walk() then
waits until the kdamond finishes the invocations for every scheme, or
cancels the request, and returns.

The kdamond invokes the function as requested within its main loop.  If it
is deactivated by DAMOS watermarks or exits the main loop, it marks the
request as canceled, so that damos_walk() can wake up and return.

Signed-off-by: SeongJae Park <sj@kernel.org>
---
 include/linux/damon.h |  33 ++++++++++-
 mm/damon/core.c       | 132 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 163 insertions(+), 2 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index ac2d42a50751..2889de3526c3 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -352,6 +352,31 @@ struct damos_filter {
 	struct list_head list;
 };
 
+struct damon_ctx;
+struct damos;
+
+/**
+ * struct damos_walk_control - Control damos_walk().
+ *
+ * @walk_fn:	Function to be called back for each region.
+ * @data:	Data that will be passed to walk functions.
+ *
+ * Control damos_walk(), which requests a specific kdamond to invoke the given
+ * function for each region that is eligible for actions of the kdamond's
+ * schemes.  Refer to damos_walk() for more details.
+ */
+struct damos_walk_control {
+	void (*walk_fn)(void *data, struct damon_ctx *ctx,
+			struct damon_target *t, struct damon_region *r,
+			struct damos *s);
+	void *data;
+/* private: internal use only */
+	/* informs if the kdamond finished handling of the walk request */
+	struct completion completion;
+	/* informs if the walk is canceled. */
+	bool canceled;
+};
+
 /**
  * struct damos_access_pattern - Target access pattern of the given scheme.
  * @min_sz_region:	Minimum size of target regions.
@@ -415,6 +440,8 @@ struct damos {
	 * @action
	 */
	unsigned long next_apply_sis;
+	/* informs if ongoing DAMOS walk for this scheme is finished */
+	bool walk_completed;
 /* public: */
	struct damos_quota quota;
	struct damos_watermarks wmarks;
@@ -442,8 +469,6 @@ enum damon_ops_id {
	NR_DAMON_OPS,
 };
 
-struct damon_ctx;
-
 /**
  * struct damon_operations - Monitoring operations for given use cases.
  *
@@ -656,6 +681,9 @@ struct damon_ctx {
	struct damon_call_control *call_control;
	struct mutex call_control_lock;
 
+	struct damos_walk_control *walk_control;
+	struct mutex walk_control_lock;
+
 /* public: */
	struct task_struct *kdamond;
	struct mutex kdamond_lock;
@@ -804,6 +832,7 @@ int damon_start(struct damon_ctx **ctxs, int nr_ctxs, bool exclusive);
 int damon_stop(struct damon_ctx **ctxs, int nr_ctxs);
 
 int damon_call(struct damon_ctx *ctx, struct damon_call_control *control);
+int damos_walk(struct damon_ctx *ctx, struct damos_walk_control *control);
 
 int damon_set_region_biggest_system_ram_default(struct damon_target *t,
			unsigned long *start, unsigned long *end);
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 97f19ec4179c..d02a7d6da855 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -505,6 +505,7 @@ struct damon_ctx *damon_new_ctx(void)
 
	mutex_init(&ctx->kdamond_lock);
	mutex_init(&ctx->call_control_lock);
+	mutex_init(&ctx->walk_control_lock);
 
	ctx->attrs.min_nr_regions = 10;
	ctx->attrs.max_nr_regions = 1000;
@@ -1211,6 +1212,46 @@ int damon_call(struct damon_ctx *ctx, struct damon_call_control *control)
	return 0;
 }
 
+/**
+ * damos_walk() - Invoke a given function while DAMOS walks regions.
+ * @ctx:	DAMON context to call the function for.
+ * @control:	Control variable of the walk request.
+ *
+ * Ask the DAMON worker thread (kdamond) of @ctx to call a function for each
+ * region that the kdamond will apply DAMOS actions to, and wait until the
+ * kdamond finishes handling of the request.
+ *
+ * The kdamond executes the given function in its main loop, for each region
+ * just after it applied any DAMOS actions of @ctx to it.  The invocation is
+ * made only within one &damos->apply_interval_us since damos_walk()
+ * invocation, for each scheme.  The given callback function can hence safely
+ * access the internal data of &struct damon_ctx and &struct damon_region
+ * that each of the schemes will apply the action to in the next interval,
+ * without additional synchronization against the kdamond.  If every scheme
+ * of @ctx passed at least one &damos->apply_interval_us, the kdamond marks
+ * the request as completed so that damos_walk() can wake up and return.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int damos_walk(struct damon_ctx *ctx, struct damos_walk_control *control)
+{
+	init_completion(&control->completion);
+	control->canceled = false;
+	mutex_lock(&ctx->walk_control_lock);
+	if (ctx->walk_control) {
+		mutex_unlock(&ctx->walk_control_lock);
+		return -EBUSY;
+	}
+	ctx->walk_control = control;
+	mutex_unlock(&ctx->walk_control_lock);
+	if (!damon_is_running(ctx))
+		return -EINVAL;
+	wait_for_completion(&control->completion);
+	if (control->canceled)
+		return -ECANCELED;
+	return 0;
+}
+
 /*
  * Reset the aggregated monitoring results ('nr_accesses' of each region).
  */
@@ -1390,6 +1431,91 @@ static bool damos_filter_out(struct damon_ctx *ctx, struct damon_target *t,
	return false;
 }
 
+/*
+ * damos_walk_call_walk() - Call &damos_walk_control->walk_fn.
+ * @ctx:	The context of &damon_ctx->walk_control.
+ * @t:	The monitoring target of @r that @s will be applied to.
+ * @r:	The region of @t that @s will be applied to.
+ * @s:	The scheme of @ctx that will be applied to @r.
+ *
+ * This function is called from kdamond whenever it asked the operation set
+ * to apply a DAMOS scheme action to a region.  If a DAMOS walk request is
+ * installed by damos_walk() and not yet uninstalled, invoke it.
+ */
+static void damos_walk_call_walk(struct damon_ctx *ctx, struct damon_target *t,
+		struct damon_region *r, struct damos *s)
+{
+	struct damos_walk_control *control;
+
+	mutex_lock(&ctx->walk_control_lock);
+	control = ctx->walk_control;
+	mutex_unlock(&ctx->walk_control_lock);
+	if (!control)
+		return;
+	control->walk_fn(control->data, ctx, t, r, s);
+}
+
+/*
+ * damos_walk_complete() - Complete DAMOS walk request if all walks are done.
+ * @ctx:	The context of &damon_ctx->walk_control.
+ * @s:	A scheme of @ctx that all walks are now done for.
+ *
+ * This function is called when kdamond finished applying the action of a
+ * DAMOS scheme to all regions that are eligible for the given
+ * &damos->apply_interval_us.  If every scheme of @ctx including @s has now
+ * finished walking for at least one &damos->apply_interval_us, this function
+ * marks the handling of the given DAMOS walk request as done, so that
+ * damos_walk() can wake up and return.
+ */
+static void damos_walk_complete(struct damon_ctx *ctx, struct damos *s)
+{
+	struct damos *siter;
+	struct damos_walk_control *control;
+
+	mutex_lock(&ctx->walk_control_lock);
+	control = ctx->walk_control;
+	mutex_unlock(&ctx->walk_control_lock);
+	if (!control)
+		return;
+
+	s->walk_completed = true;
+	/* if all schemes completed, signal completion to walker */
+	damon_for_each_scheme(siter, ctx) {
+		if (!siter->walk_completed)
+			return;
+	}
+	complete(&control->completion);
+	mutex_lock(&ctx->walk_control_lock);
+	ctx->walk_control = NULL;
+	mutex_unlock(&ctx->walk_control_lock);
+}
+
+/*
+ * damos_walk_cancel() - Cancel the current DAMOS walk request.
+ * @ctx:	The context of &damon_ctx->walk_control.
+ *
+ * This function is called when @ctx is deactivated by DAMOS watermarks, a
+ * DAMOS walk is requested but there is no DAMOS scheme to walk for, or the
+ * kdamond is already out of the main loop and therefore going to be
+ * terminated, and hence cannot continue the walks.
+ * This function therefore marks the walk request as canceled, so that
+ * damos_walk() can wake up and return.
+ */
+static void damos_walk_cancel(struct damon_ctx *ctx)
+{
+	struct damos_walk_control *control;
+
+	mutex_lock(&ctx->walk_control_lock);
+	control = ctx->walk_control;
+	mutex_unlock(&ctx->walk_control_lock);
+
+	if (!control)
+		return;
+	control->canceled = true;
+	complete(&control->completion);
+	mutex_lock(&ctx->walk_control_lock);
+	ctx->walk_control = NULL;
+	mutex_unlock(&ctx->walk_control_lock);
+}
+
 static void damos_apply_scheme(struct damon_ctx *c, struct damon_target *t,
		struct damon_region *r, struct damos *s)
 {
@@ -1444,6 +1570,7 @@ static void damos_apply_scheme(struct damon_ctx *c, struct damon_target *t,
				damon_nr_regions(t), do_trace);
		sz_applied = c->ops.apply_scheme(c, t, r, s);
	}
+	damos_walk_call_walk(c, t, r, s);
	ktime_get_coarse_ts64(&end);
	quota->total_charged_ns += timespec64_to_ns(&end) -
		timespec64_to_ns(&begin);
@@ -1712,6 +1839,7 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
	damon_for_each_scheme(s, c) {
		if (c->passed_sample_intervals < s->next_apply_sis)
			continue;
+		damos_walk_complete(c, s);
		s->next_apply_sis = c->passed_sample_intervals +
			(s->apply_interval_us ? s->apply_interval_us :
			 c->attrs.aggr_interval) / sample_interval;
@@ -2024,6 +2152,7 @@ static int kdamond_wait_activation(struct damon_ctx *ctx)
				ctx->callback.after_wmarks_check(ctx))
			break;
		kdamond_call(ctx, true);
+		damos_walk_cancel(ctx);
	}
	return -EBUSY;
 }
@@ -2117,6 +2246,8 @@ static int kdamond_fn(void *data)
		 */
		if (!list_empty(&ctx->schemes))
			kdamond_apply_schemes(ctx);
+		else
+			damos_walk_cancel(ctx);
 
		sample_interval = ctx->attrs.sample_interval ?
			ctx->attrs.sample_interval : 1;
@@ -2157,6 +2288,7 @@ static int kdamond_fn(void *data)
	mutex_unlock(&ctx->kdamond_lock);
 
	kdamond_call(ctx, true);
+	damos_walk_cancel(ctx);
 
	mutex_lock(&damon_lock);
	nr_running_ctxs--;
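For reference, the complete/cancel handshake that damos_walk() relies on can be sketched in userspace. The sketch below is a minimal analogue using a pthread mutex and condition variable in place of the kernel's struct completion and walk_control_lock; all names (walk_control, worker_ctx, walk(), count_regions()) are hypothetical and exist only for this illustration, not in the patch:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* Analogue of struct damos_walk_control: a per-region callback plus the
 * state the waiter needs to learn whether the walk completed or was
 * canceled. */
struct walk_control {
	void (*walk_fn)(void *data, int region);	/* per-region callback */
	void *data;
	pthread_mutex_t lock;
	pthread_cond_t cv;
	bool finished;	/* worker signalled completion or cancel */
	bool canceled;	/* request was canceled, not completed */
};

/* Analogue of the kdamond's context: an installed request, some regions to
 * apply "actions" to, and a flag standing in for watermark deactivation. */
struct worker_ctx {
	struct walk_control *control;
	int nr_regions;
	bool deactivated;
};

/* Worker side: invoke walk_fn per region (like damos_walk_call_walk()),
 * then mark the request finished, either completed or canceled (like
 * damos_walk_complete()/damos_walk_cancel()), and uninstall it. */
static void *worker_fn(void *arg)
{
	struct worker_ctx *ctx = arg;
	struct walk_control *c = ctx->control;

	if (!c)
		return NULL;
	if (!ctx->deactivated)
		for (int r = 0; r < ctx->nr_regions; r++)
			c->walk_fn(c->data, r);
	pthread_mutex_lock(&c->lock);
	c->canceled = ctx->deactivated;
	c->finished = true;
	pthread_cond_signal(&c->cv);
	pthread_mutex_unlock(&c->lock);
	ctx->control = NULL;	/* uninstall the handled request */
	return NULL;
}

/* Caller side, like damos_walk(): install the request, start the worker,
 * and block until the worker completes or cancels it. */
static int walk(struct worker_ctx *ctx, struct walk_control *c)
{
	pthread_t tid;

	pthread_mutex_init(&c->lock, NULL);
	pthread_cond_init(&c->cv, NULL);
	c->finished = false;
	c->canceled = false;
	ctx->control = c;
	pthread_create(&tid, NULL, worker_fn, ctx);
	pthread_mutex_lock(&c->lock);
	while (!c->finished)
		pthread_cond_wait(&c->cv, &c->lock);
	pthread_mutex_unlock(&c->lock);
	pthread_join(tid, NULL);
	return c->canceled ? -1 : 0;	/* -1 stands in for -ECANCELED */
}

/* Example callback: counts the regions it was invoked for. */
static void count_regions(void *data, int region)
{
	(void)region;
	*(int *)data += 1;
}
```

The kernel version differs in two ways this sketch flattens: the real kdamond already runs, so damos_walk() installs the request under walk_control_lock and may return -EBUSY or -EINVAL before waiting, and completion is only signalled once every scheme has passed a full apply interval, not after a single region loop.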