From patchwork Thu Feb 13 01:44:32 2025
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 13972682
From: SeongJae Park
To:
Cc: SeongJae Park, Andrew Morton, damon@lists.linux.dev, kernel-team@meta.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC PATCH 2/8] mm/damon/core: implement intervals auto-tuning
Date: Wed, 12 Feb 2025 17:44:32 -0800
Message-Id: <20250213014438.145611-3-sj@kernel.org>
X-Mailer: git-send-email 2.39.5
In-Reply-To: <20250213014438.145611-1-sj@kernel.org>
References: <20250213014438.145611-1-sj@kernel.org>
MIME-Version: 1.0

Implement the DAMON sampling and aggregation intervals auto-tuning
mechanism as designed in the cover letter of this patch series.  The
mechanism reuses the feedback loop function for DAMOS quotas
auto-tuning.  Unlike the DAMOS quotas auto-tuning use case, limit the
maximum decrease per adjustment to 50% of the current value, since
rapidly reducing the intervals has no real benefit and users are
assumed to set a not-too-wide range of tunable values.

Signed-off-by: SeongJae Park
---
 include/linux/damon.h | 16 ++++++++++
 mm/damon/core.c       | 68 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index 4368ba1a942f..a205843fcf5a 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -705,6 +705,17 @@ struct damon_attrs {
 	struct damon_intervals_goal intervals_goal;
 	unsigned long min_nr_regions;
 	unsigned long max_nr_regions;
+/* private: internal use only */
+	/*
+	 * @aggr_interval to @sample_interval ratio.
+	 * Core-external components call damon_set_attrs() with a &damon_attrs
+	 * having this field unset.  In that case, damon_set_attrs() sets this
+	 * field of the resulting &damon_attrs.
+	 * Core-internal components such as kdamond_tune_intervals() call
+	 * damon_set_attrs() with a &damon_attrs having this field set.  In
+	 * that case, damon_set_attrs() just keeps it.
+	 */
+	unsigned long aggr_samples;
 };
 
 /**
@@ -753,6 +764,11 @@ struct damon_ctx {
 	 * update
 	 */
 	unsigned long next_ops_update_sis;
+	/*
+	 * number of sample intervals that should be passed before the next
+	 * intervals tuning
+	 */
+	unsigned long next_intervals_tune_sis;
 	/* for waiting until the execution of the kdamond_fn is started */
 	struct completion kdamond_started;
 	/* for scheme quotas prioritization */
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 2fad800271a4..227bdb856157 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -663,6 +663,10 @@ int damon_set_attrs(struct damon_ctx *ctx, struct damon_attrs *attrs)
 	if (attrs->sample_interval > attrs->aggr_interval)
 		return -EINVAL;
 
+	/* calls from core-external don't set this. */
+	if (!attrs->aggr_samples)
+		attrs->aggr_samples = attrs->aggr_interval / sample_interval;
+
 	ctx->next_aggregation_sis = ctx->passed_sample_intervals +
 		attrs->aggr_interval / sample_interval;
 	ctx->next_ops_update_sis = ctx->passed_sample_intervals +
@@ -1300,6 +1304,60 @@ static void kdamond_reset_aggregated(struct damon_ctx *c)
 	}
 }
 
+static unsigned long damon_feed_loop_next_input(unsigned long last_input,
+		unsigned long score);
+
+static unsigned long damon_get_intervals_adaptation_bp(struct damon_ctx *c)
+{
+	struct damon_target *t;
+	struct damon_region *r;
+	unsigned long nr_regions = 0, access_samples = 0;
+	struct damon_intervals_goal *goal = &c->attrs.intervals_goal;
+	unsigned long max_samples, target_samples, score_bp;
+	unsigned long adaptation_bp;
+
+	damon_for_each_target(t, c) {
+		nr_regions += damon_nr_regions(t);
+		damon_for_each_region(r, t)
+			access_samples += r->nr_accesses;
+	}
+	max_samples = nr_regions * c->attrs.aggr_samples;
+	target_samples = max_samples * goal->samples_bp / 10000;
+	score_bp = access_samples * 10000 / target_samples;
+	adaptation_bp = damon_feed_loop_next_input(100000000, score_bp) /
+		10000;
+	/*
+	 * adaptation_bp ranges from 1 to 20,000.  Avoid too rapid reduction
+	 * of the intervals by rescaling [1, 10,000] to [5,000, 10,000].
+	 */
+	if (adaptation_bp <= 10000)
+		adaptation_bp = 5000 + adaptation_bp / 2;
+
+	return adaptation_bp;
+}
+
+static void kdamond_tune_intervals(struct damon_ctx *c)
+{
+	unsigned long adaptation_bp;
+	struct damon_attrs new_attrs;
+	struct damon_intervals_goal *goal;
+
+	adaptation_bp = damon_get_intervals_adaptation_bp(c);
+	if (adaptation_bp == 10000)
+		return;
+
+	new_attrs = c->attrs;
+	goal = &c->attrs.intervals_goal;
+	new_attrs.sample_interval = min(
+			c->attrs.sample_interval * adaptation_bp / 10000,
+			goal->max_sample_us);
+	new_attrs.sample_interval = max(new_attrs.sample_interval,
+			goal->min_sample_us);
+	new_attrs.aggr_interval = new_attrs.sample_interval *
+			c->attrs.aggr_samples;
+	damon_set_attrs(c, &new_attrs);
+}
+
 static void damon_split_region_at(struct damon_target *t,
 		struct damon_region *r, unsigned long sz_r);
 
@@ -2204,6 +2262,8 @@ static void kdamond_init_intervals_sis(struct damon_ctx *ctx)
 	ctx->next_aggregation_sis = ctx->attrs.aggr_interval / sample_interval;
 	ctx->next_ops_update_sis = ctx->attrs.ops_update_interval /
 		sample_interval;
+	ctx->next_intervals_tune_sis = ctx->next_aggregation_sis *
+		ctx->attrs.intervals_goal.aggrs;
 
 	damon_for_each_scheme(scheme, ctx) {
 		apply_interval = scheme->apply_interval_us ?
@@ -2290,6 +2350,14 @@ static int kdamond_fn(void *data)
 		if (ctx->passed_sample_intervals >= next_aggregation_sis) {
 			ctx->next_aggregation_sis = next_aggregation_sis +
 				ctx->attrs.aggr_interval / sample_interval;
+			if (ctx->attrs.intervals_goal.aggrs &&
+					ctx->passed_sample_intervals >=
+					ctx->next_intervals_tune_sis) {
+				ctx->next_intervals_tune_sis +=
+					ctx->attrs.aggr_samples *
+					ctx->attrs.intervals_goal.aggrs;
+				kdamond_tune_intervals(ctx);
+			}
 
 			kdamond_reset_aggregated(ctx);
 			kdamond_split_regions(ctx);
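
For reviewers who want to play with the numbers, below is a hypothetical,
self-contained user-space sketch of the tuning math done by
damon_get_intervals_adaptation_bp() and kdamond_tune_intervals() above.
model_feed_loop() is only a simplified proportional stand-in for the
kernel's damon_feed_loop_next_input() (the real helper compensates and
clamps differently), and every input number here is made up for
illustration.

/*
 * Toy model of the intervals auto-tuning math.  Not the kernel code.
 */
#include <stdio.h>

/* simplified stand-in: a score at the 10,000 bp goal keeps the input */
static unsigned long model_feed_loop(unsigned long last_input,
		unsigned long score_bp)
{
	if (score_bp >= 20000)
		return 1;
	return (unsigned long)((unsigned long long)last_input *
			(20000 - score_bp) / 10000);
}

/* mirror of the adaptation computation, in basis points (bp) */
static unsigned long intervals_adaptation_bp(unsigned long access_samples,
		unsigned long nr_regions, unsigned long aggr_samples,
		unsigned long samples_bp)
{
	unsigned long max_samples = nr_regions * aggr_samples;
	unsigned long target_samples = max_samples * samples_bp / 10000;
	unsigned long score_bp, bp;

	if (!target_samples)
		return 10000;	/* keep intervals; avoid division by zero */
	score_bp = access_samples * 10000 / target_samples;
	bp = model_feed_loop(100000000, score_bp) / 10000;
	/* rescale [1, 10,000] to [5,000, 10,000]: shrink by at most 50% */
	if (bp <= 10000)
		bp = 5000 + bp / 2;
	return bp;
}

int main(void)
{
	/* made-up setup: 10 regions, aggr/sample ratio 20, 4% samples goal */
	unsigned long nr_regions = 10, aggr_samples = 20, samples_bp = 400;
	unsigned long sample_us = 5000, min_us = 1000, max_us = 10000000;
	unsigned long observed[] = { 2, 8, 32 };	/* access samples seen */
	int i;

	for (i = 0; i < 3; i++) {
		unsigned long bp = intervals_adaptation_bp(observed[i],
				nr_regions, aggr_samples, samples_bp);
		unsigned long new_us = sample_us * bp / 10000;

		/* clamp to the user-given sampling interval range */
		if (new_us > max_us)
			new_us = max_us;
		if (new_us < min_us)
			new_us = min_us;
		printf("access_samples=%lu -> adaptation_bp=%lu, next sample_us=%lu\n",
				observed[i], bp, new_us);
	}
	return 0;
}

With these made-up inputs, observing fewer access samples than the goal
grows the sampling interval, meeting the goal keeps it unchanged, and
greatly exceeding the goal shrinks it by at most 50% per adjustment,
matching the cap described in the commit message.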