From patchwork Mon Aug 15 15:41:24 2022
X-Patchwork-Submitter: Maxime Ripard
X-Patchwork-Id: 12943764
From: Maxime Ripard
To: linux-clk@vger.kernel.org, Mike Turquette, Stephen Boyd
Cc: Tony Lindgren, Jerome Brunet, Neil Armstrong, Yassine Oudjana,
    Dmitry Baryshkov, Marek Szyprowski, Naresh Kamboju, Alexander Stein,
    Maxime Ripard
Subject: [PATCH v8 02/25] clk: Drop the rate range on clk_put()
Date: Mon, 15 Aug 2022 17:41:24 +0200
Message-Id: <20220815154147.1631441-3-maxime@cerno.tech>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220815154147.1631441-1-maxime@cerno.tech>
References: <20220815154147.1631441-1-maxime@cerno.tech>
X-Mailing-List: linux-clk@vger.kernel.org

When clk_put() is called, we don't make another clk_set_rate() call to
re-evaluate the rate boundaries. This is unlike clk_set_rate_range(),
which re-evaluates the rate each time it is called.

However, clk_put() is essentially equivalent to
clk_set_rate_range(clk, 0, ULONG_MAX): once clk_put() completes, the
consumer's boundaries shouldn't be enforced anymore.

Let's add a call to clk_set_rate_range() in clk_put() to make sure
those rate boundaries are dropped and the clock providers can react.
In order to be as non-intrusive as possible, we'll only make that call
if the clock had non-default boundaries.

Also add a few tests to make sure this case is covered.

Fixes: c80ac50cbb37 ("clk: Always set the rate on clk_set_range_rate")
Tested-by: Alexander Stein # imx8mp
Tested-by: Marek Szyprowski # exynos4210, meson g12b
Signed-off-by: Maxime Ripard
---
 drivers/clk/clk.c      |  45 +++++++++++------
 drivers/clk/clk_test.c | 110 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 141 insertions(+), 14 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 7fc191c15507..a5e0ab8bd6be 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -2325,19 +2325,15 @@ int clk_set_rate_exclusive(struct clk *clk, unsigned long rate)
 }
 EXPORT_SYMBOL_GPL(clk_set_rate_exclusive);
 
-/**
- * clk_set_rate_range - set a rate range for a clock source
- * @clk: clock source
- * @min: desired minimum clock rate in Hz, inclusive
- * @max: desired maximum clock rate in Hz, inclusive
- *
- * Returns success (0) or negative errno.
- */
-int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
+static int clk_set_rate_range_nolock(struct clk *clk,
+                                     unsigned long min,
+                                     unsigned long max)
 {
         int ret = 0;
         unsigned long old_min, old_max, rate;
 
+        lockdep_assert_held(&prepare_lock);
+
         if (!clk)
                 return 0;
 
@@ -2350,8 +2346,6 @@ int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
                 return -EINVAL;
         }
 
-        clk_prepare_lock();
-
         if (clk->exclusive_count)
                 clk_core_rate_unprotect(clk->core);
 
@@ -2395,6 +2389,28 @@ int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
         if (clk->exclusive_count)
                 clk_core_rate_protect(clk->core);
 
+        return ret;
+}
+
+/**
+ * clk_set_rate_range - set a rate range for a clock source
+ * @clk: clock source
+ * @min: desired minimum clock rate in Hz, inclusive
+ * @max: desired maximum clock rate in Hz, inclusive
+ *
+ * Return: 0 for success or negative errno on failure.
+ */
+int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
+{
+        int ret;
+
+        if (!clk)
+                return 0;
+
+        clk_prepare_lock();
+
+        ret = clk_set_rate_range_nolock(clk, min, max);
+
         clk_prepare_unlock();
 
         return ret;
@@ -4348,9 +4364,10 @@ void __clk_put(struct clk *clk)
         }
 
         hlist_del(&clk->clks_node);
-        if (clk->min_rate > clk->core->req_rate ||
-            clk->max_rate < clk->core->req_rate)
-                clk_core_set_rate_nolock(clk->core, clk->core->req_rate);
+
+        /* If we had any boundaries on that clock, let's drop them. */
+        if (clk->min_rate > 0 || clk->max_rate < ULONG_MAX)
+                clk_set_rate_range_nolock(clk, 0, ULONG_MAX);
 
         owner = clk->core->owner;
         kref_put(&clk->core->ref, __clk_release);
diff --git a/drivers/clk/clk_test.c b/drivers/clk/clk_test.c
index 7646356f30cb..7d9da88c39ee 100644
--- a/drivers/clk/clk_test.c
+++ b/drivers/clk/clk_test.c
@@ -793,9 +793,66 @@ static void clk_range_test_multiple_set_range_rate_maximized(struct kunit *test)
         clk_put(clk);
 }
 
+/*
+ * Test that if we have several subsequent calls to
+ * clk_set_rate_range(), across multiple users, the core will reevaluate
+ * whether a new rate is needed, including when a user drops its clock.
+ *
+ * With clk_dummy_maximize_rate_ops, this means that the rate will
+ * trail along the maximum as it evolves.
+ */
+static void clk_range_test_multiple_set_range_rate_put_maximized(struct kunit *test)
+{
+        struct clk_dummy_context *ctx = test->priv;
+        struct clk_hw *hw = &ctx->hw;
+        struct clk *clk = clk_hw_get_clk(hw, NULL);
+        struct clk *user1, *user2;
+        unsigned long rate;
+
+        user1 = clk_hw_get_clk(hw, NULL);
+        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, user1);
+
+        user2 = clk_hw_get_clk(hw, NULL);
+        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, user2);
+
+        KUNIT_ASSERT_EQ(test,
+                        clk_set_rate(clk, DUMMY_CLOCK_RATE_2 + 1000),
+                        0);
+
+        KUNIT_ASSERT_EQ(test,
+                        clk_set_rate_range(user1,
+                                           0,
+                                           DUMMY_CLOCK_RATE_2),
+                        0);
+
+        rate = clk_get_rate(clk);
+        KUNIT_ASSERT_GT(test, rate, 0);
+        KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_2);
+
+        KUNIT_ASSERT_EQ(test,
+                        clk_set_rate_range(user2,
+                                           0,
+                                           DUMMY_CLOCK_RATE_1),
+                        0);
+
+        rate = clk_get_rate(clk);
+        KUNIT_ASSERT_GT(test, rate, 0);
+        KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_1);
+
+        clk_put(user2);
+
+        rate = clk_get_rate(clk);
+        KUNIT_ASSERT_GT(test, rate, 0);
+        KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_2);
+
+        clk_put(user1);
+        clk_put(clk);
+}
+
 static struct kunit_case clk_range_maximize_test_cases[] = {
         KUNIT_CASE(clk_range_test_set_range_rate_maximized),
         KUNIT_CASE(clk_range_test_multiple_set_range_rate_maximized),
+        KUNIT_CASE(clk_range_test_multiple_set_range_rate_put_maximized),
         {}
 };
 
@@ -913,9 +970,62 @@ static void clk_range_test_multiple_set_range_rate_minimized(struct kunit *test)
         clk_put(clk);
 }
 
+/*
+ * Test that if we have several subsequent calls to
+ * clk_set_rate_range(), across multiple users, the core will reevaluate
+ * whether a new rate is needed, including when a user drops its clock.
+ *
+ * With clk_dummy_minimize_rate_ops, this means that the rate will
+ * trail along the minimum as it evolves.
+ */
+static void clk_range_test_multiple_set_range_rate_put_minimized(struct kunit *test)
+{
+        struct clk_dummy_context *ctx = test->priv;
+        struct clk_hw *hw = &ctx->hw;
+        struct clk *clk = clk_hw_get_clk(hw, NULL);
+        struct clk *user1, *user2;
+        unsigned long rate;
+
+        user1 = clk_hw_get_clk(hw, NULL);
+        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, user1);
+
+        user2 = clk_hw_get_clk(hw, NULL);
+        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, user2);
+
+        KUNIT_ASSERT_EQ(test,
+                        clk_set_rate_range(user1,
+                                           DUMMY_CLOCK_RATE_1,
+                                           ULONG_MAX),
+                        0);
+
+        rate = clk_get_rate(clk);
+        KUNIT_ASSERT_GT(test, rate, 0);
+        KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_1);
+
+        KUNIT_ASSERT_EQ(test,
+                        clk_set_rate_range(user2,
+                                           DUMMY_CLOCK_RATE_2,
+                                           ULONG_MAX),
+                        0);
+
+        rate = clk_get_rate(clk);
+        KUNIT_ASSERT_GT(test, rate, 0);
+        KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_2);
+
+        clk_put(user2);
+
+        rate = clk_get_rate(clk);
+        KUNIT_ASSERT_GT(test, rate, 0);
+        KUNIT_EXPECT_EQ(test, rate, DUMMY_CLOCK_RATE_1);
+
+        clk_put(user1);
+        clk_put(clk);
+}
+
 static struct kunit_case clk_range_minimize_test_cases[] = {
         KUNIT_CASE(clk_range_test_set_range_rate_minimized),
         KUNIT_CASE(clk_range_test_multiple_set_range_rate_minimized),
+        KUNIT_CASE(clk_range_test_multiple_set_range_rate_put_minimized),
         {}
 };
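
To illustrate the consumer-visible behaviour described above, here is a
minimal sketch (not part of the patch) of a driver that caps a clock's
rate and then releases its handle. The device and the "core" clock name
are hypothetical, and only the regular consumer API from linux/clk.h is
used; it mirrors what the KUnit tests above exercise through
clk_hw_get_clk() consumers.

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

/* Illustrative sketch only: the "core" clock name is hypothetical. */
static void rate_range_put_example(struct device *dev)
{
        struct clk *user = clk_get(dev, "core");
        unsigned long rate;

        if (IS_ERR(user))
                return;

        /* This consumer only accepts rates up to 100 MHz. */
        if (clk_set_rate_range(user, 0, 100 * 1000 * 1000))
                goto put;

        rate = clk_get_rate(user);
        dev_info(dev, "rate while the range is held: %lu Hz\n", rate);

put:
        /*
         * With this patch, clk_put() also acts as
         * clk_set_rate_range(user, 0, ULONG_MAX): the 0..100 MHz request
         * is dropped and the core re-evaluates the rate against the
         * remaining consumers' boundaries.
         */
        clk_put(user);
}

This follows the same flow as the clk_range_test_multiple_set_range_rate_put_*
tests added by the patch, which verify that the rate is re-evaluated as
soon as one of the users drops its clock.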