From patchwork Tue Jun 18 15:50:13 2024
X-Patchwork-Submitter: Ulf Hansson
X-Patchwork-Id: 13702543
From: Ulf Hansson
To: Viresh Kumar, Nishanth Menon, Stephen Boyd
Cc: Nikunj Kela, Prasad Sodagudi, Ulf Hansson, linux-pm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-tegra@vger.kernel.org, stable@vger.kernel.org
Subject: [PATCH] OPP: Fix support for required OPPs for multiple PM domains
Date: Tue, 18 Jun 2024 17:50:13 +0200
Message-Id: <20240618155013.323322-1-ulf.hansson@linaro.org>

In _set_opp() we normally bail out when trying to set an OPP that is
already the current one. This makes perfect sense, but it becomes a
problem when _set_required_opps() calls it recursively.

More precisely, when a required OPP is shared by multiple PM domains, we
end up skipping the request of the corresponding performance state for
all of the PM domains except the first one. Let's fix the problem by
calling _set_opp_level() from _set_required_opps() instead.

Fixes: e37440e7e2c2 ("OPP: Call dev_pm_opp_set_opp() for required OPPs")
Cc: stable@vger.kernel.org
Signed-off-by: Ulf Hansson
---
 drivers/opp/core.c | 47 ++++++++++++++++++++++++-----------------------
 1 file changed, 24 insertions(+), 23 deletions(-)

diff --git a/drivers/opp/core.c b/drivers/opp/core.c
index cb4611fe1b5b..45eca65f27f9 100644
--- a/drivers/opp/core.c
+++ b/drivers/opp/core.c
@@ -1061,6 +1061,28 @@ static int _set_opp_bw(const struct opp_table *opp_table,
 	return 0;
 }
 
+static int _set_opp_level(struct device *dev, struct opp_table *opp_table,
+			  struct dev_pm_opp *opp)
+{
+	unsigned int level = 0;
+	int ret = 0;
+
+	if (opp) {
+		if (opp->level == OPP_LEVEL_UNSET)
+			return 0;
+
+		level = opp->level;
+	}
+
+	/* Request a new performance state through the device's PM domain. */
+	ret = dev_pm_domain_set_performance_state(dev, level);
+	if (ret)
+		dev_err(dev, "Failed to set performance state %u (%d)\n", level,
+			ret);
+
+	return ret;
+}
+
 /* This is only called for PM domain for now */
 static int _set_required_opps(struct device *dev, struct opp_table *opp_table,
 			      struct dev_pm_opp *opp, bool up)
@@ -1091,7 +1113,8 @@ static int _set_required_opps(struct device *dev, struct opp_table *opp_table,
 		if (devs[index]) {
 			required_opp = opp ? opp->required_opps[index] : NULL;
 
-			ret = dev_pm_opp_set_opp(devs[index], required_opp);
+			ret = _set_opp_level(devs[index], opp_table,
+					     required_opp);
 			if (ret)
 				return ret;
 		}
@@ -1102,28 +1125,6 @@ static int _set_required_opps(struct device *dev, struct opp_table *opp_table,
 	return 0;
 }
 
-static int _set_opp_level(struct device *dev, struct opp_table *opp_table,
-			  struct dev_pm_opp *opp)
-{
-	unsigned int level = 0;
-	int ret = 0;
-
-	if (opp) {
-		if (opp->level == OPP_LEVEL_UNSET)
-			return 0;
-
-		level = opp->level;
-	}
-
-	/* Request a new performance state through the device's PM domain. */
-	ret = dev_pm_domain_set_performance_state(dev, level);
-	if (ret)
-		dev_err(dev, "Failed to set performance state %u (%d)\n", level,
-			ret);
-
-	return ret;
-}
-
 static void _find_current_opp(struct device *dev, struct opp_table *opp_table)
 {
 	struct dev_pm_opp *opp = ERR_PTR(-ENODEV);
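
As a minimal sketch of the failure mode described above (not part of the
patch; the names toy_set_opp(), toy_set_level(), struct toy_domain and
shared_current_opp are hypothetical stand-ins for _set_opp(),
_set_opp_level(), the genpd virtual devices and the shared required-OPP
table's current OPP), the user-space toy program below shows why the
"already at this OPP" bail-out leaves every PM domain after the first one
untouched when the required OPP is shared:

/* Toy model with hypothetical names; only the control flow mirrors the
 * kernel code. Build with: gcc -Wall toy.c
 */
#include <stdio.h>

struct toy_domain {
        const char *name;
        int perf_state;
};

/* Mimics the current-OPP bookkeeping of the *shared* required OPP table. */
static int shared_current_opp = -1;

/* Mimics _set_opp_level(): always forwards the performance state. */
static void toy_set_level(struct toy_domain *d, int level)
{
        d->perf_state = level;
        printf("%s: performance state -> %d\n", d->name, level);
}

/* Mimics _set_opp(): bails out when the OPP is already the current one. */
static void toy_set_opp(struct toy_domain *d, int opp)
{
        if (opp == shared_current_opp)
                return;         /* skipped for every domain but the first */

        toy_set_level(d, opp);
        shared_current_opp = opp;
}

int main(void)
{
        struct toy_domain a = { "domain-a", -1 };
        struct toy_domain b = { "domain-b", -1 };

        /* Both PM domains require the same OPP (level 2). */
        toy_set_opp(&a, 2);     /* domain-a gets the request */
        toy_set_opp(&b, 2);     /* domain-b is silently skipped: the bug */

        printf("domain-b: %d (expected 2)\n", b.perf_state);
        return 0;
}

Calling the equivalent of toy_set_level() directly for each required-OPP
device, as the patch does with _set_opp_level(), bypasses the shared
bail-out and requests the performance state for every PM domain.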