From: Lukasz Luba
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, lukasz.luba@arm.com, sudeep.holla@arm.com, will@kernel.org, catalin.marinas@arm.com, linux@armlinux.org.uk, gregkh@linuxfoundation.org, rafael@kernel.org, viresh.kumar@linaro.org, amitk@kernel.org, daniel.lezcano@linaro.org, amit.kachhap@gmail.com, thara.gopinath@linaro.org, bjorn.andersson@linaro.org, agross@kernel.org
Subject: [PATCH 3/5] cpufreq: qcom-cpufreq-hw: Update offline CPUs per-cpu thermal pressure
Date: Thu, 7 Oct 2021 09:07:27 +0100
Message-Id: <20211007080729.8262-4-lukasz.luba@arm.com>
In-Reply-To: <20211007080729.8262-1-lukasz.luba@arm.com>
References: <20211007080729.8262-1-lukasz.luba@arm.com>

The thermal pressure signal gives the scheduler information about reduced
CPU capacity due to thermal throttling. It is based on a value stored in a
per-cpu 'thermal_pressure' variable. Online CPUs get the new value there,
while offline CPUs do not. Unfortunately, when such a CPU comes back
online, the value read from its per-cpu variable might be wrong (stale
data). This can affect scheduler decisions, since the scheduler sees a CPU
capacity different from what is actually available.

Fix it by making sure that all CPUs, online and offline, get the proper
value in their per-cpu variable when throttling is applied or removed.

Fixes: 275157b367f479 ("cpufreq: qcom-cpufreq-hw: Add dcvs interrupt support")
Signed-off-by: Lukasz Luba
Reviewed-by: Thara Gopinath
---
 drivers/cpufreq/qcom-cpufreq-hw.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
index a2be0df7e174..0138b2ec406d 100644
--- a/drivers/cpufreq/qcom-cpufreq-hw.c
+++ b/drivers/cpufreq/qcom-cpufreq-hw.c
@@ -304,7 +304,8 @@ static void qcom_lmh_dcvs_notify(struct qcom_cpufreq_data *data)
 	if (capacity > max_capacity)
 		capacity = max_capacity;
 
-	arch_set_thermal_pressure(policy->cpus, max_capacity - capacity);
+	arch_set_thermal_pressure(policy->related_cpus,
+				  max_capacity - capacity);
 
 	/*
 	 * In the unlikely case policy is unregistered do not enable
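
For context: policy->cpus contains only the currently online CPUs of the
policy, while policy->related_cpus also includes the offline ones. The
reason the mask matters is that the generic helper only writes the per-cpu
value for CPUs present in the mask it is given. Below is a rough sketch of
that helper, modelled on drivers/base/arch_topology.c around this kernel
version (not the exact upstream code), to illustrate the behaviour:

#include <linux/cpumask.h>
#include <linux/percpu.h>

/* Per-cpu value read by the scheduler as thermal pressure. */
DEFINE_PER_CPU(unsigned long, thermal_pressure);

/*
 * Write the new thermal pressure for every CPU in the mask. A CPU that
 * is not in the mask keeps whatever value was stored previously, which
 * is how an offline CPU ends up with stale data when only policy->cpus
 * (online CPUs) is passed in.
 */
void arch_set_thermal_pressure(const struct cpumask *cpus,
			       unsigned long th_pressure)
{
	int cpu;

	for_each_cpu(cpu, cpus)
		WRITE_ONCE(per_cpu(thermal_pressure, cpu), th_pressure);
}

With policy->related_cpus, an offline CPU's thermal_pressure is refreshed
on every throttling update, so it already holds a current value by the
time it comes back online.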