From patchwork Tue Jan 10 18:21:45 2023
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, David Airlie,
    Daniel Vetter, Akhil P Oommen, Chia-I Wu, Douglas Anderson,
    Konrad Dybcio, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 1/3] drm/msm/gpu: Add devfreq tuning debugfs
Date: Tue, 10 Jan 2023 10:21:45 -0800
Message-Id: <20230110182150.1911031-2-robdclark@gmail.com>
In-Reply-To: <20230110182150.1911031-1-robdclark@gmail.com>
References: <20230110182150.1911031-1-robdclark@gmail.com>

Make the handful of tuning knobs visible via debugfs.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c |  2 +-
 drivers/gpu/drm/msm/msm_debugfs.c     | 12 ++++++++++++
 drivers/gpu/drm/msm/msm_drv.h         |  9 +++++++++
 drivers/gpu/drm/msm/msm_gpu.h         |  3 ---
 drivers/gpu/drm/msm/msm_gpu_devfreq.c |  6 ++++--
 5 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 36c8fb699b56..6f7401f2acda 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2021,7 +2021,7 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
 	 * to cause power supply issues:
 	 */
 	if (adreno_is_a618(adreno_gpu) || adreno_is_7c3(adreno_gpu))
-		gpu->clamp_to_idle = true;
+		priv->gpu_clamp_to_idle = true;
 
 	/* Check if there is a GMU phandle and set it up */
 	node = of_parse_phandle(pdev->dev.of_node, "qcom,gmu", 0);
diff --git a/drivers/gpu/drm/msm/msm_debugfs.c b/drivers/gpu/drm/msm/msm_debugfs.c
index 95f4374ae21c..d6ecff0ab618 100644
--- a/drivers/gpu/drm/msm/msm_debugfs.c
+++ b/drivers/gpu/drm/msm/msm_debugfs.c
@@ -305,6 +305,7 @@ void msm_debugfs_init(struct drm_minor *minor)
 {
 	struct drm_device *dev = minor->dev;
 	struct msm_drm_private *priv = dev->dev_private;
+	struct dentry *gpu_devfreq;
 
 	drm_debugfs_create_files(msm_debugfs_list,
 				 ARRAY_SIZE(msm_debugfs_list),
@@ -325,6 +326,17 @@ void msm_debugfs_init(struct drm_minor *minor)
 	debugfs_create_file("shrink", S_IRWXU, minor->debugfs_root,
 		dev, &shrink_fops);
 
+	gpu_devfreq = debugfs_create_dir("devfreq", minor->debugfs_root);
+
+	debugfs_create_bool("idle_clamp", 0600, gpu_devfreq,
+			    &priv->gpu_clamp_to_idle);
+
+	debugfs_create_u32("upthreshold", 0600, gpu_devfreq,
+			   &priv->gpu_devfreq_config.upthreshold);
+
+	debugfs_create_u32("downdifferential", 0600, gpu_devfreq,
+			   &priv->gpu_devfreq_config.downdifferential);
+
 	if (priv->kms && priv->kms->funcs->debugfs_init)
 		priv->kms->funcs->debugfs_init(priv->kms, minor);
 
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 876d8d5eec2f..6cb1c6d230e8 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -234,6 +235,14 @@ struct msm_drm_private {
 	 */
 	unsigned int hangcheck_period;
 
+	/** gpu_devfreq_config: Devfreq tuning config for the GPU. */
+	struct devfreq_simple_ondemand_data gpu_devfreq_config;
+
+	/**
+	 * gpu_clamp_to_idle: Enable clamping to idle freq when inactive
+	 */
+	bool gpu_clamp_to_idle;
+
 	/**
 	 * disable_err_irq:
 	 *
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 651786bc55e5..9e36f6c9bc29 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -275,9 +275,6 @@ struct msm_gpu {
 
 	struct msm_gpu_state *crashstate;
 
-	/* Enable clamping to idle freq when inactive: */
-	bool clamp_to_idle;
-
 	/* True if the hardware supports expanded apriv (a650 and newer) */
 	bool hw_apriv;
 
diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
index 025940eb08d1..0d7ff7ddc029 100644
--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
@@ -183,6 +183,7 @@ static bool has_devfreq(struct msm_gpu *gpu)
 void msm_devfreq_init(struct msm_gpu *gpu)
 {
 	struct msm_gpu_devfreq *df = &gpu->devfreq;
+	struct msm_drm_private *priv = gpu->dev->dev_private;
 
 	/* We need target support to do devfreq */
 	if (!gpu->funcs->gpu_busy)
@@ -209,7 +210,7 @@ void msm_devfreq_init(struct msm_gpu *gpu)
 	df->devfreq = devm_devfreq_add_device(&gpu->pdev->dev,
 			&msm_devfreq_profile, DEVFREQ_GOV_SIMPLE_ONDEMAND,
-			NULL);
+			&priv->gpu_devfreq_config);
 
 	if (IS_ERR(df->devfreq)) {
 		DRM_DEV_ERROR(&gpu->pdev->dev, "Couldn't initialize GPU devfreq\n");
@@ -358,10 +359,11 @@ static void msm_devfreq_idle_work(struct kthread_work *work)
 	struct msm_gpu_devfreq *df = container_of(work,
 			struct msm_gpu_devfreq, idle_work.work);
 	struct msm_gpu *gpu = container_of(df, struct
 msm_gpu, devfreq);
+	struct msm_drm_private *priv = gpu->dev->dev_private;
 
 	df->idle_time = ktime_get();
 
-	if (gpu->clamp_to_idle)
+	if (priv->gpu_clamp_to_idle)
 		dev_pm_qos_update_request(&df->idle_freq, 0);
 }

From patchwork Tue Jan 10 18:21:46 2023
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, David Airlie,
    Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 2/3] drm/msm/gpu: Bypass PM QoS constraint for idle clamp
Date: Tue, 10 Jan 2023 10:21:46 -0800
Message-Id: <20230110182150.1911031-3-robdclark@gmail.com>
In-Reply-To: <20230110182150.1911031-1-robdclark@gmail.com>
References: <20230110182150.1911031-1-robdclark@gmail.com>

Change idle freq clamping back to the
direct method, bypassing PM QoS requests.  The problem with using PM QoS
requests is that they (indirectly) call the governor's
->get_target_freq(), which goes through a get_dev_status() cycle.  The
problem comes when the GPU becomes active again: removing the idle-clamp
request triggers another get_dev_status() cycle covering the period that
the GPU has been idle, which causes the governor to lower the target
freq excessively.

This partially reverts commit 7c0ffcd40b16 ("drm/msm/gpu: Respect PM QoS
constraints"), but preserves the use of the boost QoS request, so that
it will continue to play nicely with other QoS requests such as a
cooling device.

This also mostly undoes commit 78f815c1cf8f ("drm/msm: return the
average load over the polling period").

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gpu.h         |  12 ++-
 drivers/gpu/drm/msm/msm_gpu_devfreq.c | 135 +++++++++++---------------
 2 files changed, 65 insertions(+), 82 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 9e36f6c9bc29..a771f56ed70f 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -109,11 +109,15 @@ struct msm_gpu_devfreq {
 	struct mutex lock;
 
 	/**
-	 * idle_constraint:
+	 * idle_freq:
 	 *
-	 * A PM QoS constraint to limit max freq while the GPU is idle.
+	 * Shadow frequency used while the GPU is idle.  From the PoV of
+	 * the devfreq governor, we are continuing to sample busyness and
+	 * adjust frequency while the GPU is idle, but we use this shadow
+	 * value as the GPU is actually clamped to minimum frequency while
+	 * it is inactive.
 	 */
-	struct dev_pm_qos_request idle_freq;
+	unsigned long idle_freq;
 
 	/**
 	 * boost_constraint:
@@ -135,8 +139,6 @@ struct msm_gpu_devfreq {
 	/** idle_time: Time of last transition to idle: */
 	ktime_t idle_time;
 
-	struct devfreq_dev_status average_status;
-
 	/**
 	 * idle_work:
 	 *
diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
index 0d7ff7ddc029..e578d74d402f 100644
--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
@@ -33,6 +33,16 @@ static int msm_devfreq_target(struct device *dev, unsigned long *freq,
 
 	trace_msm_gpu_freq_change(dev_pm_opp_get_freq(opp));
 
+	/*
+	 * If the GPU is idle, devfreq is not aware, so just stash
+	 * the new target freq (to use when we return to active)
+	 */
+	if (df->idle_freq) {
+		df->idle_freq = *freq;
+		dev_pm_opp_put(opp);
+		return 0;
+	}
+
 	if (gpu->funcs->gpu_set_freq) {
 		mutex_lock(&df->lock);
 		gpu->funcs->gpu_set_freq(gpu, opp, df->suspended);
@@ -48,15 +58,26 @@ static int msm_devfreq_target(struct device *dev, unsigned long *freq,
 
 static unsigned long get_freq(struct msm_gpu *gpu)
 {
+	struct msm_gpu_devfreq *df = &gpu->devfreq;
+
+	/*
+	 * If the GPU is idle, use the shadow/saved freq to avoid
+	 * confusing devfreq (which is unaware that we are switching
+	 * to lowest freq until the device is active again)
+	 */
+	if (df->idle_freq)
+		return df->idle_freq;
+
 	if (gpu->funcs->gpu_get_freq)
 		return gpu->funcs->gpu_get_freq(gpu);
 
 	return clk_get_rate(gpu->core_clk);
 }
 
-static void get_raw_dev_status(struct msm_gpu *gpu,
+static int msm_devfreq_get_dev_status(struct device *dev,
 		struct devfreq_dev_status *status)
 {
+	struct msm_gpu *gpu = dev_to_gpu(dev);
 	struct msm_gpu_devfreq *df = &gpu->devfreq;
 	u64 busy_cycles, busy_time;
 	unsigned long sample_rate;
@@ -72,7 +93,7 @@ static void get_raw_dev_status(struct msm_gpu *gpu,
 	if (df->suspended) {
 		mutex_unlock(&df->lock);
 		status->busy_time = 0;
-		return;
+		return 0;
 	}
 
 	busy_cycles = gpu->funcs->gpu_busy(gpu, &sample_rate);
@@ -87,71 +108,6 @@ static void get_raw_dev_status(struct msm_gpu *gpu,
 		busy_time = ~0LU;
 
 	status->busy_time = busy_time;
-}
-
-static void update_average_dev_status(struct msm_gpu *gpu,
-	const struct devfreq_dev_status *raw)
-{
-	struct msm_gpu_devfreq *df = &gpu->devfreq;
-	const u32 polling_ms = df->devfreq->profile->polling_ms;
-	const u32 max_history_ms = polling_ms * 11 / 10;
-	struct devfreq_dev_status *avg = &df->average_status;
-	u64 avg_freq;
-
-	/* simple_ondemand governor interacts poorly with gpu->clamp_to_idle.
-	 * When we enforce the constraint on idle, it calls get_dev_status
-	 * which would normally reset the stats.  When we remove the
-	 * constraint on active, it calls get_dev_status again where busy_time
-	 * would be 0.
-	 *
-	 * To remedy this, we always return the average load over the past
-	 * polling_ms.
-	 */
-
-	/* raw is longer than polling_ms or avg has no history */
-	if (div_u64(raw->total_time, USEC_PER_MSEC) >= polling_ms ||
-	    !avg->total_time) {
-		*avg = *raw;
-		return;
-	}
-
-	/* Truncate the oldest history first.
-	 *
-	 * Because we keep the history with a single devfreq_dev_status,
-	 * rather than a list of devfreq_dev_status, we have to assume freq
-	 * and load are the same over avg->total_time.  We can scale down
-	 * avg->busy_time and avg->total_time by the same factor to drop
-	 * history.
-	 */
-	if (div_u64(avg->total_time + raw->total_time, USEC_PER_MSEC) >=
-	    max_history_ms) {
-		const u32 new_total_time = polling_ms * USEC_PER_MSEC -
-			raw->total_time;
-		avg->busy_time = div_u64(
-			mul_u32_u32(avg->busy_time, new_total_time),
-			avg->total_time);
-		avg->total_time = new_total_time;
-	}
-
-	/* compute the average freq over avg->total_time + raw->total_time */
-	avg_freq = mul_u32_u32(avg->current_frequency, avg->total_time);
-	avg_freq += mul_u32_u32(raw->current_frequency, raw->total_time);
-	do_div(avg_freq, avg->total_time + raw->total_time);
-
-	avg->current_frequency = avg_freq;
-	avg->busy_time += raw->busy_time;
-	avg->total_time += raw->total_time;
-}
-
-static int msm_devfreq_get_dev_status(struct device *dev,
-		struct devfreq_dev_status *status)
-{
-	struct msm_gpu *gpu = dev_to_gpu(dev);
-	struct devfreq_dev_status raw;
-
-	get_raw_dev_status(gpu, &raw);
-	update_average_dev_status(gpu, &raw);
-	*status = gpu->devfreq.average_status;
 
 	return 0;
 }
@@ -191,9 +147,6 @@ void msm_devfreq_init(struct msm_gpu *gpu)
 
 	mutex_init(&df->lock);
 
-	dev_pm_qos_add_request(&gpu->pdev->dev, &df->idle_freq,
-			       DEV_PM_QOS_MAX_FREQUENCY,
-			       PM_QOS_MAX_FREQUENCY_DEFAULT_VALUE);
 	dev_pm_qos_add_request(&gpu->pdev->dev, &df->boost_freq,
 			       DEV_PM_QOS_MIN_FREQUENCY, 0);
 
@@ -214,7 +167,6 @@ void msm_devfreq_init(struct msm_gpu *gpu)
 	if (IS_ERR(df->devfreq)) {
 		DRM_DEV_ERROR(&gpu->pdev->dev, "Couldn't initialize GPU devfreq\n");
-		dev_pm_qos_remove_request(&df->idle_freq);
 		dev_pm_qos_remove_request(&df->boost_freq);
 		df->devfreq = NULL;
 		return;
@@ -256,7 +208,6 @@ void msm_devfreq_cleanup(struct msm_gpu *gpu)
 
 	devfreq_cooling_unregister(gpu->cooling);
 	dev_pm_qos_remove_request(&df->boost_freq);
-	dev_pm_qos_remove_request(&df->idle_freq);
 }
 
 void msm_devfreq_resume(struct msm_gpu *gpu)
@@ -329,6 +280,7 @@ void msm_devfreq_active(struct msm_gpu *gpu)
 {
 	struct msm_gpu_devfreq *df = &gpu->devfreq;
 	unsigned int idle_time;
+	unsigned long target_freq;
 
 	if (!has_devfreq(gpu))
 		return;
@@ -338,8 +290,28 @@ void msm_devfreq_active(struct msm_gpu *gpu)
 	 */
 	cancel_idle_work(df);
 
+	/*
+	 * Hold devfreq lock to synchronize with get_dev_status()/
+	 * target() callbacks
+	 */
+	mutex_lock(&df->devfreq->lock);
+
+	target_freq = df->idle_freq;
+	idle_time = ktime_to_ms(ktime_sub(ktime_get(), df->idle_time));
+	df->idle_freq = 0;
+
+	/*
+	 * We could have become active again before the idle work had a
+	 * chance to run, in which case the df->idle_freq would have
+	 * still been zero.  In this case, no need to change freq.
+	 */
+	if (target_freq)
+		msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);
+
+	mutex_unlock(&df->devfreq->lock);
+
 	/*
 	 * If we've been idle for a significant fraction of a polling
 	 * interval, then we won't meet the threshold of busyness for
@@ -348,9 +320,6 @@ void msm_devfreq_active(struct msm_gpu *gpu)
 	if (idle_time > msm_devfreq_profile.polling_ms) {
 		msm_devfreq_boost(gpu, 2);
 	}
-
-	dev_pm_qos_update_request(&df->idle_freq,
-				  PM_QOS_MAX_FREQUENCY_DEFAULT_VALUE);
 }
 
@@ -360,11 +329,23 @@ static void msm_devfreq_idle_work(struct kthread_work *work)
 	struct msm_gpu_devfreq *df = container_of(work,
 			struct msm_gpu_devfreq, idle_work.work);
 	struct msm_gpu *gpu = container_of(df, struct msm_gpu, devfreq);
 	struct msm_drm_private *priv = gpu->dev->dev_private;
+	unsigned long idle_freq, target_freq = 0;
 
-	df->idle_time = ktime_get();
+	/*
+	 * Hold devfreq lock to synchronize with get_dev_status()/
+	 * target() callbacks
+	 */
+	mutex_lock(&df->devfreq->lock);
+
+	idle_freq = get_freq(gpu);
 
 	if (priv->gpu_clamp_to_idle)
-		dev_pm_qos_update_request(&df->idle_freq, 0);
+		msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0);
+
+	df->idle_time = ktime_get();
+	df->idle_freq = idle_freq;
+
+	mutex_unlock(&df->devfreq->lock);
 }
 
 void msm_devfreq_idle(struct msm_gpu *gpu)

From patchwork Tue Jan 10 18:21:47 2023
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, David Airlie,
    Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 3/3] drm/msm/gpu: Add default devfreq thresholds
Date: Tue, 10 Jan 2023 10:21:47 -0800
Message-Id: <20230110182150.1911031-4-robdclark@gmail.com>
In-Reply-To: <20230110182150.1911031-1-robdclark@gmail.com>
References: <20230110182150.1911031-1-robdclark@gmail.com>

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gpu_devfreq.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
index e578d74d402f..1f31e72ca0cf 100644
--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
@@ -145,6 +145,15 @@ void msm_devfreq_init(struct msm_gpu *gpu)
 	if (!gpu->funcs->gpu_busy)
 		return;
 
+	/*
+	 * Setup default
+	 * values for simple_ondemand governor tuning.  We want to
+	 * throttle up at 50% load for the double-buffer case, where
+	 * due to stalling waiting for vblank we could get stuck at
+	 * (for ex) 30fps at 50% utilization.
+	 */
+	priv->gpu_devfreq_config.upthreshold = 50;
+	priv->gpu_devfreq_config.downdifferential = 10;
+
 	mutex_init(&df->lock);
 
 	dev_pm_qos_add_request(&gpu->pdev->dev, &df->boost_freq,