From patchwork Wed Nov 25 10:02:40 2020
X-Patchwork-Submitter: Kalyan Thota
X-Patchwork-Id: 11930969
From: Kalyan Thota
To: y@qualcomm.com, dri-devel@lists.freedesktop.org,
	linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	devicetree@vger.kernel.org
Cc: Kalyan Thota, linux-kernel@vger.kernel.org, robdclark@gmail.com,
	seanpaul@chromium.org, hoegsberg@chromium.org, dianders@chromium.org,
	mkrishn@codeaurora.org, travitej@codeaurora.org, nganji@codeaurora.org,
	swboyd@chromium.org, abhinavk@codeaurora.org, ddavenport@chromium.org,
	amit.pundir@linaro.org, sumit.semwal@linaro.org
Subject: [v1] drm/msm/dpu: consider vertical front porch in the prefill bw calculation
Date: Wed, 25 Nov 2020 02:02:40 -0800
Message-Id: <1606298560-3003-1-git-send-email-kalyan_t@codeaurora.org>
X-Mailing-List: linux-arm-msm@vger.kernel.org

In the case of panels with a short vertical back porch, the prefill
bandwidth requirement is high, because there is less time (vbp + pw) to
fetch and fill the hardware latency buffers before the first line of the
active period starts.

For example, if hw_latency_line_buffers = 24 and the blanking vbp + pw = 10,
then 24 lines of data must be fetched in 10 line times, which scales the
prefill bandwidth up by the ratio of latency line buffers to blanking lines.

The DPU hardware can also fetch data during the vertical front porch,
provided interface prefetch is enabled. Use vfp in the prefill calculation,
since the DPU driver enables prefetch whenever the blanking period is not
sufficient to fill the latency lines.
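As an illustration only (not part of the patch), here is a minimal user-space
sketch of the fill-window selection described above. The mode_timings struct
and the numbers are hypothetical stand-ins for the vertical fields of
struct drm_display_mode and the catalog's min_prefill_lines; the real change
is the _dpu_plane_calc_bw() hunk in the diff that follows.

/* Editorial sketch: how many line times are available to fill the
 * hardware latency buffers, given the vertical timings of a mode. */
#include <stdio.h>

struct mode_timings {            /* stand-in for the vertical drm_display_mode fields */
	int vdisplay;            /* active lines */
	int vsync_start;         /* vdisplay + front porch */
	int vsync_end;           /* vsync_start + pulse width */
	int vtotal;              /* vsync_end + back porch */
};

static int prefill_window_lines(const struct mode_timings *m, int hw_latency_lines)
{
	int vbp = m->vtotal - m->vsync_end;        /* vertical back porch */
	int vpw = m->vsync_end - m->vsync_start;   /* vsync pulse width */
	int vfp = m->vsync_start - m->vdisplay;    /* vertical front porch */

	if (vbp + vpw > hw_latency_lines)          /* blanking alone is enough */
		return vbp + vpw;
	if (vbp + vpw + vfp < hw_latency_lines)    /* even with prefetch we are short */
		return vbp + vpw + vfp;
	return hw_latency_lines;                   /* prefetch covers the remainder */
}

int main(void)
{
	/* hypothetical timings with a short back porch: vbp + vpw = 10 */
	struct mode_timings m = { .vdisplay = 1080, .vsync_start = 1100,
				  .vsync_end = 1104, .vtotal = 1110 };

	printf("prefill window = %d line times\n", prefill_window_lines(&m, 24));
	return 0;
}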
Signed-off-by: Kalyan Thota
Tested-by: Amit Pundir
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
index 7ea90d2..315b999 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
@@ -151,7 +151,7 @@ static void _dpu_plane_calc_bw(struct drm_plane *plane,
 	u64 plane_bw;
 	u32 hw_latency_lines;
 	u64 scale_factor;
-	int vbp, vpw;
+	int vbp, vpw, vfp;
 
 	pstate = to_dpu_plane_state(plane->state);
 	mode = &plane->state->crtc->mode;
@@ -164,6 +164,7 @@ static void _dpu_plane_calc_bw(struct drm_plane *plane,
 	fps = drm_mode_vrefresh(mode);
 	vbp = mode->vtotal - mode->vsync_end;
 	vpw = mode->vsync_end - mode->vsync_start;
+	vfp = mode->vsync_start - mode->vdisplay;
 	hw_latency_lines = dpu_kms->catalog->perf.min_prefill_lines;
 	scale_factor = src_height > dst_height ?
 		mult_frac(src_height, 1, dst_height) : 1;
@@ -176,7 +177,13 @@ static void _dpu_plane_calc_bw(struct drm_plane *plane,
 			src_width * hw_latency_lines * fps * fmt->bpp *
 			scale_factor * mode->vtotal;
 
-	do_div(plane_prefill_bw, (vbp+vpw));
+	if ((vbp+vpw) > hw_latency_lines)
+		do_div(plane_prefill_bw, (vbp+vpw));
+	else if ((vbp+vpw+vfp) < hw_latency_lines)
+		do_div(plane_prefill_bw, (vbp+vpw+vfp));
+	else
+		do_div(plane_prefill_bw, hw_latency_lines);
+
 
 	pstate->plane_fetch_bw = max(plane_bw, plane_prefill_bw);
 }
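As a quick numeric check of the new branch selection (again an illustration
only, with a hypothetical front-porch value), the commit message's example of
24 latency lines against 10 lines of vbp + pw now divides by the latency-line
count instead of the bare blanking, shrinking the reported prefill bandwidth
accordingly:

/* Editorial sketch: compare the pre-patch and post-patch divisors. */
#include <stdio.h>

int main(void)
{
	const int hw_latency_lines = 24;  /* min_prefill_lines in the example */
	const int vbp_plus_vpw = 10;      /* blanking before the active period */
	const int vfp = 20;               /* hypothetical front porch, usable via prefetch */
	int old_window = vbp_plus_vpw;    /* pre-patch: always divide by blanking */
	int new_window;                   /* post-patch: three-way selection */

	if (vbp_plus_vpw > hw_latency_lines)
		new_window = vbp_plus_vpw;
	else if (vbp_plus_vpw + vfp < hw_latency_lines)
		new_window = vbp_plus_vpw + vfp;
	else
		new_window = hw_latency_lines;

	printf("old divisor %d, new divisor %d -> prefill bw shrinks by %.1fx\n",
	       old_window, new_window, (double)new_window / old_window);
	return 0;
}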
From patchwork Tue Jun 21 10:53:20 2022
X-Patchwork-Submitter: Vinod Polimera
X-Patchwork-Id: 12889036
From: Vinod Polimera
To: y@qualcomm.com, dri-devel@lists.freedesktop.org,
	linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	devicetree@vger.kernel.org
Cc: Vinod Polimera, linux-kernel@vger.kernel.org, robdclark@gmail.com,
	dianders@chromium.org, swboyd@chromium.org, quic_kalyant@quicinc.com,
	dmitry.baryshkov@linaro.org, quic_sbillaka@quicinc.com
Subject: [v3 5/5] drm/msm/disp/dpu1: add PSR support for eDP interface in dpu driver
Date: Tue, 21 Jun 2022 16:23:20 +0530
Message-Id: <1655808800-3996-6-git-send-email-quic_vpolimer@quicinc.com>
In-Reply-To: <1655808800-3996-1-git-send-email-quic_vpolimer@quicinc.com>
References: <1655808800-3996-1-git-send-email-quic_vpolimer@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

Enable PSR on the eDP interface using the drm self-refresh library.
This patch uses a trigger from the self-refresh library to enter/exit
PSR when there are no updates from the framework.

Signed-off-by: Kalyan Thota
Signed-off-by: Vinod Polimera
Reported-by: kernel test robot
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c    | 36 ++++++++++++++++++++++++-----
 drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 20 +++++++++++++++-
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c     |  2 +-
 3 files changed, 50 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
index b56f777..c6e4f03 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include <drm/drm_self_refresh_helper.h>
 
 #include "dpu_kms.h"
 #include "dpu_hw_lm.h"
@@ -955,24 +956,39 @@ static void dpu_crtc_disable(struct drm_crtc *crtc,
 		crtc);
 	struct dpu_crtc *dpu_crtc = to_dpu_crtc(crtc);
 	struct dpu_crtc_state *cstate = to_dpu_crtc_state(crtc->state);
-	struct drm_encoder *encoder;
+	struct drm_encoder *encoder = NULL;
 	unsigned long flags;
 	bool release_bandwidth = false;
 
 	DRM_DEBUG_KMS("crtc%d\n", crtc->base.id);
 
+	if (old_crtc_state->self_refresh_active) {
+		drm_for_each_encoder_mask(encoder, crtc->dev,
+					old_crtc_state->encoder_mask) {
+			dpu_encoder_assign_crtc(encoder, NULL);
+		}
+		return;
+	}
+
 	/* Disable/save vblank irq handling */
 	drm_crtc_vblank_off(crtc);
 
 	drm_for_each_encoder_mask(encoder, crtc->dev,
				  old_crtc_state->encoder_mask) {
-		/* in video mode, we hold an extra bandwidth reference
+		/*
+		 * in video mode, we hold an extra bandwidth reference
 		 * as we cannot drop bandwidth at frame-done if any
 		 * crtc is being used in video mode.
 		 */
 		if (dpu_encoder_get_intf_mode(encoder) == INTF_MODE_VIDEO)
 			release_bandwidth = true;
-		dpu_encoder_assign_crtc(encoder, NULL);
+
+		/*
+		 * If disable is triggered during psr active(e.g: screen dim in PSR),
+		 * we will need encoder->crtc connection to process the device sleep &
+		 * preserve it during psr sequence.
+		 */
+		if (!crtc->state->self_refresh_active)
+			dpu_encoder_assign_crtc(encoder, NULL);
 	}
 
 	/* wait for frame_event_done completion */
@@ -1020,7 +1036,9 @@ static void dpu_crtc_enable(struct drm_crtc *crtc,
 	struct dpu_crtc *dpu_crtc = to_dpu_crtc(crtc);
 	struct drm_encoder *encoder;
 	bool request_bandwidth = false;
+	struct drm_crtc_state *old_crtc_state;
 
+	old_crtc_state = drm_atomic_get_old_crtc_state(state, crtc);
 	pm_runtime_get_sync(crtc->dev->dev);
 
 	DRM_DEBUG_KMS("crtc%d\n", crtc->base.id);
@@ -1042,8 +1060,9 @@ static void dpu_crtc_enable(struct drm_crtc *crtc,
 	trace_dpu_crtc_enable(DRMID(crtc), true, dpu_crtc);
 	dpu_crtc->enabled = true;
 
-	drm_for_each_encoder_mask(encoder, crtc->dev, crtc->state->encoder_mask)
-		dpu_encoder_assign_crtc(encoder, crtc);
+	if (!old_crtc_state->self_refresh_active)
+		drm_for_each_encoder_mask(encoder, crtc->dev, crtc->state->encoder_mask)
+			dpu_encoder_assign_crtc(encoder, crtc);
 
 	/* Enable/restore vblank irq handling */
 	drm_crtc_vblank_on(crtc);
@@ -1525,7 +1544,7 @@ struct drm_crtc *dpu_crtc_init(struct drm_device *dev, struct drm_plane *plane,
 {
 	struct drm_crtc *crtc = NULL;
 	struct dpu_crtc *dpu_crtc = NULL;
-	int i;
+	int i, ret;
 
 	dpu_crtc = kzalloc(sizeof(*dpu_crtc), GFP_KERNEL);
 	if (!dpu_crtc)
@@ -1562,6 +1581,11 @@ struct drm_crtc *dpu_crtc_init(struct drm_device *dev, struct drm_plane *plane,
 	/* initialize event handling */
 	spin_lock_init(&dpu_crtc->event_lock);
 
+	ret = drm_self_refresh_helper_init(crtc);
+	if (ret)
+		DPU_ERROR("Failed to initialize %s with self-refresh helpers %d\n",
+				crtc->name, ret);
+
 	DRM_DEBUG_KMS("%s: successfully initialized crtc\n", dpu_crtc->name);
 	return crtc;
 }
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
index cc2809b..234e95d 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
@@ -225,6 +225,11 @@ bool dpu_encoder_is_widebus_enabled(const struct drm_encoder *drm_enc)
 	return dpu_enc->wide_bus_en;
 }
 
+static inline bool is_self_refresh_active(const struct drm_crtc_state *state)
+{
+	return (state && state->self_refresh_active);
+}
+
 static void _dpu_encoder_setup_dither(struct dpu_hw_pingpong *hw_pp, unsigned bpc)
 {
 	struct dpu_hw_dither_cfg dither_cfg = { 0 };
@@ -592,7 +597,8 @@ static int dpu_encoder_virt_atomic_check(
 	if (drm_atomic_crtc_needs_modeset(crtc_state)) {
 		dpu_rm_release(global_state, drm_enc);
 
-		if (!crtc_state->active_changed || crtc_state->active)
+		if (!crtc_state->active_changed || crtc_state->active ||
+				crtc_state->self_refresh_active)
 			ret = dpu_rm_reserve(&dpu_kms->rm, global_state, drm_enc,
 					crtc_state, topology);
 	}
@@ -1171,11 +1177,23 @@ static void dpu_encoder_virt_atomic_disable(struct drm_encoder *drm_enc,
 					struct drm_atomic_state *state)
 {
 	struct dpu_encoder_virt *dpu_enc = NULL;
+	struct drm_crtc *crtc;
+	struct drm_crtc_state *old_state;
 	int i = 0;
 
 	dpu_enc = to_dpu_encoder_virt(drm_enc);
 	DPU_DEBUG_ENC(dpu_enc, "\n");
 
+	crtc = dpu_enc->crtc;
+	old_state = drm_atomic_get_old_crtc_state(state, crtc);
+
+	/*
+	 * The encoder disabled already occurred when self refresh mode
+	 * was set earlier, in the old_state for the corresponding crtc.
+	 */
+	if (drm_enc->encoder_type == DRM_MODE_ENCODER_TMDS && is_self_refresh_active(old_state))
+		return;
+
 	mutex_lock(&dpu_enc->enc_lock);
 	dpu_enc->enabled = false;
 
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index bce4764..cc0a674 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -507,7 +507,7 @@ static void dpu_kms_wait_for_commit_done(struct msm_kms *kms,
 		return;
 	}
 
-	if (!crtc->state->active) {
+	if (!drm_atomic_crtc_effectively_active(crtc->state)) {
 		DPU_DEBUG("[crtc:%d] not active\n", crtc->base.id);
 		return;
 	}
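For context, the self-refresh plumbing used by this series follows a common
DRM pattern. Below is a minimal sketch of that pattern, assuming only the
kernel helpers the diffs above already call, drm_self_refresh_helper_init()
and drm_atomic_crtc_effectively_active(); the my_*() wrappers are hypothetical
placeholders standing in for driver-specific code such as dpu_crtc_init() and
dpu_kms_wait_for_commit_done().

/* Editorial sketch, not part of the patch. */
#include <drm/drm_atomic.h>
#include <drm/drm_crtc.h>
#include <drm/drm_print.h>
#include <drm/drm_self_refresh_helper.h>

/*
 * At CRTC init time, register the CRTC with the self-refresh helpers so the
 * library can schedule idle work that commits a self-refresh (PSR) state
 * when no updates arrive from userspace.
 */
static int my_crtc_init_self_refresh(struct drm_crtc *crtc)
{
	int ret = drm_self_refresh_helper_init(crtc);

	if (ret)
		DRM_ERROR("self-refresh init failed for %s: %d\n",
			  crtc->name, ret);
	return ret;
}

/*
 * In paths that must treat a PSR-entered CRTC as still running (for example
 * while waiting for a commit to complete), check "effectively active" rather
 * than active: it is true when either state->active or
 * state->self_refresh_active is set.
 */
static bool my_crtc_is_running(const struct drm_crtc_state *state)
{
	return drm_atomic_crtc_effectively_active(state);
}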