From patchwork Tue Jul 5 17:00:39 2022
X-Patchwork-Submitter: Vinod Polimera
X-Patchwork-Id: 12906863
From: Vinod Polimera
To: dri-devel@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, devicetree@vger.kernel.org
Subject: [PATCH v4 1/7] drm/msm/disp/dpu1: clear dpu_assign_crtc and get crtc from drm_enc instead of dpu_enc
Date: Tue, 5 Jul 2022 22:30:39 +0530
Message-Id: <1657040445-13067-2-git-send-email-quic_vpolimer@quicinc.com>
In-Reply-To: <1657040445-13067-1-git-send-email-quic_vpolimer@quicinc.com>

Update crtc retrieval to use drm_enc instead of dpu_enc, since the new encoder/crtc links are set as part of the legacy mode-set state update during an atomic commit. The dpu_enc->crtc cache is no longer needed, so remove it as part of this change.
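For context, the "legacy mode set" update referred to above is drm_atomic_helper_update_legacy_modeset_state(), which the atomic commit path runs for every modeset; it first clears the old connector/encoder links and then re-establishes them from the new connector states. A simplified, non-verbatim sketch of the part that keeps drm_encoder::crtc current (which is why a driver-private cache adds nothing; the function name here is illustrative):

/* simplified sketch of what the core helper does during a modeset commit */
static void example_update_encoder_crtc_links(struct drm_atomic_state *state)
{
	struct drm_connector *connector;
	struct drm_connector_state *new_conn_state;
	int i;

	for_each_new_connector_in_state(state, connector, new_conn_state, i) {
		if (!new_conn_state->crtc)
			continue;

		connector->encoder = new_conn_state->best_encoder;
		/* drm_encoder::crtc now always reflects the committed state */
		connector->encoder->crtc = new_conn_state->crtc;
	}
}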
Signed-off-by: Vinod Polimera --- drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c | 4 ---- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 18 +++--------------- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h | 8 -------- 3 files changed, 3 insertions(+), 27 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c index b56f777..f91e3d1 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c @@ -972,7 +972,6 @@ static void dpu_crtc_disable(struct drm_crtc *crtc, */ if (dpu_encoder_get_intf_mode(encoder) == INTF_MODE_VIDEO) release_bandwidth = true; - dpu_encoder_assign_crtc(encoder, NULL); } /* wait for frame_event_done completion */ @@ -1042,9 +1041,6 @@ static void dpu_crtc_enable(struct drm_crtc *crtc, trace_dpu_crtc_enable(DRMID(crtc), true, dpu_crtc); dpu_crtc->enabled = true; - drm_for_each_encoder_mask(encoder, crtc->dev, crtc->state->encoder_mask) - dpu_encoder_assign_crtc(encoder, crtc); - /* Enable/restore vblank irq handling */ drm_crtc_vblank_on(crtc); } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index 52516eb..5629c0b 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -1254,8 +1254,8 @@ static void dpu_encoder_vblank_callback(struct drm_encoder *drm_enc, dpu_enc = to_dpu_encoder_virt(drm_enc); spin_lock_irqsave(&dpu_enc->enc_spinlock, lock_flags); - if (dpu_enc->crtc) - dpu_crtc_vblank_callback(dpu_enc->crtc); + if (drm_enc->crtc) + dpu_crtc_vblank_callback(drm_enc->crtc); spin_unlock_irqrestore(&dpu_enc->enc_spinlock, lock_flags); atomic_inc(&phy_enc->vsync_cnt); @@ -1280,18 +1280,6 @@ static void dpu_encoder_underrun_callback(struct drm_encoder *drm_enc, DPU_ATRACE_END("encoder_underrun_callback"); } -void dpu_encoder_assign_crtc(struct drm_encoder *drm_enc, struct drm_crtc *crtc) -{ - struct dpu_encoder_virt *dpu_enc = to_dpu_encoder_virt(drm_enc); - unsigned long lock_flags; - - spin_lock_irqsave(&dpu_enc->enc_spinlock, lock_flags); - /* crtc should always be cleared before re-assigning */ - WARN_ON(crtc && dpu_enc->crtc); - dpu_enc->crtc = crtc; - spin_unlock_irqrestore(&dpu_enc->enc_spinlock, lock_flags); -} - void dpu_encoder_toggle_vblank_for_crtc(struct drm_encoder *drm_enc, struct drm_crtc *crtc, bool enable) { @@ -1302,7 +1290,7 @@ void dpu_encoder_toggle_vblank_for_crtc(struct drm_encoder *drm_enc, trace_dpu_enc_vblank_cb(DRMID(drm_enc), enable); spin_lock_irqsave(&dpu_enc->enc_spinlock, lock_flags); - if (dpu_enc->crtc != crtc) { + if (drm_enc->crtc != crtc) { spin_unlock_irqrestore(&dpu_enc->enc_spinlock, lock_flags); return; } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h index 781d41c..edba815 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h @@ -39,14 +39,6 @@ struct msm_display_info { }; /** - * dpu_encoder_assign_crtc - Link the encoder to the crtc it's assigned to - * @encoder: encoder pointer - * @crtc: crtc pointer - */ -void dpu_encoder_assign_crtc(struct drm_encoder *encoder, - struct drm_crtc *crtc); - -/** * dpu_encoder_toggle_vblank_for_crtc - Toggles vblank interrupts on or off if * the encoder is assigned to the given crtc * @encoder: encoder pointer From patchwork Tue Jul 5 17:00:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vinod Polimera X-Patchwork-Id: 
12906858
From: Vinod Polimera
To: dri-devel@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, devicetree@vger.kernel.org
Subject: [PATCH v4 2/7] drm: add helper functions to retrieve old and new crtc
Date: Tue, 5 Jul 2022 22:30:40 +0530
Message-Id: <1657040445-13067-3-git-send-email-quic_vpolimer@quicinc.com>
In-Reply-To: <1657040445-13067-1-git-send-email-quic_vpolimer@quicinc.com>

Add new helper functions, drm_atomic_get_old_crtc_for_encoder and drm_atomic_get_new_crtc_for_encoder, to retrieve the corresponding crtc for the encoder.
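A minimal usage sketch, modeled on how the later patches in this series call the new helpers from bridge callbacks (the callback name here is hypothetical): the old-crtc variant answers "which crtc was driving this encoder before the commit", the new-crtc variant answers "which crtc will drive it afterwards", and from the returned crtc the old or new crtc state can be looked up as usual.

static void example_atomic_enable(struct drm_bridge *bridge,
				  struct drm_bridge_state *old_bridge_state)
{
	struct drm_atomic_state *state = old_bridge_state->base.state;
	struct drm_crtc *crtc;
	struct drm_crtc_state *old_crtc_state;

	/* crtc that will drive this encoder once the commit is applied */
	crtc = drm_atomic_get_new_crtc_for_encoder(state, bridge->encoder);
	if (!crtc)
		return;

	old_crtc_state = drm_atomic_get_old_crtc_state(state, crtc);
	if (old_crtc_state && old_crtc_state->self_refresh_active)
		return;	/* only leaving self refresh, skip the full enable */

	/* ... full enable sequence ... */
}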
Signed-off-by: Sankeerth Billakanti Signed-off-by: Vinod Polimera --- drivers/gpu/drm/drm_atomic.c | 60 ++++++++++++++++++++++++++++++++++++++++++++ include/drm/drm_atomic.h | 7 ++++++ 2 files changed, 67 insertions(+) diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c index 58c0283..87fcb55 100644 --- a/drivers/gpu/drm/drm_atomic.c +++ b/drivers/gpu/drm/drm_atomic.c @@ -983,6 +983,66 @@ drm_atomic_get_new_connector_for_encoder(struct drm_atomic_state *state, EXPORT_SYMBOL(drm_atomic_get_new_connector_for_encoder); /** + * drm_atomic_get_old_crtc_for_encoder - Get old crtc for an encoder + * @state: Atomic state + * @encoder: The encoder to fetch the crtc state for + * + * This function finds and returns the crtc that was connected to @encoder + * as specified by the @state. + * + * Returns: The old crtc connected to @encoder, or NULL if the encoder is + * not connected. + */ +struct drm_crtc * +drm_atomic_get_old_crtc_for_encoder(struct drm_atomic_state *state, + struct drm_encoder *encoder) +{ + struct drm_connector *connector; + struct drm_connector_state *conn_state; + + connector = drm_atomic_get_old_connector_for_encoder(state, encoder); + if (!connector) + return NULL; + + conn_state = drm_atomic_get_old_connector_state(state, connector); + if (!conn_state) + return NULL; + + return conn_state->crtc; +} +EXPORT_SYMBOL(drm_atomic_get_old_crtc_for_encoder); + +/** + * drm_atomic_get_new_crtc_for_encoder - Get new crtc for an encoder + * @state: Atomic state + * @encoder: The encoder to fetch the crtc state for + * + * This function finds and returns the crtc that will be connected to @encoder + * as specified by the @state. + * + * Returns: The new crtc connected to @encoder, or NULL if the encoder is + * not connected. + */ +struct drm_crtc * +drm_atomic_get_new_crtc_for_encoder(struct drm_atomic_state *state, + struct drm_encoder *encoder) +{ + struct drm_connector *connector; + struct drm_connector_state *conn_state; + + connector = drm_atomic_get_new_connector_for_encoder(state, encoder); + if (!connector) + return NULL; + + conn_state = drm_atomic_get_new_connector_state(state, connector); + if (!conn_state) + return NULL; + + return conn_state->crtc; +} +EXPORT_SYMBOL(drm_atomic_get_new_crtc_for_encoder); + +/** * drm_atomic_get_connector_state - get connector state * @state: global atomic state object * @connector: connector to get state object for diff --git a/include/drm/drm_atomic.h b/include/drm/drm_atomic.h index 0777725..7001f12 100644 --- a/include/drm/drm_atomic.h +++ b/include/drm/drm_atomic.h @@ -528,6 +528,13 @@ struct drm_connector * drm_atomic_get_new_connector_for_encoder(struct drm_atomic_state *state, struct drm_encoder *encoder); +struct drm_crtc * +drm_atomic_get_old_crtc_for_encoder(struct drm_atomic_state *state, + struct drm_encoder *encoder); +struct drm_crtc * +drm_atomic_get_new_crtc_for_encoder(struct drm_atomic_state *state, + struct drm_encoder *encoder); + /** * drm_atomic_get_existing_crtc_state - get CRTC state, if it exists * @state: global atomic state object From patchwork Tue Jul 5 17:00:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vinod Polimera X-Patchwork-Id: 12906857 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0376ACCA47B for ; Tue, 5 Jul 2022 17:01:25 +0000 
From: Vinod Polimera
To: dri-devel@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, devicetree@vger.kernel.org
Subject: [PATCH v4 3/7] drm/msm/dp: Add basic PSR support for eDP
Date: Tue, 5 Jul 2022 22:30:41 +0530
Message-Id: <1657040445-13067-4-git-send-email-quic_vpolimer@quicinc.com>
In-Reply-To: <1657040445-13067-1-git-send-email-quic_vpolimer@quicinc.com>

Add support for the basic panel self-refresh (PSR) feature for eDP. Add a new interface to set the PSR state in the sink from the DPU. Program the eDP controller to issue the PSR enter and exit SDPs to the sink.
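The sink-side half of this is plain DPCD programming through the standard DRM DP helpers, which is essentially what dp_panel_read_psr_cap() and dp_ctrl_config_psr() in the diff below do; a condensed sketch under that assumption (function name hypothetical, error handling trimmed, aux is the driver's drm_dp_aux):

static int example_psr_sink_enable(struct drm_dp_aux *aux)
{
	u8 psr_version = 0, en = DP_PSR_ENABLE;

	/* eDP sinks that support PSR report a non-zero version here */
	drm_dp_dpcd_readb(aux, DP_PSR_SUPPORT, &psr_version);
	if (!psr_version)
		return -ENODEV;

	/* enable PSR in the sink; entry/exit is then signalled via SDPs */
	if (drm_dp_dpcd_writeb(aux, DP_PSR_EN_CFG, en) < 0)
		return -EIO;

	return 0;
}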
Signed-off-by: Sankeerth Billakanti Signed-off-by: Vinod Polimera --- drivers/gpu/drm/msm/dp/dp_catalog.c | 81 ++++++++++++++++ drivers/gpu/drm/msm/dp/dp_catalog.h | 4 + drivers/gpu/drm/msm/dp/dp_ctrl.c | 76 ++++++++++++++- drivers/gpu/drm/msm/dp/dp_ctrl.h | 3 + drivers/gpu/drm/msm/dp/dp_display.c | 31 +++--- drivers/gpu/drm/msm/dp/dp_display.h | 2 + drivers/gpu/drm/msm/dp/dp_drm.c | 186 ++++++++++++++++++++++++++++++++++-- drivers/gpu/drm/msm/dp/dp_drm.h | 9 +- drivers/gpu/drm/msm/dp/dp_link.c | 36 +++++++ drivers/gpu/drm/msm/dp/dp_panel.c | 22 +++++ drivers/gpu/drm/msm/dp/dp_panel.h | 6 ++ drivers/gpu/drm/msm/dp/dp_reg.h | 27 ++++++ 12 files changed, 458 insertions(+), 25 deletions(-) diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c b/drivers/gpu/drm/msm/dp/dp_catalog.c index 7257515..b9021ed 100644 --- a/drivers/gpu/drm/msm/dp/dp_catalog.c +++ b/drivers/gpu/drm/msm/dp/dp_catalog.c @@ -47,6 +47,14 @@ #define DP_INTERRUPT_STATUS2_MASK \ (DP_INTERRUPT_STATUS2 << DP_INTERRUPT_STATUS_MASK_SHIFT) +#define DP_INTERRUPT_STATUS4 \ + (PSR_UPDATE_INT | PSR_CAPTURE_INT | PSR_EXIT_INT | \ + PSR_UPDATE_ERROR_INT | PSR_WAKE_ERROR_INT) + +#define DP_INTERRUPT_MASK4 \ + (PSR_UPDATE_MASK | PSR_CAPTURE_MASK | PSR_EXIT_MASK | \ + PSR_UPDATE_ERROR_MASK | PSR_WAKE_ERROR_MASK) + struct dp_catalog_private { struct device *dev; struct drm_device *drm_dev; @@ -359,6 +367,24 @@ void dp_catalog_ctrl_lane_mapping(struct dp_catalog *dp_catalog) ln_mapping); } +void dp_catalog_ctrl_psr_mainlink_enable(struct dp_catalog *dp_catalog, + bool enable) +{ + u32 val; + struct dp_catalog_private *catalog = container_of(dp_catalog, + struct dp_catalog_private, dp_catalog); + + val = dp_read_link(catalog, REG_DP_MAINLINK_CTRL); + val &= ~DP_MAINLINK_CTRL_ENABLE; + + if (enable) + val |= DP_MAINLINK_CTRL_ENABLE; + else + val &= ~DP_MAINLINK_CTRL_ENABLE; + + dp_write_link(catalog, REG_DP_MAINLINK_CTRL, val); +} + void dp_catalog_ctrl_mainlink_ctrl(struct dp_catalog *dp_catalog, bool enable) { @@ -610,6 +636,47 @@ void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog) dp_write_aux(catalog, REG_DP_DP_HPD_CTRL, DP_DP_HPD_CTRL_HPD_EN); } +static void dp_catalog_enable_sdp(struct dp_catalog_private *catalog) +{ + /* trigger sdp */ + dp_write_link(catalog, MMSS_DP_SDP_CFG3, UPDATE_SDP); + dp_write_link(catalog, MMSS_DP_SDP_CFG3, !UPDATE_SDP); +} + +void dp_catalog_ctrl_config_psr(struct dp_catalog *dp_catalog) +{ + struct dp_catalog_private *catalog = container_of(dp_catalog, + struct dp_catalog_private, dp_catalog); + u32 config; + + /* enable PSR1 function */ + config = dp_read_link(catalog, REG_PSR_CONFIG); + config |= PSR1_SUPPORTED; + dp_write_link(catalog, REG_PSR_CONFIG, config); + + dp_write_ahb(catalog, REG_DP_INTR_MASK4, DP_INTERRUPT_MASK4); + dp_catalog_enable_sdp(catalog); +} + +void dp_catalog_ctrl_set_psr(struct dp_catalog *dp_catalog, bool enter) +{ + struct dp_catalog_private *catalog = container_of(dp_catalog, + struct dp_catalog_private, dp_catalog); + u32 cmd; + + cmd = dp_read_link(catalog, REG_PSR_CMD); + + cmd &= ~(PSR_ENTER | PSR_EXIT); + + if (enter) + cmd |= PSR_ENTER; + else + cmd |= PSR_EXIT; + + dp_catalog_enable_sdp(catalog); + dp_write_link(catalog, REG_PSR_CMD, cmd); +} + u32 dp_catalog_link_is_connected(struct dp_catalog *dp_catalog) { struct dp_catalog_private *catalog = container_of(dp_catalog, @@ -645,6 +712,20 @@ u32 dp_catalog_hpd_get_intr_status(struct dp_catalog *dp_catalog) return isr & (mask | ~DP_DP_HPD_INT_MASK); } +int dp_catalog_ctrl_read_psr_interrupt_status(struct dp_catalog 
*dp_catalog) +{ + struct dp_catalog_private *catalog = container_of(dp_catalog, + struct dp_catalog_private, dp_catalog); + u32 intr, intr_ack; + + intr = dp_read_ahb(catalog, REG_DP_INTR_STATUS4); + intr_ack = (intr & DP_INTERRUPT_STATUS4) + << DP_INTERRUPT_STATUS_ACK_SHIFT; + dp_write_ahb(catalog, REG_DP_INTR_STATUS4, intr_ack); + + return intr; +} + int dp_catalog_ctrl_get_interrupt(struct dp_catalog *dp_catalog) { struct dp_catalog_private *catalog = container_of(dp_catalog, diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.h b/drivers/gpu/drm/msm/dp/dp_catalog.h index 1f717f4..6454845 100644 --- a/drivers/gpu/drm/msm/dp/dp_catalog.h +++ b/drivers/gpu/drm/msm/dp/dp_catalog.h @@ -93,6 +93,7 @@ void dp_catalog_ctrl_state_ctrl(struct dp_catalog *dp_catalog, u32 state); void dp_catalog_ctrl_config_ctrl(struct dp_catalog *dp_catalog, u32 config); void dp_catalog_ctrl_lane_mapping(struct dp_catalog *dp_catalog); void dp_catalog_ctrl_mainlink_ctrl(struct dp_catalog *dp_catalog, bool enable); +void dp_catalog_ctrl_psr_mainlink_enable(struct dp_catalog *dp_catalog, bool enable); void dp_catalog_ctrl_config_misc(struct dp_catalog *dp_catalog, u32 cc, u32 tb); void dp_catalog_ctrl_config_msa(struct dp_catalog *dp_catalog, u32 rate, u32 stream_rate_khz, bool fixed_nvid); @@ -104,12 +105,15 @@ void dp_catalog_ctrl_enable_irq(struct dp_catalog *dp_catalog, bool enable); void dp_catalog_hpd_config_intr(struct dp_catalog *dp_catalog, u32 intr_mask, bool en); void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog); +void dp_catalog_ctrl_config_psr(struct dp_catalog *dp_catalog); +void dp_catalog_ctrl_set_psr(struct dp_catalog *dp_catalog, bool enter); u32 dp_catalog_link_is_connected(struct dp_catalog *dp_catalog); u32 dp_catalog_hpd_get_intr_status(struct dp_catalog *dp_catalog); void dp_catalog_ctrl_phy_reset(struct dp_catalog *dp_catalog); int dp_catalog_ctrl_update_vx_px(struct dp_catalog *dp_catalog, u8 v_level, u8 p_level); int dp_catalog_ctrl_get_interrupt(struct dp_catalog *dp_catalog); +int dp_catalog_ctrl_read_psr_interrupt_status(struct dp_catalog *dp_catalog); void dp_catalog_ctrl_update_transfer_unit(struct dp_catalog *dp_catalog, u32 dp_tu, u32 valid_boundary, u32 valid_boundary2); diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c index d21971b..485e8f5 100644 --- a/drivers/gpu/drm/msm/dp/dp_ctrl.c +++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c @@ -22,6 +22,7 @@ #define DP_KHZ_TO_HZ 1000 #define IDLE_PATTERN_COMPLETION_TIMEOUT_JIFFIES (30 * HZ / 1000) /* 30 ms */ +#define PSR_OPERATION_COMPLETION_TIMEOUT_JIFFIES (300 * HZ / 1000) /* 300 ms */ #define WAIT_FOR_VIDEO_READY_TIMEOUT_JIFFIES (HZ / 2) #define DP_CTRL_INTR_READY_FOR_VIDEO BIT(0) @@ -80,6 +81,7 @@ struct dp_ctrl_private { struct dp_catalog *catalog; struct completion idle_comp; + struct completion psr_op_comp; struct completion video_comp; }; @@ -153,6 +155,9 @@ static void dp_ctrl_config_ctrl(struct dp_ctrl_private *ctrl) config |= DP_CONFIGURATION_CTRL_STATIC_DYNAMIC_CN; config |= DP_CONFIGURATION_CTRL_SYNC_ASYNC_CLK; + if (ctrl->panel->psr_cap.version) + config |= DP_CONFIGURATION_CTRL_SEND_VSC; + dp_catalog_ctrl_config_ctrl(ctrl->catalog, config); } @@ -1382,11 +1387,64 @@ static int dp_ctrl_enable_stream_clocks(struct dp_ctrl_private *ctrl) return ret; } -void dp_ctrl_reset_irq_ctrl(struct dp_ctrl *dp_ctrl, bool enable) +void dp_ctrl_config_psr(struct dp_ctrl *dp_ctrl) { - struct dp_ctrl_private *ctrl; + u8 cfg; + struct dp_ctrl_private *ctrl = container_of(dp_ctrl, + struct dp_ctrl_private, 
dp_ctrl); - ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl); + if (!ctrl->panel->psr_cap.version) + return; + + dp_catalog_ctrl_config_psr(ctrl->catalog); + + cfg = DP_PSR_ENABLE; + drm_dp_dpcd_write(ctrl->aux, DP_PSR_EN_CFG, &cfg, 1); +} + +void dp_ctrl_set_psr(struct dp_ctrl *dp_ctrl, bool enter) +{ + struct dp_ctrl_private *ctrl = container_of(dp_ctrl, + struct dp_ctrl_private, dp_ctrl); + + if (!ctrl->panel->psr_cap.version) + return; + + /* + * When entering PSR, + * 1. Send PSR enter SDP and wait for the PSR_UPDATE_INT + * 2. Turn off video + * 3. Disable the mainlink + * + * When exiting PSR, + * 1. Enable the mainlink + * 2. Send the PSR exit SDP + */ + if (enter) { + reinit_completion(&ctrl->psr_op_comp); + dp_catalog_ctrl_set_psr(ctrl->catalog, true); + + if (!wait_for_completion_timeout(&ctrl->psr_op_comp, + PSR_OPERATION_COMPLETION_TIMEOUT_JIFFIES)) { + DRM_ERROR("PSR_ENTRY timedout\n"); + dp_catalog_ctrl_set_psr(ctrl->catalog, false); + return; + } + + dp_catalog_ctrl_state_ctrl(ctrl->catalog, 0); + + dp_catalog_ctrl_psr_mainlink_enable(ctrl->catalog, false); + } else { + dp_catalog_ctrl_psr_mainlink_enable(ctrl->catalog, true); + + dp_catalog_ctrl_set_psr(ctrl->catalog, false); + } +} + +void dp_ctrl_reset_irq_ctrl(struct dp_ctrl *dp_ctrl, bool enable) +{ + struct dp_ctrl_private *ctrl = container_of(dp_ctrl, + struct dp_ctrl_private, dp_ctrl); dp_catalog_ctrl_reset(ctrl->catalog); @@ -1997,6 +2055,17 @@ void dp_ctrl_isr(struct dp_ctrl *dp_ctrl) ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl); + if (ctrl->panel->psr_cap.version) { + isr = dp_catalog_ctrl_read_psr_interrupt_status(ctrl->catalog); + + if (isr == PSR_UPDATE_INT) + drm_dbg_dp(ctrl->drm_dev, "PSR frame update done\n"); + else if (isr == PSR_EXIT_INT) + drm_dbg_dp(ctrl->drm_dev, "PSR exit done\n"); + + complete(&ctrl->psr_op_comp); + } + isr = dp_catalog_ctrl_get_interrupt(ctrl->catalog); if (isr & DP_CTRL_INTR_READY_FOR_VIDEO) { @@ -2043,6 +2112,7 @@ struct dp_ctrl *dp_ctrl_get(struct device *dev, struct dp_link *link, dev_err(dev, "failed to add DP OPP table\n"); init_completion(&ctrl->idle_comp); + init_completion(&ctrl->psr_op_comp); init_completion(&ctrl->video_comp); /* in parameters */ diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.h b/drivers/gpu/drm/msm/dp/dp_ctrl.h index 0745fde..be074ae 100644 --- a/drivers/gpu/drm/msm/dp/dp_ctrl.h +++ b/drivers/gpu/drm/msm/dp/dp_ctrl.h @@ -38,4 +38,7 @@ void dp_ctrl_phy_init(struct dp_ctrl *dp_ctrl); void dp_ctrl_phy_exit(struct dp_ctrl *dp_ctrl); void dp_ctrl_irq_phy_exit(struct dp_ctrl *dp_ctrl); +void dp_ctrl_set_psr(struct dp_ctrl *dp_ctrl, bool enable); +void dp_ctrl_config_psr(struct dp_ctrl *dp_ctrl); + #endif /* _DP_CTRL_H_ */ diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c index bce7793..2b3ec6b 100644 --- a/drivers/gpu/drm/msm/dp/dp_display.c +++ b/drivers/gpu/drm/msm/dp/dp_display.c @@ -388,6 +388,8 @@ static int dp_display_process_hpd_high(struct dp_display_private *dp) edid = dp->panel->edid; + dp->dp_display.psr_supported = !!dp->panel->psr_cap.version; + dp->audio_supported = drm_detect_monitor_audio(edid); dp_panel_handle_sink_request(dp->panel); @@ -895,6 +897,10 @@ static int dp_display_post_enable(struct msm_dp *dp_display) /* signal the connect event late to synchronize video and display */ dp_display_handle_plugged_change(dp_display, true); + + if (dp_display->psr_supported) + dp_ctrl_config_psr(dp->ctrl); + return 0; } @@ -980,14 +986,6 @@ enum drm_mode_status 
dp_bridge_mode_valid(struct drm_bridge *bridge, return -EINVAL; } - /* - * The eDP controller currently does not have a reliable way of - * enabling panel power to read sink capabilities. So, we rely - * on the panel driver to populate only supported modes for now. - */ - if (dp->is_edp) - return MODE_OK; - if (mode->clock > DP_MAX_PIXEL_CLK_KHZ) return MODE_BAD; @@ -1094,6 +1092,14 @@ static void dp_display_config_hpd(struct dp_display_private *dp) enable_irq(dp->irq); } +void dp_display_set_psr(struct msm_dp *dp_display, bool enter) +{ + struct dp_display_private *dp; + + dp = container_of(dp_display, struct dp_display_private, dp_display); + dp_ctrl_set_psr(dp->ctrl, enter); +} + static int hpd_event_thread(void *data) { struct dp_display_private *dp_priv; @@ -1652,7 +1658,8 @@ int msm_dp_modeset_init(struct msm_dp *dp_display, struct drm_device *dev, return 0; } -void dp_bridge_enable(struct drm_bridge *drm_bridge) +void dp_bridge_atomic_enable(struct drm_bridge *drm_bridge, + struct drm_bridge_state *old_bridge_state) { struct msm_dp_bridge *dp_bridge = to_dp_bridge(drm_bridge); struct msm_dp *dp = dp_bridge->dp_display; @@ -1716,7 +1723,8 @@ void dp_bridge_enable(struct drm_bridge *drm_bridge) mutex_unlock(&dp_display->event_mutex); } -void dp_bridge_disable(struct drm_bridge *drm_bridge) +void dp_bridge_atomic_disable(struct drm_bridge *drm_bridge, + struct drm_bridge_state *old_bridge_state) { struct msm_dp_bridge *dp_bridge = to_dp_bridge(drm_bridge); struct msm_dp *dp = dp_bridge->dp_display; @@ -1727,7 +1735,8 @@ void dp_bridge_disable(struct drm_bridge *drm_bridge) dp_ctrl_push_idle(dp_display->ctrl); } -void dp_bridge_post_disable(struct drm_bridge *drm_bridge) +void dp_bridge_atomic_post_disable(struct drm_bridge *drm_bridge, + struct drm_bridge_state *old_bridge_state) { struct msm_dp_bridge *dp_bridge = to_dp_bridge(drm_bridge); struct msm_dp *dp = dp_bridge->dp_display; diff --git a/drivers/gpu/drm/msm/dp/dp_display.h b/drivers/gpu/drm/msm/dp/dp_display.h index 4f9fe4d..1feaada 100644 --- a/drivers/gpu/drm/msm/dp/dp_display.h +++ b/drivers/gpu/drm/msm/dp/dp_display.h @@ -29,6 +29,7 @@ struct msm_dp { u32 max_dp_lanes; struct dp_audio *dp_audio; + bool psr_supported; }; int dp_display_set_plugged_cb(struct msm_dp *dp_display, @@ -39,5 +40,6 @@ bool dp_display_check_video_test(struct msm_dp *dp_display); int dp_display_get_test_bpp(struct msm_dp *dp_display); void dp_display_signal_audio_start(struct msm_dp *dp_display); void dp_display_signal_audio_complete(struct msm_dp *dp_display); +void dp_display_set_psr(struct msm_dp *dp, bool enter); #endif /* _DP_DISPLAY_H_ */ diff --git a/drivers/gpu/drm/msm/dp/dp_drm.c b/drivers/gpu/drm/msm/dp/dp_drm.c index 62d58b9..b9cf6c27 100644 --- a/drivers/gpu/drm/msm/dp/dp_drm.c +++ b/drivers/gpu/drm/msm/dp/dp_drm.c @@ -60,14 +60,184 @@ static int dp_bridge_get_modes(struct drm_bridge *bridge, struct drm_connector * return rc; } +static int edp_bridge_atomic_check(struct drm_bridge *drm_bridge, + struct drm_bridge_state *bridge_state, + struct drm_crtc_state *crtc_state, + struct drm_connector_state *conn_state) +{ + struct msm_dp *dp; + + dp = to_dp_bridge(drm_bridge)->dp_display; + if (WARN_ON(!conn_state)) + return -ENODEV; + + if (dp->psr_supported) + conn_state->self_refresh_aware = true; + + if (!conn_state->crtc || !crtc_state) + return 0; + + if (crtc_state->self_refresh_active && !dp->psr_supported) + return -EINVAL; + + return 0; +} + +static void edp_bridge_atomic_enable(struct drm_bridge *drm_bridge, + struct drm_bridge_state 
*old_bridge_state) +{ + struct drm_atomic_state *atomic_state = old_bridge_state->base.state; + struct drm_crtc *crtc; + struct drm_crtc_state *old_crtc_state; + struct msm_dp_bridge *dp_bridge = to_dp_bridge(drm_bridge); + struct msm_dp *dp = dp_bridge->dp_display; + + /* + * Check the old state of the crtc to determine if the panel + * was put into psr state previously by the edp_bridge_atomic_disable. + * If the panel is in psr, just exit psr state and skip the full + * bridge enable sequence. + */ + crtc = drm_atomic_get_new_crtc_for_encoder(atomic_state, dp->encoder); + if (!crtc) + return; + + old_crtc_state = drm_atomic_get_old_crtc_state(atomic_state, crtc); + + if (old_crtc_state && old_crtc_state->self_refresh_active) { + dp_display_set_psr(dp, false); + return; + } + + dp_bridge_atomic_enable(drm_bridge, old_bridge_state); +} + +static void edp_bridge_atomic_disable(struct drm_bridge *drm_bridge, + struct drm_bridge_state *old_bridge_state) +{ + struct drm_atomic_state *atomic_state = old_bridge_state->base.state; + struct drm_crtc *crtc; + struct drm_crtc_state *new_crtc_state = NULL, *old_crtc_state = NULL; + struct msm_dp_bridge *dp_bridge = to_dp_bridge(drm_bridge); + struct msm_dp *dp = dp_bridge->dp_display; + + crtc = drm_atomic_get_old_crtc_for_encoder(atomic_state, dp->encoder); + if (!crtc) + goto out; + + new_crtc_state = drm_atomic_get_new_crtc_state(atomic_state, crtc); + if (!new_crtc_state) + goto out; + + old_crtc_state = drm_atomic_get_old_crtc_state(atomic_state, crtc); + if (!old_crtc_state) + goto out; + + /* + * Set self refresh mode if current crtc state is active. + * + * If old crtc state is active, then this is a display disable + * call while the sink is in psr state. So, exit psr here. + * The eDP controller will be disabled in the + * edp_bridge_atomic_post_disable function. + * + * We observed sink is stuck in self refresh if psr exit is skipped + * when display disable occurs while the sink is in psr state. + */ + if (new_crtc_state->self_refresh_active) { + dp_display_set_psr(dp, true); + return; + } else if (old_crtc_state->self_refresh_active) { + dp_display_set_psr(dp, false); + return; + } + +out: + dp_bridge_atomic_disable(drm_bridge, old_bridge_state); +} + +static void edp_bridge_atomic_post_disable(struct drm_bridge *drm_bridge, + struct drm_bridge_state *old_bridge_state) +{ + struct drm_atomic_state *atomic_state = old_bridge_state->base.state; + struct drm_crtc *crtc; + struct drm_crtc_state *new_crtc_state = NULL; + struct msm_dp_bridge *dp_bridge = to_dp_bridge(drm_bridge); + struct msm_dp *dp = dp_bridge->dp_display; + + crtc = drm_atomic_get_old_crtc_for_encoder(atomic_state, dp->encoder); + if (!crtc) + return; + + new_crtc_state = drm_atomic_get_new_crtc_state(atomic_state, crtc); + if (!new_crtc_state) + return; + + /* + * Self refresh mode is already set in edp_bridge_atomic_disable. 
+ */ + if (new_crtc_state->self_refresh_active) + return; + + dp_bridge_atomic_post_disable(drm_bridge, old_bridge_state); +} + +/** + * edp_bridge_mode_valid - callback to determine if specified mode is valid + * @bridge: Pointer to drm bridge structure + * @info: display info + * @mode: Pointer to drm mode structure + * Returns: Validity status for specified mode + */ +static enum drm_mode_status edp_bridge_mode_valid(struct drm_bridge *bridge, + const struct drm_display_info *info, + const struct drm_display_mode *mode) +{ + struct msm_dp *dp; + int mode_pclk_khz = mode->clock; + + dp = to_dp_bridge(bridge)->dp_display; + + if (!dp || !mode_pclk_khz || !dp->connector) { + DRM_ERROR("invalid params\n"); + return -EINVAL; + } + + if (mode->clock > DP_MAX_PIXEL_CLK_KHZ) + return MODE_CLOCK_HIGH; + + /* + * The eDP controller currently does not have a reliable way of + * enabling panel power to read sink capabilities. So, we rely + * on the panel driver to populate only supported modes for now. + */ + return MODE_OK; +} + +static const struct drm_bridge_funcs edp_bridge_ops = { + .atomic_enable = edp_bridge_atomic_enable, + .atomic_disable = edp_bridge_atomic_disable, + .atomic_post_disable = edp_bridge_atomic_post_disable, + .mode_set = dp_bridge_mode_set, + .mode_valid = edp_bridge_mode_valid, + .atomic_reset = drm_atomic_helper_bridge_reset, + .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, + .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, + .atomic_check = edp_bridge_atomic_check, +}; + static const struct drm_bridge_funcs dp_bridge_ops = { - .enable = dp_bridge_enable, - .disable = dp_bridge_disable, - .post_disable = dp_bridge_post_disable, - .mode_set = dp_bridge_mode_set, - .mode_valid = dp_bridge_mode_valid, - .get_modes = dp_bridge_get_modes, - .detect = dp_bridge_detect, + .atomic_enable = dp_bridge_atomic_enable, + .atomic_disable = dp_bridge_atomic_disable, + .atomic_post_disable = dp_bridge_atomic_post_disable, + .mode_set = dp_bridge_mode_set, + .mode_valid = dp_bridge_mode_valid, + .get_modes = dp_bridge_get_modes, + .detect = dp_bridge_detect, + .atomic_reset = drm_atomic_helper_bridge_reset, + .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, + .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, + .atomic_check = edp_bridge_atomic_check, }; struct drm_bridge *dp_bridge_init(struct msm_dp *dp_display, struct drm_device *dev, @@ -84,7 +254,7 @@ struct drm_bridge *dp_bridge_init(struct msm_dp *dp_display, struct drm_device * dp_bridge->dp_display = dp_display; bridge = &dp_bridge->bridge; - bridge->funcs = &dp_bridge_ops; + bridge->funcs = dp_display->is_edp ? 
&edp_bridge_ops : &dp_bridge_ops; bridge->type = dp_display->connector_type; /* diff --git a/drivers/gpu/drm/msm/dp/dp_drm.h b/drivers/gpu/drm/msm/dp/dp_drm.h index f4b1ed1..6b8ef29 100644 --- a/drivers/gpu/drm/msm/dp/dp_drm.h +++ b/drivers/gpu/drm/msm/dp/dp_drm.h @@ -23,9 +23,12 @@ struct drm_connector *dp_drm_connector_init(struct msm_dp *dp_display); struct drm_bridge *dp_bridge_init(struct msm_dp *dp_display, struct drm_device *dev, struct drm_encoder *encoder); -void dp_bridge_enable(struct drm_bridge *drm_bridge); -void dp_bridge_disable(struct drm_bridge *drm_bridge); -void dp_bridge_post_disable(struct drm_bridge *drm_bridge); +void dp_bridge_atomic_enable(struct drm_bridge *drm_bridge, + struct drm_bridge_state *old_bridge_state); +void dp_bridge_atomic_disable(struct drm_bridge *drm_bridge, + struct drm_bridge_state *old_bridge_state); +void dp_bridge_atomic_post_disable(struct drm_bridge *drm_bridge, + struct drm_bridge_state *old_bridge_state); enum drm_mode_status dp_bridge_mode_valid(struct drm_bridge *bridge, const struct drm_display_info *info, const struct drm_display_mode *mode); diff --git a/drivers/gpu/drm/msm/dp/dp_link.c b/drivers/gpu/drm/msm/dp/dp_link.c index 36f0af0..84af70a 100644 --- a/drivers/gpu/drm/msm/dp/dp_link.c +++ b/drivers/gpu/drm/msm/dp/dp_link.c @@ -934,6 +934,38 @@ static int dp_link_process_phy_test_pattern_request( return 0; } +static bool dp_link_read_psr_error_status(struct dp_link_private *link) +{ + u8 status; + + drm_dp_dpcd_read(link->aux, DP_PSR_ERROR_STATUS, &status, 1); + + if (status & DP_PSR_LINK_CRC_ERROR) + DRM_ERROR("PSR LINK CRC ERROR\n"); + else if (status & DP_PSR_RFB_STORAGE_ERROR) + DRM_ERROR("PSR RFB STORAGE ERROR\n"); + else if (status & DP_PSR_VSC_SDP_UNCORRECTABLE_ERROR) + DRM_ERROR("PSR VSC SDP UNCORRECTABLE ERROR\n"); + else + return false; + + return true; +} + +static bool dp_link_psr_capability_changed(struct dp_link_private *link) +{ + u8 status; + + drm_dp_dpcd_read(link->aux, DP_PSR_ESI, &status, 1); + + if (status & DP_PSR_CAPS_CHANGE) { + drm_dbg_dp(link->drm_dev, "PSR Capability Change\n"); + return true; + } + + return false; +} + static u8 get_link_status(const u8 link_status[DP_LINK_STATUS_SIZE], int r) { return link_status[r - DP_LANE0_1_STATUS]; @@ -1053,6 +1085,10 @@ int dp_link_process_request(struct dp_link *dp_link) dp_link->sink_request |= DP_TEST_LINK_TRAINING; } else if (!dp_link_process_phy_test_pattern_request(link)) { dp_link->sink_request |= DP_TEST_LINK_PHY_TEST_PATTERN; + } else if (dp_link_read_psr_error_status(link)) { + DRM_ERROR("PSR IRQ_HPD received\n"); + } else if (dp_link_psr_capability_changed(link)) { + drm_dbg_dp(link->drm_dev, "PSR Capabiity changed"); } else { ret = dp_link_process_link_status_update(link); if (!ret) { diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c index 5149ceb..8bf8ab4 100644 --- a/drivers/gpu/drm/msm/dp/dp_panel.c +++ b/drivers/gpu/drm/msm/dp/dp_panel.c @@ -20,6 +20,27 @@ struct dp_panel_private { bool aux_cfg_update_done; }; +static void dp_panel_read_psr_cap(struct dp_panel_private *panel) +{ + ssize_t rlen; + struct dp_panel *dp_panel; + + dp_panel = &panel->dp_panel; + + /* edp sink */ + if (dp_panel->dpcd[DP_EDP_CONFIGURATION_CAP]) { + rlen = drm_dp_dpcd_read(panel->aux, DP_PSR_SUPPORT, + &dp_panel->psr_cap, sizeof(dp_panel->psr_cap)); + if (rlen == sizeof(dp_panel->psr_cap)) { + drm_dbg_dp(panel->drm_dev, + "psr version: 0x%x, psr_cap: 0x%x\n", + dp_panel->psr_cap.version, + dp_panel->psr_cap.capabilities); + } else + 
DRM_ERROR("failed to read psr info, rlen=%zd\n", rlen); + } +} + static int dp_panel_read_dpcd(struct dp_panel *dp_panel) { int rc = 0; @@ -106,6 +127,7 @@ static int dp_panel_read_dpcd(struct dp_panel *dp_panel) } } + dp_panel_read_psr_cap(panel); end: return rc; } diff --git a/drivers/gpu/drm/msm/dp/dp_panel.h b/drivers/gpu/drm/msm/dp/dp_panel.h index d861197a..2d0826a 100644 --- a/drivers/gpu/drm/msm/dp/dp_panel.h +++ b/drivers/gpu/drm/msm/dp/dp_panel.h @@ -34,6 +34,11 @@ struct dp_panel_in { struct dp_catalog *catalog; }; +struct dp_panel_psr { + u8 version; + u8 capabilities; +}; + struct dp_panel { /* dpcd raw data */ u8 dpcd[DP_RECEIVER_CAP_SIZE + 1]; @@ -46,6 +51,7 @@ struct dp_panel { struct edid *edid; struct drm_connector *connector; struct dp_display_mode dp_mode; + struct dp_panel_psr psr_cap; bool video_test; u32 vic; diff --git a/drivers/gpu/drm/msm/dp/dp_reg.h b/drivers/gpu/drm/msm/dp/dp_reg.h index 2686028..ea85a69 100644 --- a/drivers/gpu/drm/msm/dp/dp_reg.h +++ b/drivers/gpu/drm/msm/dp/dp_reg.h @@ -22,6 +22,20 @@ #define REG_DP_INTR_STATUS2 (0x00000024) #define REG_DP_INTR_STATUS3 (0x00000028) +#define REG_DP_INTR_STATUS4 (0x0000002C) +#define PSR_UPDATE_INT (0x00000001) +#define PSR_CAPTURE_INT (0x00000004) +#define PSR_EXIT_INT (0x00000010) +#define PSR_UPDATE_ERROR_INT (0x00000040) +#define PSR_WAKE_ERROR_INT (0x00000100) + +#define REG_DP_INTR_MASK4 (0x00000030) +#define PSR_UPDATE_MASK (0x00000001) +#define PSR_CAPTURE_MASK (0x00000002) +#define PSR_EXIT_MASK (0x00000004) +#define PSR_UPDATE_ERROR_MASK (0x00000008) +#define PSR_WAKE_ERROR_MASK (0x00000010) + #define REG_DP_DP_HPD_CTRL (0x00000000) #define DP_DP_HPD_CTRL_HPD_EN (0x00000001) @@ -164,6 +178,16 @@ #define MMSS_DP_AUDIO_TIMING_RBR_48 (0x00000094) #define MMSS_DP_AUDIO_TIMING_HBR_48 (0x00000098) +#define REG_PSR_CONFIG (0x00000100) +#define DISABLE_PSR (0x00000000) +#define PSR1_SUPPORTED (0x00000001) +#define PSR2_WITHOUT_FRAMESYNC (0x00000002) +#define PSR2_WITH_FRAMESYNC (0x00000003) + +#define REG_PSR_CMD (0x00000110) +#define PSR_ENTER (0x00000001) +#define PSR_EXIT (0x00000002) + #define MMSS_DP_PSR_CRC_RG (0x00000154) #define MMSS_DP_PSR_CRC_B (0x00000158) @@ -184,6 +208,9 @@ #define MMSS_DP_AUDIO_STREAM_0 (0x00000240) #define MMSS_DP_AUDIO_STREAM_1 (0x00000244) +#define MMSS_DP_SDP_CFG3 (0x0000024c) +#define UPDATE_SDP (0x00000001) + #define MMSS_DP_EXTENSION_0 (0x00000250) #define MMSS_DP_EXTENSION_1 (0x00000254) #define MMSS_DP_EXTENSION_2 (0x00000258) From patchwork Tue Jul 5 17:00:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vinod Polimera X-Patchwork-Id: 12906862 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4F8B6CCA47B for ; Tue, 5 Jul 2022 17:01:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233067AbiGERB1 (ORCPT ); Tue, 5 Jul 2022 13:01:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51096 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232384AbiGERBZ (ORCPT ); Tue, 5 Jul 2022 13:01:25 -0400 Received: from alexa-out.qualcomm.com (alexa-out.qualcomm.com [129.46.98.28]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 113BF13D22; Tue, 5 Jul 2022 10:01:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
From: Vinod Polimera
To: dri-devel@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, devicetree@vger.kernel.org
Subject: [PATCH v4 4/7] drm/bridge: use atomic enable/disable callbacks for panel bridge
Date: Tue, 5 Jul 2022 22:30:42 +0530
Message-Id: <1657040445-13067-5-git-send-email-quic_vpolimer@quicinc.com>
In-Reply-To: <1657040445-13067-1-git-send-email-quic_vpolimer@quicinc.com>

Use atomic variants for panel bridge callback functions such that certain states like self-refresh can be accessed as part of enable/disable sequence.
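The practical difference is the extra drm_bridge_state argument: its ->base.state member points at the drm_atomic_state for the commit in progress, which the legacy enable/disable hooks could not reach. A minimal sketch of what that buys (callback name hypothetical):

static void example_bridge_atomic_pre_enable(struct drm_bridge *bridge,
					     struct drm_bridge_state *old_bridge_state)
{
	/* the whole commit is reachable from the bridge state */
	struct drm_atomic_state *state = old_bridge_state->base.state;
	struct drm_crtc *crtc;

	crtc = drm_atomic_get_new_crtc_for_encoder(state, bridge->encoder);
	if (!crtc)
		return;

	/* old/new crtc state (including self_refresh_active) can now be
	 * inspected before deciding whether to run the power-up sequence */
}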
Signed-off-by: Sankeerth Billakanti Signed-off-by: Vinod Polimera Reviewed-by: Dmitry Baryshkov --- drivers/gpu/drm/bridge/panel.c | 20 ++++++++++++-------- 1 file changed, 12 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/bridge/panel.c b/drivers/gpu/drm/bridge/panel.c index 0ee563e..eeb9546 100644 --- a/drivers/gpu/drm/bridge/panel.c +++ b/drivers/gpu/drm/bridge/panel.c @@ -108,28 +108,32 @@ static void panel_bridge_detach(struct drm_bridge *bridge) drm_connector_cleanup(connector); } -static void panel_bridge_pre_enable(struct drm_bridge *bridge) +static void panel_bridge_atomic_pre_enable(struct drm_bridge *bridge, + struct drm_bridge_state *old_bridge_state) { struct panel_bridge *panel_bridge = drm_bridge_to_panel_bridge(bridge); drm_panel_prepare(panel_bridge->panel); } -static void panel_bridge_enable(struct drm_bridge *bridge) +static void panel_bridge_atomic_enable(struct drm_bridge *bridge, + struct drm_bridge_state *old_bridge_state) { struct panel_bridge *panel_bridge = drm_bridge_to_panel_bridge(bridge); drm_panel_enable(panel_bridge->panel); } -static void panel_bridge_disable(struct drm_bridge *bridge) +static void panel_bridge_atomic_disable(struct drm_bridge *bridge, + struct drm_bridge_state *old_bridge_state) { struct panel_bridge *panel_bridge = drm_bridge_to_panel_bridge(bridge); drm_panel_disable(panel_bridge->panel); } -static void panel_bridge_post_disable(struct drm_bridge *bridge) +static void panel_bridge_atomic_post_disable(struct drm_bridge *bridge, + struct drm_bridge_state *old_bridge_state) { struct panel_bridge *panel_bridge = drm_bridge_to_panel_bridge(bridge); @@ -158,10 +162,10 @@ static void panel_bridge_debugfs_init(struct drm_bridge *bridge, static const struct drm_bridge_funcs panel_bridge_bridge_funcs = { .attach = panel_bridge_attach, .detach = panel_bridge_detach, - .pre_enable = panel_bridge_pre_enable, - .enable = panel_bridge_enable, - .disable = panel_bridge_disable, - .post_disable = panel_bridge_post_disable, + .atomic_pre_enable = panel_bridge_atomic_pre_enable, + .atomic_enable = panel_bridge_atomic_enable, + .atomic_disable = panel_bridge_atomic_disable, + .atomic_post_disable = panel_bridge_atomic_post_disable, .get_modes = panel_bridge_get_modes, .atomic_reset = drm_atomic_helper_bridge_reset, .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, From patchwork Tue Jul 5 17:00:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vinod Polimera X-Patchwork-Id: 12906864 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4AEF4C43334 for ; Tue, 5 Jul 2022 17:01:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232754AbiGERB3 (ORCPT ); Tue, 5 Jul 2022 13:01:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51100 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229486AbiGERBY (ORCPT ); Tue, 5 Jul 2022 13:01:24 -0400 Received: from alexa-out.qualcomm.com (alexa-out.qualcomm.com [129.46.98.28]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 39F0F1C911; Tue, 5 Jul 2022 10:01:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1657040483; x=1688576483; h=from:to:cc:subject:date:message-id:in-reply-to: 
From: Vinod Polimera
To: dri-devel@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, devicetree@vger.kernel.org
Subject: [PATCH v4 5/7] drm/bridge: Add psr support for panel bridge callbacks
Date: Tue, 5 Jul 2022 22:30:43 +0530
Message-Id: <1657040445-13067-6-git-send-email-quic_vpolimer@quicinc.com>
In-Reply-To: <1657040445-13067-1-git-send-email-quic_vpolimer@quicinc.com>

Handle the PSR entry and exit cases in the panel bridge atomic callback functions. For example, the panel power should not be turned off if the panel is only entering PSR.
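Distilled, the rule each callback applies is: on the power-up side, skip drm_panel_prepare()/drm_panel_enable() when the old crtc state shows the panel was only in self refresh (it never lost power); on the power-down side, skip drm_panel_disable()/drm_panel_unprepare() when the new crtc state shows the crtc is merely entering self refresh. A sketch of the post-disable half, mirroring the panel.c changes in the diff below (function name hypothetical):

static void example_panel_bridge_atomic_post_disable(struct drm_bridge *bridge,
						     struct drm_bridge_state *old_bridge_state)
{
	struct panel_bridge *panel_bridge = drm_bridge_to_panel_bridge(bridge);
	struct drm_atomic_state *state = old_bridge_state->base.state;
	struct drm_crtc *crtc;
	struct drm_crtc_state *new_crtc_state;

	/* crtc that was driving this encoder before the commit */
	crtc = drm_atomic_get_old_crtc_for_encoder(state, bridge->encoder);
	new_crtc_state = crtc ? drm_atomic_get_new_crtc_state(state, crtc) : NULL;

	/* entering PSR: keep panel power, do not unprepare */
	if (new_crtc_state && new_crtc_state->self_refresh_active)
		return;

	drm_panel_unprepare(panel_bridge->panel);
}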
Signed-off-by: Sankeerth Billakanti Signed-off-by: Vinod Polimera --- drivers/gpu/drm/bridge/panel.c | 48 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 48 insertions(+) diff --git a/drivers/gpu/drm/bridge/panel.c b/drivers/gpu/drm/bridge/panel.c index eeb9546..9770b8c 100644 --- a/drivers/gpu/drm/bridge/panel.c +++ b/drivers/gpu/drm/bridge/panel.c @@ -112,6 +112,18 @@ static void panel_bridge_atomic_pre_enable(struct drm_bridge *bridge, struct drm_bridge_state *old_bridge_state) { struct panel_bridge *panel_bridge = drm_bridge_to_panel_bridge(bridge); + struct drm_atomic_state *atomic_state = old_bridge_state->base.state; + struct drm_encoder *encoder = bridge->encoder; + struct drm_crtc *crtc; + struct drm_crtc_state *old_crtc_state; + + crtc = drm_atomic_get_new_crtc_for_encoder(atomic_state, encoder); + if (!crtc) + return; + + old_crtc_state = drm_atomic_get_old_crtc_state(atomic_state, crtc); + if (old_crtc_state && old_crtc_state->self_refresh_active) + return; drm_panel_prepare(panel_bridge->panel); } @@ -120,6 +132,18 @@ static void panel_bridge_atomic_enable(struct drm_bridge *bridge, struct drm_bridge_state *old_bridge_state) { struct panel_bridge *panel_bridge = drm_bridge_to_panel_bridge(bridge); + struct drm_atomic_state *atomic_state = old_bridge_state->base.state; + struct drm_encoder *encoder = bridge->encoder; + struct drm_crtc *crtc; + struct drm_crtc_state *old_crtc_state; + + crtc = drm_atomic_get_new_crtc_for_encoder(atomic_state, encoder); + if (!crtc) + return; + + old_crtc_state = drm_atomic_get_old_crtc_state(atomic_state, crtc); + if (old_crtc_state && old_crtc_state->self_refresh_active) + return; drm_panel_enable(panel_bridge->panel); } @@ -128,6 +152,18 @@ static void panel_bridge_atomic_disable(struct drm_bridge *bridge, struct drm_bridge_state *old_bridge_state) { struct panel_bridge *panel_bridge = drm_bridge_to_panel_bridge(bridge); + struct drm_atomic_state *atomic_state = old_bridge_state->base.state; + struct drm_encoder *encoder = bridge->encoder; + struct drm_crtc *crtc; + struct drm_crtc_state *new_crtc_state; + + crtc = drm_atomic_get_old_crtc_for_encoder(atomic_state, encoder); + if (!crtc) + return; + + new_crtc_state = drm_atomic_get_new_crtc_state(atomic_state, crtc); + if (new_crtc_state && new_crtc_state->self_refresh_active) + return; drm_panel_disable(panel_bridge->panel); } @@ -136,6 +172,18 @@ static void panel_bridge_atomic_post_disable(struct drm_bridge *bridge, struct drm_bridge_state *old_bridge_state) { struct panel_bridge *panel_bridge = drm_bridge_to_panel_bridge(bridge); + struct drm_atomic_state *atomic_state = old_bridge_state->base.state; + struct drm_encoder *encoder = bridge->encoder; + struct drm_crtc *crtc; + struct drm_crtc_state *new_crtc_state; + + crtc = drm_atomic_get_old_crtc_for_encoder(atomic_state, encoder); + if (!crtc) + return; + + new_crtc_state = drm_atomic_get_new_crtc_state(atomic_state, crtc); + if (new_crtc_state && new_crtc_state->self_refresh_active) + return; drm_panel_unprepare(panel_bridge->panel); } From patchwork Tue Jul 5 17:00:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vinod Polimera X-Patchwork-Id: 12906861 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CC90BC43334 for ; Tue, 5 Jul 2022 17:01:28 +0000 (UTC) Received: 
From: Vinod Polimera
To: dri-devel@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, devicetree@vger.kernel.org
Subject: [PATCH v4 6/7] drm/msm/disp/dpu1: use atomic enable/disable callbacks for encoder functions
Date: Tue, 5 Jul 2022 22:30:44 +0530
Message-Id: <1657040445-13067-7-git-send-email-quic_vpolimer@quicinc.com>
In-Reply-To: <1657040445-13067-1-git-send-email-quic_vpolimer@quicinc.com>

Use atomic variants for encoder callback functions such that certain states like self-refresh can be accessed as part of enable/disable sequence.
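With the atomic variants the commit's drm_atomic_state is passed straight into the encoder hooks, which is what lets the final patch in the series skip the teardown when the crtc only entered self refresh earlier; a minimal sketch of that shape (function name hypothetical, mirroring what patch 7 adds):

static void example_encoder_atomic_disable(struct drm_encoder *encoder,
					   struct drm_atomic_state *state)
{
	struct drm_crtc_state *old_crtc_state = encoder->crtc ?
		drm_atomic_get_old_crtc_state(state, encoder->crtc) : NULL;

	/* the encoder was already torn down when the crtc entered self
	 * refresh in an earlier commit, so there is nothing left to do */
	if (old_crtc_state && old_crtc_state->self_refresh_active)
		return;

	/* ... normal encoder teardown ... */
}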
Signed-off-by: Kalyan Thota Signed-off-by: Vinod Polimera Reviewed-by: Dmitry Baryshkov --- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index 5629c0b..f01a976 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -1130,7 +1130,8 @@ void dpu_encoder_virt_runtime_resume(struct drm_encoder *drm_enc) mutex_unlock(&dpu_enc->enc_lock); } -static void dpu_encoder_virt_enable(struct drm_encoder *drm_enc) +static void dpu_encoder_virt_atomic_enable(struct drm_encoder *drm_enc, + struct drm_atomic_state *state) { struct dpu_encoder_virt *dpu_enc = NULL; int ret = 0; @@ -1166,7 +1167,8 @@ static void dpu_encoder_virt_enable(struct drm_encoder *drm_enc) mutex_unlock(&dpu_enc->enc_lock); } -static void dpu_encoder_virt_disable(struct drm_encoder *drm_enc) +static void dpu_encoder_virt_atomic_disable(struct drm_encoder *drm_enc, + struct drm_atomic_state *state) { struct dpu_encoder_virt *dpu_enc = NULL; int i = 0; @@ -2320,8 +2322,8 @@ static void dpu_encoder_frame_done_timeout(struct timer_list *t) static const struct drm_encoder_helper_funcs dpu_encoder_helper_funcs = { .atomic_mode_set = dpu_encoder_virt_atomic_mode_set, - .disable = dpu_encoder_virt_disable, - .enable = dpu_encoder_virt_enable, + .atomic_disable = dpu_encoder_virt_atomic_disable, + .atomic_enable = dpu_encoder_virt_atomic_enable, .atomic_check = dpu_encoder_virt_atomic_check, }; From patchwork Tue Jul 5 17:00:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vinod Polimera X-Patchwork-Id: 12906860 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2C6CDCCA486 for ; Tue, 5 Jul 2022 17:01:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231949AbiGERB0 (ORCPT ); Tue, 5 Jul 2022 13:01:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51096 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231341AbiGERBY (ORCPT ); Tue, 5 Jul 2022 13:01:24 -0400 Received: from alexa-out.qualcomm.com (alexa-out.qualcomm.com [129.46.98.28]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 05D0D1C90E; Tue, 5 Jul 2022 10:01:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; i=@quicinc.com; q=dns/txt; s=qcdkim; t=1657040483; x=1688576483; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=j4XjTkVXddfE+bmZZGuKw6202v++M2QtpsDa3jAyli0=; b=wtOlQPycu6ySbJjnQtYOGwcXTEXFQBUV9vheByhXDjhOq68GGkaMm545 a06nrkwYkQEHzrjpNNPcYpGmFxzuyGtPL8Pcu0Ct5+2UvQDzZ75rH4vLi lIr1G0bw/ewcP6RBUvspRk+gGPTcLrt9LW8PpGqjSNT5RN9JjRrtPF1Tq U=; Received: from ironmsg07-lv.qualcomm.com ([10.47.202.151]) by alexa-out.qualcomm.com with ESMTP; 05 Jul 2022 10:01:23 -0700 X-QCInternal: smtphost Received: from ironmsg01-blr.qualcomm.com ([10.86.208.130]) by ironmsg07-lv.qualcomm.com with ESMTP/TLS/AES256-SHA; 05 Jul 2022 10:01:21 -0700 X-QCInternal: smtphost Received: from vpolimer-linux.qualcomm.com ([10.204.67.235]) by ironmsg01-blr.qualcomm.com with ESMTP; 05 Jul 2022 22:31:10 +0530 Received: by vpolimer-linux.qualcomm.com (Postfix, from userid 463814) id AA6A53CBE; Tue, 5 Jul 2022 
22:31:09 +0530 (IST) From: Vinod Polimera To: dri-devel@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org, devicetree@vger.kernel.org Cc: Vinod Polimera , linux-kernel@vger.kernel.org, robdclark@gmail.com, dianders@chromium.org, swboyd@chromium.org, quic_kalyant@quicinc.com, dmitry.baryshkov@linaro.org, quic_khsieh@quicinc.com, quic_vproddut@quicinc.com, bjorn.andersson@linaro.org, quic_aravindh@quicinc.com, quic_abhinavk@quicinc.com, quic_sbillaka@quicinc.com Subject: [PATCH v4 7/7] drm/msm/disp/dpu1: add PSR support for eDP interface in dpu driver Date: Tue, 5 Jul 2022 22:30:45 +0530 Message-Id: <1657040445-13067-8-git-send-email-quic_vpolimer@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1657040445-13067-1-git-send-email-quic_vpolimer@quicinc.com> References: <1657040445-13067-1-git-send-email-quic_vpolimer@quicinc.com> Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Enable PSR on the eDP interface using the drm self-refresh library. This patch uses a trigger from the self-refresh library to enter/exit PSR when there are no updates from the framework. Signed-off-by: Kalyan Thota Signed-off-by: Vinod Polimera --- drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c | 13 ++++++++++++- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 19 ++++++++++++++++++- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 2 +- 3 files changed, 31 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c index f91e3d1..e63e363 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c @@ -18,6 +18,7 @@ #include #include #include +#include #include "dpu_kms.h" #include "dpu_hw_lm.h" @@ -961,6 +962,9 @@ static void dpu_crtc_disable(struct drm_crtc *crtc, DRM_DEBUG_KMS("crtc%d\n", crtc->base.id); + if (old_crtc_state->self_refresh_active) + return; + /* Disable/save vblank irq handling */ drm_crtc_vblank_off(crtc); @@ -1019,7 +1023,9 @@ static void dpu_crtc_enable(struct drm_crtc *crtc, struct dpu_crtc *dpu_crtc = to_dpu_crtc(crtc); struct drm_encoder *encoder; bool request_bandwidth = false; + struct drm_crtc_state *old_crtc_state; + old_crtc_state = drm_atomic_get_old_crtc_state(state, crtc); pm_runtime_get_sync(crtc->dev->dev); DRM_DEBUG_KMS("crtc%d\n", crtc->base.id); @@ -1521,7 +1527,7 @@ struct drm_crtc *dpu_crtc_init(struct drm_device *dev, struct drm_plane *plane, { struct drm_crtc *crtc = NULL; struct dpu_crtc *dpu_crtc = NULL; - int i; + int i, ret; dpu_crtc = kzalloc(sizeof(*dpu_crtc), GFP_KERNEL); if (!dpu_crtc) @@ -1558,6 +1564,11 @@ struct drm_crtc *dpu_crtc_init(struct drm_device *dev, struct drm_plane *plane, /* initialize event handling */ spin_lock_init(&dpu_crtc->event_lock); + ret = drm_self_refresh_helper_init(crtc); + if (ret) + DPU_ERROR("Failed to initialize %s with self-refresh helpers %d\n", + crtc->name, ret); + DRM_DEBUG_KMS("%s: successfully initialized crtc\n", dpu_crtc->name); return crtc; } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index f01a976..e2a74ba 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -225,6 +225,11 @@ bool dpu_encoder_is_widebus_enabled(const struct drm_encoder *drm_enc) return dpu_enc->wide_bus_en; } +static inline bool is_self_refresh_active(const struct drm_crtc_state *state) +{ + return (state && state->self_refresh_active); +} + static void _dpu_encoder_setup_dither(struct
dpu_hw_pingpong *hw_pp, unsigned bpc) { struct dpu_hw_dither_cfg dither_cfg = { 0 }; @@ -592,7 +597,7 @@ static int dpu_encoder_virt_atomic_check( if (drm_atomic_crtc_needs_modeset(crtc_state)) { dpu_rm_release(global_state, drm_enc); - if (!crtc_state->active_changed || crtc_state->active) + if (!crtc_state->active_changed || crtc_state->enable) ret = dpu_rm_reserve(&dpu_kms->rm, global_state, drm_enc, crtc_state, topology); } @@ -1171,11 +1176,23 @@ static void dpu_encoder_virt_atomic_disable(struct drm_encoder *drm_enc, struct drm_atomic_state *state) { struct dpu_encoder_virt *dpu_enc = NULL; + struct drm_crtc *crtc; + struct drm_crtc_state *old_state; int i = 0; dpu_enc = to_dpu_encoder_virt(drm_enc); DPU_DEBUG_ENC(dpu_enc, "\n"); + crtc = drm_enc->crtc; + old_state = drm_atomic_get_old_crtc_state(state, crtc); + + /* + * The encoder was already disabled when self-refresh mode was + * entered earlier, as recorded in the old_state of the + * corresponding crtc. + */ + if (is_self_refresh_active(old_state)) + return; + mutex_lock(&dpu_enc->enc_lock); dpu_enc->enabled = false; diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c index bce4764..cc0a674 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -507,7 +507,7 @@ static void dpu_kms_wait_for_commit_done(struct msm_kms *kms, return; } - if (!crtc->state->active) { + if (!drm_atomic_crtc_effectively_active(crtc->state)) { DPU_DEBUG("[crtc:%d] not active\n", crtc->base.id); return; }
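For illustration only (not part of this patch): a minimal sketch of the drm self-refresh helper pattern the diff above relies on, using hypothetical my_crtc_* names. drm_self_refresh_helper_init() opts a CRTC into the helper, which arms an idle timer and sets crtc_state->self_refresh_active on the commits it generates; the driver then short-circuits its disable paths for those commits, and drm_atomic_crtc_effectively_active() treats a self-refreshing CRTC as still on.

#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc.h>
#include <drm/drm_print.h>
#include <drm/drm_self_refresh_helper.h>

/* Opt the CRTC into the self-refresh helper at init time (hypothetical). */
static int my_crtc_setup_self_refresh(struct drm_crtc *crtc)
{
	int ret = drm_self_refresh_helper_init(crtc);

	if (ret)
		DRM_ERROR("self-refresh helper init failed for %s: %d\n",
			  crtc->name, ret);
	return ret;
}

/* Hypothetical CRTC disable callback: a commit that merely enters
 * self-refresh must not power down the pipe, so bail out early.
 */
static void my_crtc_atomic_disable(struct drm_crtc *crtc,
				   struct drm_atomic_state *state)
{
	struct drm_crtc_state *old_state =
		drm_atomic_get_old_crtc_state(state, crtc);

	if (old_state && old_state->self_refresh_active)
		return;

	/* ... full power-down sequence would go here ... */
}

/* "Is the display really off?" checks should count self-refresh as on. */
static bool my_crtc_is_off(struct drm_crtc *crtc)
{
	return !drm_atomic_crtc_effectively_active(crtc->state);
}

This is the same pattern the patch applies to dpu_crtc_init(), dpu_crtc_disable(), dpu_encoder_virt_atomic_disable() and dpu_kms_wait_for_commit_done() in the diff above.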