From patchwork Thu Dec 22 15:04:48 2022
X-Patchwork-Submitter: Vinod Polimera
X-Patchwork-Id: 13080088
From: Vinod Polimera
Subject: [PATCH v10 01/15] drm/msm/disp/dpu: cache crtc obj in the dpu_encoder during initialization
Date: Thu, 22 Dec 2022 20:34:48 +0530
Message-ID: <1671721502-16587-2-git-send-email-quic_vpolimer@quicinc.com>
In-Reply-To: <1671721502-16587-1-git-send-email-quic_vpolimer@quicinc.com>
References: <1671721502-16587-1-git-send-email-quic_vpolimer@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

Cache crtc obj in the dpu encoder during initialization. This will avoid extracting crtc from connector state, thereby simplifying the obj access whenever it is required. This patch is dependent on the series: https://patchwork.freedesktop.org/series/110969/ Signed-off-by: Vinod Polimera --- drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c | 4 ---- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 16 +++++----------- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 1 + 3 files changed, 6 insertions(+), 15 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c index 3f72d38..289d51e 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c @@ -1029,7 +1029,6 @@ static void dpu_crtc_disable(struct drm_crtc *crtc, */ if (dpu_encoder_get_intf_mode(encoder) == INTF_MODE_VIDEO) release_bandwidth = true; - dpu_encoder_assign_crtc(encoder, NULL); } /* wait for frame_event_done completion */ @@ -1099,9 +1098,6 @@ static void dpu_crtc_enable(struct drm_crtc *crtc, trace_dpu_crtc_enable(DRMID(crtc), true, dpu_crtc); dpu_crtc->enabled = true; - drm_for_each_encoder_mask(encoder, crtc->dev, crtc->state->encoder_mask) - dpu_encoder_assign_crtc(encoder, crtc); - /* Enable/restore vblank irq handling */ drm_crtc_vblank_on(crtc); } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index a585036..5055d56 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -1317,7 +1317,6 @@ static void dpu_encoder_vblank_callback(struct drm_encoder *drm_enc, struct dpu_encoder_phys *phy_enc) { struct dpu_encoder_virt *dpu_enc = NULL; - unsigned long lock_flags; if (!drm_enc || !phy_enc) return; @@ -1325,12 +1324,11 @@ static void dpu_encoder_vblank_callback(struct drm_encoder *drm_enc, DPU_ATRACE_BEGIN("encoder_vblank_callback"); dpu_enc = to_dpu_encoder_virt(drm_enc); - atomic_inc(&phy_enc->vsync_cnt); + if (!dpu_enc->crtc) + return; - spin_lock_irqsave(&dpu_enc->enc_spinlock, lock_flags); - if (dpu_enc->crtc) - dpu_crtc_vblank_callback(dpu_enc->crtc); - spin_unlock_irqrestore(&dpu_enc->enc_spinlock, lock_flags); + atomic_inc(&phy_enc->vsync_cnt); + dpu_crtc_vblank_callback(dpu_enc->crtc); DPU_ATRACE_END("encoder_vblank_callback"); } @@ -1369,17 +1367,13 @@ void dpu_encoder_toggle_vblank_for_crtc(struct drm_encoder *drm_enc, struct drm_crtc *crtc, bool enable) { struct dpu_encoder_virt *dpu_enc = to_dpu_encoder_virt(drm_enc); - unsigned long lock_flags; int i; trace_dpu_enc_vblank_cb(DRMID(drm_enc), enable); - spin_lock_irqsave(&dpu_enc->enc_spinlock, lock_flags); - if (dpu_enc->crtc != crtc) { - spin_unlock_irqrestore(&dpu_enc->enc_spinlock, lock_flags); + if (!dpu_enc->crtc || dpu_enc->crtc != crtc) { return; } - spin_unlock_irqrestore(&dpu_enc->enc_spinlock, lock_flags); for (i = 0; i < dpu_enc->num_phys_encs; i++) { struct dpu_encoder_phys *phys = dpu_enc->phys_encs[i]; diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c index 93221a2..264d571 100644 ---
a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -801,6 +801,7 @@ static int _dpu_kms_drm_obj_init(struct dpu_kms *dpu_kms) } priv->crtcs[priv->num_crtcs++] = crtc; encoder->possible_crtcs = 1 << drm_crtc_index(crtc); + dpu_encoder_assign_crtc(encoder, crtc); i++; }
From patchwork Thu Dec 22 15:04:49 2022
X-Patchwork-Submitter: Vinod Polimera
X-Patchwork-Id: 13080089
From: Vinod Polimera
Subject: [PATCH v10 02/15] drm: add helper functions to retrieve old and new crtc
Date: Thu, 22 Dec 2022 20:34:49 +0530
Message-ID: <1671721502-16587-3-git-send-email-quic_vpolimer@quicinc.com>
In-Reply-To: <1671721502-16587-1-git-send-email-quic_vpolimer@quicinc.com>
References: <1671721502-16587-1-git-send-email-quic_vpolimer@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

Add new helper functions, drm_atomic_get_old_crtc_for_encoder and drm_atomic_get_new_crtc_for_encoder, to retrieve the corresponding crtc for the encoder. Signed-off-by: Sankeerth Billakanti Signed-off-by: Vinod Polimera Reviewed-by: Douglas Anderson --- drivers/gpu/drm/drm_atomic.c | 60 ++++++++++++++++++++++++++++++++++++++++++++ include/drm/drm_atomic.h | 7 ++++++ 2 files changed, 67 insertions(+) diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c index f197f59..941fd6d 100644 --- a/drivers/gpu/drm/drm_atomic.c +++ b/drivers/gpu/drm/drm_atomic.c @@ -985,6 +985,66 @@ drm_atomic_get_new_connector_for_encoder(struct drm_atomic_state *state, EXPORT_SYMBOL(drm_atomic_get_new_connector_for_encoder); /** + * drm_atomic_get_old_crtc_for_encoder - Get old crtc for an encoder + * @state: Atomic state + * @encoder: The encoder to fetch the crtc state for + * + * This function finds and returns the crtc that was connected to @encoder + * as specified by the @state. + * + * Returns: The old crtc connected to @encoder, or NULL if the encoder is + * not connected. + */ +struct drm_crtc * +drm_atomic_get_old_crtc_for_encoder(struct drm_atomic_state *state, + struct drm_encoder *encoder) +{ + struct drm_connector *connector; + struct drm_connector_state *conn_state; + + connector = drm_atomic_get_old_connector_for_encoder(state, encoder); + if (!connector) + return NULL; + + conn_state = drm_atomic_get_old_connector_state(state, connector); + if (!conn_state) + return NULL; + + return conn_state->crtc; +} +EXPORT_SYMBOL(drm_atomic_get_old_crtc_for_encoder); + +/** + * drm_atomic_get_new_crtc_for_encoder - Get new crtc for an encoder + * @state: Atomic state + * @encoder: The encoder to fetch the crtc state for + * + * This function finds and returns the crtc that will be connected to @encoder + * as specified by the @state. + * + * Returns: The new crtc connected to @encoder, or NULL if the encoder is + * not connected.
+ */ +struct drm_crtc * +drm_atomic_get_new_crtc_for_encoder(struct drm_atomic_state *state, + struct drm_encoder *encoder) +{ + struct drm_connector *connector; + struct drm_connector_state *conn_state; + + connector = drm_atomic_get_new_connector_for_encoder(state, encoder); + if (!connector) + return NULL; + + conn_state = drm_atomic_get_new_connector_state(state, connector); + if (!conn_state) + return NULL; + + return conn_state->crtc; +} +EXPORT_SYMBOL(drm_atomic_get_new_crtc_for_encoder); + +/** * drm_atomic_get_connector_state - get connector state * @state: global atomic state object * @connector: connector to get state object for diff --git a/include/drm/drm_atomic.h b/include/drm/drm_atomic.h index 10b1990..fdbd656 100644 --- a/include/drm/drm_atomic.h +++ b/include/drm/drm_atomic.h @@ -528,6 +528,13 @@ struct drm_connector * drm_atomic_get_new_connector_for_encoder(struct drm_atomic_state *state, struct drm_encoder *encoder); +struct drm_crtc * +drm_atomic_get_old_crtc_for_encoder(struct drm_atomic_state *state, + struct drm_encoder *encoder); +struct drm_crtc * +drm_atomic_get_new_crtc_for_encoder(struct drm_atomic_state *state, + struct drm_encoder *encoder); + /** * drm_atomic_get_existing_crtc_state - get CRTC state, if it exists * @state: global atomic state object
From patchwork Thu Dec 22 15:04:50 2022
X-Patchwork-Submitter: Vinod Polimera
X-Patchwork-Id: 13080091
From: Vinod Polimera
Subject: [PATCH v10 03/15] drm/msm/dp: use atomic callbacks for DP bridge ops
Date: Thu, 22 Dec 2022 20:34:50 +0530
Message-ID: <1671721502-16587-4-git-send-email-quic_vpolimer@quicinc.com>
In-Reply-To: <1671721502-16587-1-git-send-email-quic_vpolimer@quicinc.com>
References: <1671721502-16587-1-git-send-email-quic_vpolimer@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

Use atomic variants for DP bridge callback functions so that the atomic state can be accessed in the interface drivers. The atomic state will help the driver find out if the display is in self refresh state.
Signed-off-by: Sankeerth Billakanti Signed-off-by: Vinod Polimera Reviewed-by: Dmitry Baryshkov Reviewed-by: Douglas Anderson --- drivers/gpu/drm/msm/dp/dp_display.c | 9 ++++++--- drivers/gpu/drm/msm/dp/dp_drm.c | 16 ++++++++-------- drivers/gpu/drm/msm/dp/dp_drm.h | 9 ++++++--- 3 files changed, 20 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c index 7ff60e5..c04a02b 100644 --- a/drivers/gpu/drm/msm/dp/dp_display.c +++ b/drivers/gpu/drm/msm/dp/dp_display.c @@ -1643,7 +1643,8 @@ int msm_dp_modeset_init(struct msm_dp *dp_display, struct drm_device *dev, return 0; } -void dp_bridge_enable(struct drm_bridge *drm_bridge) +void dp_bridge_atomic_enable(struct drm_bridge *drm_bridge, + struct drm_bridge_state *old_bridge_state) { struct msm_dp_bridge *dp_bridge = to_dp_bridge(drm_bridge); struct msm_dp *dp = dp_bridge->dp_display; @@ -1698,7 +1699,8 @@ void dp_bridge_enable(struct drm_bridge *drm_bridge) mutex_unlock(&dp_display->event_mutex); } -void dp_bridge_disable(struct drm_bridge *drm_bridge) +void dp_bridge_atomic_disable(struct drm_bridge *drm_bridge, + struct drm_bridge_state *old_bridge_state) { struct msm_dp_bridge *dp_bridge = to_dp_bridge(drm_bridge); struct msm_dp *dp = dp_bridge->dp_display; @@ -1709,7 +1711,8 @@ void dp_bridge_disable(struct drm_bridge *drm_bridge) dp_ctrl_push_idle(dp_display->ctrl); } -void dp_bridge_post_disable(struct drm_bridge *drm_bridge) +void dp_bridge_atomic_post_disable(struct drm_bridge *drm_bridge, + struct drm_bridge_state *old_bridge_state) { struct msm_dp_bridge *dp_bridge = to_dp_bridge(drm_bridge); struct msm_dp *dp = dp_bridge->dp_display; diff --git a/drivers/gpu/drm/msm/dp/dp_drm.c b/drivers/gpu/drm/msm/dp/dp_drm.c index 6db82f9..77c34cb 100644 --- a/drivers/gpu/drm/msm/dp/dp_drm.c +++ b/drivers/gpu/drm/msm/dp/dp_drm.c @@ -94,14 +94,14 @@ static const struct drm_bridge_funcs dp_bridge_ops = { .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, .atomic_reset = drm_atomic_helper_bridge_reset, - .enable = dp_bridge_enable, - .disable = dp_bridge_disable, - .post_disable = dp_bridge_post_disable, - .mode_set = dp_bridge_mode_set, - .mode_valid = dp_bridge_mode_valid, - .get_modes = dp_bridge_get_modes, - .detect = dp_bridge_detect, - .atomic_check = dp_bridge_atomic_check, + .atomic_enable = dp_bridge_atomic_enable, + .atomic_disable = dp_bridge_atomic_disable, + .atomic_post_disable = dp_bridge_atomic_post_disable, + .mode_set = dp_bridge_mode_set, + .mode_valid = dp_bridge_mode_valid, + .get_modes = dp_bridge_get_modes, + .detect = dp_bridge_detect, + .atomic_check = dp_bridge_atomic_check, }; struct drm_bridge *dp_bridge_init(struct msm_dp *dp_display, struct drm_device *dev, diff --git a/drivers/gpu/drm/msm/dp/dp_drm.h b/drivers/gpu/drm/msm/dp/dp_drm.h index 82035db..e69c0b9 100644 --- a/drivers/gpu/drm/msm/dp/dp_drm.h +++ b/drivers/gpu/drm/msm/dp/dp_drm.h @@ -23,9 +23,12 @@ struct drm_connector *dp_drm_connector_init(struct msm_dp *dp_display, struct dr struct drm_bridge *dp_bridge_init(struct msm_dp *dp_display, struct drm_device *dev, struct drm_encoder *encoder); -void dp_bridge_enable(struct drm_bridge *drm_bridge); -void dp_bridge_disable(struct drm_bridge *drm_bridge); -void dp_bridge_post_disable(struct drm_bridge *drm_bridge); +void dp_bridge_atomic_enable(struct drm_bridge *drm_bridge, + struct drm_bridge_state *old_bridge_state); +void dp_bridge_atomic_disable(struct drm_bridge 
*drm_bridge, + struct drm_bridge_state *old_bridge_state); +void dp_bridge_atomic_post_disable(struct drm_bridge *drm_bridge, + struct drm_bridge_state *old_bridge_state); enum drm_mode_status dp_bridge_mode_valid(struct drm_bridge *bridge, const struct drm_display_info *info, const struct drm_display_mode *mode);
From patchwork Thu Dec 22 15:04:51 2022
X-Patchwork-Submitter: Vinod Polimera
X-Patchwork-Id: 13080090
From: Vinod Polimera
Subject: [PATCH v10 04/15] drm/msm/dp: Add basic PSR support for eDP
Date: Thu, 22 Dec 2022 20:34:51 +0530
Message-ID: <1671721502-16587-5-git-send-email-quic_vpolimer@quicinc.com>
In-Reply-To: <1671721502-16587-1-git-send-email-quic_vpolimer@quicinc.com>
References: <1671721502-16587-1-git-send-email-quic_vpolimer@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

Add support for basic panel self refresh (PSR) feature for eDP. Add a new interface to set PSR state in the sink from DPU. Program the eDP controller to issue PSR enter and exit SDP to the sink. Signed-off-by: Sankeerth Billakanti Signed-off-by: Vinod Polimera Reviewed-by: Dmitry Baryshkov --- drivers/gpu/drm/msm/dp/dp_catalog.c | 80 ++++++++++++++++++++++ drivers/gpu/drm/msm/dp/dp_catalog.h | 4 ++ drivers/gpu/drm/msm/dp/dp_ctrl.c | 80 ++++++++++++++++++++++ drivers/gpu/drm/msm/dp/dp_ctrl.h | 3 + drivers/gpu/drm/msm/dp/dp_display.c | 19 ++++++ drivers/gpu/drm/msm/dp/dp_display.h | 2 + drivers/gpu/drm/msm/dp/dp_drm.c | 133 +++++++++++++++++++++++++++++++++++- drivers/gpu/drm/msm/dp/dp_link.c | 36 ++++++++++ drivers/gpu/drm/msm/dp/dp_panel.c | 22 ++++++ drivers/gpu/drm/msm/dp/dp_panel.h | 6 ++ drivers/gpu/drm/msm/dp/dp_reg.h | 27 ++++++ 11 files changed, 411 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c b/drivers/gpu/drm/msm/dp/dp_catalog.c index 676279d..c12a5d9 100644 --- a/drivers/gpu/drm/msm/dp/dp_catalog.c +++ b/drivers/gpu/drm/msm/dp/dp_catalog.c @@ -47,6 +47,14 @@ #define DP_INTERRUPT_STATUS2_MASK \ (DP_INTERRUPT_STATUS2 << DP_INTERRUPT_STATUS_MASK_SHIFT) +#define DP_INTERRUPT_STATUS4 \ + (PSR_UPDATE_INT | PSR_CAPTURE_INT | PSR_EXIT_INT | \ + PSR_UPDATE_ERROR_INT | PSR_WAKE_ERROR_INT) + +#define DP_INTERRUPT_MASK4 \ + (PSR_UPDATE_MASK | PSR_CAPTURE_MASK | PSR_EXIT_MASK | \ + PSR_UPDATE_ERROR_MASK | PSR_WAKE_ERROR_MASK) + struct dp_catalog_private { struct device *dev; struct drm_device *drm_dev; @@ -359,6 +367,23 @@ void dp_catalog_ctrl_lane_mapping(struct dp_catalog *dp_catalog) ln_mapping); } +void dp_catalog_ctrl_psr_mainlink_enable(struct dp_catalog *dp_catalog, + bool enable) +{ + u32 val; + struct dp_catalog_private *catalog = container_of(dp_catalog, + struct dp_catalog_private, dp_catalog); + + val = dp_read_link(catalog, REG_DP_MAINLINK_CTRL); + + if (enable) + val |= DP_MAINLINK_CTRL_ENABLE; + else + val &= ~DP_MAINLINK_CTRL_ENABLE; + + dp_write_link(catalog, REG_DP_MAINLINK_CTRL, val); +} + void dp_catalog_ctrl_mainlink_ctrl(struct dp_catalog *dp_catalog, bool enable) { @@ -610,6 +635,47 @@ void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog) dp_write_aux(catalog, REG_DP_DP_HPD_CTRL, DP_DP_HPD_CTRL_HPD_EN); } +static void dp_catalog_enable_sdp(struct dp_catalog_private *catalog) +{ + /* trigger sdp */ + dp_write_link(catalog, MMSS_DP_SDP_CFG3, UPDATE_SDP); + dp_write_link(catalog, MMSS_DP_SDP_CFG3, 0x0); +} + +void dp_catalog_ctrl_config_psr(struct dp_catalog *dp_catalog) +{ + struct dp_catalog_private *catalog = container_of(dp_catalog, + struct dp_catalog_private, dp_catalog); + u32 config; + + /* enable PSR1 function */ + config = dp_read_link(catalog, REG_PSR_CONFIG); + config |= PSR1_SUPPORTED; + dp_write_link(catalog, REG_PSR_CONFIG,
config); + + dp_write_ahb(catalog, REG_DP_INTR_MASK4, DP_INTERRUPT_MASK4); + dp_catalog_enable_sdp(catalog); +} + +void dp_catalog_ctrl_set_psr(struct dp_catalog *dp_catalog, bool enter) +{ + struct dp_catalog_private *catalog = container_of(dp_catalog, + struct dp_catalog_private, dp_catalog); + u32 cmd; + + cmd = dp_read_link(catalog, REG_PSR_CMD); + + cmd &= ~(PSR_ENTER | PSR_EXIT); + + if (enter) + cmd |= PSR_ENTER; + else + cmd |= PSR_EXIT; + + dp_catalog_enable_sdp(catalog); + dp_write_link(catalog, REG_PSR_CMD, cmd); +} + u32 dp_catalog_link_is_connected(struct dp_catalog *dp_catalog) { struct dp_catalog_private *catalog = container_of(dp_catalog, @@ -645,6 +711,20 @@ u32 dp_catalog_hpd_get_intr_status(struct dp_catalog *dp_catalog) return isr & (mask | ~DP_DP_HPD_INT_MASK); } +u32 dp_catalog_ctrl_read_psr_interrupt_status(struct dp_catalog *dp_catalog) +{ + struct dp_catalog_private *catalog = container_of(dp_catalog, + struct dp_catalog_private, dp_catalog); + u32 intr, intr_ack; + + intr = dp_read_ahb(catalog, REG_DP_INTR_STATUS4); + intr_ack = (intr & DP_INTERRUPT_STATUS4) + << DP_INTERRUPT_STATUS_ACK_SHIFT; + dp_write_ahb(catalog, REG_DP_INTR_STATUS4, intr_ack); + + return intr; +} + int dp_catalog_ctrl_get_interrupt(struct dp_catalog *dp_catalog) { struct dp_catalog_private *catalog = container_of(dp_catalog, diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.h b/drivers/gpu/drm/msm/dp/dp_catalog.h index 1f717f4..2174bb5 100644 --- a/drivers/gpu/drm/msm/dp/dp_catalog.h +++ b/drivers/gpu/drm/msm/dp/dp_catalog.h @@ -93,6 +93,7 @@ void dp_catalog_ctrl_state_ctrl(struct dp_catalog *dp_catalog, u32 state); void dp_catalog_ctrl_config_ctrl(struct dp_catalog *dp_catalog, u32 config); void dp_catalog_ctrl_lane_mapping(struct dp_catalog *dp_catalog); void dp_catalog_ctrl_mainlink_ctrl(struct dp_catalog *dp_catalog, bool enable); +void dp_catalog_ctrl_psr_mainlink_enable(struct dp_catalog *dp_catalog, bool enable); void dp_catalog_ctrl_config_misc(struct dp_catalog *dp_catalog, u32 cc, u32 tb); void dp_catalog_ctrl_config_msa(struct dp_catalog *dp_catalog, u32 rate, u32 stream_rate_khz, bool fixed_nvid); @@ -104,12 +105,15 @@ void dp_catalog_ctrl_enable_irq(struct dp_catalog *dp_catalog, bool enable); void dp_catalog_hpd_config_intr(struct dp_catalog *dp_catalog, u32 intr_mask, bool en); void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog); +void dp_catalog_ctrl_config_psr(struct dp_catalog *dp_catalog); +void dp_catalog_ctrl_set_psr(struct dp_catalog *dp_catalog, bool enter); u32 dp_catalog_link_is_connected(struct dp_catalog *dp_catalog); u32 dp_catalog_hpd_get_intr_status(struct dp_catalog *dp_catalog); void dp_catalog_ctrl_phy_reset(struct dp_catalog *dp_catalog); int dp_catalog_ctrl_update_vx_px(struct dp_catalog *dp_catalog, u8 v_level, u8 p_level); int dp_catalog_ctrl_get_interrupt(struct dp_catalog *dp_catalog); +u32 dp_catalog_ctrl_read_psr_interrupt_status(struct dp_catalog *dp_catalog); void dp_catalog_ctrl_update_transfer_unit(struct dp_catalog *dp_catalog, u32 dp_tu, u32 valid_boundary, u32 valid_boundary2); diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c index dd26ca6..ea1c1f0 100644 --- a/drivers/gpu/drm/msm/dp/dp_ctrl.c +++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c @@ -22,6 +22,7 @@ #define DP_KHZ_TO_HZ 1000 #define IDLE_PATTERN_COMPLETION_TIMEOUT_JIFFIES (30 * HZ / 1000) /* 30 ms */ +#define PSR_OPERATION_COMPLETION_TIMEOUT_JIFFIES (300 * HZ / 1000) /* 300 ms */ #define WAIT_FOR_VIDEO_READY_TIMEOUT_JIFFIES (HZ / 2) #define 
DP_CTRL_INTR_READY_FOR_VIDEO BIT(0) @@ -80,6 +81,7 @@ struct dp_ctrl_private { struct dp_catalog *catalog; struct completion idle_comp; + struct completion psr_op_comp; struct completion video_comp; }; @@ -153,6 +155,9 @@ static void dp_ctrl_config_ctrl(struct dp_ctrl_private *ctrl) config |= DP_CONFIGURATION_CTRL_STATIC_DYNAMIC_CN; config |= DP_CONFIGURATION_CTRL_SYNC_ASYNC_CLK; + if (ctrl->panel->psr_cap.version) + config |= DP_CONFIGURATION_CTRL_SEND_VSC; + dp_catalog_ctrl_config_ctrl(ctrl->catalog, config); } @@ -1375,6 +1380,64 @@ void dp_ctrl_reset_irq_ctrl(struct dp_ctrl *dp_ctrl, bool enable) dp_catalog_ctrl_enable_irq(ctrl->catalog, enable); } +void dp_ctrl_config_psr(struct dp_ctrl *dp_ctrl) +{ + u8 cfg; + struct dp_ctrl_private *ctrl = container_of(dp_ctrl, + struct dp_ctrl_private, dp_ctrl); + + if (!ctrl->panel->psr_cap.version) + return; + + dp_catalog_ctrl_config_psr(ctrl->catalog); + + cfg = DP_PSR_ENABLE; + drm_dp_dpcd_write(ctrl->aux, DP_PSR_EN_CFG, &cfg, 1); +} + +void dp_ctrl_set_psr(struct dp_ctrl *dp_ctrl, bool enter) +{ + struct dp_ctrl_private *ctrl = container_of(dp_ctrl, + struct dp_ctrl_private, dp_ctrl); + + if (!ctrl->panel->psr_cap.version) + return; + + /* + * When entering PSR, + * 1. Send PSR enter SDP and wait for the PSR_UPDATE_INT + * 2. Turn off video + * 3. Disable the mainlink + * + * When exiting PSR, + * 1. Enable the mainlink + * 2. Send the PSR exit SDP + */ + if (enter) { + reinit_completion(&ctrl->psr_op_comp); + dp_catalog_ctrl_set_psr(ctrl->catalog, true); + + if (!wait_for_completion_timeout(&ctrl->psr_op_comp, + PSR_OPERATION_COMPLETION_TIMEOUT_JIFFIES)) { + DRM_ERROR("PSR_ENTRY timedout\n"); + dp_catalog_ctrl_set_psr(ctrl->catalog, false); + return; + } + + dp_ctrl_push_idle(dp_ctrl); + dp_catalog_ctrl_state_ctrl(ctrl->catalog, 0); + + dp_catalog_ctrl_psr_mainlink_enable(ctrl->catalog, false); + } else { + dp_catalog_ctrl_psr_mainlink_enable(ctrl->catalog, true); + + dp_catalog_ctrl_set_psr(ctrl->catalog, false); + dp_catalog_ctrl_state_ctrl(ctrl->catalog, DP_STATE_CTRL_SEND_VIDEO); + dp_ctrl_wait4video_ready(ctrl); + dp_catalog_ctrl_state_ctrl(ctrl->catalog, 0); + } +} + void dp_ctrl_phy_init(struct dp_ctrl *dp_ctrl) { struct dp_ctrl_private *ctrl; @@ -1989,6 +2052,22 @@ void dp_ctrl_isr(struct dp_ctrl *dp_ctrl) ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl); + if (ctrl->panel->psr_cap.version) { + isr = dp_catalog_ctrl_read_psr_interrupt_status(ctrl->catalog); + + if (isr) + complete(&ctrl->psr_op_comp); + + if (isr & PSR_EXIT_INT) + drm_dbg_dp(ctrl->drm_dev, "PSR exit done\n"); + + if (isr & PSR_UPDATE_INT) + drm_dbg_dp(ctrl->drm_dev, "PSR frame update done\n"); + + if (isr & PSR_CAPTURE_INT) + drm_dbg_dp(ctrl->drm_dev, "PSR frame capture done\n"); + } + isr = dp_catalog_ctrl_get_interrupt(ctrl->catalog); if (isr & DP_CTRL_INTR_READY_FOR_VIDEO) { @@ -2035,6 +2114,7 @@ struct dp_ctrl *dp_ctrl_get(struct device *dev, struct dp_link *link, dev_err(dev, "failed to add DP OPP table\n"); init_completion(&ctrl->idle_comp); + init_completion(&ctrl->psr_op_comp); init_completion(&ctrl->video_comp); /* in parameters */ diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.h b/drivers/gpu/drm/msm/dp/dp_ctrl.h index 9f29734..b226683 100644 --- a/drivers/gpu/drm/msm/dp/dp_ctrl.h +++ b/drivers/gpu/drm/msm/dp/dp_ctrl.h @@ -37,4 +37,7 @@ void dp_ctrl_phy_init(struct dp_ctrl *dp_ctrl); void dp_ctrl_phy_exit(struct dp_ctrl *dp_ctrl); void dp_ctrl_irq_phy_exit(struct dp_ctrl *dp_ctrl); +void dp_ctrl_set_psr(struct dp_ctrl *dp_ctrl, bool enable); +void 
dp_ctrl_config_psr(struct dp_ctrl *dp_ctrl); + #endif /* _DP_CTRL_H_ */ diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c index c04a02b..7ec81b8 100644 --- a/drivers/gpu/drm/msm/dp/dp_display.c +++ b/drivers/gpu/drm/msm/dp/dp_display.c @@ -399,6 +399,8 @@ static int dp_display_process_hpd_high(struct dp_display_private *dp) edid = dp->panel->edid; + dp->dp_display.psr_supported = dp->panel->psr_cap.version; + dp->audio_supported = drm_detect_monitor_audio(edid); dp_panel_handle_sink_request(dp->panel); @@ -898,6 +900,10 @@ static int dp_display_post_enable(struct msm_dp *dp_display) /* signal the connect event late to synchronize video and display */ dp_display_handle_plugged_change(dp_display, true); + + if (dp_display->psr_supported) + dp_ctrl_config_psr(dp->ctrl); + return 0; } @@ -1092,6 +1098,19 @@ static void dp_display_config_hpd(struct dp_display_private *dp) enable_irq(dp->irq); } +void dp_display_set_psr(struct msm_dp *dp_display, bool enter) +{ + struct dp_display_private *dp; + + if (!dp_display) { + DRM_ERROR("invalid params\n"); + return; + } + + dp = container_of(dp_display, struct dp_display_private, dp_display); + dp_ctrl_set_psr(dp->ctrl, enter); +} + static int hpd_event_thread(void *data) { struct dp_display_private *dp_priv; diff --git a/drivers/gpu/drm/msm/dp/dp_display.h b/drivers/gpu/drm/msm/dp/dp_display.h index dcedf02..01bc741 100644 --- a/drivers/gpu/drm/msm/dp/dp_display.h +++ b/drivers/gpu/drm/msm/dp/dp_display.h @@ -28,6 +28,7 @@ struct msm_dp { u32 max_dp_lanes; struct dp_audio *dp_audio; + bool psr_supported; }; int dp_display_set_plugged_cb(struct msm_dp *dp_display, @@ -38,5 +39,6 @@ bool dp_display_check_video_test(struct msm_dp *dp_display); int dp_display_get_test_bpp(struct msm_dp *dp_display); void dp_display_signal_audio_start(struct msm_dp *dp_display); void dp_display_signal_audio_complete(struct msm_dp *dp_display); +void dp_display_set_psr(struct msm_dp *dp, bool enter); #endif /* _DP_DISPLAY_H_ */ diff --git a/drivers/gpu/drm/msm/dp/dp_drm.c b/drivers/gpu/drm/msm/dp/dp_drm.c index 77c34cb..df66df0 100644 --- a/drivers/gpu/drm/msm/dp/dp_drm.c +++ b/drivers/gpu/drm/msm/dp/dp_drm.c @@ -104,6 +104,137 @@ static const struct drm_bridge_funcs dp_bridge_ops = { .atomic_check = dp_bridge_atomic_check, }; +static int edp_bridge_atomic_check(struct drm_bridge *drm_bridge, + struct drm_bridge_state *bridge_state, + struct drm_crtc_state *crtc_state, + struct drm_connector_state *conn_state) +{ + struct msm_dp *dp = to_dp_bridge(drm_bridge)->dp_display; + + if (WARN_ON(!conn_state)) + return -ENODEV; + + if (!conn_state->crtc || !crtc_state) + return 0; + + if (crtc_state->self_refresh_active && !dp->psr_supported) + return -EINVAL; + + return 0; +} + +static void edp_bridge_atomic_enable(struct drm_bridge *drm_bridge, + struct drm_bridge_state *old_bridge_state) +{ + struct drm_atomic_state *atomic_state = old_bridge_state->base.state; + struct drm_crtc *crtc; + struct drm_crtc_state *old_crtc_state; + struct msm_dp_bridge *dp_bridge = to_dp_bridge(drm_bridge); + struct msm_dp *dp = dp_bridge->dp_display; + + /* + * Check the old state of the crtc to determine if the panel + * was put into psr state previously by the edp_bridge_atomic_disable. + * If the panel is in psr, just exit psr state and skip the full + * bridge enable sequence. 
+ */ + crtc = drm_atomic_get_new_crtc_for_encoder(atomic_state, + drm_bridge->encoder); + if (!crtc) + return; + + old_crtc_state = drm_atomic_get_old_crtc_state(atomic_state, crtc); + + if (old_crtc_state && old_crtc_state->self_refresh_active) { + dp_display_set_psr(dp, false); + return; + } + + dp_bridge_atomic_enable(drm_bridge, old_bridge_state); +} + +static void edp_bridge_atomic_disable(struct drm_bridge *drm_bridge, + struct drm_bridge_state *old_bridge_state) +{ + struct drm_atomic_state *atomic_state = old_bridge_state->base.state; + struct drm_crtc *crtc; + struct drm_crtc_state *new_crtc_state = NULL, *old_crtc_state = NULL; + struct msm_dp_bridge *dp_bridge = to_dp_bridge(drm_bridge); + struct msm_dp *dp = dp_bridge->dp_display; + + crtc = drm_atomic_get_old_crtc_for_encoder(atomic_state, + drm_bridge->encoder); + if (!crtc) + goto out; + + new_crtc_state = drm_atomic_get_new_crtc_state(atomic_state, crtc); + if (!new_crtc_state) + goto out; + + old_crtc_state = drm_atomic_get_old_crtc_state(atomic_state, crtc); + if (!old_crtc_state) + goto out; + + /* + * Set self refresh mode if current crtc state is active. + * + * If old crtc state is active, then this is a display disable + * call while the sink is in psr state. So, exit psr here. + * The eDP controller will be disabled in the + * edp_bridge_atomic_post_disable function. + * + * We observed sink is stuck in self refresh if psr exit is skipped + * when display disable occurs while the sink is in psr state. + */ + if (new_crtc_state->self_refresh_active) { + dp_display_set_psr(dp, true); + return; + } else if (old_crtc_state->self_refresh_active) { + dp_display_set_psr(dp, false); + return; + } + +out: + dp_bridge_atomic_disable(drm_bridge, old_bridge_state); +} + +static void edp_bridge_atomic_post_disable(struct drm_bridge *drm_bridge, + struct drm_bridge_state *old_bridge_state) +{ + struct drm_atomic_state *atomic_state = old_bridge_state->base.state; + struct drm_crtc *crtc; + struct drm_crtc_state *new_crtc_state = NULL; + + crtc = drm_atomic_get_old_crtc_for_encoder(atomic_state, + drm_bridge->encoder); + if (!crtc) + return; + + new_crtc_state = drm_atomic_get_new_crtc_state(atomic_state, crtc); + if (!new_crtc_state) + return; + + /* + * Self refresh mode is already set in edp_bridge_atomic_disable. + */ + if (new_crtc_state->self_refresh_active) + return; + + dp_bridge_atomic_post_disable(drm_bridge, old_bridge_state); +} + +static const struct drm_bridge_funcs edp_bridge_ops = { + .atomic_enable = edp_bridge_atomic_enable, + .atomic_disable = edp_bridge_atomic_disable, + .atomic_post_disable = edp_bridge_atomic_post_disable, + .mode_set = dp_bridge_mode_set, + .mode_valid = dp_bridge_mode_valid, + .atomic_reset = drm_atomic_helper_bridge_reset, + .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, + .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, + .atomic_check = edp_bridge_atomic_check, +}; + struct drm_bridge *dp_bridge_init(struct msm_dp *dp_display, struct drm_device *dev, struct drm_encoder *encoder) { @@ -118,7 +249,7 @@ struct drm_bridge *dp_bridge_init(struct msm_dp *dp_display, struct drm_device * dp_bridge->dp_display = dp_display; bridge = &dp_bridge->bridge; - bridge->funcs = &dp_bridge_ops; + bridge->funcs = dp_display->is_edp ? 
&edp_bridge_ops : &dp_bridge_ops; bridge->type = dp_display->connector_type; /* diff --git a/drivers/gpu/drm/msm/dp/dp_link.c b/drivers/gpu/drm/msm/dp/dp_link.c index f1f1d64..5a4817a 100644 --- a/drivers/gpu/drm/msm/dp/dp_link.c +++ b/drivers/gpu/drm/msm/dp/dp_link.c @@ -937,6 +937,38 @@ static int dp_link_process_phy_test_pattern_request( return 0; } +static bool dp_link_read_psr_error_status(struct dp_link_private *link) +{ + u8 status; + + drm_dp_dpcd_read(link->aux, DP_PSR_ERROR_STATUS, &status, 1); + + if (status & DP_PSR_LINK_CRC_ERROR) + DRM_ERROR("PSR LINK CRC ERROR\n"); + else if (status & DP_PSR_RFB_STORAGE_ERROR) + DRM_ERROR("PSR RFB STORAGE ERROR\n"); + else if (status & DP_PSR_VSC_SDP_UNCORRECTABLE_ERROR) + DRM_ERROR("PSR VSC SDP UNCORRECTABLE ERROR\n"); + else + return false; + + return true; +} + +static bool dp_link_psr_capability_changed(struct dp_link_private *link) +{ + u8 status; + + drm_dp_dpcd_read(link->aux, DP_PSR_ESI, &status, 1); + + if (status & DP_PSR_CAPS_CHANGE) { + drm_dbg_dp(link->drm_dev, "PSR Capability Change\n"); + return true; + } + + return false; +} + static u8 get_link_status(const u8 link_status[DP_LINK_STATUS_SIZE], int r) { return link_status[r - DP_LANE0_1_STATUS]; @@ -1055,6 +1087,10 @@ int dp_link_process_request(struct dp_link *dp_link) dp_link->sink_request |= DP_TEST_LINK_TRAINING; } else if (!dp_link_process_phy_test_pattern_request(link)) { dp_link->sink_request |= DP_TEST_LINK_PHY_TEST_PATTERN; + } else if (dp_link_read_psr_error_status(link)) { + DRM_ERROR("PSR IRQ_HPD received\n"); + } else if (dp_link_psr_capability_changed(link)) { + drm_dbg_dp(link->drm_dev, "PSR Capabiity changed"); } else { ret = dp_link_process_link_status_update(link); if (!ret) { diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c index 5149ceb..8bf8ab4 100644 --- a/drivers/gpu/drm/msm/dp/dp_panel.c +++ b/drivers/gpu/drm/msm/dp/dp_panel.c @@ -20,6 +20,27 @@ struct dp_panel_private { bool aux_cfg_update_done; }; +static void dp_panel_read_psr_cap(struct dp_panel_private *panel) +{ + ssize_t rlen; + struct dp_panel *dp_panel; + + dp_panel = &panel->dp_panel; + + /* edp sink */ + if (dp_panel->dpcd[DP_EDP_CONFIGURATION_CAP]) { + rlen = drm_dp_dpcd_read(panel->aux, DP_PSR_SUPPORT, + &dp_panel->psr_cap, sizeof(dp_panel->psr_cap)); + if (rlen == sizeof(dp_panel->psr_cap)) { + drm_dbg_dp(panel->drm_dev, + "psr version: 0x%x, psr_cap: 0x%x\n", + dp_panel->psr_cap.version, + dp_panel->psr_cap.capabilities); + } else + DRM_ERROR("failed to read psr info, rlen=%zd\n", rlen); + } +} + static int dp_panel_read_dpcd(struct dp_panel *dp_panel) { int rc = 0; @@ -106,6 +127,7 @@ static int dp_panel_read_dpcd(struct dp_panel *dp_panel) } } + dp_panel_read_psr_cap(panel); end: return rc; } diff --git a/drivers/gpu/drm/msm/dp/dp_panel.h b/drivers/gpu/drm/msm/dp/dp_panel.h index d861197a..2d0826a 100644 --- a/drivers/gpu/drm/msm/dp/dp_panel.h +++ b/drivers/gpu/drm/msm/dp/dp_panel.h @@ -34,6 +34,11 @@ struct dp_panel_in { struct dp_catalog *catalog; }; +struct dp_panel_psr { + u8 version; + u8 capabilities; +}; + struct dp_panel { /* dpcd raw data */ u8 dpcd[DP_RECEIVER_CAP_SIZE + 1]; @@ -46,6 +51,7 @@ struct dp_panel { struct edid *edid; struct drm_connector *connector; struct dp_display_mode dp_mode; + struct dp_panel_psr psr_cap; bool video_test; u32 vic; diff --git a/drivers/gpu/drm/msm/dp/dp_reg.h b/drivers/gpu/drm/msm/dp/dp_reg.h index 2686028..ea85a69 100644 --- a/drivers/gpu/drm/msm/dp/dp_reg.h +++ b/drivers/gpu/drm/msm/dp/dp_reg.h @@ -22,6 
+22,20 @@ #define REG_DP_INTR_STATUS2 (0x00000024) #define REG_DP_INTR_STATUS3 (0x00000028) +#define REG_DP_INTR_STATUS4 (0x0000002C) +#define PSR_UPDATE_INT (0x00000001) +#define PSR_CAPTURE_INT (0x00000004) +#define PSR_EXIT_INT (0x00000010) +#define PSR_UPDATE_ERROR_INT (0x00000040) +#define PSR_WAKE_ERROR_INT (0x00000100) + +#define REG_DP_INTR_MASK4 (0x00000030) +#define PSR_UPDATE_MASK (0x00000001) +#define PSR_CAPTURE_MASK (0x00000002) +#define PSR_EXIT_MASK (0x00000004) +#define PSR_UPDATE_ERROR_MASK (0x00000008) +#define PSR_WAKE_ERROR_MASK (0x00000010) + #define REG_DP_DP_HPD_CTRL (0x00000000) #define DP_DP_HPD_CTRL_HPD_EN (0x00000001) @@ -164,6 +178,16 @@ #define MMSS_DP_AUDIO_TIMING_RBR_48 (0x00000094) #define MMSS_DP_AUDIO_TIMING_HBR_48 (0x00000098) +#define REG_PSR_CONFIG (0x00000100) +#define DISABLE_PSR (0x00000000) +#define PSR1_SUPPORTED (0x00000001) +#define PSR2_WITHOUT_FRAMESYNC (0x00000002) +#define PSR2_WITH_FRAMESYNC (0x00000003) + +#define REG_PSR_CMD (0x00000110) +#define PSR_ENTER (0x00000001) +#define PSR_EXIT (0x00000002) + #define MMSS_DP_PSR_CRC_RG (0x00000154) #define MMSS_DP_PSR_CRC_B (0x00000158) @@ -184,6 +208,9 @@ #define MMSS_DP_AUDIO_STREAM_0 (0x00000240) #define MMSS_DP_AUDIO_STREAM_1 (0x00000244) +#define MMSS_DP_SDP_CFG3 (0x0000024c) +#define UPDATE_SDP (0x00000001) + #define MMSS_DP_EXTENSION_0 (0x00000250) #define MMSS_DP_EXTENSION_1 (0x00000254) #define MMSS_DP_EXTENSION_2 (0x00000258)
From patchwork Thu Dec 22 15:04:52 2022
X-Patchwork-Submitter: Vinod Polimera
X-Patchwork-Id: 13080092
From: Vinod Polimera
Subject: [PATCH v10 05/15] drm/msm/dp: use the eDP bridge ops to validate eDP modes
Date: Thu, 22 Dec 2022 20:34:52 +0530
Message-ID: <1671721502-16587-6-git-send-email-quic_vpolimer@quicinc.com>
In-Reply-To: <1671721502-16587-1-git-send-email-quic_vpolimer@quicinc.com>
References: <1671721502-16587-1-git-send-email-quic_vpolimer@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

The eDP and DP interfaces shared the bridge operations, and the eDP-specific changes were implemented under the is_edp check. To add PSR support for eDP, we started using a new set of eDP bridge ops. We are moving the eDP-specific code in the dp_bridge_mode_valid function to a new eDP function, edp_bridge_mode_valid, under the eDP bridge ops. Signed-off-by: Sankeerth Billakanti Signed-off-by: Vinod Polimera Reviewed-by: Dmitry Baryshkov --- drivers/gpu/drm/msm/dp/dp_display.c | 8 -------- drivers/gpu/drm/msm/dp/dp_drm.c | 34 +++++++++++++++++++++++++++++++++- 2 files changed, 33 insertions(+), 9 deletions(-) diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c index 7ec81b8..91642a0 100644 --- a/drivers/gpu/drm/msm/dp/dp_display.c +++ b/drivers/gpu/drm/msm/dp/dp_display.c @@ -984,14 +984,6 @@ enum drm_mode_status dp_bridge_mode_valid(struct drm_bridge *bridge, return -EINVAL; } - /* - * The eDP controller currently does not have a reliable way of - * enabling panel power to read sink capabilities. So, we rely - * on the panel driver to populate only supported modes for now.
- */ - if (dp->is_edp) - return MODE_OK; - if (mode->clock > DP_MAX_PIXEL_CLK_KHZ) return MODE_CLOCK_HIGH; diff --git a/drivers/gpu/drm/msm/dp/dp_drm.c b/drivers/gpu/drm/msm/dp/dp_drm.c index df66df0..d3e9010 100644 --- a/drivers/gpu/drm/msm/dp/dp_drm.c +++ b/drivers/gpu/drm/msm/dp/dp_drm.c @@ -223,12 +223,44 @@ static void edp_bridge_atomic_post_disable(struct drm_bridge *drm_bridge, dp_bridge_atomic_post_disable(drm_bridge, old_bridge_state); } +/** + * edp_bridge_mode_valid - callback to determine if specified mode is valid + * @bridge: Pointer to drm bridge structure + * @info: display info + * @mode: Pointer to drm mode structure + * Returns: Validity status for specified mode + */ +static enum drm_mode_status edp_bridge_mode_valid(struct drm_bridge *bridge, + const struct drm_display_info *info, + const struct drm_display_mode *mode) +{ + struct msm_dp *dp; + int mode_pclk_khz = mode->clock; + + dp = to_dp_bridge(bridge)->dp_display; + + if (!dp || !mode_pclk_khz || !dp->connector) { + DRM_ERROR("invalid params\n"); + return -EINVAL; + } + + if (mode->clock > DP_MAX_PIXEL_CLK_KHZ) + return MODE_CLOCK_HIGH; + + /* + * The eDP controller currently does not have a reliable way of + * enabling panel power to read sink capabilities. So, we rely + * on the panel driver to populate only supported modes for now. + */ + return MODE_OK; +} + static const struct drm_bridge_funcs edp_bridge_ops = { .atomic_enable = edp_bridge_atomic_enable, .atomic_disable = edp_bridge_atomic_disable, .atomic_post_disable = edp_bridge_atomic_post_disable, .mode_set = dp_bridge_mode_set, - .mode_valid = dp_bridge_mode_valid, + .mode_valid = edp_bridge_mode_valid, .atomic_reset = drm_atomic_helper_bridge_reset, .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,