From patchwork Wed Jan 29 03:20:33 2025
X-Patchwork-Submitter: Jessica Zhang
X-Patchwork-Id: 13953428
From: Jessica Zhang
Date: Tue, 28 Jan 2025 19:20:33 -0800
Subject: [PATCH v5 01/14] drm/msm/dpu: fill CRTC resources in dpu_crtc.c
X-Mailing-List: linux-arm-msm@vger.kernel.org
Message-ID: <20250128-concurrent-wb-v5-1-6464ca5360df@quicinc.com>

From: Dmitry Baryshkov

Stop poking into CRTC state from dpu_encoder.c, fill CRTC HW resources from dpu_crtc_assign_resources().
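For orientation, the CRTC-side hook this patch adds boils down to the following condensed sketch (names taken from the dpu_crtc.c hunk below; the surrounding checks and the helper's own body are trimmed, so treat it as an illustration rather than the literal code):

	static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
					 struct drm_atomic_state *state)
	{
		struct drm_crtc_state *crtc_state =
			drm_atomic_get_new_crtc_state(state, crtc);
		int rc;

		if (drm_atomic_crtc_needs_modeset(crtc_state)) {
			/*
			 * Pull the CTL/LM/DSPP blocks reserved in the RM into
			 * cstate->mixers[] / cstate->num_mixers, replacing the
			 * dpu_encoder_assign_crtc_resources() path removed from
			 * dpu_encoder.c below.
			 */
			rc = dpu_crtc_assign_resources(crtc, crtc_state);
			if (rc < 0)
				return rc;
		}

		/* ... existing virtual-plane and dirtyfb checks continue as before ... */
		return 0;
	}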
Signed-off-by: Dmitry Baryshkov [quic_abhinavk@quicinc.com: cleaned up formatting] Signed-off-by: Abhinav Kumar Reviewed-by: Abhinav Kumar Reviewed-by: Dmitry Baryshkov Signed-off-by: Jessica Zhang --- Changes in v5: - Reordered to prevent breaking CI or upon partial application --- drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c | 66 +++++++++++++++++++++++++++++ drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 37 ---------------- 2 files changed, 66 insertions(+), 37 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c index 29485e76f531fad765d586612528cf3f0c9e7a89..505827174a685617bfa6fb5c790670c80a3f6101 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c @@ -1230,6 +1230,66 @@ static int dpu_crtc_reassign_planes(struct drm_crtc *crtc, struct drm_crtc_state return ret; } +#define MAX_CHANNELS_PER_CRTC 2 + +static int dpu_crtc_assign_resources(struct drm_crtc *crtc, + struct drm_crtc_state *crtc_state) +{ + struct dpu_hw_blk *hw_ctl[MAX_CHANNELS_PER_CRTC]; + struct dpu_hw_blk *hw_lm[MAX_CHANNELS_PER_CRTC]; + struct dpu_hw_blk *hw_dspp[MAX_CHANNELS_PER_CRTC]; + int i, num_lm, num_ctl, num_dspp; + struct dpu_kms *dpu_kms = _dpu_crtc_get_kms(crtc); + struct dpu_global_state *global_state; + struct dpu_crtc_state *cstate; + struct drm_encoder *drm_enc; + + if (!crtc_state->encoder_mask) + return 0; + + /* + * For now, grab the first encoder in the crtc state as we don't + * support clone mode yet + */ + drm_for_each_encoder_mask(drm_enc, crtc->dev, crtc_state->encoder_mask) + break; + + global_state = dpu_kms_get_global_state(crtc_state->state); + if (IS_ERR(global_state)) + return PTR_ERR(global_state); + + if (!crtc_state->enable) + return 0; + + cstate = to_dpu_crtc_state(crtc_state); + + num_ctl = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, + drm_enc->base.id, + DPU_HW_BLK_CTL, hw_ctl, + ARRAY_SIZE(hw_ctl)); + num_lm = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, + drm_enc->base.id, + DPU_HW_BLK_LM, hw_lm, + ARRAY_SIZE(hw_lm)); + num_dspp = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, + drm_enc->base.id, + DPU_HW_BLK_DSPP, hw_dspp, + ARRAY_SIZE(hw_dspp)); + + for (i = 0; i < num_lm; i++) { + int ctl_idx = (i < num_ctl) ? 
i : (num_ctl-1); + + cstate->mixers[i].hw_lm = to_dpu_hw_mixer(hw_lm[i]); + cstate->mixers[i].lm_ctl = to_dpu_hw_ctl(hw_ctl[ctl_idx]); + if (i < num_dspp) + cstate->mixers[i].hw_dspp = to_dpu_hw_dspp(hw_dspp[i]); + } + + cstate->num_mixers = num_lm; + + return 0; +} + static int dpu_crtc_atomic_check(struct drm_crtc *crtc, struct drm_atomic_state *state) { @@ -1245,6 +1305,12 @@ static int dpu_crtc_atomic_check(struct drm_crtc *crtc, bool needs_dirtyfb = dpu_crtc_needs_dirtyfb(crtc_state); + if (drm_atomic_crtc_needs_modeset(crtc_state)) { + rc = dpu_crtc_assign_resources(crtc, crtc_state); + if (rc < 0) + return rc; + } + if (dpu_use_virtual_planes && (crtc_state->planes_changed || crtc_state->zpos_changed)) { rc = dpu_crtc_reassign_planes(crtc, crtc_state); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index a24fedb5ba4f1c84777b71c669bac0241acdd421..84e6a5ad4a1edd4ef5f3ed82500c99068e0645ad 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -719,40 +719,6 @@ static struct msm_display_topology dpu_encoder_get_topology( return topology; } -static void dpu_encoder_assign_crtc_resources(struct dpu_kms *dpu_kms, - struct drm_encoder *drm_enc, - struct dpu_global_state *global_state, - struct drm_crtc_state *crtc_state) -{ - struct dpu_crtc_state *cstate; - struct dpu_hw_blk *hw_ctl[MAX_CHANNELS_PER_ENC]; - struct dpu_hw_blk *hw_lm[MAX_CHANNELS_PER_ENC]; - struct dpu_hw_blk *hw_dspp[MAX_CHANNELS_PER_ENC]; - int num_lm, num_ctl, num_dspp, i; - - cstate = to_dpu_crtc_state(crtc_state); - - memset(cstate->mixers, 0, sizeof(cstate->mixers)); - - num_ctl = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, - drm_enc->base.id, DPU_HW_BLK_CTL, hw_ctl, ARRAY_SIZE(hw_ctl)); - num_lm = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, - drm_enc->base.id, DPU_HW_BLK_LM, hw_lm, ARRAY_SIZE(hw_lm)); - num_dspp = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, - drm_enc->base.id, DPU_HW_BLK_DSPP, hw_dspp, - ARRAY_SIZE(hw_dspp)); - - for (i = 0; i < num_lm; i++) { - int ctl_idx = (i < num_ctl) ? i : (num_ctl-1); - - cstate->mixers[i].hw_lm = to_dpu_hw_mixer(hw_lm[i]); - cstate->mixers[i].lm_ctl = to_dpu_hw_ctl(hw_ctl[ctl_idx]); - cstate->mixers[i].hw_dspp = i < num_dspp ? 
to_dpu_hw_dspp(hw_dspp[i]) : NULL; - } - - cstate->num_mixers = num_lm; -} - /** * dpu_encoder_virt_check_mode_changed: check if full modeset is required * @drm_enc: Pointer to drm encoder structure @@ -823,9 +789,6 @@ static int dpu_encoder_virt_atomic_check( if (crtc_state->enable) ret = dpu_rm_reserve(&dpu_kms->rm, global_state, drm_enc, crtc_state, &topology); - if (!ret) - dpu_encoder_assign_crtc_resources(dpu_kms, drm_enc, - global_state, crtc_state); } trace_dpu_enc_atomic_check_flags(DRMID(drm_enc), adj_mode->flags);

From patchwork Wed Jan 29 03:20:34 2025
X-Patchwork-Submitter: Jessica Zhang
X-Patchwork-Id: 13953439
From: Jessica Zhang
Date: Tue, 28 Jan 2025 19:20:34 -0800
Subject: [PATCH v5 02/14] drm/msm/dpu: move resource allocation to CRTC
Message-ID: <20250128-concurrent-wb-v5-2-6464ca5360df@quicinc.com>

From: Dmitry Baryshkov

All resource allocation is centered around the LMs. The other blocks (except DSCs) are then allocated based on the LMs that were selected, and the LM powers up the CRTC rather than the encoder. Moreover, if at some point the driver supports encoder cloning, allocating resources from the encoder will be incorrect, as all clones will have different encoder IDs, while the LMs are to be shared by these encoders.
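As a concrete illustration, the LM-centred selection that this patch adds as dpu_crtc_get_topology() reduces to roughly the following; this is a condensed sketch of the dpu_crtc.c hunk further down, not new logic:

	/* every encoder on the CRTC first contributes num_intf/num_dsc/needs_cdm
	 * via dpu_encoder_update_topology(); the LM count is derived afterwards */
	if (topology.num_intf == 2)		/* split (dual-interface) display */
		topology.num_lm = 2;
	else if (topology.num_dsc == 2)		/* 2:2:1 DSC merge topology */
		topology.num_lm = 2;
	else if (dpu_kms->catalog->caps->has_3d_merge)
		topology.num_lm = (mode->hdisplay > MAX_HDISPLAY_SPLIT) ? 2 : 1;
	else
		topology.num_lm = 1;

	if (crtc_state->ctm)			/* a CTM request needs one DSPP per LM */
		topology.num_dspp = topology.num_lm;

Because all of these inputs (encoder mask, adjusted mode, CTM) hang off the CRTC state, the CRTC is the natural owner of the allocation.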
In addition, move mode_changed() to dpu_crtc as encoder no longer has access to topology information Signed-off-by: Dmitry Baryshkov [quic_abhinavk@quicinc.com: Refactored resource allocation for CDM] Signed-off-by: Abhinav Kumar [quic_jesszhan@quicinc.com: Changed to grabbing exising global state] Signed-off-by: Jessica Zhang --- Changes in v5: - Reordered to prevent breaking CI and upon partial applciation - Moved mode_changed() from dpu_encoder to dpu_crtc - Dropped dpu_encoder_needs_dsc_merge() refactor to clean up commit - In dpu_encoder_update_topology(), grab DSC config using dpu_encoder helper as dpu_encoder->dsc hasn't been set yet --- drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c | 79 +++++++++++++ drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.h | 2 + drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 174 +++++++++------------------- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h | 11 +- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 19 +-- 5 files changed, 144 insertions(+), 141 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c index 505827174a685617bfa6fb5c790670c80a3f6101..88c91b76f8096080a1b6e62e406413481e00617c 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c @@ -1231,6 +1231,50 @@ static int dpu_crtc_reassign_planes(struct drm_crtc *crtc, struct drm_crtc_state } #define MAX_CHANNELS_PER_CRTC 2 +#define MAX_HDISPLAY_SPLIT 1080 + +static struct msm_display_topology dpu_crtc_get_topology( + struct drm_crtc *crtc, + struct dpu_kms *dpu_kms, + struct drm_crtc_state *crtc_state) +{ + struct drm_display_mode *mode = &crtc_state->adjusted_mode; + struct msm_display_topology topology = {0}; + struct drm_encoder *drm_enc; + + drm_for_each_encoder_mask(drm_enc, crtc->dev, crtc_state->encoder_mask) + dpu_encoder_update_topology(drm_enc, &topology, crtc_state->state, + &crtc_state->adjusted_mode); + + /* + * Datapath topology selection + * + * Dual display + * 2 LM, 2 INTF ( Split display using 2 interfaces) + * + * Single display + * 1 LM, 1 INTF + * 2 LM, 1 INTF (stream merge to support high resolution interfaces) + * + * If DSC is enabled, use 2 LMs for 2:2:1 topology + * + * Add dspps to the reservation requirements if ctm is requested + */ + + if (topology.num_intf == 2) + topology.num_lm = 2; + else if (topology.num_dsc == 2) + topology.num_lm = 2; + else if (dpu_kms->catalog->caps->has_3d_merge) + topology.num_lm = (mode->hdisplay > MAX_HDISPLAY_SPLIT) ? 
2 : 1; + else + topology.num_lm = 1; + + if (crtc_state->ctm) + topology.num_dspp = topology.num_lm; + + return topology; +} static int dpu_crtc_assign_resources(struct drm_crtc *crtc, struct drm_crtc_state *crtc_state) @@ -1243,6 +1287,8 @@ static int dpu_crtc_assign_resources(struct drm_crtc *crtc, struct dpu_global_state *global_state; struct dpu_crtc_state *cstate; struct drm_encoder *drm_enc; + struct msm_display_topology topology; + int ret; if (!crtc_state->encoder_mask) return 0; @@ -1254,13 +1300,24 @@ static int dpu_crtc_assign_resources(struct drm_crtc *crtc, drm_for_each_encoder_mask(drm_enc, crtc->dev, crtc_state->encoder_mask) break; + /* + * Release and Allocate resources on every modeset + */ global_state = dpu_kms_get_global_state(crtc_state->state); if (IS_ERR(global_state)) return PTR_ERR(global_state); + dpu_rm_release(global_state, drm_enc); + if (!crtc_state->enable) return 0; + topology = dpu_crtc_get_topology(crtc, dpu_kms, crtc_state); + ret = dpu_rm_reserve(&dpu_kms->rm, global_state, + drm_enc, crtc_state, &topology); + if (ret) + return ret; + cstate = to_dpu_crtc_state(crtc_state); num_ctl = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, @@ -1290,6 +1347,28 @@ static int dpu_crtc_assign_resources(struct drm_crtc *crtc, return 0; } +/** + * dpu_crtc_check_mode_changed: check if full modeset is required + * @crtc_state: Corresponding CRTC state to be checked + * + * Check if the changes in the object properties demand full mode set. + */ +int dpu_crtc_check_mode_changed(struct drm_crtc_state *crtc_state) +{ + struct drm_encoder *drm_enc; + struct drm_crtc *crtc = crtc_state->crtc; + + DRM_DEBUG_ATOMIC("%d\n", crtc->base.id); + + /* there might be cases where encoder needs a modeset too */ + drm_for_each_encoder_mask(drm_enc, crtc->dev, crtc_state->encoder_mask) { + if (dpu_encoder_needs_modeset(drm_enc, crtc_state->state)) + crtc_state->mode_changed = true; + } + + return 0; +} + static int dpu_crtc_atomic_check(struct drm_crtc *crtc, struct drm_atomic_state *state) { diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.h index 0b148f3ce0d7af80ec4ffcd31d8632a5815b16f1..51a3b5fc879a1c83836601d717757dd1e801f884 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.h @@ -239,6 +239,8 @@ static inline int dpu_crtc_frame_pending(struct drm_crtc *crtc) return crtc ? 
atomic_read(&to_dpu_crtc(crtc)->frame_pending) : -EINVAL; } +int dpu_crtc_check_mode_changed(struct drm_crtc_state *crtc_state); + int dpu_crtc_vblank(struct drm_crtc *crtc, bool en); void dpu_crtc_vblank_callback(struct drm_crtc *crtc); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index 84e6a5ad4a1edd4ef5f3ed82500c99068e0645ad..a4d59cf6b727e38bbaa8063fb541f74d1277d444 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -58,8 +58,6 @@ #define IDLE_SHORT_TIMEOUT 1 -#define MAX_HDISPLAY_SPLIT 1080 - /* timeout in frames waiting for frame done */ #define DPU_ENCODER_FRAME_DONE_TIMEOUT_FRAMES 5 @@ -622,7 +620,7 @@ bool dpu_encoder_use_dsc_merge(struct drm_encoder *drm_enc) if (dpu_enc->phys_encs[i]) intf_count++; - /* See dpu_encoder_get_topology, we only support 2:2:1 topology */ + /* See dpu_crtc_get_topology, we only support 2:2:1 topology */ if (dpu_enc->dsc) num_dsc = 2; @@ -647,153 +645,88 @@ struct drm_dsc_config *dpu_encoder_get_dsc_config(struct drm_encoder *drm_enc) return NULL; } -static struct msm_display_topology dpu_encoder_get_topology( - struct dpu_encoder_virt *dpu_enc, - struct drm_display_mode *mode, - struct drm_crtc_state *crtc_state, - struct drm_connector_state *conn_state) +void dpu_encoder_update_topology(struct drm_encoder *drm_enc, + struct msm_display_topology *topology, + struct drm_atomic_state *state, + const struct drm_display_mode *adj_mode) { - struct msm_drm_private *priv = dpu_enc->base.dev->dev_private; - struct msm_display_info *disp_info = &dpu_enc->disp_info; - struct dpu_kms *dpu_kms = to_dpu_kms(priv->kms); - struct drm_dsc_config *dsc = dpu_encoder_get_dsc_config(&dpu_enc->base); - struct msm_display_topology topology = {0}; - int i, intf_count = 0; + struct dpu_encoder_virt *dpu_enc = to_dpu_encoder_virt(drm_enc); + struct drm_connector *connector; + struct drm_connector_state *conn_state; + struct msm_display_info *disp_info; + struct drm_framebuffer *fb; + struct msm_drm_private *priv; + struct drm_dsc_config *dsc; + + int i; for (i = 0; i < MAX_PHYS_ENCODERS_PER_VIRTUAL; i++) if (dpu_enc->phys_encs[i]) - intf_count++; - - /* Datapath topology selection - * - * Dual display - * 2 LM, 2 INTF ( Split display using 2 interfaces) - * - * Single display - * 1 LM, 1 INTF - * 2 LM, 1 INTF (stream merge to support high resolution interfaces) - * - * Add dspps to the reservation requirements if ctm is requested - */ - if (intf_count == 2) - topology.num_lm = 2; - else if (!dpu_kms->catalog->caps->has_3d_merge) - topology.num_lm = 1; - else - topology.num_lm = (mode->hdisplay > MAX_HDISPLAY_SPLIT) ? 
2 : 1; - - if (crtc_state->ctm) - topology.num_dspp = topology.num_lm; + topology->num_intf++; - topology.num_intf = intf_count; + dsc = dpu_encoder_get_dsc_config(drm_enc); + /* We only support 2 DSC mode (with 2 LM and 1 INTF) */ if (dsc) { - /* - * In case of Display Stream Compression (DSC), we would use - * 2 DSC encoders, 2 layer mixers and 1 interface - * this is power optimal and can drive up to (including) 4k - * screens - */ - topology.num_dsc = 2; - topology.num_lm = 2; - topology.num_intf = 1; + topology->num_dsc = 2; + topology->num_intf = 1; } + connector = drm_atomic_get_new_connector_for_encoder(state, drm_enc); + if (!connector) + return; + conn_state = drm_atomic_get_new_connector_state(state, connector); + if (!conn_state) + return; + + disp_info = &dpu_enc->disp_info; + + priv = drm_enc->dev->dev_private; + /* * Use CDM only for writeback or DP at the moment as other interfaces cannot handle it. * If writeback itself cannot handle cdm for some reason it will fail in its atomic_check() * earlier. */ if (disp_info->intf_type == INTF_WB && conn_state->writeback_job) { - struct drm_framebuffer *fb; - fb = conn_state->writeback_job->fb; if (fb && MSM_FORMAT_IS_YUV(msm_framebuffer_format(fb))) - topology.needs_cdm = true; + topology->needs_cdm = true; } else if (disp_info->intf_type == INTF_DP) { - if (msm_dp_is_yuv_420_enabled(priv->dp[disp_info->h_tile_instance[0]], mode)) - topology.needs_cdm = true; + if (msm_dp_is_yuv_420_enabled(priv->dp[disp_info->h_tile_instance[0]], adj_mode)) + topology->needs_cdm = true; } - - return topology; } -/** - * dpu_encoder_virt_check_mode_changed: check if full modeset is required - * @drm_enc: Pointer to drm encoder structure - * @crtc_state: Corresponding CRTC state to be checked - * @conn_state: Corresponding Connector's state to be checked - * - * Check if the changes in the object properties demand full mode set. 
- */ -int dpu_encoder_virt_check_mode_changed(struct drm_encoder *drm_enc, - struct drm_crtc_state *crtc_state, - struct drm_connector_state *conn_state) +bool dpu_encoder_needs_modeset(struct drm_encoder *drm_enc, struct drm_atomic_state *state) { + struct drm_connector *connector; + struct drm_connector_state *conn_state; + struct drm_framebuffer *fb; struct dpu_encoder_virt *dpu_enc = to_dpu_encoder_virt(drm_enc); - struct msm_display_topology topology; - - DPU_DEBUG_ENC(dpu_enc, "\n"); - - /* Using mode instead of adjusted_mode as it wasn't computed yet */ - topology = dpu_encoder_get_topology(dpu_enc, &crtc_state->mode, crtc_state, conn_state); - - if (topology.needs_cdm && !dpu_enc->cur_master->hw_cdm) - crtc_state->mode_changed = true; - else if (!topology.needs_cdm && dpu_enc->cur_master->hw_cdm) - crtc_state->mode_changed = true; - - return 0; -} - -static int dpu_encoder_virt_atomic_check( - struct drm_encoder *drm_enc, - struct drm_crtc_state *crtc_state, - struct drm_connector_state *conn_state) -{ - struct dpu_encoder_virt *dpu_enc; - struct msm_drm_private *priv; - struct dpu_kms *dpu_kms; - struct drm_display_mode *adj_mode; - struct msm_display_topology topology; - struct dpu_global_state *global_state; - int ret = 0; - - if (!drm_enc || !crtc_state || !conn_state) { - DPU_ERROR("invalid arg(s), drm_enc %d, crtc/conn state %d/%d\n", - drm_enc != NULL, crtc_state != NULL, conn_state != NULL); - return -EINVAL; - } - - dpu_enc = to_dpu_encoder_virt(drm_enc); - DPU_DEBUG_ENC(dpu_enc, "\n"); - - priv = drm_enc->dev->dev_private; - dpu_kms = to_dpu_kms(priv->kms); - adj_mode = &crtc_state->adjusted_mode; - global_state = dpu_kms_get_global_state(crtc_state->state); - if (IS_ERR(global_state)) - return PTR_ERR(global_state); - trace_dpu_enc_atomic_check(DRMID(drm_enc)); + if (!drm_enc || !state) + return false; - topology = dpu_encoder_get_topology(dpu_enc, adj_mode, crtc_state, conn_state); + connector = drm_atomic_get_new_connector_for_encoder(state, drm_enc); + if (!connector) + return false; - /* - * Release and Allocate resources on every modeset - */ - if (drm_atomic_crtc_needs_modeset(crtc_state)) { - dpu_rm_release(global_state, drm_enc); + conn_state = drm_atomic_get_new_connector_state(state, connector); - if (crtc_state->enable) - ret = dpu_rm_reserve(&dpu_kms->rm, global_state, - drm_enc, crtc_state, &topology); + if (dpu_enc->disp_info.intf_type == INTF_WB && conn_state->writeback_job) { + fb = conn_state->writeback_job->fb; + if (fb && MSM_FORMAT_IS_YUV(msm_framebuffer_format(fb))) { + if (!dpu_enc->cur_master->hw_cdm) + return true; + } else { + if (dpu_enc->cur_master->hw_cdm) + return true; + } } - trace_dpu_enc_atomic_check_flags(DRMID(drm_enc), adj_mode->flags); - - return ret; + return false; } static void _dpu_encoder_update_vsync_source(struct dpu_encoder_virt *dpu_enc, @@ -2612,7 +2545,6 @@ static const struct drm_encoder_helper_funcs dpu_encoder_helper_funcs = { .atomic_mode_set = dpu_encoder_virt_atomic_mode_set, .atomic_disable = dpu_encoder_virt_atomic_disable, .atomic_enable = dpu_encoder_virt_atomic_enable, - .atomic_check = dpu_encoder_virt_atomic_check, }; static const struct drm_encoder_funcs dpu_encoder_funcs = { diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h index da133ee4701a329f566f6f9a7255f2f6d050f891..b0ac10ebd02c2b63e6f6f9010a22cdace931cf3b 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h @@ -80,6 +80,13 @@ int 
dpu_encoder_get_crc(const struct drm_encoder *drm_enc, u32 *crcs, int pos); bool dpu_encoder_use_dsc_merge(struct drm_encoder *drm_enc); +void dpu_encoder_update_topology(struct drm_encoder *drm_enc, + struct msm_display_topology *topology, + struct drm_atomic_state *state, + const struct drm_display_mode *adj_mode); + +bool dpu_encoder_needs_modeset(struct drm_encoder *drm_enc, struct drm_atomic_state *state); + void dpu_encoder_prepare_wb_job(struct drm_encoder *drm_enc, struct drm_writeback_job *job); @@ -88,8 +95,4 @@ void dpu_encoder_cleanup_wb_job(struct drm_encoder *drm_enc, bool dpu_encoder_is_valid_for_commit(struct drm_encoder *drm_enc); -int dpu_encoder_virt_check_mode_changed(struct drm_encoder *drm_enc, - struct drm_crtc_state *crtc_state, - struct drm_connector_state *conn_state); - #endif /* __DPU_ENCODER_H__ */ diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c index 5ce06e25990cb70284d3c3f04ac1e1e1bed6142a..c6b3b2e147b4c61ec93db4a9f01d5a288d2b9eb2 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -449,24 +449,11 @@ static void dpu_kms_disable_commit(struct msm_kms *kms) static int dpu_kms_check_mode_changed(struct msm_kms *kms, struct drm_atomic_state *state) { struct drm_crtc_state *new_crtc_state; - struct drm_connector *connector; - struct drm_connector_state *new_conn_state; + struct drm_crtc *crtc; int i; - for_each_new_connector_in_state(state, connector, new_conn_state, i) { - struct drm_encoder *encoder; - - WARN_ON(!!new_conn_state->best_encoder != !!new_conn_state->crtc); - - if (!new_conn_state->crtc || !new_conn_state->best_encoder) - continue; - - new_crtc_state = drm_atomic_get_new_crtc_state(state, new_conn_state->crtc); - - encoder = new_conn_state->best_encoder; - - dpu_encoder_virt_check_mode_changed(encoder, new_crtc_state, new_conn_state); - } + for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) + dpu_crtc_check_mode_changed(new_crtc_state); return 0; } From patchwork Wed Jan 29 03:20:35 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jessica Zhang X-Patchwork-Id: 13953440 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8907F1A3A95; Wed, 29 Jan 2025 03:21:38 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=205.220.180.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738120901; cv=none; b=eLTWUENikUCj585o1GqVjGFpF79vB8PqCo+2/C0SBzyLmvcUAs0qN1KfVRFIvP0jAET4DHOcTKGIiOizDmB+frkyjSgcxtNXjox2gUyb1QKGJdeanW9rDwL5fbDp0O3dGl6AdUfKviMUs9ejJ4ClCwjpKzwQrJTtIcjgIWRz834= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738120901; c=relaxed/simple; bh=FHxZbjWWZ8oWIT7LLC/KudWIiAN4C+yjIhxVelRb17s=; h=From:Date:Subject:MIME-Version:Content-Type:Message-ID:References: In-Reply-To:To:CC; b=FTg1POS18f89Q6WlCHm2vqYlxXUnxOXVnCUzytvsMUpzJTgHnuWtJsmIEFBM3Uxb59M9sUHeH95SNtDog9hv87e35qY8OhDxZU9ZAx64j/pW2t3SCNB4JRYJiacRriOAclNoEumC/QwX+GoT/W4w1OlUC7B02SoowfgBtq3shKE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com; spf=pass smtp.mailfrom=quicinc.com; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b=PeKoV/p9; arc=none 
From: Jessica Zhang
Date: Tue, 28 Jan 2025 19:20:35 -0800
Subject: [PATCH v5 03/14] drm/msm/dpu: switch RM to use crtc_id rather than enc_id for allocation
Message-ID: <20250128-concurrent-wb-v5-3-6464ca5360df@quicinc.com>
definitions=2025-01-28_04,2025-01-27_01,2024-11-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 clxscore=1015 bulkscore=0 priorityscore=1501 mlxlogscore=999 lowpriorityscore=0 malwarescore=0 impostorscore=0 phishscore=0 suspectscore=0 mlxscore=0 adultscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2411120000 definitions=main-2501290026 From: Dmitry Baryshkov Up to now the driver has been using encoder to allocate hardware resources. Switch it to use CRTC id in preparation for the next step. Reviewed-by: Abhinav Kumar Signed-off-by: Dmitry Baryshkov Reviewed-by: Dmitry Baryshkov Signed-off-by: Jessica Zhang --- Changes in v5: - Reordered to prevent breaking CI and upon partial application Changes in v4 (due to rebase): - moved *_get_assigned_resources() changes for DSPP and LM from encoder *_virt_atomic_mode_set() to *_assign_crtc_resources() --- drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c | 20 ++- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 10 +- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h | 12 +- drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c | 189 ++++++++++++++-------------- drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h | 7 +- 5 files changed, 113 insertions(+), 125 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c index 88c91b76f8096080a1b6e62e406413481e00617c..8144de25ac7316e9ab9ab0aa45c0d01a7267c24c 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c @@ -1286,19 +1286,15 @@ static int dpu_crtc_assign_resources(struct drm_crtc *crtc, struct dpu_kms *dpu_kms = _dpu_crtc_get_kms(crtc); struct dpu_global_state *global_state; struct dpu_crtc_state *cstate; - struct drm_encoder *drm_enc; struct msm_display_topology topology; int ret; if (!crtc_state->encoder_mask) return 0; - /* - * For now, grab the first encoder in the crtc state as we don't - * support clone mode yet - */ - drm_for_each_encoder_mask(drm_enc, crtc->dev, crtc_state->encoder_mask) - break; + cstate = to_dpu_crtc_state(crtc_state); + + memset(cstate->mixers, 0, sizeof(cstate->mixers)); /* * Release and Allocate resources on every modeset @@ -1307,29 +1303,29 @@ static int dpu_crtc_assign_resources(struct drm_crtc *crtc, if (IS_ERR(global_state)) return PTR_ERR(global_state); - dpu_rm_release(global_state, drm_enc); + dpu_rm_release(global_state, crtc); if (!crtc_state->enable) return 0; topology = dpu_crtc_get_topology(crtc, dpu_kms, crtc_state); ret = dpu_rm_reserve(&dpu_kms->rm, global_state, - drm_enc, crtc_state, &topology); + crtc_state->crtc, &topology); if (ret) return ret; cstate = to_dpu_crtc_state(crtc_state); num_ctl = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, - drm_enc->base.id, + crtc_state->crtc, DPU_HW_BLK_CTL, hw_ctl, ARRAY_SIZE(hw_ctl)); num_lm = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, - drm_enc->base.id, + crtc_state->crtc, DPU_HW_BLK_LM, hw_lm, ARRAY_SIZE(hw_lm)); num_dspp = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, - drm_enc->base.id, + crtc_state->crtc, DPU_HW_BLK_DSPP, hw_dspp, ARRAY_SIZE(hw_dspp)); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index a4d59cf6b727e38bbaa8063fb541f74d1277d444..bd6fef8eb8f267cde4ebe1155be39ce69e786967 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -1162,17 +1162,17 @@ static void dpu_encoder_virt_atomic_mode_set(struct drm_encoder *drm_enc, /* Query 
resource that have been reserved in atomic check step. */ num_pp = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, - drm_enc->base.id, DPU_HW_BLK_PINGPONG, hw_pp, + drm_enc->crtc, DPU_HW_BLK_PINGPONG, hw_pp, ARRAY_SIZE(hw_pp)); num_ctl = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, - drm_enc->base.id, DPU_HW_BLK_CTL, hw_ctl, ARRAY_SIZE(hw_ctl)); + drm_enc->crtc, DPU_HW_BLK_CTL, hw_ctl, ARRAY_SIZE(hw_ctl)); for (i = 0; i < MAX_CHANNELS_PER_ENC; i++) dpu_enc->hw_pp[i] = i < num_pp ? to_dpu_hw_pingpong(hw_pp[i]) : NULL; num_dsc = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, - drm_enc->base.id, DPU_HW_BLK_DSC, + drm_enc->crtc, DPU_HW_BLK_DSC, hw_dsc, ARRAY_SIZE(hw_dsc)); for (i = 0; i < num_dsc; i++) { dpu_enc->hw_dsc[i] = to_dpu_hw_dsc(hw_dsc[i]); @@ -1186,7 +1186,7 @@ static void dpu_encoder_virt_atomic_mode_set(struct drm_encoder *drm_enc, struct dpu_hw_blk *hw_cdm = NULL; dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, - drm_enc->base.id, DPU_HW_BLK_CDM, + drm_enc->crtc, DPU_HW_BLK_CDM, &hw_cdm, 1); dpu_enc->cur_master->hw_cdm = hw_cdm ? to_dpu_hw_cdm(hw_cdm) : NULL; } @@ -2107,7 +2107,7 @@ static void dpu_encoder_helper_reset_mixers(struct dpu_encoder_phys *phys_enc) global_state = dpu_kms_get_existing_global_state(phys_enc->dpu_kms); num_lm = dpu_rm_get_assigned_resources(&phys_enc->dpu_kms->rm, global_state, - phys_enc->parent->base.id, DPU_HW_BLK_LM, hw_lm, ARRAY_SIZE(hw_lm)); + phys_enc->parent->crtc, DPU_HW_BLK_LM, hw_lm, ARRAY_SIZE(hw_lm)); for (i = 0; i < num_lm; i++) { hw_mixer[i] = to_dpu_hw_mixer(hw_lm[i]); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h index 547cdb2c0c788a031685e397e2c8ef73ca6290d7..54ef6cfa2485a8a3886bd26b7ec3692d037dc35e 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h @@ -124,12 +124,12 @@ struct dpu_global_state { struct dpu_rm *rm; - uint32_t pingpong_to_enc_id[PINGPONG_MAX - PINGPONG_0]; - uint32_t mixer_to_enc_id[LM_MAX - LM_0]; - uint32_t ctl_to_enc_id[CTL_MAX - CTL_0]; - uint32_t dspp_to_enc_id[DSPP_MAX - DSPP_0]; - uint32_t dsc_to_enc_id[DSC_MAX - DSC_0]; - uint32_t cdm_to_enc_id; + uint32_t pingpong_to_crtc_id[PINGPONG_MAX - PINGPONG_0]; + uint32_t mixer_to_crtc_id[LM_MAX - LM_0]; + uint32_t ctl_to_crtc_id[CTL_MAX - CTL_0]; + uint32_t dspp_to_crtc_id[DSPP_MAX - DSPP_0]; + uint32_t dsc_to_crtc_id[DSC_MAX - DSC_0]; + uint32_t cdm_to_crtc_id; uint32_t sspp_to_crtc_id[SSPP_MAX - SSPP_NONE]; }; diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c index 5baf9df702b84b74ba00e703ad3cc12afb0e94a4..a7b4086ae990aaebfd1103acd50ff2a84cff4406 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c @@ -22,9 +22,9 @@ static inline bool reserved_by_other(uint32_t *res_map, int idx, - uint32_t enc_id) + uint32_t crtc_id) { - return res_map[idx] && res_map[idx] != enc_id; + return res_map[idx] && res_map[idx] != crtc_id; } /** @@ -239,7 +239,7 @@ static int _dpu_rm_get_lm_peer(struct dpu_rm *rm, int primary_idx) * pingpong * @rm: dpu resource manager handle * @global_state: resources shared across multiple kms objects - * @enc_id: encoder id requesting for allocation + * @crtc_id: crtc id requesting for allocation * @lm_idx: index of proposed layer mixer in rm->mixer_blks[], function checks * if lm, and all other hardwired blocks connected to the lm (pp) is * available and appropriate @@ -252,14 +252,14 @@ static int _dpu_rm_get_lm_peer(struct 
dpu_rm *rm, int primary_idx) */ static bool _dpu_rm_check_lm_and_get_connected_blks(struct dpu_rm *rm, struct dpu_global_state *global_state, - uint32_t enc_id, int lm_idx, int *pp_idx, int *dspp_idx, + uint32_t crtc_id, int lm_idx, int *pp_idx, int *dspp_idx, struct msm_display_topology *topology) { const struct dpu_lm_cfg *lm_cfg; int idx; /* Already reserved? */ - if (reserved_by_other(global_state->mixer_to_enc_id, lm_idx, enc_id)) { + if (reserved_by_other(global_state->mixer_to_crtc_id, lm_idx, crtc_id)) { DPU_DEBUG("lm %d already reserved\n", lm_idx + LM_0); return false; } @@ -271,7 +271,7 @@ static bool _dpu_rm_check_lm_and_get_connected_blks(struct dpu_rm *rm, return false; } - if (reserved_by_other(global_state->pingpong_to_enc_id, idx, enc_id)) { + if (reserved_by_other(global_state->pingpong_to_crtc_id, idx, crtc_id)) { DPU_DEBUG("lm %d pp %d already reserved\n", lm_cfg->id, lm_cfg->pingpong); return false; @@ -287,7 +287,7 @@ static bool _dpu_rm_check_lm_and_get_connected_blks(struct dpu_rm *rm, return false; } - if (reserved_by_other(global_state->dspp_to_enc_id, idx, enc_id)) { + if (reserved_by_other(global_state->dspp_to_crtc_id, idx, crtc_id)) { DPU_DEBUG("lm %d dspp %d already reserved\n", lm_cfg->id, lm_cfg->dspp); return false; @@ -299,7 +299,7 @@ static bool _dpu_rm_check_lm_and_get_connected_blks(struct dpu_rm *rm, static int _dpu_rm_reserve_lms(struct dpu_rm *rm, struct dpu_global_state *global_state, - uint32_t enc_id, + uint32_t crtc_id, struct msm_display_topology *topology) { @@ -323,7 +323,7 @@ static int _dpu_rm_reserve_lms(struct dpu_rm *rm, lm_idx[lm_count] = i; if (!_dpu_rm_check_lm_and_get_connected_blks(rm, global_state, - enc_id, i, &pp_idx[lm_count], + crtc_id, i, &pp_idx[lm_count], &dspp_idx[lm_count], topology)) { continue; } @@ -342,7 +342,7 @@ static int _dpu_rm_reserve_lms(struct dpu_rm *rm, continue; if (!_dpu_rm_check_lm_and_get_connected_blks(rm, - global_state, enc_id, j, + global_state, crtc_id, j, &pp_idx[lm_count], &dspp_idx[lm_count], topology)) { continue; @@ -359,12 +359,12 @@ static int _dpu_rm_reserve_lms(struct dpu_rm *rm, } for (i = 0; i < lm_count; i++) { - global_state->mixer_to_enc_id[lm_idx[i]] = enc_id; - global_state->pingpong_to_enc_id[pp_idx[i]] = enc_id; - global_state->dspp_to_enc_id[dspp_idx[i]] = - topology->num_dspp ? enc_id : 0; + global_state->mixer_to_crtc_id[lm_idx[i]] = crtc_id; + global_state->pingpong_to_crtc_id[pp_idx[i]] = crtc_id; + global_state->dspp_to_crtc_id[dspp_idx[i]] = + topology->num_dspp ? 
crtc_id : 0; - trace_dpu_rm_reserve_lms(lm_idx[i] + LM_0, enc_id, + trace_dpu_rm_reserve_lms(lm_idx[i] + LM_0, crtc_id, pp_idx[i] + PINGPONG_0); } @@ -374,7 +374,7 @@ static int _dpu_rm_reserve_lms(struct dpu_rm *rm, static int _dpu_rm_reserve_ctls( struct dpu_rm *rm, struct dpu_global_state *global_state, - uint32_t enc_id, + uint32_t crtc_id, const struct msm_display_topology *top) { int ctl_idx[MAX_BLOCKS]; @@ -393,7 +393,7 @@ static int _dpu_rm_reserve_ctls( if (!rm->ctl_blks[j]) continue; - if (reserved_by_other(global_state->ctl_to_enc_id, j, enc_id)) + if (reserved_by_other(global_state->ctl_to_crtc_id, j, crtc_id)) continue; ctl = to_dpu_hw_ctl(rm->ctl_blks[j]); @@ -417,8 +417,8 @@ static int _dpu_rm_reserve_ctls( return -ENAVAIL; for (i = 0; i < ARRAY_SIZE(ctl_idx) && i < num_ctls; i++) { - global_state->ctl_to_enc_id[ctl_idx[i]] = enc_id; - trace_dpu_rm_reserve_ctls(i + CTL_0, enc_id); + global_state->ctl_to_crtc_id[ctl_idx[i]] = crtc_id; + trace_dpu_rm_reserve_ctls(i + CTL_0, crtc_id); } return 0; @@ -426,12 +426,12 @@ static int _dpu_rm_reserve_ctls( static int _dpu_rm_pingpong_next_index(struct dpu_global_state *global_state, int start, - uint32_t enc_id) + uint32_t crtc_id) { int i; for (i = start; i < (PINGPONG_MAX - PINGPONG_0); i++) { - if (global_state->pingpong_to_enc_id[i] == enc_id) + if (global_state->pingpong_to_crtc_id[i] == crtc_id) return i; } @@ -452,7 +452,7 @@ static int _dpu_rm_pingpong_dsc_check(int dsc_idx, int pp_idx) static int _dpu_rm_dsc_alloc(struct dpu_rm *rm, struct dpu_global_state *global_state, - uint32_t enc_id, + uint32_t crtc_id, const struct msm_display_topology *top) { int num_dsc = 0; @@ -465,10 +465,10 @@ static int _dpu_rm_dsc_alloc(struct dpu_rm *rm, if (!rm->dsc_blks[dsc_idx]) continue; - if (reserved_by_other(global_state->dsc_to_enc_id, dsc_idx, enc_id)) + if (reserved_by_other(global_state->dsc_to_crtc_id, dsc_idx, crtc_id)) continue; - pp_idx = _dpu_rm_pingpong_next_index(global_state, pp_idx, enc_id); + pp_idx = _dpu_rm_pingpong_next_index(global_state, pp_idx, crtc_id); if (pp_idx < 0) return -ENAVAIL; @@ -476,7 +476,7 @@ static int _dpu_rm_dsc_alloc(struct dpu_rm *rm, if (ret) return -ENAVAIL; - global_state->dsc_to_enc_id[dsc_idx] = enc_id; + global_state->dsc_to_crtc_id[dsc_idx] = crtc_id; num_dsc++; pp_idx++; } @@ -492,7 +492,7 @@ static int _dpu_rm_dsc_alloc(struct dpu_rm *rm, static int _dpu_rm_dsc_alloc_pair(struct dpu_rm *rm, struct dpu_global_state *global_state, - uint32_t enc_id, + uint32_t crtc_id, const struct msm_display_topology *top) { int num_dsc = 0; @@ -507,11 +507,11 @@ static int _dpu_rm_dsc_alloc_pair(struct dpu_rm *rm, continue; /* consective dsc index to be paired */ - if (reserved_by_other(global_state->dsc_to_enc_id, dsc_idx, enc_id) || - reserved_by_other(global_state->dsc_to_enc_id, dsc_idx + 1, enc_id)) + if (reserved_by_other(global_state->dsc_to_crtc_id, dsc_idx, crtc_id) || + reserved_by_other(global_state->dsc_to_crtc_id, dsc_idx + 1, crtc_id)) continue; - pp_idx = _dpu_rm_pingpong_next_index(global_state, pp_idx, enc_id); + pp_idx = _dpu_rm_pingpong_next_index(global_state, pp_idx, crtc_id); if (pp_idx < 0) return -ENAVAIL; @@ -521,7 +521,7 @@ static int _dpu_rm_dsc_alloc_pair(struct dpu_rm *rm, continue; } - pp_idx = _dpu_rm_pingpong_next_index(global_state, pp_idx + 1, enc_id); + pp_idx = _dpu_rm_pingpong_next_index(global_state, pp_idx + 1, crtc_id); if (pp_idx < 0) return -ENAVAIL; @@ -531,8 +531,8 @@ static int _dpu_rm_dsc_alloc_pair(struct dpu_rm *rm, continue; } - 
global_state->dsc_to_enc_id[dsc_idx] = enc_id; - global_state->dsc_to_enc_id[dsc_idx + 1] = enc_id; + global_state->dsc_to_crtc_id[dsc_idx] = crtc_id; + global_state->dsc_to_crtc_id[dsc_idx + 1] = crtc_id; num_dsc += 2; pp_idx++; /* start for next pair */ } @@ -548,11 +548,9 @@ static int _dpu_rm_dsc_alloc_pair(struct dpu_rm *rm, static int _dpu_rm_reserve_dsc(struct dpu_rm *rm, struct dpu_global_state *global_state, - struct drm_encoder *enc, + uint32_t crtc_id, const struct msm_display_topology *top) { - uint32_t enc_id = enc->base.id; - if (!top->num_dsc || !top->num_intf) return 0; @@ -568,16 +566,16 @@ static int _dpu_rm_reserve_dsc(struct dpu_rm *rm, /* num_dsc should be either 1, 2 or 4 */ if (top->num_dsc > top->num_intf) /* merge mode */ - return _dpu_rm_dsc_alloc_pair(rm, global_state, enc_id, top); + return _dpu_rm_dsc_alloc_pair(rm, global_state, crtc_id, top); else - return _dpu_rm_dsc_alloc(rm, global_state, enc_id, top); + return _dpu_rm_dsc_alloc(rm, global_state, crtc_id, top); return 0; } static int _dpu_rm_reserve_cdm(struct dpu_rm *rm, struct dpu_global_state *global_state, - struct drm_encoder *enc) + uint32_t crtc_id) { /* try allocating only one CDM block */ if (!rm->cdm_blk) { @@ -585,12 +583,12 @@ static int _dpu_rm_reserve_cdm(struct dpu_rm *rm, return -EIO; } - if (global_state->cdm_to_enc_id) { + if (global_state->cdm_to_crtc_id) { DPU_ERROR("CDM_0 is already allocated\n"); return -EIO; } - global_state->cdm_to_enc_id = enc->base.id; + global_state->cdm_to_crtc_id = crtc_id; return 0; } @@ -598,30 +596,31 @@ static int _dpu_rm_reserve_cdm(struct dpu_rm *rm, static int _dpu_rm_make_reservation( struct dpu_rm *rm, struct dpu_global_state *global_state, - struct drm_encoder *enc, + uint32_t crtc_id, struct msm_display_topology *topology) { int ret; - ret = _dpu_rm_reserve_lms(rm, global_state, enc->base.id, topology); + ret = _dpu_rm_reserve_lms(rm, global_state, crtc_id, topology); if (ret) { DPU_ERROR("unable to find appropriate mixers\n"); return ret; } - ret = _dpu_rm_reserve_ctls(rm, global_state, enc->base.id, + + ret = _dpu_rm_reserve_ctls(rm, global_state, crtc_id, topology); if (ret) { DPU_ERROR("unable to find appropriate CTL\n"); return ret; } - ret = _dpu_rm_reserve_dsc(rm, global_state, enc, topology); + ret = _dpu_rm_reserve_dsc(rm, global_state, crtc_id, topology); if (ret) return ret; if (topology->needs_cdm) { - ret = _dpu_rm_reserve_cdm(rm, global_state, enc); + ret = _dpu_rm_reserve_cdm(rm, global_state, crtc_id); if (ret) { DPU_ERROR("unable to find CDM blk\n"); return ret; @@ -632,12 +631,12 @@ static int _dpu_rm_make_reservation( } static void _dpu_rm_clear_mapping(uint32_t *res_mapping, int cnt, - uint32_t enc_id) + uint32_t crtc_id) { int i; for (i = 0; i < cnt; i++) { - if (res_mapping[i] == enc_id) + if (res_mapping[i] == crtc_id) res_mapping[i] = 0; } } @@ -646,23 +645,25 @@ static void _dpu_rm_clear_mapping(uint32_t *res_mapping, int cnt, * dpu_rm_release - Given the encoder for the display chain, release any * HW blocks previously reserved for that use case. 
* @global_state: resources shared across multiple kms objects - * @enc: DRM Encoder handle + * @crtc: DRM CRTC handle * @return: 0 on Success otherwise -ERROR */ void dpu_rm_release(struct dpu_global_state *global_state, - struct drm_encoder *enc) + struct drm_crtc *crtc) { - _dpu_rm_clear_mapping(global_state->pingpong_to_enc_id, - ARRAY_SIZE(global_state->pingpong_to_enc_id), enc->base.id); - _dpu_rm_clear_mapping(global_state->mixer_to_enc_id, - ARRAY_SIZE(global_state->mixer_to_enc_id), enc->base.id); - _dpu_rm_clear_mapping(global_state->ctl_to_enc_id, - ARRAY_SIZE(global_state->ctl_to_enc_id), enc->base.id); - _dpu_rm_clear_mapping(global_state->dsc_to_enc_id, - ARRAY_SIZE(global_state->dsc_to_enc_id), enc->base.id); - _dpu_rm_clear_mapping(global_state->dspp_to_enc_id, - ARRAY_SIZE(global_state->dspp_to_enc_id), enc->base.id); - _dpu_rm_clear_mapping(&global_state->cdm_to_enc_id, 1, enc->base.id); + uint32_t crtc_id = crtc->base.id; + + _dpu_rm_clear_mapping(global_state->pingpong_to_crtc_id, + ARRAY_SIZE(global_state->pingpong_to_crtc_id), crtc_id); + _dpu_rm_clear_mapping(global_state->mixer_to_crtc_id, + ARRAY_SIZE(global_state->mixer_to_crtc_id), crtc_id); + _dpu_rm_clear_mapping(global_state->ctl_to_crtc_id, + ARRAY_SIZE(global_state->ctl_to_crtc_id), crtc_id); + _dpu_rm_clear_mapping(global_state->dsc_to_crtc_id, + ARRAY_SIZE(global_state->dsc_to_crtc_id), crtc_id); + _dpu_rm_clear_mapping(global_state->dspp_to_crtc_id, + ARRAY_SIZE(global_state->dspp_to_crtc_id), crtc_id); + _dpu_rm_clear_mapping(&global_state->cdm_to_crtc_id, 1, crtc_id); } /** @@ -674,42 +675,33 @@ void dpu_rm_release(struct dpu_global_state *global_state, * HW Reservations should be released via dpu_rm_release_hw. * @rm: DPU Resource Manager handle * @global_state: resources shared across multiple kms objects - * @enc: DRM Encoder handle - * @crtc_state: Proposed Atomic DRM CRTC State handle + * @crtc: DRM CRTC handle * @topology: Pointer to topology info for the display * @return: 0 on Success otherwise -ERROR */ int dpu_rm_reserve( struct dpu_rm *rm, struct dpu_global_state *global_state, - struct drm_encoder *enc, - struct drm_crtc_state *crtc_state, + struct drm_crtc *crtc, struct msm_display_topology *topology) { int ret; - /* Check if this is just a page-flip */ - if (!drm_atomic_crtc_needs_modeset(crtc_state)) - return 0; - if (IS_ERR(global_state)) { DPU_ERROR("failed to global state\n"); return PTR_ERR(global_state); } - DRM_DEBUG_KMS("reserving hw for enc %d crtc %d\n", - enc->base.id, crtc_state->crtc->base.id); + DRM_DEBUG_KMS("reserving hw for crtc %d\n", crtc->base.id); DRM_DEBUG_KMS("num_lm: %d num_dsc: %d num_intf: %d\n", topology->num_lm, topology->num_dsc, topology->num_intf); - ret = _dpu_rm_make_reservation(rm, global_state, enc, topology); + ret = _dpu_rm_make_reservation(rm, global_state, crtc->base.id, topology); if (ret) DPU_ERROR("failed to reserve hw resources: %d\n", ret); - - return ret; } @@ -800,48 +792,49 @@ void dpu_rm_release_all_sspp(struct dpu_global_state *global_state, * assigned to this encoder * @rm: DPU Resource Manager handle * @global_state: resources shared across multiple kms objects - * @enc_id: encoder id requesting for allocation + * @crtc: DRM CRTC handle * @type: resource type to return data for * @blks: pointer to the array to be filled by HW resources * @blks_size: size of the @blks array */ int dpu_rm_get_assigned_resources(struct dpu_rm *rm, - struct dpu_global_state *global_state, uint32_t enc_id, + struct dpu_global_state *global_state, struct drm_crtc 
*crtc, enum dpu_hw_blk_type type, struct dpu_hw_blk **blks, int blks_size) { + uint32_t crtc_id = crtc->base.id; struct dpu_hw_blk **hw_blks; - uint32_t *hw_to_enc_id; + uint32_t *hw_to_crtc_id; int i, num_blks, max_blks; switch (type) { case DPU_HW_BLK_PINGPONG: hw_blks = rm->pingpong_blks; - hw_to_enc_id = global_state->pingpong_to_enc_id; + hw_to_crtc_id = global_state->pingpong_to_crtc_id; max_blks = ARRAY_SIZE(rm->pingpong_blks); break; case DPU_HW_BLK_LM: hw_blks = rm->mixer_blks; - hw_to_enc_id = global_state->mixer_to_enc_id; + hw_to_crtc_id = global_state->mixer_to_crtc_id; max_blks = ARRAY_SIZE(rm->mixer_blks); break; case DPU_HW_BLK_CTL: hw_blks = rm->ctl_blks; - hw_to_enc_id = global_state->ctl_to_enc_id; + hw_to_crtc_id = global_state->ctl_to_crtc_id; max_blks = ARRAY_SIZE(rm->ctl_blks); break; case DPU_HW_BLK_DSPP: hw_blks = rm->dspp_blks; - hw_to_enc_id = global_state->dspp_to_enc_id; + hw_to_crtc_id = global_state->dspp_to_crtc_id; max_blks = ARRAY_SIZE(rm->dspp_blks); break; case DPU_HW_BLK_DSC: hw_blks = rm->dsc_blks; - hw_to_enc_id = global_state->dsc_to_enc_id; + hw_to_crtc_id = global_state->dsc_to_crtc_id; max_blks = ARRAY_SIZE(rm->dsc_blks); break; case DPU_HW_BLK_CDM: hw_blks = &rm->cdm_blk; - hw_to_enc_id = &global_state->cdm_to_enc_id; + hw_to_crtc_id = &global_state->cdm_to_crtc_id; max_blks = 1; break; default: @@ -851,17 +844,17 @@ int dpu_rm_get_assigned_resources(struct dpu_rm *rm, num_blks = 0; for (i = 0; i < max_blks; i++) { - if (hw_to_enc_id[i] != enc_id) + if (hw_to_crtc_id[i] != crtc_id) continue; if (num_blks == blks_size) { - DPU_ERROR("More than %d resources assigned to enc %d\n", - blks_size, enc_id); + DPU_ERROR("More than %d resources assigned to crtc %d\n", + blks_size, crtc_id); break; } if (!hw_blks[i]) { - DPU_ERROR("Allocated resource %d unavailable to assign to enc %d\n", - type, enc_id); + DPU_ERROR("Allocated resource %d unavailable to assign to crtc %d\n", + type, crtc_id); break; } blks[num_blks++] = hw_blks[i]; @@ -896,38 +889,38 @@ void dpu_rm_print_state(struct drm_printer *p, drm_puts(p, "resource mapping:\n"); drm_puts(p, "\tpingpong="); - for (i = 0; i < ARRAY_SIZE(global_state->pingpong_to_enc_id); i++) + for (i = 0; i < ARRAY_SIZE(global_state->pingpong_to_crtc_id); i++) dpu_rm_print_state_helper(p, rm->pingpong_blks[i], - global_state->pingpong_to_enc_id[i]); + global_state->pingpong_to_crtc_id[i]); drm_puts(p, "\n"); drm_puts(p, "\tmixer="); - for (i = 0; i < ARRAY_SIZE(global_state->mixer_to_enc_id); i++) + for (i = 0; i < ARRAY_SIZE(global_state->mixer_to_crtc_id); i++) dpu_rm_print_state_helper(p, rm->mixer_blks[i], - global_state->mixer_to_enc_id[i]); + global_state->mixer_to_crtc_id[i]); drm_puts(p, "\n"); drm_puts(p, "\tctl="); - for (i = 0; i < ARRAY_SIZE(global_state->ctl_to_enc_id); i++) + for (i = 0; i < ARRAY_SIZE(global_state->ctl_to_crtc_id); i++) dpu_rm_print_state_helper(p, rm->ctl_blks[i], - global_state->ctl_to_enc_id[i]); + global_state->ctl_to_crtc_id[i]); drm_puts(p, "\n"); drm_puts(p, "\tdspp="); - for (i = 0; i < ARRAY_SIZE(global_state->dspp_to_enc_id); i++) + for (i = 0; i < ARRAY_SIZE(global_state->dspp_to_crtc_id); i++) dpu_rm_print_state_helper(p, rm->dspp_blks[i], - global_state->dspp_to_enc_id[i]); + global_state->dspp_to_crtc_id[i]); drm_puts(p, "\n"); drm_puts(p, "\tdsc="); - for (i = 0; i < ARRAY_SIZE(global_state->dsc_to_enc_id); i++) + for (i = 0; i < ARRAY_SIZE(global_state->dsc_to_crtc_id); i++) dpu_rm_print_state_helper(p, rm->dsc_blks[i], - global_state->dsc_to_enc_id[i]); + 
global_state->dsc_to_crtc_id[i]); drm_puts(p, "\n"); drm_puts(p, "\tcdm="); dpu_rm_print_state_helper(p, rm->cdm_blk, - global_state->cdm_to_enc_id); + global_state->cdm_to_crtc_id); drm_puts(p, "\n"); drm_puts(p, "\tsspp="); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h index 99bd594ee0d1995eca5a1f661b15e24fdf6acf39..463c532cdfdff1d8b7fb8760c40c2c572ce3d674 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h @@ -69,12 +69,11 @@ int dpu_rm_init(struct drm_device *dev, int dpu_rm_reserve(struct dpu_rm *rm, struct dpu_global_state *global_state, - struct drm_encoder *drm_enc, - struct drm_crtc_state *crtc_state, + struct drm_crtc *crtc, struct msm_display_topology *topology); void dpu_rm_release(struct dpu_global_state *global_state, - struct drm_encoder *enc); + struct drm_crtc *crtc); struct dpu_hw_sspp *dpu_rm_reserve_sspp(struct dpu_rm *rm, struct dpu_global_state *global_state, @@ -85,7 +84,7 @@ void dpu_rm_release_all_sspp(struct dpu_global_state *global_state, struct drm_crtc *crtc); int dpu_rm_get_assigned_resources(struct dpu_rm *rm, - struct dpu_global_state *global_state, uint32_t enc_id, + struct dpu_global_state *global_state, struct drm_crtc *crtc, enum dpu_hw_blk_type type, struct dpu_hw_blk **blks, int blks_size); void dpu_rm_print_state(struct drm_printer *p, From patchwork Wed Jan 29 03:20:36 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jessica Zhang X-Patchwork-Id: 13953433 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B9FE7194C92; Wed, 29 Jan 2025 03:21:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=205.220.180.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738120893; cv=none; b=EXY+5hY+JFfu/k3Ej7Do2lNtpwmdNn4CbofzeIfAOgfvm2W+nh71DEOxfEbrhhNJrUaB5jiaogo9Q/XW89vqUy9auyV+voel4DcmSKPBt7AG5+Lqxu0eSimfoRPOq+PuqJPdKFcUvcg97mMyzv00ys8JgcL2LyFhu57RUnDA/N4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738120893; c=relaxed/simple; bh=Rb8gnPR5V1E8D4gA2Znczy6RJFgd6o3zJ23bHpCSW8s=; h=From:Date:Subject:MIME-Version:Content-Type:Message-ID:References: In-Reply-To:To:CC; b=n52sf4MrlJvEYjrGTg+TmgR0JdjQ7MdxQlXu5ECvr+MDsm0vu+IIi9TEFAcnQ1KKfsbYHT40CmDKBD8zotXhtyjrszA1UP4E3pnvRrLVde2ldBCzRdMXS1kQ4qm4MHgtI+3Su0ehOhKpfnEJW3AHbfWiICTgksMCnNmBheN7FEQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com; spf=pass smtp.mailfrom=quicinc.com; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b=J0t6zyM6; arc=none smtp.client-ip=205.220.180.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="J0t6zyM6" Received: from pps.filterd (m0279870.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 50T2hl08024678; Wed, 29 Jan 2025 03:21:20 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= 
cc:content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to; s=qcppdkim1; bh= ujyWe5p8QurZUnc6a7HNvOAEUv0f1Vbu+TXI2utMM/Y=; b=J0t6zyM6oV9EbS1o etKbxzLR+Ae12BvO+LZpj6f/tzgTp5aRdRFihChwzy6JdPPHnabjc7fRsk2b8ciz Ro5kfy/yDPy1eKoiVTBQfGf9NeU+eThNNB8L1YJkicyZgDsY4ZXxgcnufUoSheWT uanlsbj9y2VDVIYHOEZ/KVQQzrCWRKp2UKQveL/sK9Yk+6yJliLAYjEyIfC11bVT xgcS06QDN4eXzcZvrM1pf9R5i+SERCwE4ILAccD3GAFRXmZjlP0BpCbNOxnRemL/ Kq6bzGc+oddlx58AaN3IYwZ9JEzkLT9uqcaR8CFnPSVJ7iPrxTyXsY9sLlIpmoR6 OBtrAA== Received: from nasanppmta04.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 44eygu9rc5-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 29 Jan 2025 03:21:20 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA04.qualcomm.com (8.18.1.2/8.18.1.2) with ESMTPS id 50T3L6mn012897 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 29 Jan 2025 03:21:06 GMT Received: from jesszhan-linux.qualcomm.com (10.80.80.8) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.9; Tue, 28 Jan 2025 19:21:06 -0800 From: Jessica Zhang Date: Tue, 28 Jan 2025 19:20:36 -0800 Subject: [PATCH v5 04/14] drm/msm/dpu: Add CWB to msm_display_topology Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20250128-concurrent-wb-v5-4-6464ca5360df@quicinc.com> References: <20250128-concurrent-wb-v5-0-6464ca5360df@quicinc.com> In-Reply-To: <20250128-concurrent-wb-v5-0-6464ca5360df@quicinc.com> To: Rob Clark , Dmitry Baryshkov , , Sean Paul , Marijn Suijten , "David Airlie" , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Simona Vetter , Simona Vetter CC: , , , , , Rob Clark , =?utf-8?b?VmlsbGUgU3lyasOkbMOk?= , "Jessica Zhang" X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=ed25519-sha256; t=1738120865; l=4227; i=quic_jesszhan@quicinc.com; s=20230329; h=from:subject:message-id; bh=Rb8gnPR5V1E8D4gA2Znczy6RJFgd6o3zJ23bHpCSW8s=; b=ISfVxqcFhsHovIYMXw0tnMAYZfajPX5rEz3Rrq3ZGHBQIdx3TVJfyJ3TJfLkxltyOd46fjhlW FhAtx6NKdG1DauUebgq4A3ANQIhDgQqVHhZCwoaiNV+b11OI2aG+UnN X-Developer-Key: i=quic_jesszhan@quicinc.com; a=ed25519; pk=gAUCgHZ6wTJOzQa3U0GfeCDH7iZLlqIEPo4rrjfDpWE= X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: 1vt_t2jMuneWlSBjdXIQq0Bm2yvOvfL_ X-Proofpoint-ORIG-GUID: 1vt_t2jMuneWlSBjdXIQq0Bm2yvOvfL_ X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1057,Hydra:6.0.680,FMLib:17.12.68.34 definitions=2025-01-28_04,2025-01-27_01,2024-11-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0 bulkscore=0 spamscore=0 lowpriorityscore=0 mlxscore=0 malwarescore=0 phishscore=0 mlxlogscore=999 impostorscore=0 clxscore=1015 adultscore=0 priorityscore=1501 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2411120000 definitions=main-2501290026 Currently, the topology is calculated based on the assumption that the user cannot request real-time and writeback simultaneously. 
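As a rough standalone sketch of the counting problem (the types and helpers below are made-up stand-ins, not the driver's real structures), concurrent writeback puts a real-time phys encoder and a writeback phys encoder on the same CRTC, so "one CTL/LM per phys encoder" asks for twice what the shared datapath actually uses:

/* Hypothetical userspace model; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

struct fake_state {
	int num_phys_encs;	/* real-time + writeback phys encoders */
	bool cwb;		/* both encoders share one datapath    */
};

static int naive_num_ctls(const struct fake_state *s)
{
	return s->num_phys_encs;	/* the current assumption */
}

static int cwb_aware_num_ctls(const struct fake_state *s)
{
	return s->cwb ? 1 : s->num_phys_encs;	/* what CWB actually needs */
}

int main(void)
{
	struct fake_state cwb = { .num_phys_encs = 2, .cwb = true };

	printf("naive: %d, cwb-aware: %d\n",
	       naive_num_ctls(&cwb), cwb_aware_num_ctls(&cwb));	/* 2 vs 1 */
	return 0;
}

The next paragraph spells out the same problem in terms of the current topology code.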
For example, the number of LMs and CTLs is currently based on the number of phys encoders, under the assumption that there will be at least 1 LM/CTL per phys encoder. This will not hold true for concurrent writeback, as both phys encoders (1 real-time and 1 writeback) must be driven by a single LM/CTL when concurrent writeback is enabled. To account for this, add a cwb_enabled flag and adjust the number of CTLs/LMs needed by a given topology based on the number of phys encoders only when CWB is not enabled. Reviewed-by: Abhinav Kumar Reviewed-by: Dmitry Baryshkov Signed-off-by: Jessica Zhang --- Changes in v5: - Reworded commit message to be more specific --- drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c | 11 ++++++++++- drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c | 14 ++++++++++++-- drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h | 2 ++ 3 files changed, 24 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c index 8144de25ac7316e9ab9ab0aa45c0d01a7267c24c..3449a7066e084eaeeb713d21d53f3d8e877cc30e 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c @@ -1246,6 +1246,8 @@ static struct msm_display_topology dpu_crtc_get_topology( dpu_encoder_update_topology(drm_enc, &topology, crtc_state->state, &crtc_state->adjusted_mode); + topology.cwb_enabled = drm_crtc_in_clone_mode(crtc_state); + /* * Datapath topology selection * @@ -1259,9 +1261,16 @@ static struct msm_display_topology dpu_crtc_get_topology( * If DSC is enabled, use 2 LMs for 2:2:1 topology * * Add dspps to the reservation requirements if ctm is requested + * + * Only hardcode num_lm to 2 for cases where num_intf == 2 and CWB is not + * enabled. This is because in cases where CWB is enabled, num_intf will + * count both the WB and real-time phys encoders. + * + * For non-DSC CWB usecases, have the num_lm be decided by the + * (mode->hdisplay > MAX_HDISPLAY_SPLIT) check. */ - if (topology.num_intf == 2) + if (topology.num_intf == 2 && !topology.cwb_enabled) topology.num_lm = 2; else if (topology.num_dsc == 2) topology.num_lm = 2; diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c index a7b4086ae990aaebfd1103acd50ff2a84cff4406..0fbb92021b184c4792ddfe059e98b3acf7bc7cc6 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c @@ -381,8 +381,18 @@ static int _dpu_rm_reserve_ctls( int i = 0, j, num_ctls; bool needs_split_display; - /* each hw_intf needs its own hw_ctrl to program its control path */ - num_ctls = top->num_intf; + /* + * For non-CWB mode, each hw_intf needs its own hw_ctl to program its
+ * + * Hardcode num_ctls to 1 if CWB is enabled because in CWB, both the + * writeback and real-time encoders must be driven by the same control + * path + */ + if (top->cwb_enabled) + num_ctls = 1; + else + num_ctls = top->num_intf; needs_split_display = _dpu_rm_needs_split_display(top); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h index 463c532cdfdff1d8b7fb8760c40c2c572ce3d674..b854e42d319d2c482e4e1732239754979f6950dd 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h @@ -52,6 +52,7 @@ struct dpu_rm_sspp_requirements { * @num_dspp: number of dspp blocks used * @num_dsc: number of Display Stream Compression (DSC) blocks used * @needs_cdm: indicates whether cdm block is needed for this display topology + * @cwb_enabled: indicates whether CWB is enabled for this display topology */ struct msm_display_topology { u32 num_lm; @@ -59,6 +60,7 @@ struct msm_display_topology { u32 num_dspp; u32 num_dsc; bool needs_cdm; + bool cwb_enabled; }; int dpu_rm_init(struct drm_device *dev, From patchwork Wed Jan 29 03:20:37 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jessica Zhang X-Patchwork-Id: 13953431 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 762CD193429; Wed, 29 Jan 2025 03:21:29 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=205.220.180.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738120891; cv=none; b=RLs+z7PoRklCIifrh3lQGPNVhJ76IWGeHAW9STVvV64jpWRx5/8DLo+OFi8j3KsaXdbsxFf9b6IH31LGw8wvd9p6MtwF1FAojDoBHWsvPqxjDc5FVtshj8muMv0L2cy91n12INkNveILZOsirp60VgqrPooJxw1XdJHvqOV3K2U= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738120891; c=relaxed/simple; bh=ZBBkvgZCWJ1A02/soBMbIj9fDedeAgiGfzS0tqzTmJA=; h=From:Date:Subject:MIME-Version:Content-Type:Message-ID:References: In-Reply-To:To:CC; b=EomzpfEbt7rUgH4aj+SpNRD9a+C3MrhkXLfkgmVkY41INVJ0AaObs6puZyvTue8d8qJ6tMuVOWtH6JFBlWnr4sTnhwSMIaXxdo4eHibv+o3qrcUacYX4ANNMwMCB5BsDTOYlR7U7d1r5Np++BsIt0BbiMX5HV4CJgASndWmEupI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com; spf=pass smtp.mailfrom=quicinc.com; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b=UmLoqLiu; arc=none smtp.client-ip=205.220.180.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="UmLoqLiu" Received: from pps.filterd (m0279872.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 50T2hB1v028472; Wed, 29 Jan 2025 03:21:18 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= cc:content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to; s=qcppdkim1; bh= xYO2iA09aVAv2klwSBy3BTWslYMJoan0NhNKvvfdCqM=; b=UmLoqLiuPn8r92hU PbSFormt6xRsdyrfIxTB+nALDmok7QJmdoDaT/UUMaXKX2/uZoabOMNMnt2KbXa5 bfE7UVQ7UJDbjhkfm9I5yyFb2ZWeWtI6vj6/MiK1ku9mVzRUb5hyyW/ym69D1zQR 
WJgR7h9tZo+WYuHw1GOkg7j0/PSQkoDXb1VBnykAwTB5hi+JsH65MKLMtztHP/TD f15qbNanlgahvfyPO92OLaOcAmF9LH3+0MEfAn0ustZFVEkzOebfesTl5sEbKoDi sjceXgpZ6vbZeWAZOq80hzxFXyCO6PTCCOkcEbnabMTfIqUhjrU/yyQhi7MU8M6Z Y6Ietw== Received: from nasanppmta04.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 44f5mrgqbt-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 29 Jan 2025 03:21:18 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA04.qualcomm.com (8.18.1.2/8.18.1.2) with ESMTPS id 50T3L7Nx012901 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 29 Jan 2025 03:21:07 GMT Received: from jesszhan-linux.qualcomm.com (10.80.80.8) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.9; Tue, 28 Jan 2025 19:21:06 -0800 From: Jessica Zhang Date: Tue, 28 Jan 2025 19:20:37 -0800 Subject: [PATCH v5 05/14] drm/msm/dpu: Require modeset if clone mode status changes Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20250128-concurrent-wb-v5-5-6464ca5360df@quicinc.com> References: <20250128-concurrent-wb-v5-0-6464ca5360df@quicinc.com> In-Reply-To: <20250128-concurrent-wb-v5-0-6464ca5360df@quicinc.com> To: Rob Clark , Dmitry Baryshkov , , Sean Paul , Marijn Suijten , "David Airlie" , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Simona Vetter , Simona Vetter CC: , , , , , Rob Clark , =?utf-8?b?VmlsbGUgU3lyasOkbMOk?= , "Jessica Zhang" X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=ed25519-sha256; t=1738120865; l=3689; i=quic_jesszhan@quicinc.com; s=20230329; h=from:subject:message-id; bh=ZBBkvgZCWJ1A02/soBMbIj9fDedeAgiGfzS0tqzTmJA=; b=W+LRlzwH8+me0ZOG9csTedX1sV7KqjlLTr41Buxq5Udnows0z5sSbNZWKrUnSK8pew+lBZUzo rK2tf/Z9DyUBfZ/sn2X74TKwHisZNGDBREOkrADN1QLy84CpSOkMb30 X-Developer-Key: i=quic_jesszhan@quicinc.com; a=ed25519; pk=gAUCgHZ6wTJOzQa3U0GfeCDH7iZLlqIEPo4rrjfDpWE= X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: G3H1xNfvowc1zNy0ighgFH0EY8zw6S8e X-Proofpoint-ORIG-GUID: G3H1xNfvowc1zNy0ighgFH0EY8zw6S8e X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1057,Hydra:6.0.680,FMLib:17.12.68.34 definitions=2025-01-28_04,2025-01-27_01,2024-11-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 spamscore=0 lowpriorityscore=0 malwarescore=0 impostorscore=0 clxscore=1015 adultscore=0 phishscore=0 bulkscore=0 mlxscore=0 mlxlogscore=999 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2411120000 definitions=main-2501290026 If the clone mode enabled status is changing, a modeset needs to happen so that the resources can be reassigned Reviewed-by: Abhinav Kumar Reviewed-by: Dmitry Baryshkov Signed-off-by: Jessica Zhang --- drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c | 17 ++++++++++++----- drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.h | 3 ++- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 5 +++-- 3 files changed, 17 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c index 
3449a7066e084eaeeb713d21d53f3d8e877cc30e..cfb19be0547788aa9a3b78fd0abb0513b8c9bb47 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c @@ -1358,19 +1358,26 @@ static int dpu_crtc_assign_resources(struct drm_crtc *crtc, * * Check if the changes in the object properties demand full mode set. */ -int dpu_crtc_check_mode_changed(struct drm_crtc_state *crtc_state) +int dpu_crtc_check_mode_changed(struct drm_crtc_state *old_crtc_state, + struct drm_crtc_state *new_crtc_state) { struct drm_encoder *drm_enc; - struct drm_crtc *crtc = crtc_state->crtc; + struct drm_crtc *crtc = new_crtc_state->crtc; + bool clone_mode_enabled = drm_crtc_in_clone_mode(old_crtc_state); + bool clone_mode_requested = drm_crtc_in_clone_mode(new_crtc_state); DRM_DEBUG_ATOMIC("%d\n", crtc->base.id); /* there might be cases where encoder needs a modeset too */ - drm_for_each_encoder_mask(drm_enc, crtc->dev, crtc_state->encoder_mask) { - if (dpu_encoder_needs_modeset(drm_enc, crtc_state->state)) - crtc_state->mode_changed = true; + drm_for_each_encoder_mask(drm_enc, crtc->dev, new_crtc_state->encoder_mask) { + if (dpu_encoder_needs_modeset(drm_enc, new_crtc_state->state)) + new_crtc_state->mode_changed = true; } + if ((clone_mode_requested && !clone_mode_enabled) || + (!clone_mode_requested && clone_mode_enabled)) + new_crtc_state->mode_changed = true; + return 0; } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.h index 51a3b5fc879a1c83836601d717757dd1e801f884..94392b9b924546f96e738ae20920cf9afd568e6b 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.h @@ -239,7 +239,8 @@ static inline int dpu_crtc_frame_pending(struct drm_crtc *crtc) return crtc ? 
atomic_read(&to_dpu_crtc(crtc)->frame_pending) : -EINVAL; } -int dpu_crtc_check_mode_changed(struct drm_crtc_state *crtc_state); +int dpu_crtc_check_mode_changed(struct drm_crtc_state *old_crtc_state, + struct drm_crtc_state *new_crtc_state); int dpu_crtc_vblank(struct drm_crtc *crtc, bool en); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c index c6b3b2e147b4c61ec93db4a9f01d5a288d2b9eb2..423af6f8251cb9c7c07f5a9878c8abc717d947a4 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -449,11 +449,12 @@ static void dpu_kms_disable_commit(struct msm_kms *kms) static int dpu_kms_check_mode_changed(struct msm_kms *kms, struct drm_atomic_state *state) { struct drm_crtc_state *new_crtc_state; + struct drm_crtc_state *old_crtc_state; struct drm_crtc *crtc; int i; - for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) - dpu_crtc_check_mode_changed(new_crtc_state); + for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) + dpu_crtc_check_mode_changed(old_crtc_state, new_crtc_state); return 0; } From patchwork Wed Jan 29 03:20:38 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jessica Zhang X-Patchwork-Id: 13953430 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7626E19340B; Wed, 29 Jan 2025 03:21:29 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=205.220.180.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738120891; cv=none; b=FFloAL/PckPZmzxu3VdYC/wIVbiwskEIXzZYYrWW/WySCalMZeb9F9WSfAw/sATySsbk5s8o4oMRq1CrH/QUH1yAOgfihMnzqgForqge4vRS6i8e6xIVqYS7XrNCoth1yUIv5kGqtvVpje8eAdAakFD9p/QzG/9CKrXyS9goCR8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738120891; c=relaxed/simple; bh=3XkwsP9c8T5+zw5yTgWjcZcAktaxrT/cKBRDJ+Z+8dk=; h=From:Date:Subject:MIME-Version:Content-Type:Message-ID:References: In-Reply-To:To:CC; b=T5ueN8NwWm4kPAsX4wuHrtrpi60ZLbaLufATNnYOt61jaVkbvYBtDBu0MkXOYckLjYc3mqrk6tPrXAew9vzMkpLbPK8N4c8XsTlYEtAaVDRCO1YThU2NybCMOuBKyBWRMvPKLTdT3v9vrZo7R4AFfyr1x13GU5d3b/U8kF22ztw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com; spf=pass smtp.mailfrom=quicinc.com; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b=GDHD033N; arc=none smtp.client-ip=205.220.180.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="GDHD033N" Received: from pps.filterd (m0279872.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 50T2gZLC027400; Wed, 29 Jan 2025 03:21:18 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= cc:content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to; s=qcppdkim1; bh= h7ZGdEB8IPqoRZo8zbuflNeMGgTjoGvYlvvy9b79qEQ=; b=GDHD033NULTE69uf T+KBSS/VO/61NmeAmMffgB0Y02f/zdX3cTbg20DmLiNbS3m1vuRaeRKJOBCRx4Xb 
aBJTBVFtcPAQMjCI5t4hATHUKjVr+eMV7CUIB6UoKl37u/1kG1ICHr0T7LIOqE4/ MJzqFXqkilSeWpa8nsRF0uzTiPddIehhdBOho57i/ChMZsKQrZqVtH3MJC6WNP14 0OPz0kskOqCgVQFOPAQ7wtb4JfoKDytxkoa9GG0gIYPLEzNpL7x4WCVMzOmQYKRP RYtUEeQNjrkZJEvPq5PVdwoq7xQjTya7g510aJJz/IUpovZM+H9JFK7Z6PhAivll D7jp0w== Received: from nasanppmta02.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 44f5mrgqbu-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 29 Jan 2025 03:21:18 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA02.qualcomm.com (8.18.1.2/8.18.1.2) with ESMTPS id 50T3L7GR003998 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 29 Jan 2025 03:21:07 GMT Received: from jesszhan-linux.qualcomm.com (10.80.80.8) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.9; Tue, 28 Jan 2025 19:21:06 -0800 From: Jessica Zhang Date: Tue, 28 Jan 2025 19:20:38 -0800 Subject: [PATCH v5 06/14] drm/msm/dpu: Fail atomic_check if multiple outputs request CDM block Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20250128-concurrent-wb-v5-6-6464ca5360df@quicinc.com> References: <20250128-concurrent-wb-v5-0-6464ca5360df@quicinc.com> In-Reply-To: <20250128-concurrent-wb-v5-0-6464ca5360df@quicinc.com> To: Rob Clark , Dmitry Baryshkov , , Sean Paul , Marijn Suijten , "David Airlie" , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Simona Vetter , Simona Vetter CC: , , , , , Rob Clark , =?utf-8?b?VmlsbGUgU3lyasOkbMOk?= , "Jessica Zhang" X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=ed25519-sha256; t=1738120865; l=3936; i=quic_jesszhan@quicinc.com; s=20230329; h=from:subject:message-id; bh=3XkwsP9c8T5+zw5yTgWjcZcAktaxrT/cKBRDJ+Z+8dk=; b=D24J52MyvhX5zLxv+eF+SO3+BwInyC2p+ZYswMwHtUAPBCvw1Lv/oVSLsl6vJrt6ZFxcJTtiI 3j2el1WjbIdAhEszvvf+7/jSYgp8R6W7Re7um/9QLP1gwFeCYIRdzLC X-Developer-Key: i=quic_jesszhan@quicinc.com; a=ed25519; pk=gAUCgHZ6wTJOzQa3U0GfeCDH7iZLlqIEPo4rrjfDpWE= X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: OZA8VSSRgsueftEjbiS-dJWvIgWu6EDA X-Proofpoint-ORIG-GUID: OZA8VSSRgsueftEjbiS-dJWvIgWu6EDA X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1057,Hydra:6.0.680,FMLib:17.12.68.34 definitions=2025-01-28_04,2025-01-27_01,2024-11-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 spamscore=0 lowpriorityscore=0 malwarescore=0 impostorscore=0 clxscore=1015 adultscore=0 phishscore=0 bulkscore=0 mlxscore=0 mlxlogscore=999 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2411120000 definitions=main-2501290026 Currently, our hardware supports at most a single output using the CDM block. Because of this, we cannot support cases where both writeback and DP output request CDM simultaneously. To avoid this happening when CWB is enabled, change msm_display_topology.needs_cdm into a cdm_requested counter to track how many outputs are requesting the CDM block. Return -EINVAL if multiple outputs are trying to reserve CDM.
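As a minimal standalone sketch of that rule (hypothetical names, userspace model only; the real change lands in dpu_encoder_update_topology() and _dpu_rm_reserve_cdm() in the diff below), the topology carries a count of CDM requesters rather than a bool, and reservation fails as soon as that count exceeds one:

/* Hypothetical userspace model of the CDM contention rule; not kernel code. */
#include <errno.h>
#include <stdio.h>

struct fake_topology {
	int cdm_requested;	/* outputs (e.g. YUV writeback, YUV420 DP) wanting CDM */
};

static int fake_reserve_cdm(const struct fake_topology *top)
{
	/* Only one CDM block exists, so a second requester must already
	 * fail at atomic_check time. */
	if (top->cdm_requested > 1)
		return -EINVAL;

	return 0;	/* hand out the single CDM block */
}

int main(void)
{
	struct fake_topology wb_only = { .cdm_requested = 1 };
	struct fake_topology wb_plus_dp = { .cdm_requested = 2 };

	printf("one requester:  %d\n", fake_reserve_cdm(&wb_only));	/* 0 */
	printf("two requesters: %d\n", fake_reserve_cdm(&wb_plus_dp));	/* -22 */
	return 0;
}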
Signed-off-by: Jessica Zhang --- Changes in v5: - Changed check to fail only if multiple outputs are requesting CDM simultaneously --- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 4 ++-- drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c | 12 +++++++++--- drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h | 5 +++-- 3 files changed, 14 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index bd6fef8eb8f267cde4ebe1155be39ce69e786967..99db194f5d095e83ac72f2830814e649a25918ef 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -692,10 +692,10 @@ void dpu_encoder_update_topology(struct drm_encoder *drm_enc, fb = conn_state->writeback_job->fb; if (fb && MSM_FORMAT_IS_YUV(msm_framebuffer_format(fb))) - topology->needs_cdm = true; + topology->cdm_requested++; } else if (disp_info->intf_type == INTF_DP) { if (msm_dp_is_yuv_420_enabled(priv->dp[disp_info->h_tile_instance[0]], adj_mode)) - topology->needs_cdm = true; + topology->cdm_requested++; } } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c index 0fbb92021b184c4792ddfe059e98b3acf7bc7cc6..dca3107d1e8265a864ac62d6b845d6cb966965ed 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c @@ -585,7 +585,8 @@ static int _dpu_rm_reserve_dsc(struct dpu_rm *rm, static int _dpu_rm_reserve_cdm(struct dpu_rm *rm, struct dpu_global_state *global_state, - uint32_t crtc_id) + uint32_t crtc_id, + const struct msm_display_topology *top) { /* try allocating only one CDM block */ if (!rm->cdm_blk) { @@ -593,6 +594,11 @@ static int _dpu_rm_reserve_cdm(struct dpu_rm *rm, return -EIO; } + if (top->cdm_requested > 1) { + DPU_ERROR("More than 1 INTF requesting CDM\n"); + return -EINVAL; + } + if (global_state->cdm_to_crtc_id) { DPU_ERROR("CDM_0 is already allocated\n"); return -EIO; @@ -629,8 +635,8 @@ static int _dpu_rm_make_reservation( if (ret) return ret; - if (topology->needs_cdm) { - ret = _dpu_rm_reserve_cdm(rm, global_state, crtc_id); + if (topology->cdm_requested > 0) { + ret = _dpu_rm_reserve_cdm(rm, global_state, crtc_id, topology); if (ret) { DPU_ERROR("unable to find CDM blk\n"); return ret; diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h index b854e42d319d2c482e4e1732239754979f6950dd..7716c226cd7a270a1bac052dc085f5ed84cb9c26 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.h @@ -51,7 +51,8 @@ struct dpu_rm_sspp_requirements { * @num_intf: number of interfaces the panel is mounted on * @num_dspp: number of dspp blocks used * @num_dsc: number of Display Stream Compression (DSC) blocks used - * @needs_cdm: indicates whether cdm block is needed for this display topology + * @cdm_requested: indicates how many outputs are requesting cdm block for + * this display topology * @cwb_enabled: indicates whether CWB is enabled for this display topology */ struct msm_display_topology { @@ -59,7 +60,7 @@ struct msm_display_topology { u32 num_intf; u32 num_dspp; u32 num_dsc; - bool needs_cdm; + int cdm_requested; bool cwb_enabled; }; From patchwork Wed Jan 29 03:20:39 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jessica Zhang X-Patchwork-Id: 13953436 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) 
(No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CDAD319D881; Wed, 29 Jan 2025 03:21:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=205.220.180.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738120897; cv=none; b=UQlLD+8vWDdQC5eEEjxaIaV7bBW7soEPIka/29/pVd4lXE7+yt+/ga415+oOakwoQonHtJYr2Dkfh51WIvF503GQw3059gjLyoEdTJmrTjAwi2JxMnko7A5wq26TC5irjdqrLYraODJDxjMjBMQLgk9evyp7l7wTH9zVVoGX1Ak= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738120897; c=relaxed/simple; bh=Q2ATPjcbVcFR+AzJKACl9OiyIesg3Ct7kzKjnxSG4fw=; h=From:Date:Subject:MIME-Version:Content-Type:Message-ID:References: In-Reply-To:To:CC; b=m7P5k+6e+jFTcw0/5HhVE+v9B0aj9BUVqwPscQraKS9Vlmh9sSisMvqf4JHrqMt7KRcodtyqNFmuYEMM2ZW94IPkJQjSQEZ/IERWhhbHi7nSv07+I6iio0hm7dzDpwzm47w7ordoH+fb4S8YG0osexyEtbuaPUb04lQZb3zSmkk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com; spf=pass smtp.mailfrom=quicinc.com; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b=SOWdTiDU; arc=none smtp.client-ip=205.220.180.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="SOWdTiDU" Received: from pps.filterd (m0279868.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 50T2gQEN013244; Wed, 29 Jan 2025 03:21:24 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= cc:content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to; s=qcppdkim1; bh= SitV+qCipHXndz9L4/GyhZknSaIBHyYmiGNcgjqv6dA=; b=SOWdTiDUBvmS+dLB DMZdMgX17hSflDnvs7fmqU0rvnDv8FLtBfrk9cs2jdwX1um8vWYVir7Eyy+izmab twPLO9znMLqP24s8cPJ4pFObKJTowA/lgwUX3f34ZSUV1zHlJEjPXEge2H9S+rF8 23Dyjj7W8DBPeuXmA761ubGqG8XJFtmnJVMWFU4yA3892+B67qrkVMuuLIclZzdA +MNMd9/hUnFoZEDXJhzZDSLPvtmUnN8TrQUL7NFcagUnE4w6df+FQYClXgtvkMmF lPINg5K6pZLb7s/kQnVX/gcQZuvEbeGj23/xsxRch0DwtCiKhwChYbU4/iVdLa7a Lw3VLw== Received: from nasanppmta04.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 44f9ah08pn-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 29 Jan 2025 03:21:24 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA04.qualcomm.com (8.18.1.2/8.18.1.2) with ESMTPS id 50T3L7NT012915 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 29 Jan 2025 03:21:07 GMT Received: from jesszhan-linux.qualcomm.com (10.80.80.8) by nasanex01b.na.qualcomm.com (10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.9; Tue, 28 Jan 2025 19:21:07 -0800 From: Jessica Zhang Date: Tue, 28 Jan 2025 19:20:39 -0800 Subject: [PATCH v5 07/14] drm/msm/dpu: Reserve resources for CWB Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20250128-concurrent-wb-v5-7-6464ca5360df@quicinc.com> References: <20250128-concurrent-wb-v5-0-6464ca5360df@quicinc.com> In-Reply-To: <20250128-concurrent-wb-v5-0-6464ca5360df@quicinc.com> To: Rob Clark , Dmitry Baryshkov , , Sean 
Paul , Marijn Suijten , "David Airlie" , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Simona Vetter , Simona Vetter CC: , , , , , Rob Clark , =?utf-8?b?VmlsbGUgU3lyasOkbMOk?= , "Jessica Zhang" X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=ed25519-sha256; t=1738120865; l=10179; i=quic_jesszhan@quicinc.com; s=20230329; h=from:subject:message-id; bh=Q2ATPjcbVcFR+AzJKACl9OiyIesg3Ct7kzKjnxSG4fw=; b=tOnbZcUt7RzncFGHbWhNE2dmdvhBxvFSuZzrsQt6ejG60jfsZlQwpME2Sr3thuz5u6NXrOZIm qc8zUuuP4w4BGYDVEtjmYHaN1nRuX0grKRaltU+7vtMnietMuZ2ZEFj X-Developer-Key: i=quic_jesszhan@quicinc.com; a=ed25519; pk=gAUCgHZ6wTJOzQa3U0GfeCDH7iZLlqIEPo4rrjfDpWE= X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: mhAXX5XhnE59SRSR60dTMKZZlI650lmf X-Proofpoint-ORIG-GUID: mhAXX5XhnE59SRSR60dTMKZZlI650lmf X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1057,Hydra:6.0.680,FMLib:17.12.68.34 definitions=2025-01-28_04,2025-01-27_01,2024-11-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 lowpriorityscore=0 mlxlogscore=999 phishscore=0 malwarescore=0 priorityscore=1501 clxscore=1015 spamscore=0 impostorscore=0 adultscore=0 bulkscore=0 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2411120000 definitions=main-2501290026 Add support for RM to reserve dedicated CWB PINGPONGs and CWB muxes For concurrent writeback, even-indexed CWB muxes must be assigned to even-indexed LMs and odd-indexed CWB muxes for odd-indexed LMs. The same even/odd rule applies for dedicated CWB PINGPONGs. Track the CWB muxes in the global state and add a CWB-specific helper to reserve the correct CWB muxes and dedicated PINGPONGs following the even/odd rule. Signed-off-by: Jessica Zhang --- Changes in v5: - Allocate CWB muxes first then allocate PINGPONG block based on CWB mux index - Corrected comment doc on odd/even rule --- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 34 ++++++++++-- drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h | 2 + drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h | 1 + drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c | 82 +++++++++++++++++++++++++++++ 4 files changed, 115 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index 99db194f5d095e83ac72f2830814e649a25918ef..17bd9762f56a392e8e9e8d7c897dcb6e06bccbb3 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -2,7 +2,7 @@ /* * Copyright (C) 2013 Red Hat * Copyright (c) 2014-2018, 2020-2021 The Linux Foundation. All rights reserved. - * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. * * Author: Rob Clark */ @@ -28,6 +28,7 @@ #include "dpu_hw_dsc.h" #include "dpu_hw_merge3d.h" #include "dpu_hw_cdm.h" +#include "dpu_hw_cwb.h" #include "dpu_formats.h" #include "dpu_encoder_phys.h" #include "dpu_crtc.h" @@ -133,6 +134,9 @@ enum dpu_enc_rc_states { * @cur_slave: As above but for the slave encoder. * @hw_pp: Handle to the pingpong blocks used for the display. No. * pingpong blocks can be different than num_phys_encs. + * @hw_cwb: Handle to the CWB muxes used for concurrent writeback + * display. Number of CWB muxes can be different than + * num_phys_encs. 
* @hw_dsc: Handle to the DSC blocks used for the display. * @dsc_mask: Bitmask of used DSC blocks. * @intfs_swapped: Whether or not the phys_enc interfaces have been swapped @@ -177,6 +181,7 @@ struct dpu_encoder_virt { struct dpu_encoder_phys *cur_master; struct dpu_encoder_phys *cur_slave; struct dpu_hw_pingpong *hw_pp[MAX_CHANNELS_PER_ENC]; + struct dpu_hw_cwb *hw_cwb[MAX_CHANNELS_PER_ENC]; struct dpu_hw_dsc *hw_dsc[MAX_CHANNELS_PER_ENC]; unsigned int dsc_mask; @@ -1137,7 +1142,10 @@ static void dpu_encoder_virt_atomic_mode_set(struct drm_encoder *drm_enc, struct dpu_hw_blk *hw_pp[MAX_CHANNELS_PER_ENC]; struct dpu_hw_blk *hw_ctl[MAX_CHANNELS_PER_ENC]; struct dpu_hw_blk *hw_dsc[MAX_CHANNELS_PER_ENC]; + struct dpu_hw_blk *hw_cwb[MAX_CHANNELS_PER_ENC]; int num_ctl, num_pp, num_dsc; + int num_cwb = 0; + bool is_cwb_encoder; unsigned int dsc_mask = 0; int i; @@ -1151,6 +1159,8 @@ static void dpu_encoder_virt_atomic_mode_set(struct drm_encoder *drm_enc, priv = drm_enc->dev->dev_private; dpu_kms = to_dpu_kms(priv->kms); + is_cwb_encoder = drm_crtc_in_clone_mode(crtc_state) && + dpu_enc->disp_info.intf_type == INTF_WB; global_state = dpu_kms_get_existing_global_state(dpu_kms); if (IS_ERR_OR_NULL(global_state)) { @@ -1161,9 +1171,25 @@ static void dpu_encoder_virt_atomic_mode_set(struct drm_encoder *drm_enc, trace_dpu_enc_mode_set(DRMID(drm_enc)); /* Query resource that have been reserved in atomic check step. */ - num_pp = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, - drm_enc->crtc, DPU_HW_BLK_PINGPONG, hw_pp, - ARRAY_SIZE(hw_pp)); + if (is_cwb_encoder) { + num_pp = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, + drm_enc->crtc, + DPU_HW_BLK_DCWB_PINGPONG, + hw_pp, ARRAY_SIZE(hw_pp)); + num_cwb = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, + drm_enc->crtc, + DPU_HW_BLK_CWB, + hw_cwb, ARRAY_SIZE(hw_cwb)); + } else { + num_pp = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, + drm_enc->crtc, + DPU_HW_BLK_PINGPONG, hw_pp, + ARRAY_SIZE(hw_pp)); + } + + for (i = 0; i < num_cwb; i++) + dpu_enc->hw_cwb[i] = to_dpu_hw_cwb(hw_cwb[i]); + num_ctl = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, drm_enc->crtc, DPU_HW_BLK_CTL, hw_ctl, ARRAY_SIZE(hw_ctl)); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h index ba7bb05efe9b8cac01a908e53121117e130f91ec..8d820cd1b5545d247515763039b341184e814e32 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h @@ -77,12 +77,14 @@ enum dpu_hw_blk_type { DPU_HW_BLK_LM, DPU_HW_BLK_CTL, DPU_HW_BLK_PINGPONG, + DPU_HW_BLK_DCWB_PINGPONG, DPU_HW_BLK_INTF, DPU_HW_BLK_WB, DPU_HW_BLK_DSPP, DPU_HW_BLK_MERGE_3D, DPU_HW_BLK_DSC, DPU_HW_BLK_CDM, + DPU_HW_BLK_CWB, DPU_HW_BLK_MAX, }; diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h index 54ef6cfa2485a8a3886bd26b7ec3692d037dc35e..a57ec2ec106083e8f93578e4307e8b13ae549c08 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h @@ -132,6 +132,7 @@ struct dpu_global_state { uint32_t cdm_to_crtc_id; uint32_t sspp_to_crtc_id[SSPP_MAX - SSPP_NONE]; + uint32_t cwb_to_crtc_id[CWB_MAX - CWB_0]; }; struct dpu_global_state diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c index dca3107d1e8265a864ac62d6b845d6cb966965ed..2d5cf97a75913d51b2568ce85ec0c79a4a34deb4 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c @@ 
-233,6 +233,54 @@ static int _dpu_rm_get_lm_peer(struct dpu_rm *rm, int primary_idx) return -EINVAL; } +static int _dpu_rm_reserve_cwb_mux_and_pingpongs(struct dpu_rm *rm, + struct dpu_global_state *global_state, + uint32_t crtc_id, + struct msm_display_topology *topology) +{ + int num_cwb_mux = topology->num_lm, cwb_mux_count = 0; + int cwb_pp_start_idx = PINGPONG_CWB_0 - PINGPONG_0; + int cwb_pp_idx[MAX_BLOCKS]; + int cwb_mux_idx[MAX_BLOCKS]; + + /* + * Reserve additional dedicated CWB PINGPONG blocks and muxes for each + * mixer + * + * TODO: add support reserving resources for platforms with no + * PINGPONG_CWB + */ + for (int i = 0; i < ARRAY_SIZE(rm->mixer_blks) && + cwb_mux_count < num_cwb_mux; i++) { + for (int j = 0; j < ARRAY_SIZE(rm->cwb_blks); j++) { + /* + * Odd LMs must be assigned to odd CWB muxes and even + * LMs with even CWB muxes + */ + if (reserved_by_other(global_state->cwb_to_crtc_id, j, crtc_id) || + i % 2 != j % 2) + continue; + + cwb_mux_idx[cwb_mux_count] = j; + cwb_pp_idx[cwb_mux_count] = j + cwb_pp_start_idx; + cwb_mux_count++; + break; + } + } + + if (cwb_mux_count != num_cwb_mux) { + DPU_ERROR("Unable to reserve all CWB PINGPONGs\n"); + return -ENAVAIL; + } + + for (int i = 0; i < cwb_mux_count; i++) { + global_state->pingpong_to_crtc_id[cwb_pp_idx[i]] = crtc_id; + global_state->cwb_to_crtc_id[cwb_mux_idx[i]] = crtc_id; + } + + return 0; +} + /** * _dpu_rm_check_lm_and_get_connected_blks - check if proposed layer mixer meets * proposed use case requirements, incl. hardwired dependent blocks like @@ -623,6 +671,12 @@ static int _dpu_rm_make_reservation( return ret; } + if (topology->cwb_enabled) { + ret = _dpu_rm_reserve_cwb_mux_and_pingpongs(rm, global_state, + crtc_id, topology); + if (ret) + return ret; + } ret = _dpu_rm_reserve_ctls(rm, global_state, crtc_id, topology); @@ -680,6 +734,8 @@ void dpu_rm_release(struct dpu_global_state *global_state, _dpu_rm_clear_mapping(global_state->dspp_to_crtc_id, ARRAY_SIZE(global_state->dspp_to_crtc_id), crtc_id); _dpu_rm_clear_mapping(&global_state->cdm_to_crtc_id, 1, crtc_id); + _dpu_rm_clear_mapping(global_state->cwb_to_crtc_id, + ARRAY_SIZE(global_state->cwb_to_crtc_id), crtc_id); } /** @@ -824,6 +880,7 @@ int dpu_rm_get_assigned_resources(struct dpu_rm *rm, switch (type) { case DPU_HW_BLK_PINGPONG: + case DPU_HW_BLK_DCWB_PINGPONG: hw_blks = rm->pingpong_blks; hw_to_crtc_id = global_state->pingpong_to_crtc_id; max_blks = ARRAY_SIZE(rm->pingpong_blks); @@ -853,6 +910,11 @@ int dpu_rm_get_assigned_resources(struct dpu_rm *rm, hw_to_crtc_id = &global_state->cdm_to_crtc_id; max_blks = 1; break; + case DPU_HW_BLK_CWB: + hw_blks = rm->cwb_blks; + hw_to_crtc_id = global_state->cwb_to_crtc_id; + max_blks = ARRAY_SIZE(rm->cwb_blks); + break; default: DPU_ERROR("blk type %d not managed by rm\n", type); return 0; @@ -863,6 +925,20 @@ int dpu_rm_get_assigned_resources(struct dpu_rm *rm, if (hw_to_crtc_id[i] != crtc_id) continue; + if (type == DPU_HW_BLK_PINGPONG) { + struct dpu_hw_pingpong *pp = to_dpu_hw_pingpong(hw_blks[i]); + + if (pp->idx >= PINGPONG_CWB_0) + continue; + } + + if (type == DPU_HW_BLK_DCWB_PINGPONG) { + struct dpu_hw_pingpong *pp = to_dpu_hw_pingpong(hw_blks[i]); + + if (pp->idx < PINGPONG_CWB_0) + continue; + } + if (num_blks == blks_size) { DPU_ERROR("More than %d resources assigned to crtc %d\n", blks_size, crtc_id); @@ -945,4 +1021,10 @@ void dpu_rm_print_state(struct drm_printer *p, dpu_rm_print_state_helper(p, rm->hw_sspp[i] ? 
&rm->hw_sspp[i]->base : NULL, global_state->sspp_to_crtc_id[i]); drm_puts(p, "\n"); + + drm_puts(p, "\tcwb="); + for (i = 0; i < ARRAY_SIZE(global_state->cwb_to_crtc_id); i++) + dpu_rm_print_state_helper(p, rm->cwb_blks[i], + global_state->cwb_to_crtc_id[i]); + drm_puts(p, "\n"); } From patchwork Wed Jan 29 03:20:40 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jessica Zhang X-Patchwork-Id: 13953427 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6F0483CF58; Wed, 29 Jan 2025 03:21:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=205.220.180.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738120885; cv=none; b=gL4jhvmOpRvkqofqDBoH5qXXhmYsFBBliJSvuQP8vdWBBZMcpnfXSfLr5iJvoLMLTwlEXMJgjFyHi8zZFFTUJ372+b5Su0CwYselWMgxH0D4SjYKwUPM2OOoqha0Q25/3O5jjGuj8bmh5e6luDWI0vJduFsf1sM/dINZyyixdAE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738120885; c=relaxed/simple; bh=iSQ0t1UeYhymLmozLS8BB1Qpn/9CZhXSM6NwHUndz54=; h=From:Date:Subject:MIME-Version:Content-Type:Message-ID:References: In-Reply-To:To:CC; b=tHBry6ZHVvbZJqTzgXHYTS2wmKnKSTWD3KsKsGIlFEEpn2eLAGYRy2bhuUUrH+YWc+jA/Rxd+CRO0A99e2Ne7VxujBkHz7ot2QCPqdNTNaFgJo9N08M7BNXQ/O7v0GMjWmbEAznrRtae81uP3kAdElLOGP+yiUWMuoMERM61jUY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com; spf=pass smtp.mailfrom=quicinc.com; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b=XRvQQp3e; arc=none smtp.client-ip=205.220.180.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="XRvQQp3e" Received: from pps.filterd (m0279873.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 50T2gKU2025538; Wed, 29 Jan 2025 03:21:09 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= cc:content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to; s=qcppdkim1; bh= 7bt+gy07fE3WsgTLxGihlAn+n+soURQCOd29fD2m7HU=; b=XRvQQp3eHGVC3LmD HxCjcK4SHouPLzfHEyaKODvA4KyXa+iwpyhE9+3n12lSE1v8B1OHECVsYZnpQ4OM POCLWjaY+Qn0EDwxhaBOtjamq9lvahHa3VWlhAxG0/TCoSXqiJTtcL9bMF7Wnje/ OoutrU0pQcgsDl7OceANz/FCV1iEtR8ZR2qMkFWIm2XT5AVmyAF6RB3+GRjpQNQN J4AYm5PAUWb8BAZUNfYZQ2tENPvjCJe0C3/+riODAOnuYQiK1of53jpVfNZdhCpU XhgprZD7aIfRF81SQygspPpiydeYfFUrjoLSNHNAs82SRSv6MbHeeDv7vKmaJP9C eObXkg== Received: from nasanppmta01.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 44f54dgsc7-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 29 Jan 2025 03:21:08 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA01.qualcomm.com (8.18.1.2/8.18.1.2) with ESMTPS id 50T3L75K029763 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 29 Jan 2025 03:21:07 GMT Received: from jesszhan-linux.qualcomm.com (10.80.80.8) by nasanex01b.na.qualcomm.com 
(10.46.141.250) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.9; Tue, 28 Jan 2025 19:21:07 -0800 From: Jessica Zhang Date: Tue, 28 Jan 2025 19:20:40 -0800 Subject: [PATCH v5 08/14] drm/msm/dpu: Configure CWB in writeback encoder Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-ID: <20250128-concurrent-wb-v5-8-6464ca5360df@quicinc.com> References: <20250128-concurrent-wb-v5-0-6464ca5360df@quicinc.com> In-Reply-To: <20250128-concurrent-wb-v5-0-6464ca5360df@quicinc.com> To: Rob Clark , Dmitry Baryshkov , , Sean Paul , Marijn Suijten , "David Airlie" , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Simona Vetter , Simona Vetter CC: , , , , , Rob Clark , =?utf-8?b?VmlsbGUgU3lyasOkbMOk?= , "Jessica Zhang" X-Mailer: b4 0.15-dev-1b0d6 X-Developer-Signature: v=1; a=ed25519-sha256; t=1738120865; l=7445; i=quic_jesszhan@quicinc.com; s=20230329; h=from:subject:message-id; bh=iSQ0t1UeYhymLmozLS8BB1Qpn/9CZhXSM6NwHUndz54=; b=VDS0wiruxeePH5eFE8VIkCMjx4voSue9tpL3mEi/OUXHEm2cF3ZNEr+/wn5gXrXoNMWGNiPEX kJ7QQ9IBIZeABGU8aKusNbV6MTPkSUzstBoUjPYG8rr3RunKbpuL+r8 X-Developer-Key: i=quic_jesszhan@quicinc.com; a=ed25519; pk=gAUCgHZ6wTJOzQa3U0GfeCDH7iZLlqIEPo4rrjfDpWE= X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01b.na.qualcomm.com (10.46.141.250) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: FNkIcbukCRXsiq5QUM87uVAj0XrsOfHS X-Proofpoint-ORIG-GUID: FNkIcbukCRXsiq5QUM87uVAj0XrsOfHS X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1057,Hydra:6.0.680,FMLib:17.12.68.34 definitions=2025-01-28_04,2025-01-27_01,2024-11-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 impostorscore=0 priorityscore=1501 bulkscore=0 malwarescore=0 suspectscore=0 spamscore=0 clxscore=1015 phishscore=0 adultscore=0 lowpriorityscore=0 mlxlogscore=999 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2411120000 definitions=main-2501290025 Cache the CWB block mask in the DPU virtual encoder and configure CWB according to the CWB block mask within the writeback phys encoder Reviewed-by: Dmitry Baryshkov Reviewed-by: Abhinav Kumar Signed-off-by: Jessica Zhang --- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 75 +++++++++++++++++++++- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h | 7 +- .../gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 4 +- 3 files changed, 83 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index 17bd9762f56a392e8e9e8d7c897dcb6e06bccbb3..877aa7b50315d20122bf524b99e43bbe3ad3b7ee 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -24,6 +24,7 @@ #include "dpu_hw_catalog.h" #include "dpu_hw_intf.h" #include "dpu_hw_ctl.h" +#include "dpu_hw_cwb.h" #include "dpu_hw_dspp.h" #include "dpu_hw_dsc.h" #include "dpu_hw_merge3d.h" @@ -139,6 +140,7 @@ enum dpu_enc_rc_states { * num_phys_encs. * @hw_dsc: Handle to the DSC blocks used for the display. * @dsc_mask: Bitmask of used DSC blocks. 
+ * @cwb_mask Bitmask of used CWB muxes * @intfs_swapped: Whether or not the phys_enc interfaces have been swapped * for partial update right-only cases, such as pingpong * split where virtual pingpong does not generate IRQs @@ -185,6 +187,7 @@ struct dpu_encoder_virt { struct dpu_hw_dsc *hw_dsc[MAX_CHANNELS_PER_ENC]; unsigned int dsc_mask; + unsigned int cwb_mask; bool intfs_swapped; @@ -1147,6 +1150,7 @@ static void dpu_encoder_virt_atomic_mode_set(struct drm_encoder *drm_enc, int num_cwb = 0; bool is_cwb_encoder; unsigned int dsc_mask = 0; + unsigned int cwb_mask = 0; int i; if (!drm_enc) { @@ -1187,8 +1191,12 @@ static void dpu_encoder_virt_atomic_mode_set(struct drm_encoder *drm_enc, ARRAY_SIZE(hw_pp)); } - for (i = 0; i < num_cwb; i++) + for (i = 0; i < num_cwb; i++) { dpu_enc->hw_cwb[i] = to_dpu_hw_cwb(hw_cwb[i]); + cwb_mask |= BIT(dpu_enc->hw_cwb[i]->idx - CWB_0); + } + + dpu_enc->cwb_mask = cwb_mask; num_ctl = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, drm_enc->crtc, DPU_HW_BLK_CTL, hw_ctl, ARRAY_SIZE(hw_ctl)); @@ -2225,6 +2233,9 @@ void dpu_encoder_helper_phys_cleanup(struct dpu_encoder_phys *phys_enc) } } + if (dpu_enc->cwb_mask) + dpu_encoder_helper_phys_setup_cwb(phys_enc, false); + /* reset the merge 3D HW block */ if (phys_enc->hw_pp && phys_enc->hw_pp->merge_3d) { phys_enc->hw_pp->merge_3d->ops.setup_3d_mode(phys_enc->hw_pp->merge_3d, @@ -2268,6 +2279,56 @@ void dpu_encoder_helper_phys_cleanup(struct dpu_encoder_phys *phys_enc) ctl->ops.clear_pending_flush(ctl); } +void dpu_encoder_helper_phys_setup_cwb(struct dpu_encoder_phys *phys_enc, + bool enable) +{ + struct dpu_encoder_virt *dpu_enc = to_dpu_encoder_virt(phys_enc->parent); + struct dpu_hw_cwb *hw_cwb; + struct dpu_hw_cwb_setup_cfg cwb_cfg; + + struct dpu_kms *dpu_kms; + struct dpu_global_state *global_state; + struct dpu_hw_blk *rt_pp_list[MAX_CHANNELS_PER_ENC]; + int num_pp; + + if (!phys_enc->hw_wb) + return; + + dpu_kms = phys_enc->dpu_kms; + global_state = dpu_kms_get_existing_global_state(dpu_kms); + num_pp = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, + phys_enc->parent->crtc, + DPU_HW_BLK_PINGPONG, rt_pp_list, + ARRAY_SIZE(rt_pp_list)); + + if (num_pp == 0 || num_pp > MAX_CHANNELS_PER_ENC) { + DPU_DEBUG_ENC(dpu_enc, "invalid num_pp %d\n", num_pp); + return; + } + + /* + * The CWB mux supports using LM or DSPP as tap points. 
For now, + * always use LM tap point + */ + cwb_cfg.input = INPUT_MODE_LM_OUT; + + for (int i = 0; i < MAX_CHANNELS_PER_ENC; i++) { + hw_cwb = dpu_enc->hw_cwb[i]; + if (!hw_cwb) + continue; + + if (enable) { + struct dpu_hw_pingpong *hw_pp = + to_dpu_hw_pingpong(rt_pp_list[i]); + cwb_cfg.pp_idx = hw_pp->idx; + } else { + cwb_cfg.pp_idx = PINGPONG_NONE; + } + + hw_cwb->ops.config_cwb(hw_cwb, &cwb_cfg); + } +} + /** * dpu_encoder_helper_phys_setup_cdm - setup chroma down sampling block * @phys_enc: Pointer to physical encoder @@ -2728,6 +2789,18 @@ enum dpu_intf_mode dpu_encoder_get_intf_mode(struct drm_encoder *encoder) return INTF_MODE_NONE; } +/** + * dpu_encoder_helper_get_cwb_mask - get CWB blocks mask for the DPU encoder + * @phys_enc: Pointer to physical encoder structure + */ +unsigned int dpu_encoder_helper_get_cwb_mask(struct dpu_encoder_phys *phys_enc) +{ + struct drm_encoder *encoder = phys_enc->parent; + struct dpu_encoder_virt *dpu_enc = to_dpu_encoder_virt(encoder); + + return dpu_enc->cwb_mask; +} + /** * dpu_encoder_helper_get_dsc - get DSC blocks mask for the DPU encoder * This helper function is used by physical encoder to get DSC blocks mask diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h index 63f09857025c2004dcb56bd33e9c51f8e0f80e48..61b22d9494546885db609efa156222792af73d2a 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* - * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. * Copyright (c) 2015-2018 The Linux Foundation. All rights reserved. */ @@ -309,6 +309,8 @@ static inline enum dpu_3d_blend_mode dpu_encoder_helper_get_3d_blend_mode( return BLEND_3D_NONE; } +unsigned int dpu_encoder_helper_get_cwb_mask(struct dpu_encoder_phys *phys_enc); + unsigned int dpu_encoder_helper_get_dsc(struct dpu_encoder_phys *phys_enc); struct drm_dsc_config *dpu_encoder_get_dsc_config(struct drm_encoder *drm_enc); @@ -331,6 +333,9 @@ int dpu_encoder_helper_wait_for_irq(struct dpu_encoder_phys *phys_enc, void dpu_encoder_helper_phys_cleanup(struct dpu_encoder_phys *phys_enc); +void dpu_encoder_helper_phys_setup_cwb(struct dpu_encoder_phys *phys_enc, + bool enable); + void dpu_encoder_helper_phys_setup_cdm(struct dpu_encoder_phys *phys_enc, const struct msm_format *dpu_fmt, u32 output_type); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c index 4c006ec74575b2829265f0eae5c462af8d491621..f2cbc9335e54e826c2ebd00b3a809af41eed95a4 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-only /* - * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. 
*/ #define pr_fmt(fmt) "[drm:%s:%d] " fmt, __func__, __LINE__ @@ -340,6 +340,8 @@ static void dpu_encoder_phys_wb_setup( dpu_encoder_helper_phys_setup_cdm(phys_enc, format, CDM_CDWN_OUTPUT_WB); + dpu_encoder_helper_phys_setup_cwb(phys_enc, true); + dpu_encoder_phys_wb_setup_ctl(phys_enc); } From patchwork Wed Jan 29 03:20:41 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jessica Zhang X-Patchwork-Id: 13953434 Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com [205.220.180.131]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6024119D07C; Wed, 29 Jan 2025 03:21:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=205.220.180.131 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738120897; cv=none; b=q23JSArdW2k1P79HZXzgsjMIR5BMd9i3OCSGYacJKS57wT2cfShYZkLz2QyY7ashCMLSr2yfLpr46FFKNP79bmnKLtOjXrNA4ieZBS6ZqNWEV9DrIM3DO7+bbPxhPAzBCGl3TLTNwYesocePCDtY6TYZk8CygglKXtMgpg8G4Gs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1738120897; c=relaxed/simple; bh=CsJ02UGrcOCthZroDAIz4z/ly0p3o6les94EQpUF0Qo=; h=From:Date:Subject:MIME-Version:Content-Type:Message-ID:References: In-Reply-To:To:CC; b=In4j/Uks2Ns4MhaeIGMAdbZe+a/cZ4GbFC69c/sV7Dn3gsgQRoz86tvO2volb8pNBz4X6YTZVQo2Q92FKtpcjewEMVnLLvcL5F84uvWmRc8MlZCVwd30H/A/sYj9maIlNuBDE5iXtgxoVBHQopQ0JiHfZeLOga10ARYiGab7kMU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com; spf=pass smtp.mailfrom=quicinc.com; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b=OglmVZ1v; arc=none smtp.client-ip=205.220.180.131 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=quicinc.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=quicinc.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=quicinc.com header.i=@quicinc.com header.b="OglmVZ1v" Received: from pps.filterd (m0279869.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 50T2rjQu020269; Wed, 29 Jan 2025 03:21:24 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h= cc:content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to; s=qcppdkim1; bh= SkprHbA7YcSrk5sn56jWRDfHuOKccKfPSbqDdy7YSK4=; b=OglmVZ1vam6r0iMp xaWSJiIxz3HzOgmFYLJ+vwP3rfe73e9QIrJHcO33HtE/vIKs9HXcrlPGdP4UmMos UjilVug1iDL4fOcgBYPVx7IFX6E48yBq6mDXpzXGoAPcUC+Uw/g4jB/napOwNy2Z SdUtlliGXTMCpXhNgLoOJWXeqsO2GrDAy/h3b67pc/MRE/Ig9HxPyY7mH3mR51+B UIb+7QIZfiOogUiysfNAK6wsot5D5WC2n9ndmp33hASlaQyQO4jqDIdZBKUbApSC 5TtlGeKOgSfyqgaTnPOJpvusWjAB56PzezoO8rTawE/ubu9Pq5gBtw9/zyKulwFi tA9RQQ== Received: from nasanppmta05.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 44fbxvg12u-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 29 Jan 2025 03:21:23 +0000 (GMT) Received: from nasanex01b.na.qualcomm.com (nasanex01b.na.qualcomm.com [10.46.141.250]) by NASANPPMTA05.qualcomm.com (8.18.1.2/8.18.1.2) with ESMTPS id 50T3L8Vd024016 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 29 Jan 2025 03:21:08 GMT Received: from jesszhan-linux.qualcomm.com (10.80.80.8) by nasanex01b.na.qualcomm.com 
From: Jessica Zhang
Date: Tue, 28 Jan 2025 19:20:41 -0800
Subject: [PATCH v5 09/14] drm/msm/dpu: Support CWB in dpu_hw_ctl
Message-ID: <20250128-concurrent-wb-v5-9-6464ca5360df@quicinc.com>

The CWB mux has a pending flush bit and *_active register. Add support for configuring them within the dpu_hw_ctl layer.
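For context, the CTL block does not flush immediately: update_pending_flush_cwb() only ORs bits into cached masks, and the cached values are written to the flush registers when the frame is kicked off. The standalone sketch below illustrates that two-level mask pattern. The CTL_CWB_FLUSH offset and bit 28 mirror the CTL_CWB_FLUSH/CWB_IDX defines added by this patch, while the CTL_FLUSH offset and the reg_write() callback are assumptions made for the sketch, not the driver's actual interface.

	#include <stdint.h>

	#define CTL_FLUSH      0x018	/* assumed top-level flush register offset */
	#define CTL_CWB_FLUSH  0x10C	/* per-CWB-mux flush register (as in the patch) */
	#define CWB_TOP_BIT    28	/* "CWB group dirty" bit in CTL_FLUSH (CWB_IDX in the patch) */

	struct ctl_ctx {
		uint32_t pending_flush_mask;		/* one bit per block group */
		uint32_t pending_cwb_flush_mask;	/* one bit per CWB mux instance */
	};

	/* Cache a flush request for one CWB mux; nothing touches hardware here. */
	static void update_pending_flush_cwb(struct ctl_ctx *ctx, unsigned int cwb_instance)
	{
		ctx->pending_cwb_flush_mask |= 1u << cwb_instance;
		ctx->pending_flush_mask |= 1u << CWB_TOP_BIT;
	}

	/* At kickoff, push the cached masks out through a caller-supplied register writer. */
	static void trigger_flush(struct ctl_ctx *ctx,
				  void (*reg_write)(uint32_t reg, uint32_t val))
	{
		if (ctx->pending_flush_mask & (1u << CWB_TOP_BIT))
			reg_write(CTL_CWB_FLUSH, ctx->pending_cwb_flush_mask);
		reg_write(CTL_FLUSH, ctx->pending_flush_mask);
	}

The driver splits the same responsibilities between dpu_hw_ctl_update_pending_flush_cwb_v1() and dpu_hw_ctl_trigger_flush_v1(), both visible in the diff below.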
Reviewed-by: Dmitry Baryshkov Reviewed-by: Abhinav Kumar Signed-off-by: Jessica Zhang --- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 13 ++++++++++ .../gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 1 + drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c | 30 +++++++++++++++++++++- drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h | 15 ++++++++++- 4 files changed, 57 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index 877aa7b50315d20122bf524b99e43bbe3ad3b7ee..578b5f1f039551b649fdae130cf097e3fb621d95 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -2262,6 +2262,7 @@ void dpu_encoder_helper_phys_cleanup(struct dpu_encoder_phys *phys_enc) intf_cfg.stream_sel = 0; /* Don't care value for video mode */ intf_cfg.mode_3d = dpu_encoder_helper_get_3d_blend_mode(phys_enc); intf_cfg.dsc = dpu_encoder_helper_get_dsc(phys_enc); + intf_cfg.cwb = dpu_enc->cwb_mask; if (phys_enc->hw_intf) intf_cfg.intf = phys_enc->hw_intf->idx; @@ -2284,6 +2285,7 @@ void dpu_encoder_helper_phys_setup_cwb(struct dpu_encoder_phys *phys_enc, { struct dpu_encoder_virt *dpu_enc = to_dpu_encoder_virt(phys_enc->parent); struct dpu_hw_cwb *hw_cwb; + struct dpu_hw_ctl *hw_ctl; struct dpu_hw_cwb_setup_cfg cwb_cfg; struct dpu_kms *dpu_kms; @@ -2294,6 +2296,14 @@ void dpu_encoder_helper_phys_setup_cwb(struct dpu_encoder_phys *phys_enc, if (!phys_enc->hw_wb) return; + hw_ctl = phys_enc->hw_ctl; + + if (!phys_enc->hw_ctl) { + DPU_DEBUG("[wb:%d] no ctl assigned\n", + phys_enc->hw_wb->idx - WB_0); + return; + } + dpu_kms = phys_enc->dpu_kms; global_state = dpu_kms_get_existing_global_state(dpu_kms); num_pp = dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state, @@ -2326,6 +2336,9 @@ void dpu_encoder_helper_phys_setup_cwb(struct dpu_encoder_phys *phys_enc, } hw_cwb->ops.config_cwb(hw_cwb, &cwb_cfg); + + if (hw_ctl->ops.update_pending_flush_cwb) + hw_ctl->ops.update_pending_flush_cwb(hw_ctl, hw_cwb->idx); } } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c index f2cbc9335e54e826c2ebd00b3a809af41eed95a4..648e6b3aab84957ca0401cbbc25889f0bd64b71a 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c @@ -236,6 +236,7 @@ static void dpu_encoder_phys_wb_setup_ctl(struct dpu_encoder_phys *phys_enc) intf_cfg.intf = DPU_NONE; intf_cfg.wb = hw_wb->idx; + intf_cfg.cwb = dpu_encoder_helper_get_cwb_mask(phys_enc); if (mode_3d && hw_pp && hw_pp->merge_3d) intf_cfg.merge_3d = hw_pp->merge_3d->idx; diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c index 4893f10d6a5832521808c0f4d8b231c356dbdc41..411a7cf088eb72f856940c09b0af9e108ccade4b 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-only /* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. - * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. 
*/ #include @@ -31,12 +31,14 @@ #define CTL_MERGE_3D_ACTIVE 0x0E4 #define CTL_DSC_ACTIVE 0x0E8 #define CTL_WB_ACTIVE 0x0EC +#define CTL_CWB_ACTIVE 0x0F0 #define CTL_INTF_ACTIVE 0x0F4 #define CTL_CDM_ACTIVE 0x0F8 #define CTL_FETCH_PIPE_ACTIVE 0x0FC #define CTL_MERGE_3D_FLUSH 0x100 #define CTL_DSC_FLUSH 0x104 #define CTL_WB_FLUSH 0x108 +#define CTL_CWB_FLUSH 0x10C #define CTL_INTF_FLUSH 0x110 #define CTL_CDM_FLUSH 0x114 #define CTL_PERIPH_FLUSH 0x128 @@ -53,6 +55,7 @@ #define PERIPH_IDX 30 #define INTF_IDX 31 #define WB_IDX 16 +#define CWB_IDX 28 #define DSPP_IDX 29 /* From DPU hw rev 7.x.x */ #define CTL_INVALID_BIT 0xffff #define CTL_DEFAULT_GROUP_ID 0xf @@ -110,6 +113,7 @@ static inline void dpu_hw_ctl_clear_pending_flush(struct dpu_hw_ctl *ctx) ctx->pending_flush_mask = 0x0; ctx->pending_intf_flush_mask = 0; ctx->pending_wb_flush_mask = 0; + ctx->pending_cwb_flush_mask = 0; ctx->pending_merge_3d_flush_mask = 0; ctx->pending_dsc_flush_mask = 0; ctx->pending_cdm_flush_mask = 0; @@ -144,6 +148,9 @@ static inline void dpu_hw_ctl_trigger_flush_v1(struct dpu_hw_ctl *ctx) if (ctx->pending_flush_mask & BIT(WB_IDX)) DPU_REG_WRITE(&ctx->hw, CTL_WB_FLUSH, ctx->pending_wb_flush_mask); + if (ctx->pending_flush_mask & BIT(CWB_IDX)) + DPU_REG_WRITE(&ctx->hw, CTL_CWB_FLUSH, + ctx->pending_cwb_flush_mask); if (ctx->pending_flush_mask & BIT(DSPP_IDX)) for (dspp = DSPP_0; dspp < DSPP_MAX; dspp++) { @@ -310,6 +317,13 @@ static void dpu_hw_ctl_update_pending_flush_wb_v1(struct dpu_hw_ctl *ctx, ctx->pending_flush_mask |= BIT(WB_IDX); } +static void dpu_hw_ctl_update_pending_flush_cwb_v1(struct dpu_hw_ctl *ctx, + enum dpu_cwb cwb) +{ + ctx->pending_cwb_flush_mask |= BIT(cwb - CWB_0); + ctx->pending_flush_mask |= BIT(CWB_IDX); +} + static void dpu_hw_ctl_update_pending_flush_intf_v1(struct dpu_hw_ctl *ctx, enum dpu_intf intf) { @@ -547,6 +561,7 @@ static void dpu_hw_ctl_intf_cfg_v1(struct dpu_hw_ctl *ctx, u32 intf_active = 0; u32 dsc_active = 0; u32 wb_active = 0; + u32 cwb_active = 0; u32 mode_sel = 0; /* CTL_TOP[31:28] carries group_id to collate CTL paths @@ -561,6 +576,7 @@ static void dpu_hw_ctl_intf_cfg_v1(struct dpu_hw_ctl *ctx, intf_active = DPU_REG_READ(c, CTL_INTF_ACTIVE); wb_active = DPU_REG_READ(c, CTL_WB_ACTIVE); + cwb_active = DPU_REG_READ(c, CTL_CWB_ACTIVE); dsc_active = DPU_REG_READ(c, CTL_DSC_ACTIVE); if (cfg->intf) @@ -569,12 +585,16 @@ static void dpu_hw_ctl_intf_cfg_v1(struct dpu_hw_ctl *ctx, if (cfg->wb) wb_active |= BIT(cfg->wb - WB_0); + if (cfg->cwb) + cwb_active |= cfg->cwb; + if (cfg->dsc) dsc_active |= cfg->dsc; DPU_REG_WRITE(c, CTL_TOP, mode_sel); DPU_REG_WRITE(c, CTL_INTF_ACTIVE, intf_active); DPU_REG_WRITE(c, CTL_WB_ACTIVE, wb_active); + DPU_REG_WRITE(c, CTL_CWB_ACTIVE, cwb_active); DPU_REG_WRITE(c, CTL_DSC_ACTIVE, dsc_active); if (cfg->merge_3d) @@ -624,6 +644,7 @@ static void dpu_hw_ctl_reset_intf_cfg_v1(struct dpu_hw_ctl *ctx, struct dpu_hw_blk_reg_map *c = &ctx->hw; u32 intf_active = 0; u32 wb_active = 0; + u32 cwb_active = 0; u32 merge3d_active = 0; u32 dsc_active; u32 cdm_active; @@ -651,6 +672,12 @@ static void dpu_hw_ctl_reset_intf_cfg_v1(struct dpu_hw_ctl *ctx, DPU_REG_WRITE(c, CTL_INTF_ACTIVE, intf_active); } + if (cfg->cwb) { + cwb_active = DPU_REG_READ(c, CTL_CWB_ACTIVE); + cwb_active &= ~cfg->cwb; + DPU_REG_WRITE(c, CTL_CWB_ACTIVE, cwb_active); + } + if (cfg->wb) { wb_active = DPU_REG_READ(c, CTL_WB_ACTIVE); wb_active &= ~BIT(cfg->wb - WB_0); @@ -703,6 +730,7 @@ static void _setup_ctl_ops(struct dpu_hw_ctl_ops *ops, ops->update_pending_flush_merge_3d = 
dpu_hw_ctl_update_pending_flush_merge_3d_v1; ops->update_pending_flush_wb = dpu_hw_ctl_update_pending_flush_wb_v1; + ops->update_pending_flush_cwb = dpu_hw_ctl_update_pending_flush_cwb_v1; ops->update_pending_flush_dsc = dpu_hw_ctl_update_pending_flush_dsc_v1; ops->update_pending_flush_cdm = dpu_hw_ctl_update_pending_flush_cdm_v1; diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h index 85c6c835cc8780e6cb66f3a262d9897c91962935..080a9550a0cc6530b4115165dd737857b6213d15 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. - * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. */ #ifndef _DPU_HW_CTL_H @@ -42,6 +42,7 @@ struct dpu_hw_stage_cfg { * @cdm: CDM block used * @stream_sel: Stream selection for multi-stream interfaces * @dsc: DSC BIT masks used + * @cwb: CWB BIT masks used */ struct dpu_hw_intf_cfg { enum dpu_intf intf; @@ -51,6 +52,7 @@ struct dpu_hw_intf_cfg { enum dpu_ctl_mode_sel intf_mode_sel; enum dpu_cdm cdm; int stream_sel; + unsigned int cwb; unsigned int dsc; }; @@ -114,6 +116,15 @@ struct dpu_hw_ctl_ops { void (*update_pending_flush_wb)(struct dpu_hw_ctl *ctx, enum dpu_wb blk); + /** + * OR in the given flushbits to the cached pending_(cwb_)flush_mask + * No effect on hardware + * @ctx : ctl path ctx pointer + * @blk : concurrent writeback block index + */ + void (*update_pending_flush_cwb)(struct dpu_hw_ctl *ctx, + enum dpu_cwb blk); + /** * OR in the given flushbits to the cached pending_(intf_)flush_mask * No effect on hardware @@ -258,6 +269,7 @@ struct dpu_hw_ctl_ops { * @pending_flush_mask: storage for pending ctl_flush managed via ops * @pending_intf_flush_mask: pending INTF flush * @pending_wb_flush_mask: pending WB flush + * @pending_cwb_flush_mask: pending CWB flush * @pending_dsc_flush_mask: pending DSC flush * @pending_cdm_flush_mask: pending CDM flush * @ops: operation list @@ -274,6 +286,7 @@ struct dpu_hw_ctl { u32 pending_flush_mask; u32 pending_intf_flush_mask; u32 pending_wb_flush_mask; + u32 pending_cwb_flush_mask; u32 pending_periph_flush_mask; u32 pending_merge_3d_flush_mask; u32 pending_dspp_flush_mask[DSPP_MAX - DSPP_0];

From patchwork Wed Jan 29 03:20:42 2025
X-Patchwork-Submitter: Jessica Zhang
X-Patchwork-Id: 13953432
From: Jessica Zhang
Date: Tue, 28 Jan 2025 19:20:42 -0800
Subject: [PATCH v5 10/14] drm/msm/dpu: Adjust writeback phys encoder setup for CWB
Message-ID: <20250128-concurrent-wb-v5-10-6464ca5360df@quicinc.com>

Adjust QoS remapper, OT limit, and CDP parameters to account for concurrent writeback.

Reviewed-by: Dmitry Baryshkov Reviewed-by: Abhinav Kumar Signed-off-by: Jessica Zhang --- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c index 648e6b3aab84957ca0401cbbc25889f0bd64b71a..849fea580a4ca55fc4a742c6b6dee7dfcdd788e4 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c @@ -68,7 +68,7 @@ static void dpu_encoder_phys_wb_set_ot_limit( ot_params.num = hw_wb->idx - WB_0; ot_params.width = phys_enc->cached_mode.hdisplay; ot_params.height = phys_enc->cached_mode.vdisplay; - ot_params.is_wfd = true; + ot_params.is_wfd = !dpu_encoder_helper_get_cwb_mask(phys_enc); ot_params.frame_rate = drm_mode_vrefresh(&phys_enc->cached_mode); ot_params.vbif_idx = hw_wb->caps->vbif_idx; ot_params.rd = false; @@ -111,7 +111,7 @@ static void dpu_encoder_phys_wb_set_qos_remap( qos_params.vbif_idx = hw_wb->caps->vbif_idx; qos_params.xin_id = hw_wb->caps->xin_id; qos_params.num = hw_wb->idx - WB_0; - qos_params.is_rt = false; + qos_params.is_rt = dpu_encoder_helper_get_cwb_mask(phys_enc); DPU_DEBUG("[qos_remap] wb:%d vbif:%d xin:%d is_rt:%d\n", qos_params.num, @@ -174,6 +174,7 @@ static void dpu_encoder_phys_wb_setup_fb(struct dpu_encoder_phys *phys_enc, struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); struct dpu_hw_wb *hw_wb; struct dpu_hw_wb_cfg *wb_cfg; + u32 cdp_usage; if (!phys_enc || !phys_enc->dpu_kms || !phys_enc->dpu_kms->catalog) { DPU_ERROR("invalid encoder\n"); @@ -182,6 +183,10 @@ static void dpu_encoder_phys_wb_setup_fb(struct dpu_encoder_phys *phys_enc, hw_wb = phys_enc->hw_wb; wb_cfg = &wb_enc->wb_cfg; + if (dpu_encoder_helper_get_cwb_mask(phys_enc)) + cdp_usage = DPU_PERF_CDP_USAGE_RT; + else + cdp_usage = DPU_PERF_CDP_USAGE_NRT; wb_cfg->intf_mode = phys_enc->intf_mode; wb_cfg->roi.x1 = 0; @@ -199,7 +204,7 @@ static void dpu_encoder_phys_wb_setup_fb(struct dpu_encoder_phys *phys_enc, const struct dpu_perf_cfg *perf = phys_enc->dpu_kms->catalog->perf; hw_wb->ops.setup_cdp(hw_wb, format, - perf->cdp_cfg[DPU_PERF_CDP_USAGE_NRT].wr_enable); + perf->cdp_cfg[cdp_usage].wr_enable); } if (hw_wb->ops.setup_outaddress)

From patchwork Wed Jan 29 03:20:43 2025
X-Patchwork-Submitter: Jessica Zhang
X-Patchwork-Id: 13953426
From: Jessica Zhang
Date: Tue, 28 Jan 2025 19:20:43 -0800
Subject: [PATCH v5 11/14] drm/msm/dpu: Start frame done timer after encoder kickoff
Message-ID: <20250128-concurrent-wb-v5-11-6464ca5360df@quicinc.com>

Starting the frame done timer before the encoder is finished kicking off can lead to unnecessary frame done timeouts when the device is experiencing heavy load (e.g. when debug logs are enabled). Thus, create a separate API for starting the encoder frame done timer and call it after the encoder kickoff is finished.

Reviewed-by: Dmitry Baryshkov Reviewed-by: Abhinav Kumar Signed-off-by: Jessica Zhang --- drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c | 4 +++- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 27 +++++++++++++++++++-------- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h | 3 ++- 3 files changed, 24 insertions(+), 10 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c index cfb19be0547788aa9a3b78fd0abb0513b8c9bb47..0f8b220b52cec6c6844d3f5e5d924d47c55eb65b 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c @@ -999,8 +999,10 @@ void dpu_crtc_commit_kickoff(struct drm_crtc *crtc) dpu_vbif_clear_errors(dpu_kms); - drm_for_each_encoder_mask(encoder, crtc->dev, crtc->state->encoder_mask) + drm_for_each_encoder_mask(encoder, crtc->dev, crtc->state->encoder_mask) { dpu_encoder_kickoff(encoder); + dpu_encoder_start_frame_done_timer(encoder); + } reinit_completion(&dpu_crtc->frame_done_comp); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index 578b5f1f039551b649fdae130cf097e3fb621d95..ba18226e396fa86bfaf3df38a3ff5caffc6e8841 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -2086,6 +2086,25 @@ bool dpu_encoder_is_valid_for_commit(struct drm_encoder *drm_enc) return true; } +/** + * dpu_encoder_start_frame_done_timer - Start the encoder frame done timer + * @drm_enc: Pointer to drm encoder structure + */ +void dpu_encoder_start_frame_done_timer(struct drm_encoder *drm_enc) +{ + struct dpu_encoder_virt *dpu_enc; + unsigned long timeout_ms; + + dpu_enc = to_dpu_encoder_virt(drm_enc); + timeout_ms =
DPU_ENCODER_FRAME_DONE_TIMEOUT_FRAMES * 1000 / + drm_mode_vrefresh(&drm_enc->crtc->state->adjusted_mode); + + atomic_set(&dpu_enc->frame_done_timeout_ms, timeout_ms); + mod_timer(&dpu_enc->frame_done_timer, + jiffies + msecs_to_jiffies(timeout_ms)); + +} + /** * dpu_encoder_kickoff - trigger a double buffer flip of the ctl path * (i.e. ctl flush and start) immediately. @@ -2095,7 +2114,6 @@ void dpu_encoder_kickoff(struct drm_encoder *drm_enc) { struct dpu_encoder_virt *dpu_enc; struct dpu_encoder_phys *phys; - unsigned long timeout_ms; unsigned int i; DPU_ATRACE_BEGIN("encoder_kickoff"); @@ -2103,13 +2121,6 @@ void dpu_encoder_kickoff(struct drm_encoder *drm_enc) trace_dpu_enc_kickoff(DRMID(drm_enc)); - timeout_ms = DPU_ENCODER_FRAME_DONE_TIMEOUT_FRAMES * 1000 / - drm_mode_vrefresh(&drm_enc->crtc->state->adjusted_mode); - - atomic_set(&dpu_enc->frame_done_timeout_ms, timeout_ms); - mod_timer(&dpu_enc->frame_done_timer, - jiffies + msecs_to_jiffies(timeout_ms)); - /* All phys encs are ready to go, trigger the kickoff */ _dpu_encoder_kickoff_phys(dpu_enc); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h index b0ac10ebd02c2b63e6f6f9010a22cdace931cf3b..8503386bb50330a38b065c32d259de18166464d6 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* - * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. * Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. * Copyright (C) 2013 Red Hat * Author: Rob Clark @@ -95,4 +95,5 @@ void dpu_encoder_cleanup_wb_job(struct drm_encoder *drm_enc, bool dpu_encoder_is_valid_for_commit(struct drm_encoder *drm_enc); +void dpu_encoder_start_frame_done_timer(struct drm_encoder *drm_enc); #endif /* __DPU_ENCODER_H__ */

From patchwork Wed Jan 29 03:20:44 2025
X-Patchwork-Submitter: Jessica Zhang
X-Patchwork-Id: 13953435
From: Jessica Zhang
Date: Tue, 28 Jan 2025 19:20:44 -0800
Subject: [PATCH v5 12/14] drm/msm/dpu: Skip trigger flush and start for CWB
Message-ID: <20250128-concurrent-wb-v5-12-6464ca5360df@quicinc.com>

For concurrent writeback, the real-time encoder is responsible for trigger flush and trigger start. Return early from trigger start and trigger flush for the concurrent writeback encoders.

Reviewed-by: Dmitry Baryshkov Reviewed-by: Abhinav Kumar Signed-off-by: Jessica Zhang --- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index ba18226e396fa86bfaf3df38a3ff5caffc6e8841..ee740cc881c483370336d84361c2fab7143ff2fc 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -1606,6 +1606,7 @@ static void dpu_encoder_off_work(struct work_struct *work) static void _dpu_encoder_trigger_flush(struct drm_encoder *drm_enc, struct dpu_encoder_phys *phys, uint32_t extra_flush_bits) { + struct dpu_encoder_virt *dpu_enc = to_dpu_encoder_virt(drm_enc); struct dpu_hw_ctl *ctl; int pending_kickoff_cnt; u32 ret = UINT_MAX; @@ -1623,6 +1624,15 @@ static void _dpu_encoder_trigger_flush(struct drm_encoder *drm_enc, pending_kickoff_cnt = dpu_encoder_phys_inc_pending(phys); + /* Return early if encoder is writeback and in clone mode */ + if (drm_enc->encoder_type == DRM_MODE_ENCODER_VIRTUAL && + dpu_enc->cwb_mask) { + DPU_DEBUG("encoder %d skip flush for concurrent writeback encoder\n", + DRMID(drm_enc)); + return; + } + + if (extra_flush_bits && ctl->ops.update_pending_flush) ctl->ops.update_pending_flush(ctl, extra_flush_bits); @@ -1645,6 +1655,8 @@ static void _dpu_encoder_trigger_flush(struct drm_encoder *drm_enc, */ static void _dpu_encoder_trigger_start(struct dpu_encoder_phys *phys) { + struct dpu_encoder_virt *dpu_enc = to_dpu_encoder_virt(phys->parent); + if (!phys) { DPU_ERROR("invalid argument(s)\n"); return; } @@ -1655,6 +1667,12 @@ static void _dpu_encoder_trigger_start(struct dpu_encoder_phys *phys) return; } + if (phys->parent->encoder_type == DRM_MODE_ENCODER_VIRTUAL && + dpu_enc->cwb_mask) { + DPU_DEBUG("encoder %d CWB enabled, skipping\n", DRMID(phys->parent)); + return; + } + if (phys->ops.trigger_start && phys->enable_state != DPU_ENC_DISABLED) phys->ops.trigger_start(phys); }

From patchwork Wed Jan 29 03:20:45 2025
X-Patchwork-Submitter: Jessica Zhang
X-Patchwork-Id: 13953429
From: Jessica Zhang
Date: Tue, 28 Jan 2025 19:20:45 -0800
Subject: [PATCH v5 13/14] drm/msm/dpu: Reorder encoder kickoff for CWB
Message-ID: <20250128-concurrent-wb-v5-13-6464ca5360df@quicinc.com>

Add a helper that will handle the correct order of the encoder kickoffs for concurrent writeback.

For concurrent writeback, the realtime encoder must always kick off last, as it is the one that calls trigger flush and trigger start. This avoids the following scenario, where the writeback encoder increments the pending kickoff count after the WB_DONE interrupt has fired: if the realtime encoder is kicked off first, the encoder kickoff will flush/start the encoder and increment the pending kickoff count. The WB_DONE interrupt then fires (before the writeback encoder is kicked off). When the writeback encoder enters its kickoff, it will skip the flush/start (due to CWB being enabled) and hit a frame done timeout, as the frame was kicked off (and the WB_DONE interrupt fired) without the pending kickoff count being incremented.

In addition, the writeback timer should only start after the realtime encoder is kicked off, to ensure that we don't get timeouts when the system has a heavy load (e.g.
when debug logs are enabled) Reviewed-by: Dmitry Baryshkov Reviewed-by: Abhinav Kumar Signed-off-by: Jessica Zhang --- drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c | 74 ++++++++++++++++++++++++++------ 1 file changed, 60 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c index 0f8b220b52cec6c6844d3f5e5d924d47c55eb65b..5eabeb1ae4208b707be2413afcda9fa5ed1a2f65 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c @@ -953,6 +953,45 @@ static int _dpu_crtc_wait_for_frame_done(struct drm_crtc *crtc) return rc; } +static int dpu_crtc_kickoff_clone_mode(struct drm_crtc *crtc) +{ + struct drm_encoder *encoder; + struct drm_encoder *rt_encoder = NULL, *wb_encoder; + struct dpu_kms *dpu_kms = _dpu_crtc_get_kms(crtc); + + /* Find encoder for real time display */ + drm_for_each_encoder_mask(encoder, crtc->dev, + crtc->state->encoder_mask) { + if (encoder->encoder_type == DRM_MODE_ENCODER_VIRTUAL) + wb_encoder = encoder; + else + rt_encoder = encoder; + } + + if (!rt_encoder || !wb_encoder) { + DRM_DEBUG_ATOMIC("real time or wb encoder not found\n"); + return -EINVAL; + } + + dpu_encoder_prepare_for_kickoff(wb_encoder); + dpu_encoder_prepare_for_kickoff(rt_encoder); + + dpu_vbif_clear_errors(dpu_kms); + + /* + * Kickoff real time encoder last as it's the encoder that + * will do the flush + */ + dpu_encoder_kickoff(wb_encoder); + dpu_encoder_kickoff(rt_encoder); + + /* Don't start frame done timers until the kickoffs have finished */ + dpu_encoder_start_frame_done_timer(wb_encoder); + dpu_encoder_start_frame_done_timer(rt_encoder); + + return 0; +} + /** * dpu_crtc_commit_kickoff - trigger kickoff of the commit for this crtc * @crtc: Pointer to drm crtc object @@ -981,13 +1020,27 @@ void dpu_crtc_commit_kickoff(struct drm_crtc *crtc) goto end; } } - /* - * Encoder will flush/start now, unless it has a tx pending. If so, it - * may delay and flush at an irq event (e.g. ppdone) - */ - drm_for_each_encoder_mask(encoder, crtc->dev, - crtc->state->encoder_mask) - dpu_encoder_prepare_for_kickoff(encoder); + + if (drm_crtc_in_clone_mode(crtc->state)) { + if (dpu_crtc_kickoff_clone_mode(crtc)) + goto end; + } else { + /* + * Encoder will flush/start now, unless it has a tx pending. + * If so, it may delay and flush at an irq event (e.g. 
ppdone) + */ + drm_for_each_encoder_mask(encoder, crtc->dev, + crtc->state->encoder_mask) + dpu_encoder_prepare_for_kickoff(encoder); + + dpu_vbif_clear_errors(dpu_kms); + + drm_for_each_encoder_mask(encoder, crtc->dev, + crtc->state->encoder_mask) { + dpu_encoder_kickoff(encoder); + dpu_encoder_start_frame_done_timer(encoder); + } + } if (atomic_inc_return(&dpu_crtc->frame_pending) == 1) { /* acquire bandwidth and other resources */ @@ -997,13 +1050,6 @@ void dpu_crtc_commit_kickoff(struct drm_crtc *crtc) dpu_crtc->play_count++; - dpu_vbif_clear_errors(dpu_kms); - - drm_for_each_encoder_mask(encoder, crtc->dev, crtc->state->encoder_mask) { - dpu_encoder_kickoff(encoder); - dpu_encoder_start_frame_done_timer(encoder); - } - reinit_completion(&dpu_crtc->frame_done_comp); end:

From patchwork Wed Jan 29 03:20:46 2025
X-Patchwork-Submitter: Jessica Zhang
X-Patchwork-Id: 13953437
From: Jessica Zhang
Date: Tue, 28 Jan 2025 19:20:46 -0800
Subject: [PATCH v5 14/14] drm/msm/dpu: Set possible clones for all encoders
Message-ID: <20250128-concurrent-wb-v5-14-6464ca5360df@quicinc.com>

Set writeback encoders as possible clones for DSI encoders and vice versa.
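As background for this change, the DRM core models cloning with the possible_clones bitmask on each encoder: the bit returned by drm_encoder_mask() for another encoder means the two may be active on the same CRTC at the same time. A minimal sketch of how such a mask can be consulted is below; the encoders_can_clone() helper is illustrative and not part of this series.

	#include <drm/drm_encoder.h>

	/*
	 * Illustrative helper: two encoders may be enabled on the same CRTC
	 * at the same time only if each one appears in the other's
	 * possible_clones bitmask.
	 */
	static bool encoders_can_clone(struct drm_encoder *a, struct drm_encoder *b)
	{
		return (a->possible_clones & drm_encoder_mask(b)) &&
		       (b->possible_clones & drm_encoder_mask(a));
	}

The patch below builds exactly this kind of mask in dpu_encoder_get_clones() and assigns it in _dpu_kms_drm_obj_init() when the catalog reports CWB blocks.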
Reviewed-by: Dmitry Baryshkov Reviewed-by: Abhinav Kumar Signed-off-by: Jessica Zhang --- drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 32 +++++++++++++++++++++++++++++ drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h | 2 ++ drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 7 +++++-- 3 files changed, 39 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c index ee740cc881c483370336d84361c2fab7143ff2fc..df4ae6fe0c562983729a79cddc8b9e4306db768a 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c @@ -2557,6 +2557,38 @@ static int dpu_encoder_virt_add_phys_encs( return 0; } +/** + * dpu_encoder_get_clones - Calculate the possible_clones for DPU encoder + * @drm_enc: DRM encoder pointer + * Returns: possible_clones mask + */ +uint32_t dpu_encoder_get_clones(struct drm_encoder *drm_enc) +{ + struct drm_encoder *curr; + int type = drm_enc->encoder_type; + uint32_t clone_mask = drm_encoder_mask(drm_enc); + + /* + * Set writeback as possible clones of real-time DSI encoders and vice + * versa + * + * Writeback encoders can't be clones of each other and DSI + * encoders can't be clones of each other. + * + * TODO: Add DP encoders as valid possible clones for writeback encoders + * (and vice versa) once concurrent writeback has been validated for DP + */ + drm_for_each_encoder(curr, drm_enc->dev) { + if ((type == DRM_MODE_ENCODER_VIRTUAL && + curr->encoder_type == DRM_MODE_ENCODER_DSI) || + (type == DRM_MODE_ENCODER_DSI && + curr->encoder_type == DRM_MODE_ENCODER_VIRTUAL)) + clone_mask |= drm_encoder_mask(curr); + } + + return clone_mask; +} + static int dpu_encoder_setup_display(struct dpu_encoder_virt *dpu_enc, struct dpu_kms *dpu_kms, struct msm_display_info *disp_info) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h index 8503386bb50330a38b065c32d259de18166464d6..ca1ca2e51d7ead0eb34b27f3168e6bb06a71a11a 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.h @@ -60,6 +60,8 @@ enum dpu_intf_mode dpu_encoder_get_intf_mode(struct drm_encoder *encoder); void dpu_encoder_virt_runtime_resume(struct drm_encoder *encoder); +uint32_t dpu_encoder_get_clones(struct drm_encoder *drm_enc); + struct drm_encoder *dpu_encoder_init(struct drm_device *dev, int drm_enc_mode, struct msm_display_info *disp_info); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c index 423af6f8251cb9c7c07f5a9878c8abc717d947a4..ec40405b27a1b53d44142c5f84e62bed878a0f6d 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -2,7 +2,7 @@ /* * Copyright (C) 2013 Red Hat * Copyright (c) 2014-2018, The Linux Foundation. All rights reserved. - * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. + * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved. * * Author: Rob Clark */ @@ -824,8 +824,11 @@ static int _dpu_kms_drm_obj_init(struct dpu_kms *dpu_kms) return ret; num_encoders = 0; - drm_for_each_encoder(encoder, dev) + drm_for_each_encoder(encoder, dev) { num_encoders++; + if (catalog->cwb_count > 0) + encoder->possible_clones = dpu_encoder_get_clones(encoder); + } max_crtc_count = min(catalog->mixer_count, num_encoders);