From patchwork Tue Apr 28 23:35:38 2015
X-Patchwork-Submitter: Hai Li
X-Patchwork-Id: 6292771
From: Hai Li
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, robdclark@gmail.com, Hai Li
Subject: [PATCH v2 2/2] drm/msm/mdp5: Wait for PP_DONE irq for command mode CRTC atomic commit
Date: Tue, 28 Apr 2015 19:35:38 -0400
Message-Id: <1430264138-31349-3-git-send-email-hali@codeaurora.org>
X-Mailer: git-send-email 1.8.2.1
In-Reply-To: <1430264138-31349-1-git-send-email-hali@codeaurora.org>
References: <1430264138-31349-1-git-send-email-hali@codeaurora.org>

CRTCs in the DSI command mode data path should wait for pingpong done,
instead of vblank, to finish an atomic commit. This change enables the
PP_DONE irq on command mode CRTCs and waits for this irq to happen
before the atomic commit completes.
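For reference, the synchronization being added boils down to the standard kernel
completion pattern sketched below: re-arm the completion before the FLUSH/START
triggers, complete it from the PP_DONE irq handler, and have the commit path wait
on it with a timeout. The struct and function names in this sketch (pp_wait_sketch,
pp_wait_arm, and so on) are illustrative only and are not part of the driver; the
real wiring is in the diff that follows.

/* Minimal sketch of the PP_DONE wait pattern; not driver code. */
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/types.h>

struct pp_wait_sketch {
	struct completion pp_completion;	/* signalled by the PP_DONE irq */
	bool cmd_mode;				/* true only for DSI command mode */
};

static void pp_wait_init(struct pp_wait_sketch *s)
{
	init_completion(&s->pp_completion);
}

/* Called before the FLUSH and START triggers are written, so a PP_DONE
 * that fires right after the SW trigger cannot be missed.
 */
static void pp_wait_arm(struct pp_wait_sketch *s)
{
	if (s->cmd_mode)
		reinit_completion(&s->pp_completion);
}

/* Called from the PP_DONE irq handler. */
static void pp_wait_signal(struct pp_wait_sketch *s)
{
	complete(&s->pp_completion);
}

/* Called when waiting for the commit to be done; the video mode path
 * waits for flush done instead and never touches the completion.
 */
static int pp_wait_done(struct pp_wait_sketch *s)
{
	if (!s->cmd_mode)
		return 0;

	return wait_for_completion_timeout(&s->pp_completion,
					   msecs_to_jiffies(50)) ? 0 : -ETIMEDOUT;
}

The 50 ms timeout and the cmd_mode flag mirror what the patch itself sets up.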
Signed-off-by: Hai Li
---
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c |  4 --
 drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c        | 82 ++++++++++++++++++++-----
 2 files changed, 66 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c
index e4e8956..5f87357 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c
@@ -216,16 +216,12 @@ static void mdp5_cmd_encoder_mode_set(struct drm_encoder *encoder,
 static void mdp5_cmd_encoder_disable(struct drm_encoder *encoder)
 {
 	struct mdp5_cmd_encoder *mdp5_cmd_enc = to_mdp5_cmd_encoder(encoder);
-	struct mdp5_kms *mdp5_kms = get_kms(encoder);
 	struct mdp5_ctl *ctl = mdp5_crtc_get_ctl(encoder->crtc);
 	struct mdp5_interface *intf = &mdp5_cmd_enc->intf;
-	int lm = mdp5_crtc_get_lm(encoder->crtc);
 
 	if (WARN_ON(!mdp5_cmd_enc->enabled))
 		return;
 
-	/* Wait for the last frame done */
-	mdp_irq_wait(&mdp5_kms->base, lm2ppdone(lm));
 	pingpong_tearcheck_disable(encoder);
 
 	mdp5_ctl_set_encoder_state(ctl, false);
diff --git a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c
index aa0b0c5..8d9d67a 100644
--- a/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c
+++ b/drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2014 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2014-2015 The Linux Foundation. All rights reserved.
  * Copyright (C) 2013 Red Hat
  * Author: Rob Clark
  *
@@ -60,6 +60,11 @@ struct mdp5_crtc {
 
 	struct mdp_irq vblank;
 	struct mdp_irq err;
+	struct mdp_irq pp_done;
+
+	struct completion pp_completion;
+
+	bool cmd_mode;
 
 	struct {
 		/* protect REG_MDP5_LM_CURSOR* registers and cursor scanout_bo*/
@@ -87,6 +92,12 @@ static void request_pending(struct drm_crtc *crtc, uint32_t pending)
 	mdp_irq_register(&get_kms(crtc)->base, &mdp5_crtc->vblank);
 }
 
+static void request_pp_done_pending(struct drm_crtc *crtc)
+{
+	struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
+	reinit_completion(&mdp5_crtc->pp_completion);
+}
+
 static u32 crtc_flush(struct drm_crtc *crtc, u32 flush_mask)
 {
 	struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
@@ -282,6 +293,9 @@ static void mdp5_crtc_disable(struct drm_crtc *crtc)
 	/* set STAGE_UNUSED for all layers */
 	mdp5_ctl_blend(mdp5_crtc->ctl, mdp5_crtc->lm, 0x00000000);
 
+	if (mdp5_crtc->cmd_mode)
+		mdp_irq_unregister(&mdp5_kms->base, &mdp5_crtc->pp_done);
+
 	mdp_irq_unregister(&mdp5_kms->base, &mdp5_crtc->err);
 	mdp5_disable(mdp5_kms);
 
@@ -301,6 +315,9 @@ static void mdp5_crtc_enable(struct drm_crtc *crtc)
 
 	mdp5_enable(mdp5_kms);
 	mdp_irq_register(&mdp5_kms->base, &mdp5_crtc->err);
+	if (mdp5_crtc->cmd_mode)
+		mdp_irq_register(&mdp5_kms->base, &mdp5_crtc->pp_done);
+
 	mdp5_crtc->enabled = true;
 }
 
@@ -402,6 +419,15 @@ static void mdp5_crtc_atomic_flush(struct drm_crtc *crtc)
 
 	blend_setup(crtc);
 
+	/* PP_DONE irq is only used by command mode for now.
+	 * It is better to request pending before FLUSH and START trigger
+	 * to make sure no pp_done irq missed.
+	 * This is safe because no pp_done will happen before SW trigger
+	 * in command mode.
+	 */
+	if (mdp5_crtc->cmd_mode)
+		request_pp_done_pending(crtc);
+
 	mdp5_crtc->flushed_mask = crtc_flush_all(crtc);
 
 	request_pending(crtc, PENDING_FLIP);
@@ -612,6 +638,26 @@ static void mdp5_crtc_err_irq(struct mdp_irq *irq, uint32_t irqstatus)
 	DBG("%s: error: %08x", mdp5_crtc->name, irqstatus);
 }
 
+static void mdp5_crtc_pp_done_irq(struct mdp_irq *irq, uint32_t irqstatus)
+{
+	struct mdp5_crtc *mdp5_crtc = container_of(irq, struct mdp5_crtc,
+								pp_done);
+
+	complete(&mdp5_crtc->pp_completion);
+}
+
+static void mdp5_crtc_wait_for_pp_done(struct drm_crtc *crtc)
+{
+	struct drm_device *dev = crtc->dev;
+	struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
+	int ret;
+
+	ret = wait_for_completion_timeout(&mdp5_crtc->pp_completion,
+						msecs_to_jiffies(50));
+	if (ret == 0)
+		dev_warn(dev->dev, "pp done time out, lm=%d\n", mdp5_crtc->lm);
+}
+
 static void mdp5_crtc_wait_for_flush_done(struct drm_crtc *crtc)
 {
 	struct drm_device *dev = crtc->dev;
@@ -659,16 +705,18 @@ void mdp5_crtc_set_intf(struct drm_crtc *crtc, struct mdp5_interface *intf)
 
 	/* now that we know what irq's we want: */
 	mdp5_crtc->err.irqmask = intf2err(intf->num);
-
-	/* Register command mode Pingpong done as vblank for now,
-	 * so that atomic commit should wait for it to finish.
-	 * Ideally, in the future, we should take rd_ptr done as vblank,
-	 * and let atomic commit wait for pingpong done for commond mode.
-	 */
-	if (intf->mode == MDP5_INTF_DSI_MODE_COMMAND)
-		mdp5_crtc->vblank.irqmask = lm2ppdone(lm);
-	else
-		mdp5_crtc->vblank.irqmask = intf2vblank(lm, intf);
+	mdp5_crtc->vblank.irqmask = intf2vblank(lm, intf);
+
+	if ((intf->type == INTF_DSI) &&
+		(intf->mode == MDP5_INTF_DSI_MODE_COMMAND)) {
+		mdp5_crtc->pp_done.irqmask = lm2ppdone(lm);
+		mdp5_crtc->pp_done.irq = mdp5_crtc_pp_done_irq;
+		mdp5_crtc->cmd_mode = true;
+	} else {
+		mdp5_crtc->pp_done.irqmask = 0;
+		mdp5_crtc->pp_done.irq = NULL;
+		mdp5_crtc->cmd_mode = false;
+	}
 
 	mdp_irq_update(&mdp5_kms->base);
 
@@ -689,11 +737,12 @@ struct mdp5_ctl *mdp5_crtc_get_ctl(struct drm_crtc *crtc)
 
 void mdp5_crtc_wait_for_commit_done(struct drm_crtc *crtc)
 {
-	/* wait_for_flush_done is the only case for now.
-	 * Later we will have command mode CRTC to wait for
-	 * other event.
-	 */
-	mdp5_crtc_wait_for_flush_done(crtc);
+	struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
+
+	if (mdp5_crtc->cmd_mode)
+		mdp5_crtc_wait_for_pp_done(crtc);
+	else
+		mdp5_crtc_wait_for_flush_done(crtc);
 }
 
 /* initialize crtc */
@@ -714,6 +763,7 @@ struct drm_crtc *mdp5_crtc_init(struct drm_device *dev,
 
 	spin_lock_init(&mdp5_crtc->lm_lock);
 	spin_lock_init(&mdp5_crtc->cursor.lock);
+	init_completion(&mdp5_crtc->pp_completion);
 
 	mdp5_crtc->vblank.irq = mdp5_crtc_vblank_irq;
 	mdp5_crtc->err.irq = mdp5_crtc_err_irq;