From patchwork Fri Aug 30 07:38:18 2019
X-Patchwork-Submitter: Bibby Hsieh
X-Patchwork-Id: 11123107
From: Bibby Hsieh
To: David Airlie, Matthias Brugger, Daniel Vetter
Subject: [PATCH 1/2] drm/mediatek: Only block updates to CRTCs that have a pending update
Date: Fri, 30 Aug 2019 15:38:18 +0800
Message-ID: <20190830073819.16566-2-bibby.hsieh@mediatek.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20190830073819.16566-1-bibby.hsieh@mediatek.com>
References: <20190830073819.16566-1-bibby.hsieh@mediatek.com>
Cc: drinkcat@chromium.org, Bibby Hsieh, linux-kernel@vger.kernel.org,
 Daniel Kurtz, tfiga@chromium.org, CK Hu, Thierry Reding, Philipp Zabel,
 YT Shen, linux-arm-kernel@lists.infradead.org

Currently we use a single, device-wide mutex to allow only one atomic
update at a time. In truth, we only need to ensure that a single atomic
update is pending per CRTC. In other words, an atomic update should
block only if one of the CRTCs it touches still has a pending update,
and should not block when the pending updates involve other CRTCs only.

Fixes: 119f5173628a ("drm/mediatek: Add DRM Driver for Mediatek SoC MT8173.")
Signed-off-by: Daniel Kurtz
Signed-off-by: Bibby Hsieh
---
 drivers/gpu/drm/mediatek/mtk_drm_crtc.c |  14 +-
 drivers/gpu/drm/mediatek/mtk_drm_drv.c  | 182 +++++++++++++++++++++---
 drivers/gpu/drm/mediatek/mtk_drm_drv.h  |  12 +-
 3 files changed, 184 insertions(+), 24 deletions(-)
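
Note (not part of the patch): the heart of the change is the per-CRTC gate
implemented by mtk_atomic_get_crtcs()/mtk_atomic_put_crtcs() below. Each
commit computes a bitmask of the CRTCs it touches, sleeps until none of those
bits are marked pending, marks them pending, and clears them again once the
commit has finished. As a rough illustration of that gating only, a minimal
userspace model could look like the sketch below, with pthreads standing in
for the kernel waitqueue; the crtc_gate_* names are purely hypothetical.

/*
 * Minimal model of the per-CRTC commit gate: busy_crtcs plays the role of
 * private->commit.crtcs, and crtc_gate_acquire()/crtc_gate_release() play
 * the roles of mtk_atomic_get_crtcs()/mtk_atomic_put_crtcs().
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static pthread_mutex_t gate_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t gate_wake = PTHREAD_COND_INITIALIZER;
static uint32_t busy_crtcs;	/* bitmask of CRTCs with a commit in flight */

/* Block until none of the CRTCs in @mask are pending, then mark them pending. */
static void crtc_gate_acquire(uint32_t mask)
{
	pthread_mutex_lock(&gate_lock);
	while (busy_crtcs & mask)
		pthread_cond_wait(&gate_wake, &gate_lock);
	busy_crtcs |= mask;
	pthread_mutex_unlock(&gate_lock);
}

/* Mark the CRTCs in @mask as no longer pending and wake any waiting committer. */
static void crtc_gate_release(uint32_t mask)
{
	pthread_mutex_lock(&gate_lock);
	busy_crtcs &= ~mask;
	pthread_cond_broadcast(&gate_wake);
	pthread_mutex_unlock(&gate_lock);
}

int main(void)
{
	crtc_gate_acquire(1 << 0);	/* commit touching CRTC 0 */
	crtc_gate_acquire(1 << 1);	/* commit touching CRTC 1 does not block */
	crtc_gate_release(1 << 0);
	crtc_gate_release(1 << 1);
	printf("pending mask now 0x%x\n", busy_crtcs);
	return 0;
}
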
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
index b55970a2869d..7697b40baac0 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
@@ -5,6 +5,7 @@
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -45,6 +46,8 @@ struct mtk_drm_crtc {
 	struct mtk_disp_mutex		*mutex;
 	unsigned int			ddp_comp_nr;
 	struct mtk_ddp_comp		**ddp_comp;
+
+	struct drm_crtc_state		*old_crtc_state;
 };
 
 struct mtk_crtc_state {
@@ -343,6 +346,7 @@ static void mtk_crtc_ddp_hw_fini(struct mtk_drm_crtc *mtk_crtc)
 static void mtk_crtc_ddp_config(struct drm_crtc *crtc)
 {
 	struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
+	struct drm_atomic_state *atomic_state = mtk_crtc->old_crtc_state->state;
 	struct mtk_crtc_state *state = to_mtk_crtc_state(mtk_crtc->base.state);
 	struct mtk_ddp_comp *comp = mtk_crtc->ddp_comp[0];
 	unsigned int i;
@@ -382,6 +386,7 @@ static void mtk_crtc_ddp_config(struct drm_crtc *crtc)
 			}
 		}
 		mtk_crtc->pending_planes = false;
+		mtk_atomic_state_put_queue(atomic_state);
 	}
 }
@@ -451,6 +456,7 @@ static void mtk_drm_crtc_atomic_begin(struct drm_crtc *crtc,
 static void mtk_drm_crtc_atomic_flush(struct drm_crtc *crtc,
 				      struct drm_crtc_state *old_crtc_state)
 {
+	struct drm_atomic_state *old_atomic_state = old_crtc_state->state;
 	struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
 	struct mtk_drm_private *priv = crtc->dev->dev_private;
 	unsigned int pending_planes = 0;
@@ -469,8 +475,13 @@ static void mtk_drm_crtc_atomic_flush(struct drm_crtc *crtc,
 			pending_planes |= BIT(i);
 		}
 	}
-	if (pending_planes)
+
+	if (pending_planes) {
 		mtk_crtc->pending_planes = true;
+		drm_atomic_state_get(old_atomic_state);
+		mtk_crtc->old_crtc_state = old_crtc_state;
+	}
+
 	if (crtc->state->color_mgmt_changed)
 		for (i = 0; i < mtk_crtc->ddp_comp_nr; i++)
 			mtk_ddp_gamma_set(mtk_crtc->ddp_comp[i], crtc->state);
@@ -526,6 +537,7 @@ static int mtk_drm_crtc_init(struct drm_device *drm,
 
 void mtk_crtc_ddp_irq(struct drm_crtc *crtc, struct mtk_ddp_comp *comp)
 {
+	struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
 	struct mtk_drm_private *priv = crtc->dev->dev_private;
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
index c0928b69dc43..b0308a3a7483 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
@@ -31,11 +31,120 @@
 #define DRIVER_MAJOR 1
 #define DRIVER_MINOR 0
 
-static void mtk_atomic_schedule(struct mtk_drm_private *private,
+struct mtk_atomic_state {
+	struct drm_atomic_state base;
+	struct list_head list;
+	struct work_struct work;
+};
+
+static inline struct mtk_atomic_state *to_mtk_state(struct drm_atomic_state *s)
+{
+	return container_of(s, struct mtk_atomic_state, base);
+}
+
+void mtk_atomic_state_put_queue(struct drm_atomic_state *state)
+{
+	struct drm_device *drm = state->dev;
+	struct mtk_drm_private *mtk_drm = drm->dev_private;
+	struct mtk_atomic_state *mtk_state = to_mtk_state(state);
+	unsigned long flags;
+
+	spin_lock_irqsave(&mtk_drm->unreference.lock, flags);
+	list_add_tail(&mtk_state->list, &mtk_drm->unreference.list);
+	spin_unlock_irqrestore(&mtk_drm->unreference.lock, flags);
+
+	schedule_work(&mtk_drm->unreference.work);
+}
+
+static uint32_t mtk_atomic_crtc_mask(struct drm_device *drm,
+				     struct drm_atomic_state *state)
+{
+	uint32_t crtc_mask;
+	int i;
+
+	for (i = 0, crtc_mask = 0; i < drm->mode_config.num_crtc; i++) {
+		struct drm_crtc *crtc = state->crtcs[i].ptr;
+
+		if (crtc)
+			crtc_mask |= (1 << drm_crtc_index(crtc));
+	}
+
+	return crtc_mask;
+}
+
+/*
+ * Block until specified crtcs are no longer pending update, and atomically
+ * mark them as pending update.
+ */
+static int mtk_atomic_get_crtcs(struct drm_device *drm,
+				struct drm_atomic_state *state)
+{
+	struct mtk_drm_private *private = drm->dev_private;
+	uint32_t crtc_mask;
+	int ret;
+
+	crtc_mask = mtk_atomic_crtc_mask(drm, state);
+
+	/*
+	 * Wait for all pending updates to complete for the set of crtcs being
+	 * changed in this atomic commit
+	 */
+	spin_lock(&private->commit.crtcs_event.lock);
+	ret = wait_event_interruptible_locked(private->commit.crtcs_event,
+			!(private->commit.crtcs & crtc_mask));
+	if (ret == 0)
+		private->commit.crtcs |= crtc_mask;
+	spin_unlock(&private->commit.crtcs_event.lock);
+
+	return ret;
+}
+
+/*
+ * Mark specified crtcs as no longer pending update.
+ */
+static void mtk_atomic_put_crtcs(struct drm_device *drm,
+				 struct drm_atomic_state *state)
+{
+	struct mtk_drm_private *private = drm->dev_private;
+	uint32_t crtc_mask;
+
+	crtc_mask = mtk_atomic_crtc_mask(drm, state);
+
+	spin_lock(&private->commit.crtcs_event.lock);
+	private->commit.crtcs &= ~crtc_mask;
+	wake_up_all_locked(&private->commit.crtcs_event);
+	spin_unlock(&private->commit.crtcs_event.lock);
+}
+
+static void mtk_unreference_work(struct work_struct *work)
+{
+	struct mtk_drm_private *mtk_drm = container_of(work,
+			struct mtk_drm_private, unreference.work);
+	unsigned long flags;
+	struct mtk_atomic_state *state, *tmp;
+
+	/*
+	 * framebuffers cannot be unreferenced in atomic context.
+	 * Therefore, only hold the spinlock when iterating unreference_list,
+	 * and drop it when doing the unreference.
+	 */
+	spin_lock_irqsave(&mtk_drm->unreference.lock, flags);
+	list_for_each_entry_safe(state, tmp, &mtk_drm->unreference.list, list) {
+		list_del(&state->list);
+		spin_unlock_irqrestore(&mtk_drm->unreference.lock, flags);
+		drm_atomic_state_put(&state->base);
+		spin_lock_irqsave(&mtk_drm->unreference.lock, flags);
+	}
+	spin_unlock_irqrestore(&mtk_drm->unreference.lock, flags);
+}
+
+
+static void mtk_atomic_schedule(struct drm_device *drm,
 				struct drm_atomic_state *state)
 {
-	private->commit.state = state;
-	schedule_work(&private->commit.work);
+	struct mtk_atomic_state *mtk_state = to_mtk_state(state);
+
+	schedule_work(&mtk_state->work);
 }
 
 static void mtk_atomic_wait_for_fences(struct drm_atomic_state *state)
@@ -48,13 +157,10 @@ static void mtk_atomic_wait_for_fences(struct drm_atomic_state *state)
 			mtk_fb_wait(new_plane_state->fb);
 }
 
-static void mtk_atomic_complete(struct mtk_drm_private *private,
+static void mtk_atomic_complete(struct drm_device *drm,
 				struct drm_atomic_state *state)
 {
-	struct drm_device *drm = private->drm;
-
 	mtk_atomic_wait_for_fences(state);
-
 	/*
 	 * Mediatek drm supports runtime PM, so plane registers cannot be
 	 * written when their crtc is disabled.
@@ -77,53 +183,86 @@ static void mtk_atomic_complete(struct mtk_drm_private *private,
 
 	drm_atomic_helper_wait_for_vblanks(drm, state);
 	drm_atomic_helper_cleanup_planes(drm, state);
+	mtk_atomic_put_crtcs(drm, state);
+
 	drm_atomic_state_put(state);
 }
 
 static void mtk_atomic_work(struct work_struct *work)
 {
-	struct mtk_drm_private *private = container_of(work,
-			struct mtk_drm_private, commit.work);
+	struct mtk_atomic_state *mtk_state = container_of(work,
+			struct mtk_atomic_state, work);
+	struct drm_atomic_state *state = &mtk_state->base;
+	struct drm_device *drm = state->dev;
 
-	mtk_atomic_complete(private, private->commit.state);
+	mtk_atomic_complete(drm, state);
 }
 
 static int mtk_atomic_commit(struct drm_device *drm,
			     struct drm_atomic_state *state,
			     bool async)
 {
-	struct mtk_drm_private *private = drm->dev_private;
 	int ret;
 
 	ret = drm_atomic_helper_prepare_planes(drm, state);
 	if (ret)
 		return ret;
 
-	mutex_lock(&private->commit.lock);
-	flush_work(&private->commit.work);
+	ret = mtk_atomic_get_crtcs(drm, state);
+	if (ret) {
+		drm_atomic_helper_cleanup_planes(drm, state);
+		return ret;
+	}
 
 	ret = drm_atomic_helper_swap_state(state, true);
 	if (ret) {
-		mutex_unlock(&private->commit.lock);
 		drm_atomic_helper_cleanup_planes(drm, state);
 		return ret;
 	}
 
 	drm_atomic_state_get(state);
 	if (async)
-		mtk_atomic_schedule(private, state);
+		mtk_atomic_schedule(drm, state);
 	else
-		mtk_atomic_complete(private, state);
-
-	mutex_unlock(&private->commit.lock);
+		mtk_atomic_complete(drm, state);
 
 	return 0;
 }
 
+static struct drm_atomic_state *mtk_drm_atomic_state_alloc(
+		struct drm_device *dev)
+{
+	struct mtk_atomic_state *mtk_state;
+
+	mtk_state = kzalloc(sizeof(*mtk_state), GFP_KERNEL);
+	if (!mtk_state)
+		return NULL;
+
+	if (drm_atomic_state_init(dev, &mtk_state->base) < 0) {
+		kfree(mtk_state);
+		return NULL;
+	}
+
+	INIT_LIST_HEAD(&mtk_state->list);
+	INIT_WORK(&mtk_state->work, mtk_atomic_work);
+
+	return &mtk_state->base;
+}
+
+static void mtk_drm_atomic_state_free(struct drm_atomic_state *state)
+{
+	struct mtk_atomic_state *mtk_state = to_mtk_state(state);
+
+	drm_atomic_state_default_release(state);
+	kfree(mtk_state);
+}
+
 static const struct drm_mode_config_funcs mtk_drm_mode_config_funcs = {
 	.fb_create = mtk_drm_mode_fb_create,
 	.atomic_check = drm_atomic_helper_check,
 	.atomic_commit = mtk_atomic_commit,
+	.atomic_state_alloc = mtk_drm_atomic_state_alloc,
+	.atomic_state_free = mtk_drm_atomic_state_free
 };
 
 static const enum mtk_ddp_comp_id mt2701_mtk_ddp_main[] = {
@@ -319,6 +458,11 @@ static int mtk_drm_kms_init(struct drm_device *drm)
 	drm_kms_helper_poll_init(drm);
 	drm_mode_config_reset(drm);
 
+	INIT_WORK(&private->unreference.work, mtk_unreference_work);
+	INIT_LIST_HEAD(&private->unreference.list);
+	spin_lock_init(&private->unreference.lock);
+	init_waitqueue_head(&private->commit.crtcs_event);
+
 	return 0;
 
 err_component_unbind:
@@ -504,8 +648,6 @@ static int mtk_drm_probe(struct platform_device *pdev)
 	if (!private)
 		return -ENOMEM;
 
-	mutex_init(&private->commit.lock);
-	INIT_WORK(&private->commit.work, mtk_atomic_work);
 	private->data = of_device_get_match_data(dev);
 
 	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.h b/drivers/gpu/drm/mediatek/mtk_drm_drv.h
index 823ba4081c18..0934f83b860d 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.h
+++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.h
@@ -48,12 +48,16 @@ struct mtk_drm_private {
 	const struct mtk_mmsys_driver_data *data;
 
 	struct {
-		struct drm_atomic_state *state;
-		struct work_struct work;
-		struct mutex lock;
+		uint32_t crtcs;
+		wait_queue_head_t crtcs_event;
 	} commit;
 
 	struct drm_atomic_state *suspend_state;
 
+	struct {
+		struct work_struct work;
+		struct list_head list;
+		spinlock_t lock;
+	} unreference;
 };
 
 extern struct platform_driver mtk_ddp_driver;
@@ -64,4 +68,6 @@ extern struct platform_driver mtk_dpi_driver;
 extern struct platform_driver mtk_dsi_driver;
 extern struct platform_driver mtk_mipi_tx_driver;
 
+void mtk_atomic_state_put_queue(struct drm_atomic_state *state);
+
 #endif /* MTK_DRM_DRV_H */
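
A closing note on mtk_unreference_work() above: the final
drm_atomic_state_put() may free framebuffers, which must not happen in atomic
context, so the worker detaches one entry at a time and drops the spinlock
around each put. A rough userspace model of that drain pattern is sketched
below for illustration only; the names are not from the driver, and pthreads
stand in for the kernel spinlock.

#include <pthread.h>
#include <stdlib.h>

struct node {
	struct node *next;
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *pending_list;	/* plays the role of unreference.list */

static void heavy_release(struct node *n)
{
	free(n);	/* stands in for drm_atomic_state_put() */
}

/* Drain the list, releasing the lock around each put, as the worker does. */
static void drain_pending(void)
{
	pthread_mutex_lock(&list_lock);
	while (pending_list) {
		struct node *n = pending_list;

		pending_list = n->next;		/* detach under the lock */
		pthread_mutex_unlock(&list_lock);
		heavy_release(n);		/* release without the lock held */
		pthread_mutex_lock(&list_lock);
	}
	pthread_mutex_unlock(&list_lock);
}

int main(void)
{
	int i;

	for (i = 0; i < 3; i++) {
		struct node *n = malloc(sizeof(*n));

		if (!n)
			break;
		pthread_mutex_lock(&list_lock);
		n->next = pending_list;
		pending_list = n;
		pthread_mutex_unlock(&list_lock);
	}
	drain_pending();
	return 0;
}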