From patchwork Sun Mar 5 16:00:02 2017
X-Patchwork-Submitter: Kieran Bingham
X-Patchwork-Id: 9604677
X-Patchwork-Delegate: geert@linux-m68k.org
From: Kieran Bingham
To: laurent.pinchart@ideasonboard.com
Cc: linux-renesas-soc@vger.kernel.org, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org, Kieran Bingham
Subject: [PATCH v3 1/3] v4l: vsp1: Postpone frame end handling in event of display list race
Date: Sun, 5 Mar 2017 16:00:02 +0000
X-Mailer: git-send-email 2.7.4
X-Mailing-List: linux-renesas-soc@vger.kernel.org

If we try to commit the display list while an update is already pending,
we have missed our opportunity: the display list manager will hold the
commit until the next interrupt. In that event, skip the pipeline
completion callback handler so that the pipeline does not mistakenly
report frame completion to the user.
Signed-off-by: Kieran Bingham
Reviewed-by: Laurent Pinchart
---
 drivers/media/platform/vsp1/vsp1_dl.c   | 19 +++++++++++++++++--
 drivers/media/platform/vsp1/vsp1_dl.h   |  2 +-
 drivers/media/platform/vsp1/vsp1_pipe.c | 13 ++++++++++++-
 3 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/drivers/media/platform/vsp1/vsp1_dl.c b/drivers/media/platform/vsp1/vsp1_dl.c
index b9e5027778ff..f449ca689554 100644
--- a/drivers/media/platform/vsp1/vsp1_dl.c
+++ b/drivers/media/platform/vsp1/vsp1_dl.c
@@ -562,9 +562,19 @@ void vsp1_dlm_irq_display_start(struct vsp1_dl_manager *dlm)
         spin_unlock(&dlm->lock);
 }
 
-void vsp1_dlm_irq_frame_end(struct vsp1_dl_manager *dlm)
+/**
+ * vsp1_dlm_irq_frame_end - Display list handler for the frame end interrupt
+ * @dlm: the display list manager
+ *
+ * Return true if the previous display list has completed at frame end, or false
+ * if it has been delayed by one frame because the display list commit raced
+ * with the frame end interrupt. The function always returns true in header mode
+ * as display list processing is then not continuous and races never occur.
+ */
+bool vsp1_dlm_irq_frame_end(struct vsp1_dl_manager *dlm)
 {
         struct vsp1_device *vsp1 = dlm->vsp1;
+        bool completed = false;
 
         spin_lock(&dlm->lock);
 
@@ -576,8 +586,10 @@ void vsp1_dlm_irq_frame_end(struct vsp1_dl_manager *dlm)
          * perform any operation as there can't be any new display list queued
          * in that case.
          */
-        if (dlm->mode == VSP1_DL_MODE_HEADER)
+        if (dlm->mode == VSP1_DL_MODE_HEADER) {
+                completed = true;
                 goto done;
+        }
 
         /*
          * The UPD bit set indicates that the commit operation raced with the
@@ -597,6 +609,7 @@ void vsp1_dlm_irq_frame_end(struct vsp1_dl_manager *dlm)
         if (dlm->queued) {
                 dlm->active = dlm->queued;
                 dlm->queued = NULL;
+                completed = true;
         }
 
         /*
@@ -619,6 +632,8 @@ void vsp1_dlm_irq_frame_end(struct vsp1_dl_manager *dlm)
 
 done:
         spin_unlock(&dlm->lock);
+
+        return completed;
 }
 
 /* Hardware Setup */
diff --git a/drivers/media/platform/vsp1/vsp1_dl.h b/drivers/media/platform/vsp1/vsp1_dl.h
index 7131aa3c5978..6ec1380a10af 100644
--- a/drivers/media/platform/vsp1/vsp1_dl.h
+++ b/drivers/media/platform/vsp1/vsp1_dl.h
@@ -28,7 +28,7 @@ struct vsp1_dl_manager *vsp1_dlm_create(struct vsp1_device *vsp1,
 void vsp1_dlm_destroy(struct vsp1_dl_manager *dlm);
 void vsp1_dlm_reset(struct vsp1_dl_manager *dlm);
 void vsp1_dlm_irq_display_start(struct vsp1_dl_manager *dlm);
-void vsp1_dlm_irq_frame_end(struct vsp1_dl_manager *dlm);
+bool vsp1_dlm_irq_frame_end(struct vsp1_dl_manager *dlm);
 
 struct vsp1_dl_list *vsp1_dl_list_get(struct vsp1_dl_manager *dlm);
 void vsp1_dl_list_put(struct vsp1_dl_list *dl);
diff --git a/drivers/media/platform/vsp1/vsp1_pipe.c b/drivers/media/platform/vsp1/vsp1_pipe.c
index 35364f594e19..d15327701ad8 100644
--- a/drivers/media/platform/vsp1/vsp1_pipe.c
+++ b/drivers/media/platform/vsp1/vsp1_pipe.c
@@ -304,10 +304,21 @@ bool vsp1_pipeline_ready(struct vsp1_pipeline *pipe)
 
 void vsp1_pipeline_frame_end(struct vsp1_pipeline *pipe)
 {
+        bool completed;
+
         if (pipe == NULL)
                 return;
 
-        vsp1_dlm_irq_frame_end(pipe->output->dlm);
+        completed = vsp1_dlm_irq_frame_end(pipe->output->dlm);
+        if (!completed) {
+                /*
+                 * If the DL commit raced with the frame end interrupt, the
+                 * commit ends up being postponed by one frame. Return
+                 * immediately without calling the pipeline's frame end handler
+                 * or incrementing the sequence number.
+                 */
+                return;
+        }
 
         if (pipe->frame_end)
                 pipe->frame_end(pipe);
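
As an aside for readers following the logic outside the driver, the change boils down to a deferred-completion pattern: the frame end handler reports whether the queued display list actually became active, and the caller only signals completion (frame end callback plus sequence number) when it did. The following is a minimal, self-contained userspace C sketch of that pattern, not the driver's code: every identifier (fake_dlm, fake_pipe, update_pending, and so on) is invented for illustration, and the update_pending flag merely stands in for the hardware UPD bit the real handler checks.

/*
 * Minimal userspace sketch of the deferred-completion pattern used by this
 * patch.  All names are illustrative and are not part of the VSP1 API.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_dlm {
        bool header_mode;       /* header mode never races */
        bool update_pending;    /* stands in for the hardware UPD bit */
        void *queued;           /* display list committed for the next frame */
        void *active;           /* display list currently being processed */
};

/* Frame end handler: returns true only if the queued list became active. */
static bool fake_dlm_irq_frame_end(struct fake_dlm *dlm)
{
        bool completed = false;

        if (dlm->header_mode)
                return true;

        /*
         * The commit raced with the interrupt: the hardware will only pick
         * the new list up at the next frame end, so report no completion.
         */
        if (dlm->update_pending)
                return false;

        if (dlm->queued) {
                dlm->active = dlm->queued;
                dlm->queued = NULL;
                completed = true;
        }

        return completed;
}

struct fake_pipe {
        struct fake_dlm *dlm;
        unsigned int sequence;
        void (*frame_end)(struct fake_pipe *pipe);
};

static void fake_pipeline_frame_end(struct fake_pipe *pipe)
{
        if (!fake_dlm_irq_frame_end(pipe->dlm))
                return; /* postponed by one frame: don't notify the user */

        if (pipe->frame_end)
                pipe->frame_end(pipe);
        pipe->sequence++;
}

static void on_frame_end(struct fake_pipe *pipe)
{
        printf("frame %u completed\n", pipe->sequence);
}

int main(void)
{
        int dummy;
        struct fake_dlm dlm = { .queued = &dummy };
        struct fake_pipe pipe = { .dlm = &dlm, .frame_end = on_frame_end };

        /* Normal case: the queued list completes and the user is notified. */
        fake_pipeline_frame_end(&pipe);

        /* Racing case: the commit landed too late, completion is postponed. */
        dlm.queued = &dummy;
        dlm.update_pending = true;
        fake_pipeline_frame_end(&pipe); /* no notification */

        dlm.update_pending = false;
        fake_pipeline_frame_end(&pipe); /* completes one frame later */

        return 0;
}

Built with any C compiler, the second call produces no notification and the postponed frame is reported on the following call, which mirrors the behaviour the patch gives vsp1_pipeline_frame_end().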