From patchwork Wed Jun 30 06:27:45 2021
X-Patchwork-Submitter: Boris Brezillon
X-Patchwork-Id: 12351311
From: Boris Brezillon
To: dri-devel@lists.freedesktop.org
Cc: Tomeu Vizoso, Steven Price, Rob Herring, Alyssa Rosenzweig,
    Boris Brezillon, Robin Murphy
Subject: [PATCH v6 10/16] drm/panfrost: Make sure job interrupts are masked before resetting
Date: Wed, 30 Jun 2021 08:27:45 +0200
Message-Id: <20210630062751.2832545-11-boris.brezillon@collabora.com>
In-Reply-To: <20210630062751.2832545-1-boris.brezillon@collabora.com>
References: <20210630062751.2832545-1-boris.brezillon@collabora.com>

This is not yet needed because we let active jobs be killed by the
reset and don't really bother making sure they can be restarted. But
once we start adding soft-stop support, controlling when we deal with
the remaining interrupts and making sure those are handled before the
reset is issued gets tricky if we keep job interrupts active.

Let's prepare for that and mask+flush job IRQs before issuing a reset.
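For context, the ordering relied on here boils down to: mask the interrupt
source at the device, call synchronize_irq() to flush any handler that is
already running, and only then touch the in-flight job state. A minimal,
hypothetical sketch of that pattern (mydev, mydev_write, MYDEV_INT_MASK,
mydev_evict_unfinished_jobs and mydev_hw_reset are made-up placeholders,
not Panfrost symbols):

    #include <linux/interrupt.h>

    /* Hypothetical reset path illustrating the mask+flush ordering. */
    static void mydev_reset(struct mydev *mydev)
    {
            /* Mask the interrupt at the device so no new job IRQs fire. */
            mydev_write(mydev, MYDEV_INT_MASK, 0);

            /* Wait for any handler already in flight (hard or threaded)
             * to return before we start tearing down job state.
             */
            synchronize_irq(mydev->irq);

            /* Safe now: no IRQ handler can observe the jobs we evict. */
            mydev_evict_unfinished_jobs(mydev);
            mydev_hw_reset(mydev);
    }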
v4:
* Add a comment explaining why we WARN_ON(!job) in the irq handler
* Keep taking the job_lock when evicting stalled jobs

v3:
* New patch

Signed-off-by: Boris Brezillon
Reviewed-by: Steven Price
---
 drivers/gpu/drm/panfrost/panfrost_job.c | 27 ++++++++++++++++++++-----
 1 file changed, 22 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 59c23c91e47c..11ff33841caf 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -34,6 +34,7 @@ struct panfrost_queue_state {
 struct panfrost_job_slot {
 	struct panfrost_queue_state queue[NUM_JOB_SLOTS];
 	spinlock_t job_lock;
+	int irq;
 };
 
 static struct panfrost_job *
@@ -389,6 +390,15 @@ static void panfrost_reset(struct panfrost_device *pfdev,
 	if (bad)
 		drm_sched_increase_karma(bad);
 
+	/* Mask job interrupts and synchronize to make sure we won't be
+	 * interrupted during our reset.
+	 */
+	job_write(pfdev, JOB_INT_MASK, 0);
+	synchronize_irq(pfdev->js->irq);
+
+	/* Schedulers are stopped and interrupts are masked+flushed, we don't
+	 * need to protect the 'evict unfinished jobs' lock with the job_lock.
+	 */
 	spin_lock(&pfdev->js->job_lock);
 	for (i = 0; i < NUM_JOB_SLOTS; i++) {
 		if (pfdev->jobs[i]) {
@@ -486,7 +496,14 @@ static void panfrost_job_handle_irq(struct panfrost_device *pfdev, u32 status)
 		struct panfrost_job *job;
 
 		job = pfdev->jobs[j];
-		/* Only NULL if job timeout occurred */
+		/* The only reason this job could be NULL is if the
+		 * job IRQ handler is called just after the
+		 * in-flight job eviction in the reset path, and
+		 * this shouldn't happen because the job IRQ has
+		 * been masked and synchronized when this eviction
+		 * happens.
+		 */
+		WARN_ON(!job);
 		if (job) {
 			pfdev->jobs[j] = NULL;
 
@@ -546,7 +563,7 @@ static void panfrost_reset_work(struct work_struct *work)
 int panfrost_job_init(struct panfrost_device *pfdev)
 {
 	struct panfrost_job_slot *js;
-	int ret, j, irq;
+	int ret, j;
 
 	INIT_WORK(&pfdev->reset.work, panfrost_reset_work);
 
@@ -556,11 +573,11 @@ int panfrost_job_init(struct panfrost_device *pfdev)
 
 	spin_lock_init(&js->job_lock);
 
-	irq = platform_get_irq_byname(to_platform_device(pfdev->dev), "job");
-	if (irq <= 0)
+	js->irq = platform_get_irq_byname(to_platform_device(pfdev->dev), "job");
+	if (js->irq <= 0)
 		return -ENODEV;
 
-	ret = devm_request_threaded_irq(pfdev->dev, irq,
+	ret = devm_request_threaded_irq(pfdev->dev, js->irq,
 					panfrost_job_irq_handler,
 					panfrost_job_irq_handler_thread,
 					IRQF_SHARED, KBUILD_MODNAME "-job",
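The panfrost_job_init() hunks above only exist so the IRQ number is kept in
the job-slot state and can later be handed to synchronize_irq() from the
reset path. The same setup, written as a stand-alone hypothetical sketch
(struct mydev, mydev_irq_handler and mydev_irq_thread are placeholders;
platform_get_irq_byname() and devm_request_threaded_irq() are the standard
kernel APIs the patch uses):

    #include <linux/interrupt.h>
    #include <linux/platform_device.h>

    struct mydev {
            int irq;
    };

    static irqreturn_t mydev_irq_handler(int irq, void *data)
    {
            /* Top half: ack/mask in the device, then defer to the thread. */
            return IRQ_WAKE_THREAD;
    }

    static irqreturn_t mydev_irq_thread(int irq, void *data)
    {
            /* Bottom half: process completed jobs in sleepable context. */
            return IRQ_HANDLED;
    }

    static int mydev_probe(struct platform_device *pdev)
    {
            struct mydev *mydev;

            mydev = devm_kzalloc(&pdev->dev, sizeof(*mydev), GFP_KERNEL);
            if (!mydev)
                    return -ENOMEM;

            /* Look up the named IRQ and remember the number so a later
             * reset path can call synchronize_irq(mydev->irq).
             */
            mydev->irq = platform_get_irq_byname(pdev, "job");
            if (mydev->irq < 0)
                    return mydev->irq;

            /* Hard handler + threaded handler, freed automatically on unbind. */
            return devm_request_threaded_irq(&pdev->dev, mydev->irq,
                                             mydev_irq_handler, mydev_irq_thread,
                                             0, "mydev-job", mydev);
    }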