From patchwork Thu Apr 6 16:06:31 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 13203704
From: Dmitry Osipenko
Date: Thu, 6 Apr 2023 19:06:31 +0300
Message-Id: <20230406160637.541702-2-dmitry.osipenko@collabora.com>
In-Reply-To: <20230406160637.541702-1-dmitry.osipenko@collabora.com>
References: <20230406160637.541702-1-dmitry.osipenko@collabora.com>
Subject: [Intel-gfx] [PATCH v2 1/7] media: videobuf2: Don't assert held reservation lock for dma-buf mmapping

Don't assert a held dma-buf reservation lock on memory mapping of an
exported buffer.

We're going to change the dma-buf mmap() locking policy such that exporters
will have to handle the lock. The previous locking policy caused a deadlock
for DRM drivers in the case of self-imported dma-bufs once these drivers are
moved to use the reservation lock universally.

The problem is solved by moving the lock down to the exporters. This patch
prepares videobuf2 for the locking policy update.

Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
Reviewed-by: Hans Verkuil
---
 drivers/media/common/videobuf2/videobuf2-dma-contig.c | 3 ---
 drivers/media/common/videobuf2/videobuf2-dma-sg.c     | 3 ---
 drivers/media/common/videobuf2/videobuf2-vmalloc.c    | 3 ---
 3 files changed, 9 deletions(-)

diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
index 205d3cac425c..2fa455d4a048 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
@@ -11,7 +11,6 @@
  */
 
 #include <...>
-#include <linux/dma-resv.h>
 #include <...>
 #include <...>
 #include <...>
@@ -456,8 +455,6 @@ static int vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct iosys_map *map)
 static int vb2_dc_dmabuf_ops_mmap(struct dma_buf *dbuf,
 				  struct vm_area_struct *vma)
 {
-	dma_resv_assert_held(dbuf->resv);
-
 	return vb2_dc_mmap(dbuf->priv, vma);
 }

diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
index 183037fb1273..28f3fdfe23a2 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
@@ -10,7 +10,6 @@
  * the Free Software Foundation.
  */
 
-#include <linux/dma-resv.h>
 #include <...>
 #include <...>
 #include <...>
@@ -498,8 +497,6 @@ static int vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf,
 static int vb2_dma_sg_dmabuf_ops_mmap(struct dma_buf *dbuf,
 				      struct vm_area_struct *vma)
 {
-	dma_resv_assert_held(dbuf->resv);
-
 	return vb2_dma_sg_mmap(dbuf->priv, vma);
 }

diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
index a6c6d2fcaaa4..7c635e292106 100644
--- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
+++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
@@ -10,7 +10,6 @@
  * the Free Software Foundation.
  */
 
-#include <linux/dma-resv.h>
 #include <...>
 #include <...>
 #include <...>
@@ -319,8 +318,6 @@ static int vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf,
 static int vb2_vmalloc_dmabuf_ops_mmap(struct dma_buf *dbuf,
 				       struct vm_area_struct *vma)
 {
-	dma_resv_assert_held(dbuf->resv);
-
 	return vb2_vmalloc_mmap(dbuf->priv, vma);
 }
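[Editor's note: the deadlock this series avoids is easiest to see as a call
chain. The sketch below is illustrative commentary, not part of any patch;
it uses the DRM helper names touched later in the series.]

/*
 * Old policy, self-imported dma-buf (exporter == importer), simplified:
 *
 *   driver mmap path
 *     -> dma_buf_mmap(dmabuf, vma, 0)
 *          -> dma_resv_lock(dmabuf->resv)         // taken by dma-buf core
 *          -> dmabuf->ops->mmap(dmabuf, vma)
 *               -> drm_gem_shmem_mmap(shmem, vma)
 *                    -> dma_resv_lock(obj->resv)  // same object, same lock:
 *                                                 // deadlock
 *
 * Moving the locking decision into the exporter's ->mmap() breaks the
 * recursive acquisition.
 */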
From patchwork Thu Apr 6 16:06:32 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 13203706
From: Dmitry Osipenko
Date: Thu, 6 Apr 2023 19:06:32 +0300
Message-Id: <20230406160637.541702-3-dmitry.osipenko@collabora.com>
In-Reply-To: <20230406160637.541702-1-dmitry.osipenko@collabora.com>
References: <20230406160637.541702-1-dmitry.osipenko@collabora.com>
Subject: [Intel-gfx] [PATCH v2 2/7] dma-buf/heaps: Don't assert held reservation lock for dma-buf mmapping

Don't assert a held dma-buf reservation lock on memory mapping of an
exported buffer.

We're going to change the dma-buf mmap() locking policy such that exporters
will have to handle the lock. The previous locking policy caused a deadlock
for DRM drivers in the case of self-imported dma-bufs once these drivers are
moved to use the reservation lock universally.

The problem is solved by moving the lock down to the exporters. This patch
prepares the dma-buf heaps for the locking policy update.

Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/dma-buf/heaps/cma_heap.c    | 3 ---
 drivers/dma-buf/heaps/system_heap.c | 3 ---
 2 files changed, 6 deletions(-)

diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 1131fb943992..28fb04eccdd0 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -13,7 +13,6 @@
 #include <...>
 #include <...>
 #include <...>
-#include <linux/dma-resv.h>
 #include <...>
 #include <...>
 #include <...>
@@ -183,8 +182,6 @@ static int cma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 {
 	struct cma_heap_buffer *buffer = dmabuf->priv;
 
-	dma_resv_assert_held(dmabuf->resv);
-
 	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
 		return -EINVAL;

diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index e8bd10e60998..fcf836ba9c1f 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -13,7 +13,6 @@
 #include <...>
 #include <...>
 #include <...>
-#include <linux/dma-resv.h>
 #include <...>
 #include <...>
 #include <...>
@@ -202,8 +201,6 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 	struct sg_page_iter piter;
 	int ret;
 
-	dma_resv_assert_held(dmabuf->resv);
-
 	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
 		struct page *page = sg_page_iter_page(&piter);
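[Editor's note: as context for why the heaps don't need the assert, a heap's
->mmap() only walks its own sg_table and remaps pages into the VMA, which
the reservation lock doesn't protect. A condensed, illustrative sketch of
that pattern follows; my_heap_get_table() is a hypothetical stand-in, not
the upstream code.]

#include <linux/dma-buf.h>
#include <linux/scatterlist.h>
#include <linux/mm.h>

static int my_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
	struct sg_table *table = my_heap_get_table(dmabuf->priv); /* hypothetical */
	struct sg_page_iter piter;
	unsigned long addr = vma->vm_start;
	int ret;

	/* Walk the buffer page by page and map each page into the VMA. */
	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
		struct page *page = sg_page_iter_page(&piter);

		ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
				      vma->vm_page_prot);
		if (ret)
			return ret;
		addr += PAGE_SIZE;
		if (addr >= vma->vm_end)
			return 0;
	}
	return 0;
}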
From patchwork Thu Apr 6 16:06:33 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 13203705
From: Dmitry Osipenko
Date: Thu, 6 Apr 2023 19:06:33 +0300
Message-Id: <20230406160637.541702-4-dmitry.osipenko@collabora.com>
In-Reply-To: <20230406160637.541702-1-dmitry.osipenko@collabora.com>
References: <20230406160637.541702-1-dmitry.osipenko@collabora.com>
Subject: [Intel-gfx] [PATCH v2 3/7] udmabuf: Don't assert held reservation lock for dma-buf mmapping

Don't assert a held dma-buf reservation lock on memory mapping of an
exported buffer.

We're going to change the dma-buf mmap() locking policy such that exporters
will have to handle the lock. The previous locking policy caused a deadlock
for DRM drivers in the case of self-imported dma-bufs once these drivers are
moved to use the reservation lock universally.

The problem is solved by moving the lock down to the exporters. This patch
prepares udmabuf for the locking policy update.

Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/dma-buf/udmabuf.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index 740d6e426ee9..277f1afa9552 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -52,8 +52,6 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
 {
 	struct udmabuf *ubuf = buf->priv;
 
-	dma_resv_assert_held(buf->resv);
-
 	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
 		return -EINVAL;
From patchwork Thu Apr 6 16:06:34 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 13203707
From: Dmitry Osipenko
Date: Thu, 6 Apr 2023 19:06:34 +0300
Message-Id: <20230406160637.541702-5-dmitry.osipenko@collabora.com>
In-Reply-To: <20230406160637.541702-1-dmitry.osipenko@collabora.com>
References: <20230406160637.541702-1-dmitry.osipenko@collabora.com>
Subject: [Intel-gfx] [PATCH v2 4/7] fastrpc: Don't assert held reservation lock for dma-buf mmapping

Don't assert a held dma-buf reservation lock on memory mapping of an
exported buffer.

We're going to change the dma-buf mmap() locking policy such that exporters
will have to handle the lock. The previous locking policy caused a deadlock
for DRM drivers in the case of self-imported dma-bufs once these drivers are
moved to use the reservation lock universally.

The problem is solved by moving the lock down to the exporters. This patch
prepares fastrpc for the locking policy update.

Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/misc/fastrpc.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
index a701132638cf..7e9c9ad37fd9 100644
--- a/drivers/misc/fastrpc.c
+++ b/drivers/misc/fastrpc.c
@@ -6,7 +6,6 @@
 #include <...>
 #include <...>
 #include <...>
-#include <linux/dma-resv.h>
 #include <...>
 #include <...>
 #include <...>
@@ -733,8 +732,6 @@ static int fastrpc_mmap(struct dma_buf *dmabuf,
 	struct fastrpc_buf *buf = dmabuf->priv;
 	size_t size = vma->vm_end - vma->vm_start;
 
-	dma_resv_assert_held(dmabuf->resv);
-
 	return dma_mmap_coherent(buf->dev, vma, buf->virt,
				 FASTRPC_PHYS(buf->phys), size);
 }
From patchwork Thu Apr 6 16:06:35 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 13203708
From: Dmitry Osipenko
Date: Thu, 6 Apr 2023 19:06:35 +0300
Message-Id: <20230406160637.541702-6-dmitry.osipenko@collabora.com>
In-Reply-To: <20230406160637.541702-1-dmitry.osipenko@collabora.com>
References: <20230406160637.541702-1-dmitry.osipenko@collabora.com>
Subject: [Intel-gfx] [PATCH v2 5/7] drm: Don't assert held reservation lock for dma-buf mmapping

Don't assert a held dma-buf reservation lock on memory mapping of an
exported buffer.

We're going to change the dma-buf mmap() locking policy such that exporters
will have to handle the lock. The previous locking policy caused a deadlock
for DRM drivers in the case of self-imported dma-bufs once these drivers are
moved to use the reservation lock universally.

The problem is solved by moving the lock down to the exporters. This patch
prepares the DRM drivers for the locking policy update.

Reviewed-by: Emil Velikov
Acked-by: Christian König
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_prime.c                | 2 --
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 2 --
 drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c  | 2 --
 drivers/gpu/drm/tegra/gem.c                | 2 --
 4 files changed, 8 deletions(-)

diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 149cd4ff6a3b..cea85e84666f 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -781,8 +781,6 @@ int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
 	struct drm_gem_object *obj = dma_buf->priv;
 	struct drm_device *dev = obj->dev;
 
-	dma_resv_assert_held(dma_buf->resv);
-
 	if (!dev->driver->gem_prime_mmap)
 		return -ENOSYS;

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index fd556a076d05..1df74f7aa3dc 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -97,8 +97,6 @@ static int i915_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	int ret;
 
-	dma_resv_assert_held(dma_buf->resv);
-
 	if (obj->base.size < vma->vm_end - vma->vm_start)
 		return -EINVAL;

diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
index 3abc47521b2c..8e194dbc9506 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
@@ -66,8 +66,6 @@ static int omap_gem_dmabuf_mmap(struct dma_buf *buffer,
 	struct drm_gem_object *obj = buffer->priv;
 	int ret = 0;
 
-	dma_resv_assert_held(buffer->resv);
-
 	ret = drm_gem_mmap_obj(obj, omap_gem_mmap_size(obj), vma);
 	if (ret < 0)
 		return ret;

diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index bce991a2ccc0..871ef5d26523 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -693,8 +693,6 @@ static int tegra_gem_prime_mmap(struct dma_buf *buf, struct vm_area_struct *vma)
 	struct drm_gem_object *gem = buf->priv;
 	int err;
 
-	dma_resv_assert_held(buf->resv);
-
 	err = drm_gem_mmap_obj(gem, gem->size, vma);
 	if (err < 0)
 		return err;
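[Editor's note: under the new policy, nothing stops an exporter whose
mapping path genuinely needs the reservation lock from taking it in its own
->mmap(). A minimal sketch of that pattern, mirroring the locking the core
used to do; my_driver_mmap() is a hypothetical placeholder.]

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/mm_types.h>

/*
 * Hypothetical exporter ->mmap() under the new policy: the dma-buf core
 * no longer holds dmabuf->resv here, so the exporter locks it itself if
 * (and only if) its mapping code depends on it.
 */
static int my_exporter_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
	int ret;

	dma_resv_lock(dmabuf->resv, NULL);
	ret = my_driver_mmap(dmabuf->priv, vma);	/* placeholder */
	dma_resv_unlock(dmabuf->resv);

	return ret;
}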
From patchwork Thu Apr 6 16:06:36 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 13203709
From: Dmitry Osipenko
Date: Thu, 6 Apr 2023 19:06:36 +0300
Message-Id: <20230406160637.541702-7-dmitry.osipenko@collabora.com>
In-Reply-To: <20230406160637.541702-1-dmitry.osipenko@collabora.com>
References: <20230406160637.541702-1-dmitry.osipenko@collabora.com>
Subject: [Intel-gfx] [PATCH v2 6/7] dma-buf: Change locking policy for mmap()

Change the locking policy of the mmap() callback, making exporters
responsible for handling dma-buf reservation locking. The previous policy
had the dma-buf core take the lock on behalf of both importers and
exporters, which caused a deadlock for DRM drivers in the case of
self-imported dma-bufs that need to take the lock from the DRM exporter
side.

Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/dma-buf/dma-buf.c | 17 +++--------------
 1 file changed, 3 insertions(+), 14 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index aa4ea8530cb3..21916bba77d5 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -131,7 +131,6 @@ static struct file_system_type dma_buf_fs_type = {
 static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
 {
 	struct dma_buf *dmabuf;
-	int ret;
 
 	if (!is_dma_buf_file(file))
 		return -EINVAL;
@@ -147,11 +146,7 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
 	    dmabuf->size >> PAGE_SHIFT)
 		return -EINVAL;
 
-	dma_resv_lock(dmabuf->resv, NULL);
-	ret = dmabuf->ops->mmap(dmabuf, vma);
-	dma_resv_unlock(dmabuf->resv);
-
-	return ret;
+	return dmabuf->ops->mmap(dmabuf, vma);
 }
 
 static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
@@ -850,6 +845,7 @@ static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach,
  *   - &dma_buf_ops.release()
  *   - &dma_buf_ops.begin_cpu_access()
  *   - &dma_buf_ops.end_cpu_access()
+ *   - &dma_buf_ops.mmap()
  *
  * 2. These &dma_buf_ops callbacks are invoked with locked dma-buf
  *    reservation and exporter can't take the lock:
@@ -858,7 +854,6 @@ static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach,
  *   - &dma_buf_ops.unpin()
  *   - &dma_buf_ops.map_dma_buf()
  *   - &dma_buf_ops.unmap_dma_buf()
- *   - &dma_buf_ops.mmap()
  *   - &dma_buf_ops.vmap()
  *   - &dma_buf_ops.vunmap()
  *
@@ -1463,8 +1458,6 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF);
 int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
 		 unsigned long pgoff)
 {
-	int ret;
-
 	if (WARN_ON(!dmabuf || !vma))
 		return -EINVAL;
@@ -1485,11 +1478,7 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
 	vma_set_file(vma, dmabuf->file);
 	vma->vm_pgoff = pgoff;
 
-	dma_resv_lock(dmabuf->resv, NULL);
-	ret = dmabuf->ops->mmap(dmabuf, vma);
-	dma_resv_unlock(dmabuf->resv);
-
-	return ret;
+	return dmabuf->ops->mmap(dmabuf, vma);
 }
 EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF);
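[Editor's note: nothing changes for importers at the call site; they still
forward userspace mappings through dma_buf_mmap(), which after this patch
calls into the exporter without the reservation lock held. A minimal
illustrative sketch, not from the patch.]

#include <linux/dma-buf.h>
#include <linux/mm_types.h>

/*
 * Illustrative importer: forward a userspace mmap of an imported buffer
 * to the exporter, mapping from the start of the buffer (pgoff 0). Per
 * the new policy, this must be called without dmabuf->resv held.
 */
static int my_import_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
	return dma_buf_mmap(dmabuf, vma, 0);
}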
From patchwork Thu Apr 6 16:06:37 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 13203710
From: Dmitry Osipenko
Date: Thu, 6 Apr 2023 19:06:37 +0300
Message-Id: <20230406160637.541702-8-dmitry.osipenko@collabora.com>
In-Reply-To: <20230406160637.541702-1-dmitry.osipenko@collabora.com>
References: <20230406160637.541702-1-dmitry.osipenko@collabora.com>
Subject: [Intel-gfx] [PATCH v2 7/7] drm/shmem-helper: Switch to reservation lock

Replace all drm-shmem locks with the GEM reservation lock. This makes the
locking consistent with the dma-buf locking convention, where importers are
responsible for holding the reservation lock for all operations performed
on dma-bufs, preventing deadlocks between dma-buf importers and exporters.

Suggested-by: Daniel Vetter
Acked-by: Thomas Zimmermann
Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c        | 208 ++++++++----------
 drivers/gpu/drm/lima/lima_gem.c               |   8 +-
 drivers/gpu/drm/panfrost/panfrost_drv.c       |   7 +-
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  |   6 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c       |  19 +-
 include/drm/drm_gem_shmem_helper.h            |  14 +-
 6 files changed, 114 insertions(+), 148 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 4ea6507a77e5..395942ca36fe 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -88,8 +88,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 	if (ret)
 		goto err_release;
 
-	mutex_init(&shmem->pages_lock);
-	mutex_init(&shmem->vmap_lock);
 	INIT_LIST_HEAD(&shmem->madv_list);
 
 	if (!private) {
@@ -141,11 +139,13 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
-	drm_WARN_ON(obj->dev, shmem->vmap_use_count);
-
 	if (obj->import_attach) {
 		drm_prime_gem_destroy(obj, shmem->sgt);
 	} else {
+		dma_resv_lock(shmem->base.resv, NULL);
+
+		drm_WARN_ON(obj->dev, shmem->vmap_use_count);
+
 		if (shmem->sgt) {
 			dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
 					  DMA_BIDIRECTIONAL, 0);
@@ -154,18 +154,18 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 		}
 		if (shmem->pages)
 			drm_gem_shmem_put_pages(shmem);
-	}
 
-	drm_WARN_ON(obj->dev, shmem->pages_use_count);
+		drm_WARN_ON(obj->dev, shmem->pages_use_count);
+
+		dma_resv_unlock(shmem->base.resv);
+	}
 
 	drm_gem_object_release(obj);
-	mutex_destroy(&shmem->pages_lock);
-	mutex_destroy(&shmem->vmap_lock);
 	kfree(shmem);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
 
-static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct page **pages;
@@ -197,35 +197,16 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 }
 
 /*
- * drm_gem_shmem_get_pages - Allocate backing pages for a shmem GEM object
+ * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
  * @shmem: shmem GEM object
  *
- * This function makes sure that backing pages exists for the shmem GEM object
- * and increases the use count.
- *
- * Returns:
- * 0 on success or a negative error code on failure.
+ * This function decreases the use count and puts the backing pages when use drops to zero.
  */
-int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	int ret;
-
-	drm_WARN_ON(obj->dev, obj->import_attach);
-
-	ret = mutex_lock_interruptible(&shmem->pages_lock);
-	if (ret)
-		return ret;
-	ret = drm_gem_shmem_get_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
-
-	return ret;
-}
-EXPORT_SYMBOL(drm_gem_shmem_get_pages);
 
-static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
-{
-	struct drm_gem_object *obj = &shmem->base;
+	dma_resv_assert_held(shmem->base.resv);
 
 	if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
 		return;
@@ -243,20 +224,25 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 					  shmem->pages_mark_accessed_on_put);
 	shmem->pages = NULL;
 }
+EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 
-/*
- * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
- * @shmem: shmem GEM object
- *
- * This function decreases the use count and puts the backing pages when use drops to zero.
- */
-void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
+{
+	int ret;
+
+	dma_resv_assert_held(shmem->base.resv);
+
+	ret = drm_gem_shmem_get_pages(shmem);
+
+	return ret;
+}
+
+static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
 {
-	mutex_lock(&shmem->pages_lock);
-	drm_gem_shmem_put_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_assert_held(shmem->base.resv);
+
+	drm_gem_shmem_put_pages(shmem);
 }
-EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 
 /**
  * drm_gem_shmem_pin - Pin backing pages for a shmem GEM object
@@ -271,10 +257,17 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
+	int ret;
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
-	return drm_gem_shmem_get_pages(shmem);
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
+	if (ret)
+		return ret;
+	ret = drm_gem_shmem_pin_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_shmem_pin);
@@ -291,12 +284,29 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
-	drm_gem_shmem_put_pages(shmem);
+	dma_resv_lock(shmem->base.resv, NULL);
+	drm_gem_shmem_unpin_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
-static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
-				     struct iosys_map *map)
+/*
+ * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
+ * @shmem: shmem GEM object
+ * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
+ *       store.
+ *
+ * This function makes sure that a contiguous kernel virtual address mapping
+ * exists for the buffer backing the shmem GEM object. It hides the differences
+ * between dma-buf imported and natively allocated objects.
+ *
+ * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
+		       struct iosys_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	int ret = 0;
@@ -312,6 +322,8 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
+		dma_resv_assert_held(shmem->base.resv);
+
 		if (shmem->vmap_use_count++ > 0) {
 			iosys_map_set_vaddr(map, shmem->vaddr);
 			return 0;
@@ -346,45 +358,30 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 
 	return ret;
 }
+EXPORT_SYMBOL(drm_gem_shmem_vmap);
 
 /*
- * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
+ * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
- * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
- *       store.
- *
- * This function makes sure that a contiguous kernel virtual address mapping
- * exists for the buffer backing the shmem GEM object. It hides the differences
- * between dma-buf imported and natively allocated objects.
+ * @map: Kernel virtual address where the SHMEM GEM object was mapped
  *
- * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
+ * This function cleans up a kernel virtual address mapping acquired by
+ * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
+ * zero.
  *
- * Returns:
- * 0 on success or a negative error code on failure.
+ * This function hides the differences between dma-buf imported and natively
+ * allocated objects.
  */
-int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
-		       struct iosys_map *map)
-{
-	int ret;
-
-	ret = mutex_lock_interruptible(&shmem->vmap_lock);
-	if (ret)
-		return ret;
-	ret = drm_gem_shmem_vmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
-
-	return ret;
-}
-EXPORT_SYMBOL(drm_gem_shmem_vmap);
-
-static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
-					struct iosys_map *map)
+void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
+			  struct iosys_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
 	if (obj->import_attach) {
 		dma_buf_vunmap(obj->import_attach->dmabuf, map);
 	} else {
+		dma_resv_assert_held(shmem->base.resv);
+
 		if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
 			return;
@@ -397,26 +394,6 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 
 	shmem->vaddr = NULL;
 }
-
-/*
- * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
- * @shmem: shmem GEM object
- * @map: Kernel virtual address where the SHMEM GEM object was mapped
- *
- * This function cleans up a kernel virtual address mapping acquired by
- * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
- * zero.
- *
- * This function hides the differences between dma-buf imported and natively
- * allocated objects.
- */
-void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
-			  struct iosys_map *map)
-{
-	mutex_lock(&shmem->vmap_lock);
-	drm_gem_shmem_vunmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
-}
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);
 
 static int
@@ -447,24 +424,24 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
  */
 int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
 {
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_assert_held(shmem->base.resv);
 
 	if (shmem->madv >= 0)
 		shmem->madv = madv;
 
 	madv = shmem->madv;
 
-	mutex_unlock(&shmem->pages_lock);
-
 	return (madv >= 0);
 }
 EXPORT_SYMBOL(drm_gem_shmem_madvise);
 
-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct drm_device *dev = obj->dev;
 
+	dma_resv_assert_held(shmem->base.resv);
+
 	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
 
 	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
@@ -472,7 +449,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
 	kfree(shmem->sgt);
 	shmem->sgt = NULL;
 
-	drm_gem_shmem_put_pages_locked(shmem);
+	drm_gem_shmem_put_pages(shmem);
 
 	shmem->madv = -1;
 
@@ -488,17 +465,6 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
 	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
 }
-EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
-
-bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
-{
-	if (!mutex_trylock(&shmem->pages_lock))
-		return false;
-	drm_gem_shmem_purge_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
-
-	return true;
-}
 EXPORT_SYMBOL(drm_gem_shmem_purge);
 
 /**
@@ -551,7 +517,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 
 	if (page_offset >= num_pages ||
 	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
@@ -563,7 +529,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
 	}
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
@@ -575,7 +541,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 
 	/*
 	 * We should have already pinned the pages when the buffer was first
@@ -585,7 +551,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
 		shmem->pages_use_count++;
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	drm_gem_vm_open(vma);
 }
@@ -595,7 +561,10 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
+	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_put_pages(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
 	drm_gem_vm_close(vma);
 }
 
@@ -633,7 +602,10 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
 		return ret;
 	}
 
+	dma_resv_lock(shmem->base.resv, NULL);
 	ret = drm_gem_shmem_get_pages(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
 	if (ret)
 		return ret;
 
@@ -699,7 +671,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
-	ret = drm_gem_shmem_get_pages_locked(shmem);
+	ret = drm_gem_shmem_get_pages(shmem);
 	if (ret)
 		return ERR_PTR(ret);
 
@@ -721,7 +693,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 	sg_free_table(sgt);
 	kfree(sgt);
 err_put_pages:
-	drm_gem_shmem_put_pages_locked(shmem);
+	drm_gem_shmem_put_pages(shmem);
 	return ERR_PTR(ret);
 }
 
@@ -746,11 +718,11 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 	int ret;
 	struct sg_table *sgt;
 
-	ret = mutex_lock_interruptible(&shmem->pages_lock);
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ERR_PTR(ret);
 	sgt = drm_gem_shmem_get_pages_sgt_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return sgt;
 }

diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 10252dc11a22..4f9736e5f929 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -34,7 +34,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 
 	new_size = min(new_size, bo->base.base.size);
 
-	mutex_lock(&bo->base.pages_lock);
+	dma_resv_lock(bo->base.base.resv, NULL);
 
 	if (bo->base.pages) {
 		pages = bo->base.pages;
@@ -42,7 +42,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
 				       sizeof(*pages), GFP_KERNEL | __GFP_ZERO);
 		if (!pages) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return -ENOMEM;
 		}
 
@@ -56,13 +56,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		struct page *page = shmem_read_mapping_page(mapping, i);
 
 		if (IS_ERR(page)) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return PTR_ERR(page);
 		}
 		pages[i] = page;
 	}
 
-	mutex_unlock(&bo->base.pages_lock);
+	dma_resv_unlock(bo->base.base.resv);
 
 	ret = sg_alloc_table_from_pages(&sgt, pages, i, 0,
 					new_size, GFP_KERNEL);

diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index bbada731bbbd..d9dda6acdfac 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -407,6 +407,10 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 
 	bo = to_panfrost_bo(gem_obj);
 
+	ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL);
+	if (ret)
+		goto out_put_object;
+
 	mutex_lock(&pfdev->shrinker_lock);
 	mutex_lock(&bo->mappings.lock);
 	if (args->madv == PANFROST_MADV_DONTNEED) {
@@ -444,7 +448,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 out_unlock_mappings:
 	mutex_unlock(&bo->mappings.lock);
 	mutex_unlock(&pfdev->shrinker_lock);
-
+	dma_resv_unlock(bo->base.base.resv);
+out_put_object:
 	drm_gem_object_put(gem_obj);
 	return ret;
 }

diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index bf0170782f25..6a71a2555f85 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -48,14 +48,14 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
 	if (!mutex_trylock(&bo->mappings.lock))
 		return false;
 
-	if (!mutex_trylock(&shmem->pages_lock))
+	if (!dma_resv_trylock(shmem->base.resv))
 		goto unlock_mappings;
 
 	panfrost_gem_teardown_mappings_locked(bo);
-	drm_gem_shmem_purge_locked(&bo->base);
+	drm_gem_shmem_purge(&bo->base);
 	ret = true;
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 unlock_mappings:
 	mutex_unlock(&bo->mappings.lock);

diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 666a5e53fe19..0679df57f394 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -443,6 +443,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	struct panfrost_gem_mapping *bomapping;
 	struct panfrost_gem_object *bo;
 	struct address_space *mapping;
+	struct drm_gem_object *obj;
 	pgoff_t page_offset;
 	struct sg_table *sgt;
 	struct page **pages;
@@ -465,15 +466,16 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	page_offset = addr >> PAGE_SHIFT;
 	page_offset -= bomapping->mmnode.start;
 
-	mutex_lock(&bo->base.pages_lock);
+	obj = &bo->base.base;
+
+	dma_resv_lock(obj->resv, NULL);
 
 	if (!bo->base.pages) {
 		bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M,
 				     sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO);
 		if (!bo->sgts) {
-			mutex_unlock(&bo->base.pages_lock);
 			ret = -ENOMEM;
-			goto err_bo;
+			goto err_unlock;
 		}
 
 		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
@@ -481,9 +483,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		if (!pages) {
 			kvfree(bo->sgts);
 			bo->sgts = NULL;
-			mutex_unlock(&bo->base.pages_lock);
 			ret = -ENOMEM;
-			goto err_bo;
+			goto err_unlock;
 		}
 		bo->base.pages = pages;
 		bo->base.pages_use_count = 1;
@@ -491,7 +492,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		pages = bo->base.pages;
 		if (pages[page_offset]) {
 			/* Pages are already mapped, bail out. */
-			mutex_unlock(&bo->base.pages_lock);
 			goto out;
 		}
 	}
@@ -502,14 +502,11 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
 		pages[i] = shmem_read_mapping_page(mapping, i);
 		if (IS_ERR(pages[i])) {
-			mutex_unlock(&bo->base.pages_lock);
 			ret = PTR_ERR(pages[i]);
 			goto err_pages;
 		}
 	}
 
-	mutex_unlock(&bo->base.pages_lock);
-
 	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
 	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
 					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
@@ -528,6 +525,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
 
 out:
+	dma_resv_unlock(obj->resv);
+
 	panfrost_gem_mapping_put(bomapping);
 
 	return 0;
@@ -536,6 +535,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	sg_free_table(sgt);
 err_pages:
 	drm_gem_shmem_put_pages(&bo->base);
+err_unlock:
+	dma_resv_unlock(obj->resv);
 err_bo:
 	panfrost_gem_mapping_put(bomapping);
 	return ret;

diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5994fed5e327..20ddcd799df9 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -26,11 +26,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct drm_gem_object base;
 
-	/**
-	 * @pages_lock: Protects the page table and use count
-	 */
-	struct mutex pages_lock;
-
 	/**
 	 * @pages: Page table
 	 */
@@ -65,11 +60,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct sg_table *sgt;
 
-	/**
-	 * @vmap_lock: Protects the vmap address and use count
-	 */
-	struct mutex vmap_lock;
-
 	/**
	 * @vaddr: Kernel virtual address of the backing memory
	 */
@@ -109,7 +99,6 @@ struct drm_gem_shmem_object {
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
 void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
 
-int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
@@ -128,8 +117,7 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
 		!shmem->base.dma_buf && !shmem->base.import_attach;
 }
 
-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
-bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
 
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
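
[Editor's note: after this patch, callers of the shmem helpers hold the GEM
reservation lock themselves, in line with the lima and panfrost conversions
above. A minimal sketch of the resulting driver-side pattern follows;
my_driver_vmap_example() is hypothetical code, not from the series.]

#include <drm/drm_gem_shmem_helper.h>
#include <linux/dma-resv.h>
#include <linux/iosys-map.h>

/*
 * Hypothetical driver helper: vmap a shmem GEM object under the new
 * convention, where the caller holds the reservation lock instead of
 * the removed vmap_lock.
 */
static int my_driver_vmap_example(struct drm_gem_shmem_object *shmem)
{
	struct iosys_map map;
	int ret;

	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
	if (ret)
		return ret;

	ret = drm_gem_shmem_vmap(shmem, &map);
	if (!ret) {
		/* ... access the buffer through map.vaddr ... */
		drm_gem_shmem_vunmap(shmem, &map);
	}

	dma_resv_unlock(shmem->base.resv);

	return ret;
}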