From patchwork Thu Jun 30 20:04:05 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 12902140
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Emil Velikov,
    Christian König, Thomas Hellström
Cc: linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
    virtualization@lists.linux-foundation.org, linux-tegra@vger.kernel.org,
    Dmitry Osipenko, kernel@collabora.com
Subject: [PATCH v7 2/2] drm/gem: Don't map imported GEMs
Date: Thu, 30 Jun 2022 23:04:05 +0300
Message-Id: <20220630200405.1883897-3-dmitry.osipenko@collabora.com>
In-Reply-To: <20220630200405.1883897-1-dmitry.osipenko@collabora.com>
References: <20220630200405.1883897-1-dmitry.osipenko@collabora.com>

Drivers that use the drm_gem_mmap() and drm_gem_mmap_obj() helpers don't
handle imported dma-bufs properly: the resulting mapping is backed by
something other than the imported dma-buf. On NVIDIA Tegra we get a hard
lockup when userspace writes to the memory mapping of a dma-buf that was
imported into Tegra's DRM GEM.

The majority of DRM drivers already prohibit mapping of imported GEM
objects. Mapping an imported GEM requires special care from userspace,
which must synchronize CPU access via the dma-buf, because the mapping
coherency of the exporter device may not match that of the DRM device.
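(Editorial aside, not part of the patch: roughly, the "special care" above
means mmapping the dma-buf fd itself and bracketing CPU access with the
DMA_BUF_IOCTL_SYNC ioctl from the dma-buf UAPI. The sketch below is a
minimal userspace illustration; the write_to_dmabuf() helper, buf_fd, size
and data are hypothetical, and it assumes the exporter supports mmap of
the dma-buf.)

	/*
	 * Hypothetical userspace sketch: write to a dma-buf's memory by
	 * mapping the dma-buf fd directly (not a GEM handle wrapping it)
	 * and bracketing the CPU access with DMA_BUF_IOCTL_SYNC so the
	 * exporter can flush/invalidate caches as needed.
	 */
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <linux/dma-buf.h>

	static int write_to_dmabuf(int buf_fd, size_t size,
				   const void *data, size_t len)
	{
		struct dma_buf_sync sync = { 0 };
		void *map;

		map = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
			   buf_fd, 0);
		if (map == MAP_FAILED)
			return -1;

		/* Begin CPU write access. */
		sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE;
		ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);

		memcpy(map, data, len);

		/* End CPU write access. */
		sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
		ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync);

		munmap(map, size);
		return 0;
	}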
Let's prohibit the mapping for all DRM drivers for consistency.

Cc: stable@vger.kernel.org
Suggested-by: Thomas Hellström
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem.c              | 4 ++++
 drivers/gpu/drm/drm_gem_shmem_helper.c | 9 ---------
 2 files changed, 4 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 86d670c71286..fc9ec42fa0ab 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1034,6 +1034,10 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
 {
 	int ret;
 
+	/* Don't allow imported objects to be mapped */
+	if (obj->import_attach)
+		return -EINVAL;
+
 	/* Check for valid size. */
 	if (obj_size < vma->vm_end - vma->vm_start)
 		return -EINVAL;
 
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 8ad0e02991ca..6190f5018986 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -609,17 +609,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
  */
 int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma)
 {
-	struct drm_gem_object *obj = &shmem->base;
 	int ret;
 
-	if (obj->import_attach) {
-		/* Drop the reference drm_gem_mmap_obj() acquired.*/
-		drm_gem_object_put(obj);
-		vma->vm_private_data = NULL;
-
-		return dma_buf_mmap(obj->dma_buf, vma, 0);
-	}
-
 	ret = drm_gem_shmem_get_pages(shmem);
 	if (ret) {
 		drm_gem_vm_close(vma);