From patchwork Sun Apr 24 19:04:08 2022
From: Dmitry Osipenko
Subject: [PATCH v5 01/17] drm/panfrost: Put mapping instead of shmem obj on panfrost_mmu_map_fault_addr() error
Date: Sun, 24 Apr 2022 22:04:08 +0300
Message-Id: <20220424190424.540501-2-dmitry.osipenko@collabora.com>

When panfrost_mmu_map_fault_addr() fails, the BO's mapping should be
unreferenced, not the shmem object that backs the mapping.
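
As an illustrative sketch (not part of the diff), the rule the fix applies
is that an error path drops exactly the reference the function took: the
fault handler looks up a referenced mapping and only borrows the BO
pointer, so it never owns a GEM reference to put:

	struct panfrost_gem_mapping *bomapping;

	bomapping = ...;			/* lookup takes a mapping reference */
	bo = bomapping->obj;			/* borrowed pointer, no GEM reference taken */
	...
err_bo:
	panfrost_gem_mapping_put(bomapping);	/* drop only what was taken */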

Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Osipenko
Reviewed-by: Steven Price
---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index d3f82b26a631..b285a8001b1d 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -518,7 +518,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 err_pages:
 	drm_gem_shmem_put_pages(&bo->base);
 err_bo:
-	drm_gem_object_put(&bo->base.base);
+	panfrost_gem_mapping_put(bomapping);
 	return ret;
 }

From patchwork Sun Apr 24 19:04:09 2022
From: Dmitry Osipenko
Subject: [PATCH v5 02/17] drm/virtio: Correct drm_gem_shmem_get_sg_table() error handling
Date: Sun, 24 Apr 2022 22:04:09 +0300
Message-Id: <20220424190424.540501-3-dmitry.osipenko@collabora.com>

drm_gem_shmem_get_sg_table() never returns NULL on error; it returns an
ERR_PTR()-encoded error code. Correct the error handling to avoid a crash
on OOM.
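
For reference, a minimal sketch of the convention this fix follows; the
<linux/err.h> helper PTR_ERR_OR_ZERO() folds the IS_ERR() check and the
error extraction into one call (the callee below is a placeholder):

	sgt = some_err_ptr_returning_call();
	ret = PTR_ERR_OR_ZERO(sgt);	/* 0 on success, -errno if sgt is an ERR_PTR */
	if (ret)
		return ret;		/* a NULL check here would miss the error */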

Cc: stable@vger.kernel.org
Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index f293e6ad52da..3d0c8d4d1c20 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -168,9 +168,11 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 	 * since virtio_gpu doesn't support dma-buf import from other devices.
 	 */
 	shmem->pages = drm_gem_shmem_get_sg_table(&bo->base);
-	if (!shmem->pages) {
+	ret = PTR_ERR_OR_ZERO(shmem->pages);
+	if (ret) {
 		drm_gem_shmem_unpin(&bo->base);
-		return -EINVAL;
+		shmem->pages = NULL;
+		return ret;
 	}
 
 	if (use_dma_api) {

From patchwork Sun Apr 24 19:04:10 2022
From: Dmitry Osipenko
Subject: [PATCH v5 03/17] drm/virtio: Check whether transferred 2D BO is shmem
Date: Sun, 24 Apr 2022 22:04:10 +0300
Message-Id: <20220424190424.540501-4-dmitry.osipenko@collabora.com>

A transferred 2D BO must always be a shmem BO. Add a check for that to
prevent a NULL dereference when userspace passes a VRAM BO.
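
A sketch of the underlying pattern (illustrative only): a virtio-gpu BO
is backed either by shmem or by VRAM, so code that downcasts with
to_virtio_gpu_shmem() has to confirm the BO type first:

	if (virtio_gpu_is_shmem(bo)) {
		struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);

		/* only here is it safe to touch shmem-specific state */
	}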

Cc: stable@vger.kernel.org
Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_vq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 7c052efe8836..2edf31806b74 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -595,7 +595,7 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
 	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
 
-	if (use_dma_api)
+	if (virtio_gpu_is_shmem(bo) && use_dma_api)
 		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
 					    shmem->pages, DMA_TO_DEVICE);

From patchwork Sun Apr 24 19:04:11 2022
From: Dmitry Osipenko
Subject: [PATCH v5 04/17] drm/virtio: Unlock reservations on virtio_gpu_object_shmem_init() error
Date: Sun, 24 Apr 2022 22:04:11 +0300
Message-Id: <20220424190424.540501-5-dmitry.osipenko@collabora.com>

Unlock reservations in the error path of virtio_gpu_object_create() to
silence the debug warning splat produced by ww_mutex_destroy(&obj->lock)
when a GEM is released with the lock still held.
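
A sketch of the pairing being restored (illustrative only; do_init()
stands in for virtio_gpu_object_shmem_init()):

	if (fence)
		virtio_gpu_array_lock_resv(objs);	/* reservations locked */

	ret = do_init();
	if (ret) {
		if (fence)
			virtio_gpu_array_unlock_resv(objs);	/* unlock before freeing */
		virtio_gpu_free_object(&shmem_obj->base);	/* else ww_mutex_destroy() warns */
		return ret;
	}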

Cc: stable@vger.kernel.org
Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 3d0c8d4d1c20..21c19cdedce0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -250,6 +250,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 
 	ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
 	if (ret != 0) {
+		if (fence)
+			virtio_gpu_array_unlock_resv(objs);
 		virtio_gpu_array_put_free(objs);
 		virtio_gpu_free_object(&shmem_obj->base);
 		return ret;

From patchwork Sun Apr 24 19:04:12 2022
From: Dmitry Osipenko
Subject: [PATCH v5 05/17] drm/virtio: Unlock reservations on dma_resv_reserve_fences() error
Date: Sun, 24 Apr 2022 22:04:12 +0300
Message-Id: <20220424190424.540501-6-dmitry.osipenko@collabora.com>

Unlock reservations on dma_resv_reserve_fences() error to fix recursive
locking of the reservations when this error happens.
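
The contract being enforced, as a sketch (illustrative only; the
lock_all/unlock_all helper names are placeholders): a lock-all helper must
leave nothing locked on failure, otherwise the caller's next locking
attempt runs into the still-held reservations:

	ret = lock_all_reservations(objs);
	if (ret)
		return ret;				/* nothing held */

	for (i = 0; i < objs->nents; ++i) {
		ret = dma_resv_reserve_fences(objs->objs[i]->resv, 1);
		if (ret) {
			unlock_all_reservations(objs);	/* all-or-nothing */
			return ret;
		}
	}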

Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_gem.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 580a78809836..7db48d17ee3a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -228,8 +228,10 @@ int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs)
 
 	for (i = 0; i < objs->nents; ++i) {
 		ret = dma_resv_reserve_fences(objs->objs[i]->resv, 1);
-		if (ret)
+		if (ret) {
+			virtio_gpu_array_unlock_resv(objs);
 			return ret;
+		}
 	}
 	return ret;
 }

From patchwork Sun Apr 24 19:04:13 2022
From: Dmitry Osipenko
Subject: [PATCH v5 06/17] drm/virtio: Use appropriate atomic state in virtio_gpu_plane_cleanup_fb()
Date: Sun, 24 Apr 2022 22:04:13 +0300
Message-Id: <20220424190424.540501-7-dmitry.osipenko@collabora.com>

Make virtio_gpu_plane_cleanup_fb() clean up the state that the DRM core
asks it to clean up, not the plane's current state. Normally the older
atomic state is cleaned up, but the newer state can also be cleaned up in
the case of an aborted commit.
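
As a sketch (illustrative only), the .cleanup_fb contract this fix honors:

	static void cleanup_fb(struct drm_plane *plane,
			       struct drm_plane_state *state)
	{
		/*
		 * 'state' is the old state after a successful commit, but it
		 * is the new, never-applied state when a commit is aborted,
		 * so plane->state->fb would be the wrong framebuffer to clean.
		 */
		if (!state->fb)
			return;
		...
	}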

Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_plane.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index 6d3cc9e238a4..7148f3813d8b 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -266,14 +266,14 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
 }
 
 static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
-					struct drm_plane_state *old_state)
+					struct drm_plane_state *state)
 {
 	struct virtio_gpu_framebuffer *vgfb;
 
-	if (!plane->state->fb)
+	if (!state->fb)
 		return;
 
-	vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
+	vgfb = to_virtio_gpu_framebuffer(state->fb);
 	if (vgfb->fence) {
 		dma_fence_put(&vgfb->fence->f);
 		vgfb->fence = NULL;

From patchwork Sun Apr 24 19:04:14 2022
From: Dmitry Osipenko
Subject: [PATCH v5 07/17] drm/virtio: Simplify error handling of virtio_gpu_object_create()
Date: Sun, 24 Apr 2022 22:04:14 +0300
Message-Id: <20220424190424.540501-8-dmitry.osipenko@collabora.com>

Change the order of the SHMEM initialization and the reservation locking
to make the code a tad cleaner and to prepare for transitioning the common
GEM SHMEM code to the GEM's reservation lock instead of shmem's pages_lock.

There is no need to hold the reservation lock while allocating the SHMEM
pages; the lock is only needed to avoid racing with the asynchronous
host-side allocation. Hence the SHMEM initialization can safely be moved
out of the reservation lock.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 21c19cdedce0..18f70ef6b4d0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -236,6 +236,10 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 
 	bo->dumb = params->dumb;
 
+	ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
+	if (ret != 0)
+		goto err_put_id;
+
 	if (fence) {
 		ret = -ENOMEM;
 		objs = virtio_gpu_array_alloc(1);
@@ -248,15 +252,6 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 			goto err_put_objs;
 	}
 
-	ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
-	if (ret != 0) {
-		if (fence)
-			virtio_gpu_array_unlock_resv(objs);
-		virtio_gpu_array_put_free(objs);
-		virtio_gpu_free_object(&shmem_obj->base);
-		return ret;
-	}
-
 	if (params->blob) {
 		if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
 			bo->guest_blob = true;

From patchwork Sun Apr 24 19:04:15 2022
From: Dmitry Osipenko
Subject: [PATCH v5 08/17] drm/virtio: Improve DMA API usage for shmem BOs
Date: Sun, 24 Apr 2022 22:04:15 +0300
Message-Id: <20220424190424.540501-9-dmitry.osipenko@collabora.com>

The DRM API requires the DRM driver to be backed by a device that can be
used for generic DMA operations. The virtio-gpu device can't perform DMA
operations when it uses the PCI transport, because the PCI device driver
creates a virtual virtio-gpu device that isn't associated with the PCI
device. Use the PCI GPU device as the DRM device instead of the virtio-gpu
device, and drop the DMA-related hacks from the virtio-gpu driver.
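
The device topology this change relies on, as a sketch (illustrative
only): the virtio-gpu device is a software construct without DMA ops or a
DMA mask, while its parent (e.g. the PCI function) is the real,
DMA-capable device, so the parent now backs the DRM device:

	/* virtio_device -> parent (PCI function); the parent does real DMA */
	dev = drm_dev_alloc(&driver, vdev->dev.parent);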

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_drv.c    | 51 ++++++----------------
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  5 +--
 drivers/gpu/drm/virtio/virtgpu_kms.c    |  7 ++--
 drivers/gpu/drm/virtio/virtgpu_object.c | 56 +++++--------------------
 drivers/gpu/drm/virtio/virtgpu_vq.c     | 13 +++---
 5 files changed, 32 insertions(+), 100 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index 5f25a8d15464..0141b7df97ec 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -46,12 +46,11 @@ static int virtio_gpu_modeset = -1;
 MODULE_PARM_DESC(modeset, "Disable/Enable modesetting");
 module_param_named(modeset, virtio_gpu_modeset, int, 0400);
 
-static int virtio_gpu_pci_quirk(struct drm_device *dev, struct virtio_device *vdev)
+static int virtio_gpu_pci_quirk(struct drm_device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(vdev->dev.parent);
+	struct pci_dev *pdev = to_pci_dev(dev->dev);
 	const char *pname = dev_name(&pdev->dev);
 	bool vga = (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA;
-	char unique[20];
 	int ret;
 
 	DRM_INFO("pci: %s detected at %s\n",
@@ -63,39 +62,7 @@ static int virtio_gpu_pci_quirk(struct drm_device *dev, struct virtio_device *vd
 		return ret;
 	}
 
-	/*
-	 * Normally the drm_dev_set_unique() call is done by core DRM.
-	 * The following comment covers, why virtio cannot rely on it.
-	 *
-	 * Unlike the other virtual GPU drivers, virtio abstracts the
-	 * underlying bus type by using struct virtio_device.
-	 *
-	 * Hence the dev_is_pci() check, used in core DRM, will fail
-	 * and the unique returned will be the virtio_device "virtio0",
-	 * while a "pci:..." one is required.
-	 *
-	 * A few other ideas were considered:
-	 * - Extend the dev_is_pci() check [in drm_set_busid] to
-	 *   consider virtio.
-	 *   Seems like a bigger hack than what we have already.
-	 *
-	 * - Point drm_device::dev to the parent of the virtio_device
-	 *   Semantic changes:
-	 *   * Using the wrong device for i2c, framebuffer_alloc and
-	 *     prime import.
-	 *   Visual changes:
-	 *   * Helpers such as DRM_DEV_ERROR, dev_info, drm_printer,
-	 *     will print the wrong information.
-	 *
-	 * We could address the latter issues, by introducing
-	 * drm_device::bus_dev, ... which would be used solely for this.
-	 *
-	 * So for the moment keep things as-is, with a bulky comment
-	 * for the next person who feels like removing this
-	 * drm_dev_set_unique() quirk.
-	 */
-	snprintf(unique, sizeof(unique), "pci:%s", pname);
-	return drm_dev_set_unique(dev, unique);
+	return 0;
 }
 
 static int virtio_gpu_probe(struct virtio_device *vdev)
@@ -109,18 +76,24 @@ static int virtio_gpu_probe(struct virtio_device *vdev)
 	if (virtio_gpu_modeset == 0)
 		return -EINVAL;
 
-	dev = drm_dev_alloc(&driver, &vdev->dev);
+	/*
+	 * The virtio-gpu device is a virtual device that doesn't have DMA
+	 * ops assigned to it, nor DMA mask set and etc. Its parent device
+	 * is actual GPU device we want to use it for the DRM's device in
+	 * order to benefit from using generic DRM APIs.
+	 */
+	dev = drm_dev_alloc(&driver, vdev->dev.parent);
 	if (IS_ERR(dev))
 		return PTR_ERR(dev);
 	vdev->priv = dev;
 
 	if (!strcmp(vdev->dev.parent->bus->name, "pci")) {
-		ret = virtio_gpu_pci_quirk(dev, vdev);
+		ret = virtio_gpu_pci_quirk(dev);
 		if (ret)
 			goto err_free;
 	}
 
-	ret = virtio_gpu_init(dev);
+	ret = virtio_gpu_init(vdev, dev);
 	if (ret)
 		goto err_free;

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 0a194aaad419..b2d93cb12ebf 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -100,8 +100,6 @@ struct virtio_gpu_object {
 
 struct virtio_gpu_object_shmem {
 	struct virtio_gpu_object base;
-	struct sg_table *pages;
-	uint32_t mapped;
 };
 
 struct virtio_gpu_object_vram {
@@ -214,7 +212,6 @@ struct virtio_gpu_drv_cap_cache {
 };
 
 struct virtio_gpu_device {
-	struct device *dev;
 	struct drm_device *ddev;
 
 	struct virtio_device *vdev;
@@ -282,7 +279,7 @@ extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
 void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file);
 
 /* virtgpu_kms.c */
-int virtio_gpu_init(struct drm_device *dev);
+int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev);
 void virtio_gpu_deinit(struct drm_device *dev);
 void virtio_gpu_release(struct drm_device *dev);
 int virtio_gpu_driver_open(struct drm_device *dev, struct drm_file *file);

diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 3313b92db531..0d1e3eb61bee 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -110,7 +110,7 @@ static void virtio_gpu_get_capsets(struct virtio_gpu_device *vgdev,
 	vgdev->num_capsets = num_capsets;
 }
 
-int virtio_gpu_init(struct drm_device *dev)
+int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
 {
 	static vq_callback_t *callbacks[] = {
 		virtio_gpu_ctrl_ack, virtio_gpu_cursor_ack
 	};
@@ -123,7 +123,7 @@ int virtio_gpu_init(struct drm_device *dev)
 	u32 num_scanouts, num_capsets;
 	int ret = 0;
 
-	if (!virtio_has_feature(dev_to_virtio(dev->dev), VIRTIO_F_VERSION_1))
+	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
 		return -ENODEV;
 
 	vgdev = kzalloc(sizeof(struct virtio_gpu_device), GFP_KERNEL);
@@ -132,8 +132,7 @@ int virtio_gpu_init(struct drm_device *dev)
 
 	vgdev->ddev = dev;
 	dev->dev_private = vgdev;
-	vgdev->vdev = dev_to_virtio(dev->dev);
-	vgdev->dev = dev->dev;
+	vgdev->vdev = vdev;
 
 	spin_lock_init(&vgdev->display_info_lock);
 	spin_lock_init(&vgdev->resource_export_lock);

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 18f70ef6b4d0..8d7728181de0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -67,21 +67,6 @@ void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
 
 	if (virtio_gpu_is_shmem(bo)) {
-		struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
-
-		if (shmem->pages) {
-			if (shmem->mapped) {
-				dma_unmap_sgtable(vgdev->vdev->dev.parent,
-						  shmem->pages, DMA_TO_DEVICE, 0);
-				shmem->mapped = 0;
-			}
-
-			sg_free_table(shmem->pages);
-			kfree(shmem->pages);
-			shmem->pages = NULL;
-			drm_gem_shmem_unpin(&bo->base);
-		}
-
 		drm_gem_shmem_free(&bo->base);
 	} else if (virtio_gpu_is_vram(bo)) {
 		struct virtio_gpu_object_vram *vram = to_virtio_gpu_vram(bo);
@@ -153,37 +138,18 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 					unsigned int *nents)
 {
 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
-	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
 	struct scatterlist *sg;
-	int si, ret;
+	struct sg_table *pages;
+	int si;
 
-	ret = drm_gem_shmem_pin(&bo->base);
-	if (ret < 0)
-		return -EINVAL;
-
-	/*
-	 * virtio_gpu uses drm_gem_shmem_get_sg_table instead of
-	 * drm_gem_shmem_get_pages_sgt because virtio has it's own set of
-	 * dma-ops. This is discouraged for other drivers, but should be fine
-	 * since virtio_gpu doesn't support dma-buf import from other devices.
-	 */
-	shmem->pages = drm_gem_shmem_get_sg_table(&bo->base);
-	ret = PTR_ERR_OR_ZERO(shmem->pages);
-	if (ret) {
-		drm_gem_shmem_unpin(&bo->base);
-		shmem->pages = NULL;
-		return ret;
-	}
+	pages = drm_gem_shmem_get_pages_sgt(&bo->base);
+	if (IS_ERR(pages))
+		return PTR_ERR(pages);
 
-	if (use_dma_api) {
-		ret = dma_map_sgtable(vgdev->vdev->dev.parent,
-				      shmem->pages, DMA_TO_DEVICE, 0);
-		if (ret)
-			return ret;
-		*nents = shmem->mapped = shmem->pages->nents;
-	} else {
-		*nents = shmem->pages->orig_nents;
-	}
+	if (use_dma_api)
+		*nents = pages->nents;
+	else
+		*nents = pages->orig_nents;
 
 	*ents = kvmalloc_array(*nents,
 			       sizeof(struct virtio_gpu_mem_entry),
@@ -194,13 +160,13 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 	}
 
 	if (use_dma_api) {
-		for_each_sgtable_dma_sg(shmem->pages, sg, si) {
+		for_each_sgtable_dma_sg(pages, sg, si) {
 			(*ents)[si].addr = cpu_to_le64(sg_dma_address(sg));
 			(*ents)[si].length = cpu_to_le32(sg_dma_len(sg));
 			(*ents)[si].padding = 0;
 		}
 	} else {
-		for_each_sgtable_sg(shmem->pages, sg, si) {
+		for_each_sgtable_sg(pages, sg, si) {
 			(*ents)[si].addr = cpu_to_le64(sg_phys(sg));
 			(*ents)[si].length = cpu_to_le32(sg->length);
 			(*ents)[si].padding = 0;

diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 2edf31806b74..06566e44307d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -593,11 +593,10 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 	struct virtio_gpu_transfer_to_host_2d *cmd_p;
 	struct virtio_gpu_vbuffer *vbuf;
 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
-	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
 
 	if (virtio_gpu_is_shmem(bo) && use_dma_api)
-		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
-					    shmem->pages, DMA_TO_DEVICE);
+		dma_sync_sgtable_for_device(&vgdev->vdev->dev,
+					    bo->base.sgt, DMA_TO_DEVICE);
 
 	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
 	memset(cmd_p, 0, sizeof(*cmd_p));
@@ -1017,11 +1016,9 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
 	struct virtio_gpu_vbuffer *vbuf;
 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
 
-	if (virtio_gpu_is_shmem(bo) && use_dma_api) {
-		struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
-		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
-					    shmem->pages, DMA_TO_DEVICE);
-	}
+	if (virtio_gpu_is_shmem(bo) && use_dma_api)
+		dma_sync_sgtable_for_device(&vgdev->vdev->dev,
+					    bo->base.sgt, DMA_TO_DEVICE);
 
 	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
 	memset(cmd_p, 0, sizeof(*cmd_p));

From patchwork Sun Apr 24 19:04:16 2022
From: Dmitry Osipenko
Subject: [PATCH v5 09/17] drm/virtio: Use dev_is_pci()
Date: Sun, 24 Apr 2022 22:04:16 +0300
Message-Id: <20220424190424.540501-10-dmitry.osipenko@collabora.com>

Use the common dev_is_pci() helper to replace the strcmp("pci") check
used by the driver.

Suggested-by: Robin Murphy
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_drv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index 0141b7df97ec..0035affc3e59 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -87,7 +87,7 @@ static int virtio_gpu_probe(struct virtio_device *vdev)
 		return PTR_ERR(dev);
 	vdev->priv = dev;
 
-	if (!strcmp(vdev->dev.parent->bus->name, "pci")) {
+	if (dev_is_pci(vdev->dev.parent)) {
 		ret = virtio_gpu_pci_quirk(dev);
 		if (ret)
 			goto err_free;

From patchwork Sun Apr 24 19:04:17 2022
From: Dmitry Osipenko
Subject: [PATCH v5 10/17] drm/shmem-helper: Correct doc-comment of drm_gem_shmem_get_sg_table()
Date: Sun, 24 Apr 2022 22:04:17 +0300
Message-Id: <20220424190424.540501-11-dmitry.osipenko@collabora.com>

drm_gem_shmem_get_sg_table() never returns NULL on error, but an ERR_PTR.
Correct the doc comment, which says that it returns NULL on error.

Acked-by: Thomas Zimmermann
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 8ad0e02991ca..5c7a7106b41d 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -662,7 +662,8 @@ EXPORT_SYMBOL(drm_gem_shmem_print_info);
  * drm_gem_shmem_get_pages_sgt() instead.
  *
  * Returns:
- * A pointer to the scatter/gather table of pinned pages or NULL on failure.
+ * A pointer to the scatter/gather table of pinned pages or an ERR_PTR()-encoded
+ * error code on failure.
  */
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
 {
@@ -688,7 +689,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
  * drm_gem_shmem_get_sg_table() should not be directly called by drivers.
 *
  * Returns:
- * A pointer to the scatter/gather table of pinned pages or errno on failure.
+ * A pointer to the scatter/gather table of pinned pages or an ERR_PTR()-encoded
+ * error code on failure.
  */
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 {

From patchwork Sun Apr 24 19:04:18 2022
From: Dmitry Osipenko
Subject: [PATCH v5 11/17] drm/shmem-helper: Take reservation lock instead of drm_gem_shmem locks
Date: Sun, 24 Apr 2022 22:04:18 +0300
Message-Id: <20220424190424.540501-12-dmitry.osipenko@collabora.com>

Replace the drm_gem_shmem locks with the reservation lock to make GEM
locking more consistent. Previously, drm_gem_shmem_vmap() and
drm_gem_shmem_get_pages() were protected by separate locks; now it's the
same lock for non-imported GEMs. Imported GEMs still use a separate lock
in vmap/vunmap() to avoid recursive locking of the reservations; the
reservation lock isn't needed in that case.
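
As a sketch (illustrative only), the substitution applied throughout the
helpers and drivers below:

	/* was: mutex_lock(&shmem->pages_lock); */
	dma_resv_lock(shmem->base.resv, NULL);

	/* touch shmem->pages, use counts, madv state, ... */

	/* was: mutex_unlock(&shmem->pages_lock); */
	dma_resv_unlock(shmem->base.resv);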

Suggested-by: Daniel Vetter
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c        | 54 +++++++++++-------
 drivers/gpu/drm/lima/lima_gem.c               |  8 +--
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  |  4 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c       | 15 +++---
 include/drm/drm_gem_shmem_helper.h            |  5 --
 5 files changed, 50 insertions(+), 36 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 5c7a7106b41d..cc90a4c28ace 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -86,7 +86,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 	if (ret)
 		goto err_release;
 
-	mutex_init(&shmem->pages_lock);
 	mutex_init(&shmem->vmap_lock);
 	INIT_LIST_HEAD(&shmem->madv_list);
 
@@ -157,8 +156,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 	WARN_ON(shmem->pages_use_count);
 
 	drm_gem_object_release(obj);
-	mutex_destroy(&shmem->pages_lock);
-	mutex_destroy(&shmem->vmap_lock);
 	kfree(shmem);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
@@ -209,11 +206,11 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 
 	WARN_ON(shmem->base.import_attach);
 
-	ret = mutex_lock_interruptible(&shmem->pages_lock);
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ret;
 	ret = drm_gem_shmem_get_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
@@ -248,9 +245,9 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
  */
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 {
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_put_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 }
 EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 
@@ -310,7 +307,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
-		ret = drm_gem_shmem_get_pages(shmem);
+		ret = drm_gem_shmem_get_pages_locked(shmem);
 		if (ret)
 			goto err_zero_use;
 
@@ -358,13 +355,22 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 		       struct iosys_map *map)
 {
+	struct drm_gem_object *obj = &shmem->base;
 	int ret;
 
-	ret = mutex_lock_interruptible(&shmem->vmap_lock);
+	if (obj->import_attach)
+		ret = mutex_lock_interruptible(&shmem->vmap_lock);
+	else
+		ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
+
 	if (ret)
 		return ret;
+
 	ret = drm_gem_shmem_vmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
+
+	if (obj->import_attach)
+		mutex_unlock(&shmem->vmap_lock);
+	else
+		dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
@@ -385,7 +391,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 		dma_buf_vunmap(obj->import_attach->dmabuf, map);
 	} else {
 		vunmap(shmem->vaddr);
-		drm_gem_shmem_put_pages(shmem);
+		drm_gem_shmem_put_pages_locked(shmem);
 	}
 
 	shmem->vaddr = NULL;
@@ -406,9 +412,19 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
 			  struct iosys_map *map)
 {
-	mutex_lock(&shmem->vmap_lock);
+	struct drm_gem_object *obj = &shmem->base;
+
+	if (obj->import_attach)
+		mutex_lock(&shmem->vmap_lock);
+	else
+		dma_resv_lock(shmem->base.resv, NULL);
+
 	drm_gem_shmem_vunmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
+
+	if (obj->import_attach)
+		mutex_unlock(&shmem->vmap_lock);
+	else
+		dma_resv_unlock(shmem->base.resv);
 }
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);
 
@@ -442,14 +458,14 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
  */
 int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
 {
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 
 	if (shmem->madv >= 0)
 		shmem->madv = madv;
 
 	madv = shmem->madv;
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return (madv >= 0);
 }
@@ -487,10 +503,10 @@ EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
 
 bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 {
-	if (!mutex_trylock(&shmem->pages_lock))
+	if (!dma_resv_trylock(shmem->base.resv))
 		return false;
 	drm_gem_shmem_purge_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return true;
 }
@@ -549,7 +565,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 
 	if (page_offset >= num_pages ||
 	    WARN_ON_ONCE(!shmem->pages) ||
@@ -561,7 +577,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
 	}
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }

diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 0f1ca0b0db49..5008f0c2428f 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -34,7 +34,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 
 	new_size = min(new_size, bo->base.base.size);
 
-	mutex_lock(&bo->base.pages_lock);
+	dma_resv_lock(bo->base.base.resv, NULL);
 
 	if (bo->base.pages) {
 		pages = bo->base.pages;
@@ -42,7 +42,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
 				       sizeof(*pages), GFP_KERNEL | __GFP_ZERO);
 		if (!pages) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return -ENOMEM;
 		}
 
@@ -56,13 +56,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		struct page *page = shmem_read_mapping_page(mapping, i);
 
 		if (IS_ERR(page)) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return PTR_ERR(page);
 		}
 		pages[i] = page;
 	}
 
-	mutex_unlock(&bo->base.pages_lock);
+	dma_resv_unlock(bo->base.base.resv);
 
 	ret = sg_alloc_table_from_pages(&sgt, pages, i, 0,
 					new_size, GFP_KERNEL);

diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index 77e7cb6d1ae3..3bcf8c291866 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -48,14 +48,14 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
 	if (!mutex_trylock(&bo->mappings.lock))
 		return false;
 
-	if (!mutex_trylock(&shmem->pages_lock))
+	if (!dma_resv_trylock(shmem->base.resv))
 		goto unlock_mappings;
 
 	panfrost_gem_teardown_mappings_locked(bo);
 	drm_gem_shmem_purge_locked(&bo->base);
 	ret = true;
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 unlock_mappings:
 	mutex_unlock(&bo->mappings.lock);
*/ - mutex_unlock(&bo->base.pages_lock); + dma_resv_unlock(obj->resv); goto out; } } @@ -483,13 +486,13 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) { pages[i] = shmem_read_mapping_page(mapping, i); if (IS_ERR(pages[i])) { - mutex_unlock(&bo->base.pages_lock); + dma_resv_unlock(obj->resv); ret = PTR_ERR(pages[i]); goto err_pages; } } - mutex_unlock(&bo->base.pages_lock); + dma_resv_unlock(obj->resv); sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)]; ret = sg_alloc_table_from_pages(sgt, pages + page_offset, diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index d0a57853c188..6f2b8fee620c 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -26,11 +26,6 @@ struct drm_gem_shmem_object { */ struct drm_gem_object base; - /** - * @pages_lock: Protects the page table and use count - */ - struct mutex pages_lock; - /** * @pages: Page table */ From patchwork Sun Apr 24 19:04:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 12825048 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 9BA24C433F5 for ; Sun, 24 Apr 2022 19:05:31 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C86211122CD; Sun, 24 Apr 2022 19:05:30 +0000 (UTC) Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e3e3]) by gabe.freedesktop.org (Postfix) with ESMTPS id DA94610FEB9 for ; Sun, 24 Apr 2022 19:04:59 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: dmitry.osipenko) with ESMTPSA id B8F2E1F44DAD DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1650827098; bh=tuKx3UZqmIV426YhuKAWKATSb+IbuD1a+3nRq6T84XI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=NXVIs3IcxRdmvVlpIMLQd4xTzh9ulUjSE86Lylf/1Av0zRxeT0DcDQDN8ksUQJpQ7 kBcSAQ1O5xgs3tuUM/xC5Tx32M5K/KqeQJjb9uITwwm1TSPlyyV+SOfBxbBg1cFR5B /ynAZEajCUOyEfmutxRfTskFjYYaiWm4gQETRUDelXKu68JHR3Be6SFxFqR97RL1OC naoAJe2i32xH1FA/Ku2yy098INu/kVpx+mALVAQePHFCSZNDZ2mnxBCE0Nub/oUZpk HyaRRoDTgSLOl9QtbY/HU0PqKqrpY1FlnInGl5DY0oj6w3qSy1L+fz90tBJ6Zlel3j 7JobzsJZD/yUQ== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Herring , Steven Price , Alyssa Rosenzweig , Rob Clark , Emil Velikov , Robin Murphy , Qiang Yu , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= Subject: [PATCH v5 12/17] drm/shmem-helper: Add generic memory shrinker Date: Sun, 24 Apr 2022 22:04:19 +0300 Message-Id: <20220424190424.540501-13-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220424190424.540501-1-dmitry.osipenko@collabora.com> References: <20220424190424.540501-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development 
Cc: Dmitry Osipenko , Dmitry Osipenko , linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, virtualization@lists.linux-foundation.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Introduce a common DRM SHMEM shrinker. It reduces code duplication among DRM drivers that implement their own shrinkers. This is an initial version of the shrinker that covers the basic needs of GPU drivers: both purging and eviction of shmem objects are supported. This patch is based on a couple of ideas borrowed from Rob Clark's MSM shrinker and Thomas Zimmermann's variant of the SHMEM shrinker. To start using the DRM SHMEM shrinker, drivers should: 1. Optionally implement the new purge(), evict() and swap_in() shmem object callbacks. 2. Register the shrinker using drm_gem_shmem_shrinker_register(drm_device). 3. Use drm_gem_shmem_set_purgeable_and_evictable(shmem) and similar API functions to activate shrinking of shmem GEMs (a minimal integration sketch is shown further below). Signed-off-by: Daniel Almeida Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 734 +++++++++++++++++++++++-- include/drm/drm_device.h | 4 + include/drm/drm_gem_shmem_helper.h | 115 +++- 3 files changed, 820 insertions(+), 33 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index cc90a4c28ace..25e9bc2803ee 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -89,6 +89,13 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private) mutex_init(&shmem->vmap_lock); INIT_LIST_HEAD(&shmem->madv_list); + /* + * Eviction and purging are disabled by default; the shmem user must enable + * them explicitly using drm_gem_shmem_set_evictable/purgeable().
+ */ + shmem->eviction_disable_count = 1; + shmem->purge_disable_count = 1; + if (!private) { /* * Our buffers are kept pinned, so allocating them @@ -127,6 +134,77 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t } EXPORT_SYMBOL_GPL(drm_gem_shmem_create); +static void +drm_gem_shmem_add_pages_to_shrinker(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker; + size_t page_count = obj->size >> PAGE_SHIFT; + + if (!shmem->pages_accounted_by_shrinker) { + WARN_ON(gem_shrinker->shrinkable_count + page_count < page_count); + gem_shrinker->shrinkable_count += page_count; + shmem->pages_accounted_by_shrinker = true; + } +} + +static void +drm_gem_shmem_remove_pages_from_shrinker(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker; + size_t page_count = obj->size >> PAGE_SHIFT; + + if (shmem->pages_accounted_by_shrinker) { + WARN_ON(gem_shrinker->shrinkable_count < page_count); + gem_shrinker->shrinkable_count -= page_count; + shmem->pages_accounted_by_shrinker = false; + } +} + +static bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem) +{ + return (shmem->madv >= 0) && !shmem->eviction_disable_count && + !shmem->vmap_use_count && !shmem->base.dma_buf && + !shmem->base.import_attach && shmem->sgt && !shmem->evicted; +} + +static void +drm_gem_shmem_update_pages_state_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker; + + if (!gem_shrinker || obj->import_attach) + return; + + lockdep_assert_held(&obj->resv->lock.base); + + mutex_lock(&gem_shrinker->lock); + + if (drm_gem_shmem_is_purgeable(shmem) && !shmem->purge_disable_count) { + drm_gem_shmem_add_pages_to_shrinker(shmem); + list_move_tail(&shmem->madv_list, &gem_shrinker->lru_purgeable); + } else if (drm_gem_shmem_is_evictable(shmem)) { + drm_gem_shmem_add_pages_to_shrinker(shmem); + list_move_tail(&shmem->madv_list, &gem_shrinker->lru_evictable); + } else if (shmem->madv < 0) { + drm_gem_shmem_remove_pages_from_shrinker(shmem); + list_del_init(&shmem->madv_list); + } else if (shmem->evicted) { + drm_gem_shmem_remove_pages_from_shrinker(shmem); + list_move_tail(&shmem->madv_list, &gem_shrinker->lru_evicted); + } else if (!shmem->pages) { + drm_gem_shmem_remove_pages_from_shrinker(shmem); + list_del_init(&shmem->madv_list); + } else { + drm_gem_shmem_remove_pages_from_shrinker(shmem); + list_move_tail(&shmem->madv_list, &gem_shrinker->lru_active); + } + + mutex_unlock(&gem_shrinker->lock); +} + /** * drm_gem_shmem_free - Free resources associated with a shmem GEM object * @shmem: shmem GEM object to free @@ -138,6 +216,9 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; + /* take out shmem GEM object from the memory shrinker */ + drm_gem_shmem_madvise(shmem, -1); + WARN_ON(shmem->vmap_use_count); if (obj->import_attach) { @@ -149,7 +230,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) sg_free_table(shmem->sgt); kfree(shmem->sgt); } - if (shmem->pages) + if (shmem->pages_use_count) drm_gem_shmem_put_pages(shmem); } @@ -160,18 +241,208 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) } EXPORT_SYMBOL_GPL(drm_gem_shmem_free); -static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object 
*shmem) +static int +drm_gem_shmem_set_evictable_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + int ret = 0; + + lockdep_assert_held(&obj->resv->lock.base); + + WARN_ON_ONCE(!shmem->eviction_disable_count--); + + if (shmem->madv < 0) + ret = -ENOMEM; + + drm_gem_shmem_update_pages_state_locked(shmem); + + return ret; +} + +static int +drm_gem_shmem_set_unevictable_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + int err; + + lockdep_assert_held(&obj->resv->lock.base); + + if (shmem->madv < 0) + return -ENOMEM; + + if (shmem->evicted) { + err = drm_gem_shmem_swap_in_locked(shmem); + if (err) + return err; + } + + shmem->eviction_disable_count++; + + drm_gem_shmem_update_pages_state_locked(shmem); + + return 0; +} + +static int +drm_gem_shmem_set_unpurgeable_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + + lockdep_assert_held(&obj->resv->lock.base); + + if (shmem->madv < 0) + return -ENOMEM; + + shmem->purge_disable_count++; + + drm_gem_shmem_update_pages_state_locked(shmem); + + return 0; +} + +static int +drm_gem_shmem_set_purgeable_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + int ret = 0; + + lockdep_assert_held(&obj->resv->lock.base); + + WARN_ON_ONCE(!shmem->purge_disable_count--); + + if (shmem->madv < 0) + ret = -ENOMEM; + + drm_gem_shmem_update_pages_state_locked(shmem); + + return ret; +} + +/** + * drm_gem_shmem_set_purgeable() - Make GEM purgeable by memory shrinker + * @shmem: shmem GEM object + * + * Tell memory shrinker that this GEM can be purged. Initially purging is + * disabled for all GEMs. Each set_purgeable() call must have a corresponding + * set_unpurgeable() call. If the GEM was purged, then -ENOMEM is returned. + * + * Returns: + * 0 on success or a negative error code on failure. + */ +int drm_gem_shmem_set_purgeable(struct drm_gem_shmem_object *shmem) +{ + int ret; + + dma_resv_lock(shmem->base.resv, NULL); + ret = drm_gem_shmem_set_purgeable_locked(shmem); + dma_resv_unlock(shmem->base.resv); + + return ret; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_set_purgeable); + +static int +drm_gem_shmem_set_purgeable_and_evictable_locked(struct drm_gem_shmem_object *shmem) +{ + int ret; + + ret = drm_gem_shmem_set_evictable_locked(shmem); + if (!ret) { + ret = drm_gem_shmem_set_purgeable_locked(shmem); + if (ret) + drm_gem_shmem_set_unevictable_locked(shmem); + } + + return ret; +} + +static int +drm_gem_shmem_set_unpurgeable_and_unevictable_locked(struct drm_gem_shmem_object *shmem) +{ + int ret; + + ret = drm_gem_shmem_set_unpurgeable_locked(shmem); + if (!ret) { + ret = drm_gem_shmem_set_unevictable_locked(shmem); + if (ret) + drm_gem_shmem_set_purgeable_locked(shmem); + } + + return ret; +} + +/** + * drm_gem_shmem_set_purgeable_and_evictable() - Make GEM purgeable and + * evictable by memory shrinker + * @shmem: shmem GEM object + * + * Tell memory shrinker that this GEM can be purged and evicted. Each + * set_unpurgeable_and_unevictable() call must have a corresponding + * set_purgeable_and_evictable() call. If the GEM was purged, then -ENOMEM + * is returned. + * + * Returns: + * 0 on success or a negative error code on failure.
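As referenced in the commit message, wiring a driver up to the shrinker takes two calls. Here is a minimal sketch under the assumption of a hypothetical driver (the example_probe()/example_bo_create() names are illustrative, not part of this series):

#include <linux/err.h>
#include <drm/drm_device.h>
#include <drm/drm_gem_shmem_helper.h>

static int example_probe(struct drm_device *ddev)
{
	/* one generic shrinker instance per DRM device */
	return drm_gem_shmem_shrinker_register(ddev);
}

static struct drm_gem_shmem_object *
example_bo_create(struct drm_device *ddev, size_t size)
{
	struct drm_gem_shmem_object *shmem;
	int err;

	shmem = drm_gem_shmem_create(ddev, size);
	if (IS_ERR(shmem))
		return shmem;

	/* opt this BO in to both purging and eviction */
	err = drm_gem_shmem_set_purgeable_and_evictable(shmem);
	if (err) {
		drm_gem_object_put(&shmem->base);
		return ERR_PTR(err);
	}

	return shmem;
}

drm_gem_shmem_shrinker_unregister() undoes the registration on driver removal; the panfrost patch later in the series follows exactly this shape.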
+ */ +int drm_gem_shmem_set_purgeable_and_evictable(struct drm_gem_shmem_object *shmem) +{ + int ret; + + dma_resv_lock(shmem->base.resv, NULL); + ret = drm_gem_shmem_set_purgeable_and_evictable_locked(shmem); + dma_resv_unlock(shmem->base.resv); + + return ret; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_set_purgeable_and_evictable); + +/** + * drm_gem_shmem_set_unpurgeable_and_unevictable() - Make GEM unpurgeable and + * unevictable by memory shrinker + * @shmem: shmem GEM object + * + * Tell memory shrinker that this GEM can't be purged and evicted. Each + * set_purgeable_and_evictable() call must have corresponding + * set_unpurgeable_and_unevictable() call. If GEM was purged, then -ENOMEM + * is returned. + * + * Returns: + * 0 on success or a negative error code on failure. + */ +static int +drm_gem_shmem_set_unpurgeable_and_unevictable(struct drm_gem_shmem_object *shmem) +{ + int ret; + + ret = dma_resv_lock_interruptible(shmem->base.resv, NULL); + if (ret) + return ret; + + ret = drm_gem_shmem_set_unpurgeable_and_unevictable_locked(shmem); + dma_resv_unlock(shmem->base.resv); + + return ret; +} + +static int +drm_gem_shmem_acquire_pages_locked(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; struct page **pages; - if (shmem->pages_use_count++ > 0) + if (shmem->madv < 0) { + WARN_ON(shmem->pages); + return -ENOMEM; + } + + if (shmem->pages) { + WARN_ON(!shmem->evicted); return 0; + } pages = drm_gem_get_pages(obj); if (IS_ERR(pages)) { DRM_DEBUG_KMS("Failed to get pages (%ld)\n", PTR_ERR(pages)); - shmem->pages_use_count = 0; return PTR_ERR(pages); } @@ -190,6 +461,25 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) return 0; } +static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) +{ + int err; + + if (shmem->madv < 0) + return -ENOMEM; + + if (shmem->pages_use_count++ > 0) + return 0; + + err = drm_gem_shmem_acquire_pages_locked(shmem); + if (err) { + shmem->pages_use_count = 0; + return err; + } + + return 0; +} + /* * drm_gem_shmem_get_pages - Allocate backing pages for a shmem GEM object * @shmem: shmem GEM object @@ -210,21 +500,38 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) if (ret) return ret; ret = drm_gem_shmem_get_pages_locked(shmem); + + drm_gem_shmem_update_pages_state_locked(shmem); + dma_resv_unlock(shmem->base.resv); return ret; } EXPORT_SYMBOL(drm_gem_shmem_get_pages); -static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) +static void drm_gem_shmem_get_pages_no_fail(struct drm_gem_shmem_object *shmem) { - struct drm_gem_object *obj = &shmem->base; + WARN_ON(shmem->base.import_attach); - if (WARN_ON_ONCE(!shmem->pages_use_count)) - return; + dma_resv_lock(shmem->base.resv, NULL); - if (--shmem->pages_use_count > 0) + if (drm_gem_shmem_get_pages_locked(shmem)) + shmem->pages_use_count++; + + drm_gem_shmem_update_pages_state_locked(shmem); + + dma_resv_unlock(shmem->base.resv); +} + +static void +drm_gem_shmem_release_pages_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + + if (!shmem->pages) { + WARN_ON(!shmem->evicted && shmem->madv >= 0); return; + } #ifdef CONFIG_X86 if (shmem->map_wc) @@ -237,6 +544,21 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) shmem->pages = NULL; } +static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + + lockdep_assert_held(&obj->resv->lock.base); + + if 
(WARN_ON(!shmem->pages_use_count)) + return; + + if (--shmem->pages_use_count > 0) + return; + + drm_gem_shmem_release_pages_locked(shmem); +} + /* * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object * @shmem: shmem GEM object @@ -247,6 +569,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem) { dma_resv_lock(shmem->base.resv, NULL); drm_gem_shmem_put_pages_locked(shmem); + drm_gem_shmem_update_pages_state_locked(shmem); dma_resv_unlock(shmem->base.resv); } EXPORT_SYMBOL(drm_gem_shmem_put_pages); @@ -263,9 +586,21 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages); */ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem) { + int err; + WARN_ON(shmem->base.import_attach); - return drm_gem_shmem_get_pages(shmem); + err = drm_gem_shmem_set_unpurgeable_and_unevictable(shmem); + if (err) + return err; + + err = drm_gem_shmem_get_pages(shmem); + if (err) { + drm_gem_shmem_set_purgeable_and_evictable(shmem); + return err; + } + + return 0; } EXPORT_SYMBOL(drm_gem_shmem_pin); @@ -281,6 +616,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem) WARN_ON(shmem->base.import_attach); drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_set_purgeable_and_evictable(shmem); } EXPORT_SYMBOL(drm_gem_shmem_unpin); @@ -365,8 +701,18 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, if (ret) return ret; + + ret = drm_gem_shmem_set_unpurgeable_and_unevictable_locked(shmem); + if (ret) + goto unlock; + ret = drm_gem_shmem_vmap_locked(shmem, map); + if (ret) + drm_gem_shmem_set_purgeable_and_evictable_locked(shmem); + else + drm_gem_shmem_update_pages_state_locked(shmem); +unlock: if (obj->import_attach) mutex_unlock(&shmem->vmap_lock); else @@ -420,6 +766,8 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, dma_resv_lock(shmem->base.resv, NULL); drm_gem_shmem_vunmap_locked(shmem, map); + drm_gem_shmem_update_pages_state_locked(shmem); + drm_gem_shmem_set_purgeable_and_evictable_locked(shmem); if (obj->import_attach) mutex_unlock(&shmem->vmap_lock); @@ -465,12 +813,86 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv) madv = shmem->madv; + drm_gem_shmem_update_pages_state_locked(shmem); + dma_resv_unlock(shmem->base.resv); return (madv >= 0); } EXPORT_SYMBOL(drm_gem_shmem_madvise); +/** + * drm_gem_shmem_swap_in_locked() - Moves shmem GEM back to memory and enables + * hardware access to the memory. + * @shmem: shmem GEM object + * + * This function moves shmem GEM back to memory if it was previously evicted + * by the memory shrinker. The GEM is ready to use on success. + * + * Returns: + * 0 on success or a negative error code on failure. 
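Callers are expected to swap evicted BOs back in before exposing them to the hardware. A hedged sketch of a pre-submission helper, assuming the caller is allowed to take the reservation lock (the virtio-gpu patch later in this series does the equivalent in virtio_gpu_array_prepare()):

#include <linux/dma-resv.h>
#include <drm/drm_gem_shmem_helper.h>

static int example_prepare_bo_for_hw(struct drm_gem_shmem_object *shmem)
{
	int ret;

	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
	if (ret)
		return ret;

	/* repopulates the pages and re-creates the sgt if the BO was evicted */
	ret = drm_gem_shmem_swap_in_locked(shmem);

	dma_resv_unlock(shmem->base.resv);

	return ret;
}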
+ */ +int drm_gem_shmem_swap_in_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + struct sg_table *sgt; + int err; + + lockdep_assert_held(&obj->resv->lock.base); + + if (shmem->evicted) { + err = drm_gem_shmem_acquire_pages_locked(shmem); + if (err) + return err; + + sgt = drm_gem_shmem_get_sg_table(shmem); + if (IS_ERR(sgt)) + return PTR_ERR(sgt); + + err = dma_map_sgtable(obj->dev->dev, sgt, + DMA_BIDIRECTIONAL, 0); + if (err) { + sg_free_table(sgt); + kfree(sgt); + return err; + } + + shmem->sgt = sgt; + shmem->evicted = false; + + drm_gem_shmem_update_pages_state_locked(shmem); + + if (shmem->swap_in) { + err = shmem->swap_in(shmem); + if (err) + return err; + } + } + + if (!shmem->pages) + return -ENOMEM; + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_swap_in_locked); + +static void drm_gem_shmem_unpin_pages_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + struct drm_device *dev = obj->dev; + + if (shmem->evicted) + return; + + dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0); + drm_gem_shmem_release_pages_locked(shmem); + drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); + + sg_free_table(shmem->sgt); + kfree(shmem->sgt); + shmem->sgt = NULL; +} + void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; @@ -501,17 +923,6 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem) } EXPORT_SYMBOL(drm_gem_shmem_purge_locked); -bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem) -{ - if (!dma_resv_trylock(shmem->base.resv)) - return false; - drm_gem_shmem_purge_locked(shmem); - dma_resv_unlock(shmem->base.resv); - - return true; -} -EXPORT_SYMBOL(drm_gem_shmem_purge); - /** * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object * @file: DRM file structure to create the dumb buffer for @@ -561,22 +972,31 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf) vm_fault_t ret; struct page *page; pgoff_t page_offset; + bool pages_inactive; + int err; /* We don't use vmf->pgoff since that has the fake offset */ page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT; dma_resv_lock(shmem->base.resv, NULL); - if (page_offset >= num_pages || - WARN_ON_ONCE(!shmem->pages) || - shmem->madv < 0) { + pages_inactive = (shmem->evicted || shmem->madv < 0 || !shmem->pages_use_count); + WARN_ON_ONCE(!shmem->pages ^ pages_inactive); + + if (page_offset >= num_pages || (!shmem->pages && !shmem->evicted)) { ret = VM_FAULT_SIGBUS; } else { + err = drm_gem_shmem_swap_in_locked(shmem); + if (err) { + ret = VM_FAULT_OOM; + goto unlock; + } + page = shmem->pages[page_offset]; ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page)); } - +unlock: dma_resv_unlock(shmem->base.resv); return ret; @@ -586,13 +1006,8 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma) { struct drm_gem_object *obj = vma->vm_private_data; struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); - int ret; - - WARN_ON(shmem->base.import_attach); - - ret = drm_gem_shmem_get_pages(shmem); - WARN_ON_ONCE(ret != 0); + drm_gem_shmem_get_pages_no_fail(shmem); drm_gem_vm_open(vma); } @@ -660,9 +1075,13 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_mmap); void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem, struct drm_printer *p, unsigned int indent) { + drm_printf_indent(p, indent, "eviction_disable_count=%u\n", shmem->eviction_disable_count); + drm_printf_indent(p, indent, "purge_disable_count=%u\n", 
shmem->purge_disable_count); drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count); drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count); + drm_printf_indent(p, indent, "evicted=%d\n", shmem->evicted); drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr); + drm_printf_indent(p, indent, "madv=%d\n", shmem->madv); } EXPORT_SYMBOL(drm_gem_shmem_print_info); @@ -735,6 +1154,10 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem) shmem->sgt = sgt; + dma_resv_lock(shmem->base.resv, NULL); + drm_gem_shmem_update_pages_state_locked(shmem); + dma_resv_unlock(shmem->base.resv); + return sgt; err_free_sgt: @@ -781,6 +1204,255 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev, } EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table); +static struct drm_gem_shmem_shrinker * +to_drm_shrinker(struct shrinker *shrinker) +{ + return container_of(shrinker, struct drm_gem_shmem_shrinker, base); +} + +static unsigned long +drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker, + struct shrink_control *sc) +{ + struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker); + u64 count = READ_ONCE(gem_shrinker->shrinkable_count); + + if (count >= SHRINK_EMPTY) + return SHRINK_EMPTY - 1; + + return count ?: SHRINK_EMPTY; +} + +static unsigned long drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + int err; + + if (!drm_gem_shmem_is_evictable(shmem)) + return 0; + + if (shmem->evict) { + err = shmem->evict(shmem); + if (err) + return 0; + } + + WARN_ON(!drm_gem_shmem_is_evictable(shmem)); + WARN_ON(shmem->madv < 0); + WARN_ON(shmem->evicted); + + drm_gem_shmem_unpin_pages_locked(shmem); + + shmem->evicted = true; + drm_gem_shmem_update_pages_state_locked(shmem); + + return obj->size >> PAGE_SHIFT; +} + +static unsigned long drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + int err; + + if (!drm_gem_shmem_is_purgeable(shmem)) + return 0; + + if (shmem->purge) { + err = shmem->purge(shmem); + if (err) + return 0; + } + + WARN_ON(!drm_gem_shmem_is_purgeable(shmem)); + WARN_ON(shmem->madv < 0); + + drm_gem_shmem_unpin_pages_locked(shmem); + drm_gem_free_mmap_offset(obj); + + /* Our goal here is to return as much of the memory as + * is possible back to the system as we are called from OOM. + * To do this we must instruct the shmfs to drop all of its + * backing pages, *now*. 
+ */ + shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1); + + invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1); + + shmem->madv = -1; + drm_gem_shmem_update_pages_state_locked(shmem); + + return obj->size >> PAGE_SHIFT; +} + +static unsigned long +drm_gem_shmem_shrinker_run_objects_scan(struct shrinker *shrinker, + unsigned long nr_to_scan, + bool *lock_contention, + bool evict) +{ + struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker); + struct drm_gem_shmem_object *shmem; + struct list_head still_in_list; + struct drm_gem_object *obj; + unsigned long freed = 0; + struct list_head *lru; + size_t page_count; + + INIT_LIST_HEAD(&still_in_list); + + mutex_lock(&gem_shrinker->lock); + + if (evict) + lru = &gem_shrinker->lru_evictable; + else + lru = &gem_shrinker->lru_purgeable; + + while (freed < nr_to_scan) { + shmem = list_first_entry_or_null(lru, typeof(*shmem), madv_list); + if (!shmem) + break; + + obj = &shmem->base; + page_count = obj->size >> PAGE_SHIFT; + list_move_tail(&shmem->madv_list, &still_in_list); + + if (evict && get_nr_swap_pages() < page_count) + continue; + + /* + * If it's in the process of being freed, gem_object->free() + * may be blocked on lock waiting to remove it. So just + * skip it. + */ + if (!kref_get_unless_zero(&obj->refcount)) + continue; + + mutex_unlock(&gem_shrinker->lock); + + /* prevent racing with job-submission code paths */ + if (!dma_resv_trylock(obj->resv)) { + *lock_contention |= true; + goto shrinker_lock; + } + + /* prevent racing with the dma-buf exporting */ + if (!mutex_trylock(&gem_shrinker->dev->object_name_lock)) { + *lock_contention |= true; + goto resv_unlock; + } + + /* check whether h/w uses this object */ + if (!dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_WRITE)) + goto object_name_unlock; + + if (evict) + freed += drm_gem_shmem_evict(shmem); + else + freed += drm_gem_shmem_purge(shmem); + +object_name_unlock: + mutex_unlock(&gem_shrinker->dev->object_name_lock); +resv_unlock: + dma_resv_unlock(obj->resv); +shrinker_lock: + drm_gem_object_put(&shmem->base); + mutex_lock(&gem_shrinker->lock); + } + + list_splice_tail(&still_in_list, lru); + + mutex_unlock(&gem_shrinker->lock); + + return freed; +} + +static unsigned long +drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker, + struct shrink_control *sc) +{ + unsigned long nr_to_scan = sc->nr_to_scan; + bool lock_contention = false; + unsigned long freed; + + /* purge as many objects as we can */ + freed = drm_gem_shmem_shrinker_run_objects_scan(shrinker, nr_to_scan, + &lock_contention, false); + nr_to_scan -= freed; + + /* evict as many objects as we can */ + if (freed < nr_to_scan) + freed += drm_gem_shmem_shrinker_run_objects_scan(shrinker, + nr_to_scan, + &lock_contention, + true); + + return (!freed && !lock_contention) ? SHRINK_STOP : freed; +} + +/** + * drm_gem_shmem_shrinker_register() - Register shmem shrinker + * @dev: DRM device + * + * Returns: + * 0 on success or a negative error code on failure. 
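For context, core reclaim drives the count/scan callbacks installed by the registration function documented above roughly as in this simplified model (the real logic in mm/vmscan.c batches and rescales nr_to_scan; this sketch only shows the SHRINK_EMPTY/SHRINK_STOP contract):

#include <linux/shrinker.h>

/* Simplified model of how mm/vmscan.c consumes a struct shrinker. */
static void example_reclaim_pass(struct shrinker *shrinker)
{
	struct shrink_control sc = {
		.gfp_mask = GFP_KERNEL,
		.nr_to_scan = SHRINK_BATCH,
	};
	unsigned long count;

	count = shrinker->count_objects(shrinker, &sc);
	if (count == 0 || count == SHRINK_EMPTY)
		return; /* nothing shrinkable right now */

	/* a SHRINK_STOP return tells reclaim to back off (e.g. lock contention) */
	shrinker->scan_objects(shrinker, &sc);
}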
+ */ +int drm_gem_shmem_shrinker_register(struct drm_device *dev) +{ + struct drm_gem_shmem_shrinker *gem_shrinker; + int err; + + if (WARN_ON(dev->shmem_shrinker)) + return -EBUSY; + + gem_shrinker = kzalloc(sizeof(*gem_shrinker), GFP_KERNEL); + if (!gem_shrinker) + return -ENOMEM; + + gem_shrinker->base.count_objects = drm_gem_shmem_shrinker_count_objects; + gem_shrinker->base.scan_objects = drm_gem_shmem_shrinker_scan_objects; + gem_shrinker->base.seeks = DEFAULT_SEEKS; + gem_shrinker->dev = dev; + + INIT_LIST_HEAD(&gem_shrinker->lru_purgeable); + INIT_LIST_HEAD(&gem_shrinker->lru_evictable); + INIT_LIST_HEAD(&gem_shrinker->lru_evicted); + INIT_LIST_HEAD(&gem_shrinker->lru_active); + mutex_init(&gem_shrinker->lock); + + dev->shmem_shrinker = gem_shrinker; + + err = register_shrinker(&gem_shrinker->base); + if (err) { + dev->shmem_shrinker = NULL; + kfree(gem_shrinker); + return err; + } + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_shrinker_register); + +/** + * drm_gem_shmem_shrinker_unregister() - Unregister shmem shrinker + * @dev: DRM device + */ +void drm_gem_shmem_shrinker_unregister(struct drm_device *dev) +{ + struct drm_gem_shmem_shrinker *gem_shrinker = dev->shmem_shrinker; + + if (gem_shrinker) { + unregister_shrinker(&gem_shrinker->base); + WARN_ON(!list_empty(&gem_shrinker->lru_purgeable)); + WARN_ON(!list_empty(&gem_shrinker->lru_evictable)); + WARN_ON(!list_empty(&gem_shrinker->lru_evicted)); + WARN_ON(!list_empty(&gem_shrinker->lru_active)); + mutex_destroy(&gem_shrinker->lock); + dev->shmem_shrinker = NULL; + kfree(gem_shrinker); + } +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_shrinker_unregister); + MODULE_DESCRIPTION("DRM SHMEM memory-management helpers"); MODULE_IMPORT_NS(DMA_BUF); MODULE_LICENSE("GPL v2"); diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h index 9923c7a6885e..929546cad894 100644 --- a/include/drm/drm_device.h +++ b/include/drm/drm_device.h @@ -16,6 +16,7 @@ struct drm_vblank_crtc; struct drm_vma_offset_manager; struct drm_vram_mm; struct drm_fb_helper; +struct drm_gem_shmem_shrinker; struct inode; @@ -277,6 +278,9 @@ struct drm_device { /** @vram_mm: VRAM MM memory manager */ struct drm_vram_mm *vram_mm; + /** @shmem_shrinker: SHMEM GEM memory shrinker */ + struct drm_gem_shmem_shrinker *shmem_shrinker; + /** * @switch_power_state: * diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index 6f2b8fee620c..638cb16a4576 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -6,6 +6,7 @@ #include #include #include +#include #include #include @@ -15,6 +16,7 @@ struct dma_buf_attachment; struct drm_mode_create_dumb; struct drm_printer; +struct drm_device; struct sg_table; /** @@ -43,8 +45,8 @@ struct drm_gem_shmem_object { * @madv: State for madvise * * 0 is active/inuse. + * 1 is not-needed/can-be-purged * A negative value is the object is purged. - * Positive values are driver specific and not used by the helpers. */ int madv; @@ -96,6 +98,80 @@ struct drm_gem_shmem_object { * @map_wc: map object write-combined (instead of using shmem defaults). */ bool map_wc; + + /** + * @eviction_disable_count: + * + * The shmem pages are disallowed to be evicted by the memory shrinker + * while count is non-zero. Used internally by memory shrinker. + */ + unsigned int eviction_disable_count; + + /** + * @purge_disable_count: + * + * The shmem pages are disallowed to be purged by the memory shrinker + * while count is non-zero. Used internally by memory shrinker. 
+ */ + unsigned int purge_disable_count; + + /** + * @evicted: True if shmem pages were evicted by the memory shrinker. + * Used internally by memory shrinker. + */ + bool evicted; + + /** + * @pages_accounted_by_shrinker: True if shmem pages can be evicted or + * purged by the memory shrinker. Used internally by memory shrinker + * to prevent double accounting of shrinkable pages. + */ + bool pages_accounted_by_shrinker; + + /** + * @swap_in: + * + * Invoked by shmem shrinker after pinning shmem GEM pages to memory. + * GEM's DMA reservation is locked by the shrinker during invocation. + * This callback is intended for DRM drivers that need to do something + * special to make pages accessible to hardware after they've been + * pinned. + * + * Returns 0 on success, or -errno on error. + * + * This callback is optional and should be set by drivers. + */ + int (*swap_in)(struct drm_gem_shmem_object *shmem); + + /** + * @evict: + * + * Invoked by shmem shrinker before unpinning shmem GEM pages from memory. + * GEM's DMA reservation is locked by the shrinker during invocation. + * This callback is intended for DRM drivers that need to do something + * special to make pages inaccessible to hardware before they are + * swapped out. + * + * Returns 0 on success, or -errno on error. + * + * This callback is optional and should be set by drivers. + */ + int (*evict)(struct drm_gem_shmem_object *shmem); + + /** + * @purge: + * + * Invoked by shmem shrinker before permanently purging shmem GEM pages. + * GEM's DMA reservation is locked by the shrinker during invocation. + * This callback is intended for DRM drivers that need to do something + * special to make pages inaccessible to hardware before they are + * purged. + * + * Returns 0 on success, or -errno on error. + * + * This callback is optional and should be set by drivers.
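A driver wires these optional hooks at object-creation time. A minimal sketch, assuming hypothetical example_dma_unmap()/example_dma_remap() helpers that stand in for whatever hardware teardown and re-setup the driver needs (they are not part of this series):

#include <drm/drm_gem_shmem_helper.h>

static int example_dma_unmap(struct drm_gem_shmem_object *shmem);
static int example_dma_remap(struct drm_gem_shmem_object *shmem);

static int example_evict(struct drm_gem_shmem_object *shmem)
{
	/* resv is held; pages are about to be unpinned and swapped out */
	return example_dma_unmap(shmem);
}

static int example_purge(struct drm_gem_shmem_object *shmem)
{
	/* same contract as evict(), but the pages are discarded for good */
	return example_dma_unmap(shmem);
}

static int example_swap_in(struct drm_gem_shmem_object *shmem)
{
	/* pages and sgt are back; re-expose them to the hardware */
	return example_dma_remap(shmem);
}

static void example_bo_init_shrinker_hooks(struct drm_gem_shmem_object *shmem)
{
	shmem->evict = example_evict;
	shmem->purge = example_purge;
	shmem->swap_in = example_swap_in;
}

The virtio-gpu patch that follows sets exactly these three pointers in virtio_gpu_object_create().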
+ */ + int (*purge)(struct drm_gem_shmem_object *shmem); }; #define to_drm_gem_shmem_obj(obj) \ @@ -116,6 +192,9 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv); +int drm_gem_shmem_set_purgeable(struct drm_gem_shmem_object *shmem); +int drm_gem_shmem_set_purgeable_and_evictable(struct drm_gem_shmem_object *shmem); + static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem) { return (shmem->madv > 0) && @@ -123,8 +202,8 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem !shmem->base.dma_buf && !shmem->base.import_attach; } +int drm_gem_shmem_swap_in_locked(struct drm_gem_shmem_object *shmem); void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem); -bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem); struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem); struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem); @@ -267,6 +346,38 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v return drm_gem_shmem_mmap(shmem, vma); } +/** + * struct drm_gem_shmem_shrinker - Generic memory shrinker for shmem GEMs + */ +struct drm_gem_shmem_shrinker { + /** @base: Shrinker for purging shmem GEM objects */ + struct shrinker base; + + /** @lock: Protects @lru_* */ + struct mutex lock; + + /** @lru_purgeable: List of shmem GEM objects that can be purged */ + struct list_head lru_purgeable; + + /** @lru_active: List of active shmem GEM objects */ + struct list_head lru_active; + + /** @lru_evictable: List of shmem GEM objects that can be evicted */ + struct list_head lru_evictable; + + /** @lru_evicted: List of evicted shmem GEM objects */ + struct list_head lru_evicted; + + /** @dev: DRM device that uses this shrinker */ + struct drm_device *dev; + + /** @shrinkable_count: Count of shmem GEM pages to be purged and evicted */ + u64 shrinkable_count; +}; + +int drm_gem_shmem_shrinker_register(struct drm_device *dev); +void drm_gem_shmem_shrinker_unregister(struct drm_device *dev); + /* * Driver ops */ From patchwork Sun Apr 24 19:04:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 12825045 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A1A90C433FE for ; Sun, 24 Apr 2022 19:05:22 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id F095510FFFB; Sun, 24 Apr 2022 19:05:20 +0000 (UTC) Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e3e3]) by gabe.freedesktop.org (Postfix) with ESMTPS id 3F9F310FFB1 for ; Sun, 24 Apr 2022 19:05:02 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: dmitry.osipenko) with ESMTPSA id F21B81F44DAC DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1650827100; bh=sLYuWFCCs2GStUL/XQAxiZ9Boi8kBkwtjnjNc2eadPg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=R/tDq/HNjFKFnyljS8ZGceWd2DR54pa3vWhKHNU7vfZK689hM/RswCLzA/6aLF8t1 
VT1EIPexBYJO9XEKYs7kl5KP/vpItIl2xCYW7WTA0FMY4rttQDS0+FHwC3xix0FjOr 19lsnxpsTEmrm3lAITuxvClzGCz+tXwIjusTtwBobnAfFBNiAp6RtckBhKi3+VvJ0u 4GKmqryXGTqIXoUJ78miWEkPK4CDvYEBclOZNwVAOT9YmzeANlEpspS4HgNms7nORH iPp1lMfshrVRSv/ef2b5uoxy6VKXl41WfsoI9N3/flgINmyKREcZvbUC2Soyi0gtmH kMPvrwlCFNphg== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Herring , Steven Price , Alyssa Rosenzweig , Rob Clark , Emil Velikov , Robin Murphy , Qiang Yu , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= Subject: [PATCH v5 13/17] drm/virtio: Support memory shrinking Date: Sun, 24 Apr 2022 22:04:20 +0300 Message-Id: <20220424190424.540501-14-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220424190424.540501-1-dmitry.osipenko@collabora.com> References: <20220424190424.540501-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development Cc: Dmitry Osipenko , Dmitry Osipenko , linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, virtualization@lists.linux-foundation.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Support the generic DRM SHMEM memory shrinker and add a new madvise IOCTL to the VirtIO-GPU driver. Userspace (the BO cache manager of the Mesa driver) marks BOs as "don't need" using the new IOCTL to let the shrinker purge the marked BOs on OOM; the shrinker will also evict unpurgeable shmem BOs from memory if the guest supports swap. Altogether this helps prevent OOM kills of guest applications that use VirGL by lowering memory pressure.
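For userspace, the flow is a pair of hints per BO. A minimal sketch of how a Mesa-style BO cache could use the new IOCTL (the DRM fd and handle plumbing are assumed, error handling is reduced to the return codes):

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/virtgpu_drm.h>

/* Cache an idle BO: tell the kernel it may purge the backing pages. */
static int bo_cache_mark_dontneed(int drm_fd, uint32_t bo_handle)
{
	struct drm_virtgpu_madvise args = {
		.bo_handle = bo_handle,
		.madv = VIRTGPU_MADV_DONTNEED,
	};

	return ioctl(drm_fd, DRM_IOCTL_VIRTGPU_MADVISE, &args);
}

/* Take a BO back out of the cache: args.retained == 0 means it was purged. */
static int bo_cache_take(int drm_fd, uint32_t bo_handle, uint32_t *retained)
{
	struct drm_virtgpu_madvise args = {
		.bo_handle = bo_handle,
		.madv = VIRTGPU_MADV_WILLNEED,
	};
	int ret = ioctl(drm_fd, DRM_IOCTL_VIRTGPU_MADVISE, &args);

	if (ret == 0)
		*retained = args.retained;

	return ret;
}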
Signed-off-by: Daniel Almeida Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/virtio/virtgpu_drv.h | 15 ++- drivers/gpu/drm/virtio/virtgpu_gem.c | 46 +++++++++ drivers/gpu/drm/virtio/virtgpu_ioctl.c | 37 +++++++ drivers/gpu/drm/virtio/virtgpu_kms.c | 9 ++ drivers/gpu/drm/virtio/virtgpu_object.c | 130 +++++++++++++++++++----- drivers/gpu/drm/virtio/virtgpu_plane.c | 22 +++- drivers/gpu/drm/virtio/virtgpu_vq.c | 40 ++++++++ include/uapi/drm/virtgpu_drm.h | 14 +++ 8 files changed, 283 insertions(+), 30 deletions(-) diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h index b2d93cb12ebf..c8918a271e1c 100644 --- a/drivers/gpu/drm/virtio/virtgpu_drv.h +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h @@ -274,7 +274,7 @@ struct virtio_gpu_fpriv { }; /* virtgpu_ioctl.c */ -#define DRM_VIRTIO_NUM_IOCTLS 12 +#define DRM_VIRTIO_NUM_IOCTLS 13 extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS]; void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file); @@ -310,6 +310,10 @@ void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs); void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev, struct virtio_gpu_object_array *objs); void virtio_gpu_array_put_free_work(struct work_struct *work); +int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object_array *objs); +int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo); +bool virtio_gpu_gem_madvise(struct virtio_gpu_object *obj, int madv); /* virtgpu_vq.c */ int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev); @@ -321,6 +325,8 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev, struct virtio_gpu_fence *fence); void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *bo); +int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *bo); void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev, uint64_t offset, uint32_t width, uint32_t height, @@ -341,6 +347,9 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *obj, struct virtio_gpu_mem_entry *ents, unsigned int nents); +void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *obj, + struct virtio_gpu_fence *fence); int virtio_gpu_attach_status_page(struct virtio_gpu_device *vgdev); int virtio_gpu_detach_status_page(struct virtio_gpu_device *vgdev); void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev, @@ -483,4 +492,8 @@ void virtio_gpu_vram_unmap_dma_buf(struct device *dev, struct sg_table *sgt, enum dma_data_direction dir); +/* virtgpu_gem_shrinker.c */ +int virtio_gpu_gem_shrinker_init(struct virtio_gpu_device *vgdev); +void virtio_gpu_gem_shrinker_fini(struct virtio_gpu_device *vgdev); + #endif diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c index 7db48d17ee3a..08189ad43736 100644 --- a/drivers/gpu/drm/virtio/virtgpu_gem.c +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c @@ -294,3 +294,49 @@ void virtio_gpu_array_put_free_work(struct work_struct *work) } spin_unlock(&vgdev->obj_free_lock); } + +int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object_array *objs) +{ + struct drm_gem_shmem_object *shmem; + int ret = 0; + u32 i; + + for (i = 0; i < objs->nents; i++) { + shmem = to_drm_gem_shmem_obj(objs->objs[i]); + ret = drm_gem_shmem_swap_in_locked(shmem); + if (ret) + break; + } + + return ret; +} + +bool 
virtio_gpu_gem_madvise(struct virtio_gpu_object *bo, int madv) +{ + /* + * For now we support only purging BOs that are backed by guest's + * memory. + */ + if (!virtio_gpu_is_shmem(bo)) + return true; + + return drm_gem_shmem_madvise(&bo->base, madv); +} + +int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo) +{ + struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private; + int err; + + if (bo->created) { + err = virtio_gpu_cmd_release_resource(vgdev, bo); + if (err) + return err; + + virtio_gpu_notify(vgdev); + bo->created = false; + } + + return 0; +} diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c index f8d83358d2a0..55ee9bd2098e 100644 --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c @@ -217,6 +217,10 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data, ret = virtio_gpu_array_lock_resv(buflist); if (ret) goto out_memdup; + + ret = virtio_gpu_array_prepare(vgdev, buflist); + if (ret) + goto out_unresv; } out_fence = virtio_gpu_fence_alloc(vgdev, fence_ctx, ring_idx); @@ -423,6 +427,10 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev, if (ret != 0) goto err_put_free; + ret = virtio_gpu_array_prepare(vgdev, objs); + if (ret) + goto err_unlock; + fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); if (!fence) { ret = -ENOMEM; @@ -482,6 +490,10 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data, if (ret != 0) goto err_put_free; + ret = virtio_gpu_array_prepare(vgdev, objs); + if (ret) + goto err_unlock; + ret = -ENOMEM; fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); @@ -836,6 +848,28 @@ static int virtio_gpu_context_init_ioctl(struct drm_device *dev, return ret; } +static int virtio_gpu_madvise_ioctl(struct drm_device *dev, + void *data, + struct drm_file *file) +{ + struct drm_virtgpu_madvise *args = data; + struct virtio_gpu_object *bo; + struct drm_gem_object *obj; + + if (args->madv > VIRTGPU_MADV_DONTNEED) + return -EOPNOTSUPP; + + obj = drm_gem_object_lookup(file, args->bo_handle); + if (!obj) + return -ENOENT; + + bo = gem_to_virtio_gpu_obj(obj); + args->retained = virtio_gpu_gem_madvise(bo, args->madv); + drm_gem_object_put(obj); + + return 0; +} + struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = { DRM_IOCTL_DEF_DRV(VIRTGPU_MAP, virtio_gpu_map_ioctl, DRM_RENDER_ALLOW), @@ -875,4 +909,7 @@ struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = { DRM_IOCTL_DEF_DRV(VIRTGPU_CONTEXT_INIT, virtio_gpu_context_init_ioctl, DRM_RENDER_ALLOW), + + DRM_IOCTL_DEF_DRV(VIRTGPU_MADVISE, virtio_gpu_madvise_ioctl, + DRM_RENDER_ALLOW), }; diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c index 0d1e3eb61bee..1175999acea1 100644 --- a/drivers/gpu/drm/virtio/virtgpu_kms.c +++ b/drivers/gpu/drm/virtio/virtgpu_kms.c @@ -238,6 +238,12 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev) goto err_scanouts; } + ret = drm_gem_shmem_shrinker_register(dev); + if (ret) { + DRM_ERROR("shrinker init failed\n"); + goto err_modeset; + } + virtio_device_ready(vgdev->vdev); if (num_capsets) @@ -250,6 +256,8 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev) 5 * HZ); return 0; +err_modeset: + virtio_gpu_modeset_fini(vgdev); err_scanouts: virtio_gpu_free_vbufs(vgdev); err_vbufs: @@ -289,6 +297,7 @@ void virtio_gpu_release(struct drm_device *dev) if (!vgdev) return; + 
drm_gem_shmem_shrinker_unregister(dev); virtio_gpu_modeset_fini(vgdev); virtio_gpu_free_vbufs(vgdev); virtio_gpu_cleanup_cap_cache(vgdev); diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c index 8d7728181de0..fc8f2a78fd1d 100644 --- a/drivers/gpu/drm/virtio/virtgpu_object.c +++ b/drivers/gpu/drm/virtio/virtgpu_object.c @@ -97,41 +97,53 @@ static void virtio_gpu_free_object(struct drm_gem_object *obj) virtio_gpu_cleanup_object(bo); } -static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = { - .free = virtio_gpu_free_object, - .open = virtio_gpu_gem_object_open, - .close = virtio_gpu_gem_object_close, - .print_info = drm_gem_shmem_object_print_info, - .export = virtgpu_gem_prime_export, - .pin = drm_gem_shmem_object_pin, - .unpin = drm_gem_shmem_object_unpin, - .get_sg_table = drm_gem_shmem_object_get_sg_table, - .vmap = drm_gem_shmem_object_vmap, - .vunmap = drm_gem_shmem_object_vunmap, - .mmap = drm_gem_shmem_object_mmap, - .vm_ops = &drm_gem_shmem_vm_ops, -}; - -bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo) +static int virtio_gpu_detach_object_fenced(struct virtio_gpu_object *bo) { - return bo->base.base.funcs == &virtio_gpu_shmem_funcs; + struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private; + struct virtio_gpu_fence *fence; + + fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); + if (!fence) + return -ENOMEM; + + virtio_gpu_object_detach(vgdev, bo, fence); + virtio_gpu_notify(vgdev); + + dma_fence_wait(&fence->f, false); + dma_fence_put(&fence->f); + + return 0; } -struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev, - size_t size) +static int virtio_gpu_shmem_purge(struct drm_gem_shmem_object *shmem) { - struct virtio_gpu_object_shmem *shmem; - struct drm_gem_shmem_object *dshmem; + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(&shmem->base); + int err; - shmem = kzalloc(sizeof(*shmem), GFP_KERNEL); - if (!shmem) - return ERR_PTR(-ENOMEM); + /* + * At first tell host to stop using guest's memory to ensure that + * host won't touch the released guest's memory once it's gone. 
+ */ + err = virtio_gpu_detach_object_fenced(bo); + if (err) + return err; - dshmem = &shmem->base.base; - dshmem->base.funcs = &virtio_gpu_shmem_funcs; - return &dshmem->base; + err = virtio_gpu_gem_host_mem_release(bo); + if (err) + return err; + + return 0; +} + +static int virtio_gpu_shmem_evict(struct drm_gem_shmem_object *shmem) +{ + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(&shmem->base); + + return virtio_gpu_detach_object_fenced(bo); } +static int virtio_gpu_shmem_swap_in(struct drm_gem_shmem_object *shmem); + static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *bo, struct virtio_gpu_mem_entry **ents, @@ -176,6 +188,59 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, return 0; } +static int virtio_gpu_shmem_swap_in(struct drm_gem_shmem_object *shmem) +{ + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(&shmem->base); + struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private; + struct virtio_gpu_mem_entry *ents; + unsigned int nents; + int err; + + err = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents); + if (err) + return err; + + virtio_gpu_object_attach(vgdev, bo, ents, nents); + virtio_gpu_notify(vgdev); + + return 0; +} + +static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = { + .free = virtio_gpu_free_object, + .open = virtio_gpu_gem_object_open, + .close = virtio_gpu_gem_object_close, + .print_info = drm_gem_shmem_object_print_info, + .export = virtgpu_gem_prime_export, + .pin = drm_gem_shmem_object_pin, + .unpin = drm_gem_shmem_object_unpin, + .get_sg_table = drm_gem_shmem_object_get_sg_table, + .vmap = drm_gem_shmem_object_vmap, + .vunmap = drm_gem_shmem_object_vunmap, + .mmap = drm_gem_shmem_object_mmap, + .vm_ops = &drm_gem_shmem_vm_ops, +}; + +bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo) +{ + return bo->base.base.funcs == &virtio_gpu_shmem_funcs; +} + +struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev, + size_t size) +{ + struct virtio_gpu_object_shmem *shmem; + struct drm_gem_shmem_object *dshmem; + + shmem = kzalloc(sizeof(*shmem), GFP_KERNEL); + if (!shmem) + return ERR_PTR(-ENOMEM); + + dshmem = &shmem->base.base; + dshmem->base.funcs = &virtio_gpu_shmem_funcs; + return &dshmem->base; +} + int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, struct virtio_gpu_object_params *params, struct virtio_gpu_object **bo_ptr, @@ -201,6 +266,9 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, goto err_free_gem; bo->dumb = params->dumb; + bo->base.purge = virtio_gpu_shmem_purge; + bo->base.evict = virtio_gpu_shmem_evict; + bo->base.swap_in = virtio_gpu_shmem_swap_in; ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents); if (ret != 0) @@ -228,10 +296,18 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, virtio_gpu_cmd_resource_create_3d(vgdev, bo, params, objs, fence); virtio_gpu_object_attach(vgdev, bo, ents, nents); + + shmem_obj->pages_mark_dirty_on_put = 1; + + drm_gem_shmem_set_purgeable_and_evictable(shmem_obj); } else { virtio_gpu_cmd_create_resource(vgdev, bo, params, objs, fence); virtio_gpu_object_attach(vgdev, bo, ents, nents); + + shmem_obj->pages_mark_dirty_on_put = 1; + + drm_gem_shmem_set_purgeable_and_evictable(shmem_obj); } *bo_ptr = bo; diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c index 7148f3813d8b..c3ac77cae555 100644 --- a/drivers/gpu/drm/virtio/virtgpu_plane.c +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c @@ -246,20 
+246,32 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane, struct virtio_gpu_device *vgdev = dev->dev_private; struct virtio_gpu_framebuffer *vgfb; struct virtio_gpu_object *bo; + int err; if (!new_state->fb) return 0; vgfb = to_virtio_gpu_framebuffer(new_state->fb); bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]); - if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob)) + + if (virtio_gpu_is_shmem(bo)) { + err = drm_gem_shmem_pin(&bo->base); + if (err) + return err; + } + + if (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob) return 0; if (bo->dumb && (plane->state->fb != new_state->fb)) { vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); - if (!vgfb->fence) + if (!vgfb->fence) { + if (virtio_gpu_is_shmem(bo)) + drm_gem_shmem_unpin(&bo->base); + return -ENOMEM; + } } return 0; @@ -269,15 +281,21 @@ static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane, struct drm_plane_state *state) { struct virtio_gpu_framebuffer *vgfb; + struct virtio_gpu_object *bo; if (!state->fb) return; vgfb = to_virtio_gpu_framebuffer(state->fb); + bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]); + if (vgfb->fence) { dma_fence_put(&vgfb->fence->f); vgfb->fence = NULL; } + + if (virtio_gpu_is_shmem(bo)) + drm_gem_shmem_unpin(&bo->base); } static void virtio_gpu_cursor_plane_update(struct drm_plane *plane, diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c index 06566e44307d..2a04dad1ae89 100644 --- a/drivers/gpu/drm/virtio/virtgpu_vq.c +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c @@ -536,6 +536,21 @@ void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev, virtio_gpu_cleanup_object(bo); } +int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *bo) +{ + struct virtio_gpu_resource_unref *cmd_p; + struct virtio_gpu_vbuffer *vbuf; + + cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); + memset(cmd_p, 0, sizeof(*cmd_p)); + + cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_UNREF); + cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); + + return virtio_gpu_queue_ctrl_buffer(vgdev, vbuf); +} + void virtio_gpu_cmd_set_scanout(struct virtio_gpu_device *vgdev, uint32_t scanout_id, uint32_t resource_id, uint32_t width, uint32_t height, @@ -636,6 +651,23 @@ virtio_gpu_cmd_resource_attach_backing(struct virtio_gpu_device *vgdev, virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence); } +static void +virtio_gpu_cmd_resource_detach_backing(struct virtio_gpu_device *vgdev, + u32 resource_id, + struct virtio_gpu_fence *fence) +{ + struct virtio_gpu_resource_attach_backing *cmd_p; + struct virtio_gpu_vbuffer *vbuf; + + cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); + memset(cmd_p, 0, sizeof(*cmd_p)); + + cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING); + cmd_p->resource_id = cpu_to_le32(resource_id); + + virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence); +} + static void virtio_gpu_cmd_get_display_info_cb(struct virtio_gpu_device *vgdev, struct virtio_gpu_vbuffer *vbuf) { @@ -1099,6 +1131,14 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev, ents, nents, NULL); } +void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *obj, + struct virtio_gpu_fence *fence) +{ + virtio_gpu_cmd_resource_detach_backing(vgdev, obj->hw_res_handle, + fence); +} + void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev, struct virtio_gpu_output *output) { diff --git 
a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h index 0512fde5e697..12197d8e9759 100644 --- a/include/uapi/drm/virtgpu_drm.h +++ b/include/uapi/drm/virtgpu_drm.h @@ -48,6 +48,7 @@ extern "C" { #define DRM_VIRTGPU_GET_CAPS 0x09 #define DRM_VIRTGPU_RESOURCE_CREATE_BLOB 0x0a #define DRM_VIRTGPU_CONTEXT_INIT 0x0b +#define DRM_VIRTGPU_MADVISE 0x0c #define VIRTGPU_EXECBUF_FENCE_FD_IN 0x01 #define VIRTGPU_EXECBUF_FENCE_FD_OUT 0x02 @@ -196,6 +197,15 @@ struct drm_virtgpu_context_init { __u64 ctx_set_params; }; +#define VIRTGPU_MADV_WILLNEED 0 +#define VIRTGPU_MADV_DONTNEED 1 +struct drm_virtgpu_madvise { + __u32 bo_handle; + __u32 retained; /* out, non-zero if BO can be used */ + __u32 madv; + __u32 pad; +}; + /* * Event code that's given when VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK is in * effect. The event size is sizeof(drm_event), since there is no additional @@ -246,6 +256,10 @@ struct drm_virtgpu_context_init { DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_CONTEXT_INIT, \ struct drm_virtgpu_context_init) +#define DRM_IOCTL_VIRTGPU_MADVISE \ + DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MADVISE, \ + struct drm_virtgpu_madvise) + #if defined(__cplusplus) } #endif From patchwork Sun Apr 24 19:04:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 12825044 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id EAE43C433EF for ; Sun, 24 Apr 2022 19:05:20 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 33B3E10FEB9; Sun, 24 Apr 2022 19:05:20 +0000 (UTC) Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [46.235.227.227]) by gabe.freedesktop.org (Postfix) with ESMTPS id 54E8F10FFB1 for ; Sun, 24 Apr 2022 19:05:04 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: dmitry.osipenko) with ESMTPSA id 1D8981F40651 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1650827103; bh=dzvAS2g1wMmzjGNuHFt4mHjebPlEWHgH15CKQfy6J9c=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=E9tPg+ZO356J8KX3CPAIhJZ/PqHytGXk/eeTwdLJ7S7iUk58VKKMWuzPaUCwXRdDm 22AjKz2l4MEN38rKXdbDdI2rvEQ8rMRQ5hqlalNGVo7OnVtiYNKUWmHm7tBUlsJ1ns +qfslNmdrgfEmNt8HpMS+3RVCxdn99U8iDgxCmA/wL5E4/xdxjaAaHSSqq9p96bip/ ZVC8W83F24Xay2axgYiqYpElFv9V2tGMjFuM3wLiISFsjpgemDuDaGfxBTgP+UEkLS T0bIT3zwVDGGQrYk5oR0gspozq5oqw6HkPeGivxcALQfXZI3CBHdtbcnfY6m3t3xMD FLJdzeHYJaakQ== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Herring , Steven Price , Alyssa Rosenzweig , Rob Clark , Emil Velikov , Robin Murphy , Qiang Yu , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= Subject: [PATCH v5 14/17] drm/panfrost: Switch to generic memory shrinker Date: Sun, 24 Apr 2022 22:04:21 +0300 Message-Id: <20220424190424.540501-15-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220424190424.540501-1-dmitry.osipenko@collabora.com> References: <20220424190424.540501-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 
From patchwork Sun Apr 24 19:04:21 2022
From: Dmitry Osipenko
Subject: [PATCH v5 14/17] drm/panfrost: Switch to generic memory shrinker
Date: Sun, 24 Apr 2022 22:04:21 +0300
Message-Id: <20220424190424.540501-15-dmitry.osipenko@collabora.com>

Replace Panfrost's memory shrinker with a generic DRM SHMEM memory
shrinker.

Tested-by: Steven Price
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/panfrost/Makefile             |   1 -
 drivers/gpu/drm/panfrost/panfrost_device.h    |   4 -
 drivers/gpu/drm/panfrost/panfrost_drv.c       |  19 +--
 drivers/gpu/drm/panfrost/panfrost_gem.c       |  28 ++--
 drivers/gpu/drm/panfrost/panfrost_gem.h       |   9 --
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  | 122 ------------------
 drivers/gpu/drm/panfrost/panfrost_job.c       |  18 ++-
 7 files changed, 37 insertions(+), 164 deletions(-)
 delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c

diff --git a/drivers/gpu/drm/panfrost/Makefile b/drivers/gpu/drm/panfrost/Makefile
index b71935862417..ecf0864cb515 100644
--- a/drivers/gpu/drm/panfrost/Makefile
+++ b/drivers/gpu/drm/panfrost/Makefile
@@ -5,7 +5,6 @@ panfrost-y := \
	panfrost_device.o \
	panfrost_devfreq.o \
	panfrost_gem.o \
-	panfrost_gem_shrinker.o \
	panfrost_gpu.o \
	panfrost_job.o \
	panfrost_mmu.o \
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
index 8b25278f34c8..fe04b21fc044 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.h
+++ b/drivers/gpu/drm/panfrost/panfrost_device.h
@@ -115,10 +115,6 @@ struct panfrost_device {
		atomic_t pending;
	} reset;
 
-	struct mutex shrinker_lock;
-	struct list_head shrinker_list;
-	struct shrinker shrinker;
-
	struct panfrost_devfreq pfdevfreq;
 };
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 7fcbc2a5b6cd..57a93555813f 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -160,7 +160,6 @@ panfrost_lookup_bos(struct drm_device *dev,
			break;
		}
 
-		atomic_inc(&bo->gpu_usecount);
		job->mappings[i] = mapping;
	}
 
@@ -391,7 +390,6 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 {
	struct panfrost_file_priv *priv = file_priv->driver_priv;
	struct drm_panfrost_madvise *args = data;
-	struct panfrost_device *pfdev = dev->dev_private;
	struct drm_gem_object *gem_obj;
	struct panfrost_gem_object *bo;
	int ret = 0;
@@ -404,7 +402,6 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 
	bo = to_panfrost_bo(gem_obj);
 
-	mutex_lock(&pfdev->shrinker_lock);
	mutex_lock(&bo->mappings.lock);
	if (args->madv == PANFROST_MADV_DONTNEED) {
		struct panfrost_gem_mapping *first;
@@ -430,17 +427,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 
	args->retained = drm_gem_shmem_madvise(&bo->base, args->madv);
 
-	if (args->retained) {
-		if (args->madv == PANFROST_MADV_DONTNEED)
-			list_add_tail(&bo->base.madv_list,
-				      &pfdev->shrinker_list);
-		else if (args->madv == PANFROST_MADV_WILLNEED)
-			list_del_init(&bo->base.madv_list);
-	}
-
 out_unlock_mappings:
	mutex_unlock(&bo->mappings.lock);
-	mutex_unlock(&pfdev->shrinker_lock);
 
	drm_gem_object_put(gem_obj);
	return ret;
@@ -571,9 +559,6 @@ static int panfrost_probe(struct platform_device *pdev)
	ddev->dev_private = pfdev;
	pfdev->ddev = ddev;
 
-	mutex_init(&pfdev->shrinker_lock);
-	INIT_LIST_HEAD(&pfdev->shrinker_list);
-
	err = panfrost_device_init(pfdev);
	if (err) {
		if (err != -EPROBE_DEFER)
@@ -595,7 +580,7 @@ static int panfrost_probe(struct platform_device *pdev)
	if (err < 0)
		goto err_out1;
 
-	panfrost_gem_shrinker_init(ddev);
+	drm_gem_shmem_shrinker_register(ddev);
 
	return 0;
 
@@ -613,8 +598,8 @@ static int panfrost_remove(struct platform_device *pdev)
	struct panfrost_device *pfdev = platform_get_drvdata(pdev);
	struct drm_device *ddev = pfdev->ddev;
 
+	drm_gem_shmem_shrinker_unregister(ddev);
	drm_dev_unregister(ddev);
-	panfrost_gem_shrinker_cleanup(ddev);
 
	pm_runtime_get_sync(pfdev->dev);
	pm_runtime_disable(pfdev->dev);
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 293e799e2fe8..f91ef0726e5e 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -19,16 +19,6 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
	struct panfrost_device *pfdev = obj->dev->dev_private;
 
-	/*
-	 * Make sure the BO is no longer inserted in the shrinker list before
-	 * taking care of the destruction itself. If we don't do that we have a
-	 * race condition between this function and what's done in
-	 * panfrost_gem_shrinker_scan().
-	 */
-	mutex_lock(&pfdev->shrinker_lock);
-	list_del_init(&bo->base.madv_list);
-	mutex_unlock(&pfdev->shrinker_lock);
-
	/*
	 * If we still have mappings attached to the BO, there's a problem in
	 * our refcounting.
@@ -209,6 +199,20 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
	.vm_ops = &drm_gem_shmem_vm_ops,
 };
 
+static int panfrost_shmem_purge(struct drm_gem_shmem_object *shmem)
+{
+	struct panfrost_gem_object *bo = to_panfrost_bo(&shmem->base);
+
+	if (!mutex_trylock(&bo->mappings.lock))
+		return -EBUSY;
+
+	panfrost_gem_teardown_mappings_locked(bo);
+
+	mutex_unlock(&bo->mappings.lock);
+
+	return 0;
+}
+
 /**
  * panfrost_gem_create_object - Implementation of driver->gem_create_object.
  * @dev: DRM device
@@ -230,6 +234,7 @@ struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t
	mutex_init(&obj->mappings.lock);
	obj->base.base.funcs = &panfrost_gem_funcs;
	obj->base.map_wc = !pfdev->coherent;
+	obj->base.purge = panfrost_shmem_purge;
 
	return &obj->base.base;
 }
@@ -266,6 +271,9 @@ panfrost_gem_create_with_handle(struct drm_file *file_priv,
	if (ret)
		return ERR_PTR(ret);
 
+	if (!bo->is_heap)
+		drm_gem_shmem_set_purgeable(shmem);
+
	return bo;
 }
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
index 8088d5fd8480..09da064f1c07 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
@@ -30,12 +30,6 @@ struct panfrost_gem_object {
		struct mutex lock;
	} mappings;
 
-	/*
-	 * Count the number of jobs referencing this BO so we don't let the
-	 * shrinker reclaim this object prematurely.
-	 */
-	atomic_t gpu_usecount;
-
	bool noexec		:1;
	bool is_heap		:1;
 };
@@ -84,7 +78,4 @@ panfrost_gem_mapping_get(struct panfrost_gem_object *bo,
 void panfrost_gem_mapping_put(struct panfrost_gem_mapping *mapping);
 void panfrost_gem_teardown_mappings_locked(struct panfrost_gem_object *bo);
 
-void panfrost_gem_shrinker_init(struct drm_device *dev);
-void panfrost_gem_shrinker_cleanup(struct drm_device *dev);
-
 #endif /* __PANFROST_GEM_H__ */
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
deleted file mode 100644
index 3bcf8c291866..000000000000
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ /dev/null
@@ -1,122 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright (C) 2019 Arm Ltd.
- *
- * Based on msm_gem_freedreno.c:
- * Copyright (C) 2016 Red Hat
- * Author: Rob Clark
- */
-
-#include <linux/list.h>
-
-#include <drm/drm_device.h>
-#include <drm/drm_gem_shmem_helper.h>
-
-#include "panfrost_device.h"
-#include "panfrost_gem.h"
-#include "panfrost_mmu.h"
-
-static unsigned long
-panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
-{
-	struct panfrost_device *pfdev =
-		container_of(shrinker, struct panfrost_device, shrinker);
-	struct drm_gem_shmem_object *shmem;
-	unsigned long count = 0;
-
-	if (!mutex_trylock(&pfdev->shrinker_lock))
-		return 0;
-
-	list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) {
-		if (drm_gem_shmem_is_purgeable(shmem))
-			count += shmem->base.size >> PAGE_SHIFT;
-	}
-
-	mutex_unlock(&pfdev->shrinker_lock);
-
-	return count;
-}
-
-static bool panfrost_gem_purge(struct drm_gem_object *obj)
-{
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
-	bool ret = false;
-
-	if (atomic_read(&bo->gpu_usecount))
-		return false;
-
-	if (!mutex_trylock(&bo->mappings.lock))
-		return false;
-
-	if (!dma_resv_trylock(shmem->base.resv))
-		goto unlock_mappings;
-
-	panfrost_gem_teardown_mappings_locked(bo);
-	drm_gem_shmem_purge_locked(&bo->base);
-	ret = true;
-
-	dma_resv_unlock(shmem->base.resv);
-
-unlock_mappings:
-	mutex_unlock(&bo->mappings.lock);
-	return ret;
-}
-
-static unsigned long
-panfrost_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
-{
-	struct panfrost_device *pfdev =
-		container_of(shrinker, struct panfrost_device, shrinker);
-	struct drm_gem_shmem_object *shmem, *tmp;
-	unsigned long freed = 0;
-
-	if (!mutex_trylock(&pfdev->shrinker_lock))
-		return SHRINK_STOP;
-
-	list_for_each_entry_safe(shmem, tmp, &pfdev->shrinker_list, madv_list) {
-		if (freed >= sc->nr_to_scan)
-			break;
-		if (drm_gem_shmem_is_purgeable(shmem) &&
-		    panfrost_gem_purge(&shmem->base)) {
-			freed += shmem->base.size >> PAGE_SHIFT;
-			list_del_init(&shmem->madv_list);
-		}
-	}
-
-	mutex_unlock(&pfdev->shrinker_lock);
-
-	if (freed > 0)
-		pr_info_ratelimited("Purging %lu bytes\n", freed << PAGE_SHIFT);
-
-	return freed;
-}
-
-/**
- * panfrost_gem_shrinker_init - Initialize panfrost shrinker
- * @dev: DRM device
- *
- * This function registers and sets up the panfrost shrinker.
- */
-void panfrost_gem_shrinker_init(struct drm_device *dev)
-{
-	struct panfrost_device *pfdev = dev->dev_private;
-
-	pfdev->shrinker.count_objects = panfrost_gem_shrinker_count;
-	pfdev->shrinker.scan_objects = panfrost_gem_shrinker_scan;
-	pfdev->shrinker.seeks = DEFAULT_SEEKS;
-	WARN_ON(register_shrinker(&pfdev->shrinker));
-}
-
-/**
- * panfrost_gem_shrinker_cleanup - Clean up panfrost shrinker
- * @dev: DRM device
- *
- * This function unregisters the panfrost shrinker.
- */
-void panfrost_gem_shrinker_cleanup(struct drm_device *dev)
-{
-	struct panfrost_device *pfdev = dev->dev_private;
-
-	if (pfdev->shrinker.nr_deferred) {
-		unregister_shrinker(&pfdev->shrinker);
-	}
-}
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index fda5871aebe3..bcf496b837ce 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -271,6 +271,19 @@ static void panfrost_attach_object_fences(struct drm_gem_object **bos,
		dma_resv_add_fence(bos[i]->resv, fence, DMA_RESV_USAGE_WRITE);
 }
 
+static int panfrost_objects_prepare(struct drm_gem_object **bos, int bo_count)
+{
+	struct panfrost_gem_object *bo;
+	int ret = 0;
+
+	while (!ret && bo_count--) {
+		bo = to_panfrost_bo(bos[bo_count]);
+		ret = bo->base.madv ? -ENOMEM : 0;
+	}
+
+	return ret;
+}
+
 int panfrost_job_push(struct panfrost_job *job)
 {
	struct panfrost_device *pfdev = job->pfdev;
@@ -282,6 +295,10 @@ int panfrost_job_push(struct panfrost_job *job)
	if (ret)
		return ret;
 
+	ret = panfrost_objects_prepare(job->bos, job->bo_count);
+	if (ret)
+		goto unlock;
+
	mutex_lock(&pfdev->sched_lock);
	drm_sched_job_arm(&job->base);
 
@@ -323,7 +340,6 @@ static void panfrost_job_cleanup(struct kref *ref)
			if (!job->mappings[i])
				break;
 
-			atomic_dec(&job->mappings[i]->obj->gpu_usecount);
			panfrost_gem_mapping_put(job->mappings[i]);
		}
		kvfree(job->mappings);
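
To show what this conversion amounts to for another driver, here is a hedged sketch of the three touch points the Panfrost patch uses: registering the generic shrinker, providing a purge callback, and opting BOs in. All mydrv_* names are hypothetical, and the exact init ordering is an assumption:

/* Illustrative sketch, not from the series. */
#include <linux/err.h>
#include <linux/slab.h>
#include <drm/drm_device.h>
#include <drm/drm_gem_shmem_helper.h>

/* Reclaim hook: drop driver-private state (GPU mappings and the like),
 * or return -EBUSY to make the shrinker skip this BO for now. */
static int mydrv_shmem_purge(struct drm_gem_shmem_object *shmem)
{
	return 0;
}

static struct drm_gem_object *mydrv_gem_create_object(struct drm_device *dev,
						      size_t size)
{
	struct drm_gem_shmem_object *shmem;

	shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
	if (!shmem)
		return ERR_PTR(-ENOMEM);

	shmem->purge = mydrv_shmem_purge;	/* reclaim hook, see above */

	return &shmem->base;
}

static int mydrv_load(struct drm_device *ddev)
{
	/* one registration call replaces the count/scan boilerplate that
	 * the deleted panfrost_gem_shrinker.c carried */
	drm_gem_shmem_shrinker_register(ddev);
	return 0;
}

Marking a BO purgeable then happens after creation, as Panfrost does with drm_gem_shmem_set_purgeable(shmem) for non-heap BOs.
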
From patchwork Sun Apr 24 19:04:22 2022
From: Dmitry Osipenko
Subject: [PATCH v5 15/17] drm/shmem-helper: Make drm_gem_shmem_get_pages() private
Date: Sun, 24 Apr 2022 22:04:22 +0300
Message-Id: <20220424190424.540501-16-dmitry.osipenko@collabora.com>

The VirtIO-GPU driver was the only user of drm_gem_shmem_get_pages(),
and it now uses drm_gem_shmem_get_pages_sgt() instead. Make
drm_gem_shmem_get_pages() private to drm_gem_shmem_helper.c.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 3 +--
 include/drm/drm_gem_shmem_helper.h     | 1 -
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 25e9bc2803ee..7ec5f8002f68 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -490,7 +490,7 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 {
	int ret;
 
@@ -507,7 +507,6 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 
	return ret;
 }
-EXPORT_SYMBOL(drm_gem_shmem_get_pages);
 
 static void drm_gem_shmem_get_pages_no_fail(struct drm_gem_shmem_object *shmem)
 {
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 638cb16a4576..5b351933c293 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -180,7 +180,6 @@ struct drm_gem_shmem_object {
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
 void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
 
-int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
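
Since drm_gem_shmem_get_pages() is no longer exported, the pattern for code outside the helper that needs the backing pages is drm_gem_shmem_get_pages_sgt(), as virtio-gpu now does. A brief sketch (mydrv_map_bo is hypothetical; the ERR_PTR return convention is assumed, following the one noted for drm_gem_shmem_get_sg_table() in patch 02):

#include <linux/err.h>
#include <linux/scatterlist.h>
#include <drm/drm_gem_shmem_helper.h>

static int mydrv_map_bo(struct drm_gem_shmem_object *shmem)
{
	struct sg_table *sgt;

	/* allocates the pages if needed and returns them as an sg_table */
	sgt = drm_gem_shmem_get_pages_sgt(shmem);
	if (IS_ERR(sgt))
		return PTR_ERR(sgt);

	/* program GPU page tables / set up DMA from sgt here */
	return 0;
}
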
From patchwork Sun Apr 24 19:04:23 2022
From: Dmitry Osipenko
Subject: [PATCH v5 16/17] drm/shmem-helper: Make drm_gem_shmem_is_purgeable() private
Date: Sun, 24 Apr 2022 22:04:23 +0300
Message-Id: <20220424190424.540501-17-dmitry.osipenko@collabora.com>

The Panfrost driver was the only external user of the
drm_gem_shmem_is_purgeable() helper. Now that Panfrost has been converted
to the new generic memory shrinker, the helper has no external users
anymore, hence make it private to drm_gem_shmem_helper.c to keep the code
clean.
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 9 ++++++++-
 include/drm/drm_gem_shmem_helper.h     | 7 -------
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 7ec5f8002f68..045921ad4795 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -169,6 +169,13 @@ static bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem)
	       !shmem->base.import_attach && shmem->sgt && !shmem->evicted;
 }
 
+static bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
+{
+	return (shmem->madv > 0) && !shmem->purge_disable_count &&
+	       !shmem->vmap_use_count && !shmem->base.dma_buf &&
+	       !shmem->base.import_attach && shmem->sgt;
+}
+
 static void
 drm_gem_shmem_update_pages_state_locked(struct drm_gem_shmem_object *shmem)
 {
@@ -182,7 +189,7 @@ drm_gem_shmem_update_pages_state_locked(struct drm_gem_shmem_object *shmem)
 
	mutex_lock(&gem_shrinker->lock);
 
-	if (drm_gem_shmem_is_purgeable(shmem) && !shmem->purge_disable_count) {
+	if (drm_gem_shmem_is_purgeable(shmem)) {
		drm_gem_shmem_add_pages_to_shrinker(shmem);
		list_move_tail(&shmem->madv_list, &gem_shrinker->lru_purgeable);
	} else if (drm_gem_shmem_is_evictable(shmem)) {
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5b351933c293..972687bf9717 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -194,13 +194,6 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv);
 int drm_gem_shmem_set_purgeable(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_set_purgeable_and_evictable(struct drm_gem_shmem_object *shmem);
 
-static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
-{
-	return (shmem->madv > 0) &&
-	       !shmem->vmap_use_count && shmem->sgt &&
-	       !shmem->base.dma_buf && !shmem->base.import_attach;
-}
-
 int drm_gem_shmem_swap_in_locked(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
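
With the predicate now private, drivers interact with purgeability only through drm_gem_shmem_madvise() and the set_purgeable() setters. A hedged sketch of a driver madvise ioctl in the style of panfrost_ioctl_madvise() from patch 14 (struct mydrv_madvise_args and the handler name are hypothetical):

#include <drm/drm_device.h>
#include <drm/drm_file.h>
#include <drm/drm_gem.h>
#include <drm/drm_gem_shmem_helper.h>

struct mydrv_madvise_args {
	__u32 handle;
	__u32 retained;	/* out: non-zero if the pages are still resident */
	__s32 madv;
};

static int mydrv_ioctl_madvise(struct drm_device *dev, void *data,
			       struct drm_file *file)
{
	struct mydrv_madvise_args *args = data;
	struct drm_gem_object *obj;

	obj = drm_gem_object_lookup(file, args->handle);
	if (!obj)
		return -ENOENT;

	/* the helper updates the shrinker bookkeeping internally now */
	args->retained = drm_gem_shmem_madvise(to_drm_gem_shmem_obj(obj),
					       args->madv);

	drm_gem_object_put(obj);

	return 0;
}
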
From patchwork Sun Apr 24 19:04:24 2022
From: Dmitry Osipenko
Subject: [PATCH v5 17/17] drm/shmem-helper: Remove drm_gem_shmem_purge_locked()
Date: Sun, 24 Apr 2022 22:04:24 +0300
Message-Id: <20220424190424.540501-18-dmitry.osipenko@collabora.com>

The Panfrost driver was the only user of the drm_gem_shmem_purge_locked()
helper. Now that Panfrost has been converted to the new generic memory
shrinker, the helper has no users anymore; remove it.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 30 --------------------------
 include/drm/drm_gem_shmem_helper.h     |  1 -
 2 files changed, 31 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 045921ad4795..ef7691c84fa8 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -899,36 +899,6 @@ static void drm_gem_shmem_unpin_pages_locked(struct drm_gem_shmem_object *shmem)
	shmem->sgt = NULL;
 }
 
-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
-{
-	struct drm_gem_object *obj = &shmem->base;
-	struct drm_device *dev = obj->dev;
-
-	WARN_ON(!drm_gem_shmem_is_purgeable(shmem));
-
-	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
-	sg_free_table(shmem->sgt);
-	kfree(shmem->sgt);
-	shmem->sgt = NULL;
-
-	drm_gem_shmem_put_pages_locked(shmem);
-
-	shmem->madv = -1;
-
-	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
-	drm_gem_free_mmap_offset(obj);
-
-	/* Our goal here is to return as much of the memory as
-	 * is possible back to the system as we are called from OOM.
-	 * To do this we must instruct the shmfs to drop all of its
-	 * backing pages, *now*.
-	 */
-	shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
-
-	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
-}
-EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
-
 /**
  * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
  * @file: DRM file structure to create the dumb buffer for
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 972687bf9717..8d7053c36fa6 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -195,7 +195,6 @@ int drm_gem_shmem_set_purgeable(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_set_purgeable_and_evictable(struct drm_gem_shmem_object *shmem);
 
 int drm_gem_shmem_swap_in_locked(struct drm_gem_shmem_object *shmem);
-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
 
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);