From patchwork Mon Apr 11 21:59:23 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 12809700
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Daniel Almeida, Gert Wollny, Tomeu Vizoso, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
    Alyssa Rosenzweig, Rob Clark
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, Gustavo Padovan
Subject: [PATCH v3 01/15] drm/virtio: Correct drm_gem_shmem_get_sg_table() error handling
Date: Tue, 12 Apr 2022 00:59:23 +0300
Message-Id: <20220411215937.281655-2-dmitry.osipenko@collabora.com>
In-Reply-To: <20220411215937.281655-1-dmitry.osipenko@collabora.com>

drm_gem_shmem_get_sg_table() never returns NULL on error; it returns an
ERR_PTR()-encoded error code. Correct the error handling to avoid a crash
on OOM.

Cc: stable@vger.kernel.org
Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index f293e6ad52da..3d0c8d4d1c20 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -168,9 +168,11 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 	 * since virtio_gpu doesn't support dma-buf import from other devices.
 	 */
 	shmem->pages = drm_gem_shmem_get_sg_table(&bo->base);
-	if (!shmem->pages) {
+	ret = PTR_ERR_OR_ZERO(shmem->pages);
+	if (ret) {
 		drm_gem_shmem_unpin(&bo->base);
-		return -EINVAL;
+		shmem->pages = NULL;
+		return ret;
 	}
 
 	if (use_dma_api) {
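As background for the fix above: in-kernel allocators such as
drm_gem_shmem_get_sg_table() report failure through an ERR_PTR()-encoded
pointer rather than NULL, so callers have to decode the pointer instead of
NULL-checking it. A minimal sketch of the pattern (the caller function is
hypothetical, not part of the patch):

	#include <linux/err.h>

	static int example_attach_sg_table(struct virtio_gpu_object *bo,
					   struct virtio_gpu_object_shmem *shmem)
	{
		/* Returns a valid pointer or an ERR_PTR(), never NULL. */
		shmem->pages = drm_gem_shmem_get_sg_table(&bo->base);

		/* A NULL check would never fire here; failures arrive as
		 * ERR_PTR(-ENOMEM) and the like, caught by IS_ERR(). */
		if (IS_ERR(shmem->pages)) {
			int ret = PTR_ERR(shmem->pages);

			shmem->pages = NULL;
			return ret;
		}

		return 0;
	}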
From patchwork Mon Apr 11 21:59:24 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 12809701
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Daniel Almeida, Gert Wollny, Tomeu Vizoso, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
    Alyssa Rosenzweig, Rob Clark
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, Gustavo Padovan
Subject: [PATCH v3 02/15] drm/virtio: Check whether transferred 2D BO is shmem
Date: Tue, 12 Apr 2022 00:59:24 +0300
Message-Id: <20220411215937.281655-3-dmitry.osipenko@collabora.com>
In-Reply-To: <20220411215937.281655-1-dmitry.osipenko@collabora.com>

A transferred 2D BO must always be a shmem BO. Add a check for that to
prevent a NULL dereference if userspace passes a VRAM BO.

Cc: stable@vger.kernel.org
Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_vq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 7c052efe8836..2edf31806b74 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -595,7 +595,7 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
 	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
 
-	if (use_dma_api)
+	if (virtio_gpu_is_shmem(bo) && use_dma_api)
 		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
 					    shmem->pages, DMA_TO_DEVICE);
From patchwork Mon Apr 11 21:59:25 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 12809703
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Daniel Almeida, Gert Wollny, Tomeu Vizoso, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
    Alyssa Rosenzweig, Rob Clark
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, Gustavo Padovan
Subject: [PATCH v3 03/15] drm/virtio: Unlock GEM reservations on virtio_gpu_object_shmem_init() error
Date: Tue, 12 Apr 2022 00:59:25 +0300
Message-Id: <20220411215937.281655-4-dmitry.osipenko@collabora.com>
In-Reply-To: <20220411215937.281655-1-dmitry.osipenko@collabora.com>

Unlock reservations in the error path of virtio_gpu_object_create() to
silence the debug warning splat produced by ww_mutex_destroy(&obj->lock)
when the GEM object is released with the lock still held.

Cc: stable@vger.kernel.org
Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 3d0c8d4d1c20..21c19cdedce0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -250,6 +250,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 
 	ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
 	if (ret != 0) {
+		if (fence)
+			virtio_gpu_array_unlock_resv(objs);
 		virtio_gpu_array_put_free(objs);
 		virtio_gpu_free_object(&shmem_obj->base);
 		return ret;
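The rule the fix enforces, in isolation: a dma_resv lock is a ww_mutex, and
ww_mutex_destroy() warns if the mutex is still held when the object is freed,
so every error path between lock and unlock must release the lock before
dropping the object. A minimal sketch, with hypothetical helper names
(example_do_setup() is invented for illustration):

	static int example_setup_locked(struct drm_gem_object *obj)
	{
		int ret;

		dma_resv_lock(obj->resv, NULL);

		ret = example_do_setup(obj);	/* assumed fallible */
		if (ret) {
			/* Unlock first: freeing obj destroys obj->resv, and
			 * ww_mutex_destroy() splats on a held lock. */
			dma_resv_unlock(obj->resv);
			drm_gem_object_put(obj);
			return ret;
		}

		dma_resv_unlock(obj->resv);
		return 0;
	}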
From patchwork Mon Apr 11 21:59:26 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 12809702
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Daniel Almeida, Gert Wollny, Tomeu Vizoso, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
    Alyssa Rosenzweig, Rob Clark
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, Gustavo Padovan
Subject: [PATCH v3 04/15] drm/virtio: Unlock reservations on dma_resv_reserve_fences() error
Date: Tue, 12 Apr 2022 00:59:26 +0300
Message-Id: <20220411215937.281655-5-dmitry.osipenko@collabora.com>
In-Reply-To: <20220411215937.281655-1-dmitry.osipenko@collabora.com>

Unlock reservations on dma_resv_reserve_fences() error to fix recursive
locking of the reservations when this error happens.

Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_gem.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 580a78809836..7db48d17ee3a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -228,8 +228,10 @@ int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs)
 
 	for (i = 0; i < objs->nents; ++i) {
 		ret = dma_resv_reserve_fences(objs->objs[i]->resv, 1);
-		if (ret)
+		if (ret) {
+			virtio_gpu_array_unlock_resv(objs);
 			return ret;
+		}
 	}
 	return ret;
 }
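The shape of the bug fixed above, reduced to its essentials: when an
operation fails partway through a set of already-locked reservations and the
function returns without unlocking, a later retry re-locks reservations that
were never released. A sketch of the unlock-on-error pattern the patch
adopts (condensed from the diff; the wrapper function name is illustrative):

	static int example_lock_and_reserve(struct virtio_gpu_object_array *objs)
	{
		int i, ret = 0;

		/* All objs->objs[i]->resv are assumed locked at this point. */
		for (i = 0; i < objs->nents; ++i) {
			ret = dma_resv_reserve_fences(objs->objs[i]->resv, 1);
			if (ret) {
				/* Drop every lock before returning; otherwise
				 * the caller's retry would recursively lock
				 * the reservations. */
				virtio_gpu_array_unlock_resv(objs);
				return ret;
			}
		}

		return ret;
	}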
From patchwork Mon Apr 11 21:59:27 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 12809704
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Daniel Almeida, Gert Wollny, Tomeu Vizoso, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
    Alyssa Rosenzweig, Rob Clark
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, Gustavo Padovan
Subject: [PATCH v3 05/15] drm/virtio: Use appropriate atomic state in virtio_gpu_plane_cleanup_fb()
Date: Tue, 12 Apr 2022 00:59:27 +0300
Message-Id: <20220411215937.281655-6-dmitry.osipenko@collabora.com>
In-Reply-To: <20220411215937.281655-1-dmitry.osipenko@collabora.com>

Make virtio_gpu_plane_cleanup_fb() clean up the state that the DRM core
asks it to clean up, rather than the current plane's state. Normally the
older atomic state is cleaned up, but the newer state may also be cleaned
up in the case of aborted commits.

Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_plane.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index 6d3cc9e238a4..7148f3813d8b 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -266,14 +266,14 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
 }
 
 static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
-					struct drm_plane_state *old_state)
+					struct drm_plane_state *state)
 {
 	struct virtio_gpu_framebuffer *vgfb;
 
-	if (!plane->state->fb)
+	if (!state->fb)
 		return;
 
-	vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
+	vgfb = to_virtio_gpu_framebuffer(state->fb);
 	if (vgfb->fence) {
 		dma_fence_put(&vgfb->fence->f);
 		vgfb->fence = NULL;
From patchwork Mon Apr 11 21:59:28 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 12809711
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Daniel Almeida, Gert Wollny, Tomeu Vizoso, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
    Alyssa Rosenzweig, Rob Clark
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, Gustavo Padovan
Subject: [PATCH v3 06/15] drm/virtio: Simplify error handling of virtio_gpu_object_create()
Date: Tue, 12 Apr 2022 00:59:28 +0300
Message-Id: <20220411215937.281655-7-dmitry.osipenko@collabora.com>
In-Reply-To: <20220411215937.281655-1-dmitry.osipenko@collabora.com>

Change the order of the SHMEM initialization and reservation locking to
make the code a tad cleaner and to prepare for transitioning the common
GEM SHMEM code to using the GEM's reservation lock instead of
shmem.pages_lock.

There is no need to hold the reservation lock while allocating the SHMEM
pages: the lock is only needed to avoid racing with the asynchronous
host-side allocation. Hence we can safely move the SHMEM initialization
out of the reservation lock.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 21c19cdedce0..18f70ef6b4d0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -236,6 +236,10 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 
 	bo->dumb = params->dumb;
 
+	ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
+	if (ret != 0)
+		goto err_put_id;
+
 	if (fence) {
 		ret = -ENOMEM;
 		objs = virtio_gpu_array_alloc(1);
@@ -248,15 +252,6 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 			goto err_put_objs;
 	}
 
-	ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
-	if (ret != 0) {
-		if (fence)
-			virtio_gpu_array_unlock_resv(objs);
-		virtio_gpu_array_put_free(objs);
-		virtio_gpu_free_object(&shmem_obj->base);
-		return ret;
-	}
-
 	if (params->blob) {
 		if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
 			bo->guest_blob = true;
From patchwork Mon Apr 11 21:59:29 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 12809706
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Daniel Almeida, Gert Wollny, Tomeu Vizoso, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
    Alyssa Rosenzweig, Rob Clark
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, Gustavo Padovan
Subject: [PATCH v3 07/15] drm/virtio: Improve DMA API usage for shmem BOs
Date: Tue, 12 Apr 2022 00:59:29 +0300
Message-Id: <20220411215937.281655-8-dmitry.osipenko@collabora.com>
In-Reply-To: <20220411215937.281655-1-dmitry.osipenko@collabora.com>

The DRM API requires the DRM driver to be backed by a device that can be
used for generic DMA operations. The VirtIO-GPU device can't perform DMA
operations when it uses the PCI transport, because the PCI device driver
creates a virtual VirtIO-GPU device that isn't associated with the PCI
device. Use the PCI GPU device as the DRM device instead of the VirtIO-GPU
device and drop the DMA-related hacks from the VirtIO-GPU driver.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_drv.c    | 51 ++++++----------------
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  5 +--
 drivers/gpu/drm/virtio/virtgpu_kms.c    |  7 ++--
 drivers/gpu/drm/virtio/virtgpu_object.c | 56 +++++--------------------
 drivers/gpu/drm/virtio/virtgpu_vq.c     | 13 +++---
 5 files changed, 32 insertions(+), 100 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index 5f25a8d15464..0141b7df97ec 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -46,12 +46,11 @@ static int virtio_gpu_modeset = -1;
 MODULE_PARM_DESC(modeset, "Disable/Enable modesetting");
 module_param_named(modeset, virtio_gpu_modeset, int, 0400);
 
-static int virtio_gpu_pci_quirk(struct drm_device *dev, struct virtio_device *vdev)
+static int virtio_gpu_pci_quirk(struct drm_device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(vdev->dev.parent);
+	struct pci_dev *pdev = to_pci_dev(dev->dev);
 	const char *pname = dev_name(&pdev->dev);
 	bool vga = (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA;
-	char unique[20];
 	int ret;
 
 	DRM_INFO("pci: %s detected at %s\n",
@@ -63,39 +62,7 @@ static int virtio_gpu_pci_quirk(struct drm_device *dev, struct virtio_device *vd
 			return ret;
 	}
 
-	/*
-	 * Normally the drm_dev_set_unique() call is done by core DRM.
-	 * The following comment covers, why virtio cannot rely on it.
-	 *
-	 * Unlike the other virtual GPU drivers, virtio abstracts the
-	 * underlying bus type by using struct virtio_device.
-	 *
-	 * Hence the dev_is_pci() check, used in core DRM, will fail
-	 * and the unique returned will be the virtio_device "virtio0",
-	 * while a "pci:..." one is required.
-	 *
-	 * A few other ideas were considered:
-	 * - Extend the dev_is_pci() check [in drm_set_busid] to
-	 *   consider virtio.
-	 *   Seems like a bigger hack than what we have already.
-	 *
-	 * - Point drm_device::dev to the parent of the virtio_device
-	 *   Semantic changes:
-	 *   * Using the wrong device for i2c, framebuffer_alloc and
-	 *     prime import.
-	 *   Visual changes:
-	 *   * Helpers such as DRM_DEV_ERROR, dev_info, drm_printer,
-	 *     will print the wrong information.
-	 *
-	 * We could address the latter issues, by introducing
-	 * drm_device::bus_dev, ... which would be used solely for this.
-	 *
-	 * So for the moment keep things as-is, with a bulky comment
-	 * for the next person who feels like removing this
-	 * drm_dev_set_unique() quirk.
-	 */
-	snprintf(unique, sizeof(unique), "pci:%s", pname);
-	return drm_dev_set_unique(dev, unique);
+	return 0;
 }
 
 static int virtio_gpu_probe(struct virtio_device *vdev)
@@ -109,18 +76,24 @@ static int virtio_gpu_probe(struct virtio_device *vdev)
 	if (virtio_gpu_modeset == 0)
 		return -EINVAL;
 
-	dev = drm_dev_alloc(&driver, &vdev->dev);
+	/*
+	 * The virtio-gpu device is a virtual device that doesn't have DMA
+	 * ops assigned to it, nor DMA mask set and etc. Its parent device
+	 * is actual GPU device we want to use it for the DRM's device in
+	 * order to benefit from using generic DRM APIs.
+	 */
+	dev = drm_dev_alloc(&driver, vdev->dev.parent);
 	if (IS_ERR(dev))
 		return PTR_ERR(dev);
 	vdev->priv = dev;
 
 	if (!strcmp(vdev->dev.parent->bus->name, "pci")) {
-		ret = virtio_gpu_pci_quirk(dev, vdev);
+		ret = virtio_gpu_pci_quirk(dev);
 		if (ret)
 			goto err_free;
 	}
 
-	ret = virtio_gpu_init(dev);
+	ret = virtio_gpu_init(vdev, dev);
 	if (ret)
 		goto err_free;
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 0a194aaad419..b2d93cb12ebf 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -100,8 +100,6 @@ struct virtio_gpu_object {
 
 struct virtio_gpu_object_shmem {
 	struct virtio_gpu_object base;
-	struct sg_table *pages;
-	uint32_t mapped;
 };
 
 struct virtio_gpu_object_vram {
@@ -214,7 +212,6 @@ struct virtio_gpu_drv_cap_cache {
 };
 
 struct virtio_gpu_device {
-	struct device *dev;
 	struct drm_device *ddev;
 
 	struct virtio_device *vdev;
@@ -282,7 +279,7 @@ extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
 void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file);
 
 /* virtgpu_kms.c */
-int virtio_gpu_init(struct drm_device *dev);
+int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev);
 void virtio_gpu_deinit(struct drm_device *dev);
 void virtio_gpu_release(struct drm_device *dev);
 int virtio_gpu_driver_open(struct drm_device *dev, struct drm_file *file);
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 3313b92db531..0d1e3eb61bee 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -110,7 +110,7 @@ static void virtio_gpu_get_capsets(struct virtio_gpu_device *vgdev,
 	vgdev->num_capsets = num_capsets;
 }
 
-int virtio_gpu_init(struct drm_device *dev)
+int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
 {
 	static vq_callback_t *callbacks[] = {
 		virtio_gpu_ctrl_ack, virtio_gpu_cursor_ack
@@ -123,7 +123,7 @@ int virtio_gpu_init(struct drm_device *dev)
 	u32 num_scanouts, num_capsets;
 	int ret = 0;
 
-	if (!virtio_has_feature(dev_to_virtio(dev->dev), VIRTIO_F_VERSION_1))
+	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
 		return -ENODEV;
 
 	vgdev = kzalloc(sizeof(struct virtio_gpu_device), GFP_KERNEL);
@@ -132,8 +132,7 @@ int virtio_gpu_init(struct drm_device *dev)
 
 	vgdev->ddev = dev;
 	dev->dev_private = vgdev;
-	vgdev->vdev = dev_to_virtio(dev->dev);
-	vgdev->dev = dev->dev;
+	vgdev->vdev = vdev;
 
 	spin_lock_init(&vgdev->display_info_lock);
 	spin_lock_init(&vgdev->resource_export_lock);
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 18f70ef6b4d0..8d7728181de0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -67,21 +67,6 @@ void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
 
 	if (virtio_gpu_is_shmem(bo)) {
-		struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
-
-		if (shmem->pages) {
-			if (shmem->mapped) {
-				dma_unmap_sgtable(vgdev->vdev->dev.parent,
-						  shmem->pages, DMA_TO_DEVICE, 0);
-				shmem->mapped = 0;
-			}
-
-			sg_free_table(shmem->pages);
-			kfree(shmem->pages);
-			shmem->pages = NULL;
-			drm_gem_shmem_unpin(&bo->base);
-		}
-
 		drm_gem_shmem_free(&bo->base);
 	} else if (virtio_gpu_is_vram(bo)) {
 		struct virtio_gpu_object_vram *vram = to_virtio_gpu_vram(bo);
@@ -153,37 +138,18 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 					unsigned int *nents)
 {
 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
-	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
 	struct scatterlist *sg;
-	int si, ret;
+	struct sg_table *pages;
+	int si;
 
-	ret = drm_gem_shmem_pin(&bo->base);
-	if (ret < 0)
-		return -EINVAL;
-
-	/*
-	 * virtio_gpu uses drm_gem_shmem_get_sg_table instead of
-	 * drm_gem_shmem_get_pages_sgt because virtio has it's own set of
-	 * dma-ops. This is discouraged for other drivers, but should be fine
-	 * since virtio_gpu doesn't support dma-buf import from other devices.
-	 */
-	shmem->pages = drm_gem_shmem_get_sg_table(&bo->base);
-	ret = PTR_ERR_OR_ZERO(shmem->pages);
-	if (ret) {
-		drm_gem_shmem_unpin(&bo->base);
-		shmem->pages = NULL;
-		return ret;
-	}
+	pages = drm_gem_shmem_get_pages_sgt(&bo->base);
+	if (IS_ERR(pages))
+		return PTR_ERR(pages);
 
-	if (use_dma_api) {
-		ret = dma_map_sgtable(vgdev->vdev->dev.parent,
-				      shmem->pages, DMA_TO_DEVICE, 0);
-		if (ret)
-			return ret;
-		*nents = shmem->mapped = shmem->pages->nents;
-	} else {
-		*nents = shmem->pages->orig_nents;
-	}
+	if (use_dma_api)
+		*nents = pages->nents;
+	else
+		*nents = pages->orig_nents;
 
 	*ents = kvmalloc_array(*nents,
 			       sizeof(struct virtio_gpu_mem_entry),
@@ -194,13 +160,13 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 	}
 
 	if (use_dma_api) {
-		for_each_sgtable_dma_sg(shmem->pages, sg, si) {
+		for_each_sgtable_dma_sg(pages, sg, si) {
 			(*ents)[si].addr = cpu_to_le64(sg_dma_address(sg));
 			(*ents)[si].length = cpu_to_le32(sg_dma_len(sg));
 			(*ents)[si].padding = 0;
 		}
 	} else {
-		for_each_sgtable_sg(shmem->pages, sg, si) {
+		for_each_sgtable_sg(pages, sg, si) {
 			(*ents)[si].addr = cpu_to_le64(sg_phys(sg));
 			(*ents)[si].length = cpu_to_le32(sg->length);
 			(*ents)[si].padding = 0;
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 2edf31806b74..06566e44307d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -593,11 +593,10 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 	struct virtio_gpu_transfer_to_host_2d *cmd_p;
 	struct virtio_gpu_vbuffer *vbuf;
 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
-	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
 
 	if (virtio_gpu_is_shmem(bo) && use_dma_api)
-		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
-					    shmem->pages, DMA_TO_DEVICE);
+		dma_sync_sgtable_for_device(&vgdev->vdev->dev,
+					    bo->base.sgt, DMA_TO_DEVICE);
 
 	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
 	memset(cmd_p, 0, sizeof(*cmd_p));
@@ -1017,11 +1016,9 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
 	struct virtio_gpu_vbuffer *vbuf;
 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
 
-	if (virtio_gpu_is_shmem(bo) && use_dma_api) {
-		struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
-
-		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
-					    shmem->pages, DMA_TO_DEVICE);
-	}
+	if (virtio_gpu_is_shmem(bo) && use_dma_api)
+		dma_sync_sgtable_for_device(&vgdev->vdev->dev,
+					    bo->base.sgt, DMA_TO_DEVICE);
 
 	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
 	memset(cmd_p, 0, sizeof(*cmd_p));
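A condensed view of the probe-time change above, with error handling trimmed
(see the full diff for the real flow): the DRM device is now allocated
against the transport's parent device, which carries real DMA ops and a DMA
mask, so generic DRM helpers can perform DMA through dev->dev directly. The
function below is a sketch, not the patched probe verbatim; "driver" stands
for the file-scope struct drm_driver.

	static int example_probe(struct virtio_device *vdev)
	{
		struct drm_device *dev;
		int ret;

		/* Back the DRM device with the parent (e.g. the PCI device)
		 * rather than the virtual virtio-gpu device. */
		dev = drm_dev_alloc(&driver, vdev->dev.parent);
		if (IS_ERR(dev))
			return PTR_ERR(dev);

		vdev->priv = dev;

		/* The virtio device is now passed in explicitly, since it
		 * can no longer be derived from dev->dev. */
		ret = virtio_gpu_init(vdev, dev);
		if (ret)
			drm_dev_put(dev);

		return ret;
	}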
From patchwork Mon Apr 11 21:59:30 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 12809714
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Daniel Almeida, Gert Wollny, Tomeu Vizoso, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
    Alyssa Rosenzweig, Rob Clark
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, Gustavo Padovan
Subject: [PATCH v3 08/15] drm/virtio: Use dev_is_pci()
Date: Tue, 12 Apr 2022 00:59:30 +0300
Message-Id: <20220411215937.281655-9-dmitry.osipenko@collabora.com>
In-Reply-To: <20220411215937.281655-1-dmitry.osipenko@collabora.com>

Use the common dev_is_pci() helper to replace the strcmp("pci") check
used by the driver.

Suggested-by: Robin Murphy
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_drv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index 0141b7df97ec..0035affc3e59 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -87,7 +87,7 @@ static int virtio_gpu_probe(struct virtio_device *vdev)
 		return PTR_ERR(dev);
 	vdev->priv = dev;
 
-	if (!strcmp(vdev->dev.parent->bus->name, "pci")) {
+	if (dev_is_pci(vdev->dev.parent)) {
 		ret = virtio_gpu_pci_quirk(dev);
 		if (ret)
 			goto err_free;
From patchwork Mon Apr 11 21:59:31 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 12809705
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Daniel Almeida, Gert Wollny, Tomeu Vizoso, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
    Alyssa Rosenzweig, Rob Clark
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, Gustavo Padovan
Subject: [PATCH v3 09/15] drm/shmem-helper: Correct doc-comment of drm_gem_shmem_get_sg_table()
Date: Tue, 12 Apr 2022 00:59:31 +0300
Message-Id: <20220411215937.281655-10-dmitry.osipenko@collabora.com>
In-Reply-To: <20220411215937.281655-1-dmitry.osipenko@collabora.com>

drm_gem_shmem_get_sg_table() never returns NULL on error, but an
ERR_PTR()-encoded error code. Correct the doc comment, which says that
it returns NULL on error.
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 8ad0e02991ca..30ee46348a99 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -662,7 +662,7 @@ EXPORT_SYMBOL(drm_gem_shmem_print_info);
  * drm_gem_shmem_get_pages_sgt() instead.
  *
  * Returns:
- * A pointer to the scatter/gather table of pinned pages or NULL on failure.
+ * A pointer to the scatter/gather table of pinned pages or errno on failure.
  */
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
 {
@@ -688,7 +688,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
  * drm_gem_shmem_get_sg_table() should not be directly called by drivers.
  *
  * Returns:
- * A pointer to the scatter/gather table of pinned pages or errno on failure.
+ * A pointer to the scatter/gather table of pinned pages ERR_PTR()-encoded
+ * error code on failure.
  */
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 {
From patchwork Mon Apr 11 21:59:32 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 12809707
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Daniel Almeida, Gert Wollny, Tomeu Vizoso, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
    Alyssa Rosenzweig, Rob Clark
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, Gustavo Padovan
Subject: [PATCH v3 10/15] drm/shmem-helper: Take reservation lock instead of drm_gem_shmem locks
Date: Tue, 12 Apr 2022 00:59:32 +0300
Message-Id: <20220411215937.281655-11-dmitry.osipenko@collabora.com>
In-Reply-To: <20220411215937.281655-1-dmitry.osipenko@collabora.com>

Replace the drm_gem_shmem locks with the reservation lock to make the GEM
locking more consistent.

Previously, drm_gem_shmem_vmap() and drm_gem_shmem_get_pages() were
protected by separate locks; now they take the same lock, but this makes
no difference for the current GEM SHMEM users. Only the Panfrost and Lima
drivers use vmap(), and they do so in slow code paths, hence there was no
practical justification for using a separate lock in vmap().

Suggested-by: Daniel Vetter
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c  | 38 ++++++++++++-------------
 drivers/gpu/drm/lima/lima_gem.c         |  8 +++---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 15 ++++++----
 include/drm/drm_gem_shmem_helper.h      | 10 -------
 4 files changed, 31 insertions(+), 40 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 30ee46348a99..3ecef571eff3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -86,8 +86,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 	if (ret)
 		goto err_release;
 
-	mutex_init(&shmem->pages_lock);
-	mutex_init(&shmem->vmap_lock);
 	INIT_LIST_HEAD(&shmem->madv_list);
 
 	if (!private) {
@@ -157,8 +155,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 	WARN_ON(shmem->pages_use_count);
 
 	drm_gem_object_release(obj);
-	mutex_destroy(&shmem->pages_lock);
-	mutex_destroy(&shmem->vmap_lock);
 	kfree(shmem);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
@@ -209,11 +205,11 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 
 	WARN_ON(shmem->base.import_attach);
 
-	ret = mutex_lock_interruptible(&shmem->pages_lock);
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ret;
 	ret = drm_gem_shmem_get_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
@@ -248,9 +244,9 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
  */
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 {
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_put_pages_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 }
 EXPORT_SYMBOL(drm_gem_shmem_put_pages);
@@ -310,7 +306,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
-		ret = drm_gem_shmem_get_pages(shmem);
+		ret = drm_gem_shmem_get_pages_locked(shmem);
 		if (ret)
 			goto err_zero_use;
 
@@ -360,11 +356,11 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 {
 	int ret;
 
-	ret = mutex_lock_interruptible(&shmem->vmap_lock);
+	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ret;
 	ret = drm_gem_shmem_vmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
@@ -385,7 +381,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 		dma_buf_vunmap(obj->import_attach->dmabuf, map);
 	} else {
 		vunmap(shmem->vaddr);
-		drm_gem_shmem_put_pages(shmem);
+		drm_gem_shmem_put_pages_locked(shmem);
 	}
 
 	shmem->vaddr = NULL;
@@ -406,9 +402,11 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
 			  struct iosys_map *map)
 {
-	mutex_lock(&shmem->vmap_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_vunmap_locked(shmem, map);
-	mutex_unlock(&shmem->vmap_lock);
+	dma_resv_unlock(shmem->base.resv);
+
+	drm_gem_shmem_update_purgeable_status(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);
@@ -442,14 +440,14 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
  */
 int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
 {
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 
 	if (shmem->madv >= 0)
 		shmem->madv = madv;
 
 	madv = shmem->madv;
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return (madv >= 0);
 }
@@ -487,10 +485,10 @@ EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
 
 bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 {
-	if (!mutex_trylock(&shmem->pages_lock))
+	if (!dma_resv_trylock(shmem->base.resv))
 		return false;
 	drm_gem_shmem_purge_locked(shmem);
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return true;
 }
@@ -549,7 +547,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
-	mutex_lock(&shmem->pages_lock);
+	dma_resv_lock(shmem->base.resv, NULL);
 
 	if (page_offset >= num_pages ||
 	    WARN_ON_ONCE(!shmem->pages) ||
@@ -561,7 +559,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
 	}
 
-	mutex_unlock(&shmem->pages_lock);
+	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
 }
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 0f1ca0b0db49..5008f0c2428f 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -34,7 +34,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 
 	new_size = min(new_size, bo->base.base.size);
 
-	mutex_lock(&bo->base.pages_lock);
+	dma_resv_lock(bo->base.base.resv, NULL);
 
 	if (bo->base.pages) {
 		pages = bo->base.pages;
@@ -42,7 +42,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
 				       sizeof(*pages), GFP_KERNEL | __GFP_ZERO);
 		if (!pages) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return -ENOMEM;
 		}
 
@@ -56,13 +56,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		struct page *page = shmem_read_mapping_page(mapping, i);
 
 		if (IS_ERR(page)) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(bo->base.base.resv);
 			return PTR_ERR(page);
 		}
 		pages[i] = page;
 	}
 
-	mutex_unlock(&bo->base.pages_lock);
+	dma_resv_unlock(bo->base.base.resv);
 
 	ret = sg_alloc_table_from_pages(&sgt, pages, i, 0,
 					new_size, GFP_KERNEL);
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index d3f82b26a631..404b8f67e2df 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -424,6 +424,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	struct panfrost_gem_mapping *bomapping;
 	struct panfrost_gem_object *bo;
 	struct address_space *mapping;
+	struct drm_gem_object *obj;
 	pgoff_t page_offset;
 	struct sg_table *sgt;
 	struct page **pages;
@@ -446,13 +447,15 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	page_offset = addr >> PAGE_SHIFT;
 	page_offset -= bomapping->mmnode.start;
 
-	mutex_lock(&bo->base.pages_lock);
+	obj = &bo->base.base;
+
+	dma_resv_lock(obj->resv, NULL);
 
 	if (!bo->base.pages) {
 		bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M,
 				     sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO);
 		if (!bo->sgts) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(obj->resv);
 			ret = -ENOMEM;
 			goto err_bo;
 		}
@@ -462,7 +465,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		if (!pages) {
 			kvfree(bo->sgts);
 			bo->sgts = NULL;
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(obj->resv);
 			ret = -ENOMEM;
 			goto err_bo;
 		}
@@ -472,7 +475,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		pages = bo->base.pages;
 		if (pages[page_offset]) {
 			/* Pages are already mapped, bail out. */
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(obj->resv);
 			goto out;
 		}
 	}
@@ -483,13 +486,13 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
 		pages[i] = shmem_read_mapping_page(mapping, i);
 		if (IS_ERR(pages[i])) {
-			mutex_unlock(&bo->base.pages_lock);
+			dma_resv_unlock(obj->resv);
 			ret = PTR_ERR(pages[i]);
 			goto err_pages;
 		}
 	}
 
-	mutex_unlock(&bo->base.pages_lock);
+	dma_resv_unlock(obj->resv);
 
 	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
 	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index d0a57853c188..70889533962a 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -26,11 +26,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct drm_gem_object base;
 
-	/**
-	 * @pages_lock: Protects the page table and use count
-	 */
-	struct mutex pages_lock;
-
 	/**
 	 * @pages: Page table
 	 */
@@ -79,11 +74,6 @@ struct drm_gem_shmem_object {
 	 */
 	struct sg_table *sgt;
 
-	/**
-	 * @vmap_lock: Protects the vmap address and use count
-	 */
-	struct mutex vmap_lock;
-
 	/**
 	 * @vaddr: Kernel virtual address of the backing memory
 	 */
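The resulting locking convention, in one place: every helper that previously
took pages_lock or vmap_lock now serializes on the GEM object's reservation
lock. A minimal sketch of the pattern (the caller function is hypothetical):

	static int example_with_resv_lock(struct drm_gem_shmem_object *shmem)
	{
		int ret;

		/* Interruptible variant: fails e.g. when a signal is
		 * pending, so the error must be propagated to the caller. */
		ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
		if (ret)
			return ret;

		/* shmem->pages, shmem->vaddr and their use counts are now
		 * all protected by this single lock. */

		dma_resv_unlock(shmem->base.resv);
		return 0;
	}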
From patchwork Mon Apr 11 21:59:33 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 12809708
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Daniel Almeida, Gert Wollny, Tomeu Vizoso, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
    Alyssa Rosenzweig, Rob Clark
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, Gustavo Padovan
Subject: [PATCH v3 11/15] drm/shmem-helper: Add generic memory shrinker
Date: Tue, 12 Apr 2022 00:59:33 +0300
Message-Id: <20220411215937.281655-12-dmitry.osipenko@collabora.com>
In-Reply-To: <20220411215937.281655-1-dmitry.osipenko@collabora.com>

Introduce a common DRM SHMEM shrinker. It allows us to reduce code
duplication among DRM drivers that implement their own shrinkers. This is
the initial version of the shrinker; it covers the basic needs of GPU
drivers, and both purging and eviction of shmem objects are supported.

This patch is based on a couple of ideas borrowed from Rob Clark's MSM
shrinker and Thomas Zimmermann's variant of the SHMEM shrinker.

In order to start using the DRM SHMEM shrinker, drivers should
(see the sketch after this patch):

1. Implement the new purge(), evict() and swap_in() GEM callbacks.
2. Register the shrinker using drm_gem_shmem_shrinker_register(drm_device).
3. Use drm_gem_shmem_set_purgeable_and_evictable(shmem) and similar API
   functions to activate shrinking of GEMs.

Signed-off-by: Daniel Almeida
Signed-off-by: Dmitry Osipenko
Reported-by: kernel test robot
Reported-by: kernel test robot
Reported-by: kernel test robot
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 766 ++++++++++++++++++++++++-
 include/drm/drm_device.h               |   4 +
 include/drm/drm_gem.h                  |  35 ++
 include/drm/drm_gem_shmem_helper.h     | 105 +++-
 4 files changed, 878 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 3ecef571eff3..7e4851363d14 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -88,6 +88,13 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 
 	INIT_LIST_HEAD(&shmem->madv_list);
 
+	/*
+	 * Eviction and purging are disabled by default, shmem user must enable
+	 * them explicitly using drm_gem_shmem_set_evictable/purgeable().
+	 */
+	shmem->eviction_disable_count = 1;
+	shmem->purging_disable_count = 1;
+
 	if (!private) {
 		/*
 		 * Our buffers are kept pinned, so allocating them
@@ -126,6 +133,107 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
 
+static void
+drm_gem_shmem_add_pages_to_shrinker(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
+	size_t page_count = obj->size >> PAGE_SHIFT;
+
+	if (!shmem->pages_shrinkable) {
+		WARN_ON(gem_shrinker->shrinkable_count + page_count < page_count);
+		gem_shrinker->shrinkable_count += page_count;
+		shmem->pages_shrinkable = true;
+	}
+}
+
+static void
+drm_gem_shmem_remove_pages_from_shrinker(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
+	size_t page_count = obj->size >> PAGE_SHIFT;
+
+	if (shmem->pages_shrinkable) {
+		WARN_ON(gem_shrinker->shrinkable_count < page_count);
+		gem_shrinker->shrinkable_count -= page_count;
+		shmem->pages_shrinkable = false;
+	}
+}
+
+static void
+drm_gem_shmem_set_pages_state_locked(struct drm_gem_shmem_object *shmem,
+				     enum drm_gem_shmem_pages_state new_state)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
+
+	lockdep_assert_held(&gem_shrinker->lock);
+	lockdep_assert_held(&obj->resv->lock.base);
+
+	if (new_state >= DRM_GEM_SHMEM_PAGES_STATE_ACTIVE) {
+		if (drm_gem_shmem_is_evictable(shmem))
+			new_state = DRM_GEM_SHMEM_PAGES_STATE_EVICTABLE;
+
+		if (drm_gem_shmem_is_purgeable(shmem))
+			new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE;
+
+		if (!shmem->pages)
+			new_state = DRM_GEM_SHMEM_PAGES_STATE_INACTIVE;
+
+		if (shmem->evicted)
+			new_state = DRM_GEM_SHMEM_PAGES_STATE_EVICTED;
+	}
+
+	if (shmem->pages_state == new_state)
+		return;
+
+	switch (new_state) {
+	case DRM_GEM_SHMEM_PAGES_STATE_INACTIVE:
+	case DRM_GEM_SHMEM_PAGES_STATE_PURGED:
+		drm_gem_shmem_remove_pages_from_shrinker(shmem);
+		list_del_init(&shmem->madv_list);
+		break;
+
+	case DRM_GEM_SHMEM_PAGES_STATE_ACTIVE:
+		drm_gem_shmem_remove_pages_from_shrinker(shmem);
+		list_move_tail(&shmem->madv_list, &gem_shrinker->lru_active);
+		break;
+
+	case DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE:
+		drm_gem_shmem_add_pages_to_shrinker(shmem);
+		list_move_tail(&shmem->madv_list, &gem_shrinker->lru_purgeable);
+		break;
+
+	case DRM_GEM_SHMEM_PAGES_STATE_EVICTABLE:
+		drm_gem_shmem_add_pages_to_shrinker(shmem);
+		list_move_tail(&shmem->madv_list, &gem_shrinker->lru_evictable);
+		break;
+
+	case DRM_GEM_SHMEM_PAGES_STATE_EVICTED:
+		drm_gem_shmem_remove_pages_from_shrinker(shmem);
+		list_move_tail(&shmem->madv_list, &gem_shrinker->lru_evicted);
+		break;
+	}
+
+	shmem->pages_state = new_state;
+}
+
+static void
+drm_gem_shmem_set_pages_state(struct drm_gem_shmem_object *shmem,
+			      enum drm_gem_shmem_pages_state new_state)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker;
+
+	if (!gem_shrinker)
+		return;
+
+	mutex_lock(&gem_shrinker->lock);
+	drm_gem_shmem_set_pages_state_locked(shmem, new_state);
+	mutex_unlock(&gem_shrinker->lock);
+}
+
 /**
  * drm_gem_shmem_free - Free resources associated with a shmem GEM object
  * @shmem: shmem GEM object to free
@@ -137,6 +245,9 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
+
/* take out shmem GEM object from the memory shrinker */ + drm_gem_shmem_madvise(shmem, -1); + WARN_ON(shmem->vmap_use_count); if (obj->import_attach) { @@ -145,10 +256,11 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) if (shmem->sgt) { dma_unmap_sgtable(obj->dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0); + sg_free_table(shmem->sgt); kfree(shmem->sgt); } - if (shmem->pages) + if (shmem->pages_use_count) drm_gem_shmem_put_pages(shmem); } @@ -159,18 +271,226 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) } EXPORT_SYMBOL_GPL(drm_gem_shmem_free); -static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) +static void drm_gem_shmem_update_pages_state_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker; + enum drm_gem_shmem_pages_state new_state; + + if (!gem_shrinker || obj->import_attach) + return; + + mutex_lock(&gem_shrinker->lock); + + if (!shmem->madv) + new_state = DRM_GEM_SHMEM_PAGES_STATE_ACTIVE; + else if (shmem->madv > 0) + new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE; + else if (shmem->madv < 0) + new_state = DRM_GEM_SHMEM_PAGES_STATE_PURGED; + + drm_gem_shmem_set_pages_state_locked(shmem, new_state); + + mutex_unlock(&gem_shrinker->lock); +} + +static void drm_gem_shmem_update_pages_state(struct drm_gem_shmem_object *shmem) +{ + dma_resv_lock(shmem->base.resv, NULL); + drm_gem_shmem_update_pages_state_locked(shmem); + dma_resv_unlock(shmem->base.resv); +} + +static int +drm_gem_shmem_set_evictable_locked(struct drm_gem_shmem_object *shmem) +{ + int ret = 0; + + WARN_ON_ONCE(!shmem->eviction_disable_count--); + + if (shmem->madv < 0) + ret = -ENOMEM; + + drm_gem_shmem_update_pages_state_locked(shmem); + + return ret; +} + +static int +drm_gem_shmem_set_unevictable_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + int err; + + if (shmem->madv < 0) + return -ENOMEM; + + if (shmem->evicted) { + err = obj->funcs->swap_in(obj); + if (err) + return err; + } + + shmem->eviction_disable_count++; + + drm_gem_shmem_update_pages_state_locked(shmem); + + return 0; +} + +static int +drm_gem_shmem_set_purgeable_locked(struct drm_gem_shmem_object *shmem) +{ + int ret = 0; + + WARN_ON_ONCE(!shmem->purging_disable_count--); + + if (shmem->madv < 0) + ret = -ENOMEM; + + drm_gem_shmem_update_pages_state_locked(shmem); + + return ret; +} + +/** + * drm_gem_shmem_set_purgeable() - Make GEM purgeable by memory shrinker + * @shmem: shmem GEM object + * + * Tell the memory shrinker that this GEM can be purged. Initially purging is + * disabled for all GEMs. Each set_purgeable() call must have a corresponding + * set_unpurgeable() call. If the GEM was purged, then -ENOMEM is returned. + * + * Returns: + * 0 on success or a negative error code on failure.
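+ *
+ * A minimal usage sketch (hypothetical driver code, marking an idle
+ * cached BO as reclaimable):
+ *
+ *	err = drm_gem_shmem_set_purgeable(shmem);
+ *	if (err == -ENOMEM)
+ *		bo_cache_drop(bo);	// hypothetical helper: BO was purged already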
+ */ +int drm_gem_shmem_set_purgeable(struct drm_gem_shmem_object *shmem) +{ + int ret; + + dma_resv_lock(shmem->base.resv, NULL); + ret = drm_gem_shmem_set_purgeable_locked(shmem); + dma_resv_unlock(shmem->base.resv); + + return ret; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_set_purgeable); + +static int +drm_gem_shmem_set_unpurgeable_locked(struct drm_gem_shmem_object *shmem) +{ + if (shmem->madv < 0) + return -ENOMEM; + + shmem->purging_disable_count++; + + drm_gem_shmem_update_pages_state_locked(shmem); + + return 0; +} + +static int +drm_gem_shmem_set_purgeable_and_evictable_locked(struct drm_gem_shmem_object *shmem) +{ + int ret; + + ret = drm_gem_shmem_set_evictable_locked(shmem); + if (!ret) { + ret = drm_gem_shmem_set_purgeable_locked(shmem); + if (ret) + drm_gem_shmem_set_unevictable_locked(shmem); + } + + return ret; +} + +static int +drm_gem_shmem_set_unpurgeable_and_unevictable_locked(struct drm_gem_shmem_object *shmem) +{ + int ret; + + ret = drm_gem_shmem_set_unpurgeable_locked(shmem); + if (!ret) { + ret = drm_gem_shmem_set_unevictable_locked(shmem); + if (ret) + drm_gem_shmem_set_purgeable_locked(shmem); + } + + return ret; +} + +/** + * drm_gem_shmem_set_purgeable_and_evictable() - Make GEM purgeable and + * evictable by memory shrinker + * @shmem: shmem GEM object + * + * Tell the memory shrinker that this GEM can be purged and evicted. Each + * set_purgeable_and_evictable() call must have a corresponding + * set_unpurgeable_and_unevictable() call. If the GEM was purged, then -ENOMEM + * is returned. + * + * Returns: + * 0 on success or a negative error code on failure. + */ +int drm_gem_shmem_set_purgeable_and_evictable(struct drm_gem_shmem_object *shmem) +{ + int ret; + + dma_resv_lock(shmem->base.resv, NULL); + ret = drm_gem_shmem_set_purgeable_and_evictable_locked(shmem); + dma_resv_unlock(shmem->base.resv); + + return ret; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_set_purgeable_and_evictable); + +/** + * drm_gem_shmem_set_unpurgeable_and_unevictable() - Make GEM unpurgeable and + * unevictable by memory shrinker + * @shmem: shmem GEM object + * + * Tell the memory shrinker that this GEM can't be purged or evicted. Each + * set_unpurgeable_and_unevictable() call must have a corresponding + * set_purgeable_and_evictable() call. If the GEM was purged, then -ENOMEM + * is returned. + * + * Returns: + * 0 on success or a negative error code on failure.
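+ *
+ * An illustrative pairing (sketch; this mirrors what drm_gem_shmem_pin()
+ * from this patch does internally):
+ *
+ *	err = drm_gem_shmem_set_unpurgeable_and_unevictable(shmem);
+ *	if (err)
+ *		return err;
+ *	// ... use the BO, e.g. submit a hardware job ...
+ *	drm_gem_shmem_set_purgeable_and_evictable(shmem);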
+ */ +int drm_gem_shmem_set_unpurgeable_and_unevictable(struct drm_gem_shmem_object *shmem) +{ + int ret; + + ret = dma_resv_lock_interruptible(shmem->base.resv, NULL); + if (ret) + return ret; + + ret = drm_gem_shmem_set_unpurgeable_and_unevictable_locked(shmem); + dma_resv_unlock(shmem->base.resv); + + return ret; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_set_unpurgeable_and_unevictable); + +static int +drm_gem_shmem_acquire_pages_locked(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; struct page **pages; - if (shmem->pages_use_count++ > 0) + if (shmem->madv < 0) { + WARN_ON(shmem->pages); + return -ENOMEM; + } + + if (shmem->pages) { + WARN_ON(!shmem->evicted); return 0; + } pages = drm_gem_get_pages(obj); if (IS_ERR(pages)) { DRM_DEBUG_KMS("Failed to get pages (%ld)\n", PTR_ERR(pages)); - shmem->pages_use_count = 0; return PTR_ERR(pages); } @@ -189,6 +509,25 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) return 0; } +static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) +{ + int err; + + if (shmem->madv < 0) + return -ENOMEM; + + if (shmem->pages_use_count++ > 0) + return 0; + + err = drm_gem_shmem_acquire_pages_locked(shmem); + if (err) { + shmem->pages_use_count = 0; + return err; + } + + return 0; +} + /* * drm_gem_shmem_get_pages - Allocate backing pages for a shmem GEM object * @shmem: shmem GEM object @@ -209,21 +548,38 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) if (ret) return ret; ret = drm_gem_shmem_get_pages_locked(shmem); + + drm_gem_shmem_update_pages_state_locked(shmem); + dma_resv_unlock(shmem->base.resv); return ret; } EXPORT_SYMBOL(drm_gem_shmem_get_pages); -static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) +static void drm_gem_shmem_get_pages_no_fail(struct drm_gem_shmem_object *shmem) { - struct drm_gem_object *obj = &shmem->base; + WARN_ON(shmem->base.import_attach); - if (WARN_ON_ONCE(!shmem->pages_use_count)) - return; + dma_resv_lock(shmem->base.resv, NULL); - if (--shmem->pages_use_count > 0) + if (drm_gem_shmem_get_pages_locked(shmem)) + shmem->pages_use_count++; + + drm_gem_shmem_update_pages_state_locked(shmem); + + dma_resv_unlock(shmem->base.resv); +} + +static void +drm_gem_shmem_release_pages_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + + if (!shmem->pages) { + WARN_ON(!shmem->evicted && shmem->madv >= 0); return; + } #ifdef CONFIG_X86 if (shmem->map_wc) @@ -236,6 +592,21 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) shmem->pages = NULL; } +static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + + lockdep_assert_held(&obj->resv->lock.base); + + if (WARN_ON(!shmem->pages_use_count)) + return; + + if (--shmem->pages_use_count > 0) + return; + + drm_gem_shmem_release_pages_locked(shmem); +} + /* * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object * @shmem: shmem GEM object @@ -246,6 +617,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem) { dma_resv_lock(shmem->base.resv, NULL); drm_gem_shmem_put_pages_locked(shmem); + drm_gem_shmem_update_pages_state_locked(shmem); dma_resv_unlock(shmem->base.resv); } EXPORT_SYMBOL(drm_gem_shmem_put_pages); @@ -262,9 +634,21 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages); */ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem) { + int err; + WARN_ON(shmem->base.import_attach); - 
return drm_gem_shmem_get_pages(shmem); + err = drm_gem_shmem_set_unpurgeable_and_unevictable(shmem); + if (err) + return err; + + err = drm_gem_shmem_get_pages(shmem); + if (err) { + drm_gem_shmem_set_purgeable_and_evictable(shmem); + return err; + } + + return 0; } EXPORT_SYMBOL(drm_gem_shmem_pin); @@ -280,6 +664,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem) WARN_ON(shmem->base.import_attach); drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_set_purgeable_and_evictable(shmem); } EXPORT_SYMBOL(drm_gem_shmem_unpin); @@ -359,7 +744,18 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, ret = dma_resv_lock_interruptible(shmem->base.resv, NULL); if (ret) return ret; + + ret = drm_gem_shmem_set_unpurgeable_and_unevictable_locked(shmem); + if (ret) + goto unlock; + ret = drm_gem_shmem_vmap_locked(shmem, map); + if (ret) + drm_gem_shmem_set_purgeable_and_evictable_locked(shmem); + else + drm_gem_shmem_update_pages_state_locked(shmem); + +unlock: dma_resv_unlock(shmem->base.resv); return ret; @@ -404,9 +800,9 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, { dma_resv_lock(shmem->base.resv, NULL); drm_gem_shmem_vunmap_locked(shmem, map); + drm_gem_shmem_update_pages_state_locked(shmem); + drm_gem_shmem_set_purgeable_and_evictable_locked(shmem); dma_resv_unlock(shmem->base.resv); - - drm_gem_shmem_update_purgeable_status(shmem); } EXPORT_SYMBOL(drm_gem_shmem_vunmap); @@ -447,29 +843,140 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv) madv = shmem->madv; + drm_gem_shmem_update_pages_state_locked(shmem); + dma_resv_unlock(shmem->base.resv); return (madv >= 0); } EXPORT_SYMBOL(drm_gem_shmem_madvise); -void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem) +/** + * drm_gem_shmem_swap_in_pages_locked() - Moves shmem pages back to memory + * @shmem: shmem GEM object + * + * This function moves pages back to memory if they were previously evicted + * by the memory shrinker. + * + * Returns: + * 0 on success or a negative error code on failure. + */ +int drm_gem_shmem_swap_in_pages_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + struct sg_table *sgt; + int ret; + + lockdep_assert_held(&obj->resv->lock.base); + + if (shmem->evicted) { + ret = drm_gem_shmem_acquire_pages_locked(shmem); + if (ret) + return ret; + + sgt = drm_gem_shmem_get_sg_table(shmem); + if (IS_ERR(sgt)) + return PTR_ERR(sgt); + + ret = dma_map_sgtable(obj->dev->dev, sgt, + DMA_BIDIRECTIONAL, 0); + if (ret) { + sg_free_table(sgt); + kfree(sgt); + return ret; + } + + shmem->sgt = sgt; + shmem->evicted = false; + shmem->pages_state = DRM_GEM_SHMEM_PAGES_STATE_ACTIVE; + + drm_gem_shmem_update_pages_state_locked(shmem); + } + + return shmem->pages ? 0 : -ENOMEM; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_swap_in_pages_locked); + +/** + * drm_gem_shmem_swap_in_locked() - Moves shmem GEM back to memory + * @shmem: shmem GEM object + * + * This function moves shmem GEM back to memory if it was previously evicted + * by the memory shrinker. The GEM is ready to use on success. + * + * Returns: + * 0 on success or a negative error code on failure. 
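+ *
+ * Must be called with the GEM's reservation lock held, e.g. from a
+ * job-submission path that needs the BO to be resident (sketch):
+ *
+ *	dma_resv_lock(shmem->base.resv, NULL);
+ *	err = drm_gem_shmem_swap_in_locked(shmem);
+ *	dma_resv_unlock(shmem->base.resv);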
+ */ +int drm_gem_shmem_swap_in_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + + lockdep_assert_held(&obj->resv->lock.base); + + if (shmem->evicted) + return obj->funcs->swap_in(obj); + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_swap_in_locked); + +static void drm_gem_shmem_unpin_pages_locked(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; struct drm_device *dev = obj->dev; - WARN_ON(!drm_gem_shmem_is_purgeable(shmem)); + if (shmem->evicted) + return; dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0); + drm_gem_shmem_release_pages_locked(shmem); + drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); + sg_free_table(shmem->sgt); kfree(shmem->sgt); shmem->sgt = NULL; +} - drm_gem_shmem_put_pages_locked(shmem); +/** + * drm_gem_shmem_evict_locked - Evict shmem pages + * @shmem: shmem GEM object + * + * This function unpins shmem pages, allowing them to be swapped out from + * memory. + */ +void drm_gem_shmem_evict_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; - shmem->madv = -1; + lockdep_assert_held(&obj->resv->lock.base); - drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); + WARN_ON(!drm_gem_shmem_is_evictable(shmem)); + WARN_ON(shmem->madv < 0); + WARN_ON(shmem->evicted); + + drm_gem_shmem_unpin_pages_locked(shmem); + + shmem->evicted = true; + drm_gem_shmem_set_pages_state(shmem, DRM_GEM_SHMEM_PAGES_STATE_EVICTED); +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_evict_locked); + +/** + * drm_gem_shmem_purge_locked - Purge shmem pages + * @shmem: shmem GEM object + * + * This function permanently releases shmem pages. + */ +void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + + lockdep_assert_held(&obj->resv->lock.base); + + WARN_ON(!drm_gem_shmem_is_purgeable(shmem)); + WARN_ON(shmem->madv < 0); + + drm_gem_shmem_unpin_pages_locked(shmem); drm_gem_free_mmap_offset(obj); /* Our goal here is to return as much of the memory as @@ -480,6 +987,9 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem) shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1); invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1); + + shmem->madv = -1; + drm_gem_shmem_set_pages_state(shmem, DRM_GEM_SHMEM_PAGES_STATE_PURGED); } EXPORT_SYMBOL(drm_gem_shmem_purge_locked); @@ -543,22 +1053,31 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf) vm_fault_t ret; struct page *page; pgoff_t page_offset; + bool pages_inactive; + int err; /* We don't use vmf->pgoff since that has the fake offset */ page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT; dma_resv_lock(shmem->base.resv, NULL); - if (page_offset >= num_pages || - WARN_ON_ONCE(!shmem->pages) || - shmem->madv < 0) { + pages_inactive = shmem->pages_state < DRM_GEM_SHMEM_PAGES_STATE_ACTIVE; + WARN_ON_ONCE(!shmem->pages ^ pages_inactive); + + if (page_offset >= num_pages || (!shmem->pages && !shmem->evicted)) { ret = VM_FAULT_SIGBUS; } else { + err = drm_gem_shmem_swap_in_locked(shmem); + if (err) { + ret = VM_FAULT_OOM; + goto unlock; + } + page = shmem->pages[page_offset]; ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page)); } - +unlock: dma_resv_unlock(shmem->base.resv); return ret; @@ -568,13 +1087,8 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma) { struct drm_gem_object *obj = vma->vm_private_data; struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); - int 
ret; - - WARN_ON(shmem->base.import_attach); - - ret = drm_gem_shmem_get_pages(shmem); - WARN_ON_ONCE(ret != 0); + drm_gem_shmem_get_pages_no_fail(shmem); drm_gem_vm_open(vma); } @@ -716,6 +1230,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem) shmem->sgt = sgt; + drm_gem_shmem_update_pages_state(shmem); + return sgt; err_free_sgt: @@ -762,6 +1278,202 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev, } EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table); +static struct drm_gem_shmem_shrinker * +to_drm_shrinker(struct shrinker *shrinker) +{ + return container_of(shrinker, struct drm_gem_shmem_shrinker, base); +} + +static unsigned long +drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker, + struct shrink_control *sc) +{ + struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker); + u64 count = READ_ONCE(gem_shrinker->shrinkable_count); + + if (count >= SHRINK_EMPTY) + return SHRINK_EMPTY - 1; + + return count ?: SHRINK_EMPTY; +} + +static unsigned long +drm_gem_shmem_shrinker_run_objects_scan(struct shrinker *shrinker, + unsigned long nr_to_scan, + bool *lock_contention, + bool evict) +{ + struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker); + struct drm_gem_shmem_object *shmem; + struct list_head still_in_list; + struct drm_gem_object *obj; + unsigned long freed = 0; + struct list_head *lru; + size_t page_count; + + INIT_LIST_HEAD(&still_in_list); + + mutex_lock(&gem_shrinker->lock); + + if (evict) + lru = &gem_shrinker->lru_evictable; + else + lru = &gem_shrinker->lru_purgeable; + + while (freed < nr_to_scan) { + shmem = list_first_entry_or_null(lru, typeof(*shmem), madv_list); + if (!shmem) + break; + + obj = &shmem->base; + page_count = obj->size >> PAGE_SHIFT; + list_move_tail(&shmem->madv_list, &still_in_list); + + if (evict && get_nr_swap_pages() < page_count) + continue; + + /* + * If it's in the process of being freed, gem_object->free() + * may be blocked on lock waiting to remove it. So just + * skip it. 
+ */ + if (!kref_get_unless_zero(&obj->refcount)) + continue; + + mutex_unlock(&gem_shrinker->lock); + + /* prevent racing with job-submission code paths */ + if (!dma_resv_trylock(obj->resv)) { + *lock_contention |= true; + goto shrinker_lock; + } + + /* prevent racing with the dma-buf exporting */ + if (!mutex_trylock(&gem_shrinker->dev->object_name_lock)) { + *lock_contention |= true; + goto resv_unlock; + } + + /* check whether h/w uses this object */ + if (!dma_resv_test_signaled(obj->resv, true)) + goto object_name_unlock; + + /* GEM may've become unpurgeable while shrinker was unlocked */ + if (evict) { + if (!drm_gem_shmem_is_evictable(shmem)) + goto object_name_unlock; + } else { + if (!drm_gem_shmem_is_purgeable(shmem)) + goto object_name_unlock; + } + + if (evict) + freed += obj->funcs->evict(obj); + else + freed += obj->funcs->purge(obj); +object_name_unlock: + mutex_unlock(&gem_shrinker->dev->object_name_lock); +resv_unlock: + dma_resv_unlock(obj->resv); +shrinker_lock: + drm_gem_object_put(&shmem->base); + mutex_lock(&gem_shrinker->lock); + } + + list_splice_tail(&still_in_list, lru); + + mutex_unlock(&gem_shrinker->lock); + + return freed; +} + +static unsigned long +drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker, + struct shrink_control *sc) +{ + unsigned long nr_to_scan = sc->nr_to_scan; + bool lock_contention = false; + unsigned long freed; + + /* purge as many objects as we can */ + freed = drm_gem_shmem_shrinker_run_objects_scan(shrinker, nr_to_scan, + &lock_contention, false); + nr_to_scan -= freed; + + /* evict as many objects as we can */ + if (freed < nr_to_scan) + freed += drm_gem_shmem_shrinker_run_objects_scan(shrinker, + nr_to_scan, + &lock_contention, + true); + + return (!freed && !lock_contention) ? SHRINK_STOP : freed; +} + +/** + * drm_gem_shmem_shrinker_register() - Register shmem shrinker + * @dev: DRM device + * + * Returns: + * 0 on success or a negative error code on failure. 
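+ *
+ * Drivers typically call this once at device-initialization time and pair
+ * it with drm_gem_shmem_shrinker_unregister() on removal (sketch; the
+ * error label is hypothetical):
+ *
+ *	err = drm_gem_shmem_shrinker_register(ddev);
+ *	if (err)
+ *		goto err_deinit;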
+ */ +int drm_gem_shmem_shrinker_register(struct drm_device *dev) +{ + struct drm_gem_shmem_shrinker *gem_shrinker; + int err; + + if (WARN_ON(dev->shmem_shrinker)) + return -EBUSY; + + gem_shrinker = kzalloc(sizeof(*gem_shrinker), GFP_KERNEL); + if (!gem_shrinker) + return -ENOMEM; + + gem_shrinker->base.count_objects = drm_gem_shmem_shrinker_count_objects; + gem_shrinker->base.scan_objects = drm_gem_shmem_shrinker_scan_objects; + gem_shrinker->base.seeks = DEFAULT_SEEKS; + gem_shrinker->dev = dev; + + INIT_LIST_HEAD(&gem_shrinker->lru_purgeable); + INIT_LIST_HEAD(&gem_shrinker->lru_evictable); + INIT_LIST_HEAD(&gem_shrinker->lru_evicted); + INIT_LIST_HEAD(&gem_shrinker->lru_active); + mutex_init(&gem_shrinker->lock); + + dev->shmem_shrinker = gem_shrinker; + + err = register_shrinker(&gem_shrinker->base); + if (err) { + dev->shmem_shrinker = NULL; + kfree(gem_shrinker); + return err; + } + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_shrinker_register); + +/** + * drm_gem_shmem_shrinker_unregister() - Unregister shmem shrinker + * @dev: DRM device + */ +void drm_gem_shmem_shrinker_unregister(struct drm_device *dev) +{ + struct drm_gem_shmem_shrinker *gem_shrinker = dev->shmem_shrinker; + + if (gem_shrinker) { + unregister_shrinker(&gem_shrinker->base); + WARN_ON(!list_empty(&gem_shrinker->lru_purgeable)); + WARN_ON(!list_empty(&gem_shrinker->lru_evictable)); + WARN_ON(!list_empty(&gem_shrinker->lru_evicted)); + WARN_ON(!list_empty(&gem_shrinker->lru_active)); + mutex_destroy(&gem_shrinker->lock); + dev->shmem_shrinker = NULL; + kfree(gem_shrinker); + } +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_shrinker_unregister); + MODULE_DESCRIPTION("DRM SHMEM memory-management helpers"); MODULE_IMPORT_NS(DMA_BUF); MODULE_LICENSE("GPL v2"); diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h index 9923c7a6885e..929546cad894 100644 --- a/include/drm/drm_device.h +++ b/include/drm/drm_device.h @@ -16,6 +16,7 @@ struct drm_vblank_crtc; struct drm_vma_offset_manager; struct drm_vram_mm; struct drm_fb_helper; +struct drm_gem_shmem_shrinker; struct inode; @@ -277,6 +278,9 @@ struct drm_device { /** @vram_mm: VRAM MM memory manager */ struct drm_vram_mm *vram_mm; + /** @shmem_shrinker: SHMEM GEM memory shrinker */ + struct drm_gem_shmem_shrinker *shmem_shrinker; + /** * @switch_power_state: * diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index 9d7c61a122dc..390d1ce08ed3 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -172,6 +172,41 @@ struct drm_gem_object_funcs { * This is optional but necessary for mmap support. */ const struct vm_operations_struct *vm_ops; + + /** + * @purge: + * + * Releases the GEM object's allocated backing storage to the system. + * + * Returns the number of pages that have been freed by purging the GEM object. + * + * This callback is used by the GEM shrinker. + */ + unsigned long (*purge)(struct drm_gem_object *obj); + + /** + * @evict: + * + * Unpins the GEM object's allocated backing storage, allowing shmem pages + * to be swapped out. + * + * Returns the number of pages that have been unpinned. + * + * This callback is used by the GEM shrinker. + */ + unsigned long (*evict)(struct drm_gem_object *obj); + + /** + * @swap_in: + * + * Pins GEM object's allocated backing storage if it was previously evicted, + * moving swapped out pages back to memory. + * + * Returns 0 on success, or -errno on error. + * + * This callback is used by the GEM shrinker. 
+ */ + int (*swap_in)(struct drm_gem_object *obj); }; /** diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index 70889533962a..dc1c2db7d095 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -6,6 +6,7 @@ #include #include #include +#include #include #include @@ -15,8 +16,18 @@ struct dma_buf_attachment; struct drm_mode_create_dumb; struct drm_printer; +struct drm_device; struct sg_table; +enum drm_gem_shmem_pages_state { + DRM_GEM_SHMEM_PAGES_STATE_PURGED = -2, + DRM_GEM_SHMEM_PAGES_STATE_EVICTED = -1, + DRM_GEM_SHMEM_PAGES_STATE_INACTIVE = 0, + DRM_GEM_SHMEM_PAGES_STATE_ACTIVE = 1, + DRM_GEM_SHMEM_PAGES_STATE_EVICTABLE = 2, + DRM_GEM_SHMEM_PAGES_STATE_PURGEABLE = 3, +}; + /** * struct drm_gem_shmem_object - GEM object backed by shmem */ @@ -43,8 +54,8 @@ struct drm_gem_shmem_object { * @madv: State for madvise * * 0 is active/inuse. + * 1 is not-needed/can-be-purged * A negative value is the object is purged. - * Positive values are driver specific and not used by the helpers. */ int madv; @@ -91,6 +102,40 @@ struct drm_gem_shmem_object { * @map_wc: map object write-combined (instead of using shmem defaults). */ bool map_wc; + + /** + * @eviction_disable_count: + * + * The shmem pages are disallowed to be evicted by the memory shrinker + * while count is non-zero. Used internally by memory shrinker. + */ + unsigned int eviction_disable_count; + + /** + * @purging_disable_count: + * + * The shmem pages are disallowed to be purged by the memory shrinker + * while count is non-zero. Used internally by memory shrinker. + */ + unsigned int purging_disable_count; + + /** + * @pages_state: Current state of shmem pages. Used internally by + * memory shrinker. + */ + enum drm_gem_shmem_pages_state pages_state; + + /** + * @evicted: True if shmem pages were evicted by the memory shrinker. + * Used internally by memory shrinker. + */ + bool evicted; + + /** + * @pages_shrinkable: True if shmem pages can be evicted or purged + * by the memory shrinker. Used internally by memory shrinker. 
+ */ + bool pages_shrinkable; }; #define to_drm_gem_shmem_obj(obj) \ @@ -111,15 +156,33 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv); +int drm_gem_shmem_set_purgeable(struct drm_gem_shmem_object *shmem); +int drm_gem_shmem_set_purgeable_and_evictable(struct drm_gem_shmem_object *shmem); +int drm_gem_shmem_set_unpurgeable_and_unevictable(struct drm_gem_shmem_object *shmem); + +static inline bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem) +{ + return (shmem->madv >= 0) && !shmem->eviction_disable_count && + shmem->base.funcs->evict && shmem->base.funcs->swap_in && + !shmem->vmap_use_count && !shmem->base.dma_buf && + !shmem->base.import_attach && shmem->sgt; +} + static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem) { - return (shmem->madv > 0) && - !shmem->vmap_use_count && shmem->sgt && - !shmem->base.dma_buf && !shmem->base.import_attach; + return (shmem->madv > 0) && !shmem->purging_disable_count && + !shmem->vmap_use_count && shmem->base.funcs->purge && + !shmem->base.dma_buf && !shmem->base.import_attach && + shmem->sgt; } -void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem); +int drm_gem_shmem_swap_in_pages_locked(struct drm_gem_shmem_object *shmem); +int drm_gem_shmem_swap_in_locked(struct drm_gem_shmem_object *shmem); + +void drm_gem_shmem_evict_locked(struct drm_gem_shmem_object *shmem); + bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem); +void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem); struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem); struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem); @@ -262,6 +325,38 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v return drm_gem_shmem_mmap(shmem, vma); } +/** + * struct drm_gem_shmem_shrinker - Generic memory shrinker for shmem GEMs + */ +struct drm_gem_shmem_shrinker { + /** @base: Shrinker for purging shmem GEM objects */ + struct shrinker base; + + /** @lock: Protects @lru_* */ + struct mutex lock; + + /** @lru_purgeable: List of shmem GEM objects available for purging */ + struct list_head lru_purgeable; + + /** @lru_active: List of active shmem GEM objects */ + struct list_head lru_active; + + /** @lru_evictable: List of shmem GEM objects that can be evicted */ + struct list_head lru_evictable; + + /** @lru_evicted: List of evicted shmem GEM objects */ + struct list_head lru_evicted; + + /** @dev: DRM device that uses this shrinker */ + struct drm_device *dev; + + /** @shrinkable_count: Count of shmem GEM pages to be purged and evicted */ + u64 shrinkable_count; +}; + +int drm_gem_shmem_shrinker_register(struct drm_device *dev); +void drm_gem_shmem_shrinker_unregister(struct drm_device *dev); + /* * Driver ops */ From patchwork Mon Apr 11 21:59:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 12809712 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D05C6C433EF for ; Mon, 11 Apr 2022 22:00:50 +0000 (UTC) Received: from gabe.freedesktop.org 
(localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 35AE410E3AD; Mon, 11 Apr 2022 22:00:49 +0000 (UTC) Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e3e3]) by gabe.freedesktop.org (Postfix) with ESMTPS id 8C23D10E37B for ; Mon, 11 Apr 2022 22:00:37 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: dmitry.osipenko) with ESMTPSA id E3EDC1F43C82 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1649714436; bh=b8fdsQiCc99U6x8DLvcAZEh7TWAxSdxqeIufJ6WK2Aw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=WwkG/pu/B9Wj7pIYuaCFH6wkG2KfmKD0I+5BkrztnjiDCl4+gYqhVJ5yrRInws+/R oar2AXvvCdswJfQuYjQqeh9RoBlVsx9JBpISUBHnehKXd4Qe9iz351HdsjSTx9J/7a VD5m7+Tj7XaTUyX0Ngtm/DIZWUcvRbLrLd9D4z9EI63JIJld/bSbD6yp158DD66mtQ vbA4w3po4GYPn/y5DNsEjEe0g2fk/ArRxgM+RBwbdmS1grCWpFPwG8KjdLKgEOVkWb AfIRDkbqYXLDdn0BxuswKW6Z4Vts7+NGzvWZ3TDTlj4/KSTFgYIXkqw5LQEoNxhHR6 jCZ3G6OrgufMA== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Herring , Steven Price , Alyssa Rosenzweig , Rob Clark Subject: [PATCH v3 12/15] drm/virtio: Support memory shrinking Date: Tue, 12 Apr 2022 00:59:34 +0300 Message-Id: <20220411215937.281655-13-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220411215937.281655-1-dmitry.osipenko@collabora.com> References: <20220411215937.281655-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, Gustavo Padovan , dri-devel@lists.freedesktop.org, Dmitry Osipenko , Dmitry Osipenko Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel"

Support the generic DRM SHMEM memory shrinker and add a new madvise IOCTL to the VirtIO-GPU driver. Userspace (the BO cache manager of the Mesa driver) will mark BOs as "don't need" using the new IOCTL to let the shrinker purge the marked BOs on OOM; the shrinker will also evict unpurgeable shmem BOs from memory if the guest supports swap. Altogether this helps to prevent OOM kills of guest applications that use VirGL by lowering memory pressure.
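As an illustration, a guest userspace BO cache could use the new IOCTL roughly as follows (a sketch against the UAPI added by this patch; the fd, bo_handle and bo_cache_drop() names are assumed for the example):

	struct drm_virtgpu_madvise args = {
		.bo_handle = bo_handle,
		.madv = VIRTGPU_MADV_DONTNEED,
	};

	/* the BO went idle: allow the kernel to purge it under pressure */
	drmIoctl(fd, DRM_IOCTL_VIRTGPU_MADVISE, &args);

	/* on reuse: reclaim the BO and check whether it survived */
	args.madv = VIRTGPU_MADV_WILLNEED;
	if (drmIoctl(fd, DRM_IOCTL_VIRTGPU_MADVISE, &args) == 0 &&
	    !args.retained)
		bo_cache_drop(bo);	/* backing pages were purged */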
Signed-off-by: Daniel Almeida Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/virtio/virtgpu_drv.h | 15 ++- drivers/gpu/drm/virtio/virtgpu_gem.c | 46 ++++++++ drivers/gpu/drm/virtio/virtgpu_ioctl.c | 37 +++++++ drivers/gpu/drm/virtio/virtgpu_kms.c | 9 ++ drivers/gpu/drm/virtio/virtgpu_object.c | 139 +++++++++++++++++++----- drivers/gpu/drm/virtio/virtgpu_plane.c | 17 ++- drivers/gpu/drm/virtio/virtgpu_vq.c | 40 +++++++ include/uapi/drm/virtgpu_drm.h | 14 +++ 8 files changed, 288 insertions(+), 29 deletions(-) diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h index b2d93cb12ebf..c8918a271e1c 100644 --- a/drivers/gpu/drm/virtio/virtgpu_drv.h +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h @@ -274,7 +274,7 @@ struct virtio_gpu_fpriv { }; /* virtgpu_ioctl.c */ -#define DRM_VIRTIO_NUM_IOCTLS 12 +#define DRM_VIRTIO_NUM_IOCTLS 13 extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS]; void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file); @@ -310,6 +310,10 @@ void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs); void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev, struct virtio_gpu_object_array *objs); void virtio_gpu_array_put_free_work(struct work_struct *work); +int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object_array *objs); +int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo); +bool virtio_gpu_gem_madvise(struct virtio_gpu_object *obj, int madv); /* virtgpu_vq.c */ int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev); @@ -321,6 +325,8 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev, struct virtio_gpu_fence *fence); void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *bo); +int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *bo); void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev, uint64_t offset, uint32_t width, uint32_t height, @@ -341,6 +347,9 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *obj, struct virtio_gpu_mem_entry *ents, unsigned int nents); +void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *obj, + struct virtio_gpu_fence *fence); int virtio_gpu_attach_status_page(struct virtio_gpu_device *vgdev); int virtio_gpu_detach_status_page(struct virtio_gpu_device *vgdev); void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev, @@ -483,4 +492,8 @@ void virtio_gpu_vram_unmap_dma_buf(struct device *dev, struct sg_table *sgt, enum dma_data_direction dir); +/* virtgpu_gem_shrinker.c */ +int virtio_gpu_gem_shrinker_init(struct virtio_gpu_device *vgdev); +void virtio_gpu_gem_shrinker_fini(struct virtio_gpu_device *vgdev); + #endif diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c index 7db48d17ee3a..08189ad43736 100644 --- a/drivers/gpu/drm/virtio/virtgpu_gem.c +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c @@ -294,3 +294,49 @@ void virtio_gpu_array_put_free_work(struct work_struct *work) } spin_unlock(&vgdev->obj_free_lock); } + +int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object_array *objs) +{ + struct drm_gem_shmem_object *shmem; + int ret = 0; + u32 i; + + for (i = 0; i < objs->nents; i++) { + shmem = to_drm_gem_shmem_obj(objs->objs[i]); + ret = drm_gem_shmem_swap_in_locked(shmem); + if (ret) + break; + } + + return ret; +} + +bool 
virtio_gpu_gem_madvise(struct virtio_gpu_object *bo, int madv) +{ + /* + * For now we support only purging BOs that are backed by guest's + * memory. + */ + if (!virtio_gpu_is_shmem(bo)) + return true; + + return drm_gem_shmem_madvise(&bo->base, madv); +} + +int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo) +{ + struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private; + int err; + + if (bo->created) { + err = virtio_gpu_cmd_release_resource(vgdev, bo); + if (err) + return err; + + virtio_gpu_notify(vgdev); + bo->created = false; + } + + return 0; +} diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c index f8d83358d2a0..55ee9bd2098e 100644 --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c @@ -217,6 +217,10 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data, ret = virtio_gpu_array_lock_resv(buflist); if (ret) goto out_memdup; + + ret = virtio_gpu_array_prepare(vgdev, buflist); + if (ret) + goto out_unresv; } out_fence = virtio_gpu_fence_alloc(vgdev, fence_ctx, ring_idx); @@ -423,6 +427,10 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev, if (ret != 0) goto err_put_free; + ret = virtio_gpu_array_prepare(vgdev, objs); + if (ret) + goto err_unlock; + fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); if (!fence) { ret = -ENOMEM; @@ -482,6 +490,10 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data, if (ret != 0) goto err_put_free; + ret = virtio_gpu_array_prepare(vgdev, objs); + if (ret) + goto err_unlock; + ret = -ENOMEM; fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); @@ -836,6 +848,28 @@ static int virtio_gpu_context_init_ioctl(struct drm_device *dev, return ret; } +static int virtio_gpu_madvise_ioctl(struct drm_device *dev, + void *data, + struct drm_file *file) +{ + struct drm_virtgpu_madvise *args = data; + struct virtio_gpu_object *bo; + struct drm_gem_object *obj; + + if (args->madv > VIRTGPU_MADV_DONTNEED) + return -EOPNOTSUPP; + + obj = drm_gem_object_lookup(file, args->bo_handle); + if (!obj) + return -ENOENT; + + bo = gem_to_virtio_gpu_obj(obj); + args->retained = virtio_gpu_gem_madvise(bo, args->madv); + drm_gem_object_put(obj); + + return 0; +} + struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = { DRM_IOCTL_DEF_DRV(VIRTGPU_MAP, virtio_gpu_map_ioctl, DRM_RENDER_ALLOW), @@ -875,4 +909,7 @@ struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = { DRM_IOCTL_DEF_DRV(VIRTGPU_CONTEXT_INIT, virtio_gpu_context_init_ioctl, DRM_RENDER_ALLOW), + + DRM_IOCTL_DEF_DRV(VIRTGPU_MADVISE, virtio_gpu_madvise_ioctl, + DRM_RENDER_ALLOW), }; diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c index 0d1e3eb61bee..1175999acea1 100644 --- a/drivers/gpu/drm/virtio/virtgpu_kms.c +++ b/drivers/gpu/drm/virtio/virtgpu_kms.c @@ -238,6 +238,12 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev) goto err_scanouts; } + ret = drm_gem_shmem_shrinker_register(dev); + if (ret) { + DRM_ERROR("shrinker init failed\n"); + goto err_modeset; + } + virtio_device_ready(vgdev->vdev); if (num_capsets) @@ -250,6 +256,8 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev) 5 * HZ); return 0; +err_modeset: + virtio_gpu_modeset_fini(vgdev); err_scanouts: virtio_gpu_free_vbufs(vgdev); err_vbufs: @@ -289,6 +297,7 @@ void virtio_gpu_release(struct drm_device *dev) if (!vgdev) return; + 
drm_gem_shmem_shrinker_unregister(dev); virtio_gpu_modeset_fini(vgdev); virtio_gpu_free_vbufs(vgdev); virtio_gpu_cleanup_cap_cache(vgdev); diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c index 8d7728181de0..771165fbda7c 100644 --- a/drivers/gpu/drm/virtio/virtgpu_object.c +++ b/drivers/gpu/drm/virtio/virtgpu_object.c @@ -97,39 +97,58 @@ static void virtio_gpu_free_object(struct drm_gem_object *obj) virtio_gpu_cleanup_object(bo); } -static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = { - .free = virtio_gpu_free_object, - .open = virtio_gpu_gem_object_open, - .close = virtio_gpu_gem_object_close, - .print_info = drm_gem_shmem_object_print_info, - .export = virtgpu_gem_prime_export, - .pin = drm_gem_shmem_object_pin, - .unpin = drm_gem_shmem_object_unpin, - .get_sg_table = drm_gem_shmem_object_get_sg_table, - .vmap = drm_gem_shmem_object_vmap, - .vunmap = drm_gem_shmem_object_vunmap, - .mmap = drm_gem_shmem_object_mmap, - .vm_ops = &drm_gem_shmem_vm_ops, -}; +static int virtio_gpu_detach_object_fenced(struct virtio_gpu_object *bo) +{ + struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private; + struct virtio_gpu_fence *fence; -bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo) + fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); + if (!fence) + return -ENOMEM; + + virtio_gpu_object_detach(vgdev, bo, fence); + virtio_gpu_notify(vgdev); + + dma_fence_wait(&fence->f, false); + dma_fence_put(&fence->f); + + return 0; +} + +static unsigned long virtio_gpu_purge_object(struct drm_gem_object *obj) { - return bo->base.base.funcs == &virtio_gpu_shmem_funcs; + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); + int err; + + /* + * First tell the host to stop using the guest's memory, to ensure that + * the host won't touch the released guest memory once it's gone.
+ */ + err = virtio_gpu_detach_object_fenced(bo); + if (err) + return 0; + + err = virtio_gpu_gem_host_mem_release(bo); + if (err) + return 0; + + drm_gem_shmem_purge_locked(&bo->base); + + return obj->size >> PAGE_SHIFT; } -struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev, - size_t size) +static unsigned long virtio_gpu_evict_object(struct drm_gem_object *obj) { - struct virtio_gpu_object_shmem *shmem; - struct drm_gem_shmem_object *dshmem; + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); + int err; - shmem = kzalloc(sizeof(*shmem), GFP_KERNEL); - if (!shmem) - return ERR_PTR(-ENOMEM); + err = virtio_gpu_detach_object_fenced(bo); + if (err) + return 0; - dshmem = &shmem->base.base; - dshmem->base.funcs = &virtio_gpu_shmem_funcs; - return &dshmem->base; + drm_gem_shmem_evict_locked(&bo->base); + + return obj->size >> PAGE_SHIFT; } static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, @@ -176,6 +195,66 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, return 0; } +static int virtio_gpu_swap_in_object(struct drm_gem_object *obj) +{ + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); + struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private; + struct virtio_gpu_mem_entry *ents; + unsigned int nents; + int err; + + err = drm_gem_shmem_swap_in_pages_locked(&bo->base); + if (err) + return err; + + err = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents); + if (err) + return err; + + virtio_gpu_object_attach(vgdev, bo, ents, nents); + virtio_gpu_notify(vgdev); + + return 0; +} + +static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = { + .free = virtio_gpu_free_object, + .open = virtio_gpu_gem_object_open, + .close = virtio_gpu_gem_object_close, + .print_info = drm_gem_shmem_object_print_info, + .export = virtgpu_gem_prime_export, + .pin = drm_gem_shmem_object_pin, + .unpin = drm_gem_shmem_object_unpin, + .get_sg_table = drm_gem_shmem_object_get_sg_table, + .vmap = drm_gem_shmem_object_vmap, + .vunmap = drm_gem_shmem_object_vunmap, + .mmap = drm_gem_shmem_object_mmap, + .vm_ops = &drm_gem_shmem_vm_ops, + .purge = virtio_gpu_purge_object, + .evict = virtio_gpu_evict_object, + .swap_in = virtio_gpu_swap_in_object, +}; + +bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo) +{ + return bo->base.base.funcs == &virtio_gpu_shmem_funcs; +} + +struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev, + size_t size) +{ + struct virtio_gpu_object_shmem *shmem; + struct drm_gem_shmem_object *dshmem; + + shmem = kzalloc(sizeof(*shmem), GFP_KERNEL); + if (!shmem) + return ERR_PTR(-ENOMEM); + + dshmem = &shmem->base.base; + dshmem->base.funcs = &virtio_gpu_shmem_funcs; + return &dshmem->base; +} + int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, struct virtio_gpu_object_params *params, struct virtio_gpu_object **bo_ptr, @@ -228,10 +307,18 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, virtio_gpu_cmd_resource_create_3d(vgdev, bo, params, objs, fence); virtio_gpu_object_attach(vgdev, bo, ents, nents); + + shmem_obj->pages_mark_dirty_on_put = 1; + + drm_gem_shmem_set_purgeable_and_evictable(shmem_obj); } else { virtio_gpu_cmd_create_resource(vgdev, bo, params, objs, fence); virtio_gpu_object_attach(vgdev, bo, ents, nents); + + shmem_obj->pages_mark_dirty_on_put = 1; + + drm_gem_shmem_set_purgeable_and_evictable(shmem_obj); } *bo_ptr = bo; diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c index 
7148f3813d8b..2db6166a0307 100644 --- a/drivers/gpu/drm/virtio/virtgpu_plane.c +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c @@ -246,20 +246,28 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane, struct virtio_gpu_device *vgdev = dev->dev_private; struct virtio_gpu_framebuffer *vgfb; struct virtio_gpu_object *bo; + int err; if (!new_state->fb) return 0; vgfb = to_virtio_gpu_framebuffer(new_state->fb); bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]); - if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob)) + + err = drm_gem_shmem_set_unpurgeable_and_unevictable(&bo->base); + if (err) + return err; + + if (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob) return 0; if (bo->dumb && (plane->state->fb != new_state->fb)) { vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); - if (!vgfb->fence) + if (!vgfb->fence) { + drm_gem_shmem_set_purgeable_and_evictable(&bo->base); return -ENOMEM; + } } return 0; @@ -269,15 +277,20 @@ static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane, struct drm_plane_state *state) { struct virtio_gpu_framebuffer *vgfb; + struct virtio_gpu_object *bo; if (!state->fb) return; vgfb = to_virtio_gpu_framebuffer(state->fb); + bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]); + if (vgfb->fence) { dma_fence_put(&vgfb->fence->f); vgfb->fence = NULL; } + + drm_gem_shmem_set_purgeable_and_evictable(&bo->base); } static void virtio_gpu_cursor_plane_update(struct drm_plane *plane, diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c index 06566e44307d..2a04dad1ae89 100644 --- a/drivers/gpu/drm/virtio/virtgpu_vq.c +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c @@ -536,6 +536,21 @@ void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev, virtio_gpu_cleanup_object(bo); } +int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *bo) +{ + struct virtio_gpu_resource_unref *cmd_p; + struct virtio_gpu_vbuffer *vbuf; + + cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); + memset(cmd_p, 0, sizeof(*cmd_p)); + + cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_UNREF); + cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); + + return virtio_gpu_queue_ctrl_buffer(vgdev, vbuf); +} + void virtio_gpu_cmd_set_scanout(struct virtio_gpu_device *vgdev, uint32_t scanout_id, uint32_t resource_id, uint32_t width, uint32_t height, @@ -636,6 +651,23 @@ virtio_gpu_cmd_resource_attach_backing(struct virtio_gpu_device *vgdev, virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence); } +static void +virtio_gpu_cmd_resource_detach_backing(struct virtio_gpu_device *vgdev, + u32 resource_id, + struct virtio_gpu_fence *fence) +{ + struct virtio_gpu_resource_attach_backing *cmd_p; + struct virtio_gpu_vbuffer *vbuf; + + cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); + memset(cmd_p, 0, sizeof(*cmd_p)); + + cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING); + cmd_p->resource_id = cpu_to_le32(resource_id); + + virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence); +} + static void virtio_gpu_cmd_get_display_info_cb(struct virtio_gpu_device *vgdev, struct virtio_gpu_vbuffer *vbuf) { @@ -1099,6 +1131,14 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev, ents, nents, NULL); } +void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *obj, + struct virtio_gpu_fence *fence) +{ + virtio_gpu_cmd_resource_detach_backing(vgdev, obj->hw_res_handle, + fence); +} + void 
virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev, struct virtio_gpu_output *output) { diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h index 0512fde5e697..12197d8e9759 100644 --- a/include/uapi/drm/virtgpu_drm.h +++ b/include/uapi/drm/virtgpu_drm.h @@ -48,6 +48,7 @@ extern "C" { #define DRM_VIRTGPU_GET_CAPS 0x09 #define DRM_VIRTGPU_RESOURCE_CREATE_BLOB 0x0a #define DRM_VIRTGPU_CONTEXT_INIT 0x0b +#define DRM_VIRTGPU_MADVISE 0x0c #define VIRTGPU_EXECBUF_FENCE_FD_IN 0x01 #define VIRTGPU_EXECBUF_FENCE_FD_OUT 0x02 @@ -196,6 +197,15 @@ struct drm_virtgpu_context_init { __u64 ctx_set_params; }; +#define VIRTGPU_MADV_WILLNEED 0 +#define VIRTGPU_MADV_DONTNEED 1 +struct drm_virtgpu_madvise { + __u32 bo_handle; + __u32 retained; /* out, non-zero if BO can be used */ + __u32 madv; + __u32 pad; +}; + /* * Event code that's given when VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK is in * effect. The event size is sizeof(drm_event), since there is no additional @@ -246,6 +256,10 @@ struct drm_virtgpu_context_init { DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_CONTEXT_INIT, \ struct drm_virtgpu_context_init) +#define DRM_IOCTL_VIRTGPU_MADVISE \ + DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MADVISE, \ + struct drm_virtgpu_madvise) + #if defined(__cplusplus) } #endif From patchwork Mon Apr 11 21:59:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 12809710 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8BAB4C4332F for ; Mon, 11 Apr 2022 22:00:45 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 3D8C610E38D; Mon, 11 Apr 2022 22:00:44 +0000 (UTC) Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e3e3]) by gabe.freedesktop.org (Postfix) with ESMTPS id 4428F10E37B for ; Mon, 11 Apr 2022 22:00:39 +0000 (UTC) Received: from [127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: dmitry.osipenko) with ESMTPSA id 9CDBA1F438CC DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1649714438; bh=fn5iGfrH7Doc77PZ/AjsmRZxm95B+x0ULuINeaTwWcw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jS4tPgha2YFhH2EGV7voM/toaDcNss/uxjXzwvWz6Ibcjk6Vmu9c9hQw/OtV1vI+c eoLg+3CJV82AgLkFugnISzWUNnJBjFGGrEUAM1078hpdICYkv2bKtT71T/jbWnaQ4h HSxFvGyzRzd02YR9VKU0aTPj3WC3EGdeb5qovUBGMgAURmAARA/HWFogMA8JMaaGKx LVe6XCc4m1M6pYkkFcwlEJQAoRtWvzGyj6GstqZ2AJMzUwPTx/PYYugt602XKGDgNH pEb8cQSx+KccNt89GptkkHuDx7smsHkFtgPkdtNjfA2qATuqxbY6sMTSqn5NthtnFT wtYi8ZrTPekjg== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Herring , Steven Price , Alyssa Rosenzweig , Rob Clark Subject: [PATCH v3 13/15] drm/panfrost: Switch to generic memory shrinker Date: Tue, 12 Apr 2022 00:59:35 +0300 Message-Id: <20220411215937.281655-14-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220411215937.281655-1-dmitry.osipenko@collabora.com> References: <20220411215937.281655-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 
From patchwork Mon Apr 11 21:59:35 2022
From: Dmitry Osipenko
Subject: [PATCH v3 13/15] drm/panfrost: Switch to generic memory shrinker
Date: Tue, 12 Apr 2022 00:59:35 +0300
Message-Id: <20220411215937.281655-14-dmitry.osipenko@collabora.com>

Replace Panfrost's memory shrinker with a generic DRM SHMEM memory
shrinker.

Tested-by: Steven Price
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/panfrost/Makefile             |   1 -
 drivers/gpu/drm/panfrost/panfrost_device.h    |   4 -
 drivers/gpu/drm/panfrost/panfrost_drv.c       |  19 +--
 drivers/gpu/drm/panfrost/panfrost_gem.c       |  30 +++--
 drivers/gpu/drm/panfrost/panfrost_gem.h       |   9 --
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  | 122 ------------------
 drivers/gpu/drm/panfrost/panfrost_job.c       |  18 ++-
 7 files changed, 39 insertions(+), 164 deletions(-)
 delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c

diff --git a/drivers/gpu/drm/panfrost/Makefile b/drivers/gpu/drm/panfrost/Makefile
index b71935862417..ecf0864cb515 100644
--- a/drivers/gpu/drm/panfrost/Makefile
+++ b/drivers/gpu/drm/panfrost/Makefile
@@ -5,7 +5,6 @@ panfrost-y := \
 	panfrost_device.o \
 	panfrost_devfreq.o \
 	panfrost_gem.o \
-	panfrost_gem_shrinker.o \
 	panfrost_gpu.o \
 	panfrost_job.o \
 	panfrost_mmu.o \
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
index 8b25278f34c8..fe04b21fc044 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.h
+++ b/drivers/gpu/drm/panfrost/panfrost_device.h
@@ -115,10 +115,6 @@ struct panfrost_device {
 		atomic_t pending;
 	} reset;
 
-	struct mutex shrinker_lock;
-	struct list_head shrinker_list;
-	struct shrinker shrinker;
-
 	struct panfrost_devfreq pfdevfreq;
 };
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 7fcbc2a5b6cd..57a93555813f 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -160,7 +160,6 @@ panfrost_lookup_bos(struct drm_device *dev,
 			break;
 		}
 
-		atomic_inc(&bo->gpu_usecount);
 		job->mappings[i] = mapping;
 	}
 
@@ -391,7 +390,6 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 {
 	struct panfrost_file_priv *priv = file_priv->driver_priv;
 	struct drm_panfrost_madvise *args = data;
-	struct panfrost_device *pfdev = dev->dev_private;
 	struct drm_gem_object *gem_obj;
 	struct panfrost_gem_object *bo;
 	int ret = 0;
@@ -404,7 +402,6 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 
 	bo = to_panfrost_bo(gem_obj);
 
-	mutex_lock(&pfdev->shrinker_lock);
 	mutex_lock(&bo->mappings.lock);
 	if (args->madv == PANFROST_MADV_DONTNEED) {
 		struct panfrost_gem_mapping *first;
@@ -430,17 +427,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 
 	args->retained = drm_gem_shmem_madvise(&bo->base, args->madv);
 
-	if (args->retained) {
-		if (args->madv == PANFROST_MADV_DONTNEED)
-			list_add_tail(&bo->base.madv_list,
-				      &pfdev->shrinker_list);
-		else if (args->madv == PANFROST_MADV_WILLNEED)
-			list_del_init(&bo->base.madv_list);
-	}
-
 out_unlock_mappings:
 	mutex_unlock(&bo->mappings.lock);
-	mutex_unlock(&pfdev->shrinker_lock);
 	drm_gem_object_put(gem_obj);
 
 	return ret;
@@ -571,9 +559,6 @@ static int panfrost_probe(struct platform_device *pdev)
 	ddev->dev_private = pfdev;
 	pfdev->ddev = ddev;
 
-	mutex_init(&pfdev->shrinker_lock);
-	INIT_LIST_HEAD(&pfdev->shrinker_list);
-
 	err = panfrost_device_init(pfdev);
 	if (err) {
 		if (err != -EPROBE_DEFER)
@@ -595,7 +580,7 @@ static int panfrost_probe(struct platform_device *pdev)
 	if (err < 0)
 		goto err_out1;
 
-	panfrost_gem_shrinker_init(ddev);
+	drm_gem_shmem_shrinker_register(ddev);
 
 	return 0;
 
@@ -613,8 +598,8 @@ static int panfrost_remove(struct platform_device *pdev)
 	struct panfrost_device *pfdev = platform_get_drvdata(pdev);
 	struct drm_device *ddev = pfdev->ddev;
 
+	drm_gem_shmem_shrinker_unregister(ddev);
 	drm_dev_unregister(ddev);
-	panfrost_gem_shrinker_cleanup(ddev);
 
 	pm_runtime_get_sync(pfdev->dev);
 	pm_runtime_disable(pfdev->dev);
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 293e799e2fe8..b4a7ea7c8f00 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -19,16 +19,6 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
 	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
 	struct panfrost_device *pfdev = obj->dev->dev_private;
 
-	/*
-	 * Make sure the BO is no longer inserted in the shrinker list before
-	 * taking care of the destruction itself. If we don't do that we have a
-	 * race condition between this function and what's done in
-	 * panfrost_gem_shrinker_scan().
-	 */
-	mutex_lock(&pfdev->shrinker_lock);
-	list_del_init(&bo->base.madv_list);
-	mutex_unlock(&pfdev->shrinker_lock);
-
 	/*
 	 * If we still have mappings attached to the BO, there's a problem in
 	 * our refcounting.
@@ -195,6 +185,22 @@ static int panfrost_gem_pin(struct drm_gem_object *obj)
 	return drm_gem_shmem_pin(&bo->base);
 }
 
+static unsigned long panfrost_gem_purge(struct drm_gem_object *obj)
+{
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
+
+	if (!mutex_trylock(&bo->mappings.lock))
+		return 0;
+
+	panfrost_gem_teardown_mappings_locked(bo);
+	drm_gem_shmem_purge_locked(shmem);
+
+	mutex_unlock(&bo->mappings.lock);
+
+	return obj->size >> PAGE_SHIFT;
+}
+
 static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.free = panfrost_gem_free_object,
 	.open = panfrost_gem_open,
@@ -207,6 +213,7 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.vunmap = drm_gem_shmem_object_vunmap,
 	.mmap = drm_gem_shmem_object_mmap,
 	.vm_ops = &drm_gem_shmem_vm_ops,
+	.purge = panfrost_gem_purge,
 };
 
 /**
@@ -266,6 +273,9 @@ panfrost_gem_create_with_handle(struct drm_file *file_priv,
 	if (ret)
 		return ERR_PTR(ret);
 
+	if (!bo->is_heap)
+		drm_gem_shmem_set_purgeable(shmem);
+
 	return bo;
 }
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
index 8088d5fd8480..09da064f1c07 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
@@ -30,12 +30,6 @@ struct panfrost_gem_object {
 		struct mutex lock;
 	} mappings;
 
-	/*
-	 * Count the number of jobs referencing this BO so we don't let the
-	 * shrinker reclaim this object prematurely.
-	 */
-	atomic_t gpu_usecount;
-
 	bool noexec		:1;
 	bool is_heap		:1;
 };
@@ -84,7 +78,4 @@ panfrost_gem_mapping_get(struct panfrost_gem_object *bo,
 void panfrost_gem_mapping_put(struct panfrost_gem_mapping *mapping);
 void panfrost_gem_teardown_mappings_locked(struct panfrost_gem_object *bo);
 
-void panfrost_gem_shrinker_init(struct drm_device *dev);
-void panfrost_gem_shrinker_cleanup(struct drm_device *dev);
-
 #endif /* __PANFROST_GEM_H__ */
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
deleted file mode 100644
index 77e7cb6d1ae3..000000000000
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ /dev/null
@@ -1,122 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright (C) 2019 Arm Ltd.
- *
- * Based on msm_gem_freedreno.c:
- * Copyright (C) 2016 Red Hat
- * Author: Rob Clark
- */
-
-#include <linux/list.h>
-
-#include <drm/drm_device.h>
-#include <drm/drm_gem_shmem_helper.h>
-
-#include "panfrost_device.h"
-#include "panfrost_gem.h"
-#include "panfrost_mmu.h"
-
-static unsigned long
-panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
-{
-	struct panfrost_device *pfdev =
-		container_of(shrinker, struct panfrost_device, shrinker);
-	struct drm_gem_shmem_object *shmem;
-	unsigned long count = 0;
-
-	if (!mutex_trylock(&pfdev->shrinker_lock))
-		return 0;
-
-	list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) {
-		if (drm_gem_shmem_is_purgeable(shmem))
-			count += shmem->base.size >> PAGE_SHIFT;
-	}
-
-	mutex_unlock(&pfdev->shrinker_lock);
-
-	return count;
-}
-
-static bool panfrost_gem_purge(struct drm_gem_object *obj)
-{
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
-	bool ret = false;
-
-	if (atomic_read(&bo->gpu_usecount))
-		return false;
-
-	if (!mutex_trylock(&bo->mappings.lock))
-		return false;
-
-	if (!mutex_trylock(&shmem->pages_lock))
-		goto unlock_mappings;
-
-	panfrost_gem_teardown_mappings_locked(bo);
-	drm_gem_shmem_purge_locked(&bo->base);
-	ret = true;
-
-	mutex_unlock(&shmem->pages_lock);
-
-unlock_mappings:
-	mutex_unlock(&bo->mappings.lock);
-	return ret;
-}
-
-static unsigned long
-panfrost_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
-{
-	struct panfrost_device *pfdev =
-		container_of(shrinker, struct panfrost_device, shrinker);
-	struct drm_gem_shmem_object *shmem, *tmp;
-	unsigned long freed = 0;
-
-	if (!mutex_trylock(&pfdev->shrinker_lock))
-		return SHRINK_STOP;
-
-	list_for_each_entry_safe(shmem, tmp, &pfdev->shrinker_list, madv_list) {
-		if (freed >= sc->nr_to_scan)
-			break;
-		if (drm_gem_shmem_is_purgeable(shmem) &&
-		    panfrost_gem_purge(&shmem->base)) {
-			freed += shmem->base.size >> PAGE_SHIFT;
-			list_del_init(&shmem->madv_list);
-		}
-	}
-
-	mutex_unlock(&pfdev->shrinker_lock);
-
-	if (freed > 0)
-		pr_info_ratelimited("Purging %lu bytes\n", freed << PAGE_SHIFT);
-
-	return freed;
-}
-
-/**
- * panfrost_gem_shrinker_init - Initialize panfrost shrinker
- * @dev: DRM device
- *
- * This function registers and sets up the panfrost shrinker.
- */
-void panfrost_gem_shrinker_init(struct drm_device *dev)
-{
-	struct panfrost_device *pfdev = dev->dev_private;
-
-	pfdev->shrinker.count_objects = panfrost_gem_shrinker_count;
-	pfdev->shrinker.scan_objects = panfrost_gem_shrinker_scan;
-	pfdev->shrinker.seeks = DEFAULT_SEEKS;
-	WARN_ON(register_shrinker(&pfdev->shrinker));
-}
-
-/**
- * panfrost_gem_shrinker_cleanup - Clean up panfrost shrinker
- * @dev: DRM device
- *
- * This function unregisters the panfrost shrinker.
- */
-void panfrost_gem_shrinker_cleanup(struct drm_device *dev)
-{
-	struct panfrost_device *pfdev = dev->dev_private;
-
-	if (pfdev->shrinker.nr_deferred) {
-		unregister_shrinker(&pfdev->shrinker);
-	}
-}
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index fda5871aebe3..bcf496b837ce 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -271,6 +271,19 @@ static void panfrost_attach_object_fences(struct drm_gem_object **bos,
 		dma_resv_add_fence(bos[i]->resv, fence, DMA_RESV_USAGE_WRITE);
 }
 
+static int panfrost_objects_prepare(struct drm_gem_object **bos, int bo_count)
+{
+	struct panfrost_gem_object *bo;
+	int ret = 0;
+
+	while (!ret && bo_count--) {
+		bo = to_panfrost_bo(bos[bo_count]);
+		ret = bo->base.madv ? -ENOMEM : 0;
+	}
+
+	return ret;
+}
+
 int panfrost_job_push(struct panfrost_job *job)
 {
 	struct panfrost_device *pfdev = job->pfdev;
@@ -282,6 +295,10 @@ int panfrost_job_push(struct panfrost_job *job)
 	if (ret)
 		return ret;
 
+	ret = panfrost_objects_prepare(job->bos, job->bo_count);
+	if (ret)
+		goto unlock;
+
 	mutex_lock(&pfdev->sched_lock);
 
 	drm_sched_job_arm(&job->base);
@@ -323,7 +340,6 @@ static void panfrost_job_cleanup(struct kref *ref)
 		if (!job->mappings[i])
 			break;
 
-		atomic_dec(&job->mappings[i]->obj->gpu_usecount);
 		panfrost_gem_mapping_put(job->mappings[i]);
 	}
 	kvfree(job->mappings);
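To summarize the driver-side contract this conversion relies on, here is a
hedged sketch of what a shmem-based driver now wires up. The mydrv_* names are
illustrative; the entry points (drm_gem_shmem_shrinker_register(), the .purge
hook, drm_gem_shmem_purge_locked() and drm_gem_shmem_set_purgeable()) are the
ones used in the diff above, and the locking comment is an inference from the
panfrost callback, not a documented guarantee.

/* Illustrative sketch of the generic-shrinker contract; it mirrors the
 * panfrost conversion above rather than documenting a finalized API.
 */
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem_shmem_helper.h>

/* Called by the generic shrinker to reclaim one object; the panfrost
 * hook above suggests the core already holds the BO locks it needs.
 * Returns the number of pages released.
 */
static unsigned long mydrv_gem_purge(struct drm_gem_object *obj)
{
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

	/* Driver-specific teardown (GPU mappings, etc.) goes here. */
	drm_gem_shmem_purge_locked(shmem);

	return obj->size >> PAGE_SHIFT;
}

static const struct drm_gem_object_funcs mydrv_gem_funcs = {
	/* ...the usual drm_gem_shmem_object_* helpers... */
	.purge = mydrv_gem_purge,
};

static int mydrv_register(struct drm_device *ddev)
{
	int err = drm_dev_register(ddev, 0);

	if (err)
		return err;

	/* One call replaces the per-driver shrinker machinery. */
	drm_gem_shmem_shrinker_register(ddev);
	return 0;
}

Per-BO opt-in then happens at creation time via drm_gem_shmem_set_purgeable(),
as the panfrost_gem_create_with_handle() hunk shows.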
From patchwork Mon Apr 11 21:59:36 2022
From: Dmitry Osipenko
Subject: [PATCH v3 14/15] drm/shmem-helper: Make drm_gem_shmem_get_pages() private
Date: Tue, 12 Apr 2022 00:59:36 +0300
Message-Id: <20220411215937.281655-15-dmitry.osipenko@collabora.com>

The virtio-gpu driver was the only user of drm_gem_shmem_get_pages(), and it
now uses drm_gem_shmem_get_pages_sgt(). Make get_pages() private to the shmem
helper.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 3 +--
 include/drm/drm_gem_shmem_helper.h     | 1 -
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 7e4851363d14..8e31056575e3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -538,7 +538,7 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 {
 	int ret;
 
@@ -555,7 +555,6 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 
 	return ret;
 }
-EXPORT_SYMBOL(drm_gem_shmem_get_pages);
 
 static void drm_gem_shmem_get_pages_no_fail(struct drm_gem_shmem_object *shmem)
 {
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index dc1c2db7d095..7284335ea2c9 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -144,7 +144,6 @@ struct drm_gem_shmem_object {
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev,
 						  size_t size);
 void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
-int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
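For drivers that previously called drm_gem_shmem_get_pages() directly, the
surviving entry point is drm_gem_shmem_get_pages_sgt(). A sketch of the
replacement pattern follows; the mydrv_map_bo() wrapper is hypothetical, and
the error handling assumes the usual ERR_PTR convention for this helper.

/* Sketch: drm_gem_shmem_get_pages_sgt() reports failure via ERR_PTR(),
 * so errors are unwrapped with PTR_ERR() rather than a NULL check.
 */
#include <linux/err.h>
#include <drm/drm_gem_shmem_helper.h>

static int mydrv_map_bo(struct drm_gem_shmem_object *shmem)
{
	struct sg_table *sgt;

	sgt = drm_gem_shmem_get_pages_sgt(shmem);
	if (IS_ERR(sgt))
		return PTR_ERR(sgt);

	/* ...program the returned sg_table into the device... */
	return 0;
}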
From patchwork Mon Apr 11 21:59:37 2022
From: Dmitry Osipenko
Subject: [PATCH v3 15/15] drm/shmem-helper: Remove drm_gem_shmem_purge()
Date: Tue, 12 Apr 2022 00:59:37 +0300
Message-Id: <20220411215937.281655-16-dmitry.osipenko@collabora.com>

drm_gem_shmem_purge() was added back in 2019 and has never had a user. GEM
purging is now managed by the generic shrinker, and only the "locked" variant
of the function is needed. Remove the obsolete function.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 11 -----------
 include/drm/drm_gem_shmem_helper.h     |  2 --
 2 files changed, 13 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 8e31056575e3..89c64053247c 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -992,17 +992,6 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
 
-bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
-{
-	if (!dma_resv_trylock(shmem->base.resv))
-		return false;
-	drm_gem_shmem_purge_locked(shmem);
-	dma_resv_unlock(shmem->base.resv);
-
-	return true;
-}
-EXPORT_SYMBOL(drm_gem_shmem_purge);
-
 /**
  * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
  * @file: DRM file structure to create the dumb buffer for
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 7284335ea2c9..197d08452974 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -179,8 +179,6 @@ int drm_gem_shmem_swap_in_pages_locked(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_swap_in_locked(struct drm_gem_shmem_object *shmem);
 
 void drm_gem_shmem_evict_locked(struct drm_gem_shmem_object *shmem);
-
-bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
 
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
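Since the removed wrapper was the only in-tree illustration of how the locked
variant is meant to be called, a small sketch of the remaining convention may
help. The mydrv_try_purge() helper is illustrative and simply reproduces the
deleted body: the caller takes the BO's reservation lock around
drm_gem_shmem_purge_locked().

/* Sketch of the remaining calling convention: hold the BO's dma-resv
 * lock across drm_gem_shmem_purge_locked().
 */
#include <linux/dma-resv.h>
#include <drm/drm_gem_shmem_helper.h>

static bool mydrv_try_purge(struct drm_gem_shmem_object *shmem)
{
	if (!dma_resv_trylock(shmem->base.resv))
		return false;	/* contended; let the caller retry later */

	drm_gem_shmem_purge_locked(shmem);
	dma_resv_unlock(shmem->base.resv);

	return true;
}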