From patchwork Tue Dec 1 20:18:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zack Rusin X-Patchwork-Id: 11943727 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-21.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,MENTIONS_GIT_HOSTING,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 23168C64E7B for ; Tue, 1 Dec 2020 20:33:34 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 89E5D2151B for ; Tue, 1 Dec 2020 20:33:33 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 89E5D2151B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=vmware.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id B4B766E02B; Tue, 1 Dec 2020 20:33:32 +0000 (UTC) X-Greylist: delayed 901 seconds by postgrey-1.36 at gabe; Tue, 01 Dec 2020 20:33:31 UTC Received: from EX13-EDG-OU-002.vmware.com (ex13-edg-ou-002.vmware.com [208.91.0.190]) by gabe.freedesktop.org (Postfix) with ESMTPS id 7492E6E91A for ; Tue, 1 Dec 2020 20:33:31 +0000 (UTC) Received: from sc9-mailhost3.vmware.com (10.113.161.73) by EX13-EDG-OU-002.vmware.com (10.113.208.156) with Microsoft SMTP Server id 15.0.1156.6; Tue, 1 Dec 2020 12:18:28 -0800 Received: from vertex.vmware.com (unknown [10.21.244.133]) by sc9-mailhost3.vmware.com (Postfix) with ESMTP id E3CAA20ACA; Tue, 1 Dec 2020 12:18:28 -0800 (PST) From: Zack Rusin To: Subject: [PATCH 1/8] drm/vmwgfx: add Zack Rusin as maintainer Date: Tue, 1 Dec 2020 15:18:21 -0500 Message-ID: <20201201201828.808888-1-zackr@vmware.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 Received-SPF: None (EX13-EDG-OU-002.vmware.com: zackr@vmware.com does not designate permitted sender hosts) X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Roland Scheidegger Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" From: Roland Scheidegger Reviewed-by: Zack Rusin Signed-off-by: Roland Scheidegger Signed-off-by: Zack Rusin --- MAINTAINERS | 1 + 1 file changed, 1 insertion(+) diff --git a/MAINTAINERS b/MAINTAINERS index 2daa6ee673f7..55e3e2ef5f18 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5712,6 +5712,7 @@ F: drivers/gpu/drm/vboxvideo/ DRM DRIVER FOR VMWARE VIRTUAL GPU M: "VMware Graphics" M: Roland Scheidegger +M: Zack Rusin L: dri-devel@lists.freedesktop.org S: Supported T: git git://people.freedesktop.org/~sroland/linux From patchwork Tue Dec 1 20:18:22 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zack Rusin X-Patchwork-Id: 11943735 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: 
X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1E6BDC71155 for ; Tue, 1 Dec 2020 20:33:44 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id A6C412151B for ; Tue, 1 Dec 2020 20:33:43 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A6C412151B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=vmware.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 2731F6E8E6; Tue, 1 Dec 2020 20:33:39 +0000 (UTC) Received: from EX13-EDG-OU-001.vmware.com (ex13-edg-ou-001.vmware.com [208.91.0.189]) by gabe.freedesktop.org (Postfix) with ESMTPS id 52D356E8E6 for ; Tue, 1 Dec 2020 20:33:37 +0000 (UTC) Received: from sc9-mailhost3.vmware.com (10.113.161.73) by EX13-EDG-OU-001.vmware.com (10.113.208.155) with Microsoft SMTP Server id 15.0.1156.6; Tue, 1 Dec 2020 12:18:27 -0800 Received: from vertex.vmware.com (unknown [10.21.244.133]) by sc9-mailhost3.vmware.com (Postfix) with ESMTP id 8065520ABB; Tue, 1 Dec 2020 12:18:29 -0800 (PST) From: Zack Rusin To: Subject: [PATCH 2/8] drm/vmwgfx: Remove stealth mode Date: Tue, 1 Dec 2020 15:18:22 -0500 Message-ID: <20201201201828.808888-2-zackr@vmware.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20201201201828.808888-1-zackr@vmware.com> References: <20201201201828.808888-1-zackr@vmware.com> MIME-Version: 1.0 Received-SPF: None (EX13-EDG-OU-001.vmware.com: zackr@vmware.com does not designate permitted sender hosts) X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Martin Krastev , Roland Scheidegger Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Before drm got helpers for removing conflicting pci framebuffer devices we implemented something known as "stealth" mode which allowed vmwgfx to run even if it couldn't reserve pci resources. We can just switch to regular drm helpers instead of keeping the stealth mode alive as it makes our code a lot cleaner. Signed-off-by: Zack Rusin Reviewed-by: Martin Krastev Reviewed-by: Roland Scheidegger --- drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 31 ++++++++++------------------- drivers/gpu/drm/vmwgfx/vmwgfx_drv.h | 5 ----- 2 files changed, 10 insertions(+), 26 deletions(-) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c index 31e3e5c9f362..0a3a2b6e4bf9 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c @@ -32,6 +32,7 @@ #include #include +#include #include #include #include @@ -840,19 +841,9 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset) dev->dev_private = dev_priv; ret = pci_request_regions(dev->pdev, "vmwgfx probe"); - dev_priv->stealth = (ret != 0); - if (dev_priv->stealth) { - /** - * Request at least the mmio PCI resource. 
- */ - - DRM_INFO("It appears like vesafb is loaded. " - "Ignore above error if any.\n"); - ret = pci_request_region(dev->pdev, 2, "vmwgfx stealth probe"); - if (unlikely(ret != 0)) { - DRM_ERROR("Failed reserving the SVGA MMIO resource.\n"); - goto out_no_device; - } + if (ret) { + DRM_ERROR("Failed reserving PCI regions.\n"); + goto out_no_device; } if (dev_priv->capabilities & SVGA_CAP_IRQMASK) { @@ -1000,10 +991,7 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset) if (dev_priv->capabilities & SVGA_CAP_IRQMASK) vmw_irq_uninstall(dev_priv->dev); out_no_irq: - if (dev_priv->stealth) - pci_release_region(dev->pdev, 2); - else - pci_release_regions(dev->pdev); + pci_release_regions(dev->pdev); out_no_device: ttm_object_device_release(&dev_priv->tdev); out_err4: @@ -1051,10 +1039,7 @@ static void vmw_driver_unload(struct drm_device *dev) vmw_fence_manager_takedown(dev_priv->fman); if (dev_priv->capabilities & SVGA_CAP_IRQMASK) vmw_irq_uninstall(dev_priv->dev); - if (dev_priv->stealth) - pci_release_region(dev->pdev, 2); - else - pci_release_regions(dev->pdev); + pci_release_regions(dev->pdev); ttm_object_device_release(&dev_priv->tdev); memunmap(dev_priv->mmio_virt); @@ -1504,6 +1489,10 @@ static int vmw_probe(struct pci_dev *pdev, const struct pci_device_id *ent) struct drm_device *dev; int ret; + ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "svgadrmfb"); + if (ret) + return ret; + ret = pci_enable_device(pdev); if (ret) return ret; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h index 1523b51a7284..db3dc9f40dcb 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h @@ -593,11 +593,6 @@ struct vmw_private { struct mutex cmdbuf_mutex; struct mutex binding_mutex; - /** - * Operating mode. 
- */ - - bool stealth; bool enable_fb; spinlock_t svga_lock; From patchwork Tue Dec 1 20:18:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zack Rusin X-Patchwork-Id: 11943733 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EB3B3C64E7B for ; Tue, 1 Dec 2020 20:33:41 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 7876F2151B for ; Tue, 1 Dec 2020 20:33:41 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7876F2151B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=vmware.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C22D56E8D9; Tue, 1 Dec 2020 20:33:38 +0000 (UTC) Received: from EX13-EDG-OU-001.vmware.com (ex13-edg-ou-001.vmware.com [208.91.0.189]) by gabe.freedesktop.org (Postfix) with ESMTPS id 2CD7A6E8D6 for ; Tue, 1 Dec 2020 20:33:37 +0000 (UTC) Received: from sc9-mailhost3.vmware.com (10.113.161.73) by EX13-EDG-OU-001.vmware.com (10.113.208.155) with Microsoft SMTP Server id 15.0.1156.6; Tue, 1 Dec 2020 12:18:27 -0800 Received: from vertex.vmware.com (unknown [10.21.244.133]) by sc9-mailhost3.vmware.com (Postfix) with ESMTP id 2B22A20AB9; Tue, 1 Dec 2020 12:18:30 -0800 (PST) From: Zack Rusin To: Subject: [PATCH 3/8] drm/vmwgfx: Switch to a managed drm device Date: Tue, 1 Dec 2020 15:18:23 -0500 Message-ID: <20201201201828.808888-3-zackr@vmware.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20201201201828.808888-1-zackr@vmware.com> References: <20201201201828.808888-1-zackr@vmware.com> MIME-Version: 1.0 Received-SPF: None (EX13-EDG-OU-001.vmware.com: zackr@vmware.com does not designate permitted sender hosts) X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Martin Krastev , Roland Scheidegger Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" To clean up some of the error handling and prepare for some other work, let's switch to a managed drm device. It will let us get a better handle on some of the error paths.
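For readers less familiar with the managed API, here is a minimal sketch of the allocation pattern being adopted (illustrative only; "example_private", "example_probe" and "example_drm_driver" are placeholder names, not vmwgfx symbols):

#include <drm/drm_drv.h>
#include <linux/pci.h>

/*
 * Illustrative sketch: the drm_device is embedded in the driver's private
 * structure and both are freed automatically when the last reference is
 * dropped, so probe() no longer needs manual drm_dev_put() /
 * pci_disable_device() unwinding.
 */
static const struct drm_driver example_drm_driver = {
        .name = "example",
};

struct example_private {
        struct drm_device drm;  /* must be the member named in devm_drm_dev_alloc() */
        /* driver-specific state follows */
};

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
        struct example_private *priv;
        int ret;

        ret = pcim_enable_device(pdev);  /* managed enable, undone on unbind */
        if (ret)
                return ret;

        priv = devm_drm_dev_alloc(&pdev->dev, &example_drm_driver,
                                  struct example_private, drm);
        if (IS_ERR(priv))
                return PTR_ERR(priv);

        pci_set_drvdata(pdev, &priv->drm);

        return drm_dev_register(&priv->drm, 0);
}

With that shape most error paths in probe() become plain returns, which is roughly what the vmw_probe() changes below do.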
Signed-off-by: Zack Rusin Reviewed-by: Martin Krastev Reviewed-by: Roland Scheidegger Reported-by: kernel test robot --- drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c | 8 +-- drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 83 +++++++++------------- drivers/gpu/drm/vmwgfx/vmwgfx_drv.h | 2 +- drivers/gpu/drm/vmwgfx/vmwgfx_fb.c | 10 +-- drivers/gpu/drm/vmwgfx/vmwgfx_fence.c | 4 +- drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 36 +++++----- drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c | 4 +- drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c | 4 +- drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c | 4 +- drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c | 6 +- 10 files changed, 72 insertions(+), 89 deletions(-) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c index 3b41cf63110a..3cbc8f6f083f 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c @@ -1230,7 +1230,7 @@ int vmw_cmdbuf_set_pool_size(struct vmw_cmdbuf_man *man, /* First, try to allocate a huge chunk of DMA memory */ size = PAGE_ALIGN(size); - man->map = dma_alloc_coherent(&dev_priv->dev->pdev->dev, size, + man->map = dma_alloc_coherent(&dev_priv->drm.pdev->dev, size, &man->handle, GFP_KERNEL); if (man->map) { man->using_mob = false; @@ -1313,7 +1313,7 @@ struct vmw_cmdbuf_man *vmw_cmdbuf_man_create(struct vmw_private *dev_priv) man->num_contexts = (dev_priv->capabilities & SVGA_CAP_HP_CMD_QUEUE) ? 2 : 1; man->headers = dma_pool_create("vmwgfx cmdbuf", - &dev_priv->dev->pdev->dev, + &dev_priv->drm.pdev->dev, sizeof(SVGACBHeader), 64, PAGE_SIZE); if (!man->headers) { @@ -1322,7 +1322,7 @@ struct vmw_cmdbuf_man *vmw_cmdbuf_man_create(struct vmw_private *dev_priv) } man->dheaders = dma_pool_create("vmwgfx inline cmdbuf", - &dev_priv->dev->pdev->dev, + &dev_priv->drm.pdev->dev, sizeof(struct vmw_cmdbuf_dheader), 64, PAGE_SIZE); if (!man->dheaders) { @@ -1387,7 +1387,7 @@ void vmw_cmdbuf_remove_pool(struct vmw_cmdbuf_man *man) ttm_bo_put(man->cmd_space); man->cmd_space = NULL; } else { - dma_free_coherent(&man->dev_priv->dev->pdev->dev, + dma_free_coherent(&man->dev_priv->drm.pdev->dev, man->size, man->map, man->handle); } } diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c index 0a3a2b6e4bf9..a2617422a612 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c @@ -608,7 +608,7 @@ static int vmw_dma_select_mode(struct vmw_private *dev_priv) */ static int vmw_dma_masks(struct vmw_private *dev_priv) { - struct drm_device *dev = dev_priv->dev; + struct drm_device *dev = &dev_priv->drm; int ret = 0; ret = dma_set_mask_and_coherent(dev->dev, DMA_BIT_MASK(64)); @@ -643,24 +643,16 @@ static void vmw_vram_manager_fini(struct vmw_private *dev_priv) #endif } -static int vmw_driver_load(struct drm_device *dev, unsigned long chipset) +static int vmw_driver_load(struct vmw_private *dev_priv, unsigned long chipset) { - struct vmw_private *dev_priv; int ret; uint32_t svga_id; enum vmw_res_type i; bool refuse_dma = false; char host_log[100] = {0}; - dev_priv = kzalloc(sizeof(*dev_priv), GFP_KERNEL); - if (unlikely(!dev_priv)) { - DRM_ERROR("Failed allocating a device private struct.\n"); - return -ENOMEM; - } + pci_set_master(dev_priv->drm.pdev); - pci_set_master(dev->pdev); - - dev_priv->dev = dev; dev_priv->vmw_chipset = chipset; dev_priv->last_read_seqno = (uint32_t) -100; mutex_init(&dev_priv->cmdbuf_mutex); @@ -687,9 +679,9 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset) dev_priv->used_memory_size = 0; - dev_priv->io_start = 
pci_resource_start(dev->pdev, 0); - dev_priv->vram_start = pci_resource_start(dev->pdev, 1); - dev_priv->mmio_start = pci_resource_start(dev->pdev, 2); + dev_priv->io_start = pci_resource_start(dev_priv->drm.pdev, 0); + dev_priv->vram_start = pci_resource_start(dev_priv->drm.pdev, 1); + dev_priv->mmio_start = pci_resource_start(dev_priv->drm.pdev, 2); dev_priv->assume_16bpp = !!vmw_assume_16bpp; @@ -793,8 +785,8 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset) if (unlikely(ret != 0)) goto out_err0; - dma_set_max_seg_size(dev->dev, min_t(unsigned int, U32_MAX & PAGE_MASK, - SCATTERLIST_MAX_SEGMENT)); + dma_set_max_seg_size(dev_priv->drm->dev, min_t(unsigned int, U32_MAX & PAGE_MASK, + SCATTERLIST_MAX_SEGMENT)); if (dev_priv->capabilities & SVGA_CAP_GMR2) { DRM_INFO("Max GMR ids is %u\n", @@ -838,16 +830,16 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset) goto out_err4; } - dev->dev_private = dev_priv; + dev_priv->drm.dev_private = dev_priv; - ret = pci_request_regions(dev->pdev, "vmwgfx probe"); + ret = pci_request_regions(dev_priv->drm.pdev, "vmwgfx probe"); if (ret) { DRM_ERROR("Failed reserving PCI regions.\n"); goto out_no_device; } if (dev_priv->capabilities & SVGA_CAP_IRQMASK) { - ret = vmw_irq_install(dev, dev->pdev->irq); + ret = vmw_irq_install(&dev_priv->drm, dev_priv->drm.pdev->irq); if (ret != 0) { DRM_ERROR("Failed installing irq: %d\n", ret); goto out_no_irq; @@ -944,7 +936,7 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset) if (ret) goto out_no_fifo; - DRM_INFO("Atomic: %s\n", (dev->driver->driver_features & DRIVER_ATOMIC) + DRM_INFO("Atomic: %s\n", (dev_priv->drm.driver->driver_features & DRIVER_ATOMIC) ? "yes." : "no."); if (dev_priv->sm_type == VMW_SM_5) DRM_INFO("SM5 support available.\n"); @@ -989,9 +981,9 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset) vmw_fence_manager_takedown(dev_priv->fman); out_no_fman: if (dev_priv->capabilities & SVGA_CAP_IRQMASK) - vmw_irq_uninstall(dev_priv->dev); + vmw_irq_uninstall(&dev_priv->drm); out_no_irq: - pci_release_regions(dev->pdev); + pci_release_regions(dev_priv->drm.pdev); out_no_device: ttm_object_device_release(&dev_priv->tdev); out_err4: @@ -1038,7 +1030,7 @@ static void vmw_driver_unload(struct drm_device *dev) vmw_release_device_late(dev_priv); vmw_fence_manager_takedown(dev_priv->fman); if (dev_priv->capabilities & SVGA_CAP_IRQMASK) - vmw_irq_uninstall(dev_priv->dev); + vmw_irq_uninstall(&dev_priv->drm); pci_release_regions(dev->pdev); ttm_object_device_release(&dev_priv->tdev); @@ -1236,7 +1228,7 @@ void vmw_svga_disable(struct vmw_private *dev_priv) * to be inconsistent with the device, causing modesetting problems. * */ - vmw_kms_lost_device(dev_priv->dev); + vmw_kms_lost_device(&dev_priv->drm); ttm_write_lock(&dev_priv->reservation_sem, false); spin_lock(&dev_priv->svga_lock); if (ttm_resource_manager_used(man)) { @@ -1258,8 +1250,6 @@ static void vmw_remove(struct pci_dev *pdev) drm_dev_unregister(dev); vmw_driver_unload(dev); - drm_dev_put(dev); - pci_disable_device(pdev); } static unsigned long @@ -1356,7 +1346,7 @@ static int vmw_pm_freeze(struct device *kdev) * No user-space processes should be running now. 
*/ ttm_suspend_unlock(&dev_priv->reservation_sem); - ret = vmw_kms_suspend(dev_priv->dev); + ret = vmw_kms_suspend(&dev_priv->drm); if (ret) { ttm_suspend_lock(&dev_priv->reservation_sem); DRM_ERROR("Failed to freeze modesetting.\n"); @@ -1417,7 +1407,7 @@ static int vmw_pm_restore(struct device *kdev) dev_priv->suspend_locked = false; ttm_suspend_unlock(&dev_priv->reservation_sem); if (dev_priv->suspend_state) - vmw_kms_resume(dev_priv->dev); + vmw_kms_resume(&dev_priv->drm); if (dev_priv->enable_fb) vmw_fb_on(dev_priv); @@ -1486,43 +1476,36 @@ static struct pci_driver vmw_pci_driver = { static int vmw_probe(struct pci_dev *pdev, const struct pci_device_id *ent) { - struct drm_device *dev; + struct vmw_private *vmw; int ret; ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "svgadrmfb"); if (ret) return ret; - ret = pci_enable_device(pdev); + ret = pcim_enable_device(pdev); if (ret) return ret; - dev = drm_dev_alloc(&driver, &pdev->dev); - if (IS_ERR(dev)) { - ret = PTR_ERR(dev); - goto err_pci_disable_device; - } + vmw = devm_drm_dev_alloc(&pdev->dev, &driver, + struct vmw_private, drm); + if (IS_ERR(vmw)) + return PTR_ERR(vmw); - dev->pdev = pdev; - pci_set_drvdata(pdev, dev); + vmw->drm.pdev = pdev; + pci_set_drvdata(pdev, &vmw->drm); - ret = vmw_driver_load(dev, ent->driver_data); + ret = vmw_driver_load(vmw, ent->device); if (ret) - goto err_drm_dev_put; + return ret; - ret = drm_dev_register(dev, ent->driver_data); - if (ret) - goto err_vmw_driver_unload; + ret = drm_dev_register(&vmw->drm, 0); + if (ret) { + vmw_driver_unload(&vmw->drm); + return ret; + } return 0; - -err_vmw_driver_unload: - vmw_driver_unload(dev); -err_drm_dev_put: - drm_dev_put(dev); -err_pci_disable_device: - pci_disable_device(pdev); - return ret; } static int __init vmwgfx_init(void) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h index db3dc9f40dcb..b9669867e177 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h @@ -492,11 +492,11 @@ enum vmw_sm_type { }; struct vmw_private { + struct drm_device drm; struct ttm_bo_device bdev; struct vmw_fifo_state fifo; - struct drm_device *dev; struct drm_vma_offset_manager vma_manager; unsigned long vmw_chipset; unsigned int io_start; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c index c59806d40e15..770b906b18c3 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c @@ -481,7 +481,7 @@ static int vmw_fb_kms_detach(struct vmw_fb_par *par, DRM_ERROR("Could not unset a mode.\n"); return ret; } - drm_mode_destroy(par->vmw_priv->dev, par->set_mode); + drm_mode_destroy(&par->vmw_priv->drm, par->set_mode); par->set_mode = NULL; } @@ -567,7 +567,7 @@ static int vmw_fb_set_par(struct fb_info *info) struct drm_display_mode *mode; int ret; - mode = drm_mode_duplicate(vmw_priv->dev, &new_mode); + mode = drm_mode_duplicate(&vmw_priv->drm, &new_mode); if (!mode) { DRM_ERROR("Could not create new fb mode.\n"); return -ENOMEM; @@ -581,7 +581,7 @@ static int vmw_fb_set_par(struct fb_info *info) mode->hdisplay * DIV_ROUND_UP(var->bits_per_pixel, 8), mode->vdisplay)) { - drm_mode_destroy(vmw_priv->dev, mode); + drm_mode_destroy(&vmw_priv->drm, mode); return -EINVAL; } @@ -615,7 +615,7 @@ static int vmw_fb_set_par(struct fb_info *info) out_unlock: if (par->set_mode) - drm_mode_destroy(vmw_priv->dev, par->set_mode); + drm_mode_destroy(&vmw_priv->drm, par->set_mode); par->set_mode = mode; mutex_unlock(&par->bo_mutex); @@ -638,7 
+638,7 @@ static const struct fb_ops vmw_fb_ops = { int vmw_fb_init(struct vmw_private *vmw_priv) { - struct device *device = &vmw_priv->dev->pdev->dev; + struct device *device = &vmw_priv->drm.pdev->dev; struct vmw_fb_par *par; struct fb_info *info; unsigned fb_width, fb_height; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c index 0f8d29397157..66fa81e20990 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c @@ -1033,7 +1033,7 @@ int vmw_event_fence_action_queue(struct drm_file *file_priv, eaction->action.type = VMW_ACTION_EVENT; eaction->fence = vmw_fence_obj_reference(fence); - eaction->dev = fman->dev_priv->dev; + eaction->dev = &fman->dev_priv->drm; eaction->tv_sec = tv_sec; eaction->tv_usec = tv_usec; @@ -1055,7 +1055,7 @@ static int vmw_event_fence_action_create(struct drm_file *file_priv, { struct vmw_event_fence_pending *event; struct vmw_fence_manager *fman = fman_from_fence(fence); - struct drm_device *dev = fman->dev_priv->dev; + struct drm_device *dev = &fman->dev_priv->drm; int ret; event = kzalloc(sizeof(*event), GFP_KERNEL); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c index 312ed0881a99..9c285c799fc8 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c @@ -236,7 +236,7 @@ void vmw_kms_cursor_snoop(struct vmw_surface *srf, */ void vmw_kms_legacy_hotspot_clear(struct vmw_private *dev_priv) { - struct drm_device *dev = dev_priv->dev; + struct drm_device *dev = &dev_priv->drm; struct vmw_display_unit *du; struct drm_crtc *crtc; @@ -252,7 +252,7 @@ void vmw_kms_legacy_hotspot_clear(struct vmw_private *dev_priv) void vmw_kms_cursor_post_execbuf(struct vmw_private *dev_priv) { - struct drm_device *dev = dev_priv->dev; + struct drm_device *dev = &dev_priv->drm; struct vmw_display_unit *du; struct drm_crtc *crtc; @@ -889,7 +889,7 @@ static int vmw_kms_new_framebuffer_surface(struct vmw_private *dev_priv, bool is_bo_proxy) { - struct drm_device *dev = dev_priv->dev; + struct drm_device *dev = &dev_priv->drm; struct vmw_framebuffer_surface *vfbs; enum SVGA3dSurfaceFormat format; int ret; @@ -1001,11 +1001,11 @@ static int vmw_framebuffer_bo_dirty(struct drm_framebuffer *framebuffer, struct drm_clip_rect norect; int ret, increment = 1; - drm_modeset_lock_all(dev_priv->dev); + drm_modeset_lock_all(&dev_priv->drm); ret = ttm_read_lock(&dev_priv->reservation_sem, true); if (unlikely(ret != 0)) { - drm_modeset_unlock_all(dev_priv->dev); + drm_modeset_unlock_all(&dev_priv->drm); return ret; } @@ -1034,7 +1034,7 @@ static int vmw_framebuffer_bo_dirty(struct drm_framebuffer *framebuffer, vmw_fifo_flush(dev_priv, false); ttm_read_unlock(&dev_priv->reservation_sem); - drm_modeset_unlock_all(dev_priv->dev); + drm_modeset_unlock_all(&dev_priv->drm); return ret; } @@ -1211,7 +1211,7 @@ static int vmw_kms_new_framebuffer_bo(struct vmw_private *dev_priv, *mode_cmd) { - struct drm_device *dev = dev_priv->dev; + struct drm_device *dev = &dev_priv->drm; struct vmw_framebuffer_bo *vfbd; unsigned int requested_size; struct drm_format_name_buf format_name; @@ -1317,7 +1317,7 @@ vmw_kms_new_framebuffer(struct vmw_private *dev_priv, bo && only_2d && mode_cmd->width > 64 && /* Don't create a proxy for cursor */ dev_priv->active_display_unit == vmw_du_screen_target) { - ret = vmw_create_bo_proxy(dev_priv->dev, mode_cmd, + ret = vmw_create_bo_proxy(&dev_priv->drm, mode_cmd, bo, &surface); if (ret) return ERR_PTR(ret); @@ -1778,7 +1778,7 @@ 
vmw_kms_create_hotplug_mode_update_property(struct vmw_private *dev_priv) return; dev_priv->hotplug_mode_update_property = - drm_property_create_range(dev_priv->dev, + drm_property_create_range(&dev_priv->drm, DRM_MODE_PROP_IMMUTABLE, "hotplug_mode_update", 0, 1); @@ -1789,7 +1789,7 @@ vmw_kms_create_hotplug_mode_update_property(struct vmw_private *dev_priv) int vmw_kms_init(struct vmw_private *dev_priv) { - struct drm_device *dev = dev_priv->dev; + struct drm_device *dev = &dev_priv->drm; int ret; drm_mode_config_init(dev); @@ -1821,7 +1821,7 @@ int vmw_kms_close(struct vmw_private *dev_priv) * but since it destroys encoders and our destructor calls * drm_encoder_cleanup which takes the lock we deadlock. */ - drm_mode_config_cleanup(dev_priv->dev); + drm_mode_config_cleanup(&dev_priv->drm); if (dev_priv->active_display_unit == vmw_du_legacy) ret = vmw_kms_ldu_close_display(dev_priv); @@ -1932,7 +1932,7 @@ void vmw_disable_vblank(struct drm_crtc *crtc) static int vmw_du_update_layout(struct vmw_private *dev_priv, unsigned int num_rects, struct drm_rect *rects) { - struct drm_device *dev = dev_priv->dev; + struct drm_device *dev = &dev_priv->drm; struct vmw_display_unit *du; struct drm_connector *con; struct drm_connector_list_iter conn_iter; @@ -2364,7 +2364,7 @@ int vmw_kms_helper_dirty(struct vmw_private *dev_priv, if (dirty->crtc) { units[num_units++] = vmw_crtc_to_du(dirty->crtc); } else { - list_for_each_entry(crtc, &dev_priv->dev->mode_config.crtc_list, + list_for_each_entry(crtc, &dev_priv->drm.mode_config.crtc_list, head) { struct drm_plane *plane = crtc->primary; @@ -2566,8 +2566,8 @@ int vmw_kms_fbdev_init_data(struct vmw_private *dev_priv, int i = 0; int ret = 0; - mutex_lock(&dev_priv->dev->mode_config.mutex); - list_for_each_entry(con, &dev_priv->dev->mode_config.connector_list, + mutex_lock(&dev_priv->drm.mode_config.mutex); + list_for_each_entry(con, &dev_priv->drm.mode_config.connector_list, head) { if (i == unit) break; @@ -2575,7 +2575,7 @@ int vmw_kms_fbdev_init_data(struct vmw_private *dev_priv, ++i; } - if (&con->head == &dev_priv->dev->mode_config.connector_list) { + if (&con->head == &dev_priv->drm.mode_config.connector_list) { DRM_ERROR("Could not find initial display unit.\n"); ret = -EINVAL; goto out_unlock; @@ -2609,7 +2609,7 @@ int vmw_kms_fbdev_init_data(struct vmw_private *dev_priv, } out_unlock: - mutex_unlock(&dev_priv->dev->mode_config.mutex); + mutex_unlock(&dev_priv->drm.mode_config.mutex); return ret; } @@ -2629,7 +2629,7 @@ vmw_kms_create_implicit_placement_property(struct vmw_private *dev_priv) return; dev_priv->implicit_placement_property = - drm_property_create_range(dev_priv->dev, + drm_property_create_range(&dev_priv->drm, DRM_MODE_PROP_IMMUTABLE, "implicit_placement", 0, 1); } diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c index c4017c7a24db..45b8fee92b2f 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c @@ -355,7 +355,7 @@ static const struct drm_crtc_helper_funcs vmw_ldu_crtc_helper_funcs = { static int vmw_ldu_init(struct vmw_private *dev_priv, unsigned unit) { struct vmw_legacy_display_unit *ldu; - struct drm_device *dev = dev_priv->dev; + struct drm_device *dev = &dev_priv->drm; struct drm_connector *connector; struct drm_encoder *encoder; struct drm_plane *primary, *cursor; @@ -479,7 +479,7 @@ static int vmw_ldu_init(struct vmw_private *dev_priv, unsigned unit) int vmw_kms_ldu_init_display(struct vmw_private *dev_priv) { - struct drm_device *dev = dev_priv->dev; 
+ struct drm_device *dev = &dev_priv->drm; int i, ret; if (dev_priv->ldu_priv) { diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c index 4bf0f5ec4fc2..e60d0c570296 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c @@ -829,7 +829,7 @@ static const struct drm_crtc_helper_funcs vmw_sou_crtc_helper_funcs = { static int vmw_sou_init(struct vmw_private *dev_priv, unsigned unit) { struct vmw_screen_object_unit *sou; - struct drm_device *dev = dev_priv->dev; + struct drm_device *dev = &dev_priv->drm; struct drm_connector *connector; struct drm_encoder *encoder; struct drm_plane *primary, *cursor; @@ -946,7 +946,7 @@ static int vmw_sou_init(struct vmw_private *dev_priv, unsigned unit) int vmw_kms_sou_init_display(struct vmw_private *dev_priv) { - struct drm_device *dev = dev_priv->dev; + struct drm_device *dev = &dev_priv->drm; int i, ret; if (!(dev_priv->capabilities & SVGA_CAP_SCREEN_OBJECT_2)) { diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c index cf3aafd00837..83b06f7bd63d 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c @@ -1713,7 +1713,7 @@ static const struct drm_crtc_helper_funcs vmw_stdu_crtc_helper_funcs = { static int vmw_stdu_init(struct vmw_private *dev_priv, unsigned unit) { struct vmw_screen_target_display_unit *stdu; - struct drm_device *dev = dev_priv->dev; + struct drm_device *dev = &dev_priv->drm; struct drm_connector *connector; struct drm_encoder *encoder; struct drm_plane *primary, *cursor; @@ -1861,7 +1861,7 @@ static void vmw_stdu_destroy(struct vmw_screen_target_display_unit *stdu) */ int vmw_kms_stdu_init_display(struct vmw_private *dev_priv) { - struct drm_device *dev = dev_priv->dev; + struct drm_device *dev = &dev_priv->drm; int i, ret; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c index 73116ec70ba5..f5d24c34d122 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c @@ -382,7 +382,7 @@ void vmw_piter_start(struct vmw_piter *viter, const struct vmw_sg_table *vsgt, */ static void vmw_ttm_unmap_from_dma(struct vmw_ttm_tt *vmw_tt) { - struct device *dev = vmw_tt->dev_priv->dev->dev; + struct device *dev = vmw_tt->dev_priv->drm.dev; dma_unmap_sgtable(dev, &vmw_tt->sgt, DMA_BIDIRECTIONAL, 0); vmw_tt->sgt.nents = vmw_tt->sgt.orig_nents; @@ -403,7 +403,7 @@ static void vmw_ttm_unmap_from_dma(struct vmw_ttm_tt *vmw_tt) */ static int vmw_ttm_map_for_dma(struct vmw_ttm_tt *vmw_tt) { - struct device *dev = vmw_tt->dev_priv->dev->dev; + struct device *dev = vmw_tt->dev_priv->drm.dev; return dma_map_sgtable(dev, &vmw_tt->sgt, DMA_BIDIRECTIONAL, 0); } @@ -458,7 +458,7 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt) sg = __sg_alloc_table_from_pages(&vmw_tt->sgt, vsgt->pages, vsgt->num_pages, 0, (unsigned long) vsgt->num_pages << PAGE_SHIFT, - dma_get_max_seg_size(dev_priv->dev->dev), + dma_get_max_seg_size(dev_priv->drm.dev), NULL, 0, GFP_KERNEL); if (IS_ERR(sg)) { ret = PTR_ERR(sg); From patchwork Tue Dec 1 20:18:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zack Rusin X-Patchwork-Id: 11943739 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, 
HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9D4B6C71155 for ; Tue, 1 Dec 2020 20:33:47 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 3F6022151B for ; Tue, 1 Dec 2020 20:33:47 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3F6022151B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=vmware.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id B25576E8F1; Tue, 1 Dec 2020 20:33:39 +0000 (UTC) Received: from EX13-EDG-OU-001.vmware.com (ex13-edg-ou-001.vmware.com [208.91.0.189]) by gabe.freedesktop.org (Postfix) with ESMTPS id 05F8E6E8E6 for ; Tue, 1 Dec 2020 20:33:37 +0000 (UTC) Received: from sc9-mailhost3.vmware.com (10.113.161.73) by EX13-EDG-OU-001.vmware.com (10.113.208.155) with Microsoft SMTP Server id 15.0.1156.6; Tue, 1 Dec 2020 12:18:28 -0800 Received: from vertex.vmware.com (unknown [10.21.244.133]) by sc9-mailhost3.vmware.com (Postfix) with ESMTP id E57D820AB9; Tue, 1 Dec 2020 12:18:30 -0800 (PST) From: Zack Rusin To: Subject: [PATCH 4/8] drm/vmwgfx: Cleanup fifo mmio handling Date: Tue, 1 Dec 2020 15:18:24 -0500 Message-ID: <20201201201828.808888-4-zackr@vmware.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20201201201828.808888-1-zackr@vmware.com> References: <20201201201828.808888-1-zackr@vmware.com> MIME-Version: 1.0 Received-SPF: None (EX13-EDG-OU-001.vmware.com: zackr@vmware.com does not designate permitted sender hosts) X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Martin Krastev , Roland Scheidegger Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Going forward the svga device might reuse mmio for general register accesses; in order to prepare for that we need to clean up our naming and handling of fifo-specific mmio reads and writes. As part of this work, let's switch to a managed mapping of the fifo mmio to make the error handling cleaner.
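Concretely, the new accessors index the memremap()'d fifo by register offset instead of taking a raw pointer. A minimal sketch of the style (illustrative only; the "example_" names are placeholders, the real helpers are vmw_fifo_mem_read()/vmw_fifo_mem_write() in the diff below):

#include <linux/device.h>
#include <linux/io.h>
#include <linux/types.h>

/*
 * Illustrative sketch: the FIFO lives in ordinary write-back memory mapped
 * with devm_memremap(), not in ioremap()'d I/O space, so READ_ONCE()/
 * WRITE_ONCE() are sufficient and no byteswapping is performed.
 */
static inline u32 example_fifo_mem_read(const u32 *fifo_mem, u32 fifo_reg)
{
        return READ_ONCE(*(fifo_mem + fifo_reg));
}

static inline void example_fifo_mem_write(u32 *fifo_mem, u32 fifo_reg, u32 value)
{
        WRITE_ONCE(*(fifo_mem + fifo_reg), value);
}

/* Managed mapping of the FIFO BAR; callers should IS_ERR()-check the result. */
static u32 *example_map_fifo(struct device *dev, resource_size_t start,
                             resource_size_t size)
{
        return devm_memremap(dev, start, size, MEMREMAP_WB);
}

In the actual patch the helpers take the vmw_private pointer rather than the fifo pointer, which keeps callers from touching dev_priv->fifo_mem directly; most of the changes below are that mechanical substitution.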
Signed-off-by: Zack Rusin Reviewed-by: Martin Krastev Reviewed-by: Roland Scheidegger Reported-by: kernel test robot --- drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 29 +++++----- drivers/gpu/drm/vmwgfx/vmwgfx_drv.h | 31 +++++----- drivers/gpu/drm/vmwgfx/vmwgfx_fence.c | 24 ++++---- drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c | 83 +++++++++++++-------------- drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c | 13 ++--- drivers/gpu/drm/vmwgfx/vmwgfx_irq.c | 9 +-- drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 14 ++--- 7 files changed, 94 insertions(+), 109 deletions(-) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c index a2617422a612..23ae86ab9250 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c @@ -681,7 +681,7 @@ static int vmw_driver_load(struct vmw_private *dev_priv, unsigned long chipset) dev_priv->io_start = pci_resource_start(dev_priv->drm.pdev, 0); dev_priv->vram_start = pci_resource_start(dev_priv->drm.pdev, 1); - dev_priv->mmio_start = pci_resource_start(dev_priv->drm.pdev, 2); + dev_priv->fifo_mem_start = pci_resource_start(dev_priv->drm.pdev, 2); dev_priv->assume_16bpp = !!vmw_assume_16bpp; @@ -711,7 +711,7 @@ static int vmw_driver_load(struct vmw_private *dev_priv, unsigned long chipset) } dev_priv->vram_size = vmw_read(dev_priv, SVGA_REG_VRAM_SIZE); - dev_priv->mmio_size = vmw_read(dev_priv, SVGA_REG_MEM_SIZE); + dev_priv->fifo_mem_size = vmw_read(dev_priv, SVGA_REG_MEM_SIZE); dev_priv->fb_max_width = vmw_read(dev_priv, SVGA_REG_MAX_WIDTH); dev_priv->fb_max_height = vmw_read(dev_priv, SVGA_REG_MAX_HEIGHT); @@ -796,19 +796,21 @@ static int vmw_driver_load(struct vmw_private *dev_priv, unsigned long chipset) DRM_INFO("Max dedicated hypervisor surface memory is %u kiB\n", (unsigned)dev_priv->memory_size / 1024); } - DRM_INFO("Maximum display memory size is %u kiB\n", + DRM_INFO("Maximum display memory size is %llu kiB\n", dev_priv->prim_bb_mem / 1024); - DRM_INFO("VRAM at 0x%08x size is %u kiB\n", + DRM_INFO("VRAM at 0x%08llx size is %llu kiB\n", dev_priv->vram_start, dev_priv->vram_size / 1024); - DRM_INFO("MMIO at 0x%08x size is %u kiB\n", - dev_priv->mmio_start, dev_priv->mmio_size / 1024); + DRM_INFO("MMIO at 0x%08llx size is %llu kiB\n", + dev_priv->fifo_mem_start, dev_priv->fifo_mem_size / 1024); - dev_priv->mmio_virt = memremap(dev_priv->mmio_start, - dev_priv->mmio_size, MEMREMAP_WB); + dev_priv->fifo_mem = devm_memremap(dev_priv->drm.dev, + dev_priv->fifo_mem_start, + dev_priv->fifo_mem_size, + MEMREMAP_WB); - if (unlikely(dev_priv->mmio_virt == NULL)) { + if (unlikely(dev_priv->fifo_mem == NULL)) { ret = -ENOMEM; - DRM_ERROR("Failed mapping MMIO.\n"); + DRM_ERROR("Failed mapping the FIFO MMIO.\n"); goto out_err0; } @@ -818,7 +820,7 @@ static int vmw_driver_load(struct vmw_private *dev_priv, unsigned long chipset) !vmw_fifo_have_pitchlock(dev_priv)) { ret = -ENOSYS; DRM_ERROR("Hardware has no pitchlock\n"); - goto out_err4; + goto out_err0; } dev_priv->tdev = ttm_object_device_init(&ttm_mem_glob, 12, @@ -827,7 +829,7 @@ static int vmw_driver_load(struct vmw_private *dev_priv, unsigned long chipset) if (unlikely(dev_priv->tdev == NULL)) { DRM_ERROR("Unable to initialize TTM object management.\n"); ret = -ENOMEM; - goto out_err4; + goto out_err0; } dev_priv->drm.dev_private = dev_priv; @@ -986,8 +988,6 @@ static int vmw_driver_load(struct vmw_private *dev_priv, unsigned long chipset) pci_release_regions(dev_priv->drm.pdev); out_no_device: ttm_object_device_release(&dev_priv->tdev); -out_err4: - memunmap(dev_priv->mmio_virt); 
out_err0: for (i = vmw_res_context; i < vmw_res_max; ++i) idr_destroy(&dev_priv->res_idr[i]); @@ -1034,7 +1034,6 @@ static void vmw_driver_unload(struct drm_device *dev) pci_release_regions(dev->pdev); ttm_object_device_release(&dev_priv->tdev); - memunmap(dev_priv->mmio_virt); if (dev_priv->ctx.staged_bindings) vmw_binding_state_free(dev_priv->ctx.staged_bindings); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h index b9669867e177..e388edb9e50f 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h @@ -499,12 +499,13 @@ struct vmw_private { struct drm_vma_offset_manager vma_manager; unsigned long vmw_chipset; - unsigned int io_start; - uint32_t vram_start; - uint32_t vram_size; - uint32_t prim_bb_mem; - uint32_t mmio_start; - uint32_t mmio_size; + resource_size_t io_start; + resource_size_t vram_start; + resource_size_t vram_size; + resource_size_t prim_bb_mem; + u32 *fifo_mem; + resource_size_t fifo_mem_start; + resource_size_t fifo_mem_size; uint32_t fb_max_width; uint32_t fb_max_height; uint32_t texture_max_width; @@ -513,7 +514,6 @@ struct vmw_private { uint32_t stdu_max_height; uint32_t initial_width; uint32_t initial_height; - u32 *mmio_virt; uint32_t capabilities; uint32_t capabilities2; uint32_t max_gmr_ids; @@ -1578,28 +1578,29 @@ static inline void vmw_fifo_resource_dec(struct vmw_private *dev_priv) } /** - * vmw_mmio_read - Perform a MMIO read from volatile memory + * vmw_fifo_mem_read - Perform a MMIO read from the fifo memory * - * @addr: The address to read from + * @fifo_reg: The fifo register to read from * * This function is intended to be equivalent to ioread32() on * memremap'd memory, but without byteswapping. */ -static inline u32 vmw_mmio_read(u32 *addr) +static inline u32 vmw_fifo_mem_read(struct vmw_private *vmw, uint32 fifo_reg) { - return READ_ONCE(*addr); + return READ_ONCE(*(vmw->fifo_mem + fifo_reg)); } /** - * vmw_mmio_write - Perform a MMIO write to volatile memory + * vmw_fifo_mem_write - Perform a MMIO write to volatile memory * - * @addr: The address to write to + * @addr: The fifo register to write to * * This function is intended to be equivalent to iowrite32 on * memremap'd memory, but without byteswapping. 
*/ -static inline void vmw_mmio_write(u32 value, u32 *addr) +static inline void vmw_fifo_mem_write(struct vmw_private *vmw, u32 fifo_reg, + u32 value) { - WRITE_ONCE(*addr, value); + WRITE_ONCE(*(vmw->fifo_mem + fifo_reg), value); } #endif diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c index 66fa81e20990..378ec7600154 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c @@ -141,8 +141,7 @@ static bool vmw_fence_enable_signaling(struct dma_fence *f) struct vmw_fence_manager *fman = fman_from_fence(fence); struct vmw_private *dev_priv = fman->dev_priv; - u32 *fifo_mem = dev_priv->mmio_virt; - u32 seqno = vmw_mmio_read(fifo_mem + SVGA_FIFO_FENCE); + u32 seqno = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_FENCE); if (seqno - fence->base.seqno < VMW_FENCE_WRAP) return false; @@ -401,14 +400,12 @@ static bool vmw_fence_goal_new_locked(struct vmw_fence_manager *fman, u32 passed_seqno) { u32 goal_seqno; - u32 *fifo_mem; struct vmw_fence_obj *fence; if (likely(!fman->seqno_valid)) return false; - fifo_mem = fman->dev_priv->mmio_virt; - goal_seqno = vmw_mmio_read(fifo_mem + SVGA_FIFO_FENCE_GOAL); + goal_seqno = vmw_fifo_mem_read(fman->dev_priv, SVGA_FIFO_FENCE_GOAL); if (likely(passed_seqno - goal_seqno >= VMW_FENCE_WRAP)) return false; @@ -416,8 +413,9 @@ static bool vmw_fence_goal_new_locked(struct vmw_fence_manager *fman, list_for_each_entry(fence, &fman->fence_list, head) { if (!list_empty(&fence->seq_passed_actions)) { fman->seqno_valid = true; - vmw_mmio_write(fence->base.seqno, - fifo_mem + SVGA_FIFO_FENCE_GOAL); + vmw_fifo_mem_write(fman->dev_priv, + SVGA_FIFO_FENCE_GOAL, + fence->base.seqno); break; } } @@ -445,18 +443,17 @@ static bool vmw_fence_goal_check_locked(struct vmw_fence_obj *fence) { struct vmw_fence_manager *fman = fman_from_fence(fence); u32 goal_seqno; - u32 *fifo_mem; if (dma_fence_is_signaled_locked(&fence->base)) return false; - fifo_mem = fman->dev_priv->mmio_virt; - goal_seqno = vmw_mmio_read(fifo_mem + SVGA_FIFO_FENCE_GOAL); + goal_seqno = vmw_fifo_mem_read(fman->dev_priv, SVGA_FIFO_FENCE_GOAL); if (likely(fman->seqno_valid && goal_seqno - fence->base.seqno < VMW_FENCE_WRAP)) return false; - vmw_mmio_write(fence->base.seqno, fifo_mem + SVGA_FIFO_FENCE_GOAL); + vmw_fifo_mem_write(fman->dev_priv, SVGA_FIFO_FENCE_GOAL, + fence->base.seqno); fman->seqno_valid = true; return true; @@ -468,9 +465,8 @@ static void __vmw_fences_update(struct vmw_fence_manager *fman) struct list_head action_list; bool needs_rerun; uint32_t seqno, new_seqno; - u32 *fifo_mem = fman->dev_priv->mmio_virt; - seqno = vmw_mmio_read(fifo_mem + SVGA_FIFO_FENCE); + seqno = vmw_fifo_mem_read(fman->dev_priv, SVGA_FIFO_FENCE); rerun: list_for_each_entry_safe(fence, next_fence, &fman->fence_list, head) { if (seqno - fence->base.seqno < VMW_FENCE_WRAP) { @@ -492,7 +488,7 @@ static void __vmw_fences_update(struct vmw_fence_manager *fman) needs_rerun = vmw_fence_goal_new_locked(fman, seqno); if (unlikely(needs_rerun)) { - new_seqno = vmw_mmio_read(fifo_mem + SVGA_FIFO_FENCE); + new_seqno = vmw_fifo_mem_read(fman->dev_priv, SVGA_FIFO_FENCE); if (new_seqno != seqno) { seqno = new_seqno; goto rerun; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c index a95156fc5db7..4674bc1c32f0 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c @@ -38,7 +38,6 @@ struct vmw_temp_set_context { bool vmw_fifo_have_3d(struct vmw_private *dev_priv) { - u32 *fifo_mem = 
dev_priv->mmio_virt; uint32_t fifo_min, hwversion; const struct vmw_fifo_state *fifo = &dev_priv->fifo; @@ -62,11 +61,11 @@ bool vmw_fifo_have_3d(struct vmw_private *dev_priv) if (!(dev_priv->capabilities & SVGA_CAP_EXTENDED_FIFO)) return false; - fifo_min = vmw_mmio_read(fifo_mem + SVGA_FIFO_MIN); + fifo_min = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_MIN); if (fifo_min <= SVGA_FIFO_3D_HWVERSION * sizeof(unsigned int)) return false; - hwversion = vmw_mmio_read(fifo_mem + + hwversion = vmw_fifo_mem_read(dev_priv, ((fifo->capabilities & SVGA_FIFO_CAP_3D_HWVERSION_REVISED) ? SVGA_FIFO_3D_HWVERSION_REVISED : @@ -87,13 +86,12 @@ bool vmw_fifo_have_3d(struct vmw_private *dev_priv) bool vmw_fifo_have_pitchlock(struct vmw_private *dev_priv) { - u32 *fifo_mem = dev_priv->mmio_virt; uint32_t caps; if (!(dev_priv->capabilities & SVGA_CAP_EXTENDED_FIFO)) return false; - caps = vmw_mmio_read(fifo_mem + SVGA_FIFO_CAPABILITIES); + caps = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_CAPABILITIES); if (caps & SVGA_FIFO_CAP_PITCHLOCK) return true; @@ -102,7 +100,6 @@ bool vmw_fifo_have_pitchlock(struct vmw_private *dev_priv) int vmw_fifo_init(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo) { - u32 *fifo_mem = dev_priv->mmio_virt; uint32_t max; uint32_t min; @@ -139,19 +136,19 @@ int vmw_fifo_init(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo) if (min < PAGE_SIZE) min = PAGE_SIZE; - vmw_mmio_write(min, fifo_mem + SVGA_FIFO_MIN); - vmw_mmio_write(dev_priv->mmio_size, fifo_mem + SVGA_FIFO_MAX); + vmw_fifo_mem_write(dev_priv, SVGA_FIFO_MIN, min); + vmw_fifo_mem_write(dev_priv, SVGA_FIFO_MAX, dev_priv->fifo_mem_size); wmb(); - vmw_mmio_write(min, fifo_mem + SVGA_FIFO_NEXT_CMD); - vmw_mmio_write(min, fifo_mem + SVGA_FIFO_STOP); - vmw_mmio_write(0, fifo_mem + SVGA_FIFO_BUSY); + vmw_fifo_mem_write(dev_priv, SVGA_FIFO_NEXT_CMD, min); + vmw_fifo_mem_write(dev_priv, SVGA_FIFO_STOP, min); + vmw_fifo_mem_write(dev_priv, SVGA_FIFO_BUSY, 0); mb(); vmw_write(dev_priv, SVGA_REG_CONFIG_DONE, 1); - max = vmw_mmio_read(fifo_mem + SVGA_FIFO_MAX); - min = vmw_mmio_read(fifo_mem + SVGA_FIFO_MIN); - fifo->capabilities = vmw_mmio_read(fifo_mem + SVGA_FIFO_CAPABILITIES); + max = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_MAX); + min = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_MIN); + fifo->capabilities = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_CAPABILITIES); DRM_INFO("Fifo max 0x%08x min 0x%08x cap 0x%08x\n", (unsigned int) max, @@ -159,7 +156,7 @@ int vmw_fifo_init(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo) (unsigned int) fifo->capabilities); atomic_set(&dev_priv->marker_seq, dev_priv->last_read_seqno); - vmw_mmio_write(dev_priv->last_read_seqno, fifo_mem + SVGA_FIFO_FENCE); + vmw_fifo_mem_write(dev_priv, SVGA_FIFO_FENCE, dev_priv->last_read_seqno); vmw_marker_queue_init(&fifo->marker_queue); return 0; @@ -167,7 +164,7 @@ int vmw_fifo_init(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo) void vmw_fifo_ping_host(struct vmw_private *dev_priv, uint32_t reason) { - u32 *fifo_mem = dev_priv->mmio_virt; + u32 *fifo_mem = dev_priv->fifo_mem; if (cmpxchg(fifo_mem + SVGA_FIFO_BUSY, 0, 1) == 0) vmw_write(dev_priv, SVGA_REG_SYNC, reason); @@ -175,13 +172,11 @@ void vmw_fifo_ping_host(struct vmw_private *dev_priv, uint32_t reason) void vmw_fifo_release(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo) { - u32 *fifo_mem = dev_priv->mmio_virt; - vmw_write(dev_priv, SVGA_REG_SYNC, SVGA_SYNC_GENERIC); while (vmw_read(dev_priv, SVGA_REG_BUSY) != 0) ; - dev_priv->last_read_seqno = vmw_mmio_read(fifo_mem + 
SVGA_FIFO_FENCE); + dev_priv->last_read_seqno = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_FENCE); vmw_write(dev_priv, SVGA_REG_CONFIG_DONE, dev_priv->config_done_state); @@ -205,11 +200,10 @@ void vmw_fifo_release(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo) static bool vmw_fifo_is_full(struct vmw_private *dev_priv, uint32_t bytes) { - u32 *fifo_mem = dev_priv->mmio_virt; - uint32_t max = vmw_mmio_read(fifo_mem + SVGA_FIFO_MAX); - uint32_t next_cmd = vmw_mmio_read(fifo_mem + SVGA_FIFO_NEXT_CMD); - uint32_t min = vmw_mmio_read(fifo_mem + SVGA_FIFO_MIN); - uint32_t stop = vmw_mmio_read(fifo_mem + SVGA_FIFO_STOP); + uint32_t max = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_MAX); + uint32_t next_cmd = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_NEXT_CMD); + uint32_t min = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_MIN); + uint32_t stop = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_STOP); return ((max - next_cmd) + (stop - min) <= bytes); } @@ -298,7 +292,7 @@ static void *vmw_local_fifo_reserve(struct vmw_private *dev_priv, uint32_t bytes) { struct vmw_fifo_state *fifo_state = &dev_priv->fifo; - u32 *fifo_mem = dev_priv->mmio_virt; + u32 *fifo_mem = dev_priv->fifo_mem; uint32_t max; uint32_t min; uint32_t next_cmd; @@ -306,9 +300,9 @@ static void *vmw_local_fifo_reserve(struct vmw_private *dev_priv, int ret; mutex_lock(&fifo_state->fifo_mutex); - max = vmw_mmio_read(fifo_mem + SVGA_FIFO_MAX); - min = vmw_mmio_read(fifo_mem + SVGA_FIFO_MIN); - next_cmd = vmw_mmio_read(fifo_mem + SVGA_FIFO_NEXT_CMD); + max = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_MAX); + min = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_MIN); + next_cmd = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_NEXT_CMD); if (unlikely(bytes >= (max - min))) goto out_err; @@ -319,7 +313,7 @@ static void *vmw_local_fifo_reserve(struct vmw_private *dev_priv, fifo_state->reserved_size = bytes; while (1) { - uint32_t stop = vmw_mmio_read(fifo_mem + SVGA_FIFO_STOP); + uint32_t stop = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_STOP); bool need_bounce = false; bool reserve_in_place = false; @@ -353,8 +347,9 @@ static void *vmw_local_fifo_reserve(struct vmw_private *dev_priv, fifo_state->using_bounce_buffer = false; if (reserveable) - vmw_mmio_write(bytes, fifo_mem + - SVGA_FIFO_RESERVED); + vmw_fifo_mem_write(dev_priv, + SVGA_FIFO_RESERVED, + bytes); return (void __force *) (fifo_mem + (next_cmd >> 2)); } else { @@ -402,10 +397,11 @@ void *vmw_fifo_reserve_dx(struct vmw_private *dev_priv, uint32_t bytes, } static void vmw_fifo_res_copy(struct vmw_fifo_state *fifo_state, - u32 *fifo_mem, + struct vmw_private *vmw, uint32_t next_cmd, uint32_t max, uint32_t min, uint32_t bytes) { + u32 *fifo_mem = vmw->fifo_mem; uint32_t chunk_size = max - next_cmd; uint32_t rest; uint32_t *buffer = (fifo_state->dynamic_buffer != NULL) ? 
@@ -414,7 +410,7 @@ static void vmw_fifo_res_copy(struct vmw_fifo_state *fifo_state, if (bytes < chunk_size) chunk_size = bytes; - vmw_mmio_write(bytes, fifo_mem + SVGA_FIFO_RESERVED); + vmw_fifo_mem_write(vmw, SVGA_FIFO_RESERVED, bytes); mb(); memcpy(fifo_mem + (next_cmd >> 2), buffer, chunk_size); rest = bytes - chunk_size; @@ -423,7 +419,7 @@ static void vmw_fifo_res_copy(struct vmw_fifo_state *fifo_state, } static void vmw_fifo_slow_copy(struct vmw_fifo_state *fifo_state, - u32 *fifo_mem, + struct vmw_private *vmw, uint32_t next_cmd, uint32_t max, uint32_t min, uint32_t bytes) { @@ -431,12 +427,12 @@ static void vmw_fifo_slow_copy(struct vmw_fifo_state *fifo_state, fifo_state->dynamic_buffer : fifo_state->static_buffer; while (bytes > 0) { - vmw_mmio_write(*buffer++, fifo_mem + (next_cmd >> 2)); + vmw_fifo_mem_write(vmw, (next_cmd >> 2), *buffer++); next_cmd += sizeof(uint32_t); if (unlikely(next_cmd == max)) next_cmd = min; mb(); - vmw_mmio_write(next_cmd, fifo_mem + SVGA_FIFO_NEXT_CMD); + vmw_fifo_mem_write(vmw, SVGA_FIFO_NEXT_CMD, next_cmd); mb(); bytes -= sizeof(uint32_t); } @@ -445,10 +441,9 @@ static void vmw_fifo_slow_copy(struct vmw_fifo_state *fifo_state, static void vmw_local_fifo_commit(struct vmw_private *dev_priv, uint32_t bytes) { struct vmw_fifo_state *fifo_state = &dev_priv->fifo; - u32 *fifo_mem = dev_priv->mmio_virt; - uint32_t next_cmd = vmw_mmio_read(fifo_mem + SVGA_FIFO_NEXT_CMD); - uint32_t max = vmw_mmio_read(fifo_mem + SVGA_FIFO_MAX); - uint32_t min = vmw_mmio_read(fifo_mem + SVGA_FIFO_MIN); + uint32_t next_cmd = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_NEXT_CMD); + uint32_t max = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_MAX); + uint32_t min = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_MIN); bool reserveable = fifo_state->capabilities & SVGA_FIFO_CAP_RESERVE; if (fifo_state->dx) @@ -462,10 +457,10 @@ static void vmw_local_fifo_commit(struct vmw_private *dev_priv, uint32_t bytes) if (fifo_state->using_bounce_buffer) { if (reserveable) - vmw_fifo_res_copy(fifo_state, fifo_mem, + vmw_fifo_res_copy(fifo_state, dev_priv, next_cmd, max, min, bytes); else - vmw_fifo_slow_copy(fifo_state, fifo_mem, + vmw_fifo_slow_copy(fifo_state, dev_priv, next_cmd, max, min, bytes); if (fifo_state->dynamic_buffer) { @@ -481,11 +476,11 @@ static void vmw_local_fifo_commit(struct vmw_private *dev_priv, uint32_t bytes) if (next_cmd >= max) next_cmd -= max - min; mb(); - vmw_mmio_write(next_cmd, fifo_mem + SVGA_FIFO_NEXT_CMD); + vmw_fifo_mem_write(dev_priv, SVGA_FIFO_NEXT_CMD, next_cmd); } if (reserveable) - vmw_mmio_write(0, fifo_mem + SVGA_FIFO_RESERVED); + vmw_fifo_mem_write(dev_priv, SVGA_FIFO_RESERVED, 0); mb(); up_write(&fifo_state->rwsem); vmw_fifo_ping_host(dev_priv, SVGA_SYNC_GENERIC); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c index f681b7b4df1b..c21a841dfc6d 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c @@ -67,7 +67,6 @@ int vmw_getparam_ioctl(struct drm_device *dev, void *data, break; case DRM_VMW_PARAM_FIFO_HW_VERSION: { - u32 *fifo_mem = dev_priv->mmio_virt; const struct vmw_fifo_state *fifo = &dev_priv->fifo; if ((dev_priv->capabilities & SVGA_CAP_GBOBJECTS)) { @@ -76,11 +75,11 @@ int vmw_getparam_ioctl(struct drm_device *dev, void *data, } param->value = - vmw_mmio_read(fifo_mem + - ((fifo->capabilities & - SVGA_FIFO_CAP_3D_HWVERSION_REVISED) ? 
- SVGA_FIFO_3D_HWVERSION_REVISED : - SVGA_FIFO_3D_HWVERSION)); + vmw_fifo_mem_read(dev_priv, + ((fifo->capabilities & + SVGA_FIFO_CAP_3D_HWVERSION_REVISED) ? + SVGA_FIFO_3D_HWVERSION_REVISED : + SVGA_FIFO_3D_HWVERSION)); break; } case DRM_VMW_PARAM_MAX_SURF_MEMORY: @@ -235,7 +234,7 @@ int vmw_get_cap_3d_ioctl(struct drm_device *dev, void *data, if (unlikely(ret != 0)) goto out_err; } else { - fifo_mem = dev_priv->mmio_virt; + fifo_mem = dev_priv->fifo_mem; memcpy(bounce, &fifo_mem[SVGA_FIFO_3D_CAPS], size); } diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_irq.c b/drivers/gpu/drm/vmwgfx/vmwgfx_irq.c index 75f3efee21a4..c62bbe1d2eb6 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_irq.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_irq.c @@ -117,8 +117,7 @@ static bool vmw_fifo_idle(struct vmw_private *dev_priv, uint32_t seqno) void vmw_update_seqno(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo_state) { - u32 *fifo_mem = dev_priv->mmio_virt; - uint32_t seqno = vmw_mmio_read(fifo_mem + SVGA_FIFO_FENCE); + uint32_t seqno = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_FENCE); if (dev_priv->last_read_seqno != seqno) { dev_priv->last_read_seqno = seqno; @@ -222,11 +221,9 @@ int vmw_fallback_wait(struct vmw_private *dev_priv, } } finish_wait(&dev_priv->fence_queue, &__wait); - if (ret == 0 && fifo_idle) { - u32 *fifo_mem = dev_priv->mmio_virt; + if (ret == 0 && fifo_idle) + vmw_fifo_mem_write(dev_priv, SVGA_FIFO_FENCE, signal_seq); - vmw_mmio_write(signal_seq, fifo_mem + SVGA_FIFO_FENCE); - } wake_up_all(&dev_priv->fence_queue); out_err: if (fifo_idle) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c index 9c285c799fc8..2c72f87173c3 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c @@ -128,15 +128,14 @@ static int vmw_cursor_update_bo(struct vmw_private *dev_priv, static void vmw_cursor_update_position(struct vmw_private *dev_priv, bool show, int x, int y) { - u32 *fifo_mem = dev_priv->mmio_virt; uint32_t count; spin_lock(&dev_priv->cursor_lock); - vmw_mmio_write(show ? 1 : 0, fifo_mem + SVGA_FIFO_CURSOR_ON); - vmw_mmio_write(x, fifo_mem + SVGA_FIFO_CURSOR_X); - vmw_mmio_write(y, fifo_mem + SVGA_FIFO_CURSOR_Y); - count = vmw_mmio_read(fifo_mem + SVGA_FIFO_CURSOR_COUNT); - vmw_mmio_write(++count, fifo_mem + SVGA_FIFO_CURSOR_COUNT); + vmw_fifo_mem_write(dev_priv, SVGA_FIFO_CURSOR_ON, show ? 
1 : 0); + vmw_fifo_mem_write(dev_priv, SVGA_FIFO_CURSOR_X, x); + vmw_fifo_mem_write(dev_priv, SVGA_FIFO_CURSOR_Y, y); + count = vmw_fifo_mem_read(dev_priv, SVGA_FIFO_CURSOR_COUNT); + vmw_fifo_mem_write(dev_priv, SVGA_FIFO_CURSOR_COUNT, ++count); spin_unlock(&dev_priv->cursor_lock); } @@ -1874,8 +1873,7 @@ int vmw_kms_write_svga(struct vmw_private *vmw_priv, if (vmw_priv->capabilities & SVGA_CAP_PITCHLOCK) vmw_write(vmw_priv, SVGA_REG_PITCHLOCK, pitch); else if (vmw_fifo_have_pitchlock(vmw_priv)) - vmw_mmio_write(pitch, vmw_priv->mmio_virt + - SVGA_FIFO_PITCHLOCK); + vmw_fifo_mem_write(vmw_priv, SVGA_FIFO_PITCHLOCK, pitch); vmw_write(vmw_priv, SVGA_REG_WIDTH, width); vmw_write(vmw_priv, SVGA_REG_HEIGHT, height); vmw_write(vmw_priv, SVGA_REG_BITS_PER_PIXEL, bpp); From patchwork Tue Dec 1 20:18:25 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zack Rusin X-Patchwork-Id: 11943731 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.9 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY, URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 21A67C71155 for ; Tue, 1 Dec 2020 20:33:40 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id B680521741 for ; Tue, 1 Dec 2020 20:33:39 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B680521741 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=vmware.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 850BA6E8D6; Tue, 1 Dec 2020 20:33:38 +0000 (UTC) Received: from EX13-EDG-OU-001.vmware.com (ex13-edg-ou-001.vmware.com [208.91.0.189]) by gabe.freedesktop.org (Postfix) with ESMTPS id D2CC96E8D6 for ; Tue, 1 Dec 2020 20:33:36 +0000 (UTC) Received: from sc9-mailhost3.vmware.com (10.113.161.73) by EX13-EDG-OU-001.vmware.com (10.113.208.155) with Microsoft SMTP Server id 15.0.1156.6; Tue, 1 Dec 2020 12:18:29 -0800 Received: from vertex.vmware.com (unknown [10.21.244.133]) by sc9-mailhost3.vmware.com (Postfix) with ESMTP id A55E720ABB; Tue, 1 Dec 2020 12:18:31 -0800 (PST) From: Zack Rusin To: Subject: [PATCH 5/8] drm/vmwgfx: Cleanup pci resource allocation Date: Tue, 1 Dec 2020 15:18:25 -0500 Message-ID: <20201201201828.808888-5-zackr@vmware.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20201201201828.808888-1-zackr@vmware.com> References: <20201201201828.808888-1-zackr@vmware.com> MIME-Version: 1.0 Received-SPF: None (EX13-EDG-OU-001.vmware.com: zackr@vmware.com does not designate permitted sender hosts) X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Martin Krastev , Roland Scheidegger Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Instead of doing it in 
multiple spots lets centralize the code to handle pci resources. This also cleans up the error handling a bit and will make it a lot easier to add additional svga versions to the driver. Signed-off-by: Zack Rusin Reviewed-by: Martin Krastev Reviewed-by: Roland Scheidegger --- drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 112 +++++++++++++++++----------- drivers/gpu/drm/vmwgfx/vmwgfx_drv.h | 5 +- 2 files changed, 72 insertions(+), 45 deletions(-) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c index 23ae86ab9250..43fb7ff27e99 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c @@ -44,8 +44,6 @@ #include "vmwgfx_drv.h" #define VMWGFX_DRIVER_DESC "Linux drm driver for VMware graphics devices" -#define VMWGFX_CHIP_SVGAII 0 -#define VMW_FB_RESERVATION 0 #define VMW_MIN_INITIAL_WIDTH 800 #define VMW_MIN_INITIAL_HEIGHT 600 @@ -254,8 +252,8 @@ static const struct drm_ioctl_desc vmw_ioctls[] = { }; static const struct pci_device_id vmw_pci_id_list[] = { - {0x15ad, 0x0405, PCI_ANY_ID, PCI_ANY_ID, 0, 0, VMWGFX_CHIP_SVGAII}, - {0, 0, 0} + { PCI_DEVICE(0x15ad, VMWGFX_PCI_ID_SVGA2) }, + { } }; MODULE_DEVICE_TABLE(pci, vmw_pci_id_list); @@ -643,18 +641,81 @@ static void vmw_vram_manager_fini(struct vmw_private *dev_priv) #endif } -static int vmw_driver_load(struct vmw_private *dev_priv, unsigned long chipset) +static int vmw_setup_pci_resources(struct vmw_private *dev, + unsigned long pci_id) { + resource_size_t fifo_start; + resource_size_t fifo_size; int ret; + + pci_set_master(dev->drm.pdev); + + ret = pci_request_regions(dev->drm.pdev, "vmwgfx probe"); + if (ret) + return ret; + + dev->io_start = pci_resource_start(dev->drm.pdev, 0); + dev->vram_start = pci_resource_start(dev->drm.pdev, 1); + dev->vram_size = pci_resource_len(dev->drm.pdev, 1); + fifo_start = pci_resource_start(dev->drm.pdev, 2); + fifo_size = pci_resource_len(dev->drm.pdev, 2); + + DRM_INFO("FIFO at 0x%llx size is %llu kiB\n", + fifo_start, fifo_size / 1024); + dev->fifo_mem = devm_memremap(dev->drm.dev, + fifo_start, + fifo_size, + MEMREMAP_WB); + + if (unlikely(dev->fifo_mem == NULL)) { + DRM_ERROR("Failed mapping FIFO memory.\n"); + return -ENOMEM; + } + + /* + * This is approximate size of the vram, the exact size will only + * be known after we read SVGA_REG_VRAM_SIZE. The PCI resource + * size will be equal to or bigger than the size reported by + * SVGA_REG_VRAM_SIZE. 
+ */ + DRM_INFO("VRAM at 0x%llx size is %llu kiB\n", + dev->vram_start, dev->vram_size / 1024); + + return 0; +} + +static int vmw_detect_version(struct vmw_private *dev) +{ uint32_t svga_id; + + vmw_write(dev, SVGA_REG_ID, SVGA_ID_2); + svga_id = vmw_read(dev, SVGA_REG_ID); + if (svga_id != SVGA_ID_2) { + DRM_ERROR("Unsupported SVGA ID 0x%x on chipset 0x%x\n", + svga_id, dev->vmw_chipset); + return -ENOSYS; + } + return 0; +} + +static int vmw_driver_load(struct vmw_private *dev_priv, u32 pci_id) +{ + int ret; enum vmw_res_type i; bool refuse_dma = false; char host_log[100] = {0}; - pci_set_master(dev_priv->drm.pdev); - - dev_priv->vmw_chipset = chipset; + dev_priv->vmw_chipset = pci_id; dev_priv->last_read_seqno = (uint32_t) -100; + dev_priv->drm.dev_private = dev_priv; + + ret = vmw_setup_pci_resources(dev_priv, pci_id); + if (ret) + return ret; + ret = vmw_detect_version(dev_priv); + if (ret) + return ret; + mutex_init(&dev_priv->cmdbuf_mutex); mutex_init(&dev_priv->release_mutex); mutex_init(&dev_priv->binding_mutex); @@ -679,21 +740,10 @@ static int vmw_driver_load(struct vmw_private *dev_priv, unsigned long chipset) dev_priv->used_memory_size = 0; - dev_priv->io_start = pci_resource_start(dev_priv->drm.pdev, 0); - dev_priv->vram_start = pci_resource_start(dev_priv->drm.pdev, 1); - dev_priv->fifo_mem_start = pci_resource_start(dev_priv->drm.pdev, 2); - dev_priv->assume_16bpp = !!vmw_assume_16bpp; dev_priv->enable_fb = enable_fbdev; - vmw_write(dev_priv, SVGA_REG_ID, SVGA_ID_2); - svga_id = vmw_read(dev_priv, SVGA_REG_ID); - if (svga_id != SVGA_ID_2) { - ret = -ENOSYS; - DRM_ERROR("Unsupported SVGA ID 0x%x\n", svga_id); - goto out_err0; - } dev_priv->capabilities = vmw_read(dev_priv, SVGA_REG_CAPABILITIES); @@ -798,21 +848,6 @@ static int vmw_driver_load(struct vmw_private *dev_priv, unsigned long chipset) } DRM_INFO("Maximum display memory size is %llu kiB\n", dev_priv->prim_bb_mem / 1024); - DRM_INFO("VRAM at 0x%08llx size is %llu kiB\n", - dev_priv->vram_start, dev_priv->vram_size / 1024); - DRM_INFO("MMIO at 0x%08llx size is %llu kiB\n", - dev_priv->fifo_mem_start, dev_priv->fifo_mem_size / 1024); - - dev_priv->fifo_mem = devm_memremap(dev_priv->drm.dev, - dev_priv->fifo_mem_start, - dev_priv->fifo_mem_size, - MEMREMAP_WB); - - if (unlikely(dev_priv->fifo_mem == NULL)) { - ret = -ENOMEM; - DRM_ERROR("Failed mapping the FIFO MMIO.\n"); - goto out_err0; - } /* Need mmio memory to check for fifo pitchlock cap. 
*/ if (!(dev_priv->capabilities & SVGA_CAP_DISPLAY_TOPOLOGY) && @@ -832,14 +867,6 @@ static int vmw_driver_load(struct vmw_private *dev_priv, unsigned long chipset) goto out_err0; } - dev_priv->drm.dev_private = dev_priv; - - ret = pci_request_regions(dev_priv->drm.pdev, "vmwgfx probe"); - if (ret) { - DRM_ERROR("Failed reserving PCI regions.\n"); - goto out_no_device; - } - if (dev_priv->capabilities & SVGA_CAP_IRQMASK) { ret = vmw_irq_install(&dev_priv->drm, dev_priv->drm.pdev->irq); if (ret != 0) { @@ -986,7 +1013,6 @@ static int vmw_driver_load(struct vmw_private *dev_priv, unsigned long chipset) vmw_irq_uninstall(&dev_priv->drm); out_no_irq: pci_release_regions(dev_priv->drm.pdev); -out_no_device: ttm_object_device_release(&dev_priv->tdev); out_err0: for (i = vmw_res_context; i < vmw_res_max; ++i) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h index e388edb9e50f..738a9371013f 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h @@ -67,6 +67,8 @@ #define VMWGFX_CMD_BOUNCE_INIT_SIZE 32768 #define VMWGFX_ENABLE_SCREEN_TARGET_OTABLE 1 +#define VMWGFX_PCI_ID_SVGA2 0x0405 + /* * Perhaps we should have sysfs entries for these. */ @@ -498,13 +500,12 @@ struct vmw_private { struct vmw_fifo_state fifo; struct drm_vma_offset_manager vma_manager; - unsigned long vmw_chipset; + u32 vmw_chipset; resource_size_t io_start; resource_size_t vram_start; resource_size_t vram_size; resource_size_t prim_bb_mem; u32 *fifo_mem; - resource_size_t fifo_mem_start; resource_size_t fifo_mem_size; uint32_t fb_max_width; uint32_t fb_max_height; From patchwork Tue Dec 1 20:18:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zack Rusin X-Patchwork-Id: 11943737 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E9C75C64E7A for ; Tue, 1 Dec 2020 20:33:45 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 8B2B32151B for ; Tue, 1 Dec 2020 20:33:45 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8B2B32151B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=vmware.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 7932D6E8EC; Tue, 1 Dec 2020 20:33:39 +0000 (UTC) Received: from EX13-EDG-OU-001.vmware.com (ex13-edg-ou-001.vmware.com [208.91.0.189]) by gabe.freedesktop.org (Postfix) with ESMTPS id AC3A16E8D0 for ; Tue, 1 Dec 2020 20:33:36 +0000 (UTC) Received: from sc9-mailhost3.vmware.com (10.113.161.73) by EX13-EDG-OU-001.vmware.com (10.113.208.155) with Microsoft SMTP Server id 15.0.1156.6; Tue, 1 Dec 2020 12:18:29 -0800 Received: from vertex.vmware.com (unknown [10.21.244.133]) by sc9-mailhost3.vmware.com (Postfix) with ESMTP id 74CC520AB9; Tue, 1 Dec 
2020 12:18:32 -0800 (PST) From: Zack Rusin To: Subject: [PATCH 6/8] drm/vmwgfx: Remove the throttling code Date: Tue, 1 Dec 2020 15:18:26 -0500 Message-ID: <20201201201828.808888-6-zackr@vmware.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20201201201828.808888-1-zackr@vmware.com> References: <20201201201828.808888-1-zackr@vmware.com> MIME-Version: 1.0 Received-SPF: None (EX13-EDG-OU-001.vmware.com: zackr@vmware.com does not designate permitted sender hosts) X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Martin Krastev , Roland Scheidegger Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Throttling was used before fencing to implement early vsync support in the xorg state tracker a long time ago. The xorg state tracker has been removed years ago and no one else has ever used throttling. It's time to remove this code, it hasn't been used or tested in years. Signed-off-by: Zack Rusin Reviewed-by: Martin Krastev Reviewed-by: Roland Scheidegger --- drivers/gpu/drm/vmwgfx/Makefile | 4 +- drivers/gpu/drm/vmwgfx/vmwgfx_drv.h | 21 ---- drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c | 6 +- drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c | 4 - drivers/gpu/drm/vmwgfx/vmwgfx_irq.c | 1 - drivers/gpu/drm/vmwgfx/vmwgfx_marker.c | 155 ------------------------ 6 files changed, 3 insertions(+), 188 deletions(-) delete mode 100644 drivers/gpu/drm/vmwgfx/vmwgfx_marker.c diff --git a/drivers/gpu/drm/vmwgfx/Makefile b/drivers/gpu/drm/vmwgfx/Makefile index 31f85f09f1fc..ef455d6d7c3f 100644 --- a/drivers/gpu/drm/vmwgfx/Makefile +++ b/drivers/gpu/drm/vmwgfx/Makefile @@ -2,8 +2,8 @@ vmwgfx-y := vmwgfx_execbuf.o vmwgfx_gmr.o vmwgfx_kms.o vmwgfx_drv.o \ vmwgfx_fb.o vmwgfx_ioctl.o vmwgfx_resource.o vmwgfx_ttm_buffer.o \ vmwgfx_fifo.o vmwgfx_irq.o vmwgfx_ldu.o vmwgfx_ttm_glue.o \ - vmwgfx_overlay.o vmwgfx_marker.o vmwgfx_gmrid_manager.o \ - vmwgfx_fence.o vmwgfx_bo.o vmwgfx_scrn.o vmwgfx_context.o \ + vmwgfx_overlay.o vmwgfx_gmrid_manager.o vmwgfx_fence.o \ + vmwgfx_bo.o vmwgfx_scrn.o vmwgfx_context.o \ vmwgfx_surface.o vmwgfx_prime.o vmwgfx_mob.o vmwgfx_shader.o \ vmwgfx_cmdbuf_res.o vmwgfx_cmdbuf.o vmwgfx_stdu.o \ vmwgfx_cotable.o vmwgfx_so.o vmwgfx_binding.o vmwgfx_msg.o \ diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h index 738a9371013f..fb531790f4bb 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h @@ -279,13 +279,6 @@ struct vmw_surface { struct list_head view_list; }; -struct vmw_marker_queue { - struct list_head head; - u64 lag; - u64 lag_time; - spinlock_t lock; -}; - struct vmw_fifo_state { unsigned long reserved_size; u32 *dynamic_buffer; @@ -295,7 +288,6 @@ struct vmw_fifo_state { uint32_t capabilities; struct mutex fifo_mutex; struct rw_semaphore rwsem; - struct vmw_marker_queue marker_queue; bool dx; }; @@ -1123,19 +1115,6 @@ extern void vmw_generic_waiter_add(struct vmw_private *dev_priv, u32 flag, extern void vmw_generic_waiter_remove(struct vmw_private *dev_priv, u32 flag, int *waiter_count); -/** - * Rudimentary fence-like objects currently used only for throttling - - * vmwgfx_marker.c - */ - -extern void vmw_marker_queue_init(struct vmw_marker_queue *queue); -extern void vmw_marker_queue_takedown(struct vmw_marker_queue *queue); -extern int vmw_marker_push(struct vmw_marker_queue *queue, - uint32_t seqno); -extern int vmw_marker_pull(struct 
vmw_marker_queue *queue, - uint32_t signaled_seqno); -extern int vmw_wait_lag(struct vmw_private *dev_priv, - struct vmw_marker_queue *queue, uint32_t us); /** * Kernel framebuffer - vmwgfx_fb.c diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c index e67e2e8f6e6f..7c7d4b147d85 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c @@ -4046,11 +4046,7 @@ int vmw_execbuf_process(struct drm_file *file_priv, } if (throttle_us) { - ret = vmw_wait_lag(dev_priv, &dev_priv->fifo.marker_queue, - throttle_us); - - if (ret) - goto out_free_fence_fd; + VMW_DEBUG_USER("Throttling is no longer supported.\n"); } kernel_commands = vmw_execbuf_cmdbuf(dev_priv, user_commands, diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c index 4674bc1c32f0..f4b9af67551f 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c @@ -157,7 +157,6 @@ int vmw_fifo_init(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo) atomic_set(&dev_priv->marker_seq, dev_priv->last_read_seqno); vmw_fifo_mem_write(dev_priv, SVGA_FIFO_FENCE, dev_priv->last_read_seqno); - vmw_marker_queue_init(&fifo->marker_queue); return 0; } @@ -185,8 +184,6 @@ void vmw_fifo_release(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo) vmw_write(dev_priv, SVGA_REG_TRACES, dev_priv->traces_state); - vmw_marker_queue_takedown(&fifo->marker_queue); - if (likely(fifo->static_buffer != NULL)) { vfree(fifo->static_buffer); fifo->static_buffer = NULL; @@ -563,7 +560,6 @@ int vmw_fifo_send_fence(struct vmw_private *dev_priv, uint32_t *seqno) cmd_fence = (struct svga_fifo_cmd_fence *) fm; cmd_fence->fence = *seqno; vmw_fifo_commit_flush(dev_priv, bytes); - (void) vmw_marker_push(&fifo_state->marker_queue, *seqno); vmw_update_seqno(dev_priv, fifo_state); out_err: diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_irq.c b/drivers/gpu/drm/vmwgfx/vmwgfx_irq.c index c62bbe1d2eb6..6c2a569f1fcb 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_irq.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_irq.c @@ -121,7 +121,6 @@ void vmw_update_seqno(struct vmw_private *dev_priv, if (dev_priv->last_read_seqno != seqno) { dev_priv->last_read_seqno = seqno; - vmw_marker_pull(&fifo_state->marker_queue, seqno); vmw_fences_update(dev_priv->fman); } } diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_marker.c b/drivers/gpu/drm/vmwgfx/vmwgfx_marker.c deleted file mode 100644 index e53bc639a754..000000000000 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_marker.c +++ /dev/null @@ -1,155 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 OR MIT -/************************************************************************** - * - * Copyright 2010 VMware, Inc., Palo Alto, CA., USA - * - * Permission is hereby granted, free of charge, to any person obtaining a - * copy of this software and associated documentation files (the - * "Software"), to deal in the Software without restriction, including - * without limitation the rights to use, copy, modify, merge, publish, - * distribute, sub license, and/or sell copies of the Software, and to - * permit persons to whom the Software is furnished to do so, subject to - * the following conditions: - * - * The above copyright notice and this permission notice (including the - * next paragraph) shall be included in all copies or substantial portions - * of the Software. 
- * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL - * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, - * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR - * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE - * USE OR OTHER DEALINGS IN THE SOFTWARE. - * - **************************************************************************/ - - -#include "vmwgfx_drv.h" - -struct vmw_marker { - struct list_head head; - uint32_t seqno; - u64 submitted; -}; - -void vmw_marker_queue_init(struct vmw_marker_queue *queue) -{ - INIT_LIST_HEAD(&queue->head); - queue->lag = 0; - queue->lag_time = ktime_get_raw_ns(); - spin_lock_init(&queue->lock); -} - -void vmw_marker_queue_takedown(struct vmw_marker_queue *queue) -{ - struct vmw_marker *marker, *next; - - spin_lock(&queue->lock); - list_for_each_entry_safe(marker, next, &queue->head, head) { - kfree(marker); - } - spin_unlock(&queue->lock); -} - -int vmw_marker_push(struct vmw_marker_queue *queue, - uint32_t seqno) -{ - struct vmw_marker *marker = kmalloc(sizeof(*marker), GFP_KERNEL); - - if (unlikely(!marker)) - return -ENOMEM; - - marker->seqno = seqno; - marker->submitted = ktime_get_raw_ns(); - spin_lock(&queue->lock); - list_add_tail(&marker->head, &queue->head); - spin_unlock(&queue->lock); - - return 0; -} - -int vmw_marker_pull(struct vmw_marker_queue *queue, - uint32_t signaled_seqno) -{ - struct vmw_marker *marker, *next; - bool updated = false; - u64 now; - - spin_lock(&queue->lock); - now = ktime_get_raw_ns(); - - if (list_empty(&queue->head)) { - queue->lag = 0; - queue->lag_time = now; - updated = true; - goto out_unlock; - } - - list_for_each_entry_safe(marker, next, &queue->head, head) { - if (signaled_seqno - marker->seqno > (1 << 30)) - continue; - - queue->lag = now - marker->submitted; - queue->lag_time = now; - updated = true; - list_del(&marker->head); - kfree(marker); - } - -out_unlock: - spin_unlock(&queue->lock); - - return (updated) ? 
0 : -EBUSY; -} - -static u64 vmw_fifo_lag(struct vmw_marker_queue *queue) -{ - u64 now; - - spin_lock(&queue->lock); - now = ktime_get_raw_ns(); - queue->lag += now - queue->lag_time; - queue->lag_time = now; - spin_unlock(&queue->lock); - return queue->lag; -} - - -static bool vmw_lag_lt(struct vmw_marker_queue *queue, - uint32_t us) -{ - u64 cond = (u64) us * NSEC_PER_USEC; - - return vmw_fifo_lag(queue) <= cond; -} - -int vmw_wait_lag(struct vmw_private *dev_priv, - struct vmw_marker_queue *queue, uint32_t us) -{ - struct vmw_marker *marker; - uint32_t seqno; - int ret; - - while (!vmw_lag_lt(queue, us)) { - spin_lock(&queue->lock); - if (list_empty(&queue->head)) - seqno = atomic_read(&dev_priv->marker_seq); - else { - marker = list_first_entry(&queue->head, - struct vmw_marker, head); - seqno = marker->seqno; - } - spin_unlock(&queue->lock); - - ret = vmw_wait_seqno(dev_priv, false, seqno, true, - 3*HZ); - - if (unlikely(ret != 0)) - return ret; - - (void) vmw_marker_pull(queue, seqno); - } - return 0; -} From patchwork Tue Dec 1 20:18:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zack Rusin X-Patchwork-Id: 11943741 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 52AF5C64E7B for ; Tue, 1 Dec 2020 20:33:49 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id BB1C4221FF for ; Tue, 1 Dec 2020 20:33:48 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org BB1C4221FF Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=vmware.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 97B8F6E8F0; Tue, 1 Dec 2020 20:33:41 +0000 (UTC) Received: from EX13-EDG-OU-001.vmware.com (ex13-edg-ou-001.vmware.com [208.91.0.189]) by gabe.freedesktop.org (Postfix) with ESMTPS id 51C4B6E8D6 for ; Tue, 1 Dec 2020 20:33:36 +0000 (UTC) Received: from sc9-mailhost3.vmware.com (10.113.161.73) by EX13-EDG-OU-001.vmware.com (10.113.208.155) with Microsoft SMTP Server id 15.0.1156.6; Tue, 1 Dec 2020 12:18:30 -0800 Received: from vertex.vmware.com (unknown [10.21.244.133]) by sc9-mailhost3.vmware.com (Postfix) with ESMTP id 205A820AB9; Tue, 1 Dec 2020 12:18:33 -0800 (PST) From: Zack Rusin To: Subject: [PATCH 7/8] drm/vmwgfx: Cleanup the cmd/fifo split Date: Tue, 1 Dec 2020 15:18:27 -0500 Message-ID: <20201201201828.808888-7-zackr@vmware.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20201201201828.808888-1-zackr@vmware.com> References: <20201201201828.808888-1-zackr@vmware.com> MIME-Version: 1.0 Received-SPF: None (EX13-EDG-OU-001.vmware.com: zackr@vmware.com does not designate permitted sender hosts) X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure 
- Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Martin Krastev Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Lets try to cleanup the usage of the term FIFO which we used for both our MMIO based cmd queue processing and for general command processing which could have been using command buffers interface. We're going to rename the functions which are processing commands (and work either via MMIO or command buffers) as _cmd_ and functions which operate on the MMIO based commands as FIFO to match the SVGA device naming. Signed-off-by: Zack Rusin Reviewed-by: Martin Krastev --- drivers/gpu/drm/vmwgfx/Makefile | 2 +- drivers/gpu/drm/vmwgfx/vmwgfx_binding.c | 52 +++++++++---------- .../vmwgfx/{vmwgfx_fifo.c => vmwgfx_cmd.c} | 40 +++++++------- drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c | 14 ++--- drivers/gpu/drm/vmwgfx/vmwgfx_context.c | 40 +++++++------- drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c | 12 ++--- drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 3 +- drivers/gpu/drm/vmwgfx/vmwgfx_drv.h | 30 +++++------ drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c | 16 +++--- drivers/gpu/drm/vmwgfx/vmwgfx_fb.c | 2 +- drivers/gpu/drm/vmwgfx/vmwgfx_gmr.c | 8 +-- drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c | 2 +- drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 21 ++++---- drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c | 4 +- drivers/gpu/drm/vmwgfx/vmwgfx_mob.c | 16 +++--- drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c | 8 +-- drivers/gpu/drm/vmwgfx/vmwgfx_resource.c | 4 +- drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c | 24 ++++----- drivers/gpu/drm/vmwgfx/vmwgfx_shader.c | 24 ++++----- drivers/gpu/drm/vmwgfx/vmwgfx_so.c | 8 +-- drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c | 28 +++++----- drivers/gpu/drm/vmwgfx/vmwgfx_streamoutput.c | 8 +-- drivers/gpu/drm/vmwgfx/vmwgfx_surface.c | 40 +++++++------- 23 files changed, 199 insertions(+), 207 deletions(-) rename drivers/gpu/drm/vmwgfx/{vmwgfx_fifo.c => vmwgfx_cmd.c} (94%) diff --git a/drivers/gpu/drm/vmwgfx/Makefile b/drivers/gpu/drm/vmwgfx/Makefile index ef455d6d7c3f..cc4cdca7176e 100644 --- a/drivers/gpu/drm/vmwgfx/Makefile +++ b/drivers/gpu/drm/vmwgfx/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 vmwgfx-y := vmwgfx_execbuf.o vmwgfx_gmr.o vmwgfx_kms.o vmwgfx_drv.o \ vmwgfx_fb.o vmwgfx_ioctl.o vmwgfx_resource.o vmwgfx_ttm_buffer.o \ - vmwgfx_fifo.o vmwgfx_irq.o vmwgfx_ldu.o vmwgfx_ttm_glue.o \ + vmwgfx_cmd.o vmwgfx_irq.o vmwgfx_ldu.o vmwgfx_ttm_glue.o \ vmwgfx_overlay.o vmwgfx_gmrid_manager.o vmwgfx_fence.o \ vmwgfx_bo.o vmwgfx_scrn.o vmwgfx_context.o \ vmwgfx_surface.o vmwgfx_prime.o vmwgfx_mob.o vmwgfx_shader.o \ diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c b/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c index f41550797970..180f6dbc9460 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_binding.c @@ -555,7 +555,7 @@ static int vmw_binding_scrub_shader(struct vmw_ctx_bindinfo *bi, bool rebind) SVGA3dCmdSetShader body; } *cmd; - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -564,7 +564,7 @@ static int vmw_binding_scrub_shader(struct vmw_ctx_bindinfo *bi, bool rebind) cmd->body.cid = bi->ctx->id; cmd->body.type = binding->shader_slot + SVGA3D_SHADERTYPE_MIN; cmd->body.shid = ((rebind) ? 
bi->res->id : SVGA3D_INVALID_ID); - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); return 0; } @@ -587,7 +587,7 @@ static int vmw_binding_scrub_render_target(struct vmw_ctx_bindinfo *bi, SVGA3dCmdSetRenderTarget body; } *cmd; - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -598,7 +598,7 @@ static int vmw_binding_scrub_render_target(struct vmw_ctx_bindinfo *bi, cmd->body.target.sid = ((rebind) ? bi->res->id : SVGA3D_INVALID_ID); cmd->body.target.face = 0; cmd->body.target.mipmap = 0; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); return 0; } @@ -626,7 +626,7 @@ static int vmw_binding_scrub_texture(struct vmw_ctx_bindinfo *bi, } body; } *cmd; - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -636,7 +636,7 @@ static int vmw_binding_scrub_texture(struct vmw_ctx_bindinfo *bi, cmd->body.s1.stage = binding->texture_stage; cmd->body.s1.name = SVGA3D_TS_BIND_TEXTURE; cmd->body.s1.value = ((rebind) ? bi->res->id : SVGA3D_INVALID_ID); - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); return 0; } @@ -657,7 +657,7 @@ static int vmw_binding_scrub_dx_shader(struct vmw_ctx_bindinfo *bi, bool rebind) SVGA3dCmdDXSetShader body; } *cmd; - cmd = VMW_FIFO_RESERVE_DX(dev_priv, sizeof(*cmd), bi->ctx->id); + cmd = VMW_CMD_CTX_RESERVE(dev_priv, sizeof(*cmd), bi->ctx->id); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -665,7 +665,7 @@ static int vmw_binding_scrub_dx_shader(struct vmw_ctx_bindinfo *bi, bool rebind) cmd->header.size = sizeof(cmd->body); cmd->body.type = binding->shader_slot + SVGA3D_SHADERTYPE_MIN; cmd->body.shaderId = ((rebind) ? 
bi->res->id : SVGA3D_INVALID_ID); - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); return 0; } @@ -686,7 +686,7 @@ static int vmw_binding_scrub_cb(struct vmw_ctx_bindinfo *bi, bool rebind) SVGA3dCmdDXSetSingleConstantBuffer body; } *cmd; - cmd = VMW_FIFO_RESERVE_DX(dev_priv, sizeof(*cmd), bi->ctx->id); + cmd = VMW_CMD_CTX_RESERVE(dev_priv, sizeof(*cmd), bi->ctx->id); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -703,7 +703,7 @@ static int vmw_binding_scrub_cb(struct vmw_ctx_bindinfo *bi, bool rebind) cmd->body.sizeInBytes = 0; cmd->body.sid = SVGA3D_INVALID_ID; } - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); return 0; } @@ -810,7 +810,7 @@ static int vmw_emit_set_sr(struct vmw_ctx_binding_state *cbs, view_id_size = cbs->bind_cmd_count*sizeof(uint32); cmd_size = sizeof(*cmd) + view_id_size; - cmd = VMW_FIFO_RESERVE_DX(ctx->dev_priv, cmd_size, ctx->id); + cmd = VMW_CMD_CTX_RESERVE(ctx->dev_priv, cmd_size, ctx->id); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -821,7 +821,7 @@ static int vmw_emit_set_sr(struct vmw_ctx_binding_state *cbs, memcpy(&cmd[1], cbs->bind_cmd_buffer, view_id_size); - vmw_fifo_commit(ctx->dev_priv, cmd_size); + vmw_cmd_commit(ctx->dev_priv, cmd_size); bitmap_clear(cbs->per_shader[shader_slot].dirty_sr, cbs->bind_first_slot, cbs->bind_cmd_count); @@ -846,7 +846,7 @@ static int vmw_emit_set_rt(struct vmw_ctx_binding_state *cbs) vmw_collect_view_ids(cbs, loc, SVGA3D_MAX_SIMULTANEOUS_RENDER_TARGETS); view_id_size = cbs->bind_cmd_count*sizeof(uint32); cmd_size = sizeof(*cmd) + view_id_size; - cmd = VMW_FIFO_RESERVE_DX(ctx->dev_priv, cmd_size, ctx->id); + cmd = VMW_CMD_CTX_RESERVE(ctx->dev_priv, cmd_size, ctx->id); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -860,7 +860,7 @@ static int vmw_emit_set_rt(struct vmw_ctx_binding_state *cbs) memcpy(&cmd[1], cbs->bind_cmd_buffer, view_id_size); - vmw_fifo_commit(ctx->dev_priv, cmd_size); + vmw_cmd_commit(ctx->dev_priv, cmd_size); return 0; @@ -930,7 +930,7 @@ static int vmw_emit_set_so_target(struct vmw_ctx_binding_state *cbs) so_target_size = cbs->bind_cmd_count*sizeof(SVGA3dSoTarget); cmd_size = sizeof(*cmd) + so_target_size; - cmd = VMW_FIFO_RESERVE_DX(ctx->dev_priv, cmd_size, ctx->id); + cmd = VMW_CMD_CTX_RESERVE(ctx->dev_priv, cmd_size, ctx->id); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -938,7 +938,7 @@ static int vmw_emit_set_so_target(struct vmw_ctx_binding_state *cbs) cmd->header.size = sizeof(cmd->body) + so_target_size; memcpy(&cmd[1], cbs->bind_cmd_buffer, so_target_size); - vmw_fifo_commit(ctx->dev_priv, cmd_size); + vmw_cmd_commit(ctx->dev_priv, cmd_size); return 0; @@ -1044,7 +1044,7 @@ static int vmw_emit_set_vb(struct vmw_ctx_binding_state *cbs) set_vb_size = cbs->bind_cmd_count*sizeof(SVGA3dVertexBuffer); cmd_size = sizeof(*cmd) + set_vb_size; - cmd = VMW_FIFO_RESERVE_DX(ctx->dev_priv, cmd_size, ctx->id); + cmd = VMW_CMD_CTX_RESERVE(ctx->dev_priv, cmd_size, ctx->id); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -1054,7 +1054,7 @@ static int vmw_emit_set_vb(struct vmw_ctx_binding_state *cbs) memcpy(&cmd[1], cbs->bind_cmd_buffer, set_vb_size); - vmw_fifo_commit(ctx->dev_priv, cmd_size); + vmw_cmd_commit(ctx->dev_priv, cmd_size); bitmap_clear(cbs->dirty_vb, cbs->bind_first_slot, cbs->bind_cmd_count); @@ -1074,7 +1074,7 @@ static int vmw_emit_set_uav(struct vmw_ctx_binding_state *cbs) vmw_collect_view_ids(cbs, loc, SVGA3D_MAX_UAVIEWS); view_id_size = cbs->bind_cmd_count*sizeof(uint32); cmd_size = sizeof(*cmd) + view_id_size; - cmd = 
VMW_FIFO_RESERVE_DX(ctx->dev_priv, cmd_size, ctx->id); + cmd = VMW_CMD_CTX_RESERVE(ctx->dev_priv, cmd_size, ctx->id); if (!cmd) return -ENOMEM; @@ -1086,7 +1086,7 @@ static int vmw_emit_set_uav(struct vmw_ctx_binding_state *cbs) memcpy(&cmd[1], cbs->bind_cmd_buffer, view_id_size); - vmw_fifo_commit(ctx->dev_priv, cmd_size); + vmw_cmd_commit(ctx->dev_priv, cmd_size); return 0; } @@ -1104,7 +1104,7 @@ static int vmw_emit_set_cs_uav(struct vmw_ctx_binding_state *cbs) vmw_collect_view_ids(cbs, loc, SVGA3D_MAX_UAVIEWS); view_id_size = cbs->bind_cmd_count*sizeof(uint32); cmd_size = sizeof(*cmd) + view_id_size; - cmd = VMW_FIFO_RESERVE_DX(ctx->dev_priv, cmd_size, ctx->id); + cmd = VMW_CMD_CTX_RESERVE(ctx->dev_priv, cmd_size, ctx->id); if (!cmd) return -ENOMEM; @@ -1116,7 +1116,7 @@ static int vmw_emit_set_cs_uav(struct vmw_ctx_binding_state *cbs) memcpy(&cmd[1], cbs->bind_cmd_buffer, view_id_size); - vmw_fifo_commit(ctx->dev_priv, cmd_size); + vmw_cmd_commit(ctx->dev_priv, cmd_size); return 0; } @@ -1263,7 +1263,7 @@ static int vmw_binding_scrub_ib(struct vmw_ctx_bindinfo *bi, bool rebind) SVGA3dCmdDXSetIndexBuffer body; } *cmd; - cmd = VMW_FIFO_RESERVE_DX(dev_priv, sizeof(*cmd), bi->ctx->id); + cmd = VMW_CMD_CTX_RESERVE(dev_priv, sizeof(*cmd), bi->ctx->id); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -1279,7 +1279,7 @@ static int vmw_binding_scrub_ib(struct vmw_ctx_bindinfo *bi, bool rebind) cmd->body.offset = 0; } - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); return 0; } @@ -1315,14 +1315,14 @@ static int vmw_binding_scrub_so(struct vmw_ctx_bindinfo *bi, bool rebind) SVGA3dCmdDXSetStreamOutput body; } *cmd; - cmd = VMW_FIFO_RESERVE_DX(dev_priv, sizeof(*cmd), bi->ctx->id); + cmd = VMW_CMD_CTX_RESERVE(dev_priv, sizeof(*cmd), bi->ctx->id); if (!cmd) return -ENOMEM; cmd->header.id = SVGA_3D_CMD_DX_SET_STREAMOUTPUT; cmd->header.size = sizeof(cmd->body); cmd->body.soid = rebind ? bi->res->id : SVGA3D_INVALID_ID; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); return 0; } diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c similarity index 94% rename from drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c rename to drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c index f4b9af67551f..3a1bee0212e1 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c @@ -1,7 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 OR MIT /************************************************************************** * - * Copyright 2009-2015 VMware, Inc., Palo Alto, CA., USA + * Copyright 2009-2020 VMware, Inc., Palo Alto, CA., USA * * Permission is hereby granted, free of charge, to any person obtaining a * copy of this software and associated documentation files (the @@ -36,7 +36,7 @@ struct vmw_temp_set_context { SVGA3dCmdDXTempSetContext body; }; -bool vmw_fifo_have_3d(struct vmw_private *dev_priv) +bool vmw_supports_3d(struct vmw_private *dev_priv) { uint32_t fifo_min, hwversion; const struct vmw_fifo_state *fifo = &dev_priv->fifo; @@ -66,10 +66,10 @@ bool vmw_fifo_have_3d(struct vmw_private *dev_priv) return false; hwversion = vmw_fifo_mem_read(dev_priv, - ((fifo->capabilities & - SVGA_FIFO_CAP_3D_HWVERSION_REVISED) ? - SVGA_FIFO_3D_HWVERSION_REVISED : - SVGA_FIFO_3D_HWVERSION)); + ((fifo->capabilities & + SVGA_FIFO_CAP_3D_HWVERSION_REVISED) ? 
+ SVGA_FIFO_3D_HWVERSION_REVISED : + SVGA_FIFO_3D_HWVERSION)); if (hwversion == 0) return false; @@ -126,6 +126,7 @@ int vmw_fifo_init(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo) vmw_write(dev_priv, SVGA_REG_ENABLE, SVGA_REG_ENABLE_ENABLE | SVGA_REG_ENABLE_HIDE); + vmw_write(dev_priv, SVGA_REG_TRACES, 0); min = 4; @@ -373,7 +374,7 @@ static void *vmw_local_fifo_reserve(struct vmw_private *dev_priv, return NULL; } -void *vmw_fifo_reserve_dx(struct vmw_private *dev_priv, uint32_t bytes, +void *vmw_cmd_ctx_reserve(struct vmw_private *dev_priv, uint32_t bytes, int ctx_id) { void *ret; @@ -484,7 +485,7 @@ static void vmw_local_fifo_commit(struct vmw_private *dev_priv, uint32_t bytes) mutex_unlock(&fifo_state->fifo_mutex); } -void vmw_fifo_commit(struct vmw_private *dev_priv, uint32_t bytes) +void vmw_cmd_commit(struct vmw_private *dev_priv, uint32_t bytes) { if (dev_priv->cman) vmw_cmdbuf_commit(dev_priv->cman, bytes, NULL, false); @@ -499,7 +500,7 @@ void vmw_fifo_commit(struct vmw_private *dev_priv, uint32_t bytes) * @dev_priv: Pointer to device private structure. * @bytes: Number of bytes to commit. */ -void vmw_fifo_commit_flush(struct vmw_private *dev_priv, uint32_t bytes) +void vmw_cmd_commit_flush(struct vmw_private *dev_priv, uint32_t bytes) { if (dev_priv->cman) vmw_cmdbuf_commit(dev_priv->cman, bytes, NULL, true); @@ -514,7 +515,7 @@ void vmw_fifo_commit_flush(struct vmw_private *dev_priv, uint32_t bytes) * @dev_priv: Pointer to device private structure. * @interruptible: Whether to wait interruptible if function needs to sleep. */ -int vmw_fifo_flush(struct vmw_private *dev_priv, bool interruptible) +int vmw_cmd_flush(struct vmw_private *dev_priv, bool interruptible) { might_sleep(); @@ -524,7 +525,7 @@ int vmw_fifo_flush(struct vmw_private *dev_priv, bool interruptible) return 0; } -int vmw_fifo_send_fence(struct vmw_private *dev_priv, uint32_t *seqno) +int vmw_cmd_send_fence(struct vmw_private *dev_priv, uint32_t *seqno) { struct vmw_fifo_state *fifo_state = &dev_priv->fifo; struct svga_fifo_cmd_fence *cmd_fence; @@ -532,7 +533,7 @@ int vmw_fifo_send_fence(struct vmw_private *dev_priv, uint32_t *seqno) int ret = 0; uint32_t bytes = sizeof(u32) + sizeof(*cmd_fence); - fm = VMW_FIFO_RESERVE(dev_priv, bytes); + fm = VMW_CMD_RESERVE(dev_priv, bytes); if (unlikely(fm == NULL)) { *seqno = atomic_read(&dev_priv->marker_seq); ret = -ENOMEM; @@ -552,14 +553,14 @@ int vmw_fifo_send_fence(struct vmw_private *dev_priv, uint32_t *seqno) * waiting code in vmwgfx_irq.c will emulate this. 
*/ - vmw_fifo_commit(dev_priv, 0); + vmw_cmd_commit(dev_priv, 0); return 0; } *fm++ = SVGA_CMD_FENCE; cmd_fence = (struct svga_fifo_cmd_fence *) fm; cmd_fence->fence = *seqno; - vmw_fifo_commit_flush(dev_priv, bytes); + vmw_cmd_commit_flush(dev_priv, bytes); vmw_update_seqno(dev_priv, fifo_state); out_err: @@ -590,7 +591,7 @@ static int vmw_fifo_emit_dummy_legacy_query(struct vmw_private *dev_priv, SVGA3dCmdWaitForQuery body; } *cmd; - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -607,7 +608,7 @@ static int vmw_fifo_emit_dummy_legacy_query(struct vmw_private *dev_priv, cmd->body.guestResult.offset = 0; } - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); return 0; } @@ -636,7 +637,7 @@ static int vmw_fifo_emit_dummy_gb_query(struct vmw_private *dev_priv, SVGA3dCmdWaitForGBQuery body; } *cmd; - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -648,7 +649,7 @@ static int vmw_fifo_emit_dummy_gb_query(struct vmw_private *dev_priv, cmd->body.mobid = bo->mem.start; cmd->body.offset = 0; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); return 0; } @@ -672,7 +673,7 @@ static int vmw_fifo_emit_dummy_gb_query(struct vmw_private *dev_priv, * * Returns -ENOMEM on failure to reserve fifo space. */ -int vmw_fifo_emit_dummy_query(struct vmw_private *dev_priv, +int vmw_cmd_emit_dummy_query(struct vmw_private *dev_priv, uint32_t cid) { if (dev_priv->has_mob) @@ -680,3 +681,4 @@ int vmw_fifo_emit_dummy_query(struct vmw_private *dev_priv, return vmw_fifo_emit_dummy_legacy_query(dev_priv, cid); } + diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c index 3cbc8f6f083f..7478ed177df7 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c @@ -610,7 +610,7 @@ static void vmw_cmdbuf_work_func(struct work_struct *work) /* Send a new fence in case one was removed */ if (send_fence) { - vmw_fifo_send_fence(man->dev_priv, &dummy); + vmw_cmd_send_fence(man->dev_priv, &dummy); wake_up_all(&man->idle_queue); } @@ -1208,18 +1208,14 @@ static int vmw_cmdbuf_startstop(struct vmw_cmdbuf_man *man, u32 context, * * @man: The command buffer manager. * @size: The size of the main space pool. - * @default_size: The default size of the command buffer for small kernel - * submissions. * - * Set the size and allocate the main command buffer space pool, - * as well as the default size of the command buffer for - * small kernel submissions. If successful, this enables large command - * submissions. Note that this function requires that rudimentary command + * Set the size and allocate the main command buffer space pool. + * If successful, this enables large command submissions. + * Note that this function requires that rudimentary command * submission is already available and that the MOB memory manager is alive. * Returns 0 on success. Negative error code on failure. 
*/ -int vmw_cmdbuf_set_pool_size(struct vmw_cmdbuf_man *man, - size_t size, size_t default_size) +int vmw_cmdbuf_set_pool_size(struct vmw_cmdbuf_man *man, size_t size) { struct vmw_private *dev_priv = man->dev_priv; bool dummy; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_context.c b/drivers/gpu/drm/vmwgfx/vmwgfx_context.c index 61c246335e66..6f4d0da11ad8 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_context.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_context.c @@ -163,7 +163,7 @@ static void vmw_hw_context_destroy(struct vmw_resource *res) } vmw_execbuf_release_pinned_bo(dev_priv); - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return; @@ -171,7 +171,7 @@ static void vmw_hw_context_destroy(struct vmw_resource *res) cmd->header.size = sizeof(cmd->body); cmd->body.cid = res->id; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); vmw_fifo_resource_dec(dev_priv); } @@ -265,7 +265,7 @@ static int vmw_context_init(struct vmw_private *dev_priv, return -ENOMEM; } - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) { vmw_resource_unreference(&res); return -ENOMEM; @@ -275,7 +275,7 @@ static int vmw_context_init(struct vmw_private *dev_priv, cmd->header.size = sizeof(cmd->body); cmd->body.cid = res->id; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); vmw_fifo_resource_inc(dev_priv); res->hw_destroy = vmw_hw_context_destroy; return 0; @@ -316,7 +316,7 @@ static int vmw_gb_context_create(struct vmw_resource *res) goto out_no_fifo; } - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) { ret = -ENOMEM; goto out_no_fifo; @@ -325,7 +325,7 @@ static int vmw_gb_context_create(struct vmw_resource *res) cmd->header.id = SVGA_3D_CMD_DEFINE_GB_CONTEXT; cmd->header.size = sizeof(cmd->body); cmd->body.cid = res->id; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); vmw_fifo_resource_inc(dev_priv); return 0; @@ -348,7 +348,7 @@ static int vmw_gb_context_bind(struct vmw_resource *res, BUG_ON(bo->mem.mem_type != VMW_PL_MOB); - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -358,7 +358,7 @@ static int vmw_gb_context_bind(struct vmw_resource *res, cmd->body.mobid = bo->mem.start; cmd->body.validContents = res->backup_dirty; res->backup_dirty = false; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); return 0; } @@ -392,7 +392,7 @@ static int vmw_gb_context_unbind(struct vmw_resource *res, submit_size = sizeof(*cmd2) + (readback ? 
sizeof(*cmd1) : 0); - cmd = VMW_FIFO_RESERVE(dev_priv, submit_size); + cmd = VMW_CMD_RESERVE(dev_priv, submit_size); if (unlikely(cmd == NULL)) { mutex_unlock(&dev_priv->binding_mutex); return -ENOMEM; @@ -411,7 +411,7 @@ static int vmw_gb_context_unbind(struct vmw_resource *res, cmd2->body.cid = res->id; cmd2->body.mobid = SVGA3D_INVALID_ID; - vmw_fifo_commit(dev_priv, submit_size); + vmw_cmd_commit(dev_priv, submit_size); mutex_unlock(&dev_priv->binding_mutex); /* @@ -440,14 +440,14 @@ static int vmw_gb_context_destroy(struct vmw_resource *res) if (likely(res->id == -1)) return 0; - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return -ENOMEM; cmd->header.id = SVGA_3D_CMD_DESTROY_GB_CONTEXT; cmd->header.size = sizeof(cmd->body); cmd->body.cid = res->id; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); if (dev_priv->query_cid == res->id) dev_priv->query_cid_valid = false; vmw_resource_release_id(res); @@ -483,7 +483,7 @@ static int vmw_dx_context_create(struct vmw_resource *res) goto out_no_fifo; } - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) { ret = -ENOMEM; goto out_no_fifo; @@ -492,7 +492,7 @@ static int vmw_dx_context_create(struct vmw_resource *res) cmd->header.id = SVGA_3D_CMD_DX_DEFINE_CONTEXT; cmd->header.size = sizeof(cmd->body); cmd->body.cid = res->id; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); vmw_fifo_resource_inc(dev_priv); return 0; @@ -515,7 +515,7 @@ static int vmw_dx_context_bind(struct vmw_resource *res, BUG_ON(bo->mem.mem_type != VMW_PL_MOB); - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -525,7 +525,7 @@ static int vmw_dx_context_bind(struct vmw_resource *res, cmd->body.mobid = bo->mem.start; cmd->body.validContents = res->backup_dirty; res->backup_dirty = false; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); return 0; @@ -608,7 +608,7 @@ static int vmw_dx_context_unbind(struct vmw_resource *res, submit_size = sizeof(*cmd2) + (readback ? 
sizeof(*cmd1) : 0); - cmd = VMW_FIFO_RESERVE(dev_priv, submit_size); + cmd = VMW_CMD_RESERVE(dev_priv, submit_size); if (unlikely(cmd == NULL)) { mutex_unlock(&dev_priv->binding_mutex); return -ENOMEM; @@ -627,7 +627,7 @@ static int vmw_dx_context_unbind(struct vmw_resource *res, cmd2->body.cid = res->id; cmd2->body.mobid = SVGA3D_INVALID_ID; - vmw_fifo_commit(dev_priv, submit_size); + vmw_cmd_commit(dev_priv, submit_size); mutex_unlock(&dev_priv->binding_mutex); /* @@ -656,14 +656,14 @@ static int vmw_dx_context_destroy(struct vmw_resource *res) if (likely(res->id == -1)) return 0; - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return -ENOMEM; cmd->header.id = SVGA_3D_CMD_DX_DESTROY_CONTEXT; cmd->header.size = sizeof(cmd->body); cmd->body.cid = res->id; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); if (dev_priv->query_cid == res->id) dev_priv->query_cid_valid = false; vmw_resource_release_id(res); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c index 65e8e7a97724..3fc9a8db7aaa 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c @@ -175,7 +175,7 @@ static int vmw_cotable_unscrub(struct vmw_resource *res) WARN_ON_ONCE(bo->mem.mem_type != VMW_PL_MOB); dma_resv_assert_held(bo->base.resv); - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (!cmd) return -ENOMEM; @@ -188,7 +188,7 @@ static int vmw_cotable_unscrub(struct vmw_resource *res) cmd->body.mobid = bo->mem.start; cmd->body.validSizeInBytes = vcotbl->size_read_back; - vmw_fifo_commit_flush(dev_priv, sizeof(*cmd)); + vmw_cmd_commit_flush(dev_priv, sizeof(*cmd)); vcotbl->scrubbed = false; return 0; @@ -263,7 +263,7 @@ int vmw_cotable_scrub(struct vmw_resource *res, bool readback) if (readback) submit_size += sizeof(*cmd0); - cmd1 = VMW_FIFO_RESERVE(dev_priv, submit_size); + cmd1 = VMW_CMD_RESERVE(dev_priv, submit_size); if (!cmd1) return -ENOMEM; @@ -283,7 +283,7 @@ int vmw_cotable_scrub(struct vmw_resource *res, bool readback) cmd1->body.type = vcotbl->type; cmd1->body.mobid = SVGA3D_INVALID_ID; cmd1->body.validSizeInBytes = 0; - vmw_fifo_commit_flush(dev_priv, submit_size); + vmw_cmd_commit_flush(dev_priv, submit_size); vcotbl->scrubbed = true; /* Trigger a create() on next validate. 
*/ @@ -349,7 +349,7 @@ static int vmw_cotable_readback(struct vmw_resource *res) struct vmw_fence_obj *fence; if (!vcotbl->scrubbed) { - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (!cmd) return -ENOMEM; @@ -358,7 +358,7 @@ static int vmw_cotable_readback(struct vmw_resource *res) cmd->body.cid = vcotbl->ctx->id; cmd->body.type = vcotbl->type; vcotbl->size_read_back = res->backup_size; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); } (void) vmw_execbuf_fence_commands(NULL, dev_priv, &fence, NULL); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c index 43fb7ff27e99..5e16ef52f6c0 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c @@ -424,8 +424,7 @@ static int vmw_request_device_late(struct vmw_private *dev_priv) } if (dev_priv->cman) { - ret = vmw_cmdbuf_set_pool_size(dev_priv->cman, - 256*4096, 2*4096); + ret = vmw_cmdbuf_set_pool_size(dev_priv->cman, 256*4096); if (ret) { struct vmw_cmdbuf_man *man = dev_priv->cman; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h index fb531790f4bb..0acc008db88a 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h @@ -953,30 +953,29 @@ extern int vmw_fifo_init(struct vmw_private *dev_priv, extern void vmw_fifo_release(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo); extern void * -vmw_fifo_reserve_dx(struct vmw_private *dev_priv, uint32_t bytes, int ctx_id); -extern void vmw_fifo_commit(struct vmw_private *dev_priv, uint32_t bytes); -extern void vmw_fifo_commit_flush(struct vmw_private *dev_priv, uint32_t bytes); -extern int vmw_fifo_send_fence(struct vmw_private *dev_priv, - uint32_t *seqno); +vmw_cmd_ctx_reserve(struct vmw_private *dev_priv, uint32_t bytes, int ctx_id); +extern void vmw_cmd_commit(struct vmw_private *dev_priv, uint32_t bytes); +extern void vmw_cmd_commit_flush(struct vmw_private *dev_priv, uint32_t bytes); +extern int vmw_cmd_send_fence(struct vmw_private *dev_priv, uint32_t *seqno); +extern bool vmw_supports_3d(struct vmw_private *dev_priv); extern void vmw_fifo_ping_host(struct vmw_private *dev_priv, uint32_t reason); -extern bool vmw_fifo_have_3d(struct vmw_private *dev_priv); extern bool vmw_fifo_have_pitchlock(struct vmw_private *dev_priv); -extern int vmw_fifo_emit_dummy_query(struct vmw_private *dev_priv, - uint32_t cid); -extern int vmw_fifo_flush(struct vmw_private *dev_priv, - bool interruptible); +extern int vmw_cmd_emit_dummy_query(struct vmw_private *dev_priv, + uint32_t cid); +extern int vmw_cmd_flush(struct vmw_private *dev_priv, + bool interruptible); -#define VMW_FIFO_RESERVE_DX(__priv, __bytes, __ctx_id) \ +#define VMW_CMD_CTX_RESERVE(__priv, __bytes, __ctx_id) \ ({ \ - vmw_fifo_reserve_dx(__priv, __bytes, __ctx_id) ? : ({ \ + vmw_cmd_ctx_reserve(__priv, __bytes, __ctx_id) ? 
: ({ \ DRM_ERROR("FIFO reserve failed at %s for %u bytes\n", \ __func__, (unsigned int) __bytes); \ NULL; \ }); \ }) -#define VMW_FIFO_RESERVE(__priv, __bytes) \ - VMW_FIFO_RESERVE_DX(__priv, __bytes, SVGA3D_INVALID_ID) +#define VMW_CMD_RESERVE(__priv, __bytes) \ + VMW_CMD_CTX_RESERVE(__priv, __bytes, SVGA3D_INVALID_ID) /** * TTM glue - vmwgfx_ttm_glue.c @@ -1388,8 +1387,7 @@ struct vmw_cmdbuf_header; extern struct vmw_cmdbuf_man * vmw_cmdbuf_man_create(struct vmw_private *dev_priv); -extern int vmw_cmdbuf_set_pool_size(struct vmw_cmdbuf_man *man, - size_t size, size_t default_size); +extern int vmw_cmdbuf_set_pool_size(struct vmw_cmdbuf_man *man, size_t size); extern void vmw_cmdbuf_remove_pool(struct vmw_cmdbuf_man *man); extern void vmw_cmdbuf_man_destroy(struct vmw_cmdbuf_man *man); extern int vmw_cmdbuf_idle(struct vmw_cmdbuf_man *man, bool interruptible, diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c index 7c7d4b147d85..77178cdc80ac 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c @@ -724,7 +724,7 @@ static int vmw_rebind_all_dx_query(struct vmw_resource *ctx_res) if (!dx_query_mob || dx_query_mob->dx_query_ctx) return 0; - cmd = VMW_FIFO_RESERVE_DX(dev_priv, sizeof(*cmd), ctx_res->id); + cmd = VMW_CMD_CTX_RESERVE(dev_priv, sizeof(*cmd), ctx_res->id); if (cmd == NULL) return -ENOMEM; @@ -732,7 +732,7 @@ static int vmw_rebind_all_dx_query(struct vmw_resource *ctx_res) cmd->header.size = sizeof(cmd->body); cmd->body.cid = ctx_res->id; cmd->body.mobid = dx_query_mob->base.mem.start; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); vmw_context_bind_dx_query(ctx_res, dx_query_mob); @@ -1100,7 +1100,7 @@ static void vmw_query_bo_switch_commit(struct vmw_private *dev_priv, BUG_ON(!ctx_entry->valid); ctx = ctx_entry->res; - ret = vmw_fifo_emit_dummy_query(dev_priv, ctx->id); + ret = vmw_cmd_emit_dummy_query(dev_priv, ctx->id); if (unlikely(ret != 0)) VMW_DEBUG_USER("Out of fifo space for dummy query.\n"); @@ -3762,7 +3762,7 @@ int vmw_execbuf_fence_commands(struct drm_file *file_priv, /* p_handle implies file_priv. */ BUG_ON(p_handle != NULL && file_priv == NULL); - ret = vmw_fifo_send_fence(dev_priv, &sequence); + ret = vmw_cmd_send_fence(dev_priv, &sequence); if (unlikely(ret != 0)) { VMW_DEBUG_USER("Fence submission error. 
Syncing.\n"); synced = true; @@ -3876,10 +3876,10 @@ static int vmw_execbuf_submit_fifo(struct vmw_private *dev_priv, void *cmd; if (sw_context->dx_ctx_node) - cmd = VMW_FIFO_RESERVE_DX(dev_priv, command_size, + cmd = VMW_CMD_CTX_RESERVE(dev_priv, command_size, sw_context->dx_ctx_node->ctx->id); else - cmd = VMW_FIFO_RESERVE(dev_priv, command_size); + cmd = VMW_CMD_RESERVE(dev_priv, command_size); if (!cmd) return -ENOMEM; @@ -3888,7 +3888,7 @@ static int vmw_execbuf_submit_fifo(struct vmw_private *dev_priv, memcpy(cmd, kernel_commands, command_size); vmw_resource_relocations_apply(cmd, &sw_context->res_relocations); vmw_resource_relocations_free(&sw_context->res_relocations); - vmw_fifo_commit(dev_priv, command_size); + vmw_cmd_commit(dev_priv, command_size); return 0; } @@ -4325,7 +4325,7 @@ void __vmw_execbuf_release_pinned_bo(struct vmw_private *dev_priv, if (dev_priv->query_cid_valid) { BUG_ON(fence != NULL); - ret = vmw_fifo_emit_dummy_query(dev_priv, dev_priv->query_cid); + ret = vmw_cmd_emit_dummy_query(dev_priv, dev_priv->query_cid); if (ret) goto out_no_emit; dev_priv->query_cid_valid = false; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c index 770b906b18c3..bb0557541ca9 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c @@ -258,7 +258,7 @@ static void vmw_fb_dirty_flush(struct work_struct *work) if (w && h) { WARN_ON_ONCE(par->set_fb->funcs->dirty(cur_fb, NULL, 0, 0, &clip, 1)); - vmw_fifo_flush(vmw_priv, false); + vmw_cmd_flush(vmw_priv, false); } out_unlock: mutex_unlock(&par->bo_mutex); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gmr.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gmr.c index 83c0d5a3e4fd..964ddf1ca57a 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_gmr.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gmr.c @@ -51,7 +51,7 @@ static int vmw_gmr2_bind(struct vmw_private *dev_priv, uint32_t cmd_size = define_size + remap_size; uint32_t i; - cmd_orig = cmd = VMW_FIFO_RESERVE(dev_priv, cmd_size); + cmd_orig = cmd = VMW_CMD_RESERVE(dev_priv, cmd_size); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -98,7 +98,7 @@ static int vmw_gmr2_bind(struct vmw_private *dev_priv, BUG_ON(cmd != cmd_orig + cmd_size / sizeof(*cmd)); - vmw_fifo_commit(dev_priv, cmd_size); + vmw_cmd_commit(dev_priv, cmd_size); return 0; } @@ -110,7 +110,7 @@ static void vmw_gmr2_unbind(struct vmw_private *dev_priv, uint32_t define_size = sizeof(define_cmd) + 4; uint32_t *cmd; - cmd = VMW_FIFO_RESERVE(dev_priv, define_size); + cmd = VMW_CMD_RESERVE(dev_priv, define_size); if (unlikely(cmd == NULL)) return; @@ -120,7 +120,7 @@ static void vmw_gmr2_unbind(struct vmw_private *dev_priv, *cmd++ = SVGA_CMD_DEFINE_GMR2; memcpy(cmd, &define_cmd, sizeof(define_cmd)); - vmw_fifo_commit(dev_priv, define_size); + vmw_cmd_commit(dev_priv, define_size); } diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c index c21a841dfc6d..80af8772b8c2 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c @@ -51,7 +51,7 @@ int vmw_getparam_ioctl(struct drm_device *dev, void *data, param->value = vmw_overlay_num_free_overlays(dev_priv); break; case DRM_VMW_PARAM_3D: - param->value = vmw_fifo_have_3d(dev_priv) ? 1 : 0; + param->value = vmw_supports_3d(dev_priv) ? 
1 : 0; break; case DRM_VMW_PARAM_HW_CAPS: param->value = dev_priv->capabilities; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c index 2c72f87173c3..551070fa87a3 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c @@ -36,9 +36,6 @@ #include "vmwgfx_kms.h" -/* Might need a hrtimer here? */ -#define VMWGFX_PRESENT_RATE ((HZ / 60 > 0) ? HZ / 60 : 1) - void vmw_du_cleanup(struct vmw_display_unit *du) { drm_plane_cleanup(&du->primary); @@ -68,7 +65,7 @@ static int vmw_cursor_update_image(struct vmw_private *dev_priv, if (!image) return -EINVAL; - cmd = VMW_FIFO_RESERVE(dev_priv, cmd_size); + cmd = VMW_CMD_RESERVE(dev_priv, cmd_size); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -83,7 +80,7 @@ static int vmw_cursor_update_image(struct vmw_private *dev_priv, cmd->cursor.hotspotX = hotspotX; cmd->cursor.hotspotY = hotspotY; - vmw_fifo_commit_flush(dev_priv, cmd_size); + vmw_cmd_commit_flush(dev_priv, cmd_size); return 0; } @@ -1030,7 +1027,7 @@ static int vmw_framebuffer_bo_dirty(struct drm_framebuffer *framebuffer, break; } - vmw_fifo_flush(dev_priv, false); + vmw_cmd_flush(dev_priv, false); ttm_read_unlock(&dev_priv->reservation_sem); drm_modeset_unlock_all(&dev_priv->drm); @@ -1765,7 +1762,7 @@ int vmw_kms_present(struct vmw_private *dev_priv, if (ret) return ret; - vmw_fifo_flush(dev_priv, false); + vmw_cmd_flush(dev_priv, false); return 0; } @@ -2382,7 +2379,7 @@ int vmw_kms_helper_dirty(struct vmw_private *dev_priv, dirty->unit = unit; if (dirty->fifo_reserve_size > 0) { - dirty->cmd = VMW_FIFO_RESERVE(dev_priv, + dirty->cmd = VMW_CMD_RESERVE(dev_priv, dirty->fifo_reserve_size); if (!dirty->cmd) return -ENOMEM; @@ -2516,7 +2513,7 @@ int vmw_kms_update_proxy(struct vmw_resource *res, if (!clips) return 0; - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd) * num_clips); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd) * num_clips); if (!cmd) return -ENOMEM; @@ -2545,7 +2542,7 @@ int vmw_kms_update_proxy(struct vmw_resource *res, copy_size += sizeof(*cmd); } - vmw_fifo_commit(dev_priv, copy_size); + vmw_cmd_commit(dev_priv, copy_size); return 0; } @@ -2748,7 +2745,7 @@ int vmw_du_helper_plane_update(struct vmw_du_update_plane *update) goto out_unref; reserved_size = update->calc_fifo_size(update, num_hits); - cmd_start = VMW_FIFO_RESERVE(update->dev_priv, reserved_size); + cmd_start = VMW_CMD_RESERVE(update->dev_priv, reserved_size); if (!cmd_start) { ret = -ENOMEM; goto out_revert; @@ -2797,7 +2794,7 @@ int vmw_du_helper_plane_update(struct vmw_du_update_plane *update) if (reserved_size < submit_size) submit_size = 0; - vmw_fifo_commit(update->dev_priv, submit_size); + vmw_cmd_commit(update->dev_priv, submit_size); vmw_kms_helper_validation_finish(update->dev_priv, NULL, &val_ctx, update->out_fence, NULL); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c index 45b8fee92b2f..4a4ae14d9b9b 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c @@ -554,7 +554,7 @@ int vmw_kms_ldu_do_bo_dirty(struct vmw_private *dev_priv, } *cmd; fifo_size = sizeof(*cmd) * num_clips; - cmd = VMW_FIFO_RESERVE(dev_priv, fifo_size); + cmd = VMW_CMD_RESERVE(dev_priv, fifo_size); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -567,6 +567,6 @@ int vmw_kms_ldu_do_bo_dirty(struct vmw_private *dev_priv, cmd[i].body.height = clips->y2 - clips->y1; } - vmw_fifo_commit(dev_priv, fifo_size); + vmw_cmd_commit(dev_priv, fifo_size); return 0; } diff --git 
a/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c b/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c index 7f95ed6aa224..a372980fe6a5 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c @@ -148,7 +148,7 @@ static int vmw_setup_otable_base(struct vmw_private *dev_priv, mob->pt_level += VMW_MOBFMT_PTDEPTH_1 - SVGA3D_MOBFMT_PTDEPTH_1; } - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) { ret = -ENOMEM; goto out_no_fifo; @@ -170,7 +170,7 @@ static int vmw_setup_otable_base(struct vmw_private *dev_priv, */ BUG_ON(mob->pt_level == VMW_MOBFMT_PTDEPTH_2); - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); otable->page_table = mob; return 0; @@ -203,7 +203,7 @@ static void vmw_takedown_otable_base(struct vmw_private *dev_priv, return; bo = otable->page_table->pt_bo; - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return; @@ -215,7 +215,7 @@ static void vmw_takedown_otable_base(struct vmw_private *dev_priv, cmd->body.sizeInBytes = 0; cmd->body.validSizeInBytes = 0; cmd->body.ptDepth = SVGA3D_MOBFMT_INVALID; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); if (bo) { int ret; @@ -558,12 +558,12 @@ void vmw_mob_unbind(struct vmw_private *dev_priv, BUG_ON(ret != 0); } - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (cmd) { cmd->header.id = SVGA_3D_CMD_DESTROY_GB_MOB; cmd->header.size = sizeof(cmd->body); cmd->body.mobid = mob->id; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); } if (bo) { @@ -625,7 +625,7 @@ int vmw_mob_bind(struct vmw_private *dev_priv, vmw_fifo_resource_inc(dev_priv); - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) goto out_no_cmd_space; @@ -636,7 +636,7 @@ int vmw_mob_bind(struct vmw_private *dev_priv, cmd->body.base = mob->pt_root_page >> PAGE_SHIFT; cmd->body.sizeInBytes = num_data_pages * PAGE_SIZE; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); return 0; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c b/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c index cd7ed1650d60..d6d282c13b7f 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c @@ -122,7 +122,7 @@ static int vmw_overlay_send_put(struct vmw_private *dev_priv, fifo_size = sizeof(*cmds) + sizeof(*flush) + sizeof(*items) * num_items; - cmds = VMW_FIFO_RESERVE(dev_priv, fifo_size); + cmds = VMW_CMD_RESERVE(dev_priv, fifo_size); /* hardware has hung, can't do anything here */ if (!cmds) return -ENOMEM; @@ -169,7 +169,7 @@ static int vmw_overlay_send_put(struct vmw_private *dev_priv, fill_flush(flush, arg->stream_id); - vmw_fifo_commit(dev_priv, fifo_size); + vmw_cmd_commit(dev_priv, fifo_size); return 0; } @@ -192,7 +192,7 @@ static int vmw_overlay_send_stop(struct vmw_private *dev_priv, int ret; for (;;) { - cmds = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmds)); + cmds = VMW_CMD_RESERVE(dev_priv, sizeof(*cmds)); if (cmds) break; @@ -211,7 +211,7 @@ static int vmw_overlay_send_stop(struct vmw_private *dev_priv, cmds->body.items[0].value = false; fill_flush(&cmds->flush, stream_id); - vmw_fifo_commit(dev_priv, sizeof(*cmds)); + vmw_cmd_commit(dev_priv, sizeof(*cmds)); return 0; } diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c 
b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c index c0f156078dda..477452040a7a 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c @@ -827,7 +827,7 @@ int vmw_query_readback_all(struct vmw_buffer_object *dx_query_mob) dx_query_ctx = dx_query_mob->dx_query_ctx; dev_priv = dx_query_ctx->dev_priv; - cmd = VMW_FIFO_RESERVE_DX(dev_priv, sizeof(*cmd), dx_query_ctx->id); + cmd = VMW_CMD_CTX_RESERVE(dev_priv, sizeof(*cmd), dx_query_ctx->id); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -835,7 +835,7 @@ int vmw_query_readback_all(struct vmw_buffer_object *dx_query_mob) cmd->header.size = sizeof(cmd->body); cmd->body.cid = dx_query_ctx->id; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); /* Triggers a rebind the next time affected context is bound */ dx_query_mob->dx_query_ctx = NULL; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c index e60d0c570296..e4d87e9b7837 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c @@ -132,7 +132,7 @@ static int vmw_sou_fifo_create(struct vmw_private *dev_priv, BUG_ON(!sou->buffer); fifo_size = sizeof(*cmd); - cmd = VMW_FIFO_RESERVE(dev_priv, fifo_size); + cmd = VMW_CMD_RESERVE(dev_priv, fifo_size); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -153,7 +153,7 @@ static int vmw_sou_fifo_create(struct vmw_private *dev_priv, vmw_bo_get_guest_ptr(&sou->buffer->base, &cmd->obj.backingStore.ptr); cmd->obj.backingStore.pitch = mode->hdisplay * 4; - vmw_fifo_commit(dev_priv, fifo_size); + vmw_cmd_commit(dev_priv, fifo_size); sou->defined = true; @@ -181,7 +181,7 @@ static int vmw_sou_fifo_destroy(struct vmw_private *dev_priv, return 0; fifo_size = sizeof(*cmd); - cmd = VMW_FIFO_RESERVE(dev_priv, fifo_size); + cmd = VMW_CMD_RESERVE(dev_priv, fifo_size); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -189,7 +189,7 @@ static int vmw_sou_fifo_destroy(struct vmw_private *dev_priv, cmd->header.cmdType = SVGA_CMD_DESTROY_SCREEN; cmd->body.screenId = sou->base.unit; - vmw_fifo_commit(dev_priv, fifo_size); + vmw_cmd_commit(dev_priv, fifo_size); /* Force sync */ ret = vmw_fallback_wait(dev_priv, false, true, 0, false, 3*HZ); @@ -992,7 +992,7 @@ static int do_bo_define_gmrfb(struct vmw_private *dev_priv, if (depth == 32) depth = 24; - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (!cmd) return -ENOMEM; @@ -1003,7 +1003,7 @@ static int do_bo_define_gmrfb(struct vmw_private *dev_priv, cmd->body.bytesPerLine = framebuffer->base.pitches[0]; /* Buffer is reserved in vram or GMR */ vmw_bo_get_guest_ptr(&buf->base, &cmd->body.ptr); - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); return 0; } @@ -1029,7 +1029,7 @@ static void vmw_sou_surface_fifo_commit(struct vmw_kms_dirty *dirty) int i; if (!dirty->num_hits) { - vmw_fifo_commit(dirty->dev_priv, 0); + vmw_cmd_commit(dirty->dev_priv, 0); return; } @@ -1061,7 +1061,7 @@ static void vmw_sou_surface_fifo_commit(struct vmw_kms_dirty *dirty) blit->bottom -= sdirty->top; } - vmw_fifo_commit(dirty->dev_priv, region_size + sizeof(*cmd)); + vmw_cmd_commit(dirty->dev_priv, region_size + sizeof(*cmd)); sdirty->left = sdirty->top = S32_MAX; sdirty->right = sdirty->bottom = S32_MIN; @@ -1185,11 +1185,11 @@ int vmw_kms_sou_do_surface_dirty(struct vmw_private *dev_priv, static void vmw_sou_bo_fifo_commit(struct vmw_kms_dirty *dirty) { if (!dirty->num_hits) { - vmw_fifo_commit(dirty->dev_priv, 0); + 
vmw_cmd_commit(dirty->dev_priv, 0); return; } - vmw_fifo_commit(dirty->dev_priv, + vmw_cmd_commit(dirty->dev_priv, sizeof(struct vmw_kms_sou_bo_blit) * dirty->num_hits); } @@ -1295,11 +1295,11 @@ int vmw_kms_sou_do_bo_dirty(struct vmw_private *dev_priv, static void vmw_sou_readback_fifo_commit(struct vmw_kms_dirty *dirty) { if (!dirty->num_hits) { - vmw_fifo_commit(dirty->dev_priv, 0); + vmw_cmd_commit(dirty->dev_priv, 0); return; } - vmw_fifo_commit(dirty->dev_priv, + vmw_cmd_commit(dirty->dev_priv, sizeof(struct vmw_kms_sou_readback_blit) * dirty->num_hits); } diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c index e139fdfd1635..7d9d3c717bd9 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c @@ -222,7 +222,7 @@ static int vmw_gb_shader_create(struct vmw_resource *res) goto out_no_fifo; } - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) { ret = -ENOMEM; goto out_no_fifo; @@ -233,7 +233,7 @@ static int vmw_gb_shader_create(struct vmw_resource *res) cmd->body.shid = res->id; cmd->body.type = shader->type; cmd->body.sizeInBytes = shader->size; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); vmw_fifo_resource_inc(dev_priv); return 0; @@ -256,7 +256,7 @@ static int vmw_gb_shader_bind(struct vmw_resource *res, BUG_ON(bo->mem.mem_type != VMW_PL_MOB); - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -266,7 +266,7 @@ static int vmw_gb_shader_bind(struct vmw_resource *res, cmd->body.mobid = bo->mem.start; cmd->body.offsetInBytes = res->backup_offset; res->backup_dirty = false; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); return 0; } @@ -284,7 +284,7 @@ static int vmw_gb_shader_unbind(struct vmw_resource *res, BUG_ON(res->backup->base.mem.mem_type != VMW_PL_MOB); - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -293,7 +293,7 @@ static int vmw_gb_shader_unbind(struct vmw_resource *res, cmd->body.shid = res->id; cmd->body.mobid = SVGA3D_INVALID_ID; cmd->body.offsetInBytes = 0; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); /* * Create a fence object and fence the backup buffer. 
@@ -324,7 +324,7 @@ static int vmw_gb_shader_destroy(struct vmw_resource *res) mutex_lock(&dev_priv->binding_mutex); vmw_binding_res_list_scrub(&res->binding_head); - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) { mutex_unlock(&dev_priv->binding_mutex); return -ENOMEM; @@ -333,7 +333,7 @@ static int vmw_gb_shader_destroy(struct vmw_resource *res) cmd->header.id = SVGA_3D_CMD_DESTROY_GB_SHADER; cmd->header.size = sizeof(cmd->body); cmd->body.shid = res->id; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); mutex_unlock(&dev_priv->binding_mutex); vmw_resource_release_id(res); vmw_fifo_resource_dec(dev_priv); @@ -394,7 +394,7 @@ static int vmw_dx_shader_unscrub(struct vmw_resource *res) if (!list_empty(&shader->cotable_head) || !shader->committed) return 0; - cmd = VMW_FIFO_RESERVE_DX(dev_priv, sizeof(*cmd), shader->ctx->id); + cmd = VMW_CMD_CTX_RESERVE(dev_priv, sizeof(*cmd), shader->ctx->id); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -404,7 +404,7 @@ static int vmw_dx_shader_unscrub(struct vmw_resource *res) cmd->body.shid = shader->id; cmd->body.mobid = res->backup->base.mem.start; cmd->body.offsetInBytes = res->backup_offset; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); vmw_cotable_add_resource(shader->cotable, &shader->cotable_head); @@ -481,7 +481,7 @@ static int vmw_dx_shader_scrub(struct vmw_resource *res) return 0; WARN_ON_ONCE(!shader->committed); - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -491,7 +491,7 @@ static int vmw_dx_shader_scrub(struct vmw_resource *res) cmd->body.shid = res->id; cmd->body.mobid = SVGA3D_INVALID_ID; cmd->body.offsetInBytes = 0; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); res->id = -1; list_del_init(&shader->cotable_head); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_so.c b/drivers/gpu/drm/vmwgfx/vmwgfx_so.c index 3f97b61dd5d8..7369dd86d3a9 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_so.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_so.c @@ -170,7 +170,7 @@ static int vmw_view_create(struct vmw_resource *res) return 0; } - cmd = VMW_FIFO_RESERVE_DX(res->dev_priv, view->cmd_size, view->ctx->id); + cmd = VMW_CMD_CTX_RESERVE(res->dev_priv, view->cmd_size, view->ctx->id); if (!cmd) { mutex_unlock(&dev_priv->binding_mutex); return -ENOMEM; @@ -181,7 +181,7 @@ static int vmw_view_create(struct vmw_resource *res) /* Sid may have changed due to surface eviction. 
*/ WARN_ON(view->srf->id == SVGA3D_INVALID_ID); cmd->body.sid = view->srf->id; - vmw_fifo_commit(res->dev_priv, view->cmd_size); + vmw_cmd_commit(res->dev_priv, view->cmd_size); res->id = view->view_id; list_add_tail(&view->srf_head, &srf->view_list); vmw_cotable_add_resource(view->cotable, &view->cotable_head); @@ -213,14 +213,14 @@ static int vmw_view_destroy(struct vmw_resource *res) if (!view->committed || res->id == -1) return 0; - cmd = VMW_FIFO_RESERVE_DX(dev_priv, sizeof(*cmd), view->ctx->id); + cmd = VMW_CMD_CTX_RESERVE(dev_priv, sizeof(*cmd), view->ctx->id); if (!cmd) return -ENOMEM; cmd->header.id = vmw_view_destroy_cmds[view->view_type]; cmd->header.size = sizeof(cmd->body); cmd->body.view_id = view->view_id; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); res->id = -1; list_del_init(&view->cotable_head); list_del_init(&view->srf_head); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c index 83b06f7bd63d..44e38632c542 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c @@ -170,7 +170,7 @@ static int vmw_stdu_define_st(struct vmw_private *dev_priv, SVGA3dCmdDefineGBScreenTarget body; } *cmd; - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -188,7 +188,7 @@ static int vmw_stdu_define_st(struct vmw_private *dev_priv, stdu->base.set_gui_x = cmd->body.xRoot; stdu->base.set_gui_y = cmd->body.yRoot; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); stdu->defined = true; stdu->display_width = mode->hdisplay; @@ -229,7 +229,7 @@ static int vmw_stdu_bind_st(struct vmw_private *dev_priv, memset(&image, 0, sizeof(image)); image.sid = res ? 
res->id : SVGA3D_INVALID_ID; - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -239,7 +239,7 @@ static int vmw_stdu_bind_st(struct vmw_private *dev_priv, cmd->body.stid = stdu->base.unit; cmd->body.image = image; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); return 0; } @@ -293,7 +293,7 @@ static int vmw_stdu_update_st(struct vmw_private *dev_priv, return -EINVAL; } - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -301,7 +301,7 @@ static int vmw_stdu_update_st(struct vmw_private *dev_priv, 0, stdu->display_width, 0, stdu->display_height); - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); return 0; } @@ -329,7 +329,7 @@ static int vmw_stdu_destroy_st(struct vmw_private *dev_priv, if (unlikely(!stdu->defined)) return 0; - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) return -ENOMEM; @@ -338,7 +338,7 @@ static int vmw_stdu_destroy_st(struct vmw_private *dev_priv, cmd->body.stid = stdu->base.unit; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); /* Force sync */ ret = vmw_fallback_wait(dev_priv, false, true, 0, false, 3*HZ); @@ -499,7 +499,7 @@ static void vmw_stdu_bo_fifo_commit(struct vmw_kms_dirty *dirty) size_t blit_size = sizeof(*blit) * dirty->num_hits + sizeof(*suffix); if (!dirty->num_hits) { - vmw_fifo_commit(dirty->dev_priv, 0); + vmw_cmd_commit(dirty->dev_priv, 0); return; } @@ -522,7 +522,7 @@ static void vmw_stdu_bo_fifo_commit(struct vmw_kms_dirty *dirty) ddirty->top, ddirty->bottom); } - vmw_fifo_commit(dirty->dev_priv, sizeof(*cmd) + blit_size); + vmw_cmd_commit(dirty->dev_priv, sizeof(*cmd) + blit_size); stdu->display_srf->res.res_dirty = true; ddirty->left = ddirty->top = S32_MAX; @@ -628,7 +628,7 @@ static void vmw_stdu_bo_cpu_commit(struct vmw_kms_dirty *dirty) dev_priv = vmw_priv(stdu->base.crtc.dev); - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (!cmd) goto out_cleanup; @@ -636,7 +636,7 @@ static void vmw_stdu_bo_cpu_commit(struct vmw_kms_dirty *dirty) region.x1, region.x2, region.y1, region.y2); - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); } out_cleanup: @@ -795,7 +795,7 @@ static void vmw_kms_stdu_surface_fifo_commit(struct vmw_kms_dirty *dirty) size_t commit_size; if (!dirty->num_hits) { - vmw_fifo_commit(dirty->dev_priv, 0); + vmw_cmd_commit(dirty->dev_priv, 0); return; } @@ -817,7 +817,7 @@ static void vmw_kms_stdu_surface_fifo_commit(struct vmw_kms_dirty *dirty) vmw_stdu_populate_update(update, stdu->base.unit, sdirty->left, sdirty->right, sdirty->top, sdirty->bottom); - vmw_fifo_commit(dirty->dev_priv, commit_size); + vmw_cmd_commit(dirty->dev_priv, commit_size); sdirty->left = sdirty->top = S32_MAX; sdirty->right = sdirty->bottom = S32_MIN; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_streamoutput.c b/drivers/gpu/drm/vmwgfx/vmwgfx_streamoutput.c index 193192456663..1dd042a20a66 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_streamoutput.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_streamoutput.c @@ -99,7 +99,7 @@ static int vmw_dx_streamoutput_unscrub(struct vmw_resource *res) if (!list_empty(&so->cotable_head) || !so->committed ) return 0; - cmd = VMW_FIFO_RESERVE_DX(dev_priv, sizeof(*cmd), so->ctx->id); + cmd 
= VMW_CMD_CTX_RESERVE(dev_priv, sizeof(*cmd), so->ctx->id); if (!cmd) return -ENOMEM; @@ -109,7 +109,7 @@ static int vmw_dx_streamoutput_unscrub(struct vmw_resource *res) cmd->body.mobid = res->backup->base.mem.start; cmd->body.offsetInBytes = res->backup_offset; cmd->body.sizeInBytes = so->size; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); vmw_cotable_add_resource(so->cotable, &so->cotable_head); @@ -172,7 +172,7 @@ static int vmw_dx_streamoutput_scrub(struct vmw_resource *res) WARN_ON_ONCE(!so->committed); - cmd = VMW_FIFO_RESERVE_DX(dev_priv, sizeof(*cmd), so->ctx->id); + cmd = VMW_CMD_CTX_RESERVE(dev_priv, sizeof(*cmd), so->ctx->id); if (!cmd) return -ENOMEM; @@ -182,7 +182,7 @@ static int vmw_dx_streamoutput_scrub(struct vmw_resource *res) cmd->body.mobid = SVGA3D_INVALID_ID; cmd->body.offsetInBytes = 0; cmd->body.sizeInBytes = so->size; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); res->id = -1; list_del_init(&so->cotable_head); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c index 3914bfee0533..dabe3152e5dc 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c @@ -372,12 +372,12 @@ static void vmw_hw_surface_destroy(struct vmw_resource *res) if (res->id != -1) { - cmd = VMW_FIFO_RESERVE(dev_priv, vmw_surface_destroy_size()); + cmd = VMW_CMD_RESERVE(dev_priv, vmw_surface_destroy_size()); if (unlikely(!cmd)) return; vmw_surface_destroy_encode(res->id, cmd); - vmw_fifo_commit(dev_priv, vmw_surface_destroy_size()); + vmw_cmd_commit(dev_priv, vmw_surface_destroy_size()); /* * used_memory_size_atomic, or separate lock @@ -440,14 +440,14 @@ static int vmw_legacy_srf_create(struct vmw_resource *res) */ submit_size = vmw_surface_define_size(srf); - cmd = VMW_FIFO_RESERVE(dev_priv, submit_size); + cmd = VMW_CMD_RESERVE(dev_priv, submit_size); if (unlikely(!cmd)) { ret = -ENOMEM; goto out_no_fifo; } vmw_surface_define_encode(srf, cmd); - vmw_fifo_commit(dev_priv, submit_size); + vmw_cmd_commit(dev_priv, submit_size); vmw_fifo_resource_inc(dev_priv); /* @@ -492,14 +492,14 @@ static int vmw_legacy_srf_dma(struct vmw_resource *res, BUG_ON(!val_buf->bo); submit_size = vmw_surface_dma_size(srf); - cmd = VMW_FIFO_RESERVE(dev_priv, submit_size); + cmd = VMW_CMD_RESERVE(dev_priv, submit_size); if (unlikely(!cmd)) return -ENOMEM; vmw_bo_get_guest_ptr(val_buf->bo, &ptr); vmw_surface_dma_encode(srf, cmd, &ptr, bind); - vmw_fifo_commit(dev_priv, submit_size); + vmw_cmd_commit(dev_priv, submit_size); /* * Create a fence object and fence the backup buffer. @@ -578,12 +578,12 @@ static int vmw_legacy_srf_destroy(struct vmw_resource *res) */ submit_size = vmw_surface_destroy_size(); - cmd = VMW_FIFO_RESERVE(dev_priv, submit_size); + cmd = VMW_CMD_RESERVE(dev_priv, submit_size); if (unlikely(!cmd)) return -ENOMEM; vmw_surface_destroy_encode(res->id, cmd); - vmw_fifo_commit(dev_priv, submit_size); + vmw_cmd_commit(dev_priv, submit_size); /* * Surface memory usage accounting. 
@@ -1121,7 +1121,7 @@ static int vmw_gb_surface_create(struct vmw_resource *res) submit_len = sizeof(*cmd); } - cmd = VMW_FIFO_RESERVE(dev_priv, submit_len); + cmd = VMW_CMD_RESERVE(dev_priv, submit_len); cmd2 = (typeof(cmd2))cmd; cmd3 = (typeof(cmd3))cmd; cmd4 = (typeof(cmd4))cmd; @@ -1188,7 +1188,7 @@ static int vmw_gb_surface_create(struct vmw_resource *res) cmd->body.size.depth = metadata->base_size.depth; } - vmw_fifo_commit(dev_priv, submit_len); + vmw_cmd_commit(dev_priv, submit_len); return 0; @@ -1219,7 +1219,7 @@ static int vmw_gb_surface_bind(struct vmw_resource *res, submit_size = sizeof(*cmd1) + (res->backup_dirty ? sizeof(*cmd2) : 0); - cmd1 = VMW_FIFO_RESERVE(dev_priv, submit_size); + cmd1 = VMW_CMD_RESERVE(dev_priv, submit_size); if (unlikely(!cmd1)) return -ENOMEM; @@ -1233,7 +1233,7 @@ static int vmw_gb_surface_bind(struct vmw_resource *res, cmd2->header.size = sizeof(cmd2->body); cmd2->body.sid = res->id; } - vmw_fifo_commit(dev_priv, submit_size); + vmw_cmd_commit(dev_priv, submit_size); if (res->backup->dirty && res->backup_dirty) { /* We've just made a full upload. Cear dirty regions. */ @@ -1272,7 +1272,7 @@ static int vmw_gb_surface_unbind(struct vmw_resource *res, BUG_ON(bo->mem.mem_type != VMW_PL_MOB); submit_size = sizeof(*cmd3) + (readback ? sizeof(*cmd1) : sizeof(*cmd2)); - cmd = VMW_FIFO_RESERVE(dev_priv, submit_size); + cmd = VMW_CMD_RESERVE(dev_priv, submit_size); if (unlikely(!cmd)) return -ENOMEM; @@ -1295,7 +1295,7 @@ static int vmw_gb_surface_unbind(struct vmw_resource *res, cmd3->body.sid = res->id; cmd3->body.mobid = SVGA3D_INVALID_ID; - vmw_fifo_commit(dev_priv, submit_size); + vmw_cmd_commit(dev_priv, submit_size); /* * Create a fence object and fence the backup buffer. @@ -1328,7 +1328,7 @@ static int vmw_gb_surface_destroy(struct vmw_resource *res) vmw_view_surface_list_destroy(dev_priv, &srf->view_list); vmw_binding_res_list_scrub(&res->binding_head); - cmd = VMW_FIFO_RESERVE(dev_priv, sizeof(*cmd)); + cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(!cmd)) { mutex_unlock(&dev_priv->binding_mutex); return -ENOMEM; @@ -1337,7 +1337,7 @@ static int vmw_gb_surface_destroy(struct vmw_resource *res) cmd->header.id = SVGA_3D_CMD_DESTROY_GB_SURFACE; cmd->header.size = sizeof(cmd->body); cmd->body.sid = res->id; - vmw_fifo_commit(dev_priv, sizeof(*cmd)); + vmw_cmd_commit(dev_priv, sizeof(*cmd)); mutex_unlock(&dev_priv->binding_mutex); vmw_resource_release_id(res); vmw_fifo_resource_dec(dev_priv); @@ -1896,7 +1896,7 @@ static int vmw_surface_dirty_sync(struct vmw_resource *res) goto out; alloc_size = num_dirty * ((has_dx) ? 
sizeof(*cmd1) : sizeof(*cmd2)); - cmd = VMW_FIFO_RESERVE(dev_priv, alloc_size); + cmd = VMW_CMD_RESERVE(dev_priv, alloc_size); if (!cmd) return -ENOMEM; @@ -1932,7 +1932,7 @@ static int vmw_surface_dirty_sync(struct vmw_resource *res) } } - vmw_fifo_commit(dev_priv, alloc_size); + vmw_cmd_commit(dev_priv, alloc_size); out: memset(&dirty->boxes[0], 0, sizeof(dirty->boxes[0]) * dirty->num_subres); @@ -2032,14 +2032,14 @@ static int vmw_surface_clean(struct vmw_resource *res) } *cmd; alloc_size = sizeof(*cmd); - cmd = VMW_FIFO_RESERVE(dev_priv, alloc_size); + cmd = VMW_CMD_RESERVE(dev_priv, alloc_size); if (!cmd) return -ENOMEM; cmd->header.id = SVGA_3D_CMD_READBACK_GB_SURFACE; cmd->header.size = sizeof(cmd->body); cmd->body.sid = res->id; - vmw_fifo_commit(dev_priv, alloc_size); + vmw_cmd_commit(dev_priv, alloc_size); return 0; } From patchwork Tue Dec 1 20:18:28 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zack Rusin X-Patchwork-Id: 11943729 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 143B6C64E7A for ; Tue, 1 Dec 2020 20:33:38 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 998092151B for ; Tue, 1 Dec 2020 20:33:37 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 998092151B Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=vmware.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id E98686E8D0; Tue, 1 Dec 2020 20:33:36 +0000 (UTC) Received: from EX13-EDG-OU-001.vmware.com (ex13-edg-ou-001.vmware.com [208.91.0.189]) by gabe.freedesktop.org (Postfix) with ESMTPS id 2ABAD6E8D0 for ; Tue, 1 Dec 2020 20:33:36 +0000 (UTC) Received: from sc9-mailhost3.vmware.com (10.113.161.73) by EX13-EDG-OU-001.vmware.com (10.113.208.155) with Microsoft SMTP Server id 15.0.1156.6; Tue, 1 Dec 2020 12:18:31 -0800 Received: from vertex.vmware.com (unknown [10.21.244.133]) by sc9-mailhost3.vmware.com (Postfix) with ESMTP id BF54B20AB9; Tue, 1 Dec 2020 12:18:33 -0800 (PST) From: Zack Rusin To: Subject: [PATCH 8/8] drm/vmwgfx: Fix display register usage for some older configs Date: Tue, 1 Dec 2020 15:18:28 -0500 Message-ID: <20201201201828.808888-8-zackr@vmware.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20201201201828.808888-1-zackr@vmware.com> References: <20201201201828.808888-1-zackr@vmware.com> MIME-Version: 1.0 Received-SPF: None (EX13-EDG-OU-001.vmware.com: zackr@vmware.com does not designate permitted sender hosts) X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Martin Krastev , Roland Scheidegger Errors-To: 
dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel"
We can't set the SVGA_REG_DISPLAY_ID register to an invalid value because that makes the device reset the fb, which causes nasty flicker (due to destruction and creation of a new fb). Also, we can't use the SVGA_REG_BITS_PER_PIXEL register when SVGA_CAP_8BIT_EMULATION is not supported.
Signed-off-by: Zack Rusin Reviewed-by: Martin Krastev Reviewed-by: Roland Scheidegger --- drivers/gpu/drm/vmwgfx/vmwgfx_kms.c | 3 ++- drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c | 1 - 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c index 551070fa87a3..31218bf6db51 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c @@ -1873,7 +1873,8 @@ int vmw_kms_write_svga(struct vmw_private *vmw_priv, vmw_fifo_mem_write(vmw_priv, SVGA_FIFO_PITCHLOCK, pitch); vmw_write(vmw_priv, SVGA_REG_WIDTH, width); vmw_write(vmw_priv, SVGA_REG_HEIGHT, height); - vmw_write(vmw_priv, SVGA_REG_BITS_PER_PIXEL, bpp); + if ((vmw_priv->capabilities & SVGA_CAP_8BIT_EMULATION) != 0) + vmw_write(vmw_priv, SVGA_REG_BITS_PER_PIXEL, bpp); if (vmw_read(vmw_priv, SVGA_REG_DEPTH) != depth) { DRM_ERROR("Invalid depth %u for %u bpp, host expects %u\n", diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c index 4a4ae14d9b9b..fa8ce1b9de13 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c @@ -125,7 +125,6 @@ static int vmw_ldu_commit_list(struct vmw_private *dev_priv) vmw_write(dev_priv, SVGA_REG_DISPLAY_POSITION_Y, crtc->y); vmw_write(dev_priv, SVGA_REG_DISPLAY_WIDTH, crtc->mode.hdisplay); vmw_write(dev_priv, SVGA_REG_DISPLAY_HEIGHT, crtc->mode.vdisplay); - vmw_write(dev_priv, SVGA_REG_DISPLAY_ID, SVGA_ID_INVALID); i++; }
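
The guard added in vmw_kms_write_svga() above is simply a bitmask test against the capabilities word the virtual device advertises: if the host does not report 8-bit emulation support, the optional register is never touched. The standalone sketch below models that pattern outside the driver, with stub types and a mocked register write; the struct, the fake_vmw_write() helper, and the numeric values chosen for the capability flag and register index are illustrative stand-ins, not the driver's real definitions from svga_reg.h.

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for the real SVGA definitions; values are made up. */
#define FAKE_SVGA_CAP_8BIT_EMULATION (1u << 8)  /* assumed bit position, demo only */
#define FAKE_SVGA_REG_BITS_PER_PIXEL 7u         /* assumed register index, demo only */

struct fake_vmw_private {
	uint32_t capabilities;  /* capability bits advertised by the (mock) host */
};

/* Mock of a device register write; just logs what would be written. */
static void fake_vmw_write(struct fake_vmw_private *priv, uint32_t reg, uint32_t value)
{
	(void)priv;
	printf("write reg %u = %u\n", reg, value);
}

/* Mirror of the fix: only touch BITS_PER_PIXEL when the host advertises
 * 8-bit emulation support; otherwise skip the write entirely. */
static void set_bpp_if_supported(struct fake_vmw_private *priv, uint32_t bpp)
{
	if ((priv->capabilities & FAKE_SVGA_CAP_8BIT_EMULATION) != 0)
		fake_vmw_write(priv, FAKE_SVGA_REG_BITS_PER_PIXEL, bpp);
}

int main(void)
{
	struct fake_vmw_private with_cap = { .capabilities = FAKE_SVGA_CAP_8BIT_EMULATION };
	struct fake_vmw_private without_cap = { .capabilities = 0 };

	set_bpp_if_supported(&with_cap, 32);    /* performs the mocked register write */
	set_bpp_if_supported(&without_cap, 32); /* silently skipped on this "host" */
	return 0;
}

Skipping the write entirely, rather than substituting some fallback value, matches the intent of the fix: on hosts without 8-bit emulation the register is simply left alone, just as the removed SVGA_REG_DISPLAY_ID write is dropped rather than replaced.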