From patchwork Fri Apr 30 09:25:07 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 12232857
From: Christian König
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 12/13] drm/vmwgfx: switch the TTM backends to self alloc
Date: Fri, 30 Apr 2021 11:25:07 +0200
Message-Id: <20210430092508.60710-12-christian.koenig@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To:
 <20210430092508.60710-1-christian.koenig@amd.com>
References: <20210430092508.60710-1-christian.koenig@amd.com>
List-Id: Direct Rendering Infrastructure - Development
Cc: daniel.vetter@ffwll.ch, matthew.william.auld@gmail.com

Similar to the TTM range manager.

Signed-off-by: Christian König
Reviewed-by: Matthew Auld
---
 drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c | 18 +++++----
 drivers/gpu/drm/vmwgfx/vmwgfx_thp.c           | 37 ++++++++++---------
 2 files changed, 31 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
index 1774960d1b89..82a5e6489810 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
@@ -57,6 +57,12 @@ static int vmw_gmrid_man_get_node(struct ttm_resource_manager *man,
 	struct vmwgfx_gmrid_man *gman = to_gmrid_manager(man);
 	int id;
 
+	mem->mm_node = kmalloc(sizeof(*mem), GFP_KERNEL);
+	if (!mem->mm_node)
+		return -ENOMEM;
+
+	ttm_resource_init(bo, place, mem->mm_node);
+
 	id = ida_alloc_max(&gman->gmr_ida, gman->max_gmr_ids - 1, GFP_KERNEL);
 	if (id < 0)
 		return id;
@@ -87,13 +93,11 @@ static void vmw_gmrid_man_put_node(struct ttm_resource_manager *man,
 {
 	struct vmwgfx_gmrid_man *gman = to_gmrid_manager(man);
 
-	if (mem->mm_node) {
-		ida_free(&gman->gmr_ida, mem->start);
-		spin_lock(&gman->lock);
-		gman->used_gmr_pages -= mem->num_pages;
-		spin_unlock(&gman->lock);
-		mem->mm_node = NULL;
-	}
+	ida_free(&gman->gmr_ida, mem->start);
+	spin_lock(&gman->lock);
+	gman->used_gmr_pages -= mem->num_pages;
+	spin_unlock(&gman->lock);
+	kfree(mem->mm_node);
 }
 
 static const struct ttm_resource_manager_func vmw_gmrid_manager_func;
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_thp.c b/drivers/gpu/drm/vmwgfx/vmwgfx_thp.c
index 5ccc35b3194c..8765835696ac 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_thp.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_thp.c
@@ -7,6 +7,7 @@
 #include "vmwgfx_drv.h"
 #include <drm/ttm/ttm_bo_driver.h>
 #include <drm/ttm/ttm_placement.h>
+#include <drm/ttm/ttm_range_manager.h>
 
 /**
  * struct vmw_thp_manager - Range manager implementing huge page alignment
@@ -54,16 +55,18 @@ static int vmw_thp_get_node(struct ttm_resource_manager *man,
 {
 	struct vmw_thp_manager *rman = to_thp_manager(man);
 	struct drm_mm *mm = &rman->mm;
-	struct drm_mm_node *node;
+	struct ttm_range_mgr_node *node;
 	unsigned long align_pages;
 	unsigned long lpfn;
 	enum drm_mm_insert_mode mode = DRM_MM_INSERT_BEST;
 	int ret;
 
-	node = kzalloc(sizeof(*node), GFP_KERNEL);
+	node = kzalloc(struct_size(node, mm_nodes, 1), GFP_KERNEL);
 	if (!node)
 		return -ENOMEM;
 
+	ttm_resource_init(bo, place, &node->base);
+
 	lpfn = place->lpfn;
 	if (!lpfn)
 		lpfn = man->size;
@@ -76,8 +79,9 @@ static int vmw_thp_get_node(struct ttm_resource_manager *man,
 	if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)) {
 		align_pages = (HPAGE_PUD_SIZE >> PAGE_SHIFT);
 		if (mem->num_pages >= align_pages) {
-			ret = vmw_thp_insert_aligned(bo, mm, node, align_pages,
-						     place, mem, lpfn, mode);
+			ret = vmw_thp_insert_aligned(bo, mm, &node->mm_nodes[0],
+						     align_pages, place, mem,
+						     lpfn, mode);
 			if (!ret)
 				goto found_unlock;
 		}
@@ -85,14 +89,15 @@ static int vmw_thp_get_node(struct ttm_resource_manager *man,
 
 	align_pages = (HPAGE_PMD_SIZE >> PAGE_SHIFT);
 	if (mem->num_pages >= align_pages) {
-		ret = vmw_thp_insert_aligned(bo, mm, node, align_pages, place,
-					     mem, lpfn, mode);
+		ret = vmw_thp_insert_aligned(bo, mm, &node->mm_nodes[0],
+					     align_pages, place, mem, lpfn,
+					     mode);
 		if (!ret)
 			goto found_unlock;
 	}
 
-	ret = drm_mm_insert_node_in_range(mm, node, mem->num_pages,
-					  bo->page_alignment, 0,
+	ret = drm_mm_insert_node_in_range(mm, &node->mm_nodes[0],
+					  mem->num_pages, bo->page_alignment, 0,
 					  place->fpfn, lpfn, mode);
 found_unlock:
 	spin_unlock(&rman->lock);
 
@@ -100,8 +105,8 @@ static int vmw_thp_get_node(struct ttm_resource_manager *man,
 	if (unlikely(ret)) {
 		kfree(node);
 	} else {
-		mem->mm_node = node;
-		mem->start = node->start;
+		mem->mm_node = &node->mm_nodes[0];
+		mem->start = node->mm_nodes[0].start;
 	}
 
 	return ret;
@@ -113,15 +118,13 @@ static void vmw_thp_put_node(struct ttm_resource_manager *man,
 			     struct ttm_resource *mem)
 {
 	struct vmw_thp_manager *rman = to_thp_manager(man);
+	struct ttm_range_mgr_node *node = mem->mm_node;
 
-	if (mem->mm_node) {
-		spin_lock(&rman->lock);
-		drm_mm_remove_node(mem->mm_node);
-		spin_unlock(&rman->lock);
+	spin_lock(&rman->lock);
+	drm_mm_remove_node(&node->mm_nodes[0]);
+	spin_unlock(&rman->lock);
 
-		kfree(mem->mm_node);
-		mem->mm_node = NULL;
-	}
+	kfree(node);
 }
 
 int vmw_thp_init(struct vmw_private *dev_priv)