From patchwork Fri Apr 30 09:24:56 2021
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 12232837
From: Christian König
To: dri-devel@lists.freedesktop.org
Cc: daniel.vetter@ffwll.ch, matthew.william.auld@gmail.com
Subject: [PATCH 01/13] drm/ttm: add ttm_sys_manager v2
Date: Fri, 30 Apr 2021 11:24:56 +0200
Message-Id: <20210430092508.60710-1-christian.koenig@amd.com>

Add a separate manager for the system domain and make function tables
mandatory.

v2: debug is still optional

Signed-off-by: Christian König
Reviewed-by: Matthew Auld
---
 drivers/gpu/drm/ttm/Makefile          |  2 +-
 drivers/gpu/drm/ttm/ttm_device.c      | 17 +----------
 drivers/gpu/drm/ttm/ttm_module.h      |  3 ++
 drivers/gpu/drm/ttm/ttm_resource.c    | 11 ++------
 drivers/gpu/drm/ttm/ttm_sys_manager.c | 40 +++++++++++++++++++++++++++
 5 files changed, 48 insertions(+), 25 deletions(-)
 create mode 100644 drivers/gpu/drm/ttm/ttm_sys_manager.c

diff --git a/drivers/gpu/drm/ttm/Makefile b/drivers/gpu/drm/ttm/Makefile
index 40e5e9da7953..f906b22959cf 100644
--- a/drivers/gpu/drm/ttm/Makefile
+++ b/drivers/gpu/drm/ttm/Makefile
@@ -4,7 +4,7 @@
 
 ttm-y := ttm_tt.o ttm_bo.o ttm_bo_util.o ttm_bo_vm.o ttm_module.o \
 	ttm_execbuf_util.o ttm_range_manager.o ttm_resource.o ttm_pool.o \
-	ttm_device.o
+	ttm_device.o ttm_sys_manager.o
 ttm-$(CONFIG_AGP) += ttm_agp_backend.o
 
 obj-$(CONFIG_DRM_TTM) += ttm.o
diff --git a/drivers/gpu/drm/ttm/ttm_device.c b/drivers/gpu/drm/ttm/ttm_device.c
index b169e9a4f5d4..460953dcad11 100644
--- a/drivers/gpu/drm/ttm/ttm_device.c
+++ b/drivers/gpu/drm/ttm/ttm_device.c
@@ -165,21 +165,6 @@ int ttm_device_swapout(struct ttm_device *bdev, struct ttm_operation_ctx *ctx,
 }
 EXPORT_SYMBOL(ttm_device_swapout);
 
-static void ttm_init_sysman(struct ttm_device *bdev)
-{
-	struct ttm_resource_manager *man = &bdev->sysman;
-
-	/*
-	 * Initialize the system memory buffer type.
-	 * Other types need to be driver / IOCTL initialized.
-	 */
-	man->use_tt = true;
-
-	ttm_resource_manager_init(man, 0);
-	ttm_set_driver_manager(bdev, TTM_PL_SYSTEM, man);
-	ttm_resource_manager_set_used(man, true);
-}
-
 static void ttm_device_delayed_workqueue(struct work_struct *work)
 {
 	struct ttm_device *bdev =
@@ -222,7 +207,7 @@ int ttm_device_init(struct ttm_device *bdev, struct ttm_device_funcs *funcs,
 
 	bdev->funcs = funcs;
 
-	ttm_init_sysman(bdev);
+	ttm_sys_man_init(bdev);
 	ttm_pool_init(&bdev->pool, dev, use_dma_alloc, use_dma32);
 	bdev->vma_manager = vma_manager;
diff --git a/drivers/gpu/drm/ttm/ttm_module.h b/drivers/gpu/drm/ttm/ttm_module.h
index d7cac5d4b835..26564a98958f 100644
--- a/drivers/gpu/drm/ttm/ttm_module.h
+++ b/drivers/gpu/drm/ttm/ttm_module.h
@@ -34,7 +34,10 @@
 #define TTM_PFX "[TTM] "
 
 struct dentry;
+struct ttm_device;
 
 extern struct dentry *ttm_debugfs_root;
 
+int ttm_sys_man_init(struct ttm_device *bdev);
+
 #endif /* _TTM_MODULE_H_ */
diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
index 04f2eef653ab..fc351700d035 100644
--- a/drivers/gpu/drm/ttm/ttm_resource.c
+++ b/drivers/gpu/drm/ttm/ttm_resource.c
@@ -33,9 +33,6 @@ int ttm_resource_alloc(struct ttm_buffer_object *bo,
 		ttm_manager_type(bo->bdev, res->mem_type);
 
 	res->mm_node = NULL;
-	if (!man->func || !man->func->alloc)
-		return 0;
-
 	return man->func->alloc(man, bo, place, res);
 }
 
@@ -44,9 +41,7 @@ void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource *res)
 	struct ttm_resource_manager *man =
 		ttm_manager_type(bo->bdev, res->mem_type);
 
-	if (man->func && man->func->free)
-		man->func->free(man, res);
-
+	man->func->free(man, res);
 	res->mm_node = NULL;
 	res->mem_type = TTM_PL_SYSTEM;
 }
@@ -139,7 +134,7 @@ void ttm_resource_manager_debug(struct ttm_resource_manager *man,
 	drm_printf(p, "  use_type: %d\n", man->use_type);
 	drm_printf(p, "  use_tt: %d\n", man->use_tt);
 	drm_printf(p, "  size: %llu\n", man->size);
-	if (man->func && man->func->debug)
-		(*man->func->debug)(man, p);
+	if (man->func->debug)
+		man->func->debug(man, p);
 }
 EXPORT_SYMBOL(ttm_resource_manager_debug);
diff --git a/drivers/gpu/drm/ttm/ttm_sys_manager.c b/drivers/gpu/drm/ttm/ttm_sys_manager.c
new file mode 100644
index 000000000000..ed92615214e3
--- /dev/null
+++ b/drivers/gpu/drm/ttm/ttm_sys_manager.c
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+
+#include
+#include
+#include
+
+static int ttm_sys_man_alloc(struct ttm_resource_manager *man,
+			     struct ttm_buffer_object *bo,
+			     const struct ttm_place *place,
+			     struct ttm_resource *mem)
+{
+	return 0;
+}
+
+static void ttm_sys_man_free(struct ttm_resource_manager *man,
+			     struct ttm_resource *mem)
+{
+}
+
+static const struct ttm_resource_manager_func ttm_sys_manager_func = {
+	.alloc = ttm_sys_man_alloc,
+	.free = ttm_sys_man_free,
+};
+
+int ttm_sys_man_init(struct ttm_device *bdev)
+{
+	struct ttm_resource_manager *man = &bdev->sysman;
+
+	/*
+	 * Initialize the system memory buffer type.
+	 * Other types need to be driver / IOCTL initialized.
+	 */
+	man->use_tt = true;
+	man->func = &ttm_sys_manager_func;
+
+	ttm_resource_manager_init(man, 0);
+	ttm_set_driver_manager(bdev, TTM_PL_SYSTEM, man);
+	ttm_resource_manager_set_used(man, true);
+	return 0;
+}
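For illustration, here is a minimal sketch (not part of the patch; the
foo_* names are hypothetical) of what the new contract asks of a
driver-side resource manager: .alloc and .free become mandatory, while
.debug may still be left out, per the v2 note above.

/* Hypothetical driver manager under the mandatory-function-table rule. */
static int foo_man_alloc(struct ttm_resource_manager *man,
			 struct ttm_buffer_object *bo,
			 const struct ttm_place *place,
			 struct ttm_resource *mem)
{
	/* hand out space from the driver's allocator here */
	return 0;
}

static void foo_man_free(struct ttm_resource_manager *man,
			 struct ttm_resource *mem)
{
	/* release whatever foo_man_alloc() handed out */
}

static const struct ttm_resource_manager_func foo_man_func = {
	.alloc = foo_man_alloc,	/* mandatory after this patch */
	.free = foo_man_free,	/* mandatory after this patch */
	/* .debug omitted: still optional (v2) */
};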
From patchwork Fri Apr 30 09:24:57 2021
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 12232841
From: Christian König
To: dri-devel@lists.freedesktop.org
Cc: daniel.vetter@ffwll.ch, matthew.william.auld@gmail.com
Subject: [PATCH 02/13] drm/ttm: always initialize the full ttm_resource
Date: Fri, 30 Apr 2021 11:24:57 +0200
Message-Id: <20210430092508.60710-2-christian.koenig@amd.com>
In-Reply-To: <20210430092508.60710-1-christian.koenig@amd.com>
References: <20210430092508.60710-1-christian.koenig@amd.com>

Init all fields in ttm_resource_alloc() when we create a new resource.

Signed-off-by: Christian König
Reviewed-by: Matthew Auld
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c |  2 --
 drivers/gpu/drm/ttm/ttm_bo.c            | 26 ++++---------------------
 drivers/gpu/drm/ttm/ttm_bo_util.c       |  4 ++--
 drivers/gpu/drm/ttm/ttm_resource.c      |  9 +++++++++
 4 files changed, 15 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 7ba761e833ba..2f7580d2ee71 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1015,8 +1015,6 @@ int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo)
 	} else {
 
 		/* allocate GART space */
-		tmp = bo->mem;
-		tmp.mm_node = NULL;
 		placement.num_placement = 1;
 		placement.placement = &placements;
 		placement.num_busy_placement = 1;
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index df63a07a70de..55f1ddcf22b6 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -507,11 +507,6 @@ static int ttm_bo_evict(struct ttm_buffer_object *bo,
 		return ttm_tt_create(bo, false);
 	}
 
-	evict_mem = bo->mem;
-	evict_mem.mm_node = NULL;
-	evict_mem.bus.offset = 0;
-	evict_mem.bus.addr = NULL;
-
 	ret = ttm_bo_mem_space(bo, &placement, &evict_mem, ctx);
 	if (ret) {
 		if (ret != -ERESTARTSYS) {
@@ -867,12 +862,8 @@ static int ttm_bo_bounce_temp_buffer(struct ttm_buffer_object *bo,
 				     struct ttm_place *hop)
 {
 	struct ttm_placement hop_placement;
+	struct ttm_resource hop_mem;
 	int ret;
-	struct ttm_resource hop_mem = *mem;
-
-	hop_mem.mm_node = NULL;
-	hop_mem.mem_type = TTM_PL_SYSTEM;
-	hop_mem.placement = 0;
 
 	hop_placement.num_placement = hop_placement.num_busy_placement = 1;
 	hop_placement.placement = hop_placement.busy_placement = hop;
@@ -894,19 +885,14 @@ static int ttm_bo_move_buffer(struct ttm_buffer_object *bo,
 			      struct ttm_placement *placement,
 			      struct ttm_operation_ctx *ctx)
 {
-	int ret = 0;
 	struct ttm_place hop;
 	struct ttm_resource mem;
+	int ret;
 
 	dma_resv_assert_held(bo->base.resv);
 
 	memset(&hop, 0, sizeof(hop));
 
-	mem.num_pages = PAGE_ALIGN(bo->base.size) >> PAGE_SHIFT;
-	mem.bus.offset = 0;
-	mem.bus.addr = NULL;
-	mem.mm_node = NULL;
-
 	/*
 	 * Determine where to move the buffer.
 	 *
@@ -1027,6 +1013,7 @@ int ttm_bo_init_reserved(struct ttm_device *bdev,
 			 struct dma_resv *resv,
 			 void (*destroy) (struct ttm_buffer_object *))
 {
+	static const struct ttm_place sys_mem = { .mem_type = TTM_PL_SYSTEM };
 	bool locked;
 	int ret = 0;
 
@@ -1038,13 +1025,8 @@ int ttm_bo_init_reserved(struct ttm_device *bdev,
 	bo->bdev = bdev;
 	bo->type = type;
 	bo->page_alignment = page_alignment;
-	bo->mem.mem_type = TTM_PL_SYSTEM;
-	bo->mem.num_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	bo->mem.mm_node = NULL;
-	bo->mem.bus.offset = 0;
-	bo->mem.bus.addr = NULL;
+	ttm_resource_alloc(bo, &sys_mem, &bo->mem);
 	bo->moving = NULL;
-	bo->mem.placement = 0;
 	bo->pin_count = 0;
 	bo->sg = sg;
 	if (resv) {
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index efb7e9c34ab4..ae8b61460724 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -664,6 +664,7 @@ EXPORT_SYMBOL(ttm_bo_move_accel_cleanup);
 
 int ttm_bo_pipeline_gutting(struct ttm_buffer_object *bo)
 {
+	static const struct ttm_place sys_mem = { .mem_type = TTM_PL_SYSTEM };
 	struct ttm_buffer_object *ghost;
 	int ret;
 
@@ -676,8 +677,7 @@ int ttm_bo_pipeline_gutting(struct ttm_buffer_object *bo)
 	if (ret)
 		ttm_bo_wait(bo, false, false);
 
-	memset(&bo->mem, 0, sizeof(bo->mem));
-	bo->mem.mem_type = TTM_PL_SYSTEM;
+	ttm_resource_alloc(bo, &sys_mem, &bo->mem);
 	bo->ttm = NULL;
 
 	dma_resv_unlock(&ghost->base._resv);
diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
index fc351700d035..cec2e6fb1c52 100644
--- a/drivers/gpu/drm/ttm/ttm_resource.c
+++ b/drivers/gpu/drm/ttm/ttm_resource.c
@@ -33,6 +33,15 @@ int ttm_resource_alloc(struct ttm_buffer_object *bo,
 		ttm_manager_type(bo->bdev, res->mem_type);
 
 	res->mm_node = NULL;
+	res->start = 0;
+	res->num_pages = PFN_UP(bo->base.size);
+	res->mem_type = place->mem_type;
+	res->placement = place->flags;
+	res->bus.addr = NULL;
+	res->bus.offset = 0;
+	res->bus.is_iomem = false;
+	res->bus.caching = ttm_cached;
+
 	return man->func->alloc(man, bo, place, res);
 }
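A sketch of the calling pattern this patch establishes (names taken
from the hunks above; sketch only, not additional kernel code): callers
describe the target placement with a ttm_place and let
ttm_resource_alloc() fill in every ttm_resource field, instead of
seeding the fields by hand.

static const struct ttm_place sys_mem = { .mem_type = TTM_PL_SYSTEM };

static void example_reset_to_system(struct ttm_buffer_object *bo)
{
	/* replaces the removed open-coded field-by-field setup */
	ttm_resource_alloc(bo, &sys_mem, &bo->mem);
}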
From patchwork Fri Apr 30 09:24:58 2021
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 12232843
From: Christian König
To: dri-devel@lists.freedesktop.org
Cc: daniel.vetter@ffwll.ch, matthew.william.auld@gmail.com
Subject: [PATCH 03/13] drm/ttm: properly allocate sys resource during swapout
Date: Fri, 30 Apr 2021 11:24:58 +0200
Message-Id: <20210430092508.60710-3-christian.koenig@amd.com>
In-Reply-To: <20210430092508.60710-1-christian.koenig@amd.com>
References: <20210430092508.60710-1-christian.koenig@amd.com>

Drop the special handling here.
Signed-off-by: Christian König
Reviewed-by: Matthew Auld
---
 drivers/gpu/drm/ttm/ttm_bo.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 55f1ddcf22b6..ca1b098b6a56 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -1166,14 +1166,16 @@ int ttm_bo_swapout(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx,
 	if (bo->mem.mem_type != TTM_PL_SYSTEM) {
 		struct ttm_operation_ctx ctx = { false, false };
 		struct ttm_resource evict_mem;
-		struct ttm_place hop;
+		struct ttm_place place, hop;
 
+		memset(&place, 0, sizeof(place));
 		memset(&hop, 0, sizeof(hop));
 
-		evict_mem = bo->mem;
-		evict_mem.mm_node = NULL;
-		evict_mem.placement = 0;
-		evict_mem.mem_type = TTM_PL_SYSTEM;
+		place.mem_type = TTM_PL_SYSTEM;
+
+		ret = ttm_resource_alloc(bo, &place, &evict_mem);
+		if (unlikely(ret))
+			goto out;
 
 		ret = ttm_bo_handle_move_mem(bo, &evict_mem, true, &ctx, &hop);
 		if (unlikely(ret != 0)) {
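In short, swapout now obtains its system-domain resource from the sys
manager instead of patching up a copy of bo->mem. A hedged sketch of
the resulting flow (the example_* name is hypothetical):

static int example_swapout_alloc(struct ttm_buffer_object *bo,
				 struct ttm_resource *evict_mem)
{
	struct ttm_place place = {};

	place.mem_type = TTM_PL_SYSTEM;
	/* the allocation can fail now, so the caller must check */
	return ttm_resource_alloc(bo, &place, evict_mem);
}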
From patchwork Fri Apr 30 09:24:59 2021
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 12232845
From: Christian König
To: dri-devel@lists.freedesktop.org
Cc: daniel.vetter@ffwll.ch, matthew.william.auld@gmail.com
Subject: [PATCH 04/13] drm/ttm: rename bo->mem and make it a pointer
Date: Fri, 30 Apr 2021 11:24:59 +0200
Message-Id: <20210430092508.60710-4-christian.koenig@amd.com>
In-Reply-To: <20210430092508.60710-1-christian.koenig@amd.com>
References: <20210430092508.60710-1-christian.koenig@amd.com>

When we want to decouple resource management from buffer management we
need to be able to handle resources separately.

Add a resource pointer and rename bo->mem so that all code needs to
change to access the pointer instead.

No functional change.
Signed-off-by: Christian König
Reviewed-by: Matthew Auld
---
 .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c   |  6 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c  | 11 +++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c      |  4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c   | 48 +++++++++---------
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h   |  4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h    |  4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c      | 42 ++++++++--------
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c       | 12 ++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c |  2 +-
 drivers/gpu/drm/drm_gem_ttm_helper.c         |  6 +--
 drivers/gpu/drm/drm_gem_vram_helper.c        |  4 +-
 drivers/gpu/drm/nouveau/nouveau_abi16.c      |  2 +-
 drivers/gpu/drm/nouveau/nouveau_bo.c         | 30 ++++++------
 drivers/gpu/drm/nouveau/nouveau_chan.c       |  2 +-
 drivers/gpu/drm/nouveau/nouveau_fbcon.c      |  2 +-
 drivers/gpu/drm/nouveau/nouveau_gem.c        | 16 +++---
 drivers/gpu/drm/nouveau/nouveau_vmm.c        |  4 +-
 drivers/gpu/drm/nouveau/nv17_fence.c         |  2 +-
 drivers/gpu/drm/nouveau/nv50_fence.c         |  2 +-
 drivers/gpu/drm/qxl/qxl_drv.h                |  6 +--
 drivers/gpu/drm/qxl/qxl_object.c             | 10 ++--
 drivers/gpu/drm/qxl/qxl_ttm.c                |  4 +-
 drivers/gpu/drm/radeon/radeon_cs.c           |  8 +--
 drivers/gpu/drm/radeon/radeon_gem.c          | 10 ++--
 drivers/gpu/drm/radeon/radeon_object.c       | 22 ++++-----
 drivers/gpu/drm/radeon/radeon_object.h       |  4 +-
 drivers/gpu/drm/radeon/radeon_pm.c           |  2 +-
 drivers/gpu/drm/radeon/radeon_trace.h        |  2 +-
 drivers/gpu/drm/radeon/radeon_ttm.c          |  8 +--
 drivers/gpu/drm/ttm/ttm_bo.c                 | 37 +++++++-------
 drivers/gpu/drm/ttm/ttm_bo_util.c            | 49 ++++++++++---------
 drivers/gpu/drm/ttm/ttm_bo_vm.c              | 22 ++++-----
 drivers/gpu/drm/vmwgfx/vmwgfx_blit.c         |  8 +--
 drivers/gpu/drm/vmwgfx/vmwgfx_bo.c           | 36 +++++++-------
 drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c          | 10 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c       |  2 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_context.c      | 12 ++---
 drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c      | 10 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c      | 12 ++---
 drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c   |  8 +--
 drivers/gpu/drm/vmwgfx/vmwgfx_shader.c       | 12 ++---
 drivers/gpu/drm/vmwgfx/vmwgfx_streamoutput.c |  6 +--
 drivers/gpu/drm/vmwgfx/vmwgfx_surface.c      |  6 +--
 drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c   | 10 ++--
 include/drm/ttm/ttm_bo_api.h                 |  3 +-
 include/drm/ttm/ttm_bo_driver.h              |  6 +--
 48 files changed, 270 insertions(+), 262 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index e93850f2f3b1..7e558167e52b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -1456,7 +1456,7 @@ int amdgpu_amdkfd_gpuvm_map_memory_to_gpu(
 	 * the next restore worker
 	 */
 	if (amdgpu_ttm_tt_get_usermm(bo->tbo.ttm) &&
-	    bo->tbo.mem.mem_type == TTM_PL_SYSTEM)
+	    bo->tbo.resource->mem_type == TTM_PL_SYSTEM)
 		is_invalid_userptr = true;
 
 	if (check_if_add_bo_to_vm(avm, mem)) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index b4ad1c055c70..dcabe86283f3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -4030,9 +4030,9 @@ static int amdgpu_device_recover_vram(struct amdgpu_device *adev)
 	list_for_each_entry(shadow, &adev->shadow_list, shadow_list) {
 
 		/* No need to recover an evicted BO */
-		if (shadow->tbo.mem.mem_type != TTM_PL_TT ||
-		    shadow->tbo.mem.start == AMDGPU_BO_INVALID_OFFSET ||
-		    shadow->parent->tbo.mem.mem_type != TTM_PL_VRAM)
+		if (shadow->tbo.resource->mem_type != TTM_PL_TT ||
+		    shadow->tbo.resource->start == AMDGPU_BO_INVALID_OFFSET ||
+		    shadow->parent->tbo.resource->mem_type != TTM_PL_VRAM)
 			continue;
 
 		r = amdgpu_bo_restore_shadow(shadow, &next);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index baa980a477d9..fa59f7c17d56 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -272,12 +272,12 @@ static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach,
 		if (r)
 			return ERR_PTR(r);
 
-	} else if (!(amdgpu_mem_type_to_domain(bo->tbo.mem.mem_type) &
+	} else if (!(amdgpu_mem_type_to_domain(bo->tbo.resource->mem_type) &
 		     AMDGPU_GEM_DOMAIN_GTT)) {
 		return ERR_PTR(-EBUSY);
 	}
 
-	switch (bo->tbo.mem.mem_type) {
+	switch (bo->tbo.resource->mem_type) {
 	case TTM_PL_TT:
 		sgt = drm_prime_pages_to_sg(obj->dev,
 					    bo->tbo.ttm->pages,
@@ -291,8 +291,9 @@ static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach,
 		break;
 
 	case TTM_PL_VRAM:
-		r = amdgpu_vram_mgr_alloc_sgt(adev, &bo->tbo.mem, 0,
-			bo->tbo.base.size, attach->dev, dir, &sgt);
+		r = amdgpu_vram_mgr_alloc_sgt(adev, bo->tbo.resource, 0,
+					      bo->tbo.base.size, attach->dev,
+					      dir, &sgt);
 		if (r)
 			return ERR_PTR(r);
 		break;
@@ -482,7 +483,7 @@ amdgpu_dma_buf_move_notify(struct dma_buf_attachment *attach)
 	struct amdgpu_vm_bo_base *bo_base;
 	int r;
 
-	if (bo->tbo.mem.mem_type == TTM_PL_SYSTEM)
+	if (bo->tbo.resource->mem_type == TTM_PL_SYSTEM)
 		return;
 
 	r = ttm_bo_validate(&bo->tbo, &placement, &ctx);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index 4d32233cde92..d32718bbb464 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -99,7 +99,7 @@ void amdgpu_gmc_get_pde_for_bo(struct amdgpu_bo *bo, int level,
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 
-	switch (bo->tbo.mem.mem_type) {
+	switch (bo->tbo.resource->mem_type) {
 	case TTM_PL_TT:
 		*addr = bo->tbo.ttm->dma_address[0];
 		break;
@@ -110,7 +110,7 @@ void amdgpu_gmc_get_pde_for_bo(struct amdgpu_bo *bo, int level,
 		*addr = 0;
 		break;
 	}
-	*flags = amdgpu_ttm_tt_pde_flags(bo->tbo.ttm, &bo->tbo.mem);
+	*flags = amdgpu_ttm_tt_pde_flags(bo->tbo.ttm, bo->tbo.resource);
 	amdgpu_gmc_get_vm_pde(adev, level, addr, flags);
 }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
index c026972ca9a1..1086687a8138 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
@@ -112,7 +112,7 @@ static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man,
 	int r;
 
 	spin_lock(&mgr->lock);
-	if ((&tbo->mem == mem || tbo->mem.mem_type != TTM_PL_TT) &&
+	if ((tbo->resource == mem || tbo->resource->mem_type != TTM_PL_TT) &&
 	    atomic64_read(&mgr->available) < mem->num_pages) {
 		spin_unlock(&mgr->lock);
 		return -ENOSPC;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index fe4e5880509d..6bb9d9d05326 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -362,14 +362,14 @@ int amdgpu_bo_create_kernel_at(struct amdgpu_device *adev,
 	if (cpu_addr)
 		amdgpu_bo_kunmap(*bo_ptr);
 
-	ttm_resource_free(&(*bo_ptr)->tbo, &(*bo_ptr)->tbo.mem);
+	ttm_resource_free(&(*bo_ptr)->tbo, (*bo_ptr)->tbo.resource);
 
 	for (i = 0; i < (*bo_ptr)->placement.num_placement; ++i) {
 		(*bo_ptr)->placements[i].fpfn = offset >> PAGE_SHIFT;
 		(*bo_ptr)->placements[i].lpfn = (offset + size) >> PAGE_SHIFT;
 	}
 	r = ttm_bo_mem_space(&(*bo_ptr)->tbo, &(*bo_ptr)->placement,
-			     &(*bo_ptr)->tbo.mem, &ctx);
+			     (*bo_ptr)->tbo.resource, &ctx);
 	if (r)
 		goto error;
 
@@ -562,15 +562,15 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
 		return r;
 
 	if (!amdgpu_gmc_vram_full_visible(&adev->gmc) &&
-	    bo->tbo.mem.mem_type == TTM_PL_VRAM &&
-	    bo->tbo.mem.start < adev->gmc.visible_vram_size >> PAGE_SHIFT)
+	    bo->tbo.resource->mem_type == TTM_PL_VRAM &&
+	    bo->tbo.resource->start < adev->gmc.visible_vram_size >> PAGE_SHIFT)
 		amdgpu_cs_report_moved_bytes(adev, ctx.bytes_moved,
 					     ctx.bytes_moved);
 	else
 		amdgpu_cs_report_moved_bytes(adev, ctx.bytes_moved, 0);
 
 	if (bp->flags & AMDGPU_GEM_CREATE_VRAM_CLEARED &&
-	    bo->tbo.mem.mem_type == TTM_PL_VRAM) {
+	    bo->tbo.resource->mem_type == TTM_PL_VRAM) {
 		struct dma_fence *fence;
 
 		r = amdgpu_fill_buffer(bo, 0, bo->tbo.base.resv, &fence);
@@ -796,7 +796,7 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
 	if (r < 0)
 		return r;
 
-	r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.mem.num_pages, &bo->kmap);
+	r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.resource->num_pages, &bo->kmap);
 	if (r)
 		return r;
 
@@ -919,8 +919,8 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
 	domain = amdgpu_bo_get_preferred_pin_domain(adev, domain);
 
 	if (bo->tbo.pin_count) {
-		uint32_t mem_type = bo->tbo.mem.mem_type;
-		uint32_t mem_flags = bo->tbo.mem.placement;
+		uint32_t mem_type = bo->tbo.resource->mem_type;
+		uint32_t mem_flags = bo->tbo.resource->placement;
 
 		if (!(domain & amdgpu_mem_type_to_domain(mem_type)))
 			return -EINVAL;
@@ -970,7 +970,7 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
 
 	ttm_bo_pin(&bo->tbo);
 
-	domain = amdgpu_mem_type_to_domain(bo->tbo.mem.mem_type);
+	domain = amdgpu_mem_type_to_domain(bo->tbo.resource->mem_type);
 	if (domain == AMDGPU_GEM_DOMAIN_VRAM) {
 		atomic64_add(amdgpu_bo_size(bo), &adev->vram_pin_size);
 		atomic64_add(amdgpu_vram_mgr_bo_visible_size(bo),
@@ -1022,11 +1022,11 @@ void amdgpu_bo_unpin(struct amdgpu_bo *bo)
 	if (bo->tbo.base.import_attach)
 		dma_buf_unpin(bo->tbo.base.import_attach);
 
-	if (bo->tbo.mem.mem_type == TTM_PL_VRAM) {
+	if (bo->tbo.resource->mem_type == TTM_PL_VRAM) {
 		atomic64_sub(amdgpu_bo_size(bo), &adev->vram_pin_size);
 		atomic64_sub(amdgpu_vram_mgr_bo_visible_size(bo),
 			     &adev->visible_pin_size);
-	} else if (bo->tbo.mem.mem_type == TTM_PL_TT) {
+	} else if (bo->tbo.resource->mem_type == TTM_PL_TT) {
 		atomic64_sub(amdgpu_bo_size(bo), &adev->gart_pin_size);
 	}
 }
@@ -1262,7 +1262,7 @@ void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
 	struct amdgpu_bo *abo;
-	struct ttm_resource *old_mem = &bo->mem;
+	struct ttm_resource *old_mem = bo->resource;
 
 	if (!amdgpu_bo_is_amdgpu_bo(bo))
 		return;
@@ -1273,7 +1273,7 @@ void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
 	amdgpu_bo_kunmap(abo);
 
 	if (abo->tbo.base.dma_buf && !abo->tbo.base.import_attach &&
-	    bo->mem.mem_type != TTM_PL_SYSTEM)
+	    bo->resource->mem_type != TTM_PL_SYSTEM)
 		dma_buf_move_notify(abo->tbo.base.dma_buf);
 
 	/* remember the eviction */
@@ -1315,7 +1315,7 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
 	if (bo->base.resv == &bo->base._resv)
 		amdgpu_amdkfd_remove_fence_on_pt_pd_bos(abo);
 
-	if (bo->mem.mem_type != TTM_PL_VRAM || !bo->mem.mm_node ||
+	if (bo->resource->mem_type != TTM_PL_VRAM || !bo->resource->mm_node ||
 	    !(abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE))
 		return;
 
@@ -1352,10 +1352,10 @@ vm_fault_t amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
 	/* Remember that this BO was accessed by the CPU */
 	abo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
 
-	if (bo->mem.mem_type != TTM_PL_VRAM)
+	if (bo->resource->mem_type != TTM_PL_VRAM)
 		return 0;
 
-	offset = bo->mem.start << PAGE_SHIFT;
+	offset = bo->resource->start << PAGE_SHIFT;
 	if ((offset + bo->base.size) <= adev->gmc.visible_vram_size)
 		return 0;
 
@@ -1378,9 +1378,9 @@ vm_fault_t amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
 	else if (unlikely(r))
 		return VM_FAULT_SIGBUS;
 
-	offset = bo->mem.start << PAGE_SHIFT;
+	offset = bo->resource->start << PAGE_SHIFT;
 	/* this should never happen */
-	if (bo->mem.mem_type == TTM_PL_VRAM &&
+	if (bo->resource->mem_type == TTM_PL_VRAM &&
 	    (offset + bo->base.size) > adev->gmc.visible_vram_size)
 		return VM_FAULT_SIGBUS;
 
@@ -1465,11 +1465,11 @@ int amdgpu_bo_sync_wait(struct amdgpu_bo *bo, void *owner, bool intr)
  */
 u64 amdgpu_bo_gpu_offset(struct amdgpu_bo *bo)
 {
-	WARN_ON_ONCE(bo->tbo.mem.mem_type == TTM_PL_SYSTEM);
+	WARN_ON_ONCE(bo->tbo.resource->mem_type == TTM_PL_SYSTEM);
 	WARN_ON_ONCE(!dma_resv_is_locked(bo->tbo.base.resv) &&
 		     !bo->tbo.pin_count && bo->tbo.type != ttm_bo_type_kernel);
-	WARN_ON_ONCE(bo->tbo.mem.start == AMDGPU_BO_INVALID_OFFSET);
-	WARN_ON_ONCE(bo->tbo.mem.mem_type == TTM_PL_VRAM &&
+	WARN_ON_ONCE(bo->tbo.resource->start == AMDGPU_BO_INVALID_OFFSET);
+	WARN_ON_ONCE(bo->tbo.resource->mem_type == TTM_PL_VRAM &&
 		     !(bo->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS));
 
 	return amdgpu_bo_gpu_offset_no_check(bo);
@@ -1487,8 +1487,8 @@ u64 amdgpu_bo_gpu_offset_no_check(struct amdgpu_bo *bo)
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 	uint64_t offset;
 
-	offset = (bo->tbo.mem.start << PAGE_SHIFT) +
-		 amdgpu_ttm_domain_start(adev, bo->tbo.mem.mem_type);
+	offset = (bo->tbo.resource->start << PAGE_SHIFT) +
+		 amdgpu_ttm_domain_start(adev, bo->tbo.resource->mem_type);
 
 	return amdgpu_gmc_sign_extend(offset);
 }
@@ -1541,7 +1541,7 @@ u64 amdgpu_bo_print_info(int id, struct amdgpu_bo *bo, struct seq_file *m)
 	unsigned int pin_count;
 	u64 size;
 
-	domain = amdgpu_mem_type_to_domain(bo->tbo.mem.mem_type);
+	domain = amdgpu_mem_type_to_domain(bo->tbo.resource->mem_type);
 	switch (domain) {
 	case AMDGPU_GEM_DOMAIN_VRAM:
 		placement = "VRAM";
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index deebb4e87183..32256c5bb487 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -215,10 +215,10 @@ static inline bool amdgpu_bo_in_cpu_visible_vram(struct amdgpu_bo *bo)
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 	struct amdgpu_res_cursor cursor;
 
-	if (bo->tbo.mem.mem_type != TTM_PL_VRAM)
+	if (bo->tbo.resource->mem_type != TTM_PL_VRAM)
 		return false;
 
-	amdgpu_res_first(&bo->tbo.mem, 0, amdgpu_bo_size(bo), &cursor);
+	amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &cursor);
 	while (cursor.remaining) {
 		if (cursor.start < adev->gmc.visible_vram_size)
 			return true;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
index 792d20261846..0527772fe1b8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
@@ -127,8 +127,8 @@ TRACE_EVENT(amdgpu_bo_create,
 
 	    TP_fast_assign(
 			   __entry->bo = bo;
-			   __entry->pages = bo->tbo.mem.num_pages;
-			   __entry->type = bo->tbo.mem.mem_type;
+			   __entry->pages = bo->tbo.resource->num_pages;
+			   __entry->type = bo->tbo.resource->mem_type;
 			   __entry->prefer = bo->preferred_domains;
 			   __entry->allow = bo->allowed_domains;
 			   __entry->visible = bo->flags;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 2f7580d2ee71..581523d212d2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -112,7 +112,7 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
 	}
 
 	abo = ttm_to_amdgpu_bo(bo);
-	switch (bo->mem.mem_type) {
+	switch (bo->resource->mem_type) {
 	case AMDGPU_PL_GDS:
 	case AMDGPU_PL_GWS:
 	case AMDGPU_PL_OA:
@@ -471,7 +471,7 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict,
 {
 	struct amdgpu_device *adev;
 	struct amdgpu_bo *abo;
-	struct ttm_resource *old_mem = &bo->mem;
+	struct ttm_resource *old_mem = bo->resource;
 	int r;
 
 	if (new_mem->mem_type == TTM_PL_TT) {
@@ -503,7 +503,7 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict,
 			return r;
 
 		amdgpu_ttm_backend_unbind(bo->bdev, bo->ttm);
-		ttm_resource_free(bo, &bo->mem);
+		ttm_resource_free(bo, bo->resource);
 		ttm_bo_assign_mem(bo, new_mem);
 		goto out;
 	}
@@ -612,7 +612,8 @@ static unsigned long amdgpu_ttm_io_mem_pfn(struct ttm_buffer_object *bo,
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
 	struct amdgpu_res_cursor cursor;
 
-	amdgpu_res_first(&bo->mem, (u64)page_offset << PAGE_SHIFT, 0, &cursor);
+	amdgpu_res_first(bo->resource, (u64)page_offset << PAGE_SHIFT, 0,
+			 &cursor);
 	return (adev->gmc.aper_base + cursor.start) >> PAGE_SHIFT;
 }
 
@@ -1006,12 +1007,12 @@ int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo)
 	uint64_t addr, flags;
 	int r;
 
-	if (bo->mem.start != AMDGPU_BO_INVALID_OFFSET)
+	if (bo->resource->start != AMDGPU_BO_INVALID_OFFSET)
 		return 0;
 
 	addr = amdgpu_gmc_agp_addr(bo);
 	if (addr != AMDGPU_BO_INVALID_OFFSET) {
-		bo->mem.start = addr >> PAGE_SHIFT;
+		bo->resource->start = addr >> PAGE_SHIFT;
 	} else {
 
 		/* allocate GART space */
@@ -1022,7 +1023,7 @@ int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo)
 		placements.fpfn = 0;
 		placements.lpfn = adev->gmc.gart_size >> PAGE_SHIFT;
 		placements.mem_type = TTM_PL_TT;
-		placements.flags = bo->mem.placement;
+		placements.flags = bo->resource->placement;
 
 		r = ttm_bo_mem_space(bo, &placement, &tmp, &ctx);
 		if (unlikely(r))
@@ -1039,8 +1040,8 @@ int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo)
 			return r;
 		}
 
-		ttm_resource_free(bo, &bo->mem);
-		bo->mem = tmp;
+		ttm_resource_free(bo, bo->resource);
+		ttm_bo_assign_mem(bo, &tmp);
 	}
 
 	return 0;
@@ -1061,7 +1062,7 @@ int amdgpu_ttm_recover_gart(struct ttm_buffer_object *tbo)
 	if (!tbo->ttm)
 		return 0;
 
-	flags = amdgpu_ttm_tt_pte_flags(adev, tbo->ttm, &tbo->mem);
+	flags = amdgpu_ttm_tt_pte_flags(adev, tbo->ttm, tbo->resource);
 	r = amdgpu_ttm_gart_bind(adev, tbo, flags);
 
 	return r;
@@ -1390,7 +1391,7 @@ uint64_t amdgpu_ttm_tt_pte_flags(struct amdgpu_device *adev, struct ttm_tt *ttm,
 static bool amdgpu_ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
 					    const struct ttm_place *place)
 {
-	unsigned long num_pages = bo->mem.num_pages;
+	unsigned long num_pages = bo->resource->num_pages;
 	struct amdgpu_res_cursor cursor;
 	struct dma_resv_list *flist;
 	struct dma_fence *f;
@@ -1414,7 +1415,7 @@ static bool amdgpu_ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
 		}
 	}
 
-	switch (bo->mem.mem_type) {
+	switch (bo->resource->mem_type) {
 	case TTM_PL_TT:
 		if (amdgpu_bo_is_amdgpu_bo(bo) &&
 		    amdgpu_bo_encrypted(ttm_to_amdgpu_bo(bo)))
@@ -1423,7 +1424,7 @@ static bool amdgpu_ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
 	case TTM_PL_VRAM:
 		/* Check each drm MM node individually */
-		amdgpu_res_first(&bo->mem, 0, (u64)num_pages << PAGE_SHIFT,
+		amdgpu_res_first(bo->resource, 0, (u64)num_pages << PAGE_SHIFT,
 				 &cursor);
 		while (cursor.remaining) {
 			if (place->fpfn < PFN_DOWN(cursor.start + cursor.size)
@@ -1465,10 +1466,10 @@ static int amdgpu_ttm_access_memory(struct ttm_buffer_object *bo,
 	uint32_t value = 0;
 	int ret = 0;
 
-	if (bo->mem.mem_type != TTM_PL_VRAM)
+	if (bo->resource->mem_type != TTM_PL_VRAM)
 		return -EIO;
 
-	amdgpu_res_first(&bo->mem, offset, len, &cursor);
+	amdgpu_res_first(bo->resource, offset, len, &cursor);
 	while (cursor.remaining) {
 		uint64_t aligned_pos = cursor.start & ~(uint64_t)3;
 		uint64_t bytes = 4 - (cursor.start & 3);
@@ -2037,16 +2038,16 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
 		return -EINVAL;
 	}
 
-	if (bo->tbo.mem.mem_type == TTM_PL_TT) {
+	if (bo->tbo.resource->mem_type == TTM_PL_TT) {
 		r = amdgpu_ttm_alloc_gart(&bo->tbo);
 		if (r)
 			return r;
 	}
 
-	num_bytes = bo->tbo.mem.num_pages << PAGE_SHIFT;
+	num_bytes = bo->tbo.resource->num_pages << PAGE_SHIFT;
 	num_loops = 0;
 
-	amdgpu_res_first(&bo->tbo.mem, 0, num_bytes, &cursor);
+	amdgpu_res_first(bo->tbo.resource, 0, num_bytes, &cursor);
 	while (cursor.remaining) {
 		num_loops += DIV_ROUND_UP_ULL(cursor.size, max_bytes);
 		amdgpu_res_next(&cursor, cursor.size);
@@ -2071,12 +2072,13 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
 		}
 	}
 
-	amdgpu_res_first(&bo->tbo.mem, 0, num_bytes, &cursor);
+	amdgpu_res_first(bo->tbo.resource, 0, num_bytes, &cursor);
 	while (cursor.remaining) {
 		uint32_t cur_size = min_t(uint64_t, cursor.size, max_bytes);
 		uint64_t dst_addr = cursor.start;
 
-		dst_addr += amdgpu_ttm_domain_start(adev, bo->tbo.mem.mem_type);
+		dst_addr += amdgpu_ttm_domain_start(adev,
+						    bo->tbo.resource->mem_type);
 
 		amdgpu_emit_fill_buffer(adev, &job->ibs[0], src_data, dst_addr,
 					cur_size);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 0311f8b39c32..1e9f358f25f3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -339,7 +339,7 @@ static void amdgpu_vm_bo_base_init(struct amdgpu_vm_bo_base *base,
 		amdgpu_vm_bo_idle(base);
 
 	if (bo->preferred_domains &
-	    amdgpu_mem_type_to_domain(bo->tbo.mem.mem_type))
+	    amdgpu_mem_type_to_domain(bo->tbo.resource->mem_type))
 		return;
 
 	/*
@@ -654,11 +654,11 @@ void amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev,
 		if (!bo->parent)
 			continue;
 
-		ttm_bo_move_to_lru_tail(&bo->tbo, &bo->tbo.mem,
+		ttm_bo_move_to_lru_tail(&bo->tbo, bo->tbo.resource,
 					&vm->lru_bulk_move);
 		if (bo->shadow)
 			ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
-						&bo->shadow->tbo.mem,
+						bo->shadow->tbo.resource,
 						&vm->lru_bulk_move);
 	}
 	spin_unlock(&adev->mman.bdev.lru_lock);
@@ -1739,10 +1739,10 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev, struct amdgpu_bo_va *bo_va,
 			struct drm_gem_object *gobj = dma_buf->priv;
 			struct amdgpu_bo *abo = gem_to_amdgpu_bo(gobj);
 
-			if (abo->tbo.mem.mem_type == TTM_PL_VRAM)
+			if (abo->tbo.resource->mem_type == TTM_PL_VRAM)
 				bo = gem_to_amdgpu_bo(gobj);
 		}
-		mem = &bo->tbo.mem;
+		mem = bo->tbo.resource;
 		if (mem->mem_type == TTM_PL_TT)
 			pages_addr = bo->tbo.ttm->dma_address;
 	}
@@ -1802,7 +1802,7 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev, struct amdgpu_bo_va *bo_va,
 	 * next command submission.
 	 */
 	if (bo && bo->tbo.base.resv == vm->root.base.bo->tbo.base.resv) {
-		uint32_t mem_type = bo->tbo.mem.mem_type;
+		uint32_t mem_type = bo->tbo.resource->mem_type;
 
 		if (!(bo->preferred_domains &
 		      amdgpu_mem_type_to_domain(mem_type)))
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
index 2ff7b94e58f5..bb01e0fc621c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
@@ -213,7 +213,7 @@ static u64 amdgpu_vram_mgr_vis_size(struct amdgpu_device *adev,
 u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo)
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
-	struct ttm_resource *mem = &bo->tbo.mem;
+	struct ttm_resource *mem = bo->tbo.resource;
 	struct drm_mm_node *nodes = mem->mm_node;
 	unsigned pages = mem->num_pages;
 	u64 usage;
diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
index b14bed8be771..ecf3d2a54a98 100644
--- a/drivers/gpu/drm/drm_gem_ttm_helper.c
+++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
@@ -40,12 +40,12 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
 	const struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
 
 	drm_printf_indent(p, indent, "placement=");
-	drm_print_bits(p, bo->mem.placement, plname, ARRAY_SIZE(plname));
+	drm_print_bits(p, bo->resource->placement, plname, ARRAY_SIZE(plname));
 	drm_printf(p, "\n");
 
-	if (bo->mem.bus.is_iomem)
+	if (bo->resource->bus.is_iomem)
 		drm_printf_indent(p, indent, "bus.offset=%lx\n",
-				  (unsigned long)bo->mem.bus.offset);
+				  (unsigned long)bo->resource->bus.offset);
 }
 EXPORT_SYMBOL(drm_gem_ttm_print_info);
 
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 797200315854..83e7258c7f90 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -248,10 +248,10 @@ EXPORT_SYMBOL(drm_gem_vram_put);
 static u64 drm_gem_vram_pg_offset(struct drm_gem_vram_object *gbo)
 {
 	/* Keep TTM behavior for now, remove when drivers are audited */
-	if (WARN_ON_ONCE(!gbo->bo.mem.mm_node))
+	if (WARN_ON_ONCE(!gbo->bo.resource->mm_node))
 		return 0;
 
-	return gbo->bo.mem.start;
+	return gbo->bo.resource->start;
 }
 
 /**
diff --git a/drivers/gpu/drm/nouveau/nouveau_abi16.c b/drivers/gpu/drm/nouveau/nouveau_abi16.c
index 0a9334deffe2..b45ec3086285 100644
--- a/drivers/gpu/drm/nouveau/nouveau_abi16.c
+++ b/drivers/gpu/drm/nouveau/nouveau_abi16.c
@@ -312,7 +312,7 @@ nouveau_abi16_ioctl_channel_alloc(ABI16_IOCTL_ARGS)
 		init->pushbuf_domains = NOUVEAU_GEM_DOMAIN_VRAM |
 					NOUVEAU_GEM_DOMAIN_GART;
 	else
-	if (chan->chan->push.buffer->bo.mem.mem_type == TTM_PL_VRAM)
+	if (chan->chan->push.buffer->bo.resource->mem_type == TTM_PL_VRAM)
 		init->pushbuf_domains = NOUVEAU_GEM_DOMAIN_VRAM;
 	else
 		init->pushbuf_domains = NOUVEAU_GEM_DOMAIN_GART;
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 7a2624c0ba4c..9eaa3274bd02 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -433,7 +433,7 @@ nouveau_bo_pin(struct nouveau_bo *nvbo, uint32_t domain, bool contig)
 	if (nvbo->bo.pin_count) {
 		bool error = evict;
 
-		switch (bo->mem.mem_type) {
+		switch (bo->resource->mem_type) {
 		case TTM_PL_VRAM:
 			error |= !(domain & NOUVEAU_GEM_DOMAIN_VRAM);
 			break;
@@ -446,7 +446,7 @@ nouveau_bo_pin(struct nouveau_bo *nvbo, uint32_t domain, bool contig)
 		if (error) {
 			NV_ERROR(drm, "bo %p pinned elsewhere: "
 				      "0x%08x vs 0x%08x\n", bo,
-				      bo->mem.mem_type, domain);
+				      bo->resource->mem_type, domain);
 			ret = -EBUSY;
 		}
 		ttm_bo_pin(&nvbo->bo);
@@ -467,7 +467,7 @@ nouveau_bo_pin(struct nouveau_bo *nvbo, uint32_t domain, bool contig)
 
 	ttm_bo_pin(&nvbo->bo);
 
-	switch (bo->mem.mem_type) {
+	switch (bo->resource->mem_type) {
 	case TTM_PL_VRAM:
 		drm->gem.vram_available -= bo->base.size;
 		break;
@@ -498,7 +498,7 @@ nouveau_bo_unpin(struct nouveau_bo *nvbo)
 
 	ttm_bo_unpin(&nvbo->bo);
 	if (!nvbo->bo.pin_count) {
-		switch (bo->mem.mem_type) {
+		switch (bo->resource->mem_type) {
 		case TTM_PL_VRAM:
 			drm->gem.vram_available += bo->base.size;
 			break;
@@ -523,7 +523,7 @@ nouveau_bo_map(struct nouveau_bo *nvbo)
 	if (ret)
 		return ret;
 
-	ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.mem.num_pages, &nvbo->kmap);
+	ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.resource->num_pages, &nvbo->kmap);
 
 	ttm_bo_unreserve(&nvbo->bo);
 	return ret;
@@ -737,7 +737,7 @@ nouveau_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
 {
 	struct nouveau_bo *nvbo = nouveau_bo(bo);
 
-	switch (bo->mem.mem_type) {
+	switch (bo->resource->mem_type) {
 	case TTM_PL_VRAM:
 		nouveau_bo_placement_set(nvbo, NOUVEAU_GEM_DOMAIN_GART,
 					 NOUVEAU_GEM_DOMAIN_CPU);
@@ -754,7 +754,7 @@ static int
 nouveau_bo_move_prep(struct nouveau_drm *drm, struct ttm_buffer_object *bo,
 		     struct ttm_resource *reg)
 {
-	struct nouveau_mem *old_mem = nouveau_mem(&bo->mem);
+	struct nouveau_mem *old_mem = nouveau_mem(bo->resource);
 	struct nouveau_mem *new_mem = nouveau_mem(reg);
 	struct nvif_vmm *vmm = &drm->client.vmm.vmm;
 	int ret;
@@ -809,7 +809,7 @@ nouveau_bo_move_m2mf(struct ttm_buffer_object *bo, int evict,
 	mutex_lock_nested(&cli->mutex, SINGLE_DEPTH_NESTING);
 	ret = nouveau_fence_sync(nouveau_bo(bo), chan, true, ctx->interruptible);
 	if (ret == 0) {
-		ret = drm->ttm.move(chan, bo, &bo->mem, new_reg);
+		ret = drm->ttm.move(chan, bo, bo->resource, new_reg);
 		if (ret == 0) {
 			ret = nouveau_fence_new(chan, false, &fence);
 			if (ret == 0) {
@@ -969,7 +969,7 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict,
 {
 	struct nouveau_drm *drm = nouveau_bdev(bo->bdev);
 	struct nouveau_bo *nvbo = nouveau_bo(bo);
-	struct ttm_resource *old_reg = &bo->mem;
+	struct ttm_resource *old_reg = bo->resource;
 	struct nouveau_drm_tile *new_tile = NULL;
 	int ret = 0;
 
@@ -1009,7 +1009,7 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict,
 	if (old_reg->mem_type == TTM_PL_TT &&
 	    new_reg->mem_type == TTM_PL_SYSTEM) {
 		nouveau_ttm_tt_unbind(bo->bdev, bo->ttm);
-		ttm_resource_free(bo, &bo->mem);
+		ttm_resource_free(bo, bo->resource);
 		ttm_bo_assign_mem(bo, new_reg);
 		goto out;
 	}
@@ -1045,7 +1045,7 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict,
 	}
 out_ntfy:
 	if (ret) {
-		nouveau_bo_move_ntfy(bo, &bo->mem);
+		nouveau_bo_move_ntfy(bo, bo->resource);
 	}
 	return ret;
 }
@@ -1179,7 +1179,7 @@ nouveau_ttm_io_mem_reserve(struct ttm_device *bdev, struct ttm_resource *reg)
 			list_del_init(&nvbo->io_reserve_lru);
 			drm_vma_node_unmap(&nvbo->bo.base.vma_node,
 					   bdev->dev_mapping);
-			nouveau_ttm_io_mem_free_locked(drm, &nvbo->bo.mem);
+			nouveau_ttm_io_mem_free_locked(drm, nvbo->bo.resource);
 			goto retry;
 		}
 
@@ -1209,12 +1209,12 @@ vm_fault_t nouveau_ttm_fault_reserve_notify(struct ttm_buffer_object *bo)
 	/* as long as the bo isn't in vram, and isn't tiled, we've got
 	 * nothing to do here.
 	 */
-	if (bo->mem.mem_type != TTM_PL_VRAM) {
+	if (bo->resource->mem_type != TTM_PL_VRAM) {
 		if (drm->client.device.info.family < NV_DEVICE_INFO_V0_TESLA ||
 		    !nvbo->kind)
 			return 0;
 
-		if (bo->mem.mem_type != TTM_PL_SYSTEM)
+		if (bo->resource->mem_type != TTM_PL_SYSTEM)
 			return 0;
 
 		nouveau_bo_placement_set(nvbo, NOUVEAU_GEM_DOMAIN_GART, 0);
 
@@ -1222,7 +1222,7 @@ vm_fault_t nouveau_ttm_fault_reserve_notify(struct ttm_buffer_object *bo)
 	} else {
 		/* make sure bo is in mappable vram */
 		if (drm->client.device.info.family >= NV_DEVICE_INFO_V0_TESLA ||
-		    bo->mem.start + bo->mem.num_pages < mappable)
+		    bo->resource->start + bo->resource->num_pages < mappable)
 			return 0;
 
 		for (i = 0; i < nvbo->placement.num_placement; ++i) {
diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c b/drivers/gpu/drm/nouveau/nouveau_chan.c
index 7cfac265fd45..40362600eed2 100644
--- a/drivers/gpu/drm/nouveau/nouveau_chan.c
+++ b/drivers/gpu/drm/nouveau/nouveau_chan.c
@@ -212,7 +212,7 @@ nouveau_channel_prep(struct nouveau_drm *drm, struct nvif_device *device,
 			args.start = 0;
 			args.limit = chan->vmm->vmm.limit - 1;
 		} else
-		if (chan->push.buffer->bo.mem.mem_type == TTM_PL_VRAM) {
+		if (chan->push.buffer->bo.resource->mem_type == TTM_PL_VRAM) {
 			if (device->info.family == NV_DEVICE_INFO_V0_TNT) {
 				/* nv04 vram pushbuf hack, retarget to its location in
 				 * the framebuffer bar rather than direct vram access..
diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
index 93ac78bda750..4f9b3aa5deda 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
@@ -378,7 +378,7 @@ nouveau_fbcon_create(struct drm_fb_helper *helper,
 		      FBINFO_HWACCEL_FILLRECT |
 		      FBINFO_HWACCEL_IMAGEBLIT;
 	info->fbops = &nouveau_fbcon_sw_ops;
-	info->fix.smem_start = nvbo->bo.mem.bus.offset;
+	info->fix.smem_start = nvbo->bo.resource->bus.offset;
 	info->fix.smem_len = nvbo->bo.base.size;
 
 	info->screen_base = nvbo_kmap_obj_iovirtual(nvbo);
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index a70e82413fa7..71b2401f8c73 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -240,7 +240,7 @@ nouveau_gem_info(struct drm_file *file_priv, struct drm_gem_object *gem,
 
 	if (is_power_of_2(nvbo->valid_domains))
 		rep->domain = nvbo->valid_domains;
-	else if (nvbo->bo.mem.mem_type == TTM_PL_TT)
+	else if (nvbo->bo.resource->mem_type == TTM_PL_TT)
 		rep->domain = NOUVEAU_GEM_DOMAIN_GART;
 	else
 		rep->domain = NOUVEAU_GEM_DOMAIN_VRAM;
@@ -311,11 +311,11 @@ nouveau_gem_set_domain(struct drm_gem_object *gem, uint32_t read_domains,
 	valid_domains &= ~(NOUVEAU_GEM_DOMAIN_VRAM | NOUVEAU_GEM_DOMAIN_GART);
 
 	if ((domains & NOUVEAU_GEM_DOMAIN_VRAM) &&
-	    bo->mem.mem_type == TTM_PL_VRAM)
+	    bo->resource->mem_type == TTM_PL_VRAM)
 		pref_domains |= NOUVEAU_GEM_DOMAIN_VRAM;
 
 	else if ((domains & NOUVEAU_GEM_DOMAIN_GART) &&
-		 bo->mem.mem_type == TTM_PL_TT)
+		 bo->resource->mem_type == TTM_PL_TT)
 		pref_domains |= NOUVEAU_GEM_DOMAIN_GART;
 
 	else if (domains & NOUVEAU_GEM_DOMAIN_VRAM)
@@ -525,13 +525,13 @@ validate_list(struct nouveau_channel *chan, struct nouveau_cli *cli,
 		if (drm->client.device.info.family < NV_DEVICE_INFO_V0_TESLA) {
 			if (nvbo->offset == b->presumed.offset &&
-			    ((nvbo->bo.mem.mem_type == TTM_PL_VRAM &&
+			    ((nvbo->bo.resource->mem_type == TTM_PL_VRAM &&
 			      b->presumed.domain & NOUVEAU_GEM_DOMAIN_VRAM) ||
-			     (nvbo->bo.mem.mem_type == TTM_PL_TT &&
+			     (nvbo->bo.resource->mem_type == TTM_PL_TT &&
 			      b->presumed.domain & NOUVEAU_GEM_DOMAIN_GART)))
 				continue;
- if (nvbo->bo.mem.mem_type == TTM_PL_TT) + if (nvbo->bo.resource->mem_type == TTM_PL_TT) b->presumed.domain = NOUVEAU_GEM_DOMAIN_GART; else b->presumed.domain = NOUVEAU_GEM_DOMAIN_VRAM; @@ -645,7 +645,7 @@ nouveau_gem_pushbuf_reloc_apply(struct nouveau_cli *cli, } if (!nvbo->kmap.virtual) { - ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.mem.num_pages, + ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.resource->num_pages, &nvbo->kmap); if (ret) { NV_PRINTK(err, cli, "failed kmap for reloc\n"); @@ -834,7 +834,7 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data, if (unlikely(cmd != req->suffix0)) { if (!nvbo->kmap.virtual) { ret = ttm_bo_kmap(&nvbo->bo, 0, - nvbo->bo.mem. + nvbo->bo.resource-> num_pages, &nvbo->kmap); if (ret) { diff --git a/drivers/gpu/drm/nouveau/nouveau_vmm.c b/drivers/gpu/drm/nouveau/nouveau_vmm.c index a49e88129c92..67d6619fcd5e 100644 --- a/drivers/gpu/drm/nouveau/nouveau_vmm.c +++ b/drivers/gpu/drm/nouveau/nouveau_vmm.c @@ -77,7 +77,7 @@ int nouveau_vma_new(struct nouveau_bo *nvbo, struct nouveau_vmm *vmm, struct nouveau_vma **pvma) { - struct nouveau_mem *mem = nouveau_mem(&nvbo->bo.mem); + struct nouveau_mem *mem = nouveau_mem(nvbo->bo.resource); struct nouveau_vma *vma; struct nvif_vma tmp; int ret; @@ -96,7 +96,7 @@ nouveau_vma_new(struct nouveau_bo *nvbo, struct nouveau_vmm *vmm, vma->fence = NULL; list_add_tail(&vma->head, &nvbo->vma_list); - if (nvbo->bo.mem.mem_type != TTM_PL_SYSTEM && + if (nvbo->bo.resource->mem_type != TTM_PL_SYSTEM && mem->mem.page == nvbo->page) { ret = nvif_vmm_get(&vmm->vmm, LAZY, false, mem->mem.page, 0, mem->mem.size, &tmp); diff --git a/drivers/gpu/drm/nouveau/nv17_fence.c b/drivers/gpu/drm/nouveau/nv17_fence.c index b1cd8d7dd87d..07c2e0878c24 100644 --- a/drivers/gpu/drm/nouveau/nv17_fence.c +++ b/drivers/gpu/drm/nouveau/nv17_fence.c @@ -77,8 +77,8 @@ static int nv17_fence_context_new(struct nouveau_channel *chan) { struct nv10_fence_priv *priv = chan->drm->fence; + struct ttm_resource *reg = priv->bo->bo.resource; struct nv10_fence_chan *fctx; - struct ttm_resource *reg = &priv->bo->bo.mem; u32 start = reg->start * PAGE_SIZE; u32 limit = start + priv->bo->bo.base.size - 1; int ret = 0; diff --git a/drivers/gpu/drm/nouveau/nv50_fence.c b/drivers/gpu/drm/nouveau/nv50_fence.c index 1625826505f6..ea1e1f480bfe 100644 --- a/drivers/gpu/drm/nouveau/nv50_fence.c +++ b/drivers/gpu/drm/nouveau/nv50_fence.c @@ -37,7 +37,7 @@ nv50_fence_context_new(struct nouveau_channel *chan) { struct nv10_fence_priv *priv = chan->drm->fence; struct nv10_fence_chan *fctx; - struct ttm_resource *reg = &priv->bo->bo.mem; + struct ttm_resource *reg = priv->bo->bo.resource; u32 start = reg->start * PAGE_SIZE; u32 limit = start + priv->bo->bo.base.size - 1; int ret; diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h index 20a0f3ab84ad..dd6abee55f56 100644 --- a/drivers/gpu/drm/qxl/qxl_drv.h +++ b/drivers/gpu/drm/qxl/qxl_drv.h @@ -292,12 +292,12 @@ qxl_bo_physical_address(struct qxl_device *qdev, struct qxl_bo *bo, unsigned long offset) { struct qxl_memslot *slot = - (bo->tbo.mem.mem_type == TTM_PL_VRAM) + (bo->tbo.resource->mem_type == TTM_PL_VRAM) ? 
&qdev->main_slot : &qdev->surfaces_slot; - /* TODO - need to hold one of the locks to read bo->tbo.mem.start */ + /* TODO - need to hold one of the locks to read bo->tbo.resource->start */ - return slot->high_bits | ((bo->tbo.mem.start << PAGE_SHIFT) + offset); + return slot->high_bits | ((bo->tbo.resource->start << PAGE_SHIFT) + offset); } /* qxl_display.c */ diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c index 6e26d70f2f07..fbb36e3e8564 100644 --- a/drivers/gpu/drm/qxl/qxl_object.c +++ b/drivers/gpu/drm/qxl/qxl_object.c @@ -212,14 +212,14 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct io_mapping *map; struct dma_buf_map bo_map; - if (bo->tbo.mem.mem_type == TTM_PL_VRAM) + if (bo->tbo.resource->mem_type == TTM_PL_VRAM) map = qdev->vram_mapping; - else if (bo->tbo.mem.mem_type == TTM_PL_PRIV) + else if (bo->tbo.resource->mem_type == TTM_PL_PRIV) map = qdev->surface_mapping; else goto fallback; - offset = bo->tbo.mem.start << PAGE_SHIFT; + offset = bo->tbo.resource->start << PAGE_SHIFT; return io_mapping_map_atomic_wc(map, offset + page_offset); fallback: if (bo->kptr) { @@ -266,8 +266,8 @@ int qxl_bo_vunmap(struct qxl_bo *bo) void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *pmap) { - if ((bo->tbo.mem.mem_type != TTM_PL_VRAM) && - (bo->tbo.mem.mem_type != TTM_PL_PRIV)) + if ((bo->tbo.resource->mem_type != TTM_PL_VRAM) && + (bo->tbo.resource->mem_type != TTM_PL_PRIV)) goto fallback; io_mapping_unmap_atomic(pmap); diff --git a/drivers/gpu/drm/qxl/qxl_ttm.c b/drivers/gpu/drm/qxl/qxl_ttm.c index 47afe95d04a1..8aa87b8edb9c 100644 --- a/drivers/gpu/drm/qxl/qxl_ttm.c +++ b/drivers/gpu/drm/qxl/qxl_ttm.c @@ -131,7 +131,7 @@ static void qxl_bo_move_notify(struct ttm_buffer_object *bo, qbo = to_qxl_bo(bo); qdev = to_qxl(qbo->tbo.base.dev); - if (bo->mem.mem_type == TTM_PL_PRIV && qbo->surface_id) + if (bo->resource->mem_type == TTM_PL_PRIV && qbo->surface_id) qxl_surface_evict(qdev, qbo, new_mem ? true : false); } @@ -140,7 +140,7 @@ static int qxl_bo_move(struct ttm_buffer_object *bo, bool evict, struct ttm_resource *new_mem, struct ttm_place *hop) { - struct ttm_resource *old_mem = &bo->mem; + struct ttm_resource *old_mem = bo->resource; int ret; qxl_bo_move_notify(bo, new_mem); diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c index 059431689c2d..51da880a84a0 100644 --- a/drivers/gpu/drm/radeon/radeon_cs.c +++ b/drivers/gpu/drm/radeon/radeon_cs.c @@ -400,8 +400,8 @@ static int cmp_size_smaller_first(void *priv, struct list_head *a, struct radeon_bo_list *lb = list_entry(b, struct radeon_bo_list, tv.head); /* Sort A before B if A is smaller. 
*/ - return (int)la->robj->tbo.mem.num_pages - - (int)lb->robj->tbo.mem.num_pages; + return (int)la->robj->tbo.resource->num_pages - + (int)lb->robj->tbo.resource->num_pages; } /** @@ -516,7 +516,7 @@ static int radeon_bo_vm_update_pte(struct radeon_cs_parser *p, } r = radeon_vm_bo_update(rdev, vm->ib_bo_va, - &rdev->ring_tmp_bo.bo->tbo.mem); + rdev->ring_tmp_bo.bo->tbo.resource); if (r) return r; @@ -530,7 +530,7 @@ static int radeon_bo_vm_update_pte(struct radeon_cs_parser *p, return -EINVAL; } - r = radeon_vm_bo_update(rdev, bo_va, &bo->tbo.mem); + r = radeon_vm_bo_update(rdev, bo_va, bo->tbo.resource); if (r) return r; diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c index 05ea2f39f626..226c1750709c 100644 --- a/drivers/gpu/drm/radeon/radeon_gem.c +++ b/drivers/gpu/drm/radeon/radeon_gem.c @@ -480,7 +480,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data, else r = 0; - cur_placement = READ_ONCE(robj->tbo.mem.mem_type); + cur_placement = READ_ONCE(robj->tbo.resource->mem_type); args->domain = radeon_mem_type_to_domain(cur_placement); drm_gem_object_put(gobj); return r; @@ -510,7 +510,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data, r = ret; /* Flush HDP cache via MMIO if necessary */ - cur_placement = READ_ONCE(robj->tbo.mem.mem_type); + cur_placement = READ_ONCE(robj->tbo.resource->mem_type); if (rdev->asic->mmio_hdp_flush && radeon_mem_type_to_domain(cur_placement) == RADEON_GEM_DOMAIN_VRAM) robj->rdev->asic->mmio_hdp_flush(rdev); @@ -594,7 +594,7 @@ static void radeon_gem_va_update_vm(struct radeon_device *rdev, goto error_free; list_for_each_entry(entry, &list, head) { - domain = radeon_mem_type_to_domain(entry->bo->mem.mem_type); + domain = radeon_mem_type_to_domain(entry->bo->resource->mem_type); /* if anything is swapped out don't swap it in here, just abort and wait for the next CS */ if (domain == RADEON_GEM_DOMAIN_CPU) @@ -607,7 +607,7 @@ static void radeon_gem_va_update_vm(struct radeon_device *rdev, goto error_unlock; if (bo_va->it.start) - r = radeon_vm_bo_update(rdev, bo_va, &bo_va->bo->tbo.mem); + r = radeon_vm_bo_update(rdev, bo_va, bo_va->bo->tbo.resource); error_unlock: mutex_unlock(&bo_va->vm->mutex); @@ -811,7 +811,7 @@ static int radeon_debugfs_gem_info_show(struct seq_file *m, void *unused) unsigned domain; const char *placement; - domain = radeon_mem_type_to_domain(rbo->tbo.mem.mem_type); + domain = radeon_mem_type_to_domain(rbo->tbo.resource->mem_type); switch (domain) { case RADEON_GEM_DOMAIN_VRAM: placement = "VRAM"; diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c index cee11c55fd15..bfaaa3c969a3 100644 --- a/drivers/gpu/drm/radeon/radeon_object.c +++ b/drivers/gpu/drm/radeon/radeon_object.c @@ -76,7 +76,7 @@ static void radeon_ttm_bo_destroy(struct ttm_buffer_object *tbo) bo = container_of(tbo, struct radeon_bo, tbo); - radeon_update_memory_usage(bo, bo->tbo.mem.mem_type, -1); + radeon_update_memory_usage(bo, bo->tbo.resource->mem_type, -1); mutex_lock(&bo->rdev->gem.mutex); list_del_init(&bo->list); @@ -250,7 +250,7 @@ int radeon_bo_kmap(struct radeon_bo *bo, void **ptr) } return 0; } - r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.mem.num_pages, &bo->kmap); + r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.resource->num_pages, &bo->kmap); if (r) { return r; } @@ -359,7 +359,7 @@ void radeon_bo_unpin(struct radeon_bo *bo) { ttm_bo_unpin(&bo->tbo); if (!bo->tbo.pin_count) { - if (bo->tbo.mem.mem_type == TTM_PL_VRAM) + if (bo->tbo.resource->mem_type == 
TTM_PL_VRAM) bo->rdev->vram_pin_size -= radeon_bo_size(bo); else bo->rdev->gart_pin_size -= radeon_bo_size(bo); @@ -506,7 +506,7 @@ int radeon_bo_list_validate(struct radeon_device *rdev, u32 domain = lobj->preferred_domains; u32 allowed = lobj->allowed_domains; u32 current_domain = - radeon_mem_type_to_domain(bo->tbo.mem.mem_type); + radeon_mem_type_to_domain(bo->tbo.resource->mem_type); /* Check if this buffer will be moved and don't move it * if we have moved too many buffers for this IB already. @@ -605,7 +605,7 @@ int radeon_bo_get_surface_reg(struct radeon_bo *bo) out: radeon_set_surface_reg(rdev, i, bo->tiling_flags, bo->pitch, - bo->tbo.mem.start << PAGE_SHIFT, + bo->tbo.resource->start << PAGE_SHIFT, bo->tbo.base.size); return 0; } @@ -711,7 +711,7 @@ int radeon_bo_check_tiling(struct radeon_bo *bo, bool has_moved, return 0; } - if (bo->tbo.mem.mem_type != TTM_PL_VRAM) { + if (bo->tbo.resource->mem_type != TTM_PL_VRAM) { if (!has_moved) return 0; @@ -743,7 +743,7 @@ void radeon_bo_move_notify(struct ttm_buffer_object *bo, if (!new_mem) return; - radeon_update_memory_usage(rbo, bo->mem.mem_type, -1); + radeon_update_memory_usage(rbo, bo->resource->mem_type, -1); radeon_update_memory_usage(rbo, new_mem->mem_type, 1); } @@ -760,11 +760,11 @@ vm_fault_t radeon_bo_fault_reserve_notify(struct ttm_buffer_object *bo) rbo = container_of(bo, struct radeon_bo, tbo); radeon_bo_check_tiling(rbo, 0, 0); rdev = rbo->rdev; - if (bo->mem.mem_type != TTM_PL_VRAM) + if (bo->resource->mem_type != TTM_PL_VRAM) return 0; - size = bo->mem.num_pages << PAGE_SHIFT; - offset = bo->mem.start << PAGE_SHIFT; + size = bo->resource->num_pages << PAGE_SHIFT; + offset = bo->resource->start << PAGE_SHIFT; if ((offset + size) <= rdev->mc.visible_vram_size) return 0; @@ -786,7 +786,7 @@ vm_fault_t radeon_bo_fault_reserve_notify(struct ttm_buffer_object *bo) radeon_ttm_placement_from_domain(rbo, RADEON_GEM_DOMAIN_GTT); r = ttm_bo_validate(bo, &rbo->placement, &ctx); } else if (likely(!r)) { - offset = bo->mem.start << PAGE_SHIFT; + offset = bo->resource->start << PAGE_SHIFT; /* this should never happen */ if ((offset + size) > rdev->mc.visible_vram_size) return VM_FAULT_SIGBUS; diff --git a/drivers/gpu/drm/radeon/radeon_object.h b/drivers/gpu/drm/radeon/radeon_object.h index fd4116bdde0f..1739c6a142cd 100644 --- a/drivers/gpu/drm/radeon/radeon_object.h +++ b/drivers/gpu/drm/radeon/radeon_object.h @@ -95,7 +95,7 @@ static inline u64 radeon_bo_gpu_offset(struct radeon_bo *bo) rdev = radeon_get_rdev(bo->tbo.bdev); - switch (bo->tbo.mem.mem_type) { + switch (bo->tbo.resource->mem_type) { case TTM_PL_TT: start = rdev->mc.gtt_start; break; @@ -104,7 +104,7 @@ static inline u64 radeon_bo_gpu_offset(struct radeon_bo *bo) break; } - return (bo->tbo.mem.start << PAGE_SHIFT) + start; + return (bo->tbo.resource->start << PAGE_SHIFT) + start; } static inline unsigned long radeon_bo_size(struct radeon_bo *bo) diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c index 0c1950f4e146..edd7b9b7c6ff 100644 --- a/drivers/gpu/drm/radeon/radeon_pm.c +++ b/drivers/gpu/drm/radeon/radeon_pm.c @@ -154,7 +154,7 @@ static void radeon_unmap_vram_bos(struct radeon_device *rdev) return; list_for_each_entry_safe(bo, n, &rdev->gem.objects, list) { - if (bo->tbo.mem.mem_type == TTM_PL_VRAM) + if (bo->tbo.resource->mem_type == TTM_PL_VRAM) ttm_bo_unmap_virtual(&bo->tbo); } } diff --git a/drivers/gpu/drm/radeon/radeon_trace.h b/drivers/gpu/drm/radeon/radeon_trace.h index 1729cb9a95c5..c9fed5f2b870 100644 --- 
a/drivers/gpu/drm/radeon/radeon_trace.h +++ b/drivers/gpu/drm/radeon/radeon_trace.h @@ -22,7 +22,7 @@ TRACE_EVENT(radeon_bo_create, TP_fast_assign( __entry->bo = bo; - __entry->pages = bo->tbo.mem.num_pages; + __entry->pages = bo->tbo.resource->num_pages; ), TP_printk("bo=%p, pages=%u", __entry->bo, __entry->pages) ); diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c index 380b3007fd0b..4787bc281281 100644 --- a/drivers/gpu/drm/radeon/radeon_ttm.c +++ b/drivers/gpu/drm/radeon/radeon_ttm.c @@ -99,12 +99,12 @@ static void radeon_evict_flags(struct ttm_buffer_object *bo, return; } rbo = container_of(bo, struct radeon_bo, tbo); - switch (bo->mem.mem_type) { + switch (bo->resource->mem_type) { case TTM_PL_VRAM: if (rbo->rdev->ring[radeon_copy_ring_index(rbo->rdev)].ready == false) radeon_ttm_placement_from_domain(rbo, RADEON_GEM_DOMAIN_CPU); else if (rbo->rdev->mc.visible_vram_size < rbo->rdev->mc.real_vram_size && - bo->mem.start < (rbo->rdev->mc.visible_vram_size >> PAGE_SHIFT)) { + bo->resource->start < (rbo->rdev->mc.visible_vram_size >> PAGE_SHIFT)) { unsigned fpfn = rbo->rdev->mc.visible_vram_size >> PAGE_SHIFT; int i; @@ -207,9 +207,9 @@ static int radeon_bo_move(struct ttm_buffer_object *bo, bool evict, struct ttm_resource *new_mem, struct ttm_place *hop) { + struct ttm_resource *old_mem = bo->resource; struct radeon_device *rdev; struct radeon_bo *rbo; - struct ttm_resource *old_mem = &bo->mem; int r; if (new_mem->mem_type == TTM_PL_TT) { @@ -241,7 +241,7 @@ static int radeon_bo_move(struct ttm_buffer_object *bo, bool evict, if (old_mem->mem_type == TTM_PL_TT && new_mem->mem_type == TTM_PL_SYSTEM) { radeon_ttm_tt_unbind(bo->bdev, bo->ttm); - ttm_resource_free(bo, &bo->mem); + ttm_resource_free(bo, bo->resource); ttm_bo_assign_mem(bo, new_mem); goto out; } diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index ca1b098b6a56..d6cb2b289ba5 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -58,7 +58,7 @@ static void ttm_bo_mem_space_debug(struct ttm_buffer_object *bo, int i, mem_type; drm_printf(&p, "No space for %p (%lu pages, %zuK, %zuM)\n", - bo, bo->mem.num_pages, bo->base.size >> 10, + bo, bo->resource->num_pages, bo->base.size >> 10, bo->base.size >> 20); for (i = 0; i < placement->num_placement; i++) { mem_type = placement->placement[i].mem_type; @@ -109,7 +109,7 @@ void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo, bdev->funcs->del_from_lru_notify(bo); if (bulk && !bo->pin_count) { - switch (bo->mem.mem_type) { + switch (bo->resource->mem_type) { case TTM_PL_TT: ttm_bo_bulk_move_set_pos(&bulk->tt[bo->priority], bo); break; @@ -163,11 +163,13 @@ static int ttm_bo_handle_move_mem(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx, struct ttm_place *hop) { + struct ttm_resource_manager *old_man, *new_man; struct ttm_device *bdev = bo->bdev; - struct ttm_resource_manager *old_man = ttm_manager_type(bdev, bo->mem.mem_type); - struct ttm_resource_manager *new_man = ttm_manager_type(bdev, mem->mem_type); int ret; + old_man = ttm_manager_type(bdev, bo->resource->mem_type); + new_man = ttm_manager_type(bdev, mem->mem_type); + ttm_bo_unmap_virtual(bo); /* @@ -200,7 +202,7 @@ static int ttm_bo_handle_move_mem(struct ttm_buffer_object *bo, return 0; out_err: - new_man = ttm_manager_type(bdev, bo->mem.mem_type); + new_man = ttm_manager_type(bdev, bo->resource->mem_type); if (!new_man->use_tt) ttm_bo_tt_destroy(bo); @@ -221,7 +223,7 @@ static void ttm_bo_cleanup_memtype_use(struct 
ttm_buffer_object *bo) bo->bdev->funcs->delete_mem_notify(bo); ttm_bo_tt_destroy(bo); - ttm_resource_free(bo, &bo->mem); + ttm_resource_free(bo, bo->resource); } static int ttm_bo_individualize_resv(struct ttm_buffer_object *bo) @@ -417,7 +419,7 @@ static void ttm_bo_release(struct kref *kref) bo->bdev->funcs->release_notify(bo); drm_vma_offset_remove(bdev->vma_manager, &bo->base.vma_node); - ttm_mem_io_free(bdev, &bo->mem); + ttm_mem_io_free(bdev, bo->resource); } if (!dma_resv_test_signaled_rcu(bo->base.resv, true) || @@ -438,7 +440,7 @@ static void ttm_bo_release(struct kref *kref) */ if (bo->pin_count) { bo->pin_count = 0; - ttm_bo_move_to_lru_tail(bo, &bo->mem, NULL); + ttm_bo_move_to_lru_tail(bo, bo->resource, NULL); } kref_init(&bo->kref); @@ -534,8 +536,8 @@ bool ttm_bo_eviction_valuable(struct ttm_buffer_object *bo, /* Don't evict this BO if it's outside of the * requested placement range */ - if (place->fpfn >= (bo->mem.start + bo->mem.num_pages) || - (place->lpfn && place->lpfn <= bo->mem.start)) + if (place->fpfn >= (bo->resource->start + bo->resource->num_pages) || + (place->lpfn && place->lpfn <= bo->resource->start)) return false; return true; @@ -849,7 +851,7 @@ int ttm_bo_mem_space(struct ttm_buffer_object *bo, } error: - if (bo->mem.mem_type == TTM_PL_SYSTEM && !bo->pin_count) + if (bo->resource->mem_type == TTM_PL_SYSTEM && !bo->pin_count) ttm_bo_move_to_lru_tail_unlocked(bo); return ret; @@ -985,7 +987,7 @@ int ttm_bo_validate(struct ttm_buffer_object *bo, /* * Check whether we need to move buffer. */ - if (!ttm_bo_mem_compat(placement, &bo->mem, &new_flags)) { + if (!ttm_bo_mem_compat(placement, bo->resource, &new_flags)) { ret = ttm_bo_move_buffer(bo, placement, ctx); if (ret) return ret; @@ -993,7 +995,7 @@ int ttm_bo_validate(struct ttm_buffer_object *bo, /* * We might need to add a TTM. */ - if (bo->mem.mem_type == TTM_PL_SYSTEM) { + if (bo->resource->mem_type == TTM_PL_SYSTEM) { ret = ttm_tt_create(bo, true); if (ret) return ret; @@ -1025,7 +1027,8 @@ int ttm_bo_init_reserved(struct ttm_device *bdev, bo->bdev = bdev; bo->type = type; bo->page_alignment = page_alignment; - ttm_resource_alloc(bo, &sys_mem, &bo->mem); + bo->resource = &bo->_mem; + ttm_resource_alloc(bo, &sys_mem, bo->resource); bo->moving = NULL; bo->pin_count = 0; bo->sg = sg; @@ -1044,7 +1047,7 @@ int ttm_bo_init_reserved(struct ttm_device *bdev, if (bo->type == ttm_bo_type_device || bo->type == ttm_bo_type_sg) ret = drm_vma_offset_add(bdev->vma_manager, &bo->base.vma_node, - bo->mem.num_pages); + bo->resource->num_pages); /* passed reservation objects should already be locked, * since otherwise lockdep will be angered in radeon. 
@@ -1106,7 +1109,7 @@ void ttm_bo_unmap_virtual(struct ttm_buffer_object *bo) struct ttm_device *bdev = bo->bdev; drm_vma_node_unmap(&bo->base.vma_node, bdev->dev_mapping); - ttm_mem_io_free(bdev, &bo->mem); + ttm_mem_io_free(bdev, bo->resource); } EXPORT_SYMBOL(ttm_bo_unmap_virtual); @@ -1163,7 +1166,7 @@ int ttm_bo_swapout(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx, /* * Move to system cached */ - if (bo->mem.mem_type != TTM_PL_SYSTEM) { + if (bo->resource->mem_type != TTM_PL_SYSTEM) { struct ttm_operation_ctx ctx = { false, false }; struct ttm_resource evict_mem; struct ttm_place place, hop; diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index ae8b61460724..aedf02a31c70 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -179,7 +179,7 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo, struct ttm_device *bdev = bo->bdev; struct ttm_resource_manager *man = ttm_manager_type(bdev, new_mem->mem_type); struct ttm_tt *ttm = bo->ttm; - struct ttm_resource *old_mem = &bo->mem; + struct ttm_resource *old_mem = bo->resource; struct ttm_resource old_copy = *old_mem; void *old_iomap; void *new_iomap; @@ -365,24 +365,23 @@ static int ttm_bo_ioremap(struct ttm_buffer_object *bo, unsigned long size, struct ttm_bo_kmap_obj *map) { - struct ttm_resource *mem = &bo->mem; + struct ttm_resource *mem = bo->resource; - if (bo->mem.bus.addr) { + if (bo->resource->bus.addr) { map->bo_kmap_type = ttm_bo_map_premapped; - map->virtual = (void *)(((u8 *)bo->mem.bus.addr) + offset); + map->virtual = ((u8 *)bo->resource->bus.addr) + offset; } else { + resource_size_t res = bo->resource->bus.offset + offset; + map->bo_kmap_type = ttm_bo_map_iomap; if (mem->bus.caching == ttm_write_combined) - map->virtual = ioremap_wc(bo->mem.bus.offset + offset, - size); + map->virtual = ioremap_wc(res, size); #ifdef CONFIG_X86 else if (mem->bus.caching == ttm_cached) - map->virtual = ioremap_cache(bo->mem.bus.offset + offset, - size); + map->virtual = ioremap_cache(res, size); #endif else - map->virtual = ioremap(bo->mem.bus.offset + offset, - size); + map->virtual = ioremap(res, size); } return (!map->virtual) ? 
-ENOMEM : 0; } @@ -392,7 +391,7 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo, unsigned long num_pages, struct ttm_bo_kmap_obj *map) { - struct ttm_resource *mem = &bo->mem; + struct ttm_resource *mem = bo->resource; struct ttm_operation_ctx ctx = { .interruptible = false, .no_wait_gpu = false @@ -438,15 +437,15 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, map->virtual = NULL; map->bo = bo; - if (num_pages > bo->mem.num_pages) + if (num_pages > bo->resource->num_pages) return -EINVAL; - if ((start_page + num_pages) > bo->mem.num_pages) + if ((start_page + num_pages) > bo->resource->num_pages) return -EINVAL; - ret = ttm_mem_io_reserve(bo->bdev, &bo->mem); + ret = ttm_mem_io_reserve(bo->bdev, bo->resource); if (ret) return ret; - if (!bo->mem.bus.is_iomem) { + if (!bo->resource->bus.is_iomem) { return ttm_bo_kmap_ttm(bo, start_page, num_pages, map); } else { offset = start_page << PAGE_SHIFT; @@ -475,7 +474,7 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map) default: BUG(); } - ttm_mem_io_free(map->bo->bdev, &map->bo->mem); + ttm_mem_io_free(map->bo->bdev, map->bo->resource); map->virtual = NULL; map->page = NULL; } @@ -483,7 +482,7 @@ EXPORT_SYMBOL(ttm_bo_kunmap); int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) { - struct ttm_resource *mem = &bo->mem; + struct ttm_resource *mem = bo->resource; int ret; ret = ttm_mem_io_reserve(bo->bdev, mem); @@ -542,7 +541,7 @@ EXPORT_SYMBOL(ttm_bo_vmap); void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) { - struct ttm_resource *mem = &bo->mem; + struct ttm_resource *mem = bo->resource; if (dma_buf_map_is_null(map)) return; @@ -553,7 +552,7 @@ void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) iounmap(map->vaddr_iomem); dma_buf_map_clear(map); - ttm_mem_io_free(bo->bdev, &bo->mem); + ttm_mem_io_free(bo->bdev, bo->resource); } EXPORT_SYMBOL(ttm_bo_vunmap); @@ -567,7 +566,7 @@ static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo, if (!dst_use_tt) ttm_bo_tt_destroy(bo); - ttm_resource_free(bo, &bo->mem); + ttm_resource_free(bo, bo->resource); return 0; } @@ -615,7 +614,9 @@ static void ttm_bo_move_pipeline_evict(struct ttm_buffer_object *bo, struct dma_fence *fence) { struct ttm_device *bdev = bo->bdev; - struct ttm_resource_manager *from = ttm_manager_type(bdev, bo->mem.mem_type); + struct ttm_resource_manager *from; + + from = ttm_manager_type(bdev, bo->resource->mem_type); /** * BO doesn't have a TTM we need to bind/unbind. 
Just remember @@ -628,7 +629,7 @@ static void ttm_bo_move_pipeline_evict(struct ttm_buffer_object *bo, } spin_unlock(&from->move_lock); - ttm_resource_free(bo, &bo->mem); + ttm_resource_free(bo, bo->resource); dma_fence_put(bo->moving); bo->moving = dma_fence_get(fence); @@ -641,7 +642,7 @@ int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo, struct ttm_resource *new_mem) { struct ttm_device *bdev = bo->bdev; - struct ttm_resource_manager *from = ttm_manager_type(bdev, bo->mem.mem_type); + struct ttm_resource_manager *from = ttm_manager_type(bdev, bo->resource->mem_type); struct ttm_resource_manager *man = ttm_manager_type(bdev, new_mem->mem_type); int ret = 0; @@ -677,7 +678,7 @@ int ttm_bo_pipeline_gutting(struct ttm_buffer_object *bo) if (ret) ttm_bo_wait(bo, false, false); - ttm_resource_alloc(bo, &sys_mem, &bo->mem); + ttm_resource_alloc(bo, &sys_mem, bo->resource); bo->ttm = NULL; dma_resv_unlock(&ghost->base._resv); diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c index b31b18058965..e3f168321d2f 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c @@ -100,7 +100,7 @@ static unsigned long ttm_bo_io_mem_pfn(struct ttm_buffer_object *bo, if (bdev->funcs->io_mem_pfn) return bdev->funcs->io_mem_pfn(bo, page_offset); - return (bo->mem.bus.offset >> PAGE_SHIFT) + page_offset; + return (bo->resource->bus.offset >> PAGE_SHIFT) + page_offset; } /** @@ -198,10 +198,10 @@ static vm_fault_t ttm_bo_vm_insert_huge(struct vm_fault *vmf, /* Fault should not cross bo boundary. */ page_offset &= ~(fault_page_size - 1); - if (page_offset + fault_page_size > bo->mem.num_pages) + if (page_offset + fault_page_size > bo->resource->num_pages) goto out_fallback; - if (bo->mem.bus.is_iomem) + if (bo->resource->bus.is_iomem) pfn = ttm_bo_io_mem_pfn(bo, page_offset); else pfn = page_to_pfn(ttm->pages[page_offset]); @@ -211,7 +211,7 @@ static vm_fault_t ttm_bo_vm_insert_huge(struct vm_fault *vmf, goto out_fallback; /* Check that memory is contiguous. */ - if (!bo->mem.bus.is_iomem) { + if (!bo->resource->bus.is_iomem) { for (i = 1; i < fault_page_size; ++i) { if (page_to_pfn(ttm->pages[page_offset + i]) != pfn + i) goto out_fallback; @@ -297,7 +297,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf, if (unlikely(ret != 0)) return ret; - err = ttm_mem_io_reserve(bdev, &bo->mem); + err = ttm_mem_io_reserve(bdev, bo->resource); if (unlikely(err != 0)) return VM_FAULT_SIGBUS; @@ -306,11 +306,11 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf, page_last = vma_pages(vma) + vma->vm_pgoff - drm_vma_node_start(&bo->base.vma_node); - if (unlikely(page_offset >= bo->mem.num_pages)) + if (unlikely(page_offset >= bo->resource->num_pages)) return VM_FAULT_SIGBUS; - prot = ttm_io_prot(bo, &bo->mem, prot); - if (!bo->mem.bus.is_iomem) { + prot = ttm_io_prot(bo, bo->resource, prot); + if (!bo->resource->bus.is_iomem) { struct ttm_operation_ctx ctx = { .interruptible = false, .no_wait_gpu = false, @@ -335,7 +335,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf, * first page. 
*/ for (i = 0; i < num_prefault; ++i) { - if (bo->mem.bus.is_iomem) { + if (bo->resource->bus.is_iomem) { pfn = ttm_bo_io_mem_pfn(bo, page_offset); } else { page = ttm->pages[page_offset]; @@ -469,14 +469,14 @@ int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr, << PAGE_SHIFT); int ret; - if (len < 1 || (offset + len) >> PAGE_SHIFT > bo->mem.num_pages) + if (len < 1 || (offset + len) >> PAGE_SHIFT > bo->resource->num_pages) return -EIO; ret = ttm_bo_reserve(bo, true, false, NULL); if (ret) return ret; - switch (bo->mem.mem_type) { + switch (bo->resource->mem_type) { case TTM_PL_SYSTEM: if (unlikely(bo->ttm->page_flags & TTM_PAGE_FLAG_SWAPPED)) { ret = ttm_tt_swapin(bo->ttm); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c index 3a438ae4d3f4..fa7c0c8da193 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c @@ -483,10 +483,10 @@ int vmw_bo_cpu_blit(struct ttm_buffer_object *dst, d.src_addr = NULL; d.dst_pages = dst->ttm->pages; d.src_pages = src->ttm->pages; - d.dst_num_pages = dst->mem.num_pages; - d.src_num_pages = src->mem.num_pages; - d.dst_prot = ttm_io_prot(dst, &dst->mem, PAGE_KERNEL); - d.src_prot = ttm_io_prot(src, &src->mem, PAGE_KERNEL); + d.dst_num_pages = dst->resource->num_pages; + d.src_num_pages = src->resource->num_pages; + d.dst_prot = ttm_io_prot(dst, dst->resource, PAGE_KERNEL); + d.src_prot = ttm_io_prot(src, src->resource, PAGE_KERNEL); d.diff = diff; for (j = 0; j < h; ++j) { diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c index 587314d57991..6f8df12ccebb 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c @@ -107,7 +107,7 @@ int vmw_bo_pin_in_placement(struct vmw_private *dev_priv, goto err; if (buf->base.pin_count > 0) - ret = ttm_bo_mem_compat(placement, &bo->mem, + ret = ttm_bo_mem_compat(placement, bo->resource, &new_flags) == true ? 0 : -EINVAL; else ret = ttm_bo_validate(bo, placement, &ctx); @@ -155,7 +155,7 @@ int vmw_bo_pin_in_vram_or_gmr(struct vmw_private *dev_priv, goto err; if (buf->base.pin_count > 0) { - ret = ttm_bo_mem_compat(&vmw_vram_gmr_placement, &bo->mem, + ret = ttm_bo_mem_compat(&vmw_vram_gmr_placement, bo->resource, &new_flags) == true ? 0 : -EINVAL; goto out_unreserve; } @@ -222,7 +222,7 @@ int vmw_bo_pin_in_start_of_vram(struct vmw_private *dev_priv, uint32_t new_flags; place = vmw_vram_placement.placement[0]; - place.lpfn = bo->mem.num_pages; + place.lpfn = bo->resource->num_pages; placement.num_placement = 1; placement.placement = &place; placement.num_busy_placement = 1; @@ -242,22 +242,22 @@ int vmw_bo_pin_in_start_of_vram(struct vmw_private *dev_priv, * In that case, evict it first because TTM isn't good at handling * that situation. */ - if (bo->mem.mem_type == TTM_PL_VRAM && - bo->mem.start < bo->mem.num_pages && - bo->mem.start > 0 && + if (bo->resource->mem_type == TTM_PL_VRAM && + bo->resource->start < bo->resource->num_pages && + bo->resource->start > 0 && buf->base.pin_count == 0) { ctx.interruptible = false; (void) ttm_bo_validate(bo, &vmw_sys_placement, &ctx); } if (buf->base.pin_count > 0) - ret = ttm_bo_mem_compat(&placement, &bo->mem, + ret = ttm_bo_mem_compat(&placement, bo->resource, &new_flags) == true ? 
0 : -EINVAL; else ret = ttm_bo_validate(bo, &placement, &ctx); /* For some reason we didn't end up at the start of vram */ - WARN_ON(ret == 0 && bo->mem.start != 0); + WARN_ON(ret == 0 && bo->resource->start != 0); if (!ret) vmw_bo_pin_reserved(buf, true); @@ -314,11 +314,11 @@ int vmw_bo_unpin(struct vmw_private *dev_priv, void vmw_bo_get_guest_ptr(const struct ttm_buffer_object *bo, SVGAGuestPtr *ptr) { - if (bo->mem.mem_type == TTM_PL_VRAM) { + if (bo->resource->mem_type == TTM_PL_VRAM) { ptr->gmrId = SVGA_GMR_FRAMEBUFFER; - ptr->offset = bo->mem.start << PAGE_SHIFT; + ptr->offset = bo->resource->start << PAGE_SHIFT; } else { - ptr->gmrId = bo->mem.start; + ptr->gmrId = bo->resource->start; ptr->offset = 0; } } @@ -337,7 +337,7 @@ void vmw_bo_pin_reserved(struct vmw_buffer_object *vbo, bool pin) struct ttm_place pl; struct ttm_placement placement; struct ttm_buffer_object *bo = &vbo->base; - uint32_t old_mem_type = bo->mem.mem_type; + uint32_t old_mem_type = bo->resource->mem_type; int ret; dma_resv_assert_held(bo->base.resv); @@ -347,8 +347,8 @@ void vmw_bo_pin_reserved(struct vmw_buffer_object *vbo, bool pin) pl.fpfn = 0; pl.lpfn = 0; - pl.mem_type = bo->mem.mem_type; - pl.flags = bo->mem.placement; + pl.mem_type = bo->resource->mem_type; + pl.flags = bo->resource->placement; memset(&placement, 0, sizeof(placement)); placement.num_placement = 1; @@ -356,7 +356,7 @@ void vmw_bo_pin_reserved(struct vmw_buffer_object *vbo, bool pin) ret = ttm_bo_validate(bo, &placement, &ctx); - BUG_ON(ret != 0 || bo->mem.mem_type != old_mem_type); + BUG_ON(ret != 0 || bo->resource->mem_type != old_mem_type); if (pin) ttm_bo_pin(bo); @@ -390,7 +390,7 @@ void *vmw_bo_map_and_cache(struct vmw_buffer_object *vbo) if (virtual) return virtual; - ret = ttm_bo_kmap(bo, 0, bo->mem.num_pages, &vbo->map); + ret = ttm_bo_kmap(bo, 0, bo->resource->num_pages, &vbo->map); if (ret) DRM_ERROR("Buffer object map failed: %d.\n", ret); @@ -1228,7 +1228,7 @@ void vmw_bo_move_notify(struct ttm_buffer_object *bo, * With other types of moves, the underlying pages stay the same, * and the map can be kept. */ - if (mem->mem_type == TTM_PL_VRAM || bo->mem.mem_type == TTM_PL_VRAM) + if (mem->mem_type == TTM_PL_VRAM || bo->resource->mem_type == TTM_PL_VRAM) vmw_bo_unmap(vbo); /* @@ -1236,6 +1236,6 @@ void vmw_bo_move_notify(struct ttm_buffer_object *bo, * read back all resource content first, and unbind the MOB from * the resource. 
*/ - if (mem->mem_type != VMW_PL_MOB && bo->mem.mem_type == VMW_PL_MOB) + if (mem->mem_type != VMW_PL_MOB && bo->resource->mem_type == VMW_PL_MOB) vmw_resource_unbind_list(vbo); } diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c index 20246a7c97c9..06134305af11 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c @@ -600,11 +600,11 @@ static int vmw_fifo_emit_dummy_legacy_query(struct vmw_private *dev_priv, cmd->body.cid = cid; cmd->body.type = SVGA3D_QUERYTYPE_OCCLUSION; - if (bo->mem.mem_type == TTM_PL_VRAM) { + if (bo->resource->mem_type == TTM_PL_VRAM) { cmd->body.guestResult.gmrId = SVGA_GMR_FRAMEBUFFER; - cmd->body.guestResult.offset = bo->mem.start << PAGE_SHIFT; + cmd->body.guestResult.offset = bo->resource->start << PAGE_SHIFT; } else { - cmd->body.guestResult.gmrId = bo->mem.start; + cmd->body.guestResult.gmrId = bo->resource->start; cmd->body.guestResult.offset = 0; } @@ -645,8 +645,8 @@ static int vmw_fifo_emit_dummy_gb_query(struct vmw_private *dev_priv, cmd->header.size = sizeof(cmd->body); cmd->body.cid = cid; cmd->body.type = SVGA3D_QUERYTYPE_OCCLUSION; - BUG_ON(bo->mem.mem_type != VMW_PL_MOB); - cmd->body.mobid = bo->mem.start; + BUG_ON(bo->resource->mem_type != VMW_PL_MOB); + cmd->body.mobid = bo->resource->start; cmd->body.offset = 0; vmw_cmd_commit(dev_priv, sizeof(*cmd)); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c index 2e23e537cdf5..95d442361b30 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c @@ -889,7 +889,7 @@ static int vmw_cmdbuf_space_pool(struct vmw_cmdbuf_man *man, header->cmd = man->map + offset; if (man->using_mob) { cb_hdr->flags = SVGA_CB_FLAG_MOB; - cb_hdr->ptr.mob.mobid = man->cmd_space->mem.start; + cb_hdr->ptr.mob.mobid = man->cmd_space->resource->start; cb_hdr->ptr.mob.mobOffset = offset; } else { cb_hdr->ptr.pa = (u64)man->handle + (u64)offset; diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_context.c b/drivers/gpu/drm/vmwgfx/vmwgfx_context.c index 4a5a3e246216..47823fb373fa 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_context.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_context.c @@ -346,7 +346,7 @@ static int vmw_gb_context_bind(struct vmw_resource *res, } *cmd; struct ttm_buffer_object *bo = val_buf->bo; - BUG_ON(bo->mem.mem_type != VMW_PL_MOB); + BUG_ON(bo->resource->mem_type != VMW_PL_MOB); cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) @@ -355,7 +355,7 @@ static int vmw_gb_context_bind(struct vmw_resource *res, cmd->header.id = SVGA_3D_CMD_BIND_GB_CONTEXT; cmd->header.size = sizeof(cmd->body); cmd->body.cid = res->id; - cmd->body.mobid = bo->mem.start; + cmd->body.mobid = bo->resource->start; cmd->body.validContents = res->backup_dirty; res->backup_dirty = false; vmw_cmd_commit(dev_priv, sizeof(*cmd)); @@ -385,7 +385,7 @@ static int vmw_gb_context_unbind(struct vmw_resource *res, uint8_t *cmd; - BUG_ON(bo->mem.mem_type != VMW_PL_MOB); + BUG_ON(bo->resource->mem_type != VMW_PL_MOB); mutex_lock(&dev_priv->binding_mutex); vmw_binding_state_scrub(uctx->cbs); @@ -513,7 +513,7 @@ static int vmw_dx_context_bind(struct vmw_resource *res, } *cmd; struct ttm_buffer_object *bo = val_buf->bo; - BUG_ON(bo->mem.mem_type != VMW_PL_MOB); + BUG_ON(bo->resource->mem_type != VMW_PL_MOB); cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) @@ -522,7 +522,7 @@ static int vmw_dx_context_bind(struct vmw_resource *res, cmd->header.id = 
SVGA_3D_CMD_DX_BIND_CONTEXT; cmd->header.size = sizeof(cmd->body); cmd->body.cid = res->id; - cmd->body.mobid = bo->mem.start; + cmd->body.mobid = bo->resource->start; cmd->body.validContents = res->backup_dirty; res->backup_dirty = false; vmw_cmd_commit(dev_priv, sizeof(*cmd)); @@ -594,7 +594,7 @@ static int vmw_dx_context_unbind(struct vmw_resource *res, uint8_t *cmd; - BUG_ON(bo->mem.mem_type != VMW_PL_MOB); + BUG_ON(bo->resource->mem_type != VMW_PL_MOB); mutex_lock(&dev_priv->binding_mutex); vmw_dx_context_scrub_cotables(res, readback); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c index 42321b9c8129..660000a37e8d 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c @@ -173,7 +173,7 @@ static int vmw_cotable_unscrub(struct vmw_resource *res) SVGA3dCmdDXSetCOTable body; } *cmd; - WARN_ON_ONCE(bo->mem.mem_type != VMW_PL_MOB); + WARN_ON_ONCE(bo->resource->mem_type != VMW_PL_MOB); dma_resv_assert_held(bo->base.resv); cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); @@ -181,12 +181,12 @@ static int vmw_cotable_unscrub(struct vmw_resource *res) return -ENOMEM; WARN_ON(vcotbl->ctx->id == SVGA3D_INVALID_ID); - WARN_ON(bo->mem.mem_type != VMW_PL_MOB); + WARN_ON(bo->resource->mem_type != VMW_PL_MOB); cmd->header.id = SVGA_3D_CMD_DX_SET_COTABLE; cmd->header.size = sizeof(cmd->body); cmd->body.cid = vcotbl->ctx->id; cmd->body.type = vcotbl->type; - cmd->body.mobid = bo->mem.start; + cmd->body.mobid = bo->resource->start; cmd->body.validSizeInBytes = vcotbl->size_read_back; vmw_cmd_commit_flush(dev_priv, sizeof(*cmd)); @@ -315,7 +315,7 @@ static int vmw_cotable_unbind(struct vmw_resource *res, if (!vmw_resource_mob_attached(res)) return 0; - WARN_ON_ONCE(bo->mem.mem_type != VMW_PL_MOB); + WARN_ON_ONCE(bo->resource->mem_type != VMW_PL_MOB); dma_resv_assert_held(bo->base.resv); mutex_lock(&dev_priv->binding_mutex); @@ -431,7 +431,7 @@ static int vmw_cotable_resize(struct vmw_resource *res, size_t new_size) * Do a page by page copy of COTables. This eliminates slow vmap()s. * This should really be a TTM utility. 
*/ - for (i = 0; i < old_bo->mem.num_pages; ++i) { + for (i = 0; i < old_bo->resource->num_pages; ++i) { bool dummy; ret = ttm_bo_kmap(old_bo, i, 1, &old_map); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c index 7a24196f92c3..42f376582e14 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c @@ -735,7 +735,7 @@ static int vmw_rebind_all_dx_query(struct vmw_resource *ctx_res) cmd->header.id = SVGA_3D_CMD_DX_BIND_ALL_QUERY; cmd->header.size = sizeof(cmd->body); cmd->body.cid = ctx_res->id; - cmd->body.mobid = dx_query_mob->base.mem.start; + cmd->body.mobid = dx_query_mob->base.resource->start; vmw_cmd_commit(dev_priv, sizeof(*cmd)); vmw_context_bind_dx_query(ctx_res, dx_query_mob); @@ -1046,7 +1046,7 @@ static int vmw_query_bo_switch_prepare(struct vmw_private *dev_priv, if (unlikely(new_query_bo != sw_context->cur_query_bo)) { - if (unlikely(new_query_bo->base.mem.num_pages > 4)) { + if (unlikely(new_query_bo->base.resource->num_pages > 4)) { VMW_DEBUG_USER("Query buffer too large.\n"); return -EINVAL; } @@ -3698,16 +3698,16 @@ static void vmw_apply_relocations(struct vmw_sw_context *sw_context) list_for_each_entry(reloc, &sw_context->bo_relocations, head) { bo = &reloc->vbo->base; - switch (bo->mem.mem_type) { + switch (bo->resource->mem_type) { case TTM_PL_VRAM: - reloc->location->offset += bo->mem.start << PAGE_SHIFT; + reloc->location->offset += bo->resource->start << PAGE_SHIFT; reloc->location->gmrId = SVGA_GMR_FRAMEBUFFER; break; case VMW_PL_GMR: - reloc->location->gmrId = bo->mem.start; + reloc->location->gmrId = bo->resource->start; break; case VMW_PL_MOB: - *reloc->mob_loc = bo->mem.start; + *reloc->mob_loc = bo->resource->start; break; default: BUG(); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c b/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c index 45c9c6a7f1d6..e5a9a5cbd01a 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c @@ -232,7 +232,7 @@ void vmw_bo_dirty_unmap(struct vmw_buffer_object *vbo, int vmw_bo_dirty_add(struct vmw_buffer_object *vbo) { struct vmw_bo_dirty *dirty = vbo->dirty; - pgoff_t num_pages = vbo->base.mem.num_pages; + pgoff_t num_pages = vbo->base.resource->num_pages; size_t size, acc_size; int ret; static struct ttm_operation_ctx ctx = { @@ -413,7 +413,7 @@ vm_fault_t vmw_bo_vm_mkwrite(struct vm_fault *vmf) return ret; page_offset = vmf->pgoff - drm_vma_node_start(&bo->base.vma_node); - if (unlikely(page_offset >= bo->mem.num_pages)) { + if (unlikely(page_offset >= bo->resource->num_pages)) { ret = VM_FAULT_SIGBUS; goto out_unlock; } @@ -456,7 +456,7 @@ vm_fault_t vmw_bo_vm_fault(struct vm_fault *vmf) page_offset = vmf->pgoff - drm_vma_node_start(&bo->base.vma_node); - if (page_offset >= bo->mem.num_pages || + if (page_offset >= bo->resource->num_pages || vmw_resources_clean(vbo, page_offset, page_offset + PAGE_SIZE, &allowed_prefault)) { @@ -529,7 +529,7 @@ vm_fault_t vmw_bo_vm_huge_fault(struct vm_fault *vmf, page_offset = vmf->pgoff - drm_vma_node_start(&bo->base.vma_node); - if (page_offset >= bo->mem.num_pages || + if (page_offset >= bo->resource->num_pages || vmw_resources_clean(vbo, page_offset, page_offset + PAGE_SIZE, &allowed_prefault)) { diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c index a0db06564013..3206744988f9 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c @@ -254,7 +254,7 @@ static int 
vmw_gb_shader_bind(struct vmw_resource *res, } *cmd; struct ttm_buffer_object *bo = val_buf->bo; - BUG_ON(bo->mem.mem_type != VMW_PL_MOB); + BUG_ON(bo->resource->mem_type != VMW_PL_MOB); cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) @@ -263,7 +263,7 @@ static int vmw_gb_shader_bind(struct vmw_resource *res, cmd->header.id = SVGA_3D_CMD_BIND_GB_SHADER; cmd->header.size = sizeof(cmd->body); cmd->body.shid = res->id; - cmd->body.mobid = bo->mem.start; + cmd->body.mobid = bo->resource->start; cmd->body.offsetInBytes = res->backup_offset; res->backup_dirty = false; vmw_cmd_commit(dev_priv, sizeof(*cmd)); @@ -282,7 +282,7 @@ static int vmw_gb_shader_unbind(struct vmw_resource *res, } *cmd; struct vmw_fence_obj *fence; - BUG_ON(res->backup->base.mem.mem_type != VMW_PL_MOB); + BUG_ON(res->backup->base.resource->mem_type != VMW_PL_MOB); cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); if (unlikely(cmd == NULL)) @@ -402,7 +402,7 @@ static int vmw_dx_shader_unscrub(struct vmw_resource *res) cmd->header.size = sizeof(cmd->body); cmd->body.cid = shader->ctx->id; cmd->body.shid = shader->id; - cmd->body.mobid = res->backup->base.mem.start; + cmd->body.mobid = res->backup->base.resource->start; cmd->body.offsetInBytes = res->backup_offset; vmw_cmd_commit(dev_priv, sizeof(*cmd)); @@ -450,7 +450,7 @@ static int vmw_dx_shader_bind(struct vmw_resource *res, struct vmw_private *dev_priv = res->dev_priv; struct ttm_buffer_object *bo = val_buf->bo; - BUG_ON(bo->mem.mem_type != VMW_PL_MOB); + BUG_ON(bo->resource->mem_type != VMW_PL_MOB); mutex_lock(&dev_priv->binding_mutex); vmw_dx_shader_unscrub(res); mutex_unlock(&dev_priv->binding_mutex); @@ -513,7 +513,7 @@ static int vmw_dx_shader_unbind(struct vmw_resource *res, struct vmw_fence_obj *fence; int ret; - BUG_ON(res->backup->base.mem.mem_type != VMW_PL_MOB); + BUG_ON(res->backup->base.resource->mem_type != VMW_PL_MOB); mutex_lock(&dev_priv->binding_mutex); ret = vmw_dx_shader_scrub(res); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_streamoutput.c b/drivers/gpu/drm/vmwgfx/vmwgfx_streamoutput.c index 1dd042a20a66..c8efa4a6c995 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_streamoutput.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_streamoutput.c @@ -106,7 +106,7 @@ static int vmw_dx_streamoutput_unscrub(struct vmw_resource *res) cmd->header.id = SVGA_3D_CMD_DX_BIND_STREAMOUTPUT; cmd->header.size = sizeof(cmd->body); cmd->body.soid = so->id; - cmd->body.mobid = res->backup->base.mem.start; + cmd->body.mobid = res->backup->base.resource->start; cmd->body.offsetInBytes = res->backup_offset; cmd->body.sizeInBytes = so->size; vmw_cmd_commit(dev_priv, sizeof(*cmd)); @@ -142,7 +142,7 @@ static int vmw_dx_streamoutput_bind(struct vmw_resource *res, struct ttm_buffer_object *bo = val_buf->bo; int ret; - if (WARN_ON(bo->mem.mem_type != VMW_PL_MOB)) + if (WARN_ON(bo->resource->mem_type != VMW_PL_MOB)) return -EINVAL; mutex_lock(&dev_priv->binding_mutex); @@ -197,7 +197,7 @@ static int vmw_dx_streamoutput_unbind(struct vmw_resource *res, bool readback, struct vmw_fence_obj *fence; int ret; - if (WARN_ON(res->backup->base.mem.mem_type != VMW_PL_MOB)) + if (WARN_ON(res->backup->base.resource->mem_type != VMW_PL_MOB)) return -EINVAL; mutex_lock(&dev_priv->binding_mutex); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c index c3e55c1376eb..c9032b134365 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c @@ -1218,7 +1218,7 @@ static int vmw_gb_surface_bind(struct vmw_resource 
*res, uint32_t submit_size; struct ttm_buffer_object *bo = val_buf->bo; - BUG_ON(bo->mem.mem_type != VMW_PL_MOB); + BUG_ON(bo->resource->mem_type != VMW_PL_MOB); submit_size = sizeof(*cmd1) + (res->backup_dirty ? sizeof(*cmd2) : 0); @@ -1229,7 +1229,7 @@ static int vmw_gb_surface_bind(struct vmw_resource *res, cmd1->header.id = SVGA_3D_CMD_BIND_GB_SURFACE; cmd1->header.size = sizeof(cmd1->body); cmd1->body.sid = res->id; - cmd1->body.mobid = bo->mem.start; + cmd1->body.mobid = bo->resource->start; if (res->backup_dirty) { cmd2 = (void *) &cmd1[1]; cmd2->header.id = SVGA_3D_CMD_UPDATE_GB_SURFACE; @@ -1272,7 +1272,7 @@ static int vmw_gb_surface_unbind(struct vmw_resource *res, uint8_t *cmd; - BUG_ON(bo->mem.mem_type != VMW_PL_MOB); + BUG_ON(bo->resource->mem_type != VMW_PL_MOB); submit_size = sizeof(*cmd3) + (readback ? sizeof(*cmd1) : sizeof(*cmd2)); cmd = VMW_CMD_RESERVE(dev_priv, submit_size); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c index 2dc031fe4a90..bb1d453f7ca6 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c @@ -724,7 +724,7 @@ static int vmw_move(struct ttm_buffer_object *bo, struct ttm_resource *new_mem, struct ttm_place *hop) { - struct ttm_resource_manager *old_man = ttm_manager_type(bo->bdev, bo->mem.mem_type); + struct ttm_resource_manager *old_man = ttm_manager_type(bo->bdev, bo->resource->mem_type); struct ttm_resource_manager *new_man = ttm_manager_type(bo->bdev, new_mem->mem_type); int ret; @@ -734,10 +734,10 @@ static int vmw_move(struct ttm_buffer_object *bo, return ret; } - vmw_move_notify(bo, &bo->mem, new_mem); + vmw_move_notify(bo, bo->resource, new_mem); if (old_man->use_tt && new_man->use_tt) { - if (bo->mem.mem_type == TTM_PL_SYSTEM) { + if (bo->resource->mem_type == TTM_PL_SYSTEM) { ttm_bo_assign_mem(bo, new_mem); return 0; } @@ -746,7 +746,7 @@ static int vmw_move(struct ttm_buffer_object *bo, goto fail; vmw_ttm_unbind(bo->bdev, bo->ttm); - ttm_resource_free(bo, &bo->mem); + ttm_resource_free(bo, bo->resource); ttm_bo_assign_mem(bo, new_mem); return 0; } else { @@ -756,7 +756,7 @@ static int vmw_move(struct ttm_buffer_object *bo, } return 0; fail: - vmw_move_notify(bo, new_mem, &bo->mem); + vmw_move_notify(bo, new_mem, bo->resource); return ret; } diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h index 639521880c29..1876a3d6df73 100644 --- a/include/drm/ttm/ttm_bo_api.h +++ b/include/drm/ttm/ttm_bo_api.h @@ -136,7 +136,8 @@ struct ttm_buffer_object { * Members protected by the bo::resv::reserved lock. 
*/ - struct ttm_resource mem; + struct ttm_resource *resource; + struct ttm_resource _mem; struct ttm_tt *ttm; bool deleted; diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index dbccac957f8f..1a9ba0b13622 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -181,14 +181,14 @@ static inline void ttm_bo_move_to_lru_tail_unlocked(struct ttm_buffer_object *bo) { spin_lock(&bo->bdev->lru_lock); - ttm_bo_move_to_lru_tail(bo, &bo->mem, NULL); + ttm_bo_move_to_lru_tail(bo, bo->resource, NULL); spin_unlock(&bo->bdev->lru_lock); } static inline void ttm_bo_assign_mem(struct ttm_buffer_object *bo, struct ttm_resource *new_mem) { - bo->mem = *new_mem; + bo->_mem = *new_mem; new_mem->mm_node = NULL; } @@ -202,7 +202,7 @@ static inline void ttm_bo_assign_mem(struct ttm_buffer_object *bo, static inline void ttm_bo_move_null(struct ttm_buffer_object *bo, struct ttm_resource *new_mem) { - struct ttm_resource *old_mem = &bo->mem; + struct ttm_resource *old_mem = bo->resource; WARN_ON(old_mem->mm_node != NULL); ttm_bo_assign_mem(bo, new_mem); From patchwork Fri Apr 30 09:25:00 2021 X-Patchwork-Submitter: Christian König X-Patchwork-Id: 12232839
From: Christian König To: dri-devel@lists.freedesktop.org Cc: daniel.vetter@ffwll.ch, matthew.william.auld@gmail.com Subject: [PATCH 05/13] drm/ttm: allocate resource object instead of embedding it Date: Fri, 30 Apr 2021 11:25:00 +0200 Message-Id: <20210430092508.60710-5-christian.koenig@amd.com> In-Reply-To: <20210430092508.60710-1-christian.koenig@amd.com> References: <20210430092508.60710-1-christian.koenig@amd.com> To improve the handling, we want to establish the resource object as the base class for the backend allocations.
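
In other words, after the previous patch a driver no longer reads an embedded bo->mem but always dereferences the bo->resource pointer, and with this patch freeing goes through a double pointer so the callee can release the allocation and clear the caller's reference in one place. Below is a minimal, self-contained userspace sketch of that access pattern; the structures are simplified stand-ins for the real TTM definitions, and bo_is_visible_vram() and example_resource_free() are hypothetical helpers, not kernel APIs:

#include <stdio.h>
#include <stdlib.h>

#define TTM_PL_SYSTEM 0
#define TTM_PL_TT     1
#define TTM_PL_VRAM   2

/* Simplified stand-in for struct ttm_resource. */
struct ttm_resource {
	unsigned int mem_type;   /* placement: SYSTEM, TT, VRAM, ... */
	unsigned long start;     /* first page offset within the manager */
	unsigned long num_pages; /* size of the allocation in pages */
};

struct ttm_buffer_object {
	/*
	 * Before this series the placement state was embedded
	 * ("struct ttm_resource mem;") and drivers wrote
	 * bo->mem.mem_type, bo->mem.start, and so on. Now only a
	 * pointer remains, so a resource manager can subclass and
	 * allocate the object itself.
	 */
	struct ttm_resource *resource;
};

/* Drivers now dereference the pointer everywhere. */
static int bo_is_visible_vram(const struct ttm_buffer_object *bo,
			      unsigned long visible_pages)
{
	return bo->resource->mem_type == TTM_PL_VRAM &&
	       bo->resource->start + bo->resource->num_pages <= visible_pages;
}

/* Hypothetical helper, not the kernel's ttm_resource_free(): it
 * takes the address of the caller's pointer and NULLs it after
 * freeing, so a dangling reference cannot survive the call. */
static void example_resource_free(struct ttm_resource **res)
{
	free(*res);
	*res = NULL;
}

int main(void)
{
	struct ttm_buffer_object bo;

	bo.resource = calloc(1, sizeof(*bo.resource));
	if (!bo.resource)
		return 1;
	bo.resource->mem_type = TTM_PL_VRAM;
	bo.resource->start = 0;
	bo.resource->num_pages = 16;

	printf("in visible VRAM: %d\n", bo_is_visible_vram(&bo, 256));
	example_resource_free(&bo.resource);
	return 0;
}

The double-pointer convention mirrors the signature change visible in the hunks below, where ttm_resource_free() and ttm_bo_mem_space() switch from "struct ttm_resource *" to "struct ttm_resource **" parameters.
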
Signed-off-by: Christian König --- drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 4 +- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 54 +++++++-------- drivers/gpu/drm/nouveau/nouveau_bo.c | 2 +- drivers/gpu/drm/radeon/radeon_ttm.c | 2 +- drivers/gpu/drm/ttm/ttm_bo.c | 76 +++++++--------------- drivers/gpu/drm/ttm/ttm_bo_util.c | 41 ++++++------ drivers/gpu/drm/ttm/ttm_resource.c | 30 ++++++--- drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c | 2 +- include/drm/ttm/ttm_bo_api.h | 1 - include/drm/ttm/ttm_bo_driver.h | 10 ++- include/drm/ttm/ttm_resource.h | 4 +- 11 files changed, 101 insertions(+), 125 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c index 6bb9d9d05326..e669a3adac44 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c @@ -362,14 +362,14 @@ int amdgpu_bo_create_kernel_at(struct amdgpu_device *adev, if (cpu_addr) amdgpu_bo_kunmap(*bo_ptr); - ttm_resource_free(&(*bo_ptr)->tbo, (*bo_ptr)->tbo.resource); + ttm_resource_free(&(*bo_ptr)->tbo, &(*bo_ptr)->tbo.resource); for (i = 0; i < (*bo_ptr)->placement.num_placement; ++i) { (*bo_ptr)->placements[i].fpfn = offset >> PAGE_SHIFT; (*bo_ptr)->placements[i].lpfn = (offset + size) >> PAGE_SHIFT; } r = ttm_bo_mem_space(&(*bo_ptr)->tbo, &(*bo_ptr)->placement, - (*bo_ptr)->tbo.resource, &ctx); + &(*bo_ptr)->tbo.resource, &ctx); if (r) goto error; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c index 581523d212d2..3fe2482a40b4 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -503,7 +503,7 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict, return r; amdgpu_ttm_backend_unbind(bo->bdev, bo->ttm); - ttm_resource_free(bo, bo->resource); + ttm_resource_free(bo, &bo->resource); ttm_bo_assign_mem(bo, new_mem); goto out; } @@ -1001,9 +1001,9 @@ int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo) struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev); struct ttm_operation_ctx ctx = { false, false }; struct amdgpu_ttm_tt *gtt = (void *)bo->ttm; - struct ttm_resource tmp; struct ttm_placement placement; struct ttm_place placements; + struct ttm_resource *tmp; uint64_t addr, flags; int r; @@ -1013,37 +1013,37 @@ int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo) addr = amdgpu_gmc_agp_addr(bo); if (addr != AMDGPU_BO_INVALID_OFFSET) { bo->resource->start = addr >> PAGE_SHIFT; - } else { + return 0; + } - /* allocate GART space */ - placement.num_placement = 1; - placement.placement = &placements; - placement.num_busy_placement = 1; - placement.busy_placement = &placements; - placements.fpfn = 0; - placements.lpfn = adev->gmc.gart_size >> PAGE_SHIFT; - placements.mem_type = TTM_PL_TT; - placements.flags = bo->resource->placement; - - r = ttm_bo_mem_space(bo, &placement, &tmp, &ctx); - if (unlikely(r)) - return r; + /* allocate GART space */ + placement.num_placement = 1; + placement.placement = &placements; + placement.num_busy_placement = 1; + placement.busy_placement = &placements; + placements.fpfn = 0; + placements.lpfn = adev->gmc.gart_size >> PAGE_SHIFT; + placements.mem_type = TTM_PL_TT; + placements.flags = bo->resource->placement; - /* compute PTE flags for this buffer object */ - flags = amdgpu_ttm_tt_pte_flags(adev, bo->ttm, &tmp); + r = ttm_bo_mem_space(bo, &placement, &tmp, &ctx); + if (unlikely(r)) + return r; - /* Bind pages */ - gtt->offset = (u64)tmp.start << PAGE_SHIFT; - r = amdgpu_ttm_gart_bind(adev, bo, 
flags); - if (unlikely(r)) { - ttm_resource_free(bo, &tmp); - return r; - } + /* compute PTE flags for this buffer object */ + flags = amdgpu_ttm_tt_pte_flags(adev, bo->ttm, tmp); - ttm_resource_free(bo, bo->resource); - ttm_bo_assign_mem(bo, &tmp); + /* Bind pages */ + gtt->offset = (u64)tmp->start << PAGE_SHIFT; + r = amdgpu_ttm_gart_bind(adev, bo, flags); + if (unlikely(r)) { + ttm_resource_free(bo, &tmp); + return r; } + ttm_resource_free(bo, &bo->resource); + ttm_bo_assign_mem(bo, tmp); + return 0; } diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index 9eaa3274bd02..be546a2d3e36 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -1009,7 +1009,7 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict, if (old_reg->mem_type == TTM_PL_TT && new_reg->mem_type == TTM_PL_SYSTEM) { nouveau_ttm_tt_unbind(bo->bdev, bo->ttm); - ttm_resource_free(bo, bo->resource); + ttm_resource_free(bo, &bo->resource); ttm_bo_assign_mem(bo, new_reg); goto out; } diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c index 4787bc281281..5191e994bff7 100644 --- a/drivers/gpu/drm/radeon/radeon_ttm.c +++ b/drivers/gpu/drm/radeon/radeon_ttm.c @@ -241,7 +241,7 @@ static int radeon_bo_move(struct ttm_buffer_object *bo, bool evict, if (old_mem->mem_type == TTM_PL_TT && new_mem->mem_type == TTM_PL_SYSTEM) { radeon_ttm_tt_unbind(bo->bdev, bo->ttm); - ttm_resource_free(bo, bo->resource); + ttm_resource_free(bo, &bo->resource); ttm_bo_assign_mem(bo, new_mem); goto out; } diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c index d6cb2b289ba5..8517c823afc0 100644 --- a/drivers/gpu/drm/ttm/ttm_bo.c +++ b/drivers/gpu/drm/ttm/ttm_bo.c @@ -223,7 +223,7 @@ static void ttm_bo_cleanup_memtype_use(struct ttm_buffer_object *bo) bo->bdev->funcs->delete_mem_notify(bo); ttm_bo_tt_destroy(bo); - ttm_resource_free(bo, bo->resource); + ttm_resource_free(bo, &bo->resource); } static int ttm_bo_individualize_resv(struct ttm_buffer_object *bo) @@ -489,7 +489,7 @@ static int ttm_bo_evict(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx) { struct ttm_device *bdev = bo->bdev; - struct ttm_resource evict_mem; + struct ttm_resource *evict_mem; struct ttm_placement placement; struct ttm_place hop; int ret = 0; @@ -519,7 +519,7 @@ static int ttm_bo_evict(struct ttm_buffer_object *bo, goto out; } - ret = ttm_bo_handle_move_mem(bo, &evict_mem, true, ctx, &hop); + ret = ttm_bo_handle_move_mem(bo, evict_mem, true, ctx, &hop); if (unlikely(ret)) { WARN(ret == -EMULTIHOP, "Unexpected multihop in eviction - likely driver bug\n"); if (ret != -ERESTARTSYS) @@ -726,14 +726,15 @@ static int ttm_bo_add_move_fence(struct ttm_buffer_object *bo, */ static int ttm_bo_mem_force_space(struct ttm_buffer_object *bo, const struct ttm_place *place, - struct ttm_resource *mem, + struct ttm_resource **mem, struct ttm_operation_ctx *ctx) { struct ttm_device *bdev = bo->bdev; - struct ttm_resource_manager *man = ttm_manager_type(bdev, mem->mem_type); + struct ttm_resource_manager *man; struct ww_acquire_ctx *ticket; int ret; + man = ttm_manager_type(bdev, (*mem)->mem_type); ticket = dma_resv_locking_ctx(bo->base.resv); do { ret = ttm_resource_alloc(bo, place, mem); @@ -747,37 +748,7 @@ static int ttm_bo_mem_force_space(struct ttm_buffer_object *bo, return ret; } while (1); - return ttm_bo_add_move_fence(bo, man, mem, ctx->no_wait_gpu); -} - -/** - * ttm_bo_mem_placement - check if placement is compatible - * @bo: BO to 
find memory for - * @place: where to search - * @mem: the memory object to fill in - * - * Check if placement is compatible and fill in mem structure. - * Returns -EBUSY if placement won't work or negative error code. - * 0 when placement can be used. - */ -static int ttm_bo_mem_placement(struct ttm_buffer_object *bo, - const struct ttm_place *place, - struct ttm_resource *mem) -{ - struct ttm_device *bdev = bo->bdev; - struct ttm_resource_manager *man; - - man = ttm_manager_type(bdev, place->mem_type); - if (!man || !ttm_resource_manager_used(man)) - return -EBUSY; - - mem->mem_type = place->mem_type; - mem->placement = place->flags; - - spin_lock(&bo->bdev->lru_lock); - ttm_bo_move_to_lru_tail(bo, mem, NULL); - spin_unlock(&bo->bdev->lru_lock); - return 0; + return ttm_bo_add_move_fence(bo, man, *mem, ctx->no_wait_gpu); } /* @@ -790,7 +761,7 @@ static int ttm_bo_mem_placement(struct ttm_buffer_object *bo, */ int ttm_bo_mem_space(struct ttm_buffer_object *bo, struct ttm_placement *placement, - struct ttm_resource *mem, + struct ttm_resource **mem, struct ttm_operation_ctx *ctx) { struct ttm_device *bdev = bo->bdev; @@ -805,8 +776,8 @@ int ttm_bo_mem_space(struct ttm_buffer_object *bo, const struct ttm_place *place = &placement->placement[i]; struct ttm_resource_manager *man; - ret = ttm_bo_mem_placement(bo, place, mem); - if (ret) + man = ttm_manager_type(bdev, place->mem_type); + if (!man || !ttm_resource_manager_used(man)) continue; type_found = true; @@ -816,8 +787,7 @@ int ttm_bo_mem_space(struct ttm_buffer_object *bo, if (unlikely(ret)) goto error; - man = ttm_manager_type(bdev, mem->mem_type); - ret = ttm_bo_add_move_fence(bo, man, mem, ctx->no_wait_gpu); + ret = ttm_bo_add_move_fence(bo, man, *mem, ctx->no_wait_gpu); if (unlikely(ret)) { ttm_resource_free(bo, mem); if (ret == -EBUSY) @@ -830,9 +800,10 @@ int ttm_bo_mem_space(struct ttm_buffer_object *bo, for (i = 0; i < placement->num_busy_placement; ++i) { const struct ttm_place *place = &placement->busy_placement[i]; + struct ttm_resource_manager *man; - ret = ttm_bo_mem_placement(bo, place, mem); - if (ret) + man = ttm_manager_type(bdev, place->mem_type); + if (!man || !ttm_resource_manager_used(man)) continue; type_found = true; @@ -859,12 +830,12 @@ int ttm_bo_mem_space(struct ttm_buffer_object *bo, EXPORT_SYMBOL(ttm_bo_mem_space); static int ttm_bo_bounce_temp_buffer(struct ttm_buffer_object *bo, - struct ttm_resource *mem, + struct ttm_resource **mem, struct ttm_operation_ctx *ctx, struct ttm_place *hop) { struct ttm_placement hop_placement; - struct ttm_resource hop_mem; + struct ttm_resource *hop_mem; int ret; hop_placement.num_placement = hop_placement.num_busy_placement = 1; @@ -875,7 +846,7 @@ static int ttm_bo_bounce_temp_buffer(struct ttm_buffer_object *bo, if (ret) return ret; /* move to the bounce domain */ - ret = ttm_bo_handle_move_mem(bo, &hop_mem, false, ctx, NULL); + ret = ttm_bo_handle_move_mem(bo, hop_mem, false, ctx, NULL); if (ret) { ttm_resource_free(bo, &hop_mem); return ret; @@ -887,14 +858,12 @@ static int ttm_bo_move_buffer(struct ttm_buffer_object *bo, struct ttm_placement *placement, struct ttm_operation_ctx *ctx) { + struct ttm_resource *mem; struct ttm_place hop; - struct ttm_resource mem; int ret; dma_resv_assert_held(bo->base.resv); - memset(&hop, 0, sizeof(hop)); - /* * Determine where to move the buffer. 
* @@ -908,7 +877,7 @@ static int ttm_bo_move_buffer(struct ttm_buffer_object *bo, if (ret) return ret; bounce: - ret = ttm_bo_handle_move_mem(bo, &mem, false, ctx, &hop); + ret = ttm_bo_handle_move_mem(bo, mem, false, ctx, &hop); if (ret == -EMULTIHOP) { ret = ttm_bo_bounce_temp_buffer(bo, &mem, ctx, &hop); if (ret) @@ -1027,8 +996,7 @@ int ttm_bo_init_reserved(struct ttm_device *bdev, bo->bdev = bdev; bo->type = type; bo->page_alignment = page_alignment; - bo->resource = &bo->_mem; - ttm_resource_alloc(bo, &sys_mem, bo->resource); + ttm_resource_alloc(bo, &sys_mem, &bo->resource); bo->moving = NULL; bo->pin_count = 0; bo->sg = sg; @@ -1168,7 +1136,7 @@ int ttm_bo_swapout(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx, */ if (bo->resource->mem_type != TTM_PL_SYSTEM) { struct ttm_operation_ctx ctx = { false, false }; - struct ttm_resource evict_mem; + struct ttm_resource *evict_mem; struct ttm_place place, hop; memset(&place, 0, sizeof(place)); @@ -1180,7 +1148,7 @@ int ttm_bo_swapout(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx, if (unlikely(ret)) goto out; - ret = ttm_bo_handle_move_mem(bo, &evict_mem, true, &ctx, &hop); + ret = ttm_bo_handle_move_mem(bo, evict_mem, true, &ctx, &hop); if (unlikely(ret != 0)) { WARN(ret == -EMULTIHOP, "Unexpected multihop in swaput - likely driver bug.\n"); goto out; diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index aedf02a31c70..e417677c791f 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -176,16 +176,17 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx, struct ttm_resource *new_mem) { + struct ttm_resource *old_mem = bo->resource; struct ttm_device *bdev = bo->bdev; - struct ttm_resource_manager *man = ttm_manager_type(bdev, new_mem->mem_type); + struct ttm_resource_manager *man; struct ttm_tt *ttm = bo->ttm; - struct ttm_resource *old_mem = bo->resource; - struct ttm_resource old_copy = *old_mem; void *old_iomap; void *new_iomap; int ret; unsigned long i; + man = ttm_manager_type(bdev, new_mem->mem_type); + ret = ttm_bo_wait_ctx(bo, ctx); if (ret) return ret; @@ -201,7 +202,7 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo, * Single TTM move. NOP. */ if (old_iomap == NULL && new_iomap == NULL) - goto out2; + goto out1; /* * Don't move nonexistent data. Clear destination instead. @@ -210,7 +211,7 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo, (ttm == NULL || (!ttm_tt_is_populated(ttm) && !(ttm->page_flags & TTM_PAGE_FLAG_SWAPPED)))) { memset_io(new_iomap, 0, new_mem->num_pages*PAGE_SIZE); - goto out2; + goto out1; } /* @@ -235,27 +236,25 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo, ret = ttm_copy_io_page(new_iomap, old_iomap, i); } if (ret) - goto out1; + break; } mb(); -out2: - old_copy = *old_mem; +out1: + ttm_resource_iounmap(bdev, new_mem, new_iomap); +out: + ttm_resource_iounmap(bdev, old_mem, old_iomap); + if (ret) { + ttm_resource_free(bo, &new_mem); + return ret; + } + + ttm_resource_free(bo, &bo->resource); ttm_bo_assign_mem(bo, new_mem); if (!man->use_tt) ttm_bo_tt_destroy(bo); -out1: - ttm_resource_iounmap(bdev, old_mem, new_iomap); -out: - ttm_resource_iounmap(bdev, &old_copy, old_iomap); - - /* - * On error, keep the mm node! 
- */ - if (!ret) - ttm_resource_free(bo, &old_copy); return ret; } EXPORT_SYMBOL(ttm_bo_move_memcpy); @@ -566,7 +565,7 @@ static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo, if (!dst_use_tt) ttm_bo_tt_destroy(bo); - ttm_resource_free(bo, bo->resource); + ttm_resource_free(bo, &bo->resource); return 0; } @@ -629,7 +628,7 @@ static void ttm_bo_move_pipeline_evict(struct ttm_buffer_object *bo, } spin_unlock(&from->move_lock); - ttm_resource_free(bo, bo->resource); + ttm_resource_free(bo, &bo->resource); dma_fence_put(bo->moving); bo->moving = dma_fence_get(fence); @@ -678,7 +677,7 @@ int ttm_bo_pipeline_gutting(struct ttm_buffer_object *bo) if (ret) ttm_bo_wait(bo, false, false); - ttm_resource_alloc(bo, &sys_mem, bo->resource); + ttm_resource_alloc(bo, &sys_mem, &bo->resource); bo->ttm = NULL; dma_resv_unlock(&ghost->base._resv); diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c index cec2e6fb1c52..7ebcc3e6818c 100644 --- a/drivers/gpu/drm/ttm/ttm_resource.c +++ b/drivers/gpu/drm/ttm/ttm_resource.c @@ -27,11 +27,14 @@ int ttm_resource_alloc(struct ttm_buffer_object *bo, const struct ttm_place *place, - struct ttm_resource *res) + struct ttm_resource **res_ptr) { struct ttm_resource_manager *man = - ttm_manager_type(bo->bdev, res->mem_type); + ttm_manager_type(bo->bdev, place->mem_type); + struct ttm_resource *res; + int r; + res = kmalloc(sizeof(*res), GFP_KERNEL); res->mm_node = NULL; res->start = 0; res->num_pages = PFN_UP(bo->base.size); @@ -41,18 +44,27 @@ int ttm_resource_alloc(struct ttm_buffer_object *bo, res->bus.offset = 0; res->bus.is_iomem = false; res->bus.caching = ttm_cached; + r = man->func->alloc(man, bo, place, res); + if (r) { + kfree(res); + return r; + } - return man->func->alloc(man, bo, place, res); + *res_ptr = res; + return 0; } -void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource *res) +void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res) { - struct ttm_resource_manager *man = - ttm_manager_type(bo->bdev, res->mem_type); + struct ttm_resource_manager *man; - man->func->free(man, res); - res->mm_node = NULL; - res->mem_type = TTM_PL_SYSTEM; + if (!*res) + return; + + man = ttm_manager_type(bo->bdev, (*res)->mem_type); + man->func->free(man, *res); + kfree(*res); + *res = NULL; } EXPORT_SYMBOL(ttm_resource_free); diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c index bb1d453f7ca6..4c34216a9ab8 100644 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c @@ -746,7 +746,7 @@ static int vmw_move(struct ttm_buffer_object *bo, goto fail; vmw_ttm_unbind(bo->bdev, bo->ttm); - ttm_resource_free(bo, bo->resource); + ttm_resource_free(bo, &bo->resource); ttm_bo_assign_mem(bo, new_mem); return 0; } else { diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h index 1876a3d6df73..d8bb46228cc7 100644 --- a/include/drm/ttm/ttm_bo_api.h +++ b/include/drm/ttm/ttm_bo_api.h @@ -137,7 +137,6 @@ struct ttm_buffer_object { */ struct ttm_resource *resource; - struct ttm_resource _mem; struct ttm_tt *ttm; bool deleted; diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index 1a9ba0b13622..ead0ef7136c8 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -96,7 +96,7 @@ struct ttm_lru_bulk_move { */ int ttm_bo_mem_space(struct ttm_buffer_object *bo, struct ttm_placement *placement, - struct ttm_resource *mem, + struct 
ttm_resource **mem,
 		     struct ttm_operation_ctx *ctx);
 
 /**
@@ -188,8 +188,8 @@ ttm_bo_move_to_lru_tail_unlocked(struct ttm_buffer_object *bo)
 static inline void ttm_bo_assign_mem(struct ttm_buffer_object *bo,
 				     struct ttm_resource *new_mem)
 {
-	bo->_mem = *new_mem;
-	new_mem->mm_node = NULL;
+	WARN_ON(bo->resource);
+	bo->resource = new_mem;
 }
 
 /**
@@ -202,9 +202,7 @@ static inline void ttm_bo_assign_mem(struct ttm_buffer_object *bo,
 static inline void ttm_bo_move_null(struct ttm_buffer_object *bo,
 				    struct ttm_resource *new_mem)
 {
-	struct ttm_resource *old_mem = bo->resource;
-
-	WARN_ON(old_mem->mm_node != NULL);
+	ttm_resource_free(bo, &bo->resource);
 	ttm_bo_assign_mem(bo, new_mem);
 }
 
diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h
index 890b9d369519..c17c1a52070d 100644
--- a/include/drm/ttm/ttm_resource.h
+++ b/include/drm/ttm/ttm_resource.h
@@ -225,8 +225,8 @@ ttm_resource_manager_cleanup(struct ttm_resource_manager *man)
 
 int ttm_resource_alloc(struct ttm_buffer_object *bo,
 		       const struct ttm_place *place,
-		       struct ttm_resource *res);
-void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource *res);
+		       struct ttm_resource **res);
+void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res);
 
 void ttm_resource_manager_init(struct ttm_resource_manager *man,
 			       unsigned long p_size);
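The net effect of the interface change above, for anyone porting a driver: a BO's resource is now always a separately allocated object reached through bo->resource, and both ttm_resource_alloc() and ttm_resource_free() take a struct ttm_resource ** so the free side can reset the pointer. A minimal caller-side sketch; my_swap_resource() and its arguments are invented for illustration, only the TTM calls come from the patch:

#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_resource.h>

/* Sketch only: exercise the reworked alloc/free calling convention. */
static int my_swap_resource(struct ttm_buffer_object *bo,
			    const struct ttm_place *place)
{
	struct ttm_resource *new_res;
	int r;

	/* Allocation now hands back a freshly allocated object ... */
	r = ttm_resource_alloc(bo, place, &new_res);
	if (r)
		return r;

	/* ... and freeing takes &ptr so it can NULL it out, which is
	 * what makes the early "if (!*res) return;" in the reworked
	 * ttm_resource_free() safe against double frees.
	 */
	ttm_resource_free(bo, &bo->resource);
	ttm_bo_assign_mem(bo, new_res);
	return 0;
}

This mirrors the amdgpu/radeon/nouveau move paths changed earlier in the patch, which free the old resource and then assign the new one.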
From patchwork Fri Apr 30 09:25:01 2021
From: Christian König
To: dri-devel@lists.freedesktop.org
Cc: daniel.vetter@ffwll.ch, matthew.william.auld@gmail.com
Subject: [PATCH 06/13] drm/ttm: flip over the range manager to self allocated nodes
Date: Fri, 30 Apr 2021 11:25:01 +0200
Message-Id: <20210430092508.60710-6-christian.koenig@amd.com>
In-Reply-To: <20210430092508.60710-1-christian.koenig@amd.com>
References: <20210430092508.60710-1-christian.koenig@amd.com>

Start with the range manager to make the resource object the base class
for the allocated nodes. While at it cleanup a lot of the code around that.
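The core of the flip is an allocation pattern that shows up again in the later driver patches: the backend allocates one object that embeds the resource state plus a flexible array of drm_mm_nodes, sized via struct_size(). A reduced, hypothetical sketch of just that pattern follows; error unwinding and the actual drm_mm insertion are omitted, the real code is in the hunks below:

#include <linux/overflow.h>	/* struct_size() */
#include <linux/slab.h>

static int example_range_alloc(struct ttm_resource_manager *man,
			       struct ttm_buffer_object *bo,
			       const struct ttm_place *place,
			       struct ttm_resource *mem)
{
	struct ttm_range_mgr_node *node;

	/* One allocation covers the node header and its drm_mm node(s). */
	node = kzalloc(struct_size(node, mm_nodes, 1), GFP_KERNEL);
	if (!node)
		return -ENOMEM;

	/* Initialize the embedded base object ... */
	ttm_resource_init(bo, place, &node->base);

	/* ... and let the caller-visible resource point into the node so
	 * the free path can get back out via container_of().
	 */
	mem->mm_node = &node->mm_nodes[0];
	return 0;
}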
Signed-off-by: Christian König Reviewed-by: Matthew Auld --- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 1 + drivers/gpu/drm/drm_gem_vram_helper.c | 2 + drivers/gpu/drm/nouveau/nouveau_ttm.c | 1 + drivers/gpu/drm/qxl/qxl_ttm.c | 1 + drivers/gpu/drm/radeon/radeon_ttm.c | 1 + drivers/gpu/drm/ttm/ttm_range_manager.c | 56 ++++++++++++++++++------- drivers/gpu/drm/ttm/ttm_resource.c | 26 ++++++++---- include/drm/ttm/ttm_bo_driver.h | 26 ------------ include/drm/ttm/ttm_range_manager.h | 43 +++++++++++++++++++ include/drm/ttm/ttm_resource.h | 3 ++ 10 files changed, 110 insertions(+), 50 deletions(-) create mode 100644 include/drm/ttm/ttm_range_manager.h diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c index 3fe2482a40b4..3d5863262337 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -46,6 +46,7 @@ #include #include #include +#include #include diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index 83e7258c7f90..17a4c5d47b6a 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -17,6 +17,8 @@ #include #include +#include + static const struct drm_gem_object_funcs drm_gem_vram_object_funcs; /** diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c index b81ae90b8449..15c7627f8f58 100644 --- a/drivers/gpu/drm/nouveau/nouveau_ttm.c +++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c @@ -32,6 +32,7 @@ #include "nouveau_ttm.h" #include +#include #include diff --git a/drivers/gpu/drm/qxl/qxl_ttm.c b/drivers/gpu/drm/qxl/qxl_ttm.c index 8aa87b8edb9c..19fd39d9a00c 100644 --- a/drivers/gpu/drm/qxl/qxl_ttm.c +++ b/drivers/gpu/drm/qxl/qxl_ttm.c @@ -32,6 +32,7 @@ #include #include #include +#include #include "qxl_drv.h" #include "qxl_object.h" diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c index 5191e994bff7..c3d2f1fce71a 100644 --- a/drivers/gpu/drm/radeon/radeon_ttm.c +++ b/drivers/gpu/drm/radeon/radeon_ttm.c @@ -46,6 +46,7 @@ #include #include #include +#include #include "radeon_reg.h" #include "radeon.h" diff --git a/drivers/gpu/drm/ttm/ttm_range_manager.c b/drivers/gpu/drm/ttm/ttm_range_manager.c index b9d5da6e6a81..ce5d07ca384c 100644 --- a/drivers/gpu/drm/ttm/ttm_range_manager.c +++ b/drivers/gpu/drm/ttm/ttm_range_manager.c @@ -29,12 +29,13 @@ * Authors: Thomas Hellstrom */ -#include +#include #include +#include +#include #include #include #include -#include /* * Currently we use a spinlock for the lock, but a mutex *may* be @@ -60,8 +61,8 @@ static int ttm_range_man_alloc(struct ttm_resource_manager *man, struct ttm_resource *mem) { struct ttm_range_manager *rman = to_range_manager(man); + struct ttm_range_mgr_node *node; struct drm_mm *mm = &rman->mm; - struct drm_mm_node *node; enum drm_mm_insert_mode mode; unsigned long lpfn; int ret; @@ -70,7 +71,7 @@ static int ttm_range_man_alloc(struct ttm_resource_manager *man, if (!lpfn) lpfn = man->size; - node = kzalloc(sizeof(*node), GFP_KERNEL); + node = kzalloc(struct_size(node, mm_nodes, 1), GFP_KERNEL); if (!node) return -ENOMEM; @@ -78,17 +79,19 @@ static int ttm_range_man_alloc(struct ttm_resource_manager *man, if (place->flags & TTM_PL_FLAG_TOPDOWN) mode = DRM_MM_INSERT_HIGH; + ttm_resource_init(bo, place, &node->base); + spin_lock(&rman->lock); - ret = drm_mm_insert_node_in_range(mm, node, mem->num_pages, - bo->page_alignment, 0, + ret = drm_mm_insert_node_in_range(mm, &node->mm_nodes[0], + 
mem->num_pages, bo->page_alignment, 0,
+					place->fpfn, lpfn, mode);
 	spin_unlock(&rman->lock);
 
 	if (unlikely(ret)) {
 		kfree(node);
 	} else {
-		mem->mm_node = node;
-		mem->start = node->start;
+		mem->mm_node = &node->mm_nodes[0];
+		mem->start = node->mm_nodes[0].start;
 	}
 
 	return ret;
@@ -98,15 +101,19 @@ static void ttm_range_man_free(struct ttm_resource_manager *man,
 			       struct ttm_resource *mem)
 {
 	struct ttm_range_manager *rman = to_range_manager(man);
+	struct ttm_range_mgr_node *node;
 
-	if (mem->mm_node) {
-		spin_lock(&rman->lock);
-		drm_mm_remove_node(mem->mm_node);
-		spin_unlock(&rman->lock);
+	if (!mem->mm_node)
+		return;
 
-		kfree(mem->mm_node);
-		mem->mm_node = NULL;
-	}
+	node = to_ttm_range_mgr_node(mem);
+
+	spin_lock(&rman->lock);
+	drm_mm_remove_node(&node->mm_nodes[0]);
+	spin_unlock(&rman->lock);
+
+	kfree(node);
+	mem->mm_node = NULL;
 }
 
 static void ttm_range_man_debug(struct ttm_resource_manager *man,
@@ -125,6 +132,17 @@ static const struct ttm_resource_manager_func ttm_range_manager_func = {
 	.debug = ttm_range_man_debug
 };
 
+/**
+ * ttm_range_man_init
+ *
+ * @bdev: ttm device
+ * @type: memory manager type
+ * @use_tt: if the memory manager uses tt
+ * @p_size: size of area to be managed in pages.
+ *
+ * Initialise a generic range manager for the selected memory type.
+ * The range manager is installed for this device in the type slot.
+ */
 int ttm_range_man_init(struct ttm_device *bdev,
 		       unsigned type, bool use_tt,
 		       unsigned long p_size)
@@ -152,6 +170,14 @@ int ttm_range_man_init(struct ttm_device *bdev,
 }
 EXPORT_SYMBOL(ttm_range_man_init);
 
+/**
+ * ttm_range_man_fini
+ *
+ * @bdev: ttm device
+ * @type: memory manager type
+ *
+ * Remove the generic range manager from a slot and tear it down.
+ */
 int ttm_range_man_fini(struct ttm_device *bdev,
 		       unsigned type)
 {
diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
index 7ebcc3e6818c..b412ae515e98 100644
--- a/drivers/gpu/drm/ttm/ttm_resource.c
+++ b/drivers/gpu/drm/ttm/ttm_resource.c
@@ -25,16 +25,10 @@
 #include
 #include
 
-int ttm_resource_alloc(struct ttm_buffer_object *bo,
-		       const struct ttm_place *place,
-		       struct ttm_resource **res_ptr)
+void ttm_resource_init(struct ttm_buffer_object *bo,
+		       const struct ttm_place *place,
+		       struct ttm_resource *res)
 {
-	struct ttm_resource_manager *man =
-		ttm_manager_type(bo->bdev, place->mem_type);
-	struct ttm_resource *res;
-	int r;
-
-	res = kmalloc(sizeof(*res), GFP_KERNEL);
 	res->mm_node = NULL;
 	res->start = 0;
 	res->num_pages = PFN_UP(bo->base.size);
@@ -44,6 +38,23 @@ int ttm_resource_alloc(struct ttm_buffer_object *bo,
 	res->bus.offset = 0;
 	res->bus.is_iomem = false;
 	res->bus.caching = ttm_cached;
+}
+EXPORT_SYMBOL(ttm_resource_init);
+
+int ttm_resource_alloc(struct ttm_buffer_object *bo,
+		       const struct ttm_place *place,
+		       struct ttm_resource **res_ptr)
+{
+	struct ttm_resource_manager *man =
+		ttm_manager_type(bo->bdev, place->mem_type);
+	struct ttm_resource *res;
+	int r;
+
+	res = kmalloc(sizeof(*res), GFP_KERNEL);
+	if (!res)
+		return -ENOMEM;
+
+	ttm_resource_init(bo, place, res);
 	r = man->func->alloc(man, bo, place, res);
 	if (r) {
 		kfree(res);
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index ead0ef7136c8..b266971c1974 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -304,30 +304,4 @@ int ttm_bo_tt_bind(struct ttm_buffer_object *bo, struct ttm_resource *mem);
 */
 void ttm_bo_tt_destroy(struct ttm_buffer_object *bo);
 
-/**
- * ttm_range_man_init
- *
- * @bdev: ttm device
- * @type: memory manager type
- * @use_tt: if the memory manager uses tt
- * @p_size: size of area to be managed in pages.
- *
- * Initialise a generic range manager for the selected memory type.
- * The range manager is installed for this device in the type slot.
- */
-int ttm_range_man_init(struct ttm_device *bdev,
-		       unsigned type, bool use_tt,
-		       unsigned long p_size);
-
-/**
- * ttm_range_man_fini
- *
- * @bdev: ttm device
- * @type: memory manager type
- *
- * Remove the generic range manager from a slot and tear it down.
- */
-int ttm_range_man_fini(struct ttm_device *bdev,
-		       unsigned type);
-
 #endif
diff --git a/include/drm/ttm/ttm_range_manager.h b/include/drm/ttm/ttm_range_manager.h
new file mode 100644
index 000000000000..e02b6c8d355e
--- /dev/null
+++ b/include/drm/ttm/ttm_range_manager.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+
+#ifndef _TTM_RANGE_MANAGER_H_
+#define _TTM_RANGE_MANAGER_H_
+
+#include
+#include
+
+/**
+ * struct ttm_range_mgr_node
+ *
+ * @base: base class we extend
+ * @mm_nodes: MM nodes, usually 1
+ *
+ * Extending the ttm_resource object to manage an address space allocation with
+ * one or more drm_mm_nodes.
+ */
+struct ttm_range_mgr_node {
+	struct ttm_resource base;
+	struct drm_mm_node mm_nodes[];
+};
+
+/**
+ * to_ttm_range_mgr_node
+ *
+ * @res: the resource to upcast
+ *
+ * Upcast the ttm_resource object into a ttm_range_mgr_node object.
+ */
+static inline struct ttm_range_mgr_node *
+to_ttm_range_mgr_node(struct ttm_resource *res)
+{
+	return container_of(res->mm_node, struct ttm_range_mgr_node,
+			    mm_nodes[0]);
+}
+
+int ttm_range_man_init(struct ttm_device *bdev,
+		       unsigned type, bool use_tt,
+		       unsigned long p_size);
+int ttm_range_man_fini(struct ttm_device *bdev,
+		       unsigned type);
+
+#endif
diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h
index c17c1a52070d..803e4875d779 100644
--- a/include/drm/ttm/ttm_resource.h
+++ b/include/drm/ttm/ttm_resource.h
@@ -223,6 +223,9 @@ ttm_resource_manager_cleanup(struct ttm_resource_manager *man)
 	man->move = NULL;
 }
 
+void ttm_resource_init(struct ttm_buffer_object *bo,
+		       const struct ttm_place *place,
+		       struct ttm_resource *res);
 int ttm_resource_alloc(struct ttm_buffer_object *bo,
 		       const struct ttm_place *place,
 		       struct ttm_resource **res);
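Since ttm_range_mgr_node keeps the flexible mm_nodes[] array at its end, a driver that needs private per-allocation state can wrap the node once more. This is exactly the shape the amdgpu patches later in this series use; the wrapper here is a hypothetical minimal version of it:

/* Hypothetical wrapper; amdgpu_gtt_node in patch 09 has this layout. */
struct my_drv_node {
	struct ttm_buffer_object *tbo;	/* driver private state */
	struct ttm_range_mgr_node base;	/* must stay last (flexible array) */
};

static inline struct my_drv_node *to_my_drv_node(struct ttm_resource *res)
{
	/* res->mm_node points at base.mm_nodes[0], so container_of()
	 * walks back out to the driver structure.
	 */
	return container_of(res->mm_node, struct my_drv_node,
			    base.mm_nodes[0]);
}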
From patchwork Fri Apr 30 09:25:02 2021
From: Christian König
To: dri-devel@lists.freedesktop.org
Cc: daniel.vetter@ffwll.ch, matthew.william.auld@gmail.com
Subject: [PATCH 07/13] drm/ttm: flip over the sys manager to self allocated nodes
Date: Fri, 30 Apr 2021 11:25:02 +0200
Message-Id: <20210430092508.60710-7-christian.koenig@amd.com>
In-Reply-To: <20210430092508.60710-1-christian.koenig@amd.com>

Make sure to allocate a resource object here.
Signed-off-by: Christian König
Reviewed-by: Matthew Auld
---
 drivers/gpu/drm/ttm/ttm_sys_manager.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/ttm/ttm_sys_manager.c b/drivers/gpu/drm/ttm/ttm_sys_manager.c
index ed92615214e3..a926114edfe5 100644
--- a/drivers/gpu/drm/ttm/ttm_sys_manager.c
+++ b/drivers/gpu/drm/ttm/ttm_sys_manager.c
@@ -3,18 +3,25 @@
 #include
 #include
 #include
+#include
 
 static int ttm_sys_man_alloc(struct ttm_resource_manager *man,
			     struct ttm_buffer_object *bo,
			     const struct ttm_place *place,
			     struct ttm_resource *mem)
 {
+	mem->mm_node = kzalloc(sizeof(*mem), GFP_KERNEL);
+	if (!mem->mm_node)
+		return -ENOMEM;
+
+	ttm_resource_init(bo, place, mem->mm_node);
 	return 0;
 }
 
 static void ttm_sys_man_free(struct ttm_resource_manager *man,
			     struct ttm_resource *mem)
 {
+	kfree(mem->mm_node);
 }
 
 static const struct ttm_resource_manager_func ttm_sys_manager_func = {
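Together with the range manager patch this pins down the contract every manager backend has to meet during the transition: alloc must leave mem->mm_node pointing at something that went through ttm_resource_init() and can be kfree'd, and free must release it. A hypothetical minimal backend satisfying that contract, modelled on the sys manager above:

static int my_man_alloc(struct ttm_resource_manager *man,
			struct ttm_buffer_object *bo,
			const struct ttm_place *place,
			struct ttm_resource *mem)
{
	struct ttm_resource *res = kzalloc(sizeof(*res), GFP_KERNEL);

	if (!res)
		return -ENOMEM;

	ttm_resource_init(bo, place, res);
	mem->mm_node = res;	/* transitional: carried via mm_node */
	return 0;
}

static void my_man_free(struct ttm_resource_manager *man,
			struct ttm_resource *mem)
{
	kfree(mem->mm_node);
}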
From patchwork Fri Apr 30 09:25:03 2021
From: Christian König
To: dri-devel@lists.freedesktop.org
Cc: daniel.vetter@ffwll.ch, matthew.william.auld@gmail.com
Subject: [PATCH 08/13] drm/amdgpu: revert "drm/amdgpu: stop allocating dummy GTT nodes"
Date: Fri, 30 Apr 2021 11:25:03 +0200
Message-Id: <20210430092508.60710-8-christian.koenig@amd.com>
In-Reply-To: <20210430092508.60710-1-christian.koenig@amd.com>

TTM is going to need this again since we are moving the resource
allocation into the backend.
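The reverted behaviour boils down to one distinction that the restored code expresses through drm_mm_node_allocated(): a node inserted into the GART window is a real allocation, while a node that merely records a size and AMDGPU_BO_INVALID_OFFSET is a dummy to be bound later. A condensed restatement of the two branches from the hunk below, with locking and accounting stripped:

if (place->lpfn) {
	/* Real allocation inside the GART address window. */
	r = drm_mm_insert_node_in_range(&mgr->mm, &node->node,
					mem->num_pages,
					tbo->page_alignment, 0,
					place->fpfn, place->lpfn,
					DRM_MM_INSERT_BEST);
	if (unlikely(r))
		return r;	/* the real code unwinds accounting first */
	mem->start = node->node.start;
} else {
	/* Dummy node: no GART address yet; drm_mm_node_allocated()
	 * stays false, which amdgpu_gtt_mgr_has_gart_addr() checks.
	 */
	node->node.start = 0;
	node->node.size = mem->num_pages;
	mem->start = AMDGPU_BO_INVALID_OFFSET;
}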
Signed-off-by: Christian König Acked-by: Matthew Auld --- drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c | 68 ++++++++++++--------- 1 file changed, 39 insertions(+), 29 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c index 1086687a8138..55ca80133411 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c @@ -24,16 +24,22 @@ #include "amdgpu.h" +struct amdgpu_gtt_node { + struct drm_mm_node node; + struct ttm_buffer_object *tbo; +}; + static inline struct amdgpu_gtt_mgr * to_gtt_mgr(struct ttm_resource_manager *man) { return container_of(man, struct amdgpu_gtt_mgr, manager); } -struct amdgpu_gtt_node { - struct drm_mm_node node; - struct ttm_buffer_object *tbo; -}; +static inline struct amdgpu_gtt_node * +to_amdgpu_gtt_node(struct ttm_resource *res) +{ + return container_of(res->mm_node, struct amdgpu_gtt_node, node); +} /** * DOC: mem_info_gtt_total @@ -89,7 +95,9 @@ static DEVICE_ATTR(mem_info_gtt_used, S_IRUGO, */ bool amdgpu_gtt_mgr_has_gart_addr(struct ttm_resource *mem) { - return mem->mm_node != NULL; + struct amdgpu_gtt_node *node = to_amdgpu_gtt_node(mem); + + return drm_mm_node_allocated(&node->node); } /** @@ -120,12 +128,6 @@ static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man, atomic64_sub(mem->num_pages, &mgr->available); spin_unlock(&mgr->lock); - if (!place->lpfn) { - mem->mm_node = NULL; - mem->start = AMDGPU_BO_INVALID_OFFSET; - return 0; - } - node = kzalloc(sizeof(*node), GFP_KERNEL); if (!node) { r = -ENOMEM; @@ -133,19 +135,25 @@ static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man, } node->tbo = tbo; + if (place->lpfn) { + spin_lock(&mgr->lock); + r = drm_mm_insert_node_in_range(&mgr->mm, &node->node, + mem->num_pages, + tbo->page_alignment, 0, + place->fpfn, place->lpfn, + DRM_MM_INSERT_BEST); + spin_unlock(&mgr->lock); + if (unlikely(r)) + goto err_free; - spin_lock(&mgr->lock); - r = drm_mm_insert_node_in_range(&mgr->mm, &node->node, mem->num_pages, - tbo->page_alignment, 0, place->fpfn, - place->lpfn, DRM_MM_INSERT_BEST); - spin_unlock(&mgr->lock); - - if (unlikely(r)) - goto err_free; - - mem->mm_node = node; - mem->start = node->node.start; + mem->start = node->node.start; + } else { + node->node.start = 0; + node->node.size = mem->num_pages; + mem->start = AMDGPU_BO_INVALID_OFFSET; + } + mem->mm_node = &node->node; return 0; err_free: @@ -168,17 +176,19 @@ static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man, static void amdgpu_gtt_mgr_del(struct ttm_resource_manager *man, struct ttm_resource *mem) { + struct amdgpu_gtt_node *node = to_amdgpu_gtt_node(mem); struct amdgpu_gtt_mgr *mgr = to_gtt_mgr(man); - struct amdgpu_gtt_node *node = mem->mm_node; - if (node) { - spin_lock(&mgr->lock); - drm_mm_remove_node(&node->node); - spin_unlock(&mgr->lock); - kfree(node); - } + if (!node) + return; + spin_lock(&mgr->lock); + if (drm_mm_node_allocated(&node->node)) + drm_mm_remove_node(&node->node); + spin_unlock(&mgr->lock); atomic64_add(mem->num_pages, &mgr->available); + + kfree(node); } /** From patchwork Fri Apr 30 09:25:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Christian_K=C3=B6nig?= X-Patchwork-Id: 12232861 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.5 required=3.0 tests=BAYES_00, 
From: Christian König
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 09/13] drm/amdgpu: switch the GTT backend to self alloc
Date: Fri, 30 Apr 2021 11:25:04 +0200
Message-Id: <20210430092508.60710-9-christian.koenig@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210430092508.60710-1-christian.koenig@amd.com> References: <20210430092508.60710-1-christian.koenig@amd.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: daniel.vetter@ffwll.ch, matthew.william.auld@gmail.com Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Similar to the TTM range manager. Signed-off-by: Christian König Reviewed-by: Matthew Auld --- drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c | 36 +++++++++++++-------- 1 file changed, 22 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c index 55ca80133411..2e7dffd3614d 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c @@ -22,11 +22,13 @@ * Authors: Christian König */ +#include + #include "amdgpu.h" struct amdgpu_gtt_node { - struct drm_mm_node node; struct ttm_buffer_object *tbo; + struct ttm_range_mgr_node base; }; static inline struct amdgpu_gtt_mgr * @@ -38,7 +40,8 @@ to_gtt_mgr(struct ttm_resource_manager *man) static inline struct amdgpu_gtt_node * to_amdgpu_gtt_node(struct ttm_resource *res) { - return container_of(res->mm_node, struct amdgpu_gtt_node, node); + return container_of(res->mm_node, struct amdgpu_gtt_node, + base.mm_nodes[0]); } /** @@ -97,7 +100,7 @@ bool amdgpu_gtt_mgr_has_gart_addr(struct ttm_resource *mem) { struct amdgpu_gtt_node *node = to_amdgpu_gtt_node(mem); - return drm_mm_node_allocated(&node->node); + return drm_mm_node_allocated(&node->base.mm_nodes[0]); } /** @@ -128,16 +131,19 @@ static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man, atomic64_sub(mem->num_pages, &mgr->available); spin_unlock(&mgr->lock); - node = kzalloc(sizeof(*node), GFP_KERNEL); + node = kzalloc(struct_size(node, base.mm_nodes, 1), GFP_KERNEL); if (!node) { r = -ENOMEM; goto err_out; } node->tbo = tbo; + ttm_resource_init(tbo, place, &node->base.base); + if (place->lpfn) { spin_lock(&mgr->lock); - r = drm_mm_insert_node_in_range(&mgr->mm, &node->node, + r = drm_mm_insert_node_in_range(&mgr->mm, + &node->base.mm_nodes[0], mem->num_pages, tbo->page_alignment, 0, place->fpfn, place->lpfn, @@ -146,14 +152,14 @@ static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man, if (unlikely(r)) goto err_free; - mem->start = node->node.start; + mem->start = node->base.mm_nodes[0].start; } else { - node->node.start = 0; - node->node.size = mem->num_pages; + node->base.mm_nodes[0].start = 0; + node->base.mm_nodes[0].size = mem->num_pages; mem->start = AMDGPU_BO_INVALID_OFFSET; } - mem->mm_node = &node->node; + mem->mm_node = &node->base.mm_nodes[0]; return 0; err_free: @@ -176,15 +182,17 @@ static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man, static void amdgpu_gtt_mgr_del(struct ttm_resource_manager *man, struct ttm_resource *mem) { - struct amdgpu_gtt_node *node = to_amdgpu_gtt_node(mem); struct amdgpu_gtt_mgr *mgr = to_gtt_mgr(man); + struct amdgpu_gtt_node *node; - if (!node) + if (!mem->mm_node) return; + node = to_amdgpu_gtt_node(mem); + spin_lock(&mgr->lock); - if (drm_mm_node_allocated(&node->node)) - drm_mm_remove_node(&node->node); + if (drm_mm_node_allocated(&node->base.mm_nodes[0])) + drm_mm_remove_node(&node->base.mm_nodes[0]); spin_unlock(&mgr->lock); atomic64_add(mem->num_pages, &mgr->available); @@ -222,7 +230,7 @@ int 
amdgpu_gtt_mgr_recover(struct ttm_resource_manager *man)
 
 	spin_lock(&mgr->lock);
 	drm_mm_for_each_node(mm_node, &mgr->mm) {
-		node = container_of(mm_node, struct amdgpu_gtt_node, node);
+		node = container_of(mm_node, typeof(*node), base.mm_nodes[0]);
 		r = amdgpu_ttm_recover_gart(node->tbo);
 		if (r)
 			break;

From patchwork Fri Apr 30 09:25:05 2021
X-Received: by 2002:a17:906:36da:: with SMTP id b26mr3379624ejc.8.1619774716492; Fri, 30 Apr 2021 02:25:16 -0700 (PDT) Received: from abel.fritz.box ([2a02:908:1252:fb60:d08c:9633:b7a2:37e2]) by smtp.gmail.com with ESMTPSA id h23sm1550959ejx.90.2021.04.30.02.25.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 30 Apr 2021 02:25:16 -0700 (PDT) From: " =?utf-8?q?Christian_K=C3=B6nig?= " X-Google-Original-From: =?utf-8?q?Christian_K=C3=B6nig?= To: dri-devel@lists.freedesktop.org Subject: [PATCH 10/13] drm/amdgpu: switch the VRAM backend to self alloc Date: Fri, 30 Apr 2021 11:25:05 +0200 Message-Id: <20210430092508.60710-10-christian.koenig@amd.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210430092508.60710-1-christian.koenig@amd.com> References: <20210430092508.60710-1-christian.koenig@amd.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: daniel.vetter@ffwll.ch, matthew.william.auld@gmail.com Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Similar to the TTM range manager. Signed-off-by: Christian König Reviewed-by: Matthew Auld --- drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c | 51 ++++++++++++-------- 1 file changed, 30 insertions(+), 21 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c index bb01e0fc621c..d59ec07c77bb 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c @@ -23,6 +23,8 @@ */ #include +#include + #include "amdgpu.h" #include "amdgpu_vm.h" #include "amdgpu_res_cursor.h" @@ -367,9 +369,9 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man, struct amdgpu_vram_mgr *mgr = to_vram_mgr(man); struct amdgpu_device *adev = to_amdgpu_device(mgr); uint64_t vis_usage = 0, mem_bytes, max_bytes; + struct ttm_range_mgr_node *node; struct drm_mm *mm = &mgr->mm; enum drm_mm_insert_mode mode; - struct drm_mm_node *nodes; unsigned i; int r; @@ -384,8 +386,8 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man, /* bail out quickly if there's likely not enough VRAM for this BO */ mem_bytes = (u64)mem->num_pages << PAGE_SHIFT; if (atomic64_add_return(mem_bytes, &mgr->usage) > max_bytes) { - atomic64_sub(mem_bytes, &mgr->usage); - return -ENOSPC; + r = -ENOSPC; + goto error_sub; } if (place->flags & TTM_PL_FLAG_CONTIGUOUS) { @@ -403,13 +405,15 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man, num_nodes = DIV_ROUND_UP(mem->num_pages, pages_per_node); } - nodes = kvmalloc_array((uint32_t)num_nodes, sizeof(*nodes), - GFP_KERNEL | __GFP_ZERO); - if (!nodes) { - atomic64_sub(mem_bytes, &mgr->usage); - return -ENOMEM; + node = kvmalloc(struct_size(node, mm_nodes, num_nodes), + GFP_KERNEL | __GFP_ZERO); + if (!node) { + r = -ENOMEM; + goto error_sub; } + ttm_resource_init(tbo, place, &node->base); + mode = DRM_MM_INSERT_BEST; if (place->flags & TTM_PL_FLAG_TOPDOWN) mode = DRM_MM_INSERT_HIGH; @@ -428,8 +432,9 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man, if (pages >= pages_per_node) alignment = pages_per_node; - r = drm_mm_insert_node_in_range(mm, &nodes[i], pages, alignment, - 0, place->fpfn, lpfn, mode); + r = drm_mm_insert_node_in_range(mm, &node->mm_nodes[i], pages, + alignment, 0, place->fpfn, + lpfn, mode); if (unlikely(r)) { if (pages > pages_per_node) { if 
(is_power_of_2(pages)) @@ -438,11 +443,11 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man, pages = rounddown_pow_of_two(pages); continue; } - goto error; + goto error_free; } - vis_usage += amdgpu_vram_mgr_vis_size(adev, &nodes[i]); - amdgpu_vram_mgr_virt_start(mem, &nodes[i]); + vis_usage += amdgpu_vram_mgr_vis_size(adev, &node->mm_nodes[i]); + amdgpu_vram_mgr_virt_start(mem, &node->mm_nodes[i]); pages_left -= pages; ++i; @@ -455,16 +460,17 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man, mem->placement |= TTM_PL_FLAG_CONTIGUOUS; atomic64_add(vis_usage, &mgr->vis_usage); - mem->mm_node = nodes; + mem->mm_node = &node->mm_nodes[0]; return 0; -error: +error_free: while (i--) - drm_mm_remove_node(&nodes[i]); + drm_mm_remove_node(&node->mm_nodes[i]); spin_unlock(&mgr->lock); - atomic64_sub(mem->num_pages << PAGE_SHIFT, &mgr->usage); + kvfree(node); - kvfree(nodes); +error_sub: + atomic64_sub(mem_bytes, &mgr->usage); return r; } @@ -481,13 +487,17 @@ static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man, { struct amdgpu_vram_mgr *mgr = to_vram_mgr(man); struct amdgpu_device *adev = to_amdgpu_device(mgr); - struct drm_mm_node *nodes = mem->mm_node; + struct ttm_range_mgr_node *node; uint64_t usage = 0, vis_usage = 0; unsigned pages = mem->num_pages; + struct drm_mm_node *nodes; if (!mem->mm_node) return; + node = to_ttm_range_mgr_node(mem); + nodes = &node->mm_nodes[0]; + spin_lock(&mgr->lock); while (pages) { pages -= nodes->size; @@ -502,8 +512,7 @@ static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man, atomic64_sub(usage, &mgr->usage); atomic64_sub(vis_usage, &mgr->vis_usage); - kvfree(mem->mm_node); - mem->mm_node = NULL; + kvfree(node); } /** From patchwork Fri Apr 30 09:25:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Christian_K=C3=B6nig?= X-Patchwork-Id: 12232855 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.5 required=3.0 tests=BAYES_00, DKIM_ADSP_CUSTOM_MED,DKIM_INVALID,DKIM_SIGNED,FREEMAIL_FORGED_FROMDOMAIN, FREEMAIL_FROM,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7D5A7C433B4 for ; Fri, 30 Apr 2021 09:25:36 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 4A79561352 for ; Fri, 30 Apr 2021 09:25:36 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4A79561352 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 8BE836F50C; Fri, 30 Apr 2021 09:25:35 +0000 (UTC) Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com [IPv6:2a00:1450:4864:20::62d]) by gabe.freedesktop.org (Postfix) with ESMTPS id 62F0D6F508 for ; Fri, 30 Apr 2021 09:25:18 +0000 (UTC) Received: by mail-ej1-x62d.google.com with SMTP id u17so104338721ejk.2 for ; Fri, 30 Apr 
Signed-off-by: Christian König
Reviewed-by: Matthew Auld
---
 drivers/gpu/drm/nouveau/nouveau_mem.h | 1 +
 drivers/gpu/drm/nouveau/nouveau_ttm.c | 4 ++++
 2 files changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/nouveau/nouveau_mem.h b/drivers/gpu/drm/nouveau/nouveau_mem.h
index 7df3848e85aa..3a6a1be2ed52 100644
--- a/drivers/gpu/drm/nouveau/nouveau_mem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_mem.h
@@ -13,6 +13,7 @@ nouveau_mem(struct ttm_resource *reg)
 }

 struct nouveau_mem {
+	struct ttm_resource base;
 	struct nouveau_cli *cli;
 	u8 kind;
 	u8 comp;
diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index 15c7627f8f58..5e5ce2ec89f0 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -59,6 +59,8 @@ nouveau_vram_manager_new(struct ttm_resource_manager *man,
 	if (ret)
 		return ret;

+	ttm_resource_init(bo, place, reg->mm_node);
+
 	ret = nouveau_mem_vram(reg, nvbo->contig, nvbo->page);
 	if (ret) {
 		nouveau_mem_del(reg);
@@ -87,6 +89,7 @@ nouveau_gart_manager_new(struct ttm_resource_manager *man,
 	if (ret)
 		return ret;

+	ttm_resource_init(bo, place, reg->mm_node);
 	reg->start = 0;
 	return 0;
 }
@@ -112,6 +115,7 @@ nv04_gart_manager_new(struct ttm_resource_manager *man,
 	if (ret)
 		return ret;

+	ttm_resource_init(bo, place, reg->mm_node);
 	ret = nvif_vmm_get(&mem->cli->vmm.vmm, PTES, false, 12, 0,
 			   (long)reg->num_pages << PAGE_SHIFT, &mem->vma[0]);
 	if (ret) {
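The nouveau change relies on a C property worth spelling out: once struct ttm_resource is embedded as the first member, a pointer to the containing nouveau_mem and a pointer to its base refer to the same address, which is why the existing reg->mm_node bookkeeping keeps working during the transition. A standalone illustration with toy types (container_of re-implemented locally):

#include <assert.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct resource { unsigned long start; };

struct mem {                   /* stands in for struct nouveau_mem */
	struct resource base;  /* first member: same address as the mem */
	unsigned char kind;
};

int main(void)
{
	struct mem m = { .base = { .start = 42 }, .kind = 1 };
	struct resource *res = &m.base;

	/* first-member embedding: the two pointers alias */
	assert((void *)res == (void *)&m);
	/* and container_of() recovers the driver structure */
	assert(container_of(res, struct mem, base) == &m);
	return 0;
}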
From patchwork Fri Apr 30 09:25:07 2021
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 12232857
From: Christian König
To: dri-devel@lists.freedesktop.org
Cc: daniel.vetter@ffwll.ch, matthew.william.auld@gmail.com
Subject: [PATCH 12/13] drm/vmwgfx: switch the TTM backends to self alloc
Date: Fri, 30 Apr 2021 11:25:07 +0200
Message-Id: <20210430092508.60710-12-christian.koenig@amd.com>
In-Reply-To: <20210430092508.60710-1-christian.koenig@amd.com>
References: <20210430092508.60710-1-christian.koenig@amd.com>

Similar to the TTM range manager.
Signed-off-by: Christian König
Reviewed-by: Matthew Auld
---
 drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c | 18 +++++----
 drivers/gpu/drm/vmwgfx/vmwgfx_thp.c           | 37 ++++++++++---------
 2 files changed, 31 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
index 1774960d1b89..82a5e6489810 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
@@ -57,6 +57,12 @@ static int vmw_gmrid_man_get_node(struct ttm_resource_manager *man,
 	struct vmwgfx_gmrid_man *gman = to_gmrid_manager(man);
 	int id;

+	mem->mm_node = kmalloc(sizeof(*mem), GFP_KERNEL);
+	if (!mem->mm_node)
+		return -ENOMEM;
+
+	ttm_resource_init(bo, place, mem->mm_node);
+
 	id = ida_alloc_max(&gman->gmr_ida, gman->max_gmr_ids - 1, GFP_KERNEL);
 	if (id < 0)
 		return id;
@@ -87,13 +93,11 @@ static void vmw_gmrid_man_put_node(struct ttm_resource_manager *man,
 {
 	struct vmwgfx_gmrid_man *gman = to_gmrid_manager(man);

-	if (mem->mm_node) {
-		ida_free(&gman->gmr_ida, mem->start);
-		spin_lock(&gman->lock);
-		gman->used_gmr_pages -= mem->num_pages;
-		spin_unlock(&gman->lock);
-		mem->mm_node = NULL;
-	}
+	ida_free(&gman->gmr_ida, mem->start);
+	spin_lock(&gman->lock);
+	gman->used_gmr_pages -= mem->num_pages;
+	spin_unlock(&gman->lock);
+	kfree(mem->mm_node);
 }

 static const struct ttm_resource_manager_func vmw_gmrid_manager_func;
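For context, the GMR ids handed out above come from the kernel's IDA allocator. A minimal kernel-style fragment of that idiom — not buildable outside a kernel tree, and my_ida/my_id_get/my_id_put are invented names — looks like this:

#include <linux/idr.h>

static DEFINE_IDA(my_ida);

/* hand out the smallest free id in [0, max_ids - 1], or a negative errno */
int my_id_get(unsigned int max_ids)
{
	return ida_alloc_max(&my_ida, max_ids - 1, GFP_KERNEL);
}

void my_id_put(int id)
{
	ida_free(&my_ida, id);
}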
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_thp.c b/drivers/gpu/drm/vmwgfx/vmwgfx_thp.c
index 5ccc35b3194c..8765835696ac 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_thp.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_thp.c
@@ -7,6 +7,7 @@
 #include "vmwgfx_drv.h"
 #include
 #include
+#include

 /**
  * struct vmw_thp_manager - Range manager implementing huge page alignment
@@ -54,16 +55,18 @@ static int vmw_thp_get_node(struct ttm_resource_manager *man,
 {
 	struct vmw_thp_manager *rman = to_thp_manager(man);
 	struct drm_mm *mm = &rman->mm;
-	struct drm_mm_node *node;
+	struct ttm_range_mgr_node *node;
 	unsigned long align_pages;
 	unsigned long lpfn;
 	enum drm_mm_insert_mode mode = DRM_MM_INSERT_BEST;
 	int ret;

-	node = kzalloc(sizeof(*node), GFP_KERNEL);
+	node = kzalloc(struct_size(node, mm_nodes, 1), GFP_KERNEL);
 	if (!node)
 		return -ENOMEM;

+	ttm_resource_init(bo, place, &node->base);
+
 	lpfn = place->lpfn;
 	if (!lpfn)
 		lpfn = man->size;
@@ -76,8 +79,9 @@ static int vmw_thp_get_node(struct ttm_resource_manager *man,
 	if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)) {
 		align_pages = (HPAGE_PUD_SIZE >> PAGE_SHIFT);
 		if (mem->num_pages >= align_pages) {
-			ret = vmw_thp_insert_aligned(bo, mm, node, align_pages,
-						     place, mem, lpfn, mode);
+			ret = vmw_thp_insert_aligned(bo, mm, &node->mm_nodes[0],
+						     align_pages, place, mem,
+						     lpfn, mode);
 			if (!ret)
 				goto found_unlock;
 		}
@@ -85,14 +89,15 @@ static int vmw_thp_get_node(struct ttm_resource_manager *man,

 	align_pages = (HPAGE_PMD_SIZE >> PAGE_SHIFT);
 	if (mem->num_pages >= align_pages) {
-		ret = vmw_thp_insert_aligned(bo, mm, node, align_pages, place,
-					     mem, lpfn, mode);
+		ret = vmw_thp_insert_aligned(bo, mm, &node->mm_nodes[0],
+					     align_pages, place, mem, lpfn,
+					     mode);
 		if (!ret)
 			goto found_unlock;
 	}

-	ret = drm_mm_insert_node_in_range(mm, node, mem->num_pages,
-					  bo->page_alignment, 0,
+	ret = drm_mm_insert_node_in_range(mm, &node->mm_nodes[0],
+					  mem->num_pages, bo->page_alignment, 0,
 					  place->fpfn, lpfn, mode);
 found_unlock:
 	spin_unlock(&rman->lock);
@@ -100,8 +105,8 @@ static int vmw_thp_get_node(struct ttm_resource_manager *man,
 	if (unlikely(ret)) {
 		kfree(node);
 	} else {
-		mem->mm_node = node;
-		mem->start = node->start;
+		mem->mm_node = &node->mm_nodes[0];
+		mem->start = node->mm_nodes[0].start;
 	}

 	return ret;
@@ -113,15 +118,13 @@ static void vmw_thp_put_node(struct ttm_resource_manager *man,
 			     struct ttm_resource *mem)
 {
 	struct vmw_thp_manager *rman = to_thp_manager(man);
+	struct ttm_range_mgr_node *node = mem->mm_node;

-	if (mem->mm_node) {
-		spin_lock(&rman->lock);
-		drm_mm_remove_node(mem->mm_node);
-		spin_unlock(&rman->lock);
+	spin_lock(&rman->lock);
+	drm_mm_remove_node(&node->mm_nodes[0]);
+	spin_unlock(&rman->lock);

-		kfree(mem->mm_node);
-		mem->mm_node = NULL;
-	}
+	kfree(node);
 }

 int vmw_thp_init(struct vmw_private *dev_priv)
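The control flow in vmw_thp_get_node() is a simple fallback cascade: try PUD-order alignment, then PMD-order, then plain insertion. A standalone sketch of just that decision logic, with toy x86-64-like constants rather than the vmwgfx code:

#include <stdio.h>

#define PAGE_SHIFT 12
#define PMD_SHIFT  21	/* toy values; the kernel derives these per-arch */
#define PUD_SHIFT  30

/* pick the largest huge-page alignment the request can still satisfy */
static unsigned long pick_alignment(unsigned long num_pages)
{
	unsigned long pud_pages = 1UL << (PUD_SHIFT - PAGE_SHIFT);
	unsigned long pmd_pages = 1UL << (PMD_SHIFT - PAGE_SHIFT);

	if (num_pages >= pud_pages)
		return pud_pages;
	if (num_pages >= pmd_pages)
		return pmd_pages;
	return 0;	/* fall back to plain insertion */
}

int main(void)
{
	printf("%lu\n", pick_alignment(1000000));	/* PUD-aligned */
	printf("%lu\n", pick_alignment(1000));		/* PMD-aligned */
	printf("%lu\n", pick_alignment(8));		/* no alignment */
	return 0;
}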
From patchwork Fri Apr 30 09:25:08 2021
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 12232851
From: Christian König
To: dri-devel@lists.freedesktop.org
Cc: daniel.vetter@ffwll.ch, matthew.william.auld@gmail.com
Subject: [PATCH 13/13] drm/ttm: flip the switch for driver allocated resources
Date: Fri, 30 Apr 2021 11:25:08 +0200
Message-Id: <20210430092508.60710-13-christian.koenig@amd.com>
In-Reply-To: <20210430092508.60710-1-christian.koenig@amd.com>
References: <20210430092508.60710-1-christian.koenig@amd.com>

Instead of both the driver and TTM allocating memory, finalize embedding
the ttm_resource object as base into the driver backends.

Signed-off-by: Christian König
Reviewed-by: Matthew Auld
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c   | 44 ++++++--------
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  2 +-
 .../gpu/drm/amd/amdgpu/amdgpu_res_cursor.h    |  5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c  | 60 +++++++++----------
 drivers/gpu/drm/drm_gem_vram_helper.c         |  3 +-
 drivers/gpu/drm/nouveau/nouveau_bo.c          |  8 +--
 drivers/gpu/drm/nouveau/nouveau_mem.c         | 11 ++--
 drivers/gpu/drm/nouveau/nouveau_mem.h         | 14 ++---
 drivers/gpu/drm/nouveau/nouveau_ttm.c         | 32 +++++-----
 drivers/gpu/drm/ttm/ttm_range_manager.c       | 23 +++----
 drivers/gpu/drm/ttm/ttm_resource.c            | 15 +----
 drivers/gpu/drm/ttm/ttm_sys_manager.c         | 12 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c | 24 ++++----
 drivers/gpu/drm/vmwgfx/vmwgfx_thp.c           | 27 ++++-----
 include/drm/ttm/ttm_range_manager.h           |  3 +-
 include/drm/ttm/ttm_resource.h                | 43 ++++++-------
 16 files changed, 140 insertions(+), 186 deletions(-)
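The interface change at the heart of this final patch: alloc no longer fills in a caller-provided ttm_resource but returns a driver-allocated one through a double pointer, and free takes back exactly what alloc handed out. A standalone toy of that contract (invented names, not the TTM API):

#include <stdlib.h>

struct resource { unsigned long start, num_pages; };

struct manager_funcs {
	/* allocate a backend-owned resource and return it via *res */
	int  (*alloc)(unsigned long num_pages, struct resource **res);
	/* take back exactly what alloc() handed out */
	void (*free)(struct resource *res);
};

static int simple_alloc(unsigned long num_pages, struct resource **res)
{
	*res = calloc(1, sizeof(**res));	/* note: sizeof(**res) */
	if (!*res)
		return -1;
	(*res)->num_pages = num_pages;
	return 0;
}

static void simple_free(struct resource *res)
{
	free(res);
}

static const struct manager_funcs simple_manager = {
	.alloc = simple_alloc,
	.free  = simple_free,
};

int main(void)
{
	struct resource *res;

	if (simple_manager.alloc(16, &res))
		return 1;
	simple_manager.free(res);
	return 0;
}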
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
index 2e7dffd3614d..d6c1960abf64 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
@@ -40,8 +40,7 @@ to_gtt_mgr(struct ttm_resource_manager *man)
 static inline struct amdgpu_gtt_node *
 to_amdgpu_gtt_node(struct ttm_resource *res)
 {
-	return container_of(res->mm_node, struct amdgpu_gtt_node,
-			    base.mm_nodes[0]);
+	return container_of(res, struct amdgpu_gtt_node, base.base);
 }

 /**
@@ -92,13 +91,13 @@ static DEVICE_ATTR(mem_info_gtt_used, S_IRUGO,
 /**
  * amdgpu_gtt_mgr_has_gart_addr - Check if mem has address space
  *
- * @mem: the mem object to check
+ * @res: the mem object to check
  *
  * Check if a mem object has already address space allocated.
  */
-bool amdgpu_gtt_mgr_has_gart_addr(struct ttm_resource *mem)
+bool amdgpu_gtt_mgr_has_gart_addr(struct ttm_resource *res)
 {
-	struct amdgpu_gtt_node *node = to_amdgpu_gtt_node(mem);
+	struct amdgpu_gtt_node *node = to_amdgpu_gtt_node(res);

 	return drm_mm_node_allocated(&node->base.mm_nodes[0]);
 }
@@ -116,19 +115,20 @@ bool amdgpu_gtt_mgr_has_gart_addr(struct ttm_resource *mem)
 static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man,
 			      struct ttm_buffer_object *tbo,
 			      const struct ttm_place *place,
-			      struct ttm_resource *mem)
+			      struct ttm_resource **res)
 {
 	struct amdgpu_gtt_mgr *mgr = to_gtt_mgr(man);
+	uint32_t num_pages = PFN_UP(tbo->base.size);
 	struct amdgpu_gtt_node *node;
 	int r;

 	spin_lock(&mgr->lock);
-	if ((tbo->resource == mem || tbo->resource->mem_type != TTM_PL_TT) &&
-	    atomic64_read(&mgr->available) < mem->num_pages) {
+	if (tbo->resource && tbo->resource->mem_type != TTM_PL_TT &&
+	    atomic64_read(&mgr->available) < num_pages) {
 		spin_unlock(&mgr->lock);
 		return -ENOSPC;
 	}
-	atomic64_sub(mem->num_pages, &mgr->available);
+	atomic64_sub(num_pages, &mgr->available);
 	spin_unlock(&mgr->lock);

 	node = kzalloc(struct_size(node, base.mm_nodes, 1), GFP_KERNEL);
@@ -144,29 +144,28 @@ static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man,
 		spin_lock(&mgr->lock);
 		r = drm_mm_insert_node_in_range(&mgr->mm, &node->base.mm_nodes[0],
-						mem->num_pages,
-						tbo->page_alignment, 0,
-						place->fpfn, place->lpfn,
+						num_pages, tbo->page_alignment,
+						0, place->fpfn, place->lpfn,
 						DRM_MM_INSERT_BEST);
 		spin_unlock(&mgr->lock);
 		if (unlikely(r))
 			goto err_free;

-		mem->start = node->base.mm_nodes[0].start;
+		node->base.base.start = node->base.mm_nodes[0].start;
 	} else {
 		node->base.mm_nodes[0].start = 0;
-		node->base.mm_nodes[0].size = mem->num_pages;
-		mem->start = AMDGPU_BO_INVALID_OFFSET;
+		node->base.mm_nodes[0].size = node->base.base.num_pages;
+		node->base.base.start = AMDGPU_BO_INVALID_OFFSET;
 	}

-	mem->mm_node = &node->base.mm_nodes[0];
+	*res = &node->base.base;
 	return 0;

 err_free:
 	kfree(node);

 err_out:
-	atomic64_add(mem->num_pages, &mgr->available);
+	atomic64_add(num_pages, &mgr->available);

 	return r;
 }
@@ -180,21 +179,16 @@ static int amdgpu_gtt_mgr_new(struct ttm_resource_manager *man,
  * Free the allocated GTT again.
  */
 static void amdgpu_gtt_mgr_del(struct ttm_resource_manager *man,
-			       struct ttm_resource *mem)
+			       struct ttm_resource *res)
 {
+	struct amdgpu_gtt_node *node = to_amdgpu_gtt_node(res);
 	struct amdgpu_gtt_mgr *mgr = to_gtt_mgr(man);
-	struct amdgpu_gtt_node *node;
-
-	if (!mem->mm_node)
-		return;
-
-	node = to_amdgpu_gtt_node(mem);

 	spin_lock(&mgr->lock);
 	if (drm_mm_node_allocated(&node->base.mm_nodes[0]))
 		drm_mm_remove_node(&node->base.mm_nodes[0]);
 	spin_unlock(&mgr->lock);
-	atomic64_add(mem->num_pages, &mgr->available);
+	atomic64_add(res->num_pages, &mgr->available);

 	kfree(node);
 }
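amdgpu's GTT node shows the same pattern nested one level deeper: amdgpu_gtt_node embeds a ttm_range_mgr_node, which itself embeds the ttm_resource, so the downcast goes through base.base. A standalone illustration with toy types:

#include <assert.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct resource { unsigned long start; };

struct range_node {		/* like ttm_range_mgr_node */
	struct resource base;
	unsigned long mm_node;
};

struct gtt_node {		/* like amdgpu_gtt_node */
	struct range_node base;
	void *tbo;
};

int main(void)
{
	struct gtt_node n = { 0 };
	struct resource *res = &n.base.base;

	/* one container_of through the nested member recovers the node */
	assert(container_of(res, struct gtt_node, base.base) == &n);
	return 0;
}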
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index e669a3adac44..84142eed66df 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -1315,7 +1315,7 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
 	if (bo->base.resv == &bo->base._resv)
 		amdgpu_amdkfd_remove_fence_on_pt_pd_bos(abo);

-	if (bo->resource->mem_type != TTM_PL_VRAM || !bo->resource->mm_node ||
+	if (bo->resource->mem_type != TTM_PL_VRAM ||
 	    !(abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE))
 		return;

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
index 40f2adf305bc..59e0fefb15aa 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
@@ -28,6 +28,7 @@

 #include
 #include
+#include

 /* state back for walking over vram_mgr and gtt_mgr allocations */
 struct amdgpu_res_cursor {
@@ -53,7 +54,7 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
 {
 	struct drm_mm_node *node;

-	if (!res || !res->mm_node) {
+	if (!res) {
 		cur->start = start;
 		cur->size = size;
 		cur->remaining = size;
@@ -63,7 +64,7 @@ static inline void amdgpu_res_first(struct ttm_resource *res,

 	BUG_ON(start + size > res->num_pages << PAGE_SHIFT);

-	node = res->mm_node;
+	node = to_ttm_range_mgr_node(res)->mm_nodes;
 	while (start >= node->size << PAGE_SHIFT)
 		start -= node++->size << PAGE_SHIFT;
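The res_cursor change replaces the opaque mm_node pointer with a walk over the embedded flexible array; the skip-to-offset loop is the interesting part. A standalone model of it, with toy types and byte units instead of pages:

#include <assert.h>
#include <stddef.h>

struct range { size_t size; };	/* like drm_mm_node sizes, in bytes here */

/* advance to the range containing 'start', like amdgpu_res_first() */
static const struct range *seek(const struct range *node, size_t *start)
{
	while (*start >= node->size) {
		*start -= node->size;
		node++;
	}
	return node;	/* *start is now the offset inside this range */
}

int main(void)
{
	const struct range ranges[] = { { 4096 }, { 8192 }, { 4096 } };
	size_t off = 10000;
	const struct range *r = seek(ranges, &off);

	assert(r == &ranges[1]);	/* 10000 falls into the second range */
	assert(off == 10000 - 4096);
	return 0;
}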
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
index d59ec07c77bb..c8feae788e79 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
@@ -215,19 +215,20 @@ static u64 amdgpu_vram_mgr_vis_size(struct amdgpu_device *adev,
 u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo)
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
-	struct ttm_resource *mem = bo->tbo.resource;
-	struct drm_mm_node *nodes = mem->mm_node;
-	unsigned pages = mem->num_pages;
+	struct ttm_resource *res = bo->tbo.resource;
+	unsigned pages = res->num_pages;
+	struct drm_mm_node *mm;
 	u64 usage;

 	if (amdgpu_gmc_vram_full_visible(&adev->gmc))
 		return amdgpu_bo_size(bo);

-	if (mem->start >= adev->gmc.visible_vram_size >> PAGE_SHIFT)
+	if (res->start >= adev->gmc.visible_vram_size >> PAGE_SHIFT)
 		return 0;

-	for (usage = 0; nodes && pages; pages -= nodes->size, nodes++)
-		usage += amdgpu_vram_mgr_vis_size(adev, nodes);
+	mm = &container_of(res, struct ttm_range_mgr_node, base)->mm_nodes[0];
+	for (usage = 0; pages; pages -= mm->size, mm++)
+		usage += amdgpu_vram_mgr_vis_size(adev, mm);

 	return usage;
 }
@@ -363,7 +364,7 @@ static void amdgpu_vram_mgr_virt_start(struct ttm_resource *mem,
 static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 			       struct ttm_buffer_object *tbo,
 			       const struct ttm_place *place,
-			       struct ttm_resource *mem)
+			       struct ttm_resource **res)
 {
 	unsigned long lpfn, num_nodes, pages_per_node, pages_left, pages;
 	struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
@@ -384,7 +385,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 	max_bytes -= AMDGPU_VM_RESERVED_VRAM;

 	/* bail out quickly if there's likely not enough VRAM for this BO */
-	mem_bytes = (u64)mem->num_pages << PAGE_SHIFT;
+	mem_bytes = tbo->base.size;
 	if (atomic64_add_return(mem_bytes, &mgr->usage) > max_bytes) {
 		r = -ENOSPC;
 		goto error_sub;
@@ -402,7 +403,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 #endif
 		pages_per_node = max_t(uint32_t, pages_per_node,
 				       tbo->page_alignment);
-		num_nodes = DIV_ROUND_UP(mem->num_pages, pages_per_node);
+		num_nodes = DIV_ROUND_UP(PFN_UP(mem_bytes), pages_per_node);
 	}

 	node = kvmalloc(struct_size(node, mm_nodes, num_nodes),
@@ -418,8 +419,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 	if (place->flags & TTM_PL_FLAG_TOPDOWN)
 		mode = DRM_MM_INSERT_HIGH;

-	mem->start = 0;
-	pages_left = mem->num_pages;
+	pages_left = node->base.num_pages;

 	/* Limit maximum size to 2GB due to SG table limitations */
 	pages = min(pages_left, 2UL << (30 - PAGE_SHIFT));
@@ -447,7 +447,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 		}

 		vis_usage += amdgpu_vram_mgr_vis_size(adev, &node->mm_nodes[i]);
-		amdgpu_vram_mgr_virt_start(mem, &node->mm_nodes[i]);
+		amdgpu_vram_mgr_virt_start(&node->base, &node->mm_nodes[i]);
 		pages_left -= pages;
 		++i;
@@ -457,10 +457,10 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 	spin_unlock(&mgr->lock);

 	if (i == 1)
-		mem->placement |= TTM_PL_FLAG_CONTIGUOUS;
+		node->base.placement |= TTM_PL_FLAG_CONTIGUOUS;

 	atomic64_add(vis_usage, &mgr->vis_usage);
-	mem->mm_node = &node->mm_nodes[0];
+	*res = &node->base;
 	return 0;

 error_free:
@@ -483,28 +483,22 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
  * Free the allocated VRAM again.
  */
 static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man,
-				struct ttm_resource *mem)
+				struct ttm_resource *res)
 {
+	struct ttm_range_mgr_node *node = to_ttm_range_mgr_node(res);
 	struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
 	struct amdgpu_device *adev = to_amdgpu_device(mgr);
-	struct ttm_range_mgr_node *node;
 	uint64_t usage = 0, vis_usage = 0;
-	unsigned pages = mem->num_pages;
-	struct drm_mm_node *nodes;
-
-	if (!mem->mm_node)
-		return;
-
-	node = to_ttm_range_mgr_node(mem);
-	nodes = &node->mm_nodes[0];
+	unsigned i, pages = res->num_pages;

 	spin_lock(&mgr->lock);
-	while (pages) {
-		pages -= nodes->size;
-		drm_mm_remove_node(nodes);
-		usage += nodes->size << PAGE_SHIFT;
-		vis_usage += amdgpu_vram_mgr_vis_size(adev, nodes);
-		++nodes;
+	for (i = 0, pages = res->num_pages; pages;
+	     pages -= node->mm_nodes[i].size, ++i) {
+		struct drm_mm_node *mm = &node->mm_nodes[i];
+
+		drm_mm_remove_node(mm);
+		usage += mm->size << PAGE_SHIFT;
+		vis_usage += amdgpu_vram_mgr_vis_size(adev, mm);
 	}
 	amdgpu_vram_mgr_do_reserve(man);
 	spin_unlock(&mgr->lock);
@@ -529,7 +523,7 @@ static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man,
 * Allocate and fill a sg table from a VRAM allocation.
 */
 int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
-			      struct ttm_resource *mem,
+			      struct ttm_resource *res,
 			      u64 offset, u64 length,
 			      struct device *dev,
 			      enum dma_data_direction dir,
@@ -545,7 +539,7 @@ int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
 		return -ENOMEM;

 	/* Determine the number of DRM_MM nodes to export */
-	amdgpu_res_first(mem, offset, length, &cursor);
+	amdgpu_res_first(res, offset, length, &cursor);
 	while (cursor.remaining) {
 		num_entries++;
 		amdgpu_res_next(&cursor, cursor.size);
@@ -565,7 +559,7 @@ int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
 	 * and the number of bytes from it. Access the following
 	 * DRM_MM node(s) if more buffer needs to exported
 	 */
-	amdgpu_res_first(mem, offset, length, &cursor);
+	amdgpu_res_first(res, offset, length, &cursor);
 	for_each_sgtable_sg((*sgt), sg, i) {
 		phys_addr_t phys = cursor.start + adev->gmc.aper_base;
 		size_t size = cursor.size;
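The rewritten amdgpu_vram_mgr_del() walks the same flexible array on the way out: release every range, account its size, then drop the whole container with a single free. The inverse of the allocation sketch earlier, in standalone toy form (sizes in pages, invented names):

#include <stdio.h>
#include <stdlib.h>

struct range { unsigned long size; };

struct region {
	unsigned long num_pages;
	struct range ranges[];
};

/* one pass releases every range, then one free drops the container */
static void region_free(struct region *r)
{
	unsigned long usage = 0, pages, i;

	for (i = 0, pages = r->num_pages; pages;
	     pages -= r->ranges[i].size, ++i)
		usage += r->ranges[i].size;	/* stand-in for drm_mm_remove_node() */

	printf("released %lu pages\n", usage);
	free(r);
}

int main(void)
{
	struct region *r = calloc(1, sizeof(*r) + 2 * sizeof(struct range));

	if (!r)
		return 1;
	r->num_pages = 96;
	r->ranges[0].size = 64;
	r->ranges[1].size = 32;
	region_free(r);
	return 0;
}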
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 17a4c5d47b6a..2a1229b8364e 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -250,7 +250,8 @@ EXPORT_SYMBOL(drm_gem_vram_put);
 static u64 drm_gem_vram_pg_offset(struct drm_gem_vram_object *gbo)
 {
 	/* Keep TTM behavior for now, remove when drivers are audited */
-	if (WARN_ON_ONCE(!gbo->bo.resource->mm_node))
+	if (WARN_ON_ONCE(!gbo->bo.resource ||
+			 gbo->bo.resource->mem_type == TTM_PL_SYSTEM))
 		return 0;

 	return gbo->bo.resource->start;
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index be546a2d3e36..73ddf8c861a3 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -918,12 +918,8 @@ static void nouveau_bo_move_ntfy(struct ttm_buffer_object *bo,
 		}
 	}

-	if (new_reg) {
-		if (new_reg->mm_node)
-			nvbo->offset = (new_reg->start << PAGE_SHIFT);
-		else
-			nvbo->offset = 0;
-	}
+	if (new_reg)
+		nvbo->offset = (new_reg->start << PAGE_SHIFT);

 }
diff --git a/drivers/gpu/drm/nouveau/nouveau_mem.c b/drivers/gpu/drm/nouveau/nouveau_mem.c
index a1049e9feee1..0de6549fb875 100644
--- a/drivers/gpu/drm/nouveau/nouveau_mem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_mem.c
@@ -178,25 +178,24 @@ void
 nouveau_mem_del(struct ttm_resource *reg)
 {
 	struct nouveau_mem *mem = nouveau_mem(reg);

-	if (!mem)
-		return;
+
 	nouveau_mem_fini(mem);
-	kfree(reg->mm_node);
-	reg->mm_node = NULL;
+	kfree(mem);
 }

 int
 nouveau_mem_new(struct nouveau_cli *cli, u8 kind, u8 comp,
-		struct ttm_resource *reg)
+		struct ttm_resource **res)
 {
 	struct nouveau_mem *mem;

 	if (!(mem = kzalloc(sizeof(*mem), GFP_KERNEL)))
 		return -ENOMEM;
+
 	mem->cli = cli;
 	mem->kind = kind;
 	mem->comp = comp;

-	reg->mm_node = mem;
+	*res = &mem->base;
 	return 0;
 }
diff --git a/drivers/gpu/drm/nouveau/nouveau_mem.h b/drivers/gpu/drm/nouveau/nouveau_mem.h
index 3a6a1be2ed52..2c01166a90f2 100644
--- a/drivers/gpu/drm/nouveau/nouveau_mem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_mem.h
@@ -6,12 +6,6 @@ struct ttm_tt;
 #include
 #include

-static inline struct nouveau_mem *
-nouveau_mem(struct ttm_resource *reg)
-{
-	return reg->mm_node;
-}
-
 struct nouveau_mem {
 	struct ttm_resource base;
 	struct nouveau_cli *cli;
@@ -21,8 +15,14 @@ struct nouveau_mem {
 	struct nvif_vma vma[2];
 };

+static inline struct nouveau_mem *
+nouveau_mem(struct ttm_resource *reg)
+{
+	return container_of(reg, struct nouveau_mem, base);
+}
+
 int nouveau_mem_new(struct nouveau_cli *, u8 kind, u8 comp,
-		    struct ttm_resource *);
+		    struct ttm_resource **);
 void nouveau_mem_del(struct ttm_resource *);
 int nouveau_mem_vram(struct ttm_resource *, bool contig, u8 page);
 int nouveau_mem_host(struct ttm_resource *, struct ttm_tt *);
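A small C detail explains why the nouveau_mem() helper moves below the structure definition in the hunk above: the container_of() form expands to offsetof(), which requires the complete type, whereas the old `return reg->mm_node;` compiled against a forward declaration. A standalone demonstration of the same constraint:

#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct resource { unsigned long start; };

/* with only "struct mem;" visible here, offsetof(struct mem, base)
 * would not compile -- the full definition has to come first */
struct mem {
	struct resource base;
	unsigned char kind;
};

static inline struct mem *to_mem(struct resource *res)
{
	return container_of(res, struct mem, base);
}

int main(void)
{
	struct mem m = { 0 };

	return to_mem(&m.base) == &m ? 0 : 1;
}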
ttm_resource *); int nouveau_mem_vram(struct ttm_resource *, bool contig, u8 page); int nouveau_mem_host(struct ttm_resource *, struct ttm_tt *); diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c index 5e5ce2ec89f0..723fed27fe77 100644 --- a/drivers/gpu/drm/nouveau/nouveau_ttm.c +++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c @@ -46,7 +46,7 @@ static int nouveau_vram_manager_new(struct ttm_resource_manager *man, struct ttm_buffer_object *bo, const struct ttm_place *place, - struct ttm_resource *reg) + struct ttm_resource **res) { struct nouveau_bo *nvbo = nouveau_bo(bo); struct nouveau_drm *drm = nouveau_bdev(bo->bdev); @@ -55,15 +55,15 @@ nouveau_vram_manager_new(struct ttm_resource_manager *man, if (drm->client.device.info.ram_size == 0) return -ENOMEM; - ret = nouveau_mem_new(&drm->master, nvbo->kind, nvbo->comp, reg); + ret = nouveau_mem_new(&drm->master, nvbo->kind, nvbo->comp, res); if (ret) return ret; - ttm_resource_init(bo, place, reg->mm_node); + ttm_resource_init(bo, place, *res); - ret = nouveau_mem_vram(reg, nvbo->contig, nvbo->page); + ret = nouveau_mem_vram(*res, nvbo->contig, nvbo->page); if (ret) { - nouveau_mem_del(reg); + nouveau_mem_del(*res); return ret; } @@ -79,18 +79,18 @@ static int nouveau_gart_manager_new(struct ttm_resource_manager *man, struct ttm_buffer_object *bo, const struct ttm_place *place, - struct ttm_resource *reg) + struct ttm_resource **res) { struct nouveau_bo *nvbo = nouveau_bo(bo); struct nouveau_drm *drm = nouveau_bdev(bo->bdev); int ret; - ret = nouveau_mem_new(&drm->master, nvbo->kind, nvbo->comp, reg); + ret = nouveau_mem_new(&drm->master, nvbo->kind, nvbo->comp, res); if (ret) return ret; - ttm_resource_init(bo, place, reg->mm_node); - reg->start = 0; + ttm_resource_init(bo, place, *res); + (*res)->start = 0; return 0; } @@ -103,27 +103,27 @@ static int nv04_gart_manager_new(struct ttm_resource_manager *man, struct ttm_buffer_object *bo, const struct ttm_place *place, - struct ttm_resource *reg) + struct ttm_resource **res) { struct nouveau_bo *nvbo = nouveau_bo(bo); struct nouveau_drm *drm = nouveau_bdev(bo->bdev); struct nouveau_mem *mem; int ret; - ret = nouveau_mem_new(&drm->master, nvbo->kind, nvbo->comp, reg); - mem = nouveau_mem(reg); + ret = nouveau_mem_new(&drm->master, nvbo->kind, nvbo->comp, res); if (ret) return ret; - ttm_resource_init(bo, place, reg->mm_node); + mem = nouveau_mem(*res); + ttm_resource_init(bo, place, *res); ret = nvif_vmm_get(&mem->cli->vmm.vmm, PTES, false, 12, 0, - (long)reg->num_pages << PAGE_SHIFT, &mem->vma[0]); + (long)(*res)->num_pages << PAGE_SHIFT, &mem->vma[0]); if (ret) { - nouveau_mem_del(reg); + nouveau_mem_del(*res); return ret; } - reg->start = mem->vma[0].addr >> PAGE_SHIFT; + (*res)->start = mem->vma[0].addr >> PAGE_SHIFT; return 0; } diff --git a/drivers/gpu/drm/ttm/ttm_range_manager.c b/drivers/gpu/drm/ttm/ttm_range_manager.c index ce5d07ca384c..c32e1aee2481 100644 --- a/drivers/gpu/drm/ttm/ttm_range_manager.c +++ b/drivers/gpu/drm/ttm/ttm_range_manager.c @@ -58,7 +58,7 @@ to_range_manager(struct ttm_resource_manager *man) static int ttm_range_man_alloc(struct ttm_resource_manager *man, struct ttm_buffer_object *bo, const struct ttm_place *place, - struct ttm_resource *mem) + struct ttm_resource **res) { struct ttm_range_manager *rman = to_range_manager(man); struct ttm_range_mgr_node *node; @@ -83,37 +83,30 @@ static int ttm_range_man_alloc(struct ttm_resource_manager *man, spin_lock(&rman->lock); ret = drm_mm_insert_node_in_range(mm, &node->mm_nodes[0], - 
diff --git a/drivers/gpu/drm/ttm/ttm_range_manager.c b/drivers/gpu/drm/ttm/ttm_range_manager.c
index ce5d07ca384c..c32e1aee2481 100644
--- a/drivers/gpu/drm/ttm/ttm_range_manager.c
+++ b/drivers/gpu/drm/ttm/ttm_range_manager.c
@@ -58,7 +58,7 @@ to_range_manager(struct ttm_resource_manager *man)
 static int ttm_range_man_alloc(struct ttm_resource_manager *man,
 			       struct ttm_buffer_object *bo,
 			       const struct ttm_place *place,
-			       struct ttm_resource *mem)
+			       struct ttm_resource **res)
 {
 	struct ttm_range_manager *rman = to_range_manager(man);
 	struct ttm_range_mgr_node *node;
@@ -83,37 +83,30 @@ static int ttm_range_man_alloc(struct ttm_resource_manager *man,

 	spin_lock(&rman->lock);
 	ret = drm_mm_insert_node_in_range(mm, &node->mm_nodes[0],
-					  mem->num_pages, bo->page_alignment, 0,
+					  node->base.num_pages,
+					  bo->page_alignment, 0,
 					  place->fpfn, lpfn, mode);
 	spin_unlock(&rman->lock);

-	if (unlikely(ret)) {
+	if (unlikely(ret))
 		kfree(node);
-	} else {
-		mem->mm_node = &node->mm_nodes[0];
-		mem->start = node->mm_nodes[0].start;
-	}
+	else
+		node->base.start = node->mm_nodes[0].start;

 	return ret;
 }

 static void ttm_range_man_free(struct ttm_resource_manager *man,
-			       struct ttm_resource *mem)
+			       struct ttm_resource *res)
 {
+	struct ttm_range_mgr_node *node = to_ttm_range_mgr_node(res);
 	struct ttm_range_manager *rman = to_range_manager(man);
-	struct ttm_range_mgr_node *node;
-
-	if (!mem->mm_node)
-		return;
-
-	node = to_ttm_range_mgr_node(mem);

 	spin_lock(&rman->lock);
 	drm_mm_remove_node(&node->mm_nodes[0]);
 	spin_unlock(&rman->lock);

 	kfree(node);
-	mem->mm_node = NULL;
 }

 static void ttm_range_man_debug(struct ttm_resource_manager *man,
diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
index b412ae515e98..2a68145572cc 100644
--- a/drivers/gpu/drm/ttm/ttm_resource.c
+++ b/drivers/gpu/drm/ttm/ttm_resource.c
@@ -29,7 +29,6 @@ void ttm_resource_init(struct ttm_buffer_object *bo,
 		       const struct ttm_place *place,
 		       struct ttm_resource *res)
 {
-	res->mm_node = NULL;
 	res->start = 0;
 	res->num_pages = PFN_UP(bo->base.size);
 	res->mem_type = place->mem_type;
@@ -47,19 +46,8 @@ int ttm_resource_alloc(struct ttm_buffer_object *bo,
 {
 	struct ttm_resource_manager *man =
 		ttm_manager_type(bo->bdev, place->mem_type);
-	struct ttm_resource *res;
-	int r;
-
-	res = kmalloc(sizeof(*res), GFP_KERNEL);
-	ttm_resource_init(bo, place, res);

-	r = man->func->alloc(man, bo, place, res);
-	if (r) {
-		kfree(res);
-		return r;
-	}
-	*res_ptr = res;
-	return 0;
+	return man->func->alloc(man, bo, place, res_ptr);
 }

 void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res)
@@ -71,7 +59,6 @@ void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res)

 	man = ttm_manager_type(bo->bdev, (*res)->mem_type);
 	man->func->free(man, *res);
-	kfree(*res);
 	*res = NULL;
 }
 EXPORT_SYMBOL(ttm_resource_free);
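After this hunk the core helper is a pure pass-through: allocation, initialization, and eventual freeing all live in the backend, while the core only clears the caller's pointer. The shape of that ownership split, as a standalone toy (invented names):

#include <stdlib.h>

struct resource { unsigned long num_pages; };

static int backend_alloc(unsigned long n, struct resource **res)
{
	*res = calloc(1, sizeof(**res));	/* backend owns the memory */
	if (!*res)
		return -1;
	(*res)->num_pages = n;
	return 0;
}

static void backend_free(struct resource *res)
{
	free(res);	/* and the backend releases it */
}

/* the core helper neither allocates nor frees any more */
static int core_alloc(unsigned long n, struct resource **res)
{
	return backend_alloc(n, res);
}

static void core_free(struct resource **res)
{
	backend_free(*res);
	*res = NULL;	/* the core still clears the caller's pointer */
}

int main(void)
{
	struct resource *res;

	if (core_alloc(8, &res))
		return 1;
	core_free(&res);
	return res == NULL ? 0 : 1;
}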
diff --git a/drivers/gpu/drm/ttm/ttm_sys_manager.c b/drivers/gpu/drm/ttm/ttm_sys_manager.c
index a926114edfe5..3b9e76abe97e 100644
--- a/drivers/gpu/drm/ttm/ttm_sys_manager.c
+++ b/drivers/gpu/drm/ttm/ttm_sys_manager.c
@@ -8,20 +8,20 @@
 static int ttm_sys_man_alloc(struct ttm_resource_manager *man,
 			     struct ttm_buffer_object *bo,
 			     const struct ttm_place *place,
-			     struct ttm_resource *mem)
+			     struct ttm_resource **res)
 {
-	mem->mm_node = kzalloc(sizeof(*mem), GFP_KERNEL);
-	if (!mem->mm_node)
+	*res = kzalloc(sizeof(**res), GFP_KERNEL);
+	if (!*res)
 		return -ENOMEM;

-	ttm_resource_init(bo, place, mem->mm_node);
+	ttm_resource_init(bo, place, *res);
 	return 0;
 }

 static void ttm_sys_man_free(struct ttm_resource_manager *man,
-			     struct ttm_resource *mem)
+			     struct ttm_resource *res)
 {
-	kfree(mem->mm_node);
+	kfree(res);
 }

 static const struct ttm_resource_manager_func ttm_sys_manager_func = {
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
index 82a5e6489810..354219a27f31 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
@@ -52,16 +52,16 @@ static struct vmwgfx_gmrid_man *to_gmrid_manager(struct ttm_resource_manager *man)
 static int vmw_gmrid_man_get_node(struct ttm_resource_manager *man,
 				  struct ttm_buffer_object *bo,
 				  const struct ttm_place *place,
-				  struct ttm_resource *mem)
+				  struct ttm_resource **res)
 {
 	struct vmwgfx_gmrid_man *gman = to_gmrid_manager(man);
 	int id;

-	mem->mm_node = kmalloc(sizeof(*mem), GFP_KERNEL);
-	if (!mem->mm_node)
+	*res = kmalloc(sizeof(*res), GFP_KERNEL);
+	if (!*res)
 		return -ENOMEM;

-	ttm_resource_init(bo, place, mem->mm_node);
+	ttm_resource_init(bo, place, *res);

 	id = ida_alloc_max(&gman->gmr_ida, gman->max_gmr_ids - 1, GFP_KERNEL);
 	if (id < 0)
@@ -70,34 +70,34 @@ static int vmw_gmrid_man_get_node(struct ttm_resource_manager *man,

 	spin_lock(&gman->lock);

 	if (gman->max_gmr_pages > 0) {
-		gman->used_gmr_pages += mem->num_pages;
+		gman->used_gmr_pages += (*res)->num_pages;
 		if (unlikely(gman->used_gmr_pages > gman->max_gmr_pages))
 			goto nospace;
 	}

-	mem->mm_node = gman;
-	mem->start = id;
+	(*res)->start = id;

 	spin_unlock(&gman->lock);
 	return 0;

 nospace:
-	gman->used_gmr_pages -= mem->num_pages;
+	gman->used_gmr_pages -= (*res)->num_pages;
 	spin_unlock(&gman->lock);
 	ida_free(&gman->gmr_ida, id);
+	kfree(*res);
 	return -ENOSPC;
 }

 static void vmw_gmrid_man_put_node(struct ttm_resource_manager *man,
-				   struct ttm_resource *mem)
+				   struct ttm_resource *res)
 {
 	struct vmwgfx_gmrid_man *gman = to_gmrid_manager(man);

-	ida_free(&gman->gmr_ida, mem->start);
+	ida_free(&gman->gmr_ida, res->start);
 	spin_lock(&gman->lock);
-	gman->used_gmr_pages -= mem->num_pages;
+	gman->used_gmr_pages -= res->num_pages;
 	spin_unlock(&gman->lock);
-	kfree(mem->mm_node);
+	kfree(res);
 }

 static const struct ttm_resource_manager_func vmw_gmrid_manager_func;
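One idiom in ttm_sys_man_alloc() deserves a note: when allocating through a double pointer, the object size is sizeof(**res), not sizeof(*res) — the latter is only the size of a pointer. A standalone check of that distinction:

#include <assert.h>
#include <stdlib.h>

struct resource { unsigned long start, num_pages; };

static int alloc_res(struct resource **res)
{
	/* sizeof(**res) == sizeof(struct resource);
	 * sizeof(*res) would only be sizeof(struct resource *) */
	assert(sizeof(**res) == sizeof(struct resource));
	assert(sizeof(*res) == sizeof(struct resource *));

	*res = calloc(1, sizeof(**res));
	return *res ? 0 : -1;
}

int main(void)
{
	struct resource *res;

	if (alloc_res(&res))
		return 1;
	free(res);
	return 0;
}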
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_thp.c b/drivers/gpu/drm/vmwgfx/vmwgfx_thp.c
index 8765835696ac..2a3d3468e4e0 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_thp.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_thp.c
@@ -51,7 +51,7 @@ static int vmw_thp_insert_aligned(struct ttm_buffer_object *bo,
 static int vmw_thp_get_node(struct ttm_resource_manager *man,
 			    struct ttm_buffer_object *bo,
 			    const struct ttm_place *place,
-			    struct ttm_resource *mem)
+			    struct ttm_resource **res)
 {
 	struct vmw_thp_manager *rman = to_thp_manager(man);
 	struct drm_mm *mm = &rman->mm;
@@ -78,26 +78,27 @@ static int vmw_thp_get_node(struct ttm_resource_manager *man,
 	spin_lock(&rman->lock);
 	if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)) {
 		align_pages = (HPAGE_PUD_SIZE >> PAGE_SHIFT);
-		if (mem->num_pages >= align_pages) {
+		if (node->base.num_pages >= align_pages) {
 			ret = vmw_thp_insert_aligned(bo, mm, &node->mm_nodes[0],
-						     align_pages, place, mem,
-						     lpfn, mode);
+						     align_pages, place,
+						     &node->base, lpfn, mode);
 			if (!ret)
 				goto found_unlock;
 		}
 	}

 	align_pages = (HPAGE_PMD_SIZE >> PAGE_SHIFT);
-	if (mem->num_pages >= align_pages) {
+	if (node->base.num_pages >= align_pages) {
 		ret = vmw_thp_insert_aligned(bo, mm, &node->mm_nodes[0],
-					     align_pages, place, mem, lpfn,
-					     mode);
+					     align_pages, place, &node->base,
+					     lpfn, mode);
 		if (!ret)
 			goto found_unlock;
 	}

 	ret = drm_mm_insert_node_in_range(mm, &node->mm_nodes[0],
-					  mem->num_pages, bo->page_alignment, 0,
+					  node->base.num_pages,
+					  bo->page_alignment, 0,
 					  place->fpfn, lpfn, mode);
 found_unlock:
 	spin_unlock(&rman->lock);
@@ -105,20 +106,18 @@ static int vmw_thp_get_node(struct ttm_resource_manager *man,
 	if (unlikely(ret)) {
 		kfree(node);
 	} else {
-		mem->mm_node = &node->mm_nodes[0];
-		mem->start = node->mm_nodes[0].start;
+		node->base.start = node->mm_nodes[0].start;
+		*res = &node->base;
 	}

 	return ret;
 }

-
-
 static void vmw_thp_put_node(struct ttm_resource_manager *man,
-			     struct ttm_resource *mem)
+			     struct ttm_resource *res)
 {
+	struct ttm_range_mgr_node *node = to_ttm_range_mgr_node(res);
 	struct vmw_thp_manager *rman = to_thp_manager(man);
-	struct ttm_range_mgr_node *node = mem->mm_node;

 	spin_lock(&rman->lock);
 	drm_mm_remove_node(&node->mm_nodes[0]);
diff --git a/include/drm/ttm/ttm_range_manager.h b/include/drm/ttm/ttm_range_manager.h
index e02b6c8d355e..22b6fa42ac20 100644
--- a/include/drm/ttm/ttm_range_manager.h
+++ b/include/drm/ttm/ttm_range_manager.h
@@ -30,8 +30,7 @@ struct ttm_range_mgr_node {
 static inline struct ttm_range_mgr_node *
 to_ttm_range_mgr_node(struct ttm_resource *res)
 {
-	return container_of(res->mm_node, struct ttm_range_mgr_node,
-			    mm_nodes[1]);
+	return container_of(res, struct ttm_range_mgr_node, base);
 }

 int ttm_range_man_init(struct ttm_device *bdev,
diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h
index 803e4875d779..4abb95b9fd11 100644
--- a/include/drm/ttm/ttm_resource.h
+++ b/include/drm/ttm/ttm_resource.h
@@ -45,46 +45,38 @@ struct ttm_resource_manager_func {
	 *
	 * @man: Pointer to a memory type manager.
	 * @bo: Pointer to the buffer object we're allocating space for.
-	 * @placement: Placement details.
-	 * @flags: Additional placement flags.
-	 * @mem: Pointer to a struct ttm_resource to be filled in.
+	 * @place: Placement details.
+	 * @res: Resulting pointer to the ttm_resource.
	 *
	 * This function should allocate space in the memory type managed
-	 * by @man. Placement details if
-	 * applicable are given by @placement. If successful,
-	 * @mem::mm_node should be set to a non-null value, and
-	 * @mem::start should be set to a value identifying the beginning
+	 * by @man. Placement details if applicable are given by @place. If
+	 * successful, a filled in ttm_resource object should be returned in
+	 * @res. @res::start should be set to a value identifying the beginning
	 * of the range allocated, and the function should return zero.
-	 * If the memory region accommodate the buffer object, @mem::mm_node
-	 * should be set to NULL, and the function should return 0.
+	 * If the manager can't fulfill the request -ENOSPC should be returned.
	 * If a system error occurred, preventing the request to be fulfilled,
	 * the function should return a negative error code.
	 *
-	 * Note that @mem::mm_node will only be dereferenced by
-	 * struct ttm_resource_manager functions and optionally by the driver,
-	 * which has knowledge of the underlying type.
-	 *
-	 * This function may not be called from within atomic context, so
-	 * an implementation can and must use either a mutex or a spinlock to
-	 * protect any data structures managing the space.
+	 * This function may not be called from within atomic context and needs
+	 * to take care of its own locking to protect any data structures
+	 * managing the space.
	 */
	int  (*alloc)(struct ttm_resource_manager *man,
		      struct ttm_buffer_object *bo,
		      const struct ttm_place *place,
-		      struct ttm_resource *mem);
+		      struct ttm_resource **res);

	/**
	 * struct ttm_resource_manager_func member free
	 *
	 * @man: Pointer to a memory type manager.
-	 * @mem: Pointer to a struct ttm_resource to be filled in.
+	 * @res: Pointer to a struct ttm_resource to be freed.
	 *
-	 * This function frees memory type resources previously allocated
-	 * and that are identified by @mem::mm_node and @mem::start. May not
-	 * be called from within atomic context.
+	 * This function frees memory type resources previously allocated.
+	 * May not be called from within atomic context.
	 */
	void (*free)(struct ttm_resource_manager *man,
-		     struct ttm_resource *mem);
+		     struct ttm_resource *res);

	/**
	 * struct ttm_resource_manager_func member debug
@@ -158,9 +150,9 @@ struct ttm_bus_placement {
 /**
 * struct ttm_resource
 *
- * @mm_node: Memory manager node.
- * @size: Requested size of memory region.
- * @num_pages: Actual size of memory region in pages.
+ * @start: Start of the allocation.
+ * @num_pages: Actual size of resource in pages.
+ * @mem_type: Resource type of the allocation.
 * @placement: Placement flags.
 * @bus: Placement on io bus accessible to the CPU
 *
@@ -168,7 +160,6 @@ struct ttm_bus_placement {
 * buffer object.
 */
 struct ttm_resource {
-	void *mm_node;
	unsigned long start;
	unsigned long num_pages;
	uint32_t mem_type;
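Putting the series together: a self-allocating backend embeds the resource as base, allocates base plus payload in one go, hands the base back through the double pointer, and recovers its own type with container_of() on free. A compact standalone model of that end state (toy names throughout, not the TTM API):

#include <stddef.h>
#include <stdlib.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct resource { unsigned long start, num_pages; };

struct backend_node {
	struct resource base;	/* embedded, returned to the core */
	unsigned int nranges;
	struct { unsigned long start, size; } ranges[];
};

static int backend_alloc(unsigned long num_pages, unsigned int nranges,
			 struct resource **res)
{
	struct backend_node *node;

	node = calloc(1, sizeof(*node) + nranges * sizeof(node->ranges[0]));
	if (!node)
		return -1;

	node->base.num_pages = num_pages;	/* the "ttm_resource_init()" step */
	node->nranges = nranges;
	*res = &node->base;			/* the core only sees the base */
	return 0;
}

static void backend_free(struct resource *res)
{
	/* downcast and drop the one allocation made in backend_alloc() */
	struct backend_node *node = container_of(res, struct backend_node, base);

	free(node);
}

int main(void)
{
	struct resource *res;

	if (backend_alloc(16, 2, &res))
		return 1;
	backend_free(res);
	return 0;
}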