From patchwork Sat Dec 7 16:15:10 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 13898312
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Connor Abbott, Akhil P Oommen, Rob Clark, Abhinav Kumar,
    Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
    Simona Vetter, Konrad Dybcio, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 10/24] drm/msm: drm_gpuvm conversion
Date: Sat, 7 Dec 2024 08:15:10 -0800
Message-ID: <20241207161651.410556-11-robdclark@gmail.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>
References: <20241207161651.410556-1-robdclark@gmail.com>
MIME-Version: 1.0

From: Rob Clark

Now that we've realigned deletion and allocation, switch over to using
drm_gpuvm/drm_gpuva.  This allows us to support multiple VMAs per BO per
VM, so that different parts of a single BO can be mapped at different
virtual addresses, which is a key requirement for sparse/VM_BIND.

This prepares us for using drm_gpuvm to translate a batch of MAP/
MAP_NULL/UNMAP operations from userspace into a sequence of map/remap/
unmap steps for updating the page tables.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/Kconfig              |   1 +
 drivers/gpu/drm/msm/adreno/a2xx_gpu.c    |   3 +-
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c    |   6 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c    |   5 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c  |   7 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c |   5 +-
 drivers/gpu/drm/msm/msm_drv.c            |   1 +
 drivers/gpu/drm/msm/msm_gem.c            | 113 ++++++++++++-------
 drivers/gpu/drm/msm/msm_gem.h            |  87 ++++++++++----
 drivers/gpu/drm/msm/msm_gem_submit.c     |   2 +-
 drivers/gpu/drm/msm/msm_gem_vma.c        | 138 ++++++++++++++++-------
 drivers/gpu/drm/msm/msm_kms.c            |   4 +-
 12 files changed, 256 insertions(+), 116 deletions(-)

diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
index 7ec833b6d829..cdecf745af8d 100644
--- a/drivers/gpu/drm/msm/Kconfig
+++ b/drivers/gpu/drm/msm/Kconfig
@@ -21,6 +21,7 @@ config DRM_MSM
 	select DRM_DISPLAY_HELPER
 	select DRM_BRIDGE_CONNECTOR
 	select DRM_EXEC
+	select DRM_GPUVM
 	select DRM_KMS_HELPER
 	select DRM_PANEL
 	select DRM_BRIDGE
diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 5eb063ed0b46..e0f2e9c77976 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -472,8 +472,7 @@ a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 	struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu);
 	struct msm_gem_vm *vm;
 
-	vm = msm_gem_vm_create(mmu, "gpu", SZ_16M,
-		0xfff * SZ_64K);
+	vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, true);
 
 	if (IS_ERR(vm) && !IS_ERR(mmu))
 		mmu->funcs->destroy(mmu);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 31cceb9eb51a..e278d7564642 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -1270,7 +1270,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
 	return 0;
 }
 
-static int
a6xx_gmu_memory_probe(struct a6xx_gmu *gmu) +static int a6xx_gmu_memory_probe(struct drm_device *drm, struct a6xx_gmu *gmu) { struct msm_mmu *mmu; @@ -1280,7 +1280,7 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu) if (IS_ERR(mmu)) return PTR_ERR(mmu); - gmu->vm = msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000); + gmu->vm = msm_gem_vm_create(drm, mmu, "gmu", 0x0, 0x80000000, true); if (IS_ERR(gmu->vm)) return PTR_ERR(gmu->vm); @@ -1680,7 +1680,7 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node) if (ret) goto err_put_device; - ret = a6xx_gmu_memory_probe(gmu); + ret = a6xx_gmu_memory_probe(adreno_gpu->base.dev, gmu); if (ret) goto err_put_device; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 6b961267614f..9e2721f8aff8 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -2259,9 +2259,8 @@ a6xx_create_private_vm(struct msm_gpu *gpu) if (IS_ERR(mmu)) return ERR_CAST(mmu); - return msm_gem_vm_create(mmu, - "gpu", 0x100000000ULL, - adreno_private_vm_size(gpu)); + return msm_gem_vm_create(gpu->dev, mmu, "gpu", 0x100000000ULL, + adreno_private_vm_size(gpu), true); } static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring) diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index 14ac1900f031..5f82a56f17be 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -224,7 +224,8 @@ adreno_iommu_create_vm(struct msm_gpu *gpu, start = max_t(u64, SZ_16M, geometry->aperture_start); size = geometry->aperture_end - start + 1; - vm = msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size); + vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", start & GENMASK_ULL(48, 0), + size, true); if (IS_ERR(vm) && !IS_ERR(mmu)) mmu->funcs->destroy(mmu); @@ -366,12 +367,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, case MSM_PARAM_VA_START: if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->va_start; + *value = ctx->vm->base.mm_start; return 0; case MSM_PARAM_VA_SIZE: if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->va_size; + *value = ctx->vm->base.mm_range; return 0; case MSM_PARAM_HIGHEST_BANK_BIT: *value = adreno_gpu->ubwc_config.highest_bank_bit; diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c index 3c5f8c3a5059..13176168ade2 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -451,8 +451,9 @@ static int mdp4_kms_init(struct drm_device *dev) "contig buffers for scanout\n"); vm = NULL; } else { - vm = msm_gem_vm_create(mmu, - "mdp4", 0x1000, 0x100000000 - 0x1000); + vm = msm_gem_vm_create(dev, mmu, "mdp4", + 0x1000, 0x100000000 - 0x1000, + true); if (IS_ERR(vm)) { if (!IS_ERR(mmu)) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index a5a95a53d2c8..ab0998c2e846 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -776,6 +776,7 @@ static const struct file_operations fops = { static const struct drm_driver msm_driver = { .driver_features = DRIVER_GEM | + DRIVER_GEM_GPUVA | DRIVER_RENDER | DRIVER_ATOMIC | DRIVER_MODESET | diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 326764026ebb..a8de7b158a37 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ 
b/drivers/gpu/drm/msm/msm_gem.c @@ -43,9 +43,22 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file) return 0; } +static void put_iova_spaces(struct drm_gem_object *obj, bool close); + static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file) { update_ctx_mem(file, -obj->size); + + /* + * TODO we might need to kick this to a queue to avoid blocking + * in CLOSE ioctl + */ + dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false, + msecs_to_jiffies(1000)); + + msm_gem_lock(obj); + put_iova_spaces(obj, true); + msm_gem_unlock(obj); } /* @@ -167,6 +180,13 @@ static void put_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); + /* + * Skip gpuvm in the object free path to avoid a WARN_ON() splat. + * See explanation in msm_gem_assert_locked() + */ + if (kref_read(&obj->refcount)) + drm_gpuvm_bo_gem_evict(obj, true); + if (msm_obj->pages) { if (msm_obj->sgt) { /* For non-cached buffers, ensure the new @@ -334,16 +354,25 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) } static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, - struct msm_gem_vm *vm) + struct msm_gem_vm *vm) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma; + struct drm_gpuvm_bo *vm_bo; msm_gem_assert_locked(obj); - list_for_each_entry(vma, &msm_obj->vmas, list) { - if (vma->vm == vm) - return vma; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + struct drm_gpuva *vma; + + drm_gpuvm_bo_for_each_va (vma, vm_bo) { + if (vma->vm == &vm->base) { + /* lookup_vma() should only be used in paths + * with at most one vma per vm + */ + GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva)); + + return to_msm_vma(vma); + } + } } return NULL; @@ -358,16 +387,19 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, static void put_iova_spaces(struct drm_gem_object *obj, bool close) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma, *tmp; + struct drm_gpuvm_bo *vm_bo, *tmp; msm_gem_assert_locked(obj); - list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { - if (vma->vm) { - msm_gem_vma_purge(vma); + drm_gem_for_each_gpuvm_bo_safe (vm_bo, tmp, obj) { + struct drm_gpuva *vma, *vmatmp; + + drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) { + struct msm_gem_vma *msm_vma = to_msm_vma(vma); + + msm_gem_vma_purge(msm_vma); if (close) - msm_gem_vma_close(vma); + msm_gem_vma_close(msm_vma); } } } @@ -376,13 +408,18 @@ put_iova_spaces(struct drm_gem_object *obj, bool close) static void put_iova_vmas(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma, *tmp; + struct drm_gpuvm_bo *vm_bo, *tmp; msm_gem_assert_locked(obj); - list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { - msm_gem_vma_close(vma); + drm_gem_for_each_gpuvm_bo_safe (vm_bo, tmp, obj) { + struct drm_gpuva *vma, *vmatmp; + + drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) { + struct msm_gem_vma *msm_vma = to_msm_vma(vma); + + msm_gem_vma_close(msm_vma); + } } } @@ -390,7 +427,6 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, struct msm_gem_vm *vm, u64 range_start, u64 range_end) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; msm_gem_assert_locked(obj); @@ -399,12 +435,9 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, if (!vma) { vma = msm_gem_vma_new(vm, obj, range_start, range_end); - if (IS_ERR(vma)) - return vma; - list_add_tail(&vma->list, &msm_obj->vmas); } else { -
GEM_WARN_ON(vma->iova < range_start); - GEM_WARN_ON((vma->iova + obj->size) > range_end); + GEM_WARN_ON(vma->base.va.addr < range_start); + GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end); } return vma; @@ -484,7 +517,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, ret = msm_gem_pin_vma_locked(obj, vma); if (!ret) { - *iova = vma->iova; + *iova = vma->base.va.addr; pin_obj_locked(obj); } @@ -530,7 +563,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj, if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else { - *iova = vma->iova; + *iova = vma->base.va.addr; } msm_gem_unlock(obj); @@ -571,7 +604,7 @@ int msm_gem_set_iova(struct drm_gem_object *obj, vma = get_vma_locked(obj, vm, iova, iova + obj->size); if (IS_ERR(vma)) { ret = PTR_ERR(vma); - } else if (GEM_WARN_ON(vma->iova != iova)) { + } else if (GEM_WARN_ON(vma->base.va.addr != iova)) { clear_iova(obj, vm); ret = -EBUSY; } @@ -861,7 +894,6 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct dma_resv *robj = obj->resv; - struct msm_gem_vma *vma; uint64_t off = drm_vma_node_start(&obj->vma_node); const char *madv; @@ -904,14 +936,17 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, seq_printf(m, " %08zu %9s %-32s\n", obj->size, madv, msm_obj->name); - if (!list_empty(&msm_obj->vmas)) { + if (!list_empty(&obj->gpuva.list)) { + struct drm_gpuvm_bo *vm_bo; seq_puts(m, " vmas:"); - list_for_each_entry(vma, &msm_obj->vmas, list) { - const char *name, *comm; - if (vma->vm) { - struct msm_gem_vm *vm = vma->vm; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + struct drm_gpuva *vma; + + drm_gpuvm_bo_for_each_va (vma, vm_bo) { + const char *name, *comm; + struct msm_gem_vm *vm = to_msm_vm(vma->vm); struct task_struct *task = get_pid_task(vm->pid, PIDTYPE_PID); if (task) { @@ -920,15 +955,14 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, } else { comm = NULL; } - name = vm->name; - } else { - name = comm = NULL; + name = vm->base.name; + + seq_printf(m, " [%s%s%s: vm=%p, %08llx,%smapped]", + name, comm ? ":" : "", comm ? comm : "", + vma->vm, vma->va.addr, + to_msm_vma(vma)->mapped ? "" : "un"); + kfree(comm); } - seq_printf(m, " [%s%s%s: vm=%p, %08llx,%s]", - name, comm ? ":" : "", comm ? comm : "", - vma->vm, vma->iova, - vma->mapped ? "mapped" : "unmapped"); - kfree(comm); } seq_puts(m, "\n"); @@ -1096,7 +1130,6 @@ static int msm_gem_new_impl(struct drm_device *dev, msm_obj->madv = MSM_MADV_WILLNEED; INIT_LIST_HEAD(&msm_obj->node); - INIT_LIST_HEAD(&msm_obj->vmas); *obj = &msm_obj->base; (*obj)->funcs = &msm_gem_object_funcs; diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 9bd78642671c..5091892bbe2e 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -10,6 +10,7 @@ #include #include #include "drm/drm_exec.h" +#include "drm/drm_gpuvm.h" #include "drm/gpu_scheduler.h" #include "msm_drv.h" @@ -22,30 +23,67 @@ #define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash memory */ #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping */ +/** + * struct msm_gem_vm - VM object + * + * A VM object representing a GPU (or display or GMU or ...) virtual address + * space. + * + * In the case of GPU, if per-process address spaces are supported, the address + * space is split into two VMs, which map to TTBR0 and TTBR1 in the SMMU. 
TTBR0 + * is used for userspace objects, and is unique per msm_context/drm_file, while + * TTBR1 is the same for all processes. (The kernel controlled ringbuffer and + * a few other kernel controlled buffers live in TTBR1.) + * + * The GPU TTBR0 vm can be managed by userspace or by the kernel, depending on + * whether userspace supports VM_BIND. All other vm's are managed by the kernel. + * (Managed by kernel means the kernel is responsible for VA allocation.) + * + * Note that because VM_BIND allows a given BO to be mapped multiple times in + * a VM, and therefore have multiple VMA's in a VM, there is an extra object + * provided by drm_gpuvm infrastructure.. the drm_gpuvm_bo, which is not + * embedded in any larger driver structure. The GEM object holds a list of + * drm_gpuvm_bo, which in turn holds a list of msm_gem_vma. A linked vma + * holds a reference to the vm_bo, and drops it when the vma is unlinked. + * So we just need to call drm_gpuvm_bo_obtain() to return a ref to an + * existing vm_bo, or create a new one. Once the vma is linked, the ref + * to the vm_bo can be dropped (since the vma is holding one). + */ struct msm_gem_vm { - const char *name; - /* NOTE: mm managed at the page level, size is in # of pages - * and position mm_node->start is in # of pages: + /** @base: Inherit from drm_gpuvm. */ + struct drm_gpuvm base; + + /** + * @mm: Memory management for kernel managed VA allocations + * + * Only used for kernel managed VMs, unused for user managed VMs. + * + * Protected by @mm_lock. */ struct drm_mm mm; - spinlock_t lock; /* Protects drm_mm node allocation/removal */ + + /** @mm_lock: protects @mm node allocation/removal */ + struct spinlock mm_lock; + + /** @vm_lock: protects gpuvm insert/remove/traverse */ + struct mutex vm_lock; + + /** @mmu: The mmu object which manages the pgtables */ struct msm_mmu *mmu; - struct kref kref; - /* For address spaces associated with a specific process, this + /** + * @pid: For address spaces associated with a specific process, this * will be non-NULL: */ struct pid *pid; - /* @faults: the number of GPU hangs associated with this address space */ + /** @faults: the number of GPU hangs associated with this address space */ int faults; - /** @va_start: lowest possible address to allocate */ - uint64_t va_start; - - /** @va_size: the size of the address space (in bytes) */ - uint64_t va_size; + /** @managed: is this a kernel managed VM? */ + bool managed; }; +#define to_msm_vm(x) container_of(x, struct msm_gem_vm, base) struct msm_gem_vm * msm_gem_vm_get(struct msm_gem_vm *vm); @@ -53,18 +91,31 @@ msm_gem_vm_get(struct msm_gem_vm *vm); void msm_gem_vm_put(struct msm_gem_vm *vm); struct msm_gem_vm * -msm_gem_vm_create(struct msm_mmu *mmu, const char *name, - u64 va_start, u64 size); +msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, + u64 va_start, u64 va_size, bool managed); struct msm_fence_context; +/** + * struct msm_gem_vma - a VMA mapping + * + * Represents a combination of a GEM object plus a VM. + */ struct msm_gem_vma { + /** @base: inherit from drm_gpuva */ + struct drm_gpuva base; + + /** + * @node: mm node for VA allocation + * + * Only used by kernel managed VMs + */ struct drm_mm_node node; - uint64_t iova; - struct msm_gem_vm *vm; - struct list_head list; /* node in msm_gem_object::vmas */ + + /** @mapped: Is this VMA mapped? 
*/ bool mapped; }; +#define to_msm_vma(x) container_of(x, struct msm_gem_vma, base) struct msm_gem_vma * msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, @@ -100,8 +151,6 @@ struct msm_gem_object { struct sg_table *sgt; void *vaddr; - struct list_head vmas; /* list of msm_gem_vma */ - char name[32]; /* Identifier to print for the debugfs files */ /* userspace metadata backchannel */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 235ad4be7fd0..14845768f7af 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -312,7 +312,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit) if (ret) break; - submit->bos[i].iova = vma->iova; + submit->bos[i].iova = vma->base.va.addr; } /* diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index ca29e81d79d2..f4655ae1d71b 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -5,14 +5,13 @@ */ #include "msm_drv.h" -#include "msm_fence.h" #include "msm_gem.h" #include "msm_mmu.h" static void -msm_gem_vm_destroy(struct kref *kref) +msm_gem_vm_free(struct drm_gpuvm *gpuvm) { - struct msm_gem_vm *vm = container_of(kref, struct msm_gem_vm, kref); + struct msm_gem_vm *vm = container_of(gpuvm, struct msm_gem_vm, base); drm_mm_takedown(&vm->mm); if (vm->mmu) @@ -25,14 +24,14 @@ msm_gem_vm_destroy(struct kref *kref) void msm_gem_vm_put(struct msm_gem_vm *vm) { if (vm) - kref_put(&vm->kref, msm_gem_vm_destroy); + drm_gpuvm_put(&vm->base); } struct msm_gem_vm * msm_gem_vm_get(struct msm_gem_vm *vm) { if (!IS_ERR_OR_NULL(vm)) - kref_get(&vm->kref); + drm_gpuvm_get(&vm->base); return vm; } @@ -40,14 +39,14 @@ msm_gem_vm_get(struct msm_gem_vm *vm) /* Actually unmap memory for the vma */ void msm_gem_vma_purge(struct msm_gem_vma *vma) { - struct msm_gem_vm *vm = vma->vm; - unsigned size = vma->node.size; + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); + unsigned size = vma->base.va.range; /* Don't do anything if the memory isn't mapped */ if (!vma->mapped) return; - vm->mmu->funcs->unmap(vm->mmu, vma->iova, size); + vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size); vma->mapped = false; } @@ -57,10 +56,10 @@ int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_vm *vm = vma->vm; + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); int ret; - if (GEM_WARN_ON(!vma->iova)) + if (GEM_WARN_ON(!vma->base.va.addr)) return -EINVAL; if (vma->mapped) @@ -68,9 +67,6 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, vma->mapped = true; - if (!vm) - return 0; - /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold * a lock across map/unmap which is also used in the job_run() @@ -80,7 +76,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret = vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot); + ret = vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot); if (ret) { vma->mapped = false; @@ -92,19 +88,20 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, /* Close an iova. 
Warn if it is still in use */ void msm_gem_vma_close(struct msm_gem_vma *vma) { - struct msm_gem_vm *vm = vma->vm; + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); GEM_WARN_ON(vma->mapped); - spin_lock(&vm->lock); - if (vma->iova) + spin_lock(&vm->mm_lock); + if (vma->base.va.addr) drm_mm_remove_node(&vma->node); - spin_unlock(&vm->lock); + spin_unlock(&vm->mm_lock); - vma->iova = 0; - list_del(&vma->list); + mutex_lock(&vm->vm_lock); + drm_gpuva_remove(&vma->base); + drm_gpuva_unlink(&vma->base); + mutex_unlock(&vm->vm_lock); - msm_gem_vm_put(vm); kfree(vma); } @@ -113,6 +110,7 @@ struct msm_gem_vma * msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, u64 range_start, u64 range_end) { + struct drm_gpuvm_bo *vm_bo; struct msm_gem_vma *vma; int ret; @@ -120,36 +118,81 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, if (!vma) return ERR_PTR(-ENOMEM); - vma->vm = vm; + if (vm->managed) { + spin_lock(&vm->mm_lock); + ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, + obj->size, PAGE_SIZE, 0, + range_start, range_end, 0); + spin_unlock(&vm->mm_lock); - spin_lock(&vm->lock); - ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, - obj->size, PAGE_SIZE, 0, - range_start, range_end, 0); - spin_unlock(&vm->lock); + if (ret) + goto err_free_vma; - if (ret) - goto err_free_vma; + range_start = vma->node.start; + range_end = range_start + obj->size; + } - vma->iova = vma->node.start; + GEM_WARN_ON((range_end - range_start) > obj->size); + + drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0); vma->mapped = false; - INIT_LIST_HEAD(&vma->list); + mutex_lock(&vm->vm_lock); + ret = drm_gpuva_insert(&vm->base, &vma->base); + mutex_unlock(&vm->vm_lock); + if (ret) + goto err_free_range; - kref_get(&vm->kref); + vm_bo = drm_gpuvm_bo_obtain(&vm->base, obj); + if (IS_ERR(vm_bo)) { + ret = PTR_ERR(vm_bo); + goto err_va_remove; + } + + mutex_lock(&vm->vm_lock); + drm_gpuva_link(&vma->base, vm_bo); + mutex_unlock(&vm->vm_lock); + GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo)); return vma; +err_va_remove: + mutex_lock(&vm->vm_lock); + drm_gpuva_remove(&vma->base); + mutex_unlock(&vm->vm_lock); +err_free_range: + if (vm->managed) + drm_mm_remove_node(&vma->node); err_free_vma: kfree(vma); return ERR_PTR(ret); } +static const struct drm_gpuvm_ops msm_gpuvm_ops = { + .vm_free = msm_gem_vm_free, +}; + +/** + * msm_gem_vm_create() - Create and initialize a &msm_gem_vm + * @drm: the drm device + * @mmu: the backing MMU objects handling mapping/unmapping + * @name: the name of the VM + * @va_start: the start offset of the GPU VA space + * @va_size: the size of the GPU VA space + * @managed: is it a kernel managed VM? + * + * In a kernel managed VM, the kernel handles address allocation, and only + * synchronous operations are supported. In a user managed VM, userspace + * handles virtual address allocation, and both async and sync operations + * are supported. 
+ */ struct msm_gem_vm * -msm_gem_vm_create(struct msm_mmu *mmu, const char *name, - u64 va_start, u64 size) +msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, + u64 va_start, u64 va_size, bool managed) { struct msm_gem_vm *vm; + struct drm_gem_object *dummy_gem; + int ret = 0; if (IS_ERR(mmu)) return ERR_CAST(mmu); @@ -158,15 +201,28 @@ msm_gem_vm_create(struct msm_mmu *mmu, const char *name, if (!vm) return ERR_PTR(-ENOMEM); - spin_lock_init(&vm->lock); - vm->name = name; - vm->mmu = mmu; - vm->va_start = va_start; - vm->va_size = size; + dummy_gem = drm_gpuvm_resv_object_alloc(drm); + if (!dummy_gem) { + ret = -ENOMEM; + goto err_free_vm; + } + + drm_gpuvm_init(&vm->base, name, 0, drm, dummy_gem, + va_start, va_size, 0, 0, &msm_gpuvm_ops); + drm_gem_object_put(dummy_gem); + + spin_lock_init(&vm->mm_lock); + mutex_init(&vm->vm_lock); - drm_mm_init(&vm->mm, va_start, size); + vm->mmu = mmu; + vm->managed = managed; - kref_init(&vm->kref); + drm_mm_init(&vm->mm, va_start, va_size); return vm; + +err_free_vm: + kfree(vm); + return ERR_PTR(ret); + } diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index 3649276ea1b2..4e90efaad714 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -190,8 +190,8 @@ struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) return NULL; } - vm = msm_gem_vm_create(mmu, "mdp_kms", - 0x1000, 0x100000000 - 0x1000); + vm = msm_gem_vm_create(dev, mmu, "mdp_kms", + 0x1000, 0x100000000 - 0x1000, true); if (IS_ERR(vm)) { dev_err(mdp_dev, "vm create, error %pe\n", vm); mmu->funcs->destroy(mmu);
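
Below is a minimal usage sketch of the API introduced by this patch; it is
illustrative only and not part of the series itself. The example_* function
names, the "example" VM name, and the SZ_16M/SZ_4G range are made up for
illustration; everything else (the new msm_gem_vm_create() signature,
to_msm_vm()/to_msm_vma(), vm->base.mm_start/mm_range, vm->base.name, and
msm_vma->mapped) comes from the changes above.

#include <linux/err.h>
#include <linux/sizes.h>
#include <drm/drm_print.h>

#include "msm_drv.h"
#include "msm_gem.h"
#include "msm_mmu.h"

/* Create a kernel-managed VM: the kernel allocates VA out of vm->mm. */
static struct msm_gem_vm *
example_create_kernel_managed_vm(struct drm_device *drm, struct msm_mmu *mmu)
{
	struct msm_gem_vm *vm;

	vm = msm_gem_vm_create(drm, mmu, "example", SZ_16M, SZ_4G - SZ_16M, true);
	if (IS_ERR(vm))
		return vm;

	/* The VA range now lives in the embedded drm_gpuvm, not va_start/va_size: */
	drm_info(drm, "VA space: 0x%llx + 0x%llx\n",
		 vm->base.mm_start, vm->base.mm_range);

	return vm;
}

/* Recover the driver wrappers from a generic drm_gpuva. */
static void example_describe_vma(struct drm_gpuva *vma)
{
	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
	struct msm_gem_vm *vm = to_msm_vm(vma->vm);

	pr_debug("%s: 0x%llx (%smapped)\n", vm->base.name,
		 vma->va.addr, msm_vma->mapped ? "" : "un");
}

A VM created with managed=true keeps VA allocation in the drm_mm embedded in
msm_gem_vm and supports only synchronous operations, while a user-managed VM
(managed=false) would let userspace pick virtual addresses via the upcoming
VM_BIND path, as described in the msm_gem_vm_create() kerneldoc above.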