From patchwork Fri Nov 22 20:57:27 2019
X-Patchwork-Submitter: Niranjana Vishwanathapura
X-Patchwork-Id: 11258441
From: Niranjana Vishwanathapura
To: intel-gfx@lists.freedesktop.org
Cc: sanjay.k.kumar@intel.com, sudeep.dutt@intel.com,
    dri-devel@lists.freedesktop.org, dave.hansen@intel.com,
    jglisse@redhat.com, jon.bloomfield@intel.com, daniel.vetter@intel.com,
    dan.j.williams@intel.com, ira.weiny@intel.com, jgg@mellanox.com
Subject: [RFC 06/13] drm/i915/svm: Page table mirroring support
Date: Fri, 22 Nov 2019 12:57:27 -0800
Message-Id: <20191122205734.15925-7-niranjana.vishwanathapura@intel.com>
X-Mailer: git-send-email 2.21.0.rc0.32.g243a4c7e27
In-Reply-To: <20191122205734.15925-1-niranjana.vishwanathapura@intel.com>
References: <20191122205734.15925-1-niranjana.vishwanathapura@intel.com>

Use HMM page table mirroring support to build the device page table.
Implement the bind ioctl and bind the process address range in the
specified context's ppgtt. Handle invalidation notifications by
unbinding the address range.
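
For illustration, a minimal userspace sketch of the intended flow. It
assumes the bind uapi introduced earlier in this series (struct
drm_i915_bind with start/length/vm_id/type/flags, I915_BIND_SVM_BUFFER,
I915_BIND_READONLY, I915_BIND_UNBIND); the DRM_IOCTL_I915_BIND name and
the helper below are illustrative assumptions, not part of this patch:

        #include <stdint.h>
        #include <string.h>
        #include <sys/ioctl.h>

        #include <drm/i915_drm.h>

        /* Hypothetical helper: bind malloc'ed memory into a ppgtt via SVM. */
        static int svm_bind_buffer(int drm_fd, uint32_t vm_id, void *ptr,
                                   uint64_t size)
        {
                struct drm_i915_bind bind;

                memset(&bind, 0, sizeof(bind));
                bind.vm_id = vm_id;                /* target ppgtt */
                bind.start = (uintptr_t)ptr;       /* CPU virtual address */
                bind.length = size;
                bind.type = I915_BIND_SVM_BUFFER;  /* no GEM object backing */
                bind.flags = 0;                    /* or I915_BIND_READONLY */

                /* Unbind later by setting bind.flags = I915_BIND_UNBIND. */
                return ioctl(drm_fd, DRM_IOCTL_I915_BIND, &bind);
        }

On the kernel side this path lands in i915_svm_bind() below, which
faults the range through HMM, builds an sg list from the resulting
PFNs and commits it into the ppgtt, retrying if the range is
invalidated concurrently.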
Cc: Joonas Lahtinen
Cc: Jon Bloomfield
Cc: Daniel Vetter
Cc: Sudeep Dutt
Signed-off-by: Niranjana Vishwanathapura
---
 drivers/gpu/drm/i915/Kconfig        |   3 +
 drivers/gpu/drm/i915/Makefile       |   3 +-
 drivers/gpu/drm/i915/i915_drv.c     |   3 +
 drivers/gpu/drm/i915/i915_gem_gtt.c |   5 +
 drivers/gpu/drm/i915/i915_gem_gtt.h |   4 +
 drivers/gpu/drm/i915/i915_svm.c     | 296 ++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_svm.h     |  50 +++++
 7 files changed, 363 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/i915/i915_svm.c
 create mode 100644 drivers/gpu/drm/i915/i915_svm.h

diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig
index c2e48710eec8..689e57fe3973 100644
--- a/drivers/gpu/drm/i915/Kconfig
+++ b/drivers/gpu/drm/i915/Kconfig
@@ -141,6 +141,9 @@ config DRM_I915_SVM
         bool "Enable Shared Virtual Memory support in i915"
         depends on STAGING
         depends on DRM_I915
+        depends on MMU
+        select HMM_MIRROR
+        select MMU_NOTIFIER
         default n
         help
           Choose this option if you want Shared Virtual Memory (SVM)
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 75fe45633779..7d4cd9eefd12 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -154,7 +154,8 @@ i915-y += \
           intel_wopcm.o
 
 # SVM code
-i915-$(CONFIG_DRM_I915_SVM) += gem/i915_gem_svm.o
+i915-$(CONFIG_DRM_I915_SVM) += gem/i915_gem_svm.o \
+                               i915_svm.o
 
 # general-purpose microcontroller (GuC) support
 obj-y += gt/uc/
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 9c525d3f694c..c190df614c48 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -73,6 +73,7 @@
 #include "i915_perf.h"
 #include "i915_query.h"
 #include "i915_suspend.h"
+#include "i915_svm.h"
 #include "i915_switcheroo.h"
 #include "i915_sysfs.h"
 #include "i915_trace.h"
@@ -2700,6 +2701,8 @@ static int i915_bind_ioctl(struct drm_device *dev, void *data,
                 return -ENOENT;
 
         switch (args->type) {
+        case I915_BIND_SVM_BUFFER:
+                return i915_svm_bind(vm, args);
         case I915_BIND_SVM_GEM_OBJ:
                 ret = i915_gem_svm_bind(vm, args, file);
         }
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index d9e95229ba1d..08cbbb4d91cb 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -42,6 +42,7 @@
 
 #include "i915_drv.h"
 #include "i915_scatterlist.h"
+#include "i915_svm.h"
 #include "i915_trace.h"
 #include "i915_vgpu.h"
 
@@ -550,6 +551,7 @@ static void i915_address_space_fini(struct i915_address_space *vm)
         drm_mm_takedown(&vm->mm);
 
         mutex_destroy(&vm->mutex);
+        mutex_destroy(&vm->svm_mutex);
 }
 
 void __i915_vm_close(struct i915_address_space *vm)
@@ -593,6 +595,7 @@ void i915_vm_release(struct kref *kref)
 
         GEM_BUG_ON(i915_is_ggtt(vm));
         trace_i915_ppgtt_release(vm);
+        i915_svm_unbind_mm(vm);
 
         queue_rcu_work(vm->i915->wq, &vm->rcu);
 }
@@ -608,6 +611,7 @@ static void i915_address_space_init(struct i915_address_space *vm, int subclass)
          * attempt holding the lock is immediately reported by lockdep.
          */
         mutex_init(&vm->mutex);
+        mutex_init(&vm->svm_mutex);
         lockdep_set_subclass(&vm->mutex, subclass);
         i915_gem_shrinker_taints_mutex(vm->i915, &vm->mutex);
 
@@ -619,6 +623,7 @@ static void i915_address_space_init(struct i915_address_space *vm, int subclass)
 
         INIT_LIST_HEAD(&vm->bound_list);
         INIT_LIST_HEAD(&vm->svm_list);
+        RCU_INIT_POINTER(vm->svm, NULL);
 }
 
 static int __setup_page_dma(struct i915_address_space *vm,
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 6a8d55490e39..722b1e7ce291 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -293,6 +293,8 @@ struct i915_svm_obj {
         u64 offset;
 };
 
+struct i915_svm;
+
 struct i915_address_space {
         struct kref ref;
         struct rcu_work rcu;
@@ -342,6 +344,8 @@ struct i915_address_space {
          */
         struct list_head svm_list;
         unsigned int svm_count;
+        struct i915_svm *svm;
+        struct mutex svm_mutex; /* protects svm enabling */
 
         struct pagestash free_pages;
 
diff --git a/drivers/gpu/drm/i915/i915_svm.c b/drivers/gpu/drm/i915/i915_svm.c
new file mode 100644
index 000000000000..4e414f257121
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_svm.c
@@ -0,0 +1,296 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#include
+#include
+
+#include "i915_svm.h"
+#include "intel_memory_region.h"
+#include "gem/i915_gem_context.h"
+
+static const u64 i915_range_flags[HMM_PFN_FLAG_MAX] = {
+        [HMM_PFN_VALID] = (1 << 0), /* HMM_PFN_VALID */
+        [HMM_PFN_WRITE] = (1 << 1), /* HMM_PFN_WRITE */
+        [HMM_PFN_DEVICE_PRIVATE] = (1 << 2) /* HMM_PFN_DEVICE_PRIVATE */
+};
+
+static const u64 i915_range_values[HMM_PFN_VALUE_MAX] = {
+        [HMM_PFN_ERROR] = 0xfffffffffffffffeUL, /* HMM_PFN_ERROR */
+        [HMM_PFN_NONE] = 0, /* HMM_PFN_NONE */
+        [HMM_PFN_SPECIAL] = 0xfffffffffffffffcUL /* HMM_PFN_SPECIAL */
+};
+
+static inline bool
+i915_range_done(struct hmm_range *range)
+{
+        bool ret = hmm_range_valid(range);
+
+        hmm_range_unregister(range);
+        return ret;
+}
+
+static int
+i915_range_fault(struct i915_svm *svm, struct hmm_range *range)
+{
+        long ret;
+
+        range->default_flags = 0;
+        range->pfn_flags_mask = -1UL;
+
+        ret = hmm_range_register(range, &svm->mirror);
+        if (ret) {
+                up_read(&svm->mm->mmap_sem);
+                return (int)ret;
+        }
+
+        if (!hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT)) {
+                up_read(&svm->mm->mmap_sem);
+                return -EBUSY;
+        }
+
+        ret = hmm_range_fault(range, 0);
+        if (ret <= 0) {
+                if (ret == 0)
+                        ret = -EBUSY;
+                up_read(&svm->mm->mmap_sem);
+                hmm_range_unregister(range);
+                return ret;
+        }
+        return 0;
+}
+
+static struct i915_svm *vm_get_svm(struct i915_address_space *vm)
+{
+        struct i915_svm *svm = vm->svm;
+
+        mutex_lock(&vm->svm_mutex);
+        if (svm && !kref_get_unless_zero(&svm->ref))
+                svm = NULL;
+
+        mutex_unlock(&vm->svm_mutex);
+        return svm;
+}
+
+static void release_svm(struct kref *ref)
+{
+        struct i915_svm *svm = container_of(ref, typeof(*svm), ref);
+        struct i915_address_space *vm = svm->vm;
+
+        hmm_mirror_unregister(&svm->mirror);
+        mutex_destroy(&svm->mutex);
+        vm->svm = NULL;
+        kfree(svm);
+}
+
+static void vm_put_svm(struct i915_address_space *vm)
+{
+        mutex_lock(&vm->svm_mutex);
+        if (vm->svm)
+                kref_put(&vm->svm->ref, release_svm);
+        mutex_unlock(&vm->svm_mutex);
+}
+
+static u32 i915_svm_build_sg(struct i915_address_space *vm,
+                             struct hmm_range *range,
+                             struct sg_table *st)
+{
+        struct scatterlist *sg;
+        u32 sg_page_sizes = 0;
+        u64 i, npages;
+
+        sg = NULL;
+        st->nents = 0;
+        npages = (range->end - range->start) / PAGE_SIZE;
+
+        /*
+         * No need to dma map the host pages and later unmap it, as
+         * GPU is not allowed to access it with SVM. Hence, no need
+         * of any intermediate data structure to hold the mappings.
+         */
+        for (i = 0; i < npages; i++) {
+                u64 addr = range->pfns[i] & ~((1UL << range->pfn_shift) - 1);
+
+                if (sg && (addr == (sg_dma_address(sg) + sg->length))) {
+                        sg->length += PAGE_SIZE;
+                        sg_dma_len(sg) += PAGE_SIZE;
+                        continue;
+                }
+
+                if (sg)
+                        sg_page_sizes |= sg->length;
+
+                sg = sg ? __sg_next(sg) : st->sgl;
+                sg_dma_address(sg) = addr;
+                sg_dma_len(sg) = PAGE_SIZE;
+                sg->length = PAGE_SIZE;
+                st->nents++;
+        }
+
+        sg_page_sizes |= sg->length;
+        sg_mark_end(sg);
+        return sg_page_sizes;
+}
+
+static void i915_svm_unbind_addr(struct i915_address_space *vm,
+                                 u64 start, u64 length)
+{
+        struct i915_svm *svm = vm->svm;
+
+        mutex_lock(&svm->mutex);
+        /* FIXME: Need to flush the TLB */
+        svm_unbind_addr(vm, start, length);
+        mutex_unlock(&svm->mutex);
+}
+
+int i915_svm_bind(struct i915_address_space *vm, struct drm_i915_bind *args)
+{
+        struct vm_area_struct *vma;
+        struct hmm_range range;
+        struct i915_svm *svm;
+        u64 i, flags, npages;
+        struct sg_table st;
+        u32 sg_page_sizes;
+        int ret = 0;
+
+        svm = vm_get_svm(vm);
+        if (!svm)
+                return -EINVAL;
+
+        args->length += (args->start & ~PAGE_MASK);
+        args->start &= PAGE_MASK;
+        DRM_DEBUG_DRIVER("%sing start 0x%llx length 0x%llx vm_id 0x%x\n",
+                         (args->flags & I915_BIND_UNBIND) ? "Unbind" : "Bind",
+                         args->start, args->length, args->vm_id);
+        if (args->flags & I915_BIND_UNBIND) {
+                i915_svm_unbind_addr(vm, args->start, args->length);
+                goto bind_done;
+        }
+
+        npages = args->length / PAGE_SIZE;
+        if (unlikely(sg_alloc_table(&st, npages, GFP_KERNEL))) {
+                ret = -ENOMEM;
+                goto bind_done;
+        }
+
+        range.pfns = kvmalloc_array(npages, sizeof(uint64_t), GFP_KERNEL);
+        if (unlikely(!range.pfns)) {
+                ret = -ENOMEM;
+                goto range_done;
+        }
+
+        ret = svm_bind_addr_prepare(vm, args->start, args->length);
+        if (unlikely(ret))
+                goto prepare_done;
+
+        range.pfn_shift = PAGE_SHIFT;
+        range.start = args->start;
+        range.flags = i915_range_flags;
+        range.values = i915_range_values;
+        range.end = range.start + args->length;
+        for (i = 0; i < npages; ++i) {
+                range.pfns[i] = range.flags[HMM_PFN_VALID];
+                if (!(args->flags & I915_BIND_READONLY))
+                        range.pfns[i] |= range.flags[HMM_PFN_WRITE];
+        }
+
+        down_read(&svm->mm->mmap_sem);
+        vma = find_vma(svm->mm, range.start);
+        if (unlikely(!vma || vma->vm_end < range.end)) {
+                ret = -EFAULT;
+                goto vma_done;
+        }
+again:
+        ret = i915_range_fault(svm, &range);
+        if (unlikely(ret))
+                goto vma_done;
+
+        sg_page_sizes = i915_svm_build_sg(vm, &range, &st);
+
+        mutex_lock(&svm->mutex);
+        if (!i915_range_done(&range)) {
+                mutex_unlock(&svm->mutex);
+                goto again;
+        }
+
+        flags = (args->flags & I915_BIND_READONLY) ?
+                I915_GTT_SVM_READONLY : 0;
+        ret = svm_bind_addr_commit(vm, args->start, args->length,
+                                   flags, &st, sg_page_sizes);
+        mutex_unlock(&svm->mutex);
+vma_done:
+        up_read(&svm->mm->mmap_sem);
+        if (unlikely(ret))
+                i915_svm_unbind_addr(vm, args->start, args->length);
+prepare_done:
+        kvfree(range.pfns);
+range_done:
+        sg_free_table(&st);
+bind_done:
+        vm_put_svm(vm);
+        return ret;
+}
+
+static int
+i915_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
+                                const struct mmu_notifier_range *update)
+{
+        struct i915_svm *svm = container_of(mirror, typeof(*svm), mirror);
+        unsigned long length = update->end - update->start;
+
+        DRM_DEBUG_DRIVER("start 0x%lx length 0x%lx\n", update->start, length);
+        if (!mmu_notifier_range_blockable(update))
+                return -EAGAIN;
+
+        i915_svm_unbind_addr(svm->vm, update->start, length);
+        return 0;
+}
+
+static void i915_mirror_release(struct hmm_mirror *mirror)
+{
+}
+
+static const struct hmm_mirror_ops i915_mirror_ops = {
+        .sync_cpu_device_pagetables = &i915_sync_cpu_device_pagetables,
+        .release = &i915_mirror_release,
+};
+
+void i915_svm_unbind_mm(struct i915_address_space *vm)
+{
+        vm_put_svm(vm);
+}
+
+int i915_svm_bind_mm(struct i915_address_space *vm)
+{
+        struct i915_svm *svm;
+        struct mm_struct *mm;
+        int ret = 0;
+
+        mm = get_task_mm(current);
+        down_write(&mm->mmap_sem);
+        mutex_lock(&vm->svm_mutex);
+        if (vm->svm)
+                goto bind_out;
+
+        svm = kzalloc(sizeof(*svm), GFP_KERNEL);
+        if (!svm) {
+                ret = -ENOMEM;
+                goto bind_out;
+        }
+        svm->mirror.ops = &i915_mirror_ops;
+        mutex_init(&svm->mutex);
+        kref_init(&svm->ref);
+        svm->mm = mm;
+        svm->vm = vm;
+
+        ret = hmm_mirror_register(&svm->mirror, mm);
+        if (ret)
+                goto bind_out;
+
+        vm->svm = svm;
+bind_out:
+        mutex_unlock(&vm->svm_mutex);
+        up_write(&mm->mmap_sem);
+        mmput(mm);
+        return ret;
+}
diff --git a/drivers/gpu/drm/i915/i915_svm.h b/drivers/gpu/drm/i915/i915_svm.h
new file mode 100644
index 000000000000..f176f1dc493f
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_svm.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#ifndef __I915_SVM_H
+#define __I915_SVM_H
+
+#include
+
+#include "i915_drv.h"
+
+#if defined(CONFIG_DRM_I915_SVM)
+struct i915_svm {
+        /* i915 address space */
+        struct i915_address_space *vm;
+
+        /* Associated process mm */
+        struct mm_struct *mm;
+
+        /* hmm_mirror to track memory invalidations for a process */
+        struct hmm_mirror mirror;
+
+        struct mutex mutex; /* protects svm operations */
+        struct kref ref;
+};
+
+int i915_svm_bind(struct i915_address_space *vm, struct drm_i915_bind *args);
+void i915_svm_unbind_mm(struct i915_address_space *vm);
+int i915_svm_bind_mm(struct i915_address_space *vm);
+static inline bool i915_vm_is_svm_enabled(struct i915_address_space *vm)
+{
+        return vm->svm;
+}
+
+#else
+
+struct i915_svm { };
+static inline int i915_svm_bind(struct i915_address_space *vm,
+                                struct drm_i915_bind *args)
+{ return -ENOTSUPP; }
+static inline void i915_svm_unbind_mm(struct i915_address_space *vm) { }
+static inline int i915_svm_bind_mm(struct i915_address_space *vm)
+{ return -ENOTSUPP; }
+static inline bool i915_vm_is_svm_enabled(struct i915_address_space *vm)
+{ return false; }
+
+#endif
+
+#endif /* __I915_SVM_H */