From patchwork Tue Jun 30 19:57:33 2020
From: Ralph Campbell <rcampbell@nvidia.com>
Subject: [PATCH v2 1/5] nouveau/hmm: fault one page at a time
Date: Tue, 30 Jun 2020 12:57:33 -0700
Message-ID: <20200630195737.8667-2-rcampbell@nvidia.com>
In-Reply-To: <20200630195737.8667-1-rcampbell@nvidia.com>

The SVM page fault handler groups faults into a range of contiguous
virtual addresses and requests hmm_range_fault() to populate and
return the page frame numbers of system memory mapped by the CPU.
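A minimal sketch of the calling pattern this change moves to: one page per
hmm_range_fault() call, with the request bits carried in default_flags so
that a retry after -EBUSY needs no per-entry reinitialization. The helper
name is illustrative, and the mm refcounting, notifier sequence validation
and timeout handling done by the real driver code are omitted here.

#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

/* Illustrative only: fault a single page and return its PFN/flags. */
static int fault_one_page(struct mmu_interval_notifier *mni,
			  unsigned long addr, bool need_write,
			  unsigned long *hmm_pfn)
{
	struct hmm_range range = {
		.notifier = mni,
		.start = addr,
		.end = addr + PAGE_SIZE,
		.hmm_pfns = hmm_pfn,
		.default_flags = HMM_PFN_REQ_FAULT |
				 (need_write ? HMM_PFN_REQ_WRITE : 0),
	};
	int ret;

	do {
		range.notifier_seq = mmu_interval_read_begin(mni);
		mmap_read_lock(mni->mm);
		ret = hmm_range_fault(&range);
		mmap_read_unlock(mni->mm);
	} while (ret == -EBUSY);	/* real code also bounds the retries */

	return ret;
}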
In preparation for supporting large pages to be mapped by the GPU, process faults one page at a time. In addition, use the hmm_range default_flags to fix a corner case where the input hmm_pfns array is not reinitialized after hmm_range_fault() returns -EBUSY and must be called again. Signed-off-by: Ralph Campbell --- drivers/gpu/drm/nouveau/nouveau_svm.c | 199 +++++++++----------------- 1 file changed, 66 insertions(+), 133 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c index ba9f9359c30e..665dede69bd1 100644 --- a/drivers/gpu/drm/nouveau/nouveau_svm.c +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c @@ -516,7 +516,7 @@ static const struct mmu_interval_notifier_ops nouveau_svm_mni_ops = { static void nouveau_hmm_convert_pfn(struct nouveau_drm *drm, struct hmm_range *range, u64 *ioctl_addr) { - unsigned long i, npages; + struct page *page; /* * The ioctl_addr prepared here is passed through nvif_object_ioctl() @@ -525,42 +525,38 @@ static void nouveau_hmm_convert_pfn(struct nouveau_drm *drm, * This is all just encoding the internal hmm representation into a * different nouveau internal representation. */ - npages = (range->end - range->start) >> PAGE_SHIFT; - for (i = 0; i < npages; ++i) { - struct page *page; - - if (!(range->hmm_pfns[i] & HMM_PFN_VALID)) { - ioctl_addr[i] = 0; - continue; - } - - page = hmm_pfn_to_page(range->hmm_pfns[i]); - if (is_device_private_page(page)) - ioctl_addr[i] = nouveau_dmem_page_addr(page) | - NVIF_VMM_PFNMAP_V0_V | - NVIF_VMM_PFNMAP_V0_VRAM; - else - ioctl_addr[i] = page_to_phys(page) | - NVIF_VMM_PFNMAP_V0_V | - NVIF_VMM_PFNMAP_V0_HOST; - if (range->hmm_pfns[i] & HMM_PFN_WRITE) - ioctl_addr[i] |= NVIF_VMM_PFNMAP_V0_W; + if (!(range->hmm_pfns[0] & HMM_PFN_VALID)) { + ioctl_addr[0] = 0; + return; } + + page = hmm_pfn_to_page(range->hmm_pfns[0]); + if (is_device_private_page(page)) + ioctl_addr[0] = nouveau_dmem_page_addr(page) | + NVIF_VMM_PFNMAP_V0_V | + NVIF_VMM_PFNMAP_V0_VRAM; + else + ioctl_addr[0] = page_to_phys(page) | + NVIF_VMM_PFNMAP_V0_V | + NVIF_VMM_PFNMAP_V0_HOST; + if (range->hmm_pfns[0] & HMM_PFN_WRITE) + ioctl_addr[0] |= NVIF_VMM_PFNMAP_V0_W; } static int nouveau_range_fault(struct nouveau_svmm *svmm, struct nouveau_drm *drm, void *data, u32 size, - unsigned long hmm_pfns[], u64 *ioctl_addr, + u64 *ioctl_addr, unsigned long hmm_flags, struct svm_notifier *notifier) { unsigned long timeout = jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT); /* Have HMM fault pages within the fault window to the GPU. 
*/ + unsigned long hmm_pfns[1]; struct hmm_range range = { .notifier = ¬ifier->notifier, .start = notifier->notifier.interval_tree.start, .end = notifier->notifier.interval_tree.last + 1, - .pfn_flags_mask = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE, + .default_flags = hmm_flags, .hmm_pfns = hmm_pfns, }; struct mm_struct *mm = notifier->notifier.mm; @@ -575,11 +571,6 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm, ret = hmm_range_fault(&range); mmap_read_unlock(mm); if (ret) { - /* - * FIXME: the input PFN_REQ flags are destroyed on - * -EBUSY, we need to regenerate them, also for the - * other continue below - */ if (ret == -EBUSY) continue; return ret; @@ -614,17 +605,12 @@ nouveau_svm_fault(struct nvif_notify *notify) struct nvif_object *device = &svm->drm->client.device.object; struct nouveau_svmm *svmm; struct { - struct { - struct nvif_ioctl_v0 i; - struct nvif_ioctl_mthd_v0 m; - struct nvif_vmm_pfnmap_v0 p; - } i; - u64 phys[16]; + struct nouveau_pfnmap_args i; + u64 phys[1]; } args; - unsigned long hmm_pfns[ARRAY_SIZE(args.phys)]; - struct vm_area_struct *vma; + unsigned long hmm_flags; u64 inst, start, limit; - int fi, fn, pi, fill; + int fi, fn; int replay = 0, ret; /* Parse available fault buffer entries into a cache, and update @@ -691,66 +677,53 @@ nouveau_svm_fault(struct nvif_notify *notify) * window into a single update. */ start = buffer->fault[fi]->addr; - limit = start + (ARRAY_SIZE(args.phys) << PAGE_SHIFT); + limit = start + PAGE_SIZE; if (start < svmm->unmanaged.limit) limit = min_t(u64, limit, svmm->unmanaged.start); - SVMM_DBG(svmm, "wndw %016llx-%016llx", start, limit); - mm = svmm->notifier.mm; - if (!mmget_not_zero(mm)) { - nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]); - continue; - } - - /* Intersect fault window with the CPU VMA, cancelling - * the fault if the address is invalid. + /* + * Prepare the GPU-side update of all pages within the + * fault window, determining required pages and access + * permissions based on pending faults. */ - mmap_read_lock(mm); - vma = find_vma_intersection(mm, start, limit); - if (!vma) { - SVMM_ERR(svmm, "wndw %016llx-%016llx", start, limit); - mmap_read_unlock(mm); - mmput(mm); - nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]); - continue; + args.i.p.addr = start; + args.i.p.page = PAGE_SHIFT; + args.i.p.size = PAGE_SIZE; + /* + * Determine required permissions based on GPU fault + * access flags. + * XXX: atomic? + */ + switch (buffer->fault[fi]->access) { + case 0: /* READ. */ + hmm_flags = HMM_PFN_REQ_FAULT; + break; + case 3: /* PREFETCH. */ + hmm_flags = 0; + break; + default: + hmm_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE; + break; } - start = max_t(u64, start, vma->vm_start); - limit = min_t(u64, limit, vma->vm_end); - mmap_read_unlock(mm); - SVMM_DBG(svmm, "wndw %016llx-%016llx", start, limit); - if (buffer->fault[fi]->addr != start) { - SVMM_ERR(svmm, "addr %016llx", buffer->fault[fi]->addr); - mmput(mm); + mm = svmm->notifier.mm; + if (!mmget_not_zero(mm)) { nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]); continue; } - /* Prepare the GPU-side update of all pages within the - * fault window, determining required pages and access - * permissions based on pending faults. - */ - args.i.p.page = PAGE_SHIFT; - args.i.p.addr = start; - for (fn = fi, pi = 0;;) { - /* Determine required permissions based on GPU fault - * access flags. - *XXX: atomic? - */ - switch (buffer->fault[fn]->access) { - case 0: /* READ. */ - hmm_pfns[pi++] = HMM_PFN_REQ_FAULT; - break; - case 3: /* PREFETCH. 
*/ - hmm_pfns[pi++] = 0; - break; - default: - hmm_pfns[pi++] = HMM_PFN_REQ_FAULT | - HMM_PFN_REQ_WRITE; - break; - } - args.i.p.size = pi << PAGE_SHIFT; + notifier.svmm = svmm; + ret = mmu_interval_notifier_insert(¬ifier.notifier, mm, + args.i.p.addr, args.i.p.size, + &nouveau_svm_mni_ops); + if (!ret) { + ret = nouveau_range_fault(svmm, svm->drm, &args, + sizeof(args), args.phys, hmm_flags, ¬ifier); + mmu_interval_notifier_remove(¬ifier.notifier); + } + mmput(mm); + for (fn = fi; ++fn < buffer->fault_nr; ) { /* It's okay to skip over duplicate addresses from the * same SVMM as faults are ordered by access type such * that only the first one needs to be handled. @@ -758,61 +731,21 @@ nouveau_svm_fault(struct nvif_notify *notify) * ie. WRITE faults appear first, thus any handling of * pending READ faults will already be satisfied. */ - while (++fn < buffer->fault_nr && - buffer->fault[fn]->svmm == svmm && - buffer->fault[fn ]->addr == - buffer->fault[fn - 1]->addr); - - /* If the next fault is outside the window, or all GPU - * faults have been dealt with, we're done here. - */ - if (fn >= buffer->fault_nr || - buffer->fault[fn]->svmm != svmm || + if (buffer->fault[fn]->svmm != svmm || buffer->fault[fn]->addr >= limit) break; - - /* Fill in the gap between this fault and the next. */ - fill = (buffer->fault[fn ]->addr - - buffer->fault[fn - 1]->addr) >> PAGE_SHIFT; - while (--fill) - hmm_pfns[pi++] = 0; } - SVMM_DBG(svmm, "wndw %016llx-%016llx covering %d fault(s)", - args.i.p.addr, - args.i.p.addr + args.i.p.size, fn - fi); - - notifier.svmm = svmm; - ret = mmu_interval_notifier_insert(¬ifier.notifier, - svmm->notifier.mm, - args.i.p.addr, args.i.p.size, - &nouveau_svm_mni_ops); - if (!ret) { - ret = nouveau_range_fault( - svmm, svm->drm, &args, - sizeof(args.i) + pi * sizeof(args.phys[0]), - hmm_pfns, args.phys, ¬ifier); - mmu_interval_notifier_remove(¬ifier.notifier); - } - mmput(mm); + /* If handling failed completely, cancel all faults. */ + if (ret) { + while (fi < fn) { + struct nouveau_svm_fault *fault = + buffer->fault[fi++]; - /* Cancel any faults in the window whose pages didn't manage - * to keep their valid bit, or stay writeable when required. - * - * If handling failed completely, cancel all faults. - */ - while (fi < fn) { - struct nouveau_svm_fault *fault = buffer->fault[fi++]; - pi = (fault->addr - args.i.p.addr) >> PAGE_SHIFT; - if (ret || - !(args.phys[pi] & NVIF_VMM_PFNMAP_V0_V) || - (!(args.phys[pi] & NVIF_VMM_PFNMAP_V0_W) && - fault->access != 0 && fault->access != 3)) { nouveau_svm_fault_cancel_fault(svm, fault); - continue; } + } else replay++; - } } /* Issue fault replay to the GPU. 
 */

From patchwork Tue Jun 30 19:57:34 2020
From: Ralph Campbell <rcampbell@nvidia.com>
Subject: [PATCH v2 2/5] mm/hmm: add output flags for PMD/PUD page mapping
Date: Tue, 30 Jun 2020 12:57:34 -0700
Message-ID: <20200630195737.8667-3-rcampbell@nvidia.com>
In-Reply-To: <20200630195737.8667-1-rcampbell@nvidia.com>

hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped in the requested process' page tables.
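A hedged sketch of how a caller might consume the two output flags this
patch adds, choosing a larger device mapping size only when the CPU
mapping is known to be uniform; the helper name is illustrative and not
an existing API.

#include <linux/hmm.h>
#include <linux/mm.h>

/*
 * Illustrative only: translate hmm_range_fault() output flags into a
 * device mapping shift.  HMM_PFN_PMD/HMM_PFN_PUD mean the whole
 * PMD/PUD-sized region is mapped by the CPU with uniform permissions,
 * so a single large device PTE is safe.
 */
static unsigned int device_map_shift(unsigned long hmm_pfn)
{
	if (!(hmm_pfn & HMM_PFN_VALID))
		return 0;		/* not mapped, fault first */
	if (hmm_pfn & HMM_PFN_PUD)
		return PUD_SHIFT;	/* e.g. 1GB on x86-64 */
	if (hmm_pfn & HMM_PFN_PMD)
		return PMD_SHIFT;	/* e.g. 2MB on x86-64 */
	return PAGE_SHIFT;
}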
The PFN can be used to get the struct page with hmm_pfn_to_page() and the page size order can be determined with compound_order(page) but if the page is larger than order 0 (PAGE_SIZE), there is no indication that a compound page is mapped by the CPU using a larger page size. Without this information, the caller can't safely use a large device PTE to map the compound page because the CPU might be using smaller PTEs with different read/write permissions. Add two new output flags to indicate the mapping size (PMD or PUD sized) so that callers know the pages are being mapped with consistent permissions and a large device page table mapping can be used if one is available. Signed-off-by: Ralph Campbell --- include/linux/hmm.h | 11 ++++++++++- mm/hmm.c | 13 +++++++++++-- 2 files changed, 21 insertions(+), 3 deletions(-) diff --git a/include/linux/hmm.h b/include/linux/hmm.h index f4a09ed223ac..bd250edc7048 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -28,6 +28,12 @@ * HMM_PFN_WRITE - if the page memory can be written to (requires HMM_PFN_VALID) * HMM_PFN_ERROR - accessing the pfn is impossible and the device should * fail. ie poisoned memory, special pages, no vma, etc + * HMM_PFN_PMD - if HMM_PFN_VALID is set, the page is at least of size + * PMD_SIZE and fully mapped by the CPU with consistent + * protection (e.g., all writeable if HMM_PFN_WRITE is set). + * HMM_PFN_PUD - if HMM_PFN_VALID is set, the page is at least of size + * PUD_SIZE and fully mapped by the CPU with consistent + * protection (e.g., all writeable if HMM_PFN_WRITE is set). * * On input: * 0 - Return the current state of the page, do not fault it. @@ -41,12 +47,15 @@ enum hmm_pfn_flags { HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1), HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2), HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3), + HMM_PFN_PMD = 1UL << (BITS_PER_LONG - 4), + HMM_PFN_PUD = 1UL << (BITS_PER_LONG - 5), /* Input flags */ HMM_PFN_REQ_FAULT = HMM_PFN_VALID, HMM_PFN_REQ_WRITE = HMM_PFN_WRITE, - HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR, + HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR | + HMM_PFN_PMD | HMM_PFN_PUD, }; /* diff --git a/mm/hmm.c b/mm/hmm.c index e9a545751108..d9de95450be3 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -170,7 +170,9 @@ static inline unsigned long pmd_to_hmm_pfn_flags(struct hmm_range *range, { if (pmd_protnone(pmd)) return 0; - return pmd_write(pmd) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID; + return pmd_write(pmd) ? + (HMM_PFN_VALID | HMM_PFN_PMD | HMM_PFN_WRITE) : + (HMM_PFN_VALID | HMM_PFN_PMD); } #ifdef CONFIG_TRANSPARENT_HUGEPAGE @@ -389,7 +391,9 @@ static inline unsigned long pud_to_hmm_pfn_flags(struct hmm_range *range, { if (!pud_present(pud)) return 0; - return pud_write(pud) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID; + return pud_write(pud) ? 
+		(HMM_PFN_VALID | HMM_PFN_PUD | HMM_PFN_WRITE) :
+		(HMM_PFN_VALID | HMM_PFN_PUD);
 }
 
 static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
@@ -468,6 +472,7 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	unsigned long cpu_flags;
 	spinlock_t *ptl;
 	pte_t entry;
+	unsigned int hshift = huge_page_shift(hstate_vma(vma));
 
 	ptl = huge_pte_lock(hstate_vma(vma), walk->mm, pte);
 	entry = huge_ptep_get(pte);
@@ -475,6 +480,10 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	i = (start - range->start) >> PAGE_SHIFT;
 	pfn_req_flags = range->hmm_pfns[i];
 	cpu_flags = pte_to_hmm_pfn_flags(range, entry);
+	if (hshift >= PUD_SHIFT)
+		cpu_flags |= HMM_PFN_PUD;
+	else if (hshift >= PMD_SHIFT)
+		cpu_flags |= HMM_PFN_PMD;
 	required_fault = hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags,
 					    cpu_flags);
 	if (required_fault) {

From patchwork Tue Jun 30 19:57:35 2020
From: Ralph Campbell <rcampbell@nvidia.com>
Subject: [PATCH v2 3/5] nouveau: fix mapping 2MB sysmem pages
Date: Tue, 30 Jun 2020 12:57:35 -0700
Message-ID: <20200630195737.8667-4-rcampbell@nvidia.com>
In-Reply-To: <20200630195737.8667-1-rcampbell@nvidia.com>

The nvif_object_ioctl() method NVIF_VMM_V0_PFNMAP wasn't correctly
setting the hardware specific GPU page table entries for 2MB sized
pages. Fix this by adding functions to set and clear PD0 GPU page
table entries.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c |  5 +-
 .../drm/nouveau/nvkm/subdev/mmu/vmmgp100.c    | 82 +++++++++++++++++++
 2 files changed, 84 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
index 199f94e15c5f..19a6804e3989 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
@@ -1204,7 +1204,6 @@ nvkm_vmm_pfn_unmap(struct nvkm_vmm *vmm, u64 addr, u64 size)
 /*TODO:
  * - Avoid PT readback (for dma_unmap etc), this might end up being dealt
  *   with inside HMM, which would be a lot nicer for us to deal with.
- * - Multiple page sizes (particularly for huge page support).
  * - Support for systems without a 4KiB page size.
  */
 int
@@ -1220,8 +1219,8 @@ nvkm_vmm_pfn_map(struct nvkm_vmm *vmm, u8 shift, u64 addr, u64 size, u64 *pfn)
 	/* Only support mapping where the page size of the incoming page
 	 * array matches a page size available for direct mapping.
*/ - while (page->shift && page->shift != shift && - page->desc->func->pfn == NULL) + while (page->shift && (page->shift != shift || + page->desc->func->pfn == NULL)) page++; if (!page->shift || !IS_ALIGNED(addr, 1ULL << shift) || diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c index d86287565542..ed37fddd063f 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c @@ -258,12 +258,94 @@ gp100_vmm_pd0_unmap(struct nvkm_vmm *vmm, VMM_FO128(pt, vmm, pdei * 0x10, 0ULL, 0ULL, pdes); } +static void +gp100_vmm_pd0_pfn_unmap(struct nvkm_vmm *vmm, + struct nvkm_mmu_pt *pt, u32 ptei, u32 ptes) +{ + struct device *dev = vmm->mmu->subdev.device->dev; + dma_addr_t addr; + + nvkm_kmap(pt->memory); + while (ptes--) { + u32 datalo = nvkm_ro32(pt->memory, pt->base + ptei * 16 + 0); + u32 datahi = nvkm_ro32(pt->memory, pt->base + ptei * 16 + 4); + u64 data = (u64)datahi << 32 | datalo; + + if ((data & (3ULL << 1)) != 0) { + addr = (data >> 8) << 12; + dma_unmap_page(dev, addr, 1UL << 21, DMA_BIDIRECTIONAL); + } + ptei++; + } + nvkm_done(pt->memory); +} + +static bool +gp100_vmm_pd0_pfn_clear(struct nvkm_vmm *vmm, + struct nvkm_mmu_pt *pt, u32 ptei, u32 ptes) +{ + bool dma = false; + + nvkm_kmap(pt->memory); + while (ptes--) { + u32 datalo = nvkm_ro32(pt->memory, pt->base + ptei * 16 + 0); + u32 datahi = nvkm_ro32(pt->memory, pt->base + ptei * 16 + 4); + u64 data = (u64)datahi << 32 | datalo; + + if ((data & BIT_ULL(0)) && (data & (3ULL << 1)) != 0) { + VMM_WO064(pt, vmm, ptei * 16, data & ~BIT_ULL(0)); + dma = true; + } + ptei++; + } + nvkm_done(pt->memory); + return dma; +} + +static void +gp100_vmm_pd0_pfn(struct nvkm_vmm *vmm, struct nvkm_mmu_pt *pt, + u32 ptei, u32 ptes, struct nvkm_vmm_map *map) +{ + struct device *dev = vmm->mmu->subdev.device->dev; + dma_addr_t addr; + + nvkm_kmap(pt->memory); + while (ptes--) { + u64 data = 0; + + if (!(*map->pfn & NVKM_VMM_PFN_W)) + data |= BIT_ULL(6); /* RO. */ + + if (!(*map->pfn & NVKM_VMM_PFN_VRAM)) { + addr = *map->pfn >> NVKM_VMM_PFN_ADDR_SHIFT; + addr = dma_map_page(dev, pfn_to_page(addr), 0, + 1UL << 21, DMA_BIDIRECTIONAL); + if (!WARN_ON(dma_mapping_error(dev, addr))) { + data |= addr >> 4; + data |= 2ULL << 1; /* SYSTEM_COHERENT_MEMORY. */ + data |= BIT_ULL(3); /* VOL. */ + data |= BIT_ULL(0); /* VALID. */ + } + } else { + data |= (*map->pfn & NVKM_VMM_PFN_ADDR) >> 4; + data |= BIT_ULL(0); /* VALID. 
 */
+		}
+
+		VMM_WO064(pt, vmm, ptei++ * 16, data);
+		map->pfn++;
+	}
+	nvkm_done(pt->memory);
+}
+
 static const struct nvkm_vmm_desc_func
 gp100_vmm_desc_pd0 = {
 	.unmap = gp100_vmm_pd0_unmap,
 	.sparse = gp100_vmm_pd0_sparse,
 	.pde = gp100_vmm_pd0_pde,
 	.mem = gp100_vmm_pd0_mem,
+	.pfn = gp100_vmm_pd0_pfn,
+	.pfn_clear = gp100_vmm_pd0_pfn_clear,
+	.pfn_unmap = gp100_vmm_pd0_pfn_unmap,
 };
 
 static void

From patchwork Tue Jun 30 19:57:36 2020
From: Ralph Campbell <rcampbell@nvidia.com>
Subject: [PATCH v2 4/5] nouveau/hmm: support mapping large sysmem pages
Date: Tue, 30 Jun 2020 12:57:36 -0700
Message-ID: <20200630195737.8667-5-rcampbell@nvidia.com>
In-Reply-To: <20200630195737.8667-1-rcampbell@nvidia.com>
linux-kselftest-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Nouveau currently only supports mapping PAGE_SIZE sized pages of system memory when shared virtual memory (SVM) is enabled. Use the new HMM_PFN_PMD flag that hmm_range_fault() returns to support mapping system memory pages that are PMD_SIZE. Signed-off-by: Ralph Campbell --- drivers/gpu/drm/nouveau/nouveau_svm.c | 57 +++++++++++++++++++++------ 1 file changed, 44 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c index 665dede69bd1..891b6a180447 100644 --- a/drivers/gpu/drm/nouveau/nouveau_svm.c +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c @@ -514,38 +514,61 @@ static const struct mmu_interval_notifier_ops nouveau_svm_mni_ops = { }; static void nouveau_hmm_convert_pfn(struct nouveau_drm *drm, - struct hmm_range *range, u64 *ioctl_addr) + struct hmm_range *range, + struct nouveau_pfnmap_args *args) { struct page *page; /* - * The ioctl_addr prepared here is passed through nvif_object_ioctl() + * The address prepared here is passed through nvif_object_ioctl() * to an eventual DMA map in something like gp100_vmm_pgt_pfn() * * This is all just encoding the internal hmm representation into a * different nouveau internal representation. */ if (!(range->hmm_pfns[0] & HMM_PFN_VALID)) { - ioctl_addr[0] = 0; + args->p.phys[0] = 0; return; } page = hmm_pfn_to_page(range->hmm_pfns[0]); + /* + * Only map compound pages to the GPU if the CPU is also mapping the + * page as a compound page. Otherwise, the PTE protections might not be + * consistent (e.g., CPU only maps part of a compound page). + * Note that the underlying page might still be larger than the + * CPU mapping (e.g., a PUD sized compound page partially mapped with + * a PMD sized page table entry). + */ + if (range->hmm_pfns[0] & (HMM_PFN_PMD | HMM_PFN_PUD)) { + unsigned long addr = args->p.addr; + + /* + * For now, only map using PMD sized pages. + * FIXME: need to handle 512MB GPU PTEs with 1GB PUD sized + * pages. 
+ */ + args->p.page = PMD_SHIFT; + args->p.size = 1UL << args->p.page; + args->p.addr &= ~(args->p.size - 1); + page -= (addr - args->p.addr) >> PAGE_SHIFT; + } if (is_device_private_page(page)) - ioctl_addr[0] = nouveau_dmem_page_addr(page) | + args->p.phys[0] = nouveau_dmem_page_addr(page) | NVIF_VMM_PFNMAP_V0_V | NVIF_VMM_PFNMAP_V0_VRAM; else - ioctl_addr[0] = page_to_phys(page) | + args->p.phys[0] = page_to_phys(page) | NVIF_VMM_PFNMAP_V0_V | NVIF_VMM_PFNMAP_V0_HOST; if (range->hmm_pfns[0] & HMM_PFN_WRITE) - ioctl_addr[0] |= NVIF_VMM_PFNMAP_V0_W; + args->p.phys[0] |= NVIF_VMM_PFNMAP_V0_W; } static int nouveau_range_fault(struct nouveau_svmm *svmm, - struct nouveau_drm *drm, void *data, u32 size, - u64 *ioctl_addr, unsigned long hmm_flags, + struct nouveau_drm *drm, + struct nouveau_pfnmap_args *args, u32 size, + unsigned long hmm_flags, struct svm_notifier *notifier) { unsigned long timeout = @@ -585,10 +608,10 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm, break; } - nouveau_hmm_convert_pfn(drm, &range, ioctl_addr); + nouveau_hmm_convert_pfn(drm, &range, args); svmm->vmm->vmm.object.client->super = true; - ret = nvif_object_ioctl(&svmm->vmm->vmm.object, data, size, NULL); + ret = nvif_object_ioctl(&svmm->vmm->vmm.object, args, size, NULL); svmm->vmm->vmm.object.client->super = false; mutex_unlock(&svmm->mutex); @@ -717,12 +740,13 @@ nouveau_svm_fault(struct nvif_notify *notify) args.i.p.addr, args.i.p.size, &nouveau_svm_mni_ops); if (!ret) { - ret = nouveau_range_fault(svmm, svm->drm, &args, - sizeof(args), args.phys, hmm_flags, ¬ifier); + ret = nouveau_range_fault(svmm, svm->drm, &args.i, + sizeof(args), hmm_flags, ¬ifier); mmu_interval_notifier_remove(¬ifier.notifier); } mmput(mm); + limit = args.i.p.addr + args.i.p.size; for (fn = fi; ++fn < buffer->fault_nr; ) { /* It's okay to skip over duplicate addresses from the * same SVMM as faults are ordered by access type such @@ -730,9 +754,16 @@ nouveau_svm_fault(struct nvif_notify *notify) * * ie. WRITE faults appear first, thus any handling of * pending READ faults will already be satisfied. + * But if a large page is mapped, make sure subsequent + * fault addresses have sufficient access permission. */ if (buffer->fault[fn]->svmm != svmm || - buffer->fault[fn]->addr >= limit) + buffer->fault[fn]->addr >= limit || + (buffer->fault[fi]->access == 0 /* READ. */ && + !(args.phys[0] & NVIF_VMM_PFNMAP_V0_V)) || + (buffer->fault[fi]->access != 0 /* READ. */ && + buffer->fault[fi]->access != 3 /* PREFETCH. 
*/ && + !(args.phys[0] & NVIF_VMM_PFNMAP_V0_W))) break; }

From patchwork Tue Jun 30 19:57:37 2020
From: Ralph Campbell <rcampbell@nvidia.com>
Subject: [PATCH v2 5/5] hmm: add tests for HMM_PFN_PMD flag
Date: Tue, 30 Jun 2020 12:57:37 -0700
Message-ID: <20200630195737.8667-6-rcampbell@nvidia.com>
In-Reply-To: <20200630195737.8667-1-rcampbell@nvidia.com>

Add a sanity test for hmm_range_fault() returning the HMM_PFN_PMD flag.
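As a worked example of what the new test asserts: with the permission bits
this series adds to the dmirror uapi (see the diff below), every 4KB entry
of a PMD-mapped huge page should mirror the same combined value. The enum
restates the constants from lib/test_hmm_uapi.h; the expect_* names are
illustrative.

/* Constants from lib/test_hmm_uapi.h as added below. */
enum {
	HMM_DMIRROR_PROT_READ  = 0x01,
	HMM_DMIRROR_PROT_WRITE = 0x02,
	HMM_DMIRROR_PROT_PMD   = 0x04,
};

/* Writable, PMD-mapped huge page: each 4KB entry mirrors as 0x06. */
static const unsigned char expect_rw =
	HMM_DMIRROR_PROT_WRITE | HMM_DMIRROR_PROT_PMD;	/* 0x02 | 0x04 */

/* After mprotect(PROT_READ), each entry mirrors as 0x05. */
static const unsigned char expect_ro =
	HMM_DMIRROR_PROT_READ | HMM_DMIRROR_PROT_PMD;	/* 0x01 | 0x04 */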
Signed-off-by: Ralph Campbell --- lib/test_hmm.c | 4 ++ lib/test_hmm_uapi.h | 4 ++ tools/testing/selftests/vm/hmm-tests.c | 76 ++++++++++++++++++++++++++ 3 files changed, 84 insertions(+) diff --git a/lib/test_hmm.c b/lib/test_hmm.c index a2a82262b97b..a0c081641f78 100644 --- a/lib/test_hmm.c +++ b/lib/test_hmm.c @@ -766,6 +766,10 @@ static void dmirror_mkentry(struct dmirror *dmirror, struct hmm_range *range, *perm |= HMM_DMIRROR_PROT_WRITE; else *perm |= HMM_DMIRROR_PROT_READ; + if (entry & HMM_PFN_PMD) + *perm |= HMM_DMIRROR_PROT_PMD; + if (entry & HMM_PFN_PUD) + *perm |= HMM_DMIRROR_PROT_PUD; } static bool dmirror_snapshot_invalidate(struct mmu_interval_notifier *mni, diff --git a/lib/test_hmm_uapi.h b/lib/test_hmm_uapi.h index 67b3b2e6ff5d..670b4ef2a5b6 100644 --- a/lib/test_hmm_uapi.h +++ b/lib/test_hmm_uapi.h @@ -40,6 +40,8 @@ struct hmm_dmirror_cmd { * HMM_DMIRROR_PROT_NONE: unpopulated PTE or PTE with no access * HMM_DMIRROR_PROT_READ: read-only PTE * HMM_DMIRROR_PROT_WRITE: read/write PTE + * HMM_DMIRROR_PROT_PMD: PMD sized page is fully mapped by same permissions + * HMM_DMIRROR_PROT_PUD: PUD sized page is fully mapped by same permissions * HMM_DMIRROR_PROT_ZERO: special read-only zero page * HMM_DMIRROR_PROT_DEV_PRIVATE_LOCAL: Migrated device private page on the * device the ioctl() is made @@ -51,6 +53,8 @@ enum { HMM_DMIRROR_PROT_NONE = 0x00, HMM_DMIRROR_PROT_READ = 0x01, HMM_DMIRROR_PROT_WRITE = 0x02, + HMM_DMIRROR_PROT_PMD = 0x04, + HMM_DMIRROR_PROT_PUD = 0x08, HMM_DMIRROR_PROT_ZERO = 0x10, HMM_DMIRROR_PROT_DEV_PRIVATE_LOCAL = 0x20, HMM_DMIRROR_PROT_DEV_PRIVATE_REMOTE = 0x30, diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c index 79db22604019..b533dd08da1d 100644 --- a/tools/testing/selftests/vm/hmm-tests.c +++ b/tools/testing/selftests/vm/hmm-tests.c @@ -1291,6 +1291,82 @@ TEST_F(hmm2, snapshot) hmm_buffer_free(buffer); } +/* + * Test the hmm_range_fault() HMM_PFN_PMD flag for large pages that + * should be mapped by a large page table entry. + */ +TEST_F(hmm, compound) +{ + struct hmm_buffer *buffer; + unsigned long npages; + unsigned long size; + int *ptr; + unsigned char *m; + int ret; + long pagesizes[4]; + int n, idx; + unsigned long i; + + /* Skip test if we can't allocate a hugetlbfs page. */ + + n = gethugepagesizes(pagesizes, 4); + if (n <= 0) + return; + for (idx = 0; --n > 0; ) { + if (pagesizes[n] < pagesizes[idx]) + idx = n; + } + size = ALIGN(TWOMEG, pagesizes[idx]); + npages = size >> self->page_shift; + + buffer = malloc(sizeof(*buffer)); + ASSERT_NE(buffer, NULL); + + buffer->ptr = get_hugepage_region(size, GHR_STRICT); + if (buffer->ptr == NULL) { + free(buffer); + return; + } + + buffer->size = size; + buffer->mirror = malloc(npages); + ASSERT_NE(buffer->mirror, NULL); + + /* Initialize the pages the device will snapshot in buffer->ptr. */ + for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i) + ptr[i] = i; + + /* Simulate a device snapshotting CPU pagetables. */ + ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_SNAPSHOT, buffer, npages); + ASSERT_EQ(ret, 0); + ASSERT_EQ(buffer->cpages, npages); + + /* Check what the device saw. */ + m = buffer->mirror; + for (i = 0; i < npages; ++i) + ASSERT_EQ(m[i], HMM_DMIRROR_PROT_WRITE | + HMM_DMIRROR_PROT_PMD); + + /* Make the region read-only. */ + ret = mprotect(buffer->ptr, size, PROT_READ); + ASSERT_EQ(ret, 0); + + /* Simulate a device snapshotting CPU pagetables. 
*/ + ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_SNAPSHOT, buffer, npages); + ASSERT_EQ(ret, 0); + ASSERT_EQ(buffer->cpages, npages); + + /* Check what the device saw. */ + m = buffer->mirror; + for (i = 0; i < npages; ++i) + ASSERT_EQ(m[i], HMM_DMIRROR_PROT_READ | + HMM_DMIRROR_PROT_PMD); + + free_hugepage_region(buffer->ptr); + buffer->ptr = NULL; + hmm_buffer_free(buffer); +} + /* * Test two devices reading the same memory (double mapped). */