From: Ralph Campbell
Subject: [PATCH v2 3/5] nouveau: fix mapping 2MB sysmem pages
Date: Tue, 30 Jun 2020 12:57:35 -0700
Message-ID: <20200630195737.8667-4-rcampbell@nvidia.com>
In-Reply-To: <20200630195737.8667-1-rcampbell@nvidia.com>
References: <20200630195737.8667-1-rcampbell@nvidia.com>
Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
    Andrew Morton, Shuah Khan, Ben Skeggs, Ralph Campbell

The nvif_object_ioctl() method NVIF_VMM_V0_PFNMAP wasn't correctly
setting the hardware-specific GPU page table entries for 2MB pages.
Fix this by adding functions to set and clear PD0 GPU page table
entries.
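
As an aside (not part of the patch itself): a minimal, self-contained sketch
of how the new gp100_vmm_pd0_pfn() path below assembles a page table entry
for a 2MB system-memory page - the DMA address stored right-shifted by 4,
plus the VALID, system-coherent-aperture, VOL and RO bits. The macro names
and the helper function are local to this sketch, not driver definitions.

#include <stdint.h>
#include <stdio.h>

/* Illustrative constants only; they mirror the bit positions written by
 * gp100_vmm_pd0_pfn() in the diff, not any driver header.
 */
#define PTE_VALID           (1ULL << 0)
#define PTE_APERTURE_SYSCOH (2ULL << 1)  /* SYSTEM_COHERENT_MEMORY aperture */
#define PTE_VOL             (1ULL << 3)
#define PTE_RO              (1ULL << 6)

static uint64_t make_sysmem_pte_2mb(uint64_t dma_addr, int writable)
{
	uint64_t data = 0;

	if (!writable)
		data |= PTE_RO;

	data |= dma_addr >> 4;          /* address field */
	data |= PTE_APERTURE_SYSCOH;
	data |= PTE_VOL;
	data |= PTE_VALID;
	return data;
}

int main(void)
{
	/* Example: a writable, 2MB-aligned DMA address. */
	printf("pte = 0x%llx\n",
	       (unsigned long long)make_sysmem_pte_2mb(0x40000000ULL, 1));
	return 0;
}

Running it just prints the composed 64-bit value; the point is only to make
the bit layout used by the diff easier to follow.
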
Signed-off-by: Ralph Campbell
---
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c |  5 +-
 .../drm/nouveau/nvkm/subdev/mmu/vmmgp100.c    | 82 +++++++++++++++++++
 2 files changed, 84 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
index 199f94e15c5f..19a6804e3989 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
@@ -1204,7 +1204,6 @@ nvkm_vmm_pfn_unmap(struct nvkm_vmm *vmm, u64 addr, u64 size)
 /*TODO:
  * - Avoid PT readback (for dma_unmap etc), this might end up being dealt
  *   with inside HMM, which would be a lot nicer for us to deal with.
- * - Multiple page sizes (particularly for huge page support).
  * - Support for systems without a 4KiB page size.
  */
 int
@@ -1220,8 +1219,8 @@ nvkm_vmm_pfn_map(struct nvkm_vmm *vmm, u8 shift, u64 addr, u64 size, u64 *pfn)
 	/* Only support mapping where the page size of the incoming page
 	 * array matches a page size available for direct mapping.
 	 */
-	while (page->shift && page->shift != shift &&
-	       page->desc->func->pfn == NULL)
+	while (page->shift && (page->shift != shift ||
+	       page->desc->func->pfn == NULL))
 		page++;
 
 	if (!page->shift || !IS_ALIGNED(addr, 1ULL << shift) ||
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
index d86287565542..ed37fddd063f 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
@@ -258,12 +258,94 @@ gp100_vmm_pd0_unmap(struct nvkm_vmm *vmm,
 	VMM_FO128(pt, vmm, pdei * 0x10, 0ULL, 0ULL, pdes);
 }
 
+static void
+gp100_vmm_pd0_pfn_unmap(struct nvkm_vmm *vmm,
+			struct nvkm_mmu_pt *pt, u32 ptei, u32 ptes)
+{
+	struct device *dev = vmm->mmu->subdev.device->dev;
+	dma_addr_t addr;
+
+	nvkm_kmap(pt->memory);
+	while (ptes--) {
+		u32 datalo = nvkm_ro32(pt->memory, pt->base + ptei * 16 + 0);
+		u32 datahi = nvkm_ro32(pt->memory, pt->base + ptei * 16 + 4);
+		u64 data = (u64)datahi << 32 | datalo;
+
+		if ((data & (3ULL << 1)) != 0) {
+			addr = (data >> 8) << 12;
+			dma_unmap_page(dev, addr, 1UL << 21, DMA_BIDIRECTIONAL);
+		}
+		ptei++;
+	}
+	nvkm_done(pt->memory);
+}
+
+static bool
+gp100_vmm_pd0_pfn_clear(struct nvkm_vmm *vmm,
+			struct nvkm_mmu_pt *pt, u32 ptei, u32 ptes)
+{
+	bool dma = false;
+
+	nvkm_kmap(pt->memory);
+	while (ptes--) {
+		u32 datalo = nvkm_ro32(pt->memory, pt->base + ptei * 16 + 0);
+		u32 datahi = nvkm_ro32(pt->memory, pt->base + ptei * 16 + 4);
+		u64 data = (u64)datahi << 32 | datalo;
+
+		if ((data & BIT_ULL(0)) && (data & (3ULL << 1)) != 0) {
+			VMM_WO064(pt, vmm, ptei * 16, data & ~BIT_ULL(0));
+			dma = true;
+		}
+		ptei++;
+	}
+	nvkm_done(pt->memory);
+	return dma;
+}
+
+static void
+gp100_vmm_pd0_pfn(struct nvkm_vmm *vmm, struct nvkm_mmu_pt *pt,
+		  u32 ptei, u32 ptes, struct nvkm_vmm_map *map)
+{
+	struct device *dev = vmm->mmu->subdev.device->dev;
+	dma_addr_t addr;
+
+	nvkm_kmap(pt->memory);
+	while (ptes--) {
+		u64 data = 0;
+
+		if (!(*map->pfn & NVKM_VMM_PFN_W))
+			data |= BIT_ULL(6); /* RO. */
+
+		if (!(*map->pfn & NVKM_VMM_PFN_VRAM)) {
+			addr = *map->pfn >> NVKM_VMM_PFN_ADDR_SHIFT;
+			addr = dma_map_page(dev, pfn_to_page(addr), 0,
+					    1UL << 21, DMA_BIDIRECTIONAL);
+			if (!WARN_ON(dma_mapping_error(dev, addr))) {
+				data |= addr >> 4;
+				data |= 2ULL << 1; /* SYSTEM_COHERENT_MEMORY. */
+				data |= BIT_ULL(3); /* VOL. */
+				data |= BIT_ULL(0); /* VALID. */
+			}
+		} else {
+			data |= (*map->pfn & NVKM_VMM_PFN_ADDR) >> 4;
+			data |= BIT_ULL(0); /* VALID. */
+		}
+
+		VMM_WO064(pt, vmm, ptei++ * 16, data);
+		map->pfn++;
+	}
+	nvkm_done(pt->memory);
+}
+
 static const struct nvkm_vmm_desc_func
 gp100_vmm_desc_pd0 = {
 	.unmap = gp100_vmm_pd0_unmap,
 	.sparse = gp100_vmm_pd0_sparse,
 	.pde = gp100_vmm_pd0_pde,
 	.mem = gp100_vmm_pd0_mem,
+	.pfn = gp100_vmm_pd0_pfn,
+	.pfn_clear = gp100_vmm_pd0_pfn_clear,
+	.pfn_unmap = gp100_vmm_pd0_pfn_unmap,
 };
 
 static void
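
One further illustrative note on the nvkm_vmm_pfn_map() hunk above: the old
loop condition stopped at the first page-size entry where either the shift
matched or a ->pfn handler existed; the fixed condition keeps scanning until
both hold, so a 2MB request lands on an entry that can actually be
direct-mapped. Below is a standalone sketch of that selection logic using
simplified stand-in types (struct page_size, find_page), which are not the
driver's structures.

#include <stddef.h>
#include <stdio.h>

/* Simplified stand-in for the driver's page-size table entry. */
struct page_size {
	unsigned char shift;    /* 0 terminates the table */
	void *pfn;              /* non-NULL if direct PFN mapping is supported */
};

static int pfn_stub;            /* dummy target so some entries have a handler */

/* Keep scanning while the shift does not match OR no ->pfn handler exists,
 * i.e. stop only at an entry that satisfies both requirements (the fixed
 * behaviour); the old && condition would have stopped as soon as either
 * requirement was met on its own.
 */
static const struct page_size *
find_page(const struct page_size *page, unsigned char shift)
{
	while (page->shift && (page->shift != shift || page->pfn == NULL))
		page++;
	return page->shift ? page : NULL;
}

int main(void)
{
	const struct page_size table[] = {
		{ 21, NULL },       /* 2MB entry without a pfn handler */
		{ 21, &pfn_stub },  /* 2MB entry that supports pfn mapping */
		{ 12, &pfn_stub },
		{ 0, NULL },
	};
	const struct page_size *p = find_page(table, 21);

	printf("selected shift %d, pfn %s\n", p ? p->shift : 0,
	       p && p->pfn ? "available" : "missing");
	return 0;
}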