From patchwork Fri Jun 19 21:56:36 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ralph Campbell <rcampbell@nvidia.com>
X-Patchwork-Id: 11615157
From: Ralph Campbell <rcampbell@nvidia.com>
To: linux-mm@kvack.org
Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
 Ben Skeggs, Andrew Morton, Shuah Khan, Ralph Campbell
Subject: [PATCH 03/16] nouveau: fix mixed normal and device private page migration
Date: Fri, 19 Jun 2020 14:56:36 -0700
Message-ID: <20200619215649.32297-4-rcampbell@nvidia.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200619215649.32297-1-rcampbell@nvidia.com>
References: <20200619215649.32297-1-rcampbell@nvidia.com>
X-NVConfidentiality: public

The OpenCL function clEnqueueSVMMigrateMem(), called without any flags,
migrates memory in the given address range to device private memory. Some
of the source pages might already have been migrated to device private
memory. In that case, the source struct page is not checked to see if it
is a device private page, and the GPU's physical address of local memory
is computed incorrectly, leading to data corruption. Fix this by checking
the source struct page and computing the correct physical address.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index cc9993837508..f6a806ba3caa 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -540,6 +540,12 @@ static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
 	if (!(src & MIGRATE_PFN_MIGRATE))
 		goto out;
 
+	if (spage && is_device_private_page(spage)) {
+		paddr = nouveau_dmem_page_addr(spage);
+		*dma_addr = DMA_MAPPING_ERROR;
+		goto done;
+	}
+
 	dpage = nouveau_dmem_page_alloc_locked(drm);
 	if (!dpage)
 		goto out;
@@ -560,6 +566,7 @@ static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
 		goto out_free_page;
 	}
 
+done:
 	*pfn = NVIF_VMM_PFNMAP_V0_V | NVIF_VMM_PFNMAP_V0_VRAM |
 		((paddr >> PAGE_SHIFT) << NVIF_VMM_PFNMAP_V0_ADDR_SHIFT);
 	if (src & MIGRATE_PFN_WRITE)
@@ -615,6 +622,7 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
 	struct migrate_vma args = {
 		.vma = vma,
 		.start = start,
+		.src_owner = drm->dev,
 	};
 	unsigned long i;
 	u64 *pfns;
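
[Editor's note: for readers following along outside the driver tree, here
is the control flow of the first two hunks in one place. This is a minimal
illustrative sketch, not the driver code: the signature is simplified, and
example_copy_to_vram() is a hypothetical stand-in for the real
"allocate a VRAM page, dma_map_page() the source, DMA the data across"
path. It assumes the v5.8-era helpers the patch itself uses
(migrate_pfn_to_page(), is_device_private_page(), nouveau_dmem_page_addr()).]

/* Hypothetical helper, standing in for the real allocate + copy path. */
int example_copy_to_vram(struct nouveau_drm *drm, struct page *spage,
			 unsigned long *paddr, dma_addr_t *dma_addr);

static int example_migrate_copy_one(struct nouveau_drm *drm,
				    unsigned long src,
				    dma_addr_t *dma_addr, u64 *pfn)
{
	struct page *spage = migrate_pfn_to_page(src);
	unsigned long paddr;

	if (!(src & MIGRATE_PFN_MIGRATE))
		return -EAGAIN;

	if (spage && is_device_private_page(spage)) {
		/*
		 * The source page already lives in the GPU's local
		 * memory (an earlier call migrated it).  Before the fix,
		 * this case fell through to the copy path below, which
		 * treated the device private struct page as system
		 * memory and computed a bogus VRAM address.  There is
		 * nothing to copy: reuse the page's existing VRAM
		 * address and poison the DMA address so the cleanup
		 * path knows there is nothing to dma_unmap_page().
		 */
		paddr = nouveau_dmem_page_addr(spage);
		*dma_addr = DMA_MAPPING_ERROR;
		goto done;
	}

	/* Normal path: copy the system-memory page into fresh VRAM. */
	if (example_copy_to_vram(drm, spage, &paddr, dma_addr))
		return -ENOMEM;

done:
	/* Encode the VRAM address for the later nvif page-table update. */
	*pfn = NVIF_VMM_PFNMAP_V0_V | NVIF_VMM_PFNMAP_V0_VRAM |
	       ((paddr >> PAGE_SHIFT) << NVIF_VMM_PFNMAP_V0_ADDR_SHIFT);
	if (src & MIGRATE_PFN_WRITE)
		*pfn |= NVIF_VMM_PFNMAP_V0_W;
	return 0;
}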
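
[Editor's note: the last hunk is what makes that device private branch
reachable at all. On the v5.8-era API (where the field is still named
src_owner; later kernels renamed it pgmap_owner), migrate_vma_setup()
only collects a device private page as a migration source when the page's
pgmap owner matches src_owner, so leaving it unset means already-migrated
pages are silently skipped. A sketch of the setup side, assuming vma,
start, end, src_pfns, and dst_pfns as locals:]

	struct migrate_vma args = {
		.vma = vma,
		.start = start,
		.end = end,
		.src = src_pfns,
		.dst = dst_pfns,
		/*
		 * Must match the dev_pagemap owner nouveau registers for
		 * its VRAM chunks; otherwise migrate_vma_setup() skips
		 * device private pages and they never reach the copy
		 * callback above.
		 */
		.src_owner = drm->dev,
	};

	if (migrate_vma_setup(&args))
		return -EFAULT;
	/* copy each page via the callback above, then: */
	migrate_vma_pages(&args);
	migrate_vma_finalize(&args);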