From patchwork Wed Mar 11 18:35:05 2020
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 11432511
From: Jason Gunthorpe
To: Jerome Glisse, Ralph Campbell, Felix.Kuehling@amd.com
Cc: linux-mm@kvack.org, John Hubbard, dri-devel@lists.freedesktop.org,
 amd-gfx@lists.freedesktop.org, Christoph Hellwig, Philip Yang,
 Jason Gunthorpe
Subject: [PATCH hmm 7/8] mm/hmm: return -EFAULT when setting HMM_PFN_ERROR on
 requested valid pages
Date: Wed, 11 Mar 2020 15:35:05 -0300
Message-Id: <20200311183506.3997-8-jgg@ziepe.ca>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200311183506.3997-1-jgg@ziepe.ca>
References: <20200311183506.3997-1-jgg@ziepe.ca>

From: Jason Gunthorpe

hmm_range_fault() should never return 0 if the caller requested a valid
page, but the pfns output for that page would be HMM_PFN_ERROR.

hmm_pte_need_fault() must always be called before setting HMM_PFN_ERROR
to detect if the page is in faulting mode or not.

Fix two cases in hmm_vma_walk_pmd() and reorganize some of the
duplicated code.
Fixes: d08faca018c4 ("mm/hmm: properly handle migration pmd")
Fixes: da4c3c735ea4 ("mm/hmm/mirror: helper to snapshot CPU page table")
Signed-off-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
---
 mm/hmm.c | 38 +++++++++++++++++++++-----------------
 1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index bf676cfef3e8ee..f61fddf2ef6505 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -363,8 +363,10 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 {
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
 	struct hmm_range *range = hmm_vma_walk->range;
-	uint64_t *pfns = range->pfns;
-	unsigned long addr = start, i;
+	uint64_t *pfns = &range->pfns[(start - range->start) >> PAGE_SHIFT];
+	unsigned long npages = (end - start) >> PAGE_SHIFT;
+	unsigned long addr = start;
+	bool fault, write_fault;
 	pte_t *ptep;
 	pmd_t pmd;
 
@@ -374,14 +376,6 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 		return hmm_vma_walk_hole(start, end, -1, walk);
 
 	if (thp_migration_supported() && is_pmd_migration_entry(pmd)) {
-		bool fault, write_fault;
-		unsigned long npages;
-		uint64_t *pfns;
-
-		i = (addr - range->start) >> PAGE_SHIFT;
-		npages = (end - addr) >> PAGE_SHIFT;
-		pfns = &range->pfns[i];
-
 		hmm_range_need_fault(hmm_vma_walk, pfns, npages, 0,
 				     &fault, &write_fault);
 		if (fault || write_fault) {
@@ -390,8 +384,15 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 			return -EBUSY;
 		}
 		return hmm_pfns_fill(start, end, range, HMM_PFN_NONE);
-	} else if (!pmd_present(pmd))
+	}
+
+	if (!pmd_present(pmd)) {
+		hmm_range_need_fault(hmm_vma_walk, pfns, npages, 0, &fault,
+				     &write_fault);
+		if (fault || write_fault)
+			return -EFAULT;
 		return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
+	}
 
 	if (pmd_devmap(pmd) || pmd_trans_huge(pmd)) {
 		/*
@@ -408,8 +409,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 		if (!pmd_devmap(pmd) && !pmd_trans_huge(pmd))
 			goto again;
 
-		i = (addr - range->start) >> PAGE_SHIFT;
-		return hmm_vma_handle_pmd(walk, addr, end, &pfns[i], pmd);
+		return hmm_vma_handle_pmd(walk, addr, end, pfns, pmd);
 	}
 
 	/*
@@ -418,15 +418,19 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 	 * entry pointing to pte directory or it is a bad pmd that will not
 	 * recover.
 	 */
-	if (pmd_bad(pmd))
+	if (pmd_bad(pmd)) {
+		hmm_range_need_fault(hmm_vma_walk, pfns, npages, 0, &fault,
+				     &write_fault);
+		if (fault || write_fault)
+			return -EFAULT;
 		return hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
+	}
 
 	ptep = pte_offset_map(pmdp, addr);
-	i = (addr - range->start) >> PAGE_SHIFT;
-	for (; addr < end; addr += PAGE_SIZE, ptep++, i++) {
+	for (; addr < end; addr += PAGE_SIZE, ptep++, pfns++) {
 		int r;
 
-		r = hmm_vma_handle_pte(walk, addr, end, pmdp, ptep, &pfns[i]);
+		r = hmm_vma_handle_pte(walk, addr, end, pmdp, ptep, pfns);
 		if (r) {
 			/* hmm_vma_handle_pte() did pte_unmap() */
 			hmm_vma_walk->last = addr;
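
To make the corrected contract concrete, here is a minimal caller-side
sketch (an editor's illustration, not code from this series;
driver_fault_range() is a hypothetical helper, and locking details are
omitted). The two-argument hmm_range_fault(range, 0) call matches the
API at the time of this posting, where a flags value of 0 requests
faulting rather than a pure snapshot:

#include <linux/hmm.h>

/*
 * Hypothetical driver helper (sketch only): fault in a range whose
 * pfns[] were pre-loaded with "valid page required" flags.  mmap_sem
 * locking and mmu_interval_notifier sequence checks are omitted; real
 * callers must redo them around every retry.
 */
static long driver_fault_range(struct hmm_range *range)
{
	long ret;

	do {
		ret = hmm_range_fault(range, 0);
	} while (ret == -EBUSY);	/* the walk raced an invalidation */

	/*
	 * With this patch an unresolvable requested page fails the walk
	 * with -EFAULT, so a non-negative return guarantees no requested
	 * entry in range->pfns[] was silently set to HMM_PFN_ERROR.
	 */
	return ret;
}

Before this fix, such a caller would also have to scan range->pfns[]
for HMM_PFN_ERROR entries even after a successful return; with it, the
return code alone is authoritative for the pages the caller required.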