From patchwork Thu Mar 12 10:28:13 2020
X-Patchwork-Submitter: Steven Price
X-Patchwork-Id: 11433813
From: Steven Price
To: Jason Gunthorpe, Jerome Glisse, Ralph Campbell, Felix.Kuehling@amd.com
Cc: Philip Yang, John Hubbard, amd-gfx@lists.freedesktop.org,
    linux-mm@kvack.org, Jason Gunthorpe, dri-devel@lists.freedesktop.org,
    Christoph Hellwig, Steven Price
Subject: [PATCH] mm/hmm: Simplify hmm_vma_walk_pud slightly
Date: Thu, 12 Mar 2020 10:28:13 +0000
Message-Id: <20200312102813.56699-1-steven.price@arm.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <5bd778fa-51e5-3e0c-d9bb-b38539b03c8d@arm.com>
References: <5bd778fa-51e5-3e0c-d9bb-b38539b03c8d@arm.com>
MIME-Version: 1.0
By refactoring to deal with the !pud_huge(pud) || !pud_devmap(pud)
condition early, it's possible to remove the 'ret' variable and remove a
level of indentation from half the function, making the code easier to
read.

No functional change.

Signed-off-by: Steven Price
Signed-off-by: Jason Gunthorpe
---
Thanks to Jason's changes there were only two code paths left using the
out_unlock label so it seemed like a good opportunity to refactor.
---
 mm/hmm.c | 69 ++++++++++++++++++++++++++------------------------------
 1 file changed, 32 insertions(+), 37 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index ca33d086bdc1..0117c86426d1 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -466,8 +466,10 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 	struct hmm_range *range = hmm_vma_walk->range;
 	unsigned long addr = start;
 	pud_t pud;
-	int ret = 0;
 	spinlock_t *ptl = pud_trans_huge_lock(pudp, walk->vma);
+	unsigned long i, npages, pfn;
+	uint64_t *pfns, cpu_flags;
+	bool fault, write_fault;
 
 	if (!ptl)
 		return 0;
@@ -481,50 +483,43 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 		return hmm_vma_walk_hole(start, end, -1, walk);
 	}
 
-	if (pud_huge(pud) && pud_devmap(pud)) {
-		unsigned long i, npages, pfn;
-		uint64_t *pfns, cpu_flags;
-		bool fault, write_fault;
+	if (!pud_huge(pud) || !pud_devmap(pud)) {
+		/* Ask for the PUD to be split */
+		walk->action = ACTION_SUBTREE;
+		spin_unlock(ptl);
+		return 0;
+	}
 
-		if (!pud_present(pud)) {
-			spin_unlock(ptl);
-			return hmm_vma_walk_hole(start, end, -1, walk);
-		}
+	if (!pud_present(pud)) {
+		spin_unlock(ptl);
+		return hmm_vma_walk_hole(start, end, -1, walk);
+	}
 
-		i = (addr - range->start) >> PAGE_SHIFT;
-		npages = (end - addr) >> PAGE_SHIFT;
-		pfns = &range->pfns[i];
+	i = (addr - range->start) >> PAGE_SHIFT;
+	npages = (end - addr) >> PAGE_SHIFT;
+	pfns = &range->pfns[i];
 
-		cpu_flags = pud_to_hmm_pfn_flags(range, pud);
-		hmm_range_need_fault(hmm_vma_walk, pfns, npages,
-				     cpu_flags, &fault, &write_fault);
-		if (fault || write_fault) {
-			spin_unlock(ptl);
-			return hmm_vma_walk_hole_(addr, end, fault, write_fault,
-						  walk);
-		}
+	cpu_flags = pud_to_hmm_pfn_flags(range, pud);
+	hmm_range_need_fault(hmm_vma_walk, pfns, npages,
+			     cpu_flags, &fault, &write_fault);
+	if (fault || write_fault) {
+		spin_unlock(ptl);
+		return hmm_vma_walk_hole_(addr, end, fault, write_fault, walk);
+	}
 
-		pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
-		for (i = 0; i < npages; ++i, ++pfn) {
-			hmm_vma_walk->pgmap = get_dev_pagemap(pfn,
-					      hmm_vma_walk->pgmap);
-			if (unlikely(!hmm_vma_walk->pgmap)) {
-				ret = -EBUSY;
-				goto out_unlock;
-			}
-			pfns[i] = hmm_device_entry_from_pfn(range, pfn) |
-				  cpu_flags;
+	pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
+	for (i = 0; i < npages; ++i, ++pfn) {
+		hmm_vma_walk->pgmap = get_dev_pagemap(pfn, hmm_vma_walk->pgmap);
+		if (unlikely(!hmm_vma_walk->pgmap)) {
+			spin_unlock(ptl);
+			return -EBUSY;
 		}
-		hmm_vma_walk->last = end;
-		goto out_unlock;
+		pfns[i] = hmm_device_entry_from_pfn(range, pfn) | cpu_flags;
 	}
+	hmm_vma_walk->last = end;
 
-	/* Ask for the PUD to be split */
-	walk->action = ACTION_SUBTREE;
-
-out_unlock:
 	spin_unlock(ptl);
-	return ret;
+	return 0;
 }
 #else
 #define hmm_vma_walk_pud	NULL