From patchwork Fri Jan 17 23:22:52 2020
X-Patchwork-Submitter: Wei Yang <richardw.yang@linux.intel.com>
X-Patchwork-Id: 11339991
From: Wei Yang <richardw.yang@linux.intel.com>
To: akpm@linux-foundation.org, dan.j.williams@intel.com,
	aneesh.kumar@linux.ibm.com, kirill@shutemov.name,
	yang.shi@linux.alibaba.com, richardw.yang@linux.intel.com,
	thellstrom@vmware.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 3/5] mm/mremap: use pmd_addr_end to calculate next in move_page_tables()
Date: Sat, 18 Jan 2020 07:22:52 +0800
Message-Id: <20200117232254.2792-4-richardw.yang@linux.intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200117232254.2792-1-richardw.yang@linux.intel.com>
References:
 <20200117232254.2792-1-richardw.yang@linux.intel.com>

Use the generic helper pmd_addr_end() instead of open-coding the
calculation.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
---
 mm/mremap.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/mm/mremap.c b/mm/mremap.c
index c2af8ba4ba43..a258914f3ee1 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -253,11 +253,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 
 	for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
 		cond_resched();
-		next = (old_addr + PMD_SIZE) & PMD_MASK;
-		/* even if next overflowed, extent below will be ok */
+		next = pmd_addr_end(old_addr, old_end);
 		extent = next - old_addr;
-		if (extent > old_end - old_addr)
-			extent = old_end - old_addr;
 		old_pmd = get_old_pmd(vma->vm_mm, old_addr);
 		if (!old_pmd)
 			continue;
@@ -301,7 +298,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		if (pte_alloc(new_vma->vm_mm, new_pmd))
 			break;
 
-		next = (new_addr + PMD_SIZE) & PMD_MASK;
+		next = pmd_addr_end(new_addr, new_addr + len);
 		if (extent > next - new_addr)
 			extent = next - new_addr;
 		move_ptes(vma, old_pmd, old_addr, old_addr + extent, new_vma,
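
For reference, the generic fallback of pmd_addr_end() (quoted here from
include/asm-generic/pgtable.h as it stood around v5.5; architectures
may provide their own definition) already folds both the PMD_MASK
rounding and the clamp against the end address into one expression,
which is why the explicit comparison and the overflow comment can be
dropped:

	#ifndef pmd_addr_end
	#define pmd_addr_end(addr, end)						\
	({	unsigned long __boundary = ((addr) + PMD_SIZE) & PMD_MASK;	\
		(__boundary - 1 < (end) - 1)? __boundary: (end);		\
	})
	#endif

The "- 1" on both sides of the comparison keeps the result correct even
when the rounded-up boundary wraps to 0 at the top of the address
space, which is the overflow case the removed comment referred to.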