From patchwork Thu Oct  3 20:18:55 2019
X-Patchwork-Submitter: Davidlohr Bueso
X-Patchwork-Id: 11173289
From: Davidlohr Bueso <dave@stgolabs.net>
To: akpm@linux-foundation.org
Cc: walken@google.com, peterz@infradead.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, dri-devel@lists.freedesktop.org,
    linux-rdma@vger.kernel.org, dave@stgolabs.net,
    Davidlohr Bueso <dbueso@suse.de>
Subject: [PATCH 08/11] mm: convert vma_interval_tree to half closed intervals
Date: Thu, 3 Oct 2019 13:18:55 -0700
Message-Id: <20191003201858.11666-9-dave@stgolabs.net>
In-Reply-To: <20191003201858.11666-1-dave@stgolabs.net>
References: <20191003201858.11666-1-dave@stgolabs.net>

The vma and anon vma interval trees really want [a, b) intervals, not
fully closed ones. As such, convert them to use the new
interval_tree_gen.h. Because of vma_last_pgoff(), the conversion is
quite straightforward.
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
---
 include/linux/mm.h | 4 ++--
 mm/interval_tree.c | 4 ++--
 mm/memory.c        | 2 +-
 mm/nommu.c         | 2 +-
 mm/rmap.c          | 6 +++---
 5 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 53f9784d917d..3bc06e1de40c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2249,7 +2249,7 @@ struct vm_area_struct *vma_interval_tree_iter_next(struct vm_area_struct *node,
 	     vma; vma = vma_interval_tree_iter_next(vma, start, last))
 
 #define vma_interval_tree_foreach_stab(vma, root, start)		\
-	vma_interval_tree_foreach(vma, root, start, start)
+	vma_interval_tree_foreach(vma, root, start, start + 1)
 
 void anon_vma_interval_tree_insert(struct anon_vma_chain *node,
 				   struct rb_root_cached *root);
@@ -2269,7 +2269,7 @@ void anon_vma_interval_tree_verify(struct anon_vma_chain *node);
 	     avc; avc = anon_vma_interval_tree_iter_next(avc, start, last))
 
 #define anon_vma_interval_tree_foreach_stab(vma, root, start)		\
-	anon_vma_interval_tree_foreach(vma, root, start, start)
+	anon_vma_interval_tree_foreach(vma, root, start, start + 1)
 
 /* mmap.c */
 extern int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin);
diff --git a/mm/interval_tree.c b/mm/interval_tree.c
index 11c75fb07584..1f8a7c122dd7 100644
--- a/mm/interval_tree.c
+++ b/mm/interval_tree.c
@@ -8,7 +8,7 @@
 #include <linux/mm.h>
 #include <linux/fs.h>
 #include <linux/rmap.h>
-#include <linux/interval_tree_generic.h>
+#include <linux/interval_tree_gen.h>
 
 static inline unsigned long vma_start_pgoff(struct vm_area_struct *v)
 {
@@ -17,7 +17,7 @@ static inline unsigned long vma_start_pgoff(struct vm_area_struct *v)
 
 static inline unsigned long vma_last_pgoff(struct vm_area_struct *v)
 {
-	return v->vm_pgoff + vma_pages(v) - 1;
+	return v->vm_pgoff + vma_pages(v);
 }
 
 INTERVAL_TREE_DEFINE(struct vm_area_struct, shared.rb,
diff --git a/mm/memory.c b/mm/memory.c
index b1ca51a079f2..8f6978abf64a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2679,7 +2679,7 @@ void unmap_mapping_pages(struct address_space *mapping, pgoff_t start,
 	details.check_mapping = even_cows ? NULL : mapping;
 	details.first_index = start;
-	details.last_index = start + nr - 1;
+	details.last_index = start + nr;
 	if (details.last_index < details.first_index)
 		details.last_index = ULONG_MAX;
diff --git a/mm/nommu.c b/mm/nommu.c
index 99b7ec318824..284c2a948d79 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -1793,7 +1793,7 @@ int nommu_shrink_inode_mappings(struct inode *inode, size_t size,
 	size_t r_size, r_top;
 
 	low = newsize >> PAGE_SHIFT;
-	high = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+	high = (size + PAGE_SIZE) >> PAGE_SHIFT;
 
 	down_write(&nommu_region_sem);
 	i_mmap_lock_read(inode->i_mapping);
diff --git a/mm/rmap.c b/mm/rmap.c
index d9a23bb773bf..48ca7d1a06b5 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1826,7 +1826,7 @@ static void rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
 		return;
 
 	pgoff_start = page_to_pgoff(page);
-	pgoff_end = pgoff_start + hpage_nr_pages(page) - 1;
+	pgoff_end = pgoff_start + hpage_nr_pages(page);
 	anon_vma_interval_tree_foreach(avc, &anon_vma->rb_root,
 			pgoff_start, pgoff_end) {
 		struct vm_area_struct *vma = avc->vma;
@@ -1879,11 +1879,11 @@ static void rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
 		return;
 
 	pgoff_start = page_to_pgoff(page);
-	pgoff_end = pgoff_start + hpage_nr_pages(page) - 1;
+	pgoff_end = pgoff_start + hpage_nr_pages(page);
 	if (!locked)
 		i_mmap_lock_read(mapping);
 	vma_interval_tree_foreach(vma, &mapping->i_mmap,
-			pgoff_start, pgoff_end) {
+				  pgoff_start, pgoff_end) {
 		unsigned long address = vma_address(page, vma);
 
 		cond_resched();