From patchwork Thu Oct  3 20:18:52 2019
X-Patchwork-Submitter: Davidlohr Bueso <dave@stgolabs.net>
X-Patchwork-Id: 11173283
From: Davidlohr Bueso <dave@stgolabs.net>
To: akpm@linux-foundation.org
Cc: walken@google.com, peterz@infradead.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, dri-devel@lists.freedesktop.org,
 linux-rdma@vger.kernel.org, dave@stgolabs.net,
 Mike Marciniszyn <mike.marciniszyn@intel.com>,
 Dennis Dalessandro <dennis.dalessandro@intel.com>,
 Doug Ledford <dledford@redhat.com>,
 Davidlohr Bueso <dbueso@suse.de>
Subject: [PATCH 05/11] IB/hfi1: convert __mmu_int_rb to half closed intervals
Date: Thu,  3 Oct 2019 13:18:52 -0700
Message-Id: <20191003201858.11666-6-dave@stgolabs.net>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191003201858.11666-1-dave@stgolabs.net>
References: <20191003201858.11666-1-dave@stgolabs.net>

The __mmu_int_rb interval tree really wants [a, b) intervals, not
the fully closed ones it uses currently. As such, convert it to use the
new interval_tree_gen.h.

Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
Cc: Doug Ledford <dledford@redhat.com>
Cc: linux-rdma@vger.kernel.org
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Reviewed-by: Michel Lespinasse <walken@google.com>
---
 drivers/infiniband/hw/hfi1/mmu_rb.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c
index 14d2a90964c3..fb6382b2d44e 100644
--- a/drivers/infiniband/hw/hfi1/mmu_rb.c
+++ b/drivers/infiniband/hw/hfi1/mmu_rb.c
@@ -47,7 +47,7 @@
 #include <linux/list.h>
 #include <linux/rculist.h>
 #include <linux/mmu_notifier.h>
-#include <linux/interval_tree_generic.h>
+#include <linux/interval_tree_gen.h>
 
 #include "mmu_rb.h"
 #include "trace.h"
@@ -89,7 +89,7 @@ static unsigned long mmu_node_start(struct mmu_rb_node *node)
 
 static unsigned long mmu_node_last(struct mmu_rb_node *node)
 {
-	return PAGE_ALIGN(node->addr + node->len) - 1;
+	return PAGE_ALIGN(node->addr + node->len);
 }
 
 int hfi1_mmu_rb_register(void *ops_arg, struct mm_struct *mm,
@@ -195,13 +195,13 @@ static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *handler,
 	trace_hfi1_mmu_rb_search(addr, len);
 	if (!handler->ops->filter) {
 		node = __mmu_int_rb_iter_first(&handler->root, addr,
-					       (addr + len) - 1);
+					       addr + len);
 	} else {
 		for (node = __mmu_int_rb_iter_first(&handler->root, addr,
-						    (addr + len) - 1);
+						    addr + len);
 		     node;
 		     node = __mmu_int_rb_iter_next(node, addr,
-						   (addr + len) - 1)) {
+						   addr + len)) {
 			if (handler->ops->filter(node, addr, len))
 				return node;
 		}
@@ -293,11 +293,10 @@ static int mmu_notifier_range_start(struct mmu_notifier *mn,
 	bool added = false;
 
 	spin_lock_irqsave(&handler->lock, flags);
-	for (node = __mmu_int_rb_iter_first(root, range->start, range->end-1);
+	for (node = __mmu_int_rb_iter_first(root, range->start, range->end);
 	     node; node = ptr) {
 		/* Guard against node removal. */
-		ptr = __mmu_int_rb_iter_next(node, range->start,
-					     range->end - 1);
+		ptr = __mmu_int_rb_iter_next(node, range->start, range->end);
 		trace_hfi1_mmu_mem_invalidate(node->addr, node->len);
 		if (handler->ops->invalidate(handler->ops_arg, node)) {
 			__mmu_int_rb_remove(node, root);
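
As a side note, here is a quick way to see why the half-open convention
lets every call site above drop the error-prone "- 1". The standalone
userspace sketch below is not kernel code: PAGE_SIZE, the node/query
values and the two overlap helpers are simplified stand-ins for the
rbtree-backed __mmu_int_rb_* iterators. It shows a closed-interval
lookup spuriously matching an adjacent node when a caller forgets the
"- 1", while the [a, b) form has no such adjustment to forget.

/* sketch.c - illustration only, not part of the patch */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

/* Closed convention: node covers [start, last], query is [start, last]. */
static bool overlap_closed(unsigned long nstart, unsigned long nlast,
			   unsigned long qstart, unsigned long qlast)
{
	return nstart <= qlast && qstart <= nlast;
}

/* Half-open convention: node covers [start, end), query is [start, end). */
static bool overlap_half_open(unsigned long nstart, unsigned long nend,
			      unsigned long qstart, unsigned long qend)
{
	return nstart < qend && qstart < nend;
}

int main(void)
{
	/* Hypothetical tracked node: one page starting right after the query. */
	unsigned long nstart = 0x3000, nlen = PAGE_SIZE;
	unsigned long nlast = PAGE_ALIGN(nstart + nlen) - 1;	/* closed    */
	unsigned long nend  = PAGE_ALIGN(nstart + nlen);	/* half-open */

	/* Invalidation of [0x1000, 0x3000): adjacent to, not overlapping, the node. */
	unsigned long qstart = 0x1000, qend = 0x3000;

	printf("closed, correct   (- 1 applied): %d\n",
	       overlap_closed(nstart, nlast, qstart, qend - 1));
	printf("closed, forgotten (- 1 missing): %d  <- spurious hit\n",
	       overlap_closed(nstart, nlast, qstart, qend));
	printf("half-open:                       %d\n",
	       overlap_half_open(nstart, nend, qstart, qend));
	return 0;
}

Building with "gcc -Wall sketch.c && ./a.out" prints 0, 1 and 0: only
the closed-interval lookup that forgot the "- 1" reports an overlap.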