From patchwork Mon Jan 13 22:47:01 2020
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 11331037
From: Ralph Campbell
CC: Jerome Glisse, John Hubbard, Christoph Hellwig,
	Jason Gunthorpe, Andrew Morton, Ben Skeggs, Shuah Khan, Ralph Campbell
Subject: [PATCH v6 4/6] mm/mmu_notifier: add mmu_interval_notifier_find()
Date: Mon, 13 Jan 2020 14:47:01 -0800
Message-ID: <20200113224703.5917-5-rcampbell@nvidia.com>
In-Reply-To: <20200113224703.5917-1-rcampbell@nvidia.com>
References: <20200113224703.5917-1-rcampbell@nvidia.com>

Device drivers may or may not have a convenient range-based data
structure for looking up the intervals they have registered with the
mmu interval notifiers. Rather than forcing drivers to duplicate the
interval tree, provide an API to look up registered intervals, along
with accessor functions that return the start and last address of an
interval.
Signed-off-by: Ralph Campbell
---
 include/linux/mmu_notifier.h | 15 +++++++++++++++
 mm/mmu_notifier.c            | 33 +++++++++++++++++++++++++++++++++
 2 files changed, 48 insertions(+)

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 0ce59b4f22c2..cdbbad13b278 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -314,6 +314,21 @@ void mmu_interval_notifier_remove(struct mmu_interval_notifier *mni);
 void mmu_interval_notifier_put(struct mmu_interval_notifier *mni);
 void mmu_interval_notifier_update(struct mmu_interval_notifier *mni,
 				  unsigned long start, unsigned long last);
+struct mmu_interval_notifier *mmu_interval_notifier_find(struct mm_struct *mm,
+				const struct mmu_interval_notifier_ops *ops,
+				unsigned long start, unsigned long last);
+
+static inline unsigned long mmu_interval_notifier_start(
+				struct mmu_interval_notifier *mni)
+{
+	return mni->interval_tree.start;
+}
+
+static inline unsigned long mmu_interval_notifier_last(
+				struct mmu_interval_notifier *mni)
+{
+	return mni->interval_tree.last;
+}
 
 /**
  * mmu_interval_set_seq - Save the invalidation sequence
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 47ad9cc89aab..4efecc0f13cb 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -1171,6 +1171,39 @@ void mmu_interval_notifier_update(struct mmu_interval_notifier *mni,
 }
 EXPORT_SYMBOL_GPL(mmu_interval_notifier_update);
 
+struct mmu_interval_notifier *mmu_interval_notifier_find(struct mm_struct *mm,
+				const struct mmu_interval_notifier_ops *ops,
+				unsigned long start, unsigned long last)
+{
+	struct mmu_notifier_mm *mmn_mm = mm->mmu_notifier_mm;
+	struct interval_tree_node *node;
+	struct mmu_interval_notifier *mni;
+	struct mmu_interval_notifier *res = NULL;
+
+	spin_lock(&mmn_mm->lock);
+	node = interval_tree_iter_first(&mmn_mm->itree, start, last);
+	if (node) {
+		mni = container_of(node, struct mmu_interval_notifier,
+				   interval_tree);
+		while (true) {
+			if (mni->ops == ops) {
+				res = mni;
+				break;
+			}
+			node = interval_tree_iter_next(&mni->interval_tree,
+						       start, last);
+			if (!node)
+				break;
+			mni = container_of(node, struct mmu_interval_notifier,
+					   interval_tree);
+		}
+	}
+	spin_unlock(&mmn_mm->lock);
+
+	return res;
+}
+EXPORT_SYMBOL_GPL(mmu_interval_notifier_find);
+
 /**
  * mmu_notifier_synchronize - Ensure all mmu_notifiers are freed
 *