From patchwork Wed Mar 13 19:05:59 2019
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 10852073
X-Patchwork-Delegate: jgg@ziepe.ca
From: ira.weiny@intel.com
To: Jason Gunthorpe , Leon Romanovsky
Cc: linux-rdma@vger.kernel.org, Ira Weiny , Artemy Kovalyov ,
	John Hubbard , Haggai Eran
Subject: [PATCH] IB/core: Ensure an invalidate_range callback on ODP MR
Date: Wed, 13 Mar 2019 12:05:59 -0700
Message-Id: <20190313190559.8068-1-ira.weiny@intel.com>
X-Mailer: git-send-email 2.20.1

From: Ira Weiny

No device supports ODP MR without an invalidate_range callback.

Warn on any device which attempts to support ODP without supplying
this callback.  Then we can remove the checks for the callback within
the code.

This stems from the discussion

https://www.spinics.net/lists/linux-rdma/msg76460.html

...which concluded this code was no longer necessary.
CC: Artemy Kovalyov
Acked-by: John Hubbard
Reviewed-by: Haggai Eran
Signed-off-by: Ira Weiny
---
 drivers/infiniband/core/umem.c     |  5 +++++
 drivers/infiniband/core/umem_odp.c | 13 +++----------
 2 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index fe5551562dbc..89a7d57f9fa5 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -138,6 +138,11 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
 	mmgrab(mm);
 
 	if (access & IB_ACCESS_ON_DEMAND) {
+		if (WARN_ON_ONCE(!context->invalidate_range)) {
+			ret = -EINVAL;
+			goto umem_kfree;
+		}
+
 		ret = ib_umem_odp_get(to_ib_umem_odp(umem), access);
 		if (ret)
 			goto umem_kfree;
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index e6ec79ad9cc8..6f8c36fcda78 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -241,7 +241,7 @@ static struct ib_ucontext_per_mm *alloc_per_mm(struct ib_ucontext *ctx,
 	per_mm->mm = mm;
 	per_mm->umem_tree = RB_ROOT_CACHED;
 	init_rwsem(&per_mm->umem_rwsem);
-	per_mm->active = ctx->invalidate_range;
+	per_mm->active = true;
 
 	rcu_read_lock();
 	per_mm->tgid = get_task_pid(current->group_leader, PIDTYPE_PID);
@@ -503,7 +503,6 @@ static int ib_umem_odp_map_dma_single_page(
 	struct ib_umem *umem = &umem_odp->umem;
 	struct ib_device *dev = umem->context->device;
 	dma_addr_t dma_addr;
-	int stored_page = 0;
 	int remove_existing_mapping = 0;
 	int ret = 0;
 
@@ -528,7 +527,6 @@ static int ib_umem_odp_map_dma_single_page(
 		umem_odp->dma_list[page_index] = dma_addr | access_mask;
 		umem_odp->page_list[page_index] = page;
 		umem->npages++;
-		stored_page = 1;
 	} else if (umem_odp->page_list[page_index] == page) {
 		umem_odp->dma_list[page_index] |= access_mask;
 	} else {
@@ -540,11 +538,9 @@ static int ib_umem_odp_map_dma_single_page(
 	}
 
 out:
-	/* On Demand Paging - avoid pinning the page */
-	if (umem->context->invalidate_range || !stored_page)
-		put_page(page);
+	put_page(page);
 
-	if (remove_existing_mapping && umem->context->invalidate_range) {
+	if (remove_existing_mapping) {
 		ib_umem_notifier_start_account(umem_odp);
 		umem->context->invalidate_range(
 			umem_odp,
@@ -754,9 +750,6 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt,
 				 */
 				set_page_dirty(head_page);
 			}
-			/* on demand pinning support */
-			if (!umem->context->invalidate_range)
-				put_page(page);
 			umem_odp->page_list[idx] = NULL;
 			umem_odp->dma_list[idx] = 0;
 			umem->npages--;
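
For readers skimming the diff: the net effect is a single early guard at
registration time, after which every later "does invalidate_range exist?"
check becomes dead code and can be deleted.  Below is a minimal,
self-contained sketch of that guard-once-then-assume pattern.  It is not
the kernel code itself; the struct and function names (ctx_sketch,
register_mr_sketch, has_invalidate_range) are illustrative placeholders,
not the real ib_ucontext/ib_umem types.

/* Sketch of the pattern the patch applies; all names are hypothetical. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct ctx_sketch {
	bool has_invalidate_range;	/* stands in for ctx->invalidate_range */
};

/* Reject an ODP registration up front when the callback is missing. */
static int register_mr_sketch(struct ctx_sketch *ctx, bool odp_requested)
{
	if (odp_requested && !ctx->has_invalidate_range)
		return -EINVAL;	/* the patch pairs this with WARN_ON_ONCE() */

	/*
	 * From here on, an ODP registration implies the callback exists,
	 * which is why the later "if (invalidate_range)" checks in
	 * umem_odp.c can simply be dropped.
	 */
	return 0;
}

int main(void)
{
	struct ctx_sketch ctx = { .has_invalidate_range = false };

	printf("odp without callback -> %d\n", register_mr_sketch(&ctx, true));
	return 0;
}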