From patchwork Wed Mar 6 11:24:32 2019
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 10841667
From: ira.weiny@intel.com
To: Jason Gunthorpe, Leon Romanovsky, Haggai Eran, Artemy Kovalyov
Cc: John Hubbard, linux-rdma@vger.kernel.org, Ira Weiny
Subject: [RFC PATCH V2] IB/core: Ensure an invalidate_range callback on ODP MR
Date: Wed, 6 Mar 2019 03:24:32 -0800
Message-Id: <20190306112432.30842-1-ira.weiny@intel.com>
X-Mailer: git-send-email 2.20.1

From: Ira Weiny

No device supports an ODP MR without an invalidate_range callback. Warn
on any device which attempts to support ODP without supplying this
callback. Then we can remove the checks for the callback within the
code.

This stems from the discussion

https://www.spinics.net/lists/linux-rdma/msg76460.html

...which concluded this code was no longer necessary.
Compile tested only

CC: Haggai Eran
CC: Artemy Kovalyov
CC: John Hubbard
Signed-off-by: Ira Weiny
---
Changes from V1:
	remove stored_page logic which was also unnecessary
	remove unnecessary put_page

 drivers/infiniband/core/umem.c     |  5 +++++
 drivers/infiniband/core/umem_odp.c | 13 +++----------
 2 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index fe5551562dbc..89a7d57f9fa5 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -138,6 +138,11 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
 	mmgrab(mm);
 
 	if (access & IB_ACCESS_ON_DEMAND) {
+		if (WARN_ON_ONCE(!context->invalidate_range)) {
+			ret = -EINVAL;
+			goto umem_kfree;
+		}
+
 		ret = ib_umem_odp_get(to_ib_umem_odp(umem), access);
 		if (ret)
 			goto umem_kfree;
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 577f1b12bff4..ec0ed9190ab6 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -241,7 +241,7 @@ static struct ib_ucontext_per_mm *alloc_per_mm(struct ib_ucontext *ctx,
 	per_mm->mm = mm;
 	per_mm->umem_tree = RB_ROOT_CACHED;
 	init_rwsem(&per_mm->umem_rwsem);
-	per_mm->active = ctx->invalidate_range;
+	per_mm->active = true;
 
 	rcu_read_lock();
 	per_mm->tgid = get_task_pid(current->group_leader, PIDTYPE_PID);
@@ -503,7 +503,6 @@ static int ib_umem_odp_map_dma_single_page(
 	struct ib_umem *umem = &umem_odp->umem;
 	struct ib_device *dev = umem->context->device;
 	dma_addr_t dma_addr;
-	int stored_page = 0;
 	int remove_existing_mapping = 0;
 	int ret = 0;
 
@@ -528,7 +527,6 @@ static int ib_umem_odp_map_dma_single_page(
 		umem_odp->dma_list[page_index] = dma_addr | access_mask;
 		umem_odp->page_list[page_index] = page;
 		umem->npages++;
-		stored_page = 1;
 	} else if (umem_odp->page_list[page_index] == page) {
 		umem_odp->dma_list[page_index] |= access_mask;
 	} else {
@@ -540,11 +538,9 @@ static int ib_umem_odp_map_dma_single_page(
 	}
 
 out:
-	/* On Demand Paging - avoid pinning the page */
-	if (umem->context->invalidate_range || !stored_page)
-		put_page(page);
+	put_page(page);
 
-	if (remove_existing_mapping && umem->context->invalidate_range) {
+	if (remove_existing_mapping) {
 		ib_umem_notifier_start_account(umem_odp);
 		umem->context->invalidate_range(
 			umem_odp,
@@ -751,9 +747,6 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt,
 				 */
 				set_page_dirty(head_page);
 			}
-			/* on demand pinning support */
-			if (!umem->context->invalidate_range)
-				put_page(page);
 			umem_odp->page_list[idx] = NULL;
 			umem_odp->dma_list[idx] = 0;
 			umem->npages--;
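
[Editorial note: for readers outside the RDMA tree, the pattern this patch
applies is worth spelling out: validate a required callback once at
registration time, so every later path may assume it is present and drop
its per-call checks. Below is a minimal, self-contained user-space sketch
of that pattern, not kernel code; the names ucontext, umem_get(), and
warn_on_once() are hypothetical stand-ins for the kernel's ib_ucontext,
ib_umem_get(), and WARN_ON_ONCE().]

#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct ucontext {
	/* Required whenever on-demand paging is requested. */
	void (*invalidate_range)(unsigned long start, unsigned long end);
};

/* Crude model of WARN_ON_ONCE(): report only the first violation. */
static bool warn_on_once(bool cond, const char *msg)
{
	static bool warned;

	if (cond && !warned) {
		warned = true;
		fprintf(stderr, "WARNING: %s\n", msg);
	}
	return cond;
}

/*
 * Model of the registration path: reject an ODP registration up front
 * when the callback is missing, so the mapping/unmapping paths can
 * assume invalidate_range != NULL and need no per-page checks.
 */
static int umem_get(struct ucontext *ctx, bool on_demand)
{
	if (on_demand &&
	    warn_on_once(!ctx->invalidate_range,
			 "ODP requested without invalidate_range"))
		return -EINVAL;
	return 0;	/* registration may proceed */
}

int main(void)
{
	struct ucontext bad = { .invalidate_range = NULL };

	/* Fails once, loudly, instead of misbehaving later. */
	return umem_get(&bad, true) == -EINVAL ? 0 : 1;
}

[The design point mirrors the linked discussion: once registration
guarantees the callback, the hot paths no longer need the stored_page
bookkeeping or the conditional put_page() calls this patch deletes.]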