From patchwork Mon Mar  4 18:41:08 2019
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 10838867
From: ira.weiny@intel.com
To: Jason Gunthorpe, Leon Romanovsky
Cc: linux-rdma@vger.kernel.org, Ira Weiny, Haggai Eran,
    Artemy Kovalyov, John Hubbard
Subject: [RFC PATCH] IB/core: Ensure an invalidate_range callback on ODP MR
Date: Mon, 4 Mar 2019 10:41:08 -0800
Message-Id: <20190304184108.29340-1-ira.weiny@intel.com>
X-Mailer: git-send-email 2.20.1
X-Mailing-List: linux-rdma@vger.kernel.org

From: Ira Weiny

No device supports ODP MR without an invalidate_range callback. Warn
on any device which attempts to support ODP without supplying this
callback. Then we can remove the checks for the callback within the
code.

This stems from the discussion

https://www.spinics.net/lists/linux-rdma/msg76460.html

...which concluded this code was no longer necessary.
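For context, a minimal sketch of the driver side this patch now
requires: any device advertising ODP must set the callback when it
allocates its ucontext (mlx5, for example, wires this up in its own
ucontext allocation path). The my_* names below are hypothetical and
the alloc_ucontext signature is assumed from this kernel era; this is
an illustration, not part of the patch.

#include <linux/slab.h>
#include <rdma/ib_verbs.h>
#include <rdma/ib_umem_odp.h>

/* Hypothetical driver context; only the embedded ib_ucontext matters. */
struct my_ucontext {
	struct ib_ucontext ibucontext;
};

/* Tear down HW translations covering [start, end) for this ODP MR. */
static void my_invalidate_range(struct ib_umem_odp *umem_odp,
				unsigned long start, unsigned long end)
{
	/* device-specific invalidation would go here */
}

static struct ib_ucontext *my_alloc_ucontext(struct ib_device *ibdev,
					     struct ib_udata *udata)
{
	struct my_ucontext *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);

	if (!ctx)
		return ERR_PTR(-ENOMEM);
	/*
	 * Without this assignment, an IB_ACCESS_ON_DEMAND registration
	 * now trips the WARN_ON_ONCE() in ib_umem_get() and fails with
	 * -EINVAL instead of silently taking the pinned-page path.
	 */
	ctx->ibucontext.invalidate_range = &my_invalidate_range;
	return &ctx->ibucontext;
}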
CC: Haggai Eran
CC: Artemy Kovalyov
CC: John Hubbard
Signed-off-by: Ira Weiny
---
 drivers/infiniband/core/umem.c     |  5 +++++
 drivers/infiniband/core/umem_odp.c | 11 ++++-------
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index fe5551562dbc..89a7d57f9fa5 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -138,6 +138,11 @@ struct ib_umem *ib_umem_get(struct ib_udata *udata, unsigned long addr,
 	mmgrab(mm);
 
 	if (access & IB_ACCESS_ON_DEMAND) {
+		if (WARN_ON_ONCE(!context->invalidate_range)) {
+			ret = -EINVAL;
+			goto umem_kfree;
+		}
+
 		ret = ib_umem_odp_get(to_ib_umem_odp(umem), access);
 		if (ret)
 			goto umem_kfree;
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 6013cf0b8f4f..aa7f95633581 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -240,7 +240,7 @@ static struct ib_ucontext_per_mm *alloc_per_mm(struct ib_ucontext *ctx,
 	per_mm->mm = mm;
 	per_mm->umem_tree = RB_ROOT_CACHED;
 	init_rwsem(&per_mm->umem_rwsem);
-	per_mm->active = ctx->invalidate_range;
+	per_mm->active = true;
 
 	rcu_read_lock();
 	per_mm->tgid = get_task_pid(current->group_leader, PIDTYPE_PID);
@@ -539,11 +539,10 @@ static int ib_umem_odp_map_dma_single_page(
 	}
 
 out:
-	/* On Demand Paging - avoid pinning the page */
-	if (umem->context->invalidate_range || !stored_page)
+	if (!stored_page)
 		put_page(page);
 
-	if (remove_existing_mapping && umem->context->invalidate_range) {
+	if (remove_existing_mapping) {
 		ib_umem_notifier_start_account(umem_odp);
 		umem->context->invalidate_range(
 			umem_odp,
@@ -748,9 +747,7 @@ void ib_umem_odp_unmap_dma_pages(struct ib_umem_odp *umem_odp, u64 virt,
 			 */
 			set_page_dirty(head_page);
 		}
-		/* on demand pinning support */
-		if (!umem->context->invalidate_range)
-			put_page(page);
+		put_page(page);
 		umem_odp->page_list[idx] = NULL;
 		umem_odp->dma_list[idx] = 0;
 		umem->npages--;
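
A hedged userspace sketch of the visible behavior change: with the
WARN_ON_ONCE() in place, an ODP registration on a context lacking the
callback fails cleanly with EINVAL rather than proceeding without
invalidation support. Only standard libibverbs calls are used; device
selection and most error handling are trimmed for brevity.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <infiniband/verbs.h>

int main(void)
{
	struct ibv_device **list = ibv_get_device_list(NULL);

	if (!list || !list[0])
		return 1;

	struct ibv_context *ctx = ibv_open_device(list[0]);
	struct ibv_pd *pd = ibv_alloc_pd(ctx);
	size_t len = 1UL << 20;
	void *buf = malloc(len);

	/*
	 * ODP registration: no pages are pinned up front; the kernel
	 * relies on the driver's invalidate_range callback instead.
	 */
	struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
				       IBV_ACCESS_LOCAL_WRITE |
				       IBV_ACCESS_ON_DEMAND);
	if (!mr)
		printf("ODP ibv_reg_mr failed: %s\n", strerror(errno));
	else
		ibv_dereg_mr(mr);

	ibv_dealloc_pd(pd);
	ibv_close_device(ctx);
	ibv_free_device_list(list);
	free(buf);
	return 0;
}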