From patchwork Fri Sep 2 00:09:22 2016
X-Patchwork-Submitter: Knut Omang
X-Patchwork-Id: 9310169
X-Mailing-List: linux-rdma@vger.kernel.org
From: Knut Omang
To: Doug Ledford
Cc: linux-rdma@vger.kernel.org, Knut Omang
Subject: [PATCH 2/9] ib_umem: Add a new, more generic ib_umem_get_attrs
Date: Fri, 2 Sep 2016 02:09:22 +0200
Message-Id: <1472774969-18997-3-git-send-email-knut.omang@oracle.com>
In-Reply-To: <1472774969-18997-1-git-send-email-knut.omang@oracle.com>
References: <1472774969-18997-1-git-send-email-knut.omang@oracle.com>

This call allows the full range of DMA attributes, as well as the DMA
direction, to be supplied by the caller; it is a straightforward
refactoring of the old ib_umem_get. Reimplement ib_umem_get as a
trivial wrapper around the new, generic call.
Signed-off-by: Knut Omang
---
 drivers/infiniband/core/umem.c | 23 +++++++++++++++--------
 include/rdma/ib_umem.h         | 15 ++++++++++++++-
 2 files changed, 29 insertions(+), 9 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index c68746c..699a0f7 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -52,7 +52,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 
 	if (umem->nmap > 0)
 		ib_dma_unmap_sg(dev, umem->sg_head.sgl,
 				umem->nmap,
-				DMA_BIDIRECTIONAL);
+				umem->dir);
 
 	for_each_sg(umem->sg_head.sgl, sg, umem->npages, i) {
@@ -82,6 +82,17 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
 			    size_t size, int access, int dmasync)
 {
+	unsigned long dma_attrs = 0;
+	if (dmasync)
+		dma_attrs |= DMA_ATTR_WRITE_BARRIER;
+	return ib_umem_get_attrs(context, addr, size, access, DMA_BIDIRECTIONAL, dma_attrs);
+}
+EXPORT_SYMBOL(ib_umem_get);
+
+struct ib_umem *ib_umem_get_attrs(struct ib_ucontext *context, unsigned long addr,
+				  size_t size, int access, enum dma_data_direction dir,
+				  unsigned long dma_attrs)
+{
 	struct ib_umem *umem;
 	struct page **page_list;
 	struct vm_area_struct **vma_list;
@@ -91,16 +102,11 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
 	unsigned long npages;
 	int ret;
 	int i;
-	unsigned long dma_attrs = 0;
 	struct scatterlist *sg, *sg_list_start;
 	int need_release = 0;
 
-	if (dmasync)
-		dma_attrs |= DMA_ATTR_WRITE_BARRIER;
-
 	if (!size)
 		return ERR_PTR(-EINVAL);
-
 	/*
 	 * If the combination of the addr and size requested for this memory
 	 * region causes an integer overflow, return error.
@@ -121,6 +127,7 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
 	umem->address   = addr;
 	umem->page_size = PAGE_SIZE;
 	umem->pid       = get_task_pid(current, PIDTYPE_PID);
+	umem->dir       = dir;
 	/*
 	 * We ask for writable memory if any of the following
 	 * access flags are set. "Local write" and "remote write"
@@ -213,7 +220,7 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
 	umem->nmap = ib_dma_map_sg_attrs(context->device,
 				  umem->sg_head.sgl,
 				  umem->npages,
-				  DMA_BIDIRECTIONAL,
+				  dir,
 				  dma_attrs);
 
 	if (umem->nmap <= 0) {
@@ -239,7 +246,7 @@ out:
 
 	return ret < 0 ? ERR_PTR(ret) : umem;
 }
-EXPORT_SYMBOL(ib_umem_get);
+EXPORT_SYMBOL(ib_umem_get_attrs);
 
 static void ib_umem_account(struct work_struct *work)
 {
diff --git a/include/rdma/ib_umem.h b/include/rdma/ib_umem.h
index 2d83cfd..2876679 100644
--- a/include/rdma/ib_umem.h
+++ b/include/rdma/ib_umem.h
@@ -36,6 +36,7 @@
 #include <linux/list.h>
 #include <linux/scatterlist.h>
 #include <linux/workqueue.h>
+#include <linux/dma-mapping.h>
 
 struct ib_ucontext;
 struct ib_umem_odp;
@@ -47,6 +48,7 @@ struct ib_umem {
 	int			page_size;
 	int			writable;
 	int			hugetlb;
+	enum dma_data_direction dir;
 	struct work_struct	work;
 	struct pid	       *pid;
 	struct mm_struct       *mm;
@@ -81,9 +83,12 @@ static inline size_t ib_umem_num_pages(struct ib_umem *umem)
 }
 
 #ifdef CONFIG_INFINIBAND_USER_MEM
-
 struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
 			    size_t size, int access, int dmasync);
+struct ib_umem *ib_umem_get_attrs(struct ib_ucontext *context, unsigned long addr,
+				  size_t size, int access,
+				  enum dma_data_direction dir,
+				  unsigned long dma_attrs);
 void ib_umem_release(struct ib_umem *umem);
 int ib_umem_page_count(struct ib_umem *umem);
 int ib_umem_copy_from(void *dst, struct ib_umem *umem, size_t offset,
@@ -98,6 +103,14 @@ static inline struct ib_umem *ib_umem_get(struct ib_ucontext *context,
 					  int access, int dmasync) {
 	return ERR_PTR(-EINVAL);
 }
+static inline struct ib_umem *ib_umem_get_attrs(struct ib_ucontext *context,
+						unsigned long addr,
+						size_t size, int access,
+						enum dma_data_direction dir,
+						unsigned long dma_attrs)
+{
+	return ERR_PTR(-EINVAL);
+}
 static inline void ib_umem_release(struct ib_umem *umem) { }
 static inline int ib_umem_page_count(struct ib_umem *umem) { return 0; }
 static inline int ib_umem_copy_from(void *dst, struct ib_umem *umem, size_t offset,