From patchwork Wed Sep 29 04:19:04 2021
X-Patchwork-Submitter: Shunsuke Mie <mie@igel.co.jp>
X-Patchwork-Id: 12524579
From: Shunsuke Mie <mie@igel.co.jp>
To: Zhu Yanjun
Cc: Christian König, Alex Deucher, Daniel Vetter, Doug Ledford,
    Jason Gunthorpe, Jianxin Xiong, Leon Romanovsky, Maor Gottlieb,
    Sean Hefty, Sumit Semwal, dri-devel@lists.freedesktop.org,
    linaro-mm-sig@lists.linaro.org, linux-media@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
    dhobsong@igel.co.jp, taki@igel.co.jp, etom@igel.co.jp
Subject: [RFC PATCH v2 1/2] RDMA/umem: Change for rdma devices that have no dma device
Date: Wed, 29 Sep 2021 13:19:04 +0900
Message-Id: <20210929041905.126454-2-mie@igel.co.jp>
In-Reply-To: <20210929041905.126454-1-mie@igel.co.jp>
References: <20210929041905.126454-1-mie@igel.co.jp>

The current implementation requires a dma device before an RDMA driver
can use dma-buf memory space as an RDMA buffer. However, software RDMA
drivers have no dma device and copy RDMA data using the CPU instead of
hardware.

This patch changes struct ib_umem_dmabuf to hold the dma-buf itself,
which allows a software RDMA driver to map the dma-buf memory for CPU
access.

Signed-off-by: Shunsuke Mie <mie@igel.co.jp>
---
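For illustration (not part of this patch): with the dma-buf now held on
struct ib_umem_dmabuf, a driver whose ib_device has no dma_device can
pass NULL importer ops to ib_umem_dmabuf_get() and map the buffer for
CPU access instead of creating a dynamic attachment. The
sw_rdma_cpu_map() helper below is a hypothetical sketch:

#include <linux/dma-buf.h>
#include <rdma/ib_umem.h>

/* Hypothetical helper for a software RDMA driver without a dma device. */
static int sw_rdma_cpu_map(struct ib_device *device, unsigned long offset,
			   size_t size, int fd, int access,
			   struct dma_buf_map *map)
{
	struct ib_umem_dmabuf *umem_dmabuf;

	/* NULL attach ops are valid here because device->dma_device is
	 * NULL, so no dynamic attachment is created.
	 */
	umem_dmabuf = ib_umem_dmabuf_get(device, offset, size, fd,
					 access, NULL);
	if (IS_ERR(umem_dmabuf))
		return PTR_ERR(umem_dmabuf);

	/* CPU mapping through the dma-buf stored by ib_umem_dmabuf_get() */
	return dma_buf_vmap(umem_dmabuf->dmabuf, map);
}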
 drivers/infiniband/core/umem_dmabuf.c | 20 ++++++++++++++++----
 include/rdma/ib_umem.h                |  1 +
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c
index e824baf4640d..ebbb0a259fd4 100644
--- a/drivers/infiniband/core/umem_dmabuf.c
+++ b/drivers/infiniband/core/umem_dmabuf.c
@@ -117,9 +117,6 @@ struct ib_umem_dmabuf *ib_umem_dmabuf_get(struct ib_device *device,
 	if (check_add_overflow(offset, (unsigned long)size, &end))
 		return ret;
 
-	if (unlikely(!ops || !ops->move_notify))
-		return ret;
-
 	dmabuf = dma_buf_get(fd);
 	if (IS_ERR(dmabuf))
 		return ERR_CAST(dmabuf);
@@ -133,6 +130,8 @@ struct ib_umem_dmabuf *ib_umem_dmabuf_get(struct ib_device *device,
 		goto out_release_dmabuf;
 	}
 
+	umem_dmabuf->dmabuf = dmabuf;
+
 	umem = &umem_dmabuf->umem;
 	umem->ibdev = device;
 	umem->length = size;
@@ -143,6 +142,13 @@ struct ib_umem_dmabuf *ib_umem_dmabuf_get(struct ib_device *device,
 	if (!ib_umem_num_pages(umem))
 		goto out_free_umem;
 
+	/* Software RDMA drivers have no dma device. Just get the dmabuf from the fd. */
+	if (!device->dma_device)
+		goto done;
+
+	if (unlikely(!ops || !ops->move_notify))
+		goto out_free_umem;
+
 	umem_dmabuf->attach = dma_buf_dynamic_attach(
 					dmabuf,
 					device->dma_device,
@@ -152,6 +158,7 @@ struct ib_umem_dmabuf *ib_umem_dmabuf_get(struct ib_device *device,
 		ret = ERR_CAST(umem_dmabuf->attach);
 		goto out_free_umem;
 	}
+done:
 	return umem_dmabuf;
 
 out_free_umem:
@@ -165,13 +172,18 @@ EXPORT_SYMBOL(ib_umem_dmabuf_get);
 
 void ib_umem_dmabuf_release(struct ib_umem_dmabuf *umem_dmabuf)
 {
-	struct dma_buf *dmabuf = umem_dmabuf->attach->dmabuf;
+	struct dma_buf *dmabuf = umem_dmabuf->dmabuf;
+
+	if (!umem_dmabuf->attach)
+		goto free_dmabuf;
 
 	dma_resv_lock(dmabuf->resv, NULL);
 	ib_umem_dmabuf_unmap_pages(umem_dmabuf);
 	dma_resv_unlock(dmabuf->resv);
 
 	dma_buf_detach(dmabuf, umem_dmabuf->attach);
+
+free_dmabuf:
 	dma_buf_put(dmabuf);
 	kfree(umem_dmabuf);
 }
diff --git a/include/rdma/ib_umem.h b/include/rdma/ib_umem.h
index 5ae9dff74dac..11c0cf7e0dd8 100644
--- a/include/rdma/ib_umem.h
+++ b/include/rdma/ib_umem.h
@@ -32,6 +32,7 @@ struct ib_umem {
 struct ib_umem_dmabuf {
 	struct ib_umem umem;
 	struct dma_buf_attachment *attach;
+	struct dma_buf *dmabuf;
 	struct sg_table *sgt;
 	struct scatterlist *first_sg;
 	struct scatterlist *last_sg;

From patchwork Wed Sep 29 04:19:05 2021
X-Patchwork-Submitter: Shunsuke Mie <mie@igel.co.jp>
X-Patchwork-Id: 12524581
From: Shunsuke Mie <mie@igel.co.jp>
To: Zhu Yanjun
Cc: Christian König, Alex Deucher, Daniel Vetter, Doug Ledford,
    Jason Gunthorpe, Jianxin Xiong, Leon Romanovsky, Maor Gottlieb,
    Sean Hefty, Sumit Semwal, dri-devel@lists.freedesktop.org,
    linaro-mm-sig@lists.linaro.org, linux-media@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
    dhobsong@igel.co.jp, taki@igel.co.jp, etom@igel.co.jp
Subject: [RFC PATCH v2 2/2] RDMA/rxe: Add dma-buf support
Date: Wed, 29 Sep 2021 13:19:05 +0900
Message-Id: <20210929041905.126454-3-mie@igel.co.jp>
In-Reply-To: <20210929041905.126454-1-mie@igel.co.jp>
References: <20210929041905.126454-1-mie@igel.co.jp>

Implement the ib device operation 'reg_user_mr_dmabuf'. Generate a
rxe_map from the memory space linked to the passed dma-buf.

Signed-off-by: Shunsuke Mie <mie@igel.co.jp>
---
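A caveat worth calling out (illustrative sketch, not part of this
series): the CPU path added here relies on dma_buf_vmap(), which fails
unless the dma-buf exporter implements the .vmap callback. A minimal
sketch of the exporter side, with hypothetical my_* names:

#include <linux/dma-buf.h>

struct my_buffer {
	void *kernel_vaddr;	/* CPU-visible backing store */
};

/* Exporter-side vmap callback; without one in the exporter's
 * dma_buf_ops, dma_buf_vmap() returns an error and the mapping in
 * rxe_map_dmabuf_mr() below would fail.
 */
static int my_exporter_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
{
	struct my_buffer *buf = dmabuf->priv;

	dma_buf_map_set_vaddr(map, buf->kernel_vaddr);
	return 0;
}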
 drivers/infiniband/sw/rxe/rxe_loc.h   |   2 +
 drivers/infiniband/sw/rxe/rxe_mr.c    | 118 ++++++++++++++++++++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.c |  34 ++++++++
 drivers/infiniband/sw/rxe/rxe_verbs.h |   2 +
 4 files changed, 156 insertions(+)

diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index 1ca43b859d80..8bc19ea1a376 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -75,6 +75,8 @@ u8 rxe_get_next_key(u32 last_key);
 void rxe_mr_init_dma(struct rxe_pd *pd, int access, struct rxe_mr *mr);
 int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
 		     int access, struct rxe_mr *mr);
+int rxe_mr_dmabuf_init_user(struct rxe_pd *pd, int fd, u64 start, u64 length,
+			    u64 iova, int access, struct rxe_mr *mr);
 int rxe_mr_init_fast(struct rxe_pd *pd, int max_pages, struct rxe_mr *mr);
 int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
 		enum rxe_mr_copy_dir dir);
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 53271df10e47..af6ef671c3a5 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -4,6 +4,7 @@
  * Copyright (c) 2015 System Fabric Works, Inc. All rights reserved.
  */
 
+#include <linux/dma-buf.h>
 #include "rxe.h"
 #include "rxe_loc.h"
 
@@ -245,6 +246,120 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
 	return err;
 }
 
+static int rxe_map_dmabuf_mr(struct rxe_mr *mr,
+			     struct ib_umem_dmabuf *umem_dmabuf)
+{
+	struct rxe_map_set *set;
+	struct rxe_phys_buf *buf = NULL;
+	struct rxe_map **map;
+	void *vaddr, *vaddr_end;
+	int num_buf = 0;
+	int err;
+	size_t remain;
+
+	mr->dmabuf_map = kzalloc(sizeof(*mr->dmabuf_map), GFP_KERNEL);
+	if (!mr->dmabuf_map) {
+		err = -ENOMEM;
+		goto err_out;
+	}
+
+	err = dma_buf_vmap(umem_dmabuf->dmabuf, mr->dmabuf_map);
+	if (err)
+		goto err_free_dmabuf_map;
+
+	set = mr->cur_map_set;
+	set->page_shift = PAGE_SHIFT;
+	set->page_mask = PAGE_SIZE - 1;
+
+	map = set->map;
+	buf = map[0]->buf;
+
+	vaddr = mr->dmabuf_map->vaddr;
+	vaddr_end = vaddr + umem_dmabuf->dmabuf->size;
+	remain = umem_dmabuf->dmabuf->size;
+
+	for (; remain; vaddr += PAGE_SIZE) {
+		if (num_buf >= RXE_BUF_PER_MAP) {
+			map++;
+			buf = map[0]->buf;
+			num_buf = 0;
+		}
+
+		buf->addr = (uintptr_t)vaddr;
+		if (remain >= PAGE_SIZE)
+			buf->size = PAGE_SIZE;
+		else
+			buf->size = remain;
+		remain -= buf->size;
+
+		num_buf++;
+		buf++;
+	}
+
+	return 0;
+
+err_free_dmabuf_map:
+	kfree(mr->dmabuf_map);
+err_out:
+	return err;
+}
+
+static void rxe_unmap_dmabuf_mr(struct rxe_mr *mr)
+{
+	struct ib_umem_dmabuf *umem_dmabuf = to_ib_umem_dmabuf(mr->umem);
+
+	dma_buf_vunmap(umem_dmabuf->dmabuf, mr->dmabuf_map);
+	kfree(mr->dmabuf_map);
+}
+
+int rxe_mr_dmabuf_init_user(struct rxe_pd *pd, int fd, u64 start, u64 length,
+			    u64 iova, int access, struct rxe_mr *mr)
+{
+	struct ib_umem_dmabuf *umem_dmabuf;
+	struct rxe_map_set *set;
+	int err;
+
+	umem_dmabuf = ib_umem_dmabuf_get(pd->ibpd.device, start, length, fd,
+					 access, NULL);
+	if (IS_ERR(umem_dmabuf)) {
+		err = PTR_ERR(umem_dmabuf);
+		goto err_out;
+	}
+
+	rxe_mr_init(access, mr);
+
+	err = rxe_mr_alloc(mr, ib_umem_num_pages(&umem_dmabuf->umem), 0);
+	if (err) {
+		pr_warn("%s: Unable to allocate memory for map\n", __func__);
+		goto err_release_umem;
+	}
+
+	mr->ibmr.pd = &pd->ibpd;
+	mr->umem = &umem_dmabuf->umem;
+	mr->access = access;
+	mr->state = RXE_MR_STATE_VALID;
+	mr->type = IB_MR_TYPE_USER;
+
+	set = mr->cur_map_set;
+	set->length = length;
+	set->iova = iova;
+	set->va = start;
+	set->offset = ib_umem_offset(mr->umem);
+
+	err = rxe_map_dmabuf_mr(mr, umem_dmabuf);
+	if (err)
+		goto err_free_map_set;
+
+	return 0;
+
+err_free_map_set:
+	rxe_mr_free_map_set(mr->num_map, mr->cur_map_set);
+err_release_umem:
+	ib_umem_release(&umem_dmabuf->umem);
+err_out:
+	return err;
+}
+
 int rxe_mr_init_fast(struct rxe_pd *pd, int max_pages, struct rxe_mr *mr)
 {
 	int err;
@@ -703,6 +818,9 @@ void rxe_mr_cleanup(struct rxe_pool_entry *arg)
 {
 	struct rxe_mr *mr = container_of(arg, typeof(*mr), pelem);
 
+	if (mr->umem && mr->umem->is_dmabuf)
+		rxe_unmap_dmabuf_mr(mr);
+
 	ib_umem_release(mr->umem);
 
 	if (mr->cur_map_set)
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
index 9d0bb9aa7514..6191bb4f434d 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
@@ -916,6 +916,39 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd,
 	return ERR_PTR(err);
 }
 
+static struct ib_mr *rxe_reg_user_mr_dmabuf(struct ib_pd *ibpd, u64 start,
+					    u64 length, u64 iova, int fd,
+					    int access, struct ib_udata *udata)
+{
+	int err;
+	struct rxe_dev *rxe = to_rdev(ibpd->device);
+	struct rxe_pd *pd = to_rpd(ibpd);
+	struct rxe_mr *mr;
+
+	mr = rxe_alloc(&rxe->mr_pool);
+	if (!mr) {
+		err = -ENOMEM;
+		goto err2;
+	}
+
+	rxe_add_index(mr);
+
+	rxe_add_ref(pd);
+
+	err = rxe_mr_dmabuf_init_user(pd, fd, start, length, iova, access, mr);
+	if (err)
+		goto err3;
+
+	return &mr->ibmr;
+
+err3:
+	rxe_drop_ref(pd);
+	rxe_drop_index(mr);
+	rxe_drop_ref(mr);
+err2:
+	return ERR_PTR(err);
+}
+
 static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type,
 				  u32 max_num_sg)
 {
@@ -1081,6 +1114,7 @@ static const struct ib_device_ops rxe_dev_ops = {
 	.query_qp = rxe_query_qp,
 	.query_srq = rxe_query_srq,
 	.reg_user_mr = rxe_reg_user_mr,
+	.reg_user_mr_dmabuf = rxe_reg_user_mr_dmabuf,
 	.req_notify_cq = rxe_req_notify_cq,
 	.resize_cq = rxe_resize_cq,
 
diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
index c807639435eb..0aa95ab06b6e 100644
--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
+++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
@@ -334,6 +334,8 @@ struct rxe_mr {
 
 	struct rxe_map_set *cur_map_set;
 	struct rxe_map_set *next_map_set;
+
+	struct dma_buf_map *dmabuf_map;
 };
 
 enum rxe_mw_state {
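For completeness, a sketch of how userspace would exercise this series,
assuming rdma-core's existing ibv_reg_dmabuf_mr() API and a dma-buf fd
obtained elsewhere, e.g. from a GPU driver or udmabuf; with the series
applied, the request is served by rxe's new rxe_reg_user_mr_dmabuf():

#include <infiniband/verbs.h>

/* Registers a dma-buf backed MR on a software rxe device. */
static struct ibv_mr *reg_dmabuf_mr(struct ibv_pd *pd, int dmabuf_fd,
				    size_t length)
{
	return ibv_reg_dmabuf_mr(pd, 0 /* offset */, length, 0 /* iova */,
				 dmabuf_fd,
				 IBV_ACCESS_LOCAL_WRITE |
				 IBV_ACCESS_REMOTE_READ |
				 IBV_ACCESS_REMOTE_WRITE);
}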