From patchwork Wed Jan 3 10:29:09 2018
From: Marcel Apfelbaum <marcel@redhat.com>
To: qemu-devel@nongnu.org
Date: Wed, 3 Jan 2018 12:29:09 +0200
Message-Id: <20180103102911.35562-4-marcel@redhat.com>
In-Reply-To: <20180103102911.35562-1-marcel@redhat.com>
References: <20180103102911.35562-1-marcel@redhat.com>
Subject: [Qemu-devel] [PATCH V3 3/5] docs: add pvrdma device documentation.
Cc: ehabkost@redhat.com, mst@redhat.com, f4bug@amsat.org,
    yuval.shaia@oracle.com, pbonzini@redhat.com, marcel@redhat.com,
    imammedo@redhat.com

Signed-off-by: Marcel Apfelbaum <marcel@redhat.com>
Signed-off-by: Yuval Shaia <yuval.shaia@oracle.com>
Reviewed-by: Shamir Rabinovitch
---
 docs/pvrdma.txt | 145 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 145 insertions(+)
 create mode 100644 docs/pvrdma.txt

diff --git a/docs/pvrdma.txt b/docs/pvrdma.txt
new file mode 100644
index 0000000000..74c5cf2495
--- /dev/null
+++ b/docs/pvrdma.txt

Paravirtualized RDMA Device (PVRDMA)
====================================


1. Description
==============
PVRDMA is the QEMU implementation of VMware's paravirtualized RDMA device.
It works with its Linux kernel driver AS IS; no special guest modifications
are needed.

While it complies with the VMware device API, it can also communicate with
bare-metal RDMA-enabled machines. It does not require an RDMA HCA in the
host; it can also work with Soft-RoCE (rxe).

It does not require the whole guest RAM to be pinned, allowing memory
over-commit. Migration support, while not implemented yet, will be
possible with some hardware assistance.

A project presentation accompanies this document:
- http://events.linuxfoundation.org/sites/events/files/slides/lpc-2017-pvrdma-marcel-apfelbaum-yuval-shaia.pdf


2. Setup
========


2.1 Guest setup
===============
Fedora 27+ kernels work out of the box; older distributions require
updating the kernel to 4.14 or later to include the pvrdma driver.

However, the libpvrdma library needed by user-level software is still not
available as part of the distributions, so the rdma-core library needs to
be compiled and optionally installed (a build sketch appears at the end of
this chapter).

Please follow the instructions at:
  https://github.com/linux-rdma/rdma-core.git


2.2 Host Setup
==============
The pvrdma backend is an ibdevice interface that can be exposed either by
a Soft-RoCE (rxe) device on machines with no RDMA device, or by an HCA
SRIOV function (VF/PF).
Note that ibdevice interfaces can't be shared between pvrdma devices;
each one requires a separate instance (rxe or SRIOV VF).


2.2.1 Soft-RoCE backend (rxe)
=============================
A stable version of rxe is required; Fedora 27+ or a Linux kernel 4.14+
is preferred.

The rdma_rxe module is part of the Linux kernel but is not loaded by
default. Install the user-level library (librxe) following the
instructions from:
https://github.com/SoftRoCE/rxe-dev/wiki/rxe-dev:-Home

Associate an Ethernet interface with rxe by running:
   rxe_cfg add eth0
An rxe0 ibdevice interface will be created and can be used as the pvrdma
backend.


2.2.2 RDMA device Virtual Function backend
==========================================
Nothing special is required; the pvrdma device can work not only with
Ethernet links, but also with InfiniBand links. All that is needed is an
ibdevice with an active port; for Mellanox cards it will be something
like mlx5_6, which can serve as the backend.


2.2.3 QEMU setup
================
Configure QEMU with the --enable-rdma flag, after installing the required
RDMA libraries.
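For reference, a minimal build sequence might look like the following;
this is a sketch assuming an in-tree build, and the parallelism level is
illustrative:
   ./configure --enable-rdma
   make -j8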
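Returning to the guest-side prerequisite from section 2.1, here is a
minimal rdma-core build sketch, assuming the build dependencies listed in
the rdma-core README are installed (build.sh is the helper script bundled
with that project):
   git clone https://github.com/linux-rdma/rdma-core.git
   cd rdma-core
   ./build.sh
The resulting libraries and utilities land under the build/ directory and
can be used from there or installed system-wide.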
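Whichever backend is chosen, it is worth verifying that the host actually
exposes the ibdevice before launching QEMU. A quick check using the
ibverbs utilities shipped with rdma-core (rxe0 is an example name):
   ibv_devices
   ibv_devinfo -d rxe0
ibv_devinfo should report the port state as PORT_ACTIVE when the
underlying interface is up.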
3. Usage
========
Currently the device works only with memory-backed RAM, and the memory
must be marked as "shared":
   -m 1G \
   -object memory-backend-ram,id=mb1,size=1G,share \
   -numa node,memdev=mb1 \

The pvrdma device is composed of two functions:
 - Function 0 is a vmxnet Ethernet device, which is redundant in the
   guest but is required to pass the ibdevice GID using its MAC.
   Examples:
     For an rxe backend using the eth0 interface, it will use its MAC:
       -device vmxnet3,addr=<slot>.0,multifunction=on,mac=<eth0 MAC>
     For an SRIOV VF, we take the Ethernet interface exposed by it:
       -device vmxnet3,multifunction=on,mac=<RoCE eth MAC>
 - Function 1 is the actual device:
       -device pvrdma,addr=<slot>.1,backend-dev=<ibdevice>,backend-gid-idx=<gid>,backend-port=<port>
   where the ibdevice can be rxe or an RDMA VF (e.g. mlx5_4).
 Note: Pay special attention that the GID at backend-gid-idx matches the
 vmxnet function's MAC. The rules of conversion are part of the RoCE spec,
 but since manual conversion is not required, spotting problems is not
 hard:
    Example: GID: fe80:0000:0000:0000:7efe:90ff:fecb:743a
             MAC: 7c:fe:90:cb:74:3a
    Note the difference between the first byte of the MAC and the GID.
A complete command line assembling these fragments is sketched at the end
of this document.


4. Implementation details
=========================
The device acts as a proxy between the guest driver and the host ibdevice
interface.
On the configuration path:
 - For every hardware resource request (PD/QP/CQ/...), the pvrdma device
   will request a resource from the backend interface, maintaining a 1-1
   mapping between the guest and the host.
On the data path:
 - Every post_send/receive received from the guest will be converted into
   a post_send/receive for the backend. The buffer data will not be
   touched or copied, resulting in near bare-metal performance for large
   enough buffers.
 - Completions from the backend interface will result in completions for
   the pvrdma device.


5. Limitations
==============
- The device is obviously limited by the guest Linux driver's
  implementation of the VMware device API.
- The memory registration mechanism requires an mremap for every page in
  the buffer in order to map it to a contiguous virtual address range.
  Since this is not on the data path, it should not matter much.
- QEMU cannot map guest RAM from a file descriptor if a pvrdma device is
  attached, so it can't work with huge pages. This limitation will be
  addressed in the future; however, QEMU allocates guest RAM with
  MADV_HUGEPAGE, so if there are enough huge pages available, QEMU will
  use them.
- As previously stated, migration is not supported yet; however, with
  some hardware support, it can be done.


6. Performance
==============
By design the pvrdma device exits on each post-send/receive, so for small
buffers the performance is affected; however, for medium buffers it gets
close to bare metal, and from 1MB buffers and up it reaches bare-metal
performance.
(Tested with 2 VMs, the pvrdma devices connected to 2 VFs of the same
device.)

All of the above assumes no memory registration is done on the data path.
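As promised in section 3, here is an end-to-end command line assembling
the fragments above. This is a sketch, not a canonical invocation: the
PCI slot (addr=10), the backend device name (rxe0), the GID index and the
port number are illustrative, and the MAC is the example one from
section 3 — it must be replaced by the MAC whose GID appears at
backend-gid-idx:
   # slot, MAC, backend-dev, gid-idx and port below are illustrative
   qemu-system-x86_64 -m 1G \
      -object memory-backend-ram,id=mb1,size=1G,share \
      -numa node,memdev=mb1 \
      -device vmxnet3,addr=10.0,multifunction=on,mac=7c:fe:90:cb:74:3a \
      -device pvrdma,addr=10.1,backend-dev=rxe0,backend-gid-idx=0,backend-port=1
(plus the usual machine, disk and network options)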
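One way to reproduce a measurement like the one in section 6 — an
assumption on our side, as perftest is not part of this patch — is to run
the perftest suite between two guests, selecting the buffer size with the
-s flag:
   # on the server VM
   ib_send_bw -s 1048576
   # on the client VM
   ib_send_bw -s 1048576 <server IP>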