From patchwork Tue Jan 29 16:58:38 2019
X-Patchwork-Submitter: Jerome Glisse
X-Patchwork-Id: 10786623
From: jglisse@redhat.com
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Jérôme Glisse, linux-rdma@vger.kernel.org,
    Jason Gunthorpe, Leon Romanovsky, Doug Ledford, Artemy Kovalyov,
    Moni Shoua, Mike Marciniszyn, Kaike Wan, Dennis Dalessandro
Subject: [RFC PATCH 0/1] Use HMM for ODP
Date: Tue, 29 Jan 2019 11:58:38 -0500
Message-Id: <20190129165839.4127-1-jglisse@redhat.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Jérôme Glisse <jglisse@redhat.com>

This patchset converts RDMA ODP to use HMM underneath. This is motivated
by stronger code sharing for the same feature (shared virtual memory SVM,
or shared virtual address SVA) and by stronger integration with mm code
to achieve that. It depends on the HMM patchset posted for inclusion in
5.1 [2], so the earliest target for this should be 5.2. I welcome any
testing people can do on this.

Moreover, there are some features of HMM in the works, like peer to peer
support, fast CPU page table snapshot, fast IOMMU mapping update, ...
It will be easier for RDMA devices with ODP to leverage those if they use
HMM underneath.

Quick summary of what HMM is: HMM is a toolbox for device drivers to
implement software support for shared virtual memory (SVM). First, it
provides helpers to mirror a process address space on a device
(hmm_mirror); a rough sketch of what that side looks like for a driver
follows.
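To make the mirror side concrete, here is a minimal driver-side sketch.
It assumes the hmm_mirror interface roughly as it stands in 5.0
(struct hmm_mirror, hmm_mirror_register() and the
sync_cpu_device_pagetables() callback from include/linux/hmm.h); exact
signatures keep changing between releases, and every "my_" name below is
hypothetical, not something from this patchset:

/* Illustrative sketch only, not from this patchset: a driver embeds a
 * struct hmm_mirror, registers it against an mm, and gets called back
 * whenever the CPU page table changes so it can invalidate the device
 * copy of that range. All "my_" names are hypothetical.
 */
#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/sched.h>

struct my_mirror {
	struct hmm_mirror mirror;	/* embedded, handed to HMM */
	/* ... device page table state would live here ... */
};

/* Hypothetical device-specific helper, stubbed out here. */
static void my_device_unmap_range(struct my_mirror *m,
				  unsigned long start, unsigned long end)
{
	/* program the device to drop translations for [start, end) */
}

/*
 * HMM calls this (from the mmu notifiers it listens to) when a range of
 * the CPU page table is invalidated; the driver must make sure the
 * device no longer uses the mirrored range before returning. A real
 * driver also has to honor update->blockable (return -EAGAIN if it
 * would need to sleep and may not).
 */
static int my_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
					 const struct hmm_update *update)
{
	struct my_mirror *m = container_of(mirror, struct my_mirror, mirror);

	my_device_unmap_range(m, update->start, update->end);
	return 0;
}

/* The mirrored mm is going away; tear all device mappings down. */
static void my_release(struct hmm_mirror *mirror)
{
}

static const struct hmm_mirror_ops my_mirror_ops = {
	.sync_cpu_device_pagetables	= my_sync_cpu_device_pagetables,
	.release			= my_release,
};

/* Register a mirror for the current process address space. */
static int my_mirror_init(struct my_mirror *m)
{
	m->mirror.ops = &my_mirror_ops;
	return hmm_mirror_register(&m->mirror, current->mm);
}

The point of this design is that HMM, not each driver, deals with mmu
notifier registration and lifetime; the driver only has to honor the
invalidation callback and use the HMM range helpers to fill its device
page tables instead of rolling its own get_user_pages() plus notifier
tracking. That is essentially the duplication this patchset removes
from ODP.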
Second, it provides helpers to back regular valid virtual addresses of a
process with device memory (any valid mmap that is not an mmap of a
device file or a DAX mapping). There are two kinds of device memory:
private memory, which is not accessible to the CPU because it does not
have all the expected properties (this is the case for all PCIe
devices), and public memory, which the CPU can also access without
restriction (with OpenCAPI, CCIX, or a similar cache-coherent and atomic
interconnect).

A device driver can use each of the HMM tools separately; you do not
have to use all of them. For RDMA devices I do not expect a need for the
device memory support of HMM; that part is geared toward accelerators
like GPUs (a sketch of it is appended after the diffstat below, for
completeness).

You can find a branch [1] with all the prerequisites in it. This patch
is on top of 5.0rc2+, but I can rebase it on any specific branch before
it is considered for inclusion (5.2 at best). Questions and reviews are
more than welcome.

[1] https://cgit.freedesktop.org/~glisse/linux/log/?h=odp-hmm
[2] https://cgit.freedesktop.org/~glisse/linux/log/?h=hmm-for-5.1

Cc: linux-rdma@vger.kernel.org
Cc: Jason Gunthorpe
Cc: Leon Romanovsky
Cc: Doug Ledford
Cc: Artemy Kovalyov
Cc: Moni Shoua
Cc: Mike Marciniszyn
Cc: Kaike Wan
Cc: Dennis Dalessandro

Jérôme Glisse (1):
  RDMA/odp: convert to use HMM for ODP

 drivers/infiniband/core/umem_odp.c | 483 ++++++++---------------------
 drivers/infiniband/hw/mlx5/mem.c   |  20 +-
 drivers/infiniband/hw/mlx5/mr.c    |   2 +-
 drivers/infiniband/hw/mlx5/odp.c   |  95 +++---
 include/rdma/ib_umem_odp.h         |  54 +---
 5 files changed, 202 insertions(+), 452 deletions(-)
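P.S.: for completeness, the same kind of sketch for the device memory
half of the toolbox, even though as said above I do not expect ODP to
need it. It assumes the ~5.0 hmm_devmem interface (hmm_devmem_add() and
struct hmm_devmem_ops), which is still in flux; all "my_" names are
again hypothetical:

/* Illustrative sketch only: registering private device memory (the kind
 * the CPU cannot access, i.e. the PCIe case) so that struct pages exist
 * for it and it can back regular process memory.
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/hmm.h>
#include <linux/sizes.h>

/*
 * CPU fault on an address currently backed by device memory: the driver
 * is expected to migrate the page back to system memory and let the
 * fault retry.
 */
static vm_fault_t my_devmem_fault(struct hmm_devmem *devmem,
				  struct vm_area_struct *vma,
				  unsigned long addr,
				  const struct page *page,
				  unsigned int flags, pmd_t *pmdp)
{
	/* a real driver migrates the page back here */
	return VM_FAULT_SIGBUS;	/* stub */
}

/* Last reference on a device page dropped: reclaim the backing memory. */
static void my_devmem_free(struct hmm_devmem *devmem, struct page *page)
{
	/* return the page's device memory to the driver's allocator */
}

static const struct hmm_devmem_ops my_devmem_ops = {
	.fault	= my_devmem_fault,
	.free	= my_devmem_free,
};

/* Carve out 1GB worth of device pages, e.g. at probe time. */
static int my_devmem_init(struct device *dev)
{
	struct hmm_devmem *devmem;

	devmem = hmm_devmem_add(&my_devmem_ops, dev, SZ_1G);
	return PTR_ERR_OR_ZERO(devmem);
}

With private memory the CPU can never see the device pages, so the fault
callback above is the migrate-back path that keeps this transparent for
the application; with public (OpenCAPI/CCIX) memory the CPU accesses the
pages directly and no such migration is needed.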