From patchwork Fri Nov 6 17:00:22 2020
From: Logan Gunthorpe <logang@deltatee.com>
To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
    linux-block@vger.kernel.org, linux-pci@vger.kernel.org,
    linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: Stephen Bates <sbates@raithlin.com>, Christoph Hellwig <hch@lst.de>,
    Dan Williams <dan.j.williams@intel.com>, Jason Gunthorpe <jgg@ziepe.ca>,
    Christian König <christian.koenig@amd.com>, Ira Weiny <iweiny@intel.com>,
    John Hubbard <jhubbard@nvidia.com>, Don Dutile <ddutile@redhat.com>,
    Matthew Wilcox <willy@infradead.org>, Daniel Vetter <daniel.vetter@ffwll.ch>,
    Logan Gunthorpe <logang@deltatee.com>
Date: Fri, 6 Nov 2020 10:00:22 -0700
Message-Id: <20201106170036.18713-2-logang@deltatee.com>
In-Reply-To: <20201106170036.18713-1-logang@deltatee.com>
Subject: [RFC PATCH 01/15] PCI/P2PDMA: Don't sleep in upstream_bridge_distance_warn()
In order to call this function from a dma_map function, it must not sleep.
The only reason it sleeps is to allocate the seqbuf used to print which
devices are within the ACS path.

Switch the kmalloc call to GFP_NOWAIT and simply skip printing that message
if the buffer cannot be allocated.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/pci/p2pdma.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index de1c331dbed4..94583779c36e 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -267,7 +267,7 @@ static int pci_bridge_has_acs_redir(struct pci_dev *pdev)
 
 static void seq_buf_print_bus_devfn(struct seq_buf *buf, struct pci_dev *pdev)
 {
-	if (!buf)
+	if (!buf || !buf->buffer)
 		return;
 
 	seq_buf_printf(buf, "%s;", pci_name(pdev));
@@ -501,19 +501,20 @@ upstream_bridge_distance_warn(struct pci_dev *provider, struct pci_dev *client,
 	bool acs_redirects;
 	int ret;
 
-	seq_buf_init(&acs_list, kmalloc(PAGE_SIZE, GFP_KERNEL), PAGE_SIZE);
-	if (!acs_list.buffer)
-		return -ENOMEM;
+	seq_buf_init(&acs_list, kmalloc(PAGE_SIZE, GFP_NOWAIT), PAGE_SIZE);
 
 	ret = upstream_bridge_distance(provider, client, dist, &acs_redirects,
 				       &acs_list);
 	if (acs_redirects) {
 		pci_warn(client, "ACS redirect is set between the client and provider (%s)\n",
 			 pci_name(provider));
-		/* Drop final semicolon */
-		acs_list.buffer[acs_list.len-1] = 0;
-		pci_warn(client, "to disable ACS redirect for this path, add the kernel parameter: pci=disable_acs_redir=%s\n",
-			 acs_list.buffer);
+
+		if (acs_list.buffer) {
+			/* Drop final semicolon */
+			acs_list.buffer[acs_list.len - 1] = 0;
+			pci_warn(client, "to disable ACS redirect for this path, add the kernel parameter: pci=disable_acs_redir=%s\n",
+				 acs_list.buffer);
+		}
 	}
 
 	if (ret == PCI_P2PDMA_MAP_NOT_SUPPORTED) {
From patchwork Fri Nov 6 17:00:23 2020
From: Logan Gunthorpe <logang@deltatee.com>
Date: Fri, 6 Nov 2020 10:00:23 -0700
Message-Id: <20201106170036.18713-3-logang@deltatee.com>
In-Reply-To: <20201106170036.18713-1-logang@deltatee.com>
Subject: [RFC PATCH 02/15] PCI/P2PDMA: Attempt to set map_type if it has not been set

Attempt to find the mapping type for P2PDMA pages on the first DMA map
attempt if it has not been done ahead of time.

Previously, the mapping type was expected to be calculated ahead of time,
but if the pages come from userspace there is no way to ensure the path
was checked ahead of time.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/pci/p2pdma.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 94583779c36e..ea8472278b11 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -824,11 +824,17 @@ EXPORT_SYMBOL_GPL(pci_p2pmem_publish);
 static enum pci_p2pdma_map_type pci_p2pdma_map_type(struct pci_dev *provider,
 						    struct pci_dev *client)
 {
+	enum pci_p2pdma_map_type ret;
+
 	if (!provider->p2pdma)
 		return PCI_P2PDMA_MAP_NOT_SUPPORTED;
 
-	return xa_to_value(xa_load(&provider->p2pdma->map_types,
-				   map_types_idx(client)));
+	ret = xa_to_value(xa_load(&provider->p2pdma->map_types,
+				  map_types_idx(client)));
+	if (ret != PCI_P2PDMA_MAP_UNKNOWN)
+		return ret;
+
+	return upstream_bridge_distance_warn(provider, client, NULL);
 }
 
 static int __pci_p2pdma_map_sg(struct pci_p2pdma_pagemap *p2p_pgmap,
@@ -890,7 +896,6 @@ int pci_p2pdma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
 	case PCI_P2PDMA_MAP_BUS_ADDR:
 		return __pci_p2pdma_map_sg(p2p_pgmap, dev, sg, nents);
 	default:
-		WARN_ON_ONCE(1);
 		return 0;
 	}
 }
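The ahead-of-time model this falls back from looks roughly like the sketch
below: a subsystem that controls both endpoints validates the whole path once
at setup time, which also caches the map type later consulted by
pci_p2pdma_map_type(). The my_setup_p2pdma() wrapper is purely illustrative
(not part of this patch) around the existing pci_p2pdma_distance_many()
interface; with pages arriving from userspace there is no such setup step,
hence the lazy fallback above.

	/* Illustrative only: the existing ahead-of-time path check. */
	static bool my_setup_p2pdma(struct pci_dev *provider,
				    struct device **clients, int num_clients)
	{
		/*
		 * Returns the total path distance (>= 0) and caches the
		 * mapping type for each client, or -1 if any client cannot
		 * reach the provider.
		 */
		return pci_p2pdma_distance_many(provider, clients,
						num_clients, true) >= 0;
	}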
From patchwork Fri Nov 6 17:00:24 2020
From: Logan Gunthorpe <logang@deltatee.com>
Date: Fri, 6 Nov 2020 10:00:24 -0700
Message-Id: <20201106170036.18713-4-logang@deltatee.com>
In-Reply-To: <20201106170036.18713-1-logang@deltatee.com>
Subject: [RFC PATCH 03/15] PCI/P2PDMA: Introduce pci_p2pdma_should_map_bus() and pci_p2pdma_bus_offset()

Introduce pci_p2pdma_should_map_bus() which is meant to be called by
dma map functions to determine how to map a given p2pdma page.

pci_p2pdma_bus_offset() is also added to allow callers to get the bus
offset if they need to map the bus address.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/pci/p2pdma.c       | 46 ++++++++++++++++++++++++++++++++++++++
 include/linux/pci-p2pdma.h | 11 +++++++++
 2 files changed, 57 insertions(+)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index ea8472278b11..9961e779f430 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -930,6 +930,52 @@ void pci_p2pdma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
 }
 EXPORT_SYMBOL_GPL(pci_p2pdma_unmap_sg_attrs);
 
+/**
+ * pci_p2pdma_bus_offset - returns the bus offset for a given page
+ * @page: page to get the offset for
+ *
+ * Must be passed a pci p2pdma page.
+ */
+u64 pci_p2pdma_bus_offset(struct page *page)
+{
+	struct pci_p2pdma_pagemap *p2p_pgmap = to_p2p_pgmap(page->pgmap);
+
+	WARN_ON(!is_pci_p2pdma_page(page));
+
+	return p2p_pgmap->bus_offset;
+}
+EXPORT_SYMBOL_GPL(pci_p2pdma_bus_offset);
+
+/**
+ * pci_p2pdma_should_map_bus - determine if a dma mapping should use the
+ *	bus address
+ * @dev: device doing the DMA request
+ * @pgmap: dev_pagemap structure for the mapping
+ *
+ * Returns 1 if the page should be mapped with a bus address, 0 otherwise
+ * and -1 if the device should not be mapping P2PDMA pages.
+ */
+int pci_p2pdma_should_map_bus(struct device *dev, struct dev_pagemap *pgmap)
+{
+	struct pci_p2pdma_pagemap *p2p_pgmap = to_p2p_pgmap(pgmap);
+	struct pci_dev *client;
+
+	if (!dev_is_pci(dev))
+		return -1;
+
+	client = to_pci_dev(dev);
+
+	switch (pci_p2pdma_map_type(p2p_pgmap->provider, client)) {
+	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
+		return 0;
+	case PCI_P2PDMA_MAP_BUS_ADDR:
+		return 1;
+	default:
+		return -1;
+	}
+}
+EXPORT_SYMBOL_GPL(pci_p2pdma_should_map_bus);
+
 /**
  * pci_p2pdma_enable_store - parse a configfs/sysfs attribute store
  *		to enable p2pdma
diff --git a/include/linux/pci-p2pdma.h b/include/linux/pci-p2pdma.h
index 8318a97c9c61..fc5de47eeac4 100644
--- a/include/linux/pci-p2pdma.h
+++ b/include/linux/pci-p2pdma.h
@@ -34,6 +34,8 @@ int pci_p2pdma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
 		int nents, enum dma_data_direction dir, unsigned long attrs);
 void pci_p2pdma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
 		int nents, enum dma_data_direction dir, unsigned long attrs);
+u64 pci_p2pdma_bus_offset(struct page *page);
+int pci_p2pdma_should_map_bus(struct device *dev, struct dev_pagemap *pgmap);
 int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev,
 		bool *use_p2pdma);
 ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev,
@@ -83,6 +85,15 @@ static inline void pci_p2pmem_free_sgl(struct pci_dev *pdev,
 static inline void pci_p2pmem_publish(struct pci_dev *pdev, bool publish)
 {
 }
+static inline u64 pci_p2pdma_bus_offset(struct page *page)
+{
+	return -1;
+}
+static inline int pci_p2pdma_should_map_bus(struct device *dev,
+					    struct dev_pagemap *pgmap)
+{
+	return -1;
+}
 static inline int pci_p2pdma_map_sg_attrs(struct device *dev,
 		struct scatterlist *sg, int nents, enum dma_data_direction dir,
 		unsigned long attrs)
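For illustration, a minimal sketch of how a dma_map_sg() implementation
might consume the two new helpers; this mirrors how the dma-direct and
dma-iommu patches later in the series use them. The my_*() names and the
exact bus-address arithmetic are hypothetical, not part of this patch:

	static int my_map_sg(struct device *dev, struct scatterlist *sgl,
			     int nents)
	{
		struct scatterlist *sg;
		int i;

		for_each_sg(sgl, sg, nents, i) {
			struct page *page = sg_page(sg);

			if (is_pci_p2pdma_page(page)) {
				switch (pci_p2pdma_should_map_bus(dev,
							page->pgmap)) {
				case 1: /* map with the PCI bus address */
					sg->dma_address = sg_phys(sg) -
						pci_p2pdma_bus_offset(page);
					sg_dma_len(sg) = sg->length;
					continue;
				case 0: /* map through the host bridge */
					break;
				default: /* -1: must not map P2PDMA pages */
					return 0;
				}
			}

			/* hypothetical stand-in for the normal mapping step */
			if (my_map_one(dev, sg))
				return 0;
		}

		return nents;
	}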
From patchwork Fri Nov 6 17:00:25 2020
From: Logan Gunthorpe <logang@deltatee.com>
Date: Fri, 6 Nov 2020 10:00:25 -0700
Message-Id: <20201106170036.18713-5-logang@deltatee.com>
In-Reply-To: <20201106170036.18713-1-logang@deltatee.com>
Subject: [RFC PATCH 04/15] lib/scatterlist: Add flag for indicating P2PDMA segments in an SGL

We make use of the top bit of the dma_length to indicate a P2PDMA
segment. Code that uses this will need to use sg_dma_p2pdma_len() to get
the length and must ensure no lengths exceed 2GB.

An sg_dma_is_p2pdma() helper is included to check whether a particular
segment is a P2PDMA segment.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 include/linux/scatterlist.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 36c47e7e66a2..e738159d56f9 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -39,6 +39,10 @@ struct scatterlist {
 #define sg_dma_len(sg)		((sg)->length)
 #endif
 
+#define SG_P2PDMA_FLAG		(1U << 31)
+#define sg_dma_p2pdma_len(sg)	(sg_dma_len(sg) & ~SG_P2PDMA_FLAG)
+#define sg_dma_is_p2pdma(sg)	(sg_dma_len(sg) & SG_P2PDMA_FLAG)
+
 struct sg_table {
 	struct scatterlist *sgl;	/* the list */
 	unsigned int nents;		/* number of mapped entries */
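As a usage sketch (not part of this patch; the function name and debug
prints are hypothetical), a driver that opted in to P2PDMA would walk a
mapped scatterlist like this, always masking the flag out of the length:

	static void my_walk_mapped_sgl(struct scatterlist *sgl, int nents)
	{
		struct scatterlist *sg;
		int i;

		for_each_sg(sgl, sg, nents, i) {
			dma_addr_t addr = sg_dma_address(sg);
			unsigned int len = sg_dma_p2pdma_len(sg);

			if (sg_dma_is_p2pdma(sg))
				pr_debug("P2PDMA bus address segment: %pad len %u\n",
					 &addr, len);
			else
				pr_debug("regular segment: %pad len %u\n",
					 &addr, len);
		}
	}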
From patchwork Fri Nov 6 17:00:26 2020
From: Logan Gunthorpe <logang@deltatee.com>
Date: Fri, 6 Nov 2020 10:00:26 -0700
Message-Id: <20201106170036.18713-6-logang@deltatee.com>
In-Reply-To: <20201106170036.18713-1-logang@deltatee.com>
Subject: [RFC PATCH 05/15] dma-direct: Support PCI P2PDMA pages in dma-direct map_sg

Add PCI P2PDMA support for dma_direct_map_sg() so that it can map PCI
P2PDMA pages directly without a hack in the callers. This allows for
heterogeneous SGLs that contain both P2PDMA and regular pages.

The DMA_ATTR_P2PDMA flag is added to allow callers to indicate support
for P2PDMA pages. In order to support P2PDMA pages, a caller must ensure
that no segment is greater than 2GB, so that the high bit of the dma
length can be used as a flag to indicate a P2PDMA segment. Such code
must then use sg_dma_p2pdma_len() instead of sg_dma_len() to filter out
the flag.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 include/linux/dma-mapping.h | 11 +++++++++++
 kernel/dma/direct.c         | 33 +++++++++++++++++++++++++++++++--
 2 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 956151052d45..8d028e15b531 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -61,6 +61,17 @@
  */
 #define DMA_ATTR_PRIVILEGED		(1UL << 9)
 
+/*
+ * DMA_ATTR_P2PDMA: specifies that dma_map_sg() may return p2pdma
+ * bus addresses. Code that specifies this must ensure to
+ * use sg_dma_p2pdma_len() instead of sg_dma_len() as the high
+ * bit of the length will indicate a P2PDMA bus address.
+ *
+ * If this attribute is not set and P2PDMA pages are encountered,
+ * dma_map_sg() will return an error.
+ */
+#define DMA_ATTR_P2PDMA		(1UL << 10)
+
 /*
  * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
  * be given to a device to use as a DMA source or target. It is specific to a
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 06c111544f61..2fcb31789436 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -13,6 +13,7 @@
 #include <linux/vmalloc.h>
 #include <linux/set_memory.h>
 #include <linux/slab.h>
+#include <linux/pci-p2pdma.h>
 #include "direct.h"
 
 /*
@@ -387,19 +388,47 @@ void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl,
 	struct scatterlist *sg;
 	int i;
 
-	for_each_sg(sgl, sg, nents, i)
+	for_each_sg(sgl, sg, nents, i) {
+		if (attrs & DMA_ATTR_P2PDMA && sg_dma_is_p2pdma(sg)) {
+			sg_dma_len(sg) &= ~SG_P2PDMA_FLAG;
+			continue;
+		}
+
 		dma_direct_unmap_page(dev, sg->dma_address, sg_dma_len(sg),
 				dir, attrs);
+	}
 }
 #endif
 
 int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	int i;
+	struct dev_pagemap *pgmap = NULL;
+	int i, map = -1;
 	struct scatterlist *sg;
+	u64 bus_off;
 
 	for_each_sg(sgl, sg, nents, i) {
+		if (is_pci_p2pdma_page(sg_page(sg))) {
+			if (sg_page(sg)->pgmap != pgmap) {
+				pgmap = sg_page(sg)->pgmap;
+				map = pci_p2pdma_should_map_bus(dev, pgmap);
+				bus_off = pci_p2pdma_bus_offset(sg_page(sg));
+			}
+
+			if (map < 0 || !(attrs & DMA_ATTR_P2PDMA)) {
+				sg->dma_address = DMA_MAPPING_ERROR;
+				goto out_unmap;
+			}
+
+			if (map) {
+				sg->dma_address = sg_phys(sg) + sg->offset -
+						  bus_off;
+				sg_dma_len(sg) = sg->length | SG_P2PDMA_FLAG;
+				continue;
+			}
+		}
+
 		sg->dma_address = dma_direct_map_page(dev, sg_page(sg),
 				sg->offset, sg->length, dir, attrs);
 		if (sg->dma_address == DMA_MAPPING_ERROR)
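A caller-side sketch of the new contract (illustrative only; the function
name and error handling are hypothetical, but this mirrors how the nvme
driver is converted later in the series):

	static int my_map_request(struct device *dev, struct scatterlist *sgl,
				  int nents, enum dma_data_direction dir)
	{
		int nr_mapped;

		/* caller guarantees no segment exceeds 2GB */
		nr_mapped = dma_map_sg_attrs(dev, sgl, nents, dir,
					     DMA_ATTR_P2PDMA);
		if (!nr_mapped)
			return -EIO;	/* P2PDMA pages present but unmappable */

		/* ... program hardware using sg_dma_p2pdma_len() ... */

		dma_unmap_sg_attrs(dev, sgl, nents, dir, DMA_ATTR_P2PDMA);
		return 0;
	}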
From patchwork Fri Nov 6 17:00:27 2020
From: Logan Gunthorpe <logang@deltatee.com>
Date: Fri, 6 Nov 2020 10:00:27 -0700
Message-Id: <20201106170036.18713-7-logang@deltatee.com>
In-Reply-To: <20201106170036.18713-1-logang@deltatee.com>
Subject: [RFC PATCH 06/15] dma-mapping: Add flags to dma_map_ops to indicate PCI P2PDMA support

Add a flags member to the dma_map_ops structure with one flag to
indicate support for PCI P2PDMA.

Also, add a helper to check if a device supports PCI P2PDMA.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 include/linux/dma-map-ops.h | 3 +++
 include/linux/dma-mapping.h | 5 +++++
 kernel/dma/mapping.c        | 8 ++++++++
 3 files changed, 16 insertions(+)

diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index a5f89fc4d6df..8c12dfa43ce1 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -12,6 +12,9 @@
 struct cma;
 
 struct dma_map_ops {
+	unsigned int flags;
+#define DMA_F_PCI_P2PDMA_SUPPORTED	(1 << 0)
+
 	void *(*alloc)(struct device *dev, size_t size,
 			dma_addr_t *dma_handle, gfp_t gfp,
 			unsigned long attrs);
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 8d028e15b531..3646c6ba6a00 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -149,6 +149,7 @@ int dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
 		unsigned long attrs);
 bool dma_can_mmap(struct device *dev);
 int dma_supported(struct device *dev, u64 mask);
+int dma_pci_p2pdma_supported(struct device *dev);
 int dma_set_mask(struct device *dev, u64 mask);
 int dma_set_coherent_mask(struct device *dev, u64 mask);
 u64 dma_get_required_mask(struct device *dev);
@@ -244,6 +245,10 @@ static inline int dma_supported(struct device *dev, u64 mask)
 {
 	return 0;
 }
+static inline int dma_pci_p2pdma_supported(struct device *dev)
+{
+	return 0;
+}
 static inline int dma_set_mask(struct device *dev, u64 mask)
 {
 	return -EIO;
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 51bb8fa8eb89..192418ef43f8 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -567,6 +567,14 @@ int dma_supported(struct device *dev, u64 mask)
 }
 EXPORT_SYMBOL(dma_supported);
 
+int dma_pci_p2pdma_supported(struct device *dev)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	return !ops || ops->flags & DMA_F_PCI_P2PDMA_SUPPORTED;
+}
+EXPORT_SYMBOL(dma_pci_p2pdma_supported);
+
 #ifdef CONFIG_ARCH_HAS_DMA_SET_MASK
 void arch_dma_set_mask(struct device *dev, u64 mask);
 #else
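A consumer-side sketch (illustrative only; the function name is
hypothetical): a driver can now gate its P2PDMA feature on the DMA layer
rather than on assumptions about the mapping implementation.

	static bool my_driver_can_use_p2pdma(struct device *dev)
	{
		/*
		 * True when the device uses dma-direct (no dma_map_ops) or
		 * its dma_map_ops set DMA_F_PCI_P2PDMA_SUPPORTED.
		 */
		if (!dma_pci_p2pdma_supported(dev))
			return false;

		/* safe to pass DMA_ATTR_P2PDMA to dma_map_sg_attrs() */
		return true;
	}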
From patchwork Fri Nov 6 17:00:28 2020
From: Logan Gunthorpe <logang@deltatee.com>
Date: Fri, 6 Nov 2020 10:00:28 -0700
Message-Id: <20201106170036.18713-8-logang@deltatee.com>
In-Reply-To: <20201106170036.18713-1-logang@deltatee.com>
Subject: [RFC PATCH 07/15] iommu/dma: Support PCI P2PDMA pages in dma-iommu map_sg

When a PCI P2PDMA page is seen, set the IOVA length of the segment to
zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
apply the appropriate bus address to the segment. The IOVA is not
created if the scatterlist only consists of P2PDMA pages.

Similar to dma-direct, DMA_ATTR_P2PDMA is used to indicate caller
support, since the high bit of the dma_length is used as a flag to mark
P2PDMA segments.

On unmap, P2PDMA segments are skipped over when determining the start
and end IOVA addresses.

With this change, the flags variable in the dma_map_ops is set to
DMA_F_PCI_P2PDMA_SUPPORTED to indicate support for P2PDMA pages.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/iommu/dma-iommu.c | 63 ++++++++++++++++++++++++++++++++-------
 1 file changed, 53 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 5591d6593583..1c8402474376 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -20,6 +20,7 @@
 #include <linux/mm.h>
 #include <linux/mutex.h>
 #include <linux/pci.h>
+#include <linux/pci-p2pdma.h>
 #include <linux/swiotlb.h>
 #include <linux/scatterlist.h>
 #include <linux/vmalloc.h>
@@ -872,13 +873,16 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
  * segment's start address to avoid concatenating across one.
  */
 static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
-		dma_addr_t dma_addr)
+		dma_addr_t dma_addr, unsigned long attrs)
 {
 	struct scatterlist *s, *cur = sg;
 	unsigned long seg_mask = dma_get_seg_boundary(dev);
 	unsigned int cur_len = 0, max_len = dma_get_max_seg_size(dev);
 	int i, count = 0;
 
+	if (attrs & DMA_ATTR_P2PDMA && max_len >= SG_P2PDMA_FLAG)
+		max_len = SG_P2PDMA_FLAG - 1;
+
 	/*
 	 * The Intel graphic driver is used to assume that the returned
 	 * sg list is not combound. This blocks the efforts of converting
@@ -917,6 +921,19 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
 		sg_dma_address(s) = DMA_MAPPING_ERROR;
 		sg_dma_len(s) = 0;
 
+		if (is_pci_p2pdma_page(sg_page(s)) && !s_iova_len) {
+			if (i > 0)
+				cur = sg_next(cur);
+
+			sg_dma_address(cur) = sg_phys(s) + s->offset -
+				pci_p2pdma_bus_offset(sg_page(s));
+			sg_dma_len(cur) = s->length | SG_P2PDMA_FLAG;
+
+			count++;
+			cur_len = 0;
+			continue;
+		}
+
 		/*
 		 * Now fill in the real DMA data. If...
 		 * - there is a valid output segment to append to
@@ -1013,11 +1030,12 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
 	struct scatterlist *s, *prev = NULL;
+	struct dev_pagemap *pgmap = NULL;
 	int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs);
 	dma_addr_t iova;
 	size_t iova_len = 0;
 	unsigned long mask = dma_get_seg_boundary(dev);
-	int i;
+	int i, map = -1;
 
 	if (unlikely(iommu_dma_deferred_attach(dev, domain)))
 		return 0;
@@ -1045,6 +1063,21 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 		s_length = iova_align(iovad, s_length + s_iova_off);
 		s->length = s_length;
 
+		if (is_pci_p2pdma_page(sg_page(s))) {
+			if (sg_page(s)->pgmap != pgmap) {
+				pgmap = sg_page(s)->pgmap;
+				map = pci_p2pdma_should_map_bus(dev, pgmap);
+			}
+
+			if (map < 0 || !(attrs & DMA_ATTR_P2PDMA))
+				goto out_restore_sg;
+
+			if (map) {
+				s->length = 0;
+				continue;
+			}
+		}
+
 		/*
 		 * Due to the alignment of our single IOVA allocation, we can
 		 * depend on these assumptions about the segment boundary mask:
@@ -1067,6 +1100,9 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 		prev = s;
 	}
 
+	if (!iova_len)
+		return __finalise_sg(dev, sg, nents, 0, attrs);
+
 	iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
 	if (!iova)
 		goto out_restore_sg;
@@ -1078,7 +1114,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	if (iommu_map_sg_atomic(domain, iova, sg, nents, prot) < iova_len)
 		goto out_free_iova;
 
-	return __finalise_sg(dev, sg, nents, iova);
+	return __finalise_sg(dev, sg, nents, iova, attrs);
 
 out_free_iova:
 	iommu_dma_free_iova(cookie, iova, iova_len, NULL);
@@ -1090,7 +1126,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
 		int nents, enum dma_data_direction dir, unsigned long attrs)
 {
-	dma_addr_t start, end;
+	dma_addr_t end, start = DMA_MAPPING_ERROR;
 	struct scatterlist *tmp;
 	int i;
 
@@ -1106,14 +1142,20 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
 	 * The scatterlist segments are mapped into a single
 	 * contiguous IOVA allocation, so this is incredibly easy.
 	 */
-	start = sg_dma_address(sg);
-	for_each_sg(sg_next(sg), tmp, nents - 1, i) {
-		if (sg_dma_len(tmp) == 0)
+	for_each_sg(sg, tmp, nents, i) {
+		if ((attrs & DMA_ATTR_P2PDMA) && sg_dma_is_p2pdma(tmp))
+			continue;
+		if (sg_dma_p2pdma_len(tmp) == 0)
 			break;
-		sg = tmp;
+
+		if (start == DMA_MAPPING_ERROR)
+			start = sg_dma_address(tmp);
+
+		end = sg_dma_address(tmp) + sg_dma_len(tmp);
 	}
-	end = sg_dma_address(sg) + sg_dma_len(sg);
-	__iommu_dma_unmap(dev, start, end - start);
+
+	if (start != DMA_MAPPING_ERROR)
+		__iommu_dma_unmap(dev, start, end - start);
 }
 
 static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
@@ -1334,6 +1376,7 @@ static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
 }
 
 static const struct dma_map_ops iommu_dma_ops = {
+	.flags = DMA_F_PCI_P2PDMA_SUPPORTED,
 	.alloc = iommu_dma_alloc,
 	.free = iommu_dma_free,
 	.alloc_pages = dma_common_alloc_pages,
From patchwork Fri Nov 6 17:00:29 2020
From: Logan Gunthorpe <logang@deltatee.com>
Date: Fri, 6 Nov 2020 10:00:29 -0700
Message-Id: <20201106170036.18713-9-logang@deltatee.com>
In-Reply-To: <20201106170036.18713-1-logang@deltatee.com>
Subject: [RFC PATCH 08/15] nvme-pci: Check DMA ops when indicating support for PCI P2PDMA

Introduce a supports_pci_p2pdma() operation in nvme_ctrl_ops to replace
the fixed NVME_F_PCI_P2PDMA flag such that the dma_map_ops flags can be
checked for PCI P2PDMA support.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/nvme/host/core.c |  3 ++-
 drivers/nvme/host/nvme.h |  2 +-
 drivers/nvme/host/pci.c  | 11 +++++++++--
 3 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 376096bfc54a..f14316c9b34a 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3824,7 +3824,8 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
 		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, ns->queue);
 
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, ns->queue);
-	if (ctrl->ops->flags & NVME_F_PCI_P2PDMA)
+	if (ctrl->ops->supports_pci_p2pdma &&
+	    ctrl->ops->supports_pci_p2pdma(ctrl))
 		blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, ns->queue);
 
 	ns->queue->queuedata = ns;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index cc111136a981..a0bfe8709cfc 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -465,7 +465,6 @@ struct nvme_ctrl_ops {
 	unsigned int flags;
 #define NVME_F_FABRICS			(1 << 0)
 #define NVME_F_METADATA_SUPPORTED	(1 << 1)
-#define NVME_F_PCI_P2PDMA		(1 << 2)
 	int (*reg_read32)(struct nvme_ctrl *ctrl, u32 off, u32 *val);
 	int (*reg_write32)(struct nvme_ctrl *ctrl, u32 off, u32 val);
 	int (*reg_read64)(struct nvme_ctrl *ctrl, u32 off, u64 *val);
@@ -473,6 +472,7 @@ struct nvme_ctrl_ops {
 	void (*submit_async_event)(struct nvme_ctrl *ctrl);
 	void (*delete_ctrl)(struct nvme_ctrl *ctrl);
 	int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size);
+	bool (*supports_pci_p2pdma)(struct nvme_ctrl *ctrl);
 };
 
 #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index df8f3612107f..ef7ce464a48d 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2686,17 +2686,24 @@ static int nvme_pci_get_address(struct nvme_ctrl *ctrl, char *buf, int size)
 	return snprintf(buf, size, "%s\n", dev_name(&pdev->dev));
 }
 
+static bool nvme_pci_supports_pci_p2pdma(struct nvme_ctrl *ctrl)
+{
+	struct nvme_dev *dev = to_nvme_dev(ctrl);
+
+	return dma_pci_p2pdma_supported(dev->dev);
+}
+
 static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = {
 	.name			= "pcie",
 	.module			= THIS_MODULE,
-	.flags			= NVME_F_METADATA_SUPPORTED |
-				  NVME_F_PCI_P2PDMA,
+	.flags			= NVME_F_METADATA_SUPPORTED,
 	.reg_read32		= nvme_pci_reg_read32,
 	.reg_write32		= nvme_pci_reg_write32,
 	.reg_read64		= nvme_pci_reg_read64,
 	.free_ctrl		= nvme_pci_free_ctrl,
 	.submit_async_event	= nvme_pci_submit_async_event,
 	.get_address		= nvme_pci_get_address,
+	.supports_pci_p2pdma	= nvme_pci_supports_pci_p2pdma,
 };
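The new callback is optional: a transport that cannot do PCI P2PDMA simply
leaves it unset, and nvme_alloc_ns() then never sets QUEUE_FLAG_PCI_P2PDMA
for its namespaces. A hypothetical sketch (not a real transport; the
mandatory register and controller callbacks are omitted for brevity):

	static const struct nvme_ctrl_ops my_fabrics_ctrl_ops = {
		.name	= "my-fabrics",		/* hypothetical */
		.module	= THIS_MODULE,
		.flags	= NVME_F_FABRICS,
		/* ...reg_read32/reg_write32/etc. omitted for brevity... */
		/* .supports_pci_p2pdma intentionally left unset */
	};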
From patchwork Fri Nov 6 17:00:30 2020
From: Logan Gunthorpe <logang@deltatee.com>
Date: Fri, 6 Nov 2020 10:00:30 -0700
Message-Id: <20201106170036.18713-10-logang@deltatee.com>
In-Reply-To: <20201106170036.18713-1-logang@deltatee.com>
Subject: [RFC PATCH 09/15] nvme-pci: Convert to using dma_map_sg for p2pdma pages

Switch to using sg_dma_p2pdma_len() in places where sg_dma_len() is
used. Then replace the calls to pci_p2pdma_[un]map_sg() with calls to
dma_[un]map_sg() with the DMA_ATTR_P2PDMA attribute.

This should be equivalent, although the scope of support is somewhat
narrower (only dma-direct and dma-iommu currently handle P2PDMA pages).

Using DMA_ATTR_P2PDMA is safe because the block layer restricts requests
to be much less than 2GB, so there is no way for a segment to be greater
than 2GB.
Signed-off-by: Logan Gunthorpe --- drivers/nvme/host/pci.c | 30 ++++++++++++------------------ 1 file changed, 12 insertions(+), 18 deletions(-) diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index ef7ce464a48d..26976bdf4af0 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -528,12 +528,8 @@ static void nvme_unmap_data(struct nvme_dev *dev, struct request *req) WARN_ON_ONCE(!iod->nents); - if (is_pci_p2pdma_page(sg_page(iod->sg))) - pci_p2pdma_unmap_sg(dev->dev, iod->sg, iod->nents, - rq_dma_dir(req)); - else - dma_unmap_sg(dev->dev, iod->sg, iod->nents, rq_dma_dir(req)); - + dma_unmap_sg_attrs(dev->dev, iod->sg, iod->nents, rq_dma_dir(req), + DMA_ATTR_P2PDMA); if (iod->npages == 0) dma_pool_free(dev->prp_small_pool, nvme_pci_iod_list(req)[0], @@ -570,7 +566,7 @@ static void nvme_print_sgl(struct scatterlist *sgl, int nents) pr_warn("sg[%d] phys_addr:%pad offset:%d length:%d " "dma_address:%pad dma_length:%d\n", i, &phys, sg->offset, sg->length, &sg_dma_address(sg), - sg_dma_len(sg)); + sg_dma_p2pdma_len(sg)); } } @@ -581,7 +577,7 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev, struct dma_pool *pool; int length = blk_rq_payload_bytes(req); struct scatterlist *sg = iod->sg; - int dma_len = sg_dma_len(sg); + int dma_len = sg_dma_p2pdma_len(sg); u64 dma_addr = sg_dma_address(sg); int offset = dma_addr & (NVME_CTRL_PAGE_SIZE - 1); __le64 *prp_list; @@ -601,7 +597,7 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev, } else { sg = sg_next(sg); dma_addr = sg_dma_address(sg); - dma_len = sg_dma_len(sg); + dma_len = sg_dma_p2pdma_len(sg); } if (length <= NVME_CTRL_PAGE_SIZE) { @@ -650,7 +646,7 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev, goto bad_sgl; sg = sg_next(sg); dma_addr = sg_dma_address(sg); - dma_len = sg_dma_len(sg); + dma_len = sg_dma_p2pdma_len(sg); } done: @@ -670,7 +666,7 @@ static void nvme_pci_sgl_set_data(struct nvme_sgl_desc *sge, struct scatterlist *sg) { sge->addr = cpu_to_le64(sg_dma_address(sg)); - sge->length = cpu_to_le32(sg_dma_len(sg)); + sge->length = cpu_to_le32(sg_dma_p2pdma_len(sg)); sge->type = NVME_SGL_FMT_DATA_DESC << 4; } @@ -814,14 +810,12 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req, if (!iod->nents) goto out; - if (is_pci_p2pdma_page(sg_page(iod->sg))) - nr_mapped = pci_p2pdma_map_sg_attrs(dev->dev, iod->sg, - iod->nents, rq_dma_dir(req), DMA_ATTR_NO_WARN); - else - nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents, - rq_dma_dir(req), DMA_ATTR_NO_WARN); - if (!nr_mapped) + nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents, + rq_dma_dir(req), DMA_ATTR_NO_WARN | DMA_ATTR_P2PDMA); + if (!nr_mapped) { + ret = BLK_STS_IOERR; goto out; + } iod->use_sgl = nvme_pci_use_sgls(dev, req); if (iod->use_sgl) From patchwork Fri Nov 6 17:00:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 11887531 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.5 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0C691C55ABD for ; Fri, 6 Nov 2020 17:01:04 
+0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8CF8E2224F for ; Fri, 6 Nov 2020 17:01:03 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=deltatee.com header.i=@deltatee.com header.b="K/HV2BMN" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727505AbgKFRBC (ORCPT ); Fri, 6 Nov 2020 12:01:02 -0500 Received: from ale.deltatee.com ([204.191.154.188]:57750 "EHLO ale.deltatee.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727798AbgKFRBB (ORCPT ); Fri, 6 Nov 2020 12:01:01 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=+uqfieveTb31l1I/WgAkJQG/BPoFvWugOcEIiwpm3kQ=; b=K/HV2BMNQ6+6v1KGlkocloGfxX ZWdpu/VS9RNxpNY9R/oz0jP293p4I8inagWYs5mlGH5bia7wQlzpw3FTgvAFzWEwL0ybuh3hIAXXP MQlIvwWcTYi0ARarXpd7S2Bc32AzGJ6la7fmqzzvGUOVG8ImW4ZkrByhDx/xok/sKn7RBnA2n2NDb HOxtePt42cqkRTQIomX4tiFo46PEYnrHBW/M2u8Rox2KBUqqAsXYw0D7bxoumM3wchsQq+YXQTQI7 04zRIPfswqboFYzi35+ClwHNkXD71FRFjRylfRjcS74F/xb/qcEtUatWrnKqfrQkCIk/HCqRSQ523 vwEV+YBQ==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1kb56g-0002Pa-SW; Fri, 06 Nov 2020 10:01:00 -0700 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.92) (envelope-from ) id 1kb56V-0004tH-Aa; Fri, 06 Nov 2020 10:00:47 -0700 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , Ira Weiny , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Logan Gunthorpe Date: Fri, 6 Nov 2020 10:00:31 -0700 Message-Id: <20201106170036.18713-11-logang@deltatee.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20201106170036.18713-1-logang@deltatee.com> References: <20201106170036.18713-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, dan.j.williams@intel.com, iweiny@intel.com, jhubbard@nvidia.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [RFC PATCH 10/15] mm: Introduce FOLL_PCI_P2PDMA to gate getting PCI P2PDMA pages X-SA-Exim-Version: 4.2.1 (built Wed, 08 May 2019 21:11:16 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Callers that expect PCI P2PDMA pages can now set FOLL_PCI_P2PDMA to allow obtaining P2PDMA pages. If a caller does not set this flag and tries to map P2PDMA pages it will fail. This is implemented by adding a flag and a check to get_dev_pagemap(). 
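As a rough illustration of the caller side, a hypothetical driver that has already confirmed its device supports P2PDMA might opt in as below; the function is made up for illustration and is not part of this patch:

#include <linux/mm.h>

/* Hypothetical caller: pin a user buffer that may sit in PCI P2PDMA memory. */
static int example_pin_p2pdma_buffer(unsigned long uaddr, int nr_pages,
				     struct page **pages)
{
	unsigned int gup_flags = FOLL_WRITE | FOLL_PCI_P2PDMA;

	/*
	 * Without FOLL_PCI_P2PDMA, get_dev_pagemap() now rejects
	 * MEMORY_DEVICE_PCI_P2PDMA pagemaps, so P2PDMA pages are not handed
	 * back to callers that have not opted in.
	 */
	return get_user_pages_fast(uaddr, nr_pages, gup_flags, pages);
}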
Signed-off-by: Logan Gunthorpe --- drivers/dax/super.c | 7 ++++--- include/linux/memremap.h | 4 ++-- include/linux/mm.h | 1 + mm/gup.c | 28 +++++++++++++++++----------- mm/huge_memory.c | 8 ++++---- mm/memory-failure.c | 4 ++-- mm/memremap.c | 14 ++++++++++---- 7 files changed, 40 insertions(+), 26 deletions(-) diff --git a/drivers/dax/super.c b/drivers/dax/super.c index edc279be3e59..59c05839b3f8 100644 --- a/drivers/dax/super.c +++ b/drivers/dax/super.c @@ -132,9 +132,10 @@ bool __generic_fsdax_supported(struct dax_device *dax_dev, } else if (pfn_t_devmap(pfn) && pfn_t_devmap(end_pfn)) { struct dev_pagemap *pgmap, *end_pgmap; - pgmap = get_dev_pagemap(pfn_t_to_pfn(pfn), NULL); - end_pgmap = get_dev_pagemap(pfn_t_to_pfn(end_pfn), NULL); - if (pgmap && pgmap == end_pgmap && pgmap->type == MEMORY_DEVICE_FS_DAX + pgmap = get_dev_pagemap(pfn_t_to_pfn(pfn), NULL, false); + end_pgmap = get_dev_pagemap(pfn_t_to_pfn(end_pfn), NULL, false); + if (!IS_ERR_OR_NULL(pgmap) && pgmap == end_pgmap + && pgmap->type == MEMORY_DEVICE_FS_DAX && pfn_t_to_page(pfn)->pgmap == pgmap && pfn_t_to_page(end_pfn)->pgmap == pgmap && pfn_t_to_pfn(pfn) == PHYS_PFN(__pa(kaddr)) diff --git a/include/linux/memremap.h b/include/linux/memremap.h index 79c49e7f5c30..14f6d899c657 100644 --- a/include/linux/memremap.h +++ b/include/linux/memremap.h @@ -136,7 +136,7 @@ void memunmap_pages(struct dev_pagemap *pgmap); void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap); void devm_memunmap_pages(struct device *dev, struct dev_pagemap *pgmap); struct dev_pagemap *get_dev_pagemap(unsigned long pfn, - struct dev_pagemap *pgmap); + struct dev_pagemap *pgmap, bool allow_pci_p2pdma); unsigned long vmem_altmap_offset(struct vmem_altmap *altmap); void vmem_altmap_free(struct vmem_altmap *altmap, unsigned long nr_pfns); @@ -160,7 +160,7 @@ static inline void devm_memunmap_pages(struct device *dev, } static inline struct dev_pagemap *get_dev_pagemap(unsigned long pfn, - struct dev_pagemap *pgmap) + struct dev_pagemap *pgmap, bool allow_pci_p2pdma) { return NULL; } diff --git a/include/linux/mm.h b/include/linux/mm.h index ef360fe70aaf..c405af966a4e 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2790,6 +2790,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address, #define FOLL_SPLIT_PMD 0x20000 /* split huge pmd before returning */ #define FOLL_PIN 0x40000 /* pages must be released via unpin_user_page */ #define FOLL_FAST_ONLY 0x80000 /* gup_fast: prevent fall-back to slow gup */ +#define FOLL_PCI_P2PDMA 0x100000 /* allow returning PCI P2PDMA pages */ /* * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each diff --git a/mm/gup.c b/mm/gup.c index 102877ed77a4..abbc0a06d241 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -449,11 +449,16 @@ static struct page *follow_page_pte(struct vm_area_struct *vma, * case since they are only valid while holding the pgmap * reference. 
*/ - *pgmap = get_dev_pagemap(pte_pfn(pte), *pgmap); - if (*pgmap) + *pgmap = get_dev_pagemap(pte_pfn(pte), *pgmap, + flags & FOLL_PCI_P2PDMA); + if (IS_ERR(*pgmap)) { + page = ERR_CAST(*pgmap); + goto out; + } else if (*pgmap) { page = pte_page(pte); - else + } else { goto no_page; + } } else if (unlikely(!page)) { if (flags & FOLL_DUMP) { /* Avoid special (like zero) pages in core dumps */ @@ -794,7 +799,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address, struct page *page; page = follow_page_mask(vma, address, foll_flags, &ctx); - if (ctx.pgmap) + if (!IS_ERR_OR_NULL(ctx.pgmap)) put_dev_pagemap(ctx.pgmap); return page; } @@ -1138,7 +1143,7 @@ static long __get_user_pages(struct mm_struct *mm, nr_pages -= page_increm; } while (nr_pages); out: - if (ctx.pgmap) + if (!IS_ERR_OR_NULL(ctx.pgmap)) put_dev_pagemap(ctx.pgmap); return i ? i : ret; } @@ -2177,8 +2182,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end, if (unlikely(flags & FOLL_LONGTERM)) goto pte_unmap; - pgmap = get_dev_pagemap(pte_pfn(pte), pgmap); - if (unlikely(!pgmap)) { + pgmap = get_dev_pagemap(pte_pfn(pte), pgmap, + flags & FOLL_PCI_P2PDMA); + if (IS_ERR_OR_NULL(pgmap)) { undo_dev_pagemap(nr, nr_start, flags, pages); goto pte_unmap; } @@ -2221,7 +2227,7 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end, ret = 1; pte_unmap: - if (pgmap) + if (!IS_ERR_OR_NULL(pgmap)) put_dev_pagemap(pgmap); pte_unmap(ptem); return ret; @@ -2255,8 +2261,8 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr, do { struct page *page = pfn_to_page(pfn); - pgmap = get_dev_pagemap(pfn, pgmap); - if (unlikely(!pgmap)) { + pgmap = get_dev_pagemap(pfn, pgmap, flags & FOLL_PCI_P2PDMA); + if (IS_ERR_OR_NULL(pgmap)) { undo_dev_pagemap(nr, nr_start, flags, pages); return 0; } @@ -2681,7 +2687,7 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages, if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM | FOLL_FORCE | FOLL_PIN | FOLL_GET | - FOLL_FAST_ONLY))) + FOLL_FAST_ONLY | FOLL_PCI_P2PDMA))) return -EINVAL; if (gup_flags & FOLL_PIN) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 9474dbc150ed..3030548dc007 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -985,8 +985,8 @@ struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr, return ERR_PTR(-EEXIST); pfn += (addr & ~PMD_MASK) >> PAGE_SHIFT; - *pgmap = get_dev_pagemap(pfn, *pgmap); - if (!*pgmap) + *pgmap = get_dev_pagemap(pfn, *pgmap, flags & FOLL_PCI_P2PDMA); + if (IS_ERR_OR_NULL(*pgmap)) return ERR_PTR(-EFAULT); page = pfn_to_page(pfn); if (!try_grab_page(page, flags)) @@ -1159,8 +1159,8 @@ struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr, return ERR_PTR(-EEXIST); pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT; - *pgmap = get_dev_pagemap(pfn, *pgmap); - if (!*pgmap) + *pgmap = get_dev_pagemap(pfn, *pgmap, flags & FOLL_PCI_P2PDMA); + if (IS_ERR_OR_NULL(*pgmap)) return ERR_PTR(-EFAULT); page = pfn_to_page(pfn); if (!try_grab_page(page, flags)) diff --git a/mm/memory-failure.c b/mm/memory-failure.c index c0bb186bba62..44fdad77f06a 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -1328,8 +1328,8 @@ int memory_failure(unsigned long pfn, int flags) p = pfn_to_online_page(pfn); if (!p) { if (pfn_valid(pfn)) { - pgmap = get_dev_pagemap(pfn, NULL); - if (pgmap) + pgmap = get_dev_pagemap(pfn, NULL, false); + if (!IS_ERR_OR_NULL(pgmap)) return memory_failure_dev_pagemap(pfn, flags, pgmap); } diff --git 
a/mm/memremap.c b/mm/memremap.c index 73a206d0f645..5f0284c3a3c3 100644 --- a/mm/memremap.c +++ b/mm/memremap.c @@ -197,14 +197,14 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params, "altmap not supported for multiple ranges\n")) return -EINVAL; - conflict_pgmap = get_dev_pagemap(PHYS_PFN(range->start), NULL); + conflict_pgmap = get_dev_pagemap(PHYS_PFN(range->start), NULL, true); if (conflict_pgmap) { WARN(1, "Conflicting mapping in same section\n"); put_dev_pagemap(conflict_pgmap); return -ENOMEM; } - conflict_pgmap = get_dev_pagemap(PHYS_PFN(range->end), NULL); + conflict_pgmap = get_dev_pagemap(PHYS_PFN(range->end), NULL, true); if (conflict_pgmap) { WARN(1, "Conflicting mapping in same section\n"); put_dev_pagemap(conflict_pgmap); @@ -454,19 +454,20 @@ void vmem_altmap_free(struct vmem_altmap *altmap, unsigned long nr_pfns) * get_dev_pagemap() - take a new live reference on the dev_pagemap for @pfn * @pfn: page frame number to lookup page_map * @pgmap: optional known pgmap that already has a reference + * @allow_pci_p2pdma: allow getting a pgmap with the PCI P2PDMA type * * If @pgmap is non-NULL and covers @pfn it will be returned as-is. If @pgmap * is non-NULL but does not cover @pfn the reference to it will be released. */ struct dev_pagemap *get_dev_pagemap(unsigned long pfn, - struct dev_pagemap *pgmap) + struct dev_pagemap *pgmap, bool allow_pci_p2pdma) { resource_size_t phys = PFN_PHYS(pfn); /* * In the cached case we're already holding a live reference. */ - if (pgmap) { + if (!IS_ERR_OR_NULL(pgmap)) { if (phys >= pgmap->range.start && phys <= pgmap->range.end) return pgmap; put_dev_pagemap(pgmap); @@ -479,6 +480,11 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn, pgmap = NULL; rcu_read_unlock(); + if (!allow_pci_p2pdma && pgmap->type == MEMORY_DEVICE_PCI_P2PDMA) { + put_dev_pagemap(pgmap); + return ERR_PTR(-EINVAL); + } + return pgmap; } EXPORT_SYMBOL_GPL(get_dev_pagemap); From patchwork Fri Nov 6 17:00:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 11887553 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.5 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C432BC388F2 for ; Fri, 6 Nov 2020 17:01:48 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 709FD22203 for ; Fri, 6 Nov 2020 17:01:48 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=deltatee.com header.i=@deltatee.com header.b="chSuzXXn" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727864AbgKFRBh (ORCPT ); Fri, 6 Nov 2020 12:01:37 -0500 Received: from ale.deltatee.com ([204.191.154.188]:57714 "EHLO ale.deltatee.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727789AbgKFRBA (ORCPT ); Fri, 6 Nov 2020 12:01:00 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:Content-Transfer-Encoding:MIME-Version: 
References:In-Reply-To:Message-Id:Date:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=m3VUYIxc1jPda5VSWLgLRAw02xsge+Z5NCWv/SFFLZ8=; b=chSuzXXnNIiMj8x5tY0yWvzN4Y uBDq8UiWGCPIy2lWdq+rPDZ2h5EhO1gULhRxUsFaANPfnMgWurYOAuvt26685GVMhBb3qwsZ3pRCa qVFIWSiO1Mb/oIoh/gHNuYpYp6odfYJAIowmbsy5Y3GC0wZj5M0XK2xzEBEmyjq9q0ziFlmpyKJWG 47/pRiA99NbZLRcUGsRkkujiMQhJJjYYf4o9FkVaDUmcu5ymPZBUg7YBmec0FxLbxZrNIZNzERe5y u6NhVVJ3PSa7O/jaPyEpsddskNSLUAkBaQPsg5bpjX2A6AM4QSjRQj/0hxsZIuls70Rg2Q0qvQYG9 jEWfZTtw==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1kb56g-0002PW-Bc; Fri, 06 Nov 2020 10:00:59 -0700 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.92) (envelope-from ) id 1kb56V-0004tK-FA; Fri, 06 Nov 2020 10:00:47 -0700 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , Ira Weiny , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Logan Gunthorpe Date: Fri, 6 Nov 2020 10:00:32 -0700 Message-Id: <20201106170036.18713-12-logang@deltatee.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20201106170036.18713-1-logang@deltatee.com> References: <20201106170036.18713-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, dan.j.williams@intel.com, iweiny@intel.com, jhubbard@nvidia.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [RFC PATCH 11/15] iov_iter: Introduce iov_iter_get_pages_[alloc_]flags() X-SA-Exim-Version: 4.2.1 (built Wed, 08 May 2019 21:11:16 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Add iov_iter_get_pages_flags() and iov_iter_get_pages_alloc_flags() which take a flags argument that is passed to get_user_pages_fast(). This is so that FOLL_PCI_P2PDMA can be passed when appropriate. 
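A brief sketch of how a caller might use the new variant, with the queue-capability check reduced to a boolean placeholder; nothing below is part of the patch itself:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/uio.h>

/* Illustrative caller: opt in to P2PDMA pages only when the target supports them. */
static ssize_t example_get_iter_pages(struct iov_iter *iter, bool p2pdma_ok,
				      struct page **pages, unsigned int maxpages,
				      size_t *offset)
{
	unsigned int gup_flags = 0;

	if (p2pdma_ok)
		gup_flags |= FOLL_PCI_P2PDMA;

	/*
	 * Existing callers are unchanged: iov_iter_get_pages() is now a
	 * static inline wrapper that passes gup_flags == 0.
	 */
	return iov_iter_get_pages_flags(iter, pages, LONG_MAX, maxpages,
					offset, gup_flags);
}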
Signed-off-by: Logan Gunthorpe --- include/linux/uio.h | 21 +++++++++++++++++---- lib/iov_iter.c | 25 ++++++++++++++----------- 2 files changed, 31 insertions(+), 15 deletions(-) diff --git a/include/linux/uio.h b/include/linux/uio.h index 72d88566694e..694caa868c05 100644 --- a/include/linux/uio.h +++ b/include/linux/uio.h @@ -221,10 +221,23 @@ void iov_iter_bvec(struct iov_iter *i, unsigned int direction, const struct bio_ void iov_iter_pipe(struct iov_iter *i, unsigned int direction, struct pipe_inode_info *pipe, size_t count); void iov_iter_discard(struct iov_iter *i, unsigned int direction, size_t count); -ssize_t iov_iter_get_pages(struct iov_iter *i, struct page **pages, - size_t maxsize, unsigned maxpages, size_t *start); -ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, struct page ***pages, - size_t maxsize, size_t *start); +ssize_t iov_iter_get_pages_flags(struct iov_iter *i, struct page **pages, + size_t maxsize, unsigned maxpages, size_t *start, + unsigned int gup_flags); +ssize_t iov_iter_get_pages_alloc_flags(struct iov_iter *i, + struct page ***pages, size_t maxsize, size_t *start, + unsigned int gup_flags); +static inline ssize_t iov_iter_get_pages(struct iov_iter *i, + struct page **pages, size_t maxsize, unsigned maxpages, + size_t *start) +{ + return iov_iter_get_pages_flags(i, pages, maxsize, maxpages, start, 0); +} +static inline ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, + struct page ***pages, size_t maxsize, size_t *start) +{ + return iov_iter_get_pages_alloc_flags(i, pages, maxsize, start, 0); +} int iov_iter_npages(const struct iov_iter *i, int maxpages); const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags); diff --git a/lib/iov_iter.c b/lib/iov_iter.c index 1635111c5bd2..700effe0bb86 100644 --- a/lib/iov_iter.c +++ b/lib/iov_iter.c @@ -1313,9 +1313,9 @@ static ssize_t pipe_get_pages(struct iov_iter *i, return __pipe_get_pages(i, min(maxsize, capacity), pages, iter_head, start); } -ssize_t iov_iter_get_pages(struct iov_iter *i, +ssize_t iov_iter_get_pages_flags(struct iov_iter *i, struct page **pages, size_t maxsize, unsigned maxpages, - size_t *start) + size_t *start, unsigned int gup_flags) { if (maxsize > i->count) maxsize = i->count; @@ -1325,6 +1325,9 @@ ssize_t iov_iter_get_pages(struct iov_iter *i, if (unlikely(iov_iter_is_discard(i))) return -EFAULT; + if (iov_iter_rw(i) != WRITE) + gup_flags |= FOLL_WRITE; + iterate_all_kinds(i, maxsize, v, ({ unsigned long addr = (unsigned long)v.iov_base; size_t len = v.iov_len + (*start = addr & (PAGE_SIZE - 1)); @@ -1335,9 +1338,7 @@ ssize_t iov_iter_get_pages(struct iov_iter *i, len = maxpages * PAGE_SIZE; addr &= ~(PAGE_SIZE - 1); n = DIV_ROUND_UP(len, PAGE_SIZE); - res = get_user_pages_fast(addr, n, - iov_iter_rw(i) != WRITE ? FOLL_WRITE : 0, - pages); + res = get_user_pages_fast(addr, n, gup_flags, pages); if (unlikely(res < 0)) return res; return (res == n ? 
len : res * PAGE_SIZE) - *start; @@ -1352,7 +1353,7 @@ ssize_t iov_iter_get_pages(struct iov_iter *i, ) return 0; } -EXPORT_SYMBOL(iov_iter_get_pages); +EXPORT_SYMBOL(iov_iter_get_pages_flags); static struct page **get_pages_array(size_t n) { @@ -1392,9 +1393,9 @@ static ssize_t pipe_get_pages_alloc(struct iov_iter *i, return n; } -ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, +ssize_t iov_iter_get_pages_alloc_flags(struct iov_iter *i, struct page ***pages, size_t maxsize, - size_t *start) + size_t *start, unsigned int gup_flags) { struct page **p; @@ -1406,6 +1407,9 @@ ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, if (unlikely(iov_iter_is_discard(i))) return -EFAULT; + if (iov_iter_rw(i) != WRITE) + gup_flags |= FOLL_WRITE; + iterate_all_kinds(i, maxsize, v, ({ unsigned long addr = (unsigned long)v.iov_base; size_t len = v.iov_len + (*start = addr & (PAGE_SIZE - 1)); @@ -1417,8 +1421,7 @@ ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, p = get_pages_array(n); if (!p) return -ENOMEM; - res = get_user_pages_fast(addr, n, - iov_iter_rw(i) != WRITE ? FOLL_WRITE : 0, p); + res = get_user_pages_fast(addr, n, gup_flags, p); if (unlikely(res < 0)) { kvfree(p); return res; @@ -1439,7 +1442,7 @@ ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, ) return 0; } -EXPORT_SYMBOL(iov_iter_get_pages_alloc); +EXPORT_SYMBOL(iov_iter_get_pages_alloc_flags); size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum, struct iov_iter *i) From patchwork Fri Nov 6 17:00:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 11887555 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.5 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 043C8C5517A for ; Fri, 6 Nov 2020 17:01:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8E0572151B for ; Fri, 6 Nov 2020 17:01:53 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=deltatee.com header.i=@deltatee.com header.b="Riax6Lm6" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727792AbgKFRBw (ORCPT ); Fri, 6 Nov 2020 12:01:52 -0500 Received: from ale.deltatee.com ([204.191.154.188]:57694 "EHLO ale.deltatee.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727771AbgKFRA6 (ORCPT ); Fri, 6 Nov 2020 12:00:58 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=vIAzFRE+nSs1Od5bUBFlY5DkV6P9QVEcl+CPAdcAk/o=; b=Riax6Lm6nK77suGcEtPdixJhwy ogbah0pJjRO+uUgDRRm0EctKlaLGBbG334qFWcO79vuSi8DoHii0sc7wt302kKLgaigv0cWhnI97t OU7nnol7v3/zBlNUx3qbZHaCUmEpV6KS3edRY82zksvqjILrVN1Oul79VgT6ObquLVsmhd8ZcVjIG 
GCBI1aMUZGpX6IFNeg6XUdXUWn6Tt2k/bfDcVLRRel0atZPC4U6EpYZMj6muUdZuMaS4W9Sw4C+MO hi3fTyRMlllHav4rwT1z952XjcqfpbKk1CV4kKjjg3CAIR8vkG3b59XdJN4Bt1qUvyC5S0Jn3JcJi ZT21YM0Q==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1kb56f-0002PV-5e; Fri, 06 Nov 2020 10:00:58 -0700 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.92) (envelope-from ) id 1kb56V-0004tN-K4; Fri, 06 Nov 2020 10:00:47 -0700 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , Ira Weiny , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Logan Gunthorpe Date: Fri, 6 Nov 2020 10:00:33 -0700 Message-Id: <20201106170036.18713-13-logang@deltatee.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20201106170036.18713-1-logang@deltatee.com> References: <20201106170036.18713-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, dan.j.williams@intel.com, iweiny@intel.com, jhubbard@nvidia.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [RFC PATCH 12/15] block: Set FOLL_PCI_P2PDMA in __bio_iov_iter_get_pages() X-SA-Exim-Version: 4.2.1 (built Wed, 08 May 2019 21:11:16 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org When a bio's queue supports PCI P2PDMA, set FOLL_PCI_P2PDMA for iov_iter_get_pages_flags(). This allows PCI P2PDMA pages to be passed from userspace and enables the O_DIRECT path in iomap based filesystems and direct to block devices. Signed-off-by: Logan Gunthorpe --- block/bio.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/block/bio.c b/block/bio.c index fa01bef35bb1..6b28256a5575 100644 --- a/block/bio.c +++ b/block/bio.c @@ -997,6 +997,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt; struct page **pages = (struct page **)bv; bool same_page = false; + unsigned int flags = 0; ssize_t size, left; unsigned len, i; size_t offset; @@ -1009,7 +1010,11 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2); pages += entries_left * (PAGE_PTRS_PER_BVEC - 1); - size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset); + if (bio->bi_disk && blk_queue_pci_p2pdma(bio->bi_disk->queue)) + flags |= FOLL_PCI_P2PDMA; + + size = iov_iter_get_pages_flags(iter, pages, LONG_MAX, nr_pages, + &offset, flags); if (unlikely(size <= 0)) return size ? 
size : -EFAULT; From patchwork Fri Nov 6 17:00:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 11887551 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.5 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 019A3C388F2 for ; Fri, 6 Nov 2020 17:02:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A197B217A0 for ; Fri, 6 Nov 2020 17:02:04 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=deltatee.com header.i=@deltatee.com header.b="LKLCRs5E" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727617AbgKFRBw (ORCPT ); Fri, 6 Nov 2020 12:01:52 -0500 Received: from ale.deltatee.com ([204.191.154.188]:57670 "EHLO ale.deltatee.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727763AbgKFRA6 (ORCPT ); Fri, 6 Nov 2020 12:00:58 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=/ib4hcAfbiHMdLX/8DBMNJM6Gg1f+72PJ0ND9MJ2JPY=; b=LKLCRs5E1fTNVSfyLyggO9F00J ebA/QlSU1neMqjpg6RXX69lynbZmYxAVQReH4MRpE300YyrdAKMEEs0R9dBNTt2+Sn9fa5ZYuR2qE BMJGKFEk2kv8lZODOz7/Dtmjj2ir+gpt5cQvYp7xvFZcw5Dg01QCyOzB1IZZCZoSDRM2c2UiBycyt lu6DdPugsSp10V+yx5M0RwBAleHSQT17iLdK3N3AvfCMkGd9ZVFZwgiGPAbpHTORNODrCOzamsd3j ftrZdcJQIcIIK7uAVQf127ZdItrwzvKQvj7WJHXN/+D/ivdDaNbuJLgvyxC5ssGGyLKrGSo+xNcA5 XYQETn/Q==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1kb56d-0002PX-U2; Fri, 06 Nov 2020 10:00:57 -0700 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.92) (envelope-from ) id 1kb56V-0004tQ-Od; Fri, 06 Nov 2020 10:00:47 -0700 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , Ira Weiny , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Logan Gunthorpe Date: Fri, 6 Nov 2020 10:00:34 -0700 Message-Id: <20201106170036.18713-14-logang@deltatee.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20201106170036.18713-1-logang@deltatee.com> References: <20201106170036.18713-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, 
christian.koenig@amd.com, dan.j.williams@intel.com, iweiny@intel.com, jhubbard@nvidia.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [RFC PATCH 13/15] block: Set FOLL_PCI_P2PDMA in bio_map_user_iov() X-SA-Exim-Version: 4.2.1 (built Wed, 08 May 2019 21:11:16 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org When a bio's queue supports PCI P2PDMA, set FOLL_PCI_P2PDMA for iov_iter_get_pages_flags(). This allows PCI P2PDMA pages to be passed from userspace and enables the NVMe passthru requests to use P2PDMA pages. Signed-off-by: Logan Gunthorpe --- block/blk-map.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/block/blk-map.c b/block/blk-map.c index 21630dccac62..cad8fcfe4f17 100644 --- a/block/blk-map.c +++ b/block/blk-map.c @@ -245,6 +245,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter, { unsigned int max_sectors = queue_max_hw_sectors(rq->q); struct bio *bio, *bounce_bio; + unsigned int flags = 0; int ret; int j; @@ -256,13 +257,17 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter, return -ENOMEM; bio->bi_opf |= req_op(rq); + if (blk_queue_pci_p2pdma(rq->q)) + flags |= FOLL_PCI_P2PDMA; + while (iov_iter_count(iter)) { struct page **pages; ssize_t bytes; size_t offs, added = 0; int npages; - bytes = iov_iter_get_pages_alloc(iter, &pages, LONG_MAX, &offs); + bytes = iov_iter_get_pages_alloc_flags(iter, &pages, LONG_MAX, + &offs, flags); if (unlikely(bytes <= 0)) { ret = bytes ? bytes : -EFAULT; goto out_unmap; From patchwork Fri Nov 6 17:00:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 11887557 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.7 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A90B2C56201 for ; Fri, 6 Nov 2020 17:01:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5101120720 for ; Fri, 6 Nov 2020 17:01:54 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=deltatee.com header.i=@deltatee.com header.b="Bjj9w5en" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727873AbgKFRBx (ORCPT ); Fri, 6 Nov 2020 12:01:53 -0500 Received: from ale.deltatee.com ([204.191.154.188]:57674 "EHLO ale.deltatee.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727765AbgKFRA6 (ORCPT ); Fri, 6 Nov 2020 12:00:58 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; 
bh=5dvKfzRnO0iiiLL/CYBa9gf//32ohjJCm8GSYQzKYAs=; b=Bjj9w5enjWXzYve9zF10M+2O9P qjVGNonCDmqcDolyhjugoGithINjBxlB/lx8aYU7j9Fd9F4mEgsxs2J7X5pgLFNdwuriDlCJep9hR EDoGZ4dVz7LW07ISC74HuoBMP0AFPEZ8d3iZ2gUCXE5nTe9eQGQyrniQmkZYDp1YJmgOS8+N5LC5U jW4dO74EYiAbhsAzx0C5Jyt8MQx8C6uv/BLPAvRjgQ8By2vdhAjcX28IMS053QSvZPJnrtkyXUutw iL1GnELk8QMpL5Kia/vaMQgw/h3WCQ6HXpcipTLtRdpt91b/0NfNWFnXLQhcTWyzVyMVJsMXGHOdf +Z9NM3tQ==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1kb56d-0002PY-EO; Fri, 06 Nov 2020 10:00:57 -0700 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.92) (envelope-from ) id 1kb56V-0004tT-Sy; Fri, 06 Nov 2020 10:00:47 -0700 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , Ira Weiny , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Logan Gunthorpe Date: Fri, 6 Nov 2020 10:00:35 -0700 Message-Id: <20201106170036.18713-15-logang@deltatee.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20201106170036.18713-1-logang@deltatee.com> References: <20201106170036.18713-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, dan.j.williams@intel.com, iweiny@intel.com, jhubbard@nvidia.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [RFC PATCH 14/15] PCI/P2PDMA: Introduce pci_mmap_p2pmem() X-SA-Exim-Version: 4.2.1 (built Wed, 08 May 2019 21:11:16 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Introduce pci_mmap_p2pmem() which is a helper to allocate and mmap a hunk of p2pmem into userspace. 
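For illustration, a minimal sketch of how a driver could expose this helper through its own char device, assuming it stashes a struct pci_dev pointer in file->private_data at open() time; all names here are hypothetical and not part of the patch:

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pci-p2pdma.h>

/* Hypothetical char-device mmap handler backed by the PCI device's p2pmem. */
static int example_cdev_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct pci_dev *pdev = file->private_data;	/* set at open() time */

	/* pci_mmap_p2pmem() allocates a fresh chunk; the file offset is not used. */
	return pci_mmap_p2pmem(pdev, vma);
}

static const struct file_operations example_p2pmem_fops = {
	.owner	= THIS_MODULE,
	.mmap	= example_cdev_mmap,
};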
Signed-off-by: Logan Gunthorpe --- drivers/pci/p2pdma.c | 104 +++++++++++++++++++++++++++++++++++++ include/linux/pci-p2pdma.h | 6 +++ 2 files changed, 110 insertions(+) diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c index 9961e779f430..8eab53ac59ae 100644 --- a/drivers/pci/p2pdma.c +++ b/drivers/pci/p2pdma.c @@ -16,6 +16,7 @@ #include #include #include +#include #include #include #include @@ -1055,3 +1056,106 @@ ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev, return sprintf(page, "%s\n", pci_name(p2p_dev)); } EXPORT_SYMBOL_GPL(pci_p2pdma_enable_show); + +struct pci_p2pdma_map { + struct kref ref; + struct pci_dev *pdev; + void *kaddr; + size_t len; +}; + +static struct pci_p2pdma_map *pci_p2pdma_map_alloc(struct pci_dev *pdev, + size_t len) +{ + struct pci_p2pdma_map *pmap; + + pmap = kzalloc(sizeof(*pmap), GFP_KERNEL); + if (!pmap) + return NULL; + + kref_init(&pmap->ref); + pmap->pdev = pdev; + pmap->len = len; + + pmap->kaddr = pci_alloc_p2pmem(pdev, len); + if (!pmap->kaddr) { + kfree(pmap); + return NULL; + } + + return pmap; +} + +static void pci_p2pdma_map_free(struct kref *ref) +{ + struct pci_p2pdma_map *pmap = + container_of(ref, struct pci_p2pdma_map, ref); + + pci_free_p2pmem(pmap->pdev, pmap->kaddr, pmap->len); + kfree(pmap); +} + +static void pci_p2pdma_vma_open(struct vm_area_struct *vma) +{ + struct pci_p2pdma_map *pmap = vma->vm_private_data; + + kref_get(&pmap->ref); +} + +static void pci_p2pdma_vma_close(struct vm_area_struct *vma) +{ + struct pci_p2pdma_map *pmap = vma->vm_private_data; + + kref_put(&pmap->ref, pci_p2pdma_map_free); +} + +const struct vm_operations_struct pci_p2pdma_vmops = { + .open = pci_p2pdma_vma_open, + .close = pci_p2pdma_vma_close, +}; + +/** + * pci_mmap_p2pmem - allocate peer-to-peer DMA memory + * @pdev: the device to allocate memory from + * @vma: the userspace vma to map the memory to + * + * Returns 0 on success, or a negative error code on failure + */ +int pci_mmap_p2pmem(struct pci_dev *pdev, struct vm_area_struct *vma) +{ + struct pci_p2pdma_map *pmap; + unsigned long addr, pfn; + vm_fault_t ret; + + /* prevent private mappings from being established */ + if ((vma->vm_flags & VM_MAYSHARE) != VM_MAYSHARE) { + pci_info_ratelimited(pdev, + "%s: fail, attempted private mapping\n", + current->comm); + return -EINVAL; + } + + pmap = pci_p2pdma_map_alloc(pdev, vma->vm_end - vma->vm_start); + if (!pmap) + return -ENOMEM; + + vma->vm_flags |= VM_MIXEDMAP; + vma->vm_private_data = pmap; + vma->vm_ops = &pci_p2pdma_vmops; + + pfn = virt_to_phys(pmap->kaddr) >> PAGE_SHIFT; + for (addr = vma->vm_start; addr < vma->vm_end; addr += PAGE_SIZE) { + ret = vmf_insert_mixed(vma, addr, + __pfn_to_pfn_t(pfn, PFN_DEV | PFN_MAP)); + if (ret & VM_FAULT_ERROR) + goto out_error; + pfn++; + } + + return 0; + +out_error: + kref_put(&pmap->ref, pci_p2pdma_map_free); + return -EFAULT; +} +EXPORT_SYMBOL_GPL(pci_mmap_p2pmem); diff --git a/include/linux/pci-p2pdma.h b/include/linux/pci-p2pdma.h index fc5de47eeac4..26fe40363d1c 100644 --- a/include/linux/pci-p2pdma.h +++ b/include/linux/pci-p2pdma.h @@ -40,6 +40,7 @@ int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev, bool *use_p2pdma); ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev, bool use_p2pdma); +int pci_mmap_p2pmem(struct pci_dev *pdev, struct vm_area_struct *vma); #else /* CONFIG_PCI_P2PDMA */ static inline int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, u64 offset) @@ -116,6 +117,11 @@ static inline ssize_t 
pci_p2pdma_enable_show(char *page, { return sprintf(page, "none\n"); } +static inline int pci_mmap_p2pmem(struct pci_dev *pdev, + struct vm_area_struct *vma) +{ + return -EOPNOTSUPP; +} #endif /* CONFIG_PCI_P2PDMA */ From patchwork Fri Nov 6 17:00:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 11887529 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7443FC55178 for ; Fri, 6 Nov 2020 17:01:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1558D22202 for ; Fri, 6 Nov 2020 17:01:01 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=deltatee.com header.i=@deltatee.com header.b="pr/Jowlo" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727790AbgKFRA7 (ORCPT ); Fri, 6 Nov 2020 12:00:59 -0500 Received: from ale.deltatee.com ([204.191.154.188]:57652 "EHLO ale.deltatee.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725868AbgKFRA5 (ORCPT ); Fri, 6 Nov 2020 12:00:57 -0500 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=deltatee.com; s=20200525; h=Subject:Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=Fp3zmrc4rC+wqr/t/2S1tglSPbg2RkSx9ufEnh8O6wg=; b=pr/Jowlou3yq1rIX94ylABVlRB 5w6LYJAv3MifRhRJ3ba0AonoKLftJrGzJo6fpOesPsyS+IfGsO2xahIAfEKHn7xozhVzLCatnFErO C33aKyXn6ERKFCdlFQXfr6IMaMPeQRDDMugs+o0yk8O6CrYC/Ow69Q1ogdNZPHAg61/Pkubf9ce9y inecSG3KcBrb8MNSLrE+4L5YV91Y7pVZv70CvAXdOjHZTXXodIaqKI+lVT1+VK6uUCgBVs73TEDAr Asex6dkaH+/b0CGsBqV32lPp3wXtqhurcLDtZBH5yZbR8hY8DNMWJoRUVuqjNgASX+ggal9UgFKtR gWI6EXhA==; Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1kb56c-0002Pa-7h; Fri, 06 Nov 2020 10:00:56 -0700 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.92) (envelope-from ) id 1kb56W-0004tW-1a; Fri, 06 Nov 2020 10:00:48 -0700 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org Cc: Stephen Bates , Christoph Hellwig , Dan Williams , Jason Gunthorpe , =?utf-8?q?Christian_K=C3=B6nig?= , Ira Weiny , John Hubbard , Don Dutile , Matthew Wilcox , Daniel Vetter , Logan Gunthorpe Date: Fri, 6 Nov 2020 10:00:36 -0700 Message-Id: <20201106170036.18713-16-logang@deltatee.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20201106170036.18713-1-logang@deltatee.com> References: <20201106170036.18713-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, 
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, sbates@raithlin.com, hch@lst.de, jgg@ziepe.ca, christian.koenig@amd.com, dan.j.williams@intel.com, iweiny@intel.com, jhubbard@nvidia.com, ddutile@redhat.com, willy@infradead.org, daniel.vetter@ffwll.ch, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com Subject: [RFC PATCH 15/15] nvme-pci: Allow mmaping the CMB in userspace X-SA-Exim-Version: 4.2.1 (built Wed, 08 May 2019 21:11:16 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org
Allow userspace to obtain CMB memory by mmaping the controller's char device. The mmap call allocates and returns a hunk of CMB memory (the offset is ignored), so userspace does not have control over the address within the CMB. A VMA allocated in this way will only be usable by drivers that set FOLL_PCI_P2PDMA when calling GUP, and inter-device support will be checked the first time the pages are mapped for DMA. Currently this is only supported by O_DIRECT to a PCI NVMe device or through the NVMe passthrough IOCTL.
Signed-off-by: Logan Gunthorpe --- drivers/nvme/host/core.c | 11 +++++++++++ drivers/nvme/host/nvme.h | 1 + drivers/nvme/host/pci.c | 9 +++++++++ 3 files changed, 21 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index f14316c9b34a..fc642aba671d 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -3240,12 +3240,23 @@ static long nvme_dev_ioctl(struct file *file, unsigned int cmd, } } +static int nvme_dev_mmap(struct file *file, struct vm_area_struct *vma) +{ + struct nvme_ctrl *ctrl = file->private_data; + + if (!ctrl->ops->mmap_cmb) + return -ENODEV; + + return ctrl->ops->mmap_cmb(ctrl, vma); +} + static const struct file_operations nvme_dev_fops = { .owner = THIS_MODULE, .open = nvme_dev_open, .release = nvme_dev_release, .unlocked_ioctl = nvme_dev_ioctl, .compat_ioctl = compat_ptr_ioctl, + .mmap = nvme_dev_mmap, }; static ssize_t nvme_sysfs_reset(struct device *dev,
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index a0bfe8709cfc..3d790d849701 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -473,6 +473,7 @@ struct nvme_ctrl_ops { void (*delete_ctrl)(struct nvme_ctrl *ctrl); int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size); bool (*supports_pci_p2pdma)(struct nvme_ctrl *ctrl); + int (*mmap_cmb)(struct nvme_ctrl *ctrl, struct vm_area_struct *vma); }; #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 26976bdf4af0..9c61573111ea 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -2687,6 +2687,14 @@ static bool nvme_pci_supports_pci_p2pdma(struct nvme_ctrl *ctrl) return dma_pci_p2pdma_supported(dev->dev); } +static int nvme_pci_mmap_cmb(struct nvme_ctrl *ctrl, + struct vm_area_struct *vma) +{ + struct pci_dev *pdev = to_pci_dev(to_nvme_dev(ctrl)->dev); + + return pci_mmap_p2pmem(pdev, vma); +} + static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = { .name = "pcie", .module = THIS_MODULE, @@ -2698,6 +2706,7 @@ static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = { .submit_async_event = nvme_pci_submit_async_event, .get_address = nvme_pci_get_address, .supports_pci_p2pdma = nvme_pci_supports_pci_p2pdma, + .mmap_cmb = nvme_pci_mmap_cmb, }; static int nvme_dev_map(struct nvme_dev *dev)
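To round out the series, a rough userspace sketch of the intended flow, assuming /dev/nvme0 is the controller char device and /dev/nvme0n1 a namespace behind a CMB-capable controller on a kernel with this series applied; device paths, the transfer size and the minimal error handling are examples only:

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2 * 1024 * 1024;
	int ctrl_fd = open("/dev/nvme0", O_RDWR);	/* controller char device */
	int blk_fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);

	if (ctrl_fd < 0 || blk_fd < 0)
		return 1;

	/* The offset is ignored; the kernel chooses the address within the CMB. */
	void *cmb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
			 ctrl_fd, 0);
	if (cmb == MAP_FAILED)
		return 1;

	/* An O_DIRECT read lands in the CMB without bouncing through host RAM. */
	ssize_t ret = pread(blk_fd, cmb, len, 0);
	printf("read %zd bytes into the CMB\n", ret);

	munmap(cmb, len);
	close(blk_fd);
	close(ctrl_fd);
	return 0;
}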