From patchwork Thu Apr 8 17:01:22 2021
X-Patchwork-Submitter: Logan Gunthorpe <logang@deltatee.com>
X-Patchwork-Id: 12191947
From: Logan Gunthorpe <logang@deltatee.com>
To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org, linux-pci@vger.kernel.org,
	linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: Stephen Bates, Christoph Hellwig, Dan Williams, Jason Gunthorpe,
	Christian König, John Hubbard, Don Dutile, Matthew Wilcox,
	Daniel Vetter, Jakowski Andrzej, Minturn Dave B, Jason Ekstrand,
	Dave Hansen, Xiong Jianxin, Bjorn Helgaas, Ira Weiny,
	Robin Murphy, Logan Gunthorpe
Date: Thu, 8 Apr 2021 11:01:22 -0600
Message-Id: <20210408170123.8788-16-logang@deltatee.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210408170123.8788-1-logang@deltatee.com>
References: <20210408170123.8788-1-logang@deltatee.com>
Subject: [PATCH 15/16] RDMA/rw: use dma_map_sg_p2pdma()

Drop the use of pci_p2pdma_map_sg() in favour of dma_map_sg_p2pdma().

The new interface allows mapping scatterlists that mix both regular and
P2PDMA pages and will verify that the dma device can communicate with
the device the pages are on.
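
The caller-visible difference is the return convention: the old rdma_rw_map_sg()
wrapper (like ib_dma_map_sg()) returned 0 on failure, whereas ib_dma_map_sg_p2pdma()
returns a negative errno on failure and the number of mapped segments on success.
A minimal before/after sketch of the caller pattern, taken from the rw.c hunks
below (variable names as in that file):

	/* Before: zero signalled failure, so the only possible error was -ENOMEM. */
	ret = rdma_rw_map_sg(dev, sg, sg_cnt, dir);
	if (!ret)
		return -ENOMEM;
	sg_cnt = ret;

	/* After: a negative errno is propagated as-is; a positive return
	 * value is the number of mapped segments.
	 */
	ret = ib_dma_map_sg_p2pdma(dev, sg, sg_cnt, dir);
	if (ret < 0)
		return ret;
	sg_cnt = ret;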
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/infiniband/core/rw.c | 50 ++++++++++--------------------
 include/rdma/ib_verbs.h      | 32 +++++++++++++++++++++++
 2 files changed, 46 insertions(+), 36 deletions(-)

diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c
index 31156e22d3e7..0c6213d9b044 100644
--- a/drivers/infiniband/core/rw.c
+++ b/drivers/infiniband/core/rw.c
@@ -273,26 +273,6 @@ static int rdma_rw_init_single_wr(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 	return 1;
 }
 
-static void rdma_rw_unmap_sg(struct ib_device *dev, struct scatterlist *sg,
-			     u32 sg_cnt, enum dma_data_direction dir)
-{
-	if (is_pci_p2pdma_page(sg_page(sg)))
-		pci_p2pdma_unmap_sg(dev->dma_device, sg, sg_cnt, dir);
-	else
-		ib_dma_unmap_sg(dev, sg, sg_cnt, dir);
-}
-
-static int rdma_rw_map_sg(struct ib_device *dev, struct scatterlist *sg,
-			  u32 sg_cnt, enum dma_data_direction dir)
-{
-	if (is_pci_p2pdma_page(sg_page(sg))) {
-		if (WARN_ON_ONCE(ib_uses_virt_dma(dev)))
-			return 0;
-		return pci_p2pdma_map_sg(dev->dma_device, sg, sg_cnt, dir);
-	}
-	return ib_dma_map_sg(dev, sg, sg_cnt, dir);
-}
-
 /**
  * rdma_rw_ctx_init - initialize a RDMA READ/WRITE context
  * @ctx: context to initialize
@@ -315,9 +295,9 @@ int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
 	struct ib_device *dev = qp->pd->device;
 	int ret;
 
-	ret = rdma_rw_map_sg(dev, sg, sg_cnt, dir);
-	if (!ret)
-		return -ENOMEM;
+	ret = ib_dma_map_sg_p2pdma(dev, sg, sg_cnt, dir);
+	if (ret < 0)
+		return ret;
 	sg_cnt = ret;
 
 	/*
@@ -354,7 +334,7 @@ int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
 	return ret;
 
 out_unmap_sg:
-	rdma_rw_unmap_sg(dev, sg, sg_cnt, dir);
+	ib_dma_unmap_sg(dev, sg, sg_cnt, dir);
 	return ret;
 }
 EXPORT_SYMBOL(rdma_rw_ctx_init);
@@ -394,17 +374,15 @@ int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 		return -EINVAL;
 	}
 
-	ret = rdma_rw_map_sg(dev, sg, sg_cnt, dir);
-	if (!ret)
-		return -ENOMEM;
+	ret = ib_dma_map_sg_p2pdma(dev, sg, sg_cnt, dir);
+	if (ret < 0)
+		return ret;
 	sg_cnt = ret;
 
 	if (prot_sg_cnt) {
-		ret = rdma_rw_map_sg(dev, prot_sg, prot_sg_cnt, dir);
-		if (!ret) {
-			ret = -ENOMEM;
+		ret = ib_dma_map_sg_p2pdma(dev, prot_sg, prot_sg_cnt, dir);
+		if (ret < 0)
 			goto out_unmap_sg;
-		}
 		prot_sg_cnt = ret;
 	}
 
@@ -469,9 +447,9 @@ int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 	kfree(ctx->reg);
 out_unmap_prot_sg:
 	if (prot_sg_cnt)
-		rdma_rw_unmap_sg(dev, prot_sg, prot_sg_cnt, dir);
+		ib_dma_unmap_sg(dev, prot_sg, prot_sg_cnt, dir);
 out_unmap_sg:
-	rdma_rw_unmap_sg(dev, sg, sg_cnt, dir);
+	ib_dma_unmap_sg(dev, sg, sg_cnt, dir);
 	return ret;
 }
 EXPORT_SYMBOL(rdma_rw_ctx_signature_init);
@@ -603,7 +581,7 @@ void rdma_rw_ctx_destroy(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
 		break;
 	}
 
-	rdma_rw_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
+	ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
 }
 EXPORT_SYMBOL(rdma_rw_ctx_destroy);
 
@@ -631,8 +609,8 @@ void rdma_rw_ctx_destroy_signature(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
 	kfree(ctx->reg);
 
 	if (prot_sg_cnt)
-		rdma_rw_unmap_sg(qp->pd->device, prot_sg, prot_sg_cnt, dir);
-	rdma_rw_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
+		ib_dma_unmap_sg(qp->pd->device, prot_sg, prot_sg_cnt, dir);
+	ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
 }
 EXPORT_SYMBOL(rdma_rw_ctx_destroy_signature);
 
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index ca28fca5736b..a541ed1702f5 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -4028,6 +4028,17 @@ static inline int ib_dma_map_sg_attrs(struct ib_device *dev,
 				      dma_attrs);
 }
 
+static inline int ib_dma_map_sg_p2pdma_attrs(struct ib_device *dev,
+					     struct scatterlist *sg, int nents,
+					     enum dma_data_direction direction,
+					     unsigned long dma_attrs)
+{
+	if (ib_uses_virt_dma(dev))
+		return ib_dma_virt_map_sg(dev, sg, nents);
+	return dma_map_sg_p2pdma_attrs(dev->dma_device, sg, nents, direction,
+				       dma_attrs);
+}
+
 static inline void ib_dma_unmap_sg_attrs(struct ib_device *dev,
 					 struct scatterlist *sg, int nents,
 					 enum dma_data_direction direction,
@@ -4052,6 +4063,27 @@ static inline int ib_dma_map_sg(struct ib_device *dev,
 	return ib_dma_map_sg_attrs(dev, sg, nents, direction, 0);
 }
 
+/**
+ * ib_dma_map_sg_p2pdma - Map a scatter/gather list to DMA addresses
+ * @dev: The device for which the DMA addresses are to be created
+ * @sg: The array of scatter/gather entries
+ * @nents: The number of scatter/gather entries
+ * @direction: The direction of the DMA
+ *
+ * Map a scatter/gather list that might contain P2PDMA pages.
+ * Unlike ib_dma_map_sg() it will return either a negative errno or
+ * a positive value indicating the number of dma segments. See
+ * dma_map_sg_p2pdma_attrs() for details.
+ *
+ * The resulting list should be unmapped with ib_dma_unmap_sg().
+ */
+static inline int ib_dma_map_sg_p2pdma(struct ib_device *dev,
+				       struct scatterlist *sg, int nents,
+				       enum dma_data_direction direction)
+{
+	return ib_dma_map_sg_p2pdma_attrs(dev, sg, nents, direction, 0);
+}
+
 /**
  * ib_dma_unmap_sg - Unmap a scatter/gather list of DMA addresses
  * @dev: The device for which the DMA addresses were created
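
For illustration, a minimal sketch of how a consumer might use the new helper
pair. Only ib_dma_map_sg_p2pdma() and ib_dma_unmap_sg() come from this patch;
the surrounding function and its name are hypothetical, and the underlying
dma_map_sg_p2pdma_attrs() is provided by earlier patches in this series:

	/* Hypothetical caller: map an SG list that may contain P2PDMA pages,
	 * use the mapped segments, then unmap with the existing helper, as
	 * the kernel-doc above suggests.
	 */
	static int example_rdma_map(struct ib_device *dev, struct scatterlist *sgl,
				    int nents, enum dma_data_direction dir)
	{
		int mapped;

		/* Negative errno on failure, number of mapped segments on success. */
		mapped = ib_dma_map_sg_p2pdma(dev, sgl, nents, dir);
		if (mapped < 0)
			return mapped;

		/* ... build and post work requests over the 'mapped' segments ... */

		ib_dma_unmap_sg(dev, sgl, nents, dir);
		return 0;
	}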