From patchwork Tue Sep 15 17:10:49 2015
X-Patchwork-Submitter: wdavis@nvidia.com
X-Patchwork-Id: 7188651
X-Patchwork-Delegate: bhelgaas@google.com
From: Will Davis <wdavis@nvidia.com>
To: Bjorn Helgaas
CC: Alex Williamson, Joerg Roedel, Konrad Wilk, Mark Hounschell,
    "David S. Miller", Jonathan Corbet, Terence Ripperda, John Hubbard,
    Jerome Glisse, Will Davis
Subject: [PATCH 04/22] DMA-API: Introduce dma_(un)map_peer_resource
Date: Tue, 15 Sep 2015 12:10:49 -0500
Message-ID: <1442337067-22964-5-git-send-email-wdavis@nvidia.com>
In-Reply-To: <1442337067-22964-1-git-send-email-wdavis@nvidia.com>
References: <1442337067-22964-1-git-send-email-wdavis@nvidia.com>
List-ID: <linux-pci.vger.kernel.org>

Add functions to DMA-map and -unmap a peer device's resource for a given
device. This will allow a device to DMA-map, for example, another device's
PCI BAR region in order to enable peer-to-peer transactions. Guard these
new functions behind CONFIG_HAS_DMA_P2P.
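As a rough usage sketch (not part of this patch; the 'pdev'/'peer_pdev'
devices, the choice of BAR 1, the 64K window size, and the omission of
error handling are purely illustrative assumptions), a driver could use
the new interface along these lines:

	/*
	 * Illustrative sketch only -- not part of this patch.  Assumes
	 * <linux/pci.h>, <linux/dma-mapping.h>, and <linux/sizes.h>;
	 * 'pdev' is the device issuing DMA and 'peer_pdev' is the peer
	 * whose BAR 1 is the target of the peer-to-peer transactions.
	 */
	static void example_map_peer_bar(struct pci_dev *pdev,
					 struct pci_dev *peer_pdev)
	{
		struct resource *peer_bar = &peer_pdev->resource[1];
		dma_peer_addr_t peer_addr;

		/* Map a 64K window at the start of the peer's BAR. */
		peer_addr = dma_map_peer_resource(&pdev->dev, &peer_pdev->dev,
						  peer_bar, 0, SZ_64K,
						  DMA_BIDIRECTIONAL);

		/* ... program pdev to DMA to/from peer_addr ... */

		dma_unmap_peer_resource(&pdev->dev, peer_addr, SZ_64K,
					DMA_BIDIRECTIONAL);
	}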
Signed-off-by: Will Davis <wdavis@nvidia.com>
---
 include/asm-generic/dma-mapping-common.h | 43 ++++++++++++++++++++++++++++++++
 include/linux/dma-mapping.h              | 12 +++++++++
 2 files changed, 55 insertions(+)

diff --git a/include/asm-generic/dma-mapping-common.h b/include/asm-generic/dma-mapping-common.h
index 940d5ec..45eec17 100644
--- a/include/asm-generic/dma-mapping-common.h
+++ b/include/asm-generic/dma-mapping-common.h
@@ -73,6 +73,42 @@ static inline void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg
 		ops->unmap_sg(dev, sg, nents, dir, attrs);
 }
 
+#ifdef CONFIG_HAS_DMA_P2P
+static inline dma_peer_addr_t dma_map_peer_resource_attrs(struct device *dev,
+							   struct device *peer,
+							   struct resource *res,
+							   size_t offset,
+							   size_t size,
+							   enum dma_data_direction dir,
+							   struct dma_attrs *attrs)
+{
+	struct dma_map_ops *ops = get_dma_ops(dev);
+	dma_peer_addr_t addr;
+
+	BUG_ON(!valid_dma_direction(dir));
+	BUG_ON(ops->map_peer_resource == NULL);
+	addr = ops->map_peer_resource(dev, peer, res, offset, size, dir,
+				      attrs);
+	debug_dma_map_peer_resource(dev, peer, res, offset, size, dir, addr);
+
+	return addr;
+}
+
+static inline void dma_unmap_peer_resource_attrs(struct device *dev,
+						 dma_peer_addr_t addr,
+						 size_t size,
+						 enum dma_data_direction dir,
+						 struct dma_attrs *attrs)
+{
+	struct dma_map_ops *ops = get_dma_ops(dev);
+
+	BUG_ON(!valid_dma_direction(dir));
+	if (ops->unmap_peer_resource)
+		ops->unmap_peer_resource(dev, addr, size, dir, attrs);
+	debug_dma_unmap_peer_resource(dev, addr, size, dir);
+}
+#endif
+
 static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
 				      size_t offset, size_t size,
 				      enum dma_data_direction dir)
@@ -181,6 +217,13 @@ dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
 #define dma_map_sg(d, s, n, r) dma_map_sg_attrs(d, s, n, r, NULL)
 #define dma_unmap_sg(d, s, n, r) dma_unmap_sg_attrs(d, s, n, r, NULL)
 
+#ifdef CONFIG_HAS_DMA_P2P
+#define dma_map_peer_resource(d, p, e, o, s, r) \
+	dma_map_peer_resource_attrs(d, p, e, o, s, r, NULL)
+#define dma_unmap_peer_resource(d, a, s, r) \
+	dma_unmap_peer_resource_attrs(d, a, s, r, NULL)
+#endif
+
 extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 			   void *cpu_addr, dma_addr_t dma_addr, size_t size);
 
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index ac07ff0..7b8fddc 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -34,6 +34,18 @@ struct dma_map_ops {
 	void (*unmap_page)(struct device *dev, dma_addr_t dma_handle,
 			   size_t size, enum dma_data_direction dir,
 			   struct dma_attrs *attrs);
+#ifdef CONFIG_HAS_DMA_P2P
+	dma_peer_addr_t (*map_peer_resource)(struct device *dev,
+					     struct device *peer,
+					     struct resource *res,
+					     unsigned long offset, size_t size,
+					     enum dma_data_direction dir,
+					     struct dma_attrs *attrs);
+	void (*unmap_peer_resource)(struct device *dev,
+				    dma_peer_addr_t dma_handle,
+				    size_t size, enum dma_data_direction dir,
+				    struct dma_attrs *attrs);
+#endif
 	/*
 	 * map_sg returns 0 on error and a value > 0 on success.
 	 * It should never return a value < 0.