From patchwork Thu Oct 19 15:25:45 2023
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13429370
Subject: [PATCH RFC 2/9] bvec: Add bio_vec fields to manage DMA mapping
From: Chuck Lever
Cc: Jens Axboe, Christoph Hellwig, David Howells, iommu@lists.linux.dev, linux-rdma@vger.kernel.org, Chuck Lever
Date: Thu, 19 Oct 2023 11:25:45 -0400
Message-ID: <169772914548.5232.12015170784207638561.stgit@klimt.1015granger.net>
In-Reply-To: <169772852492.5232.17148564580779995849.stgit@klimt.1015granger.net>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Chuck Lever

These are roughly equivalent to the fields used for managing scatterlist DMA mapping.
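For illustration, a minimal sketch of how a driver might consume the new fields follows. It assumes the dma_map_bvecs_attrs() and dma_unmap_bvecs_attrs() helpers added later in this series, and device_post_segment() is a hypothetical stand-in for whatever device-specific operation programs each mapped segment:

	/* Hypothetical example; error handling elided. */
	int i, nr;

	nr = dma_map_bvecs_attrs(dev, bvecs, nents, DMA_TO_DEVICE, 0);
	for (i = 0; i < nr; i++) {
		struct bio_vec *bv = &bvecs[i];

		/* Hand the bus address and mapped length to the device. */
		device_post_segment(bv_dma_address(bv), bv_dma_len(bv));
	}
	dma_unmap_bvecs_attrs(dev, bvecs, nents, DMA_TO_DEVICE, 0);

Note that the unmap call takes the original bvecs array and original nents, not the count returned by the map call.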
Cc: Jens Axboe Cc: Christoph Hellwig Cc: David Howells Cc: iommu@lists.linux.dev Cc: linux-rdma@vger.kernel.org Signed-off-by: Chuck Lever --- include/linux/bvec.h | 143 ++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 143 insertions(+) diff --git a/include/linux/bvec.h b/include/linux/bvec.h index 555aae5448ae..1074f34a4e8f 100644 --- a/include/linux/bvec.h +++ b/include/linux/bvec.h @@ -13,6 +13,7 @@ #include #include #include +#include struct page; @@ -32,6 +33,13 @@ struct bio_vec { struct page *bv_page; unsigned int bv_len; unsigned int bv_offset; + dma_addr_t bv_dma_address; +#ifdef CONFIG_NEED_SG_DMA_LENGTH + unsigned int bv_dma_length; +#endif +#ifdef CONFIG_NEED_SG_DMA_FLAGS + unsigned int bv_dma_flags; +#endif }; /** @@ -74,6 +82,24 @@ static inline void bvec_set_virt(struct bio_vec *bv, void *vaddr, bvec_set_page(bv, virt_to_page(vaddr), len, offset_in_page(vaddr)); } +/** + * bv_phys - return physical address of a bio_vec + * @bv: bio_vec + */ +static inline dma_addr_t bv_phys(struct bio_vec *bv) +{ + return page_to_phys(bv->bv_page) + bv->bv_offset; +} + +/** + * bv_virt - return virtual address of a bio_vec + * @bv: bio_vec + */ +static inline void *bv_virt(struct bio_vec *bv) +{ + return page_address(bv->bv_page) + bv->bv_offset; +} + struct bvec_iter { sector_t bi_sector; /* device address in 512 byte sectors */ @@ -280,4 +306,121 @@ static inline void *bvec_virt(struct bio_vec *bvec) return page_address(bvec->bv_page) + bvec->bv_offset; } +/* + * These macros should be used after a dma_map_bvecs call has been done + * to get bus addresses of each of the bio_vec array entries and their + * lengths. You should work only with the number of bio_vec array entries + * dma_map_bvecs returns, or alternatively stop on the first bv_dma_len(bv) + * which is 0. + */ +#define bv_dma_address(bv) ((bv)->bv_dma_address) + +#ifdef CONFIG_NEED_SG_DMA_LENGTH +#define bv_dma_len(bv) ((bv)->bv_dma_length) +#else +#define bv_dma_len(bv) ((bv)->bv_len) +#endif + +/* + * On 64-bit architectures there is a 4-byte padding in struct scatterlist + * (assuming also CONFIG_NEED_SG_DMA_LENGTH is set). Use this padding for DMA + * flags bits to indicate when a specific dma address is a bus address or the + * buffer may have been bounced via SWIOTLB. + */ +#ifdef CONFIG_NEED_SG_DMA_FLAGS + +#define BV_DMA_BUS_ADDRESS BIT(0) +#define BV_DMA_SWIOTLB BIT(1) + +/** + * bv_dma_is_bus_address - Return whether a given segment was marked + * as a bus address + * @bv: bio_vec array entry + * + * Description: + * Returns true if bv_dma_mark_bus_address() has been called on + * this bio_vec. + **/ +static inline bool bv_dma_is_bus_address(struct bio_vec *bv) +{ + return bv->bv_dma_flags & BV_DMA_BUS_ADDRESS; +} + +/** + * bv_dma_mark_bus_address - Mark the bio_vec entry as a bus address + * @bv: bio_vec array entry + * + * Description: + * Marks the passed-in bv entry to indicate that the dma_address is + * a bus address and doesn't need to be unmapped. This should only be + * used by dma_map_bvecs() implementations to mark bus addresses + * so they can be properly cleaned up in dma_unmap_bvecs(). + **/ +static inline void bv_dma_mark_bus_address(struct bio_vec *bv) +{ + bv->bv_dma_flags |= BV_DMA_BUS_ADDRESS; +} + +/** + * bv_unmark_bus_address - Unmark the bio_vec entry as a bus address + * @bv: bio_vec array entry + * + * Description: + * Clears the bus address mark. 
+ **/ +static inline void bv_dma_unmark_bus_address(struct bio_vec *bv) +{ + bv->bv_dma_flags &= ~BV_DMA_BUS_ADDRESS; +} + +/** + * bv_dma_is_swiotlb - Return whether the bio_vec was marked for SWIOTLB + * bouncing + * @bv: bio_vec array entry + * + * Description: + * Returns true if the bio_vec was marked for SWIOTLB bouncing. Not all + * elements may have been bounced, so the caller would have to check + * individual BV entries with is_swiotlb_buffer(). + */ +static inline bool bv_dma_is_swiotlb(struct bio_vec *bv) +{ + return bv->bv_dma_flags & BV_DMA_SWIOTLB; +} + +/** + * bv_dma_mark_swiotlb - Mark the bio_vec for SWIOTLB bouncing + * @bv: bio_vec array entry + * + * Description: + * Marks a a bio_vec for SWIOTLB bounce. Not all bio_vec entries may + * be bounced. + */ +static inline void bv_dma_mark_swiotlb(struct bio_vec *bv) +{ + bv->bv_dma_flags |= BV_DMA_SWIOTLB; +} + +#else + +static inline bool bv_dma_is_bus_address(struct bio_vec *bv) +{ + return false; +} +static inline void bv_dma_mark_bus_address(struct bio_vec *bv) +{ +} +static inline void bv_dma_unmark_bus_address(struct bio_vec *bv) +{ +} +static inline bool bv_dma_is_swiotlb(struct bio_vec *bv) +{ + return false; +} +static inline void bv_dma_mark_swiotlb(struct bio_vec *bv) +{ +} + +#endif /* CONFIG_NEED_SG_DMA_FLAGS */ + #endif /* __LINUX_BVEC_H */ From patchwork Thu Oct 19 15:25:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chuck Lever X-Patchwork-Id: 13429371 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A8D4FCDB465 for ; Thu, 19 Oct 2023 15:25:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345812AbjJSPZ4 (ORCPT ); Thu, 19 Oct 2023 11:25:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46128 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345747AbjJSPZz (ORCPT ); Thu, 19 Oct 2023 11:25:55 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CF9F412D for ; Thu, 19 Oct 2023 08:25:53 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 395E2C433C7; Thu, 19 Oct 2023 15:25:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697729153; bh=uRB/mEHfqFq0/byWOeHcx/c9OoEQMOSFmfHR/6Welrw=; h=Subject:From:Cc:Date:In-Reply-To:References:From; b=pBszrRJEaabi/HAj3x4dLEgnsXeaQstPQgO5SWx8hri66P7BPSMLEddMGoMgwk8Rb vYsJsz/31e9umLykgtY6i2FT5B5l6uBrEEbOkDzb76vzzuzuxkvm54OGulXNog38fA A52O3hiGE8pcK2UsjjuxnjwTBv965Nf8MzB2m7AJKIi2TqzBIQPWKYdId8ojwp8BYG dyQuwOaj5EcLrwyukTvpe9a+NQ4S0d5M25wZitveCajb8Qe0DIS0j0QkTzXq+TMg00 cNI2S46YGgysF5R/Z5QBFj/snPaHptT5YHKd7siTEcd3+aB0vuyct3jLirrfEZKqRZ DHl3s80M+58VA== Subject: [PATCH RFC 3/9] dma-debug: Add dma_debug_ helpers for mapping bio_vec arrays From: Chuck Lever Cc: iommu@lists.linux.dev, linux-rdma@vger.kernel.org, Chuck Lever Date: Thu, 19 Oct 2023 11:25:52 -0400 Message-ID: <169772915215.5232.10127407258544978465.stgit@klimt.1015granger.net> In-Reply-To: <169772852492.5232.17148564580779995849.stgit@klimt.1015granger.net> References: <169772852492.5232.17148564580779995849.stgit@klimt.1015granger.net> User-Agent: StGit/1.5 MIME-Version: 1.0 To: unlisted-recipients:; (no To-header on input) Precedence: bulk 
List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Chuck Lever Cc: iommu@lists.linux.dev Cc: linux-rdma@vger.kernel.org Signed-off-by: Chuck Lever --- include/linux/dma-mapping.h | 1 kernel/dma/debug.c | 163 +++++++++++++++++++++++++++++++++++++++++++ kernel/dma/debug.h | 38 ++++++++++ 3 files changed, 202 insertions(+) diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index f0ccca16a0ac..f511ec546f4d 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -9,6 +9,7 @@ #include #include #include +#include #include #include diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c index 3de494375b7b..efb4a2eaf9a0 100644 --- a/kernel/dma/debug.c +++ b/kernel/dma/debug.c @@ -39,6 +39,7 @@ enum { dma_debug_sg, dma_debug_coherent, dma_debug_resource, + dma_debug_bv, }; enum map_err_types { @@ -142,6 +143,7 @@ static const char *type2name[] = { [dma_debug_sg] = "scatter-gather", [dma_debug_coherent] = "coherent", [dma_debug_resource] = "resource", + [dma_debug_bv] = "bio-vec", }; static const char *dir2name[] = { @@ -1189,6 +1191,32 @@ static void check_sg_segment(struct device *dev, struct scatterlist *sg) #endif } +static void check_bv_segment(struct device *dev, struct bio_vec *bv) +{ +#ifdef CONFIG_DMA_API_DEBUG_SG + unsigned int max_seg = dma_get_max_seg_size(dev); + u64 start, end, boundary = dma_get_seg_boundary(dev); + + /* + * Either the driver forgot to set dma_parms appropriately, or + * whoever generated the list forgot to check them. + */ + if (bv->length > max_seg) + err_printk(dev, NULL, "mapping bv entry longer than device claims to support [len=%u] [max=%u]\n", + bv->length, max_seg); + /* + * In some cases this could potentially be the DMA API + * implementation's fault, but it would usually imply that + * the scatterlist was built inappropriately to begin with. 
+ */ + start = bv_dma_address(bv); + end = start + bv_dma_len(bv) - 1; + if ((start ^ end) & ~boundary) + err_printk(dev, NULL, "mapping bv entry across boundary [start=0x%016llx] [end=0x%016llx] [boundary=0x%016llx]\n", + start, end, boundary); +#endif +} + void debug_dma_map_single(struct device *dev, const void *addr, unsigned long len) { @@ -1333,6 +1361,47 @@ void debug_dma_map_sg(struct device *dev, struct scatterlist *sg, } } +void debug_dma_map_bvecs(struct device *dev, struct bio_vec *bvecs, + int nents, int mapped_ents, int direction, + unsigned long attrs) +{ + struct dma_debug_entry *entry; + struct bio_vec *bv; + int i; + + if (unlikely(dma_debug_disabled())) + return; + + for (i = 0; i < nents; i++) { + bv = &bvecs[i]; + check_for_stack(dev, bv_page(bv), bv->offset); + if (!PageHighMem(bv_page(bv))) + check_for_illegal_area(dev, bv_virt(bv), bv->length); + } + + for (i = 0; i < nents; i++) { + bv = &bvecs[i]; + + entry = dma_entry_alloc(); + if (!entry) + return; + + entry->type = dma_debug_bv; + entry->dev = dev; + entry->pfn = page_to_pfn(bv_page(bv)); + entry->offset = bv->offset; + entry->size = bv_dma_len(bv); + entry->dev_addr = bv_dma_address(bv); + entry->direction = direction; + entry->sg_call_ents = nents; + entry->sg_mapped_ents = mapped_ents; + + check_bv_segment(dev, bv); + + add_dma_entry(entry, attrs); + } +} + static int get_nr_mapped_entries(struct device *dev, struct dma_debug_entry *ref) { @@ -1384,6 +1453,37 @@ void debug_dma_unmap_sg(struct device *dev, struct scatterlist *sglist, } } +void debug_dma_unmap_bvecs(struct device *dev, struct bio_vec *bvecs, + int nelems, int dir) +{ + int mapped_ents = 0, i; + + if (unlikely(dma_debug_disabled())) + return; + + for (i = 0; i < nents; i++) { + struct bio_vec *bv = &bvecs[i]; + struct dma_debug_entry ref = { + .type = dma_debug_bv, + .dev = dev, + .pfn = page_to_pfn(bv_page(bv)), + .offset = bv->offset, + .dev_addr = bv_dma_address(bv), + .size = bv_dma_len(bv), + .direction = dir, + .sg_call_ents = nelems, + }; + + if (mapped_ents && i >= mapped_ents) + break; + + if (!i) + mapped_ents = get_nr_mapped_entries(dev, &ref); + + check_unmap(&ref); + } +} + void debug_dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t dma_addr, void *virt, unsigned long attrs) @@ -1588,6 +1688,69 @@ void debug_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, } } +void debug_dma_sync_bvecs_for_cpu(struct device *dev, struct bio_vec *bvecs, + int nelems, int direction) +{ + int mapped_ents = 0, i; + struct bio_vec *bv; + + if (unlikely(dma_debug_disabled())) + return; + + for (i = 0; i < nents; i++) { + struct bio_vec *bv = &bvecs[i]; + struct dma_debug_entry ref = { + .type = dma_debug_bv, + .dev = dev, + .pfn = page_to_pfn(bv->bv_page), + .offset = bv->bv_offset, + .dev_addr = bv_dma_address(bv), + .size = bv_dma_len(bv), + .direction = direction, + .sg_call_ents = nelems, + }; + + if (!i) + mapped_ents = get_nr_mapped_entries(dev, &ref); + + if (i >= mapped_ents) + break; + + check_sync(dev, &ref, true); + } +} + +void debug_dma_sync_bvecs_for_device(struct device *dev, struct bio_vec *bvecs, + int nelems, int direction) +{ + int mapped_ents = 0, i; + struct bio_vec *bv; + + if (unlikely(dma_debug_disabled())) + return; + + for (i = 0; i < nents; i++) { + struct bio_vec *bv = &bvecs[i]; + struct dma_debug_entry ref = { + .type = dma_debug_bv, + .dev = dev, + .pfn = page_to_pfn(bv->bv_page), + .offset = bv->bv_offset, + .dev_addr = bv_dma_address(bv), + .size = bv_dma_len(bv), + .direction = 
direction, + .sg_call_ents = nelems, + }; + if (!i) + mapped_ents = get_nr_mapped_entries(dev, &ref); + + if (i >= mapped_ents) + break; + + check_sync(dev, &ref, false); + } +} + static int __init dma_debug_driver_setup(char *str) { int i; diff --git a/kernel/dma/debug.h b/kernel/dma/debug.h index f525197d3cae..dff7e8a2f594 100644 --- a/kernel/dma/debug.h +++ b/kernel/dma/debug.h @@ -24,6 +24,13 @@ extern void debug_dma_map_sg(struct device *dev, struct scatterlist *sg, extern void debug_dma_unmap_sg(struct device *dev, struct scatterlist *sglist, int nelems, int dir); +extern void debug_dma_map_bvecs(struct device *dev, struct bio_vec *bvecs, + int nents, int mapped_ents, int direction, + unsigned long attrs); + +extern void debug_dma_unmap_bvecs(struct device *dev, struct bio_vec *bvecs, + int nelems, int dir); + extern void debug_dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t dma_addr, void *virt, unsigned long attrs); @@ -54,6 +61,14 @@ extern void debug_dma_sync_sg_for_cpu(struct device *dev, extern void debug_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems, int direction); + +extern void debug_dma_sync_bvecs_for_cpu(struct device *dev, + struct bio_vec *bvecs, + int nelems, int direction); + +extern void debug_dma_sync_bvecs_for_device(struct device *dev, + struct bio_vec *bvecs, + int nelems, int direction); #else /* CONFIG_DMA_API_DEBUG */ static inline void debug_dma_map_page(struct device *dev, struct page *page, size_t offset, size_t size, @@ -79,6 +94,17 @@ static inline void debug_dma_unmap_sg(struct device *dev, { } +static inline void debug_dma_map_bvecs(struct device *dev, struct bio_vec *bvecs, + int nents, int mapped_ents, int direction, + unsigned long attrs) +{ +} + +static inline void debug_dma_unmap_bvecs(struct device *dev, struct bio_vec *bvecs, + int nelems, int dir) +{ +} + static inline void debug_dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t dma_addr, void *virt, unsigned long attrs) @@ -126,5 +152,17 @@ static inline void debug_dma_sync_sg_for_device(struct device *dev, int nelems, int direction) { } + +static inline void debug_dma_sync_bvecs_for_cpu(struct device *dev, + struct bio_vec *bvecs, + int nelems, int direction) +{ +} + +static inline void debug_dma_sync_bvecs_for_device(struct device *dev, + struct bio_vec *bvecs, + int nelems, int direction) +{ +} #endif /* CONFIG_DMA_API_DEBUG */ #endif /* _KERNEL_DMA_DEBUG_H */ From patchwork Thu Oct 19 15:25:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chuck Lever X-Patchwork-Id: 13429372 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A066FCDB465 for ; Thu, 19 Oct 2023 15:26:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345747AbjJSP0D (ORCPT ); Thu, 19 Oct 2023 11:26:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60504 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345938AbjJSP0C (ORCPT ); Thu, 19 Oct 2023 11:26:02 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 993D112A for ; Thu, 19 Oct 2023 08:26:00 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id BF50DC433C9; Thu, 19 Oct 2023 15:25:59 
+0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697729160; bh=P4b6wsJEKT/3IXcqFp8KjGqq8Imdm8sdVfTi6RWsVXU=; h=Subject:From:Cc:Date:In-Reply-To:References:From; b=ocXti95o34TJcUGQJjL1VTemEZSTo7XFgSL9x327zb9g4/KyKjKuQOJJN4iN+Vlw2 +93yWadJ4LZ+Ew2TmR5EjM5PLLWjPsBAzXuxNTn+B0xKRkNPM2f4VYMFAqy/IW9Vq7 0UkS+Et/KQ1C76yEgXV3hkMgM6Ko3kSzw5uyNfwt4AUGf+A6rRF2UOQFfl05a2d1It IbHjj7se3QfzvuzXk5QhkSbwRSRL6M38+qrn63gt4BT71WUDx+RofV2BD/iXCCPX+5 itWEmX8gSsYXDHnHzARYHMGi6XuQOFTP4yixh5fOkdtYTZ9vd9N+yk2aXXmxrZSGqa sgDSxnS0Lev2Q== Subject: [PATCH RFC 4/9] mm: kmsan: Add support for DMA mapping bio_vec arrays From: Chuck Lever Cc: Alexander Potapenko , kasan-dev@googlegroups.com, linux-mm@kvack.org, iommu@lists.linux.dev, linux-rdma@vger.kernel.org, Chuck Lever Date: Thu, 19 Oct 2023 11:25:58 -0400 Message-ID: <169772915869.5232.9306605321315591579.stgit@klimt.1015granger.net> In-Reply-To: <169772852492.5232.17148564580779995849.stgit@klimt.1015granger.net> References: <169772852492.5232.17148564580779995849.stgit@klimt.1015granger.net> User-Agent: StGit/1.5 MIME-Version: 1.0 To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Chuck Lever Cc: Alexander Potapenko Cc: kasan-dev@googlegroups.com Cc: linux-mm@kvack.org Cc: iommu@lists.linux.dev Cc: linux-rdma@vger.kernel.org Signed-off-by: Chuck Lever --- include/linux/kmsan.h | 20 ++++++++++++++++++++ mm/kmsan/hooks.c | 13 +++++++++++++ 2 files changed, 33 insertions(+) diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index e0c23a32cdf0..36c581a18b30 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -18,6 +18,7 @@ struct page; struct kmem_cache; struct task_struct; struct scatterlist; +struct bio_vec; struct urb; #ifdef CONFIG_KMSAN @@ -209,6 +210,20 @@ void kmsan_handle_dma(struct page *page, size_t offset, size_t size, void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, enum dma_data_direction dir); +/** + * kmsan_handle_dma_bvecs() - Handle a DMA transfer using bio_vec array. + * @bvecs: bio_vec array holding DMA buffers. + * @nents: number of scatterlist entries. + * @dir: one of possible dma_data_direction values. + * + * Depending on @direction, KMSAN: + * * checks the buffers in the bio_vec array, if they are copied to device; + * * initializes the buffers, if they are copied from device; + * * does both, if this is a DMA_BIDIRECTIONAL transfer. + */ +void kmsan_handle_dma_bvecs(struct bio_vec *bv, int nents, + enum dma_data_direction dir); + /** * kmsan_handle_urb() - Handle a USB data transfer. * @urb: struct urb pointer. @@ -321,6 +336,11 @@ static inline void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, { } +static inline void kmsan_handle_dma_bvecs(struct bio_vec *bv, int nents, + enum dma_data_direction dir) +{ +} + static inline void kmsan_handle_urb(const struct urb *urb, bool is_out) { } diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 5d6e2dee5692..87846011c9bd 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -358,6 +358,19 @@ void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, dir); } +void kmsan_handle_dma_bvecs(struct bio_vec *bvecs, int nents, + enum dma_data_direction dir) +{ + struct bio_vec *item; + int i; + + for (i = 0; i < nents; i++) { + item = &bvecs[i]; + kmsan_handle_dma(bv_page(item), item->bv_offset, item->bv_len, + dir); + } +} + /* Functions from kmsan-checks.h follow. 
*/ void kmsan_poison_memory(const void *address, size_t size, gfp_t flags) { From patchwork Thu Oct 19 15:26:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chuck Lever X-Patchwork-Id: 13429373 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B952ECDB465 for ; Thu, 19 Oct 2023 15:26:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345977AbjJSP0J (ORCPT ); Thu, 19 Oct 2023 11:26:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38236 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346191AbjJSP0I (ORCPT ); Thu, 19 Oct 2023 11:26:08 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1338612A for ; Thu, 19 Oct 2023 08:26:07 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 71121C433C9; Thu, 19 Oct 2023 15:26:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697729166; bh=yQhqxZwOMDPJhW7Xwgk5yIT4GMSsV4nf1//lo7TNLyc=; h=Subject:From:Cc:Date:In-Reply-To:References:From; b=cThuabzcIkYl5Bb/AuPXihl2As71NbsV8Y+T8/Gjhqg7BNgYJq0gqZnAOpEfcKTbM mdF0zvj/3fr0s7HHqevttUTIXT0Aqs5GRUq57aNQ7Savc7LCtuNCbIuNcqJtdpJHQl QrliO9wy1A9STmaFNVO33BCbgLnS1ZIpDFZdgj9U6jTVL4jJPTljBLIVyKmxCJOFWB qi9AbUoN4GtRS/UlIeXxiAmUCyjUP3WGtMbLJwhqmSQJT7WzthVhC5G71Us8wklNdR K/vSa390seTwSDQBBoH2grw5rQtIV2fVi6cg9+teWglZcSAKqmvXZ7WgqBREECCaIW 4xj8/3VAPP/9A== Subject: [PATCH RFC 5/9] dma-direct: Support direct mapping bio_vec arrays From: Chuck Lever Cc: iommu@lists.linux.dev, linux-rdma@vger.kernel.org, Chuck Lever Date: Thu, 19 Oct 2023 11:26:05 -0400 Message-ID: <169772916546.5232.14817964507475231582.stgit@klimt.1015granger.net> In-Reply-To: <169772852492.5232.17148564580779995849.stgit@klimt.1015granger.net> References: <169772852492.5232.17148564580779995849.stgit@klimt.1015granger.net> User-Agent: StGit/1.5 MIME-Version: 1.0 To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Chuck Lever Cc: iommu@lists.linux.dev Cc: linux-rdma@vger.kernel.org Signed-off-by: Chuck Lever --- kernel/dma/direct.c | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++ kernel/dma/direct.h | 17 +++++++++ 2 files changed, 109 insertions(+) diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index 9596ae1aa0da..7587c5c3d051 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -423,6 +423,26 @@ void dma_direct_sync_sg_for_device(struct device *dev, dir); } } + +void dma_direct_sync_bvecs_for_device(struct device *dev, + struct bio_vec *bvecs, int nents, enum dma_data_direction dir) +{ + struct bio_vec *bv; + int i; + + for (i = 0; i < nents; i++) { + bv = &bvecs[i]; + phys_addr_t paddr = dma_to_phys(dev, bv_dma_address(bv)); + + if (unlikely(is_swiotlb_buffer(dev, paddr))) + swiotlb_sync_single_for_device(dev, paddr, bv->bv_len, + dir); + + if (!dev_is_dma_coherent(dev)) + arch_sync_dma_for_device(paddr, bv->bv_len, + dir); + } +} #endif #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU) || \ @@ -516,6 +536,78 @@ int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents, return ret; } +void dma_direct_sync_bvecs_for_cpu(struct device *dev, + struct bio_vec *bvecs, int 
nents, enum dma_data_direction dir) +{ + struct bio_vec *bv; + int i; + + for (i = 0; i < nents; i++) { + phys_addr_t paddr; + + bv = &bvecs[i]; + paddr = dma_to_phys(dev, bv_dma_address(bv)); + + if (!dev_is_dma_coherent(dev)) + arch_sync_dma_for_cpu(paddr, bv->bv_len, dir); + + if (unlikely(is_swiotlb_buffer(dev, paddr))) + swiotlb_sync_single_for_cpu(dev, paddr, bv->bv_len, + dir); + + if (dir == DMA_FROM_DEVICE) + arch_dma_mark_clean(paddr, bv->bv_len); + } + + if (!dev_is_dma_coherent(dev)) + arch_sync_dma_for_cpu_all(); +} + +/* + * Unmaps segments, except for ones marked as pci_p2pdma which do not + * require any further action as they contain a bus address. + */ +void dma_direct_unmap_bvecs(struct device *dev, struct bio_vec *bvecs, + int nents, enum dma_data_direction dir, + unsigned long attrs) +{ + struct bio_vec *bv; + int i; + + for (i = 0; i < nents; i++) { + bv = &bvecs[i]; + if (bv_dma_is_bus_address(bv)) + bv_dma_unmark_bus_address(bv); + else + dma_direct_unmap_page(dev, bv_dma_address(bv), + bv_dma_len(bv), dir, attrs); + } + +} + +int dma_direct_map_bvecs(struct device *dev, struct bio_vec *bvecs, int nents, + enum dma_data_direction dir, unsigned long attrs) +{ + struct bio_vec *bv; + int i; + + /* p2p DMA mapping support can be added later */ + for (i = 0; i < nents; i++) { + bv = &bvecs[i]; + bv->bv_dma_address = dma_direct_map_page(dev, bv->bv_page, + bv->bv_offset, bv->bv_len, dir, attrs); + if (bv->bv_dma_address == DMA_MAPPING_ERROR) + goto out_unmap; + bv_dma_len(bv) = bv->bv_len; + } + + return nents; + +out_unmap: + dma_direct_unmap_bvecs(dev, bvecs, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC); + return -EIO; +} + dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr, size_t size, enum dma_data_direction dir, unsigned long attrs) { diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h index 97ec892ea0b5..6db1ccd04d21 100644 --- a/kernel/dma/direct.h +++ b/kernel/dma/direct.h @@ -20,17 +20,26 @@ int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma, bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr); int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents, enum dma_data_direction dir, unsigned long attrs); +int dma_direct_map_bvecs(struct device *dev, struct bio_vec *bvecs, int nents, + enum dma_data_direction dir, unsigned long attrs); size_t dma_direct_max_mapping_size(struct device *dev); #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \ defined(CONFIG_SWIOTLB) void dma_direct_sync_sg_for_device(struct device *dev, struct scatterlist *sgl, int nents, enum dma_data_direction dir); +void dma_direct_sync_bvecs_for_device(struct device *dev, struct bio_vec *bvecs, + int nents, enum dma_data_direction dir); #else static inline void dma_direct_sync_sg_for_device(struct device *dev, struct scatterlist *sgl, int nents, enum dma_data_direction dir) { } + +static inline void dma_direct_sync_bvecs_for_device(struct device *dev, + struct bio_vec *bvecs, int nents, enum dma_data_direction dir) +{ +} #endif #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU) || \ @@ -40,6 +49,10 @@ void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl, int nents, enum dma_data_direction dir, unsigned long attrs); void dma_direct_sync_sg_for_cpu(struct device *dev, struct scatterlist *sgl, int nents, enum dma_data_direction dir); +void dma_direct_unmap_bvecs(struct device *dev, struct bio_vec *bvecs, + int nents, enum dma_data_direction dir, unsigned long attrs); +void dma_direct_sync_bvecs_for_cpu(struct device 
*dev, + struct bio_vec *bvecs, int nents, enum dma_data_direction dir); #else static inline void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl, int nents, enum dma_data_direction dir, @@ -50,6 +63,10 @@ static inline void dma_direct_sync_sg_for_cpu(struct device *dev, struct scatterlist *sgl, int nents, enum dma_data_direction dir) { } +static inline void dma_direct_sync_bvecs_for_cpu(struct device *dev, + struct bio_vec *bvecs, int nents, enum dma_data_direction dir) +{ +} #endif static inline void dma_direct_sync_single_for_device(struct device *dev, From patchwork Thu Oct 19 15:26:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chuck Lever X-Patchwork-Id: 13429374 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BB953CDB465 for ; Thu, 19 Oct 2023 15:26:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346115AbjJSP0P (ORCPT ); Thu, 19 Oct 2023 11:26:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38368 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345938AbjJSP0P (ORCPT ); Thu, 19 Oct 2023 11:26:15 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7892112D for ; Thu, 19 Oct 2023 08:26:13 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D5F48C433C7; Thu, 19 Oct 2023 15:26:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697729173; bh=8lPy8rsZStYEVHbRVMx8WfUHPcgCULF0xCyPhT/GQIk=; h=Subject:From:Cc:Date:In-Reply-To:References:From; b=Xo0q5LOFCDxiiaJv+oB38nSmpPeW/PWuF0FIIepzw9Cqjt/OEzeNf7x5rynFqx5Dn 8E6a60NqfmE/l1t90IJKPaM5ZUDVwkGL6N7BjBWmJHUPaCsZdkUGWgh5aWdhKiDtUz Rae1yqsuWb3KKTARu3X7xasIQQBfbMmLi4x5XXsiLzJjOS2kLcZ/jJ/XZcZqEgRdvU RyAafc1BvFVkIm3jX2R7xiBb33gzWAV/IMnqZhsgyxdJY5gN+cr6mj+9qYcMNtCYi9 /cAIHkZ+Ox4TNaZWqhZ+w7rkngOU8qc7wPiBV0VlzPxbMsSvRDfIZBG5TMPYpbFskw HbcFh78x2iJkA== Subject: [PATCH RFC 6/9] DMA-API: Add dma_sync_bvecs_for_cpu() and dma_sync_bvecs_for_device() From: Chuck Lever Cc: iommu@lists.linux.dev, linux-rdma@vger.kernel.org, Chuck Lever Date: Thu, 19 Oct 2023 11:26:11 -0400 Message-ID: <169772917192.5232.2827727564287466466.stgit@klimt.1015granger.net> In-Reply-To: <169772852492.5232.17148564580779995849.stgit@klimt.1015granger.net> References: <169772852492.5232.17148564580779995849.stgit@klimt.1015granger.net> User-Agent: StGit/1.5 MIME-Version: 1.0 To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Chuck Lever Cc: iommu@lists.linux.dev Cc: linux-rdma@vger.kernel.org Signed-off-by: Chuck Lever --- include/linux/dma-map-ops.h | 4 ++++ include/linux/dma-mapping.h | 4 ++++ kernel/dma/mapping.c | 28 ++++++++++++++++++++++++++++ 3 files changed, 36 insertions(+) diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index f2fc203fb8a1..de2a50d9207a 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -75,6 +75,10 @@ struct dma_map_ops { int nents, enum dma_data_direction dir); void (*sync_sg_for_device)(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction dir); + void (*sync_bvecs_for_cpu)(struct device *dev, struct bio_vec 
*bvecs, + int nents, enum dma_data_direction dir); + void (*sync_bvecs_for_device)(struct device *dev, struct bio_vec *bvecs, + int nents, enum dma_data_direction dir); void (*cache_sync)(struct device *dev, void *vaddr, size_t size, enum dma_data_direction direction); int (*dma_supported)(struct device *dev, u64 mask); diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index f511ec546f4d..9fb422f376b6 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -126,6 +126,10 @@ void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems, enum dma_data_direction dir); void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems, enum dma_data_direction dir); +void dma_sync_bvecs_for_cpu(struct device *dev, struct bio_vec *bvecs, + int nelems, enum dma_data_direction dir); +void dma_sync_bvecs_for_device(struct device *dev, struct bio_vec *bvecs, + int nelems, enum dma_data_direction dir); void *dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flag, unsigned long attrs); void dma_free_attrs(struct device *dev, size_t size, void *cpu_addr, diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index e323ca48f7f2..94cffc9b45a5 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -385,6 +385,34 @@ void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, } EXPORT_SYMBOL(dma_sync_sg_for_device); +void dma_sync_bvecs_for_cpu(struct device *dev, struct bio_vec *bvecs, + int nelems, enum dma_data_direction dir) +{ + const struct dma_map_ops *ops = get_dma_ops(dev); + + BUG_ON(!valid_dma_direction(dir)); + if (dma_map_direct(dev, ops)) + dma_direct_sync_bvecs_for_cpu(dev, bvecs, nelems, dir); + else if (ops->sync_bvecs_for_cpu) + ops->sync_bvecs_for_cpu(dev, bvecs, nelems, dir); + debug_dma_sync_bvecs_for_cpu(dev, bvecs, nelems, dir); +} +EXPORT_SYMBOL(dma_sync_bvecs_for_cpu); + +void dma_sync_bvecs_for_device(struct device *dev, struct bio_vec *bvecs, + int nelems, enum dma_data_direction dir) +{ + const struct dma_map_ops *ops = get_dma_ops(dev); + + BUG_ON(!valid_dma_direction(dir)); + if (dma_map_direct(dev, ops)) + dma_direct_sync_bvecs_for_device(dev, bvecs, nelems, dir); + else if (ops->sync_bvecs_for_device) + ops->sync_bvecs_for_device(dev, bvecs, nelems, dir); + debug_dma_sync_bvecs_for_device(dev, bvecs, nelems, dir); +} +EXPORT_SYMBOL(dma_sync_bvecs_for_device); + /* * The whole dma_get_sgtable() idea is fundamentally unsafe - it seems * that the intention is to allow exporting memory allocated via the From patchwork Thu Oct 19 15:26:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chuck Lever X-Patchwork-Id: 13429375 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D01C3CDB465 for ; Thu, 19 Oct 2023 15:26:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346189AbjJSP0W (ORCPT ); Thu, 19 Oct 2023 11:26:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40272 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345938AbjJSP0V (ORCPT ); Thu, 19 Oct 2023 11:26:21 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D9EB8121 for ; Thu, 19 
Oct 2023 08:26:19 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 43CF9C433C8; Thu, 19 Oct 2023 15:26:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697729179; bh=vrWYcWHb8JepDA5UyhONANsPwTqHXa7W1d6oKXTubOY=; h=Subject:From:Cc:Date:In-Reply-To:References:From; b=QNbUWcaeUcIQohwIKnubZ84iKB50FmE8IcYi+P/RZtk2Vl8HjcbVh+j2A1RN1vlGu NGKqi+p8Is/6wVh4TAyYQTekxIP5urv6k6gcZc06Q42o7RO0pH57A0w307oi60lK/9 AH3r9+2rRDYHYa15qnnu1FwTJN1y2xPSd8TxGueFDYUhJvedOg4qCBwcK4gRt6ejdg HdqSdkkkbIV6hQ6MUApeE3cd70sGjb0ia5WOrUqTpZNUneqllfnHNjySjTYQWxXOxI 1oYVCdP3Q/ZM/5aSokK6uhB9QHll/7MBxxrZXTg6Mr0mTyI3OEHKOhFV7KMfoNVkUN IpbpQeNn9xIQQ== Subject: [PATCH RFC 7/9] DMA: Add dma_map_bvecs_attrs() From: Chuck Lever Cc: iommu@lists.linux.dev, linux-rdma@vger.kernel.org, Chuck Lever Date: Thu, 19 Oct 2023 11:26:18 -0400 Message-ID: <169772917833.5232.13488378553385610086.stgit@klimt.1015granger.net> In-Reply-To: <169772852492.5232.17148564580779995849.stgit@klimt.1015granger.net> References: <169772852492.5232.17148564580779995849.stgit@klimt.1015granger.net> User-Agent: StGit/1.5 MIME-Version: 1.0 To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Chuck Lever Cc: iommu@lists.linux.dev Cc: linux-rdma@vger.kernel.org Signed-off-by: Chuck Lever --- include/linux/dma-map-ops.h | 4 +++ include/linux/dma-mapping.h | 4 +++ kernel/dma/mapping.c | 65 +++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 73 insertions(+) diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index de2a50d9207a..69ecfd403249 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -60,6 +60,10 @@ struct dma_map_ops { enum dma_data_direction dir, unsigned long attrs); void (*unmap_sg)(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction dir, unsigned long attrs); + int (*map_bvecs)(struct device *dev, struct bio_vec *bvecs, int nents, + enum dma_data_direction dir, unsigned long attrs); + void (*unmap_bvecs)(struct device *dev, struct bio_vec *bvecs, int nents, + enum dma_data_direction dir, unsigned long attrs); dma_addr_t (*map_resource)(struct device *dev, phys_addr_t phys_addr, size_t size, enum dma_data_direction dir, unsigned long attrs); diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 9fb422f376b6..6f522a82cfe3 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -114,6 +114,10 @@ void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg, unsigned long attrs); int dma_map_sgtable(struct device *dev, struct sg_table *sgt, enum dma_data_direction dir, unsigned long attrs); +unsigned int dma_map_bvecs_attrs(struct device *dev, struct bio_vec *bvecs, + int nents, enum dma_data_direction dir, unsigned long attrs); +void dma_unmap_bvecs_attrs(struct device *dev, struct bio_vec *bvecs, + int nents, enum dma_data_direction dir, unsigned long attrs); dma_addr_t dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size, enum dma_data_direction dir, unsigned long attrs); void dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size, diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 94cffc9b45a5..f53cc4da2797 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -296,6 +296,71 @@ void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg, } EXPORT_SYMBOL(dma_unmap_sg_attrs); +/** + * dma_map_sg_attrs - Map the given buffer for DMA + * 
@dev: The device for which to perform the DMA operation + * @bvecs: The bio_vec array describing the buffer + * @nents: Number of bio_vecs to map + * @dir: DMA direction + * @attrs: Optional DMA attributes for the map operation + * + * Maps a buffer described by a bio_vec array passed in the bvecs + * argument with nents segments for the @dir DMA operation by the + * @dev device. + * + * Returns the number of mapped entries (which can be less than nents) + * on success. Zero is returned for any error. + * + * dma_unmap_bvecs_attrs() should be used to unmap the buffer with the + * original bvecs and original nents (not the value returned by this + * function). + */ +unsigned int dma_map_bvecs_attrs(struct device *dev, struct bio_vec *bvecs, + int nents, enum dma_data_direction dir, + unsigned long attrs) +{ + const struct dma_map_ops *ops = get_dma_ops(dev); + int ents; + + BUG_ON(!valid_dma_direction(dir)); + + if (WARN_ON_ONCE(!dev->dma_mask)) + return 0; + + if (dma_map_direct(dev, ops)) + ents = dma_direct_map_bvecs(dev, bvecs, nents, dir, attrs); + else + ents = ops->map_bvecs(dev, bvecs, nents, dir, attrs); + + if (ents > 0) { + kmsan_handle_dma_bvecs(bvecs, nents, dir); + debug_dma_map_bvecs(dev, bvecs, nents, ents, dir, attrs); + } else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM && + ents != -EIO && ents != -EREMOTEIO)) { + return -EIO; + } + + return ents; +} +EXPORT_SYMBOL_GPL(dma_map_bvecs_attrs); + +void dma_unmap_bvecs_attrs(struct device *dev, struct bio_vec *bvecs, + int nents, enum dma_data_direction dir, + unsigned long attrs) +{ + const struct dma_map_ops *ops = get_dma_ops(dev); + + BUG_ON(!valid_dma_direction(dir)); + + debug_dma_unmap_bvecs(dev, bvecs, nents, dir); + + if (dma_map_direct(dev, ops)) + dma_direct_unmap_bvecs(dev, bvecs, nents, dir, attrs); + else if (ops->unmap_bvecs) + ops->unmap_bvecs(dev, bvecs, nents, dir, attrs); +} +EXPORT_SYMBOL(dma_unmap_bvecs_attrs); + dma_addr_t dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { From patchwork Thu Oct 19 15:26:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chuck Lever X-Patchwork-Id: 13429376 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7118ECDB465 for ; Thu, 19 Oct 2023 15:26:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346191AbjJSP0a (ORCPT ); Thu, 19 Oct 2023 11:26:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57774 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345938AbjJSP03 (ORCPT ); Thu, 19 Oct 2023 11:26:29 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 687C1130 for ; Thu, 19 Oct 2023 08:26:26 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B8CBEC433C8; Thu, 19 Oct 2023 15:26:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697729186; bh=h84l5x3ycK6LsY4vt1Jwfqds4V6VmRnYajmPxsipX0g=; h=Subject:From:Cc:Date:In-Reply-To:References:From; b=lISaPF2Bfw7noke+jKyafEMdYxClUp41lhCIB4QjSn0xkrVAyLCnND9478zuibbFC AwP/jnlgxz88zfdILf8m0AqYc7jW1xaT8aUoGKOuZmn/Yc0IoNYBGcKD+jqHVFC0Yf 
zSi3MxP5+Zc8nb3o2SC3CRrsMNF0ci/l2Uw22rMAPG6yPekE+Qp+wAvoS3Vkm/y3+B nFOeA/L5vhZa3dEn9pqLgqXPkGqQ6+JJ0GR8dxyJUFkhZJHHMU0ZRr6iv8O6CK4vpz J1me8rMaM9gp7Y6zkFQ9ceowLSubnfB67X8I45rxPAb1YJQo69FYZhqYNZ0WGj7qPK 2kBMuXgoH4BIg== Subject: [PATCH RFC 8/9] iommu/dma: Support DMA-mapping a bio_vec array From: Chuck Lever Cc: iommu@lists.linux.dev, linux-rdma@vger.kernel.org, Chuck Lever Date: Thu, 19 Oct 2023 11:26:24 -0400 Message-ID: <169772918473.5232.6022085226786774578.stgit@klimt.1015granger.net> In-Reply-To: <169772852492.5232.17148564580779995849.stgit@klimt.1015granger.net> References: <169772852492.5232.17148564580779995849.stgit@klimt.1015granger.net> User-Agent: StGit/1.5 MIME-Version: 1.0 To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Chuck Lever Cc: iommu@lists.linux.dev Cc: linux-rdma@vger.kernel.org Signed-off-by: Chuck Lever --- drivers/iommu/dma-iommu.c | 368 +++++++++++++++++++++++++++++++++++++++++++++ drivers/iommu/iommu.c | 58 +++++++ include/linux/iommu.h | 4 3 files changed, 430 insertions(+) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 4b1a88f514c9..5ed15eac9a4a 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -554,6 +554,34 @@ static bool dev_use_sg_swiotlb(struct device *dev, struct scatterlist *sg, return false; } +static bool dev_use_bvecs_swiotlb(struct device *dev, struct bio_vec *bvecs, + int nents, enum dma_data_direction dir) +{ + struct bio_vec *bv; + int i; + + if (!IS_ENABLED(CONFIG_SWIOTLB)) + return false; + + if (dev_is_untrusted(dev)) + return true; + + /* + * If kmalloc() buffers are not DMA-safe for this device and + * direction, check the individual lengths in the sg list. If any + * element is deemed unsafe, use the swiotlb for bouncing. 
+ */ + if (!dma_kmalloc_safe(dev, dir)) { + for (i = 0; i < nents; i++) { + bv = &bvecs[i]; + if (!dma_kmalloc_size_aligned(bv->bv_len)) + return true; + } + } + + return false; +} + /** * iommu_dma_init_domain - Initialise a DMA mapping domain * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie() @@ -1026,6 +1054,49 @@ static void iommu_dma_sync_sg_for_device(struct device *dev, arch_sync_dma_for_device(sg_phys(sg), sg->length, dir); } +static void iommu_dma_sync_bvecs_for_cpu(struct device *dev, + struct bio_vec *bvecs, int nelems, + enum dma_data_direction dir) +{ + struct bio_vec *bv; + int i; + + if (bv_dma_is_swiotlb(bvecs)) { + for (i = 0; i < nelems; i++) { + bv = &bvecs[i]; + iommu_dma_sync_single_for_cpu(dev, bv_dma_address(bv), + bv->bv_len, dir); + } + } else if (!dev_is_dma_coherent(dev)) { + for (i = 0; i < nelems; i++) { + bv = &bvecs[i]; + arch_sync_dma_for_cpu(bv_phys(bv), bv->bv_len, dir); + } + } +} + +static void iommu_dma_sync_bvecs_for_device(struct device *dev, + struct bio_vec *bvecs, int nelems, + enum dma_data_direction dir) +{ + struct bio_vec *bv; + int i; + + if (bv_dma_is_swiotlb(bvecs)) { + for (i = 0; i < nelems; i++) { + bv = &bvecs[i]; + iommu_dma_sync_single_for_device(dev, + bv_dma_address(bv), + bv->bv_len, dir); + } + } else if (!dev_is_dma_coherent(dev)) { + for (i = 0; i < nelems; i++) { + bv = &bvecs[i]; + arch_sync_dma_for_device(bv_phys(bv), bv->bv_len, dir); + } + } +} + static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page, unsigned long offset, size_t size, enum dma_data_direction dir, unsigned long attrs) @@ -1405,6 +1476,299 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg, __iommu_dma_unmap(dev, start, end - start); } +/* + * Prepare a successfully-mapped bio_vec array to give back to the caller. + * + * At this point the elements are already laid out by iommu_dma_map_bvecs() + * to avoid individually crossing any boundaries, so we merely need to check + * an element's start address to avoid concatenating across one. + */ +static int __finalise_bvecs(struct device *dev, struct bio_vec *bvecs, + int nents, dma_addr_t dma_addr) +{ + unsigned int cur_len = 0, max_len = dma_get_max_seg_size(dev); + unsigned long seg_mask = dma_get_seg_boundary(dev); + struct bio_vec *cur = bvecs; + int i, count = 0; + + for (i = 0; i < nents; i++) { + struct bio_vec *bv = &bvecs[i]; + + /* Restore this segment's original unaligned fields first */ + dma_addr_t s_dma_addr = bv_dma_address(bv); + unsigned int s_iova_off = bv_dma_address(bv); + unsigned int s_length = bv_dma_len(bv); + unsigned int s_iova_len = bv->bv_len; + + bv_dma_address(bv) = DMA_MAPPING_ERROR; + bv_dma_len(bv) = 0; + + if (bv_dma_is_bus_address(bv)) { + if (i > 0) + cur++; + + bv_dma_unmark_bus_address(bv); + bv_dma_address(cur) = s_dma_addr; + bv_dma_len(cur) = s_length; + bv_dma_mark_bus_address(cur); + count++; + cur_len = 0; + continue; + } + + bv->bv_offset += s_iova_off; + bv->bv_len = s_length; + + /* + * Now fill in the real DMA data. If... 
+ * - there is a valid output segment to append to + * - and this segment starts on an IOVA page boundary + * - but doesn't fall at a segment boundary + * - and wouldn't make the resulting output segment too long + */ + if (cur_len && !s_iova_off && (dma_addr & seg_mask) && + (max_len - cur_len >= s_length)) { + /* ...then concatenate it with the previous one */ + cur_len += s_length; + } else { + /* Otherwise start the next output segment */ + if (i > 0) + cur++; + cur_len = s_length; + count++; + + bv_dma_address(cur) = dma_addr + s_iova_off; + } + + bv_dma_len(cur) = cur_len; + dma_addr += s_iova_len; + + if (s_length + s_iova_off < s_iova_len) + cur_len = 0; + } + return count; +} + +/* + * If mapping failed, then just restore the original list, + * but making sure the DMA fields are invalidated. + */ +static void __invalidate_bvecs(struct bio_vec *bvecs, int nents) +{ + struct bio_vec *bv; + int i; + + for (i = 0; i < nents; i++) { + bv = &bvecs[i]; + if (bv_dma_is_bus_address(bv)) { + bv_dma_unmark_bus_address(bv); + } else { + if (bv_dma_address(bv) != DMA_MAPPING_ERROR) + bv->bv_offset += bv_dma_address(bv); + if (bv_dma_len(bv)) + bv->bv_len = bv_dma_len(bv); + } + bv_dma_address(bv) = DMA_MAPPING_ERROR; + bv_dma_len(bv) = 0; + } +} + +static void iommu_dma_unmap_bvecs_swiotlb(struct device *dev, + struct bio_vec *bvecs, int nents, enum dma_data_direction dir, + unsigned long attrs) +{ + struct bio_vec *bv; + int i; + + for (i = 0; i < nents; i++) { + bv = &bvecs[i]; + iommu_dma_unmap_page(dev, bv_dma_address(bv), + bv_dma_len(bv), dir, attrs); + } +} + +static int iommu_dma_map_bvecs_swiotlb(struct device *dev, struct bio_vec *bvecs, + int nents, enum dma_data_direction dir, unsigned long attrs) +{ + struct bio_vec *bv; + int i; + + bv_dma_mark_swiotlb(bvecs); + + for (i = 0; i < nents; i++) { + bv = &bvecs[i]; + bv_dma_address(bv) = iommu_dma_map_page(dev, bv->bv_page, + bv->bv_offset, bv->bv_len, dir, attrs); + if (bv_dma_address(bv) == DMA_MAPPING_ERROR) + goto out_unmap; + bv_dma_len(bv) = bv->bv_len; + } + + return nents; + +out_unmap: + iommu_dma_unmap_bvecs_swiotlb(dev, bvecs, i, dir, + attrs | DMA_ATTR_SKIP_CPU_SYNC); + return -EIO; +} + +/* + * The DMA API client is passing in an array of bio_vecs which could + * describe any old buffer layout, but the IOMMU API requires everything + * to be aligned to IOMMU pages. Hence the need for this complicated bit + * of impedance-matching, to be able to hand off a suitably-aligned list, + * but still preserve the original offsets and sizes for the caller. + */ +static int iommu_dma_map_bvecs(struct device *dev, struct bio_vec *bvecs, + int nents, enum dma_data_direction dir, unsigned long attrs) +{ + int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs); + struct iommu_domain *domain = iommu_get_dma_domain(dev); + struct iommu_dma_cookie *cookie = domain->iova_cookie; + unsigned long mask = dma_get_seg_boundary(dev); + struct iova_domain *iovad = &cookie->iovad; + size_t iova_len = 0; + dma_addr_t iova; + ssize_t ret; + int i; + + if (static_branch_unlikely(&iommu_deferred_attach_enabled)) { + ret = iommu_deferred_attach(dev, domain); + if (ret) + goto out; + } + + if (dev_use_bvecs_swiotlb(dev, bvecs, nents, dir)) + return iommu_dma_map_bvecs_swiotlb(dev, bvecs, nents, + dir, attrs); + + if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) + iommu_dma_sync_bvecs_for_device(dev, bvecs, nents, dir); + + /* + * Work out how much IOVA space we need, and align the segments to + * IOVA granules for the IOMMU driver to handle. 
With some clever + * trickery we can modify the list in-place, but reversibly, by + * stashing the unaligned parts in the as-yet-unused DMA fields. + */ + for (i = 0; i < nents; i++) { + struct bio_vec *bv = &bvecs[i]; + size_t s_iova_off = iova_offset(iovad, bv->bv_offset); + size_t pad_len = (mask - iova_len + 1) & mask; + size_t s_length = bv->bv_len; + struct bio_vec *prev = NULL; + + bv_dma_address(bv) = s_iova_off; + bv_dma_len(bv) = s_length; + bv->bv_offset -= s_iova_off; + s_length = iova_align(iovad, s_length + s_iova_off); + bv->bv_len = s_length; + + /* + * Due to the alignment of our single IOVA allocation, we can + * depend on these assumptions about the segment boundary mask: + * - If mask size >= IOVA size, then the IOVA range cannot + * possibly fall across a boundary, so we don't care. + * - If mask size < IOVA size, then the IOVA range must start + * exactly on a boundary, therefore we can lay things out + * based purely on segment lengths without needing to know + * the actual addresses beforehand. + * - The mask must be a power of 2, so pad_len == 0 if + * iova_len == 0, thus we cannot dereference prev the first + * time through here (i.e. before it has a meaningful value). + */ + if (pad_len && pad_len < s_length - 1) { + prev->bv_len += pad_len; + iova_len += pad_len; + } + + iova_len += s_length; + prev = bv; + } + + if (!iova_len) + return __finalise_bvecs(dev, bvecs, nents, 0); + + iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev); + if (!iova) { + ret = -ENOMEM; + goto out_restore_sg; + } + + /* + * We'll leave any physical concatenation to the IOMMU driver's + * implementation - it knows better than we do. + */ + ret = iommu_map_bvecs(domain, iova, bvecs, nents, prot, GFP_ATOMIC); + if (ret < 0 || ret < iova_len) + goto out_free_iova; + + return __finalise_bvecs(dev, bvecs, nents, iova); + +out_free_iova: + iommu_dma_free_iova(cookie, iova, iova_len, NULL); +out_restore_sg: + __invalidate_bvecs(bvecs, nents); +out: + if (ret != -ENOMEM && ret != -EREMOTEIO) + return -EINVAL; + return ret; +} + +static void iommu_dma_unmap_bvecs(struct device *dev, struct bio_vec *bvecs, + int nents, enum dma_data_direction dir, unsigned long attrs) +{ + dma_addr_t end = 0, start; + struct bio_vec *bv; + int i; + + if (bv_dma_is_swiotlb(bvecs)) { + iommu_dma_unmap_bvecs_swiotlb(dev, bvecs, nents, dir, attrs); + return; + } + + if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) + iommu_dma_sync_bvecs_for_cpu(dev, bvecs, nents, dir); + + /* + * The bio_vec array elements are mapped into a single + * contiguous IOVA allocation, the start and end points + * just have to be determined. 
+ */ + for (i = 0; i < nents; i++) { + bv = &bvecs[i]; + + if (bv_dma_is_bus_address(bv)) { + bv_dma_unmark_bus_address(bv); + continue; + } + + if (bv_dma_len(bv) == 0) + break; + + start = bv_dma_address(bv); + break; + } + + nents -= i; + for (i = 0; i < nents; i++) { + bv = &bvecs[i]; + + if (bv_dma_is_bus_address(bv)) { + bv_dma_unmark_bus_address(bv); + continue; + } + + if (bv_dma_len(bv) == 0) + break; + + end = bv_dma_address(bv) + bv_dma_len(bv); + } + + if (end) + __iommu_dma_unmap(dev, start, end - start); +} + static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys, size_t size, enum dma_data_direction dir, unsigned long attrs) { @@ -1613,10 +1977,14 @@ static const struct dma_map_ops iommu_dma_ops = { .unmap_page = iommu_dma_unmap_page, .map_sg = iommu_dma_map_sg, .unmap_sg = iommu_dma_unmap_sg, + .map_bvecs = iommu_dma_map_bvecs, + .unmap_bvecs = iommu_dma_unmap_bvecs, .sync_single_for_cpu = iommu_dma_sync_single_for_cpu, .sync_single_for_device = iommu_dma_sync_single_for_device, .sync_sg_for_cpu = iommu_dma_sync_sg_for_cpu, .sync_sg_for_device = iommu_dma_sync_sg_for_device, + .sync_bvecs_for_cpu = iommu_dma_sync_bvecs_for_cpu, + .sync_bvecs_for_device = iommu_dma_sync_bvecs_for_device, .map_resource = iommu_dma_map_resource, .unmap_resource = iommu_dma_unmap_resource, .get_merge_boundary = iommu_dma_get_merge_boundary, diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 3bfc56df4f78..a117917bf9d0 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -2669,6 +2669,64 @@ ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova, } EXPORT_SYMBOL_GPL(iommu_map_sg); +ssize_t iommu_map_bvecs(struct iommu_domain *domain, unsigned long iova, + struct bio_vec *bv, unsigned int nents, int prot, + gfp_t gfp) +{ + const struct iommu_domain_ops *ops = domain->ops; + size_t len = 0, mapped = 0; + unsigned int i = 0; + phys_addr_t start; + int ret; + + might_sleep_if(gfpflags_allow_blocking(gfp)); + + /* Discourage passing strange GFP flags */ + if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 | + __GFP_HIGHMEM))) + return -EINVAL; + + while (i <= nents) { + phys_addr_t b_phys = bv_phys(bv); + + if (len && b_phys != start + len) { + ret = __iommu_map(domain, iova + mapped, start, + len, prot, gfp); + + if (ret) + goto out_err; + + mapped += len; + len = 0; + } + + if (bv_dma_is_bus_address(bv)) + goto next; + + if (len) { + len += bv->bv_len; + } else { + len = bv->bv_len; + start = b_phys; + } + +next: + if (++i < nents) + bv++; + } + + if (ops->iotlb_sync_map) + ops->iotlb_sync_map(domain, iova, mapped); + return mapped; + +out_err: + /* undo mappings already done */ + iommu_unmap(domain, iova, mapped); + + return ret; +} +EXPORT_SYMBOL_GPL(iommu_map_bvecs); + /** * report_iommu_fault() - report about an IOMMU fault to the IOMMU framework * @domain: the iommu domain where the fault has happened diff --git a/include/linux/iommu.h b/include/linux/iommu.h index c50a769d569a..9f7120314fda 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -8,6 +8,7 @@ #define __LINUX_IOMMU_H #include +#include #include #include #include @@ -485,6 +486,9 @@ extern size_t iommu_unmap_fast(struct iommu_domain *domain, extern ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova, struct scatterlist *sg, unsigned int nents, int prot, gfp_t gfp); +extern ssize_t iommu_map_bvecs(struct iommu_domain *domain, unsigned long iova, + struct bio_vec *bvecs, unsigned int nents, + int prot, gfp_t gfp); extern phys_addr_t 
iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova); extern void iommu_set_fault_handler(struct iommu_domain *domain, iommu_fault_handler_t handler, void *token); From patchwork Thu Oct 19 15:26:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chuck Lever X-Patchwork-Id: 13429377 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 18384CDB465 for ; Thu, 19 Oct 2023 15:26:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346277AbjJSP0e (ORCPT ); Thu, 19 Oct 2023 11:26:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57922 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1345938AbjJSP0e (ORCPT ); Thu, 19 Oct 2023 11:26:34 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D73E8124 for ; Thu, 19 Oct 2023 08:26:32 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 45A56C433C7; Thu, 19 Oct 2023 15:26:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697729192; bh=78EBrsEYDbjqzKz6oikO8SaOCUrBd4svH1Rn6ZfGLCE=; h=Subject:From:Cc:Date:In-Reply-To:References:From; b=rswgT3YhnWkT3o0G6MojVzyCpe1SD39xHMrPipMZmSgVISLih5aNMWaWjg0idGwlR wr1ngmFiei5XhT7c+Bu0NDL/fXh4/s7CIqIKFFPFWNoEYEuA8oQFX3jQ9CBPbBczrH KX6RdenvTZNcABi7dkWPz+b61ZF7orVMEZSuh78HlmyDoT3ATBRLAiRnXOUlkXDbFu qoskSLR5ffA7+QCiaPDrFep6sEAilcxP52iiAf6f4m0sON8DaScP9t0vKWNIYZNnSt 65qEPZJWJJZyZNDRRA4SThj1yBcY53wJs1tU1afsX5b6KqbKu+THc4hyXRfHqHjmnm nqTECaowlRAmw== Subject: [PATCH RFC 9/9] RDMA: Add helpers for DMA-mapping an array of bio_vecs From: Chuck Lever Cc: iommu@lists.linux.dev, linux-rdma@vger.kernel.org, Chuck Lever Date: Thu, 19 Oct 2023 11:26:31 -0400 Message-ID: <169772919129.5232.11342896871510148807.stgit@klimt.1015granger.net> In-Reply-To: <169772852492.5232.17148564580779995849.stgit@klimt.1015granger.net> References: <169772852492.5232.17148564580779995849.stgit@klimt.1015granger.net> User-Agent: StGit/1.5 MIME-Version: 1.0 To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Chuck Lever Cc: iommu@lists.linux.dev Cc: linux-rdma@vger.kernel.org Signed-off-by: Chuck Lever --- include/rdma/ib_verbs.h | 29 +++++++++++++++++++++++++++++ 1 file changed, 29 insertions(+) diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h index 533ab92684d8..5e205fda90f9 100644 --- a/include/rdma/ib_verbs.h +++ b/include/rdma/ib_verbs.h @@ -4220,6 +4220,35 @@ static inline void ib_dma_unmap_sg(struct ib_device *dev, ib_dma_unmap_sg_attrs(dev, sg, nents, direction, 0); } +/** + * ib_dma_map_sg - Map an array of bio_vecs to DMA addresses + * @dev: The device for which the DMA addresses are to be created + * @bvecs: The array of bio_vec entries to map + * @nents: The number of entries in the array + * @direction: The direction of the DMA + */ +static inline int ib_dma_map_bvecs(struct ib_device *dev, + struct bio_vec *bvecs, int nents, + enum dma_data_direction direction) +{ + return dma_map_bvecs_attrs(dev->dma_device, bvecs, nents, direction, 0); +} + +/** + * ib_dma_unmap_bvecs - Unmap a DMA-mapped bio_vec array + * @dev: The device for which the DMA addresses were created + * 
@bvecs: The array of bio_vec entries to unmap + * @nents: The number of entries in the array + * @direction: The direction of the DMA + */ +static inline void ib_dma_unmap_bvecs(struct ib_device *dev, + struct bio_vec *bvecs, int nents, + enum dma_data_direction direction) +{ + if (!ib_uses_virt_dma(dev)) + dma_unmap_bvecs_attrs(dev->dma_device, bvecs, nents, direction, 0); +} + /** * ib_dma_max_seg_size - Return the size limit of a single DMA transfer * @dev: The device to query