From patchwork Thu Sep 12 11:15:40 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13801927
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
	Keith Busch, Christoph Hellwig, "Zeng, Oak", Chaitanya Kulkarni
Cc: Leon Romanovsky, Sagi Grimberg, Bjorn Helgaas, Logan Gunthorpe,
	Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson,
	Marek Szyprowski, Jérôme Glisse, Andrew Morton,
	linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-rdma@vger.kernel.org, iommu@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
	kvm@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC v2 05/21] dma-mapping: provide an interface to allocate IOVA
Date: Thu, 12 Sep 2024 14:15:40 +0300
Message-ID:
X-Mailer: git-send-email 2.46.0
From: Leon Romanovsky

The existing .map_page() callback does two things at once: it allocates
an IOVA and links DMA pages to it. That combination works well for most
callers, who use it in control paths, but it is less effective in fast
paths. These advanced callers already manage their data in some sort of
database and can perform the IOVA allocation in advance, leaving only
the range-linkage operation in the fast path.

Provide an interface to allocate/deallocate an IOVA; the next patch adds
the interface to link/unlink DMA ranges to that specific IOVA.

Signed-off-by: Leon Romanovsky
---
 include/linux/dma-mapping.h | 18 ++++++++++++++++++
 kernel/dma/mapping.c        | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 285075873077..6a51d8e96a9d 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -78,6 +78,8 @@
 struct dma_iova_state {
 	struct device *dev;
+	dma_addr_t addr;
+	size_t size;
 	enum dma_data_direction dir;
 };
 
@@ -115,6 +117,10 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 	return 0;
 }
 
+int dma_alloc_iova_unaligned(struct dma_iova_state *state, phys_addr_t phys,
+			     size_t size);
+void dma_free_iova(struct dma_iova_state *state);
+
 dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		size_t offset, size_t size, enum dma_data_direction dir,
 		unsigned long attrs);
@@ -164,6 +170,14 @@ void dma_vunmap_noncontiguous(struct device *dev, void *vaddr);
 int dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
 		size_t size, struct sg_table *sgt);
 #else /* CONFIG_HAS_DMA */
+static inline int dma_alloc_iova_unaligned(struct dma_iova_state *state,
+		phys_addr_t phys, size_t size)
+{
+	return -EOPNOTSUPP;
+}
+static inline void dma_free_iova(struct dma_iova_state *state)
+{
+}
 static inline dma_addr_t dma_map_page_attrs(struct device *dev,
 		struct page *page, size_t offset, size_t size,
 		enum dma_data_direction dir, unsigned long attrs)
@@ -370,6 +384,10 @@ static inline bool dma_need_sync(struct device *dev, dma_addr_t dma_addr)
 	return false;
 }
 #endif /* !CONFIG_HAS_DMA || !CONFIG_DMA_NEED_SYNC */
+static inline int dma_alloc_iova(struct dma_iova_state *state, size_t size)
+{
+	return dma_alloc_iova_unaligned(state, 0, size);
+}
 struct page *dma_alloc_pages(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp);
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index fd9ecff8beee..4cd910f27dee 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -951,3 +951,38 @@ unsigned long dma_get_merge_boundary(struct device *dev)
 	return ops->get_merge_boundary(dev);
 }
 EXPORT_SYMBOL_GPL(dma_get_merge_boundary);
+
+/**
+ * dma_alloc_iova_unaligned - Allocate an IOVA space
+ * @state: IOVA state
+ * @phys: physical address
+ * @size: IOVA size
+ *
+ * Allocate an IOVA space for the given IOVA state and size. The IOVA space
+ * is allocated for the worst case, when the whole range is going to be used.
+ */
+int dma_alloc_iova_unaligned(struct dma_iova_state *state, phys_addr_t phys,
+			     size_t size)
+{
+	if (!use_dma_iommu(state->dev))
+		return 0;
+
+	WARN_ON_ONCE(!size);
+	return iommu_dma_alloc_iova(state, phys, size);
+}
+EXPORT_SYMBOL_GPL(dma_alloc_iova_unaligned);
+
+/**
+ * dma_free_iova - Free an IOVA space
+ * @state: IOVA state
+ *
+ * Free an IOVA space for the given IOVA attributes.
+ */
+void dma_free_iova(struct dma_iova_state *state)
+{
+	if (!use_dma_iommu(state->dev))
+		return;
+
+	iommu_dma_free_iova(state);
+}
+EXPORT_SYMBOL_GPL(dma_free_iova);
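
For reviewers unfamiliar with the intended calling pattern, here is a rough
userspace sketch of the two-phase idea this series is building toward:
reserve the worst-case IOVA range once in the control path, then turn the
per-page fast-path work into cheap offset arithmetic inside that range.
All names (toy_iova_state, toy_alloc_iova, toy_link_page) are illustrative
stand-ins, not the kernel API proposed here:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy model of the split: a caller-owned state object holds a
 * pre-allocated IOVA range, analogous to dma_iova_state. */
struct toy_iova_state {
	uint64_t base;	/* start of the reserved IOVA range */
	size_t   size;	/* worst-case size reserved up front */
};

static uint64_t next_free_iova = 0x1000; /* toy allocator cursor */

/* Control path: reserve the worst-case range once. */
static int toy_alloc_iova(struct toy_iova_state *state, size_t size)
{
	if (!size)
		return -1;
	state->base = next_free_iova;
	state->size = size;
	next_free_iova += size;
	return 0;
}

/* Fast path: "linking" a page reduces to computing its address
 * inside the already-reserved range; no allocator lock is taken. */
static uint64_t toy_link_page(struct toy_iova_state *state, size_t offset)
{
	assert(offset < state->size);
	return state->base + offset;
}
```

The point of the split is visible even in this toy: toy_alloc_iova is the
only operation that touches shared allocator state, so callers that manage
their pages in their own database pay that cost once, up front.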