From patchwork Wed Nov 6 13:49:33 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13864475
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
 Christoph Hellwig, Sagi Grimberg
Cc: Leon Romanovsky, Keith Busch, Bjorn Helgaas, Logan Gunthorpe,
 Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson,
 Marek Szyprowski, Jérôme Glisse, Andrew Morton, Jonathan Corbet,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
 iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
 linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
 Randy Dunlap
Subject: [PATCH v2 05/17] dma: Provide an interface to allow allocate IOVA
Date: Wed, 6 Nov 2024 15:49:33 +0200

From: Leon Romanovsky

The existing .map_page() callback provides both IOVA allocation and
linking of DMA pages. That combination works well for most callers,
which use it in control paths, but it is less effective in fast paths
where map_page() may be called many times per I/O. These advanced
callers already manage their data in some sort of database and can
perform IOVA allocation in advance, leaving only the range-linking
operation in the fast path.

Provide an interface to allocate and deallocate an IOVA range; the next
patch adds the interface to link and unlink DMA ranges to that IOVA.
The API is exported from dma-iommu as it is the only supported
implementation, and its namespace is clearly distinct from the iommu_*
functions, which drivers are not allowed to call directly.
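For illustration only (not part of this patch), a rough sketch of the
intended control-path usage: the queue structure and helper name below
are made up, and dma_iova_link()/dma_iova_unlink() only arrive in the
next patch.

  /*
   * Hypothetical driver setup, illustration only. phys is passed as 0,
   * which is fine for PAGE_SIZE aligned transfers per the kerneldoc of
   * dma_iova_try_alloc() below.
   */
  static bool drv_init_dma(struct device *dev, struct dma_iova_state *state,
  			 size_t ring_size)
  {
  	if (!dma_iova_try_alloc(dev, state, 0, ring_size))
  		/* Fall back to dma_map_page()/dma_map_sg() in the fast path. */
  		return false;

  	/*
  	 * IOVA space is reserved once here; the fast path only links and
  	 * unlinks ranges into it (dma_iova_link()/dma_iova_unlink(), added
  	 * by the next patch).
  	 */
  	return true;
  }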
This code layout allows us to save a function call per API call in the
datapath, as well as a lot of boilerplate code.

Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c   | 79 +++++++++++++++++++++++++++++++++++++
 include/linux/dma-mapping.h | 15 +++++++
 2 files changed, 94 insertions(+)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 853247c42f7d..127150f63c95 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1746,6 +1746,85 @@ size_t iommu_dma_max_mapping_size(struct device *dev)
 	return SIZE_MAX;
 }
 
+static bool iommu_dma_iova_alloc(struct device *dev,
+		struct dma_iova_state *state, phys_addr_t phys, size_t size)
+{
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
+	struct iommu_dma_cookie *cookie = domain->iova_cookie;
+	struct iova_domain *iovad = &cookie->iovad;
+	size_t iova_off = iova_offset(iovad, phys);
+	dma_addr_t addr;
+
+	if (WARN_ON_ONCE(!size))
+		return false;
+	if (WARN_ON_ONCE(size & DMA_IOVA_USE_SWIOTLB))
+		return false;
+
+	addr = iommu_dma_alloc_iova(domain,
+			iova_align(iovad, size + iova_off),
+			dma_get_mask(dev), dev);
+	if (!addr)
+		return false;
+
+	state->addr = addr + iova_off;
+	state->__size = size;
+	return true;
+}
+
+/**
+ * dma_iova_try_alloc - Try to allocate an IOVA space
+ * @dev: Device to allocate the IOVA space for
+ * @state: IOVA state
+ * @phys: physical address
+ * @size: IOVA size
+ *
+ * Check if @dev supports the IOVA-based DMA API, and if yes allocate IOVA space
+ * for the given base address and size.
+ *
+ * Note: @phys is only used to calculate the IOVA alignment. Callers that always
+ * do PAGE_SIZE aligned transfers can safely pass 0 here.
+ *
+ * Returns %true if the IOVA-based DMA API can be used and IOVA space has been
+ * allocated, or %false if the regular DMA API should be used.
+ */
+bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
+		phys_addr_t phys, size_t size)
+{
+	memset(state, 0, sizeof(*state));
+	if (!use_dma_iommu(dev))
+		return false;
+	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
+	    iommu_deferred_attach(dev, iommu_get_domain_for_dev(dev)))
+		return false;
+	return iommu_dma_iova_alloc(dev, state, phys, size);
+}
+EXPORT_SYMBOL_GPL(dma_iova_try_alloc);
+
+/**
+ * dma_iova_free - Free an IOVA space
+ * @dev: Device to free the IOVA space for
+ * @state: IOVA state
+ *
+ * Undoes a successful dma_iova_try_alloc().
+ *
+ * Note that all dma_iova_link() calls need to be undone first. For callers
+ * that never call dma_iova_unlink(), dma_iova_destroy() can be used instead
+ * which unlinks all ranges and frees the IOVA space in a single efficient
+ * operation.
+ */
+void dma_iova_free(struct device *dev, struct dma_iova_state *state)
+{
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
+	struct iommu_dma_cookie *cookie = domain->iova_cookie;
+	struct iova_domain *iovad = &cookie->iovad;
+	size_t iova_start_pad = iova_offset(iovad, state->addr);
+	size_t size = dma_iova_size(state);
+
+	iommu_dma_free_iova(cookie, state->addr - iova_start_pad,
+			iova_align(iovad, size + iova_start_pad), NULL);
+}
+EXPORT_SYMBOL_GPL(dma_iova_free);
+
 void iommu_setup_dma_ops(struct device *dev)
 {
 	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 6075e0708deb..817f11bce7bc 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -11,6 +11,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /**
  * List of possible attributes associated with a DMA mapping. The semantics
@@ -77,6 +78,7 @@
 #define DMA_BIT_MASK(n)	(((n) == 64) ? ~0ULL : ((1ULL<<(n))-1))
 
 struct dma_iova_state {
+	dma_addr_t addr;
 	size_t __size;
 };
 
@@ -307,11 +309,24 @@ static inline bool dma_use_iova(struct dma_iova_state *state)
 {
 	return state->__size != 0;
 }
+
+bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
+		phys_addr_t phys, size_t size);
+void dma_iova_free(struct device *dev, struct dma_iova_state *state);
 #else /* CONFIG_IOMMU_DMA */
 static inline bool dma_use_iova(struct dma_iova_state *state)
 {
 	return false;
 }
+static inline bool dma_iova_try_alloc(struct device *dev,
+		struct dma_iova_state *state, phys_addr_t phys, size_t size)
+{
+	return false;
+}
+static inline void dma_iova_free(struct device *dev,
+		struct dma_iova_state *state)
+{
+}
 #endif /* CONFIG_IOMMU_DMA */
 
 #if defined(CONFIG_HAS_DMA) && defined(CONFIG_DMA_NEED_SYNC)
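
As a possible teardown counterpart, again purely illustrative:
dma_use_iova() from the header hunk above reports which path was taken,
so a single cleanup routine can dispatch between the two modes.
drv_unmap_with_dma_api() is a hypothetical stand-in for a driver's
existing dma_unmap_page()/dma_unmap_sg() based teardown.

  /* Hypothetical cleanup helper, illustration only. */
  static void drv_teardown_dma(struct device *dev, struct dma_iova_state *state)
  {
  	if (dma_use_iova(state)) {
  		/* All dma_iova_link() calls must already be undone. */
  		dma_iova_free(dev, state);
  	} else {
  		/* dma_iova_try_alloc() returned false; regular DMA API was used. */
  		drv_unmap_with_dma_api(dev);
  	}
  }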