From patchwork Tue Jul 2 09:09:37 2024
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13719171
From: Leon Romanovsky
To: Jens Axboe, Jason Gunthorpe, Robin Murphy, Joerg Roedel, Will Deacon,
	Keith Busch, Christoph Hellwig, "Zeng, Oak", Chaitanya Kulkarni
Cc: Leon Romanovsky, Sagi Grimberg, Bjorn Helgaas, Logan Gunthorpe,
	Yishai Hadas, Shameer Kolothum, Kevin Tian, Alex Williamson,
	Marek Szyprowski, Jérôme Glisse, Andrew Morton,
	linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-rdma@vger.kernel.org, iommu@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-pci@vger.kernel.org,
	kvm@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC PATCH v1 07/18] iommu/dma: Provide an interface to allow
	preallocating IOVA
Date: Tue, 2 Jul 2024 12:09:37 +0300
X-Mailer: git-send-email 2.45.2

From: Leon Romanovsky

Separate IOVA allocation into a dedicated callback, so that the IOVA can
be cached and reused in fast paths by devices that support the ODP
(on-demand paging) mechanism.

Signed-off-by: Leon Romanovsky
---
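Note (not part of the patch): a minimal sketch of how a consumer might
reach the new callbacks. The example_* wrapper names are made up for
illustration; only get_dma_ops() and the .alloc_iova/.free_iova members
added below are taken from real code.

#include <linux/dma-map-ops.h>

/* Sketch only: route a preallocation request through the new dma_map_ops
 * callbacks.  Returns 0 when the backend cannot preallocate, matching
 * the "!iova == failure" convention used in dma-iommu.c below. */
static dma_addr_t example_dma_prealloc_iova(struct device *dev, size_t size)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	if (!ops || !ops->alloc_iova)
		return 0;

	return ops->alloc_iova(dev, size);
}

static void example_dma_free_prealloc_iova(struct device *dev,
					   dma_addr_t iova, size_t size)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	if (ops && ops->free_iova)
		ops->free_iova(dev, iova, size);
}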
 drivers/iommu/dma-iommu.c | 50 +++++++++++++++++++++++++++++----------
 1 file changed, 38 insertions(+), 12 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 89e34503e0bb..0b5ca6961940 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -357,7 +357,7 @@ int iommu_dma_init_fq(struct iommu_domain *domain)
 	atomic_set(&cookie->fq_timer_on, 0);
 	/*
 	 * Prevent incomplete fq state being observable. Pairs with path from
-	 * __iommu_dma_unmap() through iommu_dma_free_iova() to queue_iova()
+	 * __iommu_dma_unmap() through __iommu_dma_free_iova() to queue_iova()
 	 */
 	smp_wmb();
 	WRITE_ONCE(cookie->fq_domain, domain);
@@ -745,7 +745,7 @@ static int dma_info_to_prot(enum dma_data_direction dir, bool coherent,
 	}
 }
 
-static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
+static dma_addr_t __iommu_dma_alloc_iova(struct iommu_domain *domain,
 		size_t size, u64 dma_limit, struct device *dev)
 {
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
@@ -791,7 +791,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
 	return (dma_addr_t)iova << shift;
 }
 
-static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
+static void __iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
 		dma_addr_t iova, size_t size, struct iommu_iotlb_gather *gather)
 {
 	struct iova_domain *iovad = &cookie->iovad;
@@ -828,7 +828,7 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
 	if (!iotlb_gather.queued)
 		iommu_iotlb_sync(domain, &iotlb_gather);
 
-	iommu_dma_free_iova(cookie, dma_addr, size, &iotlb_gather);
+	__iommu_dma_free_iova(cookie, dma_addr, size, &iotlb_gather);
 }
 
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
@@ -851,12 +851,12 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
 
 	size = iova_align(iovad, size + iova_off);
 
-	iova = iommu_dma_alloc_iova(domain, size, dma_mask, dev);
+	iova = __iommu_dma_alloc_iova(domain, size, dma_mask, dev);
 	if (!iova)
 		return DMA_MAPPING_ERROR;
 
 	if (iommu_map(domain, iova, phys - iova_off, size, prot, GFP_ATOMIC)) {
-		iommu_dma_free_iova(cookie, iova, size, NULL);
+		__iommu_dma_free_iova(cookie, iova, size, NULL);
 		return DMA_MAPPING_ERROR;
 	}
 	return iova + iova_off;
@@ -960,7 +960,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
 		return NULL;
 
 	size = iova_align(iovad, size);
-	iova = iommu_dma_alloc_iova(domain, size, dev->coherent_dma_mask, dev);
+	iova = __iommu_dma_alloc_iova(domain, size, dev->coherent_dma_mask, dev);
 	if (!iova)
 		goto out_free_pages;
 
@@ -994,7 +994,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
 out_free_sg:
 	sg_free_table(sgt);
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, size, NULL);
+	__iommu_dma_free_iova(cookie, iova, size, NULL);
 out_free_pages:
 	__iommu_dma_free_pages(pages, count);
 	return NULL;
@@ -1429,7 +1429,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	if (!iova_len)
 		return __finalise_sg(dev, sg, nents, 0);
 
-	iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
+	iova = __iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
 	if (!iova) {
 		ret = -ENOMEM;
 		goto out_restore_sg;
@@ -1446,7 +1446,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	return __finalise_sg(dev, sg, nents, iova);
 
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, iova_len, NULL);
+	__iommu_dma_free_iova(cookie, iova, iova_len, NULL);
 out_restore_sg:
 	__invalidate_sg(sg, nents);
 out:
@@ -1707,6 +1707,30 @@ static size_t iommu_dma_max_mapping_size(struct device *dev)
 	return SIZE_MAX;
 }
 
+static dma_addr_t iommu_dma_alloc_iova(struct device *dev, size_t size)
+{
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
+	struct iommu_dma_cookie *cookie = domain->iova_cookie;
+	struct iova_domain *iovad = &cookie->iovad;
+	dma_addr_t dma_mask = dma_get_mask(dev);
+
+	size = iova_align(iovad, size);
+	return __iommu_dma_alloc_iova(domain, size, dma_mask, dev);
+}
+
+static void iommu_dma_free_iova(struct device *dev, dma_addr_t iova,
+		size_t size)
+{
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
+	struct iommu_dma_cookie *cookie = domain->iova_cookie;
+	struct iova_domain *iovad = &cookie->iovad;
+	struct iommu_iotlb_gather iotlb_gather;
+
+	size = iova_align(iovad, size);
+	iommu_iotlb_gather_init(&iotlb_gather);
+	__iommu_dma_free_iova(cookie, iova, size, &iotlb_gather);
+}
+
 static const struct dma_map_ops iommu_dma_ops = {
 	.flags			= DMA_F_PCI_P2PDMA_SUPPORTED |
 				  DMA_F_CAN_SKIP_SYNC,
@@ -1731,6 +1755,8 @@ static const struct dma_map_ops iommu_dma_ops = {
 	.get_merge_boundary	= iommu_dma_get_merge_boundary,
 	.opt_mapping_size	= iommu_dma_opt_mapping_size,
 	.max_mapping_size	= iommu_dma_max_mapping_size,
+	.alloc_iova		= iommu_dma_alloc_iova,
+	.free_iova		= iommu_dma_free_iova,
 };
 
 void iommu_setup_dma_ops(struct device *dev)
@@ -1773,7 +1799,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 	if (!msi_page)
 		return NULL;
 
-	iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
+	iova = __iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
 	if (!iova)
 		goto out_free_page;
 
@@ -1787,7 +1813,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 	return msi_page;
 
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, size, NULL);
+	__iommu_dma_free_iova(cookie, iova, size, NULL);
 out_free_page:
 	kfree(msi_page);
 	return NULL;
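
Note (illustration only, not part of the patch): the reuse pattern this
enables for ODP-capable devices, sketched against the stock IOMMU API.
The IOVA range is obtained once at registration via the new
->alloc_iova() callback, pages are mapped and unmapped inside it as
faults and invalidations come and go, and the range itself is only
returned on deregistration. The example_odp_* helpers are hypothetical.

#include <linux/iommu.h>
#include <linux/mm.h>

/* Fault path: materialize one page inside the preallocated range.
 * @base was obtained once via the new ->alloc_iova() callback, so this
 * path never touches the IOVA allocator. */
static int example_odp_fault(struct device *dev, dma_addr_t base,
			     unsigned long offset, struct page *page)
{
	struct iommu_domain *domain = iommu_get_dma_domain(dev);

	return iommu_map(domain, base + offset, page_to_phys(page),
			 PAGE_SIZE, IOMMU_READ | IOMMU_WRITE, GFP_ATOMIC);
}

/* Invalidation path: drop the page but keep the IOVA range allocated,
 * so the next fault can reuse base + offset directly. */
static void example_odp_invalidate(struct device *dev, dma_addr_t base,
				   unsigned long offset)
{
	struct iommu_domain *domain = iommu_get_dma_domain(dev);

	iommu_unmap(domain, base + offset, PAGE_SIZE);
}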