From patchwork Fri Feb 26 17:35:45 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8440271
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, robin.murphy@arm.com,
 alex.williamson@redhat.com, will.deacon@arm.com, joro@8bytes.org,
 tglx@linutronix.de, jason@lakedaemon.net, marc.zyngier@arm.com,
 christoffer.dall@linaro.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: patches@linaro.org, Manish.Jaggi@caviumnetworks.com, p.fedin@samsung.com,
 linux-kernel@vger.kernel.org, Bharat.Bhushan@freescale.com,
 iommu@lists.linux-foundation.org, pranav.sawargaonkar@gmail.com,
 suravee.suthikulpanit@amd.com
Subject: [RFC v4 05/14] dma-reserved-iommu: iommu_get/put_single_reserved
Date: Fri, 26 Feb 2016 17:35:45 +0000
Message-Id: <1456508154-2253-6-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1456508154-2253-1-git-send-email-eric.auger@linaro.org>
References: <1456508154-2253-1-git-send-email-eric.auger@linaro.org>
X-Mailer: git-send-email 1.9.1

This patch introduces iommu_get/put_single_reserved.

iommu_get_single_reserved allocates a new reserved iova page and maps it
onto the physical page that contains a given physical address. The page
size used is the IOMMU page size; it is the responsibility of the system
integrator to make sure the IOMMU page size in use matches the
granularity of the MSI frame.

The function returns the iova that is mapped onto the provided physical
address, so the physical address passed as an argument does not need to
be aligned. If a mapping already exists between the two pages, the IOVA
mapped to the PA is returned directly. Each time an iova is successfully
returned, a binding reference count is incremented.

iommu_put_single_reserved decrements that reference count and, when it
reaches zero, destroys the mapping and releases the iova.
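For illustration only (not part of the patch), here is a minimal caller-side
sketch, assuming a domain whose reserved iova domain has already been set up
with iommu_alloc_reserved_iova_domain(), and a hypothetical MSI doorbell
physical address msi_doorbell_pa:

#include <linux/iommu.h>
#include <linux/dma-reserved-iommu.h>

/* Map the IOMMU page that contains the (hypothetical) doorbell address. */
static int map_msi_doorbell(struct iommu_domain *domain,
			    phys_addr_t msi_doorbell_pa,
			    dma_addr_t *msi_doorbell_iova)
{
	/* *msi_doorbell_iova keeps the same offset within the page as the PA */
	return iommu_get_single_reserved(domain, msi_doorbell_pa,
					 IOMMU_READ | IOMMU_WRITE,
					 msi_doorbell_iova);
}

static void unmap_msi_doorbell(struct iommu_domain *domain,
			       dma_addr_t msi_doorbell_iova)
{
	/*
	 * Drops one binding reference; the mapping is torn down in
	 * reserved_binding_release() once the reference count reaches zero.
	 */
	iommu_put_single_reserved(domain, msi_doorbell_iova);
}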
Signed-off-by: Eric Auger
Signed-off-by: Ankit Jindal
Signed-off-by: Pranavkumar Sawargaonkar
Signed-off-by: Bharat Bhushan

---

v3 -> v4:
- formerly in "iommu: iommu_get/put_single_reserved" &
  "iommu/arm-smmu: implement iommu_get/put_single_reserved"
- Attempted to address Marc's doubts about missing size/alignment at
  VFIO level (user-space knows the IOMMU page size and the number of
  IOVA pages to provision)

v2 -> v3:
- remove static implementation of iommu_get_single_reserved &
  iommu_put_single_reserved when CONFIG_IOMMU_API is not set

v1 -> v2:
- previously a VFIO API, named vfio_alloc_map/unmap_free_reserved_iova
---
 drivers/iommu/dma-reserved-iommu.c | 115 +++++++++++++++++++++++++++++++++++++
 include/linux/dma-reserved-iommu.h |  26 +++++++++
 2 files changed, 141 insertions(+)

diff --git a/drivers/iommu/dma-reserved-iommu.c b/drivers/iommu/dma-reserved-iommu.c
index 30d54d0..537c83e 100644
--- a/drivers/iommu/dma-reserved-iommu.c
+++ b/drivers/iommu/dma-reserved-iommu.c
@@ -132,3 +132,118 @@ void iommu_free_reserved_iova_domain(struct iommu_domain *domain)
 	mutex_unlock(&domain->reserved_mutex);
 }
 EXPORT_SYMBOL_GPL(iommu_free_reserved_iova_domain);
+
+int iommu_get_single_reserved(struct iommu_domain *domain,
+			      phys_addr_t addr, int prot,
+			      dma_addr_t *iova)
+{
+	unsigned long order = __ffs(domain->ops->pgsize_bitmap);
+	size_t page_size = 1 << order;
+	phys_addr_t mask = page_size - 1;
+	phys_addr_t aligned_addr = addr & ~mask;
+	phys_addr_t offset = addr - aligned_addr;
+	struct iommu_reserved_binding *b;
+	struct iova *p_iova;
+	struct iova_domain *iovad =
+		(struct iova_domain *)domain->reserved_iova_cookie;
+	int ret;
+
+	if (!iovad)
+		return -EINVAL;
+
+	mutex_lock(&domain->reserved_mutex);
+
+	b = find_reserved_binding(domain, aligned_addr, page_size);
+	if (b) {
+		*iova = b->iova + offset;
+		kref_get(&b->kref);
+		ret = 0;
+		goto unlock;
+	}
+
+	/* there is no existing reserved iova for this pa */
+	p_iova = alloc_iova(iovad, 1, iovad->dma_32bit_pfn, true);
+	if (!p_iova) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+	*iova = p_iova->pfn_lo << order;
+
+	b = kzalloc(sizeof(*b), GFP_KERNEL);
+	if (!b) {
+		ret = -ENOMEM;
+		goto free_iova_unlock;
+	}
+
+	ret = iommu_map(domain, *iova, aligned_addr, page_size, prot);
+	if (ret)
+		goto free_binding_iova_unlock;
+
+	kref_init(&b->kref);
+	kref_get(&b->kref);
+	b->domain = domain;
+	b->addr = aligned_addr;
+	b->iova = *iova;
+	b->size = page_size;
+
+	link_reserved_binding(domain, b);
+
+	*iova += offset;
+	goto unlock;
+
+free_binding_iova_unlock:
+	kfree(b);
+free_iova_unlock:
+	free_iova(iovad, *iova >> order);
+unlock:
+	mutex_unlock(&domain->reserved_mutex);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_get_single_reserved);
+
+/* called with reserved_mutex locked */
+static void reserved_binding_release(struct kref *kref)
+{
+	struct iommu_reserved_binding *b =
+		container_of(kref, struct iommu_reserved_binding, kref);
+	struct iommu_domain *d = b->domain;
+	struct iova_domain *iovad =
+		(struct iova_domain *)d->reserved_iova_cookie;
+	unsigned long order = __ffs(b->size);
+
+	iommu_unmap(d, b->iova, b->size);
+	free_iova(iovad, b->iova >> order);
+	unlink_reserved_binding(d, b);
+	kfree(b);
+}
+
+void iommu_put_single_reserved(struct iommu_domain *domain, dma_addr_t iova)
+{
+	unsigned long order;
+	phys_addr_t aligned_addr;
+	dma_addr_t aligned_iova, page_size, mask, offset;
+	struct iommu_reserved_binding *b;
+
+	order = __ffs(domain->ops->pgsize_bitmap);
+	page_size = (uint64_t)1 << order;
+	mask = page_size - 1;
+
+	aligned_iova = iova & ~mask;
+	offset = iova - aligned_iova;
+
+	aligned_addr = iommu_iova_to_phys(domain, aligned_iova);
+
+	mutex_lock(&domain->reserved_mutex);
+
+	b = find_reserved_binding(domain, aligned_addr, page_size);
+	if (!b)
+		goto unlock;
+	kref_put(&b->kref, reserved_binding_release);
+
+unlock:
+	mutex_unlock(&domain->reserved_mutex);
+}
+EXPORT_SYMBOL_GPL(iommu_put_single_reserved);
+
+
+
diff --git a/include/linux/dma-reserved-iommu.h b/include/linux/dma-reserved-iommu.h
index 5bf863b..71ec800 100644
--- a/include/linux/dma-reserved-iommu.h
+++ b/include/linux/dma-reserved-iommu.h
@@ -40,6 +40,32 @@ int iommu_alloc_reserved_iova_domain(struct iommu_domain *domain,
  */
 void iommu_free_reserved_iova_domain(struct iommu_domain *domain);
 
+/**
+ * iommu_get_single_reserved: allocate a reserved iova page and bind
+ * it onto the page that contains a physical address (@addr)
+ *
+ * @domain: iommu domain handle
+ * @addr: physical address to bind
+ * @prot: mapping protection attribute
+ * @iova: returned iova
+ *
+ * In case the two pages are already bound, simply return @iova and
+ * increment the binding ref count
+ */
+int iommu_get_single_reserved(struct iommu_domain *domain,
+			      phys_addr_t addr, int prot,
+			      dma_addr_t *iova);
+
+/**
+ * iommu_put_single_reserved: decrement the ref count of the iova page
+ *
+ * @domain: iommu domain handle
+ * @iova: iova whose binding ref count is decremented
+ *
+ * if the binding ref count reaches zero, unmap the iova page and release
+ * the iova
+ */
+void iommu_put_single_reserved(struct iommu_domain *domain, dma_addr_t iova);
+
 #endif	/* CONFIG_IOMMU_DMA_RESERVED */
 #endif	/* __KERNEL__ */
 #endif	/* __DMA_RESERVED_IOMMU_H */
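
As an illustration of the ref counting described in the kernel-doc above
(again only a sketch, not part of the patch; doorbell_a_pa and doorbell_b_pa
are made-up addresses): two physical addresses that fall into the same IOMMU
page share a single binding, and each successful get is paired with a put.

#include <linux/iommu.h>
#include <linux/dma-reserved-iommu.h>

static int map_two_doorbells(struct iommu_domain *domain,
			     phys_addr_t doorbell_a_pa,
			     phys_addr_t doorbell_b_pa)
{
	dma_addr_t iova_a, iova_b;
	int ret;

	ret = iommu_get_single_reserved(domain, doorbell_a_pa,
					IOMMU_READ | IOMMU_WRITE, &iova_a);
	if (ret)
		return ret;

	ret = iommu_get_single_reserved(domain, doorbell_b_pa,
					IOMMU_READ | IOMMU_WRITE, &iova_b);
	if (ret) {
		iommu_put_single_reserved(domain, iova_a);
		return ret;
	}

	/*
	 * If doorbell_a_pa and doorbell_b_pa live in the same IOMMU page,
	 * the second get finds the existing binding, so iova_a and iova_b
	 * point into the same reserved iova page and the binding just gains
	 * an extra reference.
	 */

	/* each put drops one reference taken by the corresponding get */
	iommu_put_single_reserved(domain, iova_b);
	iommu_put_single_reserved(domain, iova_a);

	return 0;
}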