From patchwork Mon Apr 4 08:07:01 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8738271
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, robin.murphy@arm.com,
	alex.williamson@redhat.com, will.deacon@arm.com, joro@8bytes.org,
	tglx@linutronix.de, jason@lakedaemon.net, marc.zyngier@arm.com,
	christoffer.dall@linaro.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: julien.grall@arm.com, patches@linaro.org, Jean-Philippe.Brucker@arm.com,
	Manish.Jaggi@caviumnetworks.com, p.fedin@samsung.com,
	linux-kernel@vger.kernel.org, Bharat.Bhushan@freescale.com,
	iommu@lists.linux-foundation.org, pranav.sawargaonkar@gmail.com,
	suravee.suthikulpanit@amd.com
Subject: [PATCH v6 6/7] dma-reserved-iommu: iommu_get/put_single_reserved
Date: Mon, 4 Apr 2016 08:07:01 +0000
Message-Id: <1459757222-2668-7-git-send-email-eric.auger@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1459757222-2668-1-git-send-email-eric.auger@linaro.org>
References: <1459757222-2668-1-git-send-email-eric.auger@linaro.org>

This patch introduces the iommu_get/put_single_reserved functionality,
implemented by iommu_get_reserved_iova() and iommu_put_reserved_iova().

iommu_get_reserved_iova() allocates a new reserved iova page and maps it
onto the physical page that contains a given physical address. The mapping
granularity is the IOMMU page size; it is the responsibility of the system
integrator to make sure the IOMMU page size in use matches the granularity
of the MSI frame.

The function returns the iova mapped onto the provided physical address, so
the physical address passed as an argument does not need to be aligned. If a
mapping already exists between the two pages, the IOVA mapped to the PA is
returned directly. Each time an iova is successfully returned, the binding
reference count is incremented.

iommu_put_reserved_iova() decrements the reference count; when it reaches
zero, the mapping is destroyed and the iova is released.
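For context, here is a minimal caller sketch (not part of the patch) showing
how a consumer such as an MSI doorbell binding might use the two helpers. The
example_bind_doorbell/example_unbind_doorbell names, the prot flags and the
doorbell size are illustrative assumptions only:

#include <linux/iommu.h>
#include <linux/dma-reserved-iommu.h>

/* Hypothetical caller: bind the IOMMU page containing a doorbell register. */
static int example_bind_doorbell(struct iommu_domain *domain,
				 phys_addr_t doorbell_pa, dma_addr_t *msi_iova)
{
	/*
	 * doorbell_pa does not need to be page aligned; the helper maps the
	 * IOMMU page(s) covering [doorbell_pa, doorbell_pa + size - 1] and
	 * returns an iova carrying the same intra-page offset.
	 */
	return iommu_get_reserved_iova(domain, doorbell_pa, sizeof(u32),
				       IOMMU_READ | IOMMU_WRITE, msi_iova);
}

/* Hypothetical caller: drop one reference on the binding. */
static void example_unbind_doorbell(struct iommu_domain *domain,
				    dma_addr_t msi_iova)
{
	/* the mapping and its iova are released once the last ref is put */
	iommu_put_reserved_iova(domain, msi_iova);
}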
Signed-off-by: Eric Auger
Signed-off-by: Ankit Jindal
Signed-off-by: Pranavkumar Sawargaonkar
Signed-off-by: Bharat Bhushan

---

v5 -> v6:
- revisit locking with spin_lock instead of mutex
- do not kref_get on 1st get
- add size parameter to the get function following Marc's request
- use the iova domain shift instead of using the smallest supported page size

v3 -> v4:
- formerly in "iommu: iommu_get/put_single_reserved" &
  "iommu/arm-smmu: implement iommu_get/put_single_reserved"
- Attempted to address Marc's doubts about missing size/alignment at VFIO
  level (user-space knows the IOMMU page size and the number of IOVA pages
  to provision)

v2 -> v3:
- remove static implementation of iommu_get_single_reserved &
  iommu_put_single_reserved when CONFIG_IOMMU_API is not set

v1 -> v2:
- previously a VFIO API, named vfio_alloc_map/unmap_free_reserved_iova
---
 drivers/iommu/dma-reserved-iommu.c | 146 +++++++++++++++++++++++++++++++++++++
 include/linux/dma-reserved-iommu.h |  28 +++++++
 2 files changed, 174 insertions(+)

diff --git a/drivers/iommu/dma-reserved-iommu.c b/drivers/iommu/dma-reserved-iommu.c
index f592118..3c759d9 100644
--- a/drivers/iommu/dma-reserved-iommu.c
+++ b/drivers/iommu/dma-reserved-iommu.c
@@ -136,3 +136,149 @@ void iommu_free_reserved_iova_domain(struct iommu_domain *domain)
 	spin_unlock_irqrestore(&domain->reserved_lock, flags);
 }
 EXPORT_SYMBOL_GPL(iommu_free_reserved_iova_domain);
+
+static void delete_reserved_binding(struct iommu_domain *domain,
+				    struct iommu_reserved_binding *b)
+{
+	struct iova_domain *iovad =
+		(struct iova_domain *)domain->reserved_iova_cookie;
+	unsigned long order = iova_shift(iovad);
+
+	iommu_unmap(domain, b->iova, b->size);
+	free_iova(iovad, b->iova >> order);
+	kfree(b);
+}
+
+int iommu_get_reserved_iova(struct iommu_domain *domain,
+			    phys_addr_t addr, size_t size, int prot,
+			    dma_addr_t *iova)
+{
+	struct iova_domain *iovad =
+		(struct iova_domain *)domain->reserved_iova_cookie;
+	unsigned long order = iova_shift(iovad);
+	unsigned long base_pfn, end_pfn, nb_iommu_pages;
+	size_t iommu_page_size = 1 << order, binding_size;
+	phys_addr_t aligned_base, offset;
+	struct iommu_reserved_binding *b, *newb;
+	unsigned long flags;
+	struct iova *p_iova;
+	bool unmap = false;
+	int ret;
+
+	base_pfn = addr >> order;
+	end_pfn = (addr + size - 1) >> order;
+	nb_iommu_pages = end_pfn - base_pfn + 1;
+	aligned_base = base_pfn << order;
+	offset = addr - aligned_base;
+	binding_size = nb_iommu_pages * iommu_page_size;
+
+	if (!iovad)
+		return -EINVAL;
+
+	spin_lock_irqsave(&domain->reserved_lock, flags);
+
+	b = find_reserved_binding(domain, aligned_base, binding_size);
+	if (b) {
+		*iova = b->iova + offset;
+		kref_get(&b->kref);
+		ret = 0;
+		goto unlock;
+	}
+
+	spin_unlock_irqrestore(&domain->reserved_lock, flags);
+
+	/*
+	 * no reserved IOVA was found for this PA, start allocating and
+	 * registering one while the spin-lock is not held. iommu_map/unmap
+	 * are not supposed to be atomic
+	 */
+
+	p_iova = alloc_iova(iovad, nb_iommu_pages, iovad->dma_32bit_pfn, true);
+	if (!p_iova)
+		return -ENOMEM;
+
+	*iova = iova_dma_addr(iovad, p_iova);
+
+	newb = kzalloc(sizeof(*b), GFP_KERNEL);
+	if (!newb) {
+		free_iova(iovad, p_iova->pfn_lo);
+		return -ENOMEM;
+	}
+
+	ret = iommu_map(domain, *iova, aligned_base, binding_size, prot);
+	if (ret) {
+		kfree(newb);
+		free_iova(iovad, p_iova->pfn_lo);
+		return ret;
+	}
+
+	spin_lock_irqsave(&domain->reserved_lock, flags);
+
+	/* re-check the PA was not mapped behind our back while the lock was not held */
+	b = find_reserved_binding(domain, aligned_base, binding_size);
+	if (b) {
+		*iova = b->iova + offset;
+		kref_get(&b->kref);
+		ret = 0;
+		unmap = true;
+		goto unlock;
+	}
+
+	kref_init(&newb->kref);
+	newb->domain = domain;
+	newb->addr = aligned_base;
+	newb->iova = *iova;
+	newb->size = binding_size;
+
+	link_reserved_binding(domain, newb);
+
+	*iova += offset;
+	goto unlock;
+
+unlock:
+	spin_unlock_irqrestore(&domain->reserved_lock, flags);
+	if (unmap)
+		delete_reserved_binding(domain, newb);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_get_reserved_iova);
+
+void iommu_put_reserved_iova(struct iommu_domain *domain, dma_addr_t iova)
+{
+	struct iova_domain *iovad =
+		(struct iova_domain *)domain->reserved_iova_cookie;
+	unsigned long order;
+	phys_addr_t aligned_addr;
+	dma_addr_t aligned_iova, page_size, mask, offset;
+	struct iommu_reserved_binding *b;
+	unsigned long flags;
+	bool unmap = false;
+
+	order = iova_shift(iovad);
+	page_size = (uint64_t)1 << order;
+	mask = page_size - 1;
+
+	aligned_iova = iova & ~mask;
+	offset = iova - aligned_iova;
+
+	aligned_addr = iommu_iova_to_phys(domain, aligned_iova);
+
+	spin_lock_irqsave(&domain->reserved_lock, flags);
+	b = find_reserved_binding(domain, aligned_addr, page_size);
+	if (!b)
+		goto unlock;
+
+	if (atomic_sub_and_test(1, &b->kref.refcount)) {
+		unlink_reserved_binding(domain, b);
+		unmap = true;
+	}
+
+unlock:
+	spin_unlock_irqrestore(&domain->reserved_lock, flags);
+	if (unmap)
+		delete_reserved_binding(domain, b);
+}
+EXPORT_SYMBOL_GPL(iommu_put_reserved_iova);
+
+
+
diff --git a/include/linux/dma-reserved-iommu.h b/include/linux/dma-reserved-iommu.h
index 5bf863b..dedea56 100644
--- a/include/linux/dma-reserved-iommu.h
+++ b/include/linux/dma-reserved-iommu.h
@@ -40,6 +40,34 @@ int iommu_alloc_reserved_iova_domain(struct iommu_domain *domain,
  */
 void iommu_free_reserved_iova_domain(struct iommu_domain *domain);
 
+/**
+ * iommu_get_reserved_iova: allocate a contiguous set of iova pages and
+ * map them to the physical range defined by @addr and @size.
+ *
+ * @domain: iommu domain handle
+ * @addr: physical address to bind
+ * @size: size of the binding
+ * @prot: mapping protection attribute
+ * @iova: returned iova
+ *
+ * Mapped physical pfns are within [@addr >> order, (@addr + size - 1) >> order]
+ * where order corresponds to the iova domain order.
+ * This mapping is reference counted as a whole and cannot be split.
+ */
+int iommu_get_reserved_iova(struct iommu_domain *domain,
+			    phys_addr_t addr, size_t size, int prot,
+			    dma_addr_t *iova);
+
+/**
+ * iommu_put_reserved_iova: decrement the ref count of a reserved mapping
+ *
+ * @domain: iommu domain handle
+ * @iova: reserved iova whose binding ref count is decremented
+ *
+ * If the binding ref count drops to zero, destroy the reserved mapping.
+ */
+void iommu_put_reserved_iova(struct iommu_domain *domain, dma_addr_t iova);
+
 #endif /* CONFIG_IOMMU_DMA_RESERVED */
 #endif /* __KERNEL__ */
 #endif /* __DMA_RESERVED_IOMMU_H */
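As a concrete illustration of the pfn arithmetic described in the kernel-doc
above, the small standalone sketch below (plain user-space C with assumed
example values, not kernel code) reproduces the alignment computation that
iommu_get_reserved_iova() performs on an unaligned physical address:

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
	/* assumed example: 4kB IOMMU pages (order 12), unaligned doorbell PA */
	unsigned int order = 12;
	uint64_t addr = 0x1002a040ULL;	/* physical address to bind */
	uint64_t size = 4;		/* bytes covered by the binding */

	uint64_t base_pfn = addr >> order;		/* 0x1002a */
	uint64_t end_pfn = (addr + size - 1) >> order;	/* 0x1002a */
	uint64_t nb_pages = end_pfn - base_pfn + 1;	/* one IOMMU page */
	uint64_t aligned_base = base_pfn << order;	/* 0x1002a000 */
	uint64_t offset = addr - aligned_base;		/* 0x40 */

	/* the iova handed back to the caller carries the same 0x40 offset */
	printf("map %" PRIu64 " page(s) at PA 0x%" PRIx64 ", iova offset 0x%" PRIx64 "\n",
	       nb_pages, aligned_base, offset);
	return 0;
}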