From patchwork Tue Apr 19 16:56:30 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8881851
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, robin.murphy@arm.com,
	alex.williamson@redhat.com, will.deacon@arm.com, joro@8bytes.org,
	tglx@linutronix.de, jason@lakedaemon.net, marc.zyngier@arm.com,
	christoffer.dall@linaro.org, linux-arm-kernel@lists.infradead.org
Cc: julien.grall@arm.com, patches@linaro.org, Jean-Philippe.Brucker@arm.com,
	p.fedin@samsung.com, linux-kernel@vger.kernel.org,
	Bharat.Bhushan@freescale.com, iommu@lists.linux-foundation.org,
	pranav.sawargaonkar@gmail.com
Subject: [PATCH v7 06/10] iommu/dma-reserved-iommu: iommu_get/put_reserved_iova
Date: Tue, 19 Apr 2016 16:56:30 +0000
Message-Id: <1461084994-2355-7-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1461084994-2355-1-git-send-email-eric.auger@linaro.org>
References: <1461084994-2355-1-git-send-email-eric.auger@linaro.org>

This patch introduces iommu_get_reserved_iova and iommu_put_reserved_iova.

iommu_get_reserved_iova IOMMU-maps a contiguous physical region onto a
reserved, contiguous IOVA region. The physical region base address does
not need to be IOMMU page size aligned; IOVA pages are allocated and
mapped so that they cover the whole physical region. The mapping is
tracked as a whole (it cannot be split) in an RB tree indexed by PA.

If a mapping already exists for the physical pages, the IOVA mapped to
the PA base is returned directly. Each successful get increments the
binding's reference count.

iommu_put_reserved_iova decrements the reference count; when it reaches
zero, the mapping is destroyed and the IOVAs are released.
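As an illustration, a caller could bind and later release a doorbell page as
sketched below. Only iommu_get_reserved_iova()/iommu_put_reserved_iova() come
from this series; the domain, doorbell address, size and helper name are
hypothetical.

#include <linux/dma-reserved-iommu.h>
#include <linux/iommu.h>

/* Illustrative only: binds a hypothetical doorbell page, then drops the
 * reference. The last put destroys the mapping and releases the iovas. */
static int example_bind_doorbell(struct iommu_domain *domain,
				 phys_addr_t doorbell, size_t size)
{
	dma_addr_t iova;
	int ret;

	ret = iommu_get_reserved_iova(domain, doorbell, size,
				      IOMMU_WRITE, &iova);
	if (ret)
		return ret;

	/* ... program the device with 'iova' instead of 'doorbell' ... */

	iommu_put_reserved_iova(domain, doorbell);
	return 0;
}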
Signed-off-by: Eric Auger

---

v7:
- change title and rework commit message with new name of the functions
  and size parameter
- fix locking
- rework header doc comments
- put now takes a phys_addr_t
- check prot argument against reserved_iova_domain prot flags

v5 -> v6:
- revisit locking with spin_lock instead of mutex
- do not kref_get on 1st get
- add size parameter to the get function following Marc's request
- use the iova domain shift instead of using the smallest supported page size

v3 -> v4:
- formerly in "iommu: iommu_get/put_single_reserved" &
  "iommu/arm-smmu: implement iommu_get/put_single_reserved"
- Attempted to address Marc's doubts about missing size/alignment
  at VFIO level (user-space knows the IOMMU page size and the number
  of IOVA pages to provision)

v2 -> v3:
- remove static implementation of iommu_get_single_reserved &
  iommu_put_single_reserved when CONFIG_IOMMU_API is not set

v1 -> v2:
- previously a VFIO API, named vfio_alloc_map/unmap_free_reserved_iova
---
 drivers/iommu/dma-reserved-iommu.c | 150 +++++++++++++++++++++++++++++++++++++
 include/linux/dma-reserved-iommu.h |  38 ++++++++++
 2 files changed, 188 insertions(+)

diff --git a/drivers/iommu/dma-reserved-iommu.c b/drivers/iommu/dma-reserved-iommu.c
index f6fa18e..426d339 100644
--- a/drivers/iommu/dma-reserved-iommu.c
+++ b/drivers/iommu/dma-reserved-iommu.c
@@ -135,6 +135,22 @@ unlock:
 }
 EXPORT_SYMBOL_GPL(iommu_alloc_reserved_iova_domain);
 
+/* called with domain's reserved_lock held */
+static void reserved_binding_release(struct kref *kref)
+{
+	struct iommu_reserved_binding *b =
+		container_of(kref, struct iommu_reserved_binding, kref);
+	struct iommu_domain *d = b->domain;
+	struct reserved_iova_domain *rid =
+		(struct reserved_iova_domain *)d->reserved_iova_cookie;
+	unsigned long order;
+
+	order = iova_shift(rid->iovad);
+	free_iova(rid->iovad, b->iova >> order);
+	unlink_reserved_binding(d, b);
+	kfree(b);
+}
+
 void iommu_free_reserved_iova_domain(struct iommu_domain *domain)
 {
 	struct reserved_iova_domain *rid;
@@ -160,3 +176,137 @@ unlock:
 	}
 }
 EXPORT_SYMBOL_GPL(iommu_free_reserved_iova_domain);
+
+int iommu_get_reserved_iova(struct iommu_domain *domain,
+			    phys_addr_t addr, size_t size, int prot,
+			    dma_addr_t *iova)
+{
+	unsigned long base_pfn, end_pfn, nb_iommu_pages, order, flags;
+	struct iommu_reserved_binding *b, *newb;
+	size_t iommu_page_size, binding_size;
+	phys_addr_t aligned_base, offset;
+	struct reserved_iova_domain *rid;
+	struct iova_domain *iovad;
+	struct iova *p_iova;
+	int ret = -EINVAL;
+
+	newb = kzalloc(sizeof(*newb), GFP_KERNEL);
+	if (!newb)
+		return -ENOMEM;
+
+	spin_lock_irqsave(&domain->reserved_lock, flags);
+
+	rid = (struct reserved_iova_domain *)domain->reserved_iova_cookie;
+	if (!rid)
+		goto free_newb;
+
+	if (((prot & IOMMU_READ) && !(rid->prot & IOMMU_READ)) ||
+	    ((prot & IOMMU_WRITE) && !(rid->prot & IOMMU_WRITE)))
+		goto free_newb;
+
+	iovad = rid->iovad;
+	order = iova_shift(iovad);
+	base_pfn = addr >> order;
+	end_pfn = (addr + size - 1) >> order;
+	aligned_base = base_pfn << order;
+	offset = addr - aligned_base;
+	nb_iommu_pages = end_pfn - base_pfn + 1;
+	iommu_page_size = 1 << order;
+	binding_size = nb_iommu_pages * iommu_page_size;
+
+	b = find_reserved_binding(domain, aligned_base, binding_size);
+	if (b) {
+		*iova = b->iova + offset + aligned_base - b->addr;
+		kref_get(&b->kref);
+		ret = 0;
+		goto free_newb;
+	}
+
+	p_iova = alloc_iova(iovad, nb_iommu_pages,
+			    iovad->dma_32bit_pfn, true);
+	if (!p_iova) {
+		ret = -ENOMEM;
+		goto free_newb;
+	}
+
+	*iova = iova_dma_addr(iovad, p_iova);
+
+	/* unlock to call iommu_map which is not guaranteed to be atomic */
+	spin_unlock_irqrestore(&domain->reserved_lock, flags);
+
+	ret = iommu_map(domain, *iova, aligned_base, binding_size, prot);
+
+	spin_lock_irqsave(&domain->reserved_lock, flags);
+
+	rid = (struct reserved_iova_domain *)domain->reserved_iova_cookie;
+	if (!rid || (rid->iovad != iovad)) {
+		/* the reserved iova domain was destroyed behind our back */
+		ret = -EBUSY;
+		goto free_newb; /* iova already released */
+	}
+
+	/* no change in the reserved iova domain but iommu_map failed */
+	if (ret)
+		goto free_iova;
+
+	/* everything is fine, add the new node in the rb tree */
+	kref_init(&newb->kref);
+	newb->domain = domain;
+	newb->addr = aligned_base;
+	newb->iova = *iova;
+	newb->size = binding_size;
+
+	link_reserved_binding(domain, newb);
+
+	*iova += offset;
+	goto unlock;
+
+free_iova:
+	free_iova(rid->iovad, p_iova->pfn_lo);
+free_newb:
+	kfree(newb);
+unlock:
+	spin_unlock_irqrestore(&domain->reserved_lock, flags);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_get_reserved_iova);
+
+void iommu_put_reserved_iova(struct iommu_domain *domain, phys_addr_t addr)
+{
+	phys_addr_t aligned_addr, page_size, mask;
+	struct iommu_reserved_binding *b;
+	struct reserved_iova_domain *rid;
+	unsigned long order, flags;
+	struct iommu_domain *d;
+	dma_addr_t iova;
+	size_t size;
+	int ret = 0;
+
+	spin_lock_irqsave(&domain->reserved_lock, flags);
+
+	rid = (struct reserved_iova_domain *)domain->reserved_iova_cookie;
+	if (!rid)
+		goto unlock;
+
+	order = iova_shift(rid->iovad);
+	page_size = (uint64_t)1 << order;
+	mask = page_size - 1;
+	aligned_addr = addr & ~mask;
+
+	b = find_reserved_binding(domain, aligned_addr, page_size);
+	if (!b)
+		goto unlock;
+
+	iova = b->iova;
+	size = b->size;
+	d = b->domain;
+
+	ret = kref_put(&b->kref, reserved_binding_release);
+
+unlock:
+	spin_unlock_irqrestore(&domain->reserved_lock, flags);
+	if (ret)
+		iommu_unmap(d, iova, size);
+}
+EXPORT_SYMBOL_GPL(iommu_put_reserved_iova);
+
diff --git a/include/linux/dma-reserved-iommu.h b/include/linux/dma-reserved-iommu.h
index 01ec385..8722131 100644
--- a/include/linux/dma-reserved-iommu.h
+++ b/include/linux/dma-reserved-iommu.h
@@ -42,6 +42,34 @@ int iommu_alloc_reserved_iova_domain(struct iommu_domain *domain,
  */
 void iommu_free_reserved_iova_domain(struct iommu_domain *domain);
 
+/**
+ * iommu_get_reserved_iova: allocate a contiguous set of iova pages and
+ * map them to the physical range defined by @addr and @size.
+ *
+ * @domain: iommu domain handle
+ * @addr: physical address to bind
+ * @size: size of the binding
+ * @prot: mapping protection attribute
+ * @iova: returned iova
+ *
+ * Mapped physical pfns are within [@addr >> order, (@addr + size - 1) >> order]
+ * where order corresponds to the reserved iova domain order.
+ * This mapping is tracked and reference counted with the minimal granularity
+ * of @size.
+ */
+int iommu_get_reserved_iova(struct iommu_domain *domain,
+			    phys_addr_t addr, size_t size, int prot,
+			    dma_addr_t *iova);
+
+/**
+ * iommu_put_reserved_iova: decrement the ref count of a reserved mapping
+ *
+ * @domain: iommu domain handle
+ * @addr: physical address whose binding ref count is decremented
+ *
+ * If the binding ref count reaches zero, the reserved mapping is destroyed.
+ */
+void iommu_put_reserved_iova(struct iommu_domain *domain, phys_addr_t addr);
+
 #else
 
 static inline int
@@ -55,5 +83,15 @@ iommu_alloc_reserved_iova_domain(struct iommu_domain *domain,
 static inline void
 iommu_free_reserved_iova_domain(struct iommu_domain *domain) {}
 
+static inline int iommu_get_reserved_iova(struct iommu_domain *domain,
+					  phys_addr_t addr, size_t size,
+					  int prot, dma_addr_t *iova)
+{
+	return -ENOENT;
+}
+
+static inline void iommu_put_reserved_iova(struct iommu_domain *domain,
+					   phys_addr_t addr) {}
+
 #endif	/* CONFIG_IOMMU_DMA_RESERVED */
 #endif	/* __DMA_RESERVED_IOMMU_H */
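For reference, the pfn arithmetic documented for iommu_get_reserved_iova()
works out as in the sketch below; the helper and the numbers are illustrative
only and are not part of the patch.

#include <linux/types.h>

/* Mirrors the alignment arithmetic of iommu_get_reserved_iova(). With
 * order = 12 (4K IOMMU pages), addr = 0x10001100 and size = 0x2000 it
 * gives aligned_base = 0x10001000, offset = 0x100 and
 * binding_size = 0x3000, i.e. three IOMMU pages are mapped and the
 * returned iova points 0x100 bytes into the first one. */
static void example_binding_extent(phys_addr_t addr, size_t size,
				   unsigned long order,
				   phys_addr_t *aligned_base,
				   phys_addr_t *offset,
				   size_t *binding_size)
{
	unsigned long base_pfn = addr >> order;
	unsigned long end_pfn = (addr + size - 1) >> order;

	*aligned_base = (phys_addr_t)base_pfn << order;
	*offset = addr - *aligned_base;
	*binding_size = (end_pfn - base_pfn + 1) << order;
}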