From patchwork Tue Jan 26 13:12:44 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8121821
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, alex.williamson@redhat.com,
	will.deacon@arm.com, christoffer.dall@linaro.org, marc.zyngier@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org
Cc: patches@linaro.org, p.fedin@samsung.com, linux-kernel@vger.kernel.org,
	Bharat.Bhushan@freescale.com, iommu@lists.linux-foundation.org,
	pranav.sawargaonkar@gmail.com, suravee.suthikulpanit@amd.com
Subject: [PATCH 06/10] vfio: introduce vfio_group_alloc_map_/unmap_free_reserved_iova
Date: Tue, 26 Jan 2016 13:12:44 +0000
Message-Id: <1453813968-2024-7-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1453813968-2024-1-git-send-email-eric.auger@linaro.org>
References: <1453813968-2024-1-git-send-email-eric.auger@linaro.org>

This patch introduces vfio_group_alloc_map_reserved_iova and
vfio_group_unmap_free_reserved_iova, and implements the corresponding
vfio_iommu_type1 operations.

alloc_map allocates a new reserved IOVA page and maps it onto the
physical page that contains a given physical address (PA). It returns
the IOVA that is mapped onto the provided PA. If a mapping already
exists between the two pages, the IOVA corresponding to the PA is
returned directly.
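Below is a minimal, illustrative sketch (not part of the diff that follows) of
how an in-kernel user of the two new exports could look, for instance code
that wants to make an MSI doorbell physical address reachable through the
IOMMU. The example_* function names, the doorbell parameter and the chosen
protection flags are assumptions made for the example only:

/* Illustrative only - not part of this patch. */
#include <linux/iommu.h>
#include <linux/vfio.h>

static int example_map_doorbell(struct vfio_group *group,
				phys_addr_t doorbell_pa,
				dma_addr_t *doorbell_iova)
{
	int ret;

	/*
	 * Allocate (or reuse) a reserved IOVA page and map it onto the
	 * IOMMU page that contains doorbell_pa. The returned IOVA keeps
	 * the same offset within the page as doorbell_pa.
	 */
	ret = vfio_group_alloc_map_reserved_iova(group, doorbell_pa,
						 IOMMU_READ | IOMMU_WRITE,
						 doorbell_iova);
	if (ret)
		return ret;

	/* ... program the device with *doorbell_iova here ... */
	return 0;
}

static void example_unmap_doorbell(struct vfio_group *group,
				   dma_addr_t doorbell_iova)
{
	/*
	 * Drop one reference on the reserved binding; the type1 backend
	 * aligns the IOVA down to its page internally before looking it up.
	 */
	vfio_group_unmap_free_reserved_iova(group, doorbell_iova);
}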
Signed-off-by: Eric Auger
Signed-off-by: Ankit Jindal
Signed-off-by: Pranavkumar Sawargaonkar
Signed-off-by: Bharat Bhushan
---
 drivers/vfio/vfio.c             |  39 ++++++++++
 drivers/vfio/vfio_iommu_type1.c | 163 ++++++++++++++++++++++++++++++++++++++--
 include/linux/vfio.h            |  34 ++++++++-
 3 files changed, 228 insertions(+), 8 deletions(-)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index 82f25cc..3d9de00 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -268,6 +268,45 @@ void vfio_unregister_iommu_driver(const struct vfio_iommu_driver_ops *ops)
 }
 EXPORT_SYMBOL_GPL(vfio_unregister_iommu_driver);
 
+int vfio_group_alloc_map_reserved_iova(struct vfio_group *group,
+				       phys_addr_t addr, int prot,
+				       dma_addr_t *iova)
+{
+	struct vfio_container *container = group->container;
+	const struct vfio_iommu_driver_ops *ops = container->iommu_driver->ops;
+	int ret;
+
+	if (!ops->alloc_map_reserved_iova)
+		return -EINVAL;
+
+	down_read(&container->group_lock);
+	ret = ops->alloc_map_reserved_iova(container->iommu_data,
+					   group->iommu_group,
+					   addr, prot, iova);
+	up_read(&container->group_lock);
+	return ret;
+
+}
+EXPORT_SYMBOL_GPL(vfio_group_alloc_map_reserved_iova);
+
+int vfio_group_unmap_free_reserved_iova(struct vfio_group *group,
+					dma_addr_t iova)
+{
+	struct vfio_container *container = group->container;
+	const struct vfio_iommu_driver_ops *ops = container->iommu_driver->ops;
+	int ret;
+
+	if (!ops->unmap_free_reserved_iova)
+		return -EINVAL;
+
+	down_read(&container->group_lock);
+	ret = ops->unmap_free_reserved_iova(container->iommu_data,
+					    group->iommu_group, iova);
+	up_read(&container->group_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(vfio_group_unmap_free_reserved_iova);
+
 /**
  * Group minor allocation/free - both called with vfio.group_lock held
  */
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 33304c0..a79e2a8 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -156,6 +156,19 @@ static void vfio_unlink_reserved_binding(struct vfio_domain *d,
 	rb_erase(&old->node, &d->reserved_binding_list);
 }
 
+static void vfio_reserved_binding_release(struct kref *kref)
+{
+	struct vfio_reserved_binding *b =
+		container_of(kref, struct vfio_reserved_binding, kref);
+	struct vfio_domain *d = b->domain;
+	unsigned long order = __ffs(b->size);
+
+	iommu_unmap(d->domain, b->iova, b->size);
+	free_iova(d->reserved_iova_domain, b->iova >> order);
+	vfio_unlink_reserved_binding(d, b);
+	kfree(b);
+}
+
 /*
  * This code handles mapping and unmapping of user data buffers
  * into DMA'ble space using the IOMMU
@@ -1034,6 +1047,138 @@ done:
 	mutex_unlock(&iommu->lock);
 }
 
+static struct vfio_domain *vfio_find_iommu_domain(void *iommu_data,
+						  struct iommu_group *group)
+{
+	struct vfio_iommu *iommu = iommu_data;
+	struct vfio_group *g;
+	struct vfio_domain *d;
+
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		list_for_each_entry(g, &d->group_list, next) {
+			if (g->iommu_group == group)
+				return d;
+		}
+	}
+	return NULL;
+}
+
+static int vfio_iommu_type1_alloc_map_reserved_iova(void *iommu_data,
+						    struct iommu_group *group,
+						    phys_addr_t addr, int prot,
+						    dma_addr_t *iova)
+{
+	struct vfio_iommu *iommu = iommu_data;
+	struct vfio_domain *d;
+	uint64_t mask, iommu_page_size;
+	struct vfio_reserved_binding *b;
+	unsigned long order;
+	struct iova *p_iova;
+	phys_addr_t aligned_addr, offset;
+	int ret = 0;
+
+	order = __ffs(vfio_pgsize_bitmap(iommu));
+	iommu_page_size = (uint64_t)1 << order;
+	mask = iommu_page_size - 1;
+	aligned_addr = addr & ~mask;
+	offset = addr - aligned_addr;
+
+	mutex_lock(&iommu->lock);
+
+	d = vfio_find_iommu_domain(iommu_data, group);
+	if (!d) {
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	b = vfio_find_reserved_binding(d, aligned_addr, iommu_page_size);
+	if (b) {
+		ret = 0;
+		*iova = b->iova + offset;
+		kref_get(&b->kref);
+		goto unlock;
+	}
+
+	/* allocate a new reserved IOVA page and a new binding node */
+	p_iova = alloc_iova(d->reserved_iova_domain, 1,
+			    d->reserved_iova_domain->dma_32bit_pfn, true);
+	if (!p_iova) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+	*iova = p_iova->pfn_lo << order;
+
+	b = kzalloc(sizeof(*b), GFP_KERNEL);
+	if (!b) {
+		ret = -ENOMEM;
+		goto free_iova_unlock;
+	}
+
+	ret = iommu_map(d->domain, *iova, aligned_addr, iommu_page_size, prot);
+	if (ret)
+		goto free_binding_iova_unlock;
+
+	kref_init(&b->kref);
+	kref_get(&b->kref);
+	b->domain = d;
+	b->addr = aligned_addr;
+	b->iova = *iova;
+	b->size = iommu_page_size;
+	vfio_link_reserved_binding(d, b);
+	*iova += offset;
+
+	goto unlock;
+
+free_binding_iova_unlock:
+	kfree(b);
+free_iova_unlock:
+	free_iova(d->reserved_iova_domain, *iova >> order);
+unlock:
+	mutex_unlock(&iommu->lock);
+	return ret;
+}
+
+static int vfio_iommu_type1_unmap_free_reserved_iova(void *iommu_data,
+						     struct iommu_group *group,
+						     dma_addr_t iova)
+{
+	struct vfio_iommu *iommu = iommu_data;
+	struct vfio_reserved_binding *b;
+	struct vfio_domain *d;
+	phys_addr_t aligned_addr;
+	dma_addr_t aligned_iova, iommu_page_size, mask, offset;
+	unsigned long order;
+	int ret = 0;
+
+	order = __ffs(vfio_pgsize_bitmap(iommu));
+	iommu_page_size = (uint64_t)1 << order;
+	mask = iommu_page_size - 1;
+	aligned_iova = iova & ~mask;
+	offset = iova - aligned_iova;
+
+	mutex_lock(&iommu->lock);
+
+	d = vfio_find_iommu_domain(iommu_data, group);
+	if (!d) {
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	aligned_addr = iommu_iova_to_phys(d->domain, aligned_iova);
+
+	b = vfio_find_reserved_binding(d, aligned_addr, iommu_page_size);
+	if (!b) {
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	kref_put(&b->kref, vfio_reserved_binding_release);
+
+unlock:
+	mutex_unlock(&iommu->lock);
+	return ret;
+}
+
 static void *vfio_iommu_type1_open(unsigned long arg)
 {
 	struct vfio_iommu *iommu;
@@ -1180,13 +1325,17 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 }
 
 static const struct vfio_iommu_driver_ops vfio_iommu_driver_ops_type1 = {
-	.name		= "vfio-iommu-type1",
-	.owner		= THIS_MODULE,
-	.open		= vfio_iommu_type1_open,
-	.release	= vfio_iommu_type1_release,
-	.ioctl		= vfio_iommu_type1_ioctl,
-	.attach_group	= vfio_iommu_type1_attach_group,
-	.detach_group	= vfio_iommu_type1_detach_group,
+	.name			= "vfio-iommu-type1",
+	.owner			= THIS_MODULE,
+	.open			= vfio_iommu_type1_open,
+	.release		= vfio_iommu_type1_release,
+	.ioctl			= vfio_iommu_type1_ioctl,
+	.attach_group		= vfio_iommu_type1_attach_group,
+	.detach_group		= vfio_iommu_type1_detach_group,
+	.alloc_map_reserved_iova =
+			vfio_iommu_type1_alloc_map_reserved_iova,
+	.unmap_free_reserved_iova =
+			vfio_iommu_type1_unmap_free_reserved_iova,
 };
 
 static int __init vfio_iommu_type1_init(void)
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 610a86a..0020f81 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -75,7 +75,13 @@ struct vfio_iommu_driver_ops {
 					struct iommu_group *group);
 	void		(*detach_group)(void *iommu_data,
 					struct iommu_group *group);
-
+	int		(*alloc_map_reserved_iova)(void *iommu_data,
+						   struct iommu_group *group,
+						   phys_addr_t addr, int prot,
+						   dma_addr_t *iova);
+	int		(*unmap_free_reserved_iova)(void *iommu_data,
+						    struct iommu_group *group,
+						    dma_addr_t iova);
 };
 
 extern int vfio_register_iommu_driver(const struct vfio_iommu_driver_ops *ops);
@@ -138,4 +144,30 @@ extern int vfio_virqfd_enable(void *opaque, void *data,
 				      struct virqfd **pvirqfd, int fd);
 extern void vfio_virqfd_disable(struct virqfd **pvirqfd);
 
+/**
+ * vfio_group_alloc_map_reserved_iova: allocate a new reserved iova page and
+ * map it onto the aligned physical page that contains a given physical
+ * address. The page size is the domain's iommu page size.
+ *
+ * @group: vfio group handle
+ * @addr: physical address to map
+ * @prot: protection attribute
+ * @iova: returned iova that is mapped onto addr
+ *
+ * returns 0 on success, < 0 on failure
+ */
+extern int vfio_group_alloc_map_reserved_iova(struct vfio_group *group,
+					      phys_addr_t addr, int prot,
+					      dma_addr_t *iova);
+/**
+ * vfio_group_unmap_free_reserved_iova: unmap and free the reserved iova page
+ *
+ * @group: vfio group handle
+ * @iova: base iova, must be aligned on the IOMMU page size
+ *
+ * returns 0 on success, < 0 on failure
+ */
+extern int vfio_group_unmap_free_reserved_iova(struct vfio_group *group,
+					       dma_addr_t iova);
+
 #endif /* VFIO_H */
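To illustrate the sharing semantics documented above, here is another minimal
sketch, again not part of the patch: two physical addresses that fall within
the same IOMMU page end up sharing one reserved IOVA page, and each
unmap/free call drops a single reference on that shared binding. The
example_shared_binding name and the pa0/pa1 arguments are hypothetical:

/* Illustrative only - not part of this patch. */
#include <linux/iommu.h>
#include <linux/vfio.h>

static void example_shared_binding(struct vfio_group *group,
				   phys_addr_t pa0, phys_addr_t pa1)
{
	dma_addr_t iova0, iova1;

	if (vfio_group_alloc_map_reserved_iova(group, pa0,
					       IOMMU_READ | IOMMU_WRITE,
					       &iova0))
		return;

	/*
	 * If pa1 lies in the same IOMMU page as pa0, the existing binding
	 * is reused: its kref is taken again and iova1 differs from iova0
	 * only by the offset within the page.
	 */
	if (vfio_group_alloc_map_reserved_iova(group, pa1,
					       IOMMU_READ | IOMMU_WRITE,
					       &iova1)) {
		vfio_group_unmap_free_reserved_iova(group, iova0);
		return;
	}

	/* Each call drops a single reference on the shared binding. */
	vfio_group_unmap_free_reserved_iova(group, iova1);
	vfio_group_unmap_free_reserved_iova(group, iova0);
}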