From patchwork Tue Jan 26 13:12:41 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8121731
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, alex.williamson@redhat.com,
	will.deacon@arm.com, christoffer.dall@linaro.org, marc.zyngier@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org
Cc: patches@linaro.org, p.fedin@samsung.com, linux-kernel@vger.kernel.org,
	Bharat.Bhushan@freescale.com, iommu@lists.linux-foundation.org,
	pranav.sawargaonkar@gmail.com, suravee.suthikulpanit@amd.com
Subject: [PATCH 03/10] vfio_iommu_type1: add reserved binding RB tree management
Date: Tue, 26 Jan 2016 13:12:41 +0000
Message-Id: <1453813968-2024-4-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1453813968-2024-1-git-send-email-eric.auger@linaro.org>
References: <1453813968-2024-1-git-send-email-eric.auger@linaro.org>

The legacy dma_list is only used to insert the reserved IOVA region and
to check that any mapping of a reserved IOVA falls within this window.
As opposed to other vfio_dma slots, the reserved one is not necessarily
mapped.

We will need to track which host physical addresses are mapped to
reserved IOVAs. For that purpose we introduce a new RB tree indexed by
physical address. This reverse RB tree is used only for reserved IOVA
bindings and belongs to a given iommu domain. It is expected to contain
very few bindings; those generally correspond to single pages, each
mapping one MSI frame (at least on ARM: the GICv2m frame or the ITS
GITS_TRANSLATER frame).
Signed-off-by: Eric Auger
---
 drivers/vfio/vfio_iommu_type1.c | 63 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index c5b57e1..32438d9 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -56,6 +56,7 @@ MODULE_PARM_DESC(disable_hugepages,
 struct vfio_iommu {
 	struct list_head	domain_list;
 	struct mutex		lock;
+	/* rb tree indexed by IOVA */
 	struct rb_root		dma_list;
 	bool			v2;
 	bool			nesting;
@@ -65,6 +66,8 @@ struct vfio_domain {
 	struct iommu_domain	*domain;
 	struct list_head	next;
 	struct list_head	group_list;
+	/* rb tree indexed by PA, for reserved bindings only */
+	struct rb_root		reserved_binding_list;
 	int			prot;		/* IOMMU_CACHE */
 	bool			fgsp;		/* Fine-grained super pages */
 };
@@ -77,11 +80,70 @@ struct vfio_dma {
 	int			prot;		/* IOMMU_READ/WRITE */
 };
 
+struct vfio_reserved_binding {
+	struct kref		kref;
+	struct rb_node		node;
+	struct vfio_domain	*domain;
+	phys_addr_t		addr;
+	dma_addr_t		iova;
+	size_t			size;
+};
+
 struct vfio_group {
 	struct iommu_group	*iommu_group;
 	struct list_head	next;
 };
 
+/* Reserved binding RB-tree manipulation */
+
+static struct vfio_reserved_binding *vfio_find_reserved_binding(
+				struct vfio_domain *d,
+				phys_addr_t start, size_t size)
+{
+	struct rb_node *node = d->reserved_binding_list.rb_node;
+
+	while (node) {
+		struct vfio_reserved_binding *binding =
+			rb_entry(node, struct vfio_reserved_binding, node);
+
+		if (start + size <= binding->addr)
+			node = node->rb_left;
+		else if (start >= binding->addr + binding->size)
+			node = node->rb_right;
+		else
+			return binding;
+	}
+
+	return NULL;
+}
+
+static void vfio_link_reserved_binding(struct vfio_domain *d,
+				       struct vfio_reserved_binding *new)
+{
+	struct rb_node **link = &d->reserved_binding_list.rb_node;
+	struct rb_node *parent = NULL;
+	struct vfio_reserved_binding *binding;
+
+	while (*link) {
+		parent = *link;
+		binding = rb_entry(parent, struct vfio_reserved_binding, node);
+
+		if (new->addr + new->size <= binding->addr)
+			link = &(*link)->rb_left;
+		else
+			link = &(*link)->rb_right;
+	}
+
+	rb_link_node(&new->node, parent, link);
+	rb_insert_color(&new->node, &d->reserved_binding_list);
+}
+
+static void vfio_unlink_reserved_binding(struct vfio_domain *d,
+					 struct vfio_reserved_binding *old)
+{
+	rb_erase(&old->node, &d->reserved_binding_list);
+}
+
 /*
  * This code handles mapping and unmapping of user data buffers
  * into DMA'ble space using the IOMMU
@@ -784,6 +846,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 		ret = -ENOMEM;
 		goto out_free;
 	}
+	domain->reserved_binding_list = RB_ROOT;
 
 	group->iommu_group = iommu_group;