From patchwork Mon Apr 4 08:07:00 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8738251
From: Eric Auger <eric.auger@linaro.org>
To: eric.auger@st.com, eric.auger@linaro.org, robin.murphy@arm.com,
	alex.williamson@redhat.com, will.deacon@arm.com, joro@8bytes.org,
	tglx@linutronix.de, jason@lakedaemon.net, marc.zyngier@arm.com,
	christoffer.dall@linaro.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: julien.grall@arm.com, patches@linaro.org, Jean-Philippe.Brucker@arm.com,
	Manish.Jaggi@caviumnetworks.com, p.fedin@samsung.com,
	linux-kernel@vger.kernel.org, Bharat.Bhushan@freescale.com,
	iommu@lists.linux-foundation.org, pranav.sawargaonkar@gmail.com,
	suravee.suthikulpanit@amd.com
Subject: [PATCH v6 5/7] dma-reserved-iommu: reserved binding rb-tree and helpers
Date: Mon, 4 Apr 2016 08:07:00 +0000
Message-Id: <1459757222-2668-6-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1459757222-2668-1-git-send-email-eric.auger@linaro.org>
References: <1459757222-2668-1-git-send-email-eric.auger@linaro.org>

We will need to track which host physical addresses are mapped to
reserved IOVA. To that end, we introduce a new RB tree indexed by
physical address. This RB tree is only used for reserved IOVA bindings.

It is expected that this RB tree will contain very few bindings. Those
generally correspond to a single page mapping one MSI frame (GICv2m
frame or ITS GITS_TRANSLATER frame).
Signed-off-by: Eric Auger <eric.auger@linaro.org>

---
v5 -> v6:
- add comment about @d->reserved_lock to be held

v3 -> v4:
- that code was formerly in "iommu/arm-smmu: add a reserved binding RB tree"
---
 drivers/iommu/dma-reserved-iommu.c | 63 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/drivers/iommu/dma-reserved-iommu.c b/drivers/iommu/dma-reserved-iommu.c
index a461482..f592118 100644
--- a/drivers/iommu/dma-reserved-iommu.c
+++ b/drivers/iommu/dma-reserved-iommu.c
@@ -18,6 +18,69 @@
 #include
 #include
 
+struct iommu_reserved_binding {
+	struct kref kref;
+	struct rb_node node;
+	struct iommu_domain *domain;
+	phys_addr_t addr;
+	dma_addr_t iova;
+	size_t size;
+};
+
+/* Reserved binding RB-tree manipulation */
+
+/* @d->reserved_lock must be held */
+static struct iommu_reserved_binding *find_reserved_binding(
+				struct iommu_domain *d,
+				phys_addr_t start, size_t size)
+{
+	struct rb_node *node = d->reserved_binding_list.rb_node;
+
+	while (node) {
+		struct iommu_reserved_binding *binding =
+			rb_entry(node, struct iommu_reserved_binding, node);
+
+		if (start + size <= binding->addr)
+			node = node->rb_left;
+		else if (start >= binding->addr + binding->size)
+			node = node->rb_right;
+		else
+			return binding;
+	}
+
+	return NULL;
+}
+
+/* @d->reserved_lock must be held */
+static void link_reserved_binding(struct iommu_domain *d,
+				  struct iommu_reserved_binding *new)
+{
+	struct rb_node **link = &d->reserved_binding_list.rb_node;
+	struct rb_node *parent = NULL;
+	struct iommu_reserved_binding *binding;
+
+	while (*link) {
+		parent = *link;
+		binding = rb_entry(parent, struct iommu_reserved_binding,
+				   node);
+
+		if (new->addr + new->size <= binding->addr)
+			link = &(*link)->rb_left;
+		else
+			link = &(*link)->rb_right;
+	}
+
+	rb_link_node(&new->node, parent, link);
+	rb_insert_color(&new->node, &d->reserved_binding_list);
+}
+
+/* @d->reserved_lock must be held */
+static void unlink_reserved_binding(struct iommu_domain *d,
+				    struct iommu_reserved_binding *old)
+{
+	rb_erase(&old->node, &d->reserved_binding_list);
+}
+
 int iommu_alloc_reserved_iova_domain(struct iommu_domain *domain,
 				     dma_addr_t iova, size_t size,
 				     unsigned long order)