From patchwork Mon Apr  4 08:07:00 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8738161
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, robin.murphy@arm.com,
	alex.williamson@redhat.com, will.deacon@arm.com, joro@8bytes.org,
	tglx@linutronix.de, jason@lakedaemon.net, marc.zyngier@arm.com,
	christoffer.dall@linaro.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: suravee.suthikulpanit@amd.com, patches@linaro.org,
	linux-kernel@vger.kernel.org, Manish.Jaggi@caviumnetworks.com,
	Bharat.Bhushan@freescale.com, pranav.sawargaonkar@gmail.com,
	p.fedin@samsung.com, iommu@lists.linux-foundation.org,
	Jean-Philippe.Brucker@arm.com, julien.grall@arm.com
Subject: [PATCH v6 5/7] dma-reserved-iommu: reserved binding rb-tree and helpers
Date: Mon, 4 Apr 2016 08:07:00 +0000
Message-Id: 1459757222-2668-6-git-send-email-eric.auger@linaro.org
In-Reply-To: 1459757222-2668-1-git-send-email-eric.auger@linaro.org
References: 1459757222-2668-1-git-send-email-eric.auger@linaro.org
X-Mailing-List: kvm@vger.kernel.org
We will need to track which host physical addresses are mapped to
reserved IOVAs. To that end, introduce a new RB tree indexed by
physical address. This RB tree is only used for reserved IOVA
bindings. It is expected to contain very few bindings; those
generally correspond to single-page mappings of one MSI frame
(a GICv2m frame or the ITS GITS_TRANSLATER frame).

Signed-off-by: Eric Auger

---

v5 -> v6:
- add comment about @d->reserved_lock to be held

v3 -> v4:
- that code was formerly in "iommu/arm-smmu: add a reserved binding
  RB tree"
---
 drivers/iommu/dma-reserved-iommu.c | 63 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/drivers/iommu/dma-reserved-iommu.c b/drivers/iommu/dma-reserved-iommu.c
index a461482..f592118 100644
--- a/drivers/iommu/dma-reserved-iommu.c
+++ b/drivers/iommu/dma-reserved-iommu.c
@@ -18,6 +18,69 @@
 #include
 #include

+struct iommu_reserved_binding {
+	struct kref kref;
+	struct rb_node node;
+	struct iommu_domain *domain;
+	phys_addr_t addr;
+	dma_addr_t iova;
+	size_t size;
+};
+
+/* Reserved binding RB-tree manipulation */
+
+/* @d->reserved_lock must be held */
+static struct iommu_reserved_binding *find_reserved_binding(
+				struct iommu_domain *d,
+				phys_addr_t start, size_t size)
+{
+	struct rb_node *node = d->reserved_binding_list.rb_node;
+
+	while (node) {
+		struct iommu_reserved_binding *binding =
+			rb_entry(node, struct iommu_reserved_binding, node);
+
+		if (start + size <= binding->addr)
+			node = node->rb_left;
+		else if (start >= binding->addr + binding->size)
+			node = node->rb_right;
+		else
+			return binding;
+	}
+
+	return NULL;
+}
+
+/* @d->reserved_lock must be held */
+static void link_reserved_binding(struct iommu_domain *d,
+				  struct iommu_reserved_binding *new)
+{
+	struct rb_node **link = &d->reserved_binding_list.rb_node;
+	struct rb_node *parent = NULL;
+	struct iommu_reserved_binding *binding;
+
+	while (*link) {
+		parent = *link;
+		binding = rb_entry(parent, struct iommu_reserved_binding,
+				   node);
+
+		if (new->addr + new->size <= binding->addr)
+			link = &(*link)->rb_left;
+		else
+			link = &(*link)->rb_right;
+	}
+
+	rb_link_node(&new->node, parent, link);
+	rb_insert_color(&new->node, &d->reserved_binding_list);
+}
+
+/* @d->reserved_lock must be held */
+static void unlink_reserved_binding(struct iommu_domain *d,
+				    struct iommu_reserved_binding *old)
+{
+	rb_erase(&old->node, &d->reserved_binding_list);
+}
+
 int iommu_alloc_reserved_iova_domain(struct iommu_domain *domain,
				     dma_addr_t iova, size_t size,
				     unsigned long order)