From patchwork Fri Feb 12 08:13:08 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8288181
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, alex.williamson@redhat.com,
 will.deacon@arm.com, joro@8bytes.org, tglx@linutronix.de,
 jason@lakedaemon.net, marc.zyngier@arm.com, christoffer.dall@linaro.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 kvm@vger.kernel.org
Cc: suravee.suthikulpanit@amd.com, patches@linaro.org,
 linux-kernel@vger.kernel.org, Manish.Jaggi@caviumnetworks.com,
 Bharat.Bhushan@freescale.com, pranav.sawargaonkar@gmail.com,
 p.fedin@samsung.com, iommu@lists.linux-foundation.org,
 sherry.hurwitz@amd.com, brijesh.singh@amd.com, leo.duran@amd.com,
 Thomas.Lendacky@amd.com
Subject: [RFC v3 06/15] iommu/arm-smmu: add a reserved binding RB tree
Date: Fri, 12 Feb 2016 08:13:08 +0000
Message-Id: <1455264797-2334-7-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1455264797-2334-1-git-send-email-eric.auger@linaro.org>
References: <1455264797-2334-1-git-send-email-eric.auger@linaro.org>
X-Mailing-List: kvm@vger.kernel.org

We will need to track which host physical addresses are mapped to
reserved IOVA. To that end, introduce a new RB tree indexed by physical
address. This RB tree is only used for reserved IOVA bindings.

It is expected that this RB tree will contain very few bindings. Those
generally correspond to a single page mapping one MSI frame (a GICv2m
frame or the ITS GITS_TRANSLATER frame).
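To make that concrete, here is an illustration (not part of the patch;
both addresses are invented, and the kref/node/domain fields would be
filled in at runtime) of what a typical entry in this tree would hold.
SZ_4K comes from linux/sizes.h:

	struct arm_smmu_reserved_binding example = {
		/* Illustration only: the values below are made up. */
		.addr = 0x08020000,	/* PA of an MSI doorbell frame (e.g. GICv2m) */
		.iova = 0x10000000,	/* IOVA allocated from reserved_iova_domain */
		.size = SZ_4K,		/* one page is enough to cover the frame */
	};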
Signed-off-by: Eric Auger
---
 drivers/iommu/arm-smmu.c | 65 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 64 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index f42341d..729a4c6 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -349,10 +349,21 @@ struct arm_smmu_domain {
 	struct mutex		init_mutex; /* Protects smmu pointer */
 	struct iommu_domain	domain;
 	struct iova_domain	*reserved_iova_domain;
-	/* protects reserved domain manipulation */
+	/* rb tree indexed by PA, for reserved bindings only */
+	struct rb_root		reserved_binding_list;
+	/* protects reserved domain and rbtree manipulation */
 	struct mutex		reserved_mutex;
 };
 
+struct arm_smmu_reserved_binding {
+	struct kref		kref;
+	struct rb_node		node;
+	struct arm_smmu_domain	*domain;
+	phys_addr_t		addr;
+	dma_addr_t		iova;
+	size_t			size;
+};
+
 static struct iommu_ops arm_smmu_ops;
 
 static DEFINE_SPINLOCK(arm_smmu_devices_lock);
@@ -400,6 +411,57 @@ static struct device_node *dev_get_dev_node(struct device *dev)
 	return dev->of_node;
 }
 
+/* Reserved binding RB-tree manipulation */
+
+static struct arm_smmu_reserved_binding *find_reserved_binding(
+				struct arm_smmu_domain *d,
+				phys_addr_t start, size_t size)
+{
+	struct rb_node *node = d->reserved_binding_list.rb_node;
+
+	while (node) {
+		struct arm_smmu_reserved_binding *binding =
+			rb_entry(node, struct arm_smmu_reserved_binding, node);
+
+		if (start + size <= binding->addr)
+			node = node->rb_left;
+		else if (start >= binding->addr + binding->size)
+			node = node->rb_right;
+		else
+			return binding;
+	}
+
+	return NULL;
+}
+
+static void link_reserved_binding(struct arm_smmu_domain *d,
+				  struct arm_smmu_reserved_binding *new)
+{
+	struct rb_node **link = &d->reserved_binding_list.rb_node;
+	struct rb_node *parent = NULL;
+	struct arm_smmu_reserved_binding *binding;
+
+	while (*link) {
+		parent = *link;
+		binding = rb_entry(parent, struct arm_smmu_reserved_binding,
+				   node);
+
+		if (new->addr + new->size <= binding->addr)
+			link = &(*link)->rb_left;
+		else
+			link = &(*link)->rb_right;
+	}
+
+	rb_link_node(&new->node, parent, link);
+	rb_insert_color(&new->node, &d->reserved_binding_list);
+}
+
+static void unlink_reserved_binding(struct arm_smmu_domain *d,
+				    struct arm_smmu_reserved_binding *old)
+{
+	rb_erase(&old->node, &d->reserved_binding_list);
+}
+
 static struct arm_smmu_master *find_smmu_master(struct arm_smmu_device *smmu,
 						struct device_node *dev_node)
 {
@@ -981,6 +1043,7 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
 	mutex_init(&smmu_domain->init_mutex);
 	mutex_init(&smmu_domain->reserved_mutex);
 	spin_lock_init(&smmu_domain->pgtbl_lock);
+	smmu_domain->reserved_binding_list = RB_ROOT;
 
 	return &smmu_domain->domain;
 }
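
Note that the three helpers are introduced here without a caller. As a
hedged sketch only (the function name and flow below are invented for
illustration; the actual consumer arrives later in the series), a caller
mapping an MSI doorbell would presumably combine them with the kref
roughly like this, inside arm-smmu.c where the patch's definitions are
visible:

	/*
	 * Hypothetical caller, for illustration only: create a reserved
	 * binding for [addr, addr + size), or take a reference on an
	 * existing binding that overlaps it.  Assumes the helpers and
	 * struct definitions added by this patch.
	 */
	static int arm_smmu_add_reserved_binding(struct arm_smmu_domain *d,
						 phys_addr_t addr, dma_addr_t iova,
						 size_t size)
	{
		struct arm_smmu_reserved_binding *b;

		mutex_lock(&d->reserved_mutex);

		b = find_reserved_binding(d, addr, size);
		if (b) {
			/* Doorbell already mapped: just take a reference. */
			kref_get(&b->kref);
			mutex_unlock(&d->reserved_mutex);
			return 0;
		}

		b = kzalloc(sizeof(*b), GFP_KERNEL);
		if (!b) {
			mutex_unlock(&d->reserved_mutex);
			return -ENOMEM;
		}

		kref_init(&b->kref);		/* refcount starts at 1 */
		b->domain = d;
		b->addr = addr;
		b->iova = iova;
		b->size = size;
		link_reserved_binding(d, b);

		mutex_unlock(&d->reserved_mutex);
		return 0;
	}

The overlap-style comparisons in find_reserved_binding() mean the tree
only ever holds non-overlapping PA ranges, and the kref presumably lets
several masters attached to the same domain share a single doorbell
mapping instead of mapping the frame once per device.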