From patchwork Fri Feb 26 17:35:44 2016
From: Eric Auger <eric.auger@linaro.org>
To: eric.auger@st.com, eric.auger@linaro.org, robin.murphy@arm.com,
 alex.williamson@redhat.com, will.deacon@arm.com, joro@8bytes.org,
 tglx@linutronix.de, jason@lakedaemon.net, marc.zyngier@arm.com,
 christoffer.dall@linaro.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: patches@linaro.org, Manish.Jaggi@caviumnetworks.com, p.fedin@samsung.com,
 linux-kernel@vger.kernel.org, Bharat.Bhushan@freescale.com,
 iommu@lists.linux-foundation.org, pranav.sawargaonkar@gmail.com,
 suravee.suthikulpanit@amd.com
Subject: [RFC v4 04/14] dma-reserved-iommu: reserved binding rb-tree and helpers
Date: Fri, 26 Feb 2016 17:35:44 +0000
Message-Id: <1456508154-2253-5-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1456508154-2253-1-git-send-email-eric.auger@linaro.org>
References: <1456508154-2253-1-git-send-email-eric.auger@linaro.org>

We will need to track which host physical addresses are mapped to
reserved IOVAs. To that end, we introduce a new RB tree indexed by
physical address. This RB tree is only used for reserved IOVA bindings.
It is expected to contain very few bindings; these generally correspond
to single pages, each mapping one MSI frame (a GICv2m frame or an ITS
GITS_TRANSLATER frame).
Signed-off-by: Eric Auger <eric.auger@linaro.org>

---

v3 -> v4:
- that code was formerly in "iommu/arm-smmu: add a reserved binding RB tree"
---
 drivers/iommu/dma-reserved-iommu.c | 60 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/drivers/iommu/dma-reserved-iommu.c b/drivers/iommu/dma-reserved-iommu.c
index 41a1add..30d54d0 100644
--- a/drivers/iommu/dma-reserved-iommu.c
+++ b/drivers/iommu/dma-reserved-iommu.c
@@ -20,6 +20,66 @@
 #include
 #include
 
+struct iommu_reserved_binding {
+	struct kref kref;
+	struct rb_node node;
+	struct iommu_domain *domain;
+	phys_addr_t addr;
+	dma_addr_t iova;
+	size_t size;
+};
+
+/* Reserved binding RB-tree manipulation */
+
+static struct iommu_reserved_binding *find_reserved_binding(
+				struct iommu_domain *d,
+				phys_addr_t start, size_t size)
+{
+	struct rb_node *node = d->reserved_binding_list.rb_node;
+
+	while (node) {
+		struct iommu_reserved_binding *binding =
+			rb_entry(node, struct iommu_reserved_binding, node);
+
+		if (start + size <= binding->addr)
+			node = node->rb_left;
+		else if (start >= binding->addr + binding->size)
+			node = node->rb_right;
+		else
+			return binding;
+	}
+
+	return NULL;
+}
+
+static void link_reserved_binding(struct iommu_domain *d,
+				  struct iommu_reserved_binding *new)
+{
+	struct rb_node **link = &d->reserved_binding_list.rb_node;
+	struct rb_node *parent = NULL;
+	struct iommu_reserved_binding *binding;
+
+	while (*link) {
+		parent = *link;
+		binding = rb_entry(parent, struct iommu_reserved_binding,
+				   node);
+
+		if (new->addr + new->size <= binding->addr)
+			link = &(*link)->rb_left;
+		else
+			link = &(*link)->rb_right;
+	}
+
+	rb_link_node(&new->node, parent, link);
+	rb_insert_color(&new->node, &d->reserved_binding_list);
+}
+
+static void unlink_reserved_binding(struct iommu_domain *d,
+				    struct iommu_reserved_binding *old)
+{
+	rb_erase(&old->node, &d->reserved_binding_list);
+}
+
 int iommu_alloc_reserved_iova_domain(struct iommu_domain *domain,
 				     dma_addr_t iova, size_t size,
 				     unsigned long order)
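
For context, here is a rough sketch (not part of this patch) of how a
caller might combine these helpers to bind an MSI doorbell page, reusing
an existing binding when one already covers the physical address. The
d->reserved_mutex field, the get_reserved_binding_sketch() name, and the
assumption that the caller has already allocated the reserved IOVA are
all hypothetical; the real consumers appear in later patches of this
series.

/*
 * Illustrative sketch only -- not part of this patch. Assumes a
 * hypothetical d->reserved_mutex protecting the RB tree, and that
 * the caller has already allocated a reserved IOVA for the page.
 */
static int get_reserved_binding_sketch(struct iommu_domain *d,
				       phys_addr_t addr, dma_addr_t iova,
				       size_t size, int prot)
{
	struct iommu_reserved_binding *b;
	int ret;

	mutex_lock(&d->reserved_mutex);

	/* reuse any binding that already covers [addr, addr + size) */
	b = find_reserved_binding(d, addr, size);
	if (b) {
		kref_get(&b->kref);
		mutex_unlock(&d->reserved_mutex);
		return 0;
	}

	b = kzalloc(sizeof(*b), GFP_KERNEL);
	if (!b) {
		ret = -ENOMEM;
		goto out;
	}

	kref_init(&b->kref);
	b->domain = d;
	b->addr = addr;
	b->iova = iova;
	b->size = size;

	ret = iommu_map(d, iova, addr, size, prot);
	if (ret) {
		kfree(b);
		goto out;
	}

	link_reserved_binding(d, b);
out:
	mutex_unlock(&d->reserved_mutex);
	return ret;
}

Keying the tree by physical address is what makes the reuse check above
a single O(log n) lookup when several devices target the same doorbell
frame.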