From patchwork Tue Mar  1 18:27:45 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8468741
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, robin.murphy@arm.com,
	alex.williamson@redhat.com, will.deacon@arm.com, joro@8bytes.org,
	tglx@linutronix.de, jason@lakedaemon.net, marc.zyngier@arm.com,
	christoffer.dall@linaro.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: suravee.suthikulpanit@amd.com, patches@linaro.org,
	linux-kernel@vger.kernel.org, Manish.Jaggi@caviumnetworks.com,
	Bharat.Bhushan@freescale.com, pranav.sawargaonkar@gmail.com,
	p.fedin@samsung.com, iommu@lists.linux-foundation.org
Subject: [RFC v5 05/17] dma-reserved-iommu: reserved binding rb-tree and helpers
Date: Tue, 1 Mar 2016 18:27:45 +0000
Message-Id: <1456856877-4817-6-git-send-email-eric.auger@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1456856877-4817-1-git-send-email-eric.auger@linaro.org>
References: <1456856877-4817-1-git-send-email-eric.auger@linaro.org>

We will need to track which host physical addresses are mapped to
reserved IOVAs. To that end, introduce a new RB tree indexed by
physical address. This RB tree is only used for reserved IOVA bindings
and is expected to contain very few entries; those generally correspond
to a single page mapping an MSI frame (a GICv2m frame or an ITS
GITS_TRANSLATER frame).
Signed-off-by: Eric Auger
---
v3 -> v4:
- that code was formerly in "iommu/arm-smmu: add a reserved binding RB
  tree"
---
 drivers/iommu/dma-reserved-iommu.c | 60 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/drivers/iommu/dma-reserved-iommu.c b/drivers/iommu/dma-reserved-iommu.c
index 41a1add..30d54d0 100644
--- a/drivers/iommu/dma-reserved-iommu.c
+++ b/drivers/iommu/dma-reserved-iommu.c
@@ -20,6 +20,66 @@
 #include
 #include
 
+struct iommu_reserved_binding {
+	struct kref kref;
+	struct rb_node node;
+	struct iommu_domain *domain;
+	phys_addr_t addr;
+	dma_addr_t iova;
+	size_t size;
+};
+
+/* Reserved binding RB-tree manipulation */
+
+static struct iommu_reserved_binding *find_reserved_binding(
+				struct iommu_domain *d,
+				phys_addr_t start, size_t size)
+{
+	struct rb_node *node = d->reserved_binding_list.rb_node;
+
+	while (node) {
+		struct iommu_reserved_binding *binding =
+			rb_entry(node, struct iommu_reserved_binding, node);
+
+		if (start + size <= binding->addr)
+			node = node->rb_left;
+		else if (start >= binding->addr + binding->size)
+			node = node->rb_right;
+		else
+			return binding;
+	}
+
+	return NULL;
+}
+
+static void link_reserved_binding(struct iommu_domain *d,
+				  struct iommu_reserved_binding *new)
+{
+	struct rb_node **link = &d->reserved_binding_list.rb_node;
+	struct rb_node *parent = NULL;
+	struct iommu_reserved_binding *binding;
+
+	while (*link) {
+		parent = *link;
+		binding = rb_entry(parent, struct iommu_reserved_binding,
+				   node);
+
+		if (new->addr + new->size <= binding->addr)
+			link = &(*link)->rb_left;
+		else
+			link = &(*link)->rb_right;
+	}
+
+	rb_link_node(&new->node, parent, link);
+	rb_insert_color(&new->node, &d->reserved_binding_list);
+}
+
+static void unlink_reserved_binding(struct iommu_domain *d,
+				    struct iommu_reserved_binding *old)
+{
+	rb_erase(&old->node, &d->reserved_binding_list);
+}
+
 int iommu_alloc_reserved_iova_domain(struct iommu_domain *domain,
				     dma_addr_t iova, size_t size,
				     unsigned long order)
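
[Editor's note: a minimal caller sketch of how these helpers are meant
to compose with the struct's kref. This is illustrative only and not
part of the patch; get_or_create_binding() is a hypothetical helper,
and the reserved_binding_list field of struct iommu_domain, together
with a caller-held lock protecting it, is assumed to be introduced
elsewhere in this series.]

/*
 * Hypothetical usage sketch (not part of this patch): look up an
 * existing binding covering a physical MSI frame and take a
 * reference, or allocate and link a new one. The caller is assumed
 * to hold the lock protecting d->reserved_binding_list.
 */
static struct iommu_reserved_binding *
get_or_create_binding(struct iommu_domain *d, phys_addr_t addr,
		      dma_addr_t iova, size_t size)
{
	struct iommu_reserved_binding *b;

	b = find_reserved_binding(d, addr, size);
	if (b) {
		/* frame already bound: share the mapping via refcount */
		kref_get(&b->kref);
		return b;
	}

	b = kzalloc(sizeof(*b), GFP_KERNEL);
	if (!b)
		return NULL;

	b->domain = d;
	b->addr = addr;
	b->iova = iova;
	b->size = size;
	kref_init(&b->kref);	/* first user holds the initial reference */
	link_reserved_binding(d, b);
	return b;
}

The release path would mirror this: a kref_put() whose release
callback calls unlink_reserved_binding() and frees the node, so a
binding disappears from the tree only when its last user drops it.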