From patchwork Tue Mar 1 18:27:45 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8468621
From: Eric Auger <eric.auger@linaro.org>
To: eric.auger@st.com, eric.auger@linaro.org, robin.murphy@arm.com,
	alex.williamson@redhat.com, will.deacon@arm.com, joro@8bytes.org,
	tglx@linutronix.de, jason@lakedaemon.net, marc.zyngier@arm.com,
	christoffer.dall@linaro.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: patches@linaro.org, Manish.Jaggi@caviumnetworks.com, p.fedin@samsung.com,
	linux-kernel@vger.kernel.org, Bharat.Bhushan@freescale.com,
	iommu@lists.linux-foundation.org, pranav.sawargaonkar@gmail.com,
	suravee.suthikulpanit@amd.com
Subject: [RFC v5 05/17] dma-reserved-iommu: reserved binding rb-tree and helpers
Date: Tue, 1 Mar 2016 18:27:45 +0000
Message-Id: <1456856877-4817-6-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1456856877-4817-1-git-send-email-eric.auger@linaro.org>
References: <1456856877-4817-1-git-send-email-eric.auger@linaro.org>

We will need to track which host physical addresses are mapped to
reserved IOVAs. To that end, introduce a new RB tree indexed by
physical address. This RB tree is only used for reserved IOVA
bindings. It is expected to contain very few bindings; each typically
corresponds to a single page mapping one MSI frame (a GICv2m frame or
an ITS GITS_TRANSLATER frame).
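For illustration, such a binding typically covers one 4kB doorbell frame.
A minimal sketch of what a binding looks like with the struct added by
this patch (the addresses are made-up example values; the kref and
rb_node fields are initialized separately before insertion):

	struct iommu_reserved_binding doorbell = {
		.addr = 0x08020000,	/* host PA of the MSI doorbell frame */
		.iova = 0x08000000,	/* reserved IOVA the device writes to */
		.size = 0x1000,		/* a single page */
	};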
Signed-off-by: Eric Auger <eric.auger@linaro.org>

---
v3 -> v4:
- that code was formerly in "iommu/arm-smmu: add a reserved binding RB tree"
---
 drivers/iommu/dma-reserved-iommu.c | 60 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/drivers/iommu/dma-reserved-iommu.c b/drivers/iommu/dma-reserved-iommu.c
index 41a1add..30d54d0 100644
--- a/drivers/iommu/dma-reserved-iommu.c
+++ b/drivers/iommu/dma-reserved-iommu.c
@@ -20,6 +20,66 @@
 #include <linux/iommu.h>
 #include <linux/dma-reserved-iommu.h>
 
+struct iommu_reserved_binding {
+	struct kref kref;
+	struct rb_node node;
+	struct iommu_domain *domain;
+	phys_addr_t addr;
+	dma_addr_t iova;
+	size_t size;
+};
+
+/* Reserved binding RB-tree manipulation */
+
+static struct iommu_reserved_binding *find_reserved_binding(
+				struct iommu_domain *d,
+				phys_addr_t start, size_t size)
+{
+	struct rb_node *node = d->reserved_binding_list.rb_node;
+
+	while (node) {
+		struct iommu_reserved_binding *binding =
+			rb_entry(node, struct iommu_reserved_binding, node);
+
+		if (start + size <= binding->addr)
+			node = node->rb_left;
+		else if (start >= binding->addr + binding->size)
+			node = node->rb_right;
+		else
+			return binding;
+	}
+
+	return NULL;
+}
+
+static void link_reserved_binding(struct iommu_domain *d,
+				  struct iommu_reserved_binding *new)
+{
+	struct rb_node **link = &d->reserved_binding_list.rb_node;
+	struct rb_node *parent = NULL;
+	struct iommu_reserved_binding *binding;
+
+	while (*link) {
+		parent = *link;
+		binding = rb_entry(parent, struct iommu_reserved_binding,
+				   node);
+
+		if (new->addr + new->size <= binding->addr)
+			link = &(*link)->rb_left;
+		else
+			link = &(*link)->rb_right;
+	}
+
+	rb_link_node(&new->node, parent, link);
+	rb_insert_color(&new->node, &d->reserved_binding_list);
+}
+
+static void unlink_reserved_binding(struct iommu_domain *d,
+				    struct iommu_reserved_binding *old)
+{
+	rb_erase(&old->node, &d->reserved_binding_list);
+}
+
 int iommu_alloc_reserved_iova_domain(struct iommu_domain *domain,
 				     dma_addr_t iova, size_t size,
 				     unsigned long order)
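For context, here is a minimal sketch of how a caller could combine these
helpers. It is not part of this patch: bind_msi_doorbell(), the
reserved_lock spinlock and the way the IOVA is obtained are hypothetical
placeholders for what later patches in the series are expected to provide.

static int bind_msi_doorbell(struct iommu_domain *d,
			     phys_addr_t addr, size_t size,
			     dma_addr_t *iova)
{
	struct iommu_reserved_binding *b;
	int ret = 0;

	/* hypothetical lock protecting the reserved binding rb-tree */
	spin_lock(&d->reserved_lock);

	/* any existing binding overlapping [addr, addr + size) is reused */
	b = find_reserved_binding(d, addr, size);
	if (b) {
		kref_get(&b->kref);
		*iova = b->iova;
		goto out;
	}

	b = kzalloc(sizeof(*b), GFP_ATOMIC);
	if (!b) {
		ret = -ENOMEM;
		goto out;
	}

	kref_init(&b->kref);
	b->domain = d;
	b->addr = addr;
	b->size = size;
	b->iova = 0;	/* would come from the reserved IOVA allocator */

	link_reserved_binding(d, b);
	*iova = b->iova;
out:
	spin_unlock(&d->reserved_lock);
	return ret;
}

On teardown, the matching kref_put() release callback would call
unlink_reserved_binding() and free the structure, keeping lookup,
insertion and removal symmetric.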