From patchwork Thu Feb 11 14:34:15 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8281071
From: Eric Auger <eric.auger@linaro.org>
To: eric.auger@st.com, eric.auger@linaro.org, alex.williamson@redhat.com,
	will.deacon@arm.com, joro@8bytes.org, tglx@linutronix.de,
	jason@lakedaemon.net, marc.zyngier@arm.com,
	christoffer.dall@linaro.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: suravee.suthikulpanit@amd.com, patches@linaro.org,
	linux-kernel@vger.kernel.org, Manish.Jaggi@caviumnetworks.com,
	Bharat.Bhushan@freescale.com, pranav.sawargaonkar@gmail.com,
	p.fedin@samsung.com, iommu@lists.linux-foundation.org,
	sherry.hurwitz@amd.com, brijesh.singh@amd.com, leo.duran@amd.com,
	Thomas.Lendacky@amd.com
Subject: [RFC v2 08/15] iommu/arm-smmu: implement iommu_get/put_single_reserved
Date: Thu, 11 Feb 2016 14:34:15 +0000
Message-Id: <1455201262-5259-9-git-send-email-eric.auger@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1455201262-5259-1-git-send-email-eric.auger@linaro.org>
References: <1455201262-5259-1-git-send-email-eric.auger@linaro.org>

Implement the iommu_get/put_single_reserved API in arm-smmu.

To track which physical addresses are already mapped, we use an
RB tree indexed by PA.

Signed-off-by: Eric Auger <eric.auger@linaro.org>

---

v1 -> v2:
- previously in vfio_iommu_type1.c
---
 drivers/iommu/arm-smmu.c | 114 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 114 insertions(+)

diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 729a4c6..9961bfd 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -1563,6 +1563,118 @@ static void arm_smmu_free_reserved_iova_domain(struct iommu_domain *domain)
 	mutex_unlock(&smmu_domain->reserved_mutex);
 }
 
+static int arm_smmu_get_single_reserved(struct iommu_domain *domain,
+					phys_addr_t addr, int prot,
+					dma_addr_t *iova)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	unsigned long order = __ffs(domain->ops->pgsize_bitmap);
+	size_t page_size = 1 << order;
+	phys_addr_t mask = page_size - 1;
+	phys_addr_t aligned_addr = addr & ~mask;
+	phys_addr_t offset = addr - aligned_addr;
+	struct arm_smmu_reserved_binding *b;
+	struct iova *p_iova;
+	struct iova_domain *iovad = smmu_domain->reserved_iova_domain;
+	int ret;
+
+	if (!iovad)
+		return -EINVAL;
+
+	mutex_lock(&smmu_domain->reserved_mutex);
+
+	b = find_reserved_binding(smmu_domain, aligned_addr, page_size);
+	if (b) {
+		*iova = b->iova + offset;
+		kref_get(&b->kref);
+		ret = 0;
+		goto unlock;
+	}
+
+	/* there is no existing reserved iova for this pa */
+	p_iova = alloc_iova(iovad, 1, iovad->dma_32bit_pfn, true);
+	if (!p_iova) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+	*iova = p_iova->pfn_lo << order;
+
+	b = kzalloc(sizeof(*b), GFP_KERNEL);
+	if (!b) {
+		ret = -ENOMEM;
+		goto free_iova_unlock;
+	}
+
+	ret = arm_smmu_map(domain, *iova, aligned_addr, page_size, prot);
+	if (ret)
+		goto free_binding_iova_unlock;
+
+	kref_init(&b->kref);
+	kref_get(&b->kref);
+	b->domain = smmu_domain;
+	b->addr = aligned_addr;
+	b->iova = *iova;
+	b->size = page_size;
+
+	link_reserved_binding(smmu_domain, b);
+
+	*iova += offset;
+	goto unlock;
+
+free_binding_iova_unlock:
+	kfree(b);
+free_iova_unlock:
+	free_iova(smmu_domain->reserved_iova_domain, *iova >> order);
+unlock:
+	mutex_unlock(&smmu_domain->reserved_mutex);
+	return ret;
+}
+
+/* called with reserved_mutex locked */
+static void reserved_binding_release(struct kref *kref)
+{
+	struct arm_smmu_reserved_binding *b =
+		container_of(kref, struct arm_smmu_reserved_binding, kref);
+	struct arm_smmu_domain *smmu_domain = b->domain;
+	struct iommu_domain *d = &smmu_domain->domain;
+	unsigned long order = __ffs(b->size);
+
+
+	arm_smmu_unmap(d, b->iova, b->size);
+	free_iova(smmu_domain->reserved_iova_domain, b->iova >> order);
+	unlink_reserved_binding(smmu_domain, b);
+	kfree(b);
+}
+
+static void arm_smmu_put_single_reserved(struct iommu_domain *domain,
+					 dma_addr_t iova)
+{
+	unsigned long order;
+	phys_addr_t aligned_addr;
+	dma_addr_t aligned_iova, page_size, mask, offset;
+	struct arm_smmu_reserved_binding *b;
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	order = __ffs(domain->ops->pgsize_bitmap);
+	page_size = (uint64_t)1 << order;
+	mask = page_size - 1;
+
+	aligned_iova = iova & ~mask;
+	offset = iova - aligned_iova;
+
+	aligned_addr = iommu_iova_to_phys(domain, aligned_iova);
+
+	mutex_lock(&smmu_domain->reserved_mutex);
+
+	b = find_reserved_binding(smmu_domain, aligned_addr, page_size);
+	if (!b)
+		goto unlock;
+	kref_put(&b->kref, reserved_binding_release);
+
+unlock:
+	mutex_unlock(&smmu_domain->reserved_mutex);
+}
+
 static struct iommu_ops arm_smmu_ops = {
 	.capable		= arm_smmu_capable,
 	.domain_alloc		= arm_smmu_domain_alloc,
@@ -1580,6 +1692,8 @@ static struct iommu_ops arm_smmu_ops = {
 	.domain_set_attr	= arm_smmu_domain_set_attr,
 	.alloc_reserved_iova_domain = arm_smmu_alloc_reserved_iova_domain,
 	.free_reserved_iova_domain = arm_smmu_free_reserved_iova_domain,
+	.get_single_reserved	= arm_smmu_get_single_reserved,
+	.put_single_reserved	= arm_smmu_put_single_reserved,
 	/* Page size bitmap, restricted during device attach */
 	.pgsize_bitmap		= -1UL,
 };
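
---

For context, here is a minimal usage sketch (not part of the patch) of
how a caller could drive these new callbacks through the
iommu_get/put_single_reserved wrappers that earlier patches in this
series add to the IOMMU API. The helper names and the MSI-doorbell use
case are illustrative assumptions, not code from the series:

	#include <linux/iommu.h>

	/*
	 * Illustrative only: map the page containing an MSI doorbell
	 * into the reserved IOVA domain and retrieve the IOVA to
	 * program into the device.
	 */
	static int example_bind_doorbell(struct iommu_domain *domain,
					 phys_addr_t doorbell_pa,
					 dma_addr_t *doorbell_iova)
	{
		/*
		 * Maps the page containing doorbell_pa, or takes an
		 * extra reference if that PA is already bound, and
		 * returns the matching IOVA, offset included.
		 */
		return iommu_get_single_reserved(domain, doorbell_pa,
						 IOMMU_READ | IOMMU_WRITE,
						 doorbell_iova);
	}

	static void example_unbind_doorbell(struct iommu_domain *domain,
					    dma_addr_t doorbell_iova)
	{
		/*
		 * Drops one reference; the mapping and the IOVA are
		 * only released by the driver once the last user of
		 * the binding has called put.
		 */
		iommu_put_single_reserved(domain, doorbell_iova);
	}

Because the bindings are refcounted and looked up by aligned PA,
several callers sharing one doorbell page get the same IOVA back and
the mapping persists until the last of them releases it.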