From patchwork Thu Feb 11 14:34:15 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8281051
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, alex.williamson@redhat.com,
    will.deacon@arm.com, joro@8bytes.org, tglx@linutronix.de,
    jason@lakedaemon.net, marc.zyngier@arm.com, christoffer.dall@linaro.org,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    kvm@vger.kernel.org
Cc: Thomas.Lendacky@amd.com, brijesh.singh@amd.com, patches@linaro.org,
    Manish.Jaggi@caviumnetworks.com, p.fedin@samsung.com,
    linux-kernel@vger.kernel.org, Bharat.Bhushan@freescale.com,
    iommu@lists.linux-foundation.org, pranav.sawargaonkar@gmail.com,
    leo.duran@amd.com, suravee.suthikulpanit@amd.com, sherry.hurwitz@amd.com
Subject: [RFC v2 08/15] iommu/arm-smmu: implement iommu_get/put_single_reserved
Date: Thu, 11 Feb 2016 14:34:15 +0000
Message-Id: <1455201262-5259-9-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1455201262-5259-1-git-send-email-eric.auger@linaro.org>
References: <1455201262-5259-1-git-send-email-eric.auger@linaro.org>

Implement the iommu_get/put_single_reserved API in arm-smmu. To track
which physical addresses are already mapped, we use an RB tree indexed
by PA.
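The binding lookup below relies on helpers introduced earlier in the series
(find_reserved_binding(), link_reserved_binding(), unlink_reserved_binding()),
which are not part of this patch. Purely as an illustration, an RB-tree lookup
keyed by the aligned physical address could look like the sketch below; the
rb_node/rb_root field names and the helper signature are assumptions, not code
quoted from the series.

/*
 * Illustrative sketch only -- not code from this series.  It assumes
 * struct arm_smmu_domain carries an rb_root named reserved_binding_list
 * and that the binding embeds an rb_node named "node"; the real helpers
 * are added by an earlier patch of the series.
 */
#include <linux/rbtree.h>
#include <linux/kref.h>

struct arm_smmu_reserved_binding {
        struct kref             kref;
        struct rb_node          node;   /* assumed field name */
        struct arm_smmu_domain  *domain;
        phys_addr_t             addr;   /* aligned PA: the tree key */
        dma_addr_t              iova;
        size_t                  size;
};

static struct arm_smmu_reserved_binding *
find_reserved_binding(struct arm_smmu_domain *d, phys_addr_t start, size_t size)
{
        struct rb_node *node = d->reserved_binding_list.rb_node; /* assumed */

        while (node) {
                struct arm_smmu_reserved_binding *b =
                        rb_entry(node, struct arm_smmu_reserved_binding, node);

                if (start + size <= b->addr)
                        node = node->rb_left;
                else if (start >= b->addr + b->size)
                        node = node->rb_right;
                else
                        return b;       /* [start, start + size) overlaps b */
        }
        return NULL;
}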
Signed-off-by: Eric Auger

---
v1 -> v2:
- previously in vfio_iommu_type1.c
---
 drivers/iommu/arm-smmu.c | 114 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 114 insertions(+)

diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 729a4c6..9961bfd 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -1563,6 +1563,118 @@ static void arm_smmu_free_reserved_iova_domain(struct iommu_domain *domain)
 	mutex_unlock(&smmu_domain->reserved_mutex);
 }
 
+static int arm_smmu_get_single_reserved(struct iommu_domain *domain,
+					phys_addr_t addr, int prot,
+					dma_addr_t *iova)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	unsigned long order = __ffs(domain->ops->pgsize_bitmap);
+	size_t page_size = 1 << order;
+	phys_addr_t mask = page_size - 1;
+	phys_addr_t aligned_addr = addr & ~mask;
+	phys_addr_t offset = addr - aligned_addr;
+	struct arm_smmu_reserved_binding *b;
+	struct iova *p_iova;
+	struct iova_domain *iovad = smmu_domain->reserved_iova_domain;
+	int ret;
+
+	if (!iovad)
+		return -EINVAL;
+
+	mutex_lock(&smmu_domain->reserved_mutex);
+
+	b = find_reserved_binding(smmu_domain, aligned_addr, page_size);
+	if (b) {
+		*iova = b->iova + offset;
+		kref_get(&b->kref);
+		ret = 0;
+		goto unlock;
+	}
+
+	/* there is no existing reserved iova for this pa */
+	p_iova = alloc_iova(iovad, 1, iovad->dma_32bit_pfn, true);
+	if (!p_iova) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+	*iova = p_iova->pfn_lo << order;
+
+	b = kzalloc(sizeof(*b), GFP_KERNEL);
+	if (!b) {
+		ret = -ENOMEM;
+		goto free_iova_unlock;
+	}
+
+	ret = arm_smmu_map(domain, *iova, aligned_addr, page_size, prot);
+	if (ret)
+		goto free_binding_iova_unlock;
+
+	kref_init(&b->kref);
+	kref_get(&b->kref);
+	b->domain = smmu_domain;
+	b->addr = aligned_addr;
+	b->iova = *iova;
+	b->size = page_size;
+
+	link_reserved_binding(smmu_domain, b);
+
+	*iova += offset;
+	goto unlock;
+
+free_binding_iova_unlock:
+	kfree(b);
+free_iova_unlock:
+	free_iova(smmu_domain->reserved_iova_domain, *iova >> order);
+unlock:
+	mutex_unlock(&smmu_domain->reserved_mutex);
+	return ret;
+}
+
+/* called with reserved_mutex locked */
+static void reserved_binding_release(struct kref *kref)
+{
+	struct arm_smmu_reserved_binding *b =
+		container_of(kref, struct arm_smmu_reserved_binding, kref);
+	struct arm_smmu_domain *smmu_domain = b->domain;
+	struct iommu_domain *d = &smmu_domain->domain;
+	unsigned long order = __ffs(b->size);
+
+
+	arm_smmu_unmap(d, b->iova, b->size);
+	free_iova(smmu_domain->reserved_iova_domain, b->iova >> order);
+	unlink_reserved_binding(smmu_domain, b);
+	kfree(b);
+}
+
+static void arm_smmu_put_single_reserved(struct iommu_domain *domain,
+					 dma_addr_t iova)
+{
+	unsigned long order;
+	phys_addr_t aligned_addr;
+	dma_addr_t aligned_iova, page_size, mask, offset;
+	struct arm_smmu_reserved_binding *b;
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	order = __ffs(domain->ops->pgsize_bitmap);
+	page_size = (uint64_t)1 << order;
+	mask = page_size - 1;
+
+	aligned_iova = iova & ~mask;
+	offset = iova - aligned_iova;
+
+	aligned_addr = iommu_iova_to_phys(domain, aligned_iova);
+
+	mutex_lock(&smmu_domain->reserved_mutex);
+
+	b = find_reserved_binding(smmu_domain, aligned_addr, page_size);
+	if (!b)
+		goto unlock;
+	kref_put(&b->kref, reserved_binding_release);
+
+unlock:
+	mutex_unlock(&smmu_domain->reserved_mutex);
+}
+
 static struct iommu_ops arm_smmu_ops = {
 	.capable		= arm_smmu_capable,
 	.domain_alloc		= arm_smmu_domain_alloc,
@@ -1580,6 +1692,8 @@ static struct iommu_ops arm_smmu_ops = {
 	.domain_set_attr	= arm_smmu_domain_set_attr,
 	.alloc_reserved_iova_domain = arm_smmu_alloc_reserved_iova_domain,
 	.free_reserved_iova_domain = arm_smmu_free_reserved_iova_domain,
+	.get_single_reserved	= arm_smmu_get_single_reserved,
+	.put_single_reserved	= arm_smmu_put_single_reserved,
 	/* Page size bitmap, restricted during device attach */
 	.pgsize_bitmap		= -1UL,
 };
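For reference, a caller would normally reach these callbacks through the
iommu_get/put_single_reserved wrappers named in the subject line rather than
through arm-smmu directly. The fragment below is only a usage sketch under
that assumption; the wrapper prototypes are inferred from the callbacks above
and are not quoted from the series.

/*
 * Usage sketch (assumption: iommu_get/put_single_reserved() simply
 * forward to the domain ops implemented in this patch).  The helper
 * name example_map_doorbell() is hypothetical.
 */
static int example_map_doorbell(struct iommu_domain *domain,
                                phys_addr_t doorbell_pa, dma_addr_t *iova)
{
        int ret;

        /* Map the doorbell PA onto a reserved IOVA, or reuse an existing binding. */
        ret = iommu_get_single_reserved(domain, doorbell_pa,
                                        IOMMU_READ | IOMMU_WRITE, iova);
        if (ret)
                return ret;

        /* ... program the device/MSI with *iova ... */

        /* Drop the reference once the binding is no longer needed. */
        iommu_put_single_reserved(domain, *iova);
        return 0;
}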