From patchwork Fri Feb 12 08:13:07 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8288171
From: Eric Auger <eric.auger@linaro.org>
To: eric.auger@st.com, eric.auger@linaro.org, alex.williamson@redhat.com,
	will.deacon@arm.com, joro@8bytes.org, tglx@linutronix.de,
	jason@lakedaemon.net, marc.zyngier@arm.com, christoffer.dall@linaro.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org
Cc: Thomas.Lendacky@amd.com, brijesh.singh@amd.com, patches@linaro.org,
	Manish.Jaggi@caviumnetworks.com, p.fedin@samsung.com,
	linux-kernel@vger.kernel.org, Bharat.Bhushan@freescale.com,
	iommu@lists.linux-foundation.org, pranav.sawargaonkar@gmail.com,
	leo.duran@amd.com, suravee.suthikulpanit@amd.com, sherry.hurwitz@amd.com
Subject: [RFC v3 05/15] iommu/arm-smmu: implement alloc/free_reserved_iova_domain
Date: Fri, 12 Feb 2016 08:13:07 +0000
Message-Id: <1455264797-2334-6-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1455264797-2334-1-git-send-email-eric.auger@linaro.org>
References: <1455264797-2334-1-git-send-email-eric.auger@linaro.org>

Implement alloc/free_reserved_iova_domain for arm-smmu. We use the iova
allocator (iova.c). The iova_domain is attached to the arm_smmu_domain
struct. A mutex is introduced to protect it.

Signed-off-by: Eric Auger <eric.auger@linaro.org>

---

v2 -> v3:
- select IOMMU_IOVA when ARM_SMMU or ARM_SMMU_V3 is set

v1 -> v2:
- formerly implemented in vfio_iommu_type1
---
 drivers/iommu/Kconfig    |  2 ++
 drivers/iommu/arm-smmu.c | 87 +++++++++++++++++++++++++++++++++++++++---------
 2 files changed, 74 insertions(+), 15 deletions(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index a1e75cb..1106528 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -289,6 +289,7 @@ config ARM_SMMU
 	bool "ARM Ltd. System MMU (SMMU) Support"
 	depends on (ARM64 || ARM) && MMU
 	select IOMMU_API
+	select IOMMU_IOVA
 	select IOMMU_IO_PGTABLE_LPAE
 	select ARM_DMA_USE_IOMMU if ARM
 	help
@@ -302,6 +303,7 @@ config ARM_SMMU_V3
 	bool "ARM Ltd. System MMU Version 3 (SMMUv3) Support"
System MMU Version 3 (SMMUv3) Support" depends on ARM64 && PCI select IOMMU_API + select IOMMU_IOVA select IOMMU_IO_PGTABLE_LPAE select GENERIC_MSI_IRQ_DOMAIN help diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c index c8b7e71..f42341d 100644 --- a/drivers/iommu/arm-smmu.c +++ b/drivers/iommu/arm-smmu.c @@ -42,6 +42,7 @@ #include #include #include +#include #include @@ -347,6 +348,9 @@ struct arm_smmu_domain { enum arm_smmu_domain_stage stage; struct mutex init_mutex; /* Protects smmu pointer */ struct iommu_domain domain; + struct iova_domain *reserved_iova_domain; + /* protects reserved domain manipulation */ + struct mutex reserved_mutex; }; static struct iommu_ops arm_smmu_ops; @@ -975,6 +979,7 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type) return NULL; mutex_init(&smmu_domain->init_mutex); + mutex_init(&smmu_domain->reserved_mutex); spin_lock_init(&smmu_domain->pgtbl_lock); return &smmu_domain->domain; @@ -1446,22 +1451,74 @@ out_unlock: return ret; } +static int arm_smmu_alloc_reserved_iova_domain(struct iommu_domain *domain, + dma_addr_t iova, size_t size, + unsigned long order) +{ + unsigned long granule, mask; + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + int ret = 0; + + granule = 1UL << order; + mask = granule - 1; + if (iova & mask || (!size) || (size & mask)) + return -EINVAL; + + if (smmu_domain->reserved_iova_domain) + return -EEXIST; + + mutex_lock(&smmu_domain->reserved_mutex); + + smmu_domain->reserved_iova_domain = + kzalloc(sizeof(struct iova_domain), GFP_KERNEL); + if (!smmu_domain->reserved_iova_domain) { + ret = -ENOMEM; + goto unlock; + } + + init_iova_domain(smmu_domain->reserved_iova_domain, + granule, iova >> order, (iova + size - 1) >> order); + +unlock: + mutex_unlock(&smmu_domain->reserved_mutex); + return ret; +} + +static void arm_smmu_free_reserved_iova_domain(struct iommu_domain *domain) +{ + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + struct iova_domain *iovad = smmu_domain->reserved_iova_domain; + + if (!iovad) + return; + + mutex_lock(&smmu_domain->reserved_mutex); + + put_iova_domain(iovad); + kfree(iovad); + + mutex_unlock(&smmu_domain->reserved_mutex); +} + static struct iommu_ops arm_smmu_ops = { - .capable = arm_smmu_capable, - .domain_alloc = arm_smmu_domain_alloc, - .domain_free = arm_smmu_domain_free, - .attach_dev = arm_smmu_attach_dev, - .detach_dev = arm_smmu_detach_dev, - .map = arm_smmu_map, - .unmap = arm_smmu_unmap, - .map_sg = default_iommu_map_sg, - .iova_to_phys = arm_smmu_iova_to_phys, - .add_device = arm_smmu_add_device, - .remove_device = arm_smmu_remove_device, - .device_group = arm_smmu_device_group, - .domain_get_attr = arm_smmu_domain_get_attr, - .domain_set_attr = arm_smmu_domain_set_attr, - .pgsize_bitmap = -1UL, /* Restricted during device attach */ + .capable = arm_smmu_capable, + .domain_alloc = arm_smmu_domain_alloc, + .domain_free = arm_smmu_domain_free, + .attach_dev = arm_smmu_attach_dev, + .detach_dev = arm_smmu_detach_dev, + .map = arm_smmu_map, + .unmap = arm_smmu_unmap, + .map_sg = default_iommu_map_sg, + .iova_to_phys = arm_smmu_iova_to_phys, + .add_device = arm_smmu_add_device, + .remove_device = arm_smmu_remove_device, + .device_group = arm_smmu_device_group, + .domain_get_attr = arm_smmu_domain_get_attr, + .domain_set_attr = arm_smmu_domain_set_attr, + .alloc_reserved_iova_domain = arm_smmu_alloc_reserved_iova_domain, + .free_reserved_iova_domain = arm_smmu_free_reserved_iova_domain, + /* Page size bitmap, restricted during device 
+	.pgsize_bitmap		= -1UL,
 };
 
 static void arm_smmu_device_reset(struct arm_smmu_device *smmu)
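
For context, a minimal caller-side sketch (not part of this patch) of how
the two new callbacks might be exercised. It assumes that earlier patches
in this series expose generic iommu_alloc_reserved_iova_domain() /
iommu_free_reserved_iova_domain() wrappers that dispatch to the matching
iommu_ops entries; the wrapper names and the example window below are
illustrative assumptions, not code from this series.

/* Hedged example: wrapper names and window parameters are assumptions. */
#include <linux/iommu.h>
#include <linux/sizes.h>

static int example_reserve_iova_window(struct iommu_domain *domain)
{
	/*
	 * Reserve a 1 MB IOVA window at 0x8000000 with a 4 kB granule
	 * (order 12). arm_smmu_alloc_reserved_iova_domain() returns
	 * -EINVAL if the base or size is not aligned to 1UL << order.
	 */
	return iommu_alloc_reserved_iova_domain(domain, 0x8000000, SZ_1M, 12);
}

static void example_release_iova_window(struct iommu_domain *domain)
{
	/* Tears down the iova_domain allocated above. */
	iommu_free_reserved_iova_domain(domain);
}

With order 12 the granule is 1UL << 12 = 4 kB, so the 1 MB window gives the
reserved allocator 256 page-sized slots, and a second allocation on the same
domain would hit the -EEXIST check in the arm-smmu callback.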