From patchwork Thu Feb 11 14:34:12 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8280991
From: Eric Auger <eric.auger@linaro.org>
To: eric.auger@st.com, eric.auger@linaro.org, alex.williamson@redhat.com,
	will.deacon@arm.com, joro@8bytes.org, tglx@linutronix.de,
	jason@lakedaemon.net, marc.zyngier@arm.com, christoffer.dall@linaro.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org
Cc: Thomas.Lendacky@amd.com, brijesh.singh@amd.com, patches@linaro.org,
	Manish.Jaggi@caviumnetworks.com, p.fedin@samsung.com,
	linux-kernel@vger.kernel.org, Bharat.Bhushan@freescale.com,
	iommu@lists.linux-foundation.org, pranav.sawargaonkar@gmail.com,
	leo.duran@amd.com, suravee.suthikulpanit@amd.com, sherry.hurwitz@amd.com
Subject: [RFC v2 05/15] iommu/arm-smmu: implement alloc/free_reserved_iova_domain
Date: Thu, 11 Feb 2016 14:34:12 +0000
Message-Id: <1455201262-5259-6-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1455201262-5259-1-git-send-email-eric.auger@linaro.org>
References: <1455201262-5259-1-git-send-email-eric.auger@linaro.org>

Implement alloc/free_reserved_iova_domain for arm-smmu. We use the iova
allocator (iova.c). The iova_domain is attached to the arm_smmu_domain
struct. A mutex is introduced to protect it.
Signed-off-by: Eric Auger <eric.auger@linaro.org>

---

v1 -> v2:
- formerly implemented in vfio_iommu_type1
---
 drivers/iommu/arm-smmu.c | 87 +++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 72 insertions(+), 15 deletions(-)

diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index c8b7e71..f42341d 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -42,6 +42,7 @@
 #include <linux/platform_device.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
+#include <linux/iova.h>
 
 #include <linux/amba/bus.h>
 
@@ -347,6 +348,9 @@ struct arm_smmu_domain {
 	enum arm_smmu_domain_stage	stage;
 	struct mutex			init_mutex; /* Protects smmu pointer */
 	struct iommu_domain		domain;
+	struct iova_domain		*reserved_iova_domain;
+	/* protects reserved domain manipulation */
+	struct mutex			reserved_mutex;
 };
 
 static struct iommu_ops arm_smmu_ops;
@@ -975,6 +979,7 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
 		return NULL;
 
 	mutex_init(&smmu_domain->init_mutex);
+	mutex_init(&smmu_domain->reserved_mutex);
 	spin_lock_init(&smmu_domain->pgtbl_lock);
 
 	return &smmu_domain->domain;
@@ -1446,22 +1451,74 @@ out_unlock:
 	return ret;
 }
 
+static int arm_smmu_alloc_reserved_iova_domain(struct iommu_domain *domain,
+					       dma_addr_t iova, size_t size,
+					       unsigned long order)
+{
+	unsigned long granule, mask;
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	int ret = 0;
+
+	granule = 1UL << order;
+	mask = granule - 1;
+	if (iova & mask || (!size) || (size & mask))
+		return -EINVAL;
+
+	if (smmu_domain->reserved_iova_domain)
+		return -EEXIST;
+
+	mutex_lock(&smmu_domain->reserved_mutex);
+
+	smmu_domain->reserved_iova_domain =
+		kzalloc(sizeof(struct iova_domain), GFP_KERNEL);
+	if (!smmu_domain->reserved_iova_domain) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+
+	init_iova_domain(smmu_domain->reserved_iova_domain,
+			 granule, iova >> order, (iova + size - 1) >> order);
+
+unlock:
+	mutex_unlock(&smmu_domain->reserved_mutex);
+	return ret;
+}
+
+static void arm_smmu_free_reserved_iova_domain(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct iova_domain *iovad = smmu_domain->reserved_iova_domain;
+
+	if (!iovad)
+		return;
+
+	mutex_lock(&smmu_domain->reserved_mutex);
+
+	put_iova_domain(iovad);
+	kfree(iovad);
+
+	mutex_unlock(&smmu_domain->reserved_mutex);
+}
+
 static struct iommu_ops arm_smmu_ops = {
-	.capable	= arm_smmu_capable,
-	.domain_alloc	= arm_smmu_domain_alloc,
-	.domain_free	= arm_smmu_domain_free,
-	.attach_dev	= arm_smmu_attach_dev,
-	.detach_dev	= arm_smmu_detach_dev,
-	.map		= arm_smmu_map,
-	.unmap		= arm_smmu_unmap,
-	.map_sg		= default_iommu_map_sg,
-	.iova_to_phys	= arm_smmu_iova_to_phys,
-	.add_device	= arm_smmu_add_device,
-	.remove_device	= arm_smmu_remove_device,
-	.device_group	= arm_smmu_device_group,
-	.domain_get_attr	= arm_smmu_domain_get_attr,
-	.domain_set_attr	= arm_smmu_domain_set_attr,
-	.pgsize_bitmap	= -1UL, /* Restricted during device attach */
+	.capable		= arm_smmu_capable,
+	.domain_alloc		= arm_smmu_domain_alloc,
+	.domain_free		= arm_smmu_domain_free,
+	.attach_dev		= arm_smmu_attach_dev,
+	.detach_dev		= arm_smmu_detach_dev,
+	.map			= arm_smmu_map,
+	.unmap			= arm_smmu_unmap,
+	.map_sg			= default_iommu_map_sg,
+	.iova_to_phys		= arm_smmu_iova_to_phys,
+	.add_device		= arm_smmu_add_device,
+	.remove_device		= arm_smmu_remove_device,
+	.device_group		= arm_smmu_device_group,
+	.domain_get_attr	= arm_smmu_domain_get_attr,
+	.domain_set_attr	= arm_smmu_domain_set_attr,
+	.alloc_reserved_iova_domain = arm_smmu_alloc_reserved_iova_domain,
+	.free_reserved_iova_domain = arm_smmu_free_reserved_iova_domain,
+	/* Page size bitmap, restricted during device attach */
+	.pgsize_bitmap		= -1UL,
 };
 
 static void arm_smmu_device_reset(struct arm_smmu_device *smmu)
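
[Editor's note] For readers following the series, below is a minimal,
hypothetical usage sketch of the two new callbacks. It assumes IOMMU-API
wrappers named iommu_alloc_reserved_iova_domain()/
iommu_free_reserved_iova_domain() (expected to be introduced elsewhere in
this series, not present in mainline) that dispatch to the
.alloc_reserved_iova_domain/.free_reserved_iova_domain ops added above; the
window base, size and order values are illustrative only.

	/*
	 * Hypothetical example (not part of this patch): reserve a 1MB IOVA
	 * window at 0x08000000 with PAGE_SIZE granules on an attached domain,
	 * then release it.  The iommu_*_reserved_iova_domain() calls stand for
	 * the assumed IOMMU-API entry points that invoke the new ops.
	 */
	static int example_reserve_iova_window(struct iommu_domain *domain)
	{
		dma_addr_t base = 0x08000000;
		size_t size = SZ_1M;
		int ret;

		ret = iommu_alloc_reserved_iova_domain(domain, base, size,
						       PAGE_SHIFT);
		if (ret)
			return ret;

		/* ... allocate IOVAs and map MSI doorbells within the window ... */

		iommu_free_reserved_iova_domain(domain);
		return 0;
	}

As the -EEXIST check in arm_smmu_alloc_reserved_iova_domain() shows, a second
reservation on the same domain is rejected until the first window is freed.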