From patchwork Thu Feb 11 14:34:12 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8281091
From: Eric Auger <eric.auger@linaro.org>
To: eric.auger@st.com, eric.auger@linaro.org, alex.williamson@redhat.com,
	will.deacon@arm.com, joro@8bytes.org, tglx@linutronix.de,
	jason@lakedaemon.net, marc.zyngier@arm.com,
	christoffer.dall@linaro.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: suravee.suthikulpanit@amd.com, patches@linaro.org,
	linux-kernel@vger.kernel.org, Manish.Jaggi@caviumnetworks.com,
	Bharat.Bhushan@freescale.com, pranav.sawargaonkar@gmail.com,
	p.fedin@samsung.com, iommu@lists.linux-foundation.org,
	sherry.hurwitz@amd.com, brijesh.singh@amd.com, leo.duran@amd.com,
	Thomas.Lendacky@amd.com
Subject: [RFC v2 05/15] iommu/arm-smmu: implement alloc/free_reserved_iova_domain
Date: Thu, 11 Feb 2016 14:34:12 +0000
Message-Id: <1455201262-5259-6-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1455201262-5259-1-git-send-email-eric.auger@linaro.org>
References: <1455201262-5259-1-git-send-email-eric.auger@linaro.org>

Implement alloc/free_reserved_iova_domain for arm-smmu. We use the iova
allocator (iova.c). The iova_domain is attached to the arm_smmu_domain
struct. A mutex is introduced to protect it.

Signed-off-by: Eric Auger <eric.auger@linaro.org>

---

v1 -> v2:
- formerly implemented in vfio_iommu_type1
---
 drivers/iommu/arm-smmu.c | 87 +++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 72 insertions(+), 15 deletions(-)

diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index c8b7e71..f42341d 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -42,6 +42,7 @@
 #include <linux/interrupt.h>
 #include <linux/io.h>
 #include <linux/iommu.h>
+#include <linux/iova.h>
 #include <linux/module.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
@@ -347,6 +348,9 @@ struct arm_smmu_domain {
 	enum arm_smmu_domain_stage	stage;
 	struct mutex			init_mutex; /* Protects smmu pointer */
 	struct iommu_domain		domain;
+	struct iova_domain		*reserved_iova_domain;
+	/* protects reserved domain manipulation */
+	struct mutex			reserved_mutex;
 };
 
 static struct iommu_ops arm_smmu_ops;
@@ -975,6 +979,7 @@
 		return NULL;
 
 	mutex_init(&smmu_domain->init_mutex);
+	mutex_init(&smmu_domain->reserved_mutex);
 	spin_lock_init(&smmu_domain->pgtbl_lock);
 
 	return &smmu_domain->domain;
@@ -1446,22 +1451,74 @@ out_unlock:
 	return ret;
 }
 
+static int arm_smmu_alloc_reserved_iova_domain(struct iommu_domain *domain,
+					       dma_addr_t iova, size_t size,
+					       unsigned long order)
+{
+	unsigned long granule, mask;
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	int ret = 0;
+
+	granule = 1UL << order;
+	mask = granule - 1;
+	if (iova & mask || (!size) || (size & mask))
+		return -EINVAL;
+
+	if (smmu_domain->reserved_iova_domain)
+		return -EEXIST;
+
+	mutex_lock(&smmu_domain->reserved_mutex);
+
+	smmu_domain->reserved_iova_domain =
+		kzalloc(sizeof(struct iova_domain), GFP_KERNEL);
+	if (!smmu_domain->reserved_iova_domain) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+
+	init_iova_domain(smmu_domain->reserved_iova_domain,
+			 granule, iova >> order, (iova + size - 1) >> order);
+
+unlock:
+	mutex_unlock(&smmu_domain->reserved_mutex);
+	return ret;
+}
+
+static void arm_smmu_free_reserved_iova_domain(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct iova_domain *iovad = smmu_domain->reserved_iova_domain;
+
+	if (!iovad)
+		return;
+
+	mutex_lock(&smmu_domain->reserved_mutex);
+
+	put_iova_domain(iovad);
+	kfree(iovad);
+
+	mutex_unlock(&smmu_domain->reserved_mutex);
+}
+
 static struct iommu_ops arm_smmu_ops = {
-	.capable	= arm_smmu_capable,
-	.domain_alloc	= arm_smmu_domain_alloc,
-	.domain_free	= arm_smmu_domain_free,
-	.attach_dev	= arm_smmu_attach_dev,
-	.detach_dev	= arm_smmu_detach_dev,
-	.map		= arm_smmu_map,
-	.unmap		= arm_smmu_unmap,
-	.map_sg		= default_iommu_map_sg,
-	.iova_to_phys	= arm_smmu_iova_to_phys,
-	.add_device	= arm_smmu_add_device,
-	.remove_device	= arm_smmu_remove_device,
-	.device_group	= arm_smmu_device_group,
-	.domain_get_attr = arm_smmu_domain_get_attr,
-	.domain_set_attr = arm_smmu_domain_set_attr,
-	.pgsize_bitmap	= -1UL, /* Restricted during device attach */
+	.capable		= arm_smmu_capable,
+	.domain_alloc		= arm_smmu_domain_alloc,
+	.domain_free		= arm_smmu_domain_free,
+	.attach_dev		= arm_smmu_attach_dev,
+	.detach_dev		= arm_smmu_detach_dev,
+	.map			= arm_smmu_map,
+	.unmap			= arm_smmu_unmap,
+	.map_sg			= default_iommu_map_sg,
+	.iova_to_phys		= arm_smmu_iova_to_phys,
+	.add_device		= arm_smmu_add_device,
+	.remove_device		= arm_smmu_remove_device,
+	.device_group		= arm_smmu_device_group,
+	.domain_get_attr	= arm_smmu_domain_get_attr,
+	.domain_set_attr	= arm_smmu_domain_set_attr,
+	.alloc_reserved_iova_domain = arm_smmu_alloc_reserved_iova_domain,
+	.free_reserved_iova_domain = arm_smmu_free_reserved_iova_domain,
+	/* Page size bitmap, restricted during device attach */
+	.pgsize_bitmap		= -1UL,
 };
 
 static void arm_smmu_device_reset(struct arm_smmu_device *smmu)
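
For reference, a minimal caller-side sketch of the two new callbacks.
This is hypothetical, not part of the patch: the example_* wrappers and
the chosen base/size/order values are made up for illustration; only
the alloc_reserved_iova_domain/free_reserved_iova_domain ops and their
error semantics come from the code above.

#include <linux/iommu.h>
#include <linux/sizes.h>

/* Carve out a 1MB reserved IOVA window with a 4kB granule. */
static int example_setup_reserved_window(struct iommu_domain *domain)
{
	dma_addr_t base = SZ_2G;	/* arbitrary, granule-aligned IOVA base */
	size_t size = SZ_1M;		/* window size, multiple of the granule */
	unsigned long order = 12;	/* granule = 1UL << order = 4kB */

	if (!domain->ops->alloc_reserved_iova_domain)
		return -ENODEV;

	/*
	 * Returns -EINVAL if base or size is not aligned on the granule,
	 * -EEXIST if a reserved iova_domain was already allocated for
	 * this domain.
	 */
	return domain->ops->alloc_reserved_iova_domain(domain, base, size,
						       order);
}

static void example_teardown_reserved_window(struct iommu_domain *domain)
{
	if (domain->ops->free_reserved_iova_domain)
		domain->ops->free_reserved_iova_domain(domain);
}

Note that alloc only sets up the iova_domain allocator over [base,
base + size); allocating and mapping entries inside the window is left
to later patches in the series.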