From patchwork Thu Apr 30 14:34:15 2020
X-Patchwork-Submitter: Jean-Philippe Brucker <jean-philippe@linaro.org>
X-Patchwork-Id: 11520531
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
	linux-mm@kvack.org
Cc: joro@8bytes.org, catalin.marinas@arm.com, will@kernel.org,
	robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com,
	Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com,
	christian.koenig@amd.com, felix.kuehling@amd.com,
	zhangfei.gao@linaro.org, jgg@ziepe.ca, xuzaibo@huawei.com,
	fenghua.yu@intel.com, hch@infradead.org,
	Jean-Philippe Brucker <jean-philippe@linaro.org>
Subject: [PATCH v6 16/25] iommu/arm-smmu-v3: Add SVA device feature
Date: Thu, 30 Apr 2020 16:34:15 +0200
Message-Id: <20200430143424.2787566-17-jean-philippe@linaro.org>
In-Reply-To: <20200430143424.2787566-1-jean-philippe@linaro.org>
References: <20200430143424.2787566-1-jean-philippe@linaro.org>

Implement the IOMMU device feature callbacks to support the SVA feature.
At the moment dev_has_feat() returns false since I/O Page Faults aren't
implemented yet.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
 drivers/iommu/arm-smmu-v3.c | 125 ++++++++++++++++++++++++++++++++++++
 1 file changed, 125 insertions(+)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 9b90cc57a609b..c7942d0540599 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -700,6 +700,8 @@ struct arm_smmu_master {
 	u32				*sids;
 	unsigned int			num_sids;
 	bool				ats_enabled;
+	bool				sva_enabled;
+	struct list_head		bonds;
 	unsigned int			ssid_bits;
 };
 
@@ -738,6 +740,7 @@ struct arm_smmu_option_prop {
 
 static DEFINE_XARRAY_ALLOC1(asid_xa);
 static DEFINE_SPINLOCK(contexts_lock);
+static DEFINE_MUTEX(arm_smmu_sva_lock);
 
 static struct arm_smmu_option_prop arm_smmu_options[] = {
 	{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
@@ -3003,6 +3006,19 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 	master = dev_iommu_priv_get(dev);
 	smmu = master->smmu;
 
+	/*
+	 * Checking that SVA is disabled ensures that this device isn't bound to
+	 * any mm, and can be safely detached from its old domain. Bonds cannot
+	 * be removed concurrently since we're holding the group mutex.
+	 */
+	mutex_lock(&arm_smmu_sva_lock);
+	if (master->sva_enabled) {
+		mutex_unlock(&arm_smmu_sva_lock);
+		dev_err(dev, "cannot attach - SVA enabled\n");
+		return -EBUSY;
+	}
+	mutex_unlock(&arm_smmu_sva_lock);
+
 	arm_smmu_detach_dev(master);
 
 	mutex_lock(&smmu_domain->init_mutex);
@@ -3151,6 +3167,7 @@ static int arm_smmu_add_device(struct device *dev)
 	master->smmu = smmu;
 	master->sids = fwspec->ids;
 	master->num_sids = fwspec->num_ids;
+	INIT_LIST_HEAD(&master->bonds);
 	dev_iommu_priv_set(dev, master);
 
 	/* Check the SIDs are in range of the SMMU and our stream table */
@@ -3220,6 +3237,7 @@ static void arm_smmu_remove_device(struct device *dev)
 
 	master = dev_iommu_priv_get(dev);
 	smmu = master->smmu;
+	WARN_ON(master->sva_enabled);
 	arm_smmu_detach_dev(master);
 	iommu_group_remove_device(dev);
 	iommu_device_unlink(&smmu->iommu, dev);
@@ -3339,6 +3357,109 @@ static void arm_smmu_get_resv_regions(struct device *dev,
 	iommu_dma_get_resv_regions(dev, head);
 }
 
+static bool arm_smmu_iopf_supported(struct arm_smmu_master *master)
+{
+	return false;
+}
+
+static bool arm_smmu_dev_has_feature(struct device *dev,
+				     enum iommu_dev_features feat)
+{
+	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+	if (!master)
+		return false;
+
+	switch (feat) {
+	case IOMMU_DEV_FEAT_SVA:
+		if (!(master->smmu->features & ARM_SMMU_FEAT_SVA))
+			return false;
+
+		/* SSID and IOPF support are mandatory for the moment */
+		return master->ssid_bits && arm_smmu_iopf_supported(master);
+	default:
+		return false;
+	}
+}
+
+static bool arm_smmu_dev_feature_enabled(struct device *dev,
+					 enum iommu_dev_features feat)
+{
+	bool enabled = false;
+	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+	if (!master)
+		return false;
+
+	switch (feat) {
+	case IOMMU_DEV_FEAT_SVA:
+		mutex_lock(&arm_smmu_sva_lock);
+		enabled = master->sva_enabled;
+		mutex_unlock(&arm_smmu_sva_lock);
+		return enabled;
+	default:
+		return false;
+	}
+}
+
+static int arm_smmu_dev_enable_sva(struct device *dev)
+{
+	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+	mutex_lock(&arm_smmu_sva_lock);
+	master->sva_enabled = true;
+	mutex_unlock(&arm_smmu_sva_lock);
+
+	return 0;
+}
+
+static int arm_smmu_dev_disable_sva(struct device *dev)
+{
+	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+	mutex_lock(&arm_smmu_sva_lock);
+	if (!list_empty(&master->bonds)) {
+		dev_err(dev, "cannot disable SVA, device is bound\n");
+		mutex_unlock(&arm_smmu_sva_lock);
+		return -EBUSY;
+	}
+	master->sva_enabled = false;
+	mutex_unlock(&arm_smmu_sva_lock);
+
+	return 0;
+}
+
+static int arm_smmu_dev_enable_feature(struct device *dev,
+				       enum iommu_dev_features feat)
+{
+	if (!arm_smmu_dev_has_feature(dev, feat))
+		return -ENODEV;
+
+	if (arm_smmu_dev_feature_enabled(dev, feat))
+		return -EBUSY;
+
+	switch (feat) {
+	case IOMMU_DEV_FEAT_SVA:
+		return arm_smmu_dev_enable_sva(dev);
+	default:
+		return -EINVAL;
+	}
+}
+
+static int arm_smmu_dev_disable_feature(struct device *dev,
+					enum iommu_dev_features feat)
+{
+	if (!arm_smmu_dev_feature_enabled(dev, feat))
+		return -EINVAL;
+
+	switch (feat) {
+	case IOMMU_DEV_FEAT_SVA:
+		return arm_smmu_dev_disable_sva(dev);
+	default:
+		return -EINVAL;
+	}
+}
+
 static struct iommu_ops arm_smmu_ops = {
 	.capable		= arm_smmu_capable,
 	.domain_alloc		= arm_smmu_domain_alloc,
@@ -3357,6 +3478,10 @@ static struct iommu_ops arm_smmu_ops = {
 	.of_xlate		= arm_smmu_of_xlate,
 	.get_resv_regions	= arm_smmu_get_resv_regions,
 	.put_resv_regions	= generic_iommu_put_resv_regions,
+	.dev_has_feat		= arm_smmu_dev_has_feature,
+	.dev_feat_enabled	= arm_smmu_dev_feature_enabled,
+	.dev_enable_feat	= arm_smmu_dev_enable_feature,
+	.dev_disable_feat	= arm_smmu_dev_disable_feature,
 	.pgsize_bitmap		= -1UL, /* Restricted during device attach */
 };
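
Editor's note, not part of the patch: for readers unfamiliar with the interface these
callbacks implement, the sketch below shows how a device driver would reach them
through the generic per-device feature API in include/linux/iommu.h
(iommu_dev_has_feature(), iommu_dev_enable_feature(), iommu_dev_disable_feature()).
The driver function names are hypothetical and error handling is trimmed; it
illustrates the call flow only and is not code from this series.

#include <linux/device.h>
#include <linux/iommu.h>

/*
 * Hypothetical consumer: enable SVA for an accelerator before binding it
 * to process address spaces. When the device sits behind an SMMUv3,
 * iommu_dev_has_feature() ends up in arm_smmu_dev_has_feature() and
 * iommu_dev_enable_feature() in arm_smmu_dev_enable_feature().
 */
static int my_accel_init_sva(struct device *dev)
{
	int ret;

	if (!iommu_dev_has_feature(dev, IOMMU_DEV_FEAT_SVA))
		return -ENODEV;

	ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
	if (ret)
		return ret;

	/*
	 * The device may now be bound to address spaces with
	 * iommu_sva_bind_device(). All bonds must be released before the
	 * feature is disabled again, otherwise arm_smmu_dev_disable_sva()
	 * returns -EBUSY.
	 */
	return 0;
}

static void my_accel_exit_sva(struct device *dev)
{
	iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_SVA);
}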