From patchwork Wed Mar 19 17:31:58 2025
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 14022873
From: Shameer Kolothum
Subject: [RFC PATCH v3 1/5] KVM: arm64: Introduce support to pin VMIDs
Date: Wed, 19 Mar 2025 17:31:58 +0000
Message-ID: <20250319173202.78988-2-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20250319173202.78988-1-shameerali.kolothum.thodi@huawei.com>

Introduce kvm_arm_pinned_vmid_get() and kvm_arm_pinned_vmid_put() to pin the
VMID associated with a KVM instance. This guarantees that the VMID remains
the same after a rollover. This is in preparation for introducing support in
the SMMUv3 driver to use the KVM VMID for stage-2 configuration in nested
mode.
Signed-off-by: Shameer Kolothum
---
 arch/arm64/include/asm/kvm_host.h |  3 ++
 arch/arm64/kvm/vmid.c             | 76 ++++++++++++++++++++++++++++++-
 2 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d919557af5e5..b6682f5d1b86 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -142,6 +142,7 @@ int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages);
 
 struct kvm_vmid {
 	atomic64_t id;
+	refcount_t pinned;
 };
 
 struct kvm_s2_mmu {
@@ -1261,6 +1262,8 @@ int __init kvm_arm_vmid_alloc_init(void);
 void __init kvm_arm_vmid_alloc_free(void);
 void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid);
 void kvm_arm_vmid_clear_active(void);
+int kvm_arm_pinned_vmid_get(struct kvm *kvm);
+void kvm_arm_pinned_vmid_put(struct kvm *kvm);
 
 static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)
 {
diff --git a/arch/arm64/kvm/vmid.c b/arch/arm64/kvm/vmid.c
index 7fe8ba1a2851..7bda189e927c 100644
--- a/arch/arm64/kvm/vmid.c
+++ b/arch/arm64/kvm/vmid.c
@@ -25,6 +25,10 @@ static unsigned long *vmid_map;
 static DEFINE_PER_CPU(atomic64_t, active_vmids);
 static DEFINE_PER_CPU(u64, reserved_vmids);
 
+static unsigned long max_pinned_vmids;
+static unsigned long nr_pinned_vmids;
+static unsigned long *pinned_vmid_map;
+
 #define VMID_MASK		(~GENMASK(kvm_arm_vmid_bits - 1, 0))
 #define VMID_FIRST_VERSION	(1UL << kvm_arm_vmid_bits)
@@ -47,7 +51,10 @@ static void flush_context(void)
 	int cpu;
 	u64 vmid;
 
-	bitmap_zero(vmid_map, NUM_USER_VMIDS);
+	if (pinned_vmid_map)
+		bitmap_copy(vmid_map, pinned_vmid_map, NUM_USER_VMIDS);
+	else
+		bitmap_zero(vmid_map, NUM_USER_VMIDS);
 
 	for_each_possible_cpu(cpu) {
 		vmid = atomic64_xchg_relaxed(&per_cpu(active_vmids, cpu), 0);
@@ -103,6 +110,14 @@ static u64 new_vmid(struct kvm_vmid *kvm_vmid)
 		return newvmid;
 	}
 
+	/*
+	 * If it is pinned, we can keep using it. Note that reserved
+	 * takes priority, because even if it is also pinned, we need to
+	 * update the generation into the reserved_vmids.
+	 */
+	if (refcount_read(&kvm_vmid->pinned))
+		return newvmid;
+
 	if (!__test_and_set_bit(vmid2idx(vmid), vmid_map)) {
 		atomic64_set(&kvm_vmid->id, newvmid);
 		return newvmid;
@@ -169,6 +184,55 @@ void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
 	raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags);
 }
 
+int kvm_arm_pinned_vmid_get(struct kvm *kvm)
+{
+	struct kvm_vmid *kvm_vmid;
+	u64 vmid;
+
+	if (!pinned_vmid_map || !kvm)
+		return -EINVAL;
+
+	kvm_vmid = &kvm->arch.mmu.vmid;
+
+	guard(raw_spinlock_irqsave)(&cpu_vmid_lock);
+	vmid = atomic64_read(&kvm_vmid->id);
+
+	if (refcount_inc_not_zero(&kvm_vmid->pinned))
+		return (vmid & ~VMID_MASK);
+
+	if (nr_pinned_vmids >= max_pinned_vmids)
+		return -EINVAL;
+
+	/*
+	 * If we went through one or more rollovers since that VMID was
+	 * used, make sure it is still valid, or generate a new one.
+	 */
+	if (!vmid_gen_match(vmid))
+		vmid = new_vmid(kvm_vmid);
+
+	nr_pinned_vmids++;
+	__set_bit(vmid2idx(vmid), pinned_vmid_map);
+	refcount_set(&kvm_vmid->pinned, 1);
+	return (vmid & ~VMID_MASK);
+}
+
+void kvm_arm_pinned_vmid_put(struct kvm *kvm)
+{
+	struct kvm_vmid *kvm_vmid;
+	u64 vmid;
+
+	if (!pinned_vmid_map || !kvm)
+		return;
+
+	kvm_vmid = &kvm->arch.mmu.vmid;
+	vmid = atomic64_read(&kvm_vmid->id);
+	guard(raw_spinlock_irqsave)(&cpu_vmid_lock);
+	if (refcount_dec_and_test(&kvm_vmid->pinned)) {
+		__clear_bit(vmid2idx(vmid), pinned_vmid_map);
+		nr_pinned_vmids--;
+	}
+}
+
 /*
  * Initialize the VMID allocator
  */
@@ -186,10 +250,20 @@ int __init kvm_arm_vmid_alloc_init(void)
 	if (!vmid_map)
 		return -ENOMEM;
 
+	pinned_vmid_map = bitmap_zalloc(NUM_USER_VMIDS, GFP_KERNEL);
+	nr_pinned_vmids = 0;
+
+	/*
+	 * Ensure we have at least one empty slot available after rollover,
+	 * even when the maximum number of VMIDs are pinned. VMID#0 is reserved.
+	 */
+	max_pinned_vmids = NUM_USER_VMIDS - num_possible_cpus() - 2;
+
 	return 0;
 }
 
 void __init kvm_arm_vmid_alloc_free(void)
 {
+	bitmap_free(pinned_vmid_map);
 	bitmap_free(vmid_map);
 }

From patchwork Wed Mar 19 17:31:59 2025
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 14022874
From: Shameer Kolothum
Subject: [RFC PATCH v3 2/5] iommufd/device: Associate a kvm pointer to iommufd_device
Date: Wed, 19 Mar 2025 17:31:59 +0000
Message-ID: <20250319173202.78988-3-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20250319173202.78988-1-shameerali.kolothum.thodi@huawei.com>

Add a struct kvm * argument to iommufd_device_bind() and associate it with
the idev if the bind is successful.
Signed-off-by: Shameer Kolothum
Reviewed-by: Jason Gunthorpe
---
 drivers/iommu/iommufd/device.c          | 5 ++++-
 drivers/iommu/iommufd/iommufd_private.h | 2 ++
 drivers/vfio/iommufd.c                  | 2 +-
 include/linux/iommufd.h                 | 4 +++-
 4 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
index dfd0898fb6c1..0f3983b16d56 100644
--- a/drivers/iommu/iommufd/device.c
+++ b/drivers/iommu/iommufd/device.c
@@ -146,6 +146,7 @@ void iommufd_device_destroy(struct iommufd_object *obj)
  * iommufd_device_bind - Bind a physical device to an iommu fd
  * @ictx: iommufd file descriptor
  * @dev: Pointer to a physical device struct
+ * @kvm: Pointer to struct kvm if device belongs to a KVM VM
  * @id: Output ID number to return to userspace for this device
  *
  * A successful bind establishes an ownership over the device and returns
@@ -159,7 +160,8 @@ void iommufd_device_destroy(struct iommufd_object *obj)
  * The caller must undo this with iommufd_device_unbind()
  */
 struct iommufd_device *iommufd_device_bind(struct iommufd_ctx *ictx,
-					   struct device *dev, u32 *id)
+					   struct device *dev, struct kvm *kvm,
+					   u32 *id)
 {
 	struct iommufd_device *idev;
 	struct iommufd_group *igroup;
@@ -209,6 +211,7 @@ struct iommufd_device *iommufd_device_bind(struct iommufd_ctx *ictx,
 	if (!iommufd_selftest_is_mock_dev(dev))
 		iommufd_ctx_get(ictx);
 	idev->dev = dev;
+	idev->kvm = kvm;
 	idev->enforce_cache_coherency =
 		device_iommu_capable(dev, IOMMU_CAP_ENFORCE_CACHE_COHERENCY);
 	/* The calling driver is a user until iommufd_device_unbind() */
diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h
index 0b1bafc7fd99..73201ff2c40e 100644
--- a/drivers/iommu/iommufd/iommufd_private.h
+++ b/drivers/iommu/iommufd/iommufd_private.h
@@ -398,6 +398,8 @@ struct iommufd_device {
 	struct list_head group_item;
 	/* always the physical device */
 	struct device *dev;
+	/* ..and kvm if available */
+	struct kvm *kvm;
 	bool enforce_cache_coherency;
 	/* protect iopf_enabled counter */
 	struct mutex iopf_lock;
diff --git a/drivers/vfio/iommufd.c b/drivers/vfio/iommufd.c
index 516294fd901b..664e3579ce0e 100644
--- a/drivers/vfio/iommufd.c
+++ b/drivers/vfio/iommufd.c
@@ -115,7 +115,7 @@ int vfio_iommufd_physical_bind(struct vfio_device *vdev,
 {
 	struct iommufd_device *idev;
 
-	idev = iommufd_device_bind(ictx, vdev->dev, out_device_id);
+	idev = iommufd_device_bind(ictx, vdev->dev, vdev->kvm, out_device_id);
 	if (IS_ERR(idev))
 		return PTR_ERR(idev);
 	vdev->iommufd_device = idev;
diff --git a/include/linux/iommufd.h b/include/linux/iommufd.h
index 11110c749200..ac8cca0190f4 100644
--- a/include/linux/iommufd.h
+++ b/include/linux/iommufd.h
@@ -22,6 +22,7 @@ struct iommufd_ctx;
 struct iommufd_device;
 struct iommufd_viommu_ops;
 struct page;
+struct kvm;
 
 enum iommufd_object_type {
 	IOMMUFD_OBJ_NONE,
@@ -49,7 +50,8 @@ struct iommufd_object {
 };
 
 struct iommufd_device *iommufd_device_bind(struct iommufd_ctx *ictx,
-					   struct device *dev, u32 *id);
+					   struct device *dev, struct kvm *kvm,
+					   u32 *id);
 void iommufd_device_unbind(struct iommufd_device *idev);
 int iommufd_device_attach(struct iommufd_device *idev, u32 *pt_id);

From patchwork Wed Mar 19 17:32:00 2025
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 14022875
From: Shameer Kolothum
Subject: [RFC PATCH v3 3/5] iommu/arm-smmu-v3-iommufd: Pass in kvm pointer to viommu_alloc
Date: Wed, 19 Mar 2025 17:32:00 +0000
Message-ID: <20250319173202.78988-4-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20250319173202.78988-1-shameerali.kolothum.thodi@huawei.com>

No functional changes. This will be used in a later patch to add support
for using the KVM VMID in the ARM SMMUv3 stage-2 configuration.

Signed-off-by: Shameer Kolothum
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c | 1 +
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h         | 1 +
 drivers/iommu/iommufd/viommu.c                      | 3 ++-
 include/linux/iommu.h                               | 4 +++-
 4 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c
index 6f8be1164167..ee2fac5c899b 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c
@@ -370,6 +370,7 @@ static const struct iommufd_viommu_ops arm_vsmmu_ops = {
 };
 
 struct iommufd_viommu *arm_vsmmu_alloc(struct device *dev,
+				       struct kvm *kvm,
 				       struct iommu_domain *parent,
 				       struct iommufd_ctx *ictx,
 				       unsigned int viommu_type)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 1f6696bc4f6c..9f49de52a700 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -1060,6 +1060,7 @@ struct arm_vsmmu {
 #if IS_ENABLED(CONFIG_ARM_SMMU_V3_IOMMUFD)
 void *arm_smmu_hw_info(struct device *dev, u32 *length, u32 *type);
 struct iommufd_viommu *arm_vsmmu_alloc(struct device *dev,
+				       struct kvm *kvm,
 				       struct iommu_domain *parent,
 				       struct iommufd_ctx *ictx,
 				       unsigned int viommu_type);
diff --git a/drivers/iommu/iommufd/viommu.c b/drivers/iommu/iommufd/viommu.c
index 69b88e8c7c26..e157d786f295 100644
--- a/drivers/iommu/iommufd/viommu.c
+++ b/drivers/iommu/iommufd/viommu.c
@@ -47,7 +47,8 @@ int iommufd_viommu_alloc_ioctl(struct iommufd_ucmd *ucmd)
 		goto out_put_hwpt;
 	}
 
-	viommu = ops->viommu_alloc(idev->dev, hwpt_paging->common.domain,
+	viommu = ops->viommu_alloc(idev->dev, idev->kvm,
+				   hwpt_paging->common.domain,
 				   ucmd->ictx, cmd->type);
 	if (IS_ERR(viommu)) {
 		rc = PTR_ERR(viommu);
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index cb01fe49f5df..2f61e0178cfa 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -44,6 +44,7 @@ struct iommu_dma_cookie;
 struct iommu_fault_param;
 struct iommufd_ctx;
 struct iommufd_viommu;
+struct kvm;
 
 #define IOMMU_FAULT_PERM_READ	(1 << 0) /* read */
 #define IOMMU_FAULT_PERM_WRITE	(1 << 1) /* write */
@@ -646,7 +647,8 @@ struct iommu_ops {
 	int (*def_domain_type)(struct device *dev);
 
 	struct iommufd_viommu *(*viommu_alloc)(
-		struct device *dev, struct iommu_domain *parent_domain,
+		struct device *dev, struct kvm *kvm,
+		struct iommu_domain *parent_domain,
 		struct iommufd_ctx *ictx, unsigned int viommu_type);
 
 	const struct iommu_domain_ops *default_domain_ops;

From patchwork Wed Mar 19 17:32:01 2025
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 14022876
From: Shameer Kolothum
Subject: [RFC PATCH v3 4/5] iommu/arm-smmu-v3-iommufd: Use KVM VMID for s2 stage
Date: Wed, 19 Mar 2025 17:32:01 +0000
Message-ID: <20250319173202.78988-5-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20250319173202.78988-1-shameerali.kolothum.thodi@huawei.com>

If KVM is available, make use of the KVM pinned VMID on BTM-enabled systems
to set the stage-2 VMID for nested domains.
Signed-off-by: Shameer Kolothum
---
 .../arm/arm-smmu-v3/arm-smmu-v3-iommufd.c   | 53 +++++++++++++++++--
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  1 +
 2 files changed, 50 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c
index ee2fac5c899b..79fcb903741f 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-iommufd.c
@@ -3,6 +3,7 @@
  * Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES
  */
 
+#include <linux/kvm_host.h>
 #include <uapi/linux/iommufd.h>
 
 #include "arm-smmu-v3.h"
@@ -39,6 +40,48 @@ arm_smmu_get_msi_mapping_domain(struct iommu_domain *domain)
 	return &nested_domain->vsmmu->s2_parent->domain;
 }
 
+static int arm_vsmmu_alloc_vmid(struct arm_smmu_device *smmu, struct kvm *kvm,
+				bool *kvm_used)
+{
+#ifdef CONFIG_KVM
+	/*
+	 * There can only be one allocator for VMIDs active at once. If BTM is
+	 * turned on then KVM's allocator always supplies the VMID, and the
+	 * VMID is matched by CPU invalidation of the KVM S2. Right now there
+	 * is no API to get an unused VMID from KVM so this also means BTM
+	 * systems cannot support S2 without an associated KVM.
+	 */
+	if ((smmu->features & ARM_SMMU_FEAT_BTM)) {
+		int vmid;
+
+		if (!kvm || !kvm_get_kvm_safe(kvm))
+			return -EOPNOTSUPP;
+		vmid = kvm_arm_pinned_vmid_get(kvm);
+		if (vmid < 0)
+			kvm_put_kvm(kvm);
+		else
+			*kvm_used = true;
+		return vmid;
+	}
+#endif
+	return ida_alloc_range(&smmu->vmid_map, 1, (1 << smmu->vmid_bits) - 1,
+			       GFP_KERNEL);
+}
+
+static void arm_vsmmu_free_vmid(struct arm_smmu_device *smmu, u16 vmid,
+				struct kvm *kvm)
+{
+#ifdef CONFIG_KVM
+	if ((smmu->features & ARM_SMMU_FEAT_BTM)) {
+		if (kvm) {
+			kvm_arm_pinned_vmid_put(kvm);
+			return kvm_put_kvm(kvm);
+		}
+	}
+#endif
+	ida_free(&smmu->vmid_map, vmid);
+}
+
 static void arm_vsmmu_destroy(struct iommufd_viommu *viommu)
 {
 	struct arm_vsmmu *vsmmu = container_of(viommu, struct arm_vsmmu, core);
@@ -53,7 +96,7 @@ static void arm_vsmmu_destroy(struct iommufd_viommu *viommu)
 	list_del(&vsmmu->vsmmus_elm);
 	spin_unlock_irqrestore(&vsmmu->s2_parent->vsmmus.lock, flags);
 	arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
-	ida_free(&smmu->vmid_map, vsmmu->vmid);
+	arm_vsmmu_free_vmid(smmu, vsmmu->vmid, vsmmu->kvm);
 }
 
 static void arm_smmu_make_nested_cd_table_ste(
@@ -379,6 +422,7 @@ struct iommufd_viommu *arm_vsmmu_alloc(struct device *dev,
 		iommu_get_iommu_dev(dev, struct arm_smmu_device, iommu);
 	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
 	struct arm_smmu_domain *s2_parent = to_smmu_domain(parent);
+	bool kvm_used = false;
 	struct arm_vsmmu *vsmmu;
 	unsigned long flags;
 	int vmid;
@@ -409,21 +453,22 @@ struct iommufd_viommu *arm_vsmmu_alloc(struct device *dev,
 	    !(smmu->features & ARM_SMMU_FEAT_S2FWB))
 		return ERR_PTR(-EOPNOTSUPP);
 
-	vmid = ida_alloc_range(&smmu->vmid_map, 1, (1 << smmu->vmid_bits) - 1,
-			       GFP_KERNEL);
+	vmid = arm_vsmmu_alloc_vmid(smmu, kvm, &kvm_used);
 	if (vmid < 0)
 		return ERR_PTR(vmid);
 
 	vsmmu = iommufd_viommu_alloc(ictx, struct arm_vsmmu, core,
				     &arm_vsmmu_ops);
 	if (IS_ERR(vsmmu)) {
-		ida_free(&smmu->vmid_map, vmid);
+		arm_vsmmu_free_vmid(smmu, vmid, kvm);
 		return ERR_CAST(vsmmu);
 	}
 
 	vsmmu->smmu = smmu;
 	vsmmu->vmid = (u16)vmid;
 	vsmmu->s2_parent = s2_parent;
+	if (kvm_used)
+		vsmmu->kvm = kvm;
 
 	spin_lock_irqsave(&s2_parent->vsmmus.lock, flags);
 	list_add_tail(&vsmmu->vsmmus_elm, &s2_parent->vsmmus.list);
 	spin_unlock_irqrestore(&s2_parent->vsmmus.lock, flags);
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 9f49de52a700..5890c233f73b 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -1053,6 +1053,7 @@ struct arm_vsmmu {
 	struct arm_smmu_device *smmu;
 	struct arm_smmu_domain *s2_parent;
 	u16 vmid;
+	struct kvm *kvm;
 	struct list_head vsmmus_elm; /* arm_smmu_domain::vsmmus::list */
 };

From patchwork Wed Mar 19 17:32:02 2025
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 14022878
From: Shameer Kolothum
Subject: [RFC PATCH v3 5/5] iommu/arm-smmu-v3: Enable broadcast TLB maintenance
Date: Wed, 19 Mar 2025 17:32:02 +0000
Message-ID: <20250319173202.78988-6-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20250319173202.78988-1-shameerali.kolothum.thodi@huawei.com>

From: Jean-Philippe Brucker

The SMMUv3 can handle invalidation targeted at TLB entries with shared
ASIDs. If the implementation supports broadcast TLB maintenance (BTM),
enable it and keep track of it in a feature bit. The SMMU will then be
affected by inner-shareable TLB invalidations from other agents.

In order to avoid over-invalidation with stage-2 translation contexts,
enable BTM only when the SMMUv3 supports either S1 only, or both S1 and S2
translation contexts. This way the default domain will use stage-1, and
stage-2 will only be used for nested domain setup.

Signed-off-by: Jean-Philippe Brucker
[Shameer: Enable BTM only if S1 is supported]
Signed-off-by: Shameer Kolothum
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 24 +++++++++++++++++++--
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  1 +
 2 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index addc6308742b..06a13d78286a 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -4119,11 +4119,14 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu)
 	writel_relaxed(reg, smmu->base + ARM_SMMU_CR1);
 
 	/* CR2 (random crap) */
-	reg = CR2_PTM | CR2_RECINVSID;
+	reg = CR2_RECINVSID;
 
 	if (smmu->features & ARM_SMMU_FEAT_E2H)
 		reg |= CR2_E2H;
 
+	if (!(smmu->features & ARM_SMMU_FEAT_BTM))
+		reg |= CR2_PTM;
+
 	writel_relaxed(reg, smmu->base + ARM_SMMU_CR2);
 
 	/* Stream table */
@@ -4289,6 +4292,7 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 {
 	u32 reg;
 	bool coherent = smmu->features & ARM_SMMU_FEAT_COHERENCY;
+	bool vhe = cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN);
 
 	/* IDR0 */
 	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0);
@@ -4341,7 +4345,7 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 
 	if (reg & IDR0_HYP) {
 		smmu->features |= ARM_SMMU_FEAT_HYP;
-		if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN))
+		if (vhe)
 			smmu->features |= ARM_SMMU_FEAT_E2H;
 	}
@@ -4368,6 +4372,22 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	if (reg & IDR0_S2P)
 		smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
 
+	/*
+	 * If S1 is supported, verify that BTM can be enabled. If S2 is available
+	 * and BTM is enabled, S2 will be used exclusively for nested domains,
+	 * ensuring a KVM VMID is obtained.
+	 * BTM is beneficial when the CPU shares page tables with SMMUv3 (e.g., vSVA).
+	 */
+	if (reg & IDR0_S1P) {
+		/*
+		 * If the CPU is using VHE, but the SMMU doesn't support it, the SMMU
+		 * will create TLB entries for NH-EL1 world and will miss the
+		 * broadcasted TLB invalidations that target EL2-E2H world. Don't enable
+		 * BTM in that case.
+		 */
+		if (reg & IDR0_BTM && (!vhe || reg & IDR0_HYP))
+			smmu->features |= ARM_SMMU_FEAT_BTM;
+	}
 
 	if (!(reg & (IDR0_S1P | IDR0_S2P))) {
 		dev_err(smmu->dev, "no translation support!\n");
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 5890c233f73b..f554b6aa52c9 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -39,6 +39,7 @@ struct arm_smmu_device;
 #define IDR0_HTTU			GENMASK(7, 6)
 #define IDR0_HTTU_ACCESS		1
 #define IDR0_HTTU_ACCESS_DIRTY		2
+#define IDR0_BTM			(1 << 5)
 #define IDR0_COHACC			(1 << 4)
 #define IDR0_TTF			GENMASK(3, 2)
 #define IDR0_TTF_AARCH64		2