From patchwork Thu Feb 8 15:18:31 2024
From: Shameer Kolothum
Subject: [RFC PATCH v2 1/7] KVM: Add generic infrastructure to support pinned VMIDs
Date: Thu, 8 Feb 2024 15:18:31 +0000
Message-ID: <20240208151837.35068-2-shameerali.kolothum.thodi@huawei.com>

Provide generic helper functions to get/put pinned VMIDs if the arch
supports it.

Signed-off-by: Shameer Kolothum
---
 include/linux/kvm_host.h | 18 ++++++++++++++++++
 virt/kvm/Kconfig         |  3 +++
 virt/kvm/kvm_main.c      | 23 +++++++++++++++++++++++
 3 files changed, 44 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 7e7fd25b09b3..610e239bea46 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2311,6 +2311,24 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
 }
 #endif /* CONFIG_KVM_XFER_TO_GUEST_WORK */
 
+#ifdef CONFIG_HAVE_KVM_PINNED_VMID
+int kvm_arch_pinned_vmid_get(struct kvm *kvm);
+void kvm_arch_pinned_vmid_put(struct kvm *kvm);
+#endif
+
+#ifdef CONFIG_HAVE_KVM_PINNED_VMID
+int kvm_pinned_vmid_get(struct kvm *kvm);
+void kvm_pinned_vmid_put(struct kvm *kvm);
+#else
+static inline int kvm_pinned_vmid_get(struct kvm *kvm)
+{
+	return -EINVAL;
+}
+
+static inline void kvm_pinned_vmid_put(struct kvm *kvm)
+{
+}
+#endif
 /*
  * If more than one page is being (un)accounted, @virt must be the address of
  * the first page of a block of pages what were allocated together (i.e
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 184dab4ee871..a3052c8e3ac4 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -108,3 +108,6 @@ config KVM_GENERIC_PRIVATE_MEM
 	select KVM_GENERIC_MEMORY_ATTRIBUTES
 	select KVM_PRIVATE_MEM
 	bool
+
+config HAVE_KVM_PINNED_VMID
+	bool
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 10bfc88a69f7..f84d6da5f464 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3918,6 +3918,29 @@ bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_wake_up);
 
+#ifdef CONFIG_HAVE_KVM_PINNED_VMID
+int kvm_pinned_vmid_get(struct kvm *kvm)
+{
+	int ret;
+
+	if (!kvm_get_kvm_safe(kvm))
+		return -ENOENT;
+	ret = kvm_arch_pinned_vmid_get(kvm);
+	if (ret < 0)
+		kvm_put_kvm(kvm);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kvm_pinned_vmid_get);
+
+void kvm_pinned_vmid_put(struct kvm *kvm)
+{
+	kvm_arch_pinned_vmid_put(kvm);
+	kvm_put_kvm(kvm);
+}
+EXPORT_SYMBOL_GPL(kvm_pinned_vmid_put);
+#endif
+
 #ifndef CONFIG_S390
 /*
  * Kick a sleeping VCPU, or a guest VCPU in guest mode, into host kernel mode.

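As an illustrative sketch only (not part of this series), a consumer driver
might use the generic helpers above roughly as follows; the function names
example_attach()/example_detach() are hypothetical:

	/* Hypothetical consumer of the generic pinned VMID helpers */
	static int example_attach(struct kvm *kvm)
	{
		int vmid = kvm_pinned_vmid_get(kvm);	/* also takes a reference on @kvm */

		if (vmid < 0)
			return vmid;	/* pinning unsupported or no VMID available */

		/* ... program the device with @vmid ... */
		return 0;
	}

	static void example_detach(struct kvm *kvm)
	{
		kvm_pinned_vmid_put(kvm);	/* releases the pin and the @kvm reference */
	}
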
From patchwork Thu Feb 8 15:18:32 2024
From: Shameer Kolothum
Subject: [RFC PATCH v2 2/7] KVM: arm64: Introduce support to pin VMIDs
Date: Thu, 8 Feb 2024 15:18:32 +0000
Message-ID: <20240208151837.35068-3-shameerali.kolothum.thodi@huawei.com>

Introduce kvm_arm_pinned_vmid_get() and kvm_arm_pinned_vmid_put() to pin
a VMID associated with a KVM instance. This guarantees that the VMID
remains the same after a rollover. This is in preparation for introducing
support in the SMMUv3 driver to use the KVM VMID for stage 2 configuration
in nested mode.

Signed-off-by: Shameer Kolothum
---
 arch/arm64/include/asm/kvm_host.h |  3 ++
 arch/arm64/kvm/vmid.c             | 84 ++++++++++++++++++++++++++++++-
 2 files changed, 86 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 21c57b812569..20fb00d29f48 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -141,6 +141,7 @@ int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages);
 
 struct kvm_vmid {
 	atomic64_t id;
+	refcount_t pinned;
 };
 
 struct kvm_s2_mmu {
@@ -1097,6 +1098,8 @@ int __init kvm_arm_vmid_alloc_init(void);
 void __init kvm_arm_vmid_alloc_free(void);
 bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid);
 void kvm_arm_vmid_clear_active(void);
+unsigned long kvm_arm_pinned_vmid_get(struct kvm_vmid *kvm_vmid);
+void kvm_arm_pinned_vmid_put(struct kvm_vmid *kvm_vmid);
 
 static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)
 {
diff --git a/arch/arm64/kvm/vmid.c b/arch/arm64/kvm/vmid.c
index 806223b7022a..0ffe24683071 100644
--- a/arch/arm64/kvm/vmid.c
+++ b/arch/arm64/kvm/vmid.c
@@ -25,6 +25,10 @@ static unsigned long *vmid_map;
 static DEFINE_PER_CPU(atomic64_t, active_vmids);
 static DEFINE_PER_CPU(u64, reserved_vmids);
 
+static unsigned long max_pinned_vmids;
+static unsigned long nr_pinned_vmids;
+static unsigned long *pinned_vmid_map;
+
 #define VMID_MASK		(~GENMASK(kvm_arm_vmid_bits - 1, 0))
 #define VMID_FIRST_VERSION	(1UL << kvm_arm_vmid_bits)
 
@@ -47,7 +51,10 @@ static void flush_context(void)
 	int cpu;
 	u64 vmid;
 
-	bitmap_zero(vmid_map, NUM_USER_VMIDS);
+	if (pinned_vmid_map)
+		bitmap_copy(vmid_map, pinned_vmid_map, NUM_USER_VMIDS);
+	else
+		bitmap_zero(vmid_map, NUM_USER_VMIDS);
 
 	for_each_possible_cpu(cpu) {
 		vmid = atomic64_xchg_relaxed(&per_cpu(active_vmids, cpu), 0);
@@ -103,6 +110,14 @@ static u64 new_vmid(struct kvm_vmid *kvm_vmid)
 		return newvmid;
 	}
 
+	/*
+	 * If it is pinned, we can keep using it. Note that reserved
+	 * takes priority, because even if it is also pinned, we need to
+	 * update the generation into the reserved_vmids.
+	 */
+	if (refcount_read(&kvm_vmid->pinned))
+		return newvmid;
+
 	if (!__test_and_set_bit(vmid2idx(vmid), vmid_map)) {
 		atomic64_set(&kvm_vmid->id, newvmid);
 		return newvmid;
@@ -174,6 +189,63 @@ bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
 	return updated;
 }
 
+unsigned long kvm_arm_pinned_vmid_get(struct kvm_vmid *kvm_vmid)
+{
+	unsigned long flags;
+	u64 vmid;
+
+	if (!pinned_vmid_map)
+		return 0;
+
+	raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
+
+	vmid = atomic64_read(&kvm_vmid->id);
+
+	if (refcount_inc_not_zero(&kvm_vmid->pinned))
+		goto out_unlock;
+
+	if (nr_pinned_vmids >= max_pinned_vmids) {
+		vmid = 0;
+		goto out_unlock;
+	}
+
+	/*
+	 * If we went through one or more rollover since that VMID was
+	 * used, make sure it is still valid, or generate a new one.
+	 */
+	if (!vmid_gen_match(vmid))
+		vmid = new_vmid(kvm_vmid);
+
+	nr_pinned_vmids++;
+	__set_bit(vmid2idx(vmid), pinned_vmid_map);
+	refcount_set(&kvm_vmid->pinned, 1);
+
+out_unlock:
+	raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags);
+
+	vmid &= ~VMID_MASK;
+
+	return vmid;
+}
+
+void kvm_arm_pinned_vmid_put(struct kvm_vmid *kvm_vmid)
+{
+	unsigned long flags;
+	u64 vmid = atomic64_read(&kvm_vmid->id);
+
+	if (!pinned_vmid_map)
+		return;
+
+	raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
+
+	if (refcount_dec_and_test(&kvm_vmid->pinned)) {
+		__clear_bit(vmid2idx(vmid), pinned_vmid_map);
+		nr_pinned_vmids--;
+	}
+
+	raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags);
+}
+
 /*
  * Initialize the VMID allocator
  */
@@ -191,10 +263,20 @@ int __init kvm_arm_vmid_alloc_init(void)
 	if (!vmid_map)
 		return -ENOMEM;
 
+	pinned_vmid_map = bitmap_zalloc(NUM_USER_VMIDS, GFP_KERNEL);
+	nr_pinned_vmids = 0;
+
+	/*
+	 * Ensure we have at least one empty slot available after rollover
+	 * and maximum number of VMIDs are pinned. VMID#0 is reserved.
+	 */
+	max_pinned_vmids = NUM_USER_VMIDS - num_possible_cpus() - 2;
+
 	return 0;
 }
 
 void __init kvm_arm_vmid_alloc_free(void)
 {
+	bitmap_free(pinned_vmid_map);
 	bitmap_free(vmid_map);
 }

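As an illustrative usage note (the caller context below is hypothetical),
kvm_arm_pinned_vmid_get() returns 0 when no VMID could be pinned and a
non-zero VMID otherwise:

	struct kvm_vmid *kvm_vmid = &kvm->arch.mmu.vmid;	/* hypothetical caller */
	unsigned long vmid = kvm_arm_pinned_vmid_get(kvm_vmid);

	if (!vmid) {
		/* allocator not initialised, or max_pinned_vmids reached */
	} else {
		/*
		 * @vmid stays valid across VMID rollovers until the matching
		 * kvm_arm_pinned_vmid_put(kvm_vmid).
		 */
	}
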
From patchwork Thu Feb 8 15:18:33 2024
From: Shameer Kolothum
Subject: [RFC PATCH v2 3/7] KVM: arm64: Add interfaces for pinned VMID support
Date: Thu, 8 Feb 2024 15:18:33 +0000
Message-ID: <20240208151837.35068-4-shameerali.kolothum.thodi@huawei.com>

Provide interfaces to get/put pinned VMIDs from the KVM VMID allocator.
This will be used by the SMMUv3 driver in a subsequent patch.

Signed-off-by: Shameer Kolothum
---
 arch/arm64/kvm/Kconfig |  1 +
 arch/arm64/kvm/arm.c   | 14 ++++++++++++++
 2 files changed, 15 insertions(+)

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 6c3c8ca73e7f..29ff79f112ba 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -40,6 +40,7 @@ menuconfig KVM
 	select SCHED_INFO
 	select GUEST_PERF_EVENTS if PERF_EVENTS
 	select XARRAY_MULTI
+	select HAVE_KVM_PINNED_VMID
 	help
 	  Support hosting virtualized guest machines.
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a25265aca432..ed905da6e9ab 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -711,6 +711,20 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+int kvm_arch_pinned_vmid_get(struct kvm *kvm)
+{
+	int vmid;
+
+	vmid = kvm_arm_pinned_vmid_get(&kvm->arch.mmu.vmid);
+
+	return (vmid == 0) ? -EINVAL : vmid;
+}
+
+void kvm_arch_pinned_vmid_put(struct kvm *kvm)
+{
+	kvm_arm_pinned_vmid_put(&kvm->arch.mmu.vmid);
+}
+
 bool kvm_arch_intc_initialized(struct kvm *kvm)
 {
 	return vgic_initialized(kvm);

From patchwork Thu Feb 8 15:18:34 2024
From: Shameer Kolothum
Subject: [RFC PATCH v2 4/7] iommufd: Associate kvm pointer to iommufd ctx
Date: Thu, 8 Feb 2024 15:18:34 +0000
Message-ID: <20240208151837.35068-5-shameerali.kolothum.thodi@huawei.com>

Introduce an API to set the KVM pointer in the iommufd ctx, and set it
when a vfio device is bound to iommufd.

Signed-off-by: Shameer Kolothum
---
 drivers/iommu/iommufd/iommufd_private.h |  3 +++
 drivers/iommu/iommufd/main.c            | 14 ++++++++++++++
 drivers/vfio/device_cdev.c              |  3 +++
 include/linux/iommufd.h                 |  7 +++++++
 4 files changed, 27 insertions(+)

diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h
index 991f864d1f9b..28ede82bb1a6 100644
--- a/drivers/iommu/iommufd/iommufd_private.h
+++ b/drivers/iommu/iommufd/iommufd_private.h
@@ -16,6 +16,7 @@ struct iommu_domain;
 struct iommu_group;
 struct iommu_option;
 struct iommufd_device;
+struct kvm;
 
 struct iommufd_ctx {
 	struct file *file;
@@ -27,6 +28,8 @@ struct iommufd_ctx {
 	/* Compatibility with VFIO no iommu */
 	u8 no_iommu_mode;
 	struct iommufd_ioas *vfio_ioas;
+	/* Associated KVM pointer */
+	struct kvm *kvm;
 };
 
 /*
diff --git a/drivers/iommu/iommufd/main.c b/drivers/iommu/iommufd/main.c
index 39b32932c61e..28272510fba4 100644
--- a/drivers/iommu/iommufd/main.c
+++ b/drivers/iommu/iommufd/main.c
@@ -495,6 +495,20 @@ void iommufd_ctx_put(struct iommufd_ctx *ictx)
 }
 EXPORT_SYMBOL_NS_GPL(iommufd_ctx_put, IOMMUFD);
 
+/**
+ * iommufd_ctx_set_kvm - Called to set a KVM pointer to iommufd context
+ * @ictx: Context to operate on
+ * @kvm: KVM pointer with a reference taken using kvm_get_kvm_safe()
+ */
+void iommufd_ctx_set_kvm(struct iommufd_ctx *ictx, struct kvm *kvm)
+{
+	xa_lock(&ictx->objects);
+	if (!ictx->kvm)
+		ictx->kvm = kvm;
+	xa_unlock(&ictx->objects);
+}
+EXPORT_SYMBOL_NS_GPL(iommufd_ctx_set_kvm, IOMMUFD);
+
 static const struct iommufd_object_ops iommufd_object_ops[] = {
 	[IOMMUFD_OBJ_ACCESS] = {
 		.destroy = iommufd_access_destroy_object,
diff --git a/drivers/vfio/device_cdev.c b/drivers/vfio/device_cdev.c
index e75da0a70d1f..e75e96fb57cb 100644
--- a/drivers/vfio/device_cdev.c
+++ b/drivers/vfio/device_cdev.c
@@ -101,6 +101,9 @@ long vfio_df_ioctl_bind_iommufd(struct vfio_device_file *df,
 	 */
 	vfio_df_get_kvm_safe(df);
 
+	if (df->kvm)
+		iommufd_ctx_set_kvm(df->iommufd, df->kvm);
+
 	ret = vfio_df_open(df);
 	if (ret)
 		goto out_put_kvm;
diff --git a/include/linux/iommufd.h b/include/linux/iommufd.h
index ffc3a949f837..7408b620d0b8 100644
--- a/include/linux/iommufd.h
+++ b/include/linux/iommufd.h
@@ -17,6 +17,7 @@ struct iommufd_ctx;
 struct iommufd_access;
 struct file;
 struct iommu_group;
+struct kvm;
 
 struct iommufd_device *iommufd_device_bind(struct iommufd_ctx *ictx,
 					   struct device *dev, u32 *id);
@@ -59,6 +60,7 @@ struct iommufd_ctx *iommufd_ctx_from_file(struct file *file);
 struct iommufd_ctx *iommufd_ctx_from_fd(int fd);
 void iommufd_ctx_put(struct iommufd_ctx *ictx);
 bool iommufd_ctx_has_group(struct iommufd_ctx *ictx, struct iommu_group *group);
+void iommufd_ctx_set_kvm(struct iommufd_ctx *ictx, struct kvm *kvm);
 
 int iommufd_access_pin_pages(struct iommufd_access *access, unsigned long iova,
 			     unsigned long length, struct page **out_pages,
@@ -80,6 +82,11 @@ static inline void iommufd_ctx_put(struct iommufd_ctx *ictx)
 {
 }
 
+static inline void iommufd_ctx_set_kvm(struct iommufd_ctx *ictx,
+				       struct kvm *kvm)
+{
+}
+
 static inline int iommufd_access_pin_pages(struct iommufd_access *access,
					   unsigned long iova,
					   unsigned long length,

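For clarity, a minimal sketch of the intended calling pattern (the
surrounding variables are hypothetical): the KVM pointer must already
carry a reference taken with kvm_get_kvm_safe(), and only the first
non-NULL pointer stored in the context takes effect:

	/* e.g. at device bind time, with @ictx and @kvm provided by the caller */
	if (kvm && kvm_get_kvm_safe(kvm))
		iommufd_ctx_set_kvm(ictx, kvm);
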
From patchwork Thu Feb 8 15:18:35 2024
From: Shameer Kolothum
Subject: [RFC PATCH v2 5/7] iommu: Pass in kvm pointer to domain_alloc_user
Date: Thu, 8 Feb 2024 15:18:35 +0000
Message-ID: <20240208151837.35068-6-shameerali.kolothum.thodi@huawei.com>

No functional changes. This will be used in a later patch to add support
for using the KVM VMID in the ARM SMMUv3 stage 2 configuration.
Signed-off-by: Shameer Kolothum
---
 drivers/iommu/amd/iommu.c                   | 1 +
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 1 +
 drivers/iommu/intel/iommu.c                 | 1 +
 drivers/iommu/iommufd/hw_pagetable.c        | 5 +++--
 drivers/iommu/iommufd/selftest.c            | 1 +
 include/linux/iommu.h                       | 3 ++-
 6 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 4283dd8191f0..75e0f4e9253a 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2244,6 +2244,7 @@ static struct iommu_domain *amd_iommu_domain_alloc(unsigned int type)
 static struct iommu_domain *
 amd_iommu_domain_alloc_user(struct device *dev, u32 flags,
 			    struct iommu_domain *parent,
+			    struct kvm *kvm,
 			    const struct iommu_user_data *user_data)
 
 {
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 0606166a8781..b41d77787a2f 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -3190,6 +3190,7 @@ arm_smmu_domain_alloc_nesting(struct device *dev, u32 flags,
 static struct iommu_domain *
 arm_smmu_domain_alloc_user(struct device *dev, u32 flags,
 			   struct iommu_domain *parent,
+			   struct kvm *kvm,
 			   const struct iommu_user_data *user_data)
 {
 	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 6fb5f6fceea1..992a2f233198 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -3877,6 +3877,7 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
 static struct iommu_domain *
 intel_iommu_domain_alloc_user(struct device *dev, u32 flags,
 			      struct iommu_domain *parent,
+			      struct kvm *kvm,
 			      const struct iommu_user_data *user_data)
 {
 	struct device_domain_info *info = dev_iommu_priv_get(dev);
diff --git a/drivers/iommu/iommufd/hw_pagetable.c b/drivers/iommu/iommufd/hw_pagetable.c
index 3f3f1fa1a0a9..1efc4ef863ac 100644
--- a/drivers/iommu/iommufd/hw_pagetable.c
+++ b/drivers/iommu/iommufd/hw_pagetable.c
@@ -129,7 +129,7 @@ iommufd_hwpt_paging_alloc(struct iommufd_ctx *ictx, struct iommufd_ioas *ioas,
 
 	if (ops->domain_alloc_user) {
 		hwpt->domain = ops->domain_alloc_user(idev->dev, flags, NULL,
-						      user_data);
+						      ictx->kvm, user_data);
 		if (IS_ERR(hwpt->domain)) {
 			rc = PTR_ERR(hwpt->domain);
 			hwpt->domain = NULL;
@@ -228,7 +228,8 @@ iommufd_hwpt_nested_alloc(struct iommufd_ctx *ictx,
 	hwpt_nested->parent = parent;
 
 	hwpt->domain = ops->domain_alloc_user(idev->dev, flags,
-					      parent->common.domain, user_data);
+					      parent->common.domain,
+					      ictx->kvm, user_data);
 	if (IS_ERR(hwpt->domain)) {
 		rc = PTR_ERR(hwpt->domain);
 		hwpt->domain = NULL;
diff --git a/drivers/iommu/iommufd/selftest.c b/drivers/iommu/iommufd/selftest.c
index d9e9920c7eba..483bb7216235 100644
--- a/drivers/iommu/iommufd/selftest.c
+++ b/drivers/iommu/iommufd/selftest.c
@@ -269,6 +269,7 @@ __mock_domain_alloc_nested(struct mock_iommu_domain *mock_parent,
 static struct iommu_domain *
 mock_domain_alloc_user(struct device *dev, u32 flags,
 		       struct iommu_domain *parent,
+		       struct kvm *kvm,
 		       const struct iommu_user_data *user_data)
 {
 	struct mock_iommu_domain *mock_parent;
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 1446e3718642..ad14cdc22863 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -43,6 +43,7 @@ struct notifier_block;
 struct iommu_sva;
 struct iommu_fault_event;
 struct iommu_dma_cookie;
+struct kvm;
 
 /* iommu fault flags */
 #define IOMMU_FAULT_READ	0x0
@@ -458,7 +459,7 @@ struct iommu_ops {
 	struct iommu_domain *(*domain_alloc)(unsigned iommu_domain_type);
 	struct iommu_domain *(*domain_alloc_user)(
 		struct device *dev, u32 flags, struct iommu_domain *parent,
-		const struct iommu_user_data *user_data);
+		struct kvm *kvm, const struct iommu_user_data *user_data);
 	struct iommu_domain *(*domain_alloc_paging)(struct device *dev);
 	struct iommu_domain *(*domain_alloc_sva)(struct device *dev,
 						 struct mm_struct *mm);

From patchwork Thu Feb 8 15:18:36 2024
From: Shameer Kolothum
Subject: [RFC PATCH v2 6/7] iommu/arm-smmu-v3: Use KVM VMID for s2 stage
Date: Thu, 8 Feb 2024 15:18:36 +0000
Message-ID: <20240208151837.35068-7-shameerali.kolothum.thodi@huawei.com>

If KVM is available, make use of the KVM pinned VMID interfaces to set
the stage 2 VMID for nested domains.

Signed-off-by: Shameer Kolothum
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 18 +++++++++++++-----
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  2 ++
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index b41d77787a2f..18e3e04b50f4 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -2399,9 +2400,13 @@ int arm_smmu_domain_alloc_id(struct arm_smmu_device *smmu,
 	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S2) {
 		int vmid;
 
-		/* Reserve VMID 0 for stage-2 bypass STEs */
-		vmid = ida_alloc_range(&smmu->vmid_map, 1,
-				       (1 << smmu->vmid_bits) - 1, GFP_KERNEL);
+		if (smmu_domain->kvm) {
+			vmid = kvm_pinned_vmid_get(smmu_domain->kvm);
+		} else {
+			/* Reserve VMID 0 for stage-2 bypass STEs */
+			vmid = ida_alloc_range(&smmu->vmid_map, 1,
+					       (1 << smmu->vmid_bits) - 1, GFP_KERNEL);
+		}
 		if (vmid < 0)
 			return vmid;
 		smmu_domain->vmid = vmid;
@@ -2431,7 +2436,10 @@ void arm_smmu_domain_free_id(struct arm_smmu_domain *smmu_domain)
 	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S2 &&
 		   smmu_domain->vmid) {
 		arm_smmu_tlb_inv_all_s2(smmu_domain);
-		ida_free(&smmu->vmid_map, smmu_domain->vmid);
+		if (smmu_domain->kvm)
+			kvm_pinned_vmid_put(smmu_domain->kvm);
+		else
+			ida_free(&smmu->vmid_map, smmu_domain->vmid);
 	}
 }
 
@@ -3217,7 +3225,7 @@ arm_smmu_domain_alloc_user(struct device *dev, u32 flags,
 			goto err_free;
 		}
 		smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
-		smmu_domain->nesting_parent = true;
+		smmu_domain->kvm = kvm;
 	}
 
 	smmu_domain->domain.type = IOMMU_DOMAIN_UNMANAGED;
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 45bcd72fcda4..7d732ea2a6ee 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -758,6 +758,8 @@ struct arm_smmu_domain {
 	struct mmu_notifier mmu_notifier;
 	bool btm_invalidation : 1;
 	bool nesting_parent : 1;
+
+	struct kvm *kvm;
 };
 
 struct arm_smmu_nested_domain {

From patchwork Thu Feb 8 15:18:37 2024
From: Shameer Kolothum
Subject: [RFC PATCH v2 7/7] iommu/arm-smmu-v3: Enable broadcast TLB maintenance
Date: Thu, 8 Feb 2024 15:18:37 +0000
Message-ID: <20240208151837.35068-8-shameerali.kolothum.thodi@huawei.com>

From: Jean-Philippe Brucker

The SMMUv3 can handle invalidation targeted at TLB entries with shared
ASIDs. If the implementation supports broadcast TLB maintenance, enable
it and keep track of it in a feature bit. The SMMU will then be affected
by inner-shareable TLB invalidations from other agents.

In order to avoid over-invalidation with stage 2 translation contexts,
enable BTM only when the SMMUv3 supports either S1 only or both S1 & S2
translation contexts. In this way the default domain will use stage 1,
and stage 2 will only be used for nested domain setup.
Signed-off-by: Jean-Philippe Brucker
[Shameer: Enable BTM only if S1 is supported]
Signed-off-by: Shameer Kolothum
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 23 +++++++++++++++++++--
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  1 +
 2 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 18e3e04b50f4..f40912de9e9f 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -4060,11 +4060,14 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 	writel_relaxed(reg, smmu->base + ARM_SMMU_CR1);
 
 	/* CR2 (random crap) */
-	reg = CR2_PTM | CR2_RECINVSID;
+	reg = CR2_RECINVSID;
 
 	if (smmu->features & ARM_SMMU_FEAT_E2H)
 		reg |= CR2_E2H;
 
+	if (!(smmu->features & ARM_SMMU_FEAT_BTM))
+		reg |= CR2_PTM;
+
 	writel_relaxed(reg, smmu->base + ARM_SMMU_CR2);
 
 	/* Stream table */
@@ -4209,6 +4212,7 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 {
 	u32 reg;
 	bool coherent = smmu->features & ARM_SMMU_FEAT_COHERENCY;
+	bool vhe = cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN);
 
 	/* IDR0 */
 	reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0);
@@ -4261,7 +4265,7 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 
 	if (reg & IDR0_HYP) {
 		smmu->features |= ARM_SMMU_FEAT_HYP;
-		if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN))
+		if (vhe)
 			smmu->features |= ARM_SMMU_FEAT_E2H;
 	}
 
@@ -4286,6 +4290,21 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 	if (reg & IDR0_S2P)
 		smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
 
+	/*
+	 * If S1 is supported, check we can enable BTM. This means if S2 is available,
+	 * we will use S2 for nested domain only with a KVM VMID. BTM is useful when
+	 * CPU shares the page tables with SMMUv3 (eg: vSVA).
+	 */
+	if (reg & IDR0_S1P) {
+		/*
+		 * If the CPU is using VHE, but the SMMU doesn't support it, the SMMU
+		 * will create TLB entries for NH-EL1 world and will miss the
+		 * broadcasted TLB invalidations that target EL2-E2H world. Don't enable
+		 * BTM in that case.
+		 */
+		if (reg & IDR0_BTM && (!vhe || reg & IDR0_HYP))
+			smmu->features |= ARM_SMMU_FEAT_BTM;
+	}
 
 	if (!(reg & (IDR0_S1P | IDR0_S2P))) {
 		dev_err(smmu->dev, "no translation support!\n");
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 7d732ea2a6ee..ff3de784d1be 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -33,6 +33,7 @@
 #define IDR0_ASID16		(1 << 12)
 #define IDR0_ATS		(1 << 10)
 #define IDR0_HYP		(1 << 9)
+#define IDR0_BTM		(1 << 5)
 #define IDR0_COHACC		(1 << 4)
 #define IDR0_TTF		GENMASK(3, 2)
 #define IDR0_TTF_AARCH64	2