From patchwork Mon Jan 20 12:05:01 2025
X-Patchwork-Submitter: Zheyun Shen
X-Patchwork-Id: 13945036
From: Zheyun Shen
To: thomas.lendacky@amd.com, seanjc@google.com, pbonzini@redhat.com,
    tglx@linutronix.de, kevinloughlin@google.com, mingo@redhat.com,
    bp@alien8.de
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zheyun Shen
Subject: [PATCH v5 1/3] KVM: x86: Add a wbinvd helper
Date: Mon, 20 Jan 2025 20:05:01 +0800
Message-Id: <20250120120503.470533-2-szy0127@sjtu.edu.cn>
In-Reply-To: <20250120120503.470533-1-szy0127@sjtu.edu.cn>
References: <20250120120503.470533-1-szy0127@sjtu.edu.cn>
X-Mailing-List: kvm@vger.kernel.org

Emulating WBINVD currently open-codes a call to on_each_cpu_mask(). A
subsequent patch needs the same behavior, so add a helper and spare
callers from preparing an identical set of parameters.

Signed-off-by: Zheyun Shen
---
 arch/x86/kvm/x86.c | 9 +++++++--
 arch/x86/kvm/x86.h | 1 +
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c79a8cc57..77f656306 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8231,8 +8231,7 @@ static int kvm_emulate_wbinvd_noskip(struct kvm_vcpu *vcpu)
 		int cpu = get_cpu();
 
 		cpumask_set_cpu(cpu, vcpu->arch.wbinvd_dirty_mask);
-		on_each_cpu_mask(vcpu->arch.wbinvd_dirty_mask,
-				 wbinvd_ipi, NULL, 1);
+		wbinvd_on_many_cpus(vcpu->arch.wbinvd_dirty_mask);
 		put_cpu();
 		cpumask_clear(vcpu->arch.wbinvd_dirty_mask);
 	} else
@@ -13971,6 +13970,12 @@ int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
 }
 EXPORT_SYMBOL_GPL(kvm_sev_es_string_io);
 
+void wbinvd_on_many_cpus(struct cpumask *mask)
+{
+	on_each_cpu_mask(mask, wbinvd_ipi, NULL, 1);
+}
+EXPORT_SYMBOL_GPL(wbinvd_on_many_cpus);
+
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_entry);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index ec623d23d..8f715e14b 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -611,5 +611,6 @@ int kvm_sev_es_mmio_read(struct kvm_vcpu *vcpu, gpa_t src, unsigned int bytes,
 int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
 			 unsigned int port, void *data, unsigned int count,
 			 int in);
+void wbinvd_on_many_cpus(struct cpumask *mask);
 
 #endif

From patchwork Mon Jan 20 12:05:02 2025
X-Patchwork-Submitter: Zheyun Shen
X-Patchwork-Id: 13945037
From: Zheyun Shen
To: thomas.lendacky@amd.com, seanjc@google.com, pbonzini@redhat.com,
    tglx@linutronix.de, kevinloughlin@google.com, mingo@redhat.com,
    bp@alien8.de
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zheyun Shen
Subject: [PATCH v5 2/3] KVM: SVM: Remove wbinvd in sev_vm_destroy()
Date: Mon, 20 Jan 2025 20:05:02 +0800
Message-Id: <20250120120503.470533-3-szy0127@sjtu.edu.cn>
In-Reply-To: <20250120120503.470533-1-szy0127@sjtu.edu.cn>
References: <20250120120503.470533-1-szy0127@sjtu.edu.cn>
X-Mailing-List: kvm@vger.kernel.org

By the time sev_vm_destroy() runs, kvm_arch_guest_memory_reclaimed()
has already been called for SEV and SEV-ES guests, and
kvm_arch_gmem_invalidate() for SEV-SNP guests. Those paths have already
flushed the memory, so the wbinvd_on_all_cpus() here can simply be
dropped.

Suggested-by: Sean Christopherson
Suggested-by: Kevin Loughlin
Signed-off-by: Zheyun Shen
---
 arch/x86/kvm/svm/sev.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 943bd074a..1ce67de9d 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2899,12 +2899,6 @@ void sev_vm_destroy(struct kvm *kvm)
 		return;
 	}
 
-	/*
-	 * Ensure that all guest tagged cache entries are flushed before
-	 * releasing the pages back to the system for use. CLFLUSH will
-	 * not do this, so issue a WBINVD.
-	 */
-	wbinvd_on_all_cpus();
 
 	/*
 	 * if userspace was terminated before unregistering the memory regions

From patchwork Mon Jan 20 12:05:03 2025
X-Patchwork-Submitter: Zheyun Shen
X-Patchwork-Id: 13945038
From: Zheyun Shen
To: thomas.lendacky@amd.com, seanjc@google.com, pbonzini@redhat.com,
    tglx@linutronix.de, kevinloughlin@google.com, mingo@redhat.com,
    bp@alien8.de
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zheyun Shen
Subject: [PATCH v5 3/3] KVM: SVM: Flush cache only on CPUs running SEV guest
Date: Mon, 20 Jan 2025 20:05:03 +0800
Message-Id: <20250120120503.470533-4-szy0127@sjtu.edu.cn>
In-Reply-To: <20250120120503.470533-1-szy0127@sjtu.edu.cn>
References: <20250120120503.470533-1-szy0127@sjtu.edu.cn>
X-Mailing-List: kvm@vger.kernel.org

On AMD CPUs that do not enforce cache coherency for encrypted guest
memory, every page reclamation from an SEV guest triggers a call to
wbinvd_on_all_cpus(), which hurts the performance of every other
program on the host. Typically, an AMD server has 128 cores or more,
while an SEV guest may use only 8 of them, and the host can use
qemu-affinity to pin those 8 vCPUs to specific physical CPUs.
Recording the physical CPUs a vCPU runs on therefore makes it possible
to flush only the CPUs that may hold dirty, guest-tagged cache lines
instead of flushing all CPUs every time.
Suggested-by: Sean Christopherson
Signed-off-by: Zheyun Shen
---
 arch/x86/kvm/svm/sev.c | 39 ++++++++++++++++++++++++++++++++++++---
 arch/x86/kvm/svm/svm.c |  2 ++
 arch/x86/kvm/svm/svm.h |  5 ++++-
 3 files changed, 42 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 1ce67de9d..91469edd1 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -252,6 +252,36 @@ static void sev_asid_free(struct kvm_sev_info *sev)
 	sev->misc_cg = NULL;
 }
 
+static struct cpumask *sev_get_wbinvd_dirty_mask(struct kvm *kvm)
+{
+	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+
+	return sev->wbinvd_dirty_mask;
+}
+
+void sev_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	/*
+	 * To optimize cache flushes when memory is reclaimed from an SEV VM,
+	 * track physical CPUs that enter the guest for SEV VMs and thus can
+	 * have encrypted, dirty data in the cache, and flush caches only for
+	 * CPUs that have entered the guest.
+	 */
+	cpumask_set_cpu(cpu, sev_get_wbinvd_dirty_mask(vcpu->kvm));
+}
+
+static void sev_do_wbinvd(struct kvm *kvm)
+{
+	struct cpumask *dirty_mask = sev_get_wbinvd_dirty_mask(kvm);
+
+	/*
+	 * TODO: Clear CPUs from the bitmap prior to flushing. Doing so
+	 * requires serializing multiple calls and having CPUs mark themselves
+	 * "dirty" if they are currently running a vCPU for the VM.
+	 */
+	wbinvd_on_many_cpus(dirty_mask);
+}
+
 static void sev_decommission(unsigned int handle)
 {
 	struct sev_data_decommission decommission;
@@ -448,6 +478,8 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
 	ret = sev_platform_init(&init_args);
 	if (ret)
 		goto e_free;
+	if (!zalloc_cpumask_var(&sev->wbinvd_dirty_mask, GFP_KERNEL_ACCOUNT))
+		goto e_free;
 
 	/* This needs to happen after SEV/SNP firmware initialization. */
 	if (vm_type == KVM_X86_SNP_VM) {
@@ -2778,7 +2810,7 @@ int sev_mem_enc_unregister_region(struct kvm *kvm,
 	 * releasing the pages back to the system for use. CLFLUSH will
 	 * not do this, so issue a WBINVD.
 	 */
-	wbinvd_on_all_cpus();
+	sev_do_wbinvd(kvm);
 
 	__unregister_enc_region_locked(kvm, region);
 
@@ -2926,6 +2958,7 @@ void sev_vm_destroy(struct kvm *kvm)
 	}
 
 	sev_asid_free(sev);
+	free_cpumask_var(sev->wbinvd_dirty_mask);
 }
 
 void __init sev_set_cpu_caps(void)
@@ -3129,7 +3162,7 @@ static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
 	return;
 
 do_wbinvd:
-	wbinvd_on_all_cpus();
+	sev_do_wbinvd(vcpu->kvm);
 }
 
 void sev_guest_memory_reclaimed(struct kvm *kvm)
@@ -3143,7 +3176,7 @@ void sev_guest_memory_reclaimed(struct kvm *kvm)
 	if (!sev_guest(kvm) || sev_snp_guest(kvm))
 		return;
 
-	wbinvd_on_all_cpus();
+	sev_do_wbinvd(kvm);
 }
 
 void sev_free_vcpu(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 21dacd312..d2a423c0e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1565,6 +1565,8 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	}
 
 	if (kvm_vcpu_apicv_active(vcpu))
 		avic_vcpu_load(vcpu, cpu);
+	if (sev_guest(vcpu->kvm))
+		sev_vcpu_load(vcpu, cpu);
 }
 
 static void svm_vcpu_put(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 43fa6a16e..c8f42cb61 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -112,6 +112,8 @@ struct kvm_sev_info {
 	void *guest_req_buf;    /* Bounce buffer for SNP Guest Request input */
 	void *guest_resp_buf;   /* Bounce buffer for SNP Guest Request output */
 	struct mutex guest_req_mutex; /* Must acquire before using bounce buffers */
+	/* CPUs that invoked VMRUN and must wbinvd after guest memory is reclaimed */
+	struct cpumask *wbinvd_dirty_mask;
 };
 
 struct kvm_svm {
@@ -763,6 +765,7 @@ void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu);
 int sev_gmem_prepare(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order);
 void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end);
 int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn);
+void sev_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
 #else
 static inline struct page *snp_safe_alloc_page_node(int node, gfp_t gfp)
 {
@@ -793,7 +796,7 @@ static inline int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
 {
 	return 0;
 }
-
+static inline void sev_vcpu_load(struct kvm_vcpu *vcpu, int cpu) {}
 #endif
 
 /* vmenter.S */