From patchwork Tue Jan 28 01:53:43 2025
X-Patchwork-Submitter: Zheyun Shen
X-Patchwork-Id: 13951969
From: Zheyun Shen
To: thomas.lendacky@amd.com, seanjc@google.com, pbonzini@redhat.com,
    tglx@linutronix.de, kevinloughlin@google.com, mingo@redhat.com,
    bp@alien8.de
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zheyun Shen
Subject: [PATCH v7 1/3] KVM: x86: Add a wbinvd helper
Date: Tue, 28 Jan 2025 09:53:43 +0800
Message-Id: <20250128015345.7929-2-szy0127@sjtu.edu.cn>
In-Reply-To: <20250128015345.7929-1-szy0127@sjtu.edu.cn>
References: <20250128015345.7929-1-szy0127@sjtu.edu.cn>

Currently, open-coded calls to on_each_cpu_mask() are used when
emulating WBINVD. A subsequent patch needs the same behavior, and the
helper saves callers from having to prepare identical parameters.

Signed-off-by: Zheyun Shen
---
 arch/x86/kvm/x86.c | 9 +++++++--
 arch/x86/kvm/x86.h | 1 +
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2e7134809..b635e0e5c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8231,8 +8231,7 @@ static int kvm_emulate_wbinvd_noskip(struct kvm_vcpu *vcpu)
 		int cpu = get_cpu();

 		cpumask_set_cpu(cpu, vcpu->arch.wbinvd_dirty_mask);
-		on_each_cpu_mask(vcpu->arch.wbinvd_dirty_mask,
-				 wbinvd_ipi, NULL, 1);
+		wbinvd_on_many_cpus(vcpu->arch.wbinvd_dirty_mask);
 		put_cpu();
 		cpumask_clear(vcpu->arch.wbinvd_dirty_mask);
 	} else
@@ -13964,6 +13963,12 @@ int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
 }
 EXPORT_SYMBOL_GPL(kvm_sev_es_string_io);

+void wbinvd_on_many_cpus(struct cpumask *mask)
+{
+	on_each_cpu_mask(mask, wbinvd_ipi, NULL, 1);
+}
+EXPORT_SYMBOL_GPL(wbinvd_on_many_cpus);
+
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_entry);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index ec623d23d..8f715e14b 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -611,5 +611,6 @@ int kvm_sev_es_mmio_read(struct kvm_vcpu *vcpu, gpa_t src, unsigned int bytes,
 int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
 			 unsigned int port, void *data, unsigned int count,
 			 int in);
+void wbinvd_on_many_cpus(struct cpumask *mask);

 #endif

From patchwork Tue Jan 28 01:53:44 2025
X-Patchwork-Submitter: Zheyun Shen
X-Patchwork-Id: 13951970
From: Zheyun Shen
To: thomas.lendacky@amd.com, seanjc@google.com, pbonzini@redhat.com,
    tglx@linutronix.de, kevinloughlin@google.com, mingo@redhat.com,
    bp@alien8.de
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zheyun Shen
Subject: [PATCH v7 2/3] KVM: SVM: Remove wbinvd in sev_vm_destroy()
Date: Tue, 28 Jan 2025 09:53:44 +0800
Message-Id: <20250128015345.7929-3-szy0127@sjtu.edu.cn>
In-Reply-To: <20250128015345.7929-1-szy0127@sjtu.edu.cn>
References: <20250128015345.7929-1-szy0127@sjtu.edu.cn>

Before sev_vm_destroy() is called, kvm_arch_guest_memory_reclaimed()
has been called for SEV and SEV-ES, and kvm_arch_gmem_invalidate() has
been called for SEV-SNP. Those functions have already flushed the
relevant memory, so the wbinvd_on_all_cpus() here can simply be
dropped.

Suggested-by: Sean Christopherson
Signed-off-by: Zheyun Shen
---
 arch/x86/kvm/svm/sev.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 943bd074a..1ce67de9d 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2899,12 +2899,6 @@ void sev_vm_destroy(struct kvm *kvm)
 		return;
 	}

-	/*
-	 * Ensure that all guest tagged cache entries are flushed before
-	 * releasing the pages back to the system for use. CLFLUSH will
-	 * not do this, so issue a WBINVD.
-	 */
-	wbinvd_on_all_cpus();

 	/*
 	 * if userspace was terminated before unregistering the memory regions

From patchwork Tue Jan 28 01:53:45 2025
X-Patchwork-Submitter: Zheyun Shen
X-Patchwork-Id: 13951971
From: Zheyun Shen
To: thomas.lendacky@amd.com, seanjc@google.com, pbonzini@redhat.com,
    tglx@linutronix.de, kevinloughlin@google.com, mingo@redhat.com,
    bp@alien8.de
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zheyun Shen
Subject: [PATCH v7 3/3] KVM: SVM: Flush cache only on CPUs running SEV guest
Date: Tue, 28 Jan 2025 09:53:45 +0800
Message-Id: <20250128015345.7929-4-szy0127@sjtu.edu.cn>
In-Reply-To: <20250128015345.7929-1-szy0127@sjtu.edu.cn>
References: <20250128015345.7929-1-szy0127@sjtu.edu.cn>

On AMD CPUs that do not ensure cache coherency for encrypted memory,
each memory page reclamation in an SEV guest triggers a call to
wbinvd_on_all_cpus(), which hurts the performance of other workloads
on the host. A typical AMD server may have 128 or more cores, while
the SEV guest might utilize only 8 of them, and the host can use
qemu-affinity to bind those 8 vCPUs to specific physical CPUs.
Therefore, recording the physical CPUs on which each vCPU runs makes
it possible to avoid flushing the cache on all CPUs every time.
Signed-off-by: Zheyun Shen
---
 arch/x86/kvm/svm/sev.c | 30 +++++++++++++++++++++++++++---
 arch/x86/kvm/svm/svm.c |  2 ++
 arch/x86/kvm/svm/svm.h |  5 ++++-
 3 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 1ce67de9d..4b80ecbe7 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -252,6 +252,27 @@ static void sev_asid_free(struct kvm_sev_info *sev)
 	sev->misc_cg = NULL;
 }

+void sev_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	/*
+	 * To optimize cache flushes when memory is reclaimed from an SEV VM,
+	 * track physical CPUs that enter the guest for SEV VMs and thus can
+	 * have encrypted, dirty data in the cache, and flush caches only for
+	 * CPUs that have entered the guest.
+	 */
+	cpumask_set_cpu(cpu, to_kvm_sev_info(vcpu->kvm)->wbinvd_dirty_mask);
+}
+
+static void sev_do_wbinvd(struct kvm *kvm)
+{
+	/*
+	 * TODO: Clear CPUs from the bitmap prior to flushing. Doing so
+	 * requires serializing multiple calls and having CPUs mark themselves
+	 * "dirty" if they are currently running a vCPU for the VM.
+	 */
+	wbinvd_on_many_cpus(to_kvm_sev_info(kvm)->wbinvd_dirty_mask);
+}
+
 static void sev_decommission(unsigned int handle)
 {
 	struct sev_data_decommission decommission;
@@ -448,6 +469,8 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
 	ret = sev_platform_init(&init_args);
 	if (ret)
 		goto e_free;
+	if (!zalloc_cpumask_var(&sev->wbinvd_dirty_mask, GFP_KERNEL_ACCOUNT))
+		goto e_free;

 	/* This needs to happen after SEV/SNP firmware initialization. */
 	if (vm_type == KVM_X86_SNP_VM) {
@@ -2778,7 +2801,7 @@ int sev_mem_enc_unregister_region(struct kvm *kvm,
 	 * releasing the pages back to the system for use. CLFLUSH will
 	 * not do this, so issue a WBINVD.
 	 */
-	wbinvd_on_all_cpus();
+	sev_do_wbinvd(kvm);

 	__unregister_enc_region_locked(kvm, region);

@@ -2926,6 +2949,7 @@ void sev_vm_destroy(struct kvm *kvm)
 	}

 	sev_asid_free(sev);
+	free_cpumask_var(sev->wbinvd_dirty_mask);
 }

 void __init sev_set_cpu_caps(void)
@@ -3129,7 +3153,7 @@ static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
 	return;

 do_wbinvd:
-	wbinvd_on_all_cpus();
+	sev_do_wbinvd(vcpu->kvm);
 }

 void sev_guest_memory_reclaimed(struct kvm *kvm)
@@ -3143,7 +3167,7 @@ void sev_guest_memory_reclaimed(struct kvm *kvm)
 	if (!sev_guest(kvm) || sev_snp_guest(kvm))
 		return;

-	wbinvd_on_all_cpus();
+	sev_do_wbinvd(kvm);
 }

 void sev_free_vcpu(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index dd15cc635..f3b03b0d8 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1565,6 +1565,8 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	}
 	if (kvm_vcpu_apicv_active(vcpu))
 		avic_vcpu_load(vcpu, cpu);
+	if (sev_guest(vcpu->kvm))
+		sev_vcpu_load(vcpu, cpu);
 }

 static void svm_vcpu_put(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 43fa6a16e..82ec80cf4 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -112,6 +112,8 @@ struct kvm_sev_info {
 	void *guest_req_buf;    /* Bounce buffer for SNP Guest Request input */
 	void *guest_resp_buf;   /* Bounce buffer for SNP Guest Request output */
 	struct mutex guest_req_mutex; /* Must acquire before using bounce buffers */
+	/* CPUs that ran a vCPU; flushed via wbinvd when guest memory is reclaimed */
+	struct cpumask *wbinvd_dirty_mask;
 };

 struct kvm_svm {
@@ -763,6 +765,7 @@ void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu);
 int sev_gmem_prepare(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order);
 void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end);
 int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn);
+void sev_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
 #else
 static inline struct page *snp_safe_alloc_page_node(int node, gfp_t gfp)
 {
@@ -793,7 +796,7 @@ static inline int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
 {
 	return 0;
 }
-
+static inline void sev_vcpu_load(struct kvm_vcpu *vcpu, int cpu) {}
 #endif

 /* vmenter.S */