From patchwork Sun Jan 26 11:36:38 2025
X-Patchwork-Submitter: Zheyun Shen
X-Patchwork-Id: 13950645
From: Zheyun Shen
To: thomas.lendacky@amd.com, seanjc@google.com, pbonzini@redhat.com,
    tglx@linutronix.de, kevinloughlin@google.com, mingo@redhat.com,
    bp@alien8.de
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zheyun Shen
Subject: [PATCH v6 1/3] KVM: x86: Add a wbinvd helper
Date: Sun, 26 Jan 2025 19:36:38 +0800
Message-Id: <20250126113640.3426-2-szy0127@sjtu.edu.cn>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To: <20250126113640.3426-1-szy0127@sjtu.edu.cn>
References: <20250126113640.3426-1-szy0127@sjtu.edu.cn>

At the moment, open-coded calls to on_each_cpu_mask() are used when
emulating WBINVD. A subsequent patch needs the same behavior, and the
helper saves callers from having to prepare identical parameters.

Signed-off-by: Zheyun Shen
---
 arch/x86/kvm/x86.c | 9 +++++++--
 arch/x86/kvm/x86.h | 1 +
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2e7134809..b635e0e5c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8231,8 +8231,7 @@ static int kvm_emulate_wbinvd_noskip(struct kvm_vcpu *vcpu)
 		int cpu = get_cpu();
 
 		cpumask_set_cpu(cpu, vcpu->arch.wbinvd_dirty_mask);
-		on_each_cpu_mask(vcpu->arch.wbinvd_dirty_mask,
-				 wbinvd_ipi, NULL, 1);
+		wbinvd_on_many_cpus(vcpu->arch.wbinvd_dirty_mask);
 		put_cpu();
 		cpumask_clear(vcpu->arch.wbinvd_dirty_mask);
 	} else
@@ -13964,6 +13963,12 @@ int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
 }
 EXPORT_SYMBOL_GPL(kvm_sev_es_string_io);
 
+void wbinvd_on_many_cpus(struct cpumask *mask)
+{
+	on_each_cpu_mask(mask, wbinvd_ipi, NULL, 1);
+}
+EXPORT_SYMBOL_GPL(wbinvd_on_many_cpus);
+
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_entry);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index ec623d23d..8f715e14b 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -611,5 +611,6 @@ int kvm_sev_es_mmio_read(struct kvm_vcpu *vcpu, gpa_t src, unsigned int bytes,
 int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
 			 unsigned int port, void *data, unsigned int count,
 			 int in);
+void wbinvd_on_many_cpus(struct cpumask *mask);
 
 #endif

From patchwork Sun Jan 26 11:36:39 2025
X-Patchwork-Submitter: Zheyun Shen
X-Patchwork-Id: 13950646
From: Zheyun Shen
To: thomas.lendacky@amd.com, seanjc@google.com, pbonzini@redhat.com,
    tglx@linutronix.de, kevinloughlin@google.com, mingo@redhat.com,
    bp@alien8.de
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zheyun Shen
Subject: [PATCH v6 2/3] KVM: SVM: Remove wbinvd in sev_vm_destroy()
Date: Sun, 26 Jan 2025 19:36:39 +0800
Message-Id: <20250126113640.3426-3-szy0127@sjtu.edu.cn>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To: <20250126113640.3426-1-szy0127@sjtu.edu.cn>
References: <20250126113640.3426-1-szy0127@sjtu.edu.cn>

Before sev_vm_destroy() is called, kvm_arch_guest_memory_reclaimed()
has been called for SEV and SEV-ES, and kvm_arch_gmem_invalidate() has
been called for SEV-SNP. These functions have already handled flushing
the memory. Therefore, the wbinvd_on_all_cpus() in sev_vm_destroy() can
simply be dropped.

Suggested-by: Sean Christopherson
Signed-off-by: Zheyun Shen
---
 arch/x86/kvm/svm/sev.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 943bd074a..1ce67de9d 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2899,12 +2899,6 @@ void sev_vm_destroy(struct kvm *kvm)
 		return;
 	}
 
-	/*
-	 * Ensure that all guest tagged cache entries are flushed before
-	 * releasing the pages back to the system for use. CLFLUSH will
-	 * not do this, so issue a WBINVD.
-	 */
-	wbinvd_on_all_cpus();
 
 	/*
 	 * if userspace was terminated before unregistering the memory regions

From patchwork Sun Jan 26 11:36:40 2025
X-Patchwork-Submitter: Zheyun Shen
X-Patchwork-Id: 13950647
From: Zheyun Shen
To: thomas.lendacky@amd.com, seanjc@google.com, pbonzini@redhat.com,
    tglx@linutronix.de, kevinloughlin@google.com, mingo@redhat.com,
    bp@alien8.de
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zheyun Shen
Subject: [PATCH v6 3/3] KVM: SVM: Flush cache only on CPUs running SEV guest
Date: Sun, 26 Jan 2025 19:36:40 +0800
Message-Id: <20250126113640.3426-4-szy0127@sjtu.edu.cn>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To: <20250126113640.3426-1-szy0127@sjtu.edu.cn>
References: <20250126113640.3426-1-szy0127@sjtu.edu.cn>

On AMD CPUs, hardware does not ensure cache coherency for encrypted
guest memory, so each memory page reclamation in an SEV guest triggers
a call to wbinvd_on_all_cpus(), which degrades the performance of other
programs on the host. Typically, an AMD server may have 128 cores or
more, while the SEV guest may only utilize 8 of those cores. Meanwhile,
the host can use qemu-affinity to bind those 8 vCPUs to specific
physical CPUs.

Therefore, recording the physical CPUs on which each vCPU runs makes it
possible to avoid flushing the caches of all CPUs every time.
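The tracking scheme described above can be illustrated with a minimal userspace C sketch. This is not the kernel code: the 64-bit word stands in for the kernel's `struct cpumask`, the per-CPU WBINVD IPI is modeled as a counter, and the function names (`vcpu_load`, `do_wbinvd`) are illustrative analogs of `sev_vcpu_load()` and `sev_do_wbinvd()`:

```c
#include <stdint.h>

#define NR_CPUS 64

/* Stand-in for kvm_sev_info: one bit per physical CPU the VM has run on. */
struct sev_vm {
	uint64_t wbinvd_dirty_mask;	/* bit n set => CPU n ran a vCPU */
};

/* Called when a vCPU is scheduled onto a physical CPU (cf. sev_vcpu_load). */
static void vcpu_load(struct sev_vm *vm, int cpu)
{
	vm->wbinvd_dirty_mask |= (uint64_t)1 << cpu;
}

/*
 * Flush only the CPUs recorded in the mask (cf. sev_do_wbinvd); returns the
 * number of per-CPU flushes that would be issued instead of performing
 * a real WBINVD. The mask is deliberately not cleared, mirroring the
 * TODO in the patch.
 */
static int do_wbinvd(const struct sev_vm *vm)
{
	int cpu, flushes = 0;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (vm->wbinvd_dirty_mask & ((uint64_t)1 << cpu))
			flushes++;
	return flushes;
}
```

The point of the design is visible in the sketch: a reclaim on a 128-core host with a VM pinned to a handful of CPUs flushes only the CPUs whose bits were ever set, not the whole machine.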
Signed-off-by: Zheyun Shen
---
 arch/x86/kvm/svm/sev.c | 30 +++++++++++++++++++++++++++---
 arch/x86/kvm/svm/svm.c |  2 ++
 arch/x86/kvm/svm/svm.h |  5 ++++-
 3 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 1ce67de9d..4b80ecbe7 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -252,6 +252,27 @@ static void sev_asid_free(struct kvm_sev_info *sev)
 	sev->misc_cg = NULL;
 }
 
+void sev_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+	/*
+	 * To optimize cache flushes when memory is reclaimed from an SEV VM,
+	 * track physical CPUs that enter the guest for SEV VMs and thus can
+	 * have encrypted, dirty data in the cache, and flush caches only for
+	 * CPUs that have entered the guest.
+	 */
+	cpumask_set_cpu(cpu, to_kvm_sev_info(vcpu->kvm)->wbinvd_dirty_mask);
+}
+
+static void sev_do_wbinvd(struct kvm *kvm)
+{
+	/*
+	 * TODO: Clear CPUs from the bitmap prior to flushing. Doing so
+	 * requires serializing multiple calls and having CPUs mark themselves
+	 * "dirty" if they are currently running a vCPU for the VM.
+	 */
+	wbinvd_on_many_cpus(to_kvm_sev_info(kvm)->wbinvd_dirty_mask);
+}
+
 static void sev_decommission(unsigned int handle)
 {
 	struct sev_data_decommission decommission;
@@ -448,6 +469,8 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
 	ret = sev_platform_init(&init_args);
 	if (ret)
 		goto e_free;
+	if (!zalloc_cpumask_var(&sev->wbinvd_dirty_mask, GFP_KERNEL_ACCOUNT))
+		goto e_free;
 
 	/* This needs to happen after SEV/SNP firmware initialization. */
 	if (vm_type == KVM_X86_SNP_VM) {
@@ -2778,7 +2801,7 @@ int sev_mem_enc_unregister_region(struct kvm *kvm,
 	 * releasing the pages back to the system for use. CLFLUSH will
 	 * not do this, so issue a WBINVD.
 	 */
-	wbinvd_on_all_cpus();
+	sev_do_wbinvd(kvm);
 
 	__unregister_enc_region_locked(kvm, region);
 
@@ -2926,6 +2949,7 @@ void sev_vm_destroy(struct kvm *kvm)
 	}
 
 	sev_asid_free(sev);
+	free_cpumask_var(sev->wbinvd_dirty_mask);
 }
 
 void __init sev_set_cpu_caps(void)
@@ -3129,7 +3153,7 @@ static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
 	return;
 
 do_wbinvd:
-	wbinvd_on_all_cpus();
+	sev_do_wbinvd(vcpu->kvm);
 }
 
 void sev_guest_memory_reclaimed(struct kvm *kvm)
@@ -3143,7 +3167,7 @@ void sev_guest_memory_reclaimed(struct kvm *kvm)
 	if (!sev_guest(kvm) || sev_snp_guest(kvm))
 		return;
 
-	wbinvd_on_all_cpus();
+	sev_do_wbinvd(kvm);
 }
 
 void sev_free_vcpu(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index dd15cc635..f3b03b0d8 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1565,6 +1565,8 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	}
 	if (kvm_vcpu_apicv_active(vcpu))
 		avic_vcpu_load(vcpu, cpu);
+	if (sev_guest(vcpu->kvm))
+		sev_vcpu_load(vcpu, cpu);
 }
 
 static void svm_vcpu_put(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 43fa6a16e..82ec80cf4 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -112,6 +112,8 @@ struct kvm_sev_info {
 	void *guest_req_buf;    /* Bounce buffer for SNP Guest Request input */
 	void *guest_resp_buf;   /* Bounce buffer for SNP Guest Request output */
 	struct mutex guest_req_mutex; /* Must acquire before using bounce buffers */
+	/* CPUs that have run a vCPU; target of WBINVD when memory is reclaimed */
+	struct cpumask *wbinvd_dirty_mask;
 };
 
 struct kvm_svm {
@@ -763,6 +765,7 @@ void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu);
 int sev_gmem_prepare(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order);
 void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end);
 int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn);
+void sev_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
 #else
 static inline struct page *snp_safe_alloc_page_node(int node, gfp_t gfp)
 {
@@ -793,7 +796,7 @@ static inline int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
 {
 	return 0;
 }
-
+static inline void sev_vcpu_load(struct kvm_vcpu *vcpu, int cpu) {}
 #endif
 
 /* vmenter.S */
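For reference, the wbinvd_on_many_cpus() helper from patch 1 is a thin wrapper over the kernel's on_each_cpu_mask() cross-call primitive. Its shape can be mimicked in a userspace C sketch (the `_sim` names are hypothetical stand-ins; the real primitive sends IPIs to remote CPUs and can run the function locally as well):

```c
#include <stdint.h>

typedef void (*smp_call_func_t)(void *info);

/* Userspace stand-in for on_each_cpu_mask(): run func once per set bit. */
static void on_each_cpu_mask_sim(uint64_t mask, smp_call_func_t func,
				 void *info)
{
	for (int cpu = 0; cpu < 64; cpu++)
		if (mask & ((uint64_t)1 << cpu))
			func(info);
}

/* Model of wbinvd_ipi(): count how many CPUs would flush their caches. */
static void wbinvd_ipi_sim(void *counter)
{
	(*(int *)counter)++;
}

/* The patch 1 helper collapses the repeated call sites into one function. */
static int wbinvd_on_many_cpus_sim(uint64_t mask)
{
	int flushed = 0;

	on_each_cpu_mask_sim(mask, wbinvd_ipi_sim, &flushed);
	return flushed;
}
```

With the helper in place, every caller passes just the mask; the callback, its argument, and the "wait for completion" flag are supplied in one spot, which is exactly the duplication the commit message says it removes.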