From patchwork Fri Aug 30 04:35:56 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13784307
Date: Thu, 29 Aug 2024 21:35:56 -0700
In-Reply-To: <20240830043600.127750-1-seanjc@google.com>
References: <20240830043600.127750-1-seanjc@google.com>
Message-ID: <20240830043600.127750-7-seanjc@google.com>
Subject: [PATCH v4 06/10] KVM: x86: Rename virtualization {en,dis}abling APIs to match common KVM
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
	Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
	linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org,
	linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
	Chao Gao, Kai Huang, Farrah Chen

Rename x86's per-CPU vendor hooks used to enable virtualization in
hardware to align with the recently renamed arch hooks.

No functional change intended.

Signed-off-by: Sean Christopherson
Reviewed-by: Kai Huang
---
 arch/x86/include/asm/kvm-x86-ops.h |  4 ++--
 arch/x86/include/asm/kvm_host.h    |  4 ++--
 arch/x86/kvm/svm/svm.c             | 18 +++++++++---------
 arch/x86/kvm/vmx/main.c            |  4 ++--
 arch/x86/kvm/vmx/vmx.c             | 10 +++++-----
 arch/x86/kvm/vmx/x86_ops.h         |  4 ++--
 arch/x86/kvm/x86.c                 | 10 +++++-----
 7 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 68ad4f923664..03b7e13f15bb 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -14,8 +14,8 @@ BUILD_BUG_ON(1)
  * be __static_call_return0.
  */
 KVM_X86_OP(check_processor_compatibility)
-KVM_X86_OP(hardware_enable)
-KVM_X86_OP(hardware_disable)
+KVM_X86_OP(enable_virtualization_cpu)
+KVM_X86_OP(disable_virtualization_cpu)
 KVM_X86_OP(hardware_unsetup)
 KVM_X86_OP(has_emulated_msr)
 KVM_X86_OP(vcpu_after_set_cpuid)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 94e7b5a4fafe..cb3b5f107c6e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1629,8 +1629,8 @@ struct kvm_x86_ops {

 	int (*check_processor_compatibility)(void);

-	int (*hardware_enable)(void);
-	void (*hardware_disable)(void);
+	int (*enable_virtualization_cpu)(void);
+	void (*disable_virtualization_cpu)(void);
 	void (*hardware_unsetup)(void);
 	bool (*has_emulated_msr)(struct kvm *kvm, u32 index);
 	void (*vcpu_after_set_cpuid)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d6f252555ab3..a9adbe10c12e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -592,14 +592,14 @@ static inline void kvm_cpu_svm_disable(void)
 	}
 }

-static void svm_emergency_disable(void)
+static void svm_emergency_disable_virtualization_cpu(void)
 {
 	kvm_rebooting = true;

 	kvm_cpu_svm_disable();
 }

-static void svm_hardware_disable(void)
+static void svm_disable_virtualization_cpu(void)
 {
 	/* Make sure we clean up behind us */
 	if (tsc_scaling)
@@ -610,7 +610,7 @@ static void svm_hardware_disable(void)
 	amd_pmu_disable_virt();
 }

-static int svm_hardware_enable(void)
+static int svm_enable_virtualization_cpu(void)
 {
 	struct svm_cpu_data *sd;
@@ -1533,7 +1533,7 @@ static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
 	 * TSC_AUX is always virtualized for SEV-ES guests when the feature is
 	 * available. The user return MSR support is not required in this case
 	 * because TSC_AUX is restored on #VMEXIT from the host save area
-	 * (which has been initialized in svm_hardware_enable()).
+	 * (which has been initialized in svm_enable_virtualization_cpu()).
 	 */
 	if (likely(tsc_aux_uret_slot >= 0) &&
 	    (!boot_cpu_has(X86_FEATURE_V_TSC_AUX) || !sev_es_guest(vcpu->kvm)))
@@ -3132,7 +3132,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		 * feature is available. The user return MSR support is not
 		 * required in this case because TSC_AUX is restored on #VMEXIT
 		 * from the host save area (which has been initialized in
-		 * svm_hardware_enable()).
+		 * svm_enable_virtualization_cpu()).
 		 */
 		if (boot_cpu_has(X86_FEATURE_V_TSC_AUX) && sev_es_guest(vcpu->kvm))
 			break;
@@ -4980,8 +4980,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.check_processor_compatibility = svm_check_processor_compat,

 	.hardware_unsetup = svm_hardware_unsetup,
-	.hardware_enable = svm_hardware_enable,
-	.hardware_disable = svm_hardware_disable,
+	.enable_virtualization_cpu = svm_enable_virtualization_cpu,
+	.disable_virtualization_cpu = svm_disable_virtualization_cpu,
 	.has_emulated_msr = svm_has_emulated_msr,

 	.vcpu_create = svm_vcpu_create,
@@ -5411,7 +5411,7 @@ static void __svm_exit(void)
 {
 	kvm_x86_vendor_exit();

-	cpu_emergency_unregister_virt_callback(svm_emergency_disable);
+	cpu_emergency_unregister_virt_callback(svm_emergency_disable_virtualization_cpu);
 }

 static int __init svm_init(void)
@@ -5427,7 +5427,7 @@ static int __init svm_init(void)
 	if (r)
 		return r;

-	cpu_emergency_register_virt_callback(svm_emergency_disable);
+	cpu_emergency_register_virt_callback(svm_emergency_disable_virtualization_cpu);

 	/*
 	 * Common KVM initialization _must_ come last, after this, /dev/kvm is
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 0bf35ebe8a1b..4a5bf92edccf 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -23,8 +23,8 @@ struct kvm_x86_ops vt_x86_ops __initdata = {

 	.hardware_unsetup = vmx_hardware_unsetup,

-	.hardware_enable = vmx_hardware_enable,
-	.hardware_disable = vmx_hardware_disable,
+	.enable_virtualization_cpu = vmx_enable_virtualization_cpu,
+	.disable_virtualization_cpu = vmx_disable_virtualization_cpu,
 	.has_emulated_msr = vmx_has_emulated_msr,

 	.vm_size = sizeof(struct kvm_vmx),
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f18c2d8c7476..cf7d937bfd2c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -755,7 +755,7 @@ static int kvm_cpu_vmxoff(void)
 	return -EIO;
 }

-static void vmx_emergency_disable(void)
+static void vmx_emergency_disable_virtualization_cpu(void)
 {
 	int cpu = raw_smp_processor_id();
 	struct loaded_vmcs *v;
@@ -2844,7 +2844,7 @@ static int kvm_cpu_vmxon(u64 vmxon_pointer)
 	return -EFAULT;
 }

-int vmx_hardware_enable(void)
+int vmx_enable_virtualization_cpu(void)
 {
 	int cpu = raw_smp_processor_id();
 	u64 phys_addr = __pa(per_cpu(vmxarea, cpu));
@@ -2881,7 +2881,7 @@ static void vmclear_local_loaded_vmcss(void)
 		__loaded_vmcs_clear(v);
 }

-void vmx_hardware_disable(void)
+void vmx_disable_virtualization_cpu(void)
 {
 	vmclear_local_loaded_vmcss();

@@ -8584,7 +8584,7 @@ static void __vmx_exit(void)
 {
 	allow_smaller_maxphyaddr = false;

-	cpu_emergency_unregister_virt_callback(vmx_emergency_disable);
+	cpu_emergency_unregister_virt_callback(vmx_emergency_disable_virtualization_cpu);

 	vmx_cleanup_l1d_flush();
 }
@@ -8632,7 +8632,7 @@ static int __init vmx_init(void)
 		pi_init_cpu(cpu);
 	}

-	cpu_emergency_register_virt_callback(vmx_emergency_disable);
+	cpu_emergency_register_virt_callback(vmx_emergency_disable_virtualization_cpu);

 	vmx_check_vmcs12_offsets();

diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index ce3221cd1d01..205692c43a8e 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -13,8 +13,8 @@ extern struct kvm_x86_init_ops vt_init_ops __initdata;

 void vmx_hardware_unsetup(void);
 int vmx_check_processor_compat(void);
-int vmx_hardware_enable(void);
-void vmx_hardware_disable(void);
+int vmx_enable_virtualization_cpu(void);
+void vmx_disable_virtualization_cpu(void);
 int vmx_vm_init(struct kvm *kvm);
 void vmx_vm_destroy(struct kvm *kvm);
 int vmx_vcpu_precreate(struct kvm *kvm);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1182baf0d487..431358167fa8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9749,7 +9749,7 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)

 	guard(mutex)(&vendor_module_lock);

-	if (kvm_x86_ops.hardware_enable) {
+	if (kvm_x86_ops.enable_virtualization_cpu) {
 		pr_err("already loaded vendor module '%s'\n", kvm_x86_ops.name);
 		return -EEXIST;
 	}
@@ -9876,7 +9876,7 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
 	return 0;

 out_unwind_ops:
-	kvm_x86_ops.hardware_enable = NULL;
+	kvm_x86_ops.enable_virtualization_cpu = NULL;
 	kvm_x86_call(hardware_unsetup)();
 out_mmu_exit:
 	kvm_mmu_vendor_module_exit();
@@ -9917,7 +9917,7 @@ void kvm_x86_vendor_exit(void)
 	WARN_ON(static_branch_unlikely(&kvm_xen_enabled.key));
 #endif
 	mutex_lock(&vendor_module_lock);
-	kvm_x86_ops.hardware_enable = NULL;
+	kvm_x86_ops.enable_virtualization_cpu = NULL;
 	mutex_unlock(&vendor_module_lock);
 }
 EXPORT_SYMBOL_GPL(kvm_x86_vendor_exit);
@@ -12528,7 +12528,7 @@ int kvm_arch_enable_virtualization_cpu(void)
 	if (ret)
 		return ret;

-	ret = kvm_x86_call(hardware_enable)();
+	ret = kvm_x86_call(enable_virtualization_cpu)();
 	if (ret != 0)
 		return ret;

@@ -12610,7 +12610,7 @@ int kvm_arch_enable_virtualization_cpu(void)

 void kvm_arch_disable_virtualization_cpu(void)
 {
-	kvm_x86_call(hardware_disable)();
+	kvm_x86_call(disable_virtualization_cpu)();
 	drop_user_return_notifiers();
 }
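
For readers new to this code, the hook pair being renamed is simply a
vendor-provided enable/disable function pair that common x86 code invokes on
each CPU. The stand-alone C sketch below models that dispatch under the new
names; the struct, the fake_vmx_*() functions, and main() are illustrative
stand-ins and are not kernel code.

/* Toy user-space model of the renamed per-CPU hook pair; build with: cc model.c */
#include <stdio.h>

/* Mirrors the renamed kvm_x86_ops members; everything else here is made up. */
struct x86_virt_ops {
	int  (*enable_virtualization_cpu)(void);
	void (*disable_virtualization_cpu)(void);
};

/* Stand-in for what vmx_enable_virtualization_cpu() does on the real CPU. */
static int fake_vmx_enable_virtualization_cpu(void)
{
	printf("enable virtualization on this CPU (VMXON in real VMX code)\n");
	return 0;
}

/* Stand-in for vmx_disable_virtualization_cpu(). */
static void fake_vmx_disable_virtualization_cpu(void)
{
	printf("disable virtualization on this CPU (VMXOFF in real VMX code)\n");
}

/* The loaded vendor module fills in the ops table. */
static const struct x86_virt_ops ops = {
	.enable_virtualization_cpu  = fake_vmx_enable_virtualization_cpu,
	.disable_virtualization_cpu = fake_vmx_disable_virtualization_cpu,
};

/* Models kvm_arch_{en,dis}able_virtualization_cpu() calling through the table. */
int main(void)
{
	if (ops.enable_virtualization_cpu())
		return 1;
	ops.disable_virtualization_cpu();
	return 0;
}

In the kernel proper the indirection goes through kvm_x86_call() and static
calls rather than a plain function-pointer table, but the calling pattern is
the same as in the sketch.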