Message ID | 20221205232341.4131240-2-vannapurve@google.com (mailing list archive)
---|---
State | New, archived
Series | KVM: selftests: selftests for fd-based private memory
On Mon, Dec 05, 2022, Vishal Annapurve wrote:
> Introduce HAVE_KVM_PRIVATE_MEM_TESTING config to be able to test fd based

> @@ -272,13 +274,15 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>  		.rsvd = err & PFERR_RSVD_MASK,
>  		.user = err & PFERR_USER_MASK,
>  		.prefetch = prefetch,
> -		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
> +		.is_tdp = is_tdp,
>  		.nx_huge_page_workaround_enabled =
>  			is_nx_huge_page_enabled(vcpu->kvm),
>
>  		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
>  		.req_level = PG_LEVEL_4K,
>  		.goal_level = PG_LEVEL_4K,
> +		.is_private = IS_ENABLED(CONFIG_HAVE_KVM_PRIVATE_MEM_TESTING) && is_tdp &&
> +			      kvm_mem_is_private(vcpu->kvm, cr2_or_gpa >> PAGE_SHIFT),

After looking at the SNP+UPM series, I think we should forego a dedicated Kconfig
for testing and instead add a new VM type for UPM-capable guests.  The new type,
e.g. KVM_X86_PROTECTED_VM, can then be used to leverage UPM for "legacy" SEV and
SEV-ES guests, as well as for UPM-capable guests that don't utilize per-VM memory
encryption, e.g. KVM selftests.

Carrying test-only behavior is obviously never ideal, and it would pretty much
have to be mutually exclusive with "real" usage of UPM, otherwise the KVM logic
gets unnecessarily complex.
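A minimal sketch of what the VM-type-based gating suggested above could look
like, assuming a KVM_X86_PROTECTED_VM type and a vm_type field on struct
kvm_arch; both names come from the suggestion above and are assumptions, not
code from this series:

/*
 * Hypothetical sketch only: gate private memory support on a dedicated VM
 * type instead of a test-only Kconfig.  KVM_X86_PROTECTED_VM and
 * kvm->arch.vm_type are assumed names, not part of this patch.
 */
#define KVM_X86_DEFAULT_VM	0
#define KVM_X86_PROTECTED_VM	1

bool kvm_arch_has_private_mem(struct kvm *kvm)
{
	/* Any VM created with the new type can use private memory. */
	return kvm->arch.vm_type == KVM_X86_PROTECTED_VM;
}

With such a check, "legacy" SEV/SEV-ES guests and non-confidential selftest
VMs would take the same code path, with no test-only branches left in KVM.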
On Tue, Jan 17, 2023 at 1:39 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Mon, Dec 05, 2022, Vishal Annapurve wrote:
> > Introduce HAVE_KVM_PRIVATE_MEM_TESTING config to be able to test fd based
>
> > @@ -272,13 +274,15 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> >  		.rsvd = err & PFERR_RSVD_MASK,
> >  		.user = err & PFERR_USER_MASK,
> >  		.prefetch = prefetch,
> > -		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
> > +		.is_tdp = is_tdp,
> >  		.nx_huge_page_workaround_enabled =
> >  			is_nx_huge_page_enabled(vcpu->kvm),
> >
> >  		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
> >  		.req_level = PG_LEVEL_4K,
> >  		.goal_level = PG_LEVEL_4K,
> > +		.is_private = IS_ENABLED(CONFIG_HAVE_KVM_PRIVATE_MEM_TESTING) && is_tdp &&
> > +			      kvm_mem_is_private(vcpu->kvm, cr2_or_gpa >> PAGE_SHIFT),
>
> After looking at the SNP+UPM series, I think we should forego a dedicated Kconfig
> for testing and instead add a new VM type for UPM-capable guests.  The new type,
> e.g. KVM_X86_PROTECTED_VM, can then be used to leverage UPM for "legacy" SEV and
> SEV-ES guests, as well as for UPM-capable guests that don't utilize per-VM
> memory encryption, e.g. KVM selftests.
>
> Carrying test-only behavior is obviously never ideal, and it would pretty much
> have to be mutually exclusive with "real" usage of UPM, otherwise the KVM logic
> gets unnecessarily complex.

Ack, the newly added VM type fits better here, with SEV/SEV-ES and
non-confidential selftests being able to share this framework.
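For illustration, a selftest could opt into the new type via the type argument
that KVM_CREATE_VM already accepts. This is a hedged userspace sketch;
KVM_X86_PROTECTED_VM is the name floated above and is defined locally because
it is not in the uapi headers at the time of this discussion:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Assumed value for the proposed VM type; not in <linux/kvm.h> yet. */
#define KVM_X86_PROTECTED_VM	1

/* Minimal sketch: error handling and fd cleanup are omitted. */
int create_protected_vm(void)
{
	int kvm_fd = open("/dev/kvm", O_RDWR);

	if (kvm_fd < 0)
		return -1;

	/* KVM_CREATE_VM's unsigned long argument selects the VM type. */
	return ioctl(kvm_fd, KVM_CREATE_VM, KVM_X86_PROTECTED_VM);
}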
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 5ccf08183b00..e2f508db0b6e 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -263,6 +263,8 @@ enum {
 static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 					u32 err, bool prefetch)
 {
+	bool is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault);
+
 	struct kvm_page_fault fault = {
 		.addr = cr2_or_gpa,
 		.error_code = err,
@@ -272,13 +274,15 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		.rsvd = err & PFERR_RSVD_MASK,
 		.user = err & PFERR_USER_MASK,
 		.prefetch = prefetch,
-		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
+		.is_tdp = is_tdp,
 		.nx_huge_page_workaround_enabled =
 			is_nx_huge_page_enabled(vcpu->kvm),
 
 		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
 		.req_level = PG_LEVEL_4K,
 		.goal_level = PG_LEVEL_4K,
+		.is_private = IS_ENABLED(CONFIG_HAVE_KVM_PRIVATE_MEM_TESTING) && is_tdp &&
+			      kvm_mem_is_private(vcpu->kvm, cr2_or_gpa >> PAGE_SHIFT),
 	};
 	int r;
 
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index d605545d6dd1..1e326367a21c 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -92,3 +92,7 @@ config HAVE_KVM_PM_NOTIFIER
 
 config HAVE_KVM_RESTRICTED_MEM
        bool
+
+config HAVE_KVM_PRIVATE_MEM_TESTING
+       bool "Private memory selftests"
+       depends on HAVE_KVM_MEMORY_ATTRIBUTES && HAVE_KVM_RESTRICTED_MEM
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index ac835fc77273..d2d829d23442 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1262,7 +1262,8 @@ int __weak kvm_arch_create_vm_debugfs(struct kvm *kvm)
 
 bool __weak kvm_arch_has_private_mem(struct kvm *kvm)
 {
-	return false;
+	return IS_ENABLED(CONFIG_HAVE_KVM_PRIVATE_MEM_TESTING);
+
 }
 
 static struct kvm *kvm_create_vm(unsigned long type, const char *fdname)
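For context, the kvm_mem_is_private() check in the fault path above consults
per-gfn attributes set by the userspace VMM. A hedged sketch of how userspace
flips a range to private, using the uAPI proposed in the in-flight UPM series
(struct and ioctl names follow those patches and may differ once merged):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Mark [gpa, gpa + size) private so that a subsequent guest access is
 * treated as private, i.e. so kvm_mem_is_private() returns true for
 * these gfns.  KVM_SET_MEMORY_ATTRIBUTES and struct kvm_memory_attributes
 * are from the proposed UPM uAPI, not released headers.
 */
static int set_range_private(int vm_fd, uint64_t gpa, uint64_t size)
{
	struct kvm_memory_attributes attrs = {
		.address    = gpa,
		.size       = size,
		.attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
	};

	return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
}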
Introduce HAVE_KVM_PRIVATE_MEM_TESTING config to be able to test the fd-based
approach to supporting private memory with non-confidential selftest VMs. To
support this testing, a few important aspects need to be considered from the
perspective of selftests:

* KVM needs to know whether an access from the guest VM is private or shared.
  Confidential VMs (SNP/TDX) carry a dedicated bit in the gpa that KVM can use
  to deduce the nature of the access. Non-confidential VMs have no mechanism
  to convey such information to KVM, so KVM simply relies on the attributes
  set by the userspace VMM, keeping the userspace VMM in the TCB for testing
  purposes.
* kvm_arch_has_private_mem() is updated to allow the private memory logic to
  work with non-confidential VM selftests.

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 arch/x86/kvm/mmu/mmu_internal.h | 6 +++++-
 virt/kvm/Kconfig                | 4 ++++
 virt/kvm/kvm_main.c             | 3 ++-
 3 files changed, 11 insertions(+), 2 deletions(-)
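To illustrate the first bullet, a purely illustrative contrast between the two
ways of classifying an access; GPA_SHARED_BIT is a stand-in macro (the real
bit position is vendor-specific), not anything defined by this patch:

/*
 * Illustrative only: confidential guests (e.g. TDX) encode shared vs.
 * private in the faulting GPA itself, so KVM can classify the access
 * from the address alone.
 */
#define GPA_SHARED_BIT	BIT_ULL(51)

static bool fault_is_private_confidential(gpa_t gpa)
{
	return !(gpa & GPA_SHARED_BIT);
}

/*
 * Non-confidential selftest VMs have no such bit, so with
 * HAVE_KVM_PRIVATE_MEM_TESTING the fault path instead trusts the
 * userspace-set attributes via kvm_mem_is_private().
 */
static bool fault_is_private_testing(struct kvm *kvm, gpa_t gpa)
{
	return kvm_mem_is_private(kvm, gpa >> PAGE_SHIFT);
}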