Message ID | 20240421180122.1650812-17-michael.roth@amd.com (mailing list archive) |
---|---|
State | Not Applicable |
Delegated to: | Herbert Xu |
Series | Add AMD Secure Nested Paging (SEV-SNP) Hypervisor Support |
On Sun, Apr 21, 2024, Michael Roth wrote:
> ---
>  arch/x86/kvm/svm/sev.c | 32 ++++++++++++++++++++++++++++++++
>  arch/x86/kvm/svm/svm.c |  1 +
>  arch/x86/kvm/svm/svm.h |  7 +++++++
>  3 files changed, 40 insertions(+)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index ff9b8c68ae56..243369e302f4 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -4528,3 +4528,35 @@ void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end)
>  		cond_resched();
>  	}
>  }
> +
> +/*
> + * Re-check whether an #NPF for a private/gmem page can still be serviced, and
> + * adjust maximum mapping level if needed.
> + */
> +int sev_gmem_validate_fault(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, bool is_private,

This is a misleading name.  The primary purpose is not to validate the fault, the
primary purpose is to get the max mapping level.  The fact that this can fail
should not dictate the name.

I also think we should skip the call if the max level is already PG_LEVEL_4K.
Something _could_ race and invalidate the RMP, but that's _exactly_ why KVM guards
the page fault path with mmu_invalidate_seq.

Actually, is returning an error in this case even correct?  Me thinks no.  If
something invalidates the RMP between kvm_gmem_get_pfn() and getting the mapping
level, then KVM should retry, which mmu_invalidate_seq handles.  Returning -EINVAL
and killing the VM is wrong.

And IMO, "gmem" shouldn't be in the name, this is not a hook from guest_memfd,
it's a hook for mapping private memory.  And as someone called out somewhere else,
the "private" parameter is pointless.  And for that matter, so is the gfn.

And even _if_ we want to return an error, we could overload the return code to
handle this, e.g. in the caller:

	r = static_call(kvm_x86_max_private_mapping_level)(vcpu->kvm, fault->pfn);
	if (r < 0) {
		kvm_release_pfn_clean(fault->pfn);
		return r;
	}

	fault->max_level = min(fault->max_level, r);

but what I think we want is:

---
int sev_snp_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
{
	int level, rc;
	bool assigned;

	if (!sev_snp_guest(kvm))
		return 0;

	rc = snp_lookup_rmpentry(pfn, &assigned, &level);
	if (rc || !assigned)
		return PG_LEVEL_4K;

	return level;
}

static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
					u8 max_level, int gmem_order)
{
	if (max_level == PG_LEVEL_4K)
		return PG_LEVEL_4K;

	max_level = min(kvm_max_level_for_order(gmem_order), max_level);
	if (max_level == PG_LEVEL_4K)
		return PG_LEVEL_4K;

	return min(max_level,
		   static_call(kvm_x86_private_max_mapping_level)(kvm, pfn));
}

static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
				   struct kvm_page_fault *fault)
{
	struct kvm *kvm = vcpu->kvm;
	int max_order, r;

	if (!kvm_slot_can_be_private(fault->slot)) {
		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
		return -EFAULT;
	}

	r = kvm_gmem_get_pfn(kvm, fault->slot, fault->gfn, &fault->pfn,
			     &max_order);
	if (r) {
		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
		return r;
	}

	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
	fault->max_level = kvm_max_private_mapping_level(kvm, fault->pfn,
							 fault->max_level,
							 max_order);

	return RET_PF_CONTINUE;
}
---

Side topic, the KVM_MEM_READONLY check is unnecessary, KVM doesn't allow RO memslots
to coincide with guest_memfd.  I missed that in commit e563592224e0 ("KVM: Make
KVM_MEM_GUEST_MEMFD mutually exclusive with KVM_MEM_READONLY").
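The mmu_invalidate_seq argument above relies on a generation-count retry pattern:
the fault path snapshots a sequence number before doing its lookups, and if an
invalidation bumps the count in the meantime, the fault is simply retried rather
than failed.  A self-contained sketch of that pattern (illustrative only, not KVM
code; all names and the fixed level value below are made up) is:

---
#include <stdio.h>

static unsigned long invalidate_seq;	/* bumped by every invalidation */

static void invalidate_range(void)
{
	invalidate_seq++;		/* stand-in for the real invalidation work */
}

static int lookup_mapping_level(void)
{
	return 2;			/* pretend lookup, e.g. a 2M-capable entry */
}

static int handle_fault(void)
{
	unsigned long seq;
	int level;

	do {
		seq = invalidate_seq;	/* snapshot before the lookup */
		level = lookup_mapping_level();
		/*
		 * If an invalidation raced with the lookup, the snapshot is
		 * stale: retry the whole fault instead of returning an error.
		 */
	} while (seq != invalidate_seq);

	return level;
}

int main(void)
{
	invalidate_range();
	printf("mapped at level %d\n", handle_fault());
	return 0;
}
---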
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index ff9b8c68ae56..243369e302f4 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -4528,3 +4528,35 @@ void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end)
 		cond_resched();
 	}
 }
+
+/*
+ * Re-check whether an #NPF for a private/gmem page can still be serviced, and
+ * adjust maximum mapping level if needed.
+ */
+int sev_gmem_validate_fault(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, bool is_private,
+			    u8 *max_level)
+{
+	int level, rc;
+	bool assigned;
+
+	if (!sev_snp_guest(kvm))
+		return 0;
+
+	rc = snp_lookup_rmpentry(pfn, &assigned, &level);
+	if (rc) {
+		pr_err_ratelimited("SEV: RMP entry not found: GFN %llx PFN %llx level %d error %d\n",
+				   gfn, pfn, level, rc);
+		return -ENOENT;
+	}
+
+	if (!assigned) {
+		pr_err_ratelimited("SEV: RMP entry is not assigned: GFN %llx PFN %llx level %d\n",
+				   gfn, pfn, level);
+		return -EINVAL;
+	}
+
+	if (level < *max_level)
+		*max_level = level;
+
+	return 0;
+}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 29dc5fa28d97..c26a7a933b93 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5088,6 +5088,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 
 	.gmem_prepare = sev_gmem_prepare,
 	.gmem_invalidate = sev_gmem_invalidate,
+	.gmem_validate_fault = sev_gmem_validate_fault,
 };
 
 /*
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 6721e5c6cf73..8a8ee475ad86 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -732,6 +732,8 @@ void sev_vcpu_unblocking(struct kvm_vcpu *vcpu);
 void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu);
 int sev_gmem_prepare(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order);
 void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end);
+int sev_gmem_validate_fault(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, bool is_private,
+			    u8 *max_level);
 #else
 static inline struct page *snp_safe_alloc_page(struct kvm_vcpu *vcpu) {
 	return alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
@@ -753,6 +755,11 @@ static inline int sev_gmem_prepare(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, in
 	return 0;
 }
 static inline void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end) {}
+static inline int sev_gmem_validate_fault(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn,
+					  bool is_private, u8 *max_level)
+{
+	return 0;
+}
 #endif