Message ID: eabc3f3e5eb03b370cadf6e1901ea34d7a020adc.1712785629.git.isaku.yamahata@intel.com (mailing list archive)
State: New, archived
Series: KVM: Guest Memory Pre-Population API
On Wed, 2024-04-10 at 15:07 -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> The guest memory population logic will need to know what page size or level
> (4K, 2M, ...) is mapped.

TDX needs this, but do the normal VM users need to have it fixed to 4k? Is it
actually good?

>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
On Tue, Apr 16, 2024 at 02:40:39PM +0000, "Edgecombe, Rick P" <rick.p.edgecombe@intel.com> wrote:
> On Wed, 2024-04-10 at 15:07 -0700, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> >
> > The guest memory population logic will need to know what page size or level
> > (4K, 2M, ...) is mapped.
>
> TDX needs this, but do the normal VM users need to have it fixed to 4k? Is it
> actually good?

I meant that the function, kvm_arch_vcpu_map_memory(), in

  [PATCH v2 06/10] KVM: x86: Implement kvm_arch_vcpu_map_memory()

needs the level. No logic in this patch series enforces a fixed 4K; the
gmem_max_level() hook will determine it.
https://lore.kernel.org/all/20240404185034.3184582-12-pbonzini@redhat.com/

I'll update the commit message to reflect this.
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 9baae6c223ee..b0a10f5a40dd 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -288,7 +288,8 @@ static inline void kvm_mmu_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
 }
 
 static inline int __kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
-					  u64 err, bool prefetch, int *emulation_type)
+					  u64 err, bool prefetch,
+					  int *emulation_type, u8 *level)
 {
 	struct kvm_page_fault fault = {
 		.addr = cr2_or_gpa,
@@ -330,6 +331,8 @@ static inline int __kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gp
 
 	if (fault.write_fault_to_shadow_pgtable && emulation_type)
 		*emulation_type |= EMULTYPE_WRITE_PF_TO_SP;
+	if (level)
+		*level = fault.goal_level;
 
 	return r;
 }
@@ -347,7 +350,8 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	if (!prefetch)
 		vcpu->stat.pf_taken++;
 
-	r = __kvm_mmu_do_page_fault(vcpu, cr2_or_gpa, err, prefetch, emulation_type);
+	r = __kvm_mmu_do_page_fault(vcpu, cr2_or_gpa, err, prefetch,
+				    emulation_type, NULL);
 
 	/*
 	 * Similar to above, prefetch faults aren't truly spurious, and the