Message ID | 7138a3bc00ea8d3cbe0e59df15f8c22027005b59.1712785629.git.isaku.yamahata@intel.com (mailing list archive)
---|---
State | New, archived
Series | KVM: Guest Memory Pre-Population API
On Wed, 2024-04-10 at 15:07 -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> Wire KVM_MAP_MEMORY ioctl to kvm_mmu_map_tdp_page() to populate guest
> memory. When KVM_CREATE_VCPU creates vCPU, it initializes the x86
> KVM MMU part by kvm_mmu_create() and kvm_init_mmu(). vCPU is ready to
> invoke the KVM page fault handler.
>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
> v2:
> - Catch up the change of struct kvm_memory_mapping. (Sean)
> - Removed mapping level check. Push it down into vendor code. (David, Sean)
> - Rename goal_level to level. (Sean)
> - Drop kvm_arch_pre_vcpu_map_memory(), directly call kvm_mmu_reload().
>   (David, Sean)
> - Fixed the update of mapping.
> ---
>  arch/x86/kvm/x86.c | 30 ++++++++++++++++++++++++++++++
>  1 file changed, 30 insertions(+)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 2d2619d3eee4..2c765de3531e 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4713,6 +4713,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
>  	case KVM_CAP_IRQFD_RESAMPLE:
>  	case KVM_CAP_MEMORY_FAULT_INFO:
> +	case KVM_CAP_MAP_MEMORY:
>  		r = 1;
>  		break;

Should we add this after all of the pieces are in place?

>  	case KVM_CAP_EXIT_HYPERCALL:
> @@ -5867,6 +5868,35 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
>  	}
>  }
>
> +int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
> +			     struct kvm_memory_mapping *mapping)
> +{
> +	u64 end, error_code = 0;
> +	u8 level = PG_LEVEL_4K;
> +	int r;
> +
> +	/*
> +	 * Shadow paging uses GVA for kvm page fault. The first implementation
> +	 * supports GPA only to avoid confusion.
> +	 */
> +	if (!tdp_enabled)
> +		return -EOPNOTSUPP;

It's not confusion, it's that you can't pre-map GPAs for legacy shadow
paging. Or are you saying why not support pre-mapping GVAs? I think that
discussion belongs more in the commit log. The code should just say it's
not possible to pre-map GPAs in shadow paging.

> +
> +	/* reload is optimized for repeated call. */
> +	kvm_mmu_reload(vcpu);
> +
> +	r = kvm_tdp_map_page(vcpu, mapping->base_address, error_code, &level);
> +	if (r)
> +		return r;
> +
> +	/* mapping->base_address is not necessarily aligned to level-hugepage. */
> +	end = (mapping->base_address & KVM_HPAGE_MASK(level)) +
> +	      KVM_HPAGE_SIZE(level);
> +	mapping->size -= end - mapping->base_address;
> +	mapping->base_address = end;
> +	return r;
> +}
> +
>  long kvm_arch_vcpu_ioctl(struct file *filp,
>  			 unsigned int ioctl, unsigned long arg)
>  {
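[To make the review comment concrete, the in-code comment could simply state
the limitation. A sketch, not the author's wording:]

	/*
	 * Pre-mapping is keyed by GPA. With legacy shadow paging the KVM
	 * page fault path works on GVAs, so there is no GPA-based mapping
	 * to install; require TDP.
	 */
	if (!tdp_enabled)
		return -EOPNOTSUPP;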
On Wed, Apr 10, 2024 at 03:07:32PM -0700, isaku.yamahata@intel.com wrote:
>From: Isaku Yamahata <isaku.yamahata@intel.com>
>
>Wire KVM_MAP_MEMORY ioctl to kvm_mmu_map_tdp_page() to populate guest
>memory. When KVM_CREATE_VCPU creates vCPU, it initializes the x86
>KVM MMU part by kvm_mmu_create() and kvm_init_mmu(). vCPU is ready to
>invoke the KVM page fault handler.
>
>Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
>---
>v2:
>- Catch up the change of struct kvm_memory_mapping. (Sean)
>- Removed mapping level check. Push it down into vendor code. (David, Sean)
>- Rename goal_level to level. (Sean)
>- Drop kvm_arch_pre_vcpu_map_memory(), directly call kvm_mmu_reload().
>  (David, Sean)
>- Fixed the update of mapping.
>---
> arch/x86/kvm/x86.c | 30 ++++++++++++++++++++++++++++++
> 1 file changed, 30 insertions(+)
>
>diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>index 2d2619d3eee4..2c765de3531e 100644
>--- a/arch/x86/kvm/x86.c
>+++ b/arch/x86/kvm/x86.c
>@@ -4713,6 +4713,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> 	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
> 	case KVM_CAP_IRQFD_RESAMPLE:
> 	case KVM_CAP_MEMORY_FAULT_INFO:
>+	case KVM_CAP_MAP_MEMORY:
> 		r = 1;
> 		break;
> 	case KVM_CAP_EXIT_HYPERCALL:
>@@ -5867,6 +5868,35 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
> 	}
> }
>
>+int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
>+			     struct kvm_memory_mapping *mapping)
>+{
>+	u64 end, error_code = 0;
>+	u8 level = PG_LEVEL_4K;

IIUC, no need to initialize @level here.

>+	int r;
>+
>+	/*
>+	 * Shadow paging uses GVA for kvm page fault. The first implementation
>+	 * supports GPA only to avoid confusion.
>+	 */
>+	if (!tdp_enabled)
>+		return -EOPNOTSUPP;

This check duplicates the one for vcpu->arch.mmu->page_fault() in patch 5.

>+
>+	/* reload is optimized for repeated call. */
>+	kvm_mmu_reload(vcpu);
>+
>+	r = kvm_tdp_map_page(vcpu, mapping->base_address, error_code, &level);
>+	if (r)
>+		return r;
>+
>+	/* mapping->base_address is not necessarily aligned to level-hugepage. */
>+	end = (mapping->base_address & KVM_HPAGE_MASK(level)) +
>+	      KVM_HPAGE_SIZE(level);

maybe

	end = ALIGN(mapping->base_address, KVM_HPAGE_SIZE(level));

>+	mapping->size -= end - mapping->base_address;
>+	mapping->base_address = end;
>+	return r;
>+}
>+
> long kvm_arch_vcpu_ioctl(struct file *filp,
> 			 unsigned int ioctl, unsigned long arg)
> {
>--
>2.43.2
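[An aside on the ALIGN() suggestion: ALIGN() rounds up, so for a base_address
that is already level-aligned it is a no-op, whereas the open-coded form
always advances to the end of the containing huge page. A minimal user-space
sketch shows the difference; the PG_LEVEL_*/KVM_HPAGE_* macros are re-derived
locally and assumed to match the kernel's definitions:]

#include <stdint.h>
#include <stdio.h>

/* Assumed to mirror arch/x86/include/asm/kvm_host.h. */
#define PG_LEVEL_4K		1
#define PG_LEVEL_2M		2
#define KVM_HPAGE_SHIFT(x)	(12 + ((x) - PG_LEVEL_4K) * 9)
#define KVM_HPAGE_SIZE(x)	(1ULL << KVM_HPAGE_SHIFT(x))
#define KVM_HPAGE_MASK(x)	(~(KVM_HPAGE_SIZE(x) - 1))
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((uint64_t)(a) - 1))

int main(void)
{
	uint64_t base = 0x100200000ULL;	/* already 2M-aligned */
	int level = PG_LEVEL_2M;

	/* Open-coded form from the patch: end of the containing huge page. */
	uint64_t end = (base & KVM_HPAGE_MASK(level)) + KVM_HPAGE_SIZE(level);
	/* Suggested ALIGN() form: a no-op when base is already aligned. */
	uint64_t end_align = ALIGN(base, KVM_HPAGE_SIZE(level));

	printf("open-coded: %#llx, ALIGN: %#llx\n",
	       (unsigned long long)end, (unsigned long long)end_align);
	return 0;
}

[With an already-aligned base_address the ALIGN() variant would not advance
the mapping at all; round_up(mapping->base_address + 1, KVM_HPAGE_SIZE(level))
would keep the original behavior in a single expression.]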
On Thu, Apr 11, 2024 at 12:08 AM <isaku.yamahata@intel.com> wrote:
>
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> Wire KVM_MAP_MEMORY ioctl to kvm_mmu_map_tdp_page() to populate guest
> memory. When KVM_CREATE_VCPU creates vCPU, it initializes the x86
> KVM MMU part by kvm_mmu_create() and kvm_init_mmu(). vCPU is ready to
> invoke the KVM page fault handler.
>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
> v2:
> - Catch up the change of struct kvm_memory_mapping. (Sean)
> - Removed mapping level check. Push it down into vendor code. (David, Sean)
> - Rename goal_level to level. (Sean)
> - Drop kvm_arch_pre_vcpu_map_memory(), directly call kvm_mmu_reload().
>   (David, Sean)
> - Fixed the update of mapping.
> ---
>  arch/x86/kvm/x86.c | 30 ++++++++++++++++++++++++++++++
>  1 file changed, 30 insertions(+)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 2d2619d3eee4..2c765de3531e 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4713,6 +4713,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
>  	case KVM_CAP_IRQFD_RESAMPLE:
>  	case KVM_CAP_MEMORY_FAULT_INFO:
> +	case KVM_CAP_MAP_MEMORY:
>  		r = 1;
>  		break;
>  	case KVM_CAP_EXIT_HYPERCALL:
> @@ -5867,6 +5868,35 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
>  	}
>  }
>
> +int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
> +			     struct kvm_memory_mapping *mapping)
> +{
> +	u64 end, error_code = 0;
> +	u8 level = PG_LEVEL_4K;
> +	int r;
> +
> +	/*
> +	 * Shadow paging uses GVA for kvm page fault. The first implementation
> +	 * supports GPA only to avoid confusion.
> +	 */
> +	if (!tdp_enabled)
> +		return -EOPNOTSUPP;
> +
> +	/* reload is optimized for repeated call. */
> +	kvm_mmu_reload(vcpu);
> +
> +	r = kvm_tdp_map_page(vcpu, mapping->base_address, error_code, &level);
> +	if (r)
> +		return r;
> +
> +	/* mapping->base_address is not necessarily aligned to level-hugepage. */

/*
 * level can be more than the alignment of mapping->base_address if
 * the mapping can use a huge page.
 */

> +	end = (mapping->base_address & KVM_HPAGE_MASK(level)) +
> +	      KVM_HPAGE_SIZE(level);
> +	mapping->size -= end - mapping->base_address;
> +	mapping->base_address = end;

Slightly safer in the case where level is more than the alignment of
mapping->base_address:

	mapped = min(mapping->size, end - mapping->base_address);
	mapping->size -= mapped;
	mapping->base_address += mapped;

Paolo

> +	return r;
> +}
> +
>  long kvm_arch_vcpu_ioctl(struct file *filp,
>  			 unsigned int ioctl, unsigned long arg)
>  {
> --
> 2.43.2
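[Folding both of Paolo's suggestions into the quoted v2 function yields
roughly the following sketch. Illustration only, not a tested revision;
"mapped" is a new local introduced for the clamp:]

int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
			     struct kvm_memory_mapping *mapping)
{
	u64 mapped, end, error_code = 0;
	u8 level = PG_LEVEL_4K;
	int r;

	if (!tdp_enabled)
		return -EOPNOTSUPP;

	/* reload is optimized for repeated call. */
	kvm_mmu_reload(vcpu);

	r = kvm_tdp_map_page(vcpu, mapping->base_address, error_code, &level);
	if (r)
		return r;

	/*
	 * level can be more than the alignment of mapping->base_address if
	 * the mapping can use a huge page.
	 */
	end = (mapping->base_address & KVM_HPAGE_MASK(level)) +
	      KVM_HPAGE_SIZE(level);

	/* Clamp so a huge mapping cannot advance past the requested size. */
	mapped = min(mapping->size, end - mapping->base_address);
	mapping->size -= mapped;
	mapping->base_address += mapped;
	return r;
}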
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2d2619d3eee4..2c765de3531e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4713,6 +4713,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
 	case KVM_CAP_IRQFD_RESAMPLE:
 	case KVM_CAP_MEMORY_FAULT_INFO:
+	case KVM_CAP_MAP_MEMORY:
 		r = 1;
 		break;
 	case KVM_CAP_EXIT_HYPERCALL:
@@ -5867,6 +5868,35 @@ static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
 	}
 }
 
+int kvm_arch_vcpu_map_memory(struct kvm_vcpu *vcpu,
+			     struct kvm_memory_mapping *mapping)
+{
+	u64 end, error_code = 0;
+	u8 level = PG_LEVEL_4K;
+	int r;
+
+	/*
+	 * Shadow paging uses GVA for kvm page fault. The first implementation
+	 * supports GPA only to avoid confusion.
+	 */
+	if (!tdp_enabled)
+		return -EOPNOTSUPP;
+
+	/* reload is optimized for repeated call. */
+	kvm_mmu_reload(vcpu);
+
+	r = kvm_tdp_map_page(vcpu, mapping->base_address, error_code, &level);
+	if (r)
+		return r;
+
+	/* mapping->base_address is not necessarily aligned to level-hugepage. */
+	end = (mapping->base_address & KVM_HPAGE_MASK(level)) +
+	      KVM_HPAGE_SIZE(level);
+	mapping->size -= end - mapping->base_address;
+	mapping->base_address = end;
+	return r;
+}
+
 long kvm_arch_vcpu_ioctl(struct file *filp,
 			 unsigned int ioctl, unsigned long arg)
 {
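[For context, a hypothetical sketch of how a VMM might drive the new ioctl
from user space. The struct layout beyond base_address/size, the ioctl
number, and the EINTR/EAGAIN retry behavior are assumptions based on the
discussion above, not taken from this page:]

#include <errno.h>
#include <stdint.h>
#include <sys/ioctl.h>

/* Assumed uAPI from earlier patches in the series (not shown here). */
struct kvm_memory_mapping {
	uint64_t base_address;
	uint64_t size;
	uint64_t flags;
	uint64_t source;
};

#ifndef KVM_MAP_MEMORY
/* Assumed ioctl encoding; the real one is defined by the series. */
#define KVM_MAP_MEMORY	_IOWR(0xAE, 0xd5, struct kvm_memory_mapping)
#endif

/* Pre-populate [gpa, gpa + size) through a vCPU fd before first entry. */
static int prepopulate_range(int vcpu_fd, uint64_t gpa, uint64_t size)
{
	struct kvm_memory_mapping mapping = {
		.base_address = gpa,
		.size = size,
	};

	/*
	 * The kernel advances base_address and shrinks size as it maps,
	 * so retry until the whole range is consumed or a hard error.
	 */
	while (mapping.size) {
		if (ioctl(vcpu_fd, KVM_MAP_MEMORY, &mapping) &&
		    errno != EINTR && errno != EAGAIN)
			return -errno;
	}
	return 0;
}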