Message ID: 20220330174621.1567317-8-bgardon@google.com (mailing list archive)
State: New, archived
Series: KVM: x86: Add a cap to disable NX hugepages on a VM
On Wed, Mar 30, 2022 at 10:46:17AM -0700, Ben Gardon wrote:
> Factor out the code to update the NX hugepages state for an individual
> VM. This will be expanded in future commits to allow per-VM control of
> NX hugepages.
>
> No functional change intended.
>
> Signed-off-by: Ben Gardon <bgardon@google.com>

Reviewed-by: David Matlack <dmatlack@google.com>

> ---
>  arch/x86/kvm/mmu/mmu.c | 17 +++++++++++------
>  1 file changed, 11 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index dbf46dd98618..af428cb65b3f 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6202,6 +6202,15 @@ static void __set_nx_huge_pages(bool val)
>  	nx_huge_pages = itlb_multihit_kvm_mitigation = val;
>  }
>
> +static void kvm_update_nx_huge_pages(struct kvm *kvm)
> +{
> +	mutex_lock(&kvm->slots_lock);
> +	kvm_mmu_zap_all_fast(kvm);
> +	mutex_unlock(&kvm->slots_lock);
> +
> +	wake_up_process(kvm->arch.nx_lpage_recovery_thread);
> +}
> +
>  static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
>  {
>  	bool old_val = nx_huge_pages;
> @@ -6224,13 +6233,9 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
>
>  	mutex_lock(&kvm_lock);
>
> -	list_for_each_entry(kvm, &vm_list, vm_list) {
> -		mutex_lock(&kvm->slots_lock);
> -		kvm_mmu_zap_all_fast(kvm);
> -		mutex_unlock(&kvm->slots_lock);
> +	list_for_each_entry(kvm, &vm_list, vm_list)
> +		kvm_update_nx_huge_pages(kvm);
>
> -		wake_up_process(kvm->arch.nx_lpage_recovery_thread);
> -	}
>  	mutex_unlock(&kvm_lock);
>  }
>
> --
> 2.35.1.1021.g381101b075-goog
>