Message ID | 20220414132013.1588929-14-vkuznets@redhat.com (mailing list archive)
---|---
State | New, archived
Series | KVM: x86: hyper-v: Fine-grained TLB flush + L2 TLB flush feature
On Thu, 2022-04-14 at 15:19 +0200, Vitaly Kuznetsov wrote:
> Similar to nSVM, KVM needs to know L2's VM_ID/VP_ID and Partition
> assist page address to handle L2 TLB flush requests.
>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---
>  arch/x86/kvm/svm/hyperv.h | 16 ++++++++++++++++
>  arch/x86/kvm/svm/nested.c |  2 ++
>  2 files changed, 18 insertions(+)
>
> diff --git a/arch/x86/kvm/svm/hyperv.h b/arch/x86/kvm/svm/hyperv.h
> index 7d6d97968fb9..8cf702fed7e5 100644
> --- a/arch/x86/kvm/svm/hyperv.h
> +++ b/arch/x86/kvm/svm/hyperv.h
> @@ -9,6 +9,7 @@
>  #include <asm/mshyperv.h>
>
>  #include "../hyperv.h"
> +#include "svm.h"
>
>  /*
>   * Hyper-V uses the software reserved 32 bytes in VMCB
> @@ -32,4 +33,19 @@ struct hv_enlightenments {
>   */
>  #define VMCB_HV_NESTED_ENLIGHTENMENTS VMCB_SW
>
> +static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
> +{
> +	struct vcpu_svm *svm = to_svm(vcpu);
> +	struct hv_enlightenments *hve =
> +		(struct hv_enlightenments *)svm->nested.ctl.reserved_sw;

Small nitpick:

Can we use this as an opportunity to rename 'reserved_sw' to
'hv_enlightenments' or something, because that is what it is?

Also, reserved_sw is an array, which is confusing, since at first look
it looks like we have a pointer dereference here.

> +	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> +
> +	if (!hv_vcpu)
> +		return;
> +
> +	hv_vcpu->nested.pa_page_gpa = hve->partition_assist_page;
> +	hv_vcpu->nested.vm_id = hve->hv_vm_id;
> +	hv_vcpu->nested.vp_id = hve->hv_vp_id;
> +}
> +
>  #endif /* __ARCH_X86_KVM_SVM_HYPERV_H__ */
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index bed5e1692cef..2d1a76343404 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -826,6 +826,8 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
>
>  	svm->nested.nested_run_pending = 1;
>
> +	nested_svm_hv_update_vm_vp_ids(vcpu);
> +
>  	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
>  		goto out_exit_err;
>

That won't work after migration, since this won't be called if we
migrate with a nested guest running.

I think that nested_svm_hv_update_vm_vp_ids should be called from
enter_svm_guest_mode.

Best regards,
	Maxim Levitsky
Maxim Levitsky <mlevitsk@redhat.com> writes:

> On Thu, 2022-04-14 at 15:19 +0200, Vitaly Kuznetsov wrote:
>
> [...]
>
>> +static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
>> +{
>> +	struct vcpu_svm *svm = to_svm(vcpu);
>> +	struct hv_enlightenments *hve =
>> +		(struct hv_enlightenments *)svm->nested.ctl.reserved_sw;
>
> Small nitpick:
>
> Can we use this as an opportunity to rename the 'reserved_sw' to
> 'hv_enlightenments' or something, because that is what it is?
>
> Also the reserved_sw is an array, which is confusing, since at first look
> it looks like we have a pointer dereference here.

Well, that's what it is in the Hyper-V world, and so far we didn't give
it another meaning in KVM, but in theory it is not impossible, e.g. we
could use this area to speed up nested KVM on KVM.

AMD calls this "Reserved for Host usage" so we could probably rename it
to 'reserved_host' but I'm not sure it's worth the hassle...

>> @@ -826,6 +826,8 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
>>
>>  	svm->nested.nested_run_pending = 1;
>>
>> +	nested_svm_hv_update_vm_vp_ids(vcpu);
>> +
>>  	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
>>  		goto out_exit_err;
>
> That won't work after migration, since this won't be called
> if we migrate with a nested guest running.
>
> I think that nested_svm_hv_update_vm_vp_ids should be called
> from enter_svm_guest_mode.

Oh, that's a good one, thanks! This could've been a hard-to-debug issue.
On Wed, 2022-05-18 at 14:25 +0200, Vitaly Kuznetsov wrote:
> Maxim Levitsky <mlevitsk@redhat.com> writes:
>
> > On Thu, 2022-04-14 at 15:19 +0200, Vitaly Kuznetsov wrote:
> >
> > [...]
> >
> > > +static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
> > > +{
> > > +	struct vcpu_svm *svm = to_svm(vcpu);
> > > +	struct hv_enlightenments *hve =
> > > +		(struct hv_enlightenments *)svm->nested.ctl.reserved_sw;
> >
> > Small nitpick:
> >
> > Can we use this as an opportunity to rename the 'reserved_sw' to
> > 'hv_enlightenments' or something, because that is what it is?
> >
> > Also the reserved_sw is an array, which is confusing, since at first look
> > it looks like we have a pointer dereference here.
>
> Well, that's what it is in the Hyper-V world, and so far we didn't give
> it another meaning in KVM, but in theory it is not impossible, e.g. we
> could use this area to speed up nested KVM on KVM.
>
> AMD calls this "Reserved for Host usage" so we could probably rename it
> to 'reserved_host' but I'm not sure it's worth the hassle...

This is a very good piece of information. If AMD calls it like that,
then let it be.

It is probably not worth it to rename the field then, but I think it
might be worth it to add this info as a comment in KVM.

Also it might be worth it to add a wrapper function for

	struct hv_enlightenments *hve =
		(struct hv_enlightenments *)svm->nested.ctl.reserved_sw;

(+ a check that this area is valid - currently it is copied only when
'kvm_hv_hypercall_enabled == true').

Both would be very low priority items, to be honest.

Thanks,
Best regards,
	Maxim Levitsky

> [...]
diff --git a/arch/x86/kvm/svm/hyperv.h b/arch/x86/kvm/svm/hyperv.h
index 7d6d97968fb9..8cf702fed7e5 100644
--- a/arch/x86/kvm/svm/hyperv.h
+++ b/arch/x86/kvm/svm/hyperv.h
@@ -9,6 +9,7 @@
 #include <asm/mshyperv.h>
 
 #include "../hyperv.h"
+#include "svm.h"
 
 /*
  * Hyper-V uses the software reserved 32 bytes in VMCB
@@ -32,4 +33,19 @@ struct hv_enlightenments {
  */
 #define VMCB_HV_NESTED_ENLIGHTENMENTS VMCB_SW
 
+static inline void nested_svm_hv_update_vm_vp_ids(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	struct hv_enlightenments *hve =
+		(struct hv_enlightenments *)svm->nested.ctl.reserved_sw;
+	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+
+	if (!hv_vcpu)
+		return;
+
+	hv_vcpu->nested.pa_page_gpa = hve->partition_assist_page;
+	hv_vcpu->nested.vm_id = hve->hv_vm_id;
+	hv_vcpu->nested.vp_id = hve->hv_vp_id;
+}
+
 #endif /* __ARCH_X86_KVM_SVM_HYPERV_H__ */
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index bed5e1692cef..2d1a76343404 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -826,6 +826,8 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 
 	svm->nested.nested_run_pending = 1;
 
+	nested_svm_hv_update_vm_vp_ids(vcpu);
+
 	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
 		goto out_exit_err;
Similar to nSVM, KVM needs to know L2's VM_ID/VP_ID and Partition
assist page address to handle L2 TLB flush requests.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 arch/x86/kvm/svm/hyperv.h | 16 ++++++++++++++++
 arch/x86/kvm/svm/nested.c |  2 ++
 2 files changed, 18 insertions(+)