
[V3,0/3] KVM/Hyper-V: Add Hyper-V direct tlb flush support

Message ID 20190819131737.26942-1-Tianyu.Lan@microsoft.com (mailing list archive)

Message

Tianyu Lan Aug. 19, 2019, 1:17 p.m. UTC
From: Tianyu Lan <Tianyu.Lan@microsoft.com>

This patchset adds Hyper-V direct TLB flush support to KVM. When direct
TLB flush is enabled in L1, Hyper-V in L0 handles TLB flush requests
from the L2 guest on behalf of the L1 hypervisor.

Patch 2 introduces a new capability, KVM_CAP_HYPERV_DIRECT_TLBFLUSH, to
enable the feature from user space. User space should enable it only
when the Hyper-V hypervisor capability is exposed to the guest and the
KVM profile is hidden, since the KVM and Hyper-V hypercall parameter
conventions conflict. We expect the L2 guest not to use KVM hypercalls
while the feature is enabled. For details, see the documentation of the
new capability "KVM_CAP_HYPERV_DIRECT_TLBFLUSH".
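
For illustration, enabling the capability from user space could look
roughly like the sketch below. This assumes the capability is enabled
per vCPU through the standard KVM_ENABLE_CAP ioctl with no extra
arguments; the authoritative description is the api.txt hunk in patch 2.

        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        /*
         * Sketch only: enable Hyper-V direct TLB flush for one vCPU.
         * Call this after the Hyper-V CPUID leaves have been set up and
         * the KVM signature has been hidden from the guest.
         */
        static int enable_hv_direct_tlbflush(int vcpu_fd)
        {
                struct kvm_enable_cap cap = {
                        .cap = KVM_CAP_HYPERV_DIRECT_TLBFLUSH,
                };

                return ioctl(vcpu_fd, KVM_ENABLE_CAP, &cap);
        }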

Changes since v2:
       - Move the hv assist page (hv_pa_pg) from struct kvm to struct kvm_hv.

Changes since v1:
       - Fix an offset issue in patch 1.
       - Update the description of KVM_CAP_HYPERV_DIRECT_TLBFLUSH.


Tianyu Lan (2):
  x86/Hyper-V: Fix definition of struct hv_vp_assist_page
  KVM/Hyper-V: Add new KVM cap KVM_CAP_HYPERV_DIRECT_TLBFLUSH

Vitaly Kuznetsov (1):
  KVM/Hyper-V/VMX: Add direct tlb flush support

 Documentation/virtual/kvm/api.txt  | 13 +++++++++++++
 arch/x86/include/asm/hyperv-tlfs.h | 24 ++++++++++++++++++-----
 arch/x86/include/asm/kvm_host.h    |  4 ++++
 arch/x86/kvm/vmx/evmcs.h           |  2 ++
 arch/x86/kvm/vmx/vmx.c             | 39 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c                 |  8 ++++++++
 include/uapi/linux/kvm.h           |  1 +
 7 files changed, 86 insertions(+), 5 deletions(-)

Comments

Vitaly Kuznetsov Aug. 27, 2019, 6:41 a.m. UTC | #1
lantianyu1986@gmail.com writes:

> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
>
> This patchset adds Hyper-V direct TLB flush support to KVM. When direct
> TLB flush is enabled in L1, Hyper-V in L0 handles TLB flush requests
> from the L2 guest on behalf of the L1 hypervisor.
>
> Patch 2 introduces a new capability, KVM_CAP_HYPERV_DIRECT_TLBFLUSH, to
> enable the feature from user space. User space should enable it only
> when the Hyper-V hypervisor capability is exposed to the guest and the
> KVM profile is hidden, since the KVM and Hyper-V hypercall parameter
> conventions conflict. We expect the L2 guest not to use KVM hypercalls
> while the feature is enabled. For details, see the documentation of the
> new capability "KVM_CAP_HYPERV_DIRECT_TLBFLUSH".

I was thinking about this for awhile and I think I have a better
proposal. Instead of adding this new capability let's enable direct TLB
flush when KVM guest enables Hyper-V Hypercall page (writes to
HV_X64_MSR_HYPERCALL) - this guarantees that the guest doesn't need KVM
hypercalls as we can't handle both KVM-style and Hyper-V-style
hypercalls simultaneously and kvm_emulate_hypercall() does:

	if (kvm_hv_hypercall_enabled(vcpu->kvm))
		return kvm_hv_hypercall(vcpu);

What do you think?

(and instead of adding the capability we can add kvm.ko module parameter
to enable direct tlb flush unconditionally, like
'hv_direct_tlbflush=-1/0/1' with '-1' being the default (autoselect
based on Hyper-V hypercall enablement, '0' - permanently disabled, '1' -
permanently enabled)).
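
Roughly, that suggestion could look like the sketch below (illustrative
only, not patch code: 'hv_direct_tlbflush' and the enable_direct_tlbflush()
hook are made-up names for this example), invoked from the
HV_X64_MSR_HYPERCALL write path:

        /* -1: auto (follow the Hyper-V hypercall page), 0: off, 1: always on */
        static int __read_mostly hv_direct_tlbflush = -1;
        module_param(hv_direct_tlbflush, int, 0444);

        /* Called once the guest writes HV_X64_MSR_HYPERCALL. */
        static void kvm_hv_update_direct_tlbflush(struct kvm_vcpu *vcpu)
        {
                if (hv_direct_tlbflush == 0)
                        return;

                /*
                 * Autoselect: a guest that enabled the Hyper-V hypercall
                 * page is committed to Hyper-V-style hypercalls, so there
                 * is no risk of clashing with KVM hypercalls.
                 */
                if (hv_direct_tlbflush == 1 ||
                    kvm_hv_hypercall_enabled(vcpu->kvm))
                        kvm_x86_ops->enable_direct_tlbflush(vcpu);
        }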
Tianyu Lan Aug. 27, 2019, 12:17 p.m. UTC | #2
On Tue, Aug 27, 2019 at 2:41 PM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>
> lantianyu1986@gmail.com writes:
>
> > From: Tianyu Lan <Tianyu.Lan@microsoft.com>
> >
> > This patchset adds Hyper-V direct TLB flush support to KVM. When direct
> > TLB flush is enabled in L1, Hyper-V in L0 handles TLB flush requests
> > from the L2 guest on behalf of the L1 hypervisor.
> >
> > Patch 2 introduces a new capability, KVM_CAP_HYPERV_DIRECT_TLBFLUSH, to
> > enable the feature from user space. User space should enable it only
> > when the Hyper-V hypervisor capability is exposed to the guest and the
> > KVM profile is hidden, since the KVM and Hyper-V hypercall parameter
> > conventions conflict. We expect the L2 guest not to use KVM hypercalls
> > while the feature is enabled. For details, see the documentation of the
> > new capability "KVM_CAP_HYPERV_DIRECT_TLBFLUSH".
>
> I was thinking about this for awhile and I think I have a better
> proposal. Instead of adding this new capability let's enable direct TLB
> flush when KVM guest enables Hyper-V Hypercall page (writes to
> HV_X64_MSR_HYPERCALL) - this guarantees that the guest doesn't need KVM
> hypercalls as we can't handle both KVM-style and Hyper-V-style
> hypercalls simultaneously and kvm_emulate_hypercall() does:
>
>         if (kvm_hv_hypercall_enabled(vcpu->kvm))
>                 return kvm_hv_hypercall(vcpu);
>
> What do you think?
>
> (and instead of adding the capability we can add kvm.ko module parameter
> to enable direct tlb flush unconditionally, like
> 'hv_direct_tlbflush=-1/0/1' with '-1' being the default (autoselect
> based on Hyper-V hypercall enablement, '0' - permanently disabled, '1' -
> permanently enabled)).
>

Hi Vitaly:
     Actually, I had such an idea before. But user space should check
whether hv tlb flush
is exposed to the VM before enabling direct tlb flush. If not, user space
should not enable direct
tlb flush for the guest, since Hyper-V will do more checks for each
hypercall from the nested
VM when the feature is enabled.
Tianyu Lan Aug. 27, 2019, 12:33 p.m. UTC | #3
On Tue, Aug 27, 2019 at 8:17 PM Tianyu Lan <lantianyu1986@gmail.com> wrote:
>
> On Tue, Aug 27, 2019 at 2:41 PM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
> >
> > lantianyu1986@gmail.com writes:
> >
> > > From: Tianyu Lan <Tianyu.Lan@microsoft.com>
> > >
> > > This patchset adds Hyper-V direct TLB flush support to KVM. When direct
> > > TLB flush is enabled in L1, Hyper-V in L0 handles TLB flush requests
> > > from the L2 guest on behalf of the L1 hypervisor.
> > >
> > > Patch 2 introduces a new capability, KVM_CAP_HYPERV_DIRECT_TLBFLUSH, to
> > > enable the feature from user space. User space should enable it only
> > > when the Hyper-V hypervisor capability is exposed to the guest and the
> > > KVM profile is hidden, since the KVM and Hyper-V hypercall parameter
> > > conventions conflict. We expect the L2 guest not to use KVM hypercalls
> > > while the feature is enabled. For details, see the documentation of the
> > > new capability "KVM_CAP_HYPERV_DIRECT_TLBFLUSH".
> >
> > I was thinking about this for awhile and I think I have a better
> > proposal. Instead of adding this new capability let's enable direct TLB
> > flush when KVM guest enables Hyper-V Hypercall page (writes to
> > HV_X64_MSR_HYPERCALL) - this guarantees that the guest doesn't need KVM
> > hypercalls as we can't handle both KVM-style and Hyper-V-style
> > hypercalls simultaneously and kvm_emulate_hypercall() does:
> >
> >         if (kvm_hv_hypercall_enabled(vcpu->kvm))
> >                 return kvm_hv_hypercall(vcpu);
> >
> > What do you think?
> >
> > (and instead of adding the capability we can add kvm.ko module parameter
> > to enable direct tlb flush unconditionally, like
> > 'hv_direct_tlbflush=-1/0/1' with '-1' being the default (autoselect
> > based on Hyper-V hypercall enablement, '0' - permanently disabled, '1' -
> > permanently enabled)).
> >
>
> Hi Vitaly:
>      Actually, I had such an idea before. But user space should check
> whether hv tlb flush
> is exposed to the VM before enabling direct tlb flush. If not, user space
> should not enable direct
> tlb flush for the guest, since Hyper-V will do more checks for each
> hypercall from the nested
> VM when the feature is enabled.
>
Fixing the line breaks. Sorry for the noise.

Actually, I had such an idea before. But user space should check
whether hv tlb flush is exposed to the VM before enabling direct tlb
flush. If not, user space should not enable direct tlb flush for the
guest, since Hyper-V will do more checks for each hypercall from the
nested VM when the feature is enabled.
---
Best regards
Tianyu Lan
Vitaly Kuznetsov Aug. 27, 2019, 12:38 p.m. UTC | #4
Tianyu Lan <lantianyu1986@gmail.com> writes:

> On Tue, Aug 27, 2019 at 2:41 PM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>>
>> lantianyu1986@gmail.com writes:
>>
>> > From: Tianyu Lan <Tianyu.Lan@microsoft.com>
>> >
>> > This patchset adds Hyper-V direct TLB flush support to KVM. When direct
>> > TLB flush is enabled in L1, Hyper-V in L0 handles TLB flush requests
>> > from the L2 guest on behalf of the L1 hypervisor.
>> >
>> > Patch 2 introduces a new capability, KVM_CAP_HYPERV_DIRECT_TLBFLUSH, to
>> > enable the feature from user space. User space should enable it only
>> > when the Hyper-V hypervisor capability is exposed to the guest and the
>> > KVM profile is hidden, since the KVM and Hyper-V hypercall parameter
>> > conventions conflict. We expect the L2 guest not to use KVM hypercalls
>> > while the feature is enabled. For details, see the documentation of the
>> > new capability "KVM_CAP_HYPERV_DIRECT_TLBFLUSH".
>>
>> I was thinking about this for awhile and I think I have a better
>> proposal. Instead of adding this new capability let's enable direct TLB
>> flush when KVM guest enables Hyper-V Hypercall page (writes to
>> HV_X64_MSR_HYPERCALL) - this guarantees that the guest doesn't need KVM
>> hypercalls as we can't handle both KVM-style and Hyper-V-style
>> hypercalls simultaneously and kvm_emulate_hypercall() does:
>>
>>         if (kvm_hv_hypercall_enabled(vcpu->kvm))
>>                 return kvm_hv_hypercall(vcpu);
>>
>> What do you think?
>>
>> (and instead of adding the capability we can add kvm.ko module parameter
>> to enable direct tlb flush unconditionally, like
>> 'hv_direct_tlbflush=-1/0/1' with '-1' being the default (autoselect
>> based on Hyper-V hypercall enablement, '0' - permanently disabled, '1' -
>> permanently enabled)).
>>
>
> Hi Vitaly:
>      Actually, I had such an idea before. But user space should check
> whether hv tlb flush
> is exposed to the VM before enabling direct tlb flush. If not, user space
> should not enable direct
> tlb flush for the guest, since Hyper-V will do more checks for each
> hypercall from the nested
> VM when the feature is enabled.

If the TLB Flush enlightenment is not exposed to the VM at all, there's no
difference whether we enable direct TLB flush in eVMCS or not: the guest
won't be using the 'TLB Flush' hypercall and will do TLB flushing with
IPIs. And, in case the guest enables the Hyper-V hypercall page, it is
definitely not going to use KVM hypercalls, so we can't break them.
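
For context, "enabling direct TLB flush in eVMCS" essentially means
handing Hyper-V a partition assist page and setting the
nested_flush_hypercall bit in the enlightened VMCS, along these lines
(condensed sketch based on the eVMCS definitions in hyperv-tlfs.h;
error handling trimmed, the actual code is in patch 3):

        static int hv_enable_direct_tlbflush(struct kvm_vcpu *vcpu)
        {
                struct hv_partition_assist_pg **hv_pa_pg =
                        &vcpu->kvm->arch.hyperv.hv_pa_pg;
                struct hv_enlightened_vmcs *evmcs =
                        (struct hv_enlightened_vmcs *)to_vmx(vcpu)->loaded_vmcs->vmcs;

                /* One partition assist page per VM, shared with Hyper-V. */
                if (!*hv_pa_pg)
                        *hv_pa_pg = kzalloc(PAGE_SIZE, GFP_KERNEL);
                if (!*hv_pa_pg)
                        return -ENOMEM;

                evmcs->partition_assist_page = __pa(*hv_pa_pg);
                evmcs->hv_vm_id = (unsigned long)vcpu->kvm;
                /* Let L0 handle L2 TLB flush hypercalls without exiting to L1. */
                evmcs->hv_enlightenments_control.nested_flush_hypercall = 1;

                return 0;
        }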
Tianyu Lan Aug. 27, 2019, 1:07 p.m. UTC | #5
On Tue, Aug 27, 2019 at 8:38 PM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>
> Tianyu Lan <lantianyu1986@gmail.com> writes:
>
> > On Tue, Aug 27, 2019 at 2:41 PM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
> >>
> >> lantianyu1986@gmail.com writes:
> >>
> >> > From: Tianyu Lan <Tianyu.Lan@microsoft.com>
> >> >
> >> > This patchset adds Hyper-V direct TLB flush support to KVM. When direct
> >> > TLB flush is enabled in L1, Hyper-V in L0 handles TLB flush requests
> >> > from the L2 guest on behalf of the L1 hypervisor.
> >> >
> >> > Patch 2 introduces a new capability, KVM_CAP_HYPERV_DIRECT_TLBFLUSH, to
> >> > enable the feature from user space. User space should enable it only
> >> > when the Hyper-V hypervisor capability is exposed to the guest and the
> >> > KVM profile is hidden, since the KVM and Hyper-V hypercall parameter
> >> > conventions conflict. We expect the L2 guest not to use KVM hypercalls
> >> > while the feature is enabled. For details, see the documentation of the
> >> > new capability "KVM_CAP_HYPERV_DIRECT_TLBFLUSH".
> >>
> >> I was thinking about this for awhile and I think I have a better
> >> proposal. Instead of adding this new capability let's enable direct TLB
> >> flush when KVM guest enables Hyper-V Hypercall page (writes to
> >> HV_X64_MSR_HYPERCALL) - this guarantees that the guest doesn't need KVM
> >> hypercalls as we can't handle both KVM-style and Hyper-V-style
> >> hypercalls simultaneously and kvm_emulate_hypercall() does:
> >>
> >>         if (kvm_hv_hypercall_enabled(vcpu->kvm))
> >>                 return kvm_hv_hypercall(vcpu);
> >>
> >> What do you think?
> >>
> >> (and instead of adding the capability we can add kvm.ko module parameter
> >> to enable direct tlb flush unconditionally, like
> >> 'hv_direct_tlbflush=-1/0/1' with '-1' being the default (autoselect
> >> based on Hyper-V hypercall enablement, '0' - permanently disabled, '1' -
> >> permanently enabled)).
> >>
> >
> > Hi Vitaly:
> >      Actually, I had such an idea before. But user space should check
> > whether hv tlb flush
> > is exposed to the VM before enabling direct tlb flush. If not, user space
> > should not enable direct
> > tlb flush for the guest, since Hyper-V will do more checks for each
> > hypercall from the nested
> > VM when the feature is enabled.
>
> If the TLB Flush enlightenment is not exposed to the VM at all, there's no
> difference whether we enable direct TLB flush in eVMCS or not: the guest
> won't be using the 'TLB Flush' hypercall and will do TLB flushing with
> IPIs. And, in case the guest enables the Hyper-V hypercall page, it is
> definitely not going to use KVM hypercalls, so we can't break them.
>

Yes, this won't trigger a KVM/Hyper-V hypercall conflict. My point is
that if the tlb flush enlightenment is not enabled, enabling direct tlb
flush will not accelerate anything, while Hyper-V will still check each
hypercall from the nested VM in order to intercept tlb flush hypercalls.
But the guest won't use the tlb flush hypercall in this case, so the
check of each hypercall in Hyper-V is redundant. We may avoid that
overhead by checking the status of the tlb flush enlightenment and only
enabling direct tlb flush when it is enabled.
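
For illustration, the check could look roughly like this on the KVM
side, using the Hyper-V CPUID leaf that user space exposed to the guest
(leaf and bit names from hyperv-tlfs.h; whether the check belongs in
KVM or in user space is exactly the question here):

        /*
         * Sketch: direct TLB flush is only worth enabling if user space
         * actually exposed the remote TLB flush enlightenment; otherwise
         * Hyper-V's extra per-hypercall check buys nothing.
         */
        static bool guest_has_hv_tlbflush(struct kvm_vcpu *vcpu)
        {
                struct kvm_cpuid_entry2 *entry;

                entry = kvm_find_cpuid_entry(vcpu,
                                             HYPERV_CPUID_ENLIGHTMENT_INFO, 0);

                return entry &&
                       (entry->eax & HV_X64_REMOTE_TLB_FLUSH_RECOMMENDED);
        }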

---
Best regards
Tianyu Lan
Vitaly Kuznetsov Aug. 27, 2019, 1:29 p.m. UTC | #6
Tianyu Lan <lantianyu1986@gmail.com> writes:

> On Tue, Aug 27, 2019 at 8:38 PM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>>
>> Tianyu Lan <lantianyu1986@gmail.com> writes:
>>
>> > On Tue, Aug 27, 2019 at 2:41 PM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>> >>
>> >> lantianyu1986@gmail.com writes:
>> >>
>> >> > From: Tianyu Lan <Tianyu.Lan@microsoft.com>
>> >> >
>> >> > This patchset adds Hyper-V direct TLB flush support to KVM. When direct
>> >> > TLB flush is enabled in L1, Hyper-V in L0 handles TLB flush requests
>> >> > from the L2 guest on behalf of the L1 hypervisor.
>> >> >
>> >> > Patch 2 introduces a new capability, KVM_CAP_HYPERV_DIRECT_TLBFLUSH, to
>> >> > enable the feature from user space. User space should enable it only
>> >> > when the Hyper-V hypervisor capability is exposed to the guest and the
>> >> > KVM profile is hidden, since the KVM and Hyper-V hypercall parameter
>> >> > conventions conflict. We expect the L2 guest not to use KVM hypercalls
>> >> > while the feature is enabled. For details, see the documentation of the
>> >> > new capability "KVM_CAP_HYPERV_DIRECT_TLBFLUSH".
>> >>
>> >> I was thinking about this for awhile and I think I have a better
>> >> proposal. Instead of adding this new capability let's enable direct TLB
>> >> flush when KVM guest enables Hyper-V Hypercall page (writes to
>> >> HV_X64_MSR_HYPERCALL) - this guarantees that the guest doesn't need KVM
>> >> hypercalls as we can't handle both KVM-style and Hyper-V-style
>> >> hypercalls simultaneously and kvm_emulate_hypercall() does:
>> >>
>> >>         if (kvm_hv_hypercall_enabled(vcpu->kvm))
>> >>                 return kvm_hv_hypercall(vcpu);
>> >>
>> >> What do you think?
>> >>
>> >> (and instead of adding the capability we can add kvm.ko module parameter
>> >> to enable direct tlb flush unconditionally, like
>> >> 'hv_direct_tlbflush=-1/0/1' with '-1' being the default (autoselect
>> >> based on Hyper-V hypercall enablement, '0' - permanently disabled, '1' -
>> >> permanently enabled)).
>> >>
>> >
>> > Hi Vitaly:
>> >      Actually, I had such an idea before. But user space should check
>> > whether hv tlb flush
>> > is exposed to the VM before enabling direct tlb flush. If not, user space
>> > should not enable direct
>> > tlb flush for the guest, since Hyper-V will do more checks for each
>> > hypercall from the nested
>> > VM when the feature is enabled.
>>
>> If the TLB Flush enlightenment is not exposed to the VM at all, there's no
>> difference whether we enable direct TLB flush in eVMCS or not: the guest
>> won't be using the 'TLB Flush' hypercall and will do TLB flushing with
>> IPIs. And, in case the guest enables the Hyper-V hypercall page, it is
>> definitely not going to use KVM hypercalls, so we can't break them.
>>
>
> Yes, this won't trigger a KVM/Hyper-V hypercall conflict. My point is
> that if the tlb flush enlightenment is not enabled, enabling direct tlb
> flush will not accelerate anything, while Hyper-V will still check each
> hypercall from the nested VM in order to intercept tlb flush hypercalls.
> But the guest won't use the tlb flush hypercall in this case, so the
> check of each hypercall in Hyper-V is redundant. We may avoid that
> overhead by checking the status of the tlb flush enlightenment and only
> enabling direct tlb flush when it is enabled.

Oh, I see. Yes, this optimization is possible and I'm not against it;
however, I doubt it will make a significant difference because, no matter
what, on a VMCALL we first drop into L0, which can either inject it into
L1 or, in the case of direct TLB flush, execute it by itself. Checking
whether the hypercall is a TLB flush hypercall is just a register read,
so it should be very cheap.
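
For reference, the check meant here: the hypercall call code sits in
the low 16 bits of the hypercall input value, so recognizing a TLB
flush hypercall is one register read plus a few compares. Sketch only,
using the call code constants from hyperv-tlfs.h:

        static bool is_hv_tlb_flush_hypercall(u64 hc_input)
        {
                u16 code = hc_input & 0xffff;   /* bits 15:0: call code */

                return code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE ||
                       code == HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST ||
                       code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX ||
                       code == HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX;
        }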