diff mbox series

[6/9] KVM: x86: Provide paravirtualized flush_tlb_multi()

Message ID 20190613064813.8102-7-namit@vmware.com (mailing list archive)
State New, archived

Commit Message

Nadav Amit June 13, 2019, 6:48 a.m. UTC
Support the new flush_tlb_multi() interface, which also flushes the
local CPU's TLB, instead of flush_tlb_others(), which does not. The new
interface is more performant since it parallelizes remote and local TLB
flushes.

The actual implementation of flush_tlb_multi() is almost identical to
that of flush_tlb_others().

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/kernel/kvm.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

Comments

Dave Hansen June 25, 2019, 9:40 p.m. UTC | #1
On 6/12/19 11:48 PM, Nadav Amit wrote:
> Support the new interface of flush_tlb_multi, which also flushes the
> local CPU's TLB, instead of flush_tlb_others that does not. This
> interface is more performant since it parallelize remote and local TLB
> flushes.
> 
> The actual implementation of flush_tlb_multi() is almost identical to
> that of flush_tlb_others().

This confused me a bit.  I thought we didn't support paravirtualized
flush_tlb_multi() from reading earlier in the series.

But, it seems like that might be Xen-only and doesn't apply to KVM and
paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
that right?  It might be good to include some of that background in the
changelog to set the context.
Nadav Amit June 26, 2019, 2:39 a.m. UTC | #2
> On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> 
> On 6/12/19 11:48 PM, Nadav Amit wrote:
>> Support the new interface of flush_tlb_multi, which also flushes the
>> local CPU's TLB, instead of flush_tlb_others that does not. This
>> interface is more performant since it parallelize remote and local TLB
>> flushes.
>> 
>> The actual implementation of flush_tlb_multi() is almost identical to
>> that of flush_tlb_others().
> 
> This confused me a bit.  I thought we didn't support paravirtualized
> flush_tlb_multi() from reading earlier in the series.
> 
> But, it seems like that might be Xen-only and doesn't apply to KVM and
> paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
> that right?  It might be good to include some of that background in the
> changelog to set the context.

I’ll try to improve the change-logs a bit. There is no inherent reason for
PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
for future work, and here are some reasons:

1. Hyper-V/Xen TLB-flushing code is not very simple
2. I don’t have a proper setup
3. I am lazy
Andy Lutomirski June 26, 2019, 3:35 a.m. UTC | #3
On Tue, Jun 25, 2019 at 7:39 PM Nadav Amit <namit@vmware.com> wrote:
>
> > On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> >
> > On 6/12/19 11:48 PM, Nadav Amit wrote:
> >> Support the new interface of flush_tlb_multi, which also flushes the
> >> local CPU's TLB, instead of flush_tlb_others that does not. This
> >> interface is more performant since it parallelize remote and local TLB
> >> flushes.
> >>
> >> The actual implementation of flush_tlb_multi() is almost identical to
> >> that of flush_tlb_others().
> >
> > This confused me a bit.  I thought we didn't support paravirtualized
> > flush_tlb_multi() from reading earlier in the series.
> >
> > But, it seems like that might be Xen-only and doesn't apply to KVM and
> > paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
> > that right?  It might be good to include some of that background in the
> > changelog to set the context.
>
> I’ll try to improve the change-logs a bit. There is no inherent reason for
> PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
> for future work, and here are some reasons:
>
> 1. Hyper-V/Xen TLB-flushing code is not very simple
> 2. I don’t have a proper setup
> 3. I am lazy
>

In the long run, I think that we're going to want a way for one CPU to
do a remote flush and then, with appropriate locking, update the
tlb_gen fields for the remote CPU.  Getting this right may be a bit
nontrivial.
Nadav Amit June 26, 2019, 3:41 a.m. UTC | #4
> On Jun 25, 2019, at 8:35 PM, Andy Lutomirski <luto@kernel.org> wrote:
> 
> On Tue, Jun 25, 2019 at 7:39 PM Nadav Amit <namit@vmware.com> wrote:
>>> On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
>>> 
>>> On 6/12/19 11:48 PM, Nadav Amit wrote:
>>>> Support the new interface of flush_tlb_multi, which also flushes the
>>>> local CPU's TLB, instead of flush_tlb_others that does not. This
>>>> interface is more performant since it parallelize remote and local TLB
>>>> flushes.
>>>> 
>>>> The actual implementation of flush_tlb_multi() is almost identical to
>>>> that of flush_tlb_others().
>>> 
>>> This confused me a bit.  I thought we didn't support paravirtualized
>>> flush_tlb_multi() from reading earlier in the series.
>>> 
>>> But, it seems like that might be Xen-only and doesn't apply to KVM and
>>> paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
>>> that right?  It might be good to include some of that background in the
>>> changelog to set the context.
>> 
>> I’ll try to improve the change-logs a bit. There is no inherent reason for
>> PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
>> for future work, and here are some reasons:
>> 
>> 1. Hyper-V/Xen TLB-flushing code is not very simple
>> 2. I don’t have a proper setup
>> 3. I am lazy
> 
> In the long run, I think that we're going to want a way for one CPU to
> do a remote flush and then, with appropriate locking, update the
> tlb_gen fields for the remote CPU.  Getting this right may be a bit
> nontrivial.

What do you mean by “do a remote flush”?
Andy Lutomirski June 26, 2019, 3:56 a.m. UTC | #5
On Tue, Jun 25, 2019 at 8:41 PM Nadav Amit <namit@vmware.com> wrote:
>
> > On Jun 25, 2019, at 8:35 PM, Andy Lutomirski <luto@kernel.org> wrote:
> >
> > On Tue, Jun 25, 2019 at 7:39 PM Nadav Amit <namit@vmware.com> wrote:
> >>> On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> >>>
> >>> On 6/12/19 11:48 PM, Nadav Amit wrote:
> >>>> Support the new interface of flush_tlb_multi, which also flushes the
> >>>> local CPU's TLB, instead of flush_tlb_others that does not. This
> >>>> interface is more performant since it parallelize remote and local TLB
> >>>> flushes.
> >>>>
> >>>> The actual implementation of flush_tlb_multi() is almost identical to
> >>>> that of flush_tlb_others().
> >>>
> >>> This confused me a bit.  I thought we didn't support paravirtualized
> >>> flush_tlb_multi() from reading earlier in the series.
> >>>
> >>> But, it seems like that might be Xen-only and doesn't apply to KVM and
> >>> paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
> >>> that right?  It might be good to include some of that background in the
> >>> changelog to set the context.
> >>
> >> I’ll try to improve the change-logs a bit. There is no inherent reason for
> >> PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
> >> for future work, and here are some reasons:
> >>
> >> 1. Hyper-V/Xen TLB-flushing code is not very simple
> >> 2. I don’t have a proper setup
> >> 3. I am lazy
> >
> > In the long run, I think that we're going to want a way for one CPU to
> > do a remote flush and then, with appropriate locking, update the
> > tlb_gen fields for the remote CPU.  Getting this right may be a bit
> > nontrivial.
>
> What do you mean by “do a remote flush”?
>

I mean a PV-assisted flush on a CPU other than the CPU that started
it.  If you look at flush_tlb_func_common(), it's doing some work that
is rather fancier than just flushing the TLB.  By replacing it with
just a pure flush on Xen or Hyper-V, we're losing the potential CR3
switch and this bit:

        /* Both paths above update our state to mm_tlb_gen. */
        this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);

Skipping the former can hurt idle performance, although we should
consider just disabling all the lazy optimizations on systems with PV
flush.  (And I've asked Intel to help us out here in future hardware.
I have no idea what the result of asking will be.)  Skipping the
cpu_tlbstate write means that we will do unnecessary flushes in the
future, and that's not doing us any favors.

In principle, we should be able to do something like:

flush_tlb_multi(...);
for(each CPU that got flushed) {
  spin_lock(something appropriate?);
  per_cpu_write(cpu, cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, f->new_tlb_gen);
  spin_unlock(...);
}

with the caveat that it's more complicated than this if the flush is a
partial flush, and that we'll want to check that the ctx_id still
matches, etc.

Does this make sense?
Nadav Amit June 26, 2019, 6:30 a.m. UTC | #6
> On Jun 25, 2019, at 8:56 PM, Andy Lutomirski <luto@kernel.org> wrote:
> 
> On Tue, Jun 25, 2019 at 8:41 PM Nadav Amit <namit@vmware.com> wrote:
>>> On Jun 25, 2019, at 8:35 PM, Andy Lutomirski <luto@kernel.org> wrote:
>>> 
>>> On Tue, Jun 25, 2019 at 7:39 PM Nadav Amit <namit@vmware.com> wrote:
>>>>> On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
>>>>> 
>>>>> On 6/12/19 11:48 PM, Nadav Amit wrote:
>>>>>> Support the new interface of flush_tlb_multi, which also flushes the
>>>>>> local CPU's TLB, instead of flush_tlb_others that does not. This
>>>>>> interface is more performant since it parallelize remote and local TLB
>>>>>> flushes.
>>>>>> 
>>>>>> The actual implementation of flush_tlb_multi() is almost identical to
>>>>>> that of flush_tlb_others().
>>>>> 
>>>>> This confused me a bit.  I thought we didn't support paravirtualized
>>>>> flush_tlb_multi() from reading earlier in the series.
>>>>> 
>>>>> But, it seems like that might be Xen-only and doesn't apply to KVM and
>>>>> paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
>>>>> that right?  It might be good to include some of that background in the
>>>>> changelog to set the context.
>>>> 
>>>> I’ll try to improve the change-logs a bit. There is no inherent reason for
>>>> PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
>>>> for future work, and here are some reasons:
>>>> 
>>>> 1. Hyper-V/Xen TLB-flushing code is not very simple
>>>> 2. I don’t have a proper setup
>>>> 3. I am lazy
>>> 
>>> In the long run, I think that we're going to want a way for one CPU to
>>> do a remote flush and then, with appropriate locking, update the
>>> tlb_gen fields for the remote CPU.  Getting this right may be a bit
>>> nontrivial.
>> 
>> What do you mean by “do a remote flush”?
> 
> I mean a PV-assisted flush on a CPU other than the CPU that started
> it.  If you look at flush_tlb_func_common(), it's doing some work that
> is rather fancier than just flushing the TLB.  By replacing it with
> just a pure flush on Xen or Hyper-V, we're losing the potential CR3
> switch and this bit:
> 
>        /* Both paths above update our state to mm_tlb_gen. */
>        this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
> 
> Skipping the former can hurt idle performance, although we should
> consider just disabling all the lazy optimizations on systems with PV
> flush.  (And I've asked Intel to help us out here in future hardware.
> I have no idea what the result of asking will be.)  Skipping the
> cpu_tlbstate write means that we will do unnecessary flushes in the
> future, and that's not doing us any favors.
> 
> In principle, we should be able to do something like:
> 
> flush_tlb_multi(...);
> for(each CPU that got flushed) {
>  spin_lock(something appropriate?);
>  per_cpu_write(cpu, cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, f->new_tlb_gen);
>  spin_unlock(...);
> }
> 
> with the caveat that it's more complicated than this if the flush is a
> partial flush, and that we'll want to check that the ctx_id still
> matches, etc.
> 
> Does this make sense?

Thanks for the detailed explanation. Let me check that I got it right. 

You want to optimize cases in which:

1. A virtual machine

2. Which issues multiple (remote) TLB shootdowns

3. To a remote vCPU which is preempted by the hypervisor

4. And unlike KVM, the hypervisor does not provide facilities for the VM to
know which vCPU is preempted, and to atomically request a TLB flush when the
vCPU is scheduled.

Right?
Andy Lutomirski June 26, 2019, 4:37 p.m. UTC | #7
On Tue, Jun 25, 2019 at 11:30 PM Nadav Amit <namit@vmware.com> wrote:
>
> > On Jun 25, 2019, at 8:56 PM, Andy Lutomirski <luto@kernel.org> wrote:
> >
> > On Tue, Jun 25, 2019 at 8:41 PM Nadav Amit <namit@vmware.com> wrote:
> >>> On Jun 25, 2019, at 8:35 PM, Andy Lutomirski <luto@kernel.org> wrote:
> >>>
> >>> On Tue, Jun 25, 2019 at 7:39 PM Nadav Amit <namit@vmware.com> wrote:
> >>>>> On Jun 25, 2019, at 2:40 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> >>>>>
> >>>>> On 6/12/19 11:48 PM, Nadav Amit wrote:
> >>>>>> Support the new interface of flush_tlb_multi, which also flushes the
> >>>>>> local CPU's TLB, instead of flush_tlb_others that does not. This
> >>>>>> interface is more performant since it parallelize remote and local TLB
> >>>>>> flushes.
> >>>>>>
> >>>>>> The actual implementation of flush_tlb_multi() is almost identical to
> >>>>>> that of flush_tlb_others().
> >>>>>
> >>>>> This confused me a bit.  I thought we didn't support paravirtualized
> >>>>> flush_tlb_multi() from reading earlier in the series.
> >>>>>
> >>>>> But, it seems like that might be Xen-only and doesn't apply to KVM and
> >>>>> paravirtualized KVM has no problem supporting flush_tlb_multi().  Is
> >>>>> that right?  It might be good to include some of that background in the
> >>>>> changelog to set the context.
> >>>>
> >>>> I’ll try to improve the change-logs a bit. There is no inherent reason for
> >>>> PV TLB-flushers not to implement their own flush_tlb_multi(). It is left
> >>>> for future work, and here are some reasons:
> >>>>
> >>>> 1. Hyper-V/Xen TLB-flushing code is not very simple
> >>>> 2. I don’t have a proper setup
> >>>> 3. I am lazy
> >>>
> >>> In the long run, I think that we're going to want a way for one CPU to
> >>> do a remote flush and then, with appropriate locking, update the
> >>> tlb_gen fields for the remote CPU.  Getting this right may be a bit
> >>> nontrivial.
> >>
> >> What do you mean by “do a remote flush”?
> >
> > I mean a PV-assisted flush on a CPU other than the CPU that started
> > it.  If you look at flush_tlb_func_common(), it's doing some work that
> > is rather fancier than just flushing the TLB.  By replacing it with
> > just a pure flush on Xen or Hyper-V, we're losing the potential CR3
> > switch and this bit:
> >
> >        /* Both paths above update our state to mm_tlb_gen. */
> >        this_cpu_write(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, mm_tlb_gen);
> >
> > Skipping the former can hurt idle performance, although we should
> > consider just disabling all the lazy optimizations on systems with PV
> > flush.  (And I've asked Intel to help us out here in future hardware.
> > I have no idea what the result of asking will be.)  Skipping the
> > cpu_tlbstate write means that we will do unnecessary flushes in the
> > future, and that's not doing us any favors.
> >
> > In principle, we should be able to do something like:
> >
> > flush_tlb_multi(...);
> > for(each CPU that got flushed) {
> >  spin_lock(something appropriate?);
> >  per_cpu_write(cpu, cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen, f->new_tlb_gen);
> >  spin_unlock(...);
> > }
> >
> > with the caveat that it's more complicated than this if the flush is a
> > partial flush, and that we'll want to check that the ctx_id still
> > matches, etc.
> >
> > Does this make sense?
>
> Thanks for the detailed explanation. Let me check that I got it right.
>
> You want to optimize cases in which:
>
> 1. A virtual machine

Yes.

>
> 2. Which issues multiple (remote) TLB shootdowns

Yes.  Or just one followed by a context switch.  Right now it's
suboptimal with just two vCPUs and a single remote flush.  If CPU 0
does a remote PV flush of CPU1 and then CPU1 context switches away
from the running mm and back, it will do an unnecessary flush on the
way back because the tlb_gen won't match.

>
> 3. To a remote vCPU which is preempted by the hypervisor

Yes, or even one that isn't preempted.

>
> 4. And unlike KVM, the hypervisor does not provide facilities for the VM to
> know which vCPU is preempted, and atomically request TLB flush when the vCPU
> is scheduled.
>

I'm not sure this makes much difference to the case I'm thinking of.

All this being said, do we currently have any system that supports
PCID *and* remote flushes?  I guess KVM has some mechanism, but I'm
not that familiar with its exact capabilities.  If I remember right,
Hyper-V doesn't expose PCID yet.


> Right?
>
Vitaly Kuznetsov June 26, 2019, 5:41 p.m. UTC | #8
Andy Lutomirski <luto@kernel.org> writes:

> All this being said, do we currently have any system that supports
> PCID *and* remote flushes?  I guess KVM has some mechanism, but I'm
> not that familiar with its exact capabilities.  If I remember right,
> Hyper-V doesn't expose PCID yet.
>

It already does (and supports it to a certain extent), see

commit 617ab45c9a8900e64a78b43696c02598b8cad68b
Author: Vitaly Kuznetsov <vkuznets@redhat.com>
Date:   Wed Jan 24 11:36:29 2018 +0100

    x86/hyperv: Stop suppressing X86_FEATURE_PCID
Andy Lutomirski June 26, 2019, 6:21 p.m. UTC | #9
On Wed, Jun 26, 2019 at 10:41 AM Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>
> Andy Lutomirski <luto@kernel.org> writes:
>
> > All this being said, do we currently have any system that supports
> > PCID *and* remote flushes?  I guess KVM has some mechanism, but I'm
> > not that familiar with its exact capabilities.  If I remember right,
> > Hyper-V doesn't expose PCID yet.
> >
>
> It already does (and supports it to a certain extent), see
>
> commit 617ab45c9a8900e64a78b43696c02598b8cad68b
> Author: Vitaly Kuznetsov <vkuznets@redhat.com>
> Date:   Wed Jan 24 11:36:29 2018 +0100
>
>     x86/hyperv: Stop suppressing X86_FEATURE_PCID
>

Hmm.  Once the dust settles from Nadav's patches, I think we should
see about supporting it better :)

Patch

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 00d81e898717..d00d551d4a2a 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -580,7 +580,7 @@  static void __init kvm_apf_trap_init(void)
 
 static DEFINE_PER_CPU(cpumask_var_t, __pv_tlb_mask);
 
-static void kvm_flush_tlb_others(const struct cpumask *cpumask,
+static void kvm_flush_tlb_multi(const struct cpumask *cpumask,
 			const struct flush_tlb_info *info)
 {
 	u8 state;
@@ -594,6 +594,11 @@  static void kvm_flush_tlb_others(const struct cpumask *cpumask,
 	 * queue flush_on_enter for pre-empted vCPUs
 	 */
 	for_each_cpu(cpu, flushmask) {
+		/*
+		 * The local vCPU is never preempted, so we do not explicitly
+		 * skip check for local vCPU - it will never be cleared from
+		 * flushmask.
+		 */
 		src = &per_cpu(steal_time, cpu);
 		state = READ_ONCE(src->preempted);
 		if ((state & KVM_VCPU_PREEMPTED)) {
@@ -603,7 +608,7 @@  static void kvm_flush_tlb_others(const struct cpumask *cpumask,
 		}
 	}
 
-	native_flush_tlb_others(flushmask, info);
+	native_flush_tlb_multi(flushmask, info);
 }
 
 static void __init kvm_guest_init(void)
@@ -628,9 +633,8 @@  static void __init kvm_guest_init(void)
 	if (kvm_para_has_feature(KVM_FEATURE_PV_TLB_FLUSH) &&
 	    !kvm_para_has_hint(KVM_HINTS_REALTIME) &&
 	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
-		pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others;
+		pv_ops.mmu.flush_tlb_multi = kvm_flush_tlb_multi;
 		pv_ops.mmu.tlb_remove_table = tlb_remove_table;
-		static_key_disable(&flush_tlb_multi_enabled.key);
 	}
 
 	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))