[RFC,18/27] kvm/isolation: function to copy page table entries for percpu buffer

Message ID 1557758315-12667-19-git-send-email-alexandre.chartre@oracle.com (mailing list archive)
State New, archived
Series KVM Address Space Isolation

Commit Message

Alexandre Chartre May 13, 2019, 2:38 p.m. UTC
pcpu_base_addr is already mapped to the KVM address space, but this
represents the first percpu chunk. To access a per-cpu buffer not
allocated in the first chunk, add a function which maps all cpu
buffers corresponding to that per-cpu buffer.

Also add function to clear page table entries for a percpu buffer.

Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com>
---
 arch/x86/kvm/isolation.c |   34 ++++++++++++++++++++++++++++++++++
 arch/x86/kvm/isolation.h |    2 ++
 2 files changed, 36 insertions(+), 0 deletions(-)

Comments

Andy Lutomirski May 13, 2019, 6:18 p.m. UTC | #1
On Mon, May 13, 2019 at 7:39 AM Alexandre Chartre
<alexandre.chartre@oracle.com> wrote:
>
> pcpu_base_addr is already mapped to the KVM address space, but this
> represents the first percpu chunk. To access a per-cpu buffer not
> allocated in the first chunk, add a function which maps all cpu
> buffers corresponding to that per-cpu buffer.
>
> Also add function to clear page table entries for a percpu buffer.
>

This needs some kind of clarification so that readers can tell whether
you're trying to map all percpu memory or just map a specific
variable.  In either case, you're making a dubious assumption that
percpu memory contains no secrets.
Peter Zijlstra May 14, 2019, 7:09 a.m. UTC | #2
On Mon, May 13, 2019 at 11:18:41AM -0700, Andy Lutomirski wrote:
> On Mon, May 13, 2019 at 7:39 AM Alexandre Chartre
> <alexandre.chartre@oracle.com> wrote:
> >
> > pcpu_base_addr is already mapped to the KVM address space, but this
> > represents the first percpu chunk. To access a per-cpu buffer not
> > allocated in the first chunk, add a function which maps all cpu
> > buffers corresponding to that per-cpu buffer.
> >
> > Also add function to clear page table entries for a percpu buffer.
> >
> 
> This needs some kind of clarification so that readers can tell whether
> you're trying to map all percpu memory or just map a specific
> variable.  In either case, you're making a dubious assumption that
> percpu memory contains no secrets.

I'm thinking the per-cpu random pool is a secret. IOW, it demonstrably
does contain secrets, invalidating that premise.
Alexandre Chartre May 14, 2019, 8:25 a.m. UTC | #3
On 5/14/19 9:09 AM, Peter Zijlstra wrote:
> On Mon, May 13, 2019 at 11:18:41AM -0700, Andy Lutomirski wrote:
>> On Mon, May 13, 2019 at 7:39 AM Alexandre Chartre
>> <alexandre.chartre@oracle.com> wrote:
>>>
>>> pcpu_base_addr is already mapped to the KVM address space, but this
>>> represents the first percpu chunk. To access a per-cpu buffer not
>>> allocated in the first chunk, add a function which maps all cpu
>>> buffers corresponding to that per-cpu buffer.
>>>
>>> Also add function to clear page table entries for a percpu buffer.
>>>
>>
>> This needs some kind of clarification so that readers can tell whether
>> you're trying to map all percpu memory or just map a specific
>> variable.  In either case, you're making a dubious assumption that
>> percpu memory contains no secrets.
> 
> I'm thinking the per-cpu random pool is a secret. IOW, it demonstrably
> does contain secrets, invalidating that premise.
> 

The current code unconditionally maps the entire first percpu chunk
(pcpu_base_addr). So it assumes it doesn't contain any secrets. That is
mainly a simplification for the POC because a lot of core information
that we need, for example just to switch mm, is stored there (like
cpu_tlbstate, current_task...).

If the entire first percpu chunk effectively contains secrets, then we will
need to individually map only the buffers we need. The kvm_copy_percpu_mapping()
function is added to copy the mapping for a specified percpu buffer, so
this is used to map percpu buffers which are not in the first percpu chunk.

Also note that mapping is constrained by PTE (4K), so mapped buffers
(percpu or not) which do not fill a whole set of pages can leak adjacent
data stored on the same pages.
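
For illustration, a minimal usage sketch of these two functions (the
percpu variable "my_buf" and its type are made up here; only the
kvm_copy_percpu_mapping()/kvm_clear_percpu_mapping() interfaces come
from this patch):

	static DEFINE_PER_CPU(struct my_data, my_buf);

	/* map every cpu's copy of my_buf into the KVM page table */
	err = kvm_copy_percpu_mapping(&my_buf, sizeof(struct my_data));
	if (err)
		return err;

	/* ... run with the KVM address space ... */

	/* unmap it again; the 4K PTE granularity caveat above applies */
	kvm_clear_percpu_mapping(&my_buf);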

alex.
Andy Lutomirski May 14, 2019, 8:34 a.m. UTC | #4
> On May 14, 2019, at 1:25 AM, Alexandre Chartre <alexandre.chartre@oracle.com> wrote:
> 
> 
>> On 5/14/19 9:09 AM, Peter Zijlstra wrote:
>>> On Mon, May 13, 2019 at 11:18:41AM -0700, Andy Lutomirski wrote:
>>> On Mon, May 13, 2019 at 7:39 AM Alexandre Chartre
>>> <alexandre.chartre@oracle.com> wrote:
>>>> 
>>>> pcpu_base_addr is already mapped to the KVM address space, but this
>>>> represents the first percpu chunk. To access a per-cpu buffer not
>>>> allocated in the first chunk, add a function which maps all cpu
>>>> buffers corresponding to that per-cpu buffer.
>>>> 
>>>> Also add function to clear page table entries for a percpu buffer.
>>>> 
>>> 
>>> This needs some kind of clarification so that readers can tell whether
>>> you're trying to map all percpu memory or just map a specific
>>> variable.  In either case, you're making a dubious assumption that
>>> percpu memory contains no secrets.
>> I'm thinking the per-cpu random pool is a secret. IOW, it demonstrably
>> does contain secrets, invalidating that premise.
> 
> The current code unconditionally maps the entire first percpu chunk
> (pcpu_base_addr). So it assumes it doesn't contain any secrets. That is
> mainly a simplification for the POC because a lot of core information
> that we need, for example just to switch mm, is stored there (like
> cpu_tlbstate, current_task...).

I don’t think you should need any of this.

> 
> If the entire first percpu chunk effectively contains secrets, then we will
> need to individually map only the buffers we need. The kvm_copy_percpu_mapping()
> function is added to copy the mapping for a specified percpu buffer, so
> this is used to map percpu buffers which are not in the first percpu chunk.
> 
> Also note that mapping is constrained by PTE (4K), so mapped buffers
> (percpu or not) which do not fill a whole set of pages can leak adjacent
> data stored on the same pages.
> 
> 

I would take a different approach: figure out what you need and put it in its own dedicated area, kind of like cpu_entry_area.

One nasty issue you’ll have is vmalloc: the kernel stack is in the vmap range, and, if you allow access to vmap memory at all, you’ll need some way to ensure that *unmap* gets propagated. I suspect the right choice is to see if you can avoid using the kernel stack at all in isolated mode.  Maybe you could run on the IRQ stack instead.
Alexandre Chartre May 14, 2019, 9:41 a.m. UTC | #5
On 5/14/19 10:34 AM, Andy Lutomirski wrote:
> 
> 
>> On May 14, 2019, at 1:25 AM, Alexandre Chartre <alexandre.chartre@oracle.com> wrote:
>>
>>
>>> On 5/14/19 9:09 AM, Peter Zijlstra wrote:
>>>> On Mon, May 13, 2019 at 11:18:41AM -0700, Andy Lutomirski wrote:
>>>> On Mon, May 13, 2019 at 7:39 AM Alexandre Chartre
>>>> <alexandre.chartre@oracle.com> wrote:
>>>>>
>>>>> pcpu_base_addr is already mapped to the KVM address space, but this
>>>>> represents the first percpu chunk. To access a per-cpu buffer not
>>>>> allocated in the first chunk, add a function which maps all cpu
>>>>> buffers corresponding to that per-cpu buffer.
>>>>>
>>>>> Also add function to clear page table entries for a percpu buffer.
>>>>>
>>>>
>>>> This needs some kind of clarification so that readers can tell whether
>>>> you're trying to map all percpu memory or just map a specific
>>>> variable.  In either case, you're making a dubious assumption that
>>>> percpu memory contains no secrets.
>>> I'm thinking the per-cpu random pool is a secret. IOW, it demonstrably
>>> does contain secrets, invalidating that premise.
>>
>> The current code unconditionally maps the entire first percpu chunk
>> (pcpu_base_addr). So it assumes it doesn't contain any secrets. That is
>> mainly a simplification for the POC because a lot of core information
>> that we need, for example just to switch mm, is stored there (like
>> cpu_tlbstate, current_task...).
> 
> I don’t think you should need any of this.
> 

At the moment, the current code does need it. Otherwise it can't switch from
kvm mm to kernel mm: switch_mm_irqs_off() will fault accessing "cpu_tlbstate",
and then the page fault handler will fail accessing "current" before calling
the kvm page fault handler. So it will double fault or loop on page faults.
There are many different places where percpu variables are used, and I have
experienced many double faults/page fault loops because of that.
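
To make the dependency chain concrete, a simplified sketch (based on the
description above, not the exact kernel code):

	/* switch_mm_irqs_off() already reads percpu state: */
	prev = this_cpu_read(cpu_tlbstate.loaded_mm);	/* faults if unmapped */

	/* the resulting page fault handler then needs "current": */
	tsk = this_cpu_read(current_task);	/* faults again -> double fault */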

>>
>> If the entire first percpu chunk effectively contains secrets, then we will
>> need to individually map only the buffers we need. The kvm_copy_percpu_mapping()
>> function is added to copy the mapping for a specified percpu buffer, so
>> this is used to map percpu buffers which are not in the first percpu chunk.
>>
>> Also note that mapping is constrained by PTE (4K), so mapped buffers
>> (percpu or not) which do not fill a whole set of pages can leak adjacent
>> data stored on the same pages.
>>
>>
> 
> I would take a different approach: figure out what you need and put it in its
> own dedicated area, kind of like cpu_entry_area.

That's certainly something we can do, like Julian proposed with "Process-local
memory allocations": https://lkml.org/lkml/2018/11/22/1240

That's fine for buffers allocated from KVM; however, we will still need some
core kernel mappings so the thread can run and interrupts can be handled.

> One nasty issue you’ll have is vmalloc: the kernel stack is in the
> vmap range, and, if you allow access to vmap memory at all, you’ll
> need some way to ensure that *unmap* gets propagated. I suspect the
> right choice is to see if you can avoid using the kernel stack at all
> in isolated mode.  Maybe you could run on the IRQ stack instead.

I am currently just copying the task stack mapping into the KVM page table
(patch 23) when a vcpu is created:

	err = kvm_copy_ptes(tsk->stack, THREAD_SIZE);

And this seems to work. I am clearing the mapping when the VM vcpu is freed,
so I am making the assumption that the same task is used to create and free
a vcpu.


alex.
Andy Lutomirski May 14, 2019, 3:23 p.m. UTC | #6
On Tue, May 14, 2019 at 2:42 AM Alexandre Chartre
<alexandre.chartre@oracle.com> wrote:
>
>
> On 5/14/19 10:34 AM, Andy Lutomirski wrote:
> >
> >
> >> On May 14, 2019, at 1:25 AM, Alexandre Chartre <alexandre.chartre@oracle.com> wrote:
> >>
> >>
> >>> On 5/14/19 9:09 AM, Peter Zijlstra wrote:
> >>>> On Mon, May 13, 2019 at 11:18:41AM -0700, Andy Lutomirski wrote:
> >>>> On Mon, May 13, 2019 at 7:39 AM Alexandre Chartre
> >>>> <alexandre.chartre@oracle.com> wrote:
> >>>>>
> >>>>> pcpu_base_addr is already mapped to the KVM address space, but this
> >>>>> represents the first percpu chunk. To access a per-cpu buffer not
> >>>>> allocated in the first chunk, add a function which maps all cpu
> >>>>> buffers corresponding to that per-cpu buffer.
> >>>>>
> >>>>> Also add function to clear page table entries for a percpu buffer.
> >>>>>
> >>>>
> >>>> This needs some kind of clarification so that readers can tell whether
> >>>> you're trying to map all percpu memory or just map a specific
> >>>> variable.  In either case, you're making a dubious assumption that
> >>>> percpu memory contains no secrets.
> >>> I'm thinking the per-cpu random pool is a secret. IOW, it demonstrably
> >>> does contain secrets, invalidating that premise.
> >>
> >> The current code unconditionally maps the entire first percpu chunk
> >> (pcpu_base_addr). So it assumes it doesn't contain any secrets. That is
> >> mainly a simplification for the POC because a lot of core information
> >> that we need, for example just to switch mm, is stored there (like
> >> cpu_tlbstate, current_task...).
> >
> > I don’t think you should need any of this.
> >
>
> At the moment, the current code does need it. Otherwise it can't switch from
> kvm mm to kernel mm: switch_mm_irqs_off() will fault accessing "cpu_tlbstate",
> and then the page fault handler will fail accessing "current" before calling
> the kvm page fault handler. So it will double fault or loop on page faults.
> There are many different places where percpu variables are used, and I have
> experienced many double faults/page fault loops because of that.

Now you're experiencing what working on the early PTI code was like :)

This is why I think you shouldn't touch current in any of this.

>
> >>
> >> If the entire first percpu chunk effectively contains secrets, then we will
> >> need to individually map only the buffers we need. The kvm_copy_percpu_mapping()
> >> function is added to copy the mapping for a specified percpu buffer, so
> >> this is used to map percpu buffers which are not in the first percpu chunk.
> >>
> >> Also note that mapping is constrained by PTE (4K), so mapped buffers
> >> (percpu or not) which do not fill a whole set of pages can leak adjacent
> >> data stored on the same pages.
> >>
> >>
> >
> > I would take a different approach: figure out what you need and put it in its
> > own dedicated area, kind of like cpu_entry_area.
>
> That's certainly something we can do, like Julian proposed with "Process-local
> memory allocations": https://lkml.org/lkml/2018/11/22/1240
>
> That's fine for buffers allocated from KVM; however, we will still need some
> core kernel mappings so the thread can run and interrupts can be handled.
>
> > One nasty issue you’ll have is vmalloc: the kernel stack is in the
> > vmap range, and, if you allow access to vmap memory at all, you’ll
> > need some way to ensure that *unmap* gets propagated. I suspect the
> > right choice is to see if you can avoid using the kernel stack at all
> > in isolated mode.  Maybe you could run on the IRQ stack instead.
>
> I am currently just copying the task stack mapping into the KVM page table
> (patch 23) when a vcpu is created:
>
>         err = kvm_copy_ptes(tsk->stack, THREAD_SIZE);
>
> And this seems to work. I am clearing the mapping when the VM vcpu is freed,
> so I am making the assumption that the same task is used to create and free
> a vcpu.
>

vCPUs are bound to an mm but not a specific task, right?  So I think
this is wrong in both directions.

Suppose a vCPU is created, then the task exits, the stack mapping gets
freed (the core code tries to avoid this, but it does happen), and a
new stack gets allocated at the same VA with different physical pages.
Now you're toast :)  On the flip side, wouldn't you crash if a vCPU is
created and then run on a different thread?

How important is the ability to enable IRQs while running with the KVM
page tables?
Alexandre Chartre May 14, 2019, 4:24 p.m. UTC | #7
On 5/14/19 5:23 PM, Andy Lutomirski wrote:
> On Tue, May 14, 2019 at 2:42 AM Alexandre Chartre
> <alexandre.chartre@oracle.com> wrote:
>>
>>
>> On 5/14/19 10:34 AM, Andy Lutomirski wrote:
>>>
>>>
>>>> On May 14, 2019, at 1:25 AM, Alexandre Chartre <alexandre.chartre@oracle.com> wrote:
>>>>
>>>>
>>>>> On 5/14/19 9:09 AM, Peter Zijlstra wrote:
>>>>>> On Mon, May 13, 2019 at 11:18:41AM -0700, Andy Lutomirski wrote:
>>>>>> On Mon, May 13, 2019 at 7:39 AM Alexandre Chartre
>>>>>> <alexandre.chartre@oracle.com> wrote:
>>>>>>>
>>>>>>> pcpu_base_addr is already mapped to the KVM address space, but this
>>>>>>> represents the first percpu chunk. To access a per-cpu buffer not
>>>>>>> allocated in the first chunk, add a function which maps all cpu
>>>>>>> buffers corresponding to that per-cpu buffer.
>>>>>>>
>>>>>>> Also add function to clear page table entries for a percpu buffer.
>>>>>>>
>>>>>>
>>>>>> This needs some kind of clarification so that readers can tell whether
>>>>>> you're trying to map all percpu memory or just map a specific
>>>>>> variable.  In either case, you're making a dubious assumption that
>>>>>> percpu memory contains no secrets.
>>>>> I'm thinking the per-cpu random pool is a secret. IOW, it demonstrably
>>>>> does contain secrets, invalidating that premise.
>>>>
>>>> The current code unconditionally maps the entire first percpu chunk
>>>> (pcpu_base_addr). So it assumes it doesn't contain any secrets. That is
>>>> mainly a simplification for the POC because a lot of core information
>>>> that we need, for example just to switch mm, is stored there (like
>>>> cpu_tlbstate, current_task...).
>>>
>>> I don’t think you should need any of this.
>>>
>>
>> At the moment, the current code does need it. Otherwise it can't switch from
>> kvm mm to kernel mm: switch_mm_irqs_off() will fault accessing "cpu_tlbstate",
>> and then the page fault handler will fail accessing "current" before calling
>> the kvm page fault handler. So it will double fault or loop on page faults.
>> There are many different places where percpu variables are used, and I have
>> experienced many double faults/page fault loops because of that.
> 
> Now you're experiencing what working on the early PTI code was like :)
> 
> This is why I think you shouldn't touch current in any of this.
> 
>>
>>>>
>>>> If the entire first percpu chunk effectively contains secrets, then we will
>>>> need to individually map only the buffers we need. The kvm_copy_percpu_mapping()
>>>> function is added to copy the mapping for a specified percpu buffer, so
>>>> this is used to map percpu buffers which are not in the first percpu chunk.
>>>>
>>>> Also note that mapping is constrained by PTE (4K), so mapped buffers
>>>> (percpu or not) which do not fill a whole set of pages can leak adjacent
>>>> data stored on the same pages.
>>>>
>>>>
>>>
>>> I would take a different approach: figure out what you need and put it in its
>>> own dedicated area, kind of like cpu_entry_area.
>>
>> That's certainly something we can do, like Julian proposed with "Process-local
>> memory allocations": https://lkml.org/lkml/2018/11/22/1240
>>
>> That's fine for buffers allocated from KVM; however, we will still need some
>> core kernel mappings so the thread can run and interrupts can be handled.
>>
>>> One nasty issue you’ll have is vmalloc: the kernel stack is in the
>>> vmap range, and, if you allow access to vmap memory at all, you’ll
>>> need some way to ensure that *unmap* gets propagated. I suspect the
>>> right choice is to see if you can avoid using the kernel stack at all
>>> in isolated mode.  Maybe you could run on the IRQ stack instead.
>>
>> I am currently just copying the task stack mapping into the KVM page table
>> (patch 23) when a vcpu is created:
>>
>>          err = kvm_copy_ptes(tsk->stack, THREAD_SIZE);
>>
>> And this seems to work. I am clearing the mapping when the VM vcpu is freed,
>> so I am making the assumption that the same task is used to create and free
>> a vcpu.
>>
> 
> vCPUs are bound to an mm but not a specific task, right?  So I think
> this is wrong in both directions.
> 

I know, that was yet another shortcut for the POC: I assume there's a 1:1
mapping between a vCPU and a task, but I think that's fair with qemu.


> Suppose a vCPU is created, then the task exits, the stack mapping gets
> freed (the core code tries to avoid this, but it does happen), and a
> new stack gets allocated at the same VA with different physical pages.
> Now you're toast :)  On the flip side, wouldn't you crash if a vCPU is
> created and then run on a different thread?

Yes, that's why I have a safety net: before entering KVM isolation I always
check that the current task is mapped in the KVM address space; if not, it
gets mapped.
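
A sketch of that safety net (the helper name kvm_task_is_mapped() is made
up for illustration; kvm_copy_ptes() is from this series):

	/* before entering KVM isolation */
	if (!kvm_task_is_mapped(current))
		kvm_copy_ptes(current->stack, THREAD_SIZE);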

> How important is the ability to enable IRQs while running with the KVM
> page tables?
> 

I can't say; I would need to check, but we probably need IRQs at least for
some timers. Sounds like you would really prefer IRQs to be disabled.


alex.
Peter Zijlstra May 14, 2019, 5:05 p.m. UTC | #8
On Tue, May 14, 2019 at 06:24:48PM +0200, Alexandre Chartre wrote:
> On 5/14/19 5:23 PM, Andy Lutomirski wrote:

> > How important is the ability to enable IRQs while running with the KVM
> > page tables?
> > 
> 
> I can't say; I would need to check, but we probably need IRQs at least for
> some timers. Sounds like you would really prefer IRQs to be disabled.
> 

I think what amluto is getting at, is:

again:
	local_irq_disable();
	switch_to_kvm_mm();
	/* do very little -- (A) */
	VMEnter()

		/* runs as guest */

	/* IRQ happens */
	VMExit()
	/* inspect exit reason */
	if (/* IRQ pending */) {
		switch_from_kvm_mm();
		local_irq_restore();
		goto again;
	}


but I don't know anything about VMX/SVM at all, so the above might not
be feasible; specifically, I read something about how VMX allows NMIs
where SVM did not somewhere around (A) -- or something like that,
earlier in this thread.
Sean Christopherson May 14, 2019, 6:09 p.m. UTC | #9
On Tue, May 14, 2019 at 07:05:22PM +0200, Peter Zijlstra wrote:
> On Tue, May 14, 2019 at 06:24:48PM +0200, Alexandre Chartre wrote:
> > On 5/14/19 5:23 PM, Andy Lutomirski wrote:
> 
> > > How important is the ability to enable IRQs while running with the KVM
> > > page tables?
> > > 
> > 
> > I can't say; I would need to check, but we probably need IRQs at least for
> > some timers. Sounds like you would really prefer IRQs to be disabled.
> > 
> 
> I think what amluto is getting at, is:
> 
> again:
> 	local_irq_disable();
> 	switch_to_kvm_mm();
> 	/* do very little -- (A) */
> 	VMEnter()
> 
> 		/* runs as guest */
> 
> 	/* IRQ happens */
> 	VMExit()
> 	/* inspect exit reason */
> 	if (/* IRQ pending */) {
> 		switch_from_kvm_mm();
> 		local_irq_restore();
> 		goto again;
> 	}
> 
> 
> but I don't know anything about VMX/SVM at all, so the above might not
> be feasible; specifically, I read something about how VMX allows NMIs
> where SVM did not somewhere around (A) -- or something like that,
> earlier in this thread.

For IRQs it's somewhat feasible, but not for NMIs since NMIs are unblocked
on VMX immediately after VM-Exit, i.e. there's no way to prevent an NMI
from occurring while KVM's page tables are loaded.

Back to Andy's question about enabling IRQs, the answer is "it depends".
Exits due to INTR, NMI and #MC are considered high priority and are
serviced before re-enabling IRQs and preemption[1].  All other exits are
handled after IRQs and preemption are re-enabled.

A decent number of exit handlers are quite short, e.g. CPUID, most RDMSR
and WRMSR, any event-related exit, etc...  But many exit handlers require 
significantly longer flows, e.g. EPT violations (page faults) and anything
that requires extensive emulation, e.g. nested VMX.  In short, leaving
IRQs disabled across all exits is not practical.
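
Roughly, the ordering described above is (a simplified sketch with
approximate names, not the actual vcpu run loop):

	local_irq_disable();
	vmenter();			/* run the guest until VM-Exit */
	handle_high_priority_exits();	/* INTR, NMI and #MC serviced here */
	local_irq_enable();		/* IRQs and preemption re-enabled */
	handle_exit(vcpu);		/* CPUID, EPT violations, emulation, ... */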

Before going down the path of figuring out how to handle the corner cases
regarding kvm_mm, I think it makes sense to pinpoint exactly what exits
are a) in the hot path for the use case (configuration) and b) can be
handled fast enough that they can run with IRQs disabled.  Generating that
list might allow us to tightly bound the contents of kvm_mm and sidestep
many of the corner cases, i.e. select VM-Exits are handled with IRQs
disabled using KVM's mm, while "slow" VM-Exits go through the full context
switch.

[1] Technically, IRQs are actually enabled when SVM services INTR.  SVM
    hardware doesn't acknowledge the INTR/NMI on VM-Exit, but rather keeps
    it pending until the event is unblocked, e.g. servicing a VM-Exit due
    to an INTR is simply a matter of enabling IRQs.
Andy Lutomirski May 14, 2019, 8:27 p.m. UTC | #10
On Tue, May 14, 2019 at 10:05 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Tue, May 14, 2019 at 06:24:48PM +0200, Alexandre Chartre wrote:
> > On 5/14/19 5:23 PM, Andy Lutomirski wrote:
>
> > > How important is the ability to enable IRQs while running with the KVM
> > > page tables?
> > >
> >
> > I can't say; I would need to check, but we probably need IRQs at least for
> > some timers. Sounds like you would really prefer IRQs to be disabled.
> >
>
> I think what amluto is getting at, is:
>
> again:
>         local_irq_disable();
>         switch_to_kvm_mm();
>         /* do very little -- (A) */
>         VMEnter()
>
>                 /* runs as guest */
>
>         /* IRQ happens */
>         VMExit()
>         /* inspect exit reason */
>         if (/* IRQ pending */) {
>                 switch_from_kvm_mm();
>                 local_irq_restore();
>                 goto again;
>         }
>

What I'm getting at is that running the kernel without mapping the
whole kernel is a horrible, horrible thing to do.  The less code we
can run like that, the better.
Andy Lutomirski May 14, 2019, 8:33 p.m. UTC | #11
On Tue, May 14, 2019 at 11:09 AM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> On Tue, May 14, 2019 at 07:05:22PM +0200, Peter Zijlstra wrote:
> > On Tue, May 14, 2019 at 06:24:48PM +0200, Alexandre Chartre wrote:
> > > On 5/14/19 5:23 PM, Andy Lutomirski wrote:
> >
> > > > How important is the ability to enable IRQs while running with the KVM
> > > > page tables?
> > > >
> > >
> > > I can't say; I would need to check, but we probably need IRQs at least for
> > > some timers. Sounds like you would really prefer IRQs to be disabled.
> > >
> >
> > I think what amluto is getting at, is:
> >
> > again:
> >       local_irq_disable();
> >       switch_to_kvm_mm();
> >       /* do very little -- (A) */
> >       VMEnter()
> >
> >               /* runs as guest */
> >
> >       /* IRQ happens */
> >       VMExit()
> >       /* inspect exit reason */
> >       if (/* IRQ pending */) {
> >               switch_from_kvm_mm();
> >               local_irq_restore();
> >               goto again;
> >       }
> >
> >
> > but I don't know anything about VMX/SVM at all, so the above might not
> > be feasible; specifically, I read something about how VMX allows NMIs
> > where SVM did not somewhere around (A) -- or something like that,
> > earlier in this thread.
>
> For IRQs it's somewhat feasible, but not for NMIs since NMIs are unblocked
> on VMX immediately after VM-Exit, i.e. there's no way to prevent an NMI
> from occurring while KVM's page tables are loaded.
>
> Back to Andy's question about enabling IRQs, the answer is "it depends".
> Exits due to INTR, NMI and #MC are considered high priority and are
> serviced before re-enabling IRQs and preemption[1].  All other exits are
> handled after IRQs and preemption are re-enabled.
>
> A decent number of exit handlers are quite short, e.g. CPUID, most RDMSR
> and WRMSR, any event-related exit, etc...  But many exit handlers require
> significantly longer flows, e.g. EPT violations (page faults) and anything
> that requires extensive emulation, e.g. nested VMX.  In short, leaving
> IRQs disabled across all exits is not practical.
>
> Before going down the path of figuring out how to handle the corner cases
> regarding kvm_mm, I think it makes sense to pinpoint exactly what exits
> are a) in the hot path for the use case (configuration) and b) can be
> handled fast enough that they can run with IRQs disabled.  Generating that
> list might allow us to tightly bound the contents of kvm_mm and sidestep
> many of the corner cases, i.e. select VM-Exits are handled with IRQs
> disabled using KVM's mm, while "slow" VM-Exits go through the full context
> switch.

I suspect that the context switch is a bit of a red herring.  A
PCID-don't-flush CR3 write is IIRC under 300 cycles.  Sure, it's slow,
but it's probably minor compared to the full cost of the vm exit.  The
pain point is kicking the sibling thread.

When I worked on the PTI stuff, I went to great lengths to never have
a copy of the vmalloc page tables.  The top-level entry is either
there or it isn't, so everything is always in sync.  I'm sure it's
*possible* to populate just part of it for this KVM isolation, but
it's going to be ugly.  It would be really nice if we could avoid it.
Unfortunately, this interacts unpleasantly with having the kernel
stack in there.  We can freely use a different stack (the IRQ stack,
for example) as long as we don't schedule, but that means we can't run
preemptable code.

Another issue is tracing, kprobes, etc -- I don't think anyone will
like it if a kprobe in KVM either dramatically changes performance by
triggering isolation exits or by crashing.  So you may need to
restrict the isolated code to a file that is compiled with tracing off
and has everything marked NOKPROBE.  Yuck.

I hate to say this, but at what point do we declare that "if you have
SMT on, you get to keep both pieces, simultaneously!"?
Sean Christopherson May 14, 2019, 9:06 p.m. UTC | #12
On Tue, May 14, 2019 at 01:33:21PM -0700, Andy Lutomirski wrote:
> On Tue, May 14, 2019 at 11:09 AM Sean Christopherson
> <sean.j.christopherson@intel.com> wrote:
> > For IRQs it's somewhat feasible, but not for NMIs since NMIs are unblocked
> > on VMX immediately after VM-Exit, i.e. there's no way to prevent an NMI
> > from occurring while KVM's page tables are loaded.
> >
> > Back to Andy's question about enabling IRQs, the answer is "it depends".
> > Exits due to INTR, NMI and #MC are considered high priority and are
> > serviced before re-enabling IRQs and preemption[1].  All other exits are
> > handled after IRQs and preemption are re-enabled.
> >
> > A decent number of exit handlers are quite short, e.g. CPUID, most RDMSR
> > and WRMSR, any event-related exit, etc...  But many exit handlers require
> > significantly longer flows, e.g. EPT violations (page faults) and anything
> > that requires extensive emulation, e.g. nested VMX.  In short, leaving
> > IRQs disabled across all exits is not practical.
> >
> > Before going down the path of figuring out how to handle the corner cases
> > regarding kvm_mm, I think it makes sense to pinpoint exactly what exits
> > are a) in the hot path for the use case (configuration) and b) can be
> > handled fast enough that they can run with IRQs disabled.  Generating that
> > list might allow us to tightly bound the contents of kvm_mm and sidestep
> > many of the corner cases, i.e. select VM-Exits are handled with IRQs
> > disabled using KVM's mm, while "slow" VM-Exits go through the full context
> > switch.
> 
> I suspect that the context switch is a bit of a red herring.  A
> PCID-don't-flush CR3 write is IIRC under 300 cycles.  Sure, it's slow,
> but it's probably minor compared to the full cost of the vm exit.  The
> pain point is kicking the sibling thread.

Speaking of PCIDs, a separate mm for KVM would mean consuming another
ASID, which isn't good.

> When I worked on the PTI stuff, I went to great lengths to never have
> a copy of the vmalloc page tables.  The top-level entry is either
> there or it isn't, so everything is always in sync.  I'm sure it's
> *possible* to populate just part of it for this KVM isolation, but
> it's going to be ugly.  It would be really nice if we could avoid it.
> Unfortunately, this interacts unpleasantly with having the kernel
> stack in there.  We can freely use a different stack (the IRQ stack,
> for example) as long as we don't schedule, but that means we can't run
> preemptable code.
> 
> Another issue is tracing, kprobes, etc -- I don't think anyone will
> like it if a kprobe in KVM either dramatically changes performance by
> triggering isolation exits or by crashing.  So you may need to
> restrict the isolated code to a file that is compiled with tracing off
> and has everything marked NOKPROBE.  Yuck.

Right, and all of the above is largely why I suggested compiling a list
of VM-Exits that "need" preferential treatment.  If the cumulative amount
of code and data that needs to be accessed is tiny, then this might be
feasible.  But if the goal is to be able to do things like handle IRQs
using the KVM mm, ouch.

> I hate to say this, but at what point do we declare that "if you have
> SMT on, you get to keep both pieces, simultaneously!"?
Andy Lutomirski May 14, 2019, 9:55 p.m. UTC | #13
> On May 14, 2019, at 2:06 PM, Sean Christopherson <sean.j.christopherson@intel.com> wrote:
> 
>> On Tue, May 14, 2019 at 01:33:21PM -0700, Andy Lutomirski wrote:
>> On Tue, May 14, 2019 at 11:09 AM Sean Christopherson
>> <sean.j.christopherson@intel.com> wrote:
>>> For IRQs it's somewhat feasible, but not for NMIs since NMIs are unblocked
>>> on VMX immediately after VM-Exit, i.e. there's no way to prevent an NMI
>>> from occurring while KVM's page tables are loaded.
>>> 
>>> Back to Andy's question about enabling IRQs, the answer is "it depends".
>>> Exits due to INTR, NMI and #MC are considered high priority and are
>>> serviced before re-enabling IRQs and preemption[1].  All other exits are
>>> handled after IRQs and preemption are re-enabled.
>>> 
>>> A decent number of exit handlers are quite short, e.g. CPUID, most RDMSR
>>> and WRMSR, any event-related exit, etc...  But many exit handlers require
>>> significantly longer flows, e.g. EPT violations (page faults) and anything
>>> that requires extensive emulation, e.g. nested VMX.  In short, leaving
>>> IRQs disabled across all exits is not practical.
>>> 
>>> Before going down the path of figuring out how to handle the corner cases
>>> regarding kvm_mm, I think it makes sense to pinpoint exactly what exits
>>> are a) in the hot path for the use case (configuration) and b) can be
>>> handled fast enough that they can run with IRQs disabled.  Generating that
>>> list might allow us to tightly bound the contents of kvm_mm and sidestep
>>> many of the corner cases, i.e. select VM-Exits are handled with IRQs
>>> disabled using KVM's mm, while "slow" VM-Exits go through the full context
>>> switch.
>> 
>> I suspect that the context switch is a bit of a red herring.  A
>> PCID-don't-flush CR3 write is IIRC under 300 cycles.  Sure, it's slow,
>> but it's probably minor compared to the full cost of the vm exit.  The
>> pain point is kicking the sibling thread.
> 
> Speaking of PCIDs, a separate mm for KVM would mean consuming another
> ASID, which isn't good.

I’m not sure we care. We have many logical address spaces (two per mm plus a few more).  We have 4096 PCIDs, but we only use ten or so.  And we have some undocumented number of *physical* ASIDs with some undocumented mechanism by which PCID maps to a physical ASID.

I don’t suppose you know how many physical ASIDs we have?  And how it interacts with the VPID stuff?
Sean Christopherson May 14, 2019, 10:38 p.m. UTC | #14
On Tue, May 14, 2019 at 02:55:18PM -0700, Andy Lutomirski wrote:
> 
> > On May 14, 2019, at 2:06 PM, Sean Christopherson <sean.j.christopherson@intel.com> wrote:
> > 
> >> On Tue, May 14, 2019 at 01:33:21PM -0700, Andy Lutomirski wrote:
> >> I suspect that the context switch is a bit of a red herring.  A
> >> PCID-don't-flush CR3 write is IIRC under 300 cycles.  Sure, it's slow,
> >> but it's probably minor compared to the full cost of the vm exit.  The
> >> pain point is kicking the sibling thread.
> > 
> > Speaking of PCIDs, a separate mm for KVM would mean consuming another
> > ASID, which isn't good.
> 
> I’m not sure we care. We have many logical address spaces (two per mm plus a
> few more).  We have 4096 PCIDs, but we only use ten or so.  And we have some
> undocumented number of *physical* ASIDs with some undocumented mechanism by
> which PCID maps to a physical ASID.

Yeah, I was referring to physical ASIDs.

> I don’t suppose you know how many physical ASIDs we have?

Limited number of physical ASIDs.  I'll leave it at that so as not to
disclose something I shouldn't.

> And how it interacts with the VPID stuff?

VPID and PCID get factored into the final ASID, i.e. changing either one
results in a new ASID.  The SDM's oblique way of saying that:

  VPIDs and PCIDs (see Section 4.10.1) can be used concurrently. When this
  is done, the processor associates cached information with both a VPID and
  a PCID. Such information is used only if the current VPID and PCID both
  match those associated with the cached information.

E.g. enabling PTI in both the host and guest consumes four ASIDs just to
run a single task in the guest:

  - VPID=0, PCID=kernel
  - VPID=0, PCID=user
  - VPID=1, PCID=kernel
  - VPID=1, PCID=user

The impact of consuming another ASID for KVM would likely depend on both
the guest and host configurations/workloads, e.g. if the guest is using a
lot of PCIDs then it's probably a moot point.  It's something to keep in
mind though if we go down this path.
Jonathan Adams May 18, 2019, 12:05 a.m. UTC | #15
On Tue, May 14, 2019 at 3:38 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
> On Tue, May 14, 2019 at 02:55:18PM -0700, Andy Lutomirski wrote:
> > > On May 14, 2019, at 2:06 PM, Sean Christopherson <sean.j.christopherson@intel.com> wrote:
> > >> On Tue, May 14, 2019 at 01:33:21PM -0700, Andy Lutomirski wrote:
> > >> I suspect that the context switch is a bit of a red herring.  A
> > >> PCID-don't-flush CR3 write is IIRC under 300 cycles.  Sure, it's slow,
> > >> but it's probably minor compared to the full cost of the vm exit.  The
> > >> pain point is kicking the sibling thread.
> > >
> > > Speaking of PCIDs, a separate mm for KVM would mean consuming another
> > > ASID, which isn't good.
> >
> > I’m not sure we care. We have many logical address spaces (two per mm plus a
> > few more).  We have 4096 PCIDs, but we only use ten or so.  And we have some
> > undocumented number of *physical* ASIDs with some undocumented mechanism by
> > which PCID maps to a physical ASID.
>
> Yeah, I was referring to physical ASIDs.
>
> > I don’t suppose you know how many physical ASIDs we have?
>
> Limited number of physical ASIDs.  I'll leave it at that so as not to
> disclose something I shouldn't.
>
> > And how it interacts with the VPID stuff?
>
> VPID and PCID get factored into the final ASID, i.e. changing either one
> results in a new ASID.  The SDM's oblique way of saying that:
>
>   VPIDs and PCIDs (see Section 4.10.1) can be used concurrently. When this
>   is done, the processor associates cached information with both a VPID and
>   a PCID. Such information is used only if the current VPID and PCID both
>   match those associated with the cached information.
>
> E.g. enabling PTI in both the host and guest consumes four ASIDs just to
> run a single task in the guest:
>
>   - VPID=0, PCID=kernel
>   - VPID=0, PCID=user
>   - VPID=1, PCID=kernel
>   - VPID=1, PCID=user
>
> The impact of consuming another ASID for KVM would likely depend on both
> the guest and host configurations/workloads, e.g. if the guest is using a
> lot of PCIDs then it's probably a moot point.  It's something to keep in
> mind though if we go down this path.

One answer to that would be to have the KVM page tables use the same
PCID as the normal user-mode PTI page tables.  It's not ideal (since
the qemu/whatever process can see some kernel data via Meltdown that it
wouldn't normally be able to see), but might be an option to
investigate.

Cheers,
- jonathan

Patch

diff --git a/arch/x86/kvm/isolation.c b/arch/x86/kvm/isolation.c
index 539e287..2052abf 100644
--- a/arch/x86/kvm/isolation.c
+++ b/arch/x86/kvm/isolation.c
@@ -990,6 +990,40 @@  void kvm_clear_range_mapping(void *ptr)
 EXPORT_SYMBOL(kvm_clear_range_mapping);
 
 
+void kvm_clear_percpu_mapping(void *percpu_ptr)
+{
+	void *ptr;
+	int cpu;
+
+	pr_debug("PERCPU CLEAR percpu=%px\n", percpu_ptr);
+	for_each_possible_cpu(cpu) {
+		ptr = per_cpu_ptr(percpu_ptr, cpu);
+		kvm_clear_range_mapping(ptr);
+	}
+}
+EXPORT_SYMBOL(kvm_clear_percpu_mapping);
+
+int kvm_copy_percpu_mapping(void *percpu_ptr, size_t size)
+{
+	void *ptr;
+	int cpu, err;
+
+	pr_debug("PERCPU COPY percpu=%px size=%zx\n", percpu_ptr, size);
+	for_each_possible_cpu(cpu) {
+		ptr = per_cpu_ptr(percpu_ptr, cpu);
+		pr_debug("PERCPU COPY cpu%d addr=%px\n", cpu, ptr);
+		err = kvm_copy_ptes(ptr, size);
+		if (err) {
+			kvm_clear_percpu_mapping(percpu_ptr);
+			return err;
+		}
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(kvm_copy_percpu_mapping);
+
+
 static int kvm_isolation_init_mm(void)
 {
 	pgd_t *kvm_pgd;
diff --git a/arch/x86/kvm/isolation.h b/arch/x86/kvm/isolation.h
index 7d3c985..3ef2060 100644
--- a/arch/x86/kvm/isolation.h
+++ b/arch/x86/kvm/isolation.h
@@ -18,5 +18,7 @@  static inline bool kvm_isolation(void)
 extern void kvm_may_access_sensitive_data(struct kvm_vcpu *vcpu);
 extern int kvm_copy_ptes(void *ptr, unsigned long size);
 extern void kvm_clear_range_mapping(void *ptr);
+extern int kvm_copy_percpu_mapping(void *percpu_ptr, size_t size);
+extern void kvm_clear_percpu_mapping(void *percpu_ptr);
 
 #endif