[v4,00/14] KVM/X86: Introduce a new guest mapping interface

Message ID: 1543829467-18025-1-git-send-email-karahmed@amazon.de

Message

KarimAllah Ahmed Dec. 3, 2018, 9:30 a.m. UTC
Guest memory can either be directly managed by the kernel (i.e. have a "struct
page") or it can live outside kernel control (i.e. have no "struct page"). KVM
mostly supports these two modes, except in a few places where the code assumes
that guest memory must have a "struct page".

This patchset introduces a new mapping interface to map guest memory into host
kernel memory that also supports PFN-based memory (i.e. memory without a
"struct page"). It also converts all offending code to use this interface, or
to simply read/write guest memory directly.
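
For reference, here is a sketch of the interface from patch 4 ("KVM:
Introduce a new guest mapping API"). The struct layout and exact signatures
are assumptions drawn from how the later patches use the API, not verbatim
from the patch:

struct kvm_host_map {
	struct page *page;	/* set only when the memory has a "struct page" */
	void *hva;		/* host virtual address of the mapping */
	kvm_pfn_t pfn;
	kvm_pfn_t gfn;
};

int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map);
void kvm_vcpu_unmap(struct kvm_host_map *map);

Callers work through map->hva and never need to know whether a "struct page"
exists behind it.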

As far as I can see, all offending code is now fixed except the APIC-access
page, which I will handle in a separate series along with dropping
kvm_vcpu_gfn_to_page and kvm_vcpu_gpa_to_page from the internal KVM API.

The current implementation of the new API uses memremap to map memory that does
not have a "struct page". This proves to be very slow for high-frequency
mappings. Since this does not affect the normal use case where a "struct page"
is available, the performance of this API will be addressed in a separate patch
series.
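
Concretely, the map-side logic falls back to memremap() when the PFN is not
backed by a "struct page". A minimal sketch of that fallback, assuming the
usual kernel helpers (not the literal patch):

if (pfn_valid(pfn)) {
	/* Normal case: a "struct page" exists, so the cheap kmap() works. */
	map->page = pfn_to_page(pfn);
	map->hva = kmap(map->page);
} else {
	/* PFN-based memory: memremap() works but is comparatively slow. */
	map->hva = memremap(pfn_to_hpa(pfn), PAGE_SIZE, MEMREMAP_WB);
}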

v3 -> v4:
- Rebase
- Add a new patch to also fix the newly introduced enhanced VMCS.

v2 -> v3:
- Rebase
- Add a new patch to also fix the newly introduced shadow VMCS.

Filippo Sironi (1):
  X86/KVM: Handle PFNs outside of kernel reach when touching GPTEs

KarimAllah Ahmed (13):
  X86/nVMX: handle_vmon: Read 4 bytes from guest memory
  X86/nVMX: handle_vmptrld: Copy the VMCS12 directly from guest memory
  X86/nVMX: Update the PML table without mapping and unmapping the page
  KVM: Introduce a new guest mapping API
  KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap
  KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page
  KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt
    descriptor table
  KVM/X86: Use kvm_vcpu_map in emulator_cmpxchg_emulated
  KVM/X86: hyperv: Use kvm_vcpu_map in synic_clear_sint_msg_pending
  KVM/X86: hyperv: Use kvm_vcpu_map in synic_deliver_msg
  KVM/nSVM: Use the new mapping API for mapping guest memory
  KVM/nVMX: Use kvm_vcpu_map for accessing the shadow VMCS
  KVM/nVMX: Use kvm_vcpu_map for accessing the enhanced VMCS
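
The kvm_vcpu_map conversions above all follow the same pattern. A hedged
before/after sketch, using the virtual APIC page as an example (error
handling is illustrative, and the new signatures are assumed as in the
sketch earlier):

/* Old pattern: only works for memory with a "struct page". */
struct page *page;
void *vapic;

page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->virtual_apic_page_addr);
if (is_error_page(page))
	return;
vapic = kmap(page);
/* ... access the page through vapic ... */
kunmap(page);
kvm_release_page_dirty(page);

/* New pattern: PFN-based memory works too. */
struct kvm_host_map map;

if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->virtual_apic_page_addr), &map))
	return;
/* ... access the page through map.hva ... */
kvm_vcpu_unmap(&map);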

 arch/x86/kvm/hyperv.c      |  28 +++----
 arch/x86/kvm/paging_tmpl.h |  38 ++++++---
 arch/x86/kvm/svm.c         |  97 +++++++++++------------
 arch/x86/kvm/vmx.c         | 189 +++++++++++++++++----------------------------
 arch/x86/kvm/x86.c         |  13 ++--
 include/linux/kvm_host.h   |   9 +++
 virt/kvm/kvm_main.c        |  50 ++++++++++++
 7 files changed, 228 insertions(+), 196 deletions(-)

Comments

Konrad Rzeszutek Wilk Dec. 6, 2018, 4:01 p.m. UTC | #1
On Mon, Dec 03, 2018 at 10:30:53AM +0100, KarimAllah Ahmed wrote:
> [...]
> 
> The current implementation of the new API uses memremap to map memory that does
> not have a "struct page". This proves to be very slow for high-frequency
> mappings. Since this does not affect the normal use case where a "struct page"
> is available, the performance of this API will be addressed in a separate patch
> series.

How (if at all) does this affect performance?

Thanks!
Paolo Bonzini Dec. 19, 2018, 9:27 p.m. UTC | #2
On 06/12/18 17:01, Konrad Rzeszutek Wilk wrote:
> On Mon, Dec 03, 2018 at 10:30:53AM +0100, KarimAllah Ahmed wrote:
>> [...]
>>
>> The current implementation of the new API uses memremap to map memory that does
>> not have a "struct page". This proves to be very slow for high-frequency
>> mappings. Since this does not affect the normal use case where a "struct page"
>> is available, the performance of this API will be addressed in a separate patch
>> series.
> 
> How (if at all) does this affect performance?

This is for Amazon's super special userspace sauce.  It doesn't affect
performance for normal use cases.

Paolo
Paolo Bonzini Dec. 21, 2018, 3:22 p.m. UTC | #3
On 03/12/18 10:30, KarimAllah Ahmed wrote:
> [...]
> 
> v3 -> v4:
> - Rebase
> - Add a new patch to also fix the newly introduced enhanced VMCS.

This will need a few more changes (especially given the review remarks
for patch 2), so please also add the separate dirty/clean unmap APIs in
the next revision.
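
One possible shape for that dirty/clean split, sketched here for
illustration only (the release helpers are the existing KVM ones; this is
not from any posted patch):

void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map,
		    bool dirty)
{
	if (!map->hva)
		return;

	if (map->page)
		kunmap(map->page);
	else
		memunmap(map->hva);

	if (dirty) {
		kvm_vcpu_mark_page_dirty(vcpu, map->gfn);
		kvm_release_pfn_dirty(map->pfn);
	} else {
		kvm_release_pfn_clean(map->pfn);
	}

	map->hva = NULL;
	map->page = NULL;
}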

In order to rebase against the vmx.c split, my suggestion is that you
first rebase to the last commit before nested.c was separated, then on
the immediately following one, and then on the top of the tree.  Most of
the time, "patch -p1 arch/x86/kvm/vmx/nested.c <
.git/rebase-apply/patch" will do the right thing.
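
Spelled out as commands, the dance might look like this (the names in
angle brackets are placeholders for the relevant points in the kvm tree
history, and the branch name is illustrative):

git rebase <last-commit-before-vmx-split>   # vmx.c still monolithic
git rebase <commit-that-creates-nested.c>   # arch/x86/kvm/vmx/ now exists
git rebase kvm/queue                        # finally, the top of the tree

# If a hunk against the old vmx.c fails in the middle step, retarget the
# saved patch at nested.c by hand, then resume:
patch -p1 arch/x86/kvm/vmx/nested.c < .git/rebase-apply/patch
git add arch/x86/kvm/vmx/nested.c
git rebase --continue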

Paolo
