
[v1,02/10] KVM: x86/mmu: Prepare kvm_is_mmio_pfn() for PG_reserved changes

Message ID 20191024120938.11237-3-david@redhat.com (mailing list archive)
State New, archived
Series mm: Don't mark hotplugged pages PG_reserved (including ZONE_DEVICE)

Commit Message

David Hildenbrand Oct. 24, 2019, 12:09 p.m. UTC
Right now, ZONE_DEVICE memory is always set PG_reserved. We want to
change that.

KVM has this weird use case where you can map anything from /dev/mem
into the guest. pfn_valid() is not a reliable check for whether the
memmap was initialized and can be touched. pfn_to_online_page() makes
sure that we have an initialized memmap (and are not dealing with
ZONE_DEVICE memory).

Rewrite kvm_is_mmio_pfn() to make sure the function produces the
same result once we stop setting ZONE_DEVICE pages PG_reserved.
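
To illustrate the distinction, a rough sketch (classify_pfn() is a
made-up name used for illustration only, not an existing kernel helper;
the real checks behind these helpers are more involved):

static void classify_pfn(kvm_pfn_t pfn)
{
	if (pfn_to_online_page(pfn)) {
		/* Initialized memmap; never ZONE_DEVICE. */
	} else if (pfn_valid(pfn)) {
		/*
		 * A memmap exists, but the section is not online: this
		 * could be ZONE_DEVICE, or a memmap that was never
		 * initialized and must not be touched blindly.
		 */
	} else {
		/* No memmap at all, e.g. MMIO or RAM mapped via /dev/mem. */
	}
}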

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: Jim Mattson <jmattson@google.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: KarimAllah Ahmed <karahmed@amazon.de>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/kvm/mmu.c | 29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)

Comments

Dan Williams Nov. 5, 2019, 1:37 a.m. UTC | #1
On Thu, Oct 24, 2019 at 5:10 AM David Hildenbrand <david@redhat.com> wrote:
>
> [...]
>
> @@ -2962,20 +2962,25 @@ static bool mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
>
>  static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
>  {
> +       struct page *page = pfn_to_online_page(pfn);
> +
> +       /*
> +        * ZONE_DEVICE pages are never online. Online pages that are reserved
> +        * either indicate the zero page or MMIO pages.
> +        */
> +       if (page)
> +               return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn));
> +
> +       /*
> +        * Anything with a valid (but not online) memmap could be ZONE_DEVICE.
> +        * Treat only UC/UC-/WC pages as MMIO.

You might clarify that ZONE_DEVICE pages that belong to WB cacheable
pmem have the correct memtype established by devm_memremap_pages().

Other than that, feel free to add:

Reviewed-by: Dan Williams <dan.j.williams@intel.com>
David Hildenbrand Nov. 5, 2019, 11:09 a.m. UTC | #2
On 05.11.19 02:37, Dan Williams wrote:
> On Thu, Oct 24, 2019 at 5:10 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> [...]
>>
>> @@ -2962,20 +2962,25 @@ static bool mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
>>
>>   static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
>>   {
>> +       struct page *page = pfn_to_online_page(pfn);
>> +
>> +       /*
>> +        * ZONE_DEVICE pages are never online. Online pages that are reserved
>> +        * either indicate the zero page or MMIO pages.
>> +        */
>> +       if (page)
>> +               return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn));
>> +
>> +       /*
>> +        * Anything with a valid (but not online) memmap could be ZONE_DEVICE.
>> +        * Treat only UC/UC-/WC pages as MMIO.
> 
> You might clarify that ZONE_DEVICE pages that belong to WB cacheable
> pmem have the correct memtype established by devm_memremap_pages().

/*
 * Anything with a valid (but not online) memmap could be ZONE_DEVICE.
 * Treat only UC/UC-/WC pages as MMIO. devm_memremap_pages() established
 * the correct memtype for WB cacheable ZONE_DEVICE pages.
 */

Thanks!


Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 24c23c66b226..f03089a336de 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2962,20 +2962,25 @@  static bool mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
 
 static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
 {
+	struct page *page = pfn_to_online_page(pfn);
+
+	/*
+	 * ZONE_DEVICE pages are never online. Online pages that are reserved
+	 * either indicate the zero page or MMIO pages.
+	 */
+	if (page)
+		return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn));
+
+	/*
+	 * Anything with a valid (but not online) memmap could be ZONE_DEVICE.
+	 * Treat only UC/UC-/WC pages as MMIO.
+	 */
 	if (pfn_valid(pfn))
-		return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn)) &&
-			/*
-			 * Some reserved pages, such as those from NVDIMM
-			 * DAX devices, are not for MMIO, and can be mapped
-			 * with cached memory type for better performance.
-			 * However, the above check misconceives those pages
-			 * as MMIO, and results in KVM mapping them with UC
-			 * memory type, which would hurt the performance.
-			 * Therefore, we check the host memory type in addition
-			 * and only treat UC/UC-/WC pages as MMIO.
-			 */
-			(!pat_enabled() || pat_pfn_immune_to_uc_mtrr(pfn));
+		return !pat_enabled() || pat_pfn_immune_to_uc_mtrr(pfn);
 
+	/*
+	 * Any RAM that has no memmap (e.g., mapped via /dev/mem) is not MMIO.
+	 */
 	return !e820__mapped_raw_any(pfn_to_hpa(pfn),
 				     pfn_to_hpa(pfn + 1) - 1,
 				     E820_TYPE_RAM);
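
For readers who want the end state at a glance: with this hunk applied,
plus the comment wording David proposed in reply to Dan, kvm_is_mmio_pfn()
would read roughly as follows (a sketch of the v1 result; the version that
is eventually merged may differ):

static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
{
	struct page *page = pfn_to_online_page(pfn);

	/*
	 * ZONE_DEVICE pages are never online. Online pages that are reserved
	 * either indicate the zero page or MMIO pages.
	 */
	if (page)
		return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn));

	/*
	 * Anything with a valid (but not online) memmap could be ZONE_DEVICE.
	 * Treat only UC/UC-/WC pages as MMIO. devm_memremap_pages() established
	 * the correct memtype for WB cacheable ZONE_DEVICE pages.
	 */
	if (pfn_valid(pfn))
		return !pat_enabled() || pat_pfn_immune_to_uc_mtrr(pfn);

	/*
	 * Any RAM that has no memmap (e.g., mapped via /dev/mem) is not MMIO.
	 */
	return !e820__mapped_raw_any(pfn_to_hpa(pfn),
				     pfn_to_hpa(pfn + 1) - 1,
				     E820_TYPE_RAM);
}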