diff mbox series

[v5,4/4] KVM: mmu: remove over-aggressive warnings

Message ID 20211129034317.2964790-5-stevensd@google.com (mailing list archive)
State New, archived
Series KVM: allow mapping non-refcounted pages

Commit Message

David Stevens Nov. 29, 2021, 3:43 a.m. UTC
From: David Stevens <stevensd@chromium.org>

Remove two warnings that require ref counts for pages to be non-zero, as
mapped pfns from follow_pfn may not have an initialized ref count.

Signed-off-by: David Stevens <stevensd@chromium.org>
---
 arch/x86/kvm/mmu/mmu.c | 7 -------
 virt/kvm/kvm_main.c    | 2 +-
 2 files changed, 1 insertion(+), 8 deletions(-)

Comments

Sean Christopherson Dec. 30, 2021, 7:22 p.m. UTC | #1
On Mon, Nov 29, 2021, David Stevens wrote:
> From: David Stevens <stevensd@chromium.org>
> 
> Remove two warnings that require ref counts for pages to be non-zero, as
> mapped pfns from follow_pfn may not have an initialized ref count.
> 
> Signed-off-by: David Stevens <stevensd@chromium.org>
> ---
>  arch/x86/kvm/mmu/mmu.c | 7 -------
>  virt/kvm/kvm_main.c    | 2 +-
>  2 files changed, 1 insertion(+), 8 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 0626395ff1d9..7c4c7fededf0 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -621,13 +621,6 @@ static int mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
>  
>  	pfn = spte_to_pfn(old_spte);
>  
> -	/*
> -	 * KVM does not hold the refcount of the page used by
> -	 * kvm mmu, before reclaiming the page, we should
> -	 * unmap it from mmu first.
> -	 */
> -	WARN_ON(!kvm_is_reserved_pfn(pfn) && !page_count(pfn_to_page(pfn)));
> -
>  	if (is_accessed_spte(old_spte))
>  		kvm_set_pfn_accessed(pfn);
>  
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 16a8a71f20bf..d81edcb3e107 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -170,7 +170,7 @@ bool kvm_is_zone_device_pfn(kvm_pfn_t pfn)
>  	 * the device has been pinned, e.g. by get_user_pages().  WARN if the
>  	 * page_count() is zero to help detect bad usage of this helper.

Stale comment.

>  	 */
> -	if (!pfn_valid(pfn) || WARN_ON_ONCE(!page_count(pfn_to_page(pfn))))
> +	if (!pfn_valid(pfn) || !page_count(pfn_to_page(pfn)))

Hrm, I know the whole point of this series is to support pages without an elevated
refcount, but this WARN was extremely helpful in catching several use-after-free
bugs in the TDP MMU.  We talked about burying a slow check behind MMU_WARN_ON, but
that isn't very helpful because no one runs with MMU_WARN_ON, and this is also a
type of check that's most useful if it runs in production.

IIUC, this series explicitly disallows using pfns that have a struct page without
refcounting, and the issue with the WARN here is that kvm_is_zone_device_pfn() is
called by kvm_is_reserved_pfn() before ensure_pfn_ref() rejects problematic pages,
i.e. triggers a false positive.

So, can't we preserve the use-after-free benefits of the check by moving it to
where KVM releases the PFN?  I.e.

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index fbca2e232e94..675b835525fa 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2904,15 +2904,19 @@ EXPORT_SYMBOL_GPL(kvm_release_pfn_dirty);

 void kvm_set_pfn_dirty(kvm_pfn_t pfn)
 {
-       if (!kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn))
+       if (!kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn)) {
+               WARN_ON_ONCE(!page_count(pfn_to_page(pfn)));
                SetPageDirty(pfn_to_page(pfn));
+       }
 }
 EXPORT_SYMBOL_GPL(kvm_set_pfn_dirty);

 void kvm_set_pfn_accessed(kvm_pfn_t pfn)
 {
-       if (!kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn))
+       if (!kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn)) {
+               WARN_ON_ONCE(!page_count(pfn_to_page(pfn)));
                mark_page_accessed(pfn_to_page(pfn));
+       }
 }
 EXPORT_SYMBOL_GPL(kvm_set_pfn_accessed);

In a way, that's even better than the current check as it makes it more obvious
that the WARN is due to a use-after-free.
David Stevens Jan. 5, 2022, 7:14 a.m. UTC | #2
On Fri, Dec 31, 2021 at 4:22 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Mon, Nov 29, 2021, David Stevens wrote:
> > From: David Stevens <stevensd@chromium.org>
> >
> > Remove two warnings that require ref counts for pages to be non-zero, as
> > mapped pfns from follow_pfn may not have an initialized ref count.
> >
> > Signed-off-by: David Stevens <stevensd@chromium.org>
> > ---
> >  arch/x86/kvm/mmu/mmu.c | 7 -------
> >  virt/kvm/kvm_main.c    | 2 +-
> >  2 files changed, 1 insertion(+), 8 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 0626395ff1d9..7c4c7fededf0 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -621,13 +621,6 @@ static int mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
> >
> >       pfn = spte_to_pfn(old_spte);
> >
> > -     /*
> > -      * KVM does not hold the refcount of the page used by
> > -      * kvm mmu, before reclaiming the page, we should
> > -      * unmap it from mmu first.
> > -      */
> > -     WARN_ON(!kvm_is_reserved_pfn(pfn) && !page_count(pfn_to_page(pfn)));
> > -
> >       if (is_accessed_spte(old_spte))
> >               kvm_set_pfn_accessed(pfn);
> >
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 16a8a71f20bf..d81edcb3e107 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -170,7 +170,7 @@ bool kvm_is_zone_device_pfn(kvm_pfn_t pfn)
> >        * the device has been pinned, e.g. by get_user_pages().  WARN if the
> >        * page_count() is zero to help detect bad usage of this helper.
>
> Stale comment.
>
> >        */
> > -     if (!pfn_valid(pfn) || WARN_ON_ONCE(!page_count(pfn_to_page(pfn))))
> > +     if (!pfn_valid(pfn) || !page_count(pfn_to_page(pfn)))
>
> Hrm, I know the whole point of this series is to support pages without an elevated
> refcount, but this WARN was extremely helpful in catching several use-after-free
> bugs in the TDP MMU.  We talked about burying a slow check behind MMU_WARN_ON, but
> that isn't very helpful because no one runs with MMU_WARN_ON, and this is also a
> type of check that's most useful if it runs in production.
>
> IIUC, this series explicitly disallows using pfns that have a struct page without
> refcounting, and the issue with the WARN here is that kvm_is_zone_device_pfn() is
> called by kvm_is_reserved_pfn() before ensure_pfn_ref() rejects problematic pages,
> i.e. triggers a false positive.
>
> So, can't we preserve the use-after-free benefits of the check by moving it to
> where KVM releases the PFN?  I.e.
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index fbca2e232e94..675b835525fa 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2904,15 +2904,19 @@ EXPORT_SYMBOL_GPL(kvm_release_pfn_dirty);
>
>  void kvm_set_pfn_dirty(kvm_pfn_t pfn)
>  {
> -       if (!kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn))
> +       if (!kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn)) {
> +               WARN_ON_ONCE(!page_count(pfn_to_page(pfn)));
>                 SetPageDirty(pfn_to_page(pfn));
> +       }
>  }
>  EXPORT_SYMBOL_GPL(kvm_set_pfn_dirty);

I'm still seeing this warning show up via __handle_changed_spte
calling kvm_set_pfn_dirty:

[  113.350473]  kvm_set_pfn_dirty+0x26/0x3e
[  113.354861]  __handle_changed_spte+0x452/0x4f6
[  113.359841]  __handle_changed_spte+0x452/0x4f6
[  113.364819]  __handle_changed_spte+0x452/0x4f6
[  113.369790]  zap_gfn_range+0x1de/0x27a
[  113.373992]  kvm_tdp_mmu_zap_invalidated_roots+0x64/0xb8
[  113.379945]  kvm_mmu_zap_all_fast+0x18c/0x1c1
[  113.384827]  kvm_page_track_flush_slot+0x55/0x87
[  113.390000]  kvm_set_memslot+0x137/0x455
[  113.394394]  kvm_delete_memslot+0x5c/0x91
[  113.398888]  __kvm_set_memory_region+0x3c0/0x5e6
[  113.404061]  kvm_set_memory_region+0x45/0x74
[  113.408844]  kvm_vm_ioctl+0x563/0x60c

I wasn't seeing it for my particular test case, but the gfn aging code
might trigger the warning as well.

I don't know if setting the dirty/accessed bits in non-refcounted
struct pages is problematic. The only way I can see to avoid it would
be to try to map from the spte to the vma and then check its flags. If
setting the flags is benign, then we'd need to do that lookup to
differentiate the safe case from the use-after-free case. Do you have
any advice on how to handle this?

-David
Sean Christopherson Jan. 5, 2022, 7:02 p.m. UTC | #3
On Wed, Jan 05, 2022, David Stevens wrote:
> On Fri, Dec 31, 2021 at 4:22 AM Sean Christopherson <seanjc@google.com> wrote:
> > >        */
> > > -     if (!pfn_valid(pfn) || WARN_ON_ONCE(!page_count(pfn_to_page(pfn))))
> > > +     if (!pfn_valid(pfn) || !page_count(pfn_to_page(pfn)))
> >
> > Hrm, I know the whole point of this series is to support pages without an elevated
> > refcount, but this WARN was extremely helpful in catching several use-after-free
> > bugs in the TDP MMU.  We talked about burying a slow check behind MMU_WARN_ON, but
> > that isn't very helpful because no one runs with MMU_WARN_ON, and this is also a
> > type of check that's most useful if it runs in production.
> >
> > IIUC, this series explicitly disallows using pfns that have a struct page without
> > refcounting, and the issue with the WARN here is that kvm_is_zone_device_pfn() is
> > called by kvm_is_reserved_pfn() before ensure_pfn_ref() rejects problematic pages,
> > i.e. triggers a false positive.
> >
> > So, can't we preserve the use-after-free benefits of the check by moving it to
> > where KVM releases the PFN?  I.e.
> >
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index fbca2e232e94..675b835525fa 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -2904,15 +2904,19 @@ EXPORT_SYMBOL_GPL(kvm_release_pfn_dirty);
> >
> >  void kvm_set_pfn_dirty(kvm_pfn_t pfn)
> >  {
> > -       if (!kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn))
> > +       if (!kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn)) {
> > +               WARN_ON_ONCE(!page_count(pfn_to_page(pfn)));
> >                 SetPageDirty(pfn_to_page(pfn));
> > +       }
> >  }
> >  EXPORT_SYMBOL_GPL(kvm_set_pfn_dirty);
> 
> I'm still seeing this warning show up via __handle_changed_spte
> calling kvm_set_pfn_dirty:
> 
> [  113.350473]  kvm_set_pfn_dirty+0x26/0x3e
> [  113.354861]  __handle_changed_spte+0x452/0x4f6
> [  113.359841]  __handle_changed_spte+0x452/0x4f6
> [  113.364819]  __handle_changed_spte+0x452/0x4f6
> [  113.369790]  zap_gfn_range+0x1de/0x27a
> [  113.373992]  kvm_tdp_mmu_zap_invalidated_roots+0x64/0xb8
> [  113.379945]  kvm_mmu_zap_all_fast+0x18c/0x1c1
> [  113.384827]  kvm_page_track_flush_slot+0x55/0x87
> [  113.390000]  kvm_set_memslot+0x137/0x455
> [  113.394394]  kvm_delete_memslot+0x5c/0x91
> [  113.398888]  __kvm_set_memory_region+0x3c0/0x5e6
> [  113.404061]  kvm_set_memory_region+0x45/0x74
> [  113.408844]  kvm_vm_ioctl+0x563/0x60c
> 
> I wasn't seeing it for my particular test case, but the gfn aging code
> might trigger the warning as well.

Ah, I got royally confused by ensure_pfn_ref()'s comment

  * Certain IO or PFNMAP mappings can be backed with valid
  * struct pages, but be allocated without refcounting e.g.,
  * tail pages of non-compound higher order allocations, which
  * would then underflow the refcount when the caller does the
  * required put_page. Don't allow those pages here.
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
that doesn't apply here because kvm_faultin_pfn() uses the low level
__gfn_to_pfn_page_memslot().

and my understanding is that @page will be non-NULL in ensure_pfn_ref() iff the
page has an elevated refcount.

Can you update the changelogs for the x86+arm64 "use gfn_to_pfn_page" patches to
explicitly call out the various ramifications of moving to gfn_to_pfn_page()?

Side topic, s/covert/convert in both changelogs :-)

> I don't know if setting the dirty/accessed bits in non-refcounted
> struct pages is problematic.

Without knowing exactly what lies behind such pages, KVM needs to set dirty bits,
otherwise there's a potential for data loss.

> The only way I can see to avoid it would be to try to map from the spte to
> the vma and then check its flags. If setting the flags is benign, then we'd
> need to do that lookup to differentiate the safe case from the use-after-free
> case. Do you have any advice on how to handle this?

Hrm.  I can't think of a clever generic solution.  But for x86-64, we can use a
software-available bit to mark SPTEs as being refcounted and use that flag to assert
that the refcount is elevated when marking the backing pfn dirty/accessed.  It'd be
64-bit only because we're out of software available bits for PAE paging, but (a)
practically no one cares about 32-bit and (b) odds are slim that a use-after-free
would be unique to 32-bit KVM.
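
In rough outline, something like this (a self-contained userspace model of the
idea, NOT actual KVM code; the bit position and helper names are invented for
illustration):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Model of the software-available SPTE bit idea: record at map time that
 * the backing pfn's refcount was elevated, so release paths can assert it
 * is still elevated.  Bit 58 is an assumed-free bit for this sketch only,
 * not the bit KVM would actually choose.
 */
typedef uint64_t u64;

#define SPTE_KVM_PAGE_REFCOUNTED (1ULL << 58)	/* assumed-free software bit */

static inline u64 make_spte_refcounted(u64 spte)
{
	return spte | SPTE_KVM_PAGE_REFCOUNTED;
}

static inline bool is_refcounted_page_spte(u64 spte)
{
	return spte & SPTE_KVM_PAGE_REFCOUNTED;
}
```

The dirty/accessed paths that hold the old SPTE value could then WARN only
when this bit is set and page_count() is zero, which is the actual
use-after-free condition.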

But that can all go in after your series is merged, e.g. I'd prefer to clean up
make_spte()'s prototype to use @fault instead of adding yet another parameter, and that'll
take a few patches to make happen since FNAME(sync_page) also uses make_spte().

TL;DR: continue as you were, I'll stop whining about this :-)
Sean Christopherson Jan. 5, 2022, 7:19 p.m. UTC | #4
On Wed, Jan 05, 2022, Sean Christopherson wrote:
> Ah, I got royally confused by ensure_pfn_ref()'s comment
> 
>   * Certain IO or PFNMAP mappings can be backed with valid
>   * struct pages, but be allocated without refcounting e.g.,
>   * tail pages of non-compound higher order allocations, which
>   * would then underflow the refcount when the caller does the
>   * required put_page. Don't allow those pages here.
>                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> that doesn't apply here because kvm_faultin_pfn() uses the low level
> __gfn_to_pfn_page_memslot().

On fifth thought, I think this is wrong and doomed to fail.  By mapping these pages
into the guest, KVM is effectively saying it supports these pages.  But if the guest
uses the corresponding gfns for an action that requires KVM to access the page,
e.g. via kvm_vcpu_map(), ensure_pfn_ref() will reject the access and all sorts of
bad things will happen to the guest.

So, why not fully reject these types of pages?  If someone is relying on KVM to
support these types of pages, then we'll fail fast and get a bug report letting us
know we need to properly support these types of pages.  And if not, then we reduce
KVM's complexity and I get to keep my precious WARN :-)
David Stevens Jan. 6, 2022, 2:42 a.m. UTC | #5
On Thu, Jan 6, 2022 at 4:19 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Wed, Jan 05, 2022, Sean Christopherson wrote:
> > Ah, I got royally confused by ensure_pfn_ref()'s comment
> >
> >   * Certain IO or PFNMAP mappings can be backed with valid
> >   * struct pages, but be allocated without refcounting e.g.,
> >   * tail pages of non-compound higher order allocations, which
> >   * would then underflow the refcount when the caller does the
> >   * required put_page. Don't allow those pages here.
> >                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > that doesn't apply here because kvm_faultin_pfn() uses the low level
> > __gfn_to_pfn_page_memslot().
>
> On fifth thought, I think this is wrong and doomed to fail.  By mapping these pages
> into the guest, KVM is effectively saying it supports these pages.  But if the guest
> uses the corresponding gfns for an action that requires KVM to access the page,
> e.g. via kvm_vcpu_map(), ensure_pfn_ref() will reject the access and all sorts of
> bad things will happen to the guest.
>
> So, why not fully reject these types of pages?  If someone is relying on KVM to
> support these types of pages, then we'll fail fast and get a bug report letting us
> know we need to properly support these types of pages.  And if not, then we reduce
> KVM's complexity and I get to keep my precious WARN :-)

Our current use case here is virtio-gpu blob resources [1]. Blob
resources are useful because they avoid a guest shadow buffer and the
associated memcpys, and as I understand it they are also required for
virtualized vulkan.

One type of blob resources requires mapping dma-bufs allocated by the
host directly into the guest. This works on Intel platforms and the
ARM platforms I've tested. However, the amdgpu driver sometimes
allocates higher order, non-compound pages via ttm_pool_alloc_page.
These are the type of pages which KVM is currently rejecting. Is this
something that KVM can support?

+olv, who has done some of the blob resource work.

[1] https://patchwork.kernel.org/project/dri-devel/cover/20200814024000.2485-1-gurchetansingh@chromium.org/

-David
Sean Christopherson Jan. 6, 2022, 5:38 p.m. UTC | #6
On Thu, Jan 06, 2022, David Stevens wrote:
> On Thu, Jan 6, 2022 at 4:19 AM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Wed, Jan 05, 2022, Sean Christopherson wrote:
> > > Ah, I got royally confused by ensure_pfn_ref()'s comment
> > >
> > >   * Certain IO or PFNMAP mappings can be backed with valid
> > >   * struct pages, but be allocated without refcounting e.g.,
> > >   * tail pages of non-compound higher order allocations, which
> > >   * would then underflow the refcount when the caller does the
> > >   * required put_page. Don't allow those pages here.
> > >                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > that doesn't apply here because kvm_faultin_pfn() uses the low level
> > > __gfn_to_pfn_page_memslot().
> >
> > On fifth thought, I think this is wrong and doomed to fail.  By mapping these pages
> > into the guest, KVM is effectively saying it supports these pages.  But if the guest
> > uses the corresponding gfns for an action that requires KVM to access the page,
> > e.g. via kvm_vcpu_map(), ensure_pfn_ref() will reject the access and all sorts of
> > bad things will happen to the guest.
> >
> > So, why not fully reject these types of pages?  If someone is relying on KVM to
> > support these types of pages, then we'll fail fast and get a bug report letting us
> > know we need to properly support these types of pages.  And if not, then we reduce
> > KVM's complexity and I get to keep my precious WARN :-)
> 
> Our current use case here is virtio-gpu blob resources [1]. Blob
> resources are useful because they avoid a guest shadow buffer and the
> associated memcpys, and as I understand it they are also required for
> virtualized vulkan.
> 
> One type of blob resources requires mapping dma-bufs allocated by the
> host directly into the guest. This works on Intel platforms and the
> ARM platforms I've tested. However, the amdgpu driver sometimes
> allocates higher order, non-compound pages via ttm_pool_alloc_page.

Ah.  In the future, please provide this type of information in the cover letter,
and in this case, a paragraph in patch 01 is also warranted.  The context of _why_
is critical information, e.g. having something in the changelog explaining the use
case is very helpful for future developers wondering why on earth KVM supports
this type of odd behavior.

> These are the type of pages which KVM is currently rejecting. Is this
> something that KVM can support?

I'm not opposed to it.  My complaint is that this series is incomplete in that it
allows mapping the memory into the guest, but doesn't support accessing the memory
from KVM itself.  That means for things to work properly, KVM is relying on the
guest to use the memory in a limited capacity, e.g. isn't using the memory as
general purpose RAM.  That's not problematic for your use case, because presumably
the memory is used only by the vGPU, but as is KVM can't enforce that behavior in
any way.

The really gross part is that failures are not strictly punted to userspace;
the resulting error varies significantly depending on how the guest "illegally"
uses the memory.

My first choice would be to get the amdgpu driver "fixed", but that's likely an
unreasonable request since it sounds like the non-KVM behavior is working as intended.

One thought would be to require userspace to opt-in to mapping this type of memory
by introducing a new memslot flag that explicitly states that the memslot cannot
be accessed directly by KVM, i.e. can only be mapped into the guest.  That way,
KVM has an explicit ABI with respect to how it handles this type of memory, even
though the semantics of exactly what will happen if userspace/guest violates the
ABI are not well-defined.  And internally, KVM would also have a clear touchpoint
where it deliberately allows mapping such memslots, as opposed to the more implicit
behavior of bypassing ensure_pfn_ref().
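
As a sketch of what that opt-in might look like (the KVM_MEM_GUEST_ONLY name
and bit are invented for illustration; KVM defines no such flag today):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical sketch of the proposed opt-in, NOT an existing KVM ABI:
 * a memslot flag stating that the slot may be mapped into the guest but
 * must never be accessed by KVM itself.
 */
typedef uint32_t u32;

#define KVM_MEM_LOG_DIRTY_PAGES	(1UL << 0)	/* existing flag, for context */
#define KVM_MEM_READONLY	(1UL << 1)	/* existing flag, for context */
#define KVM_MEM_GUEST_ONLY	(1UL << 2)	/* hypothetical new flag */

struct kvm_memory_slot {
	u32 flags;
};

/*
 * KVM-internal accessors (hva-based access, kvm_vcpu_map(), etc.) would
 * check this before touching the slot's memory and punt to userspace,
 * or fail the access, if the slot is guest-only.
 */
static bool kvm_slot_host_access_allowed(const struct kvm_memory_slot *slot)
{
	return !(slot->flags & KVM_MEM_GUEST_ONLY);
}
```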

If we're clever, we might even be able to share the flag with the "guest private
memory"[*] concept being pursued for confidential VMs.

[*] https://lore.kernel.org/all/20211223123011.41044-1-chao.p.peng@linux.intel.com
David Stevens Jan. 7, 2022, 2:21 a.m. UTC | #7
> > These are the type of pages which KVM is currently rejecting. Is this
> > something that KVM can support?
>
> I'm not opposed to it.  My complaint is that this series is incomplete in that it
> allows mapping the memory into the guest, but doesn't support accessing the memory
> from KVM itself.  That means for things to work properly, KVM is relying on the
> guest to use the memory in a limited capacity, e.g. isn't using the memory as
> general purpose RAM.  That's not problematic for your use case, because presumably
> the memory is used only by the vGPU, but as is KVM can't enforce that behavior in
> any way.
>
> The really gross part is that failures are not strictly punted to userspace;
> the resulting error varies significantly depending on how the guest "illegally"
> uses the memory.
>
> My first choice would be to get the amdgpu driver "fixed", but that's likely an
> unreasonable request since it sounds like the non-KVM behavior is working as intended.
>
> One thought would be to require userspace to opt-in to mapping this type of memory
> by introducing a new memslot flag that explicitly states that the memslot cannot
> be accessed directly by KVM, i.e. can only be mapped into the guest.  That way,
> KVM has an explicit ABI with respect to how it handles this type of memory, even
> though the semantics of exactly what will happen if userspace/guest violates the
> ABI are not well-defined.  And internally, KVM would also have a clear touchpoint
> where it deliberately allows mapping such memslots, as opposed to the more implicit
> behavior of bypassing ensure_pfn_ref().

Is it well defined when KVM needs to directly access a memslot? At
least for x86, it looks like most of the use cases are related to
nested virtualization, except for the call in
emulator_cmpxchg_emulated. Without being able to specifically state
what should be avoided, a flag like that would be difficult for
userspace to use.

> If we're clever, we might even be able to share the flag with the "guest private
> memory"[*] concept being pursued for confidential VMs.
>
> [*] https://lore.kernel.org/all/20211223123011.41044-1-chao.p.peng@linux.intel.com
Sean Christopherson Jan. 7, 2022, 4:31 p.m. UTC | #8
On Fri, Jan 07, 2022, David Stevens wrote:
> > > These are the type of pages which KVM is currently rejecting. Is this
> > > something that KVM can support?
> >
> > I'm not opposed to it.  My complaint is that this series is incomplete in that it
> > allows mapping the memory into the guest, but doesn't support accessing the memory
> > from KVM itself.  That means for things to work properly, KVM is relying on the
> > guest to use the memory in a limited capacity, e.g. isn't using the memory as
> > general purpose RAM.  That's not problematic for your use case, because presumably
> > the memory is used only by the vGPU, but as is KVM can't enforce that behavior in
> > any way.
> >
> > The really gross part is that failures are not strictly punted to userspace;
> > the resulting error varies significantly depending on how the guest "illegally"
> > uses the memory.
> >
> > My first choice would be to get the amdgpu driver "fixed", but that's likely an
> > unreasonable request since it sounds like the non-KVM behavior is working as intended.
> >
> > One thought would be to require userspace to opt-in to mapping this type of memory
> > by introducing a new memslot flag that explicitly states that the memslot cannot
> > be accessed directly by KVM, i.e. can only be mapped into the guest.  That way,
> > KVM has an explicit ABI with respect to how it handles this type of memory, even
> > though the semantics of exactly what will happen if userspace/guest violates the
> > ABI are not well-defined.  And internally, KVM would also have a clear touchpoint
> > where it deliberately allows mapping such memslots, as opposed to the more implicit
> > behavior of bypassing ensure_pfn_ref().
> 
> Is it well defined when KVM needs to directly access a memslot?

Not really, there's certainly no established rule.

> At least for x86, it looks like most of the use cases are related to nested
> virtualization, except for the call in emulator_cmpxchg_emulated.

The emulator_cmpxchg_emulated() case will hopefully go away in the nearish future[*].
Paravirt features that communicate between guest and host via memory are the other
case that often maps a pfn into KVM.

> Without being able to specifically state what should be avoided, a flag like
> that would be difficult for userspace to use.

Yeah :-(  I was thinking KVM could state the flag would be safe to use if and only
if userspace could guarantee that the guest would use the memory for some "special"
use case, but hadn't actually thought about how to word things.

The best thing to do is probably to wait for kvm_vcpu_map() to be eliminated,
as described in the changelogs for commits:

  357a18ad230f ("KVM: Kill kvm_map_gfn() / kvm_unmap_gfn() and gfn_to_pfn_cache")
  7e2175ebd695 ("KVM: x86: Fix recording of guest steal time / preempted status")

Once that is done, everything in KVM will either access guest memory through the
userspace hva, or via a mechanism that is tied into the mmu_notifier, at which
point accessing non-refcounted struct pages is safe and just needs to worry about
not corrupting _refcount.
Sean Christopherson Jan. 7, 2022, 4:46 p.m. UTC | #9
On Fri, Jan 07, 2022, Sean Christopherson wrote:
> On Fri, Jan 07, 2022, David Stevens wrote:
> > > > These are the type of pages which KVM is currently rejecting. Is this
> > > > something that KVM can support?
> > >
> > > I'm not opposed to it.  My complaint is that this series is incomplete in that it
> > > allows mapping the memory into the guest, but doesn't support accessing the memory
> > > from KVM itself.  That means for things to work properly, KVM is relying on the
> > > guest to use the memory in a limited capacity, e.g. isn't using the memory as
> > > general purpose RAM.  That's not problematic for your use case, because presumably
> > > the memory is used only by the vGPU, but as is KVM can't enforce that behavior in
> > > any way.
> > >
> > > The really gross part is that failures are not strictly punted to userspace;
> > > the resulting error varies significantly depending on how the guest "illegally"
> > > uses the memory.
> > >
> > > My first choice would be to get the amdgpu driver "fixed", but that's likely an
> > > unreasonable request since it sounds like the non-KVM behavior is working as intended.
> > >
> > > One thought would be to require userspace to opt-in to mapping this type of memory
> > > by introducing a new memslot flag that explicitly states that the memslot cannot
> > > be accessed directly by KVM, i.e. can only be mapped into the guest.  That way,
> > > KVM has an explicit ABI with respect to how it handles this type of memory, even
> > > though the semantics of exactly what will happen if userspace/guest violates the
> > > ABI are not well-defined.  And internally, KVM would also have a clear touchpoint
> > > where it deliberately allows mapping such memslots, as opposed to the more implicit
> > > behavior of bypassing ensure_pfn_ref().
> > 
> > Is it well defined when KVM needs to directly access a memslot?
> 
> Not really, there's certainly no established rule.
> 
> > At least for x86, it looks like most of the use cases are related to nested
> > virtualization, except for the call in emulator_cmpxchg_emulated.
> 
> The emulator_cmpxchg_emulated() case will hopefully go away in the nearish future[*].

Forgot the link...

https://lore.kernel.org/all/YcG32Ytj0zUAW%2FB2@hirez.programming.kicks-ass.net/

> Paravirt features that communicate between guest and host via memory are the other
> case that often maps a pfn into KVM.
> 
> > Without being able to specifically state what should be avoided, a flag like
> > that would be difficult for userspace to use.
> 
> Yeah :-(  I was thinking KVM could state the flag would be safe to use if and only
> if userspace could guarantee that the guest would use the memory for some "special"
> use case, but hadn't actually thought about how to word things.
> 
> The best thing to do is probably to wait for kvm_vcpu_map() to be eliminated,
> as described in the changelogs for commits:
> 
>   357a18ad230f ("KVM: Kill kvm_map_gfn() / kvm_unmap_gfn() and gfn_to_pfn_cache")
>   7e2175ebd695 ("KVM: x86: Fix recording of guest steal time / preempted status")
> 
> Once that is done, everything in KVM will either access guest memory through the
> userspace hva, or via a mechanism that is tied into the mmu_notifier, at which
> point accessing non-refcounted struct pages is safe and just needs to worry about
> not corrupting _refcount.
David Stevens Jan. 10, 2022, 11:47 p.m. UTC | #10
> The best thing to do is probably to wait for kvm_vcpu_map() to be eliminated,
> as described in the changelogs for commits:
>
>   357a18ad230f ("KVM: Kill kvm_map_gfn() / kvm_unmap_gfn() and gfn_to_pfn_cache")
>   7e2175ebd695 ("KVM: x86: Fix recording of guest steal time / preempted status")
>
> Once that is done, everything in KVM will either access guest memory through the
> userspace hva, or via a mechanism that is tied into the mmu_notifier, at which
> point accessing non-refcounted struct pages is safe and just needs to worry about
> not corrupting _refcount.

That does sound like the best approach. I'll put this patch series on
hold until that work is done.

Patch

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0626395ff1d9..7c4c7fededf0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -621,13 +621,6 @@  static int mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 
 	pfn = spte_to_pfn(old_spte);
 
-	/*
-	 * KVM does not hold the refcount of the page used by
-	 * kvm mmu, before reclaiming the page, we should
-	 * unmap it from mmu first.
-	 */
-	WARN_ON(!kvm_is_reserved_pfn(pfn) && !page_count(pfn_to_page(pfn)));
-
 	if (is_accessed_spte(old_spte))
 		kvm_set_pfn_accessed(pfn);
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 16a8a71f20bf..d81edcb3e107 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -170,7 +170,7 @@  bool kvm_is_zone_device_pfn(kvm_pfn_t pfn)
 	 * the device has been pinned, e.g. by get_user_pages().  WARN if the
 	 * page_count() is zero to help detect bad usage of this helper.
 	 */
-	if (!pfn_valid(pfn) || WARN_ON_ONCE(!page_count(pfn_to_page(pfn))))
+	if (!pfn_valid(pfn) || !page_count(pfn_to_page(pfn)))
 		return false;
 
 	return is_zone_device_page(pfn_to_page(pfn));