
[RFC,00/15] KVM: Dirty ring interface

Message ID 20191129213505.18472-1-peterx@redhat.com (mailing list archive)
Series KVM: Dirty ring interface

Message

Peter Xu Nov. 29, 2019, 9:34 p.m. UTC
Branch is here: https://github.com/xzpeter/linux/tree/kvm-dirty-ring

Overview
============

This is continued work from Lei Cao <lei.cao@stratus.com> and Paolo
on the KVM dirty ring interface.  To keep it simple, I'll still start
with version 1 as an RFC.

The new dirty ring interface is another way to collect dirty pages for
the virtual machine, and it differs from the existing dirty logging
interface in a few major ways:

  - Data format: The dirty data is in a ring format rather than a
    bitmap format, so the amount of data to sync for dirty logging no
    longer depends on the size of guest memory, but on the speed of
    dirtying.  Also, the dirty ring is per-vcpu (currently plus
    another per-vm ring, so the total ring number is N+1), while the
    dirty bitmap is per-vm.

  - Data copy: Syncing dirty pages no longer needs a data copy;
    instead the ring is shared between userspace and the kernel by
    page sharing (mmap() on either the vm fd or the vcpu fd).

  - Interface: Instead of the old KVM_GET_DIRTY_LOG and
    KVM_CLEAR_DIRTY_LOG interfaces, the ring uses a new ioctl called
    KVM_RESET_DIRTY_RINGS to reset the collected dirty pages to
    protected mode again (it works like KVM_CLEAR_DIRTY_LOG, but is
    ring based; see the usage sketch below).

And more.
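
To give a rough idea of the intended usage, here is a sketch of a
userspace collection loop.  Note that the entry layout and the names
avail_index, ring_mmap_offset, handle_dirty_page() and READ_ONCE()
below are only illustrative assumptions, not the exact uAPI of this
series; please refer to the documentation patch for the real
definitions:

    /*
     * Illustrative sketch only: the entry layout, index handling and
     * mmap offset are assumptions, not this series' exact uAPI.
     */
    struct dirty_entry {
            __u32 slot;             /* memslot of the dirtied page      */
            __u64 offset;           /* page offset within that memslot  */
    };

    /* Map the per-vcpu ring once, at setup time. */
    struct dirty_entry *ring = mmap(NULL, ring_size * sizeof(*ring),
                                    PROT_READ | PROT_WRITE, MAP_SHARED,
                                    vcpu_fd, ring_mmap_offset);
    __u32 fetch_index = 0;          /* userspace consumer cursor        */

    /* Collection loop: consume what the kernel has published so far. */
    while (fetch_index != READ_ONCE(*avail_index)) {
            struct dirty_entry *e = &ring[fetch_index % ring_size];

            handle_dirty_page(e->slot, e->offset);  /* hypothetical helper */
            fetch_index++;
    }

    /* Re-protect everything collected (similar to KVM_CLEAR_DIRTY_LOG). */
    ioctl(vm_fd, KVM_RESET_DIRTY_RINGS, 0);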

I would appreciate it if reviewers could start with the patch "KVM:
Implement ring-based dirty memory tracking", especially its
documentation update, for the big picture.  That way I can avoid
copying most of it into the cover letter again.

I marked this series as RFC because I'm at least uncertain on this
change of vcpu_enter_guest():

        if (kvm_check_request(KVM_REQ_DIRTY_RING_FULL, vcpu)) {
                vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
                /*
                        * If this is requested, it means that we've
                        * marked the dirty bit in the dirty ring BUT
                        * we've not written the date.  Do it now.
                        */
                r = kvm_emulate_instruction(vcpu, 0);
                r = r >= 0 ? 0 : r;
                goto out;
        }

I did a kvm_emulate_instruction() when the dirty ring reaches the soft
limit and we want to exit to userspace; however, I'm not really sure
whether it could have any side effects.  I'd appreciate any comments
on the above, or anything else.

Tests
===========

I wanted to continue working on the QEMU part, but since I noticed
that the interface might still be prone to change, I'm posting this
series first.  However, to make sure it's at least working, I've
provided unit tests together with the series.  The unit tests should
be able to exercise the series in at least three major paths:

  (1) ./dirty_log_test -M dirty-ring

      This tests async ring operations: this should be the major
      working mode of the dirty ring interface, i.e. while the kernel
      is queuing more data, userspace is collecting at the same time.
      The ring can hardly reach full when working like this, because
      in most cases the collection is fast enough.

  (2) ./dirty_log_test -M dirty-ring -c 1024

      This sets the ring size to be very small so that the ring
      soft-full condition always triggers (soft-full is a soft limit
      on the ring state; when the dirty ring reaches the soft limit it
      does a userspace exit and lets userspace collect the data; see
      the sketch after this list).

  (3) ./dirty_log_test -M dirty-ring-wait-queue

      This solely tests the extreme case where the ring is full.  When
      the ring is completely full, the thread (vcpu or not) will be
      put onto a per-vm waitqueue, and KVM_RESET_DIRTY_RINGS will wake
      the threads up (by which point the ring should not be full any
      more).
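
For reference, the "soft-full" condition mentioned in (2) is
conceptually just a check of how many ring entries are in use against
a threshold below the real ring size.  The snippet below is an
illustrative sketch only; the structure and helper names are
assumptions, not the exact code in the series:

      /* Illustrative sketch: not the series' exact structure or helpers. */
      struct ring_state {
              __u32 size;          /* total number of ring entries        */
              __u32 soft_limit;    /* threshold below size, leaving slack */
              __u32 dirty_index;   /* producer (kernel) position          */
              __u32 reset_index;   /* last position reset by userspace    */
      };

      static bool ring_soft_full(struct ring_state *ring)
      {
              __u32 used = ring->dirty_index - ring->reset_index;

              /*
               * Once above the soft limit, the vcpu exits to userspace
               * with KVM_EXIT_DIRTY_RING_FULL so that the entries can
               * be collected before the ring really fills up.
               */
              return used >= ring->soft_limit;
      }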

Thanks,

Cao, Lei (2):
  KVM: Add kvm/vcpu argument to mark_dirty_page_in_slot
  KVM: X86: Implement ring-based dirty memory tracking

Paolo Bonzini (1):
  KVM: Move running VCPU from ARM to common code

Peter Xu (12):
  KVM: Add build-time error check on kvm_run size
  KVM: Implement ring-based dirty memory tracking
  KVM: Make dirty ring exclusive to dirty bitmap log
  KVM: Introduce dirty ring wait queue
  KVM: selftests: Always clear dirty bitmap after iteration
  KVM: selftests: Sync uapi/linux/kvm.h to tools/
  KVM: selftests: Use a single binary for dirty/clear log test
  KVM: selftests: Introduce after_vcpu_run hook for dirty log test
  KVM: selftests: Add dirty ring buffer test
  KVM: selftests: Let dirty_log_test async for dirty ring test
  KVM: selftests: Add "-c" parameter to dirty log test
  KVM: selftests: Test dirty ring waitqueue

 Documentation/virt/kvm/api.txt                | 116 +++++
 arch/arm/include/asm/kvm_host.h               |   2 -
 arch/arm64/include/asm/kvm_host.h             |   2 -
 arch/x86/include/asm/kvm_host.h               |   5 +
 arch/x86/include/uapi/asm/kvm.h               |   1 +
 arch/x86/kvm/Makefile                         |   3 +-
 arch/x86/kvm/mmu/mmu.c                        |   6 +
 arch/x86/kvm/vmx/vmx.c                        |   7 +
 arch/x86/kvm/x86.c                            |  12 +
 include/linux/kvm_dirty_ring.h                |  67 +++
 include/linux/kvm_host.h                      |  37 ++
 include/linux/kvm_types.h                     |   1 +
 include/uapi/linux/kvm.h                      |  36 ++
 tools/include/uapi/linux/kvm.h                |  47 ++
 tools/testing/selftests/kvm/Makefile          |   2 -
 .../selftests/kvm/clear_dirty_log_test.c      |   2 -
 tools/testing/selftests/kvm/dirty_log_test.c  | 452 ++++++++++++++++--
 .../testing/selftests/kvm/include/kvm_util.h  |   6 +
 tools/testing/selftests/kvm/lib/kvm_util.c    | 103 ++++
 .../selftests/kvm/lib/kvm_util_internal.h     |   5 +
 virt/kvm/arm/arm.c                            |  29 --
 virt/kvm/arm/perf.c                           |   6 +-
 virt/kvm/arm/vgic/vgic-mmio.c                 |  15 +-
 virt/kvm/dirty_ring.c                         | 156 ++++++
 virt/kvm/kvm_main.c                           | 315 +++++++++++-
 25 files changed, 1329 insertions(+), 104 deletions(-)
 create mode 100644 include/linux/kvm_dirty_ring.h
 delete mode 100644 tools/testing/selftests/kvm/clear_dirty_log_test.c
 create mode 100644 virt/kvm/dirty_ring.c

Comments

Paolo Bonzini Nov. 30, 2019, 8:29 a.m. UTC | #1
Hi Peter,

thanks for the RFC!  Just a couple comments before I look at the series
(for which I don't expect many surprises).

On 29/11/19 22:34, Peter Xu wrote:
> I marked this series as RFC because I'm at least uncertain on this
> change of vcpu_enter_guest():
> 
>         if (kvm_check_request(KVM_REQ_DIRTY_RING_FULL, vcpu)) {
>                 vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
>                 /*
>                         * If this is requested, it means that we've
>                         * marked the dirty bit in the dirty ring BUT
>                         * we've not written the date.  Do it now.
>                         */
>                 r = kvm_emulate_instruction(vcpu, 0);
>                 r = r >= 0 ? 0 : r;
>                 goto out;
>         }

This is not needed, it will just be a false negative (dirty page that
actually isn't dirty).  The dirty bit will be cleared when userspace
resets the ring buffer; then the instruction will be executed again and
mark the page dirty again.  Since ring full is not a common condition,
it's not a big deal.

> I did a kvm_emulate_instruction() when dirty ring reaches softlimit
> and want to exit to userspace, however I'm not really sure whether
> there could have any side effect.  I'd appreciate any comment of
> above, or anything else.
> 
> Tests
> ===========
> 
> I wanted to continue work on the QEMU part, but after I noticed that
> the interface might still prone to change, I posted this series first.
> However to make sure it's at least working, I've provided unit tests
> together with the series.  The unit tests should be able to test the
> series in at least three major paths:
> 
>   (1) ./dirty_log_test -M dirty-ring
> 
>       This tests async ring operations: this should be the major work
>       mode for the dirty ring interface, say, when the kernel is
>       queuing more data, the userspace is collecting too.  Ring can
>       hardly reaches full when working like this, because in most
>       cases the collection could be fast.
> 
>   (2) ./dirty_log_test -M dirty-ring -c 1024
> 
>       This set the ring size to be very small so that ring soft-full
>       always triggers (soft-full is a soft limit of the ring state,
>       when the dirty ring reaches the soft limit it'll do a userspace
>       exit and let the userspace to collect the data).
> 
>   (3) ./dirty_log_test -M dirty-ring-wait-queue
> 
>       This sololy test the extreme case where ring is full.  When the
>       ring is completely full, the thread (no matter vcpu or not) will
>       be put onto a per-vm waitqueue, and KVM_RESET_DIRTY_RINGS will
>       wake the threads up (assuming until which the ring will not be
>       full any more).

One question about this testcase: why does the task get into
uninterruptible wait?

Paolo

> 
> Thanks,
> 
> Cao, Lei (2):
>   KVM: Add kvm/vcpu argument to mark_dirty_page_in_slot
>   KVM: X86: Implement ring-based dirty memory tracking
> 
> Paolo Bonzini (1):
>   KVM: Move running VCPU from ARM to common code
> 
> Peter Xu (12):
>   KVM: Add build-time error check on kvm_run size
>   KVM: Implement ring-based dirty memory tracking
>   KVM: Make dirty ring exclusive to dirty bitmap log
>   KVM: Introduce dirty ring wait queue
>   KVM: selftests: Always clear dirty bitmap after iteration
>   KVM: selftests: Sync uapi/linux/kvm.h to tools/
>   KVM: selftests: Use a single binary for dirty/clear log test
>   KVM: selftests: Introduce after_vcpu_run hook for dirty log test
>   KVM: selftests: Add dirty ring buffer test
>   KVM: selftests: Let dirty_log_test async for dirty ring test
>   KVM: selftests: Add "-c" parameter to dirty log test
>   KVM: selftests: Test dirty ring waitqueue
> 
>  Documentation/virt/kvm/api.txt                | 116 +++++
>  arch/arm/include/asm/kvm_host.h               |   2 -
>  arch/arm64/include/asm/kvm_host.h             |   2 -
>  arch/x86/include/asm/kvm_host.h               |   5 +
>  arch/x86/include/uapi/asm/kvm.h               |   1 +
>  arch/x86/kvm/Makefile                         |   3 +-
>  arch/x86/kvm/mmu/mmu.c                        |   6 +
>  arch/x86/kvm/vmx/vmx.c                        |   7 +
>  arch/x86/kvm/x86.c                            |  12 +
>  include/linux/kvm_dirty_ring.h                |  67 +++
>  include/linux/kvm_host.h                      |  37 ++
>  include/linux/kvm_types.h                     |   1 +
>  include/uapi/linux/kvm.h                      |  36 ++
>  tools/include/uapi/linux/kvm.h                |  47 ++
>  tools/testing/selftests/kvm/Makefile          |   2 -
>  .../selftests/kvm/clear_dirty_log_test.c      |   2 -
>  tools/testing/selftests/kvm/dirty_log_test.c  | 452 ++++++++++++++++--
>  .../testing/selftests/kvm/include/kvm_util.h  |   6 +
>  tools/testing/selftests/kvm/lib/kvm_util.c    | 103 ++++
>  .../selftests/kvm/lib/kvm_util_internal.h     |   5 +
>  virt/kvm/arm/arm.c                            |  29 --
>  virt/kvm/arm/perf.c                           |   6 +-
>  virt/kvm/arm/vgic/vgic-mmio.c                 |  15 +-
>  virt/kvm/dirty_ring.c                         | 156 ++++++
>  virt/kvm/kvm_main.c                           | 315 +++++++++++-
>  25 files changed, 1329 insertions(+), 104 deletions(-)
>  create mode 100644 include/linux/kvm_dirty_ring.h
>  delete mode 100644 tools/testing/selftests/kvm/clear_dirty_log_test.c
>  create mode 100644 virt/kvm/dirty_ring.c
>
Peter Xu Dec. 2, 2019, 2:13 a.m. UTC | #2
On Sat, Nov 30, 2019 at 09:29:42AM +0100, Paolo Bonzini wrote:
> Hi Peter,
> 
> thanks for the RFC!  Just a couple comments before I look at the series
> (for which I don't expect many surprises).
> 
> On 29/11/19 22:34, Peter Xu wrote:
> > I marked this series as RFC because I'm at least uncertain on this
> > change of vcpu_enter_guest():
> > 
> >         if (kvm_check_request(KVM_REQ_DIRTY_RING_FULL, vcpu)) {
> >                 vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
> >                 /*
> >                         * If this is requested, it means that we've
> >                         * marked the dirty bit in the dirty ring BUT
> >                         * we've not written the date.  Do it now.
> >                         */
> >                 r = kvm_emulate_instruction(vcpu, 0);
> >                 r = r >= 0 ? 0 : r;
> >                 goto out;
> >         }
> 
> This is not needed, it will just be a false negative (dirty page that
> actually isn't dirty).  The dirty bit will be cleared when userspace
> resets the ring buffer; then the instruction will be executed again and
> mark the page dirty again.  Since ring full is not a common condition,
> it's not a big deal.

Actually I added this only because it failed one of the unit tests
when verifying the dirty bits...  But now, after a second thought, I
probably agree with you that we can change the userspace side too to
fix this.

I think the steps of the failing test case could be simplified into
something like this (assuming the QEMU migration context, which might
be easier to understand):

  1. page P has data P1
  2. vcpu writes to page P, with data P2
  3. vmexit (P is still with data P1)
  4. mark P as dirty, ring full, user exit
  5. collect dirty bit P, migrate P with data P1
  6. vcpu runs again for some reason, P is written with P2, user exit
     again (because the ring is already reaching the soft limit)
  7. do KVM_RESET_DIRTY_RINGS
  8. never write to P again

Then P will always be P1 on the destination, while it is P2 on the
source.

I think maybe that's why we need to be very sure that when the
userspace exit happens (soft limit reached), we kick all the vcpus
out, and more importantly we must _not_ let them run again before
KVM_RESET_DIRTY_RINGS, otherwise we might face data corruption.  I'm
not sure whether we should mention this in the documentation so that
userspace can be aware of the issue.

On the other hand, I tried removing the emulate_instruction() above
and fixing the test case, but I found that the last address before the
user exit is not actually written again after the next vmenter right
after KVM_RESET_DIRTY_RINGS, so the dirty bit was truly lost...  I'm
pasting some traces below (I added some tracepoints too; I think I'll
just keep them for v2):

  ...
  dirty_log_test-29003 [001] 184503.384328: kvm_entry:            vcpu 1
  dirty_log_test-29003 [001] 184503.384329: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
  dirty_log_test-29003 [001] 184503.384329: kvm_page_fault:       address 7fc036d000 error_code 582
  dirty_log_test-29003 [001] 184503.384331: kvm_entry:            vcpu 1
  dirty_log_test-29003 [001] 184503.384332: kvm_exit: reason EPT_VIOLATION rip 0x40359f info 582 0
  dirty_log_test-29003 [001] 184503.384332: kvm_page_fault:       address 7fc036d000 error_code 582
  dirty_log_test-29003 [001] 184503.384332: kvm_dirty_ring_push:  ring 1: dirty 0x37f reset 0x1c0 slot 1 offset 0x37e ret 0 (used 447)
  dirty_log_test-29003 [001] 184503.384333: kvm_entry:            vcpu 1
  dirty_log_test-29003 [001] 184503.384334: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
  dirty_log_test-29003 [001] 184503.384334: kvm_page_fault:       address 7fc036e000 error_code 582
  dirty_log_test-29003 [001] 184503.384336: kvm_entry:            vcpu 1
  dirty_log_test-29003 [001] 184503.384336: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
  dirty_log_test-29003 [001] 184503.384336: kvm_page_fault:       address 7fc036e000 error_code 582
  dirty_log_test-29003 [001] 184503.384337: kvm_dirty_ring_push:  ring 1: dirty 0x380 reset 0x1c0 slot 1 offset 0x37f ret 1 (used 448)
  dirty_log_test-29003 [001] 184503.384337: kvm_dirty_ring_exit:  vcpu 1
  dirty_log_test-29003 [001] 184503.384338: kvm_fpu:              unload
  dirty_log_test-29003 [001] 184503.384340: kvm_userspace_exit:   reason 0x1d (29)
  dirty_log_test-29000 [006] 184503.505103: kvm_dirty_ring_reset: ring 1: dirty 0x380 reset 0x380 (used 0)
  dirty_log_test-29003 [001] 184503.505184: kvm_fpu:              load
  dirty_log_test-29003 [001] 184503.505187: kvm_entry:            vcpu 1
  dirty_log_test-29003 [001] 184503.505193: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
  dirty_log_test-29003 [001] 184503.505194: kvm_page_fault:       address 7fc036f000 error_code 582              <-------- [1]
  dirty_log_test-29003 [001] 184503.505206: kvm_entry:            vcpu 1
  dirty_log_test-29003 [001] 184503.505207: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
  dirty_log_test-29003 [001] 184503.505207: kvm_page_fault:       address 7fc036f000 error_code 582
  dirty_log_test-29003 [001] 184503.505226: kvm_dirty_ring_push:  ring 1: dirty 0x381 reset 0x380 slot 1 offset 0x380 ret 0 (used 1)
  dirty_log_test-29003 [001] 184503.505226: kvm_entry:            vcpu 1
  dirty_log_test-29003 [001] 184503.505227: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
  dirty_log_test-29003 [001] 184503.505228: kvm_page_fault:       address 7fc0370000 error_code 582
  dirty_log_test-29003 [001] 184503.505231: kvm_entry:            vcpu 1
  ...

The test was trying to continuously write to pages, in the log above
starting from 7fc036d000.  Reason 0x1d (29) is the new dirty-ring-full
exit reason.

So far I'm still unsure of two things:

  1. Why we faulted twice for each page rather than once.  Take the
     page at 7fc036e000 above as an example: the first fault didn't
     trigger the mark-dirty path, and only on the 2nd EPT violation
     did we trigger kvm_dirty_ring_push.

  2. Why we didn't see the last page written again after
     kvm_userspace_exit (the last page was 7fc036e000, and the test
     failed because 7fc036e000 detected a change while its dirty bit
     was unset).  In this case the first write after
     KVM_RESET_DIRTY_RINGS is the line marked [1]; I thought it should
     be a re-write of page 7fc036e000, because when the user exit
     happens the write logically should not have happened yet and the
     rip should be kept.  However at [1] it's already writing to a new
     page.

I'll continue to dig tomorrow, or quick answers will be greatly
welcomed too. :)

> 
> > I did a kvm_emulate_instruction() when dirty ring reaches softlimit
> > and want to exit to userspace, however I'm not really sure whether
> > there could have any side effect.  I'd appreciate any comment of
> > above, or anything else.
> > 
> > Tests
> > ===========
> > 
> > I wanted to continue work on the QEMU part, but after I noticed that
> > the interface might still prone to change, I posted this series first.
> > However to make sure it's at least working, I've provided unit tests
> > together with the series.  The unit tests should be able to test the
> > series in at least three major paths:
> > 
> >   (1) ./dirty_log_test -M dirty-ring
> > 
> >       This tests async ring operations: this should be the major work
> >       mode for the dirty ring interface, say, when the kernel is
> >       queuing more data, the userspace is collecting too.  Ring can
> >       hardly reaches full when working like this, because in most
> >       cases the collection could be fast.
> > 
> >   (2) ./dirty_log_test -M dirty-ring -c 1024
> > 
> >       This set the ring size to be very small so that ring soft-full
> >       always triggers (soft-full is a soft limit of the ring state,
> >       when the dirty ring reaches the soft limit it'll do a userspace
> >       exit and let the userspace to collect the data).
> > 
> >   (3) ./dirty_log_test -M dirty-ring-wait-queue
> > 
> >       This sololy test the extreme case where ring is full.  When the
> >       ring is completely full, the thread (no matter vcpu or not) will
> >       be put onto a per-vm waitqueue, and KVM_RESET_DIRTY_RINGS will
> >       wake the threads up (assuming until which the ring will not be
> >       full any more).
> 
> One question about this testcase: why does the task get into
> uninterruptible wait?

Because I'm using wait_event_killable() to wait when the ring is
completely full.  I thought we should be strict there because it's
rare after all (even rarer than reaching the soft limit), and with
that we will never have a chance to lose a dirty bit accidentally.  Or
do you think we should still respond to non-fatal signals for some
reason even during that wait period?

Thanks,
Sean Christopherson Dec. 2, 2019, 8:21 p.m. UTC | #3
On Sat, Nov 30, 2019 at 09:29:42AM +0100, Paolo Bonzini wrote:
> Hi Peter,
> 
> thanks for the RFC!  Just a couple comments before I look at the series
> (for which I don't expect many surprises).
> 
> On 29/11/19 22:34, Peter Xu wrote:
> > I marked this series as RFC because I'm at least uncertain on this
> > change of vcpu_enter_guest():
> > 
> >         if (kvm_check_request(KVM_REQ_DIRTY_RING_FULL, vcpu)) {
> >                 vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
> >                 /*
> >                         * If this is requested, it means that we've
> >                         * marked the dirty bit in the dirty ring BUT
> >                         * we've not written the date.  Do it now.
> >                         */
> >                 r = kvm_emulate_instruction(vcpu, 0);
> >                 r = r >= 0 ? 0 : r;
> >                 goto out;
> >         }
> 
> This is not needed, it will just be a false negative (dirty page that
> actually isn't dirty).  The dirty bit will be cleared when userspace
> resets the ring buffer; then the instruction will be executed again and
> mark the page dirty again.  Since ring full is not a common condition,
> it's not a big deal.

Side topic, KVM_REQ_DIRTY_RING_FULL is misnamed, it's set when a ring goes
above its soft limit, not when the ring is actually full.  It took quite a
bit of digging to figure out whether or not PML was broken...
Peter Xu Dec. 2, 2019, 8:43 p.m. UTC | #4
On Mon, Dec 02, 2019 at 12:21:19PM -0800, Sean Christopherson wrote:
> On Sat, Nov 30, 2019 at 09:29:42AM +0100, Paolo Bonzini wrote:
> > Hi Peter,
> > 
> > thanks for the RFC!  Just a couple comments before I look at the series
> > (for which I don't expect many surprises).
> > 
> > On 29/11/19 22:34, Peter Xu wrote:
> > > I marked this series as RFC because I'm at least uncertain on this
> > > change of vcpu_enter_guest():
> > > 
> > >         if (kvm_check_request(KVM_REQ_DIRTY_RING_FULL, vcpu)) {
> > >                 vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
> > >                 /*
> > >                         * If this is requested, it means that we've
> > >                         * marked the dirty bit in the dirty ring BUT
> > >                         * we've not written the date.  Do it now.
> > >                         */
> > >                 r = kvm_emulate_instruction(vcpu, 0);
> > >                 r = r >= 0 ? 0 : r;
> > >                 goto out;
> > >         }
> > 
> > This is not needed, it will just be a false negative (dirty page that
> > actually isn't dirty).  The dirty bit will be cleared when userspace
> > resets the ring buffer; then the instruction will be executed again and
> > mark the page dirty again.  Since ring full is not a common condition,
> > it's not a big deal.
> 
> Side topic, KVM_REQ_DIRTY_RING_FULL is misnamed, it's set when a ring goes
> above its soft limit, not when the ring is actually full.  It took quite a
> bit of digging to figure out whether or not PML was broken...

Yeah, it's indeed a bit confusing.

Do you like KVM_REQ_DIRTY_RING_COLLECT?  Paired with
KVM_EXIT_DIRTY_RING_COLLECT.  Or any other suggestions?
Paolo Bonzini Dec. 3, 2019, 1:59 p.m. UTC | #5
On 02/12/19 03:13, Peter Xu wrote:
>> This is not needed, it will just be a false negative (dirty page that
>> actually isn't dirty).  The dirty bit will be cleared when userspace
>> resets the ring buffer; then the instruction will be executed again and
>> mark the page dirty again.  Since ring full is not a common condition,
>> it's not a big deal.
> 
> Actually I added this only because it failed one of the unit tests
> when verifying the dirty bits..  But now after a second thought, I
> probably agree with you that we can change the userspace too to fix
> this.

I think there is already a similar case in dirty_log_test when a page is
dirty but we called KVM_GET_DIRTY_LOG just before it got written to.

> I think the steps of the failed test case could be simplified into
> something like this (assuming the QEMU migration context, might be
> easier to understand):
> 
>   1. page P has data P1
>   2. vcpu writes to page P, with date P2
>   3. vmexit (P is still with data P1)
>   4. mark P as dirty, ring full, user exit
>   5. collect dirty bit P, migrate P with data P1
>   6. vcpu run due to some reason, P was written with P2, user exit again
>      (because ring is already reaching soft limit)
>   7. do KVM_RESET_DIRTY_RINGS

Migration should only be done after KVM_RESET_DIRTY_RINGS (think of
KVM_RESET_DIRTY_RINGS as the equivalent of KVM_CLEAR_DIRTY_LOG).

>   dirty_log_test-29003 [001] 184503.384328: kvm_entry:            vcpu 1
>   dirty_log_test-29003 [001] 184503.384329: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
>   dirty_log_test-29003 [001] 184503.384329: kvm_page_fault:       address 7fc036d000 error_code 582
>   dirty_log_test-29003 [001] 184503.384331: kvm_entry:            vcpu 1
>   dirty_log_test-29003 [001] 184503.384332: kvm_exit: reason EPT_VIOLATION rip 0x40359f info 582 0
>   dirty_log_test-29003 [001] 184503.384332: kvm_page_fault:       address 7fc036d000 error_code 582
>   dirty_log_test-29003 [001] 184503.384332: kvm_dirty_ring_push:  ring 1: dirty 0x37f reset 0x1c0 slot 1 offset 0x37e ret 0 (used 447)
>   dirty_log_test-29003 [001] 184503.384333: kvm_entry:            vcpu 1
>   dirty_log_test-29003 [001] 184503.384334: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
>   dirty_log_test-29003 [001] 184503.384334: kvm_page_fault:       address 7fc036e000 error_code 582
>   dirty_log_test-29003 [001] 184503.384336: kvm_entry:            vcpu 1
>   dirty_log_test-29003 [001] 184503.384336: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
>   dirty_log_test-29003 [001] 184503.384336: kvm_page_fault:       address 7fc036e000 error_code 582
>   dirty_log_test-29003 [001] 184503.384337: kvm_dirty_ring_push:  ring 1: dirty 0x380 reset 0x1c0 slot 1 offset 0x37f ret 1 (used 448)
>   dirty_log_test-29003 [001] 184503.384337: kvm_dirty_ring_exit:  vcpu 1
>   dirty_log_test-29003 [001] 184503.384338: kvm_fpu:              unload
>   dirty_log_test-29003 [001] 184503.384340: kvm_userspace_exit:   reason 0x1d (29)
>   dirty_log_test-29000 [006] 184503.505103: kvm_dirty_ring_reset: ring 1: dirty 0x380 reset 0x380 (used 0)
>   dirty_log_test-29003 [001] 184503.505184: kvm_fpu:              load
>   dirty_log_test-29003 [001] 184503.505187: kvm_entry:            vcpu 1
>   dirty_log_test-29003 [001] 184503.505193: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
>   dirty_log_test-29003 [001] 184503.505194: kvm_page_fault:       address 7fc036f000 error_code 582              <-------- [1]
>   dirty_log_test-29003 [001] 184503.505206: kvm_entry:            vcpu 1
>   dirty_log_test-29003 [001] 184503.505207: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
>   dirty_log_test-29003 [001] 184503.505207: kvm_page_fault:       address 7fc036f000 error_code 582
>   dirty_log_test-29003 [001] 184503.505226: kvm_dirty_ring_push:  ring 1: dirty 0x381 reset 0x380 slot 1 offset 0x380 ret 0 (used 1)
>   dirty_log_test-29003 [001] 184503.505226: kvm_entry:            vcpu 1
>   dirty_log_test-29003 [001] 184503.505227: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
>   dirty_log_test-29003 [001] 184503.505228: kvm_page_fault:       address 7fc0370000 error_code 582
>   dirty_log_test-29003 [001] 184503.505231: kvm_entry:            vcpu 1
>   ...
> 
> The test was trying to continuously write to pages, from above log
> starting from 7fc036d000. The reason 0x1d (29) is the new dirty ring
> full exit reason.
> 
> So far I'm still unsure of two things:
> 
>   1. Why for each page we faulted twice rather than once.  Take the
>      example of page at 7fc036e000 above, the first fault didn't
>      trigger the marking dirty path, while only until the 2nd ept
>      violation did we trigger kvm_dirty_ring_push.

Not sure about that.  Try enabling kvmmmu tracepoints too, it will tell
you more of the path that was taken while processing the EPT violation.

If your machine has PML, what you're seeing is likely not-present
violation, not dirty-protect violation.  Try disabling pml and see if
the trace changes.

>   2. Why we didn't get the last page written again after
>      kvm_userspace_exit (last page was 7fc036e000, and the test failed
>      because 7fc036e000 detected change however dirty bit unset).  In
>      this case the first write after KVM_RESET_DIRTY_RINGS is the line
>      pointed by [1], I thought it should be a rewritten of page
>      7fc036e000 because when the user exit happens logically the write
>      should not happen yet and eip should keep.  However at [1] it's
>      already writting to a new page.

IIUC you should get, with PML enabled:

- guest writes to page
- PML marks dirty bit, causes vmexit
- host copies PML log to ring, causes userspace exit
- userspace calls KVM_RESET_DIRTY_RINGS
  - host marks page as clean
- userspace calls KVM_RUN
  - guest writes again to page

but the page won't be in the ring until after another vmexit happens.
Therefore, it's okay to reap the pages in the ring asynchronously, but
there must be a synchronization point in the testcase sooner or later,
where all CPUs are kicked out of KVM_RUN.  This synchronization point
corresponds to the migration downtime.
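
In pseudo-code, that synchronization point could look roughly like the
following.  This is a sketch only; the helper names are hypothetical
and stand in for whatever mechanism the VMM or testcase uses:

    /* Sketch only: helper names are made up for illustration. */
    static void dirty_ring_sync_point(int vm_fd)
    {
            kick_all_vcpus_out_of_kvm_run();     /* e.g. via a signal    */
            collect_all_dirty_rings();           /* drain per-vcpu rings */
            ioctl(vm_fd, KVM_RESET_DIRTY_RINGS, 0);
            /*
             * Only after this point is the harvested dirty state
             * consistent; this corresponds to the migration downtime.
             */
    }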

Thanks,

Paolo
Jason Wang Dec. 4, 2019, 10:39 a.m. UTC | #6
On 2019/11/30 5:34 AM, Peter Xu wrote:
> Branch is here:https://github.com/xzpeter/linux/tree/kvm-dirty-ring
>
> Overview
> ============
>
> This is a continued work from Lei Cao<lei.cao@stratus.com>  and Paolo
> on the KVM dirty ring interface.  To make it simple, I'll still start
> with version 1 as RFC.
>
> The new dirty ring interface is another way to collect dirty pages for
> the virtual machine, but it is different from the existing dirty
> logging interface in a few ways, majorly:
>
>    - Data format: The dirty data was in a ring format rather than a
>      bitmap format, so the size of data to sync for dirty logging does
>      not depend on the size of guest memory any more, but speed of
>      dirtying.  Also, the dirty ring is per-vcpu (currently plus
>      another per-vm ring, so total ring number is N+1), while the dirty
>      bitmap is per-vm.
>
>    - Data copy: The sync of dirty pages does not need data copy any more,
>      but instead the ring is shared between the userspace and kernel by
>      page sharings (mmap() on either the vm fd or vcpu fd)
>
>    - Interface: Instead of using the old KVM_GET_DIRTY_LOG,
>      KVM_CLEAR_DIRTY_LOG interfaces, the new ring uses a new interface
>      called KVM_RESET_DIRTY_RINGS when we want to reset the collected
>      dirty pages to protected mode again (works like
>      KVM_CLEAR_DIRTY_LOG, but ring based)
>
> And more.


Looks really interesting.  I wonder if we can make this a library so 
that we can reuse it for vhost.

Thanks
Peter Xu Dec. 4, 2019, 7:33 p.m. UTC | #7
On Wed, Dec 04, 2019 at 06:39:48PM +0800, Jason Wang wrote:
> 
> On 2019/11/30 上午5:34, Peter Xu wrote:
> > Branch is here:https://github.com/xzpeter/linux/tree/kvm-dirty-ring
> > 
> > Overview
> > ============
> > 
> > This is a continued work from Lei Cao<lei.cao@stratus.com>  and Paolo
> > on the KVM dirty ring interface.  To make it simple, I'll still start
> > with version 1 as RFC.
> > 
> > The new dirty ring interface is another way to collect dirty pages for
> > the virtual machine, but it is different from the existing dirty
> > logging interface in a few ways, majorly:
> > 
> >    - Data format: The dirty data was in a ring format rather than a
> >      bitmap format, so the size of data to sync for dirty logging does
> >      not depend on the size of guest memory any more, but speed of
> >      dirtying.  Also, the dirty ring is per-vcpu (currently plus
> >      another per-vm ring, so total ring number is N+1), while the dirty
> >      bitmap is per-vm.
> > 
> >    - Data copy: The sync of dirty pages does not need data copy any more,
> >      but instead the ring is shared between the userspace and kernel by
> >      page sharings (mmap() on either the vm fd or vcpu fd)
> > 
> >    - Interface: Instead of using the old KVM_GET_DIRTY_LOG,
> >      KVM_CLEAR_DIRTY_LOG interfaces, the new ring uses a new interface
> >      called KVM_RESET_DIRTY_RINGS when we want to reset the collected
> >      dirty pages to protected mode again (works like
> >      KVM_CLEAR_DIRTY_LOG, but ring based)
> > 
> > And more.
> 
> 
> Looks really interesting, I wonder if we can make this as a library then we
> can reuse it for vhost.

So IIUC this ring will mainly be for (1) data exchange between kernel
and user, and (2) shared memory.  I think from that POV, yeah, it
should work even for vhost.

It shouldn't be hard to refactor the interfaces to avoid kvm elements,
however I'm not sure how best to do that.  Maybe like irqbypass, and
put it into virt/lib/ as a standalone module?  Would it be worth it?

Paolo, what's your take?
Jason Wang Dec. 5, 2019, 6:49 a.m. UTC | #8
On 2019/12/5 3:33 AM, Peter Xu wrote:
> On Wed, Dec 04, 2019 at 06:39:48PM +0800, Jason Wang wrote:
>> On 2019/11/30 上午5:34, Peter Xu wrote:
>>> Branch is here:https://github.com/xzpeter/linux/tree/kvm-dirty-ring
>>>
>>> Overview
>>> ============
>>>
>>> This is a continued work from Lei Cao<lei.cao@stratus.com>  and Paolo
>>> on the KVM dirty ring interface.  To make it simple, I'll still start
>>> with version 1 as RFC.
>>>
>>> The new dirty ring interface is another way to collect dirty pages for
>>> the virtual machine, but it is different from the existing dirty
>>> logging interface in a few ways, majorly:
>>>
>>>     - Data format: The dirty data was in a ring format rather than a
>>>       bitmap format, so the size of data to sync for dirty logging does
>>>       not depend on the size of guest memory any more, but speed of
>>>       dirtying.  Also, the dirty ring is per-vcpu (currently plus
>>>       another per-vm ring, so total ring number is N+1), while the dirty
>>>       bitmap is per-vm.
>>>
>>>     - Data copy: The sync of dirty pages does not need data copy any more,
>>>       but instead the ring is shared between the userspace and kernel by
>>>       page sharings (mmap() on either the vm fd or vcpu fd)
>>>
>>>     - Interface: Instead of using the old KVM_GET_DIRTY_LOG,
>>>       KVM_CLEAR_DIRTY_LOG interfaces, the new ring uses a new interface
>>>       called KVM_RESET_DIRTY_RINGS when we want to reset the collected
>>>       dirty pages to protected mode again (works like
>>>       KVM_CLEAR_DIRTY_LOG, but ring based)
>>>
>>> And more.
>>
>> Looks really interesting, I wonder if we can make this as a library then we
>> can reuse it for vhost.
> So iiuc this ring will majorly for (1) data exchange between kernel
> and user, and (2) shared memory.  I think from that pov yeh it should
> work even for vhost.
>
> It shouldn't be hard to refactor the interfaces to avoid kvm elements,
> however I'm not sure how to do that best.  Maybe like irqbypass and
> put it into virt/lib/ as a standlone module?  Would it worth it?


Maybe, and it looks to me like a dirty page reporting API for VFIO is 
being proposed at the same time.  It would be helpful to unify them (or 
at least leave a chance for other users).

Thanks


>
> Paolo, what's your take?
>
Peter Xu Dec. 5, 2019, 7:30 p.m. UTC | #9
On Tue, Dec 03, 2019 at 02:59:14PM +0100, Paolo Bonzini wrote:
> On 02/12/19 03:13, Peter Xu wrote:
> >> This is not needed, it will just be a false negative (dirty page that
> >> actually isn't dirty).  The dirty bit will be cleared when userspace
> >> resets the ring buffer; then the instruction will be executed again and
> >> mark the page dirty again.  Since ring full is not a common condition,
> >> it's not a big deal.
> > 
> > Actually I added this only because it failed one of the unit tests
> > when verifying the dirty bits..  But now after a second thought, I
> > probably agree with you that we can change the userspace too to fix
> > this.
> 
> I think there is already a similar case in dirty_log_test when a page is
> dirty but we called KVM_GET_DIRTY_LOG just before it got written to.

If you mean host_bmap_track (in dirty_log_test.c), that should be a
reversed version of this race (there the data is written, but we
didn't see the dirty bit set).  But yes, I think I can probably use
the same bitmap to fix the test case, because in both cases what we
want to do is to make sure "the dirty bit of this page should be set
in the next round".

> 
> > I think the steps of the failed test case could be simplified into
> > something like this (assuming the QEMU migration context, might be
> > easier to understand):
> > 
> >   1. page P has data P1
> >   2. vcpu writes to page P, with date P2
> >   3. vmexit (P is still with data P1)
> >   4. mark P as dirty, ring full, user exit
> >   5. collect dirty bit P, migrate P with data P1
> >   6. vcpu run due to some reason, P was written with P2, user exit again
> >      (because ring is already reaching soft limit)
> >   7. do KVM_RESET_DIRTY_RINGS
> 
> Migration should only be done after KVM_RESET_DIRTY_RINGS (think of
> KVM_RESET_DIRTY_RINGS as the equivalent of KVM_CLEAR_DIRTY_LOG).

Totally agree for migration.  It's probably just that the test case
needs fixing.

> 
> >   dirty_log_test-29003 [001] 184503.384328: kvm_entry:            vcpu 1
> >   dirty_log_test-29003 [001] 184503.384329: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
> >   dirty_log_test-29003 [001] 184503.384329: kvm_page_fault:       address 7fc036d000 error_code 582
> >   dirty_log_test-29003 [001] 184503.384331: kvm_entry:            vcpu 1
> >   dirty_log_test-29003 [001] 184503.384332: kvm_exit: reason EPT_VIOLATION rip 0x40359f info 582 0
> >   dirty_log_test-29003 [001] 184503.384332: kvm_page_fault:       address 7fc036d000 error_code 582
> >   dirty_log_test-29003 [001] 184503.384332: kvm_dirty_ring_push:  ring 1: dirty 0x37f reset 0x1c0 slot 1 offset 0x37e ret 0 (used 447)
> >   dirty_log_test-29003 [001] 184503.384333: kvm_entry:            vcpu 1
> >   dirty_log_test-29003 [001] 184503.384334: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
> >   dirty_log_test-29003 [001] 184503.384334: kvm_page_fault:       address 7fc036e000 error_code 582
> >   dirty_log_test-29003 [001] 184503.384336: kvm_entry:            vcpu 1
> >   dirty_log_test-29003 [001] 184503.384336: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
> >   dirty_log_test-29003 [001] 184503.384336: kvm_page_fault:       address 7fc036e000 error_code 582
> >   dirty_log_test-29003 [001] 184503.384337: kvm_dirty_ring_push:  ring 1: dirty 0x380 reset 0x1c0 slot 1 offset 0x37f ret 1 (used 448)
> >   dirty_log_test-29003 [001] 184503.384337: kvm_dirty_ring_exit:  vcpu 1
> >   dirty_log_test-29003 [001] 184503.384338: kvm_fpu:              unload
> >   dirty_log_test-29003 [001] 184503.384340: kvm_userspace_exit:   reason 0x1d (29)
> >   dirty_log_test-29000 [006] 184503.505103: kvm_dirty_ring_reset: ring 1: dirty 0x380 reset 0x380 (used 0)
> >   dirty_log_test-29003 [001] 184503.505184: kvm_fpu:              load
> >   dirty_log_test-29003 [001] 184503.505187: kvm_entry:            vcpu 1
> >   dirty_log_test-29003 [001] 184503.505193: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
> >   dirty_log_test-29003 [001] 184503.505194: kvm_page_fault:       address 7fc036f000 error_code 582              <-------- [1]
> >   dirty_log_test-29003 [001] 184503.505206: kvm_entry:            vcpu 1
> >   dirty_log_test-29003 [001] 184503.505207: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
> >   dirty_log_test-29003 [001] 184503.505207: kvm_page_fault:       address 7fc036f000 error_code 582
> >   dirty_log_test-29003 [001] 184503.505226: kvm_dirty_ring_push:  ring 1: dirty 0x381 reset 0x380 slot 1 offset 0x380 ret 0 (used 1)
> >   dirty_log_test-29003 [001] 184503.505226: kvm_entry:            vcpu 1
> >   dirty_log_test-29003 [001] 184503.505227: kvm_exit:             reason EPT_VIOLATION rip 0x40359f info 582 0
> >   dirty_log_test-29003 [001] 184503.505228: kvm_page_fault:       address 7fc0370000 error_code 582
> >   dirty_log_test-29003 [001] 184503.505231: kvm_entry:            vcpu 1
> >   ...
> > 
> > The test was trying to continuously write to pages, from above log
> > starting from 7fc036d000. The reason 0x1d (29) is the new dirty ring
> > full exit reason.
> > 
> > So far I'm still unsure of two things:
> > 
> >   1. Why for each page we faulted twice rather than once.  Take the
> >      example of page at 7fc036e000 above, the first fault didn't
> >      trigger the marking dirty path, while only until the 2nd ept
> >      violation did we trigger kvm_dirty_ring_push.
> 
> Not sure about that.  Try enabling kvmmmu tracepoints too, it will tell
> you more of the path that was taken while processing the EPT violation.

These new tracepoints are extremely useful (which I didn't notice
before).

So here's the final culprit...

void kvm_reset_dirty_gfn(struct kvm *kvm, u32 slot, u64 offset, u64 mask)
{
        ...
	spin_lock(&kvm->mmu_lock);
	/* FIXME: we should use a single AND operation, but there is no
	 * applicable atomic API.
	 */
	while (mask) {
		clear_bit_le(offset + __ffs(mask), memslot->dirty_bitmap);
		mask &= mask - 1;
	}

	kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot, offset, mask);
	spin_unlock(&kvm->mmu_lock);
}

The mask is cleared before reaching
kvm_arch_mmu_enable_log_dirty_pt_masked()..
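
(A minimal sketch of one possible fix, shown only for illustration,
would be to consume a copy of the mask so that the original value
still reaches the arch hook:

	u64 to_clear = mask;

	while (to_clear) {
		clear_bit_le(offset + __ffs(to_clear), memslot->dirty_bitmap);
		to_clear &= to_clear - 1;
	}

	kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot, offset, mask);
)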

The funny thing is that I did have a few more patches to even skip
allocating the dirty_bitmap when the dirty ring is enabled (hence in
that tree I removed this while loop too, so it had no such problem).
However, I dropped those patches when I posted the RFC because I don't
think they are mature, and the selftest didn't complain about that
either...  Though, I do plan to redo that in v2 if you don't disagree.
The major question would be whether the dirty_bitmap could still be of
any use if the dirty ring is enabled.

> 
> If your machine has PML, what you're seeing is likely not-present
> violation, not dirty-protect violation.  Try disabling pml and see if
> the trace changes.
> 
> >   2. Why we didn't get the last page written again after
> >      kvm_userspace_exit (last page was 7fc036e000, and the test failed
> >      because 7fc036e000 detected change however dirty bit unset).  In
> >      this case the first write after KVM_RESET_DIRTY_RINGS is the line
> >      pointed by [1], I thought it should be a rewritten of page
> >      7fc036e000 because when the user exit happens logically the write
> >      should not happen yet and eip should keep.  However at [1] it's
> >      already writting to a new page.
> 
> IIUC you should get, with PML enabled:
> 
> - guest writes to page
> - PML marks dirty bit, causes vmexit
> - host copies PML log to ring, causes userspace exit
> - userspace calls KVM_RESET_DIRTY_RINGS
>   - host marks page as clean
> - userspace calls KVM_RUN
>   - guest writes again to page
> 
> but the page won't be in the ring until after another vmexit happens.
> Therefore, it's okay to reap the pages in the ring asynchronously, but
> there must be a synchronization point in the testcase sooner or later,
> where all CPUs are kicked out of KVM_RUN.  This synchronization point
> corresponds to the migration downtime.

Yep, currently in the test case I used the same signal trick to kick
the vcpu out to make sure PML buffers are flushed during the vmexit,
before the main thread starts to collect dirty bits.

Thanks,
Paolo Bonzini Dec. 5, 2019, 7:59 p.m. UTC | #10
On 05/12/19 20:30, Peter Xu wrote:
>> Try enabling kvmmmu tracepoints too, it will tell
>> you more of the path that was taken while processing the EPT violation.
>
> These new tracepoints are extremely useful (which I didn't notice
> before).

Yes, they are!

> So here's the final culprit...
> 
> void kvm_reset_dirty_gfn(struct kvm *kvm, u32 slot, u64 offset, u64 mask)
> {
>         ...
> 	spin_lock(&kvm->mmu_lock);
> 	/* FIXME: we should use a single AND operation, but there is no
> 	 * applicable atomic API.
> 	 */
> 	while (mask) {
> 		clear_bit_le(offset + __ffs(mask), memslot->dirty_bitmap);
> 		mask &= mask - 1;
> 	}
> 
> 	kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot, offset, mask);
> 	spin_unlock(&kvm->mmu_lock);
> }
> 
> The mask is cleared before reaching
> kvm_arch_mmu_enable_log_dirty_pt_masked()..

I'm not sure why that results in two vmexits?  (clearing before
kvm_arch_mmu_enable_log_dirty_pt_masked is also what
KVM_{GET,CLEAR}_DIRTY_LOG does).

> The funny thing is that I did have a few more patches to even skip
> allocate the dirty_bitmap when dirty ring is enabled (hence in that
> tree I removed this while loop too, so that has no such problem).
> However I dropped those patches when I posted the RFC because I don't
> think it's mature, and the selftest didn't complain about that
> either..  Though, I do plan to redo that in v2 if you don't disagree.
> The major question would be whether the dirty_bitmap could still be
> for any use if dirty ring is enabled.

Userspace may want a dirty bitmap in addition to a list (for example:
list for migration, bitmap for framebuffer update), but it can also do a
pass over the dirty rings in order to update an internal bitmap.

So I think it makes sense to make it either one or the other.

Paolo
Peter Xu Dec. 5, 2019, 8:52 p.m. UTC | #11
On Thu, Dec 05, 2019 at 08:59:33PM +0100, Paolo Bonzini wrote:
> On 05/12/19 20:30, Peter Xu wrote:
> >> Try enabling kvmmmu tracepoints too, it will tell
> >> you more of the path that was taken while processing the EPT violation.
> >
> > These new tracepoints are extremely useful (which I didn't notice
> > before).
> 
> Yes, they are!

(I forgot to say thanks for teaching me that! :)

> 
> > So here's the final culprit...
> > 
> > void kvm_reset_dirty_gfn(struct kvm *kvm, u32 slot, u64 offset, u64 mask)
> > {
> >         ...
> > 	spin_lock(&kvm->mmu_lock);
> > 	/* FIXME: we should use a single AND operation, but there is no
> > 	 * applicable atomic API.
> > 	 */
> > 	while (mask) {
> > 		clear_bit_le(offset + __ffs(mask), memslot->dirty_bitmap);
> > 		mask &= mask - 1;
> > 	}
> > 
> > 	kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot, offset, mask);
> > 	spin_unlock(&kvm->mmu_lock);
> > }
> > 
> > The mask is cleared before reaching
> > kvm_arch_mmu_enable_log_dirty_pt_masked()..
> 
> I'm not sure why that results in two vmexits?  (clearing before
> kvm_arch_mmu_enable_log_dirty_pt_masked is also what
> KVM_{GET,CLEAR}_DIRTY_LOG does).

Sorry, my fault for not being clear on this.

The kvm_arch_mmu_enable_log_dirty_pt_masked() issue only explains why
the same page is not written again after the ring-full userspace exit
(which is what triggered the missing dirty bit): because the write bit
is not removed during KVM_RESET_DIRTY_RINGS, the next vmenter will
write to the previous page directly without a vmexit.

The two vmexits are another story - I tracked it down to the fault
being retried because mmu_notifier_seq has changed, hence it goes
through this path:

	if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
		goto out_unlock;

It's because in try_async_pf() we do a writable page fault, which
probably triggers both the invalidate_range_end and change_pte
notifiers.  A reference trace with EPT enabled:

        kvm_mmu_notifier_change_pte+1
        __mmu_notifier_change_pte+82
        wp_page_copy+1907
        do_wp_page+478
        __handle_mm_fault+3395
        handle_mm_fault+196
        __get_user_pages+681
        get_user_pages_unlocked+172
        __gfn_to_pfn_memslot+290
        try_async_pf+141
        tdp_page_fault+326
        kvm_mmu_page_fault+115
        kvm_arch_vcpu_ioctl_run+2675
        kvm_vcpu_ioctl+536
        do_vfs_ioctl+1029
        ksys_ioctl+94
        __x64_sys_ioctl+22
        do_syscall_64+91

I'm not sure whether that's ideal, but it makes sense to me.

> 
> > The funny thing is that I did have a few more patches to even skip
> > allocate the dirty_bitmap when dirty ring is enabled (hence in that
> > tree I removed this while loop too, so that has no such problem).
> > However I dropped those patches when I posted the RFC because I don't
> > think it's mature, and the selftest didn't complain about that
> > either..  Though, I do plan to redo that in v2 if you don't disagree.
> > The major question would be whether the dirty_bitmap could still be
> > for any use if dirty ring is enabled.
> 
> Userspace may want a dirty bitmap in addition to a list (for example:
> list for migration, bitmap for framebuffer update), but it can also do a
> pass over the dirty rings in order to update an internal bitmap.
> 
> So I think it make sense to make it either one or the other.

OK, then I'll do that.

Thanks,
Christophe de Dinechin Dec. 11, 2019, 1:41 p.m. UTC | #12
Peter Xu writes:

> Branch is here: https://github.com/xzpeter/linux/tree/kvm-dirty-ring
>
> Overview
> ============
>
> This is a continued work from Lei Cao <lei.cao@stratus.com> and Paolo
> on the KVM dirty ring interface.  To make it simple, I'll still start
> with version 1 as RFC.
>
> The new dirty ring interface is another way to collect dirty pages for
> the virtual machine, but it is different from the existing dirty
> logging interface in a few ways, majorly:
>
>   - Data format: The dirty data was in a ring format rather than a
>     bitmap format, so the size of data to sync for dirty logging does
>     not depend on the size of guest memory any more, but speed of
>     dirtying.  Also, the dirty ring is per-vcpu (currently plus
>     another per-vm ring, so total ring number is N+1), while the dirty
>     bitmap is per-vm.

I like Sean's suggestion to fetch rings when dirtying. That could reduce
the number of dirty rings to examine.

Also, as is, this means that the same gfn may be present in multiple
rings, right?

>
>   - Data copy: The sync of dirty pages does not need data copy any more,
>     but instead the ring is shared between the userspace and kernel by
>     page sharings (mmap() on either the vm fd or vcpu fd)
>
>   - Interface: Instead of using the old KVM_GET_DIRTY_LOG,
>     KVM_CLEAR_DIRTY_LOG interfaces, the new ring uses a new interface
>     called KVM_RESET_DIRTY_RINGS when we want to reset the collected
>     dirty pages to protected mode again (works like
>     KVM_CLEAR_DIRTY_LOG, but ring based)
>
> And more.
>
> I would appreciate if the reviewers can start with patch "KVM:
> Implement ring-based dirty memory tracking", especially the document
> update part for the big picture.  Then I'll avoid copying into most of
> them into cover letter again.
>
> I marked this series as RFC because I'm at least uncertain on this
> change of vcpu_enter_guest():
>
>         if (kvm_check_request(KVM_REQ_DIRTY_RING_FULL, vcpu)) {
>                 vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
>                 /*
>                         * If this is requested, it means that we've
>                         * marked the dirty bit in the dirty ring BUT
>                         * we've not written the date.  Do it now.

not written the "data" ?

>                         */
>                 r = kvm_emulate_instruction(vcpu, 0);
>                 r = r >= 0 ? 0 : r;
>                 goto out;
>         }
>
> I did a kvm_emulate_instruction() when dirty ring reaches softlimit
> and want to exit to userspace, however I'm not really sure whether
> there could have any side effect.  I'd appreciate any comment of
> above, or anything else.
>
> Tests
> ===========
>
> I wanted to continue work on the QEMU part, but after I noticed that
> the interface might still prone to change, I posted this series first.
> However to make sure it's at least working, I've provided unit tests
> together with the series.  The unit tests should be able to test the
> series in at least three major paths:
>
>   (1) ./dirty_log_test -M dirty-ring
>
>       This tests async ring operations: this should be the major work
>       mode for the dirty ring interface, say, when the kernel is
>       queuing more data, the userspace is collecting too.  Ring can
>       hardly reaches full when working like this, because in most
>       cases the collection could be fast.
>
>   (2) ./dirty_log_test -M dirty-ring -c 1024
>
>       This set the ring size to be very small so that ring soft-full
>       always triggers (soft-full is a soft limit of the ring state,
>       when the dirty ring reaches the soft limit it'll do a userspace
>       exit and let the userspace to collect the data).
>
>   (3) ./dirty_log_test -M dirty-ring-wait-queue
>
>       This sololy test the extreme case where ring is full.  When the
>       ring is completely full, the thread (no matter vcpu or not) will
>       be put onto a per-vm waitqueue, and KVM_RESET_DIRTY_RINGS will
>       wake the threads up (assuming until which the ring will not be
>       full any more).

Am I correct in assuming that guest memory can be dirtied by DMA
operations?

Not being that familiar with the current implementation of dirty page
tracking, I wonder who marks the pages dirty in that case, and when?
If the VM ring is used for I/O threads, isn't it possible that a large
DMA could dirty a sufficiently large number of GFNs to overflow the
associated ring? Does this case need a separate way to queue the
dirtying I/O thread?

>
> Thanks,
>
> Cao, Lei (2):
>   KVM: Add kvm/vcpu argument to mark_dirty_page_in_slot
>   KVM: X86: Implement ring-based dirty memory tracking
>
> Paolo Bonzini (1):
>   KVM: Move running VCPU from ARM to common code
>
> Peter Xu (12):
>   KVM: Add build-time error check on kvm_run size
>   KVM: Implement ring-based dirty memory tracking
>   KVM: Make dirty ring exclusive to dirty bitmap log
>   KVM: Introduce dirty ring wait queue
>   KVM: selftests: Always clear dirty bitmap after iteration
>   KVM: selftests: Sync uapi/linux/kvm.h to tools/
>   KVM: selftests: Use a single binary for dirty/clear log test
>   KVM: selftests: Introduce after_vcpu_run hook for dirty log test
>   KVM: selftests: Add dirty ring buffer test
>   KVM: selftests: Let dirty_log_test async for dirty ring test
>   KVM: selftests: Add "-c" parameter to dirty log test
>   KVM: selftests: Test dirty ring waitqueue
>
>  Documentation/virt/kvm/api.txt                | 116 +++++
>  arch/arm/include/asm/kvm_host.h               |   2 -
>  arch/arm64/include/asm/kvm_host.h             |   2 -
>  arch/x86/include/asm/kvm_host.h               |   5 +
>  arch/x86/include/uapi/asm/kvm.h               |   1 +
>  arch/x86/kvm/Makefile                         |   3 +-
>  arch/x86/kvm/mmu/mmu.c                        |   6 +
>  arch/x86/kvm/vmx/vmx.c                        |   7 +
>  arch/x86/kvm/x86.c                            |  12 +
>  include/linux/kvm_dirty_ring.h                |  67 +++
>  include/linux/kvm_host.h                      |  37 ++
>  include/linux/kvm_types.h                     |   1 +
>  include/uapi/linux/kvm.h                      |  36 ++
>  tools/include/uapi/linux/kvm.h                |  47 ++
>  tools/testing/selftests/kvm/Makefile          |   2 -
>  .../selftests/kvm/clear_dirty_log_test.c      |   2 -
>  tools/testing/selftests/kvm/dirty_log_test.c  | 452 ++++++++++++++++--
>  .../testing/selftests/kvm/include/kvm_util.h  |   6 +
>  tools/testing/selftests/kvm/lib/kvm_util.c    | 103 ++++
>  .../selftests/kvm/lib/kvm_util_internal.h     |   5 +
>  virt/kvm/arm/arm.c                            |  29 --
>  virt/kvm/arm/perf.c                           |   6 +-
>  virt/kvm/arm/vgic/vgic-mmio.c                 |  15 +-
>  virt/kvm/dirty_ring.c                         | 156 ++++++
>  virt/kvm/kvm_main.c                           | 315 +++++++++++-
>  25 files changed, 1329 insertions(+), 104 deletions(-)
>  create mode 100644 include/linux/kvm_dirty_ring.h
>  delete mode 100644 tools/testing/selftests/kvm/clear_dirty_log_test.c
>  create mode 100644 virt/kvm/dirty_ring.c


--
Cheers,
Christophe de Dinechin (IRC c3d)
Paolo Bonzini Dec. 11, 2019, 2:16 p.m. UTC | #13
On 11/12/19 14:41, Christophe de Dinechin wrote:
> 
> Peter Xu writes:
> 
>> Branch is here: https://github.com/xzpeter/linux/tree/kvm-dirty-ring
>>
>> Overview
>> ============
>>
>> This is a continued work from Lei Cao <lei.cao@stratus.com> and Paolo
>> on the KVM dirty ring interface.  To make it simple, I'll still start
>> with version 1 as RFC.
>>
>> The new dirty ring interface is another way to collect dirty pages for
>> the virtual machine, but it is different from the existing dirty
>> logging interface in a few ways, majorly:
>>
>>   - Data format: The dirty data was in a ring format rather than a
>>     bitmap format, so the size of data to sync for dirty logging does
>>     not depend on the size of guest memory any more, but speed of
>>     dirtying.  Also, the dirty ring is per-vcpu (currently plus
>>     another per-vm ring, so total ring number is N+1), while the dirty
>>     bitmap is per-vm.
> 
> I like Sean's suggestion to fetch rings when dirtying. That could reduce
> the number of dirty rings to examine.

What do you mean by "fetch rings"?

> Also, as is, this means that the same gfn may be present in multiple
> rings, right?

I think the actual marking of a page as dirty is protected by a spinlock
but I will defer to Peter on this.

Paolo

>>
>>   - Data copy: The sync of dirty pages does not need data copy any more,
>>     but instead the ring is shared between the userspace and kernel by
>>     page sharings (mmap() on either the vm fd or vcpu fd)
>>
>>   - Interface: Instead of using the old KVM_GET_DIRTY_LOG,
>>     KVM_CLEAR_DIRTY_LOG interfaces, the new ring uses a new interface
>>     called KVM_RESET_DIRTY_RINGS when we want to reset the collected
>>     dirty pages to protected mode again (works like
>>     KVM_CLEAR_DIRTY_LOG, but ring based)
>>
>> And more.
>>
>> I would appreciate if the reviewers can start with patch "KVM:
>> Implement ring-based dirty memory tracking", especially the document
>> update part for the big picture.  Then I'll avoid copying into most of
>> them into cover letter again.
>>
>> I marked this series as RFC because I'm at least uncertain on this
>> change of vcpu_enter_guest():
>>
>>         if (kvm_check_request(KVM_REQ_DIRTY_RING_FULL, vcpu)) {
>>                 vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
>>                 /*
>>                         * If this is requested, it means that we've
>>                         * marked the dirty bit in the dirty ring BUT
>>                         * we've not written the date.  Do it now.
> 
> not written the "data" ?
> 
>>                         */
>>                 r = kvm_emulate_instruction(vcpu, 0);
>>                 r = r >= 0 ? 0 : r;
>>                 goto out;
>>         }
>>
>> I did a kvm_emulate_instruction() when dirty ring reaches softlimit
>> and want to exit to userspace, however I'm not really sure whether
>> there could have any side effect.  I'd appreciate any comment of
>> above, or anything else.
>>
>> Tests
>> ===========
>>
>> I wanted to continue work on the QEMU part, but after I noticed that
>> the interface might still prone to change, I posted this series first.
>> However to make sure it's at least working, I've provided unit tests
>> together with the series.  The unit tests should be able to test the
>> series in at least three major paths:
>>
>>   (1) ./dirty_log_test -M dirty-ring
>>
>>       This tests async ring operations: this should be the major work
>>       mode for the dirty ring interface, say, when the kernel is
>>       queuing more data, the userspace is collecting too.  Ring can
>>       hardly reaches full when working like this, because in most
>>       cases the collection could be fast.
>>
>>   (2) ./dirty_log_test -M dirty-ring -c 1024
>>
>>       This set the ring size to be very small so that ring soft-full
>>       always triggers (soft-full is a soft limit of the ring state,
>>       when the dirty ring reaches the soft limit it'll do a userspace
>>       exit and let the userspace to collect the data).
>>
>>   (3) ./dirty_log_test -M dirty-ring-wait-queue
>>
>>       This sololy test the extreme case where ring is full.  When the
>>       ring is completely full, the thread (no matter vcpu or not) will
>>       be put onto a per-vm waitqueue, and KVM_RESET_DIRTY_RINGS will
>>       wake the threads up (assuming until which the ring will not be
>>       full any more).
> 
> Am I correct assuming that guest memory can be dirtied by DMA operations?
> Should
> 
> Not being that familiar with the current implementation of dirty page
> tracking, I wonder who marks the pages dirty in that case, and when?
> If the VM ring is used for I/O threads, isn't it possible that a large
> DMA could dirty a sufficiently large number of GFNs to overflow the
> associated ring? Does this case need a separate way to queue the
> dirtying I/O thread?
> 
>>
>> Thanks,
>>
>> Cao, Lei (2):
>>   KVM: Add kvm/vcpu argument to mark_dirty_page_in_slot
>>   KVM: X86: Implement ring-based dirty memory tracking
>>
>> Paolo Bonzini (1):
>>   KVM: Move running VCPU from ARM to common code
>>
>> Peter Xu (12):
>>   KVM: Add build-time error check on kvm_run size
>>   KVM: Implement ring-based dirty memory tracking
>>   KVM: Make dirty ring exclusive to dirty bitmap log
>>   KVM: Introduce dirty ring wait queue
>>   KVM: selftests: Always clear dirty bitmap after iteration
>>   KVM: selftests: Sync uapi/linux/kvm.h to tools/
>>   KVM: selftests: Use a single binary for dirty/clear log test
>>   KVM: selftests: Introduce after_vcpu_run hook for dirty log test
>>   KVM: selftests: Add dirty ring buffer test
>>   KVM: selftests: Let dirty_log_test async for dirty ring test
>>   KVM: selftests: Add "-c" parameter to dirty log test
>>   KVM: selftests: Test dirty ring waitqueue
>>
>>  Documentation/virt/kvm/api.txt                | 116 +++++
>>  arch/arm/include/asm/kvm_host.h               |   2 -
>>  arch/arm64/include/asm/kvm_host.h             |   2 -
>>  arch/x86/include/asm/kvm_host.h               |   5 +
>>  arch/x86/include/uapi/asm/kvm.h               |   1 +
>>  arch/x86/kvm/Makefile                         |   3 +-
>>  arch/x86/kvm/mmu/mmu.c                        |   6 +
>>  arch/x86/kvm/vmx/vmx.c                        |   7 +
>>  arch/x86/kvm/x86.c                            |  12 +
>>  include/linux/kvm_dirty_ring.h                |  67 +++
>>  include/linux/kvm_host.h                      |  37 ++
>>  include/linux/kvm_types.h                     |   1 +
>>  include/uapi/linux/kvm.h                      |  36 ++
>>  tools/include/uapi/linux/kvm.h                |  47 ++
>>  tools/testing/selftests/kvm/Makefile          |   2 -
>>  .../selftests/kvm/clear_dirty_log_test.c      |   2 -
>>  tools/testing/selftests/kvm/dirty_log_test.c  | 452 ++++++++++++++++--
>>  .../testing/selftests/kvm/include/kvm_util.h  |   6 +
>>  tools/testing/selftests/kvm/lib/kvm_util.c    | 103 ++++
>>  .../selftests/kvm/lib/kvm_util_internal.h     |   5 +
>>  virt/kvm/arm/arm.c                            |  29 --
>>  virt/kvm/arm/perf.c                           |   6 +-
>>  virt/kvm/arm/vgic/vgic-mmio.c                 |  15 +-
>>  virt/kvm/dirty_ring.c                         | 156 ++++++
>>  virt/kvm/kvm_main.c                           | 315 +++++++++++-
>>  25 files changed, 1329 insertions(+), 104 deletions(-)
>>  create mode 100644 include/linux/kvm_dirty_ring.h
>>  delete mode 100644 tools/testing/selftests/kvm/clear_dirty_log_test.c
>>  create mode 100644 virt/kvm/dirty_ring.c
> 
> 
> --
> Cheers,
> Christophe de Dinechin (IRC c3d)
>
Peter Xu Dec. 11, 2019, 5:15 p.m. UTC | #14
On Wed, Dec 11, 2019 at 03:16:30PM +0100, Paolo Bonzini wrote:
> On 11/12/19 14:41, Christophe de Dinechin wrote:
> > 
> > Peter Xu writes:
> > 
> >> Branch is here: https://github.com/xzpeter/linux/tree/kvm-dirty-ring
> >>
> >> Overview
> >> ============
> >>
> >> This is a continued work from Lei Cao <lei.cao@stratus.com> and Paolo
> >> on the KVM dirty ring interface.  To make it simple, I'll still start
> >> with version 1 as RFC.
> >>
> >> The new dirty ring interface is another way to collect dirty pages for
> >> the virtual machine, but it is different from the existing dirty
> >> logging interface in a few ways, majorly:
> >>
> >>   - Data format: The dirty data was in a ring format rather than a
> >>     bitmap format, so the size of data to sync for dirty logging does
> >>     not depend on the size of guest memory any more, but speed of
> >>     dirtying.  Also, the dirty ring is per-vcpu (currently plus
> >>     another per-vm ring, so total ring number is N+1), while the dirty
> >>     bitmap is per-vm.
> > 
> > I like Sean's suggestion to fetch rings when dirtying. That could reduce
> > the number of dirty rings to examine.
> 
> What do you mean by "fetch rings"?

I'd wildly guess Christophe means something like: we create a ring pool,
and when a dirty gfn comes we try to find a ring in the pool to push it
onto.

OK, should I count that as another vote for Sean's proposal? :)

I agree, but imho a larger number of rings won't really be a problem as
long as they stay per-vcpu (after all we have a vcpu number limitation
which is harder to break...).  What attracted me most about Sean's
suggestion is that the interface is cleaner: we don't need to expose the
ring in two places any more.  In the meantime, I don't care too much
about the perf issue here, because after all it's dirty logging.  If
perf were critical, then I'd certainly choose the per-vcpu ring without
doubt, even if it complicates the interface, because it certainly helps
with some conditionally lockless paths.
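
(To make the "lockless" point a bit more concrete -- this is only an
illustrative sketch with made-up names, not the code in this series: a
per-vcpu ring has a single producer, the vcpu thread itself, so pushing
to it needs no lock, while a ring shared by multiple producers does.)

    /* Illustrative only -- placeholder types, not this series' code. */
    struct demo_ring {
            spinlock_t lock;        /* only needed by the shared ring */
            u32 size;               /* power of two */
            u32 head;
            u64 *gfns;
    };

    /* Per-vcpu ring: single producer (the vcpu thread), no lock. */
    static void demo_vcpu_ring_push(struct demo_ring *ring, u64 gfn)
    {
            ring->gfns[ring->head & (ring->size - 1)] = gfn;
            /* Make the entry visible before publishing the new head. */
            smp_store_release(&ring->head, ring->head + 1);
    }

    /* Per-vm ring: multiple producers (no vcpu context), take a lock. */
    static void demo_vm_ring_push(struct demo_ring *ring, u64 gfn)
    {
            spin_lock(&ring->lock);
            ring->gfns[ring->head & (ring->size - 1)] = gfn;
            smp_store_release(&ring->head, ring->head + 1);
            spin_unlock(&ring->lock);
    }

The real series obviously differs; the point is just that a
single-producer ring can get away without the lock.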

> 
> > Also, as is, this means that the same gfn may be present in multiple
> > rings, right?
> 
> I think the actual marking of a page as dirty is protected by a spinlock
> but I will defer to Peter on this.

In most cases, imho, we should be under the mmu lock, iiuc, because the
general mmu page fault path takes it.  However I think there are special
cases:

  - when the spte is already populated and just write-protected, it's
    very possible we go via the quick page fault path
    (fast_page_fault()).  That is lockless (no mmu lock taken).

  - when there's no vcpu context, we'll use the per-vm ring.  Though the
    per-vm ring is locked (the per-vcpu ring is not!), I don't see how
    that would prevent two callers from inserting the same gfn
    sequentially.  It can also happen between the per-vm and per-vcpu
    rings.

So I think gfn duplication could happen, but it should be rare.  Even if
it happens, it won't hurt much, because the 2nd/3rd/... occurrence of
the same gfn will simply be skipped by userspace when harvesting.
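
(And to show why duplicates are harmless on the harvesting side -- again
just a sketch with invented field names, not the actual dirty ring uapi:
if userspace collects gfns into a bitmap, the 2nd/3rd occurrence of the
same gfn only re-sets a bit that is already set.)

    /* Illustrative only -- invented layout, not the dirty ring uapi. */
    #include <stdbool.h>
    #include <stdint.h>

    struct demo_ring {
            uint32_t size;            /* power of two */
            uint32_t avail_index;     /* written by the kernel */
            uint32_t fetch_index;     /* advanced by userspace */
            uint64_t *gfns;
    };

    /* Collect one gfn into a bitmap; returns false for duplicates. */
    static bool demo_collect(uint8_t *bitmap, uint64_t gfn)
    {
            bool seen = bitmap[gfn / 8] & (1u << (gfn % 8));

            bitmap[gfn / 8] |= 1u << (gfn % 8);
            return !seen;
    }

    static void demo_harvest(struct demo_ring *ring, uint8_t *bitmap)
    {
            while (ring->fetch_index != ring->avail_index) {
                    uint64_t gfn =
                            ring->gfns[ring->fetch_index & (ring->size - 1)];

                    /* A duplicate gfn just re-sets an already-set bit. */
                    demo_collect(bitmap, gfn);
                    ring->fetch_index++;
            }
    }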

> 
> Paolo
> 
> >>
> >>   - Data copy: The sync of dirty pages does not need data copy any more,
> >>     but instead the ring is shared between the userspace and kernel by
> >>     page sharings (mmap() on either the vm fd or vcpu fd)
> >>
> >>   - Interface: Instead of using the old KVM_GET_DIRTY_LOG,
> >>     KVM_CLEAR_DIRTY_LOG interfaces, the new ring uses a new interface
> >>     called KVM_RESET_DIRTY_RINGS when we want to reset the collected
> >>     dirty pages to protected mode again (works like
> >>     KVM_CLEAR_DIRTY_LOG, but ring based)
> >>
> >> And more.
> >>
> >> I would appreciate if the reviewers can start with patch "KVM:
> >> Implement ring-based dirty memory tracking", especially the document
> >> update part for the big picture.  Then I'll avoid copying into most of
> >> them into cover letter again.
> >>
> >> I marked this series as RFC because I'm at least uncertain on this
> >> change of vcpu_enter_guest():
> >>
> >>         if (kvm_check_request(KVM_REQ_DIRTY_RING_FULL, vcpu)) {
> >>                 vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
> >>                 /*
> >>                         * If this is requested, it means that we've
> >>                         * marked the dirty bit in the dirty ring BUT
> >>                         * we've not written the date.  Do it now.
> > 
> > not written the "data" ?

Yep, though I'll drop these lines altogether so we'll be fine.. :)

Thanks,