
[0/2,V7] Add AMD SEV and SEV-ES intra host migration support

Message ID: 20210902181751.252227-1-pgonda@google.com

Message

Peter Gonda Sept. 2, 2021, 6:17 p.m. UTC
Intra host migration provides a low-cost mechanism for userspace VMM
upgrades.  It is an alternative to traditional (i.e., remote) live
migration. Whereas remote migration handles moving a guest to a new host,
intra host migration only handles moving a guest to a new userspace VMM
within a host.  This can be used to update the VMM, roll it back, change
its flags, and so on. The lower cost compared to live migration comes from
the fact that the guest's memory does not need to be copied between
processes: a handle to the guest memory is simply passed to the new VMM,
for example via /dev/shm with share=on or a similar shared-memory
mechanism, as sketched below.
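To make the memory hand-off concrete, here is a minimal sketch (not part of
this series) of one way the old VMM could pass its guest-memory handle to
the new VMM: back guest RAM with an anonymous memfd and send the file
descriptor over a Unix domain socket via SCM_RIGHTS. The helper name and
socket setup are illustrative assumptions, not code from these patches.

  #include <string.h>
  #include <sys/mman.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  /*
   * Old VMM side: guest RAM lives in a memfd, so the new VMM can mmap()
   * the same pages after receiving the fd; the memory is never copied.
   */
  static int send_guest_mem_fd(int ipc_sock, int guest_memfd)
  {
          char dummy = 'M';
          struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
          char cbuf[CMSG_SPACE(sizeof(int))] = { 0 };
          struct msghdr msg = {
                  .msg_iov = &iov,
                  .msg_iovlen = 1,
                  .msg_control = cbuf,
                  .msg_controllen = sizeof(cbuf),
          };
          struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

          /* Attach the memfd as ancillary data (SCM_RIGHTS). */
          cmsg->cmsg_level = SOL_SOCKET;
          cmsg->cmsg_type = SCM_RIGHTS;
          cmsg->cmsg_len = CMSG_LEN(sizeof(int));
          memcpy(CMSG_DATA(cmsg), &guest_memfd, sizeof(int));

          return sendmsg(ipc_sock, &msg, 0);
  }

The guest memory itself would have been created earlier with something like
memfd_create("guest-ram", 0) and registered with KVM as a memslot; the new
VMM mmap()s the received fd and registers the same pages with its own VM.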

The guest state can be transferred from an old VMM to a new VMM as follows:

1. Export guest state from KVM to the old user-space VMM via a getter
   user-space/kernel API.
2. Transfer the guest state from the old VMM to the new VMM via IPC.
3. Import the guest state into KVM from the new user-space VMM via a setter
   user-space/kernel API.

In other words, the VMMs hand the state off by exporting it from KVM using
getters, sending that data to the new VMM, and then setting it again in
KVM, as in the sketch below.
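A minimal sketch (not from this series; error handling omitted) of what the
getter and setter ends could look like for ordinary general-purpose
register state:

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /* Step 1, old VMM: export GPR state for one vCPU from KVM. */
  static int export_vcpu_regs(int vcpu_fd, struct kvm_regs *regs)
  {
          return ioctl(vcpu_fd, KVM_GET_REGS, regs);
  }

  /* Step 3, new VMM: import the same state into its own vCPU. */
  static int import_vcpu_regs(int vcpu_fd, struct kvm_regs *regs)
  {
          return ioctl(vcpu_fd, KVM_SET_REGS, regs);
  }

Step 2 is whatever IPC the two VMM processes already share; KVM is not
involved in that part.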

In the common case for intra host migration, we can rely on the normal
ioctls for passing data from one VMM to the next. SEV, SEV-ES, and other
confidential compute environments make most of this information opaque, and
render KVM ioctls such as "KVM_GET_REGS" irrelevant.  As a result, we need
the ability to pass this opaque metadata from one VMM to the next. The
easiest way to do this is to leave this data in the kernel, and transfer
ownership of the metadata from one KVM VM (or vCPU) to the next. For
example, we need to move the SEV-enabled ASID, VMSAs, and GHCB metadata
from one VMM to the next.  In general, we need to be able to hand off any
data that would be unsafe/impossible for the kernel to hand directly to
userspace (and cannot be reproduced using data that can be handed safely to
userspace).

To hand off the SEV metadata in the intra host case, the source VM FD is
sent to the target VMM. The target VMM calls the new capability ioctl with
the source VM FD, and KVM then moves all of the SEV state from the source
VM to the target VM, roughly as in the sketch below.
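From the target VMM's point of view, that call is a KVM_ENABLE_CAP on its
own VM fd with the source VM's fd in args[0]. The sketch below is an
assumption about how userspace would drive it; the capability constant is a
stand-in for the one actually introduced in patch 1 and is not defined in a
stock <linux/kvm.h>:

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /*
   * Target VMM: take over the SEV context of the source VM whose fd was
   * received over IPC. KVM_CAP_SEV_INTRA_HOST_MIGRATION is a placeholder
   * name for the capability added by this series.
   */
  static int sev_intra_host_migrate(int target_vm_fd, int source_vm_fd)
  {
          struct kvm_enable_cap cap = {
                  .cap = KVM_CAP_SEV_INTRA_HOST_MIGRATION,
                  .args[0] = source_vm_fd,
          };

          /* On success KVM has moved the SEV state to the target VM. */
          return ioctl(target_vm_fd, KVM_ENABLE_CAP, &cap);
  }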

V7:
 * Address selftest feedback.

V6:
 * Add selftest.

V5:
 * Fix up locking scheme
 * Address marcorr@ comments.

V4:
 * Move to seanjc@'s suggestion of source VM FD based single ioctl design.

v3:
 * Fix memory leak found by dan.carpenter@

v2:
 * Added marcorr@ reviewed by tag
 * Renamed function introduced in 1/3
 * Edited with seanjc@'s review comments
 ** Cleaned up WARN usage
 ** Userspace makes random token now
 * Edited with brijesh.singh@'s review comments
 ** Checks for different LAUNCH_* states in send function

v1: https://lore.kernel.org/kvm/20210621163118.1040170-1-pgonda@google.com/

base-commit: 680c7e3be6a3

Peter Gonda (3):
  KVM, SEV: Add support for SEV intra host migration
  KVM, SEV: Add support for SEV-ES intra host migration
  selftest: KVM: Add intra host migration tests

 Documentation/virt/kvm/api.rst                |  15 ++
 arch/x86/include/asm/kvm_host.h               |   1 +
 arch/x86/kvm/svm/sev.c                        | 159 ++++++++++++++++++
 arch/x86/kvm/svm/svm.c                        |   1 +
 arch/x86/kvm/svm/svm.h                        |   2 +
 arch/x86/kvm/x86.c                            |   5 +
 include/uapi/linux/kvm.h                      |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/x86_64/sev_vm_tests.c       | 159 ++++++++++++++++++
 9 files changed, 344 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c

Comments

Sean Christopherson Sept. 2, 2021, 6:45 p.m. UTC | #1
Please Cc the cover letter to anyone that was Cc'd on one or more patches.  That's
especially helpful if some recipients aren't subscribed to KVM.  Oh, and Cc lkml
as well, otherwise I believe lore, patchwork, etc... won't have the cover letter.

On Thu, Sep 02, 2021, Peter Gonda wrote:
> [...]
Peter Gonda Sept. 2, 2021, 6:53 p.m. UTC | #2
On Thu, Sep 2, 2021 at 12:45 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Please Cc the cover letter to anyone that was Cc'd on one or more patches.  That's
> especially helpful if some recipients aren't subscribed to KVM.  Oh, and Cc lkml
> as well, otherwise I believe lore, patchwork, etc... won't have the cover letter.

Added CCs here. Thanks.

>
> On Thu, Sep 02, 2021, Peter Gonda wrote:
> > [...]