From patchwork Wed Jul 10 23:42:04 2024
X-Patchwork-Submitter: James Houghton
X-Patchwork-Id: 13729871
Date: Wed, 10 Jul 2024 23:42:04 +0000
Message-ID: <20240710234222.2333120-1-jthoughton@google.com>
Subject: [RFC PATCH 00/18] KVM: Post-copy live migration for guest_memfd
From: James Houghton
To: Paolo Bonzini
Cc: Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose,
    Zenghui Yu, Sean Christopherson, Shuah Khan, Peter Xu,
    Axel Rasmussen, David Matlack, kvm@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev

This patch series implements the KVM-based demand paging system that
was first introduced back in November[1] by David Matlack. The working
name for this new system is KVM Userfault, but that name is very
confusing, so it will not be the final name.

Problem: post-copy with guest_memfd
===================================

Post-copy live migration makes it possible to migrate VMs from one
host to another, no matter how fast they are writing to memory, while
keeping the VM paused for a minimal amount of time. For post-copy to
work, we need:

 1. a way to prevent KVM from accessing particular pages of guest
    memory until we have populated them,
 2. a way for userspace to know when KVM is trying to access a
    particular page, and
 3. a way to allow the access to proceed.

Traditionally, post-copy live migration is implemented using
userfaultfd, which hooks into the main mm fault path. KVM hits this
path when it performs HVA -> PFN translations (with GUP) or when it
itself attempts to access guest memory. Userfaultfd sends a page fault
notification to userspace, and KVM goes to sleep. Userfaultfd works
well here because it is not specific to KVM: everyone who attempts to
access guest memory blocks in the same way.

However, with guest_memfd, we do not use GUP to translate from GFN to
HPA (nor is there an intermediate HVA), so userfaultfd in its current
form cannot be used to support post-copy live migration with
guest_memfd-backed VMs.

Solution: hook into the gfn -> pfn translation
==============================================

The only way to implement post-copy with a non-KVM-specific,
userfaultfd-like system would be to introduce the concept of a
file-userfault[2] to intercept faults on a guest_memfd. Instead, we
take the simpler approach of adding a KVM-specific API, and we hook
into the GFN -> HVA and GFN -> PFN translation steps (for traditional
memslots and for guest_memfd, respectively).

I have intentionally added support for traditional memslots as well:
the complexity it adds is minimal, and it is useful for some VMMs, as
it can be used to fully implement post-copy live migration.

Implementation Details
======================

Let's break down how KVM implements each of the three core post-copy
requirements laid out above.

--- Preventing access: KVM_MEMORY_ATTRIBUTE_USERFAULT ---

The most straightforward way to inform KVM of userfault-enabled pages
is to use a new memory attribute, say KVM_MEMORY_ATTRIBUTE_USERFAULT.
There is already infrastructure in place for modifying and checking
memory attributes. Using this interface is slightly challenging, as
there is no UAPI for setting/clearing particular attributes; we must
set the exact set of attributes we want. The synchronization that is
in place for updating memory attributes is also not suitable for
post-copy live migration, which requires updating memory attributes
(from userfault to no-userfault) very frequently.

Another potential interface would be something akin to a dirty bitmap,
where a bitmap describes which pages within a memslot (or VM) should
trigger userfaults. That would make updates to the userfault status of
a page cheap.

When KVM Userfault is enabled, we need to be careful not to map a
userfault page in response to a fault on a non-userfault page. In this
RFC, I've taken the simplest approach: force new PTEs to be PAGE_SIZE.
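To make the interface concrete, here is a minimal sketch of how a VMM
might mark (or unmark) a range as userfault-enabled.
KVM_SET_MEMORY_ATTRIBUTES and struct kvm_memory_attributes already
exist in the UAPI; KVM_MEMORY_ATTRIBUTE_USERFAULT is the attribute
this series proposes, and set_userfault()/desired_attrs are
hypothetical VMM-side names. Since attributes cannot be set or cleared
individually, the VMM has to track the full attribute set itself:

/*
 * Sketch: mark (or unmark) [gpa, gpa + size) as userfault-enabled.
 * 'desired_attrs' is the complete attribute set the VMM wants for
 * this range, tracked by the VMM itself, since there is no UAPI to
 * update individual attributes.
 */
#include <stdbool.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int set_userfault(int vm_fd, uint64_t gpa, uint64_t size,
			 uint64_t desired_attrs, bool enable)
{
	struct kvm_memory_attributes attrs = {
		.address = gpa,
		.size = size,
		.attributes = enable
			? desired_attrs | KVM_MEMORY_ATTRIBUTE_USERFAULT
			: desired_attrs & ~KVM_MEMORY_ATTRIBUTE_USERFAULT,
	};

	return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
}

During post-copy, the migration thread would call something like
set_userfault(vm_fd, gpa, size, attrs, false) for each page once its
contents arrive; with this RFC's design, that same call is what
resolves pending faults (see "Fault resolution" below).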
--- Page fault notifications ---

For page faults generated by vCPUs running in guest mode, if the page
the vCPU is trying to access is a userfault-enabled page, we exit to
userspace with KVM_EXIT_MEMORY_FAULT and a new flag:
KVM_MEMORY_EXIT_FLAG_USERFAULT. For arm64, I believe this is actually
all we need, provided we handle steal_time properly.

For x86, where returning from deep within the instruction emulator (or
other non-trivial execution paths) is infeasible, we need to be able
to pause execution while userspace fetches the page, just as
userfaultfd would do. Let's call these "asynchronous userfaults." A
new ioctl, KVM_READ_USERFAULT, has been added to read asynchronous
userfaults, and an eventfd is used to signal that new faults are
available for reading. Today, we busy-wait for a gfn to have userfault
disabled; this will change in the future.

--- Fault resolution ---

Resolving userfaults today is as simple as removing the USERFAULT
memory attribute from the faulting gfn (see the sketch below). This
will change if we end up not using memory attributes for KVM
Userfault. Having a range-based wake-up like userfaultfd's (see
UFFDIO_WAKE) might also be helpful for performance.
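Putting notification and resolution together, below is a hedged sketch
of a vCPU thread during post-copy, assuming the existing
KVM_EXIT_MEMORY_FAULT layout (flags/gpa/size in struct kvm_run) plus
the KVM_MEMORY_EXIT_FLAG_USERFAULT flag from this series.
fetch_page_from_source() and handle_other_exit() are hypothetical VMM
helpers, and clear_userfault() would drop the USERFAULT attribute as
in the earlier sketch; the asynchronous KVM_READ_USERFAULT path is
omitted because its exact UAPI is not spelled out in this cover
letter:

#include <err.h>
#include <errno.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical VMM helpers, not defined by this series: */
void fetch_page_from_source(uint64_t gpa, uint64_t size);
void clear_userfault(uint64_t gpa, uint64_t size);
void handle_other_exit(struct kvm_run *run);

static void vcpu_loop(int vcpu_fd, struct kvm_run *run)
{
	for (;;) {
		int ret = ioctl(vcpu_fd, KVM_RUN, 0);

		/*
		 * KVM_EXIT_MEMORY_FAULT is delivered with KVM_RUN
		 * returning -EFAULT, so EFAULT must not be treated
		 * as fatal here.
		 */
		if (ret < 0 && errno != EFAULT && errno != EINTR)
			err(1, "KVM_RUN");

		if (run->exit_reason == KVM_EXIT_MEMORY_FAULT &&
		    (run->memory_fault.flags &
		     KVM_MEMORY_EXIT_FLAG_USERFAULT)) {
			/* Populate the page, then let the vCPU retry. */
			fetch_page_from_source(run->memory_fault.gpa,
					       run->memory_fault.size);
			clear_userfault(run->memory_fault.gpa,
					run->memory_fault.size);
			continue;
		}

		handle_other_exit(run);
	}
}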
Problems with this series
=========================

- This cannot be named KVM Userfault! Perhaps "KVM missing pages"?
- Memory attribute modification doesn't scale well.
- We busy-wait for pages to not be userfault-enabled.
- gfn_to_hva and gfn_to_pfn caches are not invalidated.
- Page tables are not collapsed when KVM Userfault is disabled.
- There is no self-test for asynchronous userfaults.
- Asynchronous page faults can be dropped if KVM_READ_USERFAULT fails.
- Only x86 and arm64 are supported.
- Probably many more!

Thanks!

[1]: https://lore.kernel.org/kvm/CALzav=d23P5uE=oYqMpjFohvn0CASMJxXB_XEOEi-jtqWcFTDA@mail.gmail.com/
[2]: https://lore.kernel.org/kvm/CADrL8HVwBjLpWDM9i9Co1puFWmJshZOKVu727fMPJUAbD+XX5g@mail.gmail.com/

James Houghton (18):
  KVM: Add KVM_USERFAULT build option
  KVM: Add KVM_CAP_USERFAULT and KVM_MEMORY_ATTRIBUTE_USERFAULT
  KVM: Put struct kvm pointer in memslot
  KVM: Fail __gfn_to_hva_many for userfault gfns.
  KVM: Add KVM_PFN_ERR_USERFAULT
  KVM: Add KVM_MEMORY_EXIT_FLAG_USERFAULT
  KVM: Provide attributes to kvm_arch_pre_set_memory_attributes
  KVM: x86: Add KVM Userfault support
  KVM: x86: Add vCPU fault fast-path for Userfault
  KVM: arm64: Add KVM Userfault support
  KVM: arm64: Add vCPU memory fault fast-path for Userfault
  KVM: arm64: Add userfault support for steal-time
  KVM: Add atomic parameter to __gfn_to_hva_many
  KVM: Add asynchronous userfaults, KVM_READ_USERFAULT
  KVM: guest_memfd: Add KVM Userfault support
  KVM: Advertise KVM_CAP_USERFAULT in KVM_CHECK_EXTENSION
  KVM: selftests: Add KVM Userfault mode to demand_paging_test
  KVM: selftests: Remove restriction in vm_set_memory_attributes

 Documentation/virt/kvm/api.rst                 |  23 ++
 arch/arm64/include/asm/kvm_host.h              |   2 +-
 arch/arm64/kvm/Kconfig                         |   1 +
 arch/arm64/kvm/arm.c                           |   8 +-
 arch/arm64/kvm/mmu.c                           |  45 +++-
 arch/arm64/kvm/pvtime.c                        |  11 +-
 arch/x86/kvm/Kconfig                           |   1 +
 arch/x86/kvm/mmu/mmu.c                         |  67 +++++-
 arch/x86/kvm/mmu/mmu_internal.h                |   3 +-
 include/linux/kvm_host.h                       |  41 +++-
 include/uapi/linux/kvm.h                       |  13 ++
 .../selftests/kvm/demand_paging_test.c         |  46 +++-
 .../testing/selftests/kvm/include/kvm_util.h   |   7 -
 virt/kvm/Kconfig                               |   4 +
 virt/kvm/guest_memfd.c                         |  16 +-
 virt/kvm/kvm_main.c                            | 213 +++++++++++++++++-
 16 files changed, 457 insertions(+), 44 deletions(-)

base-commit: 02b0d3b9d4dd1ef76b3e8c63175f1ae9ff392313