From patchwork Thu Jan 9 20:49:17 2025
X-Patchwork-Id: 13933218
Message-ID: <20250109204929.1106563-2-jthoughton@google.com>
In-Reply-To: <20250109204929.1106563-1-jthoughton@google.com>
Date: Thu, 9 Jan 2025 20:49:17 +0000
Subject: [PATCH v2 01/13] KVM: Add KVM_MEM_USERFAULT memslot flag and bitmap
From: James Houghton
To: Paolo Bonzini, Sean Christopherson
Cc: Jonathan Corbet, Marc Zyngier, Oliver Upton, Yan Zhao, James Houghton,
    Nikita Kalyazin, Anish Moorthy, Peter Gonda, Peter Xu, David Matlack,
    wei.w.wang@intel.com, kvm@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev

Use one of the 14 reserved u64s in struct kvm_userspace_memory_region2
for the user to provide `userfault_bitmap`.

The memslot flag indicates whether KVM should read the
`userfault_bitmap` field of the memslot. The user is permitted to
provide a bogus pointer; if the pointer cannot be read from, we will
return -EFAULT (with no other information) back to the user.

Signed-off-by: James Houghton
---
 include/linux/kvm_host.h | 14 ++++++++++++++
 include/uapi/linux/kvm.h |  4 +++-
 virt/kvm/Kconfig         |  3 +++
 virt/kvm/kvm_main.c      | 35 +++++++++++++++++++++++++++++++++++
 4 files changed, 55 insertions(+), 1 deletion(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 401439bb21e3..f7a3dfd5e224 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -590,6 +590,7 @@ struct kvm_memory_slot {
 	unsigned long *dirty_bitmap;
 	struct kvm_arch_memory_slot arch;
 	unsigned long userspace_addr;
+	unsigned long __user *userfault_bitmap;
 	u32 flags;
 	short id;
 	u16 as_id;
@@ -724,6 +725,11 @@ static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
 }
 #endif
 
+static inline bool kvm_has_userfault(struct kvm *kvm)
+{
+	return IS_ENABLED(CONFIG_HAVE_KVM_USERFAULT);
+}
+
 struct kvm_memslots {
 	u64 generation;
 	atomic_long_t last_used_slot;
@@ -2553,4 +2559,12 @@ long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
 				    struct kvm_pre_fault_memory *range);
 #endif
 
+int kvm_gfn_userfault(struct kvm *kvm, struct kvm_memory_slot *memslot,
+		      gfn_t gfn);
+
+static inline bool kvm_memslot_userfault(struct kvm_memory_slot *memslot)
+{
+	return memslot->flags & KVM_MEM_USERFAULT;
+}
+
 #endif
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 343de0a51797..7ade5169d373 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -40,7 +40,8 @@ struct kvm_userspace_memory_region2 {
 	__u64 guest_memfd_offset;
 	__u32 guest_memfd;
 	__u32 pad1;
-	__u64 pad2[14];
+	__u64 userfault_bitmap;
+	__u64 pad2[13];
 };
 
 /*
@@ -51,6 +52,7 @@ struct kvm_userspace_memory_region2 {
 #define KVM_MEM_LOG_DIRTY_PAGES	(1UL << 0)
 #define KVM_MEM_READONLY	(1UL << 1)
 #define KVM_MEM_GUEST_MEMFD	(1UL << 2)
+#define KVM_MEM_USERFAULT	(1UL << 3)
 
 /* for KVM_IRQ_LINE */
 struct kvm_irq_level {
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 54e959e7d68f..9eb1fae238b1 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -124,3 +124,6 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
 config HAVE_KVM_ARCH_GMEM_INVALIDATE
 	bool
 	depends on KVM_PRIVATE_MEM
+
+config HAVE_KVM_USERFAULT
+	bool
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index de2c11dae231..4bceae6a6401 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1541,6 +1541,9 @@ static int check_memory_region_flags(struct kvm *kvm,
 	    !(mem->flags & KVM_MEM_GUEST_MEMFD))
 		valid_flags |= KVM_MEM_READONLY;
 
+	if (kvm_has_userfault(kvm))
+		valid_flags |= KVM_MEM_USERFAULT;
+
 	if (mem->flags & ~valid_flags)
 		return -EINVAL;
 
@@ -1974,6 +1977,12 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		return -EINVAL;
 	if ((mem->memory_size >> PAGE_SHIFT) > KVM_MEM_MAX_NR_PAGES)
 		return -EINVAL;
+	if (mem->flags & KVM_MEM_USERFAULT &&
+	    ((mem->userfault_bitmap != untagged_addr(mem->userfault_bitmap)) ||
+	     !access_ok((void __user *)(unsigned long)mem->userfault_bitmap,
+			DIV_ROUND_UP(mem->memory_size >> PAGE_SHIFT, BITS_PER_LONG)
+			* sizeof(long))))
+		return -EINVAL;
 
 	slots = __kvm_memslots(kvm, as_id);
 
@@ -2042,6 +2051,9 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		if (r)
 			goto out;
 	}
+	if (mem->flags & KVM_MEM_USERFAULT)
+		new->userfault_bitmap =
+			(unsigned long __user *)(unsigned long)mem->userfault_bitmap;
 
 	r = kvm_set_memslot(kvm, old, new, change);
 	if (r)
@@ -6426,3 +6438,26 @@ void kvm_exit(void)
 	kvm_irqfd_exit();
 }
 EXPORT_SYMBOL_GPL(kvm_exit);
+
+int kvm_gfn_userfault(struct kvm *kvm, struct kvm_memory_slot *memslot,
+		      gfn_t gfn)
+{
+	unsigned long bitmap_chunk = 0;
+	off_t offset;
+
+	if (!kvm_memslot_userfault(memslot))
+		return 0;
+
+	if (WARN_ON_ONCE(!memslot->userfault_bitmap))
+		return 0;
+
+	offset = gfn - memslot->base_gfn;
+
+	if (copy_from_user(&bitmap_chunk,
+			   memslot->userfault_bitmap + offset / BITS_PER_LONG,
+			   sizeof(bitmap_chunk)))
+		return -EFAULT;
+
+	/* Set in the bitmap means that the gfn is userfault */
+	return !!(bitmap_chunk & (1ul << (offset % BITS_PER_LONG)));
+}
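As an illustration of the uAPI this patch adds (not part of the patch itself), a VMM might register a memslot with KVM Userfault enabled roughly as follows. This is a sketch only: the slot number, 4 KiB page size, lack of error handling, and the helper name are assumptions, and it relies on the uapi additions above being visible in <linux/kvm.h>.

    #include <linux/kvm.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>

    /* Sketch: register a memslot with every gfn initially marked userfault. */
    static unsigned long *userfault_bitmap;

    static int set_userfault_memslot(int vm_fd, __u32 slot, __u64 gpa,
                                     __u64 size, void *hva)
    {
            __u64 npages = size / 4096;              /* assumes 4 KiB pages */
            size_t bytes = ((npages + 63) / 64) * 8; /* bytes KVM's access_ok() check covers (64-bit longs) */
            struct kvm_userspace_memory_region2 region = {
                    .slot = slot,
                    .flags = KVM_MEM_USERFAULT,
                    .guest_phys_addr = gpa,
                    .memory_size = size,
                    .userspace_addr = (__u64)(unsigned long)hva,
            };

            userfault_bitmap = malloc(bytes);
            memset(userfault_bitmap, 0xff, bytes);   /* all gfns start as userfault */
            region.userfault_bitmap = (__u64)(unsigned long)userfault_bitmap;

            return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
    }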
From patchwork Thu Jan 9 20:49:18 2025
X-Patchwork-Id: 13933219
Message-ID: <20250109204929.1106563-3-jthoughton@google.com>
In-Reply-To: <20250109204929.1106563-1-jthoughton@google.com>
Date: Thu, 9 Jan 2025 20:49:18 +0000
Subject: [PATCH v2 02/13] KVM: Add KVM_MEMORY_EXIT_FLAG_USERFAULT
From: James Houghton
To: Paolo Bonzini, Sean Christopherson

This flag is used for vCPU memory faults caused by KVM Userfault; i.e.,
the bit in `userfault_bitmap` corresponding to the faulting gfn was set.

Signed-off-by: James Houghton
---
 include/uapi/linux/kvm.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 7ade5169d373..c302edf1c984 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -444,6 +444,7 @@ struct kvm_run {
 		/* KVM_EXIT_MEMORY_FAULT */
 		struct {
 #define KVM_MEMORY_EXIT_FLAG_PRIVATE	(1ULL << 3)
+#define KVM_MEMORY_EXIT_FLAG_USERFAULT	(1ULL << 4)
 			__u64 flags;
 			__u64 gpa;
 			__u64 size;
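As a rough sketch of how a VMM run loop might consume this exit (the helper and variable names here are hypothetical; 4 KiB pages and 64-bit longs are assumed):

    /* Sketch: resolve a KVM Userfault exit reported via KVM_EXIT_MEMORY_FAULT. */
    static void handle_userfault_exit(struct kvm_run *run,
                                      unsigned long *userfault_bitmap,
                                      __u64 slot_base_gfn)
    {
            __u64 gfn = run->memory_fault.gpa >> 12;
            __u64 idx = gfn - slot_base_gfn;

            if (!(run->memory_fault.flags & KVM_MEMORY_EXIT_FLAG_USERFAULT))
                    return;

            /* ... fetch or populate the page contents for this gfn here ... */

            /* Clear the bit so the vCPU's refault can proceed. */
            __atomic_fetch_and(&userfault_bitmap[idx / 64],
                               ~(1UL << (idx % 64)), __ATOMIC_RELEASE);
    }

After this, the VMM would simply invoke KVM_RUN again on the same vCPU.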
From patchwork Thu Jan 9 20:49:19 2025
X-Patchwork-Id: 13933220
Message-ID: <20250109204929.1106563-4-jthoughton@google.com>
In-Reply-To: <20250109204929.1106563-1-jthoughton@google.com>
Date: Thu, 9 Jan 2025 20:49:19 +0000
Subject: [PATCH v2 03/13] KVM: Allow late setting of KVM_MEM_USERFAULT on guest_memfd memslot
From: James Houghton
To: Paolo Bonzini, Sean Christopherson

Currently guest_memfd memslots can only be deleted. Slightly change the
logic to allow KVM_MR_FLAGS_ONLY changes when the only flag being
changed is KVM_MEM_USERFAULT.

Signed-off-by: James Houghton
---
 virt/kvm/kvm_main.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4bceae6a6401..882c1f7b4aa8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2015,9 +2015,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		if ((kvm->nr_memslot_pages + npages) < kvm->nr_memslot_pages)
 			return -EINVAL;
 	} else { /* Modify an existing slot. */
-		/* Private memslots are immutable, they can only be deleted. */
-		if (mem->flags & KVM_MEM_GUEST_MEMFD)
-			return -EINVAL;
 		if ((mem->userspace_addr != old->userspace_addr) ||
 		    (npages != old->npages) ||
 		    ((mem->flags ^ old->flags) & KVM_MEM_READONLY))
@@ -2031,6 +2028,16 @@ int __kvm_set_memory_region(struct kvm *kvm,
 			return 0;
 	}
 
+	/*
+	 * Except for being able to set KVM_MEM_USERFAULT, private memslots are
+	 * immutable, they can only be deleted.
+	 */
+	if (mem->flags & KVM_MEM_GUEST_MEMFD &&
+	    !(change == KVM_MR_CREATE ||
+	      (change == KVM_MR_FLAGS_ONLY &&
+	       (mem->flags ^ old->flags) == KVM_MEM_USERFAULT)))
+		return -EINVAL;
+
 	if ((change == KVM_MR_CREATE || change == KVM_MR_MOVE) &&
 	    kvm_check_memslot_overlap(slots, id, base_gfn, base_gfn + npages))
 		return -EEXIST;
@@ -2046,7 +2053,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	new->npages = npages;
 	new->flags = mem->flags;
 	new->userspace_addr = mem->userspace_addr;
-	if (mem->flags & KVM_MEM_GUEST_MEMFD) {
+	if (mem->flags & KVM_MEM_GUEST_MEMFD && change == KVM_MR_CREATE) {
 		r = kvm_gmem_bind(kvm, new, mem->guest_memfd, mem->guest_memfd_offset);
 		if (r)
 			goto out;
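For illustration (a sketch, not from the series): with this change, a VMM that created a guest_memfd memslot with KVM_MEM_USERFAULT set can later clear just that flag by re-issuing the ioctl with every other field left unchanged, which KVM now treats as a KVM_MR_FLAGS_ONLY update. The variable names below are illustrative.

    /* 'region' is the same struct kvm_userspace_memory_region2 used at create time. */
    region.flags &= ~KVM_MEM_USERFAULT;      /* only the USERFAULT bit changes */
    if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region))
            perror("KVM_SET_USER_MEMORY_REGION2");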
From patchwork Thu Jan 9 20:49:20 2025
X-Patchwork-Id: 13933222
Message-ID: <20250109204929.1106563-5-jthoughton@google.com>
In-Reply-To: <20250109204929.1106563-1-jthoughton@google.com>
Date: Thu, 9 Jan 2025 20:49:20 +0000
Subject: [PATCH v2 04/13] KVM: Advertise KVM_CAP_USERFAULT in KVM_CHECK_EXTENSION
From: James Houghton
To: Paolo Bonzini, Sean Christopherson

Advertise support for KVM_CAP_USERFAULT when kvm_has_userfault()
returns true. Currently this is merely
IS_ENABLED(CONFIG_HAVE_KVM_USERFAULT), so it is somewhat redundant.

Signed-off-by: James Houghton
---
 include/uapi/linux/kvm.h | 1 +
 virt/kvm/kvm_main.c      | 4 ++++
 2 files changed, 5 insertions(+)

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index c302edf1c984..defcad38d423 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -936,6 +936,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_PRE_FAULT_MEMORY 236
 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
 #define KVM_CAP_X86_GUEST_MODE 238
+#define KVM_CAP_USERFAULT 239
 
 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 882c1f7b4aa8..30f09141df64 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4811,6 +4811,10 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 #ifdef CONFIG_KVM_PRIVATE_MEM
 	case KVM_CAP_GUEST_MEMFD:
 		return !kvm || kvm_arch_has_private_mem(kvm);
+#endif
+#ifdef CONFIG_HAVE_KVM_USERFAULT
+	case KVM_CAP_USERFAULT:
+		return kvm_has_userfault(kvm);
 #endif
 	default:
 		break;
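A sketch of the corresponding userspace probe (the file-descriptor variable name is illustrative):

    /* Nonzero means KVM_MEM_USERFAULT can be set on this VM's memslots. */
    int have_userfault = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_USERFAULT) > 0;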
From patchwork Thu Jan 9 20:49:21 2025
X-Patchwork-Id: 13933221
Message-ID: <20250109204929.1106563-6-jthoughton@google.com>
In-Reply-To: <20250109204929.1106563-1-jthoughton@google.com>
Date: Thu, 9 Jan 2025 20:49:21 +0000
Subject: [PATCH v2 05/13] KVM: x86/mmu: Add support for KVM_MEM_USERFAULT
From: James Houghton
To: Paolo Bonzini, Sean Christopherson

Adhering to the requirements of KVM Userfault:

1. Zap all sptes for the memslot when KVM_MEM_USERFAULT is toggled on
   with kvm_arch_flush_shadow_memslot().
2. Only allow PAGE_SIZE sptes when KVM_MEM_USERFAULT is enabled (for
   both normal/GUP memory and guest_memfd memory).
3. Reconstruct huge mappings when KVM_MEM_USERFAULT is toggled off with
   kvm_mmu_recover_huge_pages(). This is the behavior when dirty
   logging is disabled; remain consistent with it.

With the new logic in kvm_mmu_slot_apply_flags(), I've simplified the
two dirty-logging-toggle checks into one, and I have dropped the
WARN_ON() that was there.

Signed-off-by: James Houghton
---
 arch/x86/kvm/Kconfig            |  1 +
 arch/x86/kvm/mmu/mmu.c          | 27 +++++++++++++++++++++----
 arch/x86/kvm/mmu/mmu_internal.h | 20 +++++++++++++++---
 arch/x86/kvm/x86.c              | 36 ++++++++++++++++++++++++---------
 include/linux/kvm_host.h        |  5 ++++-
 5 files changed, 71 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index ea2c4f21c1ca..286c6825cd1c 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -47,6 +47,7 @@ config KVM_X86
 	select KVM_GENERIC_PRE_FAULT_MEMORY
 	select KVM_GENERIC_PRIVATE_MEM if KVM_SW_PROTECTED_VM
 	select KVM_WERROR if WERROR
+	select HAVE_KVM_USERFAULT
 
 config KVM
 	tristate "Kernel-based Virtual Machine (KVM) support"
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2401606db260..5cab2785b97f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4280,14 +4280,19 @@ static inline u8 kvm_max_level_for_order(int order)
 	return PG_LEVEL_4K;
 }
 
-static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
-					u8 max_level, int gmem_order)
+static u8 kvm_max_private_mapping_level(struct kvm *kvm,
+					struct kvm_memory_slot *slot,
+					kvm_pfn_t pfn, u8 max_level,
+					int gmem_order)
 {
 	u8 req_max_level;
 
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
+	if (kvm_memslot_userfault(slot))
+		return PG_LEVEL_4K;
+
 	max_level = min(kvm_max_level_for_order(gmem_order), max_level);
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
@@ -4324,8 +4329,10 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 	}
 
 	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
-	fault->max_level = kvm_max_private_mapping_level(vcpu->kvm, fault->pfn,
-							 fault->max_level, max_order);
+	fault->max_level = kvm_max_private_mapping_level(vcpu->kvm, fault->slot,
+							 fault->pfn,
+							 fault->max_level,
+							 max_order);
 
 	return RET_PF_CONTINUE;
 }
@@ -4334,6 +4341,18 @@ static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 				 struct kvm_page_fault *fault)
 {
 	unsigned int foll = fault->write ? FOLL_WRITE : 0;
+	int userfault;
+
+	userfault = kvm_gfn_userfault(vcpu->kvm, fault->slot, fault->gfn);
+	if (userfault < 0)
+		return userfault;
+	if (userfault) {
+		kvm_mmu_prepare_userfault_exit(vcpu, fault);
+		return -EFAULT;
+	}
+
+	if (kvm_memslot_userfault(fault->slot))
+		fault->max_level = PG_LEVEL_4K;
 
 	if (fault->is_private)
 		return kvm_mmu_faultin_pfn_private(vcpu, fault);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index b00abbe3f6cf..15705faa3b67 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -282,12 +282,26 @@ enum {
 	RET_PF_SPURIOUS,
 };
 
-static inline void kvm_mmu_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
-						      struct kvm_page_fault *fault)
+static inline void __kvm_mmu_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
+							struct kvm_page_fault *fault,
+							bool is_userfault)
 {
 	kvm_prepare_memory_fault_exit(vcpu, fault->gfn << PAGE_SHIFT,
 				      PAGE_SIZE, fault->write, fault->exec,
-				      fault->is_private);
+				      fault->is_private,
+				      is_userfault);
+}
+
+static inline void kvm_mmu_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
+						     struct kvm_page_fault *fault)
+{
+	__kvm_mmu_prepare_memory_fault_exit(vcpu, fault, false);
+}
+
+static inline void kvm_mmu_prepare_userfault_exit(struct kvm_vcpu *vcpu,
+						  struct kvm_page_fault *fault)
+{
+	__kvm_mmu_prepare_memory_fault_exit(vcpu, fault, true);
 }
 
 static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1b04092ec76a..2abb425a6514 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13053,12 +13053,36 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
 	u32 new_flags = new ? new->flags : 0;
 	bool log_dirty_pages = new_flags & KVM_MEM_LOG_DIRTY_PAGES;
 
+	/*
+	 * When toggling KVM Userfault on, zap all sptes so that userfault-ness
+	 * will be respected at refault time. All new faults will only install
+	 * small sptes. Therefore, when toggling it off, recover hugepages.
+	 *
+	 * For MOVE and DELETE, there will be nothing to do, as the old
+	 * mappings will have already been deleted by
+	 * kvm_arch_flush_shadow_memslot().
+	 *
+	 * For CREATE, no mappings will have been created yet.
+	 */
+	if ((old_flags ^ new_flags) & KVM_MEM_USERFAULT &&
+	    (change == KVM_MR_FLAGS_ONLY)) {
+		if (old_flags & KVM_MEM_USERFAULT)
+			kvm_mmu_recover_huge_pages(kvm, new);
+		else
+			kvm_arch_flush_shadow_memslot(kvm, old);
+	}
+
+	/*
+	 * Nothing more to do if dirty logging isn't being toggled.
+	 */
+	if (!((old_flags ^ new_flags) & KVM_MEM_LOG_DIRTY_PAGES))
+		return;
+
 	/*
 	 * Update CPU dirty logging if dirty logging is being toggled. This
 	 * applies to all operations.
 	 */
-	if ((old_flags ^ new_flags) & KVM_MEM_LOG_DIRTY_PAGES)
-		kvm_mmu_update_cpu_dirty_logging(kvm, log_dirty_pages);
+	kvm_mmu_update_cpu_dirty_logging(kvm, log_dirty_pages);
 
 	/*
 	 * Nothing more to do for RO slots (which can't be dirtied and can't be
@@ -13078,14 +13102,6 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
 	if ((change != KVM_MR_FLAGS_ONLY) || (new_flags & KVM_MEM_READONLY))
 		return;
 
-	/*
-	 * READONLY and non-flags changes were filtered out above, and the only
-	 * other flag is LOG_DIRTY_PAGES, i.e. something is wrong if dirty
-	 * logging isn't being toggled on or off.
-	 */
-	if (WARN_ON_ONCE(!((old_flags ^ new_flags) & KVM_MEM_LOG_DIRTY_PAGES)))
-		return;
-
 	if (!log_dirty_pages) {
 		/*
 		 * Recover huge page mappings in the slot now that dirty logging
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f7a3dfd5e224..9e8a8dcf2b73 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2465,7 +2465,8 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr)
 static inline void kvm_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
 						 gpa_t gpa, gpa_t size,
 						 bool is_write, bool is_exec,
-						 bool is_private)
+						 bool is_private,
+						 bool is_userfault)
 {
 	vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT;
 	vcpu->run->memory_fault.gpa = gpa;
@@ -2475,6 +2476,8 @@ static inline void kvm_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
 	vcpu->run->memory_fault.flags = 0;
 	if (is_private)
 		vcpu->run->memory_fault.flags |= KVM_MEMORY_EXIT_FLAG_PRIVATE;
+	if (is_userfault)
+		vcpu->run->memory_fault.flags |= KVM_MEMORY_EXIT_FLAG_USERFAULT;
 }
 
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
From patchwork Thu Jan 9 20:49:22 2025
X-Patchwork-Id: 13933223
Message-ID: <20250109204929.1106563-7-jthoughton@google.com>
In-Reply-To: <20250109204929.1106563-1-jthoughton@google.com>
Date: Thu, 9 Jan 2025 20:49:22 +0000
Subject: [PATCH v2 06/13] KVM: arm64: Add support for KVM_MEM_USERFAULT
From: James Houghton
To: Paolo Bonzini, Sean Christopherson

Adhering to the requirements of KVM Userfault:

1. When it is toggled on, zap the second stage with
   kvm_arch_flush_shadow_memslot(). This is to respect userfault-ness.
2. When KVM_MEM_USERFAULT is enabled, restrict new second-stage
   mappings to be PAGE_SIZE, just like when dirty logging is enabled.

Do not zap the second stage when KVM_MEM_USERFAULT is disabled to
remain consistent with the behavior when dirty logging is disabled.

Signed-off-by: James Houghton
---
 arch/arm64/kvm/Kconfig |  1 +
 arch/arm64/kvm/mmu.c   | 26 +++++++++++++++++++++++++-
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index ead632ad01b4..d89b4088b580 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -38,6 +38,7 @@ menuconfig KVM
 	select HAVE_KVM_VCPU_RUN_PID_CHANGE
 	select SCHED_INFO
 	select GUEST_PERF_EVENTS if PERF_EVENTS
+	select HAVE_KVM_USERFAULT
 	help
 	  Support hosting virtualized guest machines.
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9d46ad57e52..e099bdcfac42 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1493,7 +1493,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * logging_active is guaranteed to never be true for VM_PFNMAP
 	 * memslots.
 	 */
-	if (logging_active) {
+	if (logging_active || kvm_memslot_userfault(memslot)) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
 	} else {
@@ -1582,6 +1582,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
 	mmap_read_unlock(current->mm);
 
+	if (kvm_gfn_userfault(kvm, memslot, gfn)) {
+		kvm_prepare_memory_fault_exit(vcpu, gfn << PAGE_SHIFT,
+					      PAGE_SIZE, write_fault,
+					      exec_fault, false, true);
+		return -EFAULT;
+	}
+
 	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
 				&writable, &page);
 	if (pfn == KVM_PFN_ERR_HWPOISON) {
@@ -2073,6 +2080,23 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   enum kvm_mr_change change)
 {
 	bool log_dirty_pages = new && new->flags & KVM_MEM_LOG_DIRTY_PAGES;
+	u32 new_flags = new ? new->flags : 0;
+	u32 changed_flags = (new_flags) ^ (old ? old->flags : 0);
+
+	/*
+	 * If KVM_MEM_USERFAULT has been enabled, drop all the stage-2 mappings
+	 * so that we can respect userfault-ness.
+	 */
+	if ((changed_flags & KVM_MEM_USERFAULT) &&
+	    (new_flags & KVM_MEM_USERFAULT) &&
+	    change == KVM_MR_FLAGS_ONLY)
+		kvm_arch_flush_shadow_memslot(kvm, old);
+
+	/*
+	 * Nothing left to do if not toggling dirty logging.
+	 */
+	if (!(changed_flags & KVM_MEM_LOG_DIRTY_PAGES))
+		return;
 
 	/*
 	 * At this point memslot has been committed and there is an
From patchwork Thu Jan 9 20:49:23 2025
X-Patchwork-Id: 13933224
Message-ID: <20250109204929.1106563-8-jthoughton@google.com>
In-Reply-To: <20250109204929.1106563-1-jthoughton@google.com>
Date: Thu, 9 Jan 2025 20:49:23 +0000
Subject: [PATCH v2 07/13] KVM: selftests: Fix vm_mem_region_set_flags docstring
From: James Houghton
To: Paolo Bonzini, Sean Christopherson

`flags` is what region->region.flags gets set to.

Signed-off-by: James Houghton
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 33fefeb3ca44..a87988a162f1 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1124,7 +1124,7 @@ memslot2region(struct kvm_vm *vm, uint32_t memslot)
  *
  * Input Args:
  *   vm - Virtual Machine
- *   flags - Starting guest physical address
+ *   flags - Flags for the memslot
 *
 * Output Args: None
 *
bh=O767jE59AeAMsTglDWemPeR1CkI5EMgHAt7K2sI2JAg=; b=UDtA1JhTgYuEd5nVqlMxZodCgafR/G2zHRsXKjt+pQj4jNp/emUpmUy01PMdflzmdK YK9n8XBkIMCJF12AUjORVIRR0O3AWq4UPOXVs937dI8xUvC4Fcxp6An3dZeVn7JBB3f1 ulgyNSS/QcjUmmmvm1Cw6UFrS0QucdYGnLEqrjpkS+RjViEe0naxoNN2wPMGg1VSG0Av IQnrSouJK3lSvSV+L9xrgV0EwjskgZ5U4fYaua0tHhTDQOKkob8ks7yD7x9iQI3Iyvzs Q9eKzhSa14/65q2y+r7bIGZW2WPOhc588GncdH7h8SSDdkrdNuncP8Ar6VjSrUMpBM6D lmvQ== X-Forwarded-Encrypted: i=1; AJvYcCWmDjOYjvWwUIWLCxOd90PHh+EZNFHDO/M1G8IxYOfEO9+mawy78TePPtt0oWggOws83Hw=@vger.kernel.org X-Gm-Message-State: AOJu0Yw8OtQyPUD2Fdal4xWwYeCMLqSZxWsvkjaTD/0n6F/axKQF8rKd gEckHAAOT1DM2nK0RFzd9/w0qriqRvds0XWL193kQpcr/wQPIoOJVIdcXuWCzqzad5R0ULMPE6u 6nahPPEJNnay8g0uMGQ== X-Google-Smtp-Source: AGHT+IFgx6gAPenNPcGigBK2pqFto4Hhpu9WmCDI55SJzVOesvWzc1cj0nZb4kbFOEWQTb6nHd4wPYt7OTvCG9XJ X-Received: from qvboq1.prod.google.com ([2002:a05:6214:4601:b0:6d8:f326:1f33]) (user=jthoughton job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6214:2688:b0:6d4:1c9d:4f47 with SMTP id 6a1803df08f44-6df9b238643mr125150226d6.13.1736455802071; Thu, 09 Jan 2025 12:50:02 -0800 (PST) Date: Thu, 9 Jan 2025 20:49:24 +0000 In-Reply-To: <20250109204929.1106563-1-jthoughton@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250109204929.1106563-1-jthoughton@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20250109204929.1106563-9-jthoughton@google.com> Subject: [PATCH v2 08/13] KVM: selftests: Fix prefault_mem logic From: James Houghton To: Paolo Bonzini , Sean Christopherson Cc: Jonathan Corbet , Marc Zyngier , Oliver Upton , Yan Zhao , James Houghton , Nikita Kalyazin , Anish Moorthy , Peter Gonda , Peter Xu , David Matlack , wei.w.wang@intel.com, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev The previous logic didn't handle the case where memory was partitioned AND we were using a single userfaultfd. It would only prefault the first vCPU's memory and not the rest. Signed-off-by: James Houghton --- tools/testing/selftests/kvm/demand_paging_test.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c index 0202b78f8680..315f5c9037b4 100644 --- a/tools/testing/selftests/kvm/demand_paging_test.c +++ b/tools/testing/selftests/kvm/demand_paging_test.c @@ -172,11 +172,13 @@ static void run_test(enum vm_guest_mode mode, void *arg) memset(guest_data_prototype, 0xAB, demand_paging_size); if (p->uffd_mode == UFFDIO_REGISTER_MODE_MINOR) { - num_uffds = p->single_uffd ? 
1 : nr_vcpus; - for (i = 0; i < num_uffds; i++) { + for (i = 0; i < nr_vcpus; i++) { vcpu_args = &memstress_args.vcpu_args[i]; prefault_mem(addr_gpa2alias(vm, vcpu_args->gpa), vcpu_args->pages * memstress_args.guest_page_size); + if (!p->partition_vcpu_memory_access) + /* We prefaulted everything */ + break; } }

From patchwork Thu Jan 9 20:49:25 2025
X-Patchwork-Submitter: James Houghton
X-Patchwork-Id: 13933226
Subject: [PATCH v2 09/13] KVM: selftests: Add va_start/end into uffd_desc
From: James Houghton
To: Paolo Bonzini, Sean Christopherson
Cc: Jonathan Corbet, Marc Zyngier, Oliver Upton, Yan Zhao, James Houghton, Nikita Kalyazin, Anish Moorthy, Peter Gonda, Peter Xu, David Matlack, wei.w.wang@intel.com, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Date: Thu, 9 Jan 2025 20:49:25 +0000
Message-ID: <20250109204929.1106563-10-jthoughton@google.com>
In-Reply-To: <20250109204929.1106563-1-jthoughton@google.com>

This will be used by the self-test to look up which userfaultfd should be used when handling a KVM Userfault (in the event KVM Userfault and userfaultfd are being used together).
Signed-off-by: James Houghton --- tools/testing/selftests/kvm/include/userfaultfd_util.h | 2 ++ tools/testing/selftests/kvm/lib/userfaultfd_util.c | 2 ++ 2 files changed, 4 insertions(+) diff --git a/tools/testing/selftests/kvm/include/userfaultfd_util.h b/tools/testing/selftests/kvm/include/userfaultfd_util.h index 60f7f9d435dc..b62fecdfe745 100644 --- a/tools/testing/selftests/kvm/include/userfaultfd_util.h +++ b/tools/testing/selftests/kvm/include/userfaultfd_util.h @@ -30,6 +30,8 @@ struct uffd_desc { int *pipefds; pthread_t *readers; struct uffd_reader_args *reader_args; + void *va_start; + void *va_end; }; struct uffd_desc *uffd_setup_demand_paging(int uffd_mode, useconds_t delay, diff --git a/tools/testing/selftests/kvm/lib/userfaultfd_util.c b/tools/testing/selftests/kvm/lib/userfaultfd_util.c index 7c9de8414462..93004c85bcdc 100644 --- a/tools/testing/selftests/kvm/lib/userfaultfd_util.c +++ b/tools/testing/selftests/kvm/lib/userfaultfd_util.c @@ -152,6 +152,8 @@ struct uffd_desc *uffd_setup_demand_paging(int uffd_mode, useconds_t delay, expected_ioctls, "missing userfaultfd ioctls"); uffd_desc->uffd = uffd; + uffd_desc->va_start = hva; + uffd_desc->va_end = (char *)hva + len; for (i = 0; i < uffd_desc->num_readers; ++i) { int pipes[2];

From patchwork Thu Jan 9 20:49:26 2025
X-Patchwork-Submitter: James Houghton
X-Patchwork-Id: 13933228
Subject: [PATCH v2 10/13] KVM: selftests: Add KVM Userfault mode to demand_paging_test
From: James Houghton
To: Paolo Bonzini, Sean Christopherson
Cc: Jonathan Corbet, Marc Zyngier, Oliver Upton, Yan Zhao, James Houghton, Nikita Kalyazin, Anish Moorthy, Peter Gonda, Peter Xu, David Matlack, wei.w.wang@intel.com, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Date: Thu, 9 Jan 2025 20:49:26 +0000
Message-ID: <20250109204929.1106563-11-jthoughton@google.com>
In-Reply-To: <20250109204929.1106563-1-jthoughton@google.com>

Add a way for the KVM_RUN loop to handle -EFAULT exits when they are for KVM_MEMORY_EXIT_FLAG_USERFAULT. In this case, preemptively handle the UFFDIO_COPY or UFFDIO_CONTINUE if userfaultfd is also in use. This saves the trip through the userfaultfd poll/read/WAKE loop.

When preemptively handling UFFDIO_COPY/CONTINUE, do so with MODE_DONTWAKE, as there will not be a thread to wake. If a thread *does* take the userfaultfd slow path, we will get a regular userfault and call handle_uffd_page_request(), which will do a full wake-up. In the EEXIST case, a wake-up will not occur, so make sure to call UFFDIO_WAKE explicitly in that case.

When handling KVM userfaults, make sure to clear the corresponding bit in the userfault bitmap with memory_order_release, as in the sketch below.
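For illustration only (not part of this patch): a minimal sketch of the release-ordered bitmap clear described above, assuming C11 atomics and headers from a kernel with this series applied (KVM_EXIT_MEMORY_FAULT and KVM_MEMORY_EXIT_FLAG_USERFAULT come from this series; the helper name and its parameters are invented here):

    #include <stdatomic.h>
    #include <stdint.h>

    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    /*
     * Hypothetical helper: after the faulting page has been made present
     * (e.g. via UFFDIO_COPY/CONTINUE with ..._MODE_DONTWAKE), clear the
     * gfn's bit in the memslot's userfault bitmap. The release ordering
     * publishes the page contents written above to the vCPU that retries
     * the access.
     */
    static void clear_userfault_bit(unsigned long *bitmap,
                                    uint64_t slot_base_gpa,
                                    uint64_t fault_gpa, uint64_t page_size)
    {
            uint64_t page = (fault_gpa - slot_base_gpa) / page_size;
            _Atomic unsigned long *word =
                    (_Atomic unsigned long *)&bitmap[page / BITS_PER_LONG];

            atomic_fetch_and_explicit(word, ~(1UL << (page % BITS_PER_LONG)),
                                      memory_order_release);
    }

    /*
     * In the vCPU run loop (sketch):
     *
     *      if (ioctl(vcpu_fd, KVM_RUN, 0) < 0 && errno == EFAULT &&
     *          run->exit_reason == KVM_EXIT_MEMORY_FAULT &&
     *          (run->memory_fault.flags & KVM_MEMORY_EXIT_FLAG_USERFAULT)) {
     *              ... populate the page ...
     *              clear_userfault_bit(bitmap, slot_base_gpa,
     *                                  run->memory_fault.gpa, page_size);
     *              ... retry KVM_RUN ...
     *      }
     */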
Although release ordering wouldn't affect the functionality of the test (because memstress doesn't actually require any particular guest memory contents), it is what userspace normally needs to do. Add `-k` to have the test use KVM Userfault. Add the vm_mem_region_set_flags_userfault() helper for setting `userfault_bitmap` and KVM_MEM_USERFAULT at the same time. Signed-off-by: James Houghton --- .../selftests/kvm/demand_paging_test.c | 139 +++++++++++++++++- .../testing/selftests/kvm/include/kvm_util.h | 5 + tools/testing/selftests/kvm/lib/kvm_util.c | 40 ++++- 3 files changed, 176 insertions(+), 8 deletions(-) diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c index 315f5c9037b4..183c70731093 100644 --- a/tools/testing/selftests/kvm/demand_paging_test.c +++ b/tools/testing/selftests/kvm/demand_paging_test.c @@ -12,7 +12,9 @@ #include #include #include +#include #include +#include #include "kvm_util.h" #include "test_util.h" @@ -24,11 +26,21 @@ #ifdef __NR_userfaultfd static int nr_vcpus = 1; +static int num_uffds; static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE; static size_t demand_paging_size; +static size_t host_page_size; static char *guest_data_prototype; +static struct { + bool enabled; + int uffd_mode; /* set if userfaultfd is also in use */ + struct uffd_desc **uffd_descs; +} kvm_userfault_data; + +static void resolve_kvm_userfault(u64 gpa, u64 size); + static void vcpu_worker(struct memstress_vcpu_args *vcpu_args) { struct kvm_vcpu *vcpu = vcpu_args->vcpu; @@ -41,8 +53,22 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args) clock_gettime(CLOCK_MONOTONIC, &start); /* Let the guest access its memory */ +restart: ret = _vcpu_run(vcpu); - TEST_ASSERT(ret == 0, "vcpu_run failed: %d", ret); + if (ret < 0 && errno == EFAULT && kvm_userfault_data.enabled) { + /* Check for userfault. */ + TEST_ASSERT(run->exit_reason == KVM_EXIT_MEMORY_FAULT, + "Got invalid exit reason: %x", run->exit_reason); + TEST_ASSERT(run->memory_fault.flags == + KVM_MEMORY_EXIT_FLAG_USERFAULT, + "Got invalid memory fault exit: %llx", + run->memory_fault.flags); + resolve_kvm_userfault(run->memory_fault.gpa, + run->memory_fault.size); + goto restart; + } else + TEST_ASSERT(ret == 0, "vcpu_run failed: %d", ret); + if (get_ucall(vcpu, NULL) != UCALL_SYNC) { TEST_ASSERT(false, "Invalid guest sync status: exit_reason=%s", @@ -54,11 +80,10 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args) ts_diff.tv_sec, ts_diff.tv_nsec); } -static int handle_uffd_page_request(int uffd_mode, int uffd, - struct uffd_msg *msg) +static int resolve_uffd_page_request(int uffd_mode, int uffd, uint64_t addr, + bool wake) { pid_t tid = syscall(__NR_gettid); - uint64_t addr = msg->arg.pagefault.address; struct timespec start; struct timespec ts_diff; int r; @@ -71,7 +96,7 @@ static int handle_uffd_page_request(int uffd_mode, int uffd, copy.src = (uint64_t)guest_data_prototype; copy.dst = addr; copy.len = demand_paging_size; - copy.mode = 0; + copy.mode = wake ? 0 : UFFDIO_COPY_MODE_DONTWAKE; r = ioctl(uffd, UFFDIO_COPY, &copy); /* @@ -96,6 +121,7 @@ static int handle_uffd_page_request(int uffd_mode, int uffd, cont.range.start = addr; cont.range.len = demand_paging_size; + cont.mode = wake ?
0 : UFFDIO_CONTINUE_MODE_DONTWAKE; r = ioctl(uffd, UFFDIO_CONTINUE, &cont); /* @@ -119,6 +145,20 @@ static int handle_uffd_page_request(int uffd_mode, int uffd, TEST_FAIL("Invalid uffd mode %d", uffd_mode); } + if (r < 0 && wake) { + /* + * No wake-up occurs when UFFDIO_COPY/CONTINUE fails, but we + * have a thread waiting. Wake it up. + */ + struct uffdio_range range = {0}; + + range.start = addr; + range.len = demand_paging_size; + + TEST_ASSERT(ioctl(uffd, UFFDIO_WAKE, &range) == 0, + "UFFDIO_WAKE failed: 0x%lx", addr); + } + ts_diff = timespec_elapsed(start); PER_PAGE_DEBUG("UFFD page-in %d \t%ld ns\n", tid, @@ -129,6 +169,58 @@ static int handle_uffd_page_request(int uffd_mode, int uffd, return 0; } +static int handle_uffd_page_request(int uffd_mode, int uffd, + struct uffd_msg *msg) +{ + uint64_t addr = msg->arg.pagefault.address; + + return resolve_uffd_page_request(uffd_mode, uffd, addr, true); +} + +static void resolve_kvm_userfault(u64 gpa, u64 size) +{ + struct kvm_vm *vm = memstress_args.vm; + struct userspace_mem_region *region; + unsigned long *bitmap_chunk; + u64 page, gpa_offset; + + region = (struct userspace_mem_region *) userspace_mem_region_find( + vm, gpa, (gpa + size - 1)); + + if (kvm_userfault_data.uffd_mode) { + /* + * Resolve userfaults early, without needing to read them + * off the userfaultfd. + */ + uint64_t hva = (uint64_t)addr_gpa2hva(vm, gpa); + struct uffd_desc **descs = kvm_userfault_data.uffd_descs; + int i, fd; + + for (i = 0; i < num_uffds; ++i) + if (hva >= (uint64_t)descs[i]->va_start && + hva < (uint64_t)descs[i]->va_end) + break; + + TEST_ASSERT(i < num_uffds, + "Did not find userfaultfd for hva: %lx", hva); + + fd = kvm_userfault_data.uffd_descs[i]->uffd; + resolve_uffd_page_request(kvm_userfault_data.uffd_mode, fd, + hva, false); + } else { + uint64_t hva = (uint64_t)addr_gpa2hva(vm, gpa); + + memcpy((char *)hva, guest_data_prototype, demand_paging_size); + } + + gpa_offset = gpa - region->region.guest_phys_addr; + page = gpa_offset / host_page_size; + bitmap_chunk = (unsigned long *)region->region.userfault_bitmap + + page / BITS_PER_LONG; + atomic_fetch_and_explicit((_Atomic unsigned long *)bitmap_chunk, + ~(1ul << (page % BITS_PER_LONG)), memory_order_release); +} + struct test_params { int uffd_mode; bool single_uffd; @@ -136,6 +228,7 @@ struct test_params { int readers_per_uffd; enum vm_mem_backing_src_type src_type; bool partition_vcpu_memory_access; + bool kvm_userfault; }; static void prefault_mem(void *alias, uint64_t len) @@ -149,6 +242,25 @@ static void prefault_mem(void *alias, uint64_t len) } } +static void enable_userfault(struct kvm_vm *vm, int slots) +{ + for (int i = 0; i < slots; ++i) { + int slot = MEMSTRESS_MEM_SLOT_INDEX + i; + struct userspace_mem_region *region; + unsigned long *userfault_bitmap; + int flags = KVM_MEM_USERFAULT; + + region = memslot2region(vm, slot); + userfault_bitmap = bitmap_zalloc(region->mmap_size / + host_page_size); + /* everything is userfault initially */ + memset(userfault_bitmap, -1, region->mmap_size / host_page_size / CHAR_BIT); + printf("Setting bitmap: %p\n", userfault_bitmap); + vm_mem_region_set_flags_userfault(vm, slot, flags, + userfault_bitmap); + } +} + static void run_test(enum vm_guest_mode mode, void *arg) { struct memstress_vcpu_args *vcpu_args; @@ -159,12 +271,13 @@ static void run_test(enum vm_guest_mode mode, void *arg) struct timespec ts_diff; double vcpu_paging_rate; struct kvm_vm *vm; - int i, num_uffds = 0; + int i; vm = memstress_create_vm(mode, nr_vcpus, 
guest_percpu_mem_size, 1, p->src_type, p->partition_vcpu_memory_access); demand_paging_size = get_backing_src_pagesz(p->src_type); + host_page_size = getpagesize(); guest_data_prototype = malloc(demand_paging_size); TEST_ASSERT(guest_data_prototype, @@ -208,6 +321,14 @@ static void run_test(enum vm_guest_mode mode, void *arg) } } + if (p->kvm_userfault) { + TEST_REQUIRE(kvm_has_cap(KVM_CAP_USERFAULT)); + kvm_userfault_data.enabled = true; + kvm_userfault_data.uffd_mode = p->uffd_mode; + kvm_userfault_data.uffd_descs = uffd_descs; + enable_userfault(vm, 1); + } + pr_info("Finished creating vCPUs and starting uffd threads\n"); clock_gettime(CLOCK_MONOTONIC, &start); @@ -265,6 +386,7 @@ static void help(char *name) printf(" -v: specify the number of vCPUs to run.\n"); printf(" -o: Overlap guest memory accesses instead of partitioning\n" " them into a separate region of memory for each vCPU.\n"); + printf(" -k: Use KVM Userfault\n"); puts(""); exit(0); } @@ -283,7 +405,7 @@ int main(int argc, char *argv[]) guest_modes_append_default(); - while ((opt = getopt(argc, argv, "ahom:u:d:b:s:v:c:r:")) != -1) { + while ((opt = getopt(argc, argv, "ahokm:u:d:b:s:v:c:r:")) != -1) { switch (opt) { case 'm': guest_modes_cmdline(optarg); @@ -326,6 +448,9 @@ int main(int argc, char *argv[]) "Invalid number of readers per uffd %d: must be >=1", p.readers_per_uffd); break; + case 'k': + p.kvm_userfault = true; + break; case 'h': default: help(argv[0]); diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index 4c4e5a847f67..0d49a9ce832a 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -582,6 +582,8 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm, void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type, uint64_t guest_paddr, uint32_t slot, uint64_t npages, uint32_t flags, int guest_memfd_fd, uint64_t guest_memfd_offset); +struct userspace_mem_region * +userspace_mem_region_find(struct kvm_vm *vm, uint64_t start, uint64_t end); #ifndef vm_arch_has_protected_memory static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm) @@ -591,6 +593,9 @@ static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm) #endif void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags); +void vm_mem_region_set_flags_userfault(struct kvm_vm *vm, uint32_t slot, + uint32_t flags, + unsigned long *userfault_bitmap); void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa); void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot); struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id); diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index a87988a162f1..a8f6b949ac59 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -634,7 +634,7 @@ void kvm_parse_vcpu_pinning(const char *pcpus_string, uint32_t vcpu_to_pcpu[], * of the regions is returned. Null is returned only when no overlapping * region exists. 
*/ -static struct userspace_mem_region * +struct userspace_mem_region * userspace_mem_region_find(struct kvm_vm *vm, uint64_t start, uint64_t end) { struct rb_node *node; @@ -1149,6 +1149,44 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags) ret, errno, slot, flags); } +/* + * VM Memory Region Flags Set with a userfault bitmap + * + * Input Args: + * vm - Virtual Machine + * flags - Flags for the memslot + * userfault_bitmap - The bitmap to use for KVM_MEM_USERFAULT + * + * Output Args: None + * + * Return: None + * + * Sets the flags of the memory region specified by the value of slot, + * to the values given by flags. This helper adds a way to provide a + * userfault_bitmap. + */ +void vm_mem_region_set_flags_userfault(struct kvm_vm *vm, uint32_t slot, + uint32_t flags, + unsigned long *userfault_bitmap) +{ + int ret; + struct userspace_mem_region *region; + + region = memslot2region(vm, slot); + + TEST_ASSERT(!userfault_bitmap ^ (flags & KVM_MEM_USERFAULT), + "KVM_MEM_USERFAULT must be specified with a bitmap"); + + region->region.flags = flags; + region->region.userfault_bitmap = (__u64)userfault_bitmap; + + ret = __vm_ioctl(vm, KVM_SET_USER_MEMORY_REGION2, &region->region); + + TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION2 IOCTL failed,\n" + " rc: %i errno: %i slot: %u flags: 0x%x", + ret, errno, slot, flags); +} + /* * VM Memory Region Move *

From patchwork Thu Jan 9 20:49:27 2025
X-Patchwork-Submitter: James Houghton
X-Patchwork-Id: 13933227
Subject: [PATCH v2 11/13] KVM: selftests: Inform set_memory_region_test of KVM_MEM_USERFAULT
From: James Houghton
To: Paolo Bonzini, Sean Christopherson
Cc: Jonathan Corbet, Marc Zyngier, Oliver Upton, Yan Zhao, James Houghton, Nikita Kalyazin, Anish Moorthy, Peter Gonda, Peter Xu, David Matlack, wei.w.wang@intel.com, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Date: Thu, 9 Jan 2025 20:49:27 +0000
Message-ID: <20250109204929.1106563-12-jthoughton@google.com>
In-Reply-To: <20250109204929.1106563-1-jthoughton@google.com>

The KVM_MEM_USERFAULT flag is supported iff KVM_CAP_USERFAULT is available.
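For illustration only (not part of this patch): userspace would typically probe the capability before setting the flag. A minimal sketch, assuming headers from a kernel with this series applied (KVM_CAP_USERFAULT is introduced by it; the helper name is invented here):

    #include <stdbool.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Hypothetical probe: KVM_CHECK_EXTENSION returns > 0 if supported. */
    static bool have_kvm_userfault(int kvm_fd)
    {
            return ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_USERFAULT) > 0;
    }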
Signed-off-by: James Houghton --- tools/testing/selftests/kvm/set_memory_region_test.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c index 86ee3385e860..adce75720cc1 100644 --- a/tools/testing/selftests/kvm/set_memory_region_test.c +++ b/tools/testing/selftests/kvm/set_memory_region_test.c @@ -364,6 +364,9 @@ static void test_invalid_memory_region_flags(void) if (kvm_check_cap(KVM_CAP_MEMORY_ATTRIBUTES) & KVM_MEMORY_ATTRIBUTE_PRIVATE) supported_flags |= KVM_MEM_GUEST_MEMFD; + if (kvm_check_cap(KVM_CAP_USERFAULT)) + supported_flags |= KVM_MEM_USERFAULT; + for (i = 0; i < 32; i++) { if ((supported_flags & BIT(i)) && !(v2_only_flags & BIT(i))) continue;

From patchwork Thu Jan 9 20:49:28 2025
X-Patchwork-Submitter: James Houghton
X-Patchwork-Id: 13933229
Subject: [PATCH v2 12/13] KVM: selftests: Add KVM_MEM_USERFAULT + guest_memfd toggle tests
From: James Houghton
To: Paolo Bonzini, Sean Christopherson
Cc: Jonathan Corbet, Marc Zyngier, Oliver Upton, Yan Zhao, James Houghton, Nikita Kalyazin, Anish Moorthy, Peter Gonda, Peter Xu, David Matlack, wei.w.wang@intel.com, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Date: Thu, 9 Jan 2025 20:49:28 +0000
Message-ID: <20250109204929.1106563-13-jthoughton@google.com>
In-Reply-To: <20250109204929.1106563-1-jthoughton@google.com>

Make sure KVM_MEM_USERFAULT can be toggled on and off for KVM_MEM_GUEST_MEMFD memslots.
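For illustration only (not part of this patch): toggling the flag on an existing memslot is just another KVM_SET_USER_MEMORY_REGION2 call. A sketch, assuming struct kvm_userspace_memory_region2 from a kernel with this series applied; the helper is invented here, and it pairs the flag with a bitmap the same way the selftest helper added earlier does:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Hypothetical: enable (bitmap != NULL) or disable KVM_MEM_USERFAULT. */
    static int set_slot_userfault(int vm_fd,
                                  struct kvm_userspace_memory_region2 *region,
                                  unsigned long *bitmap)
    {
            if (bitmap) {
                    region->flags |= KVM_MEM_USERFAULT;
                    region->userfault_bitmap = (uintptr_t)bitmap;
            } else {
                    region->flags &= ~KVM_MEM_USERFAULT;
                    region->userfault_bitmap = 0;
            }
            return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, region);
    }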
Signed-off-by: James Houghton --- .../selftests/kvm/set_memory_region_test.c | 30 +++++++++++++++++++ 1 file changed, 30 insertions(+) diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c index adce75720cc1..1fea8ff0fe74 100644 --- a/tools/testing/selftests/kvm/set_memory_region_test.c +++ b/tools/testing/selftests/kvm/set_memory_region_test.c @@ -556,6 +556,35 @@ static void test_add_overlapping_private_memory_regions(void) close(memfd); kvm_vm_free(vm); } + +static void test_private_memory_region_userfault(void) +{ + struct kvm_vm *vm; + int memfd; + + pr_info("Testing toggling KVM_MEM_USERFAULT on KVM_MEM_GUEST_MEMFD memory regions\n"); + + vm = vm_create_barebones_type(KVM_X86_SW_PROTECTED_VM); + + test_invalid_guest_memfd(vm, vm->kvm_fd, 0, "KVM fd should fail"); + test_invalid_guest_memfd(vm, vm->fd, 0, "VM's fd should fail"); + + memfd = vm_create_guest_memfd(vm, MEM_REGION_SIZE, 0); + + vm_set_user_memory_region2(vm, MEM_REGION_SLOT, KVM_MEM_GUEST_MEMFD, + MEM_REGION_GPA, MEM_REGION_SIZE, 0, memfd, 0); + + vm_set_user_memory_region2(vm, MEM_REGION_SLOT, + KVM_MEM_GUEST_MEMFD | KVM_MEM_USERFAULT, + MEM_REGION_GPA, MEM_REGION_SIZE, 0, memfd, 0); + + vm_set_user_memory_region2(vm, MEM_REGION_SLOT, KVM_MEM_GUEST_MEMFD, + MEM_REGION_GPA, MEM_REGION_SIZE, 0, memfd, 0); + + close(memfd); + + kvm_vm_free(vm); +} #endif int main(int argc, char *argv[]) @@ -582,6 +611,7 @@ int main(int argc, char *argv[]) (kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM))) { test_add_private_memory_region(); test_add_overlapping_private_memory_regions(); + test_private_memory_region_userfault(); } else { pr_info("Skipping tests for KVM_MEM_GUEST_MEMFD memory regions\n"); }

From patchwork Thu Jan 9 20:49:29 2025
X-Patchwork-Submitter: James Houghton
X-Patchwork-Id: 13933230
Subject: [PATCH v2 13/13] KVM: Documentation: Add KVM_CAP_USERFAULT and KVM_MEM_USERFAULT details
From: James Houghton
To: Paolo Bonzini, Sean Christopherson
Cc: Jonathan Corbet, Marc Zyngier, Oliver Upton, Yan Zhao, James Houghton, Nikita Kalyazin, Anish Moorthy, Peter Gonda, Peter Xu, David Matlack, wei.w.wang@intel.com, kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, Bagas Sanjaya
Date: Thu, 9 Jan 2025 20:49:29 +0000
Message-ID: <20250109204929.1106563-14-jthoughton@google.com>
In-Reply-To: <20250109204929.1106563-1-jthoughton@google.com>

Include the note about memory ordering when clearing bits in userfault_bitmap, as it may not be obvious for users.
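For illustration only (not part of this patch): given the bitmap layout documented below (one bit per gfn, bit 0 corresponding to guest_phys_addr), a bitmap that initially marks every page as userfault might be sized and filled like this sketch (the helper name and error handling are invented here):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    /* Hypothetical: one bit per guest page, all pages initially set. */
    static unsigned long *alloc_userfault_bitmap(uint64_t memory_size,
                                                 uint64_t page_size)
    {
            size_t nr_pages = memory_size / page_size;
            size_t nr_longs = (nr_pages + BITS_PER_LONG - 1) / BITS_PER_LONG;
            unsigned long *bitmap = calloc(nr_longs, sizeof(*bitmap));

            if (bitmap)
                    memset(bitmap, 0xff, nr_longs * sizeof(*bitmap));
            return bitmap;
    }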
Signed-off-by: James Houghton Reviewed-by: Bagas Sanjaya --- Documentation/virt/kvm/api.rst | 33 ++++++++++++++++++++++++++++++++- 1 file changed, 32 insertions(+), 1 deletion(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 454c2aaa155e..eec485dcf0bc 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -6281,7 +6281,8 @@ bounds checks apply (use common sense). __u64 guest_memfd_offset; __u32 guest_memfd; __u32 pad1; - __u64 pad2[14]; + __u64 userfault_bitmap; + __u64 pad2[13]; }; A KVM_MEM_GUEST_MEMFD region _must_ have a valid guest_memfd (private memory) and @@ -6297,6 +6298,25 @@ state. At VM creation time, all memory is shared, i.e. the PRIVATE attribute is '0' for all gfns. Userspace can control whether memory is shared/private by toggling KVM_MEMORY_ATTRIBUTE_PRIVATE via KVM_SET_MEMORY_ATTRIBUTES as needed. +When the KVM_MEM_USERFAULT flag is set, userfault_bitmap points to the starting +address for the bitmap that controls if vCPU memory faults should immediately +exit to userspace. If an invalid pointer is provided, at fault time, KVM_RUN +will return -EFAULT. KVM_MEM_USERFAULT is only supported when +KVM_CAP_USERFAULT is supported. + +userfault_bitmap should point to an array of longs where each bit in the array +linearly corresponds to a single gfn. Bit 0 in userfault_bitmap corresponds to +guest_phys_addr, bit 1 corresponds to guest_phys_addr + PAGE_SIZE, etc. If the +bit for a page is set, any vCPU access to that page will exit to userspace with +KVM_MEMORY_EXIT_FLAG_USERFAULT. + +Setting bits in userfault_bitmap has no effect on pages that have already been +mapped by KVM until KVM_MEM_USERFAULT is disabled and re-enabled again. + +Clearing bits in userfault_bitmap should usually be done with a store-release +if changes to guest memory are being made available to the guest via +userfault_bitmap. + S390: ^^^^^ @@ -8251,6 +8271,17 @@ KVM exits with the register state of either the L1 or L2 guest depending on which executed at the time of an exit. Userspace must take care to differentiate between these cases. +7.37 KVM_CAP_USERFAULT +---------------------- + +:Architectures: x86, arm64 +:Returns: Informational only, -EINVAL on direct KVM_ENABLE_CAP. + +The presence of this capability indicates that KVM_SET_USER_MEMORY_REGION2 will +accept KVM_MEM_USERFAULT as a valid memslot flag. + +See KVM_SET_USER_MEMORY_REGION2 for more details. + 8. Other capabilities. ======================
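For illustration only (not part of this series): putting the documented pieces together, registering a userfault-enabled memslot might look like the following sketch, assuming headers from a kernel with this series applied (the helper and its arguments are invented here):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /*
     * Hypothetical registration of a userfault-enabled memslot, following the
     * documentation above: bit 0 of the bitmap corresponds to guest_phys_addr,
     * bit 1 to guest_phys_addr + PAGE_SIZE, and so on.
     */
    static int add_userfault_memslot(int vm_fd, uint32_t slot, uint64_t gpa,
                                     uint64_t size, void *hva,
                                     unsigned long *userfault_bitmap)
    {
            struct kvm_userspace_memory_region2 region = {
                    .slot = slot,
                    .flags = KVM_MEM_USERFAULT,
                    .guest_phys_addr = gpa,
                    .memory_size = size,
                    .userspace_addr = (uintptr_t)hva,
                    .userfault_bitmap = (uintptr_t)userfault_bitmap,
            };

            return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
    }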