From patchwork Tue Sep 10 23:44:00 2024
X-Patchwork-Submitter: Ackerley Tng
X-Patchwork-Id: 13799502
Date: Tue, 10 Sep 2024 23:44:00 +0000
Subject: [RFC PATCH 29/39] KVM: Handle conversions in the SET_MEMORY_ATTRIBUTES ioctl
X-Mailer: git-send-email 2.46.0.598.g6f2099f65c-goog
From: Ackerley Tng
To: tabba@google.com, quic_eberman@quicinc.com, roypat@amazon.co.uk,
 jgg@nvidia.com, peterx@redhat.com, david@redhat.com, rientjes@google.com,
 fvdl@google.com, jthoughton@google.com, seanjc@google.com,
 pbonzini@redhat.com, zhiquan1.li@intel.com, fan.du@intel.com,
 jun.miao@intel.com, isaku.yamahata@intel.com, muchun.song@linux.dev,
 mike.kravetz@oracle.com
Cc: erdemaktas@google.com, vannapurve@google.com, ackerleytng@google.com,
 qperret@google.com, jhubbard@nvidia.com, willy@infradead.org,
 shuah@kernel.org, brauner@kernel.org, bfoster@redhat.com,
 kent.overstreet@linux.dev, pvorel@suse.cz, rppt@kernel.org,
 richard.weiyang@gmail.com, anup@brainfault.org, haibo1.xu@intel.com,
 ajones@ventanamicro.com, vkuznets@redhat.com,
 maciej.wieczor-retman@intel.com, pgonda@google.com, oliver.upton@linux.dev,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, kvm@vger.kernel.org,
 linux-kselftest@vger.kernel.org, linux-fsdevel@kvack.org

The key steps for a private to shared conversion are:

1. Unmap the pages from the guest page tables.
2. Set the pages associated with the requested range in the memslot to be
   faultable.
3. Update kvm->mem_attr_array.

The key steps for a shared to private conversion are:

1. Check for and disallow set_memory_attributes if any page in the range
   is still mapped or pinned, by
   a. Updating guest_memfd's faultability to prevent future faulting, and
   b. Returning -EINVAL if any pages are still mapped or pinned.
2. Update kvm->mem_attr_array.

The userspace VMM must ensure shared pages are not in use before
requesting a conversion, since any fault racing with this call will get
a SIGBUS.
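
As an illustration, here is a minimal sketch (not part of this patch) of
how a userspace VMM could drive a conversion through the
KVM_SET_MEMORY_ATTRIBUTES ioctl; vm_fd is assumed to be an open KVM VM
file descriptor:

	#include <stdbool.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int convert_range(int vm_fd, __u64 gpa, __u64 size, bool to_private)
	{
		struct kvm_memory_attributes attrs = {
			.address = gpa,
			.size = size,
			.attributes = to_private ? KVM_MEMORY_ATTRIBUTE_PRIVATE : 0,
		};

		/*
		 * With this patch, a shared->private request fails with
		 * errno EINVAL while any page in the range is still mapped
		 * into or pinned by host userspace.
		 */
		return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
	}

Setting KVM_MEMORY_ATTRIBUTE_PRIVATE requests a shared to private
conversion; clearing it requests the reverse.
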
Co-developed-by: Ackerley Tng
Signed-off-by: Ackerley Tng
Co-developed-by: Vishal Annapurve
Signed-off-by: Vishal Annapurve
---
 include/linux/kvm_host.h |   1 +
 virt/kvm/guest_memfd.c   | 207 +++++++++++++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c      |  15 +++
 virt/kvm/kvm_mm.h        |   9 ++
 4 files changed, 232 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 79a6b1a63027..10993cd33e34 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2476,6 +2476,7 @@ typedef int (*kvm_gmem_populate_cb)(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
 
 long kvm_gmem_populate(struct kvm *kvm, gfn_t gfn, void __user *src, long npages,
 		       kvm_gmem_populate_cb post_populate, void *opaque);
+
 #endif
 
 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 1d4dfe0660ad..110c4bbb004b 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -1592,4 +1592,211 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 	return ret && !i ? ret : i;
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_populate);
+
+/**
+ * Returns true if pages in range [@start, @end) in inode @inode have no
+ * userspace mappings.
+ */
+static bool kvm_gmem_no_mappings_range(struct inode *inode, pgoff_t start, pgoff_t end)
+{
+	pgoff_t index;
+	bool checked_indices_unmapped;
+
+	filemap_invalidate_lock_shared(inode->i_mapping);
+
+	/* TODO: replace iteration with filemap_get_folios() for efficiency. */
+	checked_indices_unmapped = true;
+	for (index = start; checked_indices_unmapped && index < end;) {
+		struct folio *folio;
+
+		/* Don't use kvm_gmem_get_folio to avoid allocating */
+		folio = filemap_lock_folio(inode->i_mapping, index);
+		if (IS_ERR(folio)) {
+			++index;
+			continue;
+		}
+
+		if (folio_mapped(folio) || folio_maybe_dma_pinned(folio))
+			checked_indices_unmapped = false;
+		else
+			index = folio_next_index(folio);
+
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+
+	filemap_invalidate_unlock_shared(inode->i_mapping);
+	return checked_indices_unmapped;
+}
+
+/**
+ * Returns true if pages in range [@start, @end) in memslot @slot have no
+ * userspace mappings.
+ */
+static bool kvm_gmem_no_mappings_slot(struct kvm_memory_slot *slot,
+				      gfn_t start, gfn_t end)
+{
+	pgoff_t offset_start;
+	pgoff_t offset_end;
+	struct file *file;
+	bool ret;
+
+	offset_start = start - slot->base_gfn + slot->gmem.pgoff;
+	offset_end = end - slot->base_gfn + slot->gmem.pgoff;
+
+	file = kvm_gmem_get_file(slot);
+	if (!file)
+		return false;
+
+	ret = kvm_gmem_no_mappings_range(file_inode(file), offset_start, offset_end);
+
+	fput(file);
+
+	return ret;
+}
+
+/**
+ * Returns true if pages in range [@start, @end) have no host userspace mappings.
+ */
+static bool kvm_gmem_no_mappings(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	int i;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
+		struct kvm_memslot_iter iter;
+		struct kvm_memslots *slots;
+
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
+			struct kvm_memory_slot *slot;
+			gfn_t gfn_start;
+			gfn_t gfn_end;
+
+			slot = iter.slot;
+			gfn_start = max(start, slot->base_gfn);
+			gfn_end = min(end, slot->base_gfn + slot->npages);
+
+			if (iter.slot->flags & KVM_MEM_GUEST_MEMFD &&
+			    !kvm_gmem_no_mappings_slot(iter.slot, gfn_start, gfn_end))
+				return false;
+		}
+	}
+
+	return true;
+}
+
+/**
+ * Set faultability of given range of gfns [@start, @end) in memslot @slot to
+ * @faultable.
+ */
+static void kvm_gmem_set_faultable_slot(struct kvm_memory_slot *slot, gfn_t start,
+					gfn_t end, bool faultable)
+{
+	pgoff_t start_offset;
+	pgoff_t end_offset;
+	struct file *file;
+
+	file = kvm_gmem_get_file(slot);
+	if (!file)
+		return;
+
+	start_offset = start - slot->base_gfn + slot->gmem.pgoff;
+	end_offset = end - slot->base_gfn + slot->gmem.pgoff;
+
+	WARN_ON(kvm_gmem_set_faultable(file_inode(file), start_offset, end_offset,
+				       faultable));
+
+	fput(file);
+}
+
+/**
+ * Set faultability of given range of gfns [@start, @end) across all memslots
+ * of @kvm to @faultable.
+ */
+static void kvm_gmem_set_faultable_vm(struct kvm *kvm, gfn_t start, gfn_t end,
+				      bool faultable)
+{
+	int i;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
+		struct kvm_memslot_iter iter;
+		struct kvm_memslots *slots;
+
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
+			struct kvm_memory_slot *slot;
+			gfn_t gfn_start;
+			gfn_t gfn_end;
+
+			slot = iter.slot;
+			gfn_start = max(start, slot->base_gfn);
+			gfn_end = min(end, slot->base_gfn + slot->npages);
+
+			if (iter.slot->flags & KVM_MEM_GUEST_MEMFD) {
+				kvm_gmem_set_faultable_slot(slot, gfn_start,
+							    gfn_end, faultable);
+			}
+		}
+	}
+}
+
+/**
+ * Returns 0 if guest_memfd permits setting range [@start, @end) to PRIVATE,
+ * or -EINVAL otherwise.
+ *
+ * If memory is faulted in to host userspace and a request was made to set the
+ * memory to PRIVATE, the faulted in pages must not be mapped or pinned for
+ * the request to be permitted.
+ */
+static int kvm_gmem_should_set_attributes_private(struct kvm *kvm, gfn_t start,
+						  gfn_t end)
+{
+	kvm_gmem_set_faultable_vm(kvm, start, end, false);
+
+	if (kvm_gmem_no_mappings(kvm, start, end))
+		return 0;
+
+	kvm_gmem_set_faultable_vm(kvm, start, end, true);
+	return -EINVAL;
+}
+
+/**
+ * Returns 0, since guest_memfd always permits setting range [@start, @end)
+ * to SHARED.
+ *
+ * Because this allows pages to be faulted in to userspace, this must only be
+ * called after the pages have been invalidated from guest page tables.
+ */
+static int kvm_gmem_should_set_attributes_shared(struct kvm *kvm, gfn_t start,
+						 gfn_t end)
+{
+	/* Always okay to set shared, hence set range faultable here. */
+	kvm_gmem_set_faultable_vm(kvm, start, end, true);
+
+	return 0;
+}
+
+/**
+ * Returns 0 if guest_memfd permits setting attributes @attrs for range
+ * [@start, @end), or negative error otherwise.
+ *
+ * If memory is faulted in to host userspace and a request was made to set the
+ * memory to PRIVATE, the faulted in pages must not be mapped or pinned for
+ * the request to be permitted.
+ *
+ * Because this may allow pages to be faulted in to userspace when requested to
+ * set attributes to shared, this must only be called after the pages have been
+ * invalidated from guest page tables.
+ */
+int kvm_gmem_should_set_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
+				   unsigned long attrs)
+{
+	if (attrs & KVM_MEMORY_ATTRIBUTE_PRIVATE)
+		return kvm_gmem_should_set_attributes_private(kvm, start, end);
+	else
+		return kvm_gmem_should_set_attributes_shared(kvm, start, end);
+}
+
 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 92901656a0d4..1a7bbcc31b7e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2524,6 +2524,13 @@ static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
 		.on_lock = kvm_mmu_invalidate_end,
 		.may_block = true,
 	};
+	struct kvm_mmu_notifier_range error_set_range = {
+		.start = start,
+		.end = end,
+		.handler = (void *)kvm_null_fn,
+		.on_lock = kvm_mmu_invalidate_end,
+		.may_block = true,
+	};
 	unsigned long i;
 	void *entry;
 	int r = 0;
@@ -2548,6 +2555,10 @@ static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
 
 	kvm_handle_gfn_range(kvm, &pre_set_range);
 
+	r = kvm_gmem_should_set_attributes(kvm, start, end, attributes);
+	if (r)
+		goto err;
+
 	for (i = start; i < end; i++) {
 		r = xa_err(xa_store(&kvm->mem_attr_array, i, entry,
 				    GFP_KERNEL_ACCOUNT));
@@ -2560,6 +2571,10 @@ static int kvm_vm_set_mem_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
 	mutex_unlock(&kvm->slots_lock);
 
 	return r;
+
+err:
+	kvm_handle_gfn_range(kvm, &error_set_range);
+	goto out_unlock;
 }
 
 static int kvm_vm_ioctl_set_mem_attributes(struct kvm *kvm, struct kvm_memory_attributes *attrs)
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index 715f19669d01..d8ff2b380d0e 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -41,6 +41,8 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
 int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 		  unsigned int fd, loff_t offset);
 void kvm_gmem_unbind(struct kvm_memory_slot *slot);
+int kvm_gmem_should_set_attributes(struct kvm *kvm, gfn_t start, gfn_t end,
+				   unsigned long attrs);
 #else
 static inline void kvm_gmem_init(struct module *module)
 {
@@ -59,6 +61,13 @@ static inline void kvm_gmem_unbind(struct kvm_memory_slot *slot)
 {
 	WARN_ON_ONCE(1);
 }
+
+static inline int kvm_gmem_should_set_attributes(struct kvm *kvm, gfn_t start,
+						 gfn_t end, unsigned long attrs)
+{
+	return 0;
+}
+
 #endif /* CONFIG_KVM_PRIVATE_MEM */
 
 #endif /* __KVM_MM_H__ */
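
For completeness, a selftest-style sketch (also not part of this patch) of
the failure mode introduced here. It assumes the guest_memfd mmap support
added earlier in this series, and that the range starting at gpa is bound
at offset 0 of gmem_fd; the helper name is hypothetical:

	#include <assert.h>
	#include <errno.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <linux/kvm.h>

	static void check_conversion_blocked_while_mapped(int vm_fd, int gmem_fd,
							  __u64 gpa, __u64 size)
	{
		struct kvm_memory_attributes attrs = {
			.address = gpa,
			.size = size,
			.attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
		};
		char *mem;

		mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
			   gmem_fd, 0);
		assert(mem != MAP_FAILED);
		memset(mem, 0, size);	/* fault the pages in as shared */

		/* Range is still mapped in host userspace: expect EINVAL. */
		assert(ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs) == -1 &&
		       errno == EINVAL);

		/* Drop the mapping; the same conversion should now succeed. */
		munmap(mem, size);
		assert(ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs) == 0);
	}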