
[RFC,3/3] KVM: guest_memfd: Enforce NUMA mempolicy if available

Message ID 20240916165743.201087-4-shivankg@amd.com (mailing list archive)
State New, archived
Series Add NUMA mempolicy support for KVM guest_memfd

Commit Message

Shivank Garg Sept. 16, 2024, 4:57 p.m. UTC
From: Shivansh Dhiman <shivansh.dhiman@amd.com>

Enforce memory policy on guest-memfd to provide proper NUMA support.
Previously, guest-memfd allocations defaulted to the local NUMA node in
the absence of a process mempolicy, so memory ended up on arbitrary
nodes. Moreover, mbind() cannot be used because the memory is not mapped
to userspace.

To support NUMA policies, retrieve the mempolicy struct stored in the
i_private_data field of the guest-memfd inode's mapping and pass it to
filemap_grab_folio_mpol() so that allocations follow the specified
memory policy.

Signed-off-by: Shivansh Dhiman <shivansh.dhiman@amd.com>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
 virt/kvm/guest_memfd.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

Patch

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 8f1877be4976..8553d7069ba8 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -130,12 +130,15 @@  static struct folio *__kvm_gmem_get_folio(struct inode *inode, pgoff_t index,
 					  bool allow_huge)
 {
 	struct folio *folio = NULL;
+	struct mempolicy *mpol;
 
 	if (gmem_2m_enabled && allow_huge)
 		folio = kvm_gmem_get_huge_folio(inode, index, PMD_ORDER);
 
-	if (!folio)
-		folio = filemap_grab_folio(inode->i_mapping, index);
+	if (!folio) {
+		mpol = (struct mempolicy *)(inode->i_mapping->i_private_data);
+		folio = filemap_grab_folio_mpol(inode->i_mapping, index, mpol);
+	}
 
 	pr_debug("%s: allocate folio with PFN %lx order %d\n",
 		 __func__, folio_pfn(folio), folio_order(folio));
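
The hunk above only consumes the policy. As a complementary illustration
(not part of this patch; the helper names below are hypothetical), an
earlier patch in the series could install and release the mempolicy
referenced through i_mapping->i_private_data along these lines, using the
existing mpol_get()/mpol_put() refcounting helpers:

#include <linux/fs.h>
#include <linux/mempolicy.h>

/*
 * Hypothetical helper: attach a mempolicy to the guest_memfd inode so
 * that __kvm_gmem_get_folio() can later find it in
 * i_mapping->i_private_data.
 */
static void kvm_gmem_set_mpol(struct inode *inode, struct mempolicy *mpol)
{
	/* Take a reference so the policy outlives the caller's task. */
	mpol_get(mpol);
	inode->i_mapping->i_private_data = mpol;
}

/*
 * Hypothetical counterpart: detach the policy and drop the reference,
 * e.g. from the guest_memfd inode teardown path.
 */
static void kvm_gmem_clear_mpol(struct inode *inode)
{
	struct mempolicy *mpol = inode->i_mapping->i_private_data;

	inode->i_mapping->i_private_data = NULL;
	mpol_put(mpol);
}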