diff mbox series

[RFC,4/4] mcpage: get_unmapped_area return mcpage size aligned addr

Message ID 20230109072232.2398464-5-fengwei.yin@intel.com (mailing list archive)
State New
Series Multiple consecutive page for anonymous mapping | expand

Commit Message

Yin, Fengwei Jan. 9, 2023, 7:22 a.m. UTC
For x86_64, let mmap start from mcpage size aligned address.

The workload is Firefox with a single tab loading the front page of
"www.lwn.net". With mcpage set to 2, the number of times an mcpage
could not be used because it fell outside the VMA range was counted:
                         run1  run2  run3   avg  stddev
    With this patch:     1453  1434  1428  1438    13.0
    Without this patch:  1536  1467  1493  1498    34.8

The failure count drops by about 4.2% with the patch, so the chance
of using mcpage for an anonymous mapping increases accordingly.

To check for general impact from the sparser virtual address space,
will-it-scale:malloc1, will-it-scale:page_fault1 and a kernel build
were run with and without the change on top of v6.1-rc7. The results
show no performance change introduced by this patch:

malloc1:
        v6.1-rc7 v6.1-rc7 + this patch
---------------- ---------------------------
     23338            -0.5%      23210        will-it-scale.per_process_ops

page_fault1:
        v6.1-rc7 v6.1-rc7 + this patch
---------------- ---------------------------
     96322            -0.1%      96222        will-it-scale.per_process_ops

kernel build:
        v6.1-rc7 v6.1-rc7 + this patch
---------------- ---------------------------
     28.45            +0.2%      28.52        kbuild.buildtime_per_iteration

One drawback of this change is that the number of effective ASLR bits
is reduced by mcpage_order bits.

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
---
 arch/x86/kernel/sys_x86_64.c | 8 ++++++++
 1 file changed, 8 insertions(+)
Patch

diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 8cc653ffdccd..9b5617973e81 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -154,6 +154,10 @@  arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		info.align_mask = get_align_mask();
 		info.align_offset += get_align_bits();
 	}
+
+	if (info.align_mask < ~MCPAGE_MASK)
+		info.align_mask = ~MCPAGE_MASK;
+
 	return vm_unmapped_area(&info);
 }
 
@@ -212,6 +216,10 @@  arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 		info.align_mask = get_align_mask();
 		info.align_offset += get_align_bits();
 	}
+
+	if (info.align_mask < ~MCPAGE_MASK)
+		info.align_mask = ~MCPAGE_MASK;
+
 	addr = vm_unmapped_area(&info);
 	if (!(addr & ~PAGE_MASK))
 		return addr;