
mm: Stop PMD alignment for PIE shared objects

Message ID 20241219211552.1450226-1-audra@redhat.com (mailing list archive)
State New

Commit Message

Audra Mitchell Dec. 19, 2024, 9:15 p.m. UTC
After commit 1854bc6e2420 ("mm/readahead: Align file mappings for non-DAX")
any request through thp_get_unmapped_area would align to a PMD_SIZE,
causing shared objects to have less randomization than previously (9 fewer
bits for 2MB PMDs). As these lower 9 bits are the most impactful for
ASLR, this change could be argued to have an impact on security.
Running the pie-so program [1] multiple times we can see the randomization
loss as the lower address bits get aligned to a 2MB size on x86 (pie-so):

    # ./all-gather && ./all-bits
    ---------[SNIP]---------
    aslr heap       19 bits
    aslr exec       00 bits
    aslr mmap       29 bits
    aslr so         00 bits
    aslr stack      31 bits
    aslr pie-exec   30 bits
    aslr pie-heap   30 bits
    aslr pie-so     20 bits
    aslr pie-mmap   29 bits
    aslr pie-stack  30 bits

Fix this issue by checking that the requested length is aligned to PMD_SIZE;
otherwise, fall back to mm_get_unmapped_area_vmflags().

[1]  https://github.com/stevegrubb/distro-elf-inspector
Fixes: 1854bc6e2420 ("mm/readahead: Align file mappings for non-DAX")

Signed-off-by: Audra Mitchell <audra@redhat.com>
---
 mm/huge_memory.c | 3 +++
 1 file changed, 3 insertions(+)

Comments

Matthew Wilcox (Oracle) Dec. 19, 2024, 9:30 p.m. UTC | #1
On Thu, Dec 19, 2024 at 04:15:52PM -0500, Audra Mitchell wrote:
> After commit 1854bc6e2420 ("mm/readahead: Align file mappings for non-DAX")
> any request through thp_get_unmapped_area would align to a PMD_SIZE,
> causing shared objects to have less randomization than previously (9 less
> bits for 2MB PMDs). As these lower 9 bits are the most impactful for
> ASLR, this change could be argued to have an impact on security.

Yes, very tiresome people have been making that argument for a long
time.  Do you have anything further to add to the discussion that
happened here:

https://lore.kernel.org/linux-mm/20240118133504.2910955-1-shy828301@gmail.com/

particularly in light of 3afb76a66b55 existing.

> Fix this issue by checking that the request is aligned to the PMD_SIZE,
> otherwise fall back to mm_get_unmapped_area_vmflags().

NAK this version anyway.  Even if the executable is, say, 2.1MB in size,
we still want the first 2MB of the file to be covered with a PMD
mapping.
Audra Mitchell Dec. 19, 2024, 9:59 p.m. UTC | #2
On Thu, Dec 19, 2024 at 09:30:58PM +0000, Matthew Wilcox wrote:
> On Thu, Dec 19, 2024 at 04:15:52PM -0500, Audra Mitchell wrote:
> > After commit 1854bc6e2420 ("mm/readahead: Align file mappings for non-DAX")
> > any request through thp_get_unmapped_area would align to a PMD_SIZE,
> > causing shared objects to have less randomization than previously (9 less
> > bits for 2MB PMDs). As these lower 9 bits are the most impactful for
> > ASLR, this change could be argued to have an impact on security.
> 
> Yes, very tiresome people have been making that argument for a long
> time.  Do you have anything further to add to the discussion that
> happened here:
> 
> https://lore.kernel.org/linux-mm/20240118133504.2910955-1-shy828301@gmail.com/
> 
> particularly in light of 3afb76a66b55 existing.

Hey all,

Happy Holidays! I was not aware of this discussion, so thank you for bringing
it to my attention. I've read over it and I'll take a closer look after the
holidays. 

Respectfully, I'm trying to understand your view of this problem given your
feedback. Can you clarify what you mean by "particularly in light of"
regarding 3afb76a66b55? It looks to me like that commit was reverted by
14d7c92f8df9, and based on Linus' commit message there, there does appear to
be some room for improvement on this topic.

I did play around with the idea of checking whether KASLR was enabled and
only doing the early return in that case, so that users could opt in to the
solution or not.

Thanks in advance for your insight!

 
> > Fix this issue by checking that the request is aligned to the PMD_SIZE,
> > otherwise fall back to mm_get_unmapped_area_vmflags().
> 
> NAK this version anyway.  Even if the executable is, say, 2.1MB in size,
> we still want the first 2MB of the file to be covered with a PMD
> mapping.
>
Rafael Aquini Dec. 20, 2024, 6:20 p.m. UTC | #3
On Thu, Dec 19, 2024 at 09:30:58PM +0000, Matthew Wilcox wrote:
> On Thu, Dec 19, 2024 at 04:15:52PM -0500, Audra Mitchell wrote:
> > After commit 1854bc6e2420 ("mm/readahead: Align file mappings for non-DAX")
> > any request through thp_get_unmapped_area would align to a PMD_SIZE,
> > causing shared objects to have less randomization than previously (9 less
> > bits for 2MB PMDs). As these lower 9 bits are the most impactful for
> > ASLR, this change could be argued to have an impact on security.
> 
> Yes, very tiresome people have been making that argument for a long
> time.  Do you have anything further to add to the discussion that
> happened here:
> 
> https://lore.kernel.org/linux-mm/20240118133504.2910955-1-shy828301@gmail.com/
> 
> particularly in light of 3afb76a66b55 existing.
>

Linus, however, disliked the config question and ended up reverting
3afb76a66b55 a while later.

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ee335d96fc39..696caf6cbf4a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1101,6 +1101,9 @@  static unsigned long __thp_get_unmapped_area(struct file *filp,
 	if (len_pad < len || (off + len_pad) < off)
 		return 0;
 
+	if (!IS_ALIGNED(len, PMD_SIZE))
+		return 0;
+
 	ret = mm_get_unmapped_area_vmflags(current->mm, filp, addr, len_pad,
 					   off >> PAGE_SHIFT, flags, vm_flags);