[v3,3/3] mm/usercopy: Detect compound page overruns

Message ID 20211213142703.3066590-4-willy@infradead.org (mailing list archive)
State Superseded
Series Assorted improvements to usercopy

Commit Message

Matthew Wilcox Dec. 13, 2021, 2:27 p.m. UTC
Move the compound page overrun detection out of
CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kees Cook <keescook@chromium.org>
---
 mm/usercopy.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)
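
To make the failure mode concrete: with CONFIG_HARDENED_USERCOPY enabled,
the check added here rejects a usercopy that starts inside a compound
(__GFP_COMP) allocation but runs past its end. Below is a minimal sketch
of a caller that would now trip it; leak_past_compound(), ubuf and n are
hypothetical names for illustration, nothing from this series:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/uaccess.h>

/*
 * Hypothetical example, not from this series: copy n bytes to
 * userspace starting 16 bytes before the end of an order-2 compound
 * allocation, so any n > 16 overruns the allocation.
 */
static int leak_past_compound(void __user *ubuf, size_t n)
{
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_COMP, 2);
	void *buf;
	int ret = 0;

	if (!page)
		return -ENOMEM;
	buf = page_address(page);

	/*
	 * With this patch, check_heap_object() sees
	 * offset + n > page_size(page) and calls usercopy_abort()
	 * instead of letting the copy spill into the next page.
	 */
	if (copy_to_user(ubuf, buf + page_size(page) - 16, n))
		ret = -EFAULT;

	__free_pages(page, 2);
	return ret;
}

(The size is a parameter on purpose: check_object_size() is compiled out
for compile-time-constant sizes, so only a non-constant n reaches the
runtime check_heap_object() path at all.)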

Comments

Kees Cook Dec. 13, 2021, 8:52 p.m. UTC | #1
On Mon, Dec 13, 2021 at 02:27:03PM +0000, Matthew Wilcox (Oracle) wrote:
> Move the compound page overrun detection out of
> CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.

I'd argue that everything else enabled by USERCOPY_PAGESPAN could be
removed now too. Do you want to add a 4th patch to rip that out?

https://github.com/KSPP/linux/issues/163

Thanks!

-Kees
Matthew Wilcox Dec. 13, 2021, 11:44 p.m. UTC | #2
On Mon, Dec 13, 2021 at 12:52:22PM -0800, Kees Cook wrote:
> On Mon, Dec 13, 2021 at 02:27:03PM +0000, Matthew Wilcox (Oracle) wrote:
> > Move the compound page overrun detection out of
> > CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.
> 
> I'd argue that everything else enabled by USERCOPY_PAGESPAN could be
> removed now too. Do you want to add a 4th patch to rip that out?
> 
> https://github.com/KSPP/linux/issues/163

I don't mind ... is it your assessment that it's not worth checking for
a copy_to/from_user that spans a boundary between a reserved and
!reserved page, or overlaps the boundary of rodata/bss/data/CMA?

I have no basis on which to judge that, so it's really up to you.
Kees Cook Dec. 13, 2021, 11:50 p.m. UTC | #3
On Mon, Dec 13, 2021 at 11:44:33PM +0000, Matthew Wilcox wrote:
> On Mon, Dec 13, 2021 at 12:52:22PM -0800, Kees Cook wrote:
> > On Mon, Dec 13, 2021 at 02:27:03PM +0000, Matthew Wilcox (Oracle) wrote:
> > > Move the compound page overrun detection out of
> > > CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.
> > 
> > I'd argue that everything else enabled by USERCOPY_PAGESPAN could be
> > removed now too. Do you want to add a 4th patch to rip that out?
> > 
> > https://github.com/KSPP/linux/issues/163
> 
> I don't mind ... is it your assessment that it's not worth checking for
> a copy_to/from_user that spans a boundary between a reserved and
> !reserved page, or overlaps the boundary of rodata/bss/data/CMA?
> 
> I have no basis on which to judge that, so it's really up to you.

It's always been a problem because some architectures mark the kernel as reserved,
so we have to do all the allow-listing first, which is tedious. I'd
certainly like to add all the checks possible, but rationally, we need
to keep only the stuff that is fast, useful, or both. PAGESPAN has been
disabled almost everywhere, too, so I don't think it's a loss.
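
For reference, the logic under discussion is the remainder of
check_page_span() in mm/usercopy.c. Lightly condensed (not verbatim), it
allow-lists the kernel's own data regions and then insists that a
multi-page span be uniformly Reserved or uniformly CMA:

/*
 * Condensed from check_page_span(), guarded by
 * CONFIG_HARDENED_USERCOPY_PAGESPAN; reached only once the object is
 * known to cross a page boundary.  Here "end" is ptr + n - 1.
 */

/* Allow reads of kernel rodata, but never writes into it. */
if (ptr >= (const void *)__start_rodata &&
    end <= (const void *)__end_rodata) {
	if (!to_user)
		usercopy_abort("rodata", NULL, to_user, 0, n);
	return;
}

/*
 * Allow the kernel data and bss regions; this is the allow-listing
 * mentioned above, needed because some architectures mark the kernel
 * image itself as Reserved.
 */
if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
	return;
if (ptr >= (const void *)__bss_start && end <= (const void *)__bss_stop)
	return;

/* Otherwise the whole span must be uniformly Reserved or CMA. */
is_reserved = PageReserved(page);
is_cma = is_migrate_cma_page(page);
if (!is_reserved && !is_cma)
	usercopy_abort("spans multiple pages", NULL, to_user, 0, n);

for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
	page = virt_to_head_page(ptr);
	if (is_reserved && !PageReserved(page))
		usercopy_abort("spans Reserved and non-Reserved pages",
			       NULL, to_user, 0, n);
	if (is_cma && !is_migrate_cma_page(page))
		usercopy_abort("spans CMA and non-CMA pages",
			       NULL, to_user, 0, n);
}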

Patch

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 63476e1506e0..db2e8c4f79fd 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -163,7 +163,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 {
 #ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
 	const void *end = ptr + n - 1;
-	struct page *endpage;
 	bool is_reserved, is_cma;
 
 	/*
@@ -194,11 +193,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
 		return;
 
-	/* Allow if fully inside the same compound (__GFP_COMP) page. */
-	endpage = virt_to_head_page(end);
-	if (likely(endpage == page))
-		return;
-
 	/*
 	 * Reject if range is entirely either Reserved (i.e. special or
 	 * device memory), or CMA. Otherwise, reject since the object spans
@@ -258,6 +252,11 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */
 		__check_heap_object(ptr, n, page, to_user);
+	} else if (PageHead(page)) {
+		/* A compound allocation */
+		unsigned long offset = ptr - page_address(page);
+		if (offset + n > page_size(page))
+			usercopy_abort("page alloc", NULL, to_user, offset, n);
 	} else {
 		/* Verify object does not incorrectly span multiple pages. */
 		check_page_span(ptr, n, page, to_user);
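
A quick worked example of the new branch: with 4 KiB pages, an order-2
__GFP_COMP allocation has page_size(page) == 16384, so a 64-byte copy
starting at offset 16352 gives offset + n == 16416 > 16384 and aborts,
while the same copy at offset 16320 ends exactly on the boundary and is
allowed. Note the placement: PageSlab() is tested first, so slab objects
keep the stricter per-object __check_heap_object() bounds check, and
non-compound pages still fall through to check_page_span().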