[v2,3/3] mm/usercopy: Detect compound page overruns

Message ID: 20211006124226.209484-4-willy@infradead.org
State: New
Series: Assorted improvements to usercopy

Commit Message

Matthew Wilcox Oct. 6, 2021, 12:42 p.m. UTC
Move the compound page overrun detection out of
CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kees Cook <keescook@chromium.org>
---
 mm/usercopy.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
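
For context on why the relocation widens coverage: with SLUB, kmalloc()
requests above KMALLOC_MAX_CACHE_SIZE are satisfied directly by the page
allocator with __GFP_COMP, so such buffers are compound pages rather than
slab objects and land in the new PageHead() branch added below. A hedged
sketch of the kind of overrun that any CONFIG_HARDENED_USERCOPY=y build
now catches (ubuf and report_len are hypothetical, not from the patch):

	char *buf = kmalloc(16384, GFP_KERNEL);	/* order-2 compound page with 4K pages */
	size_t report_len = 20000;		/* bug: runs past the allocation */

	/* With this patch applied, hardened usercopy aborts the copy with
	 * a "page alloc" report instead of disclosing adjacent memory. */
	if (copy_to_user(ubuf, buf, report_len))
		return -EFAULT;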

Comments

Matthew Wilcox Oct. 6, 2021, 2:08 p.m. UTC | #1
On Wed, Oct 06, 2021 at 01:42:26PM +0100, Matthew Wilcox (Oracle) wrote:
> Move the compound page overrun detection out of
> CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Acked-by: Kees Cook <keescook@chromium.org>
> ---
>  mm/usercopy.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/usercopy.c b/mm/usercopy.c
> index 63476e1506e0..b825c4344917 100644
> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -194,11 +194,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
>  		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
>  		return;
>  
> -	/* Allow if fully inside the same compound (__GFP_COMP) page. */
> -	endpage = virt_to_head_page(end);
> -	if (likely(endpage == page))
> -		return;
> -
>  	/*
>  	 * Reject if range is entirely either Reserved (i.e. special or
>  	 * device memory), or CMA. Otherwise, reject since the object spans

Needs an extra hunk to avoid a warning with that config:

@@ -163,7 +163,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 {
 #ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
        const void *end = ptr + n - 1;
-       struct page *endpage;
        bool is_reserved, is_cma;

        /*

I'll wait a few days and send a v3.
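
Without that hunk, a CONFIG_HARDENED_USERCOPY_PAGESPAN=y build is left
with a declaration whose only use the main patch deleted, so the compiler
warns along the lines of "unused variable 'endpage' [-Wunused-variable]".
Reconstructed from the two hunks, the function's prologue ends up as:

	static inline void check_page_span(const void *ptr, unsigned long n,
					   struct page *page, bool to_user)
	{
	#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
		const void *end = ptr + n - 1;
		bool is_reserved, is_cma;
		...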
Kees Cook Oct. 6, 2021, 10:07 p.m. UTC | #2
On Wed, Oct 06, 2021 at 03:08:46PM +0100, Matthew Wilcox wrote:
> On Wed, Oct 06, 2021 at 01:42:26PM +0100, Matthew Wilcox (Oracle) wrote:
> > Move the compound page overrun detection out of
> > CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.
> [...]
> 
> Needs an extra hunk to avoid a warning with that config:

Ah yeah, good catch.

> [...]
> I'll wait a few days and send a v3.

When you send v3, can you CC linux-hardening@vger.kernel.org too?

Thanks for poking at this!

-Kees

Patch

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 63476e1506e0..b825c4344917 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -194,11 +194,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
 		return;
 
-	/* Allow if fully inside the same compound (__GFP_COMP) page. */
-	endpage = virt_to_head_page(end);
-	if (likely(endpage == page))
-		return;
-
 	/*
 	 * Reject if range is entirely either Reserved (i.e. special or
 	 * device memory), or CMA. Otherwise, reject since the object spans
@@ -258,6 +253,11 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */
 		__check_heap_object(ptr, n, page, to_user);
+	} else if (PageHead(page)) {
+		/* A compound allocation */
+		unsigned long offset = ptr - page_address(page);
+		if (offset + n > page_size(page))
+			usercopy_abort("page alloc", NULL, to_user, offset, n);
 	} else {
 		/* Verify object does not incorrectly span multiple pages. */
 		check_page_span(ptr, n, page, to_user);
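
The new branch's arithmetic is easy to model in isolation. A minimal
standalone sketch (overruns_compound_page() is an illustrative stand-in
for the page_address()/page_size() logic above, not a kernel API):

	#include <stdio.h>
	#include <stdbool.h>
	#include <stddef.h>

	/* True if copying n bytes at ptr would step past the end of the
	 * compound page that starts at base and spans page_bytes bytes. */
	static bool overruns_compound_page(const char *base, size_t page_bytes,
					   const char *ptr, size_t n)
	{
		size_t offset = (size_t)(ptr - base);

		return offset + n > page_bytes;
	}

	int main(void)
	{
		char page[16384];	/* stands in for an order-2 compound page */

		/* 100 bytes at offset 8: fits, prints 0. */
		printf("%d\n", overruns_compound_page(page, sizeof(page),
						      page + 8, 100));
		/* 8192 bytes at offset 12288: 4096 bytes too far, prints 1.
		 * This is the case the kernel now traps via usercopy_abort(). */
		printf("%d\n", overruns_compound_page(page, sizeof(page),
						      page + 12288, 8192));
		return 0;
	}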