| Field | Value |
| --- | --- |
| Message ID | 20220110231530.665970-5-willy@infradead.org |
| State | Mainlined |
| Commit | 1109a5d907015005cdbe9eaa4fec40213e2f9010 |
| Series | Assorted improvements to usercopy |
On Mon, Jan 10, 2022 at 11:15:30PM +0000, Matthew Wilcox (Oracle) wrote:
> There isn't enough information to make this a useful check any more;
> the useful parts of it were moved in earlier patches, so remove this
> set of checks now.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Thank you!

Acked-by: Kees Cook <keescook@chromium.org>

> [...]
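For context, the check being removed targeted multi-page allocations made without __GFP_COMP: each constituent page of such an allocation carries no record of belonging to a larger whole, so hardened usercopy could not distinguish a legitimate copy spanning those pages from an overflow. A minimal illustration of that situation (hypothetical, not from the thread; `demo_non_compound` is an invented name):

```c
#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Hypothetical illustration: an order-2 allocation made without
 * __GFP_COMP hands back four pages with no metadata tying them
 * together.
 */
static void demo_non_compound(void)
{
	struct page *page = alloc_pages(GFP_KERNEL, 2);	/* 4 pages */
	int i;

	if (!page)
		return;

	for (i = 0; i < 4; i++)
		/* Each page looks like an independent order-0 page. */
		WARN_ON(compound_order(page + i) != 0);

	__free_pages(page, 2);
}
```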
On 11.01.22 00:15, Matthew Wilcox (Oracle) wrote:
> There isn't enough information to make this a useful check any more;
> the useful parts of it were moved in earlier patches, so remove this
> set of checks now.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Very nice cleanup IMHO

Reviewed-by: David Hildenbrand <david@redhat.com>
On Mon, Jan 31, 2022 at 03:27:02PM +0100, David Hildenbrand wrote:
> On 11.01.22 00:15, Matthew Wilcox (Oracle) wrote:
> > There isn't enough information to make this a useful check any more;
> > the useful parts of it were moved in earlier patches, so remove this
> > set of checks now.
> >
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>
> Very nice cleanup IMHO
>
> Reviewed-by: David Hildenbrand <david@redhat.com>

Thanks.  I'm thinking about adding another check.  If the order-0 page
has a non-NULL ->mapping, I believe that is enough to determine that
it's a page cache / KSM / anon page, and as such is guaranteed to
actually be order-0 (and not part of a positive-order allocation
without __GFP_COMP set).  But that would require a bit more auditing,
or just throwing it in and seeing what breaks *whistles*.
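A rough sketch of what that extra check might look like (hypothetical; no code for this was posted in the thread, and `check_order0_mapping` is an invented name built on the mm/usercopy.c helpers of that era):

```c
/*
 * Hypothetical sketch, not a posted patch.  Page cache, KSM and anon
 * pages always have a non-NULL ->mapping (possibly with flag bits
 * set), and such a page is genuinely order-0 when its folio is not
 * large -- so a copy crossing the page boundary can be rejected.
 */
static void check_order0_mapping(const void *ptr, unsigned long n,
				 struct folio *folio, bool to_user)
{
	unsigned long offset;

	if (folio_test_large(folio))
		return;		/* already bounded by folio_size() */
	if (!folio->mapping)
		return;		/* might be part of a non-compound
				 * higher-order allocation: can't tell */

	offset = ptr - folio_address(folio);
	if (offset + n > PAGE_SIZE)
		usercopy_abort("order-0 page", NULL, to_user, offset, n);
}
```

The auditing worry mentioned above is exactly this sketch's failure mode: any caller that legitimately copies across a page boundary starting from a mapped order-0 page would now abort.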
```diff
diff --git a/mm/usercopy.c b/mm/usercopy.c
index e1cb98087a05..94831945d9e7 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -158,64 +158,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
 		usercopy_abort("null address", NULL, to_user, ptr, n);
 }
 
-/* Checks for allocs that are marked in some way as spanning multiple pages. */
-static inline void check_page_span(const void *ptr, unsigned long n,
-				   struct page *page, bool to_user)
-{
-#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
-	const void *end = ptr + n - 1;
-	bool is_reserved, is_cma;
-
-	/*
-	 * Sometimes the kernel data regions are not marked Reserved (see
-	 * check below). And sometimes [_sdata,_edata) does not cover
-	 * rodata and/or bss, so check each range explicitly.
-	 */
-
-	/* Allow reads of kernel rodata region (if not marked as Reserved). */
-	if (ptr >= (const void *)__start_rodata &&
-	    end <= (const void *)__end_rodata) {
-		if (!to_user)
-			usercopy_abort("rodata", NULL, to_user, 0, n);
-		return;
-	}
-
-	/* Allow kernel data region (if not marked as Reserved). */
-	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
-		return;
-
-	/* Allow kernel bss region (if not marked as Reserved). */
-	if (ptr >= (const void *)__bss_start &&
-	    end <= (const void *)__bss_stop)
-		return;
-
-	/* Is the object wholly within one base page? */
-	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
-		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
-		return;
-
-	/*
-	 * Reject if range is entirely either Reserved (i.e. special or
-	 * device memory), or CMA. Otherwise, reject since the object spans
-	 * several independently allocated pages.
-	 */
-	is_reserved = PageReserved(page);
-	is_cma = is_migrate_cma_page(page);
-	if (!is_reserved && !is_cma)
-		usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
-
-	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
-		page = virt_to_head_page(ptr);
-		if (is_reserved && !PageReserved(page))
-			usercopy_abort("spans Reserved and non-Reserved pages",
-				       NULL, to_user, 0, n);
-		if (is_cma && !is_migrate_cma_page(page))
-			usercopy_abort("spans CMA and non-CMA pages", NULL,
-				       to_user, 0, n);
-	}
-#endif
-}
-
 static inline void check_heap_object(const void *ptr, unsigned long n,
 				     bool to_user)
 {
@@ -257,9 +199,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		unsigned long offset = ptr - folio_address(folio);
 		if (offset + n > folio_size(folio))
 			usercopy_abort("page alloc", NULL, to_user, offset, n);
-	} else {
-		/* Verify object does not incorrectly span multiple pages. */
-		check_page_span(ptr, n, folio_page(folio, 0), to_user);
 	}
 }
 
diff --git a/security/Kconfig b/security/Kconfig
index 0b847f435beb..5b289b329a51 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -160,20 +160,9 @@ config HARDENED_USERCOPY
 	  copy_from_user() functions) by rejecting memory ranges that
 	  are larger than the specified heap object, span multiple
 	  separately allocated pages, are not on the process stack,
-	  or are part of the kernel text. This kills entire classes
+	  or are part of the kernel text. This prevents entire classes
 	  of heap overflow exploits and similar kernel memory exposures.
 
-config HARDENED_USERCOPY_PAGESPAN
-	bool "Refuse to copy allocations that span multiple pages"
-	depends on HARDENED_USERCOPY
-	depends on EXPERT
-	help
-	  When a multi-page allocation is done without __GFP_COMP,
-	  hardened usercopy will reject attempts to copy it. There are,
-	  however, several cases of this in the kernel that have not all
-	  been removed. This config is intended to be used only while
-	  trying to find such users.
-
 config FORTIFY_SOURCE
 	bool "Harden common str/mem functions against buffer overflows"
 	depends on ARCH_HAS_FORTIFY_SOURCE
```
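With the pagespan check gone, out-of-bounds copies from page allocations are still caught by the surviving folio_size() bounds check, but only when the allocation is a compound page (__GFP_COMP), since only then does the folio record the allocation's true extent. A hypothetical demonstration (`demo_usercopy_bounds` and `udst` are invented; sketch only, and the second copy would BUG rather than return):

```c
#include <linux/gfp.h>
#include <linux/uaccess.h>

/*
 * Hypothetical demonstration -- udst is assumed to be a valid
 * user-space buffer of at least two pages.
 */
static int demo_usercopy_bounds(void __user *udst)
{
	/* An order-1 compound allocation: one folio of two pages. */
	unsigned long buf = __get_free_pages(GFP_KERNEL | __GFP_COMP, 1);

	if (!buf)
		return -ENOMEM;

	/* Fine: offset 0 + 2 * PAGE_SIZE == folio_size(). */
	if (copy_to_user(udst, (void *)buf, 2 * PAGE_SIZE))
		pr_info("fault\n");

	/*
	 * With CONFIG_HARDENED_USERCOPY this aborts in the surviving
	 * "page alloc" check: offset (16) + n > folio_size().
	 */
	copy_to_user(udst, (void *)(buf + 16), 2 * PAGE_SIZE);

	free_pages(buf, 1);
	return 0;
}
```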
There isn't enough information to make this a useful check any more;
the useful parts of it were moved in earlier patches, so remove this
set of checks now.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/usercopy.c    | 61 ------------------------------------------------
 security/Kconfig | 13 +----------
 2 files changed, 1 insertion(+), 73 deletions(-)