From patchwork Thu Dec 16 21:53:48 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12682815
From: "Matthew Wilcox (Oracle)"
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-hardening@vger.kernel.org, William Kucharski
Subject: [PATCH v4 1/4] mm/usercopy: Check kmap addresses properly
Date: Thu, 16 Dec 2021 21:53:48 +0000
Message-Id: <20211216215351.3811471-2-willy@infradead.org>
In-Reply-To: <20211216215351.3811471-1-willy@infradead.org>
References: <20211216215351.3811471-1-willy@infradead.org>
X-Mailing-List: linux-hardening@vger.kernel.org

If you are copying to an address in the kmap region, you may not copy
across a page boundary, no matter what the size of the underlying
allocation.  You can't kmap() a slab page because slab pages always
come from low memory.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kees Cook
Reviewed-by: William Kucharski
---
 arch/x86/include/asm/highmem.h   |  1 +
 include/linux/highmem-internal.h | 10 ++++++++++
 mm/usercopy.c                    | 16 ++++++++++------
 3 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index 032e020853aa..731ee7cc40a5 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include

 /* declarations for highmem.c */
 extern unsigned long highstart_pfn, highend_pfn;
diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index 0a0b2b09b1b8..01fb76d101b0 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -149,6 +149,11 @@ static inline void totalhigh_pages_add(long count)
 	atomic_long_add(count, &_totalhigh_pages);
 }

+static inline bool is_kmap_addr(const void *x)
+{
+	unsigned long addr = (unsigned long)x;
+	return addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP);
+}
 #else /* CONFIG_HIGHMEM */

 static inline struct page *kmap_to_page(void *addr)
@@ -234,6 +239,11 @@ static inline void __kunmap_atomic(void *addr)
 static inline unsigned int nr_free_highpages(void) { return 0; }
 static inline unsigned long totalhigh_pages(void) { return 0UL; }

+static inline bool is_kmap_addr(const void *x)
+{
+	return false;
+}
+
 #endif /* CONFIG_HIGHMEM */

 /*
diff --git a/mm/usercopy.c b/mm/usercopy.c
index b3de3c4eefba..8c039302465f 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -228,12 +228,16 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (!virt_addr_valid(ptr))
 		return;

-	/*
-	 * When CONFIG_HIGHMEM=y, kmap_to_page() will give either the
-	 * highmem page or fallback to virt_to_page(). The following
-	 * is effectively a highmem-aware virt_to_head_page().
-	 */
-	page = compound_head(kmap_to_page((void *)ptr));
+	if (is_kmap_addr(ptr)) {
+		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);
+
+		if ((unsigned long)ptr + n - 1 > page_end)
+			usercopy_abort("kmap", NULL, to_user,
+				       offset_in_page(ptr), n);
+		return;
+	}
+
+	page = virt_to_head_page(ptr);

 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */
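
The kmap branch added above boils down to a single containment test: the copy
must end on the same page it starts on.  A minimal userspace sketch of that
arithmetic (not kernel code; the 4096-byte PAGE_SIZE, the sample addresses and
the crosses_page() helper are assumptions made purely for illustration):

#include <stdio.h>
#include <stdbool.h>

#define PAGE_SIZE 4096UL	/* assumed page size for this example */

/* Does an n-byte copy starting at ptr cross a page boundary? */
static bool crosses_page(unsigned long ptr, unsigned long n)
{
	/* Address of the last byte of the page containing ptr, as in the patch. */
	unsigned long page_end = ptr | (PAGE_SIZE - 1);

	return ptr + n - 1 > page_end;
}

int main(void)
{
	/* 100 bytes starting 50 bytes before the end of a page: rejected. */
	printf("%d\n", crosses_page(0xc0000fceUL, 100));	/* prints 1 */

	/* 100 bytes comfortably inside one page: allowed. */
	printf("%d\n", crosses_page(0xc0000100UL, 100));	/* prints 0 */
	return 0;
}

In the kernel, the rejected case ends in usercopy_abort("kmap", ...), reporting
offset_in_page(ptr) rather than the full address.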
From patchwork Thu Dec 16 21:53:49 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12682811
From: "Matthew Wilcox (Oracle)"
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-hardening@vger.kernel.org, William Kucharski
Subject: [PATCH v4 2/4] mm/usercopy: Detect vmalloc overruns
Date: Thu, 16 Dec 2021 21:53:49 +0000
Message-Id: <20211216215351.3811471-3-willy@infradead.org>
In-Reply-To: <20211216215351.3811471-1-willy@infradead.org>
References: <20211216215351.3811471-1-willy@infradead.org>
X-Mailing-List: linux-hardening@vger.kernel.org

If you have a vmalloc() allocation, or an address from calling vmap(),
you cannot overrun the vm_area which describes it, regardless of the
size of the underlying allocation.  This probably doesn't do much for
security because vmalloc comes with guard pages these days, but it
prevents usercopy aborts when copying to a vmap() of smaller pages.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kees Cook
Reviewed-by: William Kucharski
---
 mm/usercopy.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 8c039302465f..63476e1506e0 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -237,6 +238,21 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		return;
 	}

+	if (is_vmalloc_addr(ptr)) {
+		struct vm_struct *vm = find_vm_area(ptr);
+		unsigned long offset;
+
+		if (!vm) {
+			usercopy_abort("vmalloc", "no area", to_user, 0, n);
+			return;
+		}
+
+		offset = ptr - vm->addr;
+		if (offset + n > vm->size)
+			usercopy_abort("vmalloc", NULL, to_user, offset, n);
+		return;
+	}
+
 	page = virt_to_head_page(ptr);

 	if (PageSlab(page)) {
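
The new branch asks one question of the vmalloc area: does offset + n stay
within vm->size?  A userspace sketch of that bounds check (not kernel code;
struct vm_area_stub and vmalloc_copy_ok() are made-up stand-ins for the
struct vm_struct fields the patch consults, and find_vm_area() is replaced by
passing the area in directly):

#include <stdio.h>
#include <stdbool.h>

/* Stand-in for the two struct vm_struct fields the check uses. */
struct vm_area_stub {
	char *addr;		/* start of the mapped area */
	unsigned long size;	/* size of the area */
};

/* Does an n-byte copy starting at ptr stay inside the area? */
static bool vmalloc_copy_ok(const struct vm_area_stub *vm,
			    const char *ptr, unsigned long n)
{
	unsigned long offset = ptr - vm->addr;

	return offset + n <= vm->size;
}

int main(void)
{
	static char backing[8192];
	struct vm_area_stub vm = { .addr = backing, .size = sizeof(backing) };

	/* 200 bytes at offset 100 of an 8 KiB area: allowed. */
	printf("%d\n", vmalloc_copy_ok(&vm, backing + 100, 200));	/* prints 1 */

	/* 500 bytes at offset 8000: overruns the area, rejected. */
	printf("%d\n", vmalloc_copy_ok(&vm, backing + 8000, 500));	/* prints 0 */
	return 0;
}

A NULL return from find_vm_area() is handled separately in the patch with a
"vmalloc"/"no area" abort.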
From patchwork Thu Dec 16 21:53:50 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12682813
From: "Matthew Wilcox (Oracle)"
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-hardening@vger.kernel.org, William Kucharski
Subject: [PATCH v4 3/4] mm/usercopy: Detect compound page overruns
Date: Thu, 16 Dec 2021 21:53:50 +0000
Message-Id: <20211216215351.3811471-4-willy@infradead.org>
In-Reply-To: <20211216215351.3811471-1-willy@infradead.org>
References: <20211216215351.3811471-1-willy@infradead.org>
X-Mailing-List: linux-hardening@vger.kernel.org

Move the compound page overrun detection out of
CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kees Cook
Reviewed-by: William Kucharski
---
 mm/usercopy.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 63476e1506e0..db2e8c4f79fd 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -163,7 +163,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 {
 #ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
 	const void *end = ptr + n - 1;
-	struct page *endpage;
 	bool is_reserved, is_cma;

 	/*
@@ -194,11 +193,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
 		return;

-	/* Allow if fully inside the same compound (__GFP_COMP) page. */
-	endpage = virt_to_head_page(end);
-	if (likely(endpage == page))
-		return;
-
 	/*
 	 * Reject if range is entirely either Reserved (i.e. special or
 	 * device memory), or CMA. Otherwise, reject since the object spans
@@ -258,6 +252,11 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */
 		__check_heap_object(ptr, n, page, to_user);
+	} else if (PageHead(page)) {
+		/* A compound allocation */
+		unsigned long offset = ptr - page_address(page);
+		if (offset + n > page_size(page))
+			usercopy_abort("page alloc", NULL, to_user, offset, n);
 	} else {
 		/* Verify object does not incorrectly span multiple pages. */
 		check_page_span(ptr, n, page, to_user);
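
For a compound (head) page the limit is page_size(page), i.e. PAGE_SIZE shifted
by the compound order.  A userspace sketch of the same comparison (not kernel
code; PAGE_SIZE, the explicit order parameter and compound_copy_ok() are
illustrative assumptions, since the kernel derives the order from the head page
itself):

#include <stdio.h>
#include <stdbool.h>

#define PAGE_SIZE 4096UL	/* assumed page size for this example */

/* Does an n-byte copy at byte 'offset' fit inside an order-'order' page? */
static bool compound_copy_ok(unsigned long offset, unsigned long n,
			     unsigned int order)
{
	unsigned long size = PAGE_SIZE << order;	/* page_size(page) */

	return offset + n <= size;
}

int main(void)
{
	/* 8 KiB copy at offset 4 KiB of an order-2 (16 KiB) allocation: ok. */
	printf("%d\n", compound_copy_ok(4096, 8192, 2));	/* prints 1 */

	/* 8 KiB copy at offset 12 KiB of the same allocation: overrun. */
	printf("%d\n", compound_copy_ok(12288, 8192, 2));	/* prints 0 */
	return 0;
}

Order-0 pages never take this branch: PageHead() is false for them, so they
still fall through to check_page_span() until the next patch removes it.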
From patchwork Thu Dec 16 21:53:51 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12682807
From: "Matthew Wilcox (Oracle)"
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-hardening@vger.kernel.org
Subject: [PATCH v4 4/4] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
Date: Thu, 16 Dec 2021 21:53:51 +0000
Message-Id: <20211216215351.3811471-5-willy@infradead.org>
In-Reply-To: <20211216215351.3811471-1-willy@infradead.org>
References: <20211216215351.3811471-1-willy@infradead.org>
X-Mailing-List: linux-hardening@vger.kernel.org

There isn't enough information to make this a useful check any more;
the useful parts of it were moved in earlier patches, so remove this
set of checks now.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/usercopy.c    | 61 ------------------------------------------------
 security/Kconfig | 13 +----------
 2 files changed, 1 insertion(+), 73 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index db2e8c4f79fd..1ee1f0d74828 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -157,64 +157,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
 		usercopy_abort("null address", NULL, to_user, ptr, n);
 }

-/* Checks for allocs that are marked in some way as spanning multiple pages. */
-static inline void check_page_span(const void *ptr, unsigned long n,
-				   struct page *page, bool to_user)
-{
-#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
-	const void *end = ptr + n - 1;
-	bool is_reserved, is_cma;
-
-	/*
-	 * Sometimes the kernel data regions are not marked Reserved (see
-	 * check below). And sometimes [_sdata,_edata) does not cover
-	 * rodata and/or bss, so check each range explicitly.
-	 */
-
-	/* Allow reads of kernel rodata region (if not marked as Reserved). */
-	if (ptr >= (const void *)__start_rodata &&
-	    end <= (const void *)__end_rodata) {
-		if (!to_user)
-			usercopy_abort("rodata", NULL, to_user, 0, n);
-		return;
-	}
-
-	/* Allow kernel data region (if not marked as Reserved). */
-	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
-		return;
-
-	/* Allow kernel bss region (if not marked as Reserved). */
-	if (ptr >= (const void *)__bss_start &&
-	    end <= (const void *)__bss_stop)
-		return;
-
-	/* Is the object wholly within one base page? */
-	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
-		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
-		return;
-
-	/*
-	 * Reject if range is entirely either Reserved (i.e. special or
-	 * device memory), or CMA. Otherwise, reject since the object spans
-	 * several independently allocated pages.
-	 */
-	is_reserved = PageReserved(page);
-	is_cma = is_migrate_cma_page(page);
-	if (!is_reserved && !is_cma)
-		usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
-
-	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
-		page = virt_to_head_page(ptr);
-		if (is_reserved && !PageReserved(page))
-			usercopy_abort("spans Reserved and non-Reserved pages",
-				       NULL, to_user, 0, n);
-		if (is_cma && !is_migrate_cma_page(page))
-			usercopy_abort("spans CMA and non-CMA pages", NULL,
-				       to_user, 0, n);
-	}
-#endif
-}
-
 static inline void check_heap_object(const void *ptr, unsigned long n,
 				     bool to_user)
 {
@@ -257,9 +199,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		unsigned long offset = ptr - page_address(page);
 		if (offset + n > page_size(page))
 			usercopy_abort("page alloc", NULL, to_user, offset, n);
-	} else {
-		/* Verify object does not incorrectly span multiple pages. */
-		check_page_span(ptr, n, page, to_user);
 	}
 }

diff --git a/security/Kconfig b/security/Kconfig
index 0b847f435beb..5b289b329a51 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -160,20 +160,9 @@ config HARDENED_USERCOPY
 	  copy_from_user() functions) by rejecting memory ranges that
 	  are larger than the specified heap object, span multiple
 	  separately allocated pages, are not on the process stack,
-	  or are part of the kernel text. This kills entire classes
+	  or are part of the kernel text. This prevents entire classes
 	  of heap overflow exploits and similar kernel memory exposures.

-config HARDENED_USERCOPY_PAGESPAN
-	bool "Refuse to copy allocations that span multiple pages"
-	depends on HARDENED_USERCOPY
-	depends on EXPERT
-	help
-	  When a multi-page allocation is done without __GFP_COMP,
-	  hardened usercopy will reject attempts to copy it. There are,
-	  however, several cases of this in the kernel that have not all
-	  been removed. This config is intended to be used only while
-	  trying to find such users.
-
 config FORTIFY_SOURCE
 	bool "Harden common str/mem functions against buffer overflows"
 	depends on ARCH_HAS_FORTIFY_SOURCE