From patchwork Mon Jan 10 23:15:27 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12709268
From: "Matthew Wilcox (Oracle)"
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-hardening@vger.kernel.org
Subject: [PATCH 1/4] mm/usercopy: Check kmap addresses properly
Date: Mon, 10 Jan 2022 23:15:27 +0000
Message-Id: <20220110231530.665970-2-willy@infradead.org>
In-Reply-To: <20220110231530.665970-1-willy@infradead.org>
References: <20220110231530.665970-1-willy@infradead.org>
X-Mailing-List: linux-hardening@vger.kernel.org

If you are copying to an address in the kmap region, you may not copy
across a page boundary, no matter what the size of the underlying
allocation.  You can't kmap() a slab page because slab pages always
come from low memory.
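To make the rule concrete, the test added below boils down to the
following arithmetic.  This is only a userspace-style sketch of the
check, not the kernel code itself; the 4KiB page size and the helper
name are assumptions made for the example:

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE_SKETCH 4096UL	/* assumed 4KiB pages, example only */

/*
 * A copy of 'n' bytes starting at a kmap address 'ptr' is only allowed
 * if its last byte still lies in the page that contains 'ptr'.
 */
static bool kmap_copy_fits(uintptr_t ptr, unsigned long n)
{
	uintptr_t page_end = ptr | (PAGE_SIZE_SKETCH - 1);	/* last byte of this page */

	return ptr + n - 1 <= page_end;
}

The mm/usercopy.c hunk below performs this same test on the kmap
address before any of the folio-based checks run.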
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kees Cook
---
 arch/x86/include/asm/highmem.h   |  1 +
 include/linux/highmem-internal.h | 10 ++++++++++
 mm/usercopy.c                    | 16 ++++++++++------
 3 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index 032e020853aa..731ee7cc40a5 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include

 /* declarations for highmem.c */
 extern unsigned long highstart_pfn, highend_pfn;
diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index 0a0b2b09b1b8..01fb76d101b0 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -149,6 +149,11 @@ static inline void totalhigh_pages_add(long count)
 {
 	atomic_long_add(count, &_totalhigh_pages);
 }
+static inline bool is_kmap_addr(const void *x)
+{
+	unsigned long addr = (unsigned long)x;
+	return addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP);
+}
 #else /* CONFIG_HIGHMEM */

 static inline struct page *kmap_to_page(void *addr)
@@ -234,6 +239,11 @@ static inline void __kunmap_atomic(void *addr)
 static inline unsigned int nr_free_highpages(void) { return 0; }
 static inline unsigned long totalhigh_pages(void) { return 0UL; }

+static inline bool is_kmap_addr(const void *x)
+{
+	return false;
+}
+
 #endif /* CONFIG_HIGHMEM */

 /*
diff --git a/mm/usercopy.c b/mm/usercopy.c
index d0d268135d96..2d13bc3bd83b 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -229,12 +229,16 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (!virt_addr_valid(ptr))
 		return;

-	/*
-	 * When CONFIG_HIGHMEM=y, kmap_to_page() will give either the
-	 * highmem page or fallback to virt_to_page(). The following
-	 * is effectively a highmem-aware virt_to_slab().
-	 */
-	folio = page_folio(kmap_to_page((void *)ptr));
+	if (is_kmap_addr(ptr)) {
+		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);
+
+		if ((unsigned long)ptr + n - 1 > page_end)
+			usercopy_abort("kmap", NULL, to_user,
+				       offset_in_page(ptr), n);
+		return;
+	}
+
+	folio = virt_to_folio(ptr);

 	if (folio_test_slab(folio)) {
 		/* Check slab allocator for flags and size. */
From patchwork Mon Jan 10 23:15:28 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12709267
From: "Matthew Wilcox (Oracle)"
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-hardening@vger.kernel.org
Subject: [PATCH 2/4] mm/usercopy: Detect vmalloc overruns
Date: Mon, 10 Jan 2022 23:15:28 +0000
Message-Id: <20220110231530.665970-3-willy@infradead.org>
In-Reply-To: <20220110231530.665970-1-willy@infradead.org>
References: <20220110231530.665970-1-willy@infradead.org>
X-Mailing-List: linux-hardening@vger.kernel.org

If you have a vmalloc() allocation, or an address from calling vmap(),
you cannot overrun the vm_area which describes it, regardless of the
size of the underlying allocation.  This probably doesn't do much for
security because vmalloc comes with guard pages these days, but it
prevents usercopy aborts when copying to a vmap() of smaller pages.
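Roughly, the new branch amounts to the bounds test sketched here.  The
struct below is a simplified stand-in for the kernel's vm_struct, and
the guard-page subtraction mirrors what get_vm_area_size() does; the
names and fields are assumptions for the example, not kernel API:

#include <stdbool.h>

/* Simplified stand-in for struct vm_struct: only what the check needs. */
struct vm_area_sketch {
	char *addr;		/* start of the vmalloc()/vmap() mapping */
	unsigned long size;	/* mapped size, including the trailing guard page */
	unsigned long page_size;
};

/*
 * The usable size of the area excludes the guard page; any copy whose
 * offset plus length runs past it is an overrun.
 */
static bool vmalloc_copy_fits(const struct vm_area_sketch *area,
			      const char *ptr, unsigned long n)
{
	unsigned long usable = area->size - area->page_size;
	unsigned long offset = (unsigned long)(ptr - area->addr);

	return offset + n <= usable;
}

In the patch below, a pointer for which no vm_area can be found is
treated as a hard failure rather than falling through to the other
checks.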
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kees Cook
---
 mm/usercopy.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 2d13bc3bd83b..dcf71b7e3098 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -238,6 +239,21 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		return;
 	}

+	if (is_vmalloc_addr(ptr)) {
+		struct vm_struct *area = find_vm_area(ptr);
+		unsigned long offset;
+
+		if (!area) {
+			usercopy_abort("vmalloc", "no area", to_user, 0, n);
+			return;
+		}
+
+		offset = ptr - area->addr;
+		if (offset + n > get_vm_area_size(area))
+			usercopy_abort("vmalloc", NULL, to_user, offset, n);
+		return;
+	}
+
 	folio = virt_to_folio(ptr);

 	if (folio_test_slab(folio)) {

From patchwork Mon Jan 10 23:15:29 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12709269
From: "Matthew Wilcox (Oracle)"
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-hardening@vger.kernel.org
Subject: [PATCH 3/4] mm/usercopy: Detect large folio overruns
Date: Mon, 10 Jan 2022 23:15:29 +0000
Message-Id: <20220110231530.665970-4-willy@infradead.org>
In-Reply-To: <20220110231530.665970-1-willy@infradead.org>
References: <20220110231530.665970-1-willy@infradead.org>
X-Mailing-List: linux-hardening@vger.kernel.org

Move the compound page overrun detection out of
CONFIG_HARDENED_USERCOPY_PAGESPAN and convert it to use folios so it's
enabled for more people.
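The added branch is a plain offset-plus-length check against the
folio's size.  A minimal sketch with invented stand-in names follows
(folio_sketch is not a kernel type; its two fields stand in for
folio_address() and folio_size()):

#include <stdbool.h>

/* Toy model of a folio: a physically contiguous allocation of one or more pages. */
struct folio_sketch {
	char *base;		/* what folio_address() would return */
	unsigned long size;	/* what folio_size() would return, in bytes */
};

/* Reject a copy that starts inside the folio but would run past its end. */
static bool folio_copy_fits(const struct folio_sketch *folio,
			    const char *ptr, unsigned long n)
{
	unsigned long offset = (unsigned long)(ptr - folio->base);

	return offset + n <= folio->size;
}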
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kees Cook
Reviewed-by: David Hildenbrand
---
 mm/usercopy.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index dcf71b7e3098..e1cb98087a05 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -164,7 +164,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 {
 #ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
 	const void *end = ptr + n - 1;
-	struct page *endpage;
 	bool is_reserved, is_cma;

 	/*
@@ -195,11 +194,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 	    ((unsigned long)end & (unsigned long)PAGE_MASK)))
 		return;

-	/* Allow if fully inside the same compound (__GFP_COMP) page. */
-	endpage = virt_to_head_page(end);
-	if (likely(endpage == page))
-		return;
-
 	/*
 	 * Reject if range is entirely either Reserved (i.e. special or
 	 * device memory), or CMA. Otherwise, reject since the object spans
@@ -259,6 +253,10 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (folio_test_slab(folio)) {
 		/* Check slab allocator for flags and size. */
 		__check_heap_object(ptr, n, folio_slab(folio), to_user);
+	} else if (folio_test_large(folio)) {
+		unsigned long offset = ptr - folio_address(folio);
+		if (offset + n > folio_size(folio))
+			usercopy_abort("page alloc", NULL, to_user, offset, n);
 	} else {
 		/* Verify object does not incorrectly span multiple pages. */
 		check_page_span(ptr, n, folio_page(folio, 0), to_user);

From patchwork Mon Jan 10 23:15:30 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12709266
From: "Matthew Wilcox (Oracle)"
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-hardening@vger.kernel.org
Subject: [PATCH 4/4] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
Date: Mon, 10 Jan 2022 23:15:30 +0000
Message-Id: <20220110231530.665970-5-willy@infradead.org>
In-Reply-To: <20220110231530.665970-1-willy@infradead.org>
References: <20220110231530.665970-1-willy@infradead.org>
X-Mailing-List: linux-hardening@vger.kernel.org

There isn't enough information to make this a useful check any more;
the useful parts of it were moved in earlier patches, so remove this
set of checks now.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kees Cook
Reviewed-by: David Hildenbrand
---
 mm/usercopy.c    | 61 ------------------------------------------------
 security/Kconfig | 13 +----------
 2 files changed, 1 insertion(+), 73 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index e1cb98087a05..94831945d9e7 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -158,64 +158,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
 		usercopy_abort("null address", NULL, to_user, ptr, n);
 }

-/* Checks for allocs that are marked in some way as spanning multiple pages. */
-static inline void check_page_span(const void *ptr, unsigned long n,
-				   struct page *page, bool to_user)
-{
-#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
-	const void *end = ptr + n - 1;
-	bool is_reserved, is_cma;
-
-	/*
-	 * Sometimes the kernel data regions are not marked Reserved (see
-	 * check below). And sometimes [_sdata,_edata) does not cover
-	 * rodata and/or bss, so check each range explicitly.
-	 */
-
-	/* Allow reads of kernel rodata region (if not marked as Reserved). */
-	if (ptr >= (const void *)__start_rodata &&
-	    end <= (const void *)__end_rodata) {
-		if (!to_user)
-			usercopy_abort("rodata", NULL, to_user, 0, n);
-		return;
-	}
-
-	/* Allow kernel data region (if not marked as Reserved). */
-	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
-		return;
-
-	/* Allow kernel bss region (if not marked as Reserved). */
-	if (ptr >= (const void *)__bss_start &&
-	    end <= (const void *)__bss_stop)
-		return;
-
-	/* Is the object wholly within one base page? */
-	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
-		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
-		return;
-
-	/*
-	 * Reject if range is entirely either Reserved (i.e. special or
-	 * device memory), or CMA. Otherwise, reject since the object spans
-	 * several independently allocated pages.
-	 */
-	is_reserved = PageReserved(page);
-	is_cma = is_migrate_cma_page(page);
-	if (!is_reserved && !is_cma)
-		usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
-
-	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
-		page = virt_to_head_page(ptr);
-		if (is_reserved && !PageReserved(page))
-			usercopy_abort("spans Reserved and non-Reserved pages",
-				       NULL, to_user, 0, n);
-		if (is_cma && !is_migrate_cma_page(page))
-			usercopy_abort("spans CMA and non-CMA pages", NULL,
-				       to_user, 0, n);
-	}
-#endif
-}
-
 static inline void check_heap_object(const void *ptr, unsigned long n,
 				     bool to_user)
 {
@@ -257,9 +199,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		unsigned long offset = ptr - folio_address(folio);
 		if (offset + n > folio_size(folio))
 			usercopy_abort("page alloc", NULL, to_user, offset, n);
-	} else {
-		/* Verify object does not incorrectly span multiple pages. */
-		check_page_span(ptr, n, folio_page(folio, 0), to_user);
 	}
 }

diff --git a/security/Kconfig b/security/Kconfig
index 0b847f435beb..5b289b329a51 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -160,20 +160,9 @@ config HARDENED_USERCOPY
 	  copy_from_user() functions) by rejecting memory ranges that
 	  are larger than the specified heap object, span multiple
 	  separately allocated pages, are not on the process stack,
-	  or are part of the kernel text. This kills entire classes
+	  or are part of the kernel text. This prevents entire classes
 	  of heap overflow exploits and similar kernel memory exposures.

-config HARDENED_USERCOPY_PAGESPAN
-	bool "Refuse to copy allocations that span multiple pages"
-	depends on HARDENED_USERCOPY
-	depends on EXPERT
-	help
-	  When a multi-page allocation is done without __GFP_COMP,
-	  hardened usercopy will reject attempts to copy it. There are,
-	  however, several cases of this in the kernel that have not all
-	  been removed. This config is intended to be used only while
-	  trying to find such users.
-
 config FORTIFY_SOURCE
 	bool "Harden common str/mem functions against buffer overflows"
 	depends on ARCH_HAS_FORTIFY_SOURCE
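As a practical note for anyone carrying a config fragment that still
mentions the removed option: after this patch only the top-level knob
remains, so an illustrative fragment (not taken from any particular
defconfig) now needs just:

CONFIG_HARDENED_USERCOPY=y
# CONFIG_HARDENED_USERCOPY_PAGESPAN no longer exists; stale references
# are simply dropped the next time the config is regenerated.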