From patchwork Thu Dec 16 21:53:48 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12682827
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org,
 linux-hardening@vger.kernel.org, William Kucharski
Subject: [PATCH v4 1/4] mm/usercopy: Check kmap addresses properly
Date: Thu, 16 Dec 2021 21:53:48 +0000
Message-Id: <20211216215351.3811471-2-willy@infradead.org>
In-Reply-To: <20211216215351.3811471-1-willy@infradead.org>
References: <20211216215351.3811471-1-willy@infradead.org>

If you are copying to an address in the kmap region, you may not copy
across a page boundary, no matter what the size of the underlying
allocation.  You can't kmap() a slab page because slab pages always
come from low memory.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kees Cook
Reviewed-by: William Kucharski
---
 arch/x86/include/asm/highmem.h   |  1 +
 include/linux/highmem-internal.h | 10 ++++++++++
 mm/usercopy.c                    | 16 ++++++++++------
 3 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index 032e020853aa..731ee7cc40a5 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include

 /* declarations for highmem.c */
 extern unsigned long highstart_pfn, highend_pfn;

diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index 0a0b2b09b1b8..01fb76d101b0 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -149,6 +149,11 @@ static inline void totalhigh_pages_add(long count)
 	atomic_long_add(count, &_totalhigh_pages);
 }

+static inline bool is_kmap_addr(const void *x)
+{
+	unsigned long addr = (unsigned long)x;
+	return addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP);
+}
 #else /* CONFIG_HIGHMEM */

 static inline struct page *kmap_to_page(void *addr)
@@ -234,6 +239,11 @@ static inline void __kunmap_atomic(void *addr)
 static inline unsigned int nr_free_highpages(void) { return 0; }
 static inline unsigned long totalhigh_pages(void) { return 0UL; }

+static inline bool is_kmap_addr(const void *x)
+{
+	return false;
+}
+
 #endif /* CONFIG_HIGHMEM */

 /*
diff --git a/mm/usercopy.c b/mm/usercopy.c
index b3de3c4eefba..8c039302465f 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -228,12 +228,16 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (!virt_addr_valid(ptr))
 		return;

-	/*
-	 * When CONFIG_HIGHMEM=y, kmap_to_page() will give either the
-	 * highmem page or fallback to virt_to_page(). The following
-	 * is effectively a highmem-aware virt_to_head_page().
-	 */
-	page = compound_head(kmap_to_page((void *)ptr));
+	if (is_kmap_addr(ptr)) {
+		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);
+
+		if ((unsigned long)ptr + n - 1 > page_end)
+			usercopy_abort("kmap", NULL, to_user,
+				       offset_in_page(ptr), n);
+		return;
+	}
+
+	page = virt_to_head_page(ptr);

 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */

From patchwork Thu Dec 16 21:53:49 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12682805
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org,
 linux-hardening@vger.kernel.org, William Kucharski
Subject: [PATCH v4 2/4] mm/usercopy: Detect vmalloc overruns
Date: Thu, 16 Dec 2021 21:53:49 +0000
Message-Id: <20211216215351.3811471-3-willy@infradead.org>
In-Reply-To: <20211216215351.3811471-1-willy@infradead.org>
References: <20211216215351.3811471-1-willy@infradead.org>

If you have a vmalloc() allocation, or an address from calling vmap(),
you cannot overrun the vm_area which describes it, regardless of the
size of the underlying allocation.
This probably doesn't do much for security because vmalloc comes with
guard pages these days, but it prevents usercopy aborts when copying
to a vmap() of smaller pages.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kees Cook
Reviewed-by: William Kucharski
---
 mm/usercopy.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 8c039302465f..63476e1506e0 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -237,6 +238,21 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		return;
 	}

+	if (is_vmalloc_addr(ptr)) {
+		struct vm_struct *vm = find_vm_area(ptr);
+		unsigned long offset;
+
+		if (!vm) {
+			usercopy_abort("vmalloc", "no area", to_user, 0, n);
+			return;
+		}
+
+		offset = ptr - vm->addr;
+		if (offset + n > vm->size)
+			usercopy_abort("vmalloc", NULL, to_user, offset, n);
+		return;
+	}
+
 	page = virt_to_head_page(ptr);

 	if (PageSlab(page)) {

From patchwork Thu Dec 16 21:53:50 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12682825
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org,
 linux-hardening@vger.kernel.org, William Kucharski
Subject: [PATCH v4 3/4] mm/usercopy: Detect compound page overruns
Date: Thu, 16 Dec 2021 21:53:50 +0000
Message-Id: <20211216215351.3811471-4-willy@infradead.org>
In-Reply-To: <20211216215351.3811471-1-willy@infradead.org>
References: <20211216215351.3811471-1-willy@infradead.org>
Move the compound page overrun detection out of
CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kees Cook
Reviewed-by: William Kucharski
---
 mm/usercopy.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 63476e1506e0..db2e8c4f79fd 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -163,7 +163,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 {
 #ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
 	const void *end = ptr + n - 1;
-	struct page *endpage;
 	bool is_reserved, is_cma;

 	/*
@@ -194,11 +193,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 	    ((unsigned long)end & (unsigned long)PAGE_MASK)))
 		return;

-	/* Allow if fully inside the same compound (__GFP_COMP) page. */
-	endpage = virt_to_head_page(end);
-	if (likely(endpage == page))
-		return;
-
 	/*
 	 * Reject if range is entirely either Reserved (i.e. special or
 	 * device memory), or CMA. Otherwise, reject since the object spans
@@ -258,6 +252,11 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */
 		__check_heap_object(ptr, n, page, to_user);
+	} else if (PageHead(page)) {
+		/* A compound allocation */
+		unsigned long offset = ptr - page_address(page);
+		if (offset + n > page_size(page))
+			usercopy_abort("page alloc", NULL, to_user, offset, n);
 	} else {
 		/* Verify object does not incorrectly span multiple pages. */
 		check_page_span(ptr, n, page, to_user);

From patchwork Thu Dec 16 21:53:51 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12682801
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org,
 linux-hardening@vger.kernel.org
Subject: [PATCH v4 4/4] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
Date: Thu, 16 Dec 2021 21:53:51 +0000
Message-Id: <20211216215351.3811471-5-willy@infradead.org>
In-Reply-To: <20211216215351.3811471-1-willy@infradead.org>
References: <20211216215351.3811471-1-willy@infradead.org>

There isn't enough information to make this a useful check any more;
the useful parts of it were moved in earlier patches, so remove this
set of checks now.
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/usercopy.c    | 61 ------------------------------------------------
 security/Kconfig | 13 +----------
 2 files changed, 1 insertion(+), 73 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index db2e8c4f79fd..1ee1f0d74828 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -157,64 +157,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
 		usercopy_abort("null address", NULL, to_user, ptr, n);
 }

-/* Checks for allocs that are marked in some way as spanning multiple pages. */
-static inline void check_page_span(const void *ptr, unsigned long n,
-				   struct page *page, bool to_user)
-{
-#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
-	const void *end = ptr + n - 1;
-	bool is_reserved, is_cma;
-
-	/*
-	 * Sometimes the kernel data regions are not marked Reserved (see
-	 * check below). And sometimes [_sdata,_edata) does not cover
-	 * rodata and/or bss, so check each range explicitly.
-	 */
-
-	/* Allow reads of kernel rodata region (if not marked as Reserved). */
-	if (ptr >= (const void *)__start_rodata &&
-	    end <= (const void *)__end_rodata) {
-		if (!to_user)
-			usercopy_abort("rodata", NULL, to_user, 0, n);
-		return;
-	}
-
-	/* Allow kernel data region (if not marked as Reserved). */
-	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
-		return;
-
-	/* Allow kernel bss region (if not marked as Reserved). */
-	if (ptr >= (const void *)__bss_start &&
-	    end <= (const void *)__bss_stop)
-		return;
-
-	/* Is the object wholly within one base page? */
-	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
-		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
-		return;
-
-	/*
-	 * Reject if range is entirely either Reserved (i.e. special or
-	 * device memory), or CMA. Otherwise, reject since the object spans
-	 * several independently allocated pages.
-	 */
-	is_reserved = PageReserved(page);
-	is_cma = is_migrate_cma_page(page);
-	if (!is_reserved && !is_cma)
-		usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
-
-	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
-		page = virt_to_head_page(ptr);
-		if (is_reserved && !PageReserved(page))
-			usercopy_abort("spans Reserved and non-Reserved pages",
-				       NULL, to_user, 0, n);
-		if (is_cma && !is_migrate_cma_page(page))
-			usercopy_abort("spans CMA and non-CMA pages", NULL,
-				       to_user, 0, n);
-	}
-#endif
-}
-
 static inline void check_heap_object(const void *ptr, unsigned long n,
 				     bool to_user)
 {
@@ -257,9 +199,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		unsigned long offset = ptr - page_address(page);
 		if (offset + n > page_size(page))
 			usercopy_abort("page alloc", NULL, to_user, offset, n);
-	} else {
-		/* Verify object does not incorrectly span multiple pages. */
-		check_page_span(ptr, n, page, to_user);
 	}
 }

diff --git a/security/Kconfig b/security/Kconfig
index 0b847f435beb..5b289b329a51 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -160,20 +160,9 @@ config HARDENED_USERCOPY
 	  copy_from_user() functions) by rejecting memory ranges that
 	  are larger than the specified heap object, span multiple
 	  separately allocated pages, are not on the process stack,
-	  or are part of the kernel text. This kills entire classes
+	  or are part of the kernel text. This prevents entire classes
 	  of heap overflow exploits and similar kernel memory exposures.

-config HARDENED_USERCOPY_PAGESPAN
-	bool "Refuse to copy allocations that span multiple pages"
-	depends on HARDENED_USERCOPY
-	depends on EXPERT
-	help
-	  When a multi-page allocation is done without __GFP_COMP,
-	  hardened usercopy will reject attempts to copy it. There are,
-	  however, several cases of this in the kernel that have not all
-	  been removed. This config is intended to be used only while
-	  trying to find such users.
-
 config FORTIFY_SOURCE
 	bool "Harden common str/mem functions against buffer overflows"
 	depends on ARCH_HAS_FORTIFY_SOURCE