From patchwork Mon Dec 13 14:27:01 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12674041
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
    Thomas Gleixner, linux-hardening@vger.kernel.org
Subject: [PATCH v3 1/3] mm/usercopy: Check kmap addresses properly
Date: Mon, 13 Dec 2021 14:27:01 +0000
Message-Id: <20211213142703.3066590-2-willy@infradead.org>
In-Reply-To: <20211213142703.3066590-1-willy@infradead.org>
References: <20211213142703.3066590-1-willy@infradead.org>

If you are copying to an address in the kmap region, you may not copy
across a page boundary, no matter what the size of the underlying
allocation.  You can't kmap() a slab page because slab pages always
come from low memory.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kees Cook
---
 arch/x86/include/asm/highmem.h   |  1 +
 include/linux/highmem-internal.h | 10 ++++++++++
 mm/usercopy.c                    | 16 ++++++++++------
 3 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index 032e020853aa..731ee7cc40a5 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -26,6 +26,7 @@
 #include <asm/tlbflush.h>
 #include <asm/paravirt.h>
 #include <asm/fixmap.h>
+#include <asm/pgtable_areas.h>
 
 /* declarations for highmem.c */
 extern unsigned long highstart_pfn, highend_pfn;
diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index 0a0b2b09b1b8..01fb76d101b0 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -149,6 +149,11 @@ static inline void totalhigh_pages_add(long count)
 	atomic_long_add(count, &_totalhigh_pages);
 }
 
+static inline bool is_kmap_addr(const void *x)
+{
+	unsigned long addr = (unsigned long)x;
+	return addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP);
+}
 #else /* CONFIG_HIGHMEM */
 
 static inline struct page *kmap_to_page(void *addr)
@@ -234,6 +239,11 @@ static inline void __kunmap_atomic(void *addr)
 static inline unsigned int nr_free_highpages(void) { return 0; }
 static inline unsigned long totalhigh_pages(void) { return 0UL; }
 
+static inline bool is_kmap_addr(const void *x)
+{
+	return false;
+}
+
 #endif /* CONFIG_HIGHMEM */
 
 /*
diff --git a/mm/usercopy.c b/mm/usercopy.c
index b3de3c4eefba..8c039302465f 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -228,12 +228,16 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (!virt_addr_valid(ptr))
 		return;
 
-	/*
-	 * When CONFIG_HIGHMEM=y, kmap_to_page() will give either the
-	 * highmem page or fallback to virt_to_page(). The following
-	 * is effectively a highmem-aware virt_to_head_page().
-	 */
-	page = compound_head(kmap_to_page((void *)ptr));
+	if (is_kmap_addr(ptr)) {
+		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);
+
+		if ((unsigned long)ptr + n - 1 > page_end)
+			usercopy_abort("kmap", NULL, to_user,
+				       offset_in_page(ptr), n);
+		return;
+	}
+
+	page = virt_to_head_page(ptr);
 
 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */
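To make the new check concrete, here is a minimal, hypothetical caller.
demo_kmap_copy() and its parameters are invented for this sketch and are
not part of the patch; it assumes page is a highmem page, so kmap()
returns an address in the kmap region, and the copy length is a runtime
value, since hardened usercopy only vets copies whose size is not a
compile-time constant.

#include <linux/highmem.h>
#include <linux/uaccess.h>

/* Hypothetical illustration -- not part of the patch. */
static int demo_kmap_copy(struct page *page, void __user *ubuf, size_t n)
{
	/* A kmap() mapping covers exactly one PAGE_SIZE. */
	u8 *vaddr = kmap(page);
	int ret = 0;

	/*
	 * Starting PAGE_SIZE/2 into the page: n <= PAGE_SIZE/2 stays
	 * within the mapped page and passes; anything larger crosses
	 * the page boundary and now trips usercopy_abort("kmap", ...),
	 * even if the underlying allocation is larger than one page.
	 */
	if (copy_to_user(ubuf, vaddr + PAGE_SIZE / 2, n))
		ret = -EFAULT;

	kunmap(page);
	return ret;
}

The check is purely address-based: page_end is the address of the last
byte of the page containing ptr, so the copy aborts as soon as
ptr + n - 1 would cross it.
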
From patchwork Mon Dec 13 14:27:02 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12674039
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
    Thomas Gleixner, linux-hardening@vger.kernel.org
Subject: [PATCH v3 2/3] mm/usercopy: Detect vmalloc overruns
Date: Mon, 13 Dec 2021 14:27:02 +0000
Message-Id: <20211213142703.3066590-3-willy@infradead.org>
In-Reply-To: <20211213142703.3066590-1-willy@infradead.org>
References: <20211213142703.3066590-1-willy@infradead.org>

If you have a vmalloc() allocation, or an address from calling vmap(),
you cannot overrun the vm_area which describes it, regardless of the
size of the underlying allocation.  This probably doesn't do much for
security because vmalloc comes with guard pages these days, but it
prevents usercopy aborts when copying to a vmap() of smaller pages.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kees Cook
---
 mm/usercopy.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 8c039302465f..63476e1506e0 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -17,6 +17,7 @@
 #include <linux/sched/task.h>
 #include <linux/sched/task_stack.h>
 #include <linux/thread_info.h>
+#include <linux/vmalloc.h>
 #include <linux/atomic.h>
 #include <linux/jump_label.h>
 #include <asm/sections.h>
@@ -237,6 +238,21 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		return;
 	}
 
+	if (is_vmalloc_addr(ptr)) {
+		struct vm_struct *vm = find_vm_area(ptr);
+		unsigned long offset;
+
+		if (!vm) {
+			usercopy_abort("vmalloc", "no area", to_user, 0, n);
+			return;
+		}
+
+		offset = ptr - vm->addr;
+		if (offset + n > vm->size)
+			usercopy_abort("vmalloc", NULL, to_user, offset, n);
+		return;
+	}
+
 	page = virt_to_head_page(ptr);
 
 	if (PageSlab(page)) {
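Again a hypothetical caller, not part of the patch (demo_vmalloc_copy()
is invented for this sketch): the new branch bounds the copy by the
vm_area that find_vm_area() returns, not by any notion of object size,
which matches the commit message's framing of this as mostly suppressing
false aborts rather than adding a hard security boundary.

#include <linux/vmalloc.h>
#include <linux/uaccess.h>

/* Hypothetical illustration -- not part of the patch. */
static int demo_vmalloc_copy(void __user *ubuf, size_t n)
{
	u8 *buf = vmalloc(3 * PAGE_SIZE);
	int ret = 0;

	if (!buf)
		return -ENOMEM;

	/*
	 * The offset into the vm_area is PAGE_SIZE here: the copy
	 * passes while offset + n fits within vm->size, and trips
	 * usercopy_abort("vmalloc", ...) once it would run past the
	 * end of the area.
	 */
	if (copy_to_user(ubuf, buf + PAGE_SIZE, n))
		ret = -EFAULT;

	vfree(buf);
	return ret;
}
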
From patchwork Mon Dec 13 14:27:03 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12674043
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
    Thomas Gleixner, linux-hardening@vger.kernel.org
Subject: [PATCH v3 3/3] mm/usercopy: Detect compound page overruns
Date: Mon, 13 Dec 2021 14:27:03 +0000
Message-Id: <20211213142703.3066590-4-willy@infradead.org>
In-Reply-To: <20211213142703.3066590-1-willy@infradead.org>
References: <20211213142703.3066590-1-willy@infradead.org>

Move the compound page overrun detection out of
CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kees Cook
---
 mm/usercopy.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 63476e1506e0..db2e8c4f79fd 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -163,7 +163,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 {
 #ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
 	const void *end = ptr + n - 1;
-	struct page *endpage;
 	bool is_reserved, is_cma;
 
 	/*
@@ -194,11 +193,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 	       ((unsigned long)end & (unsigned long)PAGE_MASK)))
 		return;
 
-	/* Allow if fully inside the same compound (__GFP_COMP) page. */
-	endpage = virt_to_head_page(end);
-	if (likely(endpage == page))
-		return;
-
 	/*
 	 * Reject if range is entirely either Reserved (i.e. special or
 	 * device memory), or CMA. Otherwise, reject since the object spans
@@ -258,6 +252,11 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */
 		__check_heap_object(ptr, n, page, to_user);
+	} else if (PageHead(page)) {
+		/* A compound allocation */
+		unsigned long offset = ptr - page_address(page);
+		if (offset + n > page_size(page))
+			usercopy_abort("page alloc", NULL, to_user, offset, n);
 	} else {
 		/* Verify object does not incorrectly span multiple pages. */
 		check_page_span(ptr, n, page, to_user);
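One last hypothetical caller, not part of the patch (demo_compound_copy()
is invented for this sketch): with the detection moved out of
CONFIG_HARDENED_USERCOPY_PAGESPAN, a copy out of a compound page is
bounded by page_size() of its head page for every hardened-usercopy user.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/uaccess.h>

/* Hypothetical illustration -- not part of the patch. */
static int demo_compound_copy(void __user *ubuf, size_t n)
{
	/* Order-2 __GFP_COMP allocation: page_size(page) == 4 * PAGE_SIZE. */
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_COMP, 2);
	u8 *buf;
	int ret = 0;

	if (!page)
		return -ENOMEM;
	buf = page_address(page);

	/*
	 * virt_to_head_page(buf) is a PageHead() page, so the copy is
	 * bounded by page_size(page): n <= 4 * PAGE_SIZE passes, and
	 * anything larger trips usercopy_abort("page alloc", ...).
	 */
	if (copy_to_user(ubuf, buf, n))
		ret = -EFAULT;

	__free_pages(page, 2);
	return ret;
}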