From patchwork Mon Dec 13 14:27:03 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12674031
From: "Matthew Wilcox (Oracle)"
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, Thomas Gleixner,
	linux-hardening@vger.kernel.org
Subject: [PATCH v3 3/3] mm/usercopy: Detect compound page overruns
Date: Mon, 13 Dec 2021 14:27:03 +0000
Message-Id: <20211213142703.3066590-4-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20211213142703.3066590-1-willy@infradead.org>
References: <20211213142703.3066590-1-willy@infradead.org>

Move the compound page overrun detection out of
CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kees Cook
---
 mm/usercopy.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 63476e1506e0..db2e8c4f79fd 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -163,7 +163,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 {
 #ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
 	const void *end = ptr + n - 1;
-	struct page *endpage;
 	bool is_reserved, is_cma;
 
 	/*
@@ -194,11 +193,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
 		return;
 
-	/* Allow if fully inside the same compound (__GFP_COMP) page. */
-	endpage = virt_to_head_page(end);
-	if (likely(endpage == page))
-		return;
-
 	/*
 	 * Reject if range is entirely either Reserved (i.e. special or
 	 * device memory), or CMA. Otherwise, reject since the object spans
@@ -258,6 +252,11 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */
 		__check_heap_object(ptr, n, page, to_user);
+	} else if (PageHead(page)) {
+		/* A compound allocation */
+		unsigned long offset = ptr - page_address(page);
+		if (offset + n > page_size(page))
+			usercopy_abort("page alloc", NULL, to_user, offset, n);
 	} else {
 		/* Verify object does not incorrectly span multiple pages. */
 		check_page_span(ptr, n, page, to_user);
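
For illustration only, not part of the patch: a minimal, stand-alone
user-space sketch of the bounds check the new PageHead branch performs.
The helper name and sizes below are made up for the example; in the
kernel, page_size(page) returns the full size of the compound
allocation and usercopy_abort() reports the overrun.

/* Illustrative user-space model of the new compound-page bounds check. */
#include <stdbool.h>
#include <stdio.h>

/*
 * Returns true when copying 'n' bytes starting 'offset' bytes into a
 * compound allocation of 'alloc_size' bytes would run past its end,
 * mirroring: if (offset + n > page_size(page)) usercopy_abort(...).
 */
static bool overruns_compound_page(unsigned long offset, unsigned long n,
				   unsigned long alloc_size)
{
	return offset + n > alloc_size;
}

int main(void)
{
	/* order-1 compound page: 8192 bytes */
	printf("%d\n", overruns_compound_page(4096, 4096, 8192)); /* 0: fits */
	printf("%d\n", overruns_compound_page(4096, 4097, 8192)); /* 1: overrun */
	return 0;
}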