From patchwork Tue Jan 9 20:55:34 2018
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10153417
From: Kees Cook
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, David Windsor, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton, Laura Abbott,
    Ingo Molnar, Mark Rutland, linux-mm@kvack.org,
    linux-xfs@vger.kernel.org, Linus Torvalds, Alexander Viro,
    Andy Lutomirski, Christoph Hellwig, "David S. Miller",
Petersen" , Paolo Bonzini , Christian Borntraeger , Christoffer Dall , Dave Kleikamp , Jan Kara , Luis de Bethencourt , Marc Zyngier , Rik van Riel , Matthew Garrett , linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, netdev@vger.kernel.org, kernel-hardening@lists.openwall.com Date: Tue, 9 Jan 2018 12:55:34 -0800 Message-Id: <1515531365-37423-6-git-send-email-keescook@chromium.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1515531365-37423-1-git-send-email-keescook@chromium.org> References: <1515531365-37423-1-git-send-email-keescook@chromium.org> Subject: [kernel-hardening] [PATCH 05/36] usercopy: WARN() on slab cache usercopy region violations X-Virus-Scanned: ClamAV using ClamSMTP From: David Windsor This patch adds checking of usercopy cache whitelisting, and is modified from Brad Spengler/PaX Team's PAX_USERCOPY whitelisting code in the last public patch of grsecurity/PaX based on my understanding of the code. Changes or omissions from the original code are mine and don't reflect the original grsecurity/PaX code. The SLAB and SLUB allocators are modified to WARN() on all copy operations in which the kernel heap memory being modified falls outside of the cache's defined usercopy region. Signed-off-by: David Windsor [kees: adjust commit log and comments, switch to WARN-by-default] Cc: Christoph Lameter Cc: Pekka Enberg Cc: David Rientjes Cc: Joonsoo Kim Cc: Andrew Morton Cc: Laura Abbott Cc: Ingo Molnar Cc: Mark Rutland Cc: linux-mm@kvack.org Cc: linux-xfs@vger.kernel.org Signed-off-by: Kees Cook --- mm/slab.c | 30 +++++++++++++++++++++++++----- mm/slub.c | 34 +++++++++++++++++++++++++++------- mm/usercopy.c | 12 ++++++++++++ 3 files changed, 64 insertions(+), 12 deletions(-) diff --git a/mm/slab.c b/mm/slab.c index f1ead7b7909d..d9939828f8e4 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -4392,7 +4392,9 @@ module_init(slab_proc_init); #ifdef CONFIG_HARDENED_USERCOPY /* - * Rejects objects that are incorrectly sized. + * Rejects incorrectly sized objects and objects that are to be copied + * to/from userspace but do not fall entirely within the containing slab + * cache's usercopy region. * * Returns NULL if check passes, otherwise const char * to name of cache * to indicate an error. @@ -4412,11 +4414,29 @@ int __check_heap_object(const void *ptr, unsigned long n, struct page *page, /* Find offset within object. */ offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep); - /* Allow address range falling entirely within object size. */ - if (offset <= cachep->object_size && n <= cachep->object_size - offset) - return 0; + /* Make sure object falls entirely within cache's usercopy region. */ + if (offset < cachep->useroffset || + offset - cachep->useroffset > cachep->usersize || + n > cachep->useroffset - offset + cachep->usersize) { + /* + * If the copy is still within the allocated object, produce + * a warning instead of rejecting the copy. This is intended + * to be a temporary method to find any missing usercopy + * whitelists. + */ + if (offset <= cachep->object_size && + n <= cachep->object_size - offset) { + WARN_ONCE(1, "unexpected usercopy %s with bad or missing whitelist with SLAB object '%s' (offset %lu, size %lu)", + to_user ? 
"exposure" : "overwrite", + cachep->name, offset, n); + return 0; + } - return report_usercopy("SLAB object", cachep->name, to_user, offset, n); + return report_usercopy("SLAB object", cachep->name, to_user, + offset, n); + } + + return 0; } #endif /* CONFIG_HARDENED_USERCOPY */ diff --git a/mm/slub.c b/mm/slub.c index 8738a8d8bf8e..2aa4972a2058 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3813,7 +3813,9 @@ EXPORT_SYMBOL(__kmalloc_node); #ifdef CONFIG_HARDENED_USERCOPY /* - * Rejects objects that are incorrectly sized. + * Rejects incorrectly sized objects and objects that are to be copied + * to/from userspace but do not fall entirely within the containing slab + * cache's usercopy region. * * Returns NULL if check passes, otherwise const char * to name of cache * to indicate an error. @@ -3823,11 +3825,9 @@ int __check_heap_object(const void *ptr, unsigned long n, struct page *page, { struct kmem_cache *s; unsigned long offset; - size_t object_size; /* Find object and usable object size. */ s = page->slab_cache; - object_size = slab_ksize(s); /* Reject impossible pointers. */ if (ptr < page_address(page)) @@ -3845,11 +3845,31 @@ int __check_heap_object(const void *ptr, unsigned long n, struct page *page, offset -= s->red_left_pad; } - /* Allow address range falling entirely within object size. */ - if (offset <= object_size && n <= object_size - offset) - return 0; + /* Make sure object falls entirely within cache's usercopy region. */ + if (offset < s->useroffset || + offset - s->useroffset > s->usersize || + n > s->useroffset - offset + s->usersize) { + size_t object_size; - return report_usercopy("SLUB object", s->name, to_user, offset, n); + /* + * If the copy is still within the allocated object, produce + * a warning instead of rejecting the copy. This is intended + * to be a temporary method to find any missing usercopy + * whitelists. + */ + object_size = slab_ksize(s); + if ((offset <= object_size && n <= object_size - offset)) { + WARN_ONCE(1, "unexpected usercopy %s with bad or missing whitelist with SLUB object '%s' (offset %lu size %lu)", + to_user ? "exposure" : "overwrite", + s->name, offset, n); + return 0; + } + + return report_usercopy("SLUB object", s->name, to_user, + offset, n); + } + + return 0; } #endif /* CONFIG_HARDENED_USERCOPY */ diff --git a/mm/usercopy.c b/mm/usercopy.c index a8426a502136..4ed615d4efc8 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -58,6 +58,18 @@ static noinline int check_stack_object(const void *obj, unsigned long len) return GOOD_STACK; } +/* + * If this function is reached, then CONFIG_HARDENED_USERCOPY has found an + * unexpected state during a copy_from_user() or copy_to_user() call. + * There are several checks being performed on the buffer by the + * __check_object_size() function. Normal stack buffer usage should never + * trip the checks, and kernel text addressing will always trip the check. + * For cache objects, it is checking that only the whitelisted range of + * bytes for a given cache is being accessed (via the cache's usersize and + * useroffset fields). To adjust a cache whitelist, use the usercopy-aware + * kmem_cache_create_usercopy() function to create the cache (and + * carefully audit the whitelist range). + */ int report_usercopy(const char *name, const char *detail, bool to_user, unsigned long offset, unsigned long len) {