From patchwork Thu Jan 11 02:02:39 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10156495
From: Kees Cook
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Laura Abbott, Ingo Molnar, Mark Rutland,
	linux-mm@kvack.org, linux-xfs@vger.kernel.org, Linus Torvalds,
	David Windsor, Alexander Viro, Andy Lutomirski, Christoph Hellwig,
	"David S. Miller",
Petersen" , Paolo Bonzini , Christian Borntraeger , Christoffer Dall , Dave Kleikamp , Jan Kara , Luis de Bethencourt , Marc Zyngier , Rik van Riel , Matthew Garrett , linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, netdev@vger.kernel.org, kernel-hardening@lists.openwall.com Date: Wed, 10 Jan 2018 18:02:39 -0800 Message-Id: <1515636190-24061-8-git-send-email-keescook@chromium.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1515636190-24061-1-git-send-email-keescook@chromium.org> References: <1515636190-24061-1-git-send-email-keescook@chromium.org> Subject: [kernel-hardening] [PATCH 07/38] usercopy: WARN() on slab cache usercopy region violations X-Virus-Scanned: ClamAV using ClamSMTP This patch adds checking of usercopy cache whitelisting, and is modified from Brad Spengler/PaX Team's PAX_USERCOPY whitelisting code in the last public patch of grsecurity/PaX based on my understanding of the code. Changes or omissions from the original code are mine and don't reflect the original grsecurity/PaX code. The SLAB and SLUB allocators are modified to WARN() on all copy operations in which the kernel heap memory being modified falls outside of the cache's defined usercopy region. Based on an earlier patch from David Windsor. Cc: Christoph Lameter Cc: Pekka Enberg Cc: David Rientjes Cc: Joonsoo Kim Cc: Andrew Morton Cc: Laura Abbott Cc: Ingo Molnar Cc: Mark Rutland Cc: linux-mm@kvack.org Cc: linux-xfs@vger.kernel.org Signed-off-by: Kees Cook --- mm/slab.c | 22 +++++++++++++++++++--- mm/slab.h | 2 ++ mm/slub.c | 23 +++++++++++++++++++---- mm/usercopy.c | 21 ++++++++++++++++++--- 4 files changed, 58 insertions(+), 10 deletions(-) diff --git a/mm/slab.c b/mm/slab.c index 47acfe54e1ae..1c02f6e94235 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -4392,7 +4392,9 @@ module_init(slab_proc_init); #ifdef CONFIG_HARDENED_USERCOPY /* - * Rejects objects that are incorrectly sized. + * Rejects incorrectly sized objects and objects that are to be copied + * to/from userspace but do not fall entirely within the containing slab + * cache's usercopy region. * * Returns NULL if check passes, otherwise const char * to name of cache * to indicate an error. @@ -4412,10 +4414,24 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page, /* Find offset within object. */ offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep); - /* Allow address range falling entirely within object size. */ - if (offset <= cachep->object_size && n <= cachep->object_size - offset) + /* Allow address range falling entirely within usercopy region. */ + if (offset >= cachep->useroffset && + offset - cachep->useroffset <= cachep->usersize && + n <= cachep->useroffset - offset + cachep->usersize) return; + /* + * If the copy is still within the allocated object, produce + * a warning instead of rejecting the copy. This is intended + * to be a temporary method to find any missing usercopy + * whitelists. 
+	 */
+	if (offset <= cachep->object_size &&
+	    n <= cachep->object_size - offset) {
+		usercopy_warn("SLAB object", cachep->name, to_user, offset, n);
+		return;
+	}
+
 	usercopy_abort("SLAB object", cachep->name, to_user, offset, n);
 }
 #endif /* CONFIG_HARDENED_USERCOPY */
diff --git a/mm/slab.h b/mm/slab.h
index 1897991df3fa..b476c435680f 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -530,6 +530,8 @@ static inline void cache_random_seq_destroy(struct kmem_cache *cachep) { }
 #endif /* CONFIG_SLAB_FREELIST_RANDOM */
 
 #ifdef CONFIG_HARDENED_USERCOPY
+void usercopy_warn(const char *name, const char *detail, bool to_user,
+		   unsigned long offset, unsigned long len);
 void __noreturn usercopy_abort(const char *name, const char *detail,
 			       bool to_user, unsigned long offset,
 			       unsigned long len);
diff --git a/mm/slub.c b/mm/slub.c
index f40a57164dd6..6d9b1e7d3226 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3813,7 +3813,9 @@ EXPORT_SYMBOL(__kmalloc_node);
 
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
- * Rejects objects that are incorrectly sized.
+ * Rejects incorrectly sized objects and objects that are to be copied
+ * to/from userspace but do not fall entirely within the containing slab
+ * cache's usercopy region.
  *
  * Returns NULL if check passes, otherwise const char * to name of cache
  * to indicate an error.
@@ -3827,7 +3829,6 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 
 	/* Find object and usable object size. */
 	s = page->slab_cache;
-	object_size = slab_ksize(s);
 
 	/* Reject impossible pointers. */
 	if (ptr < page_address(page))
@@ -3845,10 +3846,24 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 		offset -= s->red_left_pad;
 	}
 
-	/* Allow address range falling entirely within object size. */
-	if (offset <= object_size && n <= object_size - offset)
+	/* Allow address range falling entirely within usercopy region. */
+	if (offset >= s->useroffset &&
+	    offset - s->useroffset <= s->usersize &&
+	    n <= s->useroffset - offset + s->usersize)
 		return;
 
+	/*
+	 * If the copy is still within the allocated object, produce
+	 * a warning instead of rejecting the copy. This is intended
+	 * to be a temporary method to find any missing usercopy
+	 * whitelists.
+	 */
+	object_size = slab_ksize(s);
+	if (offset <= object_size && n <= object_size - offset) {
+		usercopy_warn("SLUB object", s->name, to_user, offset, n);
+		return;
+	}
+
 	usercopy_abort("SLUB object", s->name, to_user, offset, n);
 }
 #endif /* CONFIG_HARDENED_USERCOPY */
diff --git a/mm/usercopy.c b/mm/usercopy.c
index a562dd094ace..e9e9325f7638 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -59,13 +59,28 @@ static noinline int check_stack_object(const void *obj, unsigned long len)
 }
 
 /*
- * If this function is reached, then CONFIG_HARDENED_USERCOPY has found an
- * unexpected state during a copy_from_user() or copy_to_user() call.
+ * If these functions are reached, then CONFIG_HARDENED_USERCOPY has found
+ * an unexpected state during a copy_from_user() or copy_to_user() call.
  * There are several checks being performed on the buffer by the
  * __check_object_size() function. Normal stack buffer usage should never
  * trip the checks, and kernel text addressing will always trip the check.
- * For cache objects, copies must be within the object size.
+ * For cache objects, it is checking that only the whitelisted range of
+ * bytes for a given cache is being accessed (via the cache's usersize and
+ * useroffset fields). To adjust a cache whitelist, use the usercopy-aware
+ * kmem_cache_create_usercopy() function to create the cache (and
+ * carefully audit the whitelist range).
  */
+void usercopy_warn(const char *name, const char *detail, bool to_user,
+		   unsigned long offset, unsigned long len)
+{
+	WARN_ONCE(1, "Bad or missing usercopy whitelist? Kernel memory %s attempt detected %s %s%s%s%s (offset %lu, size %lu)!\n",
+		 to_user ? "exposure" : "overwrite",
+		 to_user ? "from" : "to",
+		 name ? : "unknown?!",
+		 detail ? " '" : "", detail ? : "", detail ? "'" : "",
+		 offset, len);
+}
+
 void __noreturn usercopy_abort(const char *name, const char *detail,
 			       bool to_user, unsigned long offset,
 			       unsigned long len)
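
For readers following the series, here is a minimal, hypothetical sketch
(not part of this patch) of how a cache owner would declare a usercopy
whitelist so that copies hit the "allow" path above rather than
usercopy_warn() or usercopy_abort(). The names struct foo_record,
foo_cachep, and foo_cache_init() are invented for illustration;
kmem_cache_create_usercopy() is the whitelist-aware cache constructor
referenced in the comment above and introduced earlier in this series.

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/stddef.h>

/* Hypothetical object: only 'data' is ever copied to/from userspace. */
struct foo_record {
	spinlock_t lock;	/* kernel-internal, not whitelisted */
	char data[128];		/* the usercopy-exposed region */
	unsigned long flags;	/* kernel-internal, not whitelisted */
};

static struct kmem_cache *foo_cachep;

static int __init foo_cache_init(void)
{
	/* Whitelist only the 'data' member for copy_{to,from}_user(). */
	foo_cachep = kmem_cache_create_usercopy("foo_record",
			sizeof(struct foo_record), 0, SLAB_HWCACHE_ALIGN,
			offsetof(struct foo_record, data),
			sizeof(((struct foo_record *)0)->data),
			NULL);
	return foo_cachep ? 0 : -ENOMEM;
}

With such a whitelist in place, a copy that stays inside 'data' passes
silently; a copy that strays into 'lock' or 'flags' but remains within the
allocated object triggers the temporary usercopy_warn() fallback added by
this patch, and anything reaching beyond the object still hits
usercopy_abort().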