From: Michael Ellerman
To: Kees Cook, linux-kernel@vger.kernel.org
Cc: Kees Cook, Rik van Riel, Casey Schaufler, PaX Team, Brad Spengler,
    Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel,
    Benjamin Herrenschmidt, Tony Luck, Fenghua Yu, "David S. Miller",
    x86@kernel.org, Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Andrew Morton, Andy Lutomirski, Borislav Petkov,
    Mathias Krause, Jan Kara, Vitaly Wool, Andrea Arcangeli,
    Dmitry Vyukov, Laura Abbott, linux-arm-kernel@lists.infradead.org,
    linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    sparclinux@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-mm@kvack.org, kernel-hardening@lists.openwall.com
Subject: [kernel-hardening] Re: [PATCH 9/9] mm: SLUB hardened usercopy support
Date: Thu, 07 Jul 2016 14:35:17 +1000
Message-ID: <87oa6aysje.fsf@concordia.ellerman.id.au>
In-Reply-To: <1467843928-29351-10-git-send-email-keescook@chromium.org>
References: <1467843928-29351-1-git-send-email-keescook@chromium.org>
            <1467843928-29351-10-git-send-email-keescook@chromium.org>

Kees Cook writes:
> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
> SLUB allocator to catch any copies that may span objects.
>
> Based on code from PaX and grsecurity.
>
> Signed-off-by: Kees Cook
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 825ff4505336..0c8ace04f075 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3614,6 +3614,33 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
>  EXPORT_SYMBOL(__kmalloc_node);
>  #endif
>
> +#ifdef CONFIG_HARDENED_USERCOPY
> +/*
> + * Rejects objects that are incorrectly sized.
> + *
> + * Returns NULL if check passes, otherwise const char * to name of cache
> + * to indicate an error.
> + */
> +const char *__check_heap_object(const void *ptr, unsigned long n,
> +				struct page *page)
> +{
> +	struct kmem_cache *s;
> +	unsigned long offset;
> +
> +	/* Find object. */
> +	s = page->slab_cache;
> +
> +	/* Find offset within object. */
> +	offset = (ptr - page_address(page)) % s->size;
> +
> +	/* Allow address range falling entirely within object size. */
> +	if (offset <= s->object_size && n <= s->object_size - offset)
> +		return NULL;
> +
> +	return s->name;
> +}

I gave this a quick spin on powerpc, and it blew up immediately :)

  Brought up 16 CPUs
  usercopy: kernel memory overwrite attempt detected to c0000001fe023868 (kmalloc-16) (9 bytes)
  CPU: 8 PID: 103 Comm: kdevtmpfs Not tainted 4.7.0-rc3-00098-g09d9556ae5d1 #55
  Call Trace:
  [c0000001fa0cfb40] [c0000000009bdbe8] dump_stack+0xb0/0xf0 (unreliable)
  [c0000001fa0cfb80] [c00000000029cf44] __check_object_size+0x74/0x320
  [c0000001fa0cfc00] [c00000000005d4d0] copy_from_user+0x60/0xd4
  [c0000001fa0cfc40] [c00000000022b6cc] memdup_user+0x5c/0xf0
  [c0000001fa0cfc80] [c00000000022b90c] strndup_user+0x7c/0x110
  [c0000001fa0cfcc0] [c0000000002d6c28] SyS_mount+0x58/0x180
  [c0000001fa0cfd10] [c0000000005ee908] devtmpfsd+0x98/0x210
  [c0000001fa0cfd80] [c0000000000df810] kthread+0x110/0x130
  [c0000001fa0cfe30] [c0000000000095e8] ret_from_kernel_thread+0x5c/0x74

SLUB tracing says:

  TRACE kmalloc-16 alloc 0xc0000001fe023868 inuse=186 fp=0x (null)

That address is not 16-byte aligned, which seems to be caused by the red
zone. The following patch fixes it for me, but I don't know SLUB well
enough to say whether it's always correct.

cheers

diff --git a/mm/slub.c b/mm/slub.c
index 0c8ace04f075..66191ea4545a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3630,6 +3630,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
 	/* Find object. */
 	s = page->slab_cache;
 
+	/* Subtract red zone if enabled */
+	ptr = restore_red_left(s, ptr);
+
 	/* Find offset within object. */
 	offset = (ptr - page_address(page)) % s->size;
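
For anyone following along, here is a small userspace model of the offset
check that shows why the unadjusted pointer trips it when red zoning is on.
The slab layout numbers (slot size, left red zone size) are illustrative
assumptions rather than values read out of a real kernel; the check itself
mirrors the one in the patch quoted above.

/*
 * Userspace sketch of the __check_heap_object() range check.
 * Assumed layout of one debug kmalloc-16 slot:
 *   [ left red zone | 16-byte object | metadata / padding ]
 * kmalloc() returns a pointer just past the left red zone, so the raw
 * offset within the slot is RED_LEFT_PAD, not 0.
 */
#include <stdio.h>
#include <stdbool.h>

#define OBJECT_SIZE   16UL	/* s->object_size */
#define SLOT_SIZE     64UL	/* s->size (illustrative) */
#define RED_LEFT_PAD  16UL	/* left red zone size (illustrative) */

/* The check from the patch: pass if [offset, offset + n) is inside the object. */
static bool range_ok(unsigned long offset, unsigned long n)
{
	return offset <= OBJECT_SIZE && n <= OBJECT_SIZE - offset;
}

int main(void)
{
	unsigned long raw_offset = RED_LEFT_PAD % SLOT_SIZE;
	unsigned long n = 9;	/* the 9-byte copy_from_user() from the trace */

	printf("without red-zone fixup: offset=%lu -> %s\n",
	       raw_offset, range_ok(raw_offset, n) ? "pass" : "rejected");

	/* Michael's fix subtracts the left pad before computing the offset,
	 * which is what restore_red_left() does when SLAB_RED_ZONE is set. */
	unsigned long fixed_offset = raw_offset - RED_LEFT_PAD;
	printf("with red-zone fixup:    offset=%lu -> %s\n",
	       fixed_offset, range_ok(fixed_offset, n) ? "pass" : "rejected");

	return 0;
}

With those numbers the raw offset lands right at the object_size boundary,
so the 9-byte copy is rejected exactly as in the trace above; pulling the
pointer back over the left red zone brings the offset to 0 and the copy
passes.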