Subject: [kernel-hardening] Re: [PATCHv2 02/14] fork: allow arch-override of VMAP stack alignment
Date: Tue, 15 Aug 2017 17:30:36 +0100
From: Mark Rutland
To: Andy Lutomirski
Cc: "linux-arm-kernel@lists.infradead.org", Ard Biesheuvel, Catalin Marinas,
    James Morse, Laura Abbott, "linux-kernel@vger.kernel.org", Matt Fleming,
    Will Deacon, "kernel-hardening@lists.openwall.com", Kees Cook
Message-ID: <20170815163036.GJ6090@leverpostej>

On Tue, Aug 15, 2017 at 09:09:36AM -0700, Andy Lutomirski wrote:
> On Tue, Aug 15, 2017 at 5:50 AM, Mark Rutland wrote:
> > In some cases, an architecture might wish its stacks to be aligned to a
> > boundary larger than THREAD_SIZE. For example, using an alignment of
> > double THREAD_SIZE can allow for stack overflows smaller than
> > THREAD_SIZE to be detected by checking a single bit of the stack
> > pointer.
> >
> > This patch allows architectures to override the alignment of VMAP'd
> > stacks, by defining THREAD_ALIGN. Where not defined, this defaults to
> > THREAD_SIZE, as is the case today.
>
> This looks okay, but it might make sense to move that to a header file
> so THREAD_ALIGN is always available.

I was a little worried about breaking things, since arches don't define
THREAD_SIZE in a consistent location. Looking again, it looks like those
are all transitively included into each arch's <asm/thread_info.h>, so I
think I can move this into <linux/thread_info.h>, which'll have to be
added to kernel/fork.c's includes.

Are you happy with the below fixup?

Thanks,
Mark.
---->8----
diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index 250a276..905d769 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -38,6 +38,10 @@ enum {
 
 #ifdef __KERNEL__
 
+#ifndef THREAD_ALIGN
+#define THREAD_ALIGN THREAD_SIZE
+#endif
+
 #ifdef CONFIG_DEBUG_STACK_USAGE
 # define THREADINFO_GFP		(GFP_KERNEL_ACCOUNT | __GFP_NOTRACK | \
 				 __GFP_ZERO)
diff --git a/kernel/fork.c b/kernel/fork.c
index 696d692..f12882a 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -88,6 +88,7 @@
 #include
 #include
 #include
+#include <linux/thread_info.h>
 
 #include
 #include
@@ -217,9 +218,6 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
 		return s->addr;
 	}
 
-#ifndef THREAD_ALIGN
-#define THREAD_ALIGN THREAD_SIZE
-#endif
 	stack = __vmalloc_node_range(THREAD_SIZE, THREAD_ALIGN,
 				     VMALLOC_START, VMALLOC_END,
 				     THREADINFO_GFP,
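
[ For context, not part of the fixup above: an architecture opting in to
  the double-size alignment described in the commit message could provide
  an override along these lines in one of its own headers. The header
  location and the CONFIG_VMAP_STACK guard are illustrative, not taken
  from this series. ]

/*
 * Hypothetical arch override, e.g. in that arch's <asm/thread_info.h>:
 * only VMAP'd stacks need the larger alignment; otherwise the generic
 * default of THREAD_ALIGN == THREAD_SIZE is kept.
 */
#ifdef CONFIG_VMAP_STACK
#define THREAD_ALIGN	(2 * THREAD_SIZE)
#else
#define THREAD_ALIGN	THREAD_SIZE
#endif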
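
[ Likewise for illustration only: the single-bit overflow check mentioned
  in the commit message relies on the stack base being aligned to
  2 * THREAD_SIZE, so that bit THREAD_SHIFT of every in-range stack
  pointer is constant and an overflow of less than THREAD_SIZE flips it.
  Below is a minimal standalone sketch of that invariant; the THREAD_SHIFT
  value and helper name are made up here. ]

#include <stdbool.h>
#include <stdint.h>

#define THREAD_SHIFT	14			/* example: 16K stacks */
#define THREAD_SIZE	(1UL << THREAD_SHIFT)

/*
 * With the stack occupying [base, base + THREAD_SIZE) and base aligned to
 * 2 * THREAD_SIZE, any valid stack pointer has bit THREAD_SHIFT clear; a
 * pointer that has underrun the base by less than THREAD_SIZE has it set,
 * so testing that one bit is enough to catch such an overflow.
 */
static inline bool sp_overflowed(uintptr_t sp)
{
	return sp & THREAD_SIZE;
}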