From patchwork Wed Sep 20 20:45:32 2017
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9962527
From: Kees Cook
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Andrew Morton, Nicholas Piggin, Laura Abbott,
 Mickaël Salaün, Ingo Molnar, Thomas Gleixner, Andy Lutomirski,
 linux-fsdevel@vger.kernel.org, netdev@vger.kernel.org, linux-mm@kvack.org,
 kernel-hardening@lists.openwall.com, David Windsor
Date: Wed, 20 Sep 2017 13:45:32 -0700
Message-Id: <1505940337-79069-27-git-send-email-keescook@chromium.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1505940337-79069-1-git-send-email-keescook@chromium.org>
References: <1505940337-79069-1-git-send-email-keescook@chromium.org>
Subject: [kernel-hardening] [PATCH v3 26/31] fork: Provide usercopy
 whitelisting for task_struct

While the blocked and saved_sigmask fields of task_struct are copied to
userspace (via sigmask_to_save() and setup_rt_frame()), they are always
copied with a static length (i.e. sizeof(sigset_t)), so they are
implicitly whitelisted.

The only portion of task_struct that is potentially dynamically sized
and may be copied to userspace is in the architecture-specific
thread_struct at the end of task_struct.

cache object allocation:
    kernel/fork.c:
        alloc_task_struct_node(...):
            return kmem_cache_alloc_node(task_struct_cachep, ...);

        dup_task_struct(...):
            ...
            tsk = alloc_task_struct_node(node);

        copy_process(...):
            ...
            dup_task_struct(...)

        _do_fork(...):
            ...
            copy_process(...)

example usage trace:

    arch/x86/kernel/fpu/signal.c:
        __fpu__restore_sig(...):
            ...
            struct task_struct *tsk = current;
            struct fpu *fpu = &tsk->thread.fpu;
            ...
            __copy_from_user(&fpu->state.xsave, ..., state_size);

        fpu__restore_sig(...):
            ...
            return __fpu__restore_sig(...);

    arch/x86/kernel/signal.c:
        restore_sigcontext(...):
            ...
            fpu__restore_sig(...)
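
The state_size used in the trace above is, on x86, a runtime value (it
tracks the CPU's enabled xstate features), which is why this copy is
dynamically sized. For context, the layout involved looks roughly like
the sketch below (paraphrased from include/linux/sched.h; shown for
illustration, not part of this patch):

    struct task_struct {
        ...
        /* CPU-specific state of this task: */
        struct thread_struct    thread;
        /*
         * WARNING: on x86, 'thread_struct' contains a variable-sized
         * structure. It *MUST* be at the end of 'task_struct'.
         */
    };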

This introduces arch_thread_struct_whitelist() to let an architecture
declare specifically where the whitelist should be within thread_struct.
If undefined, the entire thread_struct field is left whitelisted.
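
To make the intended use concrete, an architecture override might look
roughly like the sketch below (illustration only: the real
per-architecture implementations belong to later patches in this
series, and the fpu.state / fpu_kernel_xstate_size names are assumed
from current x86 code):

    /* Sketch: whitelist only the dynamically sized FPU register
     * state inside thread_struct, assuming an x86-like layout where
     * fpu.state is the xsave area and fpu_kernel_xstate_size is its
     * runtime size. */
    #define arch_thread_struct_whitelist(offset, size)            \
        do {                                                      \
            *offset = offsetof(struct thread_struct, fpu.state);  \
            *size = fpu_kernel_xstate_size;                       \
        } while (0)

An architecture providing such a definition would also select
HAVE_ARCH_THREAD_STRUCT_WHITELIST (added below) so the generic
fallback in sched/task.h is not used.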

Cc: Andrew Morton
Cc: Nicholas Piggin
Cc: Laura Abbott
Cc: "Mickaël Salaün"
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Andy Lutomirski
Signed-off-by: Kees Cook
Acked-by: Rik van Riel
---
 arch/Kconfig               | 11 +++++++++++
 include/linux/sched/task.h | 14 ++++++++++++++
 kernel/fork.c              | 22 ++++++++++++++++++++--
 3 files changed, 45 insertions(+), 2 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 1aafb4efbb51..43f2e7b033ca 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -241,6 +241,17 @@ config ARCH_INIT_TASK
 config ARCH_TASK_STRUCT_ALLOCATOR
 	bool
 
+config HAVE_ARCH_THREAD_STRUCT_WHITELIST
+	bool
+	depends on !ARCH_TASK_STRUCT_ALLOCATOR
+	help
+	  An architecture should select this to provide hardened usercopy
+	  knowledge about what region of the thread_struct should be
+	  whitelisted for copying to userspace. Normally this is only the
+	  FPU registers. Specifically, arch_thread_struct_whitelist()
+	  should be implemented. Without this, the entire thread_struct
+	  field in task_struct will be left whitelisted.
+
 # Select if arch has its private alloc_thread_stack() function
 config ARCH_THREAD_STACK_ALLOCATOR
 	bool
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index 79a2a744648d..a5e6f0913f74 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -103,6 +103,20 @@ extern int arch_task_struct_size __read_mostly;
 # define arch_task_struct_size (sizeof(struct task_struct))
 #endif
 
+#ifndef CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST
+/*
+ * If an architecture has not declared a thread_struct whitelist we
+ * must assume something there may need to be copied to userspace.
+ */
+static inline void arch_thread_struct_whitelist(unsigned long *offset,
+						unsigned long *size)
+{
+	*offset = 0;
+	/* Handle dynamically sized thread_struct. */
+	*size = arch_task_struct_size - offsetof(struct task_struct, thread);
+}
+#endif
+
 #ifdef CONFIG_VMAP_STACK
 static inline struct vm_struct *task_stack_vm_area(const struct task_struct *t)
 {
diff --git a/kernel/fork.c b/kernel/fork.c
index 720109dc723a..d8dcd8f8e82f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -454,6 +454,21 @@ static void set_max_threads(unsigned int max_threads_suggested)
 int arch_task_struct_size __read_mostly;
 #endif
 
+static void task_struct_whitelist(unsigned long *offset, unsigned long *size)
+{
+	/* Fetch thread_struct whitelist for the architecture. */
+	arch_thread_struct_whitelist(offset, size);
+
+	/*
+	 * Handle zero-sized whitelist or empty thread_struct, otherwise
+	 * adjust offset to position of thread_struct in task_struct.
+	 */
+	if (unlikely(*size == 0))
+		*offset = 0;
+	else
+		*offset += offsetof(struct task_struct, thread);
+}
+
 void __init fork_init(void)
 {
 	int i;
@@ -462,11 +477,14 @@ void __init fork_init(void)
 #define ARCH_MIN_TASKALIGN	0
 #endif
 	int align = max_t(int, L1_CACHE_BYTES, ARCH_MIN_TASKALIGN);
+	unsigned long useroffset, usersize;
 
 	/* create a slab on which task_structs can be allocated */
-	task_struct_cachep = kmem_cache_create("task_struct",
+	task_struct_whitelist(&useroffset, &usersize);
+	task_struct_cachep = kmem_cache_create_usercopy("task_struct",
 			arch_task_struct_size, align,
-			SLAB_PANIC|SLAB_NOTRACK|SLAB_ACCOUNT, NULL);
+			SLAB_PANIC|SLAB_NOTRACK|SLAB_ACCOUNT,
+			useroffset, usersize, NULL);
 #endif
 
 	/* do the arch specific task caches init */
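
(Usage note, not part of the patch: the useroffset/usersize pair
recorded by kmem_cache_create_usercopy(), which this series introduced
earlier, is consumed by the hardened usercopy checks, which reject any
copy that strays outside the whitelisted window. Simplified to a
sketch, with the function and reporting-helper names below being
hypothetical, the per-object check amounts to:

    /* Allow an n-byte copy at "offset" within a task_struct slab
     * object only if it lies entirely inside the whitelisted
     * window starting at useroffset. */
    static void check_task_struct_copy(struct kmem_cache *cachep,
                                       unsigned long offset,
                                       unsigned long n)
    {
        if (offset < cachep->useroffset ||
            offset - cachep->useroffset > cachep->usersize ||
            n > cachep->useroffset - offset + cachep->usersize)
            usercopy_abort();   /* hypothetical reporting helper */
    }

With this patch, that window for task_struct objects is exactly the
thread_struct region computed by task_struct_whitelist() above.)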