From patchwork Thu Jan 11 02:03:03 2018
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10156593
From: Kees Cook
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Andrew Morton, Nicholas Piggin, Laura Abbott, Mickaël Salaün, Ingo Molnar, Thomas Gleixner, Andy Lutomirski, Linus Torvalds, David Windsor, Alexander Viro, Christoph Hellwig, Christoph Lameter, "David S. Miller", Mark Rutland, "Martin K. Petersen", Paolo Bonzini, Christian Borntraeger, Christoffer Dall, Dave Kleikamp, Jan Kara, Luis de Bethencourt, Marc Zyngier, Rik van Riel, Matthew Garrett, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, netdev@vger.kernel.org, linux-mm@kvack.org, kernel-hardening@lists.openwall.com
Date: Wed, 10 Jan 2018 18:03:03 -0800
Message-Id: <1515636190-24061-32-git-send-email-keescook@chromium.org>
In-Reply-To: <1515636190-24061-1-git-send-email-keescook@chromium.org>
References: <1515636190-24061-1-git-send-email-keescook@chromium.org>
Subject: [kernel-hardening] [PATCH 31/38] fork: Provide usercopy whitelisting for task_struct

While the blocked and saved_sigmask fields of task_struct are copied to
userspace (via sigmask_to_save() and setup_rt_frame()), they are always
copied with a static length (i.e. sizeof(sigset_t)). The only portion of
task_struct that is potentially dynamically sized and may be copied to
userspace is in the architecture-specific thread_struct at the end of
task_struct.

cache object allocation:
    kernel/fork.c:
        alloc_task_struct_node(...):
            return kmem_cache_alloc_node(task_struct_cachep, ...);

        dup_task_struct(...):
            ...
            tsk = alloc_task_struct_node(node);

        copy_process(...):
            ...
            dup_task_struct(...)

        _do_fork(...):
            ...
            copy_process(...)

example usage trace:

    arch/x86/kernel/fpu/signal.c:
        __fpu__restore_sig(...):
            ...
            struct task_struct *tsk = current;
            struct fpu *fpu = &tsk->thread.fpu;
            ...
            __copy_from_user(&fpu->state.xsave, ..., state_size);

        fpu__restore_sig(...):
            ...
            return __fpu__restore_sig(...);

    arch/x86/kernel/signal.c:
        restore_sigcontext(...):
            ...
            fpu__restore_sig(...)

This introduces arch_thread_struct_whitelist() to let an architecture
declare specifically where the whitelist should be within thread_struct.
If undefined, the entire thread_struct field is left whitelisted.
Cc: Andrew Morton
Cc: Nicholas Piggin
Cc: Laura Abbott
Cc: "Mickaël Salaün"
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Andy Lutomirski
Signed-off-by: Kees Cook
Acked-by: Rik van Riel
---
 arch/Kconfig               | 11 +++++++++++
 include/linux/sched/task.h | 14 ++++++++++++++
 kernel/fork.c              | 22 ++++++++++++++++++++--
 3 files changed, 45 insertions(+), 2 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 400b9e1b2f27..8911ff37335a 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -242,6 +242,17 @@ config ARCH_INIT_TASK
 config ARCH_TASK_STRUCT_ALLOCATOR
 	bool
 
+config HAVE_ARCH_THREAD_STRUCT_WHITELIST
+	bool
+	depends on !ARCH_TASK_STRUCT_ALLOCATOR
+	help
+	  An architecture should select this to provide hardened usercopy
+	  knowledge about what region of the thread_struct should be
+	  whitelisted for copying to userspace. Normally this is only the
+	  FPU registers. Specifically, arch_thread_struct_whitelist()
+	  should be implemented. Without this, the entire thread_struct
+	  field in task_struct will be left whitelisted.
+
 # Select if arch has its private alloc_thread_stack() function
 config ARCH_THREAD_STACK_ALLOCATOR
 	bool
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index 05b8650f06f5..5be31eb7b266 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -104,6 +104,20 @@ extern int arch_task_struct_size __read_mostly;
 # define arch_task_struct_size (sizeof(struct task_struct))
 #endif
 
+#ifndef CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST
+/*
+ * If an architecture has not declared a thread_struct whitelist we
+ * must assume something there may need to be copied to userspace.
+ */
+static inline void arch_thread_struct_whitelist(unsigned long *offset,
+						unsigned long *size)
+{
+	*offset = 0;
+	/* Handle dynamically sized thread_struct. */
+	*size = arch_task_struct_size - offsetof(struct task_struct, thread);
+}
+#endif
+
 #ifdef CONFIG_VMAP_STACK
 static inline struct vm_struct *task_stack_vm_area(const struct task_struct *t)
 {
diff --git a/kernel/fork.c b/kernel/fork.c
index 0e086af148f2..5977e691c754 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -458,6 +458,21 @@ static void set_max_threads(unsigned int max_threads_suggested)
 int arch_task_struct_size __read_mostly;
 #endif
 
+static void task_struct_whitelist(unsigned long *offset, unsigned long *size)
+{
+	/* Fetch thread_struct whitelist for the architecture. */
+	arch_thread_struct_whitelist(offset, size);
+
+	/*
+	 * Handle zero-sized whitelist or empty thread_struct, otherwise
+	 * adjust offset to position of thread_struct in task_struct.
+	 */
+	if (unlikely(*size == 0))
+		*offset = 0;
+	else
+		*offset += offsetof(struct task_struct, thread);
+}
+
 void __init fork_init(void)
 {
 	int i;
@@ -466,11 +481,14 @@ void __init fork_init(void)
 #define ARCH_MIN_TASKALIGN 0
 #endif
 	int align = max_t(int, L1_CACHE_BYTES, ARCH_MIN_TASKALIGN);
+	unsigned long useroffset, usersize;
 
 	/* create a slab on which task_structs can be allocated */
-	task_struct_cachep = kmem_cache_create("task_struct",
+	task_struct_whitelist(&useroffset, &usersize);
+	task_struct_cachep = kmem_cache_create_usercopy("task_struct",
 			arch_task_struct_size, align,
-			SLAB_PANIC|SLAB_ACCOUNT, NULL);
+			SLAB_PANIC|SLAB_ACCOUNT,
+			useroffset, usersize, NULL);
 #endif
 	/* do the arch specific task caches init */