From patchwork Mon Aug 28 21:35:07 2017
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9926335
From: Kees Cook
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Andrew Morton, Nicholas Piggin, Laura Abbott,
 Mickaël Salaün, Ingo Molnar, Thomas Gleixner, Andy Lutomirski,
 linux-mm@kvack.org, kernel-hardening@lists.openwall.com, David Windsor
Date: Mon, 28 Aug 2017 14:35:07 -0700
Message-Id: <1503956111-36652-27-git-send-email-keescook@chromium.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1503956111-36652-1-git-send-email-keescook@chromium.org>
References: <1503956111-36652-1-git-send-email-keescook@chromium.org>
Subject: [kernel-hardening] [PATCH v2 26/30] fork: Provide usercopy whitelisting for task_struct

While the blocked and saved_sigmask fields of task_struct are copied to
userspace (via sigmask_to_save() and
setup_rt_frame()), they are always copied with a static length (i.e.
sizeof(sigset_t)).

The only portion of task_struct that is potentially dynamically sized and
may be copied to userspace is in the architecture-specific thread_struct
at the end of task_struct.

cache object allocation:
    kernel/fork.c:
        alloc_task_struct_node(...):
            return kmem_cache_alloc_node(task_struct_cachep, ...);

        dup_task_struct(...):
            ...
            tsk = alloc_task_struct_node(node);

        copy_process(...):
            ...
            dup_task_struct(...)

        _do_fork(...):
            ...
            copy_process(...)

example usage trace:

    arch/x86/kernel/fpu/signal.c:
        __fpu__restore_sig(...):
            ...
            struct task_struct *tsk = current;
            struct fpu *fpu = &tsk->thread.fpu;
            ...
            __copy_from_user(&fpu->state.xsave, ..., state_size);

        fpu__restore_sig(...):
            ...
            return __fpu__restore_sig(...);

    arch/x86/kernel/signal.c:
        restore_sigcontext(...):
            ...
            fpu__restore_sig(...)

This introduces arch_thread_struct_whitelist() to let an architecture
declare specifically where the whitelist should be within thread_struct.
If undefined, the entire thread_struct field is left whitelisted.
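
As an illustration of what an architecture-provided whitelist could look
like (a sketch only, not part of this patch: the fpu.state member and the
fpu_kernel_xstate_size variable are x86-flavored assumptions), an
architecture whose only user-copied thread_struct contents are the FPU
registers might define:

    #define arch_thread_struct_whitelist(offset, size)                  \
        do {                                                            \
                /* Whitelist only the FPU register image. */            \
                *offset = offsetof(struct thread_struct, fpu.state);    \
                *size = fpu_kernel_xstate_size;                         \
        } while (0)

Note the offset is relative to thread_struct; task_struct_whitelist()
below shifts it by offsetof(struct task_struct, thread) before handing it
to the slab allocator.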
Cc: Andrew Morton
Cc: Nicholas Piggin
Cc: Laura Abbott
Cc: "Mickaël Salaün"
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Andy Lutomirski
Signed-off-by: Kees Cook
Acked-by: Rik van Riel
---
 arch/Kconfig               | 11 +++++++++++
 include/linux/sched/task.h | 14 ++++++++++++++
 kernel/fork.c              | 22 ++++++++++++++++++++--
 3 files changed, 45 insertions(+), 2 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 21d0089117fe..380d2bc2001b 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -241,6 +241,17 @@ config ARCH_INIT_TASK
 config ARCH_TASK_STRUCT_ALLOCATOR
 	bool
 
+config HAVE_ARCH_THREAD_STRUCT_WHITELIST
+	bool
+	depends on !ARCH_TASK_STRUCT_ALLOCATOR
+	help
+	  An architecture should select this to provide hardened usercopy
+	  knowledge about what region of the thread_struct should be
+	  whitelisted for copying to userspace. Normally this is only the
+	  FPU registers. Specifically, arch_thread_struct_whitelist()
+	  should be implemented. Without this, the entire thread_struct
+	  field in task_struct will be left whitelisted.
+
 # Select if arch has its private alloc_thread_stack() function
 config ARCH_THREAD_STACK_ALLOCATOR
 	bool
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index c97e5f096927..60f36adaa504 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -104,6 +104,20 @@ extern int arch_task_struct_size __read_mostly;
 # define arch_task_struct_size (sizeof(struct task_struct))
 #endif
 
+#ifndef CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST
+/*
+ * If an architecture has not declared a thread_struct whitelist we
+ * must assume something there may need to be copied to userspace.
+ */
+static inline void arch_thread_struct_whitelist(unsigned long *offset,
+						unsigned long *size)
+{
+	*offset = 0;
+	/* Handle dynamically sized thread_struct. */
+	*size = arch_task_struct_size - offsetof(struct task_struct, thread);
+}
+#endif
+
 #ifdef CONFIG_VMAP_STACK
 static inline struct vm_struct *task_stack_vm_area(const struct task_struct *t)
 {
diff --git a/kernel/fork.c b/kernel/fork.c
index 0f33fb1aabbf..4fcc9cc8e108 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -452,6 +452,21 @@ static void set_max_threads(unsigned int max_threads_suggested)
 int arch_task_struct_size __read_mostly;
 #endif
 
+static void task_struct_whitelist(unsigned long *offset, unsigned long *size)
+{
+	/* Fetch thread_struct whitelist for the architecture. */
+	arch_thread_struct_whitelist(offset, size);
+
+	/*
+	 * Handle zero-sized whitelist or empty thread_struct, otherwise
+	 * adjust offset to position of thread_struct in task_struct.
+	 */
+	if (unlikely(*size == 0))
+		*offset = 0;
+	else
+		*offset += offsetof(struct task_struct, thread);
+}
+
 void __init fork_init(void)
 {
 	int i;
@@ -460,11 +475,14 @@ void __init fork_init(void)
 #define ARCH_MIN_TASKALIGN	0
 #endif
 	int align = max_t(int, L1_CACHE_BYTES, ARCH_MIN_TASKALIGN);
+	unsigned long useroffset, usersize;
 
 	/* create a slab on which task_structs can be allocated */
-	task_struct_cachep = kmem_cache_create("task_struct",
+	task_struct_whitelist(&useroffset, &usersize);
+	task_struct_cachep = kmem_cache_create_usercopy("task_struct",
 			arch_task_struct_size, align,
-			SLAB_PANIC|SLAB_NOTRACK|SLAB_ACCOUNT, NULL);
+			SLAB_PANIC|SLAB_NOTRACK|SLAB_ACCOUNT,
+			useroffset, usersize, NULL);
 #endif
 
 	/* do the arch specific task caches init */
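
The net effect of the useroffset/usersize pair can be modeled in a few
lines of C. This is a simplified sketch of the bounds test only, not the
slab allocator's actual __check_heap_object() logic:

    #include <stdbool.h>

    /*
     * A copy of n bytes starting at byte 'offset' within a slab object
     * is allowed only when it lies entirely inside the cache's
     * whitelist window [useroffset, useroffset + usersize).
     */
    static bool copy_is_whitelisted(unsigned long offset, unsigned long n,
                                    unsigned long useroffset,
                                    unsigned long usersize)
    {
            if (offset < useroffset)
                    return false;   /* starts before the window */
            if (offset - useroffset > usersize)
                    return false;   /* starts past the window */
            return n <= usersize - (offset - useroffset);   /* must fit */
    }

With the fallback whitelist above, useroffset is offsetof(struct
task_struct, thread) and usersize runs from there to the end of the
object, so the FPU state restore in __fpu__restore_sig() keeps working
while a usercopy overlapping task_struct fields outside thread_struct is
rejected.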