From patchwork Mon Jun 19 23:36:28 2017
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9798063
From: Kees Cook
To: kernel-hardening@lists.openwall.com
Cc: Kees Cook, David Windsor, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Mon, 19 Jun 2017 16:36:28 -0700
Message-Id: <1497915397-93805-15-git-send-email-keescook@chromium.org>
In-Reply-To: <1497915397-93805-1-git-send-email-keescook@chromium.org>
References: <1497915397-93805-1-git-send-email-keescook@chromium.org>
Subject: [kernel-hardening] [PATCH 14/23] fork: define usercopy region in thread_stack, task_struct, mm_struct slab caches

From: David Windsor

In support of usercopy hardening, this patch defines a region in the
thread_stack, task_struct and mm_struct slab caches in which userspace
copy operations are allowed. Since only a single whitelisted buffer
region is used, this moves the usercopyable fields next to each other
in task_struct so a single window can cover them. This region is known
as the slab cache's usercopy region. Slab caches can now check that
each copy operation involving cache-managed memory falls entirely
within the slab's usercopy region.

This patch is modified from Brad Spengler/PaX Team's PAX_USERCOPY
whitelisting code in the last public patch of grsecurity/PaX based on
my understanding of the code.
Changes or omissions from the original code are mine and don't reflect
the original grsecurity/PaX code.

Signed-off-by: David Windsor
[kees: adjust commit log]
Signed-off-by: Kees Cook
---
 include/linux/sched.h | 15 ++++++++++++---
 kernel/fork.c         | 18 +++++++++++++-----
 2 files changed, 25 insertions(+), 8 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2b69fc650201..345db7983af1 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -745,10 +745,19 @@ struct task_struct {
 	/* Signal handlers: */
 	struct signal_struct		*signal;
 	struct sighand_struct		*sighand;
-	sigset_t			blocked;
 	sigset_t			real_blocked;
-	/* Restored if set_restore_sigmask() was used: */
-	sigset_t			saved_sigmask;
+
+	/*
+	 * Usercopy slab whitelisting needs blocked, saved_sigmask
+	 * to be adjacent.
+	 */
+	struct {
+		sigset_t		blocked;
+
+		/* Restored if set_restore_sigmask() was used */
+		sigset_t		saved_sigmask;
+	};
+
 	struct sigpending		pending;
 	unsigned long			sas_ss_sp;
 	size_t				sas_ss_size;
diff --git a/kernel/fork.c b/kernel/fork.c
index e53770d2bf95..172df19baeb5 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -282,8 +282,9 @@ static void free_thread_stack(struct task_struct *tsk)
 
 void thread_stack_cache_init(void)
 {
-	thread_stack_cache = kmem_cache_create("thread_stack", THREAD_SIZE,
-					      THREAD_SIZE, 0, NULL);
+	thread_stack_cache = kmem_cache_create_usercopy("thread_stack",
+					THREAD_SIZE, THREAD_SIZE, 0, 0,
+					THREAD_SIZE, NULL);
 	BUG_ON(thread_stack_cache == NULL);
 }
 # endif
@@ -467,9 +468,14 @@ void __init fork_init(void)
 	int align = max_t(int, L1_CACHE_BYTES, ARCH_MIN_TASKALIGN);
 
 	/* create a slab on which task_structs can be allocated */
-	task_struct_cachep = kmem_cache_create("task_struct",
+	task_struct_cachep = kmem_cache_create_usercopy("task_struct",
 			arch_task_struct_size, align,
-			SLAB_PANIC|SLAB_NOTRACK|SLAB_ACCOUNT, NULL);
+			SLAB_PANIC|SLAB_NOTRACK|SLAB_ACCOUNT,
+			offsetof(struct task_struct, blocked),
+			offsetof(struct task_struct, saved_sigmask) -
+			offsetof(struct task_struct, blocked) +
+			sizeof(init_task.saved_sigmask),
+			NULL);
 #endif
 
 	/* do the arch specific task caches init */
@@ -2208,9 +2214,11 @@ void __init proc_caches_init(void)
 	 * maximum number of CPU's we can ever have.  The cpumask_allocation
 	 * is at the end of the structure, exactly for that reason.
 	 */
-	mm_cachep = kmem_cache_create("mm_struct",
+	mm_cachep = kmem_cache_create_usercopy("mm_struct",
 			sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK|SLAB_ACCOUNT,
+			offsetof(struct mm_struct, saved_auxv),
+			sizeof_field(struct mm_struct, saved_auxv),
 			NULL);
 	vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
 	mmap_init();