From patchwork Tue Jan  9 20:55:55 2018
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10153373
From: Kees Cook
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, David Windsor, Ingo Molnar, Andrew Morton, Thomas Gleixner,
    Andy Lutomirski, Linus Torvalds, Alexander Viro, Christoph Hellwig,
    Christoph Lameter, "David S. Miller", Laura Abbott, Mark Rutland,
    "Martin K. Petersen", Paolo Bonzini, Christian Borntraeger,
    Christoffer Dall, Dave Kleikamp, Jan Kara, Luis de Bethencourt,
    Marc Zyngier, Rik van Riel, Matthew Garrett,
    linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
    netdev@vger.kernel.org, linux-mm@kvack.org,
    kernel-hardening@lists.openwall.com
Date: Tue, 9 Jan 2018 12:55:55 -0800
Message-Id: <1515531365-37423-27-git-send-email-keescook@chromium.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1515531365-37423-1-git-send-email-keescook@chromium.org>
References: <1515531365-37423-1-git-send-email-keescook@chromium.org>
Subject: [kernel-hardening] [PATCH 26/36] fork: Define usercopy region in mm_struct slab caches

From: David Windsor

In support of usercopy hardening, this patch defines a region in the
mm_struct slab caches in which userspace copy operations are allowed.
Only the auxv field is copied to userspace.

cache object allocation:
    kernel/fork.c:
        #define allocate_mm()	(kmem_cache_alloc(mm_cachep, GFP_KERNEL))

        dup_mm():
            ...
            mm = allocate_mm();

        copy_mm(...):
            ...
            dup_mm();

        copy_process(...):
            ...
            copy_mm(...)

        _do_fork(...):
            ...
            copy_process(...)

example usage trace:

    fs/binfmt_elf.c:
        create_elf_tables(...):
            ...
            elf_info = (elf_addr_t *)current->mm->saved_auxv;
            ...
            copy_to_user(..., elf_info, ei_index * sizeof(elf_addr_t))

        load_elf_binary(...):
            ...
            create_elf_tables(...);

This region is known as the slab cache's usercopy region. Slab caches
can now check that each dynamically sized copy operation involving
cache-managed memory falls entirely within the slab's usercopy region.

This patch is modified from Brad Spengler/PaX Team's PAX_USERCOPY
whitelisting code in the last public patch of grsecurity/PaX based on
my understanding of the code. Changes or omissions from the original
code are mine and don't reflect the original grsecurity/PaX code.

Signed-off-by: David Windsor
[kees: adjust commit log, split patch, provide usage trace]
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Thomas Gleixner
Cc: Andy Lutomirski
Signed-off-by: Kees Cook
Acked-by: Rik van Riel
---
 kernel/fork.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/fork.c b/kernel/fork.c
index 432eadf6b58c..82f2a0441d3b 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2225,9 +2225,11 @@ void __init proc_caches_init(void)
 	 * maximum number of CPU's we can ever have. The cpumask_allocation
 	 * is at the end of the structure, exactly for that reason.
 	 */
-	mm_cachep = kmem_cache_create("mm_struct",
+	mm_cachep = kmem_cache_create_usercopy("mm_struct",
 			sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
+			offsetof(struct mm_struct, saved_auxv),
+			sizeof_field(struct mm_struct, saved_auxv),
 			NULL);
 	vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_ACCOUNT);
 	mmap_init();