From patchwork Thu Jan 11 02:03:02 2018
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10156625
From: Kees Cook
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, David Windsor, Ingo Molnar, Andrew Morton, Thomas Gleixner,
 Andy Lutomirski, Linus Torvalds, Alexander Viro, Christoph Hellwig,
 Christoph Lameter, "David S. Miller", Laura Abbott, Mark Rutland,
 "Martin K. Petersen",
 Paolo Bonzini, Christian Borntraeger, Christoffer Dall, Dave Kleikamp,
 Jan Kara, Luis de Bethencourt, Marc Zyngier, Rik van Riel, Matthew Garrett,
 linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
 netdev@vger.kernel.org, linux-mm@kvack.org, kernel-hardening@lists.openwall.com
Date: Wed, 10 Jan 2018 18:03:02 -0800
Message-Id: <1515636190-24061-31-git-send-email-keescook@chromium.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1515636190-24061-1-git-send-email-keescook@chromium.org>
References: <1515636190-24061-1-git-send-email-keescook@chromium.org>
Subject: [kernel-hardening] [PATCH 30/38] fork: Define usercopy region in thread_stack slab caches

From: David Windsor

In support of usercopy hardening, this patch defines a region in the
thread_stack slab caches in which userspace copy operations are allowed.
Since the entire thread_stack needs to be available to userspace, the
entire slab contents are whitelisted. Note that the slab-based thread
stack is only present on systems with THREAD_SIZE < PAGE_SIZE and
!CONFIG_VMAP_STACK.

cache object allocation:
    kernel/fork.c:
        alloc_thread_stack_node(...):
            return kmem_cache_alloc_node(thread_stack_cache, ...)

        dup_task_struct(...):
            ...
            stack = alloc_thread_stack_node(...)
            ...
            tsk->stack = stack;

        copy_process(...):
            ...
            dup_task_struct(...)

        _do_fork(...):
            ...
            copy_process(...)

This region is known as the slab cache's usercopy region. Slab caches
can now check that each dynamically sized copy operation involving
cache-managed memory falls entirely within the slab's usercopy region.

This patch is modified from Brad Spengler/PaX Team's PAX_USERCOPY
whitelisting code in the last public patch of grsecurity/PaX based on my
understanding of the code. Changes or omissions from the original code
are mine and don't reflect the original grsecurity/PaX code.

Signed-off-by: David Windsor
[kees: adjust commit log, split patch, provide usage trace]
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Thomas Gleixner
Cc: Andy Lutomirski
Signed-off-by: Kees Cook
Acked-by: Rik van Riel
---
 kernel/fork.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/fork.c b/kernel/fork.c
index 82f2a0441d3b..0e086af148f2 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -282,8 +282,9 @@ static void free_thread_stack(struct task_struct *tsk)
 
 void thread_stack_cache_init(void)
 {
-	thread_stack_cache = kmem_cache_create("thread_stack", THREAD_SIZE,
-					      THREAD_SIZE, 0, NULL);
+	thread_stack_cache = kmem_cache_create_usercopy("thread_stack",
+					THREAD_SIZE, THREAD_SIZE, 0, 0,
+					THREAD_SIZE, NULL);
 	BUG_ON(thread_stack_cache == NULL);
 }
 # endif
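
A note on the API used in the hunk above: kmem_cache_create_usercopy() is
introduced earlier in this series and extends kmem_cache_create() with an
explicit (useroffset, usersize) pair describing the whitelisted region.
The sketch below restates the thread_stack call with each argument
annotated; it is illustrative only, the prototype is paraphrased from the
call itself, and the exact parameter types may differ from the in-tree
declaration:

    struct kmem_cache *kmem_cache_create_usercopy(const char *name,
                            size_t size, size_t align, slab_flags_t flags,
                            size_t useroffset, size_t usersize,
                            void (*ctor)(void *));

    thread_stack_cache = kmem_cache_create_usercopy("thread_stack",
                            THREAD_SIZE,  /* size: one full thread stack per object */
                            THREAD_SIZE,  /* align: keep each stack THREAD_SIZE-aligned */
                            0,            /* flags: no special slab flags */
                            0,            /* useroffset: usercopy region starts at offset 0 */
                            THREAD_SIZE,  /* usersize: ...and spans the whole object */
                            NULL);        /* ctor: no constructor */

Because useroffset is 0 and usersize is THREAD_SIZE, the whole object is
whitelisted, matching the "entire slab contents are whitelisted" statement
in the commit log.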
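
The "falls entirely within the slab's usercopy region" check described in
the log amounts to a range comparison of roughly the following shape. This
is an illustrative sketch rather than the series' mm/slab or mm/slub code:
the useroffset/usersize field names mirror the constructor arguments, and
usercopy_range_abort() is a hypothetical reporting helper:

    /*
     * Illustrative sketch: validate a copy of n bytes starting at byte
     * `offset` inside a slab object against the cache's usercopy region.
     * For thread_stack, useroffset is 0 and usersize is THREAD_SIZE, so
     * any in-bounds copy within the stack object is accepted.
     */
    static void check_usercopy_region(const struct kmem_cache *s,
                                      unsigned long offset, unsigned long n)
    {
            /* Accept only copies of [offset, offset + n) lying entirely
             * inside [useroffset, useroffset + usersize); the comparisons
             * are ordered to avoid unsigned underflow. */
            if (offset >= s->useroffset &&
                n <= s->usersize &&
                offset - s->useroffset <= s->usersize - n)
                    return;

            usercopy_range_abort(s->name, offset, n);  /* hypothetical reporter */
    }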