From patchwork Wed Sep 20 20:45:09 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9962603
From: Kees Cook <keescook@chromium.org>
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, David Windsor, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, linux-mm@kvack.org,
	linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	netdev@vger.kernel.org, kernel-hardening@lists.openwall.com
Subject: [PATCH v3 03/31] usercopy: Mark kmalloc caches as usercopy caches
Date: Wed, 20 Sep 2017 13:45:09 -0700
Message-Id: <1505940337-79069-4-git-send-email-keescook@chromium.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1505940337-79069-1-git-send-email-keescook@chromium.org>
References: <1505940337-79069-1-git-send-email-keescook@chromium.org>
X-Mailing-List: linux-xfs@vger.kernel.org

From: David Windsor

Mark the kmalloc slab caches as entirely whitelisted. These caches are
frequently used to fulfill kernel allocations that contain data to be
copied to/from userspace. Internal-only uses are also common, but are
scattered in the kernel. For now, mark all the kmalloc caches as
whitelisted.

This patch is modified from Brad Spengler/PaX Team's PAX_USERCOPY
whitelisting code in the last public patch of grsecurity/PaX based on
my understanding of the code. Changes or omissions from the original
code are mine and don't reflect the original grsecurity/PaX code.

Signed-off-by: David Windsor
[kees: merged in moved kmalloc hunks, adjust commit log]
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc: linux-mm@kvack.org
Cc: linux-xfs@vger.kernel.org
Signed-off-by: Kees Cook
---
 mm/slab.c        |  3 ++-
 mm/slab.h        |  3 ++-
 mm/slab_common.c | 10 ++++++----
 3 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index df268999cf02..9af16f675927 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1291,7 +1291,8 @@ void __init kmem_cache_init(void)
 	 */
 	kmalloc_caches[INDEX_NODE] = create_kmalloc_cache(
 				kmalloc_info[INDEX_NODE].name,
-				kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS);
+				kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS,
+				0, kmalloc_size(INDEX_NODE));
 	slab_state = PARTIAL_NODE;
 	setup_kmalloc_cache_index_table();
 
diff --git a/mm/slab.h b/mm/slab.h
index 044755ff9632..2e0fe357d777 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -97,7 +97,8 @@ struct kmem_cache *kmalloc_slab(size_t, gfp_t);
 extern int __kmem_cache_create(struct kmem_cache *, unsigned long flags);
 
 extern struct kmem_cache *create_kmalloc_cache(const char *name, size_t size,
-			unsigned long flags);
+			unsigned long flags, size_t useroffset,
+			size_t usersize);
 extern void create_boot_cache(struct kmem_cache *, const char *name,
 			size_t size, unsigned long flags, size_t useroffset,
 			size_t usersize);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 36408f5f2a34..d4e6442f9bbc 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -920,14 +920,15 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t siz
 }
 
 struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t size,
-				unsigned long flags)
+				unsigned long flags, size_t useroffset,
+				size_t usersize)
 {
 	struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
 
 	if (!s)
 		panic("Out of memory when creating slab %s\n", name);
 
-	create_boot_cache(s, name, size, flags, 0, size);
+	create_boot_cache(s, name, size, flags, useroffset, usersize);
 	list_add(&s->list, &slab_caches);
 	memcg_link_cache(s);
 	s->refcount = 1;
@@ -1081,7 +1082,8 @@ void __init setup_kmalloc_cache_index_table(void)
 static void __init new_kmalloc_cache(int idx, unsigned long flags)
 {
 	kmalloc_caches[idx] = create_kmalloc_cache(kmalloc_info[idx].name,
-					kmalloc_info[idx].size, flags);
+					kmalloc_info[idx].size, flags, 0,
+					kmalloc_info[idx].size);
 }
 
 /*
@@ -1122,7 +1124,7 @@ void __init create_kmalloc_caches(unsigned long flags)
 
 			BUG_ON(!n);
 			kmalloc_dma_caches[i] = create_kmalloc_cache(n,
-				size, SLAB_CACHE_DMA | flags);
+				size, SLAB_CACHE_DMA | flags, 0, 0);
 		}
 	}
 #endif
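
For readers following the series: the (useroffset, usersize) pair threaded
through create_kmalloc_cache() above describes a per-cache whitelist window,
and a usercopy of [offset, offset + len) within a slab object is only
permitted when it falls entirely inside [useroffset, useroffset + usersize).
The userspace sketch below only illustrates that window check; the struct and
function names (usercopy_region, region_allows_copy) are hypothetical, and the
real enforcement is done by the kernel's hardened usercopy checks, not by this
code.

/*
 * Illustrative sketch only, not kernel code: a userspace model of the
 * usercopy whitelist window that (useroffset, usersize) describe.  The
 * names usercopy_region and region_allows_copy are hypothetical; in the
 * kernel the check happens in the hardened usercopy machinery when
 * copy_to_user()/copy_from_user() touch a slab object.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct usercopy_region {
	size_t useroffset;	/* start of the whitelisted window in the object */
	size_t usersize;	/* length of the window; 0 means no user copies */
};

/* Allow a copy of [offset, offset + len) only if it lies entirely
 * inside [useroffset, useroffset + usersize). */
static bool region_allows_copy(const struct usercopy_region *r,
			       size_t offset, size_t len)
{
	if (len == 0)
		return true;
	if (offset < r->useroffset)
		return false;
	if (offset - r->useroffset > r->usersize)
		return false;
	return len <= r->usersize - (offset - r->useroffset);
}

int main(void)
{
	/* A kmalloc cache after this patch: useroffset = 0 and usersize =
	 * object size, i.e. the whole object is whitelisted. */
	struct usercopy_region kmalloc_128 = { .useroffset = 0, .usersize = 128 };

	/* A dma-kmalloc cache keeps usersize = 0: nothing may be copied. */
	struct usercopy_region dma_kmalloc_128 = { .useroffset = 0, .usersize = 0 };

	printf("%d\n", region_allows_copy(&kmalloc_128, 16, 64));   /* 1: allowed */
	printf("%d\n", region_allows_copy(&dma_kmalloc_128, 0, 8)); /* 0: rejected */
	return 0;
}

With the values used in this patch, a regular kmalloc cache whitelists its
entire object (0, size), while the dma-kmalloc caches pass (0, 0) and so
reject direct user copies.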