From patchwork Tue Jan 9 20:55:35 2018
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10153451
From: Kees Cook
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, David Windsor, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton, linux-mm@kvack.org,
    linux-xfs@vger.kernel.org, Linus Torvalds, Alexander Viro,
    Andy Lutomirski, Christoph Hellwig, "David S. Miller",
    Laura Abbott, Mark Rutland, "Martin K. Petersen",
Petersen" , Paolo Bonzini , Christian Borntraeger , Christoffer Dall , Dave Kleikamp , Jan Kara , Luis de Bethencourt , Marc Zyngier , Rik van Riel , Matthew Garrett , linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, netdev@vger.kernel.org, kernel-hardening@lists.openwall.com Date: Tue, 9 Jan 2018 12:55:35 -0800 Message-Id: <1515531365-37423-7-git-send-email-keescook@chromium.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1515531365-37423-1-git-send-email-keescook@chromium.org> References: <1515531365-37423-1-git-send-email-keescook@chromium.org> Subject: [kernel-hardening] [PATCH 06/36] usercopy: Mark kmalloc caches as usercopy caches X-Virus-Scanned: ClamAV using ClamSMTP From: David Windsor Mark the kmalloc slab caches as entirely whitelisted. These caches are frequently used to fulfill kernel allocations that contain data to be copied to/from userspace. Internal-only uses are also common, but are scattered in the kernel. For now, mark all the kmalloc caches as whitelisted. This patch is modified from Brad Spengler/PaX Team's PAX_USERCOPY whitelisting code in the last public patch of grsecurity/PaX based on my understanding of the code. Changes or omissions from the original code are mine and don't reflect the original grsecurity/PaX code. Signed-off-by: David Windsor [kees: merged in moved kmalloc hunks, adjust commit log] Cc: Christoph Lameter Cc: Pekka Enberg Cc: David Rientjes Cc: Joonsoo Kim Cc: Andrew Morton Cc: linux-mm@kvack.org Cc: linux-xfs@vger.kernel.org Signed-off-by: Kees Cook --- mm/slab.c | 3 ++- mm/slab.h | 3 ++- mm/slab_common.c | 10 ++++++---- 3 files changed, 10 insertions(+), 6 deletions(-) diff --git a/mm/slab.c b/mm/slab.c index d9939828f8e4..6488066e718a 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -1291,7 +1291,8 @@ void __init kmem_cache_init(void) */ kmalloc_caches[INDEX_NODE] = create_kmalloc_cache( kmalloc_info[INDEX_NODE].name, - kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS); + kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS, + 0, kmalloc_size(INDEX_NODE)); slab_state = PARTIAL_NODE; setup_kmalloc_cache_index_table(); diff --git a/mm/slab.h b/mm/slab.h index 8f3030788e01..1f013f7795c6 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -97,7 +97,8 @@ struct kmem_cache *kmalloc_slab(size_t, gfp_t); int __kmem_cache_create(struct kmem_cache *, slab_flags_t flags); extern struct kmem_cache *create_kmalloc_cache(const char *name, size_t size, - slab_flags_t flags); + slab_flags_t flags, size_t useroffset, + size_t usersize); extern void create_boot_cache(struct kmem_cache *, const char *name, size_t size, slab_flags_t flags, size_t useroffset, size_t usersize); diff --git a/mm/slab_common.c b/mm/slab_common.c index fc3e66bdce75..6c9e945907b6 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -929,14 +929,15 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t siz } struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t size, - slab_flags_t flags) + slab_flags_t flags, size_t useroffset, + size_t usersize) { struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT); if (!s) panic("Out of memory when creating slab %s\n", name); - create_boot_cache(s, name, size, flags, 0, size); + create_boot_cache(s, name, size, flags, useroffset, usersize); list_add(&s->list, &slab_caches); memcg_link_cache(s); s->refcount = 1; @@ -1090,7 +1091,8 @@ void __init setup_kmalloc_cache_index_table(void) static void __init new_kmalloc_cache(int idx, slab_flags_t flags) { kmalloc_caches[idx] = 
-					kmalloc_info[idx].size, flags);
+					kmalloc_info[idx].size, flags, 0,
+					kmalloc_info[idx].size);
 }
 
 /*
@@ -1131,7 +1133,7 @@ void __init create_kmalloc_caches(slab_flags_t flags)
 
 			BUG_ON(!n);
 			kmalloc_dma_caches[i] = create_kmalloc_cache(n,
-				size, SLAB_CACHE_DMA | flags);
+				size, SLAB_CACHE_DMA | flags, 0, 0);
 		}
 	}
 #endif
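For comparison, a cache that wants to expose only part of its objects to
copy_to_user()/copy_from_user() would declare a bounded whitelist via
kmem_cache_create_usercopy(), added earlier in this series, instead of
whitelisting the whole object the way the kmalloc caches are above. The
sketch below is illustrative only and is not part of this patch: the
structure, field names, and cache name are made up, and it assumes the
sizeof_field() helper is available.

/*
 * Hypothetical example, not from this patch: whitelist only the
 * user-facing buffer of an object, leaving the rest of the object
 * outside the hardened-usercopy window.
 */
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/stddef.h>

struct example_buf {
	spinlock_t lock;	/* internal state, never copied to userspace */
	char data[128];		/* the only region copied to/from userspace */
};

static struct kmem_cache *example_cachep;

static int __init example_cache_init(void)
{
	/* Whitelist covers only example_buf.data (offset and size). */
	example_cachep = kmem_cache_create_usercopy("example_buf",
				sizeof(struct example_buf), 0,
				SLAB_HWCACHE_ALIGN,
				offsetof(struct example_buf, data),
				sizeof_field(struct example_buf, data),
				NULL);
	return example_cachep ? 0 : -ENOMEM;
}

With CONFIG_HARDENED_USERCOPY, copies that fall outside such a window are
flagged by the usercopy checks; because the kmalloc caches in this patch
pass useroffset 0 and usersize equal to the object size, behaviour for
existing kmalloc users is unchanged.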