From patchwork Tue Jan 9 20:56:03 2018
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10153461
From: Kees Cook
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Linus Torvalds, David Windsor, Alexander Viro,
	Andrew Morton, Andy Lutomirski, Christoph Hellwig, Christoph Lameter,
	"David S. Miller", Laura Abbott, Mark Rutland, "Martin K.
Petersen" , Paolo Bonzini , Christian Borntraeger , Christoffer Dall , Dave Kleikamp , Jan Kara , Luis de Bethencourt , Marc Zyngier , Rik van Riel , Matthew Garrett , linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, netdev@vger.kernel.org, linux-mm@kvack.org, kernel-hardening@lists.openwall.com Date: Tue, 9 Jan 2018 12:56:03 -0800 Message-Id: <1515531365-37423-35-git-send-email-keescook@chromium.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1515531365-37423-1-git-send-email-keescook@chromium.org> References: <1515531365-37423-1-git-send-email-keescook@chromium.org> Subject: [kernel-hardening] [PATCH 34/36] usercopy: Allow strict enforcement of whitelists X-Virus-Scanned: ClamAV using ClamSMTP This introduces CONFIG_HARDENED_USERCOPY_FALLBACK to control the behavior of hardened usercopy whitelist violations. By default, whitelist violations will continue to WARN() so that any bad or missing usercopy whitelists can be discovered without being too disruptive. If this config is disabled at build time or a system is booted with "slab_common.usercopy_fallback=0", usercopy whitelists will BUG() instead of WARN(). This is useful for admins that want to use usercopy whitelists immediately. Suggested-by: Matthew Garrett Signed-off-by: Kees Cook --- include/linux/slab.h | 2 ++ mm/slab.c | 3 ++- mm/slab_common.c | 8 ++++++++ mm/slub.c | 3 ++- security/Kconfig | 14 ++++++++++++++ 5 files changed, 28 insertions(+), 2 deletions(-) diff --git a/include/linux/slab.h b/include/linux/slab.h index 518f72bf565e..4bef1ed1daa1 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -135,6 +135,8 @@ struct mem_cgroup; void __init kmem_cache_init(void); bool slab_is_available(void); +extern bool usercopy_fallback; + struct kmem_cache *kmem_cache_create(const char *name, size_t size, size_t align, slab_flags_t flags, void (*ctor)(void *)); diff --git a/mm/slab.c b/mm/slab.c index 6488066e718a..50539a76a46a 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -4425,7 +4425,8 @@ int __check_heap_object(const void *ptr, unsigned long n, struct page *page, * to be a temporary method to find any missing usercopy * whitelists. */ - if (offset <= cachep->object_size && + if (usercopy_fallback && + offset <= cachep->object_size && n <= cachep->object_size - offset) { WARN_ONCE(1, "unexpected usercopy %s with bad or missing whitelist with SLAB object '%s' (offset %lu, size %lu)", to_user ? "exposure" : "overwrite", diff --git a/mm/slab_common.c b/mm/slab_common.c index 6c9e945907b6..8ac2a6320a6c 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -31,6 +31,14 @@ LIST_HEAD(slab_caches); DEFINE_MUTEX(slab_mutex); struct kmem_cache *kmem_cache; +#ifdef CONFIG_HARDENED_USERCOPY +bool usercopy_fallback __ro_after_init = + IS_ENABLED(CONFIG_HARDENED_USERCOPY_FALLBACK); +module_param(usercopy_fallback, bool, 0400); +MODULE_PARM_DESC(usercopy_fallback, + "WARN instead of reject usercopy whitelist violations"); +#endif + static LIST_HEAD(slab_caches_to_rcu_destroy); static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work); static DECLARE_WORK(slab_caches_to_rcu_destroy_work, diff --git a/mm/slub.c b/mm/slub.c index 2aa4972a2058..1c0ff635d408 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3858,7 +3858,8 @@ int __check_heap_object(const void *ptr, unsigned long n, struct page *page, * whitelists. 
 	 */
 	object_size = slab_ksize(s);
-	if ((offset <= object_size && n <= object_size - offset)) {
+	if (usercopy_fallback &&
+	    (offset <= object_size && n <= object_size - offset)) {
 		WARN_ONCE(1, "unexpected usercopy %s with bad or missing whitelist with SLUB object '%s' (offset %lu size %lu)",
 			  to_user ? "exposure" : "overwrite",
 			  s->name, offset, n);
diff --git a/security/Kconfig b/security/Kconfig
index e8e449444e65..ae457b018da5 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -152,6 +152,20 @@ config HARDENED_USERCOPY
 	  or are part of the kernel text. This kills entire classes
 	  of heap overflow exploits and similar kernel memory exposures.
 
+config HARDENED_USERCOPY_FALLBACK
+	bool "Allow usercopy whitelist violations to fallback to object size"
+	depends on HARDENED_USERCOPY
+	default y
+	help
+	  This is a temporary option that allows missing usercopy whitelists
+	  to be discovered via a WARN() to the kernel log, instead of
+	  rejecting the copy, falling back to non-whitelisted hardened
+	  usercopy that checks the slab allocation size instead of the
+	  whitelist size. This option will be removed once it seems like
+	  all missing usercopy whitelists have been identified and fixed.
+	  Booting with "slab_common.usercopy_fallback=Y/N" can change
+	  this setting.
+
 config HARDENED_USERCOPY_PAGESPAN
 	bool "Refuse to copy allocations that span multiple pages"
 	depends on HARDENED_USERCOPY
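
For reference, below is a minimal userspace sketch (not kernel code) of the
decision this patch gates behind usercopy_fallback: a copy that stays inside
the whitelisted region is always allowed, while a copy that falls outside the
whitelist but still within the object either warns (fallback enabled, the
default) or is rejected (fallback disabled). The struct fake_cache,
check_copy(), and the sample sizes are illustrative assumptions, not part of
the patch.

/*
 * Standalone illustration (not kernel code) of the usercopy_fallback
 * decision. Names below (fake_cache, check_copy) are hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Mirrors the new knob: true => WARN and allow, false => reject. */
static bool usercopy_fallback = true;

/* Illustrative stand-in for a slab cache with a usercopy whitelist. */
struct fake_cache {
	const char *name;
	unsigned long object_size;	/* total object size */
	unsigned long useroffset;	/* start of whitelisted region */
	unsigned long usersize;		/* length of whitelisted region */
};

static void check_copy(const struct fake_cache *c, unsigned long offset,
		       unsigned long n, bool to_user)
{
	/* Entirely inside the whitelisted region: always allowed. */
	if (offset >= c->useroffset &&
	    offset - c->useroffset <= c->usersize &&
	    n <= c->usersize - (offset - c->useroffset)) {
		printf("ok: %lu bytes at offset %lu in '%s'\n",
		       n, offset, c->name);
		return;
	}

	/*
	 * Outside the whitelist but still inside the object: with the
	 * fallback enabled this only warns (so missing whitelists can be
	 * found); with it disabled the copy is refused outright.
	 */
	if (usercopy_fallback &&
	    offset <= c->object_size && n <= c->object_size - offset) {
		fprintf(stderr,
			"WARN: usercopy %s with bad or missing whitelist, '%s' (offset %lu, size %lu)\n",
			to_user ? "exposure" : "overwrite", c->name, offset, n);
		return;
	}

	fprintf(stderr, "REJECT: usercopy on '%s' (offset %lu, size %lu)\n",
		c->name, offset, n);
	abort();
}

int main(void)
{
	struct fake_cache c = {
		.name = "demo", .object_size = 128,
		.useroffset = 16, .usersize = 32,
	};

	check_copy(&c, 16, 32, true);	/* inside whitelist: allowed */
	check_copy(&c, 64, 16, true);	/* outside whitelist: WARN or reject */
	return 0;
}

With CONFIG_HARDENED_USERCOPY_FALLBACK=n at build time, or a kernel booted
with "slab_common.usercopy_fallback=0", the real kernel takes the equivalent
of the rejecting branch above for the second check_copy() call.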