From patchwork Tue May 30 21:39:50 2017
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9755257
From: Kees Cook
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, Christoph Hellwig, Peter Zijlstra, "Eric W. Biederman",
 Andrew Morton, Josh Poimboeuf, PaX Team, Jann Horn, Eric Biggers,
 Elena Reshetova, Hans Liljestrand, David Windsor, Greg KH, Ingo Molnar,
 Alexey Dobriyan, "Serge E. Hallyn", arozansk@redhat.com, Davidlohr Bueso,
 Manfred Spraul, "axboe@kernel.dk", James Bottomley, "x86@kernel.org",
 Ingo Molnar, Arnd Bergmann, "David S. Miller", Rik van Riel, linux-arch,
 "kernel-hardening@lists.openwall.com"
Date: Tue, 30 May 2017 14:39:50 -0700
Message-Id: <1496180392-98718-2-git-send-email-keescook@chromium.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1496180392-98718-1-git-send-email-keescook@chromium.org>
References: <1496180392-98718-1-git-send-email-keescook@chromium.org>
Subject: [kernel-hardening] [PATCH v5 1/3] refcount: Create unchecked atomic_t implementation

Many subsystems will not use refcount_t unless there is a way to build
the kernel so that there is no regression in speed compared to atomic_t.
This adds CONFIG_REFCOUNT_FULL to enable the full refcount_t
implementation which has the validation but is slightly slower.
When not enabled, refcount_t uses the basic unchecked atomic_t routines,
which results in no code changes compared to just using atomic_t directly.

Signed-off-by: Kees Cook
---
 arch/Kconfig             |  9 +++++++++
 include/linux/refcount.h | 44 ++++++++++++++++++++++++++++++++++++++++++++
 lib/refcount.c           |  3 +++
 3 files changed, 56 insertions(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index 6c00e5b00f8b..fba3bf186728 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -867,4 +867,13 @@ config STRICT_MODULE_RWX
 config ARCH_WANT_RELAX_ORDER
 	bool
 
+config REFCOUNT_FULL
+	bool "Perform full reference count validation at the expense of speed"
+	help
+	  Enabling this switches the refcounting infrastructure from a fast
+	  unchecked atomic_t implementation to a fully state checked
+	  implementation, which can be slower but provides protections
+	  against various use-after-free conditions that can be used in
+	  security flaw exploits.
+
 source "kernel/gcov/Kconfig"
diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index b34aa649d204..68ecb431dbab 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -41,6 +41,7 @@ static inline unsigned int refcount_read(const refcount_t *r)
 	return atomic_read(&r->refs);
 }
 
+#ifdef CONFIG_REFCOUNT_FULL
 extern __must_check bool refcount_add_not_zero(unsigned int i, refcount_t *r);
 extern void refcount_add(unsigned int i, refcount_t *r);
 
@@ -52,6 +53,49 @@ extern void refcount_sub(unsigned int i, refcount_t *r);
 extern __must_check bool refcount_dec_and_test(refcount_t *r);
 extern void refcount_dec(refcount_t *r);
 
+#else
+static inline __must_check bool refcount_add_not_zero(unsigned int i,
+						      refcount_t *r)
+{
+	return atomic_add_return(i, &r->refs) != 0;
+}
+
+static inline void refcount_add(unsigned int i, refcount_t *r)
+{
+	atomic_add(i, &r->refs);
+}
+
+static inline __must_check bool refcount_inc_not_zero(refcount_t *r)
+{
+	return atomic_add_unless(&r->refs, 1, 0);
+}
+
+static inline void refcount_inc(refcount_t *r)
+{
+	atomic_inc(&r->refs);
+}
+
+static inline __must_check bool refcount_sub_and_test(unsigned int i,
+						      refcount_t *r)
+{
+	return atomic_sub_return(i, &r->refs) == 0;
+}
+
+static inline void refcount_sub(unsigned int i, refcount_t *r)
+{
+	atomic_sub(i, &r->refs);
+}
+
+static inline __must_check bool refcount_dec_and_test(refcount_t *r)
+{
+	return atomic_dec_return(&r->refs) == 0;
+}
+
+static inline void refcount_dec(refcount_t *r)
+{
+	atomic_dec(&r->refs);
+}
+#endif /* CONFIG_REFCOUNT_FULL */
 
 extern __must_check bool refcount_dec_if_one(refcount_t *r);
 extern __must_check bool refcount_dec_not_one(refcount_t *r);
diff --git a/lib/refcount.c b/lib/refcount.c
index 9f906783987e..5d0582a9480c 100644
--- a/lib/refcount.c
+++ b/lib/refcount.c
@@ -37,6 +37,8 @@
 #include
 #include
 
+#ifdef CONFIG_REFCOUNT_FULL
+
 /**
  * refcount_add_not_zero - add a value to a refcount unless it is 0
  * @i: the value to add to the refcount
@@ -225,6 +227,7 @@ void refcount_dec(refcount_t *r)
 	WARN_ONCE(refcount_dec_and_test(r), "refcount_t: decrement hit 0; leaking memory.\n");
 }
 EXPORT_SYMBOL(refcount_dec);
+#endif /* CONFIG_REFCOUNT_FULL */
 
 /**
  * refcount_dec_if_one - decrement a refcount if it is 1