From patchwork Wed Aug 2 15:06:39 2023
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13338323
Date: Wed, 2 Aug 2023 17:06:39 +0200
In-Reply-To: <20230802150712.3583252-1-elver@google.com>
References: <20230802150712.3583252-1-elver@google.com>
Message-ID: <20230802150712.3583252-3-elver@google.com>
Subject: [PATCH 3/3] list_debug: Introduce CONFIG_DEBUG_LIST_MINIMAL
From: Marco Elver <elver@google.com>
To: elver@google.com, Andrew Morton, Kees Cook
Cc: Guenter Roeck, Marc Zyngier, Oliver Upton, James Morse,
    Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon,
    Miguel Ojeda, Nick Desaulniers, Nathan Chancellor, Tom Rix,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, llvm@lists.linux.dev,
    Dmitry Vyukov, Alexander Potapenko, kasan-dev@googlegroups.com,
    linux-toolchains@vger.kernel.org

Numerous production kernel configs (see [1, 2]) choose to enable
CONFIG_DEBUG_LIST, which is also recommended by KSPP for hardened
configs [3]. The feature was never designed with performance in mind,
yet common list manipulation happens in hot paths all over the kernel.

Introduce CONFIG_DEBUG_LIST_MINIMAL, which performs the list pointer
checks inline and delegates to the reporting slow path only on list
corruption.

To generate optimal machine code with CONFIG_DEBUG_LIST_MINIMAL:

  1. Elide checking for pointer values which upon dereference would
     result in an immediate access fault -- therefore "minimal" checks.
     The trade-off is lower-quality error reports.

  2. Use the newly introduced __preserve_most function attribute
     (available with Clang, but not yet with GCC) to minimize the code
     footprint of calling the reporting slow path: callers become
     smaller because they no longer need to save and restore registers
     around the rarely taken call.

  3. Because the inline checks are a subset of the full set of checks
     in ___list_*_valid(), always return false if the inline checks
     failed. This avoids a redundant compare and conditional branch
     right after returning from the slow path.

As a side effect of the checks being inline, if the compiler can prove
some condition always holds, it can elide the corresponding check
entirely.

Running netperf with CONFIG_DEBUG_LIST_MINIMAL (using a Clang compiler
with "preserve_most") shows throughput improvements, in my case of ~7%
on average (up to 20-30% on some test cases).
Link: https://r.android.com/1266735 [1]
Link: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config [2]
Link: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings [3]
Signed-off-by: Marco Elver <elver@google.com>
---
 arch/arm64/kvm/hyp/nvhe/list_debug.c |  2 +
 include/linux/list.h                 | 56 +++++++++++++++++++++++++---
 lib/Kconfig.debug                    | 15 ++++++++
 lib/list_debug.c                     |  2 +
 4 files changed, 69 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/list_debug.c b/arch/arm64/kvm/hyp/nvhe/list_debug.c
index 589284496ac5..df718e29f6d4 100644
--- a/arch/arm64/kvm/hyp/nvhe/list_debug.c
+++ b/arch/arm64/kvm/hyp/nvhe/list_debug.c
@@ -26,6 +26,7 @@ static inline __must_check bool nvhe_check_data_corruption(bool v)
 
 /* The predicates checked here are taken from lib/list_debug.c. */
 
+__list_valid_slowpath
 bool ___list_add_valid(struct list_head *new, struct list_head *prev,
 		       struct list_head *next)
 {
@@ -37,6 +38,7 @@ bool ___list_add_valid(struct list_head *new, struct list_head *prev,
 	return true;
 }
 
+__list_valid_slowpath
 bool ___list_del_entry_valid(struct list_head *entry)
 {
 	struct list_head *prev, *next;
diff --git a/include/linux/list.h b/include/linux/list.h
index e0b2cf904409..a28a215a3eb1 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -39,20 +39,64 @@ static inline void INIT_LIST_HEAD(struct list_head *list)
 }
 
 #ifdef CONFIG_DEBUG_LIST
-extern bool ___list_add_valid(struct list_head *new,
-			      struct list_head *prev,
-			      struct list_head *next);
+
+#ifdef CONFIG_DEBUG_LIST_MINIMAL
+# define __list_valid_slowpath __cold __preserve_most
+#else
+# define __list_valid_slowpath
+#endif
+
+extern bool __list_valid_slowpath ___list_add_valid(struct list_head *new,
+						    struct list_head *prev,
+						    struct list_head *next);
 static __always_inline bool __list_add_valid(struct list_head *new,
 					     struct list_head *prev,
 					     struct list_head *next)
 {
-	return ___list_add_valid(new, prev, next);
+	bool ret = true;
+
+	if (IS_ENABLED(CONFIG_DEBUG_LIST_MINIMAL)) {
+		/*
+		 * In the minimal config, elide checking if next and prev are
+		 * NULL, since the immediate dereference of them below would
+		 * result in a fault if NULL.
+		 *
+		 * With the minimal config we can afford to inline the checks,
+		 * which also gives the compiler a chance to elide some of them
+		 * completely if they can be proven at compile-time. If one of
+		 * the pre-conditions does not hold, the slow-path will show a
+		 * report which pre-condition failed.
+		 */
+		if (likely(next->prev == prev && prev->next == next &&
+			   new != prev && new != next))
+			return true;
+		ret = false;
+	}
+
+	ret &= ___list_add_valid(new, prev, next);
+	return ret;
 }
 
-extern bool ___list_del_entry_valid(struct list_head *entry);
+extern bool __list_valid_slowpath ___list_del_entry_valid(struct list_head *entry);
 static __always_inline bool __list_del_entry_valid(struct list_head *entry)
 {
-	return ___list_del_entry_valid(entry);
+	bool ret = true;
+
+	if (IS_ENABLED(CONFIG_DEBUG_LIST_MINIMAL)) {
+		struct list_head *prev = entry->prev;
+		struct list_head *next = entry->next;
+
+		/*
+		 * In the minimal config, elide checking if next and prev are
+		 * NULL, LIST_POISON1 or LIST_POISON2, since the immediate
+		 * dereference of them below would result in a fault.
+		 */
+		if (likely(prev->next == entry && next->prev == entry))
+			return true;
+		ret = false;
+	}
+
+	ret &= ___list_del_entry_valid(entry);
+	return ret;
 }
 #else
 static inline bool __list_add_valid(struct list_head *new,
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index fbc89baf7de6..e72cf08af0fa 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1680,6 +1680,21 @@ config DEBUG_LIST
 
 	  If unsure, say N.
 
+config DEBUG_LIST_MINIMAL
+	bool "Minimal linked list debug checks"
+	default !DEBUG_KERNEL
+	depends on DEBUG_LIST
+	help
+	  Only perform the minimal set of checks in the linked-list walking
+	  routines to catch corruptions that are not guaranteed to result in
+	  an immediate access fault.
+
+	  This trades lower quality error reports for improved performance:
+	  the generated code should be more optimal and provide trade-offs
+	  that may better serve safety- and performance-critical environments.
+
+	  If unsure, say Y.
+
 config DEBUG_PLIST
 	bool "Debug priority linked list manipulation"
 	depends on DEBUG_KERNEL
diff --git a/lib/list_debug.c b/lib/list_debug.c
index fd69009cc696..daad32855f0d 100644
--- a/lib/list_debug.c
+++ b/lib/list_debug.c
@@ -17,6 +17,7 @@
  * attempt).
  */
 
+__list_valid_slowpath
 bool ___list_add_valid(struct list_head *new, struct list_head *prev,
 		       struct list_head *next)
 {
@@ -39,6 +40,7 @@ bool ___list_add_valid(struct list_head *new, struct list_head *prev,
 	return true;
 }
 EXPORT_SYMBOL(___list_add_valid);
 
+__list_valid_slowpath
 bool ___list_del_entry_valid(struct list_head *entry)
 {
 	struct list_head *prev, *next;
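
For illustration, the fast-path/slow-path split used by the patch can be
sketched as a standalone C program. This is not part of the patch: the
names here (struct node, node_add_valid, node_add_report) are made up
for the example, the diagnostic goes to stderr instead of the kernel
log, and it assumes a Clang build on x86-64 or AArch64, where the
preserve_most attribute is available.

/*
 * Standalone sketch of the inline fast path + preserve_most slow path
 * pattern. All identifiers are illustrative, not kernel names.
 */
#include <stdbool.h>
#include <stdio.h>

struct node {
	struct node *next, *prev;
};

/*
 * Rarely taken reporting slow path. With preserve_most, the callee
 * saves nearly all registers, so the inlined caller does not have to
 * spill registers around this call. It always returns false, mirroring
 * point 3 above: reaching it means the inline checks already failed.
 */
__attribute__((preserve_most, cold, noinline))
static bool node_add_report(struct node *new, struct node *prev,
			    struct node *next)
{
	fprintf(stderr, "list corruption: new=%p prev=%p next=%p\n",
		(void *)new, (void *)prev, (void *)next);
	return false;
}

/*
 * Inlined fast path: only the checks whose failure would not already
 * cause an immediate fault on the subsequent pointer dereference
 * (e.g. a NULL next/prev is deliberately not checked here).
 */
static inline bool node_add_valid(struct node *new, struct node *prev,
				  struct node *next)
{
	if (__builtin_expect(next->prev == prev && prev->next == next &&
			     new != prev && new != next, 1))
		return true;
	return node_add_report(new, prev, next);
}

int main(void)
{
	struct node a = { &a, &a };	/* empty circular list */
	struct node n;

	/* Valid insertion at the head: prev == next == &a. */
	printf("valid: %d\n", node_add_valid(&n, &a, &a));

	/* Simulate corruption: a.next no longer points back to &a. */
	a.next = &n;
	printf("corrupt: %d\n", node_add_valid(&n, &a, &a));
	return 0;
}

Compiling the sketch with clang -O2 and comparing against a version
without the attribute shows the effect described in the changelog: the
call site in node_add_valid() keeps its live values in registers
instead of saving them before the call, so the fast path stays small.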