From patchwork Fri Aug 4 09:02:56 2023
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13341546
Date: Fri, 4 Aug 2023 11:02:56 +0200
Message-ID: <20230804090621.400-1-elver@google.com>
Subject: [PATCH v2 1/3] compiler_types: Introduce the Clang __preserve_most function attribute
From: Marco Elver
To: elver@google.com, Andrew Morton, Kees Cook
Cc: Guenter Roeck, Peter Zijlstra, Mark Rutland, Steven Rostedt, Marc Zyngier,
 Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
 Will Deacon, Nathan Chancellor, Nick Desaulniers, Tom Rix, Miguel Ojeda,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, llvm@lists.linux.dev, Dmitry Vyukov,
 Alexander Potapenko, kasan-dev@googlegroups.com,
 linux-toolchains@vger.kernel.org

[1]: "On X86-64 and AArch64 targets, this attribute changes the calling
convention of a function. The preserve_most calling convention attempts to
make the code in the caller as unintrusive as possible. This convention
behaves identically to the C calling convention on how arguments and return
values are passed, but it uses a different set of caller/callee-saved
registers. This alleviates the burden of saving and recovering a large
register set before and after the call in the caller."

[1] https://clang.llvm.org/docs/AttributeReference.html#preserve-most

Introduce the attribute to compiler_types.h as __preserve_most.

Use of this attribute results in better code generation for calls to very
rarely called functions, such as error-reporting functions, or rarely
executed slow paths.

Beware that the attribute conflicts with instrumentation calls inserted on
function entry which do not use __preserve_most themselves; the main example
is function tracing, which assumes the normal C calling convention for the
given architecture. Where the attribute is supported, __preserve_most will
therefore imply notrace. It is recommended to restrict use of the attribute
to functions that should or already disable tracing.

Signed-off-by: Marco Elver
Acked-by: Steven Rostedt (Google)
Reviewed-by: Miguel Ojeda
---
v2:
* Imply notrace, to avoid any conflicts with tracing which is inserted on
  function entry. See added comments.
---
 include/linux/compiler_types.h | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 547ea1ff806e..12c4540335b7 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -106,6 +106,33 @@ static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
 #define __cold
 #endif
 
+/*
+ * On x86-64 and arm64 targets, __preserve_most changes the calling convention
+ * of a function to make the code in the caller as unintrusive as possible. This
+ * convention behaves identically to the C calling convention on how arguments
+ * and return values are passed, but uses a different set of caller- and callee-
+ * saved registers.
+ *
+ * The purpose is to alleviate the burden of saving and recovering a large
+ * register set before and after the call in the caller. This is beneficial for
+ * rarely taken slow paths, such as error-reporting functions that may be called
+ * from hot paths.
+ *
+ * Note: This may conflict with instrumentation inserted on function entry which
+ * does not use __preserve_most or equivalent convention (if in assembly). Since
+ * function tracing assumes the normal C calling convention, where the attribute
+ * is supported, __preserve_most implies notrace.
+ *
+ * Optional: not supported by gcc.
+ *
+ * clang: https://clang.llvm.org/docs/AttributeReference.html#preserve-most
+ */
+#if __has_attribute(__preserve_most__)
+# define __preserve_most notrace __attribute__((__preserve_most__))
+#else
+# define __preserve_most
+#endif
+
 /* Builtins */
 
 /*
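
[Illustration only, not part of the patch: a minimal sketch of how the new
attribute is intended to be used, assuming a kernel built with a Clang
version that supports preserve_most. The helper and its caller below are
hypothetical names, not existing kernel functions.]

  #include <linux/compiler.h>

  /* Hypothetical out-of-line reporting helper; rarely called. */
  void __preserve_most __cold report_bad_value(unsigned long v);

  static __always_inline void check_value(unsigned long v)
  {
  	if (likely(v != 0))
  		return;
  	/* Slow-path call clobbers almost no registers in this caller. */
  	report_bad_value(v);
  }

Because report_bad_value() preserves most registers, the hot caller does not
need to spill and reload its caller-saved registers around the call; the cost
is paid inside the rarely executed slow path instead.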
From patchwork Fri Aug 4 09:02:57 2023
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13341545
Date: Fri, 4 Aug 2023 11:02:57 +0200
In-Reply-To: <20230804090621.400-1-elver@google.com>
References: <20230804090621.400-1-elver@google.com>
Message-ID: <20230804090621.400-2-elver@google.com>
Subject: [PATCH v2 2/3] list_debug: Introduce inline wrappers for debug checks
From: Marco Elver
To: elver@google.com, Andrew Morton, Kees Cook
Cc: Guenter Roeck, Peter Zijlstra, Mark Rutland, Steven Rostedt, Marc Zyngier,
 Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
 Will Deacon, Nathan Chancellor, Nick Desaulniers, Tom Rix, Miguel Ojeda,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, llvm@lists.linux.dev, Dmitry Vyukov,
 Alexander Potapenko, kasan-dev@googlegroups.com,
 linux-toolchains@vger.kernel.org

Turn the list debug checking functions __list_*_valid() into inline functions
that wrap the out-of-line functions. Care is taken to ensure the inline
wrappers are always inlined, so that additional compiler instrumentation
(such as sanitizers) does not result in redundant outlining.

This change is preparation for performing checks in the inline wrappers.

No functional change intended.

Signed-off-by: Marco Elver
---
 arch/arm64/kvm/hyp/nvhe/list_debug.c |  6 +++---
 include/linux/list.h                 | 15 +++++++++++++--
 lib/list_debug.c                     | 11 +++++------
 3 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/list_debug.c b/arch/arm64/kvm/hyp/nvhe/list_debug.c
index d68abd7ea124..589284496ac5 100644
--- a/arch/arm64/kvm/hyp/nvhe/list_debug.c
+++ b/arch/arm64/kvm/hyp/nvhe/list_debug.c
@@ -26,8 +26,8 @@ static inline __must_check bool nvhe_check_data_corruption(bool v)
 
 /* The predicates checked here are taken from lib/list_debug.c. */
 
-bool __list_add_valid(struct list_head *new, struct list_head *prev,
-		      struct list_head *next)
+bool ___list_add_valid(struct list_head *new, struct list_head *prev,
+		       struct list_head *next)
 {
 	if (NVHE_CHECK_DATA_CORRUPTION(next->prev != prev) ||
 	    NVHE_CHECK_DATA_CORRUPTION(prev->next != next) ||
@@ -37,7 +37,7 @@ bool __list_add_valid(struct list_head *new, struct list_head *prev,
 	return true;
 }
 
-bool __list_del_entry_valid(struct list_head *entry)
+bool ___list_del_entry_valid(struct list_head *entry)
 {
 	struct list_head *prev, *next;
 
diff --git a/include/linux/list.h b/include/linux/list.h
index f10344dbad4d..e0b2cf904409 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -39,10 +39,21 @@ static inline void INIT_LIST_HEAD(struct list_head *list)
 }
 
 #ifdef CONFIG_DEBUG_LIST
-extern bool __list_add_valid(struct list_head *new,
+extern bool ___list_add_valid(struct list_head *new,
 			     struct list_head *prev,
 			     struct list_head *next);
-extern bool __list_del_entry_valid(struct list_head *entry);
+static __always_inline bool __list_add_valid(struct list_head *new,
+					     struct list_head *prev,
+					     struct list_head *next)
+{
+	return ___list_add_valid(new, prev, next);
+}
+
+extern bool ___list_del_entry_valid(struct list_head *entry);
+static __always_inline bool __list_del_entry_valid(struct list_head *entry)
+{
+	return ___list_del_entry_valid(entry);
+}
 #else
 static inline bool __list_add_valid(struct list_head *new,
 				    struct list_head *prev,
diff --git a/lib/list_debug.c b/lib/list_debug.c
index d98d43f80958..fd69009cc696 100644
--- a/lib/list_debug.c
+++ b/lib/list_debug.c
@@ -17,8 +17,8 @@
  * attempt).
  */
 
-bool __list_add_valid(struct list_head *new, struct list_head *prev,
-		      struct list_head *next)
+bool ___list_add_valid(struct list_head *new, struct list_head *prev,
+		       struct list_head *next)
 {
 	if (CHECK_DATA_CORRUPTION(prev == NULL,
 			"list_add corruption. prev is NULL.\n") ||
@@ -37,9 +37,9 @@ bool __list_add_valid(struct list_head *new, struct list_head *prev,
 	return true;
 }
-EXPORT_SYMBOL(__list_add_valid);
+EXPORT_SYMBOL(___list_add_valid);
 
-bool __list_del_entry_valid(struct list_head *entry)
+bool ___list_del_entry_valid(struct list_head *entry)
 {
 	struct list_head *prev, *next;
 
@@ -65,6 +65,5 @@ bool __list_del_entry_valid(struct list_head *entry)
 		return false;
 
 	return true;
-
 }
-EXPORT_SYMBOL(__list_del_entry_valid);
+EXPORT_SYMBOL(___list_del_entry_valid);
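
[Illustration only, not part of the patch: a self-contained sketch of the
wrapper pattern this change introduces, using user-space stand-ins. The
struct definition, attributes, and the trivial checker body are assumptions
for the sketch; in the kernel the real code lives in include/linux/list.h
and lib/list_debug.c.]

  #include <stdbool.h>

  struct list_head { struct list_head *next, *prev; };

  /* Stand-in for the out-of-line checker (lib/list_debug.c in the kernel). */
  static __attribute__((noinline))
  bool ___list_add_valid(struct list_head *new, struct list_head *prev,
  			 struct list_head *next)
  {
  	return next->prev == prev && prev->next == next &&
  	       new != prev && new != next;
  }

  /* Always-inlined wrapper: every call site reaches the checker through it,
   * so later patches can add inline fast-path checks here without touching
   * callers, and instrumentation cannot outline the wrapper into a real call. */
  static inline __attribute__((__always_inline__))
  bool __list_add_valid(struct list_head *new, struct list_head *prev,
  			struct list_head *next)
  {
  	return ___list_add_valid(new, prev, next);
  }

  /* Roughly how list insertion consumes the wrapper (simplified stand-in). */
  static inline void __list_add(struct list_head *new, struct list_head *prev,
  			      struct list_head *next)
  {
  	if (!__list_add_valid(new, prev, next))
  		return;
  	next->prev = new;
  	new->next = next;
  	new->prev = prev;
  	prev->next = new;
  }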
From patchwork Fri Aug 4 09:02:58 2023
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 13341547
Date: Fri, 4 Aug 2023 11:02:58 +0200
In-Reply-To: <20230804090621.400-1-elver@google.com>
References: <20230804090621.400-1-elver@google.com>
Message-ID: <20230804090621.400-3-elver@google.com>
Subject: [PATCH v2 3/3] list_debug: Introduce CONFIG_DEBUG_LIST_MINIMAL
From: Marco Elver
To: elver@google.com, Andrew Morton, Kees Cook
Cc: Guenter Roeck, Peter Zijlstra, Mark Rutland, Steven Rostedt, Marc Zyngier,
 Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu, Catalin Marinas,
 Will Deacon, Nathan Chancellor, Nick Desaulniers, Tom Rix, Miguel Ojeda,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, llvm@lists.linux.dev, Dmitry Vyukov,
 Alexander Potapenko, kasan-dev@googlegroups.com,
 linux-toolchains@vger.kernel.org

Numerous production kernel configs (see [1, 2]) choose to enable
CONFIG_DEBUG_LIST, which is also recommended by KSPP for hardened configs
[3]. The feature was never designed with performance in mind, yet common
list manipulation happens in hot paths all over the kernel.

Introduce CONFIG_DEBUG_LIST_MINIMAL, which performs list pointer checking
inline and delegates to the reporting slow path only upon list corruption.

To generate optimal machine code with CONFIG_DEBUG_LIST_MINIMAL:

  1. Elide checking for pointer values which upon dereference would result
     in an immediate access fault -- therefore "minimal" checks. The
     trade-off is lower-quality error reports.

  2. Use the newly introduced __preserve_most function attribute (available
     with Clang, but not yet with GCC) to minimize the code footprint for
     calling the reporting slow path. As a result, function size of callers
     is reduced by avoiding saving registers before calling the rarely
     called reporting slow path. Note that all TUs in lib/Makefile already
     disable function tracing, including list_debug.c, and __preserve_most's
     implied notrace has no effect in this case.

  3. Because the inline checks are a subset of the full set of checks in
     ___list_*_valid(), always return false if the inline checks failed.
     This avoids a redundant compare and conditional branch right after
     return from the slow path.
As a side-effect of the checks being inline, if the compiler can prove some
condition to always be true, it can completely elide some checks.

Running netperf with CONFIG_DEBUG_LIST_MINIMAL (using a Clang compiler with
"preserve_most") shows throughput improvements, in my case of ~7% on average
(up to 20-30% on some test cases).

Link: https://r.android.com/1266735 [1]
Link: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config [2]
Link: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings [3]
Signed-off-by: Marco Elver
---
v2:
* Note that lib/Makefile disables function tracing for everything and
  __preserve_most's implied notrace is a noop here.
---
 arch/arm64/kvm/hyp/nvhe/list_debug.c |  2 +
 include/linux/list.h                 | 56 +++++++++++++++++++++++++---
 lib/Kconfig.debug                    | 15 ++++++++
 lib/list_debug.c                     |  2 +
 4 files changed, 69 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/list_debug.c b/arch/arm64/kvm/hyp/nvhe/list_debug.c
index 589284496ac5..df718e29f6d4 100644
--- a/arch/arm64/kvm/hyp/nvhe/list_debug.c
+++ b/arch/arm64/kvm/hyp/nvhe/list_debug.c
@@ -26,6 +26,7 @@ static inline __must_check bool nvhe_check_data_corruption(bool v)
 
 /* The predicates checked here are taken from lib/list_debug.c. */
 
+__list_valid_slowpath
 bool ___list_add_valid(struct list_head *new, struct list_head *prev,
 		       struct list_head *next)
 {
@@ -37,6 +38,7 @@ bool ___list_add_valid(struct list_head *new, struct list_head *prev,
 	return true;
 }
 
+__list_valid_slowpath
 bool ___list_del_entry_valid(struct list_head *entry)
 {
 	struct list_head *prev, *next;
 
diff --git a/include/linux/list.h b/include/linux/list.h
index e0b2cf904409..a28a215a3eb1 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -39,20 +39,64 @@ static inline void INIT_LIST_HEAD(struct list_head *list)
 }
 
 #ifdef CONFIG_DEBUG_LIST
-extern bool ___list_add_valid(struct list_head *new,
-			      struct list_head *prev,
-			      struct list_head *next);
+
+#ifdef CONFIG_DEBUG_LIST_MINIMAL
+# define __list_valid_slowpath __cold __preserve_most
+#else
+# define __list_valid_slowpath
+#endif
+
+extern bool __list_valid_slowpath ___list_add_valid(struct list_head *new,
+						     struct list_head *prev,
+						     struct list_head *next);
 static __always_inline bool __list_add_valid(struct list_head *new,
 					     struct list_head *prev,
 					     struct list_head *next)
 {
-	return ___list_add_valid(new, prev, next);
+	bool ret = true;
+
+	if (IS_ENABLED(CONFIG_DEBUG_LIST_MINIMAL)) {
+		/*
+		 * In the minimal config, elide checking if next and prev are
+		 * NULL, since the immediate dereference of them below would
+		 * result in a fault if NULL.
+		 *
+		 * With the minimal config we can afford to inline the checks,
+		 * which also gives the compiler a chance to elide some of them
+		 * completely if they can be proven at compile-time. If one of
+		 * the pre-conditions does not hold, the slow-path will show a
+		 * report which pre-condition failed.
+		 */
+		if (likely(next->prev == prev && prev->next == next && new != prev && new != next))
+			return true;
+		ret = false;
+	}
+
+	ret &= ___list_add_valid(new, prev, next);
+	return ret;
 }
 
-extern bool ___list_del_entry_valid(struct list_head *entry);
+extern bool __list_valid_slowpath ___list_del_entry_valid(struct list_head *entry);
 static __always_inline bool __list_del_entry_valid(struct list_head *entry)
 {
-	return ___list_del_entry_valid(entry);
+	bool ret = true;
+
+	if (IS_ENABLED(CONFIG_DEBUG_LIST_MINIMAL)) {
+		struct list_head *prev = entry->prev;
+		struct list_head *next = entry->next;
+
+		/*
+		 * In the minimal config, elide checking if next and prev are
+		 * NULL, LIST_POISON1 or LIST_POISON2, since the immediate
+		 * dereference of them below would result in a fault.
+		 */
+		if (likely(prev->next == entry && next->prev == entry))
+			return true;
+		ret = false;
+	}
+
+	ret &= ___list_del_entry_valid(entry);
+	return ret;
 }
 #else
 static inline bool __list_add_valid(struct list_head *new,
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index fbc89baf7de6..e72cf08af0fa 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1680,6 +1680,21 @@ config DEBUG_LIST
 
 	  If unsure, say N.
 
+config DEBUG_LIST_MINIMAL
+	bool "Minimal linked list debug checks"
+	default !DEBUG_KERNEL
+	depends on DEBUG_LIST
+	help
+	  Only perform the minimal set of checks in the linked-list walking
+	  routines to catch corruptions that are not guaranteed to result in an
+	  immediate access fault.
+
+	  This trades lower quality error reports for improved performance: the
+	  generated code should be more optimal and provide trade-offs that may
+	  better serve safety- and performance-critical environments.
+
+	  If unsure, say Y.
+
 config DEBUG_PLIST
 	bool "Debug priority linked list manipulation"
 	depends on DEBUG_KERNEL
diff --git a/lib/list_debug.c b/lib/list_debug.c
index fd69009cc696..daad32855f0d 100644
--- a/lib/list_debug.c
+++ b/lib/list_debug.c
@@ -17,6 +17,7 @@
  * attempt).
  */
 
+__list_valid_slowpath
 bool ___list_add_valid(struct list_head *new, struct list_head *prev,
 		       struct list_head *next)
 {
@@ -39,6 +40,7 @@ bool ___list_add_valid(struct list_head *new, struct list_head *prev,
 }
 EXPORT_SYMBOL(___list_add_valid);
 
+__list_valid_slowpath
 bool ___list_del_entry_valid(struct list_head *entry)
 {
 	struct list_head *prev, *next;
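
[Illustration only, not part of the series: the overall pattern the three
patches build up to, condensed into a self-contained user-space sketch. The
names, the stand-in struct, and the reporting function are assumptions made
for the sketch; the real implementation is the include/linux/list.h and
lib/list_debug.c code shown in the diffs above.]

  #include <stdbool.h>
  #include <stdio.h>

  struct list_head { struct list_head *next, *prev; };

  /* Mirror of the kernel macro: use preserve_most only if the compiler
   * supports it (Clang on x86-64/arm64); otherwise expand to nothing. */
  #ifdef __has_attribute
  # if __has_attribute(__preserve_most__)
  #  define __preserve_most __attribute__((__preserve_most__))
  # endif
  #endif
  #ifndef __preserve_most
  # define __preserve_most
  #endif

  /* Cold, out-of-line reporting slow path: with preserve_most, callers need
   * not spill their caller-saved registers around this call. */
  static __attribute__((cold, noinline)) __preserve_most
  bool report_add_corruption(struct list_head *new, struct list_head *prev,
  			    struct list_head *next)
  {
  	fprintf(stderr, "list_add corruption: new=%p prev=%p next=%p\n",
  		(void *)new, (void *)prev, (void *)next);
  	return false;	/* inline checks are a subset, so always report failure */
  }

  /* Minimal inline fast path: only predicates whose violation would not
   * already cause an immediate access fault when dereferenced. */
  static inline bool list_add_valid(struct list_head *new,
  				  struct list_head *prev,
  				  struct list_head *next)
  {
  	if (__builtin_expect(next->prev == prev && prev->next == next &&
  			     new != prev && new != next, 1))
  		return true;
  	return report_add_corruption(new, prev, next);
  }

The design choice mirrors the commit message: the common case is a handful of
inline compares with no call at all, and the cold slow path both produces the
report and returns false, so the caller never re-checks the predicates.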