From patchwork Fri Jan 20 00:08:15 2023
From: KP Singh
To: linux-security-module@vger.kernel.org, bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, jackmanb@google.com,
    renauld@google.com, paul@paul-moore.com, casey@schaufler-ca.com,
    song@kernel.org, revest@chromium.org, keescook@chromium.org, KP Singh
Subject: [PATCH RESEND bpf-next 1/4] kernel: Add helper macros for loop unrolling
Date: Fri, 20 Jan 2023 01:08:15 +0100
Message-Id: <20230120000818.1324170-2-kpsingh@kernel.org>
In-Reply-To: <20230120000818.1324170-1-kpsingh@kernel.org>
References: <20230120000818.1324170-1-kpsingh@kernel.org>

This helps in easily initializing blocks of code (e.g. static calls and
keys).

UNROLL(N, MACRO, __VA_ARGS__) calls MACRO N times with the first
argument as the index of the iteration. This allows string pasting to
create unique tokens for variable names, function calls etc.
As an example:

  #include <linux/unroll.h>

  #define MACRO(N, a, b)            \
          int add_##N(int a, int b) \
          {                         \
                  return a + b + N; \
          }

  UNROLL(2, MACRO, x, y)

expands to:

  int add_0(int x, int y) { return x + y + 0; }
  int add_1(int x, int y) { return x + y + 1; }

Signed-off-by: KP Singh
---
 include/linux/unroll.h | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)
 create mode 100644 include/linux/unroll.h

diff --git a/include/linux/unroll.h b/include/linux/unroll.h
new file mode 100644
index 000000000000..e19aef95b94b
--- /dev/null
+++ b/include/linux/unroll.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Copyright (C) 2020 Google LLC.
+ */
+
+#ifndef __UNROLL_H
+#define __UNROLL_H
+
+#define __UNROLL_CONCAT(a, b) a ## _ ## b
+#define UNROLL(N, MACRO, args...) __UNROLL_CONCAT(__UNROLL, N)(MACRO, args)
+
+#define __UNROLL_0(MACRO, args...)
+#define __UNROLL_1(MACRO, args...) __UNROLL_0(MACRO, args) MACRO(0, args)
+#define __UNROLL_2(MACRO, args...) __UNROLL_1(MACRO, args) MACRO(1, args)
+#define __UNROLL_3(MACRO, args...) __UNROLL_2(MACRO, args) MACRO(2, args)
+#define __UNROLL_4(MACRO, args...) __UNROLL_3(MACRO, args) MACRO(3, args)
+#define __UNROLL_5(MACRO, args...) __UNROLL_4(MACRO, args) MACRO(4, args)
+#define __UNROLL_6(MACRO, args...) __UNROLL_5(MACRO, args) MACRO(5, args)
+#define __UNROLL_7(MACRO, args...) __UNROLL_6(MACRO, args) MACRO(6, args)
+#define __UNROLL_8(MACRO, args...) __UNROLL_7(MACRO, args) MACRO(7, args)
+#define __UNROLL_9(MACRO, args...) __UNROLL_8(MACRO, args) MACRO(8, args)
+#define __UNROLL_10(MACRO, args...) __UNROLL_9(MACRO, args) MACRO(9, args)
+#define __UNROLL_11(MACRO, args...) __UNROLL_10(MACRO, args) MACRO(10, args)
+#define __UNROLL_12(MACRO, args...) __UNROLL_11(MACRO, args) MACRO(11, args)
+#define __UNROLL_13(MACRO, args...) __UNROLL_12(MACRO, args) MACRO(12, args)
+#define __UNROLL_14(MACRO, args...) __UNROLL_13(MACRO, args) MACRO(13, args)
+#define __UNROLL_15(MACRO, args...) __UNROLL_14(MACRO, args) MACRO(14, args)
+#define __UNROLL_16(MACRO, args...) __UNROLL_15(MACRO, args) MACRO(15, args)
+#define __UNROLL_17(MACRO, args...) __UNROLL_16(MACRO, args) MACRO(16, args)
+#define __UNROLL_18(MACRO, args...) __UNROLL_17(MACRO, args) MACRO(17, args)
+#define __UNROLL_19(MACRO, args...) __UNROLL_18(MACRO, args) MACRO(18, args)
+#define __UNROLL_20(MACRO, args...) __UNROLL_19(MACRO, args) MACRO(19, args)
+
+#endif /* __UNROLL_H */
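For a quick check of the expansion behaviour, here is a minimal,
self-contained user-space sketch. It copies the helper macros from the
header above (trimmed to two iterations) rather than pulling in kernel
includes, and DEFINE_ADDER and the add_N functions are illustrative
names, not part of the patch:

  #include <stdio.h>

  #define __UNROLL_CONCAT(a, b) a ## _ ## b
  #define UNROLL(N, MACRO, args...) __UNROLL_CONCAT(__UNROLL, N)(MACRO, args)
  #define __UNROLL_0(MACRO, args...)
  #define __UNROLL_1(MACRO, args...) __UNROLL_0(MACRO, args) MACRO(0, args)
  #define __UNROLL_2(MACRO, args...) __UNROLL_1(MACRO, args) MACRO(1, args)

  /* Each expansion gets a unique function name via ##N */
  #define DEFINE_ADDER(N, a, b)     \
          int add_##N(int a, int b) \
          {                         \
                  return a + b + N; \
          }

  UNROLL(2, DEFINE_ADDER, x, y) /* defines add_0() and add_1() */

  int main(void)
  {
          printf("%d %d\n", add_0(1, 2), add_1(1, 2)); /* prints: 3 4 */
          return 0;
  }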
From patchwork Fri Jan 20 00:08:16 2023
From: KP Singh
To: linux-security-module@vger.kernel.org, bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, jackmanb@google.com,
    renauld@google.com, paul@paul-moore.com, casey@schaufler-ca.com,
    song@kernel.org, revest@chromium.org, keescook@chromium.org, KP Singh
Subject: [PATCH RESEND bpf-next 2/4] security: Generate a header with the count of enabled LSMs
Date: Fri, 20 Jan 2023 01:08:16 +0100
Message-Id: <20230120000818.1324170-3-kpsingh@kernel.org>
In-Reply-To: <20230120000818.1324170-1-kpsingh@kernel.org>
References: <20230120000818.1324170-1-kpsingh@kernel.org>

The header defines a MAX_LSM_COUNT constant, which is used in a
subsequent patch to generate the static calls for each LSM hook. The
static calls are named using preprocessor token pasting, and token
pasting does not work with arithmetic expressions, so generate a simple
lsm_count.h header that holds the number of LSMs that can be enabled
for a given kernel config.

While one could generate static calls for all the LSMs the kernel
supports, this would be wasteful, as most kernels only enable a handful
of LSMs.
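For illustration only (the output shape follows from the fprintf()
calls in the generator added below; the particular config selection is
hypothetical), a kernel that enables just CONFIG_SECURITY (counted for
capabilities), CONFIG_SECURITY_SELINUX and CONFIG_BPF_LSM would get an
include/generated/lsm_count.h along these lines:

  #ifndef _LSM_COUNT_H_
  #define _LSM_COUNT_H_


  #define MAX_LSM_COUNT 3
  #endif /* _LSM_COUNT_H_ */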
Signed-off-by: KP Singh
---
 scripts/Makefile                 |  1 +
 scripts/security/.gitignore      |  1 +
 scripts/security/Makefile        |  4 +++
 scripts/security/gen_lsm_count.c | 57 ++++++++++++++++++++++++++++++++
 security/Makefile                | 11 ++++++
 5 files changed, 74 insertions(+)
 create mode 100644 scripts/security/.gitignore
 create mode 100644 scripts/security/Makefile
 create mode 100644 scripts/security/gen_lsm_count.c

diff --git a/scripts/Makefile b/scripts/Makefile
index 1575af84d557..9712249c0fb3 100644
--- a/scripts/Makefile
+++ b/scripts/Makefile
@@ -41,6 +41,7 @@ targets += module.lds
 subdir-$(CONFIG_GCC_PLUGINS) += gcc-plugins
 subdir-$(CONFIG_MODVERSIONS) += genksyms
 subdir-$(CONFIG_SECURITY_SELINUX) += selinux
+subdir-$(CONFIG_SECURITY) += security
 
 # Let clean descend into subdirs
 subdir- += basic dtc gdb kconfig mod
diff --git a/scripts/security/.gitignore b/scripts/security/.gitignore
new file mode 100644
index 000000000000..684af16735f1
--- /dev/null
+++ b/scripts/security/.gitignore
@@ -0,0 +1 @@
+gen_lsm_count
diff --git a/scripts/security/Makefile b/scripts/security/Makefile
new file mode 100644
index 000000000000..05f7e4109052
--- /dev/null
+++ b/scripts/security/Makefile
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0
+hostprogs-always-y += gen_lsm_count
+HOST_EXTRACFLAGS += \
+	-I$(srctree)/include/uapi -I$(srctree)/include
diff --git a/scripts/security/gen_lsm_count.c b/scripts/security/gen_lsm_count.c
new file mode 100644
index 000000000000..a9a227724d84
--- /dev/null
+++ b/scripts/security/gen_lsm_count.c
@@ -0,0 +1,57 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* NOTE: we really do want to use the kernel headers here */
+#define __EXPORTED_HEADERS__
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+#define GEN_MAX_LSM_COUNT ( \
+	/* Capabilities */ \
+	IS_ENABLED(CONFIG_SECURITY) + \
+	IS_ENABLED(CONFIG_SECURITY_SELINUX) + \
+	IS_ENABLED(CONFIG_SECURITY_SMACK) + \
+	IS_ENABLED(CONFIG_SECURITY_TOMOYO) + \
+	IS_ENABLED(CONFIG_SECURITY_APPARMOR) + \
+	IS_ENABLED(CONFIG_SECURITY_YAMA) + \
+	IS_ENABLED(CONFIG_SECURITY_LOADPIN) + \
+	IS_ENABLED(CONFIG_SECURITY_SAFESETID) + \
+	IS_ENABLED(CONFIG_SECURITY_LOCKDOWN_LSM) + \
+	IS_ENABLED(CONFIG_BPF_LSM) + \
+	IS_ENABLED(CONFIG_SECURITY_LANDLOCK))
+
+const char *progname;
+
+static void usage(void)
+{
+	printf("usage: %s lsm_count.h\n", progname);
+	exit(1);
+}
+
+int main(int argc, char *argv[])
+{
+	FILE *fout;
+
+	progname = argv[0];
+
+	if (argc < 2)
+		usage();
+
+	fout = fopen(argv[1], "w");
+	if (!fout) {
+		fprintf(stderr, "Could not open %s for writing: %s\n",
+			argv[1], strerror(errno));
+		exit(2);
+	}
+
+	fprintf(fout, "#ifndef _LSM_COUNT_H_\n#define _LSM_COUNT_H_\n\n");
+	fprintf(fout, "\n#define MAX_LSM_COUNT %d\n", GEN_MAX_LSM_COUNT);
+	fprintf(fout, "#endif /* _LSM_COUNT_H_ */\n");
+	exit(0);
+}
diff --git a/security/Makefile b/security/Makefile
index 18121f8f85cd..7a47174831f4 100644
--- a/security/Makefile
+++ b/security/Makefile
@@ -3,6 +3,7 @@
 # Makefile for the kernel security code
 #
 
+gen := include/generated
 obj-$(CONFIG_KEYS)			+= keys/
 
 # always enable default capabilities
@@ -27,3 +28,13 @@ obj-$(CONFIG_SECURITY_LANDLOCK)	+= landlock/
 
 # Object integrity file lists
 obj-$(CONFIG_INTEGRITY)			+= integrity/
+
+$(addprefix $(obj)/,$(obj-y)): $(gen)/lsm_count.h
+
+quiet_cmd_lsm_count = GEN     ${gen}/lsm_count.h
+      cmd_lsm_count = scripts/security/gen_lsm_count ${gen}/lsm_count.h
+
+targets += lsm_count.h
+
+${gen}/lsm_count.h: FORCE
+	$(call if_changed,lsm_count)
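As an aside on the commit message's point that token pasting does not
work with arithmetic expressions, a small preprocessor sketch
(GOOD_COUNT, BAD_COUNT and the placeholder macro M are hypothetical,
and the UNROLL definitions from the previous patch are assumed to be
in scope):

  #define GOOD_COUNT 2       /* a plain literal, as the generated lsm_count.h provides */
  #define BAD_COUNT  (1 + 1) /* what IS_ENABLED() arithmetic would produce             */

  UNROLL(GOOD_COUNT, M, x)   /* N expands to 2 -> __UNROLL_2(M, x)                 */
  UNROLL(BAD_COUNT, M, x)    /* N expands to (1 + 1); pasting "__UNROLL_" with "("  */
                             /* is not a valid preprocessing token -> build error   */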
From patchwork Fri Jan 20 00:08:17 2023
From: KP Singh
To: linux-security-module@vger.kernel.org, bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, jackmanb@google.com,
    renauld@google.com, paul@paul-moore.com, casey@schaufler-ca.com,
    song@kernel.org, revest@chromium.org, keescook@chromium.org, KP Singh
Subject: [PATCH RESEND bpf-next 3/4] security: Replace indirect LSM hook calls with static calls
Date: Fri, 20 Jan 2023 01:08:17 +0100
Message-Id: <20230120000818.1324170-4-kpsingh@kernel.org>
In-Reply-To: <20230120000818.1324170-1-kpsingh@kernel.org>
References: <20230120000818.1324170-1-kpsingh@kernel.org>

LSM hooks are currently invoked from a linked list as indirect calls.
These indirect calls go through retpolines as a mitigation for
speculative execution attacks (Branch History / Branch Target
Injection), which adds extra overhead that is especially bad in kernel
hot paths:

security_file_ioctl:
   0xffffffff814f0320 <+0>:	endbr64
   0xffffffff814f0324 <+4>:	push   %rbp
   0xffffffff814f0325 <+5>:	push   %r15
   0xffffffff814f0327 <+7>:	push   %r14
   0xffffffff814f0329 <+9>:	push   %rbx
   0xffffffff814f032a <+10>:	mov    %rdx,%rbx
   0xffffffff814f032d <+13>:	mov    %esi,%ebp
   0xffffffff814f032f <+15>:	mov    %rdi,%r14
   0xffffffff814f0332 <+18>:	mov    $0xffffffff834a7030,%r15
   0xffffffff814f0339 <+25>:	mov    (%r15),%r15
   0xffffffff814f033c <+28>:	test   %r15,%r15
   0xffffffff814f033f <+31>:	je     0xffffffff814f0358
   0xffffffff814f0341 <+33>:	mov    0x18(%r15),%r11
   0xffffffff814f0345 <+37>:	mov    %r14,%rdi
   0xffffffff814f0348 <+40>:	mov    %ebp,%esi
   0xffffffff814f034a <+42>:	mov    %rbx,%rdx
>>> 0xffffffff814f034d <+45>:	call   0xffffffff81f742e0 <__x86_indirect_thunk_array+352>

    Indirect calls that use retpolines add overhead, not just from the
    extra instructions but also from branch misses.

   0xffffffff814f0352 <+50>:	test   %eax,%eax
   0xffffffff814f0354 <+52>:	je     0xffffffff814f0339
   0xffffffff814f0356 <+54>:	jmp    0xffffffff814f035a
   0xffffffff814f0358 <+56>:	xor    %eax,%eax
   0xffffffff814f035a <+58>:	pop    %rbx
   0xffffffff814f035b <+59>:	pop    %r14
   0xffffffff814f035d <+61>:	pop    %r15
   0xffffffff814f035f <+63>:	pop    %rbp
   0xffffffff814f0360 <+64>:	jmp    0xffffffff81f747c4 <__x86_return_thunk>

The indirect calls are not really needed, as the addresses of the
enabled LSM callbacks are known at boot time; only their order can
change, via the lsm= kernel command line parameter.

An array of static calls is defined per LSM hook, and the static calls
are updated at boot time once the order has been determined.

A static key guards whether an LSM static call is enabled or not.
Without this static key, for LSM hooks that return an int, the mere
presence of a hook that returns a default value can create side
effects, which has resulted in bugs [1].

With the hooks now exposed as static calls, one can see that the
retpolines are gone and the LSM callbacks are invoked directly:

security_file_ioctl:
   0xffffffff814f0dd0 <+0>:	endbr64
   0xffffffff814f0dd4 <+4>:	push   %rbp
   0xffffffff814f0dd5 <+5>:	push   %r14
   0xffffffff814f0dd7 <+7>:	push   %rbx
   0xffffffff814f0dd8 <+8>:	mov    %rdx,%rbx
   0xffffffff814f0ddb <+11>:	mov    %esi,%ebp
   0xffffffff814f0ddd <+13>:	mov    %rdi,%r14
>>> 0xffffffff814f0de0 <+16>:	jmp    0xffffffff814f0df1

    Static key enabled for selinux_file_ioctl.

>>> 0xffffffff814f0de2 <+18>:	jmp    0xffffffff814f0e08

    Static key enabled for bpf_lsm_file_ioctl. This is changed to
    default to false (disabled) in the next patch, to avoid the
    existing side-effect issues of the BPF LSM [1].

   0xffffffff814f0de4 <+20>:	xor    %eax,%eax
   0xffffffff814f0de6 <+22>:	xchg   %ax,%ax
   0xffffffff814f0de8 <+24>:	pop    %rbx
   0xffffffff814f0de9 <+25>:	pop    %r14
   0xffffffff814f0deb <+27>:	pop    %rbp
   0xffffffff814f0dec <+28>:	jmp    0xffffffff81f767c4 <__x86_return_thunk>
   0xffffffff814f0df1 <+33>:	endbr64
   0xffffffff814f0df5 <+37>:	mov    %r14,%rdi
   0xffffffff814f0df8 <+40>:	mov    %ebp,%esi
   0xffffffff814f0dfa <+42>:	mov    %rbx,%rdx
>>> 0xffffffff814f0dfd <+45>:	call   0xffffffff814fe820

    Direct call to SELinux.

   0xffffffff814f0e02 <+50>:	test   %eax,%eax
   0xffffffff814f0e04 <+52>:	jne    0xffffffff814f0de8
   0xffffffff814f0e06 <+54>:	jmp    0xffffffff814f0de2
   0xffffffff814f0e08 <+56>:	endbr64
   0xffffffff814f0e0c <+60>:	mov    %r14,%rdi
   0xffffffff814f0e0f <+63>:	mov    %ebp,%esi
   0xffffffff814f0e11 <+65>:	mov    %rbx,%rdx
>>> 0xffffffff814f0e14 <+68>:	call   0xffffffff8123b7d0

    Direct call to bpf_lsm_file_ioctl.

   0xffffffff814f0e19 <+73>:	test   %eax,%eax
   0xffffffff814f0e1b <+75>:	jne    0xffffffff814f0de8
   0xffffffff814f0e1d <+77>:	jmp    0xffffffff814f0de4
   0xffffffff814f0e1f <+79>:	endbr64
   0xffffffff814f0e23 <+83>:	mov    %r14,%rdi
   0xffffffff814f0e26 <+86>:	mov    %ebp,%esi
   0xffffffff814f0e28 <+88>:	mov    %rbx,%rdx
   0xffffffff814f0e2b <+91>:	pop    %rbx
   0xffffffff814f0e2c <+92>:	pop    %r14
   0xffffffff814f0e2e <+94>:	pop    %rbp
   0xffffffff814f0e2f <+95>:	ret
   0xffffffff814f0e30 <+96>:	int3
   0xffffffff814f0e31 <+97>:	int3
   0xffffffff814f0e32 <+98>:	int3
   0xffffffff814f0e33 <+99>:	int3

There are some hooks that don't use the call_int_hook and
call_void_hook macros. These hooks are updated to use a new macro,
security_for_each_hook, where the lsm_callback is invoked as a regular
indirect call.
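To make the call-site shape concrete, here is a rough sketch of what
call_int_hook(file_ioctl, 0, file, cmd, arg) conceptually expands to
with MAX_LSM_COUNT == 2, using the macros introduced further below
(simplified; the real expansion goes through the static_call() and
static_branch_unlikely() machinery, and which LSM ends up in which slot
depends on the boot-time LSM order):

  ({
          __label__ out;
          int RC = 0;

          /* slot 0: e.g. filled by SELinux at boot */
          if (static_branch_unlikely(&security_enabled_key_file_ioctl_0)) {
                  RC = static_call(lsm_static_call_file_ioctl_0)(file, cmd, arg);
                  if (RC != 0)
                          goto out;
          }
          /* slot 1: e.g. filled by the BPF LSM */
          if (static_branch_unlikely(&security_enabled_key_file_ioctl_1)) {
                  RC = static_call(lsm_static_call_file_ioctl_1)(file, cmd, arg);
                  if (RC != 0)
                          goto out;
          }
  out:
          RC;
  })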
Currently, there are no performance sensitive hooks that use the security_for_each_hook macro. However, if, some performance sensitive hooks are discovered, these can be updated to use static calls with loop unrolling as well using a custom macro. [1] https://lore.kernel.org/linux-security-module/20220609234601.2026362-1-kpsingh@kernel.org/ Signed-off-by: KP Singh --- include/linux/lsm_hooks.h | 83 +++++++++++++-- security/security.c | 216 ++++++++++++++++++++++++-------------- 2 files changed, 211 insertions(+), 88 deletions(-) diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h index 0a5ba81f7367..c82d15a4ef50 100644 --- a/include/linux/lsm_hooks.h +++ b/include/linux/lsm_hooks.h @@ -28,6 +28,26 @@ #include #include #include +#include +#include +#include + +/* Include the generated MAX_LSM_COUNT */ +#include + +#define SECURITY_HOOK_ENABLED_KEY(HOOK, IDX) security_enabled_key_##HOOK##_##IDX + +/* + * Identifier for the LSM static calls. + * HOOK is an LSM hook as defined in linux/lsm_hookdefs.h + * IDX is the index of the static call. 0 <= NUM < MAX_LSM_COUNT + */ +#define LSM_STATIC_CALL(HOOK, IDX) lsm_static_call_##HOOK##_##IDX + +/* + * Call the macro M for each LSM hook MAX_LSM_COUNT times. + */ +#define LSM_UNROLL(M, ...) UNROLL(MAX_LSM_COUNT, M, __VA_ARGS__) /** * union security_list_options - Linux Security Module hook function list @@ -1657,21 +1677,48 @@ union security_list_options { #define LSM_HOOK(RET, DEFAULT, NAME, ...) RET (*NAME)(__VA_ARGS__); #include "lsm_hook_defs.h" #undef LSM_HOOK + void *lsm_callback; }; -struct security_hook_heads { - #define LSM_HOOK(RET, DEFAULT, NAME, ...) struct hlist_head NAME; - #include "lsm_hook_defs.h" +/* + * @key: static call key as defined by STATIC_CALL_KEY + * @trampoline: static call trampoline as defined by STATIC_CALL_TRAMP + * @hl: The security_hook_list as initialized by the owning LSM. + * @enabled_key: Enabled when the static call has an LSM hook associated. + */ +struct lsm_static_call { + struct static_call_key *key; + void *trampoline; + struct security_hook_list *hl; + struct static_key *enabled_key; +}; + +/* + * Table of the static calls for each LSM hook. + * Once the LSMs are initialized, their callbacks will be copied to these + * tables such that the calls are filled backwards (from last to first). + * This way, we can jump directly to the first used static call, and execute + * all of them after. This essentially makes the entry point + * dynamic to adapt the number of static calls to the number of callbacks. + */ +struct lsm_static_calls_table { + #define LSM_HOOK(RET, DEFAULT, NAME, ...) \ + struct lsm_static_call NAME[MAX_LSM_COUNT]; + #include #undef LSM_HOOK } __randomize_layout; /* * Security module hook list structure. * For use with generic list macros for common operations. + * + * struct security_hook_list - Contents of a cacheable, mappable object. + * @scalls: The beginning of the array of static calls assigned to this hook. + * @hook: The callback for the hook. + * @lsm: The name of the lsm that owns this hook. */ struct security_hook_list { - struct hlist_node list; - struct hlist_head *head; + struct lsm_static_call *scalls; union security_list_options hook; const char *lsm; } __randomize_layout; @@ -1701,10 +1748,12 @@ struct lsm_blob_sizes { * care of the common case and reduces the amount of * text involved. 
*/ -#define LSM_HOOK_INIT(HEAD, HOOK) \ - { .head = &security_hook_heads.HEAD, .hook = { .HEAD = HOOK } } +#define LSM_HOOK_INIT(NAME, CALLBACK) \ + { \ + .scalls = static_calls_table.NAME, \ + .hook = { .NAME = CALLBACK } \ + } -extern struct security_hook_heads security_hook_heads; extern char *lsm_names; extern void security_add_hooks(struct security_hook_list *hooks, int count, @@ -1756,10 +1805,21 @@ extern struct lsm_info __start_early_lsm_info[], __end_early_lsm_info[]; static inline void security_delete_hooks(struct security_hook_list *hooks, int count) { - int i; + struct lsm_static_call *scalls; + int i, j; + + for (i = 0; i < count; i++) { + scalls = hooks[i].scalls; + for (j = 0; j < MAX_LSM_COUNT; j++) { + if (scalls[j].hl != &hooks[i]) + continue; - for (i = 0; i < count; i++) - hlist_del_rcu(&hooks[i].list); + static_key_disable(scalls[j].enabled_key); + __static_call_update(scalls[j].key, + scalls[j].trampoline, NULL); + scalls[j].hl = NULL; + } + } } #endif /* CONFIG_SECURITY_SELINUX_DISABLE */ @@ -1771,5 +1831,6 @@ static inline void security_delete_hooks(struct security_hook_list *hooks, #endif /* CONFIG_SECURITY_WRITABLE_HOOKS */ extern int lsm_inode_alloc(struct inode *inode); +extern struct lsm_static_calls_table static_calls_table __ro_after_init; #endif /* ! __LINUX_LSM_HOOKS_H */ diff --git a/security/security.c b/security/security.c index d1571900a8c7..e54d5ba187d1 100644 --- a/security/security.c +++ b/security/security.c @@ -29,6 +29,8 @@ #include #include #include +#include +#include #define MAX_LSM_EVM_XATTR 2 @@ -74,7 +76,6 @@ const char *const lockdown_reasons[LOCKDOWN_CONFIDENTIALITY_MAX+1] = { [LOCKDOWN_CONFIDENTIALITY_MAX] = "confidentiality", }; -struct security_hook_heads security_hook_heads __lsm_ro_after_init; static BLOCKING_NOTIFIER_HEAD(blocking_lsm_notifier_chain); static struct kmem_cache *lsm_file_cache; @@ -93,6 +94,43 @@ static __initconst const char * const builtin_lsm_order = CONFIG_LSM; static __initdata struct lsm_info **ordered_lsms; static __initdata struct lsm_info *exclusive; +/* + * Define static calls and static keys for each LSM hook. + */ + +#define DEFINE_LSM_STATIC_CALL(NUM, NAME, RET, ...) \ + DEFINE_STATIC_CALL_NULL(LSM_STATIC_CALL(NAME, NUM), \ + *((RET(*)(__VA_ARGS__))NULL)); \ + DEFINE_STATIC_KEY_FALSE(SECURITY_HOOK_ENABLED_KEY(NAME, NUM)); + +#define LSM_HOOK(RET, DEFAULT, NAME, ...) \ + LSM_UNROLL(DEFINE_LSM_STATIC_CALL, NAME, RET, __VA_ARGS__) +#include +#undef LSM_HOOK +#undef DEFINE_LSM_STATIC_CALL + +/* + * Initialise a table of static calls for each LSM hook. + * DEFINE_STATIC_CALL_NULL invocation above generates a key (STATIC_CALL_KEY) + * and a trampoline (STATIC_CALL_TRAMP) which are used to call + * __static_call_update when updating the static call. + */ +struct lsm_static_calls_table static_calls_table __lsm_ro_after_init = { +#define INIT_LSM_STATIC_CALL(NUM, NAME) \ + (struct lsm_static_call) { \ + .key = &STATIC_CALL_KEY(LSM_STATIC_CALL(NAME, NUM)), \ + .trampoline = &STATIC_CALL_TRAMP(LSM_STATIC_CALL(NAME, NUM)),\ + .enabled_key = &SECURITY_HOOK_ENABLED_KEY(NAME, NUM).key,\ + }, +#define LSM_HOOK(RET, DEFAULT, NAME, ...) \ + .NAME = { \ + LSM_UNROLL(INIT_LSM_STATIC_CALL, NAME) \ + }, +#include +#undef LSM_HOOK +#undef INIT_LSM_STATIC_CALL +}; + static __initdata bool debug; #define init_debug(...) 
\ do { \ @@ -153,7 +191,7 @@ static void __init append_ordered_lsm(struct lsm_info *lsm, const char *from) if (exists_ordered_lsm(lsm)) return; - if (WARN(last_lsm == LSM_COUNT, "%s: out of LSM slots!?\n", from)) + if (WARN(last_lsm == LSM_COUNT, "%s: out of LSM static calls!?\n", from)) return; /* Enable this LSM, if it is not already set. */ @@ -318,6 +356,24 @@ static void __init ordered_lsm_parse(const char *order, const char *origin) kfree(sep); } +static void __init lsm_static_call_init(struct security_hook_list *hl) +{ + struct lsm_static_call *scall = hl->scalls; + int i; + + for (i = 0; i < MAX_LSM_COUNT; i++) { + /* Update the first static call that is not used yet */ + if (!scall->hl) { + __static_call_update(scall->key, scall->trampoline, hl->hook.lsm_callback); + scall->hl = hl; + static_key_enable(scall->enabled_key); + return; + } + scall++; + } + panic("%s - No static call remaining to add LSM hook.\n", __func__); +} + static void __init lsm_early_cred(struct cred *cred); static void __init lsm_early_task(struct task_struct *task); @@ -395,11 +451,6 @@ int __init early_security_init(void) { struct lsm_info *lsm; -#define LSM_HOOK(RET, DEFAULT, NAME, ...) \ - INIT_HLIST_HEAD(&security_hook_heads.NAME); -#include "linux/lsm_hook_defs.h" -#undef LSM_HOOK - for (lsm = __start_early_lsm_info; lsm < __end_early_lsm_info; lsm++) { if (!lsm->enabled) lsm->enabled = &lsm_enabled_true; @@ -515,7 +566,7 @@ void __init security_add_hooks(struct security_hook_list *hooks, int count, for (i = 0; i < count; i++) { hooks[i].lsm = lsm; - hlist_add_tail_rcu(&hooks[i].list, hooks[i].head); + lsm_static_call_init(&hooks[i]); } /* @@ -753,28 +804,42 @@ static int lsm_superblock_alloc(struct super_block *sb) * call_int_hook: * This is a hook that returns a value. */ +#define __CALL_STATIC_VOID(NUM, HOOK, ...) \ + if (static_branch_unlikely(&SECURITY_HOOK_ENABLED_KEY(HOOK, NUM))) { \ + static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__); \ + } -#define call_void_hook(FUNC, ...) \ - do { \ - struct security_hook_list *P; \ - \ - hlist_for_each_entry(P, &security_hook_heads.FUNC, list) \ - P->hook.FUNC(__VA_ARGS__); \ +#define call_void_hook(FUNC, ...) \ + do { \ + LSM_UNROLL(__CALL_STATIC_VOID, FUNC, __VA_ARGS__) \ } while (0) -#define call_int_hook(FUNC, IRC, ...) ({ \ - int RC = IRC; \ - do { \ - struct security_hook_list *P; \ - \ - hlist_for_each_entry(P, &security_hook_heads.FUNC, list) { \ - RC = P->hook.FUNC(__VA_ARGS__); \ - if (RC != 0) \ - break; \ - } \ - } while (0); \ - RC; \ -}) +#define __CALL_STATIC_INT(NUM, R, HOOK, ...) \ + if (static_branch_unlikely(&SECURITY_HOOK_ENABLED_KEY(HOOK, NUM))) { \ + R = static_call(LSM_STATIC_CALL(HOOK, NUM))(__VA_ARGS__); \ + if (R != 0) \ + goto out; \ + } + +#define call_int_hook(FUNC, IRC, ...) 
\ + ({ \ + __label__ out; \ + int RC = IRC; \ + do { \ + LSM_UNROLL(__CALL_STATIC_INT, RC, FUNC, __VA_ARGS__) \ + \ + } while (0); \ +out: \ + RC; \ + }) + +#define security_for_each_hook(scall, NAME, expression) \ + for (scall = static_calls_table.NAME; \ + scall - static_calls_table.NAME < MAX_LSM_COUNT; scall++) { \ + if (!static_key_enabled(scall->enabled_key)) \ + continue; \ + (expression); \ + } /* Security operations */ @@ -859,7 +924,7 @@ int security_settime64(const struct timespec64 *ts, const struct timezone *tz) int security_vm_enough_memory_mm(struct mm_struct *mm, long pages) { - struct security_hook_list *hp; + struct lsm_static_call *scall; int cap_sys_admin = 1; int rc; @@ -870,13 +935,13 @@ int security_vm_enough_memory_mm(struct mm_struct *mm, long pages) * agree that it should be set it will. If any module * thinks it should not be set it won't. */ - hlist_for_each_entry(hp, &security_hook_heads.vm_enough_memory, list) { - rc = hp->hook.vm_enough_memory(mm, pages); + security_for_each_hook(scall, vm_enough_memory, ({ + rc = scall->hl->hook.vm_enough_memory(mm, pages); if (rc <= 0) { cap_sys_admin = 0; break; } - } + })); return __vm_enough_memory(mm, pages, cap_sys_admin); } @@ -918,18 +983,17 @@ int security_fs_context_dup(struct fs_context *fc, struct fs_context *src_fc) int security_fs_context_parse_param(struct fs_context *fc, struct fs_parameter *param) { - struct security_hook_list *hp; + struct lsm_static_call *scall; int trc; int rc = -ENOPARAM; - hlist_for_each_entry(hp, &security_hook_heads.fs_context_parse_param, - list) { - trc = hp->hook.fs_context_parse_param(fc, param); + security_for_each_hook(scall, fs_context_parse_param, ({ + trc = scall->hl->hook.fs_context_parse_param(fc, param); if (trc == 0) rc = 0; else if (trc != -ENOPARAM) return trc; - } + })); return rc; } @@ -1092,18 +1156,19 @@ int security_dentry_init_security(struct dentry *dentry, int mode, const char **xattr_name, void **ctx, u32 *ctxlen) { - struct security_hook_list *hp; + struct lsm_static_call *scall; int rc; /* * Only one module will provide a security context. */ - hlist_for_each_entry(hp, &security_hook_heads.dentry_init_security, list) { - rc = hp->hook.dentry_init_security(dentry, mode, name, + security_for_each_hook(scall, dentry_init_security, ({ + rc = scall->hl->hook.dentry_init_security(dentry, mode, name, xattr_name, ctx, ctxlen); if (rc != LSM_RET_DEFAULT(dentry_init_security)) return rc; - } + })); + return LSM_RET_DEFAULT(dentry_init_security); } EXPORT_SYMBOL(security_dentry_init_security); @@ -1502,7 +1567,7 @@ int security_inode_getsecurity(struct user_namespace *mnt_userns, struct inode *inode, const char *name, void **buffer, bool alloc) { - struct security_hook_list *hp; + struct lsm_static_call *scall; int rc; if (unlikely(IS_PRIVATE(inode))) @@ -1510,17 +1575,17 @@ int security_inode_getsecurity(struct user_namespace *mnt_userns, /* * Only one module will provide an attribute with a given name. 
*/ - hlist_for_each_entry(hp, &security_hook_heads.inode_getsecurity, list) { - rc = hp->hook.inode_getsecurity(mnt_userns, inode, name, buffer, alloc); + security_for_each_hook(scall, inode_getsecurity, ({ + rc = scall->hl->hook.inode_getsecurity(mnt_userns, inode, name, buffer, alloc); if (rc != LSM_RET_DEFAULT(inode_getsecurity)) return rc; - } + })); return LSM_RET_DEFAULT(inode_getsecurity); } int security_inode_setsecurity(struct inode *inode, const char *name, const void *value, size_t size, int flags) { - struct security_hook_list *hp; + struct lsm_static_call *scall; int rc; if (unlikely(IS_PRIVATE(inode))) @@ -1528,12 +1593,11 @@ int security_inode_setsecurity(struct inode *inode, const char *name, const void /* * Only one module will provide an attribute with a given name. */ - hlist_for_each_entry(hp, &security_hook_heads.inode_setsecurity, list) { - rc = hp->hook.inode_setsecurity(inode, name, value, size, - flags); + security_for_each_hook(scall, inode_setsecurity, ({ + rc = scall->hl->hook.inode_setsecurity(inode, name, value, size, flags); if (rc != LSM_RET_DEFAULT(inode_setsecurity)) return rc; - } + })); return LSM_RET_DEFAULT(inode_setsecurity); } @@ -1558,7 +1622,7 @@ EXPORT_SYMBOL(security_inode_copy_up); int security_inode_copy_up_xattr(const char *name) { - struct security_hook_list *hp; + struct lsm_static_call *scall; int rc; /* @@ -1566,12 +1630,11 @@ int security_inode_copy_up_xattr(const char *name) * xattr), -EOPNOTSUPP if it does not know anything about the xattr or * any other error code incase of an error. */ - hlist_for_each_entry(hp, - &security_hook_heads.inode_copy_up_xattr, list) { - rc = hp->hook.inode_copy_up_xattr(name); + security_for_each_hook(scall, inode_copy_up_xattr, ({ + rc = scall->hl->hook.inode_copy_up_xattr(name); if (rc != LSM_RET_DEFAULT(inode_copy_up_xattr)) return rc; - } + })); return LSM_RET_DEFAULT(inode_copy_up_xattr); } @@ -1968,16 +2031,16 @@ int security_task_prctl(int option, unsigned long arg2, unsigned long arg3, { int thisrc; int rc = LSM_RET_DEFAULT(task_prctl); - struct security_hook_list *hp; + struct lsm_static_call *scall; - hlist_for_each_entry(hp, &security_hook_heads.task_prctl, list) { - thisrc = hp->hook.task_prctl(option, arg2, arg3, arg4, arg5); + security_for_each_hook(scall, task_prctl, ({ + thisrc = scall->hl->hook.task_prctl(option, arg2, arg3, arg4, arg5); if (thisrc != LSM_RET_DEFAULT(task_prctl)) { rc = thisrc; if (thisrc != 0) break; } - } + })); return rc; } @@ -2142,26 +2205,26 @@ EXPORT_SYMBOL(security_d_instantiate); int security_getprocattr(struct task_struct *p, const char *lsm, const char *name, char **value) { - struct security_hook_list *hp; + struct lsm_static_call *scall; - hlist_for_each_entry(hp, &security_hook_heads.getprocattr, list) { - if (lsm != NULL && strcmp(lsm, hp->lsm)) + security_for_each_hook(scall, getprocattr, ({ + if (lsm != NULL && strcmp(lsm, scall->hl->lsm)) continue; - return hp->hook.getprocattr(p, name, value); - } + return scall->hl->hook.getprocattr(p, name, value); + })); return LSM_RET_DEFAULT(getprocattr); } int security_setprocattr(const char *lsm, const char *name, void *value, size_t size) { - struct security_hook_list *hp; + struct lsm_static_call *scall; - hlist_for_each_entry(hp, &security_hook_heads.setprocattr, list) { - if (lsm != NULL && strcmp(lsm, hp->lsm)) + security_for_each_hook(scall, setprocattr, ({ + if (lsm != NULL && strcmp(lsm, scall->hl->lsm)) continue; - return hp->hook.setprocattr(name, value, size); - } + return 
scall->hl->hook.setprocattr(name, value, size); + })); return LSM_RET_DEFAULT(setprocattr); } @@ -2178,18 +2241,18 @@ EXPORT_SYMBOL(security_ismaclabel); int security_secid_to_secctx(u32 secid, char **secdata, u32 *seclen) { - struct security_hook_list *hp; + struct lsm_static_call *scall; int rc; /* * Currently, only one LSM can implement secid_to_secctx (i.e this * LSM hook is not "stackable"). */ - hlist_for_each_entry(hp, &security_hook_heads.secid_to_secctx, list) { - rc = hp->hook.secid_to_secctx(secid, secdata, seclen); + security_for_each_hook(scall, secid_to_secctx, ({ + rc = scall->hl->hook.secid_to_secctx(secid, secdata, seclen); if (rc != LSM_RET_DEFAULT(secid_to_secctx)) return rc; - } + })); return LSM_RET_DEFAULT(secid_to_secctx); } @@ -2582,7 +2645,7 @@ int security_xfrm_state_pol_flow_match(struct xfrm_state *x, struct xfrm_policy *xp, const struct flowi_common *flic) { - struct security_hook_list *hp; + struct lsm_static_call *scall; int rc = LSM_RET_DEFAULT(xfrm_state_pol_flow_match); /* @@ -2594,11 +2657,10 @@ int security_xfrm_state_pol_flow_match(struct xfrm_state *x, * For speed optimization, we explicitly break the loop rather than * using the macro */ - hlist_for_each_entry(hp, &security_hook_heads.xfrm_state_pol_flow_match, - list) { - rc = hp->hook.xfrm_state_pol_flow_match(x, xp, flic); + security_for_each_hook(scall, xfrm_state_pol_flow_match, ({ + rc = scall->hl->hook.xfrm_state_pol_flow_match(x, xp, flic); break; - } + })); return rc; } From patchwork Fri Jan 20 00:08:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: KP Singh X-Patchwork-Id: 13109009 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4B327C678DC for ; Fri, 20 Jan 2023 00:09:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229454AbjATAJE (ORCPT ); Thu, 19 Jan 2023 19:09:04 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60790 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229471AbjATAJC (ORCPT ); Thu, 19 Jan 2023 19:09:02 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B5344A25B3; Thu, 19 Jan 2023 16:09:00 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 422AC61CBE; Fri, 20 Jan 2023 00:09:00 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 550FFC433EF; Fri, 20 Jan 2023 00:08:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1674173339; bh=Wkh3s2bYpotYVb4Ma65Ndhl8R6YDKnjNCrmI6zW6fDo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=c1IITasFA7h9gu3BzAPso5fg6gO6NKdGPi3CPHwLHauc3DC54AQPQ4NpPbffmnbcN F7fOpVyFxj8BoemDqCIkqQuP2XZ1MpUP5sFgVnoMm9H0m5e9GqXq+unepu9y/sHDTU Bss/GIQOp/EI01oOOmHFRVvVal6bBbvOrTUt/ZAoI6/3vU74I6f8kD2G9o04Ci2P6C 0lk5XVMWJ5rD5oWuWF1eKoe0ZtQz+GHFcjTjh4LgCwMbtpM82TAdswSuze1s8oU48R gzpywUB4oXmkxaDycj4qRv0JbkUnRfSZfQNuVueqZBmZ8Mfo8zeuSTEy3cydR/nnSA T50IHkyvB1sFA== From: KP Singh To: linux-security-module@vger.kernel.org, bpf@vger.kernel.org Cc: 
ast@kernel.org, daniel@iogearbox.net, jackmanb@google.com, renauld@google.com, paul@paul-moore.com, casey@schaufler-ca.com, song@kernel.org, revest@chromium.org, keescook@chromium.org, KP Singh Subject: [PATCH RESEND bpf-next 4/4] bpf: Only enable BPF LSM hooks when an LSM program is attached Date: Fri, 20 Jan 2023 01:08:18 +0100 Message-Id: <20230120000818.1324170-5-kpsingh@kernel.org> X-Mailer: git-send-email 2.39.0.246.g2a6d74b583-goog In-Reply-To: <20230120000818.1324170-1-kpsingh@kernel.org> References: <20230120000818.1324170-1-kpsingh@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net BPF LSM hooks have side-effects (even when a default value is returned), as some hooks end up behaving differently due to the very presence of the hook. The static keys guarding the BPF LSM hooks are disabled by default and enabled only when a BPF program is attached implementing the hook logic. This avoids the issue of the side-effects and also the minor overhead associated with the empty callback. security_file_ioctl: 0xffffffff814f0e90 <+0>: endbr64 0xffffffff814f0e94 <+4>: push %rbp 0xffffffff814f0e95 <+5>: push %r14 0xffffffff814f0e97 <+7>: push %rbx 0xffffffff814f0e98 <+8>: mov %rdx,%rbx 0xffffffff814f0e9b <+11>: mov %esi,%ebp 0xffffffff814f0e9d <+13>: mov %rdi,%r14 >>> 0xffffffff814f0ea0 <+16>: jmp 0xffffffff814f0eb1 Static key enabled for SELinux 0xffffffff814f0ea2 <+18>: xchg %ax,%ax Static key disabled for BPF. This gets patched to a jmp when a BPF LSM program is attached. 0xffffffff814f0ea4 <+20>: xor %eax,%eax 0xffffffff814f0ea6 <+22>: xchg %ax,%ax 0xffffffff814f0ea8 <+24>: pop %rbx 0xffffffff814f0ea9 <+25>: pop %r14 0xffffffff814f0eab <+27>: pop %rbp 0xffffffff814f0eac <+28>: jmp 0xffffffff81f767c4 <__x86_return_thunk> 0xffffffff814f0eb1 <+33>: endbr64 0xffffffff814f0eb5 <+37>: mov %r14,%rdi 0xffffffff814f0eb8 <+40>: mov %ebp,%esi 0xffffffff814f0eba <+42>: mov %rbx,%rdx 0xffffffff814f0ebd <+45>: call 0xffffffff814fe8e0 0xffffffff814f0ec2 <+50>: test %eax,%eax 0xffffffff814f0ec4 <+52>: jne 0xffffffff814f0ea8 0xffffffff814f0ec6 <+54>: jmp 0xffffffff814f0ea2 0xffffffff814f0ec8 <+56>: endbr64 0xffffffff814f0ecc <+60>: mov %r14,%rdi 0xffffffff814f0ecf <+63>: mov %ebp,%esi 0xffffffff814f0ed1 <+65>: mov %rbx,%rdx 0xffffffff814f0ed4 <+68>: call 0xffffffff8123b890 0xffffffff814f0ed9 <+73>: test %eax,%eax 0xffffffff814f0edb <+75>: jne 0xffffffff814f0ea8 0xffffffff814f0edd <+77>: jmp 0xffffffff814f0ea4 0xffffffff814f0edf <+79>: endbr64 0xffffffff814f0ee3 <+83>: mov %r14,%rdi 0xffffffff814f0ee6 <+86>: mov %ebp,%esi 0xffffffff814f0ee8 <+88>: mov %rbx,%rdx 0xffffffff814f0eeb <+91>: pop %rbx 0xffffffff814f0eec <+92>: pop %r14 0xffffffff814f0eee <+94>: pop %rbp 0xffffffff814f0eef <+95>: ret 0xffffffff814f0ef0 <+96>: int3 0xffffffff814f0ef1 <+97>: int3 0xffffffff814f0ef2 <+98>: int3 0xffffffff814f0ef3 <+99>: int3 Signed-off-by: KP Singh --- include/linux/bpf.h | 1 + include/linux/bpf_lsm.h | 1 + include/linux/lsm_hooks.h | 13 ++++++++++++- kernel/bpf/trampoline.c | 29 +++++++++++++++++++++++++++-- security/bpf/hooks.c | 26 +++++++++++++++++++++++++- security/security.c | 3 ++- 6 files changed, 68 insertions(+), 5 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index ae7771c7d750..4008f4a00851 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1049,6 +1049,7 @@ struct bpf_attach_target_info { long tgt_addr; const char *tgt_name; const struct btf_type *tgt_type; + bool is_lsm_target; }; #define 
BPF_DISPATCHER_MAX 48 /* Fits in 2048B */ diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h index 1de7ece5d36d..cce615af93d5 100644 --- a/include/linux/bpf_lsm.h +++ b/include/linux/bpf_lsm.h @@ -29,6 +29,7 @@ int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog, bool bpf_lsm_is_sleepable_hook(u32 btf_id); bool bpf_lsm_is_trusted(const struct bpf_prog *prog); +void bpf_lsm_toggle_hook(void *addr, bool value); static inline struct bpf_storage_blob *bpf_inode( const struct inode *inode) diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h index c82d15a4ef50..5e85d3340a07 100644 --- a/include/linux/lsm_hooks.h +++ b/include/linux/lsm_hooks.h @@ -1716,11 +1716,14 @@ struct lsm_static_calls_table { * @scalls: The beginning of the array of static calls assigned to this hook. * @hook: The callback for the hook. * @lsm: The name of the lsm that owns this hook. + * @default_state: The state of the LSM hook when initialized. If set to false, + * the static key guarding the hook will be set to disabled. */ struct security_hook_list { struct lsm_static_call *scalls; union security_list_options hook; const char *lsm; + bool default_state; } __randomize_layout; /* @@ -1751,7 +1754,15 @@ struct lsm_blob_sizes { #define LSM_HOOK_INIT(NAME, CALLBACK) \ { \ .scalls = static_calls_table.NAME, \ - .hook = { .NAME = CALLBACK } \ + .hook = { .NAME = CALLBACK }, \ + .default_state = true \ + } + +#define LSM_HOOK_INIT_DISABLED(NAME, CALLBACK) \ + { \ + .scalls = static_calls_table.NAME, \ + .hook = { .NAME = CALLBACK }, \ + .default_state = false \ } extern char *lsm_names; diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index d0ed7d6f5eec..9789ecf6f29c 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -14,6 +14,7 @@ #include #include #include +#include /* dummy _ops. The verifier will operate on target program's ops. 
*/ const struct bpf_verifier_ops bpf_extension_verifier_ops = { @@ -536,7 +537,7 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_tr { enum bpf_tramp_prog_type kind; struct bpf_tramp_link *link_exiting; - int err = 0; + int err = 0, num_lsm_progs = 0; int cnt = 0, i; kind = bpf_attach_type_to_tramp(link->link.prog); @@ -567,8 +568,14 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_tr continue; /* prog already linked */ return -EBUSY; + + if (link_exiting->link.prog->type == BPF_PROG_TYPE_LSM) + num_lsm_progs++; } + if (!num_lsm_progs && link->link.prog->type == BPF_PROG_TYPE_LSM) + bpf_lsm_toggle_hook(tr->func.addr, true); + hlist_add_head(&link->tramp_hlist, &tr->progs_hlist[kind]); tr->progs_cnt[kind]++; err = bpf_trampoline_update(tr, true /* lock_direct_mutex */); @@ -591,8 +598,10 @@ int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr) { + struct bpf_tramp_link *link_exiting; enum bpf_tramp_prog_type kind; - int err; + bool lsm_link_found = false; + int err, num_lsm_progs = 0; kind = bpf_attach_type_to_tramp(link->link.prog); if (kind == BPF_TRAMP_REPLACE) { @@ -602,8 +611,24 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_ tr->extension_prog = NULL; return err; } + + if (link->link.prog->type == BPF_PROG_TYPE_LSM) { + hlist_for_each_entry(link_exiting, &tr->progs_hlist[kind], + tramp_hlist) { + if (link_exiting->link.prog->type == BPF_PROG_TYPE_LSM) + num_lsm_progs++; + + if (link_exiting->link.prog == link->link.prog) + lsm_link_found = true; + } + } + hlist_del_init(&link->tramp_hlist); tr->progs_cnt[kind]--; + + if (lsm_link_found && num_lsm_progs == 1) + bpf_lsm_toggle_hook(tr->func.addr, false); + return bpf_trampoline_update(tr, true /* lock_direct_mutex */); } diff --git a/security/bpf/hooks.c b/security/bpf/hooks.c index e5971fa74fd7..a39799e9f648 100644 --- a/security/bpf/hooks.c +++ b/security/bpf/hooks.c @@ -8,7 +8,7 @@ static struct security_hook_list bpf_lsm_hooks[] __lsm_ro_after_init = { #define LSM_HOOK(RET, DEFAULT, NAME, ...) \ - LSM_HOOK_INIT(NAME, bpf_lsm_##NAME), + LSM_HOOK_INIT_DISABLED(NAME, bpf_lsm_##NAME), #include #undef LSM_HOOK LSM_HOOK_INIT(inode_free_security, bpf_inode_storage_free), @@ -32,3 +32,27 @@ DEFINE_LSM(bpf) = { .init = bpf_lsm_init, .blobs = &bpf_lsm_blob_sizes }; + +void bpf_lsm_toggle_hook(void *addr, bool value) +{ + struct security_hook_list *h; + struct lsm_static_call *scalls; + int i, j; + + for (i = 0; i < ARRAY_SIZE(bpf_lsm_hooks); i++) { + h = &bpf_lsm_hooks[i]; + scalls = h->scalls; + if (h->hook.lsm_callback == addr) + continue; + + for (j = 0; j < MAX_LSM_COUNT; j++) { + if (scalls[j].hl != h) + continue; + + if (value) + static_key_enable(scalls[j].enabled_key); + else + static_key_disable(scalls[j].enabled_key); + } + } +} diff --git a/security/security.c b/security/security.c index e54d5ba187d1..f74135349429 100644 --- a/security/security.c +++ b/security/security.c @@ -366,7 +366,8 @@ static void __init lsm_static_call_init(struct security_hook_list *hl) if (!scall->hl) { __static_call_update(scall->key, scall->trampoline, hl->hook.lsm_callback); scall->hl = hl; - static_key_enable(scall->enabled_key); + if (hl->default_state) + static_key_enable(scall->enabled_key); return; } scall++;