From patchwork Fri Jul 19 11:00:51 2024
X-Patchwork-Submitter: Xu Kuohai
X-Patchwork-Id: 13737188
From: Xu Kuohai
To: bpf@vger.kernel.org, netdev@vger.kernel.org, linux-security-module@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Eduard Zingerman,
    Shung-Hsi Yu, Yonghong Song, KP Singh, Roberto Sassu, Matt Bobrowski,
    Yafang Shao, Ilya Leoshkevich, "Jose E. Marchesi",
    James Morris, Kees Cook, Brendan Jackman, Florent Revest
Subject: [PATCH bpf-next v2 1/9] bpf, lsm: Add disabled BPF LSM hook list
Date: Fri, 19 Jul 2024 19:00:51 +0800
Message-Id: <20240719110059.797546-2-xukuohai@huaweicloud.com>
In-Reply-To: <20240719110059.797546-1-xukuohai@huaweicloud.com>
References: <20240719110059.797546-1-xukuohai@huaweicloud.com>

From: Xu Kuohai

Add a disabled hooks list for BPF LSM. Progs attached to the listed
hooks are rejected by the verifier.

Suggested-by: KP Singh
Signed-off-by: Xu Kuohai
---
 kernel/bpf/bpf_lsm.c | 31 +++++++++++++++++++++++++++++--
 1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index 08a338e1f231..1f596ad6257c 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -36,6 +36,24 @@ BTF_SET_START(bpf_lsm_hooks)
 #undef LSM_HOOK
 BTF_SET_END(bpf_lsm_hooks)
 
+BTF_SET_START(bpf_lsm_disabled_hooks)
+BTF_ID(func, bpf_lsm_vm_enough_memory)
+BTF_ID(func, bpf_lsm_inode_need_killpriv)
+BTF_ID(func, bpf_lsm_inode_getsecurity)
+BTF_ID(func, bpf_lsm_inode_listsecurity)
+BTF_ID(func, bpf_lsm_inode_copy_up_xattr)
+BTF_ID(func, bpf_lsm_getselfattr)
+BTF_ID(func, bpf_lsm_getprocattr)
+BTF_ID(func, bpf_lsm_setprocattr)
+#ifdef CONFIG_KEYS
+BTF_ID(func, bpf_lsm_key_getsecurity)
+#endif
+#ifdef CONFIG_AUDIT
+BTF_ID(func, bpf_lsm_audit_rule_match)
+#endif
+BTF_ID(func, bpf_lsm_ismaclabel)
+BTF_SET_END(bpf_lsm_disabled_hooks)
+
 /* List of LSM hooks that should operate on 'current' cgroup regardless
  * of function signature.
 */
@@ -97,15 +115,24 @@ void bpf_lsm_find_cgroup_shim(const struct bpf_prog *prog,
 int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
                         const struct bpf_prog *prog)
 {
+        u32 btf_id = prog->aux->attach_btf_id;
+        const char *func_name = prog->aux->attach_func_name;
+
         if (!prog->gpl_compatible) {
                 bpf_log(vlog,
                         "LSM programs must have a GPL compatible license\n");
                 return -EINVAL;
         }
 
-        if (!btf_id_set_contains(&bpf_lsm_hooks, prog->aux->attach_btf_id)) {
+        if (btf_id_set_contains(&bpf_lsm_disabled_hooks, btf_id)) {
+                bpf_log(vlog, "attach_btf_id %u points to disabled hook %s\n",
+                        btf_id, func_name);
+                return -EINVAL;
+        }
+
+        if (!btf_id_set_contains(&bpf_lsm_hooks, btf_id)) {
                 bpf_log(vlog, "attach_btf_id %u points to wrong type name %s\n",
-                        prog->aux->attach_btf_id, prog->aux->attach_func_name);
+                        btf_id, func_name);
                 return -EINVAL;
         }
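For illustration of what the new list changes in practice: attaching to any
hook in bpf_lsm_disabled_hooks now fails at load time with the new "points to
disabled hook" message. A minimal sketch (not part of the series; the hook
signature is taken from the LSM hook definitions and may need adjusting) of a
program that would now be rejected:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    char _license[] SEC("license") = "GPL";

    /* getprocattr is on the new disabled list, so bpf_lsm_verify_prog()
     * is expected to reject this program with:
     *   "attach_btf_id %u points to disabled hook bpf_lsm_getprocattr"
     */
    SEC("lsm/getprocattr")
    int BPF_PROG(getprocattr_hook, struct task_struct *p, const char *name,
                 char **value)
    {
            return 0;
    }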
From patchwork Fri Jul 19 11:00:52 2024
X-Patchwork-Id: 13737194
From: Xu Kuohai
Subject: [PATCH bpf-next v2 2/9] bpf, lsm: Add check for BPF LSM return value
Date: Fri, 19 Jul 2024 19:00:52 +0800
Message-Id: <20240719110059.797546-3-xukuohai@huaweicloud.com>

From: Xu Kuohai

A bpf prog attached to the file_alloc_security hook that returns a
positive number makes the kernel panic. This happens because the file
system cannot filter out the positive return value with IS_ERR and
misinterprets it as a file pointer.

Given that the file_alloc_security hook never returned a positive number
before the introduction of BPF LSM, and that other BPF LSM hooks may
encounter similar issues, this patch adds an LSM return value check to
the verifier to ensure no unexpected value is returned.

Fixes: 520b7aa00d8c ("bpf: lsm: Initialize the BPF LSM hooks")
Reported-by: Xin Liu
Signed-off-by: Xu Kuohai
Acked-by: Eduard Zingerman
---
 include/linux/bpf.h     |  1 +
 include/linux/bpf_lsm.h |  8 ++++++
 kernel/bpf/bpf_lsm.c    | 34 ++++++++++++++++++++++-
 kernel/bpf/btf.c        |  5 +++-
 kernel/bpf/verifier.c   | 60 ++++++++++++++++++++++++++++++++++-------
 5 files changed, 97 insertions(+), 11 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4f1d4a97b9d1..d255201035c4 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -927,6 +927,7 @@ struct bpf_insn_access_aux {
                 };
         };
         struct bpf_verifier_log *log; /* for verbose logs */
+        bool is_retval; /* is accessing function return value ?
*/ }; static inline void diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h index 1de7ece5d36d..aefcd6564251 100644 --- a/include/linux/bpf_lsm.h +++ b/include/linux/bpf_lsm.h @@ -9,6 +9,7 @@ #include #include +#include #include #ifdef CONFIG_BPF_LSM @@ -45,6 +46,8 @@ void bpf_inode_storage_free(struct inode *inode); void bpf_lsm_find_cgroup_shim(const struct bpf_prog *prog, bpf_func_t *bpf_func); +int bpf_lsm_get_retval_range(const struct bpf_prog *prog, + struct bpf_retval_range *range); #else /* !CONFIG_BPF_LSM */ static inline bool bpf_lsm_is_sleepable_hook(u32 btf_id) @@ -78,6 +81,11 @@ static inline void bpf_lsm_find_cgroup_shim(const struct bpf_prog *prog, { } +static inline int bpf_lsm_get_retval_range(const struct bpf_prog *prog, + struct bpf_retval_range *range) +{ + return -EOPNOTSUPP; +} #endif /* CONFIG_BPF_LSM */ #endif /* _LINUX_BPF_LSM_H */ diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c index 1f596ad6257c..6292ac5f9bd1 100644 --- a/kernel/bpf/bpf_lsm.c +++ b/kernel/bpf/bpf_lsm.c @@ -11,7 +11,6 @@ #include #include #include -#include #include #include #include @@ -417,3 +416,36 @@ const struct bpf_verifier_ops lsm_verifier_ops = { .get_func_proto = bpf_lsm_func_proto, .is_valid_access = btf_ctx_access, }; + +/* hooks return 0 or 1 */ +BTF_SET_START(bool_lsm_hooks) +#ifdef CONFIG_SECURITY_NETWORK_XFRM +BTF_ID(func, bpf_lsm_xfrm_state_pol_flow_match) +#endif +#ifdef CONFIG_AUDIT +BTF_ID(func, bpf_lsm_audit_rule_known) +#endif +BTF_ID(func, bpf_lsm_inode_xattr_skipcap) +BTF_SET_END(bool_lsm_hooks) + +int bpf_lsm_get_retval_range(const struct bpf_prog *prog, + struct bpf_retval_range *retval_range) +{ + /* no return value range for void hooks */ + if (!prog->aux->attach_func_proto->type) + return -EINVAL; + + if (btf_id_set_contains(&bool_lsm_hooks, prog->aux->attach_btf_id)) { + retval_range->minval = 0; + retval_range->maxval = 1; + } else { + /* All other available LSM hooks, except task_prctl, return 0 + * on success and negative error code on failure. + * To keep things simple, we only allow bpf progs to return 0 + * or negative errno for task_prctl too. 
+ */ + retval_range->minval = -MAX_ERRNO; + retval_range->maxval = 0; + } + return 0; +} diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 520f49f422fe..95426d5b634e 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6416,8 +6416,11 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type, if (arg == nr_args) { switch (prog->expected_attach_type) { - case BPF_LSM_CGROUP: case BPF_LSM_MAC: + /* mark we are accessing the return value */ + info->is_retval = true; + fallthrough; + case BPF_LSM_CGROUP: case BPF_TRACE_FEXIT: /* When LSM programs are attached to void LSM hooks * they use FEXIT trampolines and when attached to diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 8da132a1ef28..fefa1d5d2faa 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -2334,6 +2334,25 @@ static void mark_reg_unknown(struct bpf_verifier_env *env, __mark_reg_unknown(env, regs + regno); } +static int __mark_reg_s32_range(struct bpf_verifier_env *env, + struct bpf_reg_state *regs, + u32 regno, + s32 s32_min, + s32 s32_max) +{ + struct bpf_reg_state *reg = regs + regno; + + reg->s32_min_value = max_t(s32, reg->s32_min_value, s32_min); + reg->s32_max_value = min_t(s32, reg->s32_max_value, s32_max); + + reg->smin_value = max_t(s64, reg->smin_value, s32_min); + reg->smax_value = min_t(s64, reg->smax_value, s32_max); + + reg_bounds_sync(reg); + + return reg_bounds_sanity_check(env, reg, "s32_range"); +} + static void __mark_reg_not_init(const struct bpf_verifier_env *env, struct bpf_reg_state *reg) { @@ -5587,11 +5606,12 @@ static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off, /* check access to 'struct bpf_context' fields. Supports fixed offsets only */ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int off, int size, enum bpf_access_type t, enum bpf_reg_type *reg_type, - struct btf **btf, u32 *btf_id) + struct btf **btf, u32 *btf_id, bool *is_retval) { struct bpf_insn_access_aux info = { .reg_type = *reg_type, .log = &env->log, + .is_retval = false, }; if (env->ops->is_valid_access && @@ -5604,6 +5624,7 @@ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int off, * type of narrower access. 
*/ *reg_type = info.reg_type; + *is_retval = info.is_retval; if (base_type(*reg_type) == PTR_TO_BTF_ID) { *btf = info.btf; @@ -6772,6 +6793,17 @@ static int check_stack_access_within_bounds( return grow_stack_state(env, state, -min_off /* size */); } +static bool get_func_retval_range(struct bpf_prog *prog, + struct bpf_retval_range *range) +{ + if (prog->type == BPF_PROG_TYPE_LSM && + prog->expected_attach_type == BPF_LSM_MAC && + !bpf_lsm_get_retval_range(prog, range)) { + return true; + } + return false; +} + /* check whether memory at (regno + off) is accessible for t = (read | write) * if t==write, value_regno is a register which value is stored into memory * if t==read, value_regno is a register which will receive the value from memory @@ -6876,6 +6908,8 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn if (!err && value_regno >= 0 && (t == BPF_READ || rdonly_mem)) mark_reg_unknown(env, regs, value_regno); } else if (reg->type == PTR_TO_CTX) { + bool is_retval = false; + struct bpf_retval_range range; enum bpf_reg_type reg_type = SCALAR_VALUE; struct btf *btf = NULL; u32 btf_id = 0; @@ -6891,7 +6925,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn return err; err = check_ctx_access(env, insn_idx, off, size, t, ®_type, &btf, - &btf_id); + &btf_id, &is_retval); if (err) verbose_linfo(env, insn_idx, "; "); if (!err && t == BPF_READ && value_regno >= 0) { @@ -6900,7 +6934,14 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn * case, we know the offset is zero. */ if (reg_type == SCALAR_VALUE) { - mark_reg_unknown(env, regs, value_regno); + if (is_retval && get_func_retval_range(env->prog, &range)) { + err = __mark_reg_s32_range(env, regs, value_regno, + range.minval, range.maxval); + if (err) + return err; + } else { + mark_reg_unknown(env, regs, value_regno); + } } else { mark_reg_known_zero(env, regs, value_regno); @@ -15674,12 +15715,13 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char case BPF_PROG_TYPE_LSM: if (env->prog->expected_attach_type != BPF_LSM_CGROUP) { - /* Regular BPF_PROG_TYPE_LSM programs can return - * any value. - */ - return 0; - } - if (!env->prog->aux->attach_func_proto->type) { + /* no range found, any return value is allowed */ + if (!get_func_retval_range(env->prog, &range)) + return 0; + /* no restricted range, any return value is allowed */ + if (range.minval == S32_MIN && range.maxval == S32_MAX) + return 0; + } else if (!env->prog->aux->attach_func_proto->type) { /* Make sure programs that attach to void * hooks don't try to modify return value. 
 */
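To make the problem being fixed concrete: the panic came from an LSM program
returning a positive value out of file_alloc_security, which the caller then
treated as a file pointer. A minimal sketch of such a program (hypothetical,
not the original reproducer) that the verifier now rejects at load time:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    char _license[] SEC("license") = "GPL";

    SEC("lsm/file_alloc_security")
    int BPF_PROG(bad_retval, struct file *file)
    {
            /* file_alloc_security may only return 0 or a negative errno.
             * A positive value used to escape IS_ERR() filtering in the
             * caller; with this patch the verifier enforces the
             * [-MAX_ERRNO, 0] range and refuses to load the program.
             */
            return 1;
    }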
From patchwork Fri Jul 19 11:00:53 2024
X-Patchwork-Id: 13737189
From: Xu Kuohai
Subject: [PATCH bpf-next v2 3/9] bpf: Prevent tail call between progs attached to different hooks
Date: Fri, 19 Jul 2024 19:00:53 +0800
Message-Id: <20240719110059.797546-4-xukuohai@huaweicloud.com>

From: Xu Kuohai

bpf progs can be attached to kernel functions, and the attached functions
can take different parameters or return different return values. If a
prog attached to one kernel function tail calls a prog attached to
another kernel function, the ctx access or return value verification
could be bypassed.

For example, suppose prog1 is attached to func1, which takes only one
parameter, and prog2 is attached to func2, which takes two parameters.
Since the verifier assumes the bpf ctx passed to prog2 is constructed
based on func2's prototype, it allows prog2 to access the second
parameter from the bpf ctx passed to it. The problem is that the
verifier does not prevent prog1 from passing its bpf ctx to prog2 via
tail call. In this case, the bpf ctx passed to prog2 is constructed from
func1 instead of func2, that is, the assumption for ctx access
verification is bypassed.

Another example: suppose BPF LSM prog1 is attached to hook
file_alloc_security, and BPF LSM prog2 is attached to hook
bpf_lsm_audit_rule_known. The verifier knows the return value rules for
these two hooks, e.g. it is legal for bpf_lsm_audit_rule_known to return
the positive number 1, and it is illegal for file_alloc_security to
return a positive number. So the verifier allows prog2 to return the
positive number 1, but does not allow prog1 to return a positive number.
The problem is that the verifier does not prevent prog1 from calling
prog2 via tail call. In this case, prog2's return value 1 will be used
as the return value for prog1's hook file_alloc_security. That is, the
return value rule is bypassed.

This patch adds a restriction on tail calls to prevent such bypasses.
Signed-off-by: Xu Kuohai
---
 include/linux/bpf.h |  1 +
 kernel/bpf/core.c   | 21 ++++++++++++++++++---
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index d255201035c4..bf71edb260cd 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -294,6 +294,7 @@ struct bpf_map {
          * same prog type, JITed flag and xdp_has_frags flag.
          */
         struct {
+                const struct btf_type *attach_func_proto;
                 spinlock_t lock;
                 enum bpf_prog_type type;
                 bool jited;
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 7ee62e38faf0..4e07cc057d6f 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2302,6 +2302,7 @@ bool bpf_prog_map_compatible(struct bpf_map *map,
 {
         enum bpf_prog_type prog_type = resolve_prog_type(fp);
         bool ret;
+        struct bpf_prog_aux *aux = fp->aux;
 
         if (fp->kprobe_override)
                 return false;
@@ -2311,7 +2312,7 @@ bool bpf_prog_map_compatible(struct bpf_map *map,
          * in the case of devmap and cpumap). Until device checks
          * are implemented, prohibit adding dev-bound programs to program maps.
          */
-        if (bpf_prog_is_dev_bound(fp->aux))
+        if (bpf_prog_is_dev_bound(aux))
                 return false;
 
         spin_lock(&map->owner.lock);
@@ -2321,12 +2322,26 @@ bool bpf_prog_map_compatible(struct bpf_map *map,
                  */
                 map->owner.type  = prog_type;
                 map->owner.jited = fp->jited;
-                map->owner.xdp_has_frags = fp->aux->xdp_has_frags;
+                map->owner.xdp_has_frags = aux->xdp_has_frags;
+                map->owner.attach_func_proto = aux->attach_func_proto;
                 ret = true;
         } else {
                 ret = map->owner.type  == prog_type &&
                       map->owner.jited == fp->jited &&
-                      map->owner.xdp_has_frags == fp->aux->xdp_has_frags;
+                      map->owner.xdp_has_frags == aux->xdp_has_frags;
+                if (ret &&
+                    map->owner.attach_func_proto != aux->attach_func_proto) {
+                        switch (prog_type) {
+                        case BPF_PROG_TYPE_TRACING:
+                        case BPF_PROG_TYPE_LSM:
+                        case BPF_PROG_TYPE_EXT:
+                        case BPF_PROG_TYPE_STRUCT_OPS:
+                                ret = false;
+                                break;
+                        default:
+                                break;
+                        }
+                }
         }
         spin_unlock(&map->owner.lock);
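A sketch of the second bypass described above, using the two LSM hooks named
in the commit message (illustrative only, not a selftest from this series):
both programs are of type BPF_PROG_TYPE_LSM, but they attach to hooks with
different return-value rules, so after this patch they can no longer share a
prog_array.

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    char _license[] SEC("license") = "GPL";

    struct {
            __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
            __uint(max_entries, 1);
            __uint(key_size, sizeof(__u32));
            __uint(value_size, sizeof(__u32));
    } jmp_table SEC(".maps");

    /* Bool-style hook: returning 1 is legal here. */
    SEC("lsm/audit_rule_known")
    int BPF_PROG(prog2, struct audit_krule *krule)
    {
            return 1;
    }

    /* Must return 0 or a negative errno. Tail calling prog2 would smuggle
     * its "1" out of this hook; bpf_prog_map_compatible() now keeps prog2
     * out of jmp_table because the attach_func_proto of the two programs
     * differ.
     */
    SEC("lsm/file_alloc_security")
    int BPF_PROG(prog1, struct file *file)
    {
            bpf_tail_call(ctx, &jmp_table, 0);
            return 0;
    }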
From patchwork Fri Jul 19 11:00:54 2024
X-Patchwork-Id: 13737195
From: Xu Kuohai
Subject: [PATCH bpf-next v2 4/9] bpf: Fix compare error in function retval_range_within
Date: Fri, 19 Jul 2024 19:00:54 +0800
Message-Id: <20240719110059.797546-5-xukuohai@huaweicloud.com>

From: Xu Kuohai

After checking the lsm hook return range in the verifier, the test case
"test_progs -t test_lsm" failed, and the failure log says:

  libbpf: prog 'test_int_hook': BPF program load failed: Invalid argument
  libbpf: prog 'test_int_hook': -- BEGIN PROG LOAD LOG --
  0: R1=ctx() R10=fp0
  ; int BPF_PROG(test_int_hook, struct vm_area_struct *vma, @ lsm.c:89
  0: (79) r0 = *(u64 *)(r1 +24) ; R0_w=scalar(smin=smin32=-4095,smax=smax32=0) R1=ctx()
  [...]
  24: (b4) w0 = -1 ; R0_w=0xffffffff
  ; int BPF_PROG(test_int_hook, struct vm_area_struct *vma, @ lsm.c:89
  25: (95) exit
  At program exit the register R0 has smin=4294967295 smax=4294967295 should have been in [-4095, 0]

It can be seen that the instruction "w0 = -1" zero extended -1 into the
64-bit register r0, setting both the smin and smax values of r0 to
4294967295. This resulted in a false reject when r0 was checked against
the range [-4095, 0].
Given bpf lsm does not return 64-bit values, this patch fixes it by changing the compare between r0 and return range from 64-bit operation to 32-bit operation for bpf lsm. Fixes: 8fa4ecd49b81 ("bpf: enforce exact retval range on subprog/callback exit") Signed-off-by: Xu Kuohai Acked-by: Shung-Hsi Yu --- kernel/bpf/verifier.c | 16 +++++++++++----- 1 file changed, 11 insertions(+), 5 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index fefa1d5d2faa..78104bd85274 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -9964,9 +9964,13 @@ static bool in_rbtree_lock_required_cb(struct bpf_verifier_env *env) return is_rbtree_lock_required_kfunc(kfunc_btf_id); } -static bool retval_range_within(struct bpf_retval_range range, const struct bpf_reg_state *reg) +static bool retval_range_within(struct bpf_retval_range range, const struct bpf_reg_state *reg, + bool return_32bit) { - return range.minval <= reg->smin_value && reg->smax_value <= range.maxval; + if (return_32bit) + return range.minval <= reg->s32_min_value && reg->s32_max_value <= range.maxval; + else + return range.minval <= reg->smin_value && reg->smax_value <= range.maxval; } static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx) @@ -10003,8 +10007,8 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx) if (err) return err; - /* enforce R0 return value range */ - if (!retval_range_within(callee->callback_ret_range, r0)) { + /* enforce R0 return value range, and bpf_callback_t returns 64bit */ + if (!retval_range_within(callee->callback_ret_range, r0, false)) { verbose_invalid_scalar(env, r0, callee->callback_ret_range, "At callback return", "R0"); return -EINVAL; @@ -15610,6 +15614,7 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char int err; struct bpf_func_state *frame = env->cur_state->frame[0]; const bool is_subprog = frame->subprogno; + bool return_32bit = false; /* LSM and struct_ops func-ptr's return type could be "void" */ if (!is_subprog || frame->in_exception_callback_fn) { @@ -15721,6 +15726,7 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char /* no restricted range, any return value is allowed */ if (range.minval == S32_MIN && range.maxval == S32_MAX) return 0; + return_32bit = true; } else if (!env->prog->aux->attach_func_proto->type) { /* Make sure programs that attach to void * hooks don't try to modify return value. 
@@ -15751,7 +15757,7 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
         if (err)
                 return err;
 
-        if (!retval_range_within(range, reg)) {
+        if (!retval_range_within(range, reg, return_32bit)) {
                 verbose_invalid_scalar(env, reg, range, exit_ctx, reg_name);
                 if (!is_subprog &&
                     prog->expected_attach_type == BPF_LSM_CGROUP &&
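The underlying arithmetic can be reproduced in plain C (a userspace sketch,
not part of the patch): a 32-bit move of -1 is zero-extended into the 64-bit
register, so its 64-bit signed value is 4294967295 while its 32-bit signed
value is still -1, which is why LSM return values have to be compared using
the 32-bit bounds.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            /* Mirror of "w0 = -1": a 32-bit write is zero-extended into
             * the 64-bit register.
             */
            uint64_t r0 = (uint32_t)-1;

            int64_t s64_view = (int64_t)r0; /* 4294967295, outside [-4095, 0] */
            int32_t s32_view = (int32_t)r0; /* -1, inside [-4095, 0] */

            printf("64-bit signed view: %lld\n", (long long)s64_view);
            printf("32-bit signed view: %d\n", s32_view);
            return 0;
    }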
From patchwork Fri Jul 19 11:00:55 2024
X-Patchwork-Id: 13737193
From: Xu Kuohai
Subject: [PATCH bpf-next v2 5/9] bpf, verifier: improve signed ranges inference for BPF_AND
Date: Fri, 19 Jul 2024 19:00:55 +0800
Message-Id: <20240719110059.797546-6-xukuohai@huaweicloud.com>

From: Shung-Hsi Yu

This commit improves the BPF verifier's inference of signed ranges by
learning new signed ranges directly from the signed ranges of the
operands, by doing

    dst_reg->smin_value = negative_bit_floor(min(dst_reg->smin_value,
                                                 src_reg->smin_value))
    dst_reg->smax_value = max(dst_reg->smax_value, src_reg->smax_value)

See below for the complete explanation. The improvement is needed to
prevent verifier rejection of BPF programs like the one presented by
Xu Kuohai:

    SEC("lsm/bpf_map")
    int BPF_PROG(check_access, struct bpf_map *map, fmode_t fmode)
    {
            if (map != (struct bpf_map *)&data_input)
                    return 0;

            if (fmode & FMODE_WRITE)
                    return -EACCES;

            return 0;
    }

where the relevant verifier log upon rejection is:

    ...
    5: (79) r0 = *(u64 *)(r1 +8) ; R0_w=scalar() R1=ctx()
    ; if (fmode & FMODE_WRITE) @ test_libbpf_get_fd_by_id_opts.c:32
    6: (67) r0 <<= 62 ; R0_w=scalar(smax=0x4000000000000000,umax=0xc000000000000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xc000000000000000))
    7: (c7) r0 s>>= 63 ; R0_w=scalar(smin=smin32=-1,smax=smax32=0)
    ; @ test_libbpf_get_fd_by_id_opts.c:0
    8: (57) r0 &= -13 ; R0_w=scalar(smax=0x7ffffffffffffff3,umax=0xfffffffffffffff3,smax32=0x7ffffff3,umax32=0xfffffff3,var_off=(0x0; 0xfffffffffffffff3))
    9: (95) exit

This sequence of instructions comes from Clang's transformation located
in the DAGCombiner::SimplifySelectCC() method, which combined the
"fmode & FMODE_WRITE" check with the return statement without needing
BPF_JMP at all. See Eduard's comment for more detail of this
transformation[0].
While the verifier can correctly infer that the value of r0 is in a
tight [-1, 0] range after the instruction "r0 s>>= 63", it was not able
to come up with a tight range for "r0 &= -13" (which would be [-13, 0]),
and instead inferred a very loose range:

    r0 s>>= 63; R0_w=scalar(smin=smin32=-1,smax=smax32=0)
    r0 &= -13 ; R0_w=scalar(smax=0x7ffffffffffffff3,umax=0xfffffffffffffff3,smax32=0x7ffffff3,umax32=0xfffffff3,var_off=(0x0; 0xfffffffffffffff3))

The reason is that scalar*_min_max_and() mainly relies on tnum for
inferring the bounds of the register after BPF_AND; however [-1, 0]
cannot be tracked precisely with tnum, and is effectively turned into
[0, -1] (i.e. tnum_unknown). So upon BPF_AND the resulting tnum is
equivalent to

    dst_reg->var_off = tnum_and(tnum_unknown, tnum_const(-13))

and from there the BPF verifier was only able to infer smin=S64_MIN and
smax=0x7ffffffffffffff3, which is outside of the expected [-4095, 0]
range for return values, and thus the program was rejected.

To allow verification of such an instruction pattern, update
scalar*_min_max_and() to infer signed ranges directly from the signed
ranges of the operands.

For BPF_AND, the resulting value always gains more unset '0' bits, thus
it only moves towards 0x0000000000000000. The difficulty lies in how to
deal with signs. While a non-negative (positive or zero) value simply
grows smaller, a negative number can grow smaller, but may also
"underflow" and become a larger value.

To better address this situation we split the signed ranges into the
negative range and non-negative range cases, ignoring the mixed sign
cases for now, and only consider how to calculate smax_value.

Since negative range & negative range preserves the sign bit, we know
the result is still a negative value, thus it only moves towards S64_MIN
but never underflows, so a safe bet is to use the value in the ranges
that is closest to 0, thus "max(dst_reg->smax_value, src->smax_value)".
For negative range & positive range the sign bit is always cleared, thus
we know the resulting value is non-negative, and only moves towards 0,
so a safe bet is to use the smax_value of the non-negative range. Last
but not least, non-negative range & non-negative range is still a
non-negative value, and only moves towards 0; however, same as the
unsigned range case, the maximum is actually capped by the lesser of the
two, and thus min(dst_reg->smax_value, src_reg->smax_value).

Listing out the above reasoning as a table (dst_reg abbreviated as dst,
src_reg abbreviated as src, smax_value abbreviated as smax) we get:

                        |                  src_reg smax = ?
                        +---------------------------+---------------------------
                        | negative                  | non-negative
---------+--------------+---------------------------+---------------------------
         | negative     | max(dst->smax, src->smax) | src->smax
 dst_reg +--------------+---------------------------+---------------------------
         | non-negative | dst->smax                 | min(dst->smax, src->smax)

However this is quite complicated, and could use some simplification
given the following observations:

    max(dst_reg->smax_value, src_reg->smax_value) >= src_reg->smax_value
    max(dst_reg->smax_value, src_reg->smax_value) >= dst_reg->smax_value
    max(dst_reg->smax_value, src_reg->smax_value) >= min(dst_reg->smax_value, src_reg->smax_value)

So we could substitute the cells in the table above all with max(...),
and arrive at:
                        |                  src_reg smax' = ?
                        +---------------------------+---------------------------
     smax'(r) >= smax(r)| negative                  | non-negative
---------+--------------+---------------------------+---------------------------
         | negative     | max(dst->smax, src->smax) | max(dst->smax, src->smax)
 dst_reg +--------------+---------------------------+---------------------------
         | non-negative | max(dst->smax, src->smax) | max(dst->smax, src->smax)

Meaning that simply using

    max(dst_reg->smax_value, src_reg->smax_value)

to calculate the resulting smax_value would work across all sign
combinations.

For smin_value, we know that both non-negative range & non-negative
range and negative range & non-negative range result in a non-negative
value, so an easy guess is to use the minimum value of the non-negative
range, thus 0.

                        |                  src_reg smin = ?
                        +----------------------------+---------------------------
                        | negative                   | non-negative
---------+--------------+----------------------------+---------------------------
         | negative     | ?                          | 0
 dst_reg +--------------+----------------------------+---------------------------
         | non-negative | 0                          | 0

That leaves the negative range & negative range case to be considered.
We know that negative range & negative range always yields a negative
value, so a preliminary guess would be S64_MIN. However, that guess is
too imprecise to help with the r0 <<= 62, r0 s>>= 63, r0 &= -13 pattern
we're trying to deal with here.

Further improvement comes with the observation that for negative range &
negative range, the smallest possible value must be one that has the
longest _common_ most-significant set '1' bits sequence, thus we can use
min(dst_reg->smin_value, src->smin_value) as the starting point, as the
smaller value will be the one with the shorter most-significant set '1'
bits sequence. But that alone is not enough, as we do not know whether
the rest of the bits would be set, so the safest guess is one that
clears all bits after the most-significant set '1' bits sequence,
something akin to bit_floor(), but for rounding to a negative power-of-2
instead.

    negative_bit_floor(0xffff000000000003) == 0xffff000000000000
    negative_bit_floor(0xfffffb0000000000) == 0xfffff80000000000
    negative_bit_floor(0xffffffffffffffff) == 0xffffffffffffffff /* -1 remains unchanged */
    negative_bit_floor(0x0000fb0000000000) == 0x0000000000000000 /* non-negative values become 0 */

With negative range & negative range solved, we now have:

                        |                  src_reg smin = ?
                        +----------------------------+---------------------------
                        | negative                   | non-negative
---------+--------------+----------------------------+---------------------------
         | negative     | negative_bit_floor(        | 0
         |              |  min(dst->smin, src->smin))|
 dst_reg +--------------+----------------------------+---------------------------
         | non-negative | 0                          | 0

This can also be simplified with some observations (quadrants refer to
the cells in the table above; numbering starts from the top-right cell
-- I -- and goes counter-clockwise):

    A. min(dst_reg->smin_value, src_reg->smin_value) < 0
       /* dst negative & src non-negative, quadrant I */
    B. min(dst_reg->smin_value, src_reg->smin_value) < 0
       /* dst non-negative & src negative, quadrant III */
    C. min(dst_reg->smin_value, src_reg->smin_value) >= 0
       /* dst non-negative & src non-negative, quadrant IV */
    D. negative_bit_floor(x) s<= x
       /* for any x, negative_bit_floor(x) is always smaller than (or
          equal to) the original value */
    E. negative_bit_floor(y) == 0
       /* when y is non-negative, i.e.
          y >= 0: since the most-significant bit is unset, every bit of
          the result is unset */

Thus we can derive

    negative_bit_floor(min(dst_reg->smin_value, src_reg->smin_value)) < 0
        /* combine A and D, where dst negative & src non-negative */
    negative_bit_floor(min(dst_reg->smin_value, src_reg->smin_value)) < 0
        /* combine B and D, where dst non-negative & src negative */
    negative_bit_floor(min(dst_reg->smin_value, src_reg->smin_value)) == 0
        /* combine C and E, where dst non-negative & src non-negative */

Substituting the quadrant I, III, and IV cells in the table above all
with negative_bit_floor(min(...)), we arrive at:

                        |                  src_reg smin' = ?
                        +----------------------------+----------------------------
     smin'(r) <= smin(r)| negative                   | non-negative
---------+--------------+----------------------------+----------------------------
         | negative     | negative_bit_floor(        | negative_bit_floor(
         |              |  min(dst->smin, src->smin))|  min(dst->smin, src->smin))
 dst_reg +--------------+----------------------------+----------------------------
         | non-negative | negative_bit_floor(        | negative_bit_floor(
         |              |  min(dst->smin, src->smin))|  min(dst->smin, src->smin))

Meaning that simply using

    negative_bit_floor(min(dst_reg->smin_value, src_reg->smin_value))

to calculate the resulting smin_value would work across all sign
combinations.

Together these allow the BPF verifier to infer the signed range of the
result of a BPF_AND operation using the signed ranges of its operands,
and to use that information:

    r0 s>>= 63; R0_w=scalar(smin=smin32=-1,smax=smax32=0)
    r0 &= -13 ; R0_w=scalar(smin=smin32=-16,smax=smax32=0,umax=0xfffffffffffffff3,umax32=0xfffffff3,var_off=(0x0; 0xfffffffffffffff3))

[0] https://lore.kernel.org/bpf/e62e2971301ca7f2e9eb74fc500c520285cad8f5.camel@gmail.com/

Link: https://lore.kernel.org/bpf/phcqmyzeqrsfzy7sb4rwpluc37hxyz7rcajk2bqw6cjk2x7rt5@m2hl6enudv7d/
Cc: Eduard Zingerman
Signed-off-by: Shung-Hsi Yu
Acked-by: Xu Kuohai
Acked-by: Eduard Zingerman
---
 kernel/bpf/verifier.c | 63 +++++++++++++++++++++++++++++--------------
 1 file changed, 43 insertions(+), 20 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 78104bd85274..d3f3a464a871 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13511,6 +13511,40 @@ static void scalar_min_max_mul(struct bpf_reg_state *dst_reg,
         }
 }
 
+/* Clears all trailing bits after the most significant unset bit.
+ *
+ * Used for estimating the minimum possible value after BPF_AND. This
+ * effectively rounds a negative value down to a negative power-of-2 value
+ * (except for -1, which just return -1) and returning 0 for non-negative
+ * values. E.g. negative32_bit_floor(0xff0ff0ff) == 0xff000000.
+ */ +static inline s32 negative32_bit_floor(s32 v) +{ + u8 bits = fls(~v); /* find most-significant unset bit */ + u32 delta; + + /* special case, needed because 1UL << 32 is undefined */ + if (bits > 31) + return 0; + + delta = (1UL << bits) - 1; + return ~delta; +} + +/* Same as negative32_bit_floor() above, but for 64-bit signed value */ +static inline s64 negative_bit_floor(s64 v) +{ + u8 bits = fls64(~v); /* find most-significant unset bit */ + u64 delta; + + /* special case, needed because 1ULL << 64 is undefined */ + if (bits > 63) + return 0; + + delta = (1ULL << bits) - 1; + return ~delta; +} + static void scalar32_min_max_and(struct bpf_reg_state *dst_reg, struct bpf_reg_state *src_reg) { @@ -13530,16 +13564,10 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg, dst_reg->u32_min_value = var32_off.value; dst_reg->u32_max_value = min(dst_reg->u32_max_value, umax_val); - /* Safe to set s32 bounds by casting u32 result into s32 when u32 - * doesn't cross sign boundary. Otherwise set s32 bounds to unbounded. - */ - if ((s32)dst_reg->u32_min_value <= (s32)dst_reg->u32_max_value) { - dst_reg->s32_min_value = dst_reg->u32_min_value; - dst_reg->s32_max_value = dst_reg->u32_max_value; - } else { - dst_reg->s32_min_value = S32_MIN; - dst_reg->s32_max_value = S32_MAX; - } + /* Handle the [-1, 0] & -CONSTANT case that's difficult for tnum */ + dst_reg->s32_min_value = negative32_bit_floor(min(dst_reg->s32_min_value, + src_reg->s32_min_value)); + dst_reg->s32_max_value = max(dst_reg->s32_max_value, src_reg->s32_max_value); } static void scalar_min_max_and(struct bpf_reg_state *dst_reg, @@ -13560,16 +13588,11 @@ static void scalar_min_max_and(struct bpf_reg_state *dst_reg, dst_reg->umin_value = dst_reg->var_off.value; dst_reg->umax_value = min(dst_reg->umax_value, umax_val); - /* Safe to set s64 bounds by casting u64 result into s64 when u64 - * doesn't cross sign boundary. Otherwise set s64 bounds to unbounded. 
- */
-        if ((s64)dst_reg->umin_value <= (s64)dst_reg->umax_value) {
-                dst_reg->smin_value = dst_reg->umin_value;
-                dst_reg->smax_value = dst_reg->umax_value;
-        } else {
-                dst_reg->smin_value = S64_MIN;
-                dst_reg->smax_value = S64_MAX;
-        }
+        /* Handle the [-1, 0] & -CONSTANT case that's difficult for tnum */
+        dst_reg->smin_value = negative_bit_floor(min(dst_reg->smin_value,
+                                                     src_reg->smin_value));
+        dst_reg->smax_value = max(dst_reg->smax_value, src_reg->smax_value);
+
         /* We may learn something more from the var_off */
         __update_reg_bounds(dst_reg);
 }
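For a feel of how the new bounds come out on the motivating pattern, here is a
small userspace sketch (not part of the patch; it stands in for the kernel's
fls64()-based helper with compiler builtins) that applies the new rules to the
[-1, 0] & -13 case; it yields smin = -16 and smax = 0, matching the improved
verifier output quoted above.

    #include <stdint.h>
    #include <stdio.h>

    /* Userspace stand-in for negative_bit_floor(): clear everything below
     * the most-significant unset bit.
     */
    static int64_t negative_bit_floor(int64_t v)
    {
            uint64_t inv = ~(uint64_t)v;
            int bits;

            if (!inv)                          /* v == -1 stays -1 */
                    return v;
            bits = 64 - __builtin_clzll(inv);  /* like fls64(~v) */
            if (bits > 63)                     /* non-negative v becomes 0 */
                    return 0;
            return (int64_t)~(((uint64_t)1 << bits) - 1);
    }

    static int64_t min64(int64_t a, int64_t b) { return a < b ? a : b; }
    static int64_t max64(int64_t a, int64_t b) { return a > b ? a : b; }

    int main(void)
    {
            /* dst in [-1, 0] (the s>>= 63 result), src is the constant -13 */
            int64_t dst_smin = -1, dst_smax = 0;
            int64_t src_smin = -13, src_smax = -13;

            int64_t smin = negative_bit_floor(min64(dst_smin, src_smin));
            int64_t smax = max64(dst_smax, src_smax);

            printf("smin=%lld smax=%lld\n", (long long)smin, (long long)smax);
            /* prints: smin=-16 smax=0 */
            return 0;
    }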
From patchwork Fri Jul 19 11:00:56 2024
X-Patchwork-Id: 13737196
From: Xu Kuohai
Subject: [PATCH bpf-next v2 6/9] selftests/bpf: Avoid load failure for token_lsm.c
Date: Fri, 19 Jul 2024 19:00:56 +0800
Message-Id: <20240719110059.797546-7-xukuohai@huaweicloud.com>

From: Xu Kuohai

The compiler optimized the two bpf progs in token_lsm.c so that the
return value in the "return -1" path is produced from the bool variable,
causing an unexpected rejection:

  0: R1=ctx() R10=fp0
  ; int BPF_PROG(bpf_token_capable, struct bpf_token *token, int cap) @ bpf_lsm.c:17
  0: (b7) r6 = 0 ; R6_w=0
  ; if (my_pid == 0 || my_pid != (bpf_get_current_pid_tgid() >> 32)) @ bpf_lsm.c:19
  1: (18) r1 = 0xffffc9000102a000 ; R1_w=map_value(map=bpf_lsm.bss,ks=4,vs=5)
  3: (61) r7 = *(u32 *)(r1 +0) ; R1_w=map_value(map=bpf_lsm.bss,ks=4,vs=5) R7_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
  4: (15) if r7 == 0x0 goto pc+11 ; R7_w=scalar(smin=umin=umin32=1,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
  5: (67) r7 <<= 32 ; R7_w=scalar(smax=0x7fffffff00000000,umax=0xffffffff00000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xffffffff00000000))
  6: (c7) r7 s>>= 32 ; R7_w=scalar(smin=0xffffffff80000000,smax=0x7fffffff)
  7: (85) call bpf_get_current_pid_tgid#14 ; R0=scalar()
  8: (77) r0 >>= 32 ; R0_w=scalar(smin=0,smax=umax=0xffffffff,var_off=(0x0; 0xffffffff))
  9: (5d) if r0 != r7 goto pc+6 ; R0_w=scalar(smin=smin32=0,smax=umax=umax32=0x7fffffff,var_off=(0x0; 0x7fffffff)) R7=scalar(smin=smin32=0,smax=umax=umax32=0x7fffffff,var_off=(0x0; 0x7fffffff))
  ; if (reject_capable) @ bpf_lsm.c:21
  10: (18) r1 = 0xffffc9000102a004 ; R1_w=map_value(map=bpf_lsm.bss,ks=4,vs=5,off=4)
  12: (71) r6 = *(u8 *)(r1 +0) ; R1_w=map_value(map=bpf_lsm.bss,ks=4,vs=5,off=4) R6_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=255,var_off=(0x0; 0xff))
  ; @ bpf_lsm.c:0
  13: (87) r6 = -r6 ; R6_w=scalar()
  14: (67) r6 <<= 56 ; R6_w=scalar(smax=0x7f00000000000000,umax=0xff00000000000000,smin32=0,smax32=umax32=0,var_off=(0x0; 0xff00000000000000))
From patchwork Fri Jul 19 11:00:57 2024
X-Patchwork-Submitter: Xu Kuohai
X-Patchwork-Id: 13737190
X-Patchwork-Delegate: paul@paul-moore.com
From: Xu Kuohai
To: bpf@vger.kernel.org, netdev@vger.kernel.org, linux-security-module@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Eduard Zingerman,
    Shung-Hsi Yu, Yonghong Song, KP Singh, Roberto Sassu, Matt Bobrowski,
    Yafang Shao, Ilya Leoshkevich, "Jose E. Marchesi", James Morris,
    Kees Cook, Brendan Jackman, Florent Revest
Marchesi" , James Morris , Kees Cook , Brendan Jackman , Florent Revest Subject: [PATCH bpf-next v2 7/9] selftests/bpf: Add return value checks for failed tests Date: Fri, 19 Jul 2024 19:00:57 +0800 Message-Id: <20240719110059.797546-8-xukuohai@huaweicloud.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20240719110059.797546-1-xukuohai@huaweicloud.com> References: <20240719110059.797546-1-xukuohai@huaweicloud.com> Precedence: bulk X-Mailing-List: linux-security-module@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-CM-TRANSID: _Ch0CgD3BVE0RppmM3cvAg--.11767S9 X-Coremail-Antispam: 1UD129KBjvJXoWxWw1UXF4ftrW7Xr43uFy5twb_yoWrWw45pa 4kZ3s2krySgF13Xw1xAr4xXFWFgws2q3yUArWxX34xZ3W7Jr97Xr4IgF45Xrn8JrZYyws5 Zay2qrZxZr48Z3DanT9S1TB71UUUUU7qnTZGkaVYY2UrUUUUjbIjqfuFe4nvWSU5nxnvy2 9KBjDU0xBIdaVrnRJUUUPvb4IE77IF4wAFF20E14v26rWj6s0DM7CY07I20VC2zVCF04k2 6cxKx2IYs7xG6rWj6s0DM7CIcVAFz4kK6r1j6r18M28IrcIa0xkI8VA2jI8067AKxVWUAV Cq3wA2048vs2IY020Ec7CjxVAFwI0_Xr0E3s1l8cAvFVAK0II2c7xJM28CjxkF64kEwVA0 rcxSw2x7M28EF7xvwVC0I7IYx2IY67AKxVW5JVW7JwA2z4x0Y4vE2Ix0cI8IcVCY1x0267 AKxVW8Jr0_Cr1UM28EF7xvwVC2z280aVAFwI0_GcCE3s1l84ACjcxK6I8E87Iv6xkF7I0E 14v26rxl6s0DM2AIxVAIcxkEcVAq07x20xvEncxIr21l5I8CrVACY4xI64kE6c02F40Ex7 xfMcIj6xIIjxv20xvE14v26r1j6r18McIj6I8E87Iv67AKxVWUJVW8JwAm72CE4IkC6x0Y z7v_Jr0_Gr1lF7xvr2IYc2Ij64vIr41lFIxGxcIEc7CjxVA2Y2ka0xkIwI1lc7CjxVAaw2 AFwI0_GFv_Wryl42xK82IYc2Ij64vIr41l4I8I3I0E4IkC6x0Yz7v_Jr0_Gr1lx2IqxVAq x4xG67AKxVWUJVWUGwC20s026x8GjcxK67AKxVWUGVWUWwC2zVAF1VAY17CE14v26r4a6r W5MIIYrxkI7VAKI48JMIIF0xvE2Ix0cI8IcVAFwI0_JFI_Gr1lIxAIcVC0I7IYx2IY6xkF 7I0E14v26r4UJVWxJr1lIxAIcVCF04k26cxKx2IYs7xG6r1j6r1xMIIF0xvEx4A2jsIE14 v26r4j6F4UMIIF0xvEx4A2jsIEc7CjxVAFwI0_Gr1j6F4UJbIYCTnIWIevJa73UjIFyTuY vjxUI-eODUUUU X-CM-SenderInfo: 50xn30hkdlqx5xdzvxpfor3voofrz/ From: Xu Kuohai The return ranges of some bpf lsm test progs can not be deduced by the verifier accurately. To avoid erroneous rejections, add explicit return value checks for these progs. 
Signed-off-by: Xu Kuohai
---
 tools/testing/selftests/bpf/progs/err.h               | 10 ++++++++++
 tools/testing/selftests/bpf/progs/test_sig_in_xattr.c |  4 ++++
 .../selftests/bpf/progs/test_verify_pkcs7_sig.c       |  8 ++++++--
 .../selftests/bpf/progs/verifier_global_subprogs.c    |  7 ++++++-
 4 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/err.h b/tools/testing/selftests/bpf/progs/err.h
index d66d283d9e59..38529779a236 100644
--- a/tools/testing/selftests/bpf/progs/err.h
+++ b/tools/testing/selftests/bpf/progs/err.h
@@ -5,6 +5,16 @@
 #define MAX_ERRNO 4095
 #define IS_ERR_VALUE(x) (unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO
 
+#define __STR(x) #x
+
+#define set_if_not_errno_or_zero(x, y)			\
+({							\
+	asm volatile ("if %0 s< -4095 goto +1\n"	\
+		      "if %0 s<= 0 goto +1\n"		\
+		      "%0 = " __STR(y) "\n"		\
+		      : "+r"(x));			\
+})
+
 static inline int IS_ERR_OR_NULL(const void *ptr)
 {
 	return !ptr || IS_ERR_VALUE((unsigned long)ptr);
diff --git a/tools/testing/selftests/bpf/progs/test_sig_in_xattr.c b/tools/testing/selftests/bpf/progs/test_sig_in_xattr.c
index 2f0eb1334d65..8ef6b39335b6 100644
--- a/tools/testing/selftests/bpf/progs/test_sig_in_xattr.c
+++ b/tools/testing/selftests/bpf/progs/test_sig_in_xattr.c
@@ -6,6 +6,7 @@
 #include
 #include
 #include "bpf_kfuncs.h"
+#include "err.h"
 
 char _license[] SEC("license") = "GPL";
 
@@ -79,5 +80,8 @@ int BPF_PROG(test_file_open, struct file *f)
 	ret = bpf_verify_pkcs7_signature(&digest_ptr, &sig_ptr, trusted_keyring);
 
 	bpf_key_put(trusted_keyring);
+
+	set_if_not_errno_or_zero(ret, -EFAULT);
+
 	return ret;
 }
diff --git a/tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c b/tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c
index f42e9f3831a1..12034a73ee2d 100644
--- a/tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c
+++ b/tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include "bpf_kfuncs.h"
+#include "err.h"
 
 #define MAX_DATA_SIZE (1024 * 1024)
 #define MAX_SIG_SIZE 1024
@@ -55,12 +56,12 @@ int BPF_PROG(bpf, int cmd, union bpf_attr *attr, unsigned int size)
 
 	ret = bpf_probe_read_kernel(&value, sizeof(value), &attr->value);
 	if (ret)
-		return ret;
+		goto out;
 
 	ret = bpf_copy_from_user(data_val, sizeof(struct data),
 				 (void *)(unsigned long)value);
 	if (ret)
-		return ret;
+		goto out;
 
 	if (data_val->data_len > sizeof(data_val->data))
 		return -EINVAL;
@@ -84,5 +85,8 @@ int BPF_PROG(bpf, int cmd, union bpf_attr *attr, unsigned int size)
 
 	bpf_key_put(trusted_keyring);
 
+out:
+	set_if_not_errno_or_zero(ret, -EFAULT);
+
 	return ret;
 }
diff --git a/tools/testing/selftests/bpf/progs/verifier_global_subprogs.c b/tools/testing/selftests/bpf/progs/verifier_global_subprogs.c
index a9fc30ed4d73..20904cd2baa2 100644
--- a/tools/testing/selftests/bpf/progs/verifier_global_subprogs.c
+++ b/tools/testing/selftests/bpf/progs/verifier_global_subprogs.c
@@ -7,6 +7,7 @@
 #include "bpf_misc.h"
 #include "xdp_metadata.h"
 #include "bpf_kfuncs.h"
+#include "err.h"
 
 /* The compiler may be able to detect the access to uninitialized
    memory in the routines performing out of bound memory accesses and
@@ -331,7 +332,11 @@ SEC("?lsm/bpf")
 __success __log_level(2)
 int BPF_PROG(arg_tag_ctx_lsm)
 {
-	return tracing_subprog_void(ctx) + tracing_subprog_u64(ctx);
+	int ret;
+
+	ret = tracing_subprog_void(ctx) + tracing_subprog_u64(ctx);
+	set_if_not_errno_or_zero(ret, -1);
+	return ret;
 }
 
 SEC("?struct_ops/test_1")
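[Editorial note: a minimal sketch of how the new helper is intended to be used in a
test prog. The hook, stand-in check, and prog name below are illustrative and not
taken from the series; only set_if_not_errno_or_zero() comes from the patch above.]

// Hedged sketch: clamp an arbitrary return value into the LSM errno-or-zero range.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "err.h"

char _license[] SEC("license") = "GPL";

static __noinline int check_path(struct file *f)
{
	/* Stand-in for a check whose return range the verifier cannot narrow. */
	return (int)bpf_get_prandom_u32();
}

SEC("lsm/file_open")
int BPF_PROG(clamped_file_open, struct file *f)
{
	int ret = check_path(f);

	/* Force anything outside [-4095, 0] to -EFAULT; the inline asm in
	 * set_if_not_errno_or_zero() keeps the compiler from transforming
	 * the check, so the verifier sees an explicit bound on ret.
	 */
	set_if_not_errno_or_zero(ret, -EFAULT);
	return ret;
}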
From patchwork Fri Jul 19 11:00:58 2024
X-Patchwork-Submitter: Xu Kuohai
X-Patchwork-Id: 13737192
X-Patchwork-Delegate: paul@paul-moore.com
From: Xu Kuohai
To: bpf@vger.kernel.org, netdev@vger.kernel.org, linux-security-module@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Eduard Zingerman,
    Shung-Hsi Yu, Yonghong Song, KP Singh, Roberto Sassu, Matt Bobrowski,
    Yafang Shao, Ilya Leoshkevich, "Jose E. Marchesi", James Morris,
    Kees Cook, Brendan Jackman, Florent Revest
Marchesi" , James Morris , Kees Cook , Brendan Jackman , Florent Revest Subject: [PATCH bpf-next v2 8/9] selftests/bpf: Add test for lsm tail call Date: Fri, 19 Jul 2024 19:00:58 +0800 Message-Id: <20240719110059.797546-9-xukuohai@huaweicloud.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20240719110059.797546-1-xukuohai@huaweicloud.com> References: <20240719110059.797546-1-xukuohai@huaweicloud.com> Precedence: bulk X-Mailing-List: linux-security-module@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-CM-TRANSID: _Ch0CgD3BVE0RppmM3cvAg--.11767S10 X-Coremail-Antispam: 1UD129KBjvJXoWxXrWkXw1UtF4UCr15AFy7Jrb_yoW5urW7pa 48W345KryFvFy3Xw43KF4xuF4Sya1kuryUArW7XryYvrn7Arn7GF1xKFWUtFnxJFWruwn5 Zas7trs7Cr48Z3DanT9S1TB71UUUUU7qnTZGkaVYY2UrUUUUjbIjqfuFe4nvWSU5nxnvy2 9KBjDU0xBIdaVrnRJUUUPvb4IE77IF4wAFF20E14v26rWj6s0DM7CY07I20VC2zVCF04k2 6cxKx2IYs7xG6rWj6s0DM7CIcVAFz4kK6r1j6r18M28IrcIa0xkI8VA2jI8067AKxVWUAV Cq3wA2048vs2IY020Ec7CjxVAFwI0_Xr0E3s1l8cAvFVAK0II2c7xJM28CjxkF64kEwVA0 rcxSw2x7M28EF7xvwVC0I7IYx2IY67AKxVW5JVW7JwA2z4x0Y4vE2Ix0cI8IcVCY1x0267 AKxVW8Jr0_Cr1UM28EF7xvwVC2z280aVAFwI0_GcCE3s1l84ACjcxK6I8E87Iv6xkF7I0E 14v26rxl6s0DM2AIxVAIcxkEcVAq07x20xvEncxIr21l5I8CrVACY4xI64kE6c02F40Ex7 xfMcIj6xIIjxv20xvE14v26r1j6r18McIj6I8E87Iv67AKxVWUJVW8JwAm72CE4IkC6x0Y z7v_Jr0_Gr1lF7xvr2IYc2Ij64vIr41lFIxGxcIEc7CjxVA2Y2ka0xkIwI1lc7CjxVAaw2 AFwI0_GFv_Wryl42xK82IYc2Ij64vIr41l4I8I3I0E4IkC6x0Yz7v_Jr0_Gr1lx2IqxVAq x4xG67AKxVWUJVWUGwC20s026x8GjcxK67AKxVWUGVWUWwC2zVAF1VAY17CE14v26r4a6r W5MIIYrxkI7VAKI48JMIIF0xvE2Ix0cI8IcVAFwI0_JFI_Gr1lIxAIcVC0I7IYx2IY6xkF 7I0E14v26r4UJVWxJr1lIxAIcVCF04k26cxKx2IYs7xG6r1j6r1xMIIF0xvEx4A2jsIE14 v26r4j6F4UMIIF0xvEx4A2jsIEc7CjxVAFwI0_Gr1j6F4UJbIYCTnIWIevJa73UjIFyTuY vjxUI-eODUUUU X-CM-SenderInfo: 50xn30hkdlqx5xdzvxpfor3voofrz/ From: Xu Kuohai Add test for lsm tail call to ensure tail call can only be used between bpf lsm progs attached to the same hook. 
Signed-off-by: Xu Kuohai
---
 .../selftests/bpf/prog_tests/test_lsm.c | 46 ++++++++++++++++++-
 .../selftests/bpf/progs/lsm_tailcall.c  | 34 ++++++++++++++
 2 files changed, 79 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/progs/lsm_tailcall.c

diff --git a/tools/testing/selftests/bpf/prog_tests/test_lsm.c b/tools/testing/selftests/bpf/prog_tests/test_lsm.c
index 16175d579bc7..2a27f3714f5c 100644
--- a/tools/testing/selftests/bpf/prog_tests/test_lsm.c
+++ b/tools/testing/selftests/bpf/prog_tests/test_lsm.c
@@ -12,6 +12,7 @@
 #include
 
 #include "lsm.skel.h"
+#include "lsm_tailcall.skel.h"
 
 char *CMD_ARGS[] = {"true", NULL};
 
@@ -95,7 +96,7 @@ static int test_lsm(struct lsm *skel)
 	return 0;
 }
 
-void test_test_lsm(void)
+static void test_lsm_basic(void)
 {
 	struct lsm *skel = NULL;
 	int err;
@@ -114,3 +115,46 @@ void test_test_lsm(void)
 close_prog:
 	lsm__destroy(skel);
 }
+
+static void test_lsm_tailcall(void)
+{
+	struct lsm_tailcall *skel = NULL;
+	int map_fd, prog_fd;
+	int err, key;
+
+	skel = lsm_tailcall__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "lsm_tailcall__skel_load"))
+		goto close_prog;
+
+	map_fd = bpf_map__fd(skel->maps.jmp_table);
+	if (CHECK_FAIL(map_fd < 0))
+		goto close_prog;
+
+	prog_fd = bpf_program__fd(skel->progs.lsm_file_permission_prog);
+	if (CHECK_FAIL(prog_fd < 0))
+		goto close_prog;
+
+	key = 0;
+	err = bpf_map_update_elem(map_fd, &key, &prog_fd, BPF_ANY);
+	if (CHECK_FAIL(!err))
+		goto close_prog;
+
+	prog_fd = bpf_program__fd(skel->progs.lsm_file_alloc_security_prog);
+	if (CHECK_FAIL(prog_fd < 0))
+		goto close_prog;
+
+	err = bpf_map_update_elem(map_fd, &key, &prog_fd, BPF_ANY);
+	if (CHECK_FAIL(err))
+		goto close_prog;
+
+close_prog:
+	lsm_tailcall__destroy(skel);
+}
+
+void test_test_lsm(void)
+{
+	if (test__start_subtest("lsm_basic"))
+		test_lsm_basic();
+	if (test__start_subtest("lsm_tailcall"))
+		test_lsm_tailcall();
+}
diff --git a/tools/testing/selftests/bpf/progs/lsm_tailcall.c b/tools/testing/selftests/bpf/progs/lsm_tailcall.c
new file mode 100644
index 000000000000..49c075ce2d4c
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/lsm_tailcall.c
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Huawei Technologies Co., Ltd */
+
+#include "vmlinux.h"
+#include
+#include
+
+char _license[] SEC("license") = "GPL";
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+	__uint(max_entries, 1);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(__u32));
+} jmp_table SEC(".maps");
+
+SEC("lsm/file_permission")
+int lsm_file_permission_prog(void *ctx)
+{
+	return 0;
+}
+
+SEC("lsm/file_alloc_security")
+int lsm_file_alloc_security_prog(void *ctx)
+{
+	return 0;
+}
+
+SEC("lsm/file_alloc_security")
+int lsm_file_alloc_security_entry(void *ctx)
+{
+	bpf_tail_call_static(ctx, &jmp_table, 0);
+	return 0;
+}
From patchwork Fri Jul 19 11:00:59 2024
X-Patchwork-Submitter: Xu Kuohai
X-Patchwork-Id: 13737191
X-Patchwork-Delegate: paul@paul-moore.com
From: Xu Kuohai
To: bpf@vger.kernel.org, netdev@vger.kernel.org, linux-security-module@vger.kernel.org
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Eduard Zingerman,
    Shung-Hsi Yu, Yonghong Song, KP Singh, Roberto Sassu, Matt Bobrowski,
    Yafang Shao, Ilya Leoshkevich, "Jose E. Marchesi", James Morris,
    Kees Cook, Brendan Jackman, Florent Revest
Subject: [PATCH bpf-next v2 9/9] selftests/bpf: Add verifier tests for bpf lsm
Date: Fri, 19 Jul 2024 19:00:59 +0800
Message-Id: <20240719110059.797546-10-xukuohai@huaweicloud.com>
In-Reply-To: <20240719110059.797546-1-xukuohai@huaweicloud.com>
References: <20240719110059.797546-1-xukuohai@huaweicloud.com>

From: Xu Kuohai

Add verifier tests to check bpf lsm return values and disabled hooks.

Signed-off-by: Xu Kuohai
---
 .../selftests/bpf/prog_tests/verifier.c |   2 +
 .../selftests/bpf/progs/verifier_lsm.c  | 178 ++++++++++++++++++
 2 files changed, 180 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_lsm.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 9dc3687bc406..ff1c7da1d06e 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -88,6 +88,7 @@
 #include "verifier_xdp.skel.h"
 #include "verifier_xdp_direct_packet_access.skel.h"
 #include "verifier_bits_iter.skel.h"
+#include "verifier_lsm.skel.h"
 
 #define MAX_ENTRIES 11
 
@@ -206,6 +207,7 @@ void test_verifier_xadd(void) { RUN(verifier_xadd); }
 void test_verifier_xdp(void) { RUN(verifier_xdp); }
 void test_verifier_xdp_direct_packet_access(void) { RUN(verifier_xdp_direct_packet_access); }
 void test_verifier_bits_iter(void) { RUN(verifier_bits_iter); }
+void test_verifier_lsm(void) { RUN(verifier_lsm); }
 
 static int init_test_val_map(struct bpf_object *obj, char *map_name)
 {
diff --git a/tools/testing/selftests/bpf/progs/verifier_lsm.c b/tools/testing/selftests/bpf/progs/verifier_lsm.c
new file mode 100644
index 000000000000..08251c517154
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_lsm.c
@@ -0,0 +1,178 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include
+#include
+#include "bpf_misc.h"
+
+SEC("lsm/file_alloc_security")
+__description("lsm bpf prog with -4095~0 retval. test 1")
+__success
+__naked int errno_zero_retval_test1(void *ctx)
+{
+	asm volatile (
+	"r0 = 0;"
+	"exit;"
+	::: __clobber_all);
+}
+
+SEC("lsm/file_alloc_security")
+__description("lsm bpf prog with -4095~0 retval. test 2")
+__success
+__naked int errno_zero_retval_test2(void *ctx)
+{
+	asm volatile (
+	"r0 = -4095;"
+	"exit;"
+	::: __clobber_all);
+}
+
+SEC("lsm/file_alloc_security")
+__description("lsm bpf prog with -4095~0 retval. test 3")
+__success
+__naked int errno_zero_retval_test3(void *ctx)
+{
+	asm volatile (
+	"call %[bpf_get_prandom_u32];"
+	"r0 <<= 63;"
+	"r0 s>>= 63;"
+	"r0 &= -13;"
+	"exit;"
+	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("lsm/file_mprotect")
+__description("lsm bpf prog with -4095~0 retval. test 4")
+__failure __msg("R0 has smin=-4096 smax=-4096 should have been in [-4095, 0]")
+__naked int errno_zero_retval_test4(void *ctx)
+{
+	asm volatile (
+	"r0 = -4096;"
+	"exit;"
+	::: __clobber_all);
+}
+
+SEC("lsm/file_mprotect")
+__description("lsm bpf prog with -4095~0 retval. test 5")
+__failure __msg("R0 has smin=4096 smax=4096 should have been in [-4095, 0]")
+__naked int errno_zero_retval_test5(void *ctx)
+{
+	asm volatile (
+	"r0 = 4096;"
+	"exit;"
+	::: __clobber_all);
+}
+
+SEC("lsm/file_mprotect")
+__description("lsm bpf prog with -4095~0 retval. test 6")
+__failure __msg("R0 has smin=1 smax=1 should have been in [-4095, 0]")
+__naked int errno_zero_retval_test6(void *ctx)
+{
+	asm volatile (
+	"r0 = 1;"
+	"exit;"
+	::: __clobber_all);
+}
+
+SEC("lsm/audit_rule_known")
+__description("lsm bpf prog with bool retval. test 1")
+__success
+__naked int bool_retval_test1(void *ctx)
+{
+	asm volatile (
+	"r0 = 1;"
+	"exit;"
+	::: __clobber_all);
+}
test 2") +__success +__success +__naked int bool_retval_test2(void *ctx) +{ + asm volatile ( + "r0 = 0;" + "exit;" + ::: __clobber_all); +} + +SEC("lsm/audit_rule_known") +__description("lsm bpf prog with bool retval. test 3") +__failure __msg("R0 has smin=-1 smax=-1 should have been in [0, 1]") +__naked int bool_retval_test3(void *ctx) +{ + asm volatile ( + "r0 = -1;" + "exit;" + ::: __clobber_all); +} + +SEC("lsm/audit_rule_known") +__description("lsm bpf prog with bool retval. test 4") +__failure __msg("R0 has smin=2 smax=2 should have been in [0, 1]") +__naked int bool_retval_test4(void *ctx) +{ + asm volatile ( + "r0 = 2;" + "exit;" + ::: __clobber_all); +} + +SEC("lsm/file_free_security") +__success +__description("lsm bpf prog with void retval. test 1") +__naked int void_retval_test1(void *ctx) +{ + asm volatile ( + "r0 = -4096;" + "exit;" + ::: __clobber_all); +} + +SEC("lsm/file_free_security") +__success +__description("lsm bpf prog with void retval. test 2") +__naked int void_retval_test2(void *ctx) +{ + asm volatile ( + "r0 = 4096;" + "exit;" + ::: __clobber_all); +} + +SEC("lsm/getprocattr") +__description("lsm disabled hook: getprocattr") +__failure __msg("points to disabled hook") +__naked int disabled_hook_test1(void *ctx) +{ + asm volatile ( + "r0 = 0;" + "exit;" + ::: __clobber_all); +} + +SEC("lsm/setprocattr") +__description("lsm disabled hook: setprocattr") +__failure __msg("points to disabled hook") +__naked int disabled_hook_test2(void *ctx) +{ + asm volatile ( + "r0 = 0;" + "exit;" + ::: __clobber_all); +} + +SEC("lsm/ismaclabel") +__description("lsm disabled hook: ismaclabel") +__failure __msg("points to disabled hook") +__naked int disabled_hook_test3(void *ctx) +{ + asm volatile ( + "r0 = 0;" + "exit;" + ::: __clobber_all); +} + +char _license[] SEC("license") = "GPL";