From patchwork Tue Nov 8 07:41:25 2022
X-Patchwork-Submitter: Yonghong Song
X-Patchwork-Id: 13035986
X-Patchwork-Delegate: bpf@iogearbox.net
From: Yonghong Song
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau
Subject: [PATCH bpf-next v2 7/8] selftests/bpf: Add tests for bpf_rcu_read_lock()
Date: Mon, 7 Nov 2022 23:41:25 -0800
Message-ID: <20221108074125.267314-1-yhs@fb.com>
In-Reply-To: <20221108074047.261848-1-yhs@fb.com>
References: <20221108074047.261848-1-yhs@fb.com>
X-Mailing-List: bpf@vger.kernel.org

Add a few positive/negative tests for bpf_rcu_read_lock() and its
corresponding verifier support.
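
For reference, the core pattern the positive tests exercise is sketched
below (a simplified excerpt in the spirit of task_succ from the new
progs/rcu_read_lock.c; the bpf_rcu_read_lock()/bpf_rcu_read_unlock()
kfuncs and the map_b task storage map are declared in that file):

  /* task->real_parent is an RCU-protected pointer, so both the load and
   * the helper call that consumes it must sit inside the region bounded
   * by the two kfuncs.
   */
  bpf_rcu_read_lock();
  real_parent = task->real_parent;
  (void)bpf_task_storage_get(&map_b, real_parent, 0,
			     BPF_LOCAL_STORAGE_GET_F_CREATE);
  bpf_rcu_read_unlock();
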
Signed-off-by: Yonghong Song
---
 .../selftests/bpf/prog_tests/rcu_read_lock.c | 127 +++++++
 .../selftests/bpf/progs/rcu_read_lock.c      | 353 ++++++++++++++++++
 2 files changed, 480 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/rcu_read_lock.c
 create mode 100644 tools/testing/selftests/bpf/progs/rcu_read_lock.c

diff --git a/tools/testing/selftests/bpf/prog_tests/rcu_read_lock.c b/tools/testing/selftests/bpf/prog_tests/rcu_read_lock.c
new file mode 100644
index 000000000000..4cf1a5188c34
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/rcu_read_lock.c
@@ -0,0 +1,127 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates.*/
+
+#define _GNU_SOURCE
+#include <unistd.h>
+#include <sys/syscall.h>
+#include <sys/types.h>
+#include <test_progs.h>
+#include "rcu_read_lock.skel.h"
+
+static void test_local_storage(void)
+{
+	struct rcu_read_lock *skel;
+	int err;
+
+	skel = rcu_read_lock__open();
+	if (!ASSERT_OK_PTR(skel, "skel_open"))
+		return;
+
+	skel->bss->target_pid = syscall(SYS_gettid);
+
+	bpf_program__set_autoload(skel->progs.cgrp_succ, true);
+	bpf_program__set_autoload(skel->progs.task_succ, true);
+	bpf_program__set_autoload(skel->progs.two_regions, true);
+	bpf_program__set_autoload(skel->progs.non_sleepable_1, true);
+	bpf_program__set_autoload(skel->progs.non_sleepable_2, true);
+	err = rcu_read_lock__load(skel);
+	if (!ASSERT_OK(err, "skel_load"))
+		goto done;
+
+	err = rcu_read_lock__attach(skel);
+	if (!ASSERT_OK(err, "skel_attach"))
+		goto done;
+
+	syscall(SYS_getpgid);
+
+	ASSERT_EQ(skel->bss->result, 2, "result");
+done:
+	rcu_read_lock__destroy(skel);
+}
+
+static void test_runtime_diff_rcu_tag(void)
+{
+	struct rcu_read_lock *skel;
+	int err;
+
+	skel = rcu_read_lock__open();
+	if (!ASSERT_OK_PTR(skel, "skel_open"))
+		return;
+
+	bpf_program__set_autoload(skel->progs.dump_ipv6_route, true);
+	err = rcu_read_lock__load(skel);
+	ASSERT_OK(err, "skel_load");
+	rcu_read_lock__destroy(skel);
+}
+
+static void test_negative(void)
+{
+#define NUM_FAILED_PROGS 10
+	struct rcu_read_lock *skel;
+	struct bpf_program *prog;
+	int i, err;
+
+	for (i = 0; i < NUM_FAILED_PROGS; i++) {
+		skel = rcu_read_lock__open();
+		if (!ASSERT_OK_PTR(skel, "skel_open"))
+			return;
+
+		switch (i) {
+		case 0:
+			prog = skel->progs.miss_lock;
+			break;
+		case 1:
+			prog = skel->progs.miss_unlock;
+			break;
+		case 2:
+			prog = skel->progs.cgrp_incorrect_rcu_region;
+			break;
+		case 3:
+			prog = skel->progs.task_incorrect_rcu_region1;
+			break;
+		case 4:
+			prog = skel->progs.task_incorrect_rcu_region2;
+			break;
+		case 5:
+			prog = skel->progs.non_sleepable_rcu_mismatch;
+			break;
+		case 6:
+			prog = skel->progs.inproper_sleepable_helper;
+			break;
+		case 7:
+			prog = skel->progs.inproper_sleepable_kfunc;
+			break;
+		case 8:
+			prog = skel->progs.nested_rcu_region;
+			break;
+		default:
+			prog = skel->progs.cross_rcu_region;
+			break;
+		}
+
+		bpf_program__set_autoload(prog, true);
+		err = rcu_read_lock__load(skel);
+		if (!ASSERT_ERR(err, "skel_load")) {
+			rcu_read_lock__destroy(skel);
+			return;
+		}
+	}
+}
+
+void test_rcu_read_lock(void)
+{
+	int cgroup_fd;
+
+	cgroup_fd = test__join_cgroup("/rcu_read_lock");
+	if (!ASSERT_GE(cgroup_fd, 0, "join_cgroup /rcu_read_lock"))
+		return;
+
+	if (test__start_subtest("local_storage"))
+		test_local_storage();
+	if (test__start_subtest("runtime_diff_rcu_tag"))
+		test_runtime_diff_rcu_tag();
+	if (test__start_subtest("negative_tests"))
+		test_negative();
+
+	close(cgroup_fd);
+}
diff --git a/tools/testing/selftests/bpf/progs/rcu_read_lock.c b/tools/testing/selftests/bpf/progs/rcu_read_lock.c
new file mode 100644
index 000000000000..fbd1aeedc14c
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/rcu_read_lock.c
@@ -0,0 +1,353 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_tracing_net.h"
+#include "bpf_misc.h"
+
+char _license[] SEC("license") = "GPL";
+
+struct {
+	__uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, long);
+} map_a SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, long);
+} map_b SEC(".maps");
+
+__u32 user_data, key_serial, target_pid = 0;
+__u64 flags, result = 0;
+
+struct bpf_key *bpf_lookup_user_key(__u32 serial, __u64 flags) __ksym;
+void bpf_key_put(struct bpf_key *key) __ksym;
+void bpf_rcu_read_lock(void) __ksym;
+void bpf_rcu_read_unlock(void) __ksym;
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int cgrp_succ(void *ctx)
+{
+	struct task_struct *task;
+	struct css_set *cgroups;
+	struct cgroup *dfl_cgrp;
+	long init_val = 2;
+	long *ptr;
+
+	task = bpf_get_current_task_btf();
+	if (task->pid != target_pid)
+		return 0;
+
+	bpf_rcu_read_lock();
+	cgroups = task->cgroups;
+	dfl_cgrp = cgroups->dfl_cgrp;
+	bpf_rcu_read_unlock();
+	ptr = bpf_cgrp_storage_get(&map_a, dfl_cgrp, &init_val,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (!ptr)
+		return 0;
+	ptr = bpf_cgrp_storage_get(&map_a, dfl_cgrp, 0, 0);
+	if (!ptr)
+		return 0;
+	result = *ptr;
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_nanosleep")
+int task_succ(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	task = bpf_get_current_task_btf();
+	if (task->pid != target_pid)
+		return 0;
+
+	/* region including helper using rcu ptr */
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	bpf_rcu_read_unlock();
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_nanosleep")
+int two_regions(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	/* two regions */
+	task = bpf_get_current_task_btf();
+	bpf_rcu_read_lock();
+	bpf_rcu_read_unlock();
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	bpf_rcu_read_unlock();
+	return 0;
+}
+
+SEC("?fentry/" SYS_PREFIX "sys_getpgid")
+int non_sleepable_1(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	task = bpf_get_current_task_btf();
+
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	bpf_rcu_read_unlock();
+	return 0;
+}
+
+SEC("?fentry/" SYS_PREFIX "sys_getpgid")
+int non_sleepable_2(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	task = bpf_get_current_task_btf();
+
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	bpf_rcu_read_unlock();
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	return 0;
+}
+
+SEC("?iter.s/ipv6_route")
+int dump_ipv6_route(struct bpf_iter__ipv6_route *ctx)
+{
+	struct seq_file *seq = ctx->meta->seq;
+	struct fib6_info *rt = ctx->rt;
+	const struct net_device *dev;
+	struct fib6_nh *fib6_nh;
+	unsigned int flags;
+	struct nexthop *nh;
+
+	if (rt == (void *)0)
+		return 0;
+
+	/* fib6_nh is not a rcu ptr */
+	fib6_nh = &rt->fib6_nh[0];
+	flags = rt->fib6_flags;
+
+	nh = rt->nh;
+	bpf_rcu_read_lock();
+	if (rt->nh)
+		/* fib6_nh is a rcu ptr */
+		fib6_nh = &nh->nh_info->fib6_nh;
+
+	/* fib6_nh could be a rcu or non-rcu ptr */
+	if (fib6_nh->fib_nh_gw_family) {
+		flags |= RTF_GATEWAY;
+		BPF_SEQ_PRINTF(seq, "%pi6 ", &fib6_nh->fib_nh_gw6);
+	} else {
+		BPF_SEQ_PRINTF(seq, "00000000000000000000000000000000 ");
+	}
+
+	dev = fib6_nh->fib_nh_dev;
+	bpf_rcu_read_unlock();
+	if (dev)
+		BPF_SEQ_PRINTF(seq, "%08x %08x %08x %08x %8s\n", rt->fib6_metric,
+			       rt->fib6_ref.refs.counter, 0, flags, dev->name);
+	else
+		BPF_SEQ_PRINTF(seq, "%08x %08x %08x %08x\n", rt->fib6_metric,
+			       rt->fib6_ref.refs.counter, 0, flags);
+
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int miss_lock(void *ctx)
+{
+	struct task_struct *task;
+	struct css_set *cgroups;
+	struct cgroup *dfl_cgrp;
+
+	/* missing bpf_rcu_read_lock() */
+	task = bpf_get_current_task_btf();
+	bpf_rcu_read_lock();
+	cgroups = task->cgroups;
+	bpf_rcu_read_unlock();
+	dfl_cgrp = cgroups->dfl_cgrp;
+	bpf_rcu_read_unlock();
+	(void)bpf_cgrp_storage_get(&map_a, dfl_cgrp, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int miss_unlock(void *ctx)
+{
+	struct task_struct *task;
+	struct css_set *cgroups;
+	struct cgroup *dfl_cgrp;
+
+	/* missing bpf_rcu_read_unlock() */
+	bpf_rcu_read_lock();
+	task = bpf_get_current_task_btf();
+	bpf_rcu_read_lock();
+	cgroups = task->cgroups;
+	bpf_rcu_read_unlock();
+	dfl_cgrp = cgroups->dfl_cgrp;
+	(void)bpf_cgrp_storage_get(&map_a, dfl_cgrp, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int cgrp_incorrect_rcu_region(void *ctx)
+{
+	struct task_struct *task;
+	struct css_set *cgroups;
+	struct cgroup *dfl_cgrp;
+
+	/* load with rcu_ptr outside the rcu read lock region */
+	bpf_rcu_read_lock();
+	task = bpf_get_current_task_btf();
+	cgroups = task->cgroups;
+	bpf_rcu_read_unlock();
+	dfl_cgrp = cgroups->dfl_cgrp;
+	(void)bpf_cgrp_storage_get(&map_a, dfl_cgrp, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int task_incorrect_rcu_region1(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	task = bpf_get_current_task_btf();
+
+	/* helper use of rcu ptr outside the rcu read lock region */
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	bpf_rcu_read_unlock();
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int task_incorrect_rcu_region2(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	task = bpf_get_current_task_btf();
+
+	/* missing bpf_rcu_read_unlock() in one path */
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (real_parent)
+		bpf_rcu_read_unlock();
+	return 0;
+}
+
+SEC("?fentry/" SYS_PREFIX "sys_getpgid")
+int non_sleepable_rcu_mismatch(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	task = bpf_get_current_task_btf();
+
+	/* non-sleepable: missing bpf_rcu_read_unlock() in one path */
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	if (real_parent)
+		bpf_rcu_read_unlock();
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int inproper_sleepable_helper(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+	struct pt_regs *regs;
+	__u32 value = 0;
+	void *ptr;
+
+	task = bpf_get_current_task_btf();
+
+	/* sleepable helper in rcu read lock region */
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	regs = (struct pt_regs *)bpf_task_pt_regs(real_parent);
+	if (!regs) {
+		bpf_rcu_read_unlock();
+		return 0;
+	}
+
+	ptr = (void *)PT_REGS_IP(regs);
+	(void)bpf_copy_from_user_task(&value, sizeof(uint32_t), ptr, task, 0);
+	user_data = value;
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	bpf_rcu_read_unlock();
+	return 0;
+}
+
+SEC("?lsm.s/bpf")
+int BPF_PROG(inproper_sleepable_kfunc, int cmd, union bpf_attr *attr, unsigned int size)
+{
+	struct bpf_key *bkey;
+
+	/* sleepable kfunc in rcu read lock region */
+	bpf_rcu_read_lock();
+	bkey = bpf_lookup_user_key(key_serial, flags);
+	bpf_rcu_read_unlock();
+	if (!bkey)
+		return -1;
+	bpf_key_put(bkey);
+
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_nanosleep")
+int nested_rcu_region(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	/* nested rcu read lock regions */
+	task = bpf_get_current_task_btf();
+	bpf_rcu_read_lock();
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	bpf_rcu_read_unlock();
+	bpf_rcu_read_unlock();
+	return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_nanosleep")
+int cross_rcu_region(void *ctx)
+{
+	struct task_struct *task, *real_parent;
+
+	/* rcu ptr define/use in different regions */
+	task = bpf_get_current_task_btf();
+	bpf_rcu_read_lock();
+	real_parent = task->real_parent;
+	bpf_rcu_read_unlock();
+	bpf_rcu_read_lock();
+	(void)bpf_task_storage_get(&map_b, real_parent, 0,
+				   BPF_LOCAL_STORAGE_GET_F_CREATE);
+	bpf_rcu_read_unlock();
+	return 0;
+}