From patchwork Wed Sep 25 22:30:21 2024
X-Patchwork-Submitter: Namhyung Kim
X-Patchwork-Id: 13812493
X-Patchwork-Delegate: bpf@iogearbox.net
From: Namhyung Kim
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
    John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
    LKML, bpf@vger.kernel.org, Andrew Morton, Christoph Lameter,
    Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka,
    Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org
Subject: [RFC/PATCH bpf-next 1/3] bpf: Add slab iterator
Date: Wed, 25 Sep 2024 15:30:21 -0700
Message-ID: <20240925223023.735947-2-namhyung@kernel.org>
In-Reply-To: <20240925223023.735947-1-namhyung@kernel.org>
References: <20240925223023.735947-1-namhyung@kernel.org>
X-Patchwork-State: RFC

The new "slab" iterator traverses the list of slab caches (kmem_cache)
and calls the attached BPF program for each entry.  The program should
check whether the argument (ctx->s) is NULL before using it.

The iteration is done with slab_mutex held, but it breaks out and
returns to userspace once the BPF program emits more data than the seq
buffer size given by the user.  In other words, the whole iteration is
protected by slab_mutex only as long as the program does not overflow
the seq buffer.

It includes the internal "mm/slab.h" header to access kmem_cache,
slab_caches and slab_mutex.  Hope that's ok with the mm folks.
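
For reference, a minimal BPF program attached to the new iterator could
look like the sketch below.  This is only an illustration (the program
name dump_slab is made up here, and it assumes a vmlinux.h generated
from a kernel with this patch applied); the selftest in patch 3 is the
real example.

// SPDX-License-Identifier: GPL-2.0
/* Illustrative sketch only, not part of this patch. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

SEC("iter/slab")
int dump_slab(struct bpf_iter__slab *ctx)
{
	struct seq_file *seq = ctx->meta->seq;
	struct kmem_cache *s = ctx->s;

	/* ctx->s can be NULL (e.g. on the final call), so check it first. */
	if (!s)
		return 0;

	BPF_SEQ_PRINTF(seq, "%s: %u\n", s->name, s->object_size);
	return 0;
}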
Signed-off-by: Namhyung Kim
---
 include/linux/btf_ids.h |   1 +
 kernel/bpf/Makefile     |   1 +
 kernel/bpf/slab_iter.c  | 131 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 133 insertions(+)
 create mode 100644 kernel/bpf/slab_iter.c

diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h
index c0e3e1426a82f5c4..1474ab7f44a9cff6 100644
--- a/include/linux/btf_ids.h
+++ b/include/linux/btf_ids.h
@@ -283,5 +283,6 @@ extern u32 btf_tracing_ids[];
 extern u32 bpf_cgroup_btf_id[];
 extern u32 bpf_local_storage_map_btf_id[];
 extern u32 btf_bpf_map_id[];
+extern u32 bpf_slab_btf_id[];
 
 #endif
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 9b9c151b5c826b31..e18b09069349e1e9 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -52,3 +52,4 @@ obj-$(CONFIG_BPF_PRELOAD) += preload/
 obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
 obj-$(CONFIG_BPF_SYSCALL) += btf_iter.o
 obj-$(CONFIG_BPF_SYSCALL) += btf_relocate.o
+obj-$(CONFIG_BPF_SYSCALL) += slab_iter.o
diff --git a/kernel/bpf/slab_iter.c b/kernel/bpf/slab_iter.c
new file mode 100644
index 0000000000000000..bf1e50bd7497220e
--- /dev/null
+++ b/kernel/bpf/slab_iter.c
@@ -0,0 +1,131 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2024 Google */
+#include <linux/bpf.h>
+#include <linux/btf_ids.h>
+#include <linux/slab.h>
+#include <linux/kernel.h>
+#include <linux/seq_file.h>
+
+#include "../../mm/slab.h" /* kmem_cache, slab_caches and slab_mutex */
+
+struct bpf_iter__slab {
+	__bpf_md_ptr(struct bpf_iter_meta *, meta);
+	__bpf_md_ptr(struct kmem_cache *, s);
+};
+
+static void *slab_iter_seq_start(struct seq_file *seq, loff_t *pos)
+{
+	loff_t cnt = 0;
+	struct kmem_cache *s = NULL;
+
+	mutex_lock(&slab_mutex);
+
+	/*
+	 * Find an entry at the given position in the slab_caches list instead
+	 * of keeping a reference (to the last visited entry, if any) outside
+	 * of slab_mutex.  It might miss something if one is deleted in the
+	 * middle while it releases the lock.  But that should be rare and
+	 * there's not much we can do about it.
+	 */
+	list_for_each_entry(s, &slab_caches, list) {
+		if (cnt == *pos)
+			break;
+
+		cnt++;
+	}
+
+	if (cnt != *pos)
+		return NULL;
+
+	++*pos;
+	return s;
+}
+
+static void slab_iter_seq_stop(struct seq_file *seq, void *v)
+{
+	struct bpf_iter_meta meta;
+	struct bpf_iter__slab ctx = {
+		.meta = &meta,
+		.s = v,
+	};
+	struct bpf_prog *prog;
+
+	meta.seq = seq;
+	prog = bpf_iter_get_info(&meta, true);
+	if (prog)
+		bpf_iter_run_prog(prog, &ctx);
+
+	mutex_unlock(&slab_mutex);
+}
+
+static void *slab_iter_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+	struct kmem_cache *s = v;
+
+	++*pos;
+
+	if (list_last_entry(&slab_caches, struct kmem_cache, list) == s)
+		return NULL;
+
+	return list_next_entry(s, list);
+}
+
+static int slab_iter_seq_show(struct seq_file *seq, void *v)
+{
+	struct bpf_iter_meta meta;
+	struct bpf_iter__slab ctx = {
+		.meta = &meta,
+		.s = v,
+	};
+	struct bpf_prog *prog;
+	int ret = 0;
+
+	meta.seq = seq;
+	prog = bpf_iter_get_info(&meta, false);
+	if (prog)
+		ret = bpf_iter_run_prog(prog, &ctx);
+
+	return ret;
+}
+
+static const struct seq_operations slab_iter_seq_ops = {
+	.start	= slab_iter_seq_start,
+	.next	= slab_iter_seq_next,
+	.stop	= slab_iter_seq_stop,
+	.show	= slab_iter_seq_show,
+};
+
+BTF_ID_LIST_GLOBAL_SINGLE(bpf_slab_btf_id, struct, kmem_cache)
+
+static const struct bpf_iter_seq_info slab_iter_seq_info = {
+	.seq_ops	= &slab_iter_seq_ops,
+};
+
+static void bpf_iter_slab_show_fdinfo(const struct bpf_iter_aux_info *aux,
+				      struct seq_file *seq)
+{
+	seq_puts(seq, "slab iter\n");
+}
+
+DEFINE_BPF_ITER_FUNC(slab, struct bpf_iter_meta *meta,
+		     struct kmem_cache *s)
+
+static struct bpf_iter_reg bpf_slab_reg_info = {
+	.target			= "slab",
+	.feature		= BPF_ITER_RESCHED,
+	.show_fdinfo		= bpf_iter_slab_show_fdinfo,
+	.ctx_arg_info_size	= 1,
+	.ctx_arg_info		= {
+		{ offsetof(struct bpf_iter__slab, s),
+		  PTR_TO_BTF_ID_OR_NULL | PTR_TRUSTED },
+	},
+	.seq_info		= &slab_iter_seq_info,
+};
+
+static int __init bpf_slab_iter_init(void)
+{
+	bpf_slab_reg_info.ctx_arg_info[0].btf_id = bpf_slab_btf_id[0];
+	return bpf_iter_reg_target(&bpf_slab_reg_info);
+}
+
+late_initcall(bpf_slab_iter_init);

From patchwork Wed Sep 25 22:30:22 2024
X-Patchwork-Submitter: Namhyung Kim
X-Patchwork-Id: 13812494
X-Patchwork-Delegate: bpf@iogearbox.net
From: Namhyung Kim
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
    John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
    LKML, bpf@vger.kernel.org, Andrew Morton, Christoph Lameter,
    Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka,
    Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org
Subject: [RFC/PATCH bpf-next 2/3] mm/bpf: Add bpf_get_slab_cache() kfunc
Date: Wed, 25 Sep 2024 15:30:22 -0700
Message-ID: <20240925223023.735947-3-namhyung@kernel.org>
In-Reply-To: <20240925223023.735947-1-namhyung@kernel.org>
References: <20240925223023.735947-1-namhyung@kernel.org>
X-Patchwork-State: RFC

The bpf_get_slab_cache() kfunc returns slab cache information for a
given virtual address, like virt_to_cache().  If the address points to
a slab object, it returns a valid kmem_cache pointer, otherwise NULL.
It doesn't take a reference count on the kmem_cache, so the caller is
responsible for managing the access.  The intended use case for now is
to symbolize locks in slab objects from the lock contention
tracepoints.
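
For example, a tracing program could resolve a lock address to its
cache name roughly like the sketch below.  This is only an illustration
of the symbolization use case mentioned above (the program name and the
exact tracepoint hookup are made up here); the selftest in patch 3 is
the real example.

// SPDX-License-Identifier: GPL-2.0
/* Illustrative sketch only, not part of this patch. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

extern struct kmem_cache *bpf_get_slab_cache(__u64 addr) __ksym;

SEC("tp_btf/contention_begin")
int BPF_PROG(on_contention_begin, void *lock, unsigned int flags)
{
	struct kmem_cache *s;
	char name[32] = "(not a slab object)";

	/* No reference is taken on the kmem_cache; just peek at its name. */
	s = bpf_get_slab_cache((__u64)lock);
	if (s)
		bpf_probe_read_kernel_str(name, sizeof(name), s->name);

	bpf_printk("lock %lx: %s", (unsigned long)lock, name);
	return 0;
}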
Suggested-by: Vlastimil Babka
Signed-off-by: Namhyung Kim
---
 kernel/bpf/helpers.c |  1 +
 mm/slab_common.c     | 14 ++++++++++++++
 2 files changed, 15 insertions(+)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 1a43d06eab286c26..03db007a247175c4 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -3090,6 +3090,7 @@ BTF_ID_FLAGS(func, bpf_iter_bits_new, KF_ITER_NEW)
 BTF_ID_FLAGS(func, bpf_iter_bits_next, KF_ITER_NEXT | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_iter_bits_destroy, KF_ITER_DESTROY)
 BTF_ID_FLAGS(func, bpf_copy_from_user_str, KF_SLEEPABLE)
+BTF_ID_FLAGS(func, bpf_get_slab_cache, KF_RET_NULL)
 BTF_KFUNCS_END(common_btf_ids)
 
 static const struct btf_kfunc_id_set common_kfunc_set = {
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 7443244656150325..a87adcf182f49fc4 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1322,6 +1322,20 @@ size_t ksize(const void *objp)
 }
 EXPORT_SYMBOL(ksize);
 
+#ifdef CONFIG_BPF_SYSCALL
+__bpf_kfunc_start_defs();
+
+__bpf_kfunc struct kmem_cache *bpf_get_slab_cache(u64 addr)
+{
+	struct slab *slab;
+
+	slab = virt_to_slab((void *)(long)addr);
+	return slab ? slab->slab_cache : NULL;
+}
+
+__bpf_kfunc_end_defs();
+#endif /* CONFIG_BPF_SYSCALL */
+
 /* Tracepoints definitions. */
 EXPORT_TRACEPOINT_SYMBOL(kmalloc);
 EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);

From patchwork Wed Sep 25 22:30:23 2024
X-Patchwork-Submitter: Namhyung Kim
X-Patchwork-Id: 13812495
X-Patchwork-Delegate: bpf@iogearbox.net
From: Namhyung Kim
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
    John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
    LKML, bpf@vger.kernel.org, Andrew Morton, Christoph Lameter,
    Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka,
    Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org
Subject: [RFC/PATCH bpf-next 3/3] selftests/bpf: Add a test for slab_iter
Date: Wed, 25 Sep 2024 15:30:23 -0700
Message-ID: <20240925223023.735947-4-namhyung@kernel.org>
In-Reply-To: <20240925223023.735947-1-namhyung@kernel.org>
References: <20240925223023.735947-1-namhyung@kernel.org>
X-Patchwork-State: RFC

The test traverses all slab caches using the new slab iterator and
checks whether the current task's pointer belongs to the "task_struct"
slab cache.
Signed-off-by: Namhyung Kim
---
 .../selftests/bpf/prog_tests/slab_iter.c      | 64 +++++++++++++++++++
 tools/testing/selftests/bpf/progs/bpf_iter.h  |  7 ++
 tools/testing/selftests/bpf/progs/slab_iter.c | 62 ++++++++++++++++++
 3 files changed, 133 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/slab_iter.c
 create mode 100644 tools/testing/selftests/bpf/progs/slab_iter.c

diff --git a/tools/testing/selftests/bpf/prog_tests/slab_iter.c b/tools/testing/selftests/bpf/prog_tests/slab_iter.c
new file mode 100644
index 0000000000000000..e461f6e703d67dc8
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/slab_iter.c
@@ -0,0 +1,64 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Google */
+
+#include <test_progs.h>
+#include <bpf/libbpf.h>
+#include <bpf/bpf.h>
+#include "slab_iter.skel.h"
+
+static void test_slab_iter_check_task(struct slab_iter *skel)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, opts,
+		.flags = BPF_F_TEST_RUN_ON_CPU,
+	);
+	int prog_fd = bpf_program__fd(skel->progs.check_task_struct);
+
+	/* get task_struct and check if it's from a slab cache */
+	bpf_prog_test_run_opts(prog_fd, &opts);
+
+	/* the BPF program should set the 'found' variable */
+	ASSERT_EQ(skel->bss->found, 1, "found task_struct");
+}
+
+void test_slab_iter(void)
+{
+	DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
+	struct slab_iter *skel = NULL;
+	union bpf_iter_link_info linfo = {};
+	struct bpf_link *link;
+	char buf[1024];
+	int iter_fd;
+
+	skel = slab_iter__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "slab_iter__open_and_load"))
+		return;
+
+	opts.link_info = &linfo;
+	opts.link_info_len = sizeof(linfo);
+
+	link = bpf_program__attach_iter(skel->progs.slab_info_collector, &opts);
+	if (!ASSERT_OK_PTR(link, "attach_iter"))
+		goto destroy;
+
+	iter_fd = bpf_iter_create(bpf_link__fd(link));
+	if (!ASSERT_GE(iter_fd, 0, "iter_create"))
+		goto free_link;
+
+	memset(buf, 0, sizeof(buf));
+	while (read(iter_fd, buf, sizeof(buf)) > 0) {
+		/* read out all contents */
+		printf("%s", buf);
+	}
+
+	/* next reads should return 0 */
+	ASSERT_EQ(read(iter_fd, buf, sizeof(buf)), 0, "read");
+
+	test_slab_iter_check_task(skel);
+
+	close(iter_fd);
+
+free_link:
+	bpf_link__destroy(link);
+destroy:
+	slab_iter__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter.h b/tools/testing/selftests/bpf/progs/bpf_iter.h
index c41ee80533ca219a..33ea7181e1f03560 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter.h
+++ b/tools/testing/selftests/bpf/progs/bpf_iter.h
@@ -24,6 +24,7 @@
 #define BTF_F_PTR_RAW BTF_F_PTR_RAW___not_used
 #define BTF_F_ZERO BTF_F_ZERO___not_used
 #define bpf_iter__ksym bpf_iter__ksym___not_used
+#define bpf_iter__slab bpf_iter__slab___not_used
 #include "vmlinux.h"
 #undef bpf_iter_meta
 #undef bpf_iter__bpf_map
@@ -48,6 +49,7 @@
 #undef BTF_F_PTR_RAW
 #undef BTF_F_ZERO
 #undef bpf_iter__ksym
+#undef bpf_iter__slab
 
 struct bpf_iter_meta {
 	struct seq_file *seq;
@@ -165,3 +167,8 @@ struct bpf_iter__ksym {
 	struct bpf_iter_meta *meta;
 	struct kallsym_iter *ksym;
 };
+
+struct bpf_iter__slab {
+	struct bpf_iter_meta *meta;
+	struct kmem_cache *s;
+} __attribute__((preserve_access_index));
diff --git a/tools/testing/selftests/bpf/progs/slab_iter.c b/tools/testing/selftests/bpf/progs/slab_iter.c
new file mode 100644
index 0000000000000000..f806365506851774
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/slab_iter.c
@@ -0,0 +1,62 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Google */
+
+#include "bpf_iter.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+#define SLAB_NAME_MAX	256
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(key_size, sizeof(void *));
+	__uint(value_size, SLAB_NAME_MAX);
+	__uint(max_entries, 1024);
+} slab_hash SEC(".maps");
+
+extern struct kmem_cache *bpf_get_slab_cache(__u64 addr) __ksym;
+
+/* result, will be checked by userspace */
+int found;
+
+SEC("iter/slab")
+int slab_info_collector(struct bpf_iter__slab *ctx)
+{
+	struct seq_file *seq = ctx->meta->seq;
+	struct kmem_cache *s = ctx->s;
+
+	if (s) {
+		char name[SLAB_NAME_MAX];
+
+		/*
+		 * Print the slab info to check that slab_iter implements the
+		 * seq interface properly.  It's also useful for debugging.
+		 */
+		BPF_SEQ_PRINTF(seq, "%s: %u\n", s->name, s->object_size);
+
+		bpf_probe_read_kernel_str(name, sizeof(name), s->name);
+		bpf_map_update_elem(&slab_hash, &s, name, BPF_NOEXIST);
+	}
+
+	return 0;
+}
+
+SEC("raw_tp/bpf_test_finish")
+int BPF_PROG(check_task_struct)
+{
+	__u64 curr = bpf_get_current_task();
+	struct kmem_cache *s;
+	char *name;
+
+	s = bpf_get_slab_cache(curr);
+	if (s == NULL)
+		return 0;
+
+	name = bpf_map_lookup_elem(&slab_hash, &s);
+	if (name && !bpf_strncmp(name, 11, "task_struct"))
+		found = 1;
+
+	return 0;
+}