| Message ID | 20241010232505.1339892-3-namhyung@kernel.org (mailing list archive) |
|---|---|
| State | New |
| Series | bpf: Add kmem_cache iterator and kfunc |
On Thu, Oct 10, 2024 at 4:25 PM Namhyung Kim <namhyung@kernel.org> wrote:
>
> The bpf_get_kmem_cache() kfunc looks up slab cache information for a
> virtual address, like virt_to_cache() does. If the address points to
> a slab object, it returns a valid kmem_cache pointer; otherwise NULL
> is returned.
>
> It doesn't grab a reference count on the kmem_cache, so the caller is
> responsible for managing access. The returned pointer is marked
> PTR_UNTRUSTED, and the kfunc has KF_RCU_PROTECTED since the slab
> object might be protected by RCU.

...

> +BTF_ID_FLAGS(func, bpf_get_kmem_cache, KF_RCU_PROTECTED)

This flag is unnecessary. PTR_UNTRUSTED can point to absolutely any
memory. In this case it likely points to a valid kmem_cache, but the
verifier will guard all accesses with probe_read anyway.

I can remove this flag while applying.
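For readers unfamiliar with PTR_UNTRUSTED semantics: the verifier rewrites every load through such a pointer into a probe_read, so even a stale or bogus pointer cannot crash the kernel. A minimal sketch of a consumer of the kfunc; the fentry attach point, map-less output via bpf_printk, and variable names are illustrative assumptions, not taken from the series:

```c
/*
 * Sketch only: shows what "guarded by probe_read" means in practice.
 * bpf_get_kmem_cache() is the kfunc from this series; everything else
 * here is an illustrative assumption.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

extern struct kmem_cache *bpf_get_kmem_cache(u64 addr) __ksym;

char _license[] SEC("license") = "GPL";

SEC("fentry/kmem_cache_free")
int BPF_PROG(trace_free, struct kmem_cache *cachep, void *objp)
{
	char name[32] = {};
	/* Returns NULL when objp is not a slab-backed address. */
	struct kmem_cache *s = bpf_get_kmem_cache((u64)objp);

	if (s) {
		/*
		 * s is PTR_UNTRUSTED: the s->name load below is rewritten
		 * by the verifier into a probe_read, so it is safe even if
		 * the pointer went stale.
		 */
		bpf_probe_read_kernel_str(name, sizeof(name), s->name);
		bpf_printk("%p freed from cache %s", objp, name);
	}
	return 0;
}
```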
On Fri, Oct 11, 2024 at 11:35:27AM -0700, Alexei Starovoitov wrote:
> On Thu, Oct 10, 2024 at 4:25 PM Namhyung Kim <namhyung@kernel.org> wrote:
> > [...]
> > +BTF_ID_FLAGS(func, bpf_get_kmem_cache, KF_RCU_PROTECTED)
>
> This flag is unnecessary. PTR_UNTRUSTED can point to absolutely any
> memory. In this case it likely points to a valid kmem_cache, but the
> verifier will guard all accesses with probe_read anyway.
>
> I can remove this flag while applying.

Ok, I'd be happy if you would remove it.

Thanks,
Namhyung
Hi Alexei,

On Fri, Oct 11, 2024 at 12:14:14PM -0700, Namhyung Kim wrote:
> On Fri, Oct 11, 2024 at 11:35:27AM -0700, Alexei Starovoitov wrote:
> > [...]
> > I can remove this flag while applying.
>
> Ok, I'd be happy if you would remove it.

You will need to update the bpf_rcu_read_lock/unlock() in the test code
(patch 3). I can send v6 with that and Vlastimil's Ack if you want.

Thanks,
Namhyung
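For context on what that test-code update involves: a KF_RCU_PROTECTED kfunc may only be called inside an explicit RCU read-side section in sleepable programs, so dropping the flag lets the selftest drop the lock pair. A paraphrased sketch of the change being discussed, not a quote of patch 3:

```c
/*
 * Sketch only. With KF_RCU_PROTECTED, sleepable programs had to wrap
 * the call in an explicit RCU read-side section:
 */
struct kmem_cache *s;

bpf_rcu_read_lock();
s = bpf_get_kmem_cache(addr);	/* addr: some u64 under test */
bpf_rcu_read_unlock();

/* With the flag dropped, the bare call is accepted by the verifier: */
s = bpf_get_kmem_cache(addr);
```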
On Mon, Oct 14, 2024 at 11:13 AM Namhyung Kim <namhyung@kernel.org> wrote:
> [...]
>
> You will need to update the bpf_rcu_read_lock/unlock() in the test code
> (patch 3). I can send v6 with that and Vlastimil's Ack if you want.

Fixed all that while applying.

Could you please follow up with an open-coded iterator version of the
same slab iterator, so that progs can iterate slabs as a normal
for/while loop?
On Mon, Oct 14, 2024 at 06:50:49PM -0700, Alexei Starovoitov wrote:
> [...]
>
> Could you please follow up with an open-coded iterator version of the
> same slab iterator, so that progs can iterate slabs as a normal
> for/while loop?

I'm not sure I'm following. Do you want a new test program to iterate
kmem_caches by reading list pointers manually? How can I grab the
slab_mutex then?

Thanks,
Namhyung
On Tue, Oct 15, 2024 at 11:20 AM Namhyung Kim <namhyung@kernel.org> wrote:
> [...]
>
> I'm not sure I'm following. Do you want a new test program to iterate
> kmem_caches by reading list pointers manually? How can I grab the
> slab_mutex then?

No.
See the bpf_iter_task_new/_next/_destroy kfuncs and
commit c68a78ffe2cb ("bpf: Introduce task open coded iterator kfuncs").
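For readers unfamiliar with the referenced commit: open-coded iterators expose _new/_next/_destroy kfuncs that the program drives itself, instead of the kernel invoking a callback per element, so locking stays inside the kfunc implementations. A sketch of what the requested slab version could look like; the bpf_iter_kmem_cache_* names are hypothetical placeholders modeled on the task iterator, not an existing API:

```c
/*
 * Hypothetical open-coded slab iterator, following the pattern of
 * bpf_iter_task_new/_next/_destroy from commit c68a78ffe2cb.
 */
struct bpf_iter_kmem_cache it;
struct kmem_cache *pos;

bpf_iter_kmem_cache_new(&it);
while ((pos = bpf_iter_kmem_cache_next(&it)) != NULL) {
	/* inspect pos like any other kmem_cache pointer */
}
bpf_iter_kmem_cache_destroy(&it);
```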
On Tue, Oct 15, 2024 at 11:25:11AM -0700, Alexei Starovoitov wrote:
> [...]
>
> No.
> See the bpf_iter_task_new/_next/_destroy kfuncs and
> commit c68a78ffe2cb ("bpf: Introduce task open coded iterator kfuncs").

Oh, ok. Thanks for the pointer, I'll take a look and add the open-coded
version.

Thanks,
Namhyung
```diff
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 4053f279ed4cc7ab..7bfef9378ab21267 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -3090,6 +3090,7 @@ BTF_ID_FLAGS(func, bpf_iter_bits_new, KF_ITER_NEW)
 BTF_ID_FLAGS(func, bpf_iter_bits_next, KF_ITER_NEXT | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_iter_bits_destroy, KF_ITER_DESTROY)
 BTF_ID_FLAGS(func, bpf_copy_from_user_str, KF_SLEEPABLE)
+BTF_ID_FLAGS(func, bpf_get_kmem_cache, KF_RCU_PROTECTED)
 BTF_KFUNCS_END(common_btf_ids)
 
 static const struct btf_kfunc_id_set common_kfunc_set = {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index cfc62e0776bff2c8..f514247ba8ba8a57 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -11259,6 +11259,7 @@ enum special_kfunc_type {
 	KF_bpf_preempt_enable,
 	KF_bpf_iter_css_task_new,
 	KF_bpf_session_cookie,
+	KF_bpf_get_kmem_cache,
 };
 
 BTF_SET_START(special_kfunc_set)
@@ -11324,6 +11325,7 @@ BTF_ID(func, bpf_session_cookie)
 #else
 BTF_ID_UNUSED
 #endif
+BTF_ID(func, bpf_get_kmem_cache)
 
 static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta)
 {
@@ -12834,6 +12836,9 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		regs[BPF_REG_0].type = PTR_TO_BTF_ID;
 		regs[BPF_REG_0].btf_id = ptr_type_id;
 
+		if (meta.func_id == special_kfunc_list[KF_bpf_get_kmem_cache])
+			regs[BPF_REG_0].type |= PTR_UNTRUSTED;
+
 		if (is_iter_next_kfunc(&meta)) {
 			struct bpf_reg_state *cur_iter;
 
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 7443244656150325..5484e1cd812f698e 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1322,6 +1322,25 @@ size_t ksize(const void *objp)
 }
 EXPORT_SYMBOL(ksize);
 
+#ifdef CONFIG_BPF_SYSCALL
+#include <linux/btf.h>
+
+__bpf_kfunc_start_defs();
+
+__bpf_kfunc struct kmem_cache *bpf_get_kmem_cache(u64 addr)
+{
+	struct slab *slab;
+
+	if (!virt_addr_valid(addr))
+		return NULL;
+
+	slab = virt_to_slab((void *)(long)addr);
+	return slab ? slab->slab_cache : NULL;
+}
+
+__bpf_kfunc_end_defs();
+#endif /* CONFIG_BPF_SYSCALL */
+
 /* Tracepoints definitions. */
 EXPORT_TRACEPOINT_SYMBOL(kmalloc);
 EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);
```
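The kfunc above is one half of the series; the other half is the kmem_cache iterator that walks every slab cache. A sketch of an iterator program; the bpf_iter__kmem_cache context struct and its s field come from the series' earlier patches and are assumptions here, not shown in this message:

```c
/*
 * Sketch of a program for the iter/kmem_cache target assumed to exist
 * from patch 1 of this series.  Field names are assumptions.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

SEC("iter/kmem_cache")
int dump_kmem_cache(struct bpf_iter__kmem_cache *ctx)
{
	struct seq_file *seq = ctx->meta->seq;
	struct kmem_cache *s = ctx->s;
	char name[64];

	if (!s)
		return 0;

	/* Copy the cache name out before printing it to the seq file. */
	bpf_probe_read_kernel_str(name, sizeof(name), s->name);
	BPF_SEQ_PRINTF(seq, "%s object_size=%u\n", name, s->object_size);
	return 0;
}
```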