
[RFC,v3,1/3] bpf: cgroup: Introduce helper cgroup_bpf_current_enabled()

Message ID 20231213143813.6818-2-michael.weiss@aisec.fraunhofer.de
State RFC
Delegated to: BPF
Series devguard: guard mknod for non-initial user namespace

Checks

Context Check Description
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-0…42 success All 43 CI jobs passed: Lint, ShellCheck, Unittests, matrix validation/set-matrix, builds (debug and release), veristat, and the test_maps, test_progs, test_progs_no_alu32, test_progs_cpuv4, test_progs_parallel, test_progs_no_alu32_parallel, and test_verifier suites on aarch64-gcc, s390x-gcc, x86_64-gcc, x86_64-llvm-17, and x86_64-llvm-18
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Guessed tree name to be net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 2179 this patch: 2179
netdev/cc_maintainers warning 1 maintainers not CCed: yonghong.song@linux.dev
netdev/build_clang success Errors and warnings before: 1258 this patch: 1258
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 2232 this patch: 2232
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 28 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Michael Weiß Dec. 13, 2023, 2:38 p.m. UTC
This helper can be used to check if a cgroup-bpf-specific program is
active for the current task.

Signed-off-by: Michael Weiß <michael.weiss@aisec.fraunhofer.de>
Reviewed-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
---
 include/linux/bpf-cgroup.h |  2 ++
 kernel/bpf/cgroup.c        | 14 ++++++++++++++
 2 files changed, 16 insertions(+)
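
As a usage illustration (assumed, not part of this patch: the actual
consumer is the devguard LSM hook later in this series, and the
CGROUP_DEVICE attach type is picked here only as an example), a caller
could gate its policy logic on whether any cgroup-bpf program of the
relevant attach type is active for the current task:

	#include <linux/bpf-cgroup.h>

	/* Sketch: true if a device cgroup program is attached for the
	 * current task's cgroup on the default hierarchy, i.e. a BPF
	 * device policy exists that could decide a mknod() request.
	 */
	static bool devguard_policy_active(void)
	{
		return cgroup_bpf_current_enabled(CGROUP_DEVICE);
	}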

Comments

Yonghong Song Dec. 13, 2023, 4:59 p.m. UTC | #1
On 12/13/23 6:38 AM, Michael Weiß wrote:
> This helper can be used to check if a cgroup-bpf-specific program is
> active for the current task.
>
> Signed-off-by: Michael Weiß <michael.weiss@aisec.fraunhofer.de>
> Reviewed-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
> ---
>   include/linux/bpf-cgroup.h |  2 ++
>   kernel/bpf/cgroup.c        | 14 ++++++++++++++
>   2 files changed, 16 insertions(+)
>
> diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
> index a789266feac3..7cb49bde09ff 100644
> --- a/include/linux/bpf-cgroup.h
> +++ b/include/linux/bpf-cgroup.h
> @@ -191,6 +191,8 @@ static inline bool cgroup_bpf_sock_enabled(struct sock *sk,
>   	return array != &bpf_empty_prog_array.hdr;
>   }
>   
> +bool cgroup_bpf_current_enabled(enum cgroup_bpf_attach_type type);
> +
>   /* Wrappers for __cgroup_bpf_run_filter_skb() guarded by cgroup_bpf_enabled. */
>   #define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk, skb)			      \
>   ({									      \
> diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
> index 491d20038cbe..9007165abe8c 100644
> --- a/kernel/bpf/cgroup.c
> +++ b/kernel/bpf/cgroup.c
> @@ -24,6 +24,20 @@
>   DEFINE_STATIC_KEY_ARRAY_FALSE(cgroup_bpf_enabled_key, MAX_CGROUP_BPF_ATTACH_TYPE);
>   EXPORT_SYMBOL(cgroup_bpf_enabled_key);
>   
> +bool cgroup_bpf_current_enabled(enum cgroup_bpf_attach_type type)
> +{
> +	struct cgroup *cgrp;
> +	struct bpf_prog_array *array;
> +
> +	rcu_read_lock();
> +	cgrp = task_dfl_cgroup(current);
> +	rcu_read_unlock();
> +
> +	array = rcu_access_pointer(cgrp->bpf.effective[type]);

This seems wrong here. The cgrp could become invalid once we leave
the RCU critical section.

> +	return array != &bpf_empty_prog_array.hdr;

I guess you need to include the 'array' usage in the RCU critical
section as well. So overall it should look like:

	rcu_read_lock();
	cgrp = task_dfl_cgroup(current);
	array = rcu_access_pointer(cgrp->bpf.effective[type]);
	bpf_prog_exists = array != &bpf_empty_prog_array.hdr;
	rcu_read_unlock();

	return bpf_prog_exists;

> +}
> +EXPORT_SYMBOL(cgroup_bpf_current_enabled);
> +
>   /* __always_inline is necessary to prevent indirect call through run_prog
>    * function pointer.
>    */
Michael Weiß Dec. 14, 2023, 8:17 a.m. UTC | #2
On 13.12.23 17:59, Yonghong Song wrote:
> 
> On 12/13/23 6:38 AM, Michael Weiß wrote:
>> This helper can be used to check if a cgroup-bpf-specific program is
>> active for the current task.
>>
>> Signed-off-by: Michael Weiß <michael.weiss@aisec.fraunhofer.de>
>> Reviewed-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
>> ---
>>   include/linux/bpf-cgroup.h |  2 ++
>>   kernel/bpf/cgroup.c        | 14 ++++++++++++++
>>   2 files changed, 16 insertions(+)
>>
>> diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
>> index a789266feac3..7cb49bde09ff 100644
>> --- a/include/linux/bpf-cgroup.h
>> +++ b/include/linux/bpf-cgroup.h
>> @@ -191,6 +191,8 @@ static inline bool cgroup_bpf_sock_enabled(struct sock *sk,
>>   	return array != &bpf_empty_prog_array.hdr;
>>   }
>>   
>> +bool cgroup_bpf_current_enabled(enum cgroup_bpf_attach_type type);
>> +
>>   /* Wrappers for __cgroup_bpf_run_filter_skb() guarded by cgroup_bpf_enabled. */
>>   #define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk, skb)			      \
>>   ({									      \
>> diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
>> index 491d20038cbe..9007165abe8c 100644
>> --- a/kernel/bpf/cgroup.c
>> +++ b/kernel/bpf/cgroup.c
>> @@ -24,6 +24,20 @@
>>   DEFINE_STATIC_KEY_ARRAY_FALSE(cgroup_bpf_enabled_key, MAX_CGROUP_BPF_ATTACH_TYPE);
>>   EXPORT_SYMBOL(cgroup_bpf_enabled_key);
>>   
>> +bool cgroup_bpf_current_enabled(enum cgroup_bpf_attach_type type)
>> +{
>> +	struct cgroup *cgrp;
>> +	struct bpf_prog_array *array;
>> +
>> +	rcu_read_lock();
>> +	cgrp = task_dfl_cgroup(current);
>> +	rcu_read_unlock();
>> +
>> +	array = rcu_access_pointer(cgrp->bpf.effective[type]);
> 
> This seems wrong here. The cgrp could become invalid once we leave
> the RCU critical section.

You are right, maybe we were too opportunistic here. We just wanted
to hold the lock for as short a time as possible.

> 
>> +	return array != &bpf_empty_prog_array.hdr;
> 
> I guess you need to include the 'array' usage in the RCU critical
> section as well. So overall it should look like:
> 
> 	rcu_read_lock();
> 	cgrp = task_dfl_cgroup(current);
> 	array = rcu_access_pointer(cgrp->bpf.effective[type]);

Looks reasonable, but since we are in the critical section now, I would
change this to rcu_dereference() then.

> 	bpf_prog_exists = array != &bpf_empty_prog_array.hdr;
> 	rcu_read_unlock();
> 
> 	return bpf_prog_exists;
> 
>> +}
>> +EXPORT_SYMBOL(cgroup_bpf_current_enabled);
>> +
>>   /* __always_inline is necessary to prevent indirect call through run_prog
>>    * function pointer.
>>    */
Yonghong Song Dec. 15, 2023, 2:31 p.m. UTC | #3
On 12/14/23 12:17 AM, Michael Weiß wrote:
> On 13.12.23 17:59, Yonghong Song wrote:
>> On 12/13/23 6:38 AM, Michael Weiß wrote:
>>> This helper can be used to check if a cgroup-bpf-specific program is
>>> active for the current task.
>>>
>>> Signed-off-by: Michael Weiß <michael.weiss@aisec.fraunhofer.de>
>>> Reviewed-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
>>> ---
>>>    include/linux/bpf-cgroup.h |  2 ++
>>>    kernel/bpf/cgroup.c        | 14 ++++++++++++++
>>>    2 files changed, 16 insertions(+)
>>>
>>> diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
>>> index a789266feac3..7cb49bde09ff 100644
>>> --- a/include/linux/bpf-cgroup.h
>>> +++ b/include/linux/bpf-cgroup.h
>>> @@ -191,6 +191,8 @@ static inline bool cgroup_bpf_sock_enabled(struct sock *sk,
>>>    	return array != &bpf_empty_prog_array.hdr;
>>>    }
>>>    
>>> +bool cgroup_bpf_current_enabled(enum cgroup_bpf_attach_type type);
>>> +
>>>    /* Wrappers for __cgroup_bpf_run_filter_skb() guarded by cgroup_bpf_enabled. */
>>>    #define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk, skb)			      \
>>>    ({									      \
>>> diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
>>> index 491d20038cbe..9007165abe8c 100644
>>> --- a/kernel/bpf/cgroup.c
>>> +++ b/kernel/bpf/cgroup.c
>>> @@ -24,6 +24,20 @@
>>>    DEFINE_STATIC_KEY_ARRAY_FALSE(cgroup_bpf_enabled_key, MAX_CGROUP_BPF_ATTACH_TYPE);
>>>    EXPORT_SYMBOL(cgroup_bpf_enabled_key);
>>>    
>>> +bool cgroup_bpf_current_enabled(enum cgroup_bpf_attach_type type)
>>> +{
>>> +	struct cgroup *cgrp;
>>> +	struct bpf_prog_array *array;
>>> +
>>> +	rcu_read_lock();
>>> +	cgrp = task_dfl_cgroup(current);
>>> +	rcu_read_unlock();
>>> +
>>> +	array = rcu_access_pointer(cgrp->bpf.effective[type]);
>> This seems wrong here. The cgrp could become invalid once we leave
>> the RCU critical section.
> You are right, maybe we were too opportunistic here. We just wanted
> to hold the lock for as short a time as possible.
>
>>> +	return array != &bpf_empty_prog_array.hdr;
>> I guess you need to include the 'array' usage in the RCU critical
>> section as well. So overall it should look like:
>>
>> 	rcu_read_lock();
>> 	cgrp = task_dfl_cgroup(current);
>> 	array = rcu_access_pointer(cgrp->bpf.effective[type]);
> Looks reasonable, but since we are in the critical section now, I
> would change this to rcu_dereference() then.

Copy-paste error. Right, we should use rcu_dereference() indeed.

>
>> 	bpf_prog_exists = array != &bpf_empty_prog_array.hdr;
>> 	rcu_read_unlock();
>>
>> 	return bpf_prog_exists;
>>
>>> +}
>>> +EXPORT_SYMBOL(cgroup_bpf_current_enabled);
>>> +
>>>    /* __always_inline is necessary to prevent indirect call through run_prog
>>>     * function pointer.
>>>     */
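
Taking both review points together (keep the RCU read-side critical
section open across the effective prog array access, and use
rcu_dereference() inside it), the helper would presumably end up as the
sketch below. This is an assumed final form based on the discussion
above, not code posted in this thread:

	bool cgroup_bpf_current_enabled(enum cgroup_bpf_attach_type type)
	{
		struct cgroup *cgrp;
		struct bpf_prog_array *array;
		bool bpf_prog_exists;

		/* Both the cgroup lookup and the effective prog array access
		 * must happen inside the same RCU read-side critical section,
		 * so neither can be freed while we dereference them.
		 */
		rcu_read_lock();
		cgrp = task_dfl_cgroup(current);
		array = rcu_dereference(cgrp->bpf.effective[type]);
		bpf_prog_exists = array != &bpf_empty_prog_array.hdr;
		rcu_read_unlock();

		return bpf_prog_exists;
	}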

Patch

diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
index a789266feac3..7cb49bde09ff 100644
--- a/include/linux/bpf-cgroup.h
+++ b/include/linux/bpf-cgroup.h
@@ -191,6 +191,8 @@  static inline bool cgroup_bpf_sock_enabled(struct sock *sk,
 	return array != &bpf_empty_prog_array.hdr;
 }
 
+bool cgroup_bpf_current_enabled(enum cgroup_bpf_attach_type type);
+
 /* Wrappers for __cgroup_bpf_run_filter_skb() guarded by cgroup_bpf_enabled. */
 #define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk, skb)			      \
 ({									      \
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index 491d20038cbe..9007165abe8c 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -24,6 +24,20 @@ 
 DEFINE_STATIC_KEY_ARRAY_FALSE(cgroup_bpf_enabled_key, MAX_CGROUP_BPF_ATTACH_TYPE);
 EXPORT_SYMBOL(cgroup_bpf_enabled_key);
 
+bool cgroup_bpf_current_enabled(enum cgroup_bpf_attach_type type)
+{
+	struct cgroup *cgrp;
+	struct bpf_prog_array *array;
+
+	rcu_read_lock();
+	cgrp = task_dfl_cgroup(current);
+	rcu_read_unlock();
+
+	array = rcu_access_pointer(cgrp->bpf.effective[type]);
+	return array != &bpf_empty_prog_array.hdr;
+}
+EXPORT_SYMBOL(cgroup_bpf_current_enabled);
+
 /* __always_inline is necessary to prevent indirect call through run_prog
  * function pointer.
  */