
[v2,bpf-next,6/7] bpf: Allow bpf_spin_{lock,unlock} in sleepable progs

Message ID 20230821193311.3290257-7-davemarchevsky@fb.com (mailing list archive)
State Accepted
Commit 5861d1e8dbc4e1a03ebffb96ac041026cdd34c07
Delegated to: BPF
Series BPF Refcount followups 3: bpf_mem_free_rcu refcounted nodes

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1380 this patch: 1380
netdev/cc_maintainers warning 7 maintainers not CCed: kpsingh@kernel.org martin.lau@linux.dev john.fastabend@gmail.com sdf@google.com song@kernel.org jolsa@kernel.org haoluo@google.com
netdev/build_clang success Errors and warnings before: 1353 this patch: 1353
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1403 this patch: 1403
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 35 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-8 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-19 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-28 success Logs for veristat
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-11 fail Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-6 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-0 success Logs for ${{ matrix.test }} on ${{ matrix.arch }} with ${{ matrix.toolchain_full }}
bpf/vmtest-bpf-next-VM_Test-3 fail Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-7 success Logs for veristat
bpf/vmtest-bpf-next-VM_Test-5 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for aarch64 with gcc

Commit Message

Dave Marchevsky Aug. 21, 2023, 7:33 p.m. UTC
Commit 9e7a4d9831e8 ("bpf: Allow LSM programs to use bpf spin locks")
disabled bpf_spin_lock usage in sleepable progs, stating:

 Sleepable LSM programs can be preempted, which means that allowing spin
 locks will need more work (disabling preemption and the verifier
 ensuring that no sleepable helpers are called when a spin lock is
 held).

This patch disables preemption before grabbing bpf_spin_lock. The second
requirement above, that no sleepable helpers are called while a spin
lock is held, is already implicitly enforced by current verifier logic:
helper calls are disallowed inside a spin_lock critical section (CS),
with a few exceptions, none of which can sleep.

Due to the above preemption change, a bpf_spin_lock CS can also be
considered an RCU CS, so the verifier's in_rcu_cs() check is modified
to account for this.

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
---
 kernel/bpf/helpers.c  | 2 ++
 kernel/bpf/verifier.c | 9 +++------
 2 files changed, 5 insertions(+), 6 deletions(-)
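
For illustration, a minimal sketch of the kind of program this patch
enables: a sleepable LSM program taking a bpf_spin_lock. The map, field,
and program names below are hypothetical, not taken from the series
selftests.

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct val {
	struct bpf_spin_lock lock;
	int counter;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct val);
} counters SEC(".maps");

/* The ".s" suffix marks the program sleepable; before this patch the
 * verifier rejected the bpf_spin_lock call below with "sleepable progs
 * cannot use bpf_spin_lock yet".
 */
SEC("lsm.s/file_open")
int BPF_PROG(count_opens, struct file *file)
{
	struct val *v;
	int key = 0;

	v = bpf_map_lookup_elem(&counters, &key);
	if (!v)
		return 0;
	bpf_spin_lock(&v->lock);	/* now also disables preemption */
	v->counter++;
	bpf_spin_unlock(&v->lock);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";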

Comments

Yonghong Song Aug. 22, 2023, 2:53 a.m. UTC | #1
On 8/21/23 12:33 PM, Dave Marchevsky wrote:
> [...]
> 
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 945a85e25ac5..8bd3812fb8df 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -286,6 +286,7 @@ static inline void __bpf_spin_lock(struct bpf_spin_lock *lock)
>   	compiletime_assert(u.val == 0, "__ARCH_SPIN_LOCK_UNLOCKED not 0");
>   	BUILD_BUG_ON(sizeof(*l) != sizeof(__u32));
>   	BUILD_BUG_ON(sizeof(*lock) != sizeof(__u32));
> +	preempt_disable();
>   	arch_spin_lock(l);
>   }
>   
> @@ -294,6 +295,7 @@ static inline void __bpf_spin_unlock(struct bpf_spin_lock *lock)
>   	arch_spinlock_t *l = (void *)lock;
>   
>   	arch_spin_unlock(l);
> +	preempt_enable();
>   }

preempt_disable()/preempt_enable() is not needed. Is it possible we can
have a different bpf_spin_lock proto, e.g.,
bpf_spin_lock_sleepable_proto, which implements the above with
preempt_disable()/preempt_enable()? Not sure how much difference my
proposal would make, since the current bpf_spin_lock() region does not
support function calls except for some graph API kfunc operations.

> [...]
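A rough sketch of the separate-proto idea suggested above, purely
hypothetical (the helper name, and how the verifier would select it only
for sleepable programs, are not part of this patch):

BPF_CALL_1(bpf_spin_lock_sleepable, struct bpf_spin_lock *, lock)
{
	/* wrap the existing lock path in a preempt-disabled region */
	preempt_disable();
	__bpf_spin_lock_irqsave(lock);
	return 0;
}

const struct bpf_func_proto bpf_spin_lock_sleepable_proto = {
	.func		= bpf_spin_lock_sleepable,
	.gpl_only	= false,
	.ret_type	= RET_VOID,
	.arg1_type	= ARG_PTR_TO_SPIN_LOCK,
};

/* The unlock counterpart would undo this in reverse order:
 * __bpf_spin_unlock_irqrestore(lock); preempt_enable();
 */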
Alexei Starovoitov Aug. 22, 2023, 7:46 p.m. UTC | #2
On Mon, Aug 21, 2023 at 07:53:22PM -0700, Yonghong Song wrote:
> 
> On 8/21/23 12:33 PM, Dave Marchevsky wrote:
> > [...]
> > +	preempt_disable();
> >   	arch_spin_lock(l);
> >   }
> > [...]
> >   	arch_spin_unlock(l);
> > +	preempt_enable();
> >   }
> 
> preempt_disable()/preempt_enable() is not needed. Is it possible we can

preempt_disable is needed in all cases. This mistake slipped in when
we converted preempt-disabled bpf progs into migrate-disabled ones.
For example, see how raw_spin_lock does it.
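
For reference, the pattern being pointed to looks roughly like this in
include/linux/spinlock_api_smp.h (paraphrased from the kernel source;
check the tree for the exact form):

static inline void __raw_spin_lock(raw_spinlock_t *lock)
{
	preempt_disable();
	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
	LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
}

The lock itself disables preemption rather than relying on the caller
having done so; the patch gives __bpf_spin_lock() the same property.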
Yonghong Song Aug. 22, 2023, 7:53 p.m. UTC | #3
On 8/22/23 12:46 PM, Alexei Starovoitov wrote:
> On Mon, Aug 21, 2023 at 07:53:22PM -0700, Yonghong Song wrote:
>> [...]
>>
>> preempt_disable()/preempt_enable() is not needed. Is it possible we can
> 
> preempt_disable is needed in all cases. This mistake slipped in when
> we converted preempt disabled bpf progs into migrate disabled.
> For example, see how raw_spin_lock is doing it.

Okay, so this was a bug that slipped in. That explains the difference
between our bpf_spin_lock and raw_spin_lock. In that case the change
makes sense.

Patch

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 945a85e25ac5..8bd3812fb8df 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -286,6 +286,7 @@  static inline void __bpf_spin_lock(struct bpf_spin_lock *lock)
 	compiletime_assert(u.val == 0, "__ARCH_SPIN_LOCK_UNLOCKED not 0");
 	BUILD_BUG_ON(sizeof(*l) != sizeof(__u32));
 	BUILD_BUG_ON(sizeof(*lock) != sizeof(__u32));
+	preempt_disable();
 	arch_spin_lock(l);
 }
 
@@ -294,6 +295,7 @@  static inline void __bpf_spin_unlock(struct bpf_spin_lock *lock)
 	arch_spinlock_t *l = (void *)lock;
 
 	arch_spin_unlock(l);
+	preempt_enable();
 }
 
 #else
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 55607ab30522..33e4b854d2d4 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5062,7 +5062,9 @@  static int map_kptr_match_type(struct bpf_verifier_env *env,
  */
 static bool in_rcu_cs(struct bpf_verifier_env *env)
 {
-	return env->cur_state->active_rcu_lock || !env->prog->aux->sleepable;
+	return env->cur_state->active_rcu_lock ||
+	       env->cur_state->active_lock.ptr ||
+	       !env->prog->aux->sleepable;
 }
 
 /* Once GCC supports btf_type_tag the following mechanism will be replaced with tag check */
@@ -16980,11 +16982,6 @@  static int check_map_prog_compatibility(struct bpf_verifier_env *env,
 			verbose(env, "tracing progs cannot use bpf_spin_lock yet\n");
 			return -EINVAL;
 		}
-
-		if (prog->aux->sleepable) {
-			verbose(env, "sleepable progs cannot use bpf_spin_lock yet\n");
-			return -EINVAL;
-		}
 	}
 
 	if (btf_record_has_field(map->record, BPF_TIMER)) {
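
On the in_rcu_cs() hunk above: with the consolidated RCU in modern
kernels, a preempt-disabled region blocks RCU grace periods, which is
why a bpf_spin_lock critical section can now double as an RCU critical
section. A rough sketch of the resulting guarantee (annotations only,
not code from the patch):

	bpf_spin_lock(&v->lock);   /* preempt_disable() + arch_spin_lock() */
	/* No RCU grace period can complete until the matching
	 * preempt_enable(), so objects freed via bpf_mem_free_rcu() that
	 * are still reachable here remain valid; hence in_rcu_cs() now
	 * also returns true while active_lock.ptr is set.
	 */
	bpf_spin_unlock(&v->lock); /* arch_spin_unlock() + preempt_enable() */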