| Message ID | 20231206083041.1306660-2-jolsa@kernel.org (mailing list archive) |
|---|---|
| State | Accepted |
| Commit | 4b7de801606e504e69689df71475d27e35336fb3 |
| Delegated to: | BPF |
| Series | bpf: Fix map poke update |
Dear Stable,

> Lee pointed out issue found by syscaller [0] hitting BUG in prog array
> map poke update in prog_array_map_poke_run function due to error value
> returned from bpf_arch_text_poke function.
>
> There's race window where bpf_arch_text_poke can fail due to missing
> bpf program kallsym symbols, which is accounted for with check for
> -EINVAL in that BUG_ON call.
>
> The problem is that in such case we won't update the tail call jump
> and cause imbalance for the next tail call update check which will
> fail with -EBUSY in bpf_arch_text_poke.
>
> I'm hitting following race during the program load:
>
>   CPU 0                                   CPU 1
>
>   bpf_prog_load
>     bpf_check
>       do_misc_fixups
>         prog_array_map_poke_track
>
>                                           map_update_elem
>                                             bpf_fd_array_map_update_elem
>                                               prog_array_map_poke_run
>
>                                                 bpf_arch_text_poke returns -EINVAL
>
>     bpf_prog_kallsyms_add
>
> After bpf_arch_text_poke (CPU 1) fails to update the tail call jump, the next
> poke update fails on expected jump instruction check in bpf_arch_text_poke
> with -EBUSY and triggers the BUG_ON in prog_array_map_poke_run.
>
> Similar race exists on the program unload.
>
> Fixing this by moving the update to bpf_arch_poke_desc_update function which
> makes sure we call __bpf_arch_text_poke that skips the bpf address check.
>
> Each architecture has slightly different approach wrt looking up bpf address
> in bpf_arch_text_poke, so instead of splitting the function or adding new
> 'checkip' argument in previous version, it seems best to move the whole
> map_poke_run update as arch specific code.
>
> [0] https://syzkaller.appspot.com/bug?extid=97a4fe20470e9bc30810
>
> Cc: Lee Jones <lee@kernel.org>
> Cc: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
> Reported-by: syzbot+97a4fe20470e9bc30810@syzkaller.appspotmail.com
> Acked-by: Yonghong Song <yonghong.song@linux.dev>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  arch/x86/net/bpf_jit_comp.c | 46 +++++++++++++++++++++++++++++
>  include/linux/bpf.h         |  3 ++
>  kernel/bpf/arraymap.c       | 58 +++++++------------------------------
>  3 files changed, 59 insertions(+), 48 deletions(-)

Please could we have this backported?

Guided by the Fixes: tag.

> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 8c10d9abc239..e89e415aa743 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -3025,3 +3025,49 @@ void arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp
>  #endif
>  	WARN(1, "verification of programs using bpf_throw should have failed\n");
>  }
> +
> +void bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
> +			       struct bpf_prog *new, struct bpf_prog *old)
> +{
> +	u8 *old_addr, *new_addr, *old_bypass_addr;
> +	int ret;
> +
> +	old_bypass_addr = old ? NULL : poke->bypass_addr;
> +	old_addr = old ? (u8 *)old->bpf_func + poke->adj_off : NULL;
> +	new_addr = new ? (u8 *)new->bpf_func + poke->adj_off : NULL;
> +
> +	/*
> +	 * On program loading or teardown, the program's kallsym entry
> +	 * might not be in place, so we use __bpf_arch_text_poke to skip
> +	 * the kallsyms check.
> +	 */
> +	if (new) {
> +		ret = __bpf_arch_text_poke(poke->tailcall_target,
> +					   BPF_MOD_JUMP,
> +					   old_addr, new_addr);
> +		BUG_ON(ret < 0);
> +		if (!old) {
> +			ret = __bpf_arch_text_poke(poke->tailcall_bypass,
> +						   BPF_MOD_JUMP,
> +						   poke->bypass_addr,
> +						   NULL);
> +			BUG_ON(ret < 0);
> +		}
> +	} else {
> +		ret = __bpf_arch_text_poke(poke->tailcall_bypass,
> +					   BPF_MOD_JUMP,
> +					   old_bypass_addr,
> +					   poke->bypass_addr);
> +		BUG_ON(ret < 0);
> +		/* let other CPUs finish the execution of program
> +		 * so that it will not possible to expose them
> +		 * to invalid nop, stack unwind, nop state
> +		 */
> +		if (!ret)
> +			synchronize_rcu();
> +		ret = __bpf_arch_text_poke(poke->tailcall_target,
> +					   BPF_MOD_JUMP,
> +					   old_addr, NULL);
> +		BUG_ON(ret < 0);
> +	}
> +}
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 6762dac3ef76..cff5bb08820e 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -3175,6 +3175,9 @@ enum bpf_text_poke_type {
>  int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
>  		       void *addr1, void *addr2);
>  
> +void bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
> +			       struct bpf_prog *new, struct bpf_prog *old);
> +
>  void *bpf_arch_text_copy(void *dst, void *src, size_t len);
>  int bpf_arch_text_invalidate(void *dst, size_t len);
>  
> diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
> index 2058e89b5ddd..c85ff9162a5c 100644
> --- a/kernel/bpf/arraymap.c
> +++ b/kernel/bpf/arraymap.c
> @@ -1012,11 +1012,16 @@ static void prog_array_map_poke_untrack(struct bpf_map *map,
>  	mutex_unlock(&aux->poke_mutex);
>  }
>  
> +void __weak bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
> +				      struct bpf_prog *new, struct bpf_prog *old)
> +{
> +	WARN_ON_ONCE(1);
> +}
> +
>  static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
>  				    struct bpf_prog *old,
>  				    struct bpf_prog *new)
>  {
> -	u8 *old_addr, *new_addr, *old_bypass_addr;
>  	struct prog_poke_elem *elem;
>  	struct bpf_array_aux *aux;
>  
> @@ -1025,7 +1030,7 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
>  
>  	list_for_each_entry(elem, &aux->poke_progs, list) {
>  		struct bpf_jit_poke_descriptor *poke;
> -		int i, ret;
> +		int i;
>  
>  		for (i = 0; i < elem->aux->size_poke_tab; i++) {
>  			poke = &elem->aux->poke_tab[i];
> @@ -1044,21 +1049,10 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
>  			 * activated, so tail call updates can arrive from here
>  			 * while JIT is still finishing its final fixup for
>  			 * non-activated poke entries.
> -			 * 3) On program teardown, the program's kallsym entry gets
> -			 *    removed out of RCU callback, but we can only untrack
> -			 *    from sleepable context, therefore bpf_arch_text_poke()
> -			 *    might not see that this is in BPF text section and
> -			 *    bails out with -EINVAL. As these are unreachable since
> -			 *    RCU grace period already passed, we simply skip them.
> -			 * 4) Also programs reaching refcount of zero while patching
> +			 * 3) Also programs reaching refcount of zero while patching
>  			 *    is in progress is okay since we're protected under
>  			 *    poke_mutex and untrack the programs before the JIT
> -			 *    buffer is freed. When we're still in the middle of
> -			 *    patching and suddenly kallsyms entry of the program
> -			 *    gets evicted, we just skip the rest which is fine due
> -			 *    to point 3).
> -			 * 5) Any other error happening below from bpf_arch_text_poke()
> -			 *    is a unexpected bug.
> +			 *    buffer is freed.
>  			 */
>  			if (!READ_ONCE(poke->tailcall_target_stable))
>  				continue;
> @@ -1068,39 +1062,7 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
>  			    poke->tail_call.key != key)
>  				continue;
>  
> -			old_bypass_addr = old ? NULL : poke->bypass_addr;
> -			old_addr = old ? (u8 *)old->bpf_func + poke->adj_off : NULL;
> -			new_addr = new ? (u8 *)new->bpf_func + poke->adj_off : NULL;
> -
> -			if (new) {
> -				ret = bpf_arch_text_poke(poke->tailcall_target,
> -							 BPF_MOD_JUMP,
> -							 old_addr, new_addr);
> -				BUG_ON(ret < 0 && ret != -EINVAL);
> -				if (!old) {
> -					ret = bpf_arch_text_poke(poke->tailcall_bypass,
> -								 BPF_MOD_JUMP,
> -								 poke->bypass_addr,
> -								 NULL);
> -					BUG_ON(ret < 0 && ret != -EINVAL);
> -				}
> -			} else {
> -				ret = bpf_arch_text_poke(poke->tailcall_bypass,
> -							 BPF_MOD_JUMP,
> -							 old_bypass_addr,
> -							 poke->bypass_addr);
> -				BUG_ON(ret < 0 && ret != -EINVAL);
> -				/* let other CPUs finish the execution of program
> -				 * so that it will not possible to expose them
> -				 * to invalid nop, stack unwind, nop state
> -				 */
> -				if (!ret)
> -					synchronize_rcu();
> -				ret = bpf_arch_text_poke(poke->tailcall_target,
> -							 BPF_MOD_JUMP,
> -							 old_addr, NULL);
> -				BUG_ON(ret < 0 && ret != -EINVAL);
> -			}
> +			bpf_arch_poke_desc_update(poke, new, old);
>  		}
>  	}
>  }
> --
> 2.43.0
>
On Thu, Dec 21, 2023 at 09:07:45AM +0000, Lee Jones wrote:
> Dear Stable,
>
> [...]
>
> Please could we have this backported?
>
> Guided by the Fixes: tag.

<formletter>

This is not the correct way to submit patches for inclusion in the
stable kernel tree.  Please read:
    https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.

</formletter>
On Thu, 21 Dec 2023, Greg KH wrote:
> On Thu, Dec 21, 2023 at 09:07:45AM +0000, Lee Jones wrote:
> > Dear Stable,
> >
> > [...]
> >
> > Please could we have this backported?
> >
> > Guided by the Fixes: tag.
>
> <formletter>
>
> This is not the correct way to submit patches for inclusion in the
> stable kernel tree.  Please read:
>     https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
> for how to do this properly.
>
> </formletter>

Apologies.

Commit ID: 4b7de801606e504e69689df71475d27e35336fb3
Subject: bpf: Fix prog_array_map_poke_run map poke update
Reason: Fixes a race condition in BPF.
Versions: linux-5.10.y+, as specified by the Fixes: tag above

Thanks.
On Thu, Dec 21, 2023 at 09:55:22AM +0000, Lee Jones wrote:
> [...]
>
> Apologies.
>
> Commit ID: 4b7de801606e504e69689df71475d27e35336fb3
> Subject: bpf: Fix prog_array_map_poke_run map poke update
> Reason: Fixes a race condition in BPF.
> Versions: linux-5.10.y+, as specified by the Fixes: tag above

Did not apply to 5.10.y or 5.15.y, so if you need/want it there, we will
need a working backport that has been tested.  Other trees it's now
queued up for.

BPF developers, please remember, just adding a "Fixes:" tag does NOT
guarantee that any patch will be backported to any stable kernel, you
MUST add a "cc: stable@..." tag to the patch if you wish to have it
automatically backported.

thanks,

greg k-h
On Thu, 21 Dec 2023, Greg KH wrote:
> On Thu, Dec 21, 2023 at 09:55:22AM +0000, Lee Jones wrote:
> > [...]
> >
> > Commit ID: 4b7de801606e504e69689df71475d27e35336fb3
> > Subject: bpf: Fix prog_array_map_poke_run map poke update
> > Reason: Fixes a race condition in BPF.
> > Versions: linux-5.10.y+, as specified by the Fixes: tag above
>
> Did not apply to 5.10.y or 5.15.y, so if you need/want it there, we will
> need a working backport that has been tested.  Other trees it's now
> queued up for.

Thank you.

> BPF developers, please remember, just adding a "Fixes:" tag does NOT
> guarantee that any patch will be backported to any stable kernel, you
> MUST add a "cc: stable@..." tag to the patch if you wish to have it
> automatically backported.
On Thu, Dec 21, 2023 at 10:17:44AM +0000, Lee Jones wrote:
> On Thu, 21 Dec 2023, Greg KH wrote:
> > [...]
> >
> > Did not apply to 5.10.y or 5.15.y, so if you need/want it there, we will
> > need a working backport that has been tested.  Other trees it's now
> > queued up for.
>
> Thank you.

please let me know if you need any help with that, I can check on that

jirka

> > BPF developers, please remember, just adding a "Fixes:" tag does NOT
> > guarantee that any patch will be backported to any stable kernel, you
> > MUST add a "cc: stable@..." tag to the patch if you wish to have it
> > automatically backported.
>
> --
> Lee Jones [李琼斯]
On Thu, 21 Dec 2023, Jiri Olsa wrote:
> On Thu, Dec 21, 2023 at 10:17:44AM +0000, Lee Jones wrote:
> > On Thu, 21 Dec 2023, Greg KH wrote:
> > > [...]
> > >
> > > Did not apply to 5.10.y or 5.15.y, so if you need/want it there, we will
> > > need a working backport that has been tested.  Other trees it's now
> > > queued up for.
> >
> > Thank you.
>
> please let me know if you need any help with that, I can check on that

I absolutely do.  I have no way to test BPF.