Message ID | 20220220184055.3608317-1-trix@redhat.com (mailing list archive)
---|---
State | Accepted
Commit | c561d11063009323a0e57c528cb1d77b7d2c41e0
Delegated to: | BPF
Series | bpf: cleanup comments
Context | Check | Description |
---|---|---
bpf/vmtest-bpf-next-PR | success | PR summary |
bpf/vmtest-bpf-next | success | VM_Test |
netdev/tree_selection | success | Guessing tree name failed - patch did not apply, async |
On Sun, Feb 20, 2022 at 10:41 AM <trix@redhat.com> wrote:
>
> From: Tom Rix <trix@redhat.com>
>
> Add leading space to spdx tag
> Use // for spdx c file comment
>
> Replacements
> resereved to reserved
> inbetween to in between
> everytime to every time

I think everytime could be a single word? Other than that,

Acked-by: Song Liu <songliubraving@fb.com>

> intutivie to intuitive
> currenct to current
> encontered to encountered
> referenceing to referencing
> upto to up to
> exectuted to executed
>
> Signed-off-by: Tom Rix <trix@redhat.com>
> ---
>  kernel/bpf/bpf_local_storage.c | 2 +-
>  kernel/bpf/btf.c               | 6 +++---
>  kernel/bpf/cgroup.c            | 8 ++++----
>  kernel/bpf/hashtab.c           | 2 +-
>  kernel/bpf/helpers.c           | 2 +-
>  kernel/bpf/local_storage.c     | 2 +-
>  kernel/bpf/reuseport_array.c   | 2 +-
>  kernel/bpf/syscall.c           | 2 +-
>  kernel/bpf/trampoline.c        | 2 +-
>  9 files changed, 14 insertions(+), 14 deletions(-)
>
> diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
> index 71de2a89869c..092a1ac772d7 100644
> --- a/kernel/bpf/bpf_local_storage.c
> +++ b/kernel/bpf/bpf_local_storage.c
> @@ -136,7 +136,7 @@ bool bpf_selem_unlink_storage_nolock(struct bpf_local_storage *local_storage,
>  	 * will be done by the caller.
>  	 *
>  	 * Although the unlock will be done under
> -	 * rcu_read_lock(), it is more intutivie to
> +	 * rcu_read_lock(), it is more intuitive to
>  	 * read if the freeing of the storage is done
>  	 * after the raw_spin_unlock_bh(&local_storage->lock).
>  	 *
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 02d7014417a0..8b11d1a9bee1 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -1,4 +1,4 @@
> -/* SPDX-License-Identifier: GPL-2.0 */
> +// SPDX-License-Identifier: GPL-2.0
>  /* Copyright (c) 2018 Facebook */
>
>  #include <uapi/linux/btf.h>
> @@ -2547,7 +2547,7 @@ static int btf_ptr_resolve(struct btf_verifier_env *env,
>  	 *
>  	 * We now need to continue from the last-resolved-ptr to
>  	 * ensure the last-resolved-ptr will not referring back to
> -	 * the currenct ptr (t).
> +	 * the current ptr (t).
>  	 */
>  	if (btf_type_is_modifier(next_type)) {
>  		const struct btf_type *resolved_type;
> @@ -6148,7 +6148,7 @@ int btf_type_snprintf_show(const struct btf *btf, u32 type_id, void *obj,
>
>  	btf_type_show(btf, type_id, obj, (struct btf_show *)&ssnprintf);
>
> -	/* If we encontered an error, return it. */
> +	/* If we encountered an error, return it. */
>  	if (ssnprintf.show.state.status)
>  		return ssnprintf.show.state.status;
>
> diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
> index 098632fdbc45..128028efda64 100644
> --- a/kernel/bpf/cgroup.c
> +++ b/kernel/bpf/cgroup.c
> @@ -1031,7 +1031,7 @@ int cgroup_bpf_prog_query(const union bpf_attr *attr,
>   * __cgroup_bpf_run_filter_skb() - Run a program for packet filtering
>   * @sk: The socket sending or receiving traffic
>   * @skb: The skb that is being sent or received
> - * @type: The type of program to be exectuted
> + * @type: The type of program to be executed
>   *
>   * If no socket is passed, or the socket is not of type INET or INET6,
>   * this function does nothing and returns 0.
> @@ -1094,7 +1094,7 @@ EXPORT_SYMBOL(__cgroup_bpf_run_filter_skb);
>  /**
>   * __cgroup_bpf_run_filter_sk() - Run a program on a sock
>   * @sk: sock structure to manipulate
> - * @type: The type of program to be exectuted
> + * @type: The type of program to be executed
>   *
>   * socket is passed is expected to be of type INET or INET6.
>   *
> @@ -1119,7 +1119,7 @@ EXPORT_SYMBOL(__cgroup_bpf_run_filter_sk);
>   * provided by user sockaddr
>   * @sk: sock struct that will use sockaddr
>   * @uaddr: sockaddr struct provided by user
> - * @type: The type of program to be exectuted
> + * @type: The type of program to be executed
>   * @t_ctx: Pointer to attach type specific context
>   * @flags: Pointer to u32 which contains higher bits of BPF program
>   * return value (OR'ed together).
> @@ -1166,7 +1166,7 @@ EXPORT_SYMBOL(__cgroup_bpf_run_filter_sock_addr);
>   * @sock_ops: bpf_sock_ops_kern struct to pass to program. Contains
>   * sk with connection information (IP addresses, etc.) May not contain
>   * cgroup info if it is a req sock.
> - * @type: The type of program to be exectuted
> + * @type: The type of program to be executed
>   *
>   * socket passed is expected to be of type INET or INET6.
>   *
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index d29af9988f37..65877967f414 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -1636,7 +1636,7 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map,
>  	value_size = size * num_possible_cpus();
>  	total = 0;
>  	/* while experimenting with hash tables with sizes ranging from 10 to
> -	 * 1000, it was observed that a bucket can have upto 5 entries.
> +	 * 1000, it was observed that a bucket can have up to 5 entries.
>  	 */
>  	bucket_size = 5;
>
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 49817755b8c3..ae64110a98b5 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -1093,7 +1093,7 @@ struct bpf_hrtimer {
>  struct bpf_timer_kern {
>  	struct bpf_hrtimer *timer;
>  	/* bpf_spin_lock is used here instead of spinlock_t to make
> -	 * sure that it always fits into space resereved by struct bpf_timer
> +	 * sure that it always fits into space reserved by struct bpf_timer
>  	 * regardless of LOCKDEP and spinlock debug flags.
>  	 */
>  	struct bpf_spin_lock lock;
> diff --git a/kernel/bpf/local_storage.c b/kernel/bpf/local_storage.c
> index 23f7f9d08a62..497916060ac7 100644
> --- a/kernel/bpf/local_storage.c
> +++ b/kernel/bpf/local_storage.c
> @@ -1,4 +1,4 @@
> -//SPDX-License-Identifier: GPL-2.0
> +// SPDX-License-Identifier: GPL-2.0
>  #include <linux/bpf-cgroup.h>
>  #include <linux/bpf.h>
>  #include <linux/bpf_local_storage.h>
> diff --git a/kernel/bpf/reuseport_array.c b/kernel/bpf/reuseport_array.c
> index 556a769b5b80..962556917c4d 100644
> --- a/kernel/bpf/reuseport_array.c
> +++ b/kernel/bpf/reuseport_array.c
> @@ -143,7 +143,7 @@ static void reuseport_array_free(struct bpf_map *map)
>
>  	/*
>  	 * Once reaching here, all sk->sk_user_data is not
> -	 * referenceing this "array". "array" can be freed now.
> +	 * referencing this "array". "array" can be freed now.
>  	 */
>  	bpf_map_area_free(array);
>  }
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 35646db3d950..ce4657a00dae 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -2562,7 +2562,7 @@ static int bpf_link_alloc_id(struct bpf_link *link)
>   * pre-allocated resources are to be freed with bpf_cleanup() call. All the
>   * transient state is passed around in struct bpf_link_primer.
>   * This is preferred way to create and initialize bpf_link, especially when
> - * there are complicated and expensive operations inbetween creating bpf_link
> + * there are complicated and expensive operations in between creating bpf_link
>   * itself and attaching it to BPF hook. By using bpf_link_prime() and
>   * bpf_link_settle() kernel code using bpf_link doesn't have to perform
>   * expensive (and potentially failing) roll back operations in a rare case
> diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
> index 7224691df2ec..0b41fa993825 100644
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
> @@ -45,7 +45,7 @@ void *bpf_jit_alloc_exec_page(void)
>
>  	set_vm_flush_reset_perms(image);
>  	/* Keep image as writeable. The alternative is to keep flipping ro/rw
> -	 * everytime new program is attached or detached.
> +	 * every time new program is attached or detached.
>  	 */
>  	set_memory_x((long)image, 1);
>  	return image;
> --
> 2.26.3
>
On 2/20/22 17:28, Song Liu wrote:
> On Sun, Feb 20, 2022 at 10:41 AM <trix@redhat.com> wrote:
>>
>> From: Tom Rix <trix@redhat.com>
>>
>> Add leading space to spdx tag
>> Use // for spdx c file comment
>>
>> Replacements
>> resereved to reserved
>> inbetween to in between
>> everytime to every time
>
> I think everytime could be a single word? Other than that,

Nope. :)

>
> Acked-by: Song Liu <songliubraving@fb.com>
>
>> intutivie to intuitive
>> currenct to current
>> encontered to encountered
>> referenceing to referencing
>> upto to up to
>> exectuted to executed
>>
>> Signed-off-by: Tom Rix <trix@redhat.com>
>> ---
>>  kernel/bpf/bpf_local_storage.c | 2 +-
>>  kernel/bpf/btf.c               | 6 +++---
>>  kernel/bpf/cgroup.c            | 8 ++++----
>>  kernel/bpf/hashtab.c           | 2 +-
>>  kernel/bpf/helpers.c           | 2 +-
>>  kernel/bpf/local_storage.c     | 2 +-
>>  kernel/bpf/reuseport_array.c   | 2 +-
>>  kernel/bpf/syscall.c           | 2 +-
>>  kernel/bpf/trampoline.c        | 2 +-
>>  9 files changed, 14 insertions(+), 14 deletions(-)
On 20/02/2022 at 19:40, trix@redhat.com wrote:
> From: Tom Rix <trix@redhat.com>
>
> Add leading space to spdx tag
> Use // for spdx c file comment
>
> Replacements
> resereved to reserved
> inbetween to in between
> everytime to every time
> intutivie to intuitive
> currenct to current
> encontered to encountered
> referenceing to referencing
> upto to up to
> exectuted to executed

You can add them in scripts/spelling.txt

Regards,
Nicolas
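To sketch the suggestion above: scripts/spelling.txt lists one `misspelling||correction` pair per line, and checkpatch.pl consults it to warn about typos. The demo below uses scratch files under /tmp (names are illustrative, not from the patch) and three entries taken from this patch's changelog:

```shell
# Build a tiny spelling list in the scripts/spelling.txt format.
cat > /tmp/spelling_demo.txt <<'EOF'
resereved||reserved
everytime||every time
exectuted||executed
EOF

# A source file containing one of the misspellings this patch fixes.
cat > /tmp/comment_demo.c <<'EOF'
/* sure that it always fits into space resereved by struct bpf_timer */
EOF

# Report any listed misspelling found in the source file.
# Splitting on '|' leaves the misspelling in $wrong; the correction
# (with a stray leading '|') lands in $rest and is unused here.
while IFS='|' read -r wrong rest; do
    grep -Hn "$wrong" /tmp/comment_demo.c || true
done < /tmp/spelling_demo.txt
```

Running it flags the `resereved` line; a real check would go through checkpatch.pl rather than this loop.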
Hello:

This patch was applied to bpf/bpf-next.git (master)
by Andrii Nakryiko <andrii@kernel.org>:

On Sun, 20 Feb 2022 10:40:55 -0800 you wrote:
> From: Tom Rix <trix@redhat.com>
>
> Add leading space to spdx tag
> Use // for spdx c file comment
>
> Replacements
> resereved to reserved
> inbetween to in between
> everytime to every time
> intutivie to intuitive
> currenct to current
> encontered to encountered
> referenceing to referencing
> upto to up to
> exectuted to executed
>
> [...]

Here is the summary with links:
  - bpf: cleanup comments
    https://git.kernel.org/bpf/bpf-next/c/c561d1106300

You are awesome, thank you!