Message ID:   20230728023030.1906124-3-houtao@huaweicloud.com
State:        RFC
Delegated to: BPF
Series:       Remove unnecessary synchronizations in cpumap
Hou Tao <houtao@huaweicloud.com> writes:

> From: Hou Tao <houtao1@huawei.com>
>
> After synchronize_rcu(), both the detach of the XDP program and
> xdp_do_flush() have completed, and the only user of bpf_cpu_map_entry
> will be cpu_map_kthread_run(), so instead of calling
> __cpu_map_entry_replace() to empty the queue and do cleanup after an
> RCU grace period, do these things directly.
>
> Signed-off-by: Hou Tao <houtao1@huawei.com>

With one nit below:

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>

> ---
>  kernel/bpf/cpumap.c | 17 ++++++++---------
>  1 file changed, 8 insertions(+), 9 deletions(-)
>
> diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
> index 24f39c37526f..f8e2b24320c0 100644
> --- a/kernel/bpf/cpumap.c
> +++ b/kernel/bpf/cpumap.c
> @@ -554,16 +554,15 @@ static void cpu_map_free(struct bpf_map *map)
> 	/* At this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
> 	 * so the bpf programs (can be more than one that used this map) were
> 	 * disconnected from events. Wait for outstanding critical sections in
> -	 * these programs to complete. The rcu critical section only guarantees
> -	 * no further "XDP/bpf-side" reads against bpf_cpu_map->cpu_map.
> -	 * It does __not__ ensure pending flush operations (if any) are
> -	 * complete.
> +	 * these programs to complete. synchronize_rcu() below not only
> +	 * guarantees no further "XDP/bpf-side" reads against
> +	 * bpf_cpu_map->cpu_map, but also ensures pending flush operations
> +	 * (if any) are complete.
> 	 */
> -
> 	synchronize_rcu();
>
> -	/* For cpu_map the remote CPUs can still be using the entries
> -	 * (struct bpf_cpu_map_entry).
> +	/* The only possible user of bpf_cpu_map_entry is
> +	 * cpu_map_kthread_run().
> 	 */
> 	for (i = 0; i < cmap->map.max_entries; i++) {
> 		struct bpf_cpu_map_entry *rcpu;
> @@ -572,8 +571,8 @@ static void cpu_map_free(struct bpf_map *map)
> 		if (!rcpu)
> 			continue;
>
> -		/* bq flush and cleanup happens after RCU grace-period */
> -		__cpu_map_entry_replace(cmap, i, NULL); /* call_rcu */
> +		/* Empty queue and do cleanup directly */

The "empty queue" here is a bit ambiguous, maybe "Stop kthread and
cleanup entry"?

> +		__cpu_map_entry_free(&rcpu->free_work.work);
> 	}
> 	bpf_map_area_free(cmap->cpu_map);
> 	bpf_map_area_free(cmap);
> --
> 2.29.2
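The reasoning above hinges on the difference between deferring cleanup to an
RCU callback and doing it inline after synchronize_rcu(). A minimal,
self-contained sketch of the two patterns follows; struct entry and the
drop_entry_*() helpers are illustrative placeholders, not the cpumap code,
while call_rcu(), synchronize_rcu() and kfree() are the real kernel APIs:

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Illustrative entry; the real struct bpf_cpu_map_entry is richer. */
struct entry {
	struct rcu_head rcu;
	int payload;
};

static void entry_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct entry, rcu));
}

/* Deferred teardown: readers may still hold the entry, so queue the
 * free and let the next RCU grace period run the callback.
 */
static void drop_entry_deferred(struct entry *e)
{
	call_rcu(&e->rcu, entry_free_rcu);
}

/* Direct teardown, in the spirit of the patch: one synchronize_rcu()
 * up front waits out every reader (and, per the updated comment, any
 * pending flush operations), after which cleanup can run inline.
 */
static void drop_entry_direct(struct entry *e)
{
	synchronize_rcu();	/* all rcu_read_lock() sections are done */
	kfree(e);
}

The patch's observation is that cpu_map_free() already pays for the
synchronize_rcu(), so deferring each entry's cleanup to yet another grace
period buys nothing.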
Hi,

On 8/10/2023 6:22 PM, Toke Høiland-Jørgensen wrote:
> Hou Tao <houtao@huaweicloud.com> writes:
>
>> From: Hou Tao <houtao1@huawei.com>
>>
>> After synchronize_rcu(), both the detach of the XDP program and
>> xdp_do_flush() have completed, and the only user of bpf_cpu_map_entry
>> will be cpu_map_kthread_run(), so instead of calling
>> __cpu_map_entry_replace() to empty the queue and do cleanup after an
>> RCU grace period, do these things directly.
>>
>> Signed-off-by: Hou Tao <houtao1@huawei.com>
> With one nit below:
>
> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>

Thanks for the review.

>> ---
>>  kernel/bpf/cpumap.c | 17 ++++++++---------
>>  1 file changed, 8 insertions(+), 9 deletions(-)
>>
>> diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
>> index 24f39c37526f..f8e2b24320c0 100644
>> --- a/kernel/bpf/cpumap.c
>> +++ b/kernel/bpf/cpumap.c
>> @@ -554,16 +554,15 @@ static void cpu_map_free(struct bpf_map *map)
>> 	/* At this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
>> 	 * so the bpf programs (can be more than one that used this map) were
>> 	 * disconnected from events. Wait for outstanding critical sections in
>> -	 * these programs to complete. The rcu critical section only guarantees
>> -	 * no further "XDP/bpf-side" reads against bpf_cpu_map->cpu_map.
>> -	 * It does __not__ ensure pending flush operations (if any) are
>> -	 * complete.
>> +	 * these programs to complete. synchronize_rcu() below not only
>> +	 * guarantees no further "XDP/bpf-side" reads against
>> +	 * bpf_cpu_map->cpu_map, but also ensures pending flush operations
>> +	 * (if any) are complete.
>> 	 */
>> -
>> 	synchronize_rcu();
>>
>> -	/* For cpu_map the remote CPUs can still be using the entries
>> -	 * (struct bpf_cpu_map_entry).
>> +	/* The only possible user of bpf_cpu_map_entry is
>> +	 * cpu_map_kthread_run().
>> 	 */
>> 	for (i = 0; i < cmap->map.max_entries; i++) {
>> 		struct bpf_cpu_map_entry *rcpu;
>> @@ -572,8 +571,8 @@ static void cpu_map_free(struct bpf_map *map)
>> 		if (!rcpu)
>> 			continue;
>>
>> -		/* bq flush and cleanup happens after RCU grace-period */
>> -		__cpu_map_entry_replace(cmap, i, NULL); /* call_rcu */
>> +		/* Empty queue and do cleanup directly */
> The "empty queue" here is a bit ambiguous, maybe "Stop kthread and
> cleanup entry"?

Sure. Will do in v1.

>
>> +		__cpu_map_entry_free(&rcpu->free_work.work);
>> 	}
>> 	bpf_map_area_free(cmap->cpu_map);
>> 	bpf_map_area_free(cmap);
>> --
>> 2.29.2
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 24f39c37526f..f8e2b24320c0 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -554,16 +554,15 @@ static void cpu_map_free(struct bpf_map *map)
 	/* At this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
 	 * so the bpf programs (can be more than one that used this map) were
 	 * disconnected from events. Wait for outstanding critical sections in
-	 * these programs to complete. The rcu critical section only guarantees
-	 * no further "XDP/bpf-side" reads against bpf_cpu_map->cpu_map.
-	 * It does __not__ ensure pending flush operations (if any) are
-	 * complete.
+	 * these programs to complete. synchronize_rcu() below not only
+	 * guarantees no further "XDP/bpf-side" reads against
+	 * bpf_cpu_map->cpu_map, but also ensures pending flush operations
+	 * (if any) are complete.
 	 */
-
 	synchronize_rcu();
 
-	/* For cpu_map the remote CPUs can still be using the entries
-	 * (struct bpf_cpu_map_entry).
+	/* The only possible user of bpf_cpu_map_entry is
+	 * cpu_map_kthread_run().
 	 */
 	for (i = 0; i < cmap->map.max_entries; i++) {
 		struct bpf_cpu_map_entry *rcpu;
@@ -572,8 +571,8 @@ static void cpu_map_free(struct bpf_map *map)
 		if (!rcpu)
 			continue;
 
-		/* bq flush and cleanup happens after RCU grace-period */
-		__cpu_map_entry_replace(cmap, i, NULL); /* call_rcu */
+		/* Empty queue and do cleanup directly */
+		__cpu_map_entry_free(&rcpu->free_work.work);
 	}
 	bpf_map_area_free(cmap->cpu_map);
 	bpf_map_area_free(cmap);
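For context on why the inline __cpu_map_entry_free() call is safe once the
RCU readers are gone: the one remaining user is the per-entry kthread, and
stopping a kthread is synchronous. Below is a rough sketch of that handshake;
struct my_entry, process_pending() and entry_teardown() are invented
placeholders (the real code uses struct bpf_cpu_map_entry, a ptr_ring queue,
and cpu_map_kthread_run()), while kthread_stop(), kthread_should_stop() and
schedule_timeout_interruptible() are the actual kernel primitives:

#include <linux/kthread.h>
#include <linux/sched.h>

/* Placeholder entry: the real struct bpf_cpu_map_entry also carries a
 * ptr_ring, per-CPU bulk queues, stats, and so on.
 */
struct my_entry {
	struct task_struct *kthread;
};

/* Placeholder for draining the entry's queue of pending frames. */
static void process_pending(struct my_entry *e)
{
}

static int consumer_kthread(void *data)
{
	struct my_entry *e = data;

	while (!kthread_should_stop()) {
		process_pending(e);
		/* the real kthread sleeps until woken by new work or stop */
		schedule_timeout_interruptible(HZ / 100);
	}
	process_pending(e);	/* final drain: queue is empty on exit */
	return 0;
}

static void entry_teardown(struct my_entry *e)
{
	/* kthread_stop() wakes the thread and blocks until it exits,
	 * so after this call no CPU is still using the entry.
	 */
	kthread_stop(e->kthread);
	/* now safe to free the queues and the entry itself */
}

Because the stop is synchronous and the kthread drains its queue before
exiting, nothing is left for a later RCU callback to do, which is what lets
the loop in cpu_map_free() call the free function directly.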