Message ID | 20221105025146.238209-1-horenchuang@bytedance.com (mailing list archive) |
---|---|
Series | Add BPF htab map's used size for monitoring |
On Fri, Nov 4, 2022 at 7:52 PM Ho-Ren (Jack) Chuang <horenchuang@bytedance.com> wrote:
>
> Hello everyone,
>
> We have prepared patches to address an issue from a previous discussion.
> The previous discussion email thread is here: https://lore.kernel.org/all/CAADnVQLBt0snxv4bKwg1WKQ9wDFbaDCtZ03v1-LjOTYtsKPckQ@mail.gmail.com/

Rephrasing what was said earlier.
We're not keeping the count of elements in a preallocated hash map
and we are not going to add one.
The bpf prog needs to do the accounting on its own if it needs
this kind of statistics.
Keeping the count for non-prealloc is already significant performance
overhead. We don't trade performance for stats.
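[For readers following the thread: below is a minimal sketch, in libbpf style, of the self-accounting Alexei suggests — the prog bumps its own counter map next to a preallocated hash map. The map names, the tracepoint, and the key scheme are illustrative assumptions, not code from this series.]

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical preallocated hash map whose usage we want to track. */
struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 4096);
	__type(key, __u32);
	__type(value, __u64);
} tracked SEC(".maps");

/* Side counter maintained by the prog itself. */
struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __s64);
} used_count SEC(".maps");

SEC("tp/syscalls/sys_enter_openat")
int track_open(void *ctx)
{
	__u32 zero = 0, key = bpf_get_current_pid_tgid() >> 32;
	__u64 val = bpf_ktime_get_ns();
	__s64 *cnt;

	/* BPF_NOEXIST fails with -EEXIST if the key is already present,
	 * so the counter is bumped only on a real insertion. */
	if (bpf_map_update_elem(&tracked, &key, &val, BPF_NOEXIST))
		return 0;

	cnt = bpf_map_lookup_elem(&used_count, &zero);
	if (cnt)
		__sync_fetch_and_add(cnt, 1);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

Userspace can then read used_count with an ordinary map lookup instead of asking the kernel for per-map statistics, which is why this costs the kernel hot path nothing.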
Hi Alexei,

We understand the concern about the added performance overhead. We had
some discussion about this while working on the patch and decided to
give it a try (my bad).

Adding some more context. We are leveraging the BPF_OBJ_GET_INFO_BY_FD
syscall to trace CPU usage per prog and memory usage per map. We would
like to use this patch to add an interface for map types to return
their internal "count". For instance, we are thinking of having the
below map types report the "count"; these won't add overhead to the
hot path:
1. ringbuf: return its "count" by calculating the distance between
producer_pos and consumer_pos (see the sketch after this message)
2. queue and stack: return its "count" from the head's position
3. dev map hash: return its "count" from its items counter

There are other map types that, like the pre-allocated hashtab case,
would introduce overhead in the hot path in order to count the stats.
I think we can find alternative solutions for those (e.g., iterate the
map and count, count only if the bpf_stats_enabled switch is on, etc.).
There are cases where this can't be done at the application level
because applications don't see the maps' internal state, so they can't
do the counting correctly.

We can remove the counting for the pre-allocated case in this patch.
Please let us know what you think.

Thanks, Hao

On Sat, Nov 5, 2022 at 9:20 AM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Fri, Nov 4, 2022 at 7:52 PM Ho-Ren (Jack) Chuang
> <horenchuang@bytedance.com> wrote:
> >
> > Hello everyone,
> >
> > We have prepared patches to address an issue from a previous discussion.
> > The previous discussion email thread is here: https://lore.kernel.org/all/CAADnVQLBt0snxv4bKwg1WKQ9wDFbaDCtZ03v1-LjOTYtsKPckQ@mail.gmail.com/
>
> Rephrasing what was said earlier.
> We're not keeping the count of elements in a preallocated hash map
> and we are not going to add one.
> The bpf prog needs to do the accounting on its own if it needs
> this kind of statistics.
> Keeping the count for non-prealloc is already significant performance
> overhead. We don't trade performance for stats.
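[To illustrate item 1 above: a hedged sketch of a per-map-type "count" callback for the ringbuf. The consumer_pos/producer_pos fields and the acquire loads mirror kernel/bpf/ringbuf.c, but the callback itself — its name and where it would hook in — is an assumption about the proposed interface, not upstream code.]

```c
/* Sketch only: hypothetical callback returning the ringbuf "count"
 * discussed above. struct bpf_ringbuf_map and the position fields are
 * real; ringbuf_map_used() is not. */
static u64 ringbuf_map_used(struct bpf_map *map)
{
	struct bpf_ringbuf_map *rb_map =
		container_of(map, struct bpf_ringbuf_map, map);
	unsigned long cons_pos, prod_pos;

	cons_pos = smp_load_acquire(&rb_map->rb->consumer_pos);
	prod_pos = smp_load_acquire(&rb_map->rb->producer_pos);

	/* Both positions are monotonically increasing byte offsets, so
	 * their distance is the number of unconsumed bytes in the ring,
	 * computed without touching the producer hot path. */
	return prod_pos - cons_pos;
}
```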
Hi Alexei, we can guard the added counting with the existing
bpf_stats_enabled switch. The switch is off by default, so I believe
there will be no extra overhead once we do that (a sketch follows
below). Could you please give this a second thought?

On Mon, Nov 7, 2022 at 4:30 PM Hao Xiang . <hao.xiang@bytedance.com> wrote:
>
> Hi Alexei,
>
> We understand the concern about the added performance overhead. We had
> some discussion about this while working on the patch and decided to
> give it a try (my bad).
>
> Adding some more context. We are leveraging the BPF_OBJ_GET_INFO_BY_FD
> syscall to trace CPU usage per prog and memory usage per map. We would
> like to use this patch to add an interface for map types to return
> their internal "count". For instance, we are thinking of having the
> below map types report the "count"; these won't add overhead to the
> hot path:
> 1. ringbuf: return its "count" by calculating the distance between
> producer_pos and consumer_pos
> 2. queue and stack: return its "count" from the head's position
> 3. dev map hash: return its "count" from its items counter
>
> There are other map types that, like the pre-allocated hashtab case,
> would introduce overhead in the hot path in order to count the stats.
> I think we can find alternative solutions for those (e.g., iterate the
> map and count, count only if the bpf_stats_enabled switch is on, etc.).
> There are cases where this can't be done at the application level
> because applications don't see the maps' internal state, so they can't
> do the counting correctly.
>
> We can remove the counting for the pre-allocated case in this patch.
> Please let us know what you think.
>
> Thanks, Hao
>
> On Sat, Nov 5, 2022 at 9:20 AM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Fri, Nov 4, 2022 at 7:52 PM Ho-Ren (Jack) Chuang
> > <horenchuang@bytedance.com> wrote:
> > >
> > > Hello everyone,
> > >
> > > We have prepared patches to address an issue from a previous discussion.
> > > The previous discussion email thread is here: https://lore.kernel.org/all/CAADnVQLBt0snxv4bKwg1WKQ9wDFbaDCtZ03v1-LjOTYtsKPckQ@mail.gmail.com/
> >
> > Rephrasing what was said earlier.
> > We're not keeping the count of elements in a preallocated hash map
> > and we are not going to add one.
> > The bpf prog needs to do the accounting on its own if it needs
> > this kind of statistics.
> > Keeping the count for non-prealloc is already significant performance
> > overhead. We don't trade performance for stats.
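[A rough sketch of the bpf_stats_enabled gating proposed above, assuming the counter is the `used` field this series adds to the hashtab; the helper name is hypothetical. The existing static key (toggled via the kernel.bpf_stats_enabled sysctl) keeps the check a patched no-op while stats are off.]

```c
#include <linux/filter.h>	/* declares bpf_stats_enabled_key */
#include <linux/jump_label.h>

/* Hypothetical helper called from the htab element-update path.
 * The "used" member is this series' addition to struct bpf_htab
 * (local to kernel/bpf/hashtab.c), not an upstream field. */
static void htab_used_inc(struct bpf_htab *htab)
{
	/* static_branch_unlikely() compiles to a nop while
	 * kernel.bpf_stats_enabled is 0, so the hot path pays
	 * nothing in the default configuration. */
	if (static_branch_unlikely(&bpf_stats_enabled_key))
		atomic_inc(&htab->used);
}
```

The trade-off of this design is that the count is only meaningful for elements added while the switch was on, which may be acceptable for the monitoring use case described above.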