Message ID | 20240214173950.18570-1-khuey@kylehuey.com (mailing list archive) |
---|---|
Series | Combine perf and bpf for fast eval of hw breakpoint conditions |
On Wed, Feb 14, 2024 at 9:40 AM Kyle Huey <me@kylehuey.com> wrote:
>
> Peter, Ingo, could you take a look at this?
>
> ----
>
> rr, a userspace record and replay debugger[0], replays asynchronous events
> such as signals and context switches by essentially[1] setting a breakpoint
> at the address where the asynchronous event was delivered during recording,
> with a condition that the program state matches the state when the event
> was delivered.
>
> Currently, rr uses software breakpoints that trap (via ptrace) to the
> supervisor, and evaluates the condition from the supervisor. If the
> asynchronous event is delivered in a tight loop (thus requiring the
> breakpoint condition to be repeatedly evaluated) the overhead can be
> immense. A patch to rr that uses hardware breakpoints via perf events with
> an attached BPF program to reject breakpoint hits where the condition is
> not satisfied reduces rr's replay overhead by 94% on a pathological (but a
> real customer-provided, not contrived) rr trace.
>
> The only obstacle to this approach is that while the kernel allows a BPF
> program to suppress sample output when a perf event overflows, it does not
> suppress signalling the perf event fd or sending the perf event's SIGTRAP.
> This patch set redesigns __perf_overflow_handler() and
> bpf_overflow_handler() so that the former invokes the latter directly when
> appropriate rather than through the generic overflow handler machinery,
> passes the return code of the BPF program back to __perf_overflow_handler()
> to allow it to decide whether to execute the regular overflow handler,
> reorders bpf_overflow_handler() and the side effects of perf event
> overflow, changes __perf_overflow_handler() to suppress those side effects
> if the BPF program returns zero, and adds a selftest.
>
> The previous version of this patchset can be found at
> https://lore.kernel.org/linux-kernel/20240119001352.9396-1-khuey@kylehuey.com/
>
> Changes since v4:
>
> Patches 1, 2, 3, 4 added various Acked-by.
>
> Patch 4 addresses additional nits from Song.
>
> v3 of this patchset can be found at
> https://lore.kernel.org/linux-kernel/20231211045543.31741-1-khuey@kylehuey.com/
>
> Changes since v3:
>
> Patches 1, 2, 3 added various Acked-by.
>
> Patch 4 addresses Song's review comments by dropping signals_expected and the
> corresponding ASSERT_OKs, handling errors from signal(), and fixing multiline
> comment formatting.
>
> v2 of this patchset can be found at
> https://lore.kernel.org/linux-kernel/20231207163458.5554-1-khuey@kylehuey.com/
>
> Changes since v2:
>
> Patches 1 and 2 were added based on a suggestion by Namhyung Kim to refactor
> this code to implement this feature in a cleaner way. Patch 2 is separated
> for the benefit of the ARM arch maintainers.
>
> Patch 3 conceptually supersedes v2's patches 1 and 2, now with a cleaner
> implementation thanks to the earlier refactoring.
>
> Patch 4 is v2's patch 3, and addresses review comments about C++ style
> comments, getting a TRAP_PERF definition into the test, and unnecessary
> NULL checks.
>
> [0] https://rr-project.org/
> [1] Various optimizations exist to skip as much execution as possible
> before setting a breakpoint, and to determine a set of program state that
> is practical to check and verify.

The series LGTM; I'm just confused why patch 1 and patch 3 are separated.
But regardless, for the series:

Acked-by: Andrii Nakryiko <andrii@kernel.org>
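[Editor's note: the userspace side of the mechanism described in the cover letter above can be illustrated with a minimal, hypothetical sketch: open a hardware execute breakpoint on the tracee as a perf event that raises SIGTRAP, then attach a previously loaded BPF_PROG_TYPE_PERF_EVENT program with PERF_EVENT_IOC_SET_BPF. This is not rr's actual code; the function name, parameters, and the sigtrap/remove_on_exec flags (which need a v5.13+ uapi) are assumptions for illustration.]

```c
/*
 * Hypothetical sketch only, not rr's implementation: install a hardware
 * execute breakpoint on a tracee as a perf event that raises SIGTRAP, and
 * attach an already-loaded BPF_PROG_TYPE_PERF_EVENT program (bpf_prog_fd)
 * that decides on every hit whether the event should fire.
 */
#include <linux/hw_breakpoint.h>
#include <linux/perf_event.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

static int attach_conditional_breakpoint(pid_t tracee, unsigned long addr,
					 int bpf_prog_fd)
{
	struct perf_event_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_BREAKPOINT;	/* hardware breakpoint event */
	attr.bp_type = HW_BREAKPOINT_X;		/* break on instruction fetch */
	attr.bp_addr = addr;			/* replay target address */
	attr.bp_len = sizeof(long);
	attr.sample_period = 1;			/* overflow on every hit */
	attr.exclude_kernel = 1;
	attr.exclude_hv = 1;
	attr.sigtrap = 1;			/* deliver SIGTRAP on overflow */
	attr.remove_on_exec = 1;		/* required when sigtrap is set */

	fd = syscall(SYS_perf_event_open, &attr, tracee, -1 /* any cpu */,
		     -1 /* no group */, 0);
	if (fd < 0)
		return -1;

	/*
	 * The BPF program runs on every breakpoint hit. With this series,
	 * a 0 return from it suppresses the fd wakeup and the SIGTRAP,
	 * not just the sample output.
	 */
	if (ioctl(fd, PERF_EVENT_IOC_SET_BPF, bpf_prog_fd) < 0) {
		close(fd);
		return -1;
	}

	return fd;
}
```

Before this series, a 0 return from the attached program only dropped the sample; the fd was still signalled and the SIGTRAP still delivered, which is exactly the gap the cover letter describes.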
* Kyle Huey <me@kylehuey.com> wrote:

> Peter, Ingo, could you take a look at this?
>
> ----
>
> rr, a userspace record and replay debugger[0], replays asynchronous
> events such as signals and context switches by essentially[1] setting a
> breakpoint at the address where the asynchronous event was delivered
> during recording with a condition that the program state matches the
> state when the event was delivered.
>
> Currently, rr uses software breakpoints that trap (via ptrace) to the
> supervisor, and evaluates the condition from the supervisor. If the
> asynchronous event is delivered in a tight loop (thus requiring the
> breakpoint condition to be repeatedly evaluated) the overhead can be
> immense. A patch to rr that uses hardware breakpoints via perf events
> with an attached BPF program to reject breakpoint hits where the
> condition is not satisfied reduces rr's replay overhead by 94% on a
> pathological (but a real customer-provided, not contrived) rr trace.
>
> The only obstacle to this approach is that while the kernel allows a BPF
> program to suppress sample output when a perf event overflows it does not
> suppress signalling the perf event fd or sending the perf event's
> SIGTRAP. This patch set redesigns __perf_overflow_handler() and
> bpf_overflow_handler() so that the former invokes the latter directly
> when appropriate rather than through the generic overflow handler
> machinery, passes the return code of the BPF program back to
> __perf_overflow_handler() to allow it to decide whether to execute the
> regular overflow handler, reorders bpf_overflow_handler() and the side
> effects of perf event overflow, changes __perf_overflow_handler() to
> suppress those side effects if the BPF program returns zero, and adds a
> selftest.

I suppose this optimization makes sense. Patch quality still needs to be
improved though - see my review comments.

Thanks,

	Ingo
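[Editor's note: for completeness, a hypothetical sketch of the filter program itself (BPF_PROG_TYPE_PERF_EVENT) follows. The map layout, the watched address, and the program and section names are invented for illustration; the only behavior taken from this series is the return-value contract, namely that returning 0 now suppresses the perf fd wakeup and SIGTRAP in addition to the sample output.]

```c
/*
 * Hypothetical condition filter attached to the breakpoint event above.
 * A supervisor such as rr would presumably populate the map with the
 * address/value pair recorded when the asynchronous event was originally
 * delivered; that userspace plumbing is omitted here.
 */
#include <linux/bpf.h>
#include <linux/bpf_perf_event.h>
#include <bpf/bpf_helpers.h>

struct condition {
	__u64 addr;	/* tracee address to inspect */
	__u64 expected;	/* value recorded at event-delivery time */
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, struct condition);
} cond_map SEC(".maps");

SEC("perf_event")
int filter_breakpoint_hit(struct bpf_perf_event_data *ctx)
{
	__u32 key = 0;
	struct condition *c = bpf_map_lookup_elem(&cond_map, &key);
	__u64 val = 0;

	if (!c)
		return 1;	/* no condition installed: report the hit */

	/* Read the tracee's state and compare it with the recorded state. */
	if (bpf_probe_read_user(&val, sizeof(val), (void *)c->addr))
		return 1;

	/*
	 * 0 = condition not met: with this series, drop the sample, the fd
	 *     signal and the SIGTRAP.
	 * 1 = condition met: let the normal overflow side effects happen.
	 */
	return val == c->expected;
}

char LICENSE[] SEC("license") = "GPL";
```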