
[RFC,0/1] add parallelism to test_progs

Message ID 20210827231307.3787723-1-fallentree@fb.com (mailing list archive)

Message

Yucong Sun Aug. 27, 2021, 11:13 p.m. UTC
This patch adds an optional "-p" flag to test_progs to run tests in
multiple worker processes, speeding up the test run.

Example:

time ./test_progs
real    5m51.393s
user    0m4.695s
sys    5m48.055s

time ./test_progs -p 16 (on an 8-core VM)
real    3m45.673s
user    0m4.434s
sys    5m47.465s

The areas I'm looking for feedback on:

  1. Some tests take too long to run (for example,
  bpf_verif_scale/pyperf* takes almost 80% of the total runtime). If we
  need a work-stealing pool mechanism, it would be a bigger change.

  2. The test output from workers is currently interleaved across all
  workers, making it harder to read. One option would be to redirect
  all output onto pipes and have the main process collect and print it
  in sequence as each worker finishes, but that would make seeing
  real-time progress harder.

  3. If the main process wants to collect test results from the
  workers, I plan to have each worker write a stats file to /tmp, or I
  could use IPC. Any preference?

  4. Some tests fail if run in parallel; I think we would need to pin
  some of them onto worker 0.

Yucong Sun (1):
  selftests/bpf: Add parallelism to test_progs

 tools/testing/selftests/bpf/test_progs.c | 94 ++++++++++++++++++++++--
 tools/testing/selftests/bpf/test_progs.h |  3 +
 2 files changed, 91 insertions(+), 6 deletions(-)

Comments

Andrii Nakryiko Aug. 31, 2021, 4:03 a.m. UTC | #1
On Fri, Aug 27, 2021 at 4:13 PM Yucong Sun <fallentree@fb.com> wrote:
>
> This patch adds an optional "-p" flag to test_progs to run tests in
> multiple worker processes, speeding up the test run.
>
> Example:
>
> time ./test_progs
> real    5m51.393s
> user    0m4.695s
> sys    5m48.055s
>
> time ./test_progs -p 16 (on an 8-core VM)
> real    3m45.673s
> user    0m4.434s
> sys    5m47.465s
>
> The areas I'm looking for feedback on:
>
>   1. Some tests take too long to run (for example,
>   bpf_verif_scale/pyperf* takes almost 80% of the total runtime). If
>   we need a work-stealing pool mechanism, it would be a bigger change.

Seems like you did just a static assignment based on worker number and
test number in this RFC. I think that's way too simplistic to work
well in practice. But I don't think we need a work-stealing queue
either (or any explicit queue at all).

I'd rather go with a simple client/server model, where the server is
the main process which does all the coordination. It would "dispense"
tasks to each forked worker one by one, wait for each test to
complete, and accumulate the test's output in a per-worker temporary
buffer. If we are running in verbose mode or a test failed, output the
accumulated logs. If not verbose and the test succeeded, just emit a
summary with the test name and an OK message and discard the
accumulated output. I think we can easily extend this to support
running multiple sub-tests on *different* workers, "breaking up" and
scaling that bpf_verif_scale test nicely. But that could be a pretty
easy step #2 after the whole client/server machinery is set up.

Look into Unix domain sockets (UDS). But not the SOCK_STREAM kind,
rather SOCK_DGRAM. UDS lets you establish a bi-directional connection
between server and worker. And it preserves packet boundaries, so you
don't have the TCP stream problem of delineating the boundaries of
logical packets. And it preserves ordering between packets. All great
properties. With this we can set up client/server communication with a
very simple protocol:

1. Server sends a "RUN_TEST" command, specifying the number of the
test to be executed by the worker.
2. Worker sends back "TEST_COMPLETED" command with the test number,
test result (success, failure, skipped), and, optionally, console
output.
3. Repeat #1-#2 as many times as needed.
4. Server sends "SHUTDOWN" command and worker exits.

(Well, probably we need a bit more flexibility to report sub-test
successes, so maybe worker will have two possible messages:
SUBTEST_COMPLETED and TEST_COMPLETED, or something along those lines).

On the server side, we can use as suboptimal and simplistic a locking
scheme as we like to coordinate everything. It's probably simplest to
have a thread per worker that takes a global lock to pick the next
test to run (just i++, but under the lock), and just remembers all the
statuses (and error output, for dumping failed tests' details).

Some refactoring will be needed to make the existing code work in both
non-parallel and parallel modes with a minimal amount of change, but
this seems simple enough.

>
>   2. The test output from workers is currently interleaved across all
>   workers, making it harder to read. One option would be to redirect
>   all output onto pipes and have the main process collect and print it
>   in sequence as each worker finishes, but that would make seeing
>   real-time progress harder.

Yeah, I don't think that's acceptable. The good news is that we needed
some more machinery to hold onto test output until the very end for
error-summary reporting anyway.

>
>   3. If the main process wants to collect test results from the
>   workers, I plan to have each worker write a stats file to /tmp, or I
>   could use IPC. Any preference?

See above, I think UDS is the way to go.

>
>   4. Some tests fail if run in parallel; I think we would need to pin
>   some of them onto worker 0.

Yeah, we can mark such tests with some special naming convention
(e.g., renaming to test_blahblah_noparallel) and run them
sequentially.

>
> Yucong Sun (1):
>   selftests/bpf: Add parallelism to test_progs
>
>  tools/testing/selftests/bpf/test_progs.c | 94 ++++++++++++++++++++++--
>  tools/testing/selftests/bpf/test_progs.h |  3 +
>  2 files changed, 91 insertions(+), 6 deletions(-)
>
> --
> 2.30.2
>