
[v2,00/12] tests: enable meson test timeouts to improve debuggability

Message ID 20230717182859.707658-1-berrange@redhat.com

Message

Daniel P. Berrangé July 17, 2023, 6:28 p.m. UTC
Perhaps the most painful of all the GitLab CI failures we see are
the enforced job timeouts:

   "ERROR: Job failed: execution took longer than 1h15m0s seconds"

   https://gitlab.com/qemu-project/qemu/-/jobs/4387047648

When that hits, the CI log shows what has *already* run, but figuring
out what was currently running (or rather stuck) is horrendously
difficult.

The initial meson port disabled the meson test timeouts, in order to
limit the scope for introducing side effects from the port that would
complicate adoption.
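
For context, meson can disable these timeouts wholesale by being given
a timeout multiplier of zero; from the build tree that is roughly:

    $ meson test --timeout-multiplier 0

and that is presumably the knob the final mtest2make patch removes.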

Now that the meson port is basically finished we can take advantage of
more of its improved features. It has the ability to set timeouts for
test programs, defaulting to 30 seconds, but overridable per test. This
is further helped by the fact that we changed the iotests integration
so that each iotest is a distinct meson test, instead of one single
giant (slow) test.
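
As a rough illustration (the executable name below is made up, and this
is not the literal tests/qtest/meson.build), a per-test override is
just the 'timeout' keyword to meson's test():

    # timeout is given in seconds; meson's default is 30
    test('qom-test', qom_test_exe,
         timeout: 900,
         suite: ['qtest', 'qtest-aarch64'])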

We already set overrides for a bunch of tests, but they've not been
kept up to date since we had timeouts disabled. So this series first
updates the timeout overrides such that all tests pass when run in
my test gitlab CI pipeline. Then it enables the use of meson timeouts.

We might still hit timeouts due to non-deterministic performance of
gitlab CI runners. So we'll probably have to increase a few more
timeouts in the short term. Fortunately this is going to be massively
easier to diagnose. For example, in this job during my testing:

   https://gitlab.com/berrange/qemu/-/jobs/4392029495

we can immediately see the problem tests:

Summary of Failures:
  6/252 qemu:qtest+qtest-i386 / qtest-i386/bios-tables-test                TIMEOUT        120.02s   killed by signal 15 SIGTERM
  7/252 qemu:qtest+qtest-aarch64 / qtest-aarch64/bios-tables-test          TIMEOUT        120.03s   killed by signal 15 SIGTERM
 64/252 qemu:qtest+qtest-aarch64 / qtest-aarch64/qom-test                  TIMEOUT        300.03s   killed by signal 15 SIGTERM

The full meson testlog.txt will show each individual TAP log output,
so we can then see exactly which test case we got stuck on.
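
(That log is written as meson-logs/testlog.txt inside the build tree.)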

NB, the artifacts are missing from the job links above until this
patch merges:

   https://lists.gnu.org/archive/html/qemu-devel/2023-05/msg04668.html

Changed in v2:

 * Increase timeouts for many more tests, such that
   an --enable-debug build stands a better chance of
   passing tests too, without the user manually setting
   a timeout multiplier for meson.
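
   For reference, a larger multiplier can still be applied by hand when
   running the tests directly from the build tree, e.g. to triple every
   timeout:

      $ meson test --timeout-multiplier 3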

Daniel P. Berrangé (12):
  qtest: bump min meson timeout to 60 seconds
  qtest: bump migration-test timeout to 5 minutes
  qtest: bump qom-test timeout to 15 minutes
  qtest: bump npcm7xx_pwm-test timeout to 5 minutes
  qtest: bump test-hmp timeout to 4 minutes
  qtest: bump pxe-test timeout to 3 minutes
  qtest: bump prom-env-test timeout to 3 minutes
  qtest: bump boot-serial-test timeout to 3 minutes
  qtest: bump qos-test timeout to 2 minutes
  qtest: bump aspeed_smc-test timeout to 4 minutes
  qtest: bump bios-tables-test timeout to 9 minutes
  mtest2make: stop disabling meson test timeouts

 scripts/mtest2make.py   |  3 ++-
 tests/qtest/meson.build | 24 ++++++++++++------------
 2 files changed, 14 insertions(+), 13 deletions(-)

Comments

Alex Bennée Aug. 8, 2023, 8:57 a.m. UTC | #1
Daniel P. Berrangé <berrange@redhat.com> writes:

> Perhaps the most painful of all the GitLab CI failures we see are
> the enforced job timeouts:
>
>    "ERROR: Job failed: execution took longer than 1h15m0s seconds"
>
>    https://gitlab.com/qemu-project/qemu/-/jobs/4387047648
>
> When that hits, the CI log shows what has *already* run, but figuring
> out what was currently running (or rather stuck) is horrendously
> difficult.

I had this in my tree but I see there are a number of review comments to
take into account. Will there be a v3 and do we want it this late in the
cycle?
Thomas Huth Aug. 13, 2023, 7:02 a.m. UTC | #2
On 08/08/2023 10.57, Alex Bennée wrote:
> 
> Daniel P. Berrangé <berrange@redhat.com> writes:
> 
>> Perhaps the most painful of all the GitLab CI failures we see are
>> the enforced job timeouts:
>>
>>     "ERROR: Job failed: execution took longer than 1h15m0s seconds"
>>
>>     https://gitlab.com/qemu-project/qemu/-/jobs/4387047648
>>
>> When that hits, the CI log shows what has *already* run, but figuring
>> out what was currently running (or rather stuck) is horrendously
>> difficult.
> 
> I had this in my tree but I see there are a number of review comments to
> take into account. Will there be a v3 and do we want it this late in the
> cycle?

I think this could cause some false positives in the CI until we have
fine-tuned all related timeouts, so no, we don't want to have this in the
last release candidates of 8.1. We should commit it early in the 8.2 cycle 
(hoping that Daniel has some spare minutes to release a v3), so we can iron 
out the remaining issues there.

  Thomas
Daniel P. Berrangé Aug. 17, 2023, 10:36 a.m. UTC | #3
On Sun, Aug 13, 2023 at 09:02:03AM +0200, Thomas Huth wrote:
> On 08/08/2023 10.57, Alex Bennée wrote:
> > 
> > Daniel P. Berrangé <berrange@redhat.com> writes:
> > 
> > > Perhaps the most painful of all the GitLab CI failures we see are
> > > the enforced job timeouts:
> > > 
> > >     "ERROR: Job failed: execution took longer than 1h15m0s seconds"
> > > 
> > >     https://gitlab.com/qemu-project/qemu/-/jobs/4387047648
> > > 
> > > When that hits, the CI log shows what has *already* run, but figuring
> > > out what was currently running (or rather stuck) is horrendously
> > > difficult.
> > 
> > I had this in my tree but I see there are a number of review comments to
> > take into account. Will there be a v3 and do we want it this late in the
> > cycle?
> 
> I think this could cause some false positives in the CI until we have
> fine-tuned all related timeouts, so no, we don't want to have this in the
> last release candidates of 8.1. We should commit it early in the 8.2 cycle
> (hoping that Daniel has some spare minutes to release a v3), so we can iron
> out the remaining issues there.

Agreed, it is safer to wait until 8.2

I've been away on holiday, but will post a v3 after I catch up.


With regards,
Daniel