Message ID | 20250303130948.630029-6-smayhew@redhat.com (mailing list archive)
---|---
State | New |
Series | tweak results organization and reporting
On Mon, Mar 03, 2025 at 08:09:44AM -0500, Scott Mayhew wrote:
> Add 'fstests-show-results' makefile target to show test results.

Yay!

> Under the hood, it more or less just does 'find ... | xargs cat'.
>
> By default, the result.xml files will be shown for the most recent
> kernel run.  You can show the results for a different kernel by
> overriding the LAST_KERNEL variable, e.g.
> $ LAST_KERNEL=6.13.4-300.fc41.x86_64 make fstests-show-results
>
> You can change the files being shown by overriding the PATTERN variable.
> For example, to just see the summary:
> $ PATTERN="\( -name xunit_results.txt \)" make fstests-show-results
>
> or to see the summary and the bad results:
> $ PATTERN="\( -name xunit_results.txt -o -name \"*.bad\" \)" make fstests-show-results
>
> or you can do any combination thereof, e.g.
> $ LAST_KERNEL=6.13.4-300.fc41.x86_64 PATTERN="\( -name xunit_results.txt \)" make fstests-show-results
>
> Signed-off-by: Scott Mayhew <smayhew@redhat.com>
> ---
>  workflows/fstests/Makefile | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
>
> diff --git a/workflows/fstests/Makefile b/workflows/fstests/Makefile
> index 344a1f8..ef9c0fa 100644
> --- a/workflows/fstests/Makefile
> +++ b/workflows/fstests/Makefile
> @@ -115,6 +115,14 @@ ifneq (,$(COUNT))
>  FSTESTS_DYNAMIC_RUNTIME_VARS += , "oscheck_extra_args": "-I $(COUNT)"
>  endif
>
> +ifndef LAST_KERNEL
> +LAST_KERNEL := $(shell cat workflows/fstests/results/last-kernel.txt 2>/dev/null)
> +endif
> +
> +ifndef PATTERN
> +PATTERN := -name result.xml
> +endif
> +
>  fstests: $(FSTESTS_BASELINE_EXTRA)
>  	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) -l localhost,baseline,dev \
>  		-f 30 -i hosts playbooks/fstests.yml --skip-tags run_tests,copy_results $(LIMIT_HOSTS)
> @@ -218,6 +226,14 @@ fstests-dev-results: $(KDEVOPS_EXTRA_VARS)
>  		--extra-vars=@./extra_vars.yaml \
>  		$(LIMIT_HOSTS)
>
> +fstests-show-results:
> +ifdef LAST_KERNEL
> +	@find workflows/fstests/results/$(LAST_KERNEL) -type f $(PATTERN) \
> +		| xargs -I {} bash -c 'echo "{}:"; cat {}; echo;'

xml results are not human-friendly, and we already do post-processing to
unify each of the guest sections into one human-friendly output, the file
xunit_results.txt.  That is why, to support kdevops-ci [0] for fstests, we
have an "xfs" branch [1] which uses this file to populate the
ci.commit_extra file when we run tests on kdevops's github runners to
test XFS.

The kdevops repo has a kdevops_archive role whose main task file,
playbooks/roles/kdevops_archive/tasks/main.yml, looks for this
ci.commit_extra file; if present, it is used to commit the results into
the results repo kdevops-results-archive [2].  The kdevops kernel-ci page
for fstests [3] documents how the github runners work, and the
XFS-specific documentation page [4] links to the persistent
kdevops-results-archive results [5], which is essentially an easy way to
see commits into kdevops-results-archive related to XFS testing.  Sadly I
don't see that propagating well, but previous results do have it, such as
[6] and [7] in the commit log.  Since we rotate kdevops-results-archive by
epoch, once the archive fills to capacity we rotate to a new empty results
archive, so older results live in the epoch archives.
The xunit_results.txt is built by the kdevops playbooks/roles/fstests/tasks/main.yml task:

  - name: Print fstests results to xunit_results.txt on localhost if xunit xml file was found
    local_action: "shell ./python/workflows/fstests/gen_results_summary --results_file result.xml --print_section --output_file {{ fstests_results_target }}/{{ last_kernel }}/xunit_results.txt {{ fstests_results_target }}/"
    tags: [ 'oscheck', 'fstests', 'copy_results', 'print_results', 'augment_expunge_list' ]
    when:
      - xunit_files.matched > 0
    run_once: true

That python script is in playbooks/python/workflows/fstests/gen_results_summary.

Essentially this is something like what we should aim to get back as results:

Detailed test report:

KERNEL: 6.13.0-rc6+
CPUS:   8

xfs_crc_logdev: 1154 tests, 22 failures, 452 skipped, 13924 seconds
  Failures: generic/042 generic/054 generic/055 generic/081 generic/108 generic/361 generic/459 generic/704 generic/730 generic/731 generic/741 generic/746 xfs/078 xfs/157 xfs/158 xfs/160 xfs/161 xfs/188 xfs/294 xfs/597 xfs/598 xfs/604
xfs_crc_rtdev_extsize_28k: 1154 tests, 10 failures, 508 skipped, 14096 seconds
  Failures: generic/741 xfs/078 xfs/185 xfs/188 xfs/597 xfs/598 xfs/629 xfs/630 xfs/631 xfs/632
xfs_nocrc: 1154 tests, 10 failures, 431 skipped, 14585 seconds
  Failures: generic/741 generic/753 xfs/078 xfs/188 xfs/189 xfs/348 xfs/597 xfs/598 xfs/803 xfs/804
xfs_nocrc_4k: 1154 tests, 11 failures, 432 skipped, 14613 seconds
  Failures: generic/741 generic/753 xfs/078 xfs/188 xfs/189 xfs/301 xfs/348 xfs/597 xfs/598 xfs/803 xfs/804
xfs_crc: 1154 tests, 5 failures, 419 skipped, 14778 seconds
  Failures: generic/741 xfs/078 xfs/188 xfs/597 xfs/598
xfs_crc_rtdev_extsize_64k: 1154 tests, 7 failures, 481 skipped, 14890 seconds
  Failures: generic/741 xfs/078 xfs/188 xfs/597 xfs/598 xfs/629 xfs/632
xfs_crc_rtdev: 1154 tests, 5 failures, 477 skipped, 14897 seconds
  Failures: generic/741 xfs/078 xfs/188 xfs/597 xfs/598
xfs_crc_logdev_rtdev: 1154 tests, 19 failures, 482 skipped, 15098 seconds
  Failures: generic/042 generic/054 generic/081 generic/108 generic/361 generic/459 generic/704 generic/730 generic/731 generic/741 generic/746 xfs/002 xfs/078 xfs/157 xfs/188 xfs/294 xfs/597 xfs/598 xfs/604
xfs_nocrc_2k: 1154 tests, 9 failures, 432 skipped, 15504 seconds
  Failures: generic/741 xfs/078 xfs/188 xfs/189 xfs/348 xfs/597 xfs/598 xfs/803 xfs/804
xfs_nocrc_1k: 1154 tests, 10 failures, 435 skipped, 16748 seconds
  Failures: generic/741 xfs/078 xfs/188 xfs/189 xfs/348 xfs/597 xfs/598 xfs/629 xfs/803 xfs/804
xfs_nocrc_512: 1154 tests, 14 failures, 438 skipped, 18325 seconds
  Failures: generic/219 generic/741 xfs/008 xfs/071 xfs/078 xfs/188 xfs/189 xfs/301 xfs/348 xfs/597 xfs/598 xfs/629 xfs/803 xfs/804
xfs_reflink_8k_4ks: 1154 tests, 7 failures, 143 skipped, 20860 seconds
  Failures: generic/741 xfs/078 xfs/157 xfs/188 xfs/529 xfs/597 xfs/598
xfs_reflink_normapbt: 1154 tests, 8 failures, 142 skipped, 21048 seconds
  Failures: generic/741 xfs/059 xfs/078 xfs/157 xfs/188 xfs/301 xfs/597 xfs/598
xfs_reflink_logdev: 1154 tests, 19 failures, 150 skipped, 21990 seconds
  Failures: generic/042 generic/054 generic/055 generic/081 generic/108 generic/361 generic/459 generic/704 generic/730 generic/731 generic/741 generic/746 xfs/078 xfs/188 xfs/294 xfs/507 xfs/597 xfs/598 xfs/604
xfs_reflink_16k_4ks: 1154 tests, 7 failures, 143 skipped, 22088 seconds
  Failures: generic/102 generic/741 xfs/078 xfs/157 xfs/188 xfs/597 xfs/598
xfs_reflink: 1154 tests, 6 failures, 121 skipped, 22144 seconds
  Failures: generic/741 xfs/078 xfs/157 xfs/188 xfs/597
    xfs/598
xfs_reflink_4k: 1154 tests, 6 failures, 137 skipped, 22209 seconds
  Failures: generic/741 xfs/078 xfs/157 xfs/188 xfs/597 xfs/598
xfs_reflink_nrext64: 1154 tests, 7 failures, 121 skipped, 22240 seconds
  Failures: generic/670 generic/741 xfs/078 xfs/157 xfs/188 xfs/597 xfs/598
xfs_reflink_dir_bsize_8k: 1154 tests, 6 failures, 121 skipped, 22526 seconds
  Failures: generic/741 xfs/078 xfs/157 xfs/188 xfs/597 xfs/598
xfs_reflink_stripe_len: 1154 tests, 7 failures, 123 skipped, 22714 seconds
  Failures: generic/741 xfs/059 xfs/078 xfs/157 xfs/188 xfs/597 xfs/598
xfs_reflink_2k: 1154 tests, 6 failures, 138 skipped, 23596 seconds
  Failures: generic/741 xfs/078 xfs/157 xfs/188 xfs/597 xfs/598
xfs_reflink_32k_4ks: 1154 tests, 8 failures, 146 skipped, 23850 seconds
  Failures: generic/102 generic/172 generic/741 xfs/078 xfs/157 xfs/188 xfs/597 xfs/598
xfs_reflink_1024: 1154 tests, 8 failures, 125 skipped, 26285 seconds
  Failures: generic/741 xfs/078 xfs/157 xfs/168 xfs/188 xfs/597 xfs/598 xfs/629
xfs_reflink_64k_4ks: 1154 tests, 26 failures, 147 skipped, 27644 seconds
  Failures: generic/048 generic/102 generic/133 generic/172 generic/251 generic/299 generic/347 generic/459 generic/562 generic/741 xfs/009 xfs/066 xfs/078 xfs/129 xfs/139 xfs/140 xfs/157 xfs/169 xfs/188 xfs/229 xfs/234 xfs/236 xfs/503 xfs/508 xfs/597 xfs/598

Totals: 27696 tests, 6744 skipped, 243 failures, 0 errors, 429867s

So if you add support for fstests-show-results to use this, I can just
modify the kdevops-ci xfs branch like this:

diff --git a/.github/workflows/kdevops-fstests.yml b/.github/workflows/kdevops-fstests.yml
index b2da84e96f42..cad44c6c800f 100644
--- a/.github/workflows/kdevops-fstests.yml
+++ b/.github/workflows/kdevops-fstests.yml
@@ -1715,7 +1715,7 @@ jobs:
             sleep 60
           done
-          find workflows/fstests/results/last-run -name xunit_results.txt -type f -exec cat {} \; > ci.commit_extra || true
+          make fstests-show-results > ci.commit_extra
           if ! grep -E "failures, [1-9]|errors, [1-9]" ci.commit_extra; then
             echo "ok" > ci.result
           fi

> +else
> +	@echo "No results."
> +endif

Notice though that if $(make fstests-show-results) fails, the CI will
fail, so I think the above is fine as it does not return non-zero if the
results file is not present.

[0] https://github.com/linux-kdevops/kdevops/blob/main/docs/kernel-ci/README.md
[1] https://github.com/linux-kdevops/kdevops-ci/tree/xfs
[2] https://github.com/linux-kdevops/kdevops-results-archive
[3] https://github.com/linux-kdevops/kdevops/blob/main/docs/kernel-ci/linux-filesystems-kdevops-CI-testing.md
[4] https://github.com/linux-kdevops/kdevops/blob/main/docs/kernel-ci/linux-xfs-kdevops-ci.md
[5] https://github.com/search?q=repo%3Alinux-kdevops%2Fkdevops-results-archive+is%3Acommit+%22linux-xfs-kpd%3A%22&type=commits
[6] https://github.com/linux-kdevops/kdevops-results-archive-2025-05/commit/d105803b5f1e9db44078dac149b285917a3aecaf
[7] https://github.com/linux-kdevops/kdevops-results-archive-2025-05/commit/1b94c7227e58c0fb8e3f6362fd59e482d373c433

  Luis
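For illustration only, the human-friendly summary being asked for above can already be produced with the target as posted by overriding PATTERN; the following is just a sketch of how a CI step might consume it, reusing the PATTERN example from the commit message and the grep check from the kdevops-ci diff above:

  # Sketch: show only xunit_results.txt via the PATTERN override, then
  # apply the same pass/fail check the kdevops-ci diff uses.
  PATTERN="\( -name xunit_results.txt \)" make fstests-show-results > ci.commit_extra
  if ! grep -E "failures, [1-9]|errors, [1-9]" ci.commit_extra; then
          echo "ok" > ci.result
  fi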
diff --git a/workflows/fstests/Makefile b/workflows/fstests/Makefile
index 344a1f8..ef9c0fa 100644
--- a/workflows/fstests/Makefile
+++ b/workflows/fstests/Makefile
@@ -115,6 +115,14 @@ ifneq (,$(COUNT))
 FSTESTS_DYNAMIC_RUNTIME_VARS += , "oscheck_extra_args": "-I $(COUNT)"
 endif
 
+ifndef LAST_KERNEL
+LAST_KERNEL := $(shell cat workflows/fstests/results/last-kernel.txt 2>/dev/null)
+endif
+
+ifndef PATTERN
+PATTERN := -name result.xml
+endif
+
 fstests: $(FSTESTS_BASELINE_EXTRA)
 	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) -l localhost,baseline,dev \
 		-f 30 -i hosts playbooks/fstests.yml --skip-tags run_tests,copy_results $(LIMIT_HOSTS)
@@ -218,6 +226,14 @@ fstests-dev-results: $(KDEVOPS_EXTRA_VARS)
 		--extra-vars=@./extra_vars.yaml \
 		$(LIMIT_HOSTS)
 
+fstests-show-results:
+ifdef LAST_KERNEL
+	@find workflows/fstests/results/$(LAST_KERNEL) -type f $(PATTERN) \
+		| xargs -I {} bash -c 'echo "{}:"; cat {}; echo;'
+else
+	@echo "No results."
+endif
+
 fstests-help-menu:
 	@echo "fstests options:"
 	@echo "fstests - Git clones fstests, builds and install it"
Add 'fstests-show-results' makefile target to show test results.  Under
the hood, it more or less just does 'find ... | xargs cat'.

By default, the result.xml files will be shown for the most recent
kernel run.  You can show the results for a different kernel by
overriding the LAST_KERNEL variable, e.g.
$ LAST_KERNEL=6.13.4-300.fc41.x86_64 make fstests-show-results

You can change the files being shown by overriding the PATTERN variable.
For example, to just see the summary:
$ PATTERN="\( -name xunit_results.txt \)" make fstests-show-results

or to see the summary and the bad results:
$ PATTERN="\( -name xunit_results.txt -o -name \"*.bad\" \)" make fstests-show-results

or you can do any combination thereof, e.g.
$ LAST_KERNEL=6.13.4-300.fc41.x86_64 PATTERN="\( -name xunit_results.txt \)" make fstests-show-results

Signed-off-by: Scott Mayhew <smayhew@redhat.com>
---
 workflows/fstests/Makefile | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)
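For reference, with the defaults described above the new target boils down to roughly the following shell (a sketch only; LAST_KERNEL is read from workflows/fstests/results/last-kernel.txt and PATTERN defaults to matching result.xml):

  # What a default 'make fstests-show-results' effectively runs.
  LAST_KERNEL=$(cat workflows/fstests/results/last-kernel.txt 2>/dev/null)
  find workflows/fstests/results/$LAST_KERNEL -type f -name result.xml \
          | xargs -I {} bash -c 'echo "{}:"; cat {}; echo;'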