Message ID | 20221102160336.616599-1-mst@redhat.com (mailing list archive)
---|---
State | New, archived
On Wed, Nov 02, 2022 at 12:02:14PM -0400, Michael S. Tsirkin wrote:
> Changes from v1:
>
> Applied and squashed fixes by Igor, Lei He, Hesham Almatary for
> bugs that tripped up the pipeline.
> Updated expected files for core-count test.

Several "make check" CI failures have occurred. They look like they are
related. Here is one (see the URLs at the bottom of this email for more
details):

17/106 ERROR:../tests/qtest/qos-test.c:191:subprocess_run_one_test: child process (/arm/virt/virtio-mmio/virtio-bus/virtio-net-device/virtio-net/virtio-net-tests/vhost-user/flags-mismatch/subprocess [8609]) failed unexpectedly ERROR
17/106 qemu:qtest+qtest-arm / qtest-arm/qos-test ERROR 31.44s killed by signal 6 SIGABRT
>>> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh MALLOC_PERTURB_=49 QTEST_QEMU_IMG=./qemu-img QTEST_QEMU_BINARY=./qemu-system-arm QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon /builds/qemu-project/qemu/build/tests/qtest/qos-test --tap -k
――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
stderr:
qemu-system-arm: Failed to write msg. Wrote -1 instead of 20.
qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
qemu-system-arm: Failed to set msg fds.
qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument (22)
qemu-system-arm: -chardev socket,id=chr-reconnect,path=/tmp/vhost-test-6PT2U1/reconnect.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-6PT2U1/reconnect.sock,server=on
qemu-system-arm: Failed to write msg. Wrote -1 instead of 20.
qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
qemu-system-arm: Failed to set msg fds.
qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument (22)
qemu-system-arm: -chardev socket,id=chr-connect-fail,path=/tmp/vhost-test-H8G7U1/connect-fail.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-H8G7U1/connect-fail.sock,server=on
qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: Failed to read msg header. Read 0 instead of 12. Original request 1.
qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: vhost_backend_init failed: Protocol error
qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: failed to init vhost_net for queue 0
qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-H8G7U1/connect-fail.sock,server=on
qemu-system-arm: Failed to write msg. Wrote -1 instead of 20.
qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
qemu-system-arm: Failed to set msg fds.
qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument (22)
qemu-system-arm: -chardev socket,id=chr-flags-mismatch,path=/tmp/vhost-test-94UYU1/flags-mismatch.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-94UYU1/flags-mismatch.sock,server=on
qemu-system-arm: Failed to write msg. Wrote -1 instead of 52.
qemu-system-arm: vhost_set_mem_table failed: Invalid argument (22)
qemu-system-arm: Failed to set msg fds.
qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
UndefinedBehaviorSanitizer:DEADLYSIGNAL
==8618==ERROR: UndefinedBehaviorSanitizer: SEGV on unknown address 0x000000000000 (pc 0x55e34deccab0 bp 0x000000000000 sp 0x7ffc94894710 T8618)
==8618==The signal is caused by a READ memory access.
==8618==Hint: address points to the zero page.
    #0 0x55e34deccab0 in ldl_he_p /builds/qemu-project/qemu/include/qemu/bswap.h:301:5
    #1 0x55e34deccab0 in ldn_he_p /builds/qemu-project/qemu/include/qemu/bswap.h:440:1
    #2 0x55e34deccab0 in flatview_write_continue /builds/qemu-project/qemu/build/../softmmu/physmem.c:2824:19
    #3 0x55e34dec9f21 in flatview_write /builds/qemu-project/qemu/build/../softmmu/physmem.c:2867:12
    #4 0x55e34dec9f21 in address_space_write /builds/qemu-project/qemu/build/../softmmu/physmem.c:2963:18
    #5 0x55e34decace7 in address_space_unmap /builds/qemu-project/qemu/build/../softmmu/physmem.c:3306:9
    #6 0x55e34de6d4ec in vhost_memory_unmap /builds/qemu-project/qemu/build/../hw/virtio/vhost.c:342:9
    #7 0x55e34de6d4ec in vhost_virtqueue_stop /builds/qemu-project/qemu/build/../hw/virtio/vhost.c:1242:5
    #8 0x55e34de72904 in vhost_dev_stop /builds/qemu-project/qemu/build/../hw/virtio/vhost.c:1882:9
    #9 0x55e34d890514 in vhost_net_stop_one /builds/qemu-project/qemu/build/../hw/net/vhost_net.c:331:5
    #10 0x55e34d88fef6 in vhost_net_start /builds/qemu-project/qemu/build/../hw/net/vhost_net.c:404:13
    #11 0x55e34de0bec6 in virtio_net_vhost_status /builds/qemu-project/qemu/build/../hw/net/virtio-net.c:307:13
    #12 0x55e34de0bec6 in virtio_net_set_status /builds/qemu-project/qemu/build/../hw/net/virtio-net.c:388:5
    #13 0x55e34de5e409 in virtio_set_status /builds/qemu-project/qemu/build/../hw/virtio/virtio.c:2442:9
    #14 0x55e34da22a50 in virtio_mmio_write /builds/qemu-project/qemu/build/../hw/virtio/virtio-mmio.c:428:9
    #15 0x55e34deb44a6 in memory_region_write_accessor /builds/qemu-project/qemu/build/../softmmu/memory.c:493:5
    #16 0x55e34deb428a in access_with_adjusted_size /builds/qemu-project/qemu/build/../softmmu/memory.c:555:18
    #17 0x55e34deb402d in memory_region_dispatch_write /builds/qemu-project/qemu/build/../softmmu/memory.c
    #18 0x55e34deccaf1 in flatview_write_continue /builds/qemu-project/qemu/build/../softmmu/physmem.c:2825:23
    #19 0x55e34dec9f21 in flatview_write /builds/qemu-project/qemu/build/../softmmu/physmem.c:2867:12
    #20 0x55e34dec9f21 in address_space_write /builds/qemu-project/qemu/build/../softmmu/physmem.c:2963:18
    #21 0x55e34ded0bf6 in qtest_process_command /builds/qemu-project/qemu/build/../softmmu/qtest.c
    #22 0x55e34ded008d in qtest_process_inbuf /builds/qemu-project/qemu/build/../softmmu/qtest.c:796:9
    #23 0x55e34e109b02 in tcp_chr_read /builds/qemu-project/qemu/build/../chardev/char-socket.c:508:13
    #24 0x7fc6c665d0ae in g_main_context_dispatch (/lib64/libglib-2.0.so.0+0x550ae)
    #25 0x55e34e1fc1bc in glib_pollfds_poll /builds/qemu-project/qemu/build/../util/main-loop.c:297:9
    #26 0x55e34e1fc1bc in os_host_main_loop_wait /builds/qemu-project/qemu/build/../util/main-loop.c:320:5
    #27 0x55e34e1fc1bc in main_loop_wait /builds/qemu-project/qemu/build/../util/main-loop.c:596:11
    #28 0x55e34da52de6 in qemu_main_loop /builds/qemu-project/qemu/build/../softmmu/runstate.c:739:9
    #29 0x55e34d60a4f5 in qemu_default_main /builds/qemu-project/qemu/build/../softmmu/main.c:37:14
    #30 0x7fc6c43a5eaf in __libc_start_call_main (/lib64/libc.so.6+0x3feaf)
    #31 0x7fc6c43a5f5f in __libc_start_main@GLIBC_2.2.5 (/lib64/libc.so.6+0x3ff5f)
    #32 0x55e34d5e1094 in _start (/builds/qemu-project/qemu/build/qemu-system-arm+0xc17094)
UndefinedBehaviorSanitizer can not provide additional info.
SUMMARY: UndefinedBehaviorSanitizer: SEGV /builds/qemu-project/qemu/include/qemu/bswap.h:301:5 in ldl_he_p
==8618==ABORTING
Broken pipe
../tests/qtest/libqtest.c:179: kill_qemu() tried to terminate QEMU process but encountered exit status 1 (expected 0)
**
ERROR:../tests/qtest/qos-test.c:191:subprocess_run_one_test: child process (/arm/virt/virtio-mmio/virtio-bus/virtio-net-device/virtio-net/virtio-net-tests/vhost-user/flags-mismatch/subprocess [8609]) failed unexpectedly
(test program exited with status code -6)

https://gitlab.com/qemu-project/qemu/-/jobs/3265209698
https://gitlab.com/qemu-project/qemu/-/pipelines/683909108

Stefan
On Wed, Nov 02, 2022 at 03:47:43PM -0400, Stefan Hajnoczi wrote:
> On Wed, Nov 02, 2022 at 12:02:14PM -0400, Michael S. Tsirkin wrote:
> > Changes from v1:
> >
> > Applied and squashed fixes by Igor, Lei He, Hesham Almatary for
> > bugs that tripped up the pipeline.
> > Updated expected files for core-count test.
>
> Several "make check" CI failures have occurred. They look like they are
> related. Here is one (see the URLs at the bottom of this email for more
> details):
>
> [CI log and UBSan backtrace snipped]
>
> https://gitlab.com/qemu-project/qemu/-/jobs/3265209698
> https://gitlab.com/qemu-project/qemu/-/pipelines/683909108
>
> Stefan

Ugh. I need to build with ubsan to reproduce yes? didn't trigger for me
I am wondering how to bisect on gitlab.
On Thu, 3 Nov 2022 at 08:14, Michael S. Tsirkin <mst@redhat.com> wrote:
> On Wed, Nov 02, 2022 at 03:47:43PM -0400, Stefan Hajnoczi wrote:
> > [CI failure report, log, and UBSan backtrace snipped]
>
> Ugh. I need to build with ubsan to reproduce yes? didn't trigger for me
> I am wondering how to bisect on gitlab.

I searched for "clang-system" (the name of the job) in .gitlab-ci.d to get
the job commands. The GitLab job output also contains details of the
commands that were run (unfortunately it doesn't expand environment
variables, so some aspects are not visible from the GitLab output).

That led to the following local command-line:

$ git checkout 645ec851
$ ./configure --enable-werror --disable-docs --target-list=arm-softmmu \
      --cc=clang --cxx=clang++ \
      --extra-cflags=-fsanitize=undefined \
      --extra-cflags=-fno-sanitize-recover=undefined && make check-qtest

It reproduces locally on my Fedora 36 machine.

Stefan
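Once the failure reproduces locally with this configure line, the same command can drive `git bisect run` to answer the bisection question. The following is only a sketch: the good revision (`v7.1.0`) and the dry-run style (printing the command rather than executing it) are assumptions for illustration, not taken from the thread.

```shell
# Known-bad commit from the failing pipeline; good revision is a placeholder.
bad=645ec851
good=v7.1.0

# The reproducer command (same flags as the clang-system CI job).
check='./configure --enable-werror --disable-docs --target-list=arm-softmmu \
  --cc=clang --cxx=clang++ \
  --extra-cflags=-fsanitize=undefined \
  --extra-cflags=-fno-sanitize-recover=undefined && make check-qtest'

# git bisect run interprets exit 0 as good, 1-124/126-127 as bad, 125 as skip,
# so the reproducer's exit status can be used directly.
bisect_cmd="git bisect start $bad $good && git bisect run sh -c \"$check\""
echo "$bisect_cmd"
```

Running this prints the composed bisect invocation; paste it into a checked-out tree to start the actual bisection.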
On 3/11/22 13:13, Michael S. Tsirkin wrote:
> On Wed, Nov 02, 2022 at 03:47:43PM -0400, Stefan Hajnoczi wrote:
>> [CI failure report, log, and UBSan backtrace snipped]
>
> Ugh. I need to build with ubsan to reproduce yes? didn't trigger for me
> I am wondering how to bisect on gitlab.

GitLab builds within a container, so you can pull the same image and run the
build steps locally inside it. Per
https://gitlab.com/qemu-project/qemu/-/jobs/3265209698#L23:

  Using Docker executor with image registry.gitlab.com/qemu-project/[MASKED]/fedora:latest ...
  Using docker image sha256:3a352388ce66a26d125a580b1a09c8c9884e47caac07a36dda834f8e3b3fff97 for registry.gitlab.com/qemu-project/[MASKED]/fedora:latest with digest registry.gitlab.com/qemu-project/[MASKED]/fedora@sha256:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26 ...

To pull this image:

$ docker pull registry.gitlab.com/qemu-project/qemu/fedora:latest

(or, to be sure of pulling the very same one, pull by digest:)

$ docker pull registry.gitlab.com/qemu-project/qemu/fedora@sha256:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26

Then you can reproduce the same build steps within a container instance.

Regards,

Phil.
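Pull and run can be combined into one step. A sketch only: the `/qemu` mount point and workdir are assumptions, and the configure flags are reused from Stefan's local reproducer rather than extracted from the exact CI job script.

```shell
# CI image named in the job log (with the masked path filled in from the
# docker pull commands above).
image=registry.gitlab.com/qemu-project/qemu/fedora:latest

# Compose the command: mount the current source tree at /qemu inside the
# container and run the sanitizer build + qtests there.  \$PWD is escaped so
# it expands when the printed command is eventually run.
docker_cmd="docker run --rm -it -v \$PWD:/qemu -w /qemu $image \
  bash -c './configure --cc=clang --cxx=clang++ \
    --extra-cflags=-fsanitize=undefined \
    --extra-cflags=-fno-sanitize-recover=undefined \
    --target-list=arm-softmmu && make check-qtest'"
echo "$docker_cmd"
```

This prints the invocation rather than executing it, so it can be inspected before being run from a QEMU checkout on a machine with Docker available.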
On Thu, Nov 03, 2022 at 09:29:56AM -0400, Stefan Hajnoczi wrote: > On Thu, 3 Nov 2022 at 08:14, Michael S. Tsirkin <mst@redhat.com> wrote: > > On Wed, Nov 02, 2022 at 03:47:43PM -0400, Stefan Hajnoczi wrote: > > > On Wed, Nov 02, 2022 at 12:02:14PM -0400, Michael S. Tsirkin wrote: > > > > Changes from v1: > > > > > > > > Applied and squashed fixes by Igor, Lei He, Hesham Almatary for > > > > bugs that tripped up the pipeline. > > > > Updated expected files for core-count test. > > > > > > Several "make check" CI failures have occurred. They look like they are > > > related. Here is one (see the URLs at the bottom of this email for more > > > details): > > > > > > 17/106 ERROR:../tests/qtest/qos-test.c:191:subprocess_run_one_test: child process (/arm/virt/virtio-mmio/virtio-bus/virtio-net-device/virtio-net/virtio-net-tests/vhost-user/flags-mismatch/subprocess [8609]) failed unexpectedly ERROR > > > 17/106 qemu:qtest+qtest-arm / qtest-arm/qos-test ERROR 31.44s killed by signal 6 SIGABRT > > > >>> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh MALLOC_PERTURB_=49 QTEST_QEMU_IMG=./qemu-img QTEST_QEMU_BINARY=./qemu-system-arm QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon /builds/qemu-project/qemu/build/tests/qtest/qos-test --tap -k > > > ――――――――――――――――――――――――――――――――――――― ✀ ――――――――――――――――――――――――――――――――――――― > > > stderr: > > > qemu-system-arm: Failed to write msg. Wrote -1 instead of 20. > > > qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22) > > > qemu-system-arm: Failed to set msg fds. > > > qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument (22) > > > qemu-system-arm: -chardev socket,id=chr-reconnect,path=/tmp/vhost-test-6PT2U1/reconnect.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-6PT2U1/reconnect.sock,server=on > > > qemu-system-arm: Failed to write msg. Wrote -1 instead of 20. 
> > > qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22) > > > qemu-system-arm: Failed to set msg fds. > > > qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument (22) > > > qemu-system-arm: -chardev socket,id=chr-connect-fail,path=/tmp/vhost-test-H8G7U1/connect-fail.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-H8G7U1/connect-fail.sock,server=on > > > qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: Failed to read msg header. Read 0 instead of 12. Original request 1. > > > qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: vhost_backend_init failed: Protocol error > > > qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: failed to init vhost_net for queue 0 > > > qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-H8G7U1/connect-fail.sock,server=on > > > qemu-system-arm: Failed to write msg. Wrote -1 instead of 20. > > > qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22) > > > qemu-system-arm: Failed to set msg fds. > > > qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument (22) > > > qemu-system-arm: -chardev socket,id=chr-flags-mismatch,path=/tmp/vhost-test-94UYU1/flags-mismatch.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-94UYU1/flags-mismatch.sock,server=on > > > qemu-system-arm: Failed to write msg. Wrote -1 instead of 52. > > > qemu-system-arm: vhost_set_mem_table failed: Invalid argument (22) > > > qemu-system-arm: Failed to set msg fds. 
> > > qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22) > > > UndefinedBehaviorSanitizer:DEADLYSIGNAL > > > ==8618==ERROR: UndefinedBehaviorSanitizer: SEGV on unknown address 0x000000000000 (pc 0x55e34deccab0 bp 0x000000000000 sp 0x7ffc94894710 T8618) > > > ==8618==The signal is caused by a READ memory access. > > > ==8618==Hint: address points to the zero page. > > > #0 0x55e34deccab0 in ldl_he_p /builds/qemu-project/qemu/include/qemu/bswap.h:301:5 > > > #1 0x55e34deccab0 in ldn_he_p /builds/qemu-project/qemu/include/qemu/bswap.h:440:1 > > > #2 0x55e34deccab0 in flatview_write_continue /builds/qemu-project/qemu/build/../softmmu/physmem.c:2824:19 > > > #3 0x55e34dec9f21 in flatview_write /builds/qemu-project/qemu/build/../softmmu/physmem.c:2867:12 > > > #4 0x55e34dec9f21 in address_space_write /builds/qemu-project/qemu/build/../softmmu/physmem.c:2963:18 > > > #5 0x55e34decace7 in address_space_unmap /builds/qemu-project/qemu/build/../softmmu/physmem.c:3306:9 > > > #6 0x55e34de6d4ec in vhost_memory_unmap /builds/qemu-project/qemu/build/../hw/virtio/vhost.c:342:9 > > > #7 0x55e34de6d4ec in vhost_virtqueue_stop /builds/qemu-project/qemu/build/../hw/virtio/vhost.c:1242:5 > > > #8 0x55e34de72904 in vhost_dev_stop /builds/qemu-project/qemu/build/../hw/virtio/vhost.c:1882:9 > > > #9 0x55e34d890514 in vhost_net_stop_one /builds/qemu-project/qemu/build/../hw/net/vhost_net.c:331:5 > > > #10 0x55e34d88fef6 in vhost_net_start /builds/qemu-project/qemu/build/../hw/net/vhost_net.c:404:13 > > > #11 0x55e34de0bec6 in virtio_net_vhost_status /builds/qemu-project/qemu/build/../hw/net/virtio-net.c:307:13 > > > #12 0x55e34de0bec6 in virtio_net_set_status /builds/qemu-project/qemu/build/../hw/net/virtio-net.c:388:5 > > > #13 0x55e34de5e409 in virtio_set_status /builds/qemu-project/qemu/build/../hw/virtio/virtio.c:2442:9 > > > #14 0x55e34da22a50 in virtio_mmio_write /builds/qemu-project/qemu/build/../hw/virtio/virtio-mmio.c:428:9 > > > #15 0x55e34deb44a6 in 
memory_region_write_accessor /builds/qemu-project/qemu/build/../softmmu/memory.c:493:5 > > > #16 0x55e34deb428a in access_with_adjusted_size /builds/qemu-project/qemu/build/../softmmu/memory.c:555:18 > > > #17 0x55e34deb402d in memory_region_dispatch_write /builds/qemu-project/qemu/build/../softmmu/memory.c > > > #18 0x55e34deccaf1 in flatview_write_continue /builds/qemu-project/qemu/build/../softmmu/physmem.c:2825:23 > > > #19 0x55e34dec9f21 in flatview_write /builds/qemu-project/qemu/build/../softmmu/physmem.c:2867:12 > > > #20 0x55e34dec9f21 in address_space_write /builds/qemu-project/qemu/build/../softmmu/physmem.c:2963:18 > > > #21 0x55e34ded0bf6 in qtest_process_command /builds/qemu-project/qemu/build/../softmmu/qtest.c > > > #22 0x55e34ded008d in qtest_process_inbuf /builds/qemu-project/qemu/build/../softmmu/qtest.c:796:9 > > > #23 0x55e34e109b02 in tcp_chr_read /builds/qemu-project/qemu/build/../chardev/char-socket.c:508:13 > > > #24 0x7fc6c665d0ae in g_main_context_dispatch (/lib64/libglib-2.0.so.0+0x550ae) > > > #25 0x55e34e1fc1bc in glib_pollfds_poll /builds/qemu-project/qemu/build/../util/main-loop.c:297:9 > > > #26 0x55e34e1fc1bc in os_host_main_loop_wait /builds/qemu-project/qemu/build/../util/main-loop.c:320:5 > > > #27 0x55e34e1fc1bc in main_loop_wait /builds/qemu-project/qemu/build/../util/main-loop.c:596:11 > > > #28 0x55e34da52de6 in qemu_main_loop /builds/qemu-project/qemu/build/../softmmu/runstate.c:739:9 > > > #29 0x55e34d60a4f5 in qemu_default_main /builds/qemu-project/qemu/build/../softmmu/main.c:37:14 > > > #30 0x7fc6c43a5eaf in __libc_start_call_main (/lib64/libc.so.6+0x3feaf) > > > #31 0x7fc6c43a5f5f in __libc_start_main@GLIBC_2.2.5 (/lib64/libc.so.6+0x3ff5f) > > > #32 0x55e34d5e1094 in _start (/builds/qemu-project/qemu/build/qemu-system-arm+0xc17094) > > > UndefinedBehaviorSanitizer can not provide additional info. 
> > > SUMMARY: UndefinedBehaviorSanitizer: SEGV /builds/qemu-project/qemu/include/qemu/bswap.h:301:5 in ldl_he_p > > > ==8618==ABORTING > > > Broken pipe > > > ../tests/qtest/libqtest.c:179: kill_qemu() tried to terminate QEMU process but encountered exit status 1 (expected 0) > > > ** > > > ERROR:../tests/qtest/qos-test.c:191:subprocess_run_one_test: child process (/arm/virt/virtio-mmio/virtio-bus/virtio-net-device/virtio-net/virtio-net-tests/vhost-user/flags-mismatch/subprocess [8609]) failed unexpectedly > > > (test program exited with status code -6) > > > > > > https://gitlab.com/qemu-project/qemu/-/jobs/3265209698 > > > https://gitlab.com/qemu-project/qemu/-/pipelines/683909108 > > > > > > Stefan > > > > > > Ugh. I need to build with ubsan to reproduce yes? didn't trigger for me > > I am wondering how to bisect on gitlab. > > I searched for "clang-system" (the name of the job) in .gitlab-ci.d to > get the job commands. The GitLab job output also contains details of > the commands that were run (unfortunately it doesn't expand > environment variables so some aspects are not visible from the GitLab > output). > > That led to the following local command-line: > > $ git checkout 645ec851 > $ ./configure --enable-werror --disable-docs --target-list=arm-softmmu > --cc=clang --cxx=clang++ --extra-cflags=-fsanitize=undefined > --extra-cflags=-fno-sanitize-recover=undefined && make check-qtest > It reproduces locally on my Fedora 36 machine. > > Stefan Oh I'm still on 35, that's why.
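Stefan's reproduction recipe can be collected into one small script. This is only a sketch: it prints the steps rather than running them (they need a QEMU checkout, clang, and significant build time), with the commit hash and configure flags copied verbatim from the thread.

```shell
# Reproduction steps from Stefan's email, kept in variables and printed
# as a checklist instead of executed, so the sketch runs anywhere.
COMMIT=645ec851
CONFIGURE="./configure --enable-werror --disable-docs --target-list=arm-softmmu --cc=clang --cxx=clang++ --extra-cflags=-fsanitize=undefined --extra-cflags=-fno-sanitize-recover=undefined"

echo "git checkout $COMMIT"
echo "$CONFIGURE"
echo "make check-qtest"
```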
> To pull this image: > $ docker pull registry.gitlab.com/qemu-project/qemu/fedora:latest Actually the URL is: $ docker pull registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest > (or to be sure to pull the very same:) > $ docker pull > registry.gitlab.com/qemu-project/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26 Same here, registry.gitlab.com/qemu-project/qemu/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26 See https://gitlab.com/qemu-project/qemu/container_registry/1215910
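Putting Phil's instructions together with Ani's corrected registry path gives the following. It is a dry-run sketch that only prints the docker commands (so it works even without Docker installed); the `@sha256:` digest form and the mount/workdir choices are my assumptions, not commands from the thread.

```shell
# Corrected registry path (note the doubled "qemu/qemu" component) plus
# the digest reported in the job log, so the exact image is pinned.
IMAGE="registry.gitlab.com/qemu-project/qemu/qemu/fedora@sha256:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26"

# Dry run: print the commands instead of executing them.
echo "docker pull $IMAGE"
echo "docker run --rm -it -v \$PWD:/qemu -w /qemu $IMAGE bash"
```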
gitlab-runner can run locally with minimal setup: https://bagong.gitlab.io/posts/run-gitlab-ci-locally/ I haven't tried it yet, but that seems like the most reliable (and easiest) way to reproduce the CI environment. Stefan
On Thu, Nov 03, 2022 at 11:49:21AM -0400, Stefan Hajnoczi wrote: > gitlab-runner can run locally with minimal setup: > https://bagong.gitlab.io/posts/run-gitlab-ci-locally/ > > I haven't tried it yet, but that seems like the most reliable (and > easiest) way to reproduce the CI environment. > > Stefan How does one pass in variables, do you know? Via the environment?
On Thu, Nov 03, 2022 at 11:49:21AM -0400, Stefan Hajnoczi wrote: > gitlab-runner can run locally with minimal setup: > https://bagong.gitlab.io/posts/run-gitlab-ci-locally/ > > I haven't tried it yet, but that seems like the most reliable (and > easiest) way to reproduce the CI environment. IMHO that is total overkill. Just running the containers directly is what I'd recommend for any attempt to reproduce problems. There isn't actually anything gitlab-specific in our CI environment; gitlab merely provides the harness for invoking jobs. This is good, as it means we can move our CI to another system if we find Gitlab no longer meets our needs, and our actual build env won't change, as it'll be the same containers still. I wouldn't recommend that QEMU contributors tie their local workflow into the use of gitlab-runner, when they can avoid that dependency. With regards, Daniel
On Thu, 3 Nov 2022 at 11:59, Daniel P. Berrangé <berrange@redhat.com> wrote: > > On Thu, Nov 03, 2022 at 11:49:21AM -0400, Stefan Hajnoczi wrote: > > gitlab-runner can run locally with minimal setup: > > https://bagong.gitlab.io/posts/run-gitlab-ci-locally/ > > > > I haven't tried it yet, but that seems like the most reliable (and > > easiest) way to reproduce the CI environment. > > IMHO that is total overkill. > > Just running the containers directly is what I'd recommend for any > attempt to reproduce problems. There isn't actually anything gitlab > specific in our CI environment, gitlab merely provides the harness > for invoking jobs. This is good as it means we can move our CI to > another systems if we find Gitlab no longer meets our needs, and > our actual build env won't change, as it'll be the same containers > still. > > I wouldn't recommend QEMU contributors to tie their local workflow > into the use of gitlab-runner, when they can avoid that dependency. If there was a complete list of commands to run I would agree with you. Unfortunately there is no easy way to run the container locally: 1. The container image path is hidden in the GitLab output and easy to get wrong (see Ani's reply). 2. The GitLab output does not contain the full command lines because environment variables are hidden (e.g. $QEMU_CONFIGURE_OPTS). 3. The .gitlab-ci.d/ is non-trivial (uses YAML templates and who knows what else GitLab CI does when running the YAML). When doing what you suggested, how easy is it and how confident are you that you're reproducing the same environment? Unless I missed something it doesn't work very well. Stefan
On Thu, 3 Nov 2022 at 11:59, Michael S. Tsirkin <mst@redhat.com> wrote: > > On Thu, Nov 03, 2022 at 11:49:21AM -0400, Stefan Hajnoczi wrote: > > gitlab-runner can run locally with minimal setup: > > https://bagong.gitlab.io/posts/run-gitlab-ci-locally/ > > > > I haven't tried it yet, but that seems like the most reliable (and > > easiest) way to reproduce the CI environment. > > > > Stefan > > How does one pass in variables do you know? Environment? Haven't tried it yet, sorry. Stefan
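On the variable-passing question: `gitlab-runner exec docker` took an `--env` option for injecting CI variables into a locally-run job (in gitlab-runner releases of that era). A dry-run sketch; the job name and variable value are made up for illustration.

```shell
# Hypothetical job name; the real names live in .gitlab-ci.d/.
JOB="build-system-fedora"
CMD="gitlab-runner exec docker --env CONFIGURE_ARGS='--cc=clang' $JOB"

# Printed rather than executed, since it needs gitlab-runner and Docker.
echo "$CMD"
```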
On Thu, Nov 03, 2022 at 12:25:49PM -0400, Stefan Hajnoczi wrote: > On Thu, 3 Nov 2022 at 11:59, Daniel P. Berrangé <berrange@redhat.com> wrote: > > > > On Thu, Nov 03, 2022 at 11:49:21AM -0400, Stefan Hajnoczi wrote: > > > gitlab-runner can run locally with minimal setup: > > > https://bagong.gitlab.io/posts/run-gitlab-ci-locally/ > > > > > > I haven't tried it yet, but that seems like the most reliable (and > > > easiest) way to reproduce the CI environment. > > > > IMHO that is total overkill. > > > > Just running the containers directly is what I'd recommend for any > > attempt to reproduce problems. There isn't actually anything gitlab > > specific in our CI environment, gitlab merely provides the harness > > for invoking jobs. This is good as it means we can move our CI to > > another systems if we find Gitlab no longer meets our needs, and > > our actual build env won't change, as it'll be the same containers > > still. > > > > I wouldn't recommend QEMU contributors to tie their local workflow > > into the use of gitlab-runner, when they can avoid that dependency. > > If there was a complete list of commands to run I would agree with > you. Unfortunately there is no easy way to run the container locally: > 1. The container image path is hidden in the GitLab output and easy to > get wrong (see Ani's reply). That is bizarre Pulling docker image registry.gitlab.com/qemu-project/[MASKED]/fedora:latest ... I've not seen any other gitlab project where the paths are 'MASKED' in this way. Makes me wonder if there's some setting in the QEMU gitlab project causing this, as its certainly not expected behaviour. Grabbing the container URL from line 8 of the build log is my standard goto approach. > 2. The GitLab output does not contain the full command lines because > environment variables are hidden (e.g. $QEMU_CONFIGURE_OPTS). Note, $QEMU_CONFIGURE_OPTS is set by the container image itself, so there's no need to know that one. 
$CONFIGURE_ARGS meanwhile is set in the build-XXXXX template and easy to find. > 3. The .gitlab-ci.d/ is non-trivial (uses YAML templates and who knows > what else GitLab CI does when running the YAML). You shouldn't need to understand that to reproduce problems. At most you just need to find the $CONFIGURE_ARGS and $MAKE_CHECK_ARGS settings for the build-XXX job at hand. > When doing what you suggested, how easy is it and how confident are > you that you're reproducing the same environment? Unless I missed > something it doesn't work very well. Running the containers directly in docker/podman is how I reproduce pretty much everything locally, and it's been pretty straightforward IME. With regards, Daniel
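Daniel's "find the $CONFIGURE_ARGS setting for the build-XXX job" step can be scripted. A self-contained sketch: the miniature template below is fabricated to stand in for the real .gitlab-ci.d files, and the variable values are illustrative, not the actual job's.

```shell
# Fabricated stand-in for a .gitlab-ci.d build template.
cat > /tmp/buildtest-sample.yml <<'EOF'
build-system-fedora:
  extends: .native_build_job_template
  variables:
    IMAGE: fedora
    CONFIGURE_ARGS: --disable-gcrypt --enable-nettle --enable-docs
    MAKE_CHECK_ARGS: check
EOF

# Scan from the job's header line and keep its CONFIGURE_ARGS value.
job="build-system-fedora"
args=$(awk -v job="$job:" '$0 == job { found = 1 }
       found && /CONFIGURE_ARGS:/ { sub(/.*CONFIGURE_ARGS:[ ]*/, ""); print; exit }' \
       /tmp/buildtest-sample.yml)
echo "$args"
rm -f /tmp/buildtest-sample.yml
```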
On Thu, 3 Nov 2022 at 16:38, Daniel P. Berrangé <berrange@redhat.com> wrote: > On Thu, Nov 03, 2022 at 12:25:49PM -0400, Stefan Hajnoczi wrote: > > 2. The GitLab output does not contain the full command lines because > > environment variables are hidden (e.g. $QEMU_CONFIGURE_OPTS). > > Note, $QEMU_CONFIGURE_OPTS is set by the container image itself, so > there's no need to know that one. > > $CONFIGURE_ARGS meanwhile is set in the build-XXXXX template and > easy to find. Not all that easy if you're looking at some specific gitlab job output... it would be helpful if the scripts echoed the exact configure command line before running it, then you wouldn't need to go ferreting around in the gitlab config files and hoping you've found the right bit. -- PMM
On Thu, Nov 3, 2022 at 9:12 PM Ani Sinha <ani@anisinha.ca> wrote: > > > To pull this image: > > > $ docker pull registry.gitlab.com/qemu-project/qemu/fedora:latest > > Actually the URL is: > > $ docker pull registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest > > > (or to be sure to pull the very same:) > > > $ docker pull > > registry.gitlab.com/qemu-project/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26 > > Same here, > > registry.gitlab.com/qemu-project/qemu/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26 I pulled this container, used the configure line Stefan mentioned earlier in the thread and re-ran make check-qtest and still could not repro the crash. All tests pass. /usr/bin/meson test --no-rebuild -t 0 --num-processes 1 --print-errorlogs --suite qtest 1/31 qemu:qtest+qtest-arm / qtest-arm/qom-test OK 293.59s 85 subtests passed 2/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_pwm-test OK 96.69s 24 subtests passed 3/31 qemu:qtest+qtest-arm / qtest-arm/test-hmp OK 56.11s 86 subtests passed 4/31 qemu:qtest+qtest-arm / qtest-arm/boot-serial-test OK 0.45s 3 subtests passed 5/31 qemu:qtest+qtest-arm / qtest-arm/qos-test OK 20.50s 115 subtests passed 6/31 qemu:qtest+qtest-arm / qtest-arm/sse-timer-test OK 0.29s 3 subtests passed 7/31 qemu:qtest+qtest-arm / qtest-arm/cmsdk-apb-dualtimer-test OK 0.20s 2 subtests passed 8/31 qemu:qtest+qtest-arm / qtest-arm/cmsdk-apb-timer-test OK 0.22s 1 subtests passed 9/31 qemu:qtest+qtest-arm / qtest-arm/cmsdk-apb-watchdog-test OK 0.25s 2 subtests passed 10/31 qemu:qtest+qtest-arm / qtest-arm/pflash-cfi02-test OK 4.31s 4 subtests passed 11/31 qemu:qtest+qtest-arm / qtest-arm/aspeed_hace-test OK 22.36s 16 subtests passed 12/31 qemu:qtest+qtest-arm / qtest-arm/aspeed_smc-test OK 144.47s 10 subtests passed 13/31 qemu:qtest+qtest-arm / qtest-arm/aspeed_gpio-test OK 0.21s 2 subtests passed 14/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_adc-test OK 1.88s 6 subtests passed 
15/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_gpio-test OK 0.24s 18 subtests passed 16/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_rng-test OK 0.26s 2 subtests passed 17/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_sdhci-test OK 0.97s 3 subtests passed 18/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_smbus-test OK 11.23s 40 subtests passed 19/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_timer-test OK 1.91s 180 subtests passed 20/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_watchdog_timer-test OK 20.69s 15 subtests passed 21/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_emc-test OK 0.90s 6 subtests passed 22/31 qemu:qtest+qtest-arm / qtest-arm/arm-cpu-features OK 0.15s 1 subtests passed 23/31 qemu:qtest+qtest-arm / qtest-arm/microbit-test OK 4.46s 5 subtests passed 24/31 qemu:qtest+qtest-arm / qtest-arm/test-arm-mptimer OK 0.20s 61 subtests passed 25/31 qemu:qtest+qtest-arm / qtest-arm/hexloader-test OK 0.14s 1 subtests passed 26/31 qemu:qtest+qtest-arm / qtest-arm/cdrom-test OK 1.06s 9 subtests passed 27/31 qemu:qtest+qtest-arm / qtest-arm/device-introspect-test OK 3.18s 6 subtests passed 28/31 qemu:qtest+qtest-arm / qtest-arm/machine-none-test OK 0.09s 1 subtests passed 29/31 qemu:qtest+qtest-arm / qtest-arm/qmp-test OK 0.34s 4 subtests passed 30/31 qemu:qtest+qtest-arm / qtest-arm/qmp-cmd-test OK 7.80s 62 subtests passed 31/31 qemu:qtest+qtest-arm / qtest-arm/readconfig-test OK 0.22s 2 subtests passed Ok: 31 Expected Fail: 0 Fail: 0 Unexpected Pass: 0 Skipped: 0 Timeout: 0 Full log written to /qemu/qemu/build/meson-logs/testlog.txt
On Thu, Nov 03, 2022 at 04:38:35PM +0000, Daniel P. Berrangé wrote: > On Thu, Nov 03, 2022 at 12:25:49PM -0400, Stefan Hajnoczi wrote: > > On Thu, 3 Nov 2022 at 11:59, Daniel P. Berrangé <berrange@redhat.com> wrote: > > > > > > On Thu, Nov 03, 2022 at 11:49:21AM -0400, Stefan Hajnoczi wrote: > > > > gitlab-runner can run locally with minimal setup: > > > > https://bagong.gitlab.io/posts/run-gitlab-ci-locally/ > > > > > > > > I haven't tried it yet, but that seems like the most reliable (and > > > > easiest) way to reproduce the CI environment. > > > > > > IMHO that is total overkill. > > > > > > Just running the containers directly is what I'd recommend for any > > > attempt to reproduce problems. There isn't actually anything gitlab > > > specific in our CI environment, gitlab merely provides the harness > > > for invoking jobs. This is good as it means we can move our CI to > > > another systems if we find Gitlab no longer meets our needs, and > > > our actual build env won't change, as it'll be the same containers > > > still. > > > > > > I wouldn't recommend QEMU contributors to tie their local workflow > > > into the use of gitlab-runner, when they can avoid that dependency. > > > > If there was a complete list of commands to run I would agree with > > you. Unfortunately there is no easy way to run the container locally: > > 1. The container image path is hidden in the GitLab output and easy to > > get wrong (see Ani's reply). > > That is bizarre > > Pulling docker image registry.gitlab.com/qemu-project/[MASKED]/fedora:latest ... > > I've not seen any other gitlab project where the paths are 'MASKED' in > this way. Makes me wonder if there's some setting in the QEMU gitlab > project causing this, as its certainly not expected behaviour. Spoke with Peter on IRC, and we had a variable set CIRRUS_GITHUB_REPO with value 'qemu/qemu' that was marked as 'masked'. This caused gitlab to scrub that string from the build logs. 
We've unmasked that now, so the container URLs should be intact from the next CI pipeline onwards. Masking is only needed for security sensitive variables like tokens, passwords, etc With regards, Daniel
On Thu, Nov 3, 2022 at 10:17 PM Ani Sinha <ani@anisinha.ca> wrote: > > On Thu, Nov 3, 2022 at 9:12 PM Ani Sinha <ani@anisinha.ca> wrote: > > > > > To pull this image: > > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/fedora:latest > > > > Actually the URL is: > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest > > > > > (or to be sure to pull the very same:) > > > > > $ docker pull > > > registry.gitlab.com/qemu-project/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26 > > > > Same here, > > > > registry.gitlab.com/qemu-project/qemu/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26 > > I pulled this container, used the configure line Stefan mentioned > earlier in the thread and re-ran make check-qtest and still could not > repro the crash. All tests pass. > [root@6089e5581e63 build]# git status On branch master Your branch is ahead of 'origin/master' by 82 commits. (use "git push" to publish your local commits) nothing to commit, working tree clean [root@6089e5581e63 build]# git log --oneline -1 77dd1e2b09 (HEAD -> master, tag: for_upstream, tag: for_autotest_next, tag: for_autotest, mst/pci, mst/next) intel-iommu: PASID support [root@6089e5581e63 build]# git log --oneline -5 77dd1e2b09 (HEAD -> master, tag: for_upstream, tag: for_autotest_next, tag: for_autotest, mst/pci, mst/next) intel-iommu: PASID support a0f831c879 intel-iommu: convert VTD_PE_GET_FPD_ERR() to be a function 840d70c49b intel-iommu: drop VTDBus c89dbf5551 intel-iommu: don't warn guest errors when getting rid2pasid entry d8ebe4ce22 vfio: move implement of vfio_get_xlat_addr() to memory.c [root@6089e5581e63 build]#
On Thu, Nov 03, 2022 at 04:47:03PM +0000, Peter Maydell wrote: > On Thu, 3 Nov 2022 at 16:38, Daniel P. Berrangé <berrange@redhat.com> wrote: > > On Thu, Nov 03, 2022 at 12:25:49PM -0400, Stefan Hajnoczi wrote: > > > 2. The GitLab output does not contain the full command lines because > > > environment variables are hidden (e.g. $QEMU_CONFIGURE_OPTS). > > > > Note, $QEMU_CONFIGURE_OPTS is set by the container image itself, so > > there's no need to know that one. > > > > $CONFIGURE_ARGS meanwhile is set in the build-XXXXX template and > > easy to find. > > Not all that easy if you're looking at some specific gitlab > job output... it would be helpful if the scripts > echoed the exact configure command line before running it, > then you wouldn't need to go ferreting around in the gitlab > config files and hoping you've found the right bit. That's easy enough to do, I'll send a patch. With regards, Daniel
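A hypothetical shape for the patch Daniel offered (the actual patch may look different): have the shared build script assemble the fully-expanded configure command into one string, log it, then run it. The variable values below are placeholders.

```shell
# Placeholder values; in CI, QEMU_CONFIGURE_OPTS comes from the container
# image and CONFIGURE_ARGS from the job template.
QEMU_CONFIGURE_OPTS="--prefix=/usr"
CONFIGURE_ARGS="--cc=clang --extra-cflags=-fsanitize=undefined"

configure_cmd="../configure $QEMU_CONFIGURE_OPTS $CONFIGURE_ARGS"
# Log the exact command so it can be copy-pasted from the job output.
echo "Running: $configure_cmd"
# eval "$configure_cmd"   # the real script would execute it here
```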
On Thu, Nov 3, 2022 at 10:18 PM Ani Sinha <ani@anisinha.ca> wrote: > > On Thu, Nov 3, 2022 at 10:17 PM Ani Sinha <ani@anisinha.ca> wrote: > > > > On Thu, Nov 3, 2022 at 9:12 PM Ani Sinha <ani@anisinha.ca> wrote: > > > > > > > To pull this image: > > > > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/fedora:latest > > > > > > Actually the URL is: > > > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest > > > > > > > (or to be sure to pull the very same:) > > > > > > > $ docker pull > > > > registry.gitlab.com/qemu-project/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26 > > > > > > Same here, > > > > > > registry.gitlab.com/qemu-project/qemu/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26 > > I pulled this container, This is fc35, the same release mst is using: # cat /etc/fedora-release Fedora release 35 (Thirty Five) Hmm. Something else is going on in the gitlab-specific environment.
On Thu, Nov 3, 2022, 12:49 Daniel P. Berrangé <berrange@redhat.com> wrote: > On Thu, Nov 03, 2022 at 04:47:03PM +0000, Peter Maydell wrote: > > On Thu, 3 Nov 2022 at 16:38, Daniel P. Berrangé <berrange@redhat.com> > wrote: > > > On Thu, Nov 03, 2022 at 12:25:49PM -0400, Stefan Hajnoczi wrote: > > > > 2. The GitLab output does not contain the full command lines because > > > > environment variables are hidden (e.g. $QEMU_CONFIGURE_OPTS). > > > > > > Note, $QEMU_CONFIGURE_OPTS is set by the container image itself, so > > > there's no need to know that one. > > > > > > $CONFIGURE_ARGS meanwhile is set in the build-XXXXX template and > > > easy to find. > > > > Not all that easy if you're looking at some specific gitlab > > job output... it would be helpful if the scripts > > echoed the exact configure command line before running it, > > then you wouldn't need to go ferreting around in the gitlab > > config files and hoping you've found the right bit. > > That's easy enough to do, I'll send a patch. > Awesome, thank you! Stefan >
On Thu, Nov 03, 2022 at 10:26:26PM +0530, Ani Sinha wrote: > On Thu, Nov 3, 2022 at 10:18 PM Ani Sinha <ani@anisinha.ca> wrote: > > > > On Thu, Nov 3, 2022 at 10:17 PM Ani Sinha <ani@anisinha.ca> wrote: > > > > > > On Thu, Nov 3, 2022 at 9:12 PM Ani Sinha <ani@anisinha.ca> wrote: > > > > > > > > > To pull this image: > > > > > > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/fedora:latest > > > > > > > > Actually the URL is: > > > > > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest > > > > > > > > > (or to be sure to pull the very same:) > > > > > > > > > $ docker pull > > > > > registry.gitlab.com/qemu-project/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26 > > > > > > > > Same here, > > > > > > > > registry.gitlab.com/qemu-project/qemu/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26 > > > > > > I pulled this container, > > This is fc35, the same mst is using: > > # cat /etc/fedora-release > Fedora release 35 (Thirty Five) > > Hmm. Something else is going on in the gitlab specific environment. Or it is a non-deterministic race condition and the chance of hitting it varies based on your hardware and/or CPU load. With regards, Daniel
On Thu, Nov 3, 2022 at 23:11 Daniel P. Berrangé <berrange@redhat.com> wrote: > On Thu, Nov 03, 2022 at 10:26:26PM +0530, Ani Sinha wrote: > > On Thu, Nov 3, 2022 at 10:18 PM Ani Sinha <ani@anisinha.ca> wrote: > > > > > > On Thu, Nov 3, 2022 at 10:17 PM Ani Sinha <ani@anisinha.ca> wrote: > > > > > > > > On Thu, Nov 3, 2022 at 9:12 PM Ani Sinha <ani@anisinha.ca> wrote: > > > > > > > > > > > To pull this image: > > > > > > > > > > > $ docker pull > registry.gitlab.com/qemu-project/qemu/fedora:latest > > > > > > > > > > Actually the URL is: > > > > > > > > > > $ docker pull > registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest > > > > > > > > > > > (or to be sure to pull the very same:) > > > > > > > > > > > $ docker pull > > > > > > > registry.gitlab.com/qemu-project/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26 > > > > > > > > > > Same here, > > > > > > > > > > > registry.gitlab.com/qemu-project/qemu/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26 > > > > > > > > I pulled this container, > > > > This is fc35, the same mst is using: > > > > # cat /etc/fedora-release > > Fedora release 35 (Thirty Five) > > > > Hmm. Something else is going on in the gitlab specific environment. > > Or it is a non-deterministic race condition and the chance of hitting > it varies based on your hardware and/or CPU load. Can we kick off the same CI job again? Does it pass this time? >
On Thu, Nov 03, 2022 at 11:14:21PM +0530, Ani Sinha wrote: > > > On Thu, Nov 3, 2022 at 23:11 Daniel P. Berrangé <berrange@redhat.com> wrote: > > On Thu, Nov 03, 2022 at 10:26:26PM +0530, Ani Sinha wrote: > > On Thu, Nov 3, 2022 at 10:18 PM Ani Sinha <ani@anisinha.ca> wrote: > > > > > > On Thu, Nov 3, 2022 at 10:17 PM Ani Sinha <ani@anisinha.ca> wrote: > > > > > > > > On Thu, Nov 3, 2022 at 9:12 PM Ani Sinha <ani@anisinha.ca> wrote: > > > > > > > > > > > To pull this image: > > > > > > > > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/fedora:latest > > > > > > > > > > Actually the URL is: > > > > > > > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/qemu/ > fedora:latest > > > > > > > > > > > (or to be sure to pull the very same:) > > > > > > > > > > > $ docker pull > > > > > > registry.gitlab.com/qemu-project/qemu/ > fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26 > > > > > > > > > > Same here, > > > > > > > > > > registry.gitlab.com/qemu-project/qemu/qemu/ > fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26 > > > > > > > > I pulled this container, > > > > This is fc35, the same mst is using: > > > > # cat /etc/fedora-release > > Fedora release 35 (Thirty Five) > > > > Hmm. Something else is going on in the gitlab specific environment. > > Or it is a non-deterministic race condition and the chance of hitting > it varies based on your hardware and/or CPU load. > > > Can we kick off the same CI job again? Does it pass this time? > It's completely deterministic on gitlab. Stefan also reproduced on his F36 box.
On Thu, Nov 03, 2022 at 09:29:56AM -0400, Stefan Hajnoczi wrote: > On Thu, 3 Nov 2022 at 08:14, Michael S. Tsirkin <mst@redhat.com> wrote: > > On Wed, Nov 02, 2022 at 03:47:43PM -0400, Stefan Hajnoczi wrote: > > > On Wed, Nov 02, 2022 at 12:02:14PM -0400, Michael S. Tsirkin wrote: > > > > Changes from v1: > > > > > > > > Applied and squashed fixes by Igor, Lei He, Hesham Almatary for > > > > bugs that tripped up the pipeline. > > > > Updated expected files for core-count test. > > > > > > Several "make check" CI failures have occurred. They look like they are > > > related. Here is one (see the URLs at the bottom of this email for more > > > details): > > > > > > 17/106 ERROR:../tests/qtest/qos-test.c:191:subprocess_run_one_test: child process (/arm/virt/virtio-mmio/virtio-bus/virtio-net-device/virtio-net/virtio-net-tests/vhost-user/flags-mismatch/subprocess [8609]) failed unexpectedly ERROR > > > 17/106 qemu:qtest+qtest-arm / qtest-arm/qos-test ERROR 31.44s killed by signal 6 SIGABRT > > > >>> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh MALLOC_PERTURB_=49 QTEST_QEMU_IMG=./qemu-img QTEST_QEMU_BINARY=./qemu-system-arm QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon /builds/qemu-project/qemu/build/tests/qtest/qos-test --tap -k > > > ――――――――――――――――――――――――――――――――――――― ✀ ――――――――――――――――――――――――――――――――――――― > > > stderr: > > > qemu-system-arm: Failed to write msg. Wrote -1 instead of 20. > > > qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22) > > > qemu-system-arm: Failed to set msg fds. > > > qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument (22) > > > qemu-system-arm: -chardev socket,id=chr-reconnect,path=/tmp/vhost-test-6PT2U1/reconnect.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-6PT2U1/reconnect.sock,server=on > > > qemu-system-arm: Failed to write msg. Wrote -1 instead of 20. 
> > > qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
> > > qemu-system-arm: Failed to set msg fds.
> > > qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument (22)
> > > qemu-system-arm: -chardev socket,id=chr-connect-fail,path=/tmp/vhost-test-H8G7U1/connect-fail.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-H8G7U1/connect-fail.sock,server=on
> > > qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: Failed to read msg header. Read 0 instead of 12. Original request 1.
> > > qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: vhost_backend_init failed: Protocol error
> > > qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: failed to init vhost_net for queue 0
> > > qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-H8G7U1/connect-fail.sock,server=on
> > > qemu-system-arm: Failed to write msg. Wrote -1 instead of 20.
> > > qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
> > > qemu-system-arm: Failed to set msg fds.
> > > qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument (22)
> > > qemu-system-arm: -chardev socket,id=chr-flags-mismatch,path=/tmp/vhost-test-94UYU1/flags-mismatch.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-94UYU1/flags-mismatch.sock,server=on
> > > qemu-system-arm: Failed to write msg. Wrote -1 instead of 52.
> > > qemu-system-arm: vhost_set_mem_table failed: Invalid argument (22)
> > > qemu-system-arm: Failed to set msg fds.
> > > qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
> > > UndefinedBehaviorSanitizer:DEADLYSIGNAL
> > > ==8618==ERROR: UndefinedBehaviorSanitizer: SEGV on unknown address 0x000000000000 (pc 0x55e34deccab0 bp 0x000000000000 sp 0x7ffc94894710 T8618)
> > > ==8618==The signal is caused by a READ memory access.
> > > ==8618==Hint: address points to the zero page.
> > >     #0 0x55e34deccab0 in ldl_he_p /builds/qemu-project/qemu/include/qemu/bswap.h:301:5
> > >     #1 0x55e34deccab0 in ldn_he_p /builds/qemu-project/qemu/include/qemu/bswap.h:440:1
> > >     #2 0x55e34deccab0 in flatview_write_continue /builds/qemu-project/qemu/build/../softmmu/physmem.c:2824:19
> > >     #3 0x55e34dec9f21 in flatview_write /builds/qemu-project/qemu/build/../softmmu/physmem.c:2867:12
> > >     #4 0x55e34dec9f21 in address_space_write /builds/qemu-project/qemu/build/../softmmu/physmem.c:2963:18
> > >     #5 0x55e34decace7 in address_space_unmap /builds/qemu-project/qemu/build/../softmmu/physmem.c:3306:9
> > >     #6 0x55e34de6d4ec in vhost_memory_unmap /builds/qemu-project/qemu/build/../hw/virtio/vhost.c:342:9
> > >     #7 0x55e34de6d4ec in vhost_virtqueue_stop /builds/qemu-project/qemu/build/../hw/virtio/vhost.c:1242:5
> > >     #8 0x55e34de72904 in vhost_dev_stop /builds/qemu-project/qemu/build/../hw/virtio/vhost.c:1882:9
> > >     #9 0x55e34d890514 in vhost_net_stop_one /builds/qemu-project/qemu/build/../hw/net/vhost_net.c:331:5
> > >     #10 0x55e34d88fef6 in vhost_net_start /builds/qemu-project/qemu/build/../hw/net/vhost_net.c:404:13
> > >     #11 0x55e34de0bec6 in virtio_net_vhost_status /builds/qemu-project/qemu/build/../hw/net/virtio-net.c:307:13
> > >     #12 0x55e34de0bec6 in virtio_net_set_status /builds/qemu-project/qemu/build/../hw/net/virtio-net.c:388:5
> > >     #13 0x55e34de5e409 in virtio_set_status /builds/qemu-project/qemu/build/../hw/virtio/virtio.c:2442:9
> > >     #14 0x55e34da22a50 in virtio_mmio_write /builds/qemu-project/qemu/build/../hw/virtio/virtio-mmio.c:428:9
> > >     #15 0x55e34deb44a6 in memory_region_write_accessor /builds/qemu-project/qemu/build/../softmmu/memory.c:493:5
> > >     #16 0x55e34deb428a in access_with_adjusted_size /builds/qemu-project/qemu/build/../softmmu/memory.c:555:18
> > >     #17 0x55e34deb402d in memory_region_dispatch_write /builds/qemu-project/qemu/build/../softmmu/memory.c
> > >     #18 0x55e34deccaf1 in flatview_write_continue /builds/qemu-project/qemu/build/../softmmu/physmem.c:2825:23
> > >     #19 0x55e34dec9f21 in flatview_write /builds/qemu-project/qemu/build/../softmmu/physmem.c:2867:12
> > >     #20 0x55e34dec9f21 in address_space_write /builds/qemu-project/qemu/build/../softmmu/physmem.c:2963:18
> > >     #21 0x55e34ded0bf6 in qtest_process_command /builds/qemu-project/qemu/build/../softmmu/qtest.c
> > >     #22 0x55e34ded008d in qtest_process_inbuf /builds/qemu-project/qemu/build/../softmmu/qtest.c:796:9
> > >     #23 0x55e34e109b02 in tcp_chr_read /builds/qemu-project/qemu/build/../chardev/char-socket.c:508:13
> > >     #24 0x7fc6c665d0ae in g_main_context_dispatch (/lib64/libglib-2.0.so.0+0x550ae)
> > >     #25 0x55e34e1fc1bc in glib_pollfds_poll /builds/qemu-project/qemu/build/../util/main-loop.c:297:9
> > >     #26 0x55e34e1fc1bc in os_host_main_loop_wait /builds/qemu-project/qemu/build/../util/main-loop.c:320:5
> > >     #27 0x55e34e1fc1bc in main_loop_wait /builds/qemu-project/qemu/build/../util/main-loop.c:596:11
> > >     #28 0x55e34da52de6 in qemu_main_loop /builds/qemu-project/qemu/build/../softmmu/runstate.c:739:9
> > >     #29 0x55e34d60a4f5 in qemu_default_main /builds/qemu-project/qemu/build/../softmmu/main.c:37:14
> > >     #30 0x7fc6c43a5eaf in __libc_start_call_main (/lib64/libc.so.6+0x3feaf)
> > >     #31 0x7fc6c43a5f5f in __libc_start_main@GLIBC_2.2.5 (/lib64/libc.so.6+0x3ff5f)
> > >     #32 0x55e34d5e1094 in _start (/builds/qemu-project/qemu/build/qemu-system-arm+0xc17094)
> > > UndefinedBehaviorSanitizer can not provide additional info.
> > > SUMMARY: UndefinedBehaviorSanitizer: SEGV /builds/qemu-project/qemu/include/qemu/bswap.h:301:5 in ldl_he_p
> > > ==8618==ABORTING
> > > Broken pipe
> > > ../tests/qtest/libqtest.c:179: kill_qemu() tried to terminate QEMU process but encountered exit status 1 (expected 0)
> > > **
> > > ERROR:../tests/qtest/qos-test.c:191:subprocess_run_one_test: child process (/arm/virt/virtio-mmio/virtio-bus/virtio-net-device/virtio-net/virtio-net-tests/vhost-user/flags-mismatch/subprocess [8609]) failed unexpectedly
> > > (test program exited with status code -6)
> > >
> > > https://gitlab.com/qemu-project/qemu/-/jobs/3265209698
> > > https://gitlab.com/qemu-project/qemu/-/pipelines/683909108
> > >
> > > Stefan
> >
> > Ugh. I need to build with ubsan to reproduce yes? didn't trigger for me
> > I am wondering how to bisect on gitlab.
>
> I searched for "clang-system" (the name of the job) in .gitlab-ci.d to
> get the job commands. The GitLab job output also contains details of
> the commands that were run (unfortunately it doesn't expand
> environment variables so some aspects are not visible from the GitLab
> output).
>
> That led to the following local command-line:
>
> $ git checkout 645ec851
> $ ./configure --enable-werror --disable-docs --target-list=arm-softmmu
>   --cc=clang --cxx=clang++ --extra-cflags=-fsanitize=undefined
>   --extra-cflags=-fno-sanitize-recover=undefined && make check-qtest
>
> It reproduces locally on my Fedora 36 machine.
>
> Stefan

Does not reproduce locally for me :(

With some guessing I figured out this is the 1st bad commit:

    virtio: re-order vm_running and use_started checks

it's a bugfix, not easy to revert ...
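Stefan's reproduction recipe above also lends itself to automated bisection on a local tree (which is how the "how to bisect" question is usually answered off gitlab). A sketch only: it assumes a local qemu checkout, the `/tmp/bisect-step.sh` path is arbitrary, and the bisect endpoints are left as placeholders since the thread only names a failing commit.

```shell
# Write a hypothetical "git bisect run" helper using Stefan's configure line.
cat > /tmp/bisect-step.sh <<'EOF'
#!/bin/sh
# exit 125 tells "git bisect run" to skip commits that fail to configure/build
./configure --enable-werror --disable-docs --target-list=arm-softmmu \
    --cc=clang --cxx=clang++ \
    --extra-cflags=-fsanitize=undefined \
    --extra-cflags=-fno-sanitize-recover=undefined || exit 125
make -j"$(nproc)" || exit 125
make check-qtest   # non-zero exit marks this commit as bad
EOF
chmod +x /tmp/bisect-step.sh
echo "wrote /tmp/bisect-step.sh"
# Inside the qemu tree one would then run (endpoints are placeholders):
#   git bisect start <bad-commit> <good-commit>
#   git bisect run /tmp/bisect-step.sh
```

`git bisect run` treats exit 0 as good, 1-124 as bad, and 125 as "skip this commit", so build failures on intermediate commits do not derail the search.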
On Fri, Nov 4, 2022 at 02:02 Michael S. Tsirkin <mst@redhat.com> wrote:

> On Thu, Nov 03, 2022 at 11:14:21PM +0530, Ani Sinha wrote:
> > On Thu, Nov 3, 2022 at 23:11 Daniel P. Berrangé <berrange@redhat.com> wrote:
> > > On Thu, Nov 03, 2022 at 10:26:26PM +0530, Ani Sinha wrote:
> > > > On Thu, Nov 3, 2022 at 10:18 PM Ani Sinha <ani@anisinha.ca> wrote:
> > > > > On Thu, Nov 3, 2022 at 10:17 PM Ani Sinha <ani@anisinha.ca> wrote:
> > > > > > On Thu, Nov 3, 2022 at 9:12 PM Ani Sinha <ani@anisinha.ca> wrote:
> > > > > > > > To pull this image:
> > > > > > > >
> > > > > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/fedora:latest
> > > > > > >
> > > > > > > Actually the URL is:
> > > > > > >
> > > > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest
> > > > > > >
> > > > > > > > (or to be sure to pull the very same:)
> > > > > > > >
> > > > > > > > $ docker pull
> > > > > > > > registry.gitlab.com/qemu-project/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26
> > > > > > >
> > > > > > > Same here,
> > > > > > >
> > > > > > > registry.gitlab.com/qemu-project/qemu/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26
> > > > > >
> > > > > > I pulled this container,
> > > >
> > > > This is fc35, the same mst is using:
> > > >
> > > > # cat /etc/fedora-release
> > > > Fedora release 35 (Thirty Five)
> > > >
> > > > Hmm. Something else is going on in the gitlab specific environment.
> > >
> > > Or it is a non-deterministic race condition and the chance of hitting
> > > it varies based on your hardware and/or CPU load.
> >
> > Can we kick off the same CI job again? Does it pass this time?
>
> It's completely deterministic on gitlab. Stefan also reproduced on his F36 box.

Then this means it’s not enough to simply use the same container as the CI
and same configure line to reproduce all the issues.
Changes from v1:

Applied and squashed fixes by Igor, Lei He, Hesham Almatary for
bugs that tripped up the pipeline.
Updated expected files for core-count test.

The following changes since commit a11f65ec1b8adcb012b89c92819cbda4dc25aaf1:

  Merge tag 'block-pull-request' of https://gitlab.com/stefanha/qemu into staging (2022-11-01 13:49:33 -0400)

are available in the Git repository at:

  https://git.kernel.org/pub/scm/virt/kvm/mst/qemu.git tags/for_upstream

for you to fetch changes up to 77dd1e2b092bb92978a2d68bed7d048ed74a5d23:

  intel-iommu: PASID support (2022-11-02 07:55:26 -0400)

----------------------------------------------------------------
pci,pc,virtio: features, tests, fixes, cleanups

lots of acpi rework
first version of biosbits infrastructure
ASID support in vhost-vdpa
core_count2 support in smbios
PCIe DOE emulation
virtio vq reset
HMAT support
part of infrastructure for viommu support in vhost-vdpa
VTD PASID support
fixes, tests all over the place

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

----------------------------------------------------------------
Akihiko Odaki (1):
      msix: Assert that specified vector is in range

Alex Bennée (1):
      virtio: re-order vm_running and use_started checks

Ani Sinha (7):
      hw/i386/e820: remove legacy reserved entries for e820
      acpi/tests/avocado/bits: initial commit of test scripts that are run by biosbits
      acpi/tests/avocado/bits: disable acpi PSS tests that are failing in biosbits
      acpi/tests/avocado/bits: add biosbits config file for running bios tests
      acpi/tests/avocado/bits: add acpi and smbios avocado tests that uses biosbits
      acpi/tests/avocado/bits/doc: add a doc file to describe the acpi bits test
      MAINTAINERS: add myself as the maintainer for acpi biosbits avocado tests

Bernhard Beschow (3):
      hw/i386/acpi-build: Remove unused struct
      hw/i386/acpi-build: Resolve redundant attribute
      hw/i386/acpi-build: Resolve north rather than south bridges

Brice Goglin (4):
      hmat acpi: Don't require initiator value in -numa
      tests: acpi: add and whitelist *.hmat-noinitiator expected blobs
      tests: acpi: q35: add test for hmat nodes without initiators
      tests: acpi: q35: update expected blobs *.hmat-noinitiators expected HMAT:

Christian A. Ehrhardt (1):
      hw/acpi/erst.c: Fix memory handling issues

Cindy Lu (1):
      vfio: move implement of vfio_get_xlat_addr() to memory.c

David Daney (1):
      virtio-rng-pci: Allow setting nvectors, so we can use MSI-X

Eric Auger (1):
      hw/virtio/virtio-iommu-pci: Enforce the device is plugged on the root bus

Gregory Price (1):
      hw/i386/pc.c: CXL Fixed Memory Window should not reserve e820 in bios

Hesham Almatary (3):
      tests: Add HMAT AArch64/virt empty table files
      tests: acpi: aarch64/virt: add a test for hmat nodes with no initiators
      tests: virt: Update expected *.acpihmatvirt tables

Huai-Cheng Kuo (3):
      hw/pci: PCIe Data Object Exchange emulation
      hw/cxl/cdat: CXL CDAT Data Object Exchange implementation
      hw/mem/cxl-type3: Add CXL CDAT Data Object Exchange

Igor Mammedov (11):
      acpi: pc: vga: use AcpiDevAmlIf interface to build VGA device descriptors
      tests: acpi: whitelist DSDT before generating PCI-ISA bridge AML automatically
      acpi: pc/q35: drop ad-hoc PCI-ISA bridge AML routines and let bus ennumeration generate AML
      tests: acpi: update expected DSDT after ISA bridge is moved directly under PCI host bridge
      tests: acpi: whitelist DSDT before generating ICH9_SMB AML automatically
      acpi: add get_dev_aml_func() helper
      acpi: enumerate SMB bridge automatically along with other PCI devices
      tests: acpi: update expected blobs
      tests: acpi: pc/q35 whitelist DSDT before \_GPE cleanup
      acpi: pc/35: sanitize _GPE declaration order
      tests: acpi: update expected blobs

Jason Wang (4):
      intel-iommu: don't warn guest errors when getting rid2pasid entry
      intel-iommu: drop VTDBus
      intel-iommu: convert VTD_PE_GET_FPD_ERR() to be a function
      intel-iommu: PASID support

Jonathan Cameron (2):
      hw/mem/cxl-type3: Add MSIX support
      hw/pci-bridge/cxl-upstream: Add a CDAT table access DOE

Julia Suvorova (5):
      hw/smbios: add core_count2 to smbios table type 4
      bios-tables-test: teach test to use smbios 3.0 tables
      tests/acpi: allow changes for core_count2 test
      bios-tables-test: add test for number of cores > 255
      tests/acpi: update tables for new core count test

Kangjie Xu (10):
      virtio: introduce virtio_queue_enable()
      virtio: core: vq reset feature negotation support
      virtio-pci: support queue enable
      vhost: expose vhost_virtqueue_start()
      vhost: expose vhost_virtqueue_stop()
      vhost-net: vhost-kernel: introduce vhost_net_virtqueue_reset()
      vhost-net: vhost-kernel: introduce vhost_net_virtqueue_restart()
      virtio-net: introduce flush_or_purge_queued_packets()
      virtio-net: support queue_enable
      vhost: vhost-kernel: enable vq reset feature

Lei He (4):
      virtio-crypto: Support asynchronous mode
      crypto: Support DER encodings
      crypto: Support export akcipher to pkcs8
      cryptodev: Add a lkcf-backend for cryptodev

Markus Armbruster (1):
      MAINTAINERS: Add qapi/virtio.json to section "virtio"

Miguel Luis (4):
      tests/acpi: virt: allow acpi MADT and FADT changes
      acpi: fadt: support revision 6.0 of the ACPI specification
      acpi: arm/virt: madt: bump to revision 4 accordingly to ACPI 6.0 Errata A
      tests/acpi: virt: update ACPI MADT and FADT binaries

Robert Hoo (5):
      tests/acpi: allow SSDT changes
      acpi/ssdt: Fix aml_or() and aml_and() in if clause
      acpi/nvdimm: define macro for NVDIMM Device _DSM
      acpi/nvdimm: Implement ACPI NVDIMM Label Methods
      test/acpi/bios-tables-test: SSDT: update golden master binaries

Xiang Chen (1):
      hw/arm/virt: Enable HMAT on arm virt machine

Xuan Zhuo (5):
      virtio: introduce __virtio_queue_reset()
      virtio: introduce virtio_queue_reset()
      virtio-pci: support queue reset
      virtio-net: support queue reset
      virtio-net: enable vq reset feature

Yajun Wu (3):
      vhost: Change the sequence of device start
      vhost-user: Support vhost_dev_start
      vhost-user: Fix out of order vring host notification handling

 tests/avocado/acpi-bits/bits-config/bits-cfg.txt | 18 +
 qapi/qom.json | 2 +
 crypto/der.h | 211 +-
 crypto/rsakey.h | 11 +-
 hw/display/vga_int.h | 2 +
 hw/i386/e820_memory_layout.h | 8 -
 hw/i386/fw_cfg.h | 1 -
 hw/i386/intel_iommu_internal.h | 16 +-
 hw/smbios/smbios_build.h | 9 +-
 include/crypto/akcipher.h | 21 +
 include/exec/memory.h | 4 +
 include/hw/acpi/acpi_aml_interface.h | 13 +-
 include/hw/cxl/cxl_cdat.h | 166 ++
 include/hw/cxl/cxl_component.h | 7 +
 include/hw/cxl/cxl_device.h | 3 +
 include/hw/cxl/cxl_pci.h | 1 +
 include/hw/firmware/smbios.h | 12 +
 include/hw/i386/intel_iommu.h | 18 +-
 include/hw/pci/msix.h | 4 +-
 include/hw/pci/pci_bus.h | 2 +
 include/hw/pci/pci_ids.h | 3 +
 include/hw/pci/pcie.h | 1 +
 include/hw/pci/pcie_doe.h | 123 ++
 include/hw/pci/pcie_regs.h | 4 +
 include/hw/virtio/vhost.h | 5 +
 include/hw/virtio/virtio-pci.h | 5 +
 include/hw/virtio/virtio.h | 16 +-
 include/net/vhost_net.h | 4 +
 include/sysemu/cryptodev.h | 61 +-
 backends/cryptodev-builtin.c | 69 +-
 backends/cryptodev-lkcf.c | 645 ++++++
 backends/cryptodev-vhost-user.c | 53 +-
 backends/cryptodev.c | 44 +-
 crypto/akcipher.c | 18 +
 crypto/der.c | 313 ++-
 crypto/rsakey.c | 42 +
 hw/acpi/aml-build.c | 13 +-
 hw/acpi/erst.c | 6 +-
 hw/acpi/nvdimm.c | 106 +-
 hw/arm/virt-acpi-build.c | 33 +-
 hw/block/vhost-user-blk.c | 18 +-
 hw/core/machine.c | 8 +-
 hw/cxl/cxl-cdat.c | 224 ++
 hw/display/acpi-vga-stub.c | 7 +
 hw/display/acpi-vga.c | 26 +
 hw/display/vga-pci.c | 4 +
 hw/i386/acpi-build.c | 203 +-
 hw/i386/e820_memory_layout.c | 20 +-
 hw/i386/fw_cfg.c | 3 -
 hw/i386/intel_iommu.c | 692 +++---
 hw/i386/microvm.c | 2 -
 hw/i386/pc.c | 2 -
 hw/isa/lpc_ich9.c | 23 +
 hw/isa/piix3.c | 17 +-
 hw/mem/cxl_type3.c | 264 +++
 hw/net/e1000e.c | 15 +-
 hw/net/rocker/rocker.c | 23 +-
 hw/net/vhost_net-stub.c | 12 +
 hw/net/vhost_net.c | 91 +-
 hw/net/virtio-net.c | 57 +-
 hw/net/vmxnet3.c | 27 +-
 hw/nvme/ctrl.c | 5 +-
 hw/pci-bridge/cxl_upstream.c | 195 +-
 hw/pci/msix.c | 24 +-
 hw/pci/pcie_doe.c | 367 ++++
 hw/rdma/vmw/pvrdma_main.c | 7 +-
 hw/remote/vfio-user-obj.c | 9 +-
 hw/smbios/smbios.c | 19 +-
 hw/vfio/common.c | 66 +-
 hw/virtio/vhost-user.c | 79 +-
 hw/virtio/vhost.c | 16 +-
 hw/virtio/virtio-crypto.c | 339 +--
 hw/virtio/virtio-iommu-pci.c | 12 +-
 hw/virtio/virtio-pci.c | 83 +-
 hw/virtio/virtio-rng-pci.c | 14 +
 hw/virtio/virtio.c | 62 +-
 softmmu/memory.c | 72 +
 tests/qtest/bios-tables-test.c | 267 ++-
 tests/unit/test-crypto-der.c | 126 +-
 MAINTAINERS | 15 +
 backends/meson.build | 3 +
 docs/devel/acpi-bits.rst | 145 ++
 docs/devel/index-build.rst | 1 +
 hw/arm/Kconfig | 1 +
 hw/cxl/meson.build | 1 +
 hw/display/meson.build | 17 +
 hw/i386/trace-events | 2 +
 hw/pci/meson.build | 1 +
 tests/avocado/acpi-bits.py | 396 ++++
 tests/avocado/acpi-bits/bits-tests/smbios.py2 | 2430 ++++++++++++++++++++++
 tests/avocado/acpi-bits/bits-tests/testacpi.py2 | 283 +++
 tests/avocado/acpi-bits/bits-tests/testcpuid.py2 | 83 +
 tests/data/acpi/pc/DSDT | Bin 6422 -> 6501 bytes
 tests/data/acpi/pc/DSDT.acpierst | Bin 6382 -> 6461 bytes
 tests/data/acpi/pc/DSDT.acpihmat | Bin 7747 -> 7826 bytes
 tests/data/acpi/pc/DSDT.bridge | Bin 9496 -> 9575 bytes
 tests/data/acpi/pc/DSDT.cphp | Bin 6886 -> 6965 bytes
 tests/data/acpi/pc/DSDT.dimmpxm | Bin 8076 -> 8155 bytes
 tests/data/acpi/pc/DSDT.hpbridge | Bin 6382 -> 6461 bytes
 tests/data/acpi/pc/DSDT.hpbrroot | Bin 3069 -> 3107 bytes
 tests/data/acpi/pc/DSDT.ipmikcs | Bin 6494 -> 6573 bytes
 tests/data/acpi/pc/DSDT.memhp | Bin 7781 -> 7860 bytes
 tests/data/acpi/pc/DSDT.nohpet | Bin 6280 -> 6359 bytes
 tests/data/acpi/pc/DSDT.numamem | Bin 6428 -> 6507 bytes
 tests/data/acpi/pc/DSDT.roothp | Bin 6656 -> 6699 bytes
 tests/data/acpi/pc/SSDT.dimmpxm | Bin 734 -> 1815 bytes
 tests/data/acpi/q35/APIC.acpihmat-noinitiator | Bin 0 -> 144 bytes
 tests/data/acpi/q35/APIC.core-count2 | Bin 0 -> 2478 bytes
 tests/data/acpi/q35/DSDT | Bin 8320 -> 8412 bytes
 tests/data/acpi/q35/DSDT.acpierst | Bin 8337 -> 8429 bytes
 tests/data/acpi/q35/DSDT.acpihmat | Bin 9645 -> 9737 bytes
 tests/data/acpi/q35/DSDT.acpihmat-noinitiator | Bin 0 -> 8691 bytes
 tests/data/acpi/q35/DSDT.applesmc | Bin 8366 -> 8458 bytes
 tests/data/acpi/q35/DSDT.bridge | Bin 11449 -> 11541 bytes
 tests/data/acpi/q35/DSDT.core-count2 | Bin 0 -> 32552 bytes
 tests/data/acpi/q35/DSDT.cphp | Bin 8784 -> 8876 bytes
 tests/data/acpi/q35/DSDT.cxl | Bin 9646 -> 9738 bytes
 tests/data/acpi/q35/DSDT.dimmpxm | Bin 9974 -> 10066 bytes
 tests/data/acpi/q35/DSDT.ipmibt | Bin 8395 -> 8487 bytes
 tests/data/acpi/q35/DSDT.ipmismbus | Bin 8409 -> 8500 bytes
 tests/data/acpi/q35/DSDT.ivrs | Bin 8337 -> 8429 bytes
 tests/data/acpi/q35/DSDT.memhp | Bin 9679 -> 9771 bytes
 tests/data/acpi/q35/DSDT.mmio64 | Bin 9450 -> 9542 bytes
 tests/data/acpi/q35/DSDT.multi-bridge | Bin 8640 -> 8732 bytes
 tests/data/acpi/q35/DSDT.nohpet | Bin 8178 -> 8270 bytes
 tests/data/acpi/q35/DSDT.numamem | Bin 8326 -> 8418 bytes
 tests/data/acpi/q35/DSDT.pvpanic-isa | Bin 8421 -> 8513 bytes
 tests/data/acpi/q35/DSDT.tis.tpm12 | Bin 8926 -> 9018 bytes
 tests/data/acpi/q35/DSDT.tis.tpm2 | Bin 8952 -> 9044 bytes
 tests/data/acpi/q35/DSDT.viot | Bin 9429 -> 9521 bytes
 tests/data/acpi/q35/DSDT.xapic | Bin 35683 -> 35775 bytes
 tests/data/acpi/q35/FACP.core-count2 | Bin 0 -> 244 bytes
 tests/data/acpi/q35/HMAT.acpihmat-noinitiator | Bin 0 -> 288 bytes
 tests/data/acpi/q35/SRAT.acpihmat-noinitiator | Bin 0 -> 312 bytes
 tests/data/acpi/q35/SSDT.dimmpxm | Bin 734 -> 1815 bytes
 tests/data/acpi/virt/APIC | Bin 168 -> 172 bytes
 tests/data/acpi/virt/APIC.acpihmatvirt | Bin 0 -> 412 bytes
 tests/data/acpi/virt/APIC.memhp | Bin 168 -> 172 bytes
 tests/data/acpi/virt/APIC.numamem | Bin 168 -> 172 bytes
 tests/data/acpi/virt/DSDT.acpihmatvirt | Bin 0 -> 5282 bytes
 tests/data/acpi/virt/FACP | Bin 268 -> 276 bytes
 tests/data/acpi/virt/FACP.memhp | Bin 268 -> 276 bytes
 tests/data/acpi/virt/FACP.numamem | Bin 268 -> 276 bytes
 tests/data/acpi/virt/HMAT.acpihmatvirt | Bin 0 -> 288 bytes
 tests/data/acpi/virt/PPTT.acpihmatvirt | Bin 0 -> 196 bytes
 tests/data/acpi/virt/SRAT.acpihmatvirt | Bin 0 -> 240 bytes
 tests/data/acpi/virt/SSDT.memhp | Bin 736 -> 1817 bytes
 147 files changed, 7960 insertions(+), 1011 deletions(-)
 create mode 100644 tests/avocado/acpi-bits/bits-config/bits-cfg.txt
 create mode 100644 include/hw/cxl/cxl_cdat.h
 create mode 100644 include/hw/pci/pcie_doe.h
 create mode 100644 backends/cryptodev-lkcf.c
 create mode 100644 hw/cxl/cxl-cdat.c
 create mode 100644 hw/display/acpi-vga-stub.c
 create mode 100644 hw/display/acpi-vga.c
 create mode 100644 hw/pci/pcie_doe.c
 create mode 100644 docs/devel/acpi-bits.rst
 create mode 100644 tests/avocado/acpi-bits.py
 create mode 100644 tests/avocado/acpi-bits/bits-tests/smbios.py2
 create mode 100644 tests/avocado/acpi-bits/bits-tests/testacpi.py2
 create mode 100644 tests/avocado/acpi-bits/bits-tests/testcpuid.py2
 create mode 100644 tests/data/acpi/q35/APIC.acpihmat-noinitiator
 create mode 100644 tests/data/acpi/q35/APIC.core-count2
 create mode 100644 tests/data/acpi/q35/DSDT.acpihmat-noinitiator
 create mode 100644 tests/data/acpi/q35/DSDT.core-count2
 create mode 100644 tests/data/acpi/q35/FACP.core-count2
 create mode 100644 tests/data/acpi/q35/HMAT.acpihmat-noinitiator
 create mode 100644 tests/data/acpi/q35/SRAT.acpihmat-noinitiator
 create mode 100644 tests/data/acpi/virt/APIC.acpihmatvirt
 create mode 100644 tests/data/acpi/virt/DSDT.acpihmatvirt
 create mode 100644 tests/data/acpi/virt/HMAT.acpihmatvirt
 create mode 100644 tests/data/acpi/virt/PPTT.acpihmatvirt
 create mode 100644 tests/data/acpi/virt/SRAT.acpihmatvirt