Message ID | cover.1644492115.git.qemu_oss@crudebyte.com (mailing list archive) |
---|---|
State | New, archived |
On Thu, 10 Feb 2022 at 11:33, Christian Schoenebeck <qemu_oss@crudebyte.com> wrote: > > The following changes since commit 0a301624c2f4ced3331ffd5bce85b4274fe132af: > > Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20220208' into staging (2022-02-08 11:40:08 +0000) > > are available in the Git repository at: > > https://github.com/cschoenebeck/qemu.git tags/pull-9p-20220210 > > for you to fetch changes up to de19c79dad6a2cad54ae04ce754d47c07bf9bc93: > > 9pfs: Fix segfault in do_readdir_many caused by struct dirent overread (2022-02-10 11:56:01 +0100) > > ---------------------------------------------------------------- > 9pfs: fixes and cleanup > > * Fifth patch fixes a 9pfs server crash that happened on some systems due > to incorrect (system dependant) handling of struct dirent size. > > * Tests: Second patch fixes a test error that happened on some systems due > mkdir() being called twice for creating the test directory for the 9p > 'local' tests. > > * Tests: Third patch fixes a memory leak. > > * Tests: The remaining two patches are code cleanup. > > ---------------------------------------------------------------- Hi; this fails CI for the build-oss-fuzz job, which finds a heap-buffer-overflow: https://gitlab.com/qemu-project/qemu/-/jobs/2087610013 8/152 qemu:qtest+qtest-i386 / qtest-i386/qos-test ERROR 66.74s killed by signal 6 SIGABRT >>> QTEST_QEMU_BINARY=./qemu-system-i386 QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon MALLOC_PERTURB_=120 G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh QTEST_QEMU_IMG=./qemu-img /builds/qemu-project/qemu/build-oss-fuzz/tests/qtest/qos-test --tap -k ――――――――――――――――――――――――――――――――――――― ✀ ――――――――――――――――――――――――――――――――――――― Listing only the last 100 lines from a long log. For details see https://github.com/google/sanitizers/issues/189 ==7270==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases! ==7270==WARNING: ASan is ignoring requested __asan_handle_no_return: stack type: default top: 0x7ffc79fb0000; bottom 0x7ff908ffd000; size: 0x000370fb3000 (14780411904) False positive error reports may follow For details see https://github.com/google/sanitizers/issues/189 ==7276==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases! ==7276==WARNING: ASan is ignoring requested __asan_handle_no_return: stack type: default top: 0x7fff7e4a8000; bottom 0x7fd6363fd000; size: 0x0029480ab000 (177302319104) False positive error reports may follow For details see https://github.com/google/sanitizers/issues/189 ==7282==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases! ==7282==WARNING: ASan is ignoring requested __asan_handle_no_return: stack type: default top: 0x7ffee6e7f000; bottom 0x7f32fb5fd000; size: 0x00cbeb882000 (875829927936) False positive error reports may follow For details see https://github.com/google/sanitizers/issues/189 ==7288==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases! 
==7288==WARNING: ASan is ignoring requested __asan_handle_no_return: stack type: default top: 0x7ffc6118e000; bottom 0x7f6391cfd000; size: 0x0098cf491000 (656312700928) False positive error reports may follow For details see https://github.com/google/sanitizers/issues/189 ==7294==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases! ==7294==WARNING: ASan is ignoring requested __asan_handle_no_return: stack type: default top: 0x7ffef665d000; bottom 0x7f69dc8fd000; size: 0x009519d60000 (640383582208) False positive error reports may follow For details see https://github.com/google/sanitizers/issues/189 ==7300==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases! ==7300==WARNING: ASan is ignoring requested __asan_handle_no_return: stack type: default top: 0x7ffe33db0000; bottom 0x7f01421fd000; size: 0x00fcf1bb3000 (1086387335168) False positive error reports may follow For details see https://github.com/google/sanitizers/issues/189 ==7306==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases! ==7306==WARNING: ASan is ignoring requested __asan_handle_no_return: stack type: default top: 0x7ffebd618000; bottom 0x7ff1179fd000; size: 0x000da5c1b000 (58615508992) False positive error reports may follow For details see https://github.com/google/sanitizers/issues/189 ================================================================= ==7306==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x612000030768 at pc 0x562351066c74 bp 0x7ff1078c3a90 sp 0x7ff1078c3240 READ of size 48830 at 0x612000030768 thread T4 #0 0x562351066c73 in __interceptor_memcpy.part.0 asan_interceptors.cpp.o #1 0x7ff1290d443f in g_memdup (/lib64/libglib-2.0.so.0+0x6e43f) #2 0x56235134537a in do_readdir_many /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/codir.c:146:19 #3 0x56235134537a in v9fs_co_readdir_many /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/codir.c:225:5 #4 0x56235132d626 in v9fs_do_readdir /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/9p.c:2430:13 #5 0x56235132d626 in v9fs_readdir /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/9p.c:2543:13 #6 0x56235257101e in coroutine_trampoline /builds/qemu-project/qemu/build-oss-fuzz/../util/coroutine-ucontext.c:173:9 #7 0x7ff126e0e84f (/lib64/libc.so.6+0x5784f) 0x612000030768 is located 0 bytes to the right of 296-byte region [0x612000030640,0x612000030768) allocated by thread T4 here: #0 0x5623510a4e47 in malloc (/builds/qemu-project/qemu/build-oss-fuzz/qemu-system-i386+0x1146e47) #1 0x7ff1290c03d8 in g_malloc (/lib64/libglib-2.0.so.0+0x5a3d8) #2 0x56235131e659 in synth_opendir /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/9p-synth.c:185:18 #3 0x5623513462f5 in v9fs_co_opendir /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/codir.c:321:5 #4 0x5623513257d7 in v9fs_open /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/9p.c:1959:15 #5 0x56235257101e in coroutine_trampoline /builds/qemu-project/qemu/build-oss-fuzz/../util/coroutine-ucontext.c:173:9 #6 0x7ff126e0e84f (/lib64/libc.so.6+0x5784f) Thread T4 created by T0 here: #0 0x562351015926 in pthread_create (/builds/qemu-project/qemu/build-oss-fuzz/qemu-system-i386+0x10b7926) #1 0x5623525351ea in qemu_thread_create /builds/qemu-project/qemu/build-oss-fuzz/../util/qemu-thread-posix.c:596:11 #2 0x5623525a4588 in do_spawn_thread /builds/qemu-project/qemu/build-oss-fuzz/../util/thread-pool.c:134:5 #3 
0x5623525a4588 in spawn_thread_bh_fn /builds/qemu-project/qemu/build-oss-fuzz/../util/thread-pool.c:142:5 #4 0x562352569814 in aio_bh_call /builds/qemu-project/qemu/build-oss-fuzz/../util/async.c:141:5 #5 0x562352569814 in aio_bh_poll /builds/qemu-project/qemu/build-oss-fuzz/../util/async.c:169:13 #6 0x5623525248cc in aio_dispatch /builds/qemu-project/qemu/build-oss-fuzz/../util/aio-posix.c:415:5 #7 0x56235256c34c in aio_ctx_dispatch /builds/qemu-project/qemu/build-oss-fuzz/../util/async.c:311:5 #8 0x7ff1290bb05e in g_main_context_dispatch (/lib64/libglib-2.0.so.0+0x5505e) SUMMARY: AddressSanitizer: heap-buffer-overflow asan_interceptors.cpp.o in __interceptor_memcpy.part.0 Shadow bytes around the buggy address: 0x0c247fffe090: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd 0x0c247fffe0a0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c247fffe0b0: fd fd fd fd fd fd fd fd fd fd fd fd fd fa fa fa 0x0c247fffe0c0: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00 0x0c247fffe0d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 =>0x0c247fffe0e0: 00 00 00 00 00 00 00 00 00 00 00 00 00[fa]fa fa 0x0c247fffe0f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c247fffe100: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c247fffe110: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c247fffe120: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c247fffe130: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb ==7306==ABORTING thanks -- PMM
On Sonntag, 13. Februar 2022 21:33:10 CET Peter Maydell wrote: > On Thu, 10 Feb 2022 at 11:33, Christian Schoenebeck > > <qemu_oss@crudebyte.com> wrote: > > The following changes since commit 0a301624c2f4ced3331ffd5bce85b4274fe132af: > > Merge remote-tracking branch > > 'remotes/pmaydell/tags/pull-target-arm-20220208' into staging > > (2022-02-08 11:40:08 +0000)> > > are available in the Git repository at: > > https://github.com/cschoenebeck/qemu.git tags/pull-9p-20220210 > > > > for you to fetch changes up to de19c79dad6a2cad54ae04ce754d47c07bf9bc93: > > 9pfs: Fix segfault in do_readdir_many caused by struct dirent overread > > (2022-02-10 11:56:01 +0100)> > > ---------------------------------------------------------------- > > 9pfs: fixes and cleanup > > > > * Fifth patch fixes a 9pfs server crash that happened on some systems due > > > > to incorrect (system dependant) handling of struct dirent size. > > > > * Tests: Second patch fixes a test error that happened on some systems due > > > > mkdir() being called twice for creating the test directory for the 9p > > 'local' tests. > > > > * Tests: Third patch fixes a memory leak. > > > > * Tests: The remaining two patches are code cleanup. > > > > ---------------------------------------------------------------- > > Hi; this fails CI for the build-oss-fuzz job, which finds > a heap-buffer-overflow: > https://gitlab.com/qemu-project/qemu/-/jobs/2087610013 So this is about the 'dirent' patch: https://github.com/cschoenebeck/qemu/commit/de19c79dad6a2cad54ae04ce754d47c07bf9bc93 In conjunction with the 9p fuzzing tests: https://wiki.qemu.org/Documentation/9p#Fuzzing I first thought it might be a false positive due to the unorthodox handling of dirent duplication by that patch, but from the ASan output below I am not really sure about that. Is there a way to get the content of local variables? Would it be possible that the following issue (g_memdup vs. g_memdup2) might apply here? https://discourse.gnome.org/t/port-your-module-from-g-memdup-to-g-memdup2-now/5538 Best regards, Christian Schoenebeck > > 8/152 qemu:qtest+qtest-i386 / qtest-i386/qos-test ERROR 66.74s killed > by signal 6 SIGABRT > > >>> QTEST_QEMU_BINARY=./qemu-system-i386 > >>> QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon > >>> MALLOC_PERTURB_=120 > >>> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon. > >>> sh QTEST_QEMU_IMG=./qemu-img > >>> /builds/qemu-project/qemu/build-oss-fuzz/tests/qtest/qos-test --tap -k > ――――――――――――――――――――――――――――――――――――― ✀ > ――――――――――――――――――――――――――――――――――――― Listing only the last 100 lines from > a long log. > For details see https://github.com/google/sanitizers/issues/189 > ==7270==WARNING: ASan doesn't fully support makecontext/swapcontext > functions and may produce false positives in some cases! > ==7270==WARNING: ASan is ignoring requested __asan_handle_no_return: > stack type: default top: 0x7ffc79fb0000; bottom 0x7ff908ffd000; size: > 0x000370fb3000 (14780411904) > False positive error reports may follow > For details see https://github.com/google/sanitizers/issues/189 > ==7276==WARNING: ASan doesn't fully support makecontext/swapcontext > functions and may produce false positives in some cases! 
> ==7276==WARNING: ASan is ignoring requested __asan_handle_no_return: > stack type: default top: 0x7fff7e4a8000; bottom 0x7fd6363fd000; size: > 0x0029480ab000 (177302319104) > False positive error reports may follow > For details see https://github.com/google/sanitizers/issues/189 > ==7282==WARNING: ASan doesn't fully support makecontext/swapcontext > functions and may produce false positives in some cases! > ==7282==WARNING: ASan is ignoring requested __asan_handle_no_return: > stack type: default top: 0x7ffee6e7f000; bottom 0x7f32fb5fd000; size: > 0x00cbeb882000 (875829927936) > False positive error reports may follow > For details see https://github.com/google/sanitizers/issues/189 > ==7288==WARNING: ASan doesn't fully support makecontext/swapcontext > functions and may produce false positives in some cases! > ==7288==WARNING: ASan is ignoring requested __asan_handle_no_return: > stack type: default top: 0x7ffc6118e000; bottom 0x7f6391cfd000; size: > 0x0098cf491000 (656312700928) > False positive error reports may follow > For details see https://github.com/google/sanitizers/issues/189 > ==7294==WARNING: ASan doesn't fully support makecontext/swapcontext > functions and may produce false positives in some cases! > ==7294==WARNING: ASan is ignoring requested __asan_handle_no_return: > stack type: default top: 0x7ffef665d000; bottom 0x7f69dc8fd000; size: > 0x009519d60000 (640383582208) > False positive error reports may follow > For details see https://github.com/google/sanitizers/issues/189 > ==7300==WARNING: ASan doesn't fully support makecontext/swapcontext > functions and may produce false positives in some cases! > ==7300==WARNING: ASan is ignoring requested __asan_handle_no_return: > stack type: default top: 0x7ffe33db0000; bottom 0x7f01421fd000; size: > 0x00fcf1bb3000 (1086387335168) > False positive error reports may follow > For details see https://github.com/google/sanitizers/issues/189 > ==7306==WARNING: ASan doesn't fully support makecontext/swapcontext > functions and may produce false positives in some cases! 
> ==7306==WARNING: ASan is ignoring requested __asan_handle_no_return: > stack type: default top: 0x7ffebd618000; bottom 0x7ff1179fd000; size: > 0x000da5c1b000 (58615508992) > False positive error reports may follow > For details see https://github.com/google/sanitizers/issues/189 > ================================================================= > ==7306==ERROR: AddressSanitizer: heap-buffer-overflow on address > 0x612000030768 at pc 0x562351066c74 bp 0x7ff1078c3a90 sp > 0x7ff1078c3240 > READ of size 48830 at 0x612000030768 thread T4 > #0 0x562351066c73 in __interceptor_memcpy.part.0 asan_interceptors.cpp.o > #1 0x7ff1290d443f in g_memdup (/lib64/libglib-2.0.so.0+0x6e43f) > #2 0x56235134537a in do_readdir_many > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/codir.c:146:19 > #3 0x56235134537a in v9fs_co_readdir_many > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/codir.c:225:5 > #4 0x56235132d626 in v9fs_do_readdir > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/9p.c:2430:13 > #5 0x56235132d626 in v9fs_readdir > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/9p.c:2543:13 > #6 0x56235257101e in coroutine_trampoline > /builds/qemu-project/qemu/build-oss-fuzz/../util/coroutine-ucontext.c:173:9 > #7 0x7ff126e0e84f (/lib64/libc.so.6+0x5784f) > 0x612000030768 is located 0 bytes to the right of 296-byte region > [0x612000030640,0x612000030768) > allocated by thread T4 here: > #0 0x5623510a4e47 in malloc > (/builds/qemu-project/qemu/build-oss-fuzz/qemu-system-i386+0x1146e47) > #1 0x7ff1290c03d8 in g_malloc (/lib64/libglib-2.0.so.0+0x5a3d8) > #2 0x56235131e659 in synth_opendir > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/9p-synth.c:185:18 > #3 0x5623513462f5 in v9fs_co_opendir > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/codir.c:321:5 > #4 0x5623513257d7 in v9fs_open > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/9p.c:1959:15 > #5 0x56235257101e in coroutine_trampoline > /builds/qemu-project/qemu/build-oss-fuzz/../util/coroutine-ucontext.c:173:9 > #6 0x7ff126e0e84f (/lib64/libc.so.6+0x5784f) > Thread T4 created by T0 here: > #0 0x562351015926 in pthread_create > (/builds/qemu-project/qemu/build-oss-fuzz/qemu-system-i386+0x10b7926) > #1 0x5623525351ea in qemu_thread_create > /builds/qemu-project/qemu/build-oss-fuzz/../util/qemu-thread-posix.c:596:11 > #2 0x5623525a4588 in do_spawn_thread > /builds/qemu-project/qemu/build-oss-fuzz/../util/thread-pool.c:134:5 > #3 0x5623525a4588 in spawn_thread_bh_fn > /builds/qemu-project/qemu/build-oss-fuzz/../util/thread-pool.c:142:5 > #4 0x562352569814 in aio_bh_call > /builds/qemu-project/qemu/build-oss-fuzz/../util/async.c:141:5 > #5 0x562352569814 in aio_bh_poll > /builds/qemu-project/qemu/build-oss-fuzz/../util/async.c:169:13 > #6 0x5623525248cc in aio_dispatch > /builds/qemu-project/qemu/build-oss-fuzz/../util/aio-posix.c:415:5 > #7 0x56235256c34c in aio_ctx_dispatch > /builds/qemu-project/qemu/build-oss-fuzz/../util/async.c:311:5 > #8 0x7ff1290bb05e in g_main_context_dispatch > (/lib64/libglib-2.0.so.0+0x5505e) SUMMARY: AddressSanitizer: > heap-buffer-overflow > asan_interceptors.cpp.o in __interceptor_memcpy.part.0 > Shadow bytes around the buggy address: > 0x0c247fffe090: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd > 0x0c247fffe0a0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd > 0x0c247fffe0b0: fd fd fd fd fd fd fd fd fd fd fd fd fd fa fa fa > 0x0c247fffe0c0: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00 > 0x0c247fffe0d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > =>0x0c247fffe0e0: 00 00 00 
00 00 00 00 00 00 00 00 00 00[fa]fa fa > 0x0c247fffe0f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa > 0x0c247fffe100: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa > 0x0c247fffe110: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa > 0x0c247fffe120: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa > 0x0c247fffe130: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa > Shadow byte legend (one shadow byte represents 8 application bytes): > Addressable: 00 > Partially addressable: 01 02 03 04 05 06 07 > Heap left redzone: fa > Freed heap region: fd > Stack left redzone: f1 > Stack mid redzone: f2 > Stack right redzone: f3 > Stack after return: f5 > Stack use after scope: f8 > Global redzone: f9 > Global init order: f6 > Poisoned by user: f7 > Container overflow: fc > Array cookie: ac > Intra object redzone: bb > ASan internal: fe > Left alloca redzone: ca > Right alloca redzone: cb > ==7306==ABORTING > > > thanks > -- PMM
On Mon, 14 Feb 2022 at 09:47, Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> So this is about the 'dirent' patch:
> https://github.com/cschoenebeck/qemu/commit/de19c79dad6a2cad54ae04ce754d47c07bf9bc93
>
> In conjunction with the 9p fuzzing tests:
> https://wiki.qemu.org/Documentation/9p#Fuzzing
>
> I first thought it might be a false positive due to the unorthodox handling of
> dirent duplication by that patch, but from the ASan output below I am not
> really sure about that.
>
> Is there a way to get the content of local variables?

Yes. You can build locally with the clang sanitizers enabled and then
run under gdb and with the appropriate environment variables to tell the
sanitizer to abort() on failures.

> Would it be possible that the following issue (g_memdup vs. g_memdup2) might
> apply here?
> https://discourse.gnome.org/t/port-your-module-from-g-memdup-to-g-memdup2-now/5538

It seems unlikely that the problem is that you're allocating more than
4 gigabytes and thus hitting a 64-to-32 truncation.

thanks
-- PMM
On Mon, 14 Feb 2022 10:47:43 +0100 Christian Schoenebeck <qemu_oss@crudebyte.com> wrote: > On Sonntag, 13. Februar 2022 21:33:10 CET Peter Maydell wrote: > > On Thu, 10 Feb 2022 at 11:33, Christian Schoenebeck > > > > <qemu_oss@crudebyte.com> wrote: > > > The following changes since commit 0a301624c2f4ced3331ffd5bce85b4274fe132af: > > > Merge remote-tracking branch > > > 'remotes/pmaydell/tags/pull-target-arm-20220208' into staging > > > (2022-02-08 11:40:08 +0000)> > > > are available in the Git repository at: > > > https://github.com/cschoenebeck/qemu.git tags/pull-9p-20220210 > > > > > > for you to fetch changes up to de19c79dad6a2cad54ae04ce754d47c07bf9bc93: > > > 9pfs: Fix segfault in do_readdir_many caused by struct dirent overread > > > (2022-02-10 11:56:01 +0100)> > > > ----------------------------------------------------------------9d82f6a3e68c2 > > > 9pfs: fixes and cleanup > > > > > > * Fifth patch fixes a 9pfs server crash that happened on some systems due > > > > > > to incorrect (system dependant) handling of struct dirent size. > > > > > > * Tests: Second patch fixes a test error that happened on some systems due > > > > > > mkdir() being called twice for creating the test directory for the 9p > > > 'local' tests. > > > > > > * Tests: Third patch fixes a memory leak. > > > > > > * Tests: The remaining two patches are code cleanup. > > > > > > ---------------------------------------------------------------- > > > > Hi; this fails CI for the build-oss-fuzz job, which finds > > a heap-buffer-overflow: > > https://gitlab.com/qemu-project/qemu/-/jobs/2087610013 > > So this is about the 'dirent' patch: > https://github.com/cschoenebeck/qemu/commit/de19c79dad6a2cad54ae04ce754d47c07bf9bc93 > > In conjunction with the 9p fuzzing tests: > https://wiki.qemu.org/Documentation/9p#Fuzzing > > I first thought it might be a false positive due to the unorthodox handling of > dirent duplication by that patch, but from the ASan output below I am not > really sure about that. > No, this is an actual bug. See below. > Is there a way to get the content of local variables? > > Would it be possible that the following issue (g_memdup vs. g_memdup2) might > apply here? > https://discourse.gnome.org/t/port-your-module-from-g-memdup-to-g-memdup2-now/5538 > > Best regards, > Christian Schoenebeck > > > > > 8/152 qemu:qtest+qtest-i386 / qtest-i386/qos-test ERROR 66.74s killed > > by signal 6 SIGABRT > > > > >>> QTEST_QEMU_BINARY=./qemu-system-i386 > > >>> QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon > > >>> MALLOC_PERTURB_=120 > > >>> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon. > > >>> sh QTEST_QEMU_IMG=./qemu-img > > >>> /builds/qemu-project/qemu/build-oss-fuzz/tests/qtest/qos-test --tap -k > > ――――――――――――――――――――――――――――――――――――― ✀ > > ――――――――――――――――――――――――――――――――――――― Listing only the last 100 lines from > > a long log. > > For details see https://github.com/google/sanitizers/issues/189 > > ==7270==WARNING: ASan doesn't fully support makecontext/swapcontext > > functions and may produce false positives in some cases! > > ==7270==WARNING: ASan is ignoring requested __asan_handle_no_return: > > stack type: default top: 0x7ffc79fb0000; bottom 0x7ff908ffd000; size: > > 0x000370fb3000 (14780411904) > > False positive error reports may follow > > For details see https://github.com/google/sanitizers/issues/189 > > ==7276==WARNING: ASan doesn't fully support makecontext/swapcontext > > functions and may produce false positives in some cases! 
> > ==7276==WARNING: ASan is ignoring requested __asan_handle_no_return: > > stack type: default top: 0x7fff7e4a8000; bottom 0x7fd6363fd000; size: > > 0x0029480ab000 (177302319104) > > False positive error reports may follow > > For details see https://github.com/google/sanitizers/issues/189 > > ==7282==WARNING: ASan doesn't fully support makecontext/swapcontext > > functions and may produce false positives in some cases! > > ==7282==WARNING: ASan is ignoring requested __asan_handle_no_return: > > stack type: default top: 0x7ffee6e7f000; bottom 0x7f32fb5fd000; size: > > 0x00cbeb882000 (875829927936) > > False positive error reports may follow > > For details see https://github.com/google/sanitizers/issues/189 > > ==7288==WARNING: ASan doesn't fully support makecontext/swapcontext > > functions and may produce false positives in some cases! > > ==7288==WARNING: ASan is ignoring requested __asan_handle_no_return: > > stack type: default top: 0x7ffc6118e000; bottom 0x7f6391cfd000; size: > > 0x0098cf491000 (656312700928) > > False positive error reports may follow > > For details see https://github.com/google/sanitizers/issues/189 > > ==7294==WARNING: ASan doesn't fully support makecontext/swapcontext > > functions and may produce false positives in some cases! > > ==7294==WARNING: ASan is ignoring requested __asan_handle_no_return: > > stack type: default top: 0x7ffef665d000; bottom 0x7f69dc8fd000; size: > > 0x009519d60000 (640383582208) > > False positive error reports may follow > > For details see https://github.com/google/sanitizers/issues/189 > > ==7300==WARNING: ASan doesn't fully support makecontext/swapcontext > > functions and may produce false positives in some cases! > > ==7300==WARNING: ASan is ignoring requested __asan_handle_no_return: > > stack type: default top: 0x7ffe33db0000; bottom 0x7f01421fd000; size: > > 0x00fcf1bb3000 (1086387335168) > > False positive error reports may follow > > For details see https://github.com/google/sanitizers/issues/189 > > ==7306==WARNING: ASan doesn't fully support makecontext/swapcontext > > functions and may produce false positives in some cases! > > ==7306==WARNING: ASan is ignoring requested __asan_handle_no_return: > > stack type: default top: 0x7ffebd618000; bottom 0x7ff1179fd000; size: > > 0x000da5c1b000 (58615508992) > > False positive error reports may follow > > For details see https://github.com/google/sanitizers/issues/189 > > ================================================================= > > ==7306==ERROR: AddressSanitizer: heap-buffer-overflow on address > > 0x612000030768 at pc 0x562351066c74 bp 0x7ff1078c3a90 sp > > 0x7ff1078c3240 > > READ of size 48830 at 0x612000030768 thread T4 The size looks pretty big to me. Test file paths in virtio-9p-test are only a couple of bytes long... 
> > #0 0x562351066c73 in __interceptor_memcpy.part.0 asan_interceptors.cpp.o > > #1 0x7ff1290d443f in g_memdup (/lib64/libglib-2.0.so.0+0x6e43f) > > #2 0x56235134537a in do_readdir_many > > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/codir.c:146:19 > > #3 0x56235134537a in v9fs_co_readdir_many > > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/codir.c:225:5 > > #4 0x56235132d626 in v9fs_do_readdir > > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/9p.c:2430:13 > > #5 0x56235132d626 in v9fs_readdir > > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/9p.c:2543:13 > > #6 0x56235257101e in coroutine_trampoline > > /builds/qemu-project/qemu/build-oss-fuzz/../util/coroutine-ucontext.c:173:9 > > #7 0x7ff126e0e84f (/lib64/libc.so.6+0x5784f) > > 0x612000030768 is located 0 bytes to the right of 296-byte region > > [0x612000030640,0x612000030768) > > allocated by thread T4 here: > > #0 0x5623510a4e47 in malloc > > (/builds/qemu-project/qemu/build-oss-fuzz/qemu-system-i386+0x1146e47) > > #1 0x7ff1290c03d8 in g_malloc (/lib64/libglib-2.0.so.0+0x5a3d8) i.e. synth_open = g_malloc(sizeof(*synth_open)); and we have: typedef struct V9fsSynthOpenState { off_t offset; V9fsSynthNode *node; struct dirent dent; } V9fsSynthOpenState; It looks like that qemu_dirent_dup() ends up using an uninitialized dent->d_reclen. The synth backend should be fixed to honor d_reclen, or at least to allocate with g_new0(). Cheers, -- Greg > > #2 0x56235131e659 in synth_opendir > > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/9p-synth.c:185:18 > > #3 0x5623513462f5 in v9fs_co_opendir > > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/codir.c:321:5 > > #4 0x5623513257d7 in v9fs_open > > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/9p.c:1959:15 > > #5 0x56235257101e in coroutine_trampoline > > /builds/qemu-project/qemu/build-oss-fuzz/../util/coroutine-ucontext.c:173:9 > > #6 0x7ff126e0e84f (/lib64/libc.so.6+0x5784f) > > Thread T4 created by T0 here: > > #0 0x562351015926 in pthread_create > > (/builds/qemu-project/qemu/build-oss-fuzz/qemu-system-i386+0x10b7926) > > #1 0x5623525351ea in qemu_thread_create > > /builds/qemu-project/qemu/build-oss-fuzz/../util/qemu-thread-posix.c:596:11 > > #2 0x5623525a4588 in do_spawn_thread > > /builds/qemu-project/qemu/build-oss-fuzz/../util/thread-pool.c:134:5 > > #3 0x5623525a4588 in spawn_thread_bh_fn > > /builds/qemu-project/qemu/build-oss-fuzz/../util/thread-pool.c:142:5 > > #4 0x562352569814 in aio_bh_call > > /builds/qemu-project/qemu/build-oss-fuzz/../util/async.c:141:5 > > #5 0x562352569814 in aio_bh_poll > > /builds/qemu-project/qemu/build-oss-fuzz/../util/async.c:169:13 > > #6 0x5623525248cc in aio_dispatch > > /builds/qemu-project/qemu/build-oss-fuzz/../util/aio-posix.c:415:5 > > #7 0x56235256c34c in aio_ctx_dispatch > > /builds/qemu-project/qemu/build-oss-fuzz/../util/async.c:311:5 > > #8 0x7ff1290bb05e in g_main_context_dispatch > > (/lib64/libglib-2.0.so.0+0x5505e) SUMMARY: AddressSanitizer: > > heap-buffer-overflow > > asan_interceptors.cpp.o in __interceptor_memcpy.part.0 > > Shadow bytes around the buggy address: > > 0x0c247fffe090: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd > > 0x0c247fffe0a0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd > > 0x0c247fffe0b0: fd fd fd fd fd fd fd fd fd fd fd fd fd fa fa fa > > 0x0c247fffe0c0: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00 > > 0x0c247fffe0d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > > =>0x0c247fffe0e0: 00 00 00 00 00 00 00 00 00 00 00 00 00[fa]fa fa > > 0x0c247fffe0f0: fa fa 
fa fa fa fa fa fa fa fa fa fa fa fa fa fa > > 0x0c247fffe100: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa > > 0x0c247fffe110: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa > > 0x0c247fffe120: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa > > 0x0c247fffe130: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa > > Shadow byte legend (one shadow byte represents 8 application bytes): > > Addressable: 00 > > Partially addressable: 01 02 03 04 05 06 07 > > Heap left redzone: fa > > Freed heap region: fd > > Stack left redzone: f1 > > Stack mid redzone: f2 > > Stack right redzone: f3 > > Stack after return: f5 > > Stack use after scope: f8 > > Global redzone: f9 > > Global init order: f6 > > Poisoned by user: f7 > > Container overflow: fc > > Array cookie: ac > > Intra object redzone: bb > > ASan internal: fe > > Left alloca redzone: ca > > Right alloca redzone: cb > > ==7306==ABORTING > > > > > > thanks > > -- PMM > >
On Montag, 14. Februar 2022 11:36:53 CET Greg Kurz wrote: > On Mon, 14 Feb 2022 10:47:43 +0100 > > Christian Schoenebeck <qemu_oss@crudebyte.com> wrote: > > On Sonntag, 13. Februar 2022 21:33:10 CET Peter Maydell wrote: > > > On Thu, 10 Feb 2022 at 11:33, Christian Schoenebeck > > > > > > <qemu_oss@crudebyte.com> wrote: > > > > The following changes since commit 0a301624c2f4ced3331ffd5bce85b4274fe132af: > > > > Merge remote-tracking branch > > > > 'remotes/pmaydell/tags/pull-target-arm-20220208' into staging > > > > (2022-02-08 11:40:08 +0000)> > > > > > > > > are available in the Git repository at: > > > > https://github.com/cschoenebeck/qemu.git tags/pull-9p-20220210 > > > > > > > > for you to fetch changes up to de19c79dad6a2cad54ae04ce754d47c07bf9bc93: > > > > 9pfs: Fix segfault in do_readdir_many caused by struct dirent > > > > overread > > > > (2022-02-10 11:56:01 +0100)> > > > > > > > > ----------------------------------------------------------------9d82f6 > > > > a3e68c2 9pfs: fixes and cleanup > > > > > > > > * Fifth patch fixes a 9pfs server crash that happened on some systems > > > > due > > > > > > > > to incorrect (system dependant) handling of struct dirent size. > > > > > > > > * Tests: Second patch fixes a test error that happened on some systems > > > > due > > > > > > > > mkdir() being called twice for creating the test directory for the > > > > 9p > > > > 'local' tests. > > > > > > > > * Tests: Third patch fixes a memory leak. > > > > > > > > * Tests: The remaining two patches are code cleanup. > > > > > > > > ---------------------------------------------------------------- > > > > > > Hi; this fails CI for the build-oss-fuzz job, which finds > > > a heap-buffer-overflow: > > > https://gitlab.com/qemu-project/qemu/-/jobs/2087610013 > > > > So this is about the 'dirent' patch: > > https://github.com/cschoenebeck/qemu/commit/de19c79dad6a2cad54ae04ce754d47 > > c07bf9bc93 > > > > In conjunction with the 9p fuzzing tests: > > https://wiki.qemu.org/Documentation/9p#Fuzzing > > > > I first thought it might be a false positive due to the unorthodox > > handling of dirent duplication by that patch, but from the ASan output > > below I am not really sure about that. > > No, this is an actual bug. See below. Yep, the patch would turn the 9p tests' synth driver buggy. :/ Vitaly, I fear the patch needs a v5. > > Is there a way to get the content of local variables? > > > > Would it be possible that the following issue (g_memdup vs. g_memdup2) > > might apply here? > > https://discourse.gnome.org/t/port-your-module-from-g-memdup-to-g-memdup2-> > now/5538 > > > > Best regards, > > Christian Schoenebeck > > > > > 8/152 qemu:qtest+qtest-i386 / qtest-i386/qos-test ERROR 66.74s killed > > > by signal 6 SIGABRT > > > > > > >>> QTEST_QEMU_BINARY=./qemu-system-i386 > > > >>> QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemo > > > >>> n > > > >>> MALLOC_PERTURB_=120 > > > >>> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daem > > > >>> on. > > > >>> sh QTEST_QEMU_IMG=./qemu-img > > > >>> /builds/qemu-project/qemu/build-oss-fuzz/tests/qtest/qos-test --tap > > > >>> -k > > > > > > ――――――――――――――――――――――――――――――――――――― ✀ > > > ――――――――――――――――――――――――――――――――――――― Listing only the last 100 lines > > > from > > > a long log. > > > For details see https://github.com/google/sanitizers/issues/189 > > > ==7270==WARNING: ASan doesn't fully support makecontext/swapcontext > > > functions and may produce false positives in some cases! 
> > > ==7270==WARNING: ASan is ignoring requested __asan_handle_no_return: > > > stack type: default top: 0x7ffc79fb0000; bottom 0x7ff908ffd000; size: > > > 0x000370fb3000 (14780411904) > > > False positive error reports may follow > > > For details see https://github.com/google/sanitizers/issues/189 > > > ==7276==WARNING: ASan doesn't fully support makecontext/swapcontext > > > functions and may produce false positives in some cases! > > > ==7276==WARNING: ASan is ignoring requested __asan_handle_no_return: > > > stack type: default top: 0x7fff7e4a8000; bottom 0x7fd6363fd000; size: > > > 0x0029480ab000 (177302319104) > > > False positive error reports may follow > > > For details see https://github.com/google/sanitizers/issues/189 > > > ==7282==WARNING: ASan doesn't fully support makecontext/swapcontext > > > functions and may produce false positives in some cases! > > > ==7282==WARNING: ASan is ignoring requested __asan_handle_no_return: > > > stack type: default top: 0x7ffee6e7f000; bottom 0x7f32fb5fd000; size: > > > 0x00cbeb882000 (875829927936) > > > False positive error reports may follow > > > For details see https://github.com/google/sanitizers/issues/189 > > > ==7288==WARNING: ASan doesn't fully support makecontext/swapcontext > > > functions and may produce false positives in some cases! > > > ==7288==WARNING: ASan is ignoring requested __asan_handle_no_return: > > > stack type: default top: 0x7ffc6118e000; bottom 0x7f6391cfd000; size: > > > 0x0098cf491000 (656312700928) > > > False positive error reports may follow > > > For details see https://github.com/google/sanitizers/issues/189 > > > ==7294==WARNING: ASan doesn't fully support makecontext/swapcontext > > > functions and may produce false positives in some cases! > > > ==7294==WARNING: ASan is ignoring requested __asan_handle_no_return: > > > stack type: default top: 0x7ffef665d000; bottom 0x7f69dc8fd000; size: > > > 0x009519d60000 (640383582208) > > > False positive error reports may follow > > > For details see https://github.com/google/sanitizers/issues/189 > > > ==7300==WARNING: ASan doesn't fully support makecontext/swapcontext > > > functions and may produce false positives in some cases! > > > ==7300==WARNING: ASan is ignoring requested __asan_handle_no_return: > > > stack type: default top: 0x7ffe33db0000; bottom 0x7f01421fd000; size: > > > 0x00fcf1bb3000 (1086387335168) > > > False positive error reports may follow > > > For details see https://github.com/google/sanitizers/issues/189 > > > ==7306==WARNING: ASan doesn't fully support makecontext/swapcontext > > > functions and may produce false positives in some cases! > > > ==7306==WARNING: ASan is ignoring requested __asan_handle_no_return: > > > stack type: default top: 0x7ffebd618000; bottom 0x7ff1179fd000; size: > > > 0x000da5c1b000 (58615508992) > > > False positive error reports may follow > > > For details see https://github.com/google/sanitizers/issues/189 > > > ================================================================= > > > ==7306==ERROR: AddressSanitizer: heap-buffer-overflow on address > > > 0x612000030768 at pc 0x562351066c74 bp 0x7ff1078c3a90 sp > > > 0x7ff1078c3240 > > > READ of size 48830 at 0x612000030768 thread T4 > > The size looks pretty big to me. Test file paths in virtio-9p-test are > only a couple of bytes long... Right. 
> > > #0 0x562351066c73 in __interceptor_memcpy.part.0 asan_interceptors.cpp.o > > > #1 0x7ff1290d443f in g_memdup (/lib64/libglib-2.0.so.0+0x6e43f) > > > #2 0x56235134537a in do_readdir_many > > > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/codir.c:146:19 > > > #3 0x56235134537a in v9fs_co_readdir_many > > > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/codir.c:225:5 > > > #4 0x56235132d626 in v9fs_do_readdir > > > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/9p.c:2430:13 > > > #5 0x56235132d626 in v9fs_readdir > > > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/9p.c:2543:13 > > > #6 0x56235257101e in coroutine_trampoline > > > /builds/qemu-project/qemu/build-oss-fuzz/../util/coroutine-ucontext.c:17 > > > 3:9 > > > #7 0x7ff126e0e84f (/lib64/libc.so.6+0x5784f) > > > 0x612000030768 is located 0 bytes to the right of 296-byte region > > > [0x612000030640,0x612000030768) > > > allocated by thread T4 here: > > > #0 0x5623510a4e47 in malloc > > > (/builds/qemu-project/qemu/build-oss-fuzz/qemu-system-i386+0x1146e47) > > > #1 0x7ff1290c03d8 in g_malloc (/lib64/libglib-2.0.so.0+0x5a3d8) > > i.e. > > synth_open = g_malloc(sizeof(*synth_open)); > > and we have: > > typedef struct V9fsSynthOpenState { > off_t offset; > V9fsSynthNode *node; > struct dirent dent; > } V9fsSynthOpenState; > > It looks like that qemu_dirent_dup() ends up using an > uninitialized dent->d_reclen. > > The synth backend should be fixed to honor d_reclen, or > at least to allocate with g_new0(). Yes, I overlooked that this is not initialized with zero already. With g_new0() d_reclen would be zero and qemu_dirent_dup() would then fallback to the portable branch (as I assumed it already would): struct dirent * qemu_dirent_dup(struct dirent *dent) { size_t sz = 0; #if defined _DIRENT_HAVE_D_RECLEN /* Avoid use of strlen() if platform supports d_reclen. */ sz = dent->d_reclen; #endif /* * Test sz for zero even if d_reclen is available * because some drivers may set d_reclen to zero. */ if (sz == 0) { /* Fallback to the most portable way. */ sz = offsetof(struct dirent, d_name) + strlen(dent->d_name) + 1; } return g_memdup(dent, sz); } Additionally I would add NAME_MAX to the V9fsSynthOpenState allocation size, because it is known that some systems define dirent as flex-array (zero d_name size). I know Greg would not favour this solution (using g_new0), but it's the most minimalistic and most portable solution. So I would favour it for now. A cleaner solution on the long-term would be turning V9fsSynthOpenState's 'dent' member into a pointer and adding a new function to osdep like: struct dirent * qemu_dirent_new(const char* name) { ... } But I would like to postpone that qemu_dirent_new() solution, e.g. because I guess some people would probably not like qemu_dirent_new() to have in osdep, as it is probably not a general purpose function, and I am not keen putting qemu_dirent_new() into a different location than qemu_dirent_dup(), because it would raise the danger that system dependent code might deviate in future. 
Best regards, Christian Schoenebeck > > Cheers, > > -- > Greg > > > > #2 0x56235131e659 in synth_opendir > > > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/9p-synth.c:185:18 > > > #3 0x5623513462f5 in v9fs_co_opendir > > > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/codir.c:321:5 > > > #4 0x5623513257d7 in v9fs_open > > > /builds/qemu-project/qemu/build-oss-fuzz/../hw/9pfs/9p.c:1959:15 > > > #5 0x56235257101e in coroutine_trampoline > > > /builds/qemu-project/qemu/build-oss-fuzz/../util/coroutine-ucontext.c:17 > > > 3:9 > > > #6 0x7ff126e0e84f (/lib64/libc.so.6+0x5784f) > > > Thread T4 created by T0 here: > > > #0 0x562351015926 in pthread_create > > > (/builds/qemu-project/qemu/build-oss-fuzz/qemu-system-i386+0x10b7926) > > > #1 0x5623525351ea in qemu_thread_create > > > /builds/qemu-project/qemu/build-oss-fuzz/../util/qemu-thread-posix.c:596 > > > :11 > > > #2 0x5623525a4588 in do_spawn_thread > > > /builds/qemu-project/qemu/build-oss-fuzz/../util/thread-pool.c:134:5 > > > #3 0x5623525a4588 in spawn_thread_bh_fn > > > /builds/qemu-project/qemu/build-oss-fuzz/../util/thread-pool.c:142:5 > > > #4 0x562352569814 in aio_bh_call > > > /builds/qemu-project/qemu/build-oss-fuzz/../util/async.c:141:5 > > > #5 0x562352569814 in aio_bh_poll > > > /builds/qemu-project/qemu/build-oss-fuzz/../util/async.c:169:13 > > > #6 0x5623525248cc in aio_dispatch > > > /builds/qemu-project/qemu/build-oss-fuzz/../util/aio-posix.c:415:5 > > > #7 0x56235256c34c in aio_ctx_dispatch > > > /builds/qemu-project/qemu/build-oss-fuzz/../util/async.c:311:5 > > > #8 0x7ff1290bb05e in g_main_context_dispatch > > > (/lib64/libglib-2.0.so.0+0x5505e) SUMMARY: AddressSanitizer: > > > heap-buffer-overflow > > > asan_interceptors.cpp.o in __interceptor_memcpy.part.0 > > > Shadow bytes around the buggy address: > > > 0x0c247fffe090: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd > > > 0x0c247fffe0a0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd > > > 0x0c247fffe0b0: fd fd fd fd fd fd fd fd fd fd fd fd fd fa fa fa > > > 0x0c247fffe0c0: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00 > > > 0x0c247fffe0d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > > > =>0x0c247fffe0e0: 00 00 00 00 00 00 00 00 00 00 00 00 00[fa]fa fa > > > 0x0c247fffe0f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa > > > 0x0c247fffe100: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa > > > 0x0c247fffe110: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa > > > 0x0c247fffe120: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa > > > 0x0c247fffe130: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa > > > Shadow byte legend (one shadow byte represents 8 application bytes): > > > Addressable: 00 > > > Partially addressable: 01 02 03 04 05 06 07 > > > Heap left redzone: fa > > > Freed heap region: fd > > > Stack left redzone: f1 > > > Stack mid redzone: f2 > > > Stack right redzone: f3 > > > Stack after return: f5 > > > Stack use after scope: f8 > > > Global redzone: f9 > > > Global init order: f6 > > > Poisoned by user: f7 > > > Container overflow: fc > > > Array cookie: ac > > > Intra object redzone: bb > > > ASan internal: fe > > > Left alloca redzone: ca > > > Right alloca redzone: cb > > > ==7306==ABORTING > > > > > > > > > thanks > > > -- PMM
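To make the fallback path Christian describes concrete, here is a minimal standalone sketch modelled on the qemu_dirent_dup() snippet quoted above. It is illustrative only, not QEMU code: the helper and entry names are made up, and it assumes a libc where struct dirent carries a fixed-size d_name (as on glibc).

/* Minimal sketch, not QEMU code: with a zero-initialized entry, d_reclen is 0,
 * so the size falls back to offsetof + strlen and g_memdup() stays in bounds. */
#include <dirent.h>
#include <glib.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static struct dirent *dirent_dup_sketch(struct dirent *dent)
{
    size_t sz = 0;
#if defined _DIRENT_HAVE_D_RECLEN
    sz = dent->d_reclen;   /* an uninitialized value here is what the ASan report points at */
#endif
    if (sz == 0) {
        /* Portable fallback: copy only up to the end of the name in use. */
        sz = offsetof(struct dirent, d_name) + strlen(dent->d_name) + 1;
    }
    return g_memdup(dent, sz);  /* newer glib prefers g_memdup2() */
}

int main(void)
{
    /* g_new0()/g_malloc0() leaves d_reclen == 0, forcing the safe fallback. */
    struct dirent *dent = g_new0(struct dirent, 1);
    struct dirent *copy;

    g_strlcpy(dent->d_name, "synthetic-entry", sizeof(dent->d_name));
    copy = dirent_dup_sketch(dent);
    printf("duplicated entry '%s'\n", copy->d_name);

    g_free(copy);
    g_free(dent);
    return 0;
}

Built against glib-2.0, this just prints the duplicated name; with a plain g_malloc() instead of g_new0(), d_reclen would contain whatever happened to be on the heap, which is exactly the oversized read in the report above.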
On Montag, 14. Februar 2022 10:55:17 CET Peter Maydell wrote:
> On Mon, 14 Feb 2022 at 09:47, Christian Schoenebeck
> <qemu_oss@crudebyte.com> wrote:
> > So this is about the 'dirent' patch:
> > https://github.com/cschoenebeck/qemu/commit/de19c79dad6a2cad54ae04ce754d47c07bf9bc93
> >
> > In conjunction with the 9p fuzzing tests:
> > https://wiki.qemu.org/Documentation/9p#Fuzzing
> >
> > I first thought it might be a false positive due to the unorthodox
> > handling of dirent duplication by that patch, but from the ASan output
> > below I am not really sure about that.
> >
> > Is there a way to get the content of local variables?
>
> Yes. You can build locally with the clang sanitizers enabled and then
> run under gdb and with the appropriate environment variables to tell the
> sanitizer to abort() on failures.

Well, it no longer matters for this particular issue, but it would be useful
if the CI scripts already dumped the local variables to the logs, e.g. because
the patch in question was about system-dependent variations. Another reason is
the random-data nature of fuzzing tests, even though the latter could probably
be reproduced with an appropriate seed.

Best regards,
Christian Schoenebeck
Christian,

On Mon, Feb 14, 2022 at 12:44:48PM +0100, Christian Schoenebeck wrote:
> On Montag, 14. Februar 2022 11:36:53 CET Greg Kurz wrote:
> > The synth backend should be fixed to honor d_reclen, or
> > at least to allocate with g_new0().
>
> Yes, I overlooked that this is not initialized with zero already.
>
> With g_new0() d_reclen would be zero and qemu_dirent_dup() would then fallback
> to the portable branch (as I assumed it already would):

Perhaps, this additional change should be added (I only found two instances
of V9fsSynthOpenState allocation):

diff --git a/hw/9pfs/9p-synth.c b/hw/9pfs/9p-synth.c
--- a/hw/9pfs/9p-synth.c
+++ b/hw/9pfs/9p-synth.c
@@ -182,7 +182,7 @@ static int synth_opendir(FsContext *ctx,
     V9fsSynthOpenState *synth_open;
     V9fsSynthNode *node = *(V9fsSynthNode **)fs_path->data;

-    synth_open = g_malloc(sizeof(*synth_open));
+    synth_open = g_malloc0(sizeof(*synth_open));
     synth_open->node = node;
     node->open_count++;
     fs->private = synth_open;
@@ -266,7 +266,7 @@ static int synth_open(FsContext *ctx, V9fsPath *fs_path,
     V9fsSynthOpenState *synth_open;
     V9fsSynthNode *node = *(V9fsSynthNode **)fs_path->data;

-    synth_open = g_malloc(sizeof(*synth_open));
+    synth_open = g_malloc0(sizeof(*synth_open));
     synth_open->node = node;
     node->open_count++;
     fs->private = synth_open;

> Additionally I would add NAME_MAX to the V9fsSynthOpenState allocation size,
> because it is known that some systems define dirent as flex-array (zero d_name
> size).

(To be precise) not just zero, but 1 byte. Also, to remind, for some
filesystems, such as CIFS, actual d_name size could be longer than NAME_MAX.
Because of that struct dirent cannot be allocated statically or with simple
sizeof.

> I know Greg would not favour this solution (using g_new0), but it's the most
> minimalistic and most portable solution. So I would favour it for now.

Why g_new0 and not just g_malloc0? This is smallest code change, which seems
appropriate for a bug fix.

Thanks,

> A cleaner solution on the long-term would be turning V9fsSynthOpenState's
> 'dent' member into a pointer and adding a new function to osdep like:
>
> struct dirent *
> qemu_dirent_new(const char* name) {
> ...
> }
>
> But I would like to postpone that qemu_dirent_new() solution, e.g. because I
> guess some people would probably not like qemu_dirent_new() to have in osdep,
> as it is probably not a general purpose function, and I am not keen putting
> qemu_dirent_new() into a different location than qemu_dirent_dup(), because it
> would raise the danger that system dependent code might deviate in future.
>
> Best regards,
> Christian Schoenebeck
>
On Montag, 14. Februar 2022 15:43:51 CET Vitaly Chikunov wrote:
> Christian,
>
> On Mon, Feb 14, 2022 at 12:44:48PM +0100, Christian Schoenebeck wrote:
> > On Montag, 14. Februar 2022 11:36:53 CET Greg Kurz wrote:
> > > The synth backend should be fixed to honor d_reclen, or
> > > at least to allocate with g_new0().
> >
> > Yes, I overlooked that this is not initialized with zero already.
> >
> > With g_new0() d_reclen would be zero and qemu_dirent_dup() would then
> > fallback to the portable branch (as I assumed it already would):
>
> Perhaps, this additional change should be added (I only found two instances
> of V9fsSynthOpenState allocation):
>
> diff --git a/hw/9pfs/9p-synth.c b/hw/9pfs/9p-synth.c
> --- a/hw/9pfs/9p-synth.c
> +++ b/hw/9pfs/9p-synth.c
> @@ -182,7 +182,7 @@ static int synth_opendir(FsContext *ctx,
>      V9fsSynthOpenState *synth_open;
>      V9fsSynthNode *node = *(V9fsSynthNode **)fs_path->data;
>
> -    synth_open = g_malloc(sizeof(*synth_open));
> +    synth_open = g_malloc0(sizeof(*synth_open));
>      synth_open->node = node;
>      node->open_count++;
>      fs->private = synth_open;
> @@ -266,7 +266,7 @@ static int synth_open(FsContext *ctx, V9fsPath *fs_path,
>      V9fsSynthOpenState *synth_open;
>      V9fsSynthNode *node = *(V9fsSynthNode **)fs_path->data;
>
> -    synth_open = g_malloc(sizeof(*synth_open));
> +    synth_open = g_malloc0(sizeof(*synth_open));
>      synth_open->node = node;
>      node->open_count++;
>      fs->private = synth_open;

Either

/*
 * Add NAME_MAX to ensure there is enough space for 'dent' member, because
 * some systems have d_name size of just 1, which would cause a buffer
 * overrun.
 */
synth_open = g_malloc0(sizeof(*synth_open) + NAME_MAX);

Or, more simply, by adjusting struct V9fsSynthOpenState:

index 036d7e4a5b..eeb246f377 100644
--- a/hw/9pfs/9p-synth.h
+++ b/hw/9pfs/9p-synth.h
@@ -41,6 +41,11 @@ typedef struct V9fsSynthOpenState {
     off_t offset;
     V9fsSynthNode *node;
     struct dirent dent;
+    /*
+     * Ensure there is enough space for 'dent' above, some systems have a
+     * d_name size of just 1, which would cause a buffer overrun.
+     */
+    char dent_trailing_space[NAME_MAX];
 } V9fsSynthOpenState;

 int qemu_v9fs_synth_mkdir(V9fsSynthNode *parent, int mode,

and of course still replacing g_malloc() with g_malloc0().

> > Additionally I would add NAME_MAX to the V9fsSynthOpenState allocation
> > size, because it is known that some systems define dirent as flex-array
> > (zero d_name size).
>
> (To be precise) not just zero, but 1 byte. Also, to remind, for some
> filesystems, such as CIFS, actual d_name size could be longer than NAME_MAX.
> Because of that struct dirent cannot be allocated statically or with simple
> sizeof.

Yes, but the dir names in the synth driver are just short hard-coded names
anyway; there is no access to a real filesystem going on in the synth driver.

> > I know Greg would not favour this solution (using g_new0), but it's the
> > most minimalistic and most portable solution. So I would favour it for
> > now.
>
> Why g_new0 and not just g_malloc0? This is smallest code change, which seems
> appropriate for a bug fix.

Both are fine with me in this case.

> Thanks,
>
> > A cleaner solution on the long-term would be turning V9fsSynthOpenState's
> > 'dent' member into a pointer and adding a new function to osdep like:
> >
> > struct dirent *
> > qemu_dirent_new(const char* name) {
> > ...
> > }
> >
> > But I would like to postpone that qemu_dirent_new() solution, e.g. because
> > I guess some people would probably not like qemu_dirent_new() to have in
> > osdep, as it is probably not a general purpose function, and I am not
> > keen putting qemu_dirent_new() into a different location than
> > qemu_dirent_dup(), because it would raise the danger that system
> > dependent code might deviate in future.
> >
> > Best regards,
> > Christian Schoenebeck
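For illustration only, here is a rough idea of what such a helper might look like. It is purely hypothetical -- the thread explicitly postpones this approach, only the qemu_dirent_new() signature above comes from it, and the function and entry names below are made up:

/* Hypothetical sketch of the postponed qemu_dirent_new() idea -- not QEMU code. */
#include <dirent.h>
#include <glib.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static struct dirent *dirent_new_sketch(const char *name)
{
    /* Enough room for the name, but never less than the declared struct
     * size, so callers treating the result as a plain struct dirent
     * (fixed-size d_name or flex-array alike) stay within the allocation. */
    size_t sz = offsetof(struct dirent, d_name) + strlen(name) + 1;
    struct dirent *dent;

    if (sz < sizeof(struct dirent)) {
        sz = sizeof(struct dirent);
    }
    dent = g_malloc0(sz);
    memcpy(dent->d_name, name, strlen(name) + 1);
#if defined _DIRENT_HAVE_D_RECLEN
    dent->d_reclen = sz;   /* keep qemu_dirent_dup()'s fast path consistent */
#endif
    return dent;
}

int main(void)
{
    struct dirent *dent = dirent_new_sketch("synth-entry");
    printf("created entry '%s'\n", dent->d_name);
    g_free(dent);
    return 0;
}

This is not meant as a patch; it only illustrates why such a helper would have to pick the larger of the two sizes and keep d_reclen in sync with the allocation.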
On Mon, 14 Feb 2022 17:43:51 +0300 Vitaly Chikunov <vt@altlinux.org> wrote: > Christian, > > On Mon, Feb 14, 2022 at 12:44:48PM +0100, Christian Schoenebeck wrote: > > On Montag, 14. Februar 2022 11:36:53 CET Greg Kurz wrote: > > > The synth backend should be fixed to honor d_reclen, or > > > at least to allocate with g_new0(). > > > > Yes, I overlooked that this is not initialized with zero already. > > > > With g_new0() d_reclen would be zero and qemu_dirent_dup() would then fallback > > to the portable branch (as I assumed it already would): > > Perhaps, this additional change should be added (I only found two instances of > V9fsSynthOpenState allocation): > > diff --git a/hw/9pfs/9p-synth.c b/hw/9pfs/9p-synth.c > --- a/hw/9pfs/9p-synth.c > +++ b/hw/9pfs/9p-synth.c > @@ -182,7 +182,7 @@ static int synth_opendir(FsContext *ctx, > V9fsSynthOpenState *synth_open; > V9fsSynthNode *node = *(V9fsSynthNode **)fs_path->data; > > - synth_open = g_malloc(sizeof(*synth_open)); > + synth_open = g_malloc0(sizeof(*synth_open)); > synth_open->node = node; > node->open_count++; > fs->private = synth_open; > @@ -266,7 +266,7 @@ static int synth_open(FsContext *ctx, V9fsPath *fs_path, > V9fsSynthOpenState *synth_open; > V9fsSynthNode *node = *(V9fsSynthNode **)fs_path->data; > > - synth_open = g_malloc(sizeof(*synth_open)); > + synth_open = g_malloc0(sizeof(*synth_open)); > synth_open->node = node; > node->open_count++; > fs->private = synth_open; > > > Additionally I would add NAME_MAX to the V9fsSynthOpenState allocation size, > > because it is known that some systems define dirent as flex-array (zero d_name > > size). > > (To be precise) not just zero, but 1 byte. Also, to remind, for some > filesystems, such as CIFS, actual d_name size could be longer than NAME_MAX. > Because of that struct dirent cannot be allocated statically or with simple > sizeof. > > > > > I know Greg would not favour this solution (using g_new0), but it's the most > > minimalistic and most portable solution. So I would favour it for now. > > Why g_new0 and not just g_malloc0? This is smallest code change, which seems > appropriate for a bug fix. > I prefer g_new0() for the exact reasons that are provided in QEMU's official coding style docs/devel/style.rst: --- Prefer ``g_new(T, n)`` instead of ``g_malloc(sizeof(T) * n)`` for the following reasons: * It catches multiplication overflowing size_t; * It returns T ``*`` instead of void ``*``, letting compiler catch more type errors. Declarations like .. code-block:: c T *v = g_malloc(sizeof(*v)) are acceptable, though. --- I'm fine with the acceptable version as well. The only important thing is to fix the synth backend. Cheers, -- Greg > Thanks, > > > > > A cleaner solution on the long-term would be turning V9fsSynthOpenState's > > 'dent' member into a pointer and adding a new function to osdep like: > > > > struct dirent * > > qemu_dirent_new(const char* name) { > > ... > > } > > > > But I would like to postpone that qemu_dirent_new() solution, e.g. because I > > guess some people would probably not like qemu_dirent_new() to have in osdep, > > as it is probably not a general purpose function, and I am not keen putting > > qemu_dirent_new() into a different location than qemu_dirent_dup(), because it > > would raise the danger that system dependent code might deviate in future. > > > > Best regards, > > Christian Schoenebeck > >
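As a small aside, the two allocation spellings from the quoted style.rst section, shown side by side in a standalone snippet; the Record type is made up purely for illustration:

/* Illustrative only: the two allocation spellings discussed above. */
#include <glib.h>
#include <stdio.h>

typedef struct {
    char payload[64];
} Record;

int main(void)
{
    /* Preferred: g_new0() returns Record * (not void *) and checks the
     * count * sizeof(Record) multiplication for size_t overflow. */
    Record *many = g_new0(Record, 4);

    /* The "acceptable" single-object spelling from style.rst: */
    Record *one = g_malloc0(sizeof(*one));

    printf("%zu bytes per record\n", sizeof(*one));
    g_free(many);
    g_free(one);
    return 0;
}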
On Dienstag, 15. Februar 2022 08:01:37 CET Greg Kurz wrote:
> On Mon, 14 Feb 2022 17:43:51 +0300
> Vitaly Chikunov <vt@altlinux.org> wrote:
> > Why g_new0 and not just g_malloc0? This is smallest code change, which
> > seems appropriate for a bug fix.
>
> I prefer g_new0() for the exact reasons that are provided in QEMU's
> official coding style docs/devel/style.rst:
[...]
> I'm fine with the acceptable version as well. The only important thing is
> to fix the synth backend.
>
> Cheers,

Hi, is anybody working on a v5 of this patch? If not I will send one this
evening to bring this issue forward, because it is blocking my queue.

Best regards,
Christian Schoenebeck
On Wed, 16 Feb 2022 11:30:12 +0100
Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> On Dienstag, 15. Februar 2022 08:01:37 CET Greg Kurz wrote:
> > On Mon, 14 Feb 2022 17:43:51 +0300
> > Vitaly Chikunov <vt@altlinux.org> wrote:
> > > Why g_new0 and not just g_malloc0? This is smallest code change, which
> > > seems appropriate for a bug fix.
> >
> > I prefer g_new0() for the exact reasons that are provided in QEMU's
> > official coding style docs/devel/style.rst:
> [...]
> > I'm fine with the acceptable version as well. The only important thing is
> > to fix the synth backend.
> >
> > Cheers,
>
> Hi, is anybody working on a v5 of this patch? If not I will send one this
> evening to bring this issue forward, because it is blocking my queue.
>

I'm not, please post a patch and I'll review it ASAP.

Cheers,

--
Greg

> Best regards,
> Christian Schoenebeck
>
On 16/2/22 11:30, Christian Schoenebeck wrote:
> On Dienstag, 15. Februar 2022 08:01:37 CET Greg Kurz wrote:
>> On Mon, 14 Feb 2022 17:43:51 +0300
>> Vitaly Chikunov <vt@altlinux.org> wrote:
>>> Why g_new0 and not just g_malloc0? This is smallest code change, which
>>> seems appropriate for a bug fix.
>>
>> I prefer g_new0() for the exact reasons that are provided in QEMU's
>> official coding style docs/devel/style.rst:
> [...]
>> I'm fine with the acceptable version as well. The only important thing is
>> to fix the synth backend.
>>
>> Cheers,
>
> Hi, is anybody working on a v5 of this patch? If not I will send one this
> evening to bring this issue forward, because it is blocking my queue.

If a patch blocks your tree queue, you can simply drop it, ask the
contributor to rework it, and keep merging your tree...
Christian,

On Wed, Feb 16, 2022 at 11:30:12AM +0100, Christian Schoenebeck wrote:
> On Dienstag, 15. Februar 2022 08:01:37 CET Greg Kurz wrote:
> > On Mon, 14 Feb 2022 17:43:51 +0300
> > Vitaly Chikunov <vt@altlinux.org> wrote:
> > > Why g_new0 and not just g_malloc0? This is smallest code change, which
> > > seems appropriate for a bug fix.
> >
> > I prefer g_new0() for the exact reasons that are provided in QEMU's
> > official coding style docs/devel/style.rst:
> [...]
> > I'm fine with the acceptable version as well. The only important thing is
> > to fix the synth backend.
> >
> > Cheers,
>
> Hi, is anybody working on a v5 of this patch? If not I will send one this
> evening to bring this issue forward, because it is blocking my queue.

I will send in a few hours if it's not too late.

Thanks,

> Best regards,
> Christian Schoenebeck
>
On Mittwoch, 16. Februar 2022 17:09:56 CET Vitaly Chikunov wrote:
> Christian,
>
> On Wed, Feb 16, 2022 at 11:30:12AM +0100, Christian Schoenebeck wrote:
> > On Dienstag, 15. Februar 2022 08:01:37 CET Greg Kurz wrote:
> > > On Mon, 14 Feb 2022 17:43:51 +0300
> > > Vitaly Chikunov <vt@altlinux.org> wrote:
> > > > Why g_new0 and not just g_malloc0? This is smallest code change, which
> > > > seems appropriate for a bug fix.
> > >
> > > I prefer g_new0() for the exact reasons that are provided in QEMU's
> > > official coding style docs/devel/style.rst:
> > [...]
> > > I'm fine with the acceptable version as well. The only important thing
> > > is to fix the synth backend.
> > >
> > > Cheers,
> >
> > Hi, is anybody working on a v5 of this patch? If not I will send one this
> > evening to bring this issue forward, because it is blocking my queue.
>
> I will send in a few hours if it's not too late.

Great! I myself would only have been able to do that in several hours anyway.

Best regards,
Christian Schoenebeck