Message ID | 20190917122559.15555-1-johannes@sipsolutions.net (mailing list archive)
---|---
State | New, archived
Series | libvhost-user: handle NOFD flag in call/kick/err better
Patchew URL: https://patchew.org/QEMU/20190917122559.15555-1-johannes@sipsolutions.net/

Hi,

This series failed the asan build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-fedora V=1 NETWORK=1
time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
=== TEST SCRIPT END ===

./tests/docker/docker.py --engine auto build qemu:fedora tests/docker/dockerfiles/fedora.docker --add-current-user
Image is up to date.
  LD      docker-test-debug@fedora.mo
cc: fatal error: no input files
compilation terminated.
make: *** [docker-test-debug@fedora.mo] Error 4

The full log is available at
http://patchew.org/logs/20190917122559.15555-1-johannes@sipsolutions.net/testing.asan/?type=message.

---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
Patchew URL: https://patchew.org/QEMU/20190917122559.15555-1-johannes@sipsolutions.net/

Hi,

This series failed the docker-mingw@fedora build test. Please find the testing
commands and their output below. If you have Docker installed, you can probably
reproduce it locally.

=== TEST SCRIPT BEGIN ===
#! /bin/bash
make docker-image-fedora V=1 NETWORK=1
time make docker-test-mingw@fedora J=14 NETWORK=1
=== TEST SCRIPT END ===

./tests/docker/docker.py --engine auto build qemu:fedora tests/docker/dockerfiles/fedora.docker --add-current-user
Image is up to date.
  LD      docker-test-mingw@fedora.mo
cc: fatal error: no input files
compilation terminated.
make: *** [docker-test-mingw@fedora.mo] Error 4

The full log is available at
http://patchew.org/logs/20190917122559.15555-1-johannes@sipsolutions.net/testing.docker-mingw@fedora/?type=message.

---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
On Tue, Sep 17, 2019 at 02:25:59PM +0200, Johannes Berg wrote:
> diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
> index f1677da21201..17b7833d1f6b 100644
> --- a/contrib/libvhost-user/libvhost-user.c
> +++ b/contrib/libvhost-user/libvhost-user.c
> @@ -920,6 +920,7 @@ static bool
>  vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
>  {
>      int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
> +    bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
> 
>      if (index >= dev->max_queues) {
>          vmsg_close_fds(vmsg);
> @@ -927,8 +928,12 @@ vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
>          return false;
>      }
> 
> -    if (vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK ||
> -        vmsg->fd_num != 1) {
> +    if (nofd) {
> +        vmsg_close_fds(vmsg);
> +        return true;
> +    }

With the following change to vmsg_close_fds():

      for (i = 0; i < vmsg->fd_num; i++) {
          close(vmsg->fds[i]);
      }
  +   for (i = 0; i < sizeof(vmsg->fds) / sizeof(vmsg->fds[0]); i++) {
  +       vmsg->fds[i] = -1;
  +   }
  +   vmsg->fd_num = 0;

...the message handler functions below can use vmsg->fds[0] (-1) without
worrying about NOFD. This makes the code simpler.
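For readers following along, below is a minimal sketch of what the proposed vmsg_close_fds() could look like in full. It is an illustration of the proposal above, not code from the series, and it assumes the VhostUserMsg layout used by contrib/libvhost-user at the time (an int fds[VHOST_MEMORY_MAX_NREGIONS] array plus an int fd_num count); it is meant to slot into libvhost-user.c, which already includes <unistd.h> for close().

    /* Sketch only: the proposed vmsg_close_fds() with the fd-reset added. */
    static void
    vmsg_close_fds(VhostUserMsg *vmsg)
    {
        int i;

        /* Close whatever descriptors actually arrived with the message. */
        for (i = 0; i < vmsg->fd_num; i++) {
            close(vmsg->fds[i]);
        }

        /*
         * Proposed addition: poison the whole fds[] array and reset the
         * count, so that handlers which later read vmsg->fds[0] simply see
         * -1 whenever no descriptor was passed (e.g. NOFD messages).
         */
        for (i = 0; i < (int)(sizeof(vmsg->fds) / sizeof(vmsg->fds[0])); i++) {
            vmsg->fds[i] = -1;
        }
        vmsg->fd_num = 0;
    }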
On Wed, 2019-09-18 at 10:39 +0100, Stefan Hajnoczi wrote:
> 
> >  vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
> >  {
> >      int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
> > +    bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
> > 
> >      if (index >= dev->max_queues) {
> >          vmsg_close_fds(vmsg);
> > @@ -927,8 +928,12 @@ vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
> >          return false;
> >      }
> > 
> > -    if (vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK ||
> > -        vmsg->fd_num != 1) {
> > +    if (nofd) {
> > +        vmsg_close_fds(vmsg);
> > +        return true;
> > +    }

So in this particular code you quoted, I actually just aligned it to use
the same "bool nofd" variable - and I made it return "true" when no FD
was given.

It couldn't make use of what you proposed:

> With the following change to vmsg_close_fds():
> 
>       for (i = 0; i < vmsg->fd_num; i++) {
>           close(vmsg->fds[i]);
>       }
>   +   for (i = 0; i < sizeof(vmsg->fds) / sizeof(vmsg->fds[0]); i++) {
>   +       vmsg->fds[i] = -1;
>   +   }
>   +   vmsg->fd_num = 0;
> 
> ...the message handler functions below can use vmsg->fds[0] (-1) without
> worrying about NOFD. This makes the code simpler.

because fd_num != 1 leads to the original code returning false, which
leads to the ring not getting started in vu_set_vring_kick_exec(). So we
need the special code here; it can be argued whether I should pull the
test out into the "bool nofd" variable or not ... *shrug*

The changes in vu_set_vring_kick_exec() and vu_set_vring_err_exec()
would indeed then not be necessary, but in vu_set_vring_call_exec() we
should still avoid the eventfd_write() if it's going to get -1.

So, yeah - it could be a bit simpler there. I'd say being explicit here is
easier to understand and thus nicer, but it's your (or Michael's, I guess?)
call.

johannes
On Wed, Sep 18, 2019 at 11:49:14AM +0200, Johannes Berg wrote:
> On Wed, 2019-09-18 at 10:39 +0100, Stefan Hajnoczi wrote:
> > 
> > >  vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
> > >  {
> > >      int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
> > > +    bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
> > > 
> > >      if (index >= dev->max_queues) {
> > >          vmsg_close_fds(vmsg);
> > > @@ -927,8 +928,12 @@ vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
> > >          return false;
> > >      }
> > > 
> > > -    if (vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK ||
> > > -        vmsg->fd_num != 1) {
> > > +    if (nofd) {
> > > +        vmsg_close_fds(vmsg);
> > > +        return true;
> > > +    }
> 
> So in this particular code you quoted, I actually just aligned it to use
> the same "bool nofd" variable - and I made it return "true" when no FD
> was given.
> 
> It couldn't make use of what you proposed:
> 
> > With the following change to vmsg_close_fds():
> > 
> >       for (i = 0; i < vmsg->fd_num; i++) {
> >           close(vmsg->fds[i]);
> >       }
> >   +   for (i = 0; i < sizeof(vmsg->fds) / sizeof(vmsg->fds[0]); i++) {
> >   +       vmsg->fds[i] = -1;
> >   +   }
> >   +   vmsg->fd_num = 0;
> > 
> > ...the message handler functions below can use vmsg->fds[0] (-1) without
> > worrying about NOFD. This makes the code simpler.
> 
> because fd_num != 1 leads to the original code returning false, which
> leads to the ring not getting started in vu_set_vring_kick_exec(). So we
> need the special code here; it can be argued whether I should pull the
> test out into the "bool nofd" variable or not ... *shrug*
> 
> The changes in vu_set_vring_kick_exec() and vu_set_vring_err_exec()
> would indeed then not be necessary, but in vu_set_vring_call_exec() we
> should still avoid the eventfd_write() if it's going to get -1.
> 
> So, yeah - it could be a bit simpler there. I'd say being explicit here is
> easier to understand and thus nicer, but it's your (or Michael's, I guess?)
> call.

Yeah, there is a trade-off to hiding NOFD, and if what I proposed isn't
convincing then it wasn't a good proposal :-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
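To make the trade-off being discussed concrete, here is a hedged stand-alone comparison of the two styles, using simplified stand-in types rather than the real libvhost-user structures: the explicit style the series keeps (each handler decodes NOFD itself) versus the hidden style from the proposal (handlers rely on fds[0] already being -1).

    #include <stdbool.h>
    #include <stdint.h>

    /* Simplified stand-ins; the real code uses VhostUserMsg and dev->vq[index]. */
    #define NOFD_MASK (0x1u << 8)

    struct msg { uint64_t u64; int fds[8]; int fd_num; };
    struct vq  { int err_fd; };

    /* Explicit style (what the series does): the handler decodes NOFD itself. */
    static void set_err_fd_explicit(struct vq *q, const struct msg *m)
    {
        bool nofd = m->u64 & NOFD_MASK;

        q->err_fd = nofd ? -1 : m->fds[0];
    }

    /* Hidden style (the proposal): vmsg_close_fds() already reset fds[0] to -1
     * when no descriptor was passed, so the handler can copy it blindly.
     */
    static void set_err_fd_hidden(struct vq *q, const struct msg *m)
    {
        q->err_fd = m->fds[0];
    }

Either way, vu_set_vring_call_exec() still has to skip the eventfd_write() when the fd ends up as -1, which is why the patch below guards that call explicitly.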
diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
index f1677da21201..17b7833d1f6b 100644
--- a/contrib/libvhost-user/libvhost-user.c
+++ b/contrib/libvhost-user/libvhost-user.c
@@ -920,6 +920,7 @@ static bool
 vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
 {
     int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+    bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
 
     if (index >= dev->max_queues) {
         vmsg_close_fds(vmsg);
@@ -927,8 +928,12 @@ vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
         return false;
     }
 
-    if (vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK ||
-        vmsg->fd_num != 1) {
+    if (nofd) {
+        vmsg_close_fds(vmsg);
+        return true;
+    }
+
+    if (vmsg->fd_num != 1) {
         vmsg_close_fds(vmsg);
         vu_panic(dev, "Invalid fds in request: %d", vmsg->request);
         return false;
@@ -1025,6 +1030,7 @@ static bool
 vu_set_vring_kick_exec(VuDev *dev, VhostUserMsg *vmsg)
 {
     int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+    bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
 
     DPRINT("u64: 0x%016"PRIx64"\n", vmsg->payload.u64);
 
@@ -1038,8 +1044,8 @@ vu_set_vring_kick_exec(VuDev *dev, VhostUserMsg *vmsg)
         dev->vq[index].kick_fd = -1;
     }
 
-    dev->vq[index].kick_fd = vmsg->fds[0];
-    DPRINT("Got kick_fd: %d for vq: %d\n", vmsg->fds[0], index);
+    dev->vq[index].kick_fd = nofd ? -1 : vmsg->fds[0];
+    DPRINT("Got kick_fd: %d for vq: %d\n", dev->vq[index].kick_fd, index);
 
     dev->vq[index].started = true;
     if (dev->iface->queue_set_started) {
@@ -1116,6 +1122,7 @@ static bool
 vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
 {
     int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+    bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
 
     DPRINT("u64: 0x%016"PRIx64"\n", vmsg->payload.u64);
 
@@ -1128,14 +1135,14 @@ vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
         dev->vq[index].call_fd = -1;
     }
 
-    dev->vq[index].call_fd = vmsg->fds[0];
+    dev->vq[index].call_fd = nofd ? -1 : vmsg->fds[0];
 
     /* in case of I/O hang after reconnecting */
-    if (eventfd_write(vmsg->fds[0], 1)) {
+    if (dev->vq[index].call_fd != -1 && eventfd_write(vmsg->fds[0], 1)) {
         return -1;
     }
 
-    DPRINT("Got call_fd: %d for vq: %d\n", vmsg->fds[0], index);
+    DPRINT("Got call_fd: %d for vq: %d\n", dev->vq[index].call_fd, index);
 
     return false;
 }
@@ -1144,6 +1151,7 @@ static bool
 vu_set_vring_err_exec(VuDev *dev, VhostUserMsg *vmsg)
 {
     int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+    bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
 
     DPRINT("u64: 0x%016"PRIx64"\n", vmsg->payload.u64);
 
@@ -1156,7 +1164,7 @@ vu_set_vring_err_exec(VuDev *dev, VhostUserMsg *vmsg)
         dev->vq[index].err_fd = -1;
     }
 
-    dev->vq[index].err_fd = vmsg->fds[0];
+    dev->vq[index].err_fd = nofd ? -1 : vmsg->fds[0];
 
     return false;
 }
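For reference, a small hedged example of how a frontend encodes the payload that this patch decodes. The mask values match the vhost-user protocol (low byte = vring index, bit 8 = "no fd attached", as used e.g. in polling mode); the helper function and the printed values are illustrative only, not part of the series.

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Payload layout of VHOST_USER_SET_VRING_{KICK,CALL,ERR} messages. */
    #define VHOST_USER_VRING_IDX_MASK   0xffu
    #define VHOST_USER_VRING_NOFD_MASK  (0x1u << 8)

    /* Illustrative helper: build the u64 a frontend would send. */
    static uint64_t vring_fd_payload(unsigned int index, bool has_fd)
    {
        uint64_t u64 = index & VHOST_USER_VRING_IDX_MASK;

        if (!has_fd) {
            /* No descriptor accompanies the message; the backend must use -1. */
            u64 |= VHOST_USER_VRING_NOFD_MASK;
        }
        return u64;
    }

    int main(void)
    {
        printf("vq 1, fd attached: 0x%" PRIx64 "\n", vring_fd_payload(1, true));   /* 0x1 */
        printf("vq 1, no fd:       0x%" PRIx64 "\n", vring_fd_payload(1, false));  /* 0x101 */
        return 0;
    }

With this patch applied, a backend built on libvhost-user then sees kick_fd/call_fd/err_fd as -1 for such a queue, which is exactly the case the nofd checks above handle.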