| Message ID | 7e16d521-7c8a-3ac7-497a-04e69fee1afe@kernel.dk (mailing list archive) |
|---|---|
| State | New |
| Series | [v2] io_uring/net: disable partial retries for recvmsg with cmsg |
On 20.06.23 15:19, Jens Axboe wrote:
> We cannot sanely handle partial retries for recvmsg if we have cmsg
> attached. If we don't, then we'd just be overwriting the initial cmsg
> header on retries. Alternatively we could increment and handle this
> appropriately, but it doesn't seem worth the complication.
>
> Move the MSG_WAITALL check into the non-multishot case while at it,
> since MSG_WAITALL is explicitly disabled for multishot anyway.
>
> Link: https://lore.kernel.org/io-uring/0b0d4411-c8fd-4272-770b-e030af6919a0@kernel.dk/
> Cc: stable@vger.kernel.org # 5.10+
> Reported-by: Stefan Metzmacher <metze@samba.org>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>

Also Reviewed-by: Stefan Metzmacher <metze@samba.org>

> ---
>
> v2: correct msg_controllen check and move into non-mshot branch
>
> io_uring/net.c | 11 +++++++----
> 1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/io_uring/net.c b/io_uring/net.c
> index c0924ab1ea11..2bc2cb2f4d6c 100644
> --- a/io_uring/net.c
> +++ b/io_uring/net.c
> @@ -789,16 +789,19 @@ int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
> 	flags = sr->msg_flags;
> 	if (force_nonblock)
> 		flags |= MSG_DONTWAIT;
> -	if (flags & MSG_WAITALL)
> -		min_ret = iov_iter_count(&kmsg->msg.msg_iter);
>
> 	kmsg->msg.msg_get_inq = 1;
> -	if (req->flags & REQ_F_APOLL_MULTISHOT)
> +	if (req->flags & REQ_F_APOLL_MULTISHOT) {
> 		ret = io_recvmsg_multishot(sock, sr, kmsg, flags,
> 					   &mshot_finished);
> -	else
> +	} else {
> +		/* disable partial retry for recvmsg with cmsg attached */
> +		if (flags & MSG_WAITALL && !kmsg->msg.msg_controllen)
> +			min_ret = iov_iter_count(&kmsg->msg.msg_iter);
> +
> 		ret = __sys_recvmsg_sock(sock, &kmsg->msg, sr->umsg,
> 					 kmsg->uaddr, flags);
> +	}
>
> 	if (ret < min_ret) {
> 		if (ret == -EAGAIN && force_nonblock) {