
[RFC,net-next] tcp: add support for read with offset when using MSG_PEEK

Message ID 20240111230057.305672-1-jmaloy@redhat.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series [RFC,net-next] tcp: add support for read with offset when using MSG_PEEK

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1094 this patch: 1094
netdev/cc_maintainers success CCed 0 of 0 maintainers
netdev/build_clang success Errors and warnings before: 1108 this patch: 1108
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1109 this patch: 1109
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 20 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Jon Maloy Jan. 11, 2024, 11 p.m. UTC
From: Jon Maloy <jmaloy@redhat.com>

When reading received messages from a socket with MSG_PEEK, we may want
to read the contents with an offset, like we can do with pread/preadv()
when reading files. Currently, it is not possible to do that.

In this commit, we allow the user to set iovec.iov_base in the first
vector entry to NULL. This tells the socket to skip the first entry,
hence letting the iov_len field of that entry indicate the offset value.
This way, there is no need to add any new arguments or flags.
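
For illustration, a minimal user-space sketch of how a caller might use
this convention (the helper name is just an example, error handling is
omitted, and fd is assumed to be a connected TCP socket):

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Sketch only: peek at up to 'len' bytes located 'offset' bytes into
 * the receive queue of TCP socket 'fd', using the NULL iov_base
 * convention introduced by this patch.
 */
static ssize_t peek_at_offset(int fd, void *buf, size_t len, size_t offset)
{
	struct iovec iov[2] = {
		{ .iov_base = NULL, .iov_len = offset },	/* skipped: carries the offset */
		{ .iov_base = buf,  .iov_len = len },		/* peeked data lands here */
	};
	struct msghdr msg = { .msg_iov = iov, .msg_iovlen = 2 };

	return recvmsg(fd, &msg, MSG_PEEK);
}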

In the iperf3 log examples shown below, we can observe a throughput
improvement of ~15 % in the direction host->namespace when using the
protocol splicer 'pasta' (https://passt.top).
This is a consistent result.

pasta(1) and passt(1) implement user-mode networking for network
namespaces (containers) and virtual machines by means of a translation
layer between Layer-2 network interface and native Layer-4 sockets
(TCP, UDP, ICMP/ICMPv6 echo).

Received TCP data bound for the container/guest is kept in kernel
buffers until it is acknowledged, so the tool routinely needs to fetch
new data from the socket, skipping over data that was already sent.

At the moment this is implemented using a dummy buffer passed to
recvmsg(). With this change, we don't need a dummy buffer and the
related buffer copy (copy_to_user()) anymore.
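
For context, the workaround being replaced looks roughly like the sketch
below (not the actual passt/pasta code; the scratch buffer and its size
are hypothetical, and the same includes as above are assumed):

/* Current workaround (sketch): copy 'offset' bytes into a scratch
 * buffer purely to step over data that was already forwarded.
 * Assumes offset <= sizeof(discard).
 */
static char discard[65536];

static ssize_t peek_at_offset_legacy(int fd, void *buf, size_t len, size_t offset)
{
	struct iovec iov[2] = {
		{ .iov_base = discard, .iov_len = offset },	/* copied, then ignored */
		{ .iov_base = buf,     .iov_len = len },	/* data we actually want */
	};
	struct msghdr msg = { .msg_iov = iov, .msg_iovlen = 2 };

	return recvmsg(fd, &msg, MSG_PEEK);
}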

passt and pasta are supported in KubeVirt and libvirt/qemu.

jmaloy@freyr:~/passt$ perf record -g ./pasta --config-net -f
MSG_PEEK with offset not supported by kernel.

jmaloy@freyr:~/passt# iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Accepted connection from 192.168.122.1, port 44822
[  5] local 192.168.122.180 port 5201 connected to 192.168.122.1 port 44832
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.02 GBytes  8.78 Gbits/sec
[  5]   1.00-2.00   sec  1.06 GBytes  9.08 Gbits/sec
[  5]   2.00-3.00   sec  1.07 GBytes  9.15 Gbits/sec
[  5]   3.00-4.00   sec  1.10 GBytes  9.46 Gbits/sec
[  5]   4.00-5.00   sec  1.03 GBytes  8.85 Gbits/sec
[  5]   5.00-6.00   sec  1.10 GBytes  9.44 Gbits/sec
[  5]   6.00-7.00   sec  1.11 GBytes  9.56 Gbits/sec
[  5]   7.00-8.00   sec  1.07 GBytes  9.20 Gbits/sec
[  5]   8.00-9.00   sec   667 MBytes  5.59 Gbits/sec
[  5]   9.00-10.00  sec  1.03 GBytes  8.83 Gbits/sec
[  5]  10.00-10.04  sec  30.1 MBytes  6.36 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.04  sec  10.3 GBytes  8.78 Gbits/sec   receiver
-----------------------------------------------------------
Server listening on 5201 (test #2)
-----------------------------------------------------------
^Ciperf3: interrupt - the server has terminated
jmaloy@freyr:~/passt#
logout
[ perf record: Woken up 23 times to write data ]
[ perf record: Captured and wrote 5.696 MB perf.data (35580 samples) ]
jmaloy@freyr:~/passt$

jmaloy@freyr:~/passt$ perf record -g ./pasta --config-net -f
MSG_PEEK with offset supported by kernel.

jmaloy@freyr:~/passt# iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Accepted connection from 192.168.122.1, port 40854
[  5] local 192.168.122.180 port 5201 connected to 192.168.122.1 port 40862
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.22 GBytes  10.5 Gbits/sec
[  5]   1.00-2.00   sec  1.19 GBytes  10.2 Gbits/sec
[  5]   2.00-3.00   sec  1.22 GBytes  10.5 Gbits/sec
[  5]   3.00-4.00   sec  1.11 GBytes  9.56 Gbits/sec
[  5]   4.00-5.00   sec  1.20 GBytes  10.3 Gbits/sec
[  5]   5.00-6.00   sec  1.14 GBytes  9.80 Gbits/sec
[  5]   6.00-7.00   sec  1.17 GBytes  10.0 Gbits/sec
[  5]   7.00-8.00   sec  1.12 GBytes  9.61 Gbits/sec
[  5]   8.00-9.00   sec  1.13 GBytes  9.74 Gbits/sec
[  5]   9.00-10.00  sec  1.26 GBytes  10.8 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.04  sec  11.8 GBytes  10.1 Gbits/sec   receiver
-----------------------------------------------------------
Server listening on 5201 (test #2)
-----------------------------------------------------------
^Ciperf3: interrupt - the server has terminated
logout
[ perf record: Woken up 20 times to write data ]
[ perf record: Captured and wrote 5.040 MB perf.data (33411 samples) ]
jmaloy@freyr:~/passt$

The perf record confirms this result. Below, we can observe that the
CPU spends significantly less time in the function ____sys_recvmsg()
when we have offset support.

Without offset support:
----------------------
jmaloy@freyr:~/passt$ perf report -q --symbol-filter=do_syscall_64 -p ____sys_recvmsg -x --stdio -i  perf.data | head -1
    46.32%     0.00%  passt.avx2  [kernel.vmlinux]  [k] do_syscall_64  ____sys_recvmsg

With offset support:
----------------------
jmaloy@freyr:~/passt$ perf report -q --symbol-filter=do_syscall_64 -p ____sys_recvmsg -x --stdio -i  perf.data | head -1
   27.24%     0.00%  passt.avx2  [kernel.vmlinux]  [k] do_syscall_64  ____sys_recvmsg

Reviewed-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Jon Maloy <jmaloy@redhat.com>
---
 net/ipv4/tcp.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

Comments

Paolo Abeni Jan. 16, 2024, 10:49 a.m. UTC | #1
On Thu, 2024-01-11 at 18:00 -0500, jmaloy@redhat.com wrote:
> From: Jon Maloy <jmaloy@redhat.com>
> 
> When reading received messages from a socket with MSG_PEEK, we may want
> to read the contents with an offset, like we can do with pread/preadv()
> when reading files. Currently, it is not possible to do that.
> 
> In this commit, we allow the user to set iovec.iov_base in the first
> vector entry to NULL. This tells the socket to skip the first entry,
> hence letting the iov_len field of that entry indicate the offset value.
> This way, there is no need to add any new arguments or flags.
> 
> In the iperf3 log examples shown below, we can observe a throughput
> improvement of ~15 % in the direction host->namespace when using the
> protocol splicer 'pasta' (https://passt.top).
> This is a consistent result.
> 
> pasta(1) and passt(1) implement user-mode networking for network
> namespaces (containers) and virtual machines by means of a translation
> layer between Layer-2 network interface and native Layer-4 sockets
> (TCP, UDP, ICMP/ICMPv6 echo).
> 
> Received TCP data bound for the container/guest is kept in kernel
> buffers until it is acknowledged, so the tool routinely needs to fetch
> new data from the socket, skipping over data that was already sent.
> 
> At the moment this is implemented using a dummy buffer passed to
> recvmsg(). With this change, we don't need a dummy buffer and the
> related buffer copy (copy_to_user()) anymore.
> 
> passt and pasta are supported in KubeVirt and libvirt/qemu.
> 
> jmaloy@freyr:~/passt$ perf record -g ./pasta --config-net -f
> MSG_PEEK with offset not supported by kernel.
> 
> jmaloy@freyr:~/passt# iperf3 -s
> -----------------------------------------------------------
> Server listening on 5201 (test #1)
> -----------------------------------------------------------
> Accepted connection from 192.168.122.1, port 44822
> [  5] local 192.168.122.180 port 5201 connected to 192.168.122.1 port 44832
> [ ID] Interval           Transfer     Bitrate
> [  5]   0.00-1.00   sec  1.02 GBytes  8.78 Gbits/sec
> [  5]   1.00-2.00   sec  1.06 GBytes  9.08 Gbits/sec
> [  5]   2.00-3.00   sec  1.07 GBytes  9.15 Gbits/sec
> [  5]   3.00-4.00   sec  1.10 GBytes  9.46 Gbits/sec
> [  5]   4.00-5.00   sec  1.03 GBytes  8.85 Gbits/sec
> [  5]   5.00-6.00   sec  1.10 GBytes  9.44 Gbits/sec
> [  5]   6.00-7.00   sec  1.11 GBytes  9.56 Gbits/sec
> [  5]   7.00-8.00   sec  1.07 GBytes  9.20 Gbits/sec
> [  5]   8.00-9.00   sec   667 MBytes  5.59 Gbits/sec
> [  5]   9.00-10.00  sec  1.03 GBytes  8.83 Gbits/sec
> [  5]  10.00-10.04  sec  30.1 MBytes  6.36 Gbits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval           Transfer     Bitrate
> [  5]   0.00-10.04  sec  10.3 GBytes  8.78 Gbits/sec   receiver
> -----------------------------------------------------------
> Server listening on 5201 (test #2)
> -----------------------------------------------------------
> ^Ciperf3: interrupt - the server has terminated
> jmaloy@freyr:~/passt#
> logout
> [ perf record: Woken up 23 times to write data ]
> [ perf record: Captured and wrote 5.696 MB perf.data (35580 samples) ]
> jmaloy@freyr:~/passt$
> 
> jmaloy@freyr:~/passt$ perf record -g ./pasta --config-net -f
> MSG_PEEK with offset supported by kernel.
> 
> jmaloy@freyr:~/passt# iperf3 -s
> -----------------------------------------------------------
> Server listening on 5201 (test #1)
> -----------------------------------------------------------
> Accepted connection from 192.168.122.1, port 40854
> [  5] local 192.168.122.180 port 5201 connected to 192.168.122.1 port 40862
> [ ID] Interval           Transfer     Bitrate
> [  5]   0.00-1.00   sec  1.22 GBytes  10.5 Gbits/sec
> [  5]   1.00-2.00   sec  1.19 GBytes  10.2 Gbits/sec
> [  5]   2.00-3.00   sec  1.22 GBytes  10.5 Gbits/sec
> [  5]   3.00-4.00   sec  1.11 GBytes  9.56 Gbits/sec
> [  5]   4.00-5.00   sec  1.20 GBytes  10.3 Gbits/sec
> [  5]   5.00-6.00   sec  1.14 GBytes  9.80 Gbits/sec
> [  5]   6.00-7.00   sec  1.17 GBytes  10.0 Gbits/sec
> [  5]   7.00-8.00   sec  1.12 GBytes  9.61 Gbits/sec
> [  5]   8.00-9.00   sec  1.13 GBytes  9.74 Gbits/sec
> [  5]   9.00-10.00  sec  1.26 GBytes  10.8 Gbits/sec
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval           Transfer     Bitrate
> [  5]   0.00-10.04  sec  11.8 GBytes  10.1 Gbits/sec   receiver
> -----------------------------------------------------------
> Server listening on 5201 (test #2)
> -----------------------------------------------------------
> ^Ciperf3: interrupt - the server has terminated
> logout
> [ perf record: Woken up 20 times to write data ]
> [ perf record: Captured and wrote 5.040 MB perf.data (33411 samples) ]
> jmaloy@freyr:~/passt$
> 
> The perf record confirms this result. Below, we can observe that the
> CPU spends significantly less time in the function ____sys_recvmsg()
> when we have offset support.
> 
> Without offset support:
> ----------------------
> jmaloy@freyr:~/passt$ perf report -q --symbol-filter=do_syscall_64 -p ____sys_recvmsg -x --stdio -i  perf.data | head -1
>     46.32%     0.00%  passt.avx2  [kernel.vmlinux]  [k] do_syscall_64  ____sys_recvmsg
> 
> With offset support:
> ----------------------
> jmaloy@freyr:~/passt$ perf report -q --symbol-filter=do_syscall_64 -p ____sys_recvmsg -x --stdio -i  perf.data | head -1
>    27.24%     0.00%  passt.avx2  [kernel.vmlinux]  [k] do_syscall_64  ____sys_recvmsg
> 
> Reviewed-by: Stefano Brivio <sbrivio@redhat.com>
> Signed-off-by: Jon Maloy <jmaloy@redhat.com>
> ---
>  net/ipv4/tcp.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index 1baa484d2190..82e1da3f0f98 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -2351,6 +2351,20 @@ static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len,
>  	if (flags & MSG_PEEK) {
>  		peek_seq = tp->copied_seq;
>  		seq = &peek_seq;
> +		if (!msg->msg_iter.__iov[0].iov_base) {
> +			size_t peek_offset;
> +
> +			if (msg->msg_iter.nr_segs < 2) {
> +				err = -EINVAL;
> +				goto out;
> +			}
> +			peek_offset = msg->msg_iter.__iov[0].iov_len;
> +			msg->msg_iter.__iov = &msg->msg_iter.__iov[1];
> +			msg->msg_iter.nr_segs -= 1;
> +			msg->msg_iter.count -= peek_offset;
> +			len -= peek_offset;
> +			*seq += peek_offset;
> +		}

IMHO this does not look like the correct interface to expose such
functionality. Doing the same with a different protocol should cause a
SIGSEGV or the like, right?

What about using/implementing SO_PEEK_OFF support instead? 

Cheers,

Paolo
Jon Maloy Jan. 18, 2024, 10:22 p.m. UTC | #2
On 2024-01-16 05:49, Paolo Abeni wrote:
> On Thu, 2024-01-11 at 18:00 -0500, jmaloy@redhat.com wrote:
>> From: Jon Maloy <jmaloy@redhat.com>
>>
>> When reading received messages from a socket with MSG_PEEK, we may want
>> to read the contents with an offset, like we can do with pread/preadv()
>> when reading files. Currently, it is not possible to do that.
[...]
>> +				err = -EINVAL;
>> +				goto out;
>> +			}
>> +			peek_offset = msg->msg_iter.__iov[0].iov_len;
>> +			msg->msg_iter.__iov = &msg->msg_iter.__iov[1];
>> +			msg->msg_iter.nr_segs -= 1;
>> +			msg->msg_iter.count -= peek_offset;
>> +			len -= peek_offset;
>> +			*seq += peek_offset;
>> +		}
> IMHO this does not look like the correct interface to expose such
> functionality. Doing the same with a different protocol should cause a
> SIGSEGV or the like, right?
I would expect doing the same thing with a different protocol to cause 
an EFAULT, as it should. But I haven't tried it.
This is a change to TCP only, at least until somebody decides to 
implement it elsewhere (why not?)
>
> What about using/implementing SO_PEEK_OFF support instead?
I looked at SO_PEEK_OFF, and it honestly looks both awkward and limited.
We would have to make frequent calls to setsockopt(), something that 
would defeat much of the purpose of this feature.
I stand by my opinion here.
This feature is simple, non-intrusive, totally backwards compatible and 
implies no changes to the API or ABI.

I would love to hear other opinions on this, though.

Regards
/jon


>
> Cheers,
>
> Paolo
>
Stefano Brivio Jan. 21, 2024, 10:16 p.m. UTC | #3
On Thu, 18 Jan 2024 17:22:52 -0500
Jon Maloy <jmaloy@redhat.com> wrote:

> On 2024-01-16 05:49, Paolo Abeni wrote:
> > On Thu, 2024-01-11 at 18:00 -0500, jmaloy@redhat.com wrote:  
> >> From: Jon Maloy <jmaloy@redhat.com>
> >>
> >> When reading received messages from a socket with MSG_PEEK, we may want
> >> to read the contents with an offset, like we can do with pread/preadv()
> >> when reading files. Currently, it is not possible to do that.  
> [...]
> >> +				err = -EINVAL;
> >> +				goto out;
> >> +			}
> >> +			peek_offset = msg->msg_iter.__iov[0].iov_len;
> >> +			msg->msg_iter.__iov = &msg->msg_iter.__iov[1];
> >> +			msg->msg_iter.nr_segs -= 1;
> >> +			msg->msg_iter.count -= peek_offset;
> >> +			len -= peek_offset;
> >> +			*seq += peek_offset;
> >> +		}  
> > IMHO this does not look like the correct interface to expose such
> > functionality. Doing the same with a different protocol should cause a
> > SIGSEGV or the like, right?
>
> I would expect doing the same thing with a different protocol to cause 
> an EFAULT, as it should. But I haven't tried it.

So, out of curiosity, I actually tried: the current behaviour is
recvmsg() failing with EFAULT, only as data is received (!), for TCP
and UDP with AF_INET, and for AF_UNIX (both datagram and stream).

EFAULT, however, is not in the list of "shall fail", nor "may fail"
conditions described by POSIX.1-2008, so there isn't really anything
that mandates it API-wise.

Likewise, POSIX doesn't require any signal to be delivered (and no
signals are delivered on Linux in any case: note that iov_base is not
dereferenced).

For TCP sockets only, passing a NULL buffer is already supported by
recv() with MSG_TRUNC (same here, Linux extension). This change would
finally make recvmsg() consistent with that TCP-specific bit.
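
(For reference, that existing extension boils down to something like
this sketch, which discards up to len bytes from a TCP receive queue
without any copy to user space:)

/* Linux extension, TCP only: the data is dropped instead of being
 * copied to a caller buffer; the return value is the number of bytes
 * discarded.
 */
ssize_t n = recv(fd, NULL, len, MSG_TRUNC);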

> This is a change to TCP only, at least until somebody decides to 
> implement it elsewhere (why not?)

Side note, I can't really think of a reasonable use case for UDP -- it
doesn't quite fit with the notion of message boundaries.

Even leaving aside the fact that passt(1) and pasta(1) don't need this
for UDP (no acknowledgement means no need to keep unacknowledged data
anywhere), if another application wants to do something conceptually
similar, we should probably target recvmmsg().

> > What about using/implementing SO_PEEK_OFF support instead?
>
> I looked at SO_PEEK_OFF, and it honestly looks both awkward and limited.

I think it's rather intended to skip headers with fixed size or
suchlike.

> We would have to make frequent calls to setsockopt(), something that 
> would defeat much of the purpose of this feature.

...right, we would need to reset the SO_PEEK_OFF value at every
recvmsg(), which is probably even worse than the current overhead.
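
(Sketch of what that per-read dance would look like, assuming TCP grew
SO_PEEK_OFF support in the first place -- today the option is only
implemented for AF_UNIX sockets:)

#include <sys/types.h>
#include <sys/socket.h>

/* Hypothetical helper: the kernel advances the stored peek offset by
 * the amount peeked on each call, so a splicer resuming from its own
 * bookkeeping would have to reset it before (almost) every read --
 * one extra syscall per peek.
 */
static ssize_t peek_at_offset_sockopt(int fd, void *buf, size_t len, int offset)
{
	if (setsockopt(fd, SOL_SOCKET, SO_PEEK_OFF, &offset, sizeof(offset)) < 0)
		return -1;

	return recv(fd, buf, len, MSG_PEEK);
}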

> I stand by my opinion here.
> This feature is simple, non-intrusive, totally backwards compatible and 
> implies no changes to the API or ABI.

My thoughts as well, plus the advantage for our user-mode networking
case is quite remarkable given how simple the change is.

> I would love to hear other opinions on this, though.
> 
> Regards
> /jon
> 
> >
> > Cheers,
> >
> > Paolo
Jon Maloy Jan. 22, 2024, 4:22 p.m. UTC | #4
On 2024-01-21 17:16, Stefano Brivio wrote:
> On Thu, 18 Jan 2024 17:22:52 -0500
> Jon Maloy <jmaloy@redhat.com> wrote:
>
>> On 2024-01-16 05:49, Paolo Abeni wrote:
>>> On Thu, 2024-01-11 at 18:00 -0500, jmaloy@redhat.com wrote:
>>>> From: Jon Maloy <jmaloy@redhat.com>
>>>>
>>>> When reading received messages from a socket with MSG_PEEK, we may want
>>>> to read the contents with an offset, like we can do with pread/preadv()
>>>> when reading files. Currently, it is not possible to do that.
>> [...]
>>>> +				err = -EINVAL;
>>>> +				goto out;
>>>> +			}
>>>> +			peek_offset = msg->msg_iter.__iov[0].iov_len;
>>>> +			msg->msg_iter.__iov = &msg->msg_iter.__iov[1];
>>>> +			msg->msg_iter.nr_segs -= 1;
>>>> +			msg->msg_iter.count -= peek_offset;
>>>> +			len -= peek_offset;
>>>> +			*seq += peek_offset;
>>>> +		}
>>> IMHO this does not look like the correct interface to expose such
>>> functionality. Doing the same with a different protocol should cause a
>>> SIGSEGV or the like, right?
>> I would expect doing the same thing with a different protocol to cause
>> an EFAULT, as it should. But I haven't tried it.
> So, out of curiosity, I actually tried: the current behaviour is
> recvmsg() failing with EFAULT, only as data is received (!), for TCP
> and UDP with AF_INET, and for AF_UNIX (both datagram and stream).
>
> EFAULT, however, is not in the list of "shall fail", nor "may fail"
> conditions described by POSIX.1-2008, so there isn't really anything
> that mandates it API-wise.
>
> Likewise, POSIX doesn't require any signal to be delivered (and no
> signals are delivered on Linux in any case: note that iov_base is not
> dereferenced).
>
> For TCP sockets only, passing a NULL buffer is already supported by
> recv() with MSG_TRUNC (same here, Linux extension). This change would
> finally make recvmsg() consistent with that TCP-specific bit.
>
>> This is a change to TCP only, at least until somebody decides to
>> implement it elsewhere (why not?)
> Side note, I can't really think of a reasonable use case for UDP -- it
> doesn't quite fit with the notion of message boundaries.
>
> Even leaving aside the fact that passt(1) and pasta(1) don't need this
> for UDP (no acknowledgement means no need to keep unacknowledged data
> anywhere), if another application wants to do something conceptually
> similar, we should probably target recvmmsg().
>
>>> What about using/implementing SO_PEEK_OFF support instead?
>> I looked at SO_PEEK_OFF, and it honestly looks both awkward and limited.
> I think it's rather intended to skip headers with fixed size or
> suchlike.
>
>> We would have to make frequent calls to setsockopt(), something that
>> would defeat much of the purpose of this feature.
> ...right, we would need to reset the SO_PEEK_OFF value at every
> recvmsg(), which is probably even worse than the current overhead.
>
>> I stand by my opinion here.
>> This feature is simple, non-intrusive, totally backwards compatible and
>> implies no changes to the API or ABI.
> My thoughts as well, plus the advantage for our user-mode networking
> case is quite remarkable given how simple the change is.

After pondering this some more, and after some internal team discussions,
I have decided to give SO_PEEK_OFF a try, just to see the outcome, both
at the kernel level and in user space.
So please hold off on any possible application of this, if that ever
happens with RFCs.

///jon
>
>> I would love to hear other opinions on this, though.
>>
>> Regards
>> /jon
>>
>>> Cheers,
>>>
>>> Paolo

Patch

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 1baa484d2190..82e1da3f0f98 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2351,6 +2351,20 @@  static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len,
 	if (flags & MSG_PEEK) {
 		peek_seq = tp->copied_seq;
 		seq = &peek_seq;
+		if (!msg->msg_iter.__iov[0].iov_base) {
+			size_t peek_offset;
+
+			if (msg->msg_iter.nr_segs < 2) {
+				err = -EINVAL;
+				goto out;
+			}
+			peek_offset = msg->msg_iter.__iov[0].iov_len;
+			msg->msg_iter.__iov = &msg->msg_iter.__iov[1];
+			msg->msg_iter.nr_segs -= 1;
+			msg->msg_iter.count -= peek_offset;
+			len -= peek_offset;
+			*seq += peek_offset;
+		}
 	}
 
 	target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);