
[bpf-next,2/3] tcp: Add the data length in skmsg to SIOCINQ ioctl

Message ID 1699962120-3390-3-git-send-email-yangpc@wangsu.com (mailing list archive)
State Superseded
Delegated to: BPF
Series skmsg: Add the data length in skmsg to SIOCINQ ioctl and rx_queue

Checks

Context Check Description
bpf/vmtest-bpf-next-PR fail PR summary
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1136 this patch: 1136
netdev/cc_maintainers warning 2 maintainers not CCed: dsahern@kernel.org pabeni@redhat.com
netdev/build_clang success Errors and warnings before: 1162 this patch: 1162
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1163 this patch: 1163
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 15 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Validate matrix.py
bpf/vmtest-bpf-next-VM_Test-3 success Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-8 success Logs for aarch64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-7 success Logs for aarch64-gcc / test (test_verifier, false, 360) / test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-4 success Logs for aarch64-gcc / test (test_maps, false, 360) / test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-5 fail Logs for aarch64-gcc / test (test_progs, false, 360) / test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 fail Logs for aarch64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-15 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-16 success Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-17 success Logs for x86_64-gcc / test (test_maps, false, 360) / test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 fail Logs for x86_64-gcc / test (test_progs, false, 360) / test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-19 fail Logs for x86_64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for x86_64-gcc / test (test_progs_no_alu32_parallel, true, 30) / test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for x86_64-gcc / test (test_progs_parallel, true, 30) / test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-22 success Logs for x86_64-gcc / test (test_verifier, false, 360) / test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for x86_64-llvm-16 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-24 success Logs for x86_64-llvm-16 / build / build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-27 fail Logs for x86_64-llvm-16 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-23 success Logs for x86_64-gcc / veristat / veristat on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-28 success Logs for x86_64-llvm-16 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-29 success Logs for x86_64-llvm-16 / veristat
bpf/vmtest-bpf-next-VM_Test-26 fail Logs for x86_64-llvm-16 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-13 success Logs for s390x-gcc / test (test_verifier, false, 360) / test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-11 fail Logs for s390x-gcc / test (test_progs, false, 360) / test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-12 fail Logs for s390x-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for s390x-gcc / test (test_maps, false, 360) / test_maps on s390x with gcc

Commit Message

Pengcheng Yang Nov. 14, 2023, 11:41 a.m. UTC
The SIOCINQ ioctl returns the number of unread bytes in the receive
queue, but it does not include the ingress_msg queue. With sk_msg
redirect, an application may get a value of 0 if it calls the
SIOCINQ ioctl before recv() to determine the readable size.
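
For example, an application may check the readable size before
recv() roughly as follows (illustrative userspace sketch; error
handling omitted; on sockets SIOCINQ is an alias for FIONREAD):

	#include <poll.h>
	#include <sys/ioctl.h>
	#include <linux/sockios.h>	/* SIOCINQ */

	static int readable_bytes(int fd)
	{
		struct pollfd pfd = { .fd = fd, .events = POLLIN };
		int avail = 0;

		/* Wait until the socket becomes readable. */
		poll(&pfd, 1, -1);

		/* Without this patch, data queued only on
		 * psock->ingress_msg is not counted, so this may
		 * report 0 even though recv() would return data.
		 */
		ioctl(fd, SIOCINQ, &avail);
		return avail;
	}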

Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
---
 net/ipv4/tcp.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Comments

John Fastabend Nov. 15, 2023, 7:20 a.m. UTC | #1
Pengcheng Yang wrote:
> The SIOCINQ ioctl returns the number of unread bytes in the receive
> queue, but it does not include the ingress_msg queue. With sk_msg
> redirect, an application may get a value of 0 if it calls the
> SIOCINQ ioctl before recv() to determine the readable size.
> 
> Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
> ---
>  net/ipv4/tcp.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index 3d3a24f79573..04da0684c397 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -267,6 +267,7 @@
>  #include <linux/errqueue.h>
>  #include <linux/static_key.h>
>  #include <linux/btf.h>
> +#include <linux/skmsg.h>
>  
>  #include <net/icmp.h>
>  #include <net/inet_common.h>
> @@ -613,7 +614,7 @@ int tcp_ioctl(struct sock *sk, int cmd, int *karg)
>  			return -EINVAL;
>  
>  		slow = lock_sock_fast(sk);
> -		answ = tcp_inq(sk);
> +		answ = tcp_inq(sk) + sk_msg_queue_len(sk);

This will break the SK_PASS case, I believe. Here we do
not update copied_seq until data is actually copied into user
space. This also ensures that tcp_epollin_ready and tcp_inq
work correctly. The fix is relatively recent.

 commit e5c6de5fa025882babf89cecbed80acf49b987fa
 Author: John Fastabend <john.fastabend@gmail.com>
 Date:   Mon May 22 19:56:12 2023 -0700

    bpf, sockmap: Incorrectly handling copied_seq

The previous patch increments the msg_len for all cases, even
the SK_PASS case, so you will get double counting.
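
For context, tcp_inq() boils down to rcv_nxt - copied_seq
(simplified sketch; the real helper in include/net/tcp.h also
handles SYN_SENT/SYN_RECV, urgent data, and SOCK_DONE):

	static int tcp_inq_simplified(struct sock *sk)
	{
		const struct tcp_sock *tp = tcp_sk(sk);

		/* Bytes the stack has received minus bytes already
		 * copied to user space. SK_PASS data queued on
		 * ingress_msg has not advanced copied_seq, so this
		 * difference already counts it; adding
		 * sk_msg_queue_len() on top double counts.
		 */
		return READ_ONCE(tp->rcv_nxt) - READ_ONCE(tp->copied_seq);
	}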

I was starting to poke around at how to fix the other cases,
e.g. when the stream parser is in use with redirects, but I
haven't got to it yet. By the way, I think even with this patch
epollin_ready is likely still not correct. We observe this as
either failing to wake up an application or waking it up too
early when using the stream parser.

The other thing to consider: skbs redirected into another
socket and then read off the list increment copied_seq even
though they shouldn't, because they came from another sock. The
result would be that tcp_inq is incorrect, perhaps even
negative?

What does your test setup look like? A simple redirect between
two TCP sockets? With or without the stream parser? My guess is
we need to fix the underlying copied_seq issues related to the
redirect and stream parser cases. I believe the fix is: only
increment copied_seq for data that was put on the ingress_queue
from SK_PASS. Then update the previous patch to only increment
sk_msg_queue_len() for redirect paths. This patch, plus a fix
to tcp_epollin_ready, would then resolve most of the issues.
It's a bit unfortunate to leak sk_msg_queue_len() into
tcp_ioctl and tcp_epollin, but I don't have a cleaner idea
right now.

>  		unlock_sock_fast(sk, slow);
>  		break;
>  	case SIOCATMARK:
> -- 
> 2.38.1
>
Pengcheng Yang Nov. 15, 2023, 11:45 a.m. UTC | #2
John Fastabend <john.fastabend@gmail.com> wrote:
> Pengcheng Yang wrote:
> > The SIOCINQ ioctl returns the number of unread bytes in the receive
> > queue, but it does not include the ingress_msg queue. With sk_msg
> > redirect, an application may get a value of 0 if it calls the
> > SIOCINQ ioctl before recv() to determine the readable size.
> >
> > Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
> 
> This will break the SK_PASS case, I believe. Here we do
> not update copied_seq until data is actually copied into user
> space. This also ensures that tcp_epollin_ready and tcp_inq
> work correctly. The fix is relatively recent.
> 
>  commit e5c6de5fa025882babf89cecbed80acf49b987fa
>  Author: John Fastabend <john.fastabend@gmail.com>
>  Date:   Mon May 22 19:56:12 2023 -0700
> 
>     bpf, sockmap: Incorrectly handling copied_seq
> 
> The previous patch increments the msg_len for all cases, even
> the SK_PASS case, so you will get double counting.

You are right, I missed the SK_PASS case of the skb stream verdict.

> 
> I was starting to poke around at how to fix the other cases,
> e.g. when the stream parser is in use with redirects, but I
> haven't got to it yet. By the way, I think even with this patch
> epollin_ready is likely still not correct. We observe this as
> either failing to wake up an application or waking it up too
> early when using the stream parser.
> 
> The other thing to consider: skbs redirected into another
> socket and then read off the list increment copied_seq even
> though they shouldn't, because they came from another sock. The
> result would be that tcp_inq is incorrect, perhaps even
> negative?
> 
> What does your test setup look like? A simple redirect between
> two TCP sockets? With or without the stream parser? My guess is
> we need to fix the underlying copied_seq issues related to the
> redirect and stream parser cases. I believe the fix is: only
> increment copied_seq for data that was put on the ingress_queue
> from SK_PASS. Then update the previous patch to only increment
> sk_msg_queue_len() for redirect paths. This patch, plus a fix
> to tcp_epollin_ready, would then resolve most of the issues.
> It's a bit unfortunate to leak sk_msg_queue_len() into
> tcp_ioctl and tcp_epollin, but I don't have a cleaner idea
> right now.
> 

What I tested was using msg_verdict to redirect between two sockets
without the stream parser. The problem I encountered is that the msg
has been queued in psock->ingress_msg and the application has been
woken up by epoll (because of sk_psock_data_ready), but
ioctl(FIONREAD) returns 0.
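
For reference, the setup is roughly the following sk_msg program
attached to a sockmap (illustrative sketch only; the map layout and
key are made up, not my exact test code):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	struct {
		__uint(type, BPF_MAP_TYPE_SOCKMAP);
		__uint(max_entries, 2);
		__type(key, __u32);
		__type(value, __u64);
	} sock_map SEC(".maps");

	SEC("sk_msg")
	int msg_verdict(struct sk_msg_md *msg)
	{
		/* Redirect every msg to the peer socket's ingress
		 * queue; the data then lands on psock->ingress_msg of
		 * the receiving socket without ever touching its
		 * receive_queue.
		 */
		return bpf_msg_redirect_map(msg, &sock_map, 0, BPF_F_INGRESS);
	}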

The key is that rcv_nxt is not updated on ingress redirect. Or do we
only need to update rcv_nxt on ingress redirect, such as in
bpf_tcp_ingress() and sk_psock_skb_ingress_enqueue()?
John Fastabend Nov. 17, 2023, 1:32 a.m. UTC | #3
Pengcheng Yang wrote:
> John Fastabend <john.fastabend@gmail.com> wrote:
> > Pengcheng Yang wrote:
> > > The SIOCINQ ioctl returns the number of unread bytes in the receive
> > > queue, but it does not include the ingress_msg queue. With sk_msg
> > > redirect, an application may get a value of 0 if it calls the
> > > SIOCINQ ioctl before recv() to determine the readable size.
> > >
> > > Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
> > 
> > This will break the SK_PASS case, I believe. Here we do
> > not update copied_seq until data is actually copied into user
> > space. This also ensures that tcp_epollin_ready and tcp_inq
> > work correctly. The fix is relatively recent.
> > 
> >  commit e5c6de5fa025882babf89cecbed80acf49b987fa
> >  Author: John Fastabend <john.fastabend@gmail.com>
> >  Date:   Mon May 22 19:56:12 2023 -0700
> > 
> >     bpf, sockmap: Incorrectly handling copied_seq
> > 
> > The previous patch increments the msg_len for all cases, even
> > the SK_PASS case, so you will get double counting.
> 
> You are right, I missed the SK_PASS case of the skb stream verdict.
> 
> > 
> > I was starting to poke around at how to fix the other cases,
> > e.g. when the stream parser is in use with redirects, but I
> > haven't got to it yet. By the way, I think even with this patch
> > epollin_ready is likely still not correct. We observe this as
> > either failing to wake up an application or waking it up too
> > early when using the stream parser.
> > 
> > The other thing to consider: skbs redirected into another
> > socket and then read off the list increment copied_seq even
> > though they shouldn't, because they came from another sock. The
> > result would be that tcp_inq is incorrect, perhaps even
> > negative?
> > 
> > What does your test setup look like? A simple redirect between
> > two TCP sockets? With or without the stream parser? My guess is
> > we need to fix the underlying copied_seq issues related to the
> > redirect and stream parser cases. I believe the fix is: only
> > increment copied_seq for data that was put on the ingress_queue
> > from SK_PASS. Then update the previous patch to only increment
> > sk_msg_queue_len() for redirect paths. This patch, plus a fix
> > to tcp_epollin_ready, would then resolve most of the issues.
> > It's a bit unfortunate to leak sk_msg_queue_len() into
> > tcp_ioctl and tcp_epollin, but I don't have a cleaner idea
> > right now.
> > 
> 
> What I tested was using msg_verdict to redirect between two sockets
> without the stream parser. The problem I encountered is that the msg
> has been queued in psock->ingress_msg and the application has been
> woken up by epoll (because of sk_psock_data_ready), but
> ioctl(FIONREAD) returns 0.

Yep, makes sense.

> 
> The key is that rcv_nxt is not updated on ingress redirect. Or do we
> only need to update rcv_nxt on ingress redirect, such as in
> bpf_tcp_ingress() and sk_psock_skb_ingress_enqueue()?
> 

I think it's likely best not to touch rcv_nxt. 'rcv_nxt' is used in
the TCP stack to calculate lots of things. If you just bump it and
then ever received an actual TCP pkt, you would get some really odd
behavior, because seq numbers and rcv_nxt would then be unrelated.

The approach you have is really the best bet IMO, but mask out the
msg_len increment where it's not needed. Then it should be OK.

Mixing ingress redirect and TCP pkt send/recv doesn't usually work
very well anyway, but I still think leaving rcv_nxt alone is best.
Pengcheng Yang Nov. 17, 2023, 10:59 a.m. UTC | #4
John Fastabend <john.fastabend@gmail.com> wrote:
> Pengcheng Yang wrote:
> > John Fastabend <john.fastabend@gmail.com> wrote:
> > > Pengcheng Yang wrote:
> > > > The SIOCINQ ioctl returns the number of unread bytes in the receive
> > > > queue, but it does not include the ingress_msg queue. With sk_msg
> > > > redirect, an application may get a value of 0 if it calls the
> > > > SIOCINQ ioctl before recv() to determine the readable size.
> > > >
> > > > Signed-off-by: Pengcheng Yang <yangpc@wangsu.com>
> > >
> > > This will break the SK_PASS case, I believe. Here we do
> > > not update copied_seq until data is actually copied into user
> > > space. This also ensures that tcp_epollin_ready and tcp_inq
> > > work correctly. The fix is relatively recent.
> > >
> > >  commit e5c6de5fa025882babf89cecbed80acf49b987fa
> > >  Author: John Fastabend <john.fastabend@gmail.com>
> > >  Date:   Mon May 22 19:56:12 2023 -0700
> > >
> > >     bpf, sockmap: Incorrectly handling copied_seq
> > >
> > > The previous patch increments the msg_len for all cases, even
> > > the SK_PASS case, so you will get double counting.
> >
> > You are right, I missed the SK_PASS case of the skb stream verdict.
> >
> > >
> > > I was starting to poke around at how to fix the other cases,
> > > e.g. when the stream parser is in use with redirects, but I
> > > haven't got to it yet. By the way, I think even with this patch
> > > epollin_ready is likely still not correct. We observe this as
> > > either failing to wake up an application or waking it up too
> > > early when using the stream parser.
> > >
> > > The other thing to consider: skbs redirected into another
> > > socket and then read off the list increment copied_seq even
> > > though they shouldn't, because they came from another sock. The
> > > result would be that tcp_inq is incorrect, perhaps even
> > > negative?
> > >
> > > What does your test setup look like? A simple redirect between
> > > two TCP sockets? With or without the stream parser? My guess is
> > > we need to fix the underlying copied_seq issues related to the
> > > redirect and stream parser cases. I believe the fix is: only
> > > increment copied_seq for data that was put on the ingress_queue
> > > from SK_PASS. Then update the previous patch to only increment
> > > sk_msg_queue_len() for redirect paths. This patch, plus a fix
> > > to tcp_epollin_ready, would then resolve most of the issues.
> > > It's a bit unfortunate to leak sk_msg_queue_len() into
> > > tcp_ioctl and tcp_epollin, but I don't have a cleaner idea
> > > right now.
> > >
> >
> > What I tested was using msg_verdict to redirect between two sockets
> > without the stream parser. The problem I encountered is that the msg
> > has been queued in psock->ingress_msg and the application has been
> > woken up by epoll (because of sk_psock_data_ready), but
> > ioctl(FIONREAD) returns 0.
> 
> Yep, makes sense.
> 
> >
> > The key is that rcv_nxt is not updated on ingress redirect. Or do we
> > only need to update rcv_nxt on ingress redirect, such as in
> > bpf_tcp_ingress() and sk_psock_skb_ingress_enqueue()?
> >
> 
> I think it's likely best not to touch rcv_nxt. 'rcv_nxt' is used in
> the TCP stack to calculate lots of things. If you just bump it and
> then ever received an actual TCP pkt, you would get some really odd
> behavior, because seq numbers and rcv_nxt would then be unrelated.
> 
> The approach you have is really the best bet IMO, but mask out the
> msg_len increment where it's not needed. Then it should be OK.
> 

I think we can add a flag to the msg to identify whether it comes
from the same sock's receive_queue. In this way, we can increase
and decrease the msg_len based on this flag when the msg is queued
to ingress_msg and when it is read by the application.
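
A rough sketch of the idea (hypothetical only; the ingress_self
flag and the msg_len counter are made-up names for illustration,
not actual patch code):

	/* When queueing to psock->ingress_msg: count only data that
	 * did not come from this socket's own receive_queue, since
	 * tcp_inq() already covers the SK_PASS case via
	 * rcv_nxt - copied_seq.
	 */
	static void sk_psock_msg_enqueue(struct sk_psock *psock,
					 struct sk_msg *msg)
	{
		if (!msg->ingress_self)
			atomic_add(msg->sg.size, &psock->msg_len);
		list_add_tail(&msg->list, &psock->ingress_msg);
	}

	/* On recv: advance copied_seq only for data from our own
	 * receive_queue; otherwise decrement the counter instead.
	 */
	static void sk_psock_msg_consumed(struct sock *sk,
					  struct sk_psock *psock,
					  struct sk_msg *msg, int copied)
	{
		if (msg->ingress_self)
			WRITE_ONCE(tcp_sk(sk)->copied_seq,
				   tcp_sk(sk)->copied_seq + copied);
		else
			atomic_sub(copied, &psock->msg_len);
	}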

And this can also fix the case you mentioned above:

	"The other thing to consider: skbs redirected into another
	socket and then read off the list increment copied_seq even
	though they shouldn't, because they came from another sock. The
	result would be that tcp_inq is incorrect, perhaps even
	negative?"

During recv in tcp_bpf_recvmsg_parser(), we only need to increment
copied_seq when the msg comes from the same sock's receive_queue;
otherwise copied_seq could run ahead of rcv_nxt.

> Mixing ingress redirect and TCP pkt send/recv doesn't usually work
> very well anyway, but I still think leaving rcv_nxt alone is best.

Patch

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 3d3a24f79573..04da0684c397 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -267,6 +267,7 @@ 
 #include <linux/errqueue.h>
 #include <linux/static_key.h>
 #include <linux/btf.h>
+#include <linux/skmsg.h>
 
 #include <net/icmp.h>
 #include <net/inet_common.h>
@@ -613,7 +614,7 @@ int tcp_ioctl(struct sock *sk, int cmd, int *karg)
 			return -EINVAL;
 
 		slow = lock_sock_fast(sk);
-		answ = tcp_inq(sk);
+		answ = tcp_inq(sk) + sk_msg_queue_len(sk);
 		unlock_sock_fast(sk, slow);
 		break;
 	case SIOCATMARK: