Message ID | 20210331023237.41094-12-xiyou.wangcong@gmail.com (mailing list archive)
---|---
State | Accepted
Delegated to | BPF
Series | sockmap: introduce BPF_SK_SKB_VERDICT and support UDP
Context | Check | Description |
---|---|---|
netdev/cover_letter | success | Link |
netdev/fixes_present | success | Link |
netdev/patch_count | fail | Series longer than 15 patches |
netdev/tree_selection | success | Clearly marked for bpf-next |
netdev/subject_prefix | success | Link |
netdev/cc_maintainers | warning | 10 maintainers not CCed: dsahern@kernel.org yhs@fb.com kpsingh@kernel.org yoshfuji@linux-ipv6.org andrii@kernel.org kafai@fb.com ast@kernel.org songliubraving@fb.com davem@davemloft.net kuba@kernel.org |
netdev/source_inline | success | Was 0 now: 0 |
netdev/verify_signedoff | success | Link |
netdev/module_param | success | Was 0 now: 0 |
netdev/build_32bit | success | Errors and warnings before: 185 this patch: 185 |
netdev/kdoc | success | Errors and warnings before: 0 this patch: 0 |
netdev/verify_fixes | success | Link |
netdev/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 57 lines checked |
netdev/build_allmodconfig_warn | success | Errors and warnings before: 293 this patch: 293 |
netdev/header_inline | success | Link |
Cong Wang wrote:
> From: Cong Wang <cong.wang@bytedance.com>
>
> This is similar to tcp_read_sock(), except we do not need
> to worry about connections, we just need to retrieve skb
> from UDP receive queue.
>
> Note, the return value of ->read_sock() is unused in
> sk_psock_verdict_data_ready(), and UDP still does not
> support splice() due to lack of ->splice_read(), so users
> can not reach udp_read_sock() directly.
>
> Cc: John Fastabend <john.fastabend@gmail.com>
> Cc: Daniel Borkmann <daniel@iogearbox.net>
> Cc: Jakub Sitnicki <jakub@cloudflare.com>
> Cc: Lorenz Bauer <lmb@cloudflare.com>
> Signed-off-by: Cong Wang <cong.wang@bytedance.com>
> ---

Thanks, this is easier to read IMO. One nit below.

Acked-by: John Fastabend <john.fastabend@gmail.com>

[...]

> +int udp_read_sock(struct sock *sk, read_descriptor_t *desc,
> +		  sk_read_actor_t recv_actor)
> +{
> +	int copied = 0;
> +
> +	while (1) {
> +		struct sk_buff *skb;
> +		int err, used;
> +
> +		skb = skb_recv_udp(sk, 0, 1, &err);
> +		if (!skb)
> +			return err;
> +		used = recv_actor(desc, skb, 0, skb->len);
> +		if (used <= 0) {
> +			if (!copied)
> +				copied = used;
> +			break;
> +		} else if (used <= skb->len) {
> +			copied += used;
> +		}

This 'else if' is always true if the above is false, right? It would be
simpler and clearer IMO as:

	if (used <= 0) {
		if (!copied)
			copied = used;
		break;
	}
	copied += used;

I don't see any way for used to be greater than skb->len.

> +
> +		if (!desc->count)
> +			break;
> +	}
> +
> +	return copied;
> +}
> +EXPORT_SYMBOL(udp_read_sock);
> +
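For reference, with John's suggested simplification folded in, the whole function would read as below. This is only a sketch of the proposed follow-up change, not the code merged by this patch:

int udp_read_sock(struct sock *sk, read_descriptor_t *desc,
		  sk_read_actor_t recv_actor)
{
	int copied = 0;

	while (1) {
		struct sk_buff *skb;
		int err, used;

		skb = skb_recv_udp(sk, 0, 1, &err);
		if (!skb)
			return err;

		used = recv_actor(desc, skb, 0, skb->len);
		if (used <= 0) {
			/* Nothing consumed: report the actor's error (or 0)
			 * unless an earlier iteration already copied data. */
			if (!copied)
				copied = used;
			break;
		}
		/* As John notes, the actor never reports consuming more than
		 * the skb->len it was offered, so the extra
		 * "used <= skb->len" bound check is dead code. */
		copied += used;

		if (!desc->count)
			break;
	}

	return copied;
}

Functionally nothing changes; the simplification only drops a condition that can never be false once used > 0.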
On Wed, Mar 31, 2021 at 11:01 PM John Fastabend
<john.fastabend@gmail.com> wrote:
> This 'else if' is always true if the above is false, right? It would be
> simpler and clearer IMO as:
>
>	if (used <= 0) {
>		if (!copied)
>			copied = used;
>		break;
>	}
>	copied += used;
>
> I don't see any way for used to be greater than skb->len.

Yes, slightly better. Please feel free to submit a patch yourself;
as always, your patches are welcome. Please also remember to submit a
patch to address the name TCP_ESTABLISHED, or literally any other code
you feel uncomfortable with. I am actually comfortable with these names
as they are, hence I am not motivated to change them.

BTW, please try to group your review comments into one round; it is a
complete waste of time to address them one at a time with each update.
On my side, I need to adjust the cover letter, rebase the whole
patchset, and manually add your Acks. On your side, you have to read
this again and again. For everyone else, they just see more than a
dozen patches flooding the mailing list again and again. In the end,
everyone's time is wasted. This can be avoided if you group as many
review comments as possible together; I certainly do not mind waiting
longer to get more of them in one round.

And please do not give an Ack unless you are comfortable with the whole
patchset, because otherwise I have to add it manually. It is not too
late to give a single Ack to the whole patchset once you are
comfortable with everything. This would save some traffic on the
mailing list too.

Thanks!
On Fri, Apr 2, 2021 at 10:12 PM Cong Wang <xiyou.wangcong@gmail.com> wrote:
>
> On Wed, Mar 31, 2021 at 11:01 PM John Fastabend
> <john.fastabend@gmail.com> wrote:
> > This 'else if' is always true if the above is false, right? It would be
> > simpler and clearer IMO as:
> >
> >	if (used <= 0) {
> >		if (!copied)
> >			copied = used;
> >		break;
> >	}
> >	copied += used;
> >
> > I don't see any way for used to be greater than skb->len.
>
> Yes, slightly better. Please feel free to submit a patch yourself;
> as always, your patches are welcome.

Please submit a follow-up patch as John requested or I'm reverting your set.
diff --git a/include/net/udp.h b/include/net/udp.h
index df7cc1edc200..347b62a753c3 100644
--- a/include/net/udp.h
+++ b/include/net/udp.h
@@ -329,6 +329,8 @@ struct sock *__udp6_lib_lookup(struct net *net,
 			       struct sk_buff *skb);
 struct sock *udp6_lib_lookup_skb(const struct sk_buff *skb,
 				 __be16 sport, __be16 dport);
+int udp_read_sock(struct sock *sk, read_descriptor_t *desc,
+		  sk_read_actor_t recv_actor);
 
 /* UDP uses skb->dev_scratch to cache as much information as possible and avoid
  * possibly multiple cache miss on dequeue()
diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
index 1355e6c0d567..f17870ee558b 100644
--- a/net/ipv4/af_inet.c
+++ b/net/ipv4/af_inet.c
@@ -1070,6 +1070,7 @@ const struct proto_ops inet_dgram_ops = {
 	.setsockopt	   = sock_common_setsockopt,
 	.getsockopt	   = sock_common_getsockopt,
 	.sendmsg	   = inet_sendmsg,
+	.read_sock	   = udp_read_sock,
 	.recvmsg	   = inet_recvmsg,
 	.mmap		   = sock_no_mmap,
 	.sendpage	   = inet_sendpage,
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 38952aaee3a1..4d02f6839e38 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1782,6 +1782,35 @@ struct sk_buff *__skb_recv_udp(struct sock *sk, unsigned int flags,
 }
 EXPORT_SYMBOL(__skb_recv_udp);
 
+int udp_read_sock(struct sock *sk, read_descriptor_t *desc,
+		  sk_read_actor_t recv_actor)
+{
+	int copied = 0;
+
+	while (1) {
+		struct sk_buff *skb;
+		int err, used;
+
+		skb = skb_recv_udp(sk, 0, 1, &err);
+		if (!skb)
+			return err;
+		used = recv_actor(desc, skb, 0, skb->len);
+		if (used <= 0) {
+			if (!copied)
+				copied = used;
+			break;
+		} else if (used <= skb->len) {
+			copied += used;
+		}
+
+		if (!desc->count)
+			break;
+	}
+
+	return copied;
+}
+EXPORT_SYMBOL(udp_read_sock);
+
 /*
  *	This should be easy, if there is something there we
  *	return it, otherwise we block.
diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
index 802f5111805a..71de739b4a9e 100644
--- a/net/ipv6/af_inet6.c
+++ b/net/ipv6/af_inet6.c
@@ -714,6 +714,7 @@ const struct proto_ops inet6_dgram_ops = {
 	.getsockopt	   = sock_common_getsockopt,	/* ok		*/
 	.sendmsg	   = inet6_sendmsg,		/* retpoline's sake */
 	.recvmsg	   = inet6_recvmsg,		/* retpoline's sake */
+	.read_sock	   = udp_read_sock,
 	.mmap		   = sock_no_mmap,
 	.sendpage	   = sock_no_sendpage,
 	.set_peek_off	   = sk_set_peek_off,
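For context on how this new ->read_sock implementation gets invoked: per the commit message, the sockmap verdict path calls it from sk_psock_verdict_data_ready() and ignores the return value. Below is a rough, hypothetical sketch of such a caller together with a stand-in recv_actor; the names example_verdict_data_ready() and example_recv_actor(), the descriptor setup, and the skb-ownership assumption are illustrative guesses, not code quoted from the kernel:

#include <linux/skbuff.h>
#include <net/sock.h>

/* Hypothetical stand-in for the sockmap verdict actor. The key property is
 * the sk_read_actor_t contract: the return value is the number of bytes
 * consumed, never more than the len the caller passed in. Ownership of the
 * skb is assumed to pass to the actor here, since udp_read_sock() itself
 * never frees it.
 */
static int example_recv_actor(read_descriptor_t *desc, struct sk_buff *skb,
			      unsigned int offset, size_t len)
{
	/* A real actor would run the verdict program and redirect or drop
	 * the skb; here we simply consume it whole. */
	consume_skb(skb);
	return len;
}

static void example_verdict_data_ready(struct sock *sk)
{
	struct socket *sock = sk->sk_socket;
	read_descriptor_t desc;

	if (unlikely(!sock || !sock->ops || !sock->ops->read_sock))
		return;

	desc.arg.data = sk;
	desc.error = 0;
	/* udp_read_sock() keeps dequeuing skbs until the receive queue is
	 * empty or the actor drops desc.count to 0. */
	desc.count = 1;

	/* The return value is ignored here, which is why the commit message
	 * notes that udp_read_sock()'s return value is unused on this path. */
	sock->ops->read_sock(sk, &desc, example_recv_actor);
}

The design point is that everything UDP-specific stays inside udp_read_sock(); a caller like the one sketched above only depends on the generic proto_ops->read_sock() interface, which is what allows the same verdict path to serve both TCP and UDP sockets.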