Message ID | 20240328144032.1864988-3-edumazet@google.com (mailing list archive) |
---|---|
State | Accepted |
Commit | 6a1f12dd85a8b24f871dfcf467378660af9c064d |
Delegated to: | Netdev Maintainers |
Series | udp: small changes on receive path |
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index f2736e8958187e132ef45d8e25ab2b4ea7bcbc3d..d2fa9755727ce034c2b4bca82bd9e72130d588e6 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1516,12 +1516,7 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
 	size = skb->truesize;
 	udp_set_dev_scratch(skb);
 
-	/* we drop only if the receive buf is full and the receive
-	 * queue contains some other skb
-	 */
-	rmem = atomic_add_return(size, &sk->sk_rmem_alloc);
-	if (rmem > (size + (unsigned int)sk->sk_rcvbuf))
-		goto uncharge_drop;
+	atomic_add(size, &sk->sk_rmem_alloc);
 
 	spin_lock(&list->lock);
 	err = udp_rmem_schedule(sk, size);
atomic_add_return() is more expensive than atomic_add() and seems
overkill in UDP rx fast path.

Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 net/ipv4/udp.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)
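The cost difference the patch exploits can be sketched in userspace with C11 atomics as a stand-in for the kernel's atomic_t (names `charge_and_check`, `charge`, and `rmem_alloc` below are hypothetical, not kernel code): `atomic_add_return()` forces the CPU to produce the post-addition value, while plain `atomic_add()` discards it, which on some architectures compiles to a cheaper instruction.

```c
#include <stdatomic.h>

/* Stand-in for sk->sk_rmem_alloc; C11 atomics used purely for illustration. */
static atomic_int rmem_alloc;

/* Old pattern (atomic_add_return): the caller consumes the updated value,
 * so the read-modify-write must return its result. */
static int charge_and_check(int size, int rcvbuf)
{
	int rmem = atomic_fetch_add(&rmem_alloc, size) + size;
	return rmem > size + rcvbuf; /* nonzero => would have been dropped */
}

/* New pattern (atomic_add): the result is ignored, so the compiler and
 * CPU are free to use a cheaper addition. */
static void charge(int size)
{
	(void)atomic_fetch_add(&rmem_alloc, size); /* return value discarded */
}
```

The removed early-drop check is not lost: after the patch, `udp_rmem_schedule()` under the queue spinlock remains responsible for rejecting the skb when memory is exhausted, so the fast path only needs the unconditional charge.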