Message ID | 20230306115745.87401-1-kerneljasonxing@gmail.com (mailing list archive)
---|---
State | Superseded |
Delegated to: | Netdev Maintainers |
Series | [v2,net-next] udp: introduce __sk_mem_schedule() usage
Context | Check | Description
---|---|---
netdev/series_format | success | Single patches do not need cover letters |
netdev/tree_selection | success | Clearly marked for net-next |
netdev/fixes_present | success | Fixes tag not required for -next series |
netdev/header_inline | success | No static functions without inline keyword in header files |
netdev/build_32bit | success | Errors and warnings before: 8 this patch: 8 |
netdev/cc_maintainers | success | CCed 8 of 8 maintainers |
netdev/build_clang | success | Errors and warnings before: 0 this patch: 0 |
netdev/verify_signedoff | success | Signed-off-by tag matches author and committer |
netdev/deprecated_api | success | None detected |
netdev/check_selftest | success | No net selftest shell script |
netdev/verify_fixes | success | No Fixes tag |
netdev/build_allmodconfig_warn | success | Errors and warnings before: 8 this patch: 8 |
netdev/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 48 lines checked |
netdev/kdoc | success | Errors and warnings before: 0 this patch: 0 |
netdev/source_inline | fail | Was 0 now: 1 |
On Mon, Mar 06, 2023 at 07:57:45PM +0800, Jason Xing wrote:
> From: Jason Xing <kernelxing@tencent.com>
>
> Keep the accounting schema consistent across different protocols
> with __sk_mem_schedule(). Besides, it slightly adjusts how forward
> allocated memory is calculated compared to before. After applying
> this patch, we can avoid the receive path scheduling an extra
> amount of memory.
>
> Link: https://lore.kernel.org/lkml/20230221110344.82818-1-kerneljasonxing@gmail.com/
> Signed-off-by: Jason Xing <kernelxing@tencent.com>
> ---
> V2:
> 1) change the title and body message
> 2) use __sk_mem_schedule() instead, as suggested by Paolo Abeni
> ---
>  net/ipv4/udp.c | 31 ++++++++++++++++++-------------
>  1 file changed, 18 insertions(+), 13 deletions(-)
>
> diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
> index 9592fe3e444a..21c99087110d 100644
> --- a/net/ipv4/udp.c
> +++ b/net/ipv4/udp.c
> @@ -1531,10 +1531,23 @@ static void busylock_release(spinlock_t *busy)
>  	spin_unlock(busy);
>  }
>
> +static inline int udp_rmem_schedule(struct sock *sk, int size)

nit: I think it's best to drop the inline keyword and
let the compiler figure that out.
On Mon, Mar 6, 2023 at 10:51 PM Simon Horman <simon.horman@corigine.com> wrote:
>
> On Mon, Mar 06, 2023 at 07:57:45PM +0800, Jason Xing wrote:
> > From: Jason Xing <kernelxing@tencent.com>
> >
> > Keep the accounting schema consistent across different protocols
> > with __sk_mem_schedule(). Besides, it slightly adjusts how forward
> > allocated memory is calculated compared to before. After applying
> > this patch, we can avoid the receive path scheduling an extra
> > amount of memory.
> >
> > Link: https://lore.kernel.org/lkml/20230221110344.82818-1-kerneljasonxing@gmail.com/
> > Signed-off-by: Jason Xing <kernelxing@tencent.com>
> > ---
> > V2:
> > 1) change the title and body message
> > 2) use __sk_mem_schedule() instead, as suggested by Paolo Abeni
> > ---
> >  net/ipv4/udp.c | 31 ++++++++++++++++++-------------
> >  1 file changed, 18 insertions(+), 13 deletions(-)
> >
> > diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
> > index 9592fe3e444a..21c99087110d 100644
> > --- a/net/ipv4/udp.c
> > +++ b/net/ipv4/udp.c
> > @@ -1531,10 +1531,23 @@ static void busylock_release(spinlock_t *busy)
> >  	spin_unlock(busy);
> >  }
> >
> > +static inline int udp_rmem_schedule(struct sock *sk, int size)
>
> nit: I think it's best to drop the inline keyword and
> let the compiler figure that out.

Thanks for the review. I'll do that in the v3 patch.

Jason
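The change Simon asks for is mechanical. Below is a sketch of how the helper would presumably look in v3: only the inline keyword is dropped, the body is unchanged from the v2 hunk. This is an illustration of the suggested follow-up, not the actual v3 submission.

/* Sketch of the suggested v3 form: identical to the v2 helper below,
 * minus the inline keyword, so the compiler decides whether to inline.
 */
static int udp_rmem_schedule(struct sock *sk, int size)
{
	int delta;

	/* Only schedule the shortfall, not the full skb size. */
	delta = size - sk->sk_forward_alloc;
	if (delta > 0 && !__sk_mem_schedule(sk, delta, SK_MEM_RECV))
		return -ENOBUFS;

	/* Charge the skb against the now-sufficient forward allocation. */
	sk->sk_forward_alloc -= size;

	return 0;
}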
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 9592fe3e444a..21c99087110d 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1531,10 +1531,23 @@ static void busylock_release(spinlock_t *busy)
 	spin_unlock(busy);
 }
 
+static inline int udp_rmem_schedule(struct sock *sk, int size)
+{
+	int delta;
+
+	delta = size - sk->sk_forward_alloc;
+	if (delta > 0 && !__sk_mem_schedule(sk, delta, SK_MEM_RECV))
+		return -ENOBUFS;
+
+	sk->sk_forward_alloc -= size;
+
+	return 0;
+}
+
 int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
 {
 	struct sk_buff_head *list = &sk->sk_receive_queue;
-	int rmem, delta, amt, err = -ENOMEM;
+	int rmem, err = -ENOMEM;
 	spinlock_t *busy = NULL;
 	int size;
 
@@ -1567,20 +1580,12 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
 		goto uncharge_drop;
 
 	spin_lock(&list->lock);
-	if (size >= sk->sk_forward_alloc) {
-		amt = sk_mem_pages(size);
-		delta = amt << PAGE_SHIFT;
-		if (!__sk_mem_raise_allocated(sk, delta, amt, SK_MEM_RECV)) {
-			err = -ENOBUFS;
-			spin_unlock(&list->lock);
-			goto uncharge_drop;
-		}
-
-		sk->sk_forward_alloc += delta;
+	err = udp_rmem_schedule(sk, size);
+	if (err) {
+		spin_unlock(&list->lock);
+		goto uncharge_drop;
 	}
 
-	sk->sk_forward_alloc -= size;
-
 	/* no need to setup a destructor, we will explicitly release the
 	 * forward allocated memory on dequeue
 	 */
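The practical difference is in how much memory gets scheduled on the receive path: the old code rounded the full skb size up to pages whenever it scheduled, while the new helper only schedules the shortfall. A hypothetical worked example, assuming PAGE_SIZE is 4096 and the scheduling call succeeds (the numbers are made up for illustration):

/* Suppose size = 5000 and sk->sk_forward_alloc = 3000.
 *
 * Old path: size >= sk_forward_alloc, so the full skb size is raised,
 *           rounded up to whole pages:
 *             amt   = sk_mem_pages(5000) = 2 pages
 *             delta = amt << PAGE_SHIFT  = 8192
 *             sk_forward_alloc = 3000 + 8192 - 5000 = 6192
 *
 * New path: only the shortfall is scheduled; __sk_mem_schedule()
 *           rounds just that delta up to pages:
 *             delta = 5000 - 3000 = 2000 -> sk_mem_pages(2000) = 1 page
 *             sk_forward_alloc = 3000 + 4096 - 5000 = 2096
 *
 * One page less is charged to the protocol memory accounting for the
 * same skb, which is the extra scheduling the changelog says the patch
 * avoids. A second, smaller behavior change visible in the diff: the
 * old code also scheduled when size == sk_forward_alloc, whereas the
 * new code does not, since delta is then 0.
 */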