
[bpf,v8,03/13] bpf: sockmap, reschedule is now done through backlog

Message ID 20230517052244.294755-4-john.fastabend@gmail.com (mailing list archive)
State Superseded
Delegated to: BPF
Series: bpf sockmap fixes

Checks

Context Check Description
bpf/vmtest-bpf-PR fail merge-conflict
netdev/tree_selection success Clearly marked for bpf, async
netdev/apply fail Patch does not apply to bpf
bpf/vmtest-bpf-VM_Test-1 success Logs for ${{ matrix.test }} on ${{ matrix.arch }} with ${{ matrix.toolchain_full }}
bpf/vmtest-bpf-VM_Test-2 success Logs for ShellCheck
bpf/vmtest-bpf-VM_Test-3 success Logs for ShellCheck
bpf/vmtest-bpf-VM_Test-4 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-VM_Test-5 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-VM_Test-6 fail Logs for build for aarch64 with llvm-16
bpf/vmtest-bpf-VM_Test-7 success Logs for build for aarch64 with llvm-16
bpf/vmtest-bpf-VM_Test-8 success Logs for build for s390x with gcc
bpf/vmtest-bpf-VM_Test-9 success Logs for build for s390x with gcc
bpf/vmtest-bpf-VM_Test-10 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-VM_Test-11 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-VM_Test-12 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-VM_Test-13 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-VM_Test-14 success Logs for set-matrix
bpf/vmtest-bpf-VM_Test-15 success Logs for set-matrix
bpf/vmtest-bpf-VM_Test-16 success Logs for test_maps on aarch64 with gcc
bpf/vmtest-bpf-VM_Test-17 success Logs for test_maps on aarch64 with llvm-16
bpf/vmtest-bpf-VM_Test-18 success Logs for test_maps on s390x with gcc
bpf/vmtest-bpf-VM_Test-19 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-20 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-VM_Test-21 fail Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-VM_Test-22 fail Logs for test_progs on aarch64 with llvm-16
bpf/vmtest-bpf-VM_Test-23 fail Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-VM_Test-24 fail Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-25 fail Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-VM_Test-26 fail Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-VM_Test-27 fail Logs for test_progs_no_alu32 on aarch64 with llvm-16
bpf/vmtest-bpf-VM_Test-28 fail Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-VM_Test-29 fail Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-30 fail Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-VM_Test-31 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-VM_Test-32 success Logs for test_progs_no_alu32_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-VM_Test-33 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-34 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-VM_Test-35 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-VM_Test-36 success Logs for test_progs_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-VM_Test-37 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-38 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-VM_Test-39 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-VM_Test-40 success Logs for test_verifier on aarch64 with llvm-16
bpf/vmtest-bpf-VM_Test-41 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-VM_Test-42 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-VM_Test-43 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-VM_Test-44 success Logs for veristat
bpf/vmtest-bpf-VM_Test-45 success Logs for veristat

Commit Message

John Fastabend May 17, 2023, 5:22 a.m. UTC
Now that the backlog manages the reschedule() logic correctly, we can drop
the partial fix that rescheduled from the recvmsg hook.

Rescheduling on the recvmsg hook was added to address a corner case where
we still had data in the backlog state but had nothing to kick it and
reschedule the backlog worker to run and finish copying data out of the
state. This had a couple of limitations: first, it required user space to
kick it, introducing an unnecessary EBUSY and retry. Second, it only
handled the ingress case; egress redirects would still hang.

With the correct fix, pushing the reschedule logic down to where the
ENOMEM error occurs, we can drop this partial fix. A sketch of the
backlog-side retry pattern follows the diffstat below.

Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Fixes: bec217197b412 ("skmsg: Schedule psock work if the cached skb exists on the psock")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 net/core/skmsg.c | 2 --
 1 file changed, 2 deletions(-)
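
For readers following the series: the reschedule removed here now lives in
the backlog worker itself, which kicks itself again when the send path hits
a transient error. The following is a minimal C sketch of that pattern, not
the upstream sk_psock_backlog(): send_one_skb() is a hypothetical stand-in
for the real ingress/egress transmit path, and the sketch assumes the
delayed_work conversion done earlier in this series.

#include <linux/errno.h>
#include <linux/skbuff.h>
#include <linux/skmsg.h>
#include <linux/workqueue.h>

/* Hypothetical stand-in for the real transmit/redirect path. */
static int send_one_skb(struct sk_psock *psock, struct sk_buff *skb);

/* Minimal sketch of the backlog-side retry; assumes psock->work
 * is a struct delayed_work as converted earlier in the series.
 */
static void sk_psock_backlog_sketch(struct work_struct *work)
{
	struct delayed_work *dwork = to_delayed_work(work);
	struct sk_psock *psock = container_of(dwork, struct sk_psock, work);
	struct sk_buff *skb;

	mutex_lock(&psock->work_mutex);
	while ((skb = skb_peek(&psock->ingress_skb))) {
		if (send_one_skb(psock, skb) == -EAGAIN) {
			/* Transient failure (e.g. ENOMEM in the send
			 * path): the worker reschedules itself, so no
			 * recvmsg() call is needed to restart it, and
			 * egress redirects recover as well.
			 */
			schedule_delayed_work(&psock->work, 1);
			break;
		}
		skb_unlink(skb, &psock->ingress_skb);
		kfree_skb(skb);
	}
	mutex_unlock(&psock->work_mutex);
}

Because the retry is owned by the worker, user space no longer has to
trigger it via recvmsg() and eat a spurious EBUSY, which is what makes the
recvmsg-side kick deleted in the patch below safe to remove.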

Comments

Jakub Sitnicki May 17, 2023, 9:12 a.m. UTC | #1
On Tue, May 16, 2023 at 10:22 PM -07, John Fastabend wrote:
> Now that the backlog manages the reschedule() logic correctly, we can drop
> the partial fix that rescheduled from the recvmsg hook.
>
> Rescheduling on the recvmsg hook was added to address a corner case where
> we still had data in the backlog state but had nothing to kick it and
> reschedule the backlog worker to run and finish copying data out of the
> state. This had a couple of limitations: first, it required user space to
> kick it, introducing an unnecessary EBUSY and retry. Second, it only
> handled the ingress case; egress redirects would still hang.
>
> With the correct fix, pushing the reschedule logic down to where the
> ENOMEM error occurs, we can drop this partial fix.
>
> Reviewed-by: Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>

Something went wrong here.

> Fixes: bec217197b412 ("skmsg: Schedule psock work if the cached skb exists on the psock")
> Signed-off-by: John Fastabend <john.fastabend@gmail.com>
> ---

[...]

Patch

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 0a9ee2acac0b..76ff15f8bb06 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -481,8 +481,6 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
 		msg_rx = sk_psock_peek_msg(psock);
 	}
 out:
-	if (psock->work_state.skb && copied > 0)
-		schedule_delayed_work(&psock->work, 0);
 	return copied;
 }
 EXPORT_SYMBOL_GPL(sk_msg_recvmsg);