From patchwork Mon Mar 27 17:54:35 2023
From: John Fastabend
To: cong.wang@bytedance.com, jakub@cloudflare.com, daniel@iogearbox.net,
    lmb@isovalent.com, edumazet@google.com
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v2 01/12] bpf: sockmap, pass skb ownership through read_skb
Date: Mon, 27 Mar 2023 10:54:35 -0700
Message-Id: <20230327175446.98151-2-john.fastabend@gmail.com>
In-Reply-To: <20230327175446.98151-1-john.fastabend@gmail.com>
References: <20230327175446.98151-1-john.fastabend@gmail.com>

The read_skb hook calls consume_skb() now, but this means that if the
recv_actor program wants to use the skb it needs to increment the refcnt
so that consume_skb() doesn't kfree the sk_buff. This is problematic
because in some error cases under memory pressure we may need to
linearize the sk_buff from sk_psock_skb_ingress_enqueue(). Then we get
this,

  skb_linearize()
    __pskb_pull_tail()
      pskb_expand_head()
        BUG_ON(skb_shared(skb))

Because we incremented the users refcnt from sk_psock_verdict_recv() we
hit the BUG_ON() with refcnt > 1 and trip it. To fix, simply pass
ownership of the sk_buff through the read_skb call. Then we can drop the
consume_skb() from the read_skb handlers and assume the verdict recv
does any required kfree.

Bug found while testing in our CI, which runs in VMs that hit memory
constraints rather regularly. William tested the TCP read_skb handlers.
[ 106.536188] ------------[ cut here ]------------
[ 106.536197] kernel BUG at net/core/skbuff.c:1693!
[ 106.536479] invalid opcode: 0000 [#1] PREEMPT SMP PTI
[ 106.536726] CPU: 3 PID: 1495 Comm: curl Not tainted 5.19.0-rc5 #1
[ 106.537023] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ArchLinux 1.16.0-1 04/01/2014
[ 106.537467] RIP: 0010:pskb_expand_head+0x269/0x330
[ 106.538585] RSP: 0018:ffffc90000138b68 EFLAGS: 00010202
[ 106.538839] RAX: 000000000000003f RBX: ffff8881048940e8 RCX: 0000000000000a20
[ 106.539186] RDX: 0000000000000002 RSI: 0000000000000000 RDI: ffff8881048940e8
[ 106.539529] RBP: ffffc90000138be8 R08: 00000000e161fd1a R09: 0000000000000000
[ 106.539877] R10: 0000000000000018 R11: 0000000000000000 R12: ffff8881048940e8
[ 106.540222] R13: 0000000000000003 R14: 0000000000000000 R15: ffff8881048940e8
[ 106.540568] FS:  00007f277dde9f00(0000) GS:ffff88813bd80000(0000) knlGS:0000000000000000
[ 106.540954] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 106.541227] CR2: 00007f277eeede64 CR3: 000000000ad3e000 CR4: 00000000000006e0
[ 106.541569] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 106.541915] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 106.542255] Call Trace:
[ 106.542383]
[ 106.542487]  __pskb_pull_tail+0x4b/0x3e0
[ 106.542681]  skb_ensure_writable+0x85/0xa0
[ 106.542882]  sk_skb_pull_data+0x18/0x20
[ 106.543084]  bpf_prog_b517a65a242018b0_bpf_skskb_http_verdict+0x3a9/0x4aa9
[ 106.543536]  ? migrate_disable+0x66/0x80
[ 106.543871]  sk_psock_verdict_recv+0xe2/0x310
[ 106.544258]  ? sk_psock_write_space+0x1f0/0x1f0
[ 106.544561]  tcp_read_skb+0x7b/0x120
[ 106.544740]  tcp_data_queue+0x904/0xee0
[ 106.544931]  tcp_rcv_established+0x212/0x7c0
[ 106.545142]  tcp_v4_do_rcv+0x174/0x2a0
[ 106.545326]  tcp_v4_rcv+0xe70/0xf60
[ 106.545500]  ip_protocol_deliver_rcu+0x48/0x290
[ 106.545744]  ip_local_deliver_finish+0xa7/0x150

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Reported-by: William Findlay
Tested-by: William Findlay
Signed-off-by: John Fastabend
Reviewed-by: Jakub Sitnicki
---
 net/core/skmsg.c   | 2 --
 net/ipv4/tcp.c     | 1 -
 net/ipv4/udp.c     | 5 +----
 net/unix/af_unix.c | 5 +----
 4 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 53d0251788aa..2b6d9519ff29 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -1180,8 +1180,6 @@ static int sk_psock_verdict_recv(struct sock *sk, struct sk_buff *skb)
 	int ret = __SK_DROP;
 	int len = skb->len;
 
-	skb_get(skb);
-
 	rcu_read_lock();
 	psock = sk_psock(sk);
 	if (unlikely(!psock)) {
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 33f559f491c8..6572962b0237 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1770,7 +1770,6 @@ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 		WARN_ON_ONCE(!skb_set_owner_sk_safe(skb, sk));
 		tcp_flags = TCP_SKB_CB(skb)->tcp_flags;
 		used = recv_actor(sk, skb);
-		consume_skb(skb);
 		if (used < 0) {
 			if (!copied)
 				copied = used;
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 9592fe3e444a..04e8c6385246 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1832,10 +1832,7 @@ int udp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 	}
 
 	WARN_ON_ONCE(!skb_set_owner_sk_safe(skb, sk));
-	copied = recv_actor(sk, skb);
-	kfree_skb(skb);
-
-	return copied;
+	return recv_actor(sk, skb);
 }
 EXPORT_SYMBOL(udp_read_skb);
 
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index f0c2293f1d3b..a5dd2ee0cfed 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -2554,10 +2554,7 @@ static int unix_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 	if (!skb)
 		return err;
 
-	copied = recv_actor(sk, skb);
-	kfree_skb(skb);
-
-	return copied;
+	return recv_actor(sk, skb);
 }
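[Editor's note] To make the new contract concrete, here is a minimal
sketch (not part of the patch) of what a read_skb recv_actor looks like
once it owns the skb: every path must consume or free the buffer, and no
extra skb_get() is taken. The helpers my_check() and my_enqueue() are
invented purely for illustration.

	static int example_recv_actor(struct sock *sk, struct sk_buff *skb)
	{
		int len = skb->len;

		if (!my_check(sk)) {		/* hypothetical error path */
			kfree_skb(skb);		/* we own the skb, so we free it */
			return 0;
		}

		my_enqueue(sk, skb);		/* ownership handed on, no skb_get() */
		return len;
	}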
From patchwork Mon Mar 27 17:54:36 2023
From: John Fastabend
To: cong.wang@bytedance.com, jakub@cloudflare.com, daniel@iogearbox.net,
    lmb@isovalent.com, edumazet@google.com
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v2 02/12] bpf: sockmap, convert schedule_work into delayed_work
Date: Mon, 27 Mar 2023 10:54:36 -0700
Message-Id: <20230327175446.98151-3-john.fastabend@gmail.com>
In-Reply-To: <20230327175446.98151-1-john.fastabend@gmail.com>
References: <20230327175446.98151-1-john.fastabend@gmail.com>

Sk_buffs are fed into sockmap verdict programs either from a strparser
(when the user might want to decide how framing of the skb is done by
attaching another parser program) or directly through tcp_read_sock.
The tcp_read_sock is the preferred method for performance when the BPF
logic is a stream parser.

The flow for Cilium's common use case with a stream parser is,

  tcp_read_sock()
    sk_psock_verdict_recv
      ret = bpf_prog_run_pin_on_cpu()
      sk_psock_verdict_apply(sock, skb, ret)
        // if system is under memory pressure or app is slow we may
        // need to queue skb. Do this queuing through ingress_skb and
        // then kick timer to wake up handler
        skb_queue_tail(ingress_skb, skb)
        schedule_work(work);

The work queue is wired up to sk_psock_backlog(). This will then walk
the ingress_skb list that holds our sk_buffs that could not be handled,
but should be OK to run at some later point. However, it's possible that
the workqueue doing this work still hits an error when sending the skb.
When this happens the skbuff is requeued on a temporary 'state' struct
kept with the workqueue.
This is necessary because it's possible to partially send an skbuff
before hitting an error and we need to know how and where to restart
when the workqueue runs next.

Now for the trouble: we don't re-kick the workqueue. This can cause a
stall where the skbuff we just cached on the state variable might never
be sent. This happens when it's the last packet in a flow and no further
packets come along that would cause the system to kick the workqueue
from that side.

To fix this we could do a simple schedule_work(), but while under memory
pressure it makes sense to back off some instead of continuing to retry
repeatedly. So instead convert schedule_work() to schedule_delayed_work()
and add backoff logic that reschedules from the backlog queue on errors.
It's not obvious though what a good backoff is, so use '1'.

In testing we observed some flakes while running the NGINX compliance
test with sockmap; we attributed these failed tests to this bug and the
subsequent issue.

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Tested-by: William Findlay
Signed-off-by: John Fastabend
---
 include/linux/skmsg.h |  2 +-
 net/core/skmsg.c      | 19 ++++++++++++-------
 net/core/sock_map.c   |  3 ++-
 3 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index 84f787416a54..904ff9a32ad6 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -105,7 +105,7 @@ struct sk_psock {
 	struct proto			*sk_proto;
 	struct mutex			work_mutex;
 	struct sk_psock_work_state	work_state;
-	struct work_struct		work;
+	struct delayed_work		work;
 	struct rcu_work			rwork;
 };

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 2b6d9519ff29..96a6a3a74a67 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -481,7 +481,7 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
 	}
 out:
 	if (psock->work_state.skb && copied > 0)
-		schedule_work(&psock->work);
+		schedule_delayed_work(&psock->work, 0);
 	return copied;
 }
 EXPORT_SYMBOL_GPL(sk_msg_recvmsg);
@@ -639,7 +639,8 @@ static void sk_psock_skb_state(struct sk_psock *psock,
 
 static void sk_psock_backlog(struct work_struct *work)
 {
-	struct sk_psock *psock = container_of(work, struct sk_psock, work);
+	struct delayed_work *dwork = to_delayed_work(work);
+	struct sk_psock *psock = container_of(dwork, struct sk_psock, work);
 	struct sk_psock_work_state *state = &psock->work_state;
 	struct sk_buff *skb = NULL;
 	bool ingress;
@@ -679,6 +680,10 @@ static void sk_psock_backlog(struct work_struct *work)
 				if (ret == -EAGAIN) {
 					sk_psock_skb_state(psock, state, skb, len, off);
+
+					// Delay slightly to prioritize any
+					// other work that might be here.
+					schedule_delayed_work(&psock->work, 1);
 					goto end;
 				}
 				/* Hard errors break pipe and stop xmit.
 				 */
@@ -733,7 +738,7 @@ struct sk_psock *sk_psock_init(struct sock *sk, int node)
 	INIT_LIST_HEAD(&psock->link);
 	spin_lock_init(&psock->link_lock);
 
-	INIT_WORK(&psock->work, sk_psock_backlog);
+	INIT_DELAYED_WORK(&psock->work, sk_psock_backlog);
 	mutex_init(&psock->work_mutex);
 	INIT_LIST_HEAD(&psock->ingress_msg);
 	spin_lock_init(&psock->ingress_lock);
@@ -822,7 +827,7 @@ static void sk_psock_destroy(struct work_struct *work)
 
 	sk_psock_done_strp(psock);
 
-	cancel_work_sync(&psock->work);
+	cancel_delayed_work_sync(&psock->work);
 	mutex_destroy(&psock->work_mutex);
 
 	psock_progs_drop(&psock->progs);
@@ -937,7 +942,7 @@ static int sk_psock_skb_redirect(struct sk_psock *from, struct sk_buff *skb)
 	}
 
 	skb_queue_tail(&psock_other->ingress_skb, skb);
-	schedule_work(&psock_other->work);
+	schedule_delayed_work(&psock_other->work, 0);
 	spin_unlock_bh(&psock_other->ingress_lock);
 	return 0;
 }
@@ -1017,7 +1022,7 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
 		spin_lock_bh(&psock->ingress_lock);
 		if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
 			skb_queue_tail(&psock->ingress_skb, skb);
-			schedule_work(&psock->work);
+			schedule_delayed_work(&psock->work, 0);
 			err = 0;
 		}
 		spin_unlock_bh(&psock->ingress_lock);
@@ -1048,7 +1053,7 @@ static void sk_psock_write_space(struct sock *sk)
 	psock = sk_psock(sk);
 	if (likely(psock)) {
 		if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
-			schedule_work(&psock->work);
+			schedule_delayed_work(&psock->work, 0);
 		write_space = psock->saved_write_space;
 	}
 	rcu_read_unlock();

diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index a68a7290a3b2..d38267201892 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -1624,9 +1624,10 @@ void sock_map_close(struct sock *sk, long timeout)
 	rcu_read_unlock();
 	sk_psock_stop(psock);
 	release_sock(sk);
-	cancel_work_sync(&psock->work);
+	cancel_delayed_work_sync(&psock->work);
 	sk_psock_put(sk, psock);
 }
+
 /* Make sure we do not recurse. This is a bug.
  * Leak the socket instead of crashing on a stack overflow.
  */
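[Editor's note] The conversion follows the stock delayed-work pattern;
the stand-alone sketch below shows that pattern on its own terms. The
my_ctx struct and my_try_send() are invented for the example, while
INIT_DELAYED_WORK(), schedule_delayed_work(), to_delayed_work() and
container_of() are the regular kernel APIs the patch uses.

	struct my_ctx {
		struct delayed_work work;
	};

	static void my_backlog(struct work_struct *work)
	{
		struct delayed_work *dwork = to_delayed_work(work);
		struct my_ctx *ctx = container_of(dwork, struct my_ctx, work);

		/* back off one jiffy instead of retrying immediately */
		if (my_try_send(ctx) == -EAGAIN)
			schedule_delayed_work(&ctx->work, 1);
	}

	static void my_ctx_init(struct my_ctx *ctx)
	{
		INIT_DELAYED_WORK(&ctx->work, my_backlog);
	}

Callers that previously did schedule_work(&ctx->work) simply become
schedule_delayed_work(&ctx->work, 0), which keeps the "run as soon as
possible" behavior on the non-error paths.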
From patchwork Mon Mar 27 17:54:37 2023
From: John Fastabend
To: cong.wang@bytedance.com, jakub@cloudflare.com, daniel@iogearbox.net,
    lmb@isovalent.com, edumazet@google.com
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v2 03/12] bpf: sockmap, improved check for empty queue
Date: Mon, 27 Mar 2023 10:54:37 -0700
Message-Id: <20230327175446.98151-4-john.fastabend@gmail.com>
In-Reply-To: <20230327175446.98151-1-john.fastabend@gmail.com>
References: <20230327175446.98151-1-john.fastabend@gmail.com>

We noticed some rare sk_buffs were stepping past the queue when the
system was under memory pressure. The general theory is to skip
enqueueing sk_buffs when it's not necessary, which is the normal case
with a system that is properly provisioned for the task: no memory
pressure and enough CPU assigned. But if we can't allocate memory due to
an ENOMEM error when enqueueing the sk_buff into the sockmap receive
queue, we push it onto a delayed workqueue to retry later. When a new
sk_buff is received we then check if that queue is empty. However, there
is a problem with simply checking the queue length. When an sk_buff is
being processed from the ingress queue, but is not yet on the sockmap
msg receive queue, it's possible to also receive an sk_buff through the
normal path. It will check the ingress queue, see it is empty, and skip
ahead of the pkt being processed. Previously we used the sock lock from
both contexts, which made the problem harder to hit, but not impossible.

To fix, also check the 'state' variable where we cache a partially
processed sk_buff. This catches the majority of cases. But we also need
to take the mutex lock around this check because we can't have both code
paths running and still check sensibly. We could perhaps do this with
atomic bit checks, but we are already here due to memory pressure, so
slowing things down a bit seems OK and it is simpler to just grab a lock.

To reproduce the issue we ran the NGINX compliance test with sockmap and
observed some flakes in our testing that we attributed to this issue.

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Tested-by: William Findlay
Signed-off-by: John Fastabend
Reviewed-by: Jakub Sitnicki
---
 net/core/skmsg.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 96a6a3a74a67..34de0605694e 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -985,6 +985,7 @@ EXPORT_SYMBOL_GPL(sk_psock_tls_strp_read);
 static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
 				  int verdict)
 {
+	struct sk_psock_work_state *state;
 	struct sock *sk_other;
 	int err = 0;
 	u32 len, off;
@@ -1001,13 +1002,28 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
 
 		skb_bpf_set_ingress(skb);
 
+		/* We need to grab mutex here because in-flight skb is in one of
+		 * the following states: either on ingress_skb, in psock->state
+		 * or being processed by backlog and neither in state->skb and
+		 * ingress_skb may be also empty. The troublesome case is when
+		 * the skb has been dequeued from ingress_skb list or taken from
+		 * state->skb because we can not easily test this case. Maybe we
+		 * could be clever with flags and resolve this but being clever
+		 * got us here in the first place and we note this is done under
+		 * sock lock and backlog conditions mean we are already running
+		 * into ENOMEM or other performance hindering cases so lets do
+		 * the obvious thing and grab the mutex.
+		 */
+		mutex_lock(&psock->work_mutex);
+		state = &psock->work_state;
+
 		/* If the queue is empty then we can submit directly
 		 * into the msg queue. If its not empty we have to
 		 * queue work otherwise we may get OOO data. Otherwise,
 		 * if sk_psock_skb_ingress errors will be handled by
 		 * retrying later from workqueue.
 		 */
-		if (skb_queue_empty(&psock->ingress_skb)) {
+		if (skb_queue_empty(&psock->ingress_skb) && likely(!state->skb)) {
 			len = skb->len;
 			off = 0;
 			if (skb_bpf_strparser(skb)) {
@@ -1028,9 +1044,11 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
 			spin_unlock_bh(&psock->ingress_lock);
 			if (err < 0) {
 				skb_bpf_redirect_clear(skb);
+				mutex_unlock(&psock->work_mutex);
 				goto out_free;
 			}
 		}
+		mutex_unlock(&psock->work_mutex);
 		break;
 	case __SK_REDIRECT:
 		err = sk_psock_skb_redirect(psock, skb);
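[Editor's note] The rule the patch enforces can be restated as a small
sketch (illustrative only, not the merged code): deliver in-line only
when neither the backlog queue nor the worker's cached partial state
holds an earlier skb, and make that decision under the same mutex the
backlog worker takes. struct my_psock and my_ingress() are invented
stand-ins whose fields mirror the real sk_psock ones.

	static void my_deliver(struct my_psock *p, struct sk_buff *skb)
	{
		mutex_lock(&p->work_mutex);
		if (skb_queue_empty(&p->ingress_skb) && !p->work_state.skb) {
			/* fast path: nothing pending anywhere, order is safe */
			my_ingress(p, skb);
		} else {
			/* an earlier skb is still pending, keep FIFO order */
			skb_queue_tail(&p->ingress_skb, skb);
			schedule_delayed_work(&p->work, 0);
		}
		mutex_unlock(&p->work_mutex);
	}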
From patchwork Mon Mar 27 17:54:38 2023
From: John Fastabend
To: cong.wang@bytedance.com, jakub@cloudflare.com, daniel@iogearbox.net,
    lmb@isovalent.com, edumazet@google.com
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v2 04/12] bpf: sockmap, handle fin correctly
Date: Mon, 27 Mar 2023 10:54:38 -0700
Message-Id: <20230327175446.98151-5-john.fastabend@gmail.com>
In-Reply-To: <20230327175446.98151-1-john.fastabend@gmail.com>
References: <20230327175446.98151-1-john.fastabend@gmail.com>

The sockmap code is returning EAGAIN after a FIN packet is received and
no more data is on the receive queue. Correct behavior is to return 0 to
the user and the user can then close the socket. The EAGAIN causes many
apps to retry, which masks the problem. Eventually the socket is evicted
from the sockmap because it's released from the sockmap sock free
handling. The issue creates a delay and can cause some errors on the
application side.

To fix this, check on the sk_msg_recvmsg side: if the length is zero and
the FIN flag is set, then return zero. A selftest will be added to check
this condition.

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Tested-by: William Findlay
Signed-off-by: John Fastabend
---
 net/ipv4/tcp_bpf.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index cf26d65ca389..3a0f43f3afd8 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -174,6 +174,24 @@ static int tcp_msg_wait_data(struct sock *sk, struct sk_psock *psock,
 	return ret;
 }
 
+static bool is_next_msg_fin(struct sk_psock *psock)
+{
+	struct scatterlist *sge;
+	struct sk_msg *msg_rx;
+	int i;
+
+	msg_rx = sk_psock_peek_msg(psock);
+	i = msg_rx->sg.start;
+	sge = sk_msg_elem(msg_rx, i);
+	if (!sge->length) {
+		struct sk_buff *skb = msg_rx->skb;
+
+		if (skb && TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
+			return true;
+	}
+	return false;
+}
+
 static int tcp_bpf_recvmsg_parser(struct sock *sk,
 				  struct msghdr *msg,
 				  size_t len,
@@ -193,6 +211,19 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 	lock_sock(sk);
 msg_bytes_ready:
 	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
+	/* The typical case for EFAULT is the socket was gracefully
+	 * shutdown with a FIN pkt. So check here the other case is
+	 * some error on copy_page_to_iter which would be unexpected.
+	 * On fin return correct return code to zero.
+	 */
+	if (copied == -EFAULT) {
+		bool is_fin = is_next_msg_fin(psock);
+
+		if (is_fin) {
+			copied = 0;
+			goto out;
+		}
+	}
 	if (!copied) {
 		long timeo;
 		int data;
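[Editor's note] From the application's point of view the fix restores
the normal end-of-stream convention, shown in the small userspace sketch
below (plain sockets API, not sockmap-specific): 0 from recv() after the
peer sent FIN means the connection is done, while -1 with EAGAIN on a
nonblocking socket only means "try again later".

	#include <errno.h>
	#include <stdio.h>
	#include <sys/socket.h>
	#include <unistd.h>

	/* fd is a connected TCP socket */
	static void drain(int fd)
	{
		char buf[4096];
		ssize_t n;

		for (;;) {
			n = recv(fd, buf, sizeof(buf), 0);
			if (n > 0)
				continue;		/* got data */
			if (n == 0) {			/* peer sent FIN: done */
				close(fd);
				return;
			}
			if (errno == EAGAIN || errno == EWOULDBLOCK)
				continue;		/* nonblocking: poll/retry */
			perror("recv");
			return;
		}
	}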
From patchwork Mon Mar 27 17:54:39 2023
From: John Fastabend
To: cong.wang@bytedance.com, jakub@cloudflare.com, daniel@iogearbox.net,
    lmb@isovalent.com, edumazet@google.com
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v2 05/12] bpf: sockmap, TCP data stall on recv before accept
Date: Mon, 27 Mar 2023 10:54:39 -0700
Message-Id: <20230327175446.98151-6-john.fastabend@gmail.com>
In-Reply-To: <20230327175446.98151-1-john.fastabend@gmail.com>
References: <20230327175446.98151-1-john.fastabend@gmail.com>

A common mechanism to put a TCP socket into the sockmap is to hook the
BPF_SOCK_OPS_{ACTIVE_PASSIVE}_ESTABLISHED_CB event with a BPF program
that can map the socket info to the correct BPF verdict parser. When the
user adds the socket to the map, the psock is created and the new ops
are assigned to ensure the verdict program will 'see' the sk_buffs as
they arrive.

Part of this process hooks the sk_data_ready op with a BPF specific
handler to wake up the BPF verdict program when data is ready to read.
The logic is simple enough (posted here for easy reading),

 static void sk_psock_verdict_data_ready(struct sock *sk)
 {
	struct socket *sock = sk->sk_socket;

	if (unlikely(!sock || !sock->ops || !sock->ops->read_skb))
		return;
	sock->ops->read_skb(sk, sk_psock_verdict_recv);
 }

The oversight here is that sk->sk_socket is not assigned until the
application accept()s the new socket. However, it's entirely OK for the
peer application to do a connect() followed immediately by sends. The
socket on the receiver is sitting on the backlog queue of the listening
socket until it's accepted and the data is queued up. If the peer never
accepts the socket or is slow, it will eventually hit data limits and
rate limit the session. But, importantly for BPF sockmap hooks, when
this data is received the TCP stack does the sk_data_ready() call, but
the read_skb() for this data is never called because sk_socket is
missing. The data sits on the sk_receive_queue.

Then once the socket is accepted, if we never receive more data from the
peer there will be no further sk_data_ready calls and all the data is
still on the sk_receive_queue. The user then calls recvmsg after
accept(), and for TCP sockets in sockmap we use the
tcp_bpf_recvmsg_parser() handler. The handler checks for data in the
sk_msg ingress queue, expecting that the BPF program has already run
from the sk_data_ready hook and enqueued the data as needed. So we are
stuck.

To fix, do an unlikely check in the recvmsg handler for data on the
sk_receive_queue and, if it exists, wake up data_ready. We have the sock
locked in both read_skb and recvmsg, so we should avoid having multiple
runners.

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Signed-off-by: John Fastabend
---
 net/ipv4/tcp_bpf.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index 3a0f43f3afd8..2c75bbcbefed 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -209,6 +209,26 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 		return tcp_recvmsg(sk, msg, len, flags, addr_len);
 
 	lock_sock(sk);
+
+	/* We may have received data on the sk_receive_queue pre-accept and
+	 * then we can not use read_skb in this context because we haven't
+	 * assigned a sk_socket yet so have no link to the ops. The work-around
+	 * is to check the sk_receive_queue and in these cases read skbs off
+	 * queue again. The read_skb hook is not running at this point because
+	 * of lock_sock so we avoid having multiple runners in read_skb.
+	 */
+	if (unlikely(!skb_queue_empty(&sk->sk_receive_queue))) {
+		tcp_data_ready(sk);
+		/* This handles the ENOMEM errors if we both receive data
+		 * pre accept and are already under memory pressure. At least
+		 * let user no to retry.
+		 */
+		if (unlikely(!skb_queue_empty(&sk->sk_receive_queue))) {
+			copied = -EAGAIN;
+			goto out;
+		}
+	}
+
 msg_bytes_ready:
 	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
 	/* The typical case for EFAULT is the socket was gracefully
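[Editor's note] The scenario described above is easy to picture from
userspace. The rough sketch below (ordinary sockets, error handling and
address setup omitted) shows the trigger: the client connects and sends
immediately, so all of the data is already sitting on the receive queue
before the server's first read after accept().

	#include <netinet/in.h>
	#include <sys/socket.h>
	#include <unistd.h>

	static void client(const struct sockaddr_in *srv)
	{
		int fd = socket(AF_INET, SOCK_STREAM, 0);

		connect(fd, (const struct sockaddr *)srv, sizeof(*srv));
		send(fd, "hello", 5, 0); /* queued on the not-yet-accepted socket */
	}

	static void server(int lfd)
	{
		char buf[16];
		int c = accept(lfd, NULL, NULL); /* data already on sk_receive_queue */

		/* before this fix a sockmap socket could stall here forever */
		recv(c, buf, sizeof(buf), 0);
	}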
From patchwork Mon Mar 27 17:54:40 2023
From: John Fastabend
To: cong.wang@bytedance.com, jakub@cloudflare.com, daniel@iogearbox.net,
    lmb@isovalent.com, edumazet@google.com
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v2 06/12] bpf: sockmap, wake up polling after data copy
Date: Mon, 27 Mar 2023 10:54:40 -0700
Message-Id: <20230327175446.98151-7-john.fastabend@gmail.com>
In-Reply-To: <20230327175446.98151-1-john.fastabend@gmail.com>
References: <20230327175446.98151-1-john.fastabend@gmail.com>

When the TCP stack has data ready to read, sk_data_ready() is called.
Sockmap overwrites this with its own handler to call into the BPF
verdict program. But the original TCP socket had sock_def_readable,
which would additionally wake up any user space waiters with
sk_wake_async().

Sockmap saved the callback when the socket was created, so call the
saved data_ready callback; then we can wake up any epoll() logic waiting
on the read. Note we call this on 'copied >= 0' to account for returning
0 when a FIN is received, because we need to wake up the user for this
as well so they can do the recvmsg() -> 0 and detect the shutdown.

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Signed-off-by: John Fastabend
---
 net/core/skmsg.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 34de0605694e..10e5481da662 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -1230,10 +1230,19 @@ static int sk_psock_verdict_recv(struct sock *sk, struct sk_buff *skb)
 static void sk_psock_verdict_data_ready(struct sock *sk)
 {
 	struct socket *sock = sk->sk_socket;
+	int copied;
 
 	if (unlikely(!sock || !sock->ops || !sock->ops->read_skb))
 		return;
-	sock->ops->read_skb(sk, sk_psock_verdict_recv);
+	copied = sock->ops->read_skb(sk, sk_psock_verdict_recv);
+	if (copied >= 0) {
+		struct sk_psock *psock;
+
+		rcu_read_lock();
+		psock = sk_psock(sk);
+		psock->saved_data_ready(sk);
+		rcu_read_unlock();
+	}
 }
 
 void sk_psock_start_verdict(struct sock *sk, struct sk_psock *psock)
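[Editor's note] The user-visible effect is on poll/epoll: once the
verdict program has placed data (or a FIN) on the psock ingress queue, a
waiter like the one sketched below should now be woken. This is ordinary
epoll usage, independent of sockmap; error handling is omitted.

	#include <sys/epoll.h>

	static int wait_readable(int epfd, int fd)
	{
		struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
		struct epoll_event out;

		epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
		/* wakes up once the saved data_ready callback runs; EPOLLIN is
		 * also raised for FIN so recv() can observe the 0 return. */
		return epoll_wait(epfd, &out, 1, -1);
	}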
From patchwork Mon Mar 27 17:54:41 2023
From: John Fastabend
To: cong.wang@bytedance.com, jakub@cloudflare.com, daniel@iogearbox.net,
    lmb@isovalent.com, edumazet@google.com
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v2 07/12] bpf: sockmap incorrectly handling copied_seq
Date: Mon, 27 Mar 2023 10:54:41 -0700
Message-Id: <20230327175446.98151-8-john.fastabend@gmail.com>
In-Reply-To: <20230327175446.98151-1-john.fastabend@gmail.com>
References: <20230327175446.98151-1-john.fastabend@gmail.com>

The read_skb() logic is incrementing tcp->copied_seq, which is used for,
among other things, calculating how many outstanding bytes can be read
by the application. This results in application errors: if the
application does an ioctl(FIONREAD) we return zero because this is
calculated from the copied_seq value.

To fix this we move tcp->copied_seq accounting into the recv handler so
that we update it when the recvmsg() hook is called and data is in fact
copied into user buffers. This gives an accurate FIONREAD value as
expected and improves ACK handling. Before, we were calling
tcp_rcv_space_adjust(), which would update 'number of bytes copied to
user in last RTT'; that is wrong for programs returning SK_PASS, because
the bytes are only copied to the user when recvmsg is handled.

Doing the fix for recvmsg is straightforward, but fixing redirect and
SK_DROP pkts is a bit trickier. Build a tcp_eat_skb() helper and then
call this from the skmsg handlers.
This fixes another issue where a broken socket with a BPF program doing
a resubmit could hang the receiver. This happened because although
read_skb() consumed the skb through sock_drop(), it did not update the
copied_seq. Now if a single recv socket is redirecting to many sockets
(for example for load balancing) the receiver sk will be hung even
though we might expect it to continue. The hang comes from not updating
the copied_seq numbers and the memory pressure resulting from that.

We have a slight layering problem of calling tcp_eat_skb even if it's
not a TCP socket. To fix we could refactor and create per-type receiver
handlers. I decided this is more work than we want in the fix, and we
already have some small tweaks depending on the caller that use the
helper skb_bpf_strparser(). So we extend that a bit: always set the
strparser bit when it is in use, and then we can gate the copied_seq
updates on this.

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Signed-off-by: John Fastabend
---
 include/net/tcp.h  |  3 +++
 net/core/skmsg.c   |  7 +++++--
 net/ipv4/tcp.c     | 10 +---------
 net/ipv4/tcp_bpf.c | 28 +++++++++++++++++++++++++++-
 4 files changed, 36 insertions(+), 12 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index db9f828e9d1e..674044b8bdaf 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1467,6 +1467,8 @@ static inline void tcp_adjust_rcv_ssthresh(struct sock *sk)
 }
 
 void tcp_cleanup_rbuf(struct sock *sk, int copied);
+void __tcp_cleanup_rbuf(struct sock *sk, int copied);
+
 
 /* We provision sk_rcvbuf around 200% of sk_rcvlowat.
  * If 87.5 % (7/8) of the space has been consumed, we want to override
@@ -2321,6 +2323,7 @@ struct sk_psock;
 struct proto *tcp_bpf_get_proto(struct sock *sk, struct sk_psock *psock);
 int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore);
 void tcp_bpf_clone(const struct sock *sk, struct sock *newsk);
+void tcp_eat_skb(struct sock *sk, struct sk_buff *skb);
 #endif /* CONFIG_BPF_SYSCALL */
 
 int tcp_bpf_sendmsg_redir(struct sock *sk, bool ingress,

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 10e5481da662..b141b422697c 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -1051,11 +1051,14 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
 		mutex_unlock(&psock->work_mutex);
 		break;
 	case __SK_REDIRECT:
+		tcp_eat_skb(psock->sk, skb);
 		err = sk_psock_skb_redirect(psock, skb);
 		break;
 	case __SK_DROP:
 	default:
 out_free:
+		tcp_eat_skb(psock->sk, skb);
+		skb_bpf_redirect_clear(skb);
 		sock_drop(psock->sk, skb);
 	}
 
@@ -1100,8 +1103,7 @@ static void sk_psock_strp_read(struct strparser *strp, struct sk_buff *skb)
 		skb_dst_drop(skb);
 		skb_bpf_redirect_clear(skb);
 		ret = bpf_prog_run_pin_on_cpu(prog, skb);
-		if (ret == SK_PASS)
-			skb_bpf_set_strparser(skb);
+		skb_bpf_set_strparser(skb);
 		ret = sk_psock_map_verd(ret, skb_bpf_redirect_fetch(skb));
 		skb->sk = NULL;
 	}
@@ -1207,6 +1209,7 @@ static int sk_psock_verdict_recv(struct sock *sk, struct sk_buff *skb)
 	psock = sk_psock(sk);
 	if (unlikely(!psock)) {
 		len = 0;
+		tcp_eat_skb(sk, skb);
 		sock_drop(sk, skb);
 		goto out;
 	}

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 6572962b0237..e2594d8e3429 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1568,7 +1568,7 @@ static int tcp_peek_sndq(struct sock *sk, struct msghdr *msg, int len)
  * calculation of whether or not we must ACK for the sake of
  * a window update.
  */
-static void __tcp_cleanup_rbuf(struct sock *sk, int copied)
+void __tcp_cleanup_rbuf(struct sock *sk, int copied)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	bool time_to_ack = false;
@@ -1783,14 +1783,6 @@ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 			break;
 		}
 	}
-	WRITE_ONCE(tp->copied_seq, seq);
-
-	tcp_rcv_space_adjust(sk);
-
-	/* Clean up data we have read: This will do ACK frames. */
-	if (copied > 0)
-		__tcp_cleanup_rbuf(sk, copied);
-
 	return copied;
 }
 EXPORT_SYMBOL(tcp_read_skb);

diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index 2c75bbcbefed..6fb9978d72e3 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -11,6 +11,24 @@
 #include
 #include
 
+void tcp_eat_skb(struct sock *sk, struct sk_buff *skb)
+{
+	struct tcp_sock *tcp;
+	int copied;
+
+	if (!skb || !skb->len || !sk_is_tcp(sk))
+		return;
+
+	if (skb_bpf_strparser(skb))
+		return;
+
+	tcp = tcp_sk(sk);
+	copied = tcp->copied_seq + skb->len;
+	WRITE_ONCE(tcp->copied_seq, copied);
+	tcp_rcv_space_adjust(sk);
+	__tcp_cleanup_rbuf(sk, skb->len);
+}
+
 static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock,
 			   struct sk_msg *msg, u32 apply_bytes, int flags)
 {
@@ -198,8 +216,10 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 				  int flags,
 				  int *addr_len)
 {
+	struct tcp_sock *tcp = tcp_sk(sk);
+	u32 seq = tcp->copied_seq;
 	struct sk_psock *psock;
-	int copied;
+	int copied = 0;
 
 	if (unlikely(flags & MSG_ERRQUEUE))
 		return inet_recv_error(sk, msg, len, addr_len);
@@ -241,9 +261,11 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 
 		if (is_fin) {
 			copied = 0;
+			seq++;
 			goto out;
 		}
 	}
+	seq += copied;
 	if (!copied) {
 		long timeo;
 		int data;
@@ -281,6 +303,10 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 		copied = -EAGAIN;
 	}
 out:
+	WRITE_ONCE(tcp->copied_seq, seq);
+	tcp_rcv_space_adjust(sk);
+	if (copied > 0)
+		__tcp_cleanup_rbuf(sk, copied);
 	release_sock(sk);
 	sk_psock_put(sk, psock);
 	return copied;
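[Editor's note] The symptom described above is easiest to observe with
FIONREAD. The small userspace check below is standard ioctl usage,
independent of the patch; it is the kind of call that returned zero
before the copied_seq accounting was moved into the recvmsg() path.

	#include <stdio.h>
	#include <sys/ioctl.h>

	static int bytes_readable(int fd)
	{
		int avail = 0;

		/* reports bytes queued but not yet copied to the application;
		 * for sockmap TCP sockets this is only accurate once copied_seq
		 * is advanced at recvmsg() time, as done by this patch. */
		if (ioctl(fd, FIONREAD, &avail) < 0) {
			perror("ioctl(FIONREAD)");
			return -1;
		}
		return avail;
	}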
From patchwork Mon Mar 27 17:54:42 2023
From: John Fastabend
To: cong.wang@bytedance.com, jakub@cloudflare.com, daniel@iogearbox.net,
    lmb@isovalent.com, edumazet@google.com
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v2 08/12] bpf: sockmap, pull socket helpers out of listen test for general use
Date: Mon, 27 Mar 2023 10:54:42 -0700
Message-Id: <20230327175446.98151-9-john.fastabend@gmail.com>
In-Reply-To: <20230327175446.98151-1-john.fastabend@gmail.com>
References: <20230327175446.98151-1-john.fastabend@gmail.com>

No functional change here; we merely pull the helpers in
sockmap_listen.c into a header file so we can use them in other
programs. The tests we are about to add aren't really _listen tests, so
it doesn't make sense to add them there.

Signed-off-by: John Fastabend
---
 .../bpf/prog_tests/sockmap_helpers.h          | 249 ++++++++++++++++++
 .../selftests/bpf/prog_tests/sockmap_listen.c | 245 +----------------
 2 files changed, 250 insertions(+), 244 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h

diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h b/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h
new file mode 100644
index 000000000000..bff56844e745
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h
@@ -0,0 +1,249 @@
+#ifndef __SOCKAMP_HELPERS__
+#define __SOCKMAP_HELPERS__
+
+#define IO_TIMEOUT_SEC 30
+#define MAX_STRERR_LEN 256
+#define MAX_TEST_NAME 80
+
+#define __always_unused __attribute__((__unused__))
+
+#define _FAIL(errnum, fmt...) \
+	({ \
+		error_at_line(0, (errnum), __func__, __LINE__, fmt); \
+		CHECK_FAIL(true); \
+	})
+#define FAIL(fmt...) _FAIL(0, fmt)
+#define FAIL_ERRNO(fmt...) _FAIL(errno, fmt)
_FAIL(errno, fmt) +#define FAIL_LIBBPF(err, msg) \ + ({ \ + char __buf[MAX_STRERR_LEN]; \ + libbpf_strerror((err), __buf, sizeof(__buf)); \ + FAIL("%s: %s", (msg), __buf); \ + }) + +/* Wrappers that fail the test on error and report it. */ + +#define xaccept_nonblock(fd, addr, len) \ + ({ \ + int __ret = \ + accept_timeout((fd), (addr), (len), IO_TIMEOUT_SEC); \ + if (__ret == -1) \ + FAIL_ERRNO("accept"); \ + __ret; \ + }) + +#define xbind(fd, addr, len) \ + ({ \ + int __ret = bind((fd), (addr), (len)); \ + if (__ret == -1) \ + FAIL_ERRNO("bind"); \ + __ret; \ + }) + +#define xclose(fd) \ + ({ \ + int __ret = close((fd)); \ + if (__ret == -1) \ + FAIL_ERRNO("close"); \ + __ret; \ + }) + +#define xconnect(fd, addr, len) \ + ({ \ + int __ret = connect((fd), (addr), (len)); \ + if (__ret == -1) \ + FAIL_ERRNO("connect"); \ + __ret; \ + }) + +#define xgetsockname(fd, addr, len) \ + ({ \ + int __ret = getsockname((fd), (addr), (len)); \ + if (__ret == -1) \ + FAIL_ERRNO("getsockname"); \ + __ret; \ + }) + +#define xgetsockopt(fd, level, name, val, len) \ + ({ \ + int __ret = getsockopt((fd), (level), (name), (val), (len)); \ + if (__ret == -1) \ + FAIL_ERRNO("getsockopt(" #name ")"); \ + __ret; \ + }) + +#define xlisten(fd, backlog) \ + ({ \ + int __ret = listen((fd), (backlog)); \ + if (__ret == -1) \ + FAIL_ERRNO("listen"); \ + __ret; \ + }) + +#define xsetsockopt(fd, level, name, val, len) \ + ({ \ + int __ret = setsockopt((fd), (level), (name), (val), (len)); \ + if (__ret == -1) \ + FAIL_ERRNO("setsockopt(" #name ")"); \ + __ret; \ + }) + +#define xsend(fd, buf, len, flags) \ + ({ \ + ssize_t __ret = send((fd), (buf), (len), (flags)); \ + if (__ret == -1) \ + FAIL_ERRNO("send"); \ + __ret; \ + }) + +#define xrecv_nonblock(fd, buf, len, flags) \ + ({ \ + ssize_t __ret = recv_timeout((fd), (buf), (len), (flags), \ + IO_TIMEOUT_SEC); \ + if (__ret == -1) \ + FAIL_ERRNO("recv"); \ + __ret; \ + }) + +#define xsocket(family, sotype, flags) \ + ({ \ + int __ret = socket(family, sotype, flags); \ + if (__ret == -1) \ + FAIL_ERRNO("socket"); \ + __ret; \ + }) + +#define xbpf_map_delete_elem(fd, key) \ + ({ \ + int __ret = bpf_map_delete_elem((fd), (key)); \ + if (__ret < 0) \ + FAIL_ERRNO("map_delete"); \ + __ret; \ + }) + +#define xbpf_map_lookup_elem(fd, key, val) \ + ({ \ + int __ret = bpf_map_lookup_elem((fd), (key), (val)); \ + if (__ret < 0) \ + FAIL_ERRNO("map_lookup"); \ + __ret; \ + }) + +#define xbpf_map_update_elem(fd, key, val, flags) \ + ({ \ + int __ret = bpf_map_update_elem((fd), (key), (val), (flags)); \ + if (__ret < 0) \ + FAIL_ERRNO("map_update"); \ + __ret; \ + }) + +#define xbpf_prog_attach(prog, target, type, flags) \ + ({ \ + int __ret = \ + bpf_prog_attach((prog), (target), (type), (flags)); \ + if (__ret < 0) \ + FAIL_ERRNO("prog_attach(" #type ")"); \ + __ret; \ + }) + +#define xbpf_prog_detach2(prog, target, type) \ + ({ \ + int __ret = bpf_prog_detach2((prog), (target), (type)); \ + if (__ret < 0) \ + FAIL_ERRNO("prog_detach2(" #type ")"); \ + __ret; \ + }) + +#define xpthread_create(thread, attr, func, arg) \ + ({ \ + int __ret = pthread_create((thread), (attr), (func), (arg)); \ + errno = __ret; \ + if (__ret) \ + FAIL_ERRNO("pthread_create"); \ + __ret; \ + }) + +#define xpthread_join(thread, retval) \ + ({ \ + int __ret = pthread_join((thread), (retval)); \ + errno = __ret; \ + if (__ret) \ + FAIL_ERRNO("pthread_join"); \ + __ret; \ + }) + +static inline int poll_read(int fd, unsigned int timeout_sec) +{ + struct timeval timeout = { .tv_sec = timeout_sec }; + fd_set 
rfds; + int r; + + FD_ZERO(&rfds); + FD_SET(fd, &rfds); + + r = select(fd + 1, &rfds, NULL, NULL, &timeout); + if (r == 0) + errno = ETIME; + + return r == 1 ? 0 : -1; +} + +static inline int accept_timeout(int fd, struct sockaddr *addr, socklen_t *len, + unsigned int timeout_sec) +{ + if (poll_read(fd, timeout_sec)) + return -1; + + return accept(fd, addr, len); +} + +static inline int recv_timeout(int fd, void *buf, size_t len, int flags, + unsigned int timeout_sec) +{ + if (poll_read(fd, timeout_sec)) + return -1; + + return recv(fd, buf, len, flags); +} + +static inline void init_addr_loopback4(struct sockaddr_storage *ss, socklen_t *len) +{ + struct sockaddr_in *addr4 = memset(ss, 0, sizeof(*ss)); + + addr4->sin_family = AF_INET; + addr4->sin_port = 0; + addr4->sin_addr.s_addr = htonl(INADDR_LOOPBACK); + *len = sizeof(*addr4); +} + +static inline void init_addr_loopback6(struct sockaddr_storage *ss, socklen_t *len) +{ + struct sockaddr_in6 *addr6 = memset(ss, 0, sizeof(*ss)); + + addr6->sin6_family = AF_INET6; + addr6->sin6_port = 0; + addr6->sin6_addr = in6addr_loopback; + *len = sizeof(*addr6); +} + +static inline void init_addr_loopback(int family, struct sockaddr_storage *ss, + socklen_t *len) +{ + switch (family) { + case AF_INET: + init_addr_loopback4(ss, len); + return; + case AF_INET6: + init_addr_loopback6(ss, len); + return; + default: + FAIL("unsupported address family %d", family); + } +} + +static inline struct sockaddr *sockaddr(struct sockaddr_storage *ss) +{ + return (struct sockaddr *)ss; +} + +#endif // __SOCKMAP_HELPERS__ diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c index 567e07c19ecc..0f0cddd4e15e 100644 --- a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c @@ -26,250 +26,7 @@ #include "test_progs.h" #include "test_sockmap_listen.skel.h" -#define IO_TIMEOUT_SEC 30 -#define MAX_STRERR_LEN 256 -#define MAX_TEST_NAME 80 - -#define __always_unused __attribute__((__unused__)) - -#define _FAIL(errnum, fmt...) \ - ({ \ - error_at_line(0, (errnum), __func__, __LINE__, fmt); \ - CHECK_FAIL(true); \ - }) -#define FAIL(fmt...) _FAIL(0, fmt) -#define FAIL_ERRNO(fmt...) _FAIL(errno, fmt) -#define FAIL_LIBBPF(err, msg) \ - ({ \ - char __buf[MAX_STRERR_LEN]; \ - libbpf_strerror((err), __buf, sizeof(__buf)); \ - FAIL("%s: %s", (msg), __buf); \ - }) - -/* Wrappers that fail the test on error and report it. 
*/ - -#define xaccept_nonblock(fd, addr, len) \ - ({ \ - int __ret = \ - accept_timeout((fd), (addr), (len), IO_TIMEOUT_SEC); \ - if (__ret == -1) \ - FAIL_ERRNO("accept"); \ - __ret; \ - }) - -#define xbind(fd, addr, len) \ - ({ \ - int __ret = bind((fd), (addr), (len)); \ - if (__ret == -1) \ - FAIL_ERRNO("bind"); \ - __ret; \ - }) - -#define xclose(fd) \ - ({ \ - int __ret = close((fd)); \ - if (__ret == -1) \ - FAIL_ERRNO("close"); \ - __ret; \ - }) - -#define xconnect(fd, addr, len) \ - ({ \ - int __ret = connect((fd), (addr), (len)); \ - if (__ret == -1) \ - FAIL_ERRNO("connect"); \ - __ret; \ - }) - -#define xgetsockname(fd, addr, len) \ - ({ \ - int __ret = getsockname((fd), (addr), (len)); \ - if (__ret == -1) \ - FAIL_ERRNO("getsockname"); \ - __ret; \ - }) - -#define xgetsockopt(fd, level, name, val, len) \ - ({ \ - int __ret = getsockopt((fd), (level), (name), (val), (len)); \ - if (__ret == -1) \ - FAIL_ERRNO("getsockopt(" #name ")"); \ - __ret; \ - }) - -#define xlisten(fd, backlog) \ - ({ \ - int __ret = listen((fd), (backlog)); \ - if (__ret == -1) \ - FAIL_ERRNO("listen"); \ - __ret; \ - }) - -#define xsetsockopt(fd, level, name, val, len) \ - ({ \ - int __ret = setsockopt((fd), (level), (name), (val), (len)); \ - if (__ret == -1) \ - FAIL_ERRNO("setsockopt(" #name ")"); \ - __ret; \ - }) - -#define xsend(fd, buf, len, flags) \ - ({ \ - ssize_t __ret = send((fd), (buf), (len), (flags)); \ - if (__ret == -1) \ - FAIL_ERRNO("send"); \ - __ret; \ - }) - -#define xrecv_nonblock(fd, buf, len, flags) \ - ({ \ - ssize_t __ret = recv_timeout((fd), (buf), (len), (flags), \ - IO_TIMEOUT_SEC); \ - if (__ret == -1) \ - FAIL_ERRNO("recv"); \ - __ret; \ - }) - -#define xsocket(family, sotype, flags) \ - ({ \ - int __ret = socket(family, sotype, flags); \ - if (__ret == -1) \ - FAIL_ERRNO("socket"); \ - __ret; \ - }) - -#define xbpf_map_delete_elem(fd, key) \ - ({ \ - int __ret = bpf_map_delete_elem((fd), (key)); \ - if (__ret < 0) \ - FAIL_ERRNO("map_delete"); \ - __ret; \ - }) - -#define xbpf_map_lookup_elem(fd, key, val) \ - ({ \ - int __ret = bpf_map_lookup_elem((fd), (key), (val)); \ - if (__ret < 0) \ - FAIL_ERRNO("map_lookup"); \ - __ret; \ - }) - -#define xbpf_map_update_elem(fd, key, val, flags) \ - ({ \ - int __ret = bpf_map_update_elem((fd), (key), (val), (flags)); \ - if (__ret < 0) \ - FAIL_ERRNO("map_update"); \ - __ret; \ - }) - -#define xbpf_prog_attach(prog, target, type, flags) \ - ({ \ - int __ret = \ - bpf_prog_attach((prog), (target), (type), (flags)); \ - if (__ret < 0) \ - FAIL_ERRNO("prog_attach(" #type ")"); \ - __ret; \ - }) - -#define xbpf_prog_detach2(prog, target, type) \ - ({ \ - int __ret = bpf_prog_detach2((prog), (target), (type)); \ - if (__ret < 0) \ - FAIL_ERRNO("prog_detach2(" #type ")"); \ - __ret; \ - }) - -#define xpthread_create(thread, attr, func, arg) \ - ({ \ - int __ret = pthread_create((thread), (attr), (func), (arg)); \ - errno = __ret; \ - if (__ret) \ - FAIL_ERRNO("pthread_create"); \ - __ret; \ - }) - -#define xpthread_join(thread, retval) \ - ({ \ - int __ret = pthread_join((thread), (retval)); \ - errno = __ret; \ - if (__ret) \ - FAIL_ERRNO("pthread_join"); \ - __ret; \ - }) - -static int poll_read(int fd, unsigned int timeout_sec) -{ - struct timeval timeout = { .tv_sec = timeout_sec }; - fd_set rfds; - int r; - - FD_ZERO(&rfds); - FD_SET(fd, &rfds); - - r = select(fd + 1, &rfds, NULL, NULL, &timeout); - if (r == 0) - errno = ETIME; - - return r == 1 ? 
0 : -1; -} - -static int accept_timeout(int fd, struct sockaddr *addr, socklen_t *len, - unsigned int timeout_sec) -{ - if (poll_read(fd, timeout_sec)) - return -1; - - return accept(fd, addr, len); -} - -static int recv_timeout(int fd, void *buf, size_t len, int flags, - unsigned int timeout_sec) -{ - if (poll_read(fd, timeout_sec)) - return -1; - - return recv(fd, buf, len, flags); -} - -static void init_addr_loopback4(struct sockaddr_storage *ss, socklen_t *len) -{ - struct sockaddr_in *addr4 = memset(ss, 0, sizeof(*ss)); - - addr4->sin_family = AF_INET; - addr4->sin_port = 0; - addr4->sin_addr.s_addr = htonl(INADDR_LOOPBACK); - *len = sizeof(*addr4); -} - -static void init_addr_loopback6(struct sockaddr_storage *ss, socklen_t *len) -{ - struct sockaddr_in6 *addr6 = memset(ss, 0, sizeof(*ss)); - - addr6->sin6_family = AF_INET6; - addr6->sin6_port = 0; - addr6->sin6_addr = in6addr_loopback; - *len = sizeof(*addr6); -} - -static void init_addr_loopback(int family, struct sockaddr_storage *ss, - socklen_t *len) -{ - switch (family) { - case AF_INET: - init_addr_loopback4(ss, len); - return; - case AF_INET6: - init_addr_loopback6(ss, len); - return; - default: - FAIL("unsupported address family %d", family); - } -} - -static inline struct sockaddr *sockaddr(struct sockaddr_storage *ss) -{ - return (struct sockaddr *)ss; -} +#include "sockmap_helpers.h" static int enable_reuseport(int s, int progfd) { From patchwork Mon Mar 27 17:54:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: John Fastabend X-Patchwork-Id: 13189783 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B6486C6FD1D for ; Mon, 27 Mar 2023 17:55:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232641AbjC0Rz2 (ORCPT ); Mon, 27 Mar 2023 13:55:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34630 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232638AbjC0RzN (ORCPT ); Mon, 27 Mar 2023 13:55:13 -0400 Received: from mail-pf1-x434.google.com (mail-pf1-x434.google.com [IPv6:2607:f8b0:4864:20::434]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F067C3ABA; Mon, 27 Mar 2023 10:55:04 -0700 (PDT) Received: by mail-pf1-x434.google.com with SMTP id g7so6257924pfu.2; Mon, 27 Mar 2023 10:55:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679939704; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=+TaDLCIXSz9eYXTku+ilKJmltzAIAJaTUBVD0OtqxmY=; b=YPpVdeNY4guxmQUp/4GCkw28HCWvNTDfAmYfwOVXLqUqpbFgjpfAyer0YpIyb718Jh CODAhehCiZoJwZzqYxNICQuVkL4tuGo1KewwxNeDCheNrZVaR2bPodjOBAPQGOfnFtRP swfgPDK3p9K1rs0PWmFR/reZBwLBlJCR6dWrJjeG341hlf3FsTsijqhqX2uqmIYmwJt1 qpvfiUM9XU9LBewELuRznLVjub53fx/0cUzZHl8u6SjpZkAUSnlRoBV5Dxs8cs7EoyVh HUOqxTdQmK/H07PxUhMBfrhZQfs++XHq9Z9+dM9qqiUhMyhQGuSlg4OCDKAhgitf50Nb WSxQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679939704; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=+TaDLCIXSz9eYXTku+ilKJmltzAIAJaTUBVD0OtqxmY=; 
b=O+tWbhFjxCGFmo5TQrKk5Mt2yvv9NHs27Xg9W1cpmrol+VeB7yaymIjKaSDY8TOTh1 6/ZrSCXRFblpfWjRBp6FmhRyN87Kys1hZqx0fPX8cDUunH44vvY/CWCPj9y9Zsv1CrTW +KVXgYJnfnOVIvTPOOUILNnGPQ0YHO3xAaOFLcq1DAE/9XEwwOPk1bvtJk2+pZpBke3r na3DWo5uvggRodPzbctaaX6Y49Ffn8P8C7/VJXI+EbYP+272mFdD1t9S97KQcubffcek 4DLt+mwwh2caJzV1v/LIYmqO/IsZPwVBeWi8huuBOs5NJVWls3ursv6X79EDbEASj+s3 zn2w== X-Gm-Message-State: AAQBX9cT6vDVh7TWdLprYH8/BtRkgqKYLVUlzlZqgfNVvH+/CbJlDyGX 6r2A/uTxq8QpcuZVaz9mKFU= X-Google-Smtp-Source: AKy350YYDlc9JVtNEQeXVSjU9h6FFuZbkR2TjV6RdVjhiooy4URbReKE54qceWCoXqp4bVXrt1iDsg== X-Received: by 2002:a62:1bc9:0:b0:62a:1267:2045 with SMTP id b192-20020a621bc9000000b0062a12672045mr12258146pfb.34.1679939704267; Mon, 27 Mar 2023 10:55:04 -0700 (PDT) Received: from john.lan ([98.97.117.131]) by smtp.gmail.com with ESMTPSA id r1-20020a62e401000000b005a8ba70315bsm19408316pfh.6.2023.03.27.10.55.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 27 Mar 2023 10:55:03 -0700 (PDT) From: John Fastabend To: cong.wang@bytedance.com, jakub@cloudflare.com, daniel@iogearbox.net, lmb@isovalent.com, edumazet@google.com Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org, ast@kernel.org, andrii@kernel.org, will@isovalent.com Subject: [PATCH bpf v2 09/12] bpf: sockmap, build helper to create connected socket pair Date: Mon, 27 Mar 2023 10:54:43 -0700 Message-Id: <20230327175446.98151-10-john.fastabend@gmail.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20230327175446.98151-1-john.fastabend@gmail.com> References: <20230327175446.98151-1-john.fastabend@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net A common operation for testing is to spin up a pair of sockets that are connected. Then we can use these to run specific tests that need to send data, check BPF programs and so on. The sockmap_listen programs already have this logic lets move it into the new sockmap_helpers header file for general use. 
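As a rough sketch of how a test is expected to consume these helpers once they live in sockmap_helpers.h (the function below is illustrative only; the map fd, the traffic checks and the function name are placeholders, not code from this series -- only the helper names come from the header):

#include <sys/socket.h>
#include "sockmap_helpers.h"

static void example_connected_pair(int sock_mapfd)
{
        int s, c0, c1, p0, p1;

        /* Loopback listener the connected pairs are derived from. */
        s = socket_loopback(AF_INET, SOCK_STREAM);
        if (s < 0)
                return;

        /* Two connected client/peer pairs: c0<->p0 and c1<->p1. */
        if (create_socket_pairs(s, AF_INET, SOCK_STREAM, &c0, &c1, &p0, &p1))
                goto close_srv;

        /* Stash the peer sockets at keys 0 and 1 for a verdict program. */
        if (add_to_sockmap(sock_mapfd, p0, p1))
                goto close_pairs;

        /* ... send()/recv() across the pairs and assert the redirect ... */

close_pairs:
        xclose(c0);
        xclose(p0);
        xclose(c1);
        xclose(p1);
close_srv:
        xclose(s);
}

This is the same pattern redir_to_connected() is reduced to later in this patch.
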
Signed-off-by: John Fastabend --- .../bpf/prog_tests/sockmap_helpers.h | 125 ++++++++++++++++++ .../selftests/bpf/prog_tests/sockmap_listen.c | 107 +-------------- 2 files changed, 130 insertions(+), 102 deletions(-) diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h b/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h index bff56844e745..54e3a019ba72 100644 --- a/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h @@ -246,4 +246,129 @@ static inline struct sockaddr *sockaddr(struct sockaddr_storage *ss) return (struct sockaddr *)ss; } +static inline int add_to_sockmap(int sock_mapfd, int fd1, int fd2) +{ + u64 value; + u32 key; + int err; + + key = 0; + value = fd1; + err = xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST); + if (err) + return err; + + key = 1; + value = fd2; + return xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST); +} + +static inline int create_socket_pairs(int s, int family, int sotype, + int *c0, int *c1, int *p0, int *p1) +{ + struct sockaddr_storage addr; + socklen_t len; + int err = 0; + + len = sizeof(addr); + err = xgetsockname(s, sockaddr(&addr), &len); + if (err) + return err; + + *c0 = xsocket(family, sotype, 0); + if (*c0 < 0) + return errno; + err = xconnect(*c0, sockaddr(&addr), len); + if (err) { + err = errno; + goto close_cli0; + } + + *p0 = xaccept_nonblock(s, NULL, NULL); + if (*p0 < 0) { + err = errno; + goto close_cli0; + } + + *c1 = xsocket(family, sotype, 0); + if (*c1 < 0) { + err = errno; + goto close_peer0; + } + err = xconnect(*c1, sockaddr(&addr), len); + if (err) { + err = errno; + goto close_cli1; + } + + *p1 = xaccept_nonblock(s, NULL, NULL); + if (*p1 < 0) { + err = errno; + goto close_peer1; + } + return err; +close_peer1: + close(*p1); +close_cli1: + close(*c1); +close_peer0: + close(*p0); +close_cli0: + close(*c0); + return err; +} + +static inline int enable_reuseport(int s, int progfd) +{ + int err, one = 1; + + err = xsetsockopt(s, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)); + if (err) + return -1; + err = xsetsockopt(s, SOL_SOCKET, SO_ATTACH_REUSEPORT_EBPF, &progfd, + sizeof(progfd)); + if (err) + return -1; + + return 0; +} + +static inline int socket_loopback_reuseport(int family, int sotype, int progfd) +{ + struct sockaddr_storage addr; + socklen_t len; + int err, s; + + init_addr_loopback(family, &addr, &len); + + s = xsocket(family, sotype, 0); + if (s == -1) + return -1; + + if (progfd >= 0) + enable_reuseport(s, progfd); + + err = xbind(s, sockaddr(&addr), len); + if (err) + goto close; + + if (sotype & SOCK_DGRAM) + return s; + + err = xlisten(s, SOMAXCONN); + if (err) + goto close; + + return s; +close: + xclose(s); + return -1; +} + +static inline int socket_loopback(int family, int sotype) +{ + return socket_loopback_reuseport(family, sotype, -1); +} + + #endif // __SOCKMAP_HELPERS__ diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c index 0f0cddd4e15e..f3913ba9e899 100644 --- a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c @@ -28,58 +28,6 @@ #include "sockmap_helpers.h" -static int enable_reuseport(int s, int progfd) -{ - int err, one = 1; - - err = xsetsockopt(s, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)); - if (err) - return -1; - err = xsetsockopt(s, SOL_SOCKET, SO_ATTACH_REUSEPORT_EBPF, &progfd, - sizeof(progfd)); - if (err) - return -1; - - return 0; -} 
- -static int socket_loopback_reuseport(int family, int sotype, int progfd) -{ - struct sockaddr_storage addr; - socklen_t len; - int err, s; - - init_addr_loopback(family, &addr, &len); - - s = xsocket(family, sotype, 0); - if (s == -1) - return -1; - - if (progfd >= 0) - enable_reuseport(s, progfd); - - err = xbind(s, sockaddr(&addr), len); - if (err) - goto close; - - if (sotype & SOCK_DGRAM) - return s; - - err = xlisten(s, SOMAXCONN); - if (err) - goto close; - - return s; -close: - xclose(s); - return -1; -} - -static int socket_loopback(int family, int sotype) -{ - return socket_loopback_reuseport(family, sotype, -1); -} - static void test_insert_invalid(struct test_sockmap_listen *skel __always_unused, int family, int sotype, int mapfd) { @@ -722,31 +670,12 @@ static const char *redir_mode_str(enum redir_mode mode) } } -static int add_to_sockmap(int sock_mapfd, int fd1, int fd2) -{ - u64 value; - u32 key; - int err; - - key = 0; - value = fd1; - err = xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST); - if (err) - return err; - - key = 1; - value = fd2; - return xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST); -} - static void redir_to_connected(int family, int sotype, int sock_mapfd, int verd_mapfd, enum redir_mode mode) { const char *log_prefix = redir_mode_str(mode); - struct sockaddr_storage addr; int s, c0, c1, p0, p1; unsigned int pass; - socklen_t len; int err, n; u32 key; char b; @@ -757,36 +686,13 @@ static void redir_to_connected(int family, int sotype, int sock_mapfd, if (s < 0) return; - len = sizeof(addr); - err = xgetsockname(s, sockaddr(&addr), &len); + err = create_socket_pairs(s, family, sotype, &c0, &c1, &p0, &p1); if (err) goto close_srv; - c0 = xsocket(family, sotype, 0); - if (c0 < 0) - goto close_srv; - err = xconnect(c0, sockaddr(&addr), len); - if (err) - goto close_cli0; - - p0 = xaccept_nonblock(s, NULL, NULL); - if (p0 < 0) - goto close_cli0; - - c1 = xsocket(family, sotype, 0); - if (c1 < 0) - goto close_peer0; - err = xconnect(c1, sockaddr(&addr), len); - if (err) - goto close_cli1; - - p1 = xaccept_nonblock(s, NULL, NULL); - if (p1 < 0) - goto close_cli1; - err = add_to_sockmap(sock_mapfd, p0, p1); if (err) - goto close_peer1; + goto close; n = write(mode == REDIR_INGRESS ? 
c1 : p1, "a", 1); if (n < 0) @@ -794,12 +700,12 @@ static void redir_to_connected(int family, int sotype, int sock_mapfd, if (n == 0) FAIL("%s: incomplete write", log_prefix); if (n < 1) - goto close_peer1; + goto close; key = SK_PASS; err = xbpf_map_lookup_elem(verd_mapfd, &key, &pass); if (err) - goto close_peer1; + goto close; if (pass != 1) FAIL("%s: want pass count 1, have %d", log_prefix, pass); n = recv_timeout(c0, &b, 1, 0, IO_TIMEOUT_SEC); @@ -808,13 +714,10 @@ static void redir_to_connected(int family, int sotype, int sock_mapfd, if (n == 0) FAIL("%s: incomplete recv", log_prefix); -close_peer1: +close: xclose(p1); -close_cli1: xclose(c1); -close_peer0: xclose(p0); -close_cli0: xclose(c0); close_srv: xclose(s); From patchwork Mon Mar 27 17:54:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: John Fastabend X-Patchwork-Id: 13189784 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 84837C76195 for ; Mon, 27 Mar 2023 17:55:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232245AbjC0Rzl (ORCPT ); Mon, 27 Mar 2023 13:55:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34488 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232656AbjC0RzX (ORCPT ); Mon, 27 Mar 2023 13:55:23 -0400 Received: from mail-pf1-x429.google.com (mail-pf1-x429.google.com [IPv6:2607:f8b0:4864:20::429]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7FF5044BA; Mon, 27 Mar 2023 10:55:07 -0700 (PDT) Received: by mail-pf1-x429.google.com with SMTP id bt19so6259338pfb.3; Mon, 27 Mar 2023 10:55:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679939707; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=83CgZEWVlvtw6YRoN8OFaQuF2cF8yC9eL6KIAikLeN4=; b=CsbI+0CTpp48SqfOwY7TN1kVV6GQ6w3iHUzlGOTg5xHdqvw/u/pHan2HGLrKi/TNGo S6+dCttrfUU5OlJnDcxosHvCYQM/2FfAJBIc5YptPR2uvy3biHxJ0kc+JCldchlXSBWn QsUbNwWcoEvM7IXVGW3kfV++BDLrvoRXx3S9MyyVKQgDJkvUvM8L+zY8HvK6elHSDoMm mJkQXgtovRTcIfyY5jWlFvfMDNEbODbP4wpyLMibC+oKxCqOhRmbSD273dMzCg/CTfhb NiKOEJLLL4WnZ3rktuczY9XrqKr03Tp0446sIAMqBuKvY+zWjeDglBk4h/UNS+wrWf9+ Vh4g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679939707; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=83CgZEWVlvtw6YRoN8OFaQuF2cF8yC9eL6KIAikLeN4=; b=CYeqBSXnMQnRaQU5hKU92+3RAucpE8Xw/MtIIOd3bpjxbF1r/pX8L4oUuOIwNgF9wp lzpg371ME6a4UemqVC3nnizdrzNFyVfv1+8zgOP3+8pfai0mHw6H73uPd29hI+3vA32h RYYM2IJ1uyURZffPz/Exswvlpq1ZncHTf47hbX0umpI71i7KjcoYDSjOK3ARdP6MLegt u78/rGXh/UrjyEnTb54yrWwdGeUh80XiQmRw35Z9mHXw/t9YAKYe9TekKUYwdAZMTbMP eU/4NEh4IwLxQ9QWFWWfzR8f0qBBJM4tnI3zv925/zZknjMSoCmwI1o0ikf8Nz4n72+u hTYA== X-Gm-Message-State: AAQBX9cLKvJxudNMR31q6R1j9ziClGu8KehoKjlD12Kei6cW3MSqsy1Y BUshr4c0W0D6JTH4lrbQWnI= X-Google-Smtp-Source: AKy350ZYA/9ZjDtDOYZ78V+4MgFsBIgwyfurc0R79MU4wHeF/wpdKfvQcp+uv0Ed4Ev1g7AWKGEYxQ== X-Received: by 2002:aa7:8bc3:0:b0:627:f964:7594 with SMTP id s3-20020aa78bc3000000b00627f9647594mr12635795pfd.12.1679939706876; Mon, 27 
Mar 2023 10:55:06 -0700 (PDT) Received: from john.lan ([98.97.117.131]) by smtp.gmail.com with ESMTPSA id r1-20020a62e401000000b005a8ba70315bsm19408316pfh.6.2023.03.27.10.55.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 27 Mar 2023 10:55:06 -0700 (PDT) From: John Fastabend To: cong.wang@bytedance.com, jakub@cloudflare.com, daniel@iogearbox.net, lmb@isovalent.com, edumazet@google.com Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org, ast@kernel.org, andrii@kernel.org, will@isovalent.com Subject: [PATCH bpf v2 10/12] bpf: sockmap, test shutdown() correctly exits epoll and recv()=0 Date: Mon, 27 Mar 2023 10:54:44 -0700 Message-Id: <20230327175446.98151-11-john.fastabend@gmail.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20230327175446.98151-1-john.fastabend@gmail.com> References: <20230327175446.98151-1-john.fastabend@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net When session gracefully shutdowns epoll needs to wake up and any recv() readers should return 0 not the -EAGAIN they previously returned. Note we use epoll instead of select to test the epoll wake on shutdown event as well. Signed-off-by: John Fastabend --- .../selftests/bpf/prog_tests/sockmap_basic.c | 68 +++++++++++++++++++ .../bpf/progs/test_sockmap_pass_prog.c | 32 +++++++++ 2 files changed, 100 insertions(+) create mode 100644 tools/testing/selftests/bpf/progs/test_sockmap_pass_prog.c diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c index 0aa088900699..8f0d60f5c847 100644 --- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c @@ -2,6 +2,7 @@ // Copyright (c) 2020 Cloudflare #include #include +#include #include "test_progs.h" #include "test_skmsg_load_helpers.skel.h" @@ -9,8 +10,11 @@ #include "test_sockmap_invalid_update.skel.h" #include "test_sockmap_skb_verdict_attach.skel.h" #include "test_sockmap_progs_query.skel.h" +#include "test_sockmap_pass_prog.skel.h" #include "bpf_iter_sockmap.skel.h" +#include "sockmap_helpers.h" + #define TCP_REPAIR 19 /* TCP sock is under repair right now */ #define TCP_REPAIR_ON 1 @@ -350,6 +354,68 @@ static void test_sockmap_progs_query(enum bpf_attach_type attach_type) test_sockmap_progs_query__destroy(skel); } +#define MAX_EVENTS 10 +static void test_sockmap_skb_verdict_shutdown(void) +{ + int n, err, map, verdict, s, c0, c1, p0, p1; + struct epoll_event ev, events[MAX_EVENTS]; + struct test_sockmap_pass_prog *skel; + int epollfd; + int zero = 0; + char b; + + skel = test_sockmap_pass_prog__open_and_load(); + if (!ASSERT_OK_PTR(skel, "open_and_load")) + return; + + verdict = bpf_program__fd(skel->progs.prog_skb_verdict); + map = bpf_map__fd(skel->maps.sock_map_rx); + + err = bpf_prog_attach(verdict, map, BPF_SK_SKB_STREAM_VERDICT, 0); + if (!ASSERT_OK(err, "bpf_prog_attach")) + goto out; + + s = socket_loopback(AF_INET, SOCK_STREAM); + if (s < 0) + goto out; + err = create_socket_pairs(s, AF_INET, SOCK_STREAM, &c0, &c1, &p0, &p1); + if (err < 0) + goto out; + + err = bpf_map_update_elem(map, &zero, &c1, BPF_NOEXIST); + if (err < 0) + goto out_close; + + shutdown(c0, SHUT_RDWR); + shutdown(p1, SHUT_WR); + + ev.events = EPOLLIN; + ev.data.fd = c1; + + epollfd = epoll_create1(0); + if (!ASSERT_GT(epollfd, -1, "epoll_create(0)")) + goto out_close; + err = epoll_ctl(epollfd, EPOLL_CTL_ADD, c1, &ev); + if (!ASSERT_OK(err, 
"epoll_ctl(EPOLL_CTL_ADD)")) + goto out_close; + err = epoll_wait(epollfd, events, MAX_EVENTS, -1); + if (!ASSERT_EQ(err, 1, "epoll_wait(fd)")) + goto out_close; + + n = recv(c1, &b, 1, SOCK_NONBLOCK); + ASSERT_EQ(n, 0, "recv_timeout(fin)"); + n = recv(p0, &b, 1, SOCK_NONBLOCK); + ASSERT_EQ(n, 0, "recv_timeout(fin)"); + +out_close: + close(c0); + close(p0); + close(c1); + close(p1); +out: + test_sockmap_pass_prog__destroy(skel); +} + void test_sockmap_basic(void) { if (test__start_subtest("sockmap create_update_free")) @@ -384,4 +450,6 @@ void test_sockmap_basic(void) test_sockmap_progs_query(BPF_SK_SKB_STREAM_VERDICT); if (test__start_subtest("sockmap skb_verdict progs query")) test_sockmap_progs_query(BPF_SK_SKB_VERDICT); + if (test__start_subtest("sockmap skb_verdict shutdown")) + test_sockmap_skb_verdict_shutdown(); } diff --git a/tools/testing/selftests/bpf/progs/test_sockmap_pass_prog.c b/tools/testing/selftests/bpf/progs/test_sockmap_pass_prog.c new file mode 100644 index 000000000000..1d86a717a290 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_sockmap_pass_prog.c @@ -0,0 +1,32 @@ +#include +#include +#include + +struct { + __uint(type, BPF_MAP_TYPE_SOCKMAP); + __uint(max_entries, 20); + __type(key, int); + __type(value, int); +} sock_map_rx SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_SOCKMAP); + __uint(max_entries, 20); + __type(key, int); + __type(value, int); +} sock_map_tx SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_SOCKMAP); + __uint(max_entries, 20); + __type(key, int); + __type(value, int); +} sock_map_msg SEC(".maps"); + +SEC("sk_skb") +int prog_skb_verdict(struct __sk_buff *skb) +{ + return SK_PASS; +} + +char _license[] SEC("license") = "GPL"; From patchwork Mon Mar 27 17:54:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: John Fastabend X-Patchwork-Id: 13189785 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1DAAAC6FD1D for ; Mon, 27 Mar 2023 17:55:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232745AbjC0Rzn (ORCPT ); Mon, 27 Mar 2023 13:55:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33894 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232665AbjC0RzY (ORCPT ); Mon, 27 Mar 2023 13:55:24 -0400 Received: from mail-pj1-x102a.google.com (mail-pj1-x102a.google.com [IPv6:2607:f8b0:4864:20::102a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 20BF44487; Mon, 27 Mar 2023 10:55:09 -0700 (PDT) Received: by mail-pj1-x102a.google.com with SMTP id j13so8413870pjd.1; Mon, 27 Mar 2023 10:55:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679939708; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=5+s10zrxZejqq5bq+XUHu3L+KgjuGIY37R9ThVwAs5g=; b=mO8s5pjMGSmdLjwJC6jf0vcvjqtH5tdOzNABg82WWKkD4359fNsFNPgQ5IPuMHB5eZ 4s67PUs+qBpBHYJBTkSptMymiPl0jsZ3V3xFsBhbBAkUASZSIIGS9sOq3R6Sm7zMzZgZ 43kpaQQneyRluhea6vdbjIeaoyWxpMidbOLeqZNfRoyW+D5mg2hmZO9diD2eQBqygSAM ayuDbDgqofXsmNvgKJdUBJsrgwyDaxj7MyZXBOLChkZ+L2gDuWQuiInXHc+vXP0i9W2K zrnYMqChaGKD7bGKBoqfvRe92jtbpyWxKsF3RFx3RiOHqVwYMjCNONX6qbrqulgSIa5B VP+Q== 
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679939708; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=5+s10zrxZejqq5bq+XUHu3L+KgjuGIY37R9ThVwAs5g=; b=JcHTEp6pCs2qAvtxZt8+UHViAweI7NaGvo+fTpJW7foJ0fyYtxHuiTefBATsdG0jC2 vJkJO0JzwK9mRfNTgAgkNyl2ScpjCq/koMbeLb46gkThHxveOvJd11rhEdvqrosz8wSg yVpNgm9yIc9HP/hykFrwujKDoV9O0HNfUnTJR0c4o4qfXvgpNBrnfPzTAw/fw5GhCekx 1J2ToARVleyAWf4UqLspe6XqRM18iz1guxhgoT9VW/rU2wS8kUg0TI8WNEUHWdrUvMx9 6BqYrSZFf6XHEUsL2q49/A6M/f+yK+W7LHDRVAEpOeUkMYgbnxmKkBCCSkUSfQJoLnO7 VXoA== X-Gm-Message-State: AO0yUKUB8CeQHUTtJnHoHfUjotQPUHYlJTqdbnRM6K4b9rkEI8aTjF7V ZU0AzBpvUnDClxRTpGZ9nl8= X-Google-Smtp-Source: AK7set/g5PTNrfrp781nZI2lkPmBDBD7/pW74IKFOa5RZeOs0GIOnsxjBJwU23gG/vEM9A+jYB75Kg== X-Received: by 2002:a05:6a20:b26:b0:d8:d3b4:4912 with SMTP id x38-20020a056a200b2600b000d8d3b44912mr10031760pzf.9.1679939708289; Mon, 27 Mar 2023 10:55:08 -0700 (PDT) Received: from john.lan ([98.97.117.131]) by smtp.gmail.com with ESMTPSA id r1-20020a62e401000000b005a8ba70315bsm19408316pfh.6.2023.03.27.10.55.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 27 Mar 2023 10:55:07 -0700 (PDT) From: John Fastabend To: cong.wang@bytedance.com, jakub@cloudflare.com, daniel@iogearbox.net, lmb@isovalent.com, edumazet@google.com Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org, ast@kernel.org, andrii@kernel.org, will@isovalent.com Subject: [PATCH bpf v2 11/12] bpf: sockmap, test FIONREAD returns correct bytes in rx buffer Date: Mon, 27 Mar 2023 10:54:45 -0700 Message-Id: <20230327175446.98151-12-john.fastabend@gmail.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20230327175446.98151-1-john.fastabend@gmail.com> References: <20230327175446.98151-1-john.fastabend@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net A bug was reported where ioctl(FIONREAD) returned zero even though the socket with a SK_SKB verdict program attached had bytes in the msg queue. The result is programs may hang or more likely try to recover, but use suboptimal buffer sizes. Add a test to check that ioctl(FIONREAD) returns the correct number of bytes. 
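The check the new test encodes is roughly the following (a minimal sketch; fd and expected are placeholders for the sockmap socket and the number of bytes the peer queued, not variables taken from the selftest):

#include <stdio.h>
#include <sys/ioctl.h>

static int check_fionread(int fd, int expected)
{
        int avail = 0;

        if (ioctl(fd, FIONREAD, &avail) < 0) {
                perror("ioctl(FIONREAD)");
                return -1;
        }
        /* Before the fix this reported 0 even though the SK_SKB verdict
         * program had already queued 'expected' bytes on the socket.
         */
        return avail == expected ? 0 : -1;
}
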
Signed-off-by: John Fastabend --- .../selftests/bpf/prog_tests/sockmap_basic.c | 48 +++++++++++++++++++ 1 file changed, 48 insertions(+) diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c index 8f0d60f5c847..16d76ec1ea1c 100644 --- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c @@ -416,6 +416,52 @@ static void test_sockmap_skb_verdict_shutdown(void) test_sockmap_pass_prog__destroy(skel); } +static void test_sockmap_skb_verdict_fionread(void) +{ + int err, map, verdict, s, c0, c1, p0, p1; + struct test_sockmap_pass_prog *skel; + int zero = 0, sent, recvd, avail; + char buf[256] = "0123456789"; + + skel = test_sockmap_pass_prog__open_and_load(); + if (!ASSERT_OK_PTR(skel, "open_and_load")) + return; + + verdict = bpf_program__fd(skel->progs.prog_skb_verdict); + map = bpf_map__fd(skel->maps.sock_map_rx); + + err = bpf_prog_attach(verdict, map, BPF_SK_SKB_STREAM_VERDICT, 0); + if (!ASSERT_OK(err, "bpf_prog_attach")) + goto out; + + s = socket_loopback(AF_INET, SOCK_STREAM); + if (!ASSERT_GT(s, -1, "socket_loopback(s)")) + goto out; + err = create_socket_pairs(s, AF_INET, SOCK_STREAM, &c0, &c1, &p0, &p1); + if (!ASSERT_OK(err, "create_socket_pairs(s)")) + goto out; + + err = bpf_map_update_elem(map, &zero, &c1, BPF_NOEXIST); + if (!ASSERT_OK(err, "bpf_map_update_elem(c1)")) + goto out_close; + + sent = xsend(p1, &buf, sizeof(buf), 0); + ASSERT_EQ(sent, sizeof(buf), "xsend(p0)"); + err = ioctl(c1, FIONREAD, &avail); + ASSERT_OK(err, "ioctl(FIONREAD) error"); + ASSERT_EQ(avail, sizeof(buf), "ioctl(FIONREAD)"); + recvd = recv_timeout(c1, &buf, sizeof(buf), SOCK_NONBLOCK, IO_TIMEOUT_SEC); + ASSERT_EQ(recvd, sizeof(buf), "recv_timeout(c0)"); + +out_close: + close(c0); + close(p0); + close(c1); + close(p1); +out: + test_sockmap_pass_prog__destroy(skel); +} + void test_sockmap_basic(void) { if (test__start_subtest("sockmap create_update_free")) @@ -452,4 +498,6 @@ void test_sockmap_basic(void) test_sockmap_progs_query(BPF_SK_SKB_VERDICT); if (test__start_subtest("sockmap skb_verdict shutdown")) test_sockmap_skb_verdict_shutdown(); + if (test__start_subtest("sockmap skb_verdict fionread")) + test_sockmap_skb_verdict_fionread(); } From patchwork Mon Mar 27 17:54:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: John Fastabend X-Patchwork-Id: 13189786 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3941EC761A6 for ; Mon, 27 Mar 2023 17:55:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231820AbjC0Rzw (ORCPT ); Mon, 27 Mar 2023 13:55:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35214 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232480AbjC0Rz0 (ORCPT ); Mon, 27 Mar 2023 13:55:26 -0400 Received: from mail-pf1-x42f.google.com (mail-pf1-x42f.google.com [IPv6:2607:f8b0:4864:20::42f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 479C03C33; Mon, 27 Mar 2023 10:55:10 -0700 (PDT) Received: by mail-pf1-x42f.google.com with SMTP id i15so6250413pfo.8; Mon, 27 Mar 2023 10:55:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; 
t=1679939710; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=esYoYZtEzxY7DX/LjUcABH0d/GpKLQcrNO6Gr9Qwjw8=; b=D6SQoWkMOYGIe/+LnUx0E6FIAsIJ2FlP3ZxoNFT3JcTiX1XZfmUIhsBmwi5RXnfk2D Uoihfv/jtB8GH2kbMjwHOxLXyDMHh2zWBIm/YA1dqdgi/VTQjKKxNQhjEV4ho9L+xGyE reuVDXTGiugmV/EWKWrw5g8rHFLkFIxr1G/dSFhucsgjk0qD64nw4IceuwJpEA6wJ7Ix KPbxft/9EU8XW5JzJucrS5q72G2OjOPpy3Gu9qqGn5gPQn3tHi/6BmZqsQ0zdeCSQXQ/ lPNn5a2sAqGqGaCdc4iX42pdiflLoa26f2NBp7v+0x01SYwImecj1mRMcDh8dzenLz3Q secA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679939710; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=esYoYZtEzxY7DX/LjUcABH0d/GpKLQcrNO6Gr9Qwjw8=; b=GZq/LnLwhoUnGv1Ww6F/iqJ3i9wFOAKTHADLKOra1HPiuovqWg5E6iZ/1W6J+wcjLy Nk7f4N3ICZh51HX/0oEyWe8qZipetHNy7pqrSPZUC7LDxvKg140+QFGbWYRsaKAzYEKu c7oqwevqxYcPfmRo5bVwMINP7F1Z43KEcb+7GGgVBwA5gn/CWF7Po19cutImlVW96IUp 1lW85f64iGc4SteLRaGoiZrzlpMzRy3uQkzXH+dgWSYPMbVBNO/WO1UHCA8SK4KdC++B j/LpnGCIkkle56aPqqMTGDGpKwT4V1hxDiJMxvQYG0BGVqWwToY/G0OZiSAnlHdMyzfi OFqg== X-Gm-Message-State: AAQBX9fUxnaKLu2juHb5MdyLNau/GjLAxIxdhPRYpLpC2HHDBDkpeQ81 lk8p/XZkcQ2oYiyDyS4MfPRJtQPW23Dnpw== X-Google-Smtp-Source: AKy350ZDKH58zpKeWjnkyzY+iMWFi/BIs5anUXtPmTtQ/1Pph4LY/ErdCMc1/Nj8esMk1iCahT5Wvw== X-Received: by 2002:a62:19d8:0:b0:622:9e34:11f1 with SMTP id 207-20020a6219d8000000b006229e3411f1mr11341540pfz.17.1679939709626; Mon, 27 Mar 2023 10:55:09 -0700 (PDT) Received: from john.lan ([98.97.117.131]) by smtp.gmail.com with ESMTPSA id r1-20020a62e401000000b005a8ba70315bsm19408316pfh.6.2023.03.27.10.55.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 27 Mar 2023 10:55:09 -0700 (PDT) From: John Fastabend To: cong.wang@bytedance.com, jakub@cloudflare.com, daniel@iogearbox.net, lmb@isovalent.com, edumazet@google.com Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org, ast@kernel.org, andrii@kernel.org, will@isovalent.com Subject: [PATCH bpf v2 12/12] bpf: sockmap, test FIONREAD returns correct bytes in rx buffer with drops Date: Mon, 27 Mar 2023 10:54:46 -0700 Message-Id: <20230327175446.98151-13-john.fastabend@gmail.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20230327175446.98151-1-john.fastabend@gmail.com> References: <20230327175446.98151-1-john.fastabend@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net When BPF program drops pkts the sockmap logic 'eats' the packet and updates copied_seq. In the PASS case where the sk_buff is accepted we update copied_seq from recvmsg path so we need a new test to handle the drop case. Original patch series broke this resulting in test_sockmap_skb_verdict_fionread:PASS:ioctl(FIONREAD) error 0 nsec test_sockmap_skb_verdict_fionread:FAIL:ioctl(FIONREAD) unexpected ioctl(FIONREAD): actual 1503041772 != expected 256 #176/17 sockmap_basic/sockmap skb_verdict fionread on drop:FAIL After updated patch with fix. 
#176/16 sockmap_basic/sockmap skb_verdict fionread:OK #176/17 sockmap_basic/sockmap skb_verdict fionread on drop:OK Signed-off-by: John Fastabend --- .../selftests/bpf/prog_tests/sockmap_basic.c | 47 ++++++++++++++----- .../bpf/progs/test_sockmap_drop_prog.c | 32 +++++++++++++ 2 files changed, 66 insertions(+), 13 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/test_sockmap_drop_prog.c diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c index 16d76ec1ea1c..22cbe947de2f 100644 --- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c @@ -11,6 +11,7 @@ #include "test_sockmap_skb_verdict_attach.skel.h" #include "test_sockmap_progs_query.skel.h" #include "test_sockmap_pass_prog.skel.h" +#include "test_sockmap_drop_prog.skel.h" #include "bpf_iter_sockmap.skel.h" #include "sockmap_helpers.h" @@ -416,19 +417,31 @@ static void test_sockmap_skb_verdict_shutdown(void) test_sockmap_pass_prog__destroy(skel); } -static void test_sockmap_skb_verdict_fionread(void) +static void test_sockmap_skb_verdict_fionread(bool pass_prog) { + int expected, zero = 0, sent, recvd, avail; int err, map, verdict, s, c0, c1, p0, p1; - struct test_sockmap_pass_prog *skel; - int zero = 0, sent, recvd, avail; + struct test_sockmap_pass_prog *pass; + struct test_sockmap_drop_prog *drop; char buf[256] = "0123456789"; - skel = test_sockmap_pass_prog__open_and_load(); - if (!ASSERT_OK_PTR(skel, "open_and_load")) - return; + if (pass_prog) { + pass = test_sockmap_pass_prog__open_and_load(); + if (!ASSERT_OK_PTR(pass, "open_and_load")) + return; + verdict = bpf_program__fd(pass->progs.prog_skb_verdict); + map = bpf_map__fd(pass->maps.sock_map_rx); + expected = sizeof(buf); + } else { + drop = test_sockmap_drop_prog__open_and_load(); + if (!ASSERT_OK_PTR(drop, "open_and_load")) + return; + verdict = bpf_program__fd(drop->progs.prog_skb_verdict); + map = bpf_map__fd(drop->maps.sock_map_rx); + /* On drop data is consumed immediately and copied_seq inc'd */ + expected = 0; + } - verdict = bpf_program__fd(skel->progs.prog_skb_verdict); - map = bpf_map__fd(skel->maps.sock_map_rx); err = bpf_prog_attach(verdict, map, BPF_SK_SKB_STREAM_VERDICT, 0); if (!ASSERT_OK(err, "bpf_prog_attach")) @@ -449,9 +462,12 @@ static void test_sockmap_skb_verdict_fionread(void) ASSERT_EQ(sent, sizeof(buf), "xsend(p0)"); err = ioctl(c1, FIONREAD, &avail); ASSERT_OK(err, "ioctl(FIONREAD) error"); - ASSERT_EQ(avail, sizeof(buf), "ioctl(FIONREAD)"); - recvd = recv_timeout(c1, &buf, sizeof(buf), SOCK_NONBLOCK, IO_TIMEOUT_SEC); - ASSERT_EQ(recvd, sizeof(buf), "recv_timeout(c0)"); + ASSERT_EQ(avail, expected, "ioctl(FIONREAD)"); + /* On DROP test there will be no data to read */ + if (pass_prog) { + recvd = recv_timeout(c1, &buf, sizeof(buf), SOCK_NONBLOCK, IO_TIMEOUT_SEC); + ASSERT_EQ(recvd, sizeof(buf), "recv_timeout(c0)"); + } out_close: close(c0); @@ -459,7 +475,10 @@ static void test_sockmap_skb_verdict_fionread(void) close(c1); close(p1); out: - test_sockmap_pass_prog__destroy(skel); + if (pass_prog) + test_sockmap_pass_prog__destroy(pass); + else + test_sockmap_drop_prog__destroy(drop); } void test_sockmap_basic(void) @@ -499,5 +518,7 @@ void test_sockmap_basic(void) if (test__start_subtest("sockmap skb_verdict shutdown")) test_sockmap_skb_verdict_shutdown(); if (test__start_subtest("sockmap skb_verdict fionread")) - test_sockmap_skb_verdict_fionread(); + test_sockmap_skb_verdict_fionread(true); + if 
(test__start_subtest("sockmap skb_verdict fionread on drop")) + test_sockmap_skb_verdict_fionread(false); } diff --git a/tools/testing/selftests/bpf/progs/test_sockmap_drop_prog.c b/tools/testing/selftests/bpf/progs/test_sockmap_drop_prog.c new file mode 100644 index 000000000000..29314805ce42 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_sockmap_drop_prog.c @@ -0,0 +1,32 @@ +#include +#include +#include + +struct { + __uint(type, BPF_MAP_TYPE_SOCKMAP); + __uint(max_entries, 20); + __type(key, int); + __type(value, int); +} sock_map_rx SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_SOCKMAP); + __uint(max_entries, 20); + __type(key, int); + __type(value, int); +} sock_map_tx SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_SOCKMAP); + __uint(max_entries, 20); + __type(key, int); + __type(value, int); +} sock_map_msg SEC(".maps"); + +SEC("sk_skb") +int prog_skb_verdict(struct __sk_buff *skb) +{ + return SK_DROP; +} + +char _license[] SEC("license") = "GPL";