From patchwork Wed Sep 20 23:27:04 2023
X-Patchwork-Submitter: John Fastabend
X-Patchwork-Id: 13393562
X-Patchwork-Delegate: bpf@iogearbox.net
From: John Fastabend
To: daniel@iogearbox.net, ast@kernel.org, andrii@kernel.org, jakub@cloudflare.com
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH bpf 1/3] bpf: tcp_read_skb needs to pop skb regardless of seq
Date: Wed, 20 Sep 2023 16:27:04 -0700
Message-Id: <20230920232706.498747-2-john.fastabend@gmail.com>
In-Reply-To: <20230920232706.498747-1-john.fastabend@gmail.com>
References: <20230920232706.498747-1-john.fastabend@gmail.com>

Before fix e5c6de5fa0258 tcp_read_skb() would increment the
tp->copied_seq value. This (as described in that commit) caused errors
for applications, because once copied_seq is incremented the application
may believe there is no data left to read, so some applications would
stall or abort.

However, the fix is incomplete because it introduces another issue in
the skb dequeue. The code calls tcp_recv_skb() in a while loop to
consume as many skbs as possible. The problem is the call is,

   tcp_recv_skb(sk, seq, &offset)

where 'seq' is

   u32 seq = tp->copied_seq;

Now we can hit a case where copied_seq has not yet been incremented from
the BPF side, so tcp_recv_skb() fails this test,

   if (offset < skb->len || (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN))

and instead of returning the skb we call tcp_eat_recv_skb(), which frees
the skb. This is because the routine believes the SKB has been collapsed,
per the comment,

   /* This looks weird, but this can happen if TCP collapsing
    * splitted a fat GRO packet, while we released socket lock
    * in skb_splice_bits()
    */

That can't happen here: we've unlinked the full SKB and orphaned it. And
in any case it would confuse any BPF program if the data were suddenly
moved underneath it.

To fix this, do the simpler operation and just skb_peek() the head of
the queue followed by the unlink. There is no need to check that
condition, and because tcp_read_skb() reads entire skbs there is no need
to handle the 'offset != 0' case as we would see in tcp_read_sock().

Fixes: e5c6de5fa0258 ("bpf, sockmap: Incorrectly handling copied_seq")
Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Signed-off-by: John Fastabend
---
 net/ipv4/tcp.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 0c3040a63ebd..45e7f39e67bc 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1625,12 +1625,11 @@ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 	u32 seq = tp->copied_seq;
 	struct sk_buff *skb;
 	int copied = 0;
-	u32 offset;
 
 	if (sk->sk_state == TCP_LISTEN)
 		return -ENOTCONN;
 
-	while ((skb = tcp_recv_skb(sk, seq, &offset)) != NULL) {
+	while ((skb = skb_peek(&sk->sk_receive_queue)) != NULL) {
 		u8 tcp_flags;
 		int used;
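For context, the dequeue pattern this moves to looks roughly like the
sketch below: peek the head of sk_receive_queue and unlink it directly,
instead of re-looking the skb up by sequence number. The remainder of
the existing loop body, which orphans the skb and hands it to
recv_actor(), is assumed unchanged and is elided; this is an
illustration written from the commit description, not the merged code.

	while ((skb = skb_peek(&sk->sk_receive_queue)) != NULL) {
		int used;

		/* Pop the head unconditionally: there is no seq/offset
		 * bookkeeping, so a lagging copied_seq can no longer cause
		 * the skb to be freed via tcp_eat_recv_skb() the way the
		 * old tcp_recv_skb() lookup could.
		 */
		__skb_unlink(skb, &sk->sk_receive_queue);

		/* ... existing body: orphan the skb, pass it to
		 * recv_actor(sk, skb) and accumulate 'used' into 'copied' ...
		 */
	}
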
From patchwork Wed Sep 20 23:27:05 2023
X-Patchwork-Submitter: John Fastabend
X-Patchwork-Id: 13393563
X-Patchwork-Delegate: bpf@iogearbox.net
From: John Fastabend
To: daniel@iogearbox.net, ast@kernel.org, andrii@kernel.org, jakub@cloudflare.com
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH bpf 2/3] bpf: sockmap, do not inc copied_seq when PEEK flag set
Date: Wed, 20 Sep 2023 16:27:05 -0700
Message-Id: <20230920232706.498747-3-john.fastabend@gmail.com>
In-Reply-To: <20230920232706.498747-1-john.fastabend@gmail.com>
References: <20230920232706.498747-1-john.fastabend@gmail.com>

When data is peeked off the receive queue we should not consider it
copied from the tcp_sock side. If we increment copied_seq on a peek it
confuses tcp_data_ready(), because copied_seq can then be increased
arbitrarily. From the application side this results in poll() operations
not waking up when expected.

Notice the TCP stack without BPF recvmsg programs also does not
increment copied_seq on a peek. We broke this when we moved the
copied_seq update into recvmsg so that it only happens when an actual
copy takes place. But it wasn't working correctly before that either,
because tcp_data_ready() tried to use the copied_seq value to see if
data had been read by the user yet. See the Fixes tags.
Fixes: e5c6de5fa0258 ("bpf, sockmap: Incorrectly handling copied_seq")
Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Signed-off-by: John Fastabend
Reviewed-by: Jakub Sitnicki
---
 net/ipv4/tcp_bpf.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index 81f0dff69e0b..327268203001 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -222,6 +222,7 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 				  int *addr_len)
 {
 	struct tcp_sock *tcp = tcp_sk(sk);
+	int peek = flags & MSG_PEEK;
 	u32 seq = tcp->copied_seq;
 	struct sk_psock *psock;
 	int copied = 0;
@@ -311,7 +312,8 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 		copied = -EAGAIN;
 	}
 out:
-	WRITE_ONCE(tcp->copied_seq, seq);
+	if (!peek)
+		WRITE_ONCE(tcp->copied_seq, seq);
 	tcp_rcv_space_adjust(sk);
 	if (copied > 0)
 		__tcp_cleanup_rbuf(sk, copied);
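As a user-space illustration of the semantics this restores (a
hypothetical helper, not part of the series; 'fd' is assumed to be a
connected TCP socket with a sockmap verdict and recvmsg program
attached): a MSG_PEEK read must leave the data in place, so a
zero-timeout poll() still reports POLLIN and a later recv() returns at
least the peeked bytes.

#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>

/* Peek pending data, then verify it is still visible to poll() and to a
 * normal recv(). Returns 0 on success, -1 on failure.
 */
static int check_peek_semantics(int fd)
{
	struct pollfd pfd = { .fd = fd, .events = POLLIN };
	char buf[64];
	ssize_t peeked, got;

	peeked = recv(fd, buf, sizeof(buf), MSG_PEEK);
	if (peeked <= 0)
		return -1;

	/* The peek must not consume data or advance copied_seq, so the
	 * socket still polls readable.
	 */
	if (poll(&pfd, 1, 0) != 1 || !(pfd.revents & POLLIN)) {
		fprintf(stderr, "peeked data no longer reported by poll()\n");
		return -1;
	}

	got = recv(fd, buf, sizeof(buf), 0);
	return got >= peeked ? 0 : -1;
}
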
From patchwork Wed Sep 20 23:27:06 2023
X-Patchwork-Submitter: John Fastabend
X-Patchwork-Id: 13393564
X-Patchwork-Delegate: bpf@iogearbox.net
From: John Fastabend
To: daniel@iogearbox.net, ast@kernel.org, andrii@kernel.org, jakub@cloudflare.com
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH bpf 3/3] bpf: sockmap, add tests for MSG_F_PEEK
Date: Wed, 20 Sep 2023 16:27:06 -0700
Message-Id: <20230920232706.498747-4-john.fastabend@gmail.com>
In-Reply-To: <20230920232706.498747-1-john.fastabend@gmail.com>
References: <20230920232706.498747-1-john.fastabend@gmail.com>

Test that we can read with MSG_PEEK and still get the correct number of
available bytes reported through FIONREAD afterwards. A recv() without
PEEK then returns the bytes as expected. The recv() itself always
worked; it was only the available-bytes reporting that was broken before
the latest fixes.

Signed-off-by: John Fastabend
Reviewed-by: Jakub Sitnicki
---
 .../selftests/bpf/prog_tests/sockmap_basic.c | 52 +++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
index 064cc5e8d9ad..e8eee2b8901e 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
@@ -475,6 +475,56 @@ static void test_sockmap_skb_verdict_fionread(bool pass_prog)
 	test_sockmap_drop_prog__destroy(drop);
 }
 
+
+static void test_sockmap_skb_verdict_peek(void)
+{
+	int err, map, verdict, s, c1, p1, zero = 0, sent, recvd, avail;
+	struct test_sockmap_pass_prog *pass;
+	char snd[256] = "0123456789";
+	char rcv[256] = "0";
+
+	pass = test_sockmap_pass_prog__open_and_load();
+	if (!ASSERT_OK_PTR(pass, "open_and_load"))
+		return;
+	verdict = bpf_program__fd(pass->progs.prog_skb_verdict);
+	map = bpf_map__fd(pass->maps.sock_map_rx);
+
+	err = bpf_prog_attach(verdict, map, BPF_SK_SKB_STREAM_VERDICT, 0);
+	if (!ASSERT_OK(err, "bpf_prog_attach"))
+		goto out;
+
+	s = socket_loopback(AF_INET, SOCK_STREAM);
+	if (!ASSERT_GT(s, -1, "socket_loopback(s)"))
+		goto out;
+
+	err = create_pair(s, AF_INET, SOCK_STREAM, &c1, &p1);
+	if (!ASSERT_OK(err, "create_pairs(s)"))
+		goto out;
+
+	err = bpf_map_update_elem(map, &zero, &c1, BPF_NOEXIST);
+	if (!ASSERT_OK(err, "bpf_map_update_elem(c1)"))
+		goto out_close;
+
+	sent = xsend(p1, snd, sizeof(snd), 0);
+	ASSERT_EQ(sent, sizeof(snd), "xsend(p1)");
+	recvd = recv(c1, rcv, sizeof(rcv), MSG_PEEK);
+	ASSERT_EQ(recvd, sizeof(rcv), "recv(c1)");
+	err = ioctl(c1, FIONREAD, &avail);
+	ASSERT_OK(err, "ioctl(FIONREAD) error");
+	ASSERT_EQ(avail, sizeof(snd), "after peek ioctl(FIONREAD)");
+	recvd = recv(c1, rcv, sizeof(rcv), 0);
+	ASSERT_EQ(recvd, sizeof(rcv), "recv(p0)");
+	err = ioctl(c1, FIONREAD, &avail);
+	ASSERT_OK(err, "ioctl(FIONREAD) error");
+	ASSERT_EQ(avail, 0, "after read ioctl(FIONREAD)");
+
+out_close:
+	close(c1);
+	close(p1);
+out:
+	test_sockmap_pass_prog__destroy(pass);
+}
+
 void test_sockmap_basic(void)
 {
 	if (test__start_subtest("sockmap create_update_free"))
@@ -515,4 +565,6 @@ void test_sockmap_basic(void)
 		test_sockmap_skb_verdict_fionread(true);
 	if (test__start_subtest("sockmap skb_verdict fionread on drop"))
 		test_sockmap_skb_verdict_fionread(false);
+	if (test__start_subtest("sockmap skb_verdict msg_f_peek"))
+		test_sockmap_skb_verdict_peek();
 }
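
For readers who want the same check outside the selftest harness, below
is a hypothetical plain-C sketch of the sequence the test asserts; the
helper name and the two sockets are assumptions, with rcv_fd expected to
be the sockmap-attached end of a connected TCP pair. It sends a payload,
peeks it, confirms FIONREAD still reports it, then drains it with a real
recv() and confirms FIONREAD drops to zero.

#include <sys/ioctl.h>
#include <sys/socket.h>

/* snd_fd and rcv_fd are assumed to be the two ends of a connected TCP
 * pair, with rcv_fd inserted into a sockmap with a verdict program.
 * Returns 0 if FIONREAD behaves as the test expects, -1 otherwise.
 */
static int peek_then_fionread(int snd_fd, int rcv_fd)
{
	char snd[256] = "0123456789";
	char rcv[256];
	int avail;

	if (send(snd_fd, snd, sizeof(snd), 0) != (ssize_t)sizeof(snd))
		return -1;
	if (recv(rcv_fd, rcv, sizeof(rcv), MSG_PEEK) <= 0)
		return -1;
	/* Peeking must not shrink the available-byte count. */
	if (ioctl(rcv_fd, FIONREAD, &avail) || avail != (int)sizeof(snd))
		return -1;
	if (recv(rcv_fd, rcv, sizeof(rcv), 0) <= 0)
		return -1;
	/* A real read drains the socket. */
	if (ioctl(rcv_fd, FIONREAD, &avail) || avail != 0)
		return -1;
	return 0;
}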