From patchwork Fri Sep 8 13:57:48 2023
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 13377489
X-Patchwork-Delegate: kuba@kernel.org
From: Sebastian Andrzej Siewior
To: netdev@vger.kernel.org, bpf@vger.kernel.org
Cc: "David S. Miller", Alexei Starovoitov, Daniel Borkmann, Eric Dumazet,
    Jakub Kicinski, Jesper Dangaard Brouer, John Fastabend, Paolo Abeni,
    Thomas Gleixner, Sebastian Andrzej Siewior, Andrii Nakryiko, Hao Luo,
    Jiri Olsa, KP Singh, Martin KaFai Lau, Song Liu, Stanislav Fomichev,
    Yonghong Song
Subject: [PATCH net 4/4] bpf, cpumap: Flush xdp after cpu_map_bpf_prog_run_skb().
Date: Fri, 8 Sep 2023 15:57:48 +0200
Message-Id: <20230908135748.794163-5-bigeasy@linutronix.de>
In-Reply-To: <20230908135748.794163-1-bigeasy@linutronix.de>
References: <20230908135748.794163-1-bigeasy@linutronix.de>

xdp_do_flush() should be invoked before leaving the NAPI poll function
if an XDP redirect has been performed. There are two possible XDP
invocations in cpu_map_bpf_prog_run():

- cpu_map_bpf_prog_run_xdp()
- cpu_map_bpf_prog_run_skb()

Both functions update stats->redirect if a redirect is performed, but
xdp_do_flush() is only invoked after the former. Invoke xdp_do_flush()
after both functions have run and a redirect was performed.
Cc: Andrii Nakryiko
Cc: Hao Luo
Cc: Jiri Olsa
Cc: KP Singh
Cc: Martin KaFai Lau
Cc: Song Liu
Cc: Stanislav Fomichev
Cc: Yonghong Song
Fixes: 11941f8a85362 ("bpf: cpumap: Implement generic cpumap")
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/bpf/cpumap.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index e42a1bdb7f536..f3ba11b4a732b 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -248,12 +248,12 @@ static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
 
 	nframes = cpu_map_bpf_prog_run_xdp(rcpu, frames, xdp_n, stats);
 
-	if (stats->redirect)
-		xdp_do_flush();
-
 	if (unlikely(!list_empty(list)))
 		cpu_map_bpf_prog_run_skb(rcpu, list, stats);
 
+	if (stats->redirect)
+		xdp_do_flush();
+
 	rcu_read_unlock_bh(); /* resched point, may call do_softirq() */
 
 	return nframes;