From patchwork Fri Dec 15 17:07:37 2023
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 13494669
X-Patchwork-Delegate: kuba@kernel.org
From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: "David S. Miller", Boqun Feng, Daniel Borkmann, Eric Dumazet,
    Frederic Weisbecker, Ingo Molnar, Jakub Kicinski, Paolo Abeni,
    Peter Zijlstra, Thomas Gleixner, Waiman Long, Will Deacon,
    Sebastian Andrzej Siewior, Alexei Starovoitov, Clark Wang,
    Claudiu Manoil, Ioana Ciornei, Jesper Dangaard Brouer,
    John Fastabend, Madalin Bucur, NXP Linux Team, Shenwei Wang,
    Vladimir Oltean, Wei Fang, bpf@vger.kernel.org
Subject: [PATCH net-next 18/24] net: Freescale: Use nested-BH locking for XDP redirect.
Date: Fri, 15 Dec 2023 18:07:37 +0100
Message-ID: <20231215171020.687342-19-bigeasy@linutronix.de>
In-Reply-To: <20231215171020.687342-1-bigeasy@linutronix.de>
References: <20231215171020.687342-1-bigeasy@linutronix.de>

The per-CPU variables used during bpf_prog_run_xdp() invocation and
later during xdp_do_redirect() rely on disabled BH for their protection.
Without locking in local_bh_disable() on PREEMPT_RT these data
structures require explicit locking.

This is a follow-up on the previous change which introduced
bpf_run_lock.redirect_lock and uses it now within drivers.

The simple way is to acquire the lock before bpf_prog_run_xdp() is
invoked and hold it until the end of the function. This does not always
work because some drivers (cpsw, atlantic) invoke xdp_do_flush() in the
same context. Acquiring the lock in bpf_prog_run_xdp() and dropping it
in xdp_do_redirect() (without touching drivers) does not work either,
because not all drivers that use bpf_prog_run_xdp() support XDP_REDIRECT
(and invoke xdp_do_redirect()).
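For readers unfamiliar with the guard()/scoped_guard() helpers from the
kernel's cleanup.h used in the hunks below: they are scope-based lock
guards built on the compiler's cleanup attribute, so the lock is dropped
automatically whenever the enclosing scope is left. A rough userspace
sketch of the idea (illustration only, not part of the patch; the
mutex_guard type, GUARD() macro, and pthread mutex here are made-up
stand-ins for the kernel machinery):

```c
#include <pthread.h>

/* Stand-in for the kernel's guard() machinery: a struct whose
 * "destructor" (the cleanup attribute) unlocks the mutex when the
 * variable goes out of scope -- on any exit path, including early
 * returns, just like guard(local_lock_nested_bh)(...) in the patch. */
struct mutex_guard {
	pthread_mutex_t *lock;
};

static void mutex_guard_release(struct mutex_guard *g)
{
	pthread_mutex_unlock(g->lock);
}

#define GUARD(name, mtx)						\
	struct mutex_guard name						\
		__attribute__((cleanup(mutex_guard_release))) =		\
		{ .lock = (mtx) };					\
	pthread_mutex_lock((name).lock)

static pthread_mutex_t redirect_lock = PTHREAD_MUTEX_INITIALIZER;
static int redirect_count;

/* Analogue of the driver changes below: hold the lock for the scope of
 * the XDP-like work; it is released implicitly at the closing brace. */
int run_xdp_like_section(void)
{
	GUARD(g, &redirect_lock);
	redirect_count++;
	return redirect_count;	/* unlock happens here via cleanup */
}
```

Because the release is tied to scope exit rather than an explicit
unlock call, the lock cannot be leaked on an error path, which is why
the patch can simply drop a one-line guard() in front of
bpf_prog_run_xdp() in most drivers.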
Ideally the minimal locking scope would be bpf_prog_run_xdp() +
xdp_do_redirect(), and everything else (error recovery, DMA unmapping,
free/alloc of memory, …) would happen outside of the locked section.

Cc: Alexei Starovoitov
Cc: Clark Wang
Cc: Claudiu Manoil
Cc: Ioana Ciornei
Cc: Jesper Dangaard Brouer
Cc: John Fastabend
Cc: Madalin Bucur
Cc: NXP Linux Team
Cc: Shenwei Wang
Cc: Vladimir Oltean
Cc: Wei Fang
Cc: bpf@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
---
 .../net/ethernet/freescale/dpaa/dpaa_eth.c    |  1 +
 .../net/ethernet/freescale/dpaa2/dpaa2-eth.c  |  1 +
 .../net/ethernet/freescale/dpaa2/dpaa2-xsk.c  | 30 ++++++++++---------
 drivers/net/ethernet/freescale/enetc/enetc.c  |  1 +
 drivers/net/ethernet/freescale/fec_main.c     |  1 +
 5 files changed, 20 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index dcbc598b11c6c..8adc766282fde 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -2597,6 +2597,7 @@ static u32 dpaa_run_xdp(struct dpaa_priv *priv, struct qm_fd *fd, void *vaddr,
 	}
 #endif
 
+	guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
 	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
 
 	/* Update the length and the offset of the FD */
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
index 888509cf1f210..08be35a3e3de7 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
@@ -442,6 +442,7 @@ static u32 dpaa2_eth_run_xdp(struct dpaa2_eth_priv *priv,
 	xdp_prepare_buff(&xdp, vaddr + offset, XDP_PACKET_HEADROOM,
 			 dpaa2_fd_get_len(fd), false);
 
+	guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
 	xdp_act = bpf_prog_run_xdp(xdp_prog, &xdp);
 
 	/* xdp.data pointer may have changed */
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-xsk.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-xsk.c
index 051748b997f3f..e3ae9de6b0a34 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-xsk.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-xsk.c
@@ -56,23 +56,25 @@ static u32 dpaa2_xsk_run_xdp(struct dpaa2_eth_priv *priv,
 	xdp_buff->rxq = &ch->xdp_rxq;
 	xsk_buff_dma_sync_for_cpu(xdp_buff, ch->xsk_pool);
 
-	xdp_act = bpf_prog_run_xdp(xdp_prog, xdp_buff);
+	scoped_guard(local_lock_nested_bh, &bpf_run_lock.redirect_lock) {
+		xdp_act = bpf_prog_run_xdp(xdp_prog, xdp_buff);
 
-	/* xdp.data pointer may have changed */
-	dpaa2_fd_set_offset(fd, xdp_buff->data - vaddr);
-	dpaa2_fd_set_len(fd, xdp_buff->data_end - xdp_buff->data);
+		/* xdp.data pointer may have changed */
+		dpaa2_fd_set_offset(fd, xdp_buff->data - vaddr);
+		dpaa2_fd_set_len(fd, xdp_buff->data_end - xdp_buff->data);
 
-	if (likely(xdp_act == XDP_REDIRECT)) {
-		err = xdp_do_redirect(priv->net_dev, xdp_buff, xdp_prog);
-		if (unlikely(err)) {
-			ch->stats.xdp_drop++;
-			dpaa2_eth_recycle_buf(priv, ch, addr);
-		} else {
-			ch->buf_count--;
-			ch->stats.xdp_redirect++;
+		if (likely(xdp_act == XDP_REDIRECT)) {
+			err = xdp_do_redirect(priv->net_dev, xdp_buff, xdp_prog);
+			if (unlikely(err)) {
+				ch->stats.xdp_drop++;
+				dpaa2_eth_recycle_buf(priv, ch, addr);
+			} else {
+				ch->buf_count--;
+				ch->stats.xdp_redirect++;
+			}
+
+			goto xdp_redir;
 		}
-
-		goto xdp_redir;
 	}
 
 	switch (xdp_act) {
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index cffbf27c4656b..d516b28815af4 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -1578,6 +1578,7 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
 			rx_byte_cnt += VLAN_HLEN;
 		rx_byte_cnt += xdp_get_buff_len(&xdp_buff);
 
+		guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
 		xdp_act = bpf_prog_run_xdp(prog, &xdp_buff);
 
 		switch (xdp_act) {
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index c3b7694a74851..335b1e307d468 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -1587,6 +1587,7 @@ fec_enet_run_xdp(struct fec_enet_private *fep, struct bpf_prog *prog,
 	int err;
 	u32 act;
 
+	guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
 	act = bpf_prog_run_xdp(prog, xdp);
 
 	/* Due xdp_adjust_tail and xdp_adjust_head: DMA sync for_device cover