From patchwork Fri Jun 28 10:18:56 2024
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 13715941
X-Patchwork-Delegate: kuba@kernel.org
From: Sebastian Andrzej Siewior
To: netdev@vger.kernel.org, bpf@vger.kernel.org
Miller" , Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Eduard Zingerman , Eric Dumazet , Hao Luo , Jakub Kicinski , Jesper Dangaard Brouer , Jiri Olsa , John Fastabend , Jonathan Lemon , KP Singh , Maciej Fijalkowski , Magnus Karlsson , Martin KaFai Lau , Paolo Abeni , Song Liu , Stanislav Fomichev , Thomas Gleixner , Yonghong Song , Sebastian Andrzej Siewior Subject: [PATCH net-next 3/3] net: Move flush list retrieval to where it is used. Date: Fri, 28 Jun 2024 12:18:56 +0200 Message-ID: <20240628103020.1766241-4-bigeasy@linutronix.de> In-Reply-To: <20240628103020.1766241-1-bigeasy@linutronix.de> References: <20240628103020.1766241-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org The bpf_net_ctx_get_.*_flush_list() are used at the top of the function. This means the variable is always assigned even if unused. By moving the function to where it is used, it is possible to delay the initialisation until it is unavoidable. Not sure how much this gains in reality but by looking at bq_enqueue() (in devmap.c) gcc pushes one register less to the stack. \o/. Move flush list retrieval to where it is used. Signed-off-by: Sebastian Andrzej Siewior Acked-by: Jesper Dangaard Brouer --- kernel/bpf/cpumap.c | 6 ++++-- kernel/bpf/devmap.c | 3 ++- net/xdp/xsk.c | 6 ++++-- 3 files changed, 10 insertions(+), 5 deletions(-) diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index 4acf90cd79eb4..fbdf5a1aabfe4 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -707,7 +707,6 @@ static void bq_flush_to_queue(struct xdp_bulk_queue *bq) */ static void bq_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_frame *xdpf) { - struct list_head *flush_list = bpf_net_ctx_get_cpu_map_flush_list(); struct xdp_bulk_queue *bq = this_cpu_ptr(rcpu->bulkq); if (unlikely(bq->count == CPU_MAP_BULK_SIZE)) @@ -724,8 +723,11 @@ static void bq_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_frame *xdpf) */ bq->q[bq->count++] = xdpf; - if (!bq->flush_node.prev) + if (!bq->flush_node.prev) { + struct list_head *flush_list = bpf_net_ctx_get_cpu_map_flush_list(); + list_add(&bq->flush_node, flush_list); + } } int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_frame *xdpf, diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c index 9ca47eaacdd5e..b18d4a14a0a70 100644 --- a/kernel/bpf/devmap.c +++ b/kernel/bpf/devmap.c @@ -448,7 +448,6 @@ static void *__dev_map_lookup_elem(struct bpf_map *map, u32 key) static void bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf, struct net_device *dev_rx, struct bpf_prog *xdp_prog) { - struct list_head *flush_list = bpf_net_ctx_get_dev_flush_list(); struct xdp_dev_bulk_queue *bq = this_cpu_ptr(dev->xdp_bulkq); if (unlikely(bq->count == DEV_MAP_BULK_SIZE)) @@ -462,6 +461,8 @@ static void bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf, * are only ever modified together. 
 	 */
 	if (!bq->dev_rx) {
+		struct list_head *flush_list = bpf_net_ctx_get_dev_flush_list();
+
 		bq->dev_rx = dev_rx;
 		bq->xdp_prog = xdp_prog;
 		list_add(&bq->flush_node, flush_list);
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index de9c0322bc294..7e16336044b2d 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -370,15 +370,17 @@ static int xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
 
 int __xsk_map_redirect(struct xdp_sock *xs, struct xdp_buff *xdp)
 {
-	struct list_head *flush_list = bpf_net_ctx_get_xskmap_flush_list();
 	int err;
 
 	err = xsk_rcv(xs, xdp);
 	if (err)
 		return err;
 
-	if (!xs->flush_node.prev)
+	if (!xs->flush_node.prev) {
+		struct list_head *flush_list = bpf_net_ctx_get_xskmap_flush_list();
+
 		list_add(&xs->flush_node, flush_list);
+	}
 
 	return 0;
 }
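
The sketch referenced above the diffstat: a minimal, self-contained userspace
illustration of the pattern this patch applies. All names here
(get_flush_list(), struct bulk_queue, BULK_SIZE) are hypothetical stand-ins,
not the kernel's bpf_net_ctx_*() helpers; the point is only to show why
fetching the flush list inside the branch keeps the common path free of an
extra live value.

#include <stdio.h>
#include <stddef.h>

#define BULK_SIZE 4

struct node { struct node *prev, *next; };

/* Global flush list; stands in for the per-context flush list. */
static struct node flush_list_head = { &flush_list_head, &flush_list_head };

/* Stand-in for bpf_net_ctx_get_*_flush_list(); pretend it is not free. */
static struct node *get_flush_list(void)
{
	return &flush_list_head;
}

/* Insert n right after head (doubly linked list). */
static void list_add(struct node *n, struct node *head)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

struct bulk_queue {
	int count;
	int q[BULK_SIZE];
	struct node flush_node;	/* .prev == NULL means "not linked yet" */
};

static void bq_enqueue(struct bulk_queue *bq, int frame)
{
	if (bq->count == BULK_SIZE)
		return;		/* the real code flushes here; elided */

	bq->q[bq->count++] = frame;

	/*
	 * Fast path: the queue is already on the flush list, so
	 * get_flush_list() is never called and its result never has to be
	 * kept live (e.g. in a callee-saved register) across the function.
	 */
	if (!bq->flush_node.prev) {
		struct node *flush_list = get_flush_list();

		list_add(&bq->flush_node, flush_list);
	}
}

int main(void)
{
	struct bulk_queue bq = { 0 };

	bq_enqueue(&bq, 1);
	bq_enqueue(&bq, 2);	/* second call skips the lookup entirely */
	printf("queued %d frames, linked=%d\n", bq.count,
	       bq.flush_node.prev != NULL);
	return 0;
}

As in the patch, flush_node.prev doubles as the "already linked" marker, so
the lookup, and whatever register or stack slot holds its result, is only
needed the first time a queue is added to the flush list.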