From patchwork Mon Jul 29 20:07:07 2024
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 13745653
Subject: [PATCH net v2 1/8] ice: respect netif readiness in AF_XDP ZC related ndo's
Date: Mon, 29 Jul 2024 13:07:07 -0700
Message-ID: <20240729200716.681496-2-anthony.l.nguyen@intel.com>

From: Michal Kubiak

Address a scenario in which XSK ZC Tx produces descriptors to the XDP Tx
ring while the link is either not yet fully initialized or the process of
stopping the netdev has already started. To avoid this, add checks against
carrier readiness in ice_xsk_wakeup() and in ice_xmit_zc(). One could
argue that bailing out early in ice_xsk_wakeup() would be sufficient, but
since we also produce Tx descriptors on behalf of NAPI triggered for Rx
traffic, the check in ice_xmit_zc() is needed as well. Bringing the link
up is an asynchronous event executed within ice_service_task, so even
after the interface has been brought up there is still a window where the
link is not yet ok.

Without this patch, when AF_XDP ZC Tx is used simultaneously with stack
Tx, Tx timeouts occur after a link flap (admin brings the interface down
and then up again). HW seems unable to transmit descriptors to the wire
after the HW tail register bump, which in turn causes the
__QUEUE_STATE_STACK_XOFF bit to be set forever, as
netdev_tx_completed_queue() sees no cleaned bytes on its input.

Fixes: 126cdfe1007a ("ice: xsk: Improve AF_XDP ZC Tx and use batching API")
Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Reviewed-by: Shannon Nelson
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
Signed-off-by: Michal Kubiak
Signed-off-by: Maciej Fijalkowski
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index a65955eb23c0..72738b8b8a68 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -1048,6 +1048,10 @@ bool ice_xmit_zc(struct ice_tx_ring *xdp_ring)
 
 	ice_clean_xdp_irq_zc(xdp_ring);
 
+	if (!netif_carrier_ok(xdp_ring->vsi->netdev) ||
+	    !netif_running(xdp_ring->vsi->netdev))
+		return true;
+
 	budget = ICE_DESC_UNUSED(xdp_ring);
 	budget = min_t(u16, budget, ICE_RING_QUARTER(xdp_ring));
 
@@ -1091,7 +1095,7 @@ ice_xsk_wakeup(struct net_device *netdev, u32 queue_id,
 	struct ice_vsi *vsi = np->vsi;
 	struct ice_tx_ring *ring;
 
-	if (test_bit(ICE_VSI_DOWN, vsi->state))
+	if (test_bit(ICE_VSI_DOWN, vsi->state) || !netif_carrier_ok(netdev))
 		return -ENETDOWN;
 
 	if (!ice_is_xdp_ena_vsi(vsi))
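The fix boils down to a readiness guard at the top of the Tx producer.
Below is a minimal user-space sketch of that guard pattern; the dev_state
struct is a hypothetical stand-in for the netdev's carrier/running bits,
not an actual driver type:

#include <stdbool.h>
#include <stdio.h>

struct dev_state {
	bool carrier_ok;	/* link fully up, as netif_carrier_ok() */
	bool running;		/* netdev not being torn down, as netif_running() */
};

/* Model of the guard added to ice_xmit_zc(): report "done" without
 * touching the ring whenever the device cannot transmit anyway. */
static bool xmit_zc(struct dev_state *dev, int pending)
{
	if (!dev->carrier_ok || !dev->running)
		return true;	/* nothing can be done safely; claim no more work */

	printf("producing %d Tx descriptors\n", pending);
	return pending == 0;
}

int main(void)
{
	struct dev_state dev = { .carrier_ok = false, .running = true };

	/* During a link flap the carrier is down: the guard fires. */
	if (xmit_zc(&dev, 8))
		printf("bailed out: link not ready\n");

	dev.carrier_ok = true;
	xmit_zc(&dev, 8);	/* link up: descriptors are produced */
	return 0;
}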
From patchwork Mon Jul 29 20:07:08 2024
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 13745654
Subject: [PATCH net v2 2/8] ice: don't busy wait for Rx queue disable in ice_qp_dis()
Date: Mon, 29 Jul 2024 13:07:08 -0700
Message-ID: <20240729200716.681496-3-anthony.l.nguyen@intel.com>

From: Maciej Fijalkowski

When the ice driver is spammed with multiple xdpsock instances and flow
control is enabled, there are cases where an Rx queue gets stuck and never
reflects the disabled state in the QRX_CTRL register. A similar issue was
previously addressed in commit 13a6233b033f ("ice: Add support to
enable/disable all Rx queues before waiting").

To work around this, simply do not wait for the disabled state; a later
patch makes sure that, regardless of any error encountered while disabling
a queue pair, the Rx queue will be re-enabled.

Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Reviewed-by: Shannon Nelson
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
Signed-off-by: Maciej Fijalkowski
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 72738b8b8a68..3104a5657b83 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -199,10 +199,8 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 		if (err)
 			return err;
 	}
-	err = ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, true);
-	if (err)
-		return err;
 
+	ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, false);
 	ice_qp_clean_rings(vsi, q_idx);
 	ice_qp_reset_stats(vsi, q_idx);
 
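For background, "waiting for a disabled state" is typically a bounded poll
on a queue-enable status bit. A self-contained sketch of that pattern
follows; read_queue_enabled() is a made-up stand-in for the QRX_CTRL
register read, and the simulated queue never de-asserts the bit, which is
exactly the stuck case this patch sidesteps:

#include <stdbool.h>
#include <stdio.h>

/* Made-up stand-in for reading the QRX_CTRL queue-enable status bit. */
static bool read_queue_enabled(int attempt)
{
	(void)attempt;
	return true;	/* simulate a queue stuck in the enabled state */
}

/* Bounded poll: return 0 once disabled, -1 on timeout. The patch drops
 * this wait entirely, since a later patch re-enables the Rx queue no
 * matter what happened during teardown. */
static int wait_rx_queue_disabled(void)
{
	for (int i = 0; i < 50; i++) {
		if (!read_queue_enabled(i))
			return 0;
		/* in the driver this slot would be a short usleep_range() */
	}
	return -1;
}

int main(void)
{
	if (wait_rx_queue_disabled() < 0)
		printf("queue stuck: this is the failure the patch avoids\n");
	return 0;
}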
From patchwork Mon Jul 29 20:07:09 2024
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 13745656
Subject: [PATCH net v2 3/8] ice: replace synchronize_rcu with synchronize_net
Date: Mon, 29 Jul 2024 13:07:09 -0700
Message-ID: <20240729200716.681496-4-anthony.l.nguyen@intel.com>

From: Maciej Fijalkowski

Given that ice_qp_dis() is called under rtnl_lock, synchronize_net() can
be called instead of synchronize_rcu() so that XDP rings can finish their
job more quickly. Also do this earlier in the XSK queue disable flow.

Additionally, turn off the regular Tx queue before disabling IRQs and
NAPI.

Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Reviewed-by: Shannon Nelson
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
Signed-off-by: Maciej Fijalkowski
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 3104a5657b83..ba50af9a5929 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -52,10 +52,8 @@ static void ice_qp_reset_stats(struct ice_vsi *vsi, u16 q_idx)
 static void ice_qp_clean_rings(struct ice_vsi *vsi, u16 q_idx)
 {
 	ice_clean_tx_ring(vsi->tx_rings[q_idx]);
-	if (ice_is_xdp_ena_vsi(vsi)) {
-		synchronize_rcu();
+	if (ice_is_xdp_ena_vsi(vsi))
 		ice_clean_tx_ring(vsi->xdp_rings[q_idx]);
-	}
 	ice_clean_rx_ring(vsi->rx_rings[q_idx]);
 }
 
@@ -180,11 +178,12 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 		usleep_range(1000, 2000);
 	}
 
+	synchronize_net();
+	netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+
 	ice_qvec_dis_irq(vsi, rx_ring, q_vector);
 	ice_qvec_toggle_napi(vsi, q_vector, false);
 
-	netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
-
 	ice_fill_txq_meta(vsi, tx_ring, &txq_meta);
 	err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta);
 	if (err)
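The point of the reordering can be shown as a plain sequence. The stub
sketch below models the post-patch teardown order; the printouts stand in
for the actual driver calls (synchronize_net() itself is the RCU
grace-period wait, applicable here because rtnl_lock is already held):

#include <stdio.h>

/* Stubs standing in for the driver calls; the ordering is what matters. */
static void synchronize_readers(void)  { puts("wait for in-flight softirq readers"); }
static void stop_stack_tx_queue(void)  { puts("netif_tx_stop_queue"); }
static void disable_irq_and_napi(void) { puts("ice_qvec_dis_irq + napi disable"); }
static void stop_rings(void)           { puts("stop Tx/XDP/Rx rings"); }

/* Teardown order after the patch: quiesce readers and stop the stack's
 * Tx queue first, then disable IRQs/NAPI, then touch the rings. */
int main(void)
{
	synchronize_readers();	/* synchronize_net() in the driver */
	stop_stack_tx_queue();
	disable_irq_and_napi();
	stop_rings();
	return 0;
}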
From patchwork Mon Jul 29 20:07:10 2024
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 13745655
Subject: [PATCH net v2 4/8] ice: modify error handling when setting XSK pool in ndo_bpf
Date: Mon, 29 Jul 2024 13:07:10 -0700
Message-ID: <20240729200716.681496-5-anthony.l.nguyen@intel.com>

From: Maciej Fijalkowski

Don't bail out as soon as an error is spotted within ice_qp_{dis,ena}();
instead, track the first error and go through the whole flow of disabling
and enabling the queue pair.

Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Reviewed-by: Shannon Nelson
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
Signed-off-by: Maciej Fijalkowski
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 30 +++++++++++++-----------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index ba50af9a5929..902096b000f5 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -162,6 +162,7 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 	struct ice_tx_ring *tx_ring;
 	struct ice_rx_ring *rx_ring;
 	int timeout = 50;
+	int fail = 0;
 	int err;
 
 	if (q_idx >= vsi->num_rxq || q_idx >= vsi->num_txq)
@@ -186,8 +187,8 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 
 	ice_fill_txq_meta(vsi, tx_ring, &txq_meta);
 	err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta);
-	if (err)
-		return err;
+	if (!fail)
+		fail = err;
 
 	if (ice_is_xdp_ena_vsi(vsi)) {
 		struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_idx];
@@ -195,15 +196,15 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 		ice_fill_txq_meta(vsi, xdp_ring, &txq_meta);
 		err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, xdp_ring,
 					   &txq_meta);
-		if (err)
-			return err;
+		if (!fail)
+			fail = err;
 	}
 
 	ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, false);
 	ice_qp_clean_rings(vsi, q_idx);
 	ice_qp_reset_stats(vsi, q_idx);
 
-	return 0;
+	return fail;
 }
 
 /**
@@ -216,32 +217,33 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
 {
 	struct ice_q_vector *q_vector;
+	int fail = 0;
 	int err;
 
 	err = ice_vsi_cfg_single_txq(vsi, vsi->tx_rings, q_idx);
-	if (err)
-		return err;
+	if (!fail)
+		fail = err;
 
 	if (ice_is_xdp_ena_vsi(vsi)) {
 		struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_idx];
 
 		err = ice_vsi_cfg_single_txq(vsi, vsi->xdp_rings, q_idx);
-		if (err)
-			return err;
+		if (!fail)
+			fail = err;
 		ice_set_ring_xdp(xdp_ring);
 		ice_tx_xsk_pool(vsi, q_idx);
 	}
 
 	err = ice_vsi_cfg_single_rxq(vsi, q_idx);
-	if (err)
-		return err;
+	if (!fail)
+		fail = err;
 
 	q_vector = vsi->rx_rings[q_idx]->q_vector;
 	ice_qvec_cfg_msix(vsi, q_vector);
 
 	err = ice_vsi_ctrl_one_rx_ring(vsi, true, q_idx, true);
-	if (err)
-		return err;
+	if (!fail)
+		fail = err;
 
 	ice_qvec_toggle_napi(vsi, q_vector, true);
 	ice_qvec_ena_irq(vsi, q_vector);
@@ -249,7 +251,7 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
 	netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
 	clear_bit(ICE_CFG_BUSY, vsi->state);
 
-	return 0;
+	return fail;
 }
 
 /**
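The error-handling change is a generic "record the first failure, finish
the sequence anyway" pattern. A standalone sketch of it, with step() as a
placeholder for the individual ring stop/start helpers:

#include <stdio.h>

static int step(int rc) { return rc; }

/* Pattern from this patch: remember the first failure but still run
 * every teardown/bring-up step, so the queue pair is never left
 * half-configured. */
static int qp_reconfigure(void)
{
	int fail = 0;
	int err;

	err = step(0);		/* stop stack Tx ring: ok */
	if (!fail)
		fail = err;

	err = step(-5);		/* stop XDP ring: fails, but keep going */
	if (!fail)
		fail = err;

	err = step(0);		/* disable Rx ring: still executed */
	if (!fail)
		fail = err;

	return fail;		/* first error, or 0 if all steps passed */
}

int main(void)
{
	printf("qp_reconfigure() = %d\n", qp_reconfigure());
	return 0;
}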
From patchwork Mon Jul 29 20:07:11 2024
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 13745657
Subject: [PATCH net v2 5/8] ice: toggle netif_carrier when setting up XSK pool
Date: Mon, 29 Jul 2024 13:07:11 -0700
Message-ID: <20240729200716.681496-6-anthony.l.nguyen@intel.com>

From: Maciej Fijalkowski

Do this to prevent Tx timeout issues. One of the conditions checked by
dev_watchdog(), which runs in the background, is netif_carrier_ok(), so
turn the carrier off when disabling the queues that belong to the
q_vector on which the XSK pool is being configured.

Turn the carrier back on in ice_qp_ena() only when ice_get_link_status()
reports that the physical link is up.

Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Reviewed-by: Shannon Nelson
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
Signed-off-by: Maciej Fijalkowski
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 902096b000f5..3fbe4cfadfbf 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -180,6 +180,7 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 	}
 
 	synchronize_net();
+	netif_carrier_off(vsi->netdev);
 	netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
 
 	ice_qvec_dis_irq(vsi, rx_ring, q_vector);
@@ -218,6 +219,7 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
 {
 	struct ice_q_vector *q_vector;
 	int fail = 0;
+	bool link_up;
 	int err;
 
 	err = ice_vsi_cfg_single_txq(vsi, vsi->tx_rings, q_idx);
@@ -248,7 +250,11 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
 	ice_qvec_toggle_napi(vsi, q_vector, true);
 	ice_qvec_ena_irq(vsi, q_vector);
 
-	netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+	ice_get_link_status(vsi->port_info, &link_up);
+	if (link_up) {
+		netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+		netif_carrier_on(vsi->netdev);
+	}
 	clear_bit(ICE_CFG_BUSY, vsi->state);
 
 	return fail;
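To see why the carrier state matters here: dev_watchdog() only declares a
Tx timeout on queues of a device whose carrier is up. Below is a
simplified model of that condition, not the actual net/sched/sch_generic.c
logic:

#include <stdbool.h>
#include <stdio.h>

struct txq {
	bool stopped;
	int  ticks_since_last_tx;
};

/* Simplified model of the dev_watchdog() check: a stopped queue only
 * counts as "timed out" if the carrier is reported up. Turning the
 * carrier off around XSK pool (re)configuration therefore silences
 * false Tx timeouts. */
static bool watchdog_would_fire(const struct txq *q, bool carrier_ok)
{
	return carrier_ok && q->stopped && q->ticks_since_last_tx > 5;
}

int main(void)
{
	struct txq q = { .stopped = true, .ticks_since_last_tx = 10 };

	printf("carrier on : fire=%d\n", watchdog_would_fire(&q, true));
	printf("carrier off: fire=%d\n", watchdog_would_fire(&q, false));
	return 0;
}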
From patchwork Mon Jul 29 20:07:12 2024
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 13745659
Subject: [PATCH net v2 6/8] ice: improve updating ice_{t,r}x_ring::xsk_pool
Date: Mon, 29 Jul 2024 13:07:12 -0700
Message-ID: <20240729200716.681496-7-anthony.l.nguyen@intel.com>

From: Maciej Fijalkowski

The xsk_buff_pool pointers that ice ring structs hold are updated via
ndo_bpf, which executes in process context, while they can be read at the
same time by a remote CPU within NAPI poll. Use synchronize_net() after
the pointer update and {READ,WRITE}_ONCE() when working with the pointer.

Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Reviewed-by: Shannon Nelson
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
Signed-off-by: Maciej Fijalkowski
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/ice/ice.h      |  11 ++-
 drivers/net/ethernet/intel/ice/ice_base.c |   4 +-
 drivers/net/ethernet/intel/ice/ice_main.c |   2 +-
 drivers/net/ethernet/intel/ice/ice_txrx.c |   8 +-
 drivers/net/ethernet/intel/ice/ice_xsk.c  | 103 ++++++++++++++--------
 drivers/net/ethernet/intel/ice/ice_xsk.h  |  14 ++-
 6 files changed, 87 insertions(+), 55 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 99a75a59078e..caaa10157909 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -765,18 +765,17 @@ static inline struct xsk_buff_pool *ice_get_xp_from_qid(struct ice_vsi *vsi,
 }
 
 /**
- * ice_xsk_pool - get XSK buffer pool bound to a ring
+ * ice_rx_xsk_pool - assign XSK buff pool to Rx ring
  * @ring: Rx ring to use
  *
- * Returns a pointer to xsk_buff_pool structure if there is a buffer pool
- * present, NULL otherwise.
+ * Sets XSK buff pool pointer on Rx ring.
  */
-static inline struct xsk_buff_pool *ice_xsk_pool(struct ice_rx_ring *ring)
+static inline void ice_rx_xsk_pool(struct ice_rx_ring *ring)
 {
 	struct ice_vsi *vsi = ring->vsi;
 	u16 qid = ring->q_index;
 
-	return ice_get_xp_from_qid(vsi, qid);
+	WRITE_ONCE(ring->xsk_pool, ice_get_xp_from_qid(vsi, qid));
 }
 
 /**
@@ -801,7 +800,7 @@ static inline void ice_tx_xsk_pool(struct ice_vsi *vsi, u16 qid)
 	if (!ring)
 		return;
 
-	ring->xsk_pool = ice_get_xp_from_qid(vsi, qid);
+	WRITE_ONCE(ring->xsk_pool, ice_get_xp_from_qid(vsi, qid));
 }
 
 /**
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 5d396c1a7731..1facf179a96f 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -536,7 +536,7 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
 			return err;
 	}
 
-	ring->xsk_pool = ice_xsk_pool(ring);
+	ice_rx_xsk_pool(ring);
 	if (ring->xsk_pool) {
 		xdp_rxq_info_unreg(&ring->xdp_rxq);
 
@@ -597,7 +597,7 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
 			return 0;
 		}
 
-		ok = ice_alloc_rx_bufs_zc(ring, num_bufs);
+		ok = ice_alloc_rx_bufs_zc(ring, ring->xsk_pool, num_bufs);
 		if (!ok) {
 			u16 pf_q = ring->vsi->rxq_map[ring->q_index];
 
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index ec636be4d17d..3de020020bc4 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -2948,7 +2948,7 @@ static void ice_vsi_rx_napi_schedule(struct ice_vsi *vsi)
 	ice_for_each_rxq(vsi, i) {
 		struct ice_rx_ring *rx_ring = vsi->rx_rings[i];
 
-		if (rx_ring->xsk_pool)
+		if (READ_ONCE(rx_ring->xsk_pool))
 			napi_schedule(&rx_ring->q_vector->napi);
 	}
 }
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 8bb743f78fcb..0f91e9167427 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -1521,10 +1521,11 @@ int ice_napi_poll(struct napi_struct *napi, int budget)
 	 * budget and be more aggressive about cleaning up the Tx descriptors.
 	 */
 	ice_for_each_tx_ring(tx_ring, q_vector->tx) {
+		struct xsk_buff_pool *xsk_pool = READ_ONCE(tx_ring->xsk_pool);
 		bool wd;
 
-		if (tx_ring->xsk_pool)
-			wd = ice_xmit_zc(tx_ring);
+		if (xsk_pool)
+			wd = ice_xmit_zc(tx_ring, xsk_pool);
 		else if (ice_ring_is_xdp(tx_ring))
 			wd = true;
 		else
@@ -1550,6 +1551,7 @@ int ice_napi_poll(struct napi_struct *napi, int budget)
 		budget_per_ring = budget;
 
 	ice_for_each_rx_ring(rx_ring, q_vector->rx) {
+		struct xsk_buff_pool *xsk_pool = READ_ONCE(rx_ring->xsk_pool);
 		int cleaned;
 
 		/* A dedicated path for zero-copy allows making a single
@@ -1557,7 +1559,7 @@ int ice_napi_poll(struct napi_struct *napi, int budget)
 		 * ice_clean_rx_irq function and makes the codebase cleaner.
 		 */
 		cleaned = rx_ring->xsk_pool ?
-			  ice_clean_rx_irq_zc(rx_ring, budget_per_ring) :
+			  ice_clean_rx_irq_zc(rx_ring, xsk_pool, budget_per_ring) :
 			  ice_clean_rx_irq(rx_ring, budget_per_ring);
 		work_done += cleaned;
 		/* if we clean as many as budgeted, we must not be done */
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 3fbe4cfadfbf..ee084ad80a61 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -250,6 +250,8 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
 	ice_qvec_toggle_napi(vsi, q_vector, true);
 	ice_qvec_ena_irq(vsi, q_vector);
 
+	/* make sure NAPI sees updated ice_{t,x}_ring::xsk_pool */
+	synchronize_net();
 	ice_get_link_status(vsi->port_info, &link_up);
 	if (link_up) {
 		netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
@@ -464,6 +466,7 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
 /**
  * __ice_alloc_rx_bufs_zc - allocate a number of Rx buffers
  * @rx_ring: Rx ring
+ * @xsk_pool: XSK buffer pool to pick buffers to be filled by HW
  * @count: The number of buffers to allocate
  *
  * Place the @count of descriptors onto Rx ring. Handle the ring wrap
@@ -472,7 +475,8 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
 *
 * Returns true if all allocations were successful, false if any fail.
 */
-static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
+static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring,
+				   struct xsk_buff_pool *xsk_pool, u16 count)
 {
 	u32 nb_buffs_extra = 0, nb_buffs = 0;
 	union ice_32b_rx_flex_desc *rx_desc;
@@ -484,8 +488,7 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
 	xdp = ice_xdp_buf(rx_ring, ntu);
 
 	if (ntu + count >= rx_ring->count) {
-		nb_buffs_extra = ice_fill_rx_descs(rx_ring->xsk_pool, xdp,
-						   rx_desc,
+		nb_buffs_extra = ice_fill_rx_descs(xsk_pool, xdp, rx_desc,
 						   rx_ring->count - ntu);
 		if (nb_buffs_extra != rx_ring->count - ntu) {
 			ntu += nb_buffs_extra;
@@ -498,7 +501,7 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
 		ice_release_rx_desc(rx_ring, 0);
 	}
 
-	nb_buffs = ice_fill_rx_descs(rx_ring->xsk_pool, xdp, rx_desc, count);
+	nb_buffs = ice_fill_rx_descs(xsk_pool, xdp, rx_desc, count);
 
 	ntu += nb_buffs;
 	if (ntu == rx_ring->count)
@@ -514,6 +517,7 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
 /**
  * ice_alloc_rx_bufs_zc - allocate a number of Rx buffers
  * @rx_ring: Rx ring
+ * @xsk_pool: XSK buffer pool to pick buffers to be filled by HW
  * @count: The number of buffers to allocate
  *
  * Wrapper for internal allocation routine; figure out how many tail
@@ -521,7 +525,8 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
 *
 * Returns true if all calls to internal alloc routine succeeded
 */
-bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
+bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring,
+			  struct xsk_buff_pool *xsk_pool, u16 count)
 {
 	u16 rx_thresh = ICE_RING_QUARTER(rx_ring);
 	u16 leftover, i, tail_bumps;
@@ -530,9 +535,9 @@ bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
 	leftover = count - (tail_bumps * rx_thresh);
 
 	for (i = 0; i < tail_bumps; i++)
-		if (!__ice_alloc_rx_bufs_zc(rx_ring, rx_thresh))
+		if (!__ice_alloc_rx_bufs_zc(rx_ring, xsk_pool, rx_thresh))
 			return false;
-	return __ice_alloc_rx_bufs_zc(rx_ring, leftover);
+	return __ice_alloc_rx_bufs_zc(rx_ring, xsk_pool, leftover);
 }
 
 /**
@@ -601,8 +606,10 @@ ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp)
 /**
  * ice_clean_xdp_irq_zc - produce AF_XDP descriptors to CQ
  * @xdp_ring: XDP Tx ring
+ * @xsk_pool: AF_XDP buffer pool pointer
 */
-static u32 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring)
+static u32 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring,
+				struct xsk_buff_pool *xsk_pool)
 {
 	u16 ntc = xdp_ring->next_to_clean;
 	struct ice_tx_desc *tx_desc;
@@ -653,7 +660,7 @@ static u32 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring)
 	if (xdp_ring->next_to_clean >= cnt)
 		xdp_ring->next_to_clean -= cnt;
 	if (xsk_frames)
-		xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames);
+		xsk_tx_completed(xsk_pool, xsk_frames);
 
 	return completed_frames;
 }
@@ -662,6 +669,7 @@ static u32 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring)
  * ice_xmit_xdp_tx_zc - AF_XDP ZC handler for XDP_TX
  * @xdp: XDP buffer to xmit
  * @xdp_ring: XDP ring to produce descriptor onto
+ * @xsk_pool: AF_XDP buffer pool pointer
  *
  * note that this function works directly on xdp_buff, no need to convert
  * it to xdp_frame. xdp_buff pointer is stored to ice_tx_buf so that cleaning
@@ -671,7 +679,8 @@ static u32 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring)
 * was not enough space on XDP ring
 */
 static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
-			      struct ice_tx_ring *xdp_ring)
+			      struct ice_tx_ring *xdp_ring,
+			      struct xsk_buff_pool *xsk_pool)
 {
 	struct skb_shared_info *sinfo = NULL;
 	u32 size = xdp->data_end - xdp->data;
@@ -685,7 +694,7 @@ static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
 
 	free_space = ICE_DESC_UNUSED(xdp_ring);
 	if (free_space < ICE_RING_QUARTER(xdp_ring))
-		free_space += ice_clean_xdp_irq_zc(xdp_ring);
+		free_space += ice_clean_xdp_irq_zc(xdp_ring, xsk_pool);
 
 	if (unlikely(!free_space))
 		goto busy;
@@ -705,7 +714,7 @@ static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
 			dma_addr_t dma;
 
 			dma = xsk_buff_xdp_get_dma(xdp);
-			xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, size);
+			xsk_buff_raw_dma_sync_for_device(xsk_pool, dma, size);
 
 			tx_buf->xdp = xdp;
 			tx_buf->type = ICE_TX_BUF_XSK_TX;
@@ -747,12 +756,14 @@ static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp,
  * @xdp: xdp_buff used as input to the XDP program
 * @xdp_prog: XDP program to run
 * @xdp_ring: ring to be used for XDP_TX action
+ * @xsk_pool: AF_XDP buffer pool pointer
 *
 * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
 */
 static int
 ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
-	       struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring)
+	       struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring,
+	       struct xsk_buff_pool *xsk_pool)
 {
 	int err, result = ICE_XDP_PASS;
 	u32 act;
@@ -763,7 +774,7 @@ ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
 		if (!err)
 			return ICE_XDP_REDIR;
-		if (xsk_uses_need_wakeup(rx_ring->xsk_pool) && err == -ENOBUFS)
+		if (xsk_uses_need_wakeup(xsk_pool) && err == -ENOBUFS)
 			result = ICE_XDP_EXIT;
 		else
 			result = ICE_XDP_CONSUMED;
@@ -774,7 +785,7 @@ ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 	case XDP_PASS:
 		break;
 	case XDP_TX:
-		result = ice_xmit_xdp_tx_zc(xdp, xdp_ring);
+		result = ice_xmit_xdp_tx_zc(xdp, xdp_ring, xsk_pool);
 		if (result == ICE_XDP_CONSUMED)
 			goto out_failure;
 		break;
@@ -826,14 +837,16 @@ ice_add_xsk_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *first,
 /**
  * ice_clean_rx_irq_zc - consumes packets from the hardware ring
  * @rx_ring: AF_XDP Rx ring
+ * @xsk_pool: AF_XDP buffer pool pointer
  * @budget: NAPI budget
 *
 * Returns number of processed packets on success, remaining budget on failure.
 */
-int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
+int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring,
+			struct xsk_buff_pool *xsk_pool,
+			int budget)
 {
 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
-	struct xsk_buff_pool *xsk_pool = rx_ring->xsk_pool;
 	u32 ntc = rx_ring->next_to_clean;
 	u32 ntu = rx_ring->next_to_use;
 	struct xdp_buff *first = NULL;
@@ -896,7 +909,8 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 		if (ice_is_non_eop(rx_ring, rx_desc))
 			continue;
 
-		xdp_res = ice_run_xdp_zc(rx_ring, first, xdp_prog, xdp_ring);
+		xdp_res = ice_run_xdp_zc(rx_ring, first, xdp_prog, xdp_ring,
+					 xsk_pool);
 		if (likely(xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR))) {
 			xdp_xmit |= xdp_res;
 		} else if (xdp_res == ICE_XDP_EXIT) {
@@ -945,7 +959,8 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 	rx_ring->next_to_clean = ntc;
 	entries_to_alloc = ICE_RX_DESC_UNUSED(rx_ring);
 	if (entries_to_alloc > ICE_RING_QUARTER(rx_ring))
-		failure |= !ice_alloc_rx_bufs_zc(rx_ring, entries_to_alloc);
+		failure |= !ice_alloc_rx_bufs_zc(rx_ring, xsk_pool,
+						 entries_to_alloc);
 
 	ice_finalize_xdp_rx(xdp_ring, xdp_xmit, 0);
 	ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes);
@@ -968,17 +983,19 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 /**
  * ice_xmit_pkt - produce a single HW Tx descriptor out of AF_XDP descriptor
  * @xdp_ring: XDP ring to produce the HW Tx descriptor on
+ * @xsk_pool: XSK buffer pool to pick buffers to be consumed by HW
  * @desc: AF_XDP descriptor to pull the DMA address and length from
  * @total_bytes: bytes accumulator that will be used for stats update
 */
-static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring, struct xdp_desc *desc,
+static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring,
+			 struct xsk_buff_pool *xsk_pool, struct xdp_desc *desc,
 			 unsigned int *total_bytes)
 {
 	struct ice_tx_desc *tx_desc;
 	dma_addr_t dma;
 
-	dma = xsk_buff_raw_get_dma(xdp_ring->xsk_pool, desc->addr);
-	xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, desc->len);
+	dma = xsk_buff_raw_get_dma(xsk_pool, desc->addr);
+	xsk_buff_raw_dma_sync_for_device(xsk_pool, dma, desc->len);
 
 	tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_to_use++);
 	tx_desc->buf_addr = cpu_to_le64(dma);
@@ -991,10 +1008,13 @@ static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring, struct xdp_desc *desc,
 /**
  * ice_xmit_pkt_batch - produce a batch of HW Tx descriptors out of AF_XDP descriptors
  * @xdp_ring: XDP ring to produce the HW Tx descriptors on
+ * @xsk_pool: XSK buffer pool to pick buffers to be consumed by HW
  * @descs: AF_XDP descriptors to pull the DMA addresses and lengths from
  * @total_bytes: bytes accumulator that will be used for stats update
 */
-static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs,
+static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring,
+			       struct xsk_buff_pool *xsk_pool,
+			       struct xdp_desc *descs,
 			       unsigned int *total_bytes)
 {
 	u16 ntu = xdp_ring->next_to_use;
@@ -1004,8 +1024,8 @@ static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *de
 	loop_unrolled_for(i = 0; i < PKTS_PER_BATCH; i++) {
 		dma_addr_t dma;
 
-		dma = xsk_buff_raw_get_dma(xdp_ring->xsk_pool, descs[i].addr);
-		xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, descs[i].len);
+		dma = xsk_buff_raw_get_dma(xsk_pool, descs[i].addr);
+		xsk_buff_raw_dma_sync_for_device(xsk_pool, dma, descs[i].len);
 
 		tx_desc = ICE_TX_DESC(xdp_ring, ntu++);
 		tx_desc->buf_addr = cpu_to_le64(dma);
@@ -1021,37 +1041,41 @@ static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *de
 /**
  * ice_fill_tx_hw_ring - produce the number of Tx descriptors onto ring
  * @xdp_ring: XDP ring to produce the HW Tx descriptors on
+ * @xsk_pool: XSK buffer pool to pick buffers to be consumed by HW
  * @descs: AF_XDP descriptors to pull the DMA addresses and lengths from
  * @nb_pkts: count of packets to be send
  * @total_bytes: bytes accumulator that will be used for stats update
 */
-static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs,
-				u32 nb_pkts, unsigned int *total_bytes)
+static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring,
+				struct xsk_buff_pool *xsk_pool,
+				struct xdp_desc *descs, u32 nb_pkts,
+				unsigned int *total_bytes)
 {
 	u32 batched, leftover, i;
 
 	batched = ALIGN_DOWN(nb_pkts, PKTS_PER_BATCH);
 	leftover = nb_pkts & (PKTS_PER_BATCH - 1);
 
 	for (i = 0; i < batched; i += PKTS_PER_BATCH)
-		ice_xmit_pkt_batch(xdp_ring, &descs[i], total_bytes);
+		ice_xmit_pkt_batch(xdp_ring, xsk_pool, &descs[i], total_bytes);
 	for (; i < batched + leftover; i++)
-		ice_xmit_pkt(xdp_ring, &descs[i], total_bytes);
+		ice_xmit_pkt(xdp_ring, xsk_pool, &descs[i], total_bytes);
 }
 
 /**
  * ice_xmit_zc - take entries from XSK Tx ring and place them onto HW Tx ring
  * @xdp_ring: XDP ring to produce the HW Tx descriptors on
+ * @xsk_pool: AF_XDP buffer pool pointer
 *
 * Returns true if there is no more work that needs to be done, false otherwise
 */
-bool ice_xmit_zc(struct ice_tx_ring *xdp_ring)
+bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, struct xsk_buff_pool *xsk_pool)
 {
-	struct xdp_desc *descs = xdp_ring->xsk_pool->tx_descs;
+	struct xdp_desc *descs = xsk_pool->tx_descs;
 	u32 nb_pkts, nb_processed = 0;
 	unsigned int total_bytes = 0;
 	int budget;
 
-	ice_clean_xdp_irq_zc(xdp_ring);
+	ice_clean_xdp_irq_zc(xdp_ring, xsk_pool);
 
 	if (!netif_carrier_ok(xdp_ring->vsi->netdev) ||
 	    !netif_running(xdp_ring->vsi->netdev))
@@ -1060,25 +1084,26 @@ bool ice_xmit_zc(struct ice_tx_ring *xdp_ring)
 	budget = ICE_DESC_UNUSED(xdp_ring);
 	budget = min_t(u16, budget, ICE_RING_QUARTER(xdp_ring));
 
-	nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, budget);
+	nb_pkts = xsk_tx_peek_release_desc_batch(xsk_pool, budget);
 	if (!nb_pkts)
 		return true;
 
 	if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) {
 		nb_processed = xdp_ring->count - xdp_ring->next_to_use;
-		ice_fill_tx_hw_ring(xdp_ring, descs, nb_processed, &total_bytes);
+		ice_fill_tx_hw_ring(xdp_ring, xsk_pool, descs, nb_processed,
+				    &total_bytes);
 		xdp_ring->next_to_use = 0;
 	}
 
-	ice_fill_tx_hw_ring(xdp_ring, &descs[nb_processed], nb_pkts - nb_processed,
-			    &total_bytes);
+	ice_fill_tx_hw_ring(xdp_ring, xsk_pool, &descs[nb_processed],
+			    nb_pkts - nb_processed, &total_bytes);
 
 	ice_set_rs_bit(xdp_ring);
 	ice_xdp_ring_update_tail(xdp_ring);
 	ice_update_tx_ring_stats(xdp_ring, nb_pkts, total_bytes);
 
-	if (xsk_uses_need_wakeup(xdp_ring->xsk_pool))
-		xsk_set_tx_need_wakeup(xdp_ring->xsk_pool);
+	if (xsk_uses_need_wakeup(xsk_pool))
+		xsk_set_tx_need_wakeup(xsk_pool);
 
 	return nb_pkts < budget;
 }
@@ -1111,7 +1136,7 @@ ice_xsk_wakeup(struct net_device *netdev, u32 queue_id,
 
 	ring = vsi->rx_rings[queue_id]->xdp_ring;
 
-	if (!ring->xsk_pool)
+	if (!READ_ONCE(ring->xsk_pool))
 		return -EINVAL;
 
 	/* The idea here is that if NAPI is running, mark a miss, so
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.h b/drivers/net/ethernet/intel/ice/ice_xsk.h
index 6fa181f080ef..45adeb513253 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.h
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.h
@@ -20,16 +20,20 @@ struct ice_vsi;
 #ifdef CONFIG_XDP_SOCKETS
 int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool,
 		       u16 qid);
-int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget);
+int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring,
+			struct xsk_buff_pool *xsk_pool,
+			int budget);
 int ice_xsk_wakeup(struct net_device *netdev, u32 queue_id, u32 flags);
-bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count);
+bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring,
+			  struct xsk_buff_pool *xsk_pool, u16 count);
 bool ice_xsk_any_rx_ring_ena(struct ice_vsi *vsi);
 void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring);
 void ice_xsk_clean_xdp_ring(struct ice_tx_ring *xdp_ring);
-bool ice_xmit_zc(struct ice_tx_ring *xdp_ring);
+bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, struct xsk_buff_pool *xsk_pool);
 int ice_realloc_zc_buf(struct ice_vsi *vsi, bool zc);
 #else
-static inline bool ice_xmit_zc(struct ice_tx_ring __always_unused *xdp_ring)
+static inline bool ice_xmit_zc(struct ice_tx_ring __always_unused *xdp_ring,
+			       struct xsk_buff_pool __always_unused *xsk_pool)
 {
 	return false;
 }
@@ -44,6 +48,7 @@ ice_xsk_pool_setup(struct ice_vsi __always_unused *vsi,
 
 static inline int
 ice_clean_rx_irq_zc(struct ice_rx_ring __always_unused *rx_ring,
+		    struct xsk_buff_pool __always_unused *xsk_pool,
		    int __always_unused budget)
 {
 	return 0;
@@ -51,6 +56,7 @@ ice_clean_rx_irq_zc(struct ice_rx_ring __always_unused *rx_ring,
 
 static inline bool
 ice_alloc_rx_bufs_zc(struct ice_rx_ring __always_unused *rx_ring,
+		     struct xsk_buff_pool __always_unused *xsk_pool,
 		     u16 __always_unused count)
 {
 	return false;
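The core of the patch is a pointer-publication pattern: one marked read
per NAPI poll, one marked write from the configuration path. Below is a
user-space sketch of it, with volatile-based stand-ins for the kernel's
READ_ONCE()/WRITE_ONCE() macros and made-up ring/pool types:

#include <stdio.h>

/* User-space stand-ins for the kernel's ONCE macros: force a single,
 * untorn load/store so the compiler cannot re-read or cache the pointer. */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

struct xsk_pool { int id; };

struct ring {
	struct xsk_pool *xsk_pool;	/* written by ndo_bpf, read by NAPI */
};

/* Hot-path pattern from the patch: snapshot the pointer once per poll
 * and pass the local copy down, instead of dereferencing ring->xsk_pool
 * repeatedly while a remote CPU may be swapping it. */
static void napi_poll(struct ring *r)
{
	struct xsk_pool *pool = READ_ONCE(r->xsk_pool);

	if (pool)
		printf("zero-copy path, pool %d\n", pool->id);
	else
		printf("regular path\n");
}

int main(void)
{
	static struct xsk_pool p = { .id = 1 };
	struct ring r = { NULL };

	napi_poll(&r);
	WRITE_ONCE(r.xsk_pool, &p);	/* ndo_bpf publishes the pool */
	napi_poll(&r);
	return 0;
}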
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1722283648; x=1753819648; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=n7ANfRKum6afoJJa4U7py2dwNaCRzE2vIh6HSC/8KQM=; b=WZcST0ez25ay/ISrtO9HWUehNe0PVhjiZQk4fHLnXtBDH3HucGIzPiVy Si53FUAQOaYdd3MwpQJwunRyLovBFUjfxPAL3lun8L0Ae1jEO1iXwlL20 KGvtlalVwAGWR2YuPjYbGzFKau8wbBu/uBGRIxX4xotjgmmfoGwE85p45 GYW4eSzB1RxEbi1cdZBR16MUjFyNoHC1TQPYmCkGmPH1XDh6o9OwhnJ8W djw6Y5BAPJq4E54N4uJuQbFs4DP55myLeNEmZHw6BA5Nl+Gi3VlAMG8ay ocAGIcid1cAj3BSLW6Wm3zVzfHx77Wlt50Ti6xFQg7UudPUUgvuniRN6B w==; X-CSE-ConnectionGUID: ADYladQITbu/Vyu44ciWKQ== X-CSE-MsgGUID: 1wTPd1uuQuWweQICPjOX4g== X-IronPort-AV: E=McAfee;i="6700,10204,11148"; a="23818547" X-IronPort-AV: E=Sophos;i="6.09,246,1716274800"; d="scan'208";a="23818547" Received: from orviesa007.jf.intel.com ([10.64.159.147]) by orvoesa107.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jul 2024 13:07:23 -0700 X-CSE-ConnectionGUID: Y+jJCG9EQCCnLXVTLp6uCQ== X-CSE-MsgGUID: vtAE8FltSqWOBBOrxmnVLg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,246,1716274800"; d="scan'208";a="54681297" Received: from anguy11-upstream.jf.intel.com ([10.166.9.133]) by orviesa007.jf.intel.com with ESMTP; 29 Jul 2024 13:07:24 -0700 From: Tony Nguyen To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, edumazet@google.com, netdev@vger.kernel.org Cc: Maciej Fijalkowski , anthony.l.nguyen@intel.com, magnus.karlsson@intel.com, aleksander.lobakin@intel.com, ast@kernel.org, daniel@iogearbox.net, hawk@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org, Shannon Nelson , Chandan Kumar Rout Subject: [PATCH net v2 7/8] ice: add missing WRITE_ONCE when clearing ice_rx_ring::xdp_prog Date: Mon, 29 Jul 2024 13:07:13 -0700 Message-ID: <20240729200716.681496-8-anthony.l.nguyen@intel.com> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20240729200716.681496-1-anthony.l.nguyen@intel.com> References: <20240729200716.681496-1-anthony.l.nguyen@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org From: Maciej Fijalkowski It is read by data path and modified from process context on remote cpu so it is needed to use WRITE_ONCE to clear the pointer. 
Fixes: efc2214b6047 ("ice: Add support for XDP")
Reviewed-by: Shannon Nelson
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
Signed-off-by: Maciej Fijalkowski
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 0f91e9167427..8d25b6981269 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -456,7 +456,7 @@ void ice_free_rx_ring(struct ice_rx_ring *rx_ring)
 	if (rx_ring->vsi->type == ICE_VSI_PF)
 		if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
 			xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
-	rx_ring->xdp_prog = NULL;
+	WRITE_ONCE(rx_ring->xdp_prog, NULL);
 	if (rx_ring->xsk_pool) {
 		kfree(rx_ring->xdp_buf);
 		rx_ring->xdp_buf = NULL;
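For context, here is the pairing this one-liner completes, shown as a minimal
sketch rather than driver code; ring_has_prog() and ring_clear_prog() are
hypothetical names, while READ_ONCE()/WRITE_ONCE() are the kernel macros from
<linux/compiler.h>. Once a lock-free reader uses READ_ONCE(), every writer
must use WRITE_ONCE() so the store is a single access the compiler cannot
tear, fuse, or reorder:

#include <linux/compiler.h>

struct bpf_prog;

struct rx_ring_sketch {
	struct bpf_prog *xdp_prog;
};

/* Data path, potentially running concurrently on another CPU. */
static bool ring_has_prog(struct rx_ring_sketch *rx)
{
	return READ_ONCE(rx->xdp_prog) != NULL;
}

/* Process context; the marked store pairs with the marked load above. */
static void ring_clear_prog(struct rx_ring_sketch *rx)
{
	WRITE_ONCE(rx->xdp_prog, NULL);
}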
From patchwork Mon Jul 29 20:07:14 2024
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 13745660
X-Patchwork-Delegate: kuba@kernel.org
From: Tony Nguyen
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com,
    edumazet@google.com, netdev@vger.kernel.org
Cc: Maciej Fijalkowski, anthony.l.nguyen@intel.com, magnus.karlsson@intel.com,
    aleksander.lobakin@intel.com, ast@kernel.org, daniel@iogearbox.net,
    hawk@kernel.org, john.fastabend@gmail.com, bpf@vger.kernel.org,
    Shannon Nelson, Chandan Kumar Rout
Subject: [PATCH net v2 8/8] ice: xsk: fix txq interrupt mapping
Date: Mon, 29 Jul 2024 13:07:14 -0700
Message-ID: <20240729200716.681496-9-anthony.l.nguyen@intel.com>
In-Reply-To: <20240729200716.681496-1-anthony.l.nguyen@intel.com>
References: <20240729200716.681496-1-anthony.l.nguyen@intel.com>

From: Maciej Fijalkowski

ice_cfg_txq_interrupt() internally handles the XDP Tx ring. Do not use
ice_for_each_tx_ring() in ice_qvec_cfg_msix(), as that walk treats the
XDP ring belonging to the queue vector as a regular Tx ring and thereby
misconfigures the interrupts.

Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Reviewed-by: Shannon Nelson
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
Signed-off-by: Maciej Fijalkowski
Signed-off-by: Tony Nguyen
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index ee084ad80a61..240a7bec242b 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -110,25 +110,29 @@ ice_qvec_dis_irq(struct ice_vsi *vsi, struct ice_rx_ring *rx_ring,
  * ice_qvec_cfg_msix - Enable IRQ for given queue vector
  * @vsi: the VSI that contains queue vector
  * @q_vector: queue vector
+ * @qid: queue index
  */
 static void
-ice_qvec_cfg_msix(struct ice_vsi *vsi, struct ice_q_vector *q_vector)
+ice_qvec_cfg_msix(struct ice_vsi *vsi, struct ice_q_vector *q_vector, u16 qid)
 {
 	u16 reg_idx = q_vector->reg_idx;
 	struct ice_pf *pf = vsi->back;
 	struct ice_hw *hw = &pf->hw;
-	struct ice_tx_ring *tx_ring;
-	struct ice_rx_ring *rx_ring;
+	int q, _qid = qid;
 
 	ice_cfg_itr(hw, q_vector);
 
-	ice_for_each_tx_ring(tx_ring, q_vector->tx)
-		ice_cfg_txq_interrupt(vsi, tx_ring->reg_idx, reg_idx,
-				      q_vector->tx.itr_idx);
+	for (q = 0; q < q_vector->num_ring_tx; q++) {
+		ice_cfg_txq_interrupt(vsi, _qid, reg_idx, q_vector->tx.itr_idx);
+		_qid++;
+	}
 
-	ice_for_each_rx_ring(rx_ring, q_vector->rx)
-		ice_cfg_rxq_interrupt(vsi, rx_ring->reg_idx, reg_idx,
-				      q_vector->rx.itr_idx);
+	_qid = qid;
+
+	for (q = 0; q < q_vector->num_ring_rx; q++) {
+		ice_cfg_rxq_interrupt(vsi, _qid, reg_idx, q_vector->rx.itr_idx);
+		_qid++;
+	}
 
 	ice_flush(hw);
 }
@@ -241,7 +245,7 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
 		fail = err;
 
 	q_vector = vsi->rx_rings[q_idx]->q_vector;
-	ice_qvec_cfg_msix(vsi, q_vector);
+	ice_qvec_cfg_msix(vsi, q_vector, q_idx);
 
 	err = ice_vsi_ctrl_one_rx_ring(vsi, true, q_idx, true);
 	if (!fail)
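To make the mapping bug concrete, a commented sketch follows; the *_sketch
types and the cfg_txq_interrupt() prototype are illustrative stand-ins, not
the driver's definitions. Because the real ice_cfg_txq_interrupt() also
programs the XDP queue paired with a given qid, and the vector's Tx ring list
links both the regular Tx ring and the XDP ring, walking that list configured
the XDP ring a second time as though it were a standalone Tx queue; counting
Tx queues by index programs each qid exactly once:

#include <linux/types.h>

struct vsi_sketch;

struct qvec_sketch {
	u16 reg_idx;		/* interrupt vector register index */
	u16 num_ring_tx;	/* regular Tx queues only, XDP excluded */
	u16 tx_itr_idx;
};

/* Stand-in for ice_cfg_txq_interrupt(): also covers the XDP queue paired
 * with qid when XDP is enabled, which is why the caller must not walk the
 * ring list.
 */
void cfg_txq_interrupt(struct vsi_sketch *vsi, u16 qid, u16 reg_idx,
		       u16 itr_idx);

static void qvec_cfg_tx_irq_sketch(struct vsi_sketch *vsi,
				   struct qvec_sketch *qv, u16 qid)
{
	int q;

	/* One call per Tx qid; the helper handles the paired XDP queue. */
	for (q = 0; q < qv->num_ring_tx; q++)
		cfg_txq_interrupt(vsi, qid + q, qv->reg_idx, qv->tx_itr_idx);
}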