From patchwork Wed May 29 11:23:30 2024
X-Patchwork-Submitter: Maciej Fijalkowski
X-Patchwork-Id: 13678725
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com, magnus.karlsson@intel.com, michal.kubiak@intel.com, larysa.zaremba@intel.com, Maciej Fijalkowski
Subject: [PATCH v2 iwl-net 1/8] ice: respect netif readiness in AF_XDP ZC related ndo's
Date: Wed, 29 May 2024 13:23:30 +0200
Message-Id: <20240529112337.3639084-2-maciej.fijalkowski@intel.com>

From: Michal Kubiak

Address a scenario in which XSK ZC Tx produces descriptors to the XDP Tx ring when the link is either not yet fully initialized or the process of stopping the netdev has already started. To avoid this, add checks against carrier readiness in ice_xsk_wakeup() and in ice_xmit_zc(). One could argue that bailing out early in ice_xsk_wakeup() would be sufficient, but given that we produce Tx descriptors on behalf of NAPI that is triggered for Rx traffic, the latter is also needed. Bringing the link up is an asynchronous event executed within ice_service_task, so even though the interface has been brought up, there is still a time frame where the link is not yet ok.

Without this patch, when AF_XDP ZC Tx is used simultaneously with stack Tx, Tx timeouts occur after going through a link flap (admin brings the interface down and then up again). HW seems to be unable to transmit descriptors to the wire after the HW tail register bump, which in turn causes the __QUEUE_STATE_STACK_XOFF bit to be set forever, as netdev_tx_completed_queue() sees no cleaned bytes on its input.

Fixes: 126cdfe1007a ("ice: xsk: Improve AF_XDP ZC Tx and use batching API")
Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Michal Kubiak
Signed-off-by: Maciej Fijalkowski
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 2015f66b0cf9..1bd4b054dd80 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -1048,6 +1048,10 @@ bool ice_xmit_zc(struct ice_tx_ring *xdp_ring)
 
 	ice_clean_xdp_irq_zc(xdp_ring);
 
+	if (!netif_carrier_ok(xdp_ring->vsi->netdev) ||
+	    !netif_running(xdp_ring->vsi->netdev))
+		return true;
+
 	budget = ICE_DESC_UNUSED(xdp_ring);
 	budget = min_t(u16, budget, ICE_RING_QUARTER(xdp_ring));
 
@@ -1091,7 +1095,7 @@ ice_xsk_wakeup(struct net_device *netdev, u32 queue_id,
 	struct ice_vsi *vsi = np->vsi;
 	struct ice_tx_ring *ring;
 
-	if (test_bit(ICE_VSI_DOWN, vsi->state))
+	if (test_bit(ICE_VSI_DOWN, vsi->state) || !netif_carrier_ok(netdev))
 		return -ENETDOWN;
 
 	if (!ice_is_xdp_ena_vsi(vsi))

From patchwork Wed May 29 11:23:31 2024
X-Patchwork-Submitter: Maciej Fijalkowski
X-Patchwork-Id: 13678726
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com, magnus.karlsson@intel.com, michal.kubiak@intel.com, larysa.zaremba@intel.com, Maciej Fijalkowski
Subject: [PATCH v2 iwl-net 2/8] ice: don't busy wait for Rx queue disable in ice_qp_dis()
Date: Wed, 29 May 2024 13:23:31 +0200
Message-Id: <20240529112337.3639084-3-maciej.fijalkowski@intel.com>

When the ice driver is spammed with multiple xdpsock instances and flow control is enabled, there are cases where an Rx queue gets stuck and is unable to reflect the disabled state in the QRX_CTRL register. A similar issue has previously been addressed in commit 13a6233b033f ("ice: Add support to enable/disable all Rx queues before waiting").

To work around this, simply do not wait for the disabled state, as a later patch will make sure that, regardless of any error encountered in the process of disabling a queue pair, the Rx queue will be enabled.
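The call-site change this boils down to is shown in isolation below (the meaning of the last argument of ice_vsi_ctrl_one_rx_ring() is inferred from its use in the diff that follows):

        /* before: request disable and busy wait until QRX_CTRL reflects it */
        err = ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, true);
        if (err)
                return err;

        /* after: request disable and move on; the enable path will bring the
         * Rx queue back up regardless of errors seen while disabling
         */
        ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, false);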
Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Maciej Fijalkowski
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 1bd4b054dd80..4f606a1055b0 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -199,10 +199,8 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 		if (err)
 			return err;
 	}
 
-	err = ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, true);
-	if (err)
-		return err;
+	ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, false);
 
 	ice_qp_clean_rings(vsi, q_idx);
 	ice_qp_reset_stats(vsi, q_idx);

From patchwork Wed May 29 11:23:32 2024
X-Patchwork-Submitter: Maciej Fijalkowski
X-Patchwork-Id: 13678727
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com, magnus.karlsson@intel.com, michal.kubiak@intel.com, larysa.zaremba@intel.com, Maciej Fijalkowski
Subject: [PATCH v2 iwl-net 3/8] ice: replace synchronize_rcu with synchronize_net
Date: Wed, 29 May 2024 13:23:32 +0200
Message-Id: <20240529112337.3639084-4-maciej.fijalkowski@intel.com>

Given that ice_qp_dis() is called under rtnl_lock, synchronize_net() can be called instead of synchronize_rcu() so that XDP rings can finish their job faster. Also, do this earlier in the XSK queue disable flow. Additionally, turn off the regular Tx queue before disabling IRQs and NAPI.

Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Maciej Fijalkowski
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 4f606a1055b0..e93cb0ca4106 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -53,7 +53,6 @@ static void ice_qp_clean_rings(struct ice_vsi *vsi, u16 q_idx)
 {
 	ice_clean_tx_ring(vsi->tx_rings[q_idx]);
 	if (ice_is_xdp_ena_vsi(vsi)) {
-		synchronize_rcu();
 		ice_clean_tx_ring(vsi->xdp_rings[q_idx]);
 	}
 	ice_clean_rx_ring(vsi->rx_rings[q_idx]);
@@ -180,11 +179,12 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 		usleep_range(1000, 2000);
 	}
 
+	synchronize_net();
+	netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+
 	ice_qvec_dis_irq(vsi, rx_ring, q_vector);
 	ice_qvec_toggle_napi(vsi, q_vector, false);
 
-	netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
-
 	ice_fill_txq_meta(vsi, tx_ring, &txq_meta);
 	err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta);
 	if (err)

From patchwork Wed May 29 11:23:33 2024
X-Patchwork-Submitter: Maciej Fijalkowski
X-Patchwork-Id: 13678728
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com, magnus.karlsson@intel.com, michal.kubiak@intel.com, larysa.zaremba@intel.com, Maciej Fijalkowski
Subject: [PATCH v2 iwl-net 4/8] ice: modify error handling when setting XSK pool in ndo_bpf
Date: Wed, 29 May 2024 13:23:33 +0200
Message-Id: <20240529112337.3639084-5-maciej.fijalkowski@intel.com>

Don't bail out as soon as an error is spotted within ice_qp_{dis,ena}(); instead, track the error and go through the whole flow of disabling and enabling the queue pair.
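In other words, every teardown/bring-up step keeps executing and only the first failure is reported at the end. A minimal sketch of that pattern in isolation (the step names here are placeholders, not driver functions):

        int fail = 0;   /* first error seen, 0 if none */
        int err;

        err = stop_tx_ring();           /* placeholder step */
        if (!fail)
                fail = err;

        err = disable_rx_ring();        /* placeholder step */
        if (!fail)
                fail = err;

        /* ... remaining steps always run ... */

        return fail;                    /* report the first failure, if any */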
Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Maciej Fijalkowski
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 30 +++++++++++++-----------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index e93cb0ca4106..3dcab89be256 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -163,6 +163,7 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 	struct ice_tx_ring *tx_ring;
 	struct ice_rx_ring *rx_ring;
 	int timeout = 50;
+	int fail = 0;
 	int err;
 
 	if (q_idx >= vsi->num_rxq || q_idx >= vsi->num_txq)
@@ -187,8 +188,8 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 	ice_fill_txq_meta(vsi, tx_ring, &txq_meta);
 	err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta);
-	if (err)
-		return err;
+	if (!fail)
+		fail = err;
 
 	if (ice_is_xdp_ena_vsi(vsi)) {
 		struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_idx];
@@ -196,15 +197,15 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 		ice_fill_txq_meta(vsi, xdp_ring, &txq_meta);
 		err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, xdp_ring, &txq_meta);
-		if (err)
-			return err;
+		if (!fail)
+			fail = err;
 	}
 
 	ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, false);
 	ice_qp_clean_rings(vsi, q_idx);
 	ice_qp_reset_stats(vsi, q_idx);
 
-	return 0;
+	return fail;
 }
 
 /**
@@ -217,32 +218,33 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
 {
 	struct ice_q_vector *q_vector;
+	int fail = 0;
 	int err;
 
 	err = ice_vsi_cfg_single_txq(vsi, vsi->tx_rings, q_idx);
-	if (err)
-		return err;
+	if (!fail)
+		fail = err;
 
 	if (ice_is_xdp_ena_vsi(vsi)) {
 		struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_idx];
 
 		err = ice_vsi_cfg_single_txq(vsi, vsi->xdp_rings, q_idx);
-		if (err)
-			return err;
+		if (!fail)
+			fail = err;
 
 		ice_set_ring_xdp(xdp_ring);
 		ice_tx_xsk_pool(vsi, q_idx);
 	}
 
 	err = ice_vsi_cfg_single_rxq(vsi, q_idx);
-	if (err)
-		return err;
+	if (!fail)
+		fail = err;
 
 	q_vector = vsi->rx_rings[q_idx]->q_vector;
 	ice_qvec_cfg_msix(vsi, q_vector);
 
 	err = ice_vsi_ctrl_one_rx_ring(vsi, true, q_idx, true);
-	if (err)
-		return err;
+	if (!fail)
+		fail = err;
 
 	ice_qvec_toggle_napi(vsi, q_vector, true);
 	ice_qvec_ena_irq(vsi, q_vector);
@@ -250,7 +252,7 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
 	netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
 	clear_bit(ICE_CFG_BUSY, vsi->state);
 
-	return 0;
+	return fail;
 }

From patchwork Wed May 29 11:23:34 2024
X-Patchwork-Submitter: Maciej Fijalkowski
X-Patchwork-Id: 13678729
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com, magnus.karlsson@intel.com, michal.kubiak@intel.com, larysa.zaremba@intel.com, Maciej Fijalkowski
Subject: [PATCH v2 iwl-net 5/8] ice: toggle netif_carrier when setting up XSK pool
Date: Wed, 29 May 2024 13:23:34 +0200
Message-Id: <20240529112337.3639084-6-maciej.fijalkowski@intel.com>

This is done to prevent Tx timeout issues. One of the conditions checked by dev_watchdog(), which runs in the background, is netif_carrier_ok(), so let us turn the carrier off when we disable the queues that belong to a q_vector where an XSK pool is being configured.
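A simplified sketch of the resulting ordering, mirroring the diff below (the queue pair teardown/bring-up in between is elided):

        /* queue pair disable path */
        netif_carrier_off(vsi->netdev);         /* dev_watchdog() only flags carrier-ok devices */
        netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
        /* ... disable IRQs, NAPI and rings, swap the XSK pool ... */

        /* queue pair enable path */
        netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
        netif_carrier_on(vsi->netdev);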
Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Maciej Fijalkowski
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 3dcab89be256..8c5006f37310 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -181,6 +181,7 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 	}
 
 	synchronize_net();
+	netif_carrier_off(vsi->netdev);
 	netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
 
 	ice_qvec_dis_irq(vsi, rx_ring, q_vector);
@@ -250,6 +251,7 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
 	ice_qvec_ena_irq(vsi, q_vector);
 
 	netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
+	netif_carrier_on(vsi->netdev);
 	clear_bit(ICE_CFG_BUSY, vsi->state);
 
 	return fail;

From patchwork Wed May 29 11:23:35 2024
X-Patchwork-Submitter: Maciej Fijalkowski
X-Patchwork-Id: 13678730
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com, magnus.karlsson@intel.com, michal.kubiak@intel.com, larysa.zaremba@intel.com, Maciej Fijalkowski
Subject: [PATCH v2 iwl-net 6/8] ice: improve updating ice_{t,r}x_ring::xsk_pool
Date: Wed, 29 May 2024 13:23:35 +0200
Message-Id: <20240529112337.3639084-7-maciej.fijalkowski@intel.com>

The xsk_buff_pool pointers that the ice ring structs hold are updated via ndo_bpf, which is executed in process context, while they can be read by a remote CPU at the same time within NAPI poll. Use synchronize_net() after the pointer update and {READ,WRITE}_ONCE() when working with the mentioned pointers.

Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Maciej Fijalkowski
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice.h      |  6 +-
 drivers/net/ethernet/intel/ice/ice_base.c |  4 +-
 drivers/net/ethernet/intel/ice/ice_main.c |  2 +-
 drivers/net/ethernet/intel/ice/ice_txrx.c |  4 +-
 drivers/net/ethernet/intel/ice/ice_xsk.c  | 78 ++++++++++++++---------
 drivers/net/ethernet/intel/ice/ice_xsk.h  |  4 +-
 6 files changed, 59 insertions(+), 39 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index da8c8afebc93..701a61d791dd 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -771,12 +771,12 @@ static inline struct xsk_buff_pool *ice_get_xp_from_qid(struct ice_vsi *vsi,
  * Returns a pointer to xsk_buff_pool structure if there is a buffer pool
  * present, NULL otherwise.
*/ -static inline struct xsk_buff_pool *ice_xsk_pool(struct ice_rx_ring *ring) +static inline void ice_xsk_pool(struct ice_rx_ring *ring) { struct ice_vsi *vsi = ring->vsi; u16 qid = ring->q_index; - return ice_get_xp_from_qid(vsi, qid); + WRITE_ONCE(ring->xsk_pool, ice_get_xp_from_qid(vsi, qid)); } /** @@ -801,7 +801,7 @@ static inline void ice_tx_xsk_pool(struct ice_vsi *vsi, u16 qid) if (!ring) return; - ring->xsk_pool = ice_get_xp_from_qid(vsi, qid); + WRITE_ONCE(ring->xsk_pool, ice_get_xp_from_qid(vsi, qid)); } /** diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index 5d396c1a7731..f3dfce136106 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -536,7 +536,7 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring) return err; } - ring->xsk_pool = ice_xsk_pool(ring); + ice_xsk_pool(ring); if (ring->xsk_pool) { xdp_rxq_info_unreg(&ring->xdp_rxq); @@ -597,7 +597,7 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring) return 0; } - ok = ice_alloc_rx_bufs_zc(ring, num_bufs); + ok = ice_alloc_rx_bufs_zc(ring, ring->xsk_pool, num_bufs); if (!ok) { u16 pf_q = ring->vsi->rxq_map[ring->q_index]; diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index 1b61ca3a6eb6..15a6805ac2a1 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -2946,7 +2946,7 @@ static void ice_vsi_rx_napi_schedule(struct ice_vsi *vsi) ice_for_each_rxq(vsi, i) { struct ice_rx_ring *rx_ring = vsi->rx_rings[i]; - if (rx_ring->xsk_pool) + if (READ_ONCE(rx_ring->xsk_pool)) napi_schedule(&rx_ring->q_vector->napi); } } diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c index 8bb743f78fcb..f4b2b1bca234 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c @@ -1523,7 +1523,7 @@ int ice_napi_poll(struct napi_struct *napi, int budget) ice_for_each_tx_ring(tx_ring, q_vector->tx) { bool wd; - if (tx_ring->xsk_pool) + if (READ_ONCE(tx_ring->xsk_pool)) wd = ice_xmit_zc(tx_ring); else if (ice_ring_is_xdp(tx_ring)) wd = true; @@ -1556,7 +1556,7 @@ int ice_napi_poll(struct napi_struct *napi, int budget) * comparison in the irq context instead of many inside the * ice_clean_rx_irq function and makes the codebase cleaner. */ - cleaned = rx_ring->xsk_pool ? + cleaned = READ_ONCE(rx_ring->xsk_pool) ? ice_clean_rx_irq_zc(rx_ring, budget_per_ring) : ice_clean_rx_irq(rx_ring, budget_per_ring); work_done += cleaned; diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c index 8c5006f37310..693f0e3a37ce 100644 --- a/drivers/net/ethernet/intel/ice/ice_xsk.c +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c @@ -250,6 +250,8 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx) ice_qvec_toggle_napi(vsi, q_vector, true); ice_qvec_ena_irq(vsi, q_vector); + /* make sure NAPI sees updated ice_{t,x}_ring::xsk_pool */ + synchronize_net(); netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx)); netif_carrier_on(vsi->netdev); clear_bit(ICE_CFG_BUSY, vsi->state); @@ -461,6 +463,7 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp, /** * __ice_alloc_rx_bufs_zc - allocate a number of Rx buffers * @rx_ring: Rx ring + * @xsk_pool: XSK buffer pool to pick buffers to be filled by HW * @count: The number of buffers to allocate * * Place the @count of descriptors onto Rx ring. 
Handle the ring wrap @@ -469,7 +472,8 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp, * * Returns true if all allocations were successful, false if any fail. */ -static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count) +static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, + struct xsk_buff_pool *xsk_pool, u16 count) { u32 nb_buffs_extra = 0, nb_buffs = 0; union ice_32b_rx_flex_desc *rx_desc; @@ -481,8 +485,7 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count) xdp = ice_xdp_buf(rx_ring, ntu); if (ntu + count >= rx_ring->count) { - nb_buffs_extra = ice_fill_rx_descs(rx_ring->xsk_pool, xdp, - rx_desc, + nb_buffs_extra = ice_fill_rx_descs(xsk_pool, xdp, rx_desc, rx_ring->count - ntu); if (nb_buffs_extra != rx_ring->count - ntu) { ntu += nb_buffs_extra; @@ -495,7 +498,7 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count) ice_release_rx_desc(rx_ring, 0); } - nb_buffs = ice_fill_rx_descs(rx_ring->xsk_pool, xdp, rx_desc, count); + nb_buffs = ice_fill_rx_descs(xsk_pool, xdp, rx_desc, count); ntu += nb_buffs; if (ntu == rx_ring->count) @@ -511,6 +514,7 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count) /** * ice_alloc_rx_bufs_zc - allocate a number of Rx buffers * @rx_ring: Rx ring + * @xsk_pool: XSK buffer pool to pick buffers to be filled by HW * @count: The number of buffers to allocate * * Wrapper for internal allocation routine; figure out how many tail @@ -518,7 +522,8 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count) * * Returns true if all calls to internal alloc routine succeeded */ -bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count) +bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, + struct xsk_buff_pool *xsk_pool, u16 count) { u16 rx_thresh = ICE_RING_QUARTER(rx_ring); u16 leftover, i, tail_bumps; @@ -527,9 +532,9 @@ bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count) leftover = count - (tail_bumps * rx_thresh); for (i = 0; i < tail_bumps; i++) - if (!__ice_alloc_rx_bufs_zc(rx_ring, rx_thresh)) + if (!__ice_alloc_rx_bufs_zc(rx_ring, xsk_pool, rx_thresh)) return false; - return __ice_alloc_rx_bufs_zc(rx_ring, leftover); + return __ice_alloc_rx_bufs_zc(rx_ring, xsk_pool, leftover); } /** @@ -650,7 +655,7 @@ static u32 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring) if (xdp_ring->next_to_clean >= cnt) xdp_ring->next_to_clean -= cnt; if (xsk_frames) - xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames); + xsk_tx_completed(READ_ONCE(xdp_ring->xsk_pool), xsk_frames); return completed_frames; } @@ -702,7 +707,8 @@ static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp, dma_addr_t dma; dma = xsk_buff_xdp_get_dma(xdp); - xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, size); + xsk_buff_raw_dma_sync_for_device(READ_ONCE(xdp_ring->xsk_pool), + dma, size); tx_buf->xdp = xdp; tx_buf->type = ICE_TX_BUF_XSK_TX; @@ -760,7 +766,8 @@ ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp, err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog); if (!err) return ICE_XDP_REDIR; - if (xsk_uses_need_wakeup(rx_ring->xsk_pool) && err == -ENOBUFS) + if (xsk_uses_need_wakeup(READ_ONCE(rx_ring->xsk_pool)) && + err == -ENOBUFS) result = ICE_XDP_EXIT; else result = ICE_XDP_CONSUMED; @@ -829,8 +836,8 @@ ice_add_xsk_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *first, */ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget) { + struct xsk_buff_pool *xsk_pool = 
READ_ONCE(rx_ring->xsk_pool); unsigned int total_rx_bytes = 0, total_rx_packets = 0; - struct xsk_buff_pool *xsk_pool = rx_ring->xsk_pool; u32 ntc = rx_ring->next_to_clean; u32 ntu = rx_ring->next_to_use; struct xdp_buff *first = NULL; @@ -942,7 +949,8 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget) rx_ring->next_to_clean = ntc; entries_to_alloc = ICE_RX_DESC_UNUSED(rx_ring); if (entries_to_alloc > ICE_RING_QUARTER(rx_ring)) - failure |= !ice_alloc_rx_bufs_zc(rx_ring, entries_to_alloc); + failure |= !ice_alloc_rx_bufs_zc(rx_ring, xsk_pool, + entries_to_alloc); ice_finalize_xdp_rx(xdp_ring, xdp_xmit, 0); ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes); @@ -965,17 +973,19 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget) /** * ice_xmit_pkt - produce a single HW Tx descriptor out of AF_XDP descriptor * @xdp_ring: XDP ring to produce the HW Tx descriptor on + * @xsk_pool: XSK buffer pool to pick buffers to be consumed by HW * @desc: AF_XDP descriptor to pull the DMA address and length from * @total_bytes: bytes accumulator that will be used for stats update */ -static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring, struct xdp_desc *desc, +static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring, + struct xsk_buff_pool *xsk_pool, struct xdp_desc *desc, unsigned int *total_bytes) { struct ice_tx_desc *tx_desc; dma_addr_t dma; - dma = xsk_buff_raw_get_dma(xdp_ring->xsk_pool, desc->addr); - xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, desc->len); + dma = xsk_buff_raw_get_dma(xsk_pool, desc->addr); + xsk_buff_raw_dma_sync_for_device(xsk_pool, dma, desc->len); tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_to_use++); tx_desc->buf_addr = cpu_to_le64(dma); @@ -988,10 +998,13 @@ static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring, struct xdp_desc *desc, /** * ice_xmit_pkt_batch - produce a batch of HW Tx descriptors out of AF_XDP descriptors * @xdp_ring: XDP ring to produce the HW Tx descriptors on + * @xsk_pool: XSK buffer pool to pick buffers to be consumed by HW * @descs: AF_XDP descriptors to pull the DMA addresses and lengths from * @total_bytes: bytes accumulator that will be used for stats update */ -static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs, +static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, + struct xsk_buff_pool *xsk_pool, + struct xdp_desc *descs, unsigned int *total_bytes) { u16 ntu = xdp_ring->next_to_use; @@ -1001,8 +1014,8 @@ static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *de loop_unrolled_for(i = 0; i < PKTS_PER_BATCH; i++) { dma_addr_t dma; - dma = xsk_buff_raw_get_dma(xdp_ring->xsk_pool, descs[i].addr); - xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, descs[i].len); + dma = xsk_buff_raw_get_dma(xsk_pool, descs[i].addr); + xsk_buff_raw_dma_sync_for_device(xsk_pool, dma, descs[i].len); tx_desc = ICE_TX_DESC(xdp_ring, ntu++); tx_desc->buf_addr = cpu_to_le64(dma); @@ -1018,21 +1031,24 @@ static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *de /** * ice_fill_tx_hw_ring - produce the number of Tx descriptors onto ring * @xdp_ring: XDP ring to produce the HW Tx descriptors on + * @xsk_pool: XSK buffer pool to pick buffers to be consumed by HW * @descs: AF_XDP descriptors to pull the DMA addresses and lengths from * @nb_pkts: count of packets to be send * @total_bytes: bytes accumulator that will be used for stats update */ -static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring, struct xdp_desc 
*descs, - u32 nb_pkts, unsigned int *total_bytes) +static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring, + struct xsk_buff_pool *xsk_pool, + struct xdp_desc *descs, u32 nb_pkts, + unsigned int *total_bytes) { u32 batched, leftover, i; batched = ALIGN_DOWN(nb_pkts, PKTS_PER_BATCH); leftover = nb_pkts & (PKTS_PER_BATCH - 1); for (i = 0; i < batched; i += PKTS_PER_BATCH) - ice_xmit_pkt_batch(xdp_ring, &descs[i], total_bytes); + ice_xmit_pkt_batch(xdp_ring, xsk_pool, &descs[i], total_bytes); for (; i < batched + leftover; i++) - ice_xmit_pkt(xdp_ring, &descs[i], total_bytes); + ice_xmit_pkt(xdp_ring, xsk_pool, &descs[i], total_bytes); } /** @@ -1043,7 +1059,8 @@ static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring, struct xdp_desc *d */ bool ice_xmit_zc(struct ice_tx_ring *xdp_ring) { - struct xdp_desc *descs = xdp_ring->xsk_pool->tx_descs; + struct xsk_buff_pool *xsk_pool = READ_ONCE(xdp_ring->xsk_pool); + struct xdp_desc *descs = xsk_pool->tx_descs; u32 nb_pkts, nb_processed = 0; unsigned int total_bytes = 0; int budget; @@ -1057,25 +1074,26 @@ bool ice_xmit_zc(struct ice_tx_ring *xdp_ring) budget = ICE_DESC_UNUSED(xdp_ring); budget = min_t(u16, budget, ICE_RING_QUARTER(xdp_ring)); - nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, budget); + nb_pkts = xsk_tx_peek_release_desc_batch(xsk_pool, budget); if (!nb_pkts) return true; if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) { nb_processed = xdp_ring->count - xdp_ring->next_to_use; - ice_fill_tx_hw_ring(xdp_ring, descs, nb_processed, &total_bytes); + ice_fill_tx_hw_ring(xdp_ring, xsk_pool, descs, nb_processed, + &total_bytes); xdp_ring->next_to_use = 0; } - ice_fill_tx_hw_ring(xdp_ring, &descs[nb_processed], nb_pkts - nb_processed, - &total_bytes); + ice_fill_tx_hw_ring(xdp_ring, xsk_pool, &descs[nb_processed], + nb_pkts - nb_processed, &total_bytes); ice_set_rs_bit(xdp_ring); ice_xdp_ring_update_tail(xdp_ring); ice_update_tx_ring_stats(xdp_ring, nb_pkts, total_bytes); - if (xsk_uses_need_wakeup(xdp_ring->xsk_pool)) - xsk_set_tx_need_wakeup(xdp_ring->xsk_pool); + if (xsk_uses_need_wakeup(xsk_pool)) + xsk_set_tx_need_wakeup(xsk_pool); return nb_pkts < budget; } @@ -1108,7 +1126,7 @@ ice_xsk_wakeup(struct net_device *netdev, u32 queue_id, ring = vsi->rx_rings[queue_id]->xdp_ring; - if (!ring->xsk_pool) + if (!READ_ONCE(ring->xsk_pool)) return -EINVAL; /* The idea here is that if NAPI is running, mark a miss, so diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.h b/drivers/net/ethernet/intel/ice/ice_xsk.h index 6fa181f080ef..4cd2d62a0836 100644 --- a/drivers/net/ethernet/intel/ice/ice_xsk.h +++ b/drivers/net/ethernet/intel/ice/ice_xsk.h @@ -22,7 +22,8 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid); int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget); int ice_xsk_wakeup(struct net_device *netdev, u32 queue_id, u32 flags); -bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count); +bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, + struct xsk_buff_pool *xsk_pool, u16 count); bool ice_xsk_any_rx_ring_ena(struct ice_vsi *vsi); void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring); void ice_xsk_clean_xdp_ring(struct ice_tx_ring *xdp_ring); @@ -51,6 +52,7 @@ ice_clean_rx_irq_zc(struct ice_rx_ring __always_unused *rx_ring, static inline bool ice_alloc_rx_bufs_zc(struct ice_rx_ring __always_unused *rx_ring, + struct xsk_buff_pool __always_unused *xsk_pool, u16 __always_unused count) { return false; From patchwork Wed May 29 11:23:36 2024 
X-Patchwork-Submitter: Maciej Fijalkowski
X-Patchwork-Id: 13678731
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com, magnus.karlsson@intel.com, michal.kubiak@intel.com, larysa.zaremba@intel.com, Maciej Fijalkowski
Subject: [PATCH v2 iwl-net 7/8] ice: add missing WRITE_ONCE when clearing ice_rx_ring::xdp_prog
Date: Wed, 29 May 2024 13:23:36 +0200
Message-Id: <20240529112337.3639084-8-maciej.fijalkowski@intel.com>
ice_rx_ring::xdp_prog is read by the data path and modified from process context on a remote CPU, so WRITE_ONCE needs to be used when clearing the pointer.

Fixes: efc2214b6047 ("ice: Add support for XDP")
Signed-off-by: Maciej Fijalkowski
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index f4b2b1bca234..4c115531beba 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -456,7 +456,7 @@ void ice_free_rx_ring(struct ice_rx_ring *rx_ring)
 	if (rx_ring->vsi->type == ICE_VSI_PF)
 		if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
 			xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
-	rx_ring->xdp_prog = NULL;
+	WRITE_ONCE(rx_ring->xdp_prog, NULL);
 	if (rx_ring->xsk_pool) {
 		kfree(rx_ring->xdp_buf);
 		rx_ring->xdp_buf = NULL;

From patchwork Wed May 29 11:23:37 2024
X-Patchwork-Submitter: Maciej Fijalkowski
X-Patchwork-Id: 13678732
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com, magnus.karlsson@intel.com, michal.kubiak@intel.com, larysa.zaremba@intel.com, Maciej Fijalkowski
Subject: [PATCH v2 iwl-net 8/8] ice: xsk: fix txq interrupt mapping
Date: Wed, 29 May 2024 13:23:37 +0200
Message-Id: <20240529112337.3639084-9-maciej.fijalkowski@intel.com>

ice_cfg_txq_interrupt() internally handles the XDP Tx ring. Do not use ice_for_each_tx_ring() in ice_qvec_cfg_msix(), as this causes the XDP ring that belongs to the queue vector to be treated as a regular Tx ring and therefore misconfigures the interrupts.
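Conceptually, the change below replaces the per-ring walk with a per-queue-id walk (a simplified contrast; names follow the diff):

        /* before: q_vector->tx also lists the XDP ring, which then gets
         * programmed as if it were a regular Tx queue
         */
        ice_for_each_tx_ring(tx_ring, q_vector->tx)
                ice_cfg_txq_interrupt(vsi, tx_ring->reg_idx, reg_idx,
                                      q_vector->tx.itr_idx);

        /* after: walk queue ids only and let ice_cfg_txq_interrupt() take
         * care of the paired XDP Tx queue internally
         */
        for (q = 0; q < q_vector->num_ring_tx; q++)
                ice_cfg_txq_interrupt(vsi, qid + q, reg_idx,
                                      q_vector->tx.itr_idx);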
Fixes: 2d4238f55697 ("ice: Add support for AF_XDP")
Signed-off-by: Maciej Fijalkowski
Tested-by: Chandan Kumar Rout (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 693f0e3a37ce..85aa841a16bb 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -111,25 +111,29 @@ ice_qvec_dis_irq(struct ice_vsi *vsi, struct ice_rx_ring *rx_ring,
  * ice_qvec_cfg_msix - Enable IRQ for given queue vector
  * @vsi: the VSI that contains queue vector
  * @q_vector: queue vector
+ * @qid: queue index
  */
 static void
-ice_qvec_cfg_msix(struct ice_vsi *vsi, struct ice_q_vector *q_vector)
+ice_qvec_cfg_msix(struct ice_vsi *vsi, struct ice_q_vector *q_vector, u16 qid)
 {
 	u16 reg_idx = q_vector->reg_idx;
 	struct ice_pf *pf = vsi->back;
 	struct ice_hw *hw = &pf->hw;
-	struct ice_tx_ring *tx_ring;
-	struct ice_rx_ring *rx_ring;
+	int q, _qid = qid;
 
 	ice_cfg_itr(hw, q_vector);
 
-	ice_for_each_tx_ring(tx_ring, q_vector->tx)
-		ice_cfg_txq_interrupt(vsi, tx_ring->reg_idx, reg_idx,
-				      q_vector->tx.itr_idx);
+	for (q = 0; q < q_vector->num_ring_tx; q++) {
+		ice_cfg_txq_interrupt(vsi, _qid, reg_idx, q_vector->tx.itr_idx);
+		_qid++;
+	}
 
-	ice_for_each_rx_ring(rx_ring, q_vector->rx)
-		ice_cfg_rxq_interrupt(vsi, rx_ring->reg_idx, reg_idx,
-				      q_vector->rx.itr_idx);
+	_qid = qid;
+
+	for (q = 0; q < q_vector->num_ring_rx; q++) {
+		ice_cfg_rxq_interrupt(vsi, _qid, reg_idx, q_vector->rx.itr_idx);
+		_qid++;
+	}
 
 	ice_flush(hw);
 }
@@ -241,7 +245,7 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
 		fail = err;
 
 	q_vector = vsi->rx_rings[q_idx]->q_vector;
-	ice_qvec_cfg_msix(vsi, q_vector);
+	ice_qvec_cfg_msix(vsi, q_vector, q_idx);
 
 	err = ice_vsi_ctrl_one_rx_ring(vsi, true, q_idx, true);
 	if (!fail)