From patchwork Tue Nov 19 00:23:45 2024

From: Jacob Keller
Date: Mon, 18 Nov 2024 16:23:45 -0800
Subject: [PATCH net-next RFC v6 8/9] ice: move prefetch enable to ice_setup_rx_ctx
Message-Id: <20241118-packing-pack-fields-and-ice-implementation-v6-8-6af8b658a6c3@intel.com>
References: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com>
In-Reply-To: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com>
To: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada, netdev
Cc: linux-kbuild@vger.kernel.org, Jacob Keller

The ice_write_rxq_ctx() function is responsible for programming the Rx
Queue context into hardware. It receives the configuration in unpacked
form via the ice_rlan_ctx structure.

This function unconditionally modifies the context to set the prefetch
enable bit. This was done by commit c31a5c25bb19 ("ice: Always set
prefena when configuring an Rx queue"). Setting this bit makes sense,
since prefetching descriptors is almost always the preferred behavior.

However, the ice_write_rxq_ctx() function is not the place that actually
defines the queue context. We initialize the Rx queue context in
ice_setup_rx_ctx(). It is surprising to have the Rx queue context changed
by a function whose responsibility is to program the given context to
hardware.

Following the principle of least surprise, move the setting of the
prefetch enable bit out of ice_write_rxq_ctx() and into
ice_setup_rx_ctx().

Signed-off-by: Jacob Keller
Reviewed-by: Przemek Kitszel
---
 drivers/net/ethernet/intel/ice/ice_base.c   | 3 +++
 drivers/net/ethernet/intel/ice/ice_common.c | 9 +++------
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 5fe7b5a10020..b2af8e3586f7 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -454,6 +454,9 @@ static int ice_setup_rx_ctx(struct ice_rx_ring *ring)
 	/* Rx queue threshold in units of 64 */
 	rlan_ctx.lrxqthresh = 1;
 
+	/* Enable descriptor prefetch */
+	rlan_ctx.prefena = 1;
+
 	/* PF acts as uplink for switchdev; set flex descriptor with src_vsi
 	 * metadata and flags to allow redirecting to PR netdev
 	 */
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 1b013c9c9378..379040593d97 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -1430,14 +1430,13 @@ static void ice_pack_rxq_ctx(const struct ice_rlan_ctx *ctx,
 }
 
 /**
- * ice_write_rxq_ctx
+ * ice_write_rxq_ctx - Write Rx Queue context to hardware
  * @hw: pointer to the hardware structure
  * @rlan_ctx: pointer to the rxq context
  * @rxq_index: the index of the Rx queue
  *
- * Converts rxq context from sparse to dense structure and then writes
- * it to HW register space and enables the hardware to prefetch descriptors
- * instead of only fetching them on demand
+ * Pack the sparse Rx Queue context into dense hardware format and write it
+ * into the HW register space.
  */
 int ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
 		      u32 rxq_index)
@@ -1447,8 +1446,6 @@ int ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
 	if (!rlan_ctx)
 		return -EINVAL;
 
-	rlan_ctx->prefena = 1;
-
 	ice_pack_rxq_ctx(rlan_ctx, &buf);
 
 	return ice_copy_rxq_ctx_to_hw(hw, &buf, rxq_index);
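
For readers not familiar with the driver, the responsibility split this
patch aims for can be summarized with the small standalone sketch below.
It is illustrative only: the types and helpers are simplified stand-ins,
and only the prefena and lrxqthresh fields mirror the real ice_rlan_ctx;
nothing here is the actual driver code.

/*
 * Standalone sketch (not part of the patch) of the resulting split of
 * responsibilities: the setup path decides the contents of the queue
 * context, the write path only packs and copies it to hardware.
 */
#include <stdio.h>

struct rlan_ctx_sketch {
	unsigned int lrxqthresh;
	unsigned int prefena;
	/* the real ice_rlan_ctx carries many more fields */
};

/* Stand-in for ice_write_rxq_ctx(): packs and writes, no policy changes */
static int write_rxq_ctx_sketch(const struct rlan_ctx_sketch *ctx,
				unsigned int rxq_index)
{
	if (!ctx)
		return -1;

	/* the real function calls ice_pack_rxq_ctx() and then
	 * ice_copy_rxq_ctx_to_hw(); after this patch it no longer
	 * touches ctx->prefena
	 */
	printf("queue %u: prefena=%u lrxqthresh=%u\n",
	       rxq_index, ctx->prefena, ctx->lrxqthresh);
	return 0;
}

/* Stand-in for ice_setup_rx_ctx(): owns the contents of the context */
static int setup_rx_ctx_sketch(unsigned int rxq_index)
{
	struct rlan_ctx_sketch ctx = { 0 };

	ctx.lrxqthresh = 1;	/* Rx queue threshold in units of 64 */
	ctx.prefena = 1;	/* descriptor prefetch enabled here now */

	return write_rxq_ctx_sketch(&ctx, rxq_index);
}

int main(void)
{
	return setup_rx_ctx_sketch(0);
}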