From patchwork Fri Oct 18 14:17:36 2024
X-Patchwork-Submitter: Przemek Kitszel
X-Patchwork-Id: 13841873
X-Patchwork-Delegate: kuba@kernel.org
From: Przemek Kitszel
To: intel-wired-lan@lists.osuosl.org, Tony Nguyen
Cc: netdev@vger.kernel.org, Jacob Keller, Przemek Kitszel, Paul Greenwalt,
 Dan Nowlin, Ahmed Zaki, Simon Horman, Michal Swiatkowski,
 Pucha Himasekhar Reddy
Subject: [PATCH iwl-next v3 1/2] ice: refactor "last" segment of DDP pkg
Date: Fri, 18 Oct 2024 16:17:36 +0200
Message-ID: <20241018141823.178918-5-przemyslaw.kitszel@intel.com>
X-Mailer: git-send-email 2.46.0
In-Reply-To: <20241018141823.178918-4-przemyslaw.kitszel@intel.com>
References: <20241018141823.178918-4-przemyslaw.kitszel@intel.com>
X-Mailing-List: netdev@vger.kernel.org

Add ice_ddp_send_hunk(), which buffers "send FW hunk" calls to the AQ
in order to mark the "last" one in a more elegant way.

The next commit will add an even more complicated "send to FW" flow, so
it is better to untangle things a bit beforehand.

Note that metadata buffers were not skipped for NOT-@indicate_last
segments; this is fixed now.

Minor:
 + use ice_is_buffer_metadata() instead of open coding it in
   ice_dwnld_cfg_bufs();
 + ice_dwnld_cfg_bufs_no_lock() and its dependencies were moved up a bit
   to keep the git diff readable, as this function was rewritten (in
   terms of git blame)

CC: Paul Greenwalt
CC: Dan Nowlin
CC: Ahmed Zaki
CC: Simon Horman
Reviewed-by: Michal Swiatkowski
Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel)
Signed-off-by: Przemek Kitszel
---
git: --inter-hunk-context=6

v3: added ice_ddp_send_ctx_set_err() to avoid "user" code setting
    ctx->err directly, fixed kdoc warnings, removed a redundant
    assignment, typo fix; all thanks to Simon
v2: fixed one kdoc warning
---
 drivers/net/ethernet/intel/ice/ice_ddp.c | 288 ++++++++++++-----------
 1 file changed, 151 insertions(+), 137 deletions(-)
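
To illustrate the one-hunk-behind idea outside the driver: each call
queues a hunk and only transmits the hunk queued by the previous call,
because only then do we know the previous one was not the last. Below is
a minimal, self-contained sketch in plain C; send_ctx, send_hunk() and
fake_fw_download() are invented stand-ins for the driver's
ice_ddp_send_ctx, ice_ddp_send_hunk() and ice_aq_download_pkg(), not the
real API:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct send_ctx {
	const char *pending;	/* hunk queued by the previous call */
	int err;		/* sticky error, checked on every call */
};

/* stand-in for the real AQ download call; just prints what would be sent */
static int fake_fw_download(const char *hunk, bool last)
{
	printf("download '%s'%s\n", hunk, last ? " (last)" : "");
	return 0;
}

/*
 * Queue @hunk and transmit the hunk queued by the previous call.  A final
 * call with hunk == NULL flushes the queue with the "last" flag set.
 */
static int send_hunk(struct send_ctx *ctx, const char *hunk)
{
	const char *prev = ctx->pending;
	bool prev_is_last = !hunk;

	if (ctx->err)
		return ctx->err;	/* errors are sticky */

	ctx->pending = hunk;
	if (!prev)
		return 0;		/* nothing buffered yet */

	ctx->err = fake_fw_download(prev, prev_is_last);
	return ctx->err;
}

int main(void)
{
	struct send_ctx ctx = { 0 };

	send_hunk(&ctx, "hunk A");
	send_hunk(&ctx, "hunk B");
	send_hunk(&ctx, "hunk C");
	return send_hunk(&ctx, NULL);	/* "hunk C" goes out flagged as last */
}
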
diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.c b/drivers/net/ethernet/intel/ice/ice_ddp.c
index 272fd823a825..3e1173ef4b5c 100644
--- a/drivers/net/ethernet/intel/ice/ice_ddp.c
+++ b/drivers/net/ethernet/intel/ice/ice_ddp.c
@@ -1210,6 +1210,131 @@ ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
 	return status;
 }
 
+/**
+ * ice_is_buffer_metadata - determine if package buffer is a metadata buffer
+ * @buf: pointer to buffer header
+ * Return: whether given @buf is a metadata one.
+ */
+static bool ice_is_buffer_metadata(struct ice_buf_hdr *buf)
+{
+	return le32_to_cpu(buf->section_entry[0].type) & ICE_METADATA_BUF;
+}
+
+/**
+ * struct ice_ddp_send_ctx - sending context of current DDP segment
+ * @hw: pointer to the hardware struct
+ *
+ * Keeps current sending state (header, error) for the purpose of proper "last"
+ * bit setting in ice_aq_download_pkg(). Use via calls to ice_ddp_send_hunk().
+ */
+struct ice_ddp_send_ctx {
+	struct ice_hw *hw;
+/* private: only for ice_ddp_send_hunk() */
+	struct ice_buf_hdr *hdr;
+	int err;
+};
+
+static void ice_ddp_send_ctx_set_err(struct ice_ddp_send_ctx *ctx, int err)
+{
+	ctx->err = err;
+}
+
+/**
+ * ice_ddp_send_hunk - send one hunk of data to FW
+ * @ctx: current segment sending context
+ * @hunk: next hunk to send, size is always ICE_PKG_BUF_SIZE
+ *
+ * Send the next hunk of data to FW, retrying if needed.
+ *
+ * Notice: must be called once more with a NULL @hunk to finish up; such call
+ * will set up the "last" bit of an AQ request. After such call @ctx.hdr is
+ * cleared, @hw is still valid.
+ *
+ * Return: %ICE_DDP_PKG_SUCCESS if there were no problems; a sticky @err
+ * otherwise.
+ */
+static enum ice_ddp_state ice_ddp_send_hunk(struct ice_ddp_send_ctx *ctx,
+					    struct ice_buf_hdr *hunk)
+{
+	struct ice_buf_hdr *prev_hunk = ctx->hdr;
+	struct ice_hw *hw = ctx->hw;
+	bool prev_was_last = !hunk;
+	enum ice_aq_err aq_err;
+	u32 offset, info;
+	int attempt, err;
+
+	if (ctx->err)
+		return ctx->err;
+
+	ctx->hdr = hunk;
+	if (!prev_hunk)
+		return ICE_DDP_PKG_SUCCESS; /* no problem so far */
+
+	for (attempt = 0; attempt < 5; attempt++) {
+		if (attempt)
+			msleep(20);
+
+		err = ice_aq_download_pkg(hw, prev_hunk, ICE_PKG_BUF_SIZE,
+					  prev_was_last, &offset, &info, NULL);
+
+		aq_err = hw->adminq.sq_last_status;
+		if (aq_err != ICE_AQ_RC_ENOSEC && aq_err != ICE_AQ_RC_EBADSIG)
+			break;
+	}
+
+	if (err) {
+		ice_debug(hw, ICE_DBG_PKG, "Pkg download failed: err %d off %d inf %d\n",
+			  err, offset, info);
+		ctx->err = ice_map_aq_err_to_ddp_state(aq_err);
+	} else if (attempt) {
+		dev_dbg(ice_hw_to_dev(hw),
+			"ice_aq_download_pkg number of retries: %d\n", attempt);
+	}
+
+	return ctx->err;
+}
+
+/**
+ * ice_dwnld_cfg_bufs_no_lock
+ * @ctx: context of the current buffers section to send
+ * @bufs: pointer to an array of buffers
+ * @start: buffer index of first buffer to download
+ * @count: the number of buffers to download
+ *
+ * Downloads package configuration buffers to the firmware. Metadata buffers
+ * are skipped, and the first metadata buffer found indicates that the rest
+ * of the buffers are all metadata buffers.
+ */
+static enum ice_ddp_state
+ice_dwnld_cfg_bufs_no_lock(struct ice_ddp_send_ctx *ctx, struct ice_buf *bufs,
+			   u32 start, u32 count)
+{
+	struct ice_buf_hdr *bh;
+	enum ice_ddp_state err;
+
+	if (!bufs || !count) {
+		ice_ddp_send_ctx_set_err(ctx, ICE_DDP_PKG_ERR);
+		return ICE_DDP_PKG_ERR;
+	}
+
+	bufs += start;
+
+	for (int i = 0; i < count; i++, bufs++) {
+		bh = (struct ice_buf_hdr *)bufs;
+		/* Metadata buffers should not be sent to FW,
+		 * their presence means "we are done here".
+		 */
+		if (ice_is_buffer_metadata(bh))
+			break;
+
+		err = ice_ddp_send_hunk(ctx, bh);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
 /**
  * ice_get_pkg_seg_by_idx
  * @pkg_hdr: pointer to the package header to be searched
@@ -1269,147 +1394,32 @@ ice_is_signing_seg_type_at_idx(struct ice_pkg_hdr *pkg_hdr, u32 idx,
 	return false;
 }
 
-/**
- * ice_is_buffer_metadata - determine if package buffer is a metadata buffer
- * @buf: pointer to buffer header
- */
-static bool ice_is_buffer_metadata(struct ice_buf_hdr *buf)
-{
-	if (le32_to_cpu(buf->section_entry[0].type) & ICE_METADATA_BUF)
-		return true;
-
-	return false;
-}
-
-/**
- * ice_is_last_download_buffer
- * @buf: pointer to current buffer header
- * @idx: index of the buffer in the current sequence
- * @count: the buffer count in the current sequence
- *
- * Note: this routine should only be called if the buffer is not the last buffer
- */
-static bool
-ice_is_last_download_buffer(struct ice_buf_hdr *buf, u32 idx, u32 count)
-{
-	struct ice_buf *next_buf;
-
-	if ((idx + 1) == count)
-		return true;
-
-	/* A set metadata flag in the next buffer will signal that the current
-	 * buffer will be the last buffer downloaded
-	 */
-	next_buf = ((struct ice_buf *)buf) + 1;
-
-	return ice_is_buffer_metadata((struct ice_buf_hdr *)next_buf);
-}
-
-/**
- * ice_dwnld_cfg_bufs_no_lock
- * @hw: pointer to the hardware structure
- * @bufs: pointer to an array of buffers
- * @start: buffer index of first buffer to download
- * @count: the number of buffers to download
- * @indicate_last: if true, then set last buffer flag on last buffer download
- *
- * Downloads package configuration buffers to the firmware. Metadata buffers
- * are skipped, and the first metadata buffer found indicates that the rest
- * of the buffers are all metadata buffers.
- */
-static enum ice_ddp_state
-ice_dwnld_cfg_bufs_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 start,
-			   u32 count, bool indicate_last)
-{
-	enum ice_ddp_state state = ICE_DDP_PKG_SUCCESS;
-	struct ice_buf_hdr *bh;
-	enum ice_aq_err err;
-	u32 offset, info, i;
-
-	if (!bufs || !count)
-		return ICE_DDP_PKG_ERR;
-
-	/* If the first buffer's first section has its metadata bit set
-	 * then there are no buffers to be downloaded, and the operation is
-	 * considered a success.
-	 */
-	bh = (struct ice_buf_hdr *)(bufs + start);
-	if (le32_to_cpu(bh->section_entry[0].type) & ICE_METADATA_BUF)
-		return ICE_DDP_PKG_SUCCESS;
-
-	for (i = 0; i < count; i++) {
-		bool last = false;
-		int try_cnt = 0;
-		int status;
-
-		bh = (struct ice_buf_hdr *)(bufs + start + i);
-
-		if (indicate_last)
-			last = ice_is_last_download_buffer(bh, i, count);
-
-		while (1) {
-			status = ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE,
-						     last, &offset, &info,
-						     NULL);
-			if (hw->adminq.sq_last_status != ICE_AQ_RC_ENOSEC &&
-			    hw->adminq.sq_last_status != ICE_AQ_RC_EBADSIG)
-				break;
-
-			try_cnt++;
-
-			if (try_cnt == 5)
-				break;
-
-			msleep(20);
-		}
-
-		if (try_cnt)
-			dev_dbg(ice_hw_to_dev(hw),
-				"ice_aq_download_pkg number of retries: %d\n",
-				try_cnt);
-
-		/* Save AQ status from download package */
-		if (status) {
-			ice_debug(hw, ICE_DBG_PKG, "Pkg download failed: err %d off %d inf %d\n",
-				  status, offset, info);
-			err = hw->adminq.sq_last_status;
-			state = ice_map_aq_err_to_ddp_state(err);
-			break;
-		}
-
-		if (last)
-			break;
-	}
-
-	return state;
-}
-
 /**
  * ice_download_pkg_sig_seg - download a signature segment
- * @hw: pointer to the hardware structure
+ * @ctx: context of the current buffers section to send
  * @seg: pointer to signature segment
  */
 static enum ice_ddp_state
-ice_download_pkg_sig_seg(struct ice_hw *hw, struct ice_sign_seg *seg)
+ice_download_pkg_sig_seg(struct ice_ddp_send_ctx *ctx, struct ice_sign_seg *seg)
 {
-	return ice_dwnld_cfg_bufs_no_lock(hw, seg->buf_tbl.buf_array, 0,
-					  le32_to_cpu(seg->buf_tbl.buf_count),
-					  false);
+	return ice_dwnld_cfg_bufs_no_lock(ctx, seg->buf_tbl.buf_array, 0,
+					  le32_to_cpu(seg->buf_tbl.buf_count));
 }
 
 /**
  * ice_download_pkg_config_seg - download a config segment
- * @hw: pointer to the hardware structure
+ * @ctx: context of the current buffers section to send
  * @pkg_hdr: pointer to package header
  * @idx: segment index
  * @start: starting buffer
 * @count: buffer count
 *
 * Note: idx must reference a ICE segment
 */
 static enum ice_ddp_state
-ice_download_pkg_config_seg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr,
-			    u32 idx, u32 start, u32 count)
+ice_download_pkg_config_seg(struct ice_ddp_send_ctx *ctx,
+			    struct ice_pkg_hdr *pkg_hdr, u32 idx, u32 start,
+			    u32 count)
 {
 	struct ice_buf_table *bufs;
 	struct ice_seg *seg;
@@ -1425,46 +1435,44 @@ ice_download_pkg_config_seg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr,
 	if (start >= buf_count || start + count > buf_count)
 		return ICE_DDP_PKG_ERR;
 
-	return ice_dwnld_cfg_bufs_no_lock(hw, bufs->buf_array, start, count,
-					  true);
+	return ice_dwnld_cfg_bufs_no_lock(ctx, bufs->buf_array, start, count);
 }
 
 /**
  * ice_dwnld_sign_and_cfg_segs - download a signing segment and config segment
- * @hw: pointer to the hardware structure
+ * @ctx: context of the current buffers section to send
  * @pkg_hdr: pointer to package header
 * @idx: segment index (must be a signature segment)
 *
 * Note: idx must reference a signature segment
 */
 static enum ice_ddp_state
-ice_dwnld_sign_and_cfg_segs(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr,
-			    u32 idx)
+ice_dwnld_sign_and_cfg_segs(struct ice_ddp_send_ctx *ctx,
+			    struct ice_pkg_hdr *pkg_hdr, u32 idx)
 {
 	enum ice_ddp_state state;
 	struct ice_sign_seg *seg;
 	u32 conf_idx;
 	u32 start;
 	u32 count;
 
 	seg = (struct ice_sign_seg *)ice_get_pkg_seg_by_idx(pkg_hdr, idx);
 	if (!seg) {
 		state = ICE_DDP_PKG_ERR;
-		goto exit;
+		ice_ddp_send_ctx_set_err(ctx, state);
+		return state;
 	}
 
 	count = le32_to_cpu(seg->signed_buf_count);
-	state = ice_download_pkg_sig_seg(hw, seg);
+	state = ice_download_pkg_sig_seg(ctx, seg);
 	if (state || !count)
-		goto exit;
+		return state;
 
 	conf_idx = le32_to_cpu(seg->signed_seg_idx);
 	start = le32_to_cpu(seg->signed_buf_start);
 
-	state = ice_download_pkg_config_seg(hw, pkg_hdr, conf_idx, start,
+	state = ice_download_pkg_config_seg(ctx, pkg_hdr, conf_idx, start,
 					    count);
-
-exit:
 	return state;
 }
 
@@ -1519,6 +1527,7 @@ ice_download_pkg_with_sig_seg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 {
 	enum ice_aq_err aq_err = hw->adminq.sq_last_status;
 	enum ice_ddp_state state = ICE_DDP_PKG_ERR;
+	struct ice_ddp_send_ctx ctx = { .hw = hw };
 	int status;
 	u32 i;
 
@@ -1539,7 +1548,9 @@ ice_download_pkg_with_sig_seg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 						    hw->pkg_sign_type))
 			continue;
 
-		state = ice_dwnld_sign_and_cfg_segs(hw, pkg_hdr, i);
+		ice_dwnld_sign_and_cfg_segs(&ctx, pkg_hdr, i);
+		/* finish up by sending last hunk with "last" flag set */
+		state = ice_ddp_send_hunk(&ctx, NULL);
 		if (state)
 			break;
 	}
 
@@ -1564,29 +1575,32 @@ ice_download_pkg_with_sig_seg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 static enum ice_ddp_state
 ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
 {
+	struct ice_ddp_send_ctx ctx = { .hw = hw };
 	enum ice_ddp_state state;
 	struct ice_buf_hdr *bh;
 	int status;
 
 	if (!bufs || !count)
 		return ICE_DDP_PKG_ERR;
 
 	/* If the first buffer's first section has its metadata bit set
 	 * then there are no buffers to be downloaded, and the operation is
 	 * considered a success.
 	 */
 	bh = (struct ice_buf_hdr *)bufs;
-	if (le32_to_cpu(bh->section_entry[0].type) & ICE_METADATA_BUF)
+	if (ice_is_buffer_metadata(bh))
 		return ICE_DDP_PKG_SUCCESS;
 
 	status = ice_acquire_global_cfg_lock(hw, ICE_RES_WRITE);
 	if (status) {
 		if (status == -EALREADY)
 			return ICE_DDP_PKG_ALREADY_LOADED;
 		return ice_map_aq_err_to_ddp_state(hw->adminq.sq_last_status);
 	}
 
-	state = ice_dwnld_cfg_bufs_no_lock(hw, bufs, 0, count, true);
+	ice_dwnld_cfg_bufs_no_lock(&ctx, bufs, 0, count);
+	/* finish up by sending last hunk with "last" flag set */
+	state = ice_ddp_send_hunk(&ctx, NULL);
 	if (!state)
 		state = ice_post_dwnld_pkg_actions(hw);
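
A note on the retry policy that ice_ddp_send_hunk() carries over from the
old loop: up to five attempts, 20 ms apart, retried only while the admin
queue reports ICE_AQ_RC_ENOSEC or ICE_AQ_RC_EBADSIG. The same bounded-retry
shape can be sketched in plain userspace C; fw_send() and the fake_aq_err
values below are invented stand-ins for the AQ call and its status codes,
not driver APIs:

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

enum fake_aq_err { AQ_OK, AQ_ENOSEC, AQ_EBADSIG, AQ_OTHER };

/* invented stand-in for the AQ download call; always succeeds here */
static int fw_send(const void *buf, bool last, enum fake_aq_err *aq_err)
{
	(void)buf;
	(void)last;
	*aq_err = AQ_OK;
	return 0;
}

/* retry only while FW reports the "not yet authenticated" style errors */
static int send_with_retry(const void *buf, bool last)
{
	enum fake_aq_err aq_err = AQ_OK;
	int attempt, err = 0;

	for (attempt = 0; attempt < 5; attempt++) {
		if (attempt)
			usleep(20 * 1000);	/* stands in for msleep(20) */

		err = fw_send(buf, last, &aq_err);
		if (aq_err != AQ_ENOSEC && aq_err != AQ_EBADSIG)
			break;
	}

	if (attempt)
		printf("needed %d retries\n", attempt);
	return err;
}

int main(void)
{
	char buf[16] = { 0 };

	return send_with_retry(buf, true);
}
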
From patchwork Fri Oct 18 14:17:37 2024
X-Patchwork-Submitter: Przemek Kitszel
X-Patchwork-Id: 13841874
X-Patchwork-Delegate: kuba@kernel.org
From: Przemek Kitszel
To: intel-wired-lan@lists.osuosl.org, Tony Nguyen
Cc: netdev@vger.kernel.org, Jacob Keller, Przemek Kitszel, Paul Greenwalt,
 Ahmed Zaki, Dan Nowlin, Michal Swiatkowski, Pucha Himasekhar Reddy
Subject: [PATCH iwl-next v3 2/2] ice: support optional flags in signature segment header
Date: Fri, 18 Oct 2024 16:17:37 +0200
Message-ID: <20241018141823.178918-6-przemyslaw.kitszel@intel.com>
X-Mailer: git-send-email 2.46.0
In-Reply-To: <20241018141823.178918-4-przemyslaw.kitszel@intel.com>
References: <20241018141823.178918-4-przemyslaw.kitszel@intel.com>
X-Mailing-List: netdev@vger.kernel.org

An optional flag field has been added to the signature segment header.
The field contains two flags: a "valid" bit, and a "last segment" bit
that indicates whether the segment is the last segment that will be
sent to firmware.

If the flag field's valid bit is NOT set, then, as was done before,
assume that this is the last segment being downloaded. However, if the
flag field's valid bit IS set, then use the last segment flag to
determine whether this segment is the last segment to download.

Signed-off-by: Paul Greenwalt
Signed-off-by: Ahmed Zaki
Co-developed-by: Dan Nowlin
Signed-off-by: Dan Nowlin
Reviewed-by: Michal Swiatkowski
Tested-by: Pucha Himasekhar Reddy (A Contingent worker at Intel)
Signed-off-by: Przemek Kitszel
---
v2: co/- authorship change
---
 drivers/net/ethernet/intel/ice/ice_ddp.h |  5 ++++-
 drivers/net/ethernet/intel/ice/ice_ddp.c | 21 +++++++++++++++------
 2 files changed, 19 insertions(+), 7 deletions(-)
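
To make the flag semantics concrete before the diff, here is a small
self-contained sketch in plain C; seg_is_last() and the SIGN_SEG_FLAGS_*
macros are invented names that mirror the ice_is_last_sign_seg() helper
and the bit values added below, not the driver code itself:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SIGN_SEG_FLAGS_VALID 0x80000000u	/* flags word is populated */
#define SIGN_SEG_FLAGS_LAST  0x00000001u	/* this signed segment is the last one */

/* no valid bit: legacy package, every downloaded segment is treated as last;
 * valid bit set: trust the LAST bit
 */
static bool seg_is_last(uint32_t flags)
{
	if (!(flags & SIGN_SEG_FLAGS_VALID))
		return true;
	return flags & SIGN_SEG_FLAGS_LAST;
}

int main(void)
{
	printf("%d\n", seg_is_last(0));				/* 1: legacy package */
	printf("%d\n", seg_is_last(SIGN_SEG_FLAGS_VALID));	/* 0: more segments follow */
	printf("%d\n", seg_is_last(SIGN_SEG_FLAGS_VALID |
				   SIGN_SEG_FLAGS_LAST));	/* 1: explicitly last */
	return 0;
}
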
diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.h b/drivers/net/ethernet/intel/ice/ice_ddp.h
index 79551da2a4b0..8a2d57fc5dae 100644
--- a/drivers/net/ethernet/intel/ice/ice_ddp.h
+++ b/drivers/net/ethernet/intel/ice/ice_ddp.h
@@ -181,7 +181,10 @@ struct ice_sign_seg {
 	__le32 signed_seg_idx;
 	__le32 signed_buf_start;
 	__le32 signed_buf_count;
-#define ICE_SIGN_SEG_RESERVED_COUNT 44
+#define ICE_SIGN_SEG_FLAGS_VALID 0x80000000
+#define ICE_SIGN_SEG_FLAGS_LAST 0x00000001
+	__le32 flags;
+#define ICE_SIGN_SEG_RESERVED_COUNT 40
 	u8 reserved[ICE_SIGN_SEG_RESERVED_COUNT];
 	struct ice_buf_table buf_tbl;
 };
diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.c b/drivers/net/ethernet/intel/ice/ice_ddp.c
index 3e1173ef4b5c..5a70854b0ca2 100644
--- a/drivers/net/ethernet/intel/ice/ice_ddp.c
+++ b/drivers/net/ethernet/intel/ice/ice_ddp.c
@@ -1438,23 +1438,27 @@ ice_download_pkg_config_seg(struct ice_ddp_send_ctx *ctx,
 	return ice_dwnld_cfg_bufs_no_lock(ctx, bufs->buf_array, start, count);
 }
 
+static bool ice_is_last_sign_seg(u32 flags)
+{
+	return !(flags & ICE_SIGN_SEG_FLAGS_VALID) /* behavior prior to valid */
+	       || (flags & ICE_SIGN_SEG_FLAGS_LAST);
+}
+
 /**
  * ice_dwnld_sign_and_cfg_segs - download a signing segment and config segment
  * @ctx: context of the current buffers section to send
  * @pkg_hdr: pointer to package header
 * @idx: segment index (must be a signature segment)
 *
 * Note: idx must reference a signature segment
 */
 static enum ice_ddp_state
 ice_dwnld_sign_and_cfg_segs(struct ice_ddp_send_ctx *ctx,
 			    struct ice_pkg_hdr *pkg_hdr, u32 idx)
 {
+	u32 conf_idx, start, count, flags;
 	enum ice_ddp_state state;
 	struct ice_sign_seg *seg;
-	u32 conf_idx;
-	u32 start;
-	u32 count;
 
 	seg = (struct ice_sign_seg *)ice_get_pkg_seg_by_idx(pkg_hdr, idx);
 	if (!seg) {
@@ -1473,6 +1477,13 @@ ice_dwnld_sign_and_cfg_segs(struct ice_ddp_send_ctx *ctx,
 
 	state = ice_download_pkg_config_seg(ctx, pkg_hdr, conf_idx, start,
 					    count);
+
+	/* finish up by sending last hunk with "last" flag set if requested by
+	 * DDP content */
+	flags = le32_to_cpu(seg->flags);
+	if (ice_is_last_sign_seg(flags))
+		state = ice_ddp_send_hunk(ctx, NULL);
+
 	return state;
 }
 
@@ -1548,9 +1559,7 @@ ice_download_pkg_with_sig_seg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 						    hw->pkg_sign_type))
 			continue;
 
-		ice_dwnld_sign_and_cfg_segs(&ctx, pkg_hdr, i);
-		/* finish up by sending last hunk with "last" flag set */
-		state = ice_ddp_send_hunk(&ctx, NULL);
+		state = ice_dwnld_sign_and_cfg_segs(&ctx, pkg_hdr, i);
 		if (state)
 			break;
 	}
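
Taking the two patches together, the per-segment flow ends up roughly as
sketched below in plain C. The names here (send_ctx, download_segment(),
segment_is_last()) are invented stand-ins, not the driver code: each
downloaded segment streams its buffers through the send context, and only
a segment that the DDP content marks as last (or any segment of a legacy
package without the valid bit) triggers the final NULL-hunk flush that
sets the AQ "last" flag.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct send_ctx {
	const void *pending;
	int err;
};

/* one-behind flush from patch 1: NULL means "flush with the last flag set" */
static int send_hunk(struct send_ctx *ctx, const void *hunk)
{
	const void *prev = ctx->pending;

	if (ctx->err)
		return ctx->err;
	ctx->pending = hunk;
	if (prev)
		printf("  sent hunk%s\n", hunk ? "" : " (last)");
	return 0;
}

/* pretend each signed segment contributes a single buffer */
static int download_segment(struct send_ctx *ctx, int idx)
{
	static const int dummy_buf;

	printf("segment %d\n", idx);
	return send_hunk(ctx, &dummy_buf);
}

/* patch 2 semantics: packages without the valid bit behave as "always last" */
static bool segment_is_last(int idx, int nsegs, bool flags_valid)
{
	return !flags_valid || idx == nsegs - 1;
}

int main(void)
{
	struct send_ctx ctx = { 0 };
	const int nsegs = 3;
	int state = 0;

	for (int i = 0; i < nsegs; i++) {
		state = download_segment(&ctx, i);
		if (state)
			break;
		/* the final flush is now gated by the DDP-provided flags */
		if (segment_is_last(i, nsegs, true))
			state = send_hunk(&ctx, NULL);
		if (state)
			break;
	}
	return state;
}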