From patchwork Tue Jun 6 07:12:15 2023
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 13268370
From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev@vger.kernel.org, Tariq Toukan, Leon Romanovsky,
 linux-rdma@vger.kernel.org, Dragos Tatulea
Subject: [net-next 11/15] net/mlx5e: Remove RX page cache leftovers
Date: Tue, 6 Jun 2023 00:12:15 -0700
Message-Id: <20230606071219.483255-12-saeed@kernel.org>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230606071219.483255-1-saeed@kernel.org>
References: <20230606071219.483255-1-saeed@kernel.org>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Tariq Toukan

Remove unused definitions left after the removal of the RX page cache feature.

Signed-off-by: Tariq Toukan
Reviewed-by: Dragos Tatulea
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 8e999f238194..ceabe57c511a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -594,13 +594,6 @@ struct mlx5e_mpw_info {
 
 #define MLX5E_MAX_RX_FRAGS 4
 
-/* a single cache unit is capable to serve one napi call (for non-striding rq)
- * or a MPWQE (for striding rq).
- */
-#define MLX5E_CACHE_UNIT	(MLX5_MPWRQ_MAX_PAGES_PER_WQE > NAPI_POLL_WEIGHT ? \
-				 MLX5_MPWRQ_MAX_PAGES_PER_WQE : NAPI_POLL_WEIGHT)
-#define MLX5E_CACHE_SIZE	(4 * roundup_pow_of_two(MLX5E_CACHE_UNIT))
-
 struct mlx5e_rq;
 typedef void (*mlx5e_fp_handle_rx_cqe)(struct mlx5e_rq*, struct mlx5_cqe64*);
 typedef struct sk_buff *
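
For reference, the removed macros sized the old RX page cache as four
power-of-two-rounded cache units, where one unit was the larger of
MLX5_MPWRQ_MAX_PAGES_PER_WQE and NAPI_POLL_WEIGHT. Below is a minimal
standalone sketch of that computation, not part of the patch: the
MLX5_MPWRQ_MAX_PAGES_PER_WQE value is an assumed placeholder (the real
constant depends on the driver's MPWQE geometry), and roundup_pow_of_two()
is a userspace stand-in for the kernel helper; NAPI_POLL_WEIGHT is 64 in
mainline.

	/* Sketch of the removed RX page cache sizing; assumed constants. */
	#include <stdio.h>

	#define NAPI_POLL_WEIGHT		64	/* kernel NAPI budget */
	#define MLX5_MPWRQ_MAX_PAGES_PER_WQE	64	/* assumed placeholder */

	/* userspace stand-in for the kernel's roundup_pow_of_two() */
	static unsigned long roundup_pow_of_two(unsigned long n)
	{
		unsigned long p = 1;

		while (p < n)
			p <<= 1;
		return p;
	}

	/* same expressions as the macros removed by the patch above */
	#define MLX5E_CACHE_UNIT	(MLX5_MPWRQ_MAX_PAGES_PER_WQE > NAPI_POLL_WEIGHT ? \
					 MLX5_MPWRQ_MAX_PAGES_PER_WQE : NAPI_POLL_WEIGHT)
	#define MLX5E_CACHE_SIZE	(4 * roundup_pow_of_two(MLX5E_CACHE_UNIT))

	int main(void)
	{
		/* One cache unit covered one NAPI poll (legacy rq) or one
		 * MPWQE (striding rq); the cache held four rounded units.
		 */
		printf("unit = %d pages, cache = %lu pages\n",
		       MLX5E_CACHE_UNIT, MLX5E_CACHE_SIZE);
		return 0;
	}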