From patchwork Tue Nov 10 09:48:46 2020
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com, Ido Schimmel
Subject: [PATCH net-next 01/15] mlxsw: spectrum_router: Pass non-register proto enum to __mlxsw_sp_router_set_abort_trap()
Date: Tue, 10 Nov 2020 11:48:46 +0200
Message-Id: <20201110094900.1920158-2-idosch@idosch.org>
In-Reply-To: <20201110094900.1920158-1-idosch@idosch.org>
References: <20201110094900.1920158-1-idosch@idosch.org>
From: Jiri Pirko

Pass enum mlxsw_sp_l3proto to __mlxsw_sp_router_set_abort_trap() instead of
the RALXX register enum, and translate to the register enum inside the
function. This is in preparation for implementing FIB entry packing via the
XMDR register.

Signed-off-by: Jiri Pirko
Signed-off-by: Ido Schimmel
---
 .../net/ethernet/mellanox/mlxsw/spectrum_router.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
index 29fc47821ad7..a1424962472d 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
@@ -5685,15 +5685,17 @@ static void mlxsw_sp_router_fib6_del(struct mlxsw_sp *mlxsw_sp,
 }
 
 static int __mlxsw_sp_router_set_abort_trap(struct mlxsw_sp *mlxsw_sp,
-                                            enum mlxsw_reg_ralxx_protocol proto,
+                                            enum mlxsw_sp_l3proto proto,
                                             u8 tree_id)
 {
         const struct mlxsw_sp_router_ll_ops *ll_ops = mlxsw_sp->router->proto_ll_ops[proto];
+        enum mlxsw_reg_ralxx_protocol ralxx_proto =
+                (enum mlxsw_reg_ralxx_protocol) proto;
         char xralta_pl[MLXSW_REG_XRALTA_LEN];
         char xralst_pl[MLXSW_REG_XRALST_LEN];
         int i, err;
 
-        mlxsw_reg_xralta_pack(xralta_pl, true, proto, tree_id);
+        mlxsw_reg_xralta_pack(xralta_pl, true, ralxx_proto, tree_id);
         err = ll_ops->ralta_write(mlxsw_sp, xralta_pl);
         if (err)
                 return err;
@@ -5708,12 +5710,12 @@ static int __mlxsw_sp_router_set_abort_trap(struct mlxsw_sp *mlxsw_sp,
                 char xraltb_pl[MLXSW_REG_XRALTB_LEN];
                 char ralue_pl[MLXSW_REG_RALUE_LEN];
 
-                mlxsw_reg_xraltb_pack(xraltb_pl, vr->id, proto, tree_id);
+                mlxsw_reg_xraltb_pack(xraltb_pl, vr->id, ralxx_proto, tree_id);
                 err = ll_ops->raltb_write(mlxsw_sp, xraltb_pl);
                 if (err)
                         return err;
 
-                mlxsw_reg_ralue_pack(ralue_pl, proto,
+                mlxsw_reg_ralue_pack(ralue_pl, ralxx_proto,
                                      MLXSW_REG_RALUE_OP_WRITE_WRITE, vr->id, 0);
                 mlxsw_reg_ralue_act_ip2me_pack(ralue_pl);
                 err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue),
@@ -5813,7 +5815,7 @@ mlxsw_sp_router_fibmr_vif_del(struct mlxsw_sp *mlxsw_sp,
 
 static int mlxsw_sp_router_set_abort_trap(struct mlxsw_sp *mlxsw_sp)
 {
-        enum mlxsw_reg_ralxx_protocol proto = MLXSW_REG_RALXX_PROTOCOL_IPV4;
+        enum mlxsw_sp_l3proto proto = MLXSW_SP_L3_PROTO_IPV4;
         int err;
 
         err = __mlxsw_sp_router_set_abort_trap(mlxsw_sp, proto,
@@ -5825,7 +5827,7 @@ static int mlxsw_sp_router_set_abort_trap(struct mlxsw_sp *mlxsw_sp)
          * packets that don't match any routes are trapped to the CPU.
          */
-        proto = MLXSW_REG_RALXX_PROTOCOL_IPV6;
+        proto = MLXSW_SP_L3_PROTO_IPV6;
         return __mlxsw_sp_router_set_abort_trap(mlxsw_sp, proto,
                                                 MLXSW_SP_LPM_TREE_MIN + 1);
 }

From patchwork Tue Nov 10 09:48:47 2020
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com, Ido Schimmel
Subject: [PATCH net-next 02/15] mlxsw: spectrum_router: Use RALUE-independent op arg
Date: Tue, 10 Nov 2020 11:48:47 +0200
Message-Id: <20201110094900.1920158-3-idosch@idosch.org>
In-Reply-To:
<20201110094900.1920158-1-idosch@idosch.org> References: <20201110094900.1920158-1-idosch@idosch.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Jiri Pirko Since the write/delete of FIB entry is going to be implemented by XMDR register for XM implementation, introduce RALUE-independent enum for op so the enum could be used in both RALUE and XMDR. Signed-off-by: Jiri Pirko Signed-off-by: Ido Schimmel --- .../ethernet/mellanox/mlxsw/spectrum_ipip.c | 19 ++++++-- .../ethernet/mellanox/mlxsw/spectrum_ipip.h | 2 +- .../ethernet/mellanox/mlxsw/spectrum_router.c | 47 ++++++++++++------- .../ethernet/mellanox/mlxsw/spectrum_router.h | 5 ++ 4 files changed, 52 insertions(+), 21 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c index a8525992528f..8487de3e9787 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c @@ -183,12 +183,25 @@ mlxsw_sp_ipip_fib_entry_op_gre4_rtdp(struct mlxsw_sp *mlxsw_sp, static int mlxsw_sp_ipip_fib_entry_op_gre4_ralue(struct mlxsw_sp *mlxsw_sp, u32 dip, u8 prefix_len, u16 ul_vr_id, - enum mlxsw_reg_ralue_op op, + enum mlxsw_sp_fib_entry_op op, u32 tunnel_index) { char ralue_pl[MLXSW_REG_RALUE_LEN]; + enum mlxsw_reg_ralue_op ralue_op; + + switch (op) { + case MLXSW_SP_FIB_ENTRY_OP_WRITE: + ralue_op = MLXSW_REG_RALUE_OP_WRITE_WRITE; + break; + case MLXSW_SP_FIB_ENTRY_OP_DELETE: + ralue_op = MLXSW_REG_RALUE_OP_WRITE_DELETE; + break; + default: + WARN_ON_ONCE(1); + return -EINVAL; + } - mlxsw_reg_ralue_pack4(ralue_pl, MLXSW_REG_RALXX_PROTOCOL_IPV4, op, + mlxsw_reg_ralue_pack4(ralue_pl, MLXSW_REG_RALXX_PROTOCOL_IPV4, ralue_op, ul_vr_id, prefix_len, dip); mlxsw_reg_ralue_act_ip2me_tun_pack(ralue_pl, tunnel_index); return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), ralue_pl); @@ -196,7 +209,7 @@ mlxsw_sp_ipip_fib_entry_op_gre4_ralue(struct mlxsw_sp *mlxsw_sp, static int mlxsw_sp_ipip_fib_entry_op_gre4(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_ipip_entry *ipip_entry, - enum mlxsw_reg_ralue_op op, + enum mlxsw_sp_fib_entry_op op, u32 tunnel_index) { u16 ul_vr_id = mlxsw_sp_ipip_lb_ul_vr_id(ipip_entry->ol_lb); diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h index bb5c4d4a5872..f3ad1e149a45 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h @@ -53,7 +53,7 @@ struct mlxsw_sp_ipip_ops { int (*fib_entry_op)(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_ipip_entry *ipip_entry, - enum mlxsw_reg_ralue_op op, + enum mlxsw_sp_fib_entry_op op, u32 tunnel_index); int (*ol_netdev_change)(struct mlxsw_sp *mlxsw_sp, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index a1424962472d..d916f1045d97 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -4293,13 +4293,13 @@ mlxsw_sp_fib_entry_hw_flags_clear(struct mlxsw_sp *mlxsw_sp, static void mlxsw_sp_fib_entry_hw_flags_refresh(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, - enum mlxsw_reg_ralue_op op) + enum mlxsw_sp_fib_entry_op op) { switch (op) { - case MLXSW_REG_RALUE_OP_WRITE_WRITE: + case MLXSW_SP_FIB_ENTRY_OP_WRITE: mlxsw_sp_fib_entry_hw_flags_set(mlxsw_sp, fib_entry); break; - case 
MLXSW_REG_RALUE_OP_WRITE_DELETE: + case MLXSW_SP_FIB_ENTRY_OP_DELETE: mlxsw_sp_fib_entry_hw_flags_clear(mlxsw_sp, fib_entry); break; default: @@ -4310,23 +4310,36 @@ mlxsw_sp_fib_entry_hw_flags_refresh(struct mlxsw_sp *mlxsw_sp, static void mlxsw_sp_fib_entry_ralue_pack(char *ralue_pl, const struct mlxsw_sp_fib_entry *fib_entry, - enum mlxsw_reg_ralue_op op) + enum mlxsw_sp_fib_entry_op op) { struct mlxsw_sp_fib *fib = fib_entry->fib_node->fib; enum mlxsw_reg_ralxx_protocol proto; + enum mlxsw_reg_ralue_op ralue_op; u32 *p_dip; proto = (enum mlxsw_reg_ralxx_protocol) fib->proto; + switch (op) { + case MLXSW_SP_FIB_ENTRY_OP_WRITE: + ralue_op = MLXSW_REG_RALUE_OP_WRITE_WRITE; + break; + case MLXSW_SP_FIB_ENTRY_OP_DELETE: + ralue_op = MLXSW_REG_RALUE_OP_WRITE_DELETE; + break; + default: + WARN_ON_ONCE(1); + return; + } + switch (fib->proto) { case MLXSW_SP_L3_PROTO_IPV4: p_dip = (u32 *) fib_entry->fib_node->key.addr; - mlxsw_reg_ralue_pack4(ralue_pl, proto, op, fib->vr->id, + mlxsw_reg_ralue_pack4(ralue_pl, proto, ralue_op, fib->vr->id, fib_entry->fib_node->key.prefix_len, *p_dip); break; case MLXSW_SP_L3_PROTO_IPV6: - mlxsw_reg_ralue_pack6(ralue_pl, proto, op, fib->vr->id, + mlxsw_reg_ralue_pack6(ralue_pl, proto, ralue_op, fib->vr->id, fib_entry->fib_node->key.prefix_len, fib_entry->fib_node->key.addr); break; @@ -4368,7 +4381,7 @@ static int mlxsw_sp_adj_discard_write(struct mlxsw_sp *mlxsw_sp, u16 rif_index) static int mlxsw_sp_fib_entry_op_remote(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, - enum mlxsw_reg_ralue_op op) + enum mlxsw_sp_fib_entry_op op) { struct mlxsw_sp_nexthop_group *nh_group = fib_entry->nh_group; char ralue_pl[MLXSW_REG_RALUE_LEN]; @@ -4408,7 +4421,7 @@ static int mlxsw_sp_fib_entry_op_remote(struct mlxsw_sp *mlxsw_sp, static int mlxsw_sp_fib_entry_op_local(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, - enum mlxsw_reg_ralue_op op) + enum mlxsw_sp_fib_entry_op op) { struct mlxsw_sp_rif *rif = fib_entry->nh_group->nh_rif; enum mlxsw_reg_ralue_trap_action trap_action; @@ -4432,7 +4445,7 @@ static int mlxsw_sp_fib_entry_op_local(struct mlxsw_sp *mlxsw_sp, static int mlxsw_sp_fib_entry_op_trap(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, - enum mlxsw_reg_ralue_op op) + enum mlxsw_sp_fib_entry_op op) { char ralue_pl[MLXSW_REG_RALUE_LEN]; @@ -4443,7 +4456,7 @@ static int mlxsw_sp_fib_entry_op_trap(struct mlxsw_sp *mlxsw_sp, static int mlxsw_sp_fib_entry_op_blackhole(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, - enum mlxsw_reg_ralue_op op) + enum mlxsw_sp_fib_entry_op op) { enum mlxsw_reg_ralue_trap_action trap_action; char ralue_pl[MLXSW_REG_RALUE_LEN]; @@ -4457,7 +4470,7 @@ static int mlxsw_sp_fib_entry_op_blackhole(struct mlxsw_sp *mlxsw_sp, static int mlxsw_sp_fib_entry_op_unreachable(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, - enum mlxsw_reg_ralue_op op) + enum mlxsw_sp_fib_entry_op op) { enum mlxsw_reg_ralue_trap_action trap_action; char ralue_pl[MLXSW_REG_RALUE_LEN]; @@ -4474,7 +4487,7 @@ mlxsw_sp_fib_entry_op_unreachable(struct mlxsw_sp *mlxsw_sp, static int mlxsw_sp_fib_entry_op_ipip_decap(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, - enum mlxsw_reg_ralue_op op) + enum mlxsw_sp_fib_entry_op op) { struct mlxsw_sp_ipip_entry *ipip_entry = fib_entry->decap.ipip_entry; const struct mlxsw_sp_ipip_ops *ipip_ops; @@ -4489,7 +4502,7 @@ mlxsw_sp_fib_entry_op_ipip_decap(struct mlxsw_sp *mlxsw_sp, static int mlxsw_sp_fib_entry_op_nve_decap(struct mlxsw_sp 
*mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, - enum mlxsw_reg_ralue_op op) + enum mlxsw_sp_fib_entry_op op) { char ralue_pl[MLXSW_REG_RALUE_LEN]; @@ -4501,7 +4514,7 @@ static int mlxsw_sp_fib_entry_op_nve_decap(struct mlxsw_sp *mlxsw_sp, static int __mlxsw_sp_fib_entry_op(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, - enum mlxsw_reg_ralue_op op) + enum mlxsw_sp_fib_entry_op op) { switch (fib_entry->type) { case MLXSW_SP_FIB_ENTRY_TYPE_REMOTE: @@ -4526,7 +4539,7 @@ static int __mlxsw_sp_fib_entry_op(struct mlxsw_sp *mlxsw_sp, static int mlxsw_sp_fib_entry_op(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, - enum mlxsw_reg_ralue_op op) + enum mlxsw_sp_fib_entry_op op) { int err = __mlxsw_sp_fib_entry_op(mlxsw_sp, fib_entry, op); @@ -4542,14 +4555,14 @@ static int mlxsw_sp_fib_entry_update(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry) { return mlxsw_sp_fib_entry_op(mlxsw_sp, fib_entry, - MLXSW_REG_RALUE_OP_WRITE_WRITE); + MLXSW_SP_FIB_ENTRY_OP_WRITE); } static int mlxsw_sp_fib_entry_del(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry) { return mlxsw_sp_fib_entry_op(mlxsw_sp, fib_entry, - MLXSW_REG_RALUE_OP_WRITE_DELETE); + MLXSW_SP_FIB_ENTRY_OP_DELETE); } static int diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h index c5c7346eb815..68f5feabc02c 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h @@ -61,6 +61,11 @@ struct mlxsw_sp_router_ll_ops { int (*raltb_write)(struct mlxsw_sp *mlxsw_sp, char *xraltb_pl); }; +enum mlxsw_sp_fib_entry_op { + MLXSW_SP_FIB_ENTRY_OP_WRITE, + MLXSW_SP_FIB_ENTRY_OP_DELETE, +}; + struct mlxsw_sp_rif_ipip_lb; struct mlxsw_sp_rif_ipip_lb_config { enum mlxsw_reg_ritr_loopback_ipip_type lb_ipipt; From patchwork Tue Nov 10 09:48:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ido Schimmel X-Patchwork-Id: 11893707 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 78A82C4741F for ; Tue, 10 Nov 2020 09:50:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1072720781 for ; Tue, 10 Nov 2020 09:50:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730224AbgKJJuZ (ORCPT ); Tue, 10 Nov 2020 04:50:25 -0500 Received: from wout3-smtp.messagingengine.com ([64.147.123.19]:32903 "EHLO wout3-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726467AbgKJJuW (ORCPT ); Tue, 10 Nov 2020 04:50:22 -0500 Received: from compute3.internal (compute3.nyi.internal [10.202.2.43]) by mailout.west.internal (Postfix) with ESMTP id 4A864DC3; Tue, 10 Nov 2020 04:50:21 -0500 (EST) Received: from mailfrontend1 ([10.202.2.162]) by compute3.internal (MEProxy); Tue, 10 Nov 2020 04:50:21 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; 
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com, Ido Schimmel
Subject: [PATCH net-next 03/15] mlxsw: spectrum_router: Introduce FIB event queue instead of separate works
Date: Tue, 10 Nov 2020 11:48:48 +0200
Message-Id: <20201110094900.1920158-4-idosch@idosch.org>
In-Reply-To: <20201110094900.1920158-1-idosch@idosch.org>
References: <20201110094900.1920158-1-idosch@idosch.org>

From: Jiri Pirko

Currently, every FIB event is queued up as a separate work item, so each
work callback can process only one FIB entry. In preparation for future
bulking of multiple FIB entries into a single XMDR register write, convert
to a FIB event queue: the FIB notifier callback adds new events to the tail
of a list_head, and the work callback then processes multiple events from
that list in one invocation.
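To make the pattern concrete, here is a minimal, generic sketch of such a
queue-and-drain scheme (illustrative names, plain schedule_work() instead of
the driver's own work scheduling; it is not the mlxsw code itself):

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct demo_event {
        struct list_head list;  /* node in the event queue */
        unsigned long event;
};

struct demo_queue {
        struct work_struct work;
        struct list_head queue;
        spinlock_t lock;        /* protects the event queue list */
};

/* Producer side (e.g. a notifier callback): may run in atomic context. */
static int demo_enqueue(struct demo_queue *q, unsigned long event)
{
        struct demo_event *ev;

        ev = kzalloc(sizeof(*ev), GFP_ATOMIC);
        if (!ev)
                return -ENOMEM;
        ev->event = event;

        spin_lock_bh(&q->lock);
        list_add_tail(&ev->list, &q->queue);
        spin_unlock_bh(&q->lock);

        schedule_work(&q->work);
        return 0;
}

/* Worker side: splice out everything queued so far and handle all of it
 * in a single callback, instead of one work item per event.
 */
static void demo_work(struct work_struct *work)
{
        struct demo_queue *q = container_of(work, struct demo_queue, work);
        struct demo_event *ev, *tmp;
        LIST_HEAD(queue);

        spin_lock_bh(&q->lock);
        list_splice_init(&q->queue, &queue);
        spin_unlock_bh(&q->lock);

        list_for_each_entry_safe(ev, tmp, &queue, list) {
                /* process ev->event here */
                kfree(ev);
                cond_resched();
        }
}

static void demo_queue_init(struct demo_queue *q)
{
        INIT_WORK(&q->work, demo_work);
        INIT_LIST_HEAD(&q->queue);
        spin_lock_init(&q->lock);
}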
Signed-off-by: Jiri Pirko Signed-off-by: Ido Schimmel --- .../ethernet/mellanox/mlxsw/spectrum_router.c | 207 ++++++++++-------- .../ethernet/mellanox/mlxsw/spectrum_router.h | 3 + 2 files changed, 119 insertions(+), 91 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index d916f1045d97..99777d190e6d 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -5945,15 +5945,15 @@ static void mlxsw_sp_router_fib_abort(struct mlxsw_sp *mlxsw_sp) dev_warn(mlxsw_sp->bus_info->dev, "Failed to set abort trap.\n"); } -struct mlxsw_sp_fib6_event_work { +struct mlxsw_sp_fib6_event { struct fib6_info **rt_arr; unsigned int nrt6; }; -struct mlxsw_sp_fib_event_work { - struct work_struct work; +struct mlxsw_sp_fib_event { + struct list_head list; /* node in fib queue */ union { - struct mlxsw_sp_fib6_event_work fib6_work; + struct mlxsw_sp_fib6_event fib6_event; struct fib_entry_notifier_info fen_info; struct fib_rule_notifier_info fr_info; struct fib_nh_notifier_info fnh_info; @@ -5962,11 +5962,12 @@ struct mlxsw_sp_fib_event_work { }; struct mlxsw_sp *mlxsw_sp; unsigned long event; + int family; }; static int -mlxsw_sp_router_fib6_work_init(struct mlxsw_sp_fib6_event_work *fib6_work, - struct fib6_entry_notifier_info *fen6_info) +mlxsw_sp_router_fib6_event_init(struct mlxsw_sp_fib6_event *fib6_event, + struct fib6_entry_notifier_info *fen6_info) { struct fib6_info *rt = fen6_info->rt; struct fib6_info **rt_arr; @@ -5980,8 +5981,8 @@ mlxsw_sp_router_fib6_work_init(struct mlxsw_sp_fib6_event_work *fib6_work, if (!rt_arr) return -ENOMEM; - fib6_work->rt_arr = rt_arr; - fib6_work->nrt6 = nrt6; + fib6_event->rt_arr = rt_arr; + fib6_event->nrt6 = nrt6; rt_arr[0] = rt; fib6_info_hold(rt); @@ -6003,170 +6004,186 @@ mlxsw_sp_router_fib6_work_init(struct mlxsw_sp_fib6_event_work *fib6_work, } static void -mlxsw_sp_router_fib6_work_fini(struct mlxsw_sp_fib6_event_work *fib6_work) +mlxsw_sp_router_fib6_event_fini(struct mlxsw_sp_fib6_event *fib6_event) { int i; - for (i = 0; i < fib6_work->nrt6; i++) - mlxsw_sp_rt6_release(fib6_work->rt_arr[i]); - kfree(fib6_work->rt_arr); + for (i = 0; i < fib6_event->nrt6; i++) + mlxsw_sp_rt6_release(fib6_event->rt_arr[i]); + kfree(fib6_event->rt_arr); } -static void mlxsw_sp_router_fib4_event_work(struct work_struct *work) +static void mlxsw_sp_router_fib4_event_process(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_event *fib_event) { - struct mlxsw_sp_fib_event_work *fib_work = - container_of(work, struct mlxsw_sp_fib_event_work, work); - struct mlxsw_sp *mlxsw_sp = fib_work->mlxsw_sp; int err; mutex_lock(&mlxsw_sp->router->lock); mlxsw_sp_span_respin(mlxsw_sp); - switch (fib_work->event) { + switch (fib_event->event) { case FIB_EVENT_ENTRY_REPLACE: - err = mlxsw_sp_router_fib4_replace(mlxsw_sp, - &fib_work->fen_info); + err = mlxsw_sp_router_fib4_replace(mlxsw_sp, &fib_event->fen_info); if (err) mlxsw_sp_router_fib_abort(mlxsw_sp); - fib_info_put(fib_work->fen_info.fi); + fib_info_put(fib_event->fen_info.fi); break; case FIB_EVENT_ENTRY_DEL: - mlxsw_sp_router_fib4_del(mlxsw_sp, &fib_work->fen_info); - fib_info_put(fib_work->fen_info.fi); + mlxsw_sp_router_fib4_del(mlxsw_sp, &fib_event->fen_info); + fib_info_put(fib_event->fen_info.fi); break; case FIB_EVENT_NH_ADD: case FIB_EVENT_NH_DEL: - mlxsw_sp_nexthop4_event(mlxsw_sp, fib_work->event, - fib_work->fnh_info.fib_nh); - 
fib_info_put(fib_work->fnh_info.fib_nh->nh_parent); + mlxsw_sp_nexthop4_event(mlxsw_sp, fib_event->event, fib_event->fnh_info.fib_nh); + fib_info_put(fib_event->fnh_info.fib_nh->nh_parent); break; } mutex_unlock(&mlxsw_sp->router->lock); - kfree(fib_work); } -static void mlxsw_sp_router_fib6_event_work(struct work_struct *work) +static void mlxsw_sp_router_fib6_event_process(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_event *fib_event) { - struct mlxsw_sp_fib_event_work *fib_work = - container_of(work, struct mlxsw_sp_fib_event_work, work); - struct mlxsw_sp *mlxsw_sp = fib_work->mlxsw_sp; int err; mutex_lock(&mlxsw_sp->router->lock); mlxsw_sp_span_respin(mlxsw_sp); - switch (fib_work->event) { + switch (fib_event->event) { case FIB_EVENT_ENTRY_REPLACE: - err = mlxsw_sp_router_fib6_replace(mlxsw_sp, - fib_work->fib6_work.rt_arr, - fib_work->fib6_work.nrt6); + err = mlxsw_sp_router_fib6_replace(mlxsw_sp, fib_event->fib6_event.rt_arr, + fib_event->fib6_event.nrt6); if (err) mlxsw_sp_router_fib_abort(mlxsw_sp); - mlxsw_sp_router_fib6_work_fini(&fib_work->fib6_work); + mlxsw_sp_router_fib6_event_fini(&fib_event->fib6_event); break; case FIB_EVENT_ENTRY_APPEND: - err = mlxsw_sp_router_fib6_append(mlxsw_sp, - fib_work->fib6_work.rt_arr, - fib_work->fib6_work.nrt6); + err = mlxsw_sp_router_fib6_append(mlxsw_sp, fib_event->fib6_event.rt_arr, + fib_event->fib6_event.nrt6); if (err) mlxsw_sp_router_fib_abort(mlxsw_sp); - mlxsw_sp_router_fib6_work_fini(&fib_work->fib6_work); + mlxsw_sp_router_fib6_event_fini(&fib_event->fib6_event); break; case FIB_EVENT_ENTRY_DEL: - mlxsw_sp_router_fib6_del(mlxsw_sp, - fib_work->fib6_work.rt_arr, - fib_work->fib6_work.nrt6); - mlxsw_sp_router_fib6_work_fini(&fib_work->fib6_work); + mlxsw_sp_router_fib6_del(mlxsw_sp, fib_event->fib6_event.rt_arr, + fib_event->fib6_event.nrt6); + mlxsw_sp_router_fib6_event_fini(&fib_event->fib6_event); break; } mutex_unlock(&mlxsw_sp->router->lock); - kfree(fib_work); } -static void mlxsw_sp_router_fibmr_event_work(struct work_struct *work) +static void mlxsw_sp_router_fibmr_event_process(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_event *fib_event) { - struct mlxsw_sp_fib_event_work *fib_work = - container_of(work, struct mlxsw_sp_fib_event_work, work); - struct mlxsw_sp *mlxsw_sp = fib_work->mlxsw_sp; bool replace; int err; rtnl_lock(); mutex_lock(&mlxsw_sp->router->lock); - switch (fib_work->event) { + switch (fib_event->event) { case FIB_EVENT_ENTRY_REPLACE: case FIB_EVENT_ENTRY_ADD: - replace = fib_work->event == FIB_EVENT_ENTRY_REPLACE; + replace = fib_event->event == FIB_EVENT_ENTRY_REPLACE; - err = mlxsw_sp_router_fibmr_add(mlxsw_sp, &fib_work->men_info, - replace); + err = mlxsw_sp_router_fibmr_add(mlxsw_sp, &fib_event->men_info, replace); if (err) mlxsw_sp_router_fib_abort(mlxsw_sp); - mr_cache_put(fib_work->men_info.mfc); + mr_cache_put(fib_event->men_info.mfc); break; case FIB_EVENT_ENTRY_DEL: - mlxsw_sp_router_fibmr_del(mlxsw_sp, &fib_work->men_info); - mr_cache_put(fib_work->men_info.mfc); + mlxsw_sp_router_fibmr_del(mlxsw_sp, &fib_event->men_info); + mr_cache_put(fib_event->men_info.mfc); break; case FIB_EVENT_VIF_ADD: err = mlxsw_sp_router_fibmr_vif_add(mlxsw_sp, - &fib_work->ven_info); + &fib_event->ven_info); if (err) mlxsw_sp_router_fib_abort(mlxsw_sp); - dev_put(fib_work->ven_info.dev); + dev_put(fib_event->ven_info.dev); break; case FIB_EVENT_VIF_DEL: - mlxsw_sp_router_fibmr_vif_del(mlxsw_sp, - &fib_work->ven_info); - dev_put(fib_work->ven_info.dev); + mlxsw_sp_router_fibmr_vif_del(mlxsw_sp, 
&fib_event->ven_info); + dev_put(fib_event->ven_info.dev); break; } mutex_unlock(&mlxsw_sp->router->lock); rtnl_unlock(); - kfree(fib_work); } -static void mlxsw_sp_router_fib4_event(struct mlxsw_sp_fib_event_work *fib_work, +static void mlxsw_sp_router_fib_event_work(struct work_struct *work) +{ + struct mlxsw_sp_router *router = container_of(work, struct mlxsw_sp_router, fib_event_work); + struct mlxsw_sp *mlxsw_sp = router->mlxsw_sp; + struct mlxsw_sp_fib_event *fib_event, *tmp; + LIST_HEAD(fib_event_queue); + + spin_lock_bh(&router->fib_event_queue_lock); + list_splice_init(&router->fib_event_queue, &fib_event_queue); + spin_unlock_bh(&router->fib_event_queue_lock); + + list_for_each_entry_safe(fib_event, tmp, &fib_event_queue, list) { + switch (fib_event->family) { + case AF_INET: + mlxsw_sp_router_fib4_event_process(mlxsw_sp, fib_event); + break; + case AF_INET6: + mlxsw_sp_router_fib6_event_process(mlxsw_sp, fib_event); + break; + case RTNL_FAMILY_IP6MR: + case RTNL_FAMILY_IPMR: + mlxsw_sp_router_fibmr_event_process(mlxsw_sp, + fib_event); + break; + default: + WARN_ON_ONCE(1); + } + kfree(fib_event); + cond_resched(); + } +} + +static void mlxsw_sp_router_fib4_event(struct mlxsw_sp_fib_event *fib_event, struct fib_notifier_info *info) { struct fib_entry_notifier_info *fen_info; struct fib_nh_notifier_info *fnh_info; - switch (fib_work->event) { + switch (fib_event->event) { case FIB_EVENT_ENTRY_REPLACE: case FIB_EVENT_ENTRY_DEL: fen_info = container_of(info, struct fib_entry_notifier_info, info); - fib_work->fen_info = *fen_info; + fib_event->fen_info = *fen_info; /* Take reference on fib_info to prevent it from being - * freed while work is queued. Release it afterwards. + * freed while event is queued. Release it afterwards. */ - fib_info_hold(fib_work->fen_info.fi); + fib_info_hold(fib_event->fen_info.fi); break; case FIB_EVENT_NH_ADD: case FIB_EVENT_NH_DEL: fnh_info = container_of(info, struct fib_nh_notifier_info, info); - fib_work->fnh_info = *fnh_info; - fib_info_hold(fib_work->fnh_info.fib_nh->nh_parent); + fib_event->fnh_info = *fnh_info; + fib_info_hold(fib_event->fnh_info.fib_nh->nh_parent); break; } } -static int mlxsw_sp_router_fib6_event(struct mlxsw_sp_fib_event_work *fib_work, +static int mlxsw_sp_router_fib6_event(struct mlxsw_sp_fib_event *fib_event, struct fib_notifier_info *info) { struct fib6_entry_notifier_info *fen6_info; int err; - switch (fib_work->event) { + switch (fib_event->event) { case FIB_EVENT_ENTRY_REPLACE: case FIB_EVENT_ENTRY_APPEND: case FIB_EVENT_ENTRY_DEL: fen6_info = container_of(info, struct fib6_entry_notifier_info, info); - err = mlxsw_sp_router_fib6_work_init(&fib_work->fib6_work, - fen6_info); + err = mlxsw_sp_router_fib6_event_init(&fib_event->fib6_event, + fen6_info); if (err) return err; break; @@ -6176,20 +6193,20 @@ static int mlxsw_sp_router_fib6_event(struct mlxsw_sp_fib_event_work *fib_work, } static void -mlxsw_sp_router_fibmr_event(struct mlxsw_sp_fib_event_work *fib_work, +mlxsw_sp_router_fibmr_event(struct mlxsw_sp_fib_event *fib_event, struct fib_notifier_info *info) { - switch (fib_work->event) { + switch (fib_event->event) { case FIB_EVENT_ENTRY_REPLACE: case FIB_EVENT_ENTRY_ADD: case FIB_EVENT_ENTRY_DEL: - memcpy(&fib_work->men_info, info, sizeof(fib_work->men_info)); - mr_cache_hold(fib_work->men_info.mfc); + memcpy(&fib_event->men_info, info, sizeof(fib_event->men_info)); + mr_cache_hold(fib_event->men_info.mfc); break; case FIB_EVENT_VIF_ADD: case FIB_EVENT_VIF_DEL: - memcpy(&fib_work->ven_info, info, 
sizeof(fib_work->ven_info)); - dev_hold(fib_work->ven_info.dev); + memcpy(&fib_event->ven_info, info, sizeof(fib_event->ven_info)); + dev_hold(fib_event->ven_info.dev); break; } } @@ -6246,7 +6263,7 @@ static int mlxsw_sp_router_fib_rule_event(unsigned long event, static int mlxsw_sp_router_fib_event(struct notifier_block *nb, unsigned long event, void *ptr) { - struct mlxsw_sp_fib_event_work *fib_work; + struct mlxsw_sp_fib_event *fib_event; struct fib_notifier_info *info = ptr; struct mlxsw_sp_router *router; int err; @@ -6296,37 +6313,39 @@ static int mlxsw_sp_router_fib_event(struct notifier_block *nb, break; } - fib_work = kzalloc(sizeof(*fib_work), GFP_ATOMIC); - if (!fib_work) + fib_event = kzalloc(sizeof(*fib_event), GFP_ATOMIC); + if (!fib_event) return NOTIFY_BAD; - fib_work->mlxsw_sp = router->mlxsw_sp; - fib_work->event = event; + fib_event->mlxsw_sp = router->mlxsw_sp; + fib_event->event = event; + fib_event->family = info->family; switch (info->family) { case AF_INET: - INIT_WORK(&fib_work->work, mlxsw_sp_router_fib4_event_work); - mlxsw_sp_router_fib4_event(fib_work, info); + mlxsw_sp_router_fib4_event(fib_event, info); break; case AF_INET6: - INIT_WORK(&fib_work->work, mlxsw_sp_router_fib6_event_work); - err = mlxsw_sp_router_fib6_event(fib_work, info); + err = mlxsw_sp_router_fib6_event(fib_event, info); if (err) goto err_fib_event; break; case RTNL_FAMILY_IP6MR: case RTNL_FAMILY_IPMR: - INIT_WORK(&fib_work->work, mlxsw_sp_router_fibmr_event_work); - mlxsw_sp_router_fibmr_event(fib_work, info); + mlxsw_sp_router_fibmr_event(fib_event, info); break; } - mlxsw_core_schedule_work(&fib_work->work); + /* Enqueue the event and trigger the work */ + spin_lock_bh(&router->fib_event_queue_lock); + list_add_tail(&fib_event->list, &router->fib_event_queue); + spin_unlock_bh(&router->fib_event_queue_lock); + mlxsw_core_schedule_work(&router->fib_event_work); return NOTIFY_DONE; err_fib_event: - kfree(fib_work); + kfree(fib_event); return NOTIFY_BAD; } @@ -8171,6 +8190,10 @@ int mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp, if (err) goto err_dscp_init; + INIT_WORK(&router->fib_event_work, mlxsw_sp_router_fib_event_work); + INIT_LIST_HEAD(&router->fib_event_queue); + spin_lock_init(&router->fib_event_queue_lock); + router->inetaddr_nb.notifier_call = mlxsw_sp_inetaddr_event; err = register_inetaddr_notifier(&router->inetaddr_nb); if (err) @@ -8204,6 +8227,7 @@ int mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp, unregister_inetaddr_notifier(&router->inetaddr_nb); err_register_inetaddr_notifier: mlxsw_core_flush_owq(); + WARN_ON(!list_empty(&router->fib_event_queue)); err_dscp_init: err_mp_hash_init: mlxsw_sp_neigh_fini(mlxsw_sp); @@ -8237,6 +8261,7 @@ void mlxsw_sp_router_fini(struct mlxsw_sp *mlxsw_sp) unregister_inet6addr_notifier(&mlxsw_sp->router->inet6addr_nb); unregister_inetaddr_notifier(&mlxsw_sp->router->inetaddr_nb); mlxsw_core_flush_owq(); + WARN_ON(!list_empty(&mlxsw_sp->router->fib_event_queue)); mlxsw_sp_neigh_fini(mlxsw_sp); mlxsw_sp_vrs_fini(mlxsw_sp); mlxsw_sp_mr_fini(mlxsw_sp); diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h index 68f5feabc02c..5683f20a325e 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h @@ -48,6 +48,9 @@ struct mlxsw_sp_router { bool adj_discard_index_valid; struct mlxsw_sp_router_nve_decap nve_decap_config; struct mutex lock; /* Protects shared router resources */ + struct work_struct 
fib_event_work;
+	struct list_head fib_event_queue;
+	spinlock_t fib_event_queue_lock; /* Protects fib event queue list */
 	/* One set of ops for each protocol: IPv4 and IPv6 */
 	const struct mlxsw_sp_router_ll_ops *proto_ll_ops[MLXSW_SP_L3_PROTO_MAX];
 };

From patchwork Tue Nov 10 09:48:49 2020
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com, Ido Schimmel
Subject: [PATCH net-next 04/15] mlxsw: spectrum: Propagate context from work handler containing RALUE payload
Date: Tue, 10 Nov 2020 11:48:49 +0200
Message-Id:
<20201110094900.1920158-5-idosch@idosch.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201110094900.1920158-1-idosch@idosch.org> References: <20201110094900.1920158-1-idosch@idosch.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Jiri Pirko Currently, RALUE payload is defined locally in the function that is calling the register write. With introduction of alternative register to RALUE, XMDR, it has to be possible to put multiple FIB entry operations into single register write. So in order to prepare for that, have per-work entry operation context and propagate it all the way down to the functions writing RALUE. Signed-off-by: Jiri Pirko Signed-off-by: Ido Schimmel --- .../ethernet/mellanox/mlxsw/spectrum_ipip.c | 11 +- .../ethernet/mellanox/mlxsw/spectrum_ipip.h | 1 + .../ethernet/mellanox/mlxsw/spectrum_router.c | 140 +++++++++++------- .../ethernet/mellanox/mlxsw/spectrum_router.h | 14 +- 4 files changed, 103 insertions(+), 63 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c index 8487de3e9787..f8b9b5be8247 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c @@ -182,11 +182,12 @@ mlxsw_sp_ipip_fib_entry_op_gre4_rtdp(struct mlxsw_sp *mlxsw_sp, static int mlxsw_sp_ipip_fib_entry_op_gre4_ralue(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, u32 dip, u8 prefix_len, u16 ul_vr_id, enum mlxsw_sp_fib_entry_op op, u32 tunnel_index) { - char ralue_pl[MLXSW_REG_RALUE_LEN]; + char *ralue_pl = op_ctx->ralue_pl; enum mlxsw_reg_ralue_op ralue_op; switch (op) { @@ -208,9 +209,9 @@ mlxsw_sp_ipip_fib_entry_op_gre4_ralue(struct mlxsw_sp *mlxsw_sp, } static int mlxsw_sp_ipip_fib_entry_op_gre4(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_ipip_entry *ipip_entry, - enum mlxsw_sp_fib_entry_op op, - u32 tunnel_index) + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + struct mlxsw_sp_ipip_entry *ipip_entry, + enum mlxsw_sp_fib_entry_op op, u32 tunnel_index) { u16 ul_vr_id = mlxsw_sp_ipip_lb_ul_vr_id(ipip_entry->ol_lb); __be32 dip; @@ -223,7 +224,7 @@ static int mlxsw_sp_ipip_fib_entry_op_gre4(struct mlxsw_sp *mlxsw_sp, dip = mlxsw_sp_ipip_netdev_saddr(MLXSW_SP_L3_PROTO_IPV4, ipip_entry->ol_dev).addr4; - return mlxsw_sp_ipip_fib_entry_op_gre4_ralue(mlxsw_sp, be32_to_cpu(dip), + return mlxsw_sp_ipip_fib_entry_op_gre4_ralue(mlxsw_sp, op_ctx, be32_to_cpu(dip), 32, ul_vr_id, op, tunnel_index); } diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h index f3ad1e149a45..dd53b1c207b3 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h @@ -52,6 +52,7 @@ struct mlxsw_sp_ipip_ops { const struct net_device *ol_dev); int (*fib_entry_op)(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_ipip_entry *ipip_entry, enum mlxsw_sp_fib_entry_op op, u32 tunnel_index); diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index 99777d190e6d..9083c74c1904 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -4380,12 +4380,13 @@ static int mlxsw_sp_adj_discard_write(struct mlxsw_sp *mlxsw_sp, u16 rif_index) } static int mlxsw_sp_fib_entry_op_remote(struct mlxsw_sp 
*mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { struct mlxsw_sp_nexthop_group *nh_group = fib_entry->nh_group; - char ralue_pl[MLXSW_REG_RALUE_LEN]; enum mlxsw_reg_ralue_trap_action trap_action; + char *ralue_pl = op_ctx->ralue_pl; u16 trap_id = 0; u32 adjacency_index = 0; u16 ecmp_size = 0; @@ -4420,12 +4421,13 @@ static int mlxsw_sp_fib_entry_op_remote(struct mlxsw_sp *mlxsw_sp, } static int mlxsw_sp_fib_entry_op_local(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { struct mlxsw_sp_rif *rif = fib_entry->nh_group->nh_rif; enum mlxsw_reg_ralue_trap_action trap_action; - char ralue_pl[MLXSW_REG_RALUE_LEN]; + char *ralue_pl = op_ctx->ralue_pl; u16 trap_id = 0; u16 rif_index = 0; @@ -4444,10 +4446,11 @@ static int mlxsw_sp_fib_entry_op_local(struct mlxsw_sp *mlxsw_sp, } static int mlxsw_sp_fib_entry_op_trap(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { - char ralue_pl[MLXSW_REG_RALUE_LEN]; + char *ralue_pl = op_ctx->ralue_pl; mlxsw_sp_fib_entry_ralue_pack(ralue_pl, fib_entry, op); mlxsw_reg_ralue_act_ip2me_pack(ralue_pl); @@ -4455,11 +4458,12 @@ static int mlxsw_sp_fib_entry_op_trap(struct mlxsw_sp *mlxsw_sp, } static int mlxsw_sp_fib_entry_op_blackhole(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { enum mlxsw_reg_ralue_trap_action trap_action; - char ralue_pl[MLXSW_REG_RALUE_LEN]; + char *ralue_pl = op_ctx->ralue_pl; trap_action = MLXSW_REG_RALUE_TRAP_ACTION_DISCARD_ERROR; mlxsw_sp_fib_entry_ralue_pack(ralue_pl, fib_entry, op); @@ -4469,11 +4473,12 @@ static int mlxsw_sp_fib_entry_op_blackhole(struct mlxsw_sp *mlxsw_sp, static int mlxsw_sp_fib_entry_op_unreachable(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { enum mlxsw_reg_ralue_trap_action trap_action; - char ralue_pl[MLXSW_REG_RALUE_LEN]; + char *ralue_pl = op_ctx->ralue_pl; u16 trap_id; trap_action = MLXSW_REG_RALUE_TRAP_ACTION_TRAP; @@ -4486,6 +4491,7 @@ mlxsw_sp_fib_entry_op_unreachable(struct mlxsw_sp *mlxsw_sp, static int mlxsw_sp_fib_entry_op_ipip_decap(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { @@ -4496,15 +4502,16 @@ mlxsw_sp_fib_entry_op_ipip_decap(struct mlxsw_sp *mlxsw_sp, return -EINVAL; ipip_ops = mlxsw_sp->router->ipip_ops_arr[ipip_entry->ipipt]; - return ipip_ops->fib_entry_op(mlxsw_sp, ipip_entry, op, + return ipip_ops->fib_entry_op(mlxsw_sp, op_ctx, ipip_entry, op, fib_entry->decap.tunnel_index); } static int mlxsw_sp_fib_entry_op_nve_decap(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { - char ralue_pl[MLXSW_REG_RALUE_LEN]; + char *ralue_pl = op_ctx->ralue_pl; mlxsw_sp_fib_entry_ralue_pack(ralue_pl, fib_entry, op); mlxsw_reg_ralue_act_ip2me_tun_pack(ralue_pl, @@ -4513,35 +4520,35 @@ static int mlxsw_sp_fib_entry_op_nve_decap(struct mlxsw_sp *mlxsw_sp, } static int __mlxsw_sp_fib_entry_op(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { switch (fib_entry->type) { case 
MLXSW_SP_FIB_ENTRY_TYPE_REMOTE: - return mlxsw_sp_fib_entry_op_remote(mlxsw_sp, fib_entry, op); + return mlxsw_sp_fib_entry_op_remote(mlxsw_sp, op_ctx, fib_entry, op); case MLXSW_SP_FIB_ENTRY_TYPE_LOCAL: - return mlxsw_sp_fib_entry_op_local(mlxsw_sp, fib_entry, op); + return mlxsw_sp_fib_entry_op_local(mlxsw_sp, op_ctx, fib_entry, op); case MLXSW_SP_FIB_ENTRY_TYPE_TRAP: - return mlxsw_sp_fib_entry_op_trap(mlxsw_sp, fib_entry, op); + return mlxsw_sp_fib_entry_op_trap(mlxsw_sp, op_ctx, fib_entry, op); case MLXSW_SP_FIB_ENTRY_TYPE_BLACKHOLE: - return mlxsw_sp_fib_entry_op_blackhole(mlxsw_sp, fib_entry, op); + return mlxsw_sp_fib_entry_op_blackhole(mlxsw_sp, op_ctx, fib_entry, op); case MLXSW_SP_FIB_ENTRY_TYPE_UNREACHABLE: - return mlxsw_sp_fib_entry_op_unreachable(mlxsw_sp, fib_entry, - op); + return mlxsw_sp_fib_entry_op_unreachable(mlxsw_sp, op_ctx, fib_entry, op); case MLXSW_SP_FIB_ENTRY_TYPE_IPIP_DECAP: - return mlxsw_sp_fib_entry_op_ipip_decap(mlxsw_sp, - fib_entry, op); + return mlxsw_sp_fib_entry_op_ipip_decap(mlxsw_sp, op_ctx, fib_entry, op); case MLXSW_SP_FIB_ENTRY_TYPE_NVE_DECAP: - return mlxsw_sp_fib_entry_op_nve_decap(mlxsw_sp, fib_entry, op); + return mlxsw_sp_fib_entry_op_nve_decap(mlxsw_sp, op_ctx, fib_entry, op); } return -EINVAL; } static int mlxsw_sp_fib_entry_op(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { - int err = __mlxsw_sp_fib_entry_op(mlxsw_sp, fib_entry, op); + int err = __mlxsw_sp_fib_entry_op(mlxsw_sp, op_ctx, fib_entry, op); if (err) return err; @@ -4551,17 +4558,27 @@ static int mlxsw_sp_fib_entry_op(struct mlxsw_sp *mlxsw_sp, return err; } +static int __mlxsw_sp_fib_entry_update(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + struct mlxsw_sp_fib_entry *fib_entry) +{ + return mlxsw_sp_fib_entry_op(mlxsw_sp, op_ctx, fib_entry, + MLXSW_SP_FIB_ENTRY_OP_WRITE); +} + static int mlxsw_sp_fib_entry_update(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry) { - return mlxsw_sp_fib_entry_op(mlxsw_sp, fib_entry, - MLXSW_SP_FIB_ENTRY_OP_WRITE); + struct mlxsw_sp_fib_entry_op_ctx op_ctx = {}; + + return __mlxsw_sp_fib_entry_update(mlxsw_sp, &op_ctx, fib_entry); } static int mlxsw_sp_fib_entry_del(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib_entry *fib_entry) { - return mlxsw_sp_fib_entry_op(mlxsw_sp, fib_entry, + return mlxsw_sp_fib_entry_op(mlxsw_sp, op_ctx, fib_entry, MLXSW_SP_FIB_ENTRY_OP_DELETE); } @@ -4917,6 +4934,7 @@ static void mlxsw_sp_fib_node_put(struct mlxsw_sp *mlxsw_sp, } static int mlxsw_sp_fib_node_entry_link(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib_entry *fib_entry) { struct mlxsw_sp_fib_node *fib_node = fib_entry->fib_node; @@ -4924,7 +4942,7 @@ static int mlxsw_sp_fib_node_entry_link(struct mlxsw_sp *mlxsw_sp, fib_node->fib_entry = fib_entry; - err = mlxsw_sp_fib_entry_update(mlxsw_sp, fib_entry); + err = __mlxsw_sp_fib_entry_update(mlxsw_sp, op_ctx, fib_entry); if (err) goto err_fib_entry_update; @@ -4935,16 +4953,24 @@ static int mlxsw_sp_fib_node_entry_link(struct mlxsw_sp *mlxsw_sp, return err; } -static void -mlxsw_sp_fib_node_entry_unlink(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_fib_entry *fib_entry) +static void __mlxsw_sp_fib_node_entry_unlink(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + struct mlxsw_sp_fib_entry *fib_entry) { struct mlxsw_sp_fib_node *fib_node = fib_entry->fib_node; - 
mlxsw_sp_fib_entry_del(mlxsw_sp, fib_entry); + mlxsw_sp_fib_entry_del(mlxsw_sp, op_ctx, fib_entry); fib_node->fib_entry = NULL; } +static void mlxsw_sp_fib_node_entry_unlink(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry *fib_entry) +{ + struct mlxsw_sp_fib_entry_op_ctx op_ctx = {}; + + __mlxsw_sp_fib_node_entry_unlink(mlxsw_sp, &op_ctx, fib_entry); +} + static bool mlxsw_sp_fib4_allow_replace(struct mlxsw_sp_fib4_entry *fib4_entry) { struct mlxsw_sp_fib_node *fib_node = fib4_entry->common.fib_node; @@ -4964,6 +4990,7 @@ static bool mlxsw_sp_fib4_allow_replace(struct mlxsw_sp_fib4_entry *fib4_entry) static int mlxsw_sp_router_fib4_replace(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, const struct fib_entry_notifier_info *fen_info) { struct mlxsw_sp_fib4_entry *fib4_entry, *fib4_replaced; @@ -4997,7 +5024,7 @@ mlxsw_sp_router_fib4_replace(struct mlxsw_sp *mlxsw_sp, } replaced = fib_node->fib_entry; - err = mlxsw_sp_fib_node_entry_link(mlxsw_sp, &fib4_entry->common); + err = mlxsw_sp_fib_node_entry_link(mlxsw_sp, op_ctx, &fib4_entry->common); if (err) { dev_warn(mlxsw_sp->bus_info->dev, "Failed to link FIB entry to node\n"); goto err_fib_node_entry_link; @@ -5023,6 +5050,7 @@ mlxsw_sp_router_fib4_replace(struct mlxsw_sp *mlxsw_sp, } static void mlxsw_sp_router_fib4_del(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct fib_entry_notifier_info *fen_info) { struct mlxsw_sp_fib4_entry *fib4_entry; @@ -5036,7 +5064,7 @@ static void mlxsw_sp_router_fib4_del(struct mlxsw_sp *mlxsw_sp, return; fib_node = fib4_entry->common.fib_node; - mlxsw_sp_fib_node_entry_unlink(mlxsw_sp, &fib4_entry->common); + __mlxsw_sp_fib_node_entry_unlink(mlxsw_sp, op_ctx, &fib4_entry->common); mlxsw_sp_fib4_entry_destroy(mlxsw_sp, fib4_entry); mlxsw_sp_fib_node_put(mlxsw_sp, fib_node); } @@ -5305,9 +5333,9 @@ static void mlxsw_sp_nexthop6_group_put(struct mlxsw_sp *mlxsw_sp, mlxsw_sp_nexthop6_group_destroy(mlxsw_sp, nh_grp); } -static int -mlxsw_sp_nexthop6_group_update(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_fib6_entry *fib6_entry) +static int mlxsw_sp_nexthop6_group_update(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + struct mlxsw_sp_fib6_entry *fib6_entry) { struct mlxsw_sp_nexthop_group *old_nh_grp = fib6_entry->common.nh_group; int err; @@ -5323,7 +5351,7 @@ mlxsw_sp_nexthop6_group_update(struct mlxsw_sp *mlxsw_sp, * currently associated with it in the device's table is that * of the old group. Start using the new one instead. 
*/ - err = mlxsw_sp_fib_entry_update(mlxsw_sp, &fib6_entry->common); + err = __mlxsw_sp_fib_entry_update(mlxsw_sp, op_ctx, &fib6_entry->common); if (err) goto err_fib_entry_update; @@ -5343,6 +5371,7 @@ mlxsw_sp_nexthop6_group_update(struct mlxsw_sp *mlxsw_sp, static int mlxsw_sp_fib6_entry_nexthop_add(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib6_entry *fib6_entry, struct fib6_info **rt_arr, unsigned int nrt6) { @@ -5360,7 +5389,7 @@ mlxsw_sp_fib6_entry_nexthop_add(struct mlxsw_sp *mlxsw_sp, fib6_entry->nrt6++; } - err = mlxsw_sp_nexthop6_group_update(mlxsw_sp, fib6_entry); + err = mlxsw_sp_nexthop6_group_update(mlxsw_sp, op_ctx, fib6_entry); if (err) goto err_nexthop6_group_update; @@ -5381,6 +5410,7 @@ mlxsw_sp_fib6_entry_nexthop_add(struct mlxsw_sp *mlxsw_sp, static void mlxsw_sp_fib6_entry_nexthop_del(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib6_entry *fib6_entry, struct fib6_info **rt_arr, unsigned int nrt6) { @@ -5398,7 +5428,7 @@ mlxsw_sp_fib6_entry_nexthop_del(struct mlxsw_sp *mlxsw_sp, mlxsw_sp_rt6_destroy(mlxsw_sp_rt6); } - mlxsw_sp_nexthop6_group_update(mlxsw_sp, fib6_entry); + mlxsw_sp_nexthop6_group_update(mlxsw_sp, op_ctx, fib6_entry); } static void mlxsw_sp_fib6_entry_type_set(struct mlxsw_sp *mlxsw_sp, @@ -5550,8 +5580,8 @@ static bool mlxsw_sp_fib6_allow_replace(struct mlxsw_sp_fib6_entry *fib6_entry) } static int mlxsw_sp_router_fib6_replace(struct mlxsw_sp *mlxsw_sp, - struct fib6_info **rt_arr, - unsigned int nrt6) + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + struct fib6_info **rt_arr, unsigned int nrt6) { struct mlxsw_sp_fib6_entry *fib6_entry, *fib6_replaced; struct mlxsw_sp_fib_entry *replaced; @@ -5590,7 +5620,7 @@ static int mlxsw_sp_router_fib6_replace(struct mlxsw_sp *mlxsw_sp, } replaced = fib_node->fib_entry; - err = mlxsw_sp_fib_node_entry_link(mlxsw_sp, &fib6_entry->common); + err = mlxsw_sp_fib_node_entry_link(mlxsw_sp, op_ctx, &fib6_entry->common); if (err) goto err_fib_node_entry_link; @@ -5614,8 +5644,8 @@ static int mlxsw_sp_router_fib6_replace(struct mlxsw_sp *mlxsw_sp, } static int mlxsw_sp_router_fib6_append(struct mlxsw_sp *mlxsw_sp, - struct fib6_info **rt_arr, - unsigned int nrt6) + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + struct fib6_info **rt_arr, unsigned int nrt6) { struct mlxsw_sp_fib6_entry *fib6_entry; struct mlxsw_sp_fib_node *fib_node; @@ -5646,8 +5676,7 @@ static int mlxsw_sp_router_fib6_append(struct mlxsw_sp *mlxsw_sp, fib6_entry = container_of(fib_node->fib_entry, struct mlxsw_sp_fib6_entry, common); - err = mlxsw_sp_fib6_entry_nexthop_add(mlxsw_sp, fib6_entry, rt_arr, - nrt6); + err = mlxsw_sp_fib6_entry_nexthop_add(mlxsw_sp, op_ctx, fib6_entry, rt_arr, nrt6); if (err) goto err_fib6_entry_nexthop_add; @@ -5659,8 +5688,8 @@ static int mlxsw_sp_router_fib6_append(struct mlxsw_sp *mlxsw_sp, } static void mlxsw_sp_router_fib6_del(struct mlxsw_sp *mlxsw_sp, - struct fib6_info **rt_arr, - unsigned int nrt6) + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + struct fib6_info **rt_arr, unsigned int nrt6) { struct mlxsw_sp_fib6_entry *fib6_entry; struct mlxsw_sp_fib_node *fib_node; @@ -5685,14 +5714,13 @@ static void mlxsw_sp_router_fib6_del(struct mlxsw_sp *mlxsw_sp, * group. 
*/ if (nrt6 != fib6_entry->nrt6) { - mlxsw_sp_fib6_entry_nexthop_del(mlxsw_sp, fib6_entry, rt_arr, - nrt6); + mlxsw_sp_fib6_entry_nexthop_del(mlxsw_sp, op_ctx, fib6_entry, rt_arr, nrt6); return; } fib_node = fib6_entry->common.fib_node; - mlxsw_sp_fib_node_entry_unlink(mlxsw_sp, &fib6_entry->common); + __mlxsw_sp_fib_node_entry_unlink(mlxsw_sp, op_ctx, &fib6_entry->common); mlxsw_sp_fib6_entry_destroy(mlxsw_sp, fib6_entry); mlxsw_sp_fib_node_put(mlxsw_sp, fib_node); } @@ -5720,8 +5748,9 @@ static int __mlxsw_sp_router_set_abort_trap(struct mlxsw_sp *mlxsw_sp, for (i = 0; i < MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_VRS); i++) { struct mlxsw_sp_vr *vr = &mlxsw_sp->router->vrs[i]; + struct mlxsw_sp_fib_entry_op_ctx op_ctx = {}; char xraltb_pl[MLXSW_REG_XRALTB_LEN]; - char ralue_pl[MLXSW_REG_RALUE_LEN]; + char *ralue_pl = op_ctx.ralue_pl; mlxsw_reg_xraltb_pack(xraltb_pl, vr->id, ralxx_proto, tree_id); err = ll_ops->raltb_write(mlxsw_sp, xraltb_pl); @@ -6014,6 +6043,7 @@ mlxsw_sp_router_fib6_event_fini(struct mlxsw_sp_fib6_event *fib6_event) } static void mlxsw_sp_router_fib4_event_process(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib_event *fib_event) { int err; @@ -6023,13 +6053,13 @@ static void mlxsw_sp_router_fib4_event_process(struct mlxsw_sp *mlxsw_sp, switch (fib_event->event) { case FIB_EVENT_ENTRY_REPLACE: - err = mlxsw_sp_router_fib4_replace(mlxsw_sp, &fib_event->fen_info); + err = mlxsw_sp_router_fib4_replace(mlxsw_sp, op_ctx, &fib_event->fen_info); if (err) mlxsw_sp_router_fib_abort(mlxsw_sp); fib_info_put(fib_event->fen_info.fi); break; case FIB_EVENT_ENTRY_DEL: - mlxsw_sp_router_fib4_del(mlxsw_sp, &fib_event->fen_info); + mlxsw_sp_router_fib4_del(mlxsw_sp, op_ctx, &fib_event->fen_info); fib_info_put(fib_event->fen_info.fi); break; case FIB_EVENT_NH_ADD: @@ -6042,6 +6072,7 @@ static void mlxsw_sp_router_fib4_event_process(struct mlxsw_sp *mlxsw_sp, } static void mlxsw_sp_router_fib6_event_process(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib_event *fib_event) { int err; @@ -6051,21 +6082,21 @@ static void mlxsw_sp_router_fib6_event_process(struct mlxsw_sp *mlxsw_sp, switch (fib_event->event) { case FIB_EVENT_ENTRY_REPLACE: - err = mlxsw_sp_router_fib6_replace(mlxsw_sp, fib_event->fib6_event.rt_arr, + err = mlxsw_sp_router_fib6_replace(mlxsw_sp, op_ctx, fib_event->fib6_event.rt_arr, fib_event->fib6_event.nrt6); if (err) mlxsw_sp_router_fib_abort(mlxsw_sp); mlxsw_sp_router_fib6_event_fini(&fib_event->fib6_event); break; case FIB_EVENT_ENTRY_APPEND: - err = mlxsw_sp_router_fib6_append(mlxsw_sp, fib_event->fib6_event.rt_arr, + err = mlxsw_sp_router_fib6_append(mlxsw_sp, op_ctx, fib_event->fib6_event.rt_arr, fib_event->fib6_event.nrt6); if (err) mlxsw_sp_router_fib_abort(mlxsw_sp); mlxsw_sp_router_fib6_event_fini(&fib_event->fib6_event); break; case FIB_EVENT_ENTRY_DEL: - mlxsw_sp_router_fib6_del(mlxsw_sp, fib_event->fib6_event.rt_arr, + mlxsw_sp_router_fib6_del(mlxsw_sp, op_ctx, fib_event->fib6_event.rt_arr, fib_event->fib6_event.nrt6); mlxsw_sp_router_fib6_event_fini(&fib_event->fib6_event); break; @@ -6114,6 +6145,7 @@ static void mlxsw_sp_router_fibmr_event_process(struct mlxsw_sp *mlxsw_sp, static void mlxsw_sp_router_fib_event_work(struct work_struct *work) { struct mlxsw_sp_router *router = container_of(work, struct mlxsw_sp_router, fib_event_work); + struct mlxsw_sp_fib_entry_op_ctx op_ctx = {}; struct mlxsw_sp *mlxsw_sp = router->mlxsw_sp; struct mlxsw_sp_fib_event *fib_event, *tmp; 
LIST_HEAD(fib_event_queue); @@ -6125,10 +6157,12 @@ static void mlxsw_sp_router_fib_event_work(struct work_struct *work) list_for_each_entry_safe(fib_event, tmp, &fib_event_queue, list) { switch (fib_event->family) { case AF_INET: - mlxsw_sp_router_fib4_event_process(mlxsw_sp, fib_event); + mlxsw_sp_router_fib4_event_process(mlxsw_sp, &op_ctx, + fib_event); break; case AF_INET6: - mlxsw_sp_router_fib6_event_process(mlxsw_sp, fib_event); + mlxsw_sp_router_fib6_event_process(mlxsw_sp, &op_ctx, + fib_event); break; case RTNL_FAMILY_IP6MR: case RTNL_FAMILY_IPMR: diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h index 5683f20a325e..963825dff66b 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h @@ -55,6 +55,15 @@ struct mlxsw_sp_router { const struct mlxsw_sp_router_ll_ops *proto_ll_ops[MLXSW_SP_L3_PROTO_MAX]; }; +enum mlxsw_sp_fib_entry_op { + MLXSW_SP_FIB_ENTRY_OP_WRITE, + MLXSW_SP_FIB_ENTRY_OP_DELETE, +}; + +struct mlxsw_sp_fib_entry_op_ctx { + char ralue_pl[MLXSW_REG_RALUE_LEN]; +}; + /* Low-level router ops. Basically this is to handle the different * register sets to work with ordinary and XM trees and FIB entries. */ @@ -64,11 +73,6 @@ struct mlxsw_sp_router_ll_ops { int (*raltb_write)(struct mlxsw_sp *mlxsw_sp, char *xraltb_pl); }; -enum mlxsw_sp_fib_entry_op { - MLXSW_SP_FIB_ENTRY_OP_WRITE, - MLXSW_SP_FIB_ENTRY_OP_DELETE, -}; - struct mlxsw_sp_rif_ipip_lb; struct mlxsw_sp_rif_ipip_lb_config { enum mlxsw_reg_ritr_loopback_ipip_type lb_ipipt; From patchwork Tue Nov 10 09:48:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ido Schimmel X-Patchwork-Id: 11893699 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 271D2C388F7 for ; Tue, 10 Nov 2020 09:51:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BADBB207BB for ; Tue, 10 Nov 2020 09:50:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730457AbgKJJu2 (ORCPT ); Tue, 10 Nov 2020 04:50:28 -0500 Received: from wout3-smtp.messagingengine.com ([64.147.123.19]:52863 "EHLO wout3-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730209AbgKJJuZ (ORCPT ); Tue, 10 Nov 2020 04:50:25 -0500 Received: from compute3.internal (compute3.nyi.internal [10.202.2.43]) by mailout.west.internal (Postfix) with ESMTP id 2FEE8E08; Tue, 10 Nov 2020 04:50:24 -0500 (EST) Received: from mailfrontend1 ([10.202.2.162]) by compute3.internal (MEProxy); Tue, 10 Nov 2020 04:50:24 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm1; bh=jD9noV40odXofwL5g939LNtJBSstCkliXYzfnO0JoKo=; b=G/msqlJp 
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com, Ido Schimmel
Subject: [PATCH net-next 05/15] mlxsw: spectrum_router: Push out RALUE pack into separate helper
Date: Tue, 10 Nov 2020 11:48:50 +0200
Message-Id: <20201110094900.1920158-6-idosch@idosch.org>
In-Reply-To: <20201110094900.1920158-1-idosch@idosch.org>
References: <20201110094900.1920158-1-idosch@idosch.org>

From: Jiri Pirko

As the RALUE packing is going to be pushed into an op, in preparation for that, push the code into a separate function in the meantime.
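For orientation only, a minimal standalone C sketch of the refactoring pattern this patch applies. The names here (fib_entry_sketch, entry_pack_raw, entry_pack) are made up for illustration; the actual split into mlxsw_sp_fib_entry_ralue_pack() and mlxsw_sp_fib_entry_pack() is in the diff below.

/* Illustration only: simplified stand-ins, not the driver's real types. */
struct fib_key_sketch {
	unsigned char addr[16];
	unsigned char prefix_len;
};

struct fib_entry_sketch {
	struct fib_key_sketch key;
	unsigned short vr_id;
};

/* Low-level pack: works on explicit parameters only, so it can later be
 * hidden behind a per-register-set op.
 */
void entry_pack_raw(char *payload, unsigned short vr_id,
		    unsigned char prefix_len, const unsigned char *addr)
{
	/* ... fill the register payload from the explicit arguments ... */
	(void)payload; (void)vr_id; (void)prefix_len; (void)addr;
}

/* Wrapper that digs the arguments out of the FIB entry, so callers stop
 * open-coding the field accesses.
 */
void entry_pack(char *payload, const struct fib_entry_sketch *entry)
{
	entry_pack_raw(payload, entry->vr_id, entry->key.prefix_len,
		       entry->key.addr);
}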
Signed-off-by: Jiri Pirko Signed-off-by: Ido Schimmel --- .../ethernet/mellanox/mlxsw/spectrum_router.c | 49 +++++++++++-------- 1 file changed, 29 insertions(+), 20 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index 9083c74c1904..cf186f1ff3f6 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -4308,16 +4308,15 @@ mlxsw_sp_fib_entry_hw_flags_refresh(struct mlxsw_sp *mlxsw_sp, } static void -mlxsw_sp_fib_entry_ralue_pack(char *ralue_pl, - const struct mlxsw_sp_fib_entry *fib_entry, - enum mlxsw_sp_fib_entry_op op) +mlxsw_sp_fib_entry_ralue_pack(char *ralue_pl, enum mlxsw_sp_l3proto proto, + enum mlxsw_sp_fib_entry_op op, u16 virtual_router, + u8 prefix_len, unsigned char *addr) { - struct mlxsw_sp_fib *fib = fib_entry->fib_node->fib; - enum mlxsw_reg_ralxx_protocol proto; + enum mlxsw_reg_ralxx_protocol ralxx_proto; enum mlxsw_reg_ralue_op ralue_op; u32 *p_dip; - proto = (enum mlxsw_reg_ralxx_protocol) fib->proto; + ralxx_proto = (enum mlxsw_reg_ralxx_protocol) proto; switch (op) { case MLXSW_SP_FIB_ENTRY_OP_WRITE: @@ -4331,21 +4330,31 @@ mlxsw_sp_fib_entry_ralue_pack(char *ralue_pl, return; } - switch (fib->proto) { + switch (proto) { case MLXSW_SP_L3_PROTO_IPV4: - p_dip = (u32 *) fib_entry->fib_node->key.addr; - mlxsw_reg_ralue_pack4(ralue_pl, proto, ralue_op, fib->vr->id, - fib_entry->fib_node->key.prefix_len, - *p_dip); + p_dip = (u32 *) addr; + mlxsw_reg_ralue_pack4(ralue_pl, ralxx_proto, ralue_op, + virtual_router, prefix_len, *p_dip); break; case MLXSW_SP_L3_PROTO_IPV6: - mlxsw_reg_ralue_pack6(ralue_pl, proto, ralue_op, fib->vr->id, - fib_entry->fib_node->key.prefix_len, - fib_entry->fib_node->key.addr); + mlxsw_reg_ralue_pack6(ralue_pl, ralxx_proto, ralue_op, + virtual_router, prefix_len, addr); break; } } +static void mlxsw_sp_fib_entry_pack(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + struct mlxsw_sp_fib_entry *fib_entry, + enum mlxsw_sp_fib_entry_op op) +{ + struct mlxsw_sp_fib *fib = fib_entry->fib_node->fib; + + mlxsw_sp_fib_entry_ralue_pack(op_ctx->ralue_pl, fib->proto, op, + fib->vr->id, + fib_entry->fib_node->key.prefix_len, + fib_entry->fib_node->key.addr); +} + static int mlxsw_sp_adj_discard_write(struct mlxsw_sp *mlxsw_sp, u16 rif_index) { enum mlxsw_reg_ratr_trap_action trap_action; @@ -4414,7 +4423,7 @@ static int mlxsw_sp_fib_entry_op_remote(struct mlxsw_sp *mlxsw_sp, trap_id = MLXSW_TRAP_ID_RTR_INGRESS0; } - mlxsw_sp_fib_entry_ralue_pack(ralue_pl, fib_entry, op); + mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); mlxsw_reg_ralue_act_remote_pack(ralue_pl, trap_action, trap_id, adjacency_index, ecmp_size); return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), ralue_pl); @@ -4439,7 +4448,7 @@ static int mlxsw_sp_fib_entry_op_local(struct mlxsw_sp *mlxsw_sp, trap_id = MLXSW_TRAP_ID_RTR_INGRESS0; } - mlxsw_sp_fib_entry_ralue_pack(ralue_pl, fib_entry, op); + mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); mlxsw_reg_ralue_act_local_pack(ralue_pl, trap_action, trap_id, rif_index); return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), ralue_pl); @@ -4452,7 +4461,7 @@ static int mlxsw_sp_fib_entry_op_trap(struct mlxsw_sp *mlxsw_sp, { char *ralue_pl = op_ctx->ralue_pl; - mlxsw_sp_fib_entry_ralue_pack(ralue_pl, fib_entry, op); + mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); mlxsw_reg_ralue_act_ip2me_pack(ralue_pl); return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), ralue_pl); } @@ -4466,7 +4475,7 @@ 
static int mlxsw_sp_fib_entry_op_blackhole(struct mlxsw_sp *mlxsw_sp, char *ralue_pl = op_ctx->ralue_pl; trap_action = MLXSW_REG_RALUE_TRAP_ACTION_DISCARD_ERROR; - mlxsw_sp_fib_entry_ralue_pack(ralue_pl, fib_entry, op); + mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); mlxsw_reg_ralue_act_local_pack(ralue_pl, trap_action, 0, 0); return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), ralue_pl); } @@ -4484,7 +4493,7 @@ mlxsw_sp_fib_entry_op_unreachable(struct mlxsw_sp *mlxsw_sp, trap_action = MLXSW_REG_RALUE_TRAP_ACTION_TRAP; trap_id = MLXSW_TRAP_ID_RTR_INGRESS1; - mlxsw_sp_fib_entry_ralue_pack(ralue_pl, fib_entry, op); + mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); mlxsw_reg_ralue_act_local_pack(ralue_pl, trap_action, trap_id, 0); return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), ralue_pl); } @@ -4513,7 +4522,7 @@ static int mlxsw_sp_fib_entry_op_nve_decap(struct mlxsw_sp *mlxsw_sp, { char *ralue_pl = op_ctx->ralue_pl; - mlxsw_sp_fib_entry_ralue_pack(ralue_pl, fib_entry, op); + mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); mlxsw_reg_ralue_act_ip2me_tun_pack(ralue_pl, fib_entry->decap.tunnel_index); return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), ralue_pl); From patchwork Tue Nov 10 09:48:51 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ido Schimmel X-Patchwork-Id: 11893705 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9FC6AC56201 for ; Tue, 10 Nov 2020 09:51:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4E77E20781 for ; Tue, 10 Nov 2020 09:51:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730731AbgKJJuc (ORCPT ); Tue, 10 Nov 2020 04:50:32 -0500 Received: from wout3-smtp.messagingengine.com ([64.147.123.19]:35539 "EHLO wout3-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726467AbgKJJu0 (ORCPT ); Tue, 10 Nov 2020 04:50:26 -0500 Received: from compute3.internal (compute3.nyi.internal [10.202.2.43]) by mailout.west.internal (Postfix) with ESMTP id 8DE49DF9; Tue, 10 Nov 2020 04:50:25 -0500 (EST) Received: from mailfrontend1 ([10.202.2.162]) by compute3.internal (MEProxy); Tue, 10 Nov 2020 04:50:25 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm1; bh=OH7VIbh2tYdRjRF9xMm4i3XS38PbbYrLP9cFGsho9vs=; b=fQeDc9Pf bSI5j1sYamPC8UM5bp1pqaGiS3Nqn2g8fKBomAcKB5t1As7SqU5jlU2ZxoBcyhn+ PcJo+qGcvEbHj//hc92y4JgMPRO+uyk8lBxHlHHSlGWxsRdFNVEvOUJa2NrjStU8 IyDPiLPYJ4XpFmJIqbfcygLH/OaWcr6ze8VBv9+wIUhq08p0hGU/2xeo4gP90fH9 nNhL1vu/2oHo2gLg/2BkY2wmasToHlWNndGLgdZsQ14evGRFz/bB8nd65hx2C7h9 q8Y74BjqyBpgDMnd4aa9c3csZWAIpRtNU2B4nPwIOMZIXSoTYjmZn2X+IadRDHfm 64SOS0vQSDyvZA== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedruddujedgtdekucetufdoteggodetrfdotf 
fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen uceurghilhhouhhtmecufedttdenucenucfjughrpefhvffufffkofgjfhgggfestdekre dtredttdenucfhrhhomhepkfguohcuufgthhhimhhmvghluceoihguohhstghhsehiugho shgthhdrohhrgheqnecuggftrfgrthhtvghrnhepudetieevffffveelkeeljeffkefhke ehgfdtffethfelvdejgffghefgveejkefhnecukfhppeekgedrvddvledrudehgedrudeg jeenucevlhhushhtvghrufhiiigvpeegnecurfgrrhgrmhepmhgrihhlfhhrohhmpehiug hoshgthhesihguohhstghhrdhorhhg X-ME-Proxy: Received: from shredder.mtl.com (igld-84-229-154-147.inter.net.il [84.229.154.147]) by mail.messagingengine.com (Postfix) with ESMTPA id E40E0328005A; Tue, 10 Nov 2020 04:50:23 -0500 (EST) From: Ido Schimmel To: netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com, Ido Schimmel Subject: [PATCH net-next 06/15] mlxsw: spectrum: Export RALUE pack helper and use it from IPIP Date: Tue, 10 Nov 2020 11:48:51 +0200 Message-Id: <20201110094900.1920158-7-idosch@idosch.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201110094900.1920158-1-idosch@idosch.org> References: <20201110094900.1920158-1-idosch@idosch.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Jiri Pirko As the RALUE packing is going to be put into op, make the user from IPIP code use the same helper as the router code does. Signed-off-by: Jiri Pirko Signed-off-by: Ido Schimmel --- .../net/ethernet/mellanox/mlxsw/spectrum_ipip.c | 17 ++--------------- .../ethernet/mellanox/mlxsw/spectrum_router.c | 2 +- .../ethernet/mellanox/mlxsw/spectrum_router.h | 5 +++++ 3 files changed, 8 insertions(+), 16 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c index f8b9b5be8247..0f0064392468 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c @@ -188,22 +188,9 @@ mlxsw_sp_ipip_fib_entry_op_gre4_ralue(struct mlxsw_sp *mlxsw_sp, u32 tunnel_index) { char *ralue_pl = op_ctx->ralue_pl; - enum mlxsw_reg_ralue_op ralue_op; - - switch (op) { - case MLXSW_SP_FIB_ENTRY_OP_WRITE: - ralue_op = MLXSW_REG_RALUE_OP_WRITE_WRITE; - break; - case MLXSW_SP_FIB_ENTRY_OP_DELETE: - ralue_op = MLXSW_REG_RALUE_OP_WRITE_DELETE; - break; - default: - WARN_ON_ONCE(1); - return -EINVAL; - } - mlxsw_reg_ralue_pack4(ralue_pl, MLXSW_REG_RALXX_PROTOCOL_IPV4, ralue_op, - ul_vr_id, prefix_len, dip); + mlxsw_sp_fib_entry_ralue_pack(ralue_pl, MLXSW_SP_L3_PROTO_IPV4, op, + ul_vr_id, prefix_len, (unsigned char *) &dip); mlxsw_reg_ralue_act_ip2me_tun_pack(ralue_pl, tunnel_index); return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), ralue_pl); } diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index cf186f1ff3f6..3ed9bd4afe95 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -4307,7 +4307,7 @@ mlxsw_sp_fib_entry_hw_flags_refresh(struct mlxsw_sp *mlxsw_sp, } } -static void +void mlxsw_sp_fib_entry_ralue_pack(char *ralue_pl, enum mlxsw_sp_l3proto proto, enum mlxsw_sp_fib_entry_op op, u16 virtual_router, u8 prefix_len, unsigned char *addr) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h index 963825dff66b..1b071f872a3b 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h +++ 
b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h @@ -173,4 +173,9 @@ static inline bool mlxsw_sp_l3addr_eq(const union mlxsw_sp_l3addr *addr1, int mlxsw_sp_ipip_ecn_encap_init(struct mlxsw_sp *mlxsw_sp); int mlxsw_sp_ipip_ecn_decap_init(struct mlxsw_sp *mlxsw_sp); +void +mlxsw_sp_fib_entry_ralue_pack(char *ralue_pl, enum mlxsw_sp_l3proto proto, + enum mlxsw_sp_fib_entry_op op, u16 virtual_router, + u8 prefix_len, unsigned char *addr); + #endif /* _MLXSW_ROUTER_H_*/ From patchwork Tue Nov 10 09:48:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ido Schimmel X-Patchwork-Id: 11893703 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6A70EC4741F for ; Tue, 10 Nov 2020 09:51:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 18A9B20780 for ; Tue, 10 Nov 2020 09:51:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730623AbgKJJua (ORCPT ); Tue, 10 Nov 2020 04:50:30 -0500 Received: from wout3-smtp.messagingengine.com ([64.147.123.19]:56443 "EHLO wout3-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730361AbgKJJu2 (ORCPT ); Tue, 10 Nov 2020 04:50:28 -0500 Received: from compute3.internal (compute3.nyi.internal [10.202.2.43]) by mailout.west.internal (Postfix) with ESMTP id E38C9DE4; Tue, 10 Nov 2020 04:50:26 -0500 (EST) Received: from mailfrontend1 ([10.202.2.162]) by compute3.internal (MEProxy); Tue, 10 Nov 2020 04:50:27 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm1; bh=HOr8Er/Gz5QW6d8vyKNbJ9x7OdFjY3CwgKfb2QhrS5w=; b=mVk+BYLO jCtUSCQRiIaOn6jxPQCej3jIvv+wCqhD9NhokGCNb8gzNyb7ZRoafMIKXCAe/T2x oowYgDk2REWX5Z1shonwDbudxq8D+b3s8v7bQP8VTfQ/2o9sr897D4sFwhdCnDTA HNs94Iy/LaVkW1Sn4mqZdnvtVnXmUBmzaL6A8ncOWxlm1WuqMF1Yo1mcBMnit2z+ fxnU8VtvRIeDaiEo9nP4P3RydNv/Jh3iQ41fpya5T3z3y8RecomlQPJ/RTgWY0b4 FPnwUbVHeOnOHFh0P0nV7Zai9gA15uFYM2vZA0G7+aSj0OFlKdFiKXlrTMVHPO0o 1YNekcm21Qag9g== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedruddujedgtdekucetufdoteggodetrfdotf fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen uceurghilhhouhhtmecufedttdenucenucfjughrpefhvffufffkofgjfhgggfestdekre dtredttdenucfhrhhomhepkfguohcuufgthhhimhhmvghluceoihguohhstghhsehiugho shgthhdrohhrgheqnecuggftrfgrthhtvghrnhepudetieevffffveelkeeljeffkefhke ehgfdtffethfelvdejgffghefgveejkefhnecukfhppeekgedrvddvledrudehgedrudeg jeenucevlhhushhtvghrufhiiigvpeegnecurfgrrhgrmhepmhgrihhlfhhrohhmpehiug hoshgthhesihguohhstghhrdhorhhg X-ME-Proxy: Received: from shredder.mtl.com (igld-84-229-154-147.inter.net.il [84.229.154.147]) by mail.messagingengine.com (Postfix) with ESMTPA id 4E1C63280060; Tue, 10 Nov 2020 04:50:25 -0500 (EST) From: Ido Schimmel To: netdev@vger.kernel.org Cc: davem@davemloft.net, 
kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com, Ido Schimmel Subject: [PATCH net-next 07/15] mlxsw: spectrum_router: Pass destination IP as a pointer to mlxsw_reg_ralue_pack4() Date: Tue, 10 Nov 2020 11:48:52 +0200 Message-Id: <20201110094900.1920158-8-idosch@idosch.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201110094900.1920158-1-idosch@idosch.org> References: <20201110094900.1920158-1-idosch@idosch.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Jiri Pirko Instead of passing destination IP as a u32 value, pass it as pointer to u32. Avoid using local variable for the pointer store. Signed-off-by: Jiri Pirko Signed-off-by: Ido Schimmel --- drivers/net/ethernet/mellanox/mlxsw/reg.h | 4 ++-- drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c | 4 +--- 2 files changed, 3 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h index 73aab72877fd..0da9f7e1eb9b 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/reg.h +++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h @@ -7279,10 +7279,10 @@ static inline void mlxsw_reg_ralue_pack4(char *payload, enum mlxsw_reg_ralxx_protocol protocol, enum mlxsw_reg_ralue_op op, u16 virtual_router, u8 prefix_len, - u32 dip) + u32 *dip) { mlxsw_reg_ralue_pack(payload, protocol, op, virtual_router, prefix_len); - mlxsw_reg_ralue_dip4_set(payload, dip); + mlxsw_reg_ralue_dip4_set(payload, *dip); } static inline void mlxsw_reg_ralue_pack6(char *payload, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index 3ed9bd4afe95..4edb2eec8179 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -4314,7 +4314,6 @@ mlxsw_sp_fib_entry_ralue_pack(char *ralue_pl, enum mlxsw_sp_l3proto proto, { enum mlxsw_reg_ralxx_protocol ralxx_proto; enum mlxsw_reg_ralue_op ralue_op; - u32 *p_dip; ralxx_proto = (enum mlxsw_reg_ralxx_protocol) proto; @@ -4332,9 +4331,8 @@ mlxsw_sp_fib_entry_ralue_pack(char *ralue_pl, enum mlxsw_sp_l3proto proto, switch (proto) { case MLXSW_SP_L3_PROTO_IPV4: - p_dip = (u32 *) addr; mlxsw_reg_ralue_pack4(ralue_pl, ralxx_proto, ralue_op, - virtual_router, prefix_len, *p_dip); + virtual_router, prefix_len, (u32 *) addr); break; case MLXSW_SP_L3_PROTO_IPV6: mlxsw_reg_ralue_pack6(ralue_pl, ralxx_proto, ralue_op, From patchwork Tue Nov 10 09:48:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ido Schimmel X-Patchwork-Id: 11893709 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D1E74C55ABD for ; Tue, 10 Nov 2020 09:51:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8025320780 for ; Tue, 10 Nov 2020 09:51:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730864AbgKJJue (ORCPT ); Tue, 10 
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com, Ido Schimmel
Subject: [PATCH net-next 08/15] mlxsw: reg: Allow to pass NULL pointer to mlxsw_reg_ralue_pack4/6()
Date: Tue, 10 Nov 2020 11:48:53 +0200
Message-Id: <20201110094900.1920158-9-idosch@idosch.org>
In-Reply-To: <20201110094900.1920158-1-idosch@idosch.org>
References: <20201110094900.1920158-1-idosch@idosch.org>

From: Jiri Pirko

In preparation for the change that is going to be done in the next patch, allow to pass NULL pointer to mlxsw_reg_ralue_pack4() and mlxsw_reg_ralue_pack6() helpers.
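A tiny, self-contained sketch of the guard this enables, using a hypothetical simplified payload type (payload_sketch, pack4_sketch) instead of the real packed RALUE register buffer; the actual change is just the two "if (dip)" checks in the diff that follows.

#include <stdio.h>

/* Hypothetical stand-in for the packed register payload. */
struct payload_sketch {
	unsigned int dip4;
	int dip4_valid;
};

/* Pack helper that tolerates dip == NULL: callers that only care about the
 * virtual router and prefix length (e.g. a catch-all /0 route) skip the DIP.
 */
void pack4_sketch(struct payload_sketch *p, const unsigned int *dip)
{
	p->dip4 = 0;
	p->dip4_valid = 0;
	if (dip) {
		p->dip4 = *dip;
		p->dip4_valid = 1;
	}
}

int main(void)
{
	struct payload_sketch p;
	unsigned int dip = 0xc0a80001;	/* 192.168.0.1 */

	pack4_sketch(&p, &dip);		/* regular route: DIP is filled in */
	pack4_sketch(&p, NULL);		/* default route: DIP left untouched */
	printf("dip4_valid=%d\n", p.dip4_valid);
	return 0;
}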
Signed-off-by: Jiri Pirko Signed-off-by: Ido Schimmel --- drivers/net/ethernet/mellanox/mlxsw/reg.h | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h index 0da9f7e1eb9b..fcf9095b3f55 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/reg.h +++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h @@ -7282,7 +7282,8 @@ static inline void mlxsw_reg_ralue_pack4(char *payload, u32 *dip) { mlxsw_reg_ralue_pack(payload, protocol, op, virtual_router, prefix_len); - mlxsw_reg_ralue_dip4_set(payload, *dip); + if (dip) + mlxsw_reg_ralue_dip4_set(payload, *dip); } static inline void mlxsw_reg_ralue_pack6(char *payload, @@ -7292,7 +7293,8 @@ static inline void mlxsw_reg_ralue_pack6(char *payload, const void *dip) { mlxsw_reg_ralue_pack(payload, protocol, op, virtual_router, prefix_len); - mlxsw_reg_ralue_dip6_memcpy_to(payload, dip); + if (dip) + mlxsw_reg_ralue_dip6_memcpy_to(payload, dip); } static inline void

From patchwork Tue Nov 10 09:48:54 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 11893721
X-Patchwork-Delegate: kuba@kernel.org
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com, Ido Schimmel
Subject: [PATCH net-next 09/15] mlxsw: spectrum_router: Use RALUE pack helper from abort function
Date: Tue, 10 Nov 2020 11:48:54 +0200
Message-Id: <20201110094900.1920158-10-idosch@idosch.org>
In-Reply-To: <20201110094900.1920158-1-idosch@idosch.org>
References: <20201110094900.1920158-1-idosch@idosch.org>

From: Jiri Pirko

Unify the RALUE register payload packing and use the __mlxsw_sp_fib_entry_ralue_pack() helper from __mlxsw_sp_router_set_abort_trap().

Signed-off-by: Jiri Pirko Signed-off-by: Ido Schimmel --- drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index 4edb2eec8179..b0758c5c3490 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -5764,8 +5764,8 @@ static int __mlxsw_sp_router_set_abort_trap(struct mlxsw_sp *mlxsw_sp, if (err) return err; - mlxsw_reg_ralue_pack(ralue_pl, ralxx_proto, - MLXSW_REG_RALUE_OP_WRITE_WRITE, vr->id, 0); + mlxsw_sp_fib_entry_ralue_pack(ralue_pl, proto, + MLXSW_SP_FIB_ENTRY_OP_WRITE, vr->id, 0, NULL); mlxsw_reg_ralue_act_ip2me_pack(ralue_pl); err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), ralue_pl);

From patchwork Tue Nov 10 09:48:55 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 11893713
X-Patchwork-Delegate: kuba@kernel.org
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com, Ido Schimmel
Subject: [PATCH net-next 10/15] mlxsw: spectrum: Push RALUE packing and writing into low-level router ops
Date: Tue, 10 Nov 2020 11:48:55 +0200
Message-Id: <20201110094900.1920158-11-idosch@idosch.org>
In-Reply-To: <20201110094900.1920158-1-idosch@idosch.org>
References: <20201110094900.1920158-1-idosch@idosch.org>

From: Jiri Pirko

With the follow-up introduction of the XM implementation, the XMDR register is going to be optionally used instead of the RALUE register. Push the RALUE packing helpers and the write call into low-level router ops.
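As a rough, self-contained illustration of the indirection described above (hypothetical names — router_ll_ops_sketch, basic_pack, basic_commit — not the driver's real mlxsw_sp_router_ll_ops or register calls): packing and the final write are reached only through function pointers, so an XMDR-based implementation can later be added next to the RALUE-based one without touching the callers.

#include <stdio.h>

/* Hypothetical, simplified stand-in for struct mlxsw_sp_fib_entry_op_ctx. */
struct op_ctx_sketch {
	char payload[64];
};

/* The ops table: one implementation packs/commits through RALUE today,
 * another (e.g. XMDR-based) could be added without changing the callers.
 */
struct router_ll_ops_sketch {
	void (*fib_entry_pack)(struct op_ctx_sketch *ctx, int vr_id);
	int (*fib_entry_commit)(struct op_ctx_sketch *ctx);
};

static void basic_pack(struct op_ctx_sketch *ctx, int vr_id)
{
	/* Stands in for packing the RALUE register payload. */
	snprintf(ctx->payload, sizeof(ctx->payload), "RALUE vr=%d", vr_id);
}

static int basic_commit(struct op_ctx_sketch *ctx)
{
	/* Stands in for the register write of the packed payload. */
	printf("commit: %s\n", ctx->payload);
	return 0;
}

static const struct router_ll_ops_sketch basic_ops = {
	.fib_entry_pack = basic_pack,
	.fib_entry_commit = basic_commit,
};

int main(void)
{
	struct op_ctx_sketch ctx = { .payload = "" };

	/* Callers see only the ops table, mirroring how the driver now goes
	 * through ll_ops->fib_entry_pack()/fib_entry_commit().
	 */
	basic_ops.fib_entry_pack(&ctx, 1);
	return basic_ops.fib_entry_commit(&ctx);
}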
Signed-off-by: Jiri Pirko Signed-off-by: Ido Schimmel --- .../ethernet/mellanox/mlxsw/spectrum_ipip.c | 27 ++-- .../ethernet/mellanox/mlxsw/spectrum_ipip.h | 1 + .../ethernet/mellanox/mlxsw/spectrum_router.c | 115 ++++++++++++------ .../ethernet/mellanox/mlxsw/spectrum_router.h | 19 ++- 4 files changed, 107 insertions(+), 55 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c index 0f0064392468..3cea9ee5910d 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c @@ -181,21 +181,21 @@ mlxsw_sp_ipip_fib_entry_op_gre4_rtdp(struct mlxsw_sp *mlxsw_sp, } static int -mlxsw_sp_ipip_fib_entry_op_gre4_ralue(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_fib_entry_op_ctx *op_ctx, - u32 dip, u8 prefix_len, u16 ul_vr_id, - enum mlxsw_sp_fib_entry_op op, - u32 tunnel_index) +mlxsw_sp_ipip_fib_entry_op_gre4_do(struct mlxsw_sp *mlxsw_sp, + const struct mlxsw_sp_router_ll_ops *ll_ops, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + u32 dip, u8 prefix_len, u16 ul_vr_id, + enum mlxsw_sp_fib_entry_op op, + u32 tunnel_index) { - char *ralue_pl = op_ctx->ralue_pl; - - mlxsw_sp_fib_entry_ralue_pack(ralue_pl, MLXSW_SP_L3_PROTO_IPV4, op, - ul_vr_id, prefix_len, (unsigned char *) &dip); - mlxsw_reg_ralue_act_ip2me_tun_pack(ralue_pl, tunnel_index); - return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), ralue_pl); + ll_ops->fib_entry_pack(op_ctx, MLXSW_SP_L3_PROTO_IPV4, op, ul_vr_id, + prefix_len, (unsigned char *) &dip); + ll_ops->fib_entry_act_ip2me_tun_pack(op_ctx, tunnel_index); + return ll_ops->fib_entry_commit(mlxsw_sp, op_ctx); } static int mlxsw_sp_ipip_fib_entry_op_gre4(struct mlxsw_sp *mlxsw_sp, + const struct mlxsw_sp_router_ll_ops *ll_ops, struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_ipip_entry *ipip_entry, enum mlxsw_sp_fib_entry_op op, u32 tunnel_index) @@ -211,9 +211,8 @@ static int mlxsw_sp_ipip_fib_entry_op_gre4(struct mlxsw_sp *mlxsw_sp, dip = mlxsw_sp_ipip_netdev_saddr(MLXSW_SP_L3_PROTO_IPV4, ipip_entry->ol_dev).addr4; - return mlxsw_sp_ipip_fib_entry_op_gre4_ralue(mlxsw_sp, op_ctx, be32_to_cpu(dip), - 32, ul_vr_id, op, - tunnel_index); + return mlxsw_sp_ipip_fib_entry_op_gre4_do(mlxsw_sp, ll_ops, op_ctx, be32_to_cpu(dip), + 32, ul_vr_id, op, tunnel_index); } static bool mlxsw_sp_ipip_tunnel_complete(enum mlxsw_sp_l3proto proto, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h index dd53b1c207b3..fe9a94362e61 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h @@ -52,6 +52,7 @@ struct mlxsw_sp_ipip_ops { const struct net_device *ol_dev); int (*fib_entry_op)(struct mlxsw_sp *mlxsw_sp, + const struct mlxsw_sp_router_ll_ops *ll_ops, struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_ipip_entry *ipip_entry, enum mlxsw_sp_fib_entry_op op, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index b0758c5c3490..ede67a28f278 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -4307,12 +4307,15 @@ mlxsw_sp_fib_entry_hw_flags_refresh(struct mlxsw_sp *mlxsw_sp, } } -void -mlxsw_sp_fib_entry_ralue_pack(char *ralue_pl, enum mlxsw_sp_l3proto proto, - enum mlxsw_sp_fib_entry_op op, u16 virtual_router, - u8 prefix_len, unsigned char *addr) +static void 
+mlxsw_sp_router_ll_basic_fib_entry_pack(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + enum mlxsw_sp_l3proto proto, + enum mlxsw_sp_fib_entry_op op, + u16 virtual_router, u8 prefix_len, + unsigned char *addr) { enum mlxsw_reg_ralxx_protocol ralxx_proto; + char *ralue_pl = op_ctx->ralue_pl; enum mlxsw_reg_ralue_op ralue_op; ralxx_proto = (enum mlxsw_reg_ralxx_protocol) proto; @@ -4341,16 +4344,52 @@ mlxsw_sp_fib_entry_ralue_pack(char *ralue_pl, enum mlxsw_sp_l3proto proto, } } +static void +mlxsw_sp_router_ll_basic_fib_entry_act_remote_pack(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + enum mlxsw_reg_ralue_trap_action trap_action, + u16 trap_id, u32 adjacency_index, u16 ecmp_size) +{ + mlxsw_reg_ralue_act_remote_pack(op_ctx->ralue_pl, trap_action, trap_id, + adjacency_index, ecmp_size); +} + +static void +mlxsw_sp_router_ll_basic_fib_entry_act_local_pack(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + enum mlxsw_reg_ralue_trap_action trap_action, + u16 trap_id, u16 local_erif) +{ + mlxsw_reg_ralue_act_local_pack(op_ctx->ralue_pl, trap_action, trap_id, local_erif); +} + +static void +mlxsw_sp_router_ll_basic_fib_entry_act_ip2me_pack(struct mlxsw_sp_fib_entry_op_ctx *op_ctx) +{ + mlxsw_reg_ralue_act_ip2me_pack(op_ctx->ralue_pl); +} + +static void +mlxsw_sp_router_ll_basic_fib_entry_act_ip2me_tun_pack(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + u32 tunnel_ptr) +{ + mlxsw_reg_ralue_act_ip2me_tun_pack(op_ctx->ralue_pl, tunnel_ptr); +} + +static int +mlxsw_sp_router_ll_basic_fib_entry_commit(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx) +{ + return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), op_ctx->ralue_pl); +} + static void mlxsw_sp_fib_entry_pack(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { struct mlxsw_sp_fib *fib = fib_entry->fib_node->fib; - mlxsw_sp_fib_entry_ralue_pack(op_ctx->ralue_pl, fib->proto, op, - fib->vr->id, - fib_entry->fib_node->key.prefix_len, - fib_entry->fib_node->key.addr); + fib->ll_ops->fib_entry_pack(op_ctx, fib->proto, op, fib->vr->id, + fib_entry->fib_node->key.prefix_len, + fib_entry->fib_node->key.addr); } static int mlxsw_sp_adj_discard_write(struct mlxsw_sp *mlxsw_sp, u16 rif_index) @@ -4391,9 +4430,9 @@ static int mlxsw_sp_fib_entry_op_remote(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { + const struct mlxsw_sp_router_ll_ops *ll_ops = fib_entry->fib_node->fib->ll_ops; struct mlxsw_sp_nexthop_group *nh_group = fib_entry->nh_group; enum mlxsw_reg_ralue_trap_action trap_action; - char *ralue_pl = op_ctx->ralue_pl; u16 trap_id = 0; u32 adjacency_index = 0; u16 ecmp_size = 0; @@ -4422,9 +4461,9 @@ static int mlxsw_sp_fib_entry_op_remote(struct mlxsw_sp *mlxsw_sp, } mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); - mlxsw_reg_ralue_act_remote_pack(ralue_pl, trap_action, trap_id, - adjacency_index, ecmp_size); - return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), ralue_pl); + ll_ops->fib_entry_act_remote_pack(op_ctx, trap_action, trap_id, + adjacency_index, ecmp_size); + return ll_ops->fib_entry_commit(mlxsw_sp, op_ctx); } static int mlxsw_sp_fib_entry_op_local(struct mlxsw_sp *mlxsw_sp, @@ -4432,9 +4471,9 @@ static int mlxsw_sp_fib_entry_op_local(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { + const struct mlxsw_sp_router_ll_ops *ll_ops = fib_entry->fib_node->fib->ll_ops; struct mlxsw_sp_rif *rif = fib_entry->nh_group->nh_rif; enum mlxsw_reg_ralue_trap_action trap_action; - 
char *ralue_pl = op_ctx->ralue_pl; u16 trap_id = 0; u16 rif_index = 0; @@ -4447,9 +4486,8 @@ static int mlxsw_sp_fib_entry_op_local(struct mlxsw_sp *mlxsw_sp, } mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); - mlxsw_reg_ralue_act_local_pack(ralue_pl, trap_action, trap_id, - rif_index); - return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), ralue_pl); + ll_ops->fib_entry_act_local_pack(op_ctx, trap_action, trap_id, rif_index); + return ll_ops->fib_entry_commit(mlxsw_sp, op_ctx); } static int mlxsw_sp_fib_entry_op_trap(struct mlxsw_sp *mlxsw_sp, @@ -4457,11 +4495,11 @@ static int mlxsw_sp_fib_entry_op_trap(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { - char *ralue_pl = op_ctx->ralue_pl; + const struct mlxsw_sp_router_ll_ops *ll_ops = fib_entry->fib_node->fib->ll_ops; mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); - mlxsw_reg_ralue_act_ip2me_pack(ralue_pl); - return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), ralue_pl); + ll_ops->fib_entry_act_ip2me_pack(op_ctx); + return ll_ops->fib_entry_commit(mlxsw_sp, op_ctx); } static int mlxsw_sp_fib_entry_op_blackhole(struct mlxsw_sp *mlxsw_sp, @@ -4469,13 +4507,13 @@ static int mlxsw_sp_fib_entry_op_blackhole(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { + const struct mlxsw_sp_router_ll_ops *ll_ops = fib_entry->fib_node->fib->ll_ops; enum mlxsw_reg_ralue_trap_action trap_action; - char *ralue_pl = op_ctx->ralue_pl; trap_action = MLXSW_REG_RALUE_TRAP_ACTION_DISCARD_ERROR; mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); - mlxsw_reg_ralue_act_local_pack(ralue_pl, trap_action, 0, 0); - return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), ralue_pl); + ll_ops->fib_entry_act_local_pack(op_ctx, trap_action, 0, 0); + return ll_ops->fib_entry_commit(mlxsw_sp, op_ctx); } static int @@ -4484,16 +4522,16 @@ mlxsw_sp_fib_entry_op_unreachable(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { + const struct mlxsw_sp_router_ll_ops *ll_ops = fib_entry->fib_node->fib->ll_ops; enum mlxsw_reg_ralue_trap_action trap_action; - char *ralue_pl = op_ctx->ralue_pl; u16 trap_id; trap_action = MLXSW_REG_RALUE_TRAP_ACTION_TRAP; trap_id = MLXSW_TRAP_ID_RTR_INGRESS1; mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); - mlxsw_reg_ralue_act_local_pack(ralue_pl, trap_action, trap_id, 0); - return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), ralue_pl); + ll_ops->fib_entry_act_local_pack(op_ctx, trap_action, trap_id, 0); + return ll_ops->fib_entry_commit(mlxsw_sp, op_ctx); } static int @@ -4502,6 +4540,7 @@ mlxsw_sp_fib_entry_op_ipip_decap(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { + const struct mlxsw_sp_router_ll_ops *ll_ops = fib_entry->fib_node->fib->ll_ops; struct mlxsw_sp_ipip_entry *ipip_entry = fib_entry->decap.ipip_entry; const struct mlxsw_sp_ipip_ops *ipip_ops; @@ -4509,7 +4548,7 @@ mlxsw_sp_fib_entry_op_ipip_decap(struct mlxsw_sp *mlxsw_sp, return -EINVAL; ipip_ops = mlxsw_sp->router->ipip_ops_arr[ipip_entry->ipipt]; - return ipip_ops->fib_entry_op(mlxsw_sp, op_ctx, ipip_entry, op, + return ipip_ops->fib_entry_op(mlxsw_sp, ll_ops, op_ctx, ipip_entry, op, fib_entry->decap.tunnel_index); } @@ -4518,12 +4557,12 @@ static int mlxsw_sp_fib_entry_op_nve_decap(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) { - char *ralue_pl = op_ctx->ralue_pl; + const struct mlxsw_sp_router_ll_ops *ll_ops = 
fib_entry->fib_node->fib->ll_ops; mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); - mlxsw_reg_ralue_act_ip2me_tun_pack(ralue_pl, - fib_entry->decap.tunnel_index); - return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), ralue_pl); + ll_ops->fib_entry_act_ip2me_tun_pack(op_ctx, + fib_entry->decap.tunnel_index); + return ll_ops->fib_entry_commit(mlxsw_sp, op_ctx); } static int __mlxsw_sp_fib_entry_op(struct mlxsw_sp *mlxsw_sp, @@ -5757,18 +5796,16 @@ static int __mlxsw_sp_router_set_abort_trap(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_vr *vr = &mlxsw_sp->router->vrs[i]; struct mlxsw_sp_fib_entry_op_ctx op_ctx = {}; char xraltb_pl[MLXSW_REG_XRALTB_LEN]; - char *ralue_pl = op_ctx.ralue_pl; mlxsw_reg_xraltb_pack(xraltb_pl, vr->id, ralxx_proto, tree_id); err = ll_ops->raltb_write(mlxsw_sp, xraltb_pl); if (err) return err; - mlxsw_sp_fib_entry_ralue_pack(ralue_pl, proto, - MLXSW_SP_FIB_ENTRY_OP_WRITE, vr->id, 0, NULL); - mlxsw_reg_ralue_act_ip2me_pack(ralue_pl); - err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), - ralue_pl); + ll_ops->fib_entry_pack(&op_ctx, proto, MLXSW_SP_FIB_ENTRY_OP_WRITE, + vr->id, 0, NULL); + ll_ops->fib_entry_act_ip2me_pack(&op_ctx); + err = ll_ops->fib_entry_commit(mlxsw_sp, &op_ctx); if (err) return err; } @@ -8165,6 +8202,12 @@ static const struct mlxsw_sp_router_ll_ops mlxsw_sp_router_ll_basic_ops = { .ralta_write = mlxsw_sp_router_ll_basic_ralta_write, .ralst_write = mlxsw_sp_router_ll_basic_ralst_write, .raltb_write = mlxsw_sp_router_ll_basic_raltb_write, + .fib_entry_pack = mlxsw_sp_router_ll_basic_fib_entry_pack, + .fib_entry_act_remote_pack = mlxsw_sp_router_ll_basic_fib_entry_act_remote_pack, + .fib_entry_act_local_pack = mlxsw_sp_router_ll_basic_fib_entry_act_local_pack, + .fib_entry_act_ip2me_pack = mlxsw_sp_router_ll_basic_fib_entry_act_ip2me_pack, + .fib_entry_act_ip2me_tun_pack = mlxsw_sp_router_ll_basic_fib_entry_act_ip2me_tun_pack, + .fib_entry_commit = mlxsw_sp_router_ll_basic_fib_entry_commit, }; int mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h index 1b071f872a3b..2f700ad74385 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h @@ -71,6 +71,20 @@ struct mlxsw_sp_router_ll_ops { int (*ralta_write)(struct mlxsw_sp *mlxsw_sp, char *xralta_pl); int (*ralst_write)(struct mlxsw_sp *mlxsw_sp, char *xralst_pl); int (*raltb_write)(struct mlxsw_sp *mlxsw_sp, char *xraltb_pl); + void (*fib_entry_pack)(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + enum mlxsw_sp_l3proto proto, enum mlxsw_sp_fib_entry_op op, + u16 virtual_router, u8 prefix_len, unsigned char *addr); + void (*fib_entry_act_remote_pack)(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + enum mlxsw_reg_ralue_trap_action trap_action, + u16 trap_id, u32 adjacency_index, u16 ecmp_size); + void (*fib_entry_act_local_pack)(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + enum mlxsw_reg_ralue_trap_action trap_action, + u16 trap_id, u16 local_erif); + void (*fib_entry_act_ip2me_pack)(struct mlxsw_sp_fib_entry_op_ctx *op_ctx); + void (*fib_entry_act_ip2me_tun_pack)(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + u32 tunnel_ptr); + int (*fib_entry_commit)(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx); }; struct mlxsw_sp_rif_ipip_lb; @@ -173,9 +187,4 @@ static inline bool mlxsw_sp_l3addr_eq(const union mlxsw_sp_l3addr *addr1, int mlxsw_sp_ipip_ecn_encap_init(struct mlxsw_sp *mlxsw_sp); int 
mlxsw_sp_ipip_ecn_decap_init(struct mlxsw_sp *mlxsw_sp); -void -mlxsw_sp_fib_entry_ralue_pack(char *ralue_pl, enum mlxsw_sp_l3proto proto, - enum mlxsw_sp_fib_entry_op op, u16 virtual_router, - u8 prefix_len, unsigned char *addr); - #endif /* _MLXSW_ROUTER_H_*/ From patchwork Tue Nov 10 09:48:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ido Schimmel X-Patchwork-Id: 11893697 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 882F8C61DD8 for ; Tue, 10 Nov 2020 09:51:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F298C20780 for ; Tue, 10 Nov 2020 09:51:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731114AbgKJJui (ORCPT ); Tue, 10 Nov 2020 04:50:38 -0500 Received: from wout3-smtp.messagingengine.com ([64.147.123.19]:38411 "EHLO wout3-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726467AbgKJJud (ORCPT ); Tue, 10 Nov 2020 04:50:33 -0500 Received: from compute3.internal (compute3.nyi.internal [10.202.2.43]) by mailout.west.internal (Postfix) with ESMTP id 707A2514; Tue, 10 Nov 2020 04:50:32 -0500 (EST) Received: from mailfrontend1 ([10.202.2.162]) by compute3.internal (MEProxy); Tue, 10 Nov 2020 04:50:32 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm1; bh=n6S3mJR7nECMWWNumQQiL/YstAEvycM/xAqQ/0yr9NU=; b=bl4YsInd fYtq29Epj/MlJzJ3+napcCK5iUWsm3lgoayYaCeLh3P8M6PNpsZJCz3vjHfVWL55 WCbOYFvRKwUmp0B/7YG058KruWKcHdwi0bHsm16caPVNHSM7msyAYjNpZZsD4VHK QPADvktPH22l/tOl6ws7IRIfQcmFynNw+f69zFTmAss1LtdLljAV/gn7DxEimdC2 58FTP64yZH6mAxCeBZZMmDMhcCDtL1UODwfumsqprpXvbBZK2h2yPvef9vdx4+8I z/RzDvB5hZHsVATotfBnLFm6aURgSlxJ2GzinjWBZbxBKTY2pcvIlQ7m3rvPj5Z1 0uidilRXfEp41Q== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedruddujedgtdekucetufdoteggodetrfdotf fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen uceurghilhhouhhtmecufedttdenucenucfjughrpefhvffufffkofgjfhgggfestdekre dtredttdenucfhrhhomhepkfguohcuufgthhhimhhmvghluceoihguohhstghhsehiugho shgthhdrohhrgheqnecuggftrfgrthhtvghrnhepudetieevffffveelkeeljeffkefhke ehgfdtffethfelvdejgffghefgveejkefhnecukfhppeekgedrvddvledrudehgedrudeg jeenucevlhhushhtvghrufhiiigvpeeknecurfgrrhgrmhepmhgrihhlfhhrohhmpehiug hoshgthhesihguohhstghhrdhorhhg X-ME-Proxy: Received: from shredder.mtl.com (igld-84-229-154-147.inter.net.il [84.229.154.147]) by mail.messagingengine.com (Postfix) with ESMTPA id C8CD53280060; Tue, 10 Nov 2020 04:50:30 -0500 (EST) From: Ido Schimmel To: netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com, Ido Schimmel Subject: [PATCH net-next 11/15] mlxsw: spectrum_router: Prepare work context for possible bulking Date: Tue, 10 Nov 2020 11:48:56 +0200 Message-Id: 
<20201110094900.1920158-12-idosch@idosch.org>
In-Reply-To: <20201110094900.1920158-1-idosch@idosch.org>
References: <20201110094900.1920158-1-idosch@idosch.org>

From: Jiri Pirko

For the XMDR register it is possible to carry multiple FIB entry operations in a single write. Although the FW does not restrict mixing the types of operations, keep the code simple and indicate that bulking is OK only in case the bulk contains FIB operations of the same family and event.

Signed-off-by: Jiri Pirko Signed-off-by: Ido Schimmel --- .../net/ethernet/mellanox/mlxsw/spectrum_router.c | 15 +++++++++++++-- .../net/ethernet/mellanox/mlxsw/spectrum_router.h | 1 + 2 files changed, 14 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index ede67a28f278..39c04e45f253 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -6191,14 +6191,25 @@ static void mlxsw_sp_router_fib_event_work(struct work_struct *work) struct mlxsw_sp_router *router = container_of(work, struct mlxsw_sp_router, fib_event_work); struct mlxsw_sp_fib_entry_op_ctx op_ctx = {}; struct mlxsw_sp *mlxsw_sp = router->mlxsw_sp; - struct mlxsw_sp_fib_event *fib_event, *tmp; + struct mlxsw_sp_fib_event *next_fib_event; + struct mlxsw_sp_fib_event *fib_event; LIST_HEAD(fib_event_queue); spin_lock_bh(&router->fib_event_queue_lock); list_splice_init(&router->fib_event_queue, &fib_event_queue); spin_unlock_bh(&router->fib_event_queue_lock); - list_for_each_entry_safe(fib_event, tmp, &fib_event_queue, list) { + list_for_each_entry_safe(fib_event, next_fib_event, + &fib_event_queue, list) { + /* Check if the next entry in the queue exists and it is + * of the same type (family and event) as the currect one. + * In that case it is permitted to do the bulking + * of multiple FIB entries to a single register write.
+ */ + op_ctx.bulk_ok = !list_is_last(&fib_event->list, &fib_event_queue) && + fib_event->family == next_fib_event->family && + fib_event->event == next_fib_event->event; + switch (fib_event->family) { case AF_INET: mlxsw_sp_router_fib4_event_process(mlxsw_sp, &op_ctx, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h index 2f700ad74385..859a5c5d51d0 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h @@ -61,6 +61,7 @@ enum mlxsw_sp_fib_entry_op { }; struct mlxsw_sp_fib_entry_op_ctx { + u8 bulk_ok:1; char ralue_pl[MLXSW_REG_RALUE_LEN]; }; From patchwork Tue Nov 10 09:48:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ido Schimmel X-Patchwork-Id: 11893717 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0374AC56202 for ; Tue, 10 Nov 2020 09:51:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B4E9B20781 for ; Tue, 10 Nov 2020 09:51:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731008AbgKJJui (ORCPT ); Tue, 10 Nov 2020 04:50:38 -0500 Received: from wout3-smtp.messagingengine.com ([64.147.123.19]:41707 "EHLO wout3-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730876AbgKJJuf (ORCPT ); Tue, 10 Nov 2020 04:50:35 -0500 Received: from compute3.internal (compute3.nyi.internal [10.202.2.43]) by mailout.west.internal (Postfix) with ESMTP id D2796DC3; Tue, 10 Nov 2020 04:50:33 -0500 (EST) Received: from mailfrontend1 ([10.202.2.162]) by compute3.internal (MEProxy); Tue, 10 Nov 2020 04:50:34 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm1; bh=MtCm8Fu8+tpYuE2BMmC/ptF5pleJSEXo0nisauaye/8=; b=RASssUTD cxtyVIKeCJ1V3cXyxGNyx7WZhcWQ3MQgAtd0I6Jh1fcnDSSuySodInJt1xqV1gIZ rm3qHmgU8vlAiWtFKChe7omcoaq3NSie7wr3I2TWGv9o6F+TV5QEfzH1w0Oe/UHj DgiLyqJs8uZBbjwpcQCtEU72J33KQofxTh2CxnOewgkeKGdt9iIFF9at8lk7eTxV If5FoPuf/ebMqw5KOsnS04SICnk/fpQBip9lQ6uFgKnXsvofZQ0tviMbZf3nyg0t EXeUgisDt/wZzXLWK4Q3LtYo9zgCLZJlhU0J08ounybs4DjvPMdSpvbB0S0sjxfS ZUmvhGQdAKFPCw== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedruddujedgtdekucetufdoteggodetrfdotf fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen uceurghilhhouhhtmecufedttdenucenucfjughrpefhvffufffkofgjfhgggfestdekre dtredttdenucfhrhhomhepkfguohcuufgthhhimhhmvghluceoihguohhstghhsehiugho shgthhdrohhrgheqnecuggftrfgrthhtvghrnhepudetieevffffveelkeeljeffkefhke ehgfdtffethfelvdejgffghefgveejkefhnecukfhppeekgedrvddvledrudehgedrudeg jeenucevlhhushhtvghrufhiiigvpeeknecurfgrrhgrmhepmhgrihhlfhhrohhmpehiug hoshgthhesihguohhstghhrdhorhhg X-ME-Proxy: Received: from shredder.mtl.com 
(igld-84-229-154-147.inter.net.il [84.229.154.147]) by mail.messagingengine.com (Postfix) with ESMTPA id 304243280068; Tue, 10 Nov 2020 04:50:32 -0500 (EST) From: Ido Schimmel To: netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com, Ido Schimmel Subject: [PATCH net-next 12/15] mlxsw: spectrum_router: Have FIB entry op context allocated for the instance Date: Tue, 10 Nov 2020 11:48:57 +0200 Message-Id: <20201110094900.1920158-13-idosch@idosch.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201110094900.1920158-1-idosch@idosch.org> References: <20201110094900.1920158-1-idosch@idosch.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Jiri Pirko Get the max size needed for FIB entry op context and allocate it once for the instance. Use it repeatedly from the scheduled work. By this, allow to extend the context to hold more data than it is wise to do when it was on the stack. Make sure to signalize that the context needs to be initialized in case families of subsequent FIB entries differ. Signed-off-by: Jiri Pirko Signed-off-by: Ido Schimmel --- .../ethernet/mellanox/mlxsw/spectrum_router.c | 120 ++++++++++++++---- .../ethernet/mellanox/mlxsw/spectrum_router.h | 24 +++- 2 files changed, 114 insertions(+), 30 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index 39c04e45f253..43a4b6a34940 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -4307,6 +4307,10 @@ mlxsw_sp_fib_entry_hw_flags_refresh(struct mlxsw_sp *mlxsw_sp, } } +struct mlxsw_sp_fib_entry_op_ctx_basic { + char ralue_pl[MLXSW_REG_RALUE_LEN]; +}; + static void mlxsw_sp_router_ll_basic_fib_entry_pack(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, enum mlxsw_sp_l3proto proto, @@ -4314,8 +4318,9 @@ mlxsw_sp_router_ll_basic_fib_entry_pack(struct mlxsw_sp_fib_entry_op_ctx *op_ctx u16 virtual_router, u8 prefix_len, unsigned char *addr) { + struct mlxsw_sp_fib_entry_op_ctx_basic *op_ctx_basic = (void *) op_ctx->ll_priv; enum mlxsw_reg_ralxx_protocol ralxx_proto; - char *ralue_pl = op_ctx->ralue_pl; + char *ralue_pl = op_ctx_basic->ralue_pl; enum mlxsw_reg_ralue_op ralue_op; ralxx_proto = (enum mlxsw_reg_ralxx_protocol) proto; @@ -4349,8 +4354,10 @@ mlxsw_sp_router_ll_basic_fib_entry_act_remote_pack(struct mlxsw_sp_fib_entry_op_ enum mlxsw_reg_ralue_trap_action trap_action, u16 trap_id, u32 adjacency_index, u16 ecmp_size) { - mlxsw_reg_ralue_act_remote_pack(op_ctx->ralue_pl, trap_action, trap_id, - adjacency_index, ecmp_size); + struct mlxsw_sp_fib_entry_op_ctx_basic *op_ctx_basic = (void *) op_ctx->ll_priv; + + mlxsw_reg_ralue_act_remote_pack(op_ctx_basic->ralue_pl, trap_action, + trap_id, adjacency_index, ecmp_size); } static void @@ -4358,27 +4365,37 @@ mlxsw_sp_router_ll_basic_fib_entry_act_local_pack(struct mlxsw_sp_fib_entry_op_c enum mlxsw_reg_ralue_trap_action trap_action, u16 trap_id, u16 local_erif) { - mlxsw_reg_ralue_act_local_pack(op_ctx->ralue_pl, trap_action, trap_id, local_erif); + struct mlxsw_sp_fib_entry_op_ctx_basic *op_ctx_basic = (void *) op_ctx->ll_priv; + + mlxsw_reg_ralue_act_local_pack(op_ctx_basic->ralue_pl, trap_action, + trap_id, local_erif); } static void mlxsw_sp_router_ll_basic_fib_entry_act_ip2me_pack(struct mlxsw_sp_fib_entry_op_ctx *op_ctx) { - mlxsw_reg_ralue_act_ip2me_pack(op_ctx->ralue_pl); + struct 
mlxsw_sp_fib_entry_op_ctx_basic *op_ctx_basic = (void *) op_ctx->ll_priv; + + mlxsw_reg_ralue_act_ip2me_pack(op_ctx_basic->ralue_pl); } static void mlxsw_sp_router_ll_basic_fib_entry_act_ip2me_tun_pack(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, u32 tunnel_ptr) { - mlxsw_reg_ralue_act_ip2me_tun_pack(op_ctx->ralue_pl, tunnel_ptr); + struct mlxsw_sp_fib_entry_op_ctx_basic *op_ctx_basic = (void *) op_ctx->ll_priv; + + mlxsw_reg_ralue_act_ip2me_tun_pack(op_ctx_basic->ralue_pl, tunnel_ptr); } static int mlxsw_sp_router_ll_basic_fib_entry_commit(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry_op_ctx *op_ctx) { - return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), op_ctx->ralue_pl); + struct mlxsw_sp_fib_entry_op_ctx_basic *op_ctx_basic = (void *) op_ctx->ll_priv; + + return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ralue), + op_ctx_basic->ralue_pl); } static void mlxsw_sp_fib_entry_pack(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, @@ -4615,9 +4632,10 @@ static int __mlxsw_sp_fib_entry_update(struct mlxsw_sp *mlxsw_sp, static int mlxsw_sp_fib_entry_update(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry) { - struct mlxsw_sp_fib_entry_op_ctx op_ctx = {}; + struct mlxsw_sp_fib_entry_op_ctx *op_ctx = mlxsw_sp->router->ll_op_ctx; - return __mlxsw_sp_fib_entry_update(mlxsw_sp, &op_ctx, fib_entry); + mlxsw_sp_fib_entry_op_ctx_clear(op_ctx); + return __mlxsw_sp_fib_entry_update(mlxsw_sp, op_ctx, fib_entry); } static int mlxsw_sp_fib_entry_del(struct mlxsw_sp *mlxsw_sp, @@ -5012,9 +5030,10 @@ static void __mlxsw_sp_fib_node_entry_unlink(struct mlxsw_sp *mlxsw_sp, static void mlxsw_sp_fib_node_entry_unlink(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry) { - struct mlxsw_sp_fib_entry_op_ctx op_ctx = {}; + struct mlxsw_sp_fib_entry_op_ctx *op_ctx = mlxsw_sp->router->ll_op_ctx; - __mlxsw_sp_fib_node_entry_unlink(mlxsw_sp, &op_ctx, fib_entry); + mlxsw_sp_fib_entry_op_ctx_clear(op_ctx); + __mlxsw_sp_fib_node_entry_unlink(mlxsw_sp, op_ctx, fib_entry); } static bool mlxsw_sp_fib4_allow_replace(struct mlxsw_sp_fib4_entry *fib4_entry) @@ -5793,19 +5812,20 @@ static int __mlxsw_sp_router_set_abort_trap(struct mlxsw_sp *mlxsw_sp, return err; for (i = 0; i < MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_VRS); i++) { + struct mlxsw_sp_fib_entry_op_ctx *op_ctx = mlxsw_sp->router->ll_op_ctx; struct mlxsw_sp_vr *vr = &mlxsw_sp->router->vrs[i]; - struct mlxsw_sp_fib_entry_op_ctx op_ctx = {}; char xraltb_pl[MLXSW_REG_XRALTB_LEN]; + mlxsw_sp_fib_entry_op_ctx_clear(op_ctx); mlxsw_reg_xraltb_pack(xraltb_pl, vr->id, ralxx_proto, tree_id); err = ll_ops->raltb_write(mlxsw_sp, xraltb_pl); if (err) return err; - ll_ops->fib_entry_pack(&op_ctx, proto, MLXSW_SP_FIB_ENTRY_OP_WRITE, + ll_ops->fib_entry_pack(op_ctx, proto, MLXSW_SP_FIB_ENTRY_OP_WRITE, vr->id, 0, NULL); - ll_ops->fib_entry_act_ip2me_pack(&op_ctx); - err = ll_ops->fib_entry_commit(mlxsw_sp, &op_ctx); + ll_ops->fib_entry_act_ip2me_pack(op_ctx); + err = ll_ops->fib_entry_commit(mlxsw_sp, op_ctx); if (err) return err; } @@ -6092,7 +6112,6 @@ static void mlxsw_sp_router_fib4_event_process(struct mlxsw_sp *mlxsw_sp, { int err; - mutex_lock(&mlxsw_sp->router->lock); mlxsw_sp_span_respin(mlxsw_sp); switch (fib_event->event) { @@ -6112,7 +6131,6 @@ static void mlxsw_sp_router_fib4_event_process(struct mlxsw_sp *mlxsw_sp, fib_info_put(fib_event->fnh_info.fib_nh->nh_parent); break; } - mutex_unlock(&mlxsw_sp->router->lock); } static void mlxsw_sp_router_fib6_event_process(struct mlxsw_sp *mlxsw_sp, @@ -6121,7 +6139,6 @@ static void 
mlxsw_sp_router_fib6_event_process(struct mlxsw_sp *mlxsw_sp, { int err; - mutex_lock(&mlxsw_sp->router->lock); mlxsw_sp_span_respin(mlxsw_sp); switch (fib_event->event) { @@ -6145,7 +6162,6 @@ static void mlxsw_sp_router_fib6_event_process(struct mlxsw_sp *mlxsw_sp, mlxsw_sp_router_fib6_event_fini(&fib_event->fib6_event); break; } - mutex_unlock(&mlxsw_sp->router->lock); } static void mlxsw_sp_router_fibmr_event_process(struct mlxsw_sp *mlxsw_sp, @@ -6189,16 +6205,23 @@ static void mlxsw_sp_router_fibmr_event_process(struct mlxsw_sp *mlxsw_sp, static void mlxsw_sp_router_fib_event_work(struct work_struct *work) { struct mlxsw_sp_router *router = container_of(work, struct mlxsw_sp_router, fib_event_work); - struct mlxsw_sp_fib_entry_op_ctx op_ctx = {}; + struct mlxsw_sp_fib_entry_op_ctx *op_ctx = router->ll_op_ctx; struct mlxsw_sp *mlxsw_sp = router->mlxsw_sp; struct mlxsw_sp_fib_event *next_fib_event; struct mlxsw_sp_fib_event *fib_event; + int last_family = AF_UNSPEC; LIST_HEAD(fib_event_queue); spin_lock_bh(&router->fib_event_queue_lock); list_splice_init(&router->fib_event_queue, &fib_event_queue); spin_unlock_bh(&router->fib_event_queue_lock); + /* Router lock is held here to make sure per-instance + * operation context is not used in between FIB4/6 events + * processing. + */ + mutex_lock(&router->lock); + mlxsw_sp_fib_entry_op_ctx_clear(op_ctx); list_for_each_entry_safe(fib_event, next_fib_event, &fib_event_queue, list) { /* Check if the next entry in the queue exists and it is @@ -6206,30 +6229,46 @@ static void mlxsw_sp_router_fib_event_work(struct work_struct *work) * In that case it is permitted to do the bulking * of multiple FIB entries to a single register write. */ - op_ctx.bulk_ok = !list_is_last(&fib_event->list, &fib_event_queue) && - fib_event->family == next_fib_event->family && - fib_event->event == next_fib_event->event; + op_ctx->bulk_ok = !list_is_last(&fib_event->list, &fib_event_queue) && + fib_event->family == next_fib_event->family && + fib_event->event == next_fib_event->event; + + /* In case family of this and the previous entry are different, context + * reinitialization is going to be needed now, indicate that. + * Note that since last_family is initialized to AF_UNSPEC, this is always + * going to happen for the first entry processed in the work. + */ + if (fib_event->family != last_family) + op_ctx->initialized = false; switch (fib_event->family) { case AF_INET: - mlxsw_sp_router_fib4_event_process(mlxsw_sp, &op_ctx, + mlxsw_sp_router_fib4_event_process(mlxsw_sp, op_ctx, fib_event); break; case AF_INET6: - mlxsw_sp_router_fib6_event_process(mlxsw_sp, &op_ctx, + mlxsw_sp_router_fib6_event_process(mlxsw_sp, op_ctx, fib_event); break; case RTNL_FAMILY_IP6MR: case RTNL_FAMILY_IPMR: + /* Unlock here as inside FIBMR the lock is taken again + * under RTNL. The per-instance operation context + * is not used by FIBMR. 
+ */ + mutex_unlock(&router->lock); mlxsw_sp_router_fibmr_event_process(mlxsw_sp, fib_event); + mutex_lock(&router->lock); break; default: WARN_ON_ONCE(1); } + last_family = fib_event->family; kfree(fib_event); cond_resched(); } + mutex_unlock(&router->lock); } static void mlxsw_sp_router_fib4_event(struct mlxsw_sp_fib_event *fib_event, @@ -8213,6 +8252,7 @@ static const struct mlxsw_sp_router_ll_ops mlxsw_sp_router_ll_basic_ops = { .ralta_write = mlxsw_sp_router_ll_basic_ralta_write, .ralst_write = mlxsw_sp_router_ll_basic_ralst_write, .raltb_write = mlxsw_sp_router_ll_basic_raltb_write, + .fib_entry_op_ctx_size = sizeof(struct mlxsw_sp_fib_entry_op_ctx_basic), .fib_entry_pack = mlxsw_sp_router_ll_basic_fib_entry_pack, .fib_entry_act_remote_pack = mlxsw_sp_router_ll_basic_fib_entry_act_remote_pack, .fib_entry_act_local_pack = mlxsw_sp_router_ll_basic_fib_entry_act_local_pack, @@ -8221,6 +8261,29 @@ static const struct mlxsw_sp_router_ll_ops mlxsw_sp_router_ll_basic_ops = { .fib_entry_commit = mlxsw_sp_router_ll_basic_fib_entry_commit, }; +static int mlxsw_sp_router_ll_op_ctx_init(struct mlxsw_sp_router *router) +{ + size_t max_size = 0; + int i; + + for (i = 0; i < MLXSW_SP_L3_PROTO_MAX; i++) { + size_t size = router->proto_ll_ops[i]->fib_entry_op_ctx_size; + + if (size > max_size) + max_size = size; + } + router->ll_op_ctx = kzalloc(sizeof(*router->ll_op_ctx) + max_size, + GFP_KERNEL); + if (!router->ll_op_ctx) + return -ENOMEM; + return 0; +} + +static void mlxsw_sp_router_ll_op_ctx_fini(struct mlxsw_sp_router *router) +{ + kfree(router->ll_op_ctx); +} + int mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp, struct netlink_ext_ack *extack) { @@ -8237,6 +8300,10 @@ int mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp, router->proto_ll_ops[MLXSW_SP_L3_PROTO_IPV4] = &mlxsw_sp_router_ll_basic_ops; router->proto_ll_ops[MLXSW_SP_L3_PROTO_IPV6] = &mlxsw_sp_router_ll_basic_ops; + err = mlxsw_sp_router_ll_op_ctx_init(router); + if (err) + goto err_ll_op_ctx_init; + INIT_LIST_HEAD(&mlxsw_sp->router->nexthop_neighs_list); err = __mlxsw_sp_router_init(mlxsw_sp); if (err) @@ -8343,6 +8410,8 @@ int mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp, err_rifs_init: __mlxsw_sp_router_fini(mlxsw_sp); err_router_init: + mlxsw_sp_router_ll_op_ctx_fini(router); +err_ll_op_ctx_init: mutex_destroy(&mlxsw_sp->router->lock); kfree(mlxsw_sp->router); return err; @@ -8366,6 +8435,7 @@ void mlxsw_sp_router_fini(struct mlxsw_sp *mlxsw_sp) mlxsw_sp_ipips_fini(mlxsw_sp); mlxsw_sp_rifs_fini(mlxsw_sp); __mlxsw_sp_router_fini(mlxsw_sp); + mlxsw_sp_router_ll_op_ctx_fini(mlxsw_sp->router); mutex_destroy(&mlxsw_sp->router->lock); kfree(mlxsw_sp->router); } diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h index 859a5c5d51d0..9db1e3da0e0c 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h @@ -15,6 +15,23 @@ struct mlxsw_sp_router_nve_decap { u8 valid:1; }; +struct mlxsw_sp_fib_entry_op_ctx { + u8 bulk_ok:1, /* Indicate to the low-level op it is ok to bulk + * the actual entry with the one that is the next + * in queue. + */ + initialized:1; /* Bit that the low-level op sets in case + * the context priv is initialized. 
+ */ + unsigned long ll_priv[]; +}; + +static inline void +mlxsw_sp_fib_entry_op_ctx_clear(struct mlxsw_sp_fib_entry_op_ctx *op_ctx) +{ + memset(op_ctx, 0, sizeof(*op_ctx)); +} + struct mlxsw_sp_router { struct mlxsw_sp *mlxsw_sp; struct mlxsw_sp_rif **rifs; @@ -53,6 +70,7 @@ struct mlxsw_sp_router { spinlock_t fib_event_queue_lock; /* Protects fib event queue list */ /* One set of ops for each protocol: IPv4 and IPv6 */ const struct mlxsw_sp_router_ll_ops *proto_ll_ops[MLXSW_SP_L3_PROTO_MAX]; + struct mlxsw_sp_fib_entry_op_ctx *ll_op_ctx; }; enum mlxsw_sp_fib_entry_op { @@ -60,11 +78,6 @@ enum mlxsw_sp_fib_entry_op { MLXSW_SP_FIB_ENTRY_OP_DELETE, }; -struct mlxsw_sp_fib_entry_op_ctx { - u8 bulk_ok:1; - char ralue_pl[MLXSW_REG_RALUE_LEN]; -}; - /* Low-level router ops. Basically this is to handle the different * register sets to work with ordinary and XM trees and FIB entries. */ @@ -72,6 +85,7 @@ struct mlxsw_sp_router_ll_ops { int (*ralta_write)(struct mlxsw_sp *mlxsw_sp, char *xralta_pl); int (*ralst_write)(struct mlxsw_sp *mlxsw_sp, char *xralst_pl); int (*raltb_write)(struct mlxsw_sp *mlxsw_sp, char *xraltb_pl); + size_t fib_entry_op_ctx_size; void (*fib_entry_pack)(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, enum mlxsw_sp_l3proto proto, enum mlxsw_sp_fib_entry_op op, u16 virtual_router, u8 prefix_len, unsigned char *addr); From patchwork Tue Nov 10 09:48:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ido Schimmel X-Patchwork-Id: 11893715 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CAF53C63697 for ; Tue, 10 Nov 2020 09:51:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 70F0620781 for ; Tue, 10 Nov 2020 09:51:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731198AbgKJJuk (ORCPT ); Tue, 10 Nov 2020 04:50:40 -0500 Received: from wout3-smtp.messagingengine.com ([64.147.123.19]:53537 "EHLO wout3-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730893AbgKJJug (ORCPT ); Tue, 10 Nov 2020 04:50:36 -0500 Received: from compute3.internal (compute3.nyi.internal [10.202.2.43]) by mailout.west.internal (Postfix) with ESMTP id 25AE0DE4; Tue, 10 Nov 2020 04:50:35 -0500 (EST) Received: from mailfrontend1 ([10.202.2.162]) by compute3.internal (MEProxy); Tue, 10 Nov 2020 04:50:35 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm1; bh=Sj/guht9kisiWqgWo9U+AEbVy5QWsGCVeWipdTxH9zE=; b=FVks8Jl/ zbPS2srr5XW61/gcMhE9dxjqKz01tWjTB35vevjJwz1R9s29VSCJ/3djNzy9wNQm U8//jNdXCM1TrTKd2P9QDoSk5tcA335VCwGP1Jp31greg2p3oDuxsYaU/+8W1ACS YhHTvz7XCr2QP0lwG413f2PeGNuNlNYDVOJFLlOYp5l+UfM1d431PAQ3wB4mNf+Y YGychOLUlt3WxpHnZiT6xIXKOqpHGwBijH07aaKoPClIbOGvvLdyJ9j3X8cOXB2B 
mxR90SXziosN4Ax/8/5N2MhHqLkKwwVoNMbRhJcCY5F8vcBHMmpiJPyIasdkQu3Y 03b1k+Zl62DF5g== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedruddujedgtdekucetufdoteggodetrfdotf fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen uceurghilhhouhhtmecufedttdenucenucfjughrpefhvffufffkofgjfhgggfestdekre dtredttdenucfhrhhomhepkfguohcuufgthhhimhhmvghluceoihguohhstghhsehiugho shgthhdrohhrgheqnecuggftrfgrthhtvghrnhepudetieevffffveelkeeljeffkefhke ehgfdtffethfelvdejgffghefgveejkefhnecukfhppeekgedrvddvledrudehgedrudeg jeenucevlhhushhtvghrufhiiigvpeeknecurfgrrhgrmhepmhgrihhlfhhrohhmpehiug hoshgthhesihguohhstghhrdhorhhg X-ME-Proxy: Received: from shredder.mtl.com (igld-84-229-154-147.inter.net.il [84.229.154.147]) by mail.messagingengine.com (Postfix) with ESMTPA id 92EF33280064; Tue, 10 Nov 2020 04:50:33 -0500 (EST) From: Ido Schimmel To: netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com, Ido Schimmel Subject: [PATCH net-next 13/15] mlxsw: spectrum_router: Introduce fib_entry priv for low-level ops Date: Tue, 10 Nov 2020 11:48:58 +0200 Message-Id: <20201110094900.1920158-14-idosch@idosch.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201110094900.1920158-1-idosch@idosch.org> References: <20201110094900.1920158-1-idosch@idosch.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Jiri Pirko Prepare for the low-level ops that need to store some data alongside the fib_entry and introduce a per-fib_entry priv for ll ops. The priv is reference counted as in the follow-up patch it is going to be saved in pack() function and used later on in commit() even in case the related fib_entry gets freed in the middle. Signed-off-by: Jiri Pirko Signed-off-by: Ido Schimmel --- .../ethernet/mellanox/mlxsw/spectrum_ipip.c | 12 +- .../ethernet/mellanox/mlxsw/spectrum_ipip.h | 3 +- .../ethernet/mellanox/mlxsw/spectrum_router.c | 185 ++++++++++++++---- .../ethernet/mellanox/mlxsw/spectrum_router.h | 20 +- 4 files changed, 176 insertions(+), 44 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c index 3cea9ee5910d..ab2e0eb26c1a 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c @@ -186,19 +186,21 @@ mlxsw_sp_ipip_fib_entry_op_gre4_do(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry_op_ctx *op_ctx, u32 dip, u8 prefix_len, u16 ul_vr_id, enum mlxsw_sp_fib_entry_op op, - u32 tunnel_index) + u32 tunnel_index, + struct mlxsw_sp_fib_entry_priv *priv) { ll_ops->fib_entry_pack(op_ctx, MLXSW_SP_L3_PROTO_IPV4, op, ul_vr_id, - prefix_len, (unsigned char *) &dip); + prefix_len, (unsigned char *) &dip, priv); ll_ops->fib_entry_act_ip2me_tun_pack(op_ctx, tunnel_index); - return ll_ops->fib_entry_commit(mlxsw_sp, op_ctx); + return mlxsw_sp_fib_entry_commit(mlxsw_sp, op_ctx, ll_ops); } static int mlxsw_sp_ipip_fib_entry_op_gre4(struct mlxsw_sp *mlxsw_sp, const struct mlxsw_sp_router_ll_ops *ll_ops, struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_ipip_entry *ipip_entry, - enum mlxsw_sp_fib_entry_op op, u32 tunnel_index) + enum mlxsw_sp_fib_entry_op op, u32 tunnel_index, + struct mlxsw_sp_fib_entry_priv *priv) { u16 ul_vr_id = mlxsw_sp_ipip_lb_ul_vr_id(ipip_entry->ol_lb); __be32 dip; @@ -212,7 +214,7 @@ static int mlxsw_sp_ipip_fib_entry_op_gre4(struct mlxsw_sp *mlxsw_sp, dip = 
mlxsw_sp_ipip_netdev_saddr(MLXSW_SP_L3_PROTO_IPV4, ipip_entry->ol_dev).addr4; return mlxsw_sp_ipip_fib_entry_op_gre4_do(mlxsw_sp, ll_ops, op_ctx, be32_to_cpu(dip), - 32, ul_vr_id, op, tunnel_index); + 32, ul_vr_id, op, tunnel_index, priv); } static bool mlxsw_sp_ipip_tunnel_complete(enum mlxsw_sp_l3proto proto, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h index fe9a94362e61..00448cbac639 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.h @@ -56,7 +56,8 @@ struct mlxsw_sp_ipip_ops { struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_ipip_entry *ipip_entry, enum mlxsw_sp_fib_entry_op op, - u32 tunnel_index); + u32 tunnel_index, + struct mlxsw_sp_fib_entry_priv *priv); int (*ol_netdev_change)(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_ipip_entry *ipip_entry, diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index 43a4b6a34940..9d3ead1ef561 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -368,12 +368,65 @@ struct mlxsw_sp_fib_entry_decap { u32 tunnel_index; }; +static struct mlxsw_sp_fib_entry_priv * +mlxsw_sp_fib_entry_priv_create(const struct mlxsw_sp_router_ll_ops *ll_ops) +{ + struct mlxsw_sp_fib_entry_priv *priv; + + if (!ll_ops->fib_entry_priv_size) + /* No need to have priv */ + return NULL; + + priv = kzalloc(sizeof(*priv) + ll_ops->fib_entry_priv_size, GFP_KERNEL); + if (!priv) + return ERR_PTR(-ENOMEM); + refcount_set(&priv->refcnt, 1); + return priv; +} + +static void +mlxsw_sp_fib_entry_priv_destroy(struct mlxsw_sp_fib_entry_priv *priv) +{ + kfree(priv); +} + +static void mlxsw_sp_fib_entry_priv_hold(struct mlxsw_sp_fib_entry_priv *priv) +{ + refcount_inc(&priv->refcnt); +} + +static void mlxsw_sp_fib_entry_priv_put(struct mlxsw_sp_fib_entry_priv *priv) +{ + if (!priv || !refcount_dec_and_test(&priv->refcnt)) + return; + mlxsw_sp_fib_entry_priv_destroy(priv); +} + +static void mlxsw_sp_fib_entry_op_ctx_priv_hold(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + struct mlxsw_sp_fib_entry_priv *priv) +{ + if (!priv) + return; + mlxsw_sp_fib_entry_priv_hold(priv); + list_add(&priv->list, &op_ctx->fib_entry_priv_list); +} + +static void mlxsw_sp_fib_entry_op_ctx_priv_put_all(struct mlxsw_sp_fib_entry_op_ctx *op_ctx) +{ + struct mlxsw_sp_fib_entry_priv *priv, *tmp; + + list_for_each_entry_safe(priv, tmp, &op_ctx->fib_entry_priv_list, list) + mlxsw_sp_fib_entry_priv_put(priv); + INIT_LIST_HEAD(&op_ctx->fib_entry_priv_list); +} + struct mlxsw_sp_fib_entry { struct mlxsw_sp_fib_node *fib_node; enum mlxsw_sp_fib_entry_type type; struct list_head nexthop_group_node; struct mlxsw_sp_nexthop_group *nh_group; struct mlxsw_sp_fib_entry_decap decap; /* Valid for decap entries. 
*/ + struct mlxsw_sp_fib_entry_priv *priv; }; struct mlxsw_sp_fib4_entry { @@ -4316,7 +4369,8 @@ mlxsw_sp_router_ll_basic_fib_entry_pack(struct mlxsw_sp_fib_entry_op_ctx *op_ctx enum mlxsw_sp_l3proto proto, enum mlxsw_sp_fib_entry_op op, u16 virtual_router, u8 prefix_len, - unsigned char *addr) + unsigned char *addr, + struct mlxsw_sp_fib_entry_priv *priv) { struct mlxsw_sp_fib_entry_op_ctx_basic *op_ctx_basic = (void *) op_ctx->ll_priv; enum mlxsw_reg_ralxx_protocol ralxx_proto; @@ -4390,7 +4444,8 @@ mlxsw_sp_router_ll_basic_fib_entry_act_ip2me_tun_pack(struct mlxsw_sp_fib_entry_ static int mlxsw_sp_router_ll_basic_fib_entry_commit(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_fib_entry_op_ctx *op_ctx) + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + bool *postponed_for_bulk) { struct mlxsw_sp_fib_entry_op_ctx_basic *op_ctx_basic = (void *) op_ctx->ll_priv; @@ -4404,9 +4459,24 @@ static void mlxsw_sp_fib_entry_pack(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, { struct mlxsw_sp_fib *fib = fib_entry->fib_node->fib; + mlxsw_sp_fib_entry_op_ctx_priv_hold(op_ctx, fib_entry->priv); fib->ll_ops->fib_entry_pack(op_ctx, fib->proto, op, fib->vr->id, fib_entry->fib_node->key.prefix_len, - fib_entry->fib_node->key.addr); + fib_entry->fib_node->key.addr, + fib_entry->priv); +} + +int mlxsw_sp_fib_entry_commit(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + const struct mlxsw_sp_router_ll_ops *ll_ops) +{ + bool postponed_for_bulk = false; + int err; + + err = ll_ops->fib_entry_commit(mlxsw_sp, op_ctx, &postponed_for_bulk); + if (!postponed_for_bulk) + mlxsw_sp_fib_entry_op_ctx_priv_put_all(op_ctx); + return err; } static int mlxsw_sp_adj_discard_write(struct mlxsw_sp *mlxsw_sp, u16 rif_index) @@ -4480,7 +4550,7 @@ static int mlxsw_sp_fib_entry_op_remote(struct mlxsw_sp *mlxsw_sp, mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); ll_ops->fib_entry_act_remote_pack(op_ctx, trap_action, trap_id, adjacency_index, ecmp_size); - return ll_ops->fib_entry_commit(mlxsw_sp, op_ctx); + return mlxsw_sp_fib_entry_commit(mlxsw_sp, op_ctx, ll_ops); } static int mlxsw_sp_fib_entry_op_local(struct mlxsw_sp *mlxsw_sp, @@ -4504,7 +4574,7 @@ static int mlxsw_sp_fib_entry_op_local(struct mlxsw_sp *mlxsw_sp, mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); ll_ops->fib_entry_act_local_pack(op_ctx, trap_action, trap_id, rif_index); - return ll_ops->fib_entry_commit(mlxsw_sp, op_ctx); + return mlxsw_sp_fib_entry_commit(mlxsw_sp, op_ctx, ll_ops); } static int mlxsw_sp_fib_entry_op_trap(struct mlxsw_sp *mlxsw_sp, @@ -4516,7 +4586,7 @@ static int mlxsw_sp_fib_entry_op_trap(struct mlxsw_sp *mlxsw_sp, mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); ll_ops->fib_entry_act_ip2me_pack(op_ctx); - return ll_ops->fib_entry_commit(mlxsw_sp, op_ctx); + return mlxsw_sp_fib_entry_commit(mlxsw_sp, op_ctx, ll_ops); } static int mlxsw_sp_fib_entry_op_blackhole(struct mlxsw_sp *mlxsw_sp, @@ -4530,7 +4600,7 @@ static int mlxsw_sp_fib_entry_op_blackhole(struct mlxsw_sp *mlxsw_sp, trap_action = MLXSW_REG_RALUE_TRAP_ACTION_DISCARD_ERROR; mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); ll_ops->fib_entry_act_local_pack(op_ctx, trap_action, 0, 0); - return ll_ops->fib_entry_commit(mlxsw_sp, op_ctx); + return mlxsw_sp_fib_entry_commit(mlxsw_sp, op_ctx, ll_ops); } static int @@ -4548,7 +4618,7 @@ mlxsw_sp_fib_entry_op_unreachable(struct mlxsw_sp *mlxsw_sp, mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); ll_ops->fib_entry_act_local_pack(op_ctx, trap_action, trap_id, 0); - return ll_ops->fib_entry_commit(mlxsw_sp, op_ctx); + return 
mlxsw_sp_fib_entry_commit(mlxsw_sp, op_ctx, ll_ops); } static int @@ -4566,7 +4636,7 @@ mlxsw_sp_fib_entry_op_ipip_decap(struct mlxsw_sp *mlxsw_sp, ipip_ops = mlxsw_sp->router->ipip_ops_arr[ipip_entry->ipipt]; return ipip_ops->fib_entry_op(mlxsw_sp, ll_ops, op_ctx, ipip_entry, op, - fib_entry->decap.tunnel_index); + fib_entry->decap.tunnel_index, fib_entry->priv); } static int mlxsw_sp_fib_entry_op_nve_decap(struct mlxsw_sp *mlxsw_sp, @@ -4579,7 +4649,7 @@ static int mlxsw_sp_fib_entry_op_nve_decap(struct mlxsw_sp *mlxsw_sp, mlxsw_sp_fib_entry_pack(op_ctx, fib_entry, op); ll_ops->fib_entry_act_ip2me_tun_pack(op_ctx, fib_entry->decap.tunnel_index); - return ll_ops->fib_entry_commit(mlxsw_sp, op_ctx); + return mlxsw_sp_fib_entry_commit(mlxsw_sp, op_ctx, ll_ops); } static int __mlxsw_sp_fib_entry_op(struct mlxsw_sp *mlxsw_sp, @@ -4731,6 +4801,12 @@ mlxsw_sp_fib4_entry_create(struct mlxsw_sp *mlxsw_sp, return ERR_PTR(-ENOMEM); fib_entry = &fib4_entry->common; + fib_entry->priv = mlxsw_sp_fib_entry_priv_create(fib_node->fib->ll_ops); + if (IS_ERR(fib_entry->priv)) { + err = PTR_ERR(fib_entry->priv); + goto err_fib_entry_priv_create; + } + err = mlxsw_sp_fib4_entry_type_set(mlxsw_sp, fen_info, fib_entry); if (err) goto err_fib4_entry_type_set; @@ -4751,6 +4827,8 @@ mlxsw_sp_fib4_entry_create(struct mlxsw_sp *mlxsw_sp, err_nexthop4_group_get: mlxsw_sp_fib4_entry_type_unset(mlxsw_sp, fib_entry); err_fib4_entry_type_set: + mlxsw_sp_fib_entry_priv_put(fib_entry->priv); +err_fib_entry_priv_create: kfree(fib4_entry); return ERR_PTR(err); } @@ -4760,6 +4838,7 @@ static void mlxsw_sp_fib4_entry_destroy(struct mlxsw_sp *mlxsw_sp, { mlxsw_sp_nexthop4_group_put(mlxsw_sp, &fib4_entry->common); mlxsw_sp_fib4_entry_type_unset(mlxsw_sp, &fib4_entry->common); + mlxsw_sp_fib_entry_priv_put(fib4_entry->common.priv); kfree(fib4_entry); } @@ -5017,14 +5096,16 @@ static int mlxsw_sp_fib_node_entry_link(struct mlxsw_sp *mlxsw_sp, return err; } -static void __mlxsw_sp_fib_node_entry_unlink(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_fib_entry_op_ctx *op_ctx, - struct mlxsw_sp_fib_entry *fib_entry) +static int __mlxsw_sp_fib_node_entry_unlink(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + struct mlxsw_sp_fib_entry *fib_entry) { struct mlxsw_sp_fib_node *fib_node = fib_entry->fib_node; + int err; - mlxsw_sp_fib_entry_del(mlxsw_sp, op_ctx, fib_entry); + err = mlxsw_sp_fib_entry_del(mlxsw_sp, op_ctx, fib_entry); fib_node->fib_entry = NULL; + return err; } static void mlxsw_sp_fib_node_entry_unlink(struct mlxsw_sp *mlxsw_sp, @@ -5114,24 +5195,26 @@ mlxsw_sp_router_fib4_replace(struct mlxsw_sp *mlxsw_sp, return err; } -static void mlxsw_sp_router_fib4_del(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_fib_entry_op_ctx *op_ctx, - struct fib_entry_notifier_info *fen_info) +static int mlxsw_sp_router_fib4_del(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + struct fib_entry_notifier_info *fen_info) { struct mlxsw_sp_fib4_entry *fib4_entry; struct mlxsw_sp_fib_node *fib_node; + int err; if (mlxsw_sp->router->aborted) - return; + return 0; fib4_entry = mlxsw_sp_fib4_entry_lookup(mlxsw_sp, fen_info); if (!fib4_entry) - return; + return 0; fib_node = fib4_entry->common.fib_node; - __mlxsw_sp_fib_node_entry_unlink(mlxsw_sp, op_ctx, &fib4_entry->common); + err = __mlxsw_sp_fib_node_entry_unlink(mlxsw_sp, op_ctx, &fib4_entry->common); mlxsw_sp_fib4_entry_destroy(mlxsw_sp, fib4_entry); mlxsw_sp_fib_node_put(mlxsw_sp, fib_node); + return err; } static bool 
mlxsw_sp_fib6_rt_should_ignore(const struct fib6_info *rt) @@ -5546,6 +5629,12 @@ mlxsw_sp_fib6_entry_create(struct mlxsw_sp *mlxsw_sp, return ERR_PTR(-ENOMEM); fib_entry = &fib6_entry->common; + fib_entry->priv = mlxsw_sp_fib_entry_priv_create(fib_node->fib->ll_ops); + if (IS_ERR(fib_entry->priv)) { + err = PTR_ERR(fib_entry->priv); + goto err_fib_entry_priv_create; + } + INIT_LIST_HEAD(&fib6_entry->rt6_list); for (i = 0; i < nrt6; i++) { @@ -5578,6 +5667,8 @@ mlxsw_sp_fib6_entry_create(struct mlxsw_sp *mlxsw_sp, list_del(&mlxsw_sp_rt6->list); mlxsw_sp_rt6_destroy(mlxsw_sp_rt6); } + mlxsw_sp_fib_entry_priv_put(fib_entry->priv); +err_fib_entry_priv_create: kfree(fib6_entry); return ERR_PTR(err); } @@ -5588,6 +5679,7 @@ static void mlxsw_sp_fib6_entry_destroy(struct mlxsw_sp *mlxsw_sp, mlxsw_sp_nexthop6_group_put(mlxsw_sp, &fib6_entry->common); mlxsw_sp_fib6_entry_rt_destroy_all(fib6_entry); WARN_ON(fib6_entry->nrt6); + mlxsw_sp_fib_entry_priv_put(fib6_entry->common.priv); kfree(fib6_entry); } @@ -5752,19 +5844,20 @@ static int mlxsw_sp_router_fib6_append(struct mlxsw_sp *mlxsw_sp, return err; } -static void mlxsw_sp_router_fib6_del(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_fib_entry_op_ctx *op_ctx, - struct fib6_info **rt_arr, unsigned int nrt6) +static int mlxsw_sp_router_fib6_del(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + struct fib6_info **rt_arr, unsigned int nrt6) { struct mlxsw_sp_fib6_entry *fib6_entry; struct mlxsw_sp_fib_node *fib_node; struct fib6_info *rt = rt_arr[0]; + int err; if (mlxsw_sp->router->aborted) - return; + return 0; if (mlxsw_sp_fib6_rt_should_ignore(rt)) - return; + return 0; /* Multipath routes are first added to the FIB trie and only then * notified. If we vetoed the addition, we will get a delete @@ -5773,21 +5866,22 @@ static void mlxsw_sp_router_fib6_del(struct mlxsw_sp *mlxsw_sp, */ fib6_entry = mlxsw_sp_fib6_entry_lookup(mlxsw_sp, rt); if (!fib6_entry) - return; + return 0; /* If not all the nexthops are deleted, then only reduce the nexthop * group. 
*/ if (nrt6 != fib6_entry->nrt6) { mlxsw_sp_fib6_entry_nexthop_del(mlxsw_sp, op_ctx, fib6_entry, rt_arr, nrt6); - return; + return 0; } fib_node = fib6_entry->common.fib_node; - __mlxsw_sp_fib_node_entry_unlink(mlxsw_sp, op_ctx, &fib6_entry->common); + err = __mlxsw_sp_fib_node_entry_unlink(mlxsw_sp, op_ctx, &fib6_entry->common); mlxsw_sp_fib6_entry_destroy(mlxsw_sp, fib6_entry); mlxsw_sp_fib_node_put(mlxsw_sp, fib_node); + return err; } static int __mlxsw_sp_router_set_abort_trap(struct mlxsw_sp *mlxsw_sp, @@ -5797,6 +5891,7 @@ static int __mlxsw_sp_router_set_abort_trap(struct mlxsw_sp *mlxsw_sp, const struct mlxsw_sp_router_ll_ops *ll_ops = mlxsw_sp->router->proto_ll_ops[proto]; enum mlxsw_reg_ralxx_protocol ralxx_proto = (enum mlxsw_reg_ralxx_protocol) proto; + struct mlxsw_sp_fib_entry_priv *priv; char xralta_pl[MLXSW_REG_XRALTA_LEN]; char xralst_pl[MLXSW_REG_XRALST_LEN]; int i, err; @@ -5822,10 +5917,15 @@ static int __mlxsw_sp_router_set_abort_trap(struct mlxsw_sp *mlxsw_sp, if (err) return err; + priv = mlxsw_sp_fib_entry_priv_create(ll_ops); + if (IS_ERR(priv)) + return PTR_ERR(priv); + ll_ops->fib_entry_pack(op_ctx, proto, MLXSW_SP_FIB_ENTRY_OP_WRITE, - vr->id, 0, NULL); + vr->id, 0, NULL, priv); ll_ops->fib_entry_act_ip2me_pack(op_ctx); - err = ll_ops->fib_entry_commit(mlxsw_sp, op_ctx); + err = ll_ops->fib_entry_commit(mlxsw_sp, op_ctx, NULL); + mlxsw_sp_fib_entry_priv_put(priv); if (err) return err; } @@ -6117,12 +6217,16 @@ static void mlxsw_sp_router_fib4_event_process(struct mlxsw_sp *mlxsw_sp, switch (fib_event->event) { case FIB_EVENT_ENTRY_REPLACE: err = mlxsw_sp_router_fib4_replace(mlxsw_sp, op_ctx, &fib_event->fen_info); - if (err) + if (err) { + mlxsw_sp_fib_entry_op_ctx_priv_put_all(op_ctx); mlxsw_sp_router_fib_abort(mlxsw_sp); + } fib_info_put(fib_event->fen_info.fi); break; case FIB_EVENT_ENTRY_DEL: - mlxsw_sp_router_fib4_del(mlxsw_sp, op_ctx, &fib_event->fen_info); + err = mlxsw_sp_router_fib4_del(mlxsw_sp, op_ctx, &fib_event->fen_info); + if (err) + mlxsw_sp_fib_entry_op_ctx_priv_put_all(op_ctx); fib_info_put(fib_event->fen_info.fi); break; case FIB_EVENT_NH_ADD: @@ -6145,20 +6249,26 @@ static void mlxsw_sp_router_fib6_event_process(struct mlxsw_sp *mlxsw_sp, case FIB_EVENT_ENTRY_REPLACE: err = mlxsw_sp_router_fib6_replace(mlxsw_sp, op_ctx, fib_event->fib6_event.rt_arr, fib_event->fib6_event.nrt6); - if (err) + if (err) { + mlxsw_sp_fib_entry_op_ctx_priv_put_all(op_ctx); mlxsw_sp_router_fib_abort(mlxsw_sp); + } mlxsw_sp_router_fib6_event_fini(&fib_event->fib6_event); break; case FIB_EVENT_ENTRY_APPEND: err = mlxsw_sp_router_fib6_append(mlxsw_sp, op_ctx, fib_event->fib6_event.rt_arr, fib_event->fib6_event.nrt6); - if (err) + if (err) { + mlxsw_sp_fib_entry_op_ctx_priv_put_all(op_ctx); mlxsw_sp_router_fib_abort(mlxsw_sp); + } mlxsw_sp_router_fib6_event_fini(&fib_event->fib6_event); break; case FIB_EVENT_ENTRY_DEL: - mlxsw_sp_router_fib6_del(mlxsw_sp, op_ctx, fib_event->fib6_event.rt_arr, - fib_event->fib6_event.nrt6); + err = mlxsw_sp_router_fib6_del(mlxsw_sp, op_ctx, fib_event->fib6_event.rt_arr, + fib_event->fib6_event.nrt6); + if (err) + mlxsw_sp_fib_entry_op_ctx_priv_put_all(op_ctx); mlxsw_sp_router_fib6_event_fini(&fib_event->fib6_event); break; } @@ -6268,6 +6378,7 @@ static void mlxsw_sp_router_fib_event_work(struct work_struct *work) kfree(fib_event); cond_resched(); } + WARN_ON_ONCE(!list_empty(&router->ll_op_ctx->fib_entry_priv_list)); mutex_unlock(&router->lock); } @@ -8276,11 +8387,13 @@ static int mlxsw_sp_router_ll_op_ctx_init(struct mlxsw_sp_router 
*router) GFP_KERNEL); if (!router->ll_op_ctx) return -ENOMEM; + INIT_LIST_HEAD(&router->ll_op_ctx->fib_entry_priv_list); return 0; } static void mlxsw_sp_router_ll_op_ctx_fini(struct mlxsw_sp_router *router) { + WARN_ON(!list_empty(&router->ll_op_ctx->fib_entry_priv_list)); kfree(router->ll_op_ctx); } diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h index 9db1e3da0e0c..4dacbeee3142 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h @@ -23,13 +23,16 @@ struct mlxsw_sp_fib_entry_op_ctx { initialized:1; /* Bit that the low-level op sets in case * the context priv is initialized. */ + struct list_head fib_entry_priv_list; unsigned long ll_priv[]; }; static inline void mlxsw_sp_fib_entry_op_ctx_clear(struct mlxsw_sp_fib_entry_op_ctx *op_ctx) { + WARN_ON_ONCE(!list_empty(&op_ctx->fib_entry_priv_list)); memset(op_ctx, 0, sizeof(*op_ctx)); + INIT_LIST_HEAD(&op_ctx->fib_entry_priv_list); } struct mlxsw_sp_router { @@ -73,6 +76,12 @@ struct mlxsw_sp_router { struct mlxsw_sp_fib_entry_op_ctx *ll_op_ctx; }; +struct mlxsw_sp_fib_entry_priv { + refcount_t refcnt; + struct list_head list; /* Member in op_ctx->fib_entry_priv_list */ + unsigned long priv[]; +}; + enum mlxsw_sp_fib_entry_op { MLXSW_SP_FIB_ENTRY_OP_WRITE, MLXSW_SP_FIB_ENTRY_OP_DELETE, @@ -86,9 +95,11 @@ struct mlxsw_sp_router_ll_ops { int (*ralst_write)(struct mlxsw_sp *mlxsw_sp, char *xralst_pl); int (*raltb_write)(struct mlxsw_sp *mlxsw_sp, char *xraltb_pl); size_t fib_entry_op_ctx_size; + size_t fib_entry_priv_size; void (*fib_entry_pack)(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, enum mlxsw_sp_l3proto proto, enum mlxsw_sp_fib_entry_op op, - u16 virtual_router, u8 prefix_len, unsigned char *addr); + u16 virtual_router, u8 prefix_len, unsigned char *addr, + struct mlxsw_sp_fib_entry_priv *priv); void (*fib_entry_act_remote_pack)(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, enum mlxsw_reg_ralue_trap_action trap_action, u16 trap_id, u32 adjacency_index, u16 ecmp_size); @@ -99,9 +110,14 @@ struct mlxsw_sp_router_ll_ops { void (*fib_entry_act_ip2me_tun_pack)(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, u32 tunnel_ptr); int (*fib_entry_commit)(struct mlxsw_sp *mlxsw_sp, - struct mlxsw_sp_fib_entry_op_ctx *op_ctx); + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + bool *postponed_for_bulk); }; +int mlxsw_sp_fib_entry_commit(struct mlxsw_sp *mlxsw_sp, + struct mlxsw_sp_fib_entry_op_ctx *op_ctx, + const struct mlxsw_sp_router_ll_ops *ll_ops); + struct mlxsw_sp_rif_ipip_lb; struct mlxsw_sp_rif_ipip_lb_config { enum mlxsw_reg_ritr_loopback_ipip_type lb_ipipt; From patchwork Tue Nov 10 09:48:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ido Schimmel X-Patchwork-Id: 11893719 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8C6CFC4741F for ; Tue, 10 Nov 2020 09:51:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
mail.kernel.org (Postfix) with ESMTP id 3D82F207D3 for ; Tue, 10 Nov 2020 09:51:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731576AbgKJJur (ORCPT ); Tue, 10 Nov 2020 04:50:47 -0500 Received: from wout3-smtp.messagingengine.com ([64.147.123.19]:60551 "EHLO wout3-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730741AbgKJJuh (ORCPT ); Tue, 10 Nov 2020 04:50:37 -0500 Received: from compute3.internal (compute3.nyi.internal [10.202.2.43]) by mailout.west.internal (Postfix) with ESMTP id 95606CC3; Tue, 10 Nov 2020 04:50:36 -0500 (EST) Received: from mailfrontend1 ([10.202.2.162]) by compute3.internal (MEProxy); Tue, 10 Nov 2020 04:50:37 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm1; bh=B9yKKLPofdGm+dgccYKCfGHnzmqiIFOLvxA+4Avwoas=; b=l0jBoDmm n1heJEoowQ3LTQ+oOnlbTP5NFYRD9rg3vwBvXebVwmAGK4OIPXxSQkLcRwZJVcnu LPJStByMBEgRhaXywzlP99lNFt5ZUclqxRm+7/Iv0bQWMQMtUQf9WlNFj2KD3O1d qfTKBVeV/LRfNFtRezyU/AfCCOBOqzhezs3wdmiVOBrp+N/4hZC0UVfMjVinFdNn ojnsPjEtxULst5qr8ZhfDKGWmq3ue3OGHe5ankxx9NVcMxmnwpp5bRAX4W+/uNi8 GzIEZcdBUjVWSNRYJUd4Iegu1t0L+S19DDQaPW3HbvAmC8hfnW+xq8c5ozNqb3OP MC69dKbsep5Ymw== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedruddujedgtdekucetufdoteggodetrfdotf fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen uceurghilhhouhhtmecufedttdenucenucfjughrpefhvffufffkofgjfhgggfestdekre dtredttdenucfhrhhomhepkfguohcuufgthhhimhhmvghluceoihguohhstghhsehiugho shgthhdrohhrgheqnecuggftrfgrthhtvghrnhepudetieevffffveelkeeljeffkefhke ehgfdtffethfelvdejgffghefgveejkefhnecukfhppeekgedrvddvledrudehgedrudeg jeenucevlhhushhtvghrufhiiigvpedufeenucfrrghrrghmpehmrghilhhfrhhomhepih guohhstghhsehiughoshgthhdrohhrgh X-ME-Proxy: Received: from shredder.mtl.com (igld-84-229-154-147.inter.net.il [84.229.154.147]) by mail.messagingengine.com (Postfix) with ESMTPA id EFBFD328005A; Tue, 10 Nov 2020 04:50:34 -0500 (EST) From: Ido Schimmel To: netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com, Ido Schimmel Subject: [PATCH net-next 14/15] mlxsw: spectrum_router: Track FIB entry committed state and skip uncommitted on delete Date: Tue, 10 Nov 2020 11:48:59 +0200 Message-Id: <20201110094900.1920158-15-idosch@idosch.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201110094900.1920158-1-idosch@idosch.org> References: <20201110094900.1920158-1-idosch@idosch.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Jiri Pirko In case bulking is used, the entry that was previously added may not be yet committed to the HW as it waits in the queue for bulk send. For such entries, skip the deletion. 
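
A minimal stand-alone C sketch of the rule described above, using hypothetical simplified types in place of the driver's structures: a delete request is skipped when the entry was never committed to the hardware because it is still waiting in the bulk queue.

#include <stdbool.h>
#include <stdio.h>

/* Stand-alone sketch; simplified stand-ins, not the mlxsw driver code. */

struct fib_entry_priv {
	bool committed;	/* set by the low-level op once the entry hit HW */
};

struct fib_entry {
	struct fib_entry_priv priv;
	/* ... route key, nexthops, etc. ... */
};

/* Stand-in for the low-level "is the entry committed?" op. */
static bool ll_fib_entry_is_committed(const struct fib_entry_priv *priv)
{
	return priv->committed;
}

static int ll_fib_entry_delete(struct fib_entry *entry)
{
	/* A real implementation would pack and send the delete here. */
	printf("deleting entry %p from HW\n", (void *)entry);
	return 0;
}

static int fib_entry_del(struct fib_entry *entry)
{
	/* The entry still sits in the bulk queue and was never written
	 * to HW, so there is nothing to delete there.
	 */
	if (!ll_fib_entry_is_committed(&entry->priv))
		return 0;
	return ll_fib_entry_delete(entry);
}

int main(void)
{
	struct fib_entry pending = { .priv = { .committed = false } };
	struct fib_entry live = { .priv = { .committed = true } };

	fib_entry_del(&pending);	/* skipped, nothing printed */
	fib_entry_del(&live);		/* reaches the HW delete path */
	return 0;
}
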
Signed-off-by: Jiri Pirko Signed-off-by: Ido Schimmel --- drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c | 11 +++++++++++ drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h | 1 + 2 files changed, 12 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index 9d3ead1ef561..ef95d126d29a 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -4453,6 +4453,12 @@ mlxsw_sp_router_ll_basic_fib_entry_commit(struct mlxsw_sp *mlxsw_sp, op_ctx_basic->ralue_pl); } +static bool +mlxsw_sp_router_ll_basic_fib_entry_is_committed(struct mlxsw_sp_fib_entry_priv *priv) +{ + return true; +} + static void mlxsw_sp_fib_entry_pack(struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib_entry *fib_entry, enum mlxsw_sp_fib_entry_op op) @@ -4712,6 +4718,10 @@ static int mlxsw_sp_fib_entry_del(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry_op_ctx *op_ctx, struct mlxsw_sp_fib_entry *fib_entry) { + const struct mlxsw_sp_router_ll_ops *ll_ops = fib_entry->fib_node->fib->ll_ops; + + if (!ll_ops->fib_entry_is_committed(fib_entry->priv)) + return 0; return mlxsw_sp_fib_entry_op(mlxsw_sp, op_ctx, fib_entry, MLXSW_SP_FIB_ENTRY_OP_DELETE); } @@ -8370,6 +8380,7 @@ static const struct mlxsw_sp_router_ll_ops mlxsw_sp_router_ll_basic_ops = { .fib_entry_act_ip2me_pack = mlxsw_sp_router_ll_basic_fib_entry_act_ip2me_pack, .fib_entry_act_ip2me_tun_pack = mlxsw_sp_router_ll_basic_fib_entry_act_ip2me_tun_pack, .fib_entry_commit = mlxsw_sp_router_ll_basic_fib_entry_commit, + .fib_entry_is_committed = mlxsw_sp_router_ll_basic_fib_entry_is_committed, }; static int mlxsw_sp_router_ll_op_ctx_init(struct mlxsw_sp_router *router) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h index 4dacbeee3142..ed651b4200cb 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h @@ -112,6 +112,7 @@ struct mlxsw_sp_router_ll_ops { int (*fib_entry_commit)(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry_op_ctx *op_ctx, bool *postponed_for_bulk); + bool (*fib_entry_is_committed)(struct mlxsw_sp_fib_entry_priv *priv); }; int mlxsw_sp_fib_entry_commit(struct mlxsw_sp *mlxsw_sp, From patchwork Tue Nov 10 09:49:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ido Schimmel X-Patchwork-Id: 11893723 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5F140C55ABD for ; Tue, 10 Nov 2020 09:51:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0167920797 for ; Tue, 10 Nov 2020 09:51:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731423AbgKJJuo (ORCPT ); Tue, 10 Nov 2020 04:50:44 -0500 Received: from wout3-smtp.messagingengine.com ([64.147.123.19]:54819 "EHLO 
wout3-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731031AbgKJJuj (ORCPT ); Tue, 10 Nov 2020 04:50:39 -0500 Received: from compute3.internal (compute3.nyi.internal [10.202.2.43]) by mailout.west.internal (Postfix) with ESMTP id EF56FDC3; Tue, 10 Nov 2020 04:50:37 -0500 (EST) Received: from mailfrontend1 ([10.202.2.162]) by compute3.internal (MEProxy); Tue, 10 Nov 2020 04:50:38 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm1; bh=8VjLMR/h/rMMGBEr6ukSAy3MYys0BF/k2gczYiXJgwg=; b=laiz1nFc mebgoh6mV5lwBZTWj4Oygh8S7PbJwz9s4/Oysj5PPuZzF6R71FQs9DdaGyyDOQLb 1+90rMmPPjTLsNes7ohVSGYftevna8niNIp7lqpTDb6kjGQuZlW4rg6pM4HnyTOT B/jnKR23dI219AQLobucYpv42+OibXQ1ku5fAzi2BqturjJsg2y7DXY1Qb0dx8eu 2f+DmyUwaoyBGMJR6j3MvaKfdqZ3Kx75b5RdufFUbGdJOLKhjgpdl8/Yu/1QvxCO ISlNkgm66mPTlLr0ExGH61ShiQUyoe0666Ltt4YHVAHHk4CRBVFzJraBizFi55jv kTP84F6fejOuJA== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedruddujedgtdekucetufdoteggodetrfdotf fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen uceurghilhhouhhtmecufedttdenucenucfjughrpefhvffufffkofgjfhgggfestdekre dtredttdenucfhrhhomhepkfguohcuufgthhhimhhmvghluceoihguohhstghhsehiugho shgthhdrohhrgheqnecuggftrfgrthhtvghrnhepudetieevffffveelkeeljeffkefhke ehgfdtffethfelvdejgffghefgveejkefhnecukfhppeekgedrvddvledrudehgedrudeg jeenucevlhhushhtvghrufhiiigvpedufeenucfrrghrrghmpehmrghilhhfrhhomhepih guohhstghhsehiughoshgthhdrohhrgh X-ME-Proxy: Received: from shredder.mtl.com (igld-84-229-154-147.inter.net.il [84.229.154.147]) by mail.messagingengine.com (Postfix) with ESMTPA id 569C2328006D; Tue, 10 Nov 2020 04:50:36 -0500 (EST) From: Ido Schimmel To: netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com, Ido Schimmel Subject: [PATCH net-next 15/15] mlxsw: spectrum_router: Introduce FIB entry update op Date: Tue, 10 Nov 2020 11:49:00 +0200 Message-Id: <20201110094900.1920158-16-idosch@idosch.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201110094900.1920158-1-idosch@idosch.org> References: <20201110094900.1920158-1-idosch@idosch.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Jiri Pirko Follow-up patchset introducing XMDR implementation is going to need to distinguish write and update ops. Therefore introduce "update op" and call "write op" only when new FIB entry is inserted. 
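
A minimal stand-alone C sketch of the write/update split described above, with a hypothetical simplified node type rather than the driver's own: the write op is chosen only when the node did not carry an entry before, otherwise the update op is used.

#include <stdbool.h>
#include <stdio.h>

/* Stand-alone sketch; simplified stand-ins, not the mlxsw driver code. */

enum fib_entry_op {
	FIB_ENTRY_OP_WRITE,	/* first insertion of an entry */
	FIB_ENTRY_OP_UPDATE,	/* entry already exists, overwrite it */
	FIB_ENTRY_OP_DELETE,
};

struct fib_node {
	void *fib_entry;	/* NULL until an entry is linked */
};

/* Pick the op as described in the text: write only when the node had
 * no entry yet, update otherwise.
 */
static enum fib_entry_op fib_entry_op_select(const struct fib_node *node)
{
	return node->fib_entry ? FIB_ENTRY_OP_UPDATE : FIB_ENTRY_OP_WRITE;
}

int main(void)
{
	struct fib_node empty = { .fib_entry = NULL };
	struct fib_node occupied = { .fib_entry = &empty };

	printf("%d %d\n", fib_entry_op_select(&empty),
	       fib_entry_op_select(&occupied));	/* prints: 0 1 */
	return 0;
}
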
Signed-off-by: Jiri Pirko Signed-off-by: Ido Schimmel --- .../ethernet/mellanox/mlxsw/spectrum_router.c | 16 +++++++++++----- .../ethernet/mellanox/mlxsw/spectrum_router.h | 1 + 2 files changed, 12 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index ef95d126d29a..e692e5a39f6c 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -4350,6 +4350,7 @@ mlxsw_sp_fib_entry_hw_flags_refresh(struct mlxsw_sp *mlxsw_sp, { switch (op) { case MLXSW_SP_FIB_ENTRY_OP_WRITE: + case MLXSW_SP_FIB_ENTRY_OP_UPDATE: mlxsw_sp_fib_entry_hw_flags_set(mlxsw_sp, fib_entry); break; case MLXSW_SP_FIB_ENTRY_OP_DELETE: @@ -4381,6 +4382,7 @@ mlxsw_sp_router_ll_basic_fib_entry_pack(struct mlxsw_sp_fib_entry_op_ctx *op_ctx switch (op) { case MLXSW_SP_FIB_ENTRY_OP_WRITE: + case MLXSW_SP_FIB_ENTRY_OP_UPDATE: ralue_op = MLXSW_REG_RALUE_OP_WRITE_WRITE; break; case MLXSW_SP_FIB_ENTRY_OP_DELETE: @@ -4699,10 +4701,12 @@ static int mlxsw_sp_fib_entry_op(struct mlxsw_sp *mlxsw_sp, static int __mlxsw_sp_fib_entry_update(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry_op_ctx *op_ctx, - struct mlxsw_sp_fib_entry *fib_entry) + struct mlxsw_sp_fib_entry *fib_entry, + bool is_new) { return mlxsw_sp_fib_entry_op(mlxsw_sp, op_ctx, fib_entry, - MLXSW_SP_FIB_ENTRY_OP_WRITE); + is_new ? MLXSW_SP_FIB_ENTRY_OP_WRITE : + MLXSW_SP_FIB_ENTRY_OP_UPDATE); } static int mlxsw_sp_fib_entry_update(struct mlxsw_sp *mlxsw_sp, @@ -4711,7 +4715,7 @@ static int mlxsw_sp_fib_entry_update(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry_op_ctx *op_ctx = mlxsw_sp->router->ll_op_ctx; mlxsw_sp_fib_entry_op_ctx_clear(op_ctx); - return __mlxsw_sp_fib_entry_update(mlxsw_sp, op_ctx, fib_entry); + return __mlxsw_sp_fib_entry_update(mlxsw_sp, op_ctx, fib_entry, false); } static int mlxsw_sp_fib_entry_del(struct mlxsw_sp *mlxsw_sp, @@ -5091,11 +5095,12 @@ static int mlxsw_sp_fib_node_entry_link(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fib_entry *fib_entry) { struct mlxsw_sp_fib_node *fib_node = fib_entry->fib_node; + bool is_new = !fib_node->fib_entry; int err; fib_node->fib_entry = fib_entry; - err = __mlxsw_sp_fib_entry_update(mlxsw_sp, op_ctx, fib_entry); + err = __mlxsw_sp_fib_entry_update(mlxsw_sp, op_ctx, fib_entry, is_new); if (err) goto err_fib_entry_update; @@ -5509,7 +5514,8 @@ static int mlxsw_sp_nexthop6_group_update(struct mlxsw_sp *mlxsw_sp, * currently associated with it in the device's table is that * of the old group. Start using the new one instead. */ - err = __mlxsw_sp_fib_entry_update(mlxsw_sp, op_ctx, &fib6_entry->common); + err = __mlxsw_sp_fib_entry_update(mlxsw_sp, op_ctx, + &fib6_entry->common, false); if (err) goto err_fib_entry_update; diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h index ed651b4200cb..8230f6ff02ed 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h @@ -84,6 +84,7 @@ struct mlxsw_sp_fib_entry_priv { enum mlxsw_sp_fib_entry_op { MLXSW_SP_FIB_ENTRY_OP_WRITE, + MLXSW_SP_FIB_ENTRY_OP_UPDATE, MLXSW_SP_FIB_ENTRY_OP_DELETE, };
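
Returning to the bulking rule introduced earlier in this series (bulking is allowed only when the next queued FIB event has the same family and the same event type), the following is a stand-alone C sketch of that decision over a simple array queue; the types and the array layout are hypothetical simplifications of the driver's event list.

#include <stdbool.h>
#include <stdio.h>
#include <sys/socket.h>

/* Stand-alone sketch; simplified stand-ins, not the mlxsw driver code. */

struct fib_event {
	int family;	/* e.g. AF_INET, AF_INET6 */
	int event;	/* 0 and 1 stand in for replace/delete event types */
};

/* Bulking with the next queued operation is allowed only when a next
 * event exists and it matches the current one in both family and event.
 */
static bool bulk_ok(const struct fib_event *events, int i, int count)
{
	if (i + 1 >= count)
		return false;
	return events[i].family == events[i + 1].family &&
	       events[i].event == events[i + 1].event;
}

int main(void)
{
	struct fib_event queue[] = {
		{ .family = AF_INET,  .event = 0 },	/* may be bulked with the next event */
		{ .family = AF_INET,  .event = 0 },	/* next differs in family -> no bulking */
		{ .family = AF_INET6, .event = 1 },	/* last in queue -> no bulking */
	};
	int n = (int)(sizeof(queue) / sizeof(queue[0]));
	int i;

	for (i = 0; i < n; i++)
		printf("event %d: bulk_ok=%d\n", i, bulk_ok(queue, i, n));
	return 0;
}
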