From patchwork Mon Nov 21 17:46:04 2022
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 13051489
X-Patchwork-Delegate: kuba@kernel.org
From: Sebastian Andrzej Siewior
To: netdev@vger.kernel.org
Cc: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Thomas Gleixner, Kurt Kanzenbach, Sebastian Andrzej Siewior
Subject: [PATCH v2 net 6/7] hsr: Synchronize sequence number updates.
Date: Mon, 21 Nov 2022 18:46:04 +0100
Message-Id: <20221121174605.2456845-7-bigeasy@linutronix.de>
In-Reply-To: <20221121174605.2456845-1-bigeasy@linutronix.de>
References: <20221121174605.2456845-1-bigeasy@linutronix.de>

hsr_register_frame_out() compares the new sequence_nr against the old one
recorded in hsr_node::seq_out; if the new sequence_nr is higher, it is
written to hsr_node::seq_out as the new value.

This operation isn't locked, so two frames with the same sequence number
may arrive (via the two slave devices) and be fed to
hsr_register_frame_out() at the same time. Both pass the check and later
update the sequence counter to the same value. As a result the content of
the same packet is fed into the stack twice.

This was noticed by running ping and observing DUP being reported from
time to time.
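For illustration, this is the unsynchronized check-then-update in
hsr_register_frame_out() as it reads before this patch (reconstructed from
the hunk further down; the comments are added here to mark the race window
and are not part of the kernel source):

int hsr_register_frame_out(struct hsr_port *port, struct hsr_node *node,
			   u16 sequence_nr)
{
	/* Two copies of the same frame, one received on each slave device,
	 * can both evaluate this check before either of them has updated
	 * seq_out[]/time_out[], so both see the sequence number as new ...
	 */
	if (seq_nr_before_or_eq(sequence_nr, node->seq_out[port->type]) &&
	    time_is_after_jiffies(node->time_out[port->type] +
	    msecs_to_jiffies(HSR_ENTRY_FORGET_TIME)))
		return 1;

	/* ... both store the same sequence_nr, both return 0, and the same
	 * packet is delivered to the stack twice (the DUPs seen with ping).
	 */
	node->time_out[port->type] = jiffies;
	node->seq_out[port->type] = sequence_nr;
	return 0;
}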
Instead of using the hsr_priv::seqnr_lock for the whole receive path (as
is done for sending in the master node), add an additional per-node lock
that is only used for sequence number reads, checks and updates.

Fixes: f421436a591d3 ("net/hsr: Add support for the High-availability Seamless Redundancy protocol (HSRv0)")
Signed-off-by: Sebastian Andrzej Siewior
---
 net/hsr/hsr_framereg.c | 9 ++++++++-
 net/hsr/hsr_framereg.h | 1 +
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
index f2dd846ff9038..39a6088080e93 100644
--- a/net/hsr/hsr_framereg.c
+++ b/net/hsr/hsr_framereg.c
@@ -157,6 +157,7 @@ static struct hsr_node *hsr_add_node(struct hsr_priv *hsr,
 		return NULL;
 
 	ether_addr_copy(new_node->macaddress_A, addr);
+	spin_lock_init(&new_node->seq_out_lock);
 
 	/* We are only interested in time diffs here, so use current jiffies
 	 * as initialization. (0 could trigger an spurious ring error warning).
@@ -353,6 +354,7 @@ void hsr_handle_sup_frame(struct hsr_frame_info *frame)
 	}
 
 	ether_addr_copy(node_real->macaddress_B, ethhdr->h_source);
+	spin_lock_bh(&node_real->seq_out_lock);
 	for (i = 0; i < HSR_PT_PORTS; i++) {
 		if (!node_curr->time_in_stale[i] &&
 		    time_after(node_curr->time_in[i], node_real->time_in[i])) {
@@ -363,6 +365,7 @@ void hsr_handle_sup_frame(struct hsr_frame_info *frame)
 		if (seq_nr_after(node_curr->seq_out[i], node_real->seq_out[i]))
 			node_real->seq_out[i] = node_curr->seq_out[i];
 	}
+	spin_unlock_bh(&node_real->seq_out_lock);
 	node_real->addr_B_port = port_rcv->type;
 
 	spin_lock_bh(&hsr->list_lock);
@@ -456,13 +459,17 @@ void hsr_register_frame_in(struct hsr_node *node, struct hsr_port *port,
 int hsr_register_frame_out(struct hsr_port *port, struct hsr_node *node,
 			   u16 sequence_nr)
 {
+	spin_lock_bh(&node->seq_out_lock);
 	if (seq_nr_before_or_eq(sequence_nr, node->seq_out[port->type]) &&
 	    time_is_after_jiffies(node->time_out[port->type] +
-	    msecs_to_jiffies(HSR_ENTRY_FORGET_TIME)))
+	    msecs_to_jiffies(HSR_ENTRY_FORGET_TIME))) {
+		spin_unlock_bh(&node->seq_out_lock);
 		return 1;
+	}
 
 	node->time_out[port->type] = jiffies;
 	node->seq_out[port->type] = sequence_nr;
+	spin_unlock_bh(&node->seq_out_lock);
 	return 0;
 }
 
diff --git a/net/hsr/hsr_framereg.h b/net/hsr/hsr_framereg.h
index b5f902397bf1a..9a047ac5991d0 100644
--- a/net/hsr/hsr_framereg.h
+++ b/net/hsr/hsr_framereg.h
@@ -69,6 +69,7 @@ void prp_update_san_info(struct hsr_node *node, bool is_sup);
 
 struct hsr_node {
 	struct list_head	mac_list;
+	spinlock_t		seq_out_lock;
 	unsigned char		macaddress_A[ETH_ALEN];
 	unsigned char		macaddress_B[ETH_ALEN];
 	/* Local slave through which AddrB frames are received from this node */
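Put together, hsr_register_frame_out() after this change reads roughly as
follows (assembled from the hunk above; the comments are added here for
explanation and are not part of the patch):

int hsr_register_frame_out(struct hsr_port *port, struct hsr_node *node,
			   u16 sequence_nr)
{
	/* Serialize the check and the update of seq_out[]/time_out[] so
	 * that only one of the two copies of a frame can pass as new.
	 * The _bh variant matches the existing locking in this file
	 * (hsr->list_lock is taken the same way).
	 */
	spin_lock_bh(&node->seq_out_lock);
	if (seq_nr_before_or_eq(sequence_nr, node->seq_out[port->type]) &&
	    time_is_after_jiffies(node->time_out[port->type] +
	    msecs_to_jiffies(HSR_ENTRY_FORGET_TIME))) {
		/* Not newer than what was recorded recently for this port:
		 * report the frame as already handled.
		 */
		spin_unlock_bh(&node->seq_out_lock);
		return 1;
	}

	node->time_out[port->type] = jiffies;
	node->seq_out[port->type] = sequence_nr;
	spin_unlock_bh(&node->seq_out_lock);
	return 0;
}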