From patchwork Wed Jun 2 17:44:26 2021
X-Patchwork-Submitter: Jon Maloy
X-Patchwork-Id: 12295465
X-Patchwork-Delegate: kuba@kernel.org
From: jmaloy@redhat.com
To: netdev@vger.kernel.org, davem@davemloft.net
Cc: tipc-discussion@lists.sourceforge.net, tung.q.nguyen@dektech.com.au,
    hoang.h.le@dektech.com.au, tuong.t.lien@dektech.com.au, jmaloy@redhat.com,
    maloy@donjonn.com, xinl@redhat.com, ying.xue@windriver.com,
    parthasarathy.bhuvaragan@gmail.com
Subject: [net-next v2 3/3] tipc: simplify handling of lookup scope during
 multicast message reception
Date: Wed, 2 Jun 2021 13:44:26 -0400
Message-Id: <20210602174426.870536-4-jmaloy@redhat.com>
In-Reply-To: <20210602174426.870536-1-jmaloy@redhat.com>
References: <20210602174426.870536-1-jmaloy@redhat.com>
X-Mailing-List: netdev@vger.kernel.org

From: Jon Maloy

We introduce a new macro TIPC_ANY_SCOPE to make the handling of the
lookup scope value more comprehensible during multicast reception.
The (unchanged) rules go as follows:

1) Multicast messages sent from own node are delivered to all matching
   sockets on the own node, irrespective of their binding scope.

2) Multicast messages sent from other nodes arrive here because they
   have found TIPC_CLUSTER_SCOPE bindings emanating from this node.
   Those messages should be delivered to exactly those sockets, but
   not to local sockets bound with TIPC_NODE_SCOPE, since the latter
   obviously were not meant to be visible for those senders.

3) Group multicast/broadcast messages are delivered to the sockets
   with a binding scope matching exactly the lookup scope indicated
   in the message header, and nobody else.
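
For illustration only (not part of the patch): a minimal user-space
sketch of the scope selection and matching that these rules boil down
to. The scope values mirror include/uapi/linux/tipc.h and the new
TIPC_ANY_SCOPE macro introduced below; the helper names are invented
for the example.

#include <stdbool.h>
#include <stdio.h>

#define TIPC_CLUSTER_SCOPE	2	/* as in include/uapi/linux/tipc.h */
#define TIPC_NODE_SCOPE		3
#define TIPC_ANY_SCOPE		10	/* new: node and cluster scope both match */

/* Rules 1) and 2): non-group multicast derives the lookup scope from
 * the originating node only.
 */
static int mcast_lookup_scope(bool from_own_node)
{
	return from_own_node ? TIPC_ANY_SCOPE : TIPC_CLUSTER_SCOPE;
}

/* The single test now applied to each matching local publication */
static bool scope_match(int lookup_scope, int publ_scope)
{
	return lookup_scope == publ_scope || lookup_scope == TIPC_ANY_SCOPE;
}

int main(void)
{
	/* 1) From own node: node- and cluster-scope bindings both match */
	printf("self : node=%d cluster=%d\n",
	       scope_match(mcast_lookup_scope(true), TIPC_NODE_SCOPE),
	       scope_match(mcast_lookup_scope(true), TIPC_CLUSTER_SCOPE));

	/* 2) From another node: only cluster-scope bindings match */
	printf("peer : node=%d cluster=%d\n",
	       scope_match(mcast_lookup_scope(false), TIPC_NODE_SCOPE),
	       scope_match(mcast_lookup_scope(false), TIPC_CLUSTER_SCOPE));

	/* 3) Group messages use the header's lookup scope verbatim, so
	 * only an exact binding scope match is accepted.
	 */
	printf("group: node=%d cluster=%d\n",
	       scope_match(TIPC_NODE_SCOPE, TIPC_NODE_SCOPE),
	       scope_match(TIPC_NODE_SCOPE, TIPC_CLUSTER_SCOPE));
	return 0;
}

Expected output: the message from the own node hits both bindings, the
message from a peer hits only the cluster-scope binding, and the group
message hits only the exact scope.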

Reviewed-by: Xin Long
Tested-by: Hoang Le
Signed-off-by: Jon Maloy
---
v2: Changed value of TIPC_ANY_SCOPE to avoid compiler warning

Signed-off-by: Jon Maloy
---
 net/tipc/name_table.c |  6 +++---
 net/tipc/name_table.h |  4 +++-
 net/tipc/socket.c     | 26 ++++++++++----------------
 3 files changed, 16 insertions(+), 20 deletions(-)

diff --git a/net/tipc/name_table.c b/net/tipc/name_table.c
index fecab516bf41..01396dd1c899 100644
--- a/net/tipc/name_table.c
+++ b/net/tipc/name_table.c
@@ -673,12 +673,12 @@ bool tipc_nametbl_lookup_group(struct net *net, struct tipc_uaddr *ua,
  * Returns a list of local sockets
  */
 void tipc_nametbl_lookup_mcast_sockets(struct net *net, struct tipc_uaddr *ua,
-				       bool exact, struct list_head *dports)
+				       struct list_head *dports)
 {
 	struct service_range *sr;
 	struct tipc_service *sc;
 	struct publication *p;
-	u32 scope = ua->scope;
+	u8 scope = ua->scope;
 
 	rcu_read_lock();
 	sc = tipc_service_find(net, ua);
@@ -688,7 +688,7 @@ void tipc_nametbl_lookup_mcast_sockets(struct net *net, struct tipc_uaddr *ua,
 	spin_lock_bh(&sc->lock);
 	service_range_foreach_match(sr, sc, ua->sr.lower, ua->sr.upper) {
 		list_for_each_entry(p, &sr->local_publ, local_publ) {
-			if (p->scope == scope || (!exact && p->scope < scope))
+			if (scope == p->scope || scope == TIPC_ANY_SCOPE)
 				tipc_dest_push(dports, 0, p->sk.ref);
 		}
 	}
diff --git a/net/tipc/name_table.h b/net/tipc/name_table.h
index c7c9a3ddd420..259f95e3d99c 100644
--- a/net/tipc/name_table.h
+++ b/net/tipc/name_table.h
@@ -51,6 +51,8 @@ struct tipc_uaddr;
 #define TIPC_PUBL_SCOPE_NUM	(TIPC_NODE_SCOPE + 1)
 #define TIPC_NAMETBL_SIZE	1024	/* must be a power of 2 */
 
+#define TIPC_ANY_SCOPE	10	/* Both node and cluster scope will match */
+
 /**
  * struct publication - info about a published service address or range
  * @sr: service range represented by this publication
@@ -113,7 +115,7 @@ int tipc_nl_name_table_dump(struct sk_buff *skb, struct netlink_callback *cb);
 bool tipc_nametbl_lookup_anycast(struct net *net, struct tipc_uaddr *ua,
 				 struct tipc_socket_addr *sk);
 void tipc_nametbl_lookup_mcast_sockets(struct net *net, struct tipc_uaddr *ua,
-				       bool exact, struct list_head *dports);
+				       struct list_head *dports);
 void tipc_nametbl_lookup_mcast_nodes(struct net *net, struct tipc_uaddr *ua,
 				     struct tipc_nlist *nodes);
 bool tipc_nametbl_lookup_group(struct net *net, struct tipc_uaddr *ua,
diff --git a/net/tipc/socket.c b/net/tipc/socket.c
index c635fd27fb38..575a0238deb2 100644
--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -1200,12 +1200,12 @@ void tipc_sk_mcast_rcv(struct net *net, struct sk_buff_head *arrvq,
 	struct tipc_msg *hdr;
 	struct tipc_uaddr ua;
 	int user, mtyp, hlen;
-	bool exact;
 
 	__skb_queue_head_init(&tmpq);
 	INIT_LIST_HEAD(&dports);
 	ua.addrtype = TIPC_SERVICE_RANGE;
 
+	/* tipc_skb_peek() increments the head skb's reference counter */
 	skb = tipc_skb_peek(arrvq, &inputq->lock);
 	for (; skb; skb = tipc_skb_peek(arrvq, &inputq->lock)) {
 		hdr = buf_msg(skb);
@@ -1214,6 +1214,12 @@ void tipc_sk_mcast_rcv(struct net *net, struct sk_buff_head *arrvq,
 		hlen = skb_headroom(skb) + msg_hdr_sz(hdr);
 		onode = msg_orignode(hdr);
 		ua.sr.type = msg_nametype(hdr);
+		ua.sr.lower = msg_namelower(hdr);
+		ua.sr.upper = msg_nameupper(hdr);
+		if (onode == self)
+			ua.scope = TIPC_ANY_SCOPE;
+		else
+			ua.scope = TIPC_CLUSTER_SCOPE;
 
 		if (mtyp == TIPC_GRP_UCAST_MSG || user == GROUP_PROTOCOL) {
 			spin_lock_bh(&inputq->lock);
@@ -1231,20 +1237,10 @@ void tipc_sk_mcast_rcv(struct net *net, struct sk_buff_head *arrvq,
 			ua.sr.lower = 0;
 			ua.sr.upper = ~0;
 			ua.scope = msg_lookup_scope(hdr);
-			exact = true;
-		} else {
-			/* TIPC_NODE_SCOPE means "any scope" in this context */
-			if (onode == self)
-				ua.scope = TIPC_NODE_SCOPE;
-			else
-				ua.scope = TIPC_CLUSTER_SCOPE;
-			exact = false;
-			ua.sr.lower = msg_namelower(hdr);
-			ua.sr.upper = msg_nameupper(hdr);
 		}
 
 		/* Create destination port list: */
-		tipc_nametbl_lookup_mcast_sockets(net, &ua, exact, &dports);
+		tipc_nametbl_lookup_mcast_sockets(net, &ua, &dports);
 
 		/* Clone message per destination */
 		while (tipc_dest_pop(&dports, NULL, &portid)) {
@@ -1256,13 +1252,11 @@ void tipc_sk_mcast_rcv(struct net *net, struct sk_buff_head *arrvq,
 			}
 			pr_warn("Failed to clone mcast rcv buffer\n");
 		}
-		/* Append to inputq if not already done by other thread */
+		/* Append clones to inputq only if skb is still head of arrvq */
 		spin_lock_bh(&inputq->lock);
 		if (skb_peek(arrvq) == skb) {
 			skb_queue_splice_tail_init(&tmpq, inputq);
-			/* Decrease the skb's refcnt as increasing in the
-			 * function tipc_skb_peek
-			 */
+			/* Decrement the skb's refcnt */
 			kfree_skb(__skb_dequeue(arrvq));
 		}
 		spin_unlock_bh(&inputq->lock);
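
A side note on the new macro (illustration, not part of the patch):
ua->scope is now a u8 and is compared directly against publication
scopes, so TIPC_ANY_SCOPE only has to fit in a u8 and stay clear of
the valid binding scopes; the value 10 satisfies both. A throwaway
check along those lines, assuming the scope values from
include/uapi/linux/tipc.h (ZONE=1, CLUSTER=2, NODE=3):

#include <assert.h>
#include <stdint.h>

#define TIPC_ZONE_SCOPE		1	/* assumed uapi values */
#define TIPC_CLUSTER_SCOPE	2
#define TIPC_NODE_SCOPE		3
#define TIPC_ANY_SCOPE		10	/* value chosen by this patch */

int main(void)
{
	/* must fit the (now u8) ua->scope field without truncation */
	static_assert(TIPC_ANY_SCOPE <= UINT8_MAX, "fits in u8");
	/* must never collide with a scope a publication can carry */
	static_assert(TIPC_ANY_SCOPE != TIPC_ZONE_SCOPE &&
		      TIPC_ANY_SCOPE != TIPC_CLUSTER_SCOPE &&
		      TIPC_ANY_SCOPE != TIPC_NODE_SCOPE, "distinct value");
	return 0;
}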