From patchwork Mon Feb 8 17:52:07 2021
X-Patchwork-Submitter: Taehee Yoo
X-Patchwork-Id: 12076161
X-Patchwork-Delegate: kuba@kernel.org
From: Taehee Yoo
To: davem@davemloft.net, kuba@kernel.org, netdev@vger.kernel.org,
 dsahern@kernel.org, xiyou.wangcong@gmail.com, jwi@linux.ibm.com,
 kgraul@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com,
 borntraeger@de.ibm.com, mareklindner@neomailbox.ch, sw@simonwunderlich.de,
 a@unstable.cc, sven@narfation.org, yoshfuji@linux-ipv6.org
Cc: ap420073@gmail.com
Subject: [PATCH net-next 1/8] mld: convert ifmcaddr6 to list macros
Date: Mon, 8 Feb 2021 17:52:07 +0000
Message-Id: <20210208175207.4962-1-ap420073@gmail.com>
X-Mailer: git-send-email 2.17.1
X-Mailing-List: netdev@vger.kernel.org
Currently, struct ifmcaddr6 uses a hand-rolled linked list instead of the kernel list API, so its code shape is a little different from comparable code elsewhere. Convert ifmcaddr6 to the list API to improve readability. Signed-off-by: Taehee Yoo --- drivers/s390/net/qeth_l3_main.c | 2 +- include/net/if_inet6.h | 9 +- net/batman-adv/multicast.c | 2 +- net/ipv6/addrconf.c | 7 +- net/ipv6/addrconf_core.c | 3 +- net/ipv6/mcast.c | 999 ++++++++++++++++---------------- 6 files changed, 523 insertions(+), 499 deletions(-) diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c index dd441eaec66e..e49abdeff69c 100644 --- a/drivers/s390/net/qeth_l3_main.c +++ b/drivers/s390/net/qeth_l3_main.c @@ -1099,7 +1099,7 @@ static int qeth_l3_add_mcast_rtnl(struct net_device *dev, int vid, void *arg) tmp.is_multicast = 1; read_lock_bh(&in6_dev->lock); - for (im6 = in6_dev->mc_list; im6 != NULL; im6 = im6->next) { + list_for_each_entry(im6, &in6_dev->mc_list, list) { tmp.u.a6.addr = im6->mca_addr; ipm = qeth_l3_find_addr_by_ip(card, &tmp); diff --git a/include/net/if_inet6.h b/include/net/if_inet6.h index 8bf5906073bc..1262ccd5221e 100644 --- a/include/net/if_inet6.h +++ b/include/net/if_inet6.h @@ -114,7 +114,7 @@ struct ip6_sf_list { struct ifmcaddr6 { struct in6_addr mca_addr; struct inet6_dev *idev; - struct ifmcaddr6 *next; + struct list_head list; struct ip6_sf_list *mca_sources; struct ip6_sf_list *mca_tomb; unsigned int mca_sfmode; @@ -164,10 +164,9 @@ struct inet6_dev { struct net_device *dev; struct list_head addr_list; - - struct ifmcaddr6 *mc_list; - struct ifmcaddr6 *mc_tomb; - spinlock_t mc_lock; + struct list_head mc_list; + struct list_head mc_tomb_list; + spinlock_t mc_tomb_lock; unsigned char mc_qrv; /* Query Robustness Variable */ unsigned char mc_gq_running; diff --git a/net/batman-adv/multicast.c b/net/batman-adv/multicast.c index 854e5ff28a3f..1a9ad5a9257b 100644 --- a/net/batman-adv/multicast.c +++ b/net/batman-adv/multicast.c @@ -455,7
+455,7 @@ batadv_mcast_mla_softif_get_ipv6(struct net_device *dev, } read_lock_bh(&in6_dev->lock); - for (pmc6 = in6_dev->mc_list; pmc6; pmc6 = pmc6->next) { + list_for_each_entry(pmc6, &in6_dev->mc_list, list) { if (IPV6_ADDR_MC_SCOPE(&pmc6->mca_addr) < IPV6_ADDR_SCOPE_LINKLOCAL) continue; diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c index f2337fb756ac..e9fe0eee5768 100644 --- a/net/ipv6/addrconf.c +++ b/net/ipv6/addrconf.c @@ -5110,13 +5110,14 @@ static int in6_dump_addrs(struct inet6_dev *idev, struct sk_buff *skb, fillargs->event = RTM_GETMULTICAST; /* multicast address */ - for (ifmca = idev->mc_list; ifmca; - ifmca = ifmca->next, ip_idx++) { + list_for_each_entry(ifmca, &idev->mc_list, list) { if (ip_idx < s_ip_idx) - continue; + goto next2; err = inet6_fill_ifmcaddr(skb, ifmca, fillargs); if (err < 0) break; +next2: + ip_idx++; } break; case ANYCAST_ADDR: diff --git a/net/ipv6/addrconf_core.c b/net/ipv6/addrconf_core.c index c70c192bc91b..b55f85dcfd74 100644 --- a/net/ipv6/addrconf_core.c +++ b/net/ipv6/addrconf_core.c @@ -250,7 +250,8 @@ void in6_dev_finish_destroy(struct inet6_dev *idev) struct net_device *dev = idev->dev; WARN_ON(!list_empty(&idev->addr_list)); - WARN_ON(idev->mc_list); + WARN_ON(!list_empty(&idev->mc_list)); + WARN_ON(!list_empty(&idev->mc_tomb_list)); WARN_ON(timer_pending(&idev->rs_timer)); #ifdef NET_REFCNT_DEBUG diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c index 6c8604390266..508c007df84f 100644 --- a/net/ipv6/mcast.c +++ b/net/ipv6/mcast.c @@ -69,24 +69,19 @@ static int __mld2_query_bugs[] __attribute__((__unused__)) = { static struct in6_addr mld2_all_mcr = MLD2_ALL_MCR_INIT; -static void igmp6_join_group(struct ifmcaddr6 *ma); -static void igmp6_leave_group(struct ifmcaddr6 *ma); +static void igmp6_join_group(struct ifmcaddr6 *mc); +static void igmp6_leave_group(struct ifmcaddr6 *mc); static void igmp6_timer_handler(struct timer_list *t); -static void mld_gq_timer_expire(struct timer_list *t); -static void 
mld_ifc_timer_expire(struct timer_list *t); static void mld_ifc_event(struct inet6_dev *idev); -static void mld_add_delrec(struct inet6_dev *idev, struct ifmcaddr6 *pmc); -static void mld_del_delrec(struct inet6_dev *idev, struct ifmcaddr6 *pmc); -static void mld_clear_delrec(struct inet6_dev *idev); static bool mld_in_v1_mode(const struct inet6_dev *idev); -static int sf_setstate(struct ifmcaddr6 *pmc); -static void sf_markstate(struct ifmcaddr6 *pmc); -static void ip6_mc_clear_src(struct ifmcaddr6 *pmc); -static int ip6_mc_del_src(struct inet6_dev *idev, const struct in6_addr *pmca, +static int sf_setstate(struct ifmcaddr6 *mc); +static void sf_markstate(struct ifmcaddr6 *mc); +static void ip6_mc_clear_src(struct ifmcaddr6 *mc); +static int ip6_mc_del_src(struct inet6_dev *idev, const struct in6_addr *mca, int sfmode, int sfcount, const struct in6_addr *psfsrc, int delta); -static int ip6_mc_add_src(struct inet6_dev *idev, const struct in6_addr *pmca, +static int ip6_mc_add_src(struct inet6_dev *idev, const struct in6_addr *mca, int sfmode, int sfcount, const struct in6_addr *psfsrc, int delta); static int ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *iml, @@ -113,10 +108,23 @@ int sysctl_mld_qrv __read_mostly = MLD_QRV_DEFAULT; * socket join on multicast group */ -#define for_each_pmc_rcu(np, pmc) \ - for (pmc = rcu_dereference(np->ipv6_mc_list); \ - pmc != NULL; \ - pmc = rcu_dereference(pmc->next)) +#define for_each_mc_rcu(np, mc) \ + for (mc = rcu_dereference((np)->ipv6_mc_list); \ + mc; \ + mc = rcu_dereference(mc->next)) + +static void mca_get(struct ifmcaddr6 *mc) +{ + refcount_inc(&mc->mca_refcnt); +} + +static void mca_put(struct ifmcaddr6 *mc) +{ + if (refcount_dec_and_test(&mc->mca_refcnt)) { + in6_dev_put(mc->idev); + kfree(mc); + } +} static int unsolicited_report_interval(struct inet6_dev *idev) { @@ -145,7 +153,7 @@ static int __ipv6_sock_mc_join(struct sock *sk, int ifindex, return -EINVAL; rcu_read_lock(); - for_each_pmc_rcu(np, 
mc_lst) { + for_each_mc_rcu(np, mc_lst) { if ((ifindex == 0 || mc_lst->ifindex == ifindex) && ipv6_addr_equal(&mc_lst->addr, addr)) { rcu_read_unlock(); @@ -328,15 +336,15 @@ void ipv6_sock_mc_close(struct sock *sk) int ip6_mc_source(int add, int omode, struct sock *sk, struct group_source_req *pgsr) { - struct in6_addr *source, *group; - struct ipv6_mc_socklist *pmc; - struct inet6_dev *idev; struct ipv6_pinfo *inet6 = inet6_sk(sk); - struct ip6_sf_socklist *psl; + struct in6_addr *source, *group; struct net *net = sock_net(sk); - int i, j, rv; + struct ipv6_mc_socklist *mc; + struct ip6_sf_socklist *psl; + struct inet6_dev *idev; int leavegroup = 0; - int pmclocked = 0; + int mclocked = 0; + int i, j, rv; int err; source = &((struct sockaddr_in6 *)&pgsr->gsr_source)->sin6_addr; @@ -354,33 +362,33 @@ int ip6_mc_source(int add, int omode, struct sock *sk, err = -EADDRNOTAVAIL; - for_each_pmc_rcu(inet6, pmc) { - if (pgsr->gsr_interface && pmc->ifindex != pgsr->gsr_interface) + for_each_mc_rcu(inet6, mc) { + if (pgsr->gsr_interface && mc->ifindex != pgsr->gsr_interface) continue; - if (ipv6_addr_equal(&pmc->addr, group)) + if (ipv6_addr_equal(&mc->addr, group)) break; } - if (!pmc) { /* must have a prior join */ + if (!mc) { /* must have a prior join */ err = -EINVAL; goto done; } /* if a source filter was set, must be the same mode as before */ - if (pmc->sflist) { - if (pmc->sfmode != omode) { + if (mc->sflist) { + if (mc->sfmode != omode) { err = -EINVAL; goto done; } - } else if (pmc->sfmode != omode) { + } else if (mc->sfmode != omode) { /* allow mode switches for empty-set filters */ ip6_mc_add_src(idev, group, omode, 0, NULL, 0); - ip6_mc_del_src(idev, group, pmc->sfmode, 0, NULL, 0); - pmc->sfmode = omode; + ip6_mc_del_src(idev, group, mc->sfmode, 0, NULL, 0); + mc->sfmode = omode; } - write_lock(&pmc->sflock); - pmclocked = 1; + write_lock(&mc->sflock); + mclocked = 1; - psl = pmc->sflist; + psl = mc->sflist; if (!add) { if (!psl) goto done; /* err = 
-EADDRNOTAVAIL */ @@ -432,7 +440,8 @@ int ip6_mc_source(int add, int omode, struct sock *sk, newpsl->sl_addr[i] = psl->sl_addr[i]; sock_kfree_s(sk, psl, IP6_SFLSIZE(psl->sl_max)); } - pmc->sflist = psl = newpsl; + psl = newpsl; + mc->sflist = psl; } rv = 1; /* > 0 for insert logic below if sl_count is 0 */ for (i = 0; i < psl->sl_count; i++) { @@ -448,8 +457,8 @@ int ip6_mc_source(int add, int omode, struct sock *sk, /* update the interface list */ ip6_mc_add_src(idev, group, omode, 1, source, 1); done: - if (pmclocked) - write_unlock(&pmc->sflock); + if (mclocked) + write_unlock(&mc->sflock); read_unlock_bh(&idev->lock); rcu_read_unlock(); if (leavegroup) @@ -460,12 +469,12 @@ int ip6_mc_source(int add, int omode, struct sock *sk, int ip6_mc_msfilter(struct sock *sk, struct group_filter *gsf, struct sockaddr_storage *list) { - const struct in6_addr *group; - struct ipv6_mc_socklist *pmc; - struct inet6_dev *idev; struct ipv6_pinfo *inet6 = inet6_sk(sk); struct ip6_sf_socklist *newpsl, *psl; struct net *net = sock_net(sk); + const struct in6_addr *group; + struct ipv6_mc_socklist *mc; + struct inet6_dev *idev; int leavegroup = 0; int i, err; @@ -492,13 +501,13 @@ int ip6_mc_msfilter(struct sock *sk, struct group_filter *gsf, goto done; } - for_each_pmc_rcu(inet6, pmc) { - if (pmc->ifindex != gsf->gf_interface) + for_each_mc_rcu(inet6, mc) { + if (mc->ifindex != gsf->gf_interface) continue; - if (ipv6_addr_equal(&pmc->addr, group)) + if (ipv6_addr_equal(&mc->addr, group)) break; } - if (!pmc) { /* must have a prior join */ + if (!mc) { /* must have a prior join */ err = -EINVAL; goto done; } @@ -524,20 +533,20 @@ int ip6_mc_msfilter(struct sock *sk, struct group_filter *gsf, } } else { newpsl = NULL; - (void) ip6_mc_add_src(idev, group, gsf->gf_fmode, 0, NULL, 0); + ip6_mc_add_src(idev, group, gsf->gf_fmode, 0, NULL, 0); } - write_lock(&pmc->sflock); - psl = pmc->sflist; + write_lock(&mc->sflock); + psl = mc->sflist; if (psl) { - (void) ip6_mc_del_src(idev, group, 
pmc->sfmode, - psl->sl_count, psl->sl_addr, 0); + ip6_mc_del_src(idev, group, mc->sfmode, + psl->sl_count, psl->sl_addr, 0); sock_kfree_s(sk, psl, IP6_SFLSIZE(psl->sl_max)); } else - (void) ip6_mc_del_src(idev, group, pmc->sfmode, 0, NULL, 0); - pmc->sflist = newpsl; - pmc->sfmode = gsf->gf_fmode; - write_unlock(&pmc->sflock); + ip6_mc_del_src(idev, group, mc->sfmode, 0, NULL, 0); + mc->sflist = newpsl; + mc->sfmode = gsf->gf_fmode; + write_unlock(&mc->sflock); err = 0; done: read_unlock_bh(&idev->lock); @@ -552,7 +561,7 @@ int ip6_mc_msfget(struct sock *sk, struct group_filter *gsf, { int err, i, count, copycount; const struct in6_addr *group; - struct ipv6_mc_socklist *pmc; + struct ipv6_mc_socklist *mc; struct inet6_dev *idev; struct ipv6_pinfo *inet6 = inet6_sk(sk); struct ip6_sf_socklist *psl; @@ -577,16 +586,16 @@ int ip6_mc_msfget(struct sock *sk, struct group_filter *gsf, * so reading the list is safe. */ - for_each_pmc_rcu(inet6, pmc) { - if (pmc->ifindex != gsf->gf_interface) + for_each_mc_rcu(inet6, mc) { + if (mc->ifindex != gsf->gf_interface) continue; - if (ipv6_addr_equal(group, &pmc->addr)) + if (ipv6_addr_equal(group, &mc->addr)) break; } - if (!pmc) /* must have a prior join */ + if (!mc) /* must have a prior join */ goto done; - gsf->gf_fmode = pmc->sfmode; - psl = pmc->sflist; + gsf->gf_fmode = mc->sfmode; + psl = mc->sflist; count = psl ? psl->sl_count : 0; read_unlock_bh(&idev->lock); rcu_read_unlock(); @@ -594,7 +603,7 @@ int ip6_mc_msfget(struct sock *sk, struct group_filter *gsf, copycount = count < gsf->gf_numsrc ? count : gsf->gf_numsrc; gsf->gf_numsrc = count; /* changes to psl require the socket lock, and a write lock - * on pmc->sflock. We have the socket lock so reading here is safe. + * on mc->sflock. We have the socket lock so reading here is safe. 
*/ for (i = 0; i < copycount; i++, p++) { struct sockaddr_in6 *psin6; @@ -623,7 +632,7 @@ bool inet6_mc_check(struct sock *sk, const struct in6_addr *mc_addr, bool rv = true; rcu_read_lock(); - for_each_pmc_rcu(np, mc) { + for_each_mc_rcu(np, mc) { if (ipv6_addr_equal(&mc->addr, mc_addr)) break; } @@ -723,7 +732,7 @@ static void igmp6_group_dropped(struct ifmcaddr6 *mc) */ static void mld_add_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im) { - struct ifmcaddr6 *pmc; + struct ifmcaddr6 *mc; /* this is an "ifmcaddr6" for convenience; only the fields below * are actually used. In particular, the refcnt and users are not @@ -731,98 +740,91 @@ static void mld_add_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im) * for deleted items allows change reports to use common code with * non-deleted or query-response MCA's. */ - pmc = kzalloc(sizeof(*pmc), GFP_ATOMIC); - if (!pmc) + mc = kzalloc(sizeof(*mc), GFP_ATOMIC); + if (!mc) return; spin_lock_bh(&im->mca_lock); - spin_lock_init(&pmc->mca_lock); - pmc->idev = im->idev; + spin_lock_init(&mc->mca_lock); + INIT_LIST_HEAD(&mc->list); + mc->idev = im->idev; in6_dev_hold(idev); - pmc->mca_addr = im->mca_addr; - pmc->mca_crcount = idev->mc_qrv; - pmc->mca_sfmode = im->mca_sfmode; - if (pmc->mca_sfmode == MCAST_INCLUDE) { + mc->mca_addr = im->mca_addr; + mc->mca_crcount = idev->mc_qrv; + mc->mca_sfmode = im->mca_sfmode; + if (mc->mca_sfmode == MCAST_INCLUDE) { struct ip6_sf_list *psf; - pmc->mca_tomb = im->mca_tomb; - pmc->mca_sources = im->mca_sources; + mc->mca_tomb = im->mca_tomb; + mc->mca_sources = im->mca_sources; im->mca_tomb = im->mca_sources = NULL; - for (psf = pmc->mca_sources; psf; psf = psf->sf_next) - psf->sf_crcount = pmc->mca_crcount; + for (psf = mc->mca_sources; psf; psf = psf->sf_next) + psf->sf_crcount = mc->mca_crcount; } spin_unlock_bh(&im->mca_lock); - spin_lock_bh(&idev->mc_lock); - pmc->next = idev->mc_tomb; - idev->mc_tomb = pmc; - spin_unlock_bh(&idev->mc_lock); + 
spin_lock_bh(&idev->mc_tomb_lock); + list_add(&mc->list, &idev->mc_tomb_list); + spin_unlock_bh(&idev->mc_tomb_lock); } static void mld_del_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im) { - struct ifmcaddr6 *pmc, *pmc_prev; + struct ifmcaddr6 *mc = NULL, *tmp = NULL; + struct in6_addr *mca = &im->mca_addr; struct ip6_sf_list *psf; - struct in6_addr *pmca = &im->mca_addr; + bool found = false; - spin_lock_bh(&idev->mc_lock); - pmc_prev = NULL; - for (pmc = idev->mc_tomb; pmc; pmc = pmc->next) { - if (ipv6_addr_equal(&pmc->mca_addr, pmca)) + spin_lock_bh(&idev->mc_tomb_lock); + list_for_each_entry_safe(mc, tmp, &idev->mc_tomb_list, list) { + if (ipv6_addr_equal(&mc->mca_addr, mca)) { + list_del(&mc->list); + found = true; break; - pmc_prev = pmc; - } - if (pmc) { - if (pmc_prev) - pmc_prev->next = pmc->next; - else - idev->mc_tomb = pmc->next; + } } - spin_unlock_bh(&idev->mc_lock); + spin_unlock_bh(&idev->mc_tomb_lock); spin_lock_bh(&im->mca_lock); - if (pmc) { - im->idev = pmc->idev; + if (found) { + im->idev = mc->idev; if (im->mca_sfmode == MCAST_INCLUDE) { - swap(im->mca_tomb, pmc->mca_tomb); - swap(im->mca_sources, pmc->mca_sources); + swap(im->mca_tomb, mc->mca_tomb); + swap(im->mca_sources, mc->mca_sources); for (psf = im->mca_sources; psf; psf = psf->sf_next) psf->sf_crcount = idev->mc_qrv; } else { im->mca_crcount = idev->mc_qrv; } - in6_dev_put(pmc->idev); - ip6_mc_clear_src(pmc); - kfree(pmc); + in6_dev_put(mc->idev); + ip6_mc_clear_src(mc); + kfree(mc); } spin_unlock_bh(&im->mca_lock); } static void mld_clear_delrec(struct inet6_dev *idev) { - struct ifmcaddr6 *pmc, *nextpmc; + struct ifmcaddr6 *mc, *tmp; - spin_lock_bh(&idev->mc_lock); - pmc = idev->mc_tomb; - idev->mc_tomb = NULL; - spin_unlock_bh(&idev->mc_lock); - - for (; pmc; pmc = nextpmc) { - nextpmc = pmc->next; - ip6_mc_clear_src(pmc); - in6_dev_put(pmc->idev); - kfree(pmc); + spin_lock_bh(&idev->mc_tomb_lock); + list_for_each_entry_safe(mc, tmp, &idev->mc_tomb_list, list) { + 
list_del(&mc->list); + ip6_mc_clear_src(mc); + in6_dev_put(mc->idev); + kfree(mc); } + spin_unlock_bh(&idev->mc_tomb_lock); /* clear dead sources, too */ read_lock_bh(&idev->lock); - for (pmc = idev->mc_list; pmc; pmc = pmc->next) { + list_for_each_entry_safe(mc, tmp, &idev->mc_list, list) { struct ip6_sf_list *psf, *psf_next; - spin_lock_bh(&pmc->mca_lock); - psf = pmc->mca_tomb; - pmc->mca_tomb = NULL; - spin_unlock_bh(&pmc->mca_lock); + spin_lock_bh(&mc->mca_lock); + psf = mc->mca_tomb; + mc->mca_tomb = NULL; + spin_unlock_bh(&mc->mca_lock); for (; psf; psf = psf_next) { psf_next = psf->sf_next; kfree(psf); @@ -831,19 +833,6 @@ static void mld_clear_delrec(struct inet6_dev *idev) read_unlock_bh(&idev->lock); } -static void mca_get(struct ifmcaddr6 *mc) -{ - refcount_inc(&mc->mca_refcnt); -} - -static void ma_put(struct ifmcaddr6 *mc) -{ - if (refcount_dec_and_test(&mc->mca_refcnt)) { - in6_dev_put(mc->idev); - kfree(mc); - } -} - static struct ifmcaddr6 *mca_alloc(struct inet6_dev *idev, const struct in6_addr *addr, unsigned int mode) @@ -858,6 +847,7 @@ static struct ifmcaddr6 *mca_alloc(struct inet6_dev *idev, mc->mca_addr = *addr; mc->idev = idev; /* reference taken by caller */ + INIT_LIST_HEAD(&mc->list); mc->mca_users = 1; /* mca_stamp should be updated upon changes */ mc->mca_cstamp = mc->mca_tstamp = jiffies; @@ -880,14 +870,13 @@ static struct ifmcaddr6 *mca_alloc(struct inet6_dev *idev, static int __ipv6_dev_mc_inc(struct net_device *dev, const struct in6_addr *addr, unsigned int mode) { - struct ifmcaddr6 *mc; struct inet6_dev *idev; + struct ifmcaddr6 *mc; ASSERT_RTNL(); /* we need to take a reference on idev */ idev = in6_dev_get(dev); - if (!idev) return -EINVAL; @@ -898,7 +887,7 @@ static int __ipv6_dev_mc_inc(struct net_device *dev, return -ENODEV; } - for (mc = idev->mc_list; mc; mc = mc->next) { + list_for_each_entry(mc, &idev->mc_list, list) { if (ipv6_addr_equal(&mc->mca_addr, addr)) { mc->mca_users++; write_unlock_bh(&idev->lock); @@ -915,8 
+904,7 @@ static int __ipv6_dev_mc_inc(struct net_device *dev, return -ENOMEM; } - mc->next = idev->mc_list; - idev->mc_list = mc; + list_add(&mc->list, &idev->mc_list); /* Hold this for the code below before we unlock, * it is already exposed via idev->mc_list. @@ -926,7 +914,7 @@ static int __ipv6_dev_mc_inc(struct net_device *dev, mld_del_delrec(idev, mc); igmp6_group_added(mc); - ma_put(mc); + mca_put(mc); return 0; } @@ -941,29 +929,28 @@ EXPORT_SYMBOL(ipv6_dev_mc_inc); */ int __ipv6_dev_mc_dec(struct inet6_dev *idev, const struct in6_addr *addr) { - struct ifmcaddr6 *ma, **map; + struct ifmcaddr6 *mc, *tmp; ASSERT_RTNL(); write_lock_bh(&idev->lock); - for (map = &idev->mc_list; (ma = *map) != NULL; map = &ma->next) { - if (ipv6_addr_equal(&ma->mca_addr, addr)) { - if (--ma->mca_users == 0) { - *map = ma->next; + list_for_each_entry_safe(mc, tmp, &idev->mc_list, list) { + if (ipv6_addr_equal(&mc->mca_addr, addr)) { + if (--mc->mca_users == 0) { + list_del(&mc->list); write_unlock_bh(&idev->lock); - - igmp6_group_dropped(ma); - ip6_mc_clear_src(ma); - - ma_put(ma); + igmp6_group_dropped(mc); + ip6_mc_clear_src(mc); + mca_put(mc); return 0; } + write_unlock_bh(&idev->lock); return 0; } } - write_unlock_bh(&idev->lock); + write_unlock_bh(&idev->lock); return -ENOENT; } @@ -990,19 +977,22 @@ EXPORT_SYMBOL(ipv6_dev_mc_dec); bool ipv6_chk_mcast_addr(struct net_device *dev, const struct in6_addr *group, const struct in6_addr *src_addr) { + bool rv = false, found = false; struct inet6_dev *idev; struct ifmcaddr6 *mc; - bool rv = false; rcu_read_lock(); idev = __in6_dev_get(dev); if (idev) { read_lock_bh(&idev->lock); - for (mc = idev->mc_list; mc; mc = mc->next) { - if (ipv6_addr_equal(&mc->mca_addr, group)) + list_for_each_entry(mc, &idev->mc_list, list) { + if (ipv6_addr_equal(&mc->mca_addr, group)) { + found = true; break; + } } - if (mc) { + + if (found) { if (src_addr && !ipv6_addr_any(src_addr)) { struct ip6_sf_list *psf; @@ -1076,44 +1066,44 @@ static void 
mld_dad_stop_timer(struct inet6_dev *idev) * IGMP handling (alias multicast ICMPv6 messages) */ -static void igmp6_group_queried(struct ifmcaddr6 *ma, unsigned long resptime) +static void igmp6_group_queried(struct ifmcaddr6 *mc, unsigned long resptime) { unsigned long delay = resptime; /* Do not start timer for these addresses */ - if (ipv6_addr_is_ll_all_nodes(&ma->mca_addr) || - IPV6_ADDR_MC_SCOPE(&ma->mca_addr) < IPV6_ADDR_SCOPE_LINKLOCAL) + if (ipv6_addr_is_ll_all_nodes(&mc->mca_addr) || + IPV6_ADDR_MC_SCOPE(&mc->mca_addr) < IPV6_ADDR_SCOPE_LINKLOCAL) return; - if (del_timer(&ma->mca_timer)) { - refcount_dec(&ma->mca_refcnt); - delay = ma->mca_timer.expires - jiffies; + if (del_timer(&mc->mca_timer)) { + refcount_dec(&mc->mca_refcnt); + delay = mc->mca_timer.expires - jiffies; } if (delay >= resptime) delay = prandom_u32() % resptime; - ma->mca_timer.expires = jiffies + delay; - if (!mod_timer(&ma->mca_timer, jiffies + delay)) - refcount_inc(&ma->mca_refcnt); - ma->mca_flags |= MAF_TIMER_RUNNING; + mc->mca_timer.expires = jiffies + delay; + if (!mod_timer(&mc->mca_timer, jiffies + delay)) + refcount_inc(&mc->mca_refcnt); + mc->mca_flags |= MAF_TIMER_RUNNING; } /* mark EXCLUDE-mode sources */ -static bool mld_xmarksources(struct ifmcaddr6 *pmc, int nsrcs, +static bool mld_xmarksources(struct ifmcaddr6 *mc, int nsrcs, const struct in6_addr *srcs) { struct ip6_sf_list *psf; int i, scount; scount = 0; - for (psf = pmc->mca_sources; psf; psf = psf->sf_next) { + for (psf = mc->mca_sources; psf; psf = psf->sf_next) { if (scount == nsrcs) break; for (i = 0; i < nsrcs; i++) { /* skip inactive filters */ if (psf->sf_count[MCAST_INCLUDE] || - pmc->mca_sfcount[MCAST_EXCLUDE] != + mc->mca_sfcount[MCAST_EXCLUDE] != psf->sf_count[MCAST_EXCLUDE]) break; if (ipv6_addr_equal(&srcs[i], &psf->sf_addr)) { @@ -1122,25 +1112,25 @@ static bool mld_xmarksources(struct ifmcaddr6 *pmc, int nsrcs, } } } - pmc->mca_flags &= ~MAF_GSQUERY; + mc->mca_flags &= ~MAF_GSQUERY; if (scount == 
nsrcs) /* all sources excluded */ return false; return true; } -static bool mld_marksources(struct ifmcaddr6 *pmc, int nsrcs, +static bool mld_marksources(struct ifmcaddr6 *mc, int nsrcs, const struct in6_addr *srcs) { struct ip6_sf_list *psf; int i, scount; - if (pmc->mca_sfmode == MCAST_EXCLUDE) - return mld_xmarksources(pmc, nsrcs, srcs); + if (mc->mca_sfmode == MCAST_EXCLUDE) + return mld_xmarksources(mc, nsrcs, srcs); /* mark INCLUDE-mode sources */ scount = 0; - for (psf = pmc->mca_sources; psf; psf = psf->sf_next) { + for (psf = mc->mca_sources; psf; psf = psf->sf_next) { if (scount == nsrcs) break; for (i = 0; i < nsrcs; i++) { @@ -1152,10 +1142,10 @@ static bool mld_marksources(struct ifmcaddr6 *pmc, int nsrcs, } } if (!scount) { - pmc->mca_flags &= ~MAF_GSQUERY; + mc->mca_flags &= ~MAF_GSQUERY; return false; } - pmc->mca_flags |= MAF_GSQUERY; + mc->mca_flags |= MAF_GSQUERY; return true; } @@ -1333,10 +1323,10 @@ static int mld_process_v2(struct inet6_dev *idev, struct mld2_query *mld, int igmp6_event_query(struct sk_buff *skb) { struct mld2_query *mlh2 = NULL; - struct ifmcaddr6 *ma; const struct in6_addr *group; unsigned long max_delay; struct inet6_dev *idev; + struct ifmcaddr6 *mc; struct mld_msg *mld; int group_type; int mark = 0; @@ -1416,31 +1406,31 @@ int igmp6_event_query(struct sk_buff *skb) read_lock_bh(&idev->lock); if (group_type == IPV6_ADDR_ANY) { - for (ma = idev->mc_list; ma; ma = ma->next) { - spin_lock_bh(&ma->mca_lock); - igmp6_group_queried(ma, max_delay); - spin_unlock_bh(&ma->mca_lock); + list_for_each_entry(mc, &idev->mc_list, list) { + spin_lock_bh(&mc->mca_lock); + igmp6_group_queried(mc, max_delay); + spin_unlock_bh(&mc->mca_lock); } } else { - for (ma = idev->mc_list; ma; ma = ma->next) { - if (!ipv6_addr_equal(group, &ma->mca_addr)) + list_for_each_entry(mc, &idev->mc_list, list) { + if (!ipv6_addr_equal(group, &mc->mca_addr)) continue; - spin_lock_bh(&ma->mca_lock); - if (ma->mca_flags & MAF_TIMER_RUNNING) { + 
spin_lock_bh(&mc->mca_lock); + if (mc->mca_flags & MAF_TIMER_RUNNING) { /* gsquery <- gsquery && mark */ if (!mark) - ma->mca_flags &= ~MAF_GSQUERY; + mc->mca_flags &= ~MAF_GSQUERY; } else { /* gsquery <- mark */ if (mark) - ma->mca_flags |= MAF_GSQUERY; + mc->mca_flags |= MAF_GSQUERY; else - ma->mca_flags &= ~MAF_GSQUERY; + mc->mca_flags &= ~MAF_GSQUERY; } - if (!(ma->mca_flags & MAF_GSQUERY) || - mld_marksources(ma, ntohs(mlh2->mld2q_nsrcs), mlh2->mld2q_srcs)) - igmp6_group_queried(ma, max_delay); - spin_unlock_bh(&ma->mca_lock); + if (!(mc->mca_flags & MAF_GSQUERY) || + mld_marksources(mc, ntohs(mlh2->mld2q_nsrcs), mlh2->mld2q_srcs)) + igmp6_group_queried(mc, max_delay); + spin_unlock_bh(&mc->mca_lock); break; } } @@ -1452,8 +1442,8 @@ int igmp6_event_query(struct sk_buff *skb) /* called with rcu_read_lock() */ int igmp6_event_report(struct sk_buff *skb) { - struct ifmcaddr6 *ma; struct inet6_dev *idev; + struct ifmcaddr6 *mc; struct mld_msg *mld; int addr_type; @@ -1486,13 +1476,13 @@ int igmp6_event_report(struct sk_buff *skb) */ read_lock_bh(&idev->lock); - for (ma = idev->mc_list; ma; ma = ma->next) { - if (ipv6_addr_equal(&ma->mca_addr, &mld->mld_mca)) { - spin_lock(&ma->mca_lock); - if (del_timer(&ma->mca_timer)) - refcount_dec(&ma->mca_refcnt); - ma->mca_flags &= ~(MAF_LAST_REPORTER|MAF_TIMER_RUNNING); - spin_unlock(&ma->mca_lock); + list_for_each_entry(mc, &idev->mc_list, list) { + if (ipv6_addr_equal(&mc->mca_addr, &mld->mld_mca)) { + spin_lock(&mc->mca_lock); + if (del_timer(&mc->mca_timer)) + refcount_dec(&mc->mca_refcnt); + mc->mca_flags &= ~(MAF_LAST_REPORTER | MAF_TIMER_RUNNING); + spin_unlock(&mc->mca_lock); break; } } @@ -1500,7 +1490,7 @@ int igmp6_event_report(struct sk_buff *skb) return 0; } -static bool is_in(struct ifmcaddr6 *pmc, struct ip6_sf_list *psf, int type, +static bool is_in(struct ifmcaddr6 *mc, struct ip6_sf_list *psf, int type, int gdeleted, int sdeleted) { switch (type) { @@ -1508,15 +1498,15 @@ static bool is_in(struct 
ifmcaddr6 *pmc, struct ip6_sf_list *psf, int type, case MLD2_MODE_IS_EXCLUDE: if (gdeleted || sdeleted) return false; - if (!((pmc->mca_flags & MAF_GSQUERY) && !psf->sf_gsresp)) { - if (pmc->mca_sfmode == MCAST_INCLUDE) + if (!((mc->mca_flags & MAF_GSQUERY) && !psf->sf_gsresp)) { + if (mc->mca_sfmode == MCAST_INCLUDE) return true; /* don't include if this source is excluded * in all filters */ if (psf->sf_count[MCAST_INCLUDE]) return type == MLD2_MODE_IS_INCLUDE; - return pmc->mca_sfcount[MCAST_EXCLUDE] == + return mc->mca_sfcount[MCAST_EXCLUDE] == psf->sf_count[MCAST_EXCLUDE]; } return false; @@ -1527,31 +1517,31 @@ static bool is_in(struct ifmcaddr6 *pmc, struct ip6_sf_list *psf, int type, case MLD2_CHANGE_TO_EXCLUDE: if (gdeleted || sdeleted) return false; - if (pmc->mca_sfcount[MCAST_EXCLUDE] == 0 || + if (mc->mca_sfcount[MCAST_EXCLUDE] == 0 || psf->sf_count[MCAST_INCLUDE]) return false; - return pmc->mca_sfcount[MCAST_EXCLUDE] == + return mc->mca_sfcount[MCAST_EXCLUDE] == psf->sf_count[MCAST_EXCLUDE]; case MLD2_ALLOW_NEW_SOURCES: if (gdeleted || !psf->sf_crcount) return false; - return (pmc->mca_sfmode == MCAST_INCLUDE) ^ sdeleted; + return (mc->mca_sfmode == MCAST_INCLUDE) ^ sdeleted; case MLD2_BLOCK_OLD_SOURCES: - if (pmc->mca_sfmode == MCAST_INCLUDE) + if (mc->mca_sfmode == MCAST_INCLUDE) return gdeleted || (psf->sf_crcount && sdeleted); return psf->sf_crcount && !gdeleted && !sdeleted; } return false; } -static int -mld_scount(struct ifmcaddr6 *pmc, int type, int gdeleted, int sdeleted) +static int mld_scount(struct ifmcaddr6 *mc, int type, int gdeleted, + int sdeleted) { struct ip6_sf_list *psf; int scount = 0; - for (psf = pmc->mca_sources; psf; psf = psf->sf_next) { - if (!is_in(pmc, psf, type, gdeleted, sdeleted)) + for (psf = mc->mca_sources; psf; psf = psf->sf_next) { + if (!is_in(mc, psf, type, gdeleted, sdeleted)) continue; scount++; } @@ -1585,21 +1575,23 @@ static void ip6_mc_hdr(struct sock *sk, struct sk_buff *skb, static struct sk_buff 
*mld_newpack(struct inet6_dev *idev, unsigned int mtu) { + u8 ra[8] = { IPPROTO_ICMPV6, 0, + IPV6_TLV_ROUTERALERT, 2, + 0, 0, IPV6_TLV_PADN, 0 }; struct net_device *dev = idev->dev; - struct net *net = dev_net(dev); - struct sock *sk = net->ipv6.igmp_sk; - struct sk_buff *skb; - struct mld2_report *pmr; - struct in6_addr addr_buf; - const struct in6_addr *saddr; int hlen = LL_RESERVED_SPACE(dev); int tlen = dev->needed_tailroom; - unsigned int size = mtu + hlen + tlen; + struct net *net = dev_net(dev); + const struct in6_addr *saddr; + struct in6_addr addr_buf; + struct mld2_report *pmr; + struct sk_buff *skb; + unsigned int size; + struct sock *sk; int err; - u8 ra[8] = { IPPROTO_ICMPV6, 0, - IPV6_TLV_ROUTERALERT, 2, 0, 0, - IPV6_TLV_PADN, 0 }; + size = mtu + hlen + tlen; + sk = net->ipv6.igmp_sk; /* we assume size > sizeof(ra) here */ /* limit our allocations to order-0 page */ size = min_t(int, size, SKB_MAX_ORDER(0, 0)); @@ -1639,21 +1631,22 @@ static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu) static void mld_sendpack(struct sk_buff *skb) { struct ipv6hdr *pip6 = ipv6_hdr(skb); - struct mld2_report *pmr = - (struct mld2_report *)skb_transport_header(skb); + struct net *net = dev_net(skb->dev); int payload_len, mldlen; + struct mld2_report *pmr; struct inet6_dev *idev; - struct net *net = dev_net(skb->dev); - int err; - struct flowi6 fl6; struct dst_entry *dst; + struct flowi6 fl6; + int err; + + pmr = (struct mld2_report *)skb_transport_header(skb); rcu_read_lock(); idev = __in6_dev_get(skb->dev); IP6_UPD_PO_STATS(net, idev, IPSTATS_MIB_OUT, skb->len); payload_len = (skb_tail_pointer(skb) - skb_network_header(skb)) - - sizeof(*pip6); + sizeof(*pip6); mldlen = skb_tail_pointer(skb) - skb_transport_header(skb); pip6->payload_len = htons(payload_len); @@ -1695,19 +1688,20 @@ static void mld_sendpack(struct sk_buff *skb) goto out; } -static int grec_size(struct ifmcaddr6 *pmc, int type, int gdel, int sdel) +static int grec_size(struct 
ifmcaddr6 *mc, int type, int gdel, int sdel) { - return sizeof(struct mld2_grec) + 16 * mld_scount(pmc,type,gdel,sdel); + return sizeof(struct mld2_grec) + 16 * mld_scount(mc, type, gdel, sdel); } -static struct sk_buff *add_grhead(struct sk_buff *skb, struct ifmcaddr6 *pmc, - int type, struct mld2_grec **ppgr, unsigned int mtu) +static struct sk_buff *add_grhead(struct sk_buff *skb, struct ifmcaddr6 *mc, + int type, struct mld2_grec **ppgr, + unsigned int mtu) { struct mld2_report *pmr; struct mld2_grec *pgr; if (!skb) { - skb = mld_newpack(pmc->idev, mtu); + skb = mld_newpack(mc->idev, mtu); if (!skb) return NULL; } @@ -1715,7 +1709,7 @@ static struct sk_buff *add_grhead(struct sk_buff *skb, struct ifmcaddr6 *pmc, pgr->grec_type = type; pgr->grec_auxwords = 0; pgr->grec_nsrcs = 0; - pgr->grec_mca = pmc->mca_addr; /* structure copy */ + pgr->grec_mca = mc->mca_addr; /* structure copy */ pmr = (struct mld2_report *)skb_transport_header(skb); pmr->mld2r_ngrec = htons(ntohs(pmr->mld2r_ngrec)+1); *ppgr = pgr; @@ -1724,18 +1718,20 @@ static struct sk_buff *add_grhead(struct sk_buff *skb, struct ifmcaddr6 *pmc, #define AVAILABLE(skb) ((skb) ? 
skb_availroom(skb) : 0) -static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *pmc, - int type, int gdeleted, int sdeleted, int crsend) +static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *mc, + int type, int gdeleted, int sdeleted, + int crsend) { - struct inet6_dev *idev = pmc->idev; - struct net_device *dev = idev->dev; - struct mld2_report *pmr; - struct mld2_grec *pgr = NULL; struct ip6_sf_list *psf, *psf_next, *psf_prev, **psf_list; int scount, stotal, first, isquery, truncate; + struct inet6_dev *idev = mc->idev; + struct mld2_grec *pgr = NULL; + struct mld2_report *pmr; + struct net_device *dev; unsigned int mtu; - if (pmc->mca_flags & MAF_NOREPORT) + dev = idev->dev; + if (mc->mca_flags & MAF_NOREPORT) return skb; mtu = READ_ONCE(dev->mtu); @@ -1749,7 +1745,7 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *pmc, stotal = scount = 0; - psf_list = sdeleted ? &pmc->mca_tomb : &pmc->mca_sources; + psf_list = sdeleted ? &mc->mca_tomb : &mc->mca_sources; if (!*psf_list) goto empty_source; @@ -1759,7 +1755,7 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *pmc, /* EX and TO_EX get a fresh packet, if needed */ if (truncate) { if (pmr && pmr->mld2r_ngrec && - AVAILABLE(skb) < grec_size(pmc, type, gdeleted, sdeleted)) { + AVAILABLE(skb) < grec_size(mc, type, gdeleted, sdeleted)) { if (skb) mld_sendpack(skb); skb = mld_newpack(idev, mtu); @@ -1772,7 +1768,7 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *pmc, psf_next = psf->sf_next; - if (!is_in(pmc, psf, type, gdeleted, sdeleted) && !crsend) { + if (!is_in(mc, psf, type, gdeleted, sdeleted) && !crsend) { psf_prev = psf; continue; } @@ -1780,8 +1776,8 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *pmc, /* Based on RFC3810 6.1. Should not send source-list change * records when there is a filter mode change. 
*/ - if (((gdeleted && pmc->mca_sfmode == MCAST_EXCLUDE) || - (!gdeleted && pmc->mca_crcount)) && + if (((gdeleted && mc->mca_sfmode == MCAST_EXCLUDE) || + (!gdeleted && mc->mca_crcount)) && (type == MLD2_ALLOW_NEW_SOURCES || type == MLD2_BLOCK_OLD_SOURCES) && psf->sf_crcount) goto decrease_sf_crcount; @@ -1803,7 +1799,7 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *pmc, scount = 0; } if (first) { - skb = add_grhead(skb, pmc, type, &pgr, mtu); + skb = add_grhead(skb, mc, type, &pgr, mtu); first = 0; } if (!skb) @@ -1832,49 +1828,49 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *pmc, if (type == MLD2_ALLOW_NEW_SOURCES || type == MLD2_BLOCK_OLD_SOURCES) return skb; - if (pmc->mca_crcount || isquery || crsend) { + if (mc->mca_crcount || isquery || crsend) { /* make sure we have room for group header */ if (skb && AVAILABLE(skb) < sizeof(struct mld2_grec)) { mld_sendpack(skb); skb = NULL; /* add_grhead will get a new one */ } - skb = add_grhead(skb, pmc, type, &pgr, mtu); + skb = add_grhead(skb, mc, type, &pgr, mtu); } } if (pgr) pgr->grec_nsrcs = htons(scount); if (isquery) - pmc->mca_flags &= ~MAF_GSQUERY; /* clear query state */ + mc->mca_flags &= ~MAF_GSQUERY; /* clear query state */ return skb; } -static void mld_send_report(struct inet6_dev *idev, struct ifmcaddr6 *pmc) +static void mld_send_report(struct inet6_dev *idev, struct ifmcaddr6 *mc) { struct sk_buff *skb = NULL; int type; read_lock_bh(&idev->lock); - if (!pmc) { - for (pmc = idev->mc_list; pmc; pmc = pmc->next) { - if (pmc->mca_flags & MAF_NOREPORT) + if (!mc) { + list_for_each_entry(mc, &idev->mc_list, list) { + if (mc->mca_flags & MAF_NOREPORT) continue; - spin_lock_bh(&pmc->mca_lock); - if (pmc->mca_sfcount[MCAST_EXCLUDE]) + spin_lock_bh(&mc->mca_lock); + if (mc->mca_sfcount[MCAST_EXCLUDE]) type = MLD2_MODE_IS_EXCLUDE; else type = MLD2_MODE_IS_INCLUDE; - skb = add_grec(skb, pmc, type, 0, 0, 0); - spin_unlock_bh(&pmc->mca_lock); + skb = 
add_grec(skb, mc, type, 0, 0, 0); + spin_unlock_bh(&mc->mca_lock); } } else { - spin_lock_bh(&pmc->mca_lock); - if (pmc->mca_sfcount[MCAST_EXCLUDE]) + spin_lock_bh(&mc->mca_lock); + if (mc->mca_sfcount[MCAST_EXCLUDE]) type = MLD2_MODE_IS_EXCLUDE; else type = MLD2_MODE_IS_INCLUDE; - skb = add_grec(skb, pmc, type, 0, 0, 0); - spin_unlock_bh(&pmc->mca_lock); + skb = add_grec(skb, mc, type, 0, 0, 0); + spin_unlock_bh(&mc->mca_lock); } read_unlock_bh(&idev->lock); if (skb) @@ -1904,94 +1900,90 @@ static void mld_clear_zeros(struct ip6_sf_list **ppsf) static void mld_send_cr(struct inet6_dev *idev) { - struct ifmcaddr6 *pmc, *pmc_prev, *pmc_next; struct sk_buff *skb = NULL; + struct ifmcaddr6 *mc, *tmp; int type, dtype; read_lock_bh(&idev->lock); - spin_lock(&idev->mc_lock); + spin_lock(&idev->mc_tomb_lock); /* deleted MCA's */ - pmc_prev = NULL; - for (pmc = idev->mc_tomb; pmc; pmc = pmc_next) { - pmc_next = pmc->next; - if (pmc->mca_sfmode == MCAST_INCLUDE) { + list_for_each_entry_safe(mc, tmp, &idev->mc_tomb_list, list) { + if (mc->mca_sfmode == MCAST_INCLUDE) { type = MLD2_BLOCK_OLD_SOURCES; dtype = MLD2_BLOCK_OLD_SOURCES; - skb = add_grec(skb, pmc, type, 1, 0, 0); - skb = add_grec(skb, pmc, dtype, 1, 1, 0); + skb = add_grec(skb, mc, type, 1, 0, 0); + skb = add_grec(skb, mc, dtype, 1, 1, 0); } - if (pmc->mca_crcount) { - if (pmc->mca_sfmode == MCAST_EXCLUDE) { + if (mc->mca_crcount) { + if (mc->mca_sfmode == MCAST_EXCLUDE) { type = MLD2_CHANGE_TO_INCLUDE; - skb = add_grec(skb, pmc, type, 1, 0, 0); + skb = add_grec(skb, mc, type, 1, 0, 0); } - pmc->mca_crcount--; - if (pmc->mca_crcount == 0) { - mld_clear_zeros(&pmc->mca_tomb); - mld_clear_zeros(&pmc->mca_sources); + mc->mca_crcount--; + if (mc->mca_crcount == 0) { + mld_clear_zeros(&mc->mca_tomb); + mld_clear_zeros(&mc->mca_sources); } } - if (pmc->mca_crcount == 0 && !pmc->mca_tomb && - !pmc->mca_sources) { - if (pmc_prev) - pmc_prev->next = pmc_next; - else - idev->mc_tomb = pmc_next; - in6_dev_put(pmc->idev); - 
kfree(pmc); - } else - pmc_prev = pmc; + if (mc->mca_crcount == 0 && !mc->mca_tomb && + !mc->mca_sources) { + list_del(&mc->list); + in6_dev_put(mc->idev); + kfree(mc); + } } - spin_unlock(&idev->mc_lock); + spin_unlock(&idev->mc_tomb_lock); /* change recs */ - for (pmc = idev->mc_list; pmc; pmc = pmc->next) { - spin_lock_bh(&pmc->mca_lock); - if (pmc->mca_sfcount[MCAST_EXCLUDE]) { + list_for_each_entry(mc, &idev->mc_list, list) { + spin_lock_bh(&mc->mca_lock); + if (mc->mca_sfcount[MCAST_EXCLUDE]) { type = MLD2_BLOCK_OLD_SOURCES; dtype = MLD2_ALLOW_NEW_SOURCES; } else { type = MLD2_ALLOW_NEW_SOURCES; dtype = MLD2_BLOCK_OLD_SOURCES; } - skb = add_grec(skb, pmc, type, 0, 0, 0); - skb = add_grec(skb, pmc, dtype, 0, 1, 0); /* deleted sources */ + skb = add_grec(skb, mc, type, 0, 0, 0); + skb = add_grec(skb, mc, dtype, 0, 1, 0); /* deleted sources */ /* filter mode changes */ - if (pmc->mca_crcount) { - if (pmc->mca_sfmode == MCAST_EXCLUDE) + if (mc->mca_crcount) { + if (mc->mca_sfmode == MCAST_EXCLUDE) type = MLD2_CHANGE_TO_EXCLUDE; else type = MLD2_CHANGE_TO_INCLUDE; - skb = add_grec(skb, pmc, type, 0, 0, 0); - pmc->mca_crcount--; + skb = add_grec(skb, mc, type, 0, 0, 0); + mc->mca_crcount--; } - spin_unlock_bh(&pmc->mca_lock); + spin_unlock_bh(&mc->mca_lock); } read_unlock_bh(&idev->lock); if (!skb) return; - (void) mld_sendpack(skb); + mld_sendpack(skb); } static void igmp6_send(struct in6_addr *addr, struct net_device *dev, int type) { + u8 ra[8] = { IPPROTO_ICMPV6, 0, + IPV6_TLV_ROUTERALERT, + 2, 0, 0, IPV6_TLV_PADN, 0 }; + const struct in6_addr *snd_addr, *saddr; + int err, len, payload_len, full_len; + int hlen = LL_RESERVED_SPACE(dev); + int tlen = dev->needed_tailroom; struct net *net = dev_net(dev); - struct sock *sk = net->ipv6.igmp_sk; + struct in6_addr addr_buf; struct inet6_dev *idev; + struct dst_entry *dst; struct sk_buff *skb; struct mld_msg *hdr; - const struct in6_addr *snd_addr, *saddr; - struct in6_addr addr_buf; - int hlen = 
LL_RESERVED_SPACE(dev); - int tlen = dev->needed_tailroom; - int err, len, payload_len, full_len; - u8 ra[8] = { IPPROTO_ICMPV6, 0, - IPV6_TLV_ROUTERALERT, 2, 0, 0, - IPV6_TLV_PADN, 0 }; struct flowi6 fl6; - struct dst_entry *dst; + struct sock *sk; + + sk = net->ipv6.igmp_sk; if (type == ICMPV6_MGM_REDUCTION) snd_addr = &in6addr_linklocal_allrouters; @@ -2074,7 +2066,7 @@ static void igmp6_send(struct in6_addr *addr, struct net_device *dev, int type) static void mld_send_initial_cr(struct inet6_dev *idev) { struct sk_buff *skb; - struct ifmcaddr6 *pmc; + struct ifmcaddr6 *mc; int type; if (mld_in_v1_mode(idev)) @@ -2082,14 +2074,14 @@ static void mld_send_initial_cr(struct inet6_dev *idev) skb = NULL; read_lock_bh(&idev->lock); - for (pmc = idev->mc_list; pmc; pmc = pmc->next) { - spin_lock_bh(&pmc->mca_lock); - if (pmc->mca_sfcount[MCAST_EXCLUDE]) + list_for_each_entry(mc, &idev->mc_list, list) { + spin_lock_bh(&mc->mca_lock); + if (mc->mca_sfcount[MCAST_EXCLUDE]) type = MLD2_CHANGE_TO_EXCLUDE; else type = MLD2_ALLOW_NEW_SOURCES; - skb = add_grec(skb, pmc, type, 0, 0, 1); - spin_unlock_bh(&pmc->mca_lock); + skb = add_grec(skb, mc, type, 0, 0, 1); + spin_unlock_bh(&mc->mca_lock); } read_unlock_bh(&idev->lock); if (skb) @@ -2122,14 +2114,14 @@ static void mld_dad_timer_expire(struct timer_list *t) in6_dev_put(idev); } -static int ip6_mc_del1_src(struct ifmcaddr6 *pmc, int sfmode, - const struct in6_addr *psfsrc) +static int ip6_mc_del1_src(struct ifmcaddr6 *mc, int sfmode, + const struct in6_addr *psfsrc) { struct ip6_sf_list *psf, *psf_prev; int rv = 0; psf_prev = NULL; - for (psf = pmc->mca_sources; psf; psf = psf->sf_next) { + for (psf = mc->mca_sources; psf; psf = psf->sf_next) { if (ipv6_addr_equal(&psf->sf_addr, psfsrc)) break; psf_prev = psf; @@ -2140,78 +2132,83 @@ static int ip6_mc_del1_src(struct ifmcaddr6 *pmc, int sfmode, } psf->sf_count[sfmode]--; if (!psf->sf_count[MCAST_INCLUDE] && !psf->sf_count[MCAST_EXCLUDE]) { - struct inet6_dev *idev = 
pmc->idev; + struct inet6_dev *idev = mc->idev; /* no more filters for this source */ if (psf_prev) psf_prev->sf_next = psf->sf_next; else - pmc->mca_sources = psf->sf_next; - if (psf->sf_oldin && !(pmc->mca_flags & MAF_NOREPORT) && + mc->mca_sources = psf->sf_next; + if (psf->sf_oldin && !(mc->mca_flags & MAF_NOREPORT) && !mld_in_v1_mode(idev)) { psf->sf_crcount = idev->mc_qrv; - psf->sf_next = pmc->mca_tomb; - pmc->mca_tomb = psf; + psf->sf_next = mc->mca_tomb; + mc->mca_tomb = psf; rv = 1; - } else + } else { kfree(psf); + } } return rv; } -static int ip6_mc_del_src(struct inet6_dev *idev, const struct in6_addr *pmca, +static int ip6_mc_del_src(struct inet6_dev *idev, const struct in6_addr *mca, int sfmode, int sfcount, const struct in6_addr *psfsrc, int delta) { - struct ifmcaddr6 *pmc; - int changerec = 0; - int i, err; + struct ifmcaddr6 *mc; + bool found = false; + int changerec = 0; + int i, err; if (!idev) return -ENODEV; read_lock_bh(&idev->lock); - for (pmc = idev->mc_list; pmc; pmc = pmc->next) { - if (ipv6_addr_equal(pmca, &pmc->mca_addr)) + list_for_each_entry(mc, &idev->mc_list, list) { + if (ipv6_addr_equal(mca, &mc->mca_addr)) { + found = true; break; + } } - if (!pmc) { + if (!found) { /* MCA not found?? 
bug */ read_unlock_bh(&idev->lock); return -ESRCH; } - spin_lock_bh(&pmc->mca_lock); - sf_markstate(pmc); + spin_lock_bh(&mc->mca_lock); + sf_markstate(mc); if (!delta) { - if (!pmc->mca_sfcount[sfmode]) { - spin_unlock_bh(&pmc->mca_lock); + if (!mc->mca_sfcount[sfmode]) { + spin_unlock_bh(&mc->mca_lock); read_unlock_bh(&idev->lock); return -EINVAL; } - pmc->mca_sfcount[sfmode]--; + mc->mca_sfcount[sfmode]--; } err = 0; for (i = 0; i < sfcount; i++) { - int rv = ip6_mc_del1_src(pmc, sfmode, &psfsrc[i]); + int rv = ip6_mc_del1_src(mc, sfmode, &psfsrc[i]); changerec |= rv > 0; if (!err && rv < 0) err = rv; } - if (pmc->mca_sfmode == MCAST_EXCLUDE && - pmc->mca_sfcount[MCAST_EXCLUDE] == 0 && - pmc->mca_sfcount[MCAST_INCLUDE]) { + if (mc->mca_sfmode == MCAST_EXCLUDE && + mc->mca_sfcount[MCAST_EXCLUDE] == 0 && + mc->mca_sfcount[MCAST_INCLUDE]) { struct ip6_sf_list *psf; /* filter mode change */ - pmc->mca_sfmode = MCAST_INCLUDE; - pmc->mca_crcount = idev->mc_qrv; - idev->mc_ifc_count = pmc->mca_crcount; - for (psf = pmc->mca_sources; psf; psf = psf->sf_next) + mc->mca_sfmode = MCAST_INCLUDE; + mc->mca_crcount = idev->mc_qrv; + idev->mc_ifc_count = mc->mca_crcount; + for (psf = mc->mca_sources; psf; psf = psf->sf_next) psf->sf_crcount = 0; - mld_ifc_event(pmc->idev); - } else if (sf_setstate(pmc) || changerec) - mld_ifc_event(pmc->idev); - spin_unlock_bh(&pmc->mca_lock); + mld_ifc_event(mc->idev); + } else if (sf_setstate(mc) || changerec) { + mld_ifc_event(mc->idev); + } + spin_unlock_bh(&mc->mca_lock); read_unlock_bh(&idev->lock); return err; } @@ -2219,13 +2216,13 @@ static int ip6_mc_del_src(struct inet6_dev *idev, const struct in6_addr *pmca, /* * Add multicast single-source filter to the interface list */ -static int ip6_mc_add1_src(struct ifmcaddr6 *pmc, int sfmode, - const struct in6_addr *psfsrc) +static int ip6_mc_add1_src(struct ifmcaddr6 *mc, int sfmode, + const struct in6_addr *psfsrc) { struct ip6_sf_list *psf, *psf_prev; psf_prev = NULL; - for (psf = 
pmc->mca_sources; psf; psf = psf->sf_next) { + for (psf = mc->mca_sources; psf; psf = psf->sf_next) { if (ipv6_addr_equal(&psf->sf_addr, psfsrc)) break; psf_prev = psf; @@ -2239,36 +2236,37 @@ static int ip6_mc_add1_src(struct ifmcaddr6 *pmc, int sfmode, if (psf_prev) { psf_prev->sf_next = psf; } else - pmc->mca_sources = psf; + mc->mca_sources = psf; } psf->sf_count[sfmode]++; return 0; } -static void sf_markstate(struct ifmcaddr6 *pmc) +static void sf_markstate(struct ifmcaddr6 *mc) { + int mca_xcount = mc->mca_sfcount[MCAST_EXCLUDE]; struct ip6_sf_list *psf; - int mca_xcount = pmc->mca_sfcount[MCAST_EXCLUDE]; - for (psf = pmc->mca_sources; psf; psf = psf->sf_next) - if (pmc->mca_sfcount[MCAST_EXCLUDE]) { + for (psf = mc->mca_sources; psf; psf = psf->sf_next) { + if (mc->mca_sfcount[MCAST_EXCLUDE]) { psf->sf_oldin = mca_xcount == psf->sf_count[MCAST_EXCLUDE] && !psf->sf_count[MCAST_INCLUDE]; } else psf->sf_oldin = psf->sf_count[MCAST_INCLUDE] != 0; + } } -static int sf_setstate(struct ifmcaddr6 *pmc) +static int sf_setstate(struct ifmcaddr6 *mc) { + int mca_xcount = mc->mca_sfcount[MCAST_EXCLUDE]; struct ip6_sf_list *psf, *dpsf; - int mca_xcount = pmc->mca_sfcount[MCAST_EXCLUDE]; - int qrv = pmc->idev->mc_qrv; + int qrv = mc->idev->mc_qrv; int new_in, rv; rv = 0; - for (psf = pmc->mca_sources; psf; psf = psf->sf_next) { - if (pmc->mca_sfcount[MCAST_EXCLUDE]) { + for (psf = mc->mca_sources; psf; psf = psf->sf_next) { + if (mc->mca_sfcount[MCAST_EXCLUDE]) { new_in = mca_xcount == psf->sf_count[MCAST_EXCLUDE] && !psf->sf_count[MCAST_INCLUDE]; } else @@ -2277,7 +2275,7 @@ static int sf_setstate(struct ifmcaddr6 *pmc) if (!psf->sf_oldin) { struct ip6_sf_list *prev = NULL; - for (dpsf = pmc->mca_tomb; dpsf; + for (dpsf = mc->mca_tomb; dpsf; dpsf = dpsf->sf_next) { if (ipv6_addr_equal(&dpsf->sf_addr, &psf->sf_addr)) @@ -2288,7 +2286,7 @@ static int sf_setstate(struct ifmcaddr6 *pmc) if (prev) prev->sf_next = dpsf->sf_next; else - pmc->mca_tomb = dpsf->sf_next; + 
mc->mca_tomb = dpsf->sf_next; kfree(dpsf); } psf->sf_crcount = qrv; @@ -2300,7 +2298,7 @@ static int sf_setstate(struct ifmcaddr6 *pmc) * add or update "delete" records if an active filter * is now inactive */ - for (dpsf = pmc->mca_tomb; dpsf; dpsf = dpsf->sf_next) + for (dpsf = mc->mca_tomb; dpsf; dpsf = dpsf->sf_next) if (ipv6_addr_equal(&dpsf->sf_addr, &psf->sf_addr)) break; @@ -2309,9 +2307,9 @@ static int sf_setstate(struct ifmcaddr6 *pmc) if (!dpsf) continue; *dpsf = *psf; - /* pmc->mca_lock held by callers */ - dpsf->sf_next = pmc->mca_tomb; - pmc->mca_tomb = dpsf; + /* mc->mca_lock held by callers */ + dpsf->sf_next = mc->mca_tomb; + mc->mca_tomb = dpsf; } dpsf->sf_crcount = qrv; rv++; @@ -2323,35 +2321,39 @@ static int sf_setstate(struct ifmcaddr6 *pmc) /* * Add multicast source filter list to the interface list */ -static int ip6_mc_add_src(struct inet6_dev *idev, const struct in6_addr *pmca, +static int ip6_mc_add_src(struct inet6_dev *idev, const struct in6_addr *mca, int sfmode, int sfcount, const struct in6_addr *psfsrc, int delta) { - struct ifmcaddr6 *pmc; - int isexclude; - int i, err; + struct ifmcaddr6 *mc; + bool found = false; + int isexclude; + int i, err; if (!idev) return -ENODEV; + read_lock_bh(&idev->lock); - for (pmc = idev->mc_list; pmc; pmc = pmc->next) { - if (ipv6_addr_equal(pmca, &pmc->mca_addr)) + list_for_each_entry(mc, &idev->mc_list, list) { + if (ipv6_addr_equal(mca, &mc->mca_addr)) { + found = true; break; + } } - if (!pmc) { + if (!found) { /* MCA not found?? 
bug */ read_unlock_bh(&idev->lock); return -ESRCH; } - spin_lock_bh(&pmc->mca_lock); + spin_lock_bh(&mc->mca_lock); - sf_markstate(pmc); - isexclude = pmc->mca_sfmode == MCAST_EXCLUDE; + sf_markstate(mc); + isexclude = mc->mca_sfmode == MCAST_EXCLUDE; if (!delta) - pmc->mca_sfcount[sfmode]++; + mc->mca_sfcount[sfmode]++; err = 0; for (i = 0; i < sfcount; i++) { - err = ip6_mc_add1_src(pmc, sfmode, &psfsrc[i]); + err = ip6_mc_add1_src(mc, sfmode, &psfsrc[i]); if (err) break; } @@ -2359,72 +2361,72 @@ static int ip6_mc_add_src(struct inet6_dev *idev, const struct in6_addr *pmca, int j; if (!delta) - pmc->mca_sfcount[sfmode]--; + mc->mca_sfcount[sfmode]--; for (j = 0; j < i; j++) - ip6_mc_del1_src(pmc, sfmode, &psfsrc[j]); - } else if (isexclude != (pmc->mca_sfcount[MCAST_EXCLUDE] != 0)) { + ip6_mc_del1_src(mc, sfmode, &psfsrc[j]); + } else if (isexclude != (mc->mca_sfcount[MCAST_EXCLUDE] != 0)) { struct ip6_sf_list *psf; /* filter mode change */ - if (pmc->mca_sfcount[MCAST_EXCLUDE]) - pmc->mca_sfmode = MCAST_EXCLUDE; - else if (pmc->mca_sfcount[MCAST_INCLUDE]) - pmc->mca_sfmode = MCAST_INCLUDE; + if (mc->mca_sfcount[MCAST_EXCLUDE]) + mc->mca_sfmode = MCAST_EXCLUDE; + else if (mc->mca_sfcount[MCAST_INCLUDE]) + mc->mca_sfmode = MCAST_INCLUDE; /* else no filters; keep old mode for reports */ - pmc->mca_crcount = idev->mc_qrv; - idev->mc_ifc_count = pmc->mca_crcount; - for (psf = pmc->mca_sources; psf; psf = psf->sf_next) + mc->mca_crcount = idev->mc_qrv; + idev->mc_ifc_count = mc->mca_crcount; + for (psf = mc->mca_sources; psf; psf = psf->sf_next) psf->sf_crcount = 0; mld_ifc_event(idev); - } else if (sf_setstate(pmc)) + } else if (sf_setstate(mc)) mld_ifc_event(idev); - spin_unlock_bh(&pmc->mca_lock); + spin_unlock_bh(&mc->mca_lock); read_unlock_bh(&idev->lock); return err; } -static void ip6_mc_clear_src(struct ifmcaddr6 *pmc) +static void ip6_mc_clear_src(struct ifmcaddr6 *mc) { struct ip6_sf_list *psf, *nextpsf; - for (psf = pmc->mca_tomb; psf; psf = nextpsf) { + 
for (psf = mc->mca_tomb; psf; psf = nextpsf) { nextpsf = psf->sf_next; kfree(psf); } - pmc->mca_tomb = NULL; - for (psf = pmc->mca_sources; psf; psf = nextpsf) { + mc->mca_tomb = NULL; + for (psf = mc->mca_sources; psf; psf = nextpsf) { nextpsf = psf->sf_next; kfree(psf); } - pmc->mca_sources = NULL; - pmc->mca_sfmode = MCAST_EXCLUDE; - pmc->mca_sfcount[MCAST_INCLUDE] = 0; - pmc->mca_sfcount[MCAST_EXCLUDE] = 1; + mc->mca_sources = NULL; + mc->mca_sfmode = MCAST_EXCLUDE; + mc->mca_sfcount[MCAST_INCLUDE] = 0; + mc->mca_sfcount[MCAST_EXCLUDE] = 1; } -static void igmp6_join_group(struct ifmcaddr6 *ma) +static void igmp6_join_group(struct ifmcaddr6 *mc) { unsigned long delay; - if (ma->mca_flags & MAF_NOREPORT) + if (mc->mca_flags & MAF_NOREPORT) return; - igmp6_send(&ma->mca_addr, ma->idev->dev, ICMPV6_MGM_REPORT); + igmp6_send(&mc->mca_addr, mc->idev->dev, ICMPV6_MGM_REPORT); - delay = prandom_u32() % unsolicited_report_interval(ma->idev); + delay = prandom_u32() % unsolicited_report_interval(mc->idev); - spin_lock_bh(&ma->mca_lock); - if (del_timer(&ma->mca_timer)) { - refcount_dec(&ma->mca_refcnt); - delay = ma->mca_timer.expires - jiffies; + spin_lock_bh(&mc->mca_lock); + if (del_timer(&mc->mca_timer)) { + refcount_dec(&mc->mca_refcnt); + delay = mc->mca_timer.expires - jiffies; } - if (!mod_timer(&ma->mca_timer, jiffies + delay)) - refcount_inc(&ma->mca_refcnt); - ma->mca_flags |= MAF_TIMER_RUNNING | MAF_LAST_REPORTER; - spin_unlock_bh(&ma->mca_lock); + if (!mod_timer(&mc->mca_timer, jiffies + delay)) + refcount_inc(&mc->mca_refcnt); + mc->mca_flags |= MAF_TIMER_RUNNING | MAF_LAST_REPORTER; + spin_unlock_bh(&mc->mca_lock); } static int ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *iml, @@ -2446,15 +2448,15 @@ static int ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *iml, return err; } -static void igmp6_leave_group(struct ifmcaddr6 *ma) +static void igmp6_leave_group(struct ifmcaddr6 *mc) { - if (mld_in_v1_mode(ma->idev)) { - if 
(ma->mca_flags & MAF_LAST_REPORTER) - igmp6_send(&ma->mca_addr, ma->idev->dev, - ICMPV6_MGM_REDUCTION); + if (mld_in_v1_mode(mc->idev)) { + if (mc->mca_flags & MAF_LAST_REPORTER) + igmp6_send(&mc->mca_addr, mc->idev->dev, + ICMPV6_MGM_REDUCTION); } else { - mld_add_delrec(ma->idev, ma); - mld_ifc_event(ma->idev); + mld_add_delrec(mc->idev, mc); + mld_ifc_event(mc->idev); } } @@ -2491,31 +2493,31 @@ static void mld_ifc_event(struct inet6_dev *idev) static void igmp6_timer_handler(struct timer_list *t) { - struct ifmcaddr6 *ma = from_timer(ma, t, mca_timer); + struct ifmcaddr6 *mc = from_timer(mc, t, mca_timer); - if (mld_in_v1_mode(ma->idev)) - igmp6_send(&ma->mca_addr, ma->idev->dev, ICMPV6_MGM_REPORT); + if (mld_in_v1_mode(mc->idev)) + igmp6_send(&mc->mca_addr, mc->idev->dev, ICMPV6_MGM_REPORT); else - mld_send_report(ma->idev, ma); + mld_send_report(mc->idev, mc); - spin_lock(&ma->mca_lock); - ma->mca_flags |= MAF_LAST_REPORTER; - ma->mca_flags &= ~MAF_TIMER_RUNNING; - spin_unlock(&ma->mca_lock); - ma_put(ma); + spin_lock(&mc->mca_lock); + mc->mca_flags |= MAF_LAST_REPORTER; + mc->mca_flags &= ~MAF_TIMER_RUNNING; + spin_unlock(&mc->mca_lock); + mca_put(mc); } /* Device changing type */ void ipv6_mc_unmap(struct inet6_dev *idev) { - struct ifmcaddr6 *i; + struct ifmcaddr6 *mc, *tmp; /* Install multicast list, except for all-nodes (already installed) */ read_lock_bh(&idev->lock); - for (i = idev->mc_list; i; i = i->next) - igmp6_group_dropped(i); + list_for_each_entry_safe(mc, tmp, &idev->mc_list, list) + igmp6_group_dropped(mc); read_unlock_bh(&idev->lock); } @@ -2528,14 +2530,14 @@ void ipv6_mc_remap(struct inet6_dev *idev) void ipv6_mc_down(struct inet6_dev *idev) { - struct ifmcaddr6 *i; + struct ifmcaddr6 *mc, *tmp; /* Withdraw multicast list */ read_lock_bh(&idev->lock); - for (i = idev->mc_list; i; i = i->next) - igmp6_group_dropped(i); + list_for_each_entry_safe(mc, tmp, &idev->mc_list, list) + igmp6_group_dropped(mc); /* Should stop timer after group drop. 
or we will * start timer again in mld_ifc_event() @@ -2559,15 +2561,15 @@ static void ipv6_mc_reset(struct inet6_dev *idev) void ipv6_mc_up(struct inet6_dev *idev) { - struct ifmcaddr6 *i; + struct ifmcaddr6 *mc, *tmp; /* Install multicast list, except for all-nodes (already installed) */ read_lock_bh(&idev->lock); ipv6_mc_reset(idev); - for (i = idev->mc_list; i; i = i->next) { - mld_del_delrec(idev, i); - igmp6_group_added(i); + list_for_each_entry_safe(mc, tmp, &idev->mc_list, list) { + mld_del_delrec(idev, mc); + igmp6_group_added(mc); } read_unlock_bh(&idev->lock); } @@ -2577,10 +2579,11 @@ void ipv6_mc_up(struct inet6_dev *idev) void ipv6_mc_init_dev(struct inet6_dev *idev) { write_lock_bh(&idev->lock); - spin_lock_init(&idev->mc_lock); + spin_lock_init(&idev->mc_tomb_lock); idev->mc_gq_running = 0; timer_setup(&idev->mc_gq_timer, mld_gq_timer_expire, 0); - idev->mc_tomb = NULL; + INIT_LIST_HEAD(&idev->mc_tomb_list); + INIT_LIST_HEAD(&idev->mc_list); idev->mc_ifc_count = 0; timer_setup(&idev->mc_ifc_timer, mld_ifc_timer_expire, 0); timer_setup(&idev->mc_dad_timer, mld_dad_timer_expire, 0); @@ -2594,7 +2597,7 @@ void ipv6_mc_init_dev(struct inet6_dev *idev) void ipv6_mc_destroy_dev(struct inet6_dev *idev) { - struct ifmcaddr6 *i; + struct ifmcaddr6 *mc, *tmp; /* Deactivate timers */ ipv6_mc_down(idev); @@ -2611,12 +2614,11 @@ void ipv6_mc_destroy_dev(struct inet6_dev *idev) __ipv6_dev_mc_dec(idev, &in6addr_linklocal_allrouters); write_lock_bh(&idev->lock); - while ((i = idev->mc_list) != NULL) { - idev->mc_list = i->next; - + list_for_each_entry_safe(mc, tmp, &idev->mc_list, list) { + list_del(&mc->list); write_unlock_bh(&idev->lock); - ip6_mc_clear_src(i); - ma_put(i); + ip6_mc_clear_src(mc); + mca_put(mc); write_lock_bh(&idev->lock); } write_unlock_bh(&idev->lock); @@ -2624,14 +2626,14 @@ void ipv6_mc_destroy_dev(struct inet6_dev *idev) static void ipv6_mc_rejoin_groups(struct inet6_dev *idev) { - struct ifmcaddr6 *pmc; + struct ifmcaddr6 *mc; ASSERT_RTNL(); 
if (mld_in_v1_mode(idev)) { read_lock_bh(&idev->lock); - for (pmc = idev->mc_list; pmc; pmc = pmc->next) - igmp6_join_group(pmc); + list_for_each_entry(mc, &idev->mc_list, list) + igmp6_join_group(mc); read_unlock_bh(&idev->lock); } else mld_send_report(idev, NULL); @@ -2671,57 +2673,64 @@ struct igmp6_mc_iter_state { static inline struct ifmcaddr6 *igmp6_mc_get_first(struct seq_file *seq) { - struct ifmcaddr6 *im = NULL; struct igmp6_mc_iter_state *state = igmp6_mc_seq_private(seq); struct net *net = seq_file_net(seq); + struct ifmcaddr6 *mc; state->idev = NULL; for_each_netdev_rcu(net, state->dev) { struct inet6_dev *idev; + idev = __in6_dev_get(state->dev); if (!idev) continue; + read_lock_bh(&idev->lock); - im = idev->mc_list; - if (im) { + list_for_each_entry(mc, &idev->mc_list, list) { state->idev = idev; - break; + return mc; } read_unlock_bh(&idev->lock); } - return im; + return NULL; } -static struct ifmcaddr6 *igmp6_mc_get_next(struct seq_file *seq, struct ifmcaddr6 *im) +static struct ifmcaddr6 *igmp6_mc_get_next(struct seq_file *seq, struct ifmcaddr6 *mc) { struct igmp6_mc_iter_state *state = igmp6_mc_seq_private(seq); - im = im->next; - while (!im) { - if (likely(state->idev)) + list_for_each_entry_continue(mc, &state->idev->mc_list, list) + return mc; + + mc = NULL; + + while (!mc) { + if (state->idev) read_unlock_bh(&state->idev->lock); state->dev = next_net_device_rcu(state->dev); if (!state->dev) { state->idev = NULL; - break; + return NULL; } state->idev = __in6_dev_get(state->dev); if (!state->idev) continue; read_lock_bh(&state->idev->lock); - im = state->idev->mc_list; + mc = list_first_entry_or_null(&state->idev->mc_list, + struct ifmcaddr6, list); } - return im; + return mc; } static struct ifmcaddr6 *igmp6_mc_get_idx(struct seq_file *seq, loff_t pos) { - struct ifmcaddr6 *im = igmp6_mc_get_first(seq); - if (im) - while (pos && (im = igmp6_mc_get_next(seq, im)) != NULL) + struct ifmcaddr6 *mc = igmp6_mc_get_first(seq); + + if (mc) + while 
(pos && (mc = igmp6_mc_get_next(seq, mc)) != NULL) --pos; - return pos ? NULL : im; + return pos ? NULL : mc; } static void *igmp6_mc_seq_start(struct seq_file *seq, loff_t *pos) @@ -2733,10 +2742,10 @@ static void *igmp6_mc_seq_start(struct seq_file *seq, loff_t *pos) static void *igmp6_mc_seq_next(struct seq_file *seq, void *v, loff_t *pos) { - struct ifmcaddr6 *im = igmp6_mc_get_next(seq, v); + struct ifmcaddr6 *mc = igmp6_mc_get_next(seq, v); ++*pos; - return im; + return mc; } static void igmp6_mc_seq_stop(struct seq_file *seq, void *v) @@ -2754,16 +2763,16 @@ static void igmp6_mc_seq_stop(struct seq_file *seq, void *v) static int igmp6_mc_seq_show(struct seq_file *seq, void *v) { - struct ifmcaddr6 *im = (struct ifmcaddr6 *)v; + struct ifmcaddr6 *mc = (struct ifmcaddr6 *)v; struct igmp6_mc_iter_state *state = igmp6_mc_seq_private(seq); seq_printf(seq, "%-4d %-15s %pi6 %5d %08X %ld\n", state->dev->ifindex, state->dev->name, - &im->mca_addr, - im->mca_users, im->mca_flags, - (im->mca_flags&MAF_TIMER_RUNNING) ? - jiffies_to_clock_t(im->mca_timer.expires-jiffies) : 0); + &mc->mca_addr, + mc->mca_users, mc->mca_flags, + (mc->mca_flags & MAF_TIMER_RUNNING) ? 
+ jiffies_to_clock_t(mc->mca_timer.expires - jiffies) : 0); return 0; } @@ -2778,51 +2787,61 @@ struct igmp6_mcf_iter_state { struct seq_net_private p; struct net_device *dev; struct inet6_dev *idev; - struct ifmcaddr6 *im; + struct ifmcaddr6 *mc; }; #define igmp6_mcf_seq_private(seq) ((struct igmp6_mcf_iter_state *)(seq)->private) static inline struct ip6_sf_list *igmp6_mcf_get_first(struct seq_file *seq) { - struct ip6_sf_list *psf = NULL; - struct ifmcaddr6 *im = NULL; struct igmp6_mcf_iter_state *state = igmp6_mcf_seq_private(seq); struct net *net = seq_file_net(seq); + struct ip6_sf_list *psf = NULL; + struct ifmcaddr6 *mc = NULL; state->idev = NULL; - state->im = NULL; + state->mc = NULL; for_each_netdev_rcu(net, state->dev) { struct inet6_dev *idev; + idev = __in6_dev_get(state->dev); if (unlikely(idev == NULL)) continue; read_lock_bh(&idev->lock); - im = idev->mc_list; - if (likely(im)) { - spin_lock_bh(&im->mca_lock); - psf = im->mca_sources; + mc = list_first_entry_or_null(&idev->mc_list, + struct ifmcaddr6, list); + if (likely(mc)) { + spin_lock_bh(&mc->mca_lock); + psf = mc->mca_sources; if (likely(psf)) { - state->im = im; + state->mc = mc; state->idev = idev; break; } - spin_unlock_bh(&im->mca_lock); + spin_unlock_bh(&mc->mca_lock); } read_unlock_bh(&idev->lock); } return psf; } -static struct ip6_sf_list *igmp6_mcf_get_next(struct seq_file *seq, struct ip6_sf_list *psf) +static struct ip6_sf_list *igmp6_mcf_get_next(struct seq_file *seq, + struct ip6_sf_list *psf) { struct igmp6_mcf_iter_state *state = igmp6_mcf_seq_private(seq); psf = psf->sf_next; while (!psf) { - spin_unlock_bh(&state->im->mca_lock); - state->im = state->im->next; - while (!state->im) { + spin_unlock_bh(&state->mc->mca_lock); + list_for_each_entry_continue(state->mc, &state->idev->mc_list, list) { + spin_lock_bh(&state->mc->mca_lock); + psf = state->mc->mca_sources; + goto out; + } + + state->mc = NULL; + + while (!state->mc) { if (likely(state->idev)) 
read_unlock_bh(&state->idev->lock); @@ -2835,12 +2854,13 @@ static struct ip6_sf_list *igmp6_mcf_get_next(struct seq_file *seq, struct ip6_s if (!state->idev) continue; read_lock_bh(&state->idev->lock); - state->im = state->idev->mc_list; + state->mc = list_first_entry_or_null(&state->idev->mc_list, + struct ifmcaddr6, list); } - if (!state->im) + if (!state->mc) break; - spin_lock_bh(&state->im->mca_lock); - psf = state->im->mca_sources; + spin_lock_bh(&state->mc->mca_lock); + psf = state->mc->mca_sources; } out: return psf; @@ -2849,6 +2869,7 @@ static struct ip6_sf_list *igmp6_mcf_get_next(struct seq_file *seq, struct ip6_s static struct ip6_sf_list *igmp6_mcf_get_idx(struct seq_file *seq, loff_t pos) { struct ip6_sf_list *psf = igmp6_mcf_get_first(seq); + if (psf) while (pos && (psf = igmp6_mcf_get_next(seq, psf)) != NULL) --pos; @@ -2865,6 +2886,7 @@ static void *igmp6_mcf_seq_start(struct seq_file *seq, loff_t *pos) static void *igmp6_mcf_seq_next(struct seq_file *seq, void *v, loff_t *pos) { struct ip6_sf_list *psf; + if (v == SEQ_START_TOKEN) psf = igmp6_mcf_get_first(seq); else @@ -2877,9 +2899,10 @@ static void igmp6_mcf_seq_stop(struct seq_file *seq, void *v) __releases(RCU) { struct igmp6_mcf_iter_state *state = igmp6_mcf_seq_private(seq); - if (likely(state->im)) { - spin_unlock_bh(&state->im->mca_lock); - state->im = NULL; + + if (likely(state->mc)) { + spin_unlock_bh(&state->mc->mca_lock); + state->mc = NULL; } if (likely(state->idev)) { read_unlock_bh(&state->idev->lock); @@ -2900,7 +2923,7 @@ static int igmp6_mcf_seq_show(struct seq_file *seq, void *v) seq_printf(seq, "%3d %6.6s %pi6 %pi6 %6lu %6lu\n", state->dev->ifindex, state->dev->name, - &state->im->mca_addr, + &state->mc->mca_addr, &psf->sf_addr, psf->sf_count[MCAST_INCLUDE], psf->sf_count[MCAST_EXCLUDE]); From patchwork Mon Feb 8 17:53:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Taehee Yoo X-Patchwork-Id: 12076163 
From: Taehee Yoo
To: davem@davemloft.net, kuba@kernel.org, netdev@vger.kernel.org, dsahern@kernel.org, xiyou.wangcong@gmail.com
Cc: ap420073@gmail.com
Subject: [PATCH net-next 2/8] mld: convert ip6_sf_list to list macros
Date: Mon, 8 Feb 2021 17:53:56 +0000
Message-Id: <20210208175356.5060-1-ap420073@gmail.com>

Currently, struct ip6_sf_list does not use the list API, so its code shape is a little different from the rest of the file. Convert ip6_sf_list to use the list API to improve readability.
Signed-off-by: Taehee Yoo --- include/net/if_inet6.h | 6 +- net/ipv6/mcast.c | 267 +++++++++++++++++++++++------------------ 2 files changed, 155 insertions(+), 118 deletions(-) diff --git a/include/net/if_inet6.h b/include/net/if_inet6.h index 1262ccd5221e..cd17b756a2a5 100644 --- a/include/net/if_inet6.h +++ b/include/net/if_inet6.h @@ -97,7 +97,7 @@ struct ipv6_mc_socklist { }; struct ip6_sf_list { - struct ip6_sf_list *sf_next; + struct list_head list; struct in6_addr sf_addr; unsigned long sf_count[2]; /* include/exclude counts */ unsigned char sf_gsresp; /* include in g & s response? */ @@ -115,8 +115,8 @@ struct ifmcaddr6 { struct in6_addr mca_addr; struct inet6_dev *idev; struct list_head list; - struct ip6_sf_list *mca_sources; - struct ip6_sf_list *mca_tomb; + struct list_head mca_source_list; + struct list_head mca_tomb_list; unsigned int mca_sfmode; unsigned char mca_crcount; unsigned long mca_sfcount[2]; diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c index 508c007df84f..9c4dc4c2ff01 100644 --- a/net/ipv6/mcast.c +++ b/net/ipv6/mcast.c @@ -747,6 +747,8 @@ static void mld_add_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im) spin_lock_bh(&im->mca_lock); spin_lock_init(&mc->mca_lock); INIT_LIST_HEAD(&mc->list); + INIT_LIST_HEAD(&mc->mca_tomb_list); + INIT_LIST_HEAD(&mc->mca_source_list); mc->idev = im->idev; in6_dev_hold(idev); mc->mca_addr = im->mca_addr; @@ -755,10 +757,10 @@ static void mld_add_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im) if (mc->mca_sfmode == MCAST_INCLUDE) { struct ip6_sf_list *psf; - mc->mca_tomb = im->mca_tomb; - mc->mca_sources = im->mca_sources; - im->mca_tomb = im->mca_sources = NULL; - for (psf = mc->mca_sources; psf; psf = psf->sf_next) + list_splice_init(&im->mca_tomb_list, &mc->mca_tomb_list); + list_splice_init(&im->mca_source_list, &mc->mca_source_list); + + list_for_each_entry(psf, &mc->mca_source_list, list) psf->sf_crcount = mc->mca_crcount; } spin_unlock_bh(&im->mca_lock); @@ -773,6 +775,8 @@ static void 
mld_del_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im) struct ifmcaddr6 *mc = NULL, *tmp = NULL; struct in6_addr *mca = &im->mca_addr; struct ip6_sf_list *psf; + LIST_HEAD(source_list); + LIST_HEAD(tomb_list); bool found = false; spin_lock_bh(&idev->mc_tomb_lock); @@ -789,9 +793,14 @@ static void mld_del_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im) if (found) { im->idev = mc->idev; if (im->mca_sfmode == MCAST_INCLUDE) { - swap(im->mca_tomb, mc->mca_tomb); - swap(im->mca_sources, mc->mca_sources); - for (psf = im->mca_sources; psf; psf = psf->sf_next) + list_splice_init(&im->mca_tomb_list, &tomb_list); + list_splice_init(&im->mca_source_list, &source_list); + list_splice_init(&mc->mca_tomb_list, &im->mca_tomb_list); + list_splice_init(&mc->mca_source_list, &im->mca_source_list); + list_splice_init(&tomb_list, &mc->mca_tomb_list); + list_splice_init(&source_list, &mc->mca_source_list); + + list_for_each_entry(psf, &im->mca_source_list, list) psf->sf_crcount = idev->mc_qrv; } else { im->mca_crcount = idev->mc_qrv; @@ -819,16 +828,14 @@ static void mld_clear_delrec(struct inet6_dev *idev) /* clear dead sources, too */ read_lock_bh(&idev->lock); list_for_each_entry_safe(mc, tmp, &idev->mc_list, list) { - struct ip6_sf_list *psf, *psf_next; + struct ip6_sf_list *psf, *tmp; + LIST_HEAD(mca_list); spin_lock_bh(&mc->mca_lock); - psf = mc->mca_tomb; - mc->mca_tomb = NULL; + list_splice_init(&mc->mca_tomb_list, &mca_list); spin_unlock_bh(&mc->mca_lock); - for (; psf; psf = psf_next) { - psf_next = psf->sf_next; + list_for_each_entry_safe(psf, tmp, &mca_list, list) kfree(psf); - } } read_unlock_bh(&idev->lock); } @@ -848,6 +855,8 @@ static struct ifmcaddr6 *mca_alloc(struct inet6_dev *idev, mc->mca_addr = *addr; mc->idev = idev; /* reference taken by caller */ INIT_LIST_HEAD(&mc->list); + INIT_LIST_HEAD(&mc->mca_source_list); + INIT_LIST_HEAD(&mc->mca_tomb_list); mc->mca_users = 1; /* mca_stamp should be updated upon changes */ mc->mca_cstamp = mc->mca_tstamp = 
jiffies; @@ -995,18 +1004,22 @@ bool ipv6_chk_mcast_addr(struct net_device *dev, const struct in6_addr *group, if (found) { if (src_addr && !ipv6_addr_any(src_addr)) { struct ip6_sf_list *psf; + bool found_psf = false; spin_lock_bh(&mc->mca_lock); - for (psf = mc->mca_sources; psf; psf = psf->sf_next) { - if (ipv6_addr_equal(&psf->sf_addr, src_addr)) + list_for_each_entry(psf, &mc->mca_source_list, list) { + if (ipv6_addr_equal(&psf->sf_addr, src_addr)) { + found_psf = true; break; + } } - if (psf) + if (found_psf) { rv = psf->sf_count[MCAST_INCLUDE] || - psf->sf_count[MCAST_EXCLUDE] != - mc->mca_sfcount[MCAST_EXCLUDE]; - else + psf->sf_count[MCAST_EXCLUDE] != + mc->mca_sfcount[MCAST_EXCLUDE]; + } else { rv = mc->mca_sfcount[MCAST_EXCLUDE] != 0; + } spin_unlock_bh(&mc->mca_lock); } else rv = true; /* don't filter unspecified source */ @@ -1097,7 +1110,7 @@ static bool mld_xmarksources(struct ifmcaddr6 *mc, int nsrcs, int i, scount; scount = 0; - for (psf = mc->mca_sources; psf; psf = psf->sf_next) { + list_for_each_entry(psf, &mc->mca_source_list, list) { if (scount == nsrcs) break; for (i = 0; i < nsrcs; i++) { @@ -1130,7 +1143,7 @@ static bool mld_marksources(struct ifmcaddr6 *mc, int nsrcs, /* mark INCLUDE-mode sources */ scount = 0; - for (psf = mc->mca_sources; psf; psf = psf->sf_next) { + list_for_each_entry(psf, &mc->mca_source_list, list) { if (scount == nsrcs) break; for (i = 0; i < nsrcs; i++) { @@ -1540,7 +1553,7 @@ static int mld_scount(struct ifmcaddr6 *mc, int type, int gdeleted, struct ip6_sf_list *psf; int scount = 0; - for (psf = mc->mca_sources; psf; psf = psf->sf_next) { + list_for_each_entry(psf, &mc->mca_source_list, list) { if (!is_in(mc, psf, type, gdeleted, sdeleted)) continue; scount++; @@ -1722,12 +1735,13 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *mc, int type, int gdeleted, int sdeleted, int crsend) { - struct ip6_sf_list *psf, *psf_next, *psf_prev, **psf_list; int scount, stotal, first, isquery, truncate; 
struct inet6_dev *idev = mc->idev; + struct ip6_sf_list *psf, *tmp; struct mld2_grec *pgr = NULL; struct mld2_report *pmr; struct net_device *dev; + struct list_head *head; unsigned int mtu; dev = idev->dev; @@ -1745,9 +1759,12 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *mc, stotal = scount = 0; - psf_list = sdeleted ? &mc->mca_tomb : &mc->mca_sources; + if (sdeleted) + head = &mc->mca_tomb_list; + else + head = &mc->mca_source_list; - if (!*psf_list) + if (list_empty(head)) goto empty_source; pmr = skb ? (struct mld2_report *)skb_transport_header(skb) : NULL; @@ -1761,17 +1778,13 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *mc, skb = mld_newpack(idev, mtu); } } + first = 1; - psf_prev = NULL; - for (psf = *psf_list; psf; psf = psf_next) { + list_for_each_entry_safe(psf, tmp, head, list) { struct in6_addr *psrc; - psf_next = psf->sf_next; - - if (!is_in(mc, psf, type, gdeleted, sdeleted) && !crsend) { - psf_prev = psf; + if (!is_in(mc, psf, type, gdeleted, sdeleted) && !crsend) continue; - } /* Based on RFC3810 6.1. Should not send source-list change * records when there is a filter mode change. 
@@ -1798,10 +1811,12 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *mc, first = 1; scount = 0; } + if (first) { skb = add_grhead(skb, mc, type, &pgr, mtu); first = 0; } + if (!skb) return NULL; psrc = skb_put(skb, sizeof(*psrc)); @@ -1812,15 +1827,11 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *mc, decrease_sf_crcount: psf->sf_crcount--; if ((sdeleted || gdeleted) && psf->sf_crcount == 0) { - if (psf_prev) - psf_prev->sf_next = psf->sf_next; - else - *psf_list = psf->sf_next; + list_del(&psf->list); kfree(psf); continue; } } - psf_prev = psf; } empty_source: @@ -1880,21 +1891,22 @@ static void mld_send_report(struct inet6_dev *idev, struct ifmcaddr6 *mc) /* * remove zero-count source records from a source filter list */ -static void mld_clear_zeros(struct ip6_sf_list **ppsf) +static void mld_clear_zeros(struct ifmcaddr6 *mc) { - struct ip6_sf_list *psf_prev, *psf_next, *psf; + struct ip6_sf_list *psf, *tmp; - psf_prev = NULL; - for (psf = *ppsf; psf; psf = psf_next) { - psf_next = psf->sf_next; + list_for_each_entry_safe(psf, tmp, &mc->mca_tomb_list, list) { if (psf->sf_crcount == 0) { - if (psf_prev) - psf_prev->sf_next = psf->sf_next; - else - *ppsf = psf->sf_next; + list_del(&psf->list); kfree(psf); - } else - psf_prev = psf; + } + } + + list_for_each_entry_safe(psf, tmp, &mc->mca_source_list, list) { + if (psf->sf_crcount == 0) { + list_del(&psf->list); + kfree(psf); + } } } @@ -1915,19 +1927,21 @@ static void mld_send_cr(struct inet6_dev *idev) skb = add_grec(skb, mc, type, 1, 0, 0); skb = add_grec(skb, mc, dtype, 1, 1, 0); } + if (mc->mca_crcount) { if (mc->mca_sfmode == MCAST_EXCLUDE) { type = MLD2_CHANGE_TO_INCLUDE; skb = add_grec(skb, mc, type, 1, 0, 0); } + mc->mca_crcount--; - if (mc->mca_crcount == 0) { - mld_clear_zeros(&mc->mca_tomb); - mld_clear_zeros(&mc->mca_sources); - } + if (mc->mca_crcount == 0) + mld_clear_zeros(mc); } - if (mc->mca_crcount == 0 && !mc->mca_tomb && - !mc->mca_sources) { + 
+ if (mc->mca_crcount == 0 && + list_empty(&mc->mca_tomb_list) && + list_empty(&mc->mca_source_list)) { list_del(&mc->list); in6_dev_put(mc->idev); kfree(mc); @@ -2117,33 +2131,32 @@ static void mld_dad_timer_expire(struct timer_list *t) static int ip6_mc_del1_src(struct ifmcaddr6 *mc, int sfmode, const struct in6_addr *psfsrc) { - struct ip6_sf_list *psf, *psf_prev; + struct ip6_sf_list *psf; + bool found = false; int rv = 0; - psf_prev = NULL; - for (psf = mc->mca_sources; psf; psf = psf->sf_next) { - if (ipv6_addr_equal(&psf->sf_addr, psfsrc)) + list_for_each_entry(psf, &mc->mca_source_list, list) { + if (ipv6_addr_equal(&psf->sf_addr, psfsrc)) { + found = true; break; - psf_prev = psf; + } } - if (!psf || psf->sf_count[sfmode] == 0) { + + if (!found || psf->sf_count[sfmode] == 0) { /* source filter not found, or count wrong => bug */ return -ESRCH; } + psf->sf_count[sfmode]--; if (!psf->sf_count[MCAST_INCLUDE] && !psf->sf_count[MCAST_EXCLUDE]) { struct inet6_dev *idev = mc->idev; /* no more filters for this source */ - if (psf_prev) - psf_prev->sf_next = psf->sf_next; - else - mc->mca_sources = psf->sf_next; + list_del_init(&psf->list); if (psf->sf_oldin && !(mc->mca_flags & MAF_NOREPORT) && !mld_in_v1_mode(idev)) { psf->sf_crcount = idev->mc_qrv; - psf->sf_next = mc->mca_tomb; - mc->mca_tomb = psf; + list_add(&psf->list, &mc->mca_tomb_list); rv = 1; } else { kfree(psf); @@ -2202,7 +2215,7 @@ static int ip6_mc_del_src(struct inet6_dev *idev, const struct in6_addr *mca, mc->mca_sfmode = MCAST_INCLUDE; mc->mca_crcount = idev->mc_qrv; idev->mc_ifc_count = mc->mca_crcount; - for (psf = mc->mca_sources; psf; psf = psf->sf_next) + list_for_each_entry(psf, &mc->mca_source_list, list) psf->sf_crcount = 0; mld_ifc_event(mc->idev); } else if (sf_setstate(mc) || changerec) { @@ -2219,24 +2232,24 @@ static int ip6_mc_del_src(struct inet6_dev *idev, const struct in6_addr *mca, static int ip6_mc_add1_src(struct ifmcaddr6 *mc, int sfmode, const struct in6_addr *psfsrc) { - 
struct ip6_sf_list *psf, *psf_prev; + struct ip6_sf_list *psf; + bool found = false; - psf_prev = NULL; - for (psf = mc->mca_sources; psf; psf = psf->sf_next) { - if (ipv6_addr_equal(&psf->sf_addr, psfsrc)) + list_for_each_entry(psf, &mc->mca_source_list, list) { + if (ipv6_addr_equal(&psf->sf_addr, psfsrc)) { + found = true; break; - psf_prev = psf; + } } - if (!psf) { + + if (!found) { psf = kzalloc(sizeof(*psf), GFP_ATOMIC); if (!psf) return -ENOBUFS; psf->sf_addr = *psfsrc; - if (psf_prev) { - psf_prev->sf_next = psf; - } else - mc->mca_sources = psf; + INIT_LIST_HEAD(&psf->list); + list_add_tail(&psf->list, &mc->mca_source_list); } psf->sf_count[sfmode]++; return 0; @@ -2247,7 +2260,7 @@ static void sf_markstate(struct ifmcaddr6 *mc) int mca_xcount = mc->mca_sfcount[MCAST_EXCLUDE]; struct ip6_sf_list *psf; - for (psf = mc->mca_sources; psf; psf = psf->sf_next) { + list_for_each_entry(psf, &mc->mca_source_list, list) { if (mc->mca_sfcount[MCAST_EXCLUDE]) { psf->sf_oldin = mca_xcount == psf->sf_count[MCAST_EXCLUDE] && @@ -2263,53 +2276,67 @@ static int sf_setstate(struct ifmcaddr6 *mc) struct ip6_sf_list *psf, *dpsf; int qrv = mc->idev->mc_qrv; int new_in, rv; + bool found; rv = 0; - for (psf = mc->mca_sources; psf; psf = psf->sf_next) { + list_for_each_entry(psf, &mc->mca_source_list, list) { + found = false; + if (mc->mca_sfcount[MCAST_EXCLUDE]) { new_in = mca_xcount == psf->sf_count[MCAST_EXCLUDE] && !psf->sf_count[MCAST_INCLUDE]; - } else + } else { new_in = psf->sf_count[MCAST_INCLUDE] != 0; + } + if (new_in) { - if (!psf->sf_oldin) { - struct ip6_sf_list *prev = NULL; + if (psf->sf_oldin) + continue; - for (dpsf = mc->mca_tomb; dpsf; - dpsf = dpsf->sf_next) { - if (ipv6_addr_equal(&dpsf->sf_addr, - &psf->sf_addr)) - break; - prev = dpsf; - } - if (dpsf) { - if (prev) - prev->sf_next = dpsf->sf_next; - else - mc->mca_tomb = dpsf->sf_next; - kfree(dpsf); + list_for_each_entry(dpsf, &mc->mca_tomb_list, list) { + if (ipv6_addr_equal(&dpsf->sf_addr, + 
&psf->sf_addr)) { + found = true; + break; } - psf->sf_crcount = qrv; - rv++; } + + if (found) { + list_del(&dpsf->list); + kfree(dpsf); + } + psf->sf_crcount = qrv; + rv++; } else if (psf->sf_oldin) { psf->sf_crcount = 0; /* * add or update "delete" records if an active filter * is now inactive */ - for (dpsf = mc->mca_tomb; dpsf; dpsf = dpsf->sf_next) + list_for_each_entry(dpsf, &mc->mca_tomb_list, list) { if (ipv6_addr_equal(&dpsf->sf_addr, - &psf->sf_addr)) + &psf->sf_addr)) { + found = true; break; - if (!dpsf) { + } + } + + if (!found) { dpsf = kmalloc(sizeof(*dpsf), GFP_ATOMIC); if (!dpsf) continue; - *dpsf = *psf; + + INIT_LIST_HEAD(&dpsf->list); + dpsf->sf_addr = psf->sf_addr; + dpsf->sf_count[MCAST_INCLUDE] = + psf->sf_count[MCAST_INCLUDE]; + dpsf->sf_count[MCAST_EXCLUDE] = + psf->sf_count[MCAST_EXCLUDE]; + dpsf->sf_gsresp = psf->sf_gsresp; + dpsf->sf_oldin = psf->sf_oldin; + dpsf->sf_crcount = psf->sf_crcount; /* mc->mca_lock held by callers */ - dpsf->sf_next = mc->mca_tomb; - mc->mca_tomb = dpsf; + list_add(&dpsf->list, &mc->mca_tomb_list); } dpsf->sf_crcount = qrv; rv++; @@ -2376,7 +2403,7 @@ static int ip6_mc_add_src(struct inet6_dev *idev, const struct in6_addr *mca, mc->mca_crcount = idev->mc_qrv; idev->mc_ifc_count = mc->mca_crcount; - for (psf = mc->mca_sources; psf; psf = psf->sf_next) + list_for_each_entry(psf, &mc->mca_source_list, list) psf->sf_crcount = 0; mld_ifc_event(idev); } else if (sf_setstate(mc)) @@ -2388,18 +2415,18 @@ static int ip6_mc_add_src(struct inet6_dev *idev, const struct in6_addr *mca, static void ip6_mc_clear_src(struct ifmcaddr6 *mc) { - struct ip6_sf_list *psf, *nextpsf; + struct ip6_sf_list *psf, *tmp; - for (psf = mc->mca_tomb; psf; psf = nextpsf) { - nextpsf = psf->sf_next; + list_for_each_entry_safe(psf, tmp, &mc->mca_tomb_list, list) { + list_del(&psf->list); kfree(psf); } - mc->mca_tomb = NULL; - for (psf = mc->mca_sources; psf; psf = nextpsf) { - nextpsf = psf->sf_next; + + list_for_each_entry_safe(psf, tmp, 
&mc->mca_source_list, list) { + list_del(&psf->list); kfree(psf); } - mc->mca_sources = NULL; + mc->mca_sfmode = MCAST_EXCLUDE; mc->mca_sfcount[MCAST_INCLUDE] = 0; mc->mca_sfcount[MCAST_EXCLUDE] = 1; @@ -2812,7 +2839,8 @@ static inline struct ip6_sf_list *igmp6_mcf_get_first(struct seq_file *seq) struct ifmcaddr6, list); if (likely(mc)) { spin_lock_bh(&mc->mca_lock); - psf = mc->mca_sources; + psf = list_first_entry_or_null(&mc->mca_source_list, + struct ip6_sf_list, list); if (likely(psf)) { state->mc = mc; state->idev = idev; @@ -2830,12 +2858,20 @@ static struct ip6_sf_list *igmp6_mcf_get_next(struct seq_file *seq, { struct igmp6_mcf_iter_state *state = igmp6_mcf_seq_private(seq); - psf = psf->sf_next; + list_for_each_entry_continue(psf, &state->mc->mca_source_list, list) + return psf; + + psf = NULL; while (!psf) { spin_unlock_bh(&state->mc->mca_lock); list_for_each_entry_continue(state->mc, &state->idev->mc_list, list) { spin_lock_bh(&state->mc->mca_lock); - psf = state->mc->mca_sources; + psf = list_first_entry_or_null(&state->mc->mca_source_list, + struct ip6_sf_list, list); + if (!psf) { + spin_unlock_bh(&state->mc->mca_lock); + continue; + } goto out; } @@ -2860,7 +2896,8 @@ static struct ip6_sf_list *igmp6_mcf_get_next(struct seq_file *seq, if (!state->mc) break; spin_lock_bh(&state->mc->mca_lock); - psf = state->mc->mca_sources; + psf = list_first_entry_or_null(&state->mc->mca_source_list, + struct ip6_sf_list, list); } out: return psf; From patchwork Mon Feb 8 17:54:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Taehee Yoo X-Patchwork-Id: 12076177 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, 
From: Taehee Yoo
To: davem@davemloft.net, kuba@kernel.org, netdev@vger.kernel.org, dsahern@kernel.org, xiyou.wangcong@gmail.com
Cc: ap420073@gmail.com
Subject: [PATCH net-next 3/8] mld: use mca_{get | put} instead of refcount_{dec | inc}
Date: Mon, 8 Feb 2021 17:54:21 +0000
Message-Id: <20210208175421.5126-1-ap420073@gmail.com>

mca_get() and mca_put() are wrappers around refcount_inc() and refcount_dec(). Using only the wrapper functions is more readable.
Signed-off-by: Taehee Yoo --- net/ipv6/mcast.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c index 9c4dc4c2ff01..45a983ed091e 100644 --- a/net/ipv6/mcast.c +++ b/net/ipv6/mcast.c @@ -723,7 +723,7 @@ static void igmp6_group_dropped(struct ifmcaddr6 *mc) spin_lock_bh(&mc->mca_lock); if (del_timer(&mc->mca_timer)) - refcount_dec(&mc->mca_refcnt); + mca_put(mc); spin_unlock_bh(&mc->mca_lock); } @@ -1089,7 +1089,7 @@ static void igmp6_group_queried(struct ifmcaddr6 *mc, unsigned long resptime) return; if (del_timer(&mc->mca_timer)) { - refcount_dec(&mc->mca_refcnt); + mca_put(mc); delay = mc->mca_timer.expires - jiffies; } @@ -1098,7 +1098,7 @@ static void igmp6_group_queried(struct ifmcaddr6 *mc, unsigned long resptime) mc->mca_timer.expires = jiffies + delay; if (!mod_timer(&mc->mca_timer, jiffies + delay)) - refcount_inc(&mc->mca_refcnt); + mca_get(mc); mc->mca_flags |= MAF_TIMER_RUNNING; } @@ -1493,7 +1493,7 @@ int igmp6_event_report(struct sk_buff *skb) if (ipv6_addr_equal(&mc->mca_addr, &mld->mld_mca)) { spin_lock(&mc->mca_lock); if (del_timer(&mc->mca_timer)) - refcount_dec(&mc->mca_refcnt); + mca_put(mc); mc->mca_flags &= ~(MAF_LAST_REPORTER | MAF_TIMER_RUNNING); spin_unlock(&mc->mca_lock); break; @@ -2446,12 +2446,12 @@ static void igmp6_join_group(struct ifmcaddr6 *mc) spin_lock_bh(&mc->mca_lock); if (del_timer(&mc->mca_timer)) { - refcount_dec(&mc->mca_refcnt); + mca_put(mc); delay = mc->mca_timer.expires - jiffies; } if (!mod_timer(&mc->mca_timer, jiffies + delay)) - refcount_inc(&mc->mca_refcnt); + mca_get(mc); mc->mca_flags |= MAF_TIMER_RUNNING | MAF_LAST_REPORTER; spin_unlock_bh(&mc->mca_lock); } From patchwork Mon Feb 8 17:54:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Taehee Yoo X-Patchwork-Id: 12076179 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) 
From: Taehee Yoo
To: davem@davemloft.net, kuba@kernel.org, netdev@vger.kernel.org, dsahern@kernel.org, xiyou.wangcong@gmail.com
Cc: ap420073@gmail.com
Subject: [PATCH net-next 4/8] mld: convert from timer to delayed work
Date: Mon, 8 Feb 2021 17:54:45 +0000
Message-Id: <20210208175445.5203-1-ap420073@gmail.com>

mcast.c uses several timers to delay work. A timer's expiry handler runs in BH context, so it cannot use sleepable primitives such as GFP_KERNEL allocations or mutexes. In order to allow sleepable APIs, convert the timers to delayed work. Some critical sections are still shared between process and BH context, so spin_lock_bh() and the rwlock remain.
Suggested-by: Cong Wang Signed-off-by: Taehee Yoo --- include/net/if_inet6.h | 8 +-- net/ipv6/mcast.c | 141 ++++++++++++++++++++++++----------------- 2 files changed, 86 insertions(+), 63 deletions(-) diff --git a/include/net/if_inet6.h b/include/net/if_inet6.h index cd17b756a2a5..096c0554d199 100644 --- a/include/net/if_inet6.h +++ b/include/net/if_inet6.h @@ -120,7 +120,7 @@ struct ifmcaddr6 { unsigned int mca_sfmode; unsigned char mca_crcount; unsigned long mca_sfcount[2]; - struct timer_list mca_timer; + struct delayed_work mca_work; unsigned int mca_flags; int mca_users; refcount_t mca_refcnt; @@ -178,9 +178,9 @@ struct inet6_dev { unsigned long mc_qri; /* Query Response Interval */ unsigned long mc_maxdelay; - struct timer_list mc_gq_timer; /* general query timer */ - struct timer_list mc_ifc_timer; /* interface change timer */ - struct timer_list mc_dad_timer; /* dad complete mc timer */ + struct delayed_work mc_gq_work; /* general query work */ + struct delayed_work mc_ifc_work; /* interface change work */ + struct delayed_work mc_dad_work; /* dad complete mc work */ struct ifacaddr6 *ac_list; rwlock_t lock; diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c index 45a983ed091e..ed31b3212b9e 100644 --- a/net/ipv6/mcast.c +++ b/net/ipv6/mcast.c @@ -29,7 +29,6 @@ #include #include #include -#include #include #include #include @@ -42,6 +41,7 @@ #include #include #include +#include #include #include @@ -67,11 +67,12 @@ static int __mld2_query_bugs[] __attribute__((__unused__)) = { BUILD_BUG_ON_ZERO(offsetof(struct mld2_grec, grec_mca) % 4) }; +static struct workqueue_struct *mld_wq; static struct in6_addr mld2_all_mcr = MLD2_ALL_MCR_INIT; static void igmp6_join_group(struct ifmcaddr6 *mc); static void igmp6_leave_group(struct ifmcaddr6 *mc); -static void igmp6_timer_handler(struct timer_list *t); +static void mld_mca_work(struct work_struct *work); static void mld_ifc_event(struct inet6_dev *idev); static bool mld_in_v1_mode(const struct inet6_dev *idev); @@ 
-722,7 +723,7 @@ static void igmp6_group_dropped(struct ifmcaddr6 *mc) igmp6_leave_group(mc); spin_lock_bh(&mc->mca_lock); - if (del_timer(&mc->mca_timer)) + if (cancel_delayed_work(&mc->mca_work)) mca_put(mc); spin_unlock_bh(&mc->mca_lock); } @@ -850,7 +851,7 @@ static struct ifmcaddr6 *mca_alloc(struct inet6_dev *idev, if (!mc) return NULL; - timer_setup(&mc->mca_timer, igmp6_timer_handler, 0); + INIT_DELAYED_WORK(&mc->mca_work, mld_mca_work); mc->mca_addr = *addr; mc->idev = idev; /* reference taken by caller */ @@ -1030,48 +1031,51 @@ bool ipv6_chk_mcast_addr(struct net_device *dev, const struct in6_addr *group, return rv; } -static void mld_gq_start_timer(struct inet6_dev *idev) +static void mld_gq_start_work(struct inet6_dev *idev) { unsigned long tv = prandom_u32() % idev->mc_maxdelay; idev->mc_gq_running = 1; - if (!mod_timer(&idev->mc_gq_timer, jiffies+tv+2)) + if (!mod_delayed_work(mld_wq, &idev->mc_gq_work, + msecs_to_jiffies(tv + 2))) in6_dev_hold(idev); } -static void mld_gq_stop_timer(struct inet6_dev *idev) +static void mld_gq_stop_work(struct inet6_dev *idev) { idev->mc_gq_running = 0; - if (del_timer(&idev->mc_gq_timer)) + if (cancel_delayed_work(&idev->mc_gq_work)) __in6_dev_put(idev); } -static void mld_ifc_start_timer(struct inet6_dev *idev, unsigned long delay) +static void mld_ifc_start_work(struct inet6_dev *idev, unsigned long delay) { unsigned long tv = prandom_u32() % delay; - if (!mod_timer(&idev->mc_ifc_timer, jiffies+tv+2)) + if (!mod_delayed_work(mld_wq, &idev->mc_ifc_work, + msecs_to_jiffies(tv + 2))) in6_dev_hold(idev); } -static void mld_ifc_stop_timer(struct inet6_dev *idev) +static void mld_ifc_stop_work(struct inet6_dev *idev) { idev->mc_ifc_count = 0; - if (del_timer(&idev->mc_ifc_timer)) + if (cancel_delayed_work(&idev->mc_ifc_work)) __in6_dev_put(idev); } -static void mld_dad_start_timer(struct inet6_dev *idev, unsigned long delay) +static void mld_dad_start_work(struct inet6_dev *idev, unsigned long delay) { unsigned long tv 
= prandom_u32() % delay; - if (!mod_timer(&idev->mc_dad_timer, jiffies+tv+2)) + if (!mod_delayed_work(mld_wq, &idev->mc_dad_work, + msecs_to_jiffies(tv + 2))) in6_dev_hold(idev); } -static void mld_dad_stop_timer(struct inet6_dev *idev) +static void mld_dad_stop_work(struct inet6_dev *idev) { - if (del_timer(&idev->mc_dad_timer)) + if (cancel_delayed_work(&idev->mc_dad_work)) __in6_dev_put(idev); } @@ -1083,21 +1087,21 @@ static void igmp6_group_queried(struct ifmcaddr6 *mc, unsigned long resptime) { unsigned long delay = resptime; - /* Do not start timer for these addresses */ + /* Do not start work for these addresses */ if (ipv6_addr_is_ll_all_nodes(&mc->mca_addr) || IPV6_ADDR_MC_SCOPE(&mc->mca_addr) < IPV6_ADDR_SCOPE_LINKLOCAL) return; - if (del_timer(&mc->mca_timer)) { + if (cancel_delayed_work(&mc->mca_work)) { mca_put(mc); - delay = mc->mca_timer.expires - jiffies; + delay = mc->mca_work.timer.expires - jiffies; } if (delay >= resptime) delay = prandom_u32() % resptime; - mc->mca_timer.expires = jiffies + delay; - if (!mod_timer(&mc->mca_timer, jiffies + delay)) + if (!mod_delayed_work(mld_wq, &mc->mca_work, + msecs_to_jiffies(delay))) mca_get(mc); mc->mca_flags |= MAF_TIMER_RUNNING; } @@ -1308,10 +1312,10 @@ static int mld_process_v1(struct inet6_dev *idev, struct mld_msg *mld, if (v1_query) mld_set_v1_mode(idev); - /* cancel MLDv2 report timer */ - mld_gq_stop_timer(idev); - /* cancel the interface change timer */ - mld_ifc_stop_timer(idev); + /* cancel MLDv2 report work */ + mld_gq_stop_work(idev); + /* cancel the interface change work */ + mld_ifc_stop_work(idev); /* clear deleted report items */ mld_clear_delrec(idev); @@ -1401,7 +1405,7 @@ int igmp6_event_query(struct sk_buff *skb) if (mlh2->mld2q_nsrcs) return -EINVAL; /* no sources allowed */ - mld_gq_start_timer(idev); + mld_gq_start_work(idev); return 0; } /* mark sources to include, if group & source-specific */ @@ -1485,14 +1489,14 @@ int igmp6_event_report(struct sk_buff *skb) return -ENODEV; /* 
- * Cancel the timer for this group + * Cancel the work for this group */ read_lock_bh(&idev->lock); list_for_each_entry(mc, &idev->mc_list, list) { if (ipv6_addr_equal(&mc->mca_addr, &mld->mld_mca)) { spin_lock(&mc->mca_lock); - if (del_timer(&mc->mca_timer)) + if (cancel_delayed_work(&mc->mca_work)) mca_put(mc); mc->mca_flags &= ~(MAF_LAST_REPORTER | MAF_TIMER_RUNNING); spin_unlock(&mc->mca_lock); @@ -2109,21 +2113,23 @@ void ipv6_mc_dad_complete(struct inet6_dev *idev) mld_send_initial_cr(idev); idev->mc_dad_count--; if (idev->mc_dad_count) - mld_dad_start_timer(idev, - unsolicited_report_interval(idev)); + mld_dad_start_work(idev, + unsolicited_report_interval(idev)); } } -static void mld_dad_timer_expire(struct timer_list *t) +static void mld_dad_work(struct work_struct *work) { - struct inet6_dev *idev = from_timer(idev, t, mc_dad_timer); + struct inet6_dev *idev = container_of(to_delayed_work(work), + struct inet6_dev, + mc_dad_work); mld_send_initial_cr(idev); if (idev->mc_dad_count) { idev->mc_dad_count--; if (idev->mc_dad_count) - mld_dad_start_timer(idev, - unsolicited_report_interval(idev)); + mld_dad_start_work(idev, + unsolicited_report_interval(idev)); } in6_dev_put(idev); } @@ -2445,12 +2451,13 @@ static void igmp6_join_group(struct ifmcaddr6 *mc) delay = prandom_u32() % unsolicited_report_interval(mc->idev); spin_lock_bh(&mc->mca_lock); - if (del_timer(&mc->mca_timer)) { + if (cancel_delayed_work(&mc->mca_work)) { mca_put(mc); - delay = mc->mca_timer.expires - jiffies; + delay = mc->mca_work.timer.expires - jiffies; } - if (!mod_timer(&mc->mca_timer, jiffies + delay)) + if (!mod_delayed_work(mld_wq, &mc->mca_work, + msecs_to_jiffies(delay))) mca_get(mc); mc->mca_flags |= MAF_TIMER_RUNNING | MAF_LAST_REPORTER; spin_unlock_bh(&mc->mca_lock); @@ -2487,25 +2494,27 @@ static void igmp6_leave_group(struct ifmcaddr6 *mc) } } -static void mld_gq_timer_expire(struct timer_list *t) +static void mld_gq_work(struct work_struct *work) { - struct inet6_dev *idev 
= from_timer(idev, t, mc_gq_timer); + struct inet6_dev *idev = container_of(to_delayed_work(work), + struct inet6_dev, mc_gq_work); idev->mc_gq_running = 0; mld_send_report(idev, NULL); in6_dev_put(idev); } -static void mld_ifc_timer_expire(struct timer_list *t) +static void mld_ifc_work(struct work_struct *work) { - struct inet6_dev *idev = from_timer(idev, t, mc_ifc_timer); + struct inet6_dev *idev = container_of(to_delayed_work(work), + struct inet6_dev, mc_ifc_work); mld_send_cr(idev); if (idev->mc_ifc_count) { idev->mc_ifc_count--; if (idev->mc_ifc_count) - mld_ifc_start_timer(idev, - unsolicited_report_interval(idev)); + mld_ifc_start_work(idev, + unsolicited_report_interval(idev)); } in6_dev_put(idev); } @@ -2515,22 +2524,23 @@ static void mld_ifc_event(struct inet6_dev *idev) if (mld_in_v1_mode(idev)) return; idev->mc_ifc_count = idev->mc_qrv; - mld_ifc_start_timer(idev, 1); + mld_ifc_start_work(idev, 1); } -static void igmp6_timer_handler(struct timer_list *t) +static void mld_mca_work(struct work_struct *work) { - struct ifmcaddr6 *mc = from_timer(mc, t, mca_timer); + struct ifmcaddr6 *mc = container_of(to_delayed_work(work), + struct ifmcaddr6, mca_work); if (mld_in_v1_mode(mc->idev)) igmp6_send(&mc->mca_addr, mc->idev->dev, ICMPV6_MGM_REPORT); else mld_send_report(mc->idev, mc); - spin_lock(&mc->mca_lock); + spin_lock_bh(&mc->mca_lock); mc->mca_flags |= MAF_LAST_REPORTER; mc->mca_flags &= ~MAF_TIMER_RUNNING; - spin_unlock(&mc->mca_lock); + spin_unlock_bh(&mc->mca_lock); mca_put(mc); } @@ -2566,12 +2576,12 @@ void ipv6_mc_down(struct inet6_dev *idev) list_for_each_entry_safe(mc, tmp, &idev->mc_list, list) igmp6_group_dropped(mc); - /* Should stop timer after group drop. or we will - * start timer again in mld_ifc_event() + /* Should stop work after group drop. 
or we will + * start work again in mld_ifc_event() */ - mld_ifc_stop_timer(idev); - mld_gq_stop_timer(idev); - mld_dad_stop_timer(idev); + mld_ifc_stop_work(idev); + mld_gq_stop_work(idev); + mld_dad_stop_work(idev); read_unlock_bh(&idev->lock); } @@ -2608,12 +2618,12 @@ void ipv6_mc_init_dev(struct inet6_dev *idev) write_lock_bh(&idev->lock); spin_lock_init(&idev->mc_tomb_lock); idev->mc_gq_running = 0; - timer_setup(&idev->mc_gq_timer, mld_gq_timer_expire, 0); + INIT_DELAYED_WORK(&idev->mc_gq_work, mld_gq_work); INIT_LIST_HEAD(&idev->mc_tomb_list); INIT_LIST_HEAD(&idev->mc_list); idev->mc_ifc_count = 0; - timer_setup(&idev->mc_ifc_timer, mld_ifc_timer_expire, 0); - timer_setup(&idev->mc_dad_timer, mld_dad_timer_expire, 0); + INIT_DELAYED_WORK(&idev->mc_ifc_work, mld_ifc_work); + INIT_DELAYED_WORK(&idev->mc_dad_work, mld_dad_work); ipv6_mc_reset(idev); write_unlock_bh(&idev->lock); } @@ -2626,7 +2636,7 @@ void ipv6_mc_destroy_dev(struct inet6_dev *idev) { struct ifmcaddr6 *mc, *tmp; - /* Deactivate timers */ + /* Deactivate works */ ipv6_mc_down(idev); mld_clear_delrec(idev); @@ -2799,7 +2809,7 @@ static int igmp6_mc_seq_show(struct seq_file *seq, void *v) &mc->mca_addr, mc->mca_users, mc->mca_flags, (mc->mca_flags & MAF_TIMER_RUNNING) ? 
- jiffies_to_clock_t(mc->mca_timer.expires - jiffies) : 0); + jiffies_to_clock_t(mc->mca_work.timer.expires - jiffies) : 0); return 0; } @@ -3062,7 +3072,19 @@ static struct pernet_operations igmp6_net_ops = { int __init igmp6_init(void) { - return register_pernet_subsys(&igmp6_net_ops); + int err; + + err = register_pernet_subsys(&igmp6_net_ops); + if (err) + return err; + + mld_wq = create_workqueue("mld"); + if (!mld_wq) { + unregister_pernet_subsys(&igmp6_net_ops); + return -ENOMEM; + } + + return err; } int __init igmp6_late_init(void) @@ -3073,6 +3095,7 @@ int __init igmp6_late_init(void) void igmp6_cleanup(void) { unregister_pernet_subsys(&igmp6_net_ops); + destroy_workqueue(mld_wq); } void igmp6_late_cleanup(void)
From patchwork Mon Feb 8 17:55:06 2021
X-Patchwork-Submitter: Taehee Yoo
X-Patchwork-Id: 12076181
X-Patchwork-Delegate: kuba@kernel.org
From: Taehee Yoo
To: davem@davemloft.net, kuba@kernel.org, netdev@vger.kernel.org, dsahern@kernel.org, xiyou.wangcong@gmail.com
Cc: ap420073@gmail.com
Subject: [PATCH net-next 5/8] mld: rename igmp6 to mld
Date: Mon, 8 Feb 2021 17:55:06 +0000
Message-Id: <20210208175506.5284-1-ap420073@gmail.com>

The IPv6 multicast protocol is MLD, not IGMP6, so the igmp6-prefixed names should be renamed.

Signed-off-by: Taehee Yoo
---
include/net/ndisc.h | 14 ++-- net/ipv6/af_inet6.c | 12 +-- net/ipv6/icmp.c | 4 +- net/ipv6/mcast.c | 198 ++++++++++++++++++++++---------------------- 4 files changed, 115 insertions(+), 113 deletions(-) diff --git a/include/net/ndisc.h b/include/net/ndisc.h index 38e4094960ce..09b1e5948b73 100644 --- a/include/net/ndisc.h +++ b/include/net/ndisc.h @@ -479,17 +479,17 @@ void ndisc_update(const struct net_device *dev, struct neighbour *neigh, struct ndisc_options *ndopts); /* - * IGMP + * MLD */ -int igmp6_init(void); -int igmp6_late_init(void); +int mld_init(void); +int mld_late_init(void); -void igmp6_cleanup(void); -void igmp6_late_cleanup(void); +void mld_cleanup(void); +void mld_late_cleanup(void); -int igmp6_event_query(struct sk_buff *skb); +int mld_event_query(struct sk_buff *skb); -int igmp6_event_report(struct sk_buff *skb); +int mld_event_report(struct sk_buff *skb); #ifdef CONFIG_SYSCTL diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c index 0e9994e0ecd7..ace6527171bd 100644 --- a/net/ipv6/af_inet6.c +++ b/net/ipv6/af_inet6.c @@ -1105,7 +1105,7 @@ static int __init inet6_init(void) err = ndisc_init(); if (err) goto ndisc_fail; - err = igmp6_init(); + err = mld_init(); if (err) goto igmp_fail; @@ -1186,9 +1186,9 @@ static int __init inet6_init(void) if (err) goto rpl_fail; - err = igmp6_late_init(); + err = mld_late_init(); if (err) - goto igmp6_late_err; + goto mld_late_err; #ifdef CONFIG_SYSCTL err =
ipv6_sysctl_register(); @@ -1205,9 +1205,9 @@ static int __init inet6_init(void) #ifdef CONFIG_SYSCTL sysctl_fail: - igmp6_late_cleanup(); + mld_late_cleanup(); #endif -igmp6_late_err: +mld_late_err: rpl_exit(); rpl_fail: seg6_exit(); @@ -1252,7 +1252,7 @@ static int __init inet6_init(void) #endif ipv6_netfilter_fini(); netfilter_fail: - igmp6_cleanup(); + mld_cleanup(); igmp_fail: ndisc_cleanup(); ndisc_fail: diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c index f3d05866692e..af0382a0de3c 100644 --- a/net/ipv6/icmp.c +++ b/net/ipv6/icmp.c @@ -943,11 +943,11 @@ static int icmpv6_rcv(struct sk_buff *skb) break; case ICMPV6_MGM_QUERY: - igmp6_event_query(skb); + mld_event_query(skb); break; case ICMPV6_MGM_REPORT: - igmp6_event_report(skb); + mld_event_report(skb); break; case ICMPV6_MGM_REDUCTION: diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c index ed31b3212b9e..21f3bbec5568 100644 --- a/net/ipv6/mcast.c +++ b/net/ipv6/mcast.c @@ -70,8 +70,8 @@ static int __mld2_query_bugs[] __attribute__((__unused__)) = { static struct workqueue_struct *mld_wq; static struct in6_addr mld2_all_mcr = MLD2_ALL_MCR_INIT; -static void igmp6_join_group(struct ifmcaddr6 *mc); -static void igmp6_leave_group(struct ifmcaddr6 *mc); +static void mld_join_group(struct ifmcaddr6 *mc); +static void mld_leave_group(struct ifmcaddr6 *mc); static void mld_mca_work(struct work_struct *work); static void mld_ifc_event(struct inet6_dev *idev); @@ -663,7 +663,7 @@ bool inet6_mc_check(struct sock *sk, const struct in6_addr *mc_addr, return rv; } -static void igmp6_group_added(struct ifmcaddr6 *mc) +static void mld_group_added(struct ifmcaddr6 *mc) { struct net_device *dev = mc->idev->dev; char buf[MAX_ADDR_LEN]; @@ -684,7 +684,7 @@ static void igmp6_group_added(struct ifmcaddr6 *mc) return; if (mld_in_v1_mode(mc->idev)) { - igmp6_join_group(mc); + mld_join_group(mc); return; } /* else v2 */ @@ -699,7 +699,7 @@ static void igmp6_group_added(struct ifmcaddr6 *mc) mld_ifc_event(mc->idev); } -static void 
igmp6_group_dropped(struct ifmcaddr6 *mc) +static void mld_group_dropped(struct ifmcaddr6 *mc) { struct net_device *dev = mc->idev->dev; char buf[MAX_ADDR_LEN]; @@ -720,7 +720,7 @@ static void igmp6_group_dropped(struct ifmcaddr6 *mc) return; if (!mc->idev->dead) - igmp6_leave_group(mc); + mld_leave_group(mc); spin_lock_bh(&mc->mca_lock); if (cancel_delayed_work(&mc->mca_work)) @@ -923,7 +923,7 @@ static int __ipv6_dev_mc_inc(struct net_device *dev, write_unlock_bh(&idev->lock); mld_del_delrec(idev, mc); - igmp6_group_added(mc); + mld_group_added(mc); mca_put(mc); return 0; } @@ -949,7 +949,7 @@ int __ipv6_dev_mc_dec(struct inet6_dev *idev, const struct in6_addr *addr) if (--mc->mca_users == 0) { list_del(&mc->list); write_unlock_bh(&idev->lock); - igmp6_group_dropped(mc); + mld_group_dropped(mc); ip6_mc_clear_src(mc); mca_put(mc); return 0; @@ -1080,10 +1080,10 @@ static void mld_dad_stop_work(struct inet6_dev *idev) } /* - * IGMP handling (alias multicast ICMPv6 messages) + * MLD handling (alias multicast ICMPv6 messages) */ -static void igmp6_group_queried(struct ifmcaddr6 *mc, unsigned long resptime) +static void mld_group_queried(struct ifmcaddr6 *mc, unsigned long resptime) { unsigned long delay = resptime; @@ -1337,7 +1337,7 @@ static int mld_process_v2(struct inet6_dev *idev, struct mld2_query *mld, } /* called with rcu_read_lock() */ -int igmp6_event_query(struct sk_buff *skb) +int mld_event_query(struct sk_buff *skb) { struct mld2_query *mlh2 = NULL; const struct in6_addr *group; @@ -1425,7 +1425,7 @@ int igmp6_event_query(struct sk_buff *skb) if (group_type == IPV6_ADDR_ANY) { list_for_each_entry(mc, &idev->mc_list, list) { spin_lock_bh(&mc->mca_lock); - igmp6_group_queried(mc, max_delay); + mld_group_queried(mc, max_delay); spin_unlock_bh(&mc->mca_lock); } } else { @@ -1446,7 +1446,7 @@ int igmp6_event_query(struct sk_buff *skb) } if (!(mc->mca_flags & MAF_GSQUERY) || mld_marksources(mc, ntohs(mlh2->mld2q_nsrcs), mlh2->mld2q_srcs)) - 
igmp6_group_queried(mc, max_delay); + mld_group_queried(mc, max_delay); spin_unlock_bh(&mc->mca_lock); break; } @@ -1457,7 +1457,7 @@ int igmp6_event_query(struct sk_buff *skb) } /* called with rcu_read_lock() */ -int igmp6_event_report(struct sk_buff *skb) +int mld_event_report(struct sk_buff *skb) { struct inet6_dev *idev; struct ifmcaddr6 *mc; @@ -1983,7 +1983,7 @@ static void mld_send_cr(struct inet6_dev *idev) mld_sendpack(skb); } -static void igmp6_send(struct in6_addr *addr, struct net_device *dev, int type) +static void mld_send(struct in6_addr *addr, struct net_device *dev, int type) { u8 ra[8] = { IPPROTO_ICMPV6, 0, IPV6_TLV_ROUTERALERT, @@ -2439,14 +2439,14 @@ static void ip6_mc_clear_src(struct ifmcaddr6 *mc) } -static void igmp6_join_group(struct ifmcaddr6 *mc) +static void mld_join_group(struct ifmcaddr6 *mc) { unsigned long delay; if (mc->mca_flags & MAF_NOREPORT) return; - igmp6_send(&mc->mca_addr, mc->idev->dev, ICMPV6_MGM_REPORT); + mld_send(&mc->mca_addr, mc->idev->dev, ICMPV6_MGM_REPORT); delay = prandom_u32() % unsolicited_report_interval(mc->idev); @@ -2482,12 +2482,12 @@ static int ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *iml, return err; } -static void igmp6_leave_group(struct ifmcaddr6 *mc) +static void mld_leave_group(struct ifmcaddr6 *mc) { if (mld_in_v1_mode(mc->idev)) { if (mc->mca_flags & MAF_LAST_REPORTER) - igmp6_send(&mc->mca_addr, mc->idev->dev, - ICMPV6_MGM_REDUCTION); + mld_send(&mc->mca_addr, mc->idev->dev, + ICMPV6_MGM_REDUCTION); } else { mld_add_delrec(mc->idev, mc); mld_ifc_event(mc->idev); @@ -2533,7 +2533,7 @@ static void mld_mca_work(struct work_struct *work) struct ifmcaddr6, mca_work); if (mld_in_v1_mode(mc->idev)) - igmp6_send(&mc->mca_addr, mc->idev->dev, ICMPV6_MGM_REPORT); + mld_send(&mc->mca_addr, mc->idev->dev, ICMPV6_MGM_REPORT); else mld_send_report(mc->idev, mc); @@ -2554,7 +2554,7 @@ void ipv6_mc_unmap(struct inet6_dev *idev) read_lock_bh(&idev->lock); list_for_each_entry_safe(mc, tmp, 
&idev->mc_list, list) - igmp6_group_dropped(mc); + mld_group_dropped(mc); read_unlock_bh(&idev->lock); } @@ -2574,7 +2574,7 @@ void ipv6_mc_down(struct inet6_dev *idev) read_lock_bh(&idev->lock); list_for_each_entry_safe(mc, tmp, &idev->mc_list, list) - igmp6_group_dropped(mc); + mld_group_dropped(mc); /* Should stop work after group drop. or we will * start work again in mld_ifc_event() @@ -2606,7 +2606,7 @@ void ipv6_mc_up(struct inet6_dev *idev) ipv6_mc_reset(idev); list_for_each_entry_safe(mc, tmp, &idev->mc_list, list) { mld_del_delrec(idev, mc); - igmp6_group_added(mc); + mld_group_added(mc); } read_unlock_bh(&idev->lock); } @@ -2670,7 +2670,7 @@ static void ipv6_mc_rejoin_groups(struct inet6_dev *idev) if (mld_in_v1_mode(idev)) { read_lock_bh(&idev->lock); list_for_each_entry(mc, &idev->mc_list, list) - igmp6_join_group(mc); + mld_join_group(mc); read_unlock_bh(&idev->lock); } else mld_send_report(idev, NULL); @@ -2695,22 +2695,22 @@ static int ipv6_mc_netdev_event(struct notifier_block *this, return NOTIFY_DONE; } -static struct notifier_block igmp6_netdev_notifier = { +static struct notifier_block mld_netdev_notifier = { .notifier_call = ipv6_mc_netdev_event, }; #ifdef CONFIG_PROC_FS -struct igmp6_mc_iter_state { +struct mld_mc_iter_state { struct seq_net_private p; struct net_device *dev; struct inet6_dev *idev; }; -#define igmp6_mc_seq_private(seq) ((struct igmp6_mc_iter_state *)(seq)->private) +#define mld_mc_seq_private(seq) ((struct mld_mc_iter_state *)(seq)->private) -static inline struct ifmcaddr6 *igmp6_mc_get_first(struct seq_file *seq) +static inline struct ifmcaddr6 *mld_mc_get_first(struct seq_file *seq) { - struct igmp6_mc_iter_state *state = igmp6_mc_seq_private(seq); + struct mld_mc_iter_state *state = mld_mc_seq_private(seq); struct net *net = seq_file_net(seq); struct ifmcaddr6 *mc; @@ -2732,9 +2732,9 @@ static inline struct ifmcaddr6 *igmp6_mc_get_first(struct seq_file *seq) return NULL; } -static struct ifmcaddr6 
*igmp6_mc_get_next(struct seq_file *seq, struct ifmcaddr6 *mc) +static struct ifmcaddr6 *mld_mc_get_next(struct seq_file *seq, struct ifmcaddr6 *mc) { - struct igmp6_mc_iter_state *state = igmp6_mc_seq_private(seq); + struct mld_mc_iter_state *state = mld_mc_seq_private(seq); list_for_each_entry_continue(mc, &state->idev->mc_list, list) return mc; @@ -2760,35 +2760,35 @@ static struct ifmcaddr6 *igmp6_mc_get_next(struct seq_file *seq, struct ifmcaddr return mc; } -static struct ifmcaddr6 *igmp6_mc_get_idx(struct seq_file *seq, loff_t pos) +static struct ifmcaddr6 *mld_mc_get_idx(struct seq_file *seq, loff_t pos) { - struct ifmcaddr6 *mc = igmp6_mc_get_first(seq); + struct ifmcaddr6 *mc = mld_mc_get_first(seq); if (mc) - while (pos && (mc = igmp6_mc_get_next(seq, mc)) != NULL) + while (pos && (mc = mld_mc_get_next(seq, mc)) != NULL) --pos; return pos ? NULL : mc; } -static void *igmp6_mc_seq_start(struct seq_file *seq, loff_t *pos) +static void *mld_mc_seq_start(struct seq_file *seq, loff_t *pos) __acquires(RCU) { rcu_read_lock(); - return igmp6_mc_get_idx(seq, *pos); + return mld_mc_get_idx(seq, *pos); } -static void *igmp6_mc_seq_next(struct seq_file *seq, void *v, loff_t *pos) +static void *mld_mc_seq_next(struct seq_file *seq, void *v, loff_t *pos) { - struct ifmcaddr6 *mc = igmp6_mc_get_next(seq, v); + struct ifmcaddr6 *mc = mld_mc_get_next(seq, v); ++*pos; return mc; } -static void igmp6_mc_seq_stop(struct seq_file *seq, void *v) +static void mld_mc_seq_stop(struct seq_file *seq, void *v) __releases(RCU) { - struct igmp6_mc_iter_state *state = igmp6_mc_seq_private(seq); + struct mld_mc_iter_state *state = mld_mc_seq_private(seq); if (likely(state->idev)) { read_unlock_bh(&state->idev->lock); @@ -2798,10 +2798,10 @@ static void igmp6_mc_seq_stop(struct seq_file *seq, void *v) rcu_read_unlock(); } -static int igmp6_mc_seq_show(struct seq_file *seq, void *v) +static int mld_mc_seq_show(struct seq_file *seq, void *v) { struct ifmcaddr6 *mc = (struct ifmcaddr6 *)v; 
- struct igmp6_mc_iter_state *state = igmp6_mc_seq_private(seq); + struct mld_mc_iter_state *state = mld_mc_seq_private(seq); seq_printf(seq, "%-4d %-15s %pi6 %5d %08X %ld\n", @@ -2813,25 +2813,25 @@ static int igmp6_mc_seq_show(struct seq_file *seq, void *v) return 0; } -static const struct seq_operations igmp6_mc_seq_ops = { - .start = igmp6_mc_seq_start, - .next = igmp6_mc_seq_next, - .stop = igmp6_mc_seq_stop, - .show = igmp6_mc_seq_show, +static const struct seq_operations mld_mc_seq_ops = { + .start = mld_mc_seq_start, + .next = mld_mc_seq_next, + .stop = mld_mc_seq_stop, + .show = mld_mc_seq_show, }; -struct igmp6_mcf_iter_state { +struct mld_mcf_iter_state { struct seq_net_private p; struct net_device *dev; struct inet6_dev *idev; struct ifmcaddr6 *mc; }; -#define igmp6_mcf_seq_private(seq) ((struct igmp6_mcf_iter_state *)(seq)->private) +#define mld_mcf_seq_private(seq) ((struct mld_mcf_iter_state *)(seq)->private) -static inline struct ip6_sf_list *igmp6_mcf_get_first(struct seq_file *seq) +static inline struct ip6_sf_list *mld_mcf_get_first(struct seq_file *seq) { - struct igmp6_mcf_iter_state *state = igmp6_mcf_seq_private(seq); + struct mld_mcf_iter_state *state = mld_mcf_seq_private(seq); struct net *net = seq_file_net(seq); struct ip6_sf_list *psf = NULL; struct ifmcaddr6 *mc = NULL; @@ -2863,10 +2863,10 @@ static inline struct ip6_sf_list *igmp6_mcf_get_first(struct seq_file *seq) return psf; } -static struct ip6_sf_list *igmp6_mcf_get_next(struct seq_file *seq, - struct ip6_sf_list *psf) +static struct ip6_sf_list *mld_mcf_get_next(struct seq_file *seq, + struct ip6_sf_list *psf) { - struct igmp6_mcf_iter_state *state = igmp6_mcf_seq_private(seq); + struct mld_mcf_iter_state *state = mld_mcf_seq_private(seq); list_for_each_entry_continue(psf, &state->mc->mca_source_list, list) return psf; @@ -2913,39 +2913,39 @@ static struct ip6_sf_list *igmp6_mcf_get_next(struct seq_file *seq, return psf; } -static struct ip6_sf_list *igmp6_mcf_get_idx(struct 
seq_file *seq, loff_t pos) +static struct ip6_sf_list *mld_mcf_get_idx(struct seq_file *seq, loff_t pos) { - struct ip6_sf_list *psf = igmp6_mcf_get_first(seq); + struct ip6_sf_list *psf = mld_mcf_get_first(seq); if (psf) - while (pos && (psf = igmp6_mcf_get_next(seq, psf)) != NULL) + while (pos && (psf = mld_mcf_get_next(seq, psf)) != NULL) --pos; return pos ? NULL : psf; } -static void *igmp6_mcf_seq_start(struct seq_file *seq, loff_t *pos) +static void *mld_mcf_seq_start(struct seq_file *seq, loff_t *pos) __acquires(RCU) { rcu_read_lock(); - return *pos ? igmp6_mcf_get_idx(seq, *pos - 1) : SEQ_START_TOKEN; + return *pos ? mld_mcf_get_idx(seq, *pos - 1) : SEQ_START_TOKEN; } -static void *igmp6_mcf_seq_next(struct seq_file *seq, void *v, loff_t *pos) +static void *mld_mcf_seq_next(struct seq_file *seq, void *v, loff_t *pos) { struct ip6_sf_list *psf; if (v == SEQ_START_TOKEN) - psf = igmp6_mcf_get_first(seq); + psf = mld_mcf_get_first(seq); else - psf = igmp6_mcf_get_next(seq, v); + psf = mld_mcf_get_next(seq, v); ++*pos; return psf; } -static void igmp6_mcf_seq_stop(struct seq_file *seq, void *v) +static void mld_mcf_seq_stop(struct seq_file *seq, void *v) __releases(RCU) { - struct igmp6_mcf_iter_state *state = igmp6_mcf_seq_private(seq); + struct mld_mcf_iter_state *state = mld_mcf_seq_private(seq); if (likely(state->mc)) { spin_unlock_bh(&state->mc->mca_lock); @@ -2959,10 +2959,10 @@ static void igmp6_mcf_seq_stop(struct seq_file *seq, void *v) rcu_read_unlock(); } -static int igmp6_mcf_seq_show(struct seq_file *seq, void *v) +static int mld_mcf_seq_show(struct seq_file *seq, void *v) { struct ip6_sf_list *psf = (struct ip6_sf_list *)v; - struct igmp6_mcf_iter_state *state = igmp6_mcf_seq_private(seq); + struct mld_mcf_iter_state *state = mld_mcf_seq_private(seq); if (v == SEQ_START_TOKEN) { seq_puts(seq, "Idx Device Multicast Address Source Address INC EXC\n"); @@ -2978,51 +2978,53 @@ static int igmp6_mcf_seq_show(struct seq_file *seq, void *v) return 0; } 
-static const struct seq_operations igmp6_mcf_seq_ops = { - .start = igmp6_mcf_seq_start, - .next = igmp6_mcf_seq_next, - .stop = igmp6_mcf_seq_stop, - .show = igmp6_mcf_seq_show, +static const struct seq_operations mld_mcf_seq_ops = { + .start = mld_mcf_seq_start, + .next = mld_mcf_seq_next, + .stop = mld_mcf_seq_stop, + .show = mld_mcf_seq_show, }; -static int __net_init igmp6_proc_init(struct net *net) +static int __net_init mld_proc_init(struct net *net) { int err; err = -ENOMEM; - if (!proc_create_net("igmp6", 0444, net->proc_net, &igmp6_mc_seq_ops, - sizeof(struct igmp6_mc_iter_state))) + if (!proc_create_net("igmp6", 0444, net->proc_net, &mld_mc_seq_ops, + sizeof(struct mld_mc_iter_state))) goto out; + if (!proc_create_net("mcfilter6", 0444, net->proc_net, - &igmp6_mcf_seq_ops, - sizeof(struct igmp6_mcf_iter_state))) - goto out_proc_net_igmp6; + &mld_mcf_seq_ops, + sizeof(struct mld_mcf_iter_state))) + goto out_proc_net_mld; err = 0; out: return err; -out_proc_net_igmp6: +out_proc_net_mld: remove_proc_entry("igmp6", net->proc_net); goto out; } -static void __net_exit igmp6_proc_exit(struct net *net) +static void __net_exit mld_proc_exit(struct net *net) { remove_proc_entry("mcfilter6", net->proc_net); remove_proc_entry("igmp6", net->proc_net); } #else -static inline int igmp6_proc_init(struct net *net) +static inline int mld_proc_init(struct net *net) { return 0; } -static inline void igmp6_proc_exit(struct net *net) + +static inline void mld_proc_exit(struct net *net) { } #endif -static int __net_init igmp6_net_init(struct net *net) +static int __net_init mld_net_init(struct net *net) { int err; @@ -3044,7 +3046,7 @@ static int __net_init igmp6_net_init(struct net *net) goto out_sock_create; } - err = igmp6_proc_init(net); + err = mld_proc_init(net); if (err) goto out_sock_create_autojoin; @@ -3058,47 +3060,47 @@ static int __net_init igmp6_net_init(struct net *net) return err; } -static void __net_exit igmp6_net_exit(struct net *net) +static void 
__net_exit mld_net_exit(struct net *net) { inet_ctl_sock_destroy(net->ipv6.igmp_sk); inet_ctl_sock_destroy(net->ipv6.mc_autojoin_sk); - igmp6_proc_exit(net); + mld_proc_exit(net); } -static struct pernet_operations igmp6_net_ops = { - .init = igmp6_net_init, - .exit = igmp6_net_exit, +static struct pernet_operations mld_net_ops = { + .init = mld_net_init, + .exit = mld_net_exit, }; -int __init igmp6_init(void) +int __init mld_init(void) { int err; - err = register_pernet_subsys(&igmp6_net_ops); + err = register_pernet_subsys(&mld_net_ops); if (err) return err; mld_wq = create_workqueue("mld"); if (!mld_wq) { - unregister_pernet_subsys(&igmp6_net_ops); + unregister_pernet_subsys(&mld_net_ops); return -ENOMEM; } return err; } -int __init igmp6_late_init(void) +int __init mld_late_init(void) { - return register_netdevice_notifier(&igmp6_netdev_notifier); + return register_netdevice_notifier(&mld_netdev_notifier); } -void igmp6_cleanup(void) +void mld_cleanup(void) { - unregister_pernet_subsys(&igmp6_net_ops); + unregister_pernet_subsys(&mld_net_ops); destroy_workqueue(mld_wq); } -void igmp6_late_cleanup(void) +void mld_late_cleanup(void) { - unregister_netdevice_notifier(&igmp6_netdev_notifier); + unregister_netdevice_notifier(&mld_netdev_notifier); }

From patchwork Mon Feb 8 17:57:48 2021
X-Patchwork-Submitter: Taehee Yoo
X-Patchwork-Id: 12076183
Subject: [PATCH net-next 6/8] mld: convert ipv6_mc_socklist to list macros
Date: Mon, 8 Feb 2021 17:57:48 +0000
Message-Id: <20210208175748.5628-1-ap420073@gmail.com>

Currently, struct ipv6_mc_socklist is kept on a hand-rolled singly linked list, so its code shape differs from the rest of the networking code. Convert it to the standard list API to improve readability.
Signed-off-by: Taehee Yoo --- .../chelsio/inline_crypto/chtls/chtls_cm.c | 1 + include/linux/ipv6.h | 2 +- include/net/if_inet6.h | 2 +- net/dccp/ipv6.c | 4 +- net/ipv6/af_inet6.c | 1 + net/ipv6/mcast.c | 190 +++++++++--------- net/ipv6/tcp_ipv6.c | 4 +- net/sctp/ipv6.c | 2 +- 8 files changed, 105 insertions(+), 101 deletions(-) diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c index 19dc7dc054a2..729d9de9db62 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c @@ -1204,6 +1204,7 @@ static struct sock *chtls_recv_sock(struct sock *lsk, newsk->sk_v6_rcv_saddr = treq->ir_v6_loc_addr; inet6_sk(newsk)->saddr = treq->ir_v6_loc_addr; newnp->ipv6_fl_list = NULL; + INIT_LIST_HEAD(&newnp->ipv6_mc_list); newnp->pktoptions = NULL; newsk->sk_bound_dev_if = treq->ir_iif; newinet->inet_opt = NULL; diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h index 9d1f29f0c512..66533e42a758 100644 --- a/include/linux/ipv6.h +++ b/include/linux/ipv6.h @@ -282,7 +282,7 @@ struct ipv6_pinfo { __u32 dst_cookie; __u32 rx_dst_cookie; - struct ipv6_mc_socklist __rcu *ipv6_mc_list; + struct list_head ipv6_mc_list; struct ipv6_ac_socklist *ipv6_ac_list; struct ipv6_fl_socklist __rcu *ipv6_fl_list; diff --git a/include/net/if_inet6.h b/include/net/if_inet6.h index 096c0554d199..babf19c27b29 100644 --- a/include/net/if_inet6.h +++ b/include/net/if_inet6.h @@ -90,7 +90,7 @@ struct ipv6_mc_socklist { struct in6_addr addr; int ifindex; unsigned int sfmode; /* MCAST_{INCLUDE,EXCLUDE} */ - struct ipv6_mc_socklist __rcu *next; + struct list_head list; rwlock_t sflock; struct ip6_sf_socklist *sflist; struct rcu_head rcu; diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c index 1f73603913f5..3a6332b9b845 100644 --- a/net/dccp/ipv6.c +++ b/net/dccp/ipv6.c @@ -430,7 +430,7 @@ static struct sock *dccp_v6_request_recv_sock(const struct sock 
*sk, newsk->sk_backlog_rcv = dccp_v4_do_rcv; newnp->pktoptions = NULL; newnp->opt = NULL; - newnp->ipv6_mc_list = NULL; + INIT_LIST_HEAD(&newnp->ipv6_mc_list); newnp->ipv6_ac_list = NULL; newnp->ipv6_fl_list = NULL; newnp->mcast_oif = inet_iif(skb); @@ -497,7 +497,7 @@ static struct sock *dccp_v6_request_recv_sock(const struct sock *sk, /* Clone RX bits */ newnp->rxopt.all = np->rxopt.all; - newnp->ipv6_mc_list = NULL; + INIT_LIST_HEAD(&newnp->ipv6_mc_list); newnp->ipv6_ac_list = NULL; newnp->ipv6_fl_list = NULL; newnp->pktoptions = NULL; diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c index ace6527171bd..ae3a1865189f 100644 --- a/net/ipv6/af_inet6.c +++ b/net/ipv6/af_inet6.c @@ -207,6 +207,7 @@ static int inet6_create(struct net *net, struct socket *sock, int protocol, inet_sk(sk)->pinet6 = np = inet6_sk_generic(sk); np->hop_limit = -1; + INIT_LIST_HEAD(&np->ipv6_mc_list); np->mcast_hops = IPV6_DEFAULT_MCASTHOPS; np->mc_loop = 1; np->mc_all = 1; diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c index 21f3bbec5568..f4fc29fcdf48 100644 --- a/net/ipv6/mcast.c +++ b/net/ipv6/mcast.c @@ -85,7 +85,7 @@ static int ip6_mc_del_src(struct inet6_dev *idev, const struct in6_addr *mca, static int ip6_mc_add_src(struct inet6_dev *idev, const struct in6_addr *mca, int sfmode, int sfcount, const struct in6_addr *psfsrc, int delta); -static int ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *iml, +static int ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *mc_lst, struct inet6_dev *idev); static int __ipv6_dev_mc_inc(struct net_device *dev, const struct in6_addr *addr, unsigned int mode); @@ -109,11 +109,6 @@ int sysctl_mld_qrv __read_mostly = MLD_QRV_DEFAULT; * socket join on multicast group */ -#define for_each_mc_rcu(np, mc) \ - for (mc = rcu_dereference((np)->ipv6_mc_list); \ - mc; \ - mc = rcu_dereference(mc->next)) - static void mca_get(struct ifmcaddr6 *mc) { refcount_inc(&mc->mca_refcnt); @@ -142,10 +137,10 @@ static int 
unsolicited_report_interval(struct inet6_dev *idev) static int __ipv6_sock_mc_join(struct sock *sk, int ifindex, const struct in6_addr *addr, unsigned int mode) { - struct net_device *dev = NULL; - struct ipv6_mc_socklist *mc_lst; struct ipv6_pinfo *np = inet6_sk(sk); + struct ipv6_mc_socklist *mc_lst; struct net *net = sock_net(sk); + struct net_device *dev = NULL; int err; ASSERT_RTNL(); @@ -153,22 +148,17 @@ static int __ipv6_sock_mc_join(struct sock *sk, int ifindex, if (!ipv6_addr_is_multicast(addr)) return -EINVAL; - rcu_read_lock(); - for_each_mc_rcu(np, mc_lst) { + list_for_each_entry(mc_lst, &np->ipv6_mc_list, list) { if ((ifindex == 0 || mc_lst->ifindex == ifindex) && - ipv6_addr_equal(&mc_lst->addr, addr)) { - rcu_read_unlock(); + ipv6_addr_equal(&mc_lst->addr, addr)) return -EADDRINUSE; - } } - rcu_read_unlock(); mc_lst = sock_kmalloc(sk, sizeof(struct ipv6_mc_socklist), GFP_KERNEL); - if (!mc_lst) return -ENOMEM; - mc_lst->next = NULL; + INIT_LIST_HEAD(&mc_lst->list); mc_lst->addr = *addr; if (ifindex == 0) { @@ -202,8 +192,7 @@ static int __ipv6_sock_mc_join(struct sock *sk, int ifindex, return err; } - mc_lst->next = np->ipv6_mc_list; - rcu_assign_pointer(np->ipv6_mc_list, mc_lst); + list_add_rcu(&mc_lst->list, &np->ipv6_mc_list); return 0; } @@ -227,7 +216,6 @@ int ipv6_sock_mc_drop(struct sock *sk, int ifindex, const struct in6_addr *addr) { struct ipv6_pinfo *np = inet6_sk(sk); struct ipv6_mc_socklist *mc_lst; - struct ipv6_mc_socklist __rcu **lnk; struct net *net = sock_net(sk); ASSERT_RTNL(); @@ -235,25 +223,22 @@ int ipv6_sock_mc_drop(struct sock *sk, int ifindex, const struct in6_addr *addr) if (!ipv6_addr_is_multicast(addr)) return -EINVAL; - for (lnk = &np->ipv6_mc_list; - (mc_lst = rtnl_dereference(*lnk)) != NULL; - lnk = &mc_lst->next) { + list_for_each_entry(mc_lst, &np->ipv6_mc_list, list) { if ((ifindex == 0 || mc_lst->ifindex == ifindex) && ipv6_addr_equal(&mc_lst->addr, addr)) { struct net_device *dev; - *lnk = mc_lst->next; - dev = 
__dev_get_by_index(net, mc_lst->ifindex); if (dev) { struct inet6_dev *idev = __in6_dev_get(dev); - (void) ip6_mc_leave_src(sk, mc_lst, idev); + ip6_mc_leave_src(sk, mc_lst, idev); if (idev) __ipv6_dev_mc_dec(idev, &mc_lst->addr); } else - (void) ip6_mc_leave_src(sk, mc_lst, NULL); + ip6_mc_leave_src(sk, mc_lst, NULL); + list_del_rcu(&mc_lst->list); atomic_sub(sizeof(*mc_lst), &sk->sk_omem_alloc); kfree_rcu(mc_lst, rcu); return 0; @@ -297,27 +282,27 @@ static struct inet6_dev *ip6_mc_find_dev_rcu(struct net *net, void __ipv6_sock_mc_close(struct sock *sk) { + struct ipv6_mc_socklist *mc_lst, *tmp; struct ipv6_pinfo *np = inet6_sk(sk); - struct ipv6_mc_socklist *mc_lst; struct net *net = sock_net(sk); ASSERT_RTNL(); - while ((mc_lst = rtnl_dereference(np->ipv6_mc_list)) != NULL) { + list_for_each_entry_safe(mc_lst, tmp, &np->ipv6_mc_list, list) { struct net_device *dev; - np->ipv6_mc_list = mc_lst->next; - dev = __dev_get_by_index(net, mc_lst->ifindex); if (dev) { struct inet6_dev *idev = __in6_dev_get(dev); - (void) ip6_mc_leave_src(sk, mc_lst, idev); + ip6_mc_leave_src(sk, mc_lst, idev); if (idev) __ipv6_dev_mc_dec(idev, &mc_lst->addr); - } else - (void) ip6_mc_leave_src(sk, mc_lst, NULL); + } else { + ip6_mc_leave_src(sk, mc_lst, NULL); + } + list_del_rcu(&mc_lst->list); atomic_sub(sizeof(*mc_lst), &sk->sk_omem_alloc); kfree_rcu(mc_lst, rcu); } @@ -327,23 +312,27 @@ void ipv6_sock_mc_close(struct sock *sk) { struct ipv6_pinfo *np = inet6_sk(sk); - if (!rcu_access_pointer(np->ipv6_mc_list)) - return; rtnl_lock(); + if (list_empty(&np->ipv6_mc_list)) { + rtnl_unlock(); + return; + } + __ipv6_sock_mc_close(sk); rtnl_unlock(); } int ip6_mc_source(int add, int omode, struct sock *sk, - struct group_source_req *pgsr) + struct group_source_req *pgsr) { struct ipv6_pinfo *inet6 = inet6_sk(sk); struct in6_addr *source, *group; + struct ipv6_mc_socklist *mc_lst; struct net *net = sock_net(sk); - struct ipv6_mc_socklist *mc; struct ip6_sf_socklist *psl; struct inet6_dev 
*idev; int leavegroup = 0; + bool found = false; int mclocked = 0; int i, j, rv; int err; @@ -363,33 +352,35 @@ int ip6_mc_source(int add, int omode, struct sock *sk, err = -EADDRNOTAVAIL; - for_each_mc_rcu(inet6, mc) { - if (pgsr->gsr_interface && mc->ifindex != pgsr->gsr_interface) + list_for_each_entry_rcu(mc_lst, &inet6->ipv6_mc_list, list) { + if (pgsr->gsr_interface && mc_lst->ifindex != pgsr->gsr_interface) continue; - if (ipv6_addr_equal(&mc->addr, group)) + if (ipv6_addr_equal(&mc_lst->addr, group)) { + found = true; break; + } } - if (!mc) { /* must have a prior join */ + if (!found) { /* must have a prior join */ err = -EINVAL; goto done; } /* if a source filter was set, must be the same mode as before */ - if (mc->sflist) { - if (mc->sfmode != omode) { + if (mc_lst->sflist) { + if (mc_lst->sfmode != omode) { err = -EINVAL; goto done; } - } else if (mc->sfmode != omode) { + } else if (mc_lst->sfmode != omode) { /* allow mode switches for empty-set filters */ ip6_mc_add_src(idev, group, omode, 0, NULL, 0); - ip6_mc_del_src(idev, group, mc->sfmode, 0, NULL, 0); - mc->sfmode = omode; + ip6_mc_del_src(idev, group, mc_lst->sfmode, 0, NULL, 0); + mc_lst->sfmode = omode; } - write_lock(&mc->sflock); + write_lock(&mc_lst->sflock); mclocked = 1; - psl = mc->sflist; + psl = mc_lst->sflist; if (!add) { if (!psl) goto done; /* err = -EADDRNOTAVAIL */ @@ -442,7 +433,7 @@ int ip6_mc_source(int add, int omode, struct sock *sk, sock_kfree_s(sk, psl, IP6_SFLSIZE(psl->sl_max)); } psl = newpsl; - mc->sflist = psl; + mc_lst->sflist = psl; } rv = 1; /* > 0 for insert logic below if sl_count is 0 */ for (i = 0; i < psl->sl_count; i++) { @@ -459,7 +450,7 @@ int ip6_mc_source(int add, int omode, struct sock *sk, ip6_mc_add_src(idev, group, omode, 1, source, 1); done: if (mclocked) - write_unlock(&mc->sflock); + write_unlock(&mc_lst->sflock); read_unlock_bh(&idev->lock); rcu_read_unlock(); if (leavegroup) @@ -472,11 +463,12 @@ int ip6_mc_msfilter(struct sock *sk, struct 
group_filter *gsf, { struct ipv6_pinfo *inet6 = inet6_sk(sk); struct ip6_sf_socklist *newpsl, *psl; + struct ipv6_mc_socklist *mc_lst; struct net *net = sock_net(sk); const struct in6_addr *group; - struct ipv6_mc_socklist *mc; struct inet6_dev *idev; int leavegroup = 0; + bool found = false; int i, err; group = &((struct sockaddr_in6 *)&gsf->gf_group)->sin6_addr; @@ -502,13 +494,15 @@ int ip6_mc_msfilter(struct sock *sk, struct group_filter *gsf, goto done; } - for_each_mc_rcu(inet6, mc) { - if (mc->ifindex != gsf->gf_interface) + list_for_each_entry_rcu(mc_lst, &inet6->ipv6_mc_list, list) { + if (mc_lst->ifindex != gsf->gf_interface) continue; - if (ipv6_addr_equal(&mc->addr, group)) + if (ipv6_addr_equal(&mc_lst->addr, group)) { + found = true; break; + } } - if (!mc) { /* must have a prior join */ + if (!found) { /* must have a prior join */ err = -EINVAL; goto done; } @@ -537,17 +531,17 @@ int ip6_mc_msfilter(struct sock *sk, struct group_filter *gsf, ip6_mc_add_src(idev, group, gsf->gf_fmode, 0, NULL, 0); } - write_lock(&mc->sflock); - psl = mc->sflist; + write_lock(&mc_lst->sflock); + psl = mc_lst->sflist; if (psl) { - ip6_mc_del_src(idev, group, mc->sfmode, + ip6_mc_del_src(idev, group, mc_lst->sfmode, psl->sl_count, psl->sl_addr, 0); sock_kfree_s(sk, psl, IP6_SFLSIZE(psl->sl_max)); } else - ip6_mc_del_src(idev, group, mc->sfmode, 0, NULL, 0); - mc->sflist = newpsl; - mc->sfmode = gsf->gf_fmode; - write_unlock(&mc->sflock); + ip6_mc_del_src(idev, group, mc_lst->sfmode, 0, NULL, 0); + mc_lst->sflist = newpsl; + mc_lst->sfmode = gsf->gf_fmode; + write_unlock(&mc_lst->sflock); err = 0; done: read_unlock_bh(&idev->lock); @@ -560,13 +554,14 @@ int ip6_mc_msfilter(struct sock *sk, struct group_filter *gsf, int ip6_mc_msfget(struct sock *sk, struct group_filter *gsf, struct sockaddr_storage __user *p) { + struct ipv6_pinfo *inet6 = inet6_sk(sk); + struct ipv6_mc_socklist *mc_lst; + struct net *net = sock_net(sk); int err, i, count, copycount; const struct in6_addr 
*group; - struct ipv6_mc_socklist *mc; - struct inet6_dev *idev; - struct ipv6_pinfo *inet6 = inet6_sk(sk); struct ip6_sf_socklist *psl; - struct net *net = sock_net(sk); + struct inet6_dev *idev; + bool found = false; group = &((struct sockaddr_in6 *)&gsf->gf_group)->sin6_addr; @@ -587,16 +582,18 @@ int ip6_mc_msfget(struct sock *sk, struct group_filter *gsf, * so reading the list is safe. */ - for_each_mc_rcu(inet6, mc) { - if (mc->ifindex != gsf->gf_interface) + list_for_each_entry_rcu(mc_lst, &inet6->ipv6_mc_list, list) { + if (mc_lst->ifindex != gsf->gf_interface) continue; - if (ipv6_addr_equal(group, &mc->addr)) + if (ipv6_addr_equal(group, &mc_lst->addr)) { + found = true; break; + } } - if (!mc) /* must have a prior join */ + if (!found) /* must have a prior join */ goto done; - gsf->gf_fmode = mc->sfmode; - psl = mc->sflist; + gsf->gf_fmode = mc_lst->sfmode; + psl = mc_lst->sflist; count = psl ? psl->sl_count : 0; read_unlock_bh(&idev->lock); rcu_read_unlock(); @@ -604,7 +601,7 @@ int ip6_mc_msfget(struct sock *sk, struct group_filter *gsf, copycount = count < gsf->gf_numsrc ? count : gsf->gf_numsrc; gsf->gf_numsrc = count; /* changes to psl require the socket lock, and a write lock - * on mc->sflock. We have the socket lock so reading here is safe. + * on mc_lst->sflock. We have the socket lock so reading here is safe. 
*/ for (i = 0; i < copycount; i++, p++) { struct sockaddr_in6 *psin6; @@ -628,23 +625,25 @@ bool inet6_mc_check(struct sock *sk, const struct in6_addr *mc_addr, const struct in6_addr *src_addr) { struct ipv6_pinfo *np = inet6_sk(sk); - struct ipv6_mc_socklist *mc; + struct ipv6_mc_socklist *mc_lst; + bool rv = true, found = false; struct ip6_sf_socklist *psl; - bool rv = true; rcu_read_lock(); - for_each_mc_rcu(np, mc) { - if (ipv6_addr_equal(&mc->addr, mc_addr)) + list_for_each_entry_rcu(mc_lst, &np->ipv6_mc_list, list) { + if (ipv6_addr_equal(&mc_lst->addr, mc_addr)) { + found = true; break; + } } - if (!mc) { + if (!found) { rcu_read_unlock(); return np->mc_all; } - read_lock(&mc->sflock); - psl = mc->sflist; + read_lock(&mc_lst->sflock); + psl = mc_lst->sflist; if (!psl) { - rv = mc->sfmode == MCAST_EXCLUDE; + rv = mc_lst->sfmode == MCAST_EXCLUDE; } else { int i; @@ -652,12 +651,12 @@ bool inet6_mc_check(struct sock *sk, const struct in6_addr *mc_addr, if (ipv6_addr_equal(&psl->sl_addr[i], src_addr)) break; } - if (mc->sfmode == MCAST_INCLUDE && i >= psl->sl_count) + if (mc_lst->sfmode == MCAST_INCLUDE && i >= psl->sl_count) rv = false; - if (mc->sfmode == MCAST_EXCLUDE && i < psl->sl_count) + if (mc_lst->sfmode == MCAST_EXCLUDE && i < psl->sl_count) rv = false; } - read_unlock(&mc->sflock); + read_unlock(&mc_lst->sflock); rcu_read_unlock(); return rv; @@ -2463,22 +2462,25 @@ static void mld_join_group(struct ifmcaddr6 *mc) spin_unlock_bh(&mc->mca_lock); } -static int ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *iml, +static int ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *mc_lst, struct inet6_dev *idev) { int err; - write_lock_bh(&iml->sflock); - if (!iml->sflist) { + write_lock_bh(&mc_lst->sflock); + if (!mc_lst->sflist) { /* any-source empty exclude case */ - err = ip6_mc_del_src(idev, &iml->addr, iml->sfmode, 0, NULL, 0); + err = ip6_mc_del_src(idev, &mc_lst->addr, mc_lst->sfmode, + 0, NULL, 0); } else { - err = 
ip6_mc_del_src(idev, &iml->addr, iml->sfmode, - iml->sflist->sl_count, iml->sflist->sl_addr, 0); - sock_kfree_s(sk, iml->sflist, IP6_SFLSIZE(iml->sflist->sl_max)); - iml->sflist = NULL; - } - write_unlock_bh(&iml->sflock); + err = ip6_mc_del_src(idev, &mc_lst->addr, mc_lst->sfmode, + mc_lst->sflist->sl_count, + mc_lst->sflist->sl_addr, 0); + sock_kfree_s(sk, mc_lst->sflist, + IP6_SFLSIZE(mc_lst->sflist->sl_max)); + mc_lst->sflist = NULL; + } + write_unlock_bh(&mc_lst->sflock); return err; } diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index d093ef3ef060..b6cb600fd02a 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -1242,7 +1242,7 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff * newtp->af_specific = &tcp_sock_ipv6_mapped_specific; #endif - newnp->ipv6_mc_list = NULL; + INIT_LIST_HEAD(&newnp->ipv6_mc_list); newnp->ipv6_ac_list = NULL; newnp->ipv6_fl_list = NULL; newnp->pktoptions = NULL; @@ -1311,7 +1311,7 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff * First: no IPv4 options. 
*/ newinet->inet_opt = NULL; - newnp->ipv6_mc_list = NULL; + INIT_LIST_HEAD(&newnp->ipv6_mc_list); newnp->ipv6_ac_list = NULL; newnp->ipv6_fl_list = NULL; diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c index c3e89c776e66..4842e538a988 100644 --- a/net/sctp/ipv6.c +++ b/net/sctp/ipv6.c @@ -754,7 +754,7 @@ static struct sock *sctp_v6_create_accept_sk(struct sock *sk, newnp = inet6_sk(newsk); memcpy(newnp, np, sizeof(struct ipv6_pinfo)); - newnp->ipv6_mc_list = NULL; + INIT_LIST_HEAD(&newnp->ipv6_mc_list); newnp->ipv6_ac_list = NULL; newnp->ipv6_fl_list = NULL;

From patchwork Mon Feb 8 17:58:20 2021
X-Patchwork-Submitter: Taehee Yoo
X-Patchwork-Id: 12076185
From: Taehee Yoo To: davem@davemloft.net, kuba@kernel.org,
netdev@vger.kernel.org, dsahern@kernel.org, xiyou.wangcong@gmail.com Cc: ap420073@gmail.com
Subject: [PATCH net-next 7/8] mld: convert ip6_sf_socklist to list macros
Date: Mon, 8 Feb 2021 17:58:20 +0000
Message-Id: <20210208175820.5690-1-ap420073@gmail.com>

Currently, struct ip6_sf_socklist is kept in a resizable array rather than on a list, so its code shape differs from the rest of the networking code. Convert it to the standard list API to improve readability.

Signed-off-by: Taehee Yoo
Reported-by: kernel test robot
--- include/net/if_inet6.h | 19 +- include/uapi/linux/in.h | 4 +- net/ipv6/mcast.c | 387 +++++++++++++++++++++++++--------------- 3 files changed, 256 insertions(+), 154 deletions(-) diff --git a/include/net/if_inet6.h b/include/net/if_inet6.h index babf19c27b29..6885ab8ec2e9 100644 --- a/include/net/if_inet6.h +++ b/include/net/if_inet6.h @@ -13,6 +13,7 @@ #include #include #include +#include /* inet6_dev.if_flags */ @@ -76,23 +77,19 @@ struct inet6_ifaddr { }; struct ip6_sf_socklist { - unsigned int sl_max; - unsigned int sl_count; - struct in6_addr sl_addr[]; + struct list_head list; + struct in6_addr sl_addr; + struct rcu_head rcu; }; -#define IP6_SFLSIZE(count) (sizeof(struct ip6_sf_socklist) + \ - (count) * sizeof(struct in6_addr)) - -#define IP6_SFBLOCK 10 /* allocate this many at once */ - struct ipv6_mc_socklist { struct in6_addr addr; int ifindex; - unsigned int sfmode; /* MCAST_{INCLUDE,EXCLUDE} */ + bool sfmode; /* MCAST_{INCLUDE,EXCLUDE} */ struct list_head list; + struct list_head sflist; rwlock_t sflock; - struct ip6_sf_socklist *sflist; + atomic_t sl_count; struct rcu_head rcu; }; @@ -101,7 +98,7 @@ struct ip6_sf_list { struct in6_addr sf_addr; unsigned long sf_count[2]; /* include/exclude counts */ unsigned char sf_gsresp; /* include in g & s response?
*/ - unsigned char sf_oldin; /* change state */ + bool sf_oldin; /* change state */ unsigned char sf_crcount; /* retrans. left to send */ }; diff --git a/include/uapi/linux/in.h b/include/uapi/linux/in.h index 7d6687618d80..97024873afd0 100644 --- a/include/uapi/linux/in.h +++ b/include/uapi/linux/in.h @@ -160,8 +160,8 @@ struct in_addr { #define IP_MULTICAST_ALL 49 #define IP_UNICAST_IF 50 -#define MCAST_EXCLUDE 0 -#define MCAST_INCLUDE 1 +#define MCAST_EXCLUDE false +#define MCAST_INCLUDE true /* These need to appear somewhere around here */ #define IP_DEFAULT_MULTICAST_TTL 1 diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c index f4fc29fcdf48..45b683b15835 100644 --- a/net/ipv6/mcast.c +++ b/net/ipv6/mcast.c @@ -80,13 +80,18 @@ static int sf_setstate(struct ifmcaddr6 *mc); static void sf_markstate(struct ifmcaddr6 *mc); static void ip6_mc_clear_src(struct ifmcaddr6 *mc); static int ip6_mc_del_src(struct inet6_dev *idev, const struct in6_addr *mca, - int sfmode, int sfcount, const struct in6_addr *psfsrc, - int delta); + int sfmode, const struct in6_addr *psfsrc, int delta); +static void ip6_mc_del_src_bulk(struct inet6_dev *idev, + struct ipv6_mc_socklist *mc_lst, + struct sock *sk); static int ip6_mc_add_src(struct inet6_dev *idev, const struct in6_addr *mca, - int sfmode, int sfcount, const struct in6_addr *psfsrc, - int delta); -static int ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *mc_lst, - struct inet6_dev *idev); + int sfmode, const struct in6_addr *psfsrc, int delta); +static int ip6_mc_add_src_bulk(struct inet6_dev *idev, struct group_filter *gsf, + struct list_head *head, + struct sockaddr_storage *list, + struct sock *sk); +static void ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *mc_lst, + struct inet6_dev *idev); static int __ipv6_dev_mc_inc(struct net_device *dev, const struct in6_addr *addr, unsigned int mode); @@ -178,8 +183,9 @@ static int __ipv6_sock_mc_join(struct sock *sk, int ifindex, mc_lst->ifindex = 
dev->ifindex; mc_lst->sfmode = mode; + atomic_set(&mc_lst->sl_count, 0); rwlock_init(&mc_lst->sflock); - mc_lst->sflist = NULL; + INIT_LIST_HEAD(&mc_lst->sflist); /* * now add/increase the group membership on the device @@ -334,7 +340,6 @@ int ip6_mc_source(int add, int omode, struct sock *sk, int leavegroup = 0; bool found = false; int mclocked = 0; - int i, j, rv; int err; source = &((struct sockaddr_in6 *)&pgsr->gsr_source)->sin6_addr; @@ -365,89 +370,70 @@ int ip6_mc_source(int add, int omode, struct sock *sk, goto done; } /* if a source filter was set, must be the same mode as before */ - if (mc_lst->sflist) { + if (!list_empty(&mc_lst->sflist)) { if (mc_lst->sfmode != omode) { err = -EINVAL; goto done; } } else if (mc_lst->sfmode != omode) { /* allow mode switches for empty-set filters */ - ip6_mc_add_src(idev, group, omode, 0, NULL, 0); - ip6_mc_del_src(idev, group, mc_lst->sfmode, 0, NULL, 0); + ip6_mc_add_src(idev, group, omode, NULL, 0); + ip6_mc_del_src(idev, group, mc_lst->sfmode, NULL, 0); mc_lst->sfmode = omode; } write_lock(&mc_lst->sflock); mclocked = 1; - psl = mc_lst->sflist; if (!add) { - if (!psl) - goto done; /* err = -EADDRNOTAVAIL */ - rv = !0; - for (i = 0; i < psl->sl_count; i++) { - rv = !ipv6_addr_equal(&psl->sl_addr[i], source); - if (rv == 0) + found = false; + list_for_each_entry(psl, &mc_lst->sflist, list) { + if (ipv6_addr_equal(&psl->sl_addr, source)) { + found = true; break; + } } - if (rv) /* source not found */ + if (!found) goto done; /* err = -EADDRNOTAVAIL */ /* special case - (INCLUDE, empty) == LEAVE_GROUP */ - if (psl->sl_count == 1 && omode == MCAST_INCLUDE) { + if (atomic_read(&mc_lst->sl_count) == 1 && + omode == MCAST_INCLUDE) { leavegroup = 1; goto done; } /* update the interface filter */ - ip6_mc_del_src(idev, group, omode, 1, source, 1); + ip6_mc_del_src(idev, group, omode, &psl->sl_addr, 1); - for (j = i+1; j < psl->sl_count; j++) - psl->sl_addr[j-1] = psl->sl_addr[j]; - psl->sl_count--; + list_del_rcu(&psl->list); 
+ atomic_dec(&mc_lst->sl_count); err = 0; goto done; } /* else, add a new source to the filter */ - if (psl && psl->sl_count >= sysctl_mld_max_msf) { + if (atomic_read(&mc_lst->sl_count) >= sysctl_mld_max_msf) { err = -ENOBUFS; goto done; } - if (!psl || psl->sl_count == psl->sl_max) { - struct ip6_sf_socklist *newpsl; - int count = IP6_SFBLOCK; - if (psl) - count += psl->sl_max; - newpsl = sock_kmalloc(sk, IP6_SFLSIZE(count), GFP_ATOMIC); - if (!newpsl) { - err = -ENOBUFS; - goto done; - } - newpsl->sl_max = count; - newpsl->sl_count = count - IP6_SFBLOCK; - if (psl) { - for (i = 0; i < psl->sl_count; i++) - newpsl->sl_addr[i] = psl->sl_addr[i]; - sock_kfree_s(sk, psl, IP6_SFLSIZE(psl->sl_max)); - } - psl = newpsl; - mc_lst->sflist = psl; - } - rv = 1; /* > 0 for insert logic below if sl_count is 0 */ - for (i = 0; i < psl->sl_count; i++) { - rv = !ipv6_addr_equal(&psl->sl_addr[i], source); - if (rv == 0) /* There is an error in the address. */ + list_for_each_entry(psl, &mc_lst->sflist, list) + if (ipv6_addr_equal(&psl->sl_addr, source)) goto done; + + psl = sock_kmalloc(sk, sizeof(struct ip6_sf_socklist), GFP_ATOMIC); + if (!psl) { + err = -ENOBUFS; + goto done; } - for (j = psl->sl_count-1; j >= i; j--) - psl->sl_addr[j+1] = psl->sl_addr[j]; - psl->sl_addr[i] = *source; - psl->sl_count++; + atomic_inc(&mc_lst->sl_count); + psl->sl_addr = *source; + list_add_rcu(&psl->list, &mc_lst->sflist); + err = 0; /* update the interface list */ - ip6_mc_add_src(idev, group, omode, 1, source, 1); + ip6_mc_add_src(idev, group, omode, &psl->sl_addr, 1); done: if (mclocked) write_unlock(&mc_lst->sflock); @@ -462,14 +448,14 @@ int ip6_mc_msfilter(struct sock *sk, struct group_filter *gsf, struct sockaddr_storage *list) { struct ipv6_pinfo *inet6 = inet6_sk(sk); - struct ip6_sf_socklist *newpsl, *psl; struct ipv6_mc_socklist *mc_lst; struct net *net = sock_net(sk); const struct in6_addr *group; struct inet6_dev *idev; int leavegroup = 0; bool found = false; - int i, err; + 
LIST_HEAD(head); + int err; group = &((struct sockaddr_in6 *)&gsf->gf_group)->sin6_addr; @@ -506,40 +492,19 @@ int ip6_mc_msfilter(struct sock *sk, struct group_filter *gsf, err = -EINVAL; goto done; } - if (gsf->gf_numsrc) { - newpsl = sock_kmalloc(sk, IP6_SFLSIZE(gsf->gf_numsrc), - GFP_ATOMIC); - if (!newpsl) { - err = -ENOBUFS; - goto done; - } - newpsl->sl_max = newpsl->sl_count = gsf->gf_numsrc; - for (i = 0; i < newpsl->sl_count; ++i, ++list) { - struct sockaddr_in6 *psin6; - psin6 = (struct sockaddr_in6 *)list; - newpsl->sl_addr[i] = psin6->sin6_addr; - } - err = ip6_mc_add_src(idev, group, gsf->gf_fmode, - newpsl->sl_count, newpsl->sl_addr, 0); - if (err) { - sock_kfree_s(sk, newpsl, IP6_SFLSIZE(newpsl->sl_max)); - goto done; - } - } else { - newpsl = NULL; - ip6_mc_add_src(idev, group, gsf->gf_fmode, 0, NULL, 0); - } + if (gsf->gf_numsrc) + err = ip6_mc_add_src_bulk(idev, gsf, &head, list, sk); + else + err = ip6_mc_add_src(idev, group, gsf->gf_fmode, NULL, 0); + + if (err) + goto done; write_lock(&mc_lst->sflock); - psl = mc_lst->sflist; - if (psl) { - ip6_mc_del_src(idev, group, mc_lst->sfmode, - psl->sl_count, psl->sl_addr, 0); - sock_kfree_s(sk, psl, IP6_SFLSIZE(psl->sl_max)); - } else - ip6_mc_del_src(idev, group, mc_lst->sfmode, 0, NULL, 0); - mc_lst->sflist = newpsl; + ip6_mc_del_src_bulk(idev, mc_lst, sk); + atomic_set(&mc_lst->sl_count, gsf->gf_numsrc); + list_splice(&head, &mc_lst->sflist); mc_lst->sfmode = gsf->gf_fmode; write_unlock(&mc_lst->sflock); err = 0; @@ -548,6 +513,7 @@ int ip6_mc_msfilter(struct sock *sk, struct group_filter *gsf, rcu_read_unlock(); if (leavegroup) err = ipv6_sock_mc_drop(sk, gsf->gf_interface, group); + return err; } @@ -557,11 +523,11 @@ int ip6_mc_msfget(struct sock *sk, struct group_filter *gsf, struct ipv6_pinfo *inet6 = inet6_sk(sk); struct ipv6_mc_socklist *mc_lst; struct net *net = sock_net(sk); - int err, i, count, copycount; const struct in6_addr *group; struct ip6_sf_socklist *psl; struct inet6_dev *idev; 
bool found = false; + int err, i; group = &((struct sockaddr_in6 *)&gsf->gf_group)->sin6_addr; @@ -593,27 +559,31 @@ int ip6_mc_msfget(struct sock *sk, struct group_filter *gsf, if (!found) /* must have a prior join */ goto done; gsf->gf_fmode = mc_lst->sfmode; - psl = mc_lst->sflist; - count = psl ? psl->sl_count : 0; read_unlock_bh(&idev->lock); rcu_read_unlock(); - copycount = count < gsf->gf_numsrc ? count : gsf->gf_numsrc; - gsf->gf_numsrc = count; - /* changes to psl require the socket lock, and a write lock - * on mc_lst->sflock. We have the socket lock so reading here is safe. - */ - for (i = 0; i < copycount; i++, p++) { + i = 0; + read_lock(&mc_lst->sflock); + list_for_each_entry(psl, &mc_lst->sflist, list) { struct sockaddr_in6 *psin6; struct sockaddr_storage ss; + if (i >= gsf->gf_numsrc) + break; + psin6 = (struct sockaddr_in6 *)&ss; memset(&ss, 0, sizeof(ss)); psin6->sin6_family = AF_INET6; - psin6->sin6_addr = psl->sl_addr[i]; - if (copy_to_user(p, &ss, sizeof(ss))) + psin6->sin6_addr = psl->sl_addr; + if (copy_to_user(p, &ss, sizeof(ss))) { + read_unlock(&mc_lst->sflock); return -EFAULT; + } + p++; + i++; } + gsf->gf_numsrc = i; + read_unlock(&mc_lst->sflock); return 0; done: read_unlock_bh(&idev->lock); @@ -641,19 +611,20 @@ bool inet6_mc_check(struct sock *sk, const struct in6_addr *mc_addr, return np->mc_all; } read_lock(&mc_lst->sflock); - psl = mc_lst->sflist; - if (!psl) { + + found = false; + if (list_empty(&mc_lst->sflist)) { rv = mc_lst->sfmode == MCAST_EXCLUDE; } else { - int i; - - for (i = 0; i < psl->sl_count; i++) { - if (ipv6_addr_equal(&psl->sl_addr[i], src_addr)) + list_for_each_entry_rcu(psl, &mc_lst->sflist, list) { + if (ipv6_addr_equal(&psl->sl_addr, src_addr)) { + found = true; break; + } } - if (mc_lst->sfmode == MCAST_INCLUDE && i >= psl->sl_count) + if (mc_lst->sfmode == MCAST_INCLUDE && !found) rv = false; - if (mc_lst->sfmode == MCAST_EXCLUDE && i < psl->sl_count) + if (mc_lst->sfmode == MCAST_EXCLUDE && found) rv = false; 
} read_unlock(&mc_lst->sflock); @@ -900,7 +871,7 @@ static int __ipv6_dev_mc_inc(struct net_device *dev, if (ipv6_addr_equal(&mc->mca_addr, addr)) { mc->mca_users++; write_unlock_bh(&idev->lock); - ip6_mc_add_src(idev, &mc->mca_addr, mode, 0, NULL, 0); + ip6_mc_add_src(idev, &mc->mca_addr, mode, NULL, 0); in6_dev_put(idev); return 0; } @@ -2171,16 +2142,16 @@ static int ip6_mc_del1_src(struct ifmcaddr6 *mc, int sfmode, } static int ip6_mc_del_src(struct inet6_dev *idev, const struct in6_addr *mca, - int sfmode, int sfcount, const struct in6_addr *psfsrc, - int delta) + int sfmode, const struct in6_addr *psfsrc, int delta) { struct ifmcaddr6 *mc; bool found = false; int changerec = 0; - int i, err; + int i, err, rv; if (!idev) return -ENODEV; + read_lock_bh(&idev->lock); list_for_each_entry(mc, &idev->mc_list, list) { if (ipv6_addr_equal(mca, &mc->mca_addr)) { @@ -2204,13 +2175,16 @@ static int ip6_mc_del_src(struct inet6_dev *idev, const struct in6_addr *mca, mc->mca_sfcount[sfmode]--; } err = 0; - for (i = 0; i < sfcount; i++) { - int rv = ip6_mc_del1_src(mc, sfmode, &psfsrc[i]); + i = 0; + + if (psfsrc) { + rv = ip6_mc_del1_src(mc, sfmode, psfsrc); changerec |= rv > 0; if (!err && rv < 0) err = rv; } + if (mc->mca_sfmode == MCAST_EXCLUDE && mc->mca_sfcount[MCAST_EXCLUDE] == 0 && mc->mca_sfcount[MCAST_INCLUDE]) { @@ -2231,6 +2205,71 @@ static int ip6_mc_del_src(struct inet6_dev *idev, const struct in6_addr *mca, return err; } +static void ip6_mc_del_src_bulk(struct inet6_dev *idev, + struct ipv6_mc_socklist *mc_lst, + struct sock *sk) +{ + struct in6_addr *mca = &mc_lst->addr; + struct ip6_sf_socklist *psl, *tmp; + int sfmode = mc_lst->sfmode; + struct ifmcaddr6 *mc; + bool found = false; + int changerec = 0; + int i, rv; + + if (!idev) + return; + + read_lock_bh(&idev->lock); + list_for_each_entry(mc, &idev->mc_list, list) { + if (ipv6_addr_equal(mca, &mc->mca_addr)) { + found = true; + break; + } + } + if (!found) { + /* MCA not found?? 
bug */ + read_unlock_bh(&idev->lock); + return; + } + spin_lock_bh(&mc->mca_lock); + sf_markstate(mc); + if (!mc->mca_sfcount[sfmode]) { + spin_unlock_bh(&mc->mca_lock); + read_unlock_bh(&idev->lock); + return; + } + mc->mca_sfcount[sfmode]--; + i = 0; + + list_for_each_entry_safe(psl, tmp, &mc_lst->sflist, list) { + rv = ip6_mc_del1_src(mc, sfmode, &psl->sl_addr); + list_del_rcu(&psl->list); + atomic_sub(sizeof(*psl), &sk->sk_omem_alloc); + kfree_rcu(psl, rcu); + + changerec |= rv > 0; + } + + if (mc->mca_sfmode == MCAST_EXCLUDE && + mc->mca_sfcount[MCAST_EXCLUDE] == 0 && + mc->mca_sfcount[MCAST_INCLUDE]) { + struct ip6_sf_list *psf; + + /* filter mode change */ + mc->mca_sfmode = MCAST_INCLUDE; + mc->mca_crcount = idev->mc_qrv; + idev->mc_ifc_count = mc->mca_crcount; + list_for_each_entry(psf, &mc->mca_source_list, list) + psf->sf_crcount = 0; + mld_ifc_event(mc->idev); + } else if (sf_setstate(mc) || changerec) { + mld_ifc_event(mc->idev); + } + spin_unlock_bh(&mc->mca_lock); + read_unlock_bh(&idev->lock); +} + /* * Add multicast single-source filter to the interface list */ @@ -2353,14 +2392,14 @@ static int sf_setstate(struct ifmcaddr6 *mc) /* * Add multicast source filter list to the interface list */ + static int ip6_mc_add_src(struct inet6_dev *idev, const struct in6_addr *mca, - int sfmode, int sfcount, const struct in6_addr *psfsrc, - int delta) + int sfmode, const struct in6_addr *psfsrc, int delta) { struct ifmcaddr6 *mc; bool found = false; int isexclude; - int i, err; + int err = 0; if (!idev) return -ENODEV; @@ -2383,19 +2422,99 @@ static int ip6_mc_add_src(struct inet6_dev *idev, const struct in6_addr *mca, isexclude = mc->mca_sfmode == MCAST_EXCLUDE; if (!delta) mc->mca_sfcount[sfmode]++; - err = 0; - for (i = 0; i < sfcount; i++) { - err = ip6_mc_add1_src(mc, sfmode, &psfsrc[i]); + + if (psfsrc) + err = ip6_mc_add1_src(mc, sfmode, psfsrc); + + if (err) { + if (!delta) + mc->mca_sfcount[sfmode]--; + } else if (isexclude != 
(mc->mca_sfcount[MCAST_EXCLUDE] != 0)) { + struct ip6_sf_list *psf; + + /* filter mode change */ + if (mc->mca_sfcount[MCAST_EXCLUDE]) + mc->mca_sfmode = MCAST_EXCLUDE; + else if (mc->mca_sfcount[MCAST_INCLUDE]) + mc->mca_sfmode = MCAST_INCLUDE; + /* else no filters; keep old mode for reports */ + + mc->mca_crcount = idev->mc_qrv; + idev->mc_ifc_count = mc->mca_crcount; + list_for_each_entry(psf, &mc->mca_source_list, list) + psf->sf_crcount = 0; + mld_ifc_event(idev); + } else if (sf_setstate(mc)) { + mld_ifc_event(idev); + } + + spin_unlock_bh(&mc->mca_lock); + read_unlock_bh(&idev->lock); + return err; +} + +static int ip6_mc_add_src_bulk(struct inet6_dev *idev, struct group_filter *gsf, + struct list_head *head, + struct sockaddr_storage *list, + struct sock *sk) +{ + struct ip6_sf_socklist *psl, *tmp; + const struct in6_addr *group; + int sfmode = gsf->gf_fmode; + struct ifmcaddr6 *mc; + bool found = false; + int isexclude; + int i, err = 0; + + group = &((struct sockaddr_in6 *)&gsf->gf_group)->sin6_addr; + + if (!idev) + return -ENODEV; + + list_for_each_entry(mc, &idev->mc_list, list) { + if (ipv6_addr_equal(group, &mc->mca_addr)) { + found = true; + break; + } + } + if (!found) { + /* MCA not found?? 
bug */ + return -ESRCH; + } + spin_lock_bh(&mc->mca_lock); + + sf_markstate(mc); + isexclude = mc->mca_sfmode == MCAST_EXCLUDE; + mc->mca_sfcount[sfmode]++; + + for (i = 0; i < gsf->gf_numsrc; i++, ++list) { + struct sockaddr_in6 *psin6; + + psl = sock_kmalloc(sk, sizeof(struct ip6_sf_socklist), + GFP_ATOMIC); + if (!psl) { + err = -ENOBUFS; + break; + } + INIT_LIST_HEAD(&psl->list); + psin6 = (struct sockaddr_in6 *)list; + psl->sl_addr = psin6->sin6_addr; + + err = ip6_mc_add1_src(mc, gsf->gf_fmode, &psl->sl_addr); if (err) break; + + list_add_tail(&psl->list, head); } + if (err) { - int j; + mc->mca_sfcount[sfmode]--; - if (!delta) - mc->mca_sfcount[sfmode]--; - for (j = 0; j < i; j++) - ip6_mc_del1_src(mc, sfmode, &psfsrc[j]); + list_for_each_entry_safe(psl, tmp, head, list) { + list_del(&psl->list); + atomic_sub(sizeof(*psl), &sk->sk_omem_alloc); + kfree(psl); + } } else if (isexclude != (mc->mca_sfcount[MCAST_EXCLUDE] != 0)) { struct ip6_sf_list *psf; @@ -2414,7 +2533,7 @@ static int ip6_mc_add_src(struct inet6_dev *idev, const struct in6_addr *mca, } else if (sf_setstate(mc)) mld_ifc_event(idev); spin_unlock_bh(&mc->mca_lock); - read_unlock_bh(&idev->lock); + return err; } @@ -2462,26 +2581,12 @@ static void mld_join_group(struct ifmcaddr6 *mc) spin_unlock_bh(&mc->mca_lock); } -static int ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *mc_lst, - struct inet6_dev *idev) +static void ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *mc_lst, + struct inet6_dev *idev) { - int err; - write_lock_bh(&mc_lst->sflock); - if (!mc_lst->sflist) { - /* any-source empty exclude case */ - err = ip6_mc_del_src(idev, &mc_lst->addr, mc_lst->sfmode, - 0, NULL, 0); - } else { - err = ip6_mc_del_src(idev, &mc_lst->addr, mc_lst->sfmode, - mc_lst->sflist->sl_count, - mc_lst->sflist->sl_addr, 0); - sock_kfree_s(sk, mc_lst->sflist, - IP6_SFLSIZE(mc_lst->sflist->sl_max)); - mc_lst->sflist = NULL; - } + ip6_mc_del_src_bulk(idev, mc_lst, sk); 
write_unlock_bh(&mc_lst->sflock); - return err; } static void mld_leave_group(struct ifmcaddr6 *mc) From patchwork Mon Feb 8 17:59:52 2021 X-Patchwork-Submitter: Taehee Yoo X-Patchwork-Id: 12076211 X-Patchwork-Delegate: kuba@kernel.org
From: Taehee Yoo To: davem@davemloft.net, kuba@kernel.org, netdev@vger.kernel.org, dsahern@kernel.org, xiyou.wangcong@gmail.com, jwi@linux.ibm.com, kgraul@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com, borntraeger@de.ibm.com, mareklindner@neomailbox.ch, sw@simonwunderlich.de, a@unstable.cc, sven@narfation.org, yoshfuji@linux-ipv6.org Cc: ap420073@gmail.com Subject: [PATCH net-next 8/8] mld: change context of mld module Date: Mon, 8 Feb 2021 17:59:52 +0000 Message-Id: <20210208175952.5880-1-ap420073@gmail.com> X-Mailer: git-send-email 2.17.1 Precedence:
bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org The MLD module runs in atomic context even though most of its logic is called from the control path, not the data path; only a few functions are called from the data path. Furthermore, MLD responses are not processed immediately, because the MLD protocol uses delayed responses: when a query is received, the node delays its response, and at that point the context can be changed. This means most of these functions can run in sleepable context and can use sleepable functions. Today most resources are protected by spinlocks and rwlocks, which forces MLD functions into atomic context; to change the context, the locking scheme has to change. This patch switches from spinlock/rwlock to mutex and RCU. Some locks are deleted and some are added: 1. ipv6_mc_socklist->sflock is deleted. This rwlock is unnecessary: it protects ipv6_mc_socklist->sflist, which is now protected by rtnl_lock(). 2. ifmcaddr6->mca_work_lock is added. This lock protects ifmcaddr6->mca_work. The work can be touched by both the control path and the data path, so a mutex can't be used; mca_work_lock (a spinlock) is added instead. 3. inet6_dev->mc_tomb_lock is deleted. It protected inet6_dev->mc_tomb_list, which is now protected by rtnl_lock(). 4. inet6_dev->lock is used to protect the workqueues. inet6_dev has its own delayed works (mc_gq_work, mc_ifc_work, mc_delrec_work), which can be started and stopped by both the control path and the data path, so a mutex can't be used.
Suggested-by: Cong Wang Signed-off-by: Taehee Yoo Reported-by: kernel test robot Reported-by: kernel test robot Reported-by: kernel test robot --- drivers/s390/net/qeth_l3_main.c | 6 +- include/net/if_inet6.h | 29 +- net/batman-adv/multicast.c | 4 +- net/ipv6/addrconf.c | 4 +- net/ipv6/mcast.c | 785 ++++++++++++++++---------------- 5 files changed, 411 insertions(+), 417 deletions(-) diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c index e49abdeff69c..085afb24482e 100644 --- a/drivers/s390/net/qeth_l3_main.c +++ b/drivers/s390/net/qeth_l3_main.c @@ -1098,8 +1098,8 @@ static int qeth_l3_add_mcast_rtnl(struct net_device *dev, int vid, void *arg) tmp.disp_flag = QETH_DISP_ADDR_ADD; tmp.is_multicast = 1; - read_lock_bh(&in6_dev->lock); - list_for_each_entry(im6, in6_dev->mc_list, list) { + rcu_read_lock(); + list_for_each_entry_rcu(im6, in6_dev->mc_list, list) { tmp.u.a6.addr = im6->mca_addr; ipm = qeth_l3_find_addr_by_ip(card, &tmp); @@ -1117,7 +1117,7 @@ static int qeth_l3_add_mcast_rtnl(struct net_device *dev, int vid, void *arg) qeth_l3_ipaddr_hash(ipm)); } - read_unlock_bh(&in6_dev->lock); + rcu_read_unlock(); out: return 0; diff --git a/include/net/if_inet6.h b/include/net/if_inet6.h index 6885ab8ec2e9..0a8478b96ef1 100644 --- a/include/net/if_inet6.h +++ b/include/net/if_inet6.h @@ -88,7 +88,6 @@ struct ipv6_mc_socklist { bool sfmode; /* MCAST_{INCLUDE,EXCLUDE} */ struct list_head list; struct list_head sflist; - rwlock_t sflock; atomic_t sl_count; struct rcu_head rcu; }; @@ -96,17 +95,19 @@ struct ipv6_mc_socklist { struct ip6_sf_list { struct list_head list; struct in6_addr sf_addr; - unsigned long sf_count[2]; /* include/exclude counts */ + atomic_t incl_count; /* include count */ + atomic_t excl_count; /* exclude count */ unsigned char sf_gsresp; /* include in g & s response? */ bool sf_oldin; /* change state */ unsigned char sf_crcount; /* retrans. 
left to send */ + struct rcu_head rcu; }; -#define MAF_TIMER_RUNNING 0x01 -#define MAF_LAST_REPORTER 0x02 -#define MAF_LOADED 0x04 -#define MAF_NOREPORT 0x08 -#define MAF_GSQUERY 0x10 +enum mca_enum { + MCA_TIMER_RUNNING, + MCA_LAST_REPORTER, + MCA_GSQUERY, +}; struct ifmcaddr6 { struct in6_addr mca_addr; @@ -116,14 +117,18 @@ struct ifmcaddr6 { struct list_head mca_tomb_list; unsigned int mca_sfmode; unsigned char mca_crcount; - unsigned long mca_sfcount[2]; - struct delayed_work mca_work; - unsigned int mca_flags; + atomic_t mca_incl_count; + atomic_t mca_excl_count; + struct delayed_work mca_work; /* Protected by mca_work_lock */ + spinlock_t mca_work_lock; + unsigned long mca_flags; + bool mca_noreport; + bool mca_loaded; int mca_users; refcount_t mca_refcnt; - spinlock_t mca_lock; unsigned long mca_cstamp; unsigned long mca_tstamp; + struct rcu_head rcu; }; /* Anycast stuff */ @@ -163,7 +168,6 @@ struct inet6_dev { struct list_head addr_list; struct list_head mc_list; struct list_head mc_tomb_list; - spinlock_t mc_tomb_lock; unsigned char mc_qrv; /* Query Robustness Variable */ unsigned char mc_gq_running; @@ -178,6 +182,7 @@ struct inet6_dev { struct delayed_work mc_gq_work; /* general query work */ struct delayed_work mc_ifc_work; /* interface change work */ struct delayed_work mc_dad_work; /* dad complete mc work */ + struct delayed_work mc_delrec_work; struct ifacaddr6 *ac_list; rwlock_t lock; diff --git a/net/batman-adv/multicast.c b/net/batman-adv/multicast.c index 1a9ad5a9257b..3d36e6924000 100644 --- a/net/batman-adv/multicast.c +++ b/net/batman-adv/multicast.c @@ -454,8 +454,7 @@ batadv_mcast_mla_softif_get_ipv6(struct net_device *dev, return 0; } - read_lock_bh(&in6_dev->lock); - list_for_each_entry(pmc6, &in6_dev->mc_list, list) { + list_for_each_entry_rcu(pmc6, &in6_dev->mc_list, list) { if (IPV6_ADDR_MC_SCOPE(&pmc6->mca_addr) < IPV6_ADDR_SCOPE_LINKLOCAL) continue; @@ -484,7 +483,6 @@ batadv_mcast_mla_softif_get_ipv6(struct net_device *dev, 
hlist_add_head(&new->list, mcast_list); ret++; } - read_unlock_bh(&in6_dev->lock); rcu_read_unlock(); return ret; diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c index e9fe0eee5768..3138fe9c4829 100644 --- a/net/ipv6/addrconf.c +++ b/net/ipv6/addrconf.c @@ -5110,7 +5110,7 @@ static int in6_dump_addrs(struct inet6_dev *idev, struct sk_buff *skb, fillargs->event = RTM_GETMULTICAST; /* multicast address */ - list_for_each_entry(ifmca, &idev->mc_list, list) { + list_for_each_entry_rcu(ifmca, &idev->mc_list, list) { if (ip_idx < s_ip_idx) goto next2; err = inet6_fill_ifmcaddr(skb, ifmca, fillargs); @@ -6094,10 +6094,8 @@ static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifp) static void ipv6_ifa_notify(int event, struct inet6_ifaddr *ifp) { - rcu_read_lock_bh(); if (likely(ifp->idev->dead == 0)) __ipv6_ifa_notify(event, ifp); - rcu_read_unlock_bh(); } #ifdef CONFIG_SYSCTL diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c index 45b683b15835..5fd87659dcef 100644 --- a/net/ipv6/mcast.c +++ b/net/ipv6/mcast.c @@ -90,8 +90,6 @@ static int ip6_mc_add_src_bulk(struct inet6_dev *idev, struct group_filter *gsf, struct list_head *head, struct sockaddr_storage *list, struct sock *sk); -static void ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *mc_lst, - struct inet6_dev *idev); static int __ipv6_dev_mc_inc(struct net_device *dev, const struct in6_addr *addr, unsigned int mode); @@ -123,7 +121,7 @@ static void mca_put(struct ifmcaddr6 *mc) { if (refcount_dec_and_test(&mc->mca_refcnt)) { in6_dev_put(mc->idev); - kfree(mc); + kfree_rcu(mc, rcu); } } @@ -173,8 +171,9 @@ static int __ipv6_sock_mc_join(struct sock *sk, int ifindex, dev = rt->dst.dev; ip6_rt_put(rt); } - } else + } else { dev = __dev_get_by_index(net, ifindex); + } if (!dev) { sock_kfree_s(sk, mc_lst, sizeof(*mc_lst)); @@ -184,7 +183,6 @@ static int __ipv6_sock_mc_join(struct sock *sk, int ifindex, mc_lst->ifindex = dev->ifindex; mc_lst->sfmode = mode; atomic_set(&mc_lst->sl_count, 0); - 
rwlock_init(&mc_lst->sflock); INIT_LIST_HEAD(&mc_lst->sflist); /* @@ -238,11 +236,11 @@ int ipv6_sock_mc_drop(struct sock *sk, int ifindex, const struct in6_addr *addr) if (dev) { struct inet6_dev *idev = __in6_dev_get(dev); - ip6_mc_leave_src(sk, mc_lst, idev); + ip6_mc_del_src_bulk(idev, mc_lst, sk); if (idev) __ipv6_dev_mc_dec(idev, &mc_lst->addr); } else - ip6_mc_leave_src(sk, mc_lst, NULL); + ip6_mc_del_src_bulk(NULL, mc_lst, sk); list_del_rcu(&mc_lst->list); atomic_sub(sizeof(*mc_lst), &sk->sk_omem_alloc); @@ -255,10 +253,9 @@ int ipv6_sock_mc_drop(struct sock *sk, int ifindex, const struct in6_addr *addr) } EXPORT_SYMBOL(ipv6_sock_mc_drop); -/* called with rcu_read_lock() */ -static struct inet6_dev *ip6_mc_find_dev_rcu(struct net *net, - const struct in6_addr *group, - int ifindex) +static struct inet6_dev *ip6_mc_find_dev(struct net *net, + const struct in6_addr *group, + int ifindex) { struct net_device *dev = NULL; struct inet6_dev *idev = NULL; @@ -270,19 +267,23 @@ static struct inet6_dev *ip6_mc_find_dev_rcu(struct net *net, dev = rt->dst.dev; ip6_rt_put(rt); } - } else - dev = dev_get_by_index_rcu(net, ifindex); + } else { + dev = __dev_get_by_index(net, ifindex); + } if (!dev) - return NULL; + goto out; + idev = __in6_dev_get(dev); if (!idev) - return NULL; - read_lock_bh(&idev->lock); - if (idev->dead) { - read_unlock_bh(&idev->lock); - return NULL; - } + goto out; + + if (idev->dead) + goto out; + + in6_dev_hold(idev); + dev_hold(dev); +out: return idev; } @@ -301,11 +302,11 @@ void __ipv6_sock_mc_close(struct sock *sk) if (dev) { struct inet6_dev *idev = __in6_dev_get(dev); - ip6_mc_leave_src(sk, mc_lst, idev); + ip6_mc_del_src_bulk(idev, mc_lst, sk); if (idev) __ipv6_dev_mc_dec(idev, &mc_lst->addr); } else { - ip6_mc_leave_src(sk, mc_lst, NULL); + ip6_mc_del_src_bulk(NULL, mc_lst, sk); } list_del_rcu(&mc_lst->list); @@ -328,6 +329,16 @@ void ipv6_sock_mc_close(struct sock *sk) rtnl_unlock(); } +/* special case - (INCLUDE, empty) == LEAVE_GROUP 
*/ +bool mld_check_leave_group(struct ipv6_mc_socklist *mc_lst, int omode) +{ + if (atomic_read(&mc_lst->sl_count) == 1 && omode == MCAST_INCLUDE) + return true; + else + return false; +} + +/* called with rtnl_lock */ int ip6_mc_source(int add, int omode, struct sock *sk, struct group_source_req *pgsr) { @@ -339,25 +350,23 @@ int ip6_mc_source(int add, int omode, struct sock *sk, struct inet6_dev *idev; int leavegroup = 0; bool found = false; - int mclocked = 0; int err; + ASSERT_RTNL(); + source = &((struct sockaddr_in6 *)&pgsr->gsr_source)->sin6_addr; group = &((struct sockaddr_in6 *)&pgsr->gsr_group)->sin6_addr; if (!ipv6_addr_is_multicast(group)) return -EINVAL; - rcu_read_lock(); - idev = ip6_mc_find_dev_rcu(net, group, pgsr->gsr_interface); - if (!idev) { - rcu_read_unlock(); + idev = ip6_mc_find_dev(net, group, pgsr->gsr_interface); + if (!idev) return -ENODEV; - } err = -EADDRNOTAVAIL; - list_for_each_entry_rcu(mc_lst, &inet6->ipv6_mc_list, list) { + list_for_each_entry(mc_lst, &inet6->ipv6_mc_list, list) { if (pgsr->gsr_interface && mc_lst->ifindex != pgsr->gsr_interface) continue; if (ipv6_addr_equal(&mc_lst->addr, group)) { @@ -369,6 +378,7 @@ int ip6_mc_source(int add, int omode, struct sock *sk, err = -EINVAL; goto done; } + /* if a source filter was set, must be the same mode as before */ if (!list_empty(&mc_lst->sflist)) { if (mc_lst->sfmode != omode) { @@ -382,9 +392,6 @@ int ip6_mc_source(int add, int omode, struct sock *sk, mc_lst->sfmode = omode; } - write_lock(&mc_lst->sflock); - mclocked = 1; - if (!add) { found = false; list_for_each_entry(psl, &mc_lst->sflist, list) { @@ -396,9 +403,7 @@ int ip6_mc_source(int add, int omode, struct sock *sk, if (!found) goto done; /* err = -EADDRNOTAVAIL */ - /* special case - (INCLUDE, empty) == LEAVE_GROUP */ - if (atomic_read(&mc_lst->sl_count) == 1 && - omode == MCAST_INCLUDE) { + if (mld_check_leave_group(mc_lst, omode)) { leavegroup = 1; goto done; } @@ -422,7 +427,7 @@ int ip6_mc_source(int add, int 
omode, struct sock *sk, if (ipv6_addr_equal(&psl->sl_addr, source)) goto done; - psl = sock_kmalloc(sk, sizeof(struct ip6_sf_socklist), GFP_ATOMIC); + psl = sock_kmalloc(sk, sizeof(struct ip6_sf_socklist), GFP_KERNEL); if (!psl) { err = -ENOBUFS; goto done; @@ -435,10 +440,9 @@ int ip6_mc_source(int add, int omode, struct sock *sk, /* update the interface list */ ip6_mc_add_src(idev, group, omode, &psl->sl_addr, 1); done: - if (mclocked) - write_unlock(&mc_lst->sflock); - read_unlock_bh(&idev->lock); - rcu_read_unlock(); + + in6_dev_put(idev); + dev_put(idev->dev); if (leavegroup) err = ipv6_sock_mc_drop(sk, pgsr->gsr_interface, group); return err; @@ -457,6 +461,8 @@ int ip6_mc_msfilter(struct sock *sk, struct group_filter *gsf, LIST_HEAD(head); int err; + ASSERT_RTNL(); + group = &((struct sockaddr_in6 *)&gsf->gf_group)->sin6_addr; if (!ipv6_addr_is_multicast(group)) @@ -465,13 +471,10 @@ int ip6_mc_msfilter(struct sock *sk, struct group_filter *gsf, gsf->gf_fmode != MCAST_EXCLUDE) return -EINVAL; - rcu_read_lock(); - idev = ip6_mc_find_dev_rcu(net, group, gsf->gf_interface); + idev = ip6_mc_find_dev(net, group, gsf->gf_interface); - if (!idev) { - rcu_read_unlock(); + if (!idev) return -ENODEV; - } err = 0; @@ -480,7 +483,7 @@ int ip6_mc_msfilter(struct sock *sk, struct group_filter *gsf, goto done; } - list_for_each_entry_rcu(mc_lst, &inet6->ipv6_mc_list, list) { + list_for_each_entry(mc_lst, &inet6->ipv6_mc_list, list) { if (mc_lst->ifindex != gsf->gf_interface) continue; if (ipv6_addr_equal(&mc_lst->addr, group)) { @@ -501,16 +504,14 @@ int ip6_mc_msfilter(struct sock *sk, struct group_filter *gsf, if (err) goto done; - write_lock(&mc_lst->sflock); ip6_mc_del_src_bulk(idev, mc_lst, sk); atomic_set(&mc_lst->sl_count, gsf->gf_numsrc); list_splice(&head, &mc_lst->sflist); mc_lst->sfmode = gsf->gf_fmode; - write_unlock(&mc_lst->sflock); err = 0; done: - read_unlock_bh(&idev->lock); - rcu_read_unlock(); + in6_dev_put(idev); + dev_put(idev->dev); if (leavegroup) 
err = ipv6_sock_mc_drop(sk, gsf->gf_interface, group); @@ -527,28 +528,20 @@ int ip6_mc_msfget(struct sock *sk, struct group_filter *gsf, struct ip6_sf_socklist *psl; struct inet6_dev *idev; bool found = false; - int err, i; + int err = 0, i; + + ASSERT_RTNL(); group = &((struct sockaddr_in6 *)&gsf->gf_group)->sin6_addr; if (!ipv6_addr_is_multicast(group)) return -EINVAL; - rcu_read_lock(); - idev = ip6_mc_find_dev_rcu(net, group, gsf->gf_interface); - - if (!idev) { - rcu_read_unlock(); + idev = ip6_mc_find_dev(net, group, gsf->gf_interface); + if (!idev) return -ENODEV; - } - err = -EADDRNOTAVAIL; - /* changes to the ipv6_mc_list require the socket lock and - * rtnl lock. We have the socket lock and rcu read lock, - * so reading the list is safe. - */ - - list_for_each_entry_rcu(mc_lst, &inet6->ipv6_mc_list, list) { + list_for_each_entry(mc_lst, &inet6->ipv6_mc_list, list) { if (mc_lst->ifindex != gsf->gf_interface) continue; if (ipv6_addr_equal(group, &mc_lst->addr)) { @@ -556,14 +549,14 @@ int ip6_mc_msfget(struct sock *sk, struct group_filter *gsf, break; } } - if (!found) /* must have a prior join */ + if (!found) { /* must have a prior join */ + err = -EADDRNOTAVAIL; goto done; + } + gsf->gf_fmode = mc_lst->sfmode; - read_unlock_bh(&idev->lock); - rcu_read_unlock(); i = 0; - read_lock(&mc_lst->sflock); list_for_each_entry(psl, &mc_lst->sflist, list) { struct sockaddr_in6 *psin6; struct sockaddr_storage ss; @@ -576,21 +569,20 @@ int ip6_mc_msfget(struct sock *sk, struct group_filter *gsf, psin6->sin6_family = AF_INET6; psin6->sin6_addr = psl->sl_addr; if (copy_to_user(p, &ss, sizeof(ss))) { - read_unlock(&mc_lst->sflock); - return -EFAULT; + err = -EFAULT; + goto done; } p++; i++; } gsf->gf_numsrc = i; - read_unlock(&mc_lst->sflock); - return 0; done: - read_unlock_bh(&idev->lock); - rcu_read_unlock(); + in6_dev_put(idev); + dev_put(idev->dev); return err; } +/* atomic context */ bool inet6_mc_check(struct sock *sk, const struct in6_addr *mc_addr, const 
				struct in6_addr *src_addr)
 {
@@ -610,7 +602,6 @@ bool inet6_mc_check(struct sock *sk, const struct in6_addr *mc_addr,
 		rcu_read_unlock();
 		return np->mc_all;
 	}
-	read_lock(&mc_lst->sflock);
 	found = false;
 	if (list_empty(&mc_lst->sflist)) {
@@ -627,7 +618,6 @@ bool inet6_mc_check(struct sock *sk, const struct in6_addr *mc_addr,
 		if (mc_lst->sfmode == MCAST_EXCLUDE && found)
 			rv = false;
 	}
-	read_unlock(&mc_lst->sflock);
 	rcu_read_unlock();
 	return rv;
@@ -642,15 +632,13 @@ static void mld_group_added(struct ifmcaddr6 *mc)
 	    IPV6_ADDR_SCOPE_LINKLOCAL)
 		return;
-	spin_lock_bh(&mc->mca_lock);
-	if (!(mc->mca_flags&MAF_LOADED)) {
-		mc->mca_flags |= MAF_LOADED;
+	if (!mc->mca_loaded) {
+		mc->mca_loaded = true;
 		if (ndisc_mc_map(&mc->mca_addr, buf, dev, 0) == 0)
 			dev_mc_add(dev, buf);
 	}
-	spin_unlock_bh(&mc->mca_lock);
-	if (!(dev->flags & IFF_UP) || (mc->mca_flags & MAF_NOREPORT))
+	if (!(dev->flags & IFF_UP) || mc->mca_noreport)
 		return;
 	if (mld_in_v1_mode(mc->idev)) {
@@ -678,24 +666,22 @@ static void mld_group_dropped(struct ifmcaddr6 *mc)
 	    IPV6_ADDR_SCOPE_LINKLOCAL)
 		return;
-	spin_lock_bh(&mc->mca_lock);
-	if (mc->mca_flags&MAF_LOADED) {
-		mc->mca_flags &= ~MAF_LOADED;
+	if (mc->mca_loaded) {
+		mc->mca_loaded = false;
 		if (ndisc_mc_map(&mc->mca_addr, buf, dev, 0) == 0)
 			dev_mc_del(dev, buf);
 	}
-	spin_unlock_bh(&mc->mca_lock);
-	if (mc->mca_flags & MAF_NOREPORT)
+	if (mc->mca_noreport)
 		return;
 	if (!mc->idev->dead)
 		mld_leave_group(mc);
-	spin_lock_bh(&mc->mca_lock);
+	spin_lock_bh(&mc->mca_work_lock);
 	if (cancel_delayed_work(&mc->mca_work))
 		mca_put(mc);
-	spin_unlock_bh(&mc->mca_lock);
+	spin_unlock_bh(&mc->mca_work_lock);
 }
 /*
@@ -711,12 +697,11 @@ static void mld_add_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im)
 	 * for deleted items allows change reports to use common code with
 	 * non-deleted or query-response MCA's.
 	 */
-	mc = kzalloc(sizeof(*mc), GFP_ATOMIC);
+	mc = kzalloc(sizeof(*mc), GFP_KERNEL);
 	if (!mc)
 		return;
-	spin_lock_bh(&im->mca_lock);
-	spin_lock_init(&mc->mca_lock);
+	spin_lock_init(&mc->mca_work_lock);
 	INIT_LIST_HEAD(&mc->list);
 	INIT_LIST_HEAD(&mc->mca_tomb_list);
 	INIT_LIST_HEAD(&mc->mca_source_list);
@@ -729,16 +714,13 @@ static void mld_add_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im)
 		struct ip6_sf_list *psf;
 		list_splice_init(&im->mca_tomb_list, &mc->mca_tomb_list);
-		list_splice_init(&im->mca_source_list, &mc->mca_source_list);
+		list_splice_init_rcu(&im->mca_source_list, &mc->mca_source_list,
+				     synchronize_rcu);
 		list_for_each_entry(psf, &mc->mca_source_list, list)
 			psf->sf_crcount = mc->mca_crcount;
 	}
-	spin_unlock_bh(&im->mca_lock);
-
-	spin_lock_bh(&idev->mc_tomb_lock);
 	list_add(&mc->list, &idev->mc_tomb_list);
-	spin_unlock_bh(&idev->mc_tomb_lock);
 }
 static void mld_del_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im)
@@ -750,7 +732,6 @@ static void mld_del_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im)
 	LIST_HEAD(tomb_list);
 	bool found = false;
-	spin_lock_bh(&idev->mc_tomb_lock);
 	list_for_each_entry_safe(mc, tmp, &idev->mc_tomb_list, list) {
 		if (ipv6_addr_equal(&mc->mca_addr, mca)) {
 			list_del(&mc->list);
@@ -758,16 +739,16 @@ static void mld_del_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im)
 			break;
 		}
 	}
-	spin_unlock_bh(&idev->mc_tomb_lock);
-	spin_lock_bh(&im->mca_lock);
 	if (found) {
 		im->idev = mc->idev;
 		if (im->mca_sfmode == MCAST_INCLUDE) {
 			list_splice_init(&im->mca_tomb_list, &tomb_list);
-			list_splice_init(&im->mca_source_list, &source_list);
+			list_splice_init_rcu(&im->mca_source_list, &source_list,
+					     synchronize_rcu);
 			list_splice_init(&mc->mca_tomb_list, &im->mca_tomb_list);
-			list_splice_init(&mc->mca_source_list, &im->mca_source_list);
+			list_splice_init_rcu(&mc->mca_source_list, &im->mca_source_list,
+					     synchronize_rcu);
 			list_splice_init(&tomb_list, &mc->mca_tomb_list);
 			list_splice_init(&source_list, &mc->mca_source_list);
@@ -778,37 +759,32 @@ static void mld_del_delrec(struct inet6_dev *idev, struct ifmcaddr6 *im)
 		}
 		in6_dev_put(mc->idev);
 		ip6_mc_clear_src(mc);
+		/* tomb_list's mc doesn't need kfree_rcu() */
 		kfree(mc);
 	}
-	spin_unlock_bh(&im->mca_lock);
 }
 static void mld_clear_delrec(struct inet6_dev *idev)
 {
+	struct ip6_sf_list *psf, *psf_tmp;
 	struct ifmcaddr6 *mc, *tmp;
+	LIST_HEAD(mca_list);
+
+	ASSERT_RTNL();
-	spin_lock_bh(&idev->mc_tomb_lock);
 	list_for_each_entry_safe(mc, tmp, &idev->mc_tomb_list, list) {
 		list_del(&mc->list);
 		ip6_mc_clear_src(mc);
 		in6_dev_put(mc->idev);
 		kfree(mc);
 	}
-	spin_unlock_bh(&idev->mc_tomb_lock);
 	/* clear dead sources, too */
-	read_lock_bh(&idev->lock);
-	list_for_each_entry_safe(mc, tmp, &idev->mc_list, list) {
-		struct ip6_sf_list *psf, *tmp;
-		LIST_HEAD(mca_list);
-
-		spin_lock_bh(&mc->mca_lock);
+	list_for_each_entry_safe(mc, tmp, &idev->mc_list, list)
 		list_splice_init(&mc->mca_tomb_list, &mca_list);
-		spin_unlock_bh(&mc->mca_lock);
-		list_for_each_entry_safe(psf, tmp, &mca_list, list)
-			kfree(psf);
-	}
-	read_unlock_bh(&idev->lock);
+
+	list_for_each_entry_safe(psf, psf_tmp, &mca_list, list)
+		kfree(psf);
 }
 static struct ifmcaddr6 *mca_alloc(struct inet6_dev *idev,
@@ -817,7 +793,7 @@ static struct ifmcaddr6 *mca_alloc(struct inet6_dev *idev,
 {
 	struct ifmcaddr6 *mc;
-	mc = kzalloc(sizeof(*mc), GFP_ATOMIC);
+	mc = kzalloc(sizeof(*mc), GFP_KERNEL);
 	if (!mc)
 		return NULL;
@@ -832,14 +808,20 @@ static struct ifmcaddr6 *mca_alloc(struct inet6_dev *idev,
 	/* mca_stamp should be updated upon changes */
 	mc->mca_cstamp = mc->mca_tstamp = jiffies;
 	refcount_set(&mc->mca_refcnt, 1);
-	spin_lock_init(&mc->mca_lock);
+	spin_lock_init(&mc->mca_work_lock);
 	mc->mca_sfmode = mode;
-	mc->mca_sfcount[mode] = 1;
+	if (mode == MCAST_INCLUDE) {
+		atomic_set(&mc->mca_incl_count, 1);
+		atomic_set(&mc->mca_excl_count, 0);
+	} else {
+		atomic_set(&mc->mca_incl_count, 0);
+		atomic_set(&mc->mca_excl_count, 1);
+	}
 	if (ipv6_addr_is_ll_all_nodes(&mc->mca_addr) ||
 	    IPV6_ADDR_MC_SCOPE(&mc->mca_addr) < IPV6_ADDR_SCOPE_LINKLOCAL)
-		mc->mca_flags |= MAF_NOREPORT;
+		mc->mca_noreport = true;
 	return mc;
 }
@@ -860,9 +842,7 @@ static int __ipv6_dev_mc_inc(struct net_device *dev,
 	if (!idev)
 		return -EINVAL;
-	write_lock_bh(&idev->lock);
 	if (idev->dead) {
-		write_unlock_bh(&idev->lock);
 		in6_dev_put(idev);
 		return -ENODEV;
 	}
@@ -870,7 +850,6 @@ static int __ipv6_dev_mc_inc(struct net_device *dev,
 	list_for_each_entry(mc, &idev->mc_list, list) {
 		if (ipv6_addr_equal(&mc->mca_addr, addr)) {
 			mc->mca_users++;
-			write_unlock_bh(&idev->lock);
 			ip6_mc_add_src(idev, &mc->mca_addr, mode, NULL, 0);
 			in6_dev_put(idev);
 			return 0;
@@ -879,18 +858,16 @@ static int __ipv6_dev_mc_inc(struct net_device *dev,
 	mc = mca_alloc(idev, addr, mode);
 	if (!mc) {
-		write_unlock_bh(&idev->lock);
 		in6_dev_put(idev);
 		return -ENOMEM;
 	}
-	list_add(&mc->list, &idev->mc_list);
+	list_add_rcu(&mc->list, &idev->mc_list);
 	/* Hold this for the code below before we unlock,
 	 * it is already exposed via idev->mc_list.
 	 */
 	mca_get(mc);
-	write_unlock_bh(&idev->lock);
 	mld_del_delrec(idev, mc);
 	mld_group_added(mc);
@@ -913,24 +890,20 @@ int __ipv6_dev_mc_dec(struct inet6_dev *idev, const struct in6_addr *addr)
 	ASSERT_RTNL();
-	write_lock_bh(&idev->lock);
 	list_for_each_entry_safe(mc, tmp, &idev->mc_list, list) {
 		if (ipv6_addr_equal(&mc->mca_addr, addr)) {
 			if (--mc->mca_users == 0) {
-				list_del(&mc->list);
-				write_unlock_bh(&idev->lock);
+				list_del_rcu(&mc->list);
 				mld_group_dropped(mc);
 				ip6_mc_clear_src(mc);
 				mca_put(mc);
 				return 0;
 			}
-			write_unlock_bh(&idev->lock);
 			return 0;
 		}
 	}
-	write_unlock_bh(&idev->lock);
 	return -ENOENT;
 }
@@ -964,74 +937,82 @@ bool ipv6_chk_mcast_addr(struct net_device *dev, const struct in6_addr *group,
 	rcu_read_lock();
 	idev = __in6_dev_get(dev);
 	if (idev) {
-		read_lock_bh(&idev->lock);
-		list_for_each_entry(mc, &idev->mc_list, list) {
+		list_for_each_entry_rcu(mc, &idev->mc_list, list) {
 			if (ipv6_addr_equal(&mc->mca_addr, group)) {
 				found = true;
 				break;
 			}
 		}
-		if (found) {
-			if (src_addr && !ipv6_addr_any(src_addr)) {
-				struct ip6_sf_list *psf;
-				bool found_psf = false;
-
-				spin_lock_bh(&mc->mca_lock);
-				list_for_each_entry(psf, &mc->mca_source_list, list) {
-					if (ipv6_addr_equal(&psf->sf_addr, src_addr)) {
-						found_psf = true;
-						break;
-					}
-				}
-				if (found_psf) {
-					rv = psf->sf_count[MCAST_INCLUDE] ||
-						psf->sf_count[MCAST_EXCLUDE] !=
-						mc->mca_sfcount[MCAST_EXCLUDE];
-				} else {
-					rv = mc->mca_sfcount[MCAST_EXCLUDE] != 0;
+		if (!found)
+			goto out;
+
+		if (src_addr && !ipv6_addr_any(src_addr)) {
+			struct ip6_sf_list *psf;
+			bool found_psf = false;
+
+			list_for_each_entry_rcu(psf, &mc->mca_source_list, list) {
+				if (ipv6_addr_equal(&psf->sf_addr, src_addr)) {
+					found_psf = true;
+					break;
 				}
-				spin_unlock_bh(&mc->mca_lock);
-			} else
-				rv = true; /* don't filter unspecified source */
+			}
+			if (found_psf) {
+				rv = atomic_read(&psf->incl_count) ||
+				     atomic_read(&psf->excl_count) !=
+				     atomic_read(&mc->mca_excl_count);
+			} else {
+				rv = atomic_read(&mc->mca_excl_count) != 0;
+			}
+		} else {
+			rv = true; /* don't filter unspecified source */
 		}
-		read_unlock_bh(&idev->lock);
 	}
+out:
 	rcu_read_unlock();
 	return rv;
 }
+/* atomic context */
 static void mld_gq_start_work(struct inet6_dev *idev)
 {
 	unsigned long tv = prandom_u32() % idev->mc_maxdelay;
+	write_lock_bh(&idev->lock);
 	idev->mc_gq_running = 1;
 	if (!mod_delayed_work(mld_wq, &idev->mc_gq_work, msecs_to_jiffies(tv + 2)))
 		in6_dev_hold(idev);
+	write_unlock_bh(&idev->lock);
 }
 static void mld_gq_stop_work(struct inet6_dev *idev)
 {
+	write_lock_bh(&idev->lock);
 	idev->mc_gq_running = 0;
 	if (cancel_delayed_work(&idev->mc_gq_work))
 		__in6_dev_put(idev);
+	write_unlock_bh(&idev->lock);
 }
 static void mld_ifc_start_work(struct inet6_dev *idev, unsigned long delay)
 {
 	unsigned long tv = prandom_u32() % delay;
+	write_lock_bh(&idev->lock);
 	if (!mod_delayed_work(mld_wq, &idev->mc_ifc_work, msecs_to_jiffies(tv + 2)))
 		in6_dev_hold(idev);
+	write_unlock_bh(&idev->lock);
 }
 static void mld_ifc_stop_work(struct inet6_dev *idev)
 {
+	write_lock_bh(&idev->lock);
 	idev->mc_ifc_count = 0;
 	if (cancel_delayed_work(&idev->mc_ifc_work))
 		__in6_dev_put(idev);
+	write_unlock_bh(&idev->lock);
 }
 static void mld_dad_start_work(struct inet6_dev *idev, unsigned long delay)
@@ -1049,10 +1030,25 @@ static void mld_dad_stop_work(struct inet6_dev *idev)
 		__in6_dev_put(idev);
 }
+static void mld_clear_delrec_start_work(struct inet6_dev *idev)
+{
+	write_lock_bh(&idev->lock);
+	if (!mod_delayed_work(mld_wq, &idev->mc_delrec_work, 0))
+		in6_dev_hold(idev);
+	write_unlock_bh(&idev->lock);
+}
+
+static void mld_clear_delrec_stop_work(struct inet6_dev *idev)
+{
+	write_lock_bh(&idev->lock);
+	if (cancel_delayed_work(&idev->mc_delrec_work))
+		__in6_dev_put(idev);
+	write_unlock_bh(&idev->lock);
+}
+
 /*
- *	MLD handling (alias multicast ICMPv6 messages)
+ * MLD handling (alias multicast ICMPv6 messages)
  */
-
 static void mld_group_queried(struct ifmcaddr6 *mc, unsigned long resptime)
 {
 	unsigned long delay = resptime;
@@ -1073,7 +1069,7 @@ static void mld_group_queried(struct ifmcaddr6 *mc, unsigned long resptime)
 	if (!mod_delayed_work(mld_wq, &mc->mca_work, msecs_to_jiffies(delay)))
 		mca_get(mc);
-	mc->mca_flags |= MAF_TIMER_RUNNING;
+	set_bit(MCA_TIMER_RUNNING, &mc->mca_flags);
 }
 /* mark EXCLUDE-mode sources */
@@ -1084,14 +1080,14 @@ static bool mld_xmarksources(struct ifmcaddr6 *mc, int nsrcs,
 	int i, scount;
 	scount = 0;
-	list_for_each_entry(psf, &mc->mca_source_list, list) {
+	list_for_each_entry_rcu(psf, &mc->mca_source_list, list) {
 		if (scount == nsrcs)
 			break;
 		for (i = 0; i < nsrcs; i++) {
 			/* skip inactive filters */
-			if (psf->sf_count[MCAST_INCLUDE] ||
-			    mc->mca_sfcount[MCAST_EXCLUDE] !=
-			    psf->sf_count[MCAST_EXCLUDE])
+			if (atomic_read(&psf->incl_count) ||
+			    atomic_read(&mc->mca_excl_count) !=
+			    atomic_read(&psf->excl_count))
 				break;
 			if (ipv6_addr_equal(&srcs[i], &psf->sf_addr)) {
 				scount++;
@@ -1099,7 +1095,8 @@ static bool mld_xmarksources(struct ifmcaddr6 *mc, int nsrcs,
 			}
 		}
 	}
-	mc->mca_flags &= ~MAF_GSQUERY;
+
+	clear_bit(MCA_GSQUERY, &mc->mca_flags);
 	if (scount == nsrcs)	/* all sources excluded */
 		return false;
 	return true;
@@ -1117,7 +1114,7 @@ static bool mld_marksources(struct ifmcaddr6 *mc, int nsrcs,
 	/* mark INCLUDE-mode sources */
 	scount = 0;
-	list_for_each_entry(psf, &mc->mca_source_list, list) {
+	list_for_each_entry_rcu(psf, &mc->mca_source_list, list) {
 		if (scount == nsrcs)
 			break;
 		for (i = 0; i < nsrcs; i++) {
@@ -1129,10 +1126,10 @@ static bool mld_marksources(struct ifmcaddr6 *mc, int nsrcs,
 		}
 	}
 	if (!scount) {
-		mc->mca_flags &= ~MAF_GSQUERY;
+		clear_bit(MCA_GSQUERY, &mc->mca_flags);
 		return false;
 	}
-	mc->mca_flags |= MAF_GSQUERY;
+	set_bit(MCA_GSQUERY, &mc->mca_flags);
 	return true;
 }
@@ -1246,6 +1243,7 @@ static void mld_update_qri(struct inet6_dev *idev,
 	idev->mc_qri = msecs_to_jiffies(mldv2_mrc(mlh2));
 }
+/* atomic context */
 static int mld_process_v1(struct inet6_dev *idev, struct mld_msg *mld,
 			  unsigned long *max_delay, bool v1_query)
 {
@@ -1287,7 +1285,7 @@ static int mld_process_v1(struct inet6_dev *idev, struct mld_msg *mld,
 	/* cancel the interface change work */
 	mld_ifc_stop_work(idev);
 	/* clear deleted report items */
-	mld_clear_delrec(idev);
+	mld_clear_delrec_start_work(idev);
 	return 0;
 }
@@ -1306,7 +1304,7 @@ static int mld_process_v2(struct inet6_dev *idev, struct mld2_query *mld,
 	return 0;
 }
-/* called with rcu_read_lock() */
+/* atomic context */
 int mld_event_query(struct sk_buff *skb)
 {
 	struct mld2_query *mlh2 = NULL;
@@ -1391,42 +1389,40 @@ int mld_event_query(struct sk_buff *skb)
 		return -EINVAL;
 	}
-	read_lock_bh(&idev->lock);
 	if (group_type == IPV6_ADDR_ANY) {
-		list_for_each_entry(mc, &idev->mc_list, list) {
-			spin_lock_bh(&mc->mca_lock);
+		list_for_each_entry_rcu(mc, &idev->mc_list, list) {
+			spin_lock_bh(&mc->mca_work_lock);
 			mld_group_queried(mc, max_delay);
-			spin_unlock_bh(&mc->mca_lock);
+			spin_unlock_bh(&mc->mca_work_lock);
 		}
 	} else {
-		list_for_each_entry(mc, &idev->mc_list, list) {
+		list_for_each_entry_rcu(mc, &idev->mc_list, list) {
 			if (!ipv6_addr_equal(group, &mc->mca_addr))
 				continue;
-			spin_lock_bh(&mc->mca_lock);
-			if (mc->mca_flags & MAF_TIMER_RUNNING) {
+			spin_lock_bh(&mc->mca_work_lock);
+			if (test_bit(MCA_TIMER_RUNNING, &mc->mca_flags)) {
 				/* gsquery <- gsquery && mark */
 				if (!mark)
-					mc->mca_flags &= ~MAF_GSQUERY;
+					clear_bit(MCA_GSQUERY, &mc->mca_flags);
 			} else {
 				/* gsquery <- mark */
 				if (mark)
-					mc->mca_flags |= MAF_GSQUERY;
+					set_bit(MCA_GSQUERY, &mc->mca_flags);
 				else
-					mc->mca_flags &= ~MAF_GSQUERY;
+					clear_bit(MCA_GSQUERY, &mc->mca_flags);
 			}
-			if (!(mc->mca_flags & MAF_GSQUERY) ||
+			if (!(test_bit(MCA_GSQUERY, &mc->mca_flags)) ||
 			    mld_marksources(mc, ntohs(mlh2->mld2q_nsrcs), mlh2->mld2q_srcs))
 				mld_group_queried(mc, max_delay);
-			spin_unlock_bh(&mc->mca_lock);
+			spin_unlock_bh(&mc->mca_work_lock);
 			break;
 		}
 	}
-	read_unlock_bh(&idev->lock);
 	return 0;
 }
-/* called with rcu_read_lock() */
+/* atomic context */
 int mld_event_report(struct sk_buff *skb)
 {
 	struct inet6_dev *idev;
@@ -1462,18 +1458,17 @@ int mld_event_report(struct sk_buff *skb)
 	 *	Cancel the work for this group
 	 */
-	read_lock_bh(&idev->lock);
-	list_for_each_entry(mc, &idev->mc_list, list) {
+	list_for_each_entry_rcu(mc, &idev->mc_list, list) {
 		if (ipv6_addr_equal(&mc->mca_addr, &mld->mld_mca)) {
-			spin_lock(&mc->mca_lock);
+			spin_lock_bh(&mc->mca_work_lock);
 			if (cancel_delayed_work(&mc->mca_work))
 				mca_put(mc);
-			mc->mca_flags &= ~(MAF_LAST_REPORTER | MAF_TIMER_RUNNING);
-			spin_unlock(&mc->mca_lock);
+			clear_bit(MCA_LAST_REPORTER, &mc->mca_flags);
+			clear_bit(MCA_TIMER_RUNNING, &mc->mca_flags);
+			spin_unlock_bh(&mc->mca_work_lock);
 			break;
 		}
 	}
-	read_unlock_bh(&idev->lock);
 	return 0;
 }
@@ -1485,30 +1480,30 @@ static bool is_in(struct ifmcaddr6 *mc, struct ip6_sf_list *psf, int type,
 	case MLD2_MODE_IS_EXCLUDE:
 		if (gdeleted || sdeleted)
 			return false;
-		if (!((mc->mca_flags & MAF_GSQUERY) && !psf->sf_gsresp)) {
+		if (!(test_bit(MCA_GSQUERY, &mc->mca_flags) && !psf->sf_gsresp)) {
 			if (mc->mca_sfmode == MCAST_INCLUDE)
 				return true;
 			/* don't include if this source is excluded
 			 * in all filters
 			 */
-			if (psf->sf_count[MCAST_INCLUDE])
+			if (atomic_read(&psf->incl_count))
 				return type == MLD2_MODE_IS_INCLUDE;
-			return mc->mca_sfcount[MCAST_EXCLUDE] ==
-				psf->sf_count[MCAST_EXCLUDE];
+			return atomic_read(&mc->mca_excl_count) ==
+				atomic_read(&psf->excl_count);
 		}
 		return false;
 	case MLD2_CHANGE_TO_INCLUDE:
 		if (gdeleted || sdeleted)
 			return false;
-		return psf->sf_count[MCAST_INCLUDE] != 0;
+		return atomic_read(&psf->incl_count) != 0;
 	case MLD2_CHANGE_TO_EXCLUDE:
 		if (gdeleted || sdeleted)
 			return false;
-		if (mc->mca_sfcount[MCAST_EXCLUDE] == 0 ||
-		    psf->sf_count[MCAST_INCLUDE])
+		if (!atomic_read(&mc->mca_excl_count) ||
+		    atomic_read(&psf->incl_count))
 			return false;
-		return mc->mca_sfcount[MCAST_EXCLUDE] ==
-			psf->sf_count[MCAST_EXCLUDE];
+		return atomic_read(&mc->mca_excl_count) ==
+			atomic_read(&psf->excl_count);
 	case MLD2_ALLOW_NEW_SOURCES:
 		if (gdeleted || !psf->sf_crcount)
 			return false;
@@ -1719,7 +1714,7 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *mc,
 	unsigned int mtu;
 	dev = idev->dev;
-	if (mc->mca_flags & MAF_NOREPORT)
+	if (mc->mca_noreport)
 		return skb;
 	mtu = READ_ONCE(dev->mtu);
@@ -1801,8 +1796,8 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *mc,
 decrease_sf_crcount:
 			psf->sf_crcount--;
 			if ((sdeleted || gdeleted) && psf->sf_crcount == 0) {
-				list_del(&psf->list);
-				kfree(psf);
+				list_del_rcu(&psf->list);
+				kfree_rcu(psf, rcu);
 				continue;
 			}
 		}
@@ -1825,8 +1820,11 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *mc,
 	if (pgr)
 		pgr->grec_nsrcs = htons(scount);
-	if (isquery)
-		mc->mca_flags &= ~MAF_GSQUERY;	/* clear query state */
+	if (isquery) {
+		spin_lock_bh(&mc->mca_work_lock);
+		clear_bit(MCA_GSQUERY, &mc->mca_flags);	/* clear query state */
+		spin_unlock_bh(&mc->mca_work_lock);
+	}
 	return skb;
 }
@@ -1835,29 +1833,23 @@ static void mld_send_report(struct inet6_dev *idev, struct ifmcaddr6 *mc)
 	struct sk_buff *skb = NULL;
 	int type;
-	read_lock_bh(&idev->lock);
 	if (!mc) {
 		list_for_each_entry(mc, &idev->mc_list, list) {
-			if (mc->mca_flags & MAF_NOREPORT)
+			if (mc->mca_noreport)
 				continue;
-			spin_lock_bh(&mc->mca_lock);
-			if (mc->mca_sfcount[MCAST_EXCLUDE])
+			if (atomic_read(&mc->mca_excl_count))
 				type = MLD2_MODE_IS_EXCLUDE;
 			else
 				type = MLD2_MODE_IS_INCLUDE;
 			skb = add_grec(skb, mc, type, 0, 0, 0);
-			spin_unlock_bh(&mc->mca_lock);
 		}
 	} else {
-		spin_lock_bh(&mc->mca_lock);
-		if (mc->mca_sfcount[MCAST_EXCLUDE])
+		if (atomic_read(&mc->mca_excl_count))
 			type = MLD2_MODE_IS_EXCLUDE;
 		else
 			type = MLD2_MODE_IS_INCLUDE;
 		skb = add_grec(skb, mc, type, 0, 0, 0);
-		spin_unlock_bh(&mc->mca_lock);
 	}
-	read_unlock_bh(&idev->lock);
 	if (skb)
 		mld_sendpack(skb);
 }
@@ -1871,15 +1863,15 @@ static void mld_clear_zeros(struct ifmcaddr6 *mc)
 	list_for_each_entry_safe(psf, tmp, &mc->mca_tomb_list, list) {
 		if (psf->sf_crcount == 0) {
-			list_del(&psf->list);
-			kfree(psf);
+			list_del_rcu(&psf->list);
+			kfree_rcu(psf, rcu);
 		}
 	}
 	list_for_each_entry_safe(psf, tmp, &mc->mca_source_list, list) {
 		if (psf->sf_crcount == 0) {
-			list_del(&psf->list);
-			kfree(psf);
+			list_del_rcu(&psf->list);
+			kfree_rcu(psf, rcu);
 		}
 	}
 }
@@ -1890,9 +1882,6 @@ static void mld_send_cr(struct inet6_dev *idev)
 	struct ifmcaddr6 *mc, *tmp;
 	int type, dtype;
-	read_lock_bh(&idev->lock);
-	spin_lock(&idev->mc_tomb_lock);
-
 	/* deleted MCA's */
 	list_for_each_entry_safe(mc, tmp, &idev->mc_tomb_list, list) {
 		if (mc->mca_sfmode == MCAST_INCLUDE) {
@@ -1921,12 +1910,10 @@ static void mld_send_cr(struct inet6_dev *idev)
 			kfree(mc);
 		}
 	}
-	spin_unlock(&idev->mc_tomb_lock);
 	/* change recs */
 	list_for_each_entry(mc, &idev->mc_list, list) {
-		spin_lock_bh(&mc->mca_lock);
-		if (mc->mca_sfcount[MCAST_EXCLUDE]) {
+		if (atomic_read(&mc->mca_excl_count)) {
 			type = MLD2_BLOCK_OLD_SOURCES;
 			dtype = MLD2_ALLOW_NEW_SOURCES;
 		} else {
@@ -1945,9 +1932,7 @@ static void mld_send_cr(struct inet6_dev *idev)
 			skb = add_grec(skb, mc, type, 0, 0, 0);
 			mc->mca_crcount--;
 		}
-		spin_unlock_bh(&mc->mca_lock);
 	}
-	read_unlock_bh(&idev->lock);
 	if (!skb)
 		return;
 	mld_sendpack(skb);
@@ -2061,17 +2046,14 @@ static void mld_send_initial_cr(struct inet6_dev *idev)
 		return;
 	skb = NULL;
-	read_lock_bh(&idev->lock);
 	list_for_each_entry(mc, &idev->mc_list, list) {
-		spin_lock_bh(&mc->mca_lock);
-		if (mc->mca_sfcount[MCAST_EXCLUDE])
+		if (atomic_read(&mc->mca_excl_count))
 			type = MLD2_CHANGE_TO_EXCLUDE;
 		else
 			type = MLD2_ALLOW_NEW_SOURCES;
 		skb = add_grec(skb, mc, type, 0, 0, 1);
-		spin_unlock_bh(&mc->mca_lock);
 	}
-	read_unlock_bh(&idev->lock);
+
 	if (skb)
 		mld_sendpack(skb);
 }
@@ -2094,6 +2076,7 @@ static void mld_dad_work(struct work_struct *work)
 					     struct inet6_dev,
 					     mc_dad_work);
+	rtnl_lock();
 	mld_send_initial_cr(idev);
 	if (idev->mc_dad_count) {
 		idev->mc_dad_count--;
@@ -2101,6 +2084,18 @@ static void mld_dad_work(struct work_struct *work)
 			mld_dad_start_work(idev, unsolicited_report_interval(idev));
 	}
+	rtnl_unlock();
+	in6_dev_put(idev);
+}
+
+static void mld_clear_delrec_work(struct work_struct *work)
+{
+	struct inet6_dev *idev = container_of(to_delayed_work(work),
+					      struct inet6_dev,
+					      mc_delrec_work);
+	rtnl_lock();
+	mld_clear_delrec(idev);
+	rtnl_unlock();
 	in6_dev_put(idev);
 }
@@ -2118,24 +2113,32 @@ static int ip6_mc_del1_src(struct ifmcaddr6 *mc, int sfmode,
 		}
 	}
-	if (!found || psf->sf_count[sfmode] == 0) {
+	if (!found)
 		/* source filter not found, or count wrong => bug */
 		return -ESRCH;
+
+	if (sfmode == MCAST_INCLUDE) {
+		if (!atomic_read(&psf->incl_count))
+			return -ESRCH;
+		atomic_dec(&psf->incl_count);
+	} else {
+		if (!atomic_read(&psf->excl_count))
+			return -ESRCH;
+		atomic_dec(&psf->excl_count);
 	}
-	psf->sf_count[sfmode]--;
-	if (!psf->sf_count[MCAST_INCLUDE] && !psf->sf_count[MCAST_EXCLUDE]) {
+	if (!atomic_read(&psf->incl_count) && !atomic_read(&psf->excl_count)) {
 		struct inet6_dev *idev = mc->idev;
 		/* no more filters for this source */
 		list_del_init(&psf->list);
-		if (psf->sf_oldin && !(mc->mca_flags & MAF_NOREPORT) &&
-		    !mld_in_v1_mode(idev)) {
+		if (psf->sf_oldin && !mld_in_v1_mode(idev) &&
+		    !mc->mca_noreport) {
 			psf->sf_crcount = idev->mc_qrv;
 			list_add(&psf->list, &mc->mca_tomb_list);
 			rv = 1;
 		} else {
-			kfree(psf);
+			kfree_rcu(psf, rcu);
 		}
 	}
 	return rv;
@@ -2152,27 +2155,27 @@ static int ip6_mc_del_src(struct inet6_dev *idev, const struct in6_addr *mca,
 	if (!idev)
 		return -ENODEV;
-	read_lock_bh(&idev->lock);
 	list_for_each_entry(mc, &idev->mc_list, list) {
 		if (ipv6_addr_equal(mca, &mc->mca_addr)) {
 			found = true;
 			break;
 		}
 	}
-	if (!found) {
-		/* MCA not found?? bug */
-		read_unlock_bh(&idev->lock);
+
+	if (!found)
 		return -ESRCH;
-	}
-	spin_lock_bh(&mc->mca_lock);
+
 	sf_markstate(mc);
 	if (!delta) {
-		if (!mc->mca_sfcount[sfmode]) {
-			spin_unlock_bh(&mc->mca_lock);
-			read_unlock_bh(&idev->lock);
-			return -EINVAL;
+		if (sfmode == MCAST_INCLUDE) {
+			if (!atomic_read(&mc->mca_incl_count))
+				return -EINVAL;
+			atomic_dec(&mc->mca_incl_count);
+		} else {
+			if (!atomic_read(&mc->mca_excl_count))
+				return -EINVAL;
+			atomic_dec(&mc->mca_excl_count);
 		}
-		mc->mca_sfcount[sfmode]--;
 	}
 	err = 0;
 	i = 0;
@@ -2186,8 +2189,8 @@ static int ip6_mc_del_src(struct inet6_dev *idev, const struct in6_addr *mca,
 	}
 	if (mc->mca_sfmode == MCAST_EXCLUDE &&
-	    mc->mca_sfcount[MCAST_EXCLUDE] == 0 &&
-	    mc->mca_sfcount[MCAST_INCLUDE]) {
+	    !atomic_read(&mc->mca_excl_count) &&
+	    atomic_read(&mc->mca_incl_count)) {
 		struct ip6_sf_list *psf;
 		/* filter mode change */
@@ -2200,8 +2203,7 @@ static int ip6_mc_del_src(struct inet6_dev *idev, const struct in6_addr *mca,
 	} else if (sf_setstate(mc) || changerec) {
 		mld_ifc_event(mc->idev);
 	}
-	spin_unlock_bh(&mc->mca_lock);
-	read_unlock_bh(&idev->lock);
+
 	return err;
 }
@@ -2215,32 +2217,32 @@ static void ip6_mc_del_src_bulk(struct inet6_dev *idev,
 	struct ifmcaddr6 *mc;
 	bool found = false;
 	int changerec = 0;
-	int i, rv;
+	int rv;
 	if (!idev)
 		return;
-	read_lock_bh(&idev->lock);
 	list_for_each_entry(mc, &idev->mc_list, list) {
 		if (ipv6_addr_equal(mca, &mc->mca_addr)) {
 			found = true;
 			break;
 		}
 	}
-	if (!found) {
-		/* MCA not found?? bug */
-		read_unlock_bh(&idev->lock);
+	if (!found)
 		return;
-	}
-	spin_lock_bh(&mc->mca_lock);
+
 	sf_markstate(mc);
-	if (!mc->mca_sfcount[sfmode]) {
-		spin_unlock_bh(&mc->mca_lock);
-		read_unlock_bh(&idev->lock);
-		return;
+	if (sfmode == MCAST_INCLUDE) {
+		if (!atomic_read(&mc->mca_incl_count))
+			return;
+
+		atomic_dec(&mc->mca_incl_count);
+	} else {
+		if (!atomic_read(&mc->mca_excl_count))
+			return;
+
+		atomic_dec(&mc->mca_excl_count);
 	}
-	mc->mca_sfcount[sfmode]--;
-	i = 0;
 	list_for_each_entry_safe(psl, tmp, &mc_lst->sflist, list) {
 		rv = ip6_mc_del1_src(mc, sfmode, &psl->sl_addr);
@@ -2249,11 +2251,12 @@ static void ip6_mc_del_src_bulk(struct inet6_dev *idev,
 		kfree_rcu(psl, rcu);
 		changerec |= rv > 0;
+		cond_resched();
 	}
 	if (mc->mca_sfmode == MCAST_EXCLUDE &&
-	    mc->mca_sfcount[MCAST_EXCLUDE] == 0 &&
-	    mc->mca_sfcount[MCAST_INCLUDE]) {
+	    !atomic_read(&mc->mca_excl_count) &&
+	    atomic_read(&mc->mca_incl_count)) {
		struct ip6_sf_list *psf;
 		/* filter mode change */
@@ -2266,8 +2269,6 @@ static void ip6_mc_del_src_bulk(struct inet6_dev *idev,
 	} else if (sf_setstate(mc) || changerec) {
 		mld_ifc_event(mc->idev);
 	}
-	spin_unlock_bh(&mc->mca_lock);
-	read_unlock_bh(&idev->lock);
 }
 /*
@@ -2287,36 +2288,42 @@ static int ip6_mc_add1_src(struct ifmcaddr6 *mc, int sfmode,
 	}
 	if (!found) {
-		psf = kzalloc(sizeof(*psf), GFP_ATOMIC);
+		psf = kzalloc(sizeof(*psf), GFP_KERNEL);
 		if (!psf)
 			return -ENOBUFS;
+		atomic_set(&psf->incl_count, 0);
+		atomic_set(&psf->excl_count, 0);
 		psf->sf_addr = *psfsrc;
 		INIT_LIST_HEAD(&psf->list);
-		list_add_tail(&psf->list, &mc->mca_source_list);
+		list_add_tail_rcu(&psf->list, &mc->mca_source_list);
 	}
-	psf->sf_count[sfmode]++;
+
+	if (sfmode == MCAST_INCLUDE)
+		atomic_inc(&psf->incl_count);
+	else
+		atomic_inc(&psf->excl_count);
 	return 0;
 }
 static void sf_markstate(struct ifmcaddr6 *mc)
 {
-	int mca_xcount = mc->mca_sfcount[MCAST_EXCLUDE];
+	int mca_xcount = atomic_read(&mc->mca_excl_count);
 	struct ip6_sf_list *psf;
 	list_for_each_entry(psf, &mc->mca_source_list, list) {
-		if (mc->mca_sfcount[MCAST_EXCLUDE]) {
+		if (atomic_read(&mc->mca_excl_count)) {
 			psf->sf_oldin = mca_xcount ==
-				psf->sf_count[MCAST_EXCLUDE] &&
-				!psf->sf_count[MCAST_INCLUDE];
+				atomic_read(&psf->excl_count) &&
+				!atomic_read(&psf->incl_count);
 		} else
-			psf->sf_oldin = psf->sf_count[MCAST_INCLUDE] != 0;
+			psf->sf_oldin = atomic_read(&psf->incl_count) != 0;
 	}
 }
 static int sf_setstate(struct ifmcaddr6 *mc)
 {
-	int mca_xcount = mc->mca_sfcount[MCAST_EXCLUDE];
+	int mca_xcount = atomic_read(&mc->mca_excl_count);
 	struct ip6_sf_list *psf, *dpsf;
 	int qrv = mc->idev->mc_qrv;
 	int new_in, rv;
@@ -2326,11 +2333,11 @@ static int sf_setstate(struct ifmcaddr6 *mc)
 	list_for_each_entry(psf, &mc->mca_source_list, list) {
 		found = false;
-		if (mc->mca_sfcount[MCAST_EXCLUDE]) {
-			new_in = mca_xcount == psf->sf_count[MCAST_EXCLUDE] &&
-				!psf->sf_count[MCAST_INCLUDE];
+		if (atomic_read(&mc->mca_excl_count)) {
+			new_in = mca_xcount == atomic_read(&psf->excl_count) &&
+				!atomic_read(&psf->incl_count);
 		} else {
-			new_in = psf->sf_count[MCAST_INCLUDE] != 0;
+			new_in = atomic_read(&psf->incl_count) != 0;
 		}
 		if (new_in) {
@@ -2366,20 +2373,19 @@ static int sf_setstate(struct ifmcaddr6 *mc)
 		}
 		if (!found) {
-			dpsf = kmalloc(sizeof(*dpsf), GFP_ATOMIC);
+			dpsf = kmalloc(sizeof(*dpsf), GFP_KERNEL);
 			if (!dpsf)
 				continue;
 			INIT_LIST_HEAD(&dpsf->list);
 			dpsf->sf_addr = psf->sf_addr;
-			dpsf->sf_count[MCAST_INCLUDE] =
-				psf->sf_count[MCAST_INCLUDE];
-			dpsf->sf_count[MCAST_EXCLUDE] =
-				psf->sf_count[MCAST_EXCLUDE];
+			atomic_set(&dpsf->incl_count,
+				   atomic_read(&psf->incl_count));
+			atomic_set(&dpsf->excl_count,
+				   atomic_read(&psf->excl_count));
 			dpsf->sf_gsresp = psf->sf_gsresp;
 			dpsf->sf_oldin = psf->sf_oldin;
 			dpsf->sf_crcount = psf->sf_crcount;
-			/* mc->mca_lock held by callers */
 			list_add(&dpsf->list, &mc->mca_tomb_list);
 		}
 		dpsf->sf_crcount = qrv;
@@ -2404,38 +2410,41 @@ static int ip6_mc_add_src(struct inet6_dev *idev, const struct in6_addr *mca,
 	if (!idev)
 		return -ENODEV;
-	read_lock_bh(&idev->lock);
 	list_for_each_entry(mc, &idev->mc_list, list) {
 		if (ipv6_addr_equal(mca, &mc->mca_addr)) {
 			found = true;
 			break;
 		}
 	}
-	if (!found) {
-		/* MCA not found?? bug */
-		read_unlock_bh(&idev->lock);
+	if (!found)
 		return -ESRCH;
-	}
-	spin_lock_bh(&mc->mca_lock);
 	sf_markstate(mc);
 	isexclude = mc->mca_sfmode == MCAST_EXCLUDE;
-	if (!delta)
-		mc->mca_sfcount[sfmode]++;
+	if (!delta) {
+		if (sfmode == MCAST_INCLUDE)
+			atomic_inc(&mc->mca_incl_count);
+		else
+			atomic_inc(&mc->mca_excl_count);
+	}
 	if (psfsrc)
 		err = ip6_mc_add1_src(mc, sfmode, psfsrc);
 	if (err) {
-		if (!delta)
-			mc->mca_sfcount[sfmode]--;
-	} else if (isexclude != (mc->mca_sfcount[MCAST_EXCLUDE] != 0)) {
+		if (!delta) {
+			if (sfmode == MCAST_INCLUDE)
+				atomic_dec(&mc->mca_incl_count);
+			else
+				atomic_dec(&mc->mca_excl_count);
+		}
+	} else if (isexclude != (atomic_read(&mc->mca_excl_count) != 0)) {
 		struct ip6_sf_list *psf;
 		/* filter mode change */
-		if (mc->mca_sfcount[MCAST_EXCLUDE])
+		if (atomic_read(&mc->mca_excl_count))
 			mc->mca_sfmode = MCAST_EXCLUDE;
-		else if (mc->mca_sfcount[MCAST_INCLUDE])
+		else if (atomic_read(&mc->mca_incl_count))
 			mc->mca_sfmode = MCAST_INCLUDE;
 		/* else no filters; keep old mode for reports */
@@ -2448,8 +2457,6 @@ static int ip6_mc_add_src(struct inet6_dev *idev, const struct in6_addr *mca,
 		mld_ifc_event(idev);
 	}
-	spin_unlock_bh(&mc->mca_lock);
-	read_unlock_bh(&idev->lock);
 	return err;
 }
@@ -2481,17 +2488,19 @@ static int ip6_mc_add_src_bulk(struct inet6_dev *idev, struct group_filter *gsf,
 		/* MCA not found?? bug */
 		return -ESRCH;
 	}
-	spin_lock_bh(&mc->mca_lock);
 	sf_markstate(mc);
 	isexclude = mc->mca_sfmode == MCAST_EXCLUDE;
-	mc->mca_sfcount[sfmode]++;
+	if (sfmode == MCAST_INCLUDE)
+		atomic_inc(&mc->mca_incl_count);
+	else
+		atomic_inc(&mc->mca_excl_count);
 	for (i = 0; i < gsf->gf_numsrc; i++, ++list) {
 		struct sockaddr_in6 *psin6;
 		psl = sock_kmalloc(sk, sizeof(struct ip6_sf_socklist),
-				   GFP_ATOMIC);
+				   GFP_KERNEL);
 		if (!psl) {
 			err = -ENOBUFS;
 			break;
@@ -2508,20 +2517,24 @@ static int ip6_mc_add_src_bulk(struct inet6_dev *idev, struct group_filter *gsf,
 	}
 	if (err) {
-		mc->mca_sfcount[sfmode]--;
+		if (sfmode == MCAST_INCLUDE)
+			atomic_dec(&mc->mca_incl_count);
+		else
+			atomic_dec(&mc->mca_excl_count);
 		list_for_each_entry_safe(psl, tmp, head, list) {
 			list_del(&psl->list);
 			atomic_sub(sizeof(*psl), &sk->sk_omem_alloc);
 			kfree(psl);
+			cond_resched();
 		}
-	} else if (isexclude != (mc->mca_sfcount[MCAST_EXCLUDE] != 0)) {
+	} else if (isexclude != (atomic_read(&mc->mca_excl_count) != 0)) {
 		struct ip6_sf_list *psf;
 		/* filter mode change */
-		if (mc->mca_sfcount[MCAST_EXCLUDE])
+		if (atomic_read(&mc->mca_excl_count))
 			mc->mca_sfmode = MCAST_EXCLUDE;
-		else if (mc->mca_sfcount[MCAST_INCLUDE])
+		else if (atomic_read(&mc->mca_incl_count))
 			mc->mca_sfmode = MCAST_INCLUDE;
 		/* else no filters; keep old mode for reports */
@@ -2532,7 +2545,6 @@ static int ip6_mc_add_src_bulk(struct inet6_dev *idev, struct group_filter *gsf,
 		mld_ifc_event(idev);
 	} else if (sf_setstate(mc))
 		mld_ifc_event(idev);
-	spin_unlock_bh(&mc->mca_lock);
 	return err;
 }
@@ -2544,16 +2556,18 @@ static void ip6_mc_clear_src(struct ifmcaddr6 *mc)
 	list_for_each_entry_safe(psf, tmp, &mc->mca_tomb_list, list) {
 		list_del(&psf->list);
 		kfree(psf);
+		cond_resched();
 	}
 	list_for_each_entry_safe(psf, tmp, &mc->mca_source_list, list) {
-		list_del(&psf->list);
-		kfree(psf);
+		list_del_rcu(&psf->list);
+		kfree_rcu(psf, rcu);
+		cond_resched();
 	}
 	mc->mca_sfmode = MCAST_EXCLUDE;
-	mc->mca_sfcount[MCAST_INCLUDE] = 0;
-	mc->mca_sfcount[MCAST_EXCLUDE] = 1;
+	atomic_set(&mc->mca_incl_count, 0);
+	atomic_set(&mc->mca_excl_count, 1);
 }
@@ -2561,14 +2575,14 @@ static void mld_join_group(struct ifmcaddr6 *mc)
 {
 	unsigned long delay;
-	if (mc->mca_flags & MAF_NOREPORT)
+	if (mc->mca_noreport)
 		return;
 	mld_send(&mc->mca_addr, mc->idev->dev, ICMPV6_MGM_REPORT);
 	delay = prandom_u32() % unsolicited_report_interval(mc->idev);
-	spin_lock_bh(&mc->mca_lock);
+	spin_lock_bh(&mc->mca_work_lock);
 	if (cancel_delayed_work(&mc->mca_work)) {
 		mca_put(mc);
 		delay = mc->mca_work.timer.expires - jiffies;
@@ -2577,22 +2591,16 @@ static void mld_join_group(struct ifmcaddr6 *mc)
 	if (!mod_delayed_work(mld_wq, &mc->mca_work, msecs_to_jiffies(delay)))
 		mca_get(mc);
-	mc->mca_flags |= MAF_TIMER_RUNNING | MAF_LAST_REPORTER;
-	spin_unlock_bh(&mc->mca_lock);
-}
-static void ip6_mc_leave_src(struct sock *sk, struct ipv6_mc_socklist *mc_lst,
-			     struct inet6_dev *idev)
-{
-	write_lock_bh(&mc_lst->sflock);
-	ip6_mc_del_src_bulk(idev, mc_lst, sk);
-	write_unlock_bh(&mc_lst->sflock);
+	set_bit(MCA_TIMER_RUNNING, &mc->mca_flags);
+	set_bit(MCA_LAST_REPORTER, &mc->mca_flags);
+	spin_unlock_bh(&mc->mca_work_lock);
 }
 static void mld_leave_group(struct ifmcaddr6 *mc)
 {
 	if (mld_in_v1_mode(mc->idev)) {
-		if (mc->mca_flags & MAF_LAST_REPORTER)
+		if (test_bit(MCA_LAST_REPORTER, &mc->mca_flags))
 			mld_send(&mc->mca_addr, mc->idev->dev,
 				 ICMPV6_MGM_REDUCTION);
 	} else {
@@ -2607,7 +2615,9 @@ static void mld_gq_work(struct work_struct *work)
 					     struct inet6_dev,
 					     mc_gq_work);
 	idev->mc_gq_running = 0;
+	rtnl_lock();
 	mld_send_report(idev, NULL);
+	rtnl_unlock();
 	in6_dev_put(idev);
 }
@@ -2616,20 +2626,26 @@ static void mld_ifc_work(struct work_struct *work)
 	struct inet6_dev *idev = container_of(to_delayed_work(work),
 					      struct inet6_dev,
 					      mc_ifc_work);
+	rtnl_lock();
 	mld_send_cr(idev);
+
 	if (idev->mc_ifc_count) {
 		idev->mc_ifc_count--;
 		if (idev->mc_ifc_count)
 			mld_ifc_start_work(idev, unsolicited_report_interval(idev));
 	}
+	rtnl_unlock();
 	in6_dev_put(idev);
 }
 static void mld_ifc_event(struct inet6_dev *idev)
 {
+	ASSERT_RTNL();
+
 	if (mld_in_v1_mode(idev))
 		return;
+
 	idev->mc_ifc_count = idev->mc_qrv;
 	mld_ifc_start_work(idev, 1);
 }
@@ -2639,15 +2655,17 @@ static void mld_mca_work(struct work_struct *work)
 	struct ifmcaddr6 *mc = container_of(to_delayed_work(work),
 					    struct ifmcaddr6, mca_work);
+	rtnl_lock();
 	if (mld_in_v1_mode(mc->idev))
 		mld_send(&mc->mca_addr, mc->idev->dev, ICMPV6_MGM_REPORT);
 	else
 		mld_send_report(mc->idev, mc);
+	rtnl_unlock();
-	spin_lock_bh(&mc->mca_lock);
-	mc->mca_flags |= MAF_LAST_REPORTER;
-	mc->mca_flags &= ~MAF_TIMER_RUNNING;
-	spin_unlock_bh(&mc->mca_lock);
+	spin_lock_bh(&mc->mca_work_lock);
+	set_bit(MCA_LAST_REPORTER, &mc->mca_flags);
+	clear_bit(MCA_TIMER_RUNNING, &mc->mca_flags);
+	spin_unlock_bh(&mc->mca_work_lock);
 	mca_put(mc);
 }
@@ -2659,14 +2677,16 @@ void ipv6_mc_unmap(struct inet6_dev *idev)
 	/* Install multicast list, except for all-nodes (already installed) */
-	read_lock_bh(&idev->lock);
+	ASSERT_RTNL();
+
 	list_for_each_entry_safe(mc, tmp, &idev->mc_list, list)
 		mld_group_dropped(mc);
-	read_unlock_bh(&idev->lock);
 }
 void ipv6_mc_remap(struct inet6_dev *idev)
 {
+	ASSERT_RTNL();
+
 	ipv6_mc_up(idev);
 }
@@ -2676,10 +2696,9 @@ void ipv6_mc_down(struct inet6_dev *idev)
 {
 	struct ifmcaddr6 *mc, *tmp;
-	/* Withdraw multicast list */
-
-	read_lock_bh(&idev->lock);
+	ASSERT_RTNL();
+
+	/* Withdraw multicast list */
 	list_for_each_entry_safe(mc, tmp, &idev->mc_list, list)
 		mld_group_dropped(mc);
@@ -2689,7 +2708,7 @@ void ipv6_mc_down(struct inet6_dev *idev)
 	mld_ifc_stop_work(idev);
 	mld_gq_stop_work(idev);
 	mld_dad_stop_work(idev);
-	read_unlock_bh(&idev->lock);
+	mld_clear_delrec_stop_work(idev);
 }
 static void ipv6_mc_reset(struct inet6_dev *idev)
@@ -2709,21 +2728,21 @@ void ipv6_mc_up(struct inet6_dev *idev)
 	/* Install multicast list, except for all-nodes (already installed) */
-	read_lock_bh(&idev->lock);
+	ASSERT_RTNL();
+
 	ipv6_mc_reset(idev);
 	list_for_each_entry_safe(mc, tmp, &idev->mc_list, list) {
 		mld_del_delrec(idev, mc);
 		mld_group_added(mc);
 	}
-	read_unlock_bh(&idev->lock);
 }
 /* IPv6 device initialization. */
 void ipv6_mc_init_dev(struct inet6_dev *idev)
 {
-	write_lock_bh(&idev->lock);
-	spin_lock_init(&idev->mc_tomb_lock);
+	ASSERT_RTNL();
+
 	idev->mc_gq_running = 0;
 	INIT_DELAYED_WORK(&idev->mc_gq_work, mld_gq_work);
 	INIT_LIST_HEAD(&idev->mc_tomb_list);
@@ -2731,8 +2750,8 @@ void ipv6_mc_init_dev(struct inet6_dev *idev)
 	idev->mc_ifc_count = 0;
 	INIT_DELAYED_WORK(&idev->mc_ifc_work, mld_ifc_work);
 	INIT_DELAYED_WORK(&idev->mc_dad_work, mld_dad_work);
+	INIT_DELAYED_WORK(&idev->mc_delrec_work, mld_clear_delrec_work);
 	ipv6_mc_reset(idev);
-	write_unlock_bh(&idev->lock);
 }
 /*
@@ -2743,6 +2762,8 @@ void ipv6_mc_destroy_dev(struct inet6_dev *idev)
 {
 	struct ifmcaddr6 *mc, *tmp;
+	ASSERT_RTNL();
+
 	/* Deactivate works */
 	ipv6_mc_down(idev);
 	mld_clear_delrec(idev);
@@ -2757,15 +2778,11 @@ void ipv6_mc_destroy_dev(struct inet6_dev *idev)
 	if (idev->cnf.forwarding)
 		__ipv6_dev_mc_dec(idev, &in6addr_linklocal_allrouters);
-	write_lock_bh(&idev->lock);
 	list_for_each_entry_safe(mc, tmp, &idev->mc_list, list) {
-		list_del(&mc->list);
-		write_unlock_bh(&idev->lock);
+		list_del_rcu(&mc->list);
 		ip6_mc_clear_src(mc);
 		mca_put(mc);
-		write_lock_bh(&idev->lock);
 	}
-	write_unlock_bh(&idev->lock);
 }
 static void ipv6_mc_rejoin_groups(struct inet6_dev *idev)
@@ -2775,12 +2792,11 @@ static void ipv6_mc_rejoin_groups(struct inet6_dev *idev)
 	ASSERT_RTNL();
 	if (mld_in_v1_mode(idev)) {
-		read_lock_bh(&idev->lock);
 		list_for_each_entry(mc, &idev->mc_list, list)
 			mld_join_group(mc);
-		read_unlock_bh(&idev->lock);
-	} else
+	} else {
 		mld_send_report(idev, NULL);
+	}
 }
 static int ipv6_mc_netdev_event(struct notifier_block *this,
@@ -2829,12 +2845,10 @@ static inline struct ifmcaddr6 *mld_mc_get_first(struct seq_file *seq)
 		if (!idev)
 			continue;
-		read_lock_bh(&idev->lock);
-		list_for_each_entry(mc, &idev->mc_list, list) {
+		list_for_each_entry_rcu(mc, &idev->mc_list, list) {
 			state->idev = idev;
 			return mc;
 		}
-		read_unlock_bh(&idev->lock);
 	}
 	return NULL;
 }
@@ -2843,15 +2857,12 @@ static struct ifmcaddr6 *mld_mc_get_next(struct seq_file *seq, struct ifmcaddr6
 {
 	struct mld_mc_iter_state *state = mld_mc_seq_private(seq);
-	list_for_each_entry_continue(mc, &state->idev->mc_list, list)
+	list_for_each_entry_continue_rcu(mc, &state->idev->mc_list, list)
 		return mc;
 	mc = NULL;
 	while (!mc) {
-		if (state->idev)
-			read_unlock_bh(&state->idev->lock);
-
 		state->dev = next_net_device_rcu(state->dev);
 		if (!state->dev) {
 			state->idev = NULL;
@@ -2860,9 +2871,8 @@ static struct ifmcaddr6 *mld_mc_get_next(struct seq_file *seq, struct ifmcaddr6
 		state->idev = __in6_dev_get(state->dev);
 		if (!state->idev)
 			continue;
-		read_lock_bh(&state->idev->lock);
-		mc = list_first_entry_or_null(&state->idev->mc_list,
-					      struct ifmcaddr6, list);
+		mc = list_first_or_null_rcu(&state->idev->mc_list,
+					    struct ifmcaddr6, list);
 	}
 	return mc;
 }
@@ -2897,10 +2907,8 @@ static void mld_mc_seq_stop(struct seq_file *seq, void *v)
 {
 	struct mld_mc_iter_state *state = mld_mc_seq_private(seq);
-	if (likely(state->idev)) {
-		read_unlock_bh(&state->idev->lock);
+	if (likely(state->idev))
 		state->idev = NULL;
-	}
 	state->dev = NULL;
 	rcu_read_unlock();
 }
@@ -2911,11 +2919,11 @@ static int mld_mc_seq_show(struct seq_file *seq, void *v)
 	struct mld_mc_iter_state *state = mld_mc_seq_private(seq);
 	seq_printf(seq,
-		   "%-4d %-15s %pi6 %5d %08X %ld\n",
+		   "%-4d %-15s %pi6 %5d %08lX %ld\n",
 		   state->dev->ifindex, state->dev->name,
 		   &mc->mca_addr,
 		   mc->mca_users, mc->mca_flags,
-		   (mc->mca_flags & MAF_TIMER_RUNNING) ?
+		   (test_bit(MCA_TIMER_RUNNING, &mc->mca_flags)) ?
 		   jiffies_to_clock_t(mc->mca_work.timer.expires - jiffies) : 0);
 	return 0;
 }
@@ -2951,21 +2959,17 @@ static inline struct ip6_sf_list *mld_mcf_get_first(struct seq_file *seq)
 		idev = __in6_dev_get(state->dev);
 		if (unlikely(idev == NULL))
 			continue;
-		read_lock_bh(&idev->lock);
-		mc = list_first_entry_or_null(&idev->mc_list,
-					      struct ifmcaddr6, list);
+		mc = list_first_or_null_rcu(&idev->mc_list,
					    struct ifmcaddr6, list);
 		if (likely(mc)) {
-			spin_lock_bh(&mc->mca_lock);
-			psf = list_first_entry_or_null(&mc->mca_source_list,
-						       struct ip6_sf_list, list);
+			psf = list_first_or_null_rcu(&mc->mca_source_list,
+						     struct ip6_sf_list, list);
 			if (likely(psf)) {
 				state->mc = mc;
 				state->idev = idev;
 				break;
 			}
-			spin_unlock_bh(&mc->mca_lock);
 		}
-		read_unlock_bh(&idev->lock);
 	}
 	return psf;
 }
@@ -2975,29 +2979,23 @@ static struct ip6_sf_list *mld_mcf_get_next(struct seq_file *seq,
 {
 	struct mld_mcf_iter_state *state = mld_mcf_seq_private(seq);
-	list_for_each_entry_continue(psf, &state->mc->mca_source_list, list)
+	list_for_each_entry_continue_rcu(psf, &state->mc->mca_source_list, list)
 		return psf;
 	psf = NULL;
 	while (!psf) {
-		spin_unlock_bh(&state->mc->mca_lock);
-		list_for_each_entry_continue(state->mc, &state->idev->mc_list, list) {
-			spin_lock_bh(&state->mc->mca_lock);
-			psf = list_first_entry_or_null(&state->mc->mca_source_list,
-						       struct ip6_sf_list, list);
-			if (!psf) {
-				spin_unlock_bh(&state->mc->mca_lock);
+		list_for_each_entry_continue_rcu(state->mc,
+						 &state->idev->mc_list, list) {
+			psf = list_first_or_null_rcu(&state->mc->mca_source_list,
+						     struct ip6_sf_list, list);
+			if (!psf)
 				continue;
-			}
 			goto out;
 		}
 		state->mc = NULL;
 		while (!state->mc) {
-			if (likely(state->idev))
-				read_unlock_bh(&state->idev->lock);
-
 			state->dev = next_net_device_rcu(state->dev);
 			if (!state->dev) {
				state->idev = NULL;
@@ -3006,15 +3004,13 @@ static struct ip6_sf_list *mld_mcf_get_next(struct seq_file *seq,
 			state->idev = __in6_dev_get(state->dev);
 			if (!state->idev)
				continue;
-
read_lock_bh(&state->idev->lock); - state->mc = list_first_entry_or_null(&state->idev->mc_list, - struct ifmcaddr6, list); + state->mc = list_first_or_null_rcu(&state->idev->mc_list, + struct ifmcaddr6, list); } if (!state->mc) break; - spin_lock_bh(&state->mc->mca_lock); - psf = list_first_entry_or_null(&state->mc->mca_source_list, - struct ip6_sf_list, list); + psf = list_first_or_null_rcu(&state->mc->mca_source_list, + struct ip6_sf_list, list); } out: return psf; @@ -3054,14 +3050,10 @@ static void mld_mcf_seq_stop(struct seq_file *seq, void *v) { struct mld_mcf_iter_state *state = mld_mcf_seq_private(seq); - if (likely(state->mc)) { - spin_unlock_bh(&state->mc->mca_lock); + if (likely(state->mc)) state->mc = NULL; - } - if (likely(state->idev)) { - read_unlock_bh(&state->idev->lock); + if (likely(state->idev)) state->idev = NULL; - } state->dev = NULL; rcu_read_unlock(); } @@ -3075,12 +3067,12 @@ static int mld_mcf_seq_show(struct seq_file *seq, void *v) seq_puts(seq, "Idx Device Multicast Address Source Address INC EXC\n"); } else { seq_printf(seq, - "%3d %6.6s %pi6 %pi6 %6lu %6lu\n", + "%3d %6.6s %pi6 %pi6 %6u %6u\n", state->dev->ifindex, state->dev->name, &state->mc->mca_addr, &psf->sf_addr, - psf->sf_count[MCAST_INCLUDE], - psf->sf_count[MCAST_EXCLUDE]); + atomic_read(&psf->incl_count), + atomic_read(&psf->excl_count)); } return 0; } @@ -3144,6 +3136,7 @@ static int __net_init mld_net_init(struct net *net) } inet6_sk(net->ipv6.igmp_sk)->hop_limit = 1; + net->ipv6.igmp_sk->sk_allocation = GFP_KERNEL; err = inet_ctl_sock_create(&net->ipv6.mc_autojoin_sk, PF_INET6, SOCK_RAW, IPPROTO_ICMPV6, net);