From patchwork Tue Jan 10 21:00:29 2023
X-Patchwork-Submitter: Cong Wang
X-Patchwork-Id: 13095672
X-Patchwork-Delegate: kuba@kernel.org
From: Cong Wang
To: netdev@vger.kernel.org
Cc: saeed@kernel.org, gnault@redhat.com, Cong Wang, Tetsuo Handa,
	Jakub Sitnicki, Eric Dumazet
Subject: [Patch net v2 1/2] l2tp: convert l2tp_tunnel_list to idr
Date: Tue, 10 Jan 2023 13:00:29 -0800
Message-Id: <20230110210030.593083-2-xiyou.wangcong@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230110210030.593083-1-xiyou.wangcong@gmail.com>
References: <20230110210030.593083-1-xiyou.wangcong@gmail.com>

From: Cong Wang

l2tp uses l2tp_tunnel_list to track all registered tunnels and to
allocate tunnel IDs. IDR can do the same job.

More importantly, with IDR we can hold the ID before a successful
registration, so we don't need to worry about late error handling;
it is not easy to roll back socket changes. (A minimal sketch of this
reserve-then-publish pattern is included after the diff.)

This is a preparation for the following fix.

Cc: Tetsuo Handa
Cc: Guillaume Nault
Cc: Jakub Sitnicki
Cc: Eric Dumazet
Signed-off-by: Cong Wang
---
 net/l2tp/l2tp_core.c | 85 ++++++++++++++++++++++----------------------
 1 file changed, 42 insertions(+), 43 deletions(-)

diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
index 9a1415fe3fa7..894bc9ff0e71 100644
--- a/net/l2tp/l2tp_core.c
+++ b/net/l2tp/l2tp_core.c
@@ -104,9 +104,9 @@ static struct workqueue_struct *l2tp_wq;
 /* per-net private data for this module */
 static unsigned int l2tp_net_id;
 struct l2tp_net {
-	struct list_head l2tp_tunnel_list;
-	/* Lock for write access to l2tp_tunnel_list */
-	spinlock_t l2tp_tunnel_list_lock;
+	/* Lock for write access to l2tp_tunnel_idr */
+	spinlock_t l2tp_tunnel_idr_lock;
+	struct idr l2tp_tunnel_idr;
 	struct hlist_head l2tp_session_hlist[L2TP_HASH_SIZE_2];
 	/* Lock for write access to l2tp_session_hlist */
 	spinlock_t l2tp_session_hlist_lock;
@@ -208,13 +208,10 @@ struct l2tp_tunnel *l2tp_tunnel_get(const struct net *net, u32 tunnel_id)
 	struct l2tp_tunnel *tunnel;
 
 	rcu_read_lock_bh();
-	list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) {
-		if (tunnel->tunnel_id == tunnel_id &&
-		    refcount_inc_not_zero(&tunnel->ref_count)) {
-			rcu_read_unlock_bh();
-
-			return tunnel;
-		}
+	tunnel = idr_find(&pn->l2tp_tunnel_idr, tunnel_id);
+	if (tunnel && refcount_inc_not_zero(&tunnel->ref_count)) {
+		rcu_read_unlock_bh();
+		return tunnel;
 	}
 	rcu_read_unlock_bh();
 
@@ -224,13 +221,14 @@ EXPORT_SYMBOL_GPL(l2tp_tunnel_get);
 
 struct l2tp_tunnel *l2tp_tunnel_get_nth(const struct net *net, int nth)
 {
-	const struct l2tp_net *pn = l2tp_pernet(net);
+	struct l2tp_net *pn = l2tp_pernet(net);
+	unsigned long tunnel_id, tmp;
 	struct l2tp_tunnel *tunnel;
 	int count = 0;
 
 	rcu_read_lock_bh();
-	list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) {
-		if (++count > nth &&
+	idr_for_each_entry_ul(&pn->l2tp_tunnel_idr, tunnel, tmp, tunnel_id) {
+		if (tunnel && ++count > nth &&
 		    refcount_inc_not_zero(&tunnel->ref_count)) {
 			rcu_read_unlock_bh();
 			return tunnel;
@@ -1227,6 +1225,15 @@ static void l2tp_udp_encap_destroy(struct sock *sk)
 		l2tp_tunnel_delete(tunnel);
 }
 
+static void l2tp_tunnel_remove(struct net *net, struct l2tp_tunnel *tunnel)
+{
+	struct l2tp_net *pn = l2tp_pernet(net);
+
+	spin_lock_bh(&pn->l2tp_tunnel_idr_lock);
+	idr_remove(&pn->l2tp_tunnel_idr, tunnel->tunnel_id);
+	spin_unlock_bh(&pn->l2tp_tunnel_idr_lock);
+}
+
 /* Workqueue tunnel deletion function */
 static void l2tp_tunnel_del_work(struct work_struct *work)
 {
@@ -1234,7 +1241,6 @@ static void l2tp_tunnel_del_work(struct work_struct *work)
 						  del_work);
 	struct sock *sk = tunnel->sock;
 	struct socket *sock = sk->sk_socket;
-	struct l2tp_net *pn;
 
 	l2tp_tunnel_closeall(tunnel);
 
@@ -1248,12 +1254,7 @@ static void l2tp_tunnel_del_work(struct work_struct *work)
 		}
 	}
 
-	/* Remove the tunnel struct from the tunnel list */
-	pn = l2tp_pernet(tunnel->l2tp_net);
-	spin_lock_bh(&pn->l2tp_tunnel_list_lock);
-	list_del_rcu(&tunnel->list);
-	spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
-
+	l2tp_tunnel_remove(tunnel->l2tp_net, tunnel);
 
 	/* drop initial ref */
 	l2tp_tunnel_dec_refcount(tunnel);
 
@@ -1455,12 +1456,19 @@ static int l2tp_validate_socket(const struct sock *sk, const struct net *net,
 int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
 			 struct l2tp_tunnel_cfg *cfg)
 {
-	struct l2tp_tunnel *tunnel_walk;
-	struct l2tp_net *pn;
+	struct l2tp_net *pn = l2tp_pernet(net);
+	u32 tunnel_id = tunnel->tunnel_id;
 	struct socket *sock;
 	struct sock *sk;
 	int ret;
 
+	spin_lock_bh(&pn->l2tp_tunnel_idr_lock);
+	ret = idr_alloc_u32(&pn->l2tp_tunnel_idr, NULL, &tunnel_id, tunnel_id,
+			    GFP_ATOMIC);
+	spin_unlock_bh(&pn->l2tp_tunnel_idr_lock);
+	if (ret)
+		return ret;
+
 	if (tunnel->fd < 0) {
 		ret = l2tp_tunnel_sock_create(net, tunnel->tunnel_id,
 					      tunnel->peer_tunnel_id, cfg,
@@ -1481,23 +1489,13 @@ int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
 	rcu_assign_sk_user_data(sk, tunnel);
 	write_unlock_bh(&sk->sk_callback_lock);
 
-	tunnel->l2tp_net = net;
-	pn = l2tp_pernet(net);
-
 	sock_hold(sk);
 	tunnel->sock = sk;
+	tunnel->l2tp_net = net;
 
-	spin_lock_bh(&pn->l2tp_tunnel_list_lock);
-	list_for_each_entry(tunnel_walk, &pn->l2tp_tunnel_list, list) {
-		if (tunnel_walk->tunnel_id == tunnel->tunnel_id) {
-			spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
-			sock_put(sk);
-			ret = -EEXIST;
-			goto err_sock;
-		}
-	}
-	list_add_rcu(&tunnel->list, &pn->l2tp_tunnel_list);
-	spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
+	spin_lock_bh(&pn->l2tp_tunnel_idr_lock);
+	idr_replace(&pn->l2tp_tunnel_idr, tunnel, tunnel->tunnel_id);
+	spin_unlock_bh(&pn->l2tp_tunnel_idr_lock);
 
 	if (tunnel->encap == L2TP_ENCAPTYPE_UDP) {
 		struct udp_tunnel_sock_cfg udp_cfg = {
@@ -1523,9 +1521,6 @@ int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
 
 	return 0;
 
-err_sock:
-	write_lock_bh(&sk->sk_callback_lock);
-	rcu_assign_sk_user_data(sk, NULL);
 err_inval_sock:
 	write_unlock_bh(&sk->sk_callback_lock);
 
@@ -1534,6 +1529,7 @@ int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
 	else
 		sockfd_put(sock);
 err:
+	l2tp_tunnel_remove(net, tunnel);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(l2tp_tunnel_register);
@@ -1647,8 +1643,8 @@ static __net_init int l2tp_init_net(struct net *net)
 	struct l2tp_net *pn = net_generic(net, l2tp_net_id);
 	int hash;
 
-	INIT_LIST_HEAD(&pn->l2tp_tunnel_list);
-	spin_lock_init(&pn->l2tp_tunnel_list_lock);
+	idr_init(&pn->l2tp_tunnel_idr);
+	spin_lock_init(&pn->l2tp_tunnel_idr_lock);
 
 	for (hash = 0; hash < L2TP_HASH_SIZE_2; hash++)
 		INIT_HLIST_HEAD(&pn->l2tp_session_hlist[hash]);
@@ -1662,11 +1658,13 @@ static __net_exit void l2tp_exit_net(struct net *net)
 {
 	struct l2tp_net *pn = l2tp_pernet(net);
 	struct l2tp_tunnel *tunnel = NULL;
+	unsigned long tunnel_id, tmp;
 	int hash;
 
 	rcu_read_lock_bh();
-	list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) {
-		l2tp_tunnel_delete(tunnel);
+	idr_for_each_entry_ul(&pn->l2tp_tunnel_idr, tunnel, tmp, tunnel_id) {
+		if (tunnel)
+			l2tp_tunnel_delete(tunnel);
 	}
 	rcu_read_unlock_bh();
 
@@ -1676,6 +1674,7 @@ static __net_exit void l2tp_exit_net(struct net *net)
 
 	for (hash = 0; hash < L2TP_HASH_SIZE_2; hash++)
 		WARN_ON_ONCE(!hlist_empty(&pn->l2tp_session_hlist[hash]));
+	idr_destroy(&pn->l2tp_tunnel_idr);
 }
 
 static struct pernet_operations l2tp_net_ops = {
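
For reference, here is a minimal, illustrative sketch of the
reserve-then-publish pattern the commit message describes. It is not
part of the patch: "struct foo", foo_idr and foo_register() are
made-up names; the patch does the equivalent with l2tp_tunnel_idr
inside l2tp_tunnel_register().

/*
 * Illustrative only -- not part of this patch.
 */
#include <linux/gfp.h>
#include <linux/idr.h>
#include <linux/spinlock.h>
#include <linux/types.h>

static DEFINE_IDR(foo_idr);
static DEFINE_SPINLOCK(foo_idr_lock);

struct foo {
	u32 id;
};

static int foo_register(struct foo *f)
{
	u32 id = f->id;
	int err;

	/* Reserve the requested ID with a NULL placeholder.  A duplicate
	 * ID fails right here (-ENOSPC), before any other state is
	 * touched, so there is nothing to roll back.
	 */
	spin_lock_bh(&foo_idr_lock);
	err = idr_alloc_u32(&foo_idr, NULL, &id, id, GFP_ATOMIC);
	spin_unlock_bh(&foo_idr_lock);
	if (err)
		return err;

	/*
	 * ... fallible setup (socket configuration, etc.) would go here.
	 * On failure, idr_remove(&foo_idr, f->id) drops the reservation.
	 */

	/* Publish the real pointer only after setup succeeded.  Until
	 * then, idr_find() returns NULL for the reserved slot, so lookups
	 * never see a half-initialized object.
	 */
	spin_lock_bh(&foo_idr_lock);
	idr_replace(&foo_idr, f, f->id);
	spin_unlock_bh(&foo_idr_lock);

	return 0;
}

Because the ID slot is claimed before any socket state is modified, a
duplicate ID is rejected up front, and the only cleanup needed on a
later registration failure is the idr_remove() performed by the new
l2tp_tunnel_remove() helper.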