diff mbox series

[mptcp-next,v2,10/21] mptcp: handle local addrs announced by userspace PMs

Message ID 20220112221523.1829397-11-kishen.maloor@intel.com (mailing list archive)
State Superseded, archived
Headers show
Series mptcp: support userspace path management | expand

Checks

Context Check Description
matttbe/checkpatch success total: 0 errors, 0 warnings, 0 checks, 119 lines checked
matttbe/build fail Build error with: -Werror
matttbe/KVM_Validation__normal warning Unstable: 2 failed test(s): packetdrill_add_addr selftest_mptcp_join

Commit Message

Kishen Maloor Jan. 12, 2022, 10:15 p.m. UTC
This change adds a new internal function to store/retrieve local
addrs announced by userspace PM implementations from the kernel
context. The function does not impose any limit on the number
of addrs, and handles the requirements of three scenarios:
1) ADD_ADDR announcements (which require that a local id be
provided), 2) retrieving the local id associated with an address,
including where one may need to be assigned, and 3) reissuance of
ADD_ADDRs when there's a successful match of addr/id.

The list of all stored local addr entries is held under the
MPTCP sock structure. This list, if not released by the REMOVE_ADDR
flow, is freed when the sock is destroyed.

Signed-off-by: Kishen Maloor <kishen.maloor@intel.com>
---
 net/mptcp/pm_netlink.c | 79 ++++++++++++++++++++++++++++++++++++++++++
 net/mptcp/protocol.c   |  2 ++
 net/mptcp/protocol.h   |  2 ++
 3 files changed, 83 insertions(+)

Comments

Paolo Abeni Jan. 14, 2022, 5:11 p.m. UTC | #1
On Wed, 2022-01-12 at 17:15 -0500, Kishen Maloor wrote:
> This change adds a new internal function to store/retrieve local
> addrs announced by userspace PM implementations from the kernel
> context. The function does not stipulate any limitation on the #
> of addrs, and handles the requirements of three scenarios:
> 1) ADD_ADDR announcements (which require that a local id be
> provided), 2) retrieving the local id associated with an address,
> also where one may need to be assigned, and 3) reissuance of
> ADD_ADDRs when there's a successful match of addr/id.
> 
> The list of all stored local addr entries is held under the
> MPTCP sock structure. This list, if not released by the REMOVE_ADDR
> flow is freed while the sock is destructed.

It feels strange to me that we need to maintain an additional address
list inside the kernel for the user-space PM - which should take care
of all the status information.

Why isn't anno_list enough? Why isn't it cheaper to extend it?

Since the list is unlimited, a malicious (or buggy) user-space could
consume all the kernel memory. I think we need some limits, or at least
some accounting.

/P
Kishen Maloor Jan. 19, 2022, 1:24 a.m. UTC | #2
On 1/14/22 9:11 AM, Paolo Abeni wrote:
> On Wed, 2022-01-12 at 17:15 -0500, Kishen Maloor wrote:
>> This change adds a new internal function to store/retrieve local
>> addrs announced by userspace PM implementations from the kernel
>> context. The function does not stipulate any limitation on the #
>> of addrs, and handles the requirements of three scenarios:
>> 1) ADD_ADDR announcements (which require that a local id be
>> provided), 2) retrieving the local id associated with an address,
>> also where one may need to be assigned, and 3) reissuance of
>> ADD_ADDRs when there's a successful match of addr/id.
>>
>> The list of all stored local addr entries is held under the
>> MPTCP sock structure. This list, if not released by the REMOVE_ADDR
>> flow is freed while the sock is destructed.
> 
> It feels strange to me that we need to maintain an additional addresses
> list inside the kernel for the user-space PM - which should take care
> of all the status information.

The PM daemon will have the complete picture, but the protocol needs to know the
local ids assigned to addresses, so the kernel has to store addresses
(with their ids) regardless of the PM type.

> 
> Why anno_list is not enough? Why isn't cheapest to extend it? 

The anno_list is the list of addresses that were announced via the ADD_ADDR
command, used specifically for doing re-transmissions.

However, the implementation can also accept connections at local addresses
that were not explicitly announced (hence not in anno_list), and in this case
the kernel records the address and assigns it a local id that is unique
within the scope of its connection.

So basically msk->local_addr_list is the list of all known local addresses,
both announced and unannounced, so their ids can be retrieved later.

To give you more context, in my last iteration of the code before I posted
the series, I was storing local addrs in pernet->local_addr_list, just as is
done for the kernel PM, but later moved it to a per-msk list to eliminate
contention (in accessing that list) with other userspace or kernel
PM managed connections.
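
The per-connection id assignment described here can be illustrated with a
simplified userspace sketch (plain C; struct id_map and both helpers are
hypothetical stand-ins for the kernel's bitmap API used in the patch below):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_ADDR_ID 255	/* stand-in for MPTCP_PM_MAX_ADDR_ID */

/* Hypothetical stand-in for the kernel's per-msk id bitmap. */
struct id_map {
	uint8_t used[MAX_ADDR_ID + 1];	/* used[id] != 0 when id is taken */
};

/* Mark an id as in use (mirrors __set_bit() on the id bitmap). */
static void id_map_set(struct id_map *m, unsigned int id)
{
	m->used[id] = 1;
}

/* Return the first free id >= 1, or 0 when all are taken. This mirrors
 * find_next_zero_bit(id_bitmap, MPTCP_PM_MAX_ADDR_ID + 1, 1); the search
 * starts at 1 because id 0 belongs to the connection's initial address.
 */
static unsigned int id_map_first_free(const struct id_map *m)
{
	unsigned int id;

	for (id = 1; id <= MAX_ADDR_ID; id++)
		if (!m->used[id])
			return id;
	return 0;
}
```

With ids 1 and 2 already recorded on the list, a newly stored unannounced
address would get id 3, matching what the append function in the patch does.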

> 
> Being the list unlimited a malicius (or buggy) user-space could consume
> all the kernel memory. I think we need some limits, or at least some
> accounting.

At this point we're treating the PM daemon as a trusted entity that can issue
these netlink commands for path management. So there is currently no
configurable ceiling in the kernel on the size of the PM's kernel-stored
context.

> 
> /P
>
Mat Martineau Jan. 19, 2022, 7:20 p.m. UTC | #3
On Tue, 18 Jan 2022, Kishen Maloor wrote:

> On 1/14/22 9:11 AM, Paolo Abeni wrote:
>> On Wed, 2022-01-12 at 17:15 -0500, Kishen Maloor wrote:
>>> This change adds a new internal function to store/retrieve local
>>> addrs announced by userspace PM implementations from the kernel
>>> context. The function does not stipulate any limitation on the #
>>> of addrs, and handles the requirements of three scenarios:
>>> 1) ADD_ADDR announcements (which require that a local id be
>>> provided), 2) retrieving the local id associated with an address,
>>> also where one may need to be assigned, and 3) reissuance of
>>> ADD_ADDRs when there's a successful match of addr/id.
>>>
>>> The list of all stored local addr entries is held under the
>>> MPTCP sock structure. This list, if not released by the REMOVE_ADDR
>>> flow is freed while the sock is destructed.
>>
>> It feels strange to me that we need to maintain an additional addresses
>> list inside the kernel for the user-space PM - which should take care
>> of all the status information.
>
> The PM daemon will have the complete picture, but the protocol needs to know the
> local ids assigned to addresses so as such the kernel has to store addresses
> (with their ids) regardless of PM.
>
>>
>> Why anno_list is not enough? Why isn't cheapest to extend it?
>
> The anno_list is the list of addresses that were announced via the ADD_ADDR command, to
> be used specifically for doing re-transmissions.
>
> However, the implementation can also accept connections at local addresses that were not
> explicitly announced (hence not in anno_list), and in this case the kernel records the address
> and assigns it a local id unique in the scope of its connection.
>
> So basically msk->local_addr_list is the list of all known local addresses, both announced
> and not so their ids can be later retrieved.
>
> To give you more context, in my last iteration of the code before I posted the series, I was storing local addrs
> in pernet->local_addr_list just as its done for the kernel PM, but later moved it to
> a per-msk list to eliminate contention (in accessing that list) with other userspace or kernel
> PM managed connections.
>
>>
>> Being the list unlimited a malicius (or buggy) user-space could consume
>> all the kernel memory. I think we need some limits, or at least some
>> accounting.
>
> At this point we're taking the PM daemon as a trusted entity which can issue these
> netlink commands for path management. So there is currently no configurable ceiling
> in the kernel on the size of the PM's kernel stored context.
>

I think it's still worthwhile to have some limits/accounting as Paolo 
suggests - part of the point of pushing PM code to userspace is so bugs or 
other vulnerabilities don't take down the whole machine.

--
Mat Martineau
Intel
Kishen Maloor Jan. 19, 2022, 8:27 p.m. UTC | #4
On 1/19/22 11:20 AM, Mat Martineau wrote:
> On Tue, 18 Jan 2022, Kishen Maloor wrote:
> 
>> On 1/14/22 9:11 AM, Paolo Abeni wrote:
>>> On Wed, 2022-01-12 at 17:15 -0500, Kishen Maloor wrote:
>>>> This change adds a new internal function to store/retrieve local
>>>> addrs announced by userspace PM implementations from the kernel
>>>> context. The function does not stipulate any limitation on the #
>>>> of addrs, and handles the requirements of three scenarios:
>>>> 1) ADD_ADDR announcements (which require that a local id be
>>>> provided), 2) retrieving the local id associated with an address,
>>>> also where one may need to be assigned, and 3) reissuance of
>>>> ADD_ADDRs when there's a successful match of addr/id.
>>>>
>>>> The list of all stored local addr entries is held under the
>>>> MPTCP sock structure. This list, if not released by the REMOVE_ADDR
>>>> flow is freed while the sock is destructed.
>>>
>>> It feels strange to me that we need to maintain an additional addresses
>>> list inside the kernel for the user-space PM - which should take care
>>> of all the status information.
>>
>> The PM daemon will have the complete picture, but the protocol needs to know the
>> local ids assigned to addresses so as such the kernel has to store addresses
>> (with their ids) regardless of PM.
>>
>>>
>>> Why anno_list is not enough? Why isn't cheapest to extend it?
>>
>> The anno_list is the list of addresses that were announced via the ADD_ADDR command, to
>> be used specifically for doing re-transmissions.
>>
>> However, the implementation can also accept connections at local addresses that were not
>> explicitly announced (hence not in anno_list), and in this case the kernel records the address
>> and assigns it a local id unique in the scope of its connection.
>>
>> So basically msk->local_addr_list is the list of all known local addresses, both announced
>> and not so their ids can be later retrieved.
>>
>> To give you more context, in my last iteration of the code before I posted the series, I was storing local addrs
>> in pernet->local_addr_list just as its done for the kernel PM, but later moved it to
>> a per-msk list to eliminate contention (in accessing that list) with other userspace or kernel
>> PM managed connections.
>>
>>>
>>> Being the list unlimited a malicius (or buggy) user-space could consume
>>> all the kernel memory. I think we need some limits, or at least some
>>> accounting.
>>
>> At this point we're taking the PM daemon as a trusted entity which can issue these
>> netlink commands for path management. So there is currently no configurable ceiling
>> in the kernel on the size of the PM's kernel stored context.
>>
> 
> I think it's still worthwhile to have some limits/accounting as Paolo suggests - part of the point of pushing PM code to userspace is so bugs or other vulnerabilities don't take down the whole machine.
> 

Agreed, but do we put circuit breakers inside the kernel or the PM daemon?
(e.g., should a PM plug-in be able to see limitations imposed by the daemon?)

If we consider userspace path management as providing "more flexibility",
then conceivably the kernel would expose hooks to the PM daemon to bump up
any kernel-enforced limits when necessary. So a malicious or buggy PM could
also raise those limits.

Since interactions over the netlink interface are to be carried out by a
privileged entity, I've been assuming that the PM daemon is to be trusted.

On the other hand, if we're talking about having fixed upper bounds in the
kernel on the number of addrs/subflows (which cannot be changed by the PM
daemon), then that could make sense.

> -- 
> Mat Martineau
> Intel
Mat Martineau Jan. 19, 2022, 8:44 p.m. UTC | #5
On Wed, 19 Jan 2022, Kishen Maloor wrote:

> On 1/19/22 11:20 AM, Mat Martineau wrote:
>> On Tue, 18 Jan 2022, Kishen Maloor wrote:
>>
>>> On 1/14/22 9:11 AM, Paolo Abeni wrote:
>>>> On Wed, 2022-01-12 at 17:15 -0500, Kishen Maloor wrote:
>>>>> This change adds a new internal function to store/retrieve local
>>>>> addrs announced by userspace PM implementations from the kernel
>>>>> context. The function does not stipulate any limitation on the #
>>>>> of addrs, and handles the requirements of three scenarios:
>>>>> 1) ADD_ADDR announcements (which require that a local id be
>>>>> provided), 2) retrieving the local id associated with an address,
>>>>> also where one may need to be assigned, and 3) reissuance of
>>>>> ADD_ADDRs when there's a successful match of addr/id.
>>>>>
>>>>> The list of all stored local addr entries is held under the
>>>>> MPTCP sock structure. This list, if not released by the REMOVE_ADDR
>>>>> flow is freed while the sock is destructed.
>>>>
>>>> It feels strange to me that we need to maintain an additional addresses
>>>> list inside the kernel for the user-space PM - which should take care
>>>> of all the status information.
>>>
>>> The PM daemon will have the complete picture, but the protocol needs to know the
>>> local ids assigned to addresses so as such the kernel has to store addresses
>>> (with their ids) regardless of PM.
>>>
>>>>
>>>> Why anno_list is not enough? Why isn't cheapest to extend it?
>>>
>>> The anno_list is the list of addresses that were announced via the ADD_ADDR command, to
>>> be used specifically for doing re-transmissions.
>>>
>>> However, the implementation can also accept connections at local addresses that were not
>>> explicitly announced (hence not in anno_list), and in this case the kernel records the address
>>> and assigns it a local id unique in the scope of its connection.
>>>
>>> So basically msk->local_addr_list is the list of all known local addresses, both announced
>>> and not so their ids can be later retrieved.
>>>
>>> To give you more context, in my last iteration of the code before I posted the series, I was storing local addrs
>>> in pernet->local_addr_list just as its done for the kernel PM, but later moved it to
>>> a per-msk list to eliminate contention (in accessing that list) with other userspace or kernel
>>> PM managed connections.
>>>
>>>>
>>>> Being the list unlimited a malicius (or buggy) user-space could consume
>>>> all the kernel memory. I think we need some limits, or at least some
>>>> accounting.
>>>
>>> At this point we're taking the PM daemon as a trusted entity which can issue these
>>> netlink commands for path management. So there is currently no configurable ceiling
>>> in the kernel on the size of the PM's kernel stored context.
>>>
>>
>> I think it's still worthwhile to have some limits/accounting as Paolo suggests - part of the point of pushing PM code to userspace is so bugs or other vulnerabilities don't take down the whole machine.
>>
>
> Agreed, but do we put circuit breakers inside the kernel or the PM daemon? (for e.g.,
> if a PM plug-in can see limitations imposed by the daemon?)
>

Kernel, I'd say.

> If we consider userspace path management as providing "more flexibility",
> then conceivably the kernel would expose hooks to the PM daemon to bump up any
> kernel enforced limits when necessary. So a malicious or buggy PM could also raise
> those limits.
>

True, an admin (or bad actor) can get themselves into trouble by setting 
nonsensical limits. But the limits remain useful for stability and 
managing resources.

> Since interactions over the netlink interface are to be carried out by a privileged
> entity, I've been assuming that the PM daemon is to be trusted.
>

We're trusting the PM daemon with certain (granular) 
privileges/capabilities using CAP_NET_ADMIN, not the old 
"root-controls-all" model.

> On the other hand, if we're talking about having fixed upper bounds in the kernel wrt
> # of addrs/subflows (which cannot be changed by the PM daemon), then that could make sense.
>

That's pretty much what I'm talking about, maybe tunable (within a range) 
by a sysctl.
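
The range-clamped tunable being suggested could be sketched as follows
(plain userspace C; the PM_ADDR_LIMIT_* names, values, and both functions are
hypothetical illustrations, not code from this series):

```c
#include <assert.h>

/* Hypothetical bounds for a per-connection local address limit. */
#define PM_ADDR_LIMIT_MIN	1
#define PM_ADDR_LIMIT_MAX	256
#define PM_ADDR_LIMIT_DEFAULT	8

/* Clamp a requested value into the allowed range, much as a sysctl proc
 * handler with extra1/extra2 bounds would.
 */
static int pm_addr_limit_clamp(int requested)
{
	if (requested < PM_ADDR_LIMIT_MIN)
		return PM_ADDR_LIMIT_MIN;
	if (requested > PM_ADDR_LIMIT_MAX)
		return PM_ADDR_LIMIT_MAX;
	return requested;
}

/* Reject a new address once the per-connection count reaches the limit. */
static int pm_can_add_addr(int cur_count, int limit)
{
	return cur_count < limit;
}
```

Under this scheme the append path would call pm_can_add_addr() before
allocating a new entry and fail the netlink command otherwise, so even a
buggy PM daemon cannot grow the list without bound.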

--
Mat Martineau
Intel
Kishen Maloor Jan. 19, 2022, 9:30 p.m. UTC | #6
On 1/19/22 12:44 PM, Mat Martineau wrote:
> On Wed, 19 Jan 2022, Kishen Maloor wrote:
> 
>> On 1/19/22 11:20 AM, Mat Martineau wrote:
>>> On Tue, 18 Jan 2022, Kishen Maloor wrote:
>>>
>>>> On 1/14/22 9:11 AM, Paolo Abeni wrote:
>>>>> On Wed, 2022-01-12 at 17:15 -0500, Kishen Maloor wrote:
>>>>>> This change adds a new internal function to store/retrieve local
>>>>>> addrs announced by userspace PM implementations from the kernel
>>>>>> context. The function does not stipulate any limitation on the #
>>>>>> of addrs, and handles the requirements of three scenarios:
>>>>>> 1) ADD_ADDR announcements (which require that a local id be
>>>>>> provided), 2) retrieving the local id associated with an address,
>>>>>> also where one may need to be assigned, and 3) reissuance of
>>>>>> ADD_ADDRs when there's a successful match of addr/id.
>>>>>>
>>>>>> The list of all stored local addr entries is held under the
>>>>>> MPTCP sock structure. This list, if not released by the REMOVE_ADDR
>>>>>> flow is freed while the sock is destructed.
>>>>>
>>>>> It feels strange to me that we need to maintain an additional addresses
>>>>> list inside the kernel for the user-space PM - which should take care
>>>>> of all the status information.
>>>>
>>>> The PM daemon will have the complete picture, but the protocol needs to know the
>>>> local ids assigned to addresses so as such the kernel has to store addresses
>>>> (with their ids) regardless of PM.
>>>>
>>>>>
>>>>> Why anno_list is not enough? Why isn't cheapest to extend it?
>>>>
>>>> The anno_list is the list of addresses that were announced via the ADD_ADDR command, to
>>>> be used specifically for doing re-transmissions.
>>>>
>>>> However, the implementation can also accept connections at local addresses that were not
>>>> explicitly announced (hence not in anno_list), and in this case the kernel records the address
>>>> and assigns it a local id unique in the scope of its connection.
>>>>
>>>> So basically msk->local_addr_list is the list of all known local addresses, both announced
>>>> and not so their ids can be later retrieved.
>>>>
>>>> To give you more context, in my last iteration of the code before I posted the series, I was storing local addrs
>>>> in pernet->local_addr_list just as its done for the kernel PM, but later moved it to
>>>> a per-msk list to eliminate contention (in accessing that list) with other userspace or kernel
>>>> PM managed connections.
>>>>
>>>>>
>>>>> Being the list unlimited a malicius (or buggy) user-space could consume
>>>>> all the kernel memory. I think we need some limits, or at least some
>>>>> accounting.
>>>>
>>>> At this point we're taking the PM daemon as a trusted entity which can issue these
>>>> netlink commands for path management. So there is currently no configurable ceiling
>>>> in the kernel on the size of the PM's kernel stored context.
>>>>
>>>
>>> I think it's still worthwhile to have some limits/accounting as Paolo suggests - part of the point of pushing PM code to userspace is so bugs or other vulnerabilities don't take down the whole machine.
>>>
>>
>> Agreed, but do we put circuit breakers inside the kernel or the PM daemon? (for e.g.,
>> if a PM plug-in can see limitations imposed by the daemon?)
>>
> 
> Kernel, I'd say.
> 
>> If we consider userspace path management as providing "more flexibility",
>> then conceivably the kernel would expose hooks to the PM daemon to bump up any
>> kernel enforced limits when necessary. So a malicious or buggy PM could also raise
>> those limits.
>>
> 
> True, an admin (or bad actor) can get themselves into trouble by setting nonsensical limits. But the limits remain useful for stability and managing resources.
> 
>> Since interactions over the netlink interface are to be carried out by a privileged
>> entity, I've been assuming that the PM daemon is to be trusted.
>>
> 
> We're trusting the PM daemon with certain (granular) privileges/capabilities using CAP_NET_ADMIN, not the old "root-controls-all" model.
> 
>> On the other hand, if we're talking about having fixed upper bounds in the kernel wrt
>> # of addrs/subflows (which cannot be changed by the PM daemon), then that could make sense.
>>
> 
> That's pretty much what I'm talking about, maybe tunable (within a range) by a sysctl.

Sounds good, I will look into adding a couple of new params, configurable via
sysctl, for regulating activities by userspace PMs. Note that these will be
unrelated to the equivalent limits that we have in place for the kernel PM.

> 
> -- 
> Mat Martineau
> Intel

Patch

diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
index a8c9a89ee1c5..052a803a7f71 100644
--- a/net/mptcp/pm_netlink.c
+++ b/net/mptcp/pm_netlink.c
@@ -484,6 +484,31 @@  static bool mptcp_pm_alloc_anno_list(struct mptcp_sock *msk,
 	return true;
 }
 
+void mptcp_free_local_addr_list(struct mptcp_sock *msk)
+{
+	struct mptcp_pm_addr_entry *entry, *tmp;
+	struct sock *sk = (struct sock *)msk;
+	struct pm_nl_pernet *pernet;
+	LIST_HEAD(free_list);
+
+	if (READ_ONCE(msk->pm.pm_type) == MPTCP_PM_TYPE_KERNEL)
+		return;
+
+	pernet = net_generic(sock_net(sk), pm_nl_pernet_id);
+
+	pr_debug("msk=%p", msk);
+
+	mptcp_data_lock(sk);
+	list_splice_init(&msk->local_addr_list, &free_list);
+	mptcp_data_unlock(sk);
+
+	list_for_each_entry_safe(entry, tmp, &free_list, list) {
+		if (entry->lsk_ref)
+			lsk_list_release(pernet, entry->lsk_ref);
+		kfree(entry);
+	}
+}
+
 void mptcp_pm_free_anno_list(struct mptcp_sock *msk)
 {
 	struct mptcp_pm_add_entry *entry, *tmp;
@@ -972,6 +997,60 @@  static bool address_use_port(struct mptcp_pm_addr_entry *entry)
 		MPTCP_PM_ADDR_FLAG_SIGNAL;
 }
 
+static int mptcp_userspace_pm_append_new_local_addr(struct mptcp_sock *msk,
+						    struct mptcp_pm_addr_entry *entry)
+{
+	DECLARE_BITMAP(id_bitmap, MPTCP_PM_MAX_ADDR_ID + 1);
+	struct mptcp_pm_addr_entry *match = NULL;
+	struct sock *sk = (struct sock *)msk;
+	struct mptcp_pm_addr_entry *e;
+	bool addr_match = false;
+	bool id_match = false;
+	int ret = -EINVAL;
+
+	bitmap_zero(id_bitmap, MPTCP_PM_MAX_ADDR_ID + 1);
+
+	mptcp_data_lock(sk);
+	list_for_each_entry(e, &msk->local_addr_list, list) {
+		addr_match = addresses_equal(&e->addr, &entry->addr, true);
+		if (addr_match && entry->addr.id == 0)
+			entry->addr.id = e->addr.id;
+		id_match = (e->addr.id == entry->addr.id);
+		if (addr_match && id_match) {
+			match = e;
+			break;
+		} else if (addr_match || id_match) {
+			break;
+		}
+		__set_bit(e->addr.id, id_bitmap);
+	}
+
+	if (!match && !addr_match && !id_match) {
+		e = kmalloc(sizeof(*e), GFP_ATOMIC);
+		if (!e) {
+			mptcp_data_unlock(sk);
+			return -ENOMEM;
+		}
+
+		*e = *entry;
+		if (!e->addr.id)
+			e->addr.id = find_next_zero_bit(id_bitmap,
+							MPTCP_PM_MAX_ADDR_ID + 1,
+							1);
+		list_add_tail_rcu(&e->list, &msk->local_addr_list);
+		ret = e->addr.id;
+
+		if (e->lsk_ref && e->addr.port)
+			lsk_list_add_ref(e->lsk_ref);
+	} else if (match) {
+		ret = entry->addr.id;
+	}
+
+	mptcp_data_unlock(sk);
+
+	return ret;
+}
+
 static int mptcp_pm_nl_append_new_local_addr(struct pm_nl_pernet *pernet,
 					     struct mptcp_pm_addr_entry *entry)
 {
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 408a05bff633..331c1080396d 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2531,6 +2531,7 @@  static int __mptcp_init_sock(struct sock *sk)
 	INIT_LIST_HEAD(&msk->conn_list);
 	INIT_LIST_HEAD(&msk->join_list);
 	INIT_LIST_HEAD(&msk->rtx_queue);
+	INIT_LIST_HEAD(&msk->local_addr_list);
 	INIT_WORK(&msk->work, mptcp_worker);
 	__skb_queue_head_init(&msk->receive_queue);
 	msk->out_of_order_queue = RB_ROOT;
@@ -3027,6 +3028,7 @@  void mptcp_destroy_common(struct mptcp_sock *msk)
 	msk->rmem_fwd_alloc = 0;
 	mptcp_token_destroy(msk);
 	mptcp_pm_free_anno_list(msk);
+	mptcp_free_local_addr_list(msk);
 }
 
 static void mptcp_destroy(struct sock *sk)
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index c50247673c7e..63b4ea850d07 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -281,6 +281,7 @@  struct mptcp_sock {
 	struct sk_buff_head receive_queue;
 	struct list_head conn_list;
 	struct list_head rtx_queue;
+	struct list_head local_addr_list;
 	struct mptcp_data_frag *first_pending;
 	struct list_head join_list;
 	struct socket	*subflow; /* outgoing connect/listener/!mp_capable */
@@ -733,6 +734,7 @@  struct mptcp_sock *mptcp_token_get_sock(struct net *net, u32 token);
 struct mptcp_sock *mptcp_token_iter_next(const struct net *net, long *s_slot,
 					 long *s_num);
 void mptcp_token_destroy(struct mptcp_sock *msk);
+void mptcp_free_local_addr_list(struct mptcp_sock *msk);
 
 void mptcp_crypto_key_sha(u64 key, u32 *token, u64 *idsn);