
[bpf-next,2/5] net/smc: allow smc to negotiate protocols on policies

Message ID 1682501055-4736-3-git-send-email-alibuda@linux.alibaba.com (mailing list archive)
State New, archived
Delegated to: BPF
Series net/smc: Introduce BPF injection capability

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 10 this patch: 10
netdev/cc_maintainers success CCed 10 of 10 maintainers
netdev/build_clang success Errors and warnings before: 9 this patch: 9
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 10 this patch: 10
netdev/checkpatch warning CHECK: Alignment should match open parenthesis CHECK: Blank lines aren't necessary after an open brace '{' WARNING: added, moved or deleted file(s), does MAINTAINERS need updating? WARNING: line length of 83 exceeds 80 columns WARNING: line length of 84 exceeds 80 columns
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ${{ matrix.test }} on ${{ matrix.arch }} with ${{ matrix.toolchain_full }}
bpf/vmtest-bpf-next-VM_Test-2 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-3 fail Logs for build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-4 fail Logs for build for aarch64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-5 fail Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-6 fail Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-7 fail Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-8 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-9 success Logs for veristat

Commit Message

D. Wythe April 26, 2023, 9:24 a.m. UTC
From: "D. Wythe" <alibuda@linux.alibaba.com>

As we all know, the SMC protocol is not suitable for all scenarios,
especially for short-lived connections. However, most applications
cannot guarantee that such scenarios never occur. Therefore, apps
may need some specific strategy to decide whether to use SMC or not.

Just like the congestion control implementations in TCP, this patch
provides a generic negotiator implementation. If necessary,
we can provide different protocol negotiation strategies for
apps based on this implementation.

But most importantly, this patch provides the possibility of
eBPF injection, allowing users to implement their own protocol
negotiation policy in userspace.

Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
---
 include/net/smc.h |  43 ++++++++++++
 net/Makefile      |   1 +
 net/smc/Kconfig   |  11 +++
 net/smc/af_smc.c  |  68 +++++++++++++++++-
 net/smc/bpf_smc.c | 201 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 323 insertions(+), 1 deletion(-)
 create mode 100644 net/smc/bpf_smc.c
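
For illustration, a userspace-defined policy could then look roughly like the
following BPF struct_ops sketch. This assumes the struct_ops registration for
smc_sock_negotiator_ops that later patches in this series are expected to add;
the section names, program name and the port-based policy are made up for the
example:

// SPDX-License-Identifier: GPL-2.0
/* Sketch only: assumes BPF struct_ops support for
 * smc_sock_negotiator_ops from later patches in this series.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* made-up policy: negotiate SMC only for local port 8080, otherwise
 * fall back to TCP. SK_PASS/SK_DROP come from enum sk_action.
 */
SEC("struct_ops/negotiate")
int BPF_PROG(sample_negotiate, struct sock *sk)
{
	return sk->__sk_common.skc_num == 8080 ? SK_PASS : SK_DROP;
}

SEC(".struct_ops")
struct smc_sock_negotiator_ops sample = {
	.name = "sample",
	.negotiate = (void *)sample_negotiate,
};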

Comments

Kui-Feng Lee April 26, 2023, 4:47 p.m. UTC | #1
On 4/26/23 02:24, D. Wythe wrote:
> From: "D. Wythe" <alibuda@linux.alibaba.com>
> diff --git a/net/smc/bpf_smc.c b/net/smc/bpf_smc.c
> new file mode 100644
> index 0000000..0c0ec05
> --- /dev/null
> +++ b/net/smc/bpf_smc.c
> @@ -0,0 +1,201 @@
> +// SPDX-License-Identifier: GPL-2.0-only
... cut ...
> +
> +/* register ops */
> +int smc_sock_register_negotiator_ops(struct smc_sock_negotiator_ops *ops)
> +{
> +	int ret;
> +
> +	ret = smc_sock_validate_negotiator_ops(ops);
> +	if (ret)
> +		return ret;
> +
> +	/* calt key by name hash */
> +	ops->key = jhash(ops->name, sizeof(ops->name), strlen(ops->name));
> +
> +	spin_lock(&smc_sock_negotiator_list_lock);
> +	if (smc_negotiator_ops_get_by_key(ops->key)) {
> +		pr_notice("smc: %s negotiator already registered\n", ops->name);
> +		ret = -EEXIST;
> +	} else {
> +		list_add_tail_rcu(&ops->list, &smc_sock_negotiator_list);
> +	}
> +	spin_unlock(&smc_sock_negotiator_list_lock);
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(smc_sock_register_negotiator_ops);

This and the following functions are not specific to BPF, right?
I found you have more BPF-specific code in this file in the following
patches. But I feel these functions should not be in this file, since
they are not BPF specific, contrary to what the file name "bpf_smc.c"
suggests.
D. Wythe April 27, 2023, 3:30 a.m. UTC | #2
Hi Lee,


On 4/27/23 12:47 AM, Kui-Feng Lee wrote:
>
>
> On 4/26/23 02:24, D. Wythe wrote:
>> From: "D. Wythe" <alibuda@linux.alibaba.com>
>> diff --git a/net/smc/bpf_smc.c b/net/smc/bpf_smc.c
>> new file mode 100644
>> index 0000000..0c0ec05
>> --- /dev/null
>> +++ b/net/smc/bpf_smc.c
>> @@ -0,0 +1,201 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
> ... cut ...

Will fix it, Thanks.

>> +
>> +/* register ops */
>> +int smc_sock_register_negotiator_ops(struct smc_sock_negotiator_ops 
>> *ops)
>> +{
>> +    int ret;
>> +
>> +    ret = smc_sock_validate_negotiator_ops(ops);
>> +    if (ret)
>> +        return ret;
>> +
>> +    /* calt key by name hash */
>> +    ops->key = jhash(ops->name, sizeof(ops->name), strlen(ops->name));
>> +
>> +    spin_lock(&smc_sock_negotiator_list_lock);
>> +    if (smc_negotiator_ops_get_by_key(ops->key)) {
>> +        pr_notice("smc: %s negotiator already registered\n", 
>> ops->name);
>> +        ret = -EEXIST;
>> +    } else {
>> +        list_add_tail_rcu(&ops->list, &smc_sock_negotiator_list);
>> +    }
>> +    spin_unlock(&smc_sock_negotiator_list_lock);
>> +    return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(smc_sock_register_negotiator_ops);
>
> This and following functions are not specific to BPF, right?
> I found you have more BPF specific code in this file in following
> patches.  But, I feel these function should not in this file since
> they are not BPF specific because file name "bpf_smc.c" hints.

Yes. Logically those functions do not belong in "bpf_smc.c".
However, SMC is compiled as a module by default, and currently
struct_ops needs to be built in, otherwise specific symbols will not
be found during linking.

Of course, I can move those functions into another new file, which
can also be built in. I may have to introduce a new Kconfig option
like SMC_NEGOTIATOR. But this feature is only effective when eBPF
exists, so from the perspective of SMC it would also be kind of weird.

But whatever, if you do think it's necessary, I can split it into two
files.

Best wishes.
D. Wythe
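
For reference, the hypothetical SMC_NEGOTIATOR split mentioned above might
look roughly like this; the option name, object file name and wording are
assumptions, modeled on the SMC_BPF entry in this patch:

config SMC_NEGOTIATOR
	bool "SMC: generic protocol negotiator"
	depends on SMC
	default n
	help
	  Builtin core for pluggable SMC protocol negotiators, kept
	  built in even when SMC itself is built as a module.

with a corresponding net/Makefile line:

obj-$(CONFIG_SMC_NEGOTIATOR)	+= smc/smc_negotiator.o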
Kui-Feng Lee April 27, 2023, 5:40 p.m. UTC | #3
On 4/26/23 20:30, D. Wythe wrote:
> 
> Hi Lee,
> 
> 
> On 4/27/23 12:47 AM, Kui-Feng Lee wrote:
>>
>>
>> On 4/26/23 02:24, D. Wythe wrote:
>>> From: "D. Wythe" <alibuda@linux.alibaba.com>
>>> diff --git a/net/smc/bpf_smc.c b/net/smc/bpf_smc.c
>>> new file mode 100644
>>> index 0000000..0c0ec05
>>> --- /dev/null
>>> +++ b/net/smc/bpf_smc.c
>>> @@ -0,0 +1,201 @@
>>> +// SPDX-License-Identifier: GPL-2.0-only
>> ... cut ...
> 
> Will fix it, Thanks.
> 
>>> +
>>> +/* register ops */
>>> +int smc_sock_register_negotiator_ops(struct smc_sock_negotiator_ops 
>>> *ops)
>>> +{
>>> +    int ret;
>>> +
>>> +    ret = smc_sock_validate_negotiator_ops(ops);
>>> +    if (ret)
>>> +        return ret;
>>> +
>>> +    /* calt key by name hash */
>>> +    ops->key = jhash(ops->name, sizeof(ops->name), strlen(ops->name));
>>> +
>>> +    spin_lock(&smc_sock_negotiator_list_lock);
>>> +    if (smc_negotiator_ops_get_by_key(ops->key)) {
>>> +        pr_notice("smc: %s negotiator already registered\n", 
>>> ops->name);
>>> +        ret = -EEXIST;
>>> +    } else {
>>> +        list_add_tail_rcu(&ops->list, &smc_sock_negotiator_list);
>>> +    }
>>> +    spin_unlock(&smc_sock_negotiator_list_lock);
>>> +    return ret;
>>> +}
>>> +EXPORT_SYMBOL_GPL(smc_sock_register_negotiator_ops);
>>
>> This and following functions are not specific to BPF, right?
>> I found you have more BPF specific code in this file in following
>> patches.  But, I feel these function should not in this file since
>> they are not BPF specific because file name "bpf_smc.c" hints.
> 
> Yes. Logically those functions are not suitable for being placed in 
> "bpf_smc.c".
> However, since SMC is compiled as modules by default, and currently
> struct ops needs to be built in, or specific symbols will not be found 
> during linking.
> 
> Of course, I can separate those this function in another new file, which 
> can also be built in.
> I may have to introduce a new KConfig likes SMC_NEGOTIATOR. But this 
> feature is  only effective
> when eBPF exists, so from the perspective of SMC, it would also be kind 
> of weird.
On the other hand, this feature is only effective when SMC exists.
Even without BPF, you can still implement a negotiator in a module.
Since you have exported these symbols, I suspect you expect
negotiators to live in modules or be builtin, right? If I am wrong
about the exports, perhaps you should stop exporting these symbols,
since they are only used locally.

> 
> But whatever, if you do think it's necessary, I can split it into two 
> files.
> 
> Besh wishes.
> D. Wythe
> 
> 
>
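
As an illustration of the module case, a minimal negotiator written directly
against the ops exported by this patch might look roughly like this (a sketch
only; the module and its always-fallback policy are made up). A socket would
then pick this policy by name through smc_sock_assign_negotiator_ops(smc,
"sample"):

// SPDX-License-Identifier: GPL-2.0
/* Sketch of a negotiator in a kernel module, using only the ops and
 * helpers introduced by this patch. The policy (never select SMC) is
 * purely illustrative.
 */
#include <linux/module.h>
#include <linux/bpf.h>	/* SK_PASS / SK_DROP */
#include <net/sock.h>
#include <net/smc.h>

static int sample_negotiate(struct sock *sk)
{
	return SK_DROP;	/* always fall back to TCP */
}

static struct smc_sock_negotiator_ops sample_negotiator_ops = {
	.name		= "sample",
	.negotiate	= sample_negotiate,
	.owner		= THIS_MODULE,
};

static int __init sample_negotiator_init(void)
{
	return smc_sock_register_negotiator_ops(&sample_negotiator_ops);
}

static void __exit sample_negotiator_exit(void)
{
	smc_sock_unregister_negotiator_ops(&sample_negotiator_ops);
}

module_init(sample_negotiator_init);
module_exit(sample_negotiator_exit);
MODULE_LICENSE("GPL");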

Patch

diff --git a/include/net/smc.h b/include/net/smc.h
index 6d076f5..cd701a3 100644
--- a/include/net/smc.h
+++ b/include/net/smc.h
@@ -296,6 +296,8 @@  struct smc_sock {				/* smc sock container */
 	atomic_t                queued_smc_hs;  /* queued smc handshakes */
 	struct inet_connection_sock_af_ops		af_ops;
 	const struct inet_connection_sock_af_ops	*ori_af_ops;
+	/* protocol negotiator ops */
+	const struct smc_sock_negotiator_ops *negotiator_ops;
 						/* original af ops */
 	int			sockopt_defer_accept;
 						/* sockopt TCP_DEFER_ACCEPT
@@ -316,4 +318,45 @@  struct smc_sock {				/* smc sock container */
 						 */
 };
 
+#ifdef CONFIG_SMC_BPF
+
+#define SMC_NEGOTIATOR_NAME_MAX	(16)
+#define SMC_SOCK_CLOSED_TIMING	(0)
+
+/* BPF struct ops for smc protocol negotiator */
+struct smc_sock_negotiator_ops {
+
+	struct list_head	list;
+
+	/* ops name */
+	char		name[16];
+	/* key for name */
+	u32			key;
+
+	/* init with sk */
+	void (*init)(struct sock *sk);
+
+	/* release with sk */
+	void (*release)(struct sock *sk);
+
+	/* advice for negotiate */
+	int (*negotiate)(struct sock *sk);
+
+	/* info gathering timing */
+	void (*collect_info)(struct sock *sk, int timing);
+
+	/* module owner */
+	struct module *owner;
+};
+
+int smc_sock_register_negotiator_ops(struct smc_sock_negotiator_ops *ops);
+int smc_sock_update_negotiator_ops(struct smc_sock_negotiator_ops *ops,
+					  struct smc_sock_negotiator_ops *old_ops);
+void smc_sock_unregister_negotiator_ops(struct smc_sock_negotiator_ops *ops);
+int smc_sock_assign_negotiator_ops(struct smc_sock *smc, const char *name);
+void smc_sock_cleanup_negotiator_ops(struct smc_sock *smc, int in_release);
+void smc_sock_clone_negotiator_ops(struct sock *parent, struct sock *child);
+
+#endif
+
 #endif	/* _SMC_H */
diff --git a/net/Makefile b/net/Makefile
index 8759200..af2f5ea 100644
--- a/net/Makefile
+++ b/net/Makefile
@@ -52,6 +52,7 @@  obj-$(CONFIG_TIPC)		+= tipc/
 obj-$(CONFIG_NETLABEL)		+= netlabel/
 obj-$(CONFIG_IUCV)		+= iucv/
 obj-$(CONFIG_SMC)		+= smc/
+obj-$(CONFIG_SMC_BPF)		+= smc/bpf_smc.o
 obj-$(CONFIG_RFKILL)		+= rfkill/
 obj-$(CONFIG_NET_9P)		+= 9p/
 obj-$(CONFIG_CAIF)		+= caif/
diff --git a/net/smc/Kconfig b/net/smc/Kconfig
index 1ab3c5a..bdcc9f1 100644
--- a/net/smc/Kconfig
+++ b/net/smc/Kconfig
@@ -19,3 +19,14 @@  config SMC_DIAG
 	  smcss.
 
 	  if unsure, say Y.
+
+config SMC_BPF
+	bool "SMC: support eBPF" if SMC
+	depends on BPF_SYSCALL
+	default n
+	help
+	  Supports eBPF to allow user-mode participation in SMC's protocol
+	  process via eBPF programs. Alternatively, information about SMC
+	  socks can be obtained through eBPF programs.
+
+	  If unsure, say N.
diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
index 50c38b6..6565f1f 100644
--- a/net/smc/af_smc.c
+++ b/net/smc/af_smc.c
@@ -68,6 +68,49 @@ 
 static void smc_tcp_listen_work(struct work_struct *);
 static void smc_connect_work(struct work_struct *);
 
+static int smc_sock_should_select_smc(const struct smc_sock *smc)
+{
+#ifdef CONFIG_SMC_BPF
+	const struct smc_sock_negotiator_ops *ops;
+	int ret;
+
+	rcu_read_lock();
+	ops = READ_ONCE(smc->negotiator_ops);
+
+	/* No negotiator_ops supplied or no negotiate func set,
+	 * always pass it.
+	 */
+	if (!ops || !ops->negotiate) {
+		rcu_read_unlock();
+		return SK_PASS;
+	}
+
+	ret = ops->negotiate((struct sock *)&smc->sk);
+	rcu_read_unlock();
+	return ret;
+#else
+	return SK_PASS;
+#endif
+}
+
+#ifdef CONFIG_SMC_BPF
+static void smc_sock_perform_collecting_info(const struct smc_sock *smc, int timing)
+{
+	const struct smc_sock_negotiator_ops *ops;
+
+	rcu_read_lock();
+	ops = READ_ONCE(smc->negotiator_ops);
+
+	if (!ops || !ops->collect_info) {
+		rcu_read_unlock();
+		return;
+	}
+
+	ops->collect_info((struct sock *)&smc->sk, timing);
+	rcu_read_unlock();
+}
+#endif
+
 int smc_nl_dump_hs_limitation(struct sk_buff *skb, struct netlink_callback *cb)
 {
 	struct smc_nl_dmp_ctx *cb_ctx = smc_nl_dmp_ctx(cb);
@@ -166,6 +209,9 @@  static bool smc_hs_congested(const struct sock *sk)
 	if (workqueue_congested(WORK_CPU_UNBOUND, smc_hs_wq))
 		return true;
 
+	if (!smc_sock_should_select_smc(smc))
+		return true;
+
 	return false;
 }
 
@@ -320,6 +366,11 @@  static int smc_release(struct socket *sock)
 	sock_hold(sk); /* sock_put below */
 	smc = smc_sk(sk);
 
+#ifdef CONFIG_SMC_BPF
+	/* trigger info gathering if needed. */
+	smc_sock_perform_collecting_info(smc, SMC_SOCK_CLOSED_TIMING);
+#endif
+
 	old_state = sk->sk_state;
 
 	/* cleanup for a dangling non-blocking connect */
@@ -356,6 +407,10 @@  static int smc_release(struct socket *sock)
 
 static void smc_destruct(struct sock *sk)
 {
+#ifdef CONFIG_SMC_BPF
+	/* cleanup negotiator_ops if set */
+	smc_sock_cleanup_negotiator_ops(smc_sk(sk), /* in release */ 1);
+#endif
 	if (sk->sk_state != SMC_CLOSED)
 		return;
 	if (!sock_flag(sk, SOCK_DEAD))
@@ -1627,7 +1682,14 @@  static int smc_connect(struct socket *sock, struct sockaddr *addr,
 	}
 
 	smc_copy_sock_settings_to_clc(smc);
-	tcp_sk(smc->clcsock->sk)->syn_smc = 1;
+	/* accept our connection as SMC connection */
+	if (smc_sock_should_select_smc(smc) == SK_PASS) {
+		tcp_sk(smc->clcsock->sk)->syn_smc = 1;
+	} else {
+		tcp_sk(smc->clcsock->sk)->syn_smc = 0;
+		smc_switch_to_fallback(smc, /* active fallback */ 0);
+	}
+
 	if (smc->connect_nonblock) {
 		rc = -EALREADY;
 		goto out;
@@ -1679,6 +1741,10 @@  static int smc_clcsock_accept(struct smc_sock *lsmc, struct smc_sock **new_smc)
 	}
 	*new_smc = smc_sk(new_sk);
 
+#ifdef CONFIG_SMC_BPF
+	smc_sock_clone_negotiator_ops(lsk, new_sk);
+#endif
+
 	mutex_lock(&lsmc->clcsock_release_lock);
 	if (lsmc->clcsock)
 		rc = kernel_accept(lsmc->clcsock, &new_clcsock, SOCK_NONBLOCK);
diff --git a/net/smc/bpf_smc.c b/net/smc/bpf_smc.c
new file mode 100644
index 0000000..0c0ec05
--- /dev/null
+++ b/net/smc/bpf_smc.c
@@ -0,0 +1,201 @@ 
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *  Support eBPF for Shared Memory Communications over RDMA (SMC-R) and RoCE
+ *
+ *  Copyright IBM Corp. 2016, 2018
+ *
+ *  Author(s):  D. Wythe <alibuda@linux.alibaba.com>
+ */
+
+#include <linux/kernel.h>
+#include <linux/bpf.h>
+#include <linux/smc.h>
+#include <net/sock.h>
+#include "smc.h"
+
+static DEFINE_SPINLOCK(smc_sock_negotiator_list_lock);
+static LIST_HEAD(smc_sock_negotiator_list);
+
+/* required smc_sock_negotiator_list_lock locked */
+static struct smc_sock_negotiator_ops *smc_negotiator_ops_get_by_key(u32 key)
+{
+	struct smc_sock_negotiator_ops *ops;
+
+	list_for_each_entry_rcu(ops, &smc_sock_negotiator_list, list) {
+		if (ops->key == key)
+			return ops;
+	}
+
+	return NULL;
+}
+
+/* required smc_sock_negotiator_list_lock locked */
+static struct smc_sock_negotiator_ops *
+smc_negotiator_ops_get_by_name(const char *name)
+{
+	struct smc_sock_negotiator_ops *ops;
+
+	list_for_each_entry_rcu(ops, &smc_sock_negotiator_list, list) {
+		if (strcmp(ops->name, name) == 0)
+			return ops;
+	}
+
+	return NULL;
+}
+
+static int smc_sock_validate_negotiator_ops(struct smc_sock_negotiator_ops *ops)
+{
+	/* not required yet */
+	return 0;
+}
+
+/* register ops */
+int smc_sock_register_negotiator_ops(struct smc_sock_negotiator_ops *ops)
+{
+	int ret;
+
+	ret = smc_sock_validate_negotiator_ops(ops);
+	if (ret)
+		return ret;
+
+	/* calc key by name hash */
+	ops->key = jhash(ops->name, sizeof(ops->name), strlen(ops->name));
+
+	spin_lock(&smc_sock_negotiator_list_lock);
+	if (smc_negotiator_ops_get_by_key(ops->key)) {
+		pr_notice("smc: %s negotiator already registered\n", ops->name);
+		ret = -EEXIST;
+	} else {
+		list_add_tail_rcu(&ops->list, &smc_sock_negotiator_list);
+	}
+	spin_unlock(&smc_sock_negotiator_list_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(smc_sock_register_negotiator_ops);
+
+/* unregister ops */
+void smc_sock_unregister_negotiator_ops(struct smc_sock_negotiator_ops *ops)
+{
+	spin_lock(&smc_sock_negotiator_list_lock);
+	list_del_rcu(&ops->list);
+	spin_unlock(&smc_sock_negotiator_list_lock);
+
+	/* Wait for outstanding readers to complete before the
+	 * ops gets removed entirely.
+	 */
+	synchronize_rcu();
+}
+EXPORT_SYMBOL_GPL(smc_sock_unregister_negotiator_ops);
+
+int smc_sock_update_negotiator_ops(struct smc_sock_negotiator_ops *ops,
+				   struct smc_sock_negotiator_ops *old_ops)
+{
+	struct smc_sock_negotiator_ops *existing;
+	int ret;
+
+	ret = smc_sock_validate_negotiator_ops(ops);
+	if (ret)
+		return ret;
+
+	ops->key = jhash(ops->name, sizeof(ops->name), strlen(ops->name));
+	if (unlikely(!ops->key))
+		return -EINVAL;
+
+	spin_lock(&smc_sock_negotiator_list_lock);
+	existing = smc_negotiator_ops_get_by_key(old_ops->key);
+	if (!existing || strcmp(existing->name, ops->name)) {
+		ret = -EINVAL;
+	} else if (existing != old_ops) {
+		pr_notice("invalid old negotiator to replace\n");
+		ret = -EINVAL;
+	} else {
+		list_add_tail_rcu(&ops->list, &smc_sock_negotiator_list);
+		list_del_rcu(&existing->list);
+	}
+
+	spin_unlock(&smc_sock_negotiator_list_lock);
+	if (ret)
+		return ret;
+
+	synchronize_rcu();
+	return 0;
+}
+EXPORT_SYMBOL_GPL(smc_sock_update_negotiator_ops);
+
+/* assign ops to sock */
+int smc_sock_assign_negotiator_ops(struct smc_sock *smc, const char *name)
+{
+	struct smc_sock_negotiator_ops *ops;
+	int ret = -EINVAL;
+
+	/* already set */
+	if (READ_ONCE(smc->negotiator_ops))
+		smc_sock_cleanup_negotiator_ops(smc, /* in release */ 0);
+
+	/* Just for clear negotiator_ops */
+	if (!name || !strlen(name))
+		return 0;
+
+	rcu_read_lock();
+	ops = smc_negotiator_ops_get_by_name(name);
+	if (likely(ops)) {
+		if (unlikely(!bpf_try_module_get(ops, ops->owner))) {
+			ret = -EACCES;
+		} else {
+			WRITE_ONCE(smc->negotiator_ops, ops);
+			/* make sure ops can be seen */
+			smp_wmb();
+			if (ops->init)
+				ops->init(&smc->sk);
+			ret = 0;
+		}
+	}
+	rcu_read_unlock();
+	return ret;
+}
+EXPORT_SYMBOL_GPL(smc_sock_assign_negotiator_ops);
+
+/* reset ops to sock */
+void smc_sock_cleanup_negotiator_ops(struct smc_sock *smc, int in_release)
+{
+	const struct smc_sock_negotiator_ops *ops;
+
+	ops = READ_ONCE(smc->negotiator_ops);
+
+	/* not all smc socks have negotiator_ops */
+	if (!ops)
+		return;
+
+	might_sleep();
+
+	/* Just ensure data integrity */
+	WRITE_ONCE(smc->negotiator_ops, NULL);
+	/* make sure NULL can be seen */
+	smp_wmb();
+	/* If the cleanup was not caused by the release of the sock,
+	 * it means that we may need to wait for the readers of ops
+	 * to complete.
+	 */
+	if (unlikely(!in_release))
+		synchronize_rcu();
+	if (ops->release)
+		ops->release(&smc->sk);
+	bpf_module_put(ops, ops->owner);
+}
+EXPORT_SYMBOL_GPL(smc_sock_cleanup_negotiator_ops);
+
+void smc_sock_clone_negotiator_ops(struct sock *parent, struct sock *child)
+{
+	const struct smc_sock_negotiator_ops *ops;
+
+	rcu_read_lock();
+	ops = READ_ONCE(smc_sk(parent)->negotiator_ops);
+	if (ops && bpf_try_module_get(ops, ops->owner)) {
+		smc_sk(child)->negotiator_ops = ops;
+		if (ops->init)
+			ops->init(child);
+	}
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL_GPL(smc_sock_clone_negotiator_ops);
+