diff mbox series

[bpf-next,v1,2/5] net/smc: allow smc to negotiate protocols on policies

Message ID 1683872684-64872-3-git-send-email-alibuda@linux.alibaba.com (mailing list archive)
State Changes Requested
Delegated to: BPF
Headers show
Series net/smc: Introduce BPF injection capability | expand

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit fail Errors and warnings before: 10 this patch: 13
netdev/cc_maintainers success CCed 10 of 10 maintainers
netdev/build_clang fail Errors and warnings before: 8 this patch: 13
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn fail Errors and warnings before: 10 this patch: 13
netdev/checkpatch warning CHECK: Blank lines aren't necessary after an open brace '{' WARNING: added, moved or deleted file(s), does MAINTAINERS need updating? WARNING: line length of 81 exceeds 80 columns WARNING: line length of 84 exceeds 80 columns WARNING: line length of 85 exceeds 80 columns WARNING: line length of 86 exceeds 80 columns WARNING: line length of 87 exceeds 80 columns WARNING: line length of 88 exceeds 80 columns WARNING: line length of 89 exceeds 80 columns WARNING: line length of 90 exceeds 80 columns WARNING: line length of 92 exceeds 80 columns WARNING: line length of 93 exceeds 80 columns WARNING: line length of 94 exceeds 80 columns
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline fail Was 0 now: 1
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ${{ matrix.test }} on ${{ matrix.arch }} with ${{ matrix.toolchain_full }}
bpf/vmtest-bpf-next-VM_Test-2 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-3 fail Logs for build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-4 fail Logs for build for aarch64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-5 fail Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-6 fail Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-7 fail Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-8 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-9 success Logs for veristat

Commit Message

D. Wythe May 12, 2023, 6:24 a.m. UTC
From: "D. Wythe" <alibuda@linux.alibaba.com>

As we all know, the SMC protocol is not suitable for all scenarios,
especially for short-lived connections. However, most applications
cannot guarantee that such scenarios never occur. Therefore, apps
may need some specific strategy to decide whether to use SMC
or not.

Just like the congestion control implementation in TCP, this patch
provides a generic negotiator implementation. If necessary,
we can provide different protocol negotiation strategies for
apps based on this implementation.

But most importantly, this patch provides the possibility of
eBPF injection, allowing users to implement their own protocol
negotiation policy in userspace.

Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
---
 include/net/smc.h        |  32 +++++++++++
 net/Makefile             |   1 +
 net/smc/Kconfig          |  11 ++++
 net/smc/af_smc.c         | 134 ++++++++++++++++++++++++++++++++++++++++++++++-
 net/smc/smc_negotiator.c | 119 +++++++++++++++++++++++++++++++++++++++++
 net/smc/smc_negotiator.h | 116 ++++++++++++++++++++++++++++++++++++++++
 6 files changed, 412 insertions(+), 1 deletion(-)
 create mode 100644 net/smc/smc_negotiator.c
 create mode 100644 net/smc/smc_negotiator.h
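
As a concrete illustration of the eBPF injection described in the commit
message, a minimal sketch of a BPF-side negotiator might look like the
following. This is an assumption-laden sketch: it presumes a later patch in
this series wires struct smc_sock_negotiator_ops into bpf_struct_ops, and the
program and map names are made up; only the ops layout (name, negotiate) and
the SK_PASS convention come from this patch.

// SPDX-License-Identifier: GPL-2.0
// Hypothetical BPF-side sketch, not part of this patch.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* Return SK_PASS to keep negotiating SMC, anything else to
 * fall back to plain TCP for this connection.
 */
SEC("struct_ops/negotiate")
int BPF_PROG(sample_negotiate, struct sock *sk)
{
	return SK_PASS;
}

SEC(".struct_ops")
struct smc_sock_negotiator_ops sample_negotiator = {
	.name		= "sample",
	.negotiate	= (void *)sample_negotiate,
};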

Comments

kernel test robot May 12, 2023, 1:13 p.m. UTC | #1
Hi Wythe,

kernel test robot noticed the following build errors:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/D-Wythe/net-smc-move-smc_sock-related-structure-definition/20230512-142700
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/1683872684-64872-3-git-send-email-alibuda%40linux.alibaba.com
patch subject: [PATCH bpf-next v1 2/5] net/smc: allow smc to negotiate protocols on policies
config: mips-allmodconfig (https://download.01.org/0day-ci/archive/20230512/202305122104.msaKEOV1-lkp@intel.com/config)
compiler: mips-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/db8daea84b78121c3612ad5e5ba1d1eaac2f4171
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review D-Wythe/net-smc-move-smc_sock-related-structure-definition/20230512-142700
        git checkout db8daea84b78121c3612ad5e5ba1d1eaac2f4171
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=mips olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=mips SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202305122104.msaKEOV1-lkp@intel.com/

All errors (new ones prefixed by >>, old ones prefixed by <<):

>> ERROR: modpost: "bpf_struct_ops_get" [net/smc/smc.ko] undefined!
>> ERROR: modpost: "bpf_struct_ops_put" [net/smc/smc.ko] undefined!
Martin KaFai Lau May 15, 2023, 10:52 p.m. UTC | #2
On 5/11/23 11:24 PM, D. Wythe wrote:
> From: "D. Wythe" <alibuda@linux.alibaba.com>
> 
> As we all know, the SMC protocol is not suitable for all scenarios,
> especially for short-lived. However, for most applications, they cannot
> guarantee that there are no such scenarios at all. Therefore, apps
> may need some specific strategies to decide shall we need to use SMC
> or not.
> 
> Just like the congestion control implementation in TCP, this patch
> provides a generic negotiator implementation. If necessary,
> we can provide different protocol negotiation strategies for
> apps based on this implementation.
> 
> But most importantly, this patch provides the possibility of
> eBPF injection, allowing users to implement their own protocol
> negotiation policy in userspace.
> 
> Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
> ---
>   include/net/smc.h        |  32 +++++++++++
>   net/Makefile             |   1 +
>   net/smc/Kconfig          |  11 ++++
>   net/smc/af_smc.c         | 134 ++++++++++++++++++++++++++++++++++++++++++++++-
>   net/smc/smc_negotiator.c | 119 +++++++++++++++++++++++++++++++++++++++++
>   net/smc/smc_negotiator.h | 116 ++++++++++++++++++++++++++++++++++++++++
>   6 files changed, 412 insertions(+), 1 deletion(-)
>   create mode 100644 net/smc/smc_negotiator.c
>   create mode 100644 net/smc/smc_negotiator.h
> 
> diff --git a/include/net/smc.h b/include/net/smc.h
> index 6d076f5..191061c 100644
> --- a/include/net/smc.h
> +++ b/include/net/smc.h
> @@ -296,6 +296,8 @@ struct smc_sock {				/* smc sock container */
>   	atomic_t                queued_smc_hs;  /* queued smc handshakes */
>   	struct inet_connection_sock_af_ops		af_ops;
>   	const struct inet_connection_sock_af_ops	*ori_af_ops;
> +	/* protocol negotiator ops */
> +	const struct smc_sock_negotiator_ops *negotiator_ops;
>   						/* original af ops */
>   	int			sockopt_defer_accept;
>   						/* sockopt TCP_DEFER_ACCEPT
> @@ -316,4 +318,34 @@ struct smc_sock {				/* smc sock container */
>   						 */
>   };
>   
> +#ifdef CONFIG_SMC_BPF
> +/* BPF struct ops for smc protocol negotiator */
> +struct smc_sock_negotiator_ops {
> +
> +	struct list_head	list;
> +
> +	/* ops name */
> +	char		name[16];
> +	/* key for name */
> +	u32			key;
> +
> +	/* init with sk */
> +	void (*init)(struct sock *sk);
> +
> +	/* release with sk */
> +	void (*release)(struct sock *sk);
> +
> +	/* advice for negotiate */
> +	int (*negotiate)(struct sock *sk);
> +
> +	/* info gathering timing */
> +	void (*collect_info)(struct sock *sk, int timing);
> +
> +	/* module owner */
> +	struct module *owner;
> +};
> +#else
> +struct smc_sock_negotiator_ops {};
> +#endif
> +
>   #endif	/* _SMC_H */
> diff --git a/net/Makefile b/net/Makefile
> index 4c4dc53..222916a 100644
> --- a/net/Makefile
> +++ b/net/Makefile
> @@ -52,6 +52,7 @@ obj-$(CONFIG_TIPC)		+= tipc/
>   obj-$(CONFIG_NETLABEL)		+= netlabel/
>   obj-$(CONFIG_IUCV)		+= iucv/
>   obj-$(CONFIG_SMC)		+= smc/
> +obj-$(CONFIG_SMC_BPF)		+= smc/smc_negotiator.o
>   obj-$(CONFIG_RFKILL)		+= rfkill/
>   obj-$(CONFIG_NET_9P)		+= 9p/
>   obj-$(CONFIG_CAIF)		+= caif/
> diff --git a/net/smc/Kconfig b/net/smc/Kconfig
> index 1ab3c5a..bdcc9f1 100644
> --- a/net/smc/Kconfig
> +++ b/net/smc/Kconfig
> @@ -19,3 +19,14 @@ config SMC_DIAG
>   	  smcss.
>   
>   	  if unsure, say Y.
> +
> +config SMC_BPF
> +	bool "SMC: support eBPF" if SMC


so smc_negotiator will always be in the kernel image even if af_smc is compiled
as a module? If SMC_BPF needs to support af_smc as a module, proper
implementation needs to be added to bpf_struct_ops to support modules first. It
is work-in-progress.

> +	depends on BPF_SYSCALL
> +	default n
> +	help
> +	  Supports eBPF to allows user mode participation in SMC's protocol process
> +	  via ebpf programs. Alternatively, obtain information about the SMC socks
> +	  through the ebpf program.
> +
> +	  If unsure, say N.
> diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
> index 50c38b6..7406fd4 100644
> --- a/net/smc/af_smc.c
> +++ b/net/smc/af_smc.c
> @@ -52,6 +52,7 @@
>   #include "smc_close.h"
>   #include "smc_stats.h"
>   #include "smc_tracepoint.h"
> +#include "smc_negotiator.h"
>   #include "smc_sysctl.h"
>   
>   static DEFINE_MUTEX(smc_server_lgr_pending);	/* serialize link group
> @@ -68,6 +69,119 @@
>   static void smc_tcp_listen_work(struct work_struct *);
>   static void smc_connect_work(struct work_struct *);
>   
> +#ifdef CONFIG_SMC_BPF
> +
> +/* Check if sock should use smc */
> +int smc_sock_should_select_smc(const struct smc_sock *smc)
> +{
> +	const struct smc_sock_negotiator_ops *ops;
> +	int ret;
> +
> +	rcu_read_lock();
> +	ops = READ_ONCE(smc->negotiator_ops);
> +
> +	/* No negotiator_ops supply or no negotiate func set,
> +	 * always pass it.
> +	 */
> +	if (!ops || !ops->negotiate) {

A smc_sock_negotiator_ops without ->negotiate? Is it useful at all to allow the 
register in the first place?

> +		rcu_read_unlock();
> +		return SK_PASS;
> +	}
> +
> +	ret = ops->negotiate((struct sock *)&smc->sk);
> +	rcu_read_unlock();
> +	return ret;
> +}
> +
> +void smc_sock_perform_collecting_info(const struct smc_sock *smc, int timing)
> +{
> +	const struct smc_sock_negotiator_ops *ops;
> +
> +	rcu_read_lock();
> +	ops = READ_ONCE(smc->negotiator_ops);
> +
> +	if (!ops || !ops->collect_info) {
> +		rcu_read_unlock();
> +		return;
> +	}
> +
> +	ops->collect_info((struct sock *)&smc->sk, timing);
> +	rcu_read_unlock();
> +}
> +
> +int smc_sock_assign_negotiator_ops(struct smc_sock *smc, const char *name)
> +{
> +	struct smc_sock_negotiator_ops *ops;
> +	int ret = -EINVAL;
> +
> +	/* already set */
> +	if (READ_ONCE(smc->negotiator_ops))
> +		smc_sock_cleanup_negotiator_ops(smc, /* might be still referenced */ false);
> +
> +	/* Just for clear negotiator_ops */
> +	if (!name || !strlen(name))
> +		return 0;
> +
> +	rcu_read_lock();
> +	ops = smc_negotiator_ops_get_by_name(name);
> +	if (likely(ops)) {
> +		if (unlikely(!bpf_try_module_get(ops, ops->owner))) {
> +			ret = -EACCES;
> +		} else {
> +			WRITE_ONCE(smc->negotiator_ops, ops);
> +			/* make sure ops can be seen */
> +			smp_wmb();

This rcu_read_lock(), WRITE_ONCE, and smp_wmb() combo looks very suspicious. 
smc->negotiator_ops is protected by rcu (+refcnt) or lock_sock()?

I am going to stop reviewing here.

> +			if (ops->init)
> +				ops->init(&smc->sk);
> +			ret = 0;
> +		}
> +	}
> +	rcu_read_unlock();
> +	return ret;
> +}
> +
> +void smc_sock_cleanup_negotiator_ops(struct smc_sock *smc, bool no_more)
> +{
> +	const struct smc_sock_negotiator_ops *ops;
> +
> +	ops = READ_ONCE(smc->negotiator_ops);
> +
> +	/* not all smc sock has negotiator_ops */
> +	if (!ops)
> +		return;
> +
> +	might_sleep();
> +
> +	/* Just ensure data integrity */
> +	WRITE_ONCE(smc->negotiator_ops, NULL);
> +	/* make sure NULL can be seen */
> +	smp_wmb();
> +	/* if the socks may have references to the negotiator ops to be removed.
> +	 * it means that we might need to wait for the readers of ops
> +	 * to complete. It's slow though.
> +	 */
> +	if (unlikely(!no_more))
> +		synchronize_rcu();
> +	if (ops->release)
> +		ops->release(&smc->sk);
> +	bpf_module_put(ops, ops->owner);
> +}
> +
> +void smc_sock_clone_negotiator_ops(struct sock *parent, struct sock *child)
> +{
> +	const struct smc_sock_negotiator_ops *ops;
> +
> +	rcu_read_lock();
> +	ops = READ_ONCE(smc_sk(parent)->negotiator_ops);
> +	if (ops && bpf_try_module_get(ops, ops->owner)) {
> +		smc_sk(child)->negotiator_ops = ops;
> +		if (ops->init)
> +			ops->init(child);
> +	}
> +	rcu_read_unlock();
> +}
> +#endif
> +
>   int smc_nl_dump_hs_limitation(struct sk_buff *skb, struct netlink_callback *cb)
>   {
>   	struct smc_nl_dmp_ctx *cb_ctx = smc_nl_dmp_ctx(cb);
> @@ -166,6 +280,9 @@ static bool smc_hs_congested(const struct sock *sk)
>   	if (workqueue_congested(WORK_CPU_UNBOUND, smc_hs_wq))
>   		return true;
>   
> +	if (!smc_sock_should_select_smc(smc))
> +		return true;
> +
>   	return false;
>   }
>   
> @@ -320,6 +437,9 @@ static int smc_release(struct socket *sock)
>   	sock_hold(sk); /* sock_put below */
>   	smc = smc_sk(sk);
>   
> +	/* trigger info gathering if needed.*/
> +	smc_sock_perform_collecting_info(smc, SMC_SOCK_CLOSED_TIMING);
> +
>   	old_state = sk->sk_state;
>   
>   	/* cleanup for a dangling non-blocking connect */
> @@ -356,6 +476,9 @@ static int smc_release(struct socket *sock)
>   
>   static void smc_destruct(struct sock *sk)
>   {
> +	/* cleanup negotiator_ops if set */
> +	smc_sock_cleanup_negotiator_ops(smc_sk(sk), /* no longer used */ true);
> +
>   	if (sk->sk_state != SMC_CLOSED)
>   		return;
>   	if (!sock_flag(sk, SOCK_DEAD))
> @@ -1627,7 +1750,14 @@ static int smc_connect(struct socket *sock, struct sockaddr *addr,
>   	}
>   
>   	smc_copy_sock_settings_to_clc(smc);
> -	tcp_sk(smc->clcsock->sk)->syn_smc = 1;
> +	/* accept out connection as SMC connection */
> +	if (smc_sock_should_select_smc(smc) == SK_PASS) {
> +		tcp_sk(smc->clcsock->sk)->syn_smc = 1;
> +	} else {
> +		tcp_sk(smc->clcsock->sk)->syn_smc = 0;
> +		smc_switch_to_fallback(smc, /* active fallback */ 0);
> +	}
> +
>   	if (smc->connect_nonblock) {
>   		rc = -EALREADY;
>   		goto out;
> @@ -1679,6 +1809,8 @@ static int smc_clcsock_accept(struct smc_sock *lsmc, struct smc_sock **new_smc)
>   	}
>   	*new_smc = smc_sk(new_sk);
>   
> +	smc_sock_clone_negotiator_ops(lsk, new_sk);
> +
>   	mutex_lock(&lsmc->clcsock_release_lock);
>   	if (lsmc->clcsock)
>   		rc = kernel_accept(lsmc->clcsock, &new_clcsock, SOCK_NONBLOCK);
D. Wythe May 17, 2023, 7:08 a.m. UTC | #3
On 5/16/23 6:52 AM, Martin KaFai Lau wrote:
> On 5/11/23 11:24 PM, D. Wythe wrote:
>> From: "D. Wythe" <alibuda@linux.alibaba.com>
>>
>> As we all know, the SMC protocol is not suitable for all scenarios,
>> especially for short-lived. However, for most applications, they cannot
>> guarantee that there are no such scenarios at all. Therefore, apps
>> may need some specific strategies to decide shall we need to use SMC
>> or not.
>>
>> Just like the congestion control implementation in TCP, this patch
>> provides a generic negotiator implementation. If necessary,
>> we can provide different protocol negotiation strategies for
>> apps based on this implementation.
>>
>> But most importantly, this patch provides the possibility of
>> eBPF injection, allowing users to implement their own protocol
>> negotiation policy in userspace.
>>
>> Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
>> ---
>>   include/net/smc.h        |  32 +++++++++++
>>   net/Makefile             |   1 +
>>   net/smc/Kconfig          |  11 ++++
>>   net/smc/af_smc.c         | 134 
>> ++++++++++++++++++++++++++++++++++++++++++++++-
>>   net/smc/smc_negotiator.c | 119 
>> +++++++++++++++++++++++++++++++++++++++++
>>   net/smc/smc_negotiator.h | 116 
>> ++++++++++++++++++++++++++++++++++++++++
>>   6 files changed, 412 insertions(+), 1 deletion(-)
>>   create mode 100644 net/smc/smc_negotiator.c
>>   create mode 100644 net/smc/smc_negotiator.h
>>
>> diff --git a/include/net/smc.h b/include/net/smc.h
>> index 6d076f5..191061c 100644
>> --- a/include/net/smc.h
>> +++ b/include/net/smc.h
>> @@ -296,6 +296,8 @@ struct smc_sock {                /* smc sock 
>> container */
>>       atomic_t                queued_smc_hs;  /* queued smc 
>> handshakes */
>>       struct inet_connection_sock_af_ops        af_ops;
>>       const struct inet_connection_sock_af_ops    *ori_af_ops;
>> +    /* protocol negotiator ops */
>> +    const struct smc_sock_negotiator_ops *negotiator_ops;
>>                           /* original af ops */
>>       int            sockopt_defer_accept;
>>                           /* sockopt TCP_DEFER_ACCEPT
>> @@ -316,4 +318,34 @@ struct smc_sock {                /* smc sock 
>> container */
>>                            */
>>   };
>>   +#ifdef CONFIG_SMC_BPF
>> +/* BPF struct ops for smc protocol negotiator */
>> +struct smc_sock_negotiator_ops {
>> +
>> +    struct list_head    list;
>> +
>> +    /* ops name */
>> +    char        name[16];
>> +    /* key for name */
>> +    u32            key;
>> +
>> +    /* init with sk */
>> +    void (*init)(struct sock *sk);
>> +
>> +    /* release with sk */
>> +    void (*release)(struct sock *sk);
>> +
>> +    /* advice for negotiate */
>> +    int (*negotiate)(struct sock *sk);
>> +
>> +    /* info gathering timing */
>> +    void (*collect_info)(struct sock *sk, int timing);
>> +
>> +    /* module owner */
>> +    struct module *owner;
>> +};
>> +#else
>> +struct smc_sock_negotiator_ops {};
>> +#endif
>> +
>>   #endif    /* _SMC_H */
>> diff --git a/net/Makefile b/net/Makefile
>> index 4c4dc53..222916a 100644
>> --- a/net/Makefile
>> +++ b/net/Makefile
>> @@ -52,6 +52,7 @@ obj-$(CONFIG_TIPC)        += tipc/
>>   obj-$(CONFIG_NETLABEL)        += netlabel/
>>   obj-$(CONFIG_IUCV)        += iucv/
>>   obj-$(CONFIG_SMC)        += smc/
>> +obj-$(CONFIG_SMC_BPF)        += smc/smc_negotiator.o
>> obj-$(CONFIG_RFKILL)        += rfkill/
>>   obj-$(CONFIG_NET_9P)        += 9p/
>>   obj-$(CONFIG_CAIF)        += caif/
>> diff --git a/net/smc/Kconfig b/net/smc/Kconfig
>> index 1ab3c5a..bdcc9f1 100644
>> --- a/net/smc/Kconfig
>> +++ b/net/smc/Kconfig
>> @@ -19,3 +19,14 @@ config SMC_DIAG
>>         smcss.
>>           if unsure, say Y.
>> +
>> +config SMC_BPF
>> +    bool "SMC: support eBPF" if SMC
>
>
> so smc_negotiator will always be in the kernel image even af_smc is 
> compiled as a module? If the SMC_BPF needs to support af_smc as a 
> module, proper implementation needs to be added to bpf_struct_ops to 
> support module first. It is work-in-progress.
>

smc_negotiator will not be in the kernel image when af_smc is compiled
as a module unless config SMC_BPF is also set to Y, and it defaults
to N. That is, even if af_smc is compiled as a module but SMC_BPF is
not set, smc_negotiator doesn't exist anywhere.

>> +    depends on BPF_SYSCALL
>> +    default n
>> +    help
>> +      Supports eBPF to allows user mode participation in SMC's 
>> protocol process
>> +      via ebpf programs. Alternatively, obtain information about the 
>> SMC socks
>> +      through the ebpf program.
>> +
>> +      If unsure, say N.
>> diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
>> index 50c38b6..7406fd4 100644
>> --- a/net/smc/af_smc.c
>> +++ b/net/smc/af_smc.c
>> @@ -52,6 +52,7 @@
>>   #include "smc_close.h"
>>   #include "smc_stats.h"
>>   #include "smc_tracepoint.h"
>> +#include "smc_negotiator.h"
>>   #include "smc_sysctl.h"
>>     static DEFINE_MUTEX(smc_server_lgr_pending);    /* serialize link 
>> group
>> @@ -68,6 +69,119 @@
>>   static void smc_tcp_listen_work(struct work_struct *);
>>   static void smc_connect_work(struct work_struct *);
>>   +#ifdef CONFIG_SMC_BPF
>> +
>> +/* Check if sock should use smc */
>> +int smc_sock_should_select_smc(const struct smc_sock *smc)
>> +{
>> +    const struct smc_sock_negotiator_ops *ops;
>> +    int ret;
>> +
>> +    rcu_read_lock();
>> +    ops = READ_ONCE(smc->negotiator_ops);
>> +
>> +    /* No negotiator_ops supply or no negotiate func set,
>> +     * always pass it.
>> +     */
>> +    if (!ops || !ops->negotiate) {
>
> A smc_sock_negotiator_ops without ->negotiate? Is it useful at all to 
> allow the register in the first place?
>

You are right, this can be avoided before registration. I'll fix it.
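
Just as an illustration of that registration-time check, a minimal sketch
(assumed, not in this patch yet) could live in the validation hook the patch
already adds:

static int smc_sock_validate_negotiator_ops(struct smc_sock_negotiator_ops *ops)
{
	/* Assumed check: refuse ops that cannot make a decision at all,
	 * so readers never need to test ops->negotiate for NULL.
	 */
	if (!ops->negotiate)
		return -EINVAL;
	return 0;
}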

>> +        rcu_read_unlock();
>> +        return SK_PASS;
>> +    }
>> +
>> +    ret = ops->negotiate((struct sock *)&smc->sk);
>> +    rcu_read_unlock();
>> +    return ret;
>> +}
>> +
>> +void smc_sock_perform_collecting_info(const struct smc_sock *smc, 
>> int timing)
>> +{
>> +    const struct smc_sock_negotiator_ops *ops;
>> +
>> +    rcu_read_lock();
>> +    ops = READ_ONCE(smc->negotiator_ops);
>> +
>> +    if (!ops || !ops->collect_info) {
>> +        rcu_read_unlock();
>> +        return;
>> +    }
>> +
>> +    ops->collect_info((struct sock *)&smc->sk, timing);
>> +    rcu_read_unlock();
>> +}
>> +
>> +int smc_sock_assign_negotiator_ops(struct smc_sock *smc, const char 
>> *name)
>> +{
>> +    struct smc_sock_negotiator_ops *ops;
>> +    int ret = -EINVAL;
>> +
>> +    /* already set */
>> +    if (READ_ONCE(smc->negotiator_ops))
>> +        smc_sock_cleanup_negotiator_ops(smc, /* might be still 
>> referenced */ false);
>> +
>> +    /* Just for clear negotiator_ops */
>> +    if (!name || !strlen(name))
>> +        return 0;
>> +
>> +    rcu_read_lock();
>> +    ops = smc_negotiator_ops_get_by_name(name);
>> +    if (likely(ops)) {
>> +        if (unlikely(!bpf_try_module_get(ops, ops->owner))) {
>> +            ret = -EACCES;
>> +        } else {
>> +            WRITE_ONCE(smc->negotiator_ops, ops);
>> +            /* make sure ops can be seen */
>> +            smp_wmb();
>
> This rcu_read_lock(), WRITE_ONCE, and smp_wmb() combo looks very 
> suspicious. smc->negotiator_ops is protected by rcu (+refcnt) or 
> lock_sock()?
>

All access to the ops is protected by RCU, and there is no lock_sock.
WRITE_ONCE() and smp_wmb() do not participate in any guarantee of the
availability of the ops; the purpose of using them is just the wish that
the latest value can be read as soon as possible. In fact, even if an old
value is read, there will be no problem in logic, because all updates
do synchronize_rcu() and all access to the ops is under rcu_read_lock().

> I am going to stop reviewing here.
>

Hoping my explanation can answer your questions; still looking forward to
more of your feedback.
Martin KaFai Lau May 17, 2023, 8:14 a.m. UTC | #4
On 5/17/23 12:08 AM, D. Wythe wrote:
> 
> 
> On 5/16/23 6:52 AM, Martin KaFai Lau wrote:
>> On 5/11/23 11:24 PM, D. Wythe wrote:
>>> From: "D. Wythe" <alibuda@linux.alibaba.com>
>>>
>>> As we all know, the SMC protocol is not suitable for all scenarios,
>>> especially for short-lived. However, for most applications, they cannot
>>> guarantee that there are no such scenarios at all. Therefore, apps
>>> may need some specific strategies to decide shall we need to use SMC
>>> or not.
>>>
>>> Just like the congestion control implementation in TCP, this patch
>>> provides a generic negotiator implementation. If necessary,
>>> we can provide different protocol negotiation strategies for
>>> apps based on this implementation.
>>>
>>> But most importantly, this patch provides the possibility of
>>> eBPF injection, allowing users to implement their own protocol
>>> negotiation policy in userspace.
>>>
>>> Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
>>> ---
>>>   include/net/smc.h        |  32 +++++++++++
>>>   net/Makefile             |   1 +
>>>   net/smc/Kconfig          |  11 ++++
>>>   net/smc/af_smc.c         | 134 ++++++++++++++++++++++++++++++++++++++++++++++-
>>>   net/smc/smc_negotiator.c | 119 +++++++++++++++++++++++++++++++++++++++++
>>>   net/smc/smc_negotiator.h | 116 ++++++++++++++++++++++++++++++++++++++++
>>>   6 files changed, 412 insertions(+), 1 deletion(-)
>>>   create mode 100644 net/smc/smc_negotiator.c
>>>   create mode 100644 net/smc/smc_negotiator.h
>>>
>>> diff --git a/include/net/smc.h b/include/net/smc.h
>>> index 6d076f5..191061c 100644
>>> --- a/include/net/smc.h
>>> +++ b/include/net/smc.h
>>> @@ -296,6 +296,8 @@ struct smc_sock {                /* smc sock container */
>>>       atomic_t                queued_smc_hs;  /* queued smc handshakes */
>>>       struct inet_connection_sock_af_ops        af_ops;
>>>       const struct inet_connection_sock_af_ops    *ori_af_ops;
>>> +    /* protocol negotiator ops */
>>> +    const struct smc_sock_negotiator_ops *negotiator_ops;
>>>                           /* original af ops */
>>>       int            sockopt_defer_accept;
>>>                           /* sockopt TCP_DEFER_ACCEPT
>>> @@ -316,4 +318,34 @@ struct smc_sock {                /* smc sock container */
>>>                            */
>>>   };
>>>   +#ifdef CONFIG_SMC_BPF
>>> +/* BPF struct ops for smc protocol negotiator */
>>> +struct smc_sock_negotiator_ops {
>>> +
>>> +    struct list_head    list;
>>> +
>>> +    /* ops name */
>>> +    char        name[16];
>>> +    /* key for name */
>>> +    u32            key;
>>> +
>>> +    /* init with sk */
>>> +    void (*init)(struct sock *sk);
>>> +
>>> +    /* release with sk */
>>> +    void (*release)(struct sock *sk);
>>> +
>>> +    /* advice for negotiate */
>>> +    int (*negotiate)(struct sock *sk);
>>> +
>>> +    /* info gathering timing */
>>> +    void (*collect_info)(struct sock *sk, int timing);
>>> +
>>> +    /* module owner */
>>> +    struct module *owner;
>>> +};
>>> +#else
>>> +struct smc_sock_negotiator_ops {};
>>> +#endif
>>> +
>>>   #endif    /* _SMC_H */
>>> diff --git a/net/Makefile b/net/Makefile
>>> index 4c4dc53..222916a 100644
>>> --- a/net/Makefile
>>> +++ b/net/Makefile
>>> @@ -52,6 +52,7 @@ obj-$(CONFIG_TIPC)        += tipc/
>>>   obj-$(CONFIG_NETLABEL)        += netlabel/
>>>   obj-$(CONFIG_IUCV)        += iucv/
>>>   obj-$(CONFIG_SMC)        += smc/
>>> +obj-$(CONFIG_SMC_BPF)        += smc/smc_negotiator.o
>>> obj-$(CONFIG_RFKILL)        += rfkill/
>>>   obj-$(CONFIG_NET_9P)        += 9p/
>>>   obj-$(CONFIG_CAIF)        += caif/
>>> diff --git a/net/smc/Kconfig b/net/smc/Kconfig
>>> index 1ab3c5a..bdcc9f1 100644
>>> --- a/net/smc/Kconfig
>>> +++ b/net/smc/Kconfig
>>> @@ -19,3 +19,14 @@ config SMC_DIAG
>>>         smcss.
>>>           if unsure, say Y.
>>> +
>>> +config SMC_BPF
>>> +    bool "SMC: support eBPF" if SMC
>>
>>
>> so smc_negotiator will always be in the kernel image even af_smc is compiled 
>> as a module? If the SMC_BPF needs to support af_smc as a module, proper 
>> implementation needs to be added to bpf_struct_ops to support module first. It 
>> is work-in-progress.
>>
> 
> smc_negotiator will not no in the kernel image when af_smc is compiled as a module,
> it's requires config SMC_BPF also sets to be Y,  while it's default to be N. 
> That's is,
> even if af_smc is compiled as a module but with no SMC_BPF set, smc_negotiator
> doesn't exist anywhere.

CONFIG_SMC_BPF could be "y" while CONFIG_SMC is "m", no?

Anyway, there is a build error when CONFIG_SMC is "m" :(

> 
>>> +    depends on BPF_SYSCALL
>>> +    default n
>>> +    help
>>> +      Supports eBPF to allows user mode participation in SMC's protocol process
>>> +      via ebpf programs. Alternatively, obtain information about the SMC socks
>>> +      through the ebpf program.
>>> +
>>> +      If unsure, say N.
>>> diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
>>> index 50c38b6..7406fd4 100644
>>> --- a/net/smc/af_smc.c
>>> +++ b/net/smc/af_smc.c
>>> @@ -52,6 +52,7 @@
>>>   #include "smc_close.h"
>>>   #include "smc_stats.h"
>>>   #include "smc_tracepoint.h"
>>> +#include "smc_negotiator.h"
>>>   #include "smc_sysctl.h"
>>>     static DEFINE_MUTEX(smc_server_lgr_pending);    /* serialize link group
>>> @@ -68,6 +69,119 @@
>>>   static void smc_tcp_listen_work(struct work_struct *);
>>>   static void smc_connect_work(struct work_struct *);
>>>   +#ifdef CONFIG_SMC_BPF
>>> +
>>> +/* Check if sock should use smc */
>>> +int smc_sock_should_select_smc(const struct smc_sock *smc)
>>> +{
>>> +    const struct smc_sock_negotiator_ops *ops;
>>> +    int ret;
>>> +
>>> +    rcu_read_lock();
>>> +    ops = READ_ONCE(smc->negotiator_ops);
>>> +
>>> +    /* No negotiator_ops supply or no negotiate func set,
>>> +     * always pass it.
>>> +     */
>>> +    if (!ops || !ops->negotiate) {
>>
>> A smc_sock_negotiator_ops without ->negotiate? Is it useful at all to allow 
>> the register in the first place?
>>
> 
> You are right, this can be avoid before registration. I'll fix it.
> 
>>> +        rcu_read_unlock();
>>> +        return SK_PASS;
>>> +    }
>>> +
>>> +    ret = ops->negotiate((struct sock *)&smc->sk);
>>> +    rcu_read_unlock();
>>> +    return ret;
>>> +}
>>> +
>>> +void smc_sock_perform_collecting_info(const struct smc_sock *smc, int timing)
>>> +{
>>> +    const struct smc_sock_negotiator_ops *ops;
>>> +
>>> +    rcu_read_lock();
>>> +    ops = READ_ONCE(smc->negotiator_ops);
>>> +
>>> +    if (!ops || !ops->collect_info) {
>>> +        rcu_read_unlock();
>>> +        return;
>>> +    }
>>> +
>>> +    ops->collect_info((struct sock *)&smc->sk, timing);
>>> +    rcu_read_unlock();
>>> +}
>>> +
>>> +int smc_sock_assign_negotiator_ops(struct smc_sock *smc, const char *name)
>>> +{
>>> +    struct smc_sock_negotiator_ops *ops;
>>> +    int ret = -EINVAL;
>>> +
>>> +    /* already set */
>>> +    if (READ_ONCE(smc->negotiator_ops))
>>> +        smc_sock_cleanup_negotiator_ops(smc, /* might be still referenced */ 
>>> false);
>>> +
>>> +    /* Just for clear negotiator_ops */
>>> +    if (!name || !strlen(name))
>>> +        return 0;
>>> +
>>> +    rcu_read_lock();
>>> +    ops = smc_negotiator_ops_get_by_name(name);
>>> +    if (likely(ops)) {
>>> +        if (unlikely(!bpf_try_module_get(ops, ops->owner))) {
>>> +            ret = -EACCES;
>>> +        } else {
>>> +            WRITE_ONCE(smc->negotiator_ops, ops);
>>> +            /* make sure ops can be seen */
>>> +            smp_wmb();
>>
>> This rcu_read_lock(), WRITE_ONCE, and smp_wmb() combo looks very suspicious. 
>> smc->negotiator_ops is protected by rcu (+refcnt) or lock_sock()?
>>
> 
> All access to ops is protected by RCU, and there are no lock_sock. WRITE_ONCE() 
> and smp_wmb() do
> not participate in any guarantee of the availability of ops,  The purpose to 
> using them is just wish the latest values
> can be read as soon as possible , In fact, even if old value is read, there will 
> be no problem in logic because all updates
> will do synchronize_rcu() and all access to ops is under in rcu_read_lock().

The explanation is not encouraging. No clear benefit while having this kind of
complexity here. Switching tcp congestion ops also does not require this. Some
of the new code is in af_smc but bpf is the primary user. It is not something
that I would like to maintain and then need to reason about this unusual pattern
a year later. Besides, this negotiator_ops assignment must be done under a
lock_sock(). The same probably is true for calling ops->negotiate() where the
bpf prog may be looking at the sk and calling bpf_setsockopt.

> 
>> I am going to stop reviewing here.
>>
> 
> Hoping my explanation can answer your questions and still looking forward to
> your more feedback 
D. Wythe May 17, 2023, 9:16 a.m. UTC | #5
On 5/17/23 4:14 PM, Martin KaFai Lau wrote:
> On 5/17/23 12:08 AM, D. Wythe wrote:
>>
>>
>> On 5/16/23 6:52 AM, Martin KaFai Lau wrote:
>>> On 5/11/23 11:24 PM, D. Wythe wrote:
>>>> From: "D. Wythe" <alibuda@linux.alibaba.com>
>>>>
>>>> As we all know, the SMC protocol is not suitable for all scenarios,
>>>> especially for short-lived. However, for most applications, they 
>>>> cannot
>>>> guarantee that there are no such scenarios at all. Therefore, apps
>>>> may need some specific strategies to decide shall we need to use SMC
>>>> or not.
>>>>
>>>> Just like the congestion control implementation in TCP, this patch
>>>> provides a generic negotiator implementation. If necessary,
>>>> we can provide different protocol negotiation strategies for
>>>> apps based on this implementation.
>>>>
>>>> But most importantly, this patch provides the possibility of
>>>> eBPF injection, allowing users to implement their own protocol
>>>> negotiation policy in userspace.
>>>>
>>>> Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
>>>> ---
>>>>   include/net/smc.h        |  32 +++++++++++
>>>>   net/Makefile             |   1 +
>>>>   net/smc/Kconfig          |  11 ++++
>>>>   net/smc/af_smc.c         | 134 
>>>> ++++++++++++++++++++++++++++++++++++++++++++++-
>>>>   net/smc/smc_negotiator.c | 119 
>>>> +++++++++++++++++++++++++++++++++++++++++
>>>>   net/smc/smc_negotiator.h | 116 
>>>> ++++++++++++++++++++++++++++++++++++++++
>>>>   6 files changed, 412 insertions(+), 1 deletion(-)
>>>>   create mode 100644 net/smc/smc_negotiator.c
>>>>   create mode 100644 net/smc/smc_negotiator.h
>>>>
>>>> diff --git a/include/net/smc.h b/include/net/smc.h
>>>> index 6d076f5..191061c 100644
>>>> --- a/include/net/smc.h
>>>> +++ b/include/net/smc.h
>>>> @@ -296,6 +296,8 @@ struct smc_sock {                /* smc sock 
>>>> container */
>>>>       atomic_t                queued_smc_hs;  /* queued smc 
>>>> handshakes */
>>>>       struct inet_connection_sock_af_ops        af_ops;
>>>>       const struct inet_connection_sock_af_ops *ori_af_ops;
>>>> +    /* protocol negotiator ops */
>>>> +    const struct smc_sock_negotiator_ops *negotiator_ops;
>>>>                           /* original af ops */
>>>>       int            sockopt_defer_accept;
>>>>                           /* sockopt TCP_DEFER_ACCEPT
>>>> @@ -316,4 +318,34 @@ struct smc_sock {                /* smc sock 
>>>> container */
>>>>                            */
>>>>   };
>>>>   +#ifdef CONFIG_SMC_BPF
>>>> +/* BPF struct ops for smc protocol negotiator */
>>>> +struct smc_sock_negotiator_ops {
>>>> +
>>>> +    struct list_head    list;
>>>> +
>>>> +    /* ops name */
>>>> +    char        name[16];
>>>> +    /* key for name */
>>>> +    u32            key;
>>>> +
>>>> +    /* init with sk */
>>>> +    void (*init)(struct sock *sk);
>>>> +
>>>> +    /* release with sk */
>>>> +    void (*release)(struct sock *sk);
>>>> +
>>>> +    /* advice for negotiate */
>>>> +    int (*negotiate)(struct sock *sk);
>>>> +
>>>> +    /* info gathering timing */
>>>> +    void (*collect_info)(struct sock *sk, int timing);
>>>> +
>>>> +    /* module owner */
>>>> +    struct module *owner;
>>>> +};
>>>> +#else
>>>> +struct smc_sock_negotiator_ops {};
>>>> +#endif
>>>> +
>>>>   #endif    /* _SMC_H */
>>>> diff --git a/net/Makefile b/net/Makefile
>>>> index 4c4dc53..222916a 100644
>>>> --- a/net/Makefile
>>>> +++ b/net/Makefile
>>>> @@ -52,6 +52,7 @@ obj-$(CONFIG_TIPC)        += tipc/
>>>>   obj-$(CONFIG_NETLABEL)        += netlabel/
>>>>   obj-$(CONFIG_IUCV)        += iucv/
>>>>   obj-$(CONFIG_SMC)        += smc/
>>>> +obj-$(CONFIG_SMC_BPF)        += smc/smc_negotiator.o
>>>> obj-$(CONFIG_RFKILL)        += rfkill/
>>>>   obj-$(CONFIG_NET_9P)        += 9p/
>>>>   obj-$(CONFIG_CAIF)        += caif/
>>>> diff --git a/net/smc/Kconfig b/net/smc/Kconfig
>>>> index 1ab3c5a..bdcc9f1 100644
>>>> --- a/net/smc/Kconfig
>>>> +++ b/net/smc/Kconfig
>>>> @@ -19,3 +19,14 @@ config SMC_DIAG
>>>>         smcss.
>>>>           if unsure, say Y.
>>>> +
>>>> +config SMC_BPF
>>>> +    bool "SMC: support eBPF" if SMC
>>>
>>>
>>> so smc_negotiator will always be in the kernel image even af_smc is 
>>> compiled as a module? If the SMC_BPF needs to support af_smc as a 
>>> module, proper implementation needs to be added to bpf_struct_ops to 
>>> support module first. It is work-in-progress.
>>>
>>
>> smc_negotiator will not no in the kernel image when af_smc is 
>> compiled as a module,
>> it's requires config SMC_BPF also sets to be Y,  while it's default 
>> to be N. That's is,
>> even if af_smc is compiled as a module but with no SMC_BPF set, 
>> smc_negotiator
>> doesn't exist anywhere.
>
> CONFIG_SMC_BPF could be "y" while CONFIG_SMC is "m", no?
>
> Anyway, there is a build error when CONFIG_SMC is "m" :(
>

I am curious whether users who proactively set CONFIG_SMC_BPF to Y would
care about the issue you mentioned, given that CONFIG_SMC_BPF defaults
to N?

And I'm really sorry about this compilation error. Last time I got some
comments about symbol export, so I tried to remove some symbol exports;
unfortunately, there are compilation issues when BPF_JIT is set
(bpf_struct_ops_get is not exported). Sorry for my incomplete testing.
I will fix this issue in the new version.

Anyway, if bpf_struct_ops can support modules, that would be better and
would greatly reduce the trade-offs I make between module and built-in.
Are there any details you can share on your progress?

>>>> +    depends on BPF_SYSCALL
>>>> +    default n
>>>> +    help
>>>> +      Supports eBPF to allows user mode participation in SMC's 
>>>> protocol process
>>>> +      via ebpf programs. Alternatively, obtain information about 
>>>> the SMC socks
>>>> +      through the ebpf program.
>>>> +
>>>> +      If unsure, say N.
>>>> diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
>>>> index 50c38b6..7406fd4 100644
>>>> --- a/net/smc/af_smc.c
>>>> +++ b/net/smc/af_smc.c
>>>> @@ -52,6 +52,7 @@
>>>>   #include "smc_close.h"
>>>>   #include "smc_stats.h"
>>>>   #include "smc_tracepoint.h"
>>>> +#include "smc_negotiator.h"
>>>>   #include "smc_sysctl.h"
>>>>     static DEFINE_MUTEX(smc_server_lgr_pending);    /* serialize 
>>>> link group
>>>> @@ -68,6 +69,119 @@
>>>>   static void smc_tcp_listen_work(struct work_struct *);
>>>>   static void smc_connect_work(struct work_struct *);
>>>>   +#ifdef CONFIG_SMC_BPF
>>>> +
>>>> +/* Check if sock should use smc */
>>>> +int smc_sock_should_select_smc(const struct smc_sock *smc)
>>>> +{
>>>> +    const struct smc_sock_negotiator_ops *ops;
>>>> +    int ret;
>>>> +
>>>> +    rcu_read_lock();
>>>> +    ops = READ_ONCE(smc->negotiator_ops);
>>>> +
>>>> +    /* No negotiator_ops supply or no negotiate func set,
>>>> +     * always pass it.
>>>> +     */
>>>> +    if (!ops || !ops->negotiate) {
>>>
>>> A smc_sock_negotiator_ops without ->negotiate? Is it useful at all 
>>> to allow the register in the first place?
>>>
>>
>> You are right, this can be avoid before registration. I'll fix it.
>>
>>>> +        rcu_read_unlock();
>>>> +        return SK_PASS;
>>>> +    }
>>>> +
>>>> +    ret = ops->negotiate((struct sock *)&smc->sk);
>>>> +    rcu_read_unlock();
>>>> +    return ret;
>>>> +}
>>>> +
>>>> +void smc_sock_perform_collecting_info(const struct smc_sock *smc, 
>>>> int timing)
>>>> +{
>>>> +    const struct smc_sock_negotiator_ops *ops;
>>>> +
>>>> +    rcu_read_lock();
>>>> +    ops = READ_ONCE(smc->negotiator_ops);
>>>> +
>>>> +    if (!ops || !ops->collect_info) {
>>>> +        rcu_read_unlock();
>>>> +        return;
>>>> +    }
>>>> +
>>>> +    ops->collect_info((struct sock *)&smc->sk, timing);
>>>> +    rcu_read_unlock();
>>>> +}
>>>> +
>>>> +int smc_sock_assign_negotiator_ops(struct smc_sock *smc, const 
>>>> char *name)
>>>> +{
>>>> +    struct smc_sock_negotiator_ops *ops;
>>>> +    int ret = -EINVAL;
>>>> +
>>>> +    /* already set */
>>>> +    if (READ_ONCE(smc->negotiator_ops))
>>>> +        smc_sock_cleanup_negotiator_ops(smc, /* might be still 
>>>> referenced */ false);
>>>> +
>>>> +    /* Just for clear negotiator_ops */
>>>> +    if (!name || !strlen(name))
>>>> +        return 0;
>>>> +
>>>> +    rcu_read_lock();
>>>> +    ops = smc_negotiator_ops_get_by_name(name);
>>>> +    if (likely(ops)) {
>>>> +        if (unlikely(!bpf_try_module_get(ops, ops->owner))) {
>>>> +            ret = -EACCES;
>>>> +        } else {
>>>> +            WRITE_ONCE(smc->negotiator_ops, ops);
>>>> +            /* make sure ops can be seen */
>>>> +            smp_wmb();
>>>
>>> This rcu_read_lock(), WRITE_ONCE, and smp_wmb() combo looks very 
>>> suspicious. smc->negotiator_ops is protected by rcu (+refcnt) or 
>>> lock_sock()?
>>>
>>
>> All access to ops is protected by RCU, and there are no lock_sock. 
>> WRITE_ONCE() and smp_wmb() do
>> not participate in any guarantee of the availability of ops, The 
>> purpose to using them is just wish the latest values
>> can be read as soon as possible , In fact, even if old value is read, 
>> there will be no problem in logic because all updates
>> will do synchronize_rcu() and all access to ops is under in 
>> rcu_read_lock().
>
> The explanation is not encouraging. No clear benefit while having this 
> kind of complexity here. Switching tcp congestion ops also does not 
> require this. Some of the new codes is in af_smc but bpf is the 
> primary user. It is not something that I would like to maintain and 
> then need to reason about this unusual pattern a year later. Beside, 
> this negotiator_ops assignment must be done under a lock_sock(). The 
> same probably is true for calling ops->negotiate() where the bpf prog 
> may be looking at the sk and calling bpf_setsockopt.

I got your point. If you feel that this code is complex and unnecessary,
I can of course remove it.

Additionally, smc_sock_assign_negotiator_ops is indeed executed under the
sock lock; __smc_setsockopt locks the sock for it. I misunderstood your
meaning before.

As for ops->negotiate(), thanks for this point, but considering
performance, I might prohibit calling setsockopt in negotiate().
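
For reference, a sketch of the more conventional pattern the review points at,
assuming negotiator_ops is annotated __rcu and the writer path keeps running
under lock_sock() as __smc_setsockopt does (illustrative only, not the code in
this patch):

/* writer: runs with lock_sock(&smc->sk) held */
static void smc_publish_negotiator_ops(struct smc_sock *smc,
				       const struct smc_sock_negotiator_ops *ops)
{
	rcu_assign_pointer(smc->negotiator_ops, ops);
}

/* reader: e.g. the connect/listen path deciding SMC vs TCP fallback */
static int smc_negotiate_rcu(struct smc_sock *smc)
{
	const struct smc_sock_negotiator_ops *ops;
	int ret = SK_PASS;

	rcu_read_lock();
	ops = rcu_dereference(smc->negotiator_ops);
	if (ops)
		ret = ops->negotiate((struct sock *)&smc->sk);
	rcu_read_unlock();
	return ret;
}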

>>
>>> I am going to stop reviewing here.
>>>
>>
>> Hoping my explanation can answer your questions and still looking 
>> forward to
>> your more feedback 
diff mbox series

Patch

diff --git a/include/net/smc.h b/include/net/smc.h
index 6d076f5..191061c 100644
--- a/include/net/smc.h
+++ b/include/net/smc.h
@@ -296,6 +296,8 @@  struct smc_sock {				/* smc sock container */
 	atomic_t                queued_smc_hs;  /* queued smc handshakes */
 	struct inet_connection_sock_af_ops		af_ops;
 	const struct inet_connection_sock_af_ops	*ori_af_ops;
+	/* protocol negotiator ops */
+	const struct smc_sock_negotiator_ops *negotiator_ops;
 						/* original af ops */
 	int			sockopt_defer_accept;
 						/* sockopt TCP_DEFER_ACCEPT
@@ -316,4 +318,34 @@  struct smc_sock {				/* smc sock container */
 						 */
 };
 
+#ifdef CONFIG_SMC_BPF
+/* BPF struct ops for smc protocol negotiator */
+struct smc_sock_negotiator_ops {
+
+	struct list_head	list;
+
+	/* ops name */
+	char		name[16];
+	/* key for name */
+	u32			key;
+
+	/* init with sk */
+	void (*init)(struct sock *sk);
+
+	/* release with sk */
+	void (*release)(struct sock *sk);
+
+	/* advice for negotiate */
+	int (*negotiate)(struct sock *sk);
+
+	/* info gathering timing */
+	void (*collect_info)(struct sock *sk, int timing);
+
+	/* module owner */
+	struct module *owner;
+};
+#else
+struct smc_sock_negotiator_ops {};
+#endif
+
 #endif	/* _SMC_H */
diff --git a/net/Makefile b/net/Makefile
index 4c4dc53..222916a 100644
--- a/net/Makefile
+++ b/net/Makefile
@@ -52,6 +52,7 @@  obj-$(CONFIG_TIPC)		+= tipc/
 obj-$(CONFIG_NETLABEL)		+= netlabel/
 obj-$(CONFIG_IUCV)		+= iucv/
 obj-$(CONFIG_SMC)		+= smc/
+obj-$(CONFIG_SMC_BPF)		+= smc/smc_negotiator.o
 obj-$(CONFIG_RFKILL)		+= rfkill/
 obj-$(CONFIG_NET_9P)		+= 9p/
 obj-$(CONFIG_CAIF)		+= caif/
diff --git a/net/smc/Kconfig b/net/smc/Kconfig
index 1ab3c5a..bdcc9f1 100644
--- a/net/smc/Kconfig
+++ b/net/smc/Kconfig
@@ -19,3 +19,14 @@  config SMC_DIAG
 	  smcss.
 
 	  if unsure, say Y.
+
+config SMC_BPF
+	bool "SMC: support eBPF" if SMC
+	depends on BPF_SYSCALL
+	default n
+	help
+	  Supports eBPF to allow user-mode participation in SMC's protocol process
+	  via eBPF programs. Alternatively, obtain information about SMC socks
+	  through an eBPF program.
+
+	  If unsure, say N.
diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
index 50c38b6..7406fd4 100644
--- a/net/smc/af_smc.c
+++ b/net/smc/af_smc.c
@@ -52,6 +52,7 @@ 
 #include "smc_close.h"
 #include "smc_stats.h"
 #include "smc_tracepoint.h"
+#include "smc_negotiator.h"
 #include "smc_sysctl.h"
 
 static DEFINE_MUTEX(smc_server_lgr_pending);	/* serialize link group
@@ -68,6 +69,119 @@ 
 static void smc_tcp_listen_work(struct work_struct *);
 static void smc_connect_work(struct work_struct *);
 
+#ifdef CONFIG_SMC_BPF
+
+/* Check if sock should use smc */
+int smc_sock_should_select_smc(const struct smc_sock *smc)
+{
+	const struct smc_sock_negotiator_ops *ops;
+	int ret;
+
+	rcu_read_lock();
+	ops = READ_ONCE(smc->negotiator_ops);
+
+	/* No negotiator_ops supply or no negotiate func set,
+	 * always pass it.
+	 */
+	if (!ops || !ops->negotiate) {
+		rcu_read_unlock();
+		return SK_PASS;
+	}
+
+	ret = ops->negotiate((struct sock *)&smc->sk);
+	rcu_read_unlock();
+	return ret;
+}
+
+void smc_sock_perform_collecting_info(const struct smc_sock *smc, int timing)
+{
+	const struct smc_sock_negotiator_ops *ops;
+
+	rcu_read_lock();
+	ops = READ_ONCE(smc->negotiator_ops);
+
+	if (!ops || !ops->collect_info) {
+		rcu_read_unlock();
+		return;
+	}
+
+	ops->collect_info((struct sock *)&smc->sk, timing);
+	rcu_read_unlock();
+}
+
+int smc_sock_assign_negotiator_ops(struct smc_sock *smc, const char *name)
+{
+	struct smc_sock_negotiator_ops *ops;
+	int ret = -EINVAL;
+
+	/* already set */
+	if (READ_ONCE(smc->negotiator_ops))
+		smc_sock_cleanup_negotiator_ops(smc, /* might be still referenced */ false);
+
+	/* Just for clear negotiator_ops */
+	if (!name || !strlen(name))
+		return 0;
+
+	rcu_read_lock();
+	ops = smc_negotiator_ops_get_by_name(name);
+	if (likely(ops)) {
+		if (unlikely(!bpf_try_module_get(ops, ops->owner))) {
+			ret = -EACCES;
+		} else {
+			WRITE_ONCE(smc->negotiator_ops, ops);
+			/* make sure ops can be seen */
+			smp_wmb();
+			if (ops->init)
+				ops->init(&smc->sk);
+			ret = 0;
+		}
+	}
+	rcu_read_unlock();
+	return ret;
+}
+
+void smc_sock_cleanup_negotiator_ops(struct smc_sock *smc, bool no_more)
+{
+	const struct smc_sock_negotiator_ops *ops;
+
+	ops = READ_ONCE(smc->negotiator_ops);
+
+	/* not all smc sock has negotiator_ops */
+	if (!ops)
+		return;
+
+	might_sleep();
+
+	/* Just ensure data integrity */
+	WRITE_ONCE(smc->negotiator_ops, NULL);
+	/* make sure NULL can be seen */
+	smp_wmb();
+	/* If the sock may still have references to the negotiator ops being
+	 * removed, we might need to wait for the readers of the ops
+	 * to complete. It's slow though.
+	 */
+	if (unlikely(!no_more))
+		synchronize_rcu();
+	if (ops->release)
+		ops->release(&smc->sk);
+	bpf_module_put(ops, ops->owner);
+}
+
+void smc_sock_clone_negotiator_ops(struct sock *parent, struct sock *child)
+{
+	const struct smc_sock_negotiator_ops *ops;
+
+	rcu_read_lock();
+	ops = READ_ONCE(smc_sk(parent)->negotiator_ops);
+	if (ops && bpf_try_module_get(ops, ops->owner)) {
+		smc_sk(child)->negotiator_ops = ops;
+		if (ops->init)
+			ops->init(child);
+	}
+	rcu_read_unlock();
+}
+#endif
+
 int smc_nl_dump_hs_limitation(struct sk_buff *skb, struct netlink_callback *cb)
 {
 	struct smc_nl_dmp_ctx *cb_ctx = smc_nl_dmp_ctx(cb);
@@ -166,6 +280,9 @@  static bool smc_hs_congested(const struct sock *sk)
 	if (workqueue_congested(WORK_CPU_UNBOUND, smc_hs_wq))
 		return true;
 
+	if (!smc_sock_should_select_smc(smc))
+		return true;
+
 	return false;
 }
 
@@ -320,6 +437,9 @@  static int smc_release(struct socket *sock)
 	sock_hold(sk); /* sock_put below */
 	smc = smc_sk(sk);
 
+	/* trigger info gathering if needed.*/
+	smc_sock_perform_collecting_info(smc, SMC_SOCK_CLOSED_TIMING);
+
 	old_state = sk->sk_state;
 
 	/* cleanup for a dangling non-blocking connect */
@@ -356,6 +476,9 @@  static int smc_release(struct socket *sock)
 
 static void smc_destruct(struct sock *sk)
 {
+	/* cleanup negotiator_ops if set */
+	smc_sock_cleanup_negotiator_ops(smc_sk(sk), /* no longer used */ true);
+
 	if (sk->sk_state != SMC_CLOSED)
 		return;
 	if (!sock_flag(sk, SOCK_DEAD))
@@ -1627,7 +1750,14 @@  static int smc_connect(struct socket *sock, struct sockaddr *addr,
 	}
 
 	smc_copy_sock_settings_to_clc(smc);
-	tcp_sk(smc->clcsock->sk)->syn_smc = 1;
+	/* accept outgoing connection as SMC connection */
+	if (smc_sock_should_select_smc(smc) == SK_PASS) {
+		tcp_sk(smc->clcsock->sk)->syn_smc = 1;
+	} else {
+		tcp_sk(smc->clcsock->sk)->syn_smc = 0;
+		smc_switch_to_fallback(smc, /* active fallback */ 0);
+	}
+
 	if (smc->connect_nonblock) {
 		rc = -EALREADY;
 		goto out;
@@ -1679,6 +1809,8 @@  static int smc_clcsock_accept(struct smc_sock *lsmc, struct smc_sock **new_smc)
 	}
 	*new_smc = smc_sk(new_sk);
 
+	smc_sock_clone_negotiator_ops(lsk, new_sk);
+
 	mutex_lock(&lsmc->clcsock_release_lock);
 	if (lsmc->clcsock)
 		rc = kernel_accept(lsmc->clcsock, &new_clcsock, SOCK_NONBLOCK);
diff --git a/net/smc/smc_negotiator.c b/net/smc/smc_negotiator.c
new file mode 100644
index 0000000..a93a19e
--- /dev/null
+++ b/net/smc/smc_negotiator.c
@@ -0,0 +1,119 @@ 
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *  Support eBPF for Shared Memory Communications over RDMA (SMC-R) and RoCE
+ *
+ *  Author(s):  D. Wythe <alibuda@linux.alibaba.com>
+ */
+#include <linux/kernel.h>
+#include <linux/bpf.h>
+#include <linux/smc.h>
+#include <net/sock.h>
+
+#include "smc_negotiator.h"
+#include "smc.h"
+
+static DEFINE_SPINLOCK(smc_sock_negotiator_list_lock);
+static LIST_HEAD(smc_sock_negotiator_list);
+
+/* required smc_sock_negotiator_list_lock locked */
+static inline struct smc_sock_negotiator_ops *smc_negotiator_ops_get_by_key(u32 key)
+{
+	struct smc_sock_negotiator_ops *ops;
+
+	list_for_each_entry_rcu(ops, &smc_sock_negotiator_list, list) {
+		if (ops->key == key)
+			return ops;
+	}
+
+	return NULL;
+}
+
+struct smc_sock_negotiator_ops *smc_negotiator_ops_get_by_name(const char *name)
+{
+	struct smc_sock_negotiator_ops *ops, *found = NULL;
+
+	spin_lock(&smc_sock_negotiator_list_lock);
+	list_for_each_entry_rcu(ops, &smc_sock_negotiator_list, list) {
+		if (!found && strcmp(ops->name, name) == 0)
+			found = ops;
+	}
+	spin_unlock(&smc_sock_negotiator_list_lock);
+	return found;
+}
+EXPORT_SYMBOL_GPL(smc_negotiator_ops_get_by_name);
+
+int smc_sock_validate_negotiator_ops(struct smc_sock_negotiator_ops *ops)
+{
+	/* not required yet */
+	return 0;
+}
+
+/* register ops */
+int smc_sock_register_negotiator_ops(struct smc_sock_negotiator_ops *ops)
+{
+	int ret;
+
+	ret = smc_sock_validate_negotiator_ops(ops);
+	if (ret)
+		return ret;
+
+	/* calc key by name hash */
+	ops->key = jhash(ops->name, sizeof(ops->name), strlen(ops->name));
+
+	spin_lock(&smc_sock_negotiator_list_lock);
+	if (smc_negotiator_ops_get_by_key(ops->key)) {
+		pr_notice("smc: %s negotiator already registered\n", ops->name);
+		ret = -EEXIST;
+	} else {
+		list_add_tail_rcu(&ops->list, &smc_sock_negotiator_list);
+	}
+	spin_unlock(&smc_sock_negotiator_list_lock);
+	return ret;
+}
+
+/* unregister ops */
+void smc_sock_unregister_negotiator_ops(struct smc_sock_negotiator_ops *ops)
+{
+	spin_lock(&smc_sock_negotiator_list_lock);
+	list_del_rcu(&ops->list);
+	spin_unlock(&smc_sock_negotiator_list_lock);
+
+	/* Wait for outstanding readers to complete before the
+	 * ops gets removed entirely.
+	 */
+	synchronize_rcu();
+}
+
+int smc_sock_update_negotiator_ops(struct smc_sock_negotiator_ops *ops,
+				   struct smc_sock_negotiator_ops *old_ops)
+{
+	struct smc_sock_negotiator_ops *existing;
+	int ret;
+
+	ret = smc_sock_validate_negotiator_ops(ops);
+	if (ret)
+		return ret;
+
+	ops->key = jhash(ops->name, sizeof(ops->name), strlen(ops->name));
+	if (unlikely(!ops->key))
+		return -EINVAL;
+
+	spin_lock(&smc_sock_negotiator_list_lock);
+	existing = smc_negotiator_ops_get_by_key(old_ops->key);
+	if (!existing || strcmp(existing->name, ops->name)) {
+		ret = -EINVAL;
+	} else if (existing != old_ops) {
+		pr_notice("invalid old negotiator to replace\n");
+		ret = -EINVAL;
+	} else {
+		list_add_tail_rcu(&ops->list, &smc_sock_negotiator_list);
+		list_del_rcu(&existing->list);
+	}
+
+	spin_unlock(&smc_sock_negotiator_list_lock);
+	if (ret)
+		return ret;
+
+	synchronize_rcu();
+	return 0;
+}
diff --git a/net/smc/smc_negotiator.h b/net/smc/smc_negotiator.h
new file mode 100644
index 0000000..b294ede
--- /dev/null
+++ b/net/smc/smc_negotiator.h
@@ -0,0 +1,116 @@ 
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *  Support eBPF for Shared Memory Communications over RDMA (SMC-R) and RoCE
+ *
+ *  Author(s):  D. Wythe <alibuda@linux.alibaba.com>
+ */
+
+#include <linux/types.h>
+#include <net/smc.h>
+
+/* Max length of negotiator name */
+#define SMC_NEGOTIATOR_NAME_MAX	(16)
+
+/* closing time */
+#define SMC_SOCK_CLOSED_TIMING	(0)
+
+#ifdef CONFIG_SMC_BPF
+
+/* Register a new SMC socket negotiator ops
+ * The registered ops can then be assigned to SMC sockets using
+ * smc_sock_assign_negotiator_ops() via name
+ * Return: 0 on success, negative error code on failure
+ */
+int smc_sock_register_negotiator_ops(struct smc_sock_negotiator_ops *ops);
+
+/* Update an existing SMC socket negotiator ops
+ * This function is used to update an existing SMC socket negotiator ops. The new ops will
+ * replace the old ops that has the same name.
+ * Return: 0 on success, negative error code on failure.
+ */
+int smc_sock_update_negotiator_ops(struct smc_sock_negotiator_ops *ops,
+				   struct smc_sock_negotiator_ops *old_ops);
+
+/* Validate SMC negotiator operations
+ * This function is called to validate an SMC negotiator operations structure
+ * before it is assigned to a socket. It checks that all necessary function
+ * pointers are defined and not null.
+ * Returns 0 if the @ops argument is valid, or a negative error code otherwise.
+ */
+int smc_sock_validate_negotiator_ops(struct smc_sock_negotiator_ops *ops);
+
+/* Unregister an SMC socket negotiator ops
+ * This function is used to unregister an existing SMC socket negotiator ops.
+ * The ops will immediately become unavailable for assignment to SMC sockets.
+ */
+void smc_sock_unregister_negotiator_ops(struct smc_sock_negotiator_ops *ops);
+
+/* Get registered negotiator ops via name, caller should invoke it
+ * with RCU protected.
+ */
+struct smc_sock_negotiator_ops *smc_negotiator_ops_get_by_name(const char *name);
+
+/* Assign a negotiator ops to an SMC socket
+ * This function is used to assign a negotiator ops to an SMC socket.
+ * The ops must have been previously registered with
+ * smc_sock_register_negotiator_ops().
+ * Return: 0 on success, negative error code on failure.
+ */
+int smc_sock_assign_negotiator_ops(struct smc_sock *smc, const char *name);
+
+/* Remove the negotiator ops that had been assigned to @smc.
+ * @no_more implies that the caller explicitly states that @smc has no references
+ * to the negotiator ops to be removed. This is not a mandatory option.
+ * When it is set to false, we will use RCU to protect the ops, but in that case we
+ * always have to call synchronize_rcu(), which has a significant performance impact.
+ */
+void smc_sock_cleanup_negotiator_ops(struct smc_sock *smc, bool no_more);
+
+/* Clone negotiator ops of parent sock to
+ * child sock.
+ */
+void smc_sock_clone_negotiator_ops(struct sock *parent, struct sock *child);
+
+/* Check if sock should use smc */
+int smc_sock_should_select_smc(const struct smc_sock *smc);
+
+/* Collect information to assigned ops */
+void smc_sock_perform_collecting_info(const struct smc_sock *smc, int timing);
+
+#else
+static inline int smc_sock_register_negotiator_ops(struct smc_sock_negotiator_ops *ops)
+{
+	return 0;
+}
+
+static inline int smc_sock_update_negotiator_ops(struct smc_sock_negotiator_ops *ops,
+						 struct smc_sock_negotiator_ops *old_ops)
+{
+	return 0;
+}
+
+static inline int smc_sock_validate_negotiator_ops(struct smc_sock_negotiator_ops *ops)
+{
+	return 0;
+}
+
+static inline void smc_sock_unregister_negotiator_ops(struct smc_sock_negotiator_ops *ops) {}
+
+static inline struct smc_sock_negotiator_ops *smc_negotiator_ops_get_by_name(const char *name)
+{
+	return NULL;
+}
+
+static inline int smc_sock_assign_negotiator_ops(struct smc_sock *smc, const char *name)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline void smc_sock_cleanup_negotiator_ops(struct smc_sock *smc, bool no_more) {}
+
+static inline void smc_sock_clone_negotiator_ops(struct sock *parent, struct sock *child) {}
+
+static inline int smc_sock_should_select_smc(const struct smc_sock *smc) { return SK_PASS; }
+
+static inline void smc_sock_perform_collecting_info(const struct smc_sock *smc, int timing) {}
+#endif