Message ID | 20240731093102.130154-1-guwen@linux.alibaba.com (mailing list archive) |
---|---|
State | Changes Requested |
Delegated to: | Netdev Maintainers |
Series | [net] net/smc: delete buf_desc from buffer list under lock protection |
Hi Wen Gu: "The operations to link group buffer list should be protected by sndbufs_lock or rmbs_lock" It seems that the logic is smooth. But will this really happen? Because no process is in use with the link group, does this mean that there is no concurrent scenario? Thank you Zhengchao Shao On 2024/7/31 17:31, Wen Gu wrote: > The operations to link group buffer list should be protected by > sndbufs_lock or rmbs_lock. So fix it. > > Fixes: 3e034725c0d8 ("net/smc: common functions for RMBs and send buffers") > Signed-off-by: Wen Gu <guwen@linux.alibaba.com> > --- > net/smc/smc_core.c | 10 ++++++++-- > 1 file changed, 8 insertions(+), 2 deletions(-) > > diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c > index 3b95828d9976..ecfea8c38da9 100644 > --- a/net/smc/smc_core.c > +++ b/net/smc/smc_core.c > @@ -1368,18 +1368,24 @@ static void __smc_lgr_free_bufs(struct smc_link_group *lgr, bool is_rmb) > { > struct smc_buf_desc *buf_desc, *bf_desc; > struct list_head *buf_list; > + struct rw_semaphore *lock; > int i; > > for (i = 0; i < SMC_RMBE_SIZES; i++) { > - if (is_rmb) > + if (is_rmb) { > buf_list = &lgr->rmbs[i]; > - else > + lock = &lgr->rmbs_lock; > + } else { > buf_list = &lgr->sndbufs[i]; > + lock = &lgr->sndbufs_lock; > + } > + down_write(lock); > list_for_each_entry_safe(buf_desc, bf_desc, buf_list, > list) { > list_del(&buf_desc->list); > smc_buf_free(lgr, is_rmb, buf_desc); > } > + up_write(lock); > } > } >
On 2024/7/31 18:32, shaozhengchao wrote:
> Hi Wen Gu:
> "The operations on the link group buffer lists should be protected by
> sndbufs_lock or rmbs_lock": the reasoning sounds fine, but can this
> really happen? Since no process is using the link group at this point,
> doesn't that mean there is no concurrent scenario?
>

Hi Zhengchao,

Yes, I am also very conflicted about whether to add lock protection.
From the code, it appears that when __smc_lgr_free_bufs is called, the
link group has already been removed from the lgr_list, so theoretically
there should be no contention (e.g. a concurrent add to the buf_list).
However, in order to maintain consistency with other lgr buf_list
operations and to guard against unforeseen or future changes, I have
added lock protection here as well.

Thanks!

> Thank you
>
> Zhengchao Shao
>
> On 2024/7/31 17:31, Wen Gu wrote:
>> The operations on the link group buffer lists should be protected by
>> sndbufs_lock or rmbs_lock. So fix it.
>>
>> Fixes: 3e034725c0d8 ("net/smc: common functions for RMBs and send buffers")
>> Signed-off-by: Wen Gu <guwen@linux.alibaba.com>
>> ---
>>  net/smc/smc_core.c | 10 ++++++++--
>>  1 file changed, 8 insertions(+), 2 deletions(-)
>>
>> diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
>> index 3b95828d9976..ecfea8c38da9 100644
>> --- a/net/smc/smc_core.c
>> +++ b/net/smc/smc_core.c
>> @@ -1368,18 +1368,24 @@ static void __smc_lgr_free_bufs(struct smc_link_group *lgr, bool is_rmb)
>>  {
>>          struct smc_buf_desc *buf_desc, *bf_desc;
>>          struct list_head *buf_list;
>> +        struct rw_semaphore *lock;
>>          int i;
>>
>>          for (i = 0; i < SMC_RMBE_SIZES; i++) {
>> -                if (is_rmb)
>> +                if (is_rmb) {
>>                          buf_list = &lgr->rmbs[i];
>> -                else
>> +                        lock = &lgr->rmbs_lock;
>> +                } else {
>>                          buf_list = &lgr->sndbufs[i];
>> +                        lock = &lgr->sndbufs_lock;
>> +                }
>> +                down_write(lock);
>>                  list_for_each_entry_safe(buf_desc, bf_desc, buf_list,
>>                                           list) {
>>                          list_del(&buf_desc->list);
>>                          smc_buf_free(lgr, is_rmb, buf_desc);
>>                  }
>> +                up_write(lock);
>>          }
>>  }
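[Editor's note] As background for the "consistency with other lgr buf_list
operations" point: the allocation side in net/smc/smc_core.c picks the
per-size list and its rw_semaphore in the same way and only links a new
buf_desc into the list while holding that semaphore for writing. The sketch
below is a minimal illustration of that pattern, not the verbatim kernel
function; the helper name smc_buf_add_sketch and its parameters are made up,
the struct fields are the ones visible in the patch, and the real logic
lives in and around __smc_buf_create().

/* Minimal sketch of the locked-add pattern on the allocation path.
 * Assumes the SMC internal headers (net/smc/smc_core.h) for the
 * struct smc_link_group / struct smc_buf_desc definitions.
 */
#include <linux/list.h>
#include <linux/rwsem.h>

static void smc_buf_add_sketch(struct smc_link_group *lgr,
                               struct smc_buf_desc *buf_desc,
                               bool is_rmb, int bufsize_idx)
{
        struct rw_semaphore *lock;
        struct list_head *buf_list;

        if (is_rmb) {
                buf_list = &lgr->rmbs[bufsize_idx];
                lock = &lgr->rmbs_lock;
        } else {
                buf_list = &lgr->sndbufs[bufsize_idx];
                lock = &lgr->sndbufs_lock;
        }

        down_write(lock);       /* exclude concurrent readers and writers */
        list_add(&buf_desc->list, buf_list);
        up_write(lock);
}

The patch in this thread mirrors the same pattern on the free side by taking
the identical semaphore around list_del().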
On 02.08.24 03:55, Wen Gu wrote:
>
> On 2024/7/31 18:32, shaozhengchao wrote:
>> Hi Wen Gu:
>> "The operations on the link group buffer lists should be protected by
>> sndbufs_lock or rmbs_lock": the reasoning sounds fine, but can this
>> really happen? Since no process is using the link group at this point,
>> doesn't that mean there is no concurrent scenario?
>>
>
> Hi Zhengchao,
>
> Yes, I am also very conflicted about whether to add lock protection.
> From the code, it appears that when __smc_lgr_free_bufs is called, the
> link group has already been removed from the lgr_list, so theoretically
> there should be no contention (e.g. a concurrent add to the buf_list).
> However, in order to maintain consistency with other lgr buf_list
> operations and to guard against unforeseen or future changes, I have
> added lock protection here as well.
>
> Thanks!
>

I'm indeed in two minds about whether to give you my Reviewed-by,
especially when the main justification is guarding against unforeseen
bugs. However, most of the locking bugs we have hit in this code in the
past were deadlocks caused by introducing too many different locks. So
I don't think this patch is necessary, at least for now.

Thanks,
Wenjia
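[Editor's note] To make the deadlock concern concrete: the classic failure
mode once several independent locks guard related data is ABBA ordering,
where two code paths take the same pair of locks in opposite order and block
each other forever. The snippet below is a generic, hypothetical
illustration; the lock and function names are invented and not from the SMC
code.

#include <linux/rwsem.h>

/* Hypothetical locks and paths, purely to illustrate ABBA ordering. */
static DECLARE_RWSEM(lock_a);
static DECLARE_RWSEM(lock_b);

static void path_one(void)
{
        down_write(&lock_a);
        down_write(&lock_b);    /* sleeps forever if path_two() holds lock_b */
        /* ... touch data guarded by both locks ... */
        up_write(&lock_b);
        up_write(&lock_a);
}

static void path_two(void)
{
        down_write(&lock_b);
        down_write(&lock_a);    /* sleeps forever if path_one() holds lock_a */
        /* ... touch data guarded by both locks ... */
        up_write(&lock_a);
        up_write(&lock_b);
}

Each additional lock widens the set of orderings that every current and
future caller has to respect, which is the maintenance cost being weighed
here against a race that cannot occur today.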
On 2024/8/5 16:23, Wenjia Zhang wrote:
>
> On 02.08.24 03:55, Wen Gu wrote:
>>
>> On 2024/7/31 18:32, shaozhengchao wrote:
>>> Hi Wen Gu:
>>> "The operations on the link group buffer lists should be protected by
>>> sndbufs_lock or rmbs_lock": the reasoning sounds fine, but can this
>>> really happen? Since no process is using the link group at this point,
>>> doesn't that mean there is no concurrent scenario?
>>>
>>
>> Hi Zhengchao,
>>
>> Yes, I am also very conflicted about whether to add lock protection.
>> From the code, it appears that when __smc_lgr_free_bufs is called, the
>> link group has already been removed from the lgr_list, so theoretically
>> there should be no contention (e.g. a concurrent add to the buf_list).
>> However, in order to maintain consistency with other lgr buf_list
>> operations and to guard against unforeseen or future changes, I have
>> added lock protection here as well.
>>
>> Thanks!
>>
>
> I'm indeed in two minds about whether to give you my Reviewed-by,
> especially when the main justification is guarding against unforeseen
> bugs. However, most of the locking bugs we have hit in this code in the
> past were deadlocks caused by introducing too many different locks. So
> I don't think this patch is necessary, at least for now.
>

OK, let's handle it when it causes actual problems. Thanks!

> Thanks,
> Wenjia
diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index 3b95828d9976..ecfea8c38da9 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -1368,18 +1368,24 @@ static void __smc_lgr_free_bufs(struct smc_link_group *lgr, bool is_rmb)
 {
         struct smc_buf_desc *buf_desc, *bf_desc;
         struct list_head *buf_list;
+        struct rw_semaphore *lock;
         int i;
 
         for (i = 0; i < SMC_RMBE_SIZES; i++) {
-                if (is_rmb)
+                if (is_rmb) {
                         buf_list = &lgr->rmbs[i];
-                else
+                        lock = &lgr->rmbs_lock;
+                } else {
                         buf_list = &lgr->sndbufs[i];
+                        lock = &lgr->sndbufs_lock;
+                }
+                down_write(lock);
                 list_for_each_entry_safe(buf_desc, bf_desc, buf_list,
                                          list) {
                         list_del(&buf_desc->list);
                         smc_buf_free(lgr, is_rmb, buf_desc);
                 }
+                up_write(lock);
         }
 }
The operations on the link group buffer lists should be protected by
sndbufs_lock or rmbs_lock. So fix it.

Fixes: 3e034725c0d8 ("net/smc: common functions for RMBs and send buffers")
Signed-off-by: Wen Gu <guwen@linux.alibaba.com>
---
 net/smc/smc_core.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
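[Editor's note] For reference, this is roughly how __smc_lgr_free_bufs()
would read with the patch applied. It is reconstructed from the hunk above
for readability, not copied from the tree, so surrounding context may differ
slightly.

static void __smc_lgr_free_bufs(struct smc_link_group *lgr, bool is_rmb)
{
        struct smc_buf_desc *buf_desc, *bf_desc; /* bf_desc: safe-iteration temp */
        struct list_head *buf_list;
        struct rw_semaphore *lock;
        int i;

        for (i = 0; i < SMC_RMBE_SIZES; i++) {
                if (is_rmb) {
                        buf_list = &lgr->rmbs[i];
                        lock = &lgr->rmbs_lock;
                } else {
                        buf_list = &lgr->sndbufs[i];
                        lock = &lgr->sndbufs_lock;
                }
                /* hold the list lock for writing while unlinking entries */
                down_write(lock);
                list_for_each_entry_safe(buf_desc, bf_desc, buf_list, list) {
                        list_del(&buf_desc->list);
                        smc_buf_free(lgr, is_rmb, buf_desc);
                }
                up_write(lock);
        }
}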