Message ID | 54004E82.3060608@huawei.com (mailing list archive)
State      | New, archived
Hi Viro & Andrew,

Could you help review this patch? Thanks.

xuejiufei

On 2014/8/29 17:57, Xue jiufei wrote:
> The patch tries to solve a deadlock problem caused by cluster
> filesystems such as ocfs2. The problem can happen at least in the
> following situation:
> 1) On receiving a connect message from another node, the node queues
>    a work_struct, o2net_listen_work.
> 2) o2net_wq processes this work and calls sock_alloc() to allocate
>    memory for a new socket.
> 3) When available memory is low, the allocation enters direct memory
>    reclaim and triggers inode cleanup. If the inode being cleaned up
>    happens to be an ocfs2 inode, the call chain is
>    evict()->ocfs2_evict_inode()->ocfs2_drop_lock()->dlmunlock()
>    ->o2net_send_message_vec(), which waits for the unlock response
>    from the master node.
> 4) The TCP layer receives the response, calls o2net_data_ready(),
>    and queues sc_rx_work, waiting for o2net_wq to process it.
> 5) o2net_wq is a single-threaded workqueue that processes work items
>    one by one. It is still running o2net_listen_work and cannot
>    handle sc_rx_work, so we deadlock.
>
> It is impossible to set GFP_NOFS for the memory allocation in
> sock_alloc(), so we use PF_FSTRANS to keep the task from re-entering
> the filesystem when available memory is low.
>
> Signed-off-by: joyce.xue <xuejiufei@huawei.com>
> ---
>  fs/ocfs2/cluster/tcp.c | 7 +++++++
>  fs/super.c             | 3 +++
>  2 files changed, 10 insertions(+)
>
> diff --git a/fs/ocfs2/cluster/tcp.c b/fs/ocfs2/cluster/tcp.c
> index 681691b..629b4da 100644
> --- a/fs/ocfs2/cluster/tcp.c
> +++ b/fs/ocfs2/cluster/tcp.c
> @@ -1581,6 +1581,8 @@ static void o2net_start_connect(struct work_struct *work)
>  	int ret = 0, stop;
>  	unsigned int timeout;
>
> +	current->flags |= PF_FSTRANS;
> +
>  	/* if we're greater we initiate tx, otherwise we accept */
>  	if (o2nm_this_node() <= o2net_num_from_nn(nn))
>  		goto out;
> @@ -1683,6 +1685,7 @@ out:
>  	if (mynode)
>  		o2nm_node_put(mynode);
>
> +	current->flags &= ~PF_FSTRANS;
>  	return;
>  }
>
> @@ -1809,6 +1812,8 @@ static int o2net_accept_one(struct socket *sock, int *more)
>  	struct o2net_sock_container *sc = NULL;
>  	struct o2net_node *nn;
>
> +	current->flags |= PF_FSTRANS;
> +
>  	BUG_ON(sock == NULL);
>  	*more = 0;
>  	ret = sock_create_lite(sock->sk->sk_family, sock->sk->sk_type,
> @@ -1918,6 +1923,8 @@ out:
>  	o2nm_node_put(local_node);
>  	if (sc)
>  		sc_put(sc);
> +
> +	current->flags &= ~PF_FSTRANS;
>  	return ret;
>  }
>
> diff --git a/fs/super.c b/fs/super.c
> index b9a214d..c4a8dc1 100644
> --- a/fs/super.c
> +++ b/fs/super.c
> @@ -71,6 +71,9 @@ static unsigned long super_cache_scan(struct shrinker *shrink,
>  	if (!(sc->gfp_mask & __GFP_FS))
>  		return SHRINK_STOP;
>
> +	if (current->flags & PF_FSTRANS)
> +		return SHRINK_STOP;
> +
>  	if (!grab_super_passive(sb))
>  		return SHRINK_STOP;
On Fri, Aug 29, 2014 at 05:57:22PM +0800, Xue jiufei wrote:
> The patch tries to solve a deadlock problem caused by cluster
> filesystems such as ocfs2. [...]
> It is impossible to set GFP_NOFS for the memory allocation in
> sock_alloc(), so we use PF_FSTRANS to keep the task from re-entering
> the filesystem when available memory is low.
>
> Signed-off-by: joyce.xue <xuejiufei@huawei.com>

For the second time: use memalloc_noio_save/memalloc_noio_restore.
And please put a great big comment in the code explaining why you
need to do this special thing with memory reclaim flags.

Cheers,

Dave.
Hi, Dave

On 2014/9/2 7:51, Dave Chinner wrote:
> On Fri, Aug 29, 2014 at 05:57:22PM +0800, Xue jiufei wrote:
>> [...]
>
> For the second time: use memalloc_noio_save/memalloc_noio_restore.
> And please put a great big comment in the code explaining why you
> need to do this special thing with memory reclaim flags.

Thanks for your reply. But I am afraid that memalloc_noio_save/
memalloc_noio_restore cannot solve my problem. __GFP_IO is cleared
when PF_MEMALLOC_NOIO is set, which avoids doing I/O in direct
memory reclaim. However, __GFP_FS is still set, so pruning of the
dcache and icache during memory allocation is not avoided, resulting
in the deadlock I described.

Thanks.
XueJiufei
On Tue, Sep 02, 2014 at 05:03:27PM +0800, Xue jiufei wrote:
> [...]
> Thanks for your reply. But I am afraid that memalloc_noio_save/
> memalloc_noio_restore can not solve my problem. __GFP_IO is cleared
> if PF_MEMALLOC_NOIO is set and can avoid doing IO in direct memory
> reclaim.

Well, yes. It sets a process flag that is used to avoid re-entrancy
issues in direct reclaim. Direct reclaim is more than just the
superblock shrinker - there are lots of other shrinkers, page
reclaim, etc., and I bet there are other paths that can trigger the
deadlock you are seeing. We need to protect against all those cases,
not just the one shrinker you see a problem with. i.e. we need to
clear __GFP_FS from *all* reclaim, not just the superblock shrinker.

Also, PF_FSTRANS is used internally by filesystems, not the generic
code. If we start spreading it through generic code like this, we
start breaking filesystems that rely on it having a specific,
filesystem-internal meaning. So it's a NACK on that basis as well.

> However, __GFP_FS is still set that can not avoid pruning
> dcache and icache in memory allocation, resulting in the deadlock I
> described.

You have a deadlock in direct reclaim, and we already have a
template for setting a process flag that is used to indirectly
control direct reclaim behaviour. If the current process flag
doesn't provide precisely the coverage you need, then use that
implementation as the template to do exactly what is needed for your
case.

Cheers,

Dave.
Hi Jiufei,

On 09/02/2014 05:03 PM, Xue jiufei wrote:
> [...]
> Thanks for your reply. But I am afraid that memalloc_noio_save/
> memalloc_noio_restore can not solve my problem. __GFP_IO is cleared
> if PF_MEMALLOC_NOIO is set and can avoid doing IO in direct memory
> reclaim. However, __GFP_FS is still set that can not avoid pruning
> dcache and icache in memory allocation, resulting in the deadlock I
> described.

You can use PF_MEMALLOC_NOIO to replace PF_FSTRANS: set this flag in
ocfs2 and check it in the sb shrinker.

Thanks,
Junxiao.
Hi, Dave

On 2014/9/3 9:02, Dave Chinner wrote:
> On Tue, Sep 02, 2014 at 05:03:27PM +0800, Xue jiufei wrote:
>> [...]
>
> You have a deadlock in direct reclaim, and we already have a
> template for setting a process flag that is used to indirectly
> control direct reclaim behaviour. If the current process flag
> doesn't provide precisely the coverage, then use that
> implementation as the template to do exactly what is needed for
> your case.

Thanks very much for your advice. I will send another patch later.

Thanks,
Xuejiufei
On Wed, Sep 03, 2014 at 09:38:31AM +0800, Junxiao Bi wrote:
> On 09/02/2014 05:03 PM, Xue jiufei wrote:
>> [...]
>
> You can use PF_MEMALLOC_NOIO to replace PF_FSTRANS, set this flag in
> ocfs2 and check it in sb shrinker.

No changes to the superblock shrinker, please. The flag should
modify the gfp_mask in the struct shrink_control passed to the
shrinker, just like the noio flag is used in the rest of the mm
code.

Cheers,

Dave.
Hi Junxiao,

On 2014/9/3 9:38, Junxiao Bi wrote:
> [...]
> You can use PF_MEMALLOC_NOIO to replace PF_FSTRANS, set this flag in
> ocfs2 and check it in sb shrinker.

Thanks for your advice. But I think using another process flag is
better. Do you agree? I will send another patch later.

Thanks,
XueJiufei
On 09/03/2014 11:10 AM, Dave Chinner wrote:
> On Wed, Sep 03, 2014 at 09:38:31AM +0800, Junxiao Bi wrote:
>> [...]
>> You can use PF_MEMALLOC_NOIO to replace PF_FSTRANS, set this flag
>> in ocfs2 and check it in sb shrinker.
>
> No changes to the superblock shrinker, please. The flag should
> modify the gfp_mask in the struct shrink_control passed to the
> shrinker, just like the noio flag is used in the rest of the mm
> code.

__GFP_FS seems to imply __GFP_IO; can the superblock shrinker check
!(sc->gfp_mask & __GFP_IO) and stop?

Thanks,
Junxiao.
On Wed, Sep 03, 2014 at 12:21:24PM +0800, Junxiao Bi wrote:
> On 09/03/2014 11:10 AM, Dave Chinner wrote:
>> [...]
>> No changes to the superblock shrinker, please. The flag should
>> modify the gfp_mask in the struct shrink_control passed to the
>> shrinker, just like the noio flag is used in the rest of the mm
>> code.
>
> __GFP_FS seemed imply __GFP_IO,

Now you are starting to understand. Check what GFP_NOIO actually
means, then tell me why memalloc_noio_flags() is not fully correct,
needs fixing, and needs to be applied to all of reclaim.

Hint: there's a hierarchy involved....

> can superblock shrinker check
> !(sc->gfp_mask & __GFP_IO) and stop?

No. Go back and read what I said about the initial setting of
sc->gfp_mask.

Cheers,

Dave.
diff --git a/fs/ocfs2/cluster/tcp.c b/fs/ocfs2/cluster/tcp.c
index 681691b..629b4da 100644
--- a/fs/ocfs2/cluster/tcp.c
+++ b/fs/ocfs2/cluster/tcp.c
@@ -1581,6 +1581,8 @@ static void o2net_start_connect(struct work_struct *work)
 	int ret = 0, stop;
 	unsigned int timeout;
 
+	current->flags |= PF_FSTRANS;
+
 	/* if we're greater we initiate tx, otherwise we accept */
 	if (o2nm_this_node() <= o2net_num_from_nn(nn))
 		goto out;
@@ -1683,6 +1685,7 @@ out:
 	if (mynode)
 		o2nm_node_put(mynode);
 
+	current->flags &= ~PF_FSTRANS;
 	return;
 }
 
@@ -1809,6 +1812,8 @@ static int o2net_accept_one(struct socket *sock, int *more)
 	struct o2net_sock_container *sc = NULL;
 	struct o2net_node *nn;
 
+	current->flags |= PF_FSTRANS;
+
 	BUG_ON(sock == NULL);
 	*more = 0;
 	ret = sock_create_lite(sock->sk->sk_family, sock->sk->sk_type,
@@ -1918,6 +1923,8 @@ out:
 	o2nm_node_put(local_node);
 	if (sc)
 		sc_put(sc);
+
+	current->flags &= ~PF_FSTRANS;
 	return ret;
 }
 
diff --git a/fs/super.c b/fs/super.c
index b9a214d..c4a8dc1 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -71,6 +71,9 @@ static unsigned long super_cache_scan(struct shrinker *shrink,
 	if (!(sc->gfp_mask & __GFP_FS))
 		return SHRINK_STOP;
 
+	if (current->flags & PF_FSTRANS)
+		return SHRINK_STOP;
+
 	if (!grab_super_passive(sb))
 		return SHRINK_STOP;
The patch tries to solve a deadlock problem caused by cluster
filesystems such as ocfs2. The problem can happen at least in the
following situation:
1) On receiving a connect message from another node, the node queues
   a work_struct, o2net_listen_work.
2) o2net_wq processes this work and calls sock_alloc() to allocate
   memory for a new socket.
3) When available memory is low, the allocation enters direct memory
   reclaim and triggers inode cleanup. If the inode being cleaned up
   happens to be an ocfs2 inode, the call chain is
   evict()->ocfs2_evict_inode()->ocfs2_drop_lock()->dlmunlock()
   ->o2net_send_message_vec(), which waits for the unlock response
   from the master node.
4) The TCP layer receives the response, calls o2net_data_ready(),
   and queues sc_rx_work, waiting for o2net_wq to process it.
5) o2net_wq is a single-threaded workqueue that processes work items
   one by one. It is still running o2net_listen_work and cannot
   handle sc_rx_work, so we deadlock.

It is impossible to set GFP_NOFS for the memory allocation in
sock_alloc(), so we use PF_FSTRANS to keep the task from re-entering
the filesystem when available memory is low.

Signed-off-by: joyce.xue <xuejiufei@huawei.com>
---
 fs/ocfs2/cluster/tcp.c | 7 +++++++
 fs/super.c             | 3 +++
 2 files changed, 10 insertions(+)