vfs: Avoid softlockups in drop_pagecache_sb()

Message ID 20181206161152.3290-1-jack@suse.cz (mailing list archive)
State New, archived

Commit Message

Jan Kara Dec. 6, 2018, 4:11 p.m. UTC
When a superblock has lots of inodes without any pagecache (as is the case for
/proc), drop_pagecache_sb() will iterate through all of them without dropping
sb->s_inode_list_lock, which can lead to softlockups.

Fix the problem by going to the slow path and doing cond_resched() in case the
process needs rescheduling.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/drop_caches.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

Comments

Michal Hocko Jan. 8, 2019, 3:16 p.m. UTC | #1
Is this staged in any existing tree?

On Thu 06-12-18 17:11:52, Jan Kara wrote:
> When a superblock has lots of inodes without any pagecache (as is the case for
> /proc), drop_pagecache_sb() will iterate through all of them without dropping
> sb->s_inode_list_lock, which can lead to softlockups.
> 
> Fix the problem by going to the slow path and doing cond_resched() in case the
> process needs rescheduling.

FWIW the patch looks good to me.

> Signed-off-by: Jan Kara <jack@suse.cz>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  fs/drop_caches.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/drop_caches.c b/fs/drop_caches.c
> index 82377017130f..d31b6c72b476 100644
> --- a/fs/drop_caches.c
> +++ b/fs/drop_caches.c
> @@ -21,8 +21,13 @@ static void drop_pagecache_sb(struct super_block *sb, void *unused)
>  	spin_lock(&sb->s_inode_list_lock);
>  	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
>  		spin_lock(&inode->i_lock);
> +		/*
> +		 * We must skip inodes in unusual state. We may also skip
> +		 * inodes without pages but we deliberately won't in case
> +		 * we need to reschedule to avoid softlockups.
> +		 */
>  		if ((inode->i_state & (I_FREEING|I_WILL_FREE|I_NEW)) ||
> -		    (inode->i_mapping->nrpages == 0)) {
> +		    (inode->i_mapping->nrpages == 0 && !need_resched())) {
>  			spin_unlock(&inode->i_lock);
>  			continue;
>  		}
> @@ -30,6 +35,7 @@ static void drop_pagecache_sb(struct super_block *sb, void *unused)
>  		spin_unlock(&inode->i_lock);
>  		spin_unlock(&sb->s_inode_list_lock);
>  
> +		cond_resched();
>  		invalidate_mapping_pages(inode->i_mapping, 0, -1);
>  		iput(toput_inode);
>  		toput_inode = inode;
> -- 
> 2.16.4