[0/12,v4] fs: Hole punch vs page cache filling races

Message ID: 20210423171010.12-1-jack@suse.cz

Message

Jan Kara April 23, 2021, 5:29 p.m. UTC
Hello,

here is another version of my patches to address races between hole punching
and page cache filling functions for ext4 and other filesystems. I think
we are coming close to a complete solution so I've removed the RFC tag from
the subject. I went through all filesystems supporting hole punching and
converted them from their private locks to a generic one (usually fixing the
race ext4 had as a side effect). I also found out ceph & cifs didn't have
any protection from the hole punch vs page fault race either so I've added
appropriate protections there. Still open are GFS2 and OCFS2. GFS2 actually
avoids the race but is prone to deadlocks (it acquires the same lock both
above and below mmap_sem); OCFS2 locking seems kind of hosed, and some read,
write, and hole punch paths are not properly serialized, possibly leading to
fs corruption. Both issues are non-trivial, so the respective fs maintainers
have to deal with them (I've informed them and the problems were generally
confirmed). Anyway, for all the other filesystems this kind of race should
now be closed.

As a next step, I'd like to make sure all calls to truncate_inode_pages()
happen under mapping->invalidate_lock, add an assertion for that, and then we
can also get rid of i_size checks in some places (truncate can use the same
serialization scheme as hole punch). But that step is mostly a cleanup, so I'd
like to get these functional fixes in first.
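
To illustrate, a minimal sketch of what such an assertion might look like
(assuming the lock stays the rwsem mapping->invalidate_lock introduced by this
series; this is just an illustration, not a patch from the series):

	/* Sketch: enforce that page cache truncation runs under invalidate_lock. */
	void truncate_inode_pages(struct address_space *mapping, loff_t lstart)
	{
		lockdep_assert_held_write(&mapping->invalidate_lock);
		truncate_inode_pages_range(mapping, lstart, (loff_t)-1);
	}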

Changes since v3:
* Renamed and moved lock to struct address_space
* Added conversions of tmpfs, ceph, cifs, fuse, f2fs
* Fixed error handling path in filemap_read()
* Removed .page_mkwrite() cleanup from the series for now

Changes since v2:
* Added documentation and comments regarding lock ordering and how the lock is
  supposed to be used
* Added conversions of ext2, xfs, zonefs
* Added patch removing i_mapping_sem protection from .page_mkwrite handlers

Changes since v1:
* Moved to using inode->i_mapping_sem instead of aops handler to acquire
  appropriate lock

---
Motivation:

Amir has reported [1] that ext4 has a potential issue where reads can race
with hole punching, possibly exposing stale data from freed blocks or even
corrupting the filesystem when stale mapping data gets used for writeout. The
problem is that during hole punching, new page cache pages can get
instantiated, and their block mappings looked up, in the punched range after
truncate_inode_pages() has run but before the filesystem removes the blocks
from the file. In principle, any filesystem implementing hole punching thus
needs a mechanism to block instantiation of page cache pages during hole
punching to avoid this race. This is further complicated by the fact that
there are multiple places that can instantiate pages in the page cache: a
regular read(2) or a page fault can do this, but fadvise(2) or madvise(2) can
also result in reading in page cache pages through force_page_cache_readahead().

There are a couple of ways to fix this. The first (currently implemented by
XFS) is to protect read(2) and *advise(2) calls with i_rwsem so that they are
serialized with hole punching. This is easy to do, but as a result all reads
would be serialized with writes, and mixed read-write workloads would suffer
heavily on ext4. Thus this series introduces mapping->invalidate_lock and uses
it when creating new pages in the page cache and looking up their
corresponding block mapping. We also replace EXT4_I(inode)->i_mmap_sem with
this new rwsem, which provides the necessary serialization with hole punching
for ext4.
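
For illustration, a simplified sketch of the intended locking scheme follows.
The example_* helpers are hypothetical placeholders for the fs-specific work,
not functions from this series; the lock itself is the rwsem
mapping->invalidate_lock that these patches add to struct address_space.

	#include <linux/fs.h>
	#include <linux/pagemap.h>

	/* Hypothetical placeholders for the fs-specific work. */
	extern int example_instantiate_page(struct address_space *mapping, pgoff_t index);
	extern void example_remove_blocks(struct inode *inode, loff_t start, loff_t end);

	/* Page cache filling (read(2), fault, readahead): hold the lock shared. */
	static int example_fill_page_cache(struct address_space *mapping, pgoff_t index)
	{
		int err;

		down_read(&mapping->invalidate_lock);
		/* Instantiate the page and look up its block mapping under the lock. */
		err = example_instantiate_page(mapping, index);
		up_read(&mapping->invalidate_lock);
		return err;
	}

	/* Hole punch: hold the lock exclusively across truncation and block removal. */
	static int example_punch_hole(struct inode *inode, loff_t start, loff_t end)
	{
		struct address_space *mapping = inode->i_mapping;

		down_write(&mapping->invalidate_lock);
		truncate_inode_pages_range(mapping, start, end);
		example_remove_blocks(inode, start, end);
		up_write(&mapping->invalidate_lock);
		return 0;
	}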

								Honza

[1] https://lore.kernel.org/linux-fsdevel/CAOQ4uxjQNmxqmtA_VbYW0Su9rKRk2zobJmahcyeaEVOFKVQ5dw@mail.gmail.com/

Previous versions:
Link: https://lore.kernel.org/linux-fsdevel/20210208163918.7871-1-jack@suse.cz/
Link: http://lore.kernel.org/r/20210413105205.3093-1-jack@suse.cz

CC: ceph-devel@vger.kernel.org
CC: Chao Yu <yuchao0@huawei.com>
CC: Damien Le Moal <damien.lemoal@wdc.com>
CC: "Darrick J. Wong" <darrick.wong@oracle.com>
CC: Hugh Dickins <hughd@google.com>
CC: Jaegeuk Kim <jaegeuk@kernel.org>
CC: Jeff Layton <jlayton@kernel.org>
CC: Johannes Thumshirn <jth@kernel.org>
CC: linux-cifs@vger.kernel.org
CC: <linux-ext4@vger.kernel.org>
CC: linux-f2fs-devel@lists.sourceforge.net
CC: <linux-fsdevel@vger.kernel.org>
CC: <linux-mm@kvack.org>
CC: <linux-xfs@vger.kernel.org>
CC: Miklos Szeredi <miklos@szeredi.hu>
CC: Steve French <sfrench@samba.org>
CC: Ted Tso <tytso@mit.edu>

Comments

Dave Chinner April 23, 2021, 10:07 p.m. UTC | #1
Hi Jan,

In future, can you please use the same cc-list for the entire
patchset?

The stuff that has hit the XFS list (where I'm replying from)
doesn't give me any context as to what the core changes are that
allow XFS to be changed, so I can't review them in isolation.

I've got to spend time now reconstructing the patchset into a single
series because the delivery has been spread across three different
mailing lists and so hit 3 different procmail filters.  I'll comment
on the patches once I've reconstructed the series and read through
it as a whole...

/me considers the way people use "cc" tags in git commits for
including mailing lists on individual patches actively harmful.
Unless the recipient is subscribed to all the mailing lists the
patchset was CC'd to, they can't easily find the bits of the
patchset that didn't arrive in their mail box. Individual mailing
lists should receive entire patchsets for review, not random,
individual, context-free patches.

And, FWIW, cc'ing the cover letter to all the mailing lists is not
good enough. Being able to see the code change as a whole is what
matters for review, not the cover letter...

Cheers,

Dave.

Matthew Wilcox April 23, 2021, 11:51 p.m. UTC | #2
On Sat, Apr 24, 2021 at 08:07:51AM +1000, Dave Chinner wrote:
> I've got to spend time now reconstructing the patchset into a single
> series because the delivery has been spread across three different
> mailing lists and so hit 3 different procmail filters.  I'll comment
> on the patches once I've reconstructed the series and read through
> it as a whole...

$ b4 mbox 20210423171010.12-1-jack@suse.cz
Looking up https://lore.kernel.org/r/20210423171010.12-1-jack%40suse.cz
Grabbing thread from lore.kernel.org/ceph-devel
6 messages in the thread
Saved ./20210423171010.12-1-jack@suse.cz.mbx
Christoph Hellwig April 24, 2021, 6:11 a.m. UTC | #3
On Sat, Apr 24, 2021 at 12:51:49AM +0100, Matthew Wilcox wrote:
> On Sat, Apr 24, 2021 at 08:07:51AM +1000, Dave Chinner wrote:
> > I've got to spend time now reconstructing the patchset into a single
> > series because the delivery has been spread across three different
> > mailing lists and so hit 3 different procmail filters.  I'll comment
> > on the patches once I've reconstructed the series and read through
> > it as a whole...
> 
> $ b4 mbox 20210423171010.12-1-jack@suse.cz
> Looking up https://lore.kernel.org/r/20210423171010.12-1-jack%40suse.cz
> Grabbing thread from lore.kernel.org/ceph-devel
> 6 messages in the thread
> Saved ./20210423171010.12-1-jack@suse.cz.mbx

Yikes.  Just send them damn mails.  Or switch the lists to NNTP, but
don't let the people who are reviewing your patches do stupid work
with weird tools.
Hugh Dickins April 29, 2021, 4:12 a.m. UTC | #4
On Fri, 23 Apr 2021, Jan Kara wrote:

> Shmem uses a home-grown mechanism for serializing hole punch with page
> fault. Use mapping->invalidate_lock for it instead. Admittedly the
> home-grown mechanism locks out only the range being actually punched out
> while invalidate_lock locks the whole mapping so it is serializing more.
> But hole punch doesn't seem to be that critical operation and the
> simplification is noticeable.

Home-grown indeed (and went through several different bugginesses,
Linus fixing issues in its waitq handling found years later).

I'd love to remove it all (rather than replace it by a new rwsem),
but never enough courage+time to do so: on optimistic days (that is,
rarely) I like to think that none of it would be needed nowadays;
but its gestation was difficult, and I cannot easily reproduce the
testing that demanded it (Sasha and Vlastimil helped a lot).

If you're interested in the history, I cannot point to one thread,
but "shmem: fix faulting into a hole while it's punched" finds
some of them, June/July 2014.  You've pushed me into re-reading
there, but I've not yet found the crucial evidence that stopped us
from reverting this mechanism, once we had abandoned the hole-punch
"pincer" in shmem_undo_range().

tmpfs's problem with faulting versus hole-punch was not the data
integrity issue you are attacking with invalidate_lock, but a
starvation issue triggered in Trinity fuzzing.

If invalidate_lock had existed at the time, I might have reused it
for this purpose too - I certainly wanted to avoid enlarging the
inode with another rwsem just for this; but also reluctant to add
another layer of locking to the common path (maybe I'm just silly
to try to avoid an rwsem which is so rarely taken for writing?).

But the code as it stands is working satisfactorily with minimal
overhead: so I'm not in a rush to remove or replace it yet. Thank
you for including tmpfs in your reach, but I think for the moment
I'd prefer you to leave this change out of the series. Maybe later
when it's settled in the fs/ filesystems (perhaps making guarantees
that we might want to extend to tmpfs) we could make this change -
but I'd still rather let hole-punch and fault race freely without it.

But your 01/12, fixing mm comments mentioning i_mutex, looked good:
Acked-by: Hugh Dickins <hughd@google.com>
to that one.  But I think it would be better extracted from this
invalidate_lock series, and just sent to akpm cc linux-mm on its own.

Thanks,
Hugh

> 
> CC: Hugh Dickins <hughd@google.com>
> CC: <linux-mm@kvack.org>
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
>  mm/shmem.c | 98 ++++--------------------------------------------------
>  1 file changed, 7 insertions(+), 91 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 55b2888db542..f34162ac46de 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -95,12 +95,11 @@ static struct vfsmount *shm_mnt;
>  #define SHORT_SYMLINK_LEN 128
>  
>  /*
> - * shmem_fallocate communicates with shmem_fault or shmem_writepage via
> - * inode->i_private (with i_rwsem making sure that it has only one user at
> - * a time): we would prefer not to enlarge the shmem inode just for that.
> + * shmem_fallocate communicates with shmem_writepage via inode->i_private (with
> + * i_rwsem making sure that it has only one user at a time): we would prefer
> + * not to enlarge the shmem inode just for that.
>   */
>  struct shmem_falloc {
> -	wait_queue_head_t *waitq; /* faults into hole wait for punch to end */
>  	pgoff_t start;		/* start of range currently being fallocated */
>  	pgoff_t next;		/* the next page offset to be fallocated */
>  	pgoff_t nr_falloced;	/* how many new pages have been fallocated */
> @@ -1378,7 +1377,6 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
>  			spin_lock(&inode->i_lock);
>  			shmem_falloc = inode->i_private;
>  			if (shmem_falloc &&
> -			    !shmem_falloc->waitq &&
>  			    index >= shmem_falloc->start &&
>  			    index < shmem_falloc->next)
>  				shmem_falloc->nr_unswapped++;
> @@ -2025,18 +2023,6 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>  	return error;
>  }
>  
> -/*
> - * This is like autoremove_wake_function, but it removes the wait queue
> - * entry unconditionally - even if something else had already woken the
> - * target.
> - */
> -static int synchronous_wake_function(wait_queue_entry_t *wait, unsigned mode, int sync, void *key)
> -{
> -	int ret = default_wake_function(wait, mode, sync, key);
> -	list_del_init(&wait->entry);
> -	return ret;
> -}
> -
>  static vm_fault_t shmem_fault(struct vm_fault *vmf)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
> @@ -2046,65 +2032,6 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
>  	int err;
>  	vm_fault_t ret = VM_FAULT_LOCKED;
>  
> -	/*
> -	 * Trinity finds that probing a hole which tmpfs is punching can
> -	 * prevent the hole-punch from ever completing: which in turn
> -	 * locks writers out with its hold on i_rwsem.  So refrain from
> -	 * faulting pages into the hole while it's being punched.  Although
> -	 * shmem_undo_range() does remove the additions, it may be unable to
> -	 * keep up, as each new page needs its own unmap_mapping_range() call,
> -	 * and the i_mmap tree grows ever slower to scan if new vmas are added.
> -	 *
> -	 * It does not matter if we sometimes reach this check just before the
> -	 * hole-punch begins, so that one fault then races with the punch:
> -	 * we just need to make racing faults a rare case.
> -	 *
> -	 * The implementation below would be much simpler if we just used a
> -	 * standard mutex or completion: but we cannot take i_rwsem in fault,
> -	 * and bloating every shmem inode for this unlikely case would be sad.
> -	 */
> -	if (unlikely(inode->i_private)) {
> -		struct shmem_falloc *shmem_falloc;
> -
> -		spin_lock(&inode->i_lock);
> -		shmem_falloc = inode->i_private;
> -		if (shmem_falloc &&
> -		    shmem_falloc->waitq &&
> -		    vmf->pgoff >= shmem_falloc->start &&
> -		    vmf->pgoff < shmem_falloc->next) {
> -			struct file *fpin;
> -			wait_queue_head_t *shmem_falloc_waitq;
> -			DEFINE_WAIT_FUNC(shmem_fault_wait, synchronous_wake_function);
> -
> -			ret = VM_FAULT_NOPAGE;
> -			fpin = maybe_unlock_mmap_for_io(vmf, NULL);
> -			if (fpin)
> -				ret = VM_FAULT_RETRY;
> -
> -			shmem_falloc_waitq = shmem_falloc->waitq;
> -			prepare_to_wait(shmem_falloc_waitq, &shmem_fault_wait,
> -					TASK_UNINTERRUPTIBLE);
> -			spin_unlock(&inode->i_lock);
> -			schedule();
> -
> -			/*
> -			 * shmem_falloc_waitq points into the shmem_fallocate()
> -			 * stack of the hole-punching task: shmem_falloc_waitq
> -			 * is usually invalid by the time we reach here, but
> -			 * finish_wait() does not dereference it in that case;
> -			 * though i_lock needed lest racing with wake_up_all().
> -			 */
> -			spin_lock(&inode->i_lock);
> -			finish_wait(shmem_falloc_waitq, &shmem_fault_wait);
> -			spin_unlock(&inode->i_lock);
> -
> -			if (fpin)
> -				fput(fpin);
> -			return ret;
> -		}
> -		spin_unlock(&inode->i_lock);
> -	}
> -
>  	sgp = SGP_CACHE;
>  
>  	if ((vma->vm_flags & VM_NOHUGEPAGE) ||
> @@ -2113,8 +2040,10 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
>  	else if (vma->vm_flags & VM_HUGEPAGE)
>  		sgp = SGP_HUGE;
>  
> +	down_read(&inode->i_mapping->invalidate_lock);
>  	err = shmem_getpage_gfp(inode, vmf->pgoff, &vmf->page, sgp,
>  				  gfp, vma, vmf, &ret);
> +	up_read(&inode->i_mapping->invalidate_lock);
>  	if (err)
>  		return vmf_error(err);
>  	return ret;
> @@ -2715,7 +2644,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  		struct address_space *mapping = file->f_mapping;
>  		loff_t unmap_start = round_up(offset, PAGE_SIZE);
>  		loff_t unmap_end = round_down(offset + len, PAGE_SIZE) - 1;
> -		DECLARE_WAIT_QUEUE_HEAD_ONSTACK(shmem_falloc_waitq);
>  
>  		/* protected by i_rwsem */
>  		if (info->seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE)) {
> @@ -2723,24 +2651,13 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  			goto out;
>  		}
>  
> -		shmem_falloc.waitq = &shmem_falloc_waitq;
> -		shmem_falloc.start = (u64)unmap_start >> PAGE_SHIFT;
> -		shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT;
> -		spin_lock(&inode->i_lock);
> -		inode->i_private = &shmem_falloc;
> -		spin_unlock(&inode->i_lock);
> -
> +		down_write(&mapping->invalidate_lock);
>  		if ((u64)unmap_end > (u64)unmap_start)
>  			unmap_mapping_range(mapping, unmap_start,
>  					    1 + unmap_end - unmap_start, 0);
>  		shmem_truncate_range(inode, offset, offset + len - 1);
>  		/* No need to unmap again: hole-punching leaves COWed pages */
> -
> -		spin_lock(&inode->i_lock);
> -		inode->i_private = NULL;
> -		wake_up_all(&shmem_falloc_waitq);
> -		WARN_ON_ONCE(!list_empty(&shmem_falloc_waitq.head));
> -		spin_unlock(&inode->i_lock);
> +		up_write(&mapping->invalidate_lock);
>  		error = 0;
>  		goto out;
>  	}
> @@ -2763,7 +2680,6 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  		goto out;
>  	}
>  
> -	shmem_falloc.waitq = NULL;
>  	shmem_falloc.start = start;
>  	shmem_falloc.next  = start;
>  	shmem_falloc.nr_falloced = 0;
> -- 
> 2.26.2
Jan Kara April 29, 2021, 9:30 a.m. UTC | #5
On Wed 28-04-21 21:12:36, Hugh Dickins wrote:
> On Fri, 23 Apr 2021, Jan Kara wrote:
> 
> > Shmem uses a home-grown mechanism for serializing hole punch with page
> > fault. Use mapping->invalidate_lock for it instead. Admittedly the
> > home-grown mechanism locks out only the range being actually punched out
> > while invalidate_lock locks the whole mapping so it is serializing more.
> > But hole punch doesn't seem to be that critical operation and the
> > simplification is noticeable.
> 
> Home-grown indeed (and went through several different bugginesses,
> Linus fixing issues in its waitq handling found years later).
> 
> I'd love to remove it all (rather than replace it by a new rwsem),
> but never enough courage+time to do so: on optimistic days (that is,
> rarely) I like to think that none of it would be needed nowadays;
> but its gestation was difficult, and I cannot easily reproduce the
> testing that demanded it (Sasha and Vlastimil helped a lot).
> 
> If you're interested in the history, I cannot point to one thread,
> but "shmem: fix faulting into a hole while it's punched" finds
> some of them, June/July 2014.  You've pushed me into re-reading
> there, but I've not yet found the crucial evidence that stopped us
> from reverting this mechanism, once we had abandoned the hole-punch
> "pincer" in shmem_undo_range().
> 
> tmpfs's problem with faulting versus hole-punch was not the data
> integrity issue you are attacking with invalidate_lock, but a
> starvation issue triggered in Trinity fuzzing.
> 
> If invalidate_lock had existed at the time, I might have reused it
> for this purpose too - I certainly wanted to avoid enlarging the
> inode with another rwsem just for this; but also reluctant to add
> another layer of locking to the common path (maybe I'm just silly
> to try to avoid an rwsem which is so rarely taken for writing?).
> 
> But the code as it stands is working satisfactorily with minimal
> overhead: so I'm not in a rush to remove or replace it yet. Thank
> you for including tmpfs in your reach, but I think for the moment
> I'd prefer you to leave this change out of the series. Maybe later
> when it's settled in the fs/ filesystems (perhaps making guarantees
> that we might want to extend to tmpfs) we could make this change -
> but I'd still rather let hole-punch and fault race freely without it.

OK, I'll remove the patch from the series for now. As you say, tmpfs is not
buggy so we can postpone the cleanup for later.

> But your 01/12, fixing mm comments mentioning i_mutex, looked good:
> Acked-by: Hugh Dickins <hughd@google.com>
> to that one.  But I think it would be better extracted from this
> invalidate_lock series, and just sent to akpm cc linux-mm on its own.

Thanks for the review, and yes, I guess I can send that patch to Andrew earlier.

								Honza
