[0/3,V5] drop the mmap_sem when doing IO in the fault path

Message ID 20181211173801.29535-1-josef@toxicpanda.com (mailing list archive)

Message

Josef Bacik Dec. 11, 2018, 5:37 p.m. UTC
Here's the latest version, slimmed down a bit from my last submission, with
more details in the changelogs as requested.

v4->v5:
- dropped the cached_page infrastructure and the addition of the
  handle_mm_fault_cacheable helper, as it had no discernible bearing on
  performance in my testing.
- reworked the page lock dropping logic into its own helper, with comments
  describing how to use it.
- added more details to the changelog for the fpin patch.
- added a patch to clean up the arguments for the mmap readahead functions, as
  per Jan's suggestion.

v3->v4:
- dropped the ->page_mkwrite portion of these patches; we don't actually see
  issues with mkwrite in production, and I kept running into corner cases where
  I missed something important.  I want to wait on that part until I have a real
  reason to do the work so I can have a solid test in place.
- completely reworked how we drop the mmap_sem in filemap_fault and cleaned it
  up a bit.  Once I started actually testing this with our horrifying reproducer
  I saw a bunch of places where we still ended up doing IO under the mmap_sem
  because I had missed a few corner cases.  Fixed this by reworking
  filemap_fault to only return RETRY once it has a completely uptodate page
  ready to be used.
- lots more testing, including production testing.

v2->v3:
- dropped the RFC, ready for a real review.
- fixed a kbuild error for !MMU configs.
- dropped the swapcache patches since Johannes is still working on those parts.

v1->v2:
- reworked so it only affects x86, since it's the only arch I can build and test.
- fixed the fact that do_page_mkwrite wasn't actually sending ALLOW_RETRY down
  to ->page_mkwrite.
- fixed error handling in do_page_mkwrite/callers to explicitly catch
  VM_FAULT_RETRY.
- fixed btrfs to set ->cached_page properly.

-- Original message --

Now that we have proper isolation in place with cgroups2, we have started going
through and fixing the various priority inversions.  Most are gone now, but
this one is sort of weird since it's not necessarily a priority inversion that
happens within the kernel, but rather one caused by something userspace does.

We have giant applications that we want to protect, and parts of these
applications do things like watch the system state to determine how healthy the
box is for load balancing and such.  This involves running 'ps' or other such
utilities.  These utilities will often walk /proc/<pid>/whatever, and these
files can sometimes need to down_read(&task->mmap_sem).  Not usually a big deal,
but we noticed during stress testing that our protected application sometimes
has latency spikes trying to get the mmap_sem for tasks that are in lower
priority cgroups.

This is because any down_write() on an rwsem essentially turns it into a
mutex: even if it is currently held for reading, any new readers will not be
allowed in, to keep from starving the writer.  This is fine, except a lower
priority task could be stuck doing IO because it has been throttled to the
point that its IO is taking much longer than normal.  But because a higher
priority group depends on this completing, it is now stuck behind lower
priority work.

In order to avoid this particular priority inversion, we want to use the
existing retry mechanism so that we do not hold the mmap_sem at all if we are
going to do IO.  This sort of already exists in the read case, but it needed to
be extended to cover more than just grabbing the page lock.  With io.latency we
throttle at submit_bio() time, so the readahead paths can block and even
page_cache_read can block; all of these paths need to have the mmap_sem
dropped.
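
As a rough sketch of the mechanism (the helper and caller names below are
illustrative and may not match the patches exactly), the idea is: pin the
struct file so the mapping can't go away, drop the mmap_sem before anything
that might block on IO, and return VM_FAULT_RETRY so the fault is retried once
the IO has been kicked off:

static struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
					     struct file *fpin)
{
	int flags = vmf->flags;

	if (fpin)
		return fpin;

	/*
	 * Only drop the mmap_sem if the fault can actually be retried,
	 * i.e. FAULT_FLAG_ALLOW_RETRY is set and NOWAIT is not.
	 */
	if ((flags & (FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT)) ==
	    FAULT_FLAG_ALLOW_RETRY) {
		fpin = get_file(vmf->vma->vm_file);
		up_read(&vmf->vma->vm_mm->mmap_sem);
	}
	return fpin;
}

/* Illustrative caller, e.g. around synchronous readahead in the fault path. */
static vm_fault_t example_fault_readahead(struct vm_fault *vmf)
{
	struct file *file = vmf->vma->vm_file;
	struct address_space *mapping = file->f_mapping;
	struct file *fpin = NULL;

	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
	page_cache_sync_readahead(mapping, &file->f_ra, file, vmf->pgoff,
				  file->f_ra.ra_pages);
	if (fpin) {
		/* mmap_sem was dropped; tell the arch fault handler to retry. */
		fput(fpin);
		return VM_FAULT_RETRY;
	}
	return 0;
}

The arch fault handler then re-takes the mmap_sem and retries the fault, which
should now find an uptodate page in the page cache.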

The other big thing is ->page_mkwrite.  btrfs is particularly shitty here
because we have to reserve space for the dirty page, which can be a very
expensive operation.  We use the same retry method as the read path, and simply
cache the page and verify the page is still set up properly on the next pass
through ->page_mkwrite().
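
For illustration only (note this ->page_mkwrite portion was dropped in v4, per
the changelog above), the pattern looks roughly like the sketch below; the
reservation helpers and the "is it already reserved" check are placeholders,
not the series' actual API:

/* Hypothetical filesystem helpers, stand-ins for e.g. btrfs space reservation. */
static bool example_space_reserved(struct inode *inode, struct page *page);
static void example_reserve_space(struct inode *inode, unsigned long bytes);

static vm_fault_t example_page_mkwrite(struct vm_fault *vmf)
{
	struct page *page = vmf->page;
	struct inode *inode = file_inode(vmf->vma->vm_file);

	if (!example_space_reserved(inode, page)) {
		/*
		 * Reserving space for the soon-to-be-dirty page may block on
		 * throttled IO, so if the fault can be retried, do the
		 * reservation without the mmap_sem held and ask the caller
		 * to retry the fault.
		 */
		if (vmf->flags & FAULT_FLAG_ALLOW_RETRY) {
			get_page(page);		/* keep the page pinned while we block */
			up_read(&vmf->vma->vm_mm->mmap_sem);
			example_reserve_space(inode, PAGE_SIZE);
			put_page(page);
			return VM_FAULT_RETRY;
		}
		/* Can't retry: reserve with the mmap_sem held, as before. */
		example_reserve_space(inode, PAGE_SIZE);
	}

	/* Retry pass: verify the page is still ours before dirtying it. */
	lock_page(page);
	if (page->mapping != inode->i_mapping) {
		unlock_page(page);
		return VM_FAULT_NOPAGE;		/* truncated, fault again */
	}
	set_page_dirty(page);
	return VM_FAULT_LOCKED;
}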

I've tested these patches with xfstests and there are no regressions.  Let me
know what you think.  Thanks,

Josef

Comments

Jan Kara Dec. 12, 2018, 10:10 a.m. UTC | #1
On Tue 11-12-18 12:38:00, Josef Bacik wrote:
> All of the arguments to these functions come from the vmf, and the
> following patches are going to add more arguments.  Cut down on the
> number of arguments passed by simply passing in the vmf to these two
> helpers.
> 
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>

The patch looks good. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  mm/filemap.c | 28 ++++++++++++++--------------
>  1 file changed, 14 insertions(+), 14 deletions(-)
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 03bce38d8f2b..8fc45f24b201 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2309,20 +2309,20 @@ EXPORT_SYMBOL(generic_file_read_iter);
>   * Synchronous readahead happens when we don't even find
>   * a page in the page cache at all.
>   */
> -static void do_sync_mmap_readahead(struct vm_area_struct *vma,
> -				   struct file_ra_state *ra,
> -				   struct file *file,
> -				   pgoff_t offset)
> +static void do_sync_mmap_readahead(struct vm_fault *vmf)
>  {
> +	struct file *file = vmf->vma->vm_file;
> +	struct file_ra_state *ra = &file->f_ra;
>  	struct address_space *mapping = file->f_mapping;
> +	pgoff_t offset = vmf->pgoff;
>  
>  	/* If we don't want any read-ahead, don't bother */
> -	if (vma->vm_flags & VM_RAND_READ)
> +	if (vmf->vma->vm_flags & VM_RAND_READ)
>  		return;
>  	if (!ra->ra_pages)
>  		return;
>  
> -	if (vma->vm_flags & VM_SEQ_READ) {
> +	if (vmf->vma->vm_flags & VM_SEQ_READ) {
>  		page_cache_sync_readahead(mapping, ra, file, offset,
>  					  ra->ra_pages);
>  		return;
> @@ -2352,16 +2352,16 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
>   * Asynchronous readahead happens when we find the page and PG_readahead,
>   * so we want to possibly extend the readahead further..
>   */
> -static void do_async_mmap_readahead(struct vm_area_struct *vma,
> -				    struct file_ra_state *ra,
> -				    struct file *file,
> -				    struct page *page,
> -				    pgoff_t offset)
> +static void do_async_mmap_readahead(struct vm_fault *vmf,
> +				    struct page *page)
>  {
> +	struct file *file = vmf->vma->vm_file;
> +	struct file_ra_state *ra = &file->f_ra;
>  	struct address_space *mapping = file->f_mapping;
> +	pgoff_t offset = vmf->pgoff;
>  
>  	/* If we don't want any read-ahead, don't bother */
> -	if (vma->vm_flags & VM_RAND_READ)
> +	if (vmf->vma->vm_flags & VM_RAND_READ)
>  		return;
>  	if (ra->mmap_miss > 0)
>  		ra->mmap_miss--;
> @@ -2418,10 +2418,10 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>  		 * We found the page, so try async readahead before
>  		 * waiting for the lock.
>  		 */
> -		do_async_mmap_readahead(vmf->vma, ra, file, page, offset);
> +		do_async_mmap_readahead(vmf, page);
>  	} else if (!page) {
>  		/* No page in the page cache at all */
> -		do_sync_mmap_readahead(vmf->vma, ra, file, offset);
> +		do_sync_mmap_readahead(vmf);
>  		count_vm_event(PGMAJFAULT);
>  		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
>  		ret = VM_FAULT_MAJOR;
> -- 
> 2.14.3
>