[1/5] MM: avoid throttling reclaim for loop-back nfsd threads.

Message ID 20140423024058.4725.71995.stgit@notabene.brown (mailing list archive)
State New, archived

Commit Message

NeilBrown April 23, 2014, 2:40 a.m. UTC
When a loop-back NFS mount is active and the backing device for the
NFS mount becomes congested, the congestion can impose throttling
delays on the nfsd threads.

These delays significantly reduce throughput, so the NFS mount
remains congested.

The result is a livelock: the congestion never clears and the reduced
throughput persists.

This livelock has been observed in testing via the 'wait_iff_congested'
call, and could possibly be triggered by the 'congestion_wait' call as
well.

The livelock is similar to the deadlock that justified the
introduction of PF_LESS_THROTTLE, and the same flag can be used to
remove it.

To minimise the impact of the change, we still throttle nfsd when the
filesystem it is writing to is congested, but not when some separate
filesystem (e.g. the NFS filesystem) is congested.
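
For reference, the opt-in side is just a per-task flag; a minimal
sketch of the pattern (the function below is hypothetical, not the
actual fs/nfsd code):

#include <linux/sched.h>

/*
 * Hypothetical illustration of the PF_LESS_THROTTLE pattern: set the
 * flag while writing into the page cache on behalf of a backing
 * device, so that reclaim does not stall the one thread that can
 * actually clear the congestion, then restore the caller's state.
 */
static void loopback_service_write(void)
{
	unsigned int pflags = current->flags & PF_LESS_THROTTLE;

	current->flags |= PF_LESS_THROTTLE;

	/* ... write pages into the page cache here ... */

	if (!pflags)
		current->flags &= ~PF_LESS_THROTTLE;
}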

Signed-off-by: NeilBrown <neilb@suse.de>
---
 mm/vmscan.c |   18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)




Comments

Andrew Morton April 23, 2014, 10:03 p.m. UTC | #1
On Wed, 23 Apr 2014 12:40:58 +1000 NeilBrown <neilb@suse.de> wrote:

> [...]
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index a9c74b409681..e011a646de95 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1424,6 +1424,18 @@ putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
>  	list_splice(&pages_to_free, page_list);
>  }
>  
> +/* If a kernel thread (such as nfsd for loop-back mounts) services

/*
 * If ...

please

> + * a backing device by writing to the page cache it sets PF_LESS_THROTTLE.
> + * In that case we should only throttle if the backing device it is
> + * writing to is congested.  In other cases it is safe to throttle.
> + */
> +static int current_may_throttle(void)
> +{
> +	return !(current->flags & PF_LESS_THROTTLE) ||
> +		current->backing_dev_info == NULL ||
> +		bdi_write_congested(current->backing_dev_info);
> +}
> +
>  /*
>   * shrink_inactive_list() is a helper for shrink_zone().  It returns the number
>   * of reclaimed pages
> @@ -1552,7 +1564,8 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
>  		 * implies that pages are cycling through the LRU faster than
>  		 * they are written so also forcibly stall.
>  		 */
> -		if (nr_unqueued_dirty == nr_taken || nr_immediate)
> +		if ((nr_unqueued_dirty == nr_taken || nr_immediate)
> +		    && current_may_throttle())

	foo &&
	bar

please.  As you did in current_may_throttle().

>  			congestion_wait(BLK_RW_ASYNC, HZ/10);
>  	}
>  
> @@ -1561,7 +1574,8 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
>  	 * is congested. Allow kswapd to continue until it starts encountering
>  	 * unqueued dirty pages or cycling through the LRU too quickly.
>  	 */
> -	if (!sc->hibernation_mode && !current_is_kswapd())
> +	if (!sc->hibernation_mode && !current_is_kswapd()
> +	    && current_may_throttle())

ditto

>  		wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
>  
>  	trace_mm_vmscan_lru_shrink_inactive(zone->zone_pgdat->node_id,
> 
NeilBrown April 23, 2014, 10:47 p.m. UTC | #2
On Wed, 23 Apr 2014 15:03:18 -0700 Andrew Morton <akpm@linux-foundation.org>
wrote:

> On Wed, 23 Apr 2014 12:40:58 +1000 NeilBrown <neilb@suse.de> wrote:
> 
> [...]
Thanks.  I've made those changes and will resend just that patch.
As it is quite independent of all the others, can you take it and I'll
funnel the others through other trees?

Thanks
NeilBrown
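
For reference, with both of those style requests applied the resent
hunks would presumably read:

/*
 * If a kernel thread (such as nfsd for loop-back mounts) services
 * a backing device by writing to the page cache it sets PF_LESS_THROTTLE.
 * In that case we should only throttle if the backing device it is
 * writing to is congested.  In other cases it is safe to throttle.
 */
static int current_may_throttle(void)
{
	return !(current->flags & PF_LESS_THROTTLE) ||
		current->backing_dev_info == NULL ||
		bdi_write_congested(current->backing_dev_info);
}

		if ((nr_unqueued_dirty == nr_taken || nr_immediate) &&
		    current_may_throttle())
			congestion_wait(BLK_RW_ASYNC, HZ/10);

	if (!sc->hibernation_mode && !current_is_kswapd() &&
	    current_may_throttle())
		wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);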

Patch

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a9c74b409681..e011a646de95 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1424,6 +1424,18 @@ putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
 	list_splice(&pages_to_free, page_list);
 }
 
+/* If a kernel thread (such as nfsd for loop-back mounts) services
+ * a backing device by writing to the page cache it sets PF_LESS_THROTTLE.
+ * In that case we should only throttle if the backing device it is
+ * writing to is congested.  In other cases it is safe to throttle.
+ */
+static int current_may_throttle(void)
+{
+	return !(current->flags & PF_LESS_THROTTLE) ||
+		current->backing_dev_info == NULL ||
+		bdi_write_congested(current->backing_dev_info);
+}
+
 /*
  * shrink_inactive_list() is a helper for shrink_zone().  It returns the number
  * of reclaimed pages
@@ -1552,7 +1564,8 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 		 * implies that pages are cycling through the LRU faster than
 		 * they are written so also forcibly stall.
 		 */
-		if (nr_unqueued_dirty == nr_taken || nr_immediate)
+		if ((nr_unqueued_dirty == nr_taken || nr_immediate)
+		    && current_may_throttle())
 			congestion_wait(BLK_RW_ASYNC, HZ/10);
 	}
 
@@ -1561,7 +1574,8 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	 * is congested. Allow kswapd to continue until it starts encountering
 	 * unqueued dirty pages or cycling through the LRU too quickly.
 	 */
-	if (!sc->hibernation_mode && !current_is_kswapd())
+	if (!sc->hibernation_mode && !current_is_kswapd()
+	    && current_may_throttle())
 		wait_iff_congested(zone, BLK_RW_ASYNC, HZ/10);
 
 	trace_mm_vmscan_lru_shrink_inactive(zone->zone_pgdat->node_id,
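
For completeness: current->backing_dev_info is set by the generic
buffered write path for the duration of a write, which is what gives
current_may_throttle() the right device to test.  A simplified sketch
(modelled loosely on mm/filemap.c of this era; the function name and
structure are abbreviated):

#include <linux/backing-dev.h>
#include <linux/fs.h>
#include <linux/sched.h>

/*
 * Sketch of a buffered write: record the backing device being written
 * to so that reclaim (via current_may_throttle() above) can check
 * congestion on that device rather than throttling unconditionally.
 */
static ssize_t buffered_write_sketch(struct address_space *mapping)
{
	ssize_t written;

	/* We can write back this queue in page reclaim */
	current->backing_dev_info = mapping->backing_dev_info;

	written = 0;	/* ... perform the actual buffered write ... */

	current->backing_dev_info = NULL;
	return written;
}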