diff mbox series

[1/2,V3] MM: replace PF_LESS_THROTTLE with PF_LOCAL_THROTTLE

Message ID 87tv1ks24t.fsf@notabene.neil.brown.name (mailing list archive)
State New, archived
Headers show
Series [1/2,V3] MM: replace PF_LESS_THROTTLE with PF_LOCAL_THROTTLE | expand

Commit Message

NeilBrown April 16, 2020, 12:30 a.m. UTC
PF_LESS_THROTTLE exists for loop-back nfsd (and a similar need in the
loop block driver and callers of prctl(PR_SET_IO_FLUSHER)), where a
daemon needs to write to one bdi (the final bdi) in order to free up
writes queued to another bdi (the client bdi).

The daemon sets PF_LESS_THROTTLE and gets a larger allowance of dirty
pages, so that it can still dirty pages after other processes have been
throttled.

This approach was designed when all threads were blocked equally,
independently of which device they were writing to, or how fast it was.
Since that time the writeback algorithm has changed substantially with
different threads getting different allowances based on non-trivial
heuristics.  This means the simple "add 25%" heuristic is no longer
reliable.
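
For reference, the 25% bump being removed looks like this in
domain_dirty_limits() (copied from the hunk below):

	if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk)) {
		bg_thresh += bg_thresh / 4 + global_wb_domain.dirty_limit / 32;
		thresh += thresh / 4 + global_wb_domain.dirty_limit / 32;
	}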

The important issue is not that the daemon needs a *larger* dirty page
allowance, but that it needs a *private* dirty page allowance, so that
dirty pages for the "client" bdi that it is helping to clear (the bdi for
an NFS filesystem or loop block device etc) do not affect the throttling
of the daemon writing to the "final" bdi.

This patch changes the heuristic so that the task is only throttled if
*both* the global threshold *and* the per-wb threshold are exceeded.
This is similar to the effect of BDI_CAP_STRICTLIMIT which causes the
global limits to be ignored, but it isn't as strict.  A PF_LOCAL_THROTTLE
task is allowed to proceed unthrottled if either the global threshold or
the local threshold has not been exceeded; both must be exceeded before
the task is throttled.

This approach of "only throttle when the target bdi is busy" is consistent
with the other use of PF_LESS_THROTTLE, in current_may_throttle(), where
it causes attention to be focussed only on the target bdi.

So this patch
 - renames PF_LESS_THROTTLE to PF_LOCAL_THROTTLE,
 - removes the 25% bonus that that flag gives, and
 - skips any throttling of PF_LOCAL_THROTTLE tasks unless both
   thresholds are exceeded.

Note that previously realtime threads were treated the same as
PF_LESS_THROTTLE threads.  This patch does *not* change the behaviour for
real-time threads, so it is now different from the behaviour of nfsd and
loop tasks.  I don't know what is wanted for realtime.

Acked-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: NeilBrown <neilb@suse.de>
---
 drivers/block/loop.c  |  2 +-
 fs/nfsd/vfs.c         |  9 +++++----
 include/linux/sched.h |  3 ++-
 kernel/sys.c          |  2 +-
 mm/page-writeback.c   | 18 ++++++++++++++----
 mm/vmscan.c           |  4 ++--
 6 files changed, 25 insertions(+), 13 deletions(-)

Comments

Christoph Hellwig April 16, 2020, 6:54 a.m. UTC | #1
> +		if (current->flags & PF_LOCAL_THROTTLE)
> +			/* This task must only be throttled based on the bdi
> +			 * it is writing to - dirty pages for other bdis might
> +			 * be pages this task is trying to write out.  So it
> +			 * gets a free pass unless both global and local
> +			 * thresholds are exceeded.  i.e unless
> +			 * "dirty_exceeded".
> +			 */

This is not our normal multi-line comment style.  The first line should
be just a

			/*
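
i.e. the whole comment in the usual style would look something like:

	/*
	 * This task must only be throttled based on the bdi it is
	 * writing to - dirty pages for other bdis might be pages
	 * this task is trying to write out.  So it gets a free pass
	 * unless both global and local thresholds are exceeded,
	 * i.e. unless "dirty_exceeded".
	 */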

Otherwise this looks good.
Jan Kara April 16, 2020, 3:19 p.m. UTC | #2
On Thu 16-04-20 10:30:42, NeilBrown wrote:
...

> @@ -1700,6 +1699,17 @@ static void balance_dirty_pages(struct bdi_writeback *wb,
>  				sdtc = mdtc;
>  		}
>  
> +		if (current->flags & PF_LOCAL_THROTTLE)
> +			/* This task must only be throttled based on the bdi
> +			 * it is writing to - dirty pages for other bdis might
> +			 * be pages this task is trying to write out.  So it
> +			 * gets a free pass unless both global and local
> +			 * thresholds are exceeded.  i.e unless
> +			 * "dirty_exceeded".
> +			 */
> +			if (!dirty_exceeded)
> +				break;
> +
>  		if (dirty_exceeded && !wb->dirty_exceeded)
>  			wb->dirty_exceeded = 1;

Ok, but note that this will have one side effect you maybe didn't realize:
Currently we try to throttle tasks softly - the heuristic roughly works like
this: If dirty < (thresh + bg_thresh)/2, leave the task alone.
(thresh+bg_thresh)/2 is called the "freerun ceiling". If dirty is greater
than this, we delay the task somewhat (the aim is to delay the task as long
as it would take to write back the pages the task has dirtied) in
balance_dirty_pages() so ideally 'thresh' is never hit. Only if the
heuristic consistently underestimates the time to write back pages do we hit
'thresh' and then block the task as long as it takes the flush worker to
clean enough pages to get below 'thresh'. This all leads to the task usually
being gradually slowed down in balance_dirty_pages(), which generally leads
to smoother overall system behavior.
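
For reference, that ceiling is computed by a small helper in
mm/page-writeback.c:

	static unsigned long dirty_freerun_ceiling(unsigned long thresh,
						   unsigned long bg_thresh)
	{
		/* midpoint between the background and hard dirty thresholds */
		return (thresh + bg_thresh) / 2;
	}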

What you did makes PF_LOCAL_THROTTLE tasks ignore any limits, and then
when the local bdi limit is exceeded, they'll suddenly hit the wall and be
blocked for a long time in balance_dirty_pages().

So I like what you suggest in principle, just I think the implementation
has undesirable side effects. I think it would be better to modify
wb_position_ratio() to take PF_LOCAL_THROTTLE into account. It will
probably be similar to how BDI_CAP_STRICTLIMIT is handled, but different in
some ways because BDI_CAP_STRICTLIMIT takes the minimum of wb_pos_ratio and
the global pos_ratio, whereas you rather want to take wb_pos_ratio only.
Also there are some early bail-out conditions when we are over the global
dirty limit which you need to handle differently for PF_LOCAL_THROTTLE. And
then, when you have an appropriate pos_ratio computed based on your policy,
you can let the task wait for an appropriate amount of time and things
should just work (TM) ;).
Thinking about it, you probably also want to add a 'freerun' condition for
PF_LOCAL_THROTTLE tasks like:

	if ((current->flags & PF_LOCAL_THROTTLE) &&
	    wb_dirty <= dirty_freerun_ceiling(wb_thresh, wb_bg_thresh))
		go the freerun path...

								Honza
NeilBrown April 21, 2020, 2:22 a.m. UTC | #3
On Thu, Apr 16 2020, Jan Kara wrote:

> On Thu 16-04-20 10:30:42, NeilBrown wrote:
>
> ...
>
>> @@ -1700,6 +1699,17 @@ static void balance_dirty_pages(struct bdi_writeback *wb,
>>  				sdtc = mdtc;
>>  		}
>>  
>> +		if (current->flags & PF_LOCAL_THROTTLE)
>> +			/* This task must only be throttled based on the bdi
>> +			 * it is writing to - dirty pages for other bdis might
>> +			 * be pages this task is trying to write out.  So it
>> +			 * gets a free pass unless both global and local
>> +			 * thresholds are exceeded.  i.e unless
>> +			 * "dirty_exceeded".
>> +			 */
>> +			if (!dirty_exceeded)
>> +				break;
>> +
>>  		if (dirty_exceeded && !wb->dirty_exceeded)
>>  			wb->dirty_exceeded = 1;
>
> Ok, but note that this will have one side effect you maybe didn't realize:
> Currently we try to throttle tasks softly - the heuristic roughly works like
> this: If dirty < (thresh + bg_thresh)/2, leave the task alone.
> (thresh+bg_thresh)/2 is called the "freerun ceiling". If dirty is greater
> than this, we delay the task somewhat (the aim is to delay the task as long
> as it would take to write back the pages the task has dirtied) in
> balance_dirty_pages() so ideally 'thresh' is never hit. Only if the
> heuristic consistently underestimates the time to write back pages do we hit
> 'thresh' and then block the task as long as it takes the flush worker to
> clean enough pages to get below 'thresh'. This all leads to the task usually
> being gradually slowed down in balance_dirty_pages(), which generally leads
> to smoother overall system behavior.
>
> What you did makes PF_LOCAL_THROTTLE tasks ignore any limits, and then
> when the local bdi limit is exceeded, they'll suddenly hit the wall and be
> blocked for a long time in balance_dirty_pages().
>
> So I like what you suggest in principle, just I think the implementation
> has undesirable side effects. I think it would be better to modify
> wb_position_ratio() to take PF_LOCAL_THROTTLE into account. It will
> probably be similar to how BDI_CAP_STRICTLIMIT is handled, but different in
> some ways because BDI_CAP_STRICTLIMIT takes the minimum of wb_pos_ratio and
> the global pos_ratio, whereas you rather want to take wb_pos_ratio only.
> Also there are some early bail-out conditions when we are over the global
> dirty limit which you need to handle differently for PF_LOCAL_THROTTLE. And
> then, when you have an appropriate pos_ratio computed based on your policy,
> you can let the task wait for an appropriate amount of time and things
> should just work (TM) ;).
> Thinking about it, you probably also want to add a 'freerun' condition for
> PF_LOCAL_THROTTLE tasks like:
>
> 	if ((current->flags & PF_LOCAL_THROTTLE) &&
> 	    wb_dirty <= dirty_freerun_ceiling(wb_thresh, wb_bg_thresh))
> 		go the freerun path...
>

Thanks.....
I have 2 thoughts on this.
One is that I'm not sure how much it really matters.
The PF_LOCAL_THROTTLE task is always doing writeout on behalf of some
other process.  Some process writes to NFS or to a loop block device or
somewhere, then the PF_LOCAL_THROTTLE task writes those dirty pages out
to a different BDI.  So the top level task will be throttled, and the
PF_LOCAL_THROTTLE task won't get more than it can handle.
There will be start-up transients of course, but I doubt it would
generally be a problem.  However it would still be nice to find the
"right" solution.

My second thought is that I really don't understand the writeback code.
I think I understand the general principle, and there are lots of big
comments that try to explain things, but it just doesn't seem to help.
I look at the code and see more questions than answers.

What are the units for "dirty_ratelimit"??  I think it is pages per
second, because it is initialized to INIT_BW which is documented as 100
MB/s.
What is the difference between dirty_ratelimit and
balanced_dirty_ratelimit?
The latter is "balanced", I guess.  What does that mean?
Apparently (from backing-dev-defs.h) dirty_ratelimit moves in smaller
steps and is more smooth than balanced_dirty_ratelimit.  How is being
less smooth, more balanced??
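
For reference, INIT_BW is defined in include/linux/backing-dev-defs.h as

	/* 100 MB/s, expressed in pages per second */
	#define INIT_BW		(100 << (20 - PAGE_SHIFT))

which supports the pages-per-second reading.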

What is pos_ratio? And what is RATELIMIT_CALC_SHIFT ???
Maybe pos_ratio is the ratio of the actual number of dirty pages to the
desired number?  And pos_ratio is calculated with fixed-point arithmetic
and RATELIMIT_CALC_SHIFT tells where the point is?

I think I understand freerun - half way between the dirty limit and the
dirty_bg limit.  So below dirty_bg, no writeback happens.  Between there
and freerun, writeback happens, but nothing is throttled.  From freerun up
to the limit, tasks are progressively throttled.
"setpoint" is the midpoint of this range.  Is the goal that pos_ratio is
computed for.
(except that in the BDI_CAP_STRICTLIMIT part of wb_position_ratio)
wb_setpoint is set to the bottom of this range, same as the freerun ceiling.)
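
Concretely, with the default vm.dirty_background_ratio = 10 and
vm.dirty_ratio = 20, the regions described above are roughly:

	/* example only, as a fraction of dirtyable memory:
	 *   below 10%  : no writeback at all
	 *   10% .. 15% : background writeback, tasks run free
	 *   15% .. 20% : tasks progressively throttled
	 *     (setpoint = 17.5%, the midpoint being aimed for)
	 *   at 20%     : tasks blocked until enough pages are cleaned
	 */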

Then we have the control lines, which are cubic(?) for global counts and
linear for per-wb - but truncated at 1/4.  The comment says "so that
wb_dirty can be smoothly throttled".  It'll take me a while to work out
how a hard edge results in smooth throttling.  I suspect it makes sense
but it doesn't jump out at me.

So, you see, I don't feel at all confident changing any of this code
because I just don't get it.

So I'm inclined to stick with the patch that I have. :-(

Thanks,
NeilBrown
Jan Kara April 22, 2020, 12:46 p.m. UTC | #4
On Tue 21-04-20 12:22:59, NeilBrown wrote:
> On Thu, Apr 16 2020, Jan Kara wrote:
> 
> > ...
> 
> Thanks.....
> I have 2 thoughts on this.
> One is that I'm not sure how much it really matters.
> The PF_LOCAL_THROTTLE task is always doing writeout on behalf of some
> other process.  Some process writes to NFS or to a loop block device or
> somewhere, then the PF_LOCAL_THROTTLE task writes those dirty pages out
> to a different BDI.  So the top level task will be throttled, and the
> PF_LOCAL_THROTTLE task won't get more than it can handle.
> There will be start-up transients of course, but I doubt it would
> generally be a problem.  However it would still be nice to find the
> "right" solution.

I'm not sure PF_LOCAL_THROTTLE "won't get more than it can handle". Once
dirty pages on the NFS BDI accumulate, the flush worker will start to push
them out as fast as it can. So the only thing that's limiting this is the
dirty throttling on the receiving (NFS server - thus underlying BDI) side.
When the underlying BDI's throttling triggers depends on that BDI's dirty
limits, and those are a proportional part of the global dirty limits,
scaled by writeback throughput on the underlying BDI compared to other
BDIs. So depending on which BDIs are in the system and how active they are
in dirtying pages, the 'underlying BDI' will get different dirty limits
set. It's quite imaginable that in some configurations it will be easy to
push the NFS server to hit its dirty limit even with PF_LOCAL_THROTTLE.
And then having the NFS server unresponsive for a couple of seconds because
it is blocked in balance_dirty_pages() just is not nice...
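
Schematically, the per-wb limit behaves like this (a sketch only - the
real computation lives in __wb_calc_thresh(), and the fraction name here
is made up for illustration):

	/* each wb gets a share of the global threshold proportional to
	 * its share of recent writeout across all wbs */
	wb_thresh = global_thresh * this_wb_writeout_fraction;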

> My second thought is that I really don't understand the writeback code.
> I think I understand the general principle, and there are lots of big
> comments that try to explain things, but it just doesn't seem to help.
> I look at the code and see more questions than answers.

I fully understand what you mean :). The logic is complex and while
Fengguang wrote a lot of comments, it is still rather hard to follow.

> What are the units for "dirty_ratelimit"??  I think it is pages per
> second, because it is initialized to INIT_BW which is documented as 100
> MB/s.

Yes, that's what I think as well.

> What is the difference between dirty_ratelimit and
> balanced_dirty_ratelimit?
> The latter is "balanced", I guess.  What does that mean?
> Apparently (from backing-dev-defs.h) dirty_ratelimit moves in smaller
> steps and is more smooth than balanced_dirty_ratelimit.  How is being
> less smooth, more balanced??

Yeah. So I cannot really explain the naming to you (not sure why Fengguang
chose these names). But 'balanced_dirty_ratelimit' is the pages/second
value we want to limit the task to based on events in the most recent time
slice. 'dirty_ratelimit' is a smoothed version of
'balanced_dirty_ratelimit', taking more of the history into account.
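
Schematically (only the idea - the real stepping logic in
wb_update_dirty_ratelimit() is considerably more involved):

	/* dirty_ratelimit chases balanced_dirty_ratelimit in bounded
	 * steps rather than jumping straight to it */
	if (dirty_ratelimit < balanced_dirty_ratelimit)
		dirty_ratelimit += min(step, balanced_dirty_ratelimit - dirty_ratelimit);
	else
		dirty_ratelimit -= min(step, dirty_ratelimit - balanced_dirty_ratelimit);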

> What is pos_ratio? And what is RATELIMIT_CALC_SHIFT ???
> Maybe pos_ratio is the ratio of the actual number of dirty pages to the
> desired number?  And pos_ratio is calculated with fixed-point arithmetic
> and RATELIMIT_CALC_SHIFT tells where the point is?

So RATELIMIT_CALC_SHIFT is indeed the shift of the fixed-point arithmetic
used in the computations. Pos_ratio is the multiplicative "correction"
factor we apply to the computed dirty_ratelimit (i.e., task_ratelimit =
dirty_ratelimit * pos_ratio) - so if we see we are able to write out, say,
100 MB/s but we are still relatively far from the dirty limits, we let
tasks dirty 200 MB/s (pos_ratio is 2). As we near the dirty limits,
pos_ratio drops (it's an appropriately scaled and shifted third-order
polynomial), so very close to the dirty limits we let tasks dirty only,
say, 10 MB/s even though we are still able to write out 100 MB/s.
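
In the code, the fixed point shows up when the two are combined in
balance_dirty_pages() (RATELIMIT_CALC_SHIFT is 10, so pos_ratio == 1 << 10
means a factor of exactly 1):

	task_ratelimit = ((u64)dirty_ratelimit * pos_ratio) >>
						RATELIMIT_CALC_SHIFT;

So with dirty_ratelimit at 100 MB/s worth of pages and pos_ratio at
2 << RATELIMIT_CALC_SHIFT, the task is allowed the 200 MB/s from the
example above.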

> I think I understand freerun - half way between the dirty limit and the
> dirty_bg limit.  So below dirty_bg, no writeback happens.  Between there
> and freerun, writeback happens, but nothing is throttled.  From freerun up
> to the limit, tasks are progressively throttled.

Correct.

> "setpoint" is the midpoint of this range.  Is the goal that pos_ratio is
> computed for.
> (except that in the BDI_CAP_STRICTLIMIT part of wb_position_ratio)
> wb_setpoint is set to the bottom of this range, same as the freerun ceiling.)

Correct.

> Then we have the control lines, which are cubic(?) for global counts and
> linear for per-wb - but truncated at 1/4.  The comment says "so that
> wb_dirty can be smoothly throttled".  It'll take me a while to work out
> how a hard edge results in smooth throttling.  I suspect it makes sense
> but it doesn't jump out at me.
> 
> So, you see, I don't feel at all confident changing any of this code
> because I just don't get it.
> 
> So I'm inclined to stick with the patch that I have. :-(

OK, I'll try to write something and we'll see if it will work :)

								Honza
NeilBrown May 13, 2020, 7:16 a.m. UTC | #5
I thought about this some more and came up with another "simple"
approach that didn't require me understanding too much code, but does -
I think - address your concerns.

I've changed the heuristic to avoid any throttling of a PF_LOCAL_THROTTLE
task if:
 - the global dirty count is below the global free-run threshold.  The
   code did this already.
 - (or) the per-wb dirty count is below the per-wb free-run threshold.
   This is the change.

This means that:
 - in a steady state, all bdis will be throttled based on their (steady
   state) throughput, which is equally appropriate for PF_LOCAL_THROTTLE
   tasks.
 - a PF_LOCAL_THROTTLE task will never be *completely* blocked by dirty
   pages queued for other devices.  This means no deadlock, and that is
   the primary purpose of PF_LOCAL_THROTTLE.
 - when writes through the PF_LOCAL_THROTTLE task start up from idle -
   when there is no current throughput estimate - the PF_LOCAL_THROTTLE
   task can be expected to get a fair share of the available memory, just
   as much as any other writer.  This was the possible problem with
   treating PF_LOCAL_THROTTLE just like BDI_CAP_STRICTLIMIT.
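
In balance_dirty_pages() terms the change amounts to roughly this (a
sketch only, using the same names as the existing global freerun test):

	if ((current->flags & PF_LOCAL_THROTTLE) &&
	    wb_dirty <= dirty_freerun_ceiling(wb_thresh, wb_bg_thresh))
		/* this wb is below its own freerun ceiling - skip
		 * throttling entirely for this task */
		break;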

So I think this is a good solution.  Thoughts?
Patches follow - I've addressed the comment formatting issue.

Thanks,
NeilBrown
diff mbox series

Patch

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index da693e6a834e..d89c25ba3b89 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -919,7 +919,7 @@  static void loop_unprepare_queue(struct loop_device *lo)
 
 static int loop_kthread_worker_fn(void *worker_ptr)
 {
-	current->flags |= PF_LESS_THROTTLE | PF_MEMALLOC_NOIO;
+	current->flags |= PF_LOCAL_THROTTLE | PF_MEMALLOC_NOIO;
 	return kthread_worker_fn(worker_ptr);
 }
 
diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
index 0aa02eb18bd3..c3fbab1753ec 100644
--- a/fs/nfsd/vfs.c
+++ b/fs/nfsd/vfs.c
@@ -979,12 +979,13 @@  nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
 
 	if (test_bit(RQ_LOCAL, &rqstp->rq_flags))
 		/*
-		 * We want less throttling in balance_dirty_pages()
-		 * and shrink_inactive_list() so that nfs to
+		 * We want throttling in balance_dirty_pages()
+		 * and shrink_inactive_list() to only consider
+		 * the backingdev we are writing to, so that nfs to
 		 * localhost doesn't cause nfsd to lock up due to all
 		 * the client's dirty pages or its congested queue.
 		 */
-		current->flags |= PF_LESS_THROTTLE;
+		current->flags |= PF_LOCAL_THROTTLE;
 
 	exp = fhp->fh_export;
 	use_wgather = (rqstp->rq_vers == 2) && EX_WGATHER(exp);
@@ -1037,7 +1038,7 @@  nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf,
 		nfserr = nfserrno(host_err);
 	}
 	if (test_bit(RQ_LOCAL, &rqstp->rq_flags))
-		current_restore_flags(pflags, PF_LESS_THROTTLE);
+		current_restore_flags(pflags, PF_LOCAL_THROTTLE);
 	return nfserr;
 }
 
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4418f5cb8324..5955a089df32 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1481,7 +1481,8 @@  extern struct pid *cad_pid;
 #define PF_KSWAPD		0x00020000	/* I am kswapd */
 #define PF_MEMALLOC_NOFS	0x00040000	/* All allocation requests will inherit GFP_NOFS */
 #define PF_MEMALLOC_NOIO	0x00080000	/* All allocation requests will inherit GFP_NOIO */
-#define PF_LESS_THROTTLE	0x00100000	/* Throttle me less: I clean memory */
+#define PF_LOCAL_THROTTLE	0x00100000	/* Throttle writes only against the bdi I write to,
+						 * I am cleaning dirty pages from some other bdi. */
 #define PF_KTHREAD		0x00200000	/* I am a kernel thread */
 #define PF_RANDOMIZE		0x00400000	/* Randomize virtual address space */
 #define PF_SWAPWRITE		0x00800000	/* Allowed to write to swap */
diff --git a/kernel/sys.c b/kernel/sys.c
index d325f3ab624a..180a2fa33f7f 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2262,7 +2262,7 @@  int __weak arch_prctl_spec_ctrl_set(struct task_struct *t, unsigned long which,
 	return -EINVAL;
 }
 
-#define PR_IO_FLUSHER (PF_MEMALLOC_NOIO | PF_LESS_THROTTLE)
+#define PR_IO_FLUSHER (PF_MEMALLOC_NOIO | PF_LOCAL_THROTTLE)
 
 SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 		unsigned long, arg4, unsigned long, arg5)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 7326b54ab728..9692c553526b 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -387,8 +387,7 @@  static unsigned long global_dirtyable_memory(void)
  * Calculate @dtc->thresh and ->bg_thresh considering
  * vm_dirty_{bytes|ratio} and dirty_background_{bytes|ratio}.  The caller
  * must ensure that @dtc->avail is set before calling this function.  The
- * dirty limits will be lifted by 1/4 for PF_LESS_THROTTLE (ie. nfsd) and
- * real-time tasks.
+ * dirty limits will be lifted by 1/4 for real-time tasks.
  */
 static void domain_dirty_limits(struct dirty_throttle_control *dtc)
 {
@@ -436,7 +435,7 @@  static void domain_dirty_limits(struct dirty_throttle_control *dtc)
 	if (bg_thresh >= thresh)
 		bg_thresh = thresh / 2;
 	tsk = current;
-	if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk)) {
+	if (rt_task(tsk)) {
 		bg_thresh += bg_thresh / 4 + global_wb_domain.dirty_limit / 32;
 		thresh += thresh / 4 + global_wb_domain.dirty_limit / 32;
 	}
@@ -486,7 +485,7 @@  static unsigned long node_dirty_limit(struct pglist_data *pgdat)
 	else
 		dirty = vm_dirty_ratio * node_memory / 100;
 
-	if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk))
+	if (rt_task(tsk))
 		dirty += dirty / 4;
 
 	return dirty;
@@ -1700,6 +1699,17 @@  static void balance_dirty_pages(struct bdi_writeback *wb,
 				sdtc = mdtc;
 		}
 
+		if (current->flags & PF_LOCAL_THROTTLE)
+			/* This task must only be throttled based on the bdi
+			 * it is writing to - dirty pages for other bdis might
+			 * be pages this task is trying to write out.  So it
+			 * gets a free pass unless both global and local
+			 * thresholds are exceeded.  i.e unless
+			 * "dirty_exceeded".
+			 */
+			if (!dirty_exceeded)
+				break;
+
 		if (dirty_exceeded && !wb->dirty_exceeded)
 			wb->dirty_exceeded = 1;
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b06868fc4926..80ab523926aa 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1879,13 +1879,13 @@  static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 
 /*
  * If a kernel thread (such as nfsd for loop-back mounts) services
- * a backing device by writing to the page cache it sets PF_LESS_THROTTLE.
+ * a backing device by writing to the page cache it sets PF_LOCAL_THROTTLE.
  * In that case we should only throttle if the backing device it is
  * writing to is congested.  In other cases it is safe to throttle.
  */
 static int current_may_throttle(void)
 {
-	return !(current->flags & PF_LESS_THROTTLE) ||
+	return !(current->flags & PF_LOCAL_THROTTLE) ||
 		current->backing_dev_info == NULL ||
 		bdi_write_congested(current->backing_dev_info);
 }