
[09/24] xfs: don't allow log IO to be throttled

Message ID 20190801021752.4986-10-david@fromorbit.com (mailing list archive)
State New, archived
Series mm, xfs: non-blocking inode reclaim

Commit Message

Dave Chinner Aug. 1, 2019, 2:17 a.m. UTC
From: Dave Chinner <dchinner@redhat.com>

Running metadata intensive workloads, I've been seeing the AIL
pushing getting stuck on pinned buffers and triggering log forces.
The log force is taking a long time to run because the log IO is
getting throttled by wbt_wait() - the block layer writeback
throttle. It's being throttled because there is a huge amount of
metadata writeback going on which is filling the request queue.

IOWs, we have a priority inversion problem here.

Mark the log IO bios with REQ_IDLE so they don't get throttled
by the block layer writeback throttle. When we are forcing the CIL,
we are likely to need to issue tens of log IOs, and they are issued as
fast as they can be built and the IO completed. Hence REQ_IDLE is
appropriate - it's an indication that more IO will follow shortly.

And because we also set REQ_SYNC, the writeback throttle will now
treat log IO the same way it treats direct IO writes - it will not
throttle them at all. Hence we solve the priority inversion problem
caused by the writeback throttle being unable to distinguish between
high priority log IO and background metadata writeback.
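
For reference, the check being bypassed is wbt_should_throttle() in
block/blk-wbt.c. Paraphrased (a sketch of the logic around the time of
this series, not a verbatim quote), only writes carrying both REQ_SYNC
and REQ_IDLE escape the throttle:

static bool wbt_should_throttle(struct rq_wb *rwb, struct bio *bio)
{
	switch (bio_op(bio)) {
	case REQ_OP_WRITE:
		/*
		 * REQ_SYNC | REQ_IDLE writes (e.g. O_DIRECT) are not
		 * throttled; plain REQ_SYNC writes are.
		 */
		if ((bio->bi_opf & (REQ_SYNC | REQ_IDLE)) ==
		    (REQ_SYNC | REQ_IDLE))
			return false;
		/* fall through */
	case REQ_OP_DISCARD:
		return true;
	default:
		return false;
	}
}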

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 fs/xfs/xfs_log.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

Comments

Chris Mason Aug. 1, 2019, 1:39 p.m. UTC | #1
On 31 Jul 2019, at 22:17, Dave Chinner wrote:

> From: Dave Chinner <dchinner@redhat.com>
>
> Running metadata intensive workloads, I've been seeing the AIL
> pushing getting stuck on pinned buffers and triggering log forces.
> The log force is taking a long time to run because the log IO is
> getting throttled by wbt_wait() - the block layer writeback
> throttle. It's being throttled because there is a huge amount of
> metadata writeback going on which is filling the request queue.
>
> IOWs, we have a priority inversion problem here.
>
> Mark the log IO bios with REQ_IDLE so they don't get throttled
> by the block layer writeback throttle. When we are forcing the CIL,
> we are likely to need to issue tens of log IOs, and they are issued as
> fast as they can be built and the IO completed. Hence REQ_IDLE is
> appropriate - it's an indication that more IO will follow shortly.
>
> And because we also set REQ_SYNC, the writeback throttle will now
> treat log IO the same way it treats direct IO writes - it will not
> throttle them at all. Hence we solve the priority inversion problem
> caused by the writeback throttle being unable to distinguish between
> high priority log IO and background metadata writeback.
>
  [ cc Jens ]

We spent a lot of time getting rid of these inversions in io.latency 
(and the new io.cost), where REQ_META just blows through the throttling 
and goes into back charging instead.

It feels awkward to have one set of prio inversion workarounds for io.* 
and another for wbt.  Jens, should we make an explicit one that doesn't 
rely on magic side effects, or just decide that metadata is meta enough 
to break all the rules?

-chris
Dave Chinner Aug. 1, 2019, 11:58 p.m. UTC | #2
On Thu, Aug 01, 2019 at 01:39:34PM +0000, Chris Mason wrote:
> On 31 Jul 2019, at 22:17, Dave Chinner wrote:
> 
> > From: Dave Chinner <dchinner@redhat.com>
> >
> > Running metadata intensive workloads, I've been seeing the AIL
> > pushing getting stuck on pinned buffers and triggering log forces.
> > The log force is taking a long time to run because the log IO is
> > getting throttled by wbt_wait() - the block layer writeback
> > throttle. It's being throttled because there is a huge amount of
> > metadata writeback going on which is filling the request queue.
> >
> > IOWs, we have a priority inversion problem here.
> >
> > Mark the log IO bios with REQ_IDLE so they don't get throttled
> > by the block layer writeback throttle. When we are forcing the CIL,
> > we are likely to need to issue tens of log IOs, and they are issued as
> > fast as they can be built and the IO completed. Hence REQ_IDLE is
> > appropriate - it's an indication that more IO will follow shortly.
> >
> > And because we also set REQ_SYNC, the writeback throttle will now
> > treat log IO the same way it treats direct IO writes - it will not
> > throttle them at all. Hence we solve the priority inversion problem
> > caused by the writeback throttle being unable to distinguish between
> > high priority log IO and background metadata writeback.
> >
>   [ cc Jens ]
> 
> We spent a lot of time getting rid of these inversions in io.latency 
> (and the new io.cost), where REQ_META just blows through the throttling 
> and goes into back charging instead.

Which simply reinforces the fact that request type based
throttling is a fundamentally broken architecture.

> It feels awkward to have one set of prio inversion workarounds for io.* 
> and another for wbt.  Jens, should we make an explicit one that doesn't 
> rely on magic side effects, or just decide that metadata is meta enough 
> to break all the rules?

The problem isn't that REQ_META blows through the throttling, the problem
is that different REQ_META IOs have different priority.

IOWs, the problem here is that we are trying to infer priority from
the request type rather than an actual priority assigned by the
submitter. There is no way direct IO has higher priority in a
filesystem than log IO tagged with REQ_META as direct IO can require
log IO to make progress. Priority is a policy determined by the
submitter, not the mechanism doing the throttling.

Can we please move this all over to priorities based on
bio->b_ioprio? And then document how the range of priorities is
managed, such as:

(99 = highest prio to 0 = lowest)

swap out
swap in				>90
User hard RT max		89
User hard RT min		80
filesystem max			79
ionice max			60
background data writeback	40
ionice min			20
filesystem min			10
idle				0

So that we can appropriately prioritise different types of kernel
internal IO w.r.t user controlled IO priorities? This way we can
still tag the bios with the type of data they contain, but we
no longer use that to determine whether to throttle that IO or not -
throttling/scheduling should be done entirely on a priority basis.
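
As a sketch of the submitter-side plumbing this would need (the mapping
of the 0-99 scale above onto the existing IOPRIO_CLASS_* values is
purely hypothetical, just to show the fields and macros already exist):

	/* e.g. journal IO: top of the filesystem priority range */
	bio_set_prio(bio, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_RT, 0));

	/* e.g. background metadata writeback: bottom of the range */
	bio_set_prio(bio, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_BE_NR - 1));

	/*
	 * The throttle would then compare bio_prio(bio) against a
	 * threshold instead of guessing intent from
	 * REQ_META/REQ_SYNC/REQ_IDLE.
	 */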

Cheers,

Dave.
Christoph Hellwig Aug. 2, 2019, 8:12 a.m. UTC | #3
On Fri, Aug 02, 2019 at 09:58:49AM +1000, Dave Chinner wrote:
> Which simply reinforces the fact that request type based
> throttling is a fundamentally broken architecture.
> 
> > It feels awkward to have one set of prio inversion workarounds for io.* 
> > and another for wbt.  Jens, should we make an explicit one that doesn't 
> > rely on magic side effects, or just decide that metadata is meta enough 
> > to break all the rules?
> 
> The problem isn't that REQ_META blows through the throttling, the problem
> is that different REQ_META IOs have different priority.
> 
> IOWs, the problem here is that we are trying to infer priority from
> the request type rather than an actual priority assigned by the
> submitter. There is no way direct IO has higher priority in a
> filesystem than log IO tagged with REQ_META as direct IO can require
> log IO to make progress. Priority is a policy determined by the
> submitter, not the mechanism doing the throttling.
> 
> Can we please move this all over to priorities based on
> bio->b_ioprio? And then document how the range of priorities is
> managed, such as:

Yes, we need to fix the magically deduced throttling behavior, especially
the odd REQ_IDLE that in its various incarnations has been a massive
source of trouble and confusion.  Not sure tons of priorities are
really helping, given that even hardware with priority level support
usually just supports about two priority levels.
Chris Mason Aug. 2, 2019, 2:11 p.m. UTC | #4
On 1 Aug 2019, at 19:58, Dave Chinner wrote:

> On Thu, Aug 01, 2019 at 01:39:34PM +0000, Chris Mason wrote:
>> On 31 Jul 2019, at 22:17, Dave Chinner wrote:
>>
>>> From: Dave Chinner <dchinner@redhat.com>
>>>
>>> Running metadata intensive workloads, I've been seeing the AIL
>>> pushing getting stuck on pinned buffers and triggering log forces.
>>> The log force is taking a long time to run because the log IO is
>>> getting throttled by wbt_wait() - the block layer writeback
>>> throttle. It's being throttled because there is a huge amount of
>>> metadata writeback going on which is filling the request queue.
>>>
>>> IOWs, we have a priority inversion problem here.
>>>
>>> Mark the log IO bios with REQ_IDLE so they don't get throttled
>>> by the block layer writeback throttle. When we are forcing the CIL,
>>> we are likely to need to issue tens of log IOs, and they are issued as
>>> fast as they can be built and the IO completed. Hence REQ_IDLE is
>>> appropriate - it's an indication that more IO will follow shortly.
>>>
>>> And because we also set REQ_SYNC, the writeback throttle will now
>>> treat log IO the same way it treats direct IO writes - it will not
>>> throttle them at all. Hence we solve the priority inversion problem
>>> caused by the writeback throttle being unable to distinguish between
>>> high priority log IO and background metadata writeback.
>>>
>>   [ cc Jens ]
>>
>> We spent a lot of time getting rid of these inversions in io.latency
>> (and the new io.cost), where REQ_META just blows through the 
>> throttling
>> and goes into back charging instead.
>
> Which simply reinforces the fact that request type based
> throttling is a fundamentally broken architecture.
>
>> It feels awkward to have one set of prio inversion workarounds for 
>> io.*
>> and another for wbt.  Jens, should we make an explicit one that 
>> doesn't
>> rely on magic side effects, or just decide that metadata is meta 
>> enough
>> to break all the rules?
>
> The problem isn't that REQ_META blows through the throttling, the problem
> is that different REQ_META IOs have different priority.

Yes and no.  At some point important FS threads have the potential to 
wait on every single REQ_META IO on the box, so every single REQ_META IO 
has the potential to create priority inversions.

>
> IOWs, the problem here is that we are trying to infer priority from
> the request type rather than an actual priority assigned by the
> submitter. There is no way direct IO has higher priority in a
> filesystem than log IO tagged with REQ_META as direct IO can require
> log IO to make progress. Priority is a policy determined by the
> submitter, not the mechanism doing the throttling.
>
> Can we please move this all over to priorities based on
> bio->b_ioprio? And then document how the range of priorities is
> managed, such as:
>
> (99 = highest prio to 0 = lowest)
>
> swap out
> swap in				>90
> User hard RT max		89
> User hard RT min		80
> filesystem max			79
> ionice max			60
> background data writeback	40
> ionice min			20
> filesystem min			10
> idle				0
>
> So that we can appropriately prioritise different types of kernel
> internal IO w.r.t user controlled IO priorities? This way we can
> still tag the bios with the type of data they contain, but we
> no longer use that to determine whether to throttle that IO or not -
> throttling/scheduling should be done entirely on a priority basis.

I think you and I are describing solutions to different problems.  The 
reason the back charging works so well in io.latency and io.cost is 
because the IO controllers are able to remember that a given cgroup 
created X amount of IO, and then just make that cgroup wait at a safe 
time, instead of trying to assign priority to things that have infinite 
priority.
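
As a purely illustrative sketch of that back charging idea (made-up
names, not the actual blk-iolatency/io.cost code): the privileged IO is
never blocked at submission, its cost is just remembered against the
owning group and repaid when that group next submits IO that is safe to
delay.

#include <linux/types.h>

struct iogrp {
	u64	debt;	/* cost of IO that blew through the throttle */
};

/* REQ_META and friends are dispatched immediately, but charged back */
static void iogrp_charge_back(struct iogrp *grp, u64 cost)
{
	grp->debt += cost;
}

/* ordinary IO from the same group pays the debt down before dispatch */
static bool iogrp_may_dispatch(struct iogrp *grp, u64 budget)
{
	if (grp->debt > budget)
		return false;	/* make the group wait at a safe time */
	grp->debt = 0;
	return true;
}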

I can't really see bio->b_ioprio working without the rest of the IO 
controller logic creating a sensible system, and giving userland the 
framework to define weights etc.  My question is if it's worth trying 
inside of the wbt code, or if we should just let the metadata go 
through.

Tejun reminded me that in a lot of ways, swap is user IO and it's 
actually fine to have it prioritized at the same level as user IO.  We 
don't want to let a low prio app thrash the drive swapping things in and 
out all the time, and it's actually fine to make them wait as long as 
other higher priority processes aren't waiting for the memory.  This 
depends on the cgroup config, so wrt your current patches it probably 
sounds crazy, but we have a lot of data around this from the fleet.

-chris
Matthew Wilcox (Oracle) Aug. 2, 2019, 6:34 p.m. UTC | #5
On Fri, Aug 02, 2019 at 02:11:53PM +0000, Chris Mason wrote:
> Yes and no.  At some point important FS threads have the potential to 
> wait on every single REQ_META IO on the box, so every single REQ_META IO 
> has the potential to create priority inversions.

[...]

> Tejun reminded me that in a lot of ways, swap is user IO and it's 
> actually fine to have it prioritized at the same level as user IO.  We 
> don't want to let a low prio app thrash the drive swapping things in and 
> out all the time, and it's actually fine to make them wait as long as 
> other higher priority processes aren't waiting for the memory.  This 
> depends on the cgroup config, so wrt your current patches it probably 
> sounds crazy, but we have a lot of data around this from the fleet.

swap is only user IO if we're doing the swapping in response to an
allocation done on behalf of a user thread.  If one of the above-mentioned
important FS threads does a memory allocation which causes swapping,
that priority needs to be inherited by the IO.
Dave Chinner Aug. 2, 2019, 11:28 p.m. UTC | #6
On Fri, Aug 02, 2019 at 02:11:53PM +0000, Chris Mason wrote:
> On 1 Aug 2019, at 19:58, Dave Chinner wrote:
> I can't really see bio->b_ioprio working without the rest of the IO 
> controller logic creating a sensible system,

That's exactly the problem we need to solve. The current situation
is ... untenable. Regardless of whether the io.latency controller
works well, the fact is that the wbt subsystem is active on -all-
configurations and the way it "prioritises" is completely broken.

> framework to define weights etc.  My question is if it's worth trying 
> inside of the wbt code, or if we should just let the metadata go 
> through.

As I said, that doesn't solve the problem. We /want/ critical
journal IO to have higher priority than background metadata
writeback. Just ignoring REQ_META doesn't help us there - it just
moves the priority inversion to blocking on request queue tags.

> Tejun reminded me that in a lot of ways, swap is user IO and it's 
> actually fine to have it prioritized at the same level as user IO.  We 

I think that's wrong. Swap *in* could have user priority but swap
*out* is global as there is no guarantee that the page being swapped
belongs to the user context that is reclaiming memory.

Lots of other user and kernel reclaim contexts may be waiting on
that swap to complete, so it's important that swap out is not
arbitrarily delayed or susceptible to priority inversions. i.e. swap
out must take priority over swap-in and other user IO because that
IO may require allocation to make progress via swapping to free
"user" file data cached in memory....

> don't want to let a low prio app thrash the drive swapping things in and 
> out all the time,

Low priority apps will be throttled on *swap in* IO - i.e. by their
incoming memory demand. High priority apps should be swapping out
low priority app memory if there are shortages - that's what priority
defines....

> other higher priority processes aren't waiting for the memory.  This 
> depends on the cgroup config, so wrt your current patches it probably 
> sounds crazy, but we have a lot of data around this from the fleet.

I'm not using cgroups.

Core infrastructure needs to work without cgroups being configured
to confine everything in userspace to "safe" bounds, and right now
just running things in the root cgroup doesn't appear to work very
well at all.

Cheers,

Dave.
Chris Mason Aug. 5, 2019, 6:32 p.m. UTC | #7
On 2 Aug 2019, at 19:28, Dave Chinner wrote:

> On Fri, Aug 02, 2019 at 02:11:53PM +0000, Chris Mason wrote:
>> On 1 Aug 2019, at 19:58, Dave Chinner wrote:
>> I can't really see bio->b_ioprio working without the rest of the IO
>> controller logic creating a sensible system,
>
> That's exactly the problem we need to solve. The current situation
> is ... untenable. Regardless of whether the io.latency controller
> works well, the fact is that the wbt subsystem is active on -all-
> configurations and the way it "prioritises" is completely broken.

Completely broken is probably a little strong.   Before wbt, it was 
impossible to do buffered IO without periodically saturating the drive 
in unexpected ways.  We've got a lot of data showing it helping, and 
it's pretty easy to set up a new A/B experiment to demonstrate its 
usefulness in current kernels.  But that doesn't mean it's perfect.

>
>> framework to define weights etc.  My question is if it's worth trying
>> inside of the wbt code, or if we should just let the metadata go
>> through.
>
> As I said, that doesn't solve the problem. We /want/ critical
> journal IO to have higher priority than background metadata
> writeback. Just ignoring REQ_META doesn't help us there - it just
> moves the priority inversion to blocking on request queue tags.

Does XFS background metadata IO ever get waited on by critical journal 
threads?  My understanding is that all of the filesystems do this from 
time to time.  Without a way to bump the priority of throttled 
background metadata IO, I can't see how to avoid prio inversions without 
running background metadata at the same prio as all of the critical 
journal IO.

>
>> Tejun reminded me that in a lot of ways, swap is user IO and it's
>> actually fine to have it prioritized at the same level as user IO.  
>> We
>
> I think that's wrong. Swap *in* could have user priority but swap
> *out* is global as there is no guarantee that the page being swapped
> belongs to the user context that is reclaiming memory.
>
> Lots of other user and kernel reclaim contexts may be waiting on
> that swap to complete, so it's important that swap out is not
> arbitrarily delayed or susceptible to priority inversions. i.e. swap
> out must take priority over swap-in and other user IO because that
> IO may require allocation to make progress via swapping to free
> "user" file data cached in memory....
>
>> don't want to let a low prio app thrash the drive swapping things in 
>> and
>> out all the time,
>
> Low priority apps will be throttled on *swap in* IO - i.e. by their
> incoming memory demand. High priority apps should be swapping out
> low priority app memory if there are shortages - that's what priority
> defines....
>
>> other higher priority processes aren't waiting for the memory.  This
>> depends on the cgroup config, so wrt your current patches it probably
>> sounds crazy, but we have a lot of data around this from the fleet.
>
> I'm not using cgroups.
>
> Core infrastructure needs to work without cgroups being configured
> to confine everything in userspace to "safe" bounds, and right now
> just running things in the root cgroup doesn't appear to work very
> well at all.

I'm not disagreeing with this part, my real point is there isn't a 
single answer.  It's possible for swap to be critical to the running of 
the box in some workloads, and totally unimportant in others.

-chris
Dave Chinner Aug. 5, 2019, 11:09 p.m. UTC | #8
On Mon, Aug 05, 2019 at 06:32:51PM +0000, Chris Mason wrote:
> On 2 Aug 2019, at 19:28, Dave Chinner wrote:
> 
> > On Fri, Aug 02, 2019 at 02:11:53PM +0000, Chris Mason wrote:
> >> On 1 Aug 2019, at 19:58, Dave Chinner wrote:
> >> I can't really see bio->b_ioprio working without the rest of the IO
> >> controller logic creating a sensible system,
> >
> > That's exactly the problem we need to solve. The current situation
> > is ... untenable. Regardless of whether the io.latency controller
> > works well, the fact is that the wbt subsystem is active on -all-
> > configurations and the way it "prioritises" is completely broken.
> 
> Completely broken is probably a little strong.   Before wbt, it was 
> impossible to do buffered IO without periodically saturating the drive 
> in unexpected ways.  We've got a lot of data showing it helping, and 
> it's pretty easy to set up a new A/B experiment to demonstrate its 
> usefulness in current kernels.  But that doesn't mean it's perfect.

I'm not arguing that wbt is useless, I'm just saying that its
design w.r.t. IO prioritisation is fundamentally broken. Using
request types to try to infer priority just doesn't work, as I've
been trying to explain.

> >> framework to define weights etc.  My question is if it's worth trying
> >> inside of the wbt code, or if we should just let the metadata go
> >> through.
> >
> > As I said, that doesn't  solve the problem. We /want/ critical
> > journal IO to have higher priority that background metadata
> > writeback. Just ignoring REQ_META doesn't help us there - it just
> > moves the priority inversion to blocking on request queue tags.
> 
> Does XFS background metadata IO ever get waited on by critical journal 
> threads?

No. Background writeback (which, with this series, is the only way
metadata gets written in XFS) is almost entirely non-blocking until
IO submission occurs. It will force the log if pinned items are
prevents the log tail from moving (hence blocking on log IO) but
largely it doesn't block on anything except IO submission.

The only thing that blocks on journal IO is CIL flushing and,
subsequently, anything that is waiting on a journal flush to
complete. CIL flushing happens in its own workqueue, so it doesn't
block anything directly. The only operations that wait for log IO
require items to be stable in the journal (e.g. fsync()).
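
Roughly, as a paraphrase of the AIL push loop rather than the actual
fs/xfs/xfs_trans_ail.c source (next_ail_item() is a made-up helper
standing in for the real traversal, and the real code batches the log
force across the whole push cycle):

/* sketch only - not the real xfsaild_push() */
static void ail_push_sketch(struct xfs_mount *mp, struct xfs_ail *ailp)
{
	struct xfs_log_item *lip;
	bool need_log_force = false;

	while ((lip = next_ail_item(ailp)) != NULL) {
		switch (xfsaild_push_item(ailp, lip)) {
		case XFS_ITEM_PINNED:
			/* can't write it back until a log force unpins it */
			need_log_force = true;
			break;
		case XFS_ITEM_SUCCESS:
			/* queued for non-blocking buffer writeback */
			break;
		default:
			break;
		}
	}

	/* this is the log IO that was getting stuck behind wbt_wait() */
	if (need_log_force)
		xfs_log_force(mp, XFS_LOG_SYNC);
}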

Starting a transactional change may block on metadata writeback. If
there isn't space in the log for the new transaction, it will kick
and wait for background metadata writeback to make progress and push
the tail of the log forwards.  And this may wait on journal IO if
pinned items need to be flushed to the log before writeback can
occur.

This is the way we prevent transactions requiring journal IO from
blocking on metadata writeback to make progress - we don't allow a
transaction to start until it is guaranteed that it can complete
without requiring journal IO to flush other metadata to the journal.
That way there is always space available in the log for all pending
journal IO to complete without a dependency on metadata writeback
making progress.

This "block on metadata writeback at transaction start" design means
data writeback can block on metadata writeback because we do
allocation transactions in the IO path. Which means data IO can
block behind metadata IO, which can block behind log IO, and that
largely defines the IO hierarchy in XFS.
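
As a sketch of how that guarantee is enforced at transaction start
(made-up helpers; the real path runs through xfs_trans_alloc() ->
xfs_trans_reserve() -> xfs_log_reserve() and is paraphrased, not
quoted, here):

/* log_space_available() and push_ail_and_wait() are hypothetical helpers */
static int start_transaction_sketch(struct xfs_mount *mp, int space_needed)
{
	/*
	 * Reserve all the log space the transaction could possibly need
	 * before it starts, so once running it never waits for other
	 * metadata to be flushed to the journal.
	 */
	while (!log_space_available(mp->m_log, space_needed)) {
		/*
		 * Push the AIL to move the log tail forwards. This is
		 * the metadata writeback (and any log force needed to
		 * unpin items) that new transactions wait on - not the
		 * other way around.
		 */
		push_ail_and_wait(mp->m_log);
	}
	return 0;
}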

Hence the IO priority order is very clear in XFS - it was designed
this way because you can't support things like guaranteed rate IO
storage applications (one of the prime use cases XFS was originally
designed for) without having a clear model for avoiding priority
inversions between data, metadata and the journal.

I'm not guessing about any of this - I know how all this is supposed
to work because I spent years at SGI working with people far smarter
than me supporting real-time IO applications working along with
real-time IO schedulers in a real time kernel (i.e.  Irix). I don't
make this stuff up for fun or to argue, I say stuff because I know
how it's supposed to work.

And, FWIW, Irix also had a block layer writeback throttling
mechanism to prevent bulk data writeback from thrashing disks and
starving higher priority IO. It was also fully IO priority aware -
this stuff isn't rocket science, and Linux is not the first OS to
ever implement this sort of functionality. Linux was not my first
rodeo....

> My understanding is that all of the filesystems do this from 
> time to time.  Without a way to bump the priority of throttled 
> background metadata IO, I can't see how to avoid prio inversions without 
> running background metadata at the same prio as all of the critical 
> journal IO.

Perhaps you just haven't thought about it enough. :)

> > Core infrastructure needs to work without cgroups being configured
> > to confine everything in userspace to "safe" bounds, and right now
> > just running things in the root cgroup doesn't appear to work very
> > well at all.
> 
> I'm not disagreeing with this part, my real point is there isn't a 
> single answer.  It's possible for swap to be critical to the running of 
> the box in some workloads, and totally unimportant in others.

Sure, but that only indicates that we need to be able to adjust the
priority of IO within certain bounds.

The problem right now is that the default behaviour is pretty
nasty and core functionality is non-functional. It doesn't matter if
swap priority is adjustable or not, users should not have to tune
the kernel to use an esoteric cgroup configuration in order for the
kernel to function correctly out of the box.

I'm not sure when we lost sight of the fact we need to make the
default configurations work correctly first, and only then do we
worry about how tunable something is when the default behaviour has
been proven to be insufficient. Hiding bad behaviour behind custom
cgroup configuration does nobody any favours.

Cheers,

Dave.

Patch

diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
index 00e9f5c388d3..7bdea629e749 100644
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ -1723,7 +1723,15 @@  xlog_write_iclog(
 	iclog->ic_bio.bi_iter.bi_sector = log->l_logBBstart + bno;
 	iclog->ic_bio.bi_end_io = xlog_bio_end_io;
 	iclog->ic_bio.bi_private = iclog;
-	iclog->ic_bio.bi_opf = REQ_OP_WRITE | REQ_META | REQ_SYNC | REQ_FUA;
+
+	/*
+	 * We use REQ_SYNC | REQ_IDLE here to tell the block layer there are more
+	 * IOs coming immediately after this one. This prevents the block layer
+	 * writeback throttle from throttling log writes behind background
+	 * metadata writeback and causing priority inversions.
+	 */
+	iclog->ic_bio.bi_opf = REQ_OP_WRITE | REQ_META | REQ_SYNC |
+				REQ_IDLE | REQ_FUA;
 	if (need_flush)
 		iclog->ic_bio.bi_opf |= REQ_PREFLUSH;