xfs: set aside allocation btree blocks from block reservation

Message ID 20210217132339.651020-1-bfoster@redhat.com (mailing list archive)
State Superseded
Series xfs: set aside allocation btree blocks from block reservation

Commit Message

Brian Foster Feb. 17, 2021, 1:23 p.m. UTC
The blocks used for allocation btrees (bnobt and countbt) are
technically considered free space. This is because as free space is
used, allocbt blocks are removed and naturally become available for
traditional allocation. However, this means that a significant
portion of free space may consist of in-use btree blocks if free
space is severely fragmented.

On large filesystems with large perag reservations, this can lead to
a rare but nasty condition where a significant amount of physical
free space is available, but the majority of actual usable blocks
consist of in-use allocbt blocks. We have a record of a (~12TB, 32
AG) filesystem with multiple AGs in a state with ~2.5GB or so free
blocks tracked across ~300 total allocbt blocks, but effectively at
100% full because the free space is entirely consumed by
refcountbt perag reservation.

Such a large perag reservation is by design on large filesystems.
The problem is that because the free space is so fragmented, this AG
contributes the 300 or so allocbt blocks to the global counters as
free space. If this pattern repeats across enough AGs, the
filesystem lands in a state where global block reservation can
outrun physical block availability. For example, a streaming
buffered write on the affected filesystem continues to allow delayed
allocation beyond the point where writeback starts to fail due to
physical block allocation failures. The expected behavior is for the
delalloc block reservation to fail gracefully with -ENOSPC before
physical block allocation failure is a possibility.

To address this problem, introduce a percpu counter to track the sum
of the allocbt block counters already tracked in the AGF. Use the
new counter to set these blocks aside at reservation time and thus
ensure they cannot be allocated until truly available. Since this is
only necessary when large reflink perag reservations are in place
and the counter requires a read of each AGF to fully populate, only
enforce on reflink enabled filesystems. This allows initialization
of the counter at ->pagf_init time because the refcountbt perag
reservation init code reads each AGF at mount time.

Note that the counter uses a small percpu batch size to allow the
allocation paths to keep the primary count accurate enough that the
reservation path doesn't ever need to lock and sum the counter.
Absolute accuracy is not required here, just that the counter
reflects the majority of unavailable blocks so the reservation path
fails first.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---

This survives my initial regression tests with more ongoing.
Unfortunately I've not been able to recreate the problematic conditions
from scratch. I suspect it's caused by a reflink/COW heavy workload
where COW remaps have severely fragmented free space and depleted the
reserved block pool over time. I have confirmed that this allows -ENOSPC
to occur gracefully on a metadump image of the originally affected
filesystem. Thoughts, reviews, flames appreciated.

Brian

 fs/xfs/libxfs/xfs_alloc.c | 13 +++++++++++++
 fs/xfs/xfs_mount.c        | 13 ++++++++++++-
 fs/xfs/xfs_mount.h        |  6 ++++++
 fs/xfs/xfs_super.c        |  7 +++++++
 4 files changed, 38 insertions(+), 1 deletion(-)
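
For reference, the core of the mechanism described above, condensed into a
single sketch (names match the patch at the end of this thread; the helper
name is illustrative and this is not a substitute for the real hunks in
xfs_mod_fdblocks()):

/*
 * Sketch only: how the allocbt block count is folded into the fdblocks
 * reservation check.
 */
static uint64_t
xfs_fdblocks_unavailable_sketch(
	struct xfs_mount	*mp)
{
	uint64_t		set_aside = mp->m_alloc_set_aside;

	/*
	 * On reflink filesystems, also hide the in-use allocbt blocks that
	 * m_fdblocks still counts as free space.
	 */
	if (xfs_sb_version_hasreflink(&mp->m_sb))
		set_aside += percpu_counter_read_positive(&mp->m_btree_blks);
	return set_aside;
}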

Comments

Dave Chinner Feb. 18, 2021, 12:34 a.m. UTC | #1
On Wed, Feb 17, 2021 at 08:23:39AM -0500, Brian Foster wrote:
> The blocks used for allocation btrees (bnobt and countbt) are
> technically considered free space. This is because as free space is
> used, allocbt blocks are removed and naturally become available for
> traditional allocation. However, this means that a significant
> portion of free space may consist of in-use btree blocks if free
> space is severely fragmented.
> 
> On large filesystems with large perag reservations, this can lead to
> a rare but nasty condition where a significant amount of physical
> free space is available, but the majority of actual usable blocks
> consist of in-use allocbt blocks. We have a record of a (~12TB, 32
> AG) filesystem with multiple AGs in a state with ~2.5GB or so free
> blocks tracked across ~300 total allocbt blocks, but effectively at
> 100% full because the free space is entirely consumed by
> refcountbt perag reservation.
> 
> Such a large perag reservation is by design on large filesystems.
> The problem is that because the free space is so fragmented, this AG
> contributes the 300 or so allocbt blocks to the global counters as
> free space. If this pattern repeats across enough AGs, the
> filesystem lands in a state where global block reservation can
> outrun physical block availability. For example, a streaming
> buffered write on the affected filesystem continues to allow delayed
> allocation beyond the point where writeback starts to fail due to
> physical block allocation failures. The expected behavior is for the
> delalloc block reservation to fail gracefully with -ENOSPC before
> physical block allocation failure is a possibility.

*nod*

> To address this problem, introduce a percpu counter to track the sum
> of the allocbt block counters already tracked in the AGF. Use the
> new counter to set these blocks aside at reservation time and thus
> ensure they cannot be allocated until truly available. Since this is
> only necessary when large reflink perag reservations are in place
> and the counter requires a read of each AGF to fully populate, only
> enforce on reflink enabled filesystems. This allows initialization
> of the counter at ->pagf_init time because the refcountbt perag
> reservation init code reads each AGF at mount time.

Ok, so the mechanism sounds ok, but a per-cpu counter seems like
premature optimisation. How often are we really updating btree block
counts? An atomic counter is good for at least a million updates a
second across a 2 socket 32p machine, and I highly doubt we're
incrementing/decrementing btree block counts that often on such a
machine. 

While per-cpu counters have a fast write side, they come with
additional algorithmic complexity. Hence if the update rate of the
counter is not fast enough to need per-cpu counters, we should avoid
them. Just because other free space counters use per-cpu counters,
it doesn't mean everything in that path needs to use them...
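
Schematically, the atomic64_t alternative suggested here would look roughly
like the following (field name and hook sites mirror the percpu version in
the patch; this is an illustration, not a replacement patch):

/* xfs_mount: replace the percpu counter with a plain atomic64_t */
atomic64_t		m_btree_blks;

/* allocation paths (xfs_alloc_get_freelist / xfs_alloc_put_freelist) */
atomic64_inc(&mp->m_btree_blks);
atomic64_dec(&mp->m_btree_blks);

/* AGF init (xfs_alloc_read_agf) */
atomic64_add(pag->pagf_btreeblks, &mp->m_btree_blks);

/* reservation path (xfs_mod_fdblocks): a plain load, no lock, no summing */
set_aside += atomic64_read(&mp->m_btree_blks);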

> Note that the counter uses a small percpu batch size to allow the
> allocation paths to keep the primary count accurate enough that the
> reservation path doesn't ever need to lock and sum the counter.
> Absolute accuracy is not required here, just that the counter
> reflects the majority of unavailable blocks so the reservation path
> fails first.

And this makes the per-cpu counter scale almost no better than a
simple atomic counter, because a spinlock requires two atomic
operations (lock and unlock). Hence a batch count of 4 only reduces
the atomic op count by half but introduces a lot of extra
complexity. It won't make a difference to the scalability of
workloads that hammer the btree block count because contention on
the internal counter spinlock will occur at close to the same
concurrency rate as would occur on an atomic counter.
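
For reference, the write side of percpu_counter_add_batch() is roughly the
following (paraphrased from lib/percpu_counter.c), which shows why a small
batch funnels most updates through the internal spinlock:

void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
{
	s64 count;

	preempt_disable();
	count = __this_cpu_read(*fbc->counters) + amount;
	if (abs(count) >= batch) {
		unsigned long flags;

		/* fold the per-cpu delta into the central count under the lock */
		raw_spin_lock_irqsave(&fbc->lock, flags);
		fbc->count += count;
		__this_cpu_sub(*fbc->counters, count - amount);
		raw_spin_unlock_irqrestore(&fbc->lock, flags);
	} else {
		this_cpu_add(*fbc->counters, amount);
	}
	preempt_enable();
}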

Hence a per-cpu counter used in this manner seems like a premature
optimisation to me...

Cheers,

Dave.
Chandan Babu R Feb. 18, 2021, 7:51 a.m. UTC | #2
On 17 Feb 2021 at 18:53, Brian Foster wrote:
> The blocks used for allocation btrees (bnobt and countbt) are
> technically considered free space. This is because as free space is
> used, allocbt blocks are removed and naturally become available for
> traditional allocation. However, this means that a significant
> portion of free space may consist of in-use btree blocks if free
> space is severely fragmented.
>
> On large filesystems with large perag reservations, this can lead to
> a rare but nasty condition where a significant amount of physical
> free space is available, but the majority of actual usable blocks
> consist of in-use allocbt blocks. We have a record of a (~12TB, 32
> AG) filesystem with multiple AGs in a state with ~2.5GB or so free
> blocks tracked across ~300 total allocbt blocks, but effectively at
> 100% full because the free space is entirely consumed by
> refcountbt perag reservation.
>
> Such a large perag reservation is by design on large filesystems.
> The problem is that because the free space is so fragmented, this AG
> contributes the 300 or so allocbt blocks to the global counters as
> free space. If this pattern repeats across enough AGs, the
> filesystem lands in a state where global block reservation can
> outrun physical block availability. For example, a streaming
> buffered write on the affected filesystem continues to allow delayed
> allocation beyond the point where writeback starts to fail due to
> physical block allocation failures. The expected behavior is for the
> delalloc block reservation to fail gracefully with -ENOSPC before
> physical block allocation failure is a possibility.
>
> To address this problem, introduce a percpu counter to track the sum
> of the allocbt block counters already tracked in the AGF. Use the
> new counter to set these blocks aside at reservation time and thus
> ensure they cannot be allocated until truly available. Since this is
> only necessary when large reflink perag reservations are in place
> and the counter requires a read of each AGF to fully populate, only
> enforce on reflink enabled filesystems. This allows initialization
> of the counter at ->pagf_init time because the refcountbt perag
> reservation init code reads each AGF at mount time.
>
> Note that the counter uses a small percpu batch size to allow the
> allocation paths to keep the primary count accurate enough that the
> reservation path doesn't ever need to lock and sum the counter.
> Absolute accuracy is not required here, just that the counter
> reflects the majority of unavailable blocks so the reservation path
> fails first.
>

The changes look good to me from the perspective of logical correctness.

Reviewed-by: Chandan Babu R <chandanrlinux@gmail.com>
Brian Foster Feb. 18, 2021, 1:25 p.m. UTC | #3
On Thu, Feb 18, 2021 at 11:34:51AM +1100, Dave Chinner wrote:
> On Wed, Feb 17, 2021 at 08:23:39AM -0500, Brian Foster wrote:
> > The blocks used for allocation btrees (bnobt and countbt) are
> > technically considered free space. This is because as free space is
> > used, allocbt blocks are removed and naturally become available for
> > traditional allocation. However, this means that a significant
> > portion of free space may consist of in-use btree blocks if free
> > space is severely fragmented.
> > 
> > On large filesystems with large perag reservations, this can lead to
> > a rare but nasty condition where a significant amount of physical
> > free space is available, but the majority of actual usable blocks
> > consist of in-use allocbt blocks. We have a record of a (~12TB, 32
> > AG) filesystem with multiple AGs in a state with ~2.5GB or so free
> > blocks tracked across ~300 total allocbt blocks, but effectively at
> > 100% full because the free space is entirely consumed by
> > refcountbt perag reservation.
> > 
> > Such a large perag reservation is by design on large filesystems.
> > The problem is that because the free space is so fragmented, this AG
> > contributes the 300 or so allocbt blocks to the global counters as
> > free space. If this pattern repeats across enough AGs, the
> > filesystem lands in a state where global block reservation can
> > outrun physical block availability. For example, a streaming
> > buffered write on the affected filesystem continues to allow delayed
> > allocation beyond the point where writeback starts to fail due to
> > physical block allocation failures. The expected behavior is for the
> > delalloc block reservation to fail gracefully with -ENOSPC before
> > physical block allocation failure is a possibility.
> 
> *nod*
> 
> > To address this problem, introduce a percpu counter to track the sum
> > of the allocbt block counters already tracked in the AGF. Use the
> > new counter to set these blocks aside at reservation time and thus
> > ensure they cannot be allocated until truly available. Since this is
> > only necessary when large reflink perag reservations are in place
> > and the counter requires a read of each AGF to fully populate, only
> > enforce on reflink enabled filesystems. This allows initialization
> > of the counter at ->pagf_init time because the refcountbt perag
> > reservation init code reads each AGF at mount time.
> 
> Ok, so the mechanism sounds ok, but a per-cpu counter seems like
> premature optimisation. How often are we really updating btree block
> counts? An atomic counter is good for at least a million updates a
> second across a 2 socket 32p machine, and I highly doubt we're
> incrementing/decrementing btree block counts that often on such a
> machine. 
> 
> While per-cpu counters have a fast write side, they come with
> additional algorithmic complexity. Hence if the update rate of the
> counter is not fast enough to need per-cpu counters, we should avoid
> them. Just because other free space counters use per-cpu counters,
> it doesn't mean everything in that path needs to use them...
> 

The use of the percpu counter was more for the read side than the write
side. I think of it more as an abstraction to avoid having to open code
and define a new spin lock just for this. I actually waffled a bit on
just setting a batch count of 0 to get roughly equivalent behavior, but
didn't think it would make much difference.

> > Note that the counter uses a small percpu batch size to allow the
> > allocation paths to keep the primary count accurate enough that the
> > reservation path doesn't ever need to lock and sum the counter.
> > Absolute accuracy is not required here, just that the counter
> > reflects the majority of unavailable blocks so the reservation path
> > fails first.
> 
> And this makes the per-cpu counter scale almost no better than a
> simple atomic counter, because a spinlock requires two atomic
> operations (lock and unlock). Hence a batch count of 4 only reduces
> the atomic op count by half but introduces a lot of extra
> complexity. It won't make a difference to the scalability of
> workloads that hammer the btree block count because contention on
> the internal counter spinlock will occur at close to the same
> concurrency rate as would occur on an atomic counter.
> 

Right, but percpu_counter_read_positive() allows a fast read in the
xfs_mod_fdblocks() path. I didn't use an atomic because I was concerned
about introducing overhead in that path. If we're Ok with whatever
overhead an atomic read might introduce (a spin lock in the worst case
for some arches), then I don't mind switching over to that. I also don't
mind defining a new spin lock and explicitly implementing the lockless
read in xfs_mod_fdblocks(), I just thought it was extra code for little
benefit over the percpu counter. Preference?

Brian

> Hence a per-cpu counter used in this manner seems like a premature
> optimisation to me...
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
>
Dave Chinner Feb. 19, 2021, 2:24 a.m. UTC | #4
On Thu, Feb 18, 2021 at 08:25:20AM -0500, Brian Foster wrote:
> On Thu, Feb 18, 2021 at 11:34:51AM +1100, Dave Chinner wrote:
> > On Wed, Feb 17, 2021 at 08:23:39AM -0500, Brian Foster wrote:
> > > The blocks used for allocation btrees (bnobt and countbt) are
> > > technically considered free space. This is because as free space is
> > > used, allocbt blocks are removed and naturally become available for
> > > traditional allocation. However, this means that a significant
> > > portion of free space may consist of in-use btree blocks if free
> > > space is severely fragmented.
> > > 
> > > On large filesystems with large perag reservations, this can lead to
> > > a rare but nasty condition where a significant amount of physical
> > > free space is available, but the majority of actual usable blocks
> > > consist of in-use allocbt blocks. We have a record of a (~12TB, 32
> > > AG) filesystem with multiple AGs in a state with ~2.5GB or so free
> > > blocks tracked across ~300 total allocbt blocks, but effectively at
> > > 100% full because the free space is entirely consumed by
> > > refcountbt perag reservation.
> > > 
> > > Such a large perag reservation is by design on large filesystems.
> > > The problem is that because the free space is so fragmented, this AG
> > > contributes the 300 or so allocbt blocks to the global counters as
> > > free space. If this pattern repeats across enough AGs, the
> > > filesystem lands in a state where global block reservation can
> > > outrun physical block availability. For example, a streaming
> > > buffered write on the affected filesystem continues to allow delayed
> > > allocation beyond the point where writeback starts to fail due to
> > > physical block allocation failures. The expected behavior is for the
> > > delalloc block reservation to fail gracefully with -ENOSPC before
> > > physical block allocation failure is a possibility.
> > 
> > *nod*
> > 
> > > To address this problem, introduce a percpu counter to track the sum
> > > of the allocbt block counters already tracked in the AGF. Use the
> > > new counter to set these blocks aside at reservation time and thus
> > > ensure they cannot be allocated until truly available. Since this is
> > > only necessary when large reflink perag reservations are in place
> > > and the counter requires a read of each AGF to fully populate, only
> > > enforce on reflink enabled filesystems. This allows initialization
> > > of the counter at ->pagf_init time because the refcountbt perag
> > > reservation init code reads each AGF at mount time.
> > 
> > Ok, so the mechanism sounds ok, but a per-cpu counter seems like
> > premature optimisation. How often are we really updating btree block
> > counts? An atomic counter is good for at least a million updates a
> > second across a 2 socket 32p machine, and I highly doubt we're
> > incrementing/decrementing btree block counts that often on such a
> > machine. 
> > 
> > While per-cpu counters have a fast write side, they come with
> > additional algorithmic complexity. Hence if the update rate of the
> > counter is not fast enough to need per-cpu counters, we should avoid
> > them. Just because other free space counters use per-cpu counters,
> > it doesn't mean everything in that path needs to use them...
> > 
> 
> The use of the percpu counter was more for the read side than the write
> side. I think of it more as an abstraction to avoid having to open code
> and define a new spin lock just for this. I actually waffled a bit on
> just setting a batch count of 0 to get roughly equivalent behavior, but
> didn't think it would make much difference.

That doesn't make much sense to me. percpu counters are not for
avoiding spinlocks for standalone counters - that's what atomic
counters are for. percpu counters are for replacing atomic counters
when they run out of scalability, not spin locks.

> > > Note that the counter uses a small percpu batch size to allow the
> > > allocation paths to keep the primary count accurate enough that the
> > > reservation path doesn't ever need to lock and sum the counter.
> > > Absolute accuracy is not required here, just that the counter
> > > reflects the majority of unavailable blocks so the reservation path
> > > fails first.
> > 
> > And this makes the per-cpu counter scale almost no better than a
> > simple atomic counter, because a spinlock requires two atomic
> > operations (lock and unlock). Hence a batch count of 4 only reduces
> > the atomic op count by half but introduces a lot of extra
> > complexity. It won't make a difference to the scalability of
> > workloads that hammer the btree block count because contention on
> > the internal counter spinlock will occur at close to the same
> > concurrency rate as would occur on an atomic counter.
> > 
> 
> Right, but percpu_counter_read_positive() allows a fast read in the
> xfs_mod_fdblocks() path. I didn't use an atomic because I was concerned
> about introducing overhead in that path. If we're Ok with whatever
> overhead an atomic read might introduce (a spin lock in the worst case
> for some arches), then I don't mind switching over to that. I also don't

The generic definition of atomic_read() is this:

/**
 * atomic_read - read atomic variable
 * @v: pointer of type atomic_t
 *
 * Atomically reads the value of @v.
 */
#ifndef atomic_read
#define atomic_read(v)  READ_ONCE((v)->counter)
#endif

And the only arch specific implementations (x86 and arm64) both have
the same implementation.

And percpu_counter_read_positive() is:

/*
 * It is possible for the percpu_counter_read() to return a small negative
 * number for some counter which should never be negative.
 *
 */
static inline s64 percpu_counter_read_positive(struct percpu_counter *fbc)
{
        /* Prevent reloads of fbc->count */
        s64 ret = READ_ONCE(fbc->count);

        if (ret >= 0)
                return ret;
        return 0;
}

IOWs, atomic_read() has lower read side overhead than the percpu
counter. atomic reads do not require spinlocks or even locked memory
accesses - they are only needed on the write side. Hence the only
reason for using a pcp counter over an atomic is that the atomic is
update rate bound....

FWIW, generic atomics don't use spin locks anymore - they use
cmpxchg() which is generally much more efficient than a spin lock,
even when emulated on platforms that don't have native support for
cmpxchg(). And, really, we don't care one bit about performance or
scalability on those niche platforms; all the high CPU count
platforms where scalability matters have native atomic cmpxchg
operations.
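
Schematically, the cmpxchg()-based pattern referred to here looks like the
following (illustrative only, not the actual generic atomic64 code):

static inline void example_atomic64_add(s64 delta, atomic64_t *v)
{
	s64 old, new;

	do {
		old = atomic64_read(v);
		new = old + delta;
		/* retry if another CPU updated the counter in the meantime */
	} while (atomic64_cmpxchg(v, old, new) != old);
}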

> mind defining a new spin lock and explicitly implementing the lockless
> read in xfs_mod_fdblocks(), I just thought it was extra code for little
> benefit over the percpu counter. Preference?

Spinlocks really hurt scalability in fast paths. atomics are much,
much less hurty, and so the threshold for needing percpu counters is
*much higher* than the threshold for needing lockless algorithms to
avoid catastrophic lock contention.[*]

Cheers,

Dave.

[*] For example, I just added an atomic counter into
xlog_cil_insert_items() that is incremented on every transaction
commit (provides global ordering of items in the CIL checkpoint
placed on per-cpu lists). With the spinlocks from the CIL commit
path, we increase performance from 700k commits/s to 1.4 million
commits/s on a 32p machine and only increase CPU time in
xlog_cil_insert_items() from 1.8% to 3.3%.

IOWs, the atomic counter has almost zero overhead compared to a set
of critical sections protected by multiple spinlocks that were
consuming crazy amounts of CPU time.
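
Schematically, that kind of ordering counter amounts to one atomic increment
per commit stamping a global order onto items that are then added to per-cpu
lists (illustrative names, not the actual CIL code):

struct example_log_item {
	struct list_head	li_cil;
	u32			li_order_id;
};

static atomic_t example_cil_order = ATOMIC_INIT(0);

static void example_cil_insert(struct example_log_item *item,
			       struct list_head *cpu_list)
{
	/* one atomic op per commit provides the global ordering */
	item->li_order_id = atomic_inc_return(&example_cil_order);
	list_add_tail(&item->li_cil, cpu_list);
}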

Before, at 700,000 commits/sec:

 -   75.35%     1.83%  [kernel]            [k] xfs_log_commit_cil
    - 46.35% xfs_log_commit_cil
       - 41.54% _raw_spin_lock
          - 67.30% do_raw_spin_lock
               66.96% __pv_queued_spin_lock_slowpath


After, at 1.4 million commits/sec:

-   21.78%     3.32%  [kernel]            [k] xlog_cil_commit
   - 17.32% xlog_cil_commit
      - 9.04% xfs_log_ticket_ungrant
           2.22% xfs_log_space_wake
        2.13% memcpy_erms
      - 1.42% xfs_buf_item_committing
         - 1.44% xfs_buf_item_release
            - 0.63% xfs_buf_unlock
                 0.63% up
              0.55% xfs_buf_rele
        1.06% xfs_inode_item_format
        1.00% down_read
        0.77% up_read
        0.60% xfs_buf_item_format

You can see the atomic cmpxchg loops to update the grant heads in
xfs_log_ticket_ungrant() are now looking like the most heavily contended
cachelines in the fast path, and the CIL ctx rwsem is showing up on
the profile, too. But there's no spin lock contention, and the
atomics I added to the xlog_cil_commit() fast path aren't having any
adverse impact on CPU usage at this point.

FWIW, the grant head accounting is definitely getting hot here,
as the xfs_trans_reserve side is showing:

     7.35%     7.29%  [kernel]            [k] xlog_grant_add_space 

Which indicates at least 15% of the CPU time on this workload is now
being spent in the grant head accounting loops. Those atomic
counters are where I'd be considering per-cpu counters now as they
are showing signs of contention at ~2.8 million updates/s each.

However, the reason I didn't use per-cpu counters way back when I
made this stuff lockless was that we need an accurate sum of the
grant head space frequently. Hence the summing cost of a per-cpu
counter is just too great for a hot limit accounting path like this,
so I used cmpxchg loops. PCP counters are still too expensive to sum
accurately regularly, so if we want to support a higher transaction
throughput rate we're going to have to get creative here.
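
For reference, an accurate sum of a per-cpu counter has to walk every CPU
under the internal lock, roughly like this (paraphrased from
lib/percpu_counter.c), which is why it is too expensive for a hot limit
accounting path:

s64 __percpu_counter_sum(struct percpu_counter *fbc)
{
	s64 ret;
	int cpu;
	unsigned long flags;

	raw_spin_lock_irqsave(&fbc->lock, flags);
	ret = fbc->count;
	for_each_online_cpu(cpu)
		ret += *per_cpu_ptr(fbc->counters, cpu);
	raw_spin_unlock_irqrestore(&fbc->lock, flags);
	return ret;
}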

Cheers,

Dave.
Brian Foster Feb. 19, 2021, 2:09 p.m. UTC | #5
On Fri, Feb 19, 2021 at 01:24:11PM +1100, Dave Chinner wrote:
> On Thu, Feb 18, 2021 at 08:25:20AM -0500, Brian Foster wrote:
> > On Thu, Feb 18, 2021 at 11:34:51AM +1100, Dave Chinner wrote:
> > > On Wed, Feb 17, 2021 at 08:23:39AM -0500, Brian Foster wrote:
...
> 
> > > > Note that the counter uses a small percpu batch size to allow the
> > > > allocation paths to keep the primary count accurate enough that the
> > > > reservation path doesn't ever need to lock and sum the counter.
> > > > Absolute accuracy is not required here, just that the counter
> > > > reflects the majority of unavailable blocks so the reservation path
> > > > fails first.
> > > 
> > > And this makes the per-cpu counter scale almost no better than a
> > > simple atomic counter, because a spinlock requires two atomic
> > > operations (lock and unlock). Hence a batch count of 4 only reduces
> > > the atomic op count by half but introduces a lot of extra
> > > complexity. It won't make a difference to the scalability of
> > > workloads that hammer the btree block count because contention on
> > > the internal counter spinlock will occur at close to the same
> > > concurrency rate as would occur on an atomic counter.
> > > 
> > 
> > Right, but percpu_counter_read_positive() allows a fast read in the
> > xfs_mod_fdblocks() path. I didn't use an atomic because I was concerned
> > about introducing overhead in that path. If we're Ok with whatever
> > overhead an atomic read might introduce (a spin lock in the worst case
> > for some arches), then I don't mind switching over to that. I also don't
> 
> The generic definition of atomic_read() is this:
> 
> /**
>  * atomic_read - read atomic variable
>  * @v: pointer of type atomic_t
>  *
>  * Atomically reads the value of @v.
>  */
> #ifndef atomic_read
> #define atomic_read(v)  READ_ONCE((v)->counter)
> #endif
> 
> And the only arch specific implementations (x86 and arm64) both have
> the same implementation.
> 

That's the 32-bit variant, FWIW. It looks like the x86-64 and arm64
atomic64 variants are essentially the same, but the generic 64-bit
implementation is:

s64 atomic64_read(const atomic64_t *v)
{
        unsigned long flags;
        raw_spinlock_t *lock = lock_addr(v);
        s64 val;

        raw_spin_lock_irqsave(lock, flags);
        val = v->counter;
        raw_spin_unlock_irqrestore(lock, flags);
        return val;
}

Arm, powerpc, and s390 appear to have custom implementations which I
assume are more efficient than this. x86 has an arch_atomic64_read()
that falls through a maze of directives with at least a couple
underlying implementations. One appears to be atomic64_read_cx8() which
uses the cache line lock prefix thing. It's not clear to me where the
other goes, or if this ever falls back to the generic implementation..

> And percpu_counter_read_positive() is:
> 
> /*
>  * It is possible for the percpu_counter_read() to return a small negative
>  * number for some counter which should never be negative.
>  *
>  */
> static inline s64 percpu_counter_read_positive(struct percpu_counter *fbc)
> {
>         /* Prevent reloads of fbc->count */
>         s64 ret = READ_ONCE(fbc->count);
> 
>         if (ret >= 0)
>                 return ret;
>         return 0;
> }
> 
> IOWs, atomic_read() has lower read side overhead than the percpu
> counter. atomic reads do not require spinlocks or even locked memory
> accesses - they are only needed on the write side. Hence the only
> reason for using a pcp counter over an atomic is that the atomic is
> update rate bound....
> 
> FWIW, generic atomics don't use spin locks anymore - they use
> cmpxchg() which is generally much more efficient than a spin lock,
> even when emulated on platforms that don't have native support for
> cmpxchg(). And, really, we don't care one bit about performance or
> scalability on those niche platforms; all the high CPU count
> platforms where scalability matters have native atomic cmpxchg
> operations.
> 
> > mind defining a new spin lock and explicitly implementing the lockless
> > read in xfs_mod_fdblocks(), I just thought it was extra code for little
> > benefit over the percpu counter. Preference?
> 
> Spinlocks really hurt scalability in fast paths. atomics are much,
> much less hurty, and so the threshold for needing percpu counters is
> *much higher* than the threshold for needing lockless algorithms to
> avoid catastrophic lock contention.[*]
> 

As shown above, this is why I wanted to avoid introducing a spinlock on
the read side. ;) IOW, the functional behavior I was aiming for was:

update code:
	spin_lock();
	counter += delta;
	spin_unlock();

read code:
	READ_ONCE(counter);

I was initially going to pass a batch size of 0 because performance of
the update side is not important. The use of percpu was not for
scalability reasons at all. It was a somewhat lazy reuse of code to
avoid defining a new spinlock just for this.

In any event, the atomic64 read side is essentially equivalent to this
on x86-64 and arm64. If we're comfortable with the remaining custom
atomic64 implementations for other common arches (ppc64, s390x, x86,
etc.) and simply don't care enough about the additional overhead on the
arches that might fall back to the generic implementation, then that is
good enough reason to me to switch to an atomic...

Brian

> Cheers,
> 
> Dave.
> 
> [*] For example, I just added an atomic counter into
> xlog_cil_insert_items() that is incremented on every transaction
> commit (provides global ordering of items in the CIL checkpoint
> placed on per-cpu lists). With the spinlocks from the CIL commit
> path, we increase performance from 700k commits/s to 1.4 million
> commits/s on a 32p machine and only increase CPU time in
> xlog_cil_insert_items() from 1.8% to 3.3%.
> 
> IOWs, the atomic counter has almost zero overhead compared to a set
> of critical sections protected by multiple spinlocks that were
> consuming crazy amounts of CPU time.
> 
> Before, at 700,000 commits/sec:
> 
>  -   75.35%     1.83%  [kernel]            [k] xfs_log_commit_cil
>     - 46.35% xfs_log_commit_cil
>        - 41.54% _raw_spin_lock
>           - 67.30% do_raw_spin_lock
>                66.96% __pv_queued_spin_lock_slowpath
> 
> 
> After, at 1.4 million commits/sec:
> 
> -   21.78%     3.32%  [kernel]            [k] xlog_cil_commit
>    - 17.32% xlog_cil_commit
>       - 9.04% xfs_log_ticket_ungrant
>            2.22% xfs_log_space_wake
>         2.13% memcpy_erms
>       - 1.42% xfs_buf_item_committing
>          - 1.44% xfs_buf_item_release
>             - 0.63% xfs_buf_unlock
>                  0.63% up
>               0.55% xfs_buf_rele
>         1.06% xfs_inode_item_format
>         1.00% down_read
>         0.77% up_read
>         0.60% xfs_buf_item_format
> 
> You can see the atomic cmpxchg loops to update the grant heads in
> xfs_log_ticket_ungrant() are now looking like the most heavily contended
> cachelines in the fast path, and the CIL ctx rwsem is showing up on
> the profile, too. But there's no spin lock contention, and the
> atomics I added to the xlog_cil_commit() fast path aren't having any
> adverse impact on CPU usage at this point.
> 
> FWIW, the grant head accounting is definitely getting hot here,
> as the xfs_trans_reserve side is showing:
> 
>      7.35%     7.29%  [kernel]            [k] xlog_grant_add_space 
> 
> Which indicates at least 15% of the CPU time on this workload is now
> being spent in the grant head accounting loops. Those atomic
> counters are where I'd be considering per-cpu counters now as they
> are showing signs of contention at ~2.8 million updates/s each.
> 
> However, the reason I didn't use per-cpu counters way back when I
> made this stuff lockless was that we need an accurate sum of the
> grant head space frequently. Hence the summing cost of a per-cpu
> counter is just too great for a hot limit accounting path like this,
> so I used cmpxchg loops. PCP counters are still too expensive to sum
> accurately regularly, so if we want to support a higher transaction
> throughput rate we're going to have to get creative here.
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
>
Dave Chinner Feb. 19, 2021, 9:28 p.m. UTC | #6
On Fri, Feb 19, 2021 at 09:09:19AM -0500, Brian Foster wrote:
> On Fri, Feb 19, 2021 at 01:24:11PM +1100, Dave Chinner wrote:
> > On Thu, Feb 18, 2021 at 08:25:20AM -0500, Brian Foster wrote:
> > > On Thu, Feb 18, 2021 at 11:34:51AM +1100, Dave Chinner wrote:
> > > > On Wed, Feb 17, 2021 at 08:23:39AM -0500, Brian Foster wrote:
> ...
> > 
> > > > > Note that the counter uses a small percpu batch size to allow the
> > > > > allocation paths to keep the primary count accurate enough that the
> > > > > reservation path doesn't ever need to lock and sum the counter.
> > > > > Absolute accuracy is not required here, just that the counter
> > > > > reflects the majority of unavailable blocks so the reservation path
> > > > > fails first.
> > > > 
> > > > And this makes the per-cpu counter scale almost no better than a
> > > > simple atomic counter, because a spinlock requires two atomic
> > > > operations (lock and unlock). Hence a batch count of 4 only reduces
> > > > the atomic op count by half but introduces a lot of extra
> > > > complexity. It won't make a difference to the scalability of
> > > > workloads that hammer the btree block count because contention on
> > > > the internal counter spinlock will occur at close to the same
> > > > concurrency rate as would occur on an atomic counter.
> > > > 
> > > 
> > > Right, but percpu_counter_read_positive() allows a fast read in the
> > > xfs_mod_fdblocks() path. I didn't use an atomic because I was concerned
> > > about introducing overhead in that path. If we're Ok with whatever
> > > overhead an atomic read might introduce (a spin lock in the worst case
> > > for some arches), then I don't mind switching over to that. I also don't
> > 
> > The generic definition of atomic_read() is this:
> > 
> > /**
> >  * atomic_read - read atomic variable
> >  * @v: pointer of type atomic_t
> >  *
> >  * Atomically reads the value of @v.
> >  */
> > #ifndef atomic_read
> > #define atomic_read(v)  READ_ONCE((v)->counter)
> > #endif
> > 
> > And the only arch specific implementations (x86 and arm64) both have
> > the same implementation.
> > 
> 
> That's the 32-bit variant, FWIW. It looks like the x86-64 and arm64
> atomic64 variants are essentially the same, but the generic 64-bit
> implementation is:
> 
> s64 atomic64_read(const atomic64_t *v)
> {
>         unsigned long flags;
>         raw_spinlock_t *lock = lock_addr(v);
>         s64 val;
> 
>         raw_spin_lock_irqsave(lock, flags);
>         val = v->counter;
>         raw_spin_unlock_irqrestore(lock, flags);
>         return val;
> }

That's necessary for generic 32 bit platforms that may do two 32 bit
ops to store a 64 bit val. This is why xfs_trans_ail_copy_lsn()
exists - to be able to do atomic read and store of an lsn on a 32
bit platform so we don't get a bad LSN from torn 32 bit loads.
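
Roughly, the pattern being described is (a sketch of the idea, not the exact
kernel source):

#if BITS_PER_LONG != 64
static inline void
xfs_trans_ail_copy_lsn(
	struct xfs_ail	*ailp,
	xfs_lsn_t	*dst,
	xfs_lsn_t	*src)
{
	ASSERT(sizeof(xfs_lsn_t) == 8);
	/* a 64 bit store is two 32 bit stores here; lock to avoid torn reads */
	spin_lock(&ailp->ail_lock);
	*dst = *src;
	spin_unlock(&ailp->ail_lock);
}
#else
static inline void
xfs_trans_ail_copy_lsn(
	struct xfs_ail	*ailp,
	xfs_lsn_t	*dst,
	xfs_lsn_t	*src)
{
	/* 64 bit loads/stores are atomic; no locking needed */
	*dst = *src;
}
#endif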

But we *just don't care* about scalability on 32 bit platforms -
they just don't have the CPU count for this to matter one bit to
performance. And if you worry about scaling down then the spin lock
goes away for all the single CPU 32 bit platforms, too.

> Arm, powerpc, and s390 appear to have custom implementations which I
> assume are more efficient than this.

Yes, they are much more efficient than spinlocks. On all the 64 bit
CPU platforms the read side essentially boils down to a single load
instruction. This is documented in Documentation/atomic_t.txt:

SEMANTICS
---------

Non-RMW ops:

The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
smp_store_release() respectively. Therefore, if you find yourself only using
the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
and are doing it wrong.
.....


> x86 has an arch_atomic64_read() that falls through a maze of
> directives with at least a couple underlying implementations.  One
> appears to be atomic64_read_cx8() which uses the cache line lock
> prefix thing. It's not clear to me where the other goes, or if
> this ever falls back to the generic implementation..

The arch/x86/include/asm/atomic64_64.h version is the one that is used
on 64 bit CPUs and hence the only one we care about. It is
READ_ONCE(v->counter)...

The atomic64_32.h header is for 32 bit CPUs and only some of them
support cmpxchg8b, which is an atomic 8 byte cmpxchg. That's still
faster and more efficient than a spin lock. The oldest of the x86
CPUs fall back to something else, but we *really* don't care about 25+
year old 32 bit CPUs...

> > Spinlocks really hurt scalability in fast paths. atomics are much,
> > much less hurty, and so the threshold for needing percpu counters is
> > *much higher* than the threshold for needing lockless algorithms to
> > avoid catastrophic lock contention.[*]
> > 
> 
> As shown above, this is why I wanted to avoid introducing a spinlock on
> the read side. ;) IOW, the functional behavior I was aiming for was:
> 
> update code:
> 	spin_lock();
> 	counter += delta;
> 	spin_unlock();
> 
> read code:
> 	READ_ONCE(counter);

Which is simply atomic64_add(cnt, delta); and atomic64_read(cnt);

> I was initially going to pass a batch size of 0 because performance of
> the update side is not important. The use of percpu was not for
> scalability reasons at all. It was a somewhat lazy reuse of code to
> avoid defining a new spinlock just for this.

This is what atomics are for.

> In any event, the atomic64 read side is essentially equivalent to this
> on x86-64 and arm64. If we're comfortable with the remaining custom
> atomic64 implementations for other common arches (ppc64, s390x, x86,
> etc.) and simply don't care enough about the additional overhead on the
> arches that might fall back to the generic implementation, then that is
> good enough reason to me to switch to an atomic...

We've been comfortable with these for well over a decade. That is,
we use atomic64 heavily in the lockless grant head and log tail
accounting algorithms, which are the hottest accounting paths in
the transaction subsystem. They are the counters I pointed out were
only just starting to show scalability issues at 2.8 million updates
a second in my last email...

Cheers,

Dave.

Patch

diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
index 0c623d3c1036..81ebcddcba7a 100644
--- a/fs/xfs/libxfs/xfs_alloc.c
+++ b/fs/xfs/libxfs/xfs_alloc.c
@@ -36,6 +36,13 @@  struct workqueue_struct *xfs_alloc_wq;
 #define	XFSA_FIXUP_BNO_OK	1
 #define	XFSA_FIXUP_CNT_OK	2
 
+/*
+ * Use a small batch size for the btree blocks counter since modifications
+ * mainly occur in the block allocation path. This allows the read side to
+ * remain lockless.
+ */
+#define XFS_BTREE_BLOCKS_BATCH	4
+
 STATIC int xfs_alloc_ag_vextent_exact(xfs_alloc_arg_t *);
 STATIC int xfs_alloc_ag_vextent_near(xfs_alloc_arg_t *);
 STATIC int xfs_alloc_ag_vextent_size(xfs_alloc_arg_t *);
@@ -2746,6 +2753,8 @@  xfs_alloc_get_freelist(
 	if (btreeblk) {
 		be32_add_cpu(&agf->agf_btreeblks, 1);
 		pag->pagf_btreeblks++;
+		percpu_counter_add_batch(&mp->m_btree_blks, 1,
+					 XFS_BTREE_BLOCKS_BATCH);
 		logflags |= XFS_AGF_BTREEBLKS;
 	}
 
@@ -2853,6 +2862,8 @@  xfs_alloc_put_freelist(
 	if (btreeblk) {
 		be32_add_cpu(&agf->agf_btreeblks, -1);
 		pag->pagf_btreeblks--;
+		percpu_counter_add_batch(&mp->m_btree_blks, -1,
+					 XFS_BTREE_BLOCKS_BATCH);
 		logflags |= XFS_AGF_BTREEBLKS;
 	}
 
@@ -3055,6 +3066,8 @@  xfs_alloc_read_agf(
 	if (!pag->pagf_init) {
 		pag->pagf_freeblks = be32_to_cpu(agf->agf_freeblks);
 		pag->pagf_btreeblks = be32_to_cpu(agf->agf_btreeblks);
+		percpu_counter_add_batch(&mp->m_btree_blks, pag->pagf_btreeblks,
+					 XFS_BTREE_BLOCKS_BATCH);
 		pag->pagf_flcount = be32_to_cpu(agf->agf_flcount);
 		pag->pagf_longest = be32_to_cpu(agf->agf_longest);
 		pag->pagf_levels[XFS_BTNUM_BNOi] =
diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index 52370d0a3f43..41ec14aafbff 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -1178,6 +1178,7 @@  xfs_mod_fdblocks(
 	int64_t			lcounter;
 	long long		res_used;
 	s32			batch;
+	uint64_t		set_aside = mp->m_alloc_set_aside;
 
 	if (delta > 0) {
 		/*
@@ -1217,8 +1218,18 @@  xfs_mod_fdblocks(
 	else
 		batch = XFS_FDBLOCKS_BATCH;
 
+	/*
+	 * Set aside allocbt blocks on reflink filesystems because COW remaps
+	 * can dip into the reserved block pool. This is problematic if free
+	 * space is fragmented and m_fdblocks tracks a significant number of
+	 * allocbt blocks. Note this also ensures the counter is populated since
+	 * perag reservation reads all AGFs at mount time.
+	 */
+	if (xfs_sb_version_hasreflink(&mp->m_sb))
+		set_aside += percpu_counter_read_positive(&mp->m_btree_blks);
+
 	percpu_counter_add_batch(&mp->m_fdblocks, delta, batch);
-	if (__percpu_counter_compare(&mp->m_fdblocks, mp->m_alloc_set_aside,
+	if (__percpu_counter_compare(&mp->m_fdblocks, set_aside,
 				     XFS_FDBLOCKS_BATCH) >= 0) {
 		/* we had space! */
 		return 0;
diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
index 659ad95fe3e0..2f94088b4170 100644
--- a/fs/xfs/xfs_mount.h
+++ b/fs/xfs/xfs_mount.h
@@ -170,6 +170,12 @@  typedef struct xfs_mount {
 	 * extents or anything related to the rt device.
 	 */
 	struct percpu_counter	m_delalloc_blks;
+	/*
+	 * Optional count of btree blocks in use across all AGs. Only used when
+	 * reflink is enabled and is not 100% accurate. Helps prevent block
+	 * reservation from attempting to reserve allocation btree blocks.
+	 */
+	struct percpu_counter	m_btree_blks;
 
 	struct radix_tree_root	m_perag_tree;	/* per-ag accounting info */
 	spinlock_t		m_perag_lock;	/* lock for m_perag_tree */
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index 21b1d034aca3..6d035b7c0d0b 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -1001,8 +1001,14 @@  xfs_init_percpu_counters(
 	if (error)
 		goto free_fdblocks;
 
+	error = percpu_counter_init(&mp->m_btree_blks, 0, GFP_KERNEL);
+	if (error)
+		goto free_delalloc;
+
 	return 0;
 
+free_delalloc:
+	percpu_counter_destroy(&mp->m_delalloc_blks);
 free_fdblocks:
 	percpu_counter_destroy(&mp->m_fdblocks);
 free_ifree:
@@ -1031,6 +1037,7 @@  xfs_destroy_percpu_counters(
 	ASSERT(XFS_FORCED_SHUTDOWN(mp) ||
 	       percpu_counter_sum(&mp->m_delalloc_blks) == 0);
 	percpu_counter_destroy(&mp->m_delalloc_blks);
+	percpu_counter_destroy(&mp->m_btree_blks);
 }
 
 static void