[0/7,v3] xfs: log recovery fixes

Message ID 20220315064241.3133751-1-david@fromorbit.com

Dave Chinner March 15, 2022, 6:42 a.m. UTC
Willy reported that generic/530 had started hanging on his test
machines, and I've been trying to reproduce the problem. While I
haven't reproduced the exact hang he's been having, I've found a
couple of other hangs while running g/530 in a tight loop on a couple
of test machines.

The first 3 patches are defensive fixes - the log worker acts as a
watchdog, and the issues in patches 2 and 3 were triggered in my
testing of g/530 and led to 30s delays that the log worker watchdog
caught. Without the watchdog, these may actually be deadlock
triggers.

The 4th patch is the one that fixes the problem Willy reported.
It is a regression from the conversion of AIL pushing to use
non-blocking CIL flushes. It is unknown why this suddenly started
showing up on Willy's test machine right now, and why only on that
machine, but it is clearly a problem. This patch catches the state
that leads to the deadlock and breaks it with an immediate log
force to flush any pending iclogs.
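
A minimal sketch of the shape of that fix, not the code in patch 4:
xfs_log_force() and XFS_LOG_SYNC are real kernel interfaces, but the
function name and the "stuck on pinned items" flag below are made-up
illustrations of the state being detected.

	/*
	 * Illustrative sketch only -- not the actual patch. The helper
	 * name and the stuck_on_pinned flag are hypothetical; only
	 * xfs_log_force() and XFS_LOG_SYNC are real interfaces.
	 */
	static void
	xfs_ail_break_pinned_stall(
		struct xfs_mount	*mp,
		bool			stuck_on_pinned)
	{
		if (!stuck_on_pinned)
			return;

		/*
		 * A synchronous log force writes out and waits for any
		 * pending iclogs, unpinning the items that are holding
		 * up the AIL push and breaking the potential deadlock.
		 */
		xfs_log_force(mp, XFS_LOG_SYNC);
	}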

In testing these patches, I found generic/388 was failing frequently
because log recovery was finding uninitialised inode clusters. That
turns out to be a zero-day race condition in the forced shutdown
code, and it is fixed by patches 5-7. In short, we can't abort
writeback of log items before we shut down the log (because log
shutdown is separate to -mount- shutdown), as removing aborted log
items can move the tail of the log forward, and that can be
propagated to the on-disk log and corrupt it if the timing is just
right.
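
A rough ordering sketch of the invariant those patches enforce; the
helper names below are hypothetical (neither function exists in the
kernel), only the ordering matters.

	/*
	 * Ordering sketch only -- the helpers here are made up. The
	 * log must be marked shut down before aborted log items are
	 * removed from the AIL; removing them first can move the AIL
	 * tail forward, and a still-live log can then write that new
	 * tail to disk and corrupt the on-disk log.
	 */
	static void
	forced_shutdown_order(struct xfs_mount *mp)
	{
		mark_log_shutdown(mp);		/* step 1: stop all log writes */
		abort_ail_writeback(mp);	/* step 2: now safe to abort items */
	}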

Fixing this takes failures of g/388 from 1 in 5-10 runs down to 1 in
100 runs. There is a change in patch 7 where I mention "I'm not sure
how this can happen here, but it's handled elsewhere like this"; it
avoids a double removal of an aborted inode from the AIL that results
in an ASSERT failure. I *think* I now know how that can occur, but
fixing it requires another set of patches, and it may be a recent
regression rather than a long-standing issue.

Version 3:
- added fixes for generic/388 failures.

Version 2:
- https://lore.kernel.org/linux-xfs/20220309015512.2648074-1-david@fromorbit.com/
- updated to 5.17-rc7
- tested by Willy.

Version 1:
- https://lore.kernel.org/linux-xfs/20220307053252.2534616-1-david@fromorbit.com/