Message ID | 20210727062053.11129-3-allison.henderson@oracle.com (mailing list archive)
---|---
State | Accepted
Series | Delayed Attributes
On 27 Jul 2021 at 11:50, Allison Henderson wrote:
> From: "Darrick J. Wong" <djwong@kernel.org>
>
> When there are no ongoing transactions and the log contents have been
> checkpointed back into the filesystem, the log performs 'covering',
> which is to say that it logs a dummy transaction to record the fact
> that the tail has caught up with the head. This is a good time to
> clear log incompat feature flags, because they are flags that are
> temporarily set to limit the range of kernels that can replay a dirty
> log.
>
> Since it's possible that some other higher level thread is about to
> start logging items protected by a log incompat flag, we create a
> rwsem so that upper level threads can coordinate this with the log.
> It would probably be more performant to use a percpu rwsem, but the
> ability to /try/ taking the write lock during covering is critical,
> and percpu rwsems do not provide that.

Looks good to me.

Reviewed-by: Chandan Babu R <chandanrlinux@gmail.com>

> Signed-off-by: Darrick J. Wong <djwong@kernel.org>
> Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
> Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
> ---
>  fs/xfs/xfs_log.c      | 49 +++++++++++++++++++++++++++++++++++++++++++++++++
>  fs/xfs/xfs_log.h      |  3 +++
>  fs/xfs/xfs_log_priv.h |  3 +++
>  3 files changed, 55 insertions(+)
>
> [patch diff snipped]
On 7/27/21 5:46 AM, Chandan Babu R wrote:
> On 27 Jul 2021 at 11:50, Allison Henderson wrote:
>> From: "Darrick J. Wong" <djwong@kernel.org>
>>
>> [commit message and patch snipped]
>
> Looks good to me.
>
> Reviewed-by: Chandan Babu R <chandanrlinux@gmail.com>

Thank you!
Allison
diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
index 9254405..c58a0d7 100644
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ -1338,6 +1338,32 @@ xfs_log_work_queue(
 }
 
 /*
+ * Clear the log incompat flags if we have the opportunity.
+ *
+ * This only happens if we're about to log the second dummy transaction as part
+ * of covering the log and we can get the log incompat feature usage lock.
+ */
+static inline void
+xlog_clear_incompat(
+	struct xlog		*log)
+{
+	struct xfs_mount	*mp = log->l_mp;
+
+	if (!xfs_sb_has_incompat_log_feature(&mp->m_sb,
+				XFS_SB_FEAT_INCOMPAT_LOG_ALL))
+		return;
+
+	if (log->l_covered_state != XLOG_STATE_COVER_DONE2)
+		return;
+
+	if (!down_write_trylock(&log->l_incompat_users))
+		return;
+
+	xfs_clear_incompat_log_features(mp);
+	up_write(&log->l_incompat_users);
+}
+
+/*
  * Every sync period we need to unpin all items in the AIL and push them to
  * disk. If there is nothing dirty, then we might need to cover the log to
  * indicate that the filesystem is idle.
@@ -1363,6 +1389,7 @@ xfs_log_worker(
 		 * synchronously log the superblock instead to ensure the
 		 * superblock is immediately unpinned and can be written back.
 		 */
+		xlog_clear_incompat(log);
 		xfs_sync_sb(mp, true);
 	} else
 		xfs_log_force(mp, 0);
@@ -1450,6 +1477,8 @@ xlog_alloc_log(
 	}
 	log->l_sectBBsize = 1 << log2_size;
 
+	init_rwsem(&log->l_incompat_users);
+
 	xlog_get_iclog_buffer_size(mp, log);
 
 	spin_lock_init(&log->l_icloglock);
@@ -3895,3 +3924,23 @@ xfs_log_in_recovery(
 
 	return log->l_flags & XLOG_ACTIVE_RECOVERY;
 }
+
+/*
+ * Notify the log that we're about to start using a feature that is protected
+ * by a log incompat feature flag. This will prevent log covering from
+ * clearing those flags.
+ */
+void
+xlog_use_incompat_feat(
+	struct xlog		*log)
+{
+	down_read(&log->l_incompat_users);
+}
+
+/* Notify the log that we've finished using log incompat features. */
+void
+xlog_drop_incompat_feat(
+	struct xlog		*log)
+{
+	up_read(&log->l_incompat_users);
+}
diff --git a/fs/xfs/xfs_log.h b/fs/xfs/xfs_log.h
index 813b972..b274fb9 100644
--- a/fs/xfs/xfs_log.h
+++ b/fs/xfs/xfs_log.h
@@ -142,4 +142,7 @@ bool xfs_log_in_recovery(struct xfs_mount *);
 
 xfs_lsn_t xlog_grant_push_threshold(struct xlog *log, int need_bytes);
 
+void xlog_use_incompat_feat(struct xlog *log);
+void xlog_drop_incompat_feat(struct xlog *log);
+
 #endif	/* __XFS_LOG_H__ */
diff --git a/fs/xfs/xfs_log_priv.h b/fs/xfs/xfs_log_priv.h
index 4c41bbfa..c507041 100644
--- a/fs/xfs/xfs_log_priv.h
+++ b/fs/xfs/xfs_log_priv.h
@@ -449,6 +449,9 @@ struct xlog {
 	xfs_lsn_t		l_recovery_lsn;
 
 	uint32_t		l_iclog_roundoff;/* padding roundoff */
+
+	/* Users of log incompat features should take a read lock. */
+	struct rw_semaphore	l_incompat_users;
 };
 
 #define XLOG_BUF_CANCEL_BUCKET(log, blkno) \