
[RFC,v3] xfs: automatic relogging experiment

Message ID: 20191125185523.47556-1-bfoster@redhat.com (mailing list archive)
State: New, archived
Series: [RFC,v3] xfs: automatic relogging experiment

Commit Message

Brian Foster Nov. 25, 2019, 6:55 p.m. UTC
POC to automatically relog the quotaoff start intent. This approach
involves neither reservation stealing nor transaction rolling, so
deadlock avoidance is not guaranteed. The tradeoff is simplicity and
an approach that might be effective enough in practice.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---

Here's a quickly hacked up version of what I was rambling about in the
cover letter. I wanted to post this for comparison. As noted above, this
doesn't necessarily guarantee deadlock avoidance with transaction
rolling, etc., but might be good enough in practice for the current use
cases (particularly with CIL context size fixes). Even if not, there's a
clear enough path to tracking relog reservation with a ticket in the CIL
context in a manner that is more conducive to batching. We also may be
able to union ->li_cb() with a ->li_relog() variant to relog intent
items as dfops currently does for things like EFIs that don't currently
support direct relogging of the same object.
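
Just to sketch the union idea (completely hypothetical, not in this
patch; ->li_relog() returning a replacement item mirrors how dfops
relogs intents by logging a new one):

	struct xfs_log_item {
		...
		union {
			/* buffer item iodone callback func */
			void	(*li_cb)(struct xfs_buf *bp,
					 struct xfs_log_item *lip);
			/* hypothetical: relog, return replacement item */
			struct xfs_log_item *(*li_relog)(
					struct xfs_log_item *lip,
					struct xfs_trans *tp);
		};
	};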

Thoughts about using something like this as an intermediate solution,
provided it holds up against some stress testing?

Brian

 fs/xfs/xfs_log.c         |  1 +
 fs/xfs/xfs_log_cil.c     | 50 +++++++++++++++++++++++++++++++++++++++-
 fs/xfs/xfs_log_priv.h    |  2 ++
 fs/xfs/xfs_qm_syscalls.c |  6 +++++
 fs/xfs/xfs_trans.h       |  5 +++-
 5 files changed, 62 insertions(+), 2 deletions(-)

Comments

Brian Foster Nov. 27, 2019, 3:12 p.m. UTC | #1
On Mon, Nov 25, 2019 at 01:55:23PM -0500, Brian Foster wrote:
> POC to automatically relog the quotaoff start intent. This approach
> involves neither reservation stealing nor transaction rolling, so
> deadlock avoidance is not guaranteed. The tradeoff is simplicity and
> an approach that might be effective enough in practice.
> 
> Signed-off-by: Brian Foster <bfoster@redhat.com>
> ---
> 
> Here's a quickly hacked up version of what I was rambling about in the
> cover letter. I wanted to post this for comparison. As noted above, this
> doesn't necessarily guarantee deadlock avoidance with transaction
> rolling, etc., but might be good enough in practice for the current use
> cases (particularly with CIL context size fixes). Even if not, there's a
> clear enough path to tracking relog reservation with a ticket in the CIL
> context in a manner that is more conducive to batching. We also may be
> able to union ->li_cb() with a ->li_relog() variant to relog intent
> items as dfops currently does for things like EFIs that don't currently
> support direct relogging of the same object.
> 
> Thoughts about using something like this as an intermediate solution,
> provided it holds up against some stress testing?
> 

In thinking about this a bit more, it occurs to me that there might be
an elegant way to provide simplicity and flexibility by incorporating
automatic relogging into xfsaild rather than tying it into the CIL or
having it completely independent (as the past couple of RFCs have done).
From the simplicity standpoint, xfsaild already tracks logged+committed
items for us, so that eliminates the need for independent "RIL"
tracking. xfsaild already issues log item callbacks that could translate
the new log item LI_RELOG state bit to a corresponding XFS_ITEM_RELOG
return code that triggers a relog of the item. We also know the lsn of
the item at this point and can compare it to the tail lsn so that such
items are only relogged when they sit at the log tail. This is more
efficient
than the current proposal to automatically relog on every CIL
checkpoint.
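
E.g., a hedged sketch of the check in xfsaild's push handler
(XFS_ITEM_RELOG is the hypothetical new return code; xfs_ail_min_lsn()
already exists in xfs_trans_ail.c):

	/* in xfsaild_push_item(), before ->iop_push() is called */
	if (test_bit(XFS_LI_RELOG, &lip->li_flags) &&
	    XFS_LSN_CMP(lip->li_lsn, xfs_ail_min_lsn(ailp)) <= 0)
		return XFS_ITEM_RELOG;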

From the flexibility standpoint, xfsaild already knows how to safely
access all types of log items via the log ops vector. IOW, it knows how
to exclusively access a log item for writeback, so it would just need
generic/mechanical bits to relog a particular item instead. The caveat
to this is that the task that requests auto relog must relinquish locks
for relogging to actually take place. For example, the sequence for a
traditional (non-intent) log item would be something like the following
(a code sketch follows the list):

	- lock object
	- dirty in transaction
	- set lip relog bit
	- commit transaction
	- unlock object (background relogging now enabled)
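
A hedged sketch of that sequence in code, using an inode as an
arbitrary example object (the tr_ichange reservation is just a
placeholder; XFS_LI_RELOG is the bit added by the patch below):

	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_ichange, 0, 0, 0, &tp);
	if (error)
		return error;
	xfs_ilock(ip, XFS_ILOCK_EXCL);			/* lock object */
	xfs_trans_ijoin(tp, ip, 0);
	xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);	/* dirty in transaction */
	set_bit(XFS_LI_RELOG, &ip->i_itemp->ili_item.li_flags);
	error = xfs_trans_commit(tp);
	/* unlock object, background relogging now enabled */
	xfs_iunlock(ip, XFS_ILOCK_EXCL);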

At this point the log item is essentially pinned in-core without pinning
the tail of the log. It is free to be modified by any unrelated
transaction without conflict to either task, but xfsaild will not write
it back. Sometime later the original task would have to lock the item
and clear the relog state to cancel the sequence. The task could simply
release the item to allow writeback or join it to a final transaction
with the relog bit cleared.
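
Cancellation would then look something like (again a rough sketch):

	xfs_ilock(ip, XFS_ILOCK_EXCL);
	clear_bit(XFS_LI_RELOG, &ip->i_itemp->ili_item.li_flags);
	xfs_iunlock(ip, XFS_ILOCK_EXCL);	/* writeback allowed again */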

There's still room for a custom ->iop_relog() callback or some such for
any items that require special relog handling. If releasing locks is
ever a concern for a particular operation, that callback could overload
the generic relog mechanism and serve as a notification to the lock
holder to roll its own transaction without ever unlocking the item. TBH
I'm still not sure there's a use case for this kind of non-intent
relogging, but ISTM this design model accommodates it reasonably enough
with minimal complexity.
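
For the quotaoff case, such a callback could look much like the relog
worker in the patch below, e.g. (hypothetical, not implemented here):

	STATIC void
	xfs_qm_qoff_logitem_relog(
		struct xfs_log_item	*lip,
		struct xfs_trans	*tp)
	{
		xfs_trans_add_item(tp, lip);
		set_bit(XFS_LI_DIRTY, &lip->li_flags);
		tp->t_flags |= XFS_TRANS_DIRTY;
	}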

There are still some open implementation questions around managing the
relog transaction(s), determining how much reservation is needed, how to
acquire it, etc. We most likely do not want to block the xfsaild task on
acquiring reservation, but it might be able to regrant a permanent
ticket or simply kick off the task of replenishing relog reservation to
a workqueue.
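
E.g., a rough sketch of the workqueue variant (the m_relog_ticket and
m_relog_regrant_work members on xfs_mount are hypothetical;
xfs_log_regrant() is the existing regrant interface):

	static void
	xfs_relog_regrant_worker(
		struct work_struct	*work)
	{
		struct xfs_mount	*mp = container_of(work,
					struct xfs_mount, m_relog_regrant_work);
		int			error;

		/* block on log space here rather than in xfsaild */
		error = xfs_log_regrant(mp, mp->m_relog_ticket);
		if (error)
			xfs_warn(mp, "relog reservation regrant failed");
	}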

Thoughts?

Brian


Patch

diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
index 6a147c63a8a6..4fb3c3156ea2 100644
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ -1086,6 +1086,7 @@  xfs_log_item_init(
 	INIT_LIST_HEAD(&item->li_cil);
 	INIT_LIST_HEAD(&item->li_bio_list);
 	INIT_LIST_HEAD(&item->li_trans);
+	INIT_LIST_HEAD(&item->li_ril);
 }
 
 /*
diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c
index 48435cf2aa16..c16ebc448a40 100644
--- a/fs/xfs/xfs_log_cil.c
+++ b/fs/xfs/xfs_log_cil.c
@@ -19,6 +19,44 @@ 
 
 struct workqueue_struct *xfs_discard_wq;
 
+static void
+xfs_relog_worker(
+	struct work_struct	*work)
+{
+	struct xfs_cil_ctx	*ctx = container_of(work, struct xfs_cil_ctx, relog_work);
+	struct xfs_mount	*mp = ctx->cil->xc_log->l_mp;
+	struct xfs_trans	*tp;
+	struct xfs_log_item	*lip, *lipp;
+	int			error;
+
+	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_qm_quotaoff, 0, 0, 0, &tp);
+	ASSERT(!error);
+
+	list_for_each_entry_safe(lip, lipp, &ctx->relog_list, li_ril) {
+		list_del_init(&lip->li_ril);
+
+		if (!test_bit(XFS_LI_RELOG, &lip->li_flags))
+			continue;
+
+		xfs_trans_add_item(tp, lip);
+		set_bit(XFS_LI_DIRTY, &lip->li_flags);
+		tp->t_flags |= XFS_TRANS_DIRTY;
+	}
+
+	error = xfs_trans_commit(tp);
+	ASSERT(!error);
+
+	/* XXX */
+	kmem_free(ctx);
+}
+
+static void
+xfs_relog_queue(
+	struct xfs_cil_ctx	*ctx)
+{
+	queue_work(xfs_discard_wq, &ctx->relog_work);
+}
+
 /*
  * Allocate a new ticket. Failing to get a new ticket makes it really hard to
  * recover, so we don't allow failure here. Also, we allocate in a context that
@@ -476,6 +514,9 @@  xlog_cil_insert_items(
 		 */
 		if (!list_is_last(&lip->li_cil, &cil->xc_cil))
 			list_move_tail(&lip->li_cil, &cil->xc_cil);
+
+		if (test_bit(XFS_LI_RELOG, &lip->li_flags))
+			list_move_tail(&lip->li_ril, &ctx->relog_list);
 	}
 
 	spin_unlock(&cil->xc_cil_lock);
@@ -605,7 +646,10 @@  xlog_cil_committed(
 
 	xlog_cil_free_logvec(ctx->lv_chain);
 
-	if (!list_empty(&ctx->busy_extents))
+	/* XXX: mutually exclusive w/ discard for POC to handle ctx freeing */
+	if (!list_empty(&ctx->relog_list))
+		xfs_relog_queue(ctx);
+	else if (!list_empty(&ctx->busy_extents))
 		xlog_discard_busy_extents(mp, ctx);
 	else
 		kmem_free(ctx);
@@ -746,8 +790,10 @@  xlog_cil_push(
 	 */
 	INIT_LIST_HEAD(&new_ctx->committing);
 	INIT_LIST_HEAD(&new_ctx->busy_extents);
+	INIT_LIST_HEAD(&new_ctx->relog_list);
 	new_ctx->sequence = ctx->sequence + 1;
 	new_ctx->cil = cil;
+	INIT_WORK(&new_ctx->relog_work, xfs_relog_worker);
 	cil->xc_ctx = new_ctx;
 
 	/*
@@ -1199,6 +1245,8 @@  xlog_cil_init(
 
 	INIT_LIST_HEAD(&ctx->committing);
 	INIT_LIST_HEAD(&ctx->busy_extents);
+	INIT_LIST_HEAD(&ctx->relog_list);
+	INIT_WORK(&ctx->relog_work, xfs_relog_worker);
 	ctx->sequence = 1;
 	ctx->cil = cil;
 	cil->xc_ctx = ctx;
diff --git a/fs/xfs/xfs_log_priv.h b/fs/xfs/xfs_log_priv.h
index b192c5a9f9fd..6fd7b7297bd3 100644
--- a/fs/xfs/xfs_log_priv.h
+++ b/fs/xfs/xfs_log_priv.h
@@ -243,6 +243,8 @@  struct xfs_cil_ctx {
 	struct list_head	iclog_entry;
 	struct list_head	committing;	/* ctx committing list */
 	struct work_struct	discard_endio_work;
+	struct list_head	relog_list;
+	struct work_struct	relog_work;
 };
 
 /*
diff --git a/fs/xfs/xfs_qm_syscalls.c b/fs/xfs/xfs_qm_syscalls.c
index 1ea82764bf89..08b6180cb5a3 100644
--- a/fs/xfs/xfs_qm_syscalls.c
+++ b/fs/xfs/xfs_qm_syscalls.c
@@ -18,6 +18,7 @@ 
 #include "xfs_quota.h"
 #include "xfs_qm.h"
 #include "xfs_icache.h"
+#include "xfs_log.h"
 
 STATIC int
 xfs_qm_log_quotaoff(
@@ -37,6 +38,7 @@  xfs_qm_log_quotaoff(
 
 	qoffi = xfs_trans_get_qoff_item(tp, NULL, flags & XFS_ALL_QUOTA_ACCT);
 	xfs_trans_log_quotaoff_item(tp, qoffi);
+	set_bit(XFS_LI_RELOG, &qoffi->qql_item.li_flags);
 
 	spin_lock(&mp->m_sb_lock);
 	mp->m_sb.sb_qflags = (mp->m_qflags & ~(flags)) & XFS_MOUNT_QUOTA_ALL;
@@ -69,6 +71,10 @@  xfs_qm_log_quotaoff_end(
 	int			error;
 	struct xfs_qoff_logitem	*qoffi;
 
+	clear_bit(XFS_LI_RELOG, &startqoff->qql_item.li_flags);
+	xfs_log_force(mp, XFS_LOG_SYNC);
+	flush_workqueue(xfs_discard_wq);
+
 	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_qm_equotaoff, 0, 0, 0, &tp);
 	if (error)
 		return error;
diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h
index 64d7f171ebd3..e04033c29f0d 100644
--- a/fs/xfs/xfs_trans.h
+++ b/fs/xfs/xfs_trans.h
@@ -48,6 +48,7 @@  struct xfs_log_item {
 	struct xfs_log_vec		*li_lv;		/* active log vector */
 	struct xfs_log_vec		*li_lv_shadow;	/* standby vector */
 	xfs_lsn_t			li_seq;		/* CIL commit seq */
+	struct list_head		li_ril;
 };
 
 /*
@@ -59,12 +60,14 @@  struct xfs_log_item {
 #define	XFS_LI_ABORTED	1
 #define	XFS_LI_FAILED	2
 #define	XFS_LI_DIRTY	3	/* log item dirty in transaction */
+#define	XFS_LI_RELOG	4	/* automatic relogging */
 
 #define XFS_LI_FLAGS \
 	{ (1 << XFS_LI_IN_AIL),		"IN_AIL" }, \
 	{ (1 << XFS_LI_ABORTED),	"ABORTED" }, \
 	{ (1 << XFS_LI_FAILED),		"FAILED" }, \
-	{ (1 << XFS_LI_DIRTY),		"DIRTY" }
+	{ (1 << XFS_LI_DIRTY),		"DIRTY" }, \
+	{ (1 << XFS_LI_RELOG),		"RELOG" }
 
 struct xfs_item_ops {
 	unsigned flags;