
[v2,1/3] lib/list_batch: A simple list insertion/deletion batching facility

Message ID 1454095846-19628-2-git-send-email-Waiman.Long@hpe.com (mailing list archive)
State New, archived
Headers show

Commit Message

Waiman Long Jan. 29, 2016, 7:30 p.m. UTC
Linked list insertion or deletion under lock is a very common activity
in the Linux kernel. If this is the only activity under lock, the
locking overhead can be pretty large compared with the actual time
spent on the insertion or deletion operation itself, especially on a
large system with many CPUs.

This patch introduces a simple list insertion/deletion batching
facility where multiple list insertion and deletion operations are
grouped together in a single batch under lock. This can reduce the
locking overhead and improve overall system performance.

The fast path of this batching facility will be similar in performance
to the "lock; listop; unlock;" sequence of the existing code. If
the lock is not available, it will enter the slowpath where the batching
happens.
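
For illustration, a call site that currently does

	spin_lock(&sb->s_inode_list_lock);
	list_add(&inode->i_sb_list, &sb->s_inodes);
	spin_unlock(&sb->s_inode_list_lock);

would instead do something like

	do_list_batch(&sb->s_inode_list_lock, lb_cmd_add,
		      &sb->s_inodes_batch, &inode->i_sb_list);

where s_inodes_batch is a struct list_batch initialized against
&sb->s_inodes via list_batch_init(). (The field name here is purely
illustrative; the actual conversion of a specific list is left to the
later patches in this series.)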

A new config option LIST_BATCHING is added so that we can control on
which architectures this facility is enabled.

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
---
 include/linux/list_batch.h |  133 ++++++++++++++++++++++++++++++++++++++++++++
 lib/Kconfig                |    7 ++
 lib/Makefile               |    1 +
 lib/list_batch.c           |  125 +++++++++++++++++++++++++++++++++++++++++
 4 files changed, 266 insertions(+), 0 deletions(-)
 create mode 100644 include/linux/list_batch.h
 create mode 100644 lib/list_batch.c

Comments

Dave Chinner Feb. 1, 2016, 12:47 a.m. UTC | #1
On Fri, Jan 29, 2016 at 02:30:44PM -0500, Waiman Long wrote:
> Linked list insertion or deletion under lock is a very common activity
> in the Linux kernel. If this is the only activity under lock, the
> locking overhead can be pretty large compared with the actual time
> spent on the insertion or deletion operation itself especially on a
> large system with many CPUs.
> 
> This patch introduces a simple list insertion/deletion batching
> facility where a group of list insertion and deletion operations are
> grouped together in a single batch under lock. This can reduce the
> locking overhead and improve overall system performance.
> 
> The fast path of this batching facility will be similar in performance
> to the "lock; listop; unlock;" sequence of the existing code. If
> the lock is not available, it will enter slowpath where the batching
> happens.
> 
> A new config option LIST_BATCHING is added so that we can control on
> which architecture do we want to have this facility enabled.
> 
> Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
....
> +#ifdef CONFIG_LIST_BATCHING
> +
> +extern void do_list_batch_slowpath(spinlock_t *lock, enum list_batch_cmd cmd,
> +				   struct list_batch *batch,
> +				   struct list_head *entry);
> +
> +/*
> + * The caller is expected to pass in a constant cmd parameter. As a
> + * result, most of unneeded code in the switch statement of _list_batch_cmd()
> + * will be optimized away. This should make the fast path almost as fast
> + * as the "lock; listop; unlock;" sequence it replaces.
> + */

This strikes me as needlessly complex. Simple inline functions are
much easier to read and verify correct, and we don't have to rely on
the compiler to optimise out dead code:

static inline void list_batch_add(struct list_head *entry,
				  struct list_batch *batch)
{
	if (!spin_trylock(&batch->lock))
		return do_list_batch_slowpath(entry, batch, lb_cmd_add);

	list_add(entry, &batch->list);
	spin_unlock(&batch->lock);
}
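
The delete side would presumably just mirror that (again assuming the
lock lives inside the list_batch, as above):

static inline void list_batch_del(struct list_head *entry,
				  struct list_batch *batch)
{
	if (!spin_trylock(&batch->lock))
		return do_list_batch_slowpath(entry, batch, lb_cmd_del);

	list_del(entry);
	spin_unlock(&batch->lock);
}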

> +#include <linux/list_batch.h>
> +
> +/*
> + * List processing batch size = 128
> + *
> + * The batch size shouldn't be too large. Otherwise, it will be too unfair
> + * to the task doing the batch processing. It shouldn't be too small neither
> + * as the performance benefit will be reduced.
> + */
> +#define LB_BATCH_SIZE	(1 << 7)

Ok, so arbitrary operations are going to see longer delays when they
are selected as the batch processor. I'm not sure I really like this
idea, as it is the first waiter in the queue to see contention that
takes the delay, which reduces the fairness of the operations.
i.e. the spinlock uses fair queuing, but now we can be grossly unfair
to the first spinner...

> +	/*
> +	 * We rely on the implictit memory barrier of xchg() to make sure
> +	 * that node initialization will be done before its content is being
> +	 * accessed by other CPUs.
> +	 */
> +	prev = xchg(&batch->tail, &node);
> +
> +	if (prev) {
> +		WRITE_ONCE(prev->next, &node);
> +		while (READ_ONCE(node.state) == lb_state_waiting)
> +			cpu_relax();
> +		if (node.state == lb_state_done)
> +			return;

So we spin waiting for the batch processor to process the
list, or

> +		WARN_ON(node.state != lb_state_batch);

tell us we are not the batch processor.

So, effectively, the reduction in runtime is due to the fact that the
list operations spin on their own cache line rather than the spin
lock cacheline until they have been processed and/or made the batch
processor?

> +	}
> +
> +	/*
> +	 * We are now the queue head, we should acquire the lock and
> +	 * process a batch of qnodes.
> +	 */
> +	loop = LB_BATCH_SIZE;
> +	next = &node;
> +	spin_lock(lock);
> +
> +do_list_again:
> +	do {

While we are batch processing, all operations will fail the
trylock and add themselves to the tail of the queue, and spin on
their own cacheline at that point. So it doesn't reduce the amount
of spinning, just removes the cacheline contention that slows the
spinning.

Hmmm - there's another point of unfairness - when switching batch
processors, other add/delete operations can get the list lock and
perform their operations directly, thereby jumping the batch
queue....

So at what point does simply replacing the list_head with a list_lru
become more efficient than this batch processing (i.e.
https://lkml.org/lkml/2015/3/10/660)?  The list_lru isn't a great
fit for the inode list (doesn't need any of the special LRU/memcg
stuff https://lkml.org/lkml/2015/3/16/261) but it will tell us if,
like Ingo suggested, moving more towards a generic per-cpu list
would provide better overall performance...
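
For concreteness, the shape of that experiment is small: replace the
s_inodes list_head in the superblock with a struct list_lru (the field
name below is made up) and route the add/remove paths through it:

	list_lru_add(&sb->s_inodes_lru, &inode->i_sb_list);
	...
	list_lru_del(&sb->s_inodes_lru, &inode->i_sb_list);

plus converting the s_inodes walkers to iterate the per-node lists,
e.g. with list_lru_walk().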

Cheers,

Dave.
Waiman Long Feb. 3, 2016, 11:11 p.m. UTC | #2
On 01/31/2016 07:47 PM, Dave Chinner wrote:
> On Fri, Jan 29, 2016 at 02:30:44PM -0500, Waiman Long wrote:
>> Linked list insertion or deletion under lock is a very common activity
>> in the Linux kernel. If this is the only activity under lock, the
>> locking overhead can be pretty large compared with the actual time
>> spent on the insertion or deletion operation itself especially on a
>> large system with many CPUs.
>>
>> This patch introduces a simple list insertion/deletion batching
>> facility where a group of list insertion and deletion operations are
>> grouped together in a single batch under lock. This can reduce the
>> locking overhead and improve overall system performance.
>>
>> The fast path of this batching facility will be similar in performance
>> to the "lock; listop; unlock;" sequence of the existing code. If
>> the lock is not available, it will enter slowpath where the batching
>> happens.
>>
>> A new config option LIST_BATCHING is added so that we can control on
>> which architecture do we want to have this facility enabled.
>>
>> Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
> ....
>> +#ifdef CONFIG_LIST_BATCHING
>> +
>> +extern void do_list_batch_slowpath(spinlock_t *lock, enum list_batch_cmd cmd,
>> +				   struct list_batch *batch,
>> +				   struct list_head *entry);
>> +
>> +/*
>> + * The caller is expected to pass in a constant cmd parameter. As a
>> + * result, most of unneeded code in the switch statement of _list_batch_cmd()
>> + * will be optimized away. This should make the fast path almost as fast
>> + * as the "lock; listop; unlock;" sequence it replaces.
>> + */
> This strikes me as needlessly complex. Simple inline functions are
> much easier to read and verify correct, and we don't have to rely on
> the compiler to optimise out dead code:
>
> static inline void list_batch_add(struct list_head *entry,
> 				  struct list_batch *batch)
> {
> 	if (!spin_trylock(&batch->lock))
> 		return do_list_batch_slowpath(entry, batch, lb_cmd_add);
>
> 	list_add(entry, &batch->list);
> 	spin_unlock(&batch->lock);
> }

Will do so.

>> +#include <linux/list_batch.h>
>> +
>> +/*
>> + * List processing batch size = 128
>> + *
>> + * The batch size shouldn't be too large. Otherwise, it will be too unfair
>> + * to the task doing the batch processing. It shouldn't be too small neither
>> + * as the performance benefit will be reduced.
>> + */
>> +#define LB_BATCH_SIZE	(1 << 7)
> Ok, so arbitrary operations are going to see longer delays when they
> are selected as the batch processor. I'm not sure I really like this
> idea, as it will be the first in the queue that sees contention
> that takes the delay which reduces the fairness of the operations.
> i.e. the spinlock uses fair queuing, but now we can be grossly unfair
> the to the first spinner...
>

That is certainly true. It is well known that a bit of unfairness can 
often improve overall system performance. Interrupt handling, for 
example, is also unfair to the process currently running on the CPU. The 
amount of unfairness is controlled by the batch size parameter. Maybe we
can make this parameter a read-mostly value set up at boot time, with a
value that depends on the number of CPUs in the system, so that smaller
systems get a smaller batch size. That will reduce the unfairness, at
least on smaller systems.
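
Something along these lines, perhaps (untested sketch, scaling rule and
bounds picked arbitrarily):

/* in lib/list_batch.c; also needs <linux/init.h> and <linux/cpumask.h> */
static int lb_batch_size __read_mostly = LB_BATCH_SIZE;

static int __init lb_batch_size_setup(void)
{
	/* smaller machines get a smaller batch, hence less unfairness */
	lb_batch_size = min(LB_BATCH_SIZE, max(16, (int)num_possible_cpus()));
	return 0;
}
early_initcall(lb_batch_size_setup);

with do_list_batch_slowpath() then doing "loop = lb_batch_size;" instead
of using the LB_BATCH_SIZE constant directly.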

>> +	/*
>> +	 * We rely on the implictit memory barrier of xchg() to make sure
>> +	 * that node initialization will be done before its content is being
>> +	 * accessed by other CPUs.
>> +	 */
>> +	prev = xchg(&batch->tail, &node);
>> +
>> +	if (prev) {
>> +		WRITE_ONCE(prev->next, &node);
>> +		while (READ_ONCE(node.state) == lb_state_waiting)
>> +			cpu_relax();
>> +		if (node.state == lb_state_done)
>> +			return;
> So we spin waiting for the batch processor to process the
> list, or
>
>> +		WARN_ON(node.state != lb_state_batch);
> tell us we are not the batch processor.
>
> So, effectively, the reduction in runtime is due to the fact the
> list operations spin on their own cache line rather than the spin
> lock cacheline until they have been processed and/or made the batch
> processor?

Yes, that can be a major part of it.

>> +	}
>> +
>> +	/*
>> +	 * We are now the queue head, we should acquire the lock and
>> +	 * process a batch of qnodes.
>> +	 */
>> +	loop = LB_BATCH_SIZE;
>> +	next = &node;
>> +	spin_lock(lock);
>> +
>> +do_list_again:
>> +	do {
> While we are batch processing, all operations will fail the
> trylock and add themselves to the tail of the queue, and spin on
> their own cacheline at that point. So it doesn't reduce the amount
> of spinning, just removes the cacheline contention that slows the
> spinning.
>
> Hmmm - there's another point of unfairness - when switching batch
> processors, other add/delete operations can get the list lock and
> perform their operations directly, thereby jumping the batch
> queue....

That is true too.

> So at what point does simply replacing the list_head with a list_lru
> become more efficient than this batch processing (i.e.
> https://lkml.org/lkml/2015/3/10/660)?  The list_lru isn't a great
> fit for the inode list (doesn't need any of the special LRU/memcg
> stuff https://lkml.org/lkml/2015/3/16/261) but it will tell us if,
> like Ingo suggested, moving more towards a generic per-cpu list
> would provide better overall performance...

I will take a look at the list_lru patch to see if that helps. As for the
per-cpu list, I tried that and it didn't quite work out.

Thanks,
Longman

Dave Chinner Feb. 6, 2016, 11:57 p.m. UTC | #3
On Wed, Feb 03, 2016 at 06:11:56PM -0500, Waiman Long wrote:
> On 01/31/2016 07:47 PM, Dave Chinner wrote:
> >So at what point does simply replacing the list_head with a list_lru
> >become more efficient than this batch processing (i.e.
> >https://lkml.org/lkml/2015/3/10/660)?  The list_lru isn't a great
> >fit for the inode list (doesn't need any of the special LRU/memcg
> >stuff https://lkml.org/lkml/2015/3/16/261) but it will tell us if,
> >like Ingo suggested, moving more towards a generic per-cpu list
> >would provide better overall performance...
> 
> I will take a look at the list_lru patch to see if that help. As for
> the per-cpu list, I tried that and it didn't quite work out.

OK, see my last email as to why Andi's patch didn't change anything.
The list_lru implementation has a list per node, a lock per node,
and each item is placed on the list for the node it is physically
allocated from. Hence for local workloads, the list/lock that is
accessed for add/remove should be local to the node, which should
confine cache line contention mostly to within a single node.
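
Stripped of the memcg handling, the add side is basically:

	/* simplified view of mm/list_lru.c, memcg bits omitted */
	bool list_lru_add(struct list_lru *lru, struct list_head *item)
	{
		int nid = page_to_nid(virt_to_page(item));
		struct list_lru_node *nlru = &lru->node[nid];

		spin_lock(&nlru->lock);
		if (list_empty(item)) {
			list_add_tail(item, &nlru->lru.list);
			nlru->lru.nr_items++;
			spin_unlock(&nlru->lock);
			return true;
		}
		spin_unlock(&nlru->lock);
		return false;
	}

i.e. both the lock and the list head it protects are per-node
structures.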

Cheers,

Dave.
Waiman Long Feb. 17, 2016, 1:37 a.m. UTC | #4
On 02/06/2016 06:57 PM, Dave Chinner wrote:
> On Wed, Feb 03, 2016 at 06:11:56PM -0500, Waiman Long wrote:
>> On 01/31/2016 07:47 PM, Dave Chinner wrote:
>>> So at what point does simply replacing the list_head with a list_lru
>>> become more efficient than this batch processing (i.e.
>>> https://lkml.org/lkml/2015/3/10/660)?  The list_lru isn't a great
>>> fit for the inode list (doesn't need any of the special LRU/memcg
>>> stuff https://lkml.org/lkml/2015/3/16/261) but it will tell us if,
>>> like Ingo suggested, moving more towards a generic per-cpu list
>>> would provide better overall performance...
>> I will take a look at the list_lru patch to see if that help. As for
>> the per-cpu list, I tried that and it didn't quite work out.
> OK, see my last email as to why Andi's patch didn't change anything.
> The list_lru implementation has a list per node, a lock per node,
> and each item is placed on the list for the node it is physically
> allocated from. Hence for local workloads, the list/lock that is
> accessed for add/remove should be local to the node and hence should
> reduce cache line contention mostly to within a single node.
>
> Cheers,
>
> Dave.

I have just sent out a new patchset using per-cpu lists with per-cpu
locks. I used the per-cpu list approach as the changes are simpler and
easier to review. Please let me know your thoughts on that.
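
Roughly, the idea is a per-cpu head/lock pair (the names below are
illustrative, not necessarily what the new posting uses):

	/* one instance per CPU, e.g. from alloc_percpu(struct percpu_list) */
	struct percpu_list {
		spinlock_t		lock;
		struct list_head	list;
	} ____cacheline_aligned_in_smp;

	static inline void percpu_list_add(struct list_head *entry,
					   struct percpu_list __percpu *pcl)
	{
		struct percpu_list *list = get_cpu_ptr(pcl);

		spin_lock(&list->lock);
		list_add(entry, &list->list);
		spin_unlock(&list->lock);
		put_cpu_ptr(pcl);
	}

Insertion stays CPU-local; deletion and iteration have to find which
per-cpu list an entry ended up on (e.g. by storing a pointer to its
lock in the node), which is where most of the extra complexity lives.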

Thanks,
Longman

Patch

diff --git a/include/linux/list_batch.h b/include/linux/list_batch.h
new file mode 100644
index 0000000..a445a2e
--- /dev/null
+++ b/include/linux/list_batch.h
@@ -0,0 +1,133 @@ 
+/*
+ * List insertion/deletion batching facility
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2016 Hewlett-Packard Enterprise Development LP
+ *
+ * Authors: Waiman Long <waiman.long@hpe.com>
+ */
+#ifndef __LINUX_LIST_BATCH_H
+#define __LINUX_LIST_BATCH_H
+
+#include <linux/spinlock.h>
+#include <linux/list.h>
+
+/*
+ * include/linux/list_batch.h
+ *
+ * Inserting or deleting an entry from a linked list under a spinlock is a
+ * very common operation in the Linux kernel. If many CPUs are trying to
+ * grab the lock and manipulate the linked list, it can lead to significant
+ * lock contention and slow operation.
+ *
+ * This list operation batching facility is used to batch multiple list
+ * operations under one lock/unlock critical section, thus reducing the
+ * locking and cacheline bouncing overhead and improving overall performance.
+ */
+enum list_batch_cmd {
+	lb_cmd_add,
+	lb_cmd_del,
+	lb_cmd_del_init
+};
+
+enum list_batch_state {
+	lb_state_waiting,	/* Node is waiting */
+	lb_state_batch,		/* Queue head to perform batch processing */
+	lb_state_done		/* Job is done */
+};
+
+struct list_batch_qnode {
+	struct list_batch_qnode	*next;
+	struct list_head	*entry;
+	enum list_batch_cmd	cmd;
+	enum list_batch_state	state;
+};
+
+struct list_batch {
+	struct list_head	*list;
+	struct list_batch_qnode *tail;
+};
+
+#define LIST_BATCH_INIT(_list)	\
+	{			\
+		.list = _list,	\
+		.tail = NULL	\
+	}
+
+static inline void list_batch_init(struct list_batch *batch,
+				   struct list_head *list)
+{
+	batch->list = list;
+	batch->tail = NULL;
+}
+
+static __always_inline void _list_batch_cmd(enum list_batch_cmd cmd,
+					    struct list_head *head,
+					    struct list_head *entry)
+{
+	switch (cmd) {
+	case lb_cmd_add:
+		list_add(entry, head);
+		break;
+
+	case lb_cmd_del:
+		list_del(entry);
+		break;
+
+	case lb_cmd_del_init:
+		list_del_init(entry);
+		break;
+	}
+}
+
+#ifdef CONFIG_LIST_BATCHING
+
+extern void do_list_batch_slowpath(spinlock_t *lock, enum list_batch_cmd cmd,
+				   struct list_batch *batch,
+				   struct list_head *entry);
+
+/*
+ * The caller is expected to pass in a constant cmd parameter. As a
+ * result, most of the unneeded code in the switch statement of _list_batch_cmd()
+ * will be optimized away. This should make the fast path almost as fast
+ * as the "lock; listop; unlock;" sequence it replaces.
+ */
+static inline void do_list_batch(spinlock_t *lock, enum list_batch_cmd cmd,
+				   struct list_batch *batch,
+				   struct list_head *entry)
+{
+	/*
+	 * Fast path
+	 */
+	if (likely(spin_trylock(lock))) {
+		_list_batch_cmd(cmd, batch->list, entry);
+		spin_unlock(lock);
+		return;
+	}
+	do_list_batch_slowpath(lock, cmd, batch, entry);
+}
+
+
+#else /* CONFIG_LIST_BATCHING */
+
+static inline void do_list_batch(spinlock_t *lock, enum list_batch_cmd cmd,
+				   struct list_batch *batch,
+				   struct list_head *entry)
+{
+	spin_lock(lock);
+	_list_batch_cmd(cmd, batch->list, entry);
+	spin_unlock(lock);
+}
+
+#endif /* CONFIG_LIST_BATCHING */
+
+#endif /* __LINUX_LIST_BATCH_H */
diff --git a/lib/Kconfig b/lib/Kconfig
index 133ebc0..d75ce19 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -514,6 +514,13 @@  config OID_REGISTRY
 config UCS2_STRING
         tristate
 
+config LIST_BATCHING
+	def_bool y if ARCH_USE_LIST_BATCHING
+	depends on SMP
+
+config ARCH_USE_LIST_BATCHING
+	bool
+
 source "lib/fonts/Kconfig"
 
 config SG_SPLIT
diff --git a/lib/Makefile b/lib/Makefile
index a7c26a4..2791262 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -210,6 +210,7 @@  quiet_cmd_build_OID_registry = GEN     $@
 clean-files	+= oid_registry_data.c
 
 obj-$(CONFIG_UCS2_STRING) += ucs2_string.o
+obj-$(CONFIG_LIST_BATCHING) += list_batch.o
 obj-$(CONFIG_UBSAN) += ubsan.o
 
 UBSAN_SANITIZE_ubsan.o := n
diff --git a/lib/list_batch.c b/lib/list_batch.c
new file mode 100644
index 0000000..174f4ba
--- /dev/null
+++ b/lib/list_batch.c
@@ -0,0 +1,125 @@ 
+/*
+ * List insertion/deletion batching facility
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2016 Hewlett-Packard Enterprise Development LP
+ *
+ * Authors: Waiman Long <waiman.long@hpe.com>
+ */
+#include <linux/list_batch.h>
+
+/*
+ * List processing batch size = 128
+ *
+ * The batch size shouldn't be too large. Otherwise, it will be too unfair
+ * to the task doing the batch processing. It shouldn't be too small either,
+ * as the performance benefit will be reduced.
+ */
+#define LB_BATCH_SIZE	(1 << 7)
+
+/*
+ * Inserting or deleting an entry from a linked list under a spinlock is a
+ * very common operation in the Linux kernel. If many CPUs are trying to
+ * grab the lock and manipulate the linked list, it can lead to significant
+ * lock contention and slow operation.
+ *
+ * This list operation batching facility is used to batch multiple list
+ * operations under one lock/unlock critical section, thus reducing the
+ * locking overhead and improving overall performance.
+ */
+void do_list_batch_slowpath(spinlock_t *lock, enum list_batch_cmd cmd,
+			    struct list_batch *batch, struct list_head *entry)
+{
+	struct list_batch_qnode node, *prev, *next, *nptr;
+	int loop;
+
+	/*
+	 * Put itself into the list_batch queue
+	 */
+	node.next  = NULL;
+	node.entry = entry;
+	node.cmd   = cmd;
+	node.state = lb_state_waiting;
+
+	/*
+	 * We rely on the implicit memory barrier of xchg() to make sure
+	 * that node initialization will be done before its content is being
+	 * accessed by other CPUs.
+	 */
+	prev = xchg(&batch->tail, &node);
+
+	if (prev) {
+		WRITE_ONCE(prev->next, &node);
+		while (READ_ONCE(node.state) == lb_state_waiting)
+			cpu_relax();
+		if (node.state == lb_state_done)
+			return;
+		WARN_ON(node.state != lb_state_batch);
+	}
+
+	/*
+	 * We are now the queue head, we should acquire the lock and
+	 * process a batch of qnodes.
+	 */
+	loop = LB_BATCH_SIZE;
+	next = &node;
+	spin_lock(lock);
+
+do_list_again:
+	do {
+		nptr = next;
+		_list_batch_cmd(nptr->cmd, batch->list, nptr->entry);
+		next = READ_ONCE(nptr->next);
+		/*
+		 * As soon as the state is marked lb_state_done, we
+		 * can no longer assume the content of *nptr as valid.
+		 * So we have to hold off marking it done until we no
+		 * longer need its content.
+		 *
+		 * The release barrier here is to make sure that we
+		 * won't access its content after marking it done.
+		 */
+		if (next)
+			smp_store_release(&nptr->state, lb_state_done);
+	} while (--loop && next);
+	if (!next) {
+		/*
+		 * The queue tail should equal to nptr, so clear it to
+		 * mark the queue as empty.
+		 */
+		if (cmpxchg_relaxed(&batch->tail, nptr, NULL) != nptr) {
+			/*
+			 * Queue not empty, wait until the next pointer is
+			 * initialized.
+			 */
+			while (!(next = READ_ONCE(nptr->next)))
+				cpu_relax();
+		}
+		/*
+		 * The release barrier is required to make sure that
+		 * setting the done state is the last operation.
+		 */
+		smp_store_release(&nptr->state, lb_state_done);
+	}
+	if (next) {
+		if (loop)
+			goto do_list_again;	/* More qnodes to process */
+		/*
+		 * Mark the next qnode as head to process the next batch
+		 * of qnodes. The new queue head cannot proceed until we
+		 * release the lock.
+		 */
+		WRITE_ONCE(next->state, lb_state_batch);
+	}
+	spin_unlock(lock);
+}
+EXPORT_SYMBOL_GPL(do_list_batch_slowpath);