
[RFC,01/11] workqueue: Add a decrement-after-return and wake if 0 facility

Message ID 150428045304.25051.1778333106306853298.stgit@warthog.procyon.org.uk (mailing list archive)
State New, archived

Commit Message

David Howells Sept. 1, 2017, 3:40 p.m. UTC
Add a facility to the workqueue subsystem whereby an atomic_t can be
registered by a work function such that the work function dispatcher will
decrement the atomic after the work function has returned and then call
wake_up_atomic_t() on it if the counter reaches 0.

This is analogous to complete_and_exit() for kernel threads and is used to
close the window between a work item signalling that it is about to finish
and the .text segment of its module being discarded.

The way this is used is that the work function calls:

	dec_after_work(atomic_t *counter);

to register the counter; process_one_work() then decrements it after the
work function returns, wakes it if it reached 0 and clears the
registration.
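As a rough illustration of the pattern (a hypothetical userspace analogue
using C11 atomics and pthreads, not the kernel API itself - all names below
are illustrative): the decrement happens only after the work function has
returned, so a waiter that sees the counter hit 0 knows no worker is still
executing that code.

```c
/*
 * Hypothetical userspace analogue of the dec_after_work() pattern.
 * Each worker decrements the shared counter only after its work
 * function has returned; the last decrement wakes the waiter.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

static atomic_int counter;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  zero = PTHREAD_COND_INITIALIZER;

static void work_fn(void)
{
	/* stands in for the module's work function body */
}

static void *worker(void *arg)
{
	(void)arg;
	work_fn();
	/* decrement after work_fn() returned, as the dispatcher would */
	if (atomic_fetch_sub(&counter, 1) == 1) {
		pthread_mutex_lock(&lock);
		pthread_cond_signal(&zero);
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

/* spawn n workers (n <= 16) and wait until all have finished;
 * returns the final counter value (0 on success) */
int run_workers(int n)
{
	pthread_t tids[16];
	int i;

	atomic_store(&counter, n);
	for (i = 0; i < n; i++)
		pthread_create(&tids[i], NULL, worker, NULL);

	pthread_mutex_lock(&lock);
	while (atomic_load(&counter) != 0)
		pthread_cond_wait(&zero, &lock);
	pthread_mutex_unlock(&lock);

	for (i = 0; i < n; i++)
		pthread_join(tids[i], NULL);
	return atomic_load(&counter);
}
```

In the kernel version, the wait side would be wait_on_atomic_t() in the
module's exit path rather than a condition variable.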

The reason I've used an atomic_t rather than a completion is that (1) it
takes up less space and (2) it can monitor multiple objects.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Tejun Heo <tj@kernel.org>
cc: Lai Jiangshan <jiangshanlai@gmail.com>
---

 include/linux/workqueue.h   |    1 +
 kernel/workqueue.c          |   25 +++++++++++++++++++++++++
 kernel/workqueue_internal.h |    1 +
 3 files changed, 27 insertions(+)

Comments

David Howells Sept. 1, 2017, 3:52 p.m. UTC | #1
Here are some changes to the AFS filesystem that form the first part of
adding network-namespace support and IPv6 enablement to AFS.  AF_RXRPC is
already namespaced.

This is built on AF_RXRPC changes tagged with rxrpc-next-20170829 (which is
also in net-next).

The AFS changes are:

 (1) Create a dummy AFS network namespace and shift a bunch of global
     things into it and start using it.

 (2) Add some more AFS RPC protocol definitions.

 (3) Update the cache infrastructure to remove some redundant or unused
     pieces and bump the cache version.

 (4) Keep track of internal addresses in terms of sockaddr_rxrpc structs
     rather than in_addr structs.  This will enable the use of IPv6.

 (5) Allow IPv6 addresses for VL servers to be specified.  Note that this
     doesn't help with finding FS servers as that requires a protocol
     change.  Such a protocol extension is available in the AuriStor
     AFS-compatible server, though I haven't implemented that yet.

 (6) Overhaul cell database management so that cell records are managed
     better and kept automatically up to date from the DNS.

 (7) Make use of the new AF_RXRPC call-retry to implement address rotation
     for VL servers and FS servers without the need to re-encrypt client
     call data.

To make this work, I've added some extensions to the core kernel:

 (1) Add a decrement-after-return function for workqueues that allows a
     work item to ask the workqueue manager to decrement an atomic_t and
     'wake it up' if it reaches 0.  This is analogous to
     complete_and_exit() and can be used to protect rmmod against code
     removal.

 (2) Add refcount_inc/dec_return() functions that return the new value of
     the refcount_t.  This makes maintaining a cache easier where you want
     to schedule timed garbage collection when the refcount reaches 1.  It
     also makes tracing easier as the value is obtained atomically.

 (3) Pass the wait mode to wait_on_atomic_t() and provide a default action
     function.  This allows various default actions scattered about the
     place to be deleted.

 (4) Add a function to start or reduce the timeout on a timer if it's
     already running.  This makes it easier to maintain a single timer for
     multiple events without requiring extra locking to check/modify the
     timer (the timer has its own lock after all).
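To illustrate item (2) above, a hypothetical userspace sketch of the
refcount_dec_return() idea (illustrative names, not the kernel's refcount_t
implementation): the new value comes back from the single atomic op, so the
caller can act on the transition to 1 without a racy second read.

```c
/*
 * Hypothetical userspace sketch of refcount_dec_return(): the new
 * value is obtained atomically from the decrement itself.
 */
#include <stdatomic.h>

static int refcount_dec_return(atomic_int *r)
{
	return atomic_fetch_sub(r, 1) - 1;	/* new value after decrement */
}

/* returns 1 when the decrement to 1 is seen, i.e. the point at which
 * timed garbage collection would be scheduled */
int demo_gc_trigger(void)
{
	atomic_int r;

	atomic_store(&r, 3);
	refcount_dec_return(&r);		/* 3 -> 2 */
	if (refcount_dec_return(&r) == 1)	/* 2 -> 1: schedule GC here */
		return 1;
	return 0;
}
```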


The patches can be found here also:

	http://git.kernel.org/cgit/linux/kernel/git/dhowells/linux-fs.git/log/?h=afs

David
Tejun Heo Sept. 5, 2017, 1:29 p.m. UTC | #2
Hello, David.

On Fri, Sep 01, 2017 at 04:40:53PM +0100, David Howells wrote:
> Add a facility to the workqueue subsystem whereby an atomic_t can be
> registered by a work function such that the work function dispatcher will
> decrement the atomic after the work function has returned and then call
> wake_up_atomic_t() on it if the counter reaches 0.
> 
> This is analogous to complete_and_exit() for kernel threads and is used to
> close the window between a work item signalling that it is about to finish
> and the .text segment of its module being discarded.
> 
> The way this is used is that the work function calls:
> 
> 	dec_after_work(atomic_t *counter);
> 
> to register the counter; process_one_work() then decrements it after the
> work function returns, wakes it if it reached 0 and clears the
> registration.
> 
> The reason I've used an atomic_t rather than a completion is that (1) it
> takes up less space and (2) it can monitor multiple objects.

Given how work items are used, I think this is too inviting to abuses
where people build complex event chains through these counters and
those chains would be completely opaque.  If the goal is protecting
.text of a work item, can't we just do that?  Can you please describe
your use case in more detail?  Why can't it be done via the usual
"flush from exit"?

Thanks.
David Howells Sept. 5, 2017, 2:50 p.m. UTC | #3
Tejun Heo <tj@kernel.org> wrote:

> Given how work items are used, I think this is too inviting to abuses
> where people build complex event chains through these counters and
> those chains would be completely opaque.  If the goal is protecting
> .text of a work item, can't we just do that?  Can you please describe
> your use case in more detail?

With one of my latest patches to AFS, there's a set of cell records, where
each cell has a manager work item that maintains that cell, including
refreshing DNS records and excising expired records from the list.  Performing
the excision in the manager work item makes handling the fscache index cookie
easier (you can't have two cookies attached to the same object), amongst other
things.

There's also an overseer work item that maintains a single expiry timer for
all the cells and queues the per-cell work items to do DNS updates and cell
removal.

The reason that the overseer exists is that it makes it easier to do a put on
a cell.  The put decrements the cell refcount and then wants to schedule the
cell for destruction - but the caller is no longer permitted to touch the
cell.  I could use atomic_dec_and_lock(), but that's messy.  It's cleaner
just to set the timer on the overseer and leave it to that.
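A hypothetical userspace model of that put pattern (not the actual AFS
code - struct and function names are made up): the final put must not touch
the cell again, so it only flags the overseer, which performs the
destruction later from its own context.

```c
/*
 * Hypothetical model of the "overseer" put pattern: the final put
 * only pokes the overseer instead of destroying the object itself.
 */
#include <stdatomic.h>
#include <stdbool.h>

struct cell {
	atomic_int	usage;
	bool		dead;		/* written only by the overseer */
};

static atomic_bool overseer_pending;

void cell_put(struct cell *c)
{
	if (atomic_fetch_sub(&c->usage, 1) == 1)
		/* hit zero: poke the overseer, don't touch *c again */
		atomic_store(&overseer_pending, true);
}

/* run later from the overseer's work item; returns 1 if a cell was
 * destroyed on this pass */
int overseer_run(struct cell *c)
{
	if (!atomic_exchange(&overseer_pending, false))
		return 0;
	if (atomic_load(&c->usage) == 0) {
		c->dead = true;		/* stands in for real destruction */
		return 1;
	}
	return 0;
}
```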

However, if someone does rmmod, I have to be able to clean everything up.  The
overseer timer may be queued or running; the overseer may be queued *and*
running and may get queued again by the timer; and each cell's work item may
be queued *and* running and may get queued again by the manager.

> Why can't it be done via the usual "flush from exit"?

Well, it can, but you need a flush for each separate level of dependencies,
since cleaning up one level can queue work belonging to the next level.

So what I think I would have to do is set a flag to say that no one is allowed
to set the timer now (this shouldn't happen outside of server or volume cache
clearance), delete the timer synchronously, flush the work queue four times
and then do an RCU barrier.
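A toy model (purely illustrative, not kernel code) of why the flush count
tracks the dependency depth: each "flush" runs only the work that was
queued when it started, and running level i's cleanup queues level i + 1's,
so four levels need four flushes.

```c
/*
 * Toy model of repeated flushing: a flush runs only already-queued
 * work, and cleanup at level i queues work at level i + 1.
 */
#include <stdbool.h>
#include <string.h>

#define LEVELS 4

static bool pending[LEVELS];

static void flush(void)
{
	bool snap[LEVELS];
	int i;

	/* take a snapshot: a flush only waits for already-queued work */
	memcpy(snap, pending, sizeof(snap));
	memset(pending, 0, sizeof(pending));
	for (i = 0; i < LEVELS; i++)
		if (snap[i] && i + 1 < LEVELS)
			pending[i + 1] = true;	/* cleanup queues next level */
}

/* returns how many flushes it takes to drain all LEVELS levels */
int flushes_needed(void)
{
	int i, n = 0;
	bool any = true;

	memset(pending, 0, sizeof(pending));
	pending[0] = true;
	while (any) {
		flush();
		n++;
		any = false;
		for (i = 0; i < LEVELS; i++)
			any = any || pending[i];
	}
	return n;
}
```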

However, since I have volumes with dependencies on servers and cells, possibly
with their own managers, I think I may need up to 12 flushes, possibly with
interspersed RCU barriers.

It's much simpler to count out the objects than to try and get the flushing
right.

David
Tejun Heo Sept. 6, 2017, 2:51 p.m. UTC | #4
Hello, David.

On Tue, Sep 05, 2017 at 03:50:16PM +0100, David Howells wrote:
> With one of my latest patches to AFS, there's a set of cell records, where
> each cell has a manager work item that maintains that cell, including
> refreshing DNS records and excising expired records from the list.  Performing
> the excision in the manager work item makes handling the fscache index cookie
> easier (you can't have two cookies attached to the same object), amongst other
> things.
> 
> There's also an overseer work item that maintains a single expiry timer for
> all the cells and queues the per-cell work items to do DNS updates and cell
> removal.
> 
> The reason that the overseer exists is that it makes it easier to do a put on
> a cell.  The put decrements the cell refcount and then wants to schedule the
> cell for destruction - but the caller is no longer permitted to touch the
> cell.  I could use atomic_dec_and_lock(), but that's messy.  It's cleaner
> just to set the timer on the overseer and leave it to that.
> 
> However, if someone does rmmod, I have to be able to clean everything up.  The
> overseer timer may be queued or running; the overseer may be queued *and*
> running and may get queued again by the timer; and each cell's work item may
> be queued *and* running and may get queued again by the manager.

Thanks for the detailed explanation.

> > Why can't it be done via the usual "flush from exit"?
> 
> Well, it can, but you need a flush for each separate level of dependencies,
> since cleaning up one level can queue work belonging to the next level.
> 
> So what I think I would have to do is set a flag to say that no one is allowed
> to set the timer now (this shouldn't happen outside of server or volume cache
> clearance), delete the timer synchronously, flush the work queue four times
> and then do an RCU barrier.
> 
> However, since I have volumes with dependencies on servers and cells, possibly
> with their own managers, I think I may need up to 12 flushes, possibly with
> interspersed RCU barriers.

Would it be possible to isolate work items for the cell in its own
workqueue and use drain_workqueue()?  Separating out flush domains is
one of the main use cases for dedicated workqueues after all.

> It's much simpler to count out the objects than to try and get the flushing
> right.

I still feel very reluctant to add generic counting & trigger
mechanism to work items for this.  I think it's too generic a solution
for a very specific problem.

Thanks.

Patch

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index db6dc9dc0482..ceaed1387e9b 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -451,6 +451,7 @@  extern bool mod_delayed_work_on(int cpu, struct workqueue_struct *wq,
 
 extern void flush_workqueue(struct workqueue_struct *wq);
 extern void drain_workqueue(struct workqueue_struct *wq);
+extern void dec_after_work(atomic_t *counter);
 
 extern int schedule_on_each_cpu(work_func_t func);
 
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index ca937b0c3a96..2936ad0ab293 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2112,6 +2112,12 @@  __acquires(&pool->lock)
 		dump_stack();
 	}
 
+	if (worker->dec_after) {
+		if (atomic_dec_and_test(worker->dec_after))
+			wake_up_atomic_t(worker->dec_after);
+		worker->dec_after = NULL;
+	}
+
 	/*
 	 * The following prevents a kworker from hogging CPU on !PREEMPT
 	 * kernels, where a requeueing work item waiting for something to
@@ -3087,6 +3093,25 @@  int schedule_on_each_cpu(work_func_t func)
 }
 
 /**
+ * dec_after_work - Register counter to dec and wake after work func returns
+ * @counter: The counter to decrement and wake
+ *
+ * Register an atomic counter to be decremented after a work function returns
+ * to the core.  The counter is 'woken' if it is decremented to 0.  This allows
+ * synchronisation to be effected by one or more work functions in a module
+ * without leaving a window in which the work function code can be unloaded.
+ */
+void dec_after_work(atomic_t *counter)
+{
+	struct worker *worker = current_wq_worker();
+
+	BUG_ON(!worker);
+	BUG_ON(worker->dec_after);
+	worker->dec_after = counter;
+}
+EXPORT_SYMBOL(dec_after_work);
+
+/**
  * execute_in_process_context - reliably execute the routine with user context
  * @fn:		the function to execute
  * @ew:		guaranteed storage for the execute work structure (must
diff --git a/kernel/workqueue_internal.h b/kernel/workqueue_internal.h
index 8635417c587b..94ea1ca9b01f 100644
--- a/kernel/workqueue_internal.h
+++ b/kernel/workqueue_internal.h
@@ -28,6 +28,7 @@  struct worker {
 
 	struct work_struct	*current_work;	/* L: work being processed */
 	work_func_t		current_func;	/* L: current_work's fn */
+	atomic_t		*dec_after;	/* Decrement after func returns */
 	struct pool_workqueue	*current_pwq; /* L: current_work's pwq */
 	bool			desc_valid;	/* ->desc is valid */
 	struct list_head	scheduled;	/* L: scheduled works */