Message ID: 4FA345DA4F4AE44899BD2B03EEEC2FA911988B37@SACEXCMBX04-PRD.hq.netapp.com (mailing list archive)
State: New, archived
On Jan 3, 2013, at 6:26 PM, "Myklebust, Trond" <Trond.Myklebust@netapp.com> wrote:

> On Thu, 2013-01-03 at 18:11 -0500, Trond Myklebust wrote:
>> On Thu, 2013-01-03 at 17:26 -0500, Tejun Heo wrote:
>>> Ooh, BTW, there was a bug where workqueue code created a false
>>> dependency between two work items. Workqueue currently considers two
>>> work items to be the same if they're on the same address and won't
>>> execute them concurrently - ie. it makes a work item which is queued
>>> again while being executed wait for the previous execution to
>>> complete.
>>>
>>> If a work function frees the work item, and then waits for an event
>>> which should be performed by another work item and *that* work item
>>> recycles the freed work item, it can create a false dependency loop.
>>> There really is no reliable way to detect this short of verifying
>>> every memory free. A patch is queued to make such occurrences less
>>> likely (work functions should also match for two work items considered
>>> the same), but if you're seeing this, the best thing to do is freeing
>>> the work item at the end of the work function.
>>
>> That's interesting... I wonder if we may have been hitting that issue.
>>
>> From what I can see, we do actually free the write RPC task (and hence
>> the work_struct) before we call the asynchronous unlink completion...
>>
>> Dros, can you see if reverting commit
>> 324d003b0cd82151adbaecefef57b73f7959a469 + commit
>> 168e4b39d1afb79a7e3ea6c3bb246b4c82c6bdb9 and then applying the attached
>> patch also fixes the hang on a pristine 3.7.x kernel?
>
> Actually, we probably also need to look at rpc_free_task, so the
> following patch, instead...

Yes, this patch fixes the hang!

Thank you for the explanation Tejun - that makes a lot of sense and
explains the workqueue behavior that we were seeing.

-dros
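To make Tejun's scenario concrete, here is a minimal sketch (hypothetical struct and function names, not taken from the NFS or SUNRPC code) of the false dependency: the work function frees its own work item and then blocks on an event that only a second work item can deliver; if that second item happens to be allocated at the just-freed address, the workqueue's same-address non-reentrancy rule queues it behind the still-running function, and neither side can make progress.

/* Hypothetical sketch of the false dependency Tejun describes. */
#include <linux/workqueue.h>
#include <linux/completion.h>
#include <linux/slab.h>

struct job {
	struct work_struct work;
	struct completion *done;	/* signalled by a second work item */
};

static void job_fn(struct work_struct *work)
{
	struct job *job = container_of(work, struct job, work);
	struct completion *done = job->done;

	kfree(job);			/* work item freed mid-function... */
	/*
	 * ...then we block on another work item.  If that item's
	 * allocation recycles 'job's address, the workqueue treats it
	 * as this same (still running) item and will not start it until
	 * job_fn() returns: a deadlock neither side can see.
	 */
	wait_for_completion(done);
}

Hence the advice above: free the work item as the very last step of the work function, so its address cannot be recycled while the function is still executing.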
diff --git a/fs/nfs/read.c b/fs/nfs/read.c
index b6bdb18..400f7ec 100644
--- a/fs/nfs/read.c
+++ b/fs/nfs/read.c
@@ -91,12 +91,13 @@ void nfs_readdata_release(struct nfs_read_data *rdata)
 	put_nfs_open_context(rdata->args.context);
 	if (rdata->pages.pagevec != rdata->pages.page_array)
 		kfree(rdata->pages.pagevec);
-	if (rdata != &read_header->rpc_data)
-		kfree(rdata);
-	else
+	if (rdata == &read_header->rpc_data) {
 		rdata->header = NULL;
+		rdata = NULL;
+	}
 	if (atomic_dec_and_test(&hdr->refcnt))
 		hdr->completion_ops->completion(hdr);
+	kfree(rdata);
 }
 EXPORT_SYMBOL_GPL(nfs_readdata_release);
 
diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index b673be3..45d9250 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -126,12 +126,13 @@ void nfs_writedata_release(struct nfs_write_data *wdata)
 	put_nfs_open_context(wdata->args.context);
 	if (wdata->pages.pagevec != wdata->pages.page_array)
 		kfree(wdata->pages.pagevec);
-	if (wdata != &write_header->rpc_data)
-		kfree(wdata);
-	else
+	if (wdata == &write_header->rpc_data) {
 		wdata->header = NULL;
+		wdata = NULL;
+	}
 	if (atomic_dec_and_test(&hdr->refcnt))
 		hdr->completion_ops->completion(hdr);
+	kfree(wdata);
 }
 EXPORT_SYMBOL_GPL(nfs_writedata_release);
 
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index d17a704..500407a 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -936,14 +936,13 @@ struct rpc_task *rpc_new_task(const struct rpc_task_setup *setup_data)
 
 static void rpc_free_task(struct rpc_task *task)
 {
-	const struct rpc_call_ops *tk_ops = task->tk_ops;
-	void *calldata = task->tk_calldata;
+	unsigned short tk_flags = task->tk_flags;
 
-	if (task->tk_flags & RPC_TASK_DYNAMIC) {
+	rpc_release_calldata(task->tk_ops, task->tk_calldata);
+	if (tk_flags & RPC_TASK_DYNAMIC) {
 		dprintk("RPC: %5u freeing task\n", task->tk_pid);
 		mempool_free(task, rpc_task_mempool);
 	}
-	rpc_release_calldata(tk_ops, calldata);
 }
 
 static void rpc_async_release(struct work_struct *work)
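Read as straight C, the patched nfs_writedata_release() looks like this (a sketch reconstructed from the hunk above; the declarations at the top are assumed from the surrounding 3.7-era kernel source, not shown in the diff context):

void nfs_writedata_release(struct nfs_write_data *wdata)
{
	struct nfs_pgio_header *hdr = wdata->header;
	struct nfs_write_header *write_header =
		container_of(hdr, struct nfs_write_header, header);

	put_nfs_open_context(wdata->args.context);
	if (wdata->pages.pagevec != wdata->pages.page_array)
		kfree(wdata->pages.pagevec);
	if (wdata == &write_header->rpc_data) {
		wdata->header = NULL;
		wdata = NULL;	/* embedded in the header, not separately allocated */
	}
	if (atomic_dec_and_test(&hdr->refcnt))
		hdr->completion_ops->completion(hdr);
	kfree(wdata);		/* now the last step; kfree(NULL) is a no-op */
}

As I read the thread, all three hunks apply the same principle: the memory that backs the work item is released as the final act of the teardown path, after the completion callback has run, so no later work item can recycle its address while the work function is still executing.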