From patchwork Thu Jan 3 23:26:56 2013
X-Patchwork-Submitter: Trond Myklebust
X-Patchwork-Id: 1930021
From: "Myklebust, Trond"
To: Tejun Heo
CC: "J. Bruce Fields", "Adamson, Dros", Dave Jones, Linux Kernel,
	"linux-nfs@vger.kernel.org"
Subject: Re: nfsd oops on Linus' current tree.
Date: Thu, 3 Jan 2013 23:26:56 +0000

On Thu, 2013-01-03 at 18:11 -0500, Trond Myklebust wrote:
> On Thu, 2013-01-03 at 17:26 -0500, Tejun Heo wrote:
> > Ooh, BTW, there was a bug where the workqueue code created a false
> > dependency between two work items.  Workqueue currently considers two
> > work items to be the same if they're at the same address and won't
> > execute them concurrently - i.e. it makes a work item which is queued
> > again while being executed wait for the previous execution to
> > complete.
> >
> > If a work function frees the work item, and then waits for an event
> > which should be performed by another work item and *that* work item
> > recycles the freed work item, it can create a false dependency loop.
> > There really is no reliable way to detect this short of verifying
> > every memory free.  A patch is queued to make such occurrences less
> > likely (work functions should also match for two work items to be
> > considered the same), but if you're seeing this, the best thing to do
> > is to free the work item at the end of the work function.
>
> That's interesting... I wonder if we may have been hitting that issue.
>
> From what I can see, we do actually free the write RPC task (and hence
> the work_struct) before we call the asynchronous unlink completion...
>
> Dros, can you see if reverting commit
> 324d003b0cd82151adbaecefef57b73f7959a469 + commit
> 168e4b39d1afb79a7e3ea6c3bb246b4c82c6bdb9 and then applying the attached
> patch also fixes the hang on a pristine 3.7.x kernel?
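To make the failure mode concrete, here is a minimal sketch of the
scenario Tejun describes above (hypothetical "foo" names, not actual
kernel code): the work function frees its own work item and then blocks;
if the freed memory is recycled into a new work item that gets queued
before the function returns, the workqueue treats the new item as a
re-queue of the still-running one and defers it, so the two ends wait on
each other forever.

#include <linux/workqueue.h>
#include <linux/completion.h>
#include <linux/slab.h>

/* Illustration only -- all "foo" names are invented. */
struct foo_work {
	struct work_struct work;
	struct completion *done;
};

static void foo_work_fn(struct work_struct *w)
{
	struct foo_work *fw = container_of(w, struct foo_work, work);
	struct completion *done = fw->done;

	kfree(fw);	/* work item freed while its function still runs */

	/*
	 * If another context now kmallocs a new work item, happens to get
	 * back the address just freed, and queues it, the workqueue sees
	 * "same address, still executing" and holds the new item until
	 * this function returns.  If completing 'done' is that new item's
	 * job, this wait never finishes: a false dependency loop.
	 */
	wait_for_completion(done);
}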
Actually, we probably also need to look at rpc_free_task, so try the
following patch instead...

diff --git a/fs/nfs/read.c b/fs/nfs/read.c
index b6bdb18..400f7ec 100644
--- a/fs/nfs/read.c
+++ b/fs/nfs/read.c
@@ -91,12 +91,13 @@ void nfs_readdata_release(struct nfs_read_data *rdata)
 	put_nfs_open_context(rdata->args.context);
 	if (rdata->pages.pagevec != rdata->pages.page_array)
 		kfree(rdata->pages.pagevec);
-	if (rdata != &read_header->rpc_data)
-		kfree(rdata);
-	else
+	if (rdata == &read_header->rpc_data) {
 		rdata->header = NULL;
+		rdata = NULL;
+	}
 	if (atomic_dec_and_test(&hdr->refcnt))
 		hdr->completion_ops->completion(hdr);
+	kfree(rdata);
 }
 EXPORT_SYMBOL_GPL(nfs_readdata_release);

diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index b673be3..45d9250 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -126,12 +126,13 @@ void nfs_writedata_release(struct nfs_write_data *wdata)
 	put_nfs_open_context(wdata->args.context);
 	if (wdata->pages.pagevec != wdata->pages.page_array)
 		kfree(wdata->pages.pagevec);
-	if (wdata != &write_header->rpc_data)
-		kfree(wdata);
-	else
+	if (wdata == &write_header->rpc_data) {
 		wdata->header = NULL;
+		wdata = NULL;
+	}
 	if (atomic_dec_and_test(&hdr->refcnt))
 		hdr->completion_ops->completion(hdr);
+	kfree(wdata);
 }
 EXPORT_SYMBOL_GPL(nfs_writedata_release);

diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index d17a704..500407a 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -936,14 +936,13 @@ struct rpc_task *rpc_new_task(const struct rpc_task_setup *setup_data)
 
 static void rpc_free_task(struct rpc_task *task)
 {
-	const struct rpc_call_ops *tk_ops = task->tk_ops;
-	void *calldata = task->tk_calldata;
+	unsigned short tk_flags = task->tk_flags;
 
-	if (task->tk_flags & RPC_TASK_DYNAMIC) {
+	rpc_release_calldata(task->tk_ops, task->tk_calldata);
+	if (tk_flags & RPC_TASK_DYNAMIC) {
 		dprintk("RPC: %5u freeing task\n", task->tk_pid);
 		mempool_free(task, rpc_task_mempool);
 	}
-	rpc_release_calldata(tk_ops, calldata);
 }
 
 static void rpc_async_release(struct work_struct *work)
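For what it's worth, the rpc_free_task change above is Tejun's rule
applied directly: rpc_free_task runs from the rpc_async_release work
function, and the old code returned the task (which embeds the executing
work_struct) to the mempool before running the calldata release, leaving
a window in which the memory could be recycled into a new work item.
The reordered version does all callback work first and frees last.
A minimal sketch of that shape, with invented "foo" names rather than
the real sunrpc types:

#include <linux/workqueue.h>
#include <linux/slab.h>

struct foo_task {
	struct work_struct work;		/* the item being executed */
	void (*release)(struct foo_task *);	/* may queue or wake other work */
};

static void foo_async_release(struct work_struct *w)
{
	struct foo_task *t = container_of(w, struct foo_task, work);

	/*
	 * Run the release callback while the memory holding this
	 * work_struct is still live; it may queue further work items
	 * or wake up contexts that will.
	 */
	t->release(t);

	/* Freeing the work item is the very last act of the work function. */
	kfree(t);
}

The read.c and write.c hunks follow the same principle: the kfree()
moves after the hdr->completion_ops->completion() callback, and the
embedded case NULLs the pointer first so the final kfree(NULL) is a
harmless no-op.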