From patchwork Tue Jul 12 21:47:56 2016
X-Patchwork-Submitter: Ming Lin
X-Patchwork-Id: 9226285
Message-ID: <1468360076.10045.0.camel@ssi>
Subject: Re: crash on device removal
From: Ming Lin
To: Steve Wise
Cc: 'Christoph Hellwig', linux-rdma@vger.kernel.org, 'sagig',
	linux-nvme@lists.infradead.org
Date: Tue, 12 Jul 2016 14:47:56 -0700
In-Reply-To: <010201d1dc81$9f59cf10$de0d6d30$@opengridcomputing.com>
References: <00cb01d1dc5b$51c05970$f5410c50$@opengridcomputing.com>
	<1468356054.5426.1.camel@ssi>
	<010201d1dc81$9f59cf10$de0d6d30$@opengridcomputing.com>
X-Mailing-List: linux-rdma@vger.kernel.org

On Tue, 2016-07-12 at 16:09 -0500, Steve Wise wrote:
> > On Tue, 2016-07-12 at 11:34 -0500, Steve Wise wrote:
> > > Hey Christoph,
> > >
> > > I see a crash when shutting down an nvme host node via 'reboot' that has 1
> > > target device attached.  The shutdown causes iw_cxgb4 to be removed, which
> > > triggers the device removal logic in the nvmf rdma transport.  The crash is
> > > here:
> > >
> > > (gdb) list *nvme_rdma_free_qe+0x18
> > > 0x1e8 is in nvme_rdma_free_qe (drivers/nvme/host/rdma.c:196).
> > > 191     }
> > > 192
> > > 193     static void nvme_rdma_free_qe(struct ib_device *ibdev, struct nvme_rdma_qe *qe,
> > > 194                     size_t capsule_size, enum dma_data_direction dir)
> > > 195     {
> > > 196             ib_dma_unmap_single(ibdev, qe->dma, capsule_size, dir);
> > > 197             kfree(qe->data);
> > > 198     }
> > > 199
> > > 200     static int nvme_rdma_alloc_qe(struct ib_device *ibdev, struct nvme_rdma_qe *qe,
> > >
> > > Apparently qe is NULL.
> > >
> > > Looking at the device removal path, the logic appears correct (see
> > > nvme_rdma_device_unplug() and the nice function comment :) ).  I'm wondering
> > > if, concurrently with the host device removal path cleaning up queues, the
> > > target is disconnecting all of its queues due to the first disconnect event
> > > from the host, causing some cleanup race on the host side?
> > > Although, since the removal path is executing in the cma event handler
> > > upcall, I don't think another thread would be handling a disconnect event.
> > > Maybe the qp async event handler flow?
> > >
> > > Thoughts?
> >
> > We actually missed a kref_get in nvme_get_ns_from_disk().
> >
> > This should fix it.  Could you help to verify?
> >
> > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > index 4babdf0..b146f52 100644
> > --- a/drivers/nvme/host/core.c
> > +++ b/drivers/nvme/host/core.c
> > @@ -183,6 +183,8 @@ static struct nvme_ns *nvme_get_ns_from_disk(struct gendisk *disk)
> >  	}
> >  	spin_unlock(&dev_list_lock);
> >
> > +	kref_get(&ns->ctrl->kref);
> > +
> >  	return ns;
> >
> >  fail_put_ns:
>
> Hey Ming.  This avoids the crash in nvme_rdma_free_qe(), but now I see another
> crash:
>
> [  975.633436] nvme nvme0: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", addr 10.0.1.14:4420
> [  978.463636] nvme nvme0: creating 32 I/O queues.
> [  979.187826] nvme nvme0: new ctrl: NQN "testnqn", addr 10.0.1.14:4420
> [  987.778287] nvme nvme0: Got rdma device removal event, deleting ctrl
> [  987.882202] BUG: unable to handle kernel paging request at ffff880e770e01f8
> [  987.890024] IP: [] __ib_process_cq+0x46/0xc0 [ib_core]
>
> This looks like another problem with freeing the tag sets before stopping the
> QP.  I thought we fixed that once and for all, but perhaps there is some other
> path we missed. :(

Sorry, the previous patch was wrong.  Here is the right one.
---

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 1ad47c5..f13e3a6 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -845,6 +845,7 @@ static ssize_t nvmf_dev_write(struct file *file, const char __user *ubuf,
 		goto out_unlock;
 	}

+	kref_get(&ctrl->kref);
 	seq_file->private = ctrl;

 out_unlock: