From patchwork Fri Jul 29 01:24:22 2016
X-Patchwork-Submitter: "Ma, Jianpeng"
X-Patchwork-Id: 9251839
From: "Ma, Jianpeng"
To: Somnath Roy
Cc: ceph-devel
Subject: RE: BlueStore Deadlock
Date: Fri, 29 Jul 2016 01:24:22 +0000
Message-ID: <6AA21C22F0A5DA478922644AD2EC308C373B940C@SHSMSX101.ccr.corp.intel.com>
References: <6AA21C22F0A5DA478922644AD2EC308C373B92BC@SHSMSX101.ccr.corp.intel.com>
List-ID: ceph-devel@vger.kernel.org

Hi Roy:
With your patch there is still a deadlock.

By the way, what if we change BufferSpace::_add_buffer so that a buffer written with the cache flag is pushed to the front of the cache, and one without the cache flag is only put at the back? I think that way we could remove finish_write. How about it?

Thanks!
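A minimal sketch of the _add_buffer idea suggested above. The Buffer/LRU structure here is illustrative, not BlueStore's actual BufferSpace layout; it only shows the front-vs-back placement policy that would make a later finish_write pass unnecessary:

```cpp
#include <cstdint>
#include <list>

// Illustrative stand-in for a cached buffer; fields are assumptions.
struct Buffer {
  uint64_t offset;
  bool cache_hint;  // true if the write carried the "cache" flag
};

struct BufferSpace {
  std::list<Buffer> lru;  // front = hottest, back = first eviction candidate

  // Suggested policy: placement decided once, at insertion time.
  void _add_buffer(const Buffer& b) {
    if (b.cache_hint)
      lru.push_front(b);  // w/ cache flag: likely to be re-read soon
    else
      lru.push_back(b);   // w/o cache flag: evict first
  }
};
```

The point is that the cache position is fixed when the buffer is added, so no per-txc completion callback has to reorder it afterwards.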
-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Friday, July 29, 2016 6:22 AM
To: Ma, Jianpeng
Cc: ceph-devel
Subject: RE: BlueStore Deadlock

Jianpeng,
I thought this through and it seems there is one possible deadlock scenario:

tp_osd_tp           --> waiting on onode->flush() for a previous txc to finish, while holding Wlock(coll)
aio_complete_thread --> waiting for RLock(coll)

No other thread will be blocked here. We add the previous txc to the flush_txns list during _txc_write_nodes(), before aio_complete_thread calls _txc_state_proc(). So, if we get IO on the same collection within that window, it will wait on the unfinished txcs. The solution could be the following:

root@emsnode5:~/ceph-master/src# git diff
diff --git a/src/os/bluestore/BlueStore.cc b/src/os/bluestore/BlueStore.cc
index e8548b1..575a234 100644

I am not able to reproduce this in my setup, so if you can make the above changes in your environment and check whether you still hit the issue, that would be helpful.

Thanks & Regards
Somnath

-----Original Message-----
From: Somnath Roy
Sent: Thursday, July 28, 2016 8:45 AM
To: 'Ma, Jianpeng'
Cc: ceph-devel
Subject: RE: BlueStore Deadlock

Hi Jianpeng,
Are you trying with the latest master and still hitting the issue (it seems so, but just confirming)?

The scenario you describe should not create a deadlock, for the following reason: onode->flush() waits on flush_lock, and _txc_finish() releases that lock before taking osr->qlock. Am I missing anything?

I hit a deadlock on this path with one of my earlier changes, in the following pull request (described in detail there), and it is fixed and merged:

https://github.com/ceph/ceph/pull/10220

If my theory is right, we may be hitting the deadlock for some other reason. It seems you are doing a WAL write; could you please describe the steps to reproduce?
Thanks & Regards
Somnath

From: Ma, Jianpeng [mailto:jianpeng.ma@intel.com]
Sent: Thursday, July 28, 2016 1:46 AM
To: Somnath Roy
Cc: ceph-devel; Ma, Jianpeng
Subject: BlueStore Deadlock

Hi Roy:
When doing sequential writes with rbd+librbd, I hit a deadlock in BlueStore. It reproduces 100% (based on 98602ae6c67637dbadddd549bd9a0035e5a2717). By adding debug messages I found this bug is caused by bf70bcb6c54e4d6404533bc91781a5ef77d62033.

Consider this case:

  tp_osd_tp                       aio_complete_thread                kv_sync_thread
  RWlock(coll)                    txc_finish_io                      _txc_finish
  do_write                        lock(osr->qlock)                   lock(osr->qlock)
  do_read                         RLock(coll)                        need osr->qlock to continue
  onode->flush()                  need coll read lock to continue
  need previous txc to complete

But currently I don't know how to fix this.
Thanks!
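The three-way wait Jianpeng describes can be expressed as a tiny wait-for graph (each thread waits on a resource held by the next thread); a cycle in that graph is exactly the deadlock. This sketch is illustrative analysis code, not part of BlueStore:

```cpp
#include <map>
#include <set>
#include <string>

// waiter thread -> thread holding the resource it waits for
using WaitForGraph = std::map<std::string, std::string>;

// Walk the chain of waits from `start`; revisiting any node means the
// wait chain loops back on itself, i.e. the threads are deadlocked.
bool has_cycle(const WaitForGraph& g, const std::string& start) {
  std::set<std::string> seen;
  std::string cur = start;
  while (g.count(cur)) {
    if (!seen.insert(cur).second)
      return true;  // already visited: cycle found
    cur = g.at(cur);
  }
  return false;  // chain terminated at a thread that is not waiting
}
```

With the edges from the table (tp_osd_tp waits on txc completion driven by kv_sync_thread, kv_sync_thread waits on osr->qlock held in the aio completion path, and aio_complete_thread waits on the collection lock held by tp_osd_tp), the graph is a cycle, which is why all three threads stall.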
---
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

--- a/src/os/bluestore/BlueStore.cc
+++ b/src/os/bluestore/BlueStore.cc
@@ -4606,6 +4606,11 @@ void BlueStore::_txc_state_proc(TransContext *txc)
       (txc->first_collection)->lock.get_read();
     }
     for (auto& o : txc->onodes) {
+      {
+        std::lock_guard<std::mutex> l(o->flush_lock);
+        o->flush_txns.insert(txc);
+      }
+
       for (auto& p : o->blob_map.blob_map) {
         p.bc.finish_write(txc->seq);
       }
@@ -4733,8 +4738,8 @@ void BlueStore::_txc_write_nodes(TransContext *txc, KeyValueDB::Transaction t)
     dout(20) << "  onode " << (*p)->oid << " is " << bl.length() << dendl;
     t->set(PREFIX_OBJ, (*p)->key, bl);
-    std::lock_guard<std::mutex> l((*p)->flush_lock);
-    (*p)->flush_txns.insert(txc);
+    /*std::lock_guard<std::mutex> l((*p)->flush_lock);
+      (*p)->flush_txns.insert(txc);*/
   }