From patchwork Wed Aug 3 01:35:01 2016
From: "Ma, Jianpeng" <jianpeng.ma@intel.com>
To: Sage Weil
Cc: ceph-devel <ceph-devel@vger.kernel.org>, Somnath Roy
Subject: RE: BlueStore Deadlock
Date: Wed, 3 Aug 2016 01:35:01 +0000
Message-ID: <6AA21C22F0A5DA478922644AD2EC308C373B9AB0@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <6AA21C22F0A5DA478922644AD2EC308C373B9424@SHSMSX101.ccr.corp.intel.com>

Hi Sage:
Why is there STATE_WRITING? In my opinion, a normal read does onode->flush(), so it has no need to wait for the I/O to complete (a sketch of that flush/wait mechanism follows below). The special case is a single transaction that contains two writes, where the later write needs to read. For a write, if it is a WAL write, the data is not yet on disk when finish_write runs; this is different from a non-WAL write.
Thanks!
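For reference, the flush/wait mechanism discussed here is essentially a condition variable over the set of in-flight txcs on an onode. A simplified, self-contained sketch; the member names mirror flush_lock/flush_txns/flush_cond from BlueStore.cc as mentioned in this thread, but the code is illustrative rather than the actual implementation, and finish_txc is a hypothetical stand-in for the bookkeeping done in _txc_finish():

  #include <condition_variable>
  #include <mutex>
  #include <set>

  struct TransContext;  // opaque handle for one in-flight transaction

  struct Onode {
    std::mutex flush_lock;
    std::condition_variable flush_cond;
    std::set<TransContext*> flush_txns;  // txcs that touched this onode and are unfinished

    // What onode->flush() amounts to: block until every txc that
    // touched this onode has completed.
    void flush() {
      std::unique_lock<std::mutex> l(flush_lock);
      while (!flush_txns.empty())
        flush_cond.wait(l);
    }

    // Hypothetical stand-in for what _txc_finish() does with this
    // state: drop the txc and wake any flush() waiters. flush_lock is
    // confined to the onode and is not held across other locks.
    void finish_txc(TransContext* txc) {
      std::lock_guard<std::mutex> l(flush_lock);
      flush_txns.erase(txc);
      if (flush_txns.empty())
        flush_cond.notify_all();
    }
  };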
-----Original Message-----
From: Ma, Jianpeng
Sent: Friday, July 29, 2016 9:36 AM
To: Ma, Jianpeng; Somnath Roy
Cc: ceph-devel
Subject: RE: BlueStore Deadlock

Roy:
My question is why STATE_WRITING exists at all. Reads do not bypass the cache, so why do we need STATE_WRITING?
Thanks!

-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Ma, Jianpeng
Sent: Friday, July 29, 2016 9:24 AM
To: Somnath Roy
Cc: ceph-devel
Subject: RE: BlueStore Deadlock

Hi Roy:
With your patch there is still a deadlock.
By the way, what if we change BufferSpace::_add_buffer so that a buffer written with the cache flag is pushed to the front of the cache, while one without the flag is only put at the back? I think that way we could remove finish_write (see the sketch right after this message). How about it?
Thanks!
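A minimal sketch of that _add_buffer idea, assuming a simple LRU list; the FLAG_CACHE name and the list layout here are illustrative assumptions, not BlueStore's actual interfaces:

  #include <list>
  #include <memory>

  struct Buffer;  // cached extent data, as in BlueStore's BufferSpace

  constexpr unsigned FLAG_CACHE = 1;  // hypothetical "keep this hot" flag

  struct BufferSpace {
    std::list<std::shared_ptr<Buffer>> lru;  // front = most recently used

    // The proposal: data written with the cache flag goes to the LRU
    // front; data written without it is parked at the back, where it is
    // the first candidate for trimming, so no finish_write()-style state
    // transition is needed to decide its fate later.
    void _add_buffer(std::shared_ptr<Buffer> b, unsigned flags) {
      if (flags & FLAG_CACHE)
        lru.push_front(std::move(b));
      else
        lru.push_back(std::move(b));
    }
  };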
-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Friday, July 29, 2016 6:22 AM
To: Ma, Jianpeng
Cc: ceph-devel
Subject: RE: BlueStore Deadlock

Jianpeng,
I thought this through, and there seems to be one possible deadlock scenario:

tp_osd_tp           --> waiting in onode->flush() for the previous txc to finish, while holding Wlock(coll)
aio_complete_thread --> waiting for RLock(coll)

No other thread will be blocked here. We add the previous txc to the flush_txns list during _txc_write_nodes(), before aio_complete_thread calls _txc_state_proc(). So, if we get I/O on the same collection within that window, it will wait on the unfinished txcs. The solution could be the following; it moves the flush_txns registration from _txc_write_nodes() into _txc_state_proc(), closing that window:

root@emsnode5:~/ceph-master/src# git diff
diff --git a/src/os/bluestore/BlueStore.cc b/src/os/bluestore/BlueStore.cc
index e8548b1..575a234 100644
--- a/src/os/bluestore/BlueStore.cc
+++ b/src/os/bluestore/BlueStore.cc
@@ -4606,6 +4606,11 @@ void BlueStore::_txc_state_proc(TransContext *txc)
       (txc->first_collection)->lock.get_read();
     }
     for (auto& o : txc->onodes) {
+      {
+        std::lock_guard<std::mutex> l(o->flush_lock);
+        o->flush_txns.insert(txc);
+      }
+
       for (auto& p : o->blob_map.blob_map) {
         p.bc.finish_write(txc->seq);
       }
@@ -4733,8 +4738,8 @@ void BlueStore::_txc_write_nodes(TransContext *txc, KeyValueDB::Transaction t)
     dout(20) << "  onode " << (*p)->oid << " is " << bl.length() << dendl;
     t->set(PREFIX_OBJ, (*p)->key, bl);
-    std::lock_guard<std::mutex> l((*p)->flush_lock);
-    (*p)->flush_txns.insert(txc);
+    /*std::lock_guard<std::mutex> l((*p)->flush_lock);
+    (*p)->flush_txns.insert(txc);*/
   }

I am not able to reproduce this in my setup, so it would be helpful if you could apply the above changes in your env and see whether you still hit the issue.

Thanks & Regards
Somnath

-----Original Message-----
From: Somnath Roy
Sent: Thursday, July 28, 2016 8:45 AM
To: 'Ma, Jianpeng'
Cc: ceph-devel
Subject: RE: BlueStore Deadlock

Hi Jianpeng,
Are you trying with the latest master and still hitting the issue (it seems so, but just confirming)?
The following scenario should not create a deadlock, for this reason: onode->flush() waits on flush_lock, and _txc_finish() releases that lock before taking osr->qlock. Am I missing anything?
I hit a deadlock on this path in one of my earlier changes, in the following pull request (described in detail there), and it has since been fixed and merged:

https://github.com/ceph/ceph/pull/10220

If my theory is right, we may be hitting this deadlock for some other reason. It seems you are doing a WAL write; could you please describe the steps to reproduce?

Thanks & Regards
Somnath

From: Ma, Jianpeng [mailto:jianpeng.ma@intel.com]
Sent: Thursday, July 28, 2016 1:46 AM
To: Somnath Roy
Cc: ceph-devel; Ma, Jianpeng
Subject: BlueStore Deadlock

Hi Roy:
When doing sequential writes with rbd+librbd, I hit a deadlock in BlueStore. It reproduces 100% (based on 98602ae6c67637dbadddd549bd9a0035e5a2717). By adding debug messages I found the bug is caused by bf70bcb6c54e4d6404533bc91781a5ef77d62033.

Consider this case (each column is one thread, read top to bottom):

  tp_osd_tp                        aio_complete_thread            kv_sync_thread
  RWLock(coll)                     txc_finish_io                  _txc_finish
  do_write                         lock(osr->qlock)               lock(osr->qlock)
  do_read                          RLock(coll)                    -> needs osr->qlock to continue
  onode->flush()                   -> needs coll read lock
  -> needs previous txc complete      to continue

That is: tp_osd_tp holds the collection lock and waits in onode->flush() for the previous txc to complete; kv_sync_thread, which would complete that txc in _txc_finish, waits for osr->qlock; and aio_complete_thread, which holds osr->qlock in txc_finish_io, waits for RLock(coll). The three waits form a cycle.
But currently I don't know how to fix it.
Thanks!
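To make the cycle concrete, here is a minimal standalone C++ program modeling the three threads with generic locks. Every name here is a stand-in, not a real BlueStore type; the sleeps merely force the problematic interleaving. Built with -std=c++17 -pthread, it is expected to hang, which is the deadlock:

  #include <chrono>
  #include <condition_variable>
  #include <mutex>
  #include <shared_mutex>
  #include <thread>

  std::shared_mutex coll_lock;    // stands in for the collection RWLock
  std::mutex qlock;               // stands in for osr->qlock
  std::mutex flush_lock;          // protects txc_finished
  std::condition_variable flush_cond;
  bool txc_finished = false;

  void tp_osd_tp() {
    std::unique_lock<std::shared_mutex> coll(coll_lock);  // Wlock(coll)
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    // onode->flush(): wait for the previous txc, i.e. for kv_sync_thread
    std::unique_lock<std::mutex> l(flush_lock);
    flush_cond.wait(l, [] { return txc_finished; });
  }

  void aio_complete_thread() {
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    std::lock_guard<std::mutex> q(qlock);                 // txc_finish_io
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    std::shared_lock<std::shared_mutex> coll(coll_lock);  // RLock(coll): blocks on tp_osd_tp
  }

  void kv_sync_thread() {
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    std::lock_guard<std::mutex> q(qlock);                 // _txc_finish: blocks on aio_complete_thread
    {
      std::lock_guard<std::mutex> l(flush_lock);
      txc_finished = true;
    }
    flush_cond.notify_all();  // never reached once the cycle closes
  }

  int main() {
    std::thread a(tp_osd_tp), b(aio_complete_thread), c(kv_sync_thread);
    a.join(); b.join(); c.join();  // all three block forever
  }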