From patchwork Mon Nov 23 02:46:20 2015
Subject: Re: kernel BUG at drivers/scsi/scsi_lib.c:1096!
From: Ming Lei
To: Mark Salter
Cc: Laurent Dufour, Michael Ellerman, Christoph Hellwig,
    "James E. J. Bottomley", brking, Linux SCSI List,
    Linux Kernel Mailing List, linuxppc-dev@lists.ozlabs.org,
    linux-block@vger.kernel.org
Date: Mon, 23 Nov 2015 10:46:20 +0800
In-Reply-To: <1448243411.8209.36.camel@redhat.com>
References: <1447838334.1564.2.camel@ellerman.id.au>
 <1447855399.3974.24.camel@redhat.com>
 <1447894964.15206.0.camel@ellerman.id.au>
 <20151119082325.GA11419@infradead.org>
 <1448021448.14769.7.camel@ellerman.id.au>
 <565055C6.5040801@linux.vnet.ibm.com>
 <20151122005635.1b9ffbe1@tom-T450>
 <1448234410.8209.3.camel@redhat.com>
 <1448243411.8209.36.camel@redhat.com>
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 7677311
X-Patchwork-Delegate: axboe@kernel.dk
X-Mailing-List: linux-block@vger.kernel.org

Hi Mark,

On Mon, Nov 23, 2015 at 9:50 AM, Mark Salter wrote:
> On Mon, 2015-11-23 at 08:36 +0800, Ming Lei wrote:
>> On Mon, Nov 23, 2015 at 7:20 AM, Mark Salter wrote:
>> > On Sun, 2015-11-22 at 00:56 +0800, Ming Lei wrote:
>> > > On Sat, 21 Nov 2015 12:30:14 +0100
>> > > Laurent Dufour wrote:
>> > > >
>> > > > On 20/11/2015 13:10, Michael Ellerman wrote:
>> > > > > On Thu, 2015-11-19 at 00:23 -0800, Christoph Hellwig wrote:
>> > > > > >
>> > > > > > It's pretty much guaranteed to be a block layer bug, most likely in
>> > > > > > the merge-bios-to-request infrastructure, where we don't obey the
>> > > > > > merging limits properly.
>> > > > > >
>> > > > > > Does either of you have a known good and first known bad kernel?
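To make the failure mode concrete: the BUG_ON at scsi_lib.c:1096 fires in
scsi_init_sgtable() when blk_rq_map_sg() produces more scatterlist entries
than req->nr_phys_segments predicted, since that count sized the table.
Below is a minimal userspace sketch of that contract; map_segments_sketch()
is a hypothetical stand-in for blk_rq_map_sg(), an illustration rather than
kernel code:

#include <assert.h>
#include <stdio.h>

/* Stand-in for the sg table that scsi_alloc_sgtable() sizes up front. */
struct sgtable_sketch {
        unsigned nents;         /* allocated from req->nr_phys_segments */
};

/*
 * Hypothetical stand-in for blk_rq_map_sg(): it walks the request's bvecs
 * and may coalesce adjacent ones, so the mapped count is only allowed to
 * come out less than or equal to the precomputed segment count.
 */
static unsigned map_segments_sketch(unsigned precomputed_segs)
{
        return precomputed_segs;        /* merging can only shrink this */
}

int main(void)
{
        struct sgtable_sketch sdb = { .nents = 4 };
        unsigned count = map_segments_sketch(sdb.nents);

        /* Mirrors BUG_ON(count > sdb->table.nents) in scsi_init_sgtable(). */
        assert(count <= sdb.nents);
        printf("mapped %u of %u preallocated entries\n", count, sdb.nents);
        return 0;
}

If the split/merge path miscounts nr_phys_segments, the equivalent of this
assert is exactly what blows up.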
>> > > > >
>> > > > > Not me, I've only hit it once or twice. All I can say is that I have
>> > > > > hit it in 4.4-rc1.
>> > > > >
>> > > > > Laurent, can you narrow it down at all?
>> > > >
>> > > > It seems that the panic is triggered by commit bdced438acd8 ("block:
>> > > > setup bi_phys_segments after splitting"), which was pulled in by merge
>> > > > d9734e0d1ccf ("Merge branch 'for-4.4/core' of
>> > > > git://git.kernel.dk/linux-block").
>> > > >
>> > > > My system panics promptly when running a kernel built at d9734e0d1ccf,
>> > > > while with commit bdced438acd8 reverted it can run for hours without
>> > > > panicking.
>> > > >
>> > > > That being said, I can't explain what's going wrong.
>> > > >
>> > > > Can Ming shed some light here?
>> > >
>> > > Laurent, it looks like there is a bug in blk_bio_segment_split(); would
>> > > you mind testing the following patch to see if it fixes your issue?
>> > >
>> > > ---
>> > > From 6fc701231dcc000bc8bc4b9105583380d9aa31f4 Mon Sep 17 00:00:00 2001
>> > > From: Ming Lei
>> > > Date: Sun, 22 Nov 2015 00:47:13 +0800
>> > > Subject: [PATCH] block: fix segment split
>> > >
>> > > Inside blk_bio_segment_split(), the previous bvec pointer ('bvprvp')
>> > > always points to the iterator's local variable, which is obviously
>> > > wrong, so fix it by pointing it at the local variable 'bvprv'.
>> > >
>> > > Signed-off-by: Ming Lei
>> > > ---
>> > >  block/blk-merge.c | 4 ++--
>> > >  1 file changed, 2 insertions(+), 2 deletions(-)
>> > >
>> > > diff --git a/block/blk-merge.c b/block/blk-merge.c
>> > > index de5716d8..f2efe8a 100644
>> > > --- a/block/blk-merge.c
>> > > +++ b/block/blk-merge.c
>> > > @@ -98,7 +98,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
>> > >
>> > >                  seg_size += bv.bv_len;
>> > >                  bvprv = bv;
>> > > -                bvprvp = &bv;
>> > > +                bvprvp = &bvprv;
>> > >                  sectors += bv.bv_len >> 9;
>> > >                  continue;
>> > >          }
>> > > @@ -108,7 +108,7 @@ new_segment:
>> > >
>> > >          nsegs++;
>> > >          bvprv = bv;
>> > > -        bvprvp = &bv;
>> > > +        bvprvp = &bvprv;
>> > >          seg_size = bv.bv_len;
>> > >          sectors += bv.bv_len >> 9;
>> > >  }
>> >
>> > I'm still hitting the BUG even with this patch applied on top of 4.4-rc1.
>>
>> OK, it looks like there are still other bugs; care to share how you
>> reproduce it on arm64?
>>
>> thanks,
>> Ming
>
> Unfortunately, the best reproducer I have is to boot the platform. I have
> seen the BUG a few times post-boot, but I don't have a consistent
> reproducer. I am using upstream 4.4-rc1 with this config:
>
> http://people.redhat.com/msalter/fh_defconfig
>
> With 4.4-rc1 on an APM Mustang platform, I see the BUG about once every 6-7
> boots. On an AMD Seattle platform, about every 9 boots.

Thanks for the input; I will try to reproduce the issue on Mustang with your
kernel config.

> I have a script that loops through an ssh command to reboot the platform
> under test. I manually install test kernels and then run the script and wait
> for a failure. While debugging, I have tried more minimal configs, with
> which I have been unable to reproduce the problem even after several hours
> of reboots. With the above-mentioned fh_defconfig, I have been able to get a
> failure within 20 or so boots with most kernel builds, but at certain kernel
> commits the failure has taken longer to reproduce.
>
> From my POV, I can't say which commit causes the problem. So far, I have not
> been able to reproduce it at all before commit d9734e0d1ccf, but I am
> currently trying to reproduce it with commit 0d51ce9ca1116 (one merge
> earlier than d9734e0d1ccf).
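For anyone following along, the lifetime bug that patch fixes is easy to see
in isolation. A minimal userspace sketch, using a simplified bio_vec
stand-in; illustrative only, not the kernel code:

#include <stdio.h>

/* Simplified stand-in for struct bio_vec: just an extent. */
struct bvec_sketch {
        unsigned offset;
        unsigned len;
};

int main(void)
{
        struct bvec_sketch vecs[] = { { 0, 512 }, { 512, 512 }, { 2048, 512 } };
        struct bvec_sketch bv, bvprv, *bvprvp = NULL;

        for (unsigned i = 0; i < 3; i++) {
                bv = vecs[i];   /* per-iteration copy, like the bvec iterator's */

                if (bvprvp)     /* "is bv contiguous with the previous bvec?" */
                        printf("prev ends at %u, current starts at %u\n",
                               bvprvp->offset + bvprvp->len, bv.offset);

                bvprv = bv;
                bvprvp = &bvprv;        /* fixed form: still valid next pass */
                /*
                 * The buggy form was 'bvprvp = &bv': since 'bv' is
                 * overwritten at the top of every pass, the next iteration's
                 * check would compare the current bvec against itself
                 * instead of the real previous one, skewing the segment
                 * accounting.
                 */
        }
        return 0;
}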
The patch fixing 'bvprvp' should be included in your tests, because that
issue may have a big effect on the computed physical segment count.

Also, I would appreciate it if you, Laurent, or anyone else could provide
the debug log captured with the attached debug patch when this issue is
triggered.

Thanks,

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index dd8ad2a..a0db1fe 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1077,14 +1077,31 @@ void scsi_io_completion(struct scsi_cmnd *cmd, unsigned int good_bytes)
         }
 }
 
+static void scsi_dump_req(struct request *req)
+{
+        struct bio_vec bvec;
+        struct bio *bio;
+        struct bvec_iter iter;
+        int i = 0;
+
+        __rq_for_each_bio(bio, req) {
+                printk("%d-%p\n", i++, bio);
+                bio_for_each_segment(bvec, bio, iter) {
+                        printk("\t %p %u %u\n", bvec.bv_page,
+                               bvec.bv_offset, bvec.bv_len);
+                }
+        }
+}
+
 static int scsi_init_sgtable(struct request *req, struct scsi_data_buffer *sdb)
 {
-        int count;
+        int count, pre_count = req->nr_phys_segments;
+        int alloc_cnt = queue_max_segments(req->q);
 
         /*
          * If sg table allocation fails, requeue request later.
          */
-        if (unlikely(scsi_alloc_sgtable(sdb, req->nr_phys_segments,
+        if (unlikely(scsi_alloc_sgtable(sdb, alloc_cnt,
                                         req->mq_ctx != NULL)))
                 return BLKPREP_DEFER;
 
@@ -1093,6 +1110,11 @@ static int scsi_init_sgtable(struct request *req, struct scsi_data_buffer *sdb)
          * each segment.
          */
         count = blk_rq_map_sg(req->q, req, sdb->table.sgl);
+
+        if (count > sdb->table.nents) {
+                printk("%s prev vs. now: %d-%d\n", __func__, pre_count, count);
+                scsi_dump_req(req);
+        }
         BUG_ON(count > sdb->table.nents);
         sdb->table.nents = count;
         sdb->length = blk_rq_bytes(req);
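For reference when reading such a dump, here is a rough userspace model of
the accounting being cross-checked. It is a deliberate simplification, using
only contiguity and an assumed max segment size (the real code also honors
boundary masks and queue clustering), not the actual
blk_bio_segment_split()/blk_rq_map_sg() logic:

#include <stdio.h>

struct bvec_sketch {
        unsigned long addr;     /* physical address of the extent */
        unsigned len;
};

/* Count physical segments by coalescing contiguous bvecs up to a limit. */
static unsigned count_phys_segments(const struct bvec_sketch *v, int n,
                                    unsigned max_seg_size)
{
        unsigned nsegs = 0, seg_size = 0;
        const struct bvec_sketch *prev = NULL;

        for (int i = 0; i < n; i++) {
                int contiguous = prev && prev->addr + prev->len == v[i].addr;

                if (contiguous && seg_size + v[i].len <= max_seg_size) {
                        seg_size += v[i].len;   /* extend current segment */
                } else {
                        nsegs++;                /* start a new segment */
                        seg_size = v[i].len;
                }
                prev = &v[i];
        }
        return nsegs;
}

int main(void)
{
        struct bvec_sketch v[] = { { 0, 4096 }, { 4096, 4096 }, { 16384, 4096 } };

        /* The first two coalesce; the third starts a new segment: prints 2. */
        printf("%u physical segments\n", count_phys_segments(v, 3, 65536));
        return 0;
}

If the walk done at split time (which sets nr_phys_segments) and the walk
done at map time disagree on decisions like these, blk_rq_map_sg() can
return more entries than were allocated, which is exactly what the dump
above is meant to catch.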