From patchwork Thu Feb 16 11:45:36 2017
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 9576955
From: Ming Lei
To: Shaohua Li, Jens Axboe, linux-kernel@vger.kernel.org,
 linux-raid@vger.kernel.org, linux-block@vger.kernel.org,
 Christoph Hellwig, NeilBrown
Cc: Ming Lei
Subject: [PATCH 06/17] md: raid1/raid10: borrow .bi_error as pre-allocated
 page index
Date: Thu, 16 Feb 2017 19:45:36 +0800
Message-Id: <1487245547-24384-7-git-send-email-tom.leiming@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1487245547-24384-1-git-send-email-tom.leiming@gmail.com>
References: <1487245547-24384-1-git-send-email-tom.leiming@gmail.com>

Before a bio is submitted, it is safe to borrow .bi_error. This patch
uses .bi_error as the index of the next pre-allocated page in the bio,
so that we no longer have to abuse .bi_vcnt for that purpose. More
importantly, the old .bi_vcnt approach will stop working once multipage
bvecs are introduced.

Signed-off-by: Ming Lei
---
 drivers/md/raid1.c  | 12 ++++++++++--
 drivers/md/raid10.c | 14 ++++++++++----
 2 files changed, 20 insertions(+), 6 deletions(-)
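To see the trick in isolation: while a bio is being filled, .bi_error
counts how many pre-allocated pages have been added, and it is reset
before submission. A minimal sketch of that pattern follows; the
get_resync_page()/put_resync_page() helpers are stand-ins for this
series' mdev_get_page_from_bio()/mdev_put_page_to_bio(), and the
surrounding driver context is omitted:

	/* Sketch only, not part of the patch. */
	static void fill_resync_pages(struct bio *bio, int len)
	{
		struct page *page;

		bio->bi_error = 0;	/* borrowed: next page index */
		while (bio->bi_error < RESYNC_PAGES) {
			page = get_resync_page(bio, bio->bi_error++);
			if (bio_add_page(bio, page, len, 0) == 0) {
				/* bio full: hand the last page back */
				put_resync_page(bio, --bio->bi_error, page);
				break;
			}
		}
		bio->bi_error = 0;	/* restore before submission */
	}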
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index c4791fbd69ac..8904a9149671 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2811,13 +2811,14 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
 			len = sync_blocks<<9;
 		}
 
+		/* borrow .bi_error as pre-allocated page index */
 		for (i = 0 ; i < conf->raid_disks * 2; i++) {
 			bio = r1_bio->bios[i];
 			if (bio->bi_end_io) {
-				page = mdev_get_page_from_bio(bio, bio->bi_vcnt);
+				page = mdev_get_page_from_bio(bio, bio->bi_error++);
 				if (bio_add_page(bio, page, len, 0) == 0) {
 					/* stop here */
-					mdev_put_page_to_bio(bio, bio->bi_vcnt, page);
+					mdev_put_page_to_bio(bio, --bio->bi_error, page);
 					while (i > 0) {
 						i--;
 						bio = r1_bio->bios[i];
@@ -2836,6 +2837,13 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
 		sync_blocks -= (len>>9);
 	} while (r1_bio->bios[disk]->bi_vcnt < RESYNC_PAGES);
  bio_full:
+	/* return .bi_error back to bio */
+	for (i = 0 ; i < conf->raid_disks * 2; i++) {
+		bio = r1_bio->bios[i];
+		if (bio->bi_end_io)
+			bio->bi_error = 0;
+	}
+
 	r1_bio->sectors = nr_sectors;
 
 	if (mddev_is_clustered(mddev) &&
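A note on the raid10 changes below: unlike raid1, raid10_sync_request
pre-sets .bi_error to -EIO right after bio_reset(), presumably so that
sync bios which never get submitted still read as failed. Since
.bi_error is now borrowed as a page index while pages are added, that
pre-set is removed from the two setup sites and re-applied by the
restore loop at the end, roughly (a sketch mirroring the final hunk):

	for (bio = biolist; bio; bio = bio->bi_next)
		bio->bi_error = test_bit(MD_RECOVERY_SYNC, &mddev->recovery)
				? -EIO : 0;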
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index b7dfbca869a3..9cfc22cd1330 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3348,7 +3348,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
 
 			bio = r10_bio->devs[i].bio;
 			bio_reset(bio);
-			bio->bi_error = -EIO;
 			rcu_read_lock();
 			rdev = rcu_dereference(conf->mirrors[d].rdev);
 			if (rdev == NULL || test_bit(Faulty, &rdev->flags)) {
@@ -3392,7 +3391,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
 			/* Need to set up for writing to the replacement */
 			bio = r10_bio->devs[i].repl_bio;
 			bio_reset(bio);
-			bio->bi_error = -EIO;
 
 			sector = r10_bio->devs[i].addr;
 			bio->bi_next = biolist;
@@ -3435,14 +3433,15 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
 			len = (max_sector - sector_nr) << 9;
 		if (len == 0)
 			break;
+		/* borrow .bi_error as pre-allocated page index */
 		for (bio= biolist ; bio ; bio=bio->bi_next) {
 			struct bio *bio2;
-			page = mdev_get_page_from_bio(bio, bio->bi_vcnt);
+			page = mdev_get_page_from_bio(bio, bio->bi_error++);
 			if (bio_add_page(bio, page, len, 0))
 				continue;
 
 			/* stop here */
-			mdev_put_page_to_bio(bio, bio->bi_vcnt, page);
+			mdev_put_page_to_bio(bio, --bio->bi_error, page);
 			for (bio2 = biolist;
 			     bio2 && bio2 != bio;
 			     bio2 = bio2->bi_next) {
@@ -3456,6 +3455,13 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
 		sector_nr += len>>9;
 	} while (biolist->bi_vcnt < RESYNC_PAGES);
  bio_full:
+	/* return .bi_error back to bio, and set resync's as -EIO */
+	for (bio= biolist ; bio ; bio=bio->bi_next)
+		if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery))
+			bio->bi_error = -EIO;
+		else
+			bio->bi_error = 0;
+
 	r10_bio->sectors = nr_sectors;
 
 	while (biolist) {