From patchwork Fri Feb 24 15:42:44 2017
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 9590615
From: Ming Lei
To: Shaohua Li, Jens Axboe, linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org,
	linux-block@vger.kernel.org, Christoph Hellwig
Cc: Ming Lei
Subject: [PATCH v1 07/14] md: raid1: don't use bio's vec table to manage resync pages
Date: Fri, 24 Feb 2017 23:42:44 +0800
Message-Id: <1487950971-1131-8-git-send-email-tom.leiming@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1487950971-1131-1-git-send-email-tom.leiming@gmail.com>
References: <1487950971-1131-1-git-send-email-tom.leiming@gmail.com>

Now we allocate one page array for managing resync pages, instead of
using the bio's vec table to do that. The old way is very hacky and
won't work any more once multipage bvec is enabled.

The introduced cost is that we need to allocate (128 + 16) * raid_disks
bytes per r1_bio, which is fine because the number of inflight r1_bios
for resync shouldn't be large, as pointed out by Shaohua.

Also the bio_reset() in raid1_sync_request() is removed because all
bios are freshly allocated now and don't need to be reset any more.
This patch can be thought of as a cleanup, too.

Suggested-by: Shaohua Li
Signed-off-by: Ming Lei
---
 drivers/md/raid1.c | 86 +++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 56 insertions(+), 30 deletions(-)
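[Archive note, not part of the patch: struct resync_pages and the resync_*
helpers used below (resync_alloc_pages(), resync_free_pages(),
resync_get_all_pages(), resync_fetch_page(), resync_store_page(),
resync_page_available()) are introduced by an earlier patch in this series
and are not shown here. What follows is only a rough sketch of what that
interface plausibly looks like, inferred from the call sites in this patch;
the field names, the RESYNC_PAGES count and the exact semantics are
assumptions, not the real definitions. Read this way, 16 page pointers plus
the idx/raid_bio bookkeeping on a 64-bit build would also line up with the
"(128 + 16) * raid_disks bytes" figure in the changelog.]

/*
 * Illustrative sketch only -- NOT part of this patch.  Names and
 * semantics are inferred from the call sites below and may differ
 * from the helpers actually added earlier in this series.
 */
#include <linux/mm.h>
#include <linux/gfp.h>

/* assumed: 64K resync window with 4K pages, i.e. 16 page pointers */
#define RESYNC_PAGES	16

struct resync_pages {
	unsigned	idx;		/* next page to hand out */
	void		*raid_bio;	/* owning r1_bio (r10_bio for raid10) */
	struct page	*pages[RESYNC_PAGES];
};

static inline int resync_alloc_pages(struct resync_pages *rp,
				     gfp_t gfp_flags)
{
	int i;

	for (i = 0; i < RESYNC_PAGES; i++) {
		rp->pages[i] = alloc_page(gfp_flags);
		if (!rp->pages[i])
			goto out_free;
	}
	return 0;

out_free:
	while (--i >= 0)
		put_page(rp->pages[i]);
	return -ENOMEM;
}

static inline void resync_free_pages(struct resync_pages *rp)
{
	int i;

	for (i = 0; i < RESYNC_PAGES; i++)
		put_page(rp->pages[i]);
}

static inline void resync_get_all_pages(struct resync_pages *rp)
{
	int i;

	/* share the pages copied from rps[0] instead of allocating new ones */
	for (i = 0; i < RESYNC_PAGES; i++)
		get_page(rp->pages[i]);
}

static inline struct page *resync_fetch_page(struct resync_pages *rp)
{
	/* hand out the next page of the per-bio array */
	return rp->pages[rp->idx++];
}

static inline void resync_store_page(struct resync_pages *rp,
				     struct page *page)
{
	/* undo a fetch when bio_add_page() refused the page */
	rp->pages[--rp->idx] = page;
}

static inline bool resync_page_available(void *bi_private)
{
	struct resync_pages *rp = bi_private;

	return rp->idx < RESYNC_PAGES;
}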
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 2de0bd69d8da..4a208220ff0f 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -77,6 +77,16 @@ static void lower_barrier(struct r1conf *conf, sector_t sector_nr);
 #define raid1_log(md, fmt, args...)	\
 	do { if ((md)->queue) blk_add_trace_msg((md)->queue, "raid1 " fmt, ##args); } while (0)
 
+static inline struct resync_pages *get_resync_pages(struct bio *bio)
+{
+	return bio->bi_private;
+}
+
+static inline struct r1bio *get_resync_r1bio(struct bio *bio)
+{
+	return get_resync_pages(bio)->raid_bio;
+}
+
 static void * r1bio_pool_alloc(gfp_t gfp_flags, void *data)
 {
 	struct pool_info *pi = data;
@@ -104,12 +114,18 @@ static void * r1buf_pool_alloc(gfp_t gfp_flags, void *data)
 	struct r1bio *r1_bio;
 	struct bio *bio;
 	int need_pages;
-	int i, j;
+	int j;
+	struct resync_pages *rps;
 
 	r1_bio = r1bio_pool_alloc(gfp_flags, pi);
 	if (!r1_bio)
 		return NULL;
 
+	rps = kmalloc(sizeof(struct resync_pages) * pi->raid_disks,
+		      gfp_flags);
+	if (!rps)
+		goto out_free_r1bio;
+
 	/*
 	 * Allocate bios : 1 for reading, n-1 for writing
 	 */
@@ -129,22 +145,22 @@ static void * r1buf_pool_alloc(gfp_t gfp_flags, void *data)
 		need_pages = pi->raid_disks;
 	else
 		need_pages = 1;
-	for (j = 0; j < need_pages; j++) {
+	for (j = 0; j < pi->raid_disks; j++) {
+		struct resync_pages *rp = &rps[j];
+
 		bio = r1_bio->bios[j];
-		bio->bi_vcnt = RESYNC_PAGES;
-
-		if (bio_alloc_pages(bio, gfp_flags))
-			goto out_free_pages;
-	}
-	/* If not user-requests, copy the page pointers to all bios */
-	if (!test_bit(MD_RECOVERY_REQUESTED, &pi->mddev->recovery)) {
-		for (i=0; i<RESYNC_PAGES; i++)
-			for (j=1; j<pi->raid_disks; j++) {
-				struct page *page =
-					r1_bio->bios[0]->bi_io_vec[i].bv_page;
-				get_page(page);
-				r1_bio->bios[j]->bi_io_vec[i].bv_page = page;
-			}
+
+		if (j < need_pages) {
+			if (resync_alloc_pages(rp, gfp_flags))
+				goto out_free_pages;
+		} else {
+			memcpy(rp, &rps[0], sizeof(*rp));
+			resync_get_all_pages(rp);
+		}
+
+		rp->idx = 0;
+		rp->raid_bio = r1_bio;
+		bio->bi_private = rp;
 	}
 
 	r1_bio->master_bio = NULL;
@@ -153,11 +169,14 @@ static void * r1buf_pool_alloc(gfp_t gfp_flags, void *data)
 
 out_free_pages:
 	while (--j >= 0)
-		bio_free_pages(r1_bio->bios[j]);
+		resync_free_pages(&rps[j]);
 
 out_free_bio:
 	while (++j < pi->raid_disks)
 		bio_put(r1_bio->bios[j]);
+	kfree(rps);
+
+out_free_r1bio:
 	r1bio_pool_free(r1_bio, data);
 	return NULL;
 }
@@ -165,14 +184,18 @@ static void * r1buf_pool_alloc(gfp_t gfp_flags, void *data)
 static void r1buf_pool_free(void *__r1_bio, void *data)
 {
 	struct pool_info *pi = data;
-	int i,j;
+	int i;
 	struct r1bio *r1bio = __r1_bio;
+	struct resync_pages *rp = NULL;
 
-	for (i = 0; i < RESYNC_PAGES; i++)
-		for (j = pi->raid_disks; j-- ;)
-			safe_put_page(r1bio->bios[j]->bi_io_vec[i].bv_page);
-	for (i=0 ; i < pi->raid_disks; i++)
+	for (i = pi->raid_disks; i--; ) {
+		rp = get_resync_pages(r1bio->bios[i]);
+		resync_free_pages(rp);
 		bio_put(r1bio->bios[i]);
+	}
+
+	/* resync pages array stored in the 1st bio's .bi_private */
+	kfree(rp);
 
 	r1bio_pool_free(r1bio, data);
 }
@@ -1849,7 +1872,7 @@ static int raid1_remove_disk(struct mddev *mddev, struct md_rdev *rdev)
 
 static void end_sync_read(struct bio *bio)
 {
-	struct r1bio *r1_bio = bio->bi_private;
+	struct r1bio *r1_bio = get_resync_r1bio(bio);
 
 	update_head_pos(r1_bio->read_disk, r1_bio);
 
@@ -1868,7 +1891,7 @@ static void end_sync_read(struct bio *bio)
 static void end_sync_write(struct bio *bio)
 {
 	int uptodate = !bio->bi_error;
-	struct r1bio *r1_bio = bio->bi_private;
+	struct r1bio *r1_bio = get_resync_r1bio(bio);
 	struct mddev *mddev = r1_bio->mddev;
 	struct r1conf *conf = mddev->private;
 	sector_t first_bad;
@@ -2085,6 +2108,7 @@ static void process_checks(struct r1bio *r1_bio)
 		int size;
 		int error;
 		struct bio *b = r1_bio->bios[i];
+		struct resync_pages *rp = get_resync_pages(b);
 		if (b->bi_end_io != end_sync_read)
 			continue;
 		/* fixup the bio for reuse, but preserve errno */
@@ -2097,7 +2121,8 @@ static void process_checks(struct r1bio *r1_bio)
 			conf->mirrors[i].rdev->data_offset;
 		b->bi_bdev = conf->mirrors[i].rdev->bdev;
 		b->bi_end_io = end_sync_read;
-		b->bi_private = r1_bio;
+		rp->raid_bio = r1_bio;
+		b->bi_private = rp;
 
 		size = b->bi_iter.bi_size;
 		for (j = 0; j < vcnt ; j++) {
@@ -2755,7 +2780,6 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
 	for (i = 0; i < conf->raid_disks * 2; i++) {
 		struct md_rdev *rdev;
 		bio = r1_bio->bios[i];
-		bio_reset(bio);
 
 		rdev = rcu_dereference(conf->mirrors[i].rdev);
 		if (rdev == NULL ||
@@ -2811,7 +2835,6 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
 			atomic_inc(&rdev->nr_pending);
 			bio->bi_iter.bi_sector = sector_nr + rdev->data_offset;
 			bio->bi_bdev = rdev->bdev;
-			bio->bi_private = r1_bio;
 			if (test_bit(FailFast, &rdev->flags))
 				bio->bi_opf |= MD_FAILFAST;
 		}
@@ -2897,12 +2920,15 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
 		}
 
 		for (i = 0 ; i < conf->raid_disks * 2; i++) {
+			struct resync_pages *rp;
+
 			bio = r1_bio->bios[i];
+			rp = get_resync_pages(bio);
 			if (bio->bi_end_io) {
-				page = bio->bi_io_vec[bio->bi_vcnt].bv_page;
+				page = resync_fetch_page(rp);
 				if (bio_add_page(bio, page, len, 0) == 0) {
 					/* stop here */
-					bio->bi_io_vec[bio->bi_vcnt].bv_page = page;
+					resync_store_page(rp, page);
 					while (i > 0) {
 						i--;
 						bio = r1_bio->bios[i];
@@ -2919,7 +2945,7 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
 		nr_sectors += len>>9;
 		sector_nr += len>>9;
 		sync_blocks -= (len>>9);
-	} while (r1_bio->bios[disk]->bi_vcnt < RESYNC_PAGES);
+	} while (resync_page_available(r1_bio->bios[disk]->bi_private));
  bio_full:
 	r1_bio->sectors = nr_sectors;