From patchwork Tue May 2 10:19:29 2023
X-Patchwork-Submitter: Johannes Thumshirn
X-Patchwork-Id: 13228749
From: Johannes Thumshirn
To: axboe@kernel.dk
Cc: agruenba@redhat.com, cluster-devel@redhat.com, damien.lemoal@wdc.com,
    dm-devel@redhat.com, hare@suse.de, hch@lst.de,
    jfs-discussion@lists.sourceforge.net, kch@nvidia.com,
    linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-raid@vger.kernel.org, ming.lei@redhat.com,
    rpeterso@redhat.com, shaggy@kernel.org, snitzer@kernel.org,
    song@kernel.org, willy@infradead.org, Johannes Thumshirn,
    Damien Le Moal
Subject: [PATCH v5 15/20] md: raid1: check if adding pages to resync bio fails
Date: Tue, 2 May 2023 12:19:29 +0200
Message-Id: <20230502101934.24901-16-johannes.thumshirn@wdc.com>
X-Mailer: git-send-email 2.40.0
In-Reply-To: <20230502101934.24901-1-johannes.thumshirn@wdc.com>
References: <20230502101934.24901-1-johannes.thumshirn@wdc.com>
X-Mailing-List: linux-raid@vger.kernel.org

Check if adding pages to the resync bio fails and, if it does, bail out.
As the comment above the call suggests this cannot happen, WARN if it
actually does. This way we can mark bio_add_page() as __must_check.

Reviewed-by: Damien Le Moal
Acked-by: Song Liu
Signed-off-by: Johannes Thumshirn
---
 drivers/md/raid1-10.c | 11 ++++++-----
 drivers/md/raid10.c   | 20 ++++++++++----------
 2 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.c
index e61f6cad4e08..cd349e69ed77 100644
--- a/drivers/md/raid1-10.c
+++ b/drivers/md/raid1-10.c
@@ -101,11 +101,12 @@ static void md_bio_reset_resync_pages(struct bio *bio, struct resync_pages *rp,
 		struct page *page = resync_fetch_page(rp, idx);
 		int len = min_t(int, size, PAGE_SIZE);
 
-		/*
-		 * won't fail because the vec table is big
-		 * enough to hold all these pages
-		 */
-		bio_add_page(bio, page, len, 0);
+		if (WARN_ON(!bio_add_page(bio, page, len, 0))) {
+			bio->bi_status = BLK_STS_RESOURCE;
+			bio_endio(bio);
+			return;
+		}
+
 		size -= len;
 	} while (idx++ < RESYNC_PAGES && size > 0);
 }
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 4fcfcb350d2b..381c21f7fb06 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3819,11 +3819,11 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
 		for (bio= biolist ; bio ; bio=bio->bi_next) {
 			struct resync_pages *rp = get_resync_pages(bio);
 			page = resync_fetch_page(rp, page_idx);
-			/*
-			 * won't fail because the vec table is big enough
-			 * to hold all these pages
-			 */
-			bio_add_page(bio, page, len, 0);
+			if (WARN_ON(!bio_add_page(bio, page, len, 0))) {
+				bio->bi_status = BLK_STS_RESOURCE;
+				bio_endio(bio);
+				goto giveup;
+			}
 		}
 		nr_sectors += len>>9;
 		sector_nr += len>>9;
@@ -4997,11 +4997,11 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr,
 		if (len > PAGE_SIZE)
 			len = PAGE_SIZE;
 		for (bio = blist; bio ; bio = bio->bi_next) {
-			/*
-			 * won't fail because the vec table is big enough
-			 * to hold all these pages
-			 */
-			bio_add_page(bio, page, len, 0);
+			if (WARN_ON(!bio_add_page(bio, page, len, 0))) {
+				bio->bi_status = BLK_STS_RESOURCE;
+				bio_endio(bio);
+				return sectors_done;
+			}
 		}
 		sector_nr += len >> 9;
 		nr_sectors += len >> 9;
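
For readers outside the md code, here is a minimal, self-contained sketch
(not part of the patch) of the error-handling pattern applied above: check
bio_add_page()'s return value instead of assuming the bvec table is always
big enough, and fail the bio with BLK_STS_RESOURCE when the add fails. The
helper name and its arguments are invented for illustration; only the
block-layer API (bio_add_page(), bio_endio(), BLK_STS_RESOURCE) comes from
the patch.

/*
 * Illustrative sketch only: a hypothetical helper that fills a bio from an
 * array of pages. Everything except bio_add_page(), bio_endio() and
 * BLK_STS_RESOURCE is made up for the example.
 */
#include <linux/bio.h>
#include <linux/blk_types.h>
#include <linux/bug.h>
#include <linux/errno.h>
#include <linux/mm_types.h>

static int example_fill_bio(struct bio *bio, struct page **pages,
			    unsigned int nr_pages, unsigned int len)
{
	unsigned int i;

	for (i = 0; i < nr_pages; i++) {
		/*
		 * bio_add_page() returns the number of bytes added and 0 on
		 * failure; instead of silently ignoring a failure, complete
		 * the bio with an error and report it to the caller.
		 */
		if (WARN_ON(!bio_add_page(bio, pages[i], len, 0))) {
			bio->bi_status = BLK_STS_RESOURCE;
			bio_endio(bio);
			return -ENOMEM;
		}
	}

	return 0;
}

Once every caller handles the failure like this, bio_add_page() can be
annotated __must_check, as the commit message states, so new callers that
ignore the return value are flagged at build time.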