From patchwork Thu Apr 20 10:04:56 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Johannes Thumshirn
X-Patchwork-Id: 13218376
From: Johannes Thumshirn
To: axboe@kernel.dk
Cc: johannes.thumshirn@wdc.com, agruenba@redhat.com, cluster-devel@redhat.com,
	damien.lemoal@wdc.com, dm-devel@redhat.com, dsterba@suse.com, hare@suse.de,
	hch@lst.de, jfs-discussion@lists.sourceforge.net, kch@nvidia.com,
	linux-block@vger.kernel.org, linux-btrfs@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-raid@vger.kernel.org,
	ming.lei@redhat.com, rpeterso@redhat.com, shaggy@kernel.org, snitzer@kernel.org,
	song@kernel.org, willy@infradead.org, Damien Le Moal
Subject: [PATCH v4 17/22] md: raid1: check if adding pages to resync bio fails
Date: Thu, 20 Apr 2023 12:04:56 +0200
Message-Id: <20230420100501.32981-18-jth@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230420100501.32981-1-jth@kernel.org>
References: <20230420100501.32981-1-jth@kernel.org>
Precedence: bulk
List-ID:
X-Mailing-List: linux-btrfs@vger.kernel.org

From: Johannes Thumshirn

Check if adding pages to the resync bio fails and, if it does, bail out.
As the comment above the call suggests, this cannot happen, so WARN if it
actually does. This way we can mark bio_add_page() as __must_check.

Signed-off-by: Johannes Thumshirn
Reviewed-by: Damien Le Moal
Acked-by: Song Liu
---
 drivers/md/raid1-10.c | 11 ++++++-----
 drivers/md/raid10.c   | 20 ++++++++++----------
 2 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.c
index e61f6cad4e08..cd349e69ed77 100644
--- a/drivers/md/raid1-10.c
+++ b/drivers/md/raid1-10.c
@@ -101,11 +101,12 @@ static void md_bio_reset_resync_pages(struct bio *bio, struct resync_pages *rp,
 		struct page *page = resync_fetch_page(rp, idx);
 		int len = min_t(int, size, PAGE_SIZE);
 
-		/*
-		 * won't fail because the vec table is big
-		 * enough to hold all these pages
-		 */
-		bio_add_page(bio, page, len, 0);
+		if (WARN_ON(!bio_add_page(bio, page, len, 0))) {
+			bio->bi_status = BLK_STS_RESOURCE;
+			bio_endio(bio);
+			return;
+		}
+
 		size -= len;
 	} while (idx++ < RESYNC_PAGES && size > 0);
 }
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 6c66357f92f5..59e52cf01569 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3804,11 +3804,11 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
 		for (bio= biolist ; bio ; bio=bio->bi_next) {
 			struct resync_pages *rp = get_resync_pages(bio);
 			page = resync_fetch_page(rp, page_idx);
-			/*
-			 * won't fail because the vec table is big enough
-			 * to hold all these pages
-			 */
-			bio_add_page(bio, page, len, 0);
+			if (WARN_ON(!bio_add_page(bio, page, len, 0))) {
+				bio->bi_status = BLK_STS_RESOURCE;
+				bio_endio(bio);
+				goto giveup;
+			}
 		}
 		nr_sectors += len>>9;
 		sector_nr += len>>9;
@@ -4985,11 +4985,11 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr,
 		if (len > PAGE_SIZE)
 			len = PAGE_SIZE;
 		for (bio = blist; bio ; bio = bio->bi_next) {
-			/*
-			 * won't fail because the vec table is big enough
-			 * to hold all these pages
-			 */
-			bio_add_page(bio, page, len, 0);
+			if (WARN_ON(!bio_add_page(bio, page, len, 0))) {
+				bio->bi_status = BLK_STS_RESOURCE;
+				bio_endio(bio);
+				return sectors_done;
+			}
 		}
 		sector_nr += len >> 9;
 		nr_sectors += len >> 9;
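
For readers unfamiliar with the pattern, the sketch below is illustrative only
and not part of the patch (the helper name is made up). It shows what a caller
of a __must_check bio_add_page() has to do once it can no longer ignore the
return value: treat a failed add as a resource error and complete the bio
instead of letting it go out partially filled.

/* Illustrative sketch only -- not part of this patch. */
static void fill_bio_or_fail(struct bio *bio, struct page *page,
			     unsigned int len, unsigned int offset)
{
	/*
	 * bio_add_page() returns the number of bytes added, or 0 if the
	 * bio's bvec table is already full. Marking it __must_check forces
	 * callers to look at that result.
	 */
	if (WARN_ON(!bio_add_page(bio, page, len, offset))) {
		bio->bi_status = BLK_STS_RESOURCE;	/* report the failure */
		bio_endio(bio);				/* complete the bio with the error */
	}
}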