From patchwork Mon Aug 14 13:53:54 2023
From: Xueshi Hu
To: yukuai3@huawei.com, song@kernel.org, dan.j.williams@intel.com, neilb@suse.de, akpm@linux-foundation.org, neilb@suse.com
Cc: linux-raid@vger.kernel.org, Xueshi Hu
Subject: [PATCH v6 1/3] md/raid1: call free_r1bio() before allow_barrier()
Date: Mon, 14 Aug 2023 21:53:54 +0800
Message-Id: <20230814135356.1113639-2-xueshi.hu@smartx.com>
In-Reply-To: <20230814135356.1113639-1-xueshi.hu@smartx.com>
References: <20230814135356.1113639-1-xueshi.hu@smartx.com>

After allow_barrier(), a concurrent raid1_reshape() can replace the old
mempool and r1conf::raid_disks, both of which are needed when freeing
the r1bio. Change the execution order of free_r1bio() and
allow_barrier() so that the kernel can free the r1bio safely.

Reviewed-by: Yu Kuai
Signed-off-by: Xueshi Hu
---
 drivers/md/raid1.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index dd25832eb045..dbbee0c14a5b 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -313,6 +313,7 @@ static void raid_end_bio_io(struct r1bio *r1_bio)
 {
 	struct bio *bio = r1_bio->master_bio;
 	struct r1conf *conf = r1_bio->mddev->private;
+	sector_t sector = r1_bio->sector;
 
 	/* if nobody has done the final endio yet, do it now */
 	if (!test_and_set_bit(R1BIO_Returned, &r1_bio->state)) {
@@ -323,13 +324,13 @@ static void raid_end_bio_io(struct r1bio *r1_bio)
 
 		call_bio_endio(r1_bio);
 	}
+
+	free_r1bio(r1_bio);
 	/*
 	 * Wake up any possible resync thread that waits for the device
 	 * to go idle.  All I/Os, even write-behind writes, are done.
 	 */
-	allow_barrier(conf, r1_bio->sector);
-
-	free_r1bio(r1_bio);
+	allow_barrier(conf, sector);
 }
 
 /*
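To make the ordering easier to see outside the md code, here is a minimal
userspace sketch of the pattern this patch enforces: the object that
remembers which pool it came from is freed while the barrier is still held,
so a waiting "reshape" cannot swap the pool underneath the free. All names
below (fake_r1bio, free_fake_r1bio, the pthread-based barrier) are invented
stand-ins, not the real md/raid1 implementation.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  idle = PTHREAD_COND_INITIALIZER;
static int nr_pending = 1;              /* one write still in flight */
static void *pool = (void *)0x1;        /* stands in for conf's mempool */

struct fake_r1bio { void *from_pool; }; /* remembers its allocation pool */

static void allow_barrier(void)
{
	pthread_mutex_lock(&lock);
	nr_pending--;
	pthread_cond_signal(&idle);     /* lets a waiting "reshape" proceed */
	pthread_mutex_unlock(&lock);
}

static void free_fake_r1bio(struct fake_r1bio *r1_bio)
{
	pthread_mutex_lock(&lock);
	if (r1_bio->from_pool != pool)  /* freeing needs the original pool */
		fprintf(stderr, "pool was swapped before the free!\n");
	pthread_mutex_unlock(&lock);
	free(r1_bio);
}

static void *reshape(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	while (nr_pending > 0)          /* like freeze_array(): wait for idle */
		pthread_cond_wait(&idle, &lock);
	pool = (void *)0x2;             /* the "reshape" installs a new pool */
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;
	struct fake_r1bio *r1_bio = malloc(sizeof(*r1_bio));

	r1_bio->from_pool = pool;
	pthread_create(&t, NULL, reshape, NULL);

	free_fake_r1bio(r1_bio);        /* patched order: free under the barrier */
	allow_barrier();                /* only now may the pool be swapped */

	pthread_join(t, NULL);
	return 0;
}

With the calls in main() reversed (the pre-patch order), the reshape thread
could swap the pool before the free runs, which is the window the patch
closes.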
From patchwork Mon Aug 14 13:53:55 2023
From: Xueshi Hu
To: yukuai3@huawei.com, song@kernel.org, dan.j.williams@intel.com, neilb@suse.de, akpm@linux-foundation.org, neilb@suse.com
Cc: linux-raid@vger.kernel.org, Xueshi Hu
Subject: [PATCH v6 2/3] md/raid1: free the r1bio firstly before waiting for blocked rdev
Date: Mon, 14 Aug 2023 21:53:55 +0800
Message-Id: <20230814135356.1113639-3-xueshi.hu@smartx.com>
In-Reply-To: <20230814135356.1113639-1-xueshi.hu@smartx.com>
References: <20230814135356.1113639-1-xueshi.hu@smartx.com>

A raid1 reshape replaces the mempool and changes r1conf::raid_disks,
both of which are needed when freeing an r1bio, and allow_barrier()
makes a concurrent raid1_reshape() possible. So free the in-flight
r1bio first, before waiting for the blocked rdev.
Fixes: 6bfe0b499082 ("md: support blocking writes to an array on device failure")
Reviewed-by: Yu Kuai
Signed-off-by: Xueshi Hu
---
 drivers/md/raid1.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index dbbee0c14a5b..5f17f30a00a9 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1374,6 +1374,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 		return;
 	}
 
+ retry_write:
 	r1_bio = alloc_r1bio(mddev, bio);
 	r1_bio->sectors = max_write_sectors;
 
@@ -1389,7 +1390,6 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 	 */
 	disks = conf->raid_disks * 2;
 
- retry_write:
 	blocked_rdev = NULL;
 	rcu_read_lock();
 	max_sectors = r1_bio->sectors;
@@ -1469,7 +1469,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 		for (j = 0; j < i; j++)
 			if (r1_bio->bios[j])
 				rdev_dec_pending(conf->mirrors[j].rdev, mddev);
-		r1_bio->state = 0;
+		free_r1bio(r1_bio);
 		allow_barrier(conf, bio->bi_iter.bi_sector);
 
 		if (bio->bi_opf & REQ_NOWAIT) {
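For illustration only: the retry_write label moves above alloc_r1bio()
because the r1bio is now freed on the blocked-rdev path, so every retry has
to allocate a fresh one against whatever configuration exists once the
barrier is re-taken. A standalone sketch of that pattern, with invented
names and a trivial stand-in for the barrier (not the md/raid1 API), might
look like:

#include <stdio.h>
#include <stdlib.h>

struct config { int disks; };
struct request { const struct config *cfg; };

static struct config cfg_a = { 2 }, cfg_b = { 4 };
static const struct config *active = &cfg_a;   /* stands in for conf */
static int blocked = 1;                        /* one rdev is blocked */

static void barrier_drop(void) { active = &cfg_b; } /* a "reshape" runs here */
static void barrier_take(void) { blocked = 0; }     /* rdev became usable */

int main(void)
{
	struct request *rq;
retry:
	rq = malloc(sizeof(*rq));
	rq->cfg = active;               /* tied to the configuration seen now */

	if (blocked) {
		free(rq);               /* free before the barrier is dropped */
		barrier_drop();         /* the configuration may change here */
		barrier_take();
		goto retry;             /* allocate again under the new config */
	}

	printf("submitted against %d disks\n", rq->cfg->disks);
	free(rq);
	return 0;
}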
From patchwork Mon Aug 14 13:53:56 2023
From: Xueshi Hu
To: yukuai3@huawei.com, song@kernel.org, dan.j.williams@intel.com, neilb@suse.de, akpm@linux-foundation.org, neilb@suse.com
Cc: linux-raid@vger.kernel.org, Xueshi Hu
Subject: [PATCH v6 3/3] md/raid1: keep the barrier held until handle_read_error() finished
Date: Mon, 14 Aug 2023 21:53:56 +0800
Message-Id: <20230814135356.1113639-4-xueshi.hu@smartx.com>
In-Reply-To: <20230814135356.1113639-1-xueshi.hu@smartx.com>
References: <20230814135356.1113639-1-xueshi.hu@smartx.com>

handle_read_error() calls allow_barrier() to match the barrier raised
earlier. However, allow_barrier() should be called at the end, so that a
concurrent raid1 reshape cannot run while the r1bio is still being
reused.

Fixes: 689389a06ce7 ("md/raid1: simplify handle_read_error().")
Reviewed-by: Yu Kuai
Signed-off-by: Xueshi Hu
---
 drivers/md/raid1.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 5f17f30a00a9..5a5eb5f1a224 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2499,6 +2499,7 @@ static void handle_read_error(struct r1conf *conf, struct r1bio *r1_bio)
 	struct mddev *mddev = conf->mddev;
 	struct bio *bio;
 	struct md_rdev *rdev;
+	sector_t sector;
 
 	clear_bit(R1BIO_ReadError, &r1_bio->state);
 	/* we got a read error. Maybe the drive is bad.  Maybe just
@@ -2528,12 +2529,13 @@ static void handle_read_error(struct r1conf *conf, struct r1bio *r1_bio)
 	}
 
 	rdev_dec_pending(rdev, conf->mddev);
-	allow_barrier(conf, r1_bio->sector);
+	sector = r1_bio->sector;
 	bio = r1_bio->master_bio;
 
 	/* Reuse the old r1_bio so that the IO_BLOCKED settings are preserved */
 	r1_bio->state = 0;
 	raid1_read_request(mddev, bio, r1_bio->sectors, r1_bio);
+	allow_barrier(conf, sector);
 }
 
 static void raid1d(struct md_thread *thread)