From patchwork Tue Aug 27 15:35:34 2024
X-Patchwork-Submitter: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
X-Patchwork-Id: 13779731
From: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
To: song@kernel.org
Cc: linux-raid@vger.kernel.org, paul.e.luse@intel.com
Subject: [PATCH 1/3] md/raid5: use wait_on_bit() for R5_Overlap
Date: Tue, 27 Aug 2024 17:35:34 +0200
Message-ID: <20240827153536.6743-2-artur.paszkiewicz@intel.com>
In-Reply-To: <20240827153536.6743-1-artur.paszkiewicz@intel.com>
References: <20240827153536.6743-1-artur.paszkiewicz@intel.com>

Convert the uses of the wait_for_overlap wait queue that are paired with
the R5_Overlap bit to wait_on_bit() / wake_up_bit().

Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
---
 drivers/md/raid5-cache.c |  6 +---
 drivers/md/raid5.c       | 60 +++++++++++++++++++---------------------
 2 files changed, 30 insertions(+), 36 deletions(-)

diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index 874874fe4fa1..d0698852c570 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -2798,7 +2798,6 @@ void r5c_finish_stripe_write_out(struct r5conf *conf,
 {
	struct r5l_log *log = READ_ONCE(conf->log);
	int i;
-	int do_wakeup = 0;
	sector_t tree_index;
	void __rcu **pslot;
	uintptr_t refcount;
@@ -2815,7 +2814,7 @@ void r5c_finish_stripe_write_out(struct r5conf *conf,
	for (i = sh->disks; i--; ) {
		clear_bit(R5_InJournal, &sh->dev[i].flags);
		if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags))
-			do_wakeup = 1;
+			wake_up_bit(&sh->dev[i].flags, R5_Overlap);
	}

	/*
@@ -2828,9 +2827,6 @@ void r5c_finish_stripe_write_out(struct r5conf *conf,
		if (atomic_dec_and_test(&conf->pending_full_writes))
			md_wakeup_thread(conf->mddev->thread);

-	if (do_wakeup)
-		wake_up(&conf->wait_for_overlap);
-
	spin_lock_irq(&log->stripe_in_journal_lock);
	list_del_init(&sh->r5c);
	spin_unlock_irq(&log->stripe_in_journal_lock);
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index c14cf2410365..1e7dcb6235c7 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -2337,7 +2337,7 @@ static void raid_run_ops(struct stripe_head *sh, unsigned long ops_request)
		for (i = disks; i--; ) {
			struct r5dev *dev = &sh->dev[i];
			if (test_and_clear_bit(R5_Overlap, &dev->flags))
-				wake_up(&sh->raid_conf->wait_for_overlap);
+				wake_up_bit(&dev->flags, R5_Overlap);
		}
	}
	local_unlock(&conf->percpu->lock);
@@ -3473,7 +3473,7 @@ static bool stripe_bio_overlaps(struct stripe_head *sh, struct bio *bi,
		 * With PPL only writes to consecutive data chunks within a
		 * stripe are allowed because for a single stripe_head we can
		 * only have one PPL entry at a time, which describes one data
-		 * range. Not really an overlap, but wait_for_overlap can be
+		 * range. Not really an overlap, but R5_Overlap can be
		 * used to handle this.
		 */
		sector_t sector;
@@ -3652,7 +3652,7 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
		log_stripe_write_finished(sh);

		if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags))
-			wake_up(&conf->wait_for_overlap);
+			wake_up_bit(&sh->dev[i].flags, R5_Overlap);

		while (bi && bi->bi_iter.bi_sector <
			sh->dev[i].sector + RAID5_STRIPE_SECTORS(conf)) {
@@ -3696,7 +3696,7 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
		sh->dev[i].toread = NULL;
		spin_unlock_irq(&sh->stripe_lock);
		if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags))
-			wake_up(&conf->wait_for_overlap);
+			wake_up_bit(&sh->dev[i].flags, R5_Overlap);
		if (bi)
			s->to_read--;
		while (bi && bi->bi_iter.bi_sector <
@@ -3734,7 +3734,7 @@ handle_failed_sync(struct r5conf *conf, struct stripe_head *sh,
	BUG_ON(sh->batch_head);
	clear_bit(STRIPE_SYNCING, &sh->state);
	if (test_and_clear_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags))
-		wake_up(&conf->wait_for_overlap);
+		wake_up_bit(&sh->dev[sh->pd_idx].flags, R5_Overlap);
	s->syncing = 0;
	s->replacing = 0;
	/* There is nothing more to do for sync/check/repair.
@@ -4875,7 +4875,6 @@ static void break_stripe_batch_list(struct stripe_head *head_sh,
 {
	struct stripe_head *sh, *next;
	int i;
-	int do_wakeup = 0;

	list_for_each_entry_safe(sh, next, &head_sh->batch_list, batch_list) {

@@ -4911,7 +4910,7 @@ static void break_stripe_batch_list(struct stripe_head *head_sh,
		spin_unlock_irq(&sh->stripe_lock);
		for (i = 0; i < sh->disks; i++) {
			if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags))
-				do_wakeup = 1;
+				wake_up_bit(&sh->dev[i].flags, R5_Overlap);
			sh->dev[i].flags = head_sh->dev[i].flags &
				(~((1 << R5_WriteError) | (1 << R5_Overlap)));
		}
@@ -4925,12 +4924,9 @@ static void break_stripe_batch_list(struct stripe_head *head_sh,
	spin_unlock_irq(&head_sh->stripe_lock);
	for (i = 0; i < head_sh->disks; i++)
		if (test_and_clear_bit(R5_Overlap, &head_sh->dev[i].flags))
-			do_wakeup = 1;
+			wake_up_bit(&head_sh->dev[i].flags, R5_Overlap);
	if (head_sh->state & handle_flags)
		set_bit(STRIPE_HANDLE, &head_sh->state);
-
-	if (do_wakeup)
-		wake_up(&head_sh->raid_conf->wait_for_overlap);
 }

 static void handle_stripe(struct stripe_head *sh)
@@ -5196,7 +5192,7 @@ static void handle_stripe(struct stripe_head *sh)
			md_done_sync(conf->mddev, RAID5_STRIPE_SECTORS(conf), 1);
		clear_bit(STRIPE_SYNCING, &sh->state);
		if (test_and_clear_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags))
-			wake_up(&conf->wait_for_overlap);
+			wake_up_bit(&sh->dev[sh->pd_idx].flags, R5_Overlap);
	}

	/* If the failed drives are just a ReadError, then we might need
@@ -5753,12 +5749,11 @@ static void make_discard_request(struct mddev *mddev, struct bio *bi)
		int d;
	again:
		sh = raid5_get_active_stripe(conf, NULL, logical_sector, 0);
-		prepare_to_wait(&conf->wait_for_overlap, &w,
-				TASK_UNINTERRUPTIBLE);
		set_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags);
		if (test_bit(STRIPE_SYNCING, &sh->state)) {
			raid5_release_stripe(sh);
-			schedule();
+			wait_on_bit(&sh->dev[sh->pd_idx].flags, R5_Overlap,
+				    TASK_UNINTERRUPTIBLE);
			goto again;
		}
		clear_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags);
@@ -5770,12 +5765,12 @@ static void make_discard_request(struct mddev *mddev, struct bio *bi)
				set_bit(R5_Overlap, &sh->dev[d].flags);
				spin_unlock_irq(&sh->stripe_lock);
				raid5_release_stripe(sh);
-				schedule();
+				wait_on_bit(&sh->dev[d].flags, R5_Overlap,
+					    TASK_UNINTERRUPTIBLE);
				goto again;
			}
		}
		set_bit(STRIPE_DISCARD, &sh->state);
-		finish_wait(&conf->wait_for_overlap, &w);
		sh->overwrite_disks = 0;
		for (d = 0; d < conf->raid_disks; d++) {
			if (d == sh->pd_idx || d == sh->qd_idx)
@@ -5855,7 +5850,6 @@ static int add_all_stripe_bios(struct r5conf *conf,
		struct bio *bi, int forwrite, int previous)
 {
	int dd_idx;
-	int ret = 1;

	spin_lock_irq(&sh->stripe_lock);

@@ -5871,14 +5865,19 @@ static int add_all_stripe_bios(struct r5conf *conf,

		if (stripe_bio_overlaps(sh, bi, dd_idx, forwrite)) {
			set_bit(R5_Overlap, &dev->flags);
-			ret = 0;
-			continue;
+			spin_unlock_irq(&sh->stripe_lock);
+			raid5_release_stripe(sh);
+			/* release batch_last before wait to avoid risk of deadlock */
+			if (ctx->batch_last) {
+				raid5_release_stripe(ctx->batch_last);
+				ctx->batch_last = NULL;
+			}
+			md_wakeup_thread(conf->mddev->thread);
+			wait_on_bit(&dev->flags, R5_Overlap, TASK_UNINTERRUPTIBLE);
+			return 0;
		}
	}

-	if (!ret)
-		goto out;
-
	for (dd_idx = 0; dd_idx < sh->disks; dd_idx++) {
		struct r5dev *dev = &sh->dev[dd_idx];

@@ -5894,9 +5893,8 @@ static int add_all_stripe_bios(struct r5conf *conf,
				 RAID5_STRIPE_SHIFT(conf), ctx->sectors_to_do);
	}

-out:
	spin_unlock_irq(&sh->stripe_lock);
-	return ret;
+	return 1;
 }

 enum reshape_loc {
@@ -5992,17 +5990,17 @@ static enum stripe_result make_stripe_request(struct mddev *mddev,
		goto out_release;
	}

-	if (test_bit(STRIPE_EXPANDING, &sh->state) ||
-	    !add_all_stripe_bios(conf, ctx, sh, bi, rw, previous)) {
-		/*
-		 * Stripe is busy expanding or add failed due to
-		 * overlap. Flush everything and wait a while.
-		 */
+	if (test_bit(STRIPE_EXPANDING, &sh->state)) {
		md_wakeup_thread(mddev->thread);
		ret = STRIPE_SCHEDULE_AND_RETRY;
		goto out_release;
	}

+	if (!add_all_stripe_bios(conf, ctx, sh, bi, rw, previous)) {
+		ret = STRIPE_RETRY;
+		goto out;
+	}
+
	if (stripe_can_batch(sh)) {
		stripe_add_to_batch_list(conf, sh, ctx->batch_last);
		if (ctx->batch_last)
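
The conversion above boils down to the kernel's bit-wait pattern,
sketched below. This is a minimal standalone illustration, not raid5
code: BUSY_BIT and busy_word are made-up stand-ins for R5_Overlap and
sh->dev[i].flags. wait_on_bit() sleeps on a shared bit-wait queue
hashed from the word's address, so no dedicated wait_queue_head_t is
needed; the waker must clear the bit before calling wake_up_bit(),
which the atomic test_and_clear_bit() takes care of here:

#include <linux/bitops.h>
#include <linux/sched.h>
#include <linux/wait_bit.h>

#define BUSY_BIT	0		/* stand-in for R5_Overlap */
static unsigned long busy_word;		/* stand-in for sh->dev[i].flags */

/* Waiter side: flag the conflict, then sleep until the bit is clear.
 * wait_on_bit() returns immediately if the bit is already clear. */
static void waiter(void)
{
	set_bit(BUSY_BIT, &busy_word);
	wait_on_bit(&busy_word, BUSY_BIT, TASK_UNINTERRUPTIBLE);
}

/* Waker side: test_and_clear_bit() is fully ordered, so the clear is
 * visible before the wakeup; only wake if someone set the bit. */
static void waker(void)
{
	if (test_and_clear_bit(BUSY_BIT, &busy_word))
		wake_up_bit(&busy_word, BUSY_BIT);
}
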
From patchwork Tue Aug 27 15:35:35 2024
X-Patchwork-Submitter: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
X-Patchwork-Id: 13779730
From: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
To: song@kernel.org
Cc: linux-raid@vger.kernel.org, paul.e.luse@intel.com
Subject: [PATCH 2/3] md/raid5: only add to wq if reshape is in progress
Date: Tue, 27 Aug 2024 17:35:35 +0200
Message-ID: <20240827153536.6743-3-artur.paszkiewicz@intel.com>
In-Reply-To: <20240827153536.6743-1-artur.paszkiewicz@intel.com>
References: <20240827153536.6743-1-artur.paszkiewicz@intel.com>

Now that actual overlaps are no longer handled on the wait_for_overlap
wq, the only remaining cases where we wait on this wq are related to
reshape. If reshape is not in progress, don't add to the wq in
raid5_make_request(), because the add_wait_queue() /
remove_wait_queue() operations take a spinlock and cause noticeable
contention when multiple threads are submitting requests to the mddev.

Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
---
 drivers/md/raid5.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 1e7dcb6235c7..8e74c455a87d 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -6071,6 +6071,7 @@ static sector_t raid5_bio_lowest_chunk_sector(struct r5conf *conf,
 static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
 {
	DEFINE_WAIT_FUNC(wait, woken_wake_function);
+	bool on_wq;
	struct r5conf *conf = mddev->private;
	sector_t logical_sector;
	struct stripe_request_ctx ctx = {};
@@ -6144,11 +6145,15 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
	 * sequential IO pattern. We don't bother with the optimization when
	 * reshaping as the performance benefit is not worth the complexity.
	 */
-	if (likely(conf->reshape_progress == MaxSector))
+	if (likely(conf->reshape_progress == MaxSector)) {
		logical_sector = raid5_bio_lowest_chunk_sector(conf, bi);
+		on_wq = false;
+	} else {
+		add_wait_queue(&conf->wait_for_overlap, &wait);
+		on_wq = true;
+	}
	s = (logical_sector - ctx.first_sector) >> RAID5_STRIPE_SHIFT(conf);

-	add_wait_queue(&conf->wait_for_overlap, &wait);
	while (1) {
		res = make_stripe_request(mddev, conf, &ctx, logical_sector,
					  bi);
@@ -6159,6 +6164,7 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
			continue;

		if (res == STRIPE_SCHEDULE_AND_RETRY) {
+			WARN_ON_ONCE(!on_wq);
			/*
			 * Must release the reference to batch_last before
			 * scheduling and waiting for work to be done,
@@ -6183,7 +6189,8 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
		logical_sector = ctx.first_sector +
			(s << RAID5_STRIPE_SHIFT(conf));
	}
-	remove_wait_queue(&conf->wait_for_overlap, &wait);
+	if (unlikely(on_wq))
+		remove_wait_queue(&conf->wait_for_overlap, &wait);

	if (ctx.batch_last)
		raid5_release_stripe(ctx.batch_last);
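
For context, the change reduces to the pattern below: a standalone
sketch, not the actual raid5 code. reshape_in_progress() and
try_submit() are hypothetical placeholders for the
conf->reshape_progress check and make_stripe_request(). Since
add_wait_queue() and remove_wait_queue() both take wq->lock, skipping
registration on the common no-reshape path removes that shared lock
from every bio submission:

#include <linux/bug.h>
#include <linux/sched.h>
#include <linux/wait.h>

static bool reshape_in_progress(void);	/* hypothetical placeholder */
static bool try_submit(void);		/* hypothetical; false = wait and retry */

static void submit_loop(wait_queue_head_t *wq)
{
	DEFINE_WAIT_FUNC(wait, woken_wake_function);
	bool on_wq = false;

	/* Register only when a reshape can actually make us wait. */
	if (reshape_in_progress()) {
		add_wait_queue(wq, &wait);
		on_wq = true;
	}

	while (!try_submit()) {
		WARN_ON_ONCE(!on_wq);	/* only the reshape path may wait */
		wait_woken(&wait, TASK_UNINTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);
	}

	if (on_wq)
		remove_wait_queue(wq, &wait);
}
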
From patchwork Tue Aug 27 15:35:36 2024
X-Patchwork-Submitter: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
X-Patchwork-Id: 13779732
From: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
To: song@kernel.org
Cc: linux-raid@vger.kernel.org, paul.e.luse@intel.com
Subject: [PATCH 3/3] md/raid5: rename wait_for_overlap to wait_for_reshape
Date: Tue, 27 Aug 2024 17:35:36 +0200
Message-ID: <20240827153536.6743-4-artur.paszkiewicz@intel.com>
In-Reply-To: <20240827153536.6743-1-artur.paszkiewicz@intel.com>
References: <20240827153536.6743-1-artur.paszkiewicz@intel.com>

The only remaining uses of wait_for_overlap are related to reshape, so
rename it accordingly.

Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
---
 drivers/md/raid5.c | 26 +++++++++++++-------------
 drivers/md/raid5.h |  2 +-
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 8e74c455a87d..7def36cfb408 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -5255,7 +5255,7 @@ static void handle_stripe(struct stripe_head *sh)
	} else if (s.expanded && !sh->reconstruct_state && s.locked == 0) {
		clear_bit(STRIPE_EXPAND_READY, &sh->state);
		atomic_dec(&conf->reshape_stripes);
-		wake_up(&conf->wait_for_overlap);
+		wake_up(&conf->wait_for_reshape);
		md_done_sync(conf->mddev, RAID5_STRIPE_SECTORS(conf), 1);
	}

@@ -6149,7 +6149,7 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
		logical_sector = raid5_bio_lowest_chunk_sector(conf, bi);
		on_wq = false;
	} else {
-		add_wait_queue(&conf->wait_for_overlap, &wait);
+		add_wait_queue(&conf->wait_for_reshape, &wait);
		on_wq = true;
	}
	s = (logical_sector - ctx.first_sector) >> RAID5_STRIPE_SHIFT(conf);
@@ -6190,7 +6190,7 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
			(s << RAID5_STRIPE_SHIFT(conf));
	}
	if (unlikely(on_wq))
-		remove_wait_queue(&conf->wait_for_overlap, &wait);
+		remove_wait_queue(&conf->wait_for_reshape, &wait);

	if (ctx.batch_last)
		raid5_release_stripe(ctx.batch_last);
@@ -6343,7 +6343,7 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk
	    : (safepos < writepos && readpos > writepos)) ||
	    time_after(jiffies, conf->reshape_checkpoint + 10*HZ)) {
		/* Cannot proceed until we've updated the superblock...
		 */
-		wait_event(conf->wait_for_overlap,
+		wait_event(conf->wait_for_reshape,
			   atomic_read(&conf->reshape_stripes)==0
			   || test_bit(MD_RECOVERY_INTR, &mddev->recovery));
		if (atomic_read(&conf->reshape_stripes) != 0)
@@ -6369,7 +6369,7 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk
		spin_lock_irq(&conf->device_lock);
		conf->reshape_safe = mddev->reshape_position;
		spin_unlock_irq(&conf->device_lock);
-		wake_up(&conf->wait_for_overlap);
+		wake_up(&conf->wait_for_reshape);
		sysfs_notify_dirent_safe(mddev->sysfs_completed);
	}

@@ -6452,7 +6452,7 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk
	    (sector_nr - mddev->curr_resync_completed) * 2
	    >= mddev->resync_max - mddev->curr_resync_completed) {
		/* Cannot proceed until we've updated the superblock... */
-		wait_event(conf->wait_for_overlap,
+		wait_event(conf->wait_for_reshape,
			   atomic_read(&conf->reshape_stripes) == 0
			   || test_bit(MD_RECOVERY_INTR, &mddev->recovery));
		if (atomic_read(&conf->reshape_stripes) != 0)
@@ -6478,7 +6478,7 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk
		spin_lock_irq(&conf->device_lock);
		conf->reshape_safe = mddev->reshape_position;
		spin_unlock_irq(&conf->device_lock);
-		wake_up(&conf->wait_for_overlap);
+		wake_up(&conf->wait_for_reshape);
		sysfs_notify_dirent_safe(mddev->sysfs_completed);
	}
 ret:
@@ -6513,7 +6513,7 @@ static inline sector_t raid5_sync_request(struct mddev *mddev, sector_t sector_n
	}

	/* Allow raid5_quiesce to complete */
-	wait_event(conf->wait_for_overlap, conf->quiesce != 2);
+	wait_event(conf->wait_for_reshape, conf->quiesce != 2);

	if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
		return reshape_request(mddev, sector_nr, skipped);
@@ -7497,7 +7497,7 @@ static struct r5conf *setup_conf(struct mddev *mddev)

	init_waitqueue_head(&conf->wait_for_quiescent);
	init_waitqueue_head(&conf->wait_for_stripe);
-	init_waitqueue_head(&conf->wait_for_overlap);
+	init_waitqueue_head(&conf->wait_for_reshape);
	INIT_LIST_HEAD(&conf->handle_list);
	INIT_LIST_HEAD(&conf->loprio_list);
	INIT_LIST_HEAD(&conf->hold_list);
@@ -8555,7 +8555,7 @@ static void end_reshape(struct r5conf *conf)
			    !test_bit(In_sync, &rdev->flags))
				rdev->recovery_offset = MaxSector;
		spin_unlock_irq(&conf->device_lock);
-		wake_up(&conf->wait_for_overlap);
+		wake_up(&conf->wait_for_reshape);

		mddev_update_io_opt(conf->mddev,
				    conf->raid_disks - conf->max_degraded);
@@ -8619,13 +8619,13 @@ static void raid5_quiesce(struct mddev *mddev, int quiesce)
		conf->quiesce = 1;
		unlock_all_device_hash_locks_irq(conf);
		/* allow reshape to continue */
-		wake_up(&conf->wait_for_overlap);
+		wake_up(&conf->wait_for_reshape);
	} else {
		/* re-enable writes */
		lock_all_device_hash_locks_irq(conf);
		conf->quiesce = 0;
		wake_up(&conf->wait_for_quiescent);
-		wake_up(&conf->wait_for_overlap);
+		wake_up(&conf->wait_for_reshape);
		unlock_all_device_hash_locks_irq(conf);
	}
	log_quiesce(conf, quiesce);
@@ -8944,7 +8944,7 @@ static void raid5_prepare_suspend(struct mddev *mddev)
 {
	struct r5conf *conf = mddev->private;

-	wake_up(&conf->wait_for_overlap);
+	wake_up(&conf->wait_for_reshape);
 }

 static struct md_personality raid6_personality =
diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
index 9b5a7dc3f2a0..896ecfc4afa6 100644
--- a/drivers/md/raid5.h
+++ b/drivers/md/raid5.h
@@ -668,7 +668,7 @@ struct r5conf {
	struct llist_head	released_stripes;
	wait_queue_head_t	wait_for_quiescent;
	wait_queue_head_t	wait_for_stripe;
-	wait_queue_head_t	wait_for_overlap;
+	wait_queue_head_t	wait_for_reshape;
	unsigned long		cache_state;
	struct shrinker		*shrinker;
	int			pool_size; /* number of disks in stripeheads in pool */