From patchwork Wed Aug 16 18:13:55 2023
X-Patchwork-Submitter: David Jeffery
X-Patchwork-Id: 13355556
From: David Jeffery
To: Song Liu, linux-raid@vger.kernel.org
Cc: David Jeffery, Laurence Oberman, Yu Kuai
Subject: [PATCH v2] md: raid0: account for split bio in iostat accounting
Date: Wed, 16 Aug 2023 14:13:55 -0400
Message-ID: <20230816181433.13289-1-djeffery@redhat.com>

When a bio is split by md raid0, the newly created bio is not tracked
by md for I/O accounting. Only the portion of the I/O still assigned to
the original bio, reduced by the split, is accounted for. This results
in md iostat data sometimes showing I/O values far below the actual
amount of data going through md. md_account_bio() needs to be called
for every bio generated by the split.

A simple demonstration of the issue used a raid0 device built from
partitions of a single disk. Since all raid0 I/O then goes to one
device, the gap between the md device and its sd storage is easy to
see.
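For reference, the split path looks roughly like the simplified sketch
below (an illustration, not the exact md-next source; the flow differs
slightly between kernel versions). The key point is that bio_split()
allocates the new front piece from &mddev->bio_set, which is exactly
what the old bi_pool test in raid0_map_submit_bio() filtered out, so
that piece was never accounted:

	/*
	 * Simplified sketch of the chunk-boundary split in
	 * raid0_make_request().  "sectors" is the room left in the
	 * chunk that the bio starts in.
	 */
	if (sectors < bio_sectors(bio)) {
		/* New front-piece bio, allocated from &mddev->bio_set. */
		struct bio *split = bio_split(bio, sectors, GFP_NOIO,
					      &mddev->bio_set);

		bio_chain(split, bio);
		raid0_map_submit_bio(mddev, bio);  /* remainder: accounted  */
		bio = split;                       /* front piece: skipped
						    * by the bi_pool check */
	}
	raid0_map_submit_bio(mddev, bio);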
Reading from an lvm device on top of the md device, the iostat output
(some zero columns and unrelated devices removed to keep the data
compact) was:

Device       tps    kB_read/s    kB_wrtn/s    kB_dscd/s    kB_read
md2         0.00         0.00         0.00         0.00          0
sde         0.00         0.00         0.00         0.00          0

md2      1364.00    411496.00         0.00         0.00     411496
sde      1734.00    646144.00         0.00         0.00     646144

md2      1699.00    510680.00         0.00         0.00     510680
sde      2155.00    802784.00         0.00         0.00     802784

md2       803.00    241480.00         0.00         0.00     241480
sde      1016.00    377888.00         0.00         0.00     377888

md2         0.00         0.00         0.00         0.00          0
sde         0.00         0.00         0.00         0.00          0

The I/O was generated with dd doing large (12M) direct I/O reads from a
linear lvm volume on top of the 4-leg raid0 device. The md2 reads showed
up as roughly 2/3 of the reads to the sde device containing all of md2's
raid partitions. The sum of reads to sde was 1826816 kB
(646144 + 802784 + 377888), which was the expected amount, matching
what dd read. With the patch, the total reads from md match the reads
from sde and are consistent with the amount of I/O generated.

Fixes: 10764815ff47 ("md: add io accounting for raid0 and raid5")
Signed-off-by: David Jeffery
Tested-by: Laurence Oberman
Reviewed-by: Laurence Oberman
Reviewed-by: Yu Kuai
---
Patch v2: ported to apply on top of md-next

 drivers/md/raid0.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index abbd77977f98..c50a7abda744 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -553,8 +553,7 @@ static void raid0_map_submit_bio(struct mddev *mddev, struct bio *bio)
 	sector_t bio_sector = bio->bi_iter.bi_sector;
 	sector_t sector = bio_sector;
 
-	if (bio->bi_pool != &mddev->bio_set)
-		md_account_bio(mddev, &bio);
+	md_account_bio(mddev, &bio);
 
 	zone = find_zone(mddev->private, &sector);
 	switch (conf->layout) {
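As an aside, the md-vs-member comparison above can also be reproduced
without iostat by reading the cumulative sectors-read counters from
/proc/diskstats. The program below is a minimal standalone sketch of
that check; the device names "md2" and "sde" come from this particular
test setup and are placeholders for anything else:

/*
 * Print cumulative kB read for two block devices, taken from the
 * sectors-read field (field 3 after the device name) in /proc/diskstats.
 */
#include <stdio.h>
#include <string.h>

static unsigned long long kb_read(const char *dev)
{
	FILE *f = fopen("/proc/diskstats", "r");
	char line[512], name[64];
	unsigned int major, minor;
	unsigned long long reads, merged, sectors, result = 0;

	if (!f)
		return 0;

	while (fgets(line, sizeof(line), f)) {
		/* Fields: major minor name reads merged sectors ... */
		if (sscanf(line, "%u %u %63s %llu %llu %llu",
			   &major, &minor, name, &reads, &merged,
			   &sectors) == 6 && strcmp(name, dev) == 0) {
			result = sectors / 2;	/* 512-byte sectors -> kB */
			break;
		}
	}
	fclose(f);
	return result;
}

int main(void)
{
	printf("md2 kB read: %llu\n", kb_read("md2"));
	printf("sde kB read: %llu\n", kb_read("sde"));
	return 0;
}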