From patchwork Fri Apr 8 17:12:52 2022
From: Ming Lei
To: Mike Snitzer
Cc: dm-devel@redhat.com, Damien Le Moal, Ming Lei
Date: Sat, 9 Apr 2022 01:12:52 +0800
Message-Id: <20220408171254.935171-2-ming.lei@redhat.com>
In-Reply-To: <20220408171254.935171-1-ming.lei@redhat.com>
References: <20220408171254.935171-1-ming.lei@redhat.com>
Subject: [dm-devel] [PATCH 1/3] dm: don't grab target io reference in dm_zone_map_bio
dm_zone_map_bio() is only called from __map_bio(), in which the io's
reference has already been grabbed, and that reference is not released
until the bio is submitted. So there is no need for dm_zone_map_bio()
to grab an extra reference any more.

Cc: Damien Le Moal
Signed-off-by: Ming Lei
Reviewed-by: Damien Le Moal
Tested-by: Damien Le Moal
---
 drivers/md/dm-core.h |  7 -------
 drivers/md/dm-zone.c | 10 ----------
 drivers/md/dm.c      |  7 ++++++-
 3 files changed, 6 insertions(+), 18 deletions(-)

diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
index 4277853c7535..13136bd47cb3 100644
--- a/drivers/md/dm-core.h
+++ b/drivers/md/dm-core.h
@@ -277,13 +277,6 @@ static inline void dm_io_set_flag(struct dm_io *io, unsigned int bit)
 	io->flags |= (1U << bit);
 }
 
-static inline void dm_io_inc_pending(struct dm_io *io)
-{
-	atomic_inc(&io->io_count);
-}
-
-void dm_io_dec_pending(struct dm_io *io, blk_status_t error);
-
 static inline struct completion *dm_get_completion_from_kobject(struct kobject *kobj)
 {
 	return &container_of(kobj, struct dm_kobject_holder, kobj)->completion;
diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c
index c1ca9be4b79e..85d3c158719f 100644
--- a/drivers/md/dm-zone.c
+++ b/drivers/md/dm-zone.c
@@ -545,13 +545,6 @@ int dm_zone_map_bio(struct dm_target_io *tio)
 		return DM_MAPIO_KILL;
 	}
 
-	/*
-	 * The target map function may issue and complete the IO quickly.
-	 * Take an extra reference on the IO to make sure it does disappear
-	 * until we run dm_zone_map_bio_end().
-	 */
-	dm_io_inc_pending(io);
-
 	/* Let the target do its work */
 	r = ti->type->map(ti, clone);
 	switch (r) {
@@ -580,9 +573,6 @@ int dm_zone_map_bio(struct dm_target_io *tio)
 		break;
 	}
 
-	/* Drop the extra reference on the IO */
-	dm_io_dec_pending(io, sts);
-
 	if (sts != BLK_STS_OK)
 		return DM_MAPIO_KILL;
 
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 3c5fad7c4ee6..b8424a4b4725 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -929,11 +929,16 @@ static inline bool dm_tio_is_normal(struct dm_target_io *tio)
 		!dm_tio_flagged(tio, DM_TIO_IS_DUPLICATE_BIO));
 }
 
+static void dm_io_inc_pending(struct dm_io *io)
+{
+	atomic_inc(&io->io_count);
+}
+
 /*
  * Decrements the number of outstanding ios that a bio has been
  * cloned into, completing the original io if necc.
 */
-void dm_io_dec_pending(struct dm_io *io, blk_status_t error)
+static void dm_io_dec_pending(struct dm_io *io, blk_status_t error)
 {
 	/* Push-back supersedes any I/O errors */
 	if (unlikely(error)) {
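[For illustration, a minimal userspace sketch of the invariant this patch
relies on: the caller holds a reference across the whole mapping call, so an
inner get/put pair around ->map() is redundant. This is not the kernel code;
the names io_get/io_put/map_one are hypothetical, and C11 stdatomic stands in
for the kernel's atomic_t.]

#include <stdatomic.h>
#include <stdio.h>

struct io {
	atomic_int io_count;	/* outstanding references on this io */
};

static void io_get(struct io *io)
{
	atomic_fetch_add(&io->io_count, 1);
}

static void io_put(struct io *io)
{
	/* fetch_sub returns the old value; 1 means we dropped the last ref */
	if (atomic_fetch_sub(&io->io_count, 1) == 1)
		printf("io completed\n");	/* complete the original io */
}

/* Caller-side pattern: the io is pinned for the whole duration of map(). */
static void map_one(struct io *io, void (*map)(struct io *))
{
	io_get(io);	/* submission reference, held until after map() */
	/*
	 * map() may complete its clone at any time, but the io cannot
	 * vanish: we still hold our reference, so map() itself needs
	 * no extra io_get()/io_put() pair of its own.
	 */
	map(io);
	io_put(io);
}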
From patchwork Fri Apr 8 17:12:53 2022
From: Ming Lei
To: Mike Snitzer
Cc: dm-devel@redhat.com, Ming Lei
Date: Sat, 9 Apr 2022 01:12:53 +0800
Message-Id: <20220408171254.935171-3-ming.lei@redhat.com>
In-Reply-To: <20220408171254.935171-1-ming.lei@redhat.com>
References: <20220408171254.935171-1-ming.lei@redhat.com>
Subject: [dm-devel] [PATCH 2/3] dm: improve target io referencing
Currently the target io's reference counter is grabbed before calling
__map_bio(). This isn't efficient, since the grab can be moved into
alloc_io(). Doing so also turns it into the typical async io reference
counter model: one reference is for the submission side, the other for
the completion side, and the io isn't completed until both sides are
done.

Signed-off-by: Ming Lei
---
 drivers/md/dm.c | 36 +++++++++++++++++++++++++-----------
 1 file changed, 25 insertions(+), 11 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index b8424a4b4725..528559ca2f91 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -579,7 +579,9 @@ static struct dm_io *alloc_io(struct mapped_device *md, struct bio *bio)
 	io = container_of(tio, struct dm_io, tio);
 	io->magic = DM_IO_MAGIC;
 	io->status = 0;
-	atomic_set(&io->io_count, 1);
+
+	/* one is for submission, the other is for completion */
+	atomic_set(&io->io_count, 2);
 	this_cpu_inc(*md->pending_io);
 	io->orig_bio = NULL;
 	io->md = md;
@@ -929,11 +931,6 @@ static inline bool dm_tio_is_normal(struct dm_target_io *tio)
 		!dm_tio_flagged(tio, DM_TIO_IS_DUPLICATE_BIO));
 }
 
-static void dm_io_inc_pending(struct dm_io *io)
-{
-	atomic_inc(&io->io_count);
-}
-
 /*
  * Decrements the number of outstanding ios that a bio has been
  * cloned into, completing the original io if necc.
@@ -1275,7 +1272,6 @@ static void __map_bio(struct bio *clone)
 	/*
 	 * Map the clone.
 	 */
-	dm_io_inc_pending(io);
 	tio->old_sector = clone->bi_iter.bi_sector;
 
 	if (unlikely(swap_bios_limit(ti, clone))) {
@@ -1357,11 +1353,12 @@ static void alloc_multiple_bios(struct bio_list *blist, struct clone_info *ci,
 	}
 }
 
-static void __send_duplicate_bios(struct clone_info *ci, struct dm_target *ti,
+static int __send_duplicate_bios(struct clone_info *ci, struct dm_target *ti,
 				  unsigned num_bios, unsigned *len)
 {
 	struct bio_list blist = BIO_EMPTY_LIST;
 	struct bio *clone;
+	int ret = 0;
 
 	switch (num_bios) {
 	case 0:
@@ -1370,15 +1367,19 @@ static void __send_duplicate_bios(struct clone_info *ci, struct dm_target *ti,
 		clone = alloc_tio(ci, ti, 0, len, GFP_NOIO);
 		dm_tio_set_flag(clone_to_tio(clone), DM_TIO_IS_DUPLICATE_BIO);
 		__map_bio(clone);
+		ret = 1;
 		break;
 	default:
 		alloc_multiple_bios(&blist, ci, ti, num_bios, len);
 		while ((clone = bio_list_pop(&blist))) {
 			dm_tio_set_flag(clone_to_tio(clone), DM_TIO_IS_DUPLICATE_BIO);
 			__map_bio(clone);
+			ret += 1;
 		}
 		break;
 	}
+
+	return ret;
 }
 
 static void __send_empty_flush(struct clone_info *ci)
@@ -1398,8 +1399,16 @@ static void __send_empty_flush(struct clone_info *ci)
 	ci->bio = &flush_bio;
 	ci->sector_count = 0;
 
-	while ((ti = dm_table_get_target(ci->map, target_nr++)))
-		__send_duplicate_bios(ci, ti, ti->num_flush_bios, NULL);
+	while ((ti = dm_table_get_target(ci->map, target_nr++))) {
+		int bios;
+
+		atomic_add(ti->num_flush_bios, &ci->io->io_count);
+		bios = __send_duplicate_bios(ci, ti, ti->num_flush_bios, NULL);
+		atomic_sub(ti->num_flush_bios - bios, &ci->io->io_count);
+	}
+
+	/* alloc_io() takes one extra reference for submission */
+	atomic_sub(1, &ci->io->io_count);
 
 	bio_uninit(ci->bio);
 }
@@ -1408,6 +1417,7 @@ static void __send_changing_extent_only(struct clone_info *ci, struct dm_target *ti,
 					unsigned num_bios)
 {
 	unsigned len;
+	int bios;
 
 	len = min_t(sector_t, ci->sector_count,
 		    max_io_len_target_boundary(ti, dm_target_offset(ti, ci->sector)));
@@ -1419,7 +1429,11 @@ static void __send_changing_extent_only(struct clone_info *ci, struct dm_target *ti,
 
 	ci->sector += len;
 	ci->sector_count -= len;
-	__send_duplicate_bios(ci, ti, num_bios, &len);
+
+	atomic_add(num_bios, &ci->io->io_count);
+	bios = __send_duplicate_bios(ci, ti, num_bios, &len);
+	/* alloc_io() takes one extra reference for submission */
+	atomic_sub(num_bios - bios + 1, &ci->io->io_count);
 }
 
 static bool is_abnormal_io(struct bio *bio)
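[As a rough illustration of the reference model described above, a standalone
userspace sketch (hypothetical names, C11 stdatomic, not the dm code): the
counter starts at 2 for submission plus completion, clone issuing pre-grabs
the worst case and returns the excess, and the submitter drops its own
reference last.]

#include <stdatomic.h>
#include <stdio.h>

struct io {
	atomic_int io_count;
};

static void io_complete(struct io *io)
{
	printf("io completed\n");	/* end the original bio here */
}

static void io_put_n(struct io *io, int n)
{
	/* old value == n means these were the last n references */
	if (n && atomic_fetch_sub(&io->io_count, n) == n)
		io_complete(io);
}

static void io_init(struct io *io)
{
	/* one reference for submission, the other for completion */
	atomic_init(&io->io_count, 2);
}

/*
 * Issue up to max_bios clones. Pre-grab one reference per possible clone,
 * then give back the references for clones that were never issued; each
 * issued clone drops its own reference from its completion path.
 */
static void send_duplicates(struct io *io, int max_bios,
			    int (*issue)(struct io *, int))
{
	int issued;

	atomic_fetch_add(&io->io_count, max_bios);
	issued = issue(io, max_bios);
	io_put_n(io, max_bios - issued);
}

static void submission_done(struct io *io)
{
	io_put_n(io, 1);	/* drop the submission-side reference */
}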
From patchwork Fri Apr 8 17:12:54 2022
From: Ming Lei
To: Mike Snitzer
Cc: dm-devel@redhat.com, Ming Lei
Date: Sat, 9 Apr 2022 01:12:54 +0800
Message-Id: <20220408171254.935171-4-ming.lei@redhat.com>
In-Reply-To: <20220408171254.935171-1-ming.lei@redhat.com>
References: <20220408171254.935171-1-ming.lei@redhat.com>
Subject: [dm-devel] [PATCH 3/3] dm: put all polled io into one single list
If bio_split() isn't involved, there is only a single dm_io in the
list, so linking dm_io instances into an hlist is a bit of overkill.
Convert to a plain singly-linked list for holding all dm_io instances
associated with this bio.

Signed-off-by: Ming Lei
---
 drivers/md/dm-core.h |  2 +-
 drivers/md/dm.c      | 46 +++++++++++++++++++++++---------------------
 2 files changed, 25 insertions(+), 23 deletions(-)

diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
index 13136bd47cb3..e28f7d54d2b7 100644
--- a/drivers/md/dm-core.h
+++ b/drivers/md/dm-core.h
@@ -252,7 +252,7 @@ struct dm_io {
 	spinlock_t lock;
 	unsigned long start_time;
 	void *data;
-	struct hlist_node node;
+	struct dm_io *next;
 	struct task_struct *map_task;
 	struct dm_stats_aux stats_aux;
 	/* last member of dm_target_io is 'struct bio' */
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 528559ca2f91..84cb73462fcd 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1486,7 +1486,7 @@ static bool __process_abnormal_io(struct clone_info *ci, struct dm_target *ti,
 }
 
 /*
- * Reuse ->bi_private as hlist head for storing all dm_io instances
+ * Reuse ->bi_private as dm_io list head for storing all dm_io instances
  * associated with this bio, and this bio's bi_private needs to be
  * stored in dm_io->data before the reuse.
  *
@@ -1494,14 +1494,14 @@ static bool __process_abnormal_io(struct clone_info *ci, struct dm_target *ti,
 * touch it after splitting. Meantime it won't be changed by anyone after
 * bio is submitted. So this reuse is safe.
 */
-static inline struct hlist_head *dm_get_bio_hlist_head(struct bio *bio)
+static inline struct dm_io **dm_poll_list_head(struct bio *bio)
 {
-	return (struct hlist_head *)&bio->bi_private;
+	return (struct dm_io **)&bio->bi_private;
 }
 
 static void dm_queue_poll_io(struct bio *bio, struct dm_io *io)
 {
-	struct hlist_head *head = dm_get_bio_hlist_head(bio);
+	struct dm_io **head = dm_poll_list_head(bio);
 
 	if (!(bio->bi_opf & REQ_DM_POLL_LIST)) {
 		bio->bi_opf |= REQ_DM_POLL_LIST;
@@ -1511,19 +1511,20 @@ static void dm_queue_poll_io(struct bio *bio, struct dm_io *io)
 		 */
 		io->data = bio->bi_private;
 
-		INIT_HLIST_HEAD(head);
-
 		/* tell block layer to poll for completion */
 		bio->bi_cookie = ~BLK_QC_T_NONE;
+
+		io->next = NULL;
 	} else {
 		/*
 		 * bio recursed due to split, reuse original poll list,
 		 * and save bio->bi_private too.
 		 */
-		io->data = hlist_entry(head->first, struct dm_io, node)->data;
+		io->data = (*head)->data;
+		io->next = *head;
 	}
 
-	hlist_add_head(&io->node, head);
+	*head = io;
 }
 
 /*
@@ -1677,18 +1678,16 @@ static bool dm_poll_dm_io(struct dm_io *io, struct io_comp_batch *iob,
 static int dm_poll_bio(struct bio *bio, struct io_comp_batch *iob,
 		       unsigned int flags)
 {
-	struct hlist_head *head = dm_get_bio_hlist_head(bio);
-	struct hlist_head tmp = HLIST_HEAD_INIT;
-	struct hlist_node *next;
-	struct dm_io *io;
+	struct dm_io **head = dm_poll_list_head(bio);
+	struct dm_io *list = *head;
+	struct dm_io *tmp = NULL;
+	struct dm_io *curr, *next;
 
 	/* Only poll normal bio which was marked as REQ_DM_POLL_LIST */
 	if (!(bio->bi_opf & REQ_DM_POLL_LIST))
 		return 0;
 
-	WARN_ON_ONCE(hlist_empty(head));
-
-	hlist_move_list(head, &tmp);
+	WARN_ON_ONCE(!list);
 
 	/*
 	 * Restore .bi_private before possibly completing dm_io.
@@ -1699,24 +1698,27 @@ static int dm_poll_bio(struct bio *bio, struct io_comp_batch *iob,
 	 * clearing REQ_DM_POLL_LIST here.
 	 */
 	bio->bi_opf &= ~REQ_DM_POLL_LIST;
-	bio->bi_private = hlist_entry(tmp.first, struct dm_io, node)->data;
+	bio->bi_private = list->data;
 
-	hlist_for_each_entry_safe(io, next, &tmp, node) {
-		if (dm_poll_dm_io(io, iob, flags)) {
-			hlist_del_init(&io->node);
+	for (curr = list, next = curr->next; curr; curr = next,
+			next = curr ? curr->next : NULL) {
+		if (dm_poll_dm_io(curr, iob, flags)) {
 			/*
 			 * clone_endio() has already occurred, so passing
 			 * error as 0 here doesn't override io->status
 			 */
-			dm_io_dec_pending(io, 0);
+			dm_io_dec_pending(curr, 0);
+		} else {
+			curr->next = tmp;
+			tmp = curr;
 		}
 	}
 
 	/* Not done? */
-	if (!hlist_empty(&tmp)) {
+	if (tmp) {
 		bio->bi_opf |= REQ_DM_POLL_LIST;
 		/* Reset bio->bi_private to dm_io list head */
-		hlist_move_list(&tmp, head);
+		*head = tmp;
 		return 0;
 	}
 	return 1;
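[To illustrate the data-structure change, a standalone sketch of the
intrusive singly-linked list the patch switches to (hypothetical names, not
the dm code): pushing reuses a single embedded pointer per io, and reaping
partitions the list into completed and still-pending entries, much as
dm_poll_bio() does above.]

#include <stddef.h>

struct io {
	struct io *next;	/* one embedded pointer replaces hlist_node */
	int done;
};

/* Push at the head; the list head itself is just a struct io pointer. */
static void poll_list_push(struct io **head, struct io *io)
{
	io->next = *head;
	*head = io;
}

/* Complete finished ios; return the list of ios that are still pending. */
static struct io *poll_list_reap(struct io *list,
				 void (*complete)(struct io *))
{
	struct io *pending = NULL;
	struct io *next;

	for (; list; list = next) {
		next = list->next;	/* save: complete() may free list */
		if (list->done)
			complete(list);
		else
			poll_list_push(&pending, list);
	}
	return pending;		/* NULL means the whole bio is done */
}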