From patchwork Thu May  5 15:54:23 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mike Snitzer
X-Patchwork-Id: 9025641
From: Mike Snitzer
To: Jens Axboe
Cc: Christoph Hellwig, linux-block@vger.kernel.org, dm-devel@redhat.com, Mike Snitzer
Subject: [PATCH 3/5] dm thin: remove __bio_inc_remaining() and switch to using bio_inc_remaining()
Date: Thu, 5 May 2016 11:54:23 -0400
Message-Id: <1462463665-85312-4-git-send-email-snitzer@redhat.com>
In-Reply-To: <1462463665-85312-1-git-send-email-snitzer@redhat.com>
References: <1462463665-85312-1-git-send-email-snitzer@redhat.com>
X-Mailing-List: linux-block@vger.kernel.org

DM thinp's use of bio_inc_remaining() is critical to ensure the
original parent discard bio isn't completed before all of its chained
sub-discards have completed.  DM thinp needs this because of the extra
quiescing that occurs, via multiple DM thinp mappings, while processing
large discards.  As such, DM thinp must build the async discard bio
chain after some delay, so bio_inc_remaining() is used to allow DM
thinp to take a reference on the original parent discard bio for each
mapping.  This allows bio_endio() to be called on that discard bio
immediately, with the understanding that the actual completion won't
occur until each of the sub-discards' per-mapping references has been
dropped.
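For illustration only (not part of this patch): a minimal sketch of the
reference-counting pattern described above, assuming a hypothetical
issue_sub_discards()/sub_discard_done() pair.  Only bio_inc_remaining()
and bio_endio() are real block-layer interfaces; the surrounding helpers
and the nr_mappings parameter are placeholders for DM thinp's per-mapping
deferred work.

    /*
     * Illustrative sketch: defer a parent discard bio's completion
     * until every per-mapping sub-discard has signalled completion.
     */
    #include <linux/bio.h>

    /* Hypothetical per-sub-discard completion hook. */
    static void sub_discard_done(struct bio *parent)
    {
    	/*
    	 * Drops one of the references taken below; the parent's
    	 * ->bi_end_io only runs once __bi_remaining reaches zero.
    	 */
    	bio_endio(parent);
    }

    /* Hypothetical helper: one sub-discard per thinp mapping. */
    static void issue_sub_discards(struct bio *parent, unsigned nr_mappings)
    {
    	unsigned i;

    	for (i = 0; i < nr_mappings; i++) {
    		/*
    		 * One extra completion reference per mapping, paired
    		 * with the bio_endio() in sub_discard_done().
    		 */
    		bio_inc_remaining(parent);
    		/* ... queue the per-mapping sub-discard here ... */
    	}

    	/*
    	 * Safe to end the parent now: it won't actually complete
    	 * until every per-mapping reference has been dropped.
    	 */
    	bio_endio(parent);
    }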
Signed-off-by: Mike Snitzer
Acked-by: Joe Thornber
---
 drivers/md/dm-thin.c | 13 +------------
 1 file changed, 1 insertion(+), 12 deletions(-)

diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index 04e7f3b..da42c49 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -1494,17 +1494,6 @@ static void process_discard_cell_no_passdown(struct thin_c *tc,
 		pool->process_prepared_discard(m);
 }
 
-/*
- * __bio_inc_remaining() is used to defer parent bios's end_io until
- * we _know_ all chained sub range discard bios have completed.
- */
-static inline void __bio_inc_remaining(struct bio *bio)
-{
-	bio->bi_flags |= (1 << BIO_CHAIN);
-	smp_mb__before_atomic();
-	atomic_inc(&bio->__bi_remaining);
-}
-
 static void break_up_discard_bio(struct thin_c *tc, dm_block_t begin, dm_block_t end,
 				 struct bio *bio)
 {
@@ -1560,7 +1549,7 @@ static void break_up_discard_bio(struct thin_c *tc, dm_block_t begin, dm_block_t
 		 * the implicit decrement that occurs via bio_endio() in
 		 * process_prepared_discard_{passdown,no_passdown}.
 		 */
-		__bio_inc_remaining(bio);
+		bio_inc_remaining(bio);
 		if (!dm_deferred_set_add_work(pool->all_io_ds, &m->list))
 			pool->process_prepared_discard(m);