From patchwork Thu Apr 28 01:09:49 2016
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 8964601
From: Ming Lei
To: Jens Axboe, linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org, Christoph Hellwig, linux-btrfs@vger.kernel.org, Ming Lei, Shaun Tancheff, Mikulas Patocka, Alan Cox, Neil Brown, Liu Bo, Jens Axboe
Subject: [PATCH v3 3/3] block: avoid to call .bi_end_io() recursively
Date: Thu, 28 Apr 2016 09:09:49 +0800
Message-Id: <1461805789-3632-3-git-send-email-ming.lei@canonical.com>
In-Reply-To: <1461805789-3632-1-git-send-email-ming.lei@canonical.com>
References: <1461805789-3632-1-git-send-email-ming.lei@canonical.com>
List-ID: linux-btrfs@vger.kernel.org

There have been several reports of heavy stack use caused by calling .bi_end_io() recursively ([1][2][3]). For example, more than 16K of stack is consumed in a single bio completion path [3], and in [2] a stack overflow can be triggered with 20 nested dm-crypt devices. Patches were posted in [1], [2] and [3] to address the issue, but none was ever merged. The idea in all of them is basically the same: serialize the recursive calls to .bi_end_io() through a per-CPU list. This patch takes the same approach, but uses a bio_list to implement it, which turns out simpler and makes the code more readable.
One corner case that was not covered before is that .bi_end_io() may be scheduled to run in process context (as btrfs does). This patch simply bypasses the optimization in that case, both because a freshly scheduled context should have enough stack space, and because this approach cannot handle it anyway: there is no easy way to get a per-task linked list head.

xfstests (-g auto) was run with this patch and no regressions were found on ext4, xfs or btrfs.

[1] http://marc.info/?t=121428502000004&r=1&w=2
[2] http://marc.info/?l=dm-devel&m=139595190620008&w=2
[3] http://marc.info/?t=145974644100001&r=1&w=2

Cc: Shaun Tancheff
Cc: Christoph Hellwig
Cc: Mikulas Patocka
Cc: Alan Cox
Cc: Neil Brown
Cc: Liu Bo
Signed-off-by: Ming Lei
---
 block/bio.c | 55 +++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 53 insertions(+), 2 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 807d25e..e20a83c 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -68,6 +68,8 @@ static DEFINE_MUTEX(bio_slab_lock);
 static struct bio_slab *bio_slabs;
 static unsigned int bio_slab_nr, bio_slab_max;
 
+static DEFINE_PER_CPU(struct bio_list *, bio_end_list) = { NULL };
+
 static struct kmem_cache *bio_find_or_create_slab(unsigned int extra_size)
 {
 	unsigned int sz = sizeof(struct bio) + extra_size;
@@ -1737,6 +1739,56 @@ static inline bool bio_remaining_done(struct bio *bio)
 	return false;
 }
 
+static void __bio_endio(struct bio *bio)
+{
+	if (bio->bi_end_io)
+		bio->bi_end_io(bio);
+}
+
+/* disable local irq when manipulating the percpu bio_list */
+static void unwind_bio_endio(struct bio *bio)
+{
+	struct bio_list *bl;
+	unsigned long flags;
+
+	/*
+	 * We can't optimize if .bi_end_io() is scheduled to run from
+	 * process context because there isn't an easy way to get a
+	 * per-task bio list head or allocate a per-task variable.
+	 */
+	if (!in_interrupt()) {
+		/*
+		 * It has to be a top-level call when it is run from
+		 * process context.
+		 */
+		WARN_ON(this_cpu_read(bio_end_list));
+		__bio_endio(bio);
+		return;
+	}
+
+	local_irq_save(flags);
+	bl = __this_cpu_read(bio_end_list);
+	if (bl) {
+		bio_list_add(bl, bio);
+	} else {
+		struct bio_list bl_in_stack;
+
+		bl = &bl_in_stack;
+		bio_list_init(bl);
+
+		__this_cpu_write(bio_end_list, bl);
+		while (bio) {
+			local_irq_restore(flags);
+			__bio_endio(bio);
+			local_irq_save(flags);
+
+			bio = bio_list_pop(bl);
+		}
+		__this_cpu_write(bio_end_list, NULL);
+	}
+	local_irq_restore(flags);
+}
+
 /**
  * bio_endio - end I/O on a bio
  * @bio:	bio
@@ -1765,8 +1817,7 @@ again:
 		goto again;
 	}
 
-	if (bio->bi_end_io)
-		bio->bi_end_io(bio);
+	unwind_bio_endio(bio);
 }
 EXPORT_SYMBOL(bio_endio);