From patchwork Fri Jun  1 10:51:59 2018
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 10442981
From: Arnd Bergmann
Date: Fri, 1 Jun 2018 12:51:59 +0200
In-Reply-To: <20180520222558.7053-7-kent.overstreet@gmail.com>
References: <20180520222558.7053-1-kent.overstreet@gmail.com>
 <20180520222558.7053-7-kent.overstreet@gmail.com>
Subject: Re: [PATCH 06/12] md: convert to bioset_init()/mempool_init()
To: Kent Overstreet
Cc: Linux Kernel Mailing List, linux-block, Jens Axboe, Christoph Hellwig,
 colyli@suse.de, Mike Snitzer, "Darrick J. Wong", Chris Mason, bacik@fb.com,
 linux-xfs, drbd-dev@lists.linbit.com, linux-btrfs,
 "open list:SOFTWARE RAID (Multiple Disks) SUPPORT", NeilBrown

On Mon, May 21, 2018 at 12:25 AM, Kent Overstreet wrote:
> @@ -510,7 +510,10 @@ static void mddev_delayed_delete(struct work_struct *ws);
>
>  static void mddev_put(struct mddev *mddev)
>  {
> -        struct bio_set *bs = NULL, *sync_bs = NULL;
> +        struct bio_set bs, sync_bs;
> +
> +        memset(&bs, 0, sizeof(bs));
> +        memset(&sync_bs, 0, sizeof(sync_bs));
>
>          if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
>                  return;
> @@ -521,8 +524,8 @@ static void mddev_put(struct mddev *mddev)
>                  list_del_init(&mddev->all_mddevs);
>                  bs = mddev->bio_set;
>                  sync_bs = mddev->sync_set;
> -                mddev->bio_set = NULL;
> -                mddev->sync_set = NULL;
> +                memset(&mddev->bio_set, 0, sizeof(mddev->bio_set));
> +                memset(&mddev->sync_set, 0, sizeof(mddev->sync_set));
>                  if (mddev->gendisk) {
>                          /* We did a probe so need to clean up. Call
>                           * queue_work inside the spinlock so that
> @@ -535,10 +538,8 @@ static void mddev_put(struct mddev *mddev)
>                          kfree(mddev);
>          }
>          spin_unlock(&all_mddevs_lock);
> -        if (bs)
> -                bioset_free(bs);
> -        if (sync_bs)
> -                bioset_free(sync_bs);
> +        bioset_exit(&bs);
> +        bioset_exit(&sync_bs);
>  }

This has triggered a new warning in randconfig builds for me, on 32-bit:

drivers/md/md.c: In function 'mddev_put':
drivers/md/md.c:564:1: error: the frame size of 1096 bytes is larger than 1024 bytes [-Werror=frame-larger-than=]

and on 64-bit:

drivers/md/md.c: In function 'mddev_put':
drivers/md/md.c:564:1: error: the frame size of 2112 bytes is larger than 2048 bytes [-Werror=frame-larger-than=]

The size of struct bio_set is somewhat configuration dependent, but it is
clearly too big to put on the stack either way, especially when you traverse
a list and copy two of them with structure assignments ('bs = mddev->bio_set')
on each iteration.

A related problem is that calling bioset_exit() on a copy of the bio_set
might not do the right thing. The function is defined as

void bioset_exit(struct bio_set *bs)
{
        if (bs->rescue_workqueue)
                destroy_workqueue(bs->rescue_workqueue);
        bs->rescue_workqueue = NULL;

        mempool_exit(&bs->bio_pool);
        mempool_exit(&bs->bvec_pool);

        bioset_integrity_free(bs);
        if (bs->bio_slab)
                bio_put_slab(bs);
        bs->bio_slab = NULL;
}

so it calls a couple of functions (destroy_workqueue, mempool_exit,
bioset_integrity_free, bio_put_slab), each of which might rely on the
structure being the same one that was originally allocated. If the structure
contains any kind of linked list entry, passing a copy into the list
management functions would guarantee corruption.
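To make that hazard concrete, here is a minimal user-space sketch (not md or
block-layer code; struct owner, struct waiter, and their fields are made-up
stand-ins) of what happens when a structure with embedded list pointers is
copied by assignment: the other nodes keep pointing at the original object,
so the copy and the original silently alias each other's list state.

#include <stdbool.h>
#include <stdio.h>

/* doubly-linked list node, in the style of the kernel's struct list_head */
struct list_head {
        struct list_head *next, *prev;
};

/* stand-in for an object like a bio_set/mempool that embeds a list head */
struct owner {
        struct list_head waiters;
};

/* stand-in for something queued on that list */
struct waiter {
        struct list_head entry;
};

int main(void)
{
        struct owner orig;
        struct waiter w;

        /* build the circular list: orig.waiters <-> w.entry */
        orig.waiters.next = orig.waiters.prev = &w.entry;
        w.entry.next = w.entry.prev = &orig.waiters;

        /* structure assignment, like 'bs = mddev->bio_set' in the patch */
        struct owner copy = orig;

        /* the copy still sees the waiter ... */
        bool copy_sees_waiter = (copy.waiters.next == &w.entry);
        /* ... but the waiter links back only to the ORIGINAL head */
        bool waiter_links_to_copy = (w.entry.prev == &copy.waiters);

        printf("copy sees waiter: %d, waiter links back to copy: %d\n",
               copy_sees_waiter, waiter_links_to_copy); /* prints 1, 0 */

        /*
         * Any list_del()/list_add()/wake-up done through &copy.waiters now
         * rewrites w.entry and orig.waiters while copy.waiters goes stale
         * (or the other way around) - the corruption described above.
         */
        return 0;
}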
I could not come up with an obvious fix, but maybe it's possible to change
mddev_put() to call bioset_exit() before the structure gets freed?
Something like the (completely untested and probably wrong somewhere)
patch below.

Signed-off-by: Arnd Bergmann
---
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 411f60d591d1..3bf54591ddcd 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -531,11 +531,6 @@ static void mddev_delayed_delete(struct work_struct *ws);
 
 static void mddev_put(struct mddev *mddev)
 {
-        struct bio_set bs, sync_bs;
-
-        memset(&bs, 0, sizeof(bs));
-        memset(&sync_bs, 0, sizeof(sync_bs));
-
         if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
                 return;
         if (!mddev->raid_disks && list_empty(&mddev->disks) &&
@@ -543,10 +538,14 @@ static void mddev_put(struct mddev *mddev)
                 /* Array is not configured at all, and not held active,
                  * so destroy it */
                 list_del_init(&mddev->all_mddevs);
-                bs = mddev->bio_set;
-                sync_bs = mddev->sync_set;
+                spin_unlock(&all_mddevs_lock);
+
+                bioset_exit(&mddev->bio_set);
                 memset(&mddev->bio_set, 0, sizeof(mddev->bio_set));
+
+                bioset_exit(&mddev->sync_set);
                 memset(&mddev->sync_set, 0, sizeof(mddev->sync_set));
+
                 if (mddev->gendisk) {
                         /* We did a probe so need to clean up. Call
                          * queue_work inside the spinlock so that
@@ -557,10 +556,9 @@ static void mddev_put(struct mddev *mddev)
                         queue_work(md_misc_wq, &mddev->del_work);
                 } else
                         kfree(mddev);
+        } else {
+                spin_unlock(&all_mddevs_lock);
         }
-        spin_unlock(&all_mddevs_lock);
-        bioset_exit(&bs);
-        bioset_exit(&sync_bs);
 }
 
 static void md_safemode_timeout(struct timer_list *t);
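For reference, a rough sketch of the embedded-bio_set pattern this series
converts md and the other callers to: the bio_set lives inside the owning
object, is set up with bioset_init(), and is torn down in place with
bioset_exit() at the address it was initialised with, rather than being a
separately allocated pointer released with bioset_free(). Only bioset_init(),
bioset_exit(), BIOSET_NEED_BVECS, and BIO_POOL_SIZE are real block-layer API
here; struct my_dev and the two helpers are hypothetical names for the sketch.

#include <linux/bio.h>
#include <linux/slab.h>

struct my_dev {                         /* hypothetical driver object */
        struct bio_set bs;              /* embedded, not a pointer */
};

static struct my_dev *my_dev_alloc(void)
{
        struct my_dev *d = kzalloc(sizeof(*d), GFP_KERNEL);

        if (!d)
                return NULL;

        /* pool size and front padding, then the behaviour flags */
        if (bioset_init(&d->bs, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS)) {
                kfree(d);
                return NULL;
        }
        return d;
}

static void my_dev_free(struct my_dev *d)
{
        /*
         * Tear the bio_set down through the address it was initialised
         * with, before the containing object is freed - the ordering the
         * mail above is asking mddev_put() to follow.
         */
        bioset_exit(&d->bs);
        kfree(d);
}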