From patchwork Mon Aug 1 15:50:34 2022
X-Patchwork-Submitter: Khazhismel Kumykov
X-Patchwork-Id: 12933867
From: Khazhismel Kumykov
To: Alexander Viro, Andrew Morton, "Matthew Wilcox (Oracle)", Jan Kara
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Khazhismel Kumykov
Subject: [PATCH v2] writeback: avoid use-after-free after removing device
Date: Mon, 1 Aug 2022 08:50:34 -0700
Message-Id: <20220801155034.3772543-1-khazhy@google.com>
X-Mailer: git-send-email 2.37.1.455.g008518b4e5-goog
In-Reply-To: <20220729215123.1998585-1-khazhy@google.com>
References: <20220729215123.1998585-1-khazhy@google.com>

When a disk is removed, bdi_unregister gets called to stop further
writeback and wait for associated delayed work to complete. However,
wb_inode_writeback_end() may schedule bandwidth estimation dwork after
this has completed, which can result in the timer attempting to access
the just freed bdi_writeback.

Fix this by checking if the bdi_writeback is alive, similar to when
scheduling writeback work. Since this requires wb->work_lock, and
wb_inode_writeback_end() may get called from interrupt, switch
wb->work_lock to an irqsafe lock.
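Roughly, one possible interleaving looks like this (a sketch, not an
exact trace; the backing_dev_info is eventually freed when its last
reference is dropped, e.g. via bdi_put()):

  CPU0 (writeback completes,             CPU1 (device removal)
        possibly in interrupt)
  wb_inode_writeback_end()
                                         bdi_unregister()
                                           wb_shutdown()
                                           /* drains pending dwork */
    queue_delayed_work(bdi_wq,
                       &wb->bw_dwork, ...)
                                         /* bdi released, wb freed */
    /* bw_dwork timer fires and touches
       the freed bdi_writeback: UAF */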
Fixes: 45a2966fd641 ("writeback: fix bandwidth estimate for spiky workload")
Signed-off-by: Khazhismel Kumykov
Reviewed-by: Jan Kara
---
 fs/fs-writeback.c   | 12 ++++++------
 mm/backing-dev.c    | 10 +++++-----
 mm/page-writeback.c |  6 +++++-
 3 files changed, 16 insertions(+), 12 deletions(-)

v2: made changelog a bit more verbose

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 05221366a16d..08a1993ab7fd 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -134,10 +134,10 @@ static bool inode_io_list_move_locked(struct inode *inode,
 
 static void wb_wakeup(struct bdi_writeback *wb)
 {
-	spin_lock_bh(&wb->work_lock);
+	spin_lock_irq(&wb->work_lock);
 	if (test_bit(WB_registered, &wb->state))
 		mod_delayed_work(bdi_wq, &wb->dwork, 0);
-	spin_unlock_bh(&wb->work_lock);
+	spin_unlock_irq(&wb->work_lock);
 }
 
 static void finish_writeback_work(struct bdi_writeback *wb,
@@ -164,7 +164,7 @@ static void wb_queue_work(struct bdi_writeback *wb,
 	if (work->done)
 		atomic_inc(&work->done->cnt);
 
-	spin_lock_bh(&wb->work_lock);
+	spin_lock_irq(&wb->work_lock);
 
 	if (test_bit(WB_registered, &wb->state)) {
 		list_add_tail(&work->list, &wb->work_list);
@@ -172,7 +172,7 @@ static void wb_queue_work(struct bdi_writeback *wb,
 	} else
 		finish_writeback_work(wb, work);
 
-	spin_unlock_bh(&wb->work_lock);
+	spin_unlock_irq(&wb->work_lock);
 }
 
 /**
@@ -2082,13 +2082,13 @@ static struct wb_writeback_work *get_next_work_item(struct bdi_writeback *wb)
 {
 	struct wb_writeback_work *work = NULL;
 
-	spin_lock_bh(&wb->work_lock);
+	spin_lock_irq(&wb->work_lock);
 	if (!list_empty(&wb->work_list)) {
 		work = list_entry(wb->work_list.next,
 				  struct wb_writeback_work, list);
 		list_del_init(&work->list);
 	}
-	spin_unlock_bh(&wb->work_lock);
+	spin_unlock_irq(&wb->work_lock);
 	return work;
 }
 
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 95550b8fa7fe..de65cb1e5f76 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -260,10 +260,10 @@ void wb_wakeup_delayed(struct bdi_writeback *wb)
 	unsigned long timeout;
 
 	timeout = msecs_to_jiffies(dirty_writeback_interval * 10);
-	spin_lock_bh(&wb->work_lock);
+	spin_lock_irq(&wb->work_lock);
 	if (test_bit(WB_registered, &wb->state))
 		queue_delayed_work(bdi_wq, &wb->dwork, timeout);
-	spin_unlock_bh(&wb->work_lock);
+	spin_unlock_irq(&wb->work_lock);
 }
 
 static void wb_update_bandwidth_workfn(struct work_struct *work)
@@ -334,12 +334,12 @@ static void cgwb_remove_from_bdi_list(struct bdi_writeback *wb);
 static void wb_shutdown(struct bdi_writeback *wb)
 {
 	/* Make sure nobody queues further work */
-	spin_lock_bh(&wb->work_lock);
+	spin_lock_irq(&wb->work_lock);
 	if (!test_and_clear_bit(WB_registered, &wb->state)) {
-		spin_unlock_bh(&wb->work_lock);
+		spin_unlock_irq(&wb->work_lock);
 		return;
 	}
-	spin_unlock_bh(&wb->work_lock);
+	spin_unlock_irq(&wb->work_lock);
 	cgwb_remove_from_bdi_list(wb);
 
 	/*
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 55c2776ae699..3c34db15cf70 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2867,6 +2867,7 @@ static void wb_inode_writeback_start(struct bdi_writeback *wb)
 
 static void wb_inode_writeback_end(struct bdi_writeback *wb)
 {
+	unsigned long flags;
 	atomic_dec(&wb->writeback_inodes);
 	/*
 	 * Make sure estimate of writeback throughput gets updated after
@@ -2875,7 +2876,10 @@ static void wb_inode_writeback_end(struct bdi_writeback *wb)
 	 * that if multiple inodes end writeback at a similar time, they get
 	 * batched into one bandwidth update.
 	 */
-	queue_delayed_work(bdi_wq, &wb->bw_dwork, BANDWIDTH_INTERVAL);
+	spin_lock_irqsave(&wb->work_lock, flags);
+	if (test_bit(WB_registered, &wb->state))
+		queue_delayed_work(bdi_wq, &wb->bw_dwork, BANDWIDTH_INTERVAL);
+	spin_unlock_irqrestore(&wb->work_lock, flags);
 }
 
 bool __folio_end_writeback(struct folio *folio)
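For readers who want to poke at the pattern outside the kernel tree,
here is a minimal stand-alone userspace sketch of the same idea: gate
asynchronous work behind a "registered" flag that is only tested and
cleared under a common lock, so teardown can guarantee nothing new gets
queued afterwards. This is an illustration only; the struct and
function names are invented, and a pthread mutex stands in for the
kernel's irq-safe spinlock and workqueue machinery.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct wb_like {
	pthread_mutex_t lock;	/* stands in for wb->work_lock */
	bool registered;	/* stands in for WB_registered */
};

/* Completion side: only queue estimation work while still registered. */
static bool queue_bw_work(struct wb_like *wb)
{
	bool queued = false;

	pthread_mutex_lock(&wb->lock);
	if (wb->registered)
		queued = true;	/* kernel: queue_delayed_work(...) */
	pthread_mutex_unlock(&wb->lock);
	return queued;
}

/* Teardown side: clear the flag under the same lock, then drain. */
static void teardown(struct wb_like *wb)
{
	pthread_mutex_lock(&wb->lock);
	wb->registered = false;
	pthread_mutex_unlock(&wb->lock);
	/* kernel: wb_shutdown() then flushes any already-queued work */
}

int main(void)
{
	struct wb_like wb = { PTHREAD_MUTEX_INITIALIZER, true };

	printf("before teardown: queued=%d\n", queue_bw_work(&wb));
	teardown(&wb);
	printf("after teardown:  queued=%d\n", queue_bw_work(&wb));
	return 0;
}

Once the flag is cleared under the lock, any caller of queue_bw_work()
either observed the flag while holding the lock (so its work is drained
by teardown) or sees it cleared and queues nothing; that is the same
guarantee the WB_registered check gives wb_inode_writeback_end() above.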