From patchwork Mon Mar 23 05:07:31 2015
X-Patchwork-Submitter: Tejun Heo
X-Patchwork-Id: 6070091
From: Tejun Heo
To: axboe@kernel.dk
Cc: linux-kernel@vger.kernel.org, jack@suse.cz, hch@infradead.org,
    hannes@cmpxchg.org, linux-fsdevel@vger.kernel.org, vgoyal@redhat.com,
    lizefan@huawei.com, cgroups@vger.kernel.org, linux-mm@kvack.org,
    mhocko@suse.cz, clm@fb.com, fengguang.wu@intel.com, david@fromorbit.com,
    gthelen@google.com, Tejun Heo
Subject: [PATCH 02/18] writeback: reorganize [__]wb_update_bandwidth()
Date: Mon, 23 Mar 2015 01:07:31 -0400
Message-Id: <1427087267-16592-3-git-send-email-tj@kernel.org>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1427087267-16592-1-git-send-email-tj@kernel.org>
References: <1427087267-16592-1-git-send-email-tj@kernel.org>

__wb_update_bandwidth() is called from two places -
mm/page-writeback.c::balance_dirty_pages() and
fs/fs-writeback.c::wb_writeback().  The latter updates only the write
bandwidth while the former also deals with the dirty ratelimit.
The two callsites are distinguished by whether @thresh parameter is
zero or not, which is cryptic.  In addition, the two files define
their own different versions of wb_update_bandwidth() on top of
__wb_update_bandwidth(), which is confusing to say the least.  This
patch cleans up [__]wb_update_bandwidth() in the following ways.

* __wb_update_bandwidth() now takes explicit @update_ratelimit
  parameter to gate dirty ratelimit handling.

* mm/page-writeback.c::wb_update_bandwidth() is flattened into its
  caller - balance_dirty_pages().

* fs/fs-writeback.c::wb_update_bandwidth() is moved to
  mm/page-writeback.c and __wb_update_bandwidth() is made static.

* While at it, add a lockdep assertion to __wb_update_bandwidth().

Except for the lockdep addition, this is pure reorganization and
doesn't introduce any behavioral changes.

Signed-off-by: Tejun Heo
Cc: Jens Axboe
Cc: Jan Kara
Cc: Wu Fengguang
Cc: Greg Thelen
---
 fs/fs-writeback.c         | 10 ----------
 include/linux/writeback.h |  9 +--------
 mm/page-writeback.c       | 45 ++++++++++++++++++++++-----------------------
 3 files changed, 23 insertions(+), 41 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 890cff1..3d9b360 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -1079,16 +1079,6 @@ static bool over_bground_thresh(struct bdi_writeback *wb)
 }
 
 /*
- * Called under wb->list_lock. If there are multiple wb per bdi,
- * only the flusher working on the first wb should do it.
- */
-static void wb_update_bandwidth(struct bdi_writeback *wb,
-                                unsigned long start_time)
-{
-        __wb_update_bandwidth(wb, 0, 0, 0, 0, 0, start_time);
-}
-
-/*
  * Explicit flushing or periodic writeback of "old" data.
  *
  * Define "old": the first time one of an inode's pages is dirtied, we mark the
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 75349bb..82e0e39 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -154,14 +154,7 @@ int dirty_writeback_centisecs_handler(struct ctl_table *, int,
 void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty);
 unsigned long wb_dirty_limit(struct bdi_writeback *wb, unsigned long dirty);
 
-void __wb_update_bandwidth(struct bdi_writeback *wb,
-                           unsigned long thresh,
-                           unsigned long bg_thresh,
-                           unsigned long dirty,
-                           unsigned long bdi_thresh,
-                           unsigned long bdi_dirty,
-                           unsigned long start_time);
-
+void wb_update_bandwidth(struct bdi_writeback *wb, unsigned long start_time);
 void page_writeback_init(void);
 
 void balance_dirty_pages_ratelimited(struct address_space *mapping);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index fd441ea..d9ebabe 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -1160,19 +1160,22 @@ static void wb_update_dirty_ratelimit(struct bdi_writeback *wb,
         trace_bdi_dirty_ratelimit(wb->bdi, dirty_rate, task_ratelimit);
 }
 
-void __wb_update_bandwidth(struct bdi_writeback *wb,
-                           unsigned long thresh,
-                           unsigned long bg_thresh,
-                           unsigned long dirty,
-                           unsigned long wb_thresh,
-                           unsigned long wb_dirty,
-                           unsigned long start_time)
+static void __wb_update_bandwidth(struct bdi_writeback *wb,
+                                  unsigned long thresh,
+                                  unsigned long bg_thresh,
+                                  unsigned long dirty,
+                                  unsigned long wb_thresh,
+                                  unsigned long wb_dirty,
+                                  unsigned long start_time,
+                                  bool update_ratelimit)
 {
         unsigned long now = jiffies;
         unsigned long elapsed = now - wb->bw_time_stamp;
         unsigned long dirtied;
         unsigned long written;
 
+        lockdep_assert_held(&wb->list_lock);
+
         /*
          * rate-limit, only update once every 200ms.
          */
@@ -1189,7 +1192,7 @@ void __wb_update_bandwidth(struct bdi_writeback *wb,
         if (elapsed > HZ && time_before(wb->bw_time_stamp, start_time))
                 goto snapshot;
 
-        if (thresh) {
+        if (update_ratelimit) {
                 global_update_bandwidth(thresh, dirty, now);
                 wb_update_dirty_ratelimit(wb, thresh, bg_thresh, dirty,
                                           wb_thresh, wb_dirty,
@@ -1203,20 +1206,9 @@ snapshot:
         wb->bw_time_stamp = now;
 }
 
-static void wb_update_bandwidth(struct bdi_writeback *wb,
-                                unsigned long thresh,
-                                unsigned long bg_thresh,
-                                unsigned long dirty,
-                                unsigned long wb_thresh,
-                                unsigned long wb_dirty,
-                                unsigned long start_time)
+void wb_update_bandwidth(struct bdi_writeback *wb, unsigned long start_time)
 {
-        if (time_is_after_eq_jiffies(wb->bw_time_stamp + BANDWIDTH_INTERVAL))
-                return;
-        spin_lock(&wb->list_lock);
-        __wb_update_bandwidth(wb, thresh, bg_thresh, dirty,
-                              wb_thresh, wb_dirty, start_time);
-        spin_unlock(&wb->list_lock);
+        __wb_update_bandwidth(wb, 0, 0, 0, 0, 0, start_time, false);
 }
 
 /*
@@ -1467,8 +1459,15 @@ static void balance_dirty_pages(struct address_space *mapping,
                 if (dirty_exceeded && !wb->dirty_exceeded)
                         wb->dirty_exceeded = 1;
 
-                wb_update_bandwidth(wb, dirty_thresh, background_thresh,
-                                    nr_dirty, wb_thresh, wb_dirty, start_time);
+                if (time_is_before_jiffies(wb->bw_time_stamp +
+                                           BANDWIDTH_INTERVAL)) {
+                        spin_lock(&wb->list_lock);
+                        __wb_update_bandwidth(wb, dirty_thresh,
+                                              background_thresh, nr_dirty,
+                                              wb_thresh, wb_dirty, start_time,
+                                              true);
+                        spin_unlock(&wb->list_lock);
+                }
 
                 dirty_ratelimit = wb->dirty_ratelimit;
                 pos_ratio = wb_position_ratio(wb, dirty_thresh,
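
For quick reference, here is a condensed, non-compilable C sketch of the
calling convention after this patch, paraphrased from the hunks above (the
locals dirty_thresh, background_thresh, nr_dirty, wb_thresh, wb_dirty and
start_time are the ones already computed inside balance_dirty_pages(); all
other code in those functions is omitted):

/*
 * Only __wb_update_bandwidth() does the real work now; an explicit bool
 * selects whether the dirty ratelimit is updated, instead of the old
 * "@thresh == 0" convention.
 */

/* mm/page-writeback.c::balance_dirty_pages() - also updates the dirty
 * ratelimit, so it passes the thresholds and update_ratelimit=true and
 * takes wb->list_lock itself. */
        if (time_is_before_jiffies(wb->bw_time_stamp + BANDWIDTH_INTERVAL)) {
                spin_lock(&wb->list_lock);
                __wb_update_bandwidth(wb, dirty_thresh, background_thresh,
                                      nr_dirty, wb_thresh, wb_dirty,
                                      start_time, true);
                spin_unlock(&wb->list_lock);
        }

/* mm/page-writeback.c::wb_update_bandwidth() - the flusher path; the caller
 * (fs/fs-writeback.c::wb_writeback()) already holds wb->list_lock, and only
 * the write bandwidth is refreshed. */
void wb_update_bandwidth(struct bdi_writeback *wb, unsigned long start_time)
{
        __wb_update_bandwidth(wb, 0, 0, 0, 0, 0, start_time, false);
}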