Message ID | 1530694660-20141-1-git-send-email-lirongqing@baidu.com (mailing list archive) |
---|---|
State | New, archived |
Ping
-RongQing

> -----Original Message-----
> From: linux-fsdevel-owner@vger.kernel.org [mailto:linux-fsdevel-owner@vger.kernel.org] On Behalf Of Li RongQing
> Sent: 2018-07-04 16:58
> To: linux-fsdevel@vger.kernel.org; linux-mm@kvack.org
> Cc: viro@zeniv.linux.org.uk
> Subject: [PATCH] fs/writeback: do memory cgroup related writeback firstly
>
> When a machine has hundreds of memory cgroups, and the cgroups
> generate dirty pages at varying rates, a single cgroup under heavy
> memory pressure that repeatedly tries to reclaim dirty pages will
> trigger writeback for all cgroups, which is inefficient:
>
> 1. if the used memory in a memory cgroup has reached its limit, it is
> useless to write back other cgroups.
> 2. other cgroups could wait longer and merge more write requests.
>
> So replace the full flush with writeback of only the memory cgroup
> whose tasks are reclaiming memory and triggering writeback; if nothing
> is written back, fall back to a full flush.
>
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
> ---
>  fs/fs-writeback.c | 22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
>
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index abafc9bd4622..a27efc4711a9 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -2001,6 +2001,28 @@ void wakeup_flusher_threads(enum wb_reason reason)
>  	if (blk_needs_flush_plug(current))
>  		blk_schedule_flush_plug(current);
>  
> +#ifdef CONFIG_CGROUP_WRITEBACK
> +	if (reason == WB_REASON_VMSCAN) {
> +		int flush = 0;
> +
> +		rcu_read_lock();
> +		list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
> +			struct bdi_writeback *wb = wb_find_current(bdi);
> +
> +			if (wb && wb != &bdi->wb) {
> +				if (wb_has_dirty_io(wb)) {
> +					wb_start_writeback(wb, reason);
> +					flush++;
> +				}
> +			}
> +		}
> +		rcu_read_unlock();
> +
> +		if (flush)
> +			return;
> +	}
> +#endif
> +
>  	rcu_read_lock();
>  	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list)
>  		__wakeup_flusher_threads_bdi(bdi, reason);
> --
> 2.16.2
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index abafc9bd4622..a27efc4711a9 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -2001,6 +2001,28 @@ void wakeup_flusher_threads(enum wb_reason reason)
 	if (blk_needs_flush_plug(current))
 		blk_schedule_flush_plug(current);
 
+#ifdef CONFIG_CGROUP_WRITEBACK
+	if (reason == WB_REASON_VMSCAN) {
+		int flush = 0;
+
+		rcu_read_lock();
+		list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
+			struct bdi_writeback *wb = wb_find_current(bdi);
+
+			if (wb && wb != &bdi->wb) {
+				if (wb_has_dirty_io(wb)) {
+					wb_start_writeback(wb, reason);
+					flush++;
+				}
+			}
+		}
+		rcu_read_unlock();
+
+		if (flush)
+			return;
+	}
+#endif
+
 	rcu_read_lock();
 	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list)
 		__wakeup_flusher_threads_bdi(bdi, reason);
When a machine has hundreds of memory cgroups, and the cgroups generate dirty pages at varying rates, a single cgroup under heavy memory pressure that repeatedly tries to reclaim dirty pages will trigger writeback for all cgroups, which is inefficient:

1. if the used memory in a memory cgroup has reached its limit, it is useless to write back other cgroups.
2. other cgroups could wait longer and merge more write requests.

So replace the full flush with writeback of only the memory cgroup whose tasks are reclaiming memory and triggering writeback; if nothing is written back, fall back to a full flush.

Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
 fs/fs-writeback.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)