[1/7] fs/writeback: avoid to writeback non-expired inode in kupdate writeback

Message ID 20240208172024.23625-2-shikemeng@huaweicloud.com (mailing list archive)
State New
Series Fixes and cleanups to fs-writeback

Commit Message

Kemeng Shi Feb. 8, 2024, 5:20 p.m. UTC
In kupdate writeback, only expired inodes (those that have been dirty for
longer than dirty_expire_interval) are supposed to be written back. However,
kupdate writeback will write back non-expired inodes left in b_io or
b_more_io by the last wb_writeback. As a result, writeback keeps being
triggered unexpectedly when we keep dirtying pages, even though dirty memory
is under the threshold and the inodes are not expired. To be more specific:
Assume the dirty background threshold is > 1G and dirty_expire_centisecs is
> 60s. When we run fio -size=1G -invalidate=0 -ioengine=libaio
--time_based -runtime=60... (keep dirtying), writeback keeps being
triggered as follows:
wb_workfn
  wb_do_writeback
    wb_check_background_flush
      /*
       * The wb dirty background threshold starts at 0 when the device was
       * idle and grows as the bandwidth of the wb is updated, so a
       * background writeback is triggered.
       */
      wb_over_bg_thresh
      /*
       * The dirtied inode will be written back and added to the b_more_io
       * list after the slice is used up (because we keep dirtying the
       * inode).
       */
      wb_writeback

Writeback is then triggered every dirty_writeback_centisecs as follows:
wb_workfn
  wb_do_writeback
    wb_check_old_data_flush
      /*
       * Write back inodes left in b_io and b_more_io by the last
       * wb_writeback even though they are not expired; they will be added
       * to b_more_io again as the slice is used up (because we keep
       * dirtying the inode).
       */
      wb_writeback

Fix this by moving non-expired inodes left on the io lists by the last
wb_writeback back to the dirty list in kupdate writeback.

Test as follows:
/* make it easier to observe the issue */
echo 300000 > /proc/sys/vm/dirty_expire_centisecs
echo 100 > /proc/sys/vm/dirty_writeback_centisecs
/* create an idle device */
mkfs.ext4 -F /dev/vdb
mount /dev/vdb /bdi1/
/* run buffer write with fio */
fio -name test -filename=/bdi1/file -size=800M -ioengine=libaio -bs=4K \
-iodepth=1 -rw=write -direct=0 --time_based -runtime=60 -invalidate=0

Result before fix (run three tests):
1360MB/s
1329MB/s
1455MB/s

Result after fix (run three tests):
790MB/s
1820MB/s
1804MB/s

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 fs/fs-writeback.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

Comments

Tim Chen Feb. 8, 2024, 6:29 p.m. UTC | #1
On Fri, 2024-02-09 at 01:20 +0800, Kemeng Shi wrote:
> 
>  
> +static void filter_expired_io(struct bdi_writeback *wb)
> +{
> +	struct inode *inode, *tmp;
> +	unsigned long expired_jiffies = jiffies -
> +		msecs_to_jiffies(dirty_expire_interval * 10);

We have the kupdate trigger time hard-coded here with a factor of 10
applied to the expire interval. The trigger time
"msecs_to_jiffies(dirty_expire_interval * 10)" is also used in
wb_writeback(). It would be better to have a macro or #define to
encapsulate the trigger time, so that if for any reason we need to tune
it, we only have to change it in one place.

Tim

> +
> +	spin_lock(&wb->list_lock);
> +	list_for_each_entry_safe(inode, tmp, &wb->b_io, i_io_list)
> +		if (inode_dirtied_after(inode, expired_jiffies))
> +			redirty_tail(inode, wb);
> +
> +	list_for_each_entry_safe(inode, tmp, &wb->b_more_io, i_io_list)
> +		if (inode_dirtied_after(inode, expired_jiffies))
> +			redirty_tail(inode, wb);
> +	spin_unlock(&wb->list_lock);
> +}
> +
>  /*
>   * Explicit flushing or periodic writeback of "old" data.
>   *
> @@ -2070,6 +2087,9 @@ static long wb_writeback(struct bdi_writeback *wb,
>  	long progress;
>  	struct blk_plug plug;
>  
> +	if (work->for_kupdate)
> +		filter_expired_io(wb);
> +
>  	blk_start_plug(&plug);
>  	for (;;) {
>  		/*
Kemeng Shi Feb. 18, 2024, 2:01 a.m. UTC | #2
on 2/9/2024 2:29 AM, Tim Chen wrote:
> On Fri, 2024-02-09 at 01:20 +0800, Kemeng Shi wrote:
>>
>>  
>> +static void filter_expired_io(struct bdi_writeback *wb)
>> +{
>> +	struct inode *inode, *tmp;
>> +	unsigned long expired_jiffies = jiffies -
>> +		msecs_to_jiffies(dirty_expire_interval * 10);
> 
> We have kupdate trigger time hard coded with a factor of 10 to expire interval here.
> The kupdate trigger time "mssecs_to_jiffies(dirty_expire_interval * 10)" is
> also used in wb_writeback().  It will be better to have a macro or #define
> to encapsulate the trigger time so if for any reason we need
> to tune the trigger time, we just need to change it at one place.
Hi Tim. Sorry for the late reply, I was on vacation these days.
I agree it's better to have a macro and I will add one in the next version.
Thanks!
> 
> Tim
> 
>> +
>> +	spin_lock(&wb->list_lock);
>> +	list_for_each_entry_safe(inode, tmp, &wb->b_io, i_io_list)
>> +		if (inode_dirtied_after(inode, expired_jiffies))
>> +			redirty_tail(inode, wb);
>> +
>> +	list_for_each_entry_safe(inode, tmp, &wb->b_more_io, i_io_list)
>> +		if (inode_dirtied_after(inode, expired_jiffies))
>> +			redirty_tail(inode, wb);
>> +	spin_unlock(&wb->list_lock);
>> +}
>> +
>>  /*
>>   * Explicit flushing or periodic writeback of "old" data.
>>   *
>> @@ -2070,6 +2087,9 @@ static long wb_writeback(struct bdi_writeback *wb,
>>  	long progress;
>>  	struct blk_plug plug;
>>  
>> +	if (work->for_kupdate)
>> +		filter_expired_io(wb);
>> +
>>  	blk_start_plug(&plug);
>>  	for (;;) {
>>  		/*
>
Jan Kara Feb. 23, 2024, 1:42 p.m. UTC | #3
On Fri 09-02-24 01:20:18, Kemeng Shi wrote:
> In kupdate writeback, only expired inode (have been dirty for longer than
> dirty_expire_interval) is supposed to be written back. However, kupdate
> writeback will writeback non-expired inode left in b_io or b_more_io from
> last wb_writeback. As a result, writeback will keep being triggered
> unexpected when we keep dirtying pages even dirty memory is under
> threshold and inode is not expired. To be more specific:
> Assume dirty background threshold is > 1G and dirty_expire_centisecs is
> > 60s. When we running fio -size=1G -invalidate=0 -ioengine=libaio
> --time_based -runtime=60... (keep dirtying), the writeback will keep
> being triggered as following:
> wb_workfn
>   wb_do_writeback
>     wb_check_background_flush
>       /*
>        * Wb dirty background threshold starts at 0 if device was idle and
>        * grows up when bandwidth of wb is updated. So a background
>        * writeback is triggered.
>        */
>       wb_over_bg_thresh
>       /*
>        * Dirtied inode will be written back and added to b_more_io list
>        * after slice used up (because we keep dirtying the inode).
>        */
>       wb_writeback
> 
> Writeback is triggered per dirty_writeback_centisecs as following:
> wb_workfn
>   wb_do_writeback
>     wb_check_old_data_flush
>       /*
>        * Write back inode left in b_io and b_more_io from last wb_writeback
>        * even the inode is non-expired and it will be added to b_more_io
>        * again as slice will be used up (because we keep dirtying the
>        * inode)
>        */
>       wb_writeback
> 
> Fix this by moving non-expired inode in io list from last wb_writeback to
> dirty list in kudpate writeback.
> 
> Test as following:
> /* make it more easier to observe the issue */
> echo 300000 > /proc/sys/vm/dirty_expire_centisecs
> echo 100 > /proc/sys/vm/dirty_writeback_centisecs
> /* create a idle device */
> mkfs.ext4 -F /dev/vdb
> mount /dev/vdb /bdi1/
> /* run buffer write with fio */
> fio -name test -filename=/bdi1/file -size=800M -ioengine=libaio -bs=4K \
> -iodepth=1 -rw=write -direct=0 --time_based -runtime=60 -invalidate=0
> 
> Result before fix (run three tests):
> 1360MB/s
> 1329MB/s
> 1455MB/s
> 
> Result after fix (run three tests);
> 790MB/s
> 1820MB/s
> 1804MB/s
> 
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>

OK, I don't find this a particularly troubling problem but I agree it might
be nice to fix. But filtering the lists in wb_writeback() like this seems
kind of wrong - the queueing is managed in queue_io() and I'd prefer to
keep it that way. What if we instead modified requeue_inode() to not
requeue_io() inodes when we are doing kupdate-style writeback and the
inode isn't expired?

Sure, we would still possibly write back unexpired inodes once before
calling redirty_tail_locked() on them, but that shouldn't really be
noticeable?

								Honza
> ---
>  fs/fs-writeback.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
> 
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index 5ab1aaf805f7..a9a918972719 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -2046,6 +2046,23 @@ static long writeback_inodes_wb(struct bdi_writeback *wb, long nr_pages,
>  	return nr_pages - work.nr_pages;
>  }
>  
> +static void filter_expired_io(struct bdi_writeback *wb)
> +{
> +	struct inode *inode, *tmp;
> +	unsigned long expired_jiffies = jiffies -
> +		msecs_to_jiffies(dirty_expire_interval * 10);
> +
> +	spin_lock(&wb->list_lock);
> +	list_for_each_entry_safe(inode, tmp, &wb->b_io, i_io_list)
> +		if (inode_dirtied_after(inode, expired_jiffies))
> +			redirty_tail(inode, wb);
> +
> +	list_for_each_entry_safe(inode, tmp, &wb->b_more_io, i_io_list)
> +		if (inode_dirtied_after(inode, expired_jiffies))
> +			redirty_tail(inode, wb);
> +	spin_unlock(&wb->list_lock);
> +}
> +
>  /*
>   * Explicit flushing or periodic writeback of "old" data.
>   *
> @@ -2070,6 +2087,9 @@ static long wb_writeback(struct bdi_writeback *wb,
>  	long progress;
>  	struct blk_plug plug;
>  
> +	if (work->for_kupdate)
> +		filter_expired_io(wb);
> +
>  	blk_start_plug(&plug);
>  	for (;;) {
>  		/*
> -- 
> 2.30.0
>
Kemeng Shi Feb. 26, 2024, 11:47 a.m. UTC | #4
on 2/23/2024 9:42 PM, Jan Kara wrote:
> On Fri 09-02-24 01:20:18, Kemeng Shi wrote:
>> In kupdate writeback, only expired inode (have been dirty for longer than
>> dirty_expire_interval) is supposed to be written back. However, kupdate
>> writeback will writeback non-expired inode left in b_io or b_more_io from
>> last wb_writeback. As a result, writeback will keep being triggered
>> unexpected when we keep dirtying pages even dirty memory is under
>> threshold and inode is not expired. To be more specific:
>> Assume dirty background threshold is > 1G and dirty_expire_centisecs is
>>> 60s. When we running fio -size=1G -invalidate=0 -ioengine=libaio
>> --time_based -runtime=60... (keep dirtying), the writeback will keep
>> being triggered as following:
>> wb_workfn
>>   wb_do_writeback
>>     wb_check_background_flush
>>       /*
>>        * Wb dirty background threshold starts at 0 if device was idle and
>>        * grows up when bandwidth of wb is updated. So a background
>>        * writeback is triggered.
>>        */
>>       wb_over_bg_thresh
>>       /*
>>        * Dirtied inode will be written back and added to b_more_io list
>>        * after slice used up (because we keep dirtying the inode).
>>        */
>>       wb_writeback
>>
>> Writeback is triggered per dirty_writeback_centisecs as following:
>> wb_workfn
>>   wb_do_writeback
>>     wb_check_old_data_flush
>>       /*
>>        * Write back inode left in b_io and b_more_io from last wb_writeback
>>        * even the inode is non-expired and it will be added to b_more_io
>>        * again as slice will be used up (because we keep dirtying the
>>        * inode)
>>        */
>>       wb_writeback
>>
>> Fix this by moving non-expired inode in io list from last wb_writeback to
>> dirty list in kudpate writeback.
>>
>> Test as following:
>> /* make it more easier to observe the issue */
>> echo 300000 > /proc/sys/vm/dirty_expire_centisecs
>> echo 100 > /proc/sys/vm/dirty_writeback_centisecs
>> /* create a idle device */
>> mkfs.ext4 -F /dev/vdb
>> mount /dev/vdb /bdi1/
>> /* run buffer write with fio */
>> fio -name test -filename=/bdi1/file -size=800M -ioengine=libaio -bs=4K \
>> -iodepth=1 -rw=write -direct=0 --time_based -runtime=60 -invalidate=0
>>
>> Result before fix (run three tests):
>> 1360MB/s
>> 1329MB/s
>> 1455MB/s
>>
>> Result after fix (run three tests);
>> 790MB/s
>> 1820MB/s
>> 1804MB/s
>>
>> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> 
> OK, I don't find this a particularly troubling problem but I agree it might
> be nice to fix. But filtering the lists in wb_writeback() like this seems
> kind of wrong - the queueing is managed in queue_io() and I'd prefer to
> keep it that way. What if we just modified requeue_inode() to not
> requeue_io() inodes in case we are doing kupdate style writeback and inode
> isn't expired?
Sure, this would solve the reported problem and is acceptable to me. Thanks
for the advice. I will try it in the next version.
> 
> Sure we will still possibly writeback unexpired inodes once before calling
> redirty_tail_locked() on them but that shouldn't really be noticeable?
> 
> 								Honza
>> ---
>>  fs/fs-writeback.c | 20 ++++++++++++++++++++
>>  1 file changed, 20 insertions(+)
>>
>> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
>> index 5ab1aaf805f7..a9a918972719 100644
>> --- a/fs/fs-writeback.c
>> +++ b/fs/fs-writeback.c
>> @@ -2046,6 +2046,23 @@ static long writeback_inodes_wb(struct bdi_writeback *wb, long nr_pages,
>>  	return nr_pages - work.nr_pages;
>>  }
>>  
>> +static void filter_expired_io(struct bdi_writeback *wb)
>> +{
>> +	struct inode *inode, *tmp;
>> +	unsigned long expired_jiffies = jiffies -
>> +		msecs_to_jiffies(dirty_expire_interval * 10);
>> +
>> +	spin_lock(&wb->list_lock);
>> +	list_for_each_entry_safe(inode, tmp, &wb->b_io, i_io_list)
>> +		if (inode_dirtied_after(inode, expired_jiffies))
>> +			redirty_tail(inode, wb);
>> +
>> +	list_for_each_entry_safe(inode, tmp, &wb->b_more_io, i_io_list)
>> +		if (inode_dirtied_after(inode, expired_jiffies))
>> +			redirty_tail(inode, wb);
>> +	spin_unlock(&wb->list_lock);
>> +}
>> +
>>  /*
>>   * Explicit flushing or periodic writeback of "old" data.
>>   *
>> @@ -2070,6 +2087,9 @@ static long wb_writeback(struct bdi_writeback *wb,
>>  	long progress;
>>  	struct blk_plug plug;
>>  
>> +	if (work->for_kupdate)
>> +		filter_expired_io(wb);
>> +
>>  	blk_start_plug(&plug);
>>  	for (;;) {
>>  		/*
>> -- 
>> 2.30.0
>>
Kemeng Shi Feb. 28, 2024, 1:46 a.m. UTC | #5
on 2/18/2024 10:01 AM, Kemeng Shi wrote:
> 
> 
> on 2/9/2024 2:29 AM, Tim Chen wrote:
>> On Fri, 2024-02-09 at 01:20 +0800, Kemeng Shi wrote:
>>>
>>>  
>>> +static void filter_expired_io(struct bdi_writeback *wb)
>>> +{
>>> +	struct inode *inode, *tmp;
>>> +	unsigned long expired_jiffies = jiffies -
>>> +		msecs_to_jiffies(dirty_expire_interval * 10);
>>
>> We have kupdate trigger time hard coded with a factor of 10 to expire interval here.
>> The kupdate trigger time "mssecs_to_jiffies(dirty_expire_interval * 10)" is
>> also used in wb_writeback().  It will be better to have a macro or #define
>> to encapsulate the trigger time so if for any reason we need
>> to tune the trigger time, we just need to change it at one place.
> Hi Tim. Sorry for the late reply, I was on vacation these days.
> I agree it's better to have a macro and I will add it in next version.
> Thanks!
Hi Tim,
After a deeper look, I plan to store dirty_expire_interval in jiffies within
the sysctl handler. Then we could use dirty_expire_interval directly instead
of "msecs_to_jiffies(dirty_expire_interval * 10)" and no macro is needed.
Similarly, dirty_writeback_interval and dirtytime_expire_interval could be
stored in jiffies to remove the repeated conversion from centisecs to
jiffies. I will submit a new series to do this if no one is against it.
Thanks!
>>
>> Tim
>>
>>> +
>>> +	spin_lock(&wb->list_lock);
>>> +	list_for_each_entry_safe(inode, tmp, &wb->b_io, i_io_list)
>>> +		if (inode_dirtied_after(inode, expired_jiffies))
>>> +			redirty_tail(inode, wb);
>>> +
>>> +	list_for_each_entry_safe(inode, tmp, &wb->b_more_io, i_io_list)
>>> +		if (inode_dirtied_after(inode, expired_jiffies))
>>> +			redirty_tail(inode, wb);
>>> +	spin_unlock(&wb->list_lock);
>>> +}
>>> +
>>>  /*
>>>   * Explicit flushing or periodic writeback of "old" data.
>>>   *
>>> @@ -2070,6 +2087,9 @@ static long wb_writeback(struct bdi_writeback *wb,
>>>  	long progress;
>>>  	struct blk_plug plug;
>>>  
>>> +	if (work->for_kupdate)
>>> +		filter_expired_io(wb);
>>> +
>>>  	blk_start_plug(&plug);
>>>  	for (;;) {
>>>  		/*
>>
Patch

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 5ab1aaf805f7..a9a918972719 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -2046,6 +2046,23 @@  static long writeback_inodes_wb(struct bdi_writeback *wb, long nr_pages,
 	return nr_pages - work.nr_pages;
 }
 
+static void filter_expired_io(struct bdi_writeback *wb)
+{
+	struct inode *inode, *tmp;
+	unsigned long expired_jiffies = jiffies -
+		msecs_to_jiffies(dirty_expire_interval * 10);
+
+	spin_lock(&wb->list_lock);
+	list_for_each_entry_safe(inode, tmp, &wb->b_io, i_io_list)
+		if (inode_dirtied_after(inode, expired_jiffies))
+			redirty_tail(inode, wb);
+
+	list_for_each_entry_safe(inode, tmp, &wb->b_more_io, i_io_list)
+		if (inode_dirtied_after(inode, expired_jiffies))
+			redirty_tail(inode, wb);
+	spin_unlock(&wb->list_lock);
+}
+
 /*
  * Explicit flushing or periodic writeback of "old" data.
  *
@@ -2070,6 +2087,9 @@  static long wb_writeback(struct bdi_writeback *wb,
 	long progress;
 	struct blk_plug plug;
 
+	if (work->for_kupdate)
+		filter_expired_io(wb);
+
 	blk_start_plug(&plug);
 	for (;;) {
 		/*