[v4,0/3] protect page cache from freeing inode

Message ID 1582450294-18038-1-git-send-email-laoar.shao@gmail.com (mailing list archive)

Message

Yafang Shao Feb. 23, 2020, 9:31 a.m. UTC
On my server there are some running MEMCGs protected by memory.{min, low},
but I found that the usage of these MEMCGs abruptly became very small,
far below the protection limit. It confused me, and finally I
found that the cause was inode stealing.
Once an inode is freed, all of the page cache it hosts is dropped as
well, no matter how many pages that is. So if we intend to protect the
page cache in a memcg, we must protect its host (the inode) first.
Otherwise the memcg protection can easily be bypassed by freeing the inode,
especially if there are big files in this memcg.
The inherent mismatch between memcg and inode is a problem. One inode can
be shared by different MEMCGs, but that is a very rare case. If an inode is
shared, the page cache it hosts may be charged to different MEMCGs.
Currently there is no perfect solution for this kind of issue, but
inode majority-writer ownership switching can help more or less.
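
To illustrate the idea (a simplified model, not the code in these patches;
the structures and helper names below are invented for this example), the
decision the inode LRU walker needs to make looks roughly like this:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel structures; the field and helper
 * names are made up for this example. */
struct memcg {
        unsigned long usage;    /* current memory usage of the cgroup */
        unsigned long min;      /* memory.min */
        unsigned long low;      /* memory.low */
};

struct inode_model {
        unsigned long nrpages;  /* page cache pages hosted by this inode */
        struct memcg *memcg;    /* memcg most of those pages are charged to */
};

/*
 * Is this memcg still shielded from reclaim?  memory.min always
 * protects; memory.low only protects until reclaim falls back to
 * "low reclaim" mode (the memcg_low_reclaim flag carried down from
 * scan_control in this series).
 */
static bool memcg_protected(const struct memcg *memcg, bool memcg_low_reclaim)
{
        if (!memcg)
                return false;
        if (memcg->usage <= memcg->min)
                return true;
        return !memcg_low_reclaim && memcg->usage <= memcg->low;
}

/*
 * The core of the argument above: freeing the inode would drop all of
 * its page cache at once, so a protected inode must be kept on the LRU
 * (skipped/rotated) instead of being reclaimed.
 */
static bool should_skip_inode(const struct inode_model *inode,
                              bool memcg_low_reclaim)
{
        return inode->nrpages &&
               memcg_protected(inode->memcg, memcg_low_reclaim);
}

int main(void)
{
        struct memcg protected = { .usage = 200, .min = 0, .low = 512 };
        struct inode_model big_file = { .nrpages = 1000, .memcg = &protected };

        /* 1: keep the inode while memory.low still protects the memcg */
        printf("skip under low protection: %d\n",
               should_skip_inode(&big_file, false));
        /* 0: once low reclaim kicks in, the inode may be freed again */
        printf("skip in low-reclaim mode:  %d\n",
               should_skip_inode(&big_file, true));
        return 0;
}

The actual check in this series lives in fs/inode.c (patch 3); the model is
only meant to show why the host inode, and not just its pages, has to be
covered by memory.{min, low}.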

- Changes against v3:
Fix the possible risk pointed out by Johannes in another patchset [1].
Per the discussion with Johannes in that mail thread, I found that the issue
Johannes is trying to fix is different from the issue I'm trying to fix.
That's why I updated this patchset and am posting it again. This specific
memcg protection issue should be addressed.

- Changes against v2:
    1. Separate the memcg patches out from this patchset, as suggested
       by Roman.
    2. Improve the code around the usage of for_each_mem_cgroup(), as
       suggested by Dave.
    3. Use the memcg_low_reclaim flag passed in from scan_control, instead
       of introducing a new member in struct mem_cgroup.
    4. Some other code improvements suggested by Dave.

- Changes against v1:
Use the memcg passed in from the shrink_control, instead of getting it from
the inode itself, as suggested by Dave. That makes the layering better; a
rough sketch of the resulting shape follows the patch list below.

[1]. https://lore.kernel.org/linux-mm/20200211175507.178100-1-hannes@cmpxchg.org/

Yafang Shao (3):
  mm, list_lru: make memcg visible to lru walker isolation function
  mm, shrinker: make memcg low reclaim visible to lru walker isolation
    function
  inode: protect page cache from freeing inode
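
The first two patches above are plumbing: they make the memcg being walked,
and the memcg_low_reclaim state from scan_control, visible to the list_lru
isolation callback, so that the check can be made in fs/inode.c by patch 3.
A rough sketch of that shape (the callback type, context struct and walk
function here are invented for illustration, not the kernel signatures):

#include <stdbool.h>
#include <stdio.h>

struct memcg;                           /* opaque for this sketch */

/*
 * What the isolation callback gets to see once the series is applied:
 * the object itself plus the memcg context of the walk.
 */
struct lru_walk_ctx {
        struct memcg *memcg;            /* memcg whose LRU is being walked */
        bool memcg_low_reclaim;         /* flag carried from scan_control */
};

enum lru_verdict { LRU_FREE, LRU_ROTATE };

typedef enum lru_verdict (*isolate_fn)(void *item,
                                       const struct lru_walk_ctx *ctx);

/* Toy walk: hand each item to the callback together with the context. */
static unsigned long lru_walk(void **items, unsigned long nr,
                              isolate_fn isolate,
                              const struct lru_walk_ctx *ctx)
{
        unsigned long freed = 0;

        for (unsigned long i = 0; i < nr; i++)
                if (isolate(items[i], ctx) == LRU_FREE)
                        freed++;
        return freed;
}

/* An inode isolation callback could now consult ctx->memcg and
 * ctx->memcg_low_reclaim before agreeing to free the object. */
static enum lru_verdict free_everything(void *item,
                                        const struct lru_walk_ctx *ctx)
{
        (void)item;
        (void)ctx;
        return LRU_FREE;
}

int main(void)
{
        void *items[3] = { 0 };
        struct lru_walk_ctx ctx = { .memcg = NULL,
                                    .memcg_low_reclaim = false };

        printf("freed %lu objects\n",
               lru_walk(items, 3, free_everything, &ctx));
        return 0;
}

Carrying the walk context in one argument keeps the walker generic; only
isolation callbacks that care about memcg protection need to look at it.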

 fs/inode.c                 | 76 ++++++++++++++++++++++++++++++++++++++++++++--
 include/linux/memcontrol.h | 21 +++++++++++++
 include/linux/shrinker.h   |  3 ++
 mm/list_lru.c              | 47 ++++++++++++++++------------
 mm/memcontrol.c            | 15 ---------
 mm/vmscan.c                | 27 +++++++++-------
 6 files changed, 141 insertions(+), 48 deletions(-)

Comments

Andrew Morton Feb. 24, 2020, 3:17 a.m. UTC | #1
On Sun, 23 Feb 2020 04:31:31 -0500 Yafang Shao <laoar.shao@gmail.com> wrote:

> On my server there are some running MEMCGs protected by memory.{min, low},
> but I found that the usage of these MEMCGs abruptly became very small,
> far below the protection limit. It confused me, and finally I
> found that the cause was inode stealing.
> Once an inode is freed, all of the page cache it hosts is dropped as
> well, no matter how many pages that is. So if we intend to protect the
> page cache in a memcg, we must protect its host (the inode) first.
> Otherwise the memcg protection can easily be bypassed by freeing the inode,
> especially if there are big files in this memcg.
> The inherent mismatch between memcg and inode is a problem. One inode can
> be shared by different MEMCGs, but that is a very rare case. If an inode is
> shared, the page cache it hosts may be charged to different MEMCGs.
> Currently there is no perfect solution for this kind of issue, but
> inode majority-writer ownership switching can help more or less.

What are the potential regression scenarios here?  Presumably a large
number of inodes pinned by small amounts of pagecache, consuming memory
and CPU during list scanning.  Anything else?

Have you considered constructing test cases to evaluate the impact of
such things?
Yafang Shao Feb. 26, 2020, 2:16 p.m. UTC | #2
On Mon, Feb 24, 2020 at 11:17 AM Andrew Morton
<akpm@linux-foundation.org> wrote:
>
> On Sun, 23 Feb 2020 04:31:31 -0500 Yafang Shao <laoar.shao@gmail.com> wrote:
>
> > On my server there are some running MEMCGs protected by memory.{min, low},
> > but I found that the usage of these MEMCGs abruptly became very small,
> > far below the protection limit. It confused me, and finally I
> > found that the cause was inode stealing.
> > Once an inode is freed, all of the page cache it hosts is dropped as
> > well, no matter how many pages that is. So if we intend to protect the
> > page cache in a memcg, we must protect its host (the inode) first.
> > Otherwise the memcg protection can easily be bypassed by freeing the inode,
> > especially if there are big files in this memcg.
> > The inherent mismatch between memcg and inode is a problem. One inode can
> > be shared by different MEMCGs, but that is a very rare case. If an inode is
> > shared, the page cache it hosts may be charged to different MEMCGs.
> > Currently there is no perfect solution for this kind of issue, but
> > inode majority-writer ownership switching can help more or less.
>
> What are the potential regression scenarios here?  Presumably a large
> number of inodes pinned by small amounts of pagecache, consuming memory
> and CPU during list scanning.  Anything else?
>

Sorry about the late response.

Yes, lots of inodes pinned by small amounts of pagecache in a memcg
protected by memory.{min, low} could be a potential regression
scenario. It may take extra time to skip these inodes when a
workload outside of that memcg is trying to do page reclaim.
But this extra time is mostly negligible compared with the time
memory.{min, low} itself takes, because memory.{min, low} requires
another round to scan all the pages again.
If the workload trying to reclaim these protected inodes is running
inside the protected memcg itself, it will not be affected at all,
because memory.{min, low} does not take effect under that condition.
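
A simplified model of that last point (the struct and helper below are made
up for the example, and the real protection checks are more involved):
memory.{min, low} is evaluated relative to the root of the reclaim, so a
memcg is never treated as protected against reclaim it drives itself, and
the inode-skipping logic never triggers for that workload:

#include <stdbool.h>
#include <stdio.h>

/* Toy memcg, as in the earlier sketch. */
struct memcg {
        unsigned long usage, min, low;
};

/*
 * memory.{min, low} shields a memcg only from reclaim that originates
 * outside of it.  When the reclaim root is the protected memcg itself,
 * the check says "not protected" and its inodes are reclaimed as usual.
 */
static bool memcg_protected_from(const struct memcg *reclaim_root,
                                 const struct memcg *memcg,
                                 bool memcg_low_reclaim)
{
        if (reclaim_root == memcg)      /* memcg reclaiming itself */
                return false;
        if (memcg->usage <= memcg->min)
                return true;
        return !memcg_low_reclaim && memcg->usage <= memcg->low;
}

int main(void)
{
        struct memcg global_root = { 0 };
        struct memcg protected = { .usage = 200, .min = 256, .low = 512 };

        /* Reclaim from outside has to skip the protected inodes ... */
        printf("global reclaim: protected=%d\n",
               memcg_protected_from(&global_root, &protected, false));
        /* ... but reclaim driven by the memcg itself is unaffected. */
        printf("self reclaim:   protected=%d\n",
               memcg_protected_from(&protected, &protected, false));
        return 0;
}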

> Have you considered constructing test cases to evaluate the impact of
> such things?
>

I ran some mmtests::pagereclaim-performance tests with lots of
protected inodes pinned by small amounts of pagecache. The results
don't show any obvious difference with this patchset applied. I will
include these test data in the next update. I would appreciate it if
you could recommend some better test tools.