Message ID | bca13f1ae6a72f0d126cd7e9ede11baaa2b81064.1477081587.git.shli@fb.com (mailing list archive)
---|---
State | New, archived
On Fri, Oct 21, 2016 at 01:35:14PM -0700, Shaohua Li wrote:
> In our systems, proc/sysfs inode/dentry cache use more than 1G memory
> even memory pressure is high sometimes. Since proc/sysfs is in-memory
> filesystem, rebuilding the cache is fast. There is no point proc/sysfs
> and disk fs have equal pressure for slab shrink.
>
> One idea is directly discarding proc/sysfs inode/dentry cache rightly
> after the proc/sysfs file is closed. But the discarding will make
> proc/sysfs file open slower next time, which is 20x slower in my test if
> multiple applications are accessing proc files. This patch doesn't go
> that far. Instead, just put more pressure to shrink proc/sysfs slabs.
>
> Signed-off-by: Shaohua Li <shli@fb.com>
> ---
>  fs/kernfs/mount.c | 2 ++
>  fs/proc/inode.c   | 2 ++
>  2 files changed, 4 insertions(+)
>
> diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c
> index d5b149a..5b4e747 100644
> --- a/fs/kernfs/mount.c
> +++ b/fs/kernfs/mount.c
> @@ -161,6 +161,8 @@ static int kernfs_fill_super(struct super_block *sb, unsigned long magic)
>  	sb->s_xattr = kernfs_xattr_handlers;
>  	sb->s_time_gran = 1;
>
> +	sb->s_shrink.seeks = 1;
> +	sb->s_shrink.batch = 0;

This sort of thing needs comments as to why they are being changed.
Otherwise the next person who comes along to do shrinker modifications
won't have a clue about why this magic exists.

Also, I don't think s_shrink.batch = 0 does what you think it does. The
superblock batch size default of 1024 is more efficient than setting
sb->s_shrink.batch = 0, as that makes the shrinker use SHRINK_BATCH:

	#define SHRINK_BATCH 128

i.e. it does less work per batch so has more overhead....

Cheers,

Dave.
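[Editor's note: for context on the two knobs under discussion, the superblock shrinker's scan target scales roughly inversely with s_shrink.seeks (DEFAULT_SEEKS is 2, so seeks = 1 asks for about twice the reclaim work), while a batch of 0 makes the generic shrinker fall back to SHRINK_BATCH (128), which is smaller than the superblock default of 1024. The snippet below is only a simplified, standalone illustration of that arithmetic, not the actual mm/vmscan.c code; the numbers fed to it are hypothetical.]

/*
 * Simplified illustration only -- NOT the actual mm/vmscan.c logic.
 * It models the two effects discussed in the review:
 *   - the scan target scales inversely with 'seeks', so seeks = 1 asks for
 *     roughly twice the work of DEFAULT_SEEKS (2);
 *   - a shrinker batch of 0 falls back to SHRINK_BATCH (128), i.e. smaller
 *     batches with more per-call overhead than the superblock default (1024).
 */
#include <stdio.h>

#define DEFAULT_SEEKS	2
#define SHRINK_BATCH	128

/* rough shape of the shrinker pressure calculation: more pagecache
 * scanning and fewer "seeks" both mean more slab objects to scan */
static unsigned long scan_target(unsigned long freeable,
				 unsigned long nr_scanned,
				 unsigned long nr_eligible,
				 unsigned int seeks)
{
	unsigned long delta = (4UL * nr_scanned) / seeks;

	delta *= freeable;
	return delta / (nr_eligible + 1);
}

/* batch == 0 means "use the generic fallback", i.e. SHRINK_BATCH */
static unsigned long effective_batch(long batch)
{
	return batch > 0 ? batch : SHRINK_BATCH;
}

int main(void)
{
	/* hypothetical numbers, purely for illustration */
	unsigned long freeable = 100000, scanned = 4096, eligible = 8192;

	printf("seeks=2: scan target %lu objects\n",
	       scan_target(freeable, scanned, eligible, DEFAULT_SEEKS));
	printf("seeks=1: scan target %lu objects\n",
	       scan_target(freeable, scanned, eligible, 1));
	printf("batch=0    -> %lu objects per call\n", effective_batch(0));
	printf("batch=1024 -> %lu objects per call\n", effective_batch(1024));
	return 0;
}

With these made-up inputs, seeks = 1 yields twice the scan target of the default, which matches the patch's stated intent, while batch = 0 moves in the opposite direction from what was presumably intended, which is Dave's second point.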
diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c
index d5b149a..5b4e747 100644
--- a/fs/kernfs/mount.c
+++ b/fs/kernfs/mount.c
@@ -161,6 +161,8 @@ static int kernfs_fill_super(struct super_block *sb, unsigned long magic)
 	sb->s_xattr = kernfs_xattr_handlers;
 	sb->s_time_gran = 1;
 
+	sb->s_shrink.seeks = 1;
+	sb->s_shrink.batch = 0;
 	/* get root inode, initialize and unlock it */
 	mutex_lock(&kernfs_mutex);
 	inode = kernfs_get_inode(sb, info->root->kn);
diff --git a/fs/proc/inode.c b/fs/proc/inode.c
index e69ebe6..afef9fb 100644
--- a/fs/proc/inode.c
+++ b/fs/proc/inode.c
@@ -474,6 +474,8 @@ int proc_fill_super(struct super_block *s, void *data, int silent)
 	s->s_op = &proc_sops;
 	s->s_time_gran = 1;
 
+	s->s_shrink.seeks = 1;
+	s->s_shrink.batch = 0;
 	/*
 	 * procfs isn't actually a stacking filesystem; however, there is
 	 * too much magic going on inside it to permit stacking things on
In our systems, the proc/sysfs inode/dentry caches sometimes use more than 1G of memory even when memory pressure is high. Since proc/sysfs are in-memory filesystems, rebuilding the cache is fast, so there is no point in proc/sysfs and disk filesystems getting equal pressure from the slab shrinkers.

One idea is to discard the proc/sysfs inode/dentry cache right after the proc/sysfs file is closed. But that discarding makes the next open of a proc/sysfs file slower, 20x slower in my test when multiple applications are accessing proc files. This patch doesn't go that far. Instead, it just puts more pressure on shrinking the proc/sysfs slabs.

Signed-off-by: Shaohua Li <shli@fb.com>
---
 fs/kernfs/mount.c | 2 ++
 fs/proc/inode.c   | 2 ++
 2 files changed, 4 insertions(+)
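[Editor's note: a respin addressing the review might keep the lowered seeks value but document it, and drop the batch override so the superblock default of 1024 stays in effect. The lines below are only a hypothetical sketch of what the kernfs_fill_super() hunk could look like, not a revision that was actually posted.]

	/*
	 * Hypothetical respin sketch (not an actual posted revision).
	 *
	 * kernfs is an in-memory filesystem, so evicted dentries/inodes are
	 * cheap to rebuild.  Lower 'seeks' below DEFAULT_SEEKS (2) so these
	 * caches are reclaimed more aggressively than those of disk-backed
	 * filesystems.  Leave s_shrink.batch alone: the superblock default
	 * of 1024 does more work per call than SHRINK_BATCH (128), which is
	 * what a value of 0 would fall back to.
	 */
	sb->s_shrink.seeks = 1;

The same reasoning would apply to the proc_fill_super() hunk, with s->s_shrink.seeks in place of sb->s_shrink.seeks.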