Message ID | 20180312223632.GA6124@castle (mailing list archive)
---|---
State | New, archived
On Mon, Mar 12, 2018 at 10:36:38PM +0000, Roman Gushchin wrote:
> Ah, I see...
>
> I think, it's better to account them when we're actually freeing,
> otherwise we will have strange path:
> (indirectly) reclaimable -> unreclaimable -> free
>
> Do you agree?

> +static void __d_free_external_name(struct rcu_head *head)
> +{
> +	struct external_name *name;
> +
> +	name = container_of(head, struct external_name, u.head);
> +
> +	mod_node_page_state(page_pgdat(virt_to_page(name)),
> +			    NR_INDIRECTLY_RECLAIMABLE_BYTES,
> +			    -ksize(name));
> +
> +	kfree(name);
> +}

Maybe, but then you want to call that from __d_free_external() and from
failure path in __d_alloc() as well. Duplicating something that convoluted
and easy to get out of sync is just asking for trouble.
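For illustration, a sketch of what Al is suggesting: have __d_free_external() delegate to the shared helper so the counter adjustment lives in exactly one place. This is not part of the patch posted above; the placement and details are a guess at the intent.

```c
/*
 * Sketch only (not the posted patch): let __d_free_external() reuse the
 * shared helper so NR_INDIRECTLY_RECLAIMABLE_BYTES is decremented in a
 * single place instead of being open-coded in every free path.
 */
static void __d_free_external(struct rcu_head *head)
{
	struct dentry *dentry = container_of(head, struct dentry, d_u.d_rcu);

	/* Frees the external name and undoes its byte accounting. */
	__d_free_external_name(&external_name(dentry)->u.head);

	kmem_cache_free(dentry_cache, dentry);
}
```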
On Tue, 13 Mar 2018 00:45:32 +0000 Al Viro <viro@ZenIV.linux.org.uk> wrote:

> On Mon, Mar 12, 2018 at 10:36:38PM +0000, Roman Gushchin wrote:
>
> > Ah, I see...
> >
> > I think, it's better to account them when we're actually freeing,
> > otherwise we will have strange path:
> > (indirectly) reclaimable -> unreclaimable -> free
> >
> > Do you agree?
>
> > +static void __d_free_external_name(struct rcu_head *head)
> > +{
> > +	struct external_name *name;
> > +
> > +	name = container_of(head, struct external_name, u.head);
> > +
> > +	mod_node_page_state(page_pgdat(virt_to_page(name)),
> > +			    NR_INDIRECTLY_RECLAIMABLE_BYTES,
> > +			    -ksize(name));
> > +
> > +	kfree(name);
> > +}
>
> Maybe, but then you want to call that from __d_free_external() and from
> failure path in __d_alloc() as well. Duplicating something that convoluted
> and easy to get out of sync is just asking for trouble.

So.. where are we at with this issue?
On Thu, Apr 05, 2018 at 03:11:23PM -0700, Andrew Morton wrote:
> On Tue, 13 Mar 2018 00:45:32 +0000 Al Viro <viro@ZenIV.linux.org.uk> wrote:
>
> > On Mon, Mar 12, 2018 at 10:36:38PM +0000, Roman Gushchin wrote:
> >
> > > Ah, I see...
> > >
> > > I think, it's better to account them when we're actually freeing,
> > > otherwise we will have strange path:
> > > (indirectly) reclaimable -> unreclaimable -> free
> > >
> > > Do you agree?
> >
> > > +static void __d_free_external_name(struct rcu_head *head)
> > > +{
> > > +	struct external_name *name;
> > > +
> > > +	name = container_of(head, struct external_name, u.head);
> > > +
> > > +	mod_node_page_state(page_pgdat(virt_to_page(name)),
> > > +			    NR_INDIRECTLY_RECLAIMABLE_BYTES,
> > > +			    -ksize(name));
> > > +
> > > +	kfree(name);
> > > +}
> >
> > Maybe, but then you want to call that from __d_free_external() and from
> > failure path in __d_alloc() as well. Duplicating something that convoluted
> > and easy to get out of sync is just asking for trouble.
>
> So.. where are we at with this issue?

I assume that commit 0babe6fe1da3 ("dcache: fix indirectly reclaimable
memory accounting") addresses the issue. __d_free_external_name() is now
called from all release paths (including __d_free_external()) and is the
only place where NR_INDIRECTLY_RECLAIMABLE_BYTES is decremented.

__d_alloc()'s error path is slightly different, because I bump
NR_INDIRECTLY_RECLAIMABLE_BYTES at the very last moment, when it's already
clear that no errors occurred. So we don't need to increase and decrease
the counter back and forth.

Thank you!
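For context, the allocation side Roman describes would look roughly like the tail of __d_alloc() below: the counter is incremented only once allocation and setup can no longer fail, so no error path has to undo it. The 'ext' variable name and the exact placement are illustrative, not quoted from the commit.

```c
	/*
	 * Illustrative sketch of the __d_alloc() tail: 'ext' points to the
	 * external name allocated earlier (NULL when the name fits inline).
	 * The accounting happens only here, after every failure check has
	 * passed, so the error paths never need a matching decrement.
	 */
	if (unlikely(ext)) {
		pg_data_t *pgdat = page_pgdat(virt_to_page(ext));

		mod_node_page_state(pgdat, NR_INDIRECTLY_RECLAIMABLE_BYTES,
				    ksize(ext));
	}

	return dentry;
```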
diff --git a/fs/dcache.c b/fs/dcache.c
index 98826efe22a0..19bc7495a6c4 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -266,6 +266,19 @@ static void __d_free(struct rcu_head *head)
 	kmem_cache_free(dentry_cache, dentry);
 }
 
+static void __d_free_external_name(struct rcu_head *head)
+{
+	struct external_name *name;
+
+	name = container_of(head, struct external_name, u.head);
+
+	mod_node_page_state(page_pgdat(virt_to_page(name)),
+			    NR_INDIRECTLY_RECLAIMABLE_BYTES,
+			    -ksize(name));
+
+	kfree(name);
+}
+
 static void __d_free_external(struct rcu_head *head)
 {
 	struct dentry *dentry = container_of(head, struct dentry, d_u.d_rcu);
@@ -307,7 +320,7 @@ void release_dentry_name_snapshot(struct name_snapshot *name)
 		struct external_name *p;
 		p = container_of(name->name, struct external_name, name[0]);
 		if (unlikely(atomic_dec_and_test(&p->u.count)))
-			kfree_rcu(p, u.head);
+			call_rcu(&p->u.head, __d_free_external_name);
 	}
 }
 EXPORT_SYMBOL(release_dentry_name_snapshot);
@@ -2769,7 +2782,7 @@ static void copy_name(struct dentry *dentry, struct dentry *target)
 		dentry->d_name.hash_len = target->d_name.hash_len;
 	}
 	if (old_name && likely(atomic_dec_and_test(&old_name->u.count)))
-		kfree_rcu(old_name, u.head);
+		call_rcu(&old_name->u.head, __d_free_external_name);
 }
 
 static void dentry_lock_for_move(struct dentry *dentry, struct dentry *target)
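As background on why the byte counter exists at all: elsewhere in Roman's series the value is folded into the MemAvailable estimate. The sketch below only illustrates that idea; the function name is made up here and the real hunk lives in mm/page_alloc.c's si_mem_available().

```c
/*
 * Sketch only: how indirectly reclaimable bytes could feed into the
 * "available memory" estimate.  The function name is illustrative.
 */
static unsigned long si_mem_available_sketch(void)
{
	unsigned long available = 0;

	/* ... page cache, reclaimable slab, etc. are summed up here ... */

	/*
	 * Indirectly reclaimable memory (e.g. external dentry names) is
	 * tracked in bytes, so convert it to pages before adding it to
	 * the estimate.
	 */
	available += global_node_page_state(NR_INDIRECTLY_RECLAIMABLE_BYTES)
			>> PAGE_SHIFT;

	return available;
}
```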