Message ID | 20190605024454.1393507-2-guro@fb.com (mailing list archive) |
---|---|
State | New, archived |
Series | mm: reparent slab memory on cgroup removal |
On Tue, Jun 4, 2019 at 7:45 PM Roman Gushchin <guro@fb.com> wrote:
>
> Johannes noticed that reading the memcg kmem_cache pointer in
> cache_from_memcg_idx() is performed using the READ_ONCE() macro,
> which doesn't implement an SMP barrier, which is required
> by the logic.
>
> Add a proper smp_rmb() to be paired with smp_wmb() in
> memcg_create_kmem_cache().
>
> The same applies to memcg_create_kmem_cache() itself,
> which reads the same value without barriers or READ_ONCE().
>
> Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Roman Gushchin <guro@fb.com>

Reviewed-by: Shakeel Butt <shakeelb@google.com>

This seems independent of the series. Shouldn't this be Cc'ed to stable?

> ---
>  mm/slab.h        | 1 +
>  mm/slab_common.c | 3 ++-
>  2 files changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 739099af6cbb..1176b61bb8fc 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -260,6 +260,7 @@ cache_from_memcg_idx(struct kmem_cache *s, int idx)
>  	 * memcg_caches issues a write barrier to match this (see
>  	 * memcg_create_kmem_cache()).
>  	 */
> +	smp_rmb();
>  	cachep = READ_ONCE(arr->entries[idx]);
>  	rcu_read_unlock();
>
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 58251ba63e4a..8092bdfc05d5 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -652,7 +652,8 @@ void memcg_create_kmem_cache(struct mem_cgroup *memcg,
>  	 * allocation (see memcg_kmem_get_cache()), several threads can try to
>  	 * create the same cache, but only one of them may succeed.
>  	 */
> -	if (arr->entries[idx])
> +	smp_rmb();
> +	if (READ_ONCE(arr->entries[idx]))
>  		goto out_unlock;
>
>  	cgroup_name(css->cgroup, memcg_name_buf, sizeof(memcg_name_buf));
> --
> 2.20.1
>
On Tue, Jun 04, 2019 at 07:44:45PM -0700, Roman Gushchin wrote:
> Johannes noticed that reading the memcg kmem_cache pointer in
> cache_from_memcg_idx() is performed using the READ_ONCE() macro,
> which doesn't implement an SMP barrier, which is required
> by the logic.
>
> Add a proper smp_rmb() to be paired with smp_wmb() in
> memcg_create_kmem_cache().
>
> The same applies to memcg_create_kmem_cache() itself,
> which reads the same value without barriers or READ_ONCE().
>
> Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Roman Gushchin <guro@fb.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>
On Tue, Jun 04, 2019 at 09:35:02PM -0700, Shakeel Butt wrote:
> On Tue, Jun 4, 2019 at 7:45 PM Roman Gushchin <guro@fb.com> wrote:
> >
> > Johannes noticed that reading the memcg kmem_cache pointer in
> > cache_from_memcg_idx() is performed using the READ_ONCE() macro,
> > which doesn't implement an SMP barrier, which is required
> > by the logic.
> >
> > Add a proper smp_rmb() to be paired with smp_wmb() in
> > memcg_create_kmem_cache().
> >
> > The same applies to memcg_create_kmem_cache() itself,
> > which reads the same value without barriers or READ_ONCE().
> >
> > Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
> > Signed-off-by: Roman Gushchin <guro@fb.com>
>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>
>
> This seems independent of the series. Shouldn't this be Cc'ed to stable?

It is independent, but let's keep it here to avoid merge conflicts.

It has been so for a long time, and nobody complained, so I'm not sure
if we really need a stable backport. Do you have a different opinion?

Thank you!
On Wed, Jun 5, 2019 at 10:14 AM Roman Gushchin <guro@fb.com> wrote:
>
> On Tue, Jun 04, 2019 at 09:35:02PM -0700, Shakeel Butt wrote:
> > On Tue, Jun 4, 2019 at 7:45 PM Roman Gushchin <guro@fb.com> wrote:
> > >
> > > Johannes noticed that reading the memcg kmem_cache pointer in
> > > cache_from_memcg_idx() is performed using the READ_ONCE() macro,
> > > which doesn't implement an SMP barrier, which is required
> > > by the logic.
> > >
> > > Add a proper smp_rmb() to be paired with smp_wmb() in
> > > memcg_create_kmem_cache().
> > >
> > > The same applies to memcg_create_kmem_cache() itself,
> > > which reads the same value without barriers or READ_ONCE().
> > >
> > > Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
> > > Signed-off-by: Roman Gushchin <guro@fb.com>
> >
> > Reviewed-by: Shakeel Butt <shakeelb@google.com>
> >
> > This seems independent of the series. Shouldn't this be Cc'ed to stable?
>
> It is independent, but let's keep it here to avoid merge conflicts.
>
> It has been so for a long time, and nobody complained, so I'm not sure
> if we really need a stable backport. Do you have a different opinion?
>

Nah, it's fine as it is.
On Tue, Jun 04, 2019 at 07:44:45PM -0700, Roman Gushchin wrote:
> Johannes noticed that reading the memcg kmem_cache pointer in
> cache_from_memcg_idx() is performed using the READ_ONCE() macro,
> which doesn't implement an SMP barrier, which is required
> by the logic.
>
> Add a proper smp_rmb() to be paired with smp_wmb() in
> memcg_create_kmem_cache().
>
> The same applies to memcg_create_kmem_cache() itself,
> which reads the same value without barriers or READ_ONCE().
>
> Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Roman Gushchin <guro@fb.com>
> ---
>  mm/slab.h        | 1 +
>  mm/slab_common.c | 3 ++-
>  2 files changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 739099af6cbb..1176b61bb8fc 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -260,6 +260,7 @@ cache_from_memcg_idx(struct kmem_cache *s, int idx)
>  	 * memcg_caches issues a write barrier to match this (see
>  	 * memcg_create_kmem_cache()).
>  	 */
> +	smp_rmb();
>  	cachep = READ_ONCE(arr->entries[idx]);

Hmm, we used to have lockless_dereference() here, but it was replaced
with READ_ONCE some time ago. The commit message claims that READ_ONCE
has an implicit read barrier in it.

commit 506458efaf153c1ea480591c5602a5a3ba5a3b76
Author: Will Deacon <will.deacon@arm.com>
Date:   Tue Oct 24 11:22:48 2017 +0100

    locking/barriers: Convert users of lockless_dereference() to READ_ONCE()

    READ_ONCE() now has an implicit smp_read_barrier_depends() call, so it
    can be used instead of lockless_dereference() without any change in
    semantics.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/1508840570-22169-4-git-send-email-will.deacon@arm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

commit 76ebbe78f7390aee075a7f3768af197ded1bdfbb
Author: Will Deacon <will.deacon@arm.com>
Date:   Tue Oct 24 11:22:47 2017 +0100

    locking/barriers: Add implicit smp_read_barrier_depends() to READ_ONCE()

    In preparation for the removal of lockless_dereference(), which is the
    same as READ_ONCE() on all architectures other than Alpha, add an
    implicit smp_read_barrier_depends() to READ_ONCE() so that it can be
    used to head dependency chains on all architectures.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/1508840570-22169-3-git-send-email-will.deacon@arm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
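To make the ordering argument above concrete, here is a minimal, self-contained sketch of the publish/consume pattern under discussion. It is not the actual mm/slab code: publish_entry(), consume_entry() and the table array are hypothetical names used only for illustration. The reader dereferences the pointer it loads, so the access to the entry's fields carries an address dependency on the READ_ONCE() load; per the commits quoted above, READ_ONCE() also covers the Alpha case via its implicit smp_read_barrier_depends(), which is why no explicit smp_rmb() is needed on the read side.

#include <linux/compiler.h>	/* READ_ONCE(), WRITE_ONCE() */
#include <asm/barrier.h>	/* smp_wmb() */

struct entry {
	int payload;
};

static struct entry *table[16];

/* Writer: fully initialize the object, then publish the pointer. */
static void publish_entry(int idx, struct entry *e)
{
	e->payload = 42;
	smp_wmb();			/* order the init stores before the pointer store */
	WRITE_ONCE(table[idx], e);
}

/*
 * Reader: READ_ONCE() heads the dependency chain, so the dereference
 * below is ordered after the pointer load on all architectures --
 * no explicit smp_rmb() is required here.
 */
static int consume_entry(int idx)
{
	struct entry *e = READ_ONCE(table[idx]);

	return e ? e->payload : -1;
}

In newer kernel code the write side would more likely use rcu_assign_pointer() or smp_store_release() instead of an open-coded smp_wmb(), but the ordering the reader relies on is the same.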
On Sun, Jun 09, 2019 at 03:10:52PM +0300, Vladimir Davydov wrote:
> On Tue, Jun 04, 2019 at 07:44:45PM -0700, Roman Gushchin wrote:
> > Johannes noticed that reading the memcg kmem_cache pointer in
> > cache_from_memcg_idx() is performed using the READ_ONCE() macro,
> > which doesn't implement an SMP barrier, which is required
> > by the logic.
> >
> > Add a proper smp_rmb() to be paired with smp_wmb() in
> > memcg_create_kmem_cache().
> >
> > The same applies to memcg_create_kmem_cache() itself,
> > which reads the same value without barriers or READ_ONCE().
> >
> > Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
> > Signed-off-by: Roman Gushchin <guro@fb.com>
> > ---
> >  mm/slab.h        | 1 +
> >  mm/slab_common.c | 3 ++-
> >  2 files changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/slab.h b/mm/slab.h
> > index 739099af6cbb..1176b61bb8fc 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -260,6 +260,7 @@ cache_from_memcg_idx(struct kmem_cache *s, int idx)
> >  	 * memcg_caches issues a write barrier to match this (see
> >  	 * memcg_create_kmem_cache()).
> >  	 */
> > +	smp_rmb();
> >  	cachep = READ_ONCE(arr->entries[idx]);
>
> Hmm, we used to have lockless_dereference() here, but it was replaced
> with READ_ONCE some time ago. The commit message claims that READ_ONCE
> has an implicit read barrier in it.

Thanks for catching this, Vladimir. I wasn't aware of this change to
the memory model. Indeed, we don't need to change anything here.
On Mon, Jun 10, 2019 at 04:33:44PM -0400, Johannes Weiner wrote:
> On Sun, Jun 09, 2019 at 03:10:52PM +0300, Vladimir Davydov wrote:
> > On Tue, Jun 04, 2019 at 07:44:45PM -0700, Roman Gushchin wrote:
> > > Johannes noticed that reading the memcg kmem_cache pointer in
> > > cache_from_memcg_idx() is performed using the READ_ONCE() macro,
> > > which doesn't implement an SMP barrier, which is required
> > > by the logic.
> > >
> > > Add a proper smp_rmb() to be paired with smp_wmb() in
> > > memcg_create_kmem_cache().
> > >
> > > The same applies to memcg_create_kmem_cache() itself,
> > > which reads the same value without barriers or READ_ONCE().
> > >
> > > Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
> > > Signed-off-by: Roman Gushchin <guro@fb.com>
> > > ---
> > >  mm/slab.h        | 1 +
> > >  mm/slab_common.c | 3 ++-
> > >  2 files changed, 3 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/mm/slab.h b/mm/slab.h
> > > index 739099af6cbb..1176b61bb8fc 100644
> > > --- a/mm/slab.h
> > > +++ b/mm/slab.h
> > > @@ -260,6 +260,7 @@ cache_from_memcg_idx(struct kmem_cache *s, int idx)
> > >  	 * memcg_caches issues a write barrier to match this (see
> > >  	 * memcg_create_kmem_cache()).
> > >  	 */
> > > +	smp_rmb();
> > >  	cachep = READ_ONCE(arr->entries[idx]);
> >
> > Hmm, we used to have lockless_dereference() here, but it was replaced
> > with READ_ONCE some time ago. The commit message claims that READ_ONCE
> > has an implicit read barrier in it.
>
> Thanks for catching this, Vladimir. I wasn't aware of this change to
> the memory model. Indeed, we don't need to change anything here.

Cool, I'm dropping this patch. Thanks!
diff --git a/mm/slab.h b/mm/slab.h
index 739099af6cbb..1176b61bb8fc 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -260,6 +260,7 @@ cache_from_memcg_idx(struct kmem_cache *s, int idx)
 	 * memcg_caches issues a write barrier to match this (see
 	 * memcg_create_kmem_cache()).
 	 */
+	smp_rmb();
 	cachep = READ_ONCE(arr->entries[idx]);
 	rcu_read_unlock();

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 58251ba63e4a..8092bdfc05d5 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -652,7 +652,8 @@ void memcg_create_kmem_cache(struct mem_cgroup *memcg,
 	 * allocation (see memcg_kmem_get_cache()), several threads can try to
 	 * create the same cache, but only one of them may succeed.
 	 */
-	if (arr->entries[idx])
+	smp_rmb();
+	if (READ_ONCE(arr->entries[idx]))
 		goto out_unlock;

 	cgroup_name(css->cgroup, memcg_name_buf, sizeof(memcg_name_buf));
Johannes noticed that reading the memcg kmem_cache pointer in
cache_from_memcg_idx() is performed using the READ_ONCE() macro,
which doesn't implement an SMP barrier, which is required
by the logic.

Add a proper smp_rmb() to be paired with smp_wmb() in
memcg_create_kmem_cache().

The same applies to memcg_create_kmem_cache() itself,
which reads the same value without barriers or READ_ONCE().

Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Roman Gushchin <guro@fb.com>
---
 mm/slab.h        | 1 +
 mm/slab_common.c | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
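For contrast, here is a generic sketch of the smp_rmb()/smp_wmb() pairing the commit message refers to, for the case where the reader has no address dependency to rely on; data and ready are hypothetical variables used only for illustration, not kernel symbols. As the review discussion above concluded, cache_from_memcg_idx() does not actually need this explicit pairing, because its reader dereferences the pointer obtained via READ_ONCE() and the address dependency already provides the required ordering.

#include <linux/compiler.h>	/* READ_ONCE(), WRITE_ONCE() */
#include <asm/barrier.h>	/* smp_wmb(), smp_rmb() */

static int data;
static int ready;

static void writer(void)
{
	data = 42;
	smp_wmb();		/* order the data store before the flag store */
	WRITE_ONCE(ready, 1);
}

static int reader(void)
{
	if (!READ_ONCE(ready))
		return -1;
	smp_rmb();		/* pairs with the smp_wmb() in writer() */
	return data;		/* guaranteed to observe 42 once ready == 1 */
}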