Message ID | 20240814215037.1870645-1-axelrasmussen@google.com (mailing list archive) |
---|---|
State | New |
Series | [v2] mm, slub: print CPU id (and its node) on slab OOM |
On 8/14/24 23:50, Axel Rasmussen wrote:
> Depending on how remote_node_defrag_ratio is configured, allocations can
> end up in this path as a result of the local node being OOM, despite the
> allocation overall being unconstrained (node == -1).
>
> When we print a warning, printing the current CPU makes that situation
> more clear (i.e., you can immediately see which node's OOM status
> matters for the allocation at hand).
>
> Acked-by: David Rientjes <rientjes@google.com>
> Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>

Thanks, replaced v1 in the slab tree.

> ---
>  mm/slub.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index c9d8a2497fd6..3088260bf75d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3416,14 +3416,15 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
>  {
>  	static DEFINE_RATELIMIT_STATE(slub_oom_rs, DEFAULT_RATELIMIT_INTERVAL,
>  				      DEFAULT_RATELIMIT_BURST);
> +	int cpu = raw_smp_processor_id();
>  	int node;
>  	struct kmem_cache_node *n;
>
>  	if ((gfpflags & __GFP_NOWARN) || !__ratelimit(&slub_oom_rs))
>  		return;
>
> -	pr_warn("SLUB: Unable to allocate memory on node %d, gfp=%#x(%pGg)\n",
> -		nid, gfpflags, &gfpflags);
> +	pr_warn("SLUB: Unable to allocate memory on CPU %u (of node %d) on node %d, gfp=%#x(%pGg)\n",
> +		cpu, cpu_to_node(cpu), nid, gfpflags, &gfpflags);
>  	pr_warn("  cache: %s, object size: %u, buffer size: %u, default order: %u, min order: %u\n",
>  		s->name, s->object_size, s->size, oo_order(s->oo),
>  		oo_order(s->min));
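To illustrate the change, this is roughly how the warning reads before and after the patch. The CPU, node, and gfp values below are made up for illustration; only the message format comes from the patch itself:

Before: SLUB: Unable to allocate memory on node -1, gfp=0xcc0(GFP_KERNEL)
After:  SLUB: Unable to allocate memory on CPU 12 (of node 1) on node -1, gfp=0xcc0(GFP_KERNEL)

With an unconstrained allocation the old line only shows "node -1", giving no hint which node's memory pressure triggered the warning; the added CPU and its node make that immediately visible.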
diff --git a/mm/slub.c b/mm/slub.c
index c9d8a2497fd6..3088260bf75d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3416,14 +3416,15 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
 {
 	static DEFINE_RATELIMIT_STATE(slub_oom_rs, DEFAULT_RATELIMIT_INTERVAL,
 				      DEFAULT_RATELIMIT_BURST);
+	int cpu = raw_smp_processor_id();
 	int node;
 	struct kmem_cache_node *n;

 	if ((gfpflags & __GFP_NOWARN) || !__ratelimit(&slub_oom_rs))
 		return;

-	pr_warn("SLUB: Unable to allocate memory on node %d, gfp=%#x(%pGg)\n",
-		nid, gfpflags, &gfpflags);
+	pr_warn("SLUB: Unable to allocate memory on CPU %u (of node %d) on node %d, gfp=%#x(%pGg)\n",
+		cpu, cpu_to_node(cpu), nid, gfpflags, &gfpflags);
 	pr_warn("  cache: %s, object size: %u, buffer size: %u, default order: %u, min order: %u\n",
 		s->name, s->object_size, s->size, oo_order(s->oo),
 		oo_order(s->min));
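As background for the remote_node_defrag_ratio point in the commit message: the ratio can gate whether SLUB searches remote nodes for partial slabs, so local-node pressure surfaces even for node == -1 allocations. The snippet below is a standalone userspace model, not kernel code; the function name, the 0..1024 draw, and the ratio semantics only loosely follow mm/slub.c and are an approximation for illustration.

/* Standalone model, NOT mm/slub.c: illustrates only the gating decision. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* A ratio of 0 disables remote defrag entirely; otherwise a pseudo-random
 * draw against the ratio decides whether remote nodes are searched. */
static bool may_search_remote_nodes(unsigned int defrag_ratio)
{
	return defrag_ratio && (unsigned int)(rand() % 1024) <= defrag_ratio;
}

int main(void)
{
	unsigned int ratio = 0;	/* e.g. remote_node_defrag_ratio tuned to 0 */

	srand((unsigned int)time(NULL));

	if (may_search_remote_nodes(ratio))
		printf("remote partial-slab search allowed (ratio=%u)\n", ratio);
	else
		printf("remote partial-slab search skipped (ratio=%u)\n", ratio);

	return 0;
}

When the search is skipped and the local node is exhausted, you can reach the slab_out_of_memory() warning above even though the allocation itself was not constrained to a node, which is exactly why printing the current CPU and its node helps.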