
[RFC] mm, page_alloc: drop should_suppress_show_mem

Message ID 20180907114334.7088-1-mhocko@kernel.org
State New, archived
Series [RFC] mm, page_alloc: drop should_suppress_show_mem

Commit Message

Michal Hocko Sept. 7, 2018, 11:43 a.m. UTC
From: Michal Hocko <mhocko@suse.com>

should_suppress_show_mem was introduced to reduce the overhead of
show_mem on large NUMA systems. Things have changed since then, though.
Namely, c78e93630d15 ("mm: do not walk all of system memory during
show_mem") has reduced the overhead considerably.

Moreover, warn_alloc_show_mem already clears SHOW_MEM_FILTER_NODES when
called from IRQ context, so we are not printing per-node stats in that
case anyway.
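
For context, the filter handling in warn_alloc_show_mem looks roughly
like the sketch below (a paraphrase of the mm/page_alloc.c logic of
that era, not the verbatim source; the ratelimiting is omitted here):

static void warn_alloc_show_mem(gfp_t gfp_mask, nodemask_t *nodemask)
{
	unsigned int filter = SHOW_MEM_FILTER_NODES;

	/*
	 * Per-node filtering is based on current's allocation context,
	 * which is meaningless in IRQ context, so the flag is dropped
	 * for interrupt and non-reclaiming allocations.
	 */
	if (in_interrupt() || !(gfp_mask & __GFP_DIRECT_RECLAIM))
		filter &= ~SHOW_MEM_FILTER_NODES;

	show_mem(filter, nodemask);
}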

Remove should_suppress_show_mem because we are losing potentially
interesting information about allocation failures. We have seen a bug
report where the system becomes unresponsive under memory pressure and
all we get is
kernel: [2032243.696888] qlge 0000:8b:00.1 ql1: Could not get a page chunk, i=8, clean_idx =200 .
kernel: [2032243.710725] swapper/7: page allocation failure: order:1, mode:0x1084120(GFP_ATOMIC|__GFP_COLD|__GFP_COMP)

without any additional information for debugging. It would be great to
see the state of the page allocator at that moment.

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/page_alloc.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

Comments

Vlastimil Babka Sept. 7, 2018, 12:16 p.m. UTC | #1
On 09/07/2018 01:43 PM, Michal Hocko wrote:
> [...]
> Signed-off-by: Michal Hocko <mhocko@suse.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

The dependency on a build-time constant (NODES_SHIFT) instead of the
real system size is also unfortunate. Maybe the time spent did depend
on the number of *possible* nodes in the past, but I don't think that's
the case today.
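
For illustration only, a check keyed to the booted system rather than
the kernel config might have looked like this hypothetical sketch
(num_online_nodes() is the runtime node count; the threshold mirrors
the NODES_SHIFT > 8 guard removed below):

static inline bool should_suppress_show_mem(void)
{
	/*
	 * Hypothetical: suppress only on machines that actually have
	 * many online nodes, instead of keying off CONFIG_NODES_SHIFT.
	 */
	return num_online_nodes() > 256 && in_interrupt();
}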

Thanks.

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 89d2a2ab3fe6..025f23dc282e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3366,26 +3366,12 @@  get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 	return NULL;
 }
 
-/*
- * Large machines with many possible nodes should not always dump per-node
- * meminfo in irq context.
- */
-static inline bool should_suppress_show_mem(void)
-{
-	bool ret = false;
-
-#if NODES_SHIFT > 8
-	ret = in_interrupt();
-#endif
-	return ret;
-}
-
 static void warn_alloc_show_mem(gfp_t gfp_mask, nodemask_t *nodemask)
 {
 	unsigned int filter = SHOW_MEM_FILTER_NODES;
 	static DEFINE_RATELIMIT_STATE(show_mem_rs, HZ, 1);
 
-	if (should_suppress_show_mem() || !__ratelimit(&show_mem_rs))
+	if (!__ratelimit(&show_mem_rs))
 		return;
 
 	/*