Message ID | 499c0f6cc10d6eb829a67f2a4d75b4228a9b356e.1501695897.git.jtoppins@redhat.com (mailing list archive) |
---|---|
State | Not Applicable |
On Wed, 2017-08-02 at 13:44 -0400, Jonathan Toppins wrote:
> The RDMA subsystem can generate several thousand of these messages per
> second eventually leading to a kernel crash. Ratelimit these messages
> to prevent this crash.
>
> Signed-off-by: Jonathan Toppins <jtoppins@redhat.com>
> Reviewed-by: Doug Ledford <dledford@redhat.com>
> Tested-by: Doug Ledford <dledford@redhat.com>
> ---
>  mm/page_alloc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 6d30e914afb6..07b7d3060b21 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7666,7 +7666,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
>
>  	/* Make sure the range is really isolated. */
>  	if (test_pages_isolated(outer_start, end, false)) {
> -		pr_info("%s: [%lx, %lx) PFNs busy\n",
> +		pr_info_ratelimited("%s: [%lx, %lx) PFNs busy\n",
> 			__func__, outer_start, end);
>  		ret = -EBUSY;
>  		goto done;

FWIW, I've been carrying a version of this for several kernel versions. I
don't remember when the errors started, but we have one (and only one)
class of machines, the Dell PE R730xd, that generates them. When it
happens without a rate limit, we get RCU timeouts and kernel oopses. With
the rate limit we just get a lot of annoying kernel messages, but the
machine continues on, recovers, and eventually the memory operations all
succeed.
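For context on what the one-line change does: pr_info_ratelimited() is built on the kernel's ratelimit helpers, so each call site gets its own static ratelimit state and a message storm collapses into short bursts instead of flooding the log. The sketch below paraphrases that pattern rather than copying the exact macro expansion; the report_pfns_busy() wrapper is purely illustrative and is not part of the patch.

```c
/*
 * Minimal sketch of the pattern behind pr_info_ratelimited() (see
 * include/linux/printk.h and include/linux/ratelimit.h). This is a
 * paraphrase, not the exact macro expansion; report_pfns_busy() is a
 * hypothetical wrapper used only to show the shape of the code.
 */
#include <linux/printk.h>
#include <linux/ratelimit.h>

static void report_pfns_busy(const char *caller, unsigned long outer_start,
			     unsigned long end)
{
	/* One static ratelimit state per call site, using the defaults. */
	static DEFINE_RATELIMIT_STATE(rs, DEFAULT_RATELIMIT_INTERVAL,
				      DEFAULT_RATELIMIT_BURST);

	/* __ratelimit() returns true while we are still within the burst. */
	if (__ratelimit(&rs))
		pr_info("%s: [%lx, %lx) PFNs busy\n", caller, outer_start, end);
}
```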
On Wed, 2 Aug 2017 13:44:57 -0400 Jonathan Toppins <jtoppins@redhat.com> wrote:

> The RDMA subsystem can generate several thousand of these messages per
> second eventually leading to a kernel crash. Ratelimit these messages
> to prevent this crash.

Well... why are all these EBUSY's occurring? It sounds inefficient (at
least) but if it is expected, normal and unavoidable then perhaps we
should just remove that message altogether?

> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7666,7 +7666,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
>
>  	/* Make sure the range is really isolated. */
>  	if (test_pages_isolated(outer_start, end, false)) {
> -		pr_info("%s: [%lx, %lx) PFNs busy\n",
> +		pr_info_ratelimited("%s: [%lx, %lx) PFNs busy\n",
> 			__func__, outer_start, end);
>  		ret = -EBUSY;
>  		goto done;
On Wed, 2017-08-02 at 14:17 -0700, Andrew Morton wrote:
> On Wed, 2 Aug 2017 13:44:57 -0400 Jonathan Toppins <jtoppins@redhat.com> wrote:
>
> > The RDMA subsystem can generate several thousand of these messages per
> > second eventually leading to a kernel crash. Ratelimit these messages
> > to prevent this crash.
>
> Well... why are all these EBUSY's occurring? It sounds inefficient (at
> least) but if it is expected, normal and unavoidable then perhaps we
> should just remove that message altogether?

I don't have an answer to that question. To be honest, I haven't looked
real hard. We never had this at all, then it started out of the blue, but
only on our Dell 730xd machines (and it hits all of them), not on any
other class or brand of machine. And we have our 730xd machines loaded up
with different brands and models of cards (for instance one dedicated to
mlx4 hardware, one for qib, one for mlx5, an ocrdma/cxgb4 combo, etc.),
so the fact that it hit all of the machines meant it wasn't tied to any
particular brand/model of RDMA hardware. To me, it always smelled of a
hardware oddity specific to maybe the CPUs or mainboard chipsets in these
machines, so given that I'm not an mm expert anyway, I never chased it
down.

A few other relevant details: it showed up somewhere around 4.8/4.9 or
thereabouts. It never happened before, but the printk has been there
since the 3.18 days, so possibly the test that triggers this message was
changed, or something else in the allocator changed such that the
situation started happening on these machines. And, like I said, it is
specific to our 730xd machines, which are all identical, so it could be
that something like their specific RAM configuration causes the allocator
to hit this on these machines but not on other machines in the cluster; I
don't want to say it's necessarily the model of chipset or CPU, since
there are other bits of identicalness between these machines.
On Fri 04-08-17 14:55:06, Doug Ledford wrote:
> On Wed, 2017-08-02 at 14:17 -0700, Andrew Morton wrote:
> > On Wed, 2 Aug 2017 13:44:57 -0400 Jonathan Toppins <jtoppins@redhat.com> wrote:
> >
> > > The RDMA subsystem can generate several thousand of these messages per
> > > second eventually leading to a kernel crash. Ratelimit these messages
> > > to prevent this crash.
> >
> > Well... why are all these EBUSY's occurring? It sounds inefficient (at
> > least) but if it is expected, normal and unavoidable then perhaps we
> > should just remove that message altogether?
>
> I don't have an answer to that question. To be honest, I haven't looked
> real hard. We never had this at all, then it started out of the blue, but
> only on our Dell 730xd machines (and it hits all of them), not on any
> other class or brand of machine. And we have our 730xd machines loaded up
> with different brands and models of cards (for instance one dedicated to
> mlx4 hardware, one for qib, one for mlx5, an ocrdma/cxgb4 combo, etc.),
> so the fact that it hit all of the machines meant it wasn't tied to any
> particular brand/model of RDMA hardware. To me, it always smelled of a
> hardware oddity specific to maybe the CPUs or mainboard chipsets in these
> machines, so given that I'm not an mm expert anyway, I never chased it
> down.

It would certainly be good to chase this down. I do not object to
ratelimiting; it is much better than having a non-bootable system, but
this doesn't solve the underlying problem.

> A few other relevant details: it showed up somewhere around 4.8/4.9 or
> thereabouts. It never happened before, but the printk has been there
> since the 3.18 days, so possibly the test that triggers this message was
> changed, or something else in the allocator changed such that the
> situation started happening on these machines.

Is this still the case with the current Linus tree? We have had a fix,
commit 424f6c4818bb ("mm: alloc_contig: re-allow CMA to compact FS
pages"), which made it into 4.10.
Andrew Morton <akpm@linux-foundation.org> writes:

> On Wed, 2 Aug 2017 13:44:57 -0400 Jonathan Toppins <jtoppins@redhat.com> wrote:
>
>> The RDMA subsystem can generate several thousand of these messages per
>> second eventually leading to a kernel crash. Ratelimit these messages
>> to prevent this crash.
>
> Well... why are all these EBUSY's occurring? It sounds inefficient (at
> least) but if it is expected, normal and unavoidable then perhaps we
> should just remove that message altogether?

We see them on powerpc sometimes when CMA is unable to make large
allocations for the hash table of a KVM guest. At least in that context
they're not useful: CMA will try the allocation again, and if it really
can't allocate then CMA will print more useful information itself.

So I'd vote for dropping the message and letting the callers decide what
to do.

cheers
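Michael's suggestion amounts to pushing the reporting decision out to callers, which already know whether a busy range is fatal or just means "slide forward and try the next window". The sketch below is a hypothetical caller-side loop in that spirit; it is not the actual cma_alloc() code, and both alloc_from_region() and the try_one_range() wrapper around alloc_contig_range() are illustrative names.

```c
/*
 * Hypothetical caller-side handling in the spirit of "let the callers
 * decide": treat -EBUSY from alloc_contig_range() as "this window did not
 * work, slide forward and retry", and only warn once every candidate
 * window has been tried. Illustration only, not the real CMA code.
 */
#include <linux/gfp.h>
#include <linux/mmzone.h>
#include <linux/printk.h>

static int try_one_range(unsigned long pfn, unsigned long count)
{
	/*
	 * Assumes CONFIG_CMA and the post-4.11 alloc_contig_range()
	 * signature that takes a gfp_mask; adjust for older kernels.
	 */
	return alloc_contig_range(pfn, pfn + count, MIGRATE_CMA, GFP_KERNEL);
}

static int alloc_from_region(unsigned long base_pfn,
			     unsigned long region_pages, unsigned long count)
{
	unsigned long offset;
	int ret = -ENOMEM;

	for (offset = 0; offset + count <= region_pages; offset++) {
		ret = try_one_range(base_pfn + offset, count);
		if (!ret)
			return 0;	/* got a contiguous block */
		if (ret != -EBUSY)
			break;		/* hard failure, give up */
		/* -EBUSY: pages in this window are pinned/isolated, move on */
	}

	pr_warn("contiguous allocation of %lu pages failed: %d\n", count, ret);
	return ret;
}
```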
```diff
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6d30e914afb6..07b7d3060b21 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7666,7 +7666,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 
 	/* Make sure the range is really isolated. */
 	if (test_pages_isolated(outer_start, end, false)) {
-		pr_info("%s: [%lx, %lx) PFNs busy\n",
+		pr_info_ratelimited("%s: [%lx, %lx) PFNs busy\n",
 			__func__, outer_start, end);
 		ret = -EBUSY;
 		goto done;
```
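The thread also discusses dropping the message entirely rather than rate-limiting it. One middle ground, sketched below and explicitly not what was merged, would be to demote the message to pr_debug(), which is a no-op unless DEBUG or dynamic debug enables it, and leave reporting of genuine allocation failures to the callers.

```c
	/* Make sure the range is really isolated. */
	if (test_pages_isolated(outer_start, end, false)) {
		/*
		 * Alternative sketch (not the merged change): make the
		 * message opt-in via dynamic debug instead of rate-limiting
		 * it, and let callers report failures that actually matter.
		 */
		pr_debug("%s: [%lx, %lx) PFNs busy\n",
			 __func__, outer_start, end);
		ret = -EBUSY;
		goto done;
	}
```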