Boot failure on emev2/kzm9d (was: Re: [PATCH v2 11/11] mm/slab: lockless decision to grow cache)

Message ID 20160623023756.GA30438@js1304-P5Q-DELUXE (mailing list archive)
State New, archived

Commit Message

Joonsoo Kim June 23, 2016, 2:37 a.m. UTC
On Wed, Jun 22, 2016 at 05:49:35PM -0700, Paul E. McKenney wrote:
> On Wed, Jun 22, 2016 at 12:08:59PM -0700, Paul E. McKenney wrote:
> > On Wed, Jun 22, 2016 at 05:01:35PM +0200, Geert Uytterhoeven wrote:
> > > On Wed, Jun 22, 2016 at 2:52 AM, Joonsoo Kim <iamjoonsoo.kim@lge.com> wrote:
> > > > Could you try the patch below to check what is causing the hang?
> > > >
> > > > And, if sysrq-t works during the hang, could you capture the sysrq-t output? I haven't
> > > > used it before, but Paul may be able to spot the culprit in it. :)
> > > >
> > > > Thanks.
> > > >
> > > >
> > > > ----->8-----
> > > > diff --git a/mm/slab.c b/mm/slab.c
> > > > index 763096a..9652d38 100644
> > > > --- a/mm/slab.c
> > > > +++ b/mm/slab.c
> > > > @@ -964,8 +964,13 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
> > > >          * guaranteed to be valid until irq is re-enabled, because it will be
> > > >          * freed after synchronize_sched().
> > > >          */
> > > > -       if (force_change)
> > > > +       if (force_change) {
> > > > +               if (num_online_cpus() > 1)
> > > > +                       dump_stack();
> > > >                 synchronize_sched();
> > > > +               if (num_online_cpus() > 1)
> > > > +                       dump_stack();
> > > > +       }
> > > 
> > > I've only added the first one, as I would never see the second one. All of
> > > this happens before the serial console is activated, earlycon is not supported,
> > > and I only have remote access.
> > > 
> > > Brought up 2 CPUs
> > > SMP: Total of 2 processors activated (2132.00 BogoMIPS).
> > > CPU: All CPU(s) started in SVC mode.
> > > CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc4-kzm9d-00404-g4a235e6dde4404dd-dirty #89
> > > Hardware name: Generic Emma Mobile EV2 (Flattened Device Tree)
> > > [<c010de68>] (unwind_backtrace) from [<c010a658>] (show_stack+0x10/0x14)
> > > [<c010a658>] (show_stack) from [<c02b5cf8>] (dump_stack+0x7c/0x9c)
> > > [<c02b5cf8>] (dump_stack) from [<c01cfa4c>] (setup_kmem_cache_node+0x140/0x170)
> > > [<c01cfa4c>] (setup_kmem_cache_node) from [<c01cfe3c>] (__do_tune_cpucache+0xf4/0x114)
> > > [<c01cfe3c>] (__do_tune_cpucache) from [<c01cff54>] (enable_cpucache+0xf8/0x148)
> > > [<c01cff54>] (enable_cpucache) from [<c01d0190>] (__kmem_cache_create+0x1a8/0x1d0)
> > > [<c01d0190>] (__kmem_cache_create) from [<c01b32d0>] (kmem_cache_create+0xbc/0x190)
> > > [<c01b32d0>] (kmem_cache_create) from [<c070d968>] (shmem_init+0x34/0xb0)
> > > [<c070d968>] (shmem_init) from [<c0700cc8>] (kernel_init_freeable+0x98/0x1ec)
> > > [<c0700cc8>] (kernel_init_freeable) from [<c049fdbc>] (kernel_init+0x8/0x110)
> > > [<c049fdbc>] (kernel_init) from [<c0106cb8>] (ret_from_fork+0x14/0x3c)
> > > devtmpfs: initialized
> > 
> > I don't see anything here that would prevent grace periods from completing.
> > 
> > The CPUs are using the normal hotplug sequence to come online, correct?
> 
> And either way, could you please apply the patch below and then
> invoke rcu_dump_rcu_sched_tree() just before the offending call to
> synchronize_sched()?  That will tell me what CPUs RCU believes exist,
> and perhaps also which CPU is holding it up.

I can't find rcu_dump_rcu_sched_tree(). Do you mean
rcu_dump_rcu_node_tree()? Anyway, there was no patch attached below, so I've
attached one that should do what Paul wants.

Thanks.

------->8---------

Comments

Paul E. McKenney June 23, 2016, 2:47 a.m. UTC | #1
On Thu, Jun 23, 2016 at 11:37:56AM +0900, Joonsoo Kim wrote:
> On Wed, Jun 22, 2016 at 05:49:35PM -0700, Paul E. McKenney wrote:
> > On Wed, Jun 22, 2016 at 12:08:59PM -0700, Paul E. McKenney wrote:
> > > On Wed, Jun 22, 2016 at 05:01:35PM +0200, Geert Uytterhoeven wrote:
> > > > On Wed, Jun 22, 2016 at 2:52 AM, Joonsoo Kim <iamjoonsoo.kim@lge.com> wrote:
> > > > > Could you try the patch below to check what is causing the hang?
> > > > >
> > > > > And, if sysrq-t works during the hang, could you capture the sysrq-t output? I haven't
> > > > > used it before, but Paul may be able to spot the culprit in it. :)
> > > > >
> > > > > Thanks.
> > > > >
> > > > >
> > > > > ----->8-----
> > > > > diff --git a/mm/slab.c b/mm/slab.c
> > > > > index 763096a..9652d38 100644
> > > > > --- a/mm/slab.c
> > > > > +++ b/mm/slab.c
> > > > > @@ -964,8 +964,13 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
> > > > >          * guaranteed to be valid until irq is re-enabled, because it will be
> > > > >          * freed after synchronize_sched().
> > > > >          */
> > > > > -       if (force_change)
> > > > > +       if (force_change) {
> > > > > +               if (num_online_cpus() > 1)
> > > > > +                       dump_stack();
> > > > >                 synchronize_sched();
> > > > > +               if (num_online_cpus() > 1)
> > > > > +                       dump_stack();
> > > > > +       }
> > > > 
> > > > I've only added the first one, as I would never see the second one. All of
> > > > this happens before the serial console is activated, earlycon is not supported,
> > > > and I only have remote access.
> > > > 
> > > > Brought up 2 CPUs
> > > > SMP: Total of 2 processors activated (2132.00 BogoMIPS).
> > > > CPU: All CPU(s) started in SVC mode.
> > > > CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc4-kzm9d-00404-g4a235e6dde4404dd-dirty #89
> > > > Hardware name: Generic Emma Mobile EV2 (Flattened Device Tree)
> > > > [<c010de68>] (unwind_backtrace) from [<c010a658>] (show_stack+0x10/0x14)
> > > > [<c010a658>] (show_stack) from [<c02b5cf8>] (dump_stack+0x7c/0x9c)
> > > > [<c02b5cf8>] (dump_stack) from [<c01cfa4c>] (setup_kmem_cache_node+0x140/0x170)
> > > > [<c01cfa4c>] (setup_kmem_cache_node) from [<c01cfe3c>] (__do_tune_cpucache+0xf4/0x114)
> > > > [<c01cfe3c>] (__do_tune_cpucache) from [<c01cff54>] (enable_cpucache+0xf8/0x148)
> > > > [<c01cff54>] (enable_cpucache) from [<c01d0190>] (__kmem_cache_create+0x1a8/0x1d0)
> > > > [<c01d0190>] (__kmem_cache_create) from [<c01b32d0>] (kmem_cache_create+0xbc/0x190)
> > > > [<c01b32d0>] (kmem_cache_create) from [<c070d968>] (shmem_init+0x34/0xb0)
> > > > [<c070d968>] (shmem_init) from [<c0700cc8>] (kernel_init_freeable+0x98/0x1ec)
> > > > [<c0700cc8>] (kernel_init_freeable) from [<c049fdbc>] (kernel_init+0x8/0x110)
> > > > [<c049fdbc>] (kernel_init) from [<c0106cb8>] (ret_from_fork+0x14/0x3c)
> > > > devtmpfs: initialized
> > > 
> > > I don't see anything here that would prevent grace periods from completing.
> > > 
> > > The CPUs are using the normal hotplug sequence to come online, correct?
> > 
> > And either way, could you please apply the patch below and then
> > invoke rcu_dump_rcu_sched_tree() just before the offending call to
> > synchronize_sched()?  That will tell me what CPUs RCU believes exist,
> > and perhaps also which CPU is holding it up.
> 
> I can't find rcu_dump_rcu_sched_tree(). Do you mean
> rcu_dump_rcu_node_tree()? Anyway, there was no patch attached below, so I've
> attached one that should do what Paul wants.

One of those days, I guess!  :-/

Your patch is exactly what I intended to send, thank you!

							Thanx, Paul

> Thanks.
> 
> ------->8---------
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 88d3f95..6b650f0 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -4171,7 +4171,7 @@ static void __init rcu_init_geometry(void)
>   * Dump out the structure of the rcu_node combining tree associated
>   * with the rcu_state structure referenced by rsp.
>   */
> -static void __init rcu_dump_rcu_node_tree(struct rcu_state *rsp)
> +static void rcu_dump_rcu_node_tree(struct rcu_state *rsp)
>  {
>         int level = 0;
>         struct rcu_node *rnp;
> @@ -4189,6 +4189,11 @@ static void __init rcu_dump_rcu_node_tree(struct rcu_state *rsp)
>         pr_cont("\n");
>  }
> 
> +void rcu_dump_rcu_sched_tree(void)
> +{
> +       rcu_dump_rcu_node_tree(&rcu_sched_state);
> +}
> +
>  void __init rcu_init(void)
>  {
>         int cpu;
> diff --git a/mm/slab.c b/mm/slab.c
> index 763096a..d88976c 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -909,6 +909,8 @@ static int init_cache_node_node(int node)
>         return 0;
>  }
> 
> +extern void rcu_dump_rcu_sched_tree(void);
> +
>  static int setup_kmem_cache_node(struct kmem_cache *cachep,
>                                 int node, gfp_t gfp, bool force_change)
>  {
> @@ -964,8 +966,10 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
>          * guaranteed to be valid until irq is re-enabled, because it will be
>          * freed after synchronize_sched().
>          */
> -       if (force_change)
> +       if (force_change) {
> +               rcu_dump_rcu_sched_tree();
>                 synchronize_sched();
> +       }
> 
>  fail:
>         kfree(old_shared);
>

Patch

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 88d3f95..6b650f0 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4171,7 +4171,7 @@  static void __init rcu_init_geometry(void)
  * Dump out the structure of the rcu_node combining tree associated
  * with the rcu_state structure referenced by rsp.
  */
-static void __init rcu_dump_rcu_node_tree(struct rcu_state *rsp)
+static void rcu_dump_rcu_node_tree(struct rcu_state *rsp)
 {
        int level = 0;
        struct rcu_node *rnp;
@@ -4189,6 +4189,11 @@  static void __init rcu_dump_rcu_node_tree(struct rcu_state *rsp)
        pr_cont("\n");
 }
 
+void rcu_dump_rcu_sched_tree(void)
+{
+       rcu_dump_rcu_node_tree(&rcu_sched_state);
+}
+
 void __init rcu_init(void)
 {
        int cpu;
diff --git a/mm/slab.c b/mm/slab.c
index 763096a..d88976c 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -909,6 +909,8 @@  static int init_cache_node_node(int node)
        return 0;
 }
 
+extern void rcu_dump_rcu_sched_tree(void);
+
 static int setup_kmem_cache_node(struct kmem_cache *cachep,
                                int node, gfp_t gfp, bool force_change)
 {
@@ -964,8 +966,10 @@  static int setup_kmem_cache_node(struct kmem_cache *cachep,
         * guaranteed to be valid until irq is re-enabled, because it will be
         * freed after synchronize_sched().
         */
-       if (force_change)
+       if (force_change) {
+               rcu_dump_rcu_sched_tree();
                synchronize_sched();
+       }
 
 fail:
        kfree(old_shared);
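
[Editor's note] For readers unfamiliar with the idiom referenced by the hunk's comment ("freed after synchronize_sched()"): running with interrupts disabled counts as an RCU-sched read-side critical section, so the old per-node shared array may only be freed once synchronize_sched() has waited for every such section to finish. Below is a minimal, generic sketch of that publish-then-free pattern; the names shared_ptr, reader() and update_shared() are made up for illustration and this is not the actual mm/slab.c code.

#include <linux/atomic.h>
#include <linux/compiler.h>
#include <linux/irqflags.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Illustrative only: none of these symbols exist in the kernel. */
static void *shared_ptr;	/* read by CPUs with interrupts disabled */

static void reader(void)
{
	unsigned long flags;
	void *p;

	local_irq_save(flags);		/* irq-off region acts as an RCU-sched read-side section */
	p = READ_ONCE(shared_ptr);
	/* ... use p; it stays valid until interrupts are re-enabled ... */
	local_irq_restore(flags);
}

static void update_shared(void *new)
{
	void *old;

	old = xchg(&shared_ptr, new);	/* publish the replacement pointer */
	synchronize_sched();		/* wait for all irq-disabled readers to finish */
	kfree(old);			/* no reader can still hold the old pointer */
}

This is the same guarantee that makes the kfree(old_shared) at the fail: label above safe only after the synchronize_sched() call that these debug patches instrument.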