| Message ID | 20220510094639.106661-1-chenzhongjin@huawei.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [v4] locking/csd_lock: change csdlock_debug from early_param to __setup |
On 2022/5/10 17:46, Chen Zhongjin wrote:
> csdlock_debug uses early_param and static_branch_enable() to enable
> csd_lock_wait feature, which triggers a panic on arm64 with config:
> CONFIG_SPARSEMEM=y
> CONFIG_SPARSEMEM_VMEMMAP=n
>
> With CONFIG_SPARSEMEM_VMEMMAP=n, __nr_to_section is called in
> static_key_enable() and returns NULL which makes NULL dereference
> because mem_section is initialized in sparse_init() which is later
> than parse_early_param() stage.
>
> For powerpc this is also broken, because early_param stage is
> earlier than jump_label_init() so static_key_enable won't work.
> powerpc throws an warning: "static key 'xxx' used before call
> to jump_label_init()".
>
> Thus, early_param is too early for csd_lock_wait to run
> static_branch_enable(), so changes it to __setup to fix these.
>
> Fixes: 8d0968cc6b8f ("locking/csd_lock: Add boot parameter for controlling CSD lock debugging")
> Cc: stable@vger.kernel.org
> Reported-by: Chen jingwen <chenjingwen6@huawei.com>
> Signed-off-by: Chen Zhongjin <chenzhongjin@huawei.com>
> ---
> Change v3 -> v4:
> Fix title and description because this fix is also applied
> to powerpc.
> For more detailed arm64 bug report see:
> https://lore.kernel.org/linux-arm-kernel/e8715911-f835-059d-27f8-cc5f5ad30a07@huawei.com/t/
>
> Change v2 -> v3:
> Add module name in title
>
> Change v1 -> v2:
> Fix return 1 for __setup
> ---
> kernel/smp.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/smp.c b/kernel/smp.c
> index 65a630f62363..381eb15cd28f 100644
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -174,9 +174,9 @@ static int __init csdlock_debug(char *str)
>  	if (val)
>  		static_branch_enable(&csdlock_debug_enabled);
>
> -	return 0;
> +	return 1;
>  }
> -early_param("csdlock_debug", csdlock_debug);
> +__setup("csdlock_debug=", csdlock_debug);
>
>  static DEFINE_PER_CPU(call_single_data_t *, cur_csd);
>  static DEFINE_PER_CPU(smp_call_func_t, cur_csd_func);

Ping for review. Thanks!
On Tue, May 17, 2022 at 11:22:04AM +0800, Chen Zhongjin wrote:
> On 2022/5/10 17:46, Chen Zhongjin wrote:
> > csdlock_debug uses early_param and static_branch_enable() to enable
> > csd_lock_wait feature, which triggers a panic on arm64 with config:
> > CONFIG_SPARSEMEM=y
> > CONFIG_SPARSEMEM_VMEMMAP=n
> > [...]
>
> Ping for review. Thanks!

I have pulled it into -rcu for testing and further review.  It might
well need to go through some other path, though.

							Thanx, Paul
Hi,

On 2022/5/18 9:11, Paul E. McKenney wrote:
> On Tue, May 17, 2022 at 11:22:04AM +0800, Chen Zhongjin wrote:
>> On 2022/5/10 17:46, Chen Zhongjin wrote:
>>> csdlock_debug uses early_param and static_branch_enable() to enable
>>> csd_lock_wait feature, which triggers a panic on arm64 with config:
>>> [...]
>>
>> Ping for review. Thanks!
>
> I have pulled it into -rcu for testing and further review.  It might
> well need to go through some other path, though.
>
> 							Thanx, Paul

So did it have any result? Do we have any other idea for fixing this,
apart from delaying when the static key is set? I guess that not using
static_branch could work here, but the performance impact for the case
where the feature is not enabled would still need to be evaluated.

Best,
Chen
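For context, here is a minimal sketch of the alternative Chen floats above: replacing the static key with a plain boolean flag. This is not part of the patch, and the names (csdlock_debug_enabled_flag, csd_lock_debug_active) are hypothetical; it only illustrates the trade-off being discussed: a plain flag can safely be set from an early_param handler, but it puts a memory load and a conditional branch on the fast path even when the feature is disabled, which is exactly the cost the static key avoids.

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/cache.h>
#include <linux/compiler.h>

/*
 * Hypothetical sketch (not the actual kernel/smp.c code): a plain flag
 * instead of a static key. Writing a bool needs no jump_label or
 * mem_section infrastructure, so it is safe from an early_param handler.
 */
static bool csdlock_debug_enabled_flag __read_mostly;

static int __init csdlock_debug_flag(char *str)
{
	int val = 0;

	get_option(&str, &val);
	if (val)
		csdlock_debug_enabled_flag = true;

	return 0;	/* early_param convention: 0 on success */
}
early_param("csdlock_debug", csdlock_debug_flag);

/*
 * Fast-path check: a load plus a branch on every call, even when
 * debugging is off; static_branch_unlikely() would be a patched NOP here.
 */
static inline bool csd_lock_debug_active(void)
{
	return READ_ONCE(csdlock_debug_enabled_flag);
}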
On Fri, May 27, 2022 at 02:49:03PM +0800, Chen Zhongjin wrote:
> Hi,
>
> On 2022/5/18 9:11, Paul E. McKenney wrote:
> > [...]
> >
> > I have pulled it into -rcu for testing and further review.  It might
> > well need to go through some other path, though.
>
> So did it have any result? Do we have any other idea for fixing this,
> apart from delaying when the static key is set? I guess that not using
> static_branch could work here, but the performance impact for the case
> where the feature is not enabled would still need to be evaluated.

It was in -next for a short time without complaints.  It will go back
into -next after the merge window closes.  If there are no objections,
I would include it in my pull request for the next merge window (v5.20).

							Thanx, Paul
diff --git a/kernel/smp.c b/kernel/smp.c
index 65a630f62363..381eb15cd28f 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -174,9 +174,9 @@ static int __init csdlock_debug(char *str)
 	if (val)
 		static_branch_enable(&csdlock_debug_enabled);
 
-	return 0;
+	return 1;
 }
-early_param("csdlock_debug", csdlock_debug);
+__setup("csdlock_debug=", csdlock_debug);
 
 static DEFINE_PER_CPU(call_single_data_t *, cur_csd);
 static DEFINE_PER_CPU(smp_call_func_t, cur_csd_func);
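For readers unfamiliar with the two registration macros the diff switches between, the sketch below summarizes the conventions involved. The parameter name foo_debug and both handler bodies are made up for illustration; the timing claims follow the changelog below and the usual init/main.c ordering. early_param() handlers run from parse_early_param(), which arch code calls during early setup, before jump_label_init() and (on the affected configs) before sparse_init(); they return 0 on success. __setup() handlers run later, from parse_args("Booting kernel", ...) in start_kernel(), and return 1 to mark the option as consumed; returning 0 typically leaves the option to be reported as unknown or passed on to init.

#include <linux/init.h>

/*
 * Illustrative only; foo_debug and both handlers are hypothetical.
 *
 * early_param(): handler runs from parse_early_param(), very early in
 * boot, before jump_label_init() and, in the configs described here,
 * before sparse_init(). Return 0 on success, non-zero for a bad value.
 */
static int __init foo_debug_early(char *str)
{
	/* Only touch plain variables here; static keys are not ready yet. */
	return 0;
}
early_param("foo_debug", foo_debug_early);

/*
 * __setup(): handler runs from parse_args("Booting kernel", ...) in
 * start_kernel(), after jump_label_init(), so static_branch_enable()
 * is safe. Return 1 so the option is treated as handled; note that the
 * registered string includes the trailing '=', as in the patch.
 */
static int __init foo_debug_setup(char *str)
{
	/* static_branch_enable(&some_debug_key) would be safe here. */
	return 1;
}
__setup("foo_debug=", foo_debug_setup);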
csdlock_debug uses early_param and static_branch_enable() to enable the
csd_lock_wait feature, which triggers a panic on arm64 with this config:

  CONFIG_SPARSEMEM=y
  CONFIG_SPARSEMEM_VMEMMAP=n

With CONFIG_SPARSEMEM_VMEMMAP=n, __nr_to_section() is called in
static_key_enable() and returns NULL, leading to a NULL dereference,
because mem_section is initialized in sparse_init(), which runs later
than the parse_early_param() stage.

This is also broken for powerpc, because the early_param stage runs
before jump_label_init(), so static_key_enable() cannot work there;
powerpc prints a warning: "static key 'xxx' used before call
to jump_label_init()".

Thus, early_param is too early for csd_lock_wait to run
static_branch_enable(); change it to __setup to fix these problems.

Fixes: 8d0968cc6b8f ("locking/csd_lock: Add boot parameter for controlling CSD lock debugging")
Cc: stable@vger.kernel.org
Reported-by: Chen jingwen <chenjingwen6@huawei.com>
Signed-off-by: Chen Zhongjin <chenzhongjin@huawei.com>
---
Change v3 -> v4:
  Fix title and description because this fix also applies to powerpc.
  For a more detailed arm64 bug report see:
  https://lore.kernel.org/linux-arm-kernel/e8715911-f835-059d-27f8-cc5f5ad30a07@huawei.com/t/

Change v2 -> v3:
  Add module name in title

Change v1 -> v2:
  Return 1, as required for __setup handlers
---
 kernel/smp.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
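To make the ordering argument above concrete, here is a rough outline of the relevant boot sequence. It is a simplified sketch based on the description in this changelog and the usual init/main.c flow, not exact code; on some architectures (powerpc, for example) parse_early_param() runs even earlier, before start_kernel().

/*
 * Simplified boot ordering relevant to this patch (sketch, not exact code):
 *
 *   start_kernel()
 *     setup_arch()
 *       parse_early_param()  -> early_param handlers run here: too early,
 *                               mem_section and jump labels not ready
 *       sparse_init()        -> mem_section becomes valid (needed by
 *                               static_key_enable() when
 *                               CONFIG_SPARSEMEM_VMEMMAP=n)
 *     jump_label_init()      -> static keys become usable
 *     parse_args("Booting kernel", ...)
 *                            -> __setup handlers run here: late enough
 *                               to call static_branch_enable()
 */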