Message ID | 20180325175004.28162-3-ynorov@caviumnetworks.com (mailing list archive) |
---|---|
State | New, archived |
On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote: > kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast IPI. > If CPU is in extended quiescent state (idle task or nohz_full userspace), this > work may be done at the exit of this state. Delaying synchronization helps to > save power if CPU is in idle state and decrease latency for real-time tasks. > > This patch introduces kick_active_cpus_sync() and uses it in mm/slab and arm64 > code to delay syncronization. > > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU running > isolated task would be fatal, as it breaks isolation. The approach with delaying > of synchronization work helps to maintain isolated state. > > I've tested it with test from task isolation series on ThunderX2 for more than > 10 hours (10k giga-ticks) without breaking isolation. > > Signed-off-by: Yury Norov <ynorov@caviumnetworks.com> > --- > arch/arm64/kernel/insn.c | 2 +- > include/linux/smp.h | 2 ++ > kernel/smp.c | 24 ++++++++++++++++++++++++ > mm/slab.c | 2 +- > 4 files changed, 28 insertions(+), 2 deletions(-) > > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c > index 2718a77da165..9d7c492e920e 100644 > --- a/arch/arm64/kernel/insn.c > +++ b/arch/arm64/kernel/insn.c > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt) > * synchronization. > */ > ret = aarch64_insn_patch_text_nosync(addrs[0], insns[0]); > - kick_all_cpus_sync(); > + kick_active_cpus_sync(); > return ret; > } > } > diff --git a/include/linux/smp.h b/include/linux/smp.h > index 9fb239e12b82..27215e22240d 100644 > --- a/include/linux/smp.h > +++ b/include/linux/smp.h > @@ -105,6 +105,7 @@ int smp_call_function_any(const struct cpumask *mask, > smp_call_func_t func, void *info, int wait); > > void kick_all_cpus_sync(void); > +void kick_active_cpus_sync(void); > void wake_up_all_idle_cpus(void); > > /* > @@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func, > } > > static inline void kick_all_cpus_sync(void) { } > +static inline void kick_active_cpus_sync(void) { } > static inline void wake_up_all_idle_cpus(void) { } > > #ifdef CONFIG_UP_LATE_INIT > diff --git a/kernel/smp.c b/kernel/smp.c > index 084c8b3a2681..0358d6673850 100644 > --- a/kernel/smp.c > +++ b/kernel/smp.c > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void) > } > EXPORT_SYMBOL_GPL(kick_all_cpus_sync); > > +/** > + * kick_active_cpus_sync - Force CPUs that are not in extended > + * quiescent state (idle or nohz_full userspace) sync by sending > + * IPI. Extended quiescent state CPUs will sync at the exit of > + * that state. > + */ > +void kick_active_cpus_sync(void) > +{ > + int cpu; > + struct cpumask kernel_cpus; > + > + smp_mb(); > + > + cpumask_clear(&kernel_cpus); > + preempt_disable(); > + for_each_online_cpu(cpu) { > + if (!rcu_eqs_special_set(cpu)) If we get here, the CPU is not in a quiescent state, so we therefore must IPI it, correct? But don't you also need to define rcu_eqs_special_exit() so that RCU can invoke it when it next leaves its quiescent state? Or are you able to ignore the CPU in that case? (If you are able to ignore the CPU in that case, I could give you a lower-cost function to get your job done.) 
Thanx, Paul > + cpumask_set_cpu(cpu, &kernel_cpus); > + } > + smp_call_function_many(&kernel_cpus, do_nothing, NULL, 1); > + preempt_enable(); > +} > +EXPORT_SYMBOL_GPL(kick_active_cpus_sync); > + > /** > * wake_up_all_idle_cpus - break all cpus out of idle > * wake_up_all_idle_cpus try to break all cpus which is in idle state even > diff --git a/mm/slab.c b/mm/slab.c > index 324446621b3e..678d5dbd6f46 100644 > --- a/mm/slab.c > +++ b/mm/slab.c > @@ -3856,7 +3856,7 @@ static int __do_tune_cpucache(struct kmem_cache *cachep, int limit, > * cpus, so skip the IPIs. > */ > if (prev) > - kick_all_cpus_sync(); > + kick_active_cpus_sync(); > > check_irq_on(); > cachep->batchcount = batchcount; > -- > 2.14.1 >
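For context on the hook the loop above relies on, rcu_eqs_special_set(cpu) only succeeds for CPUs that are in an extended quiescent state; the following is a paraphrased contract, not the kernel source, shown only to make the partitioning in the loop clear:

	/*
	 * Paraphrased behaviour of rcu_eqs_special_set() (illustrative only):
	 *
	 *   - returns true  if @cpu is currently in an extended quiescent
	 *     state and the "special" request bit could be set, so @cpu will
	 *     run the EQS-exit work (with its implied full barrier) when it
	 *     leaves that state -- no IPI is needed;
	 *   - returns false if @cpu is running kernel code, so the caller
	 *     adds it to the mask and reaches it with an IPI as before.
	 */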
Hi Yury, On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote: > kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast IPI. > If CPU is in extended quiescent state (idle task or nohz_full userspace), this > work may be done at the exit of this state. Delaying synchronization helps to > save power if CPU is in idle state and decrease latency for real-time tasks. > > This patch introduces kick_active_cpus_sync() and uses it in mm/slab and arm64 > code to delay syncronization. > > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU running > isolated task would be fatal, as it breaks isolation. The approach with delaying > of synchronization work helps to maintain isolated state. > > I've tested it with test from task isolation series on ThunderX2 for more than > 10 hours (10k giga-ticks) without breaking isolation. > > Signed-off-by: Yury Norov <ynorov@caviumnetworks.com> > --- > arch/arm64/kernel/insn.c | 2 +- > include/linux/smp.h | 2 ++ > kernel/smp.c | 24 ++++++++++++++++++++++++ > mm/slab.c | 2 +- > 4 files changed, 28 insertions(+), 2 deletions(-) > > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c > index 2718a77da165..9d7c492e920e 100644 > --- a/arch/arm64/kernel/insn.c > +++ b/arch/arm64/kernel/insn.c > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt) > * synchronization. > */ > ret = aarch64_insn_patch_text_nosync(addrs[0], insns[0]); > - kick_all_cpus_sync(); > + kick_active_cpus_sync(); > return ret; > } > } > diff --git a/include/linux/smp.h b/include/linux/smp.h > index 9fb239e12b82..27215e22240d 100644 > --- a/include/linux/smp.h > +++ b/include/linux/smp.h > @@ -105,6 +105,7 @@ int smp_call_function_any(const struct cpumask *mask, > smp_call_func_t func, void *info, int wait); > > void kick_all_cpus_sync(void); > +void kick_active_cpus_sync(void); > void wake_up_all_idle_cpus(void); > > /* > @@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func, > } > > static inline void kick_all_cpus_sync(void) { } > +static inline void kick_active_cpus_sync(void) { } > static inline void wake_up_all_idle_cpus(void) { } > > #ifdef CONFIG_UP_LATE_INIT > diff --git a/kernel/smp.c b/kernel/smp.c > index 084c8b3a2681..0358d6673850 100644 > --- a/kernel/smp.c > +++ b/kernel/smp.c > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void) > } > EXPORT_SYMBOL_GPL(kick_all_cpus_sync); > > +/** > + * kick_active_cpus_sync - Force CPUs that are not in extended > + * quiescent state (idle or nohz_full userspace) sync by sending > + * IPI. Extended quiescent state CPUs will sync at the exit of > + * that state. > + */ > +void kick_active_cpus_sync(void) > +{ > + int cpu; > + struct cpumask kernel_cpus; > + > + smp_mb(); (A general remark only:) checkpatch.pl should have warned about the fact that this barrier is missing an accompanying comment (which accesses are being "ordered", what is the pairing barrier, etc.). Moreover if, as your reply above suggested, your patch is relying on "implicit barriers" (something I would not recommend) then even more so you should comment on these requirements. This could: (a) force you to reason about the memory ordering stuff, (b) easy the task of reviewing and adopting your patch, (c) easy the task of preserving those requirements (as implementations changes). 
Andrea > + > + cpumask_clear(&kernel_cpus); > + preempt_disable(); > + for_each_online_cpu(cpu) { > + if (!rcu_eqs_special_set(cpu)) > + cpumask_set_cpu(cpu, &kernel_cpus); > + } > + smp_call_function_many(&kernel_cpus, do_nothing, NULL, 1); > + preempt_enable(); > +} > +EXPORT_SYMBOL_GPL(kick_active_cpus_sync); > + > /** > * wake_up_all_idle_cpus - break all cpus out of idle > * wake_up_all_idle_cpus try to break all cpus which is in idle state even > diff --git a/mm/slab.c b/mm/slab.c > index 324446621b3e..678d5dbd6f46 100644 > --- a/mm/slab.c > +++ b/mm/slab.c > @@ -3856,7 +3856,7 @@ static int __do_tune_cpucache(struct kmem_cache *cachep, int limit, > * cpus, so skip the IPIs. > */ > if (prev) > - kick_all_cpus_sync(); > + kick_active_cpus_sync(); > > check_irq_on(); > cachep->batchcount = batchcount; > -- > 2.14.1 >
On Sun, Mar 25, 2018 at 11:11:54PM +0300, Yury Norov wrote: > On Sun, Mar 25, 2018 at 12:23:28PM -0700, Paul E. McKenney wrote: > > On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote: > > > kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast IPI. > > > If CPU is in extended quiescent state (idle task or nohz_full userspace), this > > > work may be done at the exit of this state. Delaying synchronization helps to > > > save power if CPU is in idle state and decrease latency for real-time tasks. > > > > > > This patch introduces kick_active_cpus_sync() and uses it in mm/slab and arm64 > > > code to delay syncronization. > > > > > > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU running > > > isolated task would be fatal, as it breaks isolation. The approach with delaying > > > of synchronization work helps to maintain isolated state. > > > > > > I've tested it with test from task isolation series on ThunderX2 for more than > > > 10 hours (10k giga-ticks) without breaking isolation. > > > > > > Signed-off-by: Yury Norov <ynorov@caviumnetworks.com> > > > --- > > > arch/arm64/kernel/insn.c | 2 +- > > > include/linux/smp.h | 2 ++ > > > kernel/smp.c | 24 ++++++++++++++++++++++++ > > > mm/slab.c | 2 +- > > > 4 files changed, 28 insertions(+), 2 deletions(-) > > > > > > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c > > > index 2718a77da165..9d7c492e920e 100644 > > > --- a/arch/arm64/kernel/insn.c > > > +++ b/arch/arm64/kernel/insn.c > > > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt) > > > * synchronization. > > > */ > > > ret = aarch64_insn_patch_text_nosync(addrs[0], insns[0]); > > > - kick_all_cpus_sync(); > > > + kick_active_cpus_sync(); > > > return ret; > > > } > > > } > > > diff --git a/include/linux/smp.h b/include/linux/smp.h > > > index 9fb239e12b82..27215e22240d 100644 > > > --- a/include/linux/smp.h > > > +++ b/include/linux/smp.h > > > @@ -105,6 +105,7 @@ int smp_call_function_any(const struct cpumask *mask, > > > smp_call_func_t func, void *info, int wait); > > > > > > void kick_all_cpus_sync(void); > > > +void kick_active_cpus_sync(void); > > > void wake_up_all_idle_cpus(void); > > > > > > /* > > > @@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func, > > > } > > > > > > static inline void kick_all_cpus_sync(void) { } > > > +static inline void kick_active_cpus_sync(void) { } > > > static inline void wake_up_all_idle_cpus(void) { } > > > > > > #ifdef CONFIG_UP_LATE_INIT > > > diff --git a/kernel/smp.c b/kernel/smp.c > > > index 084c8b3a2681..0358d6673850 100644 > > > --- a/kernel/smp.c > > > +++ b/kernel/smp.c > > > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void) > > > } > > > EXPORT_SYMBOL_GPL(kick_all_cpus_sync); > > > > > > +/** > > > + * kick_active_cpus_sync - Force CPUs that are not in extended > > > + * quiescent state (idle or nohz_full userspace) sync by sending > > > + * IPI. Extended quiescent state CPUs will sync at the exit of > > > + * that state. > > > + */ > > > +void kick_active_cpus_sync(void) > > > +{ > > > + int cpu; > > > + struct cpumask kernel_cpus; > > > + > > > + smp_mb(); > > > + > > > + cpumask_clear(&kernel_cpus); > > > + preempt_disable(); > > > + for_each_online_cpu(cpu) { > > > + if (!rcu_eqs_special_set(cpu)) > > > > If we get here, the CPU is not in a quiescent state, so we therefore > > must IPI it, correct? 
> > > > But don't you also need to define rcu_eqs_special_exit() so that RCU > > can invoke it when it next leaves its quiescent state? Or are you able > > to ignore the CPU in that case? (If you are able to ignore the CPU in > > that case, I could give you a lower-cost function to get your job done.) > > > > Thanx, Paul > > What's actually needed for synchronization is issuing memory barrier on target > CPUs before we start executing kernel code. > > smp_mb() is implicitly called in smp_call_function*() path for it. In > rcu_eqs_special_set() -> rcu_dynticks_eqs_exit() path, smp_mb__after_atomic() > is called just before rcu_eqs_special_exit(). > > So I think, rcu_eqs_special_exit() may be left untouched. Empty > rcu_eqs_special_exit() in new RCU path corresponds empty do_nothing() in old > IPI path. > > Or my understanding of smp_mb__after_atomic() is wrong? By default, > smp_mb__after_atomic() is just alias to smp_mb(). But some > architectures define it differently. x86, for example, aliases it to > just barrier() with a comment: "Atomic operations are already > serializing on x86". > > I was initially thinking that it's also fine to leave > rcu_eqs_special_exit() empty in this case, but now I'm not sure... > > Anyway, answering to your question, we shouldn't ignore quiescent > CPUs, and rcu_eqs_special_set() path is really needed as it issues > memory barrier on them. An alternative approach would be for me to make something like this and export it: bool rcu_cpu_in_eqs(int cpu) { struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu); int snap; smp_mb(); /* Obtain consistent snapshot, pairs with update. */ snap = READ_ONCE(&rdtp->dynticks); smp_mb(); /* See above. */ return !(snap & RCU_DYNTICK_CTRL_CTR); } Then you could replace your use of rcu_cpu_in_eqs() above with the new rcu_cpu_in_eqs(). This would avoid the RMW atomic, and, more important, the unnecessary write to ->dynticks. Or am I missing something? Thanx, Paul > Yury > > > > + cpumask_set_cpu(cpu, &kernel_cpus); > > > + } > > > + smp_call_function_many(&kernel_cpus, do_nothing, NULL, 1); > > > + preempt_enable(); > > > +} > > > +EXPORT_SYMBOL_GPL(kick_active_cpus_sync); > > > + > > > /** > > > * wake_up_all_idle_cpus - break all cpus out of idle > > > * wake_up_all_idle_cpus try to break all cpus which is in idle state even > > > diff --git a/mm/slab.c b/mm/slab.c > > > index 324446621b3e..678d5dbd6f46 100644 > > > --- a/mm/slab.c > > > +++ b/mm/slab.c > > > @@ -3856,7 +3856,7 @@ static int __do_tune_cpucache(struct kmem_cache *cachep, int limit, > > > * cpus, so skip the IPIs. > > > */ > > > if (prev) > > > - kick_all_cpus_sync(); > > > + kick_active_cpus_sync(); > > > > > > check_irq_on(); > > > cachep->batchcount = batchcount; > > > -- > > > 2.14.1 > > > >
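Paul's proposed alternative is easier to read re-flowed. This is a paraphrase of the sketch quoted above, with the snapshot written as atomic_read() on the assumption that ->dynticks is an atomic_t, so treat it as illustrative rather than reviewed code:

	/* Paraphrase of the helper proposed above; illustrative only. */
	bool rcu_cpu_in_eqs(int cpu)
	{
		struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
		int snap;

		smp_mb();  /* Obtain consistent snapshot, pairs with update. */
		snap = atomic_read(&rdtp->dynticks);	/* assumes atomic_t ->dynticks */
		smp_mb();  /* See above. */
		return !(snap & RCU_DYNTICK_CTRL_CTR);
	}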
On Mon, 26 Mar 2018 10:53:13 +0200 Andrea Parri <andrea.parri@amarulasolutions.com> wrote: > > --- a/kernel/smp.c > > +++ b/kernel/smp.c > > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void) > > } > > EXPORT_SYMBOL_GPL(kick_all_cpus_sync); > > > > +/** > > + * kick_active_cpus_sync - Force CPUs that are not in extended > > + * quiescent state (idle or nohz_full userspace) sync by sending > > + * IPI. Extended quiescent state CPUs will sync at the exit of > > + * that state. > > + */ > > +void kick_active_cpus_sync(void) > > +{ > > + int cpu; > > + struct cpumask kernel_cpus; > > + > > + smp_mb(); > > (A general remark only:) > > checkpatch.pl should have warned about the fact that this barrier is > missing an accompanying comment (which accesses are being "ordered", > what is the pairing barrier, etc.). He could have simply copied the comment above the smp_mb() for kick_all_cpus_sync(): /* Make sure the change is visible before we kick the cpus */ The kick itself is pretty much a synchronization primitive. That is, you make some changes and then you need all CPUs to see it, and you call: kick_active_cpus_synch(), which is the barrier to make sure you previous changes are seen on all CPUS before you proceed further. Note, the matching barrier is implicit in the IPI itself. -- Steve > > Moreover if, as your reply above suggested, your patch is relying on > "implicit barriers" (something I would not recommend) then even more > so you should comment on these requirements. > > This could: (a) force you to reason about the memory ordering stuff, > (b) easy the task of reviewing and adopting your patch, (c) easy the > task of preserving those requirements (as implementations changes). > > Andrea > > > > + > > + cpumask_clear(&kernel_cpus); > > + preempt_disable(); > > + for_each_online_cpu(cpu) { > > + if (!rcu_eqs_special_set(cpu)) > > + cpumask_set_cpu(cpu, &kernel_cpus); > > + } > > + smp_call_function_many(&kernel_cpus, do_nothing, NULL, 1); > > + preempt_enable(); > > +} > > +EXPORT_SYMBOL_GPL(kick_active_cpus_sync); > > + > > /** > > * wake_up_all_idle_cpus - break all cpus out of idle > > * wake_up_all_idle_cpus try to break all cpus which is in idle state even > > diff --git a/mm/slab.c b/mm/slab.c > > index 324446621b3e..678d5dbd6f46 100644 > > --- a/mm/slab.c > > +++ b/mm/slab.c > > @@ -3856,7 +3856,7 @@ static int __do_tune_cpucache(struct kmem_cache *cachep, int limit, > > * cpus, so skip the IPIs. > > */ > > if (prev) > > - kick_all_cpus_sync(); > > + kick_active_cpus_sync(); > > > > check_irq_on(); > > cachep->batchcount = batchcount; > > -- > > 2.14.1 > >
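Following Steve's suggestion, the barrier in a respin could simply carry the same comment as kick_all_cpus_sync(). A sketch of how that might look -- the comment wording is copied from the existing helper, everything else is as in the patch:

	void kick_active_cpus_sync(void)
	{
		int cpu;
		struct cpumask kernel_cpus;

		/* Make sure the change is visible before we kick the cpus */
		smp_mb();

		cpumask_clear(&kernel_cpus);
		preempt_disable();
		for_each_online_cpu(cpu) {
			if (!rcu_eqs_special_set(cpu))
				cpumask_set_cpu(cpu, &kernel_cpus);
		}
		smp_call_function_many(&kernel_cpus, do_nothing, NULL, 1);
		preempt_enable();
	}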
On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote: > kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast IPI. > If CPU is in extended quiescent state (idle task or nohz_full userspace), this > work may be done at the exit of this state. Delaying synchronization helps to > save power if CPU is in idle state and decrease latency for real-time tasks. > > This patch introduces kick_active_cpus_sync() and uses it in mm/slab and arm64 > code to delay syncronization. > > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU running > isolated task would be fatal, as it breaks isolation. The approach with delaying > of synchronization work helps to maintain isolated state. > > I've tested it with test from task isolation series on ThunderX2 for more than > 10 hours (10k giga-ticks) without breaking isolation. > > Signed-off-by: Yury Norov <ynorov@caviumnetworks.com> > --- > arch/arm64/kernel/insn.c | 2 +- > include/linux/smp.h | 2 ++ > kernel/smp.c | 24 ++++++++++++++++++++++++ > mm/slab.c | 2 +- > 4 files changed, 28 insertions(+), 2 deletions(-) > > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c > index 2718a77da165..9d7c492e920e 100644 > --- a/arch/arm64/kernel/insn.c > +++ b/arch/arm64/kernel/insn.c > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt) > * synchronization. > */ > ret = aarch64_insn_patch_text_nosync(addrs[0], insns[0]); > - kick_all_cpus_sync(); > + kick_active_cpus_sync(); > return ret; > } > } I think this means that runtime modifications to the kernel text might not be picked up by CPUs coming out of idle. Shouldn't we add an ISB on that path to avoid executing stale instructions? Will
On Tue, Mar 27, 2018 at 11:21:17AM +0100, Will Deacon wrote: > On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote: > > kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast IPI. > > If CPU is in extended quiescent state (idle task or nohz_full userspace), this > > work may be done at the exit of this state. Delaying synchronization helps to > > save power if CPU is in idle state and decrease latency for real-time tasks. > > > > This patch introduces kick_active_cpus_sync() and uses it in mm/slab and arm64 > > code to delay syncronization. > > > > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU running > > isolated task would be fatal, as it breaks isolation. The approach with delaying > > of synchronization work helps to maintain isolated state. > > > > I've tested it with test from task isolation series on ThunderX2 for more than > > 10 hours (10k giga-ticks) without breaking isolation. > > > > Signed-off-by: Yury Norov <ynorov@caviumnetworks.com> > > --- > > arch/arm64/kernel/insn.c | 2 +- > > include/linux/smp.h | 2 ++ > > kernel/smp.c | 24 ++++++++++++++++++++++++ > > mm/slab.c | 2 +- > > 4 files changed, 28 insertions(+), 2 deletions(-) > > > > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c > > index 2718a77da165..9d7c492e920e 100644 > > --- a/arch/arm64/kernel/insn.c > > +++ b/arch/arm64/kernel/insn.c > > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt) > > * synchronization. > > */ > > ret = aarch64_insn_patch_text_nosync(addrs[0], insns[0]); > > - kick_all_cpus_sync(); > > + kick_active_cpus_sync(); > > return ret; > > } > > } > > I think this means that runtime modifications to the kernel text might not > be picked up by CPUs coming out of idle. Shouldn't we add an ISB on that > path to avoid executing stale instructions? Thanks, Will, for the hint. I'll do that. Yury
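To illustrate Will's point (this is not from any posted patch): a CPU that skipped the IPI while idle needs a context-synchronization event before it can safely execute patched text, which on arm64 would mean something along these lines on the EQS/idle exit path:

	/*
	 * Hypothetical hook on the EQS-exit path -- not from any posted patch.
	 * A CPU that was idle while kernel text was patched must discard any
	 * instructions it may have prefetched before the patching.
	 */
	static inline void patch_text_sync_on_eqs_exit(void)
	{
		isb();	/* pipeline flush so patched instructions are refetched */
	}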
On Mon, Mar 26, 2018 at 02:57:35PM -0400, Steven Rostedt wrote: > On Mon, 26 Mar 2018 10:53:13 +0200 > Andrea Parri <andrea.parri@amarulasolutions.com> wrote: > > > > --- a/kernel/smp.c > > > +++ b/kernel/smp.c > > > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void) > > > } > > > EXPORT_SYMBOL_GPL(kick_all_cpus_sync); > > > > > > +/** > > > + * kick_active_cpus_sync - Force CPUs that are not in extended > > > + * quiescent state (idle or nohz_full userspace) sync by sending > > > + * IPI. Extended quiescent state CPUs will sync at the exit of > > > + * that state. > > > + */ > > > +void kick_active_cpus_sync(void) > > > +{ > > > + int cpu; > > > + struct cpumask kernel_cpus; > > > + > > > + smp_mb(); > > > > (A general remark only:) > > > > checkpatch.pl should have warned about the fact that this barrier is > > missing an accompanying comment (which accesses are being "ordered", > > what is the pairing barrier, etc.). > > He could have simply copied the comment above the smp_mb() for > kick_all_cpus_sync(): > > /* Make sure the change is visible before we kick the cpus */ > > The kick itself is pretty much a synchronization primitive. > > That is, you make some changes and then you need all CPUs to see it, > and you call: kick_active_cpus_synch(), which is the barrier to make > sure you previous changes are seen on all CPUS before you proceed > further. Note, the matching barrier is implicit in the IPI itself. > > -- Steve I know that I had to copy the comment from kick_all_cpus_sync(), but I don't like copy-pasting in general, and as Steven told, this smp_mb() is already inside synchronization routine, so we may hope that users of kick_*_cpus_sync() will explain better what for they need it... > > > > > Moreover if, as your reply above suggested, your patch is relying on > > "implicit barriers" (something I would not recommend) then even more > > so you should comment on these requirements. > > > > This could: (a) force you to reason about the memory ordering stuff, > > (b) easy the task of reviewing and adopting your patch, (c) easy the > > task of preserving those requirements (as implementations changes). > > > > Andrea I need v2 anyway, and I will add comments to address all questions in this thread. I also hope that we'll agree that for powerpc it's also safe to delay synchronization, and if so, we will have no users of kick_all_cpus_sync(), and can drop it. (It looks like this, because nohz_full userspace CPU cannot have pending IPIs, but I'd like to get confirmation from powerpc people.) Would it make sense to rename kick_all_cpus_sync() to smp_mb_sync(), which would stand for 'synchronous memory barrier on all online CPUs'? Yury
On Mon, Mar 26, 2018 at 05:45:55AM -0700, Paul E. McKenney wrote: > On Sun, Mar 25, 2018 at 11:11:54PM +0300, Yury Norov wrote: > > On Sun, Mar 25, 2018 at 12:23:28PM -0700, Paul E. McKenney wrote: > > > On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote: > > > > kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast IPI. > > > > If CPU is in extended quiescent state (idle task or nohz_full userspace), this > > > > work may be done at the exit of this state. Delaying synchronization helps to > > > > save power if CPU is in idle state and decrease latency for real-time tasks. > > > > > > > > This patch introduces kick_active_cpus_sync() and uses it in mm/slab and arm64 > > > > code to delay syncronization. > > > > > > > > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU running > > > > isolated task would be fatal, as it breaks isolation. The approach with delaying > > > > of synchronization work helps to maintain isolated state. > > > > > > > > I've tested it with test from task isolation series on ThunderX2 for more than > > > > 10 hours (10k giga-ticks) without breaking isolation. > > > > > > > > Signed-off-by: Yury Norov <ynorov@caviumnetworks.com> > > > > --- > > > > arch/arm64/kernel/insn.c | 2 +- > > > > include/linux/smp.h | 2 ++ > > > > kernel/smp.c | 24 ++++++++++++++++++++++++ > > > > mm/slab.c | 2 +- > > > > 4 files changed, 28 insertions(+), 2 deletions(-) > > > > > > > > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c > > > > index 2718a77da165..9d7c492e920e 100644 > > > > --- a/arch/arm64/kernel/insn.c > > > > +++ b/arch/arm64/kernel/insn.c > > > > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt) > > > > * synchronization. > > > > */ > > > > ret = aarch64_insn_patch_text_nosync(addrs[0], insns[0]); > > > > - kick_all_cpus_sync(); > > > > + kick_active_cpus_sync(); > > > > return ret; > > > > } > > > > } > > > > diff --git a/include/linux/smp.h b/include/linux/smp.h > > > > index 9fb239e12b82..27215e22240d 100644 > > > > --- a/include/linux/smp.h > > > > +++ b/include/linux/smp.h > > > > @@ -105,6 +105,7 @@ int smp_call_function_any(const struct cpumask *mask, > > > > smp_call_func_t func, void *info, int wait); > > > > > > > > void kick_all_cpus_sync(void); > > > > +void kick_active_cpus_sync(void); > > > > void wake_up_all_idle_cpus(void); > > > > > > > > /* > > > > @@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func, > > > > } > > > > > > > > static inline void kick_all_cpus_sync(void) { } > > > > +static inline void kick_active_cpus_sync(void) { } > > > > static inline void wake_up_all_idle_cpus(void) { } > > > > > > > > #ifdef CONFIG_UP_LATE_INIT > > > > diff --git a/kernel/smp.c b/kernel/smp.c > > > > index 084c8b3a2681..0358d6673850 100644 > > > > --- a/kernel/smp.c > > > > +++ b/kernel/smp.c > > > > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void) > > > > } > > > > EXPORT_SYMBOL_GPL(kick_all_cpus_sync); > > > > > > > > +/** > > > > + * kick_active_cpus_sync - Force CPUs that are not in extended > > > > + * quiescent state (idle or nohz_full userspace) sync by sending > > > > + * IPI. Extended quiescent state CPUs will sync at the exit of > > > > + * that state. 
> > > > + */ > > > > +void kick_active_cpus_sync(void) > > > > +{ > > > > + int cpu; > > > > + struct cpumask kernel_cpus; > > > > + > > > > + smp_mb(); > > > > + > > > > + cpumask_clear(&kernel_cpus); > > > > + preempt_disable(); > > > > + for_each_online_cpu(cpu) { > > > > + if (!rcu_eqs_special_set(cpu)) > > > > > > If we get here, the CPU is not in a quiescent state, so we therefore > > > must IPI it, correct? > > > > > > But don't you also need to define rcu_eqs_special_exit() so that RCU > > > can invoke it when it next leaves its quiescent state? Or are you able > > > to ignore the CPU in that case? (If you are able to ignore the CPU in > > > that case, I could give you a lower-cost function to get your job done.) > > > > > > Thanx, Paul > > > > What's actually needed for synchronization is issuing memory barrier on target > > CPUs before we start executing kernel code. > > > > smp_mb() is implicitly called in smp_call_function*() path for it. In > > rcu_eqs_special_set() -> rcu_dynticks_eqs_exit() path, smp_mb__after_atomic() > > is called just before rcu_eqs_special_exit(). > > > > So I think, rcu_eqs_special_exit() may be left untouched. Empty > > rcu_eqs_special_exit() in new RCU path corresponds empty do_nothing() in old > > IPI path. > > > > Or my understanding of smp_mb__after_atomic() is wrong? By default, > > smp_mb__after_atomic() is just alias to smp_mb(). But some > > architectures define it differently. x86, for example, aliases it to > > just barrier() with a comment: "Atomic operations are already > > serializing on x86". > > > > I was initially thinking that it's also fine to leave > > rcu_eqs_special_exit() empty in this case, but now I'm not sure... > > > > Anyway, answering to your question, we shouldn't ignore quiescent > > CPUs, and rcu_eqs_special_set() path is really needed as it issues > > memory barrier on them. > > An alternative approach would be for me to make something like this > and export it: > > bool rcu_cpu_in_eqs(int cpu) > { > struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu); > int snap; > > smp_mb(); /* Obtain consistent snapshot, pairs with update. */ > snap = READ_ONCE(&rdtp->dynticks); > smp_mb(); /* See above. */ > return !(snap & RCU_DYNTICK_CTRL_CTR); > } > > Then you could replace your use of rcu_cpu_in_eqs() above with Did you mean replace rcu_eqs_special_set()? > the new rcu_cpu_in_eqs(). This would avoid the RMW atomic, and, more > important, the unnecessary write to ->dynticks. > > Or am I missing something? > > Thanx, Paul This will not work because EQS CPUs will not be charged to call smp_mb() on exit of EQS. Lets sync our understanding of IPI and RCU mechanisms. Traditional IPI scheme looks like this: CPU1: CPU2: touch shared resource(); /* running any code */ smp_mb(); smp_call_function(); ---> handle_IPI() { /* Make resource visible */ smp_mb(); do_nothing(); } And new RCU scheme for eqs CPUs looks like this: CPU1: CPU2: touch shared resource(); /* Running EQS */ smp_mb(); if (RCU_DYNTICK_CTRL_CTR) set(RCU_DYNTICK_CTRL_MASK); /* Still in EQS */ /* And later */ rcu_dynticks_eqs_exit() { if (RCU_DYNTICK_CTRL_MASK) { /* Make resource visible */ smp_mb(); rcu_eqs_special_exit(); } } Is it correct? Yury
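Yury's two timelines above are hard to read in the archive form; re-flowed for readability (content unchanged):

	Traditional IPI scheme:

	CPU1:                            CPU2:
	touch shared resource();         /* running any code */
	smp_mb();
	smp_call_function();     --->    handle_IPI()
	                                 {
	                                     /* Make resource visible */
	                                     smp_mb();
	                                     do_nothing();
	                                 }

	New RCU scheme for EQS CPUs:

	CPU1:                            CPU2:
	touch shared resource();         /* Running EQS */
	smp_mb();
	if (RCU_DYNTICK_CTRL_CTR)
	    set(RCU_DYNTICK_CTRL_MASK);  /* Still in EQS */

	                                 /* And later */
	                                 rcu_dynticks_eqs_exit()
	                                 {
	                                     if (RCU_DYNTICK_CTRL_MASK) {
	                                         /* Make resource visible */
	                                         smp_mb();
	                                         rcu_eqs_special_exit();
	                                     }
	                                 }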
On Wed, Mar 28, 2018 at 04:36:05PM +0300, Yury Norov wrote: > On Mon, Mar 26, 2018 at 05:45:55AM -0700, Paul E. McKenney wrote: > > On Sun, Mar 25, 2018 at 11:11:54PM +0300, Yury Norov wrote: > > > On Sun, Mar 25, 2018 at 12:23:28PM -0700, Paul E. McKenney wrote: > > > > On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote: > > > > > kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast IPI. > > > > > If CPU is in extended quiescent state (idle task or nohz_full userspace), this > > > > > work may be done at the exit of this state. Delaying synchronization helps to > > > > > save power if CPU is in idle state and decrease latency for real-time tasks. > > > > > > > > > > This patch introduces kick_active_cpus_sync() and uses it in mm/slab and arm64 > > > > > code to delay syncronization. > > > > > > > > > > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU running > > > > > isolated task would be fatal, as it breaks isolation. The approach with delaying > > > > > of synchronization work helps to maintain isolated state. > > > > > > > > > > I've tested it with test from task isolation series on ThunderX2 for more than > > > > > 10 hours (10k giga-ticks) without breaking isolation. > > > > > > > > > > Signed-off-by: Yury Norov <ynorov@caviumnetworks.com> > > > > > --- > > > > > arch/arm64/kernel/insn.c | 2 +- > > > > > include/linux/smp.h | 2 ++ > > > > > kernel/smp.c | 24 ++++++++++++++++++++++++ > > > > > mm/slab.c | 2 +- > > > > > 4 files changed, 28 insertions(+), 2 deletions(-) > > > > > > > > > > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c > > > > > index 2718a77da165..9d7c492e920e 100644 > > > > > --- a/arch/arm64/kernel/insn.c > > > > > +++ b/arch/arm64/kernel/insn.c > > > > > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt) > > > > > * synchronization. > > > > > */ > > > > > ret = aarch64_insn_patch_text_nosync(addrs[0], insns[0]); > > > > > - kick_all_cpus_sync(); > > > > > + kick_active_cpus_sync(); > > > > > return ret; > > > > > } > > > > > } > > > > > diff --git a/include/linux/smp.h b/include/linux/smp.h > > > > > index 9fb239e12b82..27215e22240d 100644 > > > > > --- a/include/linux/smp.h > > > > > +++ b/include/linux/smp.h > > > > > @@ -105,6 +105,7 @@ int smp_call_function_any(const struct cpumask *mask, > > > > > smp_call_func_t func, void *info, int wait); > > > > > > > > > > void kick_all_cpus_sync(void); > > > > > +void kick_active_cpus_sync(void); > > > > > void wake_up_all_idle_cpus(void); > > > > > > > > > > /* > > > > > @@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func, > > > > > } > > > > > > > > > > static inline void kick_all_cpus_sync(void) { } > > > > > +static inline void kick_active_cpus_sync(void) { } > > > > > static inline void wake_up_all_idle_cpus(void) { } > > > > > > > > > > #ifdef CONFIG_UP_LATE_INIT > > > > > diff --git a/kernel/smp.c b/kernel/smp.c > > > > > index 084c8b3a2681..0358d6673850 100644 > > > > > --- a/kernel/smp.c > > > > > +++ b/kernel/smp.c > > > > > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void) > > > > > } > > > > > EXPORT_SYMBOL_GPL(kick_all_cpus_sync); > > > > > > > > > > +/** > > > > > + * kick_active_cpus_sync - Force CPUs that are not in extended > > > > > + * quiescent state (idle or nohz_full userspace) sync by sending > > > > > + * IPI. Extended quiescent state CPUs will sync at the exit of > > > > > + * that state. 
> > > > > + */ > > > > > +void kick_active_cpus_sync(void) > > > > > +{ > > > > > + int cpu; > > > > > + struct cpumask kernel_cpus; > > > > > + > > > > > + smp_mb(); > > > > > + > > > > > + cpumask_clear(&kernel_cpus); > > > > > + preempt_disable(); > > > > > + for_each_online_cpu(cpu) { > > > > > + if (!rcu_eqs_special_set(cpu)) > > > > > > > > If we get here, the CPU is not in a quiescent state, so we therefore > > > > must IPI it, correct? > > > > > > > > But don't you also need to define rcu_eqs_special_exit() so that RCU > > > > can invoke it when it next leaves its quiescent state? Or are you able > > > > to ignore the CPU in that case? (If you are able to ignore the CPU in > > > > that case, I could give you a lower-cost function to get your job done.) > > > > > > > > Thanx, Paul > > > > > > What's actually needed for synchronization is issuing memory barrier on target > > > CPUs before we start executing kernel code. > > > > > > smp_mb() is implicitly called in smp_call_function*() path for it. In > > > rcu_eqs_special_set() -> rcu_dynticks_eqs_exit() path, smp_mb__after_atomic() > > > is called just before rcu_eqs_special_exit(). > > > > > > So I think, rcu_eqs_special_exit() may be left untouched. Empty > > > rcu_eqs_special_exit() in new RCU path corresponds empty do_nothing() in old > > > IPI path. > > > > > > Or my understanding of smp_mb__after_atomic() is wrong? By default, > > > smp_mb__after_atomic() is just alias to smp_mb(). But some > > > architectures define it differently. x86, for example, aliases it to > > > just barrier() with a comment: "Atomic operations are already > > > serializing on x86". > > > > > > I was initially thinking that it's also fine to leave > > > rcu_eqs_special_exit() empty in this case, but now I'm not sure... > > > > > > Anyway, answering to your question, we shouldn't ignore quiescent > > > CPUs, and rcu_eqs_special_set() path is really needed as it issues > > > memory barrier on them. > > > > An alternative approach would be for me to make something like this > > and export it: > > > > bool rcu_cpu_in_eqs(int cpu) > > { > > struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu); > > int snap; > > > > smp_mb(); /* Obtain consistent snapshot, pairs with update. */ > > snap = READ_ONCE(&rdtp->dynticks); > > smp_mb(); /* See above. */ > > return !(snap & RCU_DYNTICK_CTRL_CTR); > > } > > > > Then you could replace your use of rcu_cpu_in_eqs() above with > > Did you mean replace rcu_eqs_special_set()? Yes, apologies for my confusion, and good show figuring it out. ;-) > > the new rcu_cpu_in_eqs(). This would avoid the RMW atomic, and, more > > important, the unnecessary write to ->dynticks. > > > > Or am I missing something? > > > > Thanx, Paul > > This will not work because EQS CPUs will not be charged to call > smp_mb() on exit of EQS. Actually, CPUs are guaranteed to do a value-returning atomic increment of ->dynticks on EQS exit, which implies smp_mb() both before and after that atomic increment. > Lets sync our understanding of IPI and RCU mechanisms. > > Traditional IPI scheme looks like this: > > CPU1: CPU2: > touch shared resource(); /* running any code */ > smp_mb(); > smp_call_function(); ---> handle_IPI() EQS exit here, so implied smp_mb() on both sides of the ->dynticks increment. 
> { > /* Make resource visible */ > smp_mb(); > do_nothing(); > } > > And new RCU scheme for eqs CPUs looks like this: > > CPU1: CPU2: > touch shared resource(); /* Running EQS */ > smp_mb(); > > if (RCU_DYNTICK_CTRL_CTR) > set(RCU_DYNTICK_CTRL_MASK); /* Still in EQS */ > > /* And later */ > rcu_dynticks_eqs_exit() > { > if (RCU_DYNTICK_CTRL_MASK) { > /* Make resource visible */ > smp_mb(); > rcu_eqs_special_exit(); > } > } > > Is it correct? You are missing the atomic_add_return() that is already in rcu_dynticks_eqs_exit(), and this value-returning atomic operation again implies smp_mb() both before and after. So you should be covered without needing to worry about RCU_DYNTICK_CTRL_MASK. Or am I missing something subtle here? Thanx, Paul
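The point about the value-returning atomic is easier to see against the shape of the EQS-exit path; the following is a paraphrase of rcu_dynticks_eqs_exit() from that era, not the verbatim source:

	/* Paraphrased shape of rcu_dynticks_eqs_exit(); not the verbatim source. */
	static void rcu_dynticks_eqs_exit(void)
	{
		struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
		int seq;

		/*
		 * Fully ordered RMW: implies smp_mb() both before and after the
		 * increment, so any CPU leaving an EQS has executed a full barrier
		 * even when no "special" work was requested.
		 */
		seq = atomic_add_return(RCU_DYNTICK_CTRL_CTR, &rdtp->dynticks);
		if (seq & RCU_DYNTICK_CTRL_MASK) {
			atomic_andnot(RCU_DYNTICK_CTRL_MASK, &rdtp->dynticks);
			smp_mb__after_atomic();	/* order against clearing the mask */
			rcu_eqs_special_exit();	/* hook run only when requested */
		}
	}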
On Wed, Mar 28, 2018 at 06:56:17AM -0700, Paul E. McKenney wrote: > On Wed, Mar 28, 2018 at 04:36:05PM +0300, Yury Norov wrote: > > On Mon, Mar 26, 2018 at 05:45:55AM -0700, Paul E. McKenney wrote: > > > On Sun, Mar 25, 2018 at 11:11:54PM +0300, Yury Norov wrote: > > > > On Sun, Mar 25, 2018 at 12:23:28PM -0700, Paul E. McKenney wrote: > > > > > On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote: > > > > > > kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast IPI. > > > > > > If CPU is in extended quiescent state (idle task or nohz_full userspace), this > > > > > > work may be done at the exit of this state. Delaying synchronization helps to > > > > > > save power if CPU is in idle state and decrease latency for real-time tasks. > > > > > > > > > > > > This patch introduces kick_active_cpus_sync() and uses it in mm/slab and arm64 > > > > > > code to delay syncronization. > > > > > > > > > > > > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU running > > > > > > isolated task would be fatal, as it breaks isolation. The approach with delaying > > > > > > of synchronization work helps to maintain isolated state. > > > > > > > > > > > > I've tested it with test from task isolation series on ThunderX2 for more than > > > > > > 10 hours (10k giga-ticks) without breaking isolation. > > > > > > > > > > > > Signed-off-by: Yury Norov <ynorov@caviumnetworks.com> > > > > > > --- > > > > > > arch/arm64/kernel/insn.c | 2 +- > > > > > > include/linux/smp.h | 2 ++ > > > > > > kernel/smp.c | 24 ++++++++++++++++++++++++ > > > > > > mm/slab.c | 2 +- > > > > > > 4 files changed, 28 insertions(+), 2 deletions(-) > > > > > > > > > > > > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c > > > > > > index 2718a77da165..9d7c492e920e 100644 > > > > > > --- a/arch/arm64/kernel/insn.c > > > > > > +++ b/arch/arm64/kernel/insn.c > > > > > > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt) > > > > > > * synchronization. 
> > > > > > */ > > > > > > ret = aarch64_insn_patch_text_nosync(addrs[0], insns[0]); > > > > > > - kick_all_cpus_sync(); > > > > > > + kick_active_cpus_sync(); > > > > > > return ret; > > > > > > } > > > > > > } > > > > > > diff --git a/include/linux/smp.h b/include/linux/smp.h > > > > > > index 9fb239e12b82..27215e22240d 100644 > > > > > > --- a/include/linux/smp.h > > > > > > +++ b/include/linux/smp.h > > > > > > @@ -105,6 +105,7 @@ int smp_call_function_any(const struct cpumask *mask, > > > > > > smp_call_func_t func, void *info, int wait); > > > > > > > > > > > > void kick_all_cpus_sync(void); > > > > > > +void kick_active_cpus_sync(void); > > > > > > void wake_up_all_idle_cpus(void); > > > > > > > > > > > > /* > > > > > > @@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func, > > > > > > } > > > > > > > > > > > > static inline void kick_all_cpus_sync(void) { } > > > > > > +static inline void kick_active_cpus_sync(void) { } > > > > > > static inline void wake_up_all_idle_cpus(void) { } > > > > > > > > > > > > #ifdef CONFIG_UP_LATE_INIT > > > > > > diff --git a/kernel/smp.c b/kernel/smp.c > > > > > > index 084c8b3a2681..0358d6673850 100644 > > > > > > --- a/kernel/smp.c > > > > > > +++ b/kernel/smp.c > > > > > > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void) > > > > > > } > > > > > > EXPORT_SYMBOL_GPL(kick_all_cpus_sync); > > > > > > > > > > > > +/** > > > > > > + * kick_active_cpus_sync - Force CPUs that are not in extended > > > > > > + * quiescent state (idle or nohz_full userspace) sync by sending > > > > > > + * IPI. Extended quiescent state CPUs will sync at the exit of > > > > > > + * that state. > > > > > > + */ > > > > > > +void kick_active_cpus_sync(void) > > > > > > +{ > > > > > > + int cpu; > > > > > > + struct cpumask kernel_cpus; > > > > > > + > > > > > > + smp_mb(); > > > > > > + > > > > > > + cpumask_clear(&kernel_cpus); > > > > > > + preempt_disable(); > > > > > > + for_each_online_cpu(cpu) { > > > > > > + if (!rcu_eqs_special_set(cpu)) > > > > > > > > > > If we get here, the CPU is not in a quiescent state, so we therefore > > > > > must IPI it, correct? > > > > > > > > > > But don't you also need to define rcu_eqs_special_exit() so that RCU > > > > > can invoke it when it next leaves its quiescent state? Or are you able > > > > > to ignore the CPU in that case? (If you are able to ignore the CPU in > > > > > that case, I could give you a lower-cost function to get your job done.) > > > > > > > > > > Thanx, Paul > > > > > > > > What's actually needed for synchronization is issuing memory barrier on target > > > > CPUs before we start executing kernel code. > > > > > > > > smp_mb() is implicitly called in smp_call_function*() path for it. In > > > > rcu_eqs_special_set() -> rcu_dynticks_eqs_exit() path, smp_mb__after_atomic() > > > > is called just before rcu_eqs_special_exit(). > > > > > > > > So I think, rcu_eqs_special_exit() may be left untouched. Empty > > > > rcu_eqs_special_exit() in new RCU path corresponds empty do_nothing() in old > > > > IPI path. > > > > > > > > Or my understanding of smp_mb__after_atomic() is wrong? By default, > > > > smp_mb__after_atomic() is just alias to smp_mb(). But some > > > > architectures define it differently. x86, for example, aliases it to > > > > just barrier() with a comment: "Atomic operations are already > > > > serializing on x86". > > > > > > > > I was initially thinking that it's also fine to leave > > > > rcu_eqs_special_exit() empty in this case, but now I'm not sure... 
> > > > > > > > Anyway, answering to your question, we shouldn't ignore quiescent > > > > CPUs, and rcu_eqs_special_set() path is really needed as it issues > > > > memory barrier on them. > > > > > > An alternative approach would be for me to make something like this > > > and export it: > > > > > > bool rcu_cpu_in_eqs(int cpu) > > > { > > > struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu); > > > int snap; > > > > > > smp_mb(); /* Obtain consistent snapshot, pairs with update. */ > > > snap = READ_ONCE(&rdtp->dynticks); > > > smp_mb(); /* See above. */ > > > return !(snap & RCU_DYNTICK_CTRL_CTR); > > > } > > > > > > Then you could replace your use of rcu_cpu_in_eqs() above with > > > > Did you mean replace rcu_eqs_special_set()? > > Yes, apologies for my confusion, and good show figuring it out. ;-) > > > > the new rcu_cpu_in_eqs(). This would avoid the RMW atomic, and, more > > > important, the unnecessary write to ->dynticks. > > > > > > Or am I missing something? > > > > > > Thanx, Paul > > > > This will not work because EQS CPUs will not be charged to call > > smp_mb() on exit of EQS. > > Actually, CPUs are guaranteed to do a value-returning atomic increment > of ->dynticks on EQS exit, which implies smp_mb() both before and after > that atomic increment. > > > Lets sync our understanding of IPI and RCU mechanisms. > > > > Traditional IPI scheme looks like this: > > > > CPU1: CPU2: > > touch shared resource(); /* running any code */ > > smp_mb(); > > smp_call_function(); ---> handle_IPI() > > EQS exit here, so implied > smp_mb() on both sides of the > ->dynticks increment. > > > { > > /* Make resource visible */ > > smp_mb(); > > do_nothing(); > > } > > > > And new RCU scheme for eqs CPUs looks like this: > > > > CPU1: CPU2: > > touch shared resource(); /* Running EQS */ > > smp_mb(); > > > > if (RCU_DYNTICK_CTRL_CTR) > > set(RCU_DYNTICK_CTRL_MASK); /* Still in EQS */ > > > > /* And later */ > > rcu_dynticks_eqs_exit() > > { > > if (RCU_DYNTICK_CTRL_MASK) { > > /* Make resource visible */ > > smp_mb(); > > rcu_eqs_special_exit(); > > } > > } > > > > Is it correct? > > You are missing the atomic_add_return() that is already in > rcu_dynticks_eqs_exit(), and this value-returning atomic operation again > implies smp_mb() both before and after. So you should be covered without > needing to worry about RCU_DYNTICK_CTRL_MASK. > > Or am I missing something subtle here? Ah, now I understand, thank you. I'll collect other comments for more, and submit v2 with this change. Yury
On Wed, Mar 28, 2018 at 05:41:40PM +0300, Yury Norov wrote: > On Wed, Mar 28, 2018 at 06:56:17AM -0700, Paul E. McKenney wrote: > > On Wed, Mar 28, 2018 at 04:36:05PM +0300, Yury Norov wrote: > > > On Mon, Mar 26, 2018 at 05:45:55AM -0700, Paul E. McKenney wrote: > > > > On Sun, Mar 25, 2018 at 11:11:54PM +0300, Yury Norov wrote: > > > > > On Sun, Mar 25, 2018 at 12:23:28PM -0700, Paul E. McKenney wrote: > > > > > > On Sun, Mar 25, 2018 at 08:50:04PM +0300, Yury Norov wrote: > > > > > > > kick_all_cpus_sync() forces all CPUs to sync caches by sending broadcast IPI. > > > > > > > If CPU is in extended quiescent state (idle task or nohz_full userspace), this > > > > > > > work may be done at the exit of this state. Delaying synchronization helps to > > > > > > > save power if CPU is in idle state and decrease latency for real-time tasks. > > > > > > > > > > > > > > This patch introduces kick_active_cpus_sync() and uses it in mm/slab and arm64 > > > > > > > code to delay syncronization. > > > > > > > > > > > > > > For task isolation (https://lkml.org/lkml/2017/11/3/589), IPI to the CPU running > > > > > > > isolated task would be fatal, as it breaks isolation. The approach with delaying > > > > > > > of synchronization work helps to maintain isolated state. > > > > > > > > > > > > > > I've tested it with test from task isolation series on ThunderX2 for more than > > > > > > > 10 hours (10k giga-ticks) without breaking isolation. > > > > > > > > > > > > > > Signed-off-by: Yury Norov <ynorov@caviumnetworks.com> > > > > > > > --- > > > > > > > arch/arm64/kernel/insn.c | 2 +- > > > > > > > include/linux/smp.h | 2 ++ > > > > > > > kernel/smp.c | 24 ++++++++++++++++++++++++ > > > > > > > mm/slab.c | 2 +- > > > > > > > 4 files changed, 28 insertions(+), 2 deletions(-) > > > > > > > > > > > > > > diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c > > > > > > > index 2718a77da165..9d7c492e920e 100644 > > > > > > > --- a/arch/arm64/kernel/insn.c > > > > > > > +++ b/arch/arm64/kernel/insn.c > > > > > > > @@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt) > > > > > > > * synchronization. 
> > > > > > > */ > > > > > > > ret = aarch64_insn_patch_text_nosync(addrs[0], insns[0]); > > > > > > > - kick_all_cpus_sync(); > > > > > > > + kick_active_cpus_sync(); > > > > > > > return ret; > > > > > > > } > > > > > > > } > > > > > > > diff --git a/include/linux/smp.h b/include/linux/smp.h > > > > > > > index 9fb239e12b82..27215e22240d 100644 > > > > > > > --- a/include/linux/smp.h > > > > > > > +++ b/include/linux/smp.h > > > > > > > @@ -105,6 +105,7 @@ int smp_call_function_any(const struct cpumask *mask, > > > > > > > smp_call_func_t func, void *info, int wait); > > > > > > > > > > > > > > void kick_all_cpus_sync(void); > > > > > > > +void kick_active_cpus_sync(void); > > > > > > > void wake_up_all_idle_cpus(void); > > > > > > > > > > > > > > /* > > > > > > > @@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func, > > > > > > > } > > > > > > > > > > > > > > static inline void kick_all_cpus_sync(void) { } > > > > > > > +static inline void kick_active_cpus_sync(void) { } > > > > > > > static inline void wake_up_all_idle_cpus(void) { } > > > > > > > > > > > > > > #ifdef CONFIG_UP_LATE_INIT > > > > > > > diff --git a/kernel/smp.c b/kernel/smp.c > > > > > > > index 084c8b3a2681..0358d6673850 100644 > > > > > > > --- a/kernel/smp.c > > > > > > > +++ b/kernel/smp.c > > > > > > > @@ -724,6 +724,30 @@ void kick_all_cpus_sync(void) > > > > > > > } > > > > > > > EXPORT_SYMBOL_GPL(kick_all_cpus_sync); > > > > > > > > > > > > > > +/** > > > > > > > + * kick_active_cpus_sync - Force CPUs that are not in extended > > > > > > > + * quiescent state (idle or nohz_full userspace) sync by sending > > > > > > > + * IPI. Extended quiescent state CPUs will sync at the exit of > > > > > > > + * that state. > > > > > > > + */ > > > > > > > +void kick_active_cpus_sync(void) > > > > > > > +{ > > > > > > > + int cpu; > > > > > > > + struct cpumask kernel_cpus; > > > > > > > + > > > > > > > + smp_mb(); > > > > > > > + > > > > > > > + cpumask_clear(&kernel_cpus); > > > > > > > + preempt_disable(); > > > > > > > + for_each_online_cpu(cpu) { > > > > > > > + if (!rcu_eqs_special_set(cpu)) > > > > > > > > > > > > If we get here, the CPU is not in a quiescent state, so we therefore > > > > > > must IPI it, correct? > > > > > > > > > > > > But don't you also need to define rcu_eqs_special_exit() so that RCU > > > > > > can invoke it when it next leaves its quiescent state? Or are you able > > > > > > to ignore the CPU in that case? (If you are able to ignore the CPU in > > > > > > that case, I could give you a lower-cost function to get your job done.) > > > > > > > > > > > > Thanx, Paul > > > > > > > > > > What's actually needed for synchronization is issuing memory barrier on target > > > > > CPUs before we start executing kernel code. > > > > > > > > > > smp_mb() is implicitly called in smp_call_function*() path for it. In > > > > > rcu_eqs_special_set() -> rcu_dynticks_eqs_exit() path, smp_mb__after_atomic() > > > > > is called just before rcu_eqs_special_exit(). > > > > > > > > > > So I think, rcu_eqs_special_exit() may be left untouched. Empty > > > > > rcu_eqs_special_exit() in new RCU path corresponds empty do_nothing() in old > > > > > IPI path. > > > > > > > > > > Or my understanding of smp_mb__after_atomic() is wrong? By default, > > > > > smp_mb__after_atomic() is just alias to smp_mb(). But some > > > > > architectures define it differently. 
x86, for example, aliases it to > > > > > just barrier() with a comment: "Atomic operations are already > > > > > serializing on x86". > > > > > > > > > > I was initially thinking that it's also fine to leave > > > > > rcu_eqs_special_exit() empty in this case, but now I'm not sure... > > > > > > > > > > Anyway, answering to your question, we shouldn't ignore quiescent > > > > > CPUs, and rcu_eqs_special_set() path is really needed as it issues > > > > > memory barrier on them. > > > > > > > > An alternative approach would be for me to make something like this > > > > and export it: > > > > > > > > bool rcu_cpu_in_eqs(int cpu) > > > > { > > > > struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu); > > > > int snap; > > > > > > > > smp_mb(); /* Obtain consistent snapshot, pairs with update. */ > > > > snap = READ_ONCE(&rdtp->dynticks); > > > > smp_mb(); /* See above. */ > > > > return !(snap & RCU_DYNTICK_CTRL_CTR); > > > > } > > > > > > > > Then you could replace your use of rcu_cpu_in_eqs() above with > > > > > > Did you mean replace rcu_eqs_special_set()? > > > > Yes, apologies for my confusion, and good show figuring it out. ;-) > > > > > > the new rcu_cpu_in_eqs(). This would avoid the RMW atomic, and, more > > > > important, the unnecessary write to ->dynticks. > > > > > > > > Or am I missing something? > > > > > > > > Thanx, Paul > > > > > > This will not work because EQS CPUs will not be charged to call > > > smp_mb() on exit of EQS. > > > > Actually, CPUs are guaranteed to do a value-returning atomic increment > > of ->dynticks on EQS exit, which implies smp_mb() both before and after > > that atomic increment. > > > > > Lets sync our understanding of IPI and RCU mechanisms. > > > > > > Traditional IPI scheme looks like this: > > > > > > CPU1: CPU2: > > > touch shared resource(); /* running any code */ > > > smp_mb(); > > > smp_call_function(); ---> handle_IPI() > > > > EQS exit here, so implied > > smp_mb() on both sides of the > > ->dynticks increment. > > > > > { > > > /* Make resource visible */ > > > smp_mb(); > > > do_nothing(); > > > } > > > > > > And new RCU scheme for eqs CPUs looks like this: > > > > > > CPU1: CPU2: > > > touch shared resource(); /* Running EQS */ > > > smp_mb(); > > > > > > if (RCU_DYNTICK_CTRL_CTR) > > > set(RCU_DYNTICK_CTRL_MASK); /* Still in EQS */ > > > > > > /* And later */ > > > rcu_dynticks_eqs_exit() > > > { > > > if (RCU_DYNTICK_CTRL_MASK) { > > > /* Make resource visible */ > > > smp_mb(); > > > rcu_eqs_special_exit(); > > > } > > > } > > > > > > Is it correct? > > > > You are missing the atomic_add_return() that is already in > > rcu_dynticks_eqs_exit(), and this value-returning atomic operation again > > implies smp_mb() both before and after. So you should be covered without > > needing to worry about RCU_DYNTICK_CTRL_MASK. > > > > Or am I missing something subtle here? > > Ah, now I understand, thank you. I'll collect other comments for more, and > submit v2 with this change. Very good, looking forward to seeing v2. Thanx, Paul
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 2718a77da165..9d7c492e920e 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -291,7 +291,7 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt)
 		 * synchronization.
 		 */
 		ret = aarch64_insn_patch_text_nosync(addrs[0], insns[0]);
-		kick_all_cpus_sync();
+		kick_active_cpus_sync();
 		return ret;
 	}
 }
diff --git a/include/linux/smp.h b/include/linux/smp.h
index 9fb239e12b82..27215e22240d 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -105,6 +105,7 @@ int smp_call_function_any(const struct cpumask *mask,
 			  smp_call_func_t func, void *info, int wait);
 
 void kick_all_cpus_sync(void);
+void kick_active_cpus_sync(void);
 void wake_up_all_idle_cpus(void);
 
 /*
@@ -161,6 +162,7 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func,
 }
 
 static inline void kick_all_cpus_sync(void) { }
+static inline void kick_active_cpus_sync(void) { }
 static inline void wake_up_all_idle_cpus(void) { }
 
 #ifdef CONFIG_UP_LATE_INIT
diff --git a/kernel/smp.c b/kernel/smp.c
index 084c8b3a2681..0358d6673850 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -724,6 +724,30 @@ void kick_all_cpus_sync(void)
 }
 EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
 
+/**
+ * kick_active_cpus_sync - Force CPUs that are not in extended
+ * quiescent state (idle or nohz_full userspace) sync by sending
+ * IPI. Extended quiescent state CPUs will sync at the exit of
+ * that state.
+ */
+void kick_active_cpus_sync(void)
+{
+	int cpu;
+	struct cpumask kernel_cpus;
+
+	smp_mb();
+
+	cpumask_clear(&kernel_cpus);
+	preempt_disable();
+	for_each_online_cpu(cpu) {
+		if (!rcu_eqs_special_set(cpu))
+			cpumask_set_cpu(cpu, &kernel_cpus);
+	}
+	smp_call_function_many(&kernel_cpus, do_nothing, NULL, 1);
+	preempt_enable();
+}
+EXPORT_SYMBOL_GPL(kick_active_cpus_sync);
+
 /**
  * wake_up_all_idle_cpus - break all cpus out of idle
  * wake_up_all_idle_cpus try to break all cpus which is in idle state even
diff --git a/mm/slab.c b/mm/slab.c
index 324446621b3e..678d5dbd6f46 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3856,7 +3856,7 @@ static int __do_tune_cpucache(struct kmem_cache *cachep, int limit,
 	 * cpus, so skip the IPIs.
 	 */
 	if (prev)
-		kick_all_cpus_sync();
+		kick_active_cpus_sync();
 
 	check_irq_on();
 	cachep->batchcount = batchcount;
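For contrast with the new helper above, the existing kick_all_cpus_sync() is essentially the following (paraphrased): it unconditionally IPIs every online CPU, including idle and nohz_full ones, which is exactly the cost the patch tries to avoid.

	/* Roughly the existing helper in kernel/smp.c (paraphrased). */
	static void do_nothing(void *unused)
	{
	}

	void kick_all_cpus_sync(void)
	{
		/* Make sure the change is visible before we kick the cpus */
		smp_mb();
		smp_call_function(do_nothing, NULL, 1);	/* broadcast IPI, wait for completion */
	}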
kick_all_cpus_sync() forces all CPUs to sync caches by sending a broadcast IPI. If a CPU is in an extended quiescent state (idle task or nohz_full userspace), this work may be done at the exit of that state. Delaying synchronization helps to save power if the CPU is idle and decreases latency for real-time tasks.

This patch introduces kick_active_cpus_sync() and uses it in mm/slab and arm64 code to delay synchronization.

For task isolation (https://lkml.org/lkml/2017/11/3/589), an IPI to the CPU running an isolated task would be fatal, as it breaks isolation. The approach of delaying the synchronization work helps to maintain the isolated state.

I've tested it with the test from the task isolation series on ThunderX2 for more than 10 hours (10k giga-ticks) without breaking isolation.

Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
---
 arch/arm64/kernel/insn.c |  2 +-
 include/linux/smp.h      |  2 ++
 kernel/smp.c             | 24 ++++++++++++++++++++++++
 mm/slab.c                |  2 +-
 4 files changed, 28 insertions(+), 2 deletions(-)