Message ID: 20220312155603.1752193-1-guoren@kernel.org (mailing list archive)
State: New, archived
Series: [RFC] arch: patch_text: Fixup last cpu should be master
Hi Guo Ren,

On Sat, Mar 12, 2022 at 7:56 AM <guoren@kernel.org> wrote:
>
> From: Guo Ren <guoren@linux.alibaba.com>
>
> These patch_text implementations are using stop_machine_cpuslocked
> infrastructure with atomic cpu_count. The origin idea is that when

The original

> the master CPU patch_text, others should wait for it. But current
> implementation is using the first CPU as master, which couldn't
> guarantee continue CPUs are waiting. This patch changes the last

guarantee that remaining CPUs are waiting.

> CPU as the master to solve the potaintial risk.

potential

>
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@kernel.org>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Chris Zankel <chris@zankel.net>
> Cc: Max Filippov <jcmvbkbc@gmail.com>
> Cc: Arnd Bergmann <arnd@arndb.de>
> ---
>  arch/arm64/kernel/patching.c      | 4 ++--
>  arch/csky/kernel/probes/kprobes.c | 2 +-
>  arch/riscv/kernel/patch.c         | 2 +-
>  arch/xtensa/kernel/jump_label.c   | 2 +-
>  4 files changed, 5 insertions(+), 5 deletions(-)

Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>

I'm curious, is there a specific issue that prompted this patch?

--
Thanks.
-- Max
On Sat, Mar 12, 2022 at 7:56 AM <guoren@kernel.org> wrote:
>
> From: Guo Ren <guoren@linux.alibaba.com>
>
> These patch_text implementations are using stop_machine_cpuslocked
> infrastructure with atomic cpu_count. The origin idea is that when
> the master CPU patch_text, others should wait for it. But current
> implementation is using the first CPU as master, which couldn't
> guarantee continue CPUs are waiting. This patch changes the last
> CPU as the master to solve the potaintial risk.
>
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@kernel.org>
> Cc: Will Deacon <will@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Chris Zankel <chris@zankel.net>
> Cc: Max Filippov <jcmvbkbc@gmail.com>
> Cc: Arnd Bergmann <arnd@arndb.de>
> ---
>  arch/arm64/kernel/patching.c      | 4 ++--
>  arch/csky/kernel/probes/kprobes.c | 2 +-
>  arch/riscv/kernel/patch.c         | 2 +-
>  arch/xtensa/kernel/jump_label.c   | 2 +-
>  4 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
> index 771f543464e0..6cfea9650e65 100644
> --- a/arch/arm64/kernel/patching.c
> +++ b/arch/arm64/kernel/patching.c
> @@ -117,8 +117,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
>  	int i, ret = 0;
>  	struct aarch64_insn_patch *pp = arg;
>
> -	/* The first CPU becomes master */
> -	if (atomic_inc_return(&pp->cpu_count) == 1) {
> +	/* The last CPU becomes master */
> +	if (atomic_inc_return(&pp->cpu_count) == (num_online_cpus() - 1)) {

atomic_inc_return returns the incremented value, so the last CPU gets
num_online_cpus(), not (num_online_cpus() - 1).

>  		for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
>  			ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
>  							     pp->new_insns[i]);
> diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
> index 42920f25e73c..19821a06a991 100644
> --- a/arch/csky/kernel/probes/kprobes.c
> +++ b/arch/csky/kernel/probes/kprobes.c
> @@ -30,7 +30,7 @@ static int __kprobes patch_text_cb(void *priv)
>  	struct csky_insn_patch *param = priv;
>  	unsigned int addr = (unsigned int)param->addr;
>
> -	if (atomic_inc_return(&param->cpu_count) == 1) {
> +	if (atomic_inc_return(&param->cpu_count) == (num_online_cpus() - 1)) {

Ditto.

>  		*(u16 *) addr = cpu_to_le16(param->opcode);
>  		dcache_wb_range(addr, addr + 2);
>  		atomic_inc(&param->cpu_count);
> diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
> index 0b552873a577..cca72a9388e3 100644
> --- a/arch/riscv/kernel/patch.c
> +++ b/arch/riscv/kernel/patch.c
> @@ -104,7 +104,7 @@ static int patch_text_cb(void *data)
>  	struct patch_insn *patch = data;
>  	int ret = 0;
>
> -	if (atomic_inc_return(&patch->cpu_count) == 1) {
> +	if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {

Ditto.

>  		ret =
>  		    patch_text_nosync(patch->addr, &patch->insn,
>  				      GET_INSN_LENGTH(patch->insn));
> diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c
> index 61cf6497a646..7e1d3f952eb3 100644
> --- a/arch/xtensa/kernel/jump_label.c
> +++ b/arch/xtensa/kernel/jump_label.c
> @@ -40,7 +40,7 @@ static int patch_text_stop_machine(void *data)
>  {
>  	struct patch *patch = data;
>
> -	if (atomic_inc_return(&patch->cpu_count) == 1) {
> +	if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {

Ditto.

>  		local_patch_text(patch->addr, patch->data, patch->sz);
>  		atomic_inc(&patch->cpu_count);
>  	} else {
> --
> 2.25.1
>

--
Thanks.
-- Max
On Sun, Mar 13, 2022 at 7:57 AM Max Filippov <jcmvbkbc@gmail.com> wrote:
>
> On Sat, Mar 12, 2022 at 7:56 AM <guoren@kernel.org> wrote:
> >
> > From: Guo Ren <guoren@linux.alibaba.com>
> >
> > These patch_text implementations are using stop_machine_cpuslocked
> > infrastructure with atomic cpu_count. The origin idea is that when
> > the master CPU patch_text, others should wait for it. But current
> > implementation is using the first CPU as master, which couldn't
> > guarantee continue CPUs are waiting. This patch changes the last
> > CPU as the master to solve the potaintial risk.
> >
> > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > Signed-off-by: Guo Ren <guoren@kernel.org>
> > Cc: Will Deacon <will@kernel.org>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Palmer Dabbelt <palmer@dabbelt.com>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > Cc: Chris Zankel <chris@zankel.net>
> > Cc: Max Filippov <jcmvbkbc@gmail.com>
> > Cc: Arnd Bergmann <arnd@arndb.de>
> > ---
> >  arch/arm64/kernel/patching.c      | 4 ++--
> >  arch/csky/kernel/probes/kprobes.c | 2 +-
> >  arch/riscv/kernel/patch.c         | 2 +-
> >  arch/xtensa/kernel/jump_label.c   | 2 +-
> >  4 files changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
> > index 771f543464e0..6cfea9650e65 100644
> > --- a/arch/arm64/kernel/patching.c
> > +++ b/arch/arm64/kernel/patching.c
> > @@ -117,8 +117,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
> >  	int i, ret = 0;
> >  	struct aarch64_insn_patch *pp = arg;
> >
> > -	/* The first CPU becomes master */
> > -	if (atomic_inc_return(&pp->cpu_count) == 1) {
> > +	/* The last CPU becomes master */
> > +	if (atomic_inc_return(&pp->cpu_count) == (num_online_cpus() - 1)) {
>
> atomic_inc_return returns the incremented value, so the last CPU gets
> num_online_cpus(), not (num_online_cpus() - 1).

Oops! You are right, thx.

>
> >  		for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
> >  			ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
> >  							     pp->new_insns[i]);
> > diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
> > index 42920f25e73c..19821a06a991 100644
> > --- a/arch/csky/kernel/probes/kprobes.c
> > +++ b/arch/csky/kernel/probes/kprobes.c
> > @@ -30,7 +30,7 @@ static int __kprobes patch_text_cb(void *priv)
> >  	struct csky_insn_patch *param = priv;
> >  	unsigned int addr = (unsigned int)param->addr;
> >
> > -	if (atomic_inc_return(&param->cpu_count) == 1) {
> > +	if (atomic_inc_return(&param->cpu_count) == (num_online_cpus() - 1)) {
>
> Ditto.
>
> >  		*(u16 *) addr = cpu_to_le16(param->opcode);
> >  		dcache_wb_range(addr, addr + 2);
> >  		atomic_inc(&param->cpu_count);
> > diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
> > index 0b552873a577..cca72a9388e3 100644
> > --- a/arch/riscv/kernel/patch.c
> > +++ b/arch/riscv/kernel/patch.c
> > @@ -104,7 +104,7 @@ static int patch_text_cb(void *data)
> >  	struct patch_insn *patch = data;
> >  	int ret = 0;
> >
> > -	if (atomic_inc_return(&patch->cpu_count) == 1) {
> > +	if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {
>
> Ditto.
>
> >  		ret =
> >  		    patch_text_nosync(patch->addr, &patch->insn,
> >  				      GET_INSN_LENGTH(patch->insn));
> > diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c
> > index 61cf6497a646..7e1d3f952eb3 100644
> > --- a/arch/xtensa/kernel/jump_label.c
> > +++ b/arch/xtensa/kernel/jump_label.c
> > @@ -40,7 +40,7 @@ static int patch_text_stop_machine(void *data)
> >  {
> >  	struct patch *patch = data;
> >
> > -	if (atomic_inc_return(&patch->cpu_count) == 1) {
> > +	if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {
>
> Ditto.
>
> >  		local_patch_text(patch->addr, patch->data, patch->sz);
> >  		atomic_inc(&patch->cpu_count);
> >  	} else {
> > --
> > 2.25.1
> >
>
> --
> Thanks.
> -- Max
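[The off-by-one Max points out is easy to check outside the kernel: if N threads each perform an atomic increment-and-return on a counter that starts at zero, the last arrival observes N, not N - 1. A minimal userspace sketch of the counting, illustrative only; it uses C11 atomics and pthreads in place of the kernel's atomic_t and stop_machine(), and NCPUS stands in for num_online_cpus():

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NCPUS 4			/* stands in for num_online_cpus() */

static atomic_int cpu_count;	/* starts at 0, like the kernel's counter */

static void *cb(void *arg)
{
	(void)arg;

	/* C11 atomic_fetch_add() returns the OLD value, so +1 mirrors
	 * the kernel's atomic_inc_return(). */
	int seen = atomic_fetch_add(&cpu_count, 1) + 1;

	/* Exactly one thread -- the last arrival -- sees NCPUS here.
	 * Comparing against NCPUS - 1 would elect the second-to-last
	 * arrival as master and leave one CPU unaccounted for. */
	if (seen == NCPUS)
		printf("last arrival sees %d\n", seen);

	return NULL;
}

int main(void)
{
	pthread_t t[NCPUS];

	for (int i = 0; i < NCPUS; i++)
		pthread_create(&t[i], NULL, cb, NULL);
	for (int i = 0; i < NCPUS; i++)
		pthread_join(t[i], NULL);

	return 0;
}

However the threads interleave, exactly one of them prints, and the value it prints is NCPUS, which is why the kernel comparison must be against num_online_cpus().]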
On Sun, Mar 13, 2022 at 7:50 AM Max Filippov <jcmvbkbc@gmail.com> wrote:
>
> Hi Guo Ren,
>
> On Sat, Mar 12, 2022 at 7:56 AM <guoren@kernel.org> wrote:
> >
> > From: Guo Ren <guoren@linux.alibaba.com>
> >
> > These patch_text implementations are using stop_machine_cpuslocked
> > infrastructure with atomic cpu_count. The origin idea is that when
>
> The original
>
> > the master CPU patch_text, others should wait for it. But current
> > implementation is using the first CPU as master, which couldn't
> > guarantee continue CPUs are waiting. This patch changes the last
>
> guarantee that remaining CPUs are waiting.
>
> > CPU as the master to solve the potaintial risk.
>
> potential
>
> >
> > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > Signed-off-by: Guo Ren <guoren@kernel.org>
> > Cc: Will Deacon <will@kernel.org>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Palmer Dabbelt <palmer@dabbelt.com>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > Cc: Chris Zankel <chris@zankel.net>
> > Cc: Max Filippov <jcmvbkbc@gmail.com>
> > Cc: Arnd Bergmann <arnd@arndb.de>
> > ---
> >  arch/arm64/kernel/patching.c      | 4 ++--
> >  arch/csky/kernel/probes/kprobes.c | 2 +-
> >  arch/riscv/kernel/patch.c         | 2 +-
> >  arch/xtensa/kernel/jump_label.c   | 2 +-
> >  4 files changed, 5 insertions(+), 5 deletions(-)
>
> Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>
>
> I'm curious, is there a specific issue that prompted this patch?

No, theoretical risk.

>
> --
> Thanks.
> -- Max
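[The "theoretical risk" can be made concrete. Under stop_machine() every online CPU runs the callback with interrupts disabled, but with the first arrival as master nothing guarantees the other CPUs have reached the wait loop before patching begins. A hypothetical two-CPU interleaving, written as a C comment for illustration:

/* Hypothetical interleaving with the FIRST CPU as master:
 *
 *   CPU0 (arrives first)                 CPU1 (arrives later)
 *   atomic_inc_return() == 1             still executing kernel text
 *   patch_text_nosync() rewrites         ...possibly near the very
 *   the instructions                     instructions being rewritten
 *   atomic_inc() releases waiters        reaches the wait loop, spins
 *
 * Electing the LAST arrival as master closes this window: patching
 * starts only after every other CPU has incremented cpu_count, i.e.
 * is provably inside the callback.
 */
]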
diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
index 771f543464e0..6cfea9650e65 100644
--- a/arch/arm64/kernel/patching.c
+++ b/arch/arm64/kernel/patching.c
@@ -117,8 +117,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
 	int i, ret = 0;
 	struct aarch64_insn_patch *pp = arg;
 
-	/* The first CPU becomes master */
-	if (atomic_inc_return(&pp->cpu_count) == 1) {
+	/* The last CPU becomes master */
+	if (atomic_inc_return(&pp->cpu_count) == (num_online_cpus() - 1)) {
 		for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
 			ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
 							     pp->new_insns[i]);
diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
index 42920f25e73c..19821a06a991 100644
--- a/arch/csky/kernel/probes/kprobes.c
+++ b/arch/csky/kernel/probes/kprobes.c
@@ -30,7 +30,7 @@ static int __kprobes patch_text_cb(void *priv)
 	struct csky_insn_patch *param = priv;
 	unsigned int addr = (unsigned int)param->addr;
 
-	if (atomic_inc_return(&param->cpu_count) == 1) {
+	if (atomic_inc_return(&param->cpu_count) == (num_online_cpus() - 1)) {
 		*(u16 *) addr = cpu_to_le16(param->opcode);
 		dcache_wb_range(addr, addr + 2);
 		atomic_inc(&param->cpu_count);
diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
index 0b552873a577..cca72a9388e3 100644
--- a/arch/riscv/kernel/patch.c
+++ b/arch/riscv/kernel/patch.c
@@ -104,7 +104,7 @@ static int patch_text_cb(void *data)
 	struct patch_insn *patch = data;
 	int ret = 0;
 
-	if (atomic_inc_return(&patch->cpu_count) == 1) {
+	if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {
 		ret =
 		    patch_text_nosync(patch->addr, &patch->insn,
 				      GET_INSN_LENGTH(patch->insn));
diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c
index 61cf6497a646..7e1d3f952eb3 100644
--- a/arch/xtensa/kernel/jump_label.c
+++ b/arch/xtensa/kernel/jump_label.c
@@ -40,7 +40,7 @@ static int patch_text_stop_machine(void *data)
 {
 	struct patch *patch = data;
 
-	if (atomic_inc_return(&patch->cpu_count) == 1) {
+	if (atomic_inc_return(&patch->cpu_count) == (num_online_cpus() - 1)) {
 		local_patch_text(patch->addr, patch->data, patch->sz);
 		atomic_inc(&patch->cpu_count);
 	} else {
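[Note that the hunks above still compare against (num_online_cpus() - 1); the off-by-one from the review is not yet folded in. With that correction applied, the intended rendezvous would look roughly like the sketch below. It is modeled on the riscv callback, with the non-master wait loop shown for context; the exact details differ per architecture:

static int patch_text_cb(void *data)
{
	struct patch_insn *patch = data;
	int ret = 0;

	/* The last CPU to arrive becomes master: every other online
	 * CPU has already incremented cpu_count, so all of them are
	 * inside this callback before any text changes. */
	if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
		ret = patch_text_nosync(patch->addr, &patch->insn,
					GET_INSN_LENGTH(patch->insn));
		/* The second increment pushes cpu_count past
		 * num_online_cpus() and releases the waiters. */
		atomic_inc(&patch->cpu_count);
	} else {
		while (atomic_read(&patch->cpu_count) <= num_online_cpus())
			cpu_relax();
		smp_mb();
	}

	return ret;
}
]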