Message ID | 20160526163549.3276-2-a.rigo@virtualopensystems.com (mailing list archive) |
---|---|
State | New, archived |
Hi Alvise,

On Thu, May 26, 2016 at 12:35 PM, Alvise Rigo <a.rigo@virtualopensystems.com> wrote:
> Add tcg_exclusive_{lock,unlock}() functions that will be used for making
> the emulation of LL and SC instructions thread safe.
>
> Signed-off-by: Alvise Rigo <a.rigo@virtualopensystems.com>

<snip>

> +__thread bool cpu_have_exclusive_lock;
> +QemuSpin cpu_exclusive_lock;
> +inline void tcg_exclusive_lock(void)
> +{
> +    if (!cpu_have_exclusive_lock) {
> +        qemu_spin_lock(&cpu_exclusive_lock);
> +        cpu_have_exclusive_lock = true;
> +    }
> +}
> +
> +inline void tcg_exclusive_unlock(void)
> +{
> +    if (cpu_have_exclusive_lock) {
> +        cpu_have_exclusive_lock = false;
> +        qemu_spin_unlock(&cpu_exclusive_lock);
> +    }
> +}

I think the unlock() here should have an assert if
cpu_have_exclusive_lock is false. From what I can see, a thread should
either take the exclusive lock or wait spinning for it in lock(). So
unlock() should always see cpu_have_exclusive_lock as true. It is a
good place to find locking bugs.
Hi Pranith,

Thank you for the hint, I will keep this in mind for the next version.

Regards,
alvise

On Tue, May 31, 2016 at 5:03 PM, Pranith Kumar <bobby.prani@gmail.com> wrote:
> Hi Alvise,
>
> On Thu, May 26, 2016 at 12:35 PM, Alvise Rigo
> <a.rigo@virtualopensystems.com> wrote:
>> Add tcg_exclusive_{lock,unlock}() functions that will be used for making
>> the emulation of LL and SC instructions thread safe.
>>
>> Signed-off-by: Alvise Rigo <a.rigo@virtualopensystems.com>
>
> <snip>
>
> I think the unlock() here should have an assert if
> cpu_have_exclusive_lock is false. From what I can see, a thread should
> either take the exclusive lock or wait spinning for it in lock(). So
> unlock() should always see cpu_have_exclusive_lock as true. It is a
> good place to find locking bugs.
>
> --
> Pranith
Alvise Rigo <a.rigo@virtualopensystems.com> writes:

> Add tcg_exclusive_{lock,unlock}() functions that will be used for making
> the emulation of LL and SC instructions thread safe.

I wonder how much similarity there is to the mechanism linux-user ends
up using for its exclusive-start/end?

>
> Signed-off-by: Alvise Rigo <a.rigo@virtualopensystems.com>
> ---
>  cpus.c            |  2 ++
>  exec.c            | 18 ++++++++++++++++++
>  include/qom/cpu.h |  5 +++++
>  3 files changed, 25 insertions(+)

<snip>

--
Alex Bennée
As far as I understand, linux-user uses a mutex to make the atomic
accesses exclusive with respect to other CPUs' atomic accesses. So
basically in the LDREX case it implements:

lock() -> access() -> unlock()

This patch series makes the atomic accesses exclusive with respect to
every memory access; this is allowed by the softmmu. The access is now
something like:

lock() -> softmmu_access() -> unlock()

where "softmmu_access()" is not just a memory access, but includes a
manipulation of the EXCL bitmap and possible queries of TLB flushes.
So there are similarities, but they are pretty much confined to the
locking/unlocking of a spinlock/mutex.

This made me think: how can linux-user properly work with upstream TCG,
for instance, in an absurd configuration like target-arm on an ARM host?

alvise

On Wed, Jun 8, 2016 at 11:21 AM, Alex Bennée <alex.bennee@linaro.org> wrote:
>
> Alvise Rigo <a.rigo@virtualopensystems.com> writes:
>
>> Add tcg_exclusive_{lock,unlock}() functions that will be used for making
>> the emulation of LL and SC instructions thread safe.
>
> I wonder how much similarity there is to the mechanism linux-user ends
> up using for its exclusive-start/end?
>
> <snip>
>
> --
> Alex Bennée
On 8 June 2016 at 11:00, alvise rigo <a.rigo@virtualopensystems.com> wrote:
> As far as I understand, linux-user uses a mutex to make the atomic
> accesses exclusive with respect to other CPUs' atomic accesses. So
> basically in the LDREX case it implements:
> lock() -> access() -> unlock()
> This patch series makes the atomic accesses exclusive with respect to
> every memory access; this is allowed by the softmmu. The access is now
> something like:
> lock() -> softmmu_access() -> unlock()
> where "softmmu_access()" is not just a memory access, but includes a
> manipulation of the EXCL bitmap and possible queries of TLB flushes.
> So there are similarities, but they are pretty much confined to the
> locking/unlocking of a spinlock/mutex.
>
> This made me think: how can linux-user properly work with upstream TCG,
> for instance, in an absurd configuration like target-arm on an ARM host?

linux-user's exclusives handling is "broken but happens to sort of work
most of the time". Fixing this and bringing it into line with how we
want to handle exclusives in the multithreaded system emulation case is
one of the things I was hoping would come out of the MTTCG work...

thanks
-- PMM
alvise rigo <a.rigo@virtualopensystems.com> writes:

> As far as I understand, linux-user uses a mutex to make the atomic
> accesses exclusive with respect to other CPUs' atomic accesses. So
> basically in the LDREX case it implements:
> lock() -> access() -> unlock()
> This patch series makes the atomic accesses exclusive with respect to
> every memory access; this is allowed by the softmmu. The access is now
> something like:
> lock() -> softmmu_access() -> unlock()
> where "softmmu_access()" is not just a memory access, but includes a
> manipulation of the EXCL bitmap and possible queries of TLB flushes.
> So there are similarities, but they are pretty much confined to the
> locking/unlocking of a spinlock/mutex.

But couldn't this be achieved with the various other solutions to
wanting the rest of the system to be quiescent while we do something
tricky?

> This made me think: how can linux-user properly work with upstream TCG,
> for instance, in an absurd configuration like target-arm on an ARM host?
>
> alvise
>
> On Wed, Jun 8, 2016 at 11:21 AM, Alex Bennée <alex.bennee@linaro.org> wrote:
>>
>> Alvise Rigo <a.rigo@virtualopensystems.com> writes:
>>
>>> Add tcg_exclusive_{lock,unlock}() functions that will be used for making
>>> the emulation of LL and SC instructions thread safe.
>>
>> I wonder how much similarity there is to the mechanism linux-user ends
>> up using for its exclusive-start/end?
>>
>> <snip>

--
Alex Bennée
diff --git a/cpus.c b/cpus.c
index 860e7ba..b9ec903 100644
--- a/cpus.c
+++ b/cpus.c
@@ -961,6 +961,8 @@ void qemu_init_cpu_loop(void)
     qemu_cond_init(&qemu_work_cond);
     qemu_mutex_init(&qemu_global_mutex);
 
+    qemu_spin_init(&cpu_exclusive_lock);
+
     qemu_thread_get_self(&io_thread);
 
     safe_work = g_array_sized_new(TRUE, TRUE, sizeof(qemu_safe_work_item), 128);
diff --git a/exec.c b/exec.c
index a24b31c..1c72113 100644
--- a/exec.c
+++ b/exec.c
@@ -197,6 +197,24 @@ void cpu_exclusive_history_free(void)
         g_free(excl_history.c_array);
     }
 }
+
+__thread bool cpu_have_exclusive_lock;
+QemuSpin cpu_exclusive_lock;
+inline void tcg_exclusive_lock(void)
+{
+    if (!cpu_have_exclusive_lock) {
+        qemu_spin_lock(&cpu_exclusive_lock);
+        cpu_have_exclusive_lock = true;
+    }
+}
+
+inline void tcg_exclusive_unlock(void)
+{
+    if (cpu_have_exclusive_lock) {
+        cpu_have_exclusive_lock = false;
+        qemu_spin_unlock(&cpu_exclusive_lock);
+    }
+}
 #endif
 
 #if !defined(CONFIG_USER_ONLY)
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 0f51870..019f06d 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -201,6 +201,11 @@ typedef struct CPUClass {
     void (*disas_set_info)(CPUState *cpu, disassemble_info *info);
 } CPUClass;
 
+/* Protect the cpu_exclusive_* variables. */
+void tcg_exclusive_lock(void);
+void tcg_exclusive_unlock(void);
+extern QemuSpin cpu_exclusive_lock;
+
 #ifdef HOST_WORDS_BIGENDIAN
 typedef struct icount_decr_u16 {
     uint16_t high;
Add tcg_exclusive_{lock,unlock}() functions that will be used for making
the emulation of LL and SC instructions thread safe.

Signed-off-by: Alvise Rigo <a.rigo@virtualopensystems.com>
---
 cpus.c            |  2 ++
 exec.c            | 18 ++++++++++++++++++
 include/qom/cpu.h |  5 +++++
 3 files changed, 25 insertions(+)