Message ID | 20190614171200.21078-4-alex.bennee@linaro.org (mailing list archive) |
---|---|
State | New, archived |
Series | tcg plugin support |
On 6/14/19 10:11 AM, Alex Bennée wrote:
>              start_exclusive();
> +            cpu->in_exclusive_work_context = true;
>              wi->func(cpu, wi->data);
> +            cpu->in_exclusive_work_context = false;
>              end_exclusive();

Is there a reason not to put those into start/end_exclusive?

And if not, what does in_exclusive_work_context mean?


r~
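As an aside, a minimal sketch of what moving the flag into the exclusive-section helpers might look like; this is not part of the posted patch and assumes current_cpu is valid wherever the exclusive section is entered:

    /* Sketch only: fold the flag into the helpers in cpus-common.c, keyed
     * off current_cpu instead of being toggled by the caller in
     * process_queued_cpu_work().
     */
    void start_exclusive(void)
    {
        /* ... existing logic: wait until all other vCPUs have halted ... */
        current_cpu->in_exclusive_work_context = true;
    }

    void end_exclusive(void)
    {
        current_cpu->in_exclusive_work_context = false;
        /* ... existing logic: release the waiting vCPUs ... */
    }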
Richard Henderson <richard.henderson@linaro.org> writes:

> On 6/14/19 10:11 AM, Alex Bennée wrote:
>>              start_exclusive();
>> +            cpu->in_exclusive_work_context = true;
>>              wi->func(cpu, wi->data);
>> +            cpu->in_exclusive_work_context = false;
>>              end_exclusive();
>
> Is there a reason not to put those into start/end_exclusive?

Not particularly... they can use current_cpu to find the cpu and set the
flag.

> And if not, what does in_exclusive_work_context mean?

Currently the check implies it's only for:

  exclusive work context, which has previously been queued via
  async_safe_run_on_cpu()

which avoids jumping through hoops if another async_safe task still
wants to flush the TB. However keeping it with start/end_exclusive
means we could also clean up the code in:

  void cpu_exec_step_atomic(CPUState *cpu)
  {
      ..
      /* volatile because we modify it between setjmp and longjmp */
      volatile bool in_exclusive_region = false;
      ..
      if (sigsetjmp(cpu->jmp_env, 0) == 0) {
          start_exclusive();
          ..
      } else {
          ..
      }

      if (in_exclusive_region) {
          ..
          end_exclusive();

but the volatile makes me nervous. Is the only risk that local
variable accesses might get optimised away?

>
> r~

--
Alex Bennée
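A sketch of the clean-up Alex describes, assuming the flag is maintained inside start/end_exclusive and queried through the new helper; the volatile local becomes unnecessary because the state lives in CPUState and survives the siglongjmp back to the sigsetjmp point:

    void cpu_exec_step_atomic(CPUState *cpu)
    {
        /* .. */
        if (sigsetjmp(cpu->jmp_env, 0) == 0) {
            start_exclusive();
            /* .. translate and run the single instruction .. */
        } else {
            /* .. the exception path re-enters here via siglongjmp .. */
        }

        if (cpu_in_exclusive_work_context(cpu)) {
            /* .. undo any partial state .. */
            end_exclusive();
        }
        /* .. */
    }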
diff --git a/cpus-common.c b/cpus-common.c
index 3ca58c64e8..960058457a 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -335,7 +335,9 @@ void process_queued_cpu_work(CPUState *cpu)
              */
             qemu_mutex_unlock_iothread();
             start_exclusive();
+            cpu->in_exclusive_work_context = true;
             wi->func(cpu, wi->data);
+            cpu->in_exclusive_work_context = false;
             end_exclusive();
             qemu_mutex_lock_iothread();
         } else {
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 5ee0046b62..08481ad304 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -373,6 +373,7 @@ struct CPUState {
     bool unplug;
     bool crash_occurred;
     bool exit_request;
+    bool in_exclusive_work_context;
     uint32_t cflags_next_tb;
     /* updates protected by BQL */
     uint32_t interrupt_request;
@@ -785,6 +786,18 @@ void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
  */
 void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func,
                            run_on_cpu_data data);
+/**
+ * cpu_in_exclusive_work_context()
+ * @cpu: The vCPU to check
+ *
+ * Returns true if @cpu is an exclusive work context, which has
+ * previously been queued via async_safe_run_on_cpu().
+ */
+static inline bool cpu_in_exclusive_work_context(const CPUState *cpu)
+{
+    return cpu->in_exclusive_work_context;
+}
+
 /**
  * qemu_get_cpu:
  * @index: The CPUState@cpu_index value of the CPU to obtain.
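For illustration, a hypothetical caller sketch of the pattern the helper enables; the function names here are invented, not from the series. Work that would otherwise be re-queued with async_safe_run_on_cpu() can run inline when the vCPU is already inside an exclusive work context, since all other vCPUs are guaranteed to be halted at that point:

    /* Hypothetical example, not part of the posted series. */
    static void do_flush_work(CPUState *cpu, run_on_cpu_data data)
    {
        /* work that requires every other vCPU to be quiescent */
    }

    static void request_flush(CPUState *cpu)
    {
        if (cpu_in_exclusive_work_context(cpu)) {
            /* Already exclusive: run the work directly. */
            do_flush_work(cpu, RUN_ON_CPU_NULL);
        } else {
            /* Defer: the work runs later, inside its own exclusive section. */
            async_safe_run_on_cpu(cpu, do_flush_work, RUN_ON_CPU_NULL);
        }
    }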