From patchwork Wed Oct 14 19:23:22 2020
X-Patchwork-Submitter: Claudio Fontana
X-Patchwork-Id: 11838233
From: Claudio Fontana
To: Philippe Mathieu-Daudé, Richard Henderson, Alex Bennée
Cc: Claudio Fontana, qemu-devel@nongnu.org
Subject: [PATCH v2 1/3] accel/tcg: split CpusAccel into three TCG variants
Date: Wed, 14 Oct 2020 21:23:22 +0200
Message-Id: <20201014192324.6390-2-cfontana@suse.de>
In-Reply-To: <20201014192324.6390-1-cfontana@suse.de>
References: <20201014192324.6390-1-cfontana@suse.de>
X-Mailer: git-send-email 2.26.2

split up the CpusAccel tcg_cpus into three TCG variants:

  tcg_cpus_rr     (single threaded, round robin cpus)
  tcg_cpus_icount (same as rr, but with instruction counting enabled)
  tcg_cpus_mttcg  (multi-threaded cpus)

Suggested-by: Richard Henderson
Signed-off-by: Claudio Fontana
Reviewed-by: Philippe Mathieu-Daudé
Reviewed-by: Richard Henderson
---
 accel/tcg/meson.build | 9 +-
 accel/tcg/tcg-all.c | 8 +-
accel/tcg/tcg-cpus-icount.c | 147 +++++++++++ accel/tcg/tcg-cpus-icount.h | 17 ++ accel/tcg/tcg-cpus-mttcg.c | 119 +++++++++ accel/tcg/tcg-cpus-mttcg.h | 23 ++ accel/tcg/tcg-cpus-rr.c | 272 ++++++++++++++++++++ accel/tcg/tcg-cpus-rr.h | 21 ++ accel/tcg/tcg-cpus.c | 484 ++---------------------------------- accel/tcg/tcg-cpus.h | 13 +- softmmu/icount.c | 2 +- 11 files changed, 653 insertions(+), 462 deletions(-) create mode 100644 accel/tcg/tcg-cpus-icount.c create mode 100644 accel/tcg/tcg-cpus-icount.h create mode 100644 accel/tcg/tcg-cpus-mttcg.c create mode 100644 accel/tcg/tcg-cpus-mttcg.h create mode 100644 accel/tcg/tcg-cpus-rr.c create mode 100644 accel/tcg/tcg-cpus-rr.h diff --git a/accel/tcg/meson.build b/accel/tcg/meson.build index 19b9343d5b..f39aab0a0c 100644 --- a/accel/tcg/meson.build +++ b/accel/tcg/meson.build @@ -12,4 +12,11 @@ tcg_ss.add(when: 'CONFIG_SOFTMMU', if_false: files('user-exec-stub.c')) tcg_ss.add(when: 'CONFIG_PLUGIN', if_true: [files('plugin-gen.c'), libdl]) specific_ss.add_all(when: 'CONFIG_TCG', if_true: tcg_ss) -specific_ss.add(when: ['CONFIG_SOFTMMU', 'CONFIG_TCG'], if_true: files('tcg-all.c', 'cputlb.c', 'tcg-cpus.c')) +specific_ss.add(when: ['CONFIG_SOFTMMU', 'CONFIG_TCG'], if_true: files( + 'tcg-all.c', + 'cputlb.c', + 'tcg-cpus.c', + 'tcg-cpus-mttcg.c', + 'tcg-cpus-icount.c', + 'tcg-cpus-rr.c' +)) diff --git a/accel/tcg/tcg-all.c b/accel/tcg/tcg-all.c index fa1208158f..e42a028043 100644 --- a/accel/tcg/tcg-all.c +++ b/accel/tcg/tcg-all.c @@ -104,8 +104,14 @@ static int tcg_init(MachineState *ms) tcg_exec_init(s->tb_size * 1024 * 1024); mttcg_enabled = s->mttcg_enabled; - cpus_register_accel(&tcg_cpus); + if (mttcg_enabled) { + cpus_register_accel(&tcg_cpus_mttcg); + } else if (icount_enabled()) { + cpus_register_accel(&tcg_cpus_icount); + } else { + cpus_register_accel(&tcg_cpus_rr); + } return 0; } diff --git a/accel/tcg/tcg-cpus-icount.c b/accel/tcg/tcg-cpus-icount.c new file mode 100644 index 0000000000..d3af3afb6d --- /dev/null +++ b/accel/tcg/tcg-cpus-icount.c @@ -0,0 +1,147 @@ +/* + * QEMU TCG Single Threaded vCPUs implementation using instruction counting + * + * Copyright (c) 2003-2008 Fabrice Bellard + * Copyright (c) 2014 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. 
+ */ + +#include "qemu/osdep.h" +#include "qemu-common.h" +#include "sysemu/tcg.h" +#include "sysemu/replay.h" +#include "qemu/main-loop.h" +#include "qemu/guest-random.h" +#include "exec/exec-all.h" +#include "hw/boards.h" + +#include "tcg-cpus.h" +#include "tcg-cpus-icount.h" +#include "tcg-cpus-rr.h" + +static int64_t tcg_get_icount_limit(void) +{ + int64_t deadline; + + if (replay_mode != REPLAY_MODE_PLAY) { + /* + * Include all the timers, because they may need an attention. + * Too long CPU execution may create unnecessary delay in UI. + */ + deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL, + QEMU_TIMER_ATTR_ALL); + /* Check realtime timers, because they help with input processing */ + deadline = qemu_soonest_timeout(deadline, + qemu_clock_deadline_ns_all(QEMU_CLOCK_REALTIME, + QEMU_TIMER_ATTR_ALL)); + + /* + * Maintain prior (possibly buggy) behaviour where if no deadline + * was set (as there is no QEMU_CLOCK_VIRTUAL timer) or it is more than + * INT32_MAX nanoseconds ahead, we still use INT32_MAX + * nanoseconds. + */ + if ((deadline < 0) || (deadline > INT32_MAX)) { + deadline = INT32_MAX; + } + + return icount_round(deadline); + } else { + return replay_get_instructions(); + } +} + +static void notify_aio_contexts(void) +{ + /* Wake up other AioContexts. */ + qemu_clock_notify(QEMU_CLOCK_VIRTUAL); + qemu_clock_run_timers(QEMU_CLOCK_VIRTUAL); +} + +void handle_icount_deadline(void) +{ + assert(qemu_in_vcpu_thread()); + int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL, + QEMU_TIMER_ATTR_ALL); + + if (deadline == 0) { + notify_aio_contexts(); + } +} + +void prepare_icount_for_run(CPUState *cpu) +{ + int insns_left; + + /* + * These should always be cleared by process_icount_data after + * each vCPU execution. However u16.high can be raised + * asynchronously by cpu_exit/cpu_interrupt/tcg_handle_interrupt + */ + g_assert(cpu_neg(cpu)->icount_decr.u16.low == 0); + g_assert(cpu->icount_extra == 0); + + cpu->icount_budget = tcg_get_icount_limit(); + insns_left = MIN(0xffff, cpu->icount_budget); + cpu_neg(cpu)->icount_decr.u16.low = insns_left; + cpu->icount_extra = cpu->icount_budget - insns_left; + + replay_mutex_lock(); + + if (cpu->icount_budget == 0 && replay_has_checkpoint()) { + notify_aio_contexts(); + } +} + +void process_icount_data(CPUState *cpu) +{ + /* Account for executed instructions */ + icount_update(cpu); + + /* Reset the counters */ + cpu_neg(cpu)->icount_decr.u16.low = 0; + cpu->icount_extra = 0; + cpu->icount_budget = 0; + + replay_account_executed_instructions(); + + replay_mutex_unlock(); +} + +static void icount_handle_interrupt(CPUState *cpu, int mask) +{ + int old_mask = cpu->interrupt_request; + + tcg_handle_interrupt(cpu, mask); + if (qemu_cpu_is_self(cpu) && + !cpu->can_do_io + && (mask & ~old_mask) != 0) { + cpu_abort(cpu, "Raised interrupt while not in I/O function"); + } +} + +const CpusAccel tcg_cpus_icount = { + .create_vcpu_thread = tcg_start_vcpu_thread, + .kick_vcpu_thread = qemu_cpu_kick_rr_cpus, + + .handle_interrupt = icount_handle_interrupt, + .get_virtual_clock = icount_get, + .get_elapsed_ticks = icount_get, +}; diff --git a/accel/tcg/tcg-cpus-icount.h b/accel/tcg/tcg-cpus-icount.h new file mode 100644 index 0000000000..cbcf76b413 --- /dev/null +++ b/accel/tcg/tcg-cpus-icount.h @@ -0,0 +1,17 @@ +/* + * QEMU TCG Single Threaded vCPUs implementation using instruction counting + * + * Copyright 2020 SUSE LLC + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. 
+ * See the COPYING file in the top-level directory. + */ + +#ifndef TCG_CPUS_ICOUNT_H +#define TCG_CPUS_ICOUNT_H + +void handle_icount_deadline(void); +void prepare_icount_for_run(CPUState *cpu); +void process_icount_data(CPUState *cpu); + +#endif /* TCG_CPUS_ICOUNT_H */ diff --git a/accel/tcg/tcg-cpus-mttcg.c b/accel/tcg/tcg-cpus-mttcg.c new file mode 100644 index 0000000000..723703fa3e --- /dev/null +++ b/accel/tcg/tcg-cpus-mttcg.c @@ -0,0 +1,119 @@ +/* + * QEMU TCG Multi Threaded vCPUs implementation + * + * Copyright (c) 2003-2008 Fabrice Bellard + * Copyright (c) 2014 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. + */ + +#include "qemu/osdep.h" +#include "qemu-common.h" +#include "sysemu/tcg.h" +#include "sysemu/replay.h" +#include "qemu/main-loop.h" +#include "qemu/guest-random.h" +#include "exec/exec-all.h" +#include "hw/boards.h" + +#include "tcg-cpus.h" +#include "tcg-cpus-mttcg.h" + +/* + * Multi-threaded TCG + * + * In the multi-threaded case each vCPU has its own thread. The TLS + * variable current_cpu can be used deep in the code to find the + * current CPUState for a given thread. + */ + +void *tcg_cpu_thread_fn(void *arg) +{ + CPUState *cpu = arg; + + assert(tcg_enabled()); + g_assert(!icount_enabled()); + + rcu_register_thread(); + tcg_register_thread(); + + qemu_mutex_lock_iothread(); + qemu_thread_get_self(cpu->thread); + + cpu->thread_id = qemu_get_thread_id(); + cpu->can_do_io = 1; + current_cpu = cpu; + cpu_thread_signal_created(cpu); + qemu_guest_random_seed_thread_part2(cpu->random_seed); + + /* process any pending work */ + cpu->exit_request = 1; + + do { + if (cpu_can_run(cpu)) { + int r; + qemu_mutex_unlock_iothread(); + r = tcg_cpu_exec(cpu); + qemu_mutex_lock_iothread(); + switch (r) { + case EXCP_DEBUG: + cpu_handle_guest_debug(cpu); + break; + case EXCP_HALTED: + /* + * during start-up the vCPU is reset and the thread is + * kicked several times. If we don't ensure we go back + * to sleep in the halted state we won't cleanly + * start-up when the vCPU is enabled. + * + * cpu->halted should ensure we sleep in wait_io_event + */ + g_assert(cpu->halted); + break; + case EXCP_ATOMIC: + qemu_mutex_unlock_iothread(); + cpu_exec_step_atomic(cpu); + qemu_mutex_lock_iothread(); + default: + /* Ignore everything else? 
*/ + break; + } + } + + qatomic_mb_set(&cpu->exit_request, 0); + qemu_wait_io_event(cpu); + } while (!cpu->unplug || cpu_can_run(cpu)); + + qemu_tcg_destroy_vcpu(cpu); + qemu_mutex_unlock_iothread(); + rcu_unregister_thread(); + return NULL; +} + +static void mttcg_kick_vcpu_thread(CPUState *cpu) +{ + cpu_exit(cpu); +} + +const CpusAccel tcg_cpus_mttcg = { + .create_vcpu_thread = tcg_start_vcpu_thread, + .kick_vcpu_thread = mttcg_kick_vcpu_thread, + + .handle_interrupt = tcg_handle_interrupt, +}; diff --git a/accel/tcg/tcg-cpus-mttcg.h b/accel/tcg/tcg-cpus-mttcg.h new file mode 100644 index 0000000000..5d203243e8 --- /dev/null +++ b/accel/tcg/tcg-cpus-mttcg.h @@ -0,0 +1,23 @@ +/* + * QEMU TCG Multi Threaded vCPUs implementation + * + * Copyright 2020 SUSE LLC + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. + */ + +#ifndef TCG_CPUS_MTTCG_H +#define TCG_CPUS_MTTCG_H + +/* + * Multi-threaded TCG + * + * In the multi-threaded case each vCPU has its own thread. The TLS + * variable current_cpu can be used deep in the code to find the + * current CPUState for a given thread. + */ + +void *tcg_cpu_thread_fn(void *arg); + +#endif /* TCG_CPUS_MTTCG_H */ diff --git a/accel/tcg/tcg-cpus-rr.c b/accel/tcg/tcg-cpus-rr.c new file mode 100644 index 0000000000..82bb7c5ea6 --- /dev/null +++ b/accel/tcg/tcg-cpus-rr.c @@ -0,0 +1,272 @@ +/* + * QEMU TCG Single Threaded vCPUs implementation + * + * Copyright (c) 2003-2008 Fabrice Bellard + * Copyright (c) 2014 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. + */ + +#include "qemu/osdep.h" +#include "qemu-common.h" +#include "sysemu/tcg.h" +#include "sysemu/replay.h" +#include "qemu/main-loop.h" +#include "qemu/guest-random.h" +#include "exec/exec-all.h" +#include "hw/boards.h" + +#include "tcg-cpus.h" +#include "tcg-cpus-rr.h" +#include "tcg-cpus-icount.h" + +/* Kick all RR vCPUs */ +void qemu_cpu_kick_rr_cpus(CPUState *unused) +{ + CPUState *cpu; + + CPU_FOREACH(cpu) { + cpu_exit(cpu); + }; +} + +/* + * TCG vCPU kick timer + * + * The kick timer is responsible for moving single threaded vCPU + * emulation on to the next vCPU. If more than one vCPU is running a + * timer event with force a cpu->exit so the next vCPU can get + * scheduled. + * + * The timer is removed if all vCPUs are idle and restarted again once + * idleness is complete. 
+ */ + +static QEMUTimer *tcg_kick_vcpu_timer; +static CPUState *tcg_current_rr_cpu; + +#define TCG_KICK_PERIOD (NANOSECONDS_PER_SECOND / 10) + +static inline int64_t qemu_tcg_next_kick(void) +{ + return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + TCG_KICK_PERIOD; +} + +/* Kick the currently round-robin scheduled vCPU to next */ +static void qemu_cpu_kick_rr_next_cpu(void) +{ + CPUState *cpu; + do { + cpu = qatomic_mb_read(&tcg_current_rr_cpu); + if (cpu) { + cpu_exit(cpu); + } + } while (cpu != qatomic_mb_read(&tcg_current_rr_cpu)); +} + +static void kick_tcg_thread(void *opaque) +{ + timer_mod(tcg_kick_vcpu_timer, qemu_tcg_next_kick()); + qemu_cpu_kick_rr_next_cpu(); +} + +static void start_tcg_kick_timer(void) +{ + if (!tcg_kick_vcpu_timer && CPU_NEXT(first_cpu)) { + tcg_kick_vcpu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, + kick_tcg_thread, NULL); + } + if (tcg_kick_vcpu_timer && !timer_pending(tcg_kick_vcpu_timer)) { + timer_mod(tcg_kick_vcpu_timer, qemu_tcg_next_kick()); + } +} + +static void stop_tcg_kick_timer(void) +{ + if (tcg_kick_vcpu_timer && timer_pending(tcg_kick_vcpu_timer)) { + timer_del(tcg_kick_vcpu_timer); + } +} + +static void qemu_tcg_rr_wait_io_event(void) +{ + CPUState *cpu; + + while (all_cpu_threads_idle()) { + stop_tcg_kick_timer(); + qemu_cond_wait_iothread(first_cpu->halt_cond); + } + + start_tcg_kick_timer(); + + CPU_FOREACH(cpu) { + qemu_wait_io_event_common(cpu); + } +} + +/* + * Destroy any remaining vCPUs which have been unplugged and have + * finished running + */ +static void deal_with_unplugged_cpus(void) +{ + CPUState *cpu; + + CPU_FOREACH(cpu) { + if (cpu->unplug && !cpu_can_run(cpu)) { + qemu_tcg_destroy_vcpu(cpu); + break; + } + } +} + +/* + * Single-threaded TCG + * + * In the single-threaded case each vCPU is simulated in turn. If + * there is more than a single vCPU we create a simple timer to kick + * the vCPU and ensure we don't get stuck in a tight loop in one vCPU. + * This is done explicitly rather than relying on side-effects + * elsewhere. + */ + +void *tcg_rr_cpu_thread_fn(void *arg) +{ + CPUState *cpu = arg; + + assert(tcg_enabled()); + rcu_register_thread(); + tcg_register_thread(); + + qemu_mutex_lock_iothread(); + qemu_thread_get_self(cpu->thread); + + cpu->thread_id = qemu_get_thread_id(); + cpu->can_do_io = 1; + cpu_thread_signal_created(cpu); + qemu_guest_random_seed_thread_part2(cpu->random_seed); + + /* wait for initial kick-off after machine start */ + while (first_cpu->stopped) { + qemu_cond_wait_iothread(first_cpu->halt_cond); + + /* process any pending work */ + CPU_FOREACH(cpu) { + current_cpu = cpu; + qemu_wait_io_event_common(cpu); + } + } + + start_tcg_kick_timer(); + + cpu = first_cpu; + + /* process any pending work */ + cpu->exit_request = 1; + + while (1) { + qemu_mutex_unlock_iothread(); + replay_mutex_lock(); + qemu_mutex_lock_iothread(); + + if (icount_enabled()) { + /* Account partial waits to QEMU_CLOCK_VIRTUAL. */ + icount_account_warp_timer(); + /* + * Run the timers here. This is much more efficient than + * waking up the I/O thread and waiting for completion. 
+ */ + handle_icount_deadline(); + } + + replay_mutex_unlock(); + + if (!cpu) { + cpu = first_cpu; + } + + while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) { + + qatomic_mb_set(&tcg_current_rr_cpu, cpu); + current_cpu = cpu; + + qemu_clock_enable(QEMU_CLOCK_VIRTUAL, + (cpu->singlestep_enabled & SSTEP_NOTIMER) == 0); + + if (cpu_can_run(cpu)) { + int r; + + qemu_mutex_unlock_iothread(); + if (icount_enabled()) { + prepare_icount_for_run(cpu); + } + r = tcg_cpu_exec(cpu); + if (icount_enabled()) { + process_icount_data(cpu); + } + qemu_mutex_lock_iothread(); + + if (r == EXCP_DEBUG) { + cpu_handle_guest_debug(cpu); + break; + } else if (r == EXCP_ATOMIC) { + qemu_mutex_unlock_iothread(); + cpu_exec_step_atomic(cpu); + qemu_mutex_lock_iothread(); + break; + } + } else if (cpu->stop) { + if (cpu->unplug) { + cpu = CPU_NEXT(cpu); + } + break; + } + + cpu = CPU_NEXT(cpu); + } /* while (cpu && !cpu->exit_request).. */ + + /* Does not need qatomic_mb_set because a spurious wakeup is okay. */ + qatomic_set(&tcg_current_rr_cpu, NULL); + + if (cpu && cpu->exit_request) { + qatomic_mb_set(&cpu->exit_request, 0); + } + + if (icount_enabled() && all_cpu_threads_idle()) { + /* + * When all cpus are sleeping (e.g in WFI), to avoid a deadlock + * in the main_loop, wake it up in order to start the warp timer. + */ + qemu_notify_event(); + } + + qemu_tcg_rr_wait_io_event(); + deal_with_unplugged_cpus(); + } + + rcu_unregister_thread(); + return NULL; +} + +const CpusAccel tcg_cpus_rr = { + .create_vcpu_thread = tcg_start_vcpu_thread, + .kick_vcpu_thread = qemu_cpu_kick_rr_cpus, + + .handle_interrupt = tcg_handle_interrupt, +}; diff --git a/accel/tcg/tcg-cpus-rr.h b/accel/tcg/tcg-cpus-rr.h new file mode 100644 index 0000000000..12463b0b93 --- /dev/null +++ b/accel/tcg/tcg-cpus-rr.h @@ -0,0 +1,21 @@ +/* + * QEMU TCG Single Threaded vCPUs implementation + * + * Copyright 2020 SUSE LLC + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. + */ + +#ifndef TCG_CPUS_RR_H +#define TCG_CPUS_RR_H + +#define TCG_KICK_PERIOD (NANOSECONDS_PER_SECOND / 10) + +/* Kick all RR vCPUs. */ +void qemu_cpu_kick_rr_cpus(CPUState *unused); + +/* Single-threaded TCG */ +void *tcg_rr_cpu_thread_fn(void *arg); + +#endif /* TCG_CPUS_RR_H */ diff --git a/accel/tcg/tcg-cpus.c b/accel/tcg/tcg-cpus.c index da1c63d8f6..60d307dba1 100644 --- a/accel/tcg/tcg-cpus.c +++ b/accel/tcg/tcg-cpus.c @@ -1,5 +1,7 @@ /* - * QEMU System Emulator + * QEMU TCG vcpu common functionality + * + * Functionality common to all TCG vcpu variants: mttcg, rr and icount. * * Copyright (c) 2003-2008 Fabrice Bellard * Copyright (c) 2014 Red Hat Inc. @@ -33,436 +35,12 @@ #include "hw/boards.h" #include "tcg-cpus.h" +#include "tcg-cpus-mttcg.h" +#include "tcg-cpus-rr.h" -/* Kick all RR vCPUs */ -static void qemu_cpu_kick_rr_cpus(void) -{ - CPUState *cpu; +/* common functionality among all TCG variants */ - CPU_FOREACH(cpu) { - cpu_exit(cpu); - }; -} - -static void tcg_kick_vcpu_thread(CPUState *cpu) -{ - if (qemu_tcg_mttcg_enabled()) { - cpu_exit(cpu); - } else { - qemu_cpu_kick_rr_cpus(); - } -} - -/* - * TCG vCPU kick timer - * - * The kick timer is responsible for moving single threaded vCPU - * emulation on to the next vCPU. If more than one vCPU is running a - * timer event with force a cpu->exit so the next vCPU can get - * scheduled. - * - * The timer is removed if all vCPUs are idle and restarted again once - * idleness is complete. 
- */ - -static QEMUTimer *tcg_kick_vcpu_timer; -static CPUState *tcg_current_rr_cpu; - -#define TCG_KICK_PERIOD (NANOSECONDS_PER_SECOND / 10) - -static inline int64_t qemu_tcg_next_kick(void) -{ - return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + TCG_KICK_PERIOD; -} - -/* Kick the currently round-robin scheduled vCPU to next */ -static void qemu_cpu_kick_rr_next_cpu(void) -{ - CPUState *cpu; - do { - cpu = qatomic_mb_read(&tcg_current_rr_cpu); - if (cpu) { - cpu_exit(cpu); - } - } while (cpu != qatomic_mb_read(&tcg_current_rr_cpu)); -} - -static void kick_tcg_thread(void *opaque) -{ - timer_mod(tcg_kick_vcpu_timer, qemu_tcg_next_kick()); - qemu_cpu_kick_rr_next_cpu(); -} - -static void start_tcg_kick_timer(void) -{ - assert(!mttcg_enabled); - if (!tcg_kick_vcpu_timer && CPU_NEXT(first_cpu)) { - tcg_kick_vcpu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, - kick_tcg_thread, NULL); - } - if (tcg_kick_vcpu_timer && !timer_pending(tcg_kick_vcpu_timer)) { - timer_mod(tcg_kick_vcpu_timer, qemu_tcg_next_kick()); - } -} - -static void stop_tcg_kick_timer(void) -{ - assert(!mttcg_enabled); - if (tcg_kick_vcpu_timer && timer_pending(tcg_kick_vcpu_timer)) { - timer_del(tcg_kick_vcpu_timer); - } -} - -static void qemu_tcg_destroy_vcpu(CPUState *cpu) -{ -} - -static void qemu_tcg_rr_wait_io_event(void) -{ - CPUState *cpu; - - while (all_cpu_threads_idle()) { - stop_tcg_kick_timer(); - qemu_cond_wait_iothread(first_cpu->halt_cond); - } - - start_tcg_kick_timer(); - - CPU_FOREACH(cpu) { - qemu_wait_io_event_common(cpu); - } -} - -static int64_t tcg_get_icount_limit(void) -{ - int64_t deadline; - - if (replay_mode != REPLAY_MODE_PLAY) { - /* - * Include all the timers, because they may need an attention. - * Too long CPU execution may create unnecessary delay in UI. - */ - deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL, - QEMU_TIMER_ATTR_ALL); - /* Check realtime timers, because they help with input processing */ - deadline = qemu_soonest_timeout(deadline, - qemu_clock_deadline_ns_all(QEMU_CLOCK_REALTIME, - QEMU_TIMER_ATTR_ALL)); - - /* - * Maintain prior (possibly buggy) behaviour where if no deadline - * was set (as there is no QEMU_CLOCK_VIRTUAL timer) or it is more than - * INT32_MAX nanoseconds ahead, we still use INT32_MAX - * nanoseconds. - */ - if ((deadline < 0) || (deadline > INT32_MAX)) { - deadline = INT32_MAX; - } - - return icount_round(deadline); - } else { - return replay_get_instructions(); - } -} - -static void notify_aio_contexts(void) -{ - /* Wake up other AioContexts. */ - qemu_clock_notify(QEMU_CLOCK_VIRTUAL); - qemu_clock_run_timers(QEMU_CLOCK_VIRTUAL); -} - -static void handle_icount_deadline(void) -{ - assert(qemu_in_vcpu_thread()); - if (icount_enabled()) { - int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL, - QEMU_TIMER_ATTR_ALL); - - if (deadline == 0) { - notify_aio_contexts(); - } - } -} - -static void prepare_icount_for_run(CPUState *cpu) -{ - if (icount_enabled()) { - int insns_left; - - /* - * These should always be cleared by process_icount_data after - * each vCPU execution. 
However u16.high can be raised - * asynchronously by cpu_exit/cpu_interrupt/tcg_handle_interrupt - */ - g_assert(cpu_neg(cpu)->icount_decr.u16.low == 0); - g_assert(cpu->icount_extra == 0); - - cpu->icount_budget = tcg_get_icount_limit(); - insns_left = MIN(0xffff, cpu->icount_budget); - cpu_neg(cpu)->icount_decr.u16.low = insns_left; - cpu->icount_extra = cpu->icount_budget - insns_left; - - replay_mutex_lock(); - - if (cpu->icount_budget == 0 && replay_has_checkpoint()) { - notify_aio_contexts(); - } - } -} - -static void process_icount_data(CPUState *cpu) -{ - if (icount_enabled()) { - /* Account for executed instructions */ - icount_update(cpu); - - /* Reset the counters */ - cpu_neg(cpu)->icount_decr.u16.low = 0; - cpu->icount_extra = 0; - cpu->icount_budget = 0; - - replay_account_executed_instructions(); - - replay_mutex_unlock(); - } -} - -static int tcg_cpu_exec(CPUState *cpu) -{ - int ret; -#ifdef CONFIG_PROFILER - int64_t ti; -#endif - - assert(tcg_enabled()); -#ifdef CONFIG_PROFILER - ti = profile_getclock(); -#endif - cpu_exec_start(cpu); - ret = cpu_exec(cpu); - cpu_exec_end(cpu); -#ifdef CONFIG_PROFILER - qatomic_set(&tcg_ctx->prof.cpu_exec_time, - tcg_ctx->prof.cpu_exec_time + profile_getclock() - ti); -#endif - return ret; -} - -/* - * Destroy any remaining vCPUs which have been unplugged and have - * finished running - */ -static void deal_with_unplugged_cpus(void) -{ - CPUState *cpu; - - CPU_FOREACH(cpu) { - if (cpu->unplug && !cpu_can_run(cpu)) { - qemu_tcg_destroy_vcpu(cpu); - cpu_thread_signal_destroyed(cpu); - break; - } - } -} - -/* - * Single-threaded TCG - * - * In the single-threaded case each vCPU is simulated in turn. If - * there is more than a single vCPU we create a simple timer to kick - * the vCPU and ensure we don't get stuck in a tight loop in one vCPU. - * This is done explicitly rather than relying on side-effects - * elsewhere. - */ - -static void *tcg_rr_cpu_thread_fn(void *arg) -{ - CPUState *cpu = arg; - - assert(tcg_enabled()); - rcu_register_thread(); - tcg_register_thread(); - - qemu_mutex_lock_iothread(); - qemu_thread_get_self(cpu->thread); - - cpu->thread_id = qemu_get_thread_id(); - cpu->can_do_io = 1; - cpu_thread_signal_created(cpu); - qemu_guest_random_seed_thread_part2(cpu->random_seed); - - /* wait for initial kick-off after machine start */ - while (first_cpu->stopped) { - qemu_cond_wait_iothread(first_cpu->halt_cond); - - /* process any pending work */ - CPU_FOREACH(cpu) { - current_cpu = cpu; - qemu_wait_io_event_common(cpu); - } - } - - start_tcg_kick_timer(); - - cpu = first_cpu; - - /* process any pending work */ - cpu->exit_request = 1; - - while (1) { - qemu_mutex_unlock_iothread(); - replay_mutex_lock(); - qemu_mutex_lock_iothread(); - /* Account partial waits to QEMU_CLOCK_VIRTUAL. */ - icount_account_warp_timer(); - - /* - * Run the timers here. This is much more efficient than - * waking up the I/O thread and waiting for completion. 
- */ - handle_icount_deadline(); - - replay_mutex_unlock(); - - if (!cpu) { - cpu = first_cpu; - } - - while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) { - - qatomic_mb_set(&tcg_current_rr_cpu, cpu); - current_cpu = cpu; - - qemu_clock_enable(QEMU_CLOCK_VIRTUAL, - (cpu->singlestep_enabled & SSTEP_NOTIMER) == 0); - - if (cpu_can_run(cpu)) { - int r; - - qemu_mutex_unlock_iothread(); - prepare_icount_for_run(cpu); - - r = tcg_cpu_exec(cpu); - - process_icount_data(cpu); - qemu_mutex_lock_iothread(); - - if (r == EXCP_DEBUG) { - cpu_handle_guest_debug(cpu); - break; - } else if (r == EXCP_ATOMIC) { - qemu_mutex_unlock_iothread(); - cpu_exec_step_atomic(cpu); - qemu_mutex_lock_iothread(); - break; - } - } else if (cpu->stop) { - if (cpu->unplug) { - cpu = CPU_NEXT(cpu); - } - break; - } - - cpu = CPU_NEXT(cpu); - } /* while (cpu && !cpu->exit_request).. */ - - /* Does not need qatomic_mb_set because a spurious wakeup is okay. */ - qatomic_set(&tcg_current_rr_cpu, NULL); - - if (cpu && cpu->exit_request) { - qatomic_mb_set(&cpu->exit_request, 0); - } - - if (icount_enabled() && all_cpu_threads_idle()) { - /* - * When all cpus are sleeping (e.g in WFI), to avoid a deadlock - * in the main_loop, wake it up in order to start the warp timer. - */ - qemu_notify_event(); - } - - qemu_tcg_rr_wait_io_event(); - deal_with_unplugged_cpus(); - } - - rcu_unregister_thread(); - return NULL; -} - -/* - * Multi-threaded TCG - * - * In the multi-threaded case each vCPU has its own thread. The TLS - * variable current_cpu can be used deep in the code to find the - * current CPUState for a given thread. - */ - -static void *tcg_cpu_thread_fn(void *arg) -{ - CPUState *cpu = arg; - - assert(tcg_enabled()); - g_assert(!icount_enabled()); - - rcu_register_thread(); - tcg_register_thread(); - - qemu_mutex_lock_iothread(); - qemu_thread_get_self(cpu->thread); - - cpu->thread_id = qemu_get_thread_id(); - cpu->can_do_io = 1; - current_cpu = cpu; - cpu_thread_signal_created(cpu); - qemu_guest_random_seed_thread_part2(cpu->random_seed); - - /* process any pending work */ - cpu->exit_request = 1; - - do { - if (cpu_can_run(cpu)) { - int r; - qemu_mutex_unlock_iothread(); - r = tcg_cpu_exec(cpu); - qemu_mutex_lock_iothread(); - switch (r) { - case EXCP_DEBUG: - cpu_handle_guest_debug(cpu); - break; - case EXCP_HALTED: - /* - * during start-up the vCPU is reset and the thread is - * kicked several times. If we don't ensure we go back - * to sleep in the halted state we won't cleanly - * start-up when the vCPU is enabled. - * - * cpu->halted should ensure we sleep in wait_io_event - */ - g_assert(cpu->halted); - break; - case EXCP_ATOMIC: - qemu_mutex_unlock_iothread(); - cpu_exec_step_atomic(cpu); - qemu_mutex_lock_iothread(); - default: - /* Ignore everything else? 
*/ - break; - } - } - - qatomic_mb_set(&cpu->exit_request, 0); - qemu_wait_io_event(cpu); - } while (!cpu->unplug || cpu_can_run(cpu)); - - qemu_tcg_destroy_vcpu(cpu); - cpu_thread_signal_destroyed(cpu); - qemu_mutex_unlock_iothread(); - rcu_unregister_thread(); - return NULL; -} - -static void tcg_start_vcpu_thread(CPUState *cpu) +void tcg_start_vcpu_thread(CPUState *cpu) { char thread_name[VCPU_THREAD_NAME_SIZE]; static QemuCond *single_tcg_halt_cond; @@ -518,29 +96,36 @@ static void tcg_start_vcpu_thread(CPUState *cpu) } } -static int64_t tcg_get_virtual_clock(void) +void qemu_tcg_destroy_vcpu(CPUState *cpu) { - if (icount_enabled()) { - return icount_get(); - } - return cpu_get_clock(); + cpu_thread_signal_destroyed(cpu); } -static int64_t tcg_get_elapsed_ticks(void) +int tcg_cpu_exec(CPUState *cpu) { - if (icount_enabled()) { - return icount_get(); - } - return cpu_get_ticks(); + int ret; +#ifdef CONFIG_PROFILER + int64_t ti; +#endif + assert(tcg_enabled()); +#ifdef CONFIG_PROFILER + ti = profile_getclock(); +#endif + cpu_exec_start(cpu); + ret = cpu_exec(cpu); + cpu_exec_end(cpu); +#ifdef CONFIG_PROFILER + qatomic_set(&tcg_ctx->prof.cpu_exec_time, + tcg_ctx->prof.cpu_exec_time + profile_getclock() - ti); +#endif + return ret; } /* mask must never be zero, except for A20 change call */ -static void tcg_handle_interrupt(CPUState *cpu, int mask) +void tcg_handle_interrupt(CPUState *cpu, int mask) { - int old_mask; g_assert(qemu_mutex_iothread_locked()); - old_mask = cpu->interrupt_request; cpu->interrupt_request |= mask; /* @@ -551,20 +136,5 @@ static void tcg_handle_interrupt(CPUState *cpu, int mask) qemu_cpu_kick(cpu); } else { qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1); - if (icount_enabled() && - !cpu->can_do_io - && (mask & ~old_mask) != 0) { - cpu_abort(cpu, "Raised interrupt while not in I/O function"); - } } } - -const CpusAccel tcg_cpus = { - .create_vcpu_thread = tcg_start_vcpu_thread, - .kick_vcpu_thread = tcg_kick_vcpu_thread, - - .handle_interrupt = tcg_handle_interrupt, - - .get_virtual_clock = tcg_get_virtual_clock, - .get_elapsed_ticks = tcg_get_elapsed_ticks, -}; diff --git a/accel/tcg/tcg-cpus.h b/accel/tcg/tcg-cpus.h index 8b1d9d2abc..cb61aed1cc 100644 --- a/accel/tcg/tcg-cpus.h +++ b/accel/tcg/tcg-cpus.h @@ -1,5 +1,7 @@ /* - * Accelerator CPUS Interface + * QEMU TCG vcpu common functionality + * + * Functionality common to all TCG vcpu variants: mttcg, rr and icount. 
* * Copyright 2020 SUSE LLC * @@ -12,6 +14,13 @@ #include "sysemu/cpus.h" -extern const CpusAccel tcg_cpus; +extern const CpusAccel tcg_cpus_mttcg; +extern const CpusAccel tcg_cpus_icount; +extern const CpusAccel tcg_cpus_rr; + +void tcg_start_vcpu_thread(CPUState *cpu); +void qemu_tcg_destroy_vcpu(CPUState *cpu); +int tcg_cpu_exec(CPUState *cpu); +void tcg_handle_interrupt(CPUState *cpu, int mask); #endif /* TCG_CPUS_H */ diff --git a/softmmu/icount.c b/softmmu/icount.c index 020a201a01..dbcd8c3594 100644 --- a/softmmu/icount.c +++ b/softmmu/icount.c @@ -396,7 +396,7 @@ void icount_start_warp_timer(void) void icount_account_warp_timer(void) { - if (!icount_enabled() || !icount_sleep) { + if (!icount_sleep) { return; } From patchwork Wed Oct 14 19:23:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Claudio Fontana X-Patchwork-Id: 11838239 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 11C701592 for ; Wed, 14 Oct 2020 19:33:24 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 9FD0F2222C for ; Wed, 14 Oct 2020 19:33:23 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9FD0F2222C Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=suse.de Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Received: from localhost ([::1]:47636 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1kSmWY-0003gj-Cf for patchwork-qemu-devel@patchwork.kernel.org; Wed, 14 Oct 2020 15:33:22 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:42016) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1kSmN2-0001AA-Cp for qemu-devel@nongnu.org; Wed, 14 Oct 2020 15:23:32 -0400 Received: from mx2.suse.de ([195.135.220.15]:34278) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1kSmN0-00022f-8N for qemu-devel@nongnu.org; Wed, 14 Oct 2020 15:23:32 -0400 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id AFA0EB2A1; Wed, 14 Oct 2020 19:23:27 +0000 (UTC) From: Claudio Fontana To: =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Richard Henderson , =?utf-8?q?Alex_Benn=C3=A9?= =?utf-8?q?e?= Subject: [PATCH v2 2/3] accel/tcg: split tcg_start_vcpu_thread Date: Wed, 14 Oct 2020 21:23:23 +0200 Message-Id: <20201014192324.6390-3-cfontana@suse.de> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201014192324.6390-1-cfontana@suse.de> References: <20201014192324.6390-1-cfontana@suse.de> MIME-Version: 1.0 Received-SPF: pass client-ip=195.135.220.15; envelope-from=cfontana@suse.de; helo=mx2.suse.de X-detected-operating-system: by eggs.gnu.org: First seen = 2020/10/12 01:21:00 X-ACL-Warn: Detected OS = Linux 2.2.x-3.x (no timestamps) [generic] X-Spam_score_int: -41 X-Spam_score: -4.2 X-Spam_bar: ---- X-Spam_report: (-4.2 / 5.0 requ) BAYES_00=-1.9, RCVD_IN_DNSWL_MED=-2.3, RCVD_IN_MSPIKE_H3=0.001, RCVD_IN_MSPIKE_WL=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: 
qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Claudio Fontana , qemu-devel@nongnu.org Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" after the initial split into 3 tcg variants, we proceed to also split tcg_start_vcpu_thread. We actually split it in 2 this time, since the icount variant just uses the round robin function. Suggested-by: Richard Henderson Signed-off-by: Claudio Fontana --- accel/tcg/tcg-all.c | 5 ++++ accel/tcg/tcg-cpus-icount.c | 2 +- accel/tcg/tcg-cpus-mttcg.c | 29 +++++++++++++++++-- accel/tcg/tcg-cpus-mttcg.h | 10 ------- accel/tcg/tcg-cpus-rr.c | 39 +++++++++++++++++++++++-- accel/tcg/tcg-cpus-rr.h | 4 +-- accel/tcg/tcg-cpus.c | 58 ------------------------------------- accel/tcg/tcg-cpus.h | 1 - 8 files changed, 71 insertions(+), 77 deletions(-) diff --git a/accel/tcg/tcg-all.c b/accel/tcg/tcg-all.c index e42a028043..1ac0b76515 100644 --- a/accel/tcg/tcg-all.c +++ b/accel/tcg/tcg-all.c @@ -105,6 +105,11 @@ static int tcg_init(MachineState *ms) tcg_exec_init(s->tb_size * 1024 * 1024); mttcg_enabled = s->mttcg_enabled; + /* + * Initialize TCG regions + */ + tcg_region_init(); + if (mttcg_enabled) { cpus_register_accel(&tcg_cpus_mttcg); } else if (icount_enabled()) { diff --git a/accel/tcg/tcg-cpus-icount.c b/accel/tcg/tcg-cpus-icount.c index d3af3afb6d..82dbe2cacf 100644 --- a/accel/tcg/tcg-cpus-icount.c +++ b/accel/tcg/tcg-cpus-icount.c @@ -138,7 +138,7 @@ static void icount_handle_interrupt(CPUState *cpu, int mask) } const CpusAccel tcg_cpus_icount = { - .create_vcpu_thread = tcg_start_vcpu_thread, + .create_vcpu_thread = rr_start_vcpu_thread, .kick_vcpu_thread = qemu_cpu_kick_rr_cpus, .handle_interrupt = icount_handle_interrupt, diff --git a/accel/tcg/tcg-cpus-mttcg.c b/accel/tcg/tcg-cpus-mttcg.c index 723703fa3e..619718c994 100644 --- a/accel/tcg/tcg-cpus-mttcg.c +++ b/accel/tcg/tcg-cpus-mttcg.c @@ -33,7 +33,6 @@ #include "hw/boards.h" #include "tcg-cpus.h" -#include "tcg-cpus-mttcg.h" /* * Multi-threaded TCG @@ -43,7 +42,7 @@ * current CPUState for a given thread. 
*/ -void *tcg_cpu_thread_fn(void *arg) +static void *tcg_cpu_thread_fn(void *arg) { CPUState *cpu = arg; @@ -111,8 +110,32 @@ static void mttcg_kick_vcpu_thread(CPUState *cpu) cpu_exit(cpu); } +static void mttcg_start_vcpu_thread(CPUState *cpu) +{ + char thread_name[VCPU_THREAD_NAME_SIZE]; + + g_assert(tcg_enabled()); + + parallel_cpus = (current_machine->smp.max_cpus > 1); + + cpu->thread = g_malloc0(sizeof(QemuThread)); + cpu->halt_cond = g_malloc0(sizeof(QemuCond)); + qemu_cond_init(cpu->halt_cond); + + /* create a thread per vCPU with TCG (MTTCG) */ + snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/TCG", + cpu->cpu_index); + + qemu_thread_create(cpu->thread, thread_name, tcg_cpu_thread_fn, + cpu, QEMU_THREAD_JOINABLE); + +#ifdef _WIN32 + cpu->hThread = qemu_thread_get_handle(cpu->thread); +#endif +} + const CpusAccel tcg_cpus_mttcg = { - .create_vcpu_thread = tcg_start_vcpu_thread, + .create_vcpu_thread = mttcg_start_vcpu_thread, .kick_vcpu_thread = mttcg_kick_vcpu_thread, .handle_interrupt = tcg_handle_interrupt, diff --git a/accel/tcg/tcg-cpus-mttcg.h b/accel/tcg/tcg-cpus-mttcg.h index 5d203243e8..6a70dcb5d6 100644 --- a/accel/tcg/tcg-cpus-mttcg.h +++ b/accel/tcg/tcg-cpus-mttcg.h @@ -10,14 +10,4 @@ #ifndef TCG_CPUS_MTTCG_H #define TCG_CPUS_MTTCG_H -/* - * Multi-threaded TCG - * - * In the multi-threaded case each vCPU has its own thread. The TLS - * variable current_cpu can be used deep in the code to find the - * current CPUState for a given thread. - */ - -void *tcg_cpu_thread_fn(void *arg); - #endif /* TCG_CPUS_MTTCG_H */ diff --git a/accel/tcg/tcg-cpus-rr.c b/accel/tcg/tcg-cpus-rr.c index 82bb7c5ea6..31ec37915a 100644 --- a/accel/tcg/tcg-cpus-rr.c +++ b/accel/tcg/tcg-cpus-rr.c @@ -146,7 +146,7 @@ static void deal_with_unplugged_cpus(void) * elsewhere. */ -void *tcg_rr_cpu_thread_fn(void *arg) +static void *tcg_rr_cpu_thread_fn(void *arg) { CPUState *cpu = arg; @@ -264,8 +264,43 @@ void *tcg_rr_cpu_thread_fn(void *arg) return NULL; } +void rr_start_vcpu_thread(CPUState *cpu) +{ + char thread_name[VCPU_THREAD_NAME_SIZE]; + static QemuCond *single_tcg_halt_cond; + static QemuThread *single_tcg_cpu_thread; + + g_assert(tcg_enabled()); + parallel_cpus = false; + + if (!single_tcg_cpu_thread) { + cpu->thread = g_malloc0(sizeof(QemuThread)); + cpu->halt_cond = g_malloc0(sizeof(QemuCond)); + qemu_cond_init(cpu->halt_cond); + + /* share a single thread for all cpus with TCG */ + snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "ALL CPUs/TCG"); + qemu_thread_create(cpu->thread, thread_name, + tcg_rr_cpu_thread_fn, + cpu, QEMU_THREAD_JOINABLE); + + single_tcg_halt_cond = cpu->halt_cond; + single_tcg_cpu_thread = cpu->thread; +#ifdef _WIN32 + cpu->hThread = qemu_thread_get_handle(cpu->thread); +#endif + } else { + /* we share the thread */ + cpu->thread = single_tcg_cpu_thread; + cpu->halt_cond = single_tcg_halt_cond; + cpu->thread_id = first_cpu->thread_id; + cpu->can_do_io = 1; + cpu->created = true; + } +} + const CpusAccel tcg_cpus_rr = { - .create_vcpu_thread = tcg_start_vcpu_thread, + .create_vcpu_thread = rr_start_vcpu_thread, .kick_vcpu_thread = qemu_cpu_kick_rr_cpus, .handle_interrupt = tcg_handle_interrupt, diff --git a/accel/tcg/tcg-cpus-rr.h b/accel/tcg/tcg-cpus-rr.h index 12463b0b93..2e5943eda9 100644 --- a/accel/tcg/tcg-cpus-rr.h +++ b/accel/tcg/tcg-cpus-rr.h @@ -15,7 +15,7 @@ /* Kick all RR vCPUs. 
*/ void qemu_cpu_kick_rr_cpus(CPUState *unused); -/* Single-threaded TCG */ -void *tcg_rr_cpu_thread_fn(void *arg); +/* start the round robin vcpu thread */ +void rr_start_vcpu_thread(CPUState *cpu); #endif /* TCG_CPUS_RR_H */ diff --git a/accel/tcg/tcg-cpus.c b/accel/tcg/tcg-cpus.c index 60d307dba1..fa70237414 100644 --- a/accel/tcg/tcg-cpus.c +++ b/accel/tcg/tcg-cpus.c @@ -35,67 +35,9 @@ #include "hw/boards.h" #include "tcg-cpus.h" -#include "tcg-cpus-mttcg.h" -#include "tcg-cpus-rr.h" /* common functionality among all TCG variants */ -void tcg_start_vcpu_thread(CPUState *cpu) -{ - char thread_name[VCPU_THREAD_NAME_SIZE]; - static QemuCond *single_tcg_halt_cond; - static QemuThread *single_tcg_cpu_thread; - static int tcg_region_inited; - - assert(tcg_enabled()); - /* - * Initialize TCG regions--once. Now is a good time, because: - * (1) TCG's init context, prologue and target globals have been set up. - * (2) qemu_tcg_mttcg_enabled() works now (TCG init code runs before the - * -accel flag is processed, so the check doesn't work then). - */ - if (!tcg_region_inited) { - tcg_region_inited = 1; - tcg_region_init(); - parallel_cpus = qemu_tcg_mttcg_enabled() && current_machine->smp.max_cpus > 1; - } - - if (qemu_tcg_mttcg_enabled() || !single_tcg_cpu_thread) { - cpu->thread = g_malloc0(sizeof(QemuThread)); - cpu->halt_cond = g_malloc0(sizeof(QemuCond)); - qemu_cond_init(cpu->halt_cond); - - if (qemu_tcg_mttcg_enabled()) { - /* create a thread per vCPU with TCG (MTTCG) */ - snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/TCG", - cpu->cpu_index); - - qemu_thread_create(cpu->thread, thread_name, tcg_cpu_thread_fn, - cpu, QEMU_THREAD_JOINABLE); - - } else { - /* share a single thread for all cpus with TCG */ - snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "ALL CPUs/TCG"); - qemu_thread_create(cpu->thread, thread_name, - tcg_rr_cpu_thread_fn, - cpu, QEMU_THREAD_JOINABLE); - - single_tcg_halt_cond = cpu->halt_cond; - single_tcg_cpu_thread = cpu->thread; - } -#ifdef _WIN32 - cpu->hThread = qemu_thread_get_handle(cpu->thread); -#endif - } else { - /* For non-MTTCG cases we share the thread */ - cpu->thread = single_tcg_cpu_thread; - cpu->halt_cond = single_tcg_halt_cond; - cpu->thread_id = first_cpu->thread_id; - cpu->can_do_io = 1; - cpu->created = true; - } -} - void qemu_tcg_destroy_vcpu(CPUState *cpu) { cpu_thread_signal_destroyed(cpu); diff --git a/accel/tcg/tcg-cpus.h b/accel/tcg/tcg-cpus.h index cb61aed1cc..21419d1117 100644 --- a/accel/tcg/tcg-cpus.h +++ b/accel/tcg/tcg-cpus.h @@ -18,7 +18,6 @@ extern const CpusAccel tcg_cpus_mttcg; extern const CpusAccel tcg_cpus_icount; extern const CpusAccel tcg_cpus_rr; -void tcg_start_vcpu_thread(CPUState *cpu); void qemu_tcg_destroy_vcpu(CPUState *cpu); int tcg_cpu_exec(CPUState *cpu); void tcg_handle_interrupt(CPUState *cpu, int mask); From patchwork Wed Oct 14 19:23:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Claudio Fontana X-Patchwork-Id: 11838231 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DC3891592 for ; Wed, 14 Oct 2020 19:24:38 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 5CFC8214D8 for ; Wed, 14 Oct 2020 19:24:38 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 
mail.kernel.org 5CFC8214D8 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=suse.de Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Received: from localhost ([::1]:53918 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1kSmO5-0002r6-CO for patchwork-qemu-devel@patchwork.kernel.org; Wed, 14 Oct 2020 15:24:37 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:42034) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1kSmN4-0001E3-1r for qemu-devel@nongnu.org; Wed, 14 Oct 2020 15:23:34 -0400 Received: from mx2.suse.de ([195.135.220.15]:34290) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1kSmN0-00022i-8n for qemu-devel@nongnu.org; Wed, 14 Oct 2020 15:23:33 -0400 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id 0BA58B2A3; Wed, 14 Oct 2020 19:23:28 +0000 (UTC) From: Claudio Fontana To: =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Richard Henderson , =?utf-8?q?Alex_Benn=C3=A9?= =?utf-8?q?e?= Subject: [PATCH v2 3/3] accel/tcg: rename tcg-cpus functions to match module name Date: Wed, 14 Oct 2020 21:23:24 +0200 Message-Id: <20201014192324.6390-4-cfontana@suse.de> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201014192324.6390-1-cfontana@suse.de> References: <20201014192324.6390-1-cfontana@suse.de> MIME-Version: 1.0 Received-SPF: pass client-ip=195.135.220.15; envelope-from=cfontana@suse.de; helo=mx2.suse.de X-detected-operating-system: by eggs.gnu.org: First seen = 2020/10/12 01:21:00 X-ACL-Warn: Detected OS = Linux 2.2.x-3.x (no timestamps) [generic] X-Spam_score_int: -41 X-Spam_score: -4.2 X-Spam_bar: ---- X-Spam_report: (-4.2 / 5.0 requ) BAYES_00=-1.9, RCVD_IN_DNSWL_MED=-2.3, RCVD_IN_MSPIKE_H3=0.001, RCVD_IN_MSPIKE_WL=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Claudio Fontana , qemu-devel@nongnu.org Errors-To: qemu-devel-bounces+patchwork-qemu-devel=patchwork.kernel.org@nongnu.org Sender: "Qemu-devel" Signed-off-by: Claudio Fontana Reviewed-by: Philippe Mathieu-Daudé --- accel/tcg/tcg-cpus-icount.c | 24 ++++++------ accel/tcg/tcg-cpus-icount.h | 6 +-- accel/tcg/tcg-cpus-mttcg.c | 10 ++--- accel/tcg/tcg-cpus-rr.c | 74 ++++++++++++++++++------------------- accel/tcg/tcg-cpus-rr.h | 2 +- accel/tcg/tcg-cpus.c | 6 +-- accel/tcg/tcg-cpus.h | 6 +-- 7 files changed, 64 insertions(+), 64 deletions(-) diff --git a/accel/tcg/tcg-cpus-icount.c b/accel/tcg/tcg-cpus-icount.c index 82dbe2cacf..9f45432275 100644 --- a/accel/tcg/tcg-cpus-icount.c +++ b/accel/tcg/tcg-cpus-icount.c @@ -36,7 +36,7 @@ #include "tcg-cpus-icount.h" #include "tcg-cpus-rr.h" -static int64_t tcg_get_icount_limit(void) +static int64_t icount_get_limit(void) { int64_t deadline; @@ -68,37 +68,37 @@ static int64_t tcg_get_icount_limit(void) } } -static void notify_aio_contexts(void) +static void icount_notify_aio_contexts(void) { /* Wake up other AioContexts. 
*/ qemu_clock_notify(QEMU_CLOCK_VIRTUAL); qemu_clock_run_timers(QEMU_CLOCK_VIRTUAL); } -void handle_icount_deadline(void) +void icount_handle_deadline(void) { assert(qemu_in_vcpu_thread()); int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL, QEMU_TIMER_ATTR_ALL); if (deadline == 0) { - notify_aio_contexts(); + icount_notify_aio_contexts(); } } -void prepare_icount_for_run(CPUState *cpu) +void icount_prepare_for_run(CPUState *cpu) { int insns_left; /* - * These should always be cleared by process_icount_data after + * These should always be cleared by icount_process_data after * each vCPU execution. However u16.high can be raised - * asynchronously by cpu_exit/cpu_interrupt/tcg_handle_interrupt + * asynchronously by cpu_exit/cpu_interrupt/tcg_cpus_handle_interrupt */ g_assert(cpu_neg(cpu)->icount_decr.u16.low == 0); g_assert(cpu->icount_extra == 0); - cpu->icount_budget = tcg_get_icount_limit(); + cpu->icount_budget = icount_get_limit(); insns_left = MIN(0xffff, cpu->icount_budget); cpu_neg(cpu)->icount_decr.u16.low = insns_left; cpu->icount_extra = cpu->icount_budget - insns_left; @@ -106,11 +106,11 @@ void prepare_icount_for_run(CPUState *cpu) replay_mutex_lock(); if (cpu->icount_budget == 0 && replay_has_checkpoint()) { - notify_aio_contexts(); + icount_notify_aio_contexts(); } } -void process_icount_data(CPUState *cpu) +void icount_process_data(CPUState *cpu) { /* Account for executed instructions */ icount_update(cpu); @@ -129,7 +129,7 @@ static void icount_handle_interrupt(CPUState *cpu, int mask) { int old_mask = cpu->interrupt_request; - tcg_handle_interrupt(cpu, mask); + tcg_cpus_handle_interrupt(cpu, mask); if (qemu_cpu_is_self(cpu) && !cpu->can_do_io && (mask & ~old_mask) != 0) { @@ -139,7 +139,7 @@ static void icount_handle_interrupt(CPUState *cpu, int mask) const CpusAccel tcg_cpus_icount = { .create_vcpu_thread = rr_start_vcpu_thread, - .kick_vcpu_thread = qemu_cpu_kick_rr_cpus, + .kick_vcpu_thread = rr_kick_vcpu_thread, .handle_interrupt = icount_handle_interrupt, .get_virtual_clock = icount_get, diff --git a/accel/tcg/tcg-cpus-icount.h b/accel/tcg/tcg-cpus-icount.h index cbcf76b413..b695939dfa 100644 --- a/accel/tcg/tcg-cpus-icount.h +++ b/accel/tcg/tcg-cpus-icount.h @@ -10,8 +10,8 @@ #ifndef TCG_CPUS_ICOUNT_H #define TCG_CPUS_ICOUNT_H -void handle_icount_deadline(void); -void prepare_icount_for_run(CPUState *cpu); -void process_icount_data(CPUState *cpu); +void icount_handle_deadline(void); +void icount_prepare_for_run(CPUState *cpu); +void icount_process_data(CPUState *cpu); #endif /* TCG_CPUS_ICOUNT_H */ diff --git a/accel/tcg/tcg-cpus-mttcg.c b/accel/tcg/tcg-cpus-mttcg.c index 619718c994..8fa76f4c04 100644 --- a/accel/tcg/tcg-cpus-mttcg.c +++ b/accel/tcg/tcg-cpus-mttcg.c @@ -42,7 +42,7 @@ * current CPUState for a given thread. 
*/ -static void *tcg_cpu_thread_fn(void *arg) +static void *mttcg_cpu_thread_fn(void *arg) { CPUState *cpu = arg; @@ -68,7 +68,7 @@ static void *tcg_cpu_thread_fn(void *arg) if (cpu_can_run(cpu)) { int r; qemu_mutex_unlock_iothread(); - r = tcg_cpu_exec(cpu); + r = tcg_cpus_exec(cpu); qemu_mutex_lock_iothread(); switch (r) { case EXCP_DEBUG: @@ -99,7 +99,7 @@ static void *tcg_cpu_thread_fn(void *arg) qemu_wait_io_event(cpu); } while (!cpu->unplug || cpu_can_run(cpu)); - qemu_tcg_destroy_vcpu(cpu); + tcg_cpus_destroy(cpu); qemu_mutex_unlock_iothread(); rcu_unregister_thread(); return NULL; @@ -126,7 +126,7 @@ static void mttcg_start_vcpu_thread(CPUState *cpu) snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "CPU %d/TCG", cpu->cpu_index); - qemu_thread_create(cpu->thread, thread_name, tcg_cpu_thread_fn, + qemu_thread_create(cpu->thread, thread_name, mttcg_cpu_thread_fn, cpu, QEMU_THREAD_JOINABLE); #ifdef _WIN32 @@ -138,5 +138,5 @@ const CpusAccel tcg_cpus_mttcg = { .create_vcpu_thread = mttcg_start_vcpu_thread, .kick_vcpu_thread = mttcg_kick_vcpu_thread, - .handle_interrupt = tcg_handle_interrupt, + .handle_interrupt = tcg_cpus_handle_interrupt, }; diff --git a/accel/tcg/tcg-cpus-rr.c b/accel/tcg/tcg-cpus-rr.c index 31ec37915a..bb3bdf650b 100644 --- a/accel/tcg/tcg-cpus-rr.c +++ b/accel/tcg/tcg-cpus-rr.c @@ -37,7 +37,7 @@ #include "tcg-cpus-icount.h" /* Kick all RR vCPUs */ -void qemu_cpu_kick_rr_cpus(CPUState *unused) +void rr_kick_vcpu_thread(CPUState *unused) { CPUState *cpu; @@ -58,62 +58,62 @@ void qemu_cpu_kick_rr_cpus(CPUState *unused) * idleness is complete. */ -static QEMUTimer *tcg_kick_vcpu_timer; -static CPUState *tcg_current_rr_cpu; +static QEMUTimer *rr_kick_vcpu_timer; +static CPUState *rr_current_cpu; #define TCG_KICK_PERIOD (NANOSECONDS_PER_SECOND / 10) -static inline int64_t qemu_tcg_next_kick(void) +static inline int64_t rr_next_kick_time(void) { return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + TCG_KICK_PERIOD; } /* Kick the currently round-robin scheduled vCPU to next */ -static void qemu_cpu_kick_rr_next_cpu(void) +static void rr_kick_next_cpu(void) { CPUState *cpu; do { - cpu = qatomic_mb_read(&tcg_current_rr_cpu); + cpu = qatomic_mb_read(&rr_current_cpu); if (cpu) { cpu_exit(cpu); } - } while (cpu != qatomic_mb_read(&tcg_current_rr_cpu)); + } while (cpu != qatomic_mb_read(&rr_current_cpu)); } -static void kick_tcg_thread(void *opaque) +static void rr_kick_thread(void *opaque) { - timer_mod(tcg_kick_vcpu_timer, qemu_tcg_next_kick()); - qemu_cpu_kick_rr_next_cpu(); + timer_mod(rr_kick_vcpu_timer, rr_next_kick_time()); + rr_kick_next_cpu(); } -static void start_tcg_kick_timer(void) +static void rr_start_kick_timer(void) { - if (!tcg_kick_vcpu_timer && CPU_NEXT(first_cpu)) { - tcg_kick_vcpu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, - kick_tcg_thread, NULL); + if (!rr_kick_vcpu_timer && CPU_NEXT(first_cpu)) { + rr_kick_vcpu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, + rr_kick_thread, NULL); } - if (tcg_kick_vcpu_timer && !timer_pending(tcg_kick_vcpu_timer)) { - timer_mod(tcg_kick_vcpu_timer, qemu_tcg_next_kick()); + if (rr_kick_vcpu_timer && !timer_pending(rr_kick_vcpu_timer)) { + timer_mod(rr_kick_vcpu_timer, rr_next_kick_time()); } } -static void stop_tcg_kick_timer(void) +static void rr_stop_kick_timer(void) { - if (tcg_kick_vcpu_timer && timer_pending(tcg_kick_vcpu_timer)) { - timer_del(tcg_kick_vcpu_timer); + if (rr_kick_vcpu_timer && timer_pending(rr_kick_vcpu_timer)) { + timer_del(rr_kick_vcpu_timer); } } -static void qemu_tcg_rr_wait_io_event(void) +static void 
rr_wait_io_event(void) { CPUState *cpu; while (all_cpu_threads_idle()) { - stop_tcg_kick_timer(); + rr_stop_kick_timer(); qemu_cond_wait_iothread(first_cpu->halt_cond); } - start_tcg_kick_timer(); + rr_start_kick_timer(); CPU_FOREACH(cpu) { qemu_wait_io_event_common(cpu); @@ -124,13 +124,13 @@ static void qemu_tcg_rr_wait_io_event(void) * Destroy any remaining vCPUs which have been unplugged and have * finished running */ -static void deal_with_unplugged_cpus(void) +static void rr_deal_with_unplugged_cpus(void) { CPUState *cpu; CPU_FOREACH(cpu) { if (cpu->unplug && !cpu_can_run(cpu)) { - qemu_tcg_destroy_vcpu(cpu); + tcg_cpus_destroy(cpu); break; } } @@ -146,7 +146,7 @@ static void deal_with_unplugged_cpus(void) * elsewhere. */ -static void *tcg_rr_cpu_thread_fn(void *arg) +static void *rr_cpu_thread_fn(void *arg) { CPUState *cpu = arg; @@ -173,7 +173,7 @@ static void *tcg_rr_cpu_thread_fn(void *arg) } } - start_tcg_kick_timer(); + rr_start_kick_timer(); cpu = first_cpu; @@ -192,7 +192,7 @@ static void *tcg_rr_cpu_thread_fn(void *arg) * Run the timers here. This is much more efficient than * waking up the I/O thread and waiting for completion. */ - handle_icount_deadline(); + icount_handle_deadline(); } replay_mutex_unlock(); @@ -203,7 +203,7 @@ static void *tcg_rr_cpu_thread_fn(void *arg) while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) { - qatomic_mb_set(&tcg_current_rr_cpu, cpu); + qatomic_mb_set(&rr_current_cpu, cpu); current_cpu = cpu; qemu_clock_enable(QEMU_CLOCK_VIRTUAL, @@ -214,11 +214,11 @@ static void *tcg_rr_cpu_thread_fn(void *arg) qemu_mutex_unlock_iothread(); if (icount_enabled()) { - prepare_icount_for_run(cpu); + icount_prepare_for_run(cpu); } - r = tcg_cpu_exec(cpu); + r = tcg_cpus_exec(cpu); if (icount_enabled()) { - process_icount_data(cpu); + icount_process_data(cpu); } qemu_mutex_lock_iothread(); @@ -242,7 +242,7 @@ static void *tcg_rr_cpu_thread_fn(void *arg) } /* while (cpu && !cpu->exit_request).. */ /* Does not need qatomic_mb_set because a spurious wakeup is okay. */ - qatomic_set(&tcg_current_rr_cpu, NULL); + qatomic_set(&rr_current_cpu, NULL); if (cpu && cpu->exit_request) { qatomic_mb_set(&cpu->exit_request, 0); @@ -256,8 +256,8 @@ static void *tcg_rr_cpu_thread_fn(void *arg) qemu_notify_event(); } - qemu_tcg_rr_wait_io_event(); - deal_with_unplugged_cpus(); + rr_wait_io_event(); + rr_deal_with_unplugged_cpus(); } rcu_unregister_thread(); @@ -281,7 +281,7 @@ void rr_start_vcpu_thread(CPUState *cpu) /* share a single thread for all cpus with TCG */ snprintf(thread_name, VCPU_THREAD_NAME_SIZE, "ALL CPUs/TCG"); qemu_thread_create(cpu->thread, thread_name, - tcg_rr_cpu_thread_fn, + rr_cpu_thread_fn, cpu, QEMU_THREAD_JOINABLE); single_tcg_halt_cond = cpu->halt_cond; @@ -301,7 +301,7 @@ void rr_start_vcpu_thread(CPUState *cpu) const CpusAccel tcg_cpus_rr = { .create_vcpu_thread = rr_start_vcpu_thread, - .kick_vcpu_thread = qemu_cpu_kick_rr_cpus, + .kick_vcpu_thread = rr_kick_vcpu_thread, - .handle_interrupt = tcg_handle_interrupt, + .handle_interrupt = tcg_cpus_handle_interrupt, }; diff --git a/accel/tcg/tcg-cpus-rr.h b/accel/tcg/tcg-cpus-rr.h index 2e5943eda9..54f6ae6e86 100644 --- a/accel/tcg/tcg-cpus-rr.h +++ b/accel/tcg/tcg-cpus-rr.h @@ -13,7 +13,7 @@ #define TCG_KICK_PERIOD (NANOSECONDS_PER_SECOND / 10) /* Kick all RR vCPUs. 
*/ -void qemu_cpu_kick_rr_cpus(CPUState *unused); +void rr_kick_vcpu_thread(CPUState *unused); /* start the round robin vcpu thread */ void rr_start_vcpu_thread(CPUState *cpu); diff --git a/accel/tcg/tcg-cpus.c b/accel/tcg/tcg-cpus.c index fa70237414..6510931c07 100644 --- a/accel/tcg/tcg-cpus.c +++ b/accel/tcg/tcg-cpus.c @@ -38,12 +38,12 @@ /* common functionality among all TCG variants */ -void qemu_tcg_destroy_vcpu(CPUState *cpu) +void tcg_cpus_destroy(CPUState *cpu) { cpu_thread_signal_destroyed(cpu); } -int tcg_cpu_exec(CPUState *cpu) +int tcg_cpus_exec(CPUState *cpu) { int ret; #ifdef CONFIG_PROFILER @@ -64,7 +64,7 @@ int tcg_cpu_exec(CPUState *cpu) } /* mask must never be zero, except for A20 change call */ -void tcg_handle_interrupt(CPUState *cpu, int mask) +void tcg_cpus_handle_interrupt(CPUState *cpu, int mask) { g_assert(qemu_mutex_iothread_locked()); diff --git a/accel/tcg/tcg-cpus.h b/accel/tcg/tcg-cpus.h index 21419d1117..1be1493e7b 100644 --- a/accel/tcg/tcg-cpus.h +++ b/accel/tcg/tcg-cpus.h @@ -18,8 +18,8 @@ extern const CpusAccel tcg_cpus_mttcg; extern const CpusAccel tcg_cpus_icount; extern const CpusAccel tcg_cpus_rr; -void qemu_tcg_destroy_vcpu(CPUState *cpu); -int tcg_cpu_exec(CPUState *cpu); -void tcg_handle_interrupt(CPUState *cpu, int mask); +void tcg_cpus_destroy(CPUState *cpu); +int tcg_cpus_exec(CPUState *cpu); +void tcg_cpus_handle_interrupt(CPUState *cpu, int mask); #endif /* TCG_CPUS_H */
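
For readers following the series, a minimal sketch of how a vCPU module plugs into the CpusAccel interface used throughout these patches may help. The field names and the cpus_register_accel() call mirror the ones visible in the diffs above; everything prefixed with "example_" is a hypothetical placeholder and the include list is approximate, so treat this as an illustration of the pattern rather than code from the series.

/*
 * Illustrative sketch only (not part of the series): a skeleton CpusAccel
 * variant modelled on tcg_cpus_rr / tcg_cpus_mttcg above.
 */
#include "qemu/osdep.h"
#include "hw/core/cpu.h"      /* CPUState, cpu_exit(), qemu_cpu_kick() */
#include "sysemu/cpus.h"      /* CpusAccel, cpus_register_accel() */

static void example_start_vcpu_thread(CPUState *cpu)
{
    /* allocate cpu->thread / cpu->halt_cond and spawn the vCPU thread,
     * as mttcg_start_vcpu_thread() and rr_start_vcpu_thread() do above */
}

static void example_kick_vcpu_thread(CPUState *cpu)
{
    /* force the vCPU out of its execution loop */
    cpu_exit(cpu);
}

static void example_handle_interrupt(CPUState *cpu, int mask)
{
    /* record the pending interrupt and make sure the vCPU notices it
     * (simplified: the real tcg_cpus_handle_interrupt() also pokes
     * icount_decr when the caller is not the vCPU itself) */
    cpu->interrupt_request |= mask;
    qemu_cpu_kick(cpu);
}

static const CpusAccel example_cpus = {
    .create_vcpu_thread = example_start_vcpu_thread,
    .kick_vcpu_thread   = example_kick_vcpu_thread,
    .handle_interrupt   = example_handle_interrupt,
    /* .get_virtual_clock and .get_elapsed_ticks stay unset unless the
     * variant drives time itself, as tcg_cpus_icount does via icount_get */
};

/* an accelerator's init code then selects and registers one variant: */
static void example_register(void)
{
    cpus_register_accel(&example_cpus);
}

In these terms, patch 1 carves the former monolithic tcg_cpus into three such structures, patch 2 gives the mttcg and rr variants their own thread-start functions, and patch 3 renames the helpers so each module's functions carry its own prefix.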