From patchwork Fri Apr 15 14:23:46 2016
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 8851541
From: Alex Bennée
To: mttcg@listserver.greensocs.com, fred.konrad@greensocs.com, a.rigo@virtualopensystems.com, serge.fdrv@gmail.com, cota@braap.org
Date: Fri, 15 Apr 2016 15:23:46 +0100
Message-Id: <1460730231-1184-9-git-send-email-alex.bennee@linaro.org>
In-Reply-To: <1460730231-1184-1-git-send-email-alex.bennee@linaro.org>
References: <1460730231-1184-1-git-send-email-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.7.4
Subject: [Qemu-devel] [RFC v1 07/12] cpus: introduce async_safe_run_on_cpu
Cc: peter.maydell@linaro.org, claudio.fontana@huawei.com, Peter Crosthwaite, jan.kiszka@siemens.com, mark.burton@greensocs.com, qemu-devel@nongnu.org, pbonzini@redhat.com, Alex Bennée, Andreas Färber, rth@twiddle.net

From: KONRAD Frederic

We already have async_run_on_cpu, but for some tasks we need to ensure
that all vCPUs have stopped running before updating shared structures.
We call this safe work, as it is then safe to make the modifications.

Work is scheduled with async_safe_run_on_cpu(), which is passed the
CPUState and an anonymous structure with the relevant data for the
task. Once the work is queued every vCPU is kicked to bring it out of
the main execution loop.

The main difference from the other run_on_cpu functions is that it
operates on a single queue. This ensures fairness, as all pending
tasks are drained whichever vCPU nominally does the work. The queue
itself is a GArray, which minimises the need to malloc memory while
adding tasks.

While async_safe_work_pending() is true, cpu_exec returns and the
vCPUs cannot enter the execution loop. Once all scheduled vCPUs have
exited the loop, the last one to exit processes the work queue.
Signed-off-by: KONRAD Frederic
[AJB: Name change, single queue, atomic counter for active vCPUs]
Signed-off-by: Alex Bennée

---
v1 (arm-v1)
  - now async_safe_run_on_cpu
  - single GArray based queue
  - use atomic counter to bring all vCPUs to a halt
  - wording for async safe_work
---
 cpu-exec-common.c |   1 +
 cpu-exec.c        |  11 ++++++
 cpus.c            | 102 ++++++++++++++++++++++++++++++++++++++++++++++++++++--
 include/qom/cpu.h |  19 ++++++++++
 4 files changed, 131 insertions(+), 2 deletions(-)

diff --git a/cpu-exec-common.c b/cpu-exec-common.c
index 3d7eaa3..c2f7c29 100644
--- a/cpu-exec-common.c
+++ b/cpu-exec-common.c
@@ -79,3 +79,4 @@ void cpu_loop_exit_restore(CPUState *cpu, uintptr_t pc)
     cpu->current_tb = NULL;
     siglongjmp(cpu->jmp_env, 1);
 }
+
diff --git a/cpu-exec.c b/cpu-exec.c
index 42cec05..2f362f8 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -365,6 +365,17 @@ int cpu_exec(CPUState *cpu)
     uintptr_t next_tb;
     SyncClocks sc;

+    /*
+     * This happens when safe work is pending and this CPU must not
+     * (re)enter the execution loop, in the MTTCG case.
+     */
+#ifdef CONFIG_SOFTMMU
+    if (async_safe_work_pending()) {
+        cpu->exit_request = 1;
+        return 0;
+    }
+#endif
+
     /* replay_interrupt may need current_cpu */
     current_cpu = cpu;

diff --git a/cpus.c b/cpus.c
index 9177161..860e2a9 100644
--- a/cpus.c
+++ b/cpus.c
@@ -928,6 +928,19 @@ static QemuCond qemu_cpu_cond;
 static QemuCond qemu_pause_cond;
 static QemuCond qemu_work_cond;

+/* safe work */
+static int safe_work_pending;
+static int tcg_scheduled_cpus;
+
+typedef struct {
+    CPUState *cpu;          /* CPU affected */
+    run_on_cpu_func func;   /* Helper function */
+    void *data;             /* Helper data */
+} qemu_safe_work_item;
+
+static GArray *safe_work;   /* array of qemu_safe_work_items */
+static QemuMutex safe_work_mutex;
+
 void qemu_init_cpu_loop(void)
 {
     qemu_init_sigbus();
@@ -937,6 +950,9 @@ void qemu_init_cpu_loop(void)
     qemu_mutex_init(&qemu_global_mutex);
     qemu_thread_get_self(&io_thread);
+
+    safe_work = g_array_sized_new(TRUE, TRUE, sizeof(qemu_safe_work_item), 128);
+    qemu_mutex_init(&safe_work_mutex);
 }

 void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
@@ -997,6 +1013,81 @@ void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
     qemu_cpu_kick(cpu);
 }

+/*
+ * Safe work interface
+ *
+ * Safe work is defined as work that requires the system to be
+ * quiescent before making changes.
+ */
+
+void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
+{
+    CPUState *iter;
+    qemu_safe_work_item wi;
+    wi.cpu = cpu;
+    wi.func = func;
+    wi.data = data;
+
+    qemu_mutex_lock(&safe_work_mutex);
+    g_array_append_val(safe_work, wi);
+    atomic_inc(&safe_work_pending);
+    qemu_mutex_unlock(&safe_work_mutex);
+
+    /* Signal all vCPUs to halt */
+    CPU_FOREACH(iter) {
+        qemu_cpu_kick(iter);
+    }
+}
+
+/**
+ * flush_queued_safe_work:
+ * @scheduled_cpu_count: number of vCPUs still scheduled
+ *
+ * If @scheduled_cpu_count is not 0 the vCPU sleeps. The last vCPU to
+ * reach the function drains the queue while the system is in a
+ * quiescent state, which allows the operations to change shared
+ * structures.
+ *
+ * @see async_safe_run_on_cpu
+ */
+static void flush_queued_safe_work(int scheduled_cpu_count)
+{
+    qemu_safe_work_item *wi;
+    int i;
+
+    /* bail out if there is nothing to do */
+    if (!async_safe_work_pending()) {
+        return;
+    }
+
+    if (scheduled_cpu_count) {
+
+        /* Nothing to do but sleep */
+        qemu_cond_wait(&qemu_work_cond, &qemu_global_mutex);
+
+    } else {
+
+        /* We can now do the work */
+        qemu_mutex_lock(&safe_work_mutex);
+        for (i = 0; i < safe_work->len; i++) {
+            wi = &g_array_index(safe_work, qemu_safe_work_item, i);
+            wi->func(wi->cpu, wi->data);
+        }
+        g_array_remove_range(safe_work, 0, safe_work->len);
+        atomic_set(&safe_work_pending, 0);
+        qemu_mutex_unlock(&safe_work_mutex);
+
+        /* Wake everyone up */
+        qemu_cond_broadcast(&qemu_work_cond);
+    }
+}
+
+bool async_safe_work_pending(void)
+{
+    return (atomic_read(&safe_work_pending) != 0);
+}
+
 static void flush_queued_work(CPUState *cpu)
 {
     struct qemu_work_item *wi;
@@ -1259,6 +1350,7 @@ static void *qemu_tcg_single_cpu_thread_fn(void *arg)

         if (cpu) {
             g_assert(cpu->exit_request);
+            flush_queued_safe_work(0);
             /* Pairs with smp_wmb in qemu_cpu_kick. */
             atomic_mb_set(&cpu->exit_request, 0);
             qemu_tcg_wait_io_event(cpu);
@@ -1300,8 +1392,13 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
     while (1) {
         bool sleep = false;

-        if (cpu_can_run(cpu)) {
-            int r = tcg_cpu_exec(cpu);
+        if (cpu_can_run(cpu) && !async_safe_work_pending()) {
+            int r;
+
+            atomic_inc(&tcg_scheduled_cpus);
+            r = tcg_cpu_exec(cpu);
+            flush_queued_safe_work(atomic_dec_fetch(&tcg_scheduled_cpus));
+
             switch (r) {
             case EXCP_DEBUG:
@@ -1319,6 +1416,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
                 /* Ignore everything else? */
                 break;
             }
+
         } else {
             sleep = true;
         }
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 385d5bb..8ab969e 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -642,6 +642,25 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data);
 void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data);

 /**
+ * async_safe_run_on_cpu:
+ * @cpu: The vCPU to run on.
+ * @func: The function to be executed.
+ * @data: Data to pass to the function.
+ *
+ * Schedules the function @func for execution on the vCPU @cpu
+ * asynchronously, once all the vCPUs are outside their execution loop.
+ */
+void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data);
+
+/**
+ * async_safe_work_pending:
+ *
+ * Check whether any safe work is pending on any vCPU.
+ * Returns: true if safe work is pending, false otherwise.
+ */
+bool async_safe_work_pending(void);
+
+/**
  * qemu_get_cpu:
  * @index: The CPUState@cpu_index value of the CPU to obtain.
 *