From patchwork Fri Jul 15 18:57:23 2016
X-Patchwork-Submitter: sergey.fedorov@linaro.org
X-Patchwork-Id: 9232471
From: Sergey Fedorov
To: qemu-devel@nongnu.org
Date: Fri, 15 Jul 2016 21:57:23 +0300
Message-Id: <20160715185726.10181-10-sergey.fedorov@linaro.org>
In-Reply-To: <20160715185726.10181-1-sergey.fedorov@linaro.org>
References: <20160715185726.10181-1-sergey.fedorov@linaro.org>
Subject: [Qemu-devel] [PATCH v4 09/12] linux-user: Support CPU work queue
Cc: MTTCG Devel, Peter Maydell, Riku Voipio, Sergey Fedorov, patches@linaro.org,
    Peter Crosthwaite, Alvise Rigo, "Emilio G. Cota", Paolo Bonzini,
    Richard Henderson, Alex Bennée, KONRAD Frédéric

From: Sergey Fedorov

Make the core CPU work functions common between system and user-mode
emulation. User-mode does not have the BQL, so process_queued_cpu_work()
is protected by 'exclusive_lock' instead.
Signed-off-by: Sergey Fedorov
Reviewed-by: Alex Bennée
---

Changes in v2:
 - 'qemu_work_cond' definition moved to cpu-exec-common.c
 - documentation comment for new public API added
---
 cpu-exec-common.c       | 85 ++++++++++++++++++++++++++++++++++++++++++++++++
 cpus.c                  | 86 +------------------------------------------------
 include/exec/exec-all.h | 17 ++++++++++
 linux-user/main.c       |  8 +++++
 4 files changed, 111 insertions(+), 85 deletions(-)

diff --git a/cpu-exec-common.c b/cpu-exec-common.c
index 0cb4ae60eff9..a233f0124559 100644
--- a/cpu-exec-common.c
+++ b/cpu-exec-common.c
@@ -77,3 +77,88 @@ void cpu_loop_exit_restore(CPUState *cpu, uintptr_t pc)
     }
     siglongjmp(cpu->jmp_env, 1);
 }
+
+QemuCond qemu_work_cond;
+
+static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
+{
+    qemu_mutex_lock(&cpu->work_mutex);
+    if (cpu->queued_work_first == NULL) {
+        cpu->queued_work_first = wi;
+    } else {
+        cpu->queued_work_last->next = wi;
+    }
+    cpu->queued_work_last = wi;
+    wi->next = NULL;
+    wi->done = false;
+    qemu_mutex_unlock(&cpu->work_mutex);
+
+    qemu_cpu_kick(cpu);
+}
+
+void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
+{
+    struct qemu_work_item wi;
+
+    if (qemu_cpu_is_self(cpu)) {
+        func(cpu, data);
+        return;
+    }
+
+    wi.func = func;
+    wi.data = data;
+    wi.free = false;
+
+    queue_work_on_cpu(cpu, &wi);
+    while (!atomic_mb_read(&wi.done)) {
+        CPUState *self_cpu = current_cpu;
+
+        qemu_cond_wait(&qemu_work_cond, qemu_get_cpu_work_mutex());
+        current_cpu = self_cpu;
+    }
+}
+
+void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
+{
+    struct qemu_work_item *wi;
+
+    if (qemu_cpu_is_self(cpu)) {
+        func(cpu, data);
+        return;
+    }
+
+    wi = g_malloc0(sizeof(struct qemu_work_item));
+    wi->func = func;
+    wi->data = data;
+    wi->free = true;
+
+    queue_work_on_cpu(cpu, wi);
+}
+
+void process_queued_cpu_work(CPUState *cpu)
+{
+    struct qemu_work_item *wi;
+
+    if (cpu->queued_work_first == NULL) {
+        return;
+    }
+
+    qemu_mutex_lock(&cpu->work_mutex);
+    while (cpu->queued_work_first != NULL) {
+        wi = cpu->queued_work_first;
+        cpu->queued_work_first = wi->next;
+        if (!cpu->queued_work_first) {
+            cpu->queued_work_last = NULL;
+        }
+        qemu_mutex_unlock(&cpu->work_mutex);
+        wi->func(cpu, wi->data);
+        qemu_mutex_lock(&cpu->work_mutex);
+        if (wi->free) {
+            g_free(wi);
+        } else {
+            atomic_mb_set(&wi->done, true);
+        }
+    }
+    qemu_mutex_unlock(&cpu->work_mutex);
+    qemu_cond_broadcast(&qemu_work_cond);
+}
diff --git a/cpus.c b/cpus.c
index 51fd8c18b4c8..282d7e399902 100644
--- a/cpus.c
+++ b/cpus.c
@@ -896,7 +896,6 @@ static QemuThread io_thread;
 static QemuCond qemu_cpu_cond;
 /* system init */
 static QemuCond qemu_pause_cond;
-static QemuCond qemu_work_cond;
 
 void qemu_init_cpu_loop(void)
 {
@@ -910,66 +909,11 @@ void qemu_init_cpu_loop(void)
     qemu_thread_get_self(&io_thread);
 }
 
-static QemuMutex *qemu_get_cpu_work_mutex(void)
+QemuMutex *qemu_get_cpu_work_mutex(void)
 {
     return &qemu_global_mutex;
 }
 
-static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
-{
-    qemu_mutex_lock(&cpu->work_mutex);
-    if (cpu->queued_work_first == NULL) {
-        cpu->queued_work_first = wi;
-    } else {
-        cpu->queued_work_last->next = wi;
-    }
-    cpu->queued_work_last = wi;
-    wi->next = NULL;
-    wi->done = false;
-    qemu_mutex_unlock(&cpu->work_mutex);
-
-    qemu_cpu_kick(cpu);
-}
-
-void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
-{
-    struct qemu_work_item wi;
-
-    if (qemu_cpu_is_self(cpu)) {
-        func(cpu, data);
-        return;
-    }
-
-    wi.func = func;
-    wi.data = data;
-    wi.free = false;
-
-    queue_work_on_cpu(cpu, &wi);
-    while (!atomic_mb_read(&wi.done)) {
-        CPUState *self_cpu = current_cpu;
-
-        qemu_cond_wait(&qemu_work_cond, qemu_get_cpu_work_mutex());
-        current_cpu = self_cpu;
-    }
-}
-
-void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
-{
-    struct qemu_work_item *wi;
-
-    if (qemu_cpu_is_self(cpu)) {
-        func(cpu, data);
-        return;
-    }
-
-    wi = g_malloc0(sizeof(struct qemu_work_item));
-    wi->func = func;
-    wi->data = data;
-    wi->free = true;
-
-    queue_work_on_cpu(cpu, wi);
-}
-
 static void qemu_kvm_destroy_vcpu(CPUState *cpu)
 {
     if (kvm_destroy_vcpu(cpu) < 0) {
@@ -982,34 +926,6 @@ static void qemu_tcg_destroy_vcpu(CPUState *cpu)
 {
 }
 
-static void process_queued_cpu_work(CPUState *cpu)
-{
-    struct qemu_work_item *wi;
-
-    if (cpu->queued_work_first == NULL) {
-        return;
-    }
-
-    qemu_mutex_lock(&cpu->work_mutex);
-    while (cpu->queued_work_first != NULL) {
-        wi = cpu->queued_work_first;
-        cpu->queued_work_first = wi->next;
-        if (!cpu->queued_work_first) {
-            cpu->queued_work_last = NULL;
-        }
-        qemu_mutex_unlock(&cpu->work_mutex);
-        wi->func(cpu, wi->data);
-        qemu_mutex_lock(&cpu->work_mutex);
-        if (wi->free) {
-            g_free(wi);
-        } else {
-            atomic_mb_set(&wi->done, true);
-        }
-    }
-    qemu_mutex_unlock(&cpu->work_mutex);
-    qemu_cond_broadcast(&qemu_work_cond);
-}
-
 static void qemu_wait_io_event_common(CPUState *cpu)
 {
     if (cpu->stop) {
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index acda7b613d53..8d5c7dbcf5a9 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -407,4 +407,21 @@ extern int singlestep;
 extern CPUState *tcg_current_cpu;
 extern bool exit_request;
 
+/**
+ * qemu_work_cond - condition to wait for CPU work items completion
+ */
+extern QemuCond qemu_work_cond;
+
+/**
+ * qemu_get_cpu_work_mutex() - get the mutex which protects CPU work execution
+ *
+ * Return: A pointer to the mutex.
+ */
+QemuMutex *qemu_get_cpu_work_mutex(void);
+/**
+ * process_queued_cpu_work() - process all items on CPU work queue
+ * @cpu: The CPU which work queue to process.
+ */
+void process_queued_cpu_work(CPUState *cpu);
+
 #endif
diff --git a/linux-user/main.c b/linux-user/main.c
index a8790ac63f68..fce61d5a35fc 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -121,6 +121,7 @@ void qemu_init_cpu_loop(void)
     qemu_mutex_init(&exclusive_lock);
     qemu_cond_init(&exclusive_cond);
     qemu_cond_init(&exclusive_resume);
+    qemu_cond_init(&qemu_work_cond);
 }
 
 /* Make sure everything is in a consistent state for calling fork(). */
@@ -149,6 +150,7 @@ void fork_end(int child)
         qemu_mutex_init(&cpu_list_mutex);
         qemu_cond_init(&exclusive_cond);
         qemu_cond_init(&exclusive_resume);
+        qemu_cond_init(&qemu_work_cond);
         qemu_mutex_init(&tcg_ctx.tb_ctx.tb_lock);
         gdbserver_fork(thread_cpu);
     } else {
@@ -157,6 +159,11 @@ void fork_end(int child)
     }
 }
 
+QemuMutex *qemu_get_cpu_work_mutex(void)
+{
+    return &exclusive_lock;
+}
+
 /* Wait for pending exclusive operations to complete.  The exclusive lock
    must be held.  */
 static inline void exclusive_idle(void)
@@ -215,6 +222,7 @@ static inline void cpu_exec_end(CPUState *cpu)
         qemu_cond_signal(&exclusive_cond);
     }
     exclusive_idle();
+    process_queued_cpu_work(cpu);
     qemu_mutex_unlock(&exclusive_lock);
 }