From patchwork Thu May 26 16:35:42 2016
X-Patchwork-Submitter: Alvise Rigo
X-Patchwork-Id: 9137257
From: Alvise Rigo
To: mttcg@listserver.greensocs.com, alex.bennee@linaro.org
Cc: peter.maydell@linaro.org, rth@twiddle.net, claudio.fontana@huawei.com, qemu-devel@nongnu.org, Alvise Rigo, cota@braap.org, serge.fdrv@gmail.com, pbonzini@redhat.com, jani.kokkonen@huawei.com, tech@virtualopensystems.com, fred.konrad@greensocs.com
Date: Thu, 26 May 2016 18:35:42 +0200
Message-Id: <20160526163549.3276-4-a.rigo@virtualopensystems.com>
In-Reply-To: <20160526163549.3276-1-a.rigo@virtualopensystems.com>
References: <20160526163549.3276-1-a.rigo@virtualopensystems.com>
Subject: [Qemu-devel] [RFC 03/10] cpus: Introduce async_wait_run_on_cpu()

Introduce a new function that allows the calling VCPU to add a work item
to another VCPU (the target VCPU). This new function differs from
async_run_on_cpu() in that it makes the calling VCPU wait for the target
VCPU to finish the work item. The mechanism relies on halt_cond to wait
and, in the meantime, to process any pending work items.
Signed-off-by: Alvise Rigo
---
 cpus.c            | 44 ++++++++++++++++++++++++++++++++++++++++++--
 include/qom/cpu.h | 31 +++++++++++++++++++++++++++++++
 2 files changed, 73 insertions(+), 2 deletions(-)

diff --git a/cpus.c b/cpus.c
index b9ec903..7bc96e2 100644
--- a/cpus.c
+++ b/cpus.c
@@ -89,7 +89,7 @@ static bool cpu_thread_is_idle(CPUState *cpu)
     if (cpu->stop || cpu->queued_work_first) {
         return false;
     }
-    if (cpu_is_stopped(cpu)) {
+    if (cpu_is_stopped(cpu) || async_waiting_for_work(cpu)) {
         return true;
     }
     if (!cpu->halted || cpu_has_work(cpu) ||
@@ -1012,6 +1012,7 @@ void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
     wi->func = func;
     wi->data = data;
     wi->free = true;
+    wi->wcpu = NULL;
 
     qemu_mutex_lock(&cpu->work_mutex);
     if (cpu->queued_work_first == NULL) {
@@ -1027,6 +1028,40 @@ void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
     qemu_cpu_kick(cpu);
 }
 
+void async_wait_run_on_cpu(CPUState *cpu, CPUState *wcpu, run_on_cpu_func func,
+                           void *data)
+{
+    struct qemu_work_item *wwi;
+
+    assert(wcpu != cpu);
+
+    wwi = g_malloc0(sizeof(struct qemu_work_item));
+    wwi->func = func;
+    wwi->data = data;
+    wwi->free = true;
+    wwi->wcpu = wcpu;
+
+    /* Increase the number of pending work items */
+    atomic_inc(&wcpu->pending_work_items);
+
+    qemu_mutex_lock(&cpu->work_mutex);
+    /* Add waiting work items at the beginning of the queue to free the
+     * waiting CPU as soon as possible. */
+    if (cpu->queued_work_first == NULL) {
+        cpu->queued_work_last = wwi;
+    } else {
+        wwi->next = cpu->queued_work_first;
+    }
+    cpu->queued_work_first = wwi;
+    wwi->done = false;
+    qemu_mutex_unlock(&cpu->work_mutex);
+
+    qemu_cpu_kick(cpu);
+
+    /* In order to wait, @wcpu has to exit the CPU loop */
+    cpu_exit(wcpu);
+}
+
 /*
  * Safe work interface
  *
@@ -1120,6 +1155,10 @@ static void flush_queued_work(CPUState *cpu)
         qemu_mutex_unlock(&cpu->work_mutex);
         wi->func(cpu, wi->data);
         qemu_mutex_lock(&cpu->work_mutex);
+        if (wi->wcpu != NULL) {
+            atomic_dec(&wi->wcpu->pending_work_items);
+            qemu_cond_broadcast(wi->wcpu->halt_cond);
+        }
         if (wi->free) {
             g_free(wi);
         } else {
@@ -1406,7 +1445,8 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
     while (1) {
         bool sleep = false;
 
-        if (cpu_can_run(cpu) && !async_safe_work_pending()) {
+        if (cpu_can_run(cpu) && !async_safe_work_pending()
+            && !async_waiting_for_work(cpu)) {
             int r;
 
             atomic_inc(&tcg_scheduled_cpus);
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 019f06d..7be82ed 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -259,6 +259,8 @@ struct qemu_work_item {
     void *data;
     int done;
     bool free;
+    /* CPU waiting for this work item to finish. If NULL, no CPU is waiting. */
+    CPUState *wcpu;
 };
 
 /**
@@ -303,6 +305,7 @@ struct qemu_work_item {
  * @kvm_fd: vCPU file descriptor for KVM.
  * @work_mutex: Lock to prevent multiple access to queued_work_*.
  * @queued_work_first: First asynchronous work pending.
+ * @pending_work_items: Work items whose completion the CPU is waiting for.
  *
  * State of one CPU core or thread.
  */
@@ -337,6 +340,7 @@ struct CPUState {
 
     QemuMutex work_mutex;
     struct qemu_work_item *queued_work_first, *queued_work_last;
+    int pending_work_items;
 
     CPUAddressSpace *cpu_ases;
     int num_ases;
@@ -398,6 +402,9 @@ struct CPUState {
      * by a stcond (see softmmu_template.h). */
     bool excl_succeeded;
 
+    /* True if some CPU requested a TLB flush for this CPU. */
+    bool pending_tlb_flush;
+
     /* Note that this is accessed at the start of every TB via a negative
        offset from AREG0.  Leave this field at the end so as to make the
        (absolute value) offset as small as possible.  This reduces code
@@ -680,6 +687,19 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data);
 void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data);
 
 /**
+ * async_wait_run_on_cpu:
+ * @cpu: The vCPU to run on.
+ * @wcpu: The vCPU submitting the work.
+ * @func: The function to be executed.
+ * @data: Data to pass to the function.
+ *
+ * Schedules the function @func for execution on the vCPU @cpu asynchronously.
+ * The vCPU @wcpu will wait for @cpu to finish the job.
+ */
+void async_wait_run_on_cpu(CPUState *cpu, CPUState *wcpu, run_on_cpu_func func,
+                           void *data);
+
+/**
  * async_safe_run_on_cpu:
  * @cpu: The vCPU to run on.
  * @func: The function to be executed.
@@ -699,6 +719,17 @@ void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data);
 bool async_safe_work_pending(void);
 
 /**
+ * async_waiting_for_work:
+ *
+ * Check whether there are work items whose completion @cpu is waiting for.
+ * Returns: @true if work items are pending completion, @false otherwise.
+ */
+static inline bool async_waiting_for_work(CPUState *cpu)
+{
+    return atomic_mb_read(&cpu->pending_work_items) != 0;
+}
+
+/**
  * qemu_get_cpu:
  * @index: The CPUState@cpu_index value of the CPU to obtain.
  *