From patchwork Fri Jul  6 12:13:46 2018
X-Patchwork-Submitter: Marc-André Lureau
X-Patchwork-Id: 10511471
From: Marc-André Lureau <marcandre.lureau@redhat.com>
To: qemu-devel@nongnu.org
Date: Fri, 6 Jul 2018 14:13:46 +0200
Message-Id: <20180706121354.2021-5-marcandre.lureau@redhat.com>
In-Reply-To: <20180706121354.2021-1-marcandre.lureau@redhat.com>
References: <20180706121354.2021-1-marcandre.lureau@redhat.com>
Subject: [Qemu-devel] [PATCH 04/12] Revert "qmp: isolate responses into io thread"
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>, armbru@redhat.com, peterx@redhat.com

This reverts commit abe3cd0ff7f774966da6842620806ab7576fe4f3.

There is no need for an additional queue to send the reply to the
IOThread, because the QMP response path is thread-safe and so is the
chardev write path, which schedules its watcher in the associated
IOThread.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
---
 monitor.c | 120 ++----------------------------------------------------
 1 file changed, 3 insertions(+), 117 deletions(-)

diff --git a/monitor.c b/monitor.c
index fc481d902d..462cf96f7b 100644
--- a/monitor.c
+++ b/monitor.c
@@ -183,8 +183,6 @@ typedef struct {
     QemuMutex qmp_queue_lock;
     /* Input queue that holds all the parsed QMP requests */
     GQueue *qmp_requests;
-    /* Output queue contains all the QMP responses in order */
-    GQueue *qmp_responses;
 } MonitorQMP;

 /*
@@ -248,9 +246,6 @@ IOThread *mon_iothread;
 /* Bottom half to dispatch the requests received from I/O thread */
 QEMUBH *qmp_dispatcher_bh;

-/* Bottom half to deliver the responses back to clients */
-QEMUBH *qmp_respond_bh;
-
 struct QMPRequest {
     /* Owner of the request */
     Monitor *mon;
@@ -376,19 +371,10 @@ static void monitor_qmp_cleanup_req_queue_locked(Monitor *mon)
     }
 }

-/* Caller must hold the mon->qmp.qmp_queue_lock */
-static void monitor_qmp_cleanup_resp_queue_locked(Monitor *mon)
-{
-    while (!g_queue_is_empty(mon->qmp.qmp_responses)) {
-        qobject_unref((QDict *)g_queue_pop_head(mon->qmp.qmp_responses));
-    }
-}
-
 static void monitor_qmp_cleanup_queues(Monitor *mon)
 {
     qemu_mutex_lock(&mon->qmp.qmp_queue_lock);
     monitor_qmp_cleanup_req_queue_locked(mon);
-    monitor_qmp_cleanup_resp_queue_locked(mon);
     qemu_mutex_unlock(&mon->qmp.qmp_queue_lock);
 }

@@ -519,85 +505,6 @@ static void qmp_send_response(Monitor *mon, const QDict *rsp)
     qobject_unref(json);
 }

-static void qmp_queue_response(Monitor *mon, QDict *rsp)
-{
-    if (mon->use_io_thread) {
-        /*
-         * Push a reference to the response queue.  The I/O thread
-         * drains that queue and emits.
-         */
-        qemu_mutex_lock(&mon->qmp.qmp_queue_lock);
-        g_queue_push_tail(mon->qmp.qmp_responses, qobject_ref(rsp));
-        qemu_mutex_unlock(&mon->qmp.qmp_queue_lock);
-        qemu_bh_schedule(qmp_respond_bh);
-    } else {
-        /*
-         * Not using monitor I/O thread, i.e. we are in the main thread.
-         * Emit right away.
-         */
-        qmp_send_response(mon, rsp);
-    }
-}
-
-struct QMPResponse {
-    Monitor *mon;
-    QDict *data;
-};
-typedef struct QMPResponse QMPResponse;
-
-static QDict *monitor_qmp_response_pop_one(Monitor *mon)
-{
-    QDict *data;
-
-    qemu_mutex_lock(&mon->qmp.qmp_queue_lock);
-    data = g_queue_pop_head(mon->qmp.qmp_responses);
-    qemu_mutex_unlock(&mon->qmp.qmp_queue_lock);
-
-    return data;
-}
-
-static void monitor_qmp_response_flush(Monitor *mon)
-{
-    QDict *data;
-
-    while ((data = monitor_qmp_response_pop_one(mon))) {
-        qmp_send_response(mon, data);
-        qobject_unref(data);
-    }
-}
-
-/*
- * Pop a QMPResponse from any monitor's response queue into @response.
- * Return false if all the queues are empty; else true.
- */
-static bool monitor_qmp_response_pop_any(QMPResponse *response)
-{
-    Monitor *mon;
-    QDict *data = NULL;
-
-    qemu_mutex_lock(&monitor_lock);
-    QTAILQ_FOREACH(mon, &mon_list, entry) {
-        data = monitor_qmp_response_pop_one(mon);
-        if (data) {
-            response->mon = mon;
-            response->data = data;
-            break;
-        }
-    }
-    qemu_mutex_unlock(&monitor_lock);
-    return data != NULL;
-}
-
-static void monitor_qmp_bh_responder(void *opaque)
-{
-    QMPResponse response;
-
-    while (monitor_qmp_response_pop_any(&response)) {
-        qmp_send_response(response.mon, response.data);
-        qobject_unref(response.data);
-    }
-}
-
 static MonitorQAPIEventConf monitor_qapi_event_conf[QAPI_EVENT__MAX] = {
     /* Limit guest-triggerable events to 1 per second */
     [QAPI_EVENT_RTC_CHANGE]        = { 1000 * SCALE_MS },
@@ -621,7 +528,7 @@ static void monitor_qapi_event_emit(QAPIEvent event, QDict *qdict)
     QTAILQ_FOREACH(mon, &mon_list, entry) {
         if (monitor_is_qmp(mon)
             && mon->qmp.commands != &qmp_cap_negotiation_commands) {
-            qmp_queue_response(mon, qdict);
+            qmp_send_response(mon, qdict);
         }
     }
 }
@@ -777,7 +684,6 @@ static void monitor_data_init(Monitor *mon, bool skip_flush,
     mon->skip_flush = skip_flush;
     mon->use_io_thread = use_io_thread;
     mon->qmp.qmp_requests = g_queue_new();
-    mon->qmp.qmp_responses = g_queue_new();
 }

 static void monitor_data_destroy(Monitor *mon)
@@ -792,9 +698,7 @@ static void monitor_data_destroy(Monitor *mon)
     qemu_mutex_destroy(&mon->mon_lock);
     qemu_mutex_destroy(&mon->qmp.qmp_queue_lock);
     monitor_qmp_cleanup_req_queue_locked(mon);
-    monitor_qmp_cleanup_resp_queue_locked(mon);
     g_queue_free(mon->qmp.qmp_requests);
-    g_queue_free(mon->qmp.qmp_responses);
 }

 char *qmp_human_monitor_command(const char *command_line, bool has_cpu_index,
@@ -4100,7 +4004,7 @@ static void monitor_qmp_respond(Monitor *mon, QDict *rsp, QObject *id)
             qdict_put_obj(rsp, "id", qobject_ref(id));
         }

-        qmp_queue_response(mon, rsp);
+        qmp_send_response(mon, rsp);
     }
 }

@@ -4395,7 +4299,7 @@ static void monitor_qmp_event(void *opaque, int event)
         mon->qmp.commands = &qmp_cap_negotiation_commands;
         monitor_qmp_caps_reset(mon);
         data = qmp_greeting(mon);
-        qmp_queue_response(mon, data);
+        qmp_send_response(mon, data);
         qobject_unref(data);
         mon_refcount++;
         break;
@@ -4406,7 +4310,6 @@ static void monitor_qmp_event(void *opaque, int event)
          * stdio, it's possible that stdout is still open when stdin
          * is closed.
          */
-        monitor_qmp_response_flush(mon);
         monitor_qmp_cleanup_queues(mon);
         json_message_parser_destroy(&mon->qmp.parser);
         json_message_parser_init(&mon->qmp.parser, handle_qmp_command);
@@ -4508,15 +4411,6 @@ static void monitor_iothread_init(void)
     qmp_dispatcher_bh = aio_bh_new(iohandler_get_aio_context(),
                                    monitor_qmp_bh_dispatcher,
                                    NULL);
-
-    /*
-     * The responder BH must be run in the monitor I/O thread, so that
-     * monitors that are using the I/O thread have their output
-     * written by the I/O thread.
-     */
-    qmp_respond_bh = aio_bh_new(monitor_get_aio_context(),
-                                monitor_qmp_bh_responder,
-                                NULL);
 }

 void monitor_init_globals(void)
@@ -4668,12 +4562,6 @@ void monitor_cleanup(void)
      */
     iothread_stop(mon_iothread);

-    /*
-     * Flush all response queues.  Note that even after this flush,
-     * data may remain in output buffers.
-     */
-    monitor_qmp_bh_responder(NULL);
-
     /* Flush output buffers and destroy monitors */
     qemu_mutex_lock(&monitor_lock);
     QTAILQ_FOREACH_SAFE(mon, &mon_list, entry, next) {
@@ -4687,8 +4575,6 @@ void monitor_cleanup(void)
     /* QEMUBHs needs to be deleted before destroying the I/O thread */
     qemu_bh_delete(qmp_dispatcher_bh);
     qmp_dispatcher_bh = NULL;
-    qemu_bh_delete(qmp_respond_bh);
-    qmp_respond_bh = NULL;

     iothread_destroy(mon_iothread);
     mon_iothread = NULL;
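
For context, after this revert every response and event is emitted through the
qmp_send_response() helper that the diff keeps. A minimal sketch of that direct
path is shown below; it assumes helpers along the lines of qobject_to_json(),
qstring_append_chr(), qstring_get_str() and monitor_puts() as used elsewhere in
monitor.c (those names are not part of the hunks above), so treat it as an
illustration rather than the exact function body:

    static void qmp_send_response(Monitor *mon, const QDict *rsp)
    {
        /* Serialize the response dictionary to a JSON string. */
        QString *json = qobject_to_json(QOBJECT(rsp));

        /* One response per line on the wire. */
        qstring_append_chr(json, '\n');

        /*
         * monitor_puts() ends up in the chardev write path, which is
         * thread-safe and schedules its output watcher in the chardev's
         * associated IOThread -- hence no response queue and no responder
         * bottom half are needed.
         */
        monitor_puts(mon, qstring_get_str(json));

        qobject_unref(json);
    }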