From patchwork Fri Jun 7 13:54:23 2019
X-Patchwork-Submitter: Kevin Wolf
X-Patchwork-Id: 10981883
From: Kevin Wolf
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, armbru@redhat.com, qemu-block@nongnu.org, dgilbert@redhat.com
Date: Fri, 7 Jun 2019 15:54:23 +0200
Message-Id: <20190607135430.22149-4-kwolf@redhat.com>
In-Reply-To: <20190607135430.22149-1-kwolf@redhat.com>
References: <20190607135430.22149-1-kwolf@redhat.com>
Subject: [Qemu-devel] [RFC PATCH 03/10] monitor: Make MonitorQMP a child class of Monitor

Currently, struct Monitor mixes state that is only relevant for HMP,
state that is only relevant for QMP, and some actually shared state.
In particular, a MonitorQMP field is present in the state of any
monitor, even if it's not a QMP monitor and therefore doesn't use the
state.

As a first step towards a clean separation between QMP and HMP, let
MonitorQMP extend Monitor and create a MonitorQMP object only when the
monitor is actually a QMP monitor.

Signed-off-by: Kevin Wolf
Reviewed-by: Dr.
David Alan Gilbert
---
 monitor.c | 214 ++++++++++++++++++++++++++++++------------------------
 1 file changed, 118 insertions(+), 96 deletions(-)

diff --git a/monitor.c b/monitor.c
index bb23cc0450..d18cf18393 100644
--- a/monitor.c
+++ b/monitor.c
@@ -166,26 +166,6 @@ struct MonFdset {
     QLIST_ENTRY(MonFdset) next;
 };
 
-typedef struct {
-    JSONMessageParser parser;
-    /*
-     * When a client connects, we're in capabilities negotiation mode.
-     * @commands is &qmp_cap_negotiation_commands then. When command
-     * qmp_capabilities succeeds, we go into command mode, and
-     * @command becomes &qmp_commands.
-     */
-    QmpCommandList *commands;
-    bool capab_offered[QMP_CAPABILITY__MAX]; /* capabilities offered */
-    bool capab[QMP_CAPABILITY__MAX]; /* offered and accepted */
-    /*
-     * Protects qmp request/response queue.
-     * Take monitor_lock first when you need both.
-     */
-    QemuMutex qmp_queue_lock;
-    /* Input queue that holds all the parsed QMP requests */
-    GQueue *qmp_requests;
-} MonitorQMP;
-
 /*
  * To prevent flooding clients, events can be throttled. The
  * throttling is calculated globally, rather than per-Monitor
@@ -218,7 +198,6 @@ struct Monitor {
      */
     ReadLineState *rs;
 
-    MonitorQMP qmp;
     gchar *mon_cpu_path;
     mon_cmd_t *cmd_table;
     QTAILQ_ENTRY(Monitor) entry;
@@ -239,6 +218,27 @@ struct Monitor {
     int mux_out;
 };
 
+typedef struct {
+    Monitor common;
+    JSONMessageParser parser;
+    /*
+     * When a client connects, we're in capabilities negotiation mode.
+     * @commands is &qmp_cap_negotiation_commands then. When command
+     * qmp_capabilities succeeds, we go into command mode, and
+     * @command becomes &qmp_commands.
+     */
+    QmpCommandList *commands;
+    bool capab_offered[QMP_CAPABILITY__MAX]; /* capabilities offered */
+    bool capab[QMP_CAPABILITY__MAX]; /* offered and accepted */
+    /*
+     * Protects qmp request/response queue.
+     * Take monitor_lock first when you need both.
+     */
+    QemuMutex qmp_queue_lock;
+    /* Input queue that holds all the parsed QMP requests */
+    GQueue *qmp_requests;
+} MonitorQMP;
+
 /* Shared monitor I/O thread */
 IOThread *mon_iothread;
 
@@ -247,7 +247,7 @@ QEMUBH *qmp_dispatcher_bh;
 struct QMPRequest {
     /* Owner of the request */
-    Monitor *mon;
+    MonitorQMP *mon;
     /*
      * Request object to be handled or Error to be reported
      * (exactly one of them is non-null)
@@ -355,18 +355,18 @@ static void qmp_request_free(QMPRequest *req)
 }
 
 /* Caller must hold mon->qmp.qmp_queue_lock */
-static void monitor_qmp_cleanup_req_queue_locked(Monitor *mon)
+static void monitor_qmp_cleanup_req_queue_locked(MonitorQMP *mon)
 {
-    while (!g_queue_is_empty(mon->qmp.qmp_requests)) {
-        qmp_request_free(g_queue_pop_head(mon->qmp.qmp_requests));
+    while (!g_queue_is_empty(mon->qmp_requests)) {
+        qmp_request_free(g_queue_pop_head(mon->qmp_requests));
     }
 }
 
-static void monitor_qmp_cleanup_queues(Monitor *mon)
+static void monitor_qmp_cleanup_queues(MonitorQMP *mon)
 {
-    qemu_mutex_lock(&mon->qmp.qmp_queue_lock);
+    qemu_mutex_lock(&mon->qmp_queue_lock);
     monitor_qmp_cleanup_req_queue_locked(mon);
-    qemu_mutex_unlock(&mon->qmp.qmp_queue_lock);
+    qemu_mutex_unlock(&mon->qmp_queue_lock);
 }
 
@@ -478,17 +478,17 @@ int monitor_printf(Monitor *mon, const char *fmt, ...)
     return ret;
 }
 
-static void qmp_send_response(Monitor *mon, const QDict *rsp)
+static void qmp_send_response(MonitorQMP *mon, const QDict *rsp)
 {
     const QObject *data = QOBJECT(rsp);
     QString *json;
 
-    json = mon->flags & MONITOR_USE_PRETTY ? qobject_to_json_pretty(data) :
-           qobject_to_json(data);
+    json = mon->common.flags & MONITOR_USE_PRETTY ?
+           qobject_to_json_pretty(data) : qobject_to_json(data);
     assert(json != NULL);
 
     qstring_append_chr(json, '\n');
-    monitor_puts(mon, qstring_get_str(json));
+    monitor_puts(&mon->common, qstring_get_str(json));
     qobject_unref(json);
 }
 
@@ -511,12 +511,17 @@ static MonitorQAPIEventConf monitor_qapi_event_conf[QAPI_EVENT__MAX] = {
 static void monitor_qapi_event_emit(QAPIEvent event, QDict *qdict)
 {
     Monitor *mon;
+    MonitorQMP *qmp_mon;
 
     trace_monitor_protocol_event_emit(event, qdict);
     QTAILQ_FOREACH(mon, &mon_list, entry) {
-        if (monitor_is_qmp(mon)
-            && mon->qmp.commands != &qmp_cap_negotiation_commands) {
-            qmp_send_response(mon, qdict);
+        if (!monitor_is_qmp(mon)) {
+            continue;
+        }
+
+        qmp_mon = container_of(mon, MonitorQMP, common);
+        if (qmp_mon->commands != &qmp_cap_negotiation_commands) {
+            qmp_send_response(qmp_mon, qdict);
         }
     }
 }
@@ -710,29 +715,33 @@ static void monitor_data_init(Monitor *mon, int flags, bool skip_flush,
     }
     memset(mon, 0, sizeof(Monitor));
     qemu_mutex_init(&mon->mon_lock);
-    qemu_mutex_init(&mon->qmp.qmp_queue_lock);
     mon->outbuf = qstring_new();
     /* Use *mon_cmds by default.
      */
     mon->cmd_table = mon_cmds;
     mon->skip_flush = skip_flush;
     mon->use_io_thread = use_io_thread;
-    mon->qmp.qmp_requests = g_queue_new();
     mon->flags = flags;
 }
 
+static void monitor_data_destroy_qmp(MonitorQMP *mon)
+{
+    json_message_parser_destroy(&mon->parser);
+    qemu_mutex_destroy(&mon->qmp_queue_lock);
+    monitor_qmp_cleanup_req_queue_locked(mon);
+    g_queue_free(mon->qmp_requests);
+}
+
 static void monitor_data_destroy(Monitor *mon)
 {
     g_free(mon->mon_cpu_path);
     qemu_chr_fe_deinit(&mon->chr, false);
     if (monitor_is_qmp(mon)) {
-        json_message_parser_destroy(&mon->qmp.parser);
+        MonitorQMP *qmp_mon = container_of(mon, MonitorQMP, common);
+        monitor_data_destroy_qmp(qmp_mon);
     }
     readline_free(mon->rs);
     qobject_unref(mon->outbuf);
     qemu_mutex_destroy(&mon->mon_lock);
-    qemu_mutex_destroy(&mon->qmp.qmp_queue_lock);
-    monitor_qmp_cleanup_req_queue_locked(mon);
-    g_queue_free(mon->qmp.qmp_requests);
 }
 
 char *qmp_human_monitor_command(const char *command_line, bool has_cpu_index,
@@ -1085,8 +1094,9 @@ static void query_commands_cb(QmpCommand *cmd, void *opaque)
 CommandInfoList *qmp_query_commands(Error **errp)
 {
     CommandInfoList *list = NULL;
+    MonitorQMP *mon = container_of(cur_mon, MonitorQMP, common);
 
-    qmp_for_each_command(cur_mon->qmp.commands, query_commands_cb, &list);
+    qmp_for_each_command(mon->commands, query_commands_cb, &list);
 
     return list;
 }
@@ -1153,16 +1163,16 @@ static void monitor_init_qmp_commands(void)
                          qmp_marshal_qmp_capabilities, QCO_ALLOW_PRECONFIG);
 }
 
-static bool qmp_oob_enabled(Monitor *mon)
+static bool qmp_oob_enabled(MonitorQMP *mon)
 {
-    return mon->qmp.capab[QMP_CAPABILITY_OOB];
+    return mon->capab[QMP_CAPABILITY_OOB];
 }
 
-static void monitor_qmp_caps_reset(Monitor *mon)
+static void monitor_qmp_caps_reset(MonitorQMP *mon)
 {
-    memset(mon->qmp.capab_offered, 0, sizeof(mon->qmp.capab_offered));
-    memset(mon->qmp.capab, 0, sizeof(mon->qmp.capab));
-    mon->qmp.capab_offered[QMP_CAPABILITY_OOB] = mon->use_io_thread;
+    memset(mon->capab_offered, 0,
+           sizeof(mon->capab_offered));
+    memset(mon->capab, 0, sizeof(mon->capab));
+    mon->capab_offered[QMP_CAPABILITY_OOB] = mon->common.use_io_thread;
 }
 
 /*
@@ -1170,7 +1180,7 @@ static void monitor_qmp_caps_reset(Monitor *mon)
  * On success, set mon->qmp.capab[], and return true.
  * On error, set @errp, and return false.
  */
-static bool qmp_caps_accept(Monitor *mon, QMPCapabilityList *list,
+static bool qmp_caps_accept(MonitorQMP *mon, QMPCapabilityList *list,
                             Error **errp)
 {
     GString *unavailable = NULL;
@@ -1179,7 +1189,7 @@ static bool qmp_caps_accept(Monitor *mon, QMPCapabilityList *list,
     memset(capab, 0, sizeof(capab));
 
     for (; list; list = list->next) {
-        if (!mon->qmp.capab_offered[list->value]) {
+        if (!mon->capab_offered[list->value]) {
             if (!unavailable) {
                 unavailable = g_string_new(QMPCapability_str(list->value));
             } else {
@@ -1196,25 +1206,27 @@ static bool qmp_caps_accept(Monitor *mon, QMPCapabilityList *list,
         return false;
     }
 
-    memcpy(mon->qmp.capab, capab, sizeof(capab));
+    memcpy(mon->capab, capab, sizeof(capab));
     return true;
 }
 
 void qmp_qmp_capabilities(bool has_enable, QMPCapabilityList *enable,
                           Error **errp)
 {
-    if (cur_mon->qmp.commands == &qmp_commands) {
+    MonitorQMP *mon = container_of(cur_mon, MonitorQMP, common);
+
+    if (mon->commands == &qmp_commands) {
         error_set(errp, ERROR_CLASS_COMMAND_NOT_FOUND,
                   "Capabilities negotiation is already complete, command "
                   "ignored");
         return;
     }
 
-    if (!qmp_caps_accept(cur_mon, enable, errp)) {
+    if (!qmp_caps_accept(mon, enable, errp)) {
        return;
     }
 
-    cur_mon->qmp.commands = &qmp_commands;
+    mon->commands = &qmp_commands;
 }
 
 /* Set the current CPU defined by the user. Callers must hold BQL. */
@@ -4121,27 +4133,27 @@ static int monitor_can_read(void *opaque)
  * Null @rsp can only happen for commands with QCO_NO_SUCCESS_RESP.
  * Nothing is emitted then.
  */
-static void monitor_qmp_respond(Monitor *mon, QDict *rsp)
+static void monitor_qmp_respond(MonitorQMP *mon, QDict *rsp)
 {
     if (rsp) {
         qmp_send_response(mon, rsp);
     }
 }
 
-static void monitor_qmp_dispatch(Monitor *mon, QObject *req)
+static void monitor_qmp_dispatch(MonitorQMP *mon, QObject *req)
 {
     Monitor *old_mon;
     QDict *rsp;
     QDict *error;
 
     old_mon = cur_mon;
-    cur_mon = mon;
+    cur_mon = &mon->common;
 
-    rsp = qmp_dispatch(mon->qmp.commands, req, qmp_oob_enabled(mon));
+    rsp = qmp_dispatch(mon->commands, req, qmp_oob_enabled(mon));
 
     cur_mon = old_mon;
 
-    if (mon->qmp.commands == &qmp_cap_negotiation_commands) {
+    if (mon->commands == &qmp_cap_negotiation_commands) {
         error = qdict_get_qdict(rsp, "error");
         if (error
             && !g_strcmp0(qdict_get_try_str(error, "class"),
@@ -4166,24 +4178,30 @@ static void monitor_qmp_dispatch(Monitor *mon, QObject *req)
  * monitor to the end of mon_list queue.
 *
 * Note: if the function returned with non-NULL, then the caller will
- * be with mon->qmp.qmp_queue_lock held, and the caller is responsible
+ * be with qmp_mon->qmp_queue_lock held, and the caller is responsible
 * to release it.
  */
 static QMPRequest *monitor_qmp_requests_pop_any_with_lock(void)
 {
     QMPRequest *req_obj = NULL;
     Monitor *mon;
+    MonitorQMP *qmp_mon;
 
     qemu_mutex_lock(&monitor_lock);
 
     QTAILQ_FOREACH(mon, &mon_list, entry) {
-        qemu_mutex_lock(&mon->qmp.qmp_queue_lock);
-        req_obj = g_queue_pop_head(mon->qmp.qmp_requests);
+        if (!monitor_is_qmp(mon)) {
+            continue;
+        }
+
+        qmp_mon = container_of(mon, MonitorQMP, common);
+        qemu_mutex_lock(&qmp_mon->qmp_queue_lock);
+        req_obj = g_queue_pop_head(qmp_mon->qmp_requests);
         if (req_obj) {
             /* With the lock of corresponding queue held */
             break;
         }
-        qemu_mutex_unlock(&mon->qmp.qmp_queue_lock);
+        qemu_mutex_unlock(&qmp_mon->qmp_queue_lock);
     }
 
     if (req_obj) {
@@ -4205,7 +4223,7 @@ static void monitor_qmp_bh_dispatcher(void *data)
     QMPRequest *req_obj = monitor_qmp_requests_pop_any_with_lock();
     QDict *rsp;
     bool need_resume;
-    Monitor *mon;
+    MonitorQMP *mon;
 
     if (!req_obj) {
         return;
@@ -4214,8 +4232,8 @@ static void monitor_qmp_bh_dispatcher(void *data)
     mon = req_obj->mon;
     /* qmp_oob_enabled() might change after "qmp_capabilities" */
     need_resume = !qmp_oob_enabled(mon) ||
-        mon->qmp.qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX - 1;
-    qemu_mutex_unlock(&mon->qmp.qmp_queue_lock);
+        mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX - 1;
+    qemu_mutex_unlock(&mon->qmp_queue_lock);
     if (req_obj->req) {
         QDict *qdict = qobject_to(QDict, req_obj->req);
         QObject *id = qdict ?
                       qdict_get(qdict, "id") : NULL;
@@ -4231,7 +4249,7 @@ static void monitor_qmp_bh_dispatcher(void *data)
 
     if (need_resume) {
         /* Pairs with the monitor_suspend() in handle_qmp_command() */
-        monitor_resume(mon);
+        monitor_resume(&mon->common);
     }
     qmp_request_free(req_obj);
 
@@ -4241,7 +4259,7 @@ static void monitor_qmp_bh_dispatcher(void *data)
 
 static void handle_qmp_command(void *opaque, QObject *req, Error *err)
 {
-    Monitor *mon = opaque;
+    MonitorQMP *mon = opaque;
     QObject *id = NULL;
     QDict *qdict;
     QMPRequest *req_obj;
@@ -4273,7 +4291,7 @@ static void handle_qmp_command(void *opaque, QObject *req, Error *err)
     req_obj->err = err;
 
     /* Protect qmp_requests and fetching its length. */
-    qemu_mutex_lock(&mon->qmp.qmp_queue_lock);
+    qemu_mutex_lock(&mon->qmp_queue_lock);
 
     /*
      * Suspend the monitor when we can't queue more requests after
      * this one. When OOB is disabled, we queue at most one
      * command, for backward compatibility.
      */
     if (!qmp_oob_enabled(mon) ||
-        mon->qmp.qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX - 1) {
-        monitor_suspend(mon);
+        mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX - 1) {
+        monitor_suspend(&mon->common);
     }
 
     /*
      * Put the request to the end of queue so that requests will be
      * handled in time order. Ownership for req_obj, req,
      * etc. will be delivered to the handler side.
      */
-    assert(mon->qmp.qmp_requests->length < QMP_REQ_QUEUE_LEN_MAX);
-    g_queue_push_tail(mon->qmp.qmp_requests, req_obj);
-    qemu_mutex_unlock(&mon->qmp.qmp_queue_lock);
+    assert(mon->qmp_requests->length < QMP_REQ_QUEUE_LEN_MAX);
+    g_queue_push_tail(mon->qmp_requests, req_obj);
+    qemu_mutex_unlock(&mon->qmp_queue_lock);
 
     /* Kick the dispatcher routine */
     qemu_bh_schedule(qmp_dispatcher_bh);
@@ -4301,9 +4319,9 @@ static void handle_qmp_command(void *opaque, QObject *req, Error *err)
 
 static void monitor_qmp_read(void *opaque, const uint8_t *buf, int size)
 {
-    Monitor *mon = opaque;
+    MonitorQMP *mon = opaque;
 
-    json_message_parser_feed(&mon->qmp.parser, (const char *) buf, size);
+    json_message_parser_feed(&mon->parser, (const char *) buf, size);
 }
 
 static void monitor_read(void *opaque, const uint8_t *buf, int size)
@@ -4389,7 +4407,7 @@ void monitor_resume(Monitor *mon)
     trace_monitor_suspend(mon, -1);
 }
 
-static QDict *qmp_greeting(Monitor *mon)
+static QDict *qmp_greeting(MonitorQMP *mon)
 {
     QList *cap_list = qlist_new();
     QObject *ver = NULL;
@@ -4398,7 +4416,7 @@ static QDict *qmp_greeting(Monitor *mon)
     qmp_marshal_query_version(NULL, &ver, NULL);
 
     for (cap = 0; cap < QMP_CAPABILITY__MAX; cap++) {
-        if (mon->qmp.capab_offered[cap]) {
+        if (mon->capab_offered[cap]) {
             qlist_append_str(cap_list, QMPCapability_str(cap));
         }
     }
@@ -4411,11 +4429,11 @@ static QDict *qmp_greeting(Monitor *mon)
 static void monitor_qmp_event(void *opaque, int event)
 {
     QDict *data;
-    Monitor *mon = opaque;
+    MonitorQMP *mon = opaque;
 
     switch (event) {
     case CHR_EVENT_OPENED:
-        mon->qmp.commands = &qmp_cap_negotiation_commands;
+        mon->commands = &qmp_cap_negotiation_commands;
         monitor_qmp_caps_reset(mon);
         data = qmp_greeting(mon);
         qmp_send_response(mon, data);
@@ -4430,8 +4448,8 @@ static void monitor_qmp_event(void *opaque, int event)
          * is closed.
          */
         monitor_qmp_cleanup_queues(mon);
-        json_message_parser_destroy(&mon->qmp.parser);
-        json_message_parser_init(&mon->qmp.parser, handle_qmp_command,
+        json_message_parser_destroy(&mon->parser);
+        json_message_parser_init(&mon->parser, handle_qmp_command,
                                  mon, NULL);
         mon_refcount--;
         monitor_fdsets_cleanup();
@@ -4593,30 +4611,34 @@ static void monitor_list_append(Monitor *mon)
 
 static void monitor_qmp_setup_handlers_bh(void *opaque)
 {
-    Monitor *mon = opaque;
+    MonitorQMP *mon = opaque;
     GMainContext *context;
 
-    assert(mon->use_io_thread);
+    assert(mon->common.use_io_thread);
     context = iothread_get_g_main_context(mon_iothread);
     assert(context);
-    qemu_chr_fe_set_handlers(&mon->chr, monitor_can_read, monitor_qmp_read,
-                             monitor_qmp_event, NULL, mon, context, true);
-    monitor_list_append(mon);
+    qemu_chr_fe_set_handlers(&mon->common.chr, monitor_can_read,
+                             monitor_qmp_read, monitor_qmp_event,
+                             NULL, &mon->common, context, true);
+    monitor_list_append(&mon->common);
 }
 
 static void monitor_init_qmp(Chardev *chr, int flags)
 {
-    Monitor *mon = g_malloc(sizeof(*mon));
+    MonitorQMP *mon = g_malloc0(sizeof(*mon));
 
     /* Note: we run QMP monitor in I/O thread when @chr supports that */
-    monitor_data_init(mon, flags, false,
+    monitor_data_init(&mon->common, flags, false,
                       qemu_chr_has_feature(chr, QEMU_CHAR_FEATURE_GCONTEXT));
 
-    qemu_chr_fe_init(&mon->chr, chr, &error_abort);
-    qemu_chr_fe_set_echo(&mon->chr, true);
+    qemu_mutex_init(&mon->qmp_queue_lock);
+    mon->qmp_requests = g_queue_new();
 
-    json_message_parser_init(&mon->qmp.parser, handle_qmp_command, mon, NULL);
-    if (mon->use_io_thread) {
+    qemu_chr_fe_init(&mon->common.chr, chr, &error_abort);
+    qemu_chr_fe_set_echo(&mon->common.chr, true);
+
+    json_message_parser_init(&mon->parser, handle_qmp_command, mon, NULL);
+    if (mon->common.use_io_thread) {
         /*
          * Make sure the old iowatch is gone. It's possible when
         * e.g. the chardev is in client mode, with wait=on.
@@ -4631,10 +4653,10 @@ static void monitor_init_qmp(Chardev *chr, int flags)
                                 monitor_qmp_setup_handlers_bh, mon);
         /* The bottom half will add @mon to @mon_list */
     } else {
-        qemu_chr_fe_set_handlers(&mon->chr, monitor_can_read,
+        qemu_chr_fe_set_handlers(&mon->common.chr, monitor_can_read,
                                  monitor_qmp_read, monitor_qmp_event,
-                                 NULL, mon, NULL, true);
-        monitor_list_append(mon);
+                                 NULL, &mon->common, NULL, true);
+        monitor_list_append(&mon->common);
     }
 }