From patchwork Mon Sep 18 15:31:21 2017
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 9956977
From: Paul Durrant <paul.durrant@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 18 Sep 2017 16:31:21 +0100
Message-ID: <20170918153126.3058-8-paul.durrant@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170918153126.3058-1-paul.durrant@citrix.com>
References: <20170918153126.3058-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v7 07/12] x86/hvm/ioreq: use bool rather than bool_t
Cc: Andrew Cooper, Paul Durrant, Jan Beulich

This patch changes the use of bool_t to bool in the ioreq server code. It
also fixes incorrect indentation in a continuation line.

The patch is purely cosmetic. No semantic or functional change.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Roger Pau Monné
Reviewed-by: Wei Liu
Acked-by: Jan Beulich
---
Cc: Jan Beulich
Cc: Andrew Cooper

(For readers unfamiliar with the conversion idiom, an illustrative sketch
follows the patch.)
---
 xen/arch/x86/hvm/dm.c            |   2 +-
 xen/arch/x86/hvm/hvm.c           |   2 +-
 xen/arch/x86/hvm/io.c            |   4 +-
 xen/arch/x86/hvm/ioreq.c         | 100 +++++++++++++++++++--------------------
 xen/include/asm-x86/hvm/domain.h |   6 +--
 xen/include/asm-x86/hvm/ioreq.h  |  14 +++---
 6 files changed, 64 insertions(+), 64 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index f7cb883fec..87ef4b6ca9 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -409,7 +409,7 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad[0] || data->pad[1] || data->pad[2] )
             break;
 
-        rc = hvm_create_ioreq_server(d, curr_d->domain_id, 0,
+        rc = hvm_create_ioreq_server(d, curr_d->domain_id, false,
                                      data->handle_bufioreq, &data->id);
         break;
     }
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 58b4afa1d1..031d07baf0 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4361,7 +4361,7 @@ static int hvmop_get_param(
     {
         domid_t domid = d->arch.hvm_domain.params[HVM_PARAM_DM_DOMAIN];
 
-        rc = hvm_create_ioreq_server(d, domid, 1,
+        rc = hvm_create_ioreq_server(d, domid, true,
                                      HVM_IOREQSRV_BUFIOREQ_LEGACY, NULL);
         if ( rc != 0 && rc != -EEXIST )
             goto out;
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index bf41954f59..1ddcaba52e 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -59,7 +59,7 @@ void send_timeoffset_req(unsigned long timeoff)
     if ( timeoff == 0 )
         return;
 
-    if ( hvm_broadcast_ioreq(&p, 1) != 0 )
+    if ( hvm_broadcast_ioreq(&p, true) != 0 )
         gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n");
 }
 
@@ -73,7 +73,7 @@ void send_invalidate_req(void)
         .data = ~0UL, /* flush all */
     };
 
-    if ( hvm_broadcast_ioreq(&p, 0) != 0 )
+    if ( hvm_broadcast_ioreq(&p, false) != 0 )
         gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
 }
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 69913cf3cd..f2e0b3f74a 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -43,7 +43,7 @@ static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
     return &p->vcpu_ioreq[v->vcpu_id];
 }
 
-bool_t hvm_io_pending(struct vcpu *v)
+bool hvm_io_pending(struct vcpu *v)
 {
     struct domain *d = v->domain;
     struct hvm_ioreq_server *s;
@@ -59,11 +59,11 @@ bool_t hvm_io_pending(struct vcpu *v)
                               list_entry )
         {
             if ( sv->vcpu == v && sv->pending )
-                return 1;
+                return true;
         }
     }
 
-    return 0;
+    return false;
 }
 
 static void hvm_io_assist(struct hvm_ioreq_vcpu *sv, uint64_t data)
@@ -82,10 +82,10 @@ static void hvm_io_assist(struct hvm_ioreq_vcpu *sv, uint64_t data)
     msix_write_completion(v);
     vcpu_end_shutdown_deferral(v);
 
-    sv->pending = 0;
+    sv->pending = false;
 }
 
-static bool_t hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
+static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
 {
     while ( sv->pending )
     {
@@ -112,16 +112,16 @@ static bool_t hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
             break;
         default:
             gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state);
-            sv->pending = 0;
+            sv->pending = false;
             domain_crash(sv->vcpu->domain);
-            return 0; /* bail */
+            return false; /* bail */
         }
     }
 
-    return 1;
+    return true;
 }
 
-bool_t handle_hvm_io_completion(struct vcpu *v)
+bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
     struct hvm_vcpu_io *vio = &v->arch.hvm_vcpu.hvm_io;
@@ -141,7 +141,7 @@ bool_t handle_hvm_io_completion(struct vcpu *v)
         if ( sv->vcpu == v && sv->pending )
         {
             if ( !hvm_wait_for_io(sv, get_ioreq(s, v)) )
-                return 0;
+                return false;
 
             break;
         }
@@ -178,7 +178,7 @@
         break;
     }
 
-    return 1;
+    return true;
 }
 
 static int hvm_alloc_ioreq_gfn(struct domain *d, unsigned long *gfn)
@@ -208,7 +208,7 @@ static void hvm_free_ioreq_gfn(struct domain *d, unsigned long gfn)
     set_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask);
 }
 
-static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
+static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool buf)
 {
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
@@ -216,7 +216,7 @@ static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
 }
 
 static int hvm_map_ioreq_page(
-    struct hvm_ioreq_server *s, bool_t buf, unsigned long gfn)
+    struct hvm_ioreq_server *s, bool buf, unsigned long gfn)
 {
     struct domain *d = s->domain;
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
@@ -240,10 +240,10 @@ static int hvm_map_ioreq_page(
     return 0;
 }
 
-bool_t is_ioreq_server_page(struct domain *d, const struct page_info *page)
+bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 {
     const struct hvm_ioreq_server *s;
-    bool_t found = 0;
+    bool found = false;
 
     spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
@@ -254,7 +254,7 @@ bool_t is_ioreq_server_page(struct domain *d, const struct page_info *page)
         if ( (s->ioreq.va && s->ioreq.page == page) ||
              (s->bufioreq.va && s->bufioreq.page == page) )
         {
-            found = 1;
+            found = true;
             break;
         }
     }
@@ -302,7 +302,7 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
 }
 
 static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
-                                     bool_t is_default, struct vcpu *v)
+                                     bool is_default, struct vcpu *v)
 {
     struct hvm_ioreq_vcpu *sv;
     int rc;
@@ -417,22 +417,22 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
 {
     int rc;
 
-    rc = hvm_map_ioreq_page(s, 0, ioreq_gfn);
+    rc = hvm_map_ioreq_page(s, false, ioreq_gfn);
     if ( rc )
         return rc;
 
     if ( bufioreq_gfn != gfn_x(INVALID_GFN) )
-        rc = hvm_map_ioreq_page(s, 1, bufioreq_gfn);
+        rc = hvm_map_ioreq_page(s, true, bufioreq_gfn);
 
     if ( rc )
-        hvm_unmap_ioreq_page(s, 0);
+        hvm_unmap_ioreq_page(s, false);
 
     return rc;
 }
 
 static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
-                                        bool_t is_default,
-                                        bool_t handle_bufioreq)
+                                        bool is_default,
+                                        bool handle_bufioreq)
 {
     struct domain *d = s->domain;
     unsigned long ioreq_gfn = gfn_x(INVALID_GFN);
@@ -469,15 +469,15 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
 }
 
 static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s,
-                                         bool_t is_default)
+                                         bool is_default)
 {
     struct domain *d = s->domain;
-    bool_t handle_bufioreq = ( s->bufioreq.va != NULL );
+    bool handle_bufioreq = !!s->bufioreq.va;
 
     if ( handle_bufioreq )
-        hvm_unmap_ioreq_page(s, 1);
+        hvm_unmap_ioreq_page(s, true);
 
-    hvm_unmap_ioreq_page(s, 0);
+    hvm_unmap_ioreq_page(s, false);
 
     if ( !is_default )
     {
@@ -489,7 +489,7 @@ static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s,
 }
 
 static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s,
-                                            bool_t is_default)
+                                            bool is_default)
 {
     unsigned int i;
@@ -501,7 +501,7 @@ static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s,
 }
 
 static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
-                                            bool_t is_default)
+                                            bool is_default)
 {
     unsigned int i;
     int rc;
@@ -537,17 +537,17 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
     return 0;
 
  fail:
-    hvm_ioreq_server_free_rangesets(s, 0);
+    hvm_ioreq_server_free_rangesets(s, false);
 
     return rc;
 }
 
 static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s,
-                                    bool_t is_default)
+                                    bool is_default)
 {
     struct domain *d = s->domain;
     struct hvm_ioreq_vcpu *sv;
-    bool_t handle_bufioreq = ( s->bufioreq.va != NULL );
+    bool handle_bufioreq = !!s->bufioreq.va;
 
     spin_lock(&s->lock);
@@ -562,7 +562,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s,
         hvm_remove_ioreq_gfn(d, &s->bufioreq);
     }
 
-    s->enabled = 1;
+    s->enabled = true;
 
     list_for_each_entry ( sv,
                           &s->ioreq_vcpu_list,
@@ -574,10 +574,10 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s,
 }
 
 static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s,
-                                     bool_t is_default)
+                                     bool is_default)
 {
     struct domain *d = s->domain;
-    bool_t handle_bufioreq = ( s->bufioreq.va != NULL );
+    bool handle_bufioreq = !!s->bufioreq.va;
 
     spin_lock(&s->lock);
@@ -592,7 +592,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s,
         hvm_add_ioreq_gfn(d, &s->ioreq);
     }
 
-    s->enabled = 0;
+    s->enabled = false;
 
  done:
     spin_unlock(&s->lock);
@@ -600,7 +600,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s,
 
 static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, struct domain *d,
                                  domid_t domid,
-                                 bool_t is_default, int bufioreq_handling,
+                                 bool is_default, int bufioreq_handling,
                                  ioservid_t id)
 {
     struct vcpu *v;
@@ -619,7 +619,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
         return rc;
 
     if ( bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC )
-        s->bufioreq_atomic = 1;
+        s->bufioreq_atomic = true;
 
     rc = hvm_ioreq_server_setup_pages(
              s, is_default, bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF);
@@ -646,7 +646,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
 }
 
 static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s,
-                                    bool_t is_default)
+                                    bool is_default)
 {
     ASSERT(!s->enabled);
     hvm_ioreq_server_remove_all_vcpus(s);
@@ -681,7 +681,7 @@ static ioservid_t next_ioservid(struct domain *d)
 }
 
 int hvm_create_ioreq_server(struct domain *d, domid_t domid,
-                            bool_t is_default, int bufioreq_handling,
+                            bool is_default, int bufioreq_handling,
                             ioservid_t *id)
 {
     struct hvm_ioreq_server *s;
@@ -713,7 +713,7 @@ int hvm_create_ioreq_server(struct domain *d, domid_t domid,
     if ( is_default )
     {
         d->arch.hvm_domain.default_ioreq_server = s;
-        hvm_ioreq_server_enable(s, 1);
+        hvm_ioreq_server_enable(s, true);
     }
 
     if ( id )
@@ -756,11 +756,11 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 
         p2m_set_ioreq_server(d, 0, s);
 
-        hvm_ioreq_server_disable(s, 0);
+        hvm_ioreq_server_disable(s, false);
 
         list_del(&s->list_entry);
 
-        hvm_ioreq_server_deinit(s, 0);
+        hvm_ioreq_server_deinit(s, false);
 
         domain_unpause(d);
@@ -968,7 +968,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
 }
 
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool_t enabled)
+                               bool enabled)
 {
     struct list_head *entry;
     int rc;
@@ -992,9 +992,9 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
         domain_pause(d);
 
         if ( enabled )
-            hvm_ioreq_server_enable(s, 0);
+            hvm_ioreq_server_enable(s, false);
         else
-            hvm_ioreq_server_disable(s, 0);
+            hvm_ioreq_server_disable(s, false);
 
         domain_unpause(d);
@@ -1017,7 +1017,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
                           &d->arch.hvm_domain.ioreq_server.list,
                           list_entry )
     {
-        bool_t is_default = (s == d->arch.hvm_domain.default_ioreq_server);
+        bool is_default = (s == d->arch.hvm_domain.default_ioreq_server);
 
         rc = hvm_ioreq_server_add_vcpu(s, is_default, v);
         if ( rc )
@@ -1066,7 +1066,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
                           &d->arch.hvm_domain.ioreq_server.list,
                           list_entry )
     {
-        bool_t is_default = (s == d->arch.hvm_domain.default_ioreq_server);
+        bool is_default = (s == d->arch.hvm_domain.default_ioreq_server);
 
         hvm_ioreq_server_disable(s, is_default);
@@ -1347,7 +1347,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
 }
 
 int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
-                   bool_t buffered)
+                   bool buffered)
 {
     struct vcpu *curr = current;
     struct domain *d = curr->domain;
@@ -1398,7 +1398,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
             p->state = STATE_IOREQ_READY;
             notify_via_xen_event_channel(d, port);
 
-            sv->pending = 1;
+            sv->pending = true;
             return X86EMUL_RETRY;
         }
     }
@@ -1406,7 +1406,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
     return X86EMUL_UNHANDLEABLE;
 }
 
-unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool_t buffered)
+unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
 {
     struct domain *d = current->domain;
     struct hvm_ioreq_server *s;
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index ce536f75ef..7f128c05ff 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -45,7 +45,7 @@ struct hvm_ioreq_vcpu {
     struct list_head list_entry;
     struct vcpu      *vcpu;
    evtchn_port_t    ioreq_evtchn;
-    bool_t           pending;
+    bool             pending;
 };
 
 #define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1)
@@ -69,8 +69,8 @@ struct hvm_ioreq_server {
     spinlock_t             bufioreq_lock;
     evtchn_port_t          bufioreq_evtchn;
     struct rangeset        *range[NR_IO_RANGE_TYPES];
-    bool_t                 enabled;
-    bool_t                 bufioreq_atomic;
+    bool                   enabled;
+    bool                   bufioreq_atomic;
 };
 
 /*
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 43fbe115dc..1829fcf43e 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -19,12 +19,12 @@
 #ifndef __ASM_X86_HVM_IOREQ_H__
 #define __ASM_X86_HVM_IOREQ_H__
 
-bool_t hvm_io_pending(struct vcpu *v);
-bool_t handle_hvm_io_completion(struct vcpu *v);
-bool_t is_ioreq_server_page(struct domain *d, const struct page_info *page);
+bool hvm_io_pending(struct vcpu *v);
+bool handle_hvm_io_completion(struct vcpu *v);
+bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
 
 int hvm_create_ioreq_server(struct domain *d, domid_t domid,
-                            bool_t is_default, int bufioreq_handling,
+                            bool is_default, int bufioreq_handling,
                             ioservid_t *id);
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id);
 int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
@@ -40,7 +40,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
 int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint32_t flags);
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool_t enabled);
+                               bool enabled);
 
 int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
@@ -51,8 +51,8 @@ int hvm_set_dm_domain(struct domain *d, domid_t domid);
 struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
                                                  ioreq_t *p);
 int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
-                   bool_t buffered);
-unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool_t buffered);
+                   bool buffered);
+unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
 void hvm_ioreq_init(struct domain *d);
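
For readers unfamiliar with the idiom, here is the promised illustrative
sketch of the conversion pattern the patch applies. This is not part of the
patch and not Xen code: the struct server type and the server_active_*
functions are hypothetical, invented purely to show the pattern in a
self-contained, compilable program. The old style uses an integer-typed
bool_t with 0/1 literals and an explicit NULL comparison; the new style uses
C99 bool with true/false literals and !! to normalise a pointer to a boolean.

#include <stdbool.h>
#include <stdio.h>

typedef char bool_t;    /* stand-in for the old integer-typed boolean */

struct server {
    void *bufioreq_va;  /* hypothetical analogue of s->bufioreq.va */
    int  pending;
};

/* Old style: bool_t, 0/1 literals, explicit NULL comparison. */
static bool_t server_active_old(const struct server *s)
{
    bool_t handle_bufioreq = ( s->bufioreq_va != NULL );

    if ( s->pending || handle_bufioreq )
        return 1;

    return 0;
}

/* New style: bool, true/false literals, !! to normalise a pointer. */
static bool server_active_new(const struct server *s)
{
    bool handle_bufioreq = !!s->bufioreq_va;

    if ( s->pending || handle_bufioreq )
        return true;

    return false;
}

int main(void)
{
    struct server s = { .bufioreq_va = NULL, .pending = 1 };

    /* Both styles compute the same result; only the types differ. */
    printf("old=%d new=%d\n", server_active_old(&s), server_active_new(&s));
    return 0;
}

The appeal of bool over an integer typedef is that assignments are
normalised to 0 or 1 by the language itself, so true/false and !!ptr state
the intent directly rather than relying on convention.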