From patchwork Mon Nov 30 10:31:16 2020
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 11940123
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper,
    Roger Pau Monné, Wei Liu, Julien Grall, Stefano Stabellini,
    Julien Grall
Subject: [PATCH V3 01/23] x86/ioreq: Prepare IOREQ feature for making it common
Date: Mon, 30 Nov 2020 12:31:16 +0200
Message-Id: <1606732298-22107-2-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
MIME-Version: 1.0

From: Oleksandr Tyshchenko

As a lot of x86 code can be re-used on Arm later on, this patch
makes some preparations to x86/hvm/ioreq.c before moving it to the
common code. This way the subsequent patch will be a verbatim code
movement.

This patch mostly introduces specific hooks to abstract arch-specific
material, taking into account the requirement to leave the "legacy"
mechanism of mapping magic pages for the IOREQ servers x86-specific
and not expose it to the common code.

These hooks are named according to the new, more consistent naming
scheme right away (including dropping the "hvm" prefixes and infixes):
- IOREQ server functions should start with "ioreq_server_"
- IOREQ functions should start with "ioreq_"
Other functions will be renamed in subsequent patches.

It is worth mentioning that the code which checks the return value of
p2m_set_ioreq_server() in hvm_map_mem_type_to_ioreq_server() was
folded into arch_ioreq_server_map_mem_type() for a clean split. As a
result, p2m_change_entry_type_global() is now called with the
ioreq_server lock held.

Also re-order the #include-s alphabetically.

This support is going to be used on Arm to be able to run device
emulators outside of the Xen hypervisor.
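To make the intended split concrete: the common code keeps the generic
IOREQ logic and defers to an arch_* hook for anything x86-only. Below
is a minimal sketch of that dispatch pattern (illustrative only, not
part of the patch: the wrapper function is simplified, and only
arch_vcpu_ioreq_completion() and the enum values are taken from the
changes further down):

    /* Sketch of the common/arch split, assuming simplified scaffolding. */
    static bool complete_io(enum hvm_io_completion io_completion)
    {
        switch ( io_completion )
        {
        case HVMIO_no_completion:
            /* Generic case: nothing left to do. */
            return true;

        /* ... other generic completions (MMIO, PIO) elided ... */

        default:
            /*
             * Everything else (e.g. VMX real-mode emulation on x86) is
             * arch-specific by construction and goes through the hook.
             */
            return arch_vcpu_ioreq_completion(io_completion);
        }
    }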
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
Reviewed-by: Alex Bennée
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch, was split from:
     "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common"
   - fold the check of p->type into hvm_get_ioreq_server_range_type()
     and make it return success/failure
   - remove relocate_portio_handler() call from arch_hvm_ioreq_destroy()
     in arch/x86/hvm/ioreq.c
   - introduce arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()

Changes V1 -> V2:
   - update patch description
   - make arch functions inline and put them into the arch header
     to achieve a true rename by the subsequent patch
   - return void in arch_hvm_destroy_ioreq_server()
   - return bool in arch_hvm_ioreq_destroy()
   - bring relocate_portio_handler() back to arch_hvm_ioreq_destroy()
   - rename IOREQ_IO* to IOREQ_STATUS*
   - remove *handle* from arch_handle_hvm_io_completion()
   - re-order #include-s alphabetically
   - rename hvm_get_ioreq_server_range_type() to
     hvm_ioreq_server_get_type_addr() and add "const" to several arguments

Changes V2 -> V3:
   - update patch description
   - name new arch hooks according to the new naming scheme
   - don't make arch hooks inline, move them to ioreq.c
   - make get_ioreq_server() local again
   - rework the whole patch taking into account that the "legacy"
     interface should remain x86-specific (additional arch hooks, etc)
   - update the code to be able to use hvm_map_mem_type_to_ioreq_server()
     in the common code (an extra arch hook, etc)
   - don't include from arch header
   - add "arch" prefix to hvm_ioreq_server_get_type_addr()
   - move IOREQ_STATUS_* #define-s introduction to a separate patch
   - move HANDLE_BUFIOREQ to the arch header
   - just return relocate_portio_handler() from arch_ioreq_server_destroy_all()
   - misc adjustments proposed by Jan
     (adding const, unsigned int instead of uint32_t)
---
---
 xen/arch/x86/hvm/ioreq.c        | 174 ++++++++++++++++++++++++++--------------
 xen/include/asm-x86/hvm/ioreq.h |  19 +++++
 2 files changed, 133 insertions(+), 60 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 1cc27df..e3dfb49 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -17,15 +17,15 @@
  */
 
 #include <xen/ctype.h>
+#include <xen/domain.h>
+#include <xen/event.h>
 #include <xen/init.h>
+#include <xen/irq.h>
 #include <xen/lib.h>
-#include <xen/trace.h>
+#include <xen/paging.h>
 #include <xen/sched.h>
-#include <xen/irq.h>
 #include <xen/softirq.h>
-#include <xen/domain.h>
-#include <xen/event.h>
-#include <xen/paging.h>
+#include <xen/trace.h>
 #include <xen/vpci.h>
 
 #include <asm/hvm/emulate.h>
@@ -170,6 +170,29 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
     return true;
 }
 
+bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion)
+{
+    switch ( io_completion )
+    {
+    case HVMIO_realmode_completion:
+    {
+        struct hvm_emulate_ctxt ctxt;
+
+        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
+        vmx_realmode_emulate_one(&ctxt);
+        hvm_emulate_writeback(&ctxt);
+
+        break;
+    }
+
+    default:
+        ASSERT_UNREACHABLE();
+        break;
+    }
+
+    return true;
+}
+
 bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
@@ -209,19 +232,8 @@ bool handle_hvm_io_completion(struct vcpu *v)
         return handle_pio(vio->io_req.addr, vio->io_req.size,
                           vio->io_req.dir);
 
-    case HVMIO_realmode_completion:
-    {
-        struct hvm_emulate_ctxt ctxt;
-
-        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
-        vmx_realmode_emulate_one(&ctxt);
-        hvm_emulate_writeback(&ctxt);
-
-        break;
-    }
     default:
-        ASSERT_UNREACHABLE();
-        break;
+        return arch_vcpu_ioreq_completion(io_completion);
     }
 
     return true;
@@ -477,9 +489,6 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
     }
 }
 
-#define HANDLE_BUFIOREQ(s) \
-    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
-
 static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
                                      struct vcpu *v)
 {
@@ -586,7 +595,7 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
+int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
 {
     int rc;
 
@@ -601,7 +610,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
     return rc;
 }
 
-static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
+void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
 {
     hvm_unmap_ioreq_gfn(s, true);
     hvm_unmap_ioreq_gfn(s, false);
@@ -674,6 +683,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
     return rc;
 }
 
+void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
+{
+    hvm_remove_ioreq_gfn(s, false);
+    hvm_remove_ioreq_gfn(s, true);
+}
+
 static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
 {
     struct hvm_ioreq_vcpu *sv;
@@ -683,8 +698,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     if ( s->enabled )
         goto done;
 
-    hvm_remove_ioreq_gfn(s, false);
-    hvm_remove_ioreq_gfn(s, true);
+    arch_ioreq_server_enable(s);
 
     s->enabled = true;
 
@@ -697,6 +711,12 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
+void arch_ioreq_server_disable(struct hvm_ioreq_server *s)
+{
+    hvm_add_ioreq_gfn(s, true);
+    hvm_add_ioreq_gfn(s, false);
+}
+
 static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
 {
     spin_lock(&s->lock);
@@ -704,8 +724,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
     if ( !s->enabled )
         goto done;
 
-    hvm_add_ioreq_gfn(s, true);
-    hvm_add_ioreq_gfn(s, false);
+    arch_ioreq_server_disable(s);
 
     s->enabled = false;
 
@@ -750,7 +769,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
 
  fail_add:
     hvm_ioreq_server_remove_all_vcpus(s);
-    hvm_ioreq_server_unmap_pages(s);
+    arch_ioreq_server_unmap_pages(s);
 
     hvm_ioreq_server_free_rangesets(s);
 
@@ -764,7 +783,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
     hvm_ioreq_server_remove_all_vcpus(s);
 
     /*
-     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
+     * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and
      *       hvm_ioreq_server_free_pages() in that order.
      *       This is because the former will do nothing if the pages
      *       are not mapped, leaving the page to be freed by the latter.
@@ -772,7 +791,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
      *       the page_info pointer to NULL, meaning the latter will do
      *       nothing.
      */
-    hvm_ioreq_server_unmap_pages(s);
+    arch_ioreq_server_unmap_pages(s);
     hvm_ioreq_server_free_pages(s);
 
     hvm_ioreq_server_free_rangesets(s);
 
@@ -836,6 +855,12 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     return rc;
 }
 
+/* Called when target domain is paused */
+void arch_ioreq_server_destroy(struct hvm_ioreq_server *s)
+{
+    p2m_set_ioreq_server(s->target, 0, s);
+}
+
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 {
     struct hvm_ioreq_server *s;
@@ -855,7 +880,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 
     domain_pause(d);
 
-    p2m_set_ioreq_server(d, 0, s);
+    arch_ioreq_server_destroy(s);
 
     hvm_ioreq_server_disable(s);
 
@@ -900,7 +925,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
 
     if ( ioreq_gfn || bufioreq_gfn )
     {
-        rc = hvm_ioreq_server_map_pages(s);
+        rc = arch_ioreq_server_map_pages(s);
         if ( rc )
             goto out;
     }
@@ -1080,6 +1105,24 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     return rc;
 }
 
+/* Called with ioreq_server lock held */
+int arch_ioreq_server_map_mem_type(struct domain *d,
+                                   struct hvm_ioreq_server *s,
+                                   uint32_t flags)
+{
+    int rc = p2m_set_ioreq_server(d, flags, s);
+
+    if ( rc == 0 && flags == 0 )
+    {
+        const struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+        if ( read_atomic(&p2m->ioreq.entry_count) )
+            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
+    }
+
+    return rc;
+}
+
 /*
  * Map or unmap an ioreq server to specific memory type. For now, only
  * HVMMEM_ioreq_server is supported, and in the future new types can be
@@ -1112,19 +1155,11 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( s->emulator != current->domain )
         goto out;
 
-    rc = p2m_set_ioreq_server(d, flags, s);
+    rc = arch_ioreq_server_map_mem_type(d, s, flags);
 
  out:
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 
-    if ( rc == 0 && flags == 0 )
-    {
-        struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-        if ( read_atomic(&p2m->ioreq.entry_count) )
-            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
-    }
-
     return rc;
 }
 
@@ -1210,12 +1245,17 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
 
+bool arch_ioreq_server_destroy_all(struct domain *d)
+{
+    return relocate_portio_handler(d, 0xcf8, 0xcf8, 4);
+}
+
 void hvm_destroy_all_ioreq_servers(struct domain *d)
 {
     struct hvm_ioreq_server *s;
     unsigned int id;
 
-    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
+    if ( !arch_ioreq_server_destroy_all(d) )
         return;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -1239,33 +1279,28 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
 
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p)
+int arch_ioreq_server_get_type_addr(const struct domain *d,
+                                    const ioreq_t *p,
+                                    uint8_t *type,
+                                    uint64_t *addr)
 {
-    struct hvm_ioreq_server *s;
-    uint32_t cf8;
-    uint8_t type;
-    uint64_t addr;
-    unsigned int id;
+    unsigned int cf8 = d->arch.hvm.pci_cf8;
 
     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
-        return NULL;
-
-    cf8 = d->arch.hvm.pci_cf8;
+        return -EINVAL;
 
     if ( p->type == IOREQ_TYPE_PIO &&
          (p->addr & ~3) == 0xcfc &&
         CF8_ENABLED(cf8) )
     {
-        uint32_t x86_fam;
+        unsigned int x86_fam, reg;
         pci_sbdf_t sbdf;
-        unsigned int reg;
 
         reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
 
         /* PCI config data cycle */
 
-        type = XEN_DMOP_IO_RANGE_PCI;
-        addr = ((uint64_t)sbdf.sbdf << 32) | reg;
+        *type = XEN_DMOP_IO_RANGE_PCI;
+        *addr = ((uint64_t)sbdf.sbdf << 32) | reg;
 
         /* AMD extended configuration space access? */
         if ( CF8_ADDR_HI(cf8) &&
              d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
@@ -1277,16 +1312,30 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
 
             if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
                  (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
-                addr |= CF8_ADDR_HI(cf8);
+                *addr |= CF8_ADDR_HI(cf8);
         }
     }
     else
    {
-        type = (p->type == IOREQ_TYPE_PIO) ?
-                XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
-        addr = p->addr;
+        *type = (p->type == IOREQ_TYPE_PIO) ?
+                 XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
+        *addr = p->addr;
     }
 
+    return 0;
+}
+
+struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                                 ioreq_t *p)
+{
+    struct hvm_ioreq_server *s;
+    uint8_t type;
+    uint64_t addr;
+    unsigned int id;
+
+    if ( arch_ioreq_server_get_type_addr(d, p, &type, &addr) )
+        return NULL;
+
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct rangeset *r;
@@ -1515,11 +1564,16 @@ static int hvm_access_cf8(
     return X86EMUL_UNHANDLEABLE;
 }
 
+void arch_ioreq_domain_init(struct domain *d)
+{
+    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
+}
+
 void hvm_ioreq_init(struct domain *d)
 {
     spin_lock_init(&d->arch.hvm.ioreq_server.lock);
 
-    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
+    arch_ioreq_domain_init(d);
 }
 
 /*
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e2588e9..cc79285 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -19,6 +19,25 @@
 #ifndef __ASM_X86_HVM_IOREQ_H__
 #define __ASM_X86_HVM_IOREQ_H__
 
+#define HANDLE_BUFIOREQ(s) \
+    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
+
+bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
+int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
+void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
+void arch_ioreq_server_enable(struct hvm_ioreq_server *s);
+void arch_ioreq_server_disable(struct hvm_ioreq_server *s);
+void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);
+int arch_ioreq_server_map_mem_type(struct domain *d,
+                                   struct hvm_ioreq_server *s,
+                                   uint32_t flags);
+bool arch_ioreq_server_destroy_all(struct domain *d);
+int arch_ioreq_server_get_type_addr(const struct domain *d,
+                                    const ioreq_t *p,
+                                    uint8_t *type,
+                                    uint64_t *addr);
+void arch_ioreq_domain_init(struct domain *d);
+
 bool hvm_io_pending(struct vcpu *v);
 bool handle_hvm_io_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
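
As a closing illustration of where the series is heading: an
architecture without the x86 "legacy" magic-page interface could
satisfy most of these hooks with trivial stubs. The sketch below is
hypothetical (it is not part of this patch, and the real Arm
implementations arrive later in the series); it only reuses the hook
signatures declared above:

    /* Hypothetical stubs for an arch without the legacy interface. */
    int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
    {
        /* Legacy GFN-based mapping is x86-only; use the resource API. */
        return -EOPNOTSUPP;
    }

    void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
    {
        /* Nothing was mapped above, so nothing to undo. */
    }

    bool arch_ioreq_server_destroy_all(struct domain *d)
    {
        /* No PCI CF8 port handler to relocate on this architecture. */
        return true;
    }

    void arch_ioreq_domain_init(struct domain *d)
    {
        /* No arch-specific per-domain IOREQ state to set up. */
    }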