From patchwork Mon Nov 30 10:31:16 2020
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 11940123
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper,
 Roger Pau Monné, Wei Liu, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V3 01/23] x86/ioreq: Prepare IOREQ feature for making it common
Date: Mon, 30 Nov 2020 12:31:16 +0200
Message-Id: <1606732298-22107-2-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

As a lot of x86 code can be re-used on Arm later on, this patch makes
some preparations to x86/hvm/ioreq.c before moving it to the common code.
This way we will get a verbatim copy for the code movement in a
subsequent patch.

This patch mostly introduces specific hooks to abstract arch-specific
material, taking into account the requirement to leave the "legacy"
mechanism of mapping magic pages for the IOREQ servers x86-specific and
not expose it to the common code.

These hooks are named according to the more consistent new naming scheme
right away (including dropping the "hvm" prefixes and infixes):
- IOREQ server functions should start with "ioreq_server_"
- IOREQ functions should start with "ioreq_"
Other functions will be renamed in subsequent patches.

It is worth mentioning that the code which checks the return value of
p2m_set_ioreq_server() in hvm_map_mem_type_to_ioreq_server() was folded
into arch_ioreq_server_map_mem_type() for a clean split. As a result,
p2m_change_entry_type_global() is called with the ioreq_server lock held.

Also re-order the #include-s alphabetically.

This support is going to be used on Arm to be able to run device
emulators outside of the Xen hypervisor.
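For illustration, a minimal, self-contained sketch of the hook pattern this patch introduces (the stub struct, the printf()s and my_server are invented for the example; this is not the actual Xen code): generic state handling stays in the would-be common function, while the x86-only side effects sit behind an arch_* hook.

    #include <stdbool.h>
    #include <stdio.h>

    /* Stripped-down stand-in for struct hvm_ioreq_server. */
    struct ioreq_server {
        bool enabled;
    };

    /*
     * Arch hook: on x86 this is where the "legacy" magic-page handling
     * (hvm_remove_ioreq_gfn() etc.) would live; another arch can supply
     * an empty or entirely different implementation.
     */
    static void arch_ioreq_server_enable(struct ioreq_server *s)
    {
        printf("arch-specific enable work for server %p\n", (void *)s);
    }

    /* Would-be common code: only arch-neutral state handling remains. */
    static void ioreq_server_enable(struct ioreq_server *s)
    {
        if ( s->enabled )
            return;

        arch_ioreq_server_enable(s);
        s->enabled = true;
    }

    int main(void)
    {
        struct ioreq_server my_server = { .enabled = false };

        ioreq_server_enable(&my_server);
        printf("enabled: %d\n", my_server.enabled);
        return 0;
    }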
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
Reviewed-by: Alex Bennée

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
- new patch, was split from: "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common"
- fold the check of p->type into hvm_get_ioreq_server_range_type() and make it return success/failure
- remove the relocate_portio_handler() call from arch_hvm_ioreq_destroy() in arch/x86/hvm/ioreq.c
- introduce arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()

Changes V1 -> V2:
- update patch description
- make the arch functions inline and put them into the arch header to achieve a true rename by the subsequent patch
- return void in arch_hvm_destroy_ioreq_server()
- return bool in arch_hvm_ioreq_destroy()
- bring relocate_portio_handler() back to arch_hvm_ioreq_destroy()
- rename IOREQ_IO* to IOREQ_STATUS*
- remove *handle* from arch_handle_hvm_io_completion()
- re-order the #include-s alphabetically
- rename hvm_get_ioreq_server_range_type() to hvm_ioreq_server_get_type_addr() and add "const" to several arguments

Changes V2 -> V3:
- update patch description
- name the new arch hooks according to the new naming scheme
- don't make the arch hooks inline, move them to ioreq.c
- make get_ioreq_server() local again
- rework the whole patch taking into account that the "legacy" interface should remain x86-specific (additional arch hooks, etc)
- update the code to be able to use hvm_map_mem_type_to_ioreq_server() in the common code (an extra arch hook, etc)
- don't include from the arch header
- add the "arch" prefix to hvm_ioreq_server_get_type_addr()
- move the IOREQ_STATUS_* #define-s introduction to a separate patch
- move HANDLE_BUFIOREQ to the arch header
- just return relocate_portio_handler() from arch_ioreq_server_destroy_all()
- misc adjustments proposed by Jan (adding const, unsigned int instead of uint32_t)
---
---
 xen/arch/x86/hvm/ioreq.c        | 174 ++++++++++++++++++++++++++--------------
 xen/include/asm-x86/hvm/ioreq.h |  19 +++++
 2 files changed, 133 insertions(+), 60 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index 1cc27df..e3dfb49 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -17,15 +17,15 @@ */ #include +#include +#include #include +#include #include -#include +#include #include -#include #include -#include -#include -#include +#include #include #include @@ -170,6 +170,29 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p) return true; } +bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion) +{ + switch ( io_completion ) + { + case HVMIO_realmode_completion: + { + struct hvm_emulate_ctxt ctxt; + + hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs()); + vmx_realmode_emulate_one(&ctxt); + hvm_emulate_writeback(&ctxt); + + break; + } + + default: + ASSERT_UNREACHABLE(); + break; + } + + return true; +} + bool handle_hvm_io_completion(struct vcpu *v) { struct domain *d = v->domain; @@ -209,19 +232,8 @@ bool handle_hvm_io_completion(struct vcpu *v) return handle_pio(vio->io_req.addr, vio->io_req.size, vio->io_req.dir); - case HVMIO_realmode_completion: - { - struct hvm_emulate_ctxt ctxt; - - hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs()); - vmx_realmode_emulate_one(&ctxt); - hvm_emulate_writeback(&ctxt); - - break; - } default: - ASSERT_UNREACHABLE(); - break; + return arch_vcpu_ioreq_completion(io_completion); } return true; @@ -477,9 +489,6 @@ static void
hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s, } } -#define HANDLE_BUFIOREQ(s) \ - ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) - static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, struct vcpu *v) { @@ -586,7 +595,7 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s) spin_unlock(&s->lock); } -static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s) +int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s) { int rc; @@ -601,7 +610,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s) return rc; } -static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s) +void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s) { hvm_unmap_ioreq_gfn(s, true); hvm_unmap_ioreq_gfn(s, false); @@ -674,6 +683,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s, return rc; } +void arch_ioreq_server_enable(struct hvm_ioreq_server *s) +{ + hvm_remove_ioreq_gfn(s, false); + hvm_remove_ioreq_gfn(s, true); +} + static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s) { struct hvm_ioreq_vcpu *sv; @@ -683,8 +698,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s) if ( s->enabled ) goto done; - hvm_remove_ioreq_gfn(s, false); - hvm_remove_ioreq_gfn(s, true); + arch_ioreq_server_enable(s); s->enabled = true; @@ -697,6 +711,12 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s) spin_unlock(&s->lock); } +void arch_ioreq_server_disable(struct hvm_ioreq_server *s) +{ + hvm_add_ioreq_gfn(s, true); + hvm_add_ioreq_gfn(s, false); +} + static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s) { spin_lock(&s->lock); @@ -704,8 +724,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s) if ( !s->enabled ) goto done; - hvm_add_ioreq_gfn(s, true); - hvm_add_ioreq_gfn(s, false); + arch_ioreq_server_disable(s); s->enabled = false; @@ -750,7 +769,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, fail_add: hvm_ioreq_server_remove_all_vcpus(s); - hvm_ioreq_server_unmap_pages(s); + arch_ioreq_server_unmap_pages(s); hvm_ioreq_server_free_rangesets(s); @@ -764,7 +783,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s) hvm_ioreq_server_remove_all_vcpus(s); /* - * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and + * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and * hvm_ioreq_server_free_pages() in that order. * This is because the former will do nothing if the pages * are not mapped, leaving the page to be freed by the latter. @@ -772,7 +791,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s) * the page_info pointer to NULL, meaning the latter will do * nothing. 
*/ - hvm_ioreq_server_unmap_pages(s); + arch_ioreq_server_unmap_pages(s); hvm_ioreq_server_free_pages(s); hvm_ioreq_server_free_rangesets(s); @@ -836,6 +855,12 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, return rc; } +/* Called when target domain is paused */ +void arch_ioreq_server_destroy(struct hvm_ioreq_server *s) +{ + p2m_set_ioreq_server(s->target, 0, s); +} + int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) { struct hvm_ioreq_server *s; @@ -855,7 +880,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) domain_pause(d); - p2m_set_ioreq_server(d, 0, s); + arch_ioreq_server_destroy(s); hvm_ioreq_server_disable(s); @@ -900,7 +925,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, if ( ioreq_gfn || bufioreq_gfn ) { - rc = hvm_ioreq_server_map_pages(s); + rc = arch_ioreq_server_map_pages(s); if ( rc ) goto out; } @@ -1080,6 +1105,24 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, return rc; } +/* Called with ioreq_server lock held */ +int arch_ioreq_server_map_mem_type(struct domain *d, + struct hvm_ioreq_server *s, + uint32_t flags) +{ + int rc = p2m_set_ioreq_server(d, flags, s); + + if ( rc == 0 && flags == 0 ) + { + const struct p2m_domain *p2m = p2m_get_hostp2m(d); + + if ( read_atomic(&p2m->ioreq.entry_count) ) + p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw); + } + + return rc; +} + /* * Map or unmap an ioreq server to specific memory type. For now, only * HVMMEM_ioreq_server is supported, and in the future new types can be @@ -1112,19 +1155,11 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, if ( s->emulator != current->domain ) goto out; - rc = p2m_set_ioreq_server(d, flags, s); + rc = arch_ioreq_server_map_mem_type(d, s, flags); out: spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - if ( rc == 0 && flags == 0 ) - { - struct p2m_domain *p2m = p2m_get_hostp2m(d); - - if ( read_atomic(&p2m->ioreq.entry_count) ) - p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw); - } - return rc; } @@ -1210,12 +1245,17 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); } +bool arch_ioreq_server_destroy_all(struct domain *d) +{ + return relocate_portio_handler(d, 0xcf8, 0xcf8, 4); +} + void hvm_destroy_all_ioreq_servers(struct domain *d) { struct hvm_ioreq_server *s; unsigned int id; - if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) ) + if ( !arch_ioreq_server_destroy_all(d) ) return; spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); @@ -1239,33 +1279,28 @@ void hvm_destroy_all_ioreq_servers(struct domain *d) spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); } -struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p) +int arch_ioreq_server_get_type_addr(const struct domain *d, + const ioreq_t *p, + uint8_t *type, + uint64_t *addr) { - struct hvm_ioreq_server *s; - uint32_t cf8; - uint8_t type; - uint64_t addr; - unsigned int id; + unsigned int cf8 = d->arch.hvm.pci_cf8; if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO ) - return NULL; - - cf8 = d->arch.hvm.pci_cf8; + return -EINVAL; if ( p->type == IOREQ_TYPE_PIO && (p->addr & ~3) == 0xcfc && CF8_ENABLED(cf8) ) { - uint32_t x86_fam; + unsigned int x86_fam, reg; pci_sbdf_t sbdf; - unsigned int reg; reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf); /* PCI config data cycle */ - type = XEN_DMOP_IO_RANGE_PCI; - addr = ((uint64_t)sbdf.sbdf << 32) | reg; + *type = 
XEN_DMOP_IO_RANGE_PCI; *addr = ((uint64_t)sbdf.sbdf << 32) | reg; /* AMD extended configuration space access? */ if ( CF8_ADDR_HI(cf8) && d->arch.cpuid->x86_vendor == X86_VENDOR_AMD && @@ -1277,16 +1312,30 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) && (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) ) - addr |= CF8_ADDR_HI(cf8); + *addr |= CF8_ADDR_HI(cf8); } } else { - type = (p->type == IOREQ_TYPE_PIO) ? - XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY; - addr = p->addr; + *type = (p->type == IOREQ_TYPE_PIO) ? + XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY; + *addr = p->addr; } + return 0; +} + +struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, + ioreq_t *p) +{ + struct hvm_ioreq_server *s; + uint8_t type; + uint64_t addr; + unsigned int id; + + if ( arch_ioreq_server_get_type_addr(d, p, &type, &addr) ) + return NULL; + FOR_EACH_IOREQ_SERVER(d, id, s) { struct rangeset *r; @@ -1515,11 +1564,16 @@ static int hvm_access_cf8( return X86EMUL_UNHANDLEABLE; } +void arch_ioreq_domain_init(struct domain *d) +{ + register_portio_handler(d, 0xcf8, 4, hvm_access_cf8); +} + void hvm_ioreq_init(struct domain *d) { spin_lock_init(&d->arch.hvm.ioreq_server.lock); - register_portio_handler(d, 0xcf8, 4, hvm_access_cf8); + arch_ioreq_domain_init(d); } /* diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h index e2588e9..cc79285 100644 --- a/xen/include/asm-x86/hvm/ioreq.h +++ b/xen/include/asm-x86/hvm/ioreq.h @@ -19,6 +19,25 @@ #ifndef __ASM_X86_HVM_IOREQ_H__ #define __ASM_X86_HVM_IOREQ_H__ +#define HANDLE_BUFIOREQ(s) \ + ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) + +bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion); +int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s); +void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s); +void arch_ioreq_server_enable(struct hvm_ioreq_server *s); +void arch_ioreq_server_disable(struct hvm_ioreq_server *s); +void arch_ioreq_server_destroy(struct hvm_ioreq_server *s); +int arch_ioreq_server_map_mem_type(struct domain *d, + struct hvm_ioreq_server *s, + uint32_t flags); +bool arch_ioreq_server_destroy_all(struct domain *d); +int arch_ioreq_server_get_type_addr(const struct domain *d, + const ioreq_t *p, + uint8_t *type, + uint64_t *addr); +void arch_ioreq_domain_init(struct domain *d); + bool hvm_io_pending(struct vcpu *v); bool handle_hvm_io_completion(struct vcpu *v); bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
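As a usage illustration for the refactoring above, a compilable sketch (the stub ioreq_t, the constants and the values are invented for the example; this is not the patch's code): arch_ioreq_server_get_type_addr() now reports the range type and address through out-parameters and returns -EINVAL for unsupported request types, so hvm_select_ioreq_server() can simply bail out with NULL.

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    #define IOREQ_TYPE_PIO   0           /* illustrative values only */
    #define IOREQ_TYPE_COPY  1
    #define RANGE_PORT       0
    #define RANGE_MEMORY     1

    typedef struct { uint8_t type; uint64_t addr; } ioreq_t; /* stand-in */

    /* Sketch of the out-parameter style of arch_ioreq_server_get_type_addr(). */
    static int get_type_addr(const ioreq_t *p, uint8_t *type, uint64_t *addr)
    {
        if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
            return -EINVAL;

        *type = (p->type == IOREQ_TYPE_PIO) ? RANGE_PORT : RANGE_MEMORY;
        *addr = p->addr;
        return 0;
    }

    int main(void)
    {
        ioreq_t req = { .type = IOREQ_TYPE_PIO, .addr = 0x3f8 };
        uint8_t type;
        uint64_t addr;

        if ( get_type_addr(&req, &type, &addr) )
            return 1; /* caller would return NULL: no server selected */

        printf("type=%u addr=%#llx\n", type, (unsigned long long)addr);
        return 0;
    }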
From patchwork Mon Nov 30 10:31:17 2020
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 11940125
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper,
 Roger Pau Monné, Wei Liu, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V3 02/23] x86/ioreq: Add IOREQ_STATUS_* #define-s and update code for moving
Date: Mon, 30 Nov 2020 12:31:17 +0200
Message-Id: <1606732298-22107-3-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch continues the preparation of x86/hvm/ioreq.c before moving
it to the common code.

Add IOREQ_STATUS_* #define-s and update the candidates for moving,
since X86EMUL_* shouldn't be exposed to the common code in that form.

This support is going to be used on Arm to be able to run device
emulators outside of the Xen hypervisor.

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
Reviewed-by: Alex Bennée
Acked-by: Jan Beulich

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V2 -> V3:
- new patch, was split from [PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it common
---
---
 xen/arch/x86/hvm/ioreq.c        | 16 ++++++++--------
 xen/include/asm-x86/hvm/ioreq.h |  4 ++++
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index e3dfb49..9525554 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -1400,7 +1400,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p) pg = iorp->va; if ( !pg ) - return X86EMUL_UNHANDLEABLE; + return IOREQ_STATUS_UNHANDLED; /* * Return 0 for the cases we can't deal with: @@ -1430,7 +1430,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p) break; default: gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size); - return X86EMUL_UNHANDLEABLE; + return IOREQ_STATUS_UNHANDLED; } spin_lock(&s->bufioreq_lock); @@ -1440,7 +1440,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p) { /* The queue is full: send the iopacket through the normal path.
*/ spin_unlock(&s->bufioreq_lock); - return X86EMUL_UNHANDLEABLE; + return IOREQ_STATUS_UNHANDLED; } pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp; @@ -1471,7 +1471,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p) notify_via_xen_event_channel(d, s->bufioreq_evtchn); spin_unlock(&s->bufioreq_lock); - return X86EMUL_OKAY; + return IOREQ_STATUS_HANDLED; } int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, @@ -1487,7 +1487,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, return hvm_send_buffered_ioreq(s, proto_p); if ( unlikely(!vcpu_start_shutdown_deferral(curr)) ) - return X86EMUL_RETRY; + return IOREQ_STATUS_RETRY; list_for_each_entry ( sv, &s->ioreq_vcpu_list, @@ -1527,11 +1527,11 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, notify_via_xen_event_channel(d, port); sv->pending = true; - return X86EMUL_RETRY; + return IOREQ_STATUS_RETRY; } } - return X86EMUL_UNHANDLEABLE; + return IOREQ_STATUS_UNHANDLED; } unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) @@ -1545,7 +1545,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) if ( !s->enabled ) continue; - if ( hvm_send_ioreq(s, p, buffered) == X86EMUL_UNHANDLEABLE ) + if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED ) failed++; } diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h index cc79285..e9c8b2d 100644 --- a/xen/include/asm-x86/hvm/ioreq.h +++ b/xen/include/asm-x86/hvm/ioreq.h @@ -74,6 +74,10 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered); void hvm_ioreq_init(struct domain *d); +#define IOREQ_STATUS_HANDLED X86EMUL_OKAY +#define IOREQ_STATUS_UNHANDLED X86EMUL_UNHANDLEABLE +#define IOREQ_STATUS_RETRY X86EMUL_RETRY + #endif /* __ASM_X86_HVM_IOREQ_H__ */ /*
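To show the intent of the IOREQ_STATUS_* aliases in a compilable form (the X86EMUL_* values below are invented placeholders, not Xen's real definitions): common code only ever tests the arch-neutral names, and each architecture's header maps them onto the status constants its emulation layer already uses.

    #include <stdio.h>

    /* Placeholder arch emulation-layer constants (values invented here). */
    #define X86EMUL_OKAY          0
    #define X86EMUL_UNHANDLEABLE  1
    #define X86EMUL_RETRY         2

    /* The arch header maps the neutral names onto the arch constants. */
    #define IOREQ_STATUS_HANDLED   X86EMUL_OKAY
    #define IOREQ_STATUS_UNHANDLED X86EMUL_UNHANDLEABLE
    #define IOREQ_STATUS_RETRY     X86EMUL_RETRY

    /* Stand-in for hvm_send_ioreq(), returning an arch-mapped status. */
    static int send_ioreq(int queue_full)
    {
        return queue_full ? IOREQ_STATUS_UNHANDLED : IOREQ_STATUS_HANDLED;
    }

    int main(void)
    {
        /* Common code never mentions X86EMUL_* directly. */
        if ( send_ioreq(1) == IOREQ_STATUS_UNHANDLED )
            printf("fall back to the unbuffered path\n");
        return 0;
    }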
From patchwork Mon Nov 30 10:31:18 2020
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 11940127

From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper,
 Roger Pau Monné, Wei Liu, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V3 03/23] x86/ioreq: Provide out-of-line wrapper for the handle_mmio()
Date: Mon, 30 Nov 2020 12:31:18 +0200
Message-Id: <1606732298-22107-4-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

IOREQ is about to become a common feature and Arm will have its own
implementation. But the name of the function is pretty generic and can
be confusing on Arm (we already have a try_handle_mmio()).

In order not to rename the function (which is used for a varying set of
purposes on x86) globally and to get a non-confusing variant on Arm,
provide a wrapper ioreq_complete_mmio() to be used in common and Arm code.

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
Reviewed-by: Jan Beulich

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
- new patch

Changes V1 -> V2:
- remove "handle"
- add Jan's A-b

Changes V2 -> V3:
- remove Jan's A-b
- update patch subject/description
- use an out-of-line function instead of a #define
- put earlier in the series to avoid breakage
---
---
 xen/arch/x86/hvm/ioreq.c        | 7 ++++++-
 xen/include/asm-x86/hvm/ioreq.h | 2 ++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index 9525554..36b1e4e 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -36,6 +36,11 @@ #include #include +bool ioreq_complete_mmio(void) +{ + return handle_mmio(); +} + static void set_ioreq_server(struct domain *d, unsigned int id, struct hvm_ioreq_server *s) { @@ -226,7 +231,7 @@ bool handle_hvm_io_completion(struct vcpu *v) break; case HVMIO_mmio_completion: - return handle_mmio(); + return ioreq_complete_mmio(); case HVMIO_pio_completion: return handle_pio(vio->io_req.addr, vio->io_req.size, diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h index e9c8b2d..c7563e1 100644 --- a/xen/include/asm-x86/hvm/ioreq.h +++ b/xen/include/asm-x86/hvm/ioreq.h @@ -74,6 +74,8 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered); void hvm_ioreq_init(struct domain *d); +bool ioreq_complete_mmio(void); + #define IOREQ_STATUS_HANDLED X86EMUL_OKAY #define IOREQ_STATUS_UNHANDLED X86EMUL_UNHANDLEABLE #define IOREQ_STATUS_RETRY X86EMUL_RETRY
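A minimal sketch of the wrapper idea (the stub body and the mmio_handled flag are invented; on x86 the real wrapper simply returns handle_mmio()): common code calls one neutral, out-of-line function, and each architecture provides its own definition, so x86's handle_mmio() and Arm's existing try_handle_mmio() never need to share a name.

    #include <stdbool.h>
    #include <stdio.h>

    /* Declared once for common code (in the per-arch ioreq header). */
    bool ioreq_complete_mmio(void);

    /*
     * One out-of-line definition per architecture. On x86 this would
     * forward to handle_mmio(); the stub below just fakes a result.
     */
    bool ioreq_complete_mmio(void)
    {
        bool mmio_handled = true; /* pretend the access was emulated */
        return mmio_handled;
    }

    int main(void)
    {
        /* Common completion path: arch-neutral call, no handle_mmio() here. */
        if ( ioreq_complete_mmio() )
            printf("MMIO completion handled\n");
        return 0;
    }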
From patchwork Mon Nov 30 10:31:19 2020
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 11940133

From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Andrew Cooper, George Dunlap, Ian Jackson,
 Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu,
 Roger Pau Monné, Paul Durrant, Tim Deegan, Julien Grall
Subject: [PATCH V3 04/23] xen/ioreq: Make x86's IOREQ feature common
Date: Mon, 30 Nov 2020 12:31:19 +0200
Message-Id: <1606732298-22107-5-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

As a lot of x86 code can be re-used on Arm later on, this patch moves
the previously prepared IOREQ support to the common code (the code
movement is a verbatim copy).

The "legacy" mechanism of mapping magic pages for the IOREQ servers
remains x86-specific and is not exposed to the common code.

The common IOREQ feature is supposed to be built with the IOREQ_SERVER
option enabled, which is selected by x86's config HVM for now.

In order to avoid having a gigantic patch here, the subsequent patches
will update the remaining bits in the common code step by step:
- Make IOREQ-related structs/materials common
- Drop the "hvm" prefixes and infixes
- Remove a layering violation by moving the corresponding fields out of *arch.hvm* or abstracting away accesses to them

Also include which will be needed on Arm to avoid touching the common
code again when introducing Arm-specific bits.

This support is going to be used on Arm to be able to run device
emulators outside of the Xen hypervisor.

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
Please note, this patch depends on the following which is on review:
https://patchwork.kernel.org/patch/11816689/
***

Changes RFC -> V1:
- was split into three patches:
  - x86/ioreq: Prepare IOREQ feature for making it common
  - xen/ioreq: Make x86's IOREQ feature common
  - xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
- update MAINTAINERS file
- do not use a separate subdir for the IOREQ stuff, move it to:
  - xen/common/ioreq.c
  - xen/include/xen/ioreq.h
- update x86's files to include xen/ioreq.h
- remove unneeded headers in arch/x86/hvm/ioreq.c
- re-order the headers alphabetically in common/ioreq.c
- update common/ioreq.c according to the newly introduced arch functions: arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()

Changes V1 -> V2:
- update patch description
- make everything needed in the previous patch to achieve a true rename here
- don't include unnecessary headers from asm-x86/hvm/ioreq.h and xen/ioreq.h
- use __XEN_IOREQ_H__ instead of __IOREQ_H__
- move get_ioreq_server() to common/ioreq.c

Changes V2 -> V3:
- update patch description
- make everything needed in the previous patch to not expose the "legacy" interface to the common code here
- update the patch according to the "legacy interface" being x86-specific
- include in common ioreq.c
---
---
 MAINTAINERS                     |    8 +-
 xen/arch/x86/Kconfig            |    1 +
 xen/arch/x86/hvm/ioreq.c        | 1356 ++------------------------------------
 xen/arch/x86/mm.c               |    2 +-
 xen/arch/x86/mm/shadow/common.c |    2 +-
 xen/common/Kconfig              |    3 +
 xen/common/Makefile             |    1 +
 xen/common/ioreq.c              | 1287 +++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/ioreq.h |   39 +-
 xen/include/xen/ioreq.h         |   73 +++
 10 files changed, 1432 insertions(+), 1340 deletions(-)
 create mode 100644 xen/common/ioreq.c
 create mode 100644 xen/include/xen/ioreq.h

diff --git a/MAINTAINERS b/MAINTAINERS index dab38a6..5a44ba4 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -333,6 +333,13 @@ X: xen/drivers/passthrough/vtd/ X: xen/drivers/passthrough/device_tree.c F: xen/include/xen/iommu.h +I/O EMULATION (IOREQ) +M: Paul Durrant +S: Supported +F: xen/common/ioreq.c +F: xen/include/xen/ioreq.h +F: xen/include/public/hvm/ioreq.h + KCONFIG M: Doug Goldstein S: Supported @@ -549,7 +556,6 @@ F: xen/arch/x86/hvm/ioreq.c F:
xen/include/asm-x86/hvm/emulate.h F: xen/include/asm-x86/hvm/io.h F: xen/include/asm-x86/hvm/ioreq.h -F: xen/include/public/hvm/ioreq.h X86 MEMORY MANAGEMENT M: Jan Beulich diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig index 24868aa..abe0fce 100644 --- a/xen/arch/x86/Kconfig +++ b/xen/arch/x86/Kconfig @@ -91,6 +91,7 @@ config PV_LINEAR_PT config HVM def_bool !PV_SHIM_EXCLUSIVE + select IOREQ_SERVER prompt "HVM support" ---help--- Interfaces to support HVM domains. HVM domains require hardware diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index 36b1e4e..b03ceee 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -41,140 +41,6 @@ bool ioreq_complete_mmio(void) return handle_mmio(); } -static void set_ioreq_server(struct domain *d, unsigned int id, - struct hvm_ioreq_server *s) -{ - ASSERT(id < MAX_NR_IOREQ_SERVERS); - ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]); - - d->arch.hvm.ioreq_server.server[id] = s; -} - -#define GET_IOREQ_SERVER(d, id) \ - (d)->arch.hvm.ioreq_server.server[id] - -static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d, - unsigned int id) -{ - if ( id >= MAX_NR_IOREQ_SERVERS ) - return NULL; - - return GET_IOREQ_SERVER(d, id); -} - -/* - * Iterate over all possible ioreq servers. - * - * NOTE: The iteration is backwards such that more recently created - * ioreq servers are favoured in hvm_select_ioreq_server(). - * This is a semantic that previously existed when ioreq servers - * were held in a linked list. - */ -#define FOR_EACH_IOREQ_SERVER(d, id, s) \ - for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \ - if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \ - continue; \ - else - -static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v) -{ - shared_iopage_t *p = s->ioreq.va; - - ASSERT((v == current) || !vcpu_runnable(v)); - ASSERT(p != NULL); - - return &p->vcpu_ioreq[v->vcpu_id]; -} - -static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v, - struct hvm_ioreq_server **srvp) -{ - struct domain *d = v->domain; - struct hvm_ioreq_server *s; - unsigned int id; - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - struct hvm_ioreq_vcpu *sv; - - list_for_each_entry ( sv, - &s->ioreq_vcpu_list, - list_entry ) - { - if ( sv->vcpu == v && sv->pending ) - { - if ( srvp ) - *srvp = s; - return sv; - } - } - } - - return NULL; -} - -bool hvm_io_pending(struct vcpu *v) -{ - return get_pending_vcpu(v, NULL); -} - -static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p) -{ - unsigned int prev_state = STATE_IOREQ_NONE; - unsigned int state = p->state; - uint64_t data = ~0; - - smp_rmb(); - - /* - * The only reason we should see this condition be false is when an - * emulator dying races with I/O being requested. 
- */ - while ( likely(state != STATE_IOREQ_NONE) ) - { - if ( unlikely(state < prev_state) ) - { - gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n", - prev_state, state); - sv->pending = false; - domain_crash(sv->vcpu->domain); - return false; /* bail */ - } - - switch ( prev_state = state ) - { - case STATE_IORESP_READY: /* IORESP_READY -> NONE */ - p->state = STATE_IOREQ_NONE; - data = p->data; - break; - - case STATE_IOREQ_READY: /* IOREQ_{READY,INPROCESS} -> IORESP_READY */ - case STATE_IOREQ_INPROCESS: - wait_on_xen_event_channel(sv->ioreq_evtchn, - ({ state = p->state; - smp_rmb(); - state != prev_state; })); - continue; - - default: - gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state); - sv->pending = false; - domain_crash(sv->vcpu->domain); - return false; /* bail */ - } - - break; - } - - p = &sv->vcpu->arch.hvm.hvm_io.io_req; - if ( hvm_ioreq_needs_completion(p) ) - p->data = data; - - sv->pending = false; - - return true; -} - bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion) { switch ( io_completion ) @@ -198,52 +64,6 @@ bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion) return true; } -bool handle_hvm_io_completion(struct vcpu *v) -{ - struct domain *d = v->domain; - struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io; - struct hvm_ioreq_server *s; - struct hvm_ioreq_vcpu *sv; - enum hvm_io_completion io_completion; - - if ( has_vpci(d) && vpci_process_pending(v) ) - { - raise_softirq(SCHEDULE_SOFTIRQ); - return false; - } - - sv = get_pending_vcpu(v, &s); - if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) ) - return false; - - vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ? - STATE_IORESP_READY : STATE_IOREQ_NONE; - - msix_write_completion(v); - vcpu_end_shutdown_deferral(v); - - io_completion = vio->io_completion; - vio->io_completion = HVMIO_no_completion; - - switch ( io_completion ) - { - case HVMIO_no_completion: - break; - - case HVMIO_mmio_completion: - return ioreq_complete_mmio(); - - case HVMIO_pio_completion: - return handle_pio(vio->io_req.addr, vio->io_req.size, - vio->io_req.dir); - - default: - return arch_vcpu_ioreq_completion(io_completion); - } - - return true; -} - static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s) { struct domain *d = s->target; @@ -360,93 +180,6 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) return rc; } -static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) -{ - struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq; - struct page_info *page; - - if ( iorp->page ) - { - /* - * If a guest frame has already been mapped (which may happen - * on demand if hvm_get_ioreq_server_info() is called), then - * allocating a page is not permitted. - */ - if ( !gfn_eq(iorp->gfn, INVALID_GFN) ) - return -EPERM; - - return 0; - } - - page = alloc_domheap_page(s->target, MEMF_no_refcount); - - if ( !page ) - return -ENOMEM; - - if ( !get_page_and_type(page, s->target, PGT_writable_page) ) - { - /* - * The domain can't possibly know about this page yet, so failure - * here is a clear indication of something fishy going on. - */ - domain_crash(s->emulator); - return -ENODATA; - } - - iorp->va = __map_domain_page_global(page); - if ( !iorp->va ) - goto fail; - - iorp->page = page; - clear_page(iorp->va); - return 0; - - fail: - put_page_alloc_ref(page); - put_page_and_type(page); - - return -ENOMEM; -} - -static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) -{ - struct hvm_ioreq_page *iorp = buf ? 
&s->bufioreq : &s->ioreq; - struct page_info *page = iorp->page; - - if ( !page ) - return; - - iorp->page = NULL; - - unmap_domain_page_global(iorp->va); - iorp->va = NULL; - - put_page_alloc_ref(page); - put_page_and_type(page); -} - -bool is_ioreq_server_page(struct domain *d, const struct page_info *page) -{ - const struct hvm_ioreq_server *s; - unsigned int id; - bool found = false; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - if ( (s->ioreq.page == page) || (s->bufioreq.page == page) ) - { - found = true; - break; - } - } - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return found; -} - static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) { @@ -481,125 +214,6 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) return rc; } -static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s, - struct hvm_ioreq_vcpu *sv) -{ - ASSERT(spin_is_locked(&s->lock)); - - if ( s->ioreq.va != NULL ) - { - ioreq_t *p = get_ioreq(s, sv->vcpu); - - p->vp_eport = sv->ioreq_evtchn; - } -} - -static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, - struct vcpu *v) -{ - struct hvm_ioreq_vcpu *sv; - int rc; - - sv = xzalloc(struct hvm_ioreq_vcpu); - - rc = -ENOMEM; - if ( !sv ) - goto fail1; - - spin_lock(&s->lock); - - rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id, - s->emulator->domain_id, NULL); - if ( rc < 0 ) - goto fail2; - - sv->ioreq_evtchn = rc; - - if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) ) - { - rc = alloc_unbound_xen_event_channel(v->domain, 0, - s->emulator->domain_id, NULL); - if ( rc < 0 ) - goto fail3; - - s->bufioreq_evtchn = rc; - } - - sv->vcpu = v; - - list_add(&sv->list_entry, &s->ioreq_vcpu_list); - - if ( s->enabled ) - hvm_update_ioreq_evtchn(s, sv); - - spin_unlock(&s->lock); - return 0; - - fail3: - free_xen_event_channel(v->domain, sv->ioreq_evtchn); - - fail2: - spin_unlock(&s->lock); - xfree(sv); - - fail1: - return rc; -} - -static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, - struct vcpu *v) -{ - struct hvm_ioreq_vcpu *sv; - - spin_lock(&s->lock); - - list_for_each_entry ( sv, - &s->ioreq_vcpu_list, - list_entry ) - { - if ( sv->vcpu != v ) - continue; - - list_del(&sv->list_entry); - - if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) ) - free_xen_event_channel(v->domain, s->bufioreq_evtchn); - - free_xen_event_channel(v->domain, sv->ioreq_evtchn); - - xfree(sv); - break; - } - - spin_unlock(&s->lock); -} - -static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s) -{ - struct hvm_ioreq_vcpu *sv, *next; - - spin_lock(&s->lock); - - list_for_each_entry_safe ( sv, - next, - &s->ioreq_vcpu_list, - list_entry ) - { - struct vcpu *v = sv->vcpu; - - list_del(&sv->list_entry); - - if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) ) - free_xen_event_channel(v->domain, s->bufioreq_evtchn); - - free_xen_event_channel(v->domain, sv->ioreq_evtchn); - - xfree(sv); - } - - spin_unlock(&s->lock); -} - int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s) { int rc; @@ -621,940 +235,91 @@ void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s) hvm_unmap_ioreq_gfn(s, false); } -static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s) +void arch_ioreq_server_enable(struct hvm_ioreq_server *s) { - int rc; - - rc = hvm_alloc_ioreq_mfn(s, false); - - if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) ) - rc = hvm_alloc_ioreq_mfn(s, true); - - if ( rc ) - hvm_free_ioreq_mfn(s, false); - - return rc; + 
hvm_remove_ioreq_gfn(s, false); + hvm_remove_ioreq_gfn(s, true); } -static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s) +void arch_ioreq_server_disable(struct hvm_ioreq_server *s) { - hvm_free_ioreq_mfn(s, true); - hvm_free_ioreq_mfn(s, false); + hvm_add_ioreq_gfn(s, true); + hvm_add_ioreq_gfn(s, false); } -static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s) +/* Called when target domain is paused */ +void arch_ioreq_server_destroy(struct hvm_ioreq_server *s) { - unsigned int i; - - for ( i = 0; i < NR_IO_RANGE_TYPES; i++ ) - rangeset_destroy(s->range[i]); + p2m_set_ioreq_server(s->target, 0, s); } -static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s, - ioservid_t id) +/* Called with ioreq_server lock held */ +int arch_ioreq_server_map_mem_type(struct domain *d, + struct hvm_ioreq_server *s, + uint32_t flags) { - unsigned int i; - int rc; + int rc = p2m_set_ioreq_server(d, flags, s); - for ( i = 0; i < NR_IO_RANGE_TYPES; i++ ) + if ( rc == 0 && flags == 0 ) { - char *name; - - rc = asprintf(&name, "ioreq_server %d %s", id, - (i == XEN_DMOP_IO_RANGE_PORT) ? "port" : - (i == XEN_DMOP_IO_RANGE_MEMORY) ? "memory" : - (i == XEN_DMOP_IO_RANGE_PCI) ? "pci" : - ""); - if ( rc ) - goto fail; - - s->range[i] = rangeset_new(s->target, name, - RANGESETF_prettyprint_hex); - - xfree(name); - - rc = -ENOMEM; - if ( !s->range[i] ) - goto fail; + const struct p2m_domain *p2m = p2m_get_hostp2m(d); - rangeset_limit(s->range[i], MAX_NR_IO_RANGES); + if ( read_atomic(&p2m->ioreq.entry_count) ) + p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw); } - return 0; - - fail: - hvm_ioreq_server_free_rangesets(s); - return rc; } -void arch_ioreq_server_enable(struct hvm_ioreq_server *s) +bool arch_ioreq_server_destroy_all(struct domain *d) { - hvm_remove_ioreq_gfn(s, false); - hvm_remove_ioreq_gfn(s, true); + return relocate_portio_handler(d, 0xcf8, 0xcf8, 4); } -static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s) +int arch_ioreq_server_get_type_addr(const struct domain *d, + const ioreq_t *p, + uint8_t *type, + uint64_t *addr) { - struct hvm_ioreq_vcpu *sv; + unsigned int cf8 = d->arch.hvm.pci_cf8; - spin_lock(&s->lock); + if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO ) + return -EINVAL; - if ( s->enabled ) - goto done; + if ( p->type == IOREQ_TYPE_PIO && + (p->addr & ~3) == 0xcfc && + CF8_ENABLED(cf8) ) + { + unsigned int x86_fam, reg; + pci_sbdf_t sbdf; - arch_ioreq_server_enable(s); + reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf); - s->enabled = true; + /* PCI config data cycle */ + *type = XEN_DMOP_IO_RANGE_PCI; + *addr = ((uint64_t)sbdf.sbdf << 32) | reg; + /* AMD extended configuration space access? */ + if ( CF8_ADDR_HI(cf8) && + d->arch.cpuid->x86_vendor == X86_VENDOR_AMD && + (x86_fam = get_cpu_family( + d->arch.cpuid->basic.raw_fms, NULL, NULL)) >= 0x10 && + x86_fam < 0x17 ) + { + uint64_t msr_val; - list_for_each_entry ( sv, - &s->ioreq_vcpu_list, - list_entry ) - hvm_update_ioreq_evtchn(s, sv); + if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) && + (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) ) + *addr |= CF8_ADDR_HI(cf8); + } + } + else + { + *type = (p->type == IOREQ_TYPE_PIO) ? 
+ XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY; + *addr = p->addr; + } - done: - spin_unlock(&s->lock); -} - -void arch_ioreq_server_disable(struct hvm_ioreq_server *s) -{ - hvm_add_ioreq_gfn(s, true); - hvm_add_ioreq_gfn(s, false); -} - -static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s) -{ - spin_lock(&s->lock); - - if ( !s->enabled ) - goto done; - - arch_ioreq_server_disable(s); - - s->enabled = false; - - done: - spin_unlock(&s->lock); -} - -static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, - struct domain *d, int bufioreq_handling, - ioservid_t id) -{ - struct domain *currd = current->domain; - struct vcpu *v; - int rc; - - s->target = d; - - get_knownalive_domain(currd); - s->emulator = currd; - - spin_lock_init(&s->lock); - INIT_LIST_HEAD(&s->ioreq_vcpu_list); - spin_lock_init(&s->bufioreq_lock); - - s->ioreq.gfn = INVALID_GFN; - s->bufioreq.gfn = INVALID_GFN; - - rc = hvm_ioreq_server_alloc_rangesets(s, id); - if ( rc ) - return rc; - - s->bufioreq_handling = bufioreq_handling; - - for_each_vcpu ( d, v ) - { - rc = hvm_ioreq_server_add_vcpu(s, v); - if ( rc ) - goto fail_add; - } - - return 0; - - fail_add: - hvm_ioreq_server_remove_all_vcpus(s); - arch_ioreq_server_unmap_pages(s); - - hvm_ioreq_server_free_rangesets(s); - - put_domain(s->emulator); - return rc; -} - -static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s) -{ - ASSERT(!s->enabled); - hvm_ioreq_server_remove_all_vcpus(s); - - /* - * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and - * hvm_ioreq_server_free_pages() in that order. - * This is because the former will do nothing if the pages - * are not mapped, leaving the page to be freed by the latter. - * However if the pages are mapped then the former will set - * the page_info pointer to NULL, meaning the latter will do - * nothing. - */ - arch_ioreq_server_unmap_pages(s); - hvm_ioreq_server_free_pages(s); - - hvm_ioreq_server_free_rangesets(s); - - put_domain(s->emulator); -} - -int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, - ioservid_t *id) -{ - struct hvm_ioreq_server *s; - unsigned int i; - int rc; - - if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC ) - return -EINVAL; - - s = xzalloc(struct hvm_ioreq_server); - if ( !s ) - return -ENOMEM; - - domain_pause(d); - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ ) - { - if ( !GET_IOREQ_SERVER(d, i) ) - break; - } - - rc = -ENOSPC; - if ( i >= MAX_NR_IOREQ_SERVERS ) - goto fail; - - /* - * It is safe to call set_ioreq_server() prior to - * hvm_ioreq_server_init() since the target domain is paused. 
- */ - set_ioreq_server(d, i, s); - - rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i); - if ( rc ) - { - set_ioreq_server(d, i, NULL); - goto fail; - } - - if ( id ) - *id = i; - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - domain_unpause(d); - - return 0; - - fail: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - domain_unpause(d); - - xfree(s); - return rc; -} - -/* Called when target domain is paused */ -void arch_ioreq_server_destroy(struct hvm_ioreq_server *s) -{ - p2m_set_ioreq_server(s->target, 0, s); -} - -int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) -{ - struct hvm_ioreq_server *s; - int rc; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s = get_ioreq_server(d, id); - - rc = -ENOENT; - if ( !s ) - goto out; - - rc = -EPERM; - if ( s->emulator != current->domain ) - goto out; - - domain_pause(d); - - arch_ioreq_server_destroy(s); - - hvm_ioreq_server_disable(s); - - /* - * It is safe to call hvm_ioreq_server_deinit() prior to - * set_ioreq_server() since the target domain is paused. - */ - hvm_ioreq_server_deinit(s); - set_ioreq_server(d, id, NULL); - - domain_unpause(d); - - xfree(s); - - rc = 0; - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t *bufioreq_port) -{ - struct hvm_ioreq_server *s; - int rc; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s = get_ioreq_server(d, id); - - rc = -ENOENT; - if ( !s ) - goto out; - - rc = -EPERM; - if ( s->emulator != current->domain ) - goto out; - - if ( ioreq_gfn || bufioreq_gfn ) - { - rc = arch_ioreq_server_map_pages(s); - if ( rc ) - goto out; - } - - if ( ioreq_gfn ) - *ioreq_gfn = gfn_x(s->ioreq.gfn); - - if ( HANDLE_BUFIOREQ(s) ) - { - if ( bufioreq_gfn ) - *bufioreq_gfn = gfn_x(s->bufioreq.gfn); - - if ( bufioreq_port ) - *bufioreq_port = s->bufioreq_evtchn; - } - - rc = 0; - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, - unsigned long idx, mfn_t *mfn) -{ - struct hvm_ioreq_server *s; - int rc; - - ASSERT(is_hvm_domain(d)); - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s = get_ioreq_server(d, id); - - rc = -ENOENT; - if ( !s ) - goto out; - - rc = -EPERM; - if ( s->emulator != current->domain ) - goto out; - - rc = hvm_ioreq_server_alloc_pages(s); - if ( rc ) - goto out; - - switch ( idx ) - { - case XENMEM_resource_ioreq_server_frame_bufioreq: - rc = -ENOENT; - if ( !HANDLE_BUFIOREQ(s) ) - goto out; - - *mfn = page_to_mfn(s->bufioreq.page); - rc = 0; - break; - - case XENMEM_resource_ioreq_server_frame_ioreq(0): - *mfn = page_to_mfn(s->ioreq.page); - rc = 0; - break; - - default: - rc = -EINVAL; - break; - } - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end) -{ - struct hvm_ioreq_server *s; - struct rangeset *r; - int rc; - - if ( start > end ) - return -EINVAL; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s = get_ioreq_server(d, id); - - rc = -ENOENT; - if ( !s ) - goto out; - - rc = -EPERM; - if ( s->emulator != current->domain ) - goto out; - - switch ( type ) - { - case XEN_DMOP_IO_RANGE_PORT: - case XEN_DMOP_IO_RANGE_MEMORY: - case XEN_DMOP_IO_RANGE_PCI: - r = s->range[type]; - break; - - 
default: - r = NULL; - break; - } - - rc = -EINVAL; - if ( !r ) - goto out; - - rc = -EEXIST; - if ( rangeset_overlaps_range(r, start, end) ) - goto out; - - rc = rangeset_add_range(r, start, end); - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end) -{ - struct hvm_ioreq_server *s; - struct rangeset *r; - int rc; - - if ( start > end ) - return -EINVAL; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s = get_ioreq_server(d, id); - - rc = -ENOENT; - if ( !s ) - goto out; - - rc = -EPERM; - if ( s->emulator != current->domain ) - goto out; - - switch ( type ) - { - case XEN_DMOP_IO_RANGE_PORT: - case XEN_DMOP_IO_RANGE_MEMORY: - case XEN_DMOP_IO_RANGE_PCI: - r = s->range[type]; - break; - - default: - r = NULL; - break; - } - - rc = -EINVAL; - if ( !r ) - goto out; - - rc = -ENOENT; - if ( !rangeset_contains_range(r, start, end) ) - goto out; - - rc = rangeset_remove_range(r, start, end); - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -/* Called with ioreq_server lock held */ -int arch_ioreq_server_map_mem_type(struct domain *d, - struct hvm_ioreq_server *s, - uint32_t flags) -{ - int rc = p2m_set_ioreq_server(d, flags, s); - - if ( rc == 0 && flags == 0 ) - { - const struct p2m_domain *p2m = p2m_get_hostp2m(d); - - if ( read_atomic(&p2m->ioreq.entry_count) ) - p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw); - } - - return rc; -} - -/* - * Map or unmap an ioreq server to specific memory type. For now, only - * HVMMEM_ioreq_server is supported, and in the future new types can be - * introduced, e.g. HVMMEM_ioreq_serverX mapped to ioreq server X. And - * currently, only write operations are to be forwarded to an ioreq server. - * Support for the emulation of read operations can be added when an ioreq - * server has such requirement in the future. 
- */ -int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint32_t flags) -{ - struct hvm_ioreq_server *s; - int rc; - - if ( type != HVMMEM_ioreq_server ) - return -EINVAL; - - if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE ) - return -EINVAL; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s = get_ioreq_server(d, id); - - rc = -ENOENT; - if ( !s ) - goto out; - - rc = -EPERM; - if ( s->emulator != current->domain ) - goto out; - - rc = arch_ioreq_server_map_mem_type(d, s, flags); - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled) -{ - struct hvm_ioreq_server *s; - int rc; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s = get_ioreq_server(d, id); - - rc = -ENOENT; - if ( !s ) - goto out; - - rc = -EPERM; - if ( s->emulator != current->domain ) - goto out; - - domain_pause(d); - - if ( enabled ) - hvm_ioreq_server_enable(s); - else - hvm_ioreq_server_disable(s); - - domain_unpause(d); - - rc = 0; - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - return rc; -} - -int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) -{ - struct hvm_ioreq_server *s; - unsigned int id; - int rc; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - rc = hvm_ioreq_server_add_vcpu(s, v); - if ( rc ) - goto fail; - } - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return 0; - - fail: - while ( ++id != MAX_NR_IOREQ_SERVERS ) - { - s = GET_IOREQ_SERVER(d, id); - - if ( !s ) - continue; - - hvm_ioreq_server_remove_vcpu(s, v); - } - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) -{ - struct hvm_ioreq_server *s; - unsigned int id; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - FOR_EACH_IOREQ_SERVER(d, id, s) - hvm_ioreq_server_remove_vcpu(s, v); - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); -} - -bool arch_ioreq_server_destroy_all(struct domain *d) -{ - return relocate_portio_handler(d, 0xcf8, 0xcf8, 4); -} - -void hvm_destroy_all_ioreq_servers(struct domain *d) -{ - struct hvm_ioreq_server *s; - unsigned int id; - - if ( !arch_ioreq_server_destroy_all(d) ) - return; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - /* No need to domain_pause() as the domain is being torn down */ - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - hvm_ioreq_server_disable(s); - - /* - * It is safe to call hvm_ioreq_server_deinit() prior to - * set_ioreq_server() since the target domain is being destroyed. - */ - hvm_ioreq_server_deinit(s); - set_ioreq_server(d, id, NULL); - - xfree(s); - } - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); -} - -int arch_ioreq_server_get_type_addr(const struct domain *d, - const ioreq_t *p, - uint8_t *type, - uint64_t *addr) -{ - unsigned int cf8 = d->arch.hvm.pci_cf8; - - if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO ) - return -EINVAL; - - if ( p->type == IOREQ_TYPE_PIO && - (p->addr & ~3) == 0xcfc && - CF8_ENABLED(cf8) ) - { - unsigned int x86_fam, reg; - pci_sbdf_t sbdf; - - reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf); - - /* PCI config data cycle */ - *type = XEN_DMOP_IO_RANGE_PCI; - *addr = ((uint64_t)sbdf.sbdf << 32) | reg; - /* AMD extended configuration space access? 
*/ - if ( CF8_ADDR_HI(cf8) && - d->arch.cpuid->x86_vendor == X86_VENDOR_AMD && - (x86_fam = get_cpu_family( - d->arch.cpuid->basic.raw_fms, NULL, NULL)) >= 0x10 && - x86_fam < 0x17 ) - { - uint64_t msr_val; - - if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) && - (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) ) - *addr |= CF8_ADDR_HI(cf8); - } - } - else - { - *type = (p->type == IOREQ_TYPE_PIO) ? - XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY; - *addr = p->addr; - } - - return 0; -} - -struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p) -{ - struct hvm_ioreq_server *s; - uint8_t type; - uint64_t addr; - unsigned int id; - - if ( arch_ioreq_server_get_type_addr(d, p, &type, &addr) ) - return NULL; - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - struct rangeset *r; - - if ( !s->enabled ) - continue; - - r = s->range[type]; - - switch ( type ) - { - unsigned long start, end; - - case XEN_DMOP_IO_RANGE_PORT: - start = addr; - end = start + p->size - 1; - if ( rangeset_contains_range(r, start, end) ) - return s; - - break; - - case XEN_DMOP_IO_RANGE_MEMORY: - start = hvm_mmio_first_byte(p); - end = hvm_mmio_last_byte(p); - - if ( rangeset_contains_range(r, start, end) ) - return s; - - break; - - case XEN_DMOP_IO_RANGE_PCI: - if ( rangeset_contains_singleton(r, addr >> 32) ) - { - p->type = IOREQ_TYPE_PCI_CONFIG; - p->addr = addr; - return s; - } - - break; - } - } - - return NULL; -} - -static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p) -{ - struct domain *d = current->domain; - struct hvm_ioreq_page *iorp; - buffered_iopage_t *pg; - buf_ioreq_t bp = { .data = p->data, - .addr = p->addr, - .type = p->type, - .dir = p->dir }; - /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */ - int qw = 0; - - /* Ensure buffered_iopage fits in a page */ - BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE); - - iorp = &s->bufioreq; - pg = iorp->va; - - if ( !pg ) - return IOREQ_STATUS_UNHANDLED; - - /* - * Return 0 for the cases we can't deal with: - * - 'addr' is only a 20-bit field, so we cannot address beyond 1MB - * - we cannot buffer accesses to guest memory buffers, as the guest - * may expect the memory buffer to be synchronously accessed - * - the count field is usually used with data_is_ptr and since we don't - * support data_is_ptr we do not waste space for the count field either - */ - if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) ) - return 0; - - switch ( p->size ) - { - case 1: - bp.size = 0; - break; - case 2: - bp.size = 1; - break; - case 4: - bp.size = 2; - break; - case 8: - bp.size = 3; - qw = 1; - break; - default: - gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size); - return IOREQ_STATUS_UNHANDLED; - } - - spin_lock(&s->bufioreq_lock); - - if ( (pg->ptrs.write_pointer - pg->ptrs.read_pointer) >= - (IOREQ_BUFFER_SLOT_NUM - qw) ) - { - /* The queue is full: send the iopacket through the normal path. */ - spin_unlock(&s->bufioreq_lock); - return IOREQ_STATUS_UNHANDLED; - } - - pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp; - - if ( qw ) - { - bp.data = p->data >> 32; - pg->buf_ioreq[(pg->ptrs.write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = bp; - } - - /* Make the ioreq_t visible /before/ write_pointer. */ - smp_wmb(); - pg->ptrs.write_pointer += qw ? 2 : 1; - - /* Canonicalize read/write pointers to prevent their overflow. 
*/ - while ( (s->bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC) && - qw++ < IOREQ_BUFFER_SLOT_NUM && - pg->ptrs.read_pointer >= IOREQ_BUFFER_SLOT_NUM ) - { - union bufioreq_pointers old = pg->ptrs, new; - unsigned int n = old.read_pointer / IOREQ_BUFFER_SLOT_NUM; - - new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM; - new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM; - cmpxchg(&pg->ptrs.full, old.full, new.full); - } - - notify_via_xen_event_channel(d, s->bufioreq_evtchn); - spin_unlock(&s->bufioreq_lock); - - return IOREQ_STATUS_HANDLED; -} - -int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, - bool buffered) -{ - struct vcpu *curr = current; - struct domain *d = curr->domain; - struct hvm_ioreq_vcpu *sv; - - ASSERT(s); - - if ( buffered ) - return hvm_send_buffered_ioreq(s, proto_p); - - if ( unlikely(!vcpu_start_shutdown_deferral(curr)) ) - return IOREQ_STATUS_RETRY; - - list_for_each_entry ( sv, - &s->ioreq_vcpu_list, - list_entry ) - { - if ( sv->vcpu == curr ) - { - evtchn_port_t port = sv->ioreq_evtchn; - ioreq_t *p = get_ioreq(s, curr); - - if ( unlikely(p->state != STATE_IOREQ_NONE) ) - { - gprintk(XENLOG_ERR, "device model set bad IO state %d\n", - p->state); - break; - } - - if ( unlikely(p->vp_eport != port) ) - { - gprintk(XENLOG_ERR, "device model set bad event channel %d\n", - p->vp_eport); - break; - } - - proto_p->state = STATE_IOREQ_NONE; - proto_p->vp_eport = port; - *p = *proto_p; - - prepare_wait_on_xen_event_channel(port); - - /* - * Following happens /after/ blocking and setting up ioreq - * contents. prepare_wait_on_xen_event_channel() is an implicit - * barrier. - */ - p->state = STATE_IOREQ_READY; - notify_via_xen_event_channel(d, port); - - sv->pending = true; - return IOREQ_STATUS_RETRY; - } - } - - return IOREQ_STATUS_UNHANDLED; -} - -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) -{ - struct domain *d = current->domain; - struct hvm_ioreq_server *s; - unsigned int id, failed = 0; - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - if ( !s->enabled ) - continue; - - if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED ) - failed++; - } - - return failed; + return 0; } static int hvm_access_cf8( @@ -1574,13 +339,6 @@ void arch_ioreq_domain_init(struct domain *d) register_portio_handler(d, 0xcf8, 4, hvm_access_cf8); } -void hvm_ioreq_init(struct domain *d) -{ - spin_lock_init(&d->arch.hvm.ioreq_server.lock); - - arch_ioreq_domain_init(d); -} - /* * Local variables: * mode: C diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c index 5a50339..e4638ef 100644 --- a/xen/arch/x86/mm.c +++ b/xen/arch/x86/mm.c @@ -100,6 +100,7 @@ */ #include +#include #include #include #include @@ -141,7 +142,6 @@ #include #include #include -#include #include #include diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c index a33e100..f7d74d3 100644 --- a/xen/arch/x86/mm/shadow/common.c +++ b/xen/arch/x86/mm/shadow/common.c @@ -20,6 +20,7 @@ * along with this program; If not, see . */ +#include #include #include #include @@ -34,7 +35,6 @@ #include #include #include -#include #include #include "private.h" diff --git a/xen/common/Kconfig b/xen/common/Kconfig index 3e2cf25..c971ded 100644 --- a/xen/common/Kconfig +++ b/xen/common/Kconfig @@ -139,6 +139,9 @@ config HYPFS_CONFIG Disable this option in case you want to spare some memory or you want to hide the .config contents from dom0. 
+config IOREQ_SERVER + bool + config KEXEC bool "kexec support" default y diff --git a/xen/common/Makefile b/xen/common/Makefile index d109f27..c0e91c4 100644 --- a/xen/common/Makefile +++ b/xen/common/Makefile @@ -15,6 +15,7 @@ obj-$(CONFIG_GRANT_TABLE) += grant_table.o obj-y += guestcopy.o obj-bin-y += gunzip.init.o obj-$(CONFIG_HYPFS) += hypfs.o +obj-$(CONFIG_IOREQ_SERVER) += ioreq.o obj-y += irq.o obj-y += kernel.o obj-y += keyhandler.o diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c new file mode 100644 index 0000000..13ea959 --- /dev/null +++ b/xen/common/ioreq.c @@ -0,0 +1,1287 @@ +/* + * ioreq.c: hardware virtual machine I/O emulation + * + * Copyright (c) 2016 Citrix Systems Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; If not, see . + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include +#include + +static void set_ioreq_server(struct domain *d, unsigned int id, + struct hvm_ioreq_server *s) +{ + ASSERT(id < MAX_NR_IOREQ_SERVERS); + ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]); + + d->arch.hvm.ioreq_server.server[id] = s; +} + +#define GET_IOREQ_SERVER(d, id) \ + (d)->arch.hvm.ioreq_server.server[id] + +static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d, + unsigned int id) +{ + if ( id >= MAX_NR_IOREQ_SERVERS ) + return NULL; + + return GET_IOREQ_SERVER(d, id); +} + +/* + * Iterate over all possible ioreq servers. + * + * NOTE: The iteration is backwards such that more recently created + * ioreq servers are favoured in hvm_select_ioreq_server(). + * This is a semantic that previously existed when ioreq servers + * were held in a linked list. + */ +#define FOR_EACH_IOREQ_SERVER(d, id, s) \ + for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \ + if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \ + continue; \ + else + +static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v) +{ + shared_iopage_t *p = s->ioreq.va; + + ASSERT((v == current) || !vcpu_runnable(v)); + ASSERT(p != NULL); + + return &p->vcpu_ioreq[v->vcpu_id]; +} + +static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v, + struct hvm_ioreq_server **srvp) +{ + struct domain *d = v->domain; + struct hvm_ioreq_server *s; + unsigned int id; + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + struct hvm_ioreq_vcpu *sv; + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + { + if ( sv->vcpu == v && sv->pending ) + { + if ( srvp ) + *srvp = s; + return sv; + } + } + } + + return NULL; +} + +bool hvm_io_pending(struct vcpu *v) +{ + return get_pending_vcpu(v, NULL); +} + +static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p) +{ + unsigned int prev_state = STATE_IOREQ_NONE; + unsigned int state = p->state; + uint64_t data = ~0; + + smp_rmb(); + + /* + * The only reason we should see this condition be false is when an + * emulator dying races with I/O being requested. 
+ */ + while ( likely(state != STATE_IOREQ_NONE) ) + { + if ( unlikely(state < prev_state) ) + { + gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n", + prev_state, state); + sv->pending = false; + domain_crash(sv->vcpu->domain); + return false; /* bail */ + } + + switch ( prev_state = state ) + { + case STATE_IORESP_READY: /* IORESP_READY -> NONE */ + p->state = STATE_IOREQ_NONE; + data = p->data; + break; + + case STATE_IOREQ_READY: /* IOREQ_{READY,INPROCESS} -> IORESP_READY */ + case STATE_IOREQ_INPROCESS: + wait_on_xen_event_channel(sv->ioreq_evtchn, + ({ state = p->state; + smp_rmb(); + state != prev_state; })); + continue; + + default: + gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state); + sv->pending = false; + domain_crash(sv->vcpu->domain); + return false; /* bail */ + } + + break; + } + + p = &sv->vcpu->arch.hvm.hvm_io.io_req; + if ( hvm_ioreq_needs_completion(p) ) + p->data = data; + + sv->pending = false; + + return true; +} + +bool handle_hvm_io_completion(struct vcpu *v) +{ + struct domain *d = v->domain; + struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io; + struct hvm_ioreq_server *s; + struct hvm_ioreq_vcpu *sv; + enum hvm_io_completion io_completion; + + if ( has_vpci(d) && vpci_process_pending(v) ) + { + raise_softirq(SCHEDULE_SOFTIRQ); + return false; + } + + sv = get_pending_vcpu(v, &s); + if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) ) + return false; + + vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ? + STATE_IORESP_READY : STATE_IOREQ_NONE; + + msix_write_completion(v); + vcpu_end_shutdown_deferral(v); + + io_completion = vio->io_completion; + vio->io_completion = HVMIO_no_completion; + + switch ( io_completion ) + { + case HVMIO_no_completion: + break; + + case HVMIO_mmio_completion: + return ioreq_complete_mmio(); + + case HVMIO_pio_completion: + return handle_pio(vio->io_req.addr, vio->io_req.size, + vio->io_req.dir); + + default: + return arch_vcpu_ioreq_completion(io_completion); + } + + return true; +} + +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) +{ + struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq; + struct page_info *page; + + if ( iorp->page ) + { + /* + * If a guest frame has already been mapped (which may happen + * on demand if hvm_get_ioreq_server_info() is called), then + * allocating a page is not permitted. + */ + if ( !gfn_eq(iorp->gfn, INVALID_GFN) ) + return -EPERM; + + return 0; + } + + page = alloc_domheap_page(s->target, MEMF_no_refcount); + + if ( !page ) + return -ENOMEM; + + if ( !get_page_and_type(page, s->target, PGT_writable_page) ) + { + /* + * The domain can't possibly know about this page yet, so failure + * here is a clear indication of something fishy going on. + */ + domain_crash(s->emulator); + return -ENODATA; + } + + iorp->va = __map_domain_page_global(page); + if ( !iorp->va ) + goto fail; + + iorp->page = page; + clear_page(iorp->va); + return 0; + + fail: + put_page_alloc_ref(page); + put_page_and_type(page); + + return -ENOMEM; +} + +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) +{ + struct hvm_ioreq_page *iorp = buf ? 
&s->bufioreq : &s->ioreq; + struct page_info *page = iorp->page; + + if ( !page ) + return; + + iorp->page = NULL; + + unmap_domain_page_global(iorp->va); + iorp->va = NULL; + + put_page_alloc_ref(page); + put_page_and_type(page); +} + +bool is_ioreq_server_page(struct domain *d, const struct page_info *page) +{ + const struct hvm_ioreq_server *s; + unsigned int id; + bool found = false; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + if ( (s->ioreq.page == page) || (s->bufioreq.page == page) ) + { + found = true; + break; + } + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return found; +} + +static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s, + struct hvm_ioreq_vcpu *sv) +{ + ASSERT(spin_is_locked(&s->lock)); + + if ( s->ioreq.va != NULL ) + { + ioreq_t *p = get_ioreq(s, sv->vcpu); + + p->vp_eport = sv->ioreq_evtchn; + } +} + +static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, + struct vcpu *v) +{ + struct hvm_ioreq_vcpu *sv; + int rc; + + sv = xzalloc(struct hvm_ioreq_vcpu); + + rc = -ENOMEM; + if ( !sv ) + goto fail1; + + spin_lock(&s->lock); + + rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id, + s->emulator->domain_id, NULL); + if ( rc < 0 ) + goto fail2; + + sv->ioreq_evtchn = rc; + + if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) ) + { + rc = alloc_unbound_xen_event_channel(v->domain, 0, + s->emulator->domain_id, NULL); + if ( rc < 0 ) + goto fail3; + + s->bufioreq_evtchn = rc; + } + + sv->vcpu = v; + + list_add(&sv->list_entry, &s->ioreq_vcpu_list); + + if ( s->enabled ) + hvm_update_ioreq_evtchn(s, sv); + + spin_unlock(&s->lock); + return 0; + + fail3: + free_xen_event_channel(v->domain, sv->ioreq_evtchn); + + fail2: + spin_unlock(&s->lock); + xfree(sv); + + fail1: + return rc; +} + +static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, + struct vcpu *v) +{ + struct hvm_ioreq_vcpu *sv; + + spin_lock(&s->lock); + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + { + if ( sv->vcpu != v ) + continue; + + list_del(&sv->list_entry); + + if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) ) + free_xen_event_channel(v->domain, s->bufioreq_evtchn); + + free_xen_event_channel(v->domain, sv->ioreq_evtchn); + + xfree(sv); + break; + } + + spin_unlock(&s->lock); +} + +static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s) +{ + struct hvm_ioreq_vcpu *sv, *next; + + spin_lock(&s->lock); + + list_for_each_entry_safe ( sv, + next, + &s->ioreq_vcpu_list, + list_entry ) + { + struct vcpu *v = sv->vcpu; + + list_del(&sv->list_entry); + + if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) ) + free_xen_event_channel(v->domain, s->bufioreq_evtchn); + + free_xen_event_channel(v->domain, sv->ioreq_evtchn); + + xfree(sv); + } + + spin_unlock(&s->lock); +} + +static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s) +{ + int rc; + + rc = hvm_alloc_ioreq_mfn(s, false); + + if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) ) + rc = hvm_alloc_ioreq_mfn(s, true); + + if ( rc ) + hvm_free_ioreq_mfn(s, false); + + return rc; +} + +static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s) +{ + hvm_free_ioreq_mfn(s, true); + hvm_free_ioreq_mfn(s, false); +} + +static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s) +{ + unsigned int i; + + for ( i = 0; i < NR_IO_RANGE_TYPES; i++ ) + rangeset_destroy(s->range[i]); +} + +static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s, + ioservid_t id) +{ + unsigned int i; + 
int rc; + + for ( i = 0; i < NR_IO_RANGE_TYPES; i++ ) + { + char *name; + + rc = asprintf(&name, "ioreq_server %d %s", id, + (i == XEN_DMOP_IO_RANGE_PORT) ? "port" : + (i == XEN_DMOP_IO_RANGE_MEMORY) ? "memory" : + (i == XEN_DMOP_IO_RANGE_PCI) ? "pci" : + ""); + if ( rc ) + goto fail; + + s->range[i] = rangeset_new(s->target, name, + RANGESETF_prettyprint_hex); + + xfree(name); + + rc = -ENOMEM; + if ( !s->range[i] ) + goto fail; + + rangeset_limit(s->range[i], MAX_NR_IO_RANGES); + } + + return 0; + + fail: + hvm_ioreq_server_free_rangesets(s); + + return rc; +} + +static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s) +{ + struct hvm_ioreq_vcpu *sv; + + spin_lock(&s->lock); + + if ( s->enabled ) + goto done; + + arch_ioreq_server_enable(s); + + s->enabled = true; + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + hvm_update_ioreq_evtchn(s, sv); + + done: + spin_unlock(&s->lock); +} + +static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s) +{ + spin_lock(&s->lock); + + if ( !s->enabled ) + goto done; + + arch_ioreq_server_disable(s); + + s->enabled = false; + + done: + spin_unlock(&s->lock); +} + +static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, + struct domain *d, int bufioreq_handling, + ioservid_t id) +{ + struct domain *currd = current->domain; + struct vcpu *v; + int rc; + + s->target = d; + + get_knownalive_domain(currd); + s->emulator = currd; + + spin_lock_init(&s->lock); + INIT_LIST_HEAD(&s->ioreq_vcpu_list); + spin_lock_init(&s->bufioreq_lock); + + s->ioreq.gfn = INVALID_GFN; + s->bufioreq.gfn = INVALID_GFN; + + rc = hvm_ioreq_server_alloc_rangesets(s, id); + if ( rc ) + return rc; + + s->bufioreq_handling = bufioreq_handling; + + for_each_vcpu ( d, v ) + { + rc = hvm_ioreq_server_add_vcpu(s, v); + if ( rc ) + goto fail_add; + } + + return 0; + + fail_add: + hvm_ioreq_server_remove_all_vcpus(s); + arch_ioreq_server_unmap_pages(s); + + hvm_ioreq_server_free_rangesets(s); + + put_domain(s->emulator); + return rc; +} + +static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s) +{ + ASSERT(!s->enabled); + hvm_ioreq_server_remove_all_vcpus(s); + + /* + * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and + * hvm_ioreq_server_free_pages() in that order. + * This is because the former will do nothing if the pages + * are not mapped, leaving the page to be freed by the latter. + * However if the pages are mapped then the former will set + * the page_info pointer to NULL, meaning the latter will do + * nothing. + */ + arch_ioreq_server_unmap_pages(s); + hvm_ioreq_server_free_pages(s); + + hvm_ioreq_server_free_rangesets(s); + + put_domain(s->emulator); +} + +int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, + ioservid_t *id) +{ + struct hvm_ioreq_server *s; + unsigned int i; + int rc; + + if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC ) + return -EINVAL; + + s = xzalloc(struct hvm_ioreq_server); + if ( !s ) + return -ENOMEM; + + domain_pause(d); + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ ) + { + if ( !GET_IOREQ_SERVER(d, i) ) + break; + } + + rc = -ENOSPC; + if ( i >= MAX_NR_IOREQ_SERVERS ) + goto fail; + + /* + * It is safe to call set_ioreq_server() prior to + * hvm_ioreq_server_init() since the target domain is paused. 
+ */ + set_ioreq_server(d, i, s); + + rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i); + if ( rc ) + { + set_ioreq_server(d, i, NULL); + goto fail; + } + + if ( id ) + *id = i; + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + domain_unpause(d); + + return 0; + + fail: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + domain_unpause(d); + + xfree(s); + return rc; +} + +int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) +{ + struct hvm_ioreq_server *s; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s = get_ioreq_server(d, id); + + rc = -ENOENT; + if ( !s ) + goto out; + + rc = -EPERM; + if ( s->emulator != current->domain ) + goto out; + + domain_pause(d); + + arch_ioreq_server_destroy(s); + + hvm_ioreq_server_disable(s); + + /* + * It is safe to call hvm_ioreq_server_deinit() prior to + * set_ioreq_server() since the target domain is paused. + */ + hvm_ioreq_server_deinit(s); + set_ioreq_server(d, id, NULL); + + domain_unpause(d); + + xfree(s); + + rc = 0; + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, + unsigned long *ioreq_gfn, + unsigned long *bufioreq_gfn, + evtchn_port_t *bufioreq_port) +{ + struct hvm_ioreq_server *s; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s = get_ioreq_server(d, id); + + rc = -ENOENT; + if ( !s ) + goto out; + + rc = -EPERM; + if ( s->emulator != current->domain ) + goto out; + + if ( ioreq_gfn || bufioreq_gfn ) + { + rc = arch_ioreq_server_map_pages(s); + if ( rc ) + goto out; + } + + if ( ioreq_gfn ) + *ioreq_gfn = gfn_x(s->ioreq.gfn); + + if ( HANDLE_BUFIOREQ(s) ) + { + if ( bufioreq_gfn ) + *bufioreq_gfn = gfn_x(s->bufioreq.gfn); + + if ( bufioreq_port ) + *bufioreq_port = s->bufioreq_evtchn; + } + + rc = 0; + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, + unsigned long idx, mfn_t *mfn) +{ + struct hvm_ioreq_server *s; + int rc; + + ASSERT(is_hvm_domain(d)); + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s = get_ioreq_server(d, id); + + rc = -ENOENT; + if ( !s ) + goto out; + + rc = -EPERM; + if ( s->emulator != current->domain ) + goto out; + + rc = hvm_ioreq_server_alloc_pages(s); + if ( rc ) + goto out; + + switch ( idx ) + { + case XENMEM_resource_ioreq_server_frame_bufioreq: + rc = -ENOENT; + if ( !HANDLE_BUFIOREQ(s) ) + goto out; + + *mfn = page_to_mfn(s->bufioreq.page); + rc = 0; + break; + + case XENMEM_resource_ioreq_server_frame_ioreq(0): + *mfn = page_to_mfn(s->ioreq.page); + rc = 0; + break; + + default: + rc = -EINVAL; + break; + } + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end) +{ + struct hvm_ioreq_server *s; + struct rangeset *r; + int rc; + + if ( start > end ) + return -EINVAL; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s = get_ioreq_server(d, id); + + rc = -ENOENT; + if ( !s ) + goto out; + + rc = -EPERM; + if ( s->emulator != current->domain ) + goto out; + + switch ( type ) + { + case XEN_DMOP_IO_RANGE_PORT: + case XEN_DMOP_IO_RANGE_MEMORY: + case XEN_DMOP_IO_RANGE_PCI: + r = s->range[type]; + break; + + default: + r = NULL; + break; + } + + rc = -EINVAL; + if ( !r ) + goto out; + + rc = -EEXIST; + if ( rangeset_overlaps_range(r, start, end) ) + goto out; + + 
rc = rangeset_add_range(r, start, end); + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end) +{ + struct hvm_ioreq_server *s; + struct rangeset *r; + int rc; + + if ( start > end ) + return -EINVAL; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s = get_ioreq_server(d, id); + + rc = -ENOENT; + if ( !s ) + goto out; + + rc = -EPERM; + if ( s->emulator != current->domain ) + goto out; + + switch ( type ) + { + case XEN_DMOP_IO_RANGE_PORT: + case XEN_DMOP_IO_RANGE_MEMORY: + case XEN_DMOP_IO_RANGE_PCI: + r = s->range[type]; + break; + + default: + r = NULL; + break; + } + + rc = -EINVAL; + if ( !r ) + goto out; + + rc = -ENOENT; + if ( !rangeset_contains_range(r, start, end) ) + goto out; + + rc = rangeset_remove_range(r, start, end); + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +/* + * Map or unmap an ioreq server to specific memory type. For now, only + * HVMMEM_ioreq_server is supported, and in the future new types can be + * introduced, e.g. HVMMEM_ioreq_serverX mapped to ioreq server X. And + * currently, only write operations are to be forwarded to an ioreq server. + * Support for the emulation of read operations can be added when an ioreq + * server has such requirement in the future. + */ +int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint32_t flags) +{ + struct hvm_ioreq_server *s; + int rc; + + if ( type != HVMMEM_ioreq_server ) + return -EINVAL; + + if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE ) + return -EINVAL; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s = get_ioreq_server(d, id); + + rc = -ENOENT; + if ( !s ) + goto out; + + rc = -EPERM; + if ( s->emulator != current->domain ) + goto out; + + rc = arch_ioreq_server_map_mem_type(d, s, flags); + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, + bool enabled) +{ + struct hvm_ioreq_server *s; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s = get_ioreq_server(d, id); + + rc = -ENOENT; + if ( !s ) + goto out; + + rc = -EPERM; + if ( s->emulator != current->domain ) + goto out; + + domain_pause(d); + + if ( enabled ) + hvm_ioreq_server_enable(s); + else + hvm_ioreq_server_disable(s); + + domain_unpause(d); + + rc = 0; + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + return rc; +} + +int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) +{ + struct hvm_ioreq_server *s; + unsigned int id; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + rc = hvm_ioreq_server_add_vcpu(s, v); + if ( rc ) + goto fail; + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return 0; + + fail: + while ( ++id != MAX_NR_IOREQ_SERVERS ) + { + s = GET_IOREQ_SERVER(d, id); + + if ( !s ) + continue; + + hvm_ioreq_server_remove_vcpu(s, v); + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) +{ + struct hvm_ioreq_server *s; + unsigned int id; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + FOR_EACH_IOREQ_SERVER(d, id, s) + hvm_ioreq_server_remove_vcpu(s, v); + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); +} + +void 
hvm_destroy_all_ioreq_servers(struct domain *d) +{ + struct hvm_ioreq_server *s; + unsigned int id; + + if ( !arch_ioreq_server_destroy_all(d) ) + return; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + /* No need to domain_pause() as the domain is being torn down */ + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + hvm_ioreq_server_disable(s); + + /* + * It is safe to call hvm_ioreq_server_deinit() prior to + * set_ioreq_server() since the target domain is being destroyed. + */ + hvm_ioreq_server_deinit(s); + set_ioreq_server(d, id, NULL); + + xfree(s); + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); +} + +struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, + ioreq_t *p) +{ + struct hvm_ioreq_server *s; + uint8_t type; + uint64_t addr; + unsigned int id; + + if ( arch_ioreq_server_get_type_addr(d, p, &type, &addr) ) + return NULL; + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + struct rangeset *r; + + if ( !s->enabled ) + continue; + + r = s->range[type]; + + switch ( type ) + { + unsigned long start, end; + + case XEN_DMOP_IO_RANGE_PORT: + start = addr; + end = start + p->size - 1; + if ( rangeset_contains_range(r, start, end) ) + return s; + + break; + + case XEN_DMOP_IO_RANGE_MEMORY: + start = hvm_mmio_first_byte(p); + end = hvm_mmio_last_byte(p); + + if ( rangeset_contains_range(r, start, end) ) + return s; + + break; + + case XEN_DMOP_IO_RANGE_PCI: + if ( rangeset_contains_singleton(r, addr >> 32) ) + { + p->type = IOREQ_TYPE_PCI_CONFIG; + p->addr = addr; + return s; + } + + break; + } + } + + return NULL; +} + +static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p) +{ + struct domain *d = current->domain; + struct hvm_ioreq_page *iorp; + buffered_iopage_t *pg; + buf_ioreq_t bp = { .data = p->data, + .addr = p->addr, + .type = p->type, + .dir = p->dir }; + /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */ + int qw = 0; + + /* Ensure buffered_iopage fits in a page */ + BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE); + + iorp = &s->bufioreq; + pg = iorp->va; + + if ( !pg ) + return IOREQ_STATUS_UNHANDLED; + + /* + * Return 0 for the cases we can't deal with: + * - 'addr' is only a 20-bit field, so we cannot address beyond 1MB + * - we cannot buffer accesses to guest memory buffers, as the guest + * may expect the memory buffer to be synchronously accessed + * - the count field is usually used with data_is_ptr and since we don't + * support data_is_ptr we do not waste space for the count field either + */ + if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) ) + return 0; + + switch ( p->size ) + { + case 1: + bp.size = 0; + break; + case 2: + bp.size = 1; + break; + case 4: + bp.size = 2; + break; + case 8: + bp.size = 3; + qw = 1; + break; + default: + gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size); + return IOREQ_STATUS_UNHANDLED; + } + + spin_lock(&s->bufioreq_lock); + + if ( (pg->ptrs.write_pointer - pg->ptrs.read_pointer) >= + (IOREQ_BUFFER_SLOT_NUM - qw) ) + { + /* The queue is full: send the iopacket through the normal path. */ + spin_unlock(&s->bufioreq_lock); + return IOREQ_STATUS_UNHANDLED; + } + + pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp; + + if ( qw ) + { + bp.data = p->data >> 32; + pg->buf_ioreq[(pg->ptrs.write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = bp; + } + + /* Make the ioreq_t visible /before/ write_pointer. */ + smp_wmb(); + pg->ptrs.write_pointer += qw ? 2 : 1; + + /* Canonicalize read/write pointers to prevent their overflow. 
*/ + while ( (s->bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC) && + qw++ < IOREQ_BUFFER_SLOT_NUM && + pg->ptrs.read_pointer >= IOREQ_BUFFER_SLOT_NUM ) + { + union bufioreq_pointers old = pg->ptrs, new; + unsigned int n = old.read_pointer / IOREQ_BUFFER_SLOT_NUM; + + new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM; + new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM; + cmpxchg(&pg->ptrs.full, old.full, new.full); + } + + notify_via_xen_event_channel(d, s->bufioreq_evtchn); + spin_unlock(&s->bufioreq_lock); + + return IOREQ_STATUS_HANDLED; +} + +int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, + bool buffered) +{ + struct vcpu *curr = current; + struct domain *d = curr->domain; + struct hvm_ioreq_vcpu *sv; + + ASSERT(s); + + if ( buffered ) + return hvm_send_buffered_ioreq(s, proto_p); + + if ( unlikely(!vcpu_start_shutdown_deferral(curr)) ) + return IOREQ_STATUS_RETRY; + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + { + if ( sv->vcpu == curr ) + { + evtchn_port_t port = sv->ioreq_evtchn; + ioreq_t *p = get_ioreq(s, curr); + + if ( unlikely(p->state != STATE_IOREQ_NONE) ) + { + gprintk(XENLOG_ERR, "device model set bad IO state %d\n", + p->state); + break; + } + + if ( unlikely(p->vp_eport != port) ) + { + gprintk(XENLOG_ERR, "device model set bad event channel %d\n", + p->vp_eport); + break; + } + + proto_p->state = STATE_IOREQ_NONE; + proto_p->vp_eport = port; + *p = *proto_p; + + prepare_wait_on_xen_event_channel(port); + + /* + * Following happens /after/ blocking and setting up ioreq + * contents. prepare_wait_on_xen_event_channel() is an implicit + * barrier. + */ + p->state = STATE_IOREQ_READY; + notify_via_xen_event_channel(d, port); + + sv->pending = true; + return IOREQ_STATUS_RETRY; + } + } + + return IOREQ_STATUS_UNHANDLED; +} + +unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) +{ + struct domain *d = current->domain; + struct hvm_ioreq_server *s; + unsigned int id, failed = 0; + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + if ( !s->enabled ) + continue; + + if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED ) + failed++; + } + + return failed; +} + +void hvm_ioreq_init(struct domain *d) +{ + spin_lock_init(&d->arch.hvm.ioreq_server.lock); + + arch_ioreq_domain_init(d); +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h index c7563e1..ab2f3f8 100644 --- a/xen/include/asm-x86/hvm/ioreq.h +++ b/xen/include/asm-x86/hvm/ioreq.h @@ -19,8 +19,7 @@ #ifndef __ASM_X86_HVM_IOREQ_H__ #define __ASM_X86_HVM_IOREQ_H__ -#define HANDLE_BUFIOREQ(s) \ - ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) +#include bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion); int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s); @@ -38,42 +37,6 @@ int arch_ioreq_server_get_type_addr(const struct domain *d, uint64_t *addr); void arch_ioreq_domain_init(struct domain *d); -bool hvm_io_pending(struct vcpu *v); -bool handle_hvm_io_completion(struct vcpu *v); -bool is_ioreq_server_page(struct domain *d, const struct page_info *page); - -int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, - ioservid_t *id); -int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id); -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t 
*bufioreq_port); -int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, - unsigned long idx, mfn_t *mfn); -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); -int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint32_t flags); -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled); - -int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v); -void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); -void hvm_destroy_all_ioreq_servers(struct domain *d); - -struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p); -int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, - bool buffered); -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered); - -void hvm_ioreq_init(struct domain *d); - bool ioreq_complete_mmio(void); #define IOREQ_STATUS_HANDLED X86EMUL_OKAY diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h new file mode 100644 index 0000000..ad47c61 --- /dev/null +++ b/xen/include/xen/ioreq.h @@ -0,0 +1,73 @@ +/* + * ioreq.h: Hardware virtual machine assist interface definitions. + * + * Copyright (c) 2016 Citrix Systems Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; If not, see . 
+ */ + +#ifndef __XEN_IOREQ_H__ +#define __XEN_IOREQ_H__ + +#include + +#define HANDLE_BUFIOREQ(s) \ + ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) + +bool hvm_io_pending(struct vcpu *v); +bool handle_hvm_io_completion(struct vcpu *v); +bool is_ioreq_server_page(struct domain *d, const struct page_info *page); + +int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, + ioservid_t *id); +int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id); +int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, + unsigned long *ioreq_gfn, + unsigned long *bufioreq_gfn, + evtchn_port_t *bufioreq_port); +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, + unsigned long idx, mfn_t *mfn); +int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end); +int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end); +int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint32_t flags); +int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, + bool enabled); + +int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v); +void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); +void hvm_destroy_all_ioreq_servers(struct domain *d); + +struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, + ioreq_t *p); +int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, + bool buffered); +unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered); + +void hvm_ioreq_init(struct domain *d); + +#endif /* __XEN_IOREQ_H__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */
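The net effect of the changes above is that the common IOREQ code reaches everything architecture-specific only through the arch_ioreq_*() hooks. As a rough sketch of what that abstraction buys — these are hypothetical stubs, not code taken from this series, and the include and placement are illustrative only — an architecture with no port I/O and no CF8/CFC config-space decoding could satisfy arch_ioreq_server_destroy_all() and arch_ioreq_server_get_type_addr() with almost nothing:

    /* Hypothetical arch stubs; not part of the series. */
    #include <xen/ioreq.h>

    bool arch_ioreq_server_destroy_all(struct domain *d)
    {
        /* No legacy 0xcf8 port handler to relocate: always proceed. */
        return true;
    }

    int arch_ioreq_server_get_type_addr(const struct domain *d,
                                        const ioreq_t *p,
                                        uint8_t *type,
                                        uint64_t *addr)
    {
        /* Every emulated access is MMIO; no PIO or PCI ranges to decode. */
        if ( p->type != IOREQ_TYPE_COPY )
            return -EINVAL;

        *type = XEN_DMOP_IO_RANGE_MEMORY;
        *addr = p->addr;

        return 0;
    }

The x86 implementations shown in the diff keep the legacy PIO and PCI config-cycle behaviour behind that same interface.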
From patchwork Mon Nov 30 10:31:20 2020
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 11940129
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V3 05/23] xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
Date: Mon, 30 Nov 2020 12:31:20 +0200
Message-Id: <1606732298-22107-6-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

IOREQ is a common feature now and this helper will be used on Arm as is.
Move it to xen/ioreq.h and remove the "hvm" prefix. Although PIO handling on Arm is not introduced with the current series (it will be implemented when we add support for vPCI), PIOs do technically exist on Arm (they are simply accessed the same way as MMIO), so it is better not to diverge now.

Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Paul Durrant
CC: Julien Grall
Acked-by: Jan Beulich

---
Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
- new patch, was split from: "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common"

Changes V1 -> V2:
- remove "hvm" prefix

Changes V2 -> V3:
- add Paul's R-b
---

xen/arch/x86/hvm/emulate.c | 4 ++--
xen/arch/x86/hvm/io.c | 2 +-
xen/common/ioreq.c | 4 ++--
xen/include/asm-x86/hvm/vcpu.h | 7 -------
xen/include/xen/ioreq.h | 7 +++++++
5 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c index 24cf85f..5700274 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -336,7 +336,7 @@ static int hvmemul_do_io( rc = hvm_send_ioreq(s, &p, 0); if ( rc != X86EMUL_RETRY || currd->is_shutting_down ) vio->io_req.state = STATE_IOREQ_NONE; - else if ( !hvm_ioreq_needs_completion(&vio->io_req) ) + else if ( !ioreq_needs_completion(&vio->io_req) ) rc = X86EMUL_OKAY; } break;
@@ -2649,7 +2649,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt, if ( rc == X86EMUL_OKAY && vio->mmio_retry ) rc = X86EMUL_RETRY; - if ( !hvm_ioreq_needs_completion(&vio->io_req) ) + if ( !ioreq_needs_completion(&vio->io_req) ) completion = HVMIO_no_completion; else if ( completion == HVMIO_no_completion ) completion = (vio->io_req.type != IOREQ_TYPE_PIO ||
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c index 3e09d9b..b220d6b 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -135,7 +135,7 @@ bool handle_pio(uint16_t port, unsigned int size, int dir) rc = hvmemul_do_pio_buffer(port, size, dir, &data); - if ( hvm_ioreq_needs_completion(&vio->io_req) ) + if ( ioreq_needs_completion(&vio->io_req) ) vio->io_completion = HVMIO_pio_completion; switch ( rc )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index 13ea959..44385ef 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -160,7 +160,7 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p) } p = &sv->vcpu->arch.hvm.hvm_io.io_req; - if ( hvm_ioreq_needs_completion(p) ) + if ( ioreq_needs_completion(p) ) p->data = data; sv->pending = false;
@@ -186,7 +186,7 @@ bool handle_hvm_io_completion(struct vcpu *v) if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) ) return false; - vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ? + vio->io_req.state = ioreq_needs_completion(&vio->io_req) ? STATE_IORESP_READY : STATE_IOREQ_NONE; msix_write_completion(v);
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h index 5ccd075..6c1feda 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -91,13 +91,6 @@ struct hvm_vcpu_io { const struct g2m_ioport *g2m_ioport; }; -static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq) -{ - return ioreq->state == STATE_IOREQ_READY && - !ioreq->data_is_ptr && - (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE); -} - struct nestedvcpu { bool_t nv_guestmode; /* vcpu in guestmode?
*/ void *nv_vvmcx; /* l1 guest virtual VMCB/VMCS */
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index ad47c61..3cc333d 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -21,6 +21,13 @@ #include +static inline bool ioreq_needs_completion(const ioreq_t *ioreq) +{ + return ioreq->state == STATE_IOREQ_READY && + !ioreq->data_is_ptr && + (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE); +} + #define HANDLE_BUFIOREQ(s) \ ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
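To make the semantics of the helper just moved concrete: a request needs a completion step only if it is in STATE_IOREQ_READY, does not use data_is_ptr, and is not a PIO write — a read, in particular, still has data to hand back to the vCPU. A minimal, hypothetical usage sketch follows (the example() function below is not in the tree; the fields and constants come from the public ioreq interface):

    /* Hypothetical illustration of ioreq_needs_completion(); not in the tree. */
    #include <xen/ioreq.h>

    static void example(void)
    {
        ioreq_t mmio_read = {
            .state = STATE_IOREQ_READY,
            .type  = IOREQ_TYPE_COPY,   /* MMIO access */
            .dir   = IOREQ_READ,
        };
        ioreq_t pio_write = {
            .state = STATE_IOREQ_READY,
            .type  = IOREQ_TYPE_PIO,
            .dir   = IOREQ_WRITE,
        };

        ASSERT(ioreq_needs_completion(&mmio_read));  /* data still to deliver */
        ASSERT(!ioreq_needs_completion(&pio_write)); /* nothing left to do */
    }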
From patchwork Mon Nov 30 10:31:21 2020
From: Oleksandr Tyshchenko
Subject: [PATCH V3 06/23] xen/ioreq: Make x86's hvm_mmio_first(last)_byte() common
Date: Mon, 30 Nov 2020 12:31:21 +0200
Message-Id: <1606732298-22107-7-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

IOREQ is a common feature now and these helpers will be used on Arm as-is.
Move them to xen/ioreq.h and replace the "hvm" prefixes with "ioreq".

Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Paul Durrant
CC: Julien Grall
Acked-by: Jan Beulich
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - replace "hvm" prefix by "ioreq"

Changes V2 -> V3:
   - add Paul's R-b
---
---
 xen/arch/x86/hvm/intercept.c |  5 +++--
 xen/arch/x86/hvm/stdvga.c    |  4 ++--
 xen/common/ioreq.c           |  4 ++--
 xen/include/asm-x86/hvm/io.h | 16 ----------------
 xen/include/xen/ioreq.h      | 16 ++++++++++++++++
 5 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/xen/arch/x86/hvm/intercept.c b/xen/arch/x86/hvm/intercept.c
index cd4c4c1..02ca3b0 100644
--- a/xen/arch/x86/hvm/intercept.c
+++ b/xen/arch/x86/hvm/intercept.c
@@ -17,6 +17,7 @@
  * this program; If not, see .
 */

+#include
 #include
 #include
 #include
@@ -34,7 +35,7 @@ static bool_t hvm_mmio_accept(const struct hvm_io_handler *handler,
                               const ioreq_t *p)
 {
-    paddr_t first = hvm_mmio_first_byte(p), last;
+    paddr_t first = ioreq_mmio_first_byte(p), last;

     BUG_ON(handler->type != IOREQ_TYPE_COPY);

@@ -42,7 +43,7 @@ static bool_t hvm_mmio_accept(const struct hvm_io_handler *handler,
         return 0;

     /* Make sure the handler will accept the whole access. */
-    last = hvm_mmio_last_byte(p);
+    last = ioreq_mmio_last_byte(p);
     if ( last != first && !handler->mmio.ops->check(current, last) )
         domain_crash(current->domain);

diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index e267513..e184664 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -524,8 +524,8 @@ static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
      * deadlock when hvm_mmio_internal() is called from
      * hvm_copy_to/from_guest_phys() in hvm_process_io_intercept().
      */
-    if ( (hvm_mmio_first_byte(p) < VGA_MEM_BASE) ||
-         (hvm_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) )
+    if ( (ioreq_mmio_first_byte(p) < VGA_MEM_BASE) ||
+         (ioreq_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) )
         return 0;

     spin_lock(&s->lock);

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 44385ef..6e9f745 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -1075,8 +1075,8 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
         break;

     case XEN_DMOP_IO_RANGE_MEMORY:
-        start = hvm_mmio_first_byte(p);
-        end = hvm_mmio_last_byte(p);
+        start = ioreq_mmio_first_byte(p);
+        end = ioreq_mmio_last_byte(p);

         if ( rangeset_contains_range(r, start, end) )
             return s;

diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 558426b..fb64294 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -40,22 +40,6 @@ struct hvm_mmio_ops {
     hvm_mmio_write_t write;
 };

-static inline paddr_t hvm_mmio_first_byte(const ioreq_t *p)
-{
-    return unlikely(p->df) ?
-           p->addr - (p->count - 1ul) * p->size :
-           p->addr;
-}
-
-static inline paddr_t hvm_mmio_last_byte(const ioreq_t *p)
-{
-    unsigned long size = p->size;
-
-    return unlikely(p->df) ?
-           p->addr + size - 1:
-           p->addr + (p->count * size) - 1;
-}
-
 typedef int (*portio_action_t)(
     int dir, unsigned int port, unsigned int bytes, uint32_t *val);

diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 3cc333d..2746bb1 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -21,6 +21,22 @@

 #include

+static inline paddr_t ioreq_mmio_first_byte(const ioreq_t *p)
+{
+    return unlikely(p->df) ?
+           p->addr - (p->count - 1ul) * p->size :
+           p->addr;
+}
+
+static inline paddr_t ioreq_mmio_last_byte(const ioreq_t *p)
+{
+    unsigned long size = p->size;
+
+    return unlikely(p->df) ?
+           p->addr + size - 1:
+           p->addr + (p->count * size) - 1;
+}
+
 static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
 {
     return ioreq->state == STATE_IOREQ_READY &&
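The two helpers made common above answer one question: which inclusive byte
range does a (possibly repeated) access touch? For count elements of size
bytes, the x86 direction flag (df) decides whether addr names the lowest or
the highest element, so the range has to be computed accordingly. A
self-contained C sketch with mock types (illustrative only, not the Xen
declarations):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t paddr_t;

    struct mini_ioreq { paddr_t addr; uint32_t size, count; bool df; };

    static paddr_t mmio_first_byte(const struct mini_ioreq *p)
    {
        /* With df set, addr is the start of the *highest* element. */
        return p->df ? p->addr - (p->count - 1ul) * p->size : p->addr;
    }

    static paddr_t mmio_last_byte(const struct mini_ioreq *p)
    {
        unsigned long size = p->size;

        return p->df ? p->addr + size - 1
                     : p->addr + (p->count * size) - 1;
    }

    int main(void)
    {
        /* 4 x 4-byte accesses starting at 0x1000, forwards then backwards */
        struct mini_ioreq fwd = { 0x1000, 4, 4, false };
        struct mini_ioreq bwd = { 0x1000, 4, 4, true };

        printf("fwd: [%#llx, %#llx]\n",   /* [0x1000, 0x100f] */
               (unsigned long long)mmio_first_byte(&fwd),
               (unsigned long long)mmio_last_byte(&fwd));
        printf("bwd: [%#llx, %#llx]\n",   /* [0xff4, 0x1003] */
               (unsigned long long)mmio_first_byte(&bwd),
               (unsigned long long)mmio_last_byte(&bwd));
        return 0;
    }

This is why callers such as the rangeset check in hvm_select_ioreq_server()
must use both helpers rather than addr alone: a decrementing rep access
extends *below* addr, not above it.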
From patchwork Mon Nov 30 10:31:22 2020
From: Oleksandr Tyshchenko
Subject: [PATCH V3 07/23] xen/ioreq: Make x86's hvm_ioreq_(page/vcpu/server) structs common
Date: Mon, 30 Nov 2020 12:31:22 +0200
Message-Id: <1606732298-22107-8-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

IOREQ is a common feature now and these structs will be used on Arm as-is.
Move them to xen/ioreq.h and drop the "hvm" prefixes.

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
Acked-by: Jan Beulich
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - remove "hvm" prefix

Changes V2 -> V3:
   - update the patch now that the "legacy interface" is x86-specific
---
---
 xen/arch/x86/hvm/emulate.c       |   2 +-
 xen/arch/x86/hvm/ioreq.c         |  36 ++++++-------
 xen/arch/x86/hvm/stdvga.c        |   2 +-
 xen/arch/x86/mm/p2m.c            |   8 +--
 xen/common/ioreq.c               | 108 +++++++++++++++++++--------------------
 xen/include/asm-x86/hvm/domain.h |  36 +------------
 xen/include/asm-x86/hvm/ioreq.h  |  12 ++---
 xen/include/asm-x86/p2m.h        |   8 +--
 xen/include/xen/ioreq.h          |  40 +++++++++++++--
 9 files changed, 126 insertions(+), 126 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 5700274..4746d5a 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -287,7 +287,7 @@ static int hvmemul_do_io(
      * However, there's no cheap approach to avoid above situations in xen,
      * so the device model side needs to check the incoming ioreq event.
*/ - struct hvm_ioreq_server *s = NULL; + struct ioreq_server *s = NULL; p2m_type_t p2mt = p2m_invalid; if ( is_mmio ) diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index b03ceee..009a95a 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -64,7 +64,7 @@ bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion) return true; } -static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s) +static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s) { struct domain *d = s->target; unsigned int i; @@ -80,7 +80,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s) return INVALID_GFN; } -static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s) +static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server *s) { struct domain *d = s->target; unsigned int i; @@ -98,7 +98,7 @@ static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s) return hvm_alloc_legacy_ioreq_gfn(s); } -static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s, +static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s, gfn_t gfn) { struct domain *d = s->target; @@ -116,7 +116,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s, return true; } -static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn) +static void hvm_free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn) { struct domain *d = s->target; unsigned int i = gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base; @@ -130,9 +130,9 @@ static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn) } } -static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) +static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf) { - struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq; + struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq; if ( gfn_eq(iorp->gfn, INVALID_GFN) ) return; @@ -144,10 +144,10 @@ static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) iorp->gfn = INVALID_GFN; } -static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) +static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf) { struct domain *d = s->target; - struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq; + struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq; int rc; if ( iorp->page ) @@ -180,11 +180,11 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) return rc; } -static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) +static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf) { struct domain *d = s->target; - struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq; + struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq; if ( gfn_eq(iorp->gfn, INVALID_GFN) ) return; @@ -195,10 +195,10 @@ static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) clear_page(iorp->va); } -static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) +static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf) { struct domain *d = s->target; - struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq; + struct ioreq_page *iorp = buf ? 
&s->bufioreq : &s->ioreq; int rc; if ( gfn_eq(iorp->gfn, INVALID_GFN) ) @@ -214,7 +214,7 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf) return rc; } -int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s) +int arch_ioreq_server_map_pages(struct ioreq_server *s) { int rc; @@ -229,33 +229,33 @@ int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s) return rc; } -void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s) +void arch_ioreq_server_unmap_pages(struct ioreq_server *s) { hvm_unmap_ioreq_gfn(s, true); hvm_unmap_ioreq_gfn(s, false); } -void arch_ioreq_server_enable(struct hvm_ioreq_server *s) +void arch_ioreq_server_enable(struct ioreq_server *s) { hvm_remove_ioreq_gfn(s, false); hvm_remove_ioreq_gfn(s, true); } -void arch_ioreq_server_disable(struct hvm_ioreq_server *s) +void arch_ioreq_server_disable(struct ioreq_server *s) { hvm_add_ioreq_gfn(s, true); hvm_add_ioreq_gfn(s, false); } /* Called when target domain is paused */ -void arch_ioreq_server_destroy(struct hvm_ioreq_server *s) +void arch_ioreq_server_destroy(struct ioreq_server *s) { p2m_set_ioreq_server(s->target, 0, s); } /* Called with ioreq_server lock held */ int arch_ioreq_server_map_mem_type(struct domain *d, - struct hvm_ioreq_server *s, + struct ioreq_server *s, uint32_t flags) { int rc = p2m_set_ioreq_server(d, flags, s); diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c index e184664..bafb3f6 100644 --- a/xen/arch/x86/hvm/stdvga.c +++ b/xen/arch/x86/hvm/stdvga.c @@ -466,7 +466,7 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler, .dir = IOREQ_WRITE, .data = data, }; - struct hvm_ioreq_server *srv; + struct ioreq_server *srv; if ( !stdvga_cache_is_enabled(s) || !s->stdvga ) goto done; diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c index d9cc185..7a2ba82 100644 --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -367,7 +367,7 @@ void p2m_memory_type_changed(struct domain *d) int p2m_set_ioreq_server(struct domain *d, unsigned int flags, - struct hvm_ioreq_server *s) + struct ioreq_server *s) { struct p2m_domain *p2m = p2m_get_hostp2m(d); int rc; @@ -415,11 +415,11 @@ int p2m_set_ioreq_server(struct domain *d, return rc; } -struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d, - unsigned int *flags) +struct ioreq_server *p2m_get_ioreq_server(struct domain *d, + unsigned int *flags) { struct p2m_domain *p2m = p2m_get_hostp2m(d); - struct hvm_ioreq_server *s; + struct ioreq_server *s; spin_lock(&p2m->ioreq.lock); diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index 6e9f745..3e80fc6 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -35,7 +35,7 @@ #include static void set_ioreq_server(struct domain *d, unsigned int id, - struct hvm_ioreq_server *s) + struct ioreq_server *s) { ASSERT(id < MAX_NR_IOREQ_SERVERS); ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]); @@ -46,8 +46,8 @@ static void set_ioreq_server(struct domain *d, unsigned int id, #define GET_IOREQ_SERVER(d, id) \ (d)->arch.hvm.ioreq_server.server[id] -static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d, - unsigned int id) +static struct ioreq_server *get_ioreq_server(const struct domain *d, + unsigned int id) { if ( id >= MAX_NR_IOREQ_SERVERS ) return NULL; @@ -69,7 +69,7 @@ static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d, continue; \ else -static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v) +static ioreq_t *get_ioreq(struct ioreq_server *s, struct vcpu *v) { shared_iopage_t *p = 
s->ioreq.va; @@ -79,16 +79,16 @@ static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v) return &p->vcpu_ioreq[v->vcpu_id]; } -static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v, - struct hvm_ioreq_server **srvp) +static struct ioreq_vcpu *get_pending_vcpu(const struct vcpu *v, + struct ioreq_server **srvp) { struct domain *d = v->domain; - struct hvm_ioreq_server *s; + struct ioreq_server *s; unsigned int id; FOR_EACH_IOREQ_SERVER(d, id, s) { - struct hvm_ioreq_vcpu *sv; + struct ioreq_vcpu *sv; list_for_each_entry ( sv, &s->ioreq_vcpu_list, @@ -111,7 +111,7 @@ bool hvm_io_pending(struct vcpu *v) return get_pending_vcpu(v, NULL); } -static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p) +static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p) { unsigned int prev_state = STATE_IOREQ_NONE; unsigned int state = p->state; @@ -172,8 +172,8 @@ bool handle_hvm_io_completion(struct vcpu *v) { struct domain *d = v->domain; struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io; - struct hvm_ioreq_server *s; - struct hvm_ioreq_vcpu *sv; + struct ioreq_server *s; + struct ioreq_vcpu *sv; enum hvm_io_completion io_completion; if ( has_vpci(d) && vpci_process_pending(v) ) @@ -214,9 +214,9 @@ bool handle_hvm_io_completion(struct vcpu *v) return true; } -static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) +static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf) { - struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq; + struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq; struct page_info *page; if ( iorp->page ) @@ -262,9 +262,9 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) return -ENOMEM; } -static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) +static void hvm_free_ioreq_mfn(struct ioreq_server *s, bool buf) { - struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq; + struct ioreq_page *iorp = buf ? 
&s->bufioreq : &s->ioreq; struct page_info *page = iorp->page; if ( !page ) @@ -281,7 +281,7 @@ static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) bool is_ioreq_server_page(struct domain *d, const struct page_info *page) { - const struct hvm_ioreq_server *s; + const struct ioreq_server *s; unsigned int id; bool found = false; @@ -301,8 +301,8 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page) return found; } -static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s, - struct hvm_ioreq_vcpu *sv) +static void hvm_update_ioreq_evtchn(struct ioreq_server *s, + struct ioreq_vcpu *sv) { ASSERT(spin_is_locked(&s->lock)); @@ -314,13 +314,13 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s, } } -static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, +static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s, struct vcpu *v) { - struct hvm_ioreq_vcpu *sv; + struct ioreq_vcpu *sv; int rc; - sv = xzalloc(struct hvm_ioreq_vcpu); + sv = xzalloc(struct ioreq_vcpu); rc = -ENOMEM; if ( !sv ) @@ -366,10 +366,10 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, return rc; } -static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, +static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s, struct vcpu *v) { - struct hvm_ioreq_vcpu *sv; + struct ioreq_vcpu *sv; spin_lock(&s->lock); @@ -394,9 +394,9 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, spin_unlock(&s->lock); } -static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s) +static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s) { - struct hvm_ioreq_vcpu *sv, *next; + struct ioreq_vcpu *sv, *next; spin_lock(&s->lock); @@ -420,7 +420,7 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s) spin_unlock(&s->lock); } -static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s) +static int hvm_ioreq_server_alloc_pages(struct ioreq_server *s) { int rc; @@ -435,13 +435,13 @@ static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s) return rc; } -static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s) +static void hvm_ioreq_server_free_pages(struct ioreq_server *s) { hvm_free_ioreq_mfn(s, true); hvm_free_ioreq_mfn(s, false); } -static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s) +static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s) { unsigned int i; @@ -449,7 +449,7 @@ static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s) rangeset_destroy(s->range[i]); } -static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s, +static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s, ioservid_t id) { unsigned int i; @@ -487,9 +487,9 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s, return rc; } -static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s) +static void hvm_ioreq_server_enable(struct ioreq_server *s) { - struct hvm_ioreq_vcpu *sv; + struct ioreq_vcpu *sv; spin_lock(&s->lock); @@ -509,7 +509,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s) spin_unlock(&s->lock); } -static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s) +static void hvm_ioreq_server_disable(struct ioreq_server *s) { spin_lock(&s->lock); @@ -524,7 +524,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s) spin_unlock(&s->lock); } -static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, +static int hvm_ioreq_server_init(struct 
ioreq_server *s, struct domain *d, int bufioreq_handling, ioservid_t id) { @@ -569,7 +569,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, return rc; } -static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s) +static void hvm_ioreq_server_deinit(struct ioreq_server *s) { ASSERT(!s->enabled); hvm_ioreq_server_remove_all_vcpus(s); @@ -594,14 +594,14 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s) int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, ioservid_t *id) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; unsigned int i; int rc; if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC ) return -EINVAL; - s = xzalloc(struct hvm_ioreq_server); + s = xzalloc(struct ioreq_server); if ( !s ) return -ENOMEM; @@ -649,7 +649,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; int rc; spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); @@ -694,7 +694,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, unsigned long *bufioreq_gfn, evtchn_port_t *bufioreq_port) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; int rc; spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); @@ -739,7 +739,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, unsigned long idx, mfn_t *mfn) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; int rc; ASSERT(is_hvm_domain(d)); @@ -791,7 +791,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, uint32_t type, uint64_t start, uint64_t end) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; struct rangeset *r; int rc; @@ -843,7 +843,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, uint32_t type, uint64_t start, uint64_t end) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; struct rangeset *r; int rc; @@ -902,7 +902,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, uint32_t type, uint32_t flags) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; int rc; if ( type != HVMMEM_ioreq_server ) @@ -934,7 +934,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, bool enabled) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; int rc; spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); @@ -967,7 +967,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; unsigned int id; int rc; @@ -1002,7 +1002,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; unsigned int id; spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); @@ -1015,7 +1015,7 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) void hvm_destroy_all_ioreq_servers(struct domain *d) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; unsigned int id; if ( !arch_ioreq_server_destroy_all(d) ) @@ -1042,10 +1042,10 @@ void hvm_destroy_all_ioreq_servers(struct domain *d) spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); } -struct 
hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p) +struct ioreq_server *hvm_select_ioreq_server(struct domain *d, + ioreq_t *p) { - struct hvm_ioreq_server *s; + struct ioreq_server *s; uint8_t type; uint64_t addr; unsigned int id; @@ -1098,10 +1098,10 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, return NULL; } -static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p) +static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p) { struct domain *d = current->domain; - struct hvm_ioreq_page *iorp; + struct ioreq_page *iorp; buffered_iopage_t *pg; buf_ioreq_t bp = { .data = p->data, .addr = p->addr, @@ -1191,12 +1191,12 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p) return IOREQ_STATUS_HANDLED; } -int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, +int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, bool buffered) { struct vcpu *curr = current; struct domain *d = curr->domain; - struct hvm_ioreq_vcpu *sv; + struct ioreq_vcpu *sv; ASSERT(s); @@ -1254,7 +1254,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) { struct domain *d = current->domain; - struct hvm_ioreq_server *s; + struct ioreq_server *s; unsigned int id, failed = 0; FOR_EACH_IOREQ_SERVER(d, id, s) diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h index 9d247ba..1c4ca47 100644 --- a/xen/include/asm-x86/hvm/domain.h +++ b/xen/include/asm-x86/hvm/domain.h @@ -30,40 +30,6 @@ #include -struct hvm_ioreq_page { - gfn_t gfn; - struct page_info *page; - void *va; -}; - -struct hvm_ioreq_vcpu { - struct list_head list_entry; - struct vcpu *vcpu; - evtchn_port_t ioreq_evtchn; - bool pending; -}; - -#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1) -#define MAX_NR_IO_RANGES 256 - -struct hvm_ioreq_server { - struct domain *target, *emulator; - - /* Lock to serialize toolstack modifications */ - spinlock_t lock; - - struct hvm_ioreq_page ioreq; - struct list_head ioreq_vcpu_list; - struct hvm_ioreq_page bufioreq; - - /* Lock to serialize access to buffered ioreq ring */ - spinlock_t bufioreq_lock; - evtchn_port_t bufioreq_evtchn; - struct rangeset *range[NR_IO_RANGE_TYPES]; - bool enabled; - uint8_t bufioreq_handling; -}; - #ifdef CONFIG_MEM_SHARING struct mem_sharing_domain { @@ -110,7 +76,7 @@ struct hvm_domain { /* Lock protects all other values in the sub-struct and the default */ struct { spinlock_t lock; - struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS]; + struct ioreq_server *server[MAX_NR_IOREQ_SERVERS]; } ioreq_server; /* Cached CF8 for guest PCI config cycles */ diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h index ab2f3f8..854dc77 100644 --- a/xen/include/asm-x86/hvm/ioreq.h +++ b/xen/include/asm-x86/hvm/ioreq.h @@ -22,13 +22,13 @@ #include bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion); -int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s); -void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s); -void arch_ioreq_server_enable(struct hvm_ioreq_server *s); -void arch_ioreq_server_disable(struct hvm_ioreq_server *s); -void arch_ioreq_server_destroy(struct hvm_ioreq_server *s); +int arch_ioreq_server_map_pages(struct ioreq_server *s); +void arch_ioreq_server_unmap_pages(struct ioreq_server *s); +void arch_ioreq_server_enable(struct ioreq_server *s); +void arch_ioreq_server_disable(struct ioreq_server *s); +void 
arch_ioreq_server_destroy(struct ioreq_server *s); int arch_ioreq_server_map_mem_type(struct domain *d, - struct hvm_ioreq_server *s, + struct ioreq_server *s, uint32_t flags); bool arch_ioreq_server_destroy_all(struct domain *d); int arch_ioreq_server_get_type_addr(const struct domain *d, diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h index 8d6fd1a..4603560 100644 --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -363,7 +363,7 @@ struct p2m_domain { * ioreq server who's responsible for the emulation of * gfns with specific p2m type(for now, p2m_ioreq_server). */ - struct hvm_ioreq_server *server; + struct ioreq_server *server; /* * flags specifies whether read, write or both operations * are to be emulated by an ioreq server. @@ -941,9 +941,9 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn) } int p2m_set_ioreq_server(struct domain *d, unsigned int flags, - struct hvm_ioreq_server *s); -struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d, - unsigned int *flags); + struct ioreq_server *s); +struct ioreq_server *p2m_get_ioreq_server(struct domain *d, + unsigned int *flags); static inline int p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt, p2m_type_t ot, mfn_t nfn, mfn_t ofn, diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index 2746bb1..979afa0 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -21,6 +21,40 @@ #include +struct ioreq_page { + gfn_t gfn; + struct page_info *page; + void *va; +}; + +struct ioreq_vcpu { + struct list_head list_entry; + struct vcpu *vcpu; + evtchn_port_t ioreq_evtchn; + bool pending; +}; + +#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1) +#define MAX_NR_IO_RANGES 256 + +struct ioreq_server { + struct domain *target, *emulator; + + /* Lock to serialize toolstack modifications */ + spinlock_t lock; + + struct ioreq_page ioreq; + struct list_head ioreq_vcpu_list; + struct ioreq_page bufioreq; + + /* Lock to serialize access to buffered ioreq ring */ + spinlock_t bufioreq_lock; + evtchn_port_t bufioreq_evtchn; + struct rangeset *range[NR_IO_RANGE_TYPES]; + bool enabled; + uint8_t bufioreq_handling; +}; + static inline paddr_t ioreq_mmio_first_byte(const ioreq_t *p) { return unlikely(p->df) ? 
@@ -75,9 +109,9 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
 void hvm_destroy_all_ioreq_servers(struct domain *d);

-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p);
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                             ioreq_t *p);
+int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
                    bool buffered);

 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
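As a reading aid for the renames in the patch above, a reduced C mock-up
(field subset only, not the real Xen declarations) of what one ioreq_server
owns once the structs are arch-neutral:

    #include <stdbool.h>

    struct list_head { struct list_head *next, *prev; };

    struct ioreq_page {                   /* a page shared with the emulator */
        void *va;                         /* mapping in Xen's address space */
    };

    struct ioreq_vcpu {                   /* per-vCPU notification state */
        struct list_head list_entry;
        unsigned int ioreq_evtchn;        /* event channel to the emulator */
        bool pending;                     /* request outstanding on this vCPU */
    };

    struct ioreq_server {
        struct ioreq_page ioreq;          /* synchronous request page */
        struct ioreq_page bufioreq;       /* buffered (posted) request ring */
        struct list_head ioreq_vcpu_list; /* one ioreq_vcpu per guest vCPU */
        bool enabled;
    };

Nothing in this shape is x86-specific, which is the whole point of dropping
the "hvm" prefixes before the Arm port starts using the same definitions.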
From patchwork Mon Nov 30 10:31:23 2020
From: Oleksandr Tyshchenko
Subject: [PATCH V3 08/23] xen/ioreq: Move x86's ioreq_server to struct domain
Date: Mon, 30 Nov 2020 12:31:23 +0200
Message-Id: <1606732298-22107-9-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

IOREQ is a common feature now and this struct will be used on Arm as-is.
Move it to the common struct domain. This also significantly reduces the
layering violation in the common code (*arch.hvm* usage).

We don't move ioreq_gfn, since it is not used in the common code
(the "legacy" mechanism is x86-specific).
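A condensed view of the access-path change the diff below applies throughout
xen/common/ioreq.c (mock types, illustrative only; the real lookup is
get_ioreq_server() and takes the per-domain lock):

    #include <stddef.h>

    #define MAX_NR_IOREQ_SERVERS 8

    struct ioreq_server;

    struct mock_domain {
        struct {
            /* spinlock_t lock; -- serializes toolstack updates */
            struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
        } ioreq_server;  /* previously buried in d->arch.hvm.ioreq_server */
    };

    static struct ioreq_server *get_server(struct mock_domain *d,
                                           unsigned int id)
    {
        /* Old common code had to write d->arch.hvm.ioreq_server.server[id],
         * i.e. it depended on the x86-only struct hvm_domain layout. */
        return id < MAX_NR_IOREQ_SERVERS ? d->ioreq_server.server[id] : NULL;
    }
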
Signed-off-by: Oleksandr Tyshchenko CC: Julien Grall Acked-by: Jan Beulich --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes V1 -> V2: - new patch Changes V2 -> V3: - remove the mention of "ioreq_gfn" from patch subject/description - update patch according the "legacy interface" is x86 specific - drop hvm_params related changes in arch/x86/hvm/hvm.c - leave ioreq_gfn in hvm_domain --- --- xen/common/ioreq.c | 60 ++++++++++++++++++++-------------------- xen/include/asm-x86/hvm/domain.h | 8 ------ xen/include/xen/sched.h | 10 +++++++ 3 files changed, 40 insertions(+), 38 deletions(-) diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index 3e80fc6..b7c2d5a 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -38,13 +38,13 @@ static void set_ioreq_server(struct domain *d, unsigned int id, struct ioreq_server *s) { ASSERT(id < MAX_NR_IOREQ_SERVERS); - ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]); + ASSERT(!s || !d->ioreq_server.server[id]); - d->arch.hvm.ioreq_server.server[id] = s; + d->ioreq_server.server[id] = s; } #define GET_IOREQ_SERVER(d, id) \ - (d)->arch.hvm.ioreq_server.server[id] + (d)->ioreq_server.server[id] static struct ioreq_server *get_ioreq_server(const struct domain *d, unsigned int id) @@ -285,7 +285,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page) unsigned int id; bool found = false; - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); FOR_EACH_IOREQ_SERVER(d, id, s) { @@ -296,7 +296,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page) } } - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); return found; } @@ -606,7 +606,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, return -ENOMEM; domain_pause(d); - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ ) { @@ -634,13 +634,13 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, if ( id ) *id = i; - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); domain_unpause(d); return 0; fail: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); domain_unpause(d); xfree(s); @@ -652,7 +652,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) struct ioreq_server *s; int rc; - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); s = get_ioreq_server(d, id); @@ -684,7 +684,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) rc = 0; out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); return rc; } @@ -697,7 +697,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, struct ioreq_server *s; int rc; - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); s = get_ioreq_server(d, id); @@ -731,7 +731,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, rc = 0; out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); return rc; } @@ -744,7 +744,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, ASSERT(is_hvm_domain(d)); - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + 
spin_lock_recursive(&d->ioreq_server.lock); s = get_ioreq_server(d, id); @@ -782,7 +782,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, } out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); return rc; } @@ -798,7 +798,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, if ( start > end ) return -EINVAL; - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); s = get_ioreq_server(d, id); @@ -834,7 +834,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, rc = rangeset_add_range(r, start, end); out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); return rc; } @@ -850,7 +850,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, if ( start > end ) return -EINVAL; - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); s = get_ioreq_server(d, id); @@ -886,7 +886,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, rc = rangeset_remove_range(r, start, end); out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); return rc; } @@ -911,7 +911,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE ) return -EINVAL; - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); s = get_ioreq_server(d, id); @@ -926,7 +926,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, rc = arch_ioreq_server_map_mem_type(d, s, flags); out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); return rc; } @@ -937,7 +937,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, struct ioreq_server *s; int rc; - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); s = get_ioreq_server(d, id); @@ -961,7 +961,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, rc = 0; out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); return rc; } @@ -971,7 +971,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) unsigned int id; int rc; - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); FOR_EACH_IOREQ_SERVER(d, id, s) { @@ -980,7 +980,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) goto fail; } - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); return 0; @@ -995,7 +995,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) hvm_ioreq_server_remove_vcpu(s, v); } - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); return rc; } @@ -1005,12 +1005,12 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) struct ioreq_server *s; unsigned int id; - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_lock_recursive(&d->ioreq_server.lock); FOR_EACH_IOREQ_SERVER(d, id, s) hvm_ioreq_server_remove_vcpu(s, v); - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + spin_unlock_recursive(&d->ioreq_server.lock); } void hvm_destroy_all_ioreq_servers(struct domain *d) @@ -1021,7 +1021,7 @@ void 
hvm_destroy_all_ioreq_servers(struct domain *d)

     if ( !arch_ioreq_server_destroy_all(d) )
         return;

-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);

     /* No need to domain_pause() as the domain is being torn down */

@@ -1039,7 +1039,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
         xfree(s);
     }

-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 }

 struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
@@ -1271,7 +1271,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)

 void hvm_ioreq_init(struct domain *d)
 {
-    spin_lock_init(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_init(&d->ioreq_server.lock);

     arch_ioreq_domain_init(d);
 }

diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 1c4ca47..b8be1ad 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -63,8 +63,6 @@ struct hvm_pi_ops {
     void (*vcpu_block)(struct vcpu *);
 };

-#define MAX_NR_IOREQ_SERVERS 8
-
 struct hvm_domain {
     /* Guest page range used for non-default ioreq servers */
     struct {
@@ -73,12 +71,6 @@ struct hvm_domain {
         unsigned long legacy_mask; /* indexed by HVM param number */
     } ioreq_gfn;

-    /* Lock protects all other values in the sub-struct and the default */
-    struct {
-        spinlock_t lock;
-        struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
-    } ioreq_server;
-
     /* Cached CF8 for guest PCI config cycles */
     uint32_t pci_cf8;

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a345cc0..62cbcdb 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -316,6 +316,8 @@ struct sched_unit {

 struct evtchn_port_ops;

+#define MAX_NR_IOREQ_SERVERS 8
+
 struct domain
 {
     domid_t domain_id;

@@ -523,6 +525,14 @@ struct domain
     /* Argo interdomain communication support */
     struct argo_domain *argo;
 #endif
+
+#ifdef CONFIG_IOREQ_SERVER
+    /* Lock protects all other values in the sub-struct and the default */
+    struct {
+        spinlock_t lock;
+        struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
+    } ioreq_server;
+#endif
 };

 static inline struct page_list_head *page_to_list(
From patchwork Mon Nov 30 10:31:24 2020
From: Oleksandr Tyshchenko
Subject: [PATCH V3 09/23] xen/dm: Make x86's DM feature common
Date: Mon, 30 Nov 2020 12:31:24 +0200
Message-Id: <1606732298-22107-10-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
From: Julien Grall

As a lot of x86 code can be re-used on Arm later on, this patch splits
device model support into common and arch-specific parts. The common DM
feature is supposed to be built with the IOREQ_SERVER option enabled (as
well as the IOREQ feature), which is selected for x86's HVM config for now.

Also update the XSM code a bit to let the DM op be used on Arm.

This support is going to be used on Arm to be able to run a device emulator
outside of the Xen hypervisor.

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - update XSM, related changes were pulled from:
     [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features

Changes V1 -> V2:
   - update the author of the patch
   - update the patch description
   - introduce xen/dm.h and move definitions there

Changes V2 -> V3:
   - no changes
---
---
 xen/arch/x86/hvm/dm.c   | 291 ++++--------------------------------------------
 xen/common/Makefile     |   1 +
 xen/common/dm.c         | 291 ++++++++++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/dm.h    |  44 ++++++++
 xen/include/xsm/dummy.h |   4 +-
 xen/include/xsm/xsm.h   |   6 +-
 xen/xsm/dummy.c         |   2 +-
 xen/xsm/flask/hooks.c   |   5 +-
 8 files changed, 364 insertions(+), 280 deletions(-)
 create mode 100644 xen/common/dm.c
 create mode 100644 xen/include/xen/dm.h

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 71f5ca4..35f860a 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -16,6 +16,7 @@

 #include
 #include
+#include
 #include
 #include
 #include
@@ -29,13 +30,6 @@

 #include

-struct dmop_args {
-    domid_t domid;
-    unsigned int nr_bufs;
-    /* Reserve enough buf elements for all current hypercalls.
*/ - struct xen_dm_op_buf buf[2]; -}; - static bool _raw_copy_from_guest_buf_offset(void *dst, const struct dmop_args *args, unsigned int buf_idx, @@ -338,148 +332,20 @@ static int inject_event(struct domain *d, return 0; } -static int dm_op(const struct dmop_args *op_args) +int arch_dm_op(struct xen_dm_op *op, struct domain *d, + const struct dmop_args *op_args, bool *const_op) { - struct domain *d; - struct xen_dm_op op; - bool const_op = true; long rc; - size_t offset; - - static const uint8_t op_size[] = { - [XEN_DMOP_create_ioreq_server] = sizeof(struct xen_dm_op_create_ioreq_server), - [XEN_DMOP_get_ioreq_server_info] = sizeof(struct xen_dm_op_get_ioreq_server_info), - [XEN_DMOP_map_io_range_to_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range), - [XEN_DMOP_unmap_io_range_from_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range), - [XEN_DMOP_set_ioreq_server_state] = sizeof(struct xen_dm_op_set_ioreq_server_state), - [XEN_DMOP_destroy_ioreq_server] = sizeof(struct xen_dm_op_destroy_ioreq_server), - [XEN_DMOP_track_dirty_vram] = sizeof(struct xen_dm_op_track_dirty_vram), - [XEN_DMOP_set_pci_intx_level] = sizeof(struct xen_dm_op_set_pci_intx_level), - [XEN_DMOP_set_isa_irq_level] = sizeof(struct xen_dm_op_set_isa_irq_level), - [XEN_DMOP_set_pci_link_route] = sizeof(struct xen_dm_op_set_pci_link_route), - [XEN_DMOP_modified_memory] = sizeof(struct xen_dm_op_modified_memory), - [XEN_DMOP_set_mem_type] = sizeof(struct xen_dm_op_set_mem_type), - [XEN_DMOP_inject_event] = sizeof(struct xen_dm_op_inject_event), - [XEN_DMOP_inject_msi] = sizeof(struct xen_dm_op_inject_msi), - [XEN_DMOP_map_mem_type_to_ioreq_server] = sizeof(struct xen_dm_op_map_mem_type_to_ioreq_server), - [XEN_DMOP_remote_shutdown] = sizeof(struct xen_dm_op_remote_shutdown), - [XEN_DMOP_relocate_memory] = sizeof(struct xen_dm_op_relocate_memory), - [XEN_DMOP_pin_memory_cacheattr] = sizeof(struct xen_dm_op_pin_memory_cacheattr), - }; - - rc = rcu_lock_remote_domain_by_id(op_args->domid, &d); - if ( rc ) - return rc; - - if ( !is_hvm_domain(d) ) - goto out; - - rc = xsm_dm_op(XSM_DM_PRIV, d); - if ( rc ) - goto out; - - offset = offsetof(struct xen_dm_op, u); - - rc = -EFAULT; - if ( op_args->buf[0].size < offset ) - goto out; - - if ( copy_from_guest_offset((void *)&op, op_args->buf[0].h, 0, offset) ) - goto out; - - if ( op.op >= ARRAY_SIZE(op_size) ) - { - rc = -EOPNOTSUPP; - goto out; - } - - op.op = array_index_nospec(op.op, ARRAY_SIZE(op_size)); - - if ( op_args->buf[0].size < offset + op_size[op.op] ) - goto out; - - if ( copy_from_guest_offset((void *)&op.u, op_args->buf[0].h, offset, - op_size[op.op]) ) - goto out; - - rc = -EINVAL; - if ( op.pad ) - goto out; - - switch ( op.op ) - { - case XEN_DMOP_create_ioreq_server: - { - struct xen_dm_op_create_ioreq_server *data = - &op.u.create_ioreq_server; - - const_op = false; - - rc = -EINVAL; - if ( data->pad[0] || data->pad[1] || data->pad[2] ) - break; - - rc = hvm_create_ioreq_server(d, data->handle_bufioreq, - &data->id); - break; - } - case XEN_DMOP_get_ioreq_server_info: + switch ( op->op ) { - struct xen_dm_op_get_ioreq_server_info *data = - &op.u.get_ioreq_server_info; - const uint16_t valid_flags = XEN_DMOP_no_gfns; - - const_op = false; - - rc = -EINVAL; - if ( data->flags & ~valid_flags ) - break; - - rc = hvm_get_ioreq_server_info(d, data->id, - (data->flags & XEN_DMOP_no_gfns) ? - NULL : &data->ioreq_gfn, - (data->flags & XEN_DMOP_no_gfns) ? 
- NULL : &data->bufioreq_gfn, - &data->bufioreq_port); - break; - } - - case XEN_DMOP_map_io_range_to_ioreq_server: - { - const struct xen_dm_op_ioreq_server_range *data = - &op.u.map_io_range_to_ioreq_server; - - rc = -EINVAL; - if ( data->pad ) - break; - - rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type, - data->start, data->end); - break; - } - - case XEN_DMOP_unmap_io_range_from_ioreq_server: - { - const struct xen_dm_op_ioreq_server_range *data = - &op.u.unmap_io_range_from_ioreq_server; - - rc = -EINVAL; - if ( data->pad ) - break; - - rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type, - data->start, data->end); - break; - } - case XEN_DMOP_map_mem_type_to_ioreq_server: { struct xen_dm_op_map_mem_type_to_ioreq_server *data = - &op.u.map_mem_type_to_ioreq_server; + &op->u.map_mem_type_to_ioreq_server; unsigned long first_gfn = data->opaque; - const_op = false; + *const_op = false; rc = -EOPNOTSUPP; if ( !hap_enabled(d) ) @@ -523,36 +389,10 @@ static int dm_op(const struct dmop_args *op_args) break; } - case XEN_DMOP_set_ioreq_server_state: - { - const struct xen_dm_op_set_ioreq_server_state *data = - &op.u.set_ioreq_server_state; - - rc = -EINVAL; - if ( data->pad ) - break; - - rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled); - break; - } - - case XEN_DMOP_destroy_ioreq_server: - { - const struct xen_dm_op_destroy_ioreq_server *data = - &op.u.destroy_ioreq_server; - - rc = -EINVAL; - if ( data->pad ) - break; - - rc = hvm_destroy_ioreq_server(d, data->id); - break; - } - case XEN_DMOP_track_dirty_vram: { const struct xen_dm_op_track_dirty_vram *data = - &op.u.track_dirty_vram; + &op->u.track_dirty_vram; rc = -EINVAL; if ( data->pad ) @@ -568,7 +408,7 @@ static int dm_op(const struct dmop_args *op_args) case XEN_DMOP_set_pci_intx_level: { const struct xen_dm_op_set_pci_intx_level *data = - &op.u.set_pci_intx_level; + &op->u.set_pci_intx_level; rc = set_pci_intx_level(d, data->domain, data->bus, data->device, data->intx, @@ -579,7 +419,7 @@ static int dm_op(const struct dmop_args *op_args) case XEN_DMOP_set_isa_irq_level: { const struct xen_dm_op_set_isa_irq_level *data = - &op.u.set_isa_irq_level; + &op->u.set_isa_irq_level; rc = set_isa_irq_level(d, data->isa_irq, data->level); break; @@ -588,7 +428,7 @@ static int dm_op(const struct dmop_args *op_args) case XEN_DMOP_set_pci_link_route: { const struct xen_dm_op_set_pci_link_route *data = - &op.u.set_pci_link_route; + &op->u.set_pci_link_route; rc = hvm_set_pci_link_route(d, data->link, data->isa_irq); break; @@ -597,19 +437,19 @@ static int dm_op(const struct dmop_args *op_args) case XEN_DMOP_modified_memory: { struct xen_dm_op_modified_memory *data = - &op.u.modified_memory; + &op->u.modified_memory; rc = modified_memory(d, op_args, data); - const_op = !rc; + *const_op = !rc; break; } case XEN_DMOP_set_mem_type: { struct xen_dm_op_set_mem_type *data = - &op.u.set_mem_type; + &op->u.set_mem_type; - const_op = false; + *const_op = false; rc = -EINVAL; if ( data->pad ) @@ -622,7 +462,7 @@ static int dm_op(const struct dmop_args *op_args) case XEN_DMOP_inject_event: { const struct xen_dm_op_inject_event *data = - &op.u.inject_event; + &op->u.inject_event; rc = -EINVAL; if ( data->pad0 || data->pad1 ) @@ -635,7 +475,7 @@ static int dm_op(const struct dmop_args *op_args) case XEN_DMOP_inject_msi: { const struct xen_dm_op_inject_msi *data = - &op.u.inject_msi; + &op->u.inject_msi; rc = -EINVAL; if ( data->pad ) @@ -648,7 +488,7 @@ static int dm_op(const struct dmop_args *op_args) case 
XEN_DMOP_remote_shutdown: { const struct xen_dm_op_remote_shutdown *data = - &op.u.remote_shutdown; + &op->u.remote_shutdown; domain_shutdown(d, data->reason); rc = 0; @@ -657,7 +497,7 @@ static int dm_op(const struct dmop_args *op_args) case XEN_DMOP_relocate_memory: { - struct xen_dm_op_relocate_memory *data = &op.u.relocate_memory; + struct xen_dm_op_relocate_memory *data = &op->u.relocate_memory; struct xen_add_to_physmap xatp = { .domid = op_args->domid, .size = data->size, @@ -680,7 +520,7 @@ static int dm_op(const struct dmop_args *op_args) data->size -= rc; data->src_gfn += rc; data->dst_gfn += rc; - const_op = false; + *const_op = false; rc = -ERESTART; } break; @@ -689,7 +529,7 @@ static int dm_op(const struct dmop_args *op_args) case XEN_DMOP_pin_memory_cacheattr: { const struct xen_dm_op_pin_memory_cacheattr *data = - &op.u.pin_memory_cacheattr; + &op->u.pin_memory_cacheattr; if ( data->pad ) { @@ -707,97 +547,6 @@ static int dm_op(const struct dmop_args *op_args) break; } - if ( (!rc || rc == -ERESTART) && - !const_op && copy_to_guest_offset(op_args->buf[0].h, offset, - (void *)&op.u, op_size[op.op]) ) - rc = -EFAULT; - - out: - rcu_unlock_domain(d); - - return rc; -} - -#include - -CHECK_dm_op_create_ioreq_server; -CHECK_dm_op_get_ioreq_server_info; -CHECK_dm_op_ioreq_server_range; -CHECK_dm_op_set_ioreq_server_state; -CHECK_dm_op_destroy_ioreq_server; -CHECK_dm_op_track_dirty_vram; -CHECK_dm_op_set_pci_intx_level; -CHECK_dm_op_set_isa_irq_level; -CHECK_dm_op_set_pci_link_route; -CHECK_dm_op_modified_memory; -CHECK_dm_op_set_mem_type; -CHECK_dm_op_inject_event; -CHECK_dm_op_inject_msi; -CHECK_dm_op_map_mem_type_to_ioreq_server; -CHECK_dm_op_remote_shutdown; -CHECK_dm_op_relocate_memory; -CHECK_dm_op_pin_memory_cacheattr; - -int compat_dm_op(domid_t domid, - unsigned int nr_bufs, - XEN_GUEST_HANDLE_PARAM(void) bufs) -{ - struct dmop_args args; - unsigned int i; - int rc; - - if ( nr_bufs > ARRAY_SIZE(args.buf) ) - return -E2BIG; - - args.domid = domid; - args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1); - - for ( i = 0; i < args.nr_bufs; i++ ) - { - struct compat_dm_op_buf cmp; - - if ( copy_from_guest_offset(&cmp, bufs, i, 1) ) - return -EFAULT; - -#define XLAT_dm_op_buf_HNDL_h(_d_, _s_) \ - guest_from_compat_handle((_d_)->h, (_s_)->h) - - XLAT_dm_op_buf(&args.buf[i], &cmp); - -#undef XLAT_dm_op_buf_HNDL_h - } - - rc = dm_op(&args); - - if ( rc == -ERESTART ) - rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih", - domid, nr_bufs, bufs); - - return rc; -} - -long do_dm_op(domid_t domid, - unsigned int nr_bufs, - XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs) -{ - struct dmop_args args; - int rc; - - if ( nr_bufs > ARRAY_SIZE(args.buf) ) - return -E2BIG; - - args.domid = domid; - args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1); - - if ( copy_from_guest_offset(&args.buf[0], bufs, 0, args.nr_bufs) ) - return -EFAULT; - - rc = dm_op(&args); - - if ( rc == -ERESTART ) - rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih", - domid, nr_bufs, bufs); - return rc; } diff --git a/xen/common/Makefile b/xen/common/Makefile index c0e91c4..460f214 100644 --- a/xen/common/Makefile +++ b/xen/common/Makefile @@ -6,6 +6,7 @@ obj-$(CONFIG_CORE_PARKING) += core_parking.o obj-y += cpu.o obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o +obj-$(CONFIG_IOREQ_SERVER) += dm.o obj-y += domain.o obj-y += event_2l.o obj-y += event_channel.o diff --git a/xen/common/dm.c b/xen/common/dm.c new file mode 100644 
index 0000000..36e01a2 --- /dev/null +++ b/xen/common/dm.c @@ -0,0 +1,291 @@ +/* + * Copyright (c) 2016 Citrix Systems Inc. + * Copyright (c) 2019 Arm ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; If not, see . + */ + +#include +#include +#include +#include +#include + +static int dm_op(const struct dmop_args *op_args) +{ + struct domain *d; + struct xen_dm_op op; + long rc; + bool const_op = true; + const size_t offset = offsetof(struct xen_dm_op, u); + + static const uint8_t op_size[] = { + [XEN_DMOP_create_ioreq_server] = sizeof(struct xen_dm_op_create_ioreq_server), + [XEN_DMOP_get_ioreq_server_info] = sizeof(struct xen_dm_op_get_ioreq_server_info), + [XEN_DMOP_map_io_range_to_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range), + [XEN_DMOP_unmap_io_range_from_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range), + [XEN_DMOP_set_ioreq_server_state] = sizeof(struct xen_dm_op_set_ioreq_server_state), + [XEN_DMOP_destroy_ioreq_server] = sizeof(struct xen_dm_op_destroy_ioreq_server), + [XEN_DMOP_track_dirty_vram] = sizeof(struct xen_dm_op_track_dirty_vram), + [XEN_DMOP_set_pci_intx_level] = sizeof(struct xen_dm_op_set_pci_intx_level), + [XEN_DMOP_set_isa_irq_level] = sizeof(struct xen_dm_op_set_isa_irq_level), + [XEN_DMOP_set_pci_link_route] = sizeof(struct xen_dm_op_set_pci_link_route), + [XEN_DMOP_modified_memory] = sizeof(struct xen_dm_op_modified_memory), + [XEN_DMOP_set_mem_type] = sizeof(struct xen_dm_op_set_mem_type), + [XEN_DMOP_inject_event] = sizeof(struct xen_dm_op_inject_event), + [XEN_DMOP_inject_msi] = sizeof(struct xen_dm_op_inject_msi), + [XEN_DMOP_map_mem_type_to_ioreq_server] = sizeof(struct xen_dm_op_map_mem_type_to_ioreq_server), + [XEN_DMOP_remote_shutdown] = sizeof(struct xen_dm_op_remote_shutdown), + [XEN_DMOP_relocate_memory] = sizeof(struct xen_dm_op_relocate_memory), + [XEN_DMOP_pin_memory_cacheattr] = sizeof(struct xen_dm_op_pin_memory_cacheattr), + }; + + rc = rcu_lock_remote_domain_by_id(op_args->domid, &d); + if ( rc ) + return rc; + + if ( !is_hvm_domain(d) ) + goto out; + + rc = xsm_dm_op(XSM_DM_PRIV, d); + if ( rc ) + goto out; + + rc = -EFAULT; + if ( op_args->buf[0].size < offset ) + goto out; + + if ( copy_from_guest_offset((void *)&op, op_args->buf[0].h, 0, offset) ) + goto out; + + if ( op.op >= ARRAY_SIZE(op_size) ) + { + rc = -EOPNOTSUPP; + goto out; + } + + op.op = array_index_nospec(op.op, ARRAY_SIZE(op_size)); + + if ( op_args->buf[0].size < offset + op_size[op.op] ) + goto out; + + if ( copy_from_guest_offset((void *)&op.u, op_args->buf[0].h, offset, + op_size[op.op]) ) + goto out; + + rc = -EINVAL; + if ( op.pad ) + goto out; + + switch ( op.op ) + { + case XEN_DMOP_create_ioreq_server: + { + struct xen_dm_op_create_ioreq_server *data = + &op.u.create_ioreq_server; + + const_op = false; + + rc = -EINVAL; + if ( data->pad[0] || data->pad[1] || data->pad[2] ) + break; + + rc = hvm_create_ioreq_server(d, data->handle_bufioreq, + &data->id); + break; + } + + case XEN_DMOP_get_ioreq_server_info: + { + struct 
xen_dm_op_get_ioreq_server_info *data = + &op.u.get_ioreq_server_info; + const uint16_t valid_flags = XEN_DMOP_no_gfns; + + const_op = false; + + rc = -EINVAL; + if ( data->flags & ~valid_flags ) + break; + + rc = hvm_get_ioreq_server_info(d, data->id, + (data->flags & XEN_DMOP_no_gfns) ? + NULL : (unsigned long *)&data->ioreq_gfn, + (data->flags & XEN_DMOP_no_gfns) ? + NULL : (unsigned long *)&data->bufioreq_gfn, + &data->bufioreq_port); + break; + } + + case XEN_DMOP_map_io_range_to_ioreq_server: + { + const struct xen_dm_op_ioreq_server_range *data = + &op.u.map_io_range_to_ioreq_server; + + rc = -EINVAL; + if ( data->pad ) + break; + + rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type, + data->start, data->end); + break; + } + + case XEN_DMOP_unmap_io_range_from_ioreq_server: + { + const struct xen_dm_op_ioreq_server_range *data = + &op.u.unmap_io_range_from_ioreq_server; + + rc = -EINVAL; + if ( data->pad ) + break; + + rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type, + data->start, data->end); + break; + } + + case XEN_DMOP_set_ioreq_server_state: + { + const struct xen_dm_op_set_ioreq_server_state *data = + &op.u.set_ioreq_server_state; + + rc = -EINVAL; + if ( data->pad ) + break; + + rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled); + break; + } + + case XEN_DMOP_destroy_ioreq_server: + { + const struct xen_dm_op_destroy_ioreq_server *data = + &op.u.destroy_ioreq_server; + + rc = -EINVAL; + if ( data->pad ) + break; + + rc = hvm_destroy_ioreq_server(d, data->id); + break; + } + + default: + rc = arch_dm_op(&op, d, op_args, &const_op); + } + + if ( (!rc || rc == -ERESTART) && + !const_op && copy_to_guest_offset(op_args->buf[0].h, offset, + (void *)&op.u, op_size[op.op]) ) + rc = -EFAULT; + + out: + rcu_unlock_domain(d); + + return rc; +} + +#ifdef CONFIG_COMPAT +#include + +CHECK_dm_op_create_ioreq_server; +CHECK_dm_op_get_ioreq_server_info; +CHECK_dm_op_ioreq_server_range; +CHECK_dm_op_set_ioreq_server_state; +CHECK_dm_op_destroy_ioreq_server; +CHECK_dm_op_track_dirty_vram; +CHECK_dm_op_set_pci_intx_level; +CHECK_dm_op_set_isa_irq_level; +CHECK_dm_op_set_pci_link_route; +CHECK_dm_op_modified_memory; +CHECK_dm_op_set_mem_type; +CHECK_dm_op_inject_event; +CHECK_dm_op_inject_msi; +CHECK_dm_op_map_mem_type_to_ioreq_server; +CHECK_dm_op_remote_shutdown; +CHECK_dm_op_relocate_memory; +CHECK_dm_op_pin_memory_cacheattr; + +int compat_dm_op(domid_t domid, + unsigned int nr_bufs, + XEN_GUEST_HANDLE_PARAM(void) bufs) +{ + struct dmop_args args; + unsigned int i; + int rc; + + if ( nr_bufs > ARRAY_SIZE(args.buf) ) + return -E2BIG; + + args.domid = domid; + args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1); + + for ( i = 0; i < args.nr_bufs; i++ ) + { + struct compat_dm_op_buf cmp; + + if ( copy_from_guest_offset(&cmp, bufs, i, 1) ) + return -EFAULT; + +#define XLAT_dm_op_buf_HNDL_h(_d_, _s_) \ + guest_from_compat_handle((_d_)->h, (_s_)->h) + + XLAT_dm_op_buf(&args.buf[i], &cmp); + +#undef XLAT_dm_op_buf_HNDL_h + } + + rc = dm_op(&args); + + if ( rc == -ERESTART ) + rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih", + domid, nr_bufs, bufs); + + return rc; +} +#endif + +long do_dm_op(domid_t domid, + unsigned int nr_bufs, + XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs) +{ + struct dmop_args args; + int rc; + + if ( nr_bufs > ARRAY_SIZE(args.buf) ) + return -E2BIG; + + args.domid = domid; + args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1); + + if ( copy_from_guest_offset(&args.buf[0], bufs, 0, 
args.nr_bufs) ) + return -EFAULT; + + rc = dm_op(&args); + + if ( rc == -ERESTART ) + rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih", + domid, nr_bufs, bufs); + + return rc; +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/xen/dm.h b/xen/include/xen/dm.h new file mode 100644 index 0000000..ef15edf --- /dev/null +++ b/xen/include/xen/dm.h @@ -0,0 +1,44 @@ +/* + * Copyright (c) 2016 Citrix Systems Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; If not, see . + */ + +#ifndef __XEN_DM_H__ +#define __XEN_DM_H__ + +#include + +struct dmop_args { + domid_t domid; + unsigned int nr_bufs; + /* Reserve enough buf elements for all current hypercalls. */ + struct xen_dm_op_buf buf[2]; +}; + +int arch_dm_op(struct xen_dm_op *op, + struct domain *d, + const struct dmop_args *op_args, + bool *const_op); + +#endif /* __XEN_DM_H__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h index 7ae3c40..5c61d8e 100644 --- a/xen/include/xsm/dummy.h +++ b/xen/include/xsm/dummy.h @@ -707,14 +707,14 @@ static XSM_INLINE int xsm_pmu_op (XSM_DEFAULT_ARG struct domain *d, unsigned int } } +#endif /* CONFIG_X86 */ + static XSM_INLINE int xsm_dm_op(XSM_DEFAULT_ARG struct domain *d) { XSM_ASSERT_ACTION(XSM_DM_PRIV); return xsm_default_action(action, current->domain, d); } -#endif /* CONFIG_X86 */ - #ifdef CONFIG_ARGO static XSM_INLINE int xsm_argo_enable(const struct domain *d) { diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h index 7bd03d8..91ecff4 100644 --- a/xen/include/xsm/xsm.h +++ b/xen/include/xsm/xsm.h @@ -176,8 +176,8 @@ struct xsm_operations { int (*ioport_permission) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow); int (*ioport_mapping) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow); int (*pmu_op) (struct domain *d, unsigned int op); - int (*dm_op) (struct domain *d); #endif + int (*dm_op) (struct domain *d); int (*xen_version) (uint32_t cmd); int (*domain_resource_map) (struct domain *d); #ifdef CONFIG_ARGO @@ -682,13 +682,13 @@ static inline int xsm_pmu_op (xsm_default_t def, struct domain *d, unsigned int return xsm_ops->pmu_op(d, op); } +#endif /* CONFIG_X86 */ + static inline int xsm_dm_op(xsm_default_t def, struct domain *d) { return xsm_ops->dm_op(d); } -#endif /* CONFIG_X86 */ - static inline int xsm_xen_version (xsm_default_t def, uint32_t op) { return xsm_ops->xen_version(op); diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c index 9e09512..8bdffe7 100644 --- a/xen/xsm/dummy.c +++ b/xen/xsm/dummy.c @@ -147,8 +147,8 @@ void __init xsm_fixup_ops (struct xsm_operations *ops) set_to_dummy_if_null(ops, ioport_permission); set_to_dummy_if_null(ops, ioport_mapping); set_to_dummy_if_null(ops, pmu_op); - set_to_dummy_if_null(ops, dm_op); #endif + set_to_dummy_if_null(ops, dm_op); 
set_to_dummy_if_null(ops, xen_version); set_to_dummy_if_null(ops, domain_resource_map); #ifdef CONFIG_ARGO diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c index 19b0d9e..11784d7 100644 --- a/xen/xsm/flask/hooks.c +++ b/xen/xsm/flask/hooks.c @@ -1656,14 +1656,13 @@ static int flask_pmu_op (struct domain *d, unsigned int op) return -EPERM; } } +#endif /* CONFIG_X86 */ static int flask_dm_op(struct domain *d) { return current_has_perm(d, SECCLASS_HVM, HVM__DM); } -#endif /* CONFIG_X86 */ - static int flask_xen_version (uint32_t op) { u32 dsid = domain_sid(current->domain); @@ -1865,8 +1864,8 @@ static struct xsm_operations flask_ops = { .ioport_permission = flask_ioport_permission, .ioport_mapping = flask_ioport_mapping, .pmu_op = flask_pmu_op, - .dm_op = flask_dm_op, #endif + .dm_op = flask_dm_op, .xen_version = flask_xen_version, .domain_resource_map = flask_domain_resource_map, #ifdef CONFIG_ARGO

From patchwork Mon Nov 30 10:31:25 2020 X-Patchwork-Submitter: Oleksandr Tyshchenko X-Patchwork-Id: 11940167 From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Julien Grall, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini, Volodymyr Babchuk, Oleksandr Tyshchenko Subject: [PATCH V3 10/23] xen/mm: Make x86's XENMEM_resource_ioreq_server handling common Date: Mon, 30 Nov 2020 12:31:25 +0200 Message-Id: <1606732298-22107-11-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> From: Julien Grall

As the x86 implementation of XENMEM_resource_ioreq_server can be re-used on Arm later on, this patch makes it common and removes arch_acquire_resource as unneeded. Also re-order the #include-s alphabetically. This support is going to be used on Arm to be able to run a device emulator outside of the Xen hypervisor.
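To see the shape of the guard used by the now-common helper: the resource id arrives as an unsigned int, while IOREQ server ids use the narrower ioservid_t, so the code assigns the value down and compares it back to reject anything out of range. Below is a minimal standalone sketch of that check; using uint16_t as the width of ioservid_t is an assumption made purely for illustration:

    #include <errno.h>
    #include <stdint.h>

    typedef uint16_t ioservid_t;          /* stand-in width for the sketch */

    static int validate_ioservid(unsigned int id)
    {
        ioservid_t ioservid = id;         /* narrowing assignment may truncate */

        /*
         * A value that does not survive the round trip cannot name a real
         * IOREQ server, so reject it before any lookup is attempted.
         */
        if ( id != (unsigned int)ioservid )
            return -EINVAL;

        return 0;
    }

With these stand-in types, validate_ioservid(3) returns 0, while validate_ioservid(0x10000) truncates to 0 on assignment and returns -EINVAL.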
Signed-off-by: Julien Grall Signed-off-by: Oleksandr Tyshchenko Reviewed-by: Jan Beulich --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes RFC -> V1: - no changes Changes V1 -> V2: - update the author of a patch Changes V2 -> V3: - don't wrap #include - limit the number of #ifdef-s - re-order #include-s alphabetically --- --- xen/arch/x86/mm.c | 44 --------------------------------- xen/common/memory.c | 63 +++++++++++++++++++++++++++++++++++++++--------- xen/include/asm-arm/mm.h | 8 ------ xen/include/asm-x86/mm.h | 4 --- 4 files changed, 51 insertions(+), 68 deletions(-) diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c index e4638ef..c0a7124 100644 --- a/xen/arch/x86/mm.c +++ b/xen/arch/x86/mm.c @@ -4699,50 +4699,6 @@ int xenmem_add_to_physmap_one( return rc; } -int arch_acquire_resource(struct domain *d, unsigned int type, - unsigned int id, unsigned long frame, - unsigned int nr_frames, xen_pfn_t mfn_list[]) -{ - int rc; - - switch ( type ) - { -#ifdef CONFIG_HVM - case XENMEM_resource_ioreq_server: - { - ioservid_t ioservid = id; - unsigned int i; - - rc = -EINVAL; - if ( !is_hvm_domain(d) ) - break; - - if ( id != (unsigned int)ioservid ) - break; - - rc = 0; - for ( i = 0; i < nr_frames; i++ ) - { - mfn_t mfn; - - rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn); - if ( rc ) - break; - - mfn_list[i] = mfn_x(mfn); - } - break; - } -#endif - - default: - rc = -EOPNOTSUPP; - break; - } - - return rc; -} - long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg) { int rc; diff --git a/xen/common/memory.c b/xen/common/memory.c index 2c86934..92cf983 100644 --- a/xen/common/memory.c +++ b/xen/common/memory.c @@ -8,22 +8,23 @@ */ #include -#include +#include +#include +#include +#include +#include +#include +#include #include +#include #include +#include +#include #include #include #include -#include -#include -#include -#include -#include -#include -#include -#include #include -#include +#include #include #include #include @@ -1086,6 +1087,40 @@ static int acquire_grant_table(struct domain *d, unsigned int id, return 0; } +static int acquire_ioreq_server(struct domain *d, + unsigned int id, + unsigned long frame, + unsigned int nr_frames, + xen_pfn_t mfn_list[]) +{ +#ifdef CONFIG_IOREQ_SERVER + ioservid_t ioservid = id; + unsigned int i; + int rc; + + if ( !is_hvm_domain(d) ) + return -EINVAL; + + if ( id != (unsigned int)ioservid ) + return -EINVAL; + + for ( i = 0; i < nr_frames; i++ ) + { + mfn_t mfn; + + rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn); + if ( rc ) + return rc; + + mfn_list[i] = mfn_x(mfn); + } + + return 0; +#else + return -EOPNOTSUPP; +#endif +} + static int acquire_resource( XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg) { @@ -1144,9 +1179,13 @@ static int acquire_resource( mfn_list); break; + case XENMEM_resource_ioreq_server: + rc = acquire_ioreq_server(d, xmar.id, xmar.frame, xmar.nr_frames, + mfn_list); + break; + default: - rc = arch_acquire_resource(d, xmar.type, xmar.id, xmar.frame, - xmar.nr_frames, mfn_list); + rc = -EOPNOTSUPP; break; } diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h index f8ba49b..0b7de31 100644 --- a/xen/include/asm-arm/mm.h +++ b/xen/include/asm-arm/mm.h @@ -358,14 +358,6 @@ static inline void put_page_and_type(struct page_info *page) void clear_and_clean_page(struct page_info *page); -static inline -int arch_acquire_resource(struct domain *d, unsigned int type, unsigned int id, - unsigned long 
frame, unsigned int nr_frames, - xen_pfn_t mfn_list[]) -{ - return -EOPNOTSUPP; -} - unsigned int arch_get_dma_bitsize(void); #endif /* __ARCH_ARM_MM__ */ diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h index deeba75..859214e 100644 --- a/xen/include/asm-x86/mm.h +++ b/xen/include/asm-x86/mm.h @@ -639,8 +639,4 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn) return mfn <= (virt_to_mfn(eva - 1) + 1); } -int arch_acquire_resource(struct domain *d, unsigned int type, - unsigned int id, unsigned long frame, - unsigned int nr_frames, xen_pfn_t mfn_list[]); - #endif /* __ASM_X86_MM_H__ */

From patchwork Mon Nov 30 10:31:26 2020 X-Patchwork-Submitter: Oleksandr Tyshchenko X-Patchwork-Id: 11940151 From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini, Jun Nakajima, Kevin Tian, Julien Grall Subject: [PATCH V3 11/23] xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu Date: Mon, 30 Nov 2020 12:31:26 +0200 Message-Id: <1606732298-22107-12-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> From: Oleksandr Tyshchenko

IOREQ is a common feature now and these fields will be used on Arm as-is. Move them to the common struct vcpu as part of a new struct vcpu_io and drop the duplicated "io" prefixes. Also move enum hvm_io_completion to xen/sched.h and remove the "hvm" prefixes. This patch completely removes the layering violation in the common code.
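Before the diff, a reduced model of the relocation may help. The types below are stand-ins chosen for this sketch (ioreq_t and the scheduler fields are elided), not the real Xen definitions:

    /* Sketch only: models the field move performed by this patch. */
    enum io_completion {                  /* was enum hvm_io_completion */
        IO_no_completion,                 /* was HVMIO_no_completion, etc. */
        IO_mmio_completion,
        IO_pio_completion,
    };

    struct ioreq { int state; };          /* stand-in for ioreq_t */

    struct vcpu_io {
        enum io_completion completion;    /* was hvm_vcpu_io.io_completion */
        struct ioreq req;                 /* was hvm_vcpu_io.io_req */
    };

    struct vcpu {
        /* ... scheduler and arch state elided ... */
        struct vcpu_io io;                /* common now (CONFIG_IOREQ_SERVER) */
    };

Call sites change accordingly, e.g. v->arch.hvm.hvm_io.io_req.state becomes v->io.req.state, and vio->io_completion becomes v->io.completion, as the hunks below show.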
Signed-off-by: Oleksandr Tyshchenko CC: Julien Grall --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes V1 -> V2: - new patch Changes V2 -> V3: - update the patch according to the "legacy interface" being x86-specific - update the patch description - drop the "io" prefixes from the field names - wrap IO_realmode_completion --- --- xen/arch/x86/hvm/emulate.c | 72 +++++++++++++++++++-------------------- xen/arch/x86/hvm/hvm.c | 2 +- xen/arch/x86/hvm/io.c | 8 ++--- xen/arch/x86/hvm/ioreq.c | 4 +-- xen/arch/x86/hvm/svm/nestedsvm.c | 2 +- xen/arch/x86/hvm/vmx/realmode.c | 6 ++-- xen/common/ioreq.c | 22 ++++++------ xen/include/asm-x86/hvm/emulate.h | 2 +- xen/include/asm-x86/hvm/ioreq.h | 2 +- xen/include/asm-x86/hvm/vcpu.h | 11 ------ xen/include/xen/sched.h | 19 +++++++++++ 11 files changed, 79 insertions(+), 71 deletions(-) diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c index 4746d5a..04e4994 100644 --- a/xen/arch/x86/hvm/emulate.c +++ b/xen/arch/x86/hvm/emulate.c @@ -142,8 +142,8 @@ void hvmemul_cancel(struct vcpu *v) { struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io; - vio->io_req.state = STATE_IOREQ_NONE; - vio->io_completion = HVMIO_no_completion; + v->io.req.state = STATE_IOREQ_NONE; + v->io.completion = IO_no_completion; vio->mmio_cache_count = 0; vio->mmio_insn_bytes = 0; vio->mmio_access = (struct npfec){}; @@ -159,7 +159,7 @@ static int hvmemul_do_io( { struct vcpu *curr = current; struct domain *currd = curr->domain; - struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io; + struct vcpu_io *vio = &curr->io; ioreq_t p = { .type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO, .addr = addr, @@ -184,13 +184,13 @@ static int hvmemul_do_io( return X86EMUL_UNHANDLEABLE; } - switch ( vio->io_req.state ) + switch ( vio->req.state ) { case STATE_IOREQ_NONE: break; case STATE_IORESP_READY: - vio->io_req.state = STATE_IOREQ_NONE; - p = vio->io_req; + vio->req.state = STATE_IOREQ_NONE; + p = vio->req; /* Verify the emulation request has been correctly re-issued */ if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO)) || @@ -238,7 +238,7 @@ static int hvmemul_do_io( } ASSERT(p.count); - vio->io_req = p; + vio->req = p; rc = hvm_io_intercept(&p); @@ -247,12 +247,12 @@ static int hvmemul_do_io( * our callers and mirror this into latched state.
*/ ASSERT(p.count <= *reps); - *reps = vio->io_req.count = p.count; + *reps = vio->req.count = p.count; switch ( rc ) { case X86EMUL_OKAY: - vio->io_req.state = STATE_IOREQ_NONE; + vio->req.state = STATE_IOREQ_NONE; break; case X86EMUL_UNHANDLEABLE: { @@ -305,7 +305,7 @@ static int hvmemul_do_io( if ( s == NULL ) { rc = X86EMUL_RETRY; - vio->io_req.state = STATE_IOREQ_NONE; + vio->req.state = STATE_IOREQ_NONE; break; } @@ -316,7 +316,7 @@ static int hvmemul_do_io( if ( dir == IOREQ_READ ) { rc = hvm_process_io_intercept(&ioreq_server_handler, &p); - vio->io_req.state = STATE_IOREQ_NONE; + vio->req.state = STATE_IOREQ_NONE; break; } } @@ -329,14 +329,14 @@ static int hvmemul_do_io( if ( !s ) { rc = hvm_process_io_intercept(&null_handler, &p); - vio->io_req.state = STATE_IOREQ_NONE; + vio->req.state = STATE_IOREQ_NONE; } else { rc = hvm_send_ioreq(s, &p, 0); if ( rc != X86EMUL_RETRY || currd->is_shutting_down ) - vio->io_req.state = STATE_IOREQ_NONE; - else if ( !ioreq_needs_completion(&vio->io_req) ) + vio->req.state = STATE_IOREQ_NONE; + else if ( !ioreq_needs_completion(&vio->req) ) rc = X86EMUL_OKAY; } break; @@ -1854,7 +1854,7 @@ static int hvmemul_rep_movs( * cheaper than multiple round trips through the device model. Yet * when processing a response we can always re-use the translation. */ - (vio->io_req.state == STATE_IORESP_READY || + (curr->io.req.state == STATE_IORESP_READY || ((!df || *reps == 1) && PAGE_SIZE - (saddr & ~PAGE_MASK) >= *reps * bytes_per_rep)) ) sgpa = pfn_to_paddr(vio->mmio_gpfn) | (saddr & ~PAGE_MASK); @@ -1870,7 +1870,7 @@ static int hvmemul_rep_movs( if ( vio->mmio_access.write_access && (vio->mmio_gla == (daddr & PAGE_MASK)) && /* See comment above. */ - (vio->io_req.state == STATE_IORESP_READY || + (curr->io.req.state == STATE_IORESP_READY || ((!df || *reps == 1) && PAGE_SIZE - (daddr & ~PAGE_MASK) >= *reps * bytes_per_rep)) ) dgpa = pfn_to_paddr(vio->mmio_gpfn) | (daddr & ~PAGE_MASK); @@ -2007,7 +2007,7 @@ static int hvmemul_rep_stos( if ( vio->mmio_access.write_access && (vio->mmio_gla == (addr & PAGE_MASK)) && /* See respective comment in MOVS processing. */ - (vio->io_req.state == STATE_IORESP_READY || + (curr->io.req.state == STATE_IORESP_READY || ((!df || *reps == 1) && PAGE_SIZE - (addr & ~PAGE_MASK) >= *reps * bytes_per_rep)) ) gpa = pfn_to_paddr(vio->mmio_gpfn) | (addr & ~PAGE_MASK); @@ -2613,13 +2613,13 @@ static const struct x86_emulate_ops hvm_emulate_ops_no_write = { }; /* - * Note that passing HVMIO_no_completion into this function serves as kind + * Note that passing IO_no_completion into this function serves as kind * of (but not fully) an "auto select completion" indicator. When there's * no completion needed, the passed in value will be ignored in any case. 
*/ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt, const struct x86_emulate_ops *ops, - enum hvm_io_completion completion) + enum io_completion completion) { const struct cpu_user_regs *regs = hvmemul_ctxt->ctxt.regs; struct vcpu *curr = current; @@ -2634,11 +2634,11 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt, */ if ( vio->cache->num_ents > vio->cache->max_ents ) { - ASSERT(vio->io_req.state == STATE_IOREQ_NONE); + ASSERT(curr->io.req.state == STATE_IOREQ_NONE); vio->cache->num_ents = 0; } else - ASSERT(vio->io_req.state == STATE_IORESP_READY); + ASSERT(curr->io.req.state == STATE_IORESP_READY); hvm_emulate_init_per_insn(hvmemul_ctxt, vio->mmio_insn, vio->mmio_insn_bytes); @@ -2649,25 +2649,25 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt, if ( rc == X86EMUL_OKAY && vio->mmio_retry ) rc = X86EMUL_RETRY; - if ( !ioreq_needs_completion(&vio->io_req) ) - completion = HVMIO_no_completion; - else if ( completion == HVMIO_no_completion ) - completion = (vio->io_req.type != IOREQ_TYPE_PIO || - hvmemul_ctxt->is_mem_access) ? HVMIO_mmio_completion - : HVMIO_pio_completion; + if ( !ioreq_needs_completion(&curr->io.req) ) + completion = IO_no_completion; + else if ( completion == IO_no_completion ) + completion = (curr->io.req.type != IOREQ_TYPE_PIO || + hvmemul_ctxt->is_mem_access) ? IO_mmio_completion + : IO_pio_completion; - switch ( vio->io_completion = completion ) + switch ( curr->io.completion = completion ) { - case HVMIO_no_completion: - case HVMIO_pio_completion: + case IO_no_completion: + case IO_pio_completion: vio->mmio_cache_count = 0; vio->mmio_insn_bytes = 0; vio->mmio_access = (struct npfec){}; hvmemul_cache_disable(curr); break; - case HVMIO_mmio_completion: - case HVMIO_realmode_completion: + case IO_mmio_completion: + case IO_realmode_completion: BUILD_BUG_ON(sizeof(vio->mmio_insn) < sizeof(hvmemul_ctxt->insn_buf)); vio->mmio_insn_bytes = hvmemul_ctxt->insn_buf_bytes; memcpy(vio->mmio_insn, hvmemul_ctxt->insn_buf, vio->mmio_insn_bytes); @@ -2716,7 +2716,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt, int hvm_emulate_one( struct hvm_emulate_ctxt *hvmemul_ctxt, - enum hvm_io_completion completion) + enum io_completion completion) { return _hvm_emulate_one(hvmemul_ctxt, &hvm_emulate_ops, completion); } @@ -2754,7 +2754,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla) guest_cpu_user_regs()); ctxt.ctxt.data = &mmio_ro_ctxt; - switch ( rc = _hvm_emulate_one(&ctxt, ops, HVMIO_no_completion) ) + switch ( rc = _hvm_emulate_one(&ctxt, ops, IO_no_completion) ) { case X86EMUL_UNHANDLEABLE: case X86EMUL_UNIMPLEMENTED: @@ -2782,7 +2782,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr, { case EMUL_KIND_NOWRITE: rc = _hvm_emulate_one(&ctx, &hvm_emulate_ops_no_write, - HVMIO_no_completion); + IO_no_completion); break; case EMUL_KIND_SET_CONTEXT_INSN: { struct vcpu *curr = current; @@ -2803,7 +2803,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr, /* Fall-through */ default: ctx.set_context = (kind == EMUL_KIND_SET_CONTEXT_DATA); - rc = hvm_emulate_one(&ctx, HVMIO_no_completion); + rc = hvm_emulate_one(&ctx, IO_no_completion); } switch ( rc ) diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c index 54e32e4..cc46909 100644 --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -3800,7 +3800,7 @@ void hvm_ud_intercept(struct cpu_user_regs *regs) return; } - switch ( hvm_emulate_one(&ctxt, HVMIO_no_completion) ) + switch ( 
hvm_emulate_one(&ctxt, IO_no_completion) ) { case X86EMUL_UNHANDLEABLE: case X86EMUL_UNIMPLEMENTED: diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c index b220d6b..327a6a2 100644 --- a/xen/arch/x86/hvm/io.c +++ b/xen/arch/x86/hvm/io.c @@ -85,7 +85,7 @@ bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr) hvm_emulate_init_once(&ctxt, validate, guest_cpu_user_regs()); - switch ( rc = hvm_emulate_one(&ctxt, HVMIO_no_completion) ) + switch ( rc = hvm_emulate_one(&ctxt, IO_no_completion) ) { case X86EMUL_UNHANDLEABLE: hvm_dump_emulation_state(XENLOG_G_WARNING, descr, &ctxt, rc); @@ -122,7 +122,7 @@ bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn, bool handle_pio(uint16_t port, unsigned int size, int dir) { struct vcpu *curr = current; - struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io; + struct vcpu_io *vio = &curr->io; unsigned int data; int rc; @@ -135,8 +135,8 @@ bool handle_pio(uint16_t port, unsigned int size, int dir) rc = hvmemul_do_pio_buffer(port, size, dir, &data); - if ( ioreq_needs_completion(&vio->io_req) ) - vio->io_completion = HVMIO_pio_completion; + if ( ioreq_needs_completion(&vio->req) ) + vio->completion = IO_pio_completion; switch ( rc ) { diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index 009a95a..7808b75 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -41,11 +41,11 @@ bool ioreq_complete_mmio(void) return handle_mmio(); } -bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion) +bool arch_vcpu_ioreq_completion(enum io_completion io_completion) { switch ( io_completion ) { - case HVMIO_realmode_completion: + case IO_realmode_completion: { struct hvm_emulate_ctxt ctxt; diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c index fcfccf7..6d90630 100644 --- a/xen/arch/x86/hvm/svm/nestedsvm.c +++ b/xen/arch/x86/hvm/svm/nestedsvm.c @@ -1266,7 +1266,7 @@ enum hvm_intblk nsvm_intr_blocked(struct vcpu *v) * Delay the injection because this would result in delivering * an interrupt *within* the execution of an instruction. 
*/ - if ( v->arch.hvm.hvm_io.io_req.state != STATE_IOREQ_NONE ) + if ( v->io.req.state != STATE_IOREQ_NONE ) return hvm_intblk_shadow; if ( !nv->nv_vmexit_pending && n2vmcb->exit_int_info.v ) diff --git a/xen/arch/x86/hvm/vmx/realmode.c b/xen/arch/x86/hvm/vmx/realmode.c index 768f01e..3033143 100644 --- a/xen/arch/x86/hvm/vmx/realmode.c +++ b/xen/arch/x86/hvm/vmx/realmode.c @@ -101,7 +101,7 @@ void vmx_realmode_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt) perfc_incr(realmode_emulations); - rc = hvm_emulate_one(hvmemul_ctxt, HVMIO_realmode_completion); + rc = hvm_emulate_one(hvmemul_ctxt, IO_realmode_completion); if ( rc == X86EMUL_UNHANDLEABLE ) { @@ -188,7 +188,7 @@ void vmx_realmode(struct cpu_user_regs *regs) vmx_realmode_emulate_one(&hvmemul_ctxt); - if ( vio->io_req.state != STATE_IOREQ_NONE || vio->mmio_retry ) + if ( curr->io.req.state != STATE_IOREQ_NONE || vio->mmio_retry ) break; /* Stop emulating unless our segment state is not safe */ @@ -202,7 +202,7 @@ void vmx_realmode(struct cpu_user_regs *regs) } /* Need to emulate next time if we've started an IO operation */ - if ( vio->io_req.state != STATE_IOREQ_NONE ) + if ( curr->io.req.state != STATE_IOREQ_NONE ) curr->arch.hvm.vmx.vmx_emulate = 1; if ( !curr->arch.hvm.vmx.vmx_emulate && !curr->arch.hvm.vmx.vmx_realmode ) diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index b7c2d5a..caf4543 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -159,7 +159,7 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p) break; } - p = &sv->vcpu->arch.hvm.hvm_io.io_req; + p = &sv->vcpu->io.req; if ( ioreq_needs_completion(p) ) p->data = data; @@ -171,10 +171,10 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p) bool handle_hvm_io_completion(struct vcpu *v) { struct domain *d = v->domain; - struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io; + struct vcpu_io *vio = &v->io; struct ioreq_server *s; struct ioreq_vcpu *sv; - enum hvm_io_completion io_completion; + enum io_completion io_completion; if ( has_vpci(d) && vpci_process_pending(v) ) { @@ -186,26 +186,26 @@ bool handle_hvm_io_completion(struct vcpu *v) if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) ) return false; - vio->io_req.state = ioreq_needs_completion(&vio->io_req) ? + vio->req.state = ioreq_needs_completion(&vio->req) ? 
STATE_IORESP_READY : STATE_IOREQ_NONE; msix_write_completion(v); vcpu_end_shutdown_deferral(v); - io_completion = vio->io_completion; - vio->io_completion = HVMIO_no_completion; + io_completion = vio->completion; + vio->completion = IO_no_completion; switch ( io_completion ) { - case HVMIO_no_completion: + case IO_no_completion: break; - case HVMIO_mmio_completion: + case IO_mmio_completion: return ioreq_complete_mmio(); - case HVMIO_pio_completion: - return handle_pio(vio->io_req.addr, vio->io_req.size, - vio->io_req.dir); + case IO_pio_completion: + return handle_pio(vio->req.addr, vio->req.size, + vio->req.dir); default: return arch_vcpu_ioreq_completion(io_completion); diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h index 1620cc7..131cdf4 100644 --- a/xen/include/asm-x86/hvm/emulate.h +++ b/xen/include/asm-x86/hvm/emulate.h @@ -65,7 +65,7 @@ bool __nonnull(1, 2) hvm_emulate_one_insn( const char *descr); int hvm_emulate_one( struct hvm_emulate_ctxt *hvmemul_ctxt, - enum hvm_io_completion completion); + enum io_completion completion); void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr, unsigned int errcode); diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h index 854dc77..ca3bf29 100644 --- a/xen/include/asm-x86/hvm/ioreq.h +++ b/xen/include/asm-x86/hvm/ioreq.h @@ -21,7 +21,7 @@ #include -bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion); +bool arch_vcpu_ioreq_completion(enum io_completion io_completion); int arch_ioreq_server_map_pages(struct ioreq_server *s); void arch_ioreq_server_unmap_pages(struct ioreq_server *s); void arch_ioreq_server_enable(struct ioreq_server *s); diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h index 6c1feda..8adf455 100644 --- a/xen/include/asm-x86/hvm/vcpu.h +++ b/xen/include/asm-x86/hvm/vcpu.h @@ -28,13 +28,6 @@ #include #include -enum hvm_io_completion { - HVMIO_no_completion, - HVMIO_mmio_completion, - HVMIO_pio_completion, - HVMIO_realmode_completion -}; - struct hvm_vcpu_asid { uint64_t generation; uint32_t asid; @@ -52,10 +45,6 @@ struct hvm_mmio_cache { }; struct hvm_vcpu_io { - /* I/O request in flight to device model. */ - enum hvm_io_completion io_completion; - ioreq_t io_req; - /* * HVM emulation: * Linear address @mmio_gla maps to MMIO physical frame @mmio_gpfn. diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index 62cbcdb..8269f84 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -145,6 +145,21 @@ void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */ struct waitqueue_vcpu; +enum io_completion { + IO_no_completion, + IO_mmio_completion, + IO_pio_completion, +#ifdef CONFIG_X86 + IO_realmode_completion, +#endif +}; + +struct vcpu_io { + /* I/O request in flight to device model. 
*/ + enum io_completion completion; + ioreq_t req; +}; + struct vcpu { int vcpu_id; @@ -256,6 +271,10 @@ struct vcpu struct vpci_vcpu vpci; struct arch_vcpu arch; + +#ifdef CONFIG_IOREQ_SERVER + struct vcpu_io io; +#endif }; struct sched_unit {

From patchwork Mon Nov 30 10:31:27 2020 X-Patchwork-Submitter: Oleksandr Tyshchenko X-Patchwork-Id: 11940169 From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Oleksandr Tyshchenko, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini, Paul Durrant, Jun Nakajima, Kevin Tian, Julien Grall Subject: [PATCH V3 12/23] xen/ioreq: Remove "hvm" prefixes from involved function names Date: Mon, 30 Nov 2020 12:31:27 +0200 Message-Id: <1606732298-22107-13-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com> From: Oleksandr Tyshchenko

This patch removes the "hvm" prefixes and infixes from IOREQ-related function names in the common code and performs a renaming where appropriate, according to the more consistent new naming scheme: - IOREQ server functions should start with "ioreq_server_" - IOREQ functions should start with "ioreq_" A few function names are clarified to better fit their purpose: handle_hvm_io_completion -> vcpu_ioreq_handle_completion hvm_io_pending -> vcpu_ioreq_pending hvm_ioreq_init -> ioreq_domain_init hvm_alloc_ioreq_mfn -> ioreq_server_alloc_mfn hvm_free_ioreq_mfn -> ioreq_server_free_mfn

Signed-off-by: Oleksandr Tyshchenko CC: Julien Grall Reviewed-by: Jan Beulich --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes V1 -> V2: - new patch Changes V2 -> V3: - update the patch according to the "legacy interface" being x86-specific - update the patch description - rename everything touched according to the new naming scheme --- --- xen/arch/x86/hvm/dm.c | 4 +- xen/arch/x86/hvm/emulate.c | 6 +- xen/arch/x86/hvm/hvm.c | 10 +-- xen/arch/x86/hvm/io.c | 6 +- xen/arch/x86/hvm/ioreq.c | 2 +- xen/arch/x86/hvm/stdvga.c | 4 +- xen/arch/x86/hvm/vmx/vvmx.c | 2 +- xen/common/dm.c | 28 +++---- xen/common/ioreq.c | 174 ++++++++++++++++++++++---------------------- xen/common/memory.c | 2 +- xen/include/xen/ioreq.h | 67 ++++++++--------- 11 files changed, 153 insertions(+), 152 deletions(-) diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c index 35f860a..0b6319e 100644 --- a/xen/arch/x86/hvm/dm.c +++
b/xen/arch/x86/hvm/dm.c @@ -352,8 +352,8 @@ int arch_dm_op(struct xen_dm_op *op, struct domain *d, break; if ( first_gfn == 0 ) - rc = hvm_map_mem_type_to_ioreq_server(d, data->id, - data->type, data->flags); + rc = ioreq_server_map_mem_type(d, data->id, + data->type, data->flags); else rc = 0; diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c index 04e4994..a025f89 100644 --- a/xen/arch/x86/hvm/emulate.c +++ b/xen/arch/x86/hvm/emulate.c @@ -261,7 +261,7 @@ static int hvmemul_do_io( * an ioreq server that can handle it. * * Rules: - * A> PIO or MMIO accesses run through hvm_select_ioreq_server() to + * A> PIO or MMIO accesses run through ioreq_server_select() to * choose the ioreq server by range. If no server is found, the access * is ignored. * @@ -323,7 +323,7 @@ static int hvmemul_do_io( } if ( !s ) - s = hvm_select_ioreq_server(currd, &p); + s = ioreq_server_select(currd, &p); /* If there is no suitable backing DM, just ignore accesses */ if ( !s ) @@ -333,7 +333,7 @@ static int hvmemul_do_io( } else { - rc = hvm_send_ioreq(s, &p, 0); + rc = ioreq_send(s, &p, 0); if ( rc != X86EMUL_RETRY || currd->is_shutting_down ) vio->req.state = STATE_IOREQ_NONE; else if ( !ioreq_needs_completion(&vio->req) ) diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c index cc46909..8e3c2e2 100644 --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -546,7 +546,7 @@ void hvm_do_resume(struct vcpu *v) pt_restore_timer(v); - if ( !handle_hvm_io_completion(v) ) + if ( !vcpu_ioreq_handle_completion(v) ) return; if ( unlikely(v->arch.vm_event) ) @@ -677,7 +677,7 @@ int hvm_domain_initialise(struct domain *d) register_g2m_portio_handler(d); register_vpci_portio_handler(d); - hvm_ioreq_init(d); + ioreq_domain_init(d); hvm_init_guest_time(d); @@ -739,7 +739,7 @@ void hvm_domain_relinquish_resources(struct domain *d) viridian_domain_deinit(d); - hvm_destroy_all_ioreq_servers(d); + ioreq_server_destroy_all(d); msixtbl_pt_cleanup(d); @@ -1582,7 +1582,7 @@ int hvm_vcpu_initialise(struct vcpu *v) if ( rc ) goto fail5; - rc = hvm_all_ioreq_servers_add_vcpu(d, v); + rc = ioreq_server_add_vcpu_all(d, v); if ( rc != 0 ) goto fail6; @@ -1618,7 +1618,7 @@ void hvm_vcpu_destroy(struct vcpu *v) { viridian_vcpu_deinit(v); - hvm_all_ioreq_servers_remove_vcpu(v->domain, v); + ioreq_server_remove_vcpu_all(v->domain, v); if ( hvm_altp2m_supported() ) altp2m_vcpu_destroy(v); diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c index 327a6a2..a0dd8d1 100644 --- a/xen/arch/x86/hvm/io.c +++ b/xen/arch/x86/hvm/io.c @@ -60,7 +60,7 @@ void send_timeoffset_req(unsigned long timeoff) if ( timeoff == 0 ) return; - if ( hvm_broadcast_ioreq(&p, true) != 0 ) + if ( ioreq_broadcast(&p, true) != 0 ) gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n"); } @@ -74,7 +74,7 @@ void send_invalidate_req(void) .data = ~0UL, /* flush all */ }; - if ( hvm_broadcast_ioreq(&p, false) != 0 ) + if ( ioreq_broadcast(&p, false) != 0 ) gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n"); } @@ -155,7 +155,7 @@ bool handle_pio(uint16_t port, unsigned int size, int dir) * We should not advance RIP/EIP if the domain is shutting down or * if X86EMUL_RETRY has been returned by an internal handler. 
*/ - if ( curr->domain->is_shutting_down || !hvm_io_pending(curr) ) + if ( curr->domain->is_shutting_down || !vcpu_ioreq_pending(curr) ) return false; break; diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index 7808b75..934189e 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -154,7 +154,7 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf) { /* * If a page has already been allocated (which will happen on - * demand if hvm_get_ioreq_server_frame() is called), then + * demand if ioreq_server_get_frame() is called), then * mapping a guest frame is not permitted. */ if ( gfn_eq(iorp->gfn, INVALID_GFN) ) diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c index bafb3f6..390ac51 100644 --- a/xen/arch/x86/hvm/stdvga.c +++ b/xen/arch/x86/hvm/stdvga.c @@ -507,11 +507,11 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler, } done: - srv = hvm_select_ioreq_server(current->domain, &p); + srv = ioreq_server_select(current->domain, &p); if ( !srv ) return X86EMUL_UNHANDLEABLE; - return hvm_send_ioreq(srv, &p, 1); + return ioreq_send(srv, &p, 1); } static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler, diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c index 3a37e9e..a4813f0 100644 --- a/xen/arch/x86/hvm/vmx/vvmx.c +++ b/xen/arch/x86/hvm/vmx/vvmx.c @@ -1516,7 +1516,7 @@ void nvmx_switch_guest(void) * don't want to continue as this setup is not implemented nor supported * as of right now. */ - if ( hvm_io_pending(v) ) + if ( vcpu_ioreq_pending(v) ) return; /* * a softirq may interrupt us between a virtual vmentry is diff --git a/xen/common/dm.c b/xen/common/dm.c index 36e01a2..9d394fc 100644 --- a/xen/common/dm.c +++ b/xen/common/dm.c @@ -100,8 +100,8 @@ static int dm_op(const struct dmop_args *op_args) if ( data->pad[0] || data->pad[1] || data->pad[2] ) break; - rc = hvm_create_ioreq_server(d, data->handle_bufioreq, - &data->id); + rc = ioreq_server_create(d, data->handle_bufioreq, + &data->id); break; } @@ -117,12 +117,12 @@ static int dm_op(const struct dmop_args *op_args) if ( data->flags & ~valid_flags ) break; - rc = hvm_get_ioreq_server_info(d, data->id, - (data->flags & XEN_DMOP_no_gfns) ? - NULL : (unsigned long *)&data->ioreq_gfn, - (data->flags & XEN_DMOP_no_gfns) ? - NULL : (unsigned long *)&data->bufioreq_gfn, - &data->bufioreq_port); + rc = ioreq_server_get_info(d, data->id, + (data->flags & XEN_DMOP_no_gfns) ? + NULL : (unsigned long *)&data->ioreq_gfn, + (data->flags & XEN_DMOP_no_gfns) ? 
+ NULL : (unsigned long *)&data->bufioreq_gfn, + &data->bufioreq_port); break; } @@ -135,8 +135,8 @@ static int dm_op(const struct dmop_args *op_args) if ( data->pad ) break; - rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type, - data->start, data->end); + rc = ioreq_server_map_io_range(d, data->id, data->type, + data->start, data->end); break; } @@ -149,8 +149,8 @@ static int dm_op(const struct dmop_args *op_args) if ( data->pad ) break; - rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type, - data->start, data->end); + rc = ioreq_server_unmap_io_range(d, data->id, data->type, + data->start, data->end); break; } @@ -163,7 +163,7 @@ static int dm_op(const struct dmop_args *op_args) if ( data->pad ) break; - rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled); + rc = ioreq_server_set_state(d, data->id, !!data->enabled); break; } @@ -176,7 +176,7 @@ static int dm_op(const struct dmop_args *op_args) if ( data->pad ) break; - rc = hvm_destroy_ioreq_server(d, data->id); + rc = ioreq_server_destroy(d, data->id); break; } diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index caf4543..3ca5b96 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -59,7 +59,7 @@ static struct ioreq_server *get_ioreq_server(const struct domain *d, * Iterate over all possible ioreq servers. * * NOTE: The iteration is backwards such that more recently created - * ioreq servers are favoured in hvm_select_ioreq_server(). + * ioreq servers are favoured in ioreq_server_select(). * This is a semantic that previously existed when ioreq servers * were held in a linked list. */ @@ -106,12 +106,12 @@ static struct ioreq_vcpu *get_pending_vcpu(const struct vcpu *v, return NULL; } -bool hvm_io_pending(struct vcpu *v) +bool vcpu_ioreq_pending(struct vcpu *v) { return get_pending_vcpu(v, NULL); } -static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p) +static bool wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p) { unsigned int prev_state = STATE_IOREQ_NONE; unsigned int state = p->state; @@ -168,7 +168,7 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p) return true; } -bool handle_hvm_io_completion(struct vcpu *v) +bool vcpu_ioreq_handle_completion(struct vcpu *v) { struct domain *d = v->domain; struct vcpu_io *vio = &v->io; @@ -183,7 +183,7 @@ bool handle_hvm_io_completion(struct vcpu *v) } sv = get_pending_vcpu(v, &s); - if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) ) + if ( sv && !wait_for_io(sv, get_ioreq(s, v)) ) return false; vio->req.state = ioreq_needs_completion(&vio->req) ? @@ -214,7 +214,7 @@ bool handle_hvm_io_completion(struct vcpu *v) return true; } -static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf) +static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf) { struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq; struct page_info *page; @@ -223,7 +223,7 @@ static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf) { /* * If a guest frame has already been mapped (which may happen - * on demand if hvm_get_ioreq_server_info() is called), then + * on demand if ioreq_server_get_info() is called), then * allocating a page is not permitted. */ if ( !gfn_eq(iorp->gfn, INVALID_GFN) ) @@ -262,7 +262,7 @@ static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf) return -ENOMEM; } -static void hvm_free_ioreq_mfn(struct ioreq_server *s, bool buf) +static void ioreq_server_free_mfn(struct ioreq_server *s, bool buf) { struct ioreq_page *iorp = buf ? 
&s->bufioreq : &s->ioreq; struct page_info *page = iorp->page; @@ -301,8 +301,8 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page) return found; } -static void hvm_update_ioreq_evtchn(struct ioreq_server *s, - struct ioreq_vcpu *sv) +static void ioreq_update_evtchn(struct ioreq_server *s, + struct ioreq_vcpu *sv) { ASSERT(spin_is_locked(&s->lock)); @@ -314,8 +314,8 @@ static void hvm_update_ioreq_evtchn(struct ioreq_server *s, } } -static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s, - struct vcpu *v) +static int ioreq_server_add_vcpu(struct ioreq_server *s, + struct vcpu *v) { struct ioreq_vcpu *sv; int rc; @@ -350,7 +350,7 @@ static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s, list_add(&sv->list_entry, &s->ioreq_vcpu_list); if ( s->enabled ) - hvm_update_ioreq_evtchn(s, sv); + ioreq_update_evtchn(s, sv); spin_unlock(&s->lock); return 0; @@ -366,8 +366,8 @@ static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s, return rc; } -static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s, - struct vcpu *v) +static void ioreq_server_remove_vcpu(struct ioreq_server *s, + struct vcpu *v) { struct ioreq_vcpu *sv; @@ -394,7 +394,7 @@ static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s, spin_unlock(&s->lock); } -static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s) +static void ioreq_server_remove_all_vcpus(struct ioreq_server *s) { struct ioreq_vcpu *sv, *next; @@ -420,28 +420,28 @@ static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s) spin_unlock(&s->lock); } -static int hvm_ioreq_server_alloc_pages(struct ioreq_server *s) +static int ioreq_server_alloc_pages(struct ioreq_server *s) { int rc; - rc = hvm_alloc_ioreq_mfn(s, false); + rc = ioreq_server_alloc_mfn(s, false); if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) ) - rc = hvm_alloc_ioreq_mfn(s, true); + rc = ioreq_server_alloc_mfn(s, true); if ( rc ) - hvm_free_ioreq_mfn(s, false); + ioreq_server_free_mfn(s, false); return rc; } -static void hvm_ioreq_server_free_pages(struct ioreq_server *s) +static void ioreq_server_free_pages(struct ioreq_server *s) { - hvm_free_ioreq_mfn(s, true); - hvm_free_ioreq_mfn(s, false); + ioreq_server_free_mfn(s, true); + ioreq_server_free_mfn(s, false); } -static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s) +static void ioreq_server_free_rangesets(struct ioreq_server *s) { unsigned int i; @@ -449,8 +449,8 @@ static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s) rangeset_destroy(s->range[i]); } -static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s, - ioservid_t id) +static int ioreq_server_alloc_rangesets(struct ioreq_server *s, + ioservid_t id) { unsigned int i; int rc; @@ -482,12 +482,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s, return 0; fail: - hvm_ioreq_server_free_rangesets(s); + ioreq_server_free_rangesets(s); return rc; } -static void hvm_ioreq_server_enable(struct ioreq_server *s) +static void ioreq_server_enable(struct ioreq_server *s) { struct ioreq_vcpu *sv; @@ -503,13 +503,13 @@ static void hvm_ioreq_server_enable(struct ioreq_server *s) list_for_each_entry ( sv, &s->ioreq_vcpu_list, list_entry ) - hvm_update_ioreq_evtchn(s, sv); + ioreq_update_evtchn(s, sv); done: spin_unlock(&s->lock); } -static void hvm_ioreq_server_disable(struct ioreq_server *s) +static void ioreq_server_disable(struct ioreq_server *s) { spin_lock(&s->lock); @@ -524,9 +524,9 @@ static void hvm_ioreq_server_disable(struct ioreq_server *s) 
spin_unlock(&s->lock); } -static int hvm_ioreq_server_init(struct ioreq_server *s, - struct domain *d, int bufioreq_handling, - ioservid_t id) +static int ioreq_server_init(struct ioreq_server *s, + struct domain *d, int bufioreq_handling, + ioservid_t id) { struct domain *currd = current->domain; struct vcpu *v; @@ -544,7 +544,7 @@ static int hvm_ioreq_server_init(struct ioreq_server *s, s->ioreq.gfn = INVALID_GFN; s->bufioreq.gfn = INVALID_GFN; - rc = hvm_ioreq_server_alloc_rangesets(s, id); + rc = ioreq_server_alloc_rangesets(s, id); if ( rc ) return rc; @@ -552,7 +552,7 @@ static int hvm_ioreq_server_init(struct ioreq_server *s, for_each_vcpu ( d, v ) { - rc = hvm_ioreq_server_add_vcpu(s, v); + rc = ioreq_server_add_vcpu(s, v); if ( rc ) goto fail_add; } @@ -560,23 +560,23 @@ static int hvm_ioreq_server_init(struct ioreq_server *s, return 0; fail_add: - hvm_ioreq_server_remove_all_vcpus(s); + ioreq_server_remove_all_vcpus(s); arch_ioreq_server_unmap_pages(s); - hvm_ioreq_server_free_rangesets(s); + ioreq_server_free_rangesets(s); put_domain(s->emulator); return rc; } -static void hvm_ioreq_server_deinit(struct ioreq_server *s) +static void ioreq_server_deinit(struct ioreq_server *s) { ASSERT(!s->enabled); - hvm_ioreq_server_remove_all_vcpus(s); + ioreq_server_remove_all_vcpus(s); /* * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and - * hvm_ioreq_server_free_pages() in that order. + * ioreq_server_free_pages() in that order. * This is because the former will do nothing if the pages * are not mapped, leaving the page to be freed by the latter. * However if the pages are mapped then the former will set @@ -584,15 +584,15 @@ static void hvm_ioreq_server_deinit(struct ioreq_server *s) * nothing. */ arch_ioreq_server_unmap_pages(s); - hvm_ioreq_server_free_pages(s); + ioreq_server_free_pages(s); - hvm_ioreq_server_free_rangesets(s); + ioreq_server_free_rangesets(s); put_domain(s->emulator); } -int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, - ioservid_t *id) +int ioreq_server_create(struct domain *d, int bufioreq_handling, + ioservid_t *id) { struct ioreq_server *s; unsigned int i; @@ -620,11 +620,11 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, /* * It is safe to call set_ioreq_server() prior to - * hvm_ioreq_server_init() since the target domain is paused. + * ioreq_server_init() since the target domain is paused. */ set_ioreq_server(d, i, s); - rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i); + rc = ioreq_server_init(s, d, bufioreq_handling, i); if ( rc ) { set_ioreq_server(d, i, NULL); @@ -647,7 +647,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, return rc; } -int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) +int ioreq_server_destroy(struct domain *d, ioservid_t id) { struct ioreq_server *s; int rc; @@ -668,13 +668,13 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) arch_ioreq_server_destroy(s); - hvm_ioreq_server_disable(s); + ioreq_server_disable(s); /* - * It is safe to call hvm_ioreq_server_deinit() prior to + * It is safe to call ioreq_server_deinit() prior to * set_ioreq_server() since the target domain is paused. 
*/ - hvm_ioreq_server_deinit(s); + ioreq_server_deinit(s); set_ioreq_server(d, id, NULL); domain_unpause(d); @@ -689,10 +689,10 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) return rc; } -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t *bufioreq_port) +int ioreq_server_get_info(struct domain *d, ioservid_t id, + unsigned long *ioreq_gfn, + unsigned long *bufioreq_gfn, + evtchn_port_t *bufioreq_port) { struct ioreq_server *s; int rc; @@ -736,8 +736,8 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, return rc; } -int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, - unsigned long idx, mfn_t *mfn) +int ioreq_server_get_frame(struct domain *d, ioservid_t id, + unsigned long idx, mfn_t *mfn) { struct ioreq_server *s; int rc; @@ -756,7 +756,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, if ( s->emulator != current->domain ) goto out; - rc = hvm_ioreq_server_alloc_pages(s); + rc = ioreq_server_alloc_pages(s); if ( rc ) goto out; @@ -787,9 +787,9 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, return rc; } -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end) +int ioreq_server_map_io_range(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end) { struct ioreq_server *s; struct rangeset *r; @@ -839,9 +839,9 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, return rc; } -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end) +int ioreq_server_unmap_io_range(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end) { struct ioreq_server *s; struct rangeset *r; @@ -899,8 +899,8 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, * Support for the emulation of read operations can be added when an ioreq * server has such requirement in the future. 
*/ -int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint32_t flags) +int ioreq_server_map_mem_type(struct domain *d, ioservid_t id, + uint32_t type, uint32_t flags) { struct ioreq_server *s; int rc; @@ -931,8 +931,8 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, return rc; } -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled) +int ioreq_server_set_state(struct domain *d, ioservid_t id, + bool enabled) { struct ioreq_server *s; int rc; @@ -952,9 +952,9 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, domain_pause(d); if ( enabled ) - hvm_ioreq_server_enable(s); + ioreq_server_enable(s); else - hvm_ioreq_server_disable(s); + ioreq_server_disable(s); domain_unpause(d); @@ -965,7 +965,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, return rc; } -int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) +int ioreq_server_add_vcpu_all(struct domain *d, struct vcpu *v) { struct ioreq_server *s; unsigned int id; @@ -975,7 +975,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) FOR_EACH_IOREQ_SERVER(d, id, s) { - rc = hvm_ioreq_server_add_vcpu(s, v); + rc = ioreq_server_add_vcpu(s, v); if ( rc ) goto fail; } @@ -992,7 +992,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) if ( !s ) continue; - hvm_ioreq_server_remove_vcpu(s, v); + ioreq_server_remove_vcpu(s, v); } spin_unlock_recursive(&d->ioreq_server.lock); @@ -1000,7 +1000,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) return rc; } -void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) +void ioreq_server_remove_vcpu_all(struct domain *d, struct vcpu *v) { struct ioreq_server *s; unsigned int id; @@ -1008,12 +1008,12 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) spin_lock_recursive(&d->ioreq_server.lock); FOR_EACH_IOREQ_SERVER(d, id, s) - hvm_ioreq_server_remove_vcpu(s, v); + ioreq_server_remove_vcpu(s, v); spin_unlock_recursive(&d->ioreq_server.lock); } -void hvm_destroy_all_ioreq_servers(struct domain *d) +void ioreq_server_destroy_all(struct domain *d) { struct ioreq_server *s; unsigned int id; @@ -1027,13 +1027,13 @@ void hvm_destroy_all_ioreq_servers(struct domain *d) FOR_EACH_IOREQ_SERVER(d, id, s) { - hvm_ioreq_server_disable(s); + ioreq_server_disable(s); /* - * It is safe to call hvm_ioreq_server_deinit() prior to + * It is safe to call ioreq_server_deinit() prior to * set_ioreq_server() since the target domain is being destroyed. 
*/ - hvm_ioreq_server_deinit(s); + ioreq_server_deinit(s); set_ioreq_server(d, id, NULL); xfree(s); @@ -1042,8 +1042,8 @@ void hvm_destroy_all_ioreq_servers(struct domain *d) spin_unlock_recursive(&d->ioreq_server.lock); } -struct ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p) +struct ioreq_server *ioreq_server_select(struct domain *d, + ioreq_t *p) { struct ioreq_server *s; uint8_t type; @@ -1098,7 +1098,7 @@ struct ioreq_server *hvm_select_ioreq_server(struct domain *d, return NULL; } -static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p) +static int ioreq_send_buffered(struct ioreq_server *s, ioreq_t *p) { struct domain *d = current->domain; struct ioreq_page *iorp; @@ -1191,8 +1191,8 @@ static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p) return IOREQ_STATUS_HANDLED; } -int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, - bool buffered) +int ioreq_send(struct ioreq_server *s, ioreq_t *proto_p, + bool buffered) { struct vcpu *curr = current; struct domain *d = curr->domain; @@ -1201,7 +1201,7 @@ int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, ASSERT(s); if ( buffered ) - return hvm_send_buffered_ioreq(s, proto_p); + return ioreq_send_buffered(s, proto_p); if ( unlikely(!vcpu_start_shutdown_deferral(curr)) ) return IOREQ_STATUS_RETRY; @@ -1251,7 +1251,7 @@ int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, return IOREQ_STATUS_UNHANDLED; } -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) +unsigned int ioreq_broadcast(ioreq_t *p, bool buffered) { struct domain *d = current->domain; struct ioreq_server *s; @@ -1262,14 +1262,14 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) if ( !s->enabled ) continue; - if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED ) + if ( ioreq_send(s, p, buffered) == IOREQ_STATUS_UNHANDLED ) failed++; } return failed; } -void hvm_ioreq_init(struct domain *d) +void ioreq_domain_init(struct domain *d) { spin_lock_init(&d->ioreq_server.lock); diff --git a/xen/common/memory.c b/xen/common/memory.c index 92cf983..3363c06 100644 --- a/xen/common/memory.c +++ b/xen/common/memory.c @@ -1108,7 +1108,7 @@ static int acquire_ioreq_server(struct domain *d, { mfn_t mfn; - rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn); + rc = ioreq_server_get_frame(d, id, frame + i, &mfn); if ( rc ) return rc; diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index 979afa0..02ff998 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -81,41 +81,42 @@ static inline bool ioreq_needs_completion(const ioreq_t *ioreq) #define HANDLE_BUFIOREQ(s) \ ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) -bool hvm_io_pending(struct vcpu *v); -bool handle_hvm_io_completion(struct vcpu *v); +bool vcpu_ioreq_pending(struct vcpu *v); +bool vcpu_ioreq_handle_completion(struct vcpu *v); bool is_ioreq_server_page(struct domain *d, const struct page_info *page); -int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, - ioservid_t *id); -int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id); -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t *bufioreq_port); -int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, - unsigned long idx, mfn_t *mfn); -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, 
                                          ioservid_t id,
-                                         uint32_t type, uint64_t start,
-                                         uint64_t end);
-int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint32_t flags);
-int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled);
-
-int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
-void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
-void hvm_destroy_all_ioreq_servers(struct domain *d);
-
-struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                             ioreq_t *p);
-int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
-                   bool buffered);
-unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
-
-void hvm_ioreq_init(struct domain *d);
+int ioreq_server_create(struct domain *d, int bufioreq_handling,
+                        ioservid_t *id);
+int ioreq_server_destroy(struct domain *d, ioservid_t id);
+int ioreq_server_get_info(struct domain *d, ioservid_t id,
+                          unsigned long *ioreq_gfn,
+                          unsigned long *bufioreq_gfn,
+                          evtchn_port_t *bufioreq_port);
+int ioreq_server_get_frame(struct domain *d, ioservid_t id,
+                           unsigned long idx, mfn_t *mfn);
+int ioreq_server_map_io_range(struct domain *d, ioservid_t id,
+                              uint32_t type, uint64_t start,
+                              uint64_t end);
+int ioreq_server_unmap_io_range(struct domain *d, ioservid_t id,
+                                uint32_t type, uint64_t start,
+                                uint64_t end);
+int ioreq_server_map_mem_type(struct domain *d, ioservid_t id,
+                              uint32_t type, uint32_t flags);
+
+int ioreq_server_set_state(struct domain *d, ioservid_t id,
+                           bool enabled);
+
+int ioreq_server_add_vcpu_all(struct domain *d, struct vcpu *v);
+void ioreq_server_remove_vcpu_all(struct domain *d, struct vcpu *v);
+void ioreq_server_destroy_all(struct domain *d);
+
+struct ioreq_server *ioreq_server_select(struct domain *d,
+                                         ioreq_t *p);
+int ioreq_send(struct ioreq_server *s, ioreq_t *proto_p,
+               bool buffered);
+unsigned int ioreq_broadcast(ioreq_t *p, bool buffered);
+
+void ioreq_domain_init(struct domain *d);
 
 #endif /* __XEN_IOREQ_H__ */
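For illustration (not part of the patch): the renamed hypervisor entry points
above are still reached through the stable device-model library, with the DMOP
layer in xen/common/dm.c dispatching to ioreq_server_create() and friends. A
minimal, hypothetical emulator skeleton — assuming libxendevicemodel is
available and the guest domid is known — might look roughly like this:

/* Sketch only: create and enable an IOREQ server from a user-space device
 * model via libxendevicemodel (link with -lxendevicemodel). The guest
 * domid and the error handling are assumptions for illustration. */
#include <stdio.h>
#include <xendevicemodel.h>

int main(void)
{
    domid_t domid = 1;      /* assumed guest domain id */
    ioservid_t id;
    xendevicemodel_handle *dmod = xendevicemodel_open(NULL, 0);

    if ( !dmod )
        return 1;

    /* XEN_DMOP_create_ioreq_server -> dm_op() -> ioreq_server_create() */
    if ( xendevicemodel_create_ioreq_server(dmod, domid,
                                            2 /* HVM_IOREQSRV_BUFIOREQ_ATOMIC */,
                                            &id) )
    {
        xendevicemodel_close(dmod);
        return 1;
    }

    /* XEN_DMOP_set_ioreq_server_state -> dm_op() -> ioreq_server_set_state() */
    xendevicemodel_set_ioreq_server_state(dmod, domid, id, 1);

    printf("ioreq server %u created and enabled\n", id);

    xendevicemodel_destroy_ioreq_server(dmod, domid, id);
    xendevicemodel_close(dmod);
    return 0;
}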
From patchwork Mon Nov 30 10:31:28 2020
X-Patchwork-Id: 11940161
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Paul Durrant, Julien Grall
Subject: [PATCH V3 13/23] xen/ioreq: Use guest_cmpxchg64() instead of cmpxchg()
Date: Mon, 30 Nov 2020 12:31:28 +0200
Message-Id: <1606732298-22107-14-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References:
<1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

The cmpxchg() in ioreq_send_buffered() operates on memory shared with the
emulator domain (and with the target domain if the legacy interface is used).

In order to be on the safe side we need to switch to guest_cmpxchg64() to
prevent a domain from DoS-ing Xen on Arm.

As there is no plan to support the legacy interface on Arm, the page will be
mapped in a single domain at a time, so we can safely use s->emulator in
guest_cmpxchg64(). Thankfully the only user of the legacy interface so far is
x86, where there is no concern regarding the atomic operations.

Please note that the legacy interface *must* not be used on Arm without
revisiting the code.

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
Acked-by: Stefano Stabellini

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
 - new patch

Changes V1 -> V2:
 - move earlier to avoid breaking arm32 compilation
 - add an explanation to the commit description and hvm_allow_set_param()
 - pass s->emulator

Changes V2 -> V3:
 - update patch description
---
---
 xen/arch/arm/hvm.c | 4 ++++
 xen/common/ioreq.c | 3 ++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 8951b34..9694e5a 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -31,6 +31,10 @@
 #include
 
+/*
+ * The legacy interface (which involves magic IOREQ pages) *must* not be used
+ * without revisiting the code.
+ */
 static int hvm_allow_set_param(const struct domain *d, unsigned int param)
 {
     switch ( param )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 3ca5b96..4855dd8 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -29,6 +29,7 @@
 #include
 #include
+#include
 #include
 #include
 
@@ -1182,7 +1183,7 @@ static int ioreq_send_buffered(struct ioreq_server *s, ioreq_t *p)
         new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
         new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
 
-        cmpxchg(&pg->ptrs.full, old.full, new.full);
+        guest_cmpxchg64(s->emulator, &pg->ptrs.full, old.full, new.full);
     }
 
     notify_via_xen_event_channel(d, s->bufioreq_evtchn);
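To make the shared-ring update above concrete, here is a plain C11 sketch (not
Xen code) of the 64-bit compare-and-swap pattern that ioreq_send_buffered()
performs; the union layout mirrors the buffered iopage pointers, and the slot
count is an assumption for illustration. In Xen itself the plain CAS is
replaced by guest_cmpxchg64(), the guest-safe variant, precisely because the
page is writable by the emulator domain:

/* Standalone C11 illustration (not Xen code) of the pointer-rewind update
 * performed on the page shared with the emulator.
 * Compile with: cc -std=c11 demo.c */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define IOREQ_BUFFER_SLOT_NUM 511   /* slot count assumed for illustration */

union ptrs {
    uint64_t full;
    struct {
        uint32_t read_pointer;
        uint32_t write_pointer;
    };
};

int main(void)
{
    _Atomic uint64_t shared;        /* stands in for pg->ptrs.full */
    union ptrs old, new, cur;
    unsigned int n = 1;             /* number of wraps to rewind */

    old.read_pointer = 2 * IOREQ_BUFFER_SLOT_NUM;
    old.write_pointer = 2 * IOREQ_BUFFER_SLOT_NUM + 3;
    atomic_store(&shared, old.full);

    /*
     * Rewind both pointers in a single 64-bit compare-and-swap so that a
     * concurrent reader of the shared page never observes a torn
     * read/write pointer pair.
     */
    new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
    new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
    atomic_compare_exchange_strong(&shared, &old.full, new.full);

    cur.full = atomic_load(&shared);
    printf("read=%u write=%u\n", cur.read_pointer, cur.write_pointer);
    return 0;
}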
From patchwork Mon Nov 30 10:31:29 2020
X-Patchwork-Id: 11940165
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Oleksandr Tyshchenko
Subject: [PATCH V3 14/23] arm/ioreq: Introduce arch specific bits for IOREQ/DM features
Date: Mon, 30 Nov 2020 12:31:29 +0200
Message-Id: <1606732298-22107-15-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall

This patch adds basic IOREQ/DM support on Arm. The subsequent patches will
improve the functionality and add the remaining bits.

The IOREQ/DM features are only built with the IOREQ_SERVER option enabled,
which is disabled by default on Arm for now.

Please note, the "PIO handling" TODO is expected to be left unaddressed for
the current series. It is not a big issue for now, while Xen doesn't have
support for vPCI on Arm. On Arm64 PIO accesses are only used for PCI I/O BARs,
and we would probably want to expose them to the emulator as PIO accesses to
make a device model completely arch-agnostic. So "PIO handling" should be
implemented when we add support for vPCI.

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
 - was split into:
   - arm/ioreq: Introduce arch specific bits for IOREQ/DM features
   - xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
 - update patch description
 - update asm-arm/hvm/ioreq.h according to the newly introduced arch functions:
   - arch_hvm_destroy_ioreq_server()
   - arch_handle_hvm_io_completion()
 - update arch files to include xen/ioreq.h
 - remove HVMOP plumbing
 - rewrite the logic to properly handle the case when hvm_send_ioreq() returns IO_RETRY
 - add logic to properly handle the handle_hvm_io_completion() return value
 - rename handle_mmio() to ioreq_handle_complete_mmio()
 - move paging_mark_pfn_dirty() to asm-arm/paging.h
 - remove forward declaration for hvm_ioreq_server in asm-arm/paging.h
 - move try_fwd_ioserv() to ioreq.c, provide stubs if !CONFIG_IOREQ_SERVER
 - do not remove #ifdef CONFIG_IOREQ_SERVER in memory.c for guarding xen/ioreq.h
 - use gdprintk in try_fwd_ioserv(), remove unneeded prints
 - update list of #include-s
 - move has_vpci() to asm-arm/domain.h
 - add a comment (TODO) to the as-yet unimplemented handle_pio()
 - remove hvm_mmio_first(last)_byte() and hvm_ioreq_(page/vcpu/server) structs
   from the arch files, they were already moved to the common code
 - remove set_foreign_p2m_entry() changes, they will be properly implemented
   in the follow-up patch
 - select IOREQ_SERVER for Arm instead of Arm64 in Kconfig
 - remove x86's realmode and other unneeded stubs from xen/ioreq.h
 - clarify ioreq_t p.df usage in try_fwd_ioserv()
 - set ioreq_t p.count to 1 in try_fwd_ioserv()

Changes V1 -> V2:
 - was split into:
   - arm/ioreq: Introduce arch specific bits for IOREQ/DM features
   - xen/arm: Stick around in leave_hypervisor_to_guest until I/O has completed
 - update the author of a patch
 - update patch description
 - move a loop in leave_hypervisor_to_guest() to a separate patch
 - set IOREQ_SERVER disabled by default
 - remove already clarified /* XXX */
 - replace BUG() by ASSERT_UNREACHABLE() in handle_pio()
 - remove default case for handling the return value of try_handle_mmio()
 - remove struct hvm_domain, enum hvm_io_completion, struct hvm_vcpu_io,
   struct hvm_vcpu from asm-arm/domain.h, these are common materials now
 - update everything according to the recent changes (IOREQ related function names don't contain "hvm" prefixes/infixes anymore, IOREQ related fields are part
of common struct vcpu/domain now, etc) Changes V2 -> V3: - update patch according the "legacy interface" is x86 specific - add dummy arch hooks - remove dummy paging_mark_pfn_dirty() - don’t include in common ioreq.c - don’t include in arch ioreq.h - remove #define ioreq_params(d, i) --- --- xen/arch/arm/Makefile | 2 + xen/arch/arm/dm.c | 34 ++++++++++ xen/arch/arm/domain.c | 9 +++ xen/arch/arm/io.c | 11 +++- xen/arch/arm/ioreq.c | 141 ++++++++++++++++++++++++++++++++++++++++ xen/arch/arm/traps.c | 13 ++++ xen/include/asm-arm/domain.h | 3 + xen/include/asm-arm/hvm/ioreq.h | 139 +++++++++++++++++++++++++++++++++++++++ xen/include/asm-arm/mmio.h | 1 + 9 files changed, 352 insertions(+), 1 deletion(-) create mode 100644 xen/arch/arm/dm.c create mode 100644 xen/arch/arm/ioreq.c create mode 100644 xen/include/asm-arm/hvm/ioreq.h diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile index 296c5e6..c3ff454 100644 --- a/xen/arch/arm/Makefile +++ b/xen/arch/arm/Makefile @@ -13,6 +13,7 @@ obj-y += cpuerrata.o obj-y += cpufeature.o obj-y += decode.o obj-y += device.o +obj-$(CONFIG_IOREQ_SERVER) += dm.o obj-y += domain.o obj-y += domain_build.init.o obj-y += domctl.o @@ -27,6 +28,7 @@ obj-y += guest_atomics.o obj-y += guest_walk.o obj-y += hvm.o obj-y += io.o +obj-$(CONFIG_IOREQ_SERVER) += ioreq.o obj-y += irq.o obj-y += kernel.init.o obj-$(CONFIG_LIVEPATCH) += livepatch.o diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c new file mode 100644 index 0000000..5d3da37 --- /dev/null +++ b/xen/arch/arm/dm.c @@ -0,0 +1,34 @@ +/* + * Copyright (c) 2019 Arm ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; If not, see . 
+ */ + +#include +#include + +int arch_dm_op(struct xen_dm_op *op, struct domain *d, + const struct dmop_args *op_args, bool *const_op) +{ + return -EOPNOTSUPP; +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index 18cafcd..8f55aba 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -15,6 +15,7 @@ #include #include #include +#include #include #include #include @@ -696,6 +697,10 @@ int arch_domain_create(struct domain *d, ASSERT(config != NULL); +#ifdef CONFIG_IOREQ_SERVER + ioreq_domain_init(d); +#endif + /* p2m_init relies on some value initialized by the IOMMU subsystem */ if ( (rc = iommu_domain_init(d, config->iommu_opts)) != 0 ) goto fail; @@ -1014,6 +1019,10 @@ int domain_relinquish_resources(struct domain *d) if (ret ) return ret; +#ifdef CONFIG_IOREQ_SERVER + ioreq_server_destroy_all(d); +#endif + PROGRESS(xen): ret = relinquish_memory(d, &d->xenpage_list); if ( ret ) diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c index ae7ef96..f44cfd4 100644 --- a/xen/arch/arm/io.c +++ b/xen/arch/arm/io.c @@ -23,6 +23,7 @@ #include #include #include +#include #include "decode.h" @@ -123,7 +124,15 @@ enum io_state try_handle_mmio(struct cpu_user_regs *regs, handler = find_mmio_handler(v->domain, info.gpa); if ( !handler ) - return IO_UNHANDLED; + { + int rc; + + rc = try_fwd_ioserv(regs, v, &info); + if ( rc == IO_HANDLED ) + return handle_ioserv(regs, v); + + return rc; + } /* All the instructions used on emulated MMIO region should be valid */ if ( !dabt.valid ) diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c new file mode 100644 index 0000000..f08190c --- /dev/null +++ b/xen/arch/arm/ioreq.c @@ -0,0 +1,141 @@ +/* + * arm/ioreq.c: hardware virtual machine I/O emulation + * + * Copyright (c) 2019 Arm ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; If not, see . + */ + +#include +#include + +#include + +#include + +enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v) +{ + const union hsr hsr = { .bits = regs->hsr }; + const struct hsr_dabt dabt = hsr.dabt; + /* Code is similar to handle_read */ + uint8_t size = (1 << dabt.size) * 8; + register_t r = v->io.req.data; + + /* We are done with the IO */ + v->io.req.state = STATE_IOREQ_NONE; + + if ( dabt.write ) + return IO_HANDLED; + + /* + * Sign extend if required. + * Note that we expect the read handler to have zeroed the bits + * outside the requested access size. + */ + if ( dabt.sign && (r & (1UL << (size - 1))) ) + { + /* + * We are relying on register_t using the same as + * an unsigned long in order to keep the 32-bit assembly + * code smaller. 
+ */ + BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long)); + r |= (~0UL) << size; + } + + set_user_reg(regs, dabt.reg, r); + + return IO_HANDLED; +} + +enum io_state try_fwd_ioserv(struct cpu_user_regs *regs, + struct vcpu *v, mmio_info_t *info) +{ + struct vcpu_io *vio = &v->io; + ioreq_t p = { + .type = IOREQ_TYPE_COPY, + .addr = info->gpa, + .size = 1 << info->dabt.size, + .count = 1, + .dir = !info->dabt.write, + /* + * On x86, df is used by 'rep' instruction to tell the direction + * to iterate (forward or backward). + * On Arm, all the accesses to MMIO region will do a single + * memory access. So for now, we can safely always set to 0. + */ + .df = 0, + .data = get_user_reg(regs, info->dabt.reg), + .state = STATE_IOREQ_READY, + }; + struct ioreq_server *s = NULL; + enum io_state rc; + + switch ( vio->req.state ) + { + case STATE_IOREQ_NONE: + break; + + case STATE_IORESP_READY: + return IO_HANDLED; + + default: + gdprintk(XENLOG_ERR, "wrong state %u\n", vio->req.state); + return IO_ABORT; + } + + s = ioreq_server_select(v->domain, &p); + if ( !s ) + return IO_UNHANDLED; + + if ( !info->dabt.valid ) + return IO_ABORT; + + vio->req = p; + + rc = ioreq_send(s, &p, 0); + if ( rc != IO_RETRY || v->domain->is_shutting_down ) + vio->req.state = STATE_IOREQ_NONE; + else if ( !ioreq_needs_completion(&vio->req) ) + rc = IO_HANDLED; + else + vio->completion = IO_mmio_completion; + + return rc; +} + +bool ioreq_complete_mmio(void) +{ + struct vcpu *v = current; + struct cpu_user_regs *regs = guest_cpu_user_regs(); + const union hsr hsr = { .bits = regs->hsr }; + paddr_t addr = v->io.req.addr; + + if ( try_handle_mmio(regs, hsr, addr) == IO_HANDLED ) + { + advance_pc(regs, hsr); + return true; + } + + return false; +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index 22bd1bd..036b13f 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -21,6 +21,7 @@ #include #include #include +#include #include #include #include @@ -1385,6 +1386,9 @@ static arm_hypercall_t arm_hypercall_table[] = { #ifdef CONFIG_HYPFS HYPERCALL(hypfs_op, 5), #endif +#ifdef CONFIG_IOREQ_SERVER + HYPERCALL(dm_op, 3), +#endif }; #ifndef NDEBUG @@ -1956,6 +1960,9 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs, case IO_HANDLED: advance_pc(regs, hsr); return; + case IO_RETRY: + /* finish later */ + return; case IO_UNHANDLED: /* IO unhandled, try another way to handle it. 
*/ break; @@ -2254,6 +2261,12 @@ static void check_for_vcpu_work(void) { struct vcpu *v = current; +#ifdef CONFIG_IOREQ_SERVER + local_irq_enable(); + vcpu_ioreq_handle_completion(v); + local_irq_disable(); +#endif + if ( likely(!v->arch.need_flush_to_ram) ) return; diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h index 6819a3b..c235e5b 100644 --- a/xen/include/asm-arm/domain.h +++ b/xen/include/asm-arm/domain.h @@ -10,6 +10,7 @@ #include #include #include +#include #include struct hvm_domain @@ -262,6 +263,8 @@ static inline void arch_vcpu_block(struct vcpu *v) {} #define arch_vm_assist_valid_mask(d) (1UL << VMASST_TYPE_runstate_update_flag) +#define has_vpci(d) ({ (void)(d); false; }) + #endif /* __ASM_DOMAIN_H__ */ /* diff --git a/xen/include/asm-arm/hvm/ioreq.h b/xen/include/asm-arm/hvm/ioreq.h new file mode 100644 index 0000000..2bffc7a --- /dev/null +++ b/xen/include/asm-arm/hvm/ioreq.h @@ -0,0 +1,139 @@ +/* + * hvm.h: Hardware virtual machine assist interface definitions. + * + * Copyright (c) 2016 Citrix Systems Inc. + * Copyright (c) 2019 Arm ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; If not, see . + */ + +#ifndef __ASM_ARM_HVM_IOREQ_H__ +#define __ASM_ARM_HVM_IOREQ_H__ + +#include + +#ifdef CONFIG_IOREQ_SERVER +enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v); +enum io_state try_fwd_ioserv(struct cpu_user_regs *regs, + struct vcpu *v, mmio_info_t *info); +#else +static inline enum io_state handle_ioserv(struct cpu_user_regs *regs, + struct vcpu *v) +{ + return IO_UNHANDLED; +} + +static inline enum io_state try_fwd_ioserv(struct cpu_user_regs *regs, + struct vcpu *v, mmio_info_t *info) +{ + return IO_UNHANDLED; +} +#endif + +bool ioreq_complete_mmio(void); + +static inline bool handle_pio(uint16_t port, unsigned int size, int dir) +{ + /* + * TODO: For Arm64, the main user will be PCI. So this should be + * implemented when we add support for vPCI. 
+ */ + ASSERT_UNREACHABLE(); + return true; +} + +static inline void msix_write_completion(struct vcpu *v) +{ +} + +static inline bool arch_vcpu_ioreq_completion(enum io_completion io_completion) +{ + ASSERT_UNREACHABLE(); + return true; +} + +/* + * The "legacy" mechanism of mapping magic pages for the IOREQ servers + * is x86 specific, so the following hooks don't need to be implemented on Arm: + * - arch_ioreq_server_map_pages + * - arch_ioreq_server_unmap_pages + * - arch_ioreq_server_enable + * - arch_ioreq_server_disable + */ +static inline int arch_ioreq_server_map_pages(struct ioreq_server *s) +{ + return -EOPNOTSUPP; +} + +static inline void arch_ioreq_server_unmap_pages(struct ioreq_server *s) +{ +} + +static inline void arch_ioreq_server_enable(struct ioreq_server *s) +{ +} + +static inline void arch_ioreq_server_disable(struct ioreq_server *s) +{ +} + +static inline void arch_ioreq_server_destroy(struct ioreq_server *s) +{ +} + +static inline int arch_ioreq_server_map_mem_type(struct domain *d, + struct ioreq_server *s, + uint32_t flags) +{ + return -EOPNOTSUPP; +} + +static inline bool arch_ioreq_server_destroy_all(struct domain *d) +{ + return true; +} + +static inline int arch_ioreq_server_get_type_addr(const struct domain *d, + const ioreq_t *p, + uint8_t *type, + uint64_t *addr) +{ + if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO ) + return -EINVAL; + + *type = (p->type == IOREQ_TYPE_PIO) ? + XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY; + *addr = p->addr; + + return 0; +} + +static inline void arch_ioreq_domain_init(struct domain *d) +{ +} + +#define IOREQ_STATUS_HANDLED IO_HANDLED +#define IOREQ_STATUS_UNHANDLED IO_UNHANDLED +#define IOREQ_STATUS_RETRY IO_RETRY + +#endif /* __ASM_ARM_HVM_IOREQ_H__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h index 8dbfb27..7ab873c 100644 --- a/xen/include/asm-arm/mmio.h +++ b/xen/include/asm-arm/mmio.h @@ -37,6 +37,7 @@ enum io_state IO_ABORT, /* The IO was handled by the helper and led to an abort. */ IO_HANDLED, /* The IO was successfully handled by the helper. */ IO_UNHANDLED, /* The IO was not handled by the helper. 
  */
+    IO_RETRY,      /* Retry the emulation for some reason */
 };
 
 typedef int (*mmio_read_t)(struct vcpu *v, mmio_info_t *info,
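As a side note, the sign-extension logic in handle_ioserv() above can be
sanity-checked in isolation. This standalone snippet (not part of the patch;
the values are picked for illustration) shows what a sign-extending
single-byte read of 0x80 turns into on an LP64 platform:

/* Standalone demo (not Xen code) of handle_ioserv()'s sign extension. */
#include <stdio.h>

int main(void)
{
    unsigned long r = 0x80;   /* data returned by the emulator */
    unsigned int size = 8;    /* access width in bits: (1 << dabt.size) * 8 */

    /* Propagate the top bit of the access into all bits above it, as a
     * sign-extending load (e.g. LDRSB) expects. */
    if ( r & (1UL << (size - 1)) )
        r |= (~0UL) << size;

    printf("%#lx\n", r);      /* prints 0xffffffffffffff80 on LP64 */
    return 0;
}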
This patch adds proper handling of the return value of
vcpu_ioreq_handle_completion(), which involves using a loop in
leave_hypervisor_to_guest().

The reason to use an unbounded loop here is that a vCPU shouldn't continue
until its I/O has completed. In Xen's case, if an I/O never completes then
it most likely means that something went horribly wrong with the Device
Emulator, and it is most likely not safe to continue. So letting the vCPU
spin forever if the I/O never completes is a safer action than letting it
continue and leaving the guest in an unclear state, and is the best we can
do for now.

This isn't an issue for Xen itself, as do_softirq() is called on every
iteration of the loop. In case of failure, the guest will crash and the
vCPU will be unscheduled.
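Distilled, the new exit path alternates between pCPU work and the vCPU's
pending I/O until the latter is done. A minimal sketch of the control flow
(function names from the patch below; illustrative only, not the verbatim
implementation):

    void leave_hypervisor_to_guest(void)    /* sketch */
    {
        local_irq_disable();

        do {
            check_for_pcpu_work();          /* runs softirqs: the vCPU can
                                               be descheduled or the domain
                                               crashed from here */
        } while ( check_for_vcpu_work() );  /* true => I/O still pending */

        /* ... sync vGIC state and return to the guest ... */
    }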
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch, changes were derived from (+ new explanation):
     arm/ioreq: Introduce arch specific bits for IOREQ/DM features

Changes V2 -> V3:
   - update patch description
---
---
 xen/arch/arm/traps.c | 31 ++++++++++++++++++++++++-----
 1 file changed, 26 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 036b13f..4cef43e 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2257,18 +2257,23 @@ static void check_for_pcpu_work(void)
  * Process pending work for the vCPU. Any call should be fast or
  * implement preemption.
  */
-static void check_for_vcpu_work(void)
+static bool check_for_vcpu_work(void)
 {
     struct vcpu *v = current;
 
 #ifdef CONFIG_IOREQ_SERVER
+    bool handled;
+
     local_irq_enable();
-    vcpu_ioreq_handle_completion(v);
+    handled = vcpu_ioreq_handle_completion(v);
     local_irq_disable();
+
+    if ( !handled )
+        return true;
 #endif
 
     if ( likely(!v->arch.need_flush_to_ram) )
-        return;
+        return false;
 
     /*
      * Give a chance for the pCPU to process work before handling the vCPU
@@ -2279,6 +2284,8 @@ static void check_for_vcpu_work(void)
     local_irq_enable();
     p2m_flush_vm(v);
     local_irq_disable();
+
+    return false;
 }
 
 /*
@@ -2291,8 +2298,22 @@ void leave_hypervisor_to_guest(void)
 {
     local_irq_disable();
 
-    check_for_vcpu_work();
-    check_for_pcpu_work();
+    /*
+     * The reason to use an unbounded loop here is that a vCPU shouldn't
+     * continue until its I/O has completed. In Xen's case, if an I/O never
+     * completes then it most likely means that something went horribly
+     * wrong with the Device Emulator, and it is most likely not safe to
+     * continue. So letting the vCPU spin forever if the I/O never completes
+     * is a safer action than letting it continue and leaving the guest in
+     * an unclear state, and is the best we can do for now.
+     *
+     * This isn't an issue for Xen itself, as do_softirq() is called on
+     * every iteration of the loop. In case of failure, the guest will
+     * crash and the vCPU will be unscheduled.
+     */
+    do {
+        check_for_pcpu_work();
+    } while ( check_for_vcpu_work() );
 
     vgic_sync_to_lrs();

From patchwork Mon Nov 30 10:31:31 2020
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Wei Liu, Roger Pau Monné, Julien Grall
Subject: [PATCH V3 16/23] xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
Date: Mon, 30 Nov 2020 12:31:31 +0200
Message-Id: <1606732298-22107-17-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko
This patch implements reference counting of foreign entries in
set_foreign_p2m_entry() on Arm. This is a mandatory action if we want to
run an emulator (IOREQ server) in a domain other than dom0, as we can't
trust it to do the right thing if it is not running in dom0. So we need
to grab a reference on the page to avoid it disappearing.

It is valid to always pass the "p2m_map_foreign_rw" type to
guest_physmap_add_entry() since the current and foreign domains are
always different. A case when they are equal would be rejected by
rcu_lock_remote_domain_by_id(). Besides the similar comment in the code,
a respective ASSERT() is put in place to catch incorrect usage in the
future.

It was tested with the IOREQ feature to confirm that all the pages given
to this function belong to a domain, so we can use the same approach as
for XENMAPSPACE_gmfn_foreign handling in xenmem_add_to_physmap_one().

This involves adding an extra parameter for the foreign domain to
set_foreign_p2m_entry() and a helper to indicate whether the arch
supports the reference counting of foreign entries, so that the
restriction to the hardware domain in the common code can be skipped
for such architectures.
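The discipline is the usual get/put pairing: take a reference on the foreign
page before inserting the mapping, and drop it again if the insertion fails
(or, eventually, when the entry is removed). A condensed sketch of what the
Arm set_foreign_p2m_entry() in the diff below does (illustrative, slightly
simplified):

    int set_foreign_p2m_entry_sketch(struct domain *d, const struct domain *fd,
                                     gfn_t gfn, mfn_t mfn)
    {
        struct page_info *page = mfn_to_page(mfn);
        int rc;

        /* Take a reference against the foreign domain; fail if we can't. */
        if ( !get_page(page, fd) )
            return -EINVAL;

        rc = guest_physmap_add_entry(d, gfn, mfn, 0, p2m_map_foreign_rw);
        if ( rc )
            put_page(page); /* Insertion failed: drop the reference again. */

        /* On success, the reference is dropped when the entry is removed. */
        return rc;
    }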
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
Acked-by: Stefano Stabellini

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch, was split from:
     "[RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features"
   - rewrite a logic to handle properly reference in set_foreign_p2m_entry()
     instead of treating foreign entries as p2m_ram_rw

Changes V1 -> V2:
   - rebase according to the recent changes to acquire_resource()
   - update patch description
   - introduce arch_refcounts_p2m()
   - add an explanation why p2m_map_foreign_rw is valid
   - move set_foreign_p2m_entry() to p2m-common.h
   - add const to new parameter

Changes V2 -> V3:
   - update patch description
   - rename arch_refcounts_p2m() to arch_acquire_resource_check()
   - move comment to x86's arch_acquire_resource_check()
   - return rc in Arm's set_foreign_p2m_entry()
   - put a respective ASSERT() into Arm's set_foreign_p2m_entry()
---
---
 xen/arch/arm/p2m.c           | 24 ++++++++++++++++++++++++
 xen/arch/x86/mm/p2m.c        |  5 +++--
 xen/common/memory.c          | 10 +++-------
 xen/include/asm-arm/p2m.h    | 19 +++++++++----------
 xen/include/asm-x86/p2m.h    | 16 +++++++++++++---
 xen/include/xen/p2m-common.h |  4 ++++
 6 files changed, 56 insertions(+), 22 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4eeb867..5b8d494 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1380,6 +1380,30 @@ int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
     return p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
 }
 
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn)
+{
+    struct page_info *page = mfn_to_page(mfn);
+    int rc;
+
+    if ( !get_page(page, fd) )
+        return -EINVAL;
+
+    /*
+     * It is valid to always use p2m_map_foreign_rw here as if this gets
+     * called then d != fd. A case when d == fd would be rejected by
+     * rcu_lock_remote_domain_by_id() earlier. Put a respective ASSERT()
+     * to catch incorrect usage in future.
+     */
+    ASSERT(d != fd);
+
+    rc = guest_physmap_add_entry(d, _gfn(gfn), mfn, 0, p2m_map_foreign_rw);
+    if ( rc )
+        put_page(page);
+
+    return rc;
+}
+
 static struct page_info *p2m_allocate_root(void)
 {
     struct page_info *page;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 7a2ba82..4772c86 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1321,7 +1321,8 @@ static int set_typed_p2m_entry(struct domain *d, unsigned long gfn_l,
 }
 
 /* Set foreign mfn in the given guest's p2m table. */
-int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn)
 {
     return set_typed_p2m_entry(d, gfn, mfn, PAGE_ORDER_4K, p2m_map_foreign,
                                p2m_get_hostp2m(d)->default_access);
@@ -2621,7 +2622,7 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
      * will update the m2p table which will result in mfn -> gpfn of dom0
      * and not fgfn of domU.
      */
-    rc = set_foreign_p2m_entry(tdom, gpfn, mfn);
+    rc = set_foreign_p2m_entry(tdom, fdom, gpfn, mfn);
     if ( rc )
         gdprintk(XENLOG_WARNING, "set_foreign_p2m_entry failed. "
                  "gpfn:%lx mfn:%lx fgfn:%lx td:%d fd:%d\n",
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 3363c06..49e3001 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1134,12 +1134,8 @@ static int acquire_resource(
     xen_pfn_t mfn_list[32];
     int rc;
 
-    /*
-     * FIXME: Until foreign pages inserted into the P2M are properly
-     *        reference counted, it is unsafe to allow mapping of
-     *        resource pages unless the caller is the hardware domain.
-     */
-    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) )
+    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) &&
+         !arch_acquire_resource_check() )
         return -EACCES;
 
     if ( copy_from_guest(&xmar, arg, 1) )
@@ -1207,7 +1203,7 @@ static int acquire_resource(
 
     for ( i = 0; !rc && i < xmar.nr_frames; i++ )
     {
-        rc = set_foreign_p2m_entry(currd, gfn_list[i],
+        rc = set_foreign_p2m_entry(currd, d, gfn_list[i],
                                    _mfn(mfn_list[i]));
         /* rc should be -EIO for any iteration other than the first */
         if ( rc && i )
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 28ca9a8..4f8056e 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -161,6 +161,15 @@ typedef enum {
 #endif
 #include
 
+static inline bool arch_acquire_resource_check(void)
+{
+    /*
+     * The reference counting of foreign entries in set_foreign_p2m_entry()
+     * is supported on Arm.
+     */
+    return true;
+}
+
 static inline
 void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
 {
@@ -392,16 +401,6 @@ static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
     return gfn_add(gfn, 1UL << order);
 }
 
-static inline int set_foreign_p2m_entry(struct domain *d, unsigned long gfn,
-                                        mfn_t mfn)
-{
-    /*
-     * NOTE: If this is implemented then proper reference counting of
-     *       foreign entries will need to be implemented.
-     */
-    return -EOPNOTSUPP;
-}
-
 /*
  * A vCPU has cache enabled only when the MMU is enabled and data cache
  * is enabled.
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 4603560..8d2dc22 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -382,6 +382,19 @@ struct p2m_domain {
 #endif
 #include
 
+static inline bool arch_acquire_resource_check(void)
+{
+    /*
+     * The reference counting of foreign entries in set_foreign_p2m_entry()
+     * is not supported on x86.
+     *
+     * FIXME: Until foreign pages inserted into the P2M are properly
+     *        reference counted, it is unsafe to allow mapping of
+     *        resource pages unless the caller is the hardware domain.
+     */
+    return false;
+}
+
 /*
  * Updates vCPU's n2pm to match its np2m_base in VMCx12 and returns that np2m.
  */
@@ -647,9 +660,6 @@ int p2m_finish_type_change(struct domain *d,
 int p2m_is_logdirty_range(struct p2m_domain *, unsigned long start,
                           unsigned long end);
 
-/* Set foreign entry in the p2m table (for priv-mapping) */
-int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn);
-
 /* Set mmio addresses in the p2m table (for pass-through) */
 int set_mmio_p2m_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
                        unsigned int order);
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 58031a6..b4bc709 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -3,6 +3,10 @@
 
 #include
 
+/* Set foreign entry in the p2m table */
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn);
+
 /* Remove a page from a domain's p2m table */
 int __must_check guest_physmap_remove_page(struct domain *d, gfn_t gfn,
                                            mfn_t mfn,

From patchwork Mon Nov 30 10:31:32 2020
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Wei Liu, Paul Durrant, Julien Grall
Subject: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
Date: Mon, 30 Nov 2020 12:31:32 +0200
Message-Id: <1606732298-22107-18-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko
This patch introduces a helper whose main purpose is to check if a domain
is using IOREQ server(s).

On Arm the current benefit is to avoid calling
vcpu_ioreq_handle_completion() (which implies iterating over all possible
IOREQ servers anyway) on every return in leave_hypervisor_to_guest() if
there are no active servers for the particular domain. This helper will
also be used by one of the subsequent patches on Arm.

This involves adding an extra per-domain variable to store the count of
servers in use.
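The helper is meant as a cheap guard in front of the more expensive
completion path. A sketch of the intended calling pattern (illustrative
fragment, not the actual hunk; the real use is in the traps.c change
below):

    /*
     * Illustrative caller fragment. Per the comment added in ioreq.h,
     * the result is only stable when d == current->domain (and not
     * paused), or when d is a distinct, paused domain.
     */
    if ( !domain_has_ioreq_server(v->domain) )
        return false;                        /* no servers: nothing to do */

    return !vcpu_ioreq_handle_completion(v); /* true => I/O still pending */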
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - update patch description
   - guard helper with CONFIG_IOREQ_SERVER
   - remove "hvm" prefix
   - modify helper to just return d->arch.hvm.ioreq_server.nr_servers
   - put suitable ASSERT()s
   - use ASSERT(d->ioreq_server.server[id] ? !s : !!s) in set_ioreq_server()
   - remove d->ioreq_server.nr_servers = 0 from hvm_ioreq_init()

Changes V2 -> V3:
   - update patch description
   - remove ASSERT()s from the helper, add a comment
   - use #ifdef CONFIG_IOREQ_SERVER inside function body
   - use new ASSERT() construction in set_ioreq_server()
---
---
 xen/arch/arm/traps.c    | 15 +++++++++------
 xen/common/ioreq.c      |  7 ++++++-
 xen/include/xen/ioreq.h | 14 ++++++++++++++
 xen/include/xen/sched.h |  1 +
 4 files changed, 30 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 4cef43e..b6077d2 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2262,14 +2262,17 @@ static bool check_for_vcpu_work(void)
     struct vcpu *v = current;
 
 #ifdef CONFIG_IOREQ_SERVER
-    bool handled;
+    if ( domain_has_ioreq_server(v->domain) )
+    {
+        bool handled;
 
-    local_irq_enable();
-    handled = vcpu_ioreq_handle_completion(v);
-    local_irq_disable();
+        local_irq_enable();
+        handled = vcpu_ioreq_handle_completion(v);
+        local_irq_disable();
 
-    if ( !handled )
-        return true;
+        if ( !handled )
+            return true;
+    }
 #endif
 
     if ( likely(!v->arch.need_flush_to_ram) )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 4855dd8..f35dcf9 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -39,9 +39,14 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->ioreq_server.server[id]);
+    ASSERT(!s ^ !d->ioreq_server.server[id]);
 
     d->ioreq_server.server[id] = s;
+
+    if ( s )
+        d->ioreq_server.nr_servers++;
+    else
+        d->ioreq_server.nr_servers--;
 }
 
 #define GET_IOREQ_SERVER(d, id) \
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 02ff998..2289e79 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -55,6 +55,20 @@ struct ioreq_server {
     uint8_t                bufioreq_handling;
 };
 
+/*
+ * This should only be used when d == current->domain and it's not paused,
+ * or when they're distinct and d is paused. Otherwise the result is
+ * stale before the caller can inspect it.
+ */
+static inline bool domain_has_ioreq_server(const struct domain *d)
+{
+#ifdef CONFIG_IOREQ_SERVER
+    return d->ioreq_server.nr_servers;
+#else
+    return false;
+#endif
+}
+
 static inline paddr_t ioreq_mmio_first_byte(const ioreq_t *p)
 {
     return unlikely(p->df) ?
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 8269f84..2277995 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -550,6 +550,7 @@ struct domain
     struct {
         spinlock_t lock;
         struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
+        unsigned int nr_servers;
     } ioreq_server;
 #endif
 };

From patchwork Mon Nov 30 10:31:33 2020
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Ian Jackson, Wei Liu, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini, Volodymyr Babchuk, Oleksandr Tyshchenko
Subject: [PATCH V3 18/23] xen/dm: Introduce xendevicemodel_set_irq_level DM op
Date: Mon, 30 Nov 2020 12:31:33 +0200
Message-Id: <1606732298-22107-19-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall

This patch adds the ability for a device emulator to notify the other end
(some entity running in the guest) using an SPI, and implements the Arm
specific bits for it.

The proposed interface allows the emulator to set the logical level of one
of a domain's IRQ lines. We can't reuse the existing DM op
(xen_dm_op_set_isa_irq_level) to inject an interrupt, as the "isa_irq"
field is only 8-bit and can only cover IRQ 0 - 255, whereas we need a
wider range (0 - 1020).

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
Please note, I left the interface untouched since there is still an open
discussion about what interface to use/what information to pass to the
hypervisor: the question is whether we should abstract away the state of
the line or not.
***
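For context, this is how a device emulator would be expected to drive the
new op: assert the line when the emulated device raises its interrupt
condition, deassert it when the condition clears. A hypothetical usage
snippet against the new libxendevicemodel entry point (pulse_spi() and the
SPI number are made up for illustration; error handling kept minimal):

    #include <xendevicemodel.h>

    /* Hypothetical helper: pulse one SPI of the given domain. */
    static int pulse_spi(xendevicemodel_handle *dmod, domid_t domid,
                         unsigned int spi)
    {
        /* Assert the line: the guest will see the SPI become pending. */
        int rc = xendevicemodel_set_irq_level(dmod, domid, spi, 1);

        if ( rc )
            return rc;

        /* Deassert once the emulated device's interrupt condition clears. */
        return xendevicemodel_set_irq_level(dmod, domid, spi, 0);
    }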
Changes RFC -> V1:
   - check incoming parameters in arch_dm_op()
   - add explicit padding to struct xen_dm_op_set_irq_level

Changes V1 -> V2:
   - update the author of a patch
   - update patch description
   - check that padding is always 0
   - mention that interface is Arm only and only SPIs are supported for now
   - allow to set the logical level of a line for non-allocated interrupts
     only
   - add xen_dm_op_set_irq_level_t

Changes V2 -> V3:
   - no changes
---
---
 tools/include/xendevicemodel.h               |  4 ++
 tools/libs/devicemodel/core.c                | 18 +++++++++
 tools/libs/devicemodel/libxendevicemodel.map |  1 +
 xen/arch/arm/dm.c                            | 57 +++++++++++++++++++++++++++-
 xen/common/dm.c                              |  1 +
 xen/include/public/hvm/dm_op.h               | 16 ++++++++
 6 files changed, 96 insertions(+), 1 deletion(-)

diff --git a/tools/include/xendevicemodel.h b/tools/include/xendevicemodel.h
index e877f5c..c06b3c8 100644
--- a/tools/include/xendevicemodel.h
+++ b/tools/include/xendevicemodel.h
@@ -209,6 +209,10 @@ int xendevicemodel_set_isa_irq_level(
     xendevicemodel_handle *dmod, domid_t domid, uint8_t irq,
     unsigned int level);
 
+int xendevicemodel_set_irq_level(
+    xendevicemodel_handle *dmod, domid_t domid, unsigned int irq,
+    unsigned int level);
+
 /**
  * This function maps a PCI INTx line to a an IRQ line.
  *
diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
index 4d40639..30bd79f 100644
--- a/tools/libs/devicemodel/core.c
+++ b/tools/libs/devicemodel/core.c
@@ -430,6 +430,24 @@ int xendevicemodel_set_isa_irq_level(
     return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
 }
 
+int xendevicemodel_set_irq_level(
+    xendevicemodel_handle *dmod, domid_t domid, uint32_t irq,
+    unsigned int level)
+{
+    struct xen_dm_op op;
+    struct xen_dm_op_set_irq_level *data;
+
+    memset(&op, 0, sizeof(op));
+
+    op.op = XEN_DMOP_set_irq_level;
+    data = &op.u.set_irq_level;
+
+    data->irq = irq;
+    data->level = level;
+
+    return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
+}
+
 int xendevicemodel_set_pci_link_route(
     xendevicemodel_handle *dmod, domid_t domid, uint8_t link, uint8_t irq)
 {
diff --git a/tools/libs/devicemodel/libxendevicemodel.map b/tools/libs/devicemodel/libxendevicemodel.map
index 561c62d..a0c3012 100644
--- a/tools/libs/devicemodel/libxendevicemodel.map
+++ b/tools/libs/devicemodel/libxendevicemodel.map
@@ -32,6 +32,7 @@ VERS_1.2 {
 	global:
 		xendevicemodel_relocate_memory;
 		xendevicemodel_pin_memory_cacheattr;
+		xendevicemodel_set_irq_level;
 } VERS_1.1;
 
 VERS_1.3 {
diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
index 5d3da37..e4bb233 100644
--- a/xen/arch/arm/dm.c
+++ b/xen/arch/arm/dm.c
@@ -17,10 +17,65 @@
 #include
 #include
 
+#include
+
 int arch_dm_op(struct xen_dm_op *op, struct domain *d,
                const struct dmop_args *op_args, bool *const_op)
 {
-    return -EOPNOTSUPP;
+    int rc;
+
+    switch ( op->op )
+    {
+    case XEN_DMOP_set_irq_level:
+    {
+        const struct xen_dm_op_set_irq_level *data =
+            &op->u.set_irq_level;
+        unsigned int i;
+
+        /* Only SPIs are supported */
+        if ( (data->irq < NR_LOCAL_IRQS) || (data->irq >= vgic_num_irqs(d)) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        if ( data->level != 0 && data->level != 1 )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        /* Check that padding is always 0 */
+        for ( i = 0; i < sizeof(data->pad); i++ )
+        {
+            if ( data->pad[i] )
+            {
+                rc = -EINVAL;
+                break;
+            }
+        }
+
+        /*
+         * Allow to set the logical level of a line for non-allocated
+         * interrupts only.
+         */
+        if ( test_bit(data->irq, d->arch.vgic.allocated_irqs) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        vgic_inject_irq(d, NULL, data->irq, data->level);
+        rc = 0;
+        break;
+    }
+
+    default:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+    return rc;
 }
 
 /*
diff --git a/xen/common/dm.c b/xen/common/dm.c
index 9d394fc..7bfb46c 100644
--- a/xen/common/dm.c
+++ b/xen/common/dm.c
@@ -48,6 +48,7 @@ static int dm_op(const struct dmop_args *op_args)
         [XEN_DMOP_remote_shutdown]       = sizeof(struct xen_dm_op_remote_shutdown),
         [XEN_DMOP_relocate_memory]       = sizeof(struct xen_dm_op_relocate_memory),
         [XEN_DMOP_pin_memory_cacheattr]  = sizeof(struct xen_dm_op_pin_memory_cacheattr),
+        [XEN_DMOP_set_irq_level]         = sizeof(struct xen_dm_op_set_irq_level),
     };
 
     rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 66cae1a..1f70d58 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -434,6 +434,21 @@ struct xen_dm_op_pin_memory_cacheattr {
 };
 typedef struct xen_dm_op_pin_memory_cacheattr xen_dm_op_pin_memory_cacheattr_t;
 
+/*
+ * XEN_DMOP_set_irq_level: Set the logical level of one of a domain's
+ *                         IRQ lines (currently Arm only).
+ * Only SPIs are supported.
+ */
+#define XEN_DMOP_set_irq_level 19
+
+struct xen_dm_op_set_irq_level {
+    uint32_t irq;
+    /* IN - Level: 0 -> deasserted, 1 -> asserted */
+    uint8_t level;
+    uint8_t pad[3];
+};
+typedef struct xen_dm_op_set_irq_level xen_dm_op_set_irq_level_t;
+
 struct xen_dm_op {
     uint32_t op;
     uint32_t pad;
@@ -447,6 +462,7 @@ struct xen_dm_op {
         xen_dm_op_track_dirty_vram_t track_dirty_vram;
         xen_dm_op_set_pci_intx_level_t set_pci_intx_level;
         xen_dm_op_set_isa_irq_level_t set_isa_irq_level;
+        xen_dm_op_set_irq_level_t set_irq_level;
        xen_dm_op_set_pci_link_route_t set_pci_link_route;
        xen_dm_op_modified_memory_t modified_memory;
        xen_dm_op_set_mem_type_t set_mem_type;

From patchwork Mon Nov 30 10:31:34 2020
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Julien Grall
Subject: [PATCH V3 19/23] xen/arm: io: Abstract sign-extension
Date: Mon, 30 Nov 2020 12:31:34 +0200
Message-Id: <1606732298-22107-20-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

In order to avoid code duplication (both handle_read() and handle_ioserv()
contain the same code for the sign extension), move this code into a
common helper to be used by both.
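To make the helper's arithmetic concrete: for a 1-byte signed read
returning 0x80, size is 8 and bit 7 (the sign bit) is set, so the helper
ORs in ~0UL << 8 and the value becomes 0xffffffffffffff80 in a 64-bit
register. A standalone model of the same computation (plain C, outside of
Xen; the dabt fields are replaced by plain parameters):

    #include <stdint.h>
    #include <stdio.h>

    /* Standalone model of the sign_extend() helper introduced below. */
    static unsigned long sign_extend_model(unsigned int size_bits,
                                           int is_signed, unsigned long r)
    {
        /* Replicate the sign bit across the upper bits if it is set. */
        if ( is_signed && (r & (1UL << (size_bits - 1))) )
            r |= (~0UL) << size_bits;

        return r;
    }

    int main(void)
    {
        printf("%lx\n", sign_extend_model(8, 1, 0x80));    /* ffffffffffffff80 */
        printf("%lx\n", sign_extend_model(16, 1, 0x1234)); /* 1234 (positive) */
        return 0;
    }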
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch

Changes V2 -> V3:
   - no changes
---
---
 xen/arch/arm/io.c           | 18 ++----------------
 xen/arch/arm/ioreq.c        | 17 +----------------
 xen/include/asm-arm/traps.h | 24 ++++++++++++++++++++++++
 3 files changed, 27 insertions(+), 32 deletions(-)

diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index f44cfd4..8d6ec6c 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "decode.h"
@@ -39,26 +40,11 @@ static enum io_state handle_read(const struct mmio_handler *handler,
      * setting r).
      */
     register_t r = 0;
-    uint8_t size = (1 << dabt.size) * 8;
 
     if ( !handler->ops->read(v, info, &r, handler->priv) )
         return IO_ABORT;
 
-    /*
-     * Sign extend if required.
-     * Note that we expect the read handler to have zeroed the bits
-     * outside the requested access size.
-     */
-    if ( dabt.sign && (r & (1UL << (size - 1))) )
-    {
-        /*
-         * We are relying on register_t using the same as
-         * an unsigned long in order to keep the 32-bit assembly
-         * code smaller.
-         */
-        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
-        r |= (~0UL) << size;
-    }
+    r = sign_extend(dabt, r);
 
     set_user_reg(regs, dabt.reg, r);
 
diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
index f08190c..2f39289 100644
--- a/xen/arch/arm/ioreq.c
+++ b/xen/arch/arm/ioreq.c
@@ -28,7 +28,6 @@ enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
     const union hsr hsr = { .bits = regs->hsr };
     const struct hsr_dabt dabt = hsr.dabt;
     /* Code is similar to handle_read */
-    uint8_t size = (1 << dabt.size) * 8;
     register_t r = v->io.req.data;
 
     /* We are done with the IO */
@@ -37,21 +36,7 @@ enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
     if ( dabt.write )
         return IO_HANDLED;
 
-    /*
-     * Sign extend if required.
-     * Note that we expect the read handler to have zeroed the bits
-     * outside the requested access size.
-     */
-    if ( dabt.sign && (r & (1UL << (size - 1))) )
-    {
-        /*
-         * We are relying on register_t using the same as
-         * an unsigned long in order to keep the 32-bit assembly
-         * code smaller.
-         */
-        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
-        r |= (~0UL) << size;
-    }
+    r = sign_extend(dabt, r);
 
     set_user_reg(regs, dabt.reg, r);
 
diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
index 997c378..e301c44 100644
--- a/xen/include/asm-arm/traps.h
+++ b/xen/include/asm-arm/traps.h
@@ -83,6 +83,30 @@ static inline bool VABORT_GEN_BY_GUEST(const struct cpu_user_regs *regs)
            (unsigned long)abort_guest_exit_end == regs->pc;
 }
 
+/* Check whether the sign extension is required and perform it */
+static inline register_t sign_extend(const struct hsr_dabt dabt, register_t r)
+{
+    uint8_t size = (1 << dabt.size) * 8;
+
+    /*
+     * Sign extend if required.
+     * Note that we expect the read handler to have zeroed the bits
+     * outside the requested access size.
+     */
+    if ( dabt.sign && (r & (1UL << (size - 1))) )
+    {
+        /*
+         * We are relying on register_t using the same as
+         * an unsigned long in order to keep the 32-bit assembly
+         * code smaller.
+         */
+        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
+        r |= (~0UL) << size;
+    }
+
+    return r;
+}
+
 #endif /* __ASM_ARM_TRAPS__ */
 /*
  * Local variables:

From patchwork Mon Nov 30 10:31:35 2020
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini, Paul Durrant, Julien Grall
Subject: [PATCH V3 20/23] xen/ioreq: Make x86's send_invalidate_req() common
Date: Mon, 30 Nov 2020 12:31:35 +0200
Message-Id: <1606732298-22107-21-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

As IOREQ is a common feature now, and we also need to invalidate the
qemu/demu mapcache on Arm when the required condition occurs, this patch
moves this function to the common code (and renames it to
ioreq_signal_mapcache_invalidate). This patch also moves the per-domain
qemu_mapcache_invalidate variable out of the arch sub-struct (and drops
the "qemu" prefix).

The subsequent patch will add mapcache invalidation handling on Arm.
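On the receiving side, the broadcast arrives at each device model as an
ioreq of type IOREQ_TYPE_INVALIDATE, and the emulator responds by dropping
its cached guest mappings. A hypothetical device-model fragment showing how
such a request might be dispatched (mapcache_invalidate() is an assumed
emulator-local helper, not a real qemu/demu function):

    /* Hypothetical device-model dispatch fragment. */
    static void handle_ioreq(ioreq_t *req)
    {
        switch ( req->type )
        {
        case IOREQ_TYPE_INVALIDATE:
            /*
             * Broadcast from Xen's ioreq_signal_mapcache_invalidate():
             * the guest's memory layout changed, so any cached mapping
             * of guest pages may be stale and must be dropped.
             */
            mapcache_invalidate(); /* assumed emulator-local helper */
            break;

        /* ... PIO/MMIO emulation cases elided ... */
        }

        req->state = STATE_IORESP_READY; /* tell Xen the request is done */
    }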
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - move send_invalidate_req() to the common code
   - update patch subject/description
   - move qemu_mapcache_invalidate out of the arch sub-struct, update
     checks
   - remove #if defined(CONFIG_ARM64) from the common code

Changes V1 -> V2:
   - was split into:
     - xen/ioreq: Make x86's send_invalidate_req() common
     - xen/arm: Add mapcache invalidation handling
   - update patch description/subject
   - move Arm bits to a separate patch
   - don't alter the common code, the flag is set by arch code
   - rename send_invalidate_req() to send_invalidate_ioreq()
   - guard qemu_mapcache_invalidate with CONFIG_IOREQ_SERVER
   - use bool instead of bool_t
   - remove blank line between head comment and #include-s

Changes V2 -> V3:
   - update patch description
   - drop "qemu" prefix from the variable name
   - rename send_invalidate_req() to ioreq_signal_mapcache_invalidate()
---
---
 xen/arch/x86/hvm/hypercall.c     |  9 +++++----
 xen/arch/x86/hvm/io.c            | 14 --------------
 xen/common/ioreq.c               | 14 ++++++++++++++
 xen/include/asm-x86/hvm/domain.h |  1 -
 xen/include/asm-x86/hvm/io.h     |  1 -
 xen/include/xen/ioreq.h          |  1 +
 xen/include/xen/sched.h          |  2 ++
 7 files changed, 22 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index ac573c8..6d41c56 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -20,6 +20,7 @@
  */
 #include
 #include
+#include
 #include
 #include
 
@@ -47,7 +48,7 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         rc = compat_memory_op(cmd, arg);
 
     if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
-        curr->domain->arch.hvm.qemu_mapcache_invalidate = true;
+        curr->domain->mapcache_invalidate = true;
 
     return rc;
 }
@@ -326,9 +327,9 @@ int hvm_hypercall(struct cpu_user_regs *regs)
 
     HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu -> %lx", eax, regs->rax);
 
-    if ( unlikely(currd->arch.hvm.qemu_mapcache_invalidate) &&
-         test_and_clear_bool(currd->arch.hvm.qemu_mapcache_invalidate) )
-        send_invalidate_req();
+    if ( unlikely(currd->mapcache_invalidate) &&
+         test_and_clear_bool(currd->mapcache_invalidate) )
+        ioreq_signal_mapcache_invalidate();
 
     return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
 }
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index a0dd8d1..ba77414 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -64,20 +64,6 @@ void send_timeoffset_req(unsigned long timeoff)
         gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n");
 }
 
-/* Ask ioemu mapcache to invalidate mappings. */
-void send_invalidate_req(void)
-{
-    ioreq_t p = {
-        .type = IOREQ_TYPE_INVALIDATE,
-        .size = 4,
-        .dir = IOREQ_WRITE,
-        .data = ~0UL, /* flush all */
-    };
-
-    if ( ioreq_broadcast(&p, false) != 0 )
-        gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
-}
-
 bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)
 {
     struct hvm_emulate_ctxt ctxt;
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index f35dcf9..61ba761 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -35,6 +35,20 @@
 #include
 #include
 
+/* Ask ioemu mapcache to invalidate mappings. */
+void ioreq_signal_mapcache_invalidate(void)
+{
+    ioreq_t p = {
+        .type = IOREQ_TYPE_INVALIDATE,
+        .size = 4,
+        .dir = IOREQ_WRITE,
+        .data = ~0UL, /* flush all */
+    };
+
+    if ( ioreq_broadcast(&p, false) != 0 )
+        gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
+}
+
 static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct ioreq_server *s)
 {
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index b8be1ad..cf959f6 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -122,7 +122,6 @@ struct hvm_domain {
 
     struct viridian_domain *viridian;
 
-    bool_t                 qemu_mapcache_invalidate;
     bool_t                 is_s3_suspended;
 
     /*
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index fb64294..3da0136 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -97,7 +97,6 @@ bool relocate_portio_handler(
     unsigned int size);
 
 void send_timeoffset_req(unsigned long timeoff);
-void send_invalidate_req(void);
 bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn,
                                   struct npfec);
 bool handle_pio(uint16_t port, unsigned int size, int dir);
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 2289e79..482f76f 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -129,6 +129,7 @@ struct ioreq_server *ioreq_server_select(struct domain *d,
 int ioreq_send(struct ioreq_server *s, ioreq_t *proto_p, bool buffered);
 unsigned int ioreq_broadcast(ioreq_t *p, bool buffered);
+void ioreq_signal_mapcache_invalidate(void);
 
 void ioreq_domain_init(struct domain *d);
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 2277995..60bf254 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -552,6 +552,8 @@ struct domain
         struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
         unsigned int nr_servers;
     } ioreq_server;
+
+    bool mapcache_invalidate;
 #endif
 };

From patchwork Mon Nov 30 10:31:36 2020
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Julien Grall
Subject: [PATCH V3 21/23] xen/arm: Add mapcache invalidation handling
Date: Mon, 30 Nov 2020 12:31:36 +0200
Message-Id: <1606732298-22107-22-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

We need to send a mapcache invalidation request
to qemu/demu every time a page gets removed from a guest. At the moment, the Arm code doesn't explicitly remove the existing mapping before inserting the new mapping. Instead, this is done implicitly by __p2m_set_entry(). So we need to recognize the case when the old entry is a RAM page *and* the new MFN is different, in order to set the corresponding flag. The most suitable place to do this is p2m_free_entry(), where we can find the correct leaf type. The invalidation request will be sent in do_trap_hypercall() later on. Signed-off-by: Oleksandr Tyshchenko CC: Julien Grall --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes V1 -> V2: - new patch, some changes were derived from (+ new explanation): xen/ioreq: Make x86's invalidate qemu mapcache handling common - put setting of the flag into __p2m_set_entry() - clarify the conditions when the flag should be set - use domain_has_ioreq_server() - update do_trap_hypercall() by adding local variable Changes V2 -> V3: - update patch description - move check to p2m_free_entry() - add a comment - use "curr" instead of "v" in do_trap_hypercall() --- --- xen/arch/arm/p2m.c | 24 ++++++++++++++++-------- xen/arch/arm/traps.c | 13 ++++++++++--- 2 files changed, 26 insertions(+), 11 deletions(-) diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c index 5b8d494..9674f6f 100644 --- a/xen/arch/arm/p2m.c +++ b/xen/arch/arm/p2m.c @@ -1,6 +1,7 @@ #include <xen/cpu.h> #include <xen/domain_page.h> #include <xen/iocap.h> +#include <xen/ioreq.h> #include <xen/lib.h> #include <xen/sched.h> #include <xen/softirq.h> @@ -749,17 +750,24 @@ static void p2m_free_entry(struct p2m_domain *p2m, if ( !p2m_is_valid(entry) ) return; - /* Nothing to do but updating the stats if the entry is a super-page. */ - if ( p2m_is_superpage(entry, level) ) + if ( p2m_is_superpage(entry, level) || (level == 3) ) { - p2m->stats.mappings[level]--; - return; - } +#ifdef CONFIG_IOREQ_SERVER + /* + * If this gets called (non-recursively) then either the entry + * was replaced by an entry with a different base (valid case) or + * the shattering of a superpage failed (error case). + * So, at worst, a spurious mapcache invalidation might be sent. + */ + if ( domain_has_ioreq_server(p2m->domain) && + (p2m->domain == current->domain) && p2m_is_ram(entry.p2m.type) ) + p2m->domain->mapcache_invalidate = true; +#endif - if ( level == 3 ) - { p2m->stats.mappings[level]--; - p2m_put_l3_page(entry); + /* Nothing else to do unless the entry is a level-3 leaf. */ + if ( level == 3 ) + p2m_put_l3_page(entry); return; } diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index b6077d2..151c626 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -1443,6 +1443,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr, const union hsr hsr) { arm_hypercall_fn_t call = NULL; + struct vcpu *curr = current; BUILD_BUG_ON(NR_hypercalls < ARRAY_SIZE(arm_hypercall_table) ); @@ -1459,7 +1460,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr, return; } - current->hcall_preempted = false; + curr->hcall_preempted = false; perfc_incra(hypercalls, *nr); call = arm_hypercall_table[*nr].fn; @@ -1472,7 +1473,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr, HYPERCALL_RESULT_REG(regs) = call(HYPERCALL_ARGS(regs)); #ifndef NDEBUG - if ( !current->hcall_preempted ) + if ( !curr->hcall_preempted ) { /* Deliberately corrupt parameter regs used by this hypercall.
*/ switch ( arm_hypercall_table[*nr].nr_args ) { @@ -1489,8 +1490,14 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr, #endif /* Ensure the hypercall trap instruction is re-executed. */ - if ( current->hcall_preempted ) + if ( curr->hcall_preempted ) regs->pc -= 4; /* re-execute 'hvc #XEN_HYPERCALL_TAG' */ + +#ifdef CONFIG_IOREQ_SERVER + if ( unlikely(curr->domain->mapcache_invalidate) && + test_and_clear_bool(curr->domain->mapcache_invalidate) ) + ioreq_signal_mapcache_invalidate(); +#endif } void arch_hypercall_tasklet_result(struct vcpu *v, long res)
From patchwork Mon Nov 30 10:31:37 2020
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 11940157
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Ian Jackson, Wei Liu, Anthony PERARD, Stefano Stabellini, Julien Grall, Volodymyr Babchuk, Oleksandr Tyshchenko
Subject: [PATCH V3 22/23] libxl: Introduce basic virtio-mmio support on Arm
Date: Mon, 30 Nov 2020 12:31:37 +0200
Message-Id: <1606732298-22107-23-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall

This patch creates a specific device node in the guest device-tree with an allocated MMIO range and SPI interrupt if the 'virtio' property is present in the domain config.
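As a sketch only: with the GUEST_VIRTIO_MMIO_* defaults added by this patch, the node emitted by make_virtio_mmio_node() would decompile to roughly the following (the interrupt cells assume libxl's set_interrupt() encoding of an SPI with cpumask 0xf and edge-rising trigger, so treat the exact values as indicative rather than verbatim output):

virtio@2000000 {
    compatible = "virtio,mmio";
    reg = <0x0 0x2000000 0x0 0x200>;    /* GUEST_VIRTIO_MMIO_BASE, GUEST_VIRTIO_MMIO_SIZE */
    interrupts = <0x0 0x1 0xf01>;       /* SPI 33: type 0, number 33 - 32, (0xf << 8) | edge */
    dma-coherent;
};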
Signed-off-by: Julien Grall Signed-off-by: Oleksandr Tyshchenko --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes RFC -> V1: - was squashed with: "[RFC PATCH V1 09/12] libxl: Handle virtio-mmio irq in more correct way" "[RFC PATCH V1 11/12] libxl: Insert "dma-coherent" property into virtio-mmio device node" "[RFC PATCH V1 12/12] libxl: Fix duplicate memory node in DT" - move VirtIO MMIO #define-s to xen/include/public/arch-arm.h Changes V1 -> V2: - update the patch author Changes V2 -> V3: - no changes --- --- tools/libs/light/libxl_arm.c | 58 ++++++++++++++++++++++++++++++++++++++-- tools/libs/light/libxl_types.idl | 1 + tools/xl/xl_parse.c | 1 + xen/include/public/arch-arm.h | 5 ++++ 4 files changed, 63 insertions(+), 2 deletions(-) diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c index 66e8a06..588ee5a 100644 --- a/tools/libs/light/libxl_arm.c +++ b/tools/libs/light/libxl_arm.c @@ -26,8 +26,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc, { uint32_t nr_spis = 0; unsigned int i; - uint32_t vuart_irq; - bool vuart_enabled = false; + uint32_t vuart_irq, virtio_irq; + bool vuart_enabled = false, virtio_enabled = false; /* * If pl011 vuart is enabled then increment the nr_spis to allow allocation @@ -39,6 +39,17 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc, vuart_enabled = true; } + /* + * XXX: Handle virtio properly + * A proper solution would be for the toolstack to allocate the interrupts + * used by each virtio backend and let the backend know which one is used + */ + if (libxl_defbool_val(d_config->b_info.arch_arm.virtio)) { + nr_spis += (GUEST_VIRTIO_MMIO_SPI - 32) + 1; + virtio_irq = GUEST_VIRTIO_MMIO_SPI; + virtio_enabled = true; + } + for (i = 0; i < d_config->b_info.num_irqs; i++) { uint32_t irq = d_config->b_info.irqs[i]; uint32_t spi; @@ -58,6 +69,12 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc, return ERROR_FAIL; } + /* The same check as for vpl011 */ + if (virtio_enabled && irq == virtio_irq) { + LOG(ERROR, "Physical IRQ %u conflicting with virtio SPI\n", irq); + return ERROR_FAIL; + } + if (irq < 32) continue; @@ -658,6 +675,39 @@ static int make_vpl011_uart_node(libxl__gc *gc, void *fdt, return 0; } +static int make_virtio_mmio_node(libxl__gc *gc, void *fdt, + uint64_t base, uint32_t irq) +{ + int res; + gic_interrupt intr; + /* Placeholder for virtio@ + a 64-bit number + \0 */ + char buf[24]; + + snprintf(buf, sizeof(buf), "virtio@%"PRIx64, base); + res = fdt_begin_node(fdt, buf); + if (res) return res; + + res = fdt_property_compat(gc, fdt, 1, "virtio,mmio"); + if (res) return res; + + res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS, + 1, base, GUEST_VIRTIO_MMIO_SIZE); + if (res) return res; + + set_interrupt(intr, irq, 0xf, DT_IRQ_TYPE_EDGE_RISING); + res = fdt_property_interrupts(gc, fdt, &intr, 1); + if (res) return res; + + res = fdt_property(fdt, "dma-coherent", NULL, 0); + if (res) return res; + + res = fdt_end_node(fdt); + if (res) return res; + + return 0; + +} + static const struct arch_info *get_arch_info(libxl__gc *gc, const struct xc_dom_image *dom) { @@ -961,6 +1011,9 @@ next_resize: if (info->tee == LIBXL_TEE_TYPE_OPTEE) FDT( make_optee_node(gc, fdt) ); + if (libxl_defbool_val(info->arch_arm.virtio)) + FDT( make_virtio_mmio_node(gc, fdt, GUEST_VIRTIO_MMIO_BASE, GUEST_VIRTIO_MMIO_SPI) ); + if (pfdt) FDT( copy_partial_fdt(gc, fdt, pfdt) ); @@ -1178,6 +1231,7 @@ void
libxl__arch_domain_build_info_setdefault(libxl__gc *gc, { /* ACPI is disabled by default */ libxl_defbool_setdefault(&b_info->acpi, false); + libxl_defbool_setdefault(&b_info->arch_arm.virtio, false); if (b_info->type != LIBXL_DOMAIN_TYPE_PV) return; diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl index 9d3f05f..b054bf9 100644 --- a/tools/libs/light/libxl_types.idl +++ b/tools/libs/light/libxl_types.idl @@ -639,6 +639,7 @@ libxl_domain_build_info = Struct("domain_build_info",[ ("arch_arm", Struct(None, [("gic_version", libxl_gic_version), + ("virtio", libxl_defbool), ("vuart", libxl_vuart_type), ])), # Alternate p2m is not bound to any architecture or guest type, as it is diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c index cae8eb6..10acf22 100644 --- a/tools/xl/xl_parse.c +++ b/tools/xl/xl_parse.c @@ -2581,6 +2581,7 @@ skip_usbdev: } xlu_cfg_get_defbool(config, "dm_restrict", &b_info->dm_restrict, 0); + xlu_cfg_get_defbool(config, "virtio", &b_info->arch_arm.virtio, 0); if (c_info->type == LIBXL_DOMAIN_TYPE_HVM) { if (!xlu_cfg_get_string (config, "vga", &buf, 0)) { diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h index c365b1b..be7595f 100644 --- a/xen/include/public/arch-arm.h +++ b/xen/include/public/arch-arm.h @@ -464,6 +464,11 @@ typedef uint64_t xen_callback_t; #define PSCI_cpu_on 2 #define PSCI_migrate 3 +/* VirtIO MMIO definitions */ +#define GUEST_VIRTIO_MMIO_BASE xen_mk_ullong(0x02000000) +#define GUEST_VIRTIO_MMIO_SIZE xen_mk_ullong(0x200) +#define GUEST_VIRTIO_MMIO_SPI 33 + #endif #ifndef __ASSEMBLY__
From patchwork Mon Nov 30 10:31:38 2020
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 11940171
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Ian Jackson, Wei Liu, Julien Grall, Stefano Stabellini, Anthony PERARD
Subject: [PATCH V3 23/23] [RFC] libxl: Add support for virtio-disk configuration
Date: Mon, 30 Nov 2020 12:31:38 +0200
Message-Id: <1606732298-22107-24-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>
References: <1606732298-22107-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch adds basic support for configuring and assisting a virtio-disk backend (emulator) which is intended to run outside of Qemu and could run in any domain.
Xenstore was chosen as the communication interface so that an emulator running in a non-toolstack domain can get its configuration either by reading Xenstore directly or by receiving command line parameters (an updated 'xl devd' running in the same domain would read Xenstore beforehand and call the backend executable with the required arguments).

An example of a domain configuration (two disks are assigned to the guest, the latter in read-only mode):

vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]

Where the per-disk Xenstore entries are: - filename and readonly flag (configured via the "vdisk" property) - base and irq (allocated dynamically)

Besides handling the 'visible' params described in the configuration file, the patch also allocates virtio-mmio specific ones for each device and writes them into Xenstore. The virtio-mmio params (irq and base) are unique per guest domain; they are allocated at domain creation time and passed through to the emulator. Each VirtIO device has at least one pair of these params.

TODO: 1. The extra "virtio" property could be removed. 2. Update documentation.

Signed-off-by: Oleksandr Tyshchenko --- Changes RFC -> V1: - no changes Changes V1 -> V2: - rebase according to the new location of libxl_virtio_disk.c Changes V2 -> V3: - no changes

Please note, there is a real concern about VirtIO interrupt allocation. I just copy here what Stefano said in the RFC thread: So, if we end up allocating, let's say, 6 virtio interrupts for a domain, the chance of a clash with a physical interrupt of a passthrough device is real. I am not entirely sure how to solve it, but these are a few ideas: - choosing virtio interrupts that are less likely to conflict (maybe > 1000) - making the virtio irq (optionally) configurable so that a user could override the default irq and specify one that doesn't conflict - implementing support for virq != pirq (even the xl interface doesn't allow specifying the virq number for passthrough devices, see "irqs")

Also there is one suggestion from Wei Chen regarding a parameter for the domain config file which I haven't addressed yet. [Just copying here what Wei said in the V2 thread] Can we keep using the same 'disk' parameter for virtio-disk, but add an option like "model=virtio-disk"? For example: disk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3,model=virtio-disk' ] Just like what Xen has done for x86 virtio-net.
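For illustration, given the example config above, the per-disk entries written by libxl__set_xenstore_virtio_disk() would look roughly as follows. Here <ro-frontend-path> is hypothetical shorthand for the device's read-only frontend directory in Xenstore, not a literal path, and base is written in decimal by the "%lu" format:

<ro-frontend-path>/0/filename = "/dev/mmcblk0p3"
<ro-frontend-path>/0/readonly = "0"
<ro-frontend-path>/0/base = "33554432"    (0x02000000, GUEST_VIRTIO_MMIO_BASE)
<ro-frontend-path>/0/irq = "33"           (GUEST_VIRTIO_MMIO_SPI)
<ro-frontend-path>/1/filename = "/dev/mmcblk1p3"
<ro-frontend-path>/1/readonly = "1"
<ro-frontend-path>/1/base = "33554944"    (base + GUEST_VIRTIO_MMIO_SIZE)
<ro-frontend-path>/1/irq = "34"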
--- --- tools/libs/light/Makefile | 1 + tools/libs/light/libxl_arm.c | 56 ++++++++++++--- tools/libs/light/libxl_create.c | 1 + tools/libs/light/libxl_internal.h | 1 + tools/libs/light/libxl_types.idl | 15 ++++ tools/libs/light/libxl_types_internal.idl | 1 + tools/libs/light/libxl_virtio_disk.c | 109 ++++++++++++++++++++++++++++ tools/xl/Makefile | 2 +- tools/xl/xl.h | 3 + tools/xl/xl_cmdtable.c | 15 ++++ tools/xl/xl_parse.c | 115 ++++++++++++++++++++++++++++++ tools/xl/xl_virtio_disk.c | 46 ++++++++++++ 12 files changed, 354 insertions(+), 11 deletions(-) create mode 100644 tools/libs/light/libxl_virtio_disk.c create mode 100644 tools/xl/xl_virtio_disk.c diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile index 68f6fa3..ccc91b9 100644 --- a/tools/libs/light/Makefile +++ b/tools/libs/light/Makefile @@ -115,6 +115,7 @@ SRCS-y += libxl_genid.c SRCS-y += _libxl_types.c SRCS-y += libxl_flask.c SRCS-y += _libxl_types_internal.c +SRCS-y += libxl_virtio_disk.c ifeq ($(CONFIG_LIBNL),y) CFLAGS_LIBXL += $(LIBNL3_CFLAGS) diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c index 588ee5a..9eb3022 100644 --- a/tools/libs/light/libxl_arm.c +++ b/tools/libs/light/libxl_arm.c @@ -8,6 +8,12 @@ #include #include +#ifndef container_of +#define container_of(ptr, type, member) ({ \ + typeof( ((type *)0)->member ) *__mptr = (ptr); \ + (type *)( (char *)__mptr - offsetof(type,member) );}) +#endif + static const char *gicv_to_string(libxl_gic_version gic_version) { switch (gic_version) { @@ -39,14 +45,32 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc, vuart_enabled = true; } - /* - * XXX: Handle virtio properly - * A proper solution would be for the toolstack to allocate the interrupts - * used by each virtio backend and let the backend know which one is used - */ if (libxl_defbool_val(d_config->b_info.arch_arm.virtio)) { - nr_spis += (GUEST_VIRTIO_MMIO_SPI - 32) + 1; + uint64_t virtio_base; + libxl_device_virtio_disk *virtio_disk; + + virtio_base = GUEST_VIRTIO_MMIO_BASE; virtio_irq = GUEST_VIRTIO_MMIO_SPI; + + if (!d_config->num_virtio_disks) { + LOG(ERROR, "Virtio is enabled, but no Virtio devices present\n"); + return ERROR_FAIL; + } + virtio_disk = &d_config->virtio_disks[0]; + + for (i = 0; i < virtio_disk->num_disks; i++) { + virtio_disk->disks[i].base = virtio_base; + virtio_disk->disks[i].irq = virtio_irq; + + LOG(DEBUG, "Allocate Virtio MMIO params: IRQ %u BASE 0x%"PRIx64, + virtio_irq, virtio_base); + + virtio_irq ++; + virtio_base += GUEST_VIRTIO_MMIO_SIZE; + } + virtio_irq --; + + nr_spis += (virtio_irq - 32) + 1; virtio_enabled = true; } @@ -70,8 +94,9 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc, } /* The same check as for vpl011 */ - if (virtio_enabled && irq == virtio_irq) { - LOG(ERROR, "Physical IRQ %u conflicting with virtio SPI\n", irq); + if (virtio_enabled && + (irq >= GUEST_VIRTIO_MMIO_SPI && irq <= virtio_irq)) { + LOG(ERROR, "Physical IRQ %u conflicting with Virtio IRQ range\n", irq); return ERROR_FAIL; } @@ -1011,8 +1036,19 @@ next_resize: if (info->tee == LIBXL_TEE_TYPE_OPTEE) FDT( make_optee_node(gc, fdt) ); - if (libxl_defbool_val(info->arch_arm.virtio)) - FDT( make_virtio_mmio_node(gc, fdt, GUEST_VIRTIO_MMIO_BASE, GUEST_VIRTIO_MMIO_SPI) ); + if (libxl_defbool_val(info->arch_arm.virtio)) { + libxl_domain_config *d_config = + container_of(info, libxl_domain_config, b_info); + libxl_device_virtio_disk *virtio_disk = &d_config->virtio_disks[0]; + unsigned int i; + + for (i = 0; i < virtio_disk->num_disks; i++) { + uint64_t base =
virtio_disk->disks[i].base; + uint32_t irq = virtio_disk->disks[i].irq; + + FDT( make_virtio_mmio_node(gc, fdt, base, irq) ); + } + } if (pfdt) FDT( copy_partial_fdt(gc, fdt, pfdt) ); diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c index 321a13e..8da328d 100644 --- a/tools/libs/light/libxl_create.c +++ b/tools/libs/light/libxl_create.c @@ -1821,6 +1821,7 @@ const libxl__device_type *device_type_tbl[] = { &libxl__dtdev_devtype, &libxl__vdispl_devtype, &libxl__vsnd_devtype, + &libxl__virtio_disk_devtype, NULL }; diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h index e26cda9..ea497bb 100644 --- a/tools/libs/light/libxl_internal.h +++ b/tools/libs/light/libxl_internal.h @@ -4000,6 +4000,7 @@ extern const libxl__device_type libxl__vdispl_devtype; extern const libxl__device_type libxl__p9_devtype; extern const libxl__device_type libxl__pvcallsif_devtype; extern const libxl__device_type libxl__vsnd_devtype; +extern const libxl__device_type libxl__virtio_disk_devtype; extern const libxl__device_type *device_type_tbl[]; diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl index b054bf9..5f8a3ff 100644 --- a/tools/libs/light/libxl_types.idl +++ b/tools/libs/light/libxl_types.idl @@ -935,6 +935,20 @@ libxl_device_vsnd = Struct("device_vsnd", [ ("pcms", Array(libxl_vsnd_pcm, "num_vsnd_pcms")) ]) +libxl_virtio_disk_param = Struct("virtio_disk_param", [ + ("filename", string), + ("readonly", bool), + ("irq", uint32), + ("base", uint64), + ]) + +libxl_device_virtio_disk = Struct("device_virtio_disk", [ + ("backend_domid", libxl_domid), + ("backend_domname", string), + ("devid", libxl_devid), + ("disks", Array(libxl_virtio_disk_param, "num_disks")), + ]) + libxl_domain_config = Struct("domain_config", [ ("c_info", libxl_domain_create_info), ("b_info", libxl_domain_build_info), @@ -951,6 +965,7 @@ libxl_domain_config = Struct("domain_config", [ ("pvcallsifs", Array(libxl_device_pvcallsif, "num_pvcallsifs")), ("vdispls", Array(libxl_device_vdispl, "num_vdispls")), ("vsnds", Array(libxl_device_vsnd, "num_vsnds")), + ("virtio_disks", Array(libxl_device_virtio_disk, "num_virtio_disks")), # a channel manifests as a console with a name, # see docs/misc/channels.txt ("channels", Array(libxl_device_channel, "num_channels")), diff --git a/tools/libs/light/libxl_types_internal.idl b/tools/libs/light/libxl_types_internal.idl index 3593e21..8f71980 100644 --- a/tools/libs/light/libxl_types_internal.idl +++ b/tools/libs/light/libxl_types_internal.idl @@ -32,6 +32,7 @@ libxl__device_kind = Enumeration("device_kind", [ (14, "PVCALLS"), (15, "VSND"), (16, "VINPUT"), + (17, "VIRTIO_DISK"), ]) libxl__console_backend = Enumeration("console_backend", [ diff --git a/tools/libs/light/libxl_virtio_disk.c b/tools/libs/light/libxl_virtio_disk.c new file mode 100644 index 0000000..25e7f1a --- /dev/null +++ b/tools/libs/light/libxl_virtio_disk.c @@ -0,0 +1,109 @@ +/* + * Copyright (C) 2020 EPAM Systems Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU Lesser General Public License as published + * by the Free Software Foundation; version 2.1 only. with the special + * exception on linking described in file LICENSE. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU Lesser General Public License for more details. 
+ */ + +#include "libxl_internal.h" + +static int libxl__device_virtio_disk_setdefault(libxl__gc *gc, uint32_t domid, + libxl_device_virtio_disk *virtio_disk, + bool hotplug) +{ + return libxl__resolve_domid(gc, virtio_disk->backend_domname, + &virtio_disk->backend_domid); +} + +static int libxl__virtio_disk_from_xenstore(libxl__gc *gc, const char *libxl_path, + libxl_devid devid, + libxl_device_virtio_disk *virtio_disk) +{ + const char *be_path; + int rc; + + virtio_disk->devid = devid; + rc = libxl__xs_read_mandatory(gc, XBT_NULL, + GCSPRINTF("%s/backend", libxl_path), + &be_path); + if (rc) return rc; + + rc = libxl__backendpath_parse_domid(gc, be_path, &virtio_disk->backend_domid); + if (rc) return rc; + + return 0; +} + +static void libxl__update_config_virtio_disk(libxl__gc *gc, + libxl_device_virtio_disk *dst, + libxl_device_virtio_disk *src) +{ + dst->devid = src->devid; +} + +static int libxl_device_virtio_disk_compare(libxl_device_virtio_disk *d1, + libxl_device_virtio_disk *d2) +{ + return COMPARE_DEVID(d1, d2); +} + +static void libxl__device_virtio_disk_add(libxl__egc *egc, uint32_t domid, + libxl_device_virtio_disk *virtio_disk, + libxl__ao_device *aodev) +{ + libxl__device_add_async(egc, domid, &libxl__virtio_disk_devtype, virtio_disk, aodev); +} + +static int libxl__set_xenstore_virtio_disk(libxl__gc *gc, uint32_t domid, + libxl_device_virtio_disk *virtio_disk, + flexarray_t *back, flexarray_t *front, + flexarray_t *ro_front) +{ + int rc; + unsigned int i; + + for (i = 0; i < virtio_disk->num_disks; i++) { + rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/filename", i), + GCSPRINTF("%s", virtio_disk->disks[i].filename)); + if (rc) return rc; + + rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/readonly", i), + GCSPRINTF("%d", virtio_disk->disks[i].readonly)); + if (rc) return rc; + + rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/base", i), + GCSPRINTF("%lu", virtio_disk->disks[i].base)); + if (rc) return rc; + + rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/irq", i), + GCSPRINTF("%u", virtio_disk->disks[i].irq)); + if (rc) return rc; + } + + return 0; +} + +static LIBXL_DEFINE_UPDATE_DEVID(virtio_disk) +static LIBXL_DEFINE_DEVICE_FROM_TYPE(virtio_disk) +static LIBXL_DEFINE_DEVICES_ADD(virtio_disk) + +DEFINE_DEVICE_TYPE_STRUCT(virtio_disk, VIRTIO_DISK, + .update_config = (device_update_config_fn_t) libxl__update_config_virtio_disk, + .from_xenstore = (device_from_xenstore_fn_t) libxl__virtio_disk_from_xenstore, + .set_xenstore_config = (device_set_xenstore_config_fn_t) libxl__set_xenstore_virtio_disk +); + +/* + * Local variables: + * mode: C + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/tools/xl/Makefile b/tools/xl/Makefile index bdf67c8..9d8f2aa 100644 --- a/tools/xl/Makefile +++ b/tools/xl/Makefile @@ -23,7 +23,7 @@ XL_OBJS += xl_vtpm.o xl_block.o xl_nic.o xl_usb.o XL_OBJS += xl_sched.o xl_pci.o xl_vcpu.o xl_cdrom.o xl_mem.o XL_OBJS += xl_info.o xl_console.o xl_misc.o XL_OBJS += xl_vmcontrol.o xl_saverestore.o xl_migrate.o -XL_OBJS += xl_vdispl.o xl_vsnd.o xl_vkb.o +XL_OBJS += xl_vdispl.o xl_vsnd.o xl_vkb.o xl_virtio_disk.o $(XL_OBJS): CFLAGS += $(CFLAGS_libxentoollog) $(XL_OBJS): CFLAGS += $(CFLAGS_XL) diff --git a/tools/xl/xl.h b/tools/xl/xl.h index 06569c6..3d26f19 100644 --- a/tools/xl/xl.h +++ b/tools/xl/xl.h @@ -178,6 +178,9 @@ int main_vsnddetach(int argc, char **argv); int main_vkbattach(int argc, char **argv); int main_vkblist(int argc, char **argv); int main_vkbdetach(int argc, char **argv); +int 
main_virtio_diskattach(int argc, char **argv); +int main_virtio_disklist(int argc, char **argv); +int main_virtio_diskdetach(int argc, char **argv); int main_usbctrl_attach(int argc, char **argv); int main_usbctrl_detach(int argc, char **argv); int main_usbdev_attach(int argc, char **argv); diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c index 7da6c1b..745afab 100644 --- a/tools/xl/xl_cmdtable.c +++ b/tools/xl/xl_cmdtable.c @@ -435,6 +435,21 @@ struct cmd_spec cmd_table[] = { "Destroy a domain's virtual sound device", " ", }, + { "virtio-disk-attach", + &main_virtio_diskattach, 1, 1, + "Create a new virtio block device", + " TBD\n" + }, + { "virtio-disk-list", + &main_virtio_disklist, 0, 0, + "List virtio block devices for a domain", + "", + }, + { "virtio-disk-detach", + &main_virtio_diskdetach, 0, 1, + "Destroy a domain's virtio block device", + " ", + }, { "uptime", &main_uptime, 0, 0, "Print uptime for all/some domains", diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c index 10acf22..6cf3524 100644 --- a/tools/xl/xl_parse.c +++ b/tools/xl/xl_parse.c @@ -1204,6 +1204,120 @@ out: if (rc) exit(EXIT_FAILURE); } +#define MAX_VIRTIO_DISKS 4 + +static int parse_virtio_disk_config(libxl_device_virtio_disk *virtio_disk, char *token) +{ + char *oparg; + libxl_string_list disks = NULL; + int i, rc; + + if (MATCH_OPTION("backend", token, oparg)) { + virtio_disk->backend_domname = strdup(oparg); + } else if (MATCH_OPTION("disks", token, oparg)) { + split_string_into_string_list(oparg, ";", &disks); + + virtio_disk->num_disks = libxl_string_list_length(&disks); + if (virtio_disk->num_disks > MAX_VIRTIO_DISKS) { + fprintf(stderr, "vdisk: currently only %d disks are supported", + MAX_VIRTIO_DISKS); + return 1; + } + virtio_disk->disks = xcalloc(virtio_disk->num_disks, + sizeof(*virtio_disk->disks)); + + for(i = 0; i < virtio_disk->num_disks; i++) { + char *disk_opt; + + rc = split_string_into_pair(disks[i], ":", &disk_opt, + &virtio_disk->disks[i].filename); + if (rc) { + fprintf(stderr, "vdisk: failed to split \"%s\" into pair\n", + disks[i]); + goto out; + } + + if (!strcmp(disk_opt, "ro")) + virtio_disk->disks[i].readonly = 1; + else if (!strcmp(disk_opt, "rw")) + virtio_disk->disks[i].readonly = 0; + else { + fprintf(stderr, "vdisk: failed to parse \"%s\" disk option\n", + disk_opt); + rc = 1; + } + free(disk_opt); + + if (rc) goto out; + } + } else { + fprintf(stderr, "Unknown string \"%s\" in vdisk spec\n", token); + rc = 1; goto out; + } + + rc = 0; + +out: + libxl_string_list_dispose(&disks); + return rc; +} + +static void parse_virtio_disk_list(const XLU_Config *config, + libxl_domain_config *d_config) +{ + XLU_ConfigList *virtio_disks; + const char *item; + char *buf = NULL; + int rc; + + if (!xlu_cfg_get_list (config, "vdisk", &virtio_disks, 0, 0)) { + libxl_domain_build_info *b_info = &d_config->b_info; + int entry = 0; + + /* XXX Remove an extra property */ + libxl_defbool_setdefault(&b_info->arch_arm.virtio, false); + if (!libxl_defbool_val(b_info->arch_arm.virtio)) { + fprintf(stderr, "Virtio device requires Virtio property to be set\n"); + exit(EXIT_FAILURE); + } + + while ((item = xlu_cfg_get_listitem(virtio_disks, entry)) != NULL) { + libxl_device_virtio_disk *virtio_disk; + char *p; + + virtio_disk = ARRAY_EXTEND_INIT(d_config->virtio_disks, + d_config->num_virtio_disks, + libxl_device_virtio_disk_init); + + buf = strdup(item); + + p = strtok (buf, ","); + while (p != NULL) + { + while (*p == ' ') p++; + + rc = parse_virtio_disk_config(virtio_disk, p); + if (rc) 
goto out; + + p = strtok (NULL, ","); + } + + entry++; + + if (virtio_disk->num_disks == 0) { + fprintf(stderr, "At least one virtio disk should be specified\n"); + rc = 1; goto out; + } + } + } + + rc = 0; + +out: + free(buf); + if (rc) exit(EXIT_FAILURE); +} + void parse_config_data(const char *config_source, const char *config_data, int config_len, @@ -2734,6 +2848,7 @@ skip_usbdev: } parse_vkb_list(config, d_config); + parse_virtio_disk_list(config, d_config); xlu_cfg_get_defbool(config, "xend_suspend_evtchn_compat", &c_info->xend_suspend_evtchn_compat, 0); diff --git a/tools/xl/xl_virtio_disk.c b/tools/xl/xl_virtio_disk.c new file mode 100644 index 0000000..808a7da --- /dev/null +++ b/tools/xl/xl_virtio_disk.c @@ -0,0 +1,46 @@ +/* + * Copyright (C) 2020 EPAM Systems Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU Lesser General Public License as published + * by the Free Software Foundation; version 2.1 only. with the special + * exception on linking described in file LICENSE. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU Lesser General Public License for more details. + */ + +#include <stdlib.h> + +#include <libxl.h> +#include <libxl_utils.h> +#include <libxlutil.h> + +#include "xl.h" +#include "xl_utils.h" +#include "xl_parse.h" + +int main_virtio_diskattach(int argc, char **argv) +{ + return 0; +} + +int main_virtio_disklist(int argc, char **argv) +{ + return 0; +} + +int main_virtio_diskdetach(int argc, char **argv) +{ + return 0; +} + +/* + * Local variables: + * mode: C + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */
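For illustration, a minimal guest config exercising this series might look like the sketch below. The backend domain name (DomD) and disk paths are taken from the example earlier in this patch, and the separate 'virtio' switch is the extra property the TODO above proposes to remove:

virtio = 1
vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]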