From patchwork Fri Jan 29 01:48:29 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054937
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper,
 Roger Pau Monné, Wei Liu, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V6 01/24] x86/ioreq: Prepare IOREQ feature for making it common
Date: Fri, 29 Jan 2021 03:48:29 +0200
Message-Id: <1611884932-1851-2-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

As a lot of x86 code can be re-used on Arm later on, this patch makes
some preparations in x86/hvm/ioreq.c before moving it to the common code.
This way we will get a verbatim copy for the code movement in a
subsequent patch.

This patch mostly introduces specific hooks to abstract arch-specific
material, taking into account the requirement to keep the "legacy"
mechanism of mapping magic pages for the IOREQ servers x86-specific and
not expose it to the common code.

These hooks are named according to the more consistent new naming scheme
right away (including dropping the "hvm" prefixes and infixes):
- IOREQ server functions should start with "ioreq_server_"
- IOREQ functions should start with "ioreq_"
Other functions will be renamed in subsequent patches.

Introduce the common ioreq.h right away and put the arch hook
declarations there. Also re-order the #include-s alphabetically.

This support is going to be used on Arm to be able to run device
emulators outside of the Xen hypervisor.
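To illustrate the hook split in isolation, here is a minimal compilable
sketch (the struct and printf stand-ins below are hypothetical, not Xen
code; the real split is in the diff below):

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical, pared-down stand-in for Xen's ioreq server type. */
    struct ioreq_server { bool enabled; };

    /* Arch hook: on x86 this would hide the "legacy" magic-page handling. */
    static void arch_ioreq_server_enable(struct ioreq_server *s)
    {
        printf("arch-specific enable work for %p\n", (void *)s);
    }

    /* Common half: generic state handling, delegating arch details to the hook. */
    static void ioreq_server_enable(struct ioreq_server *s)
    {
        if ( s->enabled )
            return;
        arch_ioreq_server_enable(s);
        s->enabled = true;
    }

    int main(void)
    {
        struct ioreq_server s = { .enabled = false };
        ioreq_server_enable(&s);
        return 0;
    }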
Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Alex Bennée
Reviewed-by: Julien Grall
Reviewed-by: Paul Durrant
Acked-by: Jan Beulich
CC: Julien Grall
[On Arm only]
Tested-by: Wei Chen
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch, was split from:
     "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common"
   - fold the check of p->type into hvm_get_ioreq_server_range_type()
     and make it return success/failure
   - remove relocate_portio_handler() call from arch_hvm_ioreq_destroy()
     in arch/x86/hvm/ioreq.c
   - introduce arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()

Changes V1 -> V2:
   - update patch description
   - make arch functions inline and put them into the arch header
     to achieve a true rename by the subsequent patch
   - return void in arch_hvm_destroy_ioreq_server()
   - return bool in arch_hvm_ioreq_destroy()
   - bring relocate_portio_handler() back to arch_hvm_ioreq_destroy()
   - rename IOREQ_IO* to IOREQ_STATUS*
   - remove *handle* from arch_handle_hvm_io_completion()
   - re-order #include-s alphabetically
   - rename hvm_get_ioreq_server_range_type() to hvm_ioreq_server_get_type_addr()
     and add "const" to several arguments

Changes V2 -> V3:
   - update patch description
   - name new arch hooks according to the new naming scheme
   - don't make arch hooks inline, move them to ioreq.c
   - make get_ioreq_server() local again
   - rework the whole patch taking into account that the "legacy"
     interface should remain x86 specific (additional arch hooks, etc)
   - update the code to be able to use hvm_map_mem_type_to_ioreq_server()
     in the common code (an extra arch hook, etc)
   - don't include from the arch header
   - add "arch" prefix to hvm_ioreq_server_get_type_addr()
   - move the IOREQ_STATUS_* #define-s introduction to a separate patch
   - move HANDLE_BUFIOREQ to the arch header
   - just return relocate_portio_handler() from arch_ioreq_server_destroy_all()
   - misc adjustments proposed by Jan (adding const, unsigned int instead
     of uint32_t)

Changes V3 -> V4:
   - add Alex's R-b
   - update patch description
   - make arch_ioreq_server_get_type_addr return bool
   - drop #include
   - use two arch hooks in hvm_map_mem_type_to_ioreq_server()
     to avoid calling p2m_change_entry_type_global() with the lock held

Changes V4 -> V5:
   - add Julien's and Paul's R-b
   - update patch description
   - remove a single-use variable in arch_ioreq_server_map_mem_type_completed()
   - put multiple function parameters on a single line in the header
     where possible
   - introduce the common ioreq.h right away and put the arch hook
     declarations there instead of doing that in patch #4

Changes V5 -> V6:
   - add Jan's A-b
---
---
 xen/arch/x86/hvm/ioreq.c | 175 +++++++++++++++++++++++++++++++----------------
 xen/include/xen/ioreq.h  |  54 +++++++++++++++
 2 files changed, 169 insertions(+), 60 deletions(-)
 create mode 100644 xen/include/xen/ioreq.h

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 1cc27df..3c3c173 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -16,16 +16,16 @@
  * this program; If not, see .
  */

-#include 
+#include 
+#include 
 #include 
+#include 
+#include 
 #include 
-#include 
 #include 
-#include 
 #include 
-#include 
-#include 
-#include 
+#include 

 #include 
 #include 
@@ -170,6 +170,29 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
     return true;
 }

+bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion)
+{
+    switch ( io_completion )
+    {
+    case HVMIO_realmode_completion:
+    {
+        struct hvm_emulate_ctxt ctxt;
+
+        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
+        vmx_realmode_emulate_one(&ctxt);
+        hvm_emulate_writeback(&ctxt);
+
+        break;
+    }
+
+    default:
+        ASSERT_UNREACHABLE();
+        break;
+    }
+
+    return true;
+}
+
 bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
@@ -209,19 +232,8 @@ bool handle_hvm_io_completion(struct vcpu *v)
         return handle_pio(vio->io_req.addr, vio->io_req.size,
                           vio->io_req.dir);

-    case HVMIO_realmode_completion:
-    {
-        struct hvm_emulate_ctxt ctxt;
-
-        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
-        vmx_realmode_emulate_one(&ctxt);
-        hvm_emulate_writeback(&ctxt);
-
-        break;
-    }
     default:
-        ASSERT_UNREACHABLE();
-        break;
+        return arch_vcpu_ioreq_completion(io_completion);
     }

     return true;
@@ -477,9 +489,6 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
     }
 }

-#define HANDLE_BUFIOREQ(s) \
-    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
-
 static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
                                      struct vcpu *v)
 {
@@ -586,7 +595,7 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }

-static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
+int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
 {
     int rc;

@@ -601,7 +610,7 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
     return rc;
 }

-static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
+void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
 {
     hvm_unmap_ioreq_gfn(s, true);
     hvm_unmap_ioreq_gfn(s, false);
@@ -674,6 +683,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
     return rc;
 }

+void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
+{
+    hvm_remove_ioreq_gfn(s, false);
+    hvm_remove_ioreq_gfn(s, true);
+}
+
 static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
 {
     struct hvm_ioreq_vcpu *sv;
@@ -683,8 +698,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     if ( s->enabled )
         goto done;

-    hvm_remove_ioreq_gfn(s, false);
-    hvm_remove_ioreq_gfn(s, true);
+    arch_ioreq_server_enable(s);

     s->enabled = true;

@@ -697,6 +711,12 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }

+void arch_ioreq_server_disable(struct hvm_ioreq_server *s)
+{
+    hvm_add_ioreq_gfn(s, true);
+    hvm_add_ioreq_gfn(s, false);
+}
+
 static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
 {
     spin_lock(&s->lock);
@@ -704,8 +724,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
     if ( !s->enabled )
         goto done;

-    hvm_add_ioreq_gfn(s, true);
-    hvm_add_ioreq_gfn(s, false);
+    arch_ioreq_server_disable(s);

     s->enabled = false;

@@ -750,7 +769,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,

  fail_add:
     hvm_ioreq_server_remove_all_vcpus(s);
-    hvm_ioreq_server_unmap_pages(s);
+    arch_ioreq_server_unmap_pages(s);

     hvm_ioreq_server_free_rangesets(s);

@@ -764,7 +783,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
     hvm_ioreq_server_remove_all_vcpus(s);

     /*
-     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
+     * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and
      * hvm_ioreq_server_free_pages() in that order.
      * This is because the former will do nothing if the pages
      * are not mapped, leaving the page to be freed by the latter.
@@ -772,7 +791,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
      * the page_info pointer to NULL, meaning the latter will do
      * nothing.
      */
-    hvm_ioreq_server_unmap_pages(s);
+    arch_ioreq_server_unmap_pages(s);
     hvm_ioreq_server_free_pages(s);

     hvm_ioreq_server_free_rangesets(s);
@@ -836,6 +855,12 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     return rc;
 }

+/* Called when target domain is paused */
+void arch_ioreq_server_destroy(struct hvm_ioreq_server *s)
+{
+    p2m_set_ioreq_server(s->target, 0, s);
+}
+
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 {
     struct hvm_ioreq_server *s;
@@ -855,7 +880,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)

     domain_pause(d);

-    p2m_set_ioreq_server(d, 0, s);
+    arch_ioreq_server_destroy(s);

     hvm_ioreq_server_disable(s);

@@ -900,7 +925,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,

     if ( ioreq_gfn || bufioreq_gfn )
     {
-        rc = hvm_ioreq_server_map_pages(s);
+        rc = arch_ioreq_server_map_pages(s);
         if ( rc )
             goto out;
     }
@@ -1080,6 +1105,22 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     return rc;
 }

+/* Called with ioreq_server lock held */
+int arch_ioreq_server_map_mem_type(struct domain *d,
+                                   struct hvm_ioreq_server *s,
+                                   uint32_t flags)
+{
+    return p2m_set_ioreq_server(d, flags, s);
+}
+
+void arch_ioreq_server_map_mem_type_completed(struct domain *d,
+                                              struct hvm_ioreq_server *s,
+                                              uint32_t flags)
+{
+    if ( flags == 0 && read_atomic(&p2m_get_hostp2m(d)->ioreq.entry_count) )
+        p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
+}
+
 /*
  * Map or unmap an ioreq server to specific memory type. For now, only
  * HVMMEM_ioreq_server is supported, and in the future new types can be
@@ -1112,18 +1153,13 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( s->emulator != current->domain )
         goto out;

-    rc = p2m_set_ioreq_server(d, flags, s);
+    rc = arch_ioreq_server_map_mem_type(d, s, flags);

  out:
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);

-    if ( rc == 0 && flags == 0 )
-    {
-        struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-        if ( read_atomic(&p2m->ioreq.entry_count) )
-            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
-    }
+    if ( rc == 0 )
+        arch_ioreq_server_map_mem_type_completed(d, s, flags);

     return rc;
 }
@@ -1210,12 +1246,17 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }

+bool arch_ioreq_server_destroy_all(struct domain *d)
+{
+    return relocate_portio_handler(d, 0xcf8, 0xcf8, 4);
+}
+
 void hvm_destroy_all_ioreq_servers(struct domain *d)
 {
     struct hvm_ioreq_server *s;
     unsigned int id;

-    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
+    if ( !arch_ioreq_server_destroy_all(d) )
         return;

     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -1239,33 +1280,28 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }

-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p)
+bool arch_ioreq_server_get_type_addr(const struct domain *d,
+                                     const ioreq_t *p,
+                                     uint8_t *type,
+                                     uint64_t *addr)
 {
-    struct hvm_ioreq_server *s;
-    uint32_t cf8;
-    uint8_t type;
-    uint64_t addr;
-    unsigned int id;
+    unsigned int cf8 = d->arch.hvm.pci_cf8;

     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
-        return NULL;
-
-    cf8 = d->arch.hvm.pci_cf8;
+        return false;

     if ( p->type == IOREQ_TYPE_PIO &&
          (p->addr & ~3) == 0xcfc &&
          CF8_ENABLED(cf8) )
     {
-        uint32_t x86_fam;
+        unsigned int x86_fam, reg;
         pci_sbdf_t sbdf;
-        unsigned int reg;

         reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);

         /* PCI config data cycle */
-        type = XEN_DMOP_IO_RANGE_PCI;
-        addr = ((uint64_t)sbdf.sbdf << 32) | reg;
+        *type = XEN_DMOP_IO_RANGE_PCI;
+        *addr = ((uint64_t)sbdf.sbdf << 32) | reg;
         /* AMD extended configuration space access? */
         if ( CF8_ADDR_HI(cf8) &&
              d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
@@ -1277,16 +1313,30 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,

             if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
                  (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
-                addr |= CF8_ADDR_HI(cf8);
+                *addr |= CF8_ADDR_HI(cf8);
         }
     }
     else
     {
-        type = (p->type == IOREQ_TYPE_PIO) ?
-                XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
-        addr = p->addr;
+        *type = (p->type == IOREQ_TYPE_PIO) ?
+                 XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
+        *addr = p->addr;
     }

+    return true;
+}
+
+struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                                 ioreq_t *p)
+{
+    struct hvm_ioreq_server *s;
+    uint8_t type;
+    uint64_t addr;
+    unsigned int id;
+
+    if ( !arch_ioreq_server_get_type_addr(d, p, &type, &addr) )
+        return NULL;
+
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct rangeset *r;
@@ -1515,11 +1565,16 @@ static int hvm_access_cf8(
     return X86EMUL_UNHANDLEABLE;
 }

+void arch_ioreq_domain_init(struct domain *d)
+{
+    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
+}
+
 void hvm_ioreq_init(struct domain *d)
 {
     spin_lock_init(&d->arch.hvm.ioreq_server.lock);

-    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
+    arch_ioreq_domain_init(d);
 }

 /*
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
new file mode 100644
index 0000000..d0980c5
--- /dev/null
+++ b/xen/include/xen/ioreq.h
@@ -0,0 +1,54 @@
+/*
+ * ioreq.h: Hardware virtual machine assist interface definitions.
+ *
+ * Copyright (c) 2016 Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see .
+ */
+
+#ifndef __XEN_IOREQ_H__
+#define __XEN_IOREQ_H__
+
+#include 
+
+#define HANDLE_BUFIOREQ(s) \
+    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
+
+bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
+int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
+void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
+void arch_ioreq_server_enable(struct hvm_ioreq_server *s);
+void arch_ioreq_server_disable(struct hvm_ioreq_server *s);
+void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);
+int arch_ioreq_server_map_mem_type(struct domain *d,
+                                   struct hvm_ioreq_server *s,
+                                   uint32_t flags);
+void arch_ioreq_server_map_mem_type_completed(struct domain *d,
+                                              struct hvm_ioreq_server *s,
+                                              uint32_t flags);
+bool arch_ioreq_server_destroy_all(struct domain *d);
+bool arch_ioreq_server_get_type_addr(const struct domain *d, const ioreq_t *p,
+                                     uint8_t *type, uint64_t *addr);
+void arch_ioreq_domain_init(struct domain *d);
+
+#endif /* __XEN_IOREQ_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */

From patchwork Fri Jan 29 01:48:30 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054931
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper,
 Roger Pau Monné, Wei Liu, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V6 02/24] x86/ioreq: Add IOREQ_STATUS_* #define-s and update code for moving
Date: Fri, 29 Jan 2021 03:48:30 +0200
Message-Id: <1611884932-1851-3-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch continues the preparation of x86/hvm/ioreq.c before moving it
to the common code. Add IOREQ_STATUS_* #define-s and update the
candidates for moving, since X86EMUL_* shouldn't be exposed to the common
code in that form.

This support is going to be used on Arm to be able to run device
emulators outside of the Xen hypervisor.

Signed-off-by: Oleksandr Tyshchenko
Acked-by: Jan Beulich
Reviewed-by: Alex Bennée
Reviewed-by: Julien Grall
Reviewed-by: Paul Durrant
CC: Julien Grall
[On Arm only]
Tested-by: Wei Chen
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V2 -> V3:
   - new patch, was split from
     "[PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it common"

Changes V3 -> V4:
   - add Alex's R-b and Jan's A-b
   - add a comment above IOREQ_STATUS_* #define-s

Changes V4 -> V5:
   - rebase
   - add Julien's and Paul's R-b

Changes V5 -> V6:
   - no changes
---
---
 xen/arch/x86/hvm/ioreq.c        | 16 ++++++++--------
 xen/include/asm-x86/hvm/ioreq.h |  5 +++++
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 3c3c173..27a4a6f 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1401,7 +1401,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     pg = iorp->va;

     if ( !pg )
-        return X86EMUL_UNHANDLEABLE;
+        return IOREQ_STATUS_UNHANDLED;

     /*
      * Return 0 for the cases we can't deal with:
@@ -1431,7 +1431,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
         break;
     default:
         gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
-        return X86EMUL_UNHANDLEABLE;
+        return IOREQ_STATUS_UNHANDLED;
     }

     spin_lock(&s->bufioreq_lock);
@@ -1441,7 +1441,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     {
         /* The queue is full: send the iopacket through the normal path. */
         spin_unlock(&s->bufioreq_lock);
-        return X86EMUL_UNHANDLEABLE;
+        return IOREQ_STATUS_UNHANDLED;
     }

     pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
@@ -1472,7 +1472,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
         notify_via_xen_event_channel(d, s->bufioreq_evtchn);
     spin_unlock(&s->bufioreq_lock);

-    return X86EMUL_OKAY;
+    return IOREQ_STATUS_HANDLED;
 }

 int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
@@ -1488,7 +1488,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
         return hvm_send_buffered_ioreq(s, proto_p);

     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
-        return X86EMUL_RETRY;
+        return IOREQ_STATUS_RETRY;

     list_for_each_entry ( sv,
                           &s->ioreq_vcpu_list,
@@ -1528,11 +1528,11 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
             notify_via_xen_event_channel(d, port);

             sv->pending = true;
-            return X86EMUL_RETRY;
+            return IOREQ_STATUS_RETRY;
         }
     }

-    return X86EMUL_UNHANDLEABLE;
+    return IOREQ_STATUS_UNHANDLED;
 }

 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
@@ -1546,7 +1546,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
         if ( !s->enabled )
             continue;

-        if ( hvm_send_ioreq(s, p, buffered) == X86EMUL_UNHANDLEABLE )
+        if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED )
             failed++;
     }

diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e2588e9..df0c292 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -55,6 +55,11 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);

 void hvm_ioreq_init(struct domain *d);

+/* This correlation must not be altered */
+#define IOREQ_STATUS_HANDLED   X86EMUL_OKAY
+#define IOREQ_STATUS_UNHANDLED X86EMUL_UNHANDLEABLE
+#define IOREQ_STATUS_RETRY     X86EMUL_RETRY
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */

 /*
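For illustration, a standalone sketch of the status-aliasing pattern the
patch above adds (the enum values below are invented stand-ins for the
real X86EMUL_* constants; only the aliasing pattern matches the patch):

    #include <stdio.h>

    /* Hypothetical arch-specific status codes, standing in for X86EMUL_*. */
    enum { X86EMUL_OKAY = 0, X86EMUL_UNHANDLEABLE = 1, X86EMUL_RETRY = 2 };

    /*
     * Generic aliases, mirroring the patch: common code returns and compares
     * IOREQ_STATUS_* while each arch maps them onto its native constants.
     */
    #define IOREQ_STATUS_HANDLED   X86EMUL_OKAY
    #define IOREQ_STATUS_UNHANDLED X86EMUL_UNHANDLEABLE
    #define IOREQ_STATUS_RETRY     X86EMUL_RETRY

    /* A common-code style helper: no X86EMUL_* names appear here. */
    static const char *ioreq_status_name(int rc)
    {
        switch ( rc )
        {
        case IOREQ_STATUS_HANDLED: return "handled";
        case IOREQ_STATUS_RETRY:   return "retry";
        default:                   return "unhandled";
        }
    }

    int main(void)
    {
        printf("%s\n", ioreq_status_name(IOREQ_STATUS_RETRY));
        return 0;
    }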
From patchwork Fri Jan 29 01:48:31 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054933

From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper,
 Roger Pau Monné, Wei Liu, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V6 03/24] x86/ioreq: Provide out-of-line wrapper for the handle_mmio()
Date: Fri, 29 Jan 2021 03:48:31 +0200
Message-Id: <1611884932-1851-4-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

IOREQ is about to become a common feature and Arm will have its own
implementation. But the name of the function handle_mmio() is pretty
generic and can be confusing on Arm (we already have a
try_handle_mmio()). In order not to rename the function (which is used
for a varying set of purposes on x86) globally, and to get a
non-confusing variant on Arm, provide a wrapper
arch_ioreq_complete_mmio() to be used in common and Arm code.

Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Jan Beulich
Reviewed-by: Alex Bennée
Reviewed-by: Julien Grall
Reviewed-by: Paul Durrant
CC: Julien Grall
[On Arm only]
Tested-by: Wei Chen
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - remove "handle"
   - add Jan's A-b

Changes V2 -> V3:
   - remove Jan's A-b
   - update patch subject/description
   - use out-of-line function instead of #define
   - put earlier in the series to avoid breakage

Changes V3 -> V4:
   - add Jan's R-b
   - rename ioreq_complete_mmio() to arch_ioreq_complete_mmio()

Changes V4 -> V5:
   - rebase
   - add Alex's, Julien's and Paul's R-b

Changes V5 -> V6:
   - no changes
---
---
 xen/arch/x86/hvm/ioreq.c | 7 ++++++-
 xen/include/xen/ioreq.h  | 1 +
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 27a4a6f..30e8724 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -36,6 +36,11 @@
 #include 
 #include 

+bool arch_ioreq_complete_mmio(void)
+{
+    return handle_mmio();
+}
+
 static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct hvm_ioreq_server *s)
 {
@@ -226,7 +231,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
         break;

     case HVMIO_mmio_completion:
-        return handle_mmio();
+        return arch_ioreq_complete_mmio();

     case HVMIO_pio_completion:
         return handle_pio(vio->io_req.addr, vio->io_req.size,
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index d0980c5..b95d3ef 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -24,6 +24,7 @@
 #define HANDLE_BUFIOREQ(s) \
     ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)

+bool arch_ioreq_complete_mmio(void);
 bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
 int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
 void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
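The wrapper pattern above, reduced to a standalone compilable toy (the
handle_mmio() body here is a made-up stand-in, not the x86 emulator):

    #include <stdbool.h>

    /* Stand-in for x86's generically-named helper (hypothetical body). */
    static bool handle_mmio(void)
    {
        return true;
    }

    /*
     * Out-of-line wrapper with an unambiguous, arch-prefixed name, as in
     * the patch: common code calls this, and each arch supplies its body.
     */
    bool arch_ioreq_complete_mmio(void)
    {
        return handle_mmio();
    }

    int main(void)
    {
        return arch_ioreq_complete_mmio() ? 0 : 1;
    }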
From patchwork Fri Jan 29 01:48:32 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054945

From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Andrew Cooper, George Dunlap, Ian Jackson,
 Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu, Roger Pau Monné,
 Paul Durrant, Jun Nakajima, Kevin Tian, Tim Deegan, Julien Grall
Subject: [PATCH V6 04/24] xen/ioreq: Make x86's IOREQ feature common
Date: Fri, 29 Jan 2021 03:48:32 +0200
Message-Id: <1611884932-1851-5-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

As a lot of x86 code can be re-used on Arm later on, this patch moves the
previously prepared IOREQ support to the common code (the code movement
is a verbatim copy). The "legacy" mechanism of mapping magic pages for
the IOREQ servers remains x86 specific and is not exposed to the common
code.

The common IOREQ feature is supposed to be built with the IOREQ_SERVER
option enabled, which is selected by x86's config HVM for now.

In order to avoid having a gigantic patch here, the subsequent patches
will update the remaining bits in the common code step by step:
- Make IOREQ related structs/materials common
- Drop the "hvm" prefixes and infixes
- Remove the layering violation by moving corresponding fields
  out of *arch.hvm* or abstracting away accesses to them

Introduce an asm/ioreq.h wrapper, to be included by the common ioreq.h
instead of asm/hvm/ioreq.h, to avoid HVM-isms in the common code. Also
include <public/hvm/dm_op.h>, which will be needed on Arm, to avoid
touching the common code again when introducing Arm-specific bits.

This support is going to be used on Arm to be able to run device
emulators outside of the Xen hypervisor.

Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Paul Durrant
Acked-by: Jan Beulich
CC: Julien Grall
[On Arm only]
Tested-by: Wei Chen
---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
Please note, this patch depends on the following which is on review:
https://patchwork.kernel.org/patch/11816689/
The effort (to get it upstreamed) was paused because of the security
issue around that code (XSA-348).
***
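A sketch of what the new asm-x86/ioreq.h wrapper plausibly looks like,
inferred only from the description and changelog in this mail (the exact
contents are an assumption, not copied from the tree):

    /*
     * Hypothetical reconstruction of xen/include/asm-x86/ioreq.h: a thin
     * wrapper so common code can include asm/ioreq.h without pulling
     * HVM-specific headers in unconditionally.
     */
    #ifndef __ASM_X86_IOREQ_H__
    #define __ASM_X86_IOREQ_H__

    #ifdef CONFIG_HVM
    #include <asm/hvm/ioreq.h>   /* x86's HVM-specific IOREQ bits */
    #endif

    #endif /* __ASM_X86_IOREQ_H__ */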
Changes RFC -> V1:
   - was split into three patches:
     - x86/ioreq: Prepare IOREQ feature for making it common
     - xen/ioreq: Make x86's IOREQ feature common
     - xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
   - update MAINTAINERS file
   - do not use a separate subdir for the IOREQ stuff, move it to:
     - xen/common/ioreq.c
     - xen/include/xen/ioreq.h
   - update x86's files to include xen/ioreq.h
   - remove unneeded headers in arch/x86/hvm/ioreq.c
   - re-order the headers alphabetically in common/ioreq.c
   - update common/ioreq.c according to the newly introduced arch functions:
     arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()

Changes V1 -> V2:
   - update patch description
   - make everything needed in the previous patch to achieve
     a true rename here
   - don't include unnecessary headers from asm-x86/hvm/ioreq.h
     and xen/ioreq.h
   - use __XEN_IOREQ_H__ instead of __IOREQ_H__
   - move get_ioreq_server() to common/ioreq.c

Changes V2 -> V3:
   - update patch description
   - make everything needed in the previous patch to not expose
     the "legacy" interface to the common code here
   - update the patch according to the decision that the "legacy"
     interface is x86 specific
   - include in common ioreq.c

Changes V3 -> V4:
   - rebase
   - don't include from the arch header
   - move all arch hook declarations to the common header

Changes V4 -> V5:
   - rebase
   - introduce asm-x86/ioreq.h wrapper:
     - update MAINTAINERS file
     - include asm/ioreq.h instead of asm/hvm/ioreq.h from common ioreq.c
     - include public/hvm/dm_op.h from common ioreq.h
   - add Paul's R-b

Changes V5 -> V6:
   - add Jan's A-b
   - wrap header inclusion by #ifdef CONFIG_HVM in asm-x86/ioreq.h
---
---
 MAINTAINERS                     |    9 +-
 xen/arch/x86/Kconfig            |    1 +
 xen/arch/x86/hvm/dm.c           |    2 +-
 xen/arch/x86/hvm/emulate.c      |    2 +-
 xen/arch/x86/hvm/hvm.c          |    2 +-
 xen/arch/x86/hvm/io.c           |    2 +-
 xen/arch/x86/hvm/ioreq.c        | 1316 ++-------------------------------------
 xen/arch/x86/hvm/stdvga.c       |    2 +-
 xen/arch/x86/hvm/vmx/vvmx.c     |    3 +-
 xen/arch/x86/mm.c               |    2 +-
 xen/arch/x86/mm/shadow/common.c |    2 +-
 xen/common/Kconfig              |    3 +
 xen/common/Makefile             |    1 +
 xen/common/ioreq.c              | 1290 ++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/ioreq.h |   36 --
 xen/include/asm-x86/ioreq.h     |   39 ++
 xen/include/xen/ioreq.h         |   38 ++
 17 files changed, 1424 insertions(+), 1326 deletions(-)
 create mode 100644 xen/common/ioreq.c
 create mode 100644 xen/include/asm-x86/ioreq.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 8ce2ad5..3a5c481 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -339,6 +339,13 @@ X: xen/drivers/passthrough/vtd/
 X: xen/drivers/passthrough/device_tree.c
 F: xen/include/xen/iommu.h

+I/O EMULATION (IOREQ)
+M: Paul Durrant
+S: Supported
+F: xen/common/ioreq.c
+F: xen/include/xen/ioreq.h
+F: xen/include/public/hvm/ioreq.h
+
 KCONFIG
 M: Doug Goldstein
 S: Supported
@@ -555,7 +562,7 @@ F: xen/arch/x86/hvm/ioreq.c
 F: xen/include/asm-x86/hvm/emulate.h
 F: xen/include/asm-x86/hvm/io.h
 F: xen/include/asm-x86/hvm/ioreq.h
-F: xen/include/public/hvm/ioreq.h
+F: xen/include/asm-x86/ioreq.h

 X86 MEMORY MANAGEMENT
 M: Jan Beulich
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 78f351f..ea9a9ea 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -92,6 +92,7 @@ config PV_LINEAR_PT

 config HVM
 	def_bool !PV_SHIM_EXCLUSIVE
+	select IOREQ_SERVER
 	prompt "HVM support"
 	---help---
 	  Interfaces to support HVM domains.  HVM domains require hardware
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 71f5ca4..d3e2a9e 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -17,12 +17,12 @@
 #include 
 #include 
 #include 
+#include 

 #include 
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 24cf85f..60ca465 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -10,6 +10,7 @@
  */

 #include 
+#include 
 #include 
 #include 
 #include 
@@ -20,7 +21,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 54e32e4..bc96947 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -20,6 +20,7 @@

 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -64,7 +65,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 3e09d9b..11e007d 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -19,6 +19,7 @@
  */

 #include 
+#include 
 #include 
 #include 
 #include 
@@ -35,7 +36,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 30e8724..666d695 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -30,7 +30,6 @@
 #include 
 #include 
-#include 

 #include 
 #include 
@@ -41,140 +40,6 @@ bool arch_ioreq_complete_mmio(void)
     return handle_mmio();
 }

-static void set_ioreq_server(struct domain *d, unsigned int id,
-                             struct hvm_ioreq_server *s)
-{
-    ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
-
-    d->arch.hvm.ioreq_server.server[id] = s;
-}
-
-#define GET_IOREQ_SERVER(d, id) \
-    (d)->arch.hvm.ioreq_server.server[id]
-
-static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
-                                                 unsigned int id)
-{
-    if ( id >= MAX_NR_IOREQ_SERVERS )
-        return NULL;
-
-    return GET_IOREQ_SERVER(d, id);
-}
-
-/*
- * Iterate over all possible ioreq servers.
- *
- * NOTE: The iteration is backwards such that more recently created
- *       ioreq servers are favoured in hvm_select_ioreq_server().
- *       This is a semantic that previously existed when ioreq servers
- *       were held in a linked list.
- */
-#define FOR_EACH_IOREQ_SERVER(d, id, s) \
-    for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \
-        if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
-            continue; \
-        else
-
-static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
-{
-    shared_iopage_t *p = s->ioreq.va;
-
-    ASSERT((v == current) || !vcpu_runnable(v));
-    ASSERT(p != NULL);
-
-    return &p->vcpu_ioreq[v->vcpu_id];
-}
-
-static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
-                                               struct hvm_ioreq_server **srvp)
-{
-    struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
-    unsigned int id;
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        struct hvm_ioreq_vcpu *sv;
-
-        list_for_each_entry ( sv,
-                              &s->ioreq_vcpu_list,
-                              list_entry )
-        {
-            if ( sv->vcpu == v && sv->pending )
-            {
-                if ( srvp )
-                    *srvp = s;
-                return sv;
-            }
-        }
-    }
-
-    return NULL;
-}
-
-bool hvm_io_pending(struct vcpu *v)
-{
-    return get_pending_vcpu(v, NULL);
-}
-
-static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
-{
-    unsigned int prev_state = STATE_IOREQ_NONE;
-    unsigned int state = p->state;
-    uint64_t data = ~0;
-
-    smp_rmb();
-
-    /*
-     * The only reason we should see this condition be false is when an
-     * emulator dying races with I/O being requested.
-     */
-    while ( likely(state != STATE_IOREQ_NONE) )
-    {
-        if ( unlikely(state < prev_state) )
-        {
-            gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
-                     prev_state, state);
-            sv->pending = false;
-            domain_crash(sv->vcpu->domain);
-            return false; /* bail */
-        }
-
-        switch ( prev_state = state )
-        {
-        case STATE_IORESP_READY: /* IORESP_READY -> NONE */
-            p->state = STATE_IOREQ_NONE;
-            data = p->data;
-            break;
-
-        case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
-        case STATE_IOREQ_INPROCESS:
-            wait_on_xen_event_channel(sv->ioreq_evtchn,
-                                      ({ state = p->state;
-                                         smp_rmb();
-                                         state != prev_state; }));
-            continue;
-
-        default:
-            gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state);
-            sv->pending = false;
-            domain_crash(sv->vcpu->domain);
-            return false; /* bail */
-        }
-
-        break;
-    }
-
-    p = &sv->vcpu->arch.hvm.hvm_io.io_req;
-    if ( hvm_ioreq_needs_completion(p) )
-        p->data = data;
-
-    sv->pending = false;
-
-    return true;
-}
-
 bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion)
 {
     switch ( io_completion )
@@ -198,52 +63,6 @@ bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion)
     return true;
 }

-bool handle_hvm_io_completion(struct vcpu *v)
-{
-    struct domain *d = v->domain;
-    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
-    struct hvm_ioreq_server *s;
-    struct hvm_ioreq_vcpu *sv;
-    enum hvm_io_completion io_completion;
-
-    if ( has_vpci(d) && vpci_process_pending(v) )
-    {
-        raise_softirq(SCHEDULE_SOFTIRQ);
-        return false;
-    }
-
-    sv = get_pending_vcpu(v, &s);
-    if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
-        return false;
-
-    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
-        STATE_IORESP_READY : STATE_IOREQ_NONE;
-
-    msix_write_completion(v);
-    vcpu_end_shutdown_deferral(v);
-
-    io_completion = vio->io_completion;
-    vio->io_completion = HVMIO_no_completion;
-
-    switch ( io_completion )
-    {
-    case HVMIO_no_completion:
-        break;
-
-    case HVMIO_mmio_completion:
-        return arch_ioreq_complete_mmio();
-
-    case HVMIO_pio_completion:
-        return handle_pio(vio->io_req.addr, vio->io_req.size,
-                          vio->io_req.dir);
-
-    default:
-        return arch_vcpu_ioreq_completion(io_completion);
-    }
-
-    return true;
-}
-
 static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
 {
     struct domain *d = s->target;
@@ -360,93 +179,6 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }

-static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page;
-
-    if ( iorp->page )
-    {
-        /*
-         * If a guest frame has already been mapped (which may happen
-         * on demand if hvm_get_ioreq_server_info() is called), then
-         * allocating a page is not permitted.
-         */
-        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
-            return -EPERM;
-
-        return 0;
-    }
-
-    page = alloc_domheap_page(s->target, MEMF_no_refcount);
-
-    if ( !page )
-        return -ENOMEM;
-
-    if ( !get_page_and_type(page, s->target, PGT_writable_page) )
-    {
-        /*
-         * The domain can't possibly know about this page yet, so failure
-         * here is a clear indication of something fishy going on.
-         */
-        domain_crash(s->emulator);
-        return -ENODATA;
-    }
-
-    iorp->va = __map_domain_page_global(page);
-    if ( !iorp->va )
-        goto fail;
-
-    iorp->page = page;
-    clear_page(iorp->va);
-    return 0;
-
- fail:
-    put_page_alloc_ref(page);
-    put_page_and_type(page);
-
-    return -ENOMEM;
-}
-
-static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page = iorp->page;
-
-    if ( !page )
-        return;
-
-    iorp->page = NULL;
-
-    unmap_domain_page_global(iorp->va);
-    iorp->va = NULL;
-
-    put_page_alloc_ref(page);
-    put_page_and_type(page);
-}
-
-bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
-{
-    const struct hvm_ioreq_server *s;
-    unsigned int id;
-    bool found = false;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
-        {
-            found = true;
-            break;
-        }
-    }
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return found;
-}
-
 static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)

 {
@@ -481,125 +213,6 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }

-static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
-                                    struct hvm_ioreq_vcpu *sv)
-{
-    ASSERT(spin_is_locked(&s->lock));
-
-    if ( s->ioreq.va != NULL )
-    {
-        ioreq_t *p = get_ioreq(s, sv->vcpu);
-
-        p->vp_eport = sv->ioreq_evtchn;
-    }
-}
-
-static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
-                                     struct vcpu *v)
-{
-    struct hvm_ioreq_vcpu *sv;
-    int rc;
-
-    sv = xzalloc(struct hvm_ioreq_vcpu);
-
-    rc = -ENOMEM;
-    if ( !sv )
-        goto fail1;
-
-    spin_lock(&s->lock);
-
-    rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id,
-                                         s->emulator->domain_id, NULL);
-    if ( rc < 0 )
-        goto fail2;
-
-    sv->ioreq_evtchn = rc;
-
-    if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
-    {
-        rc = alloc_unbound_xen_event_channel(v->domain, 0,
-                                             s->emulator->domain_id, NULL);
-        if ( rc < 0 )
-            goto fail3;
-
-        s->bufioreq_evtchn = rc;
-    }
-
-    sv->vcpu = v;
-
-    list_add(&sv->list_entry, &s->ioreq_vcpu_list);
-
-    if ( s->enabled )
-        hvm_update_ioreq_evtchn(s, sv);
-
-    spin_unlock(&s->lock);
-    return 0;
-
- fail3:
-    free_xen_event_channel(v->domain, sv->ioreq_evtchn);
-
- fail2:
-    spin_unlock(&s->lock);
-    xfree(sv);
-
- fail1:
-    return rc;
-}
-
-static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
-                                         struct vcpu *v)
-{
-    struct hvm_ioreq_vcpu *sv;
-
-    spin_lock(&s->lock);
-
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-    {
-        if ( sv->vcpu != v )
-            continue;
-
-        list_del(&sv->list_entry);
-
-        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
-            free_xen_event_channel(v->domain, s->bufioreq_evtchn);
-
-        free_xen_event_channel(v->domain, sv->ioreq_evtchn);
-
-        xfree(sv);
-        break;
-    }
-
-    spin_unlock(&s->lock);
-}
-
-static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
-{
-    struct hvm_ioreq_vcpu *sv, *next;
-
-    spin_lock(&s->lock);
-
-    list_for_each_entry_safe ( sv,
-                               next,
-                               &s->ioreq_vcpu_list,
-                               list_entry )
-    {
-        struct vcpu *v = sv->vcpu;
-
-        list_del(&sv->list_entry);
-
-        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
-            free_xen_event_channel(v->domain, s->bufioreq_evtchn);
-
-        free_xen_event_channel(v->domain, sv->ioreq_evtchn);
-
-        xfree(sv);
-    }
-
-    spin_unlock(&s->lock);
-}
-
 int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
 {
     int rc;
@@ -621,688 +234,63 @@ void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
     hvm_unmap_ioreq_gfn(s, false);
 }

-static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
-{
-    int rc;
-
-    rc = hvm_alloc_ioreq_mfn(s, false);
-
-    if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
-        rc = hvm_alloc_ioreq_mfn(s, true);
-
-    if ( rc )
-        hvm_free_ioreq_mfn(s, false);
-
-    return rc;
-}
-
-static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
-{
-    hvm_free_ioreq_mfn(s, true);
-    hvm_free_ioreq_mfn(s, false);
-}
-
-static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
-{
-    unsigned int i;
-
-    for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
-        rangeset_destroy(s->range[i]);
-}
-
-static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
-                                            ioservid_t id)
-{
-    unsigned int i;
-    int rc;
-
-    for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
-    {
-        char *name;
-
-        rc = asprintf(&name, "ioreq_server %d %s", id,
-                      (i == XEN_DMOP_IO_RANGE_PORT) ? "port" :
-                      (i == XEN_DMOP_IO_RANGE_MEMORY) ? "memory" :
-                      (i == XEN_DMOP_IO_RANGE_PCI) ? "pci" :
-                      "");
-        if ( rc )
-            goto fail;
-
-        s->range[i] = rangeset_new(s->target, name,
-                                   RANGESETF_prettyprint_hex);
-
-        xfree(name);
-
-        rc = -ENOMEM;
-        if ( !s->range[i] )
-            goto fail;
-
-        rangeset_limit(s->range[i], MAX_NR_IO_RANGES);
-    }
-
-    return 0;
-
- fail:
-    hvm_ioreq_server_free_rangesets(s);
-
-    return rc;
-}
-
 void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
 {
     hvm_remove_ioreq_gfn(s, false);
     hvm_remove_ioreq_gfn(s, true);
 }

-static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
-{
-    struct hvm_ioreq_vcpu *sv;
-
-    spin_lock(&s->lock);
-
-    if ( s->enabled )
-        goto done;
-
-    arch_ioreq_server_enable(s);
-
-    s->enabled = true;
-
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-        hvm_update_ioreq_evtchn(s, sv);
-
-  done:
-    spin_unlock(&s->lock);
-}
-
 void arch_ioreq_server_disable(struct hvm_ioreq_server *s)
 {
     hvm_add_ioreq_gfn(s, true);
     hvm_add_ioreq_gfn(s, false);
 }

-static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
+/* Called when target domain is paused */
+void arch_ioreq_server_destroy(struct hvm_ioreq_server *s)
 {
-    spin_lock(&s->lock);
-
-    if ( !s->enabled )
-        goto done;
-
-    arch_ioreq_server_disable(s);
+    p2m_set_ioreq_server(s->target, 0, s);
+}

-    s->enabled = false;
+/* Called with ioreq_server lock held */
+int arch_ioreq_server_map_mem_type(struct domain *d,
+                                   struct hvm_ioreq_server *s,
+                                   uint32_t flags)
+{
+    return p2m_set_ioreq_server(d, flags, s);
+}

- done:
-    spin_unlock(&s->lock);
+void arch_ioreq_server_map_mem_type_completed(struct domain *d,
+                                              struct hvm_ioreq_server *s,
+                                              uint32_t flags)
+{
+    if ( flags == 0 && read_atomic(&p2m_get_hostp2m(d)->ioreq.entry_count) )
+        p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
 }

-static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
-                                 struct domain *d, int bufioreq_handling,
-                                 ioservid_t id)
+bool arch_ioreq_server_destroy_all(struct domain *d)
 {
-    struct domain *currd = current->domain;
-    struct vcpu *v;
-    int rc;
+    return relocate_portio_handler(d, 0xcf8, 0xcf8, 4);
+}

-    s->target = d;
+bool arch_ioreq_server_get_type_addr(const struct domain *d,
+                                     const ioreq_t *p,
+                                     uint8_t *type,
+                                     uint64_t *addr)
+{
+    unsigned int cf8 = d->arch.hvm.pci_cf8;

-    get_knownalive_domain(currd);
-    s->emulator = currd;
+    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
+        return false;

-    spin_lock_init(&s->lock);
-    INIT_LIST_HEAD(&s->ioreq_vcpu_list);
-    spin_lock_init(&s->bufioreq_lock);
+    if ( p->type == IOREQ_TYPE_PIO &&
+         (p->addr & ~3) == 0xcfc &&
+         CF8_ENABLED(cf8) )
+    {
+        unsigned int x86_fam, reg;
+        pci_sbdf_t sbdf;

-    s->ioreq.gfn = INVALID_GFN;
-    s->bufioreq.gfn = INVALID_GFN;
-
-    rc = hvm_ioreq_server_alloc_rangesets(s, id);
-    if ( rc )
-        return rc;
-
-    s->bufioreq_handling = bufioreq_handling;
-
-    for_each_vcpu ( d, v )
-    {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
-        if ( rc )
-            goto fail_add;
-    }
-
-    return 0;
-
- fail_add:
-    hvm_ioreq_server_remove_all_vcpus(s);
-    arch_ioreq_server_unmap_pages(s);
-
-    hvm_ioreq_server_free_rangesets(s);
-
-    put_domain(s->emulator);
-    return rc;
-}
-
-static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
-{
-    ASSERT(!s->enabled);
-    hvm_ioreq_server_remove_all_vcpus(s);
-
-    /*
-     * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and
-     * hvm_ioreq_server_free_pages() in that order.
-     * This is because the former will do nothing if the pages
-     * are not mapped, leaving the page to be freed by the latter.
-     * However if the pages are mapped then the former will set
-     * the page_info pointer to NULL, meaning the latter will do
-     * nothing.
-     */
-    arch_ioreq_server_unmap_pages(s);
-    hvm_ioreq_server_free_pages(s);
-
-    hvm_ioreq_server_free_rangesets(s);
-
-    put_domain(s->emulator);
-}
-
-int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id)
-{
-    struct hvm_ioreq_server *s;
-    unsigned int i;
-    int rc;
-
-    if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
-        return -EINVAL;
-
-    s = xzalloc(struct hvm_ioreq_server);
-    if ( !s )
-        return -ENOMEM;
-
-    domain_pause(d);
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
-    {
-        if ( !GET_IOREQ_SERVER(d, i) )
-            break;
-    }
-
-    rc = -ENOSPC;
-    if ( i >= MAX_NR_IOREQ_SERVERS )
-        goto fail;
-
-    /*
-     * It is safe to call set_ioreq_server() prior to
-     * hvm_ioreq_server_init() since the target domain is paused.
-     */
-    set_ioreq_server(d, i, s);
-
-    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);
-    if ( rc )
-    {
-        set_ioreq_server(d, i, NULL);
-        goto fail;
-    }
-
-    if ( id )
-        *id = i;
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-    domain_unpause(d);
-
-    return 0;
-
- fail:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-    domain_unpause(d);
-
-    xfree(s);
-    return rc;
-}
-
-/* Called when target domain is paused */
-void arch_ioreq_server_destroy(struct hvm_ioreq_server *s)
-{
-    p2m_set_ioreq_server(s->target, 0, s);
-}
-
-int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
-{
-    struct hvm_ioreq_server *s;
-    int rc;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    domain_pause(d);
-
-    arch_ioreq_server_destroy(s);
-
-    hvm_ioreq_server_disable(s);
-
-    /*
-     * It is safe to call hvm_ioreq_server_deinit() prior to
-     * set_ioreq_server() since the target domain is paused.
- */ - hvm_ioreq_server_deinit(s); - set_ioreq_server(d, id, NULL); - - domain_unpause(d); - - xfree(s); - - rc = 0; - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t *bufioreq_port) -{ - struct hvm_ioreq_server *s; - int rc; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s = get_ioreq_server(d, id); - - rc = -ENOENT; - if ( !s ) - goto out; - - rc = -EPERM; - if ( s->emulator != current->domain ) - goto out; - - if ( ioreq_gfn || bufioreq_gfn ) - { - rc = arch_ioreq_server_map_pages(s); - if ( rc ) - goto out; - } - - if ( ioreq_gfn ) - *ioreq_gfn = gfn_x(s->ioreq.gfn); - - if ( HANDLE_BUFIOREQ(s) ) - { - if ( bufioreq_gfn ) - *bufioreq_gfn = gfn_x(s->bufioreq.gfn); - - if ( bufioreq_port ) - *bufioreq_port = s->bufioreq_evtchn; - } - - rc = 0; - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, - unsigned long idx, mfn_t *mfn) -{ - struct hvm_ioreq_server *s; - int rc; - - ASSERT(is_hvm_domain(d)); - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s = get_ioreq_server(d, id); - - rc = -ENOENT; - if ( !s ) - goto out; - - rc = -EPERM; - if ( s->emulator != current->domain ) - goto out; - - rc = hvm_ioreq_server_alloc_pages(s); - if ( rc ) - goto out; - - switch ( idx ) - { - case XENMEM_resource_ioreq_server_frame_bufioreq: - rc = -ENOENT; - if ( !HANDLE_BUFIOREQ(s) ) - goto out; - - *mfn = page_to_mfn(s->bufioreq.page); - rc = 0; - break; - - case XENMEM_resource_ioreq_server_frame_ioreq(0): - *mfn = page_to_mfn(s->ioreq.page); - rc = 0; - break; - - default: - rc = -EINVAL; - break; - } - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end) -{ - struct hvm_ioreq_server *s; - struct rangeset *r; - int rc; - - if ( start > end ) - return -EINVAL; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s = get_ioreq_server(d, id); - - rc = -ENOENT; - if ( !s ) - goto out; - - rc = -EPERM; - if ( s->emulator != current->domain ) - goto out; - - switch ( type ) - { - case XEN_DMOP_IO_RANGE_PORT: - case XEN_DMOP_IO_RANGE_MEMORY: - case XEN_DMOP_IO_RANGE_PCI: - r = s->range[type]; - break; - - default: - r = NULL; - break; - } - - rc = -EINVAL; - if ( !r ) - goto out; - - rc = -EEXIST; - if ( rangeset_overlaps_range(r, start, end) ) - goto out; - - rc = rangeset_add_range(r, start, end); - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end) -{ - struct hvm_ioreq_server *s; - struct rangeset *r; - int rc; - - if ( start > end ) - return -EINVAL; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s = get_ioreq_server(d, id); - - rc = -ENOENT; - if ( !s ) - goto out; - - rc = -EPERM; - if ( s->emulator != current->domain ) - goto out; - - switch ( type ) - { - case XEN_DMOP_IO_RANGE_PORT: - case XEN_DMOP_IO_RANGE_MEMORY: - case XEN_DMOP_IO_RANGE_PCI: - r = s->range[type]; - break; - - default: - r = NULL; - break; - } - - rc = -EINVAL; - if ( !r ) - goto out; - - rc = -ENOENT; - if ( !rangeset_contains_range(r, start, end) ) - goto out; - - rc = rangeset_remove_range(r, start, end); - - 
out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -/* Called with ioreq_server lock held */ -int arch_ioreq_server_map_mem_type(struct domain *d, - struct hvm_ioreq_server *s, - uint32_t flags) -{ - return p2m_set_ioreq_server(d, flags, s); -} - -void arch_ioreq_server_map_mem_type_completed(struct domain *d, - struct hvm_ioreq_server *s, - uint32_t flags) -{ - if ( flags == 0 && read_atomic(&p2m_get_hostp2m(d)->ioreq.entry_count) ) - p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw); -} - -/* - * Map or unmap an ioreq server to specific memory type. For now, only - * HVMMEM_ioreq_server is supported, and in the future new types can be - * introduced, e.g. HVMMEM_ioreq_serverX mapped to ioreq server X. And - * currently, only write operations are to be forwarded to an ioreq server. - * Support for the emulation of read operations can be added when an ioreq - * server has such requirement in the future. - */ -int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint32_t flags) -{ - struct hvm_ioreq_server *s; - int rc; - - if ( type != HVMMEM_ioreq_server ) - return -EINVAL; - - if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE ) - return -EINVAL; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s = get_ioreq_server(d, id); - - rc = -ENOENT; - if ( !s ) - goto out; - - rc = -EPERM; - if ( s->emulator != current->domain ) - goto out; - - rc = arch_ioreq_server_map_mem_type(d, s, flags); - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - if ( rc == 0 ) - arch_ioreq_server_map_mem_type_completed(d, s, flags); - - return rc; -} - -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled) -{ - struct hvm_ioreq_server *s; - int rc; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - s = get_ioreq_server(d, id); - - rc = -ENOENT; - if ( !s ) - goto out; - - rc = -EPERM; - if ( s->emulator != current->domain ) - goto out; - - domain_pause(d); - - if ( enabled ) - hvm_ioreq_server_enable(s); - else - hvm_ioreq_server_disable(s); - - domain_unpause(d); - - rc = 0; - - out: - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - return rc; -} - -int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) -{ - struct hvm_ioreq_server *s; - unsigned int id; - int rc; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - rc = hvm_ioreq_server_add_vcpu(s, v); - if ( rc ) - goto fail; - } - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return 0; - - fail: - while ( ++id != MAX_NR_IOREQ_SERVERS ) - { - s = GET_IOREQ_SERVER(d, id); - - if ( !s ) - continue; - - hvm_ioreq_server_remove_vcpu(s, v); - } - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); - - return rc; -} - -void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) -{ - struct hvm_ioreq_server *s; - unsigned int id; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - FOR_EACH_IOREQ_SERVER(d, id, s) - hvm_ioreq_server_remove_vcpu(s, v); - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); -} - -bool arch_ioreq_server_destroy_all(struct domain *d) -{ - return relocate_portio_handler(d, 0xcf8, 0xcf8, 4); -} - -void hvm_destroy_all_ioreq_servers(struct domain *d) -{ - struct hvm_ioreq_server *s; - unsigned int id; - - if ( !arch_ioreq_server_destroy_all(d) ) - return; - - spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); - - /* No need to domain_pause() as the domain is being torn down */ - - 
FOR_EACH_IOREQ_SERVER(d, id, s) - { - hvm_ioreq_server_disable(s); - - /* - * It is safe to call hvm_ioreq_server_deinit() prior to - * set_ioreq_server() since the target domain is being destroyed. - */ - hvm_ioreq_server_deinit(s); - set_ioreq_server(d, id, NULL); - - xfree(s); - } - - spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); -} - -bool arch_ioreq_server_get_type_addr(const struct domain *d, - const ioreq_t *p, - uint8_t *type, - uint64_t *addr) -{ - unsigned int cf8 = d->arch.hvm.pci_cf8; - - if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO ) - return false; - - if ( p->type == IOREQ_TYPE_PIO && - (p->addr & ~3) == 0xcfc && - CF8_ENABLED(cf8) ) - { - unsigned int x86_fam, reg; - pci_sbdf_t sbdf; - - reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf); + reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf); /* PCI config data cycle */ *type = XEN_DMOP_IO_RANGE_PCI; @@ -1331,233 +319,6 @@ bool arch_ioreq_server_get_type_addr(const struct domain *d, return true; } -struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p) -{ - struct hvm_ioreq_server *s; - uint8_t type; - uint64_t addr; - unsigned int id; - - if ( !arch_ioreq_server_get_type_addr(d, p, &type, &addr) ) - return NULL; - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - struct rangeset *r; - - if ( !s->enabled ) - continue; - - r = s->range[type]; - - switch ( type ) - { - unsigned long start, end; - - case XEN_DMOP_IO_RANGE_PORT: - start = addr; - end = start + p->size - 1; - if ( rangeset_contains_range(r, start, end) ) - return s; - - break; - - case XEN_DMOP_IO_RANGE_MEMORY: - start = hvm_mmio_first_byte(p); - end = hvm_mmio_last_byte(p); - - if ( rangeset_contains_range(r, start, end) ) - return s; - - break; - - case XEN_DMOP_IO_RANGE_PCI: - if ( rangeset_contains_singleton(r, addr >> 32) ) - { - p->type = IOREQ_TYPE_PCI_CONFIG; - p->addr = addr; - return s; - } - - break; - } - } - - return NULL; -} - -static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p) -{ - struct domain *d = current->domain; - struct hvm_ioreq_page *iorp; - buffered_iopage_t *pg; - buf_ioreq_t bp = { .data = p->data, - .addr = p->addr, - .type = p->type, - .dir = p->dir }; - /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */ - int qw = 0; - - /* Ensure buffered_iopage fits in a page */ - BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE); - - iorp = &s->bufioreq; - pg = iorp->va; - - if ( !pg ) - return IOREQ_STATUS_UNHANDLED; - - /* - * Return 0 for the cases we can't deal with: - * - 'addr' is only a 20-bit field, so we cannot address beyond 1MB - * - we cannot buffer accesses to guest memory buffers, as the guest - * may expect the memory buffer to be synchronously accessed - * - the count field is usually used with data_is_ptr and since we don't - * support data_is_ptr we do not waste space for the count field either - */ - if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) ) - return 0; - - switch ( p->size ) - { - case 1: - bp.size = 0; - break; - case 2: - bp.size = 1; - break; - case 4: - bp.size = 2; - break; - case 8: - bp.size = 3; - qw = 1; - break; - default: - gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size); - return IOREQ_STATUS_UNHANDLED; - } - - spin_lock(&s->bufioreq_lock); - - if ( (pg->ptrs.write_pointer - pg->ptrs.read_pointer) >= - (IOREQ_BUFFER_SLOT_NUM - qw) ) - { - /* The queue is full: send the iopacket through the normal path. 
*/ - spin_unlock(&s->bufioreq_lock); - return IOREQ_STATUS_UNHANDLED; - } - - pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp; - - if ( qw ) - { - bp.data = p->data >> 32; - pg->buf_ioreq[(pg->ptrs.write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = bp; - } - - /* Make the ioreq_t visible /before/ write_pointer. */ - smp_wmb(); - pg->ptrs.write_pointer += qw ? 2 : 1; - - /* Canonicalize read/write pointers to prevent their overflow. */ - while ( (s->bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC) && - qw++ < IOREQ_BUFFER_SLOT_NUM && - pg->ptrs.read_pointer >= IOREQ_BUFFER_SLOT_NUM ) - { - union bufioreq_pointers old = pg->ptrs, new; - unsigned int n = old.read_pointer / IOREQ_BUFFER_SLOT_NUM; - - new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM; - new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM; - cmpxchg(&pg->ptrs.full, old.full, new.full); - } - - notify_via_xen_event_channel(d, s->bufioreq_evtchn); - spin_unlock(&s->bufioreq_lock); - - return IOREQ_STATUS_HANDLED; -} - -int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, - bool buffered) -{ - struct vcpu *curr = current; - struct domain *d = curr->domain; - struct hvm_ioreq_vcpu *sv; - - ASSERT(s); - - if ( buffered ) - return hvm_send_buffered_ioreq(s, proto_p); - - if ( unlikely(!vcpu_start_shutdown_deferral(curr)) ) - return IOREQ_STATUS_RETRY; - - list_for_each_entry ( sv, - &s->ioreq_vcpu_list, - list_entry ) - { - if ( sv->vcpu == curr ) - { - evtchn_port_t port = sv->ioreq_evtchn; - ioreq_t *p = get_ioreq(s, curr); - - if ( unlikely(p->state != STATE_IOREQ_NONE) ) - { - gprintk(XENLOG_ERR, "device model set bad IO state %d\n", - p->state); - break; - } - - if ( unlikely(p->vp_eport != port) ) - { - gprintk(XENLOG_ERR, "device model set bad event channel %d\n", - p->vp_eport); - break; - } - - proto_p->state = STATE_IOREQ_NONE; - proto_p->vp_eport = port; - *p = *proto_p; - - prepare_wait_on_xen_event_channel(port); - - /* - * Following happens /after/ blocking and setting up ioreq - * contents. prepare_wait_on_xen_event_channel() is an implicit - * barrier. - */ - p->state = STATE_IOREQ_READY; - notify_via_xen_event_channel(d, port); - - sv->pending = true; - return IOREQ_STATUS_RETRY; - } - } - - return IOREQ_STATUS_UNHANDLED; -} - -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) -{ - struct domain *d = current->domain; - struct hvm_ioreq_server *s; - unsigned int id, failed = 0; - - FOR_EACH_IOREQ_SERVER(d, id, s) - { - if ( !s->enabled ) - continue; - - if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED ) - failed++; - } - - return failed; -} - static int hvm_access_cf8( int dir, unsigned int port, unsigned int bytes, uint32_t *val) { @@ -1575,13 +336,6 @@ void arch_ioreq_domain_init(struct domain *d) register_portio_handler(d, 0xcf8, 4, hvm_access_cf8); } -void hvm_ioreq_init(struct domain *d) -{ - spin_lock_init(&d->arch.hvm.ioreq_server.lock); - - arch_ioreq_domain_init(d); -} - /* * Local variables: * mode: C diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c index e267513..fd7cadb 100644 --- a/xen/arch/x86/hvm/stdvga.c +++ b/xen/arch/x86/hvm/stdvga.c @@ -27,10 +27,10 @@ * can have side effects. 
*/ +#include #include #include #include -#include #include #include #include diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c index 3a37e9e..0ddb6a4 100644 --- a/xen/arch/x86/hvm/vmx/vvmx.c +++ b/xen/arch/x86/hvm/vmx/vvmx.c @@ -19,10 +19,11 @@ * */ +#include + #include #include #include -#include #include #include #include diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c index 63e9fae..59eb5c8 100644 --- a/xen/arch/x86/mm.c +++ b/xen/arch/x86/mm.c @@ -100,6 +100,7 @@ */ #include +#include #include #include #include @@ -140,7 +141,6 @@ #include #include #include -#include #include #include diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c index df95c4d..4a88824 100644 --- a/xen/arch/x86/mm/shadow/common.c +++ b/xen/arch/x86/mm/shadow/common.c @@ -20,6 +20,7 @@ * along with this program; If not, see . */ +#include #include #include #include @@ -34,7 +35,6 @@ #include #include #include -#include #include #include "private.h" diff --git a/xen/common/Kconfig b/xen/common/Kconfig index b5c91a1..cf32a07 100644 --- a/xen/common/Kconfig +++ b/xen/common/Kconfig @@ -139,6 +139,9 @@ config HYPFS_CONFIG Disable this option in case you want to spare some memory or you want to hide the .config contents from dom0. +config IOREQ_SERVER + bool + config KEXEC bool "kexec support" default y diff --git a/xen/common/Makefile b/xen/common/Makefile index d751315..2e3c234 100644 --- a/xen/common/Makefile +++ b/xen/common/Makefile @@ -14,6 +14,7 @@ obj-$(CONFIG_GRANT_TABLE) += grant_table.o obj-y += guestcopy.o obj-bin-y += gunzip.init.o obj-$(CONFIG_HYPFS) += hypfs.o +obj-$(CONFIG_IOREQ_SERVER) += ioreq.o obj-y += irq.o obj-y += kernel.o obj-y += keyhandler.o diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c new file mode 100644 index 0000000..4e7d91b --- /dev/null +++ b/xen/common/ioreq.c @@ -0,0 +1,1290 @@ +/* + * ioreq.c: hardware virtual machine I/O emulation + * + * Copyright (c) 2016 Citrix Systems Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; If not, see . + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include +#include + +static void set_ioreq_server(struct domain *d, unsigned int id, + struct hvm_ioreq_server *s) +{ + ASSERT(id < MAX_NR_IOREQ_SERVERS); + ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]); + + d->arch.hvm.ioreq_server.server[id] = s; +} + +#define GET_IOREQ_SERVER(d, id) \ + (d)->arch.hvm.ioreq_server.server[id] + +static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d, + unsigned int id) +{ + if ( id >= MAX_NR_IOREQ_SERVERS ) + return NULL; + + return GET_IOREQ_SERVER(d, id); +} + +/* + * Iterate over all possible ioreq servers. + * + * NOTE: The iteration is backwards such that more recently created + * ioreq servers are favoured in hvm_select_ioreq_server(). + * This is a semantic that previously existed when ioreq servers + * were held in a linked list. 
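+ *
+ *       For example, with MAX_NR_IOREQ_SERVERS being 8 on x86, the
+ *       loop visits ids 7, 6, ..., 0; if two enabled servers claim
+ *       overlapping ranges, the more recently created one is found
+ *       first by hvm_select_ioreq_server().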
+ */ +#define FOR_EACH_IOREQ_SERVER(d, id, s) \ + for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \ + if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \ + continue; \ + else + +static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v) +{ + shared_iopage_t *p = s->ioreq.va; + + ASSERT((v == current) || !vcpu_runnable(v)); + ASSERT(p != NULL); + + return &p->vcpu_ioreq[v->vcpu_id]; +} + +static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v, + struct hvm_ioreq_server **srvp) +{ + struct domain *d = v->domain; + struct hvm_ioreq_server *s; + unsigned int id; + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + struct hvm_ioreq_vcpu *sv; + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + { + if ( sv->vcpu == v && sv->pending ) + { + if ( srvp ) + *srvp = s; + return sv; + } + } + } + + return NULL; +} + +bool hvm_io_pending(struct vcpu *v) +{ + return get_pending_vcpu(v, NULL); +} + +static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p) +{ + unsigned int prev_state = STATE_IOREQ_NONE; + unsigned int state = p->state; + uint64_t data = ~0; + + smp_rmb(); + + /* + * The only reason we should see this condition be false is when an + * emulator dying races with I/O being requested. + */ + while ( likely(state != STATE_IOREQ_NONE) ) + { + if ( unlikely(state < prev_state) ) + { + gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n", + prev_state, state); + sv->pending = false; + domain_crash(sv->vcpu->domain); + return false; /* bail */ + } + + switch ( prev_state = state ) + { + case STATE_IORESP_READY: /* IORESP_READY -> NONE */ + p->state = STATE_IOREQ_NONE; + data = p->data; + break; + + case STATE_IOREQ_READY: /* IOREQ_{READY,INPROCESS} -> IORESP_READY */ + case STATE_IOREQ_INPROCESS: + wait_on_xen_event_channel(sv->ioreq_evtchn, + ({ state = p->state; + smp_rmb(); + state != prev_state; })); + continue; + + default: + gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state); + sv->pending = false; + domain_crash(sv->vcpu->domain); + return false; /* bail */ + } + + break; + } + + p = &sv->vcpu->arch.hvm.hvm_io.io_req; + if ( hvm_ioreq_needs_completion(p) ) + p->data = data; + + sv->pending = false; + + return true; +} + +bool handle_hvm_io_completion(struct vcpu *v) +{ + struct domain *d = v->domain; + struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io; + struct hvm_ioreq_server *s; + struct hvm_ioreq_vcpu *sv; + enum hvm_io_completion io_completion; + + if ( has_vpci(d) && vpci_process_pending(v) ) + { + raise_softirq(SCHEDULE_SOFTIRQ); + return false; + } + + sv = get_pending_vcpu(v, &s); + if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) ) + return false; + + vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ? + STATE_IORESP_READY : STATE_IOREQ_NONE; + + msix_write_completion(v); + vcpu_end_shutdown_deferral(v); + + io_completion = vio->io_completion; + vio->io_completion = HVMIO_no_completion; + + switch ( io_completion ) + { + case HVMIO_no_completion: + break; + + case HVMIO_mmio_completion: + return arch_ioreq_complete_mmio(); + + case HVMIO_pio_completion: + return handle_pio(vio->io_req.addr, vio->io_req.size, + vio->io_req.dir); + + default: + return arch_vcpu_ioreq_completion(io_completion); + } + + return true; +} + +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) +{ + struct hvm_ioreq_page *iorp = buf ? 
&s->bufioreq : &s->ioreq; + struct page_info *page; + + if ( iorp->page ) + { + /* + * If a guest frame has already been mapped (which may happen + * on demand if hvm_get_ioreq_server_info() is called), then + * allocating a page is not permitted. + */ + if ( !gfn_eq(iorp->gfn, INVALID_GFN) ) + return -EPERM; + + return 0; + } + + page = alloc_domheap_page(s->target, MEMF_no_refcount); + + if ( !page ) + return -ENOMEM; + + if ( !get_page_and_type(page, s->target, PGT_writable_page) ) + { + /* + * The domain can't possibly know about this page yet, so failure + * here is a clear indication of something fishy going on. + */ + domain_crash(s->emulator); + return -ENODATA; + } + + iorp->va = __map_domain_page_global(page); + if ( !iorp->va ) + goto fail; + + iorp->page = page; + clear_page(iorp->va); + return 0; + + fail: + put_page_alloc_ref(page); + put_page_and_type(page); + + return -ENOMEM; +} + +static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf) +{ + struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq; + struct page_info *page = iorp->page; + + if ( !page ) + return; + + iorp->page = NULL; + + unmap_domain_page_global(iorp->va); + iorp->va = NULL; + + put_page_alloc_ref(page); + put_page_and_type(page); +} + +bool is_ioreq_server_page(struct domain *d, const struct page_info *page) +{ + const struct hvm_ioreq_server *s; + unsigned int id; + bool found = false; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + if ( (s->ioreq.page == page) || (s->bufioreq.page == page) ) + { + found = true; + break; + } + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return found; +} + +static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s, + struct hvm_ioreq_vcpu *sv) +{ + ASSERT(spin_is_locked(&s->lock)); + + if ( s->ioreq.va != NULL ) + { + ioreq_t *p = get_ioreq(s, sv->vcpu); + + p->vp_eport = sv->ioreq_evtchn; + } +} + +static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s, + struct vcpu *v) +{ + struct hvm_ioreq_vcpu *sv; + int rc; + + sv = xzalloc(struct hvm_ioreq_vcpu); + + rc = -ENOMEM; + if ( !sv ) + goto fail1; + + spin_lock(&s->lock); + + rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id, + s->emulator->domain_id, NULL); + if ( rc < 0 ) + goto fail2; + + sv->ioreq_evtchn = rc; + + if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) ) + { + rc = alloc_unbound_xen_event_channel(v->domain, 0, + s->emulator->domain_id, NULL); + if ( rc < 0 ) + goto fail3; + + s->bufioreq_evtchn = rc; + } + + sv->vcpu = v; + + list_add(&sv->list_entry, &s->ioreq_vcpu_list); + + if ( s->enabled ) + hvm_update_ioreq_evtchn(s, sv); + + spin_unlock(&s->lock); + return 0; + + fail3: + free_xen_event_channel(v->domain, sv->ioreq_evtchn); + + fail2: + spin_unlock(&s->lock); + xfree(sv); + + fail1: + return rc; +} + +static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s, + struct vcpu *v) +{ + struct hvm_ioreq_vcpu *sv; + + spin_lock(&s->lock); + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + { + if ( sv->vcpu != v ) + continue; + + list_del(&sv->list_entry); + + if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) ) + free_xen_event_channel(v->domain, s->bufioreq_evtchn); + + free_xen_event_channel(v->domain, sv->ioreq_evtchn); + + xfree(sv); + break; + } + + spin_unlock(&s->lock); +} + +static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s) +{ + struct hvm_ioreq_vcpu *sv, *next; + + spin_lock(&s->lock); + + list_for_each_entry_safe ( sv, + next, + 
&s->ioreq_vcpu_list, + list_entry ) + { + struct vcpu *v = sv->vcpu; + + list_del(&sv->list_entry); + + if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) ) + free_xen_event_channel(v->domain, s->bufioreq_evtchn); + + free_xen_event_channel(v->domain, sv->ioreq_evtchn); + + xfree(sv); + } + + spin_unlock(&s->lock); +} + +static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s) +{ + int rc; + + rc = hvm_alloc_ioreq_mfn(s, false); + + if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) ) + rc = hvm_alloc_ioreq_mfn(s, true); + + if ( rc ) + hvm_free_ioreq_mfn(s, false); + + return rc; +} + +static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s) +{ + hvm_free_ioreq_mfn(s, true); + hvm_free_ioreq_mfn(s, false); +} + +static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s) +{ + unsigned int i; + + for ( i = 0; i < NR_IO_RANGE_TYPES; i++ ) + rangeset_destroy(s->range[i]); +} + +static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s, + ioservid_t id) +{ + unsigned int i; + int rc; + + for ( i = 0; i < NR_IO_RANGE_TYPES; i++ ) + { + char *name; + + rc = asprintf(&name, "ioreq_server %d %s", id, + (i == XEN_DMOP_IO_RANGE_PORT) ? "port" : + (i == XEN_DMOP_IO_RANGE_MEMORY) ? "memory" : + (i == XEN_DMOP_IO_RANGE_PCI) ? "pci" : + ""); + if ( rc ) + goto fail; + + s->range[i] = rangeset_new(s->target, name, + RANGESETF_prettyprint_hex); + + xfree(name); + + rc = -ENOMEM; + if ( !s->range[i] ) + goto fail; + + rangeset_limit(s->range[i], MAX_NR_IO_RANGES); + } + + return 0; + + fail: + hvm_ioreq_server_free_rangesets(s); + + return rc; +} + +static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s) +{ + struct hvm_ioreq_vcpu *sv; + + spin_lock(&s->lock); + + if ( s->enabled ) + goto done; + + arch_ioreq_server_enable(s); + + s->enabled = true; + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + hvm_update_ioreq_evtchn(s, sv); + + done: + spin_unlock(&s->lock); +} + +static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s) +{ + spin_lock(&s->lock); + + if ( !s->enabled ) + goto done; + + arch_ioreq_server_disable(s); + + s->enabled = false; + + done: + spin_unlock(&s->lock); +} + +static int hvm_ioreq_server_init(struct hvm_ioreq_server *s, + struct domain *d, int bufioreq_handling, + ioservid_t id) +{ + struct domain *currd = current->domain; + struct vcpu *v; + int rc; + + s->target = d; + + get_knownalive_domain(currd); + s->emulator = currd; + + spin_lock_init(&s->lock); + INIT_LIST_HEAD(&s->ioreq_vcpu_list); + spin_lock_init(&s->bufioreq_lock); + + s->ioreq.gfn = INVALID_GFN; + s->bufioreq.gfn = INVALID_GFN; + + rc = hvm_ioreq_server_alloc_rangesets(s, id); + if ( rc ) + return rc; + + s->bufioreq_handling = bufioreq_handling; + + for_each_vcpu ( d, v ) + { + rc = hvm_ioreq_server_add_vcpu(s, v); + if ( rc ) + goto fail_add; + } + + return 0; + + fail_add: + hvm_ioreq_server_remove_all_vcpus(s); + arch_ioreq_server_unmap_pages(s); + + hvm_ioreq_server_free_rangesets(s); + + put_domain(s->emulator); + return rc; +} + +static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s) +{ + ASSERT(!s->enabled); + hvm_ioreq_server_remove_all_vcpus(s); + + /* + * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and + * hvm_ioreq_server_free_pages() in that order. + * This is because the former will do nothing if the pages + * are not mapped, leaving the page to be freed by the latter. 
+ * However if the pages are mapped then the former will set + * the page_info pointer to NULL, meaning the latter will do + * nothing. + */ + arch_ioreq_server_unmap_pages(s); + hvm_ioreq_server_free_pages(s); + + hvm_ioreq_server_free_rangesets(s); + + put_domain(s->emulator); +} + +int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, + ioservid_t *id) +{ + struct hvm_ioreq_server *s; + unsigned int i; + int rc; + + if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC ) + return -EINVAL; + + s = xzalloc(struct hvm_ioreq_server); + if ( !s ) + return -ENOMEM; + + domain_pause(d); + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ ) + { + if ( !GET_IOREQ_SERVER(d, i) ) + break; + } + + rc = -ENOSPC; + if ( i >= MAX_NR_IOREQ_SERVERS ) + goto fail; + + /* + * It is safe to call set_ioreq_server() prior to + * hvm_ioreq_server_init() since the target domain is paused. + */ + set_ioreq_server(d, i, s); + + rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i); + if ( rc ) + { + set_ioreq_server(d, i, NULL); + goto fail; + } + + if ( id ) + *id = i; + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + domain_unpause(d); + + return 0; + + fail: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + domain_unpause(d); + + xfree(s); + return rc; +} + +int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) +{ + struct hvm_ioreq_server *s; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s = get_ioreq_server(d, id); + + rc = -ENOENT; + if ( !s ) + goto out; + + rc = -EPERM; + if ( s->emulator != current->domain ) + goto out; + + domain_pause(d); + + arch_ioreq_server_destroy(s); + + hvm_ioreq_server_disable(s); + + /* + * It is safe to call hvm_ioreq_server_deinit() prior to + * set_ioreq_server() since the target domain is paused. 
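+ *
+ * With the domain paused, none of its vCPUs can be inside
+ * hvm_select_ioreq_server() or hvm_send_ioreq(), so tearing the
+ * server down before clearing its slot cannot race with a lookup.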
+ */ + hvm_ioreq_server_deinit(s); + set_ioreq_server(d, id, NULL); + + domain_unpause(d); + + xfree(s); + + rc = 0; + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, + unsigned long *ioreq_gfn, + unsigned long *bufioreq_gfn, + evtchn_port_t *bufioreq_port) +{ + struct hvm_ioreq_server *s; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s = get_ioreq_server(d, id); + + rc = -ENOENT; + if ( !s ) + goto out; + + rc = -EPERM; + if ( s->emulator != current->domain ) + goto out; + + if ( ioreq_gfn || bufioreq_gfn ) + { + rc = arch_ioreq_server_map_pages(s); + if ( rc ) + goto out; + } + + if ( ioreq_gfn ) + *ioreq_gfn = gfn_x(s->ioreq.gfn); + + if ( HANDLE_BUFIOREQ(s) ) + { + if ( bufioreq_gfn ) + *bufioreq_gfn = gfn_x(s->bufioreq.gfn); + + if ( bufioreq_port ) + *bufioreq_port = s->bufioreq_evtchn; + } + + rc = 0; + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, + unsigned long idx, mfn_t *mfn) +{ + struct hvm_ioreq_server *s; + int rc; + + ASSERT(is_hvm_domain(d)); + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s = get_ioreq_server(d, id); + + rc = -ENOENT; + if ( !s ) + goto out; + + rc = -EPERM; + if ( s->emulator != current->domain ) + goto out; + + rc = hvm_ioreq_server_alloc_pages(s); + if ( rc ) + goto out; + + switch ( idx ) + { + case XENMEM_resource_ioreq_server_frame_bufioreq: + rc = -ENOENT; + if ( !HANDLE_BUFIOREQ(s) ) + goto out; + + *mfn = page_to_mfn(s->bufioreq.page); + rc = 0; + break; + + case XENMEM_resource_ioreq_server_frame_ioreq(0): + *mfn = page_to_mfn(s->ioreq.page); + rc = 0; + break; + + default: + rc = -EINVAL; + break; + } + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end) +{ + struct hvm_ioreq_server *s; + struct rangeset *r; + int rc; + + if ( start > end ) + return -EINVAL; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s = get_ioreq_server(d, id); + + rc = -ENOENT; + if ( !s ) + goto out; + + rc = -EPERM; + if ( s->emulator != current->domain ) + goto out; + + switch ( type ) + { + case XEN_DMOP_IO_RANGE_PORT: + case XEN_DMOP_IO_RANGE_MEMORY: + case XEN_DMOP_IO_RANGE_PCI: + r = s->range[type]; + break; + + default: + r = NULL; + break; + } + + rc = -EINVAL; + if ( !r ) + goto out; + + rc = -EEXIST; + if ( rangeset_overlaps_range(r, start, end) ) + goto out; + + rc = rangeset_add_range(r, start, end); + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end) +{ + struct hvm_ioreq_server *s; + struct rangeset *r; + int rc; + + if ( start > end ) + return -EINVAL; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s = get_ioreq_server(d, id); + + rc = -ENOENT; + if ( !s ) + goto out; + + rc = -EPERM; + if ( s->emulator != current->domain ) + goto out; + + switch ( type ) + { + case XEN_DMOP_IO_RANGE_PORT: + case XEN_DMOP_IO_RANGE_MEMORY: + case XEN_DMOP_IO_RANGE_PCI: + r = s->range[type]; + break; + + default: + r = NULL; + break; + } + + rc = -EINVAL; + if ( !r ) + goto out; + + rc = -ENOENT; + if ( !rangeset_contains_range(r, start, end) ) + goto out; + + rc = rangeset_remove_range(r, start, end); + + 
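+
+    /*
+     * Illustrative sketch only (assuming the libxendevicemodel
+     * wrappers): a device model claims and later releases a 4-byte
+     * port-I/O range, with the fourth argument (is_mmio) set to 0:
+     *
+     *   xendevicemodel_map_io_range_to_ioreq_server(dmod, domid, id,
+     *                                               0, 0xc000, 0xc003);
+     *   ...
+     *   xendevicemodel_unmap_io_range_from_ioreq_server(dmod, domid, id,
+     *                                                   0, 0xc000, 0xc003);
+     */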
out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +/* + * Map or unmap an ioreq server to specific memory type. For now, only + * HVMMEM_ioreq_server is supported, and in the future new types can be + * introduced, e.g. HVMMEM_ioreq_serverX mapped to ioreq server X. And + * currently, only write operations are to be forwarded to an ioreq server. + * Support for the emulation of read operations can be added when an ioreq + * server has such requirement in the future. + */ +int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint32_t flags) +{ + struct hvm_ioreq_server *s; + int rc; + + if ( type != HVMMEM_ioreq_server ) + return -EINVAL; + + if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE ) + return -EINVAL; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s = get_ioreq_server(d, id); + + rc = -ENOENT; + if ( !s ) + goto out; + + rc = -EPERM; + if ( s->emulator != current->domain ) + goto out; + + rc = arch_ioreq_server_map_mem_type(d, s, flags); + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + if ( rc == 0 ) + arch_ioreq_server_map_mem_type_completed(d, s, flags); + + return rc; +} + +int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, + bool enabled) +{ + struct hvm_ioreq_server *s; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + s = get_ioreq_server(d, id); + + rc = -ENOENT; + if ( !s ) + goto out; + + rc = -EPERM; + if ( s->emulator != current->domain ) + goto out; + + domain_pause(d); + + if ( enabled ) + hvm_ioreq_server_enable(s); + else + hvm_ioreq_server_disable(s); + + domain_unpause(d); + + rc = 0; + + out: + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + return rc; +} + +int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) +{ + struct hvm_ioreq_server *s; + unsigned int id; + int rc; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + rc = hvm_ioreq_server_add_vcpu(s, v); + if ( rc ) + goto fail; + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return 0; + + fail: + while ( ++id != MAX_NR_IOREQ_SERVERS ) + { + s = GET_IOREQ_SERVER(d, id); + + if ( !s ) + continue; + + hvm_ioreq_server_remove_vcpu(s, v); + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); + + return rc; +} + +void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) +{ + struct hvm_ioreq_server *s; + unsigned int id; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + FOR_EACH_IOREQ_SERVER(d, id, s) + hvm_ioreq_server_remove_vcpu(s, v); + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); +} + +void hvm_destroy_all_ioreq_servers(struct domain *d) +{ + struct hvm_ioreq_server *s; + unsigned int id; + + if ( !arch_ioreq_server_destroy_all(d) ) + return; + + spin_lock_recursive(&d->arch.hvm.ioreq_server.lock); + + /* No need to domain_pause() as the domain is being torn down */ + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + hvm_ioreq_server_disable(s); + + /* + * It is safe to call hvm_ioreq_server_deinit() prior to + * set_ioreq_server() since the target domain is being destroyed. 
+ */ + hvm_ioreq_server_deinit(s); + set_ioreq_server(d, id, NULL); + + xfree(s); + } + + spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock); +} + +struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, + ioreq_t *p) +{ + struct hvm_ioreq_server *s; + uint8_t type; + uint64_t addr; + unsigned int id; + + if ( !arch_ioreq_server_get_type_addr(d, p, &type, &addr) ) + return NULL; + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + struct rangeset *r; + + if ( !s->enabled ) + continue; + + r = s->range[type]; + + switch ( type ) + { + unsigned long start, end; + + case XEN_DMOP_IO_RANGE_PORT: + start = addr; + end = start + p->size - 1; + if ( rangeset_contains_range(r, start, end) ) + return s; + + break; + + case XEN_DMOP_IO_RANGE_MEMORY: + start = hvm_mmio_first_byte(p); + end = hvm_mmio_last_byte(p); + + if ( rangeset_contains_range(r, start, end) ) + return s; + + break; + + case XEN_DMOP_IO_RANGE_PCI: + if ( rangeset_contains_singleton(r, addr >> 32) ) + { + p->type = IOREQ_TYPE_PCI_CONFIG; + p->addr = addr; + return s; + } + + break; + } + } + + return NULL; +} + +static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p) +{ + struct domain *d = current->domain; + struct hvm_ioreq_page *iorp; + buffered_iopage_t *pg; + buf_ioreq_t bp = { .data = p->data, + .addr = p->addr, + .type = p->type, + .dir = p->dir }; + /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */ + int qw = 0; + + /* Ensure buffered_iopage fits in a page */ + BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE); + + iorp = &s->bufioreq; + pg = iorp->va; + + if ( !pg ) + return IOREQ_STATUS_UNHANDLED; + + /* + * Return 0 for the cases we can't deal with: + * - 'addr' is only a 20-bit field, so we cannot address beyond 1MB + * - we cannot buffer accesses to guest memory buffers, as the guest + * may expect the memory buffer to be synchronously accessed + * - the count field is usually used with data_is_ptr and since we don't + * support data_is_ptr we do not waste space for the count field either + */ + if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) ) + return 0; + + switch ( p->size ) + { + case 1: + bp.size = 0; + break; + case 2: + bp.size = 1; + break; + case 4: + bp.size = 2; + break; + case 8: + bp.size = 3; + qw = 1; + break; + default: + gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size); + return IOREQ_STATUS_UNHANDLED; + } + + spin_lock(&s->bufioreq_lock); + + if ( (pg->ptrs.write_pointer - pg->ptrs.read_pointer) >= + (IOREQ_BUFFER_SLOT_NUM - qw) ) + { + /* The queue is full: send the iopacket through the normal path. */ + spin_unlock(&s->bufioreq_lock); + return IOREQ_STATUS_UNHANDLED; + } + + pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp; + + if ( qw ) + { + bp.data = p->data >> 32; + pg->buf_ioreq[(pg->ptrs.write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = bp; + } + + /* Make the ioreq_t visible /before/ write_pointer. */ + smp_wmb(); + pg->ptrs.write_pointer += qw ? 2 : 1; + + /* Canonicalize read/write pointers to prevent their overflow. 
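+     * For instance, with IOREQ_BUFFER_SLOT_NUM being 511, a pair of
+     * read_pointer/write_pointer values 1022/1030 is rewritten to 0/8:
+     * n is 1022 / 511 = 2 and both pointers drop by n * 511 = 1022,
+     * which preserves their distance while keeping the 32-bit fields
+     * well away from wrapping.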
*/ + while ( (s->bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC) && + qw++ < IOREQ_BUFFER_SLOT_NUM && + pg->ptrs.read_pointer >= IOREQ_BUFFER_SLOT_NUM ) + { + union bufioreq_pointers old = pg->ptrs, new; + unsigned int n = old.read_pointer / IOREQ_BUFFER_SLOT_NUM; + + new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM; + new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM; + cmpxchg(&pg->ptrs.full, old.full, new.full); + } + + notify_via_xen_event_channel(d, s->bufioreq_evtchn); + spin_unlock(&s->bufioreq_lock); + + return IOREQ_STATUS_HANDLED; +} + +int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, + bool buffered) +{ + struct vcpu *curr = current; + struct domain *d = curr->domain; + struct hvm_ioreq_vcpu *sv; + + ASSERT(s); + + if ( buffered ) + return hvm_send_buffered_ioreq(s, proto_p); + + if ( unlikely(!vcpu_start_shutdown_deferral(curr)) ) + return IOREQ_STATUS_RETRY; + + list_for_each_entry ( sv, + &s->ioreq_vcpu_list, + list_entry ) + { + if ( sv->vcpu == curr ) + { + evtchn_port_t port = sv->ioreq_evtchn; + ioreq_t *p = get_ioreq(s, curr); + + if ( unlikely(p->state != STATE_IOREQ_NONE) ) + { + gprintk(XENLOG_ERR, "device model set bad IO state %d\n", + p->state); + break; + } + + if ( unlikely(p->vp_eport != port) ) + { + gprintk(XENLOG_ERR, "device model set bad event channel %d\n", + p->vp_eport); + break; + } + + proto_p->state = STATE_IOREQ_NONE; + proto_p->vp_eport = port; + *p = *proto_p; + + prepare_wait_on_xen_event_channel(port); + + /* + * Following happens /after/ blocking and setting up ioreq + * contents. prepare_wait_on_xen_event_channel() is an implicit + * barrier. + */ + p->state = STATE_IOREQ_READY; + notify_via_xen_event_channel(d, port); + + sv->pending = true; + return IOREQ_STATUS_RETRY; + } + } + + return IOREQ_STATUS_UNHANDLED; +} + +unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) +{ + struct domain *d = current->domain; + struct hvm_ioreq_server *s; + unsigned int id, failed = 0; + + FOR_EACH_IOREQ_SERVER(d, id, s) + { + if ( !s->enabled ) + continue; + + if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED ) + failed++; + } + + return failed; +} + +void hvm_ioreq_init(struct domain *d) +{ + spin_lock_init(&d->arch.hvm.ioreq_server.lock); + + arch_ioreq_domain_init(d); +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h index df0c292..9b2eb6f 100644 --- a/xen/include/asm-x86/hvm/ioreq.h +++ b/xen/include/asm-x86/hvm/ioreq.h @@ -19,42 +19,6 @@ #ifndef __ASM_X86_HVM_IOREQ_H__ #define __ASM_X86_HVM_IOREQ_H__ -bool hvm_io_pending(struct vcpu *v); -bool handle_hvm_io_completion(struct vcpu *v); -bool is_ioreq_server_page(struct domain *d, const struct page_info *page); - -int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, - ioservid_t *id); -int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id); -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t *bufioreq_port); -int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, - unsigned long idx, mfn_t *mfn); -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); -int 
hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint32_t flags); -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled); - -int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v); -void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); -void hvm_destroy_all_ioreq_servers(struct domain *d); - -struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p); -int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, - bool buffered); -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered); - -void hvm_ioreq_init(struct domain *d); - /* This correlation must not be altered */ #define IOREQ_STATUS_HANDLED X86EMUL_OKAY #define IOREQ_STATUS_UNHANDLED X86EMUL_UNHANDLEABLE diff --git a/xen/include/asm-x86/ioreq.h b/xen/include/asm-x86/ioreq.h new file mode 100644 index 0000000..d06ce9a --- /dev/null +++ b/xen/include/asm-x86/ioreq.h @@ -0,0 +1,39 @@ +/* + * ioreq.h: Hardware virtual machine assist interface definitions. + * + * This is a wrapper which purpose is to not include arch HVM specific header + * from the common code. + * + * Copyright (c) 2016 Citrix Systems Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; If not, see . 
+ */ + +#ifndef __ASM_X86_IOREQ_H__ +#define __ASM_X86_IOREQ_H__ + +#ifdef CONFIG_HVM +#include +#endif + +#endif /* __ASM_X86_IOREQ_H__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index b95d3ef..430fc22 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -21,9 +21,47 @@ #include +#include + #define HANDLE_BUFIOREQ(s) \ ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) +bool hvm_io_pending(struct vcpu *v); +bool handle_hvm_io_completion(struct vcpu *v); +bool is_ioreq_server_page(struct domain *d, const struct page_info *page); + +int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, + ioservid_t *id); +int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id); +int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, + unsigned long *ioreq_gfn, + unsigned long *bufioreq_gfn, + evtchn_port_t *bufioreq_port); +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, + unsigned long idx, mfn_t *mfn); +int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end); +int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end); +int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, + uint32_t type, uint32_t flags); +int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, + bool enabled); + +int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v); +void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); +void hvm_destroy_all_ioreq_servers(struct domain *d); + +struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d, + ioreq_t *p); +int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p, + bool buffered); +unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered); + +void hvm_ioreq_init(struct domain *d); + bool arch_ioreq_complete_mmio(void); bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion); int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s); From patchwork Fri Jan 29 01:48:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Oleksandr Tyshchenko X-Patchwork-Id: 12054939 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 352FBC433DB for ; Fri, 29 Jan 2021 01:49:37 +0000 (UTC) Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id D16F964DFB for ; Fri, 29 Jan 2021 01:49:36 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D16F964DFB Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=gmail.com Authentication-Results: mail.kernel.org; spf=pass 
smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Received: from list by lists.xenproject.org with outflank-mailman.77578.140605 (Exim 4.92) (envelope-from ) id 1l5Iuf-0004sF-Je; Fri, 29 Jan 2021 01:49:29 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 77578.140605; Fri, 29 Jan 2021 01:49:29 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1l5Iuf-0004s7-FI; Fri, 29 Jan 2021 01:49:29 +0000 Received: by outflank-mailman (input) for mailman id 77578; Fri, 29 Jan 2021 01:49:27 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1l5Iud-0004da-Jz for xen-devel@lists.xenproject.org; Fri, 29 Jan 2021 01:49:27 +0000 Received: from mail-lf1-x12f.google.com (unknown [2a00:1450:4864:20::12f]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 94a7b5d1-73e6-4697-b6eb-9c4eddf3578f; Fri, 29 Jan 2021 01:49:11 +0000 (UTC) Received: by mail-lf1-x12f.google.com with SMTP id a12so10326725lfb.1 for ; Thu, 28 Jan 2021 17:49:10 -0800 (PST) Received: from otyshchenko.www.tendawifi.com ([212.22.223.21]) by smtp.gmail.com with ESMTPSA id z128sm1840238lfa.72.2021.01.28.17.49.08 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Thu, 28 Jan 2021 17:49:09 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 94a7b5d1-73e6-4697-b6eb-9c4eddf3578f DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=jSb7x+4yQyqCLdt0Hc3olaT2zK2MLUsGOPPC0n8/O54=; b=LajRGJW2QNITxPoExTnsYEghUCzo33JCP7te/a3L3a9KPv9jZ6kVr3jC5qKr0sNAWf +cKp3lSY5XnAN8xd5WpciOkiLtbQfPQjoZ5IOPMToRyU/rk9hJnnG00pqWfAYSGlRjt+ 1M2Klr/sTgU8+onvlEw7FilR0wHfMCmsgW5XXgWoTesPF8axWAnYnOKibmYHekaw53Md kmZEHZF69t0Td1duoWTJA/54rKMR+hzpnW62um9DfsScfzxJKKqak9jKJdrzFazYOf7n fBHipjyxhEVAXhQrWqlI7KByxbuEVn52cvxuRFcmlH2A1dv5YIvqIb1MiCWfNmAb0ZaO ltHw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=jSb7x+4yQyqCLdt0Hc3olaT2zK2MLUsGOPPC0n8/O54=; b=a+u9EJg94cAjIr4SlSnopa4BG+XkAg0uiDTB0QY4tCSeJS11+vxmtGMuB/Pcj/0CmW tSdAAQukU2x/Dukm3ctAMM/PcUSUkBwhRe8jV7iqwHDei0hDwqUp381Ner38ZGslBCt2 JQImgL0i5kFm8mJITY79R3o3NNjIhwj0ikFhARYQEOp4SmAlt5hX2E+8FDvP+0/qg2ph tQGwn1kOpVXPWqhzzJGQU6wfB0yAheNu+UtxaL8rm+Ej0378OGsbVLdTjLKP2k3O+6RS CmSxbYysSxkbmE9yXRIW7tIZPo3QyARQJHPBKnFz2Nz8eVbP6WvX3oIo1RIVW/FNX74U wijw== X-Gm-Message-State: AOAM5332Suf+Zq5f8FYDuhsF/dBGsbd8nwNaaso5214pP37LSbmK/c8U 78zgSUvyg0JxNz2ifOHdgLsfYPT0fogA0w== X-Google-Smtp-Source: ABdhPJx0169AU/SRPkWn0JUW/nPlOzvXT7MZcyjdbLc/GxrcbLQDSLyYre/Ck4bX+arijvXLK/DAow== X-Received: by 2002:ac2:4436:: with SMTP id w22mr855089lfl.41.1611884949760; Thu, 28 Jan 2021 17:49:09 -0800 (PST) From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Oleksandr Tyshchenko , Paul Durrant , Jan Beulich , Andrew Cooper , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , Wei Liu , Julien Grall , Stefano Stabellini , Julien Grall Subject: [PATCH V6 05/24] xen/ioreq: Make x86's hvm_ioreq_needs_completion() common Date: 
Date: Fri, 29 Jan 2021 03:48:33 +0200
Message-Id: <1611884932-1851-6-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

IOREQ is now a common feature and this helper will be used on Arm as
is. Move it to xen/ioreq.h and remove the "hvm" prefix.

Although PIO handling on Arm is not introduced with the current series
(it will be implemented when we add support for vPCI), PIOs technically
exist on Arm (they are simply accessed the same way as MMIO), so it is
better not to diverge now.

Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Paul Durrant
Acked-by: Jan Beulich
Reviewed-by: Julien Grall
Reviewed-by: Alex Bennée
CC: Julien Grall
[On Arm only]
Tested-by: Wei Chen

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch, was split from:
     "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common"

Changes V1 -> V2:
   - remove "hvm" prefix

Changes V2 -> V3:
   - add Paul's R-b

Changes V3 -> V4:
   - add Jan's A-b

Changes V4 -> V5:
   - rebase
   - add Julien's and Alex's R-b

Changes V5 -> V6:
   - no changes
---
 xen/arch/x86/hvm/emulate.c     | 4 ++--
 xen/arch/x86/hvm/io.c          | 2 +-
 xen/common/ioreq.c             | 4 ++--
 xen/include/asm-x86/hvm/vcpu.h | 7 -------
 xen/include/xen/ioreq.h        | 7 +++++++
 5 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 60ca465..c3487b5 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -336,7 +336,7 @@ static int hvmemul_do_io(
         rc = hvm_send_ioreq(s, &p, 0);
         if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
             vio->io_req.state = STATE_IOREQ_NONE;
-        else if ( !hvm_ioreq_needs_completion(&vio->io_req) )
+        else if ( !ioreq_needs_completion(&vio->io_req) )
             rc = X86EMUL_OKAY;
     }
     break;
@@ -2649,7 +2649,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
     if ( rc == X86EMUL_OKAY && vio->mmio_retry )
         rc = X86EMUL_RETRY;
 
-    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
+    if ( !ioreq_needs_completion(&vio->io_req) )
         completion = HVMIO_no_completion;
     else if ( completion == HVMIO_no_completion )
         completion = (vio->io_req.type != IOREQ_TYPE_PIO ||
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 11e007d..ef8286b 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -135,7 +135,7 @@ bool handle_pio(uint16_t port, unsigned int size, int dir)
 
     rc = hvmemul_do_pio_buffer(port, size, dir, &data);
 
-    if ( hvm_ioreq_needs_completion(&vio->io_req) )
+    if ( ioreq_needs_completion(&vio->io_req) )
         vio->io_completion = HVMIO_pio_completion;
 
     switch ( rc )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 4e7d91b..61ddd54 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -160,7 +160,7 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
     }
 
     p = &sv->vcpu->arch.hvm.hvm_io.io_req;
-    if ( hvm_ioreq_needs_completion(p) )
+    if ( ioreq_needs_completion(p) )
         p->data = data;
 
     sv->pending = false;
@@ -186,7 +186,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
     if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
         return false;
 
-    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
+    vio->io_req.state = ioreq_needs_completion(&vio->io_req) ?
                        STATE_IORESP_READY : STATE_IOREQ_NONE;
 
     msix_write_completion(v);
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 5ccd075..6c1feda 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -91,13 +91,6 @@ struct hvm_vcpu_io {
     const struct g2m_ioport *g2m_ioport;
 };
 
-static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq)
-{
-    return ioreq->state == STATE_IOREQ_READY &&
-           !ioreq->data_is_ptr &&
-           (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
-}
-
 struct nestedvcpu {
     bool_t nv_guestmode; /* vcpu in guestmode? */
     void *nv_vvmcx; /* l1 guest virtual VMCB/VMCS */
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 430fc22..e957b52 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -23,6 +23,13 @@
 
 #include <asm/ioreq.h>
 
+static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
+{
+    return ioreq->state == STATE_IOREQ_READY &&
+           !ioreq->data_is_ptr &&
+           (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
+}
+
 #define HANDLE_BUFIOREQ(s) \
     ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
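As a quick illustration of what the predicate encodes (a read must
still have its result fed back into the emulated instruction, while a
completed write is done), here is a small self-contained mock. The
struct and the constant values are stand-ins for the public ioreq ABI,
not the real definitions:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum { STATE_IOREQ_READY = 1 };           /* stand-in value */
enum { IOREQ_TYPE_PIO = 0 };              /* stand-in value */
enum { IOREQ_WRITE = 0, IOREQ_READ = 1 }; /* stand-in values */

typedef struct {
    uint8_t state, type, dir, data_is_ptr;
} ioreq_t;

/* Same logic as the helper moved to xen/ioreq.h by this patch. */
static bool ioreq_needs_completion(const ioreq_t *ioreq)
{
    return ioreq->state == STATE_IOREQ_READY &&
           !ioreq->data_is_ptr &&
           (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
}

int main(void)
{
    /* "in" insn: the device model's answer must reach the guest. */
    ioreq_t pio_read  = { STATE_IOREQ_READY, IOREQ_TYPE_PIO, IOREQ_READ, 0 };
    /* "out" insn: once the device model consumed it, nothing remains. */
    ioreq_t pio_write = { STATE_IOREQ_READY, IOREQ_TYPE_PIO, IOREQ_WRITE, 0 };

    assert(ioreq_needs_completion(&pio_read));
    assert(!ioreq_needs_completion(&pio_write));
    return 0;
}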
From patchwork Fri Jan 29 01:48:34 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054941
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper,
    Roger Pau Monné, Wei Liu, Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V6 06/24] xen/ioreq: Make x86's hvm_mmio_first(last)_byte() common
Date: Fri, 29 Jan 2021 03:48:34 +0200
Message-Id: <1611884932-1851-7-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

IOREQ is now a common feature and these helpers will be used on Arm
as is. Move them to xen/ioreq.h and replace the "hvm" prefixes with
"ioreq".
Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Paul Durrant
Acked-by: Jan Beulich
Reviewed-by: Julien Grall
Reviewed-by: Alex Bennée
CC: Julien Grall
[On Arm only]
Tested-by: Wei Chen

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - replace "hvm" prefix by "ioreq"

Changes V2 -> V3:
   - add Paul's R-b

Changes V3 -> V4:
   - add Jan's A-b

Changes V4 -> V5:
   - rebase
   - add Julien's and Alex's R-b

Changes V5 -> V6:
   - no changes
---
 xen/arch/x86/hvm/intercept.c |  5 +++--
 xen/arch/x86/hvm/stdvga.c    |  4 ++--
 xen/common/ioreq.c           |  4 ++--
 xen/include/asm-x86/hvm/io.h | 16 ----------------
 xen/include/xen/ioreq.h      | 16 ++++++++++++++++
 5 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/xen/arch/x86/hvm/intercept.c b/xen/arch/x86/hvm/intercept.c
index cd4c4c1..02ca3b0 100644
--- a/xen/arch/x86/hvm/intercept.c
+++ b/xen/arch/x86/hvm/intercept.c
@@ -17,6 +17,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/ioreq.h>
 #include
 #include
 #include
@@ -34,7 +35,7 @@
 static bool_t hvm_mmio_accept(const struct hvm_io_handler *handler,
                               const ioreq_t *p)
 {
-    paddr_t first = hvm_mmio_first_byte(p), last;
+    paddr_t first = ioreq_mmio_first_byte(p), last;
 
     BUG_ON(handler->type != IOREQ_TYPE_COPY);
 
@@ -42,7 +43,7 @@
         return 0;
 
     /* Make sure the handler will accept the whole access. */
-    last = hvm_mmio_last_byte(p);
+    last = ioreq_mmio_last_byte(p);
     if ( last != first &&
          !handler->mmio.ops->check(current, last) )
         domain_crash(current->domain);
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index fd7cadb..17dee74 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -524,8 +524,8 @@ static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
      * deadlock when hvm_mmio_internal() is called from
      * hvm_copy_to/from_guest_phys() in hvm_process_io_intercept().
      */
-    if ( (hvm_mmio_first_byte(p) < VGA_MEM_BASE) ||
-         (hvm_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) )
+    if ( (ioreq_mmio_first_byte(p) < VGA_MEM_BASE) ||
+         (ioreq_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) )
         return 0;
 
     spin_lock(&s->lock);
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 61ddd54..89e75ff 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -1078,8 +1078,8 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
         break;
 
     case XEN_DMOP_IO_RANGE_MEMORY:
-        start = hvm_mmio_first_byte(p);
-        end = hvm_mmio_last_byte(p);
+        start = ioreq_mmio_first_byte(p);
+        end = ioreq_mmio_last_byte(p);
 
         if ( rangeset_contains_range(r, start, end) )
             return s;
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 3d2e877..3a4a739 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -40,22 +40,6 @@ struct hvm_mmio_ops {
     hvm_mmio_write_t write;
 };
 
-static inline paddr_t hvm_mmio_first_byte(const ioreq_t *p)
-{
-    return unlikely(p->df) ?
-           p->addr - (p->count - 1ul) * p->size :
-           p->addr;
-}
-
-static inline paddr_t hvm_mmio_last_byte(const ioreq_t *p)
-{
-    unsigned long size = p->size;
-
-    return unlikely(p->df) ?
-           p->addr + size - 1:
-           p->addr + (p->count * size) - 1;
-}
-
 typedef int (*portio_action_t)(
     int dir, unsigned int port, unsigned int bytes, uint32_t *val);
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index e957b52..6853aa3 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -23,6 +23,22 @@
 
 #include <asm/ioreq.h>
 
+static inline paddr_t ioreq_mmio_first_byte(const ioreq_t *p)
+{
+    return unlikely(p->df) ?
+           p->addr - (p->count - 1ul) * p->size :
+           p->addr;
+}
+
+static inline paddr_t ioreq_mmio_last_byte(const ioreq_t *p)
+{
+    unsigned long size = p->size;
+
+    return unlikely(p->df) ?
+           p->addr + size - 1:
+           p->addr + (p->count * size) - 1;
+}
+
 static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
 {
     return ioreq->state == STATE_IOREQ_READY &&
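The df (direction flag) handling is the subtle part: for a descending
rep access, ioreq->addr holds the address of the last unit the guest
touches, so the first byte of the overall range lies below it. A
self-contained worked example; the field names follow ioreq_t, but the
struct itself is a stand-in for illustration:

#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/* Minimal stand-in for the ioreq_t fields the helpers read. */
typedef struct { uint64_t addr; uint32_t size, count; uint8_t df; } ioreq_t;

static paddr_t first_byte(const ioreq_t *p)
{
    return p->df ? p->addr - (p->count - 1ul) * p->size : p->addr;
}

static paddr_t last_byte(const ioreq_t *p)
{
    unsigned long size = p->size;

    return p->df ? p->addr + size - 1 : p->addr + (p->count * size) - 1;
}

int main(void)
{
    /* 4 x 2-byte accesses, descending (df set): addr is the highest unit. */
    ioreq_t down = { .addr = 0x1006, .size = 2, .count = 4, .df = 1 };
    assert(first_byte(&down) == 0x1000); /* 0x1006 - 3*2 */
    assert(last_byte(&down)  == 0x1007); /* 0x1006 + 2 - 1 */

    /* The same access ascending: addr is the lowest unit. */
    ioreq_t up = { .addr = 0x1000, .size = 2, .count = 4, .df = 0 };
    assert(first_byte(&up) == 0x1000);
    assert(last_byte(&up)  == 0x1007); /* 0x1000 + 4*2 - 1 */
    return 0;
}

Either way the helpers return the same byte range, which is exactly
what the rangeset check in hvm_select_ioreq_server() needs.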
From patchwork Fri Jan 29 01:48:35 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054943
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper,
    Roger Pau Monné, Wei Liu, Jun Nakajima, Kevin Tian, George Dunlap,
    Julien Grall, Stefano Stabellini, Julien Grall
Subject: [PATCH V6 07/24] xen/ioreq: Make x86's hvm_ioreq_(page/vcpu/server) structs common
Date: Fri, 29 Jan 2021 03:48:35 +0200
Message-Id: <1611884932-1851-8-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

IOREQ is now a common feature and these structs will be used on Arm
as is. Move them to xen/ioreq.h and remove the "hvm" prefixes.

Also there is no longer a need for asm-x86/hvm/domain.h to include
public/hvm/dm_op.h, since #define NR_IO_RANGE_TYPES (which uses
XEN_DMOP_IO_RANGE_PCI) moves to another location. Instead, include it
from the two places (p2m-pt.c and p2m-ept.c) which require that header
but do not directly include it so far.
Signed-off-by: Oleksandr Tyshchenko
Acked-by: Jan Beulich
Reviewed-by: Julien Grall
Reviewed-by: Paul Durrant
Reviewed-by: Alex Bennée
CC: Julien Grall
[On Arm only]
Tested-by: Wei Chen

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - remove "hvm" prefix

Changes V2 -> V3:
   - update patch according the "legacy interface" is x86 specific

Changes V3 -> V4:
   - add Jan's A-b

Changes V4 -> V5:
   - rebase
   - add Julien's, Alex's and Paul's R-b
   - fix alignment issue in domain.h

Changes V5 -> V6:
   - don't include <public/hvm/dm_op.h> by asm-x86/hvm/domain.h,
     include it by p2m-pt.c and p2m-ept.c directly
---
 xen/arch/x86/hvm/emulate.c       |   2 +-
 xen/arch/x86/hvm/ioreq.c         |  38 +++++++-------
 xen/arch/x86/hvm/stdvga.c        |   2 +-
 xen/arch/x86/mm/p2m-ept.c        |   1 +
 xen/arch/x86/mm/p2m-pt.c         |   1 +
 xen/arch/x86/mm/p2m.c            |   8 +--
 xen/common/ioreq.c               | 108 +++++++++++++++++++--------------------
 xen/include/asm-x86/hvm/domain.h |  38 +-------------
 xen/include/asm-x86/p2m.h        |   8 +--
 xen/include/xen/ioreq.h          |  54 ++++++++++++++++----
 10 files changed, 130 insertions(+), 130 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index c3487b5..4d62199 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -287,7 +287,7 @@ static int hvmemul_do_io(
      * However, there's no cheap approach to avoid above situations in xen,
      * so the device model side needs to check the incoming ioreq event.
      */
-    struct hvm_ioreq_server *s = NULL;
+    struct ioreq_server *s = NULL;
     p2m_type_t p2mt = p2m_invalid;
 
     if ( is_mmio )
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 666d695..0cadf34 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -63,7 +63,7 @@ bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion)
     return true;
 }
 
-static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
+static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
     unsigned int i;
@@ -79,7 +79,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
     return INVALID_GFN;
 }
 
-static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
+static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
     unsigned int i;
@@ -97,7 +97,7 @@ static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
     return hvm_alloc_legacy_ioreq_gfn(s);
 }
 
-static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,
+static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s,
                                       gfn_t gfn)
 {
     struct domain *d = s->target;
@@ -115,7 +115,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,
     return true;
 }
 
-static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
+static void hvm_free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn)
 {
     struct domain *d = s->target;
     unsigned int i = gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base;
@@ -129,9 +129,9 @@ static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
     }
 }
 
-static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
 
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return;
@@ -143,10 +143,10 @@ static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     iorp->gfn = INVALID_GFN;
 }
 
-static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
     if ( iorp->page )
@@ -179,11 +179,11 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
-static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
 
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
 
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return;
@@ -194,10 +194,10 @@ static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
         clear_page(iorp->va);
 }
 
-static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
@@ -213,7 +213,7 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
-int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
+int arch_ioreq_server_map_pages(struct ioreq_server *s)
 {
     int rc;
 
@@ -228,40 +228,40 @@ int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s)
     return rc;
 }
 
-void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
+void arch_ioreq_server_unmap_pages(struct ioreq_server *s)
 {
     hvm_unmap_ioreq_gfn(s, true);
     hvm_unmap_ioreq_gfn(s, false);
 }
 
-void arch_ioreq_server_enable(struct hvm_ioreq_server *s)
+void arch_ioreq_server_enable(struct ioreq_server *s)
 {
     hvm_remove_ioreq_gfn(s, false);
     hvm_remove_ioreq_gfn(s, true);
 }
 
-void arch_ioreq_server_disable(struct hvm_ioreq_server *s)
+void arch_ioreq_server_disable(struct ioreq_server *s)
 {
     hvm_add_ioreq_gfn(s, true);
     hvm_add_ioreq_gfn(s, false);
 }
 
 /* Called when target domain is paused */
-void arch_ioreq_server_destroy(struct hvm_ioreq_server *s)
+void arch_ioreq_server_destroy(struct ioreq_server *s)
 {
     p2m_set_ioreq_server(s->target, 0, s);
 }
 
 /* Called with ioreq_server lock held */
 int arch_ioreq_server_map_mem_type(struct domain *d,
-                                   struct hvm_ioreq_server *s,
+                                   struct ioreq_server *s,
                                    uint32_t flags)
 {
     return p2m_set_ioreq_server(d, flags, s);
 }
 
 void arch_ioreq_server_map_mem_type_completed(struct domain *d,
-                                              struct hvm_ioreq_server *s,
+                                              struct ioreq_server *s,
                                               uint32_t flags)
 {
     if ( flags == 0 && read_atomic(&p2m_get_hostp2m(d)->ioreq.entry_count) )
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index 17dee74..ee13449 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -466,7 +466,7 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
         .dir = IOREQ_WRITE,
         .data = data,
     };
-    struct hvm_ioreq_server *srv;
+    struct ioreq_server *srv;
 
     if ( !stdvga_cache_is_enabled(s) || !s->stdvga )
         goto done;
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 975ab40..23d411f 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -17,6 +17,7 @@
 
 #include
 #include
+#include <public/hvm/dm_op.h>
 #include
 #include
 #include
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index c43d5d0..f2afcf4 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -27,6 +27,7 @@
 #include
 #include
 #include
+#include <public/hvm/dm_op.h>
 #include
 #include
 #include
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index a32301c..c1dd45b 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -372,7 +372,7 @@ void p2m_memory_type_changed(struct domain *d)
 
 int p2m_set_ioreq_server(struct domain *d,
                          unsigned int flags,
-                         struct hvm_ioreq_server *s)
+                         struct ioreq_server *s)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
@@ -420,11 +420,11 @@ int p2m_set_ioreq_server(struct domain *d,
     return rc;
 }
 
-struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
-                                              unsigned int *flags)
+struct ioreq_server *p2m_get_ioreq_server(struct domain *d,
+                                          unsigned int *flags)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
 
     spin_lock(&p2m->ioreq.lock);
 
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 89e75ff..7320f23 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -35,7 +35,7 @@
 #include
 
 static void set_ioreq_server(struct domain *d, unsigned int id,
-                             struct hvm_ioreq_server *s)
+                             struct ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
     ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
@@ -46,8 +46,8 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
 #define GET_IOREQ_SERVER(d, id) \
     (d)->arch.hvm.ioreq_server.server[id]
 
-static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
-                                                 unsigned int id)
+static struct ioreq_server *get_ioreq_server(const struct domain *d,
+                                             unsigned int id)
 {
     if ( id >= MAX_NR_IOREQ_SERVERS )
         return NULL;
@@ -69,7 +69,7 @@ static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
             continue; \
         else
 
-static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
+static ioreq_t *get_ioreq(struct ioreq_server *s, struct vcpu *v)
 {
     shared_iopage_t *p = s->ioreq.va;
 
@@ -79,16 +79,16 @@ static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
     return &p->vcpu_ioreq[v->vcpu_id];
 }
 
-static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
-                                               struct hvm_ioreq_server **srvp)
+static struct ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
+                                           struct ioreq_server **srvp)
 {
     struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        struct hvm_ioreq_vcpu *sv;
+        struct ioreq_vcpu *sv;
 
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
@@ -111,7 +111,7 @@ bool hvm_io_pending(struct vcpu *v)
     return get_pending_vcpu(v, NULL);
 }
 
-static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
+static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
 {
     unsigned int prev_state = STATE_IOREQ_NONE;
     unsigned int state = p->state;
@@ -172,8 +172,8 @@ bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
     struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
-    struct hvm_ioreq_server *s;
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_server *s;
+    struct ioreq_vcpu *sv;
     enum hvm_io_completion io_completion;
 
     if ( has_vpci(d) && vpci_process_pending(v) )
@@ -214,9 +214,9 @@ bool handle_hvm_io_completion(struct vcpu *v)
     return true;
 }
 
-static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
 {
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page;
 
     if ( iorp->page )
@@ -262,9 +262,9 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
     return -ENOMEM;
 }
 
-static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+static void hvm_free_ioreq_mfn(struct ioreq_server *s, bool buf)
 {
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page = iorp->page;
 
     if ( !page )
@@ -281,7 +281,7 @@ static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
 
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 {
-    const struct hvm_ioreq_server *s;
+    const struct ioreq_server *s;
     unsigned int id;
     bool found = false;
 
@@ -301,8 +301,8 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     return found;
 }
 
-static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
-                                    struct hvm_ioreq_vcpu *sv)
+static void hvm_update_ioreq_evtchn(struct ioreq_server *s,
+                                    struct ioreq_vcpu *sv)
 {
     ASSERT(spin_is_locked(&s->lock));
 
@@ -314,13 +314,13 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
     }
 }
 
-static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
+static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
                                      struct vcpu *v)
 {
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
     int rc;
 
-    sv = xzalloc(struct hvm_ioreq_vcpu);
+    sv = xzalloc(struct ioreq_vcpu);
 
     rc = -ENOMEM;
     if ( !sv )
@@ -366,10 +366,10 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
+static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s,
                                          struct vcpu *v)
 {
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
 
     spin_lock(&s->lock);
 
@@ -394,9 +394,9 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s)
 {
-    struct hvm_ioreq_vcpu *sv, *next;
+    struct ioreq_vcpu *sv, *next;
 
     spin_lock(&s->lock);
 
@@ -420,7 +420,7 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
+static int hvm_ioreq_server_alloc_pages(struct ioreq_server *s)
 {
     int rc;
 
@@ -435,13 +435,13 @@ static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
     return rc;
 }
 
-static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_free_pages(struct ioreq_server *s)
 {
     hvm_free_ioreq_mfn(s, true);
     hvm_free_ioreq_mfn(s, false);
 }
 
-static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s)
 {
     unsigned int i;
 
@@ -449,7 +449,7 @@ static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
         rangeset_destroy(s->range[i]);
 }
 
-static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
+static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s,
                                             ioservid_t id)
 {
     unsigned int i;
@@ -487,9 +487,9 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_enable(struct ioreq_server *s)
 {
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
 
     spin_lock(&s->lock);
 
@@ -509,7 +509,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_disable(struct ioreq_server *s)
 {
     spin_lock(&s->lock);
 
@@ -524,7 +524,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
+static int hvm_ioreq_server_init(struct ioreq_server *s,
                                  struct domain *d, int bufioreq_handling,
                                  ioservid_t id)
 {
@@ -569,7 +569,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_deinit(struct ioreq_server *s)
 {
     ASSERT(!s->enabled);
     hvm_ioreq_server_remove_all_vcpus(s);
@@ -594,14 +594,14 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
                             ioservid_t *id)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int i;
     int rc;
 
     if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
         return -EINVAL;
 
-    s = xzalloc(struct hvm_ioreq_server);
+    s = xzalloc(struct ioreq_server);
     if ( !s )
         return -ENOMEM;
 
@@ -649,7 +649,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
 
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -694,7 +694,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                               unsigned long *bufioreq_gfn,
                               evtchn_port_t *bufioreq_port)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -739,7 +739,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
 int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
                                unsigned long idx, mfn_t *mfn)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     ASSERT(is_hvm_domain(d));
@@ -791,7 +791,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
                                      uint64_t end)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     struct rangeset *r;
     int rc;
 
@@ -843,7 +843,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                          uint32_t type, uint64_t start,
                                          uint64_t end)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     struct rangeset *r;
     int rc;
 
@@ -902,7 +902,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
 int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint32_t flags)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     if ( type != HVMMEM_ioreq_server )
@@ -937,7 +937,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
                                bool enabled)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
    int rc;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -970,7 +970,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
 
 int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
     int rc;
 
@@ -1005,7 +1005,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -1018,7 +1018,7 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
 
 void hvm_destroy_all_ioreq_servers(struct domain *d)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
 
     if ( !arch_ioreq_server_destroy_all(d) )
@@ -1045,10 +1045,10 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
 
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p)
+struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                             ioreq_t *p)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     uint8_t type;
     uint64_t addr;
     unsigned int id;
@@ -1101,10 +1101,10 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     return NULL;
 }
 
-static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
+static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p)
 {
     struct domain *d = current->domain;
-    struct hvm_ioreq_page *iorp;
+    struct ioreq_page *iorp;
     buffered_iopage_t *pg;
     buf_ioreq_t bp = { .data = p->data,
                        .addr = p->addr,
@@ -1194,12 +1194,12 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     return IOREQ_STATUS_HANDLED;
 }
 
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
                    bool buffered)
 {
     struct vcpu *curr = current;
     struct domain *d = curr->domain;
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
 
     ASSERT(s);
 
@@ -1257,7 +1257,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
 {
     struct domain *d = current->domain;
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id, failed = 0;
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 9d247ba..f26c1a2 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -28,42 +28,6 @@
 #include
 #include
 
-#include <public/hvm/dm_op.h>
-
-struct hvm_ioreq_page {
-    gfn_t gfn;
-    struct page_info *page;
-    void *va;
-};
-
-struct hvm_ioreq_vcpu {
-    struct list_head list_entry;
-    struct vcpu      *vcpu;
-    evtchn_port_t    ioreq_evtchn;
-    bool             pending;
-};
-
-#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1)
-#define MAX_NR_IO_RANGES  256
-
-struct hvm_ioreq_server {
-    struct domain          *target, *emulator;
-
-    /* Lock to serialize toolstack modifications */
-    spinlock_t             lock;
-
-    struct hvm_ioreq_page  ioreq;
-    struct list_head       ioreq_vcpu_list;
-    struct hvm_ioreq_page  bufioreq;
-
-    /* Lock to serialize access to buffered ioreq ring */
-    spinlock_t             bufioreq_lock;
-    evtchn_port_t          bufioreq_evtchn;
-    struct rangeset        *range[NR_IO_RANGE_TYPES];
-    bool                   enabled;
-    uint8_t                bufioreq_handling;
-};
-
 #ifdef CONFIG_MEM_SHARING
 struct mem_sharing_domain
 {
@@ -110,7 +74,7 @@ struct hvm_domain {
     /* Lock protects all other values in the sub-struct and the default */
     struct {
         spinlock_t              lock;
-        struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS];
+        struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
     } ioreq_server;
 
     /* Cached CF8 for guest PCI config cycles */
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 6acbd1e..5d7836d 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -363,7 +363,7 @@ struct p2m_domain {
      * ioreq server who's responsible for the emulation of
      * gfns with specific p2m type(for now, p2m_ioreq_server).
      */
-    struct hvm_ioreq_server *server;
+    struct ioreq_server *server;
     /*
      * flags specifies whether read, write or both operations
      * are to be emulated by an ioreq server.
@@ -937,9 +937,9 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn)
 }
 
 int p2m_set_ioreq_server(struct domain *d, unsigned int flags,
-                         struct hvm_ioreq_server *s);
-struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
-                                              unsigned int *flags);
+                         struct ioreq_server *s);
+struct ioreq_server *p2m_get_ioreq_server(struct domain *d,
+                                          unsigned int *flags);
 
 static inline int p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
                                    p2m_type_t ot, mfn_t nfn, mfn_t ofn,
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 6853aa3..5a6c11d 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -23,6 +23,40 @@
 
 #include <asm/ioreq.h>
 
+struct ioreq_page {
+    gfn_t gfn;
+    struct page_info *page;
+    void *va;
+};
+
+struct ioreq_vcpu {
+    struct list_head list_entry;
+    struct vcpu      *vcpu;
+    evtchn_port_t    ioreq_evtchn;
+    bool             pending;
+};
+
+#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1)
+#define MAX_NR_IO_RANGES  256
+
+struct ioreq_server {
+    struct domain          *target, *emulator;
+
+    /* Lock to serialize toolstack modifications */
+    spinlock_t             lock;
+
+    struct ioreq_page      ioreq;
+    struct list_head       ioreq_vcpu_list;
+    struct ioreq_page      bufioreq;
+
+    /* Lock to serialize access to buffered ioreq ring */
+    spinlock_t             bufioreq_lock;
+    evtchn_port_t          bufioreq_evtchn;
+    struct rangeset        *range[NR_IO_RANGE_TYPES];
+    bool                   enabled;
+    uint8_t                bufioreq_handling;
+};
+
 static inline paddr_t ioreq_mmio_first_byte(const ioreq_t *p)
 {
     return unlikely(p->df) ?
@@ -77,9 +111,9 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
 void hvm_destroy_all_ioreq_servers(struct domain *d);
 
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p);
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                             ioreq_t *p);
+int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
                    bool buffered);
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
@@ -87,16 +121,16 @@ void hvm_ioreq_init(struct domain *d);
 
 bool arch_ioreq_complete_mmio(void);
 bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion);
-int arch_ioreq_server_map_pages(struct hvm_ioreq_server *s);
-void arch_ioreq_server_unmap_pages(struct hvm_ioreq_server *s);
-void arch_ioreq_server_enable(struct hvm_ioreq_server *s);
-void arch_ioreq_server_disable(struct hvm_ioreq_server *s);
-void arch_ioreq_server_destroy(struct hvm_ioreq_server *s);
+int arch_ioreq_server_map_pages(struct ioreq_server *s);
+void arch_ioreq_server_unmap_pages(struct ioreq_server *s);
+void arch_ioreq_server_enable(struct ioreq_server *s);
+void arch_ioreq_server_disable(struct ioreq_server *s);
+void arch_ioreq_server_destroy(struct ioreq_server *s);
 int arch_ioreq_server_map_mem_type(struct domain *d,
-                                   struct hvm_ioreq_server *s,
+                                   struct ioreq_server *s,
                                    uint32_t flags);
 void arch_ioreq_server_map_mem_type_completed(struct domain *d,
-                                              struct hvm_ioreq_server *s,
+                                              struct ioreq_server *s,
                                               uint32_t flags);
 bool arch_ioreq_server_destroy_all(struct domain *d);
 bool arch_ioreq_server_get_type_addr(const struct domain *d, const ioreq_t *p,
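For orientation, an ioservid_t is nothing more than an index into the
per-domain server table; creation scans for the first free slot, as
hvm_create_ioreq_server() does. Below is a self-contained sketch of
just that slot logic; MAX_NR_IOREQ_SERVERS and the table shape follow
the structs above, everything else is mock for illustration:

#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define MAX_NR_IOREQ_SERVERS 8

struct ioreq_server { int bufioreq_handling; };

/* Stand-in for the per-domain table guarded by the ioreq_server lock. */
struct demo_domain { struct ioreq_server *server[MAX_NR_IOREQ_SERVERS]; };

/* Mirrors the slot scan in hvm_create_ioreq_server(): the returned id
 * is simply the index of the first free slot (the real code returns
 * -ENOSPC when the table is full). */
static int demo_create_server(struct demo_domain *d, unsigned int *id)
{
    unsigned int i;

    for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
    {
        if ( !d->server[i] )
        {
            d->server[i] = calloc(1, sizeof(struct ioreq_server));
            *id = i;
            return 0;
        }
    }
    return -1;
}

int main(void)
{
    struct demo_domain d = { { NULL } };
    unsigned int id;

    assert(demo_create_server(&d, &id) == 0 && id == 0);
    assert(demo_create_server(&d, &id) == 0 && id == 1);
    return 0;
}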
From patchwork Fri Jan 29 01:48:36 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054947
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Andrew Cooper, George Dunlap,
    Ian Jackson, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu,
    Roger Pau Monné, Julien Grall
Subject: [PATCH V6 08/24] xen/ioreq: Move x86's ioreq_server to struct domain
Date: Fri, 29 Jan 2021 03:48:36 +0200
Message-Id: <1611884932-1851-9-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

IOREQ is now a common feature and this struct will be used on Arm as
is. Move it to the common struct domain. This also significantly
reduces the layering violation in the common code (*arch.hvm* usage).

We don't move ioreq_gfn since it is not used in the common code
(the "legacy" mechanism is x86 specific).
Signed-off-by: Oleksandr Tyshchenko
Acked-by: Jan Beulich
Reviewed-by: Julien Grall
Reviewed-by: Paul Durrant
Reviewed-by: Alex Bennée
CC: Julien Grall
[On Arm only]
Tested-by: Wei Chen

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch

Changes V2 -> V3:
   - remove the mention of "ioreq_gfn" from patch subject/description
   - update patch according the "legacy interface" is x86 specific
   - drop hvm_params related changes in arch/x86/hvm/hvm.c
   - leave ioreq_gfn in hvm_domain

Changes V3 -> V4:
   - rebase
   - drop the stale part of the comment above struct ioreq_server
   - add Jan's A-b

Changes V4 -> V5:
   - add Julien's, Alex's and Paul's R-b

Changes V5 -> V6:
   - no changes
---
 xen/common/ioreq.c               | 60 ++++++++++++++++++++--------------------
 xen/include/asm-x86/hvm/domain.h |  8 ------
 xen/include/xen/sched.h          | 10 +++++++
 3 files changed, 40 insertions(+), 38 deletions(-)

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 7320f23..4cb26e6 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -38,13 +38,13 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
+    ASSERT(!s || !d->ioreq_server.server[id]);
 
-    d->arch.hvm.ioreq_server.server[id] = s;
+    d->ioreq_server.server[id] = s;
 }
 
 #define GET_IOREQ_SERVER(d, id) \
-    (d)->arch.hvm.ioreq_server.server[id]
+    (d)->ioreq_server.server[id]
 
 static struct ioreq_server *get_ioreq_server(const struct domain *d,
                                              unsigned int id)
@@ -285,7 +285,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     unsigned int id;
     bool found = false;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -296,7 +296,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
         }
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return found;
 }
@@ -606,7 +606,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
         return -ENOMEM;
 
     domain_pause(d);
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
     {
@@ -634,13 +634,13 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     if ( id )
         *id = i;
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     domain_unpause(d);
 
     return 0;
 
  fail:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     domain_unpause(d);
 
     xfree(s);
@@ -652,7 +652,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -684,7 +684,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -697,7 +697,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -731,7 +731,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     rc = 0;
 
 out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -744,7 +744,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
 
     ASSERT(is_hvm_domain(d));
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -782,7 +782,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
     }
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -798,7 +798,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( start > end )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -834,7 +834,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     rc = rangeset_add_range(r, start, end);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -850,7 +850,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     if ( start > end )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -886,7 +886,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     rc = rangeset_remove_range(r, start, end);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -911,7 +911,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -926,7 +926,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     rc = arch_ioreq_server_map_mem_type(d, s, flags);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     if ( rc == 0 )
         arch_ioreq_server_map_mem_type_completed(d, s, flags);
@@ -940,7 +940,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -964,7 +964,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -974,7 +974,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
     unsigned int id;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -983,7 +983,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
             goto fail;
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return 0;
 
@@ -998,7 +998,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
         hvm_ioreq_server_remove_vcpu(s, v);
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -1008,12 +1008,12 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     struct ioreq_server *s;
     unsigned int id;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
         hvm_ioreq_server_remove_vcpu(s, v);
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
 void hvm_destroy_all_ioreq_servers(struct domain *d)
@@ -1024,7 +1024,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     if ( !arch_ioreq_server_destroy_all(d) )
         return;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     /* No need to domain_pause() as the domain is being torn down */
 
@@ -1042,7 +1042,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
         xfree(s);
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
 struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
@@ -1274,7 +1274,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
 
 void hvm_ioreq_init(struct domain *d)
 {
-    spin_lock_init(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_init(&d->ioreq_server.lock);
 
     arch_ioreq_domain_init(d);
 }
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index f26c1a2..25af518 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -61,8 +61,6 @@ struct hvm_pi_ops {
     void (*vcpu_block)(struct vcpu *);
 };
 
-#define MAX_NR_IOREQ_SERVERS 8
-
 struct hvm_domain {
     /* Guest page range used for non-default ioreq servers */
     struct {
@@ -71,12 +69,6 @@ struct hvm_domain {
         unsigned long legacy_mask; /* indexed by HVM param number */
     } ioreq_gfn;
 
-    /* Lock protects all other values in the sub-struct and the default */
-    struct {
-        spinlock_t              lock;
-        struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
-    } ioreq_server;
-
     /* Cached CF8 for guest PCI config cycles */
     uint32_t pci_cf8;
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index da19f4e..f437ee3 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -318,6 +318,8 @@ struct sched_unit {
 
 struct evtchn_port_ops;
 
+#define MAX_NR_IOREQ_SERVERS 8
+
 struct domain
 {
     domid_t          domain_id;
@@ -534,6 +536,14 @@ struct domain
         unsigned int val;
         struct vcpu *vcpu;
     } teardown;
+
+#ifdef CONFIG_IOREQ_SERVER
+    /* Lock protects all other values in the sub-struct */
+    struct {
+        spinlock_t              lock;
+        struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
+    } ioreq_server;
+#endif
 };
 
 static inline struct page_list_head *page_to_list(
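With the table now hanging directly off struct domain, the common
iteration idiom needs no arch knowledge at all. Below is a runnable
mock of the FOR_EACH_IOREQ_SERVER() shape from xen/common/ioreq.c;
the domain and server types are stand-ins, and in Xen the walk happens
under d->ioreq_server.lock:

#include <stdio.h>

#define MAX_NR_IOREQ_SERVERS 8

struct ioreq_server { const char *name; };
struct demo_domain { struct ioreq_server *server[MAX_NR_IOREQ_SERVERS]; };

#define GET_IOREQ_SERVER(d, id) ((d)->server[id])

/* Same shape as the macro in xen/common/ioreq.c: walk ids downwards,
 * skipping empty slots. */
#define FOR_EACH_IOREQ_SERVER(d, id, s)                  \
    for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; )      \
        if ( !(s = GET_IOREQ_SERVER(d, --(id))) )        \
            continue;                                    \
        else

int main(void)
{
    struct ioreq_server qemu = { "qemu" }, standalone = { "virtio-disk" };
    struct demo_domain d = { .server = { [0] = &qemu, [3] = &standalone } };
    struct ioreq_server *s;
    unsigned int id;

    FOR_EACH_IOREQ_SERVER(&d, id, s)
        printf("id %u -> %s\n", id, s->name); /* prints ids 3, then 0 */
    return 0;
}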
From patchwork Fri Jan 29 01:48:37 2021
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu,
George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini, Paul Durrant, Daniel De Graaf, Oleksandr Tyshchenko
Subject: [PATCH V6 09/24] xen/ioreq: Make x86's IOREQ related dm-op handling common
Date: Fri, 29 Jan 2021 03:48:37 +0200
Message-Id: <1611884932-1851-10-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall

As a lot of x86 code can be re-used on Arm later on, this patch moves the IOREQ-related dm-op handling to the common code.

The idea is to keep the top-level dm-op handling arch-specific and to call into ioreq_server_dm_op() for otherwise unhandled ops.

Pros:
- More natural than doing it the other way around (top-level dm-op handling common).
- Leaves compat_dm_op() in x86 code.

Cons:
- Code duplication: both arches have to duplicate dm_op(), etc.

Make the corresponding functions static and rename them according to the new naming scheme (including dropping the "hvm" prefixes).

Introduce a common dm.c file as a resting place for do_dm_op() (which is identical for both Arm and x86) to minimize code duplication.

The common DM feature is supposed to be built with the IOREQ_SERVER option enabled (as well as the IOREQ feature), which is selected for x86's config HVM for now.

Also update the XSM code a bit to let dm-op be used on Arm.

This support is going to be used on Arm to be able to run device emulators outside of the Xen hypervisor.

Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko
Acked-by: Jan Beulich
Reviewed-by: Paul Durrant
[On Arm only]
Tested-by: Wei Chen

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - update XSM, related changes were pulled from:
     [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features

Changes V1 -> V2:
   - update the author of a patch
   - update patch description
   - introduce xen/dm.h and move definitions here

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - rework to have the top level dm-op handling arch-specific
   - update patch subject/description, was "xen/dm: Make x86's DM feature common"
   - make a few functions static in common ioreq.c

Changes V4 -> V5:
   - update patch description
   - add Jan's A-b
   - drop the 'hvm_' prefixes of touched functions and rename them instead of doing that in patch #12
   - add common dm.c to keep do_dm_op(), make dm_op() global

Changes V5 -> V6:
   - include xen/types.h, public/xen.h and public/hvm/dm_op.h by common dm.h instead of xen/sched.h
   - add Paul's R-b
---
---
 xen/arch/x86/hvm/dm.c   | 132 ++--------------------------------------------
 xen/common/Makefile     |   1 +
 xen/common/dm.c         |  55 +++++++++++++++++++
 xen/common/ioreq.c      | 137 ++++++++++++++++++++++++++++++++++++++++++------
 xen/include/xen/dm.h    |  44 ++++++++++++++++
 xen/include/xen/ioreq.h |  21 ++------
 xen/include/xsm/dummy.h |   4 +-
 xen/include/xsm/xsm.h   |   6 +--
 xen/xsm/dummy.c         |   2 +-
 xen/xsm/flask/hooks.c   |   5 +-
 10 files changed, 238 insertions(+), 169 deletions(-)
 create mode 100644 xen/common/dm.c
 create mode 100644 xen/include/xen/dm.h

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c index d3e2a9e..5bc172a 100644 --- a/xen/arch/x86/hvm/dm.c +++ b/xen/arch/x86/hvm/dm.c @@ -16,6 +16,7 @@ #include #include +#include #include #include #include @@ -29,13 +30,6 @@ #include -struct dmop_args { - domid_t domid; - unsigned int nr_bufs; - /* Reserve enough buf elements for all current
hypercalls. */ - struct xen_dm_op_buf buf[2]; -}; - static bool _raw_copy_from_guest_buf_offset(void *dst, const struct dmop_args *args, unsigned int buf_idx, @@ -338,7 +332,7 @@ static int inject_event(struct domain *d, return 0; } -static int dm_op(const struct dmop_args *op_args) +int dm_op(const struct dmop_args *op_args) { struct domain *d; struct xen_dm_op op; @@ -408,71 +402,6 @@ static int dm_op(const struct dmop_args *op_args) switch ( op.op ) { - case XEN_DMOP_create_ioreq_server: - { - struct xen_dm_op_create_ioreq_server *data = - &op.u.create_ioreq_server; - - const_op = false; - - rc = -EINVAL; - if ( data->pad[0] || data->pad[1] || data->pad[2] ) - break; - - rc = hvm_create_ioreq_server(d, data->handle_bufioreq, - &data->id); - break; - } - - case XEN_DMOP_get_ioreq_server_info: - { - struct xen_dm_op_get_ioreq_server_info *data = - &op.u.get_ioreq_server_info; - const uint16_t valid_flags = XEN_DMOP_no_gfns; - - const_op = false; - - rc = -EINVAL; - if ( data->flags & ~valid_flags ) - break; - - rc = hvm_get_ioreq_server_info(d, data->id, - (data->flags & XEN_DMOP_no_gfns) ? - NULL : &data->ioreq_gfn, - (data->flags & XEN_DMOP_no_gfns) ? - NULL : &data->bufioreq_gfn, - &data->bufioreq_port); - break; - } - - case XEN_DMOP_map_io_range_to_ioreq_server: - { - const struct xen_dm_op_ioreq_server_range *data = - &op.u.map_io_range_to_ioreq_server; - - rc = -EINVAL; - if ( data->pad ) - break; - - rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type, - data->start, data->end); - break; - } - - case XEN_DMOP_unmap_io_range_from_ioreq_server: - { - const struct xen_dm_op_ioreq_server_range *data = - &op.u.unmap_io_range_from_ioreq_server; - - rc = -EINVAL; - if ( data->pad ) - break; - - rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type, - data->start, data->end); - break; - } - case XEN_DMOP_map_mem_type_to_ioreq_server: { struct xen_dm_op_map_mem_type_to_ioreq_server *data = @@ -486,8 +415,8 @@ static int dm_op(const struct dmop_args *op_args) break; if ( first_gfn == 0 ) - rc = hvm_map_mem_type_to_ioreq_server(d, data->id, - data->type, data->flags); + rc = ioreq_server_map_mem_type(d, data->id, + data->type, data->flags); else rc = 0; @@ -523,32 +452,6 @@ static int dm_op(const struct dmop_args *op_args) break; } - case XEN_DMOP_set_ioreq_server_state: - { - const struct xen_dm_op_set_ioreq_server_state *data = - &op.u.set_ioreq_server_state; - - rc = -EINVAL; - if ( data->pad ) - break; - - rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled); - break; - } - - case XEN_DMOP_destroy_ioreq_server: - { - const struct xen_dm_op_destroy_ioreq_server *data = - &op.u.destroy_ioreq_server; - - rc = -EINVAL; - if ( data->pad ) - break; - - rc = hvm_destroy_ioreq_server(d, data->id); - break; - } - case XEN_DMOP_track_dirty_vram: { const struct xen_dm_op_track_dirty_vram *data = @@ -703,7 +606,7 @@ static int dm_op(const struct dmop_args *op_args) } default: - rc = -EOPNOTSUPP; + rc = ioreq_server_dm_op(&op, d, &const_op); break; } @@ -776,31 +679,6 @@ int compat_dm_op(domid_t domid, return rc; } -long do_dm_op(domid_t domid, - unsigned int nr_bufs, - XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs) -{ - struct dmop_args args; - int rc; - - if ( nr_bufs > ARRAY_SIZE(args.buf) ) - return -E2BIG; - - args.domid = domid; - args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1); - - if ( copy_from_guest_offset(&args.buf[0], bufs, 0, args.nr_bufs) ) - return -EFAULT; - - rc = dm_op(&args); - - if ( rc == -ERESTART ) - rc = 
hypercall_create_continuation(__HYPERVISOR_dm_op, "iih", - domid, nr_bufs, bufs); - - return rc; -} - /* * Local variables: * mode: C diff --git a/xen/common/Makefile b/xen/common/Makefile index 2e3c234..71c1d46 100644 --- a/xen/common/Makefile +++ b/xen/common/Makefile @@ -5,6 +5,7 @@ obj-$(CONFIG_CORE_PARKING) += core_parking.o obj-y += cpu.o obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o +obj-$(CONFIG_IOREQ_SERVER) += dm.o obj-y += domain.o obj-y += event_2l.o obj-y += event_channel.o diff --git a/xen/common/dm.c b/xen/common/dm.c new file mode 100644 index 0000000..2d1d98c --- /dev/null +++ b/xen/common/dm.c @@ -0,0 +1,55 @@ +/* + * Copyright (c) 2016 Citrix Systems Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; If not, see . + */ + +#include +#include +#include +#include + +long do_dm_op(domid_t domid, + unsigned int nr_bufs, + XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs) +{ + struct dmop_args args; + int rc; + + if ( nr_bufs > ARRAY_SIZE(args.buf) ) + return -E2BIG; + + args.domid = domid; + args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1); + + if ( copy_from_guest_offset(&args.buf[0], bufs, 0, args.nr_bufs) ) + return -EFAULT; + + rc = dm_op(&args); + + if ( rc == -ERESTART ) + rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih", + domid, nr_bufs, bufs); + + return rc; +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index 4cb26e6..84bce36 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -223,7 +223,7 @@ static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf) { /* * If a guest frame has already been mapped (which may happen - * on demand if hvm_get_ioreq_server_info() is called), then + * on demand if ioreq_server_get_info() is called), then * allocating a page is not permitted. 
*/ if ( !gfn_eq(iorp->gfn, INVALID_GFN) ) @@ -591,8 +591,8 @@ static void hvm_ioreq_server_deinit(struct ioreq_server *s) put_domain(s->emulator); } -int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, - ioservid_t *id) +static int ioreq_server_create(struct domain *d, int bufioreq_handling, + ioservid_t *id) { struct ioreq_server *s; unsigned int i; @@ -647,7 +647,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, return rc; } -int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) +static int ioreq_server_destroy(struct domain *d, ioservid_t id) { struct ioreq_server *s; int rc; @@ -689,10 +689,10 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) return rc; } -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t *bufioreq_port) +static int ioreq_server_get_info(struct domain *d, ioservid_t id, + unsigned long *ioreq_gfn, + unsigned long *bufioreq_gfn, + evtchn_port_t *bufioreq_port) { struct ioreq_server *s; int rc; @@ -787,7 +787,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, return rc; } -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, +static int ioreq_server_map_io_range(struct domain *d, ioservid_t id, uint32_t type, uint64_t start, uint64_t end) { @@ -839,9 +839,9 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, return rc; } -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end) +static int ioreq_server_unmap_io_range(struct domain *d, ioservid_t id, + uint32_t type, uint64_t start, + uint64_t end) { struct ioreq_server *s; struct rangeset *r; @@ -899,8 +899,8 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, * Support for the emulation of read operations can be added when an ioreq * server has such requirement in the future. */ -int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint32_t flags) +int ioreq_server_map_mem_type(struct domain *d, ioservid_t id, + uint32_t type, uint32_t flags) { struct ioreq_server *s; int rc; @@ -934,8 +934,8 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, return rc; } -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled) +static int ioreq_server_set_state(struct domain *d, ioservid_t id, + bool enabled) { struct ioreq_server *s; int rc; @@ -1279,6 +1279,111 @@ void hvm_ioreq_init(struct domain *d) arch_ioreq_domain_init(d); } +int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op) +{ + long rc; + + switch ( op->op ) + { + case XEN_DMOP_create_ioreq_server: + { + struct xen_dm_op_create_ioreq_server *data = + &op->u.create_ioreq_server; + + *const_op = false; + + rc = -EINVAL; + if ( data->pad[0] || data->pad[1] || data->pad[2] ) + break; + + rc = ioreq_server_create(d, data->handle_bufioreq, + &data->id); + break; + } + + case XEN_DMOP_get_ioreq_server_info: + { + struct xen_dm_op_get_ioreq_server_info *data = + &op->u.get_ioreq_server_info; + const uint16_t valid_flags = XEN_DMOP_no_gfns; + + *const_op = false; + + rc = -EINVAL; + if ( data->flags & ~valid_flags ) + break; + + rc = ioreq_server_get_info(d, data->id, + (data->flags & XEN_DMOP_no_gfns) ? + NULL : (unsigned long *)&data->ioreq_gfn, + (data->flags & XEN_DMOP_no_gfns) ? 
+ NULL : (unsigned long *)&data->bufioreq_gfn, + &data->bufioreq_port); + break; + } + + case XEN_DMOP_map_io_range_to_ioreq_server: + { + const struct xen_dm_op_ioreq_server_range *data = + &op->u.map_io_range_to_ioreq_server; + + rc = -EINVAL; + if ( data->pad ) + break; + + rc = ioreq_server_map_io_range(d, data->id, data->type, + data->start, data->end); + break; + } + + case XEN_DMOP_unmap_io_range_from_ioreq_server: + { + const struct xen_dm_op_ioreq_server_range *data = + &op->u.unmap_io_range_from_ioreq_server; + + rc = -EINVAL; + if ( data->pad ) + break; + + rc = ioreq_server_unmap_io_range(d, data->id, data->type, + data->start, data->end); + break; + } + + case XEN_DMOP_set_ioreq_server_state: + { + const struct xen_dm_op_set_ioreq_server_state *data = + &op->u.set_ioreq_server_state; + + rc = -EINVAL; + if ( data->pad ) + break; + + rc = ioreq_server_set_state(d, data->id, !!data->enabled); + break; + } + + case XEN_DMOP_destroy_ioreq_server: + { + const struct xen_dm_op_destroy_ioreq_server *data = + &op->u.destroy_ioreq_server; + + rc = -EINVAL; + if ( data->pad ) + break; + + rc = ioreq_server_destroy(d, data->id); + break; + } + + default: + rc = -EOPNOTSUPP; + break; + } + + return rc; +} + /* * Local variables: * mode: C diff --git a/xen/include/xen/dm.h b/xen/include/xen/dm.h new file mode 100644 index 0000000..6bb8b6b --- /dev/null +++ b/xen/include/xen/dm.h @@ -0,0 +1,44 @@ +/* + * Copyright (c) 2016 Citrix Systems Inc. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; If not, see . + */ + +#ifndef __XEN_DM_H__ +#define __XEN_DM_H__ + +#include + +#include +#include + +struct dmop_args { + domid_t domid; + unsigned int nr_bufs; + /* Reserve enough buf elements for all current hypercalls. 
*/ + struct xen_dm_op_buf buf[2]; +}; + +int dm_op(const struct dmop_args *op_args); + +#endif /* __XEN_DM_H__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index 5a6c11d..60e864d 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -87,25 +87,10 @@ bool hvm_io_pending(struct vcpu *v); bool handle_hvm_io_completion(struct vcpu *v); bool is_ioreq_server_page(struct domain *d, const struct page_info *page); -int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling, - ioservid_t *id); -int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id); -int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id, - unsigned long *ioreq_gfn, - unsigned long *bufioreq_gfn, - evtchn_port_t *bufioreq_port); int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, unsigned long idx, mfn_t *mfn); -int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); -int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint64_t start, - uint64_t end); -int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, - uint32_t type, uint32_t flags); -int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, - bool enabled); +int ioreq_server_map_mem_type(struct domain *d, ioservid_t id, + uint32_t type, uint32_t flags); int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v); void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); @@ -119,6 +104,8 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered); void hvm_ioreq_init(struct domain *d); +int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op); + bool arch_ioreq_complete_mmio(void); bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion); int arch_ioreq_server_map_pages(struct ioreq_server *s); diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h index fa40e88..10739e7 100644 --- a/xen/include/xsm/dummy.h +++ b/xen/include/xsm/dummy.h @@ -707,14 +707,14 @@ static XSM_INLINE int xsm_pmu_op (XSM_DEFAULT_ARG struct domain *d, unsigned int } } +#endif /* CONFIG_X86 */ + static XSM_INLINE int xsm_dm_op(XSM_DEFAULT_ARG struct domain *d) { XSM_ASSERT_ACTION(XSM_DM_PRIV); return xsm_default_action(action, current->domain, d); } -#endif /* CONFIG_X86 */ - #ifdef CONFIG_ARGO static XSM_INLINE int xsm_argo_enable(const struct domain *d) { diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h index 7bd03d8..91ecff4 100644 --- a/xen/include/xsm/xsm.h +++ b/xen/include/xsm/xsm.h @@ -176,8 +176,8 @@ struct xsm_operations { int (*ioport_permission) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow); int (*ioport_mapping) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow); int (*pmu_op) (struct domain *d, unsigned int op); - int (*dm_op) (struct domain *d); #endif + int (*dm_op) (struct domain *d); int (*xen_version) (uint32_t cmd); int (*domain_resource_map) (struct domain *d); #ifdef CONFIG_ARGO @@ -682,13 +682,13 @@ static inline int xsm_pmu_op (xsm_default_t def, struct domain *d, unsigned int return xsm_ops->pmu_op(d, op); } +#endif /* CONFIG_X86 */ + static inline int xsm_dm_op(xsm_default_t def, struct domain *d) { return xsm_ops->dm_op(d); } -#endif /* CONFIG_X86 */ - static inline int xsm_xen_version (xsm_default_t def, uint32_t op) { return 
xsm_ops->xen_version(op); diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c index 9e09512..8bdffe7 100644 --- a/xen/xsm/dummy.c +++ b/xen/xsm/dummy.c @@ -147,8 +147,8 @@ void __init xsm_fixup_ops (struct xsm_operations *ops) set_to_dummy_if_null(ops, ioport_permission); set_to_dummy_if_null(ops, ioport_mapping); set_to_dummy_if_null(ops, pmu_op); - set_to_dummy_if_null(ops, dm_op); #endif + set_to_dummy_if_null(ops, dm_op); set_to_dummy_if_null(ops, xen_version); set_to_dummy_if_null(ops, domain_resource_map); #ifdef CONFIG_ARGO diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c index 19b0d9e..11784d7 100644 --- a/xen/xsm/flask/hooks.c +++ b/xen/xsm/flask/hooks.c @@ -1656,14 +1656,13 @@ static int flask_pmu_op (struct domain *d, unsigned int op) return -EPERM; } } +#endif /* CONFIG_X86 */ static int flask_dm_op(struct domain *d) { return current_has_perm(d, SECCLASS_HVM, HVM__DM); } -#endif /* CONFIG_X86 */ - static int flask_xen_version (uint32_t op) { u32 dsid = domain_sid(current->domain); @@ -1865,8 +1864,8 @@ static struct xsm_operations flask_ops = { .ioport_permission = flask_ioport_permission, .ioport_mapping = flask_ioport_mapping, .pmu_op = flask_pmu_op, - .dm_op = flask_dm_op, #endif + .dm_op = flask_dm_op, .xen_version = flask_xen_version, .domain_resource_map = flask_domain_resource_map, #ifdef CONFIG_ARGO
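For context, a stand-alone model (not Xen code) of the dispatch shape the patch above produces: the arch-specific dm_op() consumes the ops it knows and defers everything else to the common handler, which in turn returns -EOPNOTSUPP for ops unknown to both layers. All names below are invented for the sketch; only the fall-through structure mirrors dm_op()/ioreq_server_dm_op():

    #include <errno.h>

    enum { OP_TRACK_DIRTY_VRAM = 1, OP_CREATE_IOREQ_SERVER = 2 };

    /* Common handler: knows only the IOREQ server ops. */
    static int common_dm_op(unsigned int op)
    {
        switch ( op )
        {
        case OP_CREATE_IOREQ_SERVER:
            return 0;              /* handled by common code */
        default:
            return -EOPNOTSUPP;    /* unknown to common code, too */
        }
    }

    /* Arch handler: consumes arch-only ops, defers the rest. */
    static int arch_dm_op(unsigned int op)
    {
        switch ( op )
        {
        case OP_TRACK_DIRTY_VRAM:  /* x86-only in the real code */
            return 0;
        default:
            return common_dm_op(op);
        }
    }

    int main(void)
    {
        /* 0: the op falls through to the common handler and succeeds. */
        return arch_dm_op(OP_CREATE_IOREQ_SERVER);
    }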
From patchwork Fri Jan 29 01:48:38 2021
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu, George Dunlap, Ian Jackson, Julien Grall, Stefano Stabellini, Jun Nakajima, Kevin Tian, Julien Grall
Subject: [PATCH V6 10/24] xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu
Date: Fri, 29 Jan 2021 03:48:38 +0200
Message-Id: <1611884932-1851-11-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

The IOREQ is a common feature now and these fields will be used on Arm as-is. Move them to the common struct vcpu as part of a new struct vcpu_io, and drop the duplicated "io" prefixes. Also move enum hvm_io_completion to xen/sched.h and remove the "hvm" prefixes.

This patch completely removes the layering violation in the common code.
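The shape this introduces, paraphrased from the xen/sched.h hunk further down: the generic in-flight I/O state hangs off struct vcpu, while the x86-only MMIO emulation state stays behind in struct hvm_vcpu_io.

    enum vio_completion {
        VIO_no_completion,
        VIO_mmio_completion,
        VIO_pio_completion,
    #ifdef CONFIG_X86
        VIO_realmode_completion,   /* the only arch-specific variant */
    #endif
    };

    struct vcpu_io {
        /* I/O request in flight to device model. */
        enum vio_completion completion;
        ioreq_t req;
    };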
Signed-off-by: Oleksandr Tyshchenko Reviewed-by: Julien Grall Reviewed-by: Paul Durrant Acked-by: Jan Beulich CC: Julien Grall [On Arm only] Tested-by: Wei Chen --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes V1 -> V2: - new patch Changes V2 -> V3: - update patch according the "legacy interface" is x86 specific - update patch description - drop the "io" prefixes from the field names - wrap IO_realmode_completion Changes V3 -> V4: - rename all hvm_vcpu_io locals to "hvio" - rename according to the new renaming scheme IO_ -> VIO_ (io_ -> vio_) - drop "io" prefix from io_completion locals Changes V4 -> V5: - add Julien's and Paul's R-b, Jan's A-b Changes V5 -> V6: - no changes --- --- xen/arch/x86/hvm/emulate.c | 210 +++++++++++++++++++------------------- xen/arch/x86/hvm/hvm.c | 2 +- xen/arch/x86/hvm/io.c | 32 +++--- xen/arch/x86/hvm/ioreq.c | 6 +- xen/arch/x86/hvm/svm/nestedsvm.c | 2 +- xen/arch/x86/hvm/vmx/realmode.c | 8 +- xen/common/ioreq.c | 26 ++--- xen/include/asm-x86/hvm/emulate.h | 2 +- xen/include/asm-x86/hvm/vcpu.h | 11 -- xen/include/xen/ioreq.h | 2 +- xen/include/xen/sched.h | 19 ++++ 11 files changed, 164 insertions(+), 156 deletions(-) diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c index 4d62199..21051ce 100644 --- a/xen/arch/x86/hvm/emulate.c +++ b/xen/arch/x86/hvm/emulate.c @@ -140,15 +140,15 @@ static const struct hvm_io_handler ioreq_server_handler = { */ void hvmemul_cancel(struct vcpu *v) { - struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio = &v->arch.hvm.hvm_io; - vio->io_req.state = STATE_IOREQ_NONE; - vio->io_completion = HVMIO_no_completion; - vio->mmio_cache_count = 0; - vio->mmio_insn_bytes = 0; - vio->mmio_access = (struct npfec){}; - vio->mmio_retry = false; - vio->g2m_ioport = NULL; + v->io.req.state = STATE_IOREQ_NONE; + v->io.completion = VIO_no_completion; + hvio->mmio_cache_count = 0; + hvio->mmio_insn_bytes = 0; + hvio->mmio_access = (struct npfec){}; + hvio->mmio_retry = false; + hvio->g2m_ioport = NULL; hvmemul_cache_disable(v); } @@ -159,7 +159,7 @@ static int hvmemul_do_io( { struct vcpu *curr = current; struct domain *currd = curr->domain; - struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io; + struct vcpu_io *vio = &curr->io; ioreq_t p = { .type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO, .addr = addr, @@ -184,13 +184,13 @@ static int hvmemul_do_io( return X86EMUL_UNHANDLEABLE; } - switch ( vio->io_req.state ) + switch ( vio->req.state ) { case STATE_IOREQ_NONE: break; case STATE_IORESP_READY: - vio->io_req.state = STATE_IOREQ_NONE; - p = vio->io_req; + vio->req.state = STATE_IOREQ_NONE; + p = vio->req; /* Verify the emulation request has been correctly re-issued */ if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO)) || @@ -238,7 +238,7 @@ static int hvmemul_do_io( } ASSERT(p.count); - vio->io_req = p; + vio->req = p; rc = hvm_io_intercept(&p); @@ -247,12 +247,12 @@ static int hvmemul_do_io( * our callers and mirror this into latched state. 
*/ ASSERT(p.count <= *reps); - *reps = vio->io_req.count = p.count; + *reps = vio->req.count = p.count; switch ( rc ) { case X86EMUL_OKAY: - vio->io_req.state = STATE_IOREQ_NONE; + vio->req.state = STATE_IOREQ_NONE; break; case X86EMUL_UNHANDLEABLE: { @@ -305,7 +305,7 @@ static int hvmemul_do_io( if ( s == NULL ) { rc = X86EMUL_RETRY; - vio->io_req.state = STATE_IOREQ_NONE; + vio->req.state = STATE_IOREQ_NONE; break; } @@ -316,7 +316,7 @@ static int hvmemul_do_io( if ( dir == IOREQ_READ ) { rc = hvm_process_io_intercept(&ioreq_server_handler, &p); - vio->io_req.state = STATE_IOREQ_NONE; + vio->req.state = STATE_IOREQ_NONE; break; } } @@ -329,14 +329,14 @@ static int hvmemul_do_io( if ( !s ) { rc = hvm_process_io_intercept(&null_handler, &p); - vio->io_req.state = STATE_IOREQ_NONE; + vio->req.state = STATE_IOREQ_NONE; } else { rc = hvm_send_ioreq(s, &p, 0); if ( rc != X86EMUL_RETRY || currd->is_shutting_down ) - vio->io_req.state = STATE_IOREQ_NONE; - else if ( !ioreq_needs_completion(&vio->io_req) ) + vio->req.state = STATE_IOREQ_NONE; + else if ( !ioreq_needs_completion(&vio->req) ) rc = X86EMUL_OKAY; } break; @@ -1005,14 +1005,14 @@ static int hvmemul_phys_mmio_access( * cache indexed by linear MMIO address. */ static struct hvm_mmio_cache *hvmemul_find_mmio_cache( - struct hvm_vcpu_io *vio, unsigned long gla, uint8_t dir, bool create) + struct hvm_vcpu_io *hvio, unsigned long gla, uint8_t dir, bool create) { unsigned int i; struct hvm_mmio_cache *cache; - for ( i = 0; i < vio->mmio_cache_count; i ++ ) + for ( i = 0; i < hvio->mmio_cache_count; i ++ ) { - cache = &vio->mmio_cache[i]; + cache = &hvio->mmio_cache[i]; if ( gla == cache->gla && dir == cache->dir ) @@ -1022,13 +1022,13 @@ static struct hvm_mmio_cache *hvmemul_find_mmio_cache( if ( !create ) return NULL; - i = vio->mmio_cache_count; - if( i == ARRAY_SIZE(vio->mmio_cache) ) + i = hvio->mmio_cache_count; + if( i == ARRAY_SIZE(hvio->mmio_cache) ) return NULL; - ++vio->mmio_cache_count; + ++hvio->mmio_cache_count; - cache = &vio->mmio_cache[i]; + cache = &hvio->mmio_cache[i]; memset(cache, 0, sizeof (*cache)); cache->gla = gla; @@ -1037,26 +1037,26 @@ static struct hvm_mmio_cache *hvmemul_find_mmio_cache( return cache; } -static void latch_linear_to_phys(struct hvm_vcpu_io *vio, unsigned long gla, +static void latch_linear_to_phys(struct hvm_vcpu_io *hvio, unsigned long gla, unsigned long gpa, bool_t write) { - if ( vio->mmio_access.gla_valid ) + if ( hvio->mmio_access.gla_valid ) return; - vio->mmio_gla = gla & PAGE_MASK; - vio->mmio_gpfn = PFN_DOWN(gpa); - vio->mmio_access = (struct npfec){ .gla_valid = 1, - .read_access = 1, - .write_access = write }; + hvio->mmio_gla = gla & PAGE_MASK; + hvio->mmio_gpfn = PFN_DOWN(gpa); + hvio->mmio_access = (struct npfec){ .gla_valid = 1, + .read_access = 1, + .write_access = write }; } static int hvmemul_linear_mmio_access( unsigned long gla, unsigned int size, uint8_t dir, void *buffer, uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt, bool_t known_gpfn) { - struct hvm_vcpu_io *vio = ¤t->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio = ¤t->arch.hvm.hvm_io; unsigned long offset = gla & ~PAGE_MASK; - struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(vio, gla, dir, true); + struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(hvio, gla, dir, true); unsigned int chunk, buffer_offset = 0; paddr_t gpa; unsigned long one_rep = 1; @@ -1068,7 +1068,7 @@ static int hvmemul_linear_mmio_access( chunk = min_t(unsigned int, size, PAGE_SIZE - offset); if ( known_gpfn ) - gpa = 
pfn_to_paddr(vio->mmio_gpfn) | offset; + gpa = pfn_to_paddr(hvio->mmio_gpfn) | offset; else { rc = hvmemul_linear_to_phys(gla, &gpa, chunk, &one_rep, pfec, @@ -1076,7 +1076,7 @@ static int hvmemul_linear_mmio_access( if ( rc != X86EMUL_OKAY ) return rc; - latch_linear_to_phys(vio, gla, gpa, dir == IOREQ_WRITE); + latch_linear_to_phys(hvio, gla, gpa, dir == IOREQ_WRITE); } for ( ;; ) @@ -1122,22 +1122,22 @@ static inline int hvmemul_linear_mmio_write( static bool known_gla(unsigned long addr, unsigned int bytes, uint32_t pfec) { - const struct hvm_vcpu_io *vio = ¤t->arch.hvm.hvm_io; + const struct hvm_vcpu_io *hvio = ¤t->arch.hvm.hvm_io; if ( pfec & PFEC_write_access ) { - if ( !vio->mmio_access.write_access ) + if ( !hvio->mmio_access.write_access ) return false; } else if ( pfec & PFEC_insn_fetch ) { - if ( !vio->mmio_access.insn_fetch ) + if ( !hvio->mmio_access.insn_fetch ) return false; } - else if ( !vio->mmio_access.read_access ) + else if ( !hvio->mmio_access.read_access ) return false; - return (vio->mmio_gla == (addr & PAGE_MASK) && + return (hvio->mmio_gla == (addr & PAGE_MASK) && (addr & ~PAGE_MASK) + bytes <= PAGE_SIZE); } @@ -1145,7 +1145,7 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data, uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt) { pagefault_info_t pfinfo; - struct hvm_vcpu_io *vio = ¤t->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio = ¤t->arch.hvm.hvm_io; unsigned int offset = addr & ~PAGE_MASK; int rc = HVMTRANS_bad_gfn_to_mfn; @@ -1167,7 +1167,7 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data, * we handle this access in the same way to guarantee completion and hence * clean up any interim state. */ - if ( !hvmemul_find_mmio_cache(vio, addr, IOREQ_READ, false) ) + if ( !hvmemul_find_mmio_cache(hvio, addr, IOREQ_READ, false) ) rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo); switch ( rc ) @@ -1200,7 +1200,7 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data, uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt) { pagefault_info_t pfinfo; - struct hvm_vcpu_io *vio = ¤t->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio = ¤t->arch.hvm.hvm_io; unsigned int offset = addr & ~PAGE_MASK; int rc = HVMTRANS_bad_gfn_to_mfn; @@ -1222,7 +1222,7 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data, * we handle this access in the same way to guarantee completion and hence * clean up any interim state. */ - if ( !hvmemul_find_mmio_cache(vio, addr, IOREQ_WRITE, false) ) + if ( !hvmemul_find_mmio_cache(hvio, addr, IOREQ_WRITE, false) ) rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo); switch ( rc ) @@ -1599,7 +1599,7 @@ static int hvmemul_cmpxchg( struct vcpu *curr = current; unsigned long addr; uint32_t pfec = PFEC_page_present | PFEC_write_access; - struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio = &curr->arch.hvm.hvm_io; int rc; void *mapping = NULL; @@ -1625,8 +1625,8 @@ static int hvmemul_cmpxchg( /* Fix this in case the guest is really relying on r-m-w atomicity. 
*/ return hvmemul_linear_mmio_write(addr, bytes, p_new, pfec, hvmemul_ctxt, - vio->mmio_access.write_access && - vio->mmio_gla == (addr & PAGE_MASK)); + hvio->mmio_access.write_access && + hvio->mmio_gla == (addr & PAGE_MASK)); } switch ( bytes ) @@ -1823,7 +1823,7 @@ static int hvmemul_rep_movs( struct hvm_emulate_ctxt *hvmemul_ctxt = container_of(ctxt, struct hvm_emulate_ctxt, ctxt); struct vcpu *curr = current; - struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio = &curr->arch.hvm.hvm_io; unsigned long saddr, daddr, bytes; paddr_t sgpa, dgpa; uint32_t pfec = PFEC_page_present; @@ -1846,18 +1846,18 @@ static int hvmemul_rep_movs( if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 ) pfec |= PFEC_user_mode; - if ( vio->mmio_access.read_access && - (vio->mmio_gla == (saddr & PAGE_MASK)) && + if ( hvio->mmio_access.read_access && + (hvio->mmio_gla == (saddr & PAGE_MASK)) && /* * Upon initial invocation don't truncate large batches just because * of a hit for the translation: Doing the guest page table walk is * cheaper than multiple round trips through the device model. Yet * when processing a response we can always re-use the translation. */ - (vio->io_req.state == STATE_IORESP_READY || + (curr->io.req.state == STATE_IORESP_READY || ((!df || *reps == 1) && PAGE_SIZE - (saddr & ~PAGE_MASK) >= *reps * bytes_per_rep)) ) - sgpa = pfn_to_paddr(vio->mmio_gpfn) | (saddr & ~PAGE_MASK); + sgpa = pfn_to_paddr(hvio->mmio_gpfn) | (saddr & ~PAGE_MASK); else { rc = hvmemul_linear_to_phys(saddr, &sgpa, bytes_per_rep, reps, pfec, @@ -1867,13 +1867,13 @@ static int hvmemul_rep_movs( } bytes = PAGE_SIZE - (daddr & ~PAGE_MASK); - if ( vio->mmio_access.write_access && - (vio->mmio_gla == (daddr & PAGE_MASK)) && + if ( hvio->mmio_access.write_access && + (hvio->mmio_gla == (daddr & PAGE_MASK)) && /* See comment above. */ - (vio->io_req.state == STATE_IORESP_READY || + (curr->io.req.state == STATE_IORESP_READY || ((!df || *reps == 1) && PAGE_SIZE - (daddr & ~PAGE_MASK) >= *reps * bytes_per_rep)) ) - dgpa = pfn_to_paddr(vio->mmio_gpfn) | (daddr & ~PAGE_MASK); + dgpa = pfn_to_paddr(hvio->mmio_gpfn) | (daddr & ~PAGE_MASK); else { rc = hvmemul_linear_to_phys(daddr, &dgpa, bytes_per_rep, reps, @@ -1892,14 +1892,14 @@ static int hvmemul_rep_movs( if ( sp2mt == p2m_mmio_dm ) { - latch_linear_to_phys(vio, saddr, sgpa, 0); + latch_linear_to_phys(hvio, saddr, sgpa, 0); return hvmemul_do_mmio_addr( sgpa, reps, bytes_per_rep, IOREQ_READ, df, dgpa); } if ( dp2mt == p2m_mmio_dm ) { - latch_linear_to_phys(vio, daddr, dgpa, 1); + latch_linear_to_phys(hvio, daddr, dgpa, 1); return hvmemul_do_mmio_addr( dgpa, reps, bytes_per_rep, IOREQ_WRITE, df, sgpa); } @@ -1992,7 +1992,7 @@ static int hvmemul_rep_stos( struct hvm_emulate_ctxt *hvmemul_ctxt = container_of(ctxt, struct hvm_emulate_ctxt, ctxt); struct vcpu *curr = current; - struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio = &curr->arch.hvm.hvm_io; unsigned long addr, bytes; paddr_t gpa; p2m_type_t p2mt; @@ -2004,13 +2004,13 @@ static int hvmemul_rep_stos( return rc; bytes = PAGE_SIZE - (addr & ~PAGE_MASK); - if ( vio->mmio_access.write_access && - (vio->mmio_gla == (addr & PAGE_MASK)) && + if ( hvio->mmio_access.write_access && + (hvio->mmio_gla == (addr & PAGE_MASK)) && /* See respective comment in MOVS processing. 
*/ - (vio->io_req.state == STATE_IORESP_READY || + (curr->io.req.state == STATE_IORESP_READY || ((!df || *reps == 1) && PAGE_SIZE - (addr & ~PAGE_MASK) >= *reps * bytes_per_rep)) ) - gpa = pfn_to_paddr(vio->mmio_gpfn) | (addr & ~PAGE_MASK); + gpa = pfn_to_paddr(hvio->mmio_gpfn) | (addr & ~PAGE_MASK); else { uint32_t pfec = PFEC_page_present | PFEC_write_access; @@ -2103,7 +2103,7 @@ static int hvmemul_rep_stos( return X86EMUL_UNHANDLEABLE; case p2m_mmio_dm: - latch_linear_to_phys(vio, addr, gpa, 1); + latch_linear_to_phys(hvio, addr, gpa, 1); return hvmemul_do_mmio_buffer(gpa, reps, bytes_per_rep, IOREQ_WRITE, df, p_data); } @@ -2613,18 +2613,18 @@ static const struct x86_emulate_ops hvm_emulate_ops_no_write = { }; /* - * Note that passing HVMIO_no_completion into this function serves as kind + * Note that passing VIO_no_completion into this function serves as kind * of (but not fully) an "auto select completion" indicator. When there's * no completion needed, the passed in value will be ignored in any case. */ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt, const struct x86_emulate_ops *ops, - enum hvm_io_completion completion) + enum vio_completion completion) { const struct cpu_user_regs *regs = hvmemul_ctxt->ctxt.regs; struct vcpu *curr = current; uint32_t new_intr_shadow; - struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio = &curr->arch.hvm.hvm_io; int rc; /* @@ -2632,45 +2632,45 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt, * untouched if it's already enabled, for re-execution to consume * entries populated by an earlier pass. */ - if ( vio->cache->num_ents > vio->cache->max_ents ) + if ( hvio->cache->num_ents > hvio->cache->max_ents ) { - ASSERT(vio->io_req.state == STATE_IOREQ_NONE); - vio->cache->num_ents = 0; + ASSERT(curr->io.req.state == STATE_IOREQ_NONE); + hvio->cache->num_ents = 0; } else - ASSERT(vio->io_req.state == STATE_IORESP_READY); + ASSERT(curr->io.req.state == STATE_IORESP_READY); - hvm_emulate_init_per_insn(hvmemul_ctxt, vio->mmio_insn, - vio->mmio_insn_bytes); + hvm_emulate_init_per_insn(hvmemul_ctxt, hvio->mmio_insn, + hvio->mmio_insn_bytes); - vio->mmio_retry = 0; + hvio->mmio_retry = 0; rc = x86_emulate(&hvmemul_ctxt->ctxt, ops); - if ( rc == X86EMUL_OKAY && vio->mmio_retry ) + if ( rc == X86EMUL_OKAY && hvio->mmio_retry ) rc = X86EMUL_RETRY; - if ( !ioreq_needs_completion(&vio->io_req) ) - completion = HVMIO_no_completion; - else if ( completion == HVMIO_no_completion ) - completion = (vio->io_req.type != IOREQ_TYPE_PIO || - hvmemul_ctxt->is_mem_access) ? HVMIO_mmio_completion - : HVMIO_pio_completion; + if ( !ioreq_needs_completion(&curr->io.req) ) + completion = VIO_no_completion; + else if ( completion == VIO_no_completion ) + completion = (curr->io.req.type != IOREQ_TYPE_PIO || + hvmemul_ctxt->is_mem_access) ? 
VIO_mmio_completion + : VIO_pio_completion; - switch ( vio->io_completion = completion ) + switch ( curr->io.completion = completion ) { - case HVMIO_no_completion: - case HVMIO_pio_completion: - vio->mmio_cache_count = 0; - vio->mmio_insn_bytes = 0; - vio->mmio_access = (struct npfec){}; + case VIO_no_completion: + case VIO_pio_completion: + hvio->mmio_cache_count = 0; + hvio->mmio_insn_bytes = 0; + hvio->mmio_access = (struct npfec){}; hvmemul_cache_disable(curr); break; - case HVMIO_mmio_completion: - case HVMIO_realmode_completion: - BUILD_BUG_ON(sizeof(vio->mmio_insn) < sizeof(hvmemul_ctxt->insn_buf)); - vio->mmio_insn_bytes = hvmemul_ctxt->insn_buf_bytes; - memcpy(vio->mmio_insn, hvmemul_ctxt->insn_buf, vio->mmio_insn_bytes); + case VIO_mmio_completion: + case VIO_realmode_completion: + BUILD_BUG_ON(sizeof(hvio->mmio_insn) < sizeof(hvmemul_ctxt->insn_buf)); + hvio->mmio_insn_bytes = hvmemul_ctxt->insn_buf_bytes; + memcpy(hvio->mmio_insn, hvmemul_ctxt->insn_buf, hvio->mmio_insn_bytes); break; default: @@ -2716,7 +2716,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt, int hvm_emulate_one( struct hvm_emulate_ctxt *hvmemul_ctxt, - enum hvm_io_completion completion) + enum vio_completion completion) { return _hvm_emulate_one(hvmemul_ctxt, &hvm_emulate_ops, completion); } @@ -2754,7 +2754,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla) guest_cpu_user_regs()); ctxt.ctxt.data = &mmio_ro_ctxt; - switch ( rc = _hvm_emulate_one(&ctxt, ops, HVMIO_no_completion) ) + switch ( rc = _hvm_emulate_one(&ctxt, ops, VIO_no_completion) ) { case X86EMUL_UNHANDLEABLE: case X86EMUL_UNIMPLEMENTED: @@ -2782,28 +2782,28 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr, { case EMUL_KIND_NOWRITE: rc = _hvm_emulate_one(&ctx, &hvm_emulate_ops_no_write, - HVMIO_no_completion); + VIO_no_completion); break; case EMUL_KIND_SET_CONTEXT_INSN: { struct vcpu *curr = current; - struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio = &curr->arch.hvm.hvm_io; - BUILD_BUG_ON(sizeof(vio->mmio_insn) != + BUILD_BUG_ON(sizeof(hvio->mmio_insn) != sizeof(curr->arch.vm_event->emul.insn.data)); - ASSERT(!vio->mmio_insn_bytes); + ASSERT(!hvio->mmio_insn_bytes); /* * Stash insn buffer into mmio buffer here instead of ctx * to avoid having to add more logic to hvm_emulate_one. 
*/ - vio->mmio_insn_bytes = sizeof(vio->mmio_insn); - memcpy(vio->mmio_insn, curr->arch.vm_event->emul.insn.data, - vio->mmio_insn_bytes); + hvio->mmio_insn_bytes = sizeof(hvio->mmio_insn); + memcpy(hvio->mmio_insn, curr->arch.vm_event->emul.insn.data, + hvio->mmio_insn_bytes); } /* Fall-through */ default: ctx.set_context = (kind == EMUL_KIND_SET_CONTEXT_DATA); - rc = hvm_emulate_one(&ctx, HVMIO_no_completion); + rc = hvm_emulate_one(&ctx, VIO_no_completion); } switch ( rc ) diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c index bc96947..4ed929c 100644 --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -3800,7 +3800,7 @@ void hvm_ud_intercept(struct cpu_user_regs *regs) return; } - switch ( hvm_emulate_one(&ctxt, HVMIO_no_completion) ) + switch ( hvm_emulate_one(&ctxt, VIO_no_completion) ) { case X86EMUL_UNHANDLEABLE: case X86EMUL_UNIMPLEMENTED: diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c index ef8286b..dd733e1 100644 --- a/xen/arch/x86/hvm/io.c +++ b/xen/arch/x86/hvm/io.c @@ -85,7 +85,7 @@ bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr) hvm_emulate_init_once(&ctxt, validate, guest_cpu_user_regs()); - switch ( rc = hvm_emulate_one(&ctxt, HVMIO_no_completion) ) + switch ( rc = hvm_emulate_one(&ctxt, VIO_no_completion) ) { case X86EMUL_UNHANDLEABLE: hvm_dump_emulation_state(XENLOG_G_WARNING, descr, &ctxt, rc); @@ -109,20 +109,20 @@ bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr) bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn, struct npfec access) { - struct hvm_vcpu_io *vio = ¤t->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio = ¤t->arch.hvm.hvm_io; - vio->mmio_access = access.gla_valid && - access.kind == npfec_kind_with_gla - ? access : (struct npfec){}; - vio->mmio_gla = gla & PAGE_MASK; - vio->mmio_gpfn = gpfn; + hvio->mmio_access = access.gla_valid && + access.kind == npfec_kind_with_gla + ? 
access : (struct npfec){}; + hvio->mmio_gla = gla & PAGE_MASK; + hvio->mmio_gpfn = gpfn; return handle_mmio(); } bool handle_pio(uint16_t port, unsigned int size, int dir) { struct vcpu *curr = current; - struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io; + struct vcpu_io *vio = &curr->io; unsigned int data; int rc; @@ -135,8 +135,8 @@ bool handle_pio(uint16_t port, unsigned int size, int dir) rc = hvmemul_do_pio_buffer(port, size, dir, &data); - if ( ioreq_needs_completion(&vio->io_req) ) - vio->io_completion = HVMIO_pio_completion; + if ( ioreq_needs_completion(&vio->req) ) + vio->completion = VIO_pio_completion; switch ( rc ) { @@ -175,7 +175,7 @@ static bool_t g2m_portio_accept(const struct hvm_io_handler *handler, { struct vcpu *curr = current; const struct hvm_domain *hvm = &curr->domain->arch.hvm; - struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio = &curr->arch.hvm.hvm_io; struct g2m_ioport *g2m_ioport; unsigned int start, end; @@ -185,7 +185,7 @@ static bool_t g2m_portio_accept(const struct hvm_io_handler *handler, end = start + g2m_ioport->np; if ( (p->addr >= start) && (p->addr + p->size <= end) ) { - vio->g2m_ioport = g2m_ioport; + hvio->g2m_ioport = g2m_ioport; return 1; } } @@ -196,8 +196,8 @@ static bool_t g2m_portio_accept(const struct hvm_io_handler *handler, static int g2m_portio_read(const struct hvm_io_handler *handler, uint64_t addr, uint32_t size, uint64_t *data) { - struct hvm_vcpu_io *vio = ¤t->arch.hvm.hvm_io; - const struct g2m_ioport *g2m_ioport = vio->g2m_ioport; + struct hvm_vcpu_io *hvio = ¤t->arch.hvm.hvm_io; + const struct g2m_ioport *g2m_ioport = hvio->g2m_ioport; unsigned int mport = (addr - g2m_ioport->gport) + g2m_ioport->mport; switch ( size ) @@ -221,8 +221,8 @@ static int g2m_portio_read(const struct hvm_io_handler *handler, static int g2m_portio_write(const struct hvm_io_handler *handler, uint64_t addr, uint32_t size, uint64_t data) { - struct hvm_vcpu_io *vio = ¤t->arch.hvm.hvm_io; - const struct g2m_ioport *g2m_ioport = vio->g2m_ioport; + struct hvm_vcpu_io *hvio = ¤t->arch.hvm.hvm_io; + const struct g2m_ioport *g2m_ioport = hvio->g2m_ioport; unsigned int mport = (addr - g2m_ioport->gport) + g2m_ioport->mport; switch ( size ) diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index 0cadf34..62a4b33 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -40,11 +40,11 @@ bool arch_ioreq_complete_mmio(void) return handle_mmio(); } -bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion) +bool arch_vcpu_ioreq_completion(enum vio_completion completion) { - switch ( io_completion ) + switch ( completion ) { - case HVMIO_realmode_completion: + case VIO_realmode_completion: { struct hvm_emulate_ctxt ctxt; diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c index fcfccf7..6d90630 100644 --- a/xen/arch/x86/hvm/svm/nestedsvm.c +++ b/xen/arch/x86/hvm/svm/nestedsvm.c @@ -1266,7 +1266,7 @@ enum hvm_intblk nsvm_intr_blocked(struct vcpu *v) * Delay the injection because this would result in delivering * an interrupt *within* the execution of an instruction. 
*/ - if ( v->arch.hvm.hvm_io.io_req.state != STATE_IOREQ_NONE ) + if ( v->io.req.state != STATE_IOREQ_NONE ) return hvm_intblk_shadow; if ( !nv->nv_vmexit_pending && n2vmcb->exit_int_info.v ) diff --git a/xen/arch/x86/hvm/vmx/realmode.c b/xen/arch/x86/hvm/vmx/realmode.c index 768f01e..cc23afa 100644 --- a/xen/arch/x86/hvm/vmx/realmode.c +++ b/xen/arch/x86/hvm/vmx/realmode.c @@ -101,7 +101,7 @@ void vmx_realmode_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt) perfc_incr(realmode_emulations); - rc = hvm_emulate_one(hvmemul_ctxt, HVMIO_realmode_completion); + rc = hvm_emulate_one(hvmemul_ctxt, VIO_realmode_completion); if ( rc == X86EMUL_UNHANDLEABLE ) { @@ -153,7 +153,7 @@ void vmx_realmode(struct cpu_user_regs *regs) struct vcpu *curr = current; struct hvm_emulate_ctxt hvmemul_ctxt; struct segment_register *sreg; - struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io; + struct hvm_vcpu_io *hvio = &curr->arch.hvm.hvm_io; unsigned long intr_info; unsigned int emulations = 0; @@ -188,7 +188,7 @@ void vmx_realmode(struct cpu_user_regs *regs) vmx_realmode_emulate_one(&hvmemul_ctxt); - if ( vio->io_req.state != STATE_IOREQ_NONE || vio->mmio_retry ) + if ( curr->io.req.state != STATE_IOREQ_NONE || hvio->mmio_retry ) break; /* Stop emulating unless our segment state is not safe */ @@ -202,7 +202,7 @@ void vmx_realmode(struct cpu_user_regs *regs) } /* Need to emulate next time if we've started an IO operation */ - if ( vio->io_req.state != STATE_IOREQ_NONE ) + if ( curr->io.req.state != STATE_IOREQ_NONE ) curr->arch.hvm.vmx.vmx_emulate = 1; if ( !curr->arch.hvm.vmx.vmx_emulate && !curr->arch.hvm.vmx.vmx_realmode ) diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index 84bce36..ce3ef59 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -159,7 +159,7 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p) break; } - p = &sv->vcpu->arch.hvm.hvm_io.io_req; + p = &sv->vcpu->io.req; if ( ioreq_needs_completion(p) ) p->data = data; @@ -171,10 +171,10 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p) bool handle_hvm_io_completion(struct vcpu *v) { struct domain *d = v->domain; - struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io; + struct vcpu_io *vio = &v->io; struct ioreq_server *s; struct ioreq_vcpu *sv; - enum hvm_io_completion io_completion; + enum vio_completion completion; if ( has_vpci(d) && vpci_process_pending(v) ) { @@ -186,29 +186,29 @@ bool handle_hvm_io_completion(struct vcpu *v) if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) ) return false; - vio->io_req.state = ioreq_needs_completion(&vio->io_req) ? + vio->req.state = ioreq_needs_completion(&vio->req) ? 
STATE_IORESP_READY : STATE_IOREQ_NONE; msix_write_completion(v); vcpu_end_shutdown_deferral(v); - io_completion = vio->io_completion; - vio->io_completion = HVMIO_no_completion; + completion = vio->completion; + vio->completion = VIO_no_completion; - switch ( io_completion ) + switch ( completion ) { - case HVMIO_no_completion: + case VIO_no_completion: break; - case HVMIO_mmio_completion: + case VIO_mmio_completion: return arch_ioreq_complete_mmio(); - case HVMIO_pio_completion: - return handle_pio(vio->io_req.addr, vio->io_req.size, - vio->io_req.dir); + case VIO_pio_completion: + return handle_pio(vio->req.addr, vio->req.size, + vio->req.dir); default: - return arch_vcpu_ioreq_completion(io_completion); + return arch_vcpu_ioreq_completion(completion); } return true; diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h index 1620cc7..610078b 100644 --- a/xen/include/asm-x86/hvm/emulate.h +++ b/xen/include/asm-x86/hvm/emulate.h @@ -65,7 +65,7 @@ bool __nonnull(1, 2) hvm_emulate_one_insn( const char *descr); int hvm_emulate_one( struct hvm_emulate_ctxt *hvmemul_ctxt, - enum hvm_io_completion completion); + enum vio_completion completion); void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr, unsigned int errcode); diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h index 6c1feda..8adf455 100644 --- a/xen/include/asm-x86/hvm/vcpu.h +++ b/xen/include/asm-x86/hvm/vcpu.h @@ -28,13 +28,6 @@ #include #include -enum hvm_io_completion { - HVMIO_no_completion, - HVMIO_mmio_completion, - HVMIO_pio_completion, - HVMIO_realmode_completion -}; - struct hvm_vcpu_asid { uint64_t generation; uint32_t asid; @@ -52,10 +45,6 @@ struct hvm_mmio_cache { }; struct hvm_vcpu_io { - /* I/O request in flight to device model. */ - enum hvm_io_completion io_completion; - ioreq_t io_req; - /* * HVM emulation: * Linear address @mmio_gla maps to MMIO physical frame @mmio_gpfn. diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index 60e864d..eace1d3 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -107,7 +107,7 @@ void hvm_ioreq_init(struct domain *d); int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op); bool arch_ioreq_complete_mmio(void); -bool arch_vcpu_ioreq_completion(enum hvm_io_completion io_completion); +bool arch_vcpu_ioreq_completion(enum vio_completion completion); int arch_ioreq_server_map_pages(struct ioreq_server *s); void arch_ioreq_server_unmap_pages(struct ioreq_server *s); void arch_ioreq_server_enable(struct ioreq_server *s); diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index f437ee3..59e5b6a 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -147,6 +147,21 @@ void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */ struct waitqueue_vcpu; +enum vio_completion { + VIO_no_completion, + VIO_mmio_completion, + VIO_pio_completion, +#ifdef CONFIG_X86 + VIO_realmode_completion, +#endif +}; + +struct vcpu_io { + /* I/O request in flight to device model. 
*/ + enum vio_completion completion; + ioreq_t req; +}; + struct vcpu { int vcpu_id; @@ -258,6 +273,10 @@ struct vcpu struct vpci_vcpu vpci; struct arch_vcpu arch; + +#ifdef CONFIG_IOREQ_SERVER + struct vcpu_io io; +#endif }; struct sched_unit { From patchwork Fri Jan 29 01:48:39 2021 X-Patchwork-Submitter: Oleksandr Tyshchenko X-Patchwork-Id: 12054963
From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Julien Grall , Jan Beulich , Andrew Cooper , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , Wei Liu , George Dunlap , Ian Jackson , Julien Grall , Stefano Stabellini , Oleksandr Tyshchenko Subject: [PATCH V6 11/24] xen/mm: Make x86's XENMEM_resource_ioreq_server handling common Date: Fri, 29 Jan 2021 03:48:39 +0200 Message-Id: <1611884932-1851-12-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com> References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com> From: Julien Grall As the x86 implementation of XENMEM_resource_ioreq_server can be re-used on Arm later on, this patch makes it common and removes arch_acquire_resource (and the corresponding option) as unneeded. Also re-order the #include-s alphabetically. This support is going to be used on Arm to be able to run a device emulator outside of the Xen hypervisor.
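To illustrate the consumer side (purely as an aside, not part of this patch): once the type is handled in common code, a device model running in another domain can map the ioreq server pages through the generic resource-mapping path. A minimal userspace sketch, assuming libxenforeignmemory and an already-created ioreq server id; the helper name and the two-frame layout (buffered ioreq page at frame 0, synchronous ioreq page at frame 1, per the public interface) are this sketch's assumptions:

    /* Illustrative sketch only -- not code from this patch. */
    #include <sys/mman.h>
    #include <xenforeignmemory.h>
    #include <xen/memory.h>       /* XENMEM_resource_ioreq_server */

    static void *map_ioreq_server_pages(xenforeignmemory_handle *fmem,
                                        domid_t domid, unsigned int ioservid,
                                        xenforeignmemory_resource_handle **fres)
    {
        void *addr = NULL;

        /* Frame 0 is the buffered ioreq page (if enabled), frame 1 the
         * synchronous ioreq page. */
        *fres = xenforeignmemory_map_resource(fmem, domid,
                                              XENMEM_resource_ioreq_server,
                                              ioservid, 0 /* frame */,
                                              2 /* nr_frames */, &addr,
                                              PROT_READ | PROT_WRITE, 0);

        return *fres ? addr : NULL;
    }

On failure *fres is NULL and nothing is mapped; a successful mapping is torn down with xenforeignmemory_unmap_resource().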
Signed-off-by: Julien Grall Signed-off-by: Oleksandr Tyshchenko Reviewed-by: Jan Beulich Reviewed-by: Paul Durrant [On Arm only] Tested-by: Wei Chen --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes RFC -> V1: - no changes Changes V1 -> V2: - update the author of a patch Changes V2 -> V3: - don't wrap #include - limit the number of #ifdef-s - re-order #include-s alphabetically Changes V3 -> V4: - rebase - Add Jan's R-b Changes V4 -> V5: - add Paul's R-b - update patch description - remove ARCH_ACQUIRE_RESOURCE option, etc Changes V5 -> V6: - no changes --- --- xen/arch/x86/Kconfig | 1 - xen/arch/x86/mm.c | 44 --------------------------------- xen/common/Kconfig | 3 --- xen/common/memory.c | 63 +++++++++++++++++++++++++++++++++++++++--------- xen/include/asm-x86/mm.h | 4 --- xen/include/xen/mm.h | 9 ------- 6 files changed, 51 insertions(+), 73 deletions(-) diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig index ea9a9ea..abe0fce 100644 --- a/xen/arch/x86/Kconfig +++ b/xen/arch/x86/Kconfig @@ -6,7 +6,6 @@ config X86 select ACPI select ACPI_LEGACY_TABLES_LOOKUP select ARCH_SUPPORTS_INT128 - select ARCH_ACQUIRE_RESOURCE select COMPAT select CORE_PARKING select HAS_ALTERNATIVE diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c index 59eb5c8..4366ea3 100644 --- a/xen/arch/x86/mm.c +++ b/xen/arch/x86/mm.c @@ -4587,50 +4587,6 @@ static int handle_iomem_range(unsigned long s, unsigned long e, void *p) return err || s > e ? err : _handle_iomem_range(s, e, p); } -int arch_acquire_resource(struct domain *d, unsigned int type, - unsigned int id, unsigned long frame, - unsigned int nr_frames, xen_pfn_t mfn_list[]) -{ - int rc; - - switch ( type ) - { -#ifdef CONFIG_HVM - case XENMEM_resource_ioreq_server: - { - ioservid_t ioservid = id; - unsigned int i; - - rc = -EINVAL; - if ( !is_hvm_domain(d) ) - break; - - if ( id != (unsigned int)ioservid ) - break; - - rc = 0; - for ( i = 0; i < nr_frames; i++ ) - { - mfn_t mfn; - - rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn); - if ( rc ) - break; - - mfn_list[i] = mfn_x(mfn); - } - break; - } -#endif - - default: - rc = -EOPNOTSUPP; - break; - } - - return rc; -} - long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg) { int rc; diff --git a/xen/common/Kconfig b/xen/common/Kconfig index cf32a07..fa049a6 100644 --- a/xen/common/Kconfig +++ b/xen/common/Kconfig @@ -22,9 +22,6 @@ config GRANT_TABLE If unsure, say Y. 
-config ARCH_ACQUIRE_RESOURCE - bool - config HAS_ALTERNATIVE bool diff --git a/xen/common/memory.c b/xen/common/memory.c index ccb4d49..2f274a6 100644 --- a/xen/common/memory.c +++ b/xen/common/memory.c @@ -8,22 +8,23 @@ */ #include -#include +#include +#include +#include +#include +#include +#include +#include #include +#include #include +#include +#include #include #include #include -#include -#include -#include -#include -#include -#include -#include -#include #include -#include +#include #include #include #include @@ -1091,6 +1092,40 @@ static int acquire_grant_table(struct domain *d, unsigned int id, return 0; } +static int acquire_ioreq_server(struct domain *d, + unsigned int id, + unsigned long frame, + unsigned int nr_frames, + xen_pfn_t mfn_list[]) +{ +#ifdef CONFIG_IOREQ_SERVER + ioservid_t ioservid = id; + unsigned int i; + int rc; + + if ( !is_hvm_domain(d) ) + return -EINVAL; + + if ( id != (unsigned int)ioservid ) + return -EINVAL; + + for ( i = 0; i < nr_frames; i++ ) + { + mfn_t mfn; + + rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn); + if ( rc ) + return rc; + + mfn_list[i] = mfn_x(mfn); + } + + return 0; +#else + return -EOPNOTSUPP; +#endif +} + static int acquire_resource( XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg) { @@ -1149,9 +1184,13 @@ static int acquire_resource( mfn_list); break; + case XENMEM_resource_ioreq_server: + rc = acquire_ioreq_server(d, xmar.id, xmar.frame, xmar.nr_frames, + mfn_list); + break; + default: - rc = arch_acquire_resource(d, xmar.type, xmar.id, xmar.frame, - xmar.nr_frames, mfn_list); + rc = -EOPNOTSUPP; break; } diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h index 1fdb4eb..041c158 100644 --- a/xen/include/asm-x86/mm.h +++ b/xen/include/asm-x86/mm.h @@ -640,8 +640,4 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn) return mfn <= (virt_to_mfn(eva - 1) + 1); } -int arch_acquire_resource(struct domain *d, unsigned int type, - unsigned int id, unsigned long frame, - unsigned int nr_frames, xen_pfn_t mfn_list[]); - #endif /* __ASM_X86_MM_H__ */ diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h index 636a125..667f9da 100644 --- a/xen/include/xen/mm.h +++ b/xen/include/xen/mm.h @@ -612,13 +612,4 @@ static inline void put_page_alloc_ref(struct page_info *page) } } -#ifndef CONFIG_ARCH_ACQUIRE_RESOURCE -static inline int arch_acquire_resource( - struct domain *d, unsigned int type, unsigned int id, unsigned long frame, - unsigned int nr_frames, xen_pfn_t mfn_list[]) -{ - return -EOPNOTSUPP; -} -#endif /* !CONFIG_ARCH_ACQUIRE_RESOURCE */ - #endif /* __XEN_MM_H__ */ From patchwork Fri Jan 29 01:48:40 2021 X-Patchwork-Submitter: Oleksandr Tyshchenko X-Patchwork-Id: 12054953
From: Oleksandr
Tyshchenko To: xen-devel@lists.xenproject.org Cc: Oleksandr Tyshchenko , Paul Durrant , Jan Beulich , Andrew Cooper , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , Wei Liu , George Dunlap , Ian Jackson , Julien Grall , Stefano Stabellini , Jun Nakajima , Kevin Tian , Julien Grall Subject: [PATCH V6 12/24] xen/ioreq: Remove "hvm" prefixes from involved function names Date: Fri, 29 Jan 2021 03:48:40 +0200 Message-Id: <1611884932-1851-13-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com> References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com> From: Oleksandr Tyshchenko This patch removes "hvm" prefixes and infixes from IOREQ related function names in the common code and performs a renaming where appropriate according to the more consistent new naming scheme: - IOREQ server functions should start with "ioreq_server_" - IOREQ functions should start with "ioreq_" A few function names are clarified to better fit into their purposes: handle_hvm_io_completion -> vcpu_ioreq_handle_completion hvm_io_pending -> vcpu_ioreq_pending hvm_ioreq_init -> ioreq_domain_init hvm_alloc_ioreq_mfn -> ioreq_server_alloc_mfn hvm_free_ioreq_mfn -> ioreq_server_free_mfn Signed-off-by: Oleksandr Tyshchenko Reviewed-by: Jan Beulich Reviewed-by: Paul Durrant CC: Julien Grall [On Arm only] Tested-by: Wei Chen --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes V1 -> V2: - new patch Changes V2 -> V3: - update patch according the "legacy interface" is x86 specific - update patch description - rename everything touched according to new naming scheme Changes V3 -> V4: - rebase - rename ioreq_update_evtchn() to ioreq_server_update_evtchn() - add Jan's R-b Changes V4 -> V5: - rebase - add Paul's R-b Changes V5 -> V6: - no changes --- --- xen/arch/x86/hvm/emulate.c | 6 +- xen/arch/x86/hvm/hvm.c | 10 ++-- xen/arch/x86/hvm/io.c | 6 +- xen/arch/x86/hvm/ioreq.c | 2 +- xen/arch/x86/hvm/stdvga.c | 4 +- xen/arch/x86/hvm/vmx/vvmx.c | 2 +- xen/common/ioreq.c | 138 ++++++++++++++++++++++---------------------- xen/common/memory.c | 2 +- xen/include/xen/ioreq.h | 26 ++++----- 9 files changed, 98 insertions(+), 98 deletions(-) diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c index 21051ce..425c8dd 100644 --- a/xen/arch/x86/hvm/emulate.c +++ b/xen/arch/x86/hvm/emulate.c @@ -261,7 +261,7 @@ static int hvmemul_do_io( * an ioreq server that can handle it. * * Rules: - * A> PIO or MMIO accesses run through hvm_select_ioreq_server() to + * A> PIO or MMIO accesses run through ioreq_server_select() to * choose the ioreq server by range. If no server is found, the access * is ignored. 
* @@ -323,7 +323,7 @@ static int hvmemul_do_io( } if ( !s ) - s = hvm_select_ioreq_server(currd, &p); + s = ioreq_server_select(currd, &p); /* If there is no suitable backing DM, just ignore accesses */ if ( !s ) @@ -333,7 +333,7 @@ static int hvmemul_do_io( } else { - rc = hvm_send_ioreq(s, &p, 0); + rc = ioreq_send(s, &p, 0); if ( rc != X86EMUL_RETRY || currd->is_shutting_down ) vio->req.state = STATE_IOREQ_NONE; else if ( !ioreq_needs_completion(&vio->req) ) diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c index 4ed929c..0d7bb42 100644 --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -546,7 +546,7 @@ void hvm_do_resume(struct vcpu *v) pt_restore_timer(v); - if ( !handle_hvm_io_completion(v) ) + if ( !vcpu_ioreq_handle_completion(v) ) return; if ( unlikely(v->arch.vm_event) ) @@ -677,7 +677,7 @@ int hvm_domain_initialise(struct domain *d) register_g2m_portio_handler(d); register_vpci_portio_handler(d); - hvm_ioreq_init(d); + ioreq_domain_init(d); hvm_init_guest_time(d); @@ -739,7 +739,7 @@ void hvm_domain_relinquish_resources(struct domain *d) viridian_domain_deinit(d); - hvm_destroy_all_ioreq_servers(d); + ioreq_server_destroy_all(d); msixtbl_pt_cleanup(d); @@ -1582,7 +1582,7 @@ int hvm_vcpu_initialise(struct vcpu *v) if ( rc ) goto fail5; - rc = hvm_all_ioreq_servers_add_vcpu(d, v); + rc = ioreq_server_add_vcpu_all(d, v); if ( rc != 0 ) goto fail6; @@ -1618,7 +1618,7 @@ void hvm_vcpu_destroy(struct vcpu *v) { viridian_vcpu_deinit(v); - hvm_all_ioreq_servers_remove_vcpu(v->domain, v); + ioreq_server_remove_vcpu_all(v->domain, v); if ( hvm_altp2m_supported() ) altp2m_vcpu_destroy(v); diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c index dd733e1..66a37ee 100644 --- a/xen/arch/x86/hvm/io.c +++ b/xen/arch/x86/hvm/io.c @@ -60,7 +60,7 @@ void send_timeoffset_req(unsigned long timeoff) if ( timeoff == 0 ) return; - if ( hvm_broadcast_ioreq(&p, true) != 0 ) + if ( ioreq_broadcast(&p, true) != 0 ) gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n"); } @@ -74,7 +74,7 @@ void send_invalidate_req(void) .data = ~0UL, /* flush all */ }; - if ( hvm_broadcast_ioreq(&p, false) != 0 ) + if ( ioreq_broadcast(&p, false) != 0 ) gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n"); } @@ -155,7 +155,7 @@ bool handle_pio(uint16_t port, unsigned int size, int dir) * We should not advance RIP/EIP if the domain is shutting down or * if X86EMUL_RETRY has been returned by an internal handler. */ - if ( curr->domain->is_shutting_down || !hvm_io_pending(curr) ) + if ( curr->domain->is_shutting_down || !vcpu_ioreq_pending(curr) ) return false; break; diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index 62a4b33..02ad9db 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -153,7 +153,7 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf) { /* * If a page has already been allocated (which will happen on - * demand if hvm_get_ioreq_server_frame() is called), then + * demand if ioreq_server_get_frame() is called), then * mapping a guest frame is not permitted. 
*/ if ( gfn_eq(iorp->gfn, INVALID_GFN) ) diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c index ee13449..ab9781d 100644 --- a/xen/arch/x86/hvm/stdvga.c +++ b/xen/arch/x86/hvm/stdvga.c @@ -507,11 +507,11 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler, } done: - srv = hvm_select_ioreq_server(current->domain, &p); + srv = ioreq_server_select(current->domain, &p); if ( !srv ) return X86EMUL_UNHANDLEABLE; - return hvm_send_ioreq(srv, &p, 1); + return ioreq_send(srv, &p, 1); } static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler, diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c index 0ddb6a4..e9f94da 100644 --- a/xen/arch/x86/hvm/vmx/vvmx.c +++ b/xen/arch/x86/hvm/vmx/vvmx.c @@ -1517,7 +1517,7 @@ void nvmx_switch_guest(void) * don't want to continue as this setup is not implemented nor supported * as of right now. */ - if ( hvm_io_pending(v) ) + if ( vcpu_ioreq_pending(v) ) return; /* * a softirq may interrupt us between a virtual vmentry is diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index ce3ef59..de3066a 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -59,7 +59,7 @@ static struct ioreq_server *get_ioreq_server(const struct domain *d, * Iterate over all possible ioreq servers. * * NOTE: The iteration is backwards such that more recently created - * ioreq servers are favoured in hvm_select_ioreq_server(). + * ioreq servers are favoured in ioreq_server_select(). * This is a semantic that previously existed when ioreq servers * were held in a linked list. */ @@ -106,12 +106,12 @@ static struct ioreq_vcpu *get_pending_vcpu(const struct vcpu *v, return NULL; } -bool hvm_io_pending(struct vcpu *v) +bool vcpu_ioreq_pending(struct vcpu *v) { return get_pending_vcpu(v, NULL); } -static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p) +static bool wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p) { unsigned int prev_state = STATE_IOREQ_NONE; unsigned int state = p->state; @@ -168,7 +168,7 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p) return true; } -bool handle_hvm_io_completion(struct vcpu *v) +bool vcpu_ioreq_handle_completion(struct vcpu *v) { struct domain *d = v->domain; struct vcpu_io *vio = &v->io; @@ -183,7 +183,7 @@ bool handle_hvm_io_completion(struct vcpu *v) } sv = get_pending_vcpu(v, &s); - if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) ) + if ( sv && !wait_for_io(sv, get_ioreq(s, v)) ) return false; vio->req.state = ioreq_needs_completion(&vio->req) ? @@ -214,7 +214,7 @@ bool handle_hvm_io_completion(struct vcpu *v) return true; } -static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf) +static int ioreq_server_alloc_mfn(struct ioreq_server *s, bool buf) { struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq; struct page_info *page; @@ -262,7 +262,7 @@ static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf) return -ENOMEM; } -static void hvm_free_ioreq_mfn(struct ioreq_server *s, bool buf) +static void ioreq_server_free_mfn(struct ioreq_server *s, bool buf) { struct ioreq_page *iorp = buf ? 
&s->bufioreq : &s->ioreq; struct page_info *page = iorp->page; @@ -301,8 +301,8 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page) return found; } -static void hvm_update_ioreq_evtchn(struct ioreq_server *s, - struct ioreq_vcpu *sv) +static void ioreq_server_update_evtchn(struct ioreq_server *s, + struct ioreq_vcpu *sv) { ASSERT(spin_is_locked(&s->lock)); @@ -314,8 +314,8 @@ static void hvm_update_ioreq_evtchn(struct ioreq_server *s, } } -static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s, - struct vcpu *v) +static int ioreq_server_add_vcpu(struct ioreq_server *s, + struct vcpu *v) { struct ioreq_vcpu *sv; int rc; @@ -350,7 +350,7 @@ static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s, list_add(&sv->list_entry, &s->ioreq_vcpu_list); if ( s->enabled ) - hvm_update_ioreq_evtchn(s, sv); + ioreq_server_update_evtchn(s, sv); spin_unlock(&s->lock); return 0; @@ -366,8 +366,8 @@ static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s, return rc; } -static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s, - struct vcpu *v) +static void ioreq_server_remove_vcpu(struct ioreq_server *s, + struct vcpu *v) { struct ioreq_vcpu *sv; @@ -394,7 +394,7 @@ static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s, spin_unlock(&s->lock); } -static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s) +static void ioreq_server_remove_all_vcpus(struct ioreq_server *s) { struct ioreq_vcpu *sv, *next; @@ -420,28 +420,28 @@ static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s) spin_unlock(&s->lock); } -static int hvm_ioreq_server_alloc_pages(struct ioreq_server *s) +static int ioreq_server_alloc_pages(struct ioreq_server *s) { int rc; - rc = hvm_alloc_ioreq_mfn(s, false); + rc = ioreq_server_alloc_mfn(s, false); if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) ) - rc = hvm_alloc_ioreq_mfn(s, true); + rc = ioreq_server_alloc_mfn(s, true); if ( rc ) - hvm_free_ioreq_mfn(s, false); + ioreq_server_free_mfn(s, false); return rc; } -static void hvm_ioreq_server_free_pages(struct ioreq_server *s) +static void ioreq_server_free_pages(struct ioreq_server *s) { - hvm_free_ioreq_mfn(s, true); - hvm_free_ioreq_mfn(s, false); + ioreq_server_free_mfn(s, true); + ioreq_server_free_mfn(s, false); } -static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s) +static void ioreq_server_free_rangesets(struct ioreq_server *s) { unsigned int i; @@ -449,8 +449,8 @@ static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s) rangeset_destroy(s->range[i]); } -static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s, - ioservid_t id) +static int ioreq_server_alloc_rangesets(struct ioreq_server *s, + ioservid_t id) { unsigned int i; int rc; @@ -482,12 +482,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s, return 0; fail: - hvm_ioreq_server_free_rangesets(s); + ioreq_server_free_rangesets(s); return rc; } -static void hvm_ioreq_server_enable(struct ioreq_server *s) +static void ioreq_server_enable(struct ioreq_server *s) { struct ioreq_vcpu *sv; @@ -503,13 +503,13 @@ static void hvm_ioreq_server_enable(struct ioreq_server *s) list_for_each_entry ( sv, &s->ioreq_vcpu_list, list_entry ) - hvm_update_ioreq_evtchn(s, sv); + ioreq_server_update_evtchn(s, sv); done: spin_unlock(&s->lock); } -static void hvm_ioreq_server_disable(struct ioreq_server *s) +static void ioreq_server_disable(struct ioreq_server *s) { spin_lock(&s->lock); @@ -524,9 +524,9 @@ static void hvm_ioreq_server_disable(struct 
ioreq_server *s) spin_unlock(&s->lock); } -static int hvm_ioreq_server_init(struct ioreq_server *s, - struct domain *d, int bufioreq_handling, - ioservid_t id) +static int ioreq_server_init(struct ioreq_server *s, + struct domain *d, int bufioreq_handling, + ioservid_t id) { struct domain *currd = current->domain; struct vcpu *v; @@ -544,7 +544,7 @@ static int hvm_ioreq_server_init(struct ioreq_server *s, s->ioreq.gfn = INVALID_GFN; s->bufioreq.gfn = INVALID_GFN; - rc = hvm_ioreq_server_alloc_rangesets(s, id); + rc = ioreq_server_alloc_rangesets(s, id); if ( rc ) return rc; @@ -552,7 +552,7 @@ static int hvm_ioreq_server_init(struct ioreq_server *s, for_each_vcpu ( d, v ) { - rc = hvm_ioreq_server_add_vcpu(s, v); + rc = ioreq_server_add_vcpu(s, v); if ( rc ) goto fail_add; } @@ -560,23 +560,23 @@ static int hvm_ioreq_server_init(struct ioreq_server *s, return 0; fail_add: - hvm_ioreq_server_remove_all_vcpus(s); + ioreq_server_remove_all_vcpus(s); arch_ioreq_server_unmap_pages(s); - hvm_ioreq_server_free_rangesets(s); + ioreq_server_free_rangesets(s); put_domain(s->emulator); return rc; } -static void hvm_ioreq_server_deinit(struct ioreq_server *s) +static void ioreq_server_deinit(struct ioreq_server *s) { ASSERT(!s->enabled); - hvm_ioreq_server_remove_all_vcpus(s); + ioreq_server_remove_all_vcpus(s); /* * NOTE: It is safe to call both arch_ioreq_server_unmap_pages() and - * hvm_ioreq_server_free_pages() in that order. + * ioreq_server_free_pages() in that order. * This is because the former will do nothing if the pages * are not mapped, leaving the page to be freed by the latter. * However if the pages are mapped then the former will set @@ -584,9 +584,9 @@ static void hvm_ioreq_server_deinit(struct ioreq_server *s) * nothing. */ arch_ioreq_server_unmap_pages(s); - hvm_ioreq_server_free_pages(s); + ioreq_server_free_pages(s); - hvm_ioreq_server_free_rangesets(s); + ioreq_server_free_rangesets(s); put_domain(s->emulator); } @@ -620,11 +620,11 @@ static int ioreq_server_create(struct domain *d, int bufioreq_handling, /* * It is safe to call set_ioreq_server() prior to - * hvm_ioreq_server_init() since the target domain is paused. + * ioreq_server_init() since the target domain is paused. */ set_ioreq_server(d, i, s); - rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i); + rc = ioreq_server_init(s, d, bufioreq_handling, i); if ( rc ) { set_ioreq_server(d, i, NULL); @@ -668,13 +668,13 @@ static int ioreq_server_destroy(struct domain *d, ioservid_t id) arch_ioreq_server_destroy(s); - hvm_ioreq_server_disable(s); + ioreq_server_disable(s); /* - * It is safe to call hvm_ioreq_server_deinit() prior to + * It is safe to call ioreq_server_deinit() prior to * set_ioreq_server() since the target domain is paused. 
*/ - hvm_ioreq_server_deinit(s); + ioreq_server_deinit(s); set_ioreq_server(d, id, NULL); domain_unpause(d); @@ -736,8 +736,8 @@ static int ioreq_server_get_info(struct domain *d, ioservid_t id, return rc; } -int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, - unsigned long idx, mfn_t *mfn) +int ioreq_server_get_frame(struct domain *d, ioservid_t id, + unsigned long idx, mfn_t *mfn) { struct ioreq_server *s; int rc; @@ -756,7 +756,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, if ( s->emulator != current->domain ) goto out; - rc = hvm_ioreq_server_alloc_pages(s); + rc = ioreq_server_alloc_pages(s); if ( rc ) goto out; @@ -955,9 +955,9 @@ static int ioreq_server_set_state(struct domain *d, ioservid_t id, domain_pause(d); if ( enabled ) - hvm_ioreq_server_enable(s); + ioreq_server_enable(s); else - hvm_ioreq_server_disable(s); + ioreq_server_disable(s); domain_unpause(d); @@ -968,7 +968,7 @@ static int ioreq_server_set_state(struct domain *d, ioservid_t id, return rc; } -int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) +int ioreq_server_add_vcpu_all(struct domain *d, struct vcpu *v) { struct ioreq_server *s; unsigned int id; @@ -978,7 +978,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) FOR_EACH_IOREQ_SERVER(d, id, s) { - rc = hvm_ioreq_server_add_vcpu(s, v); + rc = ioreq_server_add_vcpu(s, v); if ( rc ) goto fail; } @@ -995,7 +995,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) if ( !s ) continue; - hvm_ioreq_server_remove_vcpu(s, v); + ioreq_server_remove_vcpu(s, v); } spin_unlock_recursive(&d->ioreq_server.lock); @@ -1003,7 +1003,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v) return rc; } -void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) +void ioreq_server_remove_vcpu_all(struct domain *d, struct vcpu *v) { struct ioreq_server *s; unsigned int id; @@ -1011,12 +1011,12 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v) spin_lock_recursive(&d->ioreq_server.lock); FOR_EACH_IOREQ_SERVER(d, id, s) - hvm_ioreq_server_remove_vcpu(s, v); + ioreq_server_remove_vcpu(s, v); spin_unlock_recursive(&d->ioreq_server.lock); } -void hvm_destroy_all_ioreq_servers(struct domain *d) +void ioreq_server_destroy_all(struct domain *d) { struct ioreq_server *s; unsigned int id; @@ -1030,13 +1030,13 @@ void hvm_destroy_all_ioreq_servers(struct domain *d) FOR_EACH_IOREQ_SERVER(d, id, s) { - hvm_ioreq_server_disable(s); + ioreq_server_disable(s); /* - * It is safe to call hvm_ioreq_server_deinit() prior to + * It is safe to call ioreq_server_deinit() prior to * set_ioreq_server() since the target domain is being destroyed. 
*/ - hvm_ioreq_server_deinit(s); + ioreq_server_deinit(s); set_ioreq_server(d, id, NULL); xfree(s); @@ -1045,8 +1045,8 @@ void hvm_destroy_all_ioreq_servers(struct domain *d) spin_unlock_recursive(&d->ioreq_server.lock); } -struct ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p) +struct ioreq_server *ioreq_server_select(struct domain *d, + ioreq_t *p) { struct ioreq_server *s; uint8_t type; @@ -1101,7 +1101,7 @@ struct ioreq_server *hvm_select_ioreq_server(struct domain *d, return NULL; } -static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p) +static int ioreq_send_buffered(struct ioreq_server *s, ioreq_t *p) { struct domain *d = current->domain; struct ioreq_page *iorp; @@ -1194,8 +1194,8 @@ static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p) return IOREQ_STATUS_HANDLED; } -int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, - bool buffered) +int ioreq_send(struct ioreq_server *s, ioreq_t *proto_p, + bool buffered) { struct vcpu *curr = current; struct domain *d = curr->domain; @@ -1204,7 +1204,7 @@ int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, ASSERT(s); if ( buffered ) - return hvm_send_buffered_ioreq(s, proto_p); + return ioreq_send_buffered(s, proto_p); if ( unlikely(!vcpu_start_shutdown_deferral(curr)) ) return IOREQ_STATUS_RETRY; @@ -1254,7 +1254,7 @@ int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, return IOREQ_STATUS_UNHANDLED; } -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) +unsigned int ioreq_broadcast(ioreq_t *p, bool buffered) { struct domain *d = current->domain; struct ioreq_server *s; @@ -1265,14 +1265,14 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered) if ( !s->enabled ) continue; - if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED ) + if ( ioreq_send(s, p, buffered) == IOREQ_STATUS_UNHANDLED ) failed++; } return failed; } -void hvm_ioreq_init(struct domain *d) +void ioreq_domain_init(struct domain *d) { spin_lock_init(&d->ioreq_server.lock); diff --git a/xen/common/memory.c b/xen/common/memory.c index 2f274a6..8eb05b1 100644 --- a/xen/common/memory.c +++ b/xen/common/memory.c @@ -1113,7 +1113,7 @@ static int acquire_ioreq_server(struct domain *d, { mfn_t mfn; - rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn); + rc = ioreq_server_get_frame(d, id, frame + i, &mfn); if ( rc ) return rc; diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h index eace1d3..0b433e2 100644 --- a/xen/include/xen/ioreq.h +++ b/xen/include/xen/ioreq.h @@ -83,26 +83,26 @@ static inline bool ioreq_needs_completion(const ioreq_t *ioreq) #define HANDLE_BUFIOREQ(s) \ ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) -bool hvm_io_pending(struct vcpu *v); -bool handle_hvm_io_completion(struct vcpu *v); +bool vcpu_ioreq_pending(struct vcpu *v); +bool vcpu_ioreq_handle_completion(struct vcpu *v); bool is_ioreq_server_page(struct domain *d, const struct page_info *page); -int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id, - unsigned long idx, mfn_t *mfn); +int ioreq_server_get_frame(struct domain *d, ioservid_t id, + unsigned long idx, mfn_t *mfn); int ioreq_server_map_mem_type(struct domain *d, ioservid_t id, uint32_t type, uint32_t flags); -int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v); -void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v); -void hvm_destroy_all_ioreq_servers(struct domain *d); +int ioreq_server_add_vcpu_all(struct domain *d, struct vcpu *v); +void 
ioreq_server_remove_vcpu_all(struct domain *d, struct vcpu *v); +void ioreq_server_destroy_all(struct domain *d); -struct ioreq_server *hvm_select_ioreq_server(struct domain *d, - ioreq_t *p); -int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p, - bool buffered); -unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered); +struct ioreq_server *ioreq_server_select(struct domain *d, + ioreq_t *p); +int ioreq_send(struct ioreq_server *s, ioreq_t *proto_p, + bool buffered); +unsigned int ioreq_broadcast(ioreq_t *p, bool buffered); -void hvm_ioreq_init(struct domain *d); +void ioreq_domain_init(struct domain *d); int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op); From patchwork Fri Jan 29 01:48:41 2021 X-Patchwork-Submitter: Oleksandr Tyshchenko X-Patchwork-Id: 12054957
From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Oleksandr Tyshchenko , Paul Durrant , Julien Grall , Stefano Stabellini , Julien Grall Subject: [PATCH V6 13/24] xen/ioreq: Use guest_cmpxchg64() instead of cmpxchg() Date: Fri, 29 Jan 2021 03:48:41 +0200 Message-Id: <1611884932-1851-14-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com> References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com> From: Oleksandr Tyshchenko The cmpxchg() in ioreq_send_buffered() operates on memory shared with the emulator domain (and with the target domain if the legacy interface is used). In order to be on the safe side we need to switch to guest_cmpxchg64() to prevent a domain from DoSing Xen on Arm. The reason to use the 64-bit version of the helper is to support Arm32, since the IOREQ code uses cmpxchg() with a 64-bit value. As there is no plan to support the legacy interface on Arm, the page will only be mapped in a single domain at a time, so we can safely pass s->emulator to guest_cmpxchg64(). Thankfully the only user of the legacy interface so far is x86, so there is no concern regarding the atomic operations there. Please note that the legacy interface *must* not be used on Arm without revisiting the code.
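For reference, the pattern being hardened looks roughly as follows -- a sketch of the pointer-reclaim step in ioreq_send_buffered(), using the names visible in the diff below (the union mirrors bufioreq_pointers from the public ioreq interface); the comments state the intent rather than quote the implementation:

    union bufioreq_pointers {
        struct {
            uint32_t read_pointer;
            uint32_t write_pointer;
        };
        uint64_t full;
    } old, new;

    /* pg->ptrs lives in a page shared with -- and writable by -- the
     * emulator domain, so a plain cmpxchg() retry could in principle be
     * kept failing forever by a malicious emulator on Arm. */
    old.full = pg->ptrs.full;
    new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
    new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;

    /* guest_cmpxchg64() takes the domain sharing the page: on Arm it
     * bounds the time spent on the exclusive access and, if that bound
     * is hit, pauses the sharing domain to complete the update safely;
     * on x86 it degrades to a plain cmpxchg(). */
    guest_cmpxchg64(s->emulator, &pg->ptrs.full, old.full, new.full);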
Signed-off-by: Oleksandr Tyshchenko Acked-by: Stefano Stabellini Reviewed-by: Paul Durrant CC: Julien Grall [On Arm only] Tested-by: Wei Chen --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes RFC -> V1: - new patch Changes V1 -> V2: - move earlier to avoid breaking arm32 compilation - add an explanation to commit description and hvm_allow_set_param() - pass s->emulator Changes V2 -> V3: - update patch description Changes V3 -> V4: - add Stefano's A-b - drop comment from arm/hvm.c Changes V4 -> V5: - update patch description - rebase - add Paul's R-b Changes V5 -> V6: - no changes --- --- xen/common/ioreq.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index de3066a..07572a5 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -29,6 +29,7 @@ #include #include +#include <asm/guest_atomics.h> #include #include @@ -1185,7 +1186,7 @@ static int ioreq_send_buffered(struct ioreq_server *s, ioreq_t *p) new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM; new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM; - cmpxchg(&pg->ptrs.full, old.full, new.full); + guest_cmpxchg64(s->emulator, &pg->ptrs.full, old.full, new.full); } notify_via_xen_event_channel(d, s->bufioreq_evtchn); From patchwork Fri Jan 29 01:48:42 2021 X-Patchwork-Submitter: Oleksandr Tyshchenko X-Patchwork-Id: 12054965
From: Oleksandr Tyshchenko To: xen-devel@lists.xenproject.org Cc: Julien Grall , Stefano Stabellini , Julien Grall , Volodymyr Babchuk , Oleksandr Tyshchenko Subject: [PATCH V6 14/24] arm/ioreq: Introduce arch specific bits for IOREQ/DM features Date: Fri, 29 Jan 2021 03:48:42 +0200 Message-Id: <1611884932-1851-15-git-send-email-olekstysh@gmail.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com> References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com> MIME-Version: 1.0 From: Julien Grall This patch adds basic IOREQ/DM support on Arm. The subsequent patches will improve functionality and add remaining bits. The IOREQ/DM features are supposed to be built with the IOREQ_SERVER option enabled, which is disabled by default on Arm for now. Please note, the "PIO handling" TODO is expected to be left unaddressed for the current series. It is not a big issue for now, while Xen doesn't have support for vPCI on Arm. On Arm64 PIO accesses are only used for PCI IO BARs, and we would probably want to expose them to the emulator as PIO accesses to make a DM completely arch-agnostic. So "PIO handling" should be implemented when we add support for vPCI.
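One detail worth spelling out before the diff: the emulator's reply is zero-extended, so handle_ioserv() below has to sign-extend narrow signed reads itself. A worked example with illustrative values only:

    /* Illustrative values, mirroring the sign extension in handle_ioserv():
     * a signed 1-byte load (e.g. LDRSB) traps and the emulator returns 0x80. */
    unsigned int size = (1 << 0 /* dabt.size */) * 8;  /* 8-bit access */
    register_t r = 0x80;               /* zero-extended reply from the DM */

    if ( r & (1UL << (size - 1)) )     /* dabt.sign set and bit 7 is 1 */
        r |= (~0UL) << size;           /* 0xffffffffffffff80, i.e. -128, on Arm64 */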
Signed-off-by: Julien Grall Signed-off-by: Oleksandr Tyshchenko Reviewed-by: Stefano Stabellini [On Arm only] Tested-by: Wei Chen --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" *** I admit, I didn't resolve header dependencies completely. For now, public/hvm/dm_op.h is included by xen/dm.h, but ought to be included by arch/arm/dm.c. Details here: https://lore.kernel.org/xen-devel/e0bc7f80-974e-945d-4605-173bd05302af@gmail.com/ *** Changes RFC -> V1: - was split into: - arm/ioreq: Introduce arch specific bits for IOREQ/DM features - xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm - update patch description - update asm-arm/hvm/ioreq.h according to the newly introduced arch functions: - arch_hvm_destroy_ioreq_server() - arch_handle_hvm_io_completion() - update arch files to include xen/ioreq.h - remove HVMOP plumbing - rewrite the logic to properly handle the case when hvm_send_ioreq() returns IO_RETRY - add logic to properly handle the handle_hvm_io_completion() return value - rename handle_mmio() to ioreq_handle_complete_mmio() - move paging_mark_pfn_dirty() to asm-arm/paging.h - remove forward declaration for hvm_ioreq_server in asm-arm/paging.h - move try_fwd_ioserv() to ioreq.c, provide stubs if !CONFIG_IOREQ_SERVER - do not remove #ifdef CONFIG_IOREQ_SERVER in memory.c for guarding xen/ioreq.h - use gdprintk in try_fwd_ioserv(), remove unneeded prints - update list of #include-s - move has_vpci() to asm-arm/domain.h - add a comment (TODO) to the yet-unimplemented handle_pio() - remove hvm_mmio_first(last)_byte() and hvm_ioreq_(page/vcpu/server) structs from the arch files, they were already moved to the common code - remove set_foreign_p2m_entry() changes, they will be properly implemented in the follow-up patch - select IOREQ_SERVER for Arm instead of Arm64 in Kconfig - remove x86's realmode and other unneeded stubs from xen/ioreq.h - clarify ioreq_t p.df usage in try_fwd_ioserv() - set ioreq_t p.count to 1 in try_fwd_ioserv() Changes V1 -> V2: - was split into: - arm/ioreq: Introduce arch specific bits for IOREQ/DM features - xen/arm: Stick around in leave_hypervisor_to_guest until I/O has completed - update the author of a patch - update patch description - move a loop in leave_hypervisor_to_guest() to a separate patch - set IOREQ_SERVER disabled by default - remove already clarified /* XXX */ - replace BUG() by ASSERT_UNREACHABLE() in handle_pio() - remove default case for handling the return value of try_handle_mmio() - remove struct hvm_domain, enum hvm_io_completion, struct hvm_vcpu_io, struct hvm_vcpu from asm-arm/domain.h, these are common materials now - update everything according to the recent changes (IOREQ related function names don't contain "hvm" prefixes/infixes anymore, IOREQ related fields are part of common struct vcpu/domain now, etc) Changes V2 -> V3: - update patch according to the "legacy interface" being x86 specific - add dummy arch hooks - remove dummy paging_mark_pfn_dirty() - don't include in common ioreq.c - don't include in arch ioreq.h - remove #define ioreq_params(d, i) Changes V3 -> V4: - rebase - update patch according to the renaming IO_ -> VIO_ (io_ -> vio_) and misc changes to arch hooks - update patch according to the IOREQ related dm-op handling changes - don't include from arch header - make all arch hooks out-of-line - add a comment above IOREQ_STATUS_* #define-s Changes V4 -> V5: - change the placement of ioreq_server_destroy_all() in arm/domain.c - don't include
public/hvm/dm_op.h by asm-arm/domain.h - include public/hvm/dm_op.h by xen/dm.h - put arch ioreq.h directly into asm-arm subdir - remove do_dm_op() in arm/dm.c, this is common material now - remove obsolete ioreq_complete_mmio() from asm-arm/ioreq.h - optimize arch_ioreq_complete_mmio() to not call try_handle_mmio(), but handle_ioserv(), use ASSERT_UNREACHABLE() if state is incorrect - split changes to check_for_vcpu_work() to be squashed with patch #15 Changes V5 -> V6: - do not include public/hvm/dm_op.h by common dm.h (it was already done in patch #9) - do not use switch(...) for only one case in try_fwd_ioserv() - add Stefano's R-b --- --- xen/arch/arm/Makefile | 2 + xen/arch/arm/dm.c | 97 ++++++++++++++++++++ xen/arch/arm/domain.c | 9 ++ xen/arch/arm/io.c | 12 ++- xen/arch/arm/ioreq.c | 211 +++++++++++++++++++++++++++++++++++++++++++ xen/arch/arm/traps.c | 6 ++ xen/include/asm-arm/domain.h | 2 + xen/include/asm-arm/ioreq.h | 70 ++++++++++++++ xen/include/asm-arm/mmio.h | 1 + 9 files changed, 409 insertions(+), 1 deletion(-) create mode 100644 xen/arch/arm/dm.c create mode 100644 xen/arch/arm/ioreq.c create mode 100644 xen/include/asm-arm/ioreq.h diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile index 512ffdd..16e6523 100644 --- a/xen/arch/arm/Makefile +++ b/xen/arch/arm/Makefile @@ -13,6 +13,7 @@ obj-y += cpuerrata.o obj-y += cpufeature.o obj-y += decode.o obj-y += device.o +obj-$(CONFIG_IOREQ_SERVER) += dm.o obj-y += domain.o obj-y += domain_build.init.o obj-y += domctl.o @@ -27,6 +28,7 @@ obj-y += guest_atomics.o obj-y += guest_walk.o obj-y += hvm.o obj-y += io.o +obj-$(CONFIG_IOREQ_SERVER) += ioreq.o obj-y += irq.o obj-y += kernel.init.o obj-$(CONFIG_LIVEPATCH) += livepatch.o diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c new file mode 100644 index 0000000..f254ed7 --- /dev/null +++ b/xen/arch/arm/dm.c @@ -0,0 +1,97 @@ +/* + * Copyright (c) 2019 Arm ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; If not, see .
+ */ + +#include +#include +#include +#include +#include + +int dm_op(const struct dmop_args *op_args) +{ + struct domain *d; + struct xen_dm_op op; + bool const_op = true; + long rc; + size_t offset; + + static const uint8_t op_size[] = { + [XEN_DMOP_create_ioreq_server] = sizeof(struct xen_dm_op_create_ioreq_server), + [XEN_DMOP_get_ioreq_server_info] = sizeof(struct xen_dm_op_get_ioreq_server_info), + [XEN_DMOP_map_io_range_to_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range), + [XEN_DMOP_unmap_io_range_from_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range), + [XEN_DMOP_set_ioreq_server_state] = sizeof(struct xen_dm_op_set_ioreq_server_state), + [XEN_DMOP_destroy_ioreq_server] = sizeof(struct xen_dm_op_destroy_ioreq_server), + }; + + rc = rcu_lock_remote_domain_by_id(op_args->domid, &d); + if ( rc ) + return rc; + + rc = xsm_dm_op(XSM_DM_PRIV, d); + if ( rc ) + goto out; + + offset = offsetof(struct xen_dm_op, u); + + rc = -EFAULT; + if ( op_args->buf[0].size < offset ) + goto out; + + if ( copy_from_guest_offset((void *)&op, op_args->buf[0].h, 0, offset) ) + goto out; + + if ( op.op >= ARRAY_SIZE(op_size) ) + { + rc = -EOPNOTSUPP; + goto out; + } + + op.op = array_index_nospec(op.op, ARRAY_SIZE(op_size)); + + if ( op_args->buf[0].size < offset + op_size[op.op] ) + goto out; + + if ( copy_from_guest_offset((void *)&op.u, op_args->buf[0].h, offset, + op_size[op.op]) ) + goto out; + + rc = -EINVAL; + if ( op.pad ) + goto out; + + rc = ioreq_server_dm_op(&op, d, &const_op); + + if ( (!rc || rc == -ERESTART) && + !const_op && copy_to_guest_offset(op_args->buf[0].h, offset, + (void *)&op.u, op_size[op.op]) ) + rc = -EFAULT; + + out: + rcu_unlock_domain(d); + + return rc; +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index 18cafcd..bdd3d3e 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -15,6 +15,7 @@ #include #include #include +#include #include #include #include @@ -696,6 +697,10 @@ int arch_domain_create(struct domain *d, ASSERT(config != NULL); +#ifdef CONFIG_IOREQ_SERVER + ioreq_domain_init(d); +#endif + /* p2m_init relies on some value initialized by the IOMMU subsystem */ if ( (rc = iommu_domain_init(d, config->iommu_opts)) != 0 ) goto fail; @@ -1009,6 +1014,10 @@ int domain_relinquish_resources(struct domain *d) */ domain_vpl011_deinit(d); +#ifdef CONFIG_IOREQ_SERVER + ioreq_server_destroy_all(d); +#endif + PROGRESS(tee): ret = tee_relinquish_resources(d); if (ret ) diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c index ae7ef96..7ac0303 100644 --- a/xen/arch/arm/io.c +++ b/xen/arch/arm/io.c @@ -16,12 +16,14 @@ * GNU General Public License for more details. 
*/ +#include #include #include #include #include #include #include +#include #include #include "decode.h" @@ -123,7 +125,15 @@ enum io_state try_handle_mmio(struct cpu_user_regs *regs, handler = find_mmio_handler(v->domain, info.gpa); if ( !handler ) - return IO_UNHANDLED; + { + int rc; + + rc = try_fwd_ioserv(regs, v, &info); + if ( rc == IO_HANDLED ) + return handle_ioserv(regs, v); + + return rc; + } /* All the instructions used on emulated MMIO region should be valid */ if ( !dabt.valid ) diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c new file mode 100644 index 0000000..43f5d6b --- /dev/null +++ b/xen/arch/arm/ioreq.c @@ -0,0 +1,211 @@ +/* + * arm/ioreq.c: hardware virtual machine I/O emulation + * + * Copyright (c) 2019 Arm ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; If not, see . + */ + +#include +#include + +#include + +#include + +enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v) +{ + const union hsr hsr = { .bits = regs->hsr }; + const struct hsr_dabt dabt = hsr.dabt; + /* Code is similar to handle_read */ + uint8_t size = (1 << dabt.size) * 8; + register_t r = v->io.req.data; + + /* We are done with the IO */ + v->io.req.state = STATE_IOREQ_NONE; + + if ( dabt.write ) + return IO_HANDLED; + + /* + * Sign extend if required. + * Note that we expect the read handler to have zeroed the bits + * outside the requested access size. + */ + if ( dabt.sign && (r & (1UL << (size - 1))) ) + { + /* + * We are relying on register_t using the same as + * an unsigned long in order to keep the 32-bit assembly + * code smaller. + */ + BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long)); + r |= (~0UL) << size; + } + + set_user_reg(regs, dabt.reg, r); + + return IO_HANDLED; +} + +enum io_state try_fwd_ioserv(struct cpu_user_regs *regs, + struct vcpu *v, mmio_info_t *info) +{ + struct vcpu_io *vio = &v->io; + ioreq_t p = { + .type = IOREQ_TYPE_COPY, + .addr = info->gpa, + .size = 1 << info->dabt.size, + .count = 1, + .dir = !info->dabt.write, + /* + * On x86, df is used by 'rep' instruction to tell the direction + * to iterate (forward or backward). + * On Arm, all the accesses to MMIO region will do a single + * memory access. So for now, we can safely always set to 0. 
+ */ + .df = 0, + .data = get_user_reg(regs, info->dabt.reg), + .state = STATE_IOREQ_READY, + }; + struct ioreq_server *s = NULL; + enum io_state rc; + + if ( vio->req.state != STATE_IOREQ_NONE ) + { + gdprintk(XENLOG_ERR, "wrong state %u\n", vio->req.state); + return IO_ABORT; + } + + s = ioreq_server_select(v->domain, &p); + if ( !s ) + return IO_UNHANDLED; + + if ( !info->dabt.valid ) + return IO_ABORT; + + vio->req = p; + + rc = ioreq_send(s, &p, 0); + if ( rc != IO_RETRY || v->domain->is_shutting_down ) + vio->req.state = STATE_IOREQ_NONE; + else if ( !ioreq_needs_completion(&vio->req) ) + rc = IO_HANDLED; + else + vio->completion = VIO_mmio_completion; + + return rc; +} + +bool arch_ioreq_complete_mmio(void) +{ + struct vcpu *v = current; + struct cpu_user_regs *regs = guest_cpu_user_regs(); + const union hsr hsr = { .bits = regs->hsr }; + + if ( v->io.req.state != STATE_IORESP_READY ) + { + ASSERT_UNREACHABLE(); + return false; + } + + if ( handle_ioserv(regs, v) == IO_HANDLED ) + { + advance_pc(regs, hsr); + return true; + } + + return false; +} + +bool arch_vcpu_ioreq_completion(enum vio_completion completion) +{ + ASSERT_UNREACHABLE(); + return true; +} + +/* + * The "legacy" mechanism of mapping magic pages for the IOREQ servers + * is x86 specific, so the following hooks don't need to be implemented on Arm: + * - arch_ioreq_server_map_pages + * - arch_ioreq_server_unmap_pages + * - arch_ioreq_server_enable + * - arch_ioreq_server_disable + */ +int arch_ioreq_server_map_pages(struct ioreq_server *s) +{ + return -EOPNOTSUPP; +} + +void arch_ioreq_server_unmap_pages(struct ioreq_server *s) +{ +} + +void arch_ioreq_server_enable(struct ioreq_server *s) +{ +} + +void arch_ioreq_server_disable(struct ioreq_server *s) +{ +} + +void arch_ioreq_server_destroy(struct ioreq_server *s) +{ +} + +int arch_ioreq_server_map_mem_type(struct domain *d, + struct ioreq_server *s, + uint32_t flags) +{ + return -EOPNOTSUPP; +} + +void arch_ioreq_server_map_mem_type_completed(struct domain *d, + struct ioreq_server *s, + uint32_t flags) +{ +} + +bool arch_ioreq_server_destroy_all(struct domain *d) +{ + return true; +} + +bool arch_ioreq_server_get_type_addr(const struct domain *d, + const ioreq_t *p, + uint8_t *type, + uint64_t *addr) +{ + if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO ) + return false; + + *type = (p->type == IOREQ_TYPE_PIO) ? + XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY; + *addr = p->addr; + + return true; +} + +void arch_ioreq_domain_init(struct domain *d) +{ +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index d0df33b..8848764 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -1393,6 +1393,9 @@ static arm_hypercall_t arm_hypercall_table[] = { #ifdef CONFIG_HYPFS HYPERCALL(hypfs_op, 5), #endif +#ifdef CONFIG_IOREQ_SERVER + HYPERCALL(dm_op, 3), +#endif }; #ifndef NDEBUG @@ -1964,6 +1967,9 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs, case IO_HANDLED: advance_pc(regs, hsr); return; + case IO_RETRY: + /* finish later */ + return; case IO_UNHANDLED: /* IO unhandled, try another way to handle it. 
*/ break; diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h index 6819a3b..1da90f2 100644 --- a/xen/include/asm-arm/domain.h +++ b/xen/include/asm-arm/domain.h @@ -262,6 +262,8 @@ static inline void arch_vcpu_block(struct vcpu *v) {} #define arch_vm_assist_valid_mask(d) (1UL << VMASST_TYPE_runstate_update_flag) +#define has_vpci(d) ({ (void)(d); false; }) + #endif /* __ASM_DOMAIN_H__ */ /* diff --git a/xen/include/asm-arm/ioreq.h b/xen/include/asm-arm/ioreq.h new file mode 100644 index 0000000..5018597 --- /dev/null +++ b/xen/include/asm-arm/ioreq.h @@ -0,0 +1,70 @@ +/* + * ioreq.h: Hardware virtual machine assist interface definitions. + * + * Copyright (c) 2016 Citrix Systems Inc. + * Copyright (c) 2019 Arm ltd. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; If not, see . + */ + +#ifndef __ASM_ARM_IOREQ_H__ +#define __ASM_ARM_IOREQ_H__ + +#ifdef CONFIG_IOREQ_SERVER +enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v); +enum io_state try_fwd_ioserv(struct cpu_user_regs *regs, + struct vcpu *v, mmio_info_t *info); +#else +static inline enum io_state handle_ioserv(struct cpu_user_regs *regs, + struct vcpu *v) +{ + return IO_UNHANDLED; +} + +static inline enum io_state try_fwd_ioserv(struct cpu_user_regs *regs, + struct vcpu *v, mmio_info_t *info) +{ + return IO_UNHANDLED; +} +#endif + +static inline bool handle_pio(uint16_t port, unsigned int size, int dir) +{ + /* + * TODO: For Arm64, the main user will be PCI. So this should be + * implemented when we add support for vPCI. + */ + ASSERT_UNREACHABLE(); + return true; +} + +static inline void msix_write_completion(struct vcpu *v) +{ +} + +/* This correlation must not be altered */ +#define IOREQ_STATUS_HANDLED IO_HANDLED +#define IOREQ_STATUS_UNHANDLED IO_UNHANDLED +#define IOREQ_STATUS_RETRY IO_RETRY + +#endif /* __ASM_ARM_IOREQ_H__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h index 8dbfb27..7ab873c 100644 --- a/xen/include/asm-arm/mmio.h +++ b/xen/include/asm-arm/mmio.h @@ -37,6 +37,7 @@ enum io_state IO_ABORT, /* The IO was handled by the helper and led to an abort. */ IO_HANDLED, /* The IO was successfully handled by the helper. */ IO_UNHANDLED, /* The IO was not handled by the helper. 
 */
+    IO_RETRY,       /* Retry the emulation for some reason */
 };
 
 typedef int (*mmio_read_t)(struct vcpu *v, mmio_info_t *info,
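That closes the Arm-side plumbing: try_fwd_ioserv() queues the request in
STATE_IOREQ_READY for an IOREQ server, IO_RETRY tells the trap handler to
come back later, and handle_ioserv() consumes the completed response. For
illustration only, the emulator side of that handshake could look roughly
like the sketch below; emulate_mmio() is a hypothetical device model, and
a real emulator additionally needs memory barriers around the state
transitions plus event-channel signalling, which this sketch omits:

    #include <stdbool.h>
    #include <stdint.h>
    #include <xen/hvm/ioreq.h>   /* ioreq_t, STATE_IOREQ_*, IOREQ_READ */

    /* Hypothetical MMIO model supplied by the emulator. */
    extern uint64_t emulate_mmio(uint64_t addr, uint32_t size,
                                 bool is_write, uint64_t wdata);

    static void service_one_ioreq(ioreq_t *req)
    {
        if ( req->state != STATE_IOREQ_READY )
            return;                          /* nothing pending */

        req->state = STATE_IOREQ_INPROCESS;

        if ( req->dir == IOREQ_READ )
            req->data = emulate_mmio(req->addr, req->size, false, 0);
        else
            emulate_mmio(req->addr, req->size, true, req->data);

        /*
         * Publish the response; handle_ioserv() will copy req->data back
         * into the guest register that triggered the abort.
         */
        req->state = STATE_IORESP_READY;
    }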
From patchwork Fri Jan 29 01:48:43 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054961
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall,
    Volodymyr Babchuk, Julien Grall
Subject: [PATCH V6 15/24] xen/arm: Call vcpu_ioreq_handle_completion() in
    check_for_vcpu_work()
Date: Fri, 29 Jan 2021 03:48:43 +0200
Message-Id: <1611884932-1851-16-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch adds the remaining bits needed for IOREQ support on Arm.

Besides just calling vcpu_ioreq_handle_completion() we need to handle
its return value to make sure that all vCPU work is done before we
return to the guest (vcpu_ioreq_handle_completion() may return false
if there is vCPU work to do or the IOREQ state is invalid). For that
reason we use an unbounded loop in leave_hypervisor_to_guest().

The worst that can happen here is that the vCPU never runs again (the
I/O never completes). But, in Xen's case, if the I/O never completes
then it most likely means that something went horribly wrong with the
Device Emulator, and it is most likely not safe to continue. So
letting the vCPU spin forever if the I/O never completes is safer than
letting it continue and leaving the guest in an unclear state, and is
the best we can do for now.

Please note that with this loop we will not spin forever on a pCPU,
preventing any other vCPUs from being scheduled. At every iteration we
will call check_for_pcpu_work(), which will process pending softirqs.
In case of failure, the guest will crash and the vCPU will be
unscheduled. In the normal case, if rescheduling is necessary, the
vCPU will be rescheduled to give way to someone else.
Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Stefano Stabellini
Acked-by: Julien Grall
CC: Julien Grall
[On Arm only] Tested-by: Wei Chen

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch, changes were derived from (+ new explanation):
     arm/ioreq: Introduce arch specific bits for IOREQ/DM features

Changes V2 -> V3:
   - update patch description

Changes V3 -> V4:
   - update patch description and comment in code

Changes V4 -> V5:
   - add Stefano's R-b
   - update patch subject/description and comment in code, was
     "xen/arm: Stick around in leave_hypervisor_to_guest until I/O has
     completed"
   - change loop logic a bit
   - squash with changes to check_for_vcpu_work() from patch #14

Changes V5 -> V6:
   - add Julien's A-b

---
---
 xen/arch/arm/traps.c | 26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 8848764..cb37a45 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -2269,12 +2270,23 @@ static void check_for_pcpu_work(void)
  * Process pending work for the vCPU. Any call should be fast or
  * implement preemption.
  */
-static void check_for_vcpu_work(void)
+static bool check_for_vcpu_work(void)
 {
     struct vcpu *v = current;
 
+#ifdef CONFIG_IOREQ_SERVER
+    bool handled;
+
+    local_irq_enable();
+    handled = vcpu_ioreq_handle_completion(v);
+    local_irq_disable();
+
+    if ( !handled )
+        return true;
+#endif
+
     if ( likely(!v->arch.need_flush_to_ram) )
-        return;
+        return false;
 
     /*
      * Give a chance for the pCPU to process work before handling the vCPU
@@ -2285,6 +2297,8 @@ static void check_for_vcpu_work(void)
     local_irq_enable();
     p2m_flush_vm(v);
     local_irq_disable();
+
+    return false;
 }
 
 /*
@@ -2297,7 +2311,13 @@ void leave_hypervisor_to_guest(void)
 {
     local_irq_disable();
 
-    check_for_vcpu_work();
+    /*
+     * check_for_vcpu_work() may return true if there is more work to do
+     * before the vCPU can safely resume. This gives us an opportunity to
+     * deschedule the vCPU if needed.
+     */
+    while ( check_for_vcpu_work() )
+        check_for_pcpu_work();
     check_for_pcpu_work();
 
     vgic_sync_to_lrs();
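The commit message's claim that this unbounded loop cannot monopolise the
pCPU rests on check_for_pcpu_work(), which is not shown in the hunk; it
pre-exists in xen/arch/arm/traps.c. Roughly sketched (not verbatim), it
drains pending softirqs, including SCHEDULE_SOFTIRQ, which is what allows
the scheduler to take the spinning vCPU off the pCPU:

    /* Rough sketch of the pre-existing helper the new loop relies on. */
    static void check_for_pcpu_work(void)
    {
        ASSERT(!local_irq_is_enabled());

        while ( softirq_pending(smp_processor_id()) )
        {
            local_irq_enable();
            do_softirq();        /* runs pending softirqs, e.g. the scheduler */
            local_irq_disable();
        }
    }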
From patchwork Fri Jan 29 01:48:44 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054959
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall,
    Volodymyr Babchuk, Andrew Cooper, George Dunlap, Ian Jackson,
    Jan Beulich, Wei Liu, Roger Pau Monné, Julien Grall
Subject: [PATCH V6 16/24] xen/mm: Handle properly reference in
    set_foreign_p2m_entry() on Arm
Date: Fri, 29 Jan 2021 03:48:44 +0200
Message-Id: <1611884932-1851-17-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch implements reference counting of foreign entries in
set_foreign_p2m_entry() on Arm. This is mandatory if we want to run an
emulator (IOREQ server) in a domain other than dom0, as we can't trust
it to do the right thing if it is not running in dom0. So we need to
grab a reference on the page to avoid it disappearing.

It is valid to always pass "p2m_map_foreign_rw" type to
guest_physmap_add_entry() since the current and foreign domains will
always be different; a case where they are equal would be rejected by
rcu_lock_remote_domain_by_id(). Besides the similar comment in the
code, put a respective ASSERT() to catch incorrect usage in the future.

It was tested with the IOREQ feature to confirm that all the pages
given to this function belong to a domain, so we can use the same
approach as for the XENMAPSPACE_gmfn_foreign handling in
xenmem_add_to_physmap_one().

This involves adding an extra parameter for the foreign domain to
set_foreign_p2m_entry() and a helper to indicate whether the arch
supports reference counting of foreign entries, in which case the
restriction to the hardware domain in the common code can be skipped.
Signed-off-by: Oleksandr Tyshchenko CC: Julien Grall Acked-by: Stefano Stabellini Reviewed-by: Julien Grall Reviewed-by: Jan Beulich [On Arm only] Tested-by: Wei Chen --- Please note, this is a split/cleanup/hardening of Julien's PoC: "Add support for Guest IO forwarding to a device emulator" Changes RFC -> V1: - new patch, was split from: "[RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features" - rewrite a logic to handle properly reference in set_foreign_p2m_entry() instead of treating foreign entries as p2m_ram_rw Changes V1 -> V2: - rebase according to the recent changes to acquire_resource() - update patch description - introduce arch_refcounts_p2m() - add an explanation why p2m_map_foreign_rw is valid - move set_foreign_p2m_entry() to p2m-common.h - add const to new parameter Changes V2 -> V3: - update patch description - rename arch_refcounts_p2m() to arch_acquire_resource_check() - move comment to x86’s arch_acquire_resource_check() - return rc in Arm's set_foreign_p2m_entry() - put a respective ASSERT() into Arm's set_foreign_p2m_entry() Changes V3 -> V4: - update arch_acquire_resource_check() implementation on x86 and common code which uses it, pass struct domain to the function - put ASSERT() to x86/Arm set_foreign_p2m_entry() - use arch_acquire_resource_check() in p2m_add_foreign() instead of open-coding it Changes V4 -> V5: - update x86's arch_acquire_resource_check() - use single return statement - update comment - add Jan's and Julien's R-b, Stefano's A-b Changes V5 -> V6: - no changes --- --- xen/arch/arm/p2m.c | 26 ++++++++++++++++++++++++++ xen/arch/x86/mm/p2m.c | 9 ++++++--- xen/common/memory.c | 9 ++------- xen/include/asm-arm/p2m.h | 19 +++++++++---------- xen/include/asm-x86/p2m.h | 14 +++++++++++--- xen/include/xen/p2m-common.h | 4 ++++ 6 files changed, 58 insertions(+), 23 deletions(-) diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c index 4eeb867..d41c4fa 100644 --- a/xen/arch/arm/p2m.c +++ b/xen/arch/arm/p2m.c @@ -1380,6 +1380,32 @@ int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn, return p2m_remove_mapping(d, gfn, (1 << page_order), mfn); } +int set_foreign_p2m_entry(struct domain *d, const struct domain *fd, + unsigned long gfn, mfn_t mfn) +{ + struct page_info *page = mfn_to_page(mfn); + int rc; + + ASSERT(arch_acquire_resource_check(d)); + + if ( !get_page(page, fd) ) + return -EINVAL; + + /* + * It is valid to always use p2m_map_foreign_rw here as if this gets + * called then d != fd. A case when d == fd would be rejected by + * rcu_lock_remote_domain_by_id() earlier. Put a respective ASSERT() + * to catch incorrect usage in future. + */ + ASSERT(d != fd); + + rc = guest_physmap_add_entry(d, _gfn(gfn), mfn, 0, p2m_map_foreign_rw); + if ( rc ) + put_page(page); + + return rc; +} + static struct page_info *p2m_allocate_root(void) { struct page_info *page; diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c index c1dd45b..2091aed 100644 --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -1323,8 +1323,11 @@ static int set_typed_p2m_entry(struct domain *d, unsigned long gfn_l, } /* Set foreign mfn in the given guest's p2m table. 
*/ -int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn) +int set_foreign_p2m_entry(struct domain *d, const struct domain *fd, + unsigned long gfn, mfn_t mfn) { + ASSERT(arch_acquire_resource_check(d)); + return set_typed_p2m_entry(d, gfn, mfn, PAGE_ORDER_4K, p2m_map_foreign, p2m_get_hostp2m(d)->default_access); } @@ -2587,7 +2590,7 @@ static int p2m_add_foreign(struct domain *tdom, unsigned long fgfn, * hvm fixme: until support is added to p2m teardown code to cleanup any * foreign entries, limit this to hardware domain only. */ - if ( !is_hardware_domain(tdom) ) + if ( !arch_acquire_resource_check(tdom) ) return -EPERM; if ( foreigndom == DOMID_XEN ) @@ -2643,7 +2646,7 @@ static int p2m_add_foreign(struct domain *tdom, unsigned long fgfn, * will update the m2p table which will result in mfn -> gpfn of dom0 * and not fgfn of domU. */ - rc = set_foreign_p2m_entry(tdom, gpfn, mfn); + rc = set_foreign_p2m_entry(tdom, fdom, gpfn, mfn); if ( rc ) gdprintk(XENLOG_WARNING, "set_foreign_p2m_entry failed. " "gpfn:%lx mfn:%lx fgfn:%lx td:%d fd:%d\n", diff --git a/xen/common/memory.c b/xen/common/memory.c index 8eb05b1..33296e6 100644 --- a/xen/common/memory.c +++ b/xen/common/memory.c @@ -1139,12 +1139,7 @@ static int acquire_resource( xen_pfn_t mfn_list[32]; int rc; - /* - * FIXME: Until foreign pages inserted into the P2M are properly - * reference counted, it is unsafe to allow mapping of - * resource pages unless the caller is the hardware domain. - */ - if ( paging_mode_translate(currd) && !is_hardware_domain(currd) ) + if ( !arch_acquire_resource_check(currd) ) return -EACCES; if ( copy_from_guest(&xmar, arg, 1) ) @@ -1212,7 +1207,7 @@ static int acquire_resource( for ( i = 0; !rc && i < xmar.nr_frames; i++ ) { - rc = set_foreign_p2m_entry(currd, gfn_list[i], + rc = set_foreign_p2m_entry(currd, d, gfn_list[i], _mfn(mfn_list[i])); /* rc should be -EIO for any iteration other than the first */ if ( rc && i ) diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h index 28ca9a8..4f8b3b0 100644 --- a/xen/include/asm-arm/p2m.h +++ b/xen/include/asm-arm/p2m.h @@ -161,6 +161,15 @@ typedef enum { #endif #include +static inline bool arch_acquire_resource_check(struct domain *d) +{ + /* + * The reference counting of foreign entries in set_foreign_p2m_entry() + * is supported on Arm. + */ + return true; +} + static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx) { @@ -392,16 +401,6 @@ static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order) return gfn_add(gfn, 1UL << order); } -static inline int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, - mfn_t mfn) -{ - /* - * NOTE: If this is implemented then proper reference counting of - * foreign entries will need to be implemented. - */ - return -EOPNOTSUPP; -} - /* * A vCPU has cache enabled only when the MMU is enabled and data cache * is enabled. diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h index 5d7836d..7d63f57 100644 --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -382,6 +382,17 @@ struct p2m_domain { #endif #include +static inline bool arch_acquire_resource_check(struct domain *d) +{ + /* + * FIXME: Until foreign pages inserted into the P2M are properly + * reference counted, it is unsafe to allow mapping of + * resource pages unless the caller is the hardware domain + * (see set_foreign_p2m_entry()). 
+     */
+    return !paging_mode_translate(d) || is_hardware_domain(d);
+}
+
 /*
  * Updates vCPU's n2pm to match its np2m_base in VMCx12 and returns that np2m.
  */
@@ -647,9 +658,6 @@ int p2m_finish_type_change(struct domain *d,
 int p2m_is_logdirty_range(struct p2m_domain *, unsigned long start,
                           unsigned long end);
 
-/* Set foreign entry in the p2m table (for priv-mapping) */
-int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn);
-
 /* Set mmio addresses in the p2m table (for pass-through) */
 int set_mmio_p2m_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
                        unsigned int order);
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 3753bc0..a322e73 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -3,6 +3,10 @@
 
 #include
 
+/* Set foreign entry in the p2m table */
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn);
+
 /* Remove a page from a domain's p2m table */
 int __must_check guest_physmap_remove_page(struct domain *d, gfn_t gfn,
                                            mfn_t mfn,
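To complete the lifetime story told in this patch: the reference taken
with get_page(page, fd) in Arm's set_foreign_p2m_entry() must be dropped
again when the foreign entry is removed or the p2m is torn down. A
simplified sketch of that release path, modelled on Arm's
p2m_put_l3_page() (the name put_foreign_entry and the exact shape are
illustrative, not lifted from the series):

    /* Simplified sketch: dropping the reference when a foreign entry goes. */
    static void put_foreign_entry(lpae_t pte)
    {
        if ( p2m_is_foreign(pte.p2m.type) )
        {
            mfn_t mfn = lpae_get_mfn(pte);

            ASSERT(mfn_valid(mfn));
            /* Balances the get_page(page, fd) in set_foreign_p2m_entry(). */
            put_page(mfn_to_page(mfn));
        }
    }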
From patchwork Fri Jan 29 01:48:45 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054967
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall,
    Volodymyr Babchuk, Paul Durrant, Julien Grall
Subject: [PATCH V6 17/24] xen/ioreq: Introduce domain_has_ioreq_server()
Date: Fri, 29 Jan 2021 03:48:45 +0200
Message-Id: <1611884932-1851-18-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch introduces a helper whose main purpose is to check whether a
domain is using IOREQ server(s).

On Arm the current benefit is to avoid calling
vcpu_ioreq_handle_completion() (which implies iterating over all
possible IOREQ servers anyway) on every return in
leave_hypervisor_to_guest() if there are no active servers for the
particular domain.

Also this helper will be used by one of the subsequent patches on Arm.

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
Reviewed-by: Stefano Stabellini
Reviewed-by: Paul Durrant
[On Arm only] Tested-by: Wei Chen

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - update patch description
   - guard helper with CONFIG_IOREQ_SERVER
   - remove "hvm" prefix
   - modify helper to just return d->arch.hvm.ioreq_server.nr_servers
   - put suitable ASSERT()s
   - use ASSERT(d->ioreq_server.server[id] ?
!s : !!s) in set_ioreq_server() - remove d->ioreq_server.nr_servers = 0 from hvm_ioreq_init() Changes V2 -> V3: - update patch description - remove ASSERT()s from the helper, add a comment - use #ifdef CONFIG_IOREQ_SERVER inside function body - use new ASSERT() construction in set_ioreq_server() Changes V3 -> V4: - update patch description - drop per-domain variable "nr_servers" - reimplement a helper to count the non-NULL entries - make the helper out-of-line Changes V4 -> V5: - add Stefano's and Paul's R-b Changes V5 -> V6: - no changes --- --- xen/arch/arm/traps.c | 15 +++++++++------ xen/common/ioreq.c | 16 ++++++++++++++++ xen/include/xen/ioreq.h | 2 ++ 3 files changed, 27 insertions(+), 6 deletions(-) diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c index cb37a45..476900e 100644 --- a/xen/arch/arm/traps.c +++ b/xen/arch/arm/traps.c @@ -2275,14 +2275,17 @@ static bool check_for_vcpu_work(void) struct vcpu *v = current; #ifdef CONFIG_IOREQ_SERVER - bool handled; + if ( domain_has_ioreq_server(v->domain) ) + { + bool handled; - local_irq_enable(); - handled = vcpu_ioreq_handle_completion(v); - local_irq_disable(); + local_irq_enable(); + handled = vcpu_ioreq_handle_completion(v); + local_irq_disable(); - if ( !handled ) - return true; + if ( !handled ) + return true; + } #endif if ( likely(!v->arch.need_flush_to_ram) ) diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c index 07572a5..5b0f03e 100644 --- a/xen/common/ioreq.c +++ b/xen/common/ioreq.c @@ -80,6 +80,22 @@ static ioreq_t *get_ioreq(struct ioreq_server *s, struct vcpu *v) return &p->vcpu_ioreq[v->vcpu_id]; } +/* + * This should only be used when d == current->domain or when they're + * distinct and d is paused. Otherwise the result is stale before + * the caller can inspect it. 
+ */
+bool domain_has_ioreq_server(const struct domain *d)
+{
+    const struct ioreq_server *s;
+    unsigned int id;
+
+    FOR_EACH_IOREQ_SERVER(d, id, s)
+        return true;
+
+    return false;
+}
+
 static struct ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
                                            struct ioreq_server **srvp)
 {
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 0b433e2..89ee171 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -83,6 +83,8 @@ static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
 #define HANDLE_BUFIOREQ(s) \
     ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
 
+bool domain_has_ioreq_server(const struct domain *d);
+
 bool vcpu_ioreq_pending(struct vcpu *v);
 bool vcpu_ioreq_handle_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
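The helper's body, a loop whose only statement is "return true", makes
sense once FOR_EACH_IOREQ_SERVER() is unfolded: the iterator skips NULL
slots, so the body executes only if at least one server is registered.
For reference, the macro in xen/common/ioreq.c reads roughly as follows
(a sketch, not a verbatim quote):

    #define GET_IOREQ_SERVER(d, id) \
        (d)->ioreq_server.server[id]

    /* Visit only non-NULL server slots, from the highest id downwards. */
    #define FOR_EACH_IOREQ_SERVER(d, id, s) \
        for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \
            if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
                continue; \
            else

Hence domain_has_ioreq_server() returns true exactly when some slot is
populated, without maintaining a separate counter (the per-domain
nr_servers field was dropped in V4 for this reason).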
From patchwork Fri Jan 29 01:48:46 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054971
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Ian Jackson, Wei Liu, Andrew Cooper,
    George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini,
    Volodymyr Babchuk, Julien Grall
Subject: [PATCH V6 18/24] xen/dm: Introduce xendevicemodel_set_irq_level
    DM op
Date: Fri, 29 Jan 2021 03:48:46 +0200
Message-Id: <1611884932-1851-19-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch adds the ability for the device emulator to notify the other
end (some entity running in the guest) using an SPI, and implements the
Arm specific bits for it.

The proposed interface allows the emulator to set the logical level of
one of a domain's IRQ lines. We can't reuse the existing DM op
(xen_dm_op_set_isa_irq_level) to inject an interrupt as the "isa_irq"
field is only 8-bit and able to cover IRQ 0 - 255, whereas we need a
wider range (0 - 1020).

Please note, for an edge-triggered interrupt (which is used for the
virtio-mmio emulation) we only trigger the interrupt on Arm if the
level is asserted (rising edge) and do nothing if the level is
deasserted (falling edge), so the call could be named "trigger_irq"
(without the level parameter). But, in order to model the line closely
(to be able to support level-triggered interrupts) we need to know
whether the line is low or high, so the proposed interface has been
chosen. However, it is worth mentioning that in the case of a
level-triggered interrupt, we should keep injecting the interrupt into
the guest until the line is deasserted (this is not covered by the
current patch).
Signed-off-by: Julien Grall
Signed-off-by: Oleksandr Tyshchenko
Acked-by: Stefano Stabellini
[On Arm only] Tested-by: Wei Chen
Acked-by: Ian Jackson

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - check incoming parameters in arch_dm_op()
   - add explicit padding to struct xen_dm_op_set_irq_level

Changes V1 -> V2:
   - update the author of a patch
   - update patch description
   - check that padding is always 0
   - mention that interface is Arm only and only SPIs are supported
     for now
   - allow to set the logical level of a line for non-allocated
     interrupts only
   - add xen_dm_op_set_irq_level_t

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - update patch description
   - update patch according to the IOREQ related dm-op handling changes

Changes V4 -> V5:
   - rebase
   - add Stefano's A-b

Changes V5 -> V6:
   - no changes

---
---
 tools/include/xendevicemodel.h               |  4 +++
 tools/libs/devicemodel/core.c                | 18 ++++++++++
 tools/libs/devicemodel/libxendevicemodel.map |  1 +
 xen/arch/arm/dm.c                            | 54 +++++++++++++++-
 xen/include/public/hvm/dm_op.h               | 16 +++++++
 5 files changed, 92 insertions(+), 1 deletion(-)

diff --git a/tools/include/xendevicemodel.h b/tools/include/xendevicemodel.h
index e877f5c..c06b3c8 100644
--- a/tools/include/xendevicemodel.h
+++ b/tools/include/xendevicemodel.h
@@ -209,6 +209,10 @@ int xendevicemodel_set_isa_irq_level(
     xendevicemodel_handle *dmod, domid_t domid, uint8_t irq,
     unsigned int level);
 
+int xendevicemodel_set_irq_level(
+    xendevicemodel_handle *dmod, domid_t domid, unsigned int irq,
+    unsigned int level);
+
 /**
  * This function maps a PCI INTx line to an IRQ line.
  *
diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
index 4d40639..30bd79f 100644
--- a/tools/libs/devicemodel/core.c
+++ b/tools/libs/devicemodel/core.c
@@ -430,6 +430,24 @@ int xendevicemodel_set_isa_irq_level(
     return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
 }
 
+int xendevicemodel_set_irq_level(
+    xendevicemodel_handle *dmod, domid_t domid, uint32_t irq,
+    unsigned int level)
+{
+    struct xen_dm_op op;
+    struct xen_dm_op_set_irq_level *data;
+
+    memset(&op, 0, sizeof(op));
+
+    op.op = XEN_DMOP_set_irq_level;
+    data = &op.u.set_irq_level;
+
+    data->irq = irq;
+    data->level = level;
+
+    return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
+}
+
 int xendevicemodel_set_pci_link_route(
     xendevicemodel_handle *dmod, domid_t domid, uint8_t link, uint8_t irq)
 {
diff --git a/tools/libs/devicemodel/libxendevicemodel.map b/tools/libs/devicemodel/libxendevicemodel.map
index 561c62d..a0c3012 100644
--- a/tools/libs/devicemodel/libxendevicemodel.map
+++ b/tools/libs/devicemodel/libxendevicemodel.map
@@ -32,6 +32,7 @@ VERS_1.2 {
     global:
         xendevicemodel_relocate_memory;
         xendevicemodel_pin_memory_cacheattr;
+        xendevicemodel_set_irq_level;
 } VERS_1.1;
 
 VERS_1.3 {
diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
index f254ed7..7854133 100644
--- a/xen/arch/arm/dm.c
+++ b/xen/arch/arm/dm.c
@@ -20,6 +20,8 @@
 #include
 #include
 
+#include
+
 int dm_op(const struct dmop_args *op_args)
 {
     struct domain *d;
@@ -35,6 +37,7 @@ int dm_op(const struct dmop_args *op_args)
         [XEN_DMOP_unmap_io_range_from_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range),
         [XEN_DMOP_set_ioreq_server_state] = sizeof(struct xen_dm_op_set_ioreq_server_state),
         [XEN_DMOP_destroy_ioreq_server] = sizeof(struct xen_dm_op_destroy_ioreq_server),
+        [XEN_DMOP_set_irq_level] = sizeof(struct xen_dm_op_set_irq_level),
     };
 
     rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
@@ -73,7 +76,56 @@ int dm_op(const struct dmop_args *op_args)
     if ( op.pad )
         goto out;
 
-    rc = ioreq_server_dm_op(&op, d, &const_op);
+    switch ( op.op )
+    {
+    case XEN_DMOP_set_irq_level:
+    {
+        const struct xen_dm_op_set_irq_level *data =
+            &op.u.set_irq_level;
+        unsigned int i;
+
+        /* Only SPIs are supported */
+        if ( (data->irq < NR_LOCAL_IRQS) || (data->irq >= vgic_num_irqs(d)) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        if ( data->level != 0 && data->level != 1 )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        /* Check that padding is always 0 */
+        for ( i = 0; i < sizeof(data->pad); i++ )
+        {
+            if ( data->pad[i] )
+            {
+                rc = -EINVAL;
+                break;
+            }
+        }
+
+        /*
+         * Allow to set the logical level of a line for non-allocated
+         * interrupts only.
+         */
+        if ( test_bit(data->irq, d->arch.vgic.allocated_irqs) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        vgic_inject_irq(d, NULL, data->irq, data->level);
+        rc = 0;
+        break;
+    }
+
+    default:
+        rc = ioreq_server_dm_op(&op, d, &const_op);
+        break;
+    }
 
     if ( (!rc || rc == -ERESTART) &&
          !const_op && copy_to_guest_offset(op_args->buf[0].h, offset,
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 66cae1a..1f70d58 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -434,6 +434,21 @@ struct xen_dm_op_pin_memory_cacheattr {
 };
 typedef struct xen_dm_op_pin_memory_cacheattr xen_dm_op_pin_memory_cacheattr_t;
 
+/*
+ * XEN_DMOP_set_irq_level: Set the logical level of one of a domain's
+ * IRQ lines (currently Arm only).
+ * Only SPIs are supported.
+ */
+#define XEN_DMOP_set_irq_level 19
+
+struct xen_dm_op_set_irq_level {
+    uint32_t irq;
+    /* IN - Level: 0 -> deasserted, 1 -> asserted */
+    uint8_t level;
+    uint8_t pad[3];
+};
+typedef struct xen_dm_op_set_irq_level xen_dm_op_set_irq_level_t;
+
 struct xen_dm_op {
     uint32_t op;
     uint32_t pad;
@@ -447,6 +462,7 @@ struct xen_dm_op {
         xen_dm_op_track_dirty_vram_t track_dirty_vram;
         xen_dm_op_set_pci_intx_level_t set_pci_intx_level;
         xen_dm_op_set_isa_irq_level_t set_isa_irq_level;
+        xen_dm_op_set_irq_level_t set_irq_level;
         xen_dm_op_set_pci_link_route_t set_pci_link_route;
         xen_dm_op_modified_memory_t modified_memory;
         xen_dm_op_set_mem_type_t set_mem_type;
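From the device-emulator side, using the new op is a one-liner per edge.
The snippet below is illustrative only (not part of the series): dmod
would come from xendevicemodel_open(), and virq is an assumed SPI number,
i.e. in the 32-1019 range and not allocated to any real or emulated
device in the guest:

    #include <xendevicemodel.h>

    static int notify_guest(xendevicemodel_handle *dmod, domid_t domid,
                            uint32_t virq)
    {
        int rc;

        /* Rising edge: this is what actually injects an edge-triggered IRQ. */
        rc = xendevicemodel_set_irq_level(dmod, domid, virq, 1);
        if ( rc )
            return rc;

        /* Lower the line again so the next notification is a fresh edge. */
        return xendevicemodel_set_irq_level(dmod, domid, virq, 0);
    }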
From patchwork Fri Jan 29 01:48:47 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054969
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall,
    Volodymyr Babchuk, Julien Grall
Subject: [PATCH V6 19/24] xen/arm: io: Abstract sign-extension
Date: Fri, 29 Jan 2021 03:48:47 +0200
Message-Id: <1611884932-1851-20-git-send-email-olekstysh@gmail.com>
From: Oleksandr Tyshchenko

In order to avoid code duplication (both handle_read() and handle_ioserv()
contain the same code for the sign-extension), move this code into a
common helper to be used by both.

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
Reviewed-by: Stefano Stabellini
[On Arm only]
Tested-by: Wei Chen
Acked-by: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - no changes here, but in new patch:
     "xen/arm: io: Harden sign extension check"

Changes V4 -> V5:
   - add Stefano's R-b

Changes V5 -> V6:
   - no changes
---
 xen/arch/arm/io.c           | 18 ++----------------
 xen/arch/arm/ioreq.c        | 17 +----------------
 xen/include/asm-arm/traps.h | 24 ++++++++++++++++++++++++
 3 files changed, 27 insertions(+), 32 deletions(-)

diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index 7ac0303..729287e 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include <asm/traps.h>
 
 #include "decode.h"
 
@@ -40,26 +41,11 @@ static enum io_state handle_read(const struct mmio_handler *handler,
      * setting r).
      */
     register_t r = 0;
-    uint8_t size = (1 << dabt.size) * 8;
 
     if ( !handler->ops->read(v, info, &r, handler->priv) )
         return IO_ABORT;
 
-    /*
-     * Sign extend if required.
-     * Note that we expect the read handler to have zeroed the bits
-     * outside the requested access size.
-     */
-    if ( dabt.sign && (r & (1UL << (size - 1))) )
-    {
-        /*
-         * We are relying on register_t using the same as
-         * an unsigned long in order to keep the 32-bit assembly
-         * code smaller.
-         */
-        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
-        r |= (~0UL) << size;
-    }
+    r = sign_extend(dabt, r);
 
     set_user_reg(regs, dabt.reg, r);
 
diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
index 43f5d6b..308650b 100644
--- a/xen/arch/arm/ioreq.c
+++ b/xen/arch/arm/ioreq.c
@@ -28,7 +28,6 @@ enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
     const union hsr hsr = { .bits = regs->hsr };
     const struct hsr_dabt dabt = hsr.dabt;
     /* Code is similar to handle_read */
-    uint8_t size = (1 << dabt.size) * 8;
     register_t r = v->io.req.data;
 
     /* We are done with the IO */
@@ -37,21 +36,7 @@ enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
     if ( dabt.write )
         return IO_HANDLED;
 
-    /*
-     * Sign extend if required.
-     * Note that we expect the read handler to have zeroed the bits
-     * outside the requested access size.
-     */
-    if ( dabt.sign && (r & (1UL << (size - 1))) )
-    {
-        /*
-         * We are relying on register_t using the same as
-         * an unsigned long in order to keep the 32-bit assembly
-         * code smaller.
-         */
-        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
-        r |= (~0UL) << size;
-    }
+    r = sign_extend(dabt, r);
 
     set_user_reg(regs, dabt.reg, r);
 
diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
index c4a3d0f..c6b3cc7 100644
--- a/xen/include/asm-arm/traps.h
+++ b/xen/include/asm-arm/traps.h
@@ -84,6 +84,30 @@ static inline bool VABORT_GEN_BY_GUEST(const struct cpu_user_regs *regs)
            (unsigned long)abort_guest_exit_end == regs->pc;
 }
 
+/* Check whether the sign extension is required and perform it */
+static inline register_t sign_extend(const struct hsr_dabt dabt, register_t r)
+{
+    uint8_t size = (1 << dabt.size) * 8;
+
+    /*
+     * Sign extend if required.
+     * Note that we expect the read handler to have zeroed the bits
+     * outside the requested access size.
+     */
+    if ( dabt.sign && (r & (1UL << (size - 1))) )
+    {
+        /*
+         * We are relying on register_t using the same as
+         * an unsigned long in order to keep the 32-bit assembly
+         * code smaller.
+         */
+        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
+        r |= (~0UL) << size;
+    }
+
+    return r;
+}
+
 #endif /* __ASM_ARM_TRAPS__ */
 /*
  * Local variables:
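The arithmetic of the new helper can be checked in isolation. Below is a
minimal userspace model of sign_extend(), with hsr_dabt reduced to plain
sign/size parameters purely for illustration (not the Xen types):

#include <stdint.h>
#include <stdio.h>

/* Userspace model of sign_extend(): 'bits' is the access width in bits,
 * standing in for (1 << dabt.size) * 8. */
static unsigned long model_sign_extend(unsigned int bits, int sign,
                                       unsigned long r)
{
    /* If the top bit of the access is set, fill the upper bits with ones. */
    if ( sign && (r & (1UL << (bits - 1))) )
        r |= (~0UL) << bits;

    return r;
}

int main(void)
{
    /* A signed byte load of 0x80 must read back as -128 in the register. */
    printf("%lx\n", model_sign_extend(8, 1, 0x80)); /* ffffffffffffff80 */
    /* An unsigned load of the same value is left untouched. */
    printf("%lx\n", model_sign_extend(8, 0, 0x80)); /* 80 */
    return 0;
}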
From patchwork Fri Jan 29 01:48:48 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054981
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall,
    Volodymyr Babchuk, Julien Grall
Subject: [PATCH V6 20/24] xen/arm: io: Harden sign extension check
Date: Fri, 29 Jan 2021 03:48:48 +0200
Message-Id: <1611884932-1851-21-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

In an ideal world we would never see undefined behavior when propagating
the sign bit, since that bit can only be set for accesses smaller than the
register size (i.e. byte/half-word for aarch32; byte/half-word/word for
aarch64).

In the real world we need to cater for *possible* hardware bugs, such as
advertising sign extension for a 64-bit (resp. 32-bit) access on Arm64
(resp. Arm32). So harden the code a bit more to prevent undefined behavior
when propagating the sign bit on such buggy hardware.

Signed-off-by: Oleksandr Tyshchenko
Reviewed-by: Stefano Stabellini
Reviewed-by: Volodymyr Babchuk
CC: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V3 -> V4:
   - new patch

Changes V4 -> V5:
   - add Stefano's and Volodymyr's R-b

Changes V5 -> V6:
   - no changes
---
 xen/include/asm-arm/traps.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
index c6b3cc7..2ed2b85 100644
--- a/xen/include/asm-arm/traps.h
+++ b/xen/include/asm-arm/traps.h
@@ -94,7 +94,8 @@ static inline register_t sign_extend(const struct hsr_dabt dabt, register_t r)
      * Note that we expect the read handler to have zeroed the bits
      * outside the requested access size.
      */
-    if ( dabt.sign && (r & (1UL << (size - 1))) )
+    if ( dabt.sign && (size < sizeof(register_t) * 8) &&
+         (r & (1UL << (size - 1))) )
     {
         /*
          * We are relying on register_t using the same as
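The point of the extra size check is easiest to see in a userspace model:
without it, a (buggy) sign-extended access of full register width would
evaluate (~0UL) << 64, a shift by the operand's width, which is undefined
behaviour in C. A guarded variant of the model shown after the previous
patch:

#include <limits.h>
#include <stdio.h>

static unsigned long model_sign_extend_hardened(unsigned int bits, int sign,
                                                unsigned long r)
{
    /* The new guard: never fill when the access already covers the whole
     * register, so (~0UL) << bits is only evaluated for bits < 64 on LP64.
     * Short-circuiting also keeps 1UL << (bits - 1) within range. */
    if ( sign && bits < sizeof(unsigned long) * CHAR_BIT &&
         (r & (1UL << (bits - 1))) )
        r |= (~0UL) << bits;

    return r;
}

int main(void)
{
    /* A bogus sign-extended full-width access is now passed through
     * unchanged instead of triggering an undefined 64-bit shift. */
    printf("%lx\n", model_sign_extend_hardened(64, 1, ~0UL));
    return 0;
}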
From patchwork Fri Jan 29 01:48:49 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054979
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Jan Beulich, Andrew Cooper,
    Roger Pau Monné, Wei Liu, George Dunlap, Ian Jackson,
    Julien Grall, Stefano Stabellini, Paul Durrant, Julien Grall
Subject: [PATCH V6 21/24] xen/ioreq: Make x86's send_invalidate_req() common
Date: Fri, 29 Jan 2021 03:48:49 +0200
Message-Id: <1611884932-1851-22-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

As IOREQ is a common feature now, and we will also need to invalidate the
qemu/demu mapcache on Arm when the required condition occurs, this patch
moves the function to the common code (and renames it to
ioreq_signal_mapcache_invalidate).

This patch also moves the per-domain qemu_mapcache_invalidate variable out
of the arch sub-struct (and drops the "qemu" prefix). We don't put this
variable inside the #ifdef CONFIG_IOREQ_SERVER at the end of struct
domain, but in the hole next to the group of five bools further up, which
is more efficient.

The subsequent patch will add mapcache invalidation handling on Arm.
Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
Reviewed-by: Paul Durrant
Acked-by: Jan Beulich
[On Arm only]
Tested-by: Wei Chen

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - move send_invalidate_req() to the common code
   - update patch subject/description
   - move qemu_mapcache_invalidate out of the arch sub-struct,
     update checks
   - remove #if defined(CONFIG_ARM64) from the common code

Changes V1 -> V2:
   - was split into:
     - xen/ioreq: Make x86's send_invalidate_req() common
     - xen/arm: Add mapcache invalidation handling
   - update patch description/subject
   - move Arm bits to a separate patch
   - don't alter the common code, the flag is set by arch code
   - rename send_invalidate_req() to send_invalidate_ioreq()
   - guard qemu_mapcache_invalidate with CONFIG_IOREQ_SERVER
   - use bool instead of bool_t
   - remove blank line between head comment and #include-s

Changes V2 -> V3:
   - update patch description
   - drop "qemu" prefix from the variable name
   - rename send_invalidate_req() to ioreq_signal_mapcache_invalidate()

Changes V3 -> V4:
   - change variable location in struct domain

Changes V4 -> V5:
   - add Jan's A-b and Paul's R-b

Changes V5 -> V6:
   - no changes
---
 xen/arch/x86/hvm/hypercall.c     |  9 +++++----
 xen/arch/x86/hvm/io.c            | 14 --------------
 xen/common/ioreq.c               | 14 ++++++++++++++
 xen/include/asm-x86/hvm/domain.h |  1 -
 xen/include/asm-x86/hvm/io.h     |  1 -
 xen/include/xen/ioreq.h          |  1 +
 xen/include/xen/sched.h          |  5 +++++
 7 files changed, 25 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index ac573c8..6d41c56 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -20,6 +20,7 @@
  */
 #include
 #include
+#include <xen/ioreq.h>
 #include
 #include
 
@@ -47,7 +48,7 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         rc = compat_memory_op(cmd, arg);
 
     if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
-        curr->domain->arch.hvm.qemu_mapcache_invalidate = true;
+        curr->domain->mapcache_invalidate = true;
 
     return rc;
 }
@@ -326,9 +327,9 @@ int hvm_hypercall(struct cpu_user_regs *regs)
 
     HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu -> %lx", eax, regs->rax);
 
-    if ( unlikely(currd->arch.hvm.qemu_mapcache_invalidate) &&
-         test_and_clear_bool(currd->arch.hvm.qemu_mapcache_invalidate) )
-        send_invalidate_req();
+    if ( unlikely(currd->mapcache_invalidate) &&
+         test_and_clear_bool(currd->mapcache_invalidate) )
+        ioreq_signal_mapcache_invalidate();
 
     return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
 }

diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 66a37ee..046a8eb 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -64,20 +64,6 @@ void send_timeoffset_req(unsigned long timeoff)
         gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n");
 }
 
-/* Ask ioemu mapcache to invalidate mappings. */
-void send_invalidate_req(void)
-{
-    ioreq_t p = {
-        .type = IOREQ_TYPE_INVALIDATE,
-        .size = 4,
-        .dir = IOREQ_WRITE,
-        .data = ~0UL, /* flush all */
-    };
-
-    if ( ioreq_broadcast(&p, false) != 0 )
-        gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
-}
-
 bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)
 {
     struct hvm_emulate_ctxt ctxt;

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 5b0f03e..67ef1f7 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -35,6 +35,20 @@
 #include
 #include
 
+/* Ask ioemu mapcache to invalidate mappings. */
+void ioreq_signal_mapcache_invalidate(void)
+{
+    ioreq_t p = {
+        .type = IOREQ_TYPE_INVALIDATE,
+        .size = 4,
+        .dir = IOREQ_WRITE,
+        .data = ~0UL, /* flush all */
+    };
+
+    if ( ioreq_broadcast(&p, false) != 0 )
+        gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
+}
+
 static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct ioreq_server *s)
 {

diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 25af518..7b60e91 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -120,7 +120,6 @@ struct hvm_domain {
 
     struct viridian_domain *viridian;
 
-    bool_t                 qemu_mapcache_invalidate;
     bool_t                 is_s3_suspended;
 
     /*

diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 3a4a739..54e0161 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -97,7 +97,6 @@ bool relocate_portio_handler(
     unsigned int size);
 
 void send_timeoffset_req(unsigned long timeoff);
-void send_invalidate_req(void);
 bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn,
                                   struct npfec);
 bool handle_pio(uint16_t port, unsigned int size, int dir);

diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 89ee171..2d635e9 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -103,6 +103,7 @@ struct ioreq_server *ioreq_server_select(struct domain *d,
 int ioreq_send(struct ioreq_server *s, ioreq_t *proto_p, bool buffered);
 unsigned int ioreq_broadcast(ioreq_t *p, bool buffered);
+void ioreq_signal_mapcache_invalidate(void);
 
 void ioreq_domain_init(struct domain *d);

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 59e5b6a..06dba1a 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -444,6 +444,11 @@ struct domain
      * unpaused for the first time by the systemcontroller.
      */
     bool             creation_finished;
+    /*
+     * Indicates that a mapcache invalidation request should be sent to
+     * the device emulator.
+     */
+    bool             mapcache_invalidate;
 
     /* Which guest this guest has privileges on */
     struct domain   *target;
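On the receiving end, an emulator that caches guest-RAM mappings is
expected to drop them when it dequeues this broadcast request. A hedged
sketch of the emulator-side dispatch; handle_ioreq() and
mapcache_flush_all() are hypothetical names for illustration, not taken
from qemu or demu:

#include <xen/hvm/ioreq.h>

/* Stub: a real emulator would unmap its cached guest-RAM mappings here. */
static void mapcache_flush_all(void) { }

/* Hypothetical per-request dispatch loop body of a device emulator. */
static void handle_ioreq(ioreq_t *req)
{
    switch ( req->type )
    {
    case IOREQ_TYPE_INVALIDATE:
        /* The guest's memory layout changed: drop every cached mapping
         * before touching guest RAM again. */
        mapcache_flush_all();
        break;

    default:
        /* MMIO/PIO emulation elided. */
        break;
    }

    /* Acknowledge the request so the blocked vCPU can make progress. */
    req->state = STATE_IORESP_READY;
}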
From patchwork Fri Jan 29 01:48:50 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054975
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Stefano Stabellini, Julien Grall,
    Volodymyr Babchuk, Julien Grall
Subject: [PATCH V6 22/24] xen/arm: Add mapcache invalidation handling
Date: Fri, 29 Jan 2021 03:48:50 +0200
Message-Id: <1611884932-1851-23-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

We need to send a mapcache invalidation request to qemu/demu every time a
page gets removed from a guest.

At the moment, the Arm code doesn't explicitly remove the existing mapping
before inserting the new mapping. Instead, this is done implicitly by
__p2m_set_entry().

First of all we need to recognize the case when the "freed" entry contains
some RAM page, in order to set the corresponding flag. The most suitable
place to do this is p2m_free_entry(), where we can find the correct leaf
type. The invalidation request will be sent in do_trap_hypercall() later
on.

Taking the following into account, do_trap_hypercall() is the best place
to send the invalidation request:
- The only way a guest can modify its P2M on Arm is via a hypercall
- When sending the invalidation request, the vCPU will be blocked until
  all the IOREQ servers have acknowledged the invalidation

Signed-off-by: Oleksandr Tyshchenko
CC: Julien Grall
Reviewed-by: Stefano Stabellini
[On Arm only]
Tested-by: Wei Chen

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
Please note, this patch depends on the following which is on review:
https://patchwork.kernel.org/patch/11803383/

This patch is on par with the x86 code (whether it is buggy or not).
If there is a need to improve/harden something, this can be done in a
follow-up.
***

Changes V1 -> V2:
   - new patch, some changes were derived from (+ new explanation):
     xen/ioreq: Make x86's invalidate qemu mapcache handling common
   - put setting of the flag into __p2m_set_entry()
   - clarify the conditions when the flag should be set
   - use domain_has_ioreq_server()
   - update do_trap_hypercall() by adding local variable

Changes V2 -> V3:
   - update patch description
   - move check to p2m_free_entry()
   - add a comment
   - use "curr" instead of "v" in do_trap_hypercall()

Changes V3 -> V4:
   - update patch description
   - re-order check in p2m_free_entry() to call domain_has_ioreq_server()
     only if p2m->domain == current->domain
   - add a comment in do_trap_hypercall()

Changes V4 -> V5:
   - add Stefano's R-b
   - update comment in do_trap_hypercall()

Changes V5 -> V6:
   - update comment in p2m_free_entry() and patch description
---
 xen/arch/arm/p2m.c   | 25 +++++++++++++++++--------
 xen/arch/arm/traps.c | 20 +++++++++++++++++---
 2 files changed, 34 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d41c4fa..6895804 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1,6 +1,7 @@
 #include
 #include
 #include
+#include <xen/ioreq.h>
 #include
 #include
 #include
@@ -749,17 +750,25 @@ static void p2m_free_entry(struct p2m_domain *p2m,
     if ( !p2m_is_valid(entry) )
         return;
 
-    /* Nothing to do but updating the stats if the entry is a super-page. */
-    if ( p2m_is_superpage(entry, level) )
+    if ( p2m_is_superpage(entry, level) || (level == 3) )
     {
-        p2m->stats.mappings[level]--;
-        return;
-    }
+#ifdef CONFIG_IOREQ_SERVER
+        /*
+         * If this gets called then either the entry was replaced by an entry
+         * with a different base (valid case) or the shattering of a superpage
+         * has failed (error case).
+         * So, at worst, a spurious mapcache invalidation might be sent.
+         */
+        if ( (p2m->domain == current->domain) &&
+             domain_has_ioreq_server(p2m->domain) &&
+             p2m_is_ram(entry.p2m.type) )
+            p2m->domain->mapcache_invalidate = true;
+#endif
+
-    if ( level == 3 )
-    {
         p2m->stats.mappings[level]--;
-        p2m_put_l3_page(entry);
+        /* Nothing else to do if the entry is a super-page. */
+        if ( level == 3 )
+            p2m_put_l3_page(entry);
         return;
     }

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 476900e..6fa1350 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1451,6 +1451,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
                               const union hsr hsr)
 {
     arm_hypercall_fn_t call = NULL;
+    struct vcpu *curr = current;
 
     BUILD_BUG_ON(NR_hypercalls < ARRAY_SIZE(arm_hypercall_table) );
 
@@ -1467,7 +1468,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
         return;
     }
 
-    current->hcall_preempted = false;
+    curr->hcall_preempted = false;
 
     perfc_incra(hypercalls, *nr);
     call = arm_hypercall_table[*nr].fn;
@@ -1480,7 +1481,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
     HYPERCALL_RESULT_REG(regs) = call(HYPERCALL_ARGS(regs));
 
 #ifndef NDEBUG
-    if ( !current->hcall_preempted )
+    if ( !curr->hcall_preempted )
     {
         /* Deliberately corrupt parameter regs used by this hypercall. */
         switch ( arm_hypercall_table[*nr].nr_args ) {
@@ -1497,8 +1498,21 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
 #endif
 
     /* Ensure the hypercall trap instruction is re-executed. */
-    if ( current->hcall_preempted )
+    if ( curr->hcall_preempted )
         regs->pc -= 4;  /* re-execute 'hvc #XEN_HYPERCALL_TAG' */
+
+#ifdef CONFIG_IOREQ_SERVER
+    /*
+     * We call ioreq_signal_mapcache_invalidate from do_trap_hypercall()
+     * because the only way a guest can modify its P2M on Arm is via a
+     * hypercall.
+     * Note that sending the invalidation request causes the vCPU to block
+     * until all the IOREQ servers have acknowledged the invalidation.
+     */
+    if ( unlikely(curr->domain->mapcache_invalidate) &&
+         test_and_clear_bool(curr->domain->mapcache_invalidate) )
+        ioreq_signal_mapcache_invalidate();
+#endif
 }
 
 void arch_hypercall_tasklet_result(struct vcpu *v, long res)
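The set-then-consume pattern above (and its x86 counterpart in the
previous patch) relies on test_and_clear_bool() returning the previous
value of the flag, so that exactly one hypercall exit sends the request no
matter how many times the p2m code set it. A plain-C, non-atomic model of
those semantics, for illustration only:

#include <stdbool.h>

/* Non-atomic model of Xen's test_and_clear_bool(): report whether the
 * flag was set, and leave it clear either way. */
static bool model_test_and_clear_bool(bool *flag)
{
    bool was_set = *flag;

    *flag = false;
    return was_set;
}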
From patchwork Fri Jan 29 01:48:51 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054973
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Paul Durrant, Julien Grall,
    Stefano Stabellini, Andrew Cooper
Subject: [PATCH V6 23/24] xen/ioreq: Do not let bufioreq to be used on other
 than x86 arches
Date: Fri, 29 Jan 2021 03:48:51 +0200
Message-Id: <1611884932-1851-24-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

This patch prevents a device model running on non-x86 systems from using
the buffered I/O feature for now. Please note, there is currently no
caller on Arm that needs to send buffered I/O requests; the purpose of
this check is to catch any future user of bufioreq.
Signed-off-by: Oleksandr Tyshchenko
Acked-by: Jan Beulich
Acked-by: Julien Grall
Acked-by: Paul Durrant

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V5 -> V6:
   - new patch
---
 xen/common/ioreq.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 67ef1f7..a36137d 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -629,6 +629,9 @@ static int ioreq_server_create(struct domain *d, int bufioreq_handling,
     unsigned int i;
     int rc;
 
+    if ( !IS_ENABLED(CONFIG_X86) && bufioreq_handling )
+        return -EINVAL;
+
     if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
         return -EINVAL;
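Seen from the toolstack, the new check simply means a device emulator on
Arm must register its IOREQ server without buffered ioreq handling. A
minimal sketch using the existing libxendevicemodel call (error handling
trimmed; helper name is illustrative):

#include <xendevicemodel.h>
#include <xen/hvm/dm_op.h>

int create_arm_ioreq_server(domid_t domid, ioservid_t *id)
{
    xendevicemodel_handle *dmod = xendevicemodel_open(NULL, 0);
    int rc;

    if ( !dmod )
        return -1;

    /* On Arm only HVM_IOREQSRV_BUFIOREQ_OFF is accepted after this patch;
     * asking for LEGACY or ATOMIC buffering now fails with -EINVAL. */
    rc = xendevicemodel_create_ioreq_server(dmod, domid,
                                            HVM_IOREQSRV_BUFIOREQ_OFF, id);

    xendevicemodel_close(dmod);
    return rc;
}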
From patchwork Fri Jan 29 01:48:52 2021
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 12054955
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Andrew Cooper, George Dunlap, Ian Jackson,
    Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH V6 24/24] xen/ioreq: Make the IOREQ feature selectable on Arm
Date: Fri, 29 Jan 2021 03:48:52 +0200
Message-Id: <1611884932-1851-25-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>
References: <1611884932-1851-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko

The purpose of this patch is to allow the user to select IOREQ support on
Arm (it is disabled by default), while retaining the current behaviour on
x86 (it is selected by HVM and its prompt is not visible).

Also make IOREQ depend on CONFIG_EXPERT on Arm, since it is considered a
Technology Preview feature.

Signed-off-by: Oleksandr Tyshchenko
Acked-by: Jan Beulich
Acked-by: Julien Grall

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V5 -> V6:
   - new patch
---
 xen/common/Kconfig | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index fa049a6..225e57b 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -137,7 +137,13 @@ config HYPFS_CONFIG
 	  want to hide the .config contents from dom0.
 
 config IOREQ_SERVER
-	bool
+	bool "IOREQ support (EXPERT)" if EXPERT && !X86
+	default X86
+	depends on HVM
+	---help---
+	  Enables the generic mechanism for providing emulated devices to
+	  the guests.
+
+	  If unsure, say Y.
 
 config KEXEC
 	bool "kexec support"
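With the prompt gated on EXPERT, enabling the feature for an Arm build
means making the expert options visible first. A sketch of the resulting
.config fragment (option names taken from the hunk above; exact defaults
may vary between Xen versions):

# Arm build: expose expert options, then enable the tech-preview feature.
CONFIG_EXPERT=y
CONFIG_IOREQ_SERVER=y

On x86 nothing changes: IOREQ_SERVER keeps being selected by HVM and the
prompt stays hidden.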