From patchwork Mon Mar 27 10:18:23 2017
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 9646351
From: Roger Pau Monne
Date: Mon, 27 Mar 2017 11:18:23 +0100
Message-ID: <20170327101823.99368-8-roger.pau@citrix.com>
In-Reply-To: <20170327101823.99368-1-roger.pau@citrix.com>
References: <20170327101823.99368-1-roger.pau@citrix.com>
Cc: Andrew Cooper, Jan Beulich, Roger Pau Monne
Subject: [Xen-devel] [PATCH v2 7/7] x86/vioapic: allow PVHv2 Dom0 to have more than one IO APIC
List-Id: Xen developer discussion
The base address, id and number of pins of the vIO APICs exposed to
PVHv2 Dom0 are the same as the values found on bare metal.

Signed-off-by: Roger Pau Monné
Reviewed-by: Jan Beulich
---
Cc: Jan Beulich
Cc: Andrew Cooper
---
 xen/arch/x86/hvm/dom0_build.c | 33 ++++++++++++---------------------
 xen/arch/x86/hvm/hvm.c        |  8 +++++---
 xen/arch/x86/hvm/vioapic.c    | 30 ++++++++++++++++++++++++------
 3 files changed, 41 insertions(+), 30 deletions(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index daa791d3f4..db9be87612 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -681,12 +681,7 @@ static int __init pvh_setup_acpi_madt(struct domain *d, paddr_t *addr)
     max_vcpus = dom0_max_vcpus();
     /* Calculate the size of the crafted MADT. */
     size = sizeof(*madt);
-    /*
-     * FIXME: the current vIO-APIC code just supports one IO-APIC instance
-     * per domain. This must be fixed in order to provide the same amount of
-     * IO APICs as available on bare metal.
-     */
-    size += sizeof(*io_apic);
+    size += sizeof(*io_apic) * nr_ioapics;
     size += sizeof(*intsrcovr) * acpi_intr_overrides;
     size += sizeof(*nmisrc) * acpi_nmi_sources;
     size += sizeof(*x2apic) * max_vcpus;
@@ -716,23 +711,19 @@ static int __init pvh_setup_acpi_madt(struct domain *d, paddr_t *addr)
      */
     madt->header.revision = min_t(unsigned char, table->revision, 4);
 
-    /*
-     * Setup the IO APIC entry.
-     * FIXME: the current vIO-APIC code just supports one IO-APIC instance
-     * per domain. This must be fixed in order to provide the same amount of
-     * IO APICs as available on bare metal, and with the same IDs as found in
-     * the native IO APIC MADT entries.
-     */
-    if ( nr_ioapics > 1 )
-        printk("WARNING: found %d IO APICs, Dom0 will only have access to 1 emulated IO APIC\n",
-               nr_ioapics);
+    /* Setup the IO APIC entries. */
     io_apic = (void *)(madt + 1);
-    io_apic->header.type = ACPI_MADT_TYPE_IO_APIC;
-    io_apic->header.length = sizeof(*io_apic);
-    io_apic->id = domain_vioapic(d, 0)->id;
-    io_apic->address = VIOAPIC_DEFAULT_BASE_ADDRESS;
+    for ( i = 0; i < nr_ioapics; i++ )
+    {
+        io_apic->header.type = ACPI_MADT_TYPE_IO_APIC;
+        io_apic->header.length = sizeof(*io_apic);
+        io_apic->id = domain_vioapic(d, i)->id;
+        io_apic->address = domain_vioapic(d, i)->base_address;
+        io_apic->global_irq_base = io_apic_gsi_base(i);
+        io_apic++;
+    }
 
-    x2apic = (void *)(io_apic + 1);
+    x2apic = (void *)io_apic;
     for ( i = 0; i < max_vcpus; i++ )
     {
         x2apic->header.type = ACPI_MADT_TYPE_LOCAL_X2APIC;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 9a6cd9c9bf..322b3b8235 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -595,6 +595,7 @@ static int hvm_print_line(
 
 int hvm_domain_initialise(struct domain *d)
 {
+    unsigned int nr_gsis;
     int rc;
 
     if ( !hvm_enabled )
@@ -615,19 +616,20 @@ int hvm_domain_initialise(struct domain *d)
     if ( rc != 0 )
         goto fail0;
 
+    nr_gsis = is_hardware_domain(d) ? nr_irqs_gsi : VIOAPIC_NUM_PINS;
     d->arch.hvm_domain.pl_time = xzalloc(struct pl_time);
     d->arch.hvm_domain.params = xzalloc_array(uint64_t, HVM_NR_PARAMS);
     d->arch.hvm_domain.io_handler = xzalloc_array(struct hvm_io_handler,
                                                   NR_IO_HANDLERS);
-    d->arch.hvm_domain.irq = xzalloc_bytes(hvm_irq_size(VIOAPIC_NUM_PINS));
+    d->arch.hvm_domain.irq = xzalloc_bytes(hvm_irq_size(nr_gsis));
 
     rc = -ENOMEM;
     if ( !d->arch.hvm_domain.pl_time ||
          !d->arch.hvm_domain.irq ||
          !d->arch.hvm_domain.params || !d->arch.hvm_domain.io_handler )
         goto fail1;
 
-    /* Set the default number of GSIs */
-    hvm_domain_irq(d)->nr_gsis = VIOAPIC_NUM_PINS;
+    /* Set the number of GSIs */
+    hvm_domain_irq(d)->nr_gsis = nr_gsis;
 
     /* need link to containing domain */
     d->arch.hvm_domain.pl_time->domain = d;
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 327a9758e0..c349a3ee61 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -533,9 +533,19 @@ void vioapic_reset(struct domain *d)
         memset(vioapic, 0, hvm_vioapic_size(nr_pins));
         for ( j = 0; j < nr_pins; j++ )
             vioapic->redirtbl[j].fields.mask = 1;
-        vioapic->base_address = VIOAPIC_DEFAULT_BASE_ADDRESS +
-                                VIOAPIC_MEM_LENGTH * i;
-        vioapic->id = i;
+
+        if ( !is_hardware_domain(d) )
+        {
+            vioapic->base_address = VIOAPIC_DEFAULT_BASE_ADDRESS +
+                                    VIOAPIC_MEM_LENGTH * i;
+            vioapic->id = i;
+        }
+        else
+        {
+            vioapic->base_address = mp_ioapics[i].mpc_apicaddr;
+            vioapic->id = mp_ioapics[i].mpc_apicid;
+        }
+
         vioapic->nr_pins = nr_pins;
         vioapic->domain = d;
     }
@@ -556,7 +566,7 @@ static void vioapic_free(const struct domain *d, unsigned int nr_vioapics)
 
 int vioapic_init(struct domain *d)
 {
-    unsigned int i, nr_vioapics = 1;
+    unsigned int i, nr_vioapics, nr_gsis = 0;
 
     if ( !has_vioapic(d) )
     {
@@ -564,6 +574,8 @@ int vioapic_init(struct domain *d)
         return 0;
     }
 
+    nr_vioapics = is_hardware_domain(d) ? nr_ioapics : 1;
+
     if ( (d->arch.hvm_domain.vioapic == NULL) &&
          ((d->arch.hvm_domain.vioapic =
            xzalloc_array(struct hvm_vioapic *, nr_vioapics)) == NULL) )
@@ -571,15 +583,21 @@ int vioapic_init(struct domain *d)
 
     for ( i = 0; i < nr_vioapics; i++ )
     {
+        unsigned int nr_pins = is_hardware_domain(d) ? nr_ioapic_entries[i]
+                                                     : VIOAPIC_NUM_PINS;
+
         if ( (domain_vioapic(d, i) =
-              xmalloc_bytes(hvm_vioapic_size(VIOAPIC_NUM_PINS))) == NULL )
+              xmalloc_bytes(hvm_vioapic_size(nr_pins))) == NULL )
         {
             vioapic_free(d, nr_vioapics);
             return -ENOMEM;
         }
-        domain_vioapic(d, i)->nr_pins = VIOAPIC_NUM_PINS;
+        domain_vioapic(d, i)->nr_pins = nr_pins;
+        nr_gsis += nr_pins;
     }
 
+    ASSERT(hvm_domain_irq(d)->nr_gsis == nr_gsis);
+
     d->arch.hvm_domain.nr_vioapics = nr_vioapics;
     vioapic_reset(d);