From patchwork Tue Mar 14 17:35:36 2017
X-Patchwork-Submitter: Vitaly Kuznetsov
X-Patchwork-Id: 9624051
From: Vitaly Kuznetsov <vkuznets@redhat.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 14 Mar 2017 18:35:36 +0100
Message-Id: <20170314173556.2249-2-vkuznets@redhat.com>
In-Reply-To: <20170314173556.2249-1-vkuznets@redhat.com>
References: <20170314173556.2249-1-vkuznets@redhat.com>
Cc: Juergen Gross, Boris Ostrovsky, x86@kernel.org, Andrew Jones,
 linux-kernel@vger.kernel.org
Subject: [Xen-devel] [PATCH v3 01/21] x86/xen: separate PV and HVM hypervisors
List-Id: Xen developer discussion

As a preparation to splitting the code we need to untangle it:

x86_hyper_xen -> x86_hyper_xen_hvm and x86_hyper_xen_pv
xen_platform() -> xen_platform_hvm() and xen_platform_pv()
xen_cpu_up_prepare() -> xen_cpu_up_prepare_pv() and xen_cpu_up_prepare_hvm()
xen_cpu_dead() -> xen_cpu_dead_pv() and xen_cpu_dead_hvm()

Add two parameters to xen_cpuhp_setup() to pass the proper cpu_up_prepare
and cpu_dead hooks. xen_set_cpu_features() is now PV-only, so the redundant
xen_pv_domain() check can be dropped.
Signed-off-by: Vitaly Kuznetsov
Reviewed-by: Juergen Gross
---
Changes since v2:
  .pin_vcpu kept for x86_hyper_xen_hvm to support PVH Dom0 in future
  [Juergen Gross]
---
 arch/x86/include/asm/hypervisor.h |   3 +-
 arch/x86/kernel/cpu/hypervisor.c  |   3 +-
 arch/x86/xen/enlighten.c          | 114 +++++++++++++++++++++++++-------------
 3 files changed, 79 insertions(+), 41 deletions(-)

diff --git a/arch/x86/include/asm/hypervisor.h b/arch/x86/include/asm/hypervisor.h
index 67942b6..6f7545c6 100644
--- a/arch/x86/include/asm/hypervisor.h
+++ b/arch/x86/include/asm/hypervisor.h
@@ -53,7 +53,8 @@ extern const struct hypervisor_x86 *x86_hyper;
 /* Recognized hypervisors */
 extern const struct hypervisor_x86 x86_hyper_vmware;
 extern const struct hypervisor_x86 x86_hyper_ms_hyperv;
-extern const struct hypervisor_x86 x86_hyper_xen;
+extern const struct hypervisor_x86 x86_hyper_xen_pv;
+extern const struct hypervisor_x86 x86_hyper_xen_hvm;
 extern const struct hypervisor_x86 x86_hyper_kvm;
 
 extern void init_hypervisor(struct cpuinfo_x86 *c);
diff --git a/arch/x86/kernel/cpu/hypervisor.c b/arch/x86/kernel/cpu/hypervisor.c
index 35691a6..a77f18d 100644
--- a/arch/x86/kernel/cpu/hypervisor.c
+++ b/arch/x86/kernel/cpu/hypervisor.c
@@ -29,7 +29,8 @@
 static const __initconst struct hypervisor_x86 * const hypervisors[] =
 {
 #ifdef CONFIG_XEN
-	&x86_hyper_xen,
+	&x86_hyper_xen_pv,
+	&x86_hyper_xen_hvm,
 #endif
 	&x86_hyper_vmware,
 	&x86_hyper_ms_hyperv,
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index ec1d5c4..999ba13 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -139,9 +139,11 @@ void *xen_initial_gdt;
 
 RESERVE_BRK(shared_info_page_brk, PAGE_SIZE);
 
-static int xen_cpu_up_prepare(unsigned int cpu);
+static int xen_cpu_up_prepare_pv(unsigned int cpu);
+static int xen_cpu_up_prepare_hvm(unsigned int cpu);
 static int xen_cpu_up_online(unsigned int cpu);
-static int xen_cpu_dead(unsigned int cpu);
+static int xen_cpu_dead_pv(unsigned int cpu);
+static int xen_cpu_dead_hvm(unsigned int cpu);
 
 /*
  * Point at some empty memory to start with. We map the real shared_info
@@ -1447,13 +1449,14 @@ static void __init xen_dom0_set_legacy_features(void)
 	x86_platform.legacy.rtc = 1;
 }
 
-static int xen_cpuhp_setup(void)
+static int xen_cpuhp_setup(int (*cpu_up_prepare_cb)(unsigned int),
+			   int (*cpu_dead_cb)(unsigned int))
 {
 	int rc;
 
 	rc = cpuhp_setup_state_nocalls(CPUHP_XEN_PREPARE,
 				       "x86/xen/hvm_guest:prepare",
-				       xen_cpu_up_prepare, xen_cpu_dead);
+				       cpu_up_prepare_cb, cpu_dead_cb);
 	if (rc >= 0) {
 		rc = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
 					       "x86/xen/hvm_guest:online",
@@ -1559,7 +1562,7 @@ asmlinkage __visible void __init xen_start_kernel(void)
 	   possible map and a non-dummy shared_info. */
 	per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
 
-	WARN_ON(xen_cpuhp_setup());
+	WARN_ON(xen_cpuhp_setup(xen_cpu_up_prepare_pv, xen_cpu_dead_pv));
 
 	local_irq_disable();
 	early_boot_irqs_disabled = true;
@@ -1840,28 +1843,41 @@ static void __init init_hvm_pv_info(void)
 }
 #endif
 
-static int xen_cpu_up_prepare(unsigned int cpu)
+static int xen_cpu_up_prepare_pv(unsigned int cpu)
 {
 	int rc;
 
-	if (xen_hvm_domain()) {
-		/*
-		 * This can happen if CPU was offlined earlier and
-		 * offlining timed out in common_cpu_die().
-		 */
-		if (cpu_report_state(cpu) == CPU_DEAD_FROZEN) {
-			xen_smp_intr_free(cpu);
-			xen_uninit_lock_cpu(cpu);
-		}
+	xen_setup_timer(cpu);
 
-		if (cpu_acpi_id(cpu) != U32_MAX)
-			per_cpu(xen_vcpu_id, cpu) = cpu_acpi_id(cpu);
-		else
-			per_cpu(xen_vcpu_id, cpu) = cpu;
-		xen_vcpu_setup(cpu);
+	rc = xen_smp_intr_init(cpu);
+	if (rc) {
+		WARN(1, "xen_smp_intr_init() for CPU %d failed: %d\n",
+		     cpu, rc);
+		return rc;
+	}
+	return 0;
+}
+
+static int xen_cpu_up_prepare_hvm(unsigned int cpu)
+{
+	int rc;
+
+	/*
+	 * This can happen if CPU was offlined earlier and
+	 * offlining timed out in common_cpu_die().
+	 */
+	if (cpu_report_state(cpu) == CPU_DEAD_FROZEN) {
+		xen_smp_intr_free(cpu);
+		xen_uninit_lock_cpu(cpu);
 	}
-	if (xen_pv_domain() || xen_feature(XENFEAT_hvm_safe_pvclock))
+
+	if (cpu_acpi_id(cpu) != U32_MAX)
+		per_cpu(xen_vcpu_id, cpu) = cpu_acpi_id(cpu);
+	else
+		per_cpu(xen_vcpu_id, cpu) = cpu;
+	xen_vcpu_setup(cpu);
+
+	if (xen_feature(XENFEAT_hvm_safe_pvclock))
 		xen_setup_timer(cpu);
 
 	rc = xen_smp_intr_init(cpu);
@@ -1873,16 +1889,25 @@ static int xen_cpu_up_prepare(unsigned int cpu)
 	return 0;
 }
 
-static int xen_cpu_dead(unsigned int cpu)
+static int xen_cpu_dead_pv(unsigned int cpu)
 {
 	xen_smp_intr_free(cpu);
 
-	if (xen_pv_domain() || xen_feature(XENFEAT_hvm_safe_pvclock))
-		xen_teardown_timer(cpu);
+	xen_teardown_timer(cpu);
 
 	return 0;
 }
 
+static int xen_cpu_dead_hvm(unsigned int cpu)
+{
+	xen_smp_intr_free(cpu);
+
+	if (xen_feature(XENFEAT_hvm_safe_pvclock))
+		xen_teardown_timer(cpu);
+
+	return 0;
+}
+
 static int xen_cpu_up_online(unsigned int cpu)
 {
 	xen_init_lock_cpu(cpu);
@@ -1919,7 +1944,7 @@ static void __init xen_hvm_guest_init(void)
 	BUG_ON(!xen_feature(XENFEAT_hvm_callback_vector));
 
 	xen_hvm_smp_init();
-	WARN_ON(xen_cpuhp_setup());
+	WARN_ON(xen_cpuhp_setup(xen_cpu_up_prepare_hvm, xen_cpu_dead_hvm));
 	xen_unplug_emulated_devices();
 	x86_init.irqs.intr_init = xen_init_IRQ;
 	xen_hvm_init_time_ops();
@@ -1942,9 +1967,17 @@ static __init int xen_parse_nopv(char *arg)
 }
 early_param("xen_nopv", xen_parse_nopv);
 
-static uint32_t __init xen_platform(void)
+static uint32_t __init xen_platform_pv(void)
 {
-	if (xen_nopv)
+	if (xen_pv_domain())
+		return xen_cpuid_base();
+
+	return 0;
+}
+
+static uint32_t __init xen_platform_hvm(void)
+{
+	if (xen_pv_domain() || xen_nopv)
 		return 0;
 
 	return xen_cpuid_base();
@@ -1966,10 +1999,8 @@ EXPORT_SYMBOL_GPL(xen_hvm_need_lapic);
 
 static void xen_set_cpu_features(struct cpuinfo_x86 *c)
 {
-	if (xen_pv_domain()) {
-		clear_cpu_bug(c, X86_BUG_SYSRET_SS_ATTRS);
-		set_cpu_cap(c, X86_FEATURE_XENPV);
-	}
+	clear_cpu_bug(c, X86_BUG_SYSRET_SS_ATTRS);
+	set_cpu_cap(c, X86_FEATURE_XENPV);
 }
 
 static void xen_pin_vcpu(int cpu)
@@ -2011,17 +2042,22 @@ static void xen_pin_vcpu(int cpu)
 	}
 }
 
-const struct hypervisor_x86 x86_hyper_xen = {
-	.name			= "Xen",
-	.detect			= xen_platform,
-#ifdef CONFIG_XEN_PVHVM
-	.init_platform		= xen_hvm_guest_init,
-#endif
-	.x2apic_available	= xen_x2apic_para_available,
+const struct hypervisor_x86 x86_hyper_xen_pv = {
+	.name			= "Xen PV",
+	.detect			= xen_platform_pv,
 	.set_cpu_features	= xen_set_cpu_features,
 	.pin_vcpu		= xen_pin_vcpu,
 };
-EXPORT_SYMBOL(x86_hyper_xen);
+EXPORT_SYMBOL(x86_hyper_xen_pv);
+
+const struct hypervisor_x86 x86_hyper_xen_hvm = {
+	.name			= "Xen HVM",
+	.detect			= xen_platform_hvm,
+	.init_platform		= xen_hvm_guest_init,
+	.pin_vcpu		= xen_pin_vcpu,
+	.x2apic_available	= xen_x2apic_para_available,
+};
+EXPORT_SYMBOL(x86_hyper_xen_hvm);
 
 #ifdef CONFIG_HOTPLUG_CPU
 void xen_arch_register_cpu(int num)