From patchwork Tue Mar 15 16:14:16 2016
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 8590381
From: Paul Durrant <paul.durrant@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 15 Mar 2016 16:14:16 +0000
Message-ID: <1458058456-7345-3-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1458058456-7345-1-git-send-email-paul.durrant@citrix.com>
References: <1458058456-7345-1-git-send-email-paul.durrant@citrix.com>
Cc: Wei Liu, Stefano Stabellini, Andrew Cooper, Ian Jackson, Paul Durrant,
 Jan Beulich, Keir Fraser
Subject: [Xen-devel] [PATCH 2/2] x86/hvm/viridian: Enable APIC assist enlightenment

This patch adds code to enable the APIC assist enlightenment which, under
certain conditions, allows the guest to avoid issuing an EOI to the local
APIC and thereby avoid a VMEXIT. Use of the enlightenment by the hypervisor
is under the control of the toolstack.
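For illustration, once this series is applied a guest would opt in through the
viridian enlightenment groups in xl.cfg, as documented in the xl.cfg.pod.5 hunk
below. A minimal, purely illustrative config fragment (which other groups to
enable alongside it is the administrator's choice):

  viridian = [ "defaults", "apic_assist" ]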
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Ian Jackson
Cc: Stefano Stabellini
Cc: Wei Liu
Cc: Keir Fraser
Cc: Jan Beulich
Cc: Andrew Cooper
---
 docs/man/xl.cfg.pod.5              |  8 ++++++
 tools/libxl/libxl_dom.c            |  3 ++
 tools/libxl/libxl_types.idl        |  1 +
 xen/arch/x86/hvm/viridian.c        | 59 +++++++++++++++++++++++++++++++-------
 xen/arch/x86/hvm/vlapic.c          | 58 +++++++++++++++++++++++++++++++++----
 xen/include/asm-x86/hvm/viridian.h |  5 ++++
 xen/include/public/hvm/params.h    |  7 ++++-
 7 files changed, 124 insertions(+), 17 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 56b1117..49acda7 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -1484,6 +1484,14 @@ This set incorporates use of hypercalls for remote TLB flushing.
 This enlightenment may improve performance of Windows guests running
 on hosts with higher levels of (physical) CPU contention.
 
+=item B<apic_assist>
+
+This set incorporates use of the APIC assist page to avoid EOI of
+the local APIC.
+This enlightenment may improve performance of Windows guests,
+particularly those running PV drivers that make use of per-vcpu
+event channel upcall vectors.
+
 =item B<defaults>
 
 This is a special value that enables the default set of groups, which
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index b825b98..ee75ad1 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -253,6 +253,9 @@ static int hvm_set_viridian_features(libxl__gc *gc, uint32_t domid,
     if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_HCALL_REMOTE_TLB_FLUSH))
         mask |= HVMPV_hcall_remote_tlb_flush;
 
+    if (libxl_bitmap_test(&enlightenments, LIBXL_VIRIDIAN_ENLIGHTENMENT_APIC_ASSIST))
+        mask |= HVMPV_apic_assist;
+
     if (mask != 0 &&
         xc_hvm_param_set(CTX->xch,
                          domid,
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 632c009..e3be957 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -221,6 +221,7 @@ libxl_viridian_enlightenment = Enumeration("viridian_enlightenment", [
     (2, "time_ref_count"),
     (3, "reference_tsc"),
     (4, "hcall_remote_tlb_flush"),
+    (5, "apic_assist"),
     ])
 
 libxl_hdtype = Enumeration("hdtype", [
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index c779290..1f73691 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -221,16 +221,6 @@ static void initialize_apic_assist(struct vcpu *v)
     struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
     void *va;
 
-    /*
-     * We don't yet make use of the APIC assist page but by setting
-     * the CPUID3A_MSR_APIC_ACCESS bit in CPUID leaf 40000003 we are duty
-     * bound to support the MSR. We therefore do just enough to keep windows
-     * happy.
-     *
-     * See http://msdn.microsoft.com/en-us/library/ff538657%28VS.85%29.aspx for
-     * details of how Windows uses the page.
-     */
-
     if ( !page )
         return;
 
@@ -251,6 +241,55 @@ static void initialize_apic_assist(struct vcpu *v)
 
     v->arch.hvm_vcpu.viridian.apic_assist.page = page;
     v->arch.hvm_vcpu.viridian.apic_assist.va = va;
+    v->arch.hvm_vcpu.viridian.apic_assist.vector = -1;
+}
+
+void viridian_start_apic_assist(struct vcpu *v, int vector)
+{
+    void *va = v->arch.hvm_vcpu.viridian.apic_assist.va;
+
+    if ( !(viridian_feature_mask(v->domain) & HVMPV_apic_assist) ||
+         !va )
+        return;
+
+    /*
+     * If there is already an assist pending then something has gone
+     * wrong and the VM will most likely hang so force a crash now
+     * to make the problem clear.
+     */
+    if (v->arch.hvm_vcpu.viridian.apic_assist.vector >= 0)
+        domain_crash(v->domain);
+
+    v->arch.hvm_vcpu.viridian.apic_assist.vector = vector;
+    *(uint32_t *)va |= 1u;
+}
+
+bool_t viridian_complete_apic_assist(struct vcpu *v, int *vector)
+{
+    void *va = v->arch.hvm_vcpu.viridian.apic_assist.va;
+
+    if ( !(viridian_feature_mask(v->domain) & HVMPV_apic_assist) ||
+         !va )
+        return 0;
+
+    if ( *(uint32_t *)va & 1 )
+        return 0; /* Interrupt not yet processed by the guest */
+
+    *vector = v->arch.hvm_vcpu.viridian.apic_assist.vector;
+    v->arch.hvm_vcpu.viridian.apic_assist.vector = -1;
+    return 1;
+}
+
+void viridian_abort_apic_assist(struct vcpu *v)
+{
+    void *va = v->arch.hvm_vcpu.viridian.apic_assist.va;
+
+    if ( !(viridian_feature_mask(v->domain) & HVMPV_apic_assist) ||
+         !va )
+        return;
+
+    *(uint32_t *)va &= ~1u;
+    v->arch.hvm_vcpu.viridian.apic_assist.vector = -1;
 }
 
 static void teardown_apic_assist(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 01a8430..aac4263 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include <asm/hvm/viridian.h>
 #include
 #include
 
@@ -95,6 +96,18 @@ static int vlapic_find_highest_vector(const void *bitmap)
 
     return (fls(word[word_offset*4]) - 1) + (word_offset * 32);
 }
+static int vlapic_find_lowest_vector(const void *bitmap)
+{
+    const uint32_t *word = bitmap;
+    unsigned int word_offset;
+
+    /* Work forwards through the bitmap (first 32-bit word in every four). */
+    for ( word_offset = 0; word_offset < NR_VECTORS / 32; word_offset++)
+        if ( word[word_offset * 4] )
+            return (ffs(word[word_offset * 4]) - 1) + (word_offset * 32);
+
+    return -1;
+}
 
 /*
  * IRR-specific bitmap update & search routines.
@@ -1157,7 +1170,7 @@ int vlapic_virtual_intr_delivery_enabled(void)
 int vlapic_has_pending_irq(struct vcpu *v)
 {
     struct vlapic *vlapic = vcpu_vlapic(v);
-    int irr, isr;
+    int irr, vector, isr;
 
     if ( !vlapic_enabled(vlapic) )
         return -1;
@@ -1170,10 +1183,27 @@ int vlapic_has_pending_irq(struct vcpu *v)
          !nestedhvm_vcpu_in_guestmode(v) )
         return irr;
 
+    /*
+     * If APIC assist was used then there may have been no EOI so
+     * we need to clear the requisite bit from the ISR here, before
+     * comparing with the IRR.
+     */
+    if ( viridian_complete_apic_assist(v, &vector) &&
+         vector != -1 )
+        vlapic_clear_vector(vector, &vlapic->regs->data[APIC_ISR]);
+
     isr = vlapic_find_highest_isr(vlapic);
     isr = (isr != -1) ? isr : 0;
     if ( (isr & 0xf0) >= (irr & 0xf0) )
+    {
+        /*
+         * There's already a higher priority vector pending so
+         * we need to abort any previous APIC assist to ensure there
+         * is an EOI.
+         */
+        viridian_abort_apic_assist(v);
         return -1;
+    }
 
     return irr;
 }
@@ -1181,13 +1211,29 @@ int vlapic_has_pending_irq(struct vcpu *v)
 int vlapic_ack_pending_irq(struct vcpu *v, int vector, bool_t force_ack)
 {
     struct vlapic *vlapic = vcpu_vlapic(v);
+    int isr;
 
-    if ( force_ack || !vlapic_virtual_intr_delivery_enabled() )
-    {
-        vlapic_set_vector(vector, &vlapic->regs->data[APIC_ISR]);
-        vlapic_clear_irr(vector, vlapic);
-    }
+    if ( !force_ack &&
+         vlapic_virtual_intr_delivery_enabled() )
+        return 1;
+
+    if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
+        goto done;
+
+    isr = vlapic_find_lowest_vector(&vlapic->regs->data[APIC_ISR]);
+    if ( isr >= 0 && isr < vector )
+        goto done;
+
+    /*
+     * This vector is edge triggered and there are no lower priority
+     * vectors pending, so we can use APIC assist to avoid exiting
+     * for EOI.
+     */
+    viridian_start_apic_assist(v, vector);
+done:
+    vlapic_set_vector(vector, &vlapic->regs->data[APIC_ISR]);
+    vlapic_clear_irr(vector, vlapic);
 
     return 1;
 }
 
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index c60c113..658d46a 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -25,6 +25,7 @@ struct viridian_vcpu
         union viridian_apic_assist msr;
         struct page_info *page;
         void *va;
+        int vector;
     } apic_assist;
     cpumask_var_t flush_cpumask;
 };
@@ -125,6 +126,10 @@ void viridian_time_ref_count_thaw(struct domain *d);
 int viridian_vcpu_init(struct vcpu *v);
 void viridian_vcpu_deinit(struct vcpu *v);
 
+void viridian_start_apic_assist(struct vcpu *v, int vector);
+bool_t viridian_complete_apic_assist(struct vcpu *v, int *vector);
+void viridian_abort_apic_assist(struct vcpu *v);
+
 #endif /* __ASM_X86_HVM_VIRIDIAN_H__ */
 
 /*
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 73d4718..e69c72c 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -115,12 +115,17 @@
 #define _HVMPV_hcall_remote_tlb_flush 4
 #define HVMPV_hcall_remote_tlb_flush (1 << _HVMPV_hcall_remote_tlb_flush)
 
+/* Use APIC assist */
+#define _HVMPV_apic_assist 5
+#define HVMPV_apic_assist (1 << _HVMPV_apic_assist)
+
 #define HVMPV_feature_mask \
     (HVMPV_base_freq | \
      HVMPV_no_freq | \
      HVMPV_time_ref_count | \
      HVMPV_reference_tsc | \
-     HVMPV_hcall_remote_tlb_flush)
+     HVMPV_hcall_remote_tlb_flush | \
+     HVMPV_apic_assist)
 
 #endif
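For reference, a minimal sketch of the guest-side half of the protocol implied
by the hypervisor code above: viridian_start_apic_assist() sets bit 0 of the
assist page when it injects a suitable vector, and viridian_complete_apic_assist()
later checks whether the guest has cleared it. The pointer and function names
below are illustrative assumptions, not Xen or Windows source:

  /* Hypothetical guest-side EOI path using the APIC assist page. */
  #include <stdint.h>

  #define APIC_ASSIST_NO_EOI_REQUIRED 0x1u  /* bit 0, as used by the code above */

  extern volatile uint32_t *apic_assist;  /* assumed mapping of the assist page */
  extern void lapic_eoi_write(void);      /* assumed normal EOI path (VMEXITs) */

  static void guest_end_of_interrupt(void)
  {
      /*
       * If the hypervisor set the "no EOI required" bit when injecting the
       * interrupt, clear it and skip the EOI register write, avoiding a
       * VMEXIT; the hypervisor notices the cleared bit on the next
       * vlapic_has_pending_irq() via viridian_complete_apic_assist().
       */
      if ( *apic_assist & APIC_ASSIST_NO_EOI_REQUIRED )
      {
          *apic_assist &= ~APIC_ASSIST_NO_EOI_REQUIRED;
          return;
      }

      lapic_eoi_write();
  }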