From patchwork Tue Mar 29 09:30:05 2016
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 8684781
From: Paul Durrant
Date: Tue, 29 Mar 2016 10:30:05 +0100
Message-ID: <1459243805-2150-1-git-send-email-paul.durrant@citrix.com>
Cc: Paul Durrant, Jan Beulich
Subject: [Xen-devel] [PATCH v2] x86/hvm/viridian: save APIC assist vector

If any vcpu has a pending APIC assist when the domain is suspended then the
vector needs to be saved. If this is not done then it is possible for the
vector to remain pending in the vlapic ISR indefinitely after resume.

This patch adds code to save the APIC assist vector value in the viridian
vcpu save record. The record is now zero-extended on load and, since a
loaded value of zero therefore means nothing is pending (for backwards
compatibility with hosts not implementing APIC assist), the rest of the
viridian APIC assist code is adjusted to treat a zero value in this way.
A check has therefore been added to viridian_start_apic_assist() to prevent
the enlightenment being used for vectors < 0x10 (which are illegal for an
APIC).
Signed-off-by: Paul Durrant
Cc: Jan Beulich
Reviewed-by: Jan Beulich
---
v2:
 - don't use biasing
 - add missing padding to save record
---
 xen/arch/x86/hvm/viridian.c            | 23 ++++++++++++++---------
 xen/arch/x86/hvm/vlapic.c              |  2 +-
 xen/include/public/arch-x86/hvm/save.h |  4 +++-
 3 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 410320c..dceed2c 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -252,7 +252,6 @@ static void initialize_apic_assist(struct vcpu *v)
     if ( viridian_feature_mask(v->domain) & HVMPV_apic_assist )
     {
         v->arch.hvm_vcpu.viridian.apic_assist.va = va;
-        v->arch.hvm_vcpu.viridian.apic_assist.vector = -1;
         return;
     }
@@ -288,12 +287,15 @@ void viridian_start_apic_assist(struct vcpu *v, int vector)
     if ( !va )
         return;
 
+    if ( vector < 0x10 )
+        return;
+
     /*
      * If there is already an assist pending then something has gone
      * wrong and the VM will most likely hang so force a crash now
      * to make the problem clear.
      */
-    if ( v->arch.hvm_vcpu.viridian.apic_assist.vector >= 0 )
+    if ( v->arch.hvm_vcpu.viridian.apic_assist.vector )
         domain_crash(v->domain);
 
     v->arch.hvm_vcpu.viridian.apic_assist.vector = vector;
@@ -306,13 +308,13 @@ int viridian_complete_apic_assist(struct vcpu *v)
     int vector;
 
     if ( !va )
-        return -1;
+        return 0;
 
     if ( *va & 1u )
-        return -1; /* Interrupt not yet processed by the guest. */
+        return 0; /* Interrupt not yet processed by the guest. */
 
     vector = v->arch.hvm_vcpu.viridian.apic_assist.vector;
-    v->arch.hvm_vcpu.viridian.apic_assist.vector = -1;
+    v->arch.hvm_vcpu.viridian.apic_assist.vector = 0;
 
     return vector;
 }
@@ -325,7 +327,7 @@ void viridian_abort_apic_assist(struct vcpu *v)
         return;
 
     *va &= ~1u;
-    v->arch.hvm_vcpu.viridian.apic_assist.vector = -1;
+    v->arch.hvm_vcpu.viridian.apic_assist.vector = 0;
 }
 
 static void update_reference_tsc(struct domain *d, bool_t initialize)
@@ -806,7 +808,8 @@ static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
     for_each_vcpu( d, v ) {
         struct hvm_viridian_vcpu_context ctxt;
 
-        ctxt.apic_assist = v->arch.hvm_vcpu.viridian.apic_assist.msr.raw;
+        ctxt.apic_assist_msr = v->arch.hvm_vcpu.viridian.apic_assist.msr.raw;
+        ctxt.apic_assist_vector = v->arch.hvm_vcpu.viridian.apic_assist.vector;
 
         if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
             return 1;
@@ -829,13 +832,15 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
         return -EINVAL;
     }
 
-    if ( hvm_load_entry(VIRIDIAN_VCPU, h, &ctxt) != 0 )
+    if ( hvm_load_entry_zeroextend(VIRIDIAN_VCPU, h, &ctxt) != 0 )
         return -EINVAL;
 
-    v->arch.hvm_vcpu.viridian.apic_assist.msr.raw = ctxt.apic_assist;
+    v->arch.hvm_vcpu.viridian.apic_assist.msr.raw = ctxt.apic_assist_msr;
 
     if ( v->arch.hvm_vcpu.viridian.apic_assist.msr.fields.enabled )
         initialize_apic_assist(v);
 
+    v->arch.hvm_vcpu.viridian.apic_assist.vector = ctxt.apic_assist_vector;
+
     return 0;
 }
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index f36eff7..e2f4450 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1189,7 +1189,7 @@ int vlapic_has_pending_irq(struct vcpu *v)
      * comparing with the IRR.
      */
     vector = viridian_complete_apic_assist(v);
-    if ( vector != -1 )
+    if ( vector )
         vlapic_clear_vector(vector, &vlapic->regs->data[APIC_ISR]);
 
     isr = vlapic_find_highest_isr(vlapic);
diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index fbd1c6a..8d73b51 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -588,7 +588,9 @@ struct hvm_viridian_domain_context {
 DECLARE_HVM_SAVE_TYPE(VIRIDIAN_DOMAIN, 15, struct hvm_viridian_domain_context);
 
 struct hvm_viridian_vcpu_context {
-    uint64_t apic_assist;
+    uint64_t apic_assist_msr;
+    uint8_t  apic_assist_vector;
+    uint8_t  _pad[7];
 };
 
 DECLARE_HVM_SAVE_TYPE(VIRIDIAN_VCPU, 17, struct hvm_viridian_vcpu_context);