From patchwork Wed Mar 16 13:00:26 2016
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 8600011
From: Paul Durrant
Date: Wed, 16 Mar 2016 13:00:26 +0000
Message-ID: <1458133226-1808-1-git-send-email-paul.durrant@citrix.com>
Cc: Andrew Cooper, Paul Durrant, Keir Fraser, Jan Beulich
Subject: [Xen-devel] [PATCH] x86/hvm/viridian: fix the TLB flush hypercall

Commit b38d426a "flush remote tlbs by hypercall" added support to allow
Windows to request a flush of remote TLBs via hypercall rather than by
IPI. Unfortunately this code was broken in a couple of ways:

1) The allocation of the per-vcpu flush mask is gated on whether the
   domain has viridian features enabled, but the call to allocate is
   made before the toolstack has enabled those features. This results
   in a NULL pointer dereference.

2) One of the flush hypercall variants is a rep op, but the code does
   not update the output data with the reps completed. Hence the guest
   will spin repeatedly making the hypercall because it believes it has
   uncompleted reps, as the sketch below illustrates.
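For illustration only, here is a minimal standalone model of that retry
behaviour (not part of the patch; the type and function names are
invented for the sketch and are not the Xen or Windows identifiers).
The guest keeps re-issuing the rep hypercall until the reported count
covers all requested reps, so a hypervisor that never fills in that
count leaves the guest spinning:

#include <stdio.h>

/*
 * Standalone model of a Hyper-V style "rep" hypercall, for illustration
 * only; these names are made up, not the Xen or Windows ones.
 */
struct rep_output {
    unsigned int rep_complete;   /* reps the hypervisor reports as done */
};

/* Behaves like the pre-patch code: work is done but never reported. */
static void buggy_hypercall(unsigned int reps, struct rep_output *out)
{
    out->rep_complete = 0;
}

/* Behaves like the patched code: all requested reps reported complete. */
static void fixed_hypercall(unsigned int reps, struct rep_output *out)
{
    out->rep_complete = reps;
}

static unsigned int guest_flush(void (*hcall)(unsigned int,
                                              struct rep_output *))
{
    const unsigned int rep_count = 4;
    struct rep_output out = { 0 };
    unsigned int start = 0, calls = 0;

    /*
     * Guest-side retry loop: keep re-issuing the hypercall until every
     * rep is reported complete. If rep_complete never advances, the
     * guest spins forever (capped at 10 iterations here).
     */
    while ( start < rep_count && calls < 10 )
    {
        hcall(rep_count - start, &out);
        start += out.rep_complete;
        calls++;
    }
    return calls;
}

int main(void)
{
    printf("fixed hypervisor: %u call(s)\n", guest_flush(fixed_hypercall));
    printf("buggy hypervisor: %u call(s) (would spin forever)\n",
           guest_flush(buggy_hypercall));
    return 0;
}

With the fixed variant the loop completes in a single call; with the
buggy variant it only stops because of the iteration cap.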
This patch fixes both of these issues and also adds a check to make sure
the current vCPU is not included in the flush mask (since there's
clearly no need for the CPU to IPI itself).

Signed-off-by: Paul Durrant
Cc: Keir Fraser
Cc: Jan Beulich
Cc: Andrew Cooper
---
 xen/arch/x86/hvm/hvm.c      | 12 ++++--------
 xen/arch/x86/hvm/viridian.c |  4 +++-
 2 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5bc2812..f5c55e1 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2576,12 +2576,9 @@ int hvm_vcpu_initialise(struct vcpu *v)
     if ( rc != 0 )
         goto fail6;

-    if ( is_viridian_domain(d) )
-    {
-        rc = viridian_vcpu_init(v);
-        if ( rc != 0 )
-            goto fail7;
-    }
+    rc = viridian_vcpu_init(v);
+    if ( rc != 0 )
+        goto fail7;

     if ( v->vcpu_id == 0 )
     {
@@ -2615,8 +2612,7 @@ int hvm_vcpu_initialise(struct vcpu *v)

 void hvm_vcpu_destroy(struct vcpu *v)
 {
-    if ( is_viridian_domain(v->domain) )
-        viridian_vcpu_deinit(v);
+    viridian_vcpu_deinit(v);

     hvm_all_ioreq_servers_remove_vcpu(v->domain, v);

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 6bd844b..6530a67 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -645,7 +645,7 @@ int viridian_hypercall(struct cpu_user_regs *regs)
                 continue;

             hvm_asid_flush_vcpu(v);
-            if ( v->is_running )
+            if ( v != curr && v->is_running )
                 __cpumask_set_cpu(v->processor, pcpu_mask);
         }

@@ -658,6 +658,8 @@ int viridian_hypercall(struct cpu_user_regs *regs)
         if ( !cpumask_empty(pcpu_mask) )
             flush_tlb_mask(pcpu_mask);

+        output.rep_complete = input.rep_count;
+
         status = HV_STATUS_SUCCESS;
         break;
     }
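A side note on the flush-mask change above: the current vCPU is skipped
when building pcpu_mask because, as noted, there is no need for a CPU to
IPI itself. A minimal standalone sketch of that pattern follows (a plain
uint64_t stands in for Xen's cpumask_t and all names are invented; this
is not the hypervisor code itself):

#include <stdint.h>
#include <stdio.h>

/*
 * Standalone sketch of building a mask of pCPUs to IPI for a remote TLB
 * flush. Illustrative only.
 */
static uint64_t build_flush_mask(const int *running_cpus, int n,
                                 int current_cpu)
{
    uint64_t mask = 0;

    for ( int i = 0; i < n; i++ )
    {
        /* No need for the current CPU to IPI itself. */
        if ( running_cpus[i] == current_cpu )
            continue;
        mask |= UINT64_C(1) << running_cpus[i];
    }
    return mask;
}

int main(void)
{
    int running[] = { 0, 1, 2, 3 };
    uint64_t mask = build_flush_mask(running, 4, 2 /* current CPU */);

    /* CPU 2 (the caller) is excluded: mask is 0xb, i.e. CPUs 0, 1 and 3. */
    printf("IPI mask: %#llx\n", (unsigned long long)mask);
    return 0;
}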