From patchwork Fri Jan 10 16:04:03 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11327807
From: Roger Pau Monne
Date: Fri, 10 Jan 2020 17:04:03 +0100
Message-ID: <20200110160404.15573-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20200110160404.15573-1-roger.pau@citrix.com>
References: <20200110160404.15573-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 2/3] x86/hvm: rework HVMOP_flush_tlbs
List-Id: Xen developer discussion
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

The current implementation of hvm_flush_vcpu_tlb is highly inefficient.

First of all, the call to flush_tlb_mask is useless when trying to flush
the TLB of HVM guests: that flush is executed in root mode, and hence
doesn't invalidate any guest-mode translations.

Secondly, calling paging_update_cr3, albeit correct, is much more
expensive than strictly required.

Instead, a TLB flush can be achieved by calling hvm_asid_flush_vcpu on
each vCPU to be flushed, and sending an IPI to every pCPU that currently
has the state of such a vCPU loaded. Ticking the ASID invalidates the
current non-root context, thus forcing a clean TLB on the next vmentry.
If the guest is not using ASIDs, the vmexit caused by the
on_selected_cpus IPI will already force a TLB flush.
Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/hvm/hvm.c               | 54 ++++++++++++----------
 xen/arch/x86/hvm/viridian/viridian.c |  7 +---
 xen/include/asm-x86/hvm/hvm.h        |  2 +-
 3 files changed, 25 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4723f5d09c..e4fef0afcd 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3973,7 +3973,21 @@ static void hvm_s3_resume(struct domain *d)
     }
 }
 
-bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+static void do_flush(void *data)
+{
+    cpumask_t *mask = data;
+    unsigned int cpu = smp_processor_id();
+
+    ASSERT(cpumask_test_cpu(cpu, mask));
+    /*
+     * A vmexit/vmenter (caused by the IPI issued to execute this function) is
+     * enough to force a TLB flush since we have already ticked the vCPU ASID
+     * prior to issuing the IPI.
+     */
+    cpumask_clear_cpu(cpu, mask);
+}
+
+void hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                         void *ctxt)
 {
     static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
@@ -3981,27 +3995,8 @@ bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
     struct domain *d = current->domain;
     struct vcpu *v;
 
-    /* Avoid deadlock if more than one vcpu tries this at the same time. */
-    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
-        return false;
-
-    /* Pause all other vcpus. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_pause_nosync(v);
-
-    /* Now that all VCPUs are signalled to deschedule, we wait... */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            while ( !vcpu_runnable(v) && v->is_running )
-                cpu_relax();
-
-    /* All other vcpus are paused, safe to unlock now. */
-    spin_unlock(&d->hypercall_deadlock_mutex);
-
     cpumask_clear(mask);
-
     /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
     for_each_vcpu ( d, v )
     {
         unsigned int cpu;
@@ -4009,22 +4004,17 @@ bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
         if ( !flush_vcpu(ctxt, v) )
             continue;
 
-        paging_update_cr3(v, false);
+        hvm_asid_flush_vcpu(v);
 
         cpu = read_atomic(&v->dirty_cpu);
-        if ( is_vcpu_dirty_cpu(cpu) )
+        if ( cpu != smp_processor_id() && is_vcpu_dirty_cpu(cpu) )
             __cpumask_set_cpu(cpu, mask);
     }
 
-    /* Flush TLBs on all CPUs with dirty vcpu state. */
-    flush_tlb_mask(mask);
+    on_selected_cpus(mask, do_flush, mask, 0);
 
-    /* Done. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_unpause(v);
-
-    return true;
+    while ( !cpumask_empty(mask) )
+        cpu_relax();
 }
 
 static bool always_flush(void *ctxt, struct vcpu *v)
@@ -4037,7 +4027,9 @@ static int hvmop_flush_tlb_all(void)
     if ( !is_hvm_domain(current->domain) )
         return -EINVAL;
 
-    return hvm_flush_vcpu_tlb(always_flush, NULL) ? 0 : -ERESTART;
+    hvm_flush_vcpu_tlb(always_flush, NULL);
+
+    return 0;
 }
 
 static int hvmop_set_evtchn_upcall_vector(
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 44c8e6cac6..ec73361597 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -604,12 +604,7 @@ int viridian_hypercall(struct cpu_user_regs *regs)
         if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
             input_params.vcpu_mask = ~0ul;
 
-        /*
-         * A false return means that another vcpu is currently trying
-         * a similar operation, so back off.
-         */
-        if ( !hvm_flush_vcpu_tlb(need_flush, &input_params.vcpu_mask) )
-            return HVM_HCALL_preempted;
+        hvm_flush_vcpu_tlb(need_flush, &input_params.vcpu_mask);
 
         output.rep_complete = input.rep_count;
 
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 09793c12e9..1f70ee0823 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -333,7 +333,7 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
                            signed int cr0_pg);
 unsigned long hvm_cr4_guest_valid_bits(const struct domain *d, bool restore);
 
-bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+void hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                         void *ctxt);
 
 #ifdef CONFIG_HVM