From patchwork Mon Feb 17 13:55:15 2020
X-Patchwork-Submitter: Wei Liu
X-Patchwork-Id: 11386521
From: Wei Liu
To: Xen Development List
Cc: Wei Liu, Wei Liu, Andrew Cooper, Paul Durrant, Michael Kelley,
    Jan Beulich, Roger Pau Monné
Date: Mon, 17 Feb 2020 13:55:15 +0000
Message-Id: <20200217135517.5826-2-liuwe@microsoft.com>
In-Reply-To: <20200217135517.5826-1-liuwe@microsoft.com>
References: <20200217135517.5826-1-liuwe@microsoft.com>
X-Mailer: git-send-email 2.20.1
Subject: [Xen-devel] [PATCH v3 1/3] x86/hypervisor: pass flags to hypervisor_flush_tlb

Hyper-V's L0 assisted flush has fine-grained control over what gets
flushed. We need all the flags available to make the best decisions
possible.

No functional change because Xen's implementation doesn't care about
what is passed to it.

Signed-off-by: Wei Liu
Reviewed-by: Roger Pau Monné
Reviewed-by: Paul Durrant
---
v2:
1. Introduce FLUSH_TLB_FLAGS_MASK
---
 xen/arch/x86/guest/hypervisor.c        |  7 +++++--
 xen/arch/x86/guest/xen/xen.c           |  2 +-
 xen/arch/x86/smp.c                     |  5 ++---
 xen/include/asm-x86/flushtlb.h         |  3 +++
 xen/include/asm-x86/guest/hypervisor.h | 10 +++++-----
 5 files changed, 16 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/guest/hypervisor.c b/xen/arch/x86/guest/hypervisor.c
index 47e938e287..6ee28c9df1 100644
--- a/xen/arch/x86/guest/hypervisor.c
+++ b/xen/arch/x86/guest/hypervisor.c
@@ -75,10 +75,13 @@ void __init hypervisor_e820_fixup(struct e820map *e820)
 }
 
 int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
-                         unsigned int order)
+                         unsigned int flags)
 {
+    if ( flags & ~FLUSH_TLB_FLAGS_MASK )
+        return -EINVAL;
+
     if ( ops.flush_tlb )
-        return alternative_call(ops.flush_tlb, mask, va, order);
+        return alternative_call(ops.flush_tlb, mask, va, flags);
 
     return -ENOSYS;
 }
diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
index 5d3427a713..0eb1115c4d 100644
--- a/xen/arch/x86/guest/xen/xen.c
+++ b/xen/arch/x86/guest/xen/xen.c
@@ -324,7 +324,7 @@ static void __init e820_fixup(struct e820map *e820)
         pv_shim_fixup_e820(e820);
 }
 
-static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int order)
+static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int flags)
 {
     return xen_hypercall_hvm_op(HVMOP_flush_tlbs, NULL);
 }
diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index c7caf5bc26..4dab74c0d5 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -258,9 +258,8 @@ void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
          !cpumask_subset(mask, cpumask_of(cpu)) )
     {
         if ( cpu_has_hypervisor &&
-             !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
-                         FLUSH_ORDER_MASK)) &&
-             !hypervisor_flush_tlb(mask, va, flags & FLUSH_ORDER_MASK) )
+             !(flags & ~FLUSH_TLB_FLAGS_MASK) &&
+             !hypervisor_flush_tlb(mask, va, flags) )
         {
             if ( tlb_clk_enabled )
                 tlb_clk_enabled = false;
diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index 9773014320..a4de317452 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -123,6 +123,9 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4);
 /* Flush all HVM guests linear TLB (using ASID/VPID) */
 #define FLUSH_GUESTS_TLB 0x4000
 
+#define FLUSH_TLB_FLAGS_MASK (FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID | \
+                              FLUSH_ORDER_MASK)
+
 /* Flush local TLBs/caches. */
 unsigned int flush_area_local(const void *va, unsigned int flags);
 #define flush_local(flags) flush_area_local(NULL, flags)
diff --git a/xen/include/asm-x86/guest/hypervisor.h b/xen/include/asm-x86/guest/hypervisor.h
index 432e57c2a0..48d54735d2 100644
--- a/xen/include/asm-x86/guest/hypervisor.h
+++ b/xen/include/asm-x86/guest/hypervisor.h
@@ -35,7 +35,7 @@ struct hypervisor_ops {
     /* Fix up e820 map */
     void (*e820_fixup)(struct e820map *e820);
     /* L0 assisted TLB flush */
-    int (*flush_tlb)(const cpumask_t *mask, const void *va, unsigned int order);
+    int (*flush_tlb)(const cpumask_t *mask, const void *va, unsigned int flags);
 };
 
 #ifdef CONFIG_GUEST
@@ -48,11 +48,11 @@ void hypervisor_e820_fixup(struct e820map *e820);
 /*
  * L0 assisted TLB flush.
  * mask: cpumask of the dirty vCPUs that should be flushed.
- * va: linear address to flush, or NULL for global flushes.
- * order: order of the linear address pointed by va.
+ * va: linear address to flush, or NULL for entire address space.
+ * flags: flags for flushing, including the order of va.
  */
 int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
-                         unsigned int order);
+                         unsigned int flags);
 
 #else
 
@@ -65,7 +65,7 @@ static inline int hypervisor_ap_setup(void) { return 0; }
 static inline void hypervisor_resume(void) { ASSERT_UNREACHABLE(); }
 static inline void hypervisor_e820_fixup(struct e820map *e820) {}
 static inline int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
-                                       unsigned int order)
+                                       unsigned int flags)
 {
     return -ENOSYS;
 }
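
For illustration only, not part of the patch: the standalone sketch below
shows how a hypervisor-specific flush_tlb callback could act on the full
flag set once it is passed through, which is the point of this change. The
flag values are mirrored from xen/include/asm-x86/flushtlb.h at the time of
this series; the cpumask argument is dropped for brevity, and
do_ranged_flush()/do_global_flush() are hypothetical stand-ins for whatever
mechanism an assisted-flush backend (e.g. the Hyper-V hook added later in
the series) actually offers.

#include <stdio.h>
#include <stddef.h>

/* Flag layout mirrored from xen/include/asm-x86/flushtlb.h (illustrative). */
#define FLUSH_ORDER_MASK  0xff          /* low byte: order of va, biased by 1 */
#define FLUSH_ORDER(x)    ((x) + 1)
#define FLUSH_TLB         0x100         /* flush non-global TLB entries */
#define FLUSH_TLB_GLOBAL  0x200         /* also flush global entries */
#define FLUSH_VA_VALID    0x800         /* va points at a valid mapping */

#define FLUSH_TLB_FLAGS_MASK (FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID | \
                              FLUSH_ORDER_MASK)

/* Hypothetical backends a real callback would dispatch to. */
static int do_ranged_flush(const void *va, unsigned int order)
{
    printf("ranged flush: va %p, order %u\n", va, order);
    return 0;
}

static int do_global_flush(int include_global)
{
    printf("full flush%s\n", include_global ? " (including global entries)" : "");
    return 0;
}

/* Shape of a flush_tlb hook that inspects the flags rather than just an order. */
static int example_flush_tlb(const void *va, unsigned int flags)
{
    if ( flags & ~FLUSH_TLB_FLAGS_MASK )
        return -1;                      /* -EINVAL in the real hook */

    if ( va && (flags & FLUSH_VA_VALID) )
        /* Fine-grained: only the region described by the order bits. */
        return do_ranged_flush(va, (flags & FLUSH_ORDER_MASK) - 1);

    /* Coarse: flush everything, honouring FLUSH_TLB_GLOBAL if requested. */
    return do_global_flush(!!(flags & FLUSH_TLB_GLOBAL));
}

int main(void)
{
    static char page[4096];

    example_flush_tlb(page, FLUSH_TLB | FLUSH_VA_VALID | FLUSH_ORDER(0));
    example_flush_tlb(NULL, FLUSH_TLB | FLUSH_TLB_GLOBAL);
    return 0;
}

With only an order parameter, as in the old interface, the callback could
not tell a global flush from a plain TLB flush; passing the whole flag word
lets an assisted-flush backend pick the cheapest operation that satisfies
the request.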