From patchwork Mon Feb 7 15:28:35 2022
X-Patchwork-Submitter: Maxim Levitsky
X-Patchwork-Id: 12737444
From: Maxim Levitsky
To: kvm@vger.kernel.org
Date: Mon, 7 Feb 2022 17:28:35 +0200
Message-Id: <20220207152847.836777-19-mlevitsk@redhat.com>
In-Reply-To: <20220207152847.836777-1-mlevitsk@redhat.com>
References: <20220207152847.836777-1-mlevitsk@redhat.com>
Subject: [Intel-gfx] [PATCH 18/30] KVM: x86: mmu: add strict mmu mode
List-Id: Intel graphics driver community testing & development
Cc: Dave Hansen, Wanpeng Li, David Airlie, "Chang S. Bae",
    "maintainer:X86 ARCHITECTURE 32-BIT AND 64-BIT",
    "open list:X86 ARCHITECTURE 32-BIT AND 64-BIT", Maxim Levitsky,
    Tony Luck, "open list:DRM DRIVERS", Brijesh Singh, Paolo Bonzini,
    Vitaly Kuznetsov, Jim Mattson,
    "open list:INTEL GVT-g DRIVERS Intel GPU Virtualization",
    "open list:INTEL DRM DRIVERS excluding Poulsbo, Moorestow...",
    Joerg Roedel, Borislav Petkov, Daniel Vetter, "H. Peter Anvin",
Peter Anvin\" , Ingo Molnar , Sean Christopherson , Joonas Lahtinen , Pawan Gupta , Thomas Gleixner " , Kan Liang Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" Add an (mostly debug) option to force KVM's shadow mmu to never have unsync pages. This is useful in some cases to debug it. It is also useful for some legacy guest OSes which don't flush TLBs correctly, and thus don't work on modern CPUs which have speculative MMUs. Using this option together with legacy paging (npt/ept=0) allows to correctly simulate such old MMU while still getting most of the benefits of the virtualization. Signed-off-by: Maxim Levitsky --- arch/x86/kvm/mmu/mmu.c | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 43c7abdd6b70f..fa2da6990703f 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -91,6 +91,10 @@ __MODULE_PARM_TYPE(nx_huge_pages_recovery_period_ms, "uint"); static bool __read_mostly force_flush_and_sync_on_reuse; module_param_named(flush_on_reuse, force_flush_and_sync_on_reuse, bool, 0644); + +bool strict_mmu; +module_param(strict_mmu, bool, 0644); + /* * When setting this variable to true it enables Two-Dimensional-Paging * where the hardware walks 2 page tables: @@ -2703,7 +2707,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot, } wrprot = make_spte(vcpu, sp, slot, pte_access, gfn, pfn, *sptep, prefetch, - true, host_writable, &spte); + !strict_mmu, host_writable, &spte); if (*sptep == spte) { ret = RET_PF_SPURIOUS; @@ -5139,6 +5143,11 @@ static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa, */ static bool detect_write_flooding(struct kvm_mmu_page *sp) { + /* + * When using non speculating MMU, use a bit higher threshold + * for write flood detection + */ + int threshold = strict_mmu ? 10 : 3; /* * Skip write-flooding detected for the sp whose level is 1, because * it can become unsync, then the guest page is not write-protected. @@ -5147,7 +5156,7 @@ static bool detect_write_flooding(struct kvm_mmu_page *sp) return false; atomic_inc(&sp->write_flooding_count); - return atomic_read(&sp->write_flooding_count) >= 3; + return atomic_read(&sp->write_flooding_count) >= threshold; } /*