From patchwork Tue Feb 5 21:01:22 2019
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 10798485
From: Sean Christopherson
To: Paolo Bonzini, Radim Krčmář
Cc: kvm@vger.kernel.org, Xiao Guangrong
Subject: [PATCH v2 12/27] Revert "KVM: MMU: document fast invalidate all pages"
Date: Tue, 5 Feb 2019 13:01:22 -0800
Message-Id: <20190205210137.1377-12-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190205205443.1059-1-sean.j.christopherson@intel.com>
References: <20190205205443.1059-1-sean.j.christopherson@intel.com>
X-Mailing-List: kvm@vger.kernel.org

Remove x86 KVM's fast invalidate mechanism, i.e. revert all patches
from the original series[1].

Though not explicitly stated, for all intents and purposes the fast
invalidate mechanism was added to speed up the scenario where removing
a memslot, e.g. as part of reading a PCI ROM, caused KVM to flush all
shadow entries[1].  Now that the memslot case flushes only shadow
entries belonging to the memslot, i.e. doesn't use the fast invalidate
mechanism, the only remaining uses of the mechanism are when the VM is
being destroyed and when the MMIO generation rolls over.

When a VM is being destroyed, either there are no active vcpus, i.e.
there's no lock contention, or the VM has ungracefully terminated, in
which case we want to reclaim its pages as quickly as possible, i.e.
not release the MMU lock if there are still CPUs executing in the VM.

The MMIO generation scenario is almost literally a one-in-a-million
occurrence, i.e. it is not a performance-sensitive scenario.
Given that lock-breaking is not desirable (VM teardown) or irrelevant
(MMIO generation overflow), remove the fast invalidate mechanism to
simplify the code (a small amount) and to discourage future code from
zapping all pages, as using such a big hammer should be a last resort.

This reverts commit f6f8adeef542a18b1cb26a0b772c9781a10bb477.

[1] https://lkml.kernel.org/r/1369960590-14138-1-git-send-email-xiaoguangrong@linux.vnet.ibm.com

Cc: Xiao Guangrong
Signed-off-by: Sean Christopherson
---
 Documentation/virtual/kvm/mmu.txt | 28 +---------------------------
 arch/x86/include/asm/kvm_host.h   |  3 ---
 2 files changed, 1 insertion(+), 30 deletions(-)

diff --git a/Documentation/virtual/kvm/mmu.txt b/Documentation/virtual/kvm/mmu.txt
index 367a952f50ab..f365102c80f5 100644
--- a/Documentation/virtual/kvm/mmu.txt
+++ b/Documentation/virtual/kvm/mmu.txt
@@ -224,10 +224,6 @@ Shadow pages contain the following information:
     A bitmap indicating which sptes in spt point (directly or indirectly) at
     pages that may be unsynchronized.  Used to quickly locate all unsychronized
     pages reachable from a given page.
-  mmu_valid_gen:
-    Generation number of the page.  It is compared with kvm->arch.mmu_valid_gen
-    during hash table lookup, and used to skip invalidated shadow pages (see
-    "Zapping all pages" below.)
   clear_spte_count:
     Only present on 32-bit hosts, where a 64-bit spte cannot be written
     atomically.  The reader uses this while running out of the MMU lock
@@ -402,27 +398,6 @@ causes its disallow_lpage to be incremented, thus preventing instantiation of
 a large spte.  The frames at the end of an unaligned memory slot have
 artificially inflated ->disallow_lpages so they can never be instantiated.
 
-Zapping all pages (page generation count)
-=========================================
-
-For the large memory guests, walking and zapping all pages is really slow
-(because there are a lot of pages), and also blocks memory accesses of
-all VCPUs because it needs to hold the MMU lock.
-
-To make it be more scalable, kvm maintains a global generation number
-which is stored in kvm->arch.mmu_valid_gen.  Every shadow page stores
-the current global generation-number into sp->mmu_valid_gen when it
-is created.  Pages with a mismatching generation number are "obsolete".
-
-When KVM need zap all shadow pages sptes, it just simply increases the global
-generation-number then reload root shadow pages on all vcpus.  As the VCPUs
-create new shadow page tables, the old pages are not used because of the
-mismatching generation number.
-
-KVM then walks through all pages and zaps obsolete pages.  While the zap
-operation needs to take the MMU lock, the lock can be released periodically
-so that the VCPUs can make progress.
-
 Fast invalidation of MMIO sptes
 ===============================
 
@@ -435,8 +410,7 @@ shadow pages, and is made more scalable with a similar technique.
 MMIO sptes have a few spare bits, which are used to store a
 generation number.  The global generation number is stored in
 kvm_memslots(kvm)->generation, and increased whenever guest memory info
-changes.  This generation number is distinct from the one described in
-the previous section.
+changes.
 
 When KVM finds an MMIO spte, it checks the generation number of the spte.
 If the generation number of the spte does not equal the global generation
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1c7f726e3ad5..afb56feff5f3 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -332,10 +332,7 @@ struct kvm_mmu_page {
 	int root_count;          /* Currently serving as active root */
 	unsigned int unsync_children;
 	struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
-
-	/* The page is obsolete if mmu_valid_gen != kvm->arch.mmu_valid_gen.  */
 	unsigned long mmu_valid_gen;
-
 	DECLARE_BITMAP(unsync_child_bitmap, 512);
 
 #ifdef CONFIG_X86_32
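
For readers following the revert, below is a minimal, self-contained sketch
of the two generation-number schemes referenced above: the mmu_valid_gen
scheme whose documentation this patch removes, and the MMIO spte generation
that remains.  The struct names, fields, and mask width are illustrative
stand-ins for this sketch, not KVM's actual definitions.  The design point
in both cases is the same: bumping a single counter invalidates many cached
entries at once, and staleness is detected lazily on the next lookup.

/*
 * Minimal sketch (not KVM code): generation-number based invalidation,
 * with illustrative stand-in types.
 */
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for per-VM state (cf. kvm->arch.mmu_valid_gen and
 * kvm_memslots(kvm)->generation). */
struct vm_state {
	unsigned long mmu_valid_gen;   /* bumped to obsolete all shadow pages */
	uint64_t memslots_generation;  /* bumped whenever guest memory info changes */
};

/* Stand-in for a shadow page (cf. struct kvm_mmu_page). */
struct shadow_page {
	unsigned long mmu_valid_gen;   /* snapshot of the global value at creation */
};

/*
 * "Zapping all pages": invalidating every shadow page is O(1) because only
 * the global generation is bumped; pages created under an older generation
 * are "obsolete", skipped on hash-table lookup, and zapped lazily.
 */
static void fast_invalidate_all(struct vm_state *vm)
{
	vm->mmu_valid_gen++;
}

static bool sp_is_obsolete(const struct vm_state *vm, const struct shadow_page *sp)
{
	return sp->mmu_valid_gen != vm->mmu_valid_gen;
}

/*
 * "Fast invalidation of MMIO sptes": an MMIO spte caches, in a few spare
 * bits, the generation it was created in; a mismatch with the current
 * memslots generation means the cached translation must be rebuilt.
 * The mask width here is purely illustrative.
 */
#define MMIO_SPTE_GEN_MASK 0x7ffu

static bool mmio_spte_is_stale(const struct vm_state *vm, uint64_t spte_gen_bits)
{
	return spte_gen_bits != (vm->memslots_generation & MMIO_SPTE_GEN_MASK);
}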