From patchwork Thu Jan 10 18:07:04 2019
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 10756603
From: Sean Christopherson
To: Paolo Bonzini, Radim Krčmář
Cc: kvm@vger.kernel.org, Xiao Guangrong
Subject: [PATCH 12/14] Revert "KVM: x86: use the fast way to invalidate all pages"
Date: Thu, 10 Jan 2019 10:07:04 -0800
Message-Id: <20190110180706.24974-13-sean.j.christopherson@intel.com>
In-Reply-To: <20190110180706.24974-1-sean.j.christopherson@intel.com>
References: <20190110180706.24974-1-sean.j.christopherson@intel.com>

Revert to a slow kvm_mmu_zap_all() for kvm_arch_flush_shadow_all().
Flushing all shadow entries is only done during VM teardown, i.e.
kvm_arch_flush_shadow_all() is only called when the associated MM struct
is being released or when the VM instance is being freed.

In normal operation there aren't any active vCPUs to defer to, and in an
unexpected teardown, deferring to running vCPUs is undesirable.  An
argument can be made that KVM should still schedule voluntarily during
VM teardown (since it's a common path) to play nice with the rest of the
kernel, but that can be done without the fast invalidate mechanism, e.g.
by marking the VM as dead to prevent further MMU activity.

This reverts commit 6ca18b6950f8dee29361722f28f69847724b276f.
Cc: Xiao Guangrong
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu.c | 15 +++++++++++++++
 arch/x86/kvm/x86.c |  2 +-
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ce5d5ec99fa5..291356af4a00 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5836,6 +5836,21 @@ void kvm_mmu_slot_set_dirty(struct kvm *kvm,
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_slot_set_dirty);
 
+void kvm_mmu_zap_all(struct kvm *kvm)
+{
+	struct kvm_mmu_page *sp, *node;
+	LIST_HEAD(invalid_list);
+
+	spin_lock(&kvm->mmu_lock);
+restart:
+	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link)
+		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
+			goto restart;
+
+	kvm_mmu_commit_zap_page(kvm, &invalid_list);
+	spin_unlock(&kvm->mmu_lock);
+}
+
 static void kvm_zap_obsolete_pages(struct kvm *kvm)
 {
 	struct kvm_mmu_page *sp, *node;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index bd903779610e..5d7def8fabe2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9450,7 +9450,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
 {
-	kvm_mmu_invalidate_zap_all_pages(kvm);
+	kvm_mmu_zap_all(kvm);
 }
 
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
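
[Editor's note] The reinstated kvm_mmu_zap_all() is short, but its restart
loop is subtle.  Below is an annotated copy of the function from the hunk
above; the comments are editorial exposition, not part of the patch:

void kvm_mmu_zap_all(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *node;
	LIST_HEAD(invalid_list);

	/* mmu_lock protects active_mmu_pages and the zap machinery. */
	spin_lock(&kvm->mmu_lock);
restart:
	/*
	 * Zapping one page can pull other pages off active_mmu_pages
	 * (e.g. unsync children), which can invalidate even the _safe
	 * iterator's saved next pointer, so restart the walk whenever
	 * a page is actually zapped.
	 */
	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link)
		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
			goto restart;

	/* Flush remote TLBs and free everything batched on invalid_list. */
	kvm_mmu_commit_zap_page(kvm, &invalid_list);
	spin_unlock(&kvm->mmu_lock);
}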
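
For contrast, here is a simplified sketch of the "fast way" that this
revert stops using for teardown.  It is reconstructed from the era's
kvm_mmu_invalidate_zap_all_pages() with its tracepoint elided, so treat
the body as illustrative rather than authoritative:

/*
 * Sketch (not part of this patch): the fast path invalidates rather
 * than zaps.  Bumping the generation number makes every existing
 * shadow page obsolete, vCPUs are kicked to reload their MMU state,
 * and the obsolete pages are then reaped with mmu_lock yielded
 * periodically so running vCPUs can make progress.
 */
void kvm_mmu_invalidate_zap_all_pages(struct kvm *kvm)
{
	spin_lock(&kvm->mmu_lock);

	/* Obsolete all shadow pages created under the old generation. */
	kvm->arch.mmu_valid_gen++;

	/* Request a root reload on all vCPUs before guest re-entry. */
	kvm_reload_remote_mmus(kvm);

	/* Reap the now-obsolete pages, yielding the lock under contention. */
	kvm_zap_obsolete_pages(kvm);
	spin_unlock(&kvm->mmu_lock);
}

As the commit message notes, deferring to running vCPUs is exactly the
behavior that buys nothing at teardown, when no vCPUs are left to run.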