From patchwork Fri Apr 3 07:40:25 2015
X-Patchwork-Submitter: Wanpeng Li
X-Patchwork-Id: 6154001
From: Wanpeng Li
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Xiao Guangrong, Wanpeng Li
Subject: [PATCH v3] kvm: mmu: lazy collapse small sptes into large sptes
Date: Fri, 3 Apr 2015 15:40:25 +0800
Message-Id: <1428046825-6905-1-git-send-email-wanpeng.li@linux.intel.com>
X-Mailer: git-send-email 1.9.1
X-Mailing-List: kvm@vger.kernel.org

There are two scenarios that call for collapsing small sptes back into
large sptes.

- Dirty logging tracks sptes at 4k granularity, so large sptes are split.
  After a successful live migration the large sptes are reallocated on the
  destination machine and the guest on the source machine is destroyed.
  However, if live migration fails for some reason, the guest on the source
  machine keeps running and its sptes stay small, which leads to bad
  performance.

- Our customers write tools that track the dirty rate of guests via the
  EPT D bit or PML in order to pick the most suitable guest to live
  migrate; the sptes still stay small after that tracking.

This patch introduces lazy collapsing of small sptes into large sptes:
when dirty logging is stopped, the memory region is scanned in the ioctl
context, the sptes that can be collapsed into large pages are dropped
during the scan, and later #PFs are relied on to reallocate all large
sptes.
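For illustration only (not part of this patch), below is a minimal sketch of
the userspace side of "the ioctl context when dirty logging is stopped": a
VMM clears KVM_MEM_LOG_DIRTY_PAGES on a slot via KVM_SET_USER_MEMORY_REGION,
which is the path on which the patched kvm_arch_commit_memory_region() runs.
The helper name stop_dirty_logging() and its parameters are hypothetical.

/*
 * Hypothetical VMM-side helper: turn dirty logging off for one memslot.
 * Assumes vm_fd is an open KVM VM fd and that the slot was previously
 * registered with KVM_MEM_LOG_DIRTY_PAGES set.
 */
#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdio.h>

static int stop_dirty_logging(int vm_fd, __u32 slot, __u64 gpa,
			      __u64 size, __u64 hva)
{
	struct kvm_userspace_memory_region region = {
		.slot            = slot,
		.flags           = 0,	/* KVM_MEM_LOG_DIRTY_PAGES cleared */
		.guest_phys_addr = gpa,
		.memory_size     = size,
		.userspace_addr  = hva,
	};

	/*
	 * During this ioctl, kvm_arch_commit_memory_region() sees the flag
	 * go from set to clear and (with this patch) calls
	 * kvm_mmu_zap_collapsible_sptes() for the slot; the dropped sptes
	 * are rebuilt as large mappings by later #PFs.
	 */
	if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region) < 0) {
		perror("KVM_SET_USER_MEMORY_REGION");
		return -1;
	}

	return 0;
}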
Reviewed-by: Xiao Guangrong
Signed-off-by: Wanpeng Li
---
v2 -> v3:
 * update comments
 * fix infinite for loop
v1 -> v2:
 * use 'bool' instead of 'int'
 * add more comments
 * fix not being able to get the next spte after dropping the current spte

 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/mmu.c              | 73 +++++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c              | 19 +++++++++++
 3 files changed, 94 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 30b28dc..91b5bdb 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -854,6 +854,8 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
 void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 				      struct kvm_memory_slot *memslot);
+void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
+				   struct kvm_memory_slot *memslot);
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot);
 void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index cee7592..ba002a0 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4465,6 +4465,79 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 	kvm_flush_remote_tlbs(kvm);
 }
 
+static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
+					 unsigned long *rmapp)
+{
+	u64 *sptep;
+	struct rmap_iterator iter;
+	int need_tlb_flush = 0;
+	pfn_t pfn;
+	struct kvm_mmu_page *sp;
+
+	for (sptep = rmap_get_first(*rmapp, &iter); sptep;) {
+		BUG_ON(!(*sptep & PT_PRESENT_MASK));
+
+		sp = page_header(__pa(sptep));
+		pfn = spte_to_pfn(*sptep);
+
+		/*
+		 * Only EPT is supported for now; we still need to figure
+		 * out an efficient way to make this code aware of the
+		 * mapping level used by the guest.
+		 */
+		if (sp->role.direct &&
+			!kvm_is_reserved_pfn(pfn) &&
+			PageTransCompound(pfn_to_page(pfn))) {
+			drop_spte(kvm, sptep);
+			sptep = rmap_get_first(*rmapp, &iter);
+			need_tlb_flush = 1;
+		} else
+			sptep = rmap_get_next(&iter);
+	}
+
+	return need_tlb_flush;
+}
+
+void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
+				   struct kvm_memory_slot *memslot)
+{
+	bool flush = false;
+	unsigned long *rmapp;
+	unsigned long last_index, index;
+	gfn_t gfn_start, gfn_end;
+
+	spin_lock(&kvm->mmu_lock);
+
+	gfn_start = memslot->base_gfn;
+	gfn_end = memslot->base_gfn + memslot->npages - 1;
+
+	if (gfn_start >= gfn_end)
+		goto out;
+
+	rmapp = memslot->arch.rmap[0];
+	last_index = gfn_to_index(gfn_end, memslot->base_gfn,
+				  PT_PAGE_TABLE_LEVEL);
+
+	for (index = 0; index <= last_index; ++index, ++rmapp) {
+		if (*rmapp)
+			flush |= kvm_mmu_zap_collapsible_spte(kvm, rmapp);
+
+		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
+			if (flush) {
+				kvm_flush_remote_tlbs(kvm);
+				flush = false;
+			}
+			cond_resched_lock(&kvm->mmu_lock);
+		}
+	}
+
+	if (flush)
+		kvm_flush_remote_tlbs(kvm);
+
+out:
+	spin_unlock(&kvm->mmu_lock);
+}
+
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 50861dd..a6cd10b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7647,6 +7647,25 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	new = id_to_memslot(kvm->memslots, mem->slot);
 
 	/*
+	 * Dirty logging tracks sptes in 4k granularity, so large sptes are
+	 * split; the large sptes will be reallocated in the destination
+	 * machine and the guest in the source machine will be destroyed
+	 * when live migration succeeds. However, the guest in the source
+	 * machine will continue to run if live migration fails for some
+	 * reason, and its sptes stay small, which leads to bad performance.
+	 *
+	 * Lazily collapsing small sptes into large sptes is intended to
+	 * handle this: the memory region is scanned in the ioctl context
+	 * when dirty logging is stopped, sptes which can be collapsed into
+	 * large pages are dropped during the scan, and later #PFs are
+	 * relied on to reallocate all large sptes.
+	 */
+	if ((change != KVM_MR_DELETE) &&
+		(old->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
+		!(new->flags & KVM_MEM_LOG_DIRTY_PAGES))
+		kvm_mmu_zap_collapsible_sptes(kvm, new);
+
+	/*
 	 * Set up write protection and/or dirty logging for the new slot.
 	 *
 	 * For KVM_MR_DELETE and KVM_MR_MOVE, the shadow pages of old slot have