From patchwork Tue Jun 15 13:55:23 2010
X-Patchwork-Submitter: David Hansen
X-Patchwork-Id: 106200
Subject: [RFC][PATCH 4/9] create aggregate kvm_total_used_mmu_pages value
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Dave Hansen
From: Dave Hansen
Date: Tue, 15 Jun 2010 06:55:23 -0700
References: <20100615135518.BC244431@kernel.beaverton.ibm.com>
In-Reply-To: <20100615135518.BC244431@kernel.beaverton.ibm.com>
Message-Id: <20100615135523.25D24A73@kernel.beaverton.ibm.com>

diff -puN arch/x86/kvm/mmu.c~make_global_used_value arch/x86/kvm/mmu.c
--- linux-2.6.git/arch/x86/kvm/mmu.c~make_global_used_value	2010-06-09 15:14:30.000000000 -0700
+++ linux-2.6.git-dave/arch/x86/kvm/mmu.c	2010-06-09 15:14:30.000000000 -0700
@@ -891,6 +891,19 @@ static int is_empty_shadow_page(u64 *spt
 }
 #endif
 
+/*
+ * This value is the sum of all of the kvm instances'
+ * kvm->arch.n_used_mmu_pages values.  We need a global,
+ * aggregate version in order to make the slab shrinker
+ * faster.
+ */
+static unsigned int kvm_total_used_mmu_pages;
+static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, int nr)
+{
+	kvm->arch.n_used_mmu_pages += nr;
+	kvm_total_used_mmu_pages += nr;
+}
+
 static void kvm_mmu_free_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	ASSERT(is_empty_shadow_page(sp->spt));
@@ -898,7 +911,7 @@ static void kvm_mmu_free_page(struct kvm
 	__free_page(virt_to_page(sp->spt));
 	__free_page(virt_to_page(sp->gfns));
 	kfree(sp);
-	--kvm->arch.n_used_mmu_pages;
+	kvm_mod_used_mmu_pages(kvm, -1);
 }
 
 static unsigned kvm_page_table_hashfn(gfn_t gfn)
@@ -919,7 +932,7 @@ static struct kvm_mmu_page *kvm_mmu_allo
 	bitmap_zero(sp->slot_bitmap, KVM_MEMORY_SLOTS + KVM_PRIVATE_MEM_SLOTS);
 	sp->multimapped = 0;
 	sp->parent_pte = parent_pte;
-	++vcpu->kvm->arch.n_used_mmu_pages;
+	kvm_mod_used_mmu_pages(vcpu->kvm, +1);
 	return sp;
 }
 
@@ -2914,21 +2927,20 @@ static int mmu_shrink(int nr_to_scan, gf
 {
 	struct kvm *kvm;
 	struct kvm *kvm_freed = NULL;
-	int cache_count = 0;
+
+	if (nr_to_scan == 0)
+		goto out;
 
 	spin_lock(&kvm_lock);
 
 	list_for_each_entry(kvm, &vm_list, vm_list) {
-		int npages, idx, freed_pages;
+		int idx, freed_pages;
 
 		idx = srcu_read_lock(&kvm->srcu);
 		spin_lock(&kvm->mmu_lock);
-		npages = kvm->arch.n_max_mmu_pages -
-			 kvm_mmu_available_pages(kvm);
-		cache_count += npages;
-		if (!kvm_freed && nr_to_scan > 0 && npages > 0) {
+		if (!kvm_freed && nr_to_scan > 0 &&
+		    kvm->arch.n_used_mmu_pages > 0) {
 			freed_pages = kvm_mmu_remove_some_alloc_mmu_pages(kvm);
-			cache_count -= freed_pages;
 			kvm_freed = kvm;
 		}
 		nr_to_scan--;
@@ -2941,7 +2953,8 @@ static int mmu_shrink(int nr_to_scan, gf
 
 	spin_unlock(&kvm_lock);
 
-	return cache_count;
+out:
+	return kvm_total_used_mmu_pages;
 }
 
 static struct shrinker mmu_shrinker = {
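
For reference, the bookkeeping pattern the patch introduces can be sketched in
plain C outside the kernel: one helper updates a per-instance counter and a
global aggregate together, so a shrinker-style "how much is cached?" query is
O(1) instead of a walk over every instance. This is only an illustrative
userspace sketch under those assumptions, not kernel code; every name in it
(struct instance, mod_used_pages, shrink_query) is hypothetical.

/*
 * Illustrative userspace sketch (all names hypothetical) of the
 * aggregate-counter pattern from this patch: a single helper moves the
 * per-instance counter and the global aggregate in lockstep, so a size
 * query can return the aggregate without visiting any instance.
 */
#include <stdio.h>

struct instance {
	unsigned int n_used_pages;	/* per-instance count */
};

static unsigned int total_used_pages;	/* global aggregate */

/* Analogue of kvm_mod_used_mmu_pages(): both counters change together. */
static void mod_used_pages(struct instance *inst, int nr)
{
	inst->n_used_pages += nr;
	total_used_pages += nr;
}

/* Analogue of mmu_shrink()'s nr_to_scan == 0 path: no list walk needed. */
static unsigned int shrink_query(void)
{
	return total_used_pages;
}

int main(void)
{
	struct instance a = { 0 }, b = { 0 };

	mod_used_pages(&a, +3);
	mod_used_pages(&b, +2);
	mod_used_pages(&a, -1);

	printf("%u\n", shrink_query());	/* prints 4 */
	return 0;
}

The payoff in the patch is the nr_to_scan == 0 case: when the shrinker core
merely polls the cache size, mmu_shrink() can now return
kvm_total_used_mmu_pages immediately instead of taking kvm_lock and summing
over every VM on vm_list.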