From patchwork Wed Mar 20 20:14:20 2013
X-Patchwork-Submitter: Marcelo Tosatti
X-Patchwork-Id: 2309641
Date: Wed, 20 Mar 2013 17:14:20 -0300
From: Marcelo Tosatti
To: kvm, Ulrich Obergfell
Cc: Xiao Guangrong, Takuya Yoshikawa, Avi Kivity
Subject: KVM: MMU: improve n_max_mmu_pages calculation with TDP
Message-ID: <20130320201420.GA17347@amt.cnet>
X-Mailing-List: kvm@vger.kernel.org

The kvm_mmu_calculate_mmu_pages default (maximum number of shadow pages =
2% of mapped guest pages) does not make sense for TDP guests, where mapping
all of guest memory with 4k pages cannot exceed "mapped guest pages / 512"
pages (not counting root pages).

Allow that maximum for TDP, forcing the guest to recycle otherwise.

Signed-off-by: Marcelo Tosatti

---
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 956ca35..a9694a8d7 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4293,7 +4293,7 @@ nomem:
 unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
 {
 	unsigned int nr_mmu_pages;
-	unsigned int nr_pages = 0;
+	unsigned int i, nr_pages = 0;
 	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
@@ -4302,7 +4302,19 @@ unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
 	kvm_for_each_memslot(memslot, slots)
 		nr_pages += memslot->npages;
 
-	nr_mmu_pages = nr_pages * KVM_PERMILLE_MMU_PAGES / 1000;
+	if (tdp_enabled) {
+		/* one root page */
+		nr_mmu_pages = 1;
+		/* nr_pages / (512^i) pages per level, due to
+		 * the guest RAM map being linear */
+		for (i = 1; i < 4; i++) {
+			int nr_pages_round = nr_pages + (1 << (9 * i));
+			nr_mmu_pages += nr_pages_round >> (9 * i);
+		}
+	} else {
+		nr_mmu_pages = nr_pages * KVM_PERMILLE_MMU_PAGES / 1000;
+	}
+
 	nr_mmu_pages = max(nr_mmu_pages, (unsigned int) KVM_MIN_ALLOC_MMU_PAGES);