From patchwork Thu Jan 8 18:26:17 2009
X-Patchwork-Submitter: Marcelo Tosatti
X-Patchwork-Id: 1400
Date: Thu, 8 Jan 2009 16:26:17 -0200
From: Marcelo Tosatti
To: kvm@vger.kernel.org
Cc: Avi Kivity
Subject: KVM: MMU: zero caches before entering mmu_lock protected section
Message-ID: <20090108182617.GA7250@amt.cnet>

Clean the pre-allocated cache pages before entering the mmu_lock-protected
region. This is safe since the caches are per-vcpu. Smaller chunks are
already zeroed by kmem_cache_zalloc.

Roughly a 0.90% reduction in system time with AIM7 on a RHEL3 guest with
4 vcpus.

Signed-off-by: Marcelo Tosatti

---

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 10bdb2a..823d0cd 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -301,7 +301,7 @@ static int mmu_topup_memory_cache_page(struct kvm_mmu_memory_cache *cache,
 	if (cache->nobjs >= min)
 		return 0;
 	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
-		page = alloc_page(GFP_KERNEL);
+		page = alloc_page(GFP_KERNEL|__GFP_ZERO);
 		if (!page)
 			return -ENOMEM;
 		set_page_private(page, 0);
@@ -352,7 +352,6 @@ static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc,
 	BUG_ON(!mc->nobjs);
 
 	p = mc->objects[--mc->nobjs];
-	memset(p, 0, size);
 	return p;
 }
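
For reference, here is a minimal userspace sketch of the pattern the patch
applies: zero objects when the per-vcpu cache is topped up, outside the
lock, so the allocation path that runs with the lock held is reduced to a
pointer pop. This is not kernel code; the names obj_cache, cache_topup and
cache_alloc are illustrative, not KVM's, and calloc() stands in for
alloc_page(GFP_KERNEL|__GFP_ZERO).

/*
 * Illustrative sketch only (hypothetical names, userspace analogue of
 * the kernel change above).
 */
#include <stdlib.h>
#include <assert.h>

#define CACHE_MAX 4
#define OBJ_SIZE  4096

struct obj_cache {
	int nobjs;
	void *objects[CACHE_MAX];
};

/*
 * Refill runs outside the lock: calloc() hands back already-zeroed
 * memory, the analogue of alloc_page(GFP_KERNEL|__GFP_ZERO).
 */
static int cache_topup(struct obj_cache *mc, int min)
{
	void *p;

	if (mc->nobjs >= min)
		return 0;
	while (mc->nobjs < CACHE_MAX) {
		p = calloc(1, OBJ_SIZE);
		if (!p)
			return -1;
		mc->objects[mc->nobjs++] = p;
	}
	return 0;
}

/*
 * Called with the lock held: no memset() here any more, just a pop.
 * Safe only because the cache is private to one consumer, the same
 * reason the per-vcpu caches make the kernel change safe.
 */
static void *cache_alloc(struct obj_cache *mc)
{
	assert(mc->nobjs > 0);
	return mc->objects[--mc->nobjs];
}

int main(void)
{
	struct obj_cache mc = { 0 };

	if (cache_topup(&mc, 2))	/* outside the critical section */
		return 1;
	(void)cache_alloc(&mc);		/* inside: pointer pop only */
	return 0;
}

The trade-off is the usual one: the zeroing work does not go away, it
moves to the top-up path where it can run without serializing other
vcpus, which is where the AIM7 system-time win comes from.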