From patchwork Mon Dec 19 21:58:24 2016
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 9480785
From: David Matlack
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, rkrcmar@redhat.com, David Matlack
Subject: [PATCH 1/2] kvm: x86: export maximum number of mmu_page_hash collisions
Date: Mon, 19 Dec 2016 13:58:24 -0800
Message-Id: <1482184705-127401-1-git-send-email-dmatlack@google.com>

Report the maximum number of mmu_page_hash collisions as a per-VM stat.
This will make it easy to identify problems with the mmu_page_hash in
the future.
Signed-off-by: David Matlack
Change-Id: I096fc6d5a3589e7f19fcc9c2a1b8a37c7368ba17
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/mmu.c              | 31 +++++++++++++++++++++----------
 arch/x86/kvm/x86.c              |  2 ++
 3 files changed, 24 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2e25038..8ba0d64 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -815,6 +815,7 @@ struct kvm_vm_stat {
 	ulong mmu_unsync;
 	ulong remote_tlb_flush;
 	ulong lpages;
+	ulong max_mmu_page_hash_collisions;
 };
 
 struct kvm_vcpu_stat {
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7012de4..58995fd9 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1904,17 +1904,17 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
  * since it has been deleted from active_mmu_pages but still can be found
  * at hast list.
  *
- * for_each_gfn_valid_sp() has skipped that kind of pages.
+ * for_each_valid_sp() has skipped that kind of pages.
  */
-#define for_each_gfn_valid_sp(_kvm, _sp, _gfn)				\
+#define for_each_valid_sp(_kvm, _sp, _gfn)				\
 	hlist_for_each_entry(_sp,					\
 	  &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)], hash_link) \
-		if ((_sp)->gfn != (_gfn) || is_obsolete_sp((_kvm), (_sp)) \
-			|| (_sp)->role.invalid) {} else
+		if (is_obsolete_sp((_kvm), (_sp)) || (_sp)->role.invalid) {	\
+		} else
 
 #define for_each_gfn_indirect_valid_sp(_kvm, _sp, _gfn)			\
-	for_each_gfn_valid_sp(_kvm, _sp, _gfn)				\
-		if ((_sp)->role.direct) {} else
+	for_each_valid_sp(_kvm, _sp, _gfn)				\
+		if ((_sp)->gfn != (_gfn) || (_sp)->role.direct) {} else
 
 /* @sp->gfn should be write-protected at the call site */
 static bool __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
@@ -2116,6 +2116,8 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	struct kvm_mmu_page *sp;
 	bool need_sync = false;
 	bool flush = false;
+	int collisions = 0;
+	bool created = true;
 	LIST_HEAD(invalid_list);
 
 	role = vcpu->arch.mmu.base_role;
@@ -2130,7 +2132,12 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 		quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
 		role.quadrant = quadrant;
 	}
-	for_each_gfn_valid_sp(vcpu->kvm, sp, gfn) {
+	for_each_valid_sp(vcpu->kvm, sp, gfn) {
+		if (sp->gfn != gfn) {
+			collisions++;
+			continue;
+		}
+
 		if (!need_sync && sp->unsync)
 			need_sync = true;
 
@@ -2152,8 +2159,8 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
 
 		__clear_sp_write_flooding_count(sp);
-		trace_kvm_mmu_get_page(sp, false);
-		return sp;
+		created = false;
+		goto out;
 	}
 
 	++vcpu->kvm->stat.mmu_cache_miss;
@@ -2164,6 +2171,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	sp->role = role;
 	hlist_add_head(&sp->hash_link,
 		&vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)]);
+
 	if (!direct) {
 		/*
 		 * we should do write protection before syncing pages
@@ -2180,9 +2188,12 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	}
 	sp->mmu_valid_gen = vcpu->kvm->arch.mmu_valid_gen;
 	clear_page(sp->spt);
-	trace_kvm_mmu_get_page(sp, true);
 
 	kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
+out:
+	if (collisions > vcpu->kvm->stat.max_mmu_page_hash_collisions)
+		vcpu->kvm->stat.max_mmu_page_hash_collisions = collisions;
+	trace_kvm_mmu_get_page(sp, created);
 	return sp;
 }
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8f86c0c..ee4c35e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -190,6 +190,8 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	{ "mmu_unsync", VM_STAT(mmu_unsync) },
 	{ "remote_tlb_flush", VM_STAT(remote_tlb_flush) },
 	{ "largepages", VM_STAT(lpages) },
+	{ "max_mmu_page_hash_collisions",
+		VM_STAT(max_mmu_page_hash_collisions) },
 	{ NULL }
 };