From patchwork Thu May  9 06:46:02 2013
X-Patchwork-Submitter: Takuya Yoshikawa
X-Patchwork-Id: 2543341
Date: Thu, 9 May 2013 15:46:02 +0900
From: Takuya Yoshikawa
To: gleb@redhat.com, pbonzini@redhat.com, mtosatti@redhat.com
Cc: kvm@vger.kernel.org
Subject: [PATCH 3/3] KVM: MMU: Consolidate common code in mmu_free_roots()
Message-Id: <20130509154602.da528c3b.yoshikawa_takuya_b1@lab.ntt.co.jp>
In-Reply-To: <20130509154350.15b956c4.yoshikawa_takuya_b1@lab.ntt.co.jp>
References: <20130509154350.15b956c4.yoshikawa_takuya_b1@lab.ntt.co.jp>
X-Mailer: Sylpheed 3.1.0 (GTK+ 2.24.4; x86_64-pc-linux-gnu)
By making the last three statements common to both if/else cases, the
symmetry between the locking and the unlocking becomes clearer.

One note here is that the VCPU's root_hpa does not need to be protected
by mmu_lock.

Signed-off-by: Takuya Yoshikawa
---
 arch/x86/kvm/mmu.c | 39 +++++++++++++++++++--------------------
 1 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d01f340..bf80b46 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2861,42 +2861,41 @@ out_unlock:
 static void mmu_free_roots(struct kvm_vcpu *vcpu)
 {
 	int i;
+	hpa_t root;
 	struct kvm_mmu_page *sp;
 	LIST_HEAD(invalid_list);
 
 	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
 		return;
+
 	spin_lock(&vcpu->kvm->mmu_lock);
+
 	if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_LEVEL &&
 	    (vcpu->arch.mmu.root_level == PT64_ROOT_LEVEL ||
 	     vcpu->arch.mmu.direct_map)) {
-		hpa_t root = vcpu->arch.mmu.root_hpa;
-
+		root = vcpu->arch.mmu.root_hpa;
 		sp = page_header(root);
 		--sp->root_count;
-		if (!sp->root_count && sp->role.invalid) {
+		if (!sp->root_count && sp->role.invalid)
 			kvm_mmu_prepare_zap_page(vcpu->kvm, sp, &invalid_list);
-			kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
+	} else {
+		for (i = 0; i < 4; ++i) {
+			root = vcpu->arch.mmu.pae_root[i];
+			if (root) {
+				root &= PT64_BASE_ADDR_MASK;
+				sp = page_header(root);
+				--sp->root_count;
+				if (!sp->root_count && sp->role.invalid)
+					kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
+								 &invalid_list);
+			}
+			vcpu->arch.mmu.pae_root[i] = INVALID_PAGE;
 		}
-		vcpu->arch.mmu.root_hpa = INVALID_PAGE;
-		spin_unlock(&vcpu->kvm->mmu_lock);
-		return;
 	}
-	for (i = 0; i < 4; ++i) {
-		hpa_t root = vcpu->arch.mmu.pae_root[i];
-
-		if (root) {
-			root &= PT64_BASE_ADDR_MASK;
-			sp = page_header(root);
-			--sp->root_count;
-			if (!sp->root_count && sp->role.invalid)
-				kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
-							 &invalid_list);
-		}
-		vcpu->arch.mmu.pae_root[i] = INVALID_PAGE;
-	}
+
 	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 	spin_unlock(&vcpu->kvm->mmu_lock);
+	vcpu->arch.mmu.root_hpa = INVALID_PAGE;
 }
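[Not part of the patch] The restructuring can be sketched in miniature outside
the kernel: one lock/unlock pair wrapping an if/else, with the final reset done
after the unlock. Everything below (lock_depth, free_roots, the counters) is an
illustrative stand-in, not KVM code.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for mmu_lock and the root-freeing work. */
static int lock_depth;
static int freed_long_mode;
static int freed_pae;

static void lock(void)   { lock_depth++; }
static void unlock(void) { lock_depth--; }

static void free_roots(bool long_mode)
{
	lock();
	if (long_mode) {
		/* single root: the PT64 case */
		freed_long_mode++;
	} else {
		/* four roots: the PAE case */
		for (int i = 0; i < 4; ++i)
			freed_pae++;
	}
	unlock();
	/* per-VCPU state needing no lock is reset here, after unlock() */
}
```

Both paths now reach the same unlock, which is the symmetry the patch is after.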