From patchwork Tue May 19 17:25:13 2015
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 6440051
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Takuya Yoshikawa, Xiao Guangrong, rkrcmar@redhat.com
Subject: [PATCH 9/9] KVM: x86: pass struct kvm_mmu_page to account/unaccount_shadowed
Date: Tue, 19 May 2015 19:25:13 +0200
Message-Id: <1432056313-36100-10-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1432056313-36100-1-git-send-email-pbonzini@redhat.com>
References: <1432056313-36100-1-git-send-email-pbonzini@redhat.com>

Prepare for multiple address spaces this way, since a VCPU is not
available where unaccount_shadowed is called.  We will get to the right
kvm_memslots struct through the role field in struct kvm_mmu_page.
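[Context, not part of this patch: once multiple address spaces are in
place, the memslots can be derived from the shadow page itself rather
than from a VCPU.  A minimal sketch of what such a role-based lookup
could look like follows; the helper name kvm_memslots_for_spte_role and
the use of a role bit such as role.smm as the address space id are
assumptions about the follow-up series, not something this patch adds.]

static inline struct kvm_memslots *
kvm_memslots_for_spte_role(struct kvm *kvm, union kvm_mmu_page_role role)
{
	/* Hypothetical: select the address space from the page role
	 * (e.g. SMM vs. normal), with no VCPU in sight. */
	return __kvm_memslots(kvm, role.smm);
}

[account_shadowed()/unaccount_shadowed() could then replace the
gfn_to_memslot(kvm, gfn) call with something like:]

	slot = __gfn_to_memslot(kvm_memslots_for_spte_role(kvm, sp->role),
				sp->gfn);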
Signed-off-by: Paolo Bonzini
Reviewed-by: Takuya Yoshikawa
---
 arch/x86/kvm/mmu.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ceed1c591bc5..6ea24812007a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -804,12 +804,14 @@ static struct kvm_lpage_info *lpage_info_slot(gfn_t gfn,
 	return &slot->arch.lpage_info[level - 2][idx];
 }
 
-static void account_shadowed(struct kvm *kvm, gfn_t gfn)
+static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	struct kvm_memory_slot *slot;
 	struct kvm_lpage_info *linfo;
+	gfn_t gfn;
 	int i;
 
+	gfn = sp->gfn;
 	slot = gfn_to_memslot(kvm, gfn);
 	for (i = PT_DIRECTORY_LEVEL; i <= PT_MAX_HUGEPAGE_LEVEL; ++i) {
 		linfo = lpage_info_slot(gfn, slot, i);
@@ -818,12 +820,14 @@ static void account_shadowed(struct kvm *kvm, gfn_t gfn)
 	kvm->arch.indirect_shadow_pages++;
 }
 
-static void unaccount_shadowed(struct kvm *kvm, gfn_t gfn)
+static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	struct kvm_memory_slot *slot;
 	struct kvm_lpage_info *linfo;
+	gfn_t gfn;
 	int i;
 
+	gfn = sp->gfn;
 	slot = gfn_to_memslot(kvm, gfn);
 	for (i = PT_DIRECTORY_LEVEL; i <= PT_MAX_HUGEPAGE_LEVEL; ++i) {
 		linfo = lpage_info_slot(gfn, slot, i);
@@ -2131,7 +2135,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 		if (level > PT_PAGE_TABLE_LEVEL && need_sync)
 			kvm_sync_pages(vcpu, gfn);
 
-		account_shadowed(vcpu->kvm, gfn);
+		account_shadowed(vcpu->kvm, sp);
 	}
 	sp->mmu_valid_gen = vcpu->kvm->arch.mmu_valid_gen;
 	init_shadow_page_table(sp);
@@ -2312,7 +2316,7 @@ static int kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp,
 	kvm_mmu_unlink_parents(kvm, sp);
 
 	if (!sp->role.invalid && !sp->role.direct)
-		unaccount_shadowed(kvm, sp->gfn);
+		unaccount_shadowed(kvm, sp);
 
 	if (sp->unsync)
 		kvm_unlink_unsync_page(kvm, sp);