From patchwork Mon Oct 15 03:10:18 2012
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 1592031
From: Christoffer Dall <c.dall@virtualopensystems.com>
To: kvm@vger.kernel.org
Cc: kvmarm@lists.cs.columbia.edu, Christoffer Dall <c.dall@virtualopensystems.com>
Subject: [PATCH] KVM: Take kvm instead of vcpu to mmu_notifier_retry
Date: Sun, 14 Oct 2012 23:10:18 -0400
Message-Id: <1350270618-9712-1-git-send-email-c.dall@virtualopensystems.com>

The mmu_notifier_retry() function is not specific to any vcpu (and never will be), so take only a struct kvm pointer as a parameter. The motivation is the ARM mmu code, which needs to call this from a path where the vcpu pointer is no longer available.
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
 arch/powerpc/kvm/book3s_64_mmu_hv.c | 2 +-
 arch/powerpc/kvm/book3s_hv_rm_mmu.c | 2 +-
 arch/x86/kvm/mmu.c                  | 4 ++--
 arch/x86/kvm/paging_tmpl.h          | 2 +-
 include/linux/kvm_host.h            | 6 +++---
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index d95d113..c260282 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -710,7 +710,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 
 	/* Check if we might have been invalidated; let the guest retry if so */
 	ret = RESUME_GUEST;
-	if (mmu_notifier_retry(vcpu, mmu_seq)) {
+	if (mmu_notifier_retry(vcpu->kvm, mmu_seq)) {
 		unlock_rmap(rmap);
 		goto out_unlock;
 	}
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index fb0e821..ed64002 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -297,7 +297,7 @@ long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
 		lock_rmap(rmap);
 		/* Check for pending invalidations under the rmap chain lock */
 		if (kvm->arch.using_mmu_notifiers &&
-		    mmu_notifier_retry(vcpu, mmu_seq)) {
+		    mmu_notifier_retry(vcpu->kvm, mmu_seq)) {
 			/* inval in progress, write a non-present HPTE */
 			pteh |= HPTE_V_ABSENT;
 			pteh &= ~HPTE_V_VALID;
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d289fee..552ef95 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2863,7 +2863,7 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
 		return r;
 
 	spin_lock(&vcpu->kvm->mmu_lock);
-	if (mmu_notifier_retry(vcpu, mmu_seq))
+	if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
 		goto out_unlock;
 	kvm_mmu_free_some_pages(vcpu);
 	if (likely(!force_pt_level))
@@ -3332,7 +3332,7 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 		return r;
 
 	spin_lock(&vcpu->kvm->mmu_lock);
-	if (mmu_notifier_retry(vcpu, mmu_seq))
+	if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
 		goto out_unlock;
 	kvm_mmu_free_some_pages(vcpu);
 	if (likely(!force_pt_level))
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 714e2c0..c6ab2ec 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -594,7 +594,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
 		return r;
 
 	spin_lock(&vcpu->kvm->mmu_lock);
-	if (mmu_notifier_retry(vcpu, mmu_seq))
+	if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
 		goto out_unlock;
 
 	kvm_mmu_audit(vcpu, AUDIT_PRE_PAGE_FAULT);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 80bfc88..df3721c 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -838,9 +838,9 @@ extern struct kvm_stats_debugfs_item debugfs_entries[];
 extern struct dentry *kvm_debugfs_dir;
 
 #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
-static inline int mmu_notifier_retry(struct kvm_vcpu *vcpu, unsigned long mmu_seq)
+static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
 {
-	if (unlikely(vcpu->kvm->mmu_notifier_count))
+	if (unlikely(kvm->mmu_notifier_count))
 		return 1;
 	/*
 	 * Ensure the read of mmu_notifier_count happens before the read
@@ -853,7 +853,7 @@ static inline int mmu_notifier_retry(struct kvm_vcpu *vcpu, unsigned long mmu_se
 	 * can't rely on kvm->mmu_lock to keep things ordered.
 	 */
 	smp_rmb();
-	if (vcpu->kvm->mmu_notifier_seq != mmu_seq)
+	if (kvm->mmu_notifier_seq != mmu_seq)
 		return 1;
 	return 0;
 }
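
For context, here is a minimal sketch of the caller pattern the new signature enables: a fault path that only carries a struct kvm, with no vcpu at hand. The function name and the elided lookup/mapping steps are hypothetical illustration, not part of this patch; only mmu_notifier_retry(), kvm->mmu_notifier_seq, kvm->mmu_lock, and smp_rmb() come from the code above.

	/*
	 * Hypothetical example only -- shows the usual sample/recheck
	 * protocol from a path that has just a struct kvm pointer.
	 */
	static int example_stage2_fault(struct kvm *kvm, gpa_t gpa)
	{
		unsigned long mmu_seq;
		int ret = 0;

		/* Sample the notifier sequence before the host page lookup. */
		mmu_seq = kvm->mmu_notifier_seq;
		smp_rmb();

		/* ... translate gpa and pin the host page (elided) ... */

		spin_lock(&kvm->mmu_lock);
		if (mmu_notifier_retry(kvm, mmu_seq)) {
			/* An invalidation raced with us; caller retries. */
			ret = -EAGAIN;
			goto out_unlock;
		}
		/* ... install the stage-2 mapping under mmu_lock (elided) ... */
	out_unlock:
		spin_unlock(&kvm->mmu_lock);
		return ret;
	}

This mirrors the x86 callers in the diff: sample mmu_notifier_seq before the (possibly sleeping) page lookup, then recheck under mmu_lock before committing the mapping, which is exactly why only the kvm pointer is needed.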