From patchwork Wed Jan 30 14:45:02 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gleb Natapov
X-Patchwork-Id: 2067951
From: Gleb Natapov
To: kvm@vger.kernel.org
Cc: mtosatti@redhat.com, xiaoguangrong@linux.vnet.ibm.com,
	yoshikawa_takuya_b1@lab.ntt.co.jp
Subject: [PATCH 3/6] KVM: MMU: set base_role.nxe during mmu initialization.
Date: Wed, 30 Jan 2013 16:45:02 +0200
Message-Id: <1359557105-30821-4-git-send-email-gleb@redhat.com>
In-Reply-To: <1359557105-30821-1-git-send-email-gleb@redhat.com>
References: <1359557105-30821-1-git-send-email-gleb@redhat.com>
Sender: kvm-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

Move base_role.nxe initialization to where all other roles are
initialized.

Signed-off-by: Gleb Natapov
---
 arch/x86/kvm/mmu.c | 1 +
 arch/x86/kvm/x86.c | 2 --
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 40737b3..8028ac6 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3687,6 +3687,7 @@ int kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context)
 	else
 		r = paging32_init_context(vcpu, context);
 
+	vcpu->arch.mmu.base_role.nxe = is_nx(vcpu);
 	vcpu->arch.mmu.base_role.cr4_pae = !!is_pae(vcpu);
 	vcpu->arch.mmu.base_role.cr0_wp = is_write_protection(vcpu);
 	vcpu->arch.mmu.base_role.smep_andnot_wp
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cf512e70..373e17a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -870,8 +870,6 @@ static int set_efer(struct kvm_vcpu *vcpu, u64 efer)
 
 	kvm_x86_ops->set_efer(vcpu, efer);
 
-	vcpu->arch.mmu.base_role.nxe = (efer & EFER_NX) && !tdp_enabled;
-
 	/* Update reserved bits */
 	if ((efer ^ old_efer) & EFER_NX)
 		kvm_mmu_reset_context(vcpu);
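
As a quick illustration of why the move is behaviour-preserving, below is a
minimal standalone C sketch (not KVM code: the struct and function names are
simplified stand-ins for kvm_mmu_page_role, kvm_vcpu, kvm_init_shadow_mmu()
and set_efer()). The idea it models: an EFER.NX change triggers an MMU
context reset, which re-runs the shadow-MMU init path, so deriving nxe there
from the guest's EFER yields the same base_role as setting it in set_efer().

/*
 * Standalone sketch, compilable with any C99 compiler. All types and
 * helpers below are simplified stand-ins, not the real KVM structures.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct mmu_role {               /* stand-in for kvm_mmu_page_role */
	unsigned nxe : 1;
	unsigned cr4_pae : 1;
	unsigned cr0_wp : 1;
};

struct vcpu {                   /* stand-in for the relevant vcpu state */
	unsigned long efer;
	bool pae, wp;
	struct mmu_role role;
};

#define EFER_NX (1UL << 11)

static bool is_nx(const struct vcpu *v) { return v->efer & EFER_NX; }

/* New scheme: every role bit is derived in one place at MMU init time. */
static void init_shadow_mmu(struct vcpu *v)
{
	v->role.nxe = is_nx(v);
	v->role.cr4_pae = v->pae;
	v->role.cr0_wp = v->wp;
}

/* set_efer() no longer touches the role; it only triggers a reset. */
static void set_efer(struct vcpu *v, unsigned long efer)
{
	unsigned long old_efer = v->efer;

	v->efer = efer;
	if ((efer ^ old_efer) & EFER_NX)
		init_shadow_mmu(v);     /* models kvm_mmu_reset_context() */
}

int main(void)
{
	struct vcpu v = { .pae = true, .wp = true };

	init_shadow_mmu(&v);
	set_efer(&v, EFER_NX);
	assert(v.role.nxe == 1);        /* role still tracks EFER.NX */
	printf("nxe=%u cr4_pae=%u cr0_wp=%u\n",
	       (unsigned)v.role.nxe, (unsigned)v.role.cr4_pae,
	       (unsigned)v.role.cr0_wp);
	return 0;
}

One detail the sketch glosses over: the removed assignment also carried a
"&& !tdp_enabled" qualifier. Since kvm_init_shadow_mmu() is only used on the
non-TDP (shadow paging) path, that qualifier appears to be implied by the new
location of the assignment.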