From patchwork Fri May 8 11:20:32 2015
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 6364351
From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: rkrcmar@redhat.com, bsd@redhat.com
Subject: [PATCH 10/12] KVM: x86: add SMM to the MMU role
Date: Fri, 8 May 2015 13:20:32 +0200
Message-Id: <1431084034-8425-11-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1431084034-8425-1-git-send-email-pbonzini@redhat.com>
References: <1431084034-8425-1-git-send-email-pbonzini@redhat.com>

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 11 +----------
 arch/x86/kvm/mmu.c              |  5 ++++-
 arch/x86/kvm/x86.c              |  1 +
 3 files changed, 6 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 52b8716397d5..3caefa4be90b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -184,16 +184,6 @@ struct kvm_mmu_memory_cache {
 	void *objects[KVM_NR_MEM_OBJS];
 };
 
-/*
- * kvm_mmu_page_role, below, is defined as:
- *
- *   bits 0:3 - total guest paging levels (2-4, or zero for real mode)
- *   bits 4:7 - page table level for this shadow (1-4)
- *   bits 8:9 - page table quadrant for 2-level guests
- *   bit   16 - direct mapping of virtual to physical mapping at gfn
- *              used for real mode and two-dimensional paging
- *   bits 17:19 - common access permissions for all ptes in this shadow page
- */
 union kvm_mmu_page_role {
 	unsigned word;
 	struct {
@@ -207,6 +197,7 @@ union kvm_mmu_page_role {
 		unsigned nxe:1;
 		unsigned cr0_wp:1;
 		unsigned smep_andnot_wp:1;
+		unsigned smm:1;
 	};
 };
 
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 4694ad42aa8b..043680d90964 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3879,6 +3879,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	struct kvm_mmu *context = &vcpu->arch.mmu;
 
 	context->base_role.word = 0;
+	context->base_role.smm = is_smm(vcpu);
 	context->page_fault = tdp_page_fault;
 	context->sync_page = nonpaging_sync_page;
 	context->invlpg = nonpaging_invlpg;
@@ -3937,6 +3938,7 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
 	context->base_role.cr0_wp = is_write_protection(vcpu);
 	context->base_role.smep_andnot_wp
 		= smep && !is_write_protection(vcpu);
+	context->base_role.smm = is_smm(vcpu);
 }
 EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
 
@@ -4239,7 +4241,8 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	++vcpu->kvm->stat.mmu_pte_write;
 	kvm_mmu_audit(vcpu, AUDIT_PRE_PTE_WRITE);
 
-	mask.cr0_wp = mask.cr4_pae = mask.nxe = mask.smep_andnot_wp = 1;
+	mask.cr0_wp = mask.cr4_pae = mask.nxe = mask.smep_andnot_wp =
+		mask.smm = 1;
 	for_each_gfn_indirect_valid_sp(vcpu->kvm, sp, gfn) {
 		if (detect_write_misaligned(sp, gpa, bytes) ||
 		      detect_write_flooding(sp)) {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9c89aee475d3..ce36aca2276d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5419,6 +5419,7 @@ void kvm_set_hflags(struct kvm_vcpu *vcpu, unsigned emul_flags)
 	}
 
 	vcpu->arch.hflags = emul_flags;
+	kvm_mmu_reset_context(vcpu);
 }
 
 static int kvm_vcpu_check_hw_bp(unsigned long addr, u32 type, u32 dr7,
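
A note for readers following along: kvm_mmu_page_role is deliberately a
union of a bitfield struct with a single "word", so the shadow-page hash
lookup can decide role equality with one integer compare. That is why
adding the one-bit smm field is enough to keep SMM and non-SMM shadow
pages apart: a page created while in SMM simply hashes to a different
role. A minimal standalone sketch of the idea (illustrative only; the
toy_* names and the trimmed-down field set are made up here, not the
kernel's real definition):

	#include <stdio.h>

	/* Trimmed-down illustration of the union's shape; the kernel's
	 * struct has many more fields, but the principle is the same. */
	union toy_page_role {
		unsigned word;
		struct {
			unsigned level:4;
			unsigned cr4_pae:1;
			unsigned nxe:1;
			unsigned smm:1;		/* the bit this patch adds */
		};
	};

	int main(void)
	{
		union toy_page_role a = { .word = 0 }, b = { .word = 0 };

		a.level = b.level = 4;	/* identical paging mode... */
		b.smm = 1;		/* ...but b was created in SMM */

		/* One integer compare decides whether a cached shadow
		 * page can be reused for the current role. */
		printf("roles %s\n", a.word == b.word ? "match" : "differ");
		return 0;
	}

This is also why the kvm_mmu_pte_write() hunk sets mask.smm: the masked
role comparison there must treat SMM-ness as significant when picking
which cached pages correspond to a guest page table being written.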
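The kvm_set_hflags() hunk is what ties the new bit to SMM transitions:
is_smm() is derived from hflags, so any hflags change has to rebuild the
MMU context, or base_role.smm would go stale and the vCPU would keep
resolving faults through shadow pages built for the other world. A toy
model of that hazard, assuming nothing KVM-specific (again, every name
below is invented for illustration):

	#include <assert.h>
	#include <stdbool.h>

	/* Toy context: caches the role computed at init time. */
	struct toy_ctx { bool role_smm; };
	static bool toy_in_smm;			/* stands in for is_smm(vcpu) */

	static void toy_init_mmu(struct toy_ctx *c)	/* like init_kvm_tdp_mmu() */
	{
		c->role_smm = toy_in_smm;
	}

	static void toy_set_hflags(struct toy_ctx *c, bool smm)
	{
		toy_in_smm = smm;
		toy_init_mmu(c);	/* the kvm_mmu_reset_context() analogue;
					 * drop this call and the assertion
					 * in main() fails after entering SMM */
	}

	int main(void)
	{
		struct toy_ctx c;

		toy_init_mmu(&c);
		toy_set_hflags(&c, true);	/* enter SMM */
		assert(c.role_smm == toy_in_smm);
		return 0;
	}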