From patchwork Fri May  8 11:20:28 2015
X-Patchwork-Submitter: Paolo Bonzini <pbonzini@redhat.com>
X-Patchwork-Id: 6364481
From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: rkrcmar@redhat.com, bsd@redhat.com
Subject: [PATCH 06/12] KVM: x86: API changes for SMM support
Date: Fri,  8 May 2015 13:20:28 +0200
Message-Id: <1431084034-8425-7-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1431084034-8425-1-git-send-email-pbonzini@redhat.com>
References: <1431084034-8425-1-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.8.3.1
X-Mailing-List: kvm@vger.kernel.org

This patch includes changes to the external API for SMM support.  All
the changes are predicated on the availability of a new capability,
KVM_CAP_X86_SMM, which is added at the end of the patch series.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
	RFC->v1: add smi.pending and smi.smm_inside_nmi fields, reduce
	padding in struct kvm_vcpu_events; remove memset of events struct,
	instead zero smi.pad.  KVM_CAP_X86_SMM frozen at 117.
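	A usage note (illustrative only, not part of the patch): userspace
	is expected to probe the new capability before relying on any of the
	SMM interfaces.  A minimal sketch, assuming <linux/kvm.h> headers
	new enough to carry the KVM_CAP_X86_SMM define:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	int main(void)
	{
		int kvm = open("/dev/kvm", O_RDWR);
		if (kvm < 0) {
			perror("open /dev/kvm");
			return 1;
		}
		/* KVM_CHECK_EXTENSION returns > 0 when the capability is present. */
		int r = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_X86_SMM);
		printf("KVM_CAP_X86_SMM: %ssupported\n", r > 0 ? "" : "not ");
		return 0;
	}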
---
 Documentation/virtual/kvm/api.txt | 40 +++++++++++++++++++++++++++++++++------
 arch/x86/include/asm/kvm_host.h   |  3 +++
 arch/x86/include/uapi/asm/kvm.h   | 11 ++++++++++-
 arch/x86/kvm/kvm_cache_regs.h     |  5 +++++
 arch/x86/kvm/x86.c                | 34 +++++++++++++++++++++++++++++++--
 include/uapi/linux/kvm.h          |  5 ++++-
 6 files changed, 88 insertions(+), 10 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 695544420ff2..51523b70b6b2 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -820,11 +820,21 @@ struct kvm_vcpu_events {
 	} nmi;
 	__u32 sipi_vector;
 	__u32 flags;
+	struct {
+		__u8 smm;
+		__u8 pending;
+		__u8 smm_inside_nmi;
+		__u8 pad;
+	} smi;
 };
 
-KVM_VCPUEVENT_VALID_SHADOW may be set in the flags field to signal that
-interrupt.shadow contains a valid state. Otherwise, this field is undefined.
+Only two fields are defined in the flags field:
+
+- KVM_VCPUEVENT_VALID_SHADOW may be set in the flags field to signal that
+  interrupt.shadow contains a valid state.
+- KVM_VCPUEVENT_VALID_SMM may be set in the flags field to signal that
+  smi contains a valid state.
 
 
 4.32 KVM_SET_VCPU_EVENTS
 
@@ -841,17 +851,20 @@ vcpu.
 
 See KVM_GET_VCPU_EVENTS for the data structure.
 
 Fields that may be modified asynchronously by running VCPUs can be excluded
-from the update. These fields are nmi.pending and sipi_vector. Keep the
-corresponding bits in the flags field cleared to suppress overwriting the
-current in-kernel state. The bits are:
+from the update. These fields are nmi.pending, sipi_vector, smi.smm,
+smi.pending. Keep the corresponding bits in the flags field cleared to
+suppress overwriting the current in-kernel state. The bits are:
 
 KVM_VCPUEVENT_VALID_NMI_PENDING - transfer nmi.pending to the kernel
 KVM_VCPUEVENT_VALID_SIPI_VECTOR - transfer sipi_vector
+KVM_VCPUEVENT_VALID_SMM - transfer the smi sub-struct.
 
 If KVM_CAP_INTR_SHADOW is available, KVM_VCPUEVENT_VALID_SHADOW can be set in
 the flags field to signal that interrupt.shadow contains a valid state and
 shall be written into the VCPU.
 
+KVM_VCPUEVENT_VALID_SMM can only be set if KVM_CAP_X86_SMM is available.
+
 
 4.33 KVM_GET_DEBUGREGS
@@ -2979,6 +2992,16 @@ len must be a multiple of sizeof(struct kvm_s390_irq). It must be > 0 and it
 must not exceed (max_vcpus + 32) * sizeof(struct kvm_s390_irq), which is the
 maximum number of possibly pending cpu-local interrupts.
 
+4.90 KVM_SMI
+
+Capability: KVM_CAP_X86_SMM
+Architectures: x86
+Type: vcpu ioctl
+Parameters: none
+Returns: 0 on success, -1 on error
+
+Queues an SMI on the thread's vcpu.
+
 5. The kvm_run structure
 ------------------------
 
@@ -3014,7 +3037,12 @@ an interrupt can be injected now with KVM_INTERRUPT.
 The value of the current interrupt flag. Only valid if in-kernel
 local APIC is not used.
 
-	__u8 padding2[2];
+	__u16 flags;
+
+More architecture-specific flags detailing state of the VCPU that may
+affect the device's behavior. The only currently defined flag is
+KVM_RUN_X86_SMM, which is valid on x86 machines and is set if the
+VCPU is in system management mode.
 	/* in (pre_kvm_run), out (post_kvm_run) */
 	__u64 cr8;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6487ffc95b0a..a1bef695dc99 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -469,6 +469,7 @@ struct kvm_vcpu_arch {
 	atomic_t nmi_queued;  /* unprocessed asynchronous NMIs */
 	unsigned nmi_pending; /* NMI queued after currently running handler */
 	bool nmi_injected;    /* Trying to inject an NMI this entry */
+	bool smi_pending;     /* SMI queued after currently running handler */
 
 	struct mtrr_state_type mtrr_state;
 	u64 pat;
@@ -1112,6 +1113,8 @@ enum {
 #define HF_NMI_MASK		(1 << 3)
 #define HF_IRET_MASK		(1 << 4)
 #define HF_GUEST_MASK		(1 << 5) /* VCPU is in guest-mode */
+#define HF_SMM_MASK		(1 << 6)
+#define HF_SMM_INSIDE_NMI_MASK	(1 << 7)
 
 /*
  * Hardware virtualization extension instructions may fault if a
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 2fec75e4b1e1..30100a3c1bed 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -106,6 +106,8 @@ struct kvm_ioapic_state {
 #define KVM_IRQCHIP_IOAPIC       2
 #define KVM_NR_IRQCHIPS          3
 
+#define KVM_RUN_X86_SMM		 (1 << 0)
+
 /* for KVM_GET_REGS and KVM_SET_REGS */
 struct kvm_regs {
 	/* out (KVM_GET_REGS) / in (KVM_SET_REGS) */
@@ -281,6 +283,7 @@ struct kvm_reinject_control {
 #define KVM_VCPUEVENT_VALID_NMI_PENDING	0x00000001
 #define KVM_VCPUEVENT_VALID_SIPI_VECTOR	0x00000002
 #define KVM_VCPUEVENT_VALID_SHADOW	0x00000004
+#define KVM_VCPUEVENT_VALID_SMM		0x00000008
 
 /* Interrupt shadow states */
 #define KVM_X86_SHADOW_INT_MOV_SS	0x01
@@ -309,7 +312,13 @@ struct kvm_vcpu_events {
 	} nmi;
 	__u32 sipi_vector;
 	__u32 flags;
-	__u32 reserved[10];
+	struct {
+		__u8 smm;
+		__u8 pending;
+		__u8 smm_inside_nmi;
+		__u8 pad;
+	} smi;
+	__u32 reserved[9];
 };
 
 /* for KVM_GET/SET_DEBUGREGS */
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 544076c4f44b..e1e89ee4af75 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -99,4 +99,9 @@ static inline bool is_guest_mode(struct kvm_vcpu *vcpu)
 	return vcpu->arch.hflags & HF_GUEST_MASK;
 }
 
+static inline bool is_smm(struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.hflags & HF_SMM_MASK;
+}
+
 #endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7d311a0de84c..8015c67a6d07 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3037,6 +3037,11 @@ static int kvm_vcpu_ioctl_nmi(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+static int kvm_vcpu_ioctl_smi(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+
 static int vcpu_ioctl_tpr_access_reporting(struct kvm_vcpu *vcpu,
 					   struct kvm_tpr_access_ctl *tac)
 {
@@ -3142,8 +3147,15 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
 
 	events->sipi_vector = 0; /* never valid when reporting to user space */
 
+	events->smi.smm = is_smm(vcpu);
+	events->smi.pending = vcpu->arch.smi_pending;
+	events->smi.smm_inside_nmi =
+		!!(vcpu->arch.hflags & HF_SMM_INSIDE_NMI_MASK);
+	events->smi.pad = 0;
+
 	events->flags = (KVM_VCPUEVENT_VALID_NMI_PENDING
-			 | KVM_VCPUEVENT_VALID_SHADOW);
+			 | KVM_VCPUEVENT_VALID_SHADOW
+			 | KVM_VCPUEVENT_VALID_SMM);
 	memset(&events->reserved, 0, sizeof(events->reserved));
 }
 
@@ -3152,7 +3164,8 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 {
 	if (events->flags & ~(KVM_VCPUEVENT_VALID_NMI_PENDING
 			      | KVM_VCPUEVENT_VALID_SIPI_VECTOR
-			      | KVM_VCPUEVENT_VALID_SHADOW))
+			      | KVM_VCPUEVENT_VALID_SHADOW
+			      | KVM_VCPUEVENT_VALID_SMM))
 		return -EINVAL;
 
 	process_nmi(vcpu);
@@ -3177,6 +3190,18 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 	    kvm_vcpu_has_lapic(vcpu))
 		vcpu->arch.apic->sipi_vector = events->sipi_vector;
 
+	if (events->flags & KVM_VCPUEVENT_VALID_SMM) {
+		if (events->smi.smm)
+			vcpu->arch.hflags |= HF_SMM_MASK;
+		else
+			vcpu->arch.hflags &= ~HF_SMM_MASK;
+		vcpu->arch.smi_pending = events->smi.pending;
+		if (events->smi.smm_inside_nmi)
+			vcpu->arch.hflags |= HF_SMM_INSIDE_NMI_MASK;
+		else
+			vcpu->arch.hflags &= ~HF_SMM_INSIDE_NMI_MASK;
+	}
+
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 
 	return 0;
@@ -3436,6 +3461,10 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		r = kvm_vcpu_ioctl_nmi(vcpu);
 		break;
 	}
+	case KVM_SMI: {
+		r = kvm_vcpu_ioctl_smi(vcpu);
+		break;
+	}
 	case KVM_SET_CPUID: {
 		struct kvm_cpuid __user *cpuid_arg = argp;
 		struct kvm_cpuid cpuid;
@@ -6118,6 +6147,7 @@ static void post_kvm_run_save(struct kvm_vcpu *vcpu)
 	struct kvm_run *kvm_run = vcpu->run;
 
 	kvm_run->if_flag = (kvm_get_rflags(vcpu) & X86_EFLAGS_IF) != 0;
+	kvm_run->flags = is_smm(vcpu) ? KVM_RUN_X86_SMM : 0;
 	kvm_run->cr8 = kvm_get_cr8(vcpu);
 	kvm_run->apic_base = kvm_get_apic_base(vcpu);
 	if (irqchip_in_kernel(vcpu->kvm))
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 75bd9f7fd846..eace8babd227 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -202,7 +202,7 @@ struct kvm_run {
 	__u32 exit_reason;
 	__u8 ready_for_interrupt_injection;
 	__u8 if_flag;
-	__u8 padding2[2];
+	__u16 flags;
 
 	/* in (pre_kvm_run), out (post_kvm_run) */
 	__u64 cr8;
@@ -815,6 +815,7 @@ struct kvm_ppc_smmu_info {
 #define KVM_CAP_S390_IRQ_STATE 114
 #define KVM_CAP_PPC_HWRNG 115
 #define KVM_CAP_DISABLE_QUIRKS 116
+#define KVM_CAP_X86_SMM 117
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
@@ -1200,6 +1201,8 @@ struct kvm_s390_ucas_mapping {
 /* Available with KVM_CAP_S390_IRQ_STATE */
 #define KVM_S390_SET_IRQ_STATE	_IOW(KVMIO, 0xb5, struct kvm_s390_irq_state)
 #define KVM_S390_GET_IRQ_STATE	_IOW(KVMIO, 0xb6, struct kvm_s390_irq_state)
+/* Available with KVM_CAP_X86_SMM */
+#define KVM_SMI			_IO(KVMIO, 0xb7)
 
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU	(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3		(1 << 1)
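
	To show how the new pieces fit together, a userspace sketch (not
	code from this series): the vcpu fd and the mmap'ed kvm_run area
	are assumed to come from the usual KVM_CREATE_VM/KVM_CREATE_VCPU
	setup, and the smi_roundtrip() helper name is made up for
	illustration.

	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int smi_roundtrip(int vcpu_fd, struct kvm_run *run)
	{
		struct kvm_vcpu_events events;

		/* Queue an SMI on this vcpu; the ioctl takes no argument. */
		if (ioctl(vcpu_fd, KVM_SMI) < 0) {
			perror("KVM_SMI");
			return -1;
		}

		/* ... a KVM_RUN iteration would go here so the SMI is delivered ... */

		/* After KVM_RUN, run->flags reports whether the vcpu is in SMM. */
		printf("in SMM: %d\n", !!(run->flags & KVM_RUN_X86_SMM));

		/* The same state is visible through the new smi sub-struct. */
		if (ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, &events) < 0) {
			perror("KVM_GET_VCPU_EVENTS");
			return -1;
		}
		printf("smi: smm=%d pending=%d smm_inside_nmi=%d\n",
		       events.smi.smm, events.smi.pending,
		       events.smi.smm_inside_nmi);
		return 0;
	}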