From patchwork Fri Mar 11 20:47:20 2016
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 8569011
From: David Matlack <dmatlack@google.com>
To: linux-kernel@vger.kernel.org, x86@kernel.org, kvm@vger.kernel.org
Cc: pbonzini@redhat.com, mingo@redhat.com, luto@kernel.org, hpa@zytor.com,
 digitaleric@google.com
Subject: [PATCH 1/1] KVM: don't allow irq_fpu_usable when the VCPU's XCR0 is loaded
Date: Fri, 11 Mar 2016 12:47:20 -0800
Message-Id: <1457729240-3846-2-git-send-email-dmatlack@google.com>
X-Mailer: git-send-email 2.7.0.rc3.207.g0ac5344
In-Reply-To: <1457729240-3846-1-git-send-email-dmatlack@google.com>
References: <1457729240-3846-1-git-send-email-dmatlack@google.com>
From: Eric Northup <digitaleric@google.com>

Add a percpu boolean tracking whether a KVM vCPU's guest XCR0 is
loaded on the host CPU. KVM will set and clear it as it loads/unloads
guest XCR0.

(Note that the rest of the guest FPU load/restore is safe, because
kvm_load_guest_fpu and kvm_put_guest_fpu call __kernel_fpu_begin()
and __kernel_fpu_end(), respectively.)

irq_fpu_usable() will then also check this percpu boolean.
---
 arch/x86/include/asm/i387.h |  3 +++
 arch/x86/kernel/i387.c      | 10 ++++++++--
 arch/x86/kvm/x86.c          |  4 ++++
 3 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
index ed8089d..ca2c173 100644
--- a/arch/x86/include/asm/i387.h
+++ b/arch/x86/include/asm/i387.h
@@ -14,6 +14,7 @@
 
 #include
 #include
+#include
 
 struct pt_regs;
 struct user_i387_struct;
@@ -25,6 +26,8 @@ extern void math_state_restore(void);
 
 extern bool irq_fpu_usable(void);
 
+DECLARE_PER_CPU(bool, kvm_xcr0_loaded);
+
 /*
  * Careful: __kernel_fpu_begin/end() must be called with preempt disabled
  * and they don't touch the preempt state on their own.
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index b627746..9015828 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -19,6 +19,9 @@
 #include
 #include
 
+DEFINE_PER_CPU(bool, kvm_xcr0_loaded);
+EXPORT_PER_CPU_SYMBOL(kvm_xcr0_loaded);
+
 /*
  * Were we in an interrupt that interrupted kernel mode?
  *
@@ -33,8 +36,11 @@
  */
 static inline bool interrupted_kernel_fpu_idle(void)
 {
-	if (use_eager_fpu())
-		return __thread_has_fpu(current);
+	if (use_eager_fpu()) {
+		/* Preempt already disabled, safe to read percpu. */
+		return __thread_has_fpu(current) &&
+			!__this_cpu_read(kvm_xcr0_loaded);
+	}
 
 	return !__thread_has_fpu(current) &&
 		(read_cr0() & X86_CR0_TS);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d21bce5..f0ba7a1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -557,8 +557,10 @@ EXPORT_SYMBOL_GPL(kvm_lmsw);
 
 static void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
 {
+	BUG_ON(this_cpu_read(kvm_xcr0_loaded) != vcpu->guest_xcr0_loaded);
 	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
 			!vcpu->guest_xcr0_loaded) {
+		this_cpu_write(kvm_xcr0_loaded, 1);
 		/* kvm_set_xcr() also depends on this */
 		xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
 		vcpu->guest_xcr0_loaded = 1;
@@ -571,7 +573,9 @@ static void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
 		if (vcpu->arch.xcr0 != host_xcr0)
 			xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
 		vcpu->guest_xcr0_loaded = 0;
+		this_cpu_write(kvm_xcr0_loaded, 0);
 	}
+	BUG_ON(this_cpu_read(kvm_xcr0_loaded) != vcpu->guest_xcr0_loaded);
 }
 
 int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
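
For context, a minimal sketch (not part of the patch) of the usage
pattern the irq_fpu_usable() change protects. The handler and its SIMD
work are hypothetical; irq_fpu_usable(), kernel_fpu_begin() and
kernel_fpu_end() are the existing kernel API for opportunistic FPU use
from interrupt context in this kernel version:

	#include <asm/i387.h>	/* irq_fpu_usable(), kernel_fpu_begin/end() */

	/* Hypothetical driver code that may run in interrupt context. */
	static void example_simd_in_irq(void)
	{
		if (!irq_fpu_usable()) {
			/*
			 * With this patch, we also land here while a
			 * guest's XCR0 is loaded on this CPU.
			 */
			return;	/* caller falls back to a non-SIMD path */
		}

		kernel_fpu_begin();
		/* ... SSE/AVX computation ... */
		kernel_fpu_end();
	}

Without the new percpu check, an interrupt arriving between
kvm_load_guest_xcr0() and kvm_put_guest_xcr0() could see
irq_fpu_usable() return true and run kernel_fpu_begin()'s XSAVE with
the guest's XCR0 in effect, saving or restoring the wrong set of
extended states.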