From patchwork Mon Oct 11 23:59:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551223 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 644C8C433F5 for ; Tue, 12 Oct 2021 00:11:18 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3F37B60F11 for ; Tue, 12 Oct 2021 00:11:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234174AbhJLANR (ORCPT ); Mon, 11 Oct 2021 20:13:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34118 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231586AbhJLANN (ORCPT ); Mon, 11 Oct 2021 20:13:13 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3DE22C0613E8; Mon, 11 Oct 2021 17:10:26 -0700 (PDT) Message-ID: <20211011223610.344149306@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633996799; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=K/J8DliA1KrydT+K75626ybfPJK1Kiie29CUG4R7hzc=; b=H7T3VpdSC2CYXckP5JLTRtsCTBQ+N7V8euUhZmAllSwWDrrRkLbNN0cvHV+859gA0ck4NL 6Lw0RjX/K2bz/AOiD6zn/RdBXvx6LVxR8D2noBrbXbMrKxG0E9Q4iFh9lYBI0uBq1Fsv8I ROVXlj8JTmXYCfkyDRogq+V1k49hXXXRXHAIjZOIkUzuOTLFo7FNiS6qLgLYxUfGO+RMW9 dyqbG9//XnIg37YyHdV4Jai0t2P8ceE/G6Pa8EQd7kfwARveFdRyd15QuMpzePwps5fMEf wioe2KcT5auY2b2w0239bADYfvvQKmeRaheSaeefh9YQJcPGmpY+jX8Ae7Rweg== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633996799; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=K/J8DliA1KrydT+K75626ybfPJK1Kiie29CUG4R7hzc=; b=f/6HQnddEEvZcHUnLpzgM8ZOd01FN8/UVG9Ii1lkr5sMEGVALPsVVzyjvm/v6RY3pj8Iml CIe4FvnVZNfwQUCQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 01/31] x86/fpu: Remove pointless argument from switch_fpu_finish() References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 01:59:59 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Unused since the FPU switching rework. Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 2 +- arch/x86/kernel/process_32.c | 3 +-- arch/x86/kernel/process_64.c | 3 +-- 3 files changed, 3 insertions(+), 5 deletions(-) --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -521,7 +521,7 @@ static inline void switch_fpu_prepare(st * Delay loading of the complete FPU state until the return to userland. * PKRU is handled separately. 
*/ -static inline void switch_fpu_finish(struct fpu *new_fpu) +static inline void switch_fpu_finish(void) { if (cpu_feature_enabled(X86_FEATURE_FPU)) set_thread_flag(TIF_NEED_FPU_LOAD); --- a/arch/x86/kernel/process_32.c +++ b/arch/x86/kernel/process_32.c @@ -160,7 +160,6 @@ EXPORT_SYMBOL_GPL(start_thread); struct thread_struct *prev = &prev_p->thread, *next = &next_p->thread; struct fpu *prev_fpu = &prev->fpu; - struct fpu *next_fpu = &next->fpu; int cpu = smp_processor_id(); /* never put a printk in __switch_to... printk() calls wake_up*() indirectly */ @@ -213,7 +212,7 @@ EXPORT_SYMBOL_GPL(start_thread); this_cpu_write(current_task, next_p); - switch_fpu_finish(next_fpu); + switch_fpu_finish(); /* Load the Intel cache allocation PQR MSR. */ resctrl_sched_in(); --- a/arch/x86/kernel/process_64.c +++ b/arch/x86/kernel/process_64.c @@ -559,7 +559,6 @@ void compat_start_thread(struct pt_regs struct thread_struct *prev = &prev_p->thread; struct thread_struct *next = &next_p->thread; struct fpu *prev_fpu = &prev->fpu; - struct fpu *next_fpu = &next->fpu; int cpu = smp_processor_id(); WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ENTRY) && @@ -620,7 +619,7 @@ void compat_start_thread(struct pt_regs this_cpu_write(current_task, next_p); this_cpu_write(cpu_current_top_of_stack, task_top_of_stack(next_p)); - switch_fpu_finish(next_fpu); + switch_fpu_finish(); /* Reload sp0. */ update_task_stack(next_p); From patchwork Tue Oct 12 00:00:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551211 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5BEF7C433F5 for ; Tue, 12 Oct 2021 00:07:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4444C60E54 for ; Tue, 12 Oct 2021 00:07:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235859AbhJLAJ0 (ORCPT ); Mon, 11 Oct 2021 20:09:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34026 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232494AbhJLAI2 (ORCPT ); Mon, 11 Oct 2021 20:08:28 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 41099C06161C; Mon, 11 Oct 2021 17:06:27 -0700 (PDT) Message-ID: <20211011223610.404292507@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997183; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=EFhq4BlDAnV4q8GVsDZZiqlvWQFfxiu85EHWBzdICcc=; b=VCNQ0YYqHaHhvR6GAnFWkdWVKikPaplqds9wAmKjS/2xCPbmC63V4rn8t76+jiSQ6l3eJy zKR2EvMVeCS38BTAJjoIsebbL5ABSJU2Vp0CWS1ISyr3fAyPC0GKrHJuoUEyIbF9O3BB0d 5IXV36pdUaqQn3t+0mLaXE9LnrP2UHDfwqJsjmJsFDujew/+tbx7hZDX40iGAUh+CGKJnG ysXI2IFv9v2MkWz9583q8Jt3QxXZt/I63S91rGNyBsnjmXdjTjskbpH2+MqpBztw9NqLiS o+L+Rtq8iNi3gycnPsS60e9++9pkB/xFXN0ep9LxHqHiUDsc7958oKPUgvh6fw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997183; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; 
bh=EFhq4BlDAnV4q8GVsDZZiqlvWQFfxiu85EHWBzdICcc=; b=CV8UjHd/t1bsVIIXwU4JXiPd5LH4dqDOMlNC0DxsNjXzgU5yx/iaLlGKXHXLDAJ2YQxGeL h5WoxoGIbEHGvSDg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 02/31] x86/fpu: Update stale comments References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:00 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org copy_fpstate_to_sigframe() does not have a slow path anymore. Neither does the !ia32 restore in __fpu_restore_sig(). Update the comments accordingly. Signed-off-by: Thomas Gleixner --- arch/x86/kernel/fpu/signal.c | 13 +++---------- 1 file changed, 3 insertions(+), 10 deletions(-) --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c @@ -155,10 +155,8 @@ static inline int copy_fpregs_to_sigfram * buf == buf_fx for 64-bit frames and 32-bit fsave frame. * buf != buf_fx for 32-bit frames with fxstate. * - * Try to save it directly to the user frame with disabled page fault handler. - * If this fails then do the slow path where the FPU state is first saved to - * task's fpu->state and then copy it to the user frame pointed to by the - * aligned pointer 'buf_fx'. + * Save it directly to the user frame with disabled page fault handler. If + * that faults, try to clear the frame which handles the page fault. * * If this is a 32-bit frame with fxstate, put a fsave header before * the aligned state at 'buf_fx'. @@ -334,12 +332,7 @@ static bool __fpu_restore_sig(void __use } if (likely(!ia32_fxstate)) { - /* - * Attempt to restore the FPU registers directly from user - * memory. For that to succeed, the user access cannot cause page - * faults. If it does, fall back to the slow path below, going - * through the kernel buffer with the enabled pagefault handler. - */ + /* Restore the FPU registers directly from user memory. 
*/ return restore_fpregs_from_user(buf_fx, user_xfeatures, fx_only, state_size); } From patchwork Tue Oct 12 00:00:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551181 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 94D1AC433F5 for ; Tue, 12 Oct 2021 00:06:41 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8199760F92 for ; Tue, 12 Oct 2021 00:06:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235779AbhJLAIk (ORCPT ); Mon, 11 Oct 2021 20:08:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34036 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234728AbhJLAI3 (ORCPT ); Mon, 11 Oct 2021 20:08:29 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E374DC061570; Mon, 11 Oct 2021 17:06:28 -0700 (PDT) Message-ID: <20211011223610.463067287@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997184; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=30YG7qzJGGvYFzVQYDpeQaqJ24jssBdyvBLYMB+8pqE=; b=tuFoHc3CvB604KyPYb91G6msQbCntsX7/nKZGfcSk3TbmtEcYMHNvkLl33cxBTca+SkZhU YCce9McsVBVGsVVaWxzUElKrLb60jdjigFG/L3bjszALXVF60Rv3Q2mvips6/+/Hk1zqb9 YJG+uv727y/wnlPrPjirS4itCZ22oDXHUIVPz4D/NLTklsARdOWRlekoyXQ573Nr+QUWmS qy7Lz8pYp/6wxYfRgEHAwMIvIto5GiDxJ1bLgDbQiu9eNoUBr7ZUSpzbr+S5DisQafZiRj 8kN5LnoarjkHaJxNqYvwm7IAfy91zYrVVGmi79LGGGIGa8KLQSvrMUhNR8dZRw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997184; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=30YG7qzJGGvYFzVQYDpeQaqJ24jssBdyvBLYMB+8pqE=; b=PGDHmx21mQhHnTYI4anE8NXeZGRymNWzg33VgmGjKiYZAanhOWnoEHJBnNgkoCYzmXq6Rx O3Eys41Wscfg07AQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 03/31] x86/pkru: Remove useless include References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:02 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org PKRU code does not need anything from fpu headers. Include cpufeature.h instead and fixup the resulting fallout in perf. This is a preparation for FPU changes in order to prevent recursive include hell. 
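Note (illustration only, not part of this patch): the PKRU helpers only need cpu_feature_enabled(), which is why <asm/cpufeature.h> is sufficient. A minimal sketch of such a helper, assuming the existing X86_FEATURE_OSPKE flag and the rdpkru() accessor from <asm/special_insns.h>; the function name is made up:

#include <asm/cpufeature.h>
#include <asm/special_insns.h>

static inline u32 pkru_read_or_default(void)
{
	/* Without OSPKE there is no PKRU register to read. */
	if (!cpu_feature_enabled(X86_FEATURE_OSPKE))
		return 0;
	return rdpkru();
}
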
Signed-off-by: Thomas Gleixner --- arch/x86/events/perf_event.h | 1 + arch/x86/include/asm/pkru.h | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) --- a/arch/x86/events/perf_event.h +++ b/arch/x86/events/perf_event.h @@ -14,6 +14,7 @@ #include +#include #include #include --- a/arch/x86/include/asm/pkru.h +++ b/arch/x86/include/asm/pkru.h @@ -2,7 +2,7 @@ #ifndef _ASM_X86_PKRU_H #define _ASM_X86_PKRU_H -#include +#include #define PKRU_AD_BIT 0x1 #define PKRU_WD_BIT 0x2 From patchwork Tue Oct 12 00:00:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551219 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F210CC433EF for ; Tue, 12 Oct 2021 00:07:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D9C4B60F43 for ; Tue, 12 Oct 2021 00:07:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236282AbhJLAJf (ORCPT ); Mon, 11 Oct 2021 20:09:35 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51396 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231886AbhJLAIZ (ORCPT ); Mon, 11 Oct 2021 20:08:25 -0400 Message-ID: <20211011223610.524176686@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997182; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=Az4teqBX37vif9hCrLNnZFZ28q4lgGFyyPMu6OSHNTs=; b=YYTQ0Nc1wQpJQxX5p8jGnIK1+jgqsu6Zc5pexORO+BSfUT7jwVS/6YL7tlAdItV67pOd91 lkV3BlabkL1+VWUoebgKGJTWXYlEgbnq+K0wH+gHQReQSq/kzdoramTTv8vYVwXCWTalQa 040xb4KYGYxkOHHnp3FXJ2NdB5g0EXc9KCbHoXXJ4wccz0yZzzC97nuZJQ6Td9cbi3JKG7 tTEPk2954FEGedATF3ut8LroesSwTYSzMtNUrdQZZYKwI+PggxP4eL4RHFJhazq+m8hEVQ flmexavRBeoBZYlFE4XtjkQlfSabSc+as/56NuCLU7S/IAvgSJA+Nn3skk79dA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997182; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=Az4teqBX37vif9hCrLNnZFZ28q4lgGFyyPMu6OSHNTs=; b=jTftb3uApnXxQgYi1tryhvrgcEUsZxXp1Yba96DLlfeLAUGcEI3FNupn/U0qGFq1RX/7QX 7REdTx33R9e6BeBw== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 04/31] x86/fpu: Restrict xsaves()/xrstors() to independent states References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:04 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org These interfaces are really only valid for features which are independently managed and not part of the task context state for various reasons. Tighten the checks and adjust the misleading comments. 
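Note (usage sketch, not part of this patch): with the tightened checks the only legitimate callers are the managers of independent states such as the architectural LBR component, which is never part of task->fpstate. Assuming XFEATURE_MASK_LBR is covered by xfeatures_mask_independent() and lbr_buf points to a zeroed, 64-byte aligned buffer, a caller would look roughly like:

/* Stash and later restore the LBR component around a context switch.
 * The buffer must start out zeroed, otherwise a subsequent XRSTORS
 * from it can #GP (see the comment above xsaves()). */
static void lbr_state_save(struct xregs_state *lbr_buf)
{
	xsaves(lbr_buf, XFEATURE_MASK_LBR);
}

static void lbr_state_restore(struct xregs_state *lbr_buf)
{
	xrstors(lbr_buf, XFEATURE_MASK_LBR);
}
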
Signed-off-by: Thomas Gleixner --- arch/x86/kernel/fpu/xstate.c | 14 ++++---------- 1 file changed, 4 insertions(+), 10 deletions(-) --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -1182,13 +1182,9 @@ static bool validate_xsaves_xrstors(u64 if (WARN_ON_FPU(!cpu_feature_enabled(X86_FEATURE_XSAVES))) return false; /* - * Validate that this is either a task->fpstate related component - * subset or an independent one. + * Validate that this is a independent compoment. */ - if (mask & xfeatures_mask_independent()) - xchk = ~xfeatures_mask_independent(); - else - xchk = ~xfeatures_mask_all; + xchk = ~xfeatures_mask_independent(); if (WARN_ON_ONCE(!mask || mask & xchk)) return false; @@ -1206,8 +1202,7 @@ static bool validate_xsaves_xrstors(u64 * buffer should be zeroed otherwise a consecutive XRSTORS from that buffer * can #GP. * - * The feature mask must either be a subset of the independent features or - * a subset of the task->fpstate related features. + * The feature mask must be a subset of the independent features */ void xsaves(struct xregs_state *xstate, u64 mask) { @@ -1231,8 +1226,7 @@ void xsaves(struct xregs_state *xstate, * Proper usage is to restore the state which was saved with * xsaves() into @xstate. * - * The feature mask must either be a subset of the independent features or - * a subset of the task->fpstate related features. + * The feature mask must be a subset of the independent features */ void xrstors(struct xregs_state *xstate, u64 mask) { From patchwork Tue Oct 12 00:00:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551209 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 66254C433F5 for ; Tue, 12 Oct 2021 00:07:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 50B4F60E54 for ; Tue, 12 Oct 2021 00:07:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236242AbhJLAJV (ORCPT ); Mon, 11 Oct 2021 20:09:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34044 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235003AbhJLAIa (ORCPT ); Mon, 11 Oct 2021 20:08:30 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3D342C06174E; Mon, 11 Oct 2021 17:06:29 -0700 (PDT) Message-ID: <20211011223610.584398929@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997186; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=wsqOL8FLiWK/ug1HoIyemEsvOTae9xkh1fWRP383y9k=; b=iUvazFzUe3cO23KDcHZEJ6UNxjGgZi1iJ1s57nBnLVwn5W6i9EcHPSwWHgUPeDv/pilnf2 CyEOYYXjnZNgrzYelC+/Say7I5/EUJ853lpLn+IisKPq+TlpJr/TeWJlIixwoaEHspDkZG lnlyXtFqyUcnFgqq4cyswzWzFXErOwGyLhlvOYsOmKzxIMAnlzNRGZZFh20fIaicjsraNF rLRZsah/FkzJ1aNZ5uTUb8DNPcdVax+bbY3xKO+hOZHE1BFEOMSmp4MhnjahE+3qD5t2mH FWTHN3We3NMofTJSWS7Hox9wTkqyDk/+RpknV7d8idye4VpXmZfOTTwcd01YnQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997186; 
h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=wsqOL8FLiWK/ug1HoIyemEsvOTae9xkh1fWRP383y9k=; b=ngF7uGFQgMACUCNdufzUDlV6mBfkkZdovwnhoDCB+UHYsWJnYk9GWOdHWy6v7MmnxlHdm0 hZ9qalp7hajZz7Cg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 05/31] x86/fpu: Cleanup the on_boot_cpu clutter References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:05 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Defensive programming is useful, but this on_boot_cpu debug is really silly. Signed-off-by: Thomas Gleixner --- arch/x86/kernel/fpu/init.c | 16 ---------------- arch/x86/kernel/fpu/xstate.c | 9 --------- 2 files changed, 25 deletions(-) --- a/arch/x86/kernel/fpu/init.c +++ b/arch/x86/kernel/fpu/init.c @@ -192,11 +192,6 @@ static void __init fpu__init_task_struct */ static void __init fpu__init_system_xstate_size_legacy(void) { - static int on_boot_cpu __initdata = 1; - - WARN_ON_FPU(!on_boot_cpu); - on_boot_cpu = 0; - /* * Note that xstate sizes might be overwritten later during * fpu__init_system_xstate(). @@ -216,15 +211,6 @@ static void __init fpu__init_system_xsta fpu_user_xstate_size = fpu_kernel_xstate_size; } -/* Legacy code to initialize eager fpu mode. */ -static void __init fpu__init_system_ctx_switch(void) -{ - static bool on_boot_cpu __initdata = 1; - - WARN_ON_FPU(!on_boot_cpu); - on_boot_cpu = 0; -} - /* * Called on the boot CPU once per system bootup, to set up the initial * FPU state that is later cloned into all processes: @@ -243,6 +229,4 @@ void __init fpu__init_system(struct cpui fpu__init_system_xstate_size_legacy(); fpu__init_system_xstate(); fpu__init_task_struct_size(); - - fpu__init_system_ctx_switch(); } --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -379,15 +379,10 @@ static void __init print_xstate_offset_s */ static void __init setup_init_fpu_buf(void) { - static int on_boot_cpu __initdata = 1; - BUILD_BUG_ON((XFEATURE_MASK_USER_SUPPORTED | XFEATURE_MASK_SUPERVISOR_SUPPORTED) != XFEATURES_INIT_FPSTATE_HANDLED); - WARN_ON_FPU(!on_boot_cpu); - on_boot_cpu = 0; - if (!boot_cpu_has(X86_FEATURE_XSAVE)) return; @@ -721,14 +716,10 @@ static void fpu__init_disable_system_xst void __init fpu__init_system_xstate(void) { unsigned int eax, ebx, ecx, edx; - static int on_boot_cpu __initdata = 1; u64 xfeatures; int err; int i; - WARN_ON_FPU(!on_boot_cpu); - on_boot_cpu = 0; - if (!boot_cpu_has(X86_FEATURE_FPU)) { pr_info("x86/fpu: No FPU detected\n"); return; From patchwork Tue Oct 12 00:00:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551207 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D2EDAC433EF for ; Tue, 12 Oct 2021 00:07:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C0C186008E for ; Tue, 12 Oct 2021 00:07:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235780AbhJLAJT (ORCPT ); Mon, 11 Oct 2021 20:09:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34018 
"EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235007AbhJLAIa (ORCPT ); Mon, 11 Oct 2021 20:08:30 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 70E7BC061762; Mon, 11 Oct 2021 17:06:29 -0700 (PDT) Message-ID: <20211011223610.645433623@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997187; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=B1tkVTwLBzXQ1kd/oYzbsyoZ3TVakIxSQFbsR2tD+XI=; b=DRU3FrOhoijfg/uXLgY3S+4aT1sYBLS1OesJ3aT9QARSczsVuAId/+8dyToLgHiaO8EyBn gxWXy34aRoEAKqmxSX3c+KcsdzH6KiwwlUquCxENgoFDHGIcYK9Cp/omnuan1IB+Z3qicR imuItG0/vaVI6Zl/1ZhQSBYOuCkS5lTV+/ZC5aTePbjHyRNEWtmTNed8jW9l8TmmLbllhg 4IESelGZrw0OU5I5beHH8bpQzfDe5rzBhatyUf6u7sNxvYxQRIUgCs355OKD16tjEgPuyg LxYV7TDHdT3JyV45a+wsg2yLSobZqsX31d8Hjfm0vf26u2DiTUptgt+iTNZZxw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997187; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=B1tkVTwLBzXQ1kd/oYzbsyoZ3TVakIxSQFbsR2tD+XI=; b=Snd0N896VA7Xdp4/T/V0MNqdfAZnstLxbiKrNKP77s0vBGysIj1skuG+NldaW/ZpBtYfZN bli3cooIXxz6iOAg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 06/31] x86/fpu: Remove pointless memset in fpu_clone() References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:07 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Zeroing the forked task's FPU register buffer to avoid leaking init optimized stale data into the clone is a pointless exercise for the case where the current task has TIF_NEED_FPU_LOAD set. In that case the FPU register state is copied from current's FPU register buffer which can contain stale init optimized data as well. The alledged information leak is non-existant because this stale init optimized data is nowhere used and cannot leak anywhere. Signed-off-by: Thomas Gleixner --- arch/x86/kernel/fpu/core.c | 6 ------ 1 file changed, 6 deletions(-) --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -260,12 +260,6 @@ int fpu_clone(struct task_struct *dst) return 0; /* - * Don't let 'init optimized' areas of the XSAVE area - * leak into the child task: - */ - memset(&dst_fpu->state.xsave, 0, fpu_kernel_xstate_size); - - /* * If the FPU registers are not owned by current just memcpy() the * state. Otherwise save the FPU registers directly into the * child's FPU context, without any memory-to-memory copying. 
From patchwork Tue Oct 12 00:00:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551213 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 78040C433FE for ; Tue, 12 Oct 2021 00:07:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5F9216008E for ; Tue, 12 Oct 2021 00:07:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235352AbhJLAJb (ORCPT ); Mon, 11 Oct 2021 20:09:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34024 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232381AbhJLAI2 (ORCPT ); Mon, 11 Oct 2021 20:08:28 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3FF08C061570; Mon, 11 Oct 2021 17:06:27 -0700 (PDT) Message-ID: <20211011223610.705049376@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997183; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=i9xeaFxN1PyesYRvXe7tDSiQSxGVh/JmtQE1TkpD0JE=; b=j45F7Vitxq3lCuzU93riBz9DtA+FAoeuCa+gNjcBtjf9nHP32il5sYITMxGNlplsO4lx6X YDM7e2nI5HgZJUqCLnaO4AP0Z6rzlgroOLRf7p8iKELaCbobMAEXb5FZiznX1YjG+BqVcc QXfOC7yYXFXnMSvL7EOKugdnCG9l05698LKE43FMnEhZn+EFm6PfBfmccA5NGaWdpWnXDU Zmpng034tsjINxRoFbYlpd0HJcF5c7N0uQ0vmUiXC123jgo6ZyZaLDft97/MtYoECZkk5C zvHhDERn2ON8sXuhpF0qLWs6Az3B8CV5eop78u81QmbGPHeR/oSQvZKP4Q8Y8A== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997183; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=i9xeaFxN1PyesYRvXe7tDSiQSxGVh/JmtQE1TkpD0JE=; b=++J6xEi63QztqsISM8+l0+isaMoD1sO50WbgDxdtWfZuZ+/FRtbX/StaAtSdyO7YapgWNA unZa1HP6lRRvH5DA== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 07/31] x86/process: Clone FPU in copy_thread() References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:08 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org There is no reason to clone FPU in arch_dup_task_struct(). Quite the contrary it prevents optimizations. Move it to copy_thread() Signed-off-by: Thomas Gleixner --- arch/x86/kernel/process.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -87,7 +87,7 @@ int arch_dup_task_struct(struct task_str #ifdef CONFIG_VM86 dst->thread.vm86 = NULL; #endif - return fpu_clone(dst); + return 0; } /* @@ -154,6 +154,8 @@ int copy_thread(unsigned long clone_flag frame->flags = X86_EFLAGS_FIXED; #endif + fpu_clone(p); + /* Kernel thread ? 
*/ if (unlikely(p->flags & PF_KTHREAD)) { p->thread.pkru = pkru_get_init_value(); From patchwork Tue Oct 12 00:00:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551189 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DB82DC433EF for ; Tue, 12 Oct 2021 00:06:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C437F61027 for ; Tue, 12 Oct 2021 00:06:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235880AbhJLAIp (ORCPT ); Mon, 11 Oct 2021 20:08:45 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51366 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233508AbhJLAI3 (ORCPT ); Mon, 11 Oct 2021 20:08:29 -0400 Message-ID: <20211011223610.766147508@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997186; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=UFVFYkEktW/yEvtcjFFR89PEJUGZgSbkDZyTQ+sOUPg=; b=dKwibIjsWQVo+snnmtdR33D7DOsRg8NODKLWBU0RJkHspJkuyBkCufL1cmaRHZXKMOJQ9P S15YutiqfkJm+C6T4r1yceWfZJhxMTZsh+9UsDj3RG9gw/GFhyCKP+0n9lVqnKd7duehxq sTLJzFEQ5GmYXUnG54h6IaRyYp/4Ff4P9XIR743ZUrDioLPacfw4Bi6NWP34pKUwdX/cmn GMmBGawdjF6fDhGAIyi9AZYsTaF/ozC3DrYO4Kpi+yZsFAymOZSop/yiOPNhbE549r6Bhp pFfiwlFECpQp7N8wc1nRfZnznOCEFltZJWjK1/wu/xwVvSXHJMTIHs0zkXRInQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997186; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=UFVFYkEktW/yEvtcjFFR89PEJUGZgSbkDZyTQ+sOUPg=; b=lTekYzvIJbxZ76A+KXIg0s44C8GdaMRqweQ2LkamFpigKmQjIScDnXAGXgISd8W0KXzhU+ znEfmIispec5AUDA== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 08/31] x86/fpu: Do not inherit FPU context for kernel and IO worker threads References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:10 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org There is no reason why kernel and IO worker threads need a full clone of the parent's FPU state. Both are kernel threads which are not supposed to use FPU. So copying a large state or doing XSAVE() is pointless. Just clean out the minimaly required state for those tasks. 
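Note (illustration only, not part of this patch): a kernel or IO worker thread which really needs the FPU for a computation has to bracket that work with kernel_fpu_begin()/kernel_fpu_end() anyway and never relies on inherited register contents, e.g.:

#include <linux/types.h>
#include <asm/fpu/api.h>

/* Hypothetical example; the function name is made up. */
static void checksum_with_simd(void *dst, const void *src, size_t len)
{
	kernel_fpu_begin();	/* make FPU/SIMD usable in kernel context */
	/* ... SIMD accelerated work on dst/src ... */
	kernel_fpu_end();
}
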
Signed-off-by: Thomas Gleixner --- arch/x86/kernel/fpu/core.c | 37 ++++++++++++++++++++++++++----------- 1 file changed, 26 insertions(+), 11 deletions(-) --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -212,6 +212,15 @@ static inline void fpstate_init_xstate(s xsave->header.xcomp_bv = XCOMP_BV_COMPACTED_FORMAT | xfeatures_mask_all; } +static inline unsigned int init_fpstate_copy_size(void) +{ + if (!use_xsave()) + return fpu_kernel_xstate_size; + + /* XSAVE(S) just needs the legacy and the xstate header part */ + return sizeof(init_fpstate.xsave); +} + static inline void fpstate_init_fxstate(struct fxregs_state *fx) { fx->cwd = 0x37f; @@ -260,6 +269,23 @@ int fpu_clone(struct task_struct *dst) return 0; /* + * Enforce reload for user space tasks and prevent kernel threads + * from trying to save the FPU registers on context switch. + */ + set_tsk_thread_flag(dst, TIF_NEED_FPU_LOAD); + + /* + * No FPU state inheritance for kernel threads and IO + * worker threads. + */ + if (dst->flags & (PF_KTHREAD | PF_IO_WORKER)) { + /* Clear out the minimal state */ + memcpy(&dst_fpu->state, &init_fpstate, + init_fpstate_copy_size()); + return 0; + } + + /* * If the FPU registers are not owned by current just memcpy() the * state. Otherwise save the FPU registers directly into the * child's FPU context, without any memory-to-memory copying. @@ -272,8 +298,6 @@ int fpu_clone(struct task_struct *dst) save_fpregs_to_fpstate(dst_fpu); fpregs_unlock(); - set_tsk_thread_flag(dst, TIF_NEED_FPU_LOAD); - trace_x86_fpu_copy_src(src_fpu); trace_x86_fpu_copy_dst(dst_fpu); @@ -322,15 +346,6 @@ static inline void restore_fpregs_from_i pkru_write_default(); } -static inline unsigned int init_fpstate_copy_size(void) -{ - if (!use_xsave()) - return fpu_kernel_xstate_size; - - /* XSAVE(S) just needs the legacy and the xstate header part */ - return sizeof(init_fpstate.xsave); -} - /* * Reset current->fpu memory state to the init values. 
*/ From patchwork Tue Oct 12 00:00:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551177 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 58194C433FE for ; Tue, 12 Oct 2021 00:06:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3ACFC61184 for ; Tue, 12 Oct 2021 00:06:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235562AbhJLAIi (ORCPT ); Mon, 11 Oct 2021 20:08:38 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51366 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231443AbhJLAIY (ORCPT ); Mon, 11 Oct 2021 20:08:24 -0400 Message-ID: <20211011223610.828296394@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997181; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=5e/GctY8E+M9FTg87q7clLA+JaNKqsqcOKPkkw/zsqY=; b=fWKxcOASGZ283xBkCCj1NcRQ4K05BWRXR2p6CrF/lel3M+NzVbvtyyrqvF/htWqJ25HcJV fGsCwv8YQvSNwQeEuakL+EpxoMDsOtUM/AfWcMY3gdEvSGW1zP2jLpzko+UwMCXEFoi1Yf jdK47/szQVY8vbFZ89S7hGLMfUCBuO/pmEi6Lch7VkTod3QeHPNrGFkQVG+C7aXFyd7pEj L5VDar88FVg7y8TYSznegYwTivW3d646AxK2vd+bTewbbqZ8Mcj04DzvEtqbAqX+DgPlwU E5737oLf0m/OKyK05J0c0d0pZTvS5q7WXm05TReMN/zl/FAIdmbOpjoPtyC4NA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997181; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=5e/GctY8E+M9FTg87q7clLA+JaNKqsqcOKPkkw/zsqY=; b=W7dM5vmo/lQxT2lbTj5ScIbfcpcLEwctrVW4uwIwHIb+GfYSSWVk4KdDd3y3mKl2F9/Y6P iws95qs0CzewB8Dg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 09/31] x86/fpu: Do not inherit FPU context for CLONE_THREAD References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:11 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org CLONE_THREAD does not have the guarantee of a true fork to inherit all state. Especially the FPU state is meaningless for CLONE_THREAD. Just wipe out the minimal required state so restore on return to user space let's the thread start with a clean FPU. 
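Note: condensed from the hunks in patches 6-9 (a sketch of the resulting flow, not a verbatim copy of the kernel code), fpu_clone() now boils down to:

int fpu_clone(struct task_struct *dst, unsigned long clone_flags)
{
	/* The child reloads its FPU state lazily on return to user space. */
	set_tsk_thread_flag(dst, TIF_NEED_FPU_LOAD);

	/* Kernel threads, IO workers and CLONE_THREAD children start from
	 * the init state instead of a copy of the parent. */
	if ((clone_flags & CLONE_THREAD) ||
	    (dst->flags & (PF_KTHREAD | PF_IO_WORKER))) {
		memcpy(&dst->thread.fpu.state, &init_fpstate,
		       init_fpstate_copy_size());
		return 0;
	}

	/* Otherwise copy the parent's state: from memory if the registers
	 * are not owned by current, else directly from the registers. */
	fpregs_lock();
	if (test_thread_flag(TIF_NEED_FPU_LOAD))
		memcpy(&dst->thread.fpu.state, &current->thread.fpu.state,
		       fpu_kernel_xstate_size);
	else
		save_fpregs_to_fpstate(&dst->thread.fpu);
	fpregs_unlock();

	return 0;
}
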
Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 2 +- arch/x86/kernel/fpu/core.c | 8 +++++--- arch/x86/kernel/process.c | 2 +- 3 files changed, 7 insertions(+), 5 deletions(-) --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -34,7 +34,7 @@ extern int fpu__exception_code(struct f extern void fpu_sync_fpstate(struct fpu *fpu); /* Clone and exit operations */ -extern int fpu_clone(struct task_struct *dst); +extern int fpu_clone(struct task_struct *dst, unsigned long clone_flags); extern void fpu_flush_thread(void); /* --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -257,7 +257,7 @@ void fpstate_init(union fpregs_state *st EXPORT_SYMBOL_GPL(fpstate_init); /* Clone current's FPU state on fork */ -int fpu_clone(struct task_struct *dst) +int fpu_clone(struct task_struct *dst, unsigned long clone_flags) { struct fpu *src_fpu = ¤t->thread.fpu; struct fpu *dst_fpu = &dst->thread.fpu; @@ -276,9 +276,11 @@ int fpu_clone(struct task_struct *dst) /* * No FPU state inheritance for kernel threads and IO - * worker threads. + * worker threads. Neither CLONE_THREAD needs a copy + * of the FPU state. */ - if (dst->flags & (PF_KTHREAD | PF_IO_WORKER)) { + if (clone_flags & CLONE_THREAD || + dst->flags & (PF_KTHREAD | PF_IO_WORKER)) { /* Clear out the minimal state */ memcpy(&dst_fpu->state, &init_fpstate, init_fpstate_copy_size()); --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -154,7 +154,7 @@ int copy_thread(unsigned long clone_flag frame->flags = X86_EFLAGS_FIXED; #endif - fpu_clone(p); + fpu_clone(p, clone_flags); /* Kernel thread ? */ if (unlikely(p->flags & PF_KTHREAD)) { From patchwork Tue Oct 12 00:00:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551165 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 884F1C433EF for ; Tue, 12 Oct 2021 00:06:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6B2BD61038 for ; Tue, 12 Oct 2021 00:06:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234937AbhJLAIa (ORCPT ); Mon, 11 Oct 2021 20:08:30 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51264 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230213AbhJLAIY (ORCPT ); Mon, 11 Oct 2021 20:08:24 -0400 Message-ID: <20211011223610.889700153@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997180; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=5zH7sbOyJ/0kgGAfW88cQB0QECVGl/W35IgTXJyPvBg=; b=yUcSh3MYIq0/7iFpfPM4FaAinzNFuVgaCMr9wLWcU5RyjpGG/sNBpiWC1v22eSmzgbWte6 dW8T74/U+seclD147jdmWavg55483CFSMNGJJQyKRjV/hApBaFnq/9yBn6UTukdw60ZOQL 1VkrMZKFZXjAi8GFhiaF1k9WBGvoLQlUUZWvbJ6l/H8s6CXhrtfOjWdM8Y7Cq/YWjZc/pe V0to9aoj64QiH+rc4pt0iDOZ3mtiMlMmsvryqyi0B1UKZ7UdWEWMeMABS+utzkRVXcvW5u uv9N3IRmcUADSCqnRSrGumKuqqNWf8AxULdeVZljsl5Ydd0MOGyEqfLQiKfi8A== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997180; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: 
to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=5zH7sbOyJ/0kgGAfW88cQB0QECVGl/W35IgTXJyPvBg=; b=KHHMyZkgmfd64Vy2mRUZzvDmgdCsvMDRsIZaHRfiOLdTUnybKABW8mfMVA/GV1m64QfylN d5sqm3EoZXmSxMAA== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 10/31] x86/fpu: Cleanup xstate xcomp_bv initialization References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:13 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org No point in having this duplicated all over the place with needlessly different defines. Provide a proper initialization function which initializes user buffers properly and make KVM use it. Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 4 +++- arch/x86/kernel/fpu/core.c | 35 +++++++++++++++++++---------------- arch/x86/kernel/fpu/init.c | 6 +++--- arch/x86/kernel/fpu/xstate.c | 8 +++----- arch/x86/kernel/fpu/xstate.h | 18 ++++++++++++++++++ arch/x86/kvm/x86.c | 11 +++-------- 6 files changed, 49 insertions(+), 33 deletions(-) --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -80,7 +80,9 @@ static __always_inline __pure bool use_f extern union fpregs_state init_fpstate; -extern void fpstate_init(union fpregs_state *state); +extern void fpstate_init_user(union fpregs_state *state); +extern void fpu_init_fpstate_user(struct fpu *fpu); + #ifdef CONFIG_MATH_EMULATION extern void fpstate_init_soft(struct swregs_state *soft); #else --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -16,6 +16,8 @@ #include #include +#include "xstate.h" + #define CREATE_TRACE_POINTS #include @@ -203,15 +205,6 @@ void fpu_sync_fpstate(struct fpu *fpu) fpregs_unlock(); } -static inline void fpstate_init_xstate(struct xregs_state *xsave) -{ - /* - * XRSTORS requires these bits set in xcomp_bv, or it will - * trigger #GP: - */ - xsave->header.xcomp_bv = XCOMP_BV_COMPACTED_FORMAT | xfeatures_mask_all; -} - static inline unsigned int init_fpstate_copy_size(void) { if (!use_xsave()) @@ -238,23 +231,33 @@ static inline void fpstate_init_fstate(s fp->fos = 0xffff0000u; } -void fpstate_init(union fpregs_state *state) +/* + * Used in two places: + * 1) Early boot to setup init_fpstate for non XSAVE systems + * 2) fpu_init_fpstate_user() which is invoked from KVM + */ +void fpstate_init_user(union fpregs_state *state) { - if (!static_cpu_has(X86_FEATURE_FPU)) { + if (!cpu_feature_enabled(X86_FEATURE_FPU)) { fpstate_init_soft(&state->soft); return; } - memset(state, 0, fpu_kernel_xstate_size); + xstate_init_xcomp_bv(&state->xsave, xfeatures_mask_uabi()); - if (static_cpu_has(X86_FEATURE_XSAVES)) - fpstate_init_xstate(&state->xsave); - if (static_cpu_has(X86_FEATURE_FXSR)) + if (cpu_feature_enabled(X86_FEATURE_FXSR)) fpstate_init_fxstate(&state->fxsave); else fpstate_init_fstate(&state->fsave); } -EXPORT_SYMBOL_GPL(fpstate_init); + +#if IS_ENABLED(CONFIG_KVM) +void fpu_init_fpstate_user(struct fpu *fpu) +{ + fpstate_init_user(&fpu->state); +} +EXPORT_SYMBOL_GPL(fpu_init_fpstate_user); +#endif /* Clone current's FPU state on fork */ int fpu_clone(struct task_struct *dst, unsigned long clone_flags) --- a/arch/x86/kernel/fpu/init.c +++ b/arch/x86/kernel/fpu/init.c @@ -121,10 +121,10 @@ static void __init fpu__init_system_mxcs static void __init fpu__init_system_generic(void) { /* - * Set up the legacy init FPU context. 
(xstate init might overwrite this - * with a more modern format, if the CPU supports it.) + * Set up the legacy init FPU context. Will be updated when the + * CPU supports XSAVE[S]. */ - fpstate_init(&init_fpstate); + fpstate_init_user(&init_fpstate); fpu__init_system_mxcsr(); } --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -15,10 +15,10 @@ #include #include #include -#include #include -#include + +#include "xstate.h" /* * Although we spell it out in here, the Processor Trace @@ -389,9 +389,7 @@ static void __init setup_init_fpu_buf(vo setup_xstate_features(); print_xstate_features(); - if (boot_cpu_has(X86_FEATURE_XSAVES)) - init_fpstate.xsave.header.xcomp_bv = XCOMP_BV_COMPACTED_FORMAT | - xfeatures_mask_all; + xstate_init_xcomp_bv(&init_fpstate.xsave, xfeatures_mask_all); /* * Init all the features state with header.xfeatures being 0x0 --- /dev/null +++ b/arch/x86/kernel/fpu/xstate.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __X86_KERNEL_FPU_XSTATE_H +#define __X86_KERNEL_FPU_XSTATE_H + +#include +#include + +static inline void xstate_init_xcomp_bv(struct xregs_state *xsave, u64 mask) +{ + /* + * XRSTORS requires these bits set in xcomp_bv, or it will + * trigger #GP: + */ + if (cpu_feature_enabled(X86_FEATURE_XSAVES)) + xsave->header.xcomp_bv = mask | XCOMP_BV_COMPACTED_FORMAT; +} + +#endif --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -10612,14 +10612,6 @@ static int sync_regs(struct kvm_vcpu *vc static void fx_init(struct kvm_vcpu *vcpu) { - if (!vcpu->arch.guest_fpu) - return; - - fpstate_init(&vcpu->arch.guest_fpu->state); - if (boot_cpu_has(X86_FEATURE_XSAVES)) - vcpu->arch.guest_fpu->state.xsave.header.xcomp_bv = - host_xcr0 | XSTATE_COMPACTION_ENABLED; - /* * Ensure guest xcr0 is valid for loading */ @@ -10704,6 +10696,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu pr_err("kvm: failed to allocate vcpu's fpu\n"); goto free_user_fpu; } + + fpu_init_fpstate_user(vcpu->arch.user_fpu); + fpu_init_fpstate_user(vcpu->arch.guest_fpu); fx_init(vcpu); vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu); From patchwork Tue Oct 12 00:00:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551161 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CCCE4C433EF for ; Tue, 12 Oct 2021 00:06:28 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AECEE61108 for ; Tue, 12 Oct 2021 00:06:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232675AbhJLAI2 (ORCPT ); Mon, 11 Oct 2021 20:08:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34002 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231352AbhJLAIX (ORCPT ); Mon, 11 Oct 2021 20:08:23 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9D018C06161C; Mon, 11 Oct 2021 17:06:22 -0700 (PDT) Message-ID: <20211011223610.950054267@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997181; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: 
to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=4EHKPcl4JlAkis6eUZZQUW/49IdenRLk1IEf79GkDUo=; b=enbtcQDhqeYiyz2Z/05myFc/NHMseN8qMYLy5yOCCpOqN52G8ISj1qOxeT9Il4+UaXeWKi XlKiikJiVvSWqLo1ctNjytPyUsVMHmn5rRhgj21e3qOAsM7KByXJAYmd9sLoD8V5OfC1h0 jFI7m5ACjVwiV1F8EISv0i+jPEN+JWA8mMWYcfooGjuwqdy4XDCiJ25inzPAaliPZ4PdsT c3YDTEJkW62G3//if7bALwTxKebY31+Py1GOAq85Ut6L4ImLiaT/mbxraTfuXXIJ9cbDwv mA6i1efdWgrCZNWl6MdTLiZufI9JEmaJJfVLmSQdeiw+t+Fq9ehPQnlo2jTdig== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997181; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=4EHKPcl4JlAkis6eUZZQUW/49IdenRLk1IEf79GkDUo=; b=lMnZZ8wHxx2R97Q0c0pAzokvQkIx3zsYmLLWcxBqJ/wY/5kmxfcZecJvOjPG3gj3atdSOU y/3QXBAImZc9DSBA== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 11/31] x86/fpu/xstate: Provide and use for_each_xfeature() References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:14 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org These loops evaluating xfeature bits are really hard to read. Create an iterator and use for_each_set_bit_from() inside which already does the right thing. Signed-off-by: Thomas Gleixner --- arch/x86/kernel/fpu/xstate.c | 56 +++++++++++++++++-------------------------- 1 file changed, 23 insertions(+), 33 deletions(-) --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -4,6 +4,7 @@ * * Author: Suresh Siddha */ +#include #include #include #include @@ -20,6 +21,10 @@ #include "xstate.h" +#define for_each_extended_xfeature(bit, mask) \ + (bit) = FIRST_EXTENDED_XFEATURE; \ + for_each_set_bit_from(bit, (unsigned long *)&(mask), 8 * sizeof(mask)) + /* * Although we spell it out in here, the Processor Trace * xfeature is completely unused. 
We use other mechanisms @@ -184,10 +189,7 @@ static void __init setup_xstate_features xstate_sizes[XFEATURE_SSE] = sizeof_field(struct fxregs_state, xmm_space); - for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) { - if (!xfeature_enabled(i)) - continue; - + for_each_extended_xfeature(i, xfeatures_mask_all) { cpuid_count(XSTATE_CPUID, i, &eax, &ebx, &ecx, &edx); xstate_sizes[i] = eax; @@ -291,20 +293,15 @@ static void __init setup_xstate_comp_off xstate_comp_offsets[XFEATURE_SSE] = offsetof(struct fxregs_state, xmm_space); - if (!boot_cpu_has(X86_FEATURE_XSAVES)) { - for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) { - if (xfeature_enabled(i)) - xstate_comp_offsets[i] = xstate_offsets[i]; - } + if (!cpu_feature_enabled(X86_FEATURE_XSAVES)) { + for_each_extended_xfeature(i, xfeatures_mask_all) + xstate_comp_offsets[i] = xstate_offsets[i]; return; } next_offset = FXSAVE_SIZE + XSAVE_HDR_SIZE; - for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) { - if (!xfeature_enabled(i)) - continue; - + for_each_extended_xfeature(i, xfeatures_mask_all) { if (xfeature_is_aligned(i)) next_offset = ALIGN(next_offset, 64); @@ -328,8 +325,8 @@ static void __init setup_supervisor_only next_offset = FXSAVE_SIZE + XSAVE_HDR_SIZE; - for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) { - if (!xfeature_enabled(i) || !xfeature_is_supervisor(i)) + for_each_extended_xfeature(i, xfeatures_mask_all) { + if (!xfeature_is_supervisor(i)) continue; if (xfeature_is_aligned(i)) @@ -347,9 +344,7 @@ static void __init print_xstate_offset_s { int i; - for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) { - if (!xfeature_enabled(i)) - continue; + for_each_extended_xfeature(i, xfeatures_mask_all) { pr_info("x86/fpu: xstate_offset[%d]: %4d, xstate_sizes[%d]: %4d\n", i, xstate_comp_offsets[i], i, xstate_sizes[i]); } @@ -554,10 +549,7 @@ static void do_extra_xstate_size_checks( int paranoid_xstate_size = FXSAVE_SIZE + XSAVE_HDR_SIZE; int i; - for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) { - if (!xfeature_enabled(i)) - continue; - + for_each_extended_xfeature(i, xfeatures_mask_all) { check_xstate_against_struct(i); /* * Supervisor state components can be managed only by @@ -586,7 +578,6 @@ static void do_extra_xstate_size_checks( XSTATE_WARN_ON(paranoid_xstate_size != fpu_kernel_xstate_size); } - /* * Get total size of enabled xstates in XCR0 | IA32_XSS. * @@ -969,6 +960,7 @@ void copy_xstate_to_uabi_buf(struct memb struct xregs_state *xinit = &init_fpstate.xsave; struct xstate_header header; unsigned int zerofrom; + u64 mask; int i; memset(&header, 0, sizeof(header)); @@ -1022,17 +1014,15 @@ void copy_xstate_to_uabi_buf(struct memb zerofrom = offsetof(struct xregs_state, extended_state_area); - for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) { - /* - * The ptrace buffer is in non-compacted XSAVE format. - * In non-compacted format disabled features still occupy - * state space, but there is no state to copy from in the - * compacted init_fpstate. The gap tracking will zero this - * later. - */ - if (!(xfeatures_mask_uabi() & BIT_ULL(i))) - continue; + /* + * The ptrace buffer is in non-compacted XSAVE format. In + * non-compacted format disabled features still occupy state space, + * but there is no state to copy from in the compacted + * init_fpstate. The gap tracking will zero these states. + */ + mask = xfeatures_mask_uabi(); + for_each_extended_xfeature(i, mask) { /* * If there was a feature or alignment gap, zero the space * in the destination buffer. 
From patchwork Tue Oct 12 00:00:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551163 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A194AC433F5 for ; Tue, 12 Oct 2021 00:06:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 85B1B61165 for ; Tue, 12 Oct 2021 00:06:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235005AbhJLAIa (ORCPT ); Mon, 11 Oct 2021 20:08:30 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51286 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231351AbhJLAIY (ORCPT ); Mon, 11 Oct 2021 20:08:24 -0400 Message-ID: <20211011223611.008987641@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997180; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=vJplmk5mLqHish4MRg29kcJ/vnY99dO9+83MpoObvg0=; b=4g850/Yt3H3l3cl9haoFNQ+WNNd1tjBIMr+pGb5RAqQGPCw0OkRttRgiCAinaeDcLuhD+7 TjhfSzNEbyTGix9z0Nggg6cvxuSMTdtckbMCt9AYN4LRhZa2EQS3E2nQHZw6E9jlhZCYJ1 S8NXR8XRhGJrrw4ylc63fU1E97DO7pw2Um6EiGNM0seOowC+8G0/QPKGUOFMiJ1GLd4846 pnKzOVV7Bo9QMcvSxSS5OUN2Qnsaiaqu2W4ctlbJNvdu8nYeKTCIIgtzze4Z0vP71N5E+C 0VtpHRE17dl2OYv94/xFN/ulTmKREOZLOu4slitalIiQjUkTfXQAWgFZZ0uuaA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997180; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=vJplmk5mLqHish4MRg29kcJ/vnY99dO9+83MpoObvg0=; b=zWVTD6BkuA8n22nt7clyuxFZSb+Bf5WS87149Y+h/SWtaCl/bIM1fOYabgQ2VOOG+mgeDM imogWjg/IETJxcDA== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 12/31] x86/fpu/xstate: Mark all init only functions __init References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:16 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org No point to keep them around after boot. Signed-off-by: Thomas Gleixner --- arch/x86/kernel/fpu/xstate.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -462,7 +462,7 @@ static int validate_user_xstate_header(c return 0; } -static void __xstate_dump_leaves(void) +static void __init __xstate_dump_leaves(void) { int i; u32 eax, ebx, ecx, edx; @@ -502,7 +502,7 @@ static void __xstate_dump_leaves(void) * that our software representation matches what the CPU * tells us about the state's size. */ -static void check_xstate_against_struct(int nr) +static void __init check_xstate_against_struct(int nr) { /* * Ask the CPU for the size of the state. @@ -544,7 +544,7 @@ static void check_xstate_against_struct( * covered by these checks. Only the size of the buffer for task->fpu * is checked here. 
*/ -static void do_extra_xstate_size_checks(void) +static void __init do_extra_xstate_size_checks(void) { int paranoid_xstate_size = FXSAVE_SIZE + XSAVE_HDR_SIZE; int i; @@ -646,7 +646,7 @@ static unsigned int __init get_xsave_siz * Will the runtime-enumerated 'xstate_size' fit in the init * task's statically-allocated buffer? */ -static bool is_supported_xstate_size(unsigned int test_xstate_size) +static bool __init is_supported_xstate_size(unsigned int test_xstate_size) { if (test_xstate_size <= sizeof(union fpregs_state)) return true; @@ -691,7 +691,7 @@ static int __init init_xstate_size(void) * We enabled the XSAVE hardware, but something went wrong and * we can not use it. Disable it. */ -static void fpu__init_disable_system_xstate(void) +static void __init fpu__init_disable_system_xstate(void) { xfeatures_mask_all = 0; cr4_clear_bits(X86_CR4_OSXSAVE); From patchwork Tue Oct 12 00:00:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551171 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3AA5BC433FE for ; Tue, 12 Oct 2021 00:06:37 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 17E866115A for ; Tue, 12 Oct 2021 00:06:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235377AbhJLAIf (ORCPT ); Mon, 11 Oct 2021 20:08:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34008 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231607AbhJLAIY (ORCPT ); Mon, 11 Oct 2021 20:08:24 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ADDC6C061745; Mon, 11 Oct 2021 17:06:23 -0700 (PDT) Message-ID: <20211011223611.069324121@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997181; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=r/ZkZDk0uCUdUzfqEH4kRtKn/N/k0uKmPiViYgAGdNA=; b=X2Jc2yDJqujuUGDr/jHZL20+4lQqjR4hzZMwldbLElbHJ3gmuAb6cCWE6m1O4vvIiqkRSE TZi7aG9hjxMccsY4CXrORp5UANoRs0vDURxr+uhUfLrTEt+Q/cmOa0N1+sVspX4qNi74A6 DpnilvWUPyklrP6BonRgUTcS5ldIqobcOx6kHCjSyPcBNkLWJfLDla67LCyA7p2jKu+5AJ OpwCUoGU49E0m09OX9QFRr/fI440raranZF6QUX//PgbqtDH5J439zzWVGhpbVZsZvKY2B 5PDNadFzT9K+rhPUxkvjJk2MaNEtXqaYU+GuofviaYXZC1Iyj7W87vJA9xCq8w== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997181; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=r/ZkZDk0uCUdUzfqEH4kRtKn/N/k0uKmPiViYgAGdNA=; b=/Wiu1U+gXREeaEVK2min30l78kB1t5uWMQpl1wtkbZFwU+adceWnv2D1Yp9rRDYPtoOIyH //znt1HAktL9BPCQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. 
Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 13/31] x86/fpu: Move KVMs FPU swapping to FPU core References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:17 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Swapping the host/guest FPU is directly fiddling with FPU internals which requires 5 exports. The upcoming support of dymanically enabled states would even need more. Implement a swap function in the FPU core code and export that instead. Signed-off-by: Thomas Gleixner Cc: kvm@vger.kernel.org Cc: Paolo Bonzini Reviewed-by: Paolo Bonzini --- arch/x86/include/asm/fpu/api.h | 8 +++++ arch/x86/include/asm/fpu/internal.h | 15 +--------- arch/x86/kernel/fpu/core.c | 30 ++++++++++++++++++--- arch/x86/kernel/fpu/init.c | 1 arch/x86/kernel/fpu/xstate.c | 1 arch/x86/kvm/x86.c | 51 +++++++----------------------------- arch/x86/mm/extable.c | 2 - 7 files changed, 48 insertions(+), 60 deletions(-) --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -12,6 +12,8 @@ #define _ASM_X86_FPU_API_H #include +#include + /* * Use kernel_fpu_begin/end() if you intend to use FPU in kernel context. It * disables preemption so be careful if you intend to use it for long periods @@ -108,4 +110,10 @@ extern int cpu_has_xfeatures(u64 xfeatur static inline void update_pasid(void) { } +/* FPSTATE related functions which are exported to KVM */ +extern void fpu_init_fpstate_user(struct fpu *fpu); + +/* KVM specific functions */ +extern void fpu_swap_kvm_fpu(struct fpu *save, struct fpu *rstor, u64 restore_mask); + #endif /* _ASM_X86_FPU_API_H */ --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -74,14 +74,8 @@ static __always_inline __pure bool use_f return static_cpu_has(X86_FEATURE_FXSR); } -/* - * fpstate handling functions: - */ - extern union fpregs_state init_fpstate; - extern void fpstate_init_user(union fpregs_state *state); -extern void fpu_init_fpstate_user(struct fpu *fpu); #ifdef CONFIG_MATH_EMULATION extern void fpstate_init_soft(struct swregs_state *soft); @@ -381,12 +375,7 @@ static inline int os_xrstor_safe(struct return err; } -extern void __restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask); - -static inline void restore_fpregs_from_fpstate(union fpregs_state *fpstate) -{ - __restore_fpregs_from_fpstate(fpstate, xfeatures_mask_fpstate()); -} +extern void restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask); extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size); @@ -467,7 +456,7 @@ static inline void fpregs_restore_userre */ mask = xfeatures_mask_restore_user() | xfeatures_mask_supervisor(); - __restore_fpregs_from_fpstate(&fpu->state, mask); + restore_fpregs_from_fpstate(&fpu->state, mask); fpregs_activate(fpu); fpu->last_cpu = cpu; --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -124,9 +124,8 @@ void save_fpregs_to_fpstate(struct fpu * asm volatile("fnsave %[fp]; fwait" : [fp] "=m" (fpu->state.fsave)); frstor(&fpu->state.fsave); } -EXPORT_SYMBOL(save_fpregs_to_fpstate); -void __restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask) +void restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask) { /* * AMD K7/K8 and later CPUs up to Zen don't save/restore @@ -151,7 +150,31 @@ void __restore_fpregs_from_fpstate(union frstor(&fpstate->fsave); } } -EXPORT_SYMBOL_GPL(__restore_fpregs_from_fpstate); + +#if IS_ENABLED(CONFIG_KVM) +void 
fpu_swap_kvm_fpu(struct fpu *save, struct fpu *rstor, u64 restore_mask) +{ + fpregs_lock(); + + if (save) { + if (test_thread_flag(TIF_NEED_FPU_LOAD)) { + memcpy(&save->state, ¤t->thread.fpu.state, + fpu_kernel_xstate_size); + } else { + save_fpregs_to_fpstate(save); + } + } + + if (rstor) { + restore_mask &= xfeatures_mask_fpstate(); + restore_fpregs_from_fpstate(&rstor->state, restore_mask); + } + + fpregs_mark_activate(); + fpregs_unlock(); +} +EXPORT_SYMBOL_GPL(fpu_swap_kvm_fpu); +#endif void kernel_fpu_begin_mask(unsigned int kfpu_mask) { @@ -459,7 +482,6 @@ void fpregs_mark_activate(void) fpu->last_cpu = smp_processor_id(); clear_thread_flag(TIF_NEED_FPU_LOAD); } -EXPORT_SYMBOL_GPL(fpregs_mark_activate); /* * x87 math exception handling: --- a/arch/x86/kernel/fpu/init.c +++ b/arch/x86/kernel/fpu/init.c @@ -136,7 +136,6 @@ static void __init fpu__init_system_gene * components into a single, continuous memory block: */ unsigned int fpu_kernel_xstate_size __ro_after_init; -EXPORT_SYMBOL_GPL(fpu_kernel_xstate_size); /* Get alignment of the TYPE. */ #define TYPE_ALIGN(TYPE) offsetof(struct { char x; TYPE test; }, test) --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -65,7 +65,6 @@ static short xsave_cpuid_features[] __in * XSAVE buffer, both supervisor and user xstates. */ u64 xfeatures_mask_all __ro_after_init; -EXPORT_SYMBOL_GPL(xfeatures_mask_all); static unsigned int xstate_offsets[XFEATURE_MAX] __ro_after_init = { [ 0 ... XFEATURE_MAX - 1] = -1}; --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -68,7 +68,9 @@ #include #include #include -#include /* Ugh! */ +#include +#include +#include #include #include #include @@ -9899,58 +9901,27 @@ static int complete_emulated_mmio(struct return 0; } -static void kvm_save_current_fpu(struct fpu *fpu) -{ - /* - * If the target FPU state is not resident in the CPU registers, just - * memcpy() from current, else save CPU state directly to the target. - */ - if (test_thread_flag(TIF_NEED_FPU_LOAD)) - memcpy(&fpu->state, ¤t->thread.fpu.state, - fpu_kernel_xstate_size); - else - save_fpregs_to_fpstate(fpu); -} - /* Swap (qemu) user FPU context for the guest FPU context. */ static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu) { - fpregs_lock(); - - kvm_save_current_fpu(vcpu->arch.user_fpu); - /* - * Guests with protected state can't have it set by the hypervisor, - * so skip trying to set it. + * Guest with protected state have guest_fpu == NULL which makes + * the swap only safe the host state. Exclude PKRU from restore as + * it is restored separately in kvm_x86_ops.run(). */ - if (vcpu->arch.guest_fpu) - /* PKRU is separately restored in kvm_x86_ops.run. */ - __restore_fpregs_from_fpstate(&vcpu->arch.guest_fpu->state, - ~XFEATURE_MASK_PKRU); - - fpregs_mark_activate(); - fpregs_unlock(); - + fpu_swap_kvm_fpu(vcpu->arch.user_fpu, vcpu->arch.guest_fpu, + ~XFEATURE_MASK_PKRU); trace_kvm_fpu(1); } /* When vcpu_run ends, restore user space FPU context. */ static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu) { - fpregs_lock(); - /* - * Guests with protected state can't have it read by the hypervisor, - * so skip trying to save it. + * Guest with protected state have guest_fpu == NULL which makes + * swap only restore the host state. 
*/ - if (vcpu->arch.guest_fpu) - kvm_save_current_fpu(vcpu->arch.guest_fpu); - - restore_fpregs_from_fpstate(&vcpu->arch.user_fpu->state); - - fpregs_mark_activate(); - fpregs_unlock(); - + fpu_swap_kvm_fpu(vcpu->arch.guest_fpu, vcpu->arch.user_fpu, ~0ULL); ++vcpu->stat.fpu_reload; trace_kvm_fpu(0); } --- a/arch/x86/mm/extable.c +++ b/arch/x86/mm/extable.c @@ -47,7 +47,7 @@ static bool ex_handler_fprestore(const s WARN_ONCE(1, "Bad FPU state detected at %pB, reinitializing FPU registers.", (void *)instruction_pointer(regs)); - __restore_fpregs_from_fpstate(&init_fpstate, xfeatures_mask_fpstate()); + restore_fpregs_from_fpstate(&init_fpstate, xfeatures_mask_fpstate()); return true; } From patchwork Tue Oct 12 00:00:19 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551215 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8A47DC433F5 for ; Tue, 12 Oct 2021 00:07:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 61EF860E54 for ; Tue, 12 Oct 2021 00:07:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235515AbhJLAJb (ORCPT ); Mon, 11 Oct 2021 20:09:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34018 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230108AbhJLAI0 (ORCPT ); Mon, 11 Oct 2021 20:08:26 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 69B9EC06174E; Mon, 11 Oct 2021 17:06:25 -0700 (PDT) Message-ID: <20211011223611.129308001@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997182; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=9pY54dcnizQderckOuII3FbExiyNeJ/pxeUpRMR6Ztw=; b=gmOFKX7XsNqMmVcJu8SvWQyPK9HA5jJ97k7iR8l3eC94RACx31PIwHls5pw/tWQMHCgjiY 5lwfiEOGF/DUjquBWTvNrYC0RZgVbvEJ73TUdIf8Bk6KHsCMG2+GobPp5QpE21mWzaWeSC 2rc6RTBw7rzRNq9lFjij6gAri4SnDrNsE1E8aM2o/Qpbe+u5UgbTtGUsuxZq2nA4jAGwiF BcMuxKJXZMhHcf+iYJtdh0HhCjAKSepRfRchlOraMKBB9h2i0hs5t40tZmtefdWR8n9sOz kx+MYsm0SzpoGQOVb31KiuP1DqrOLqCi6wzI+Y4109H6loCj+eez0DyjxkoSsA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997182; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=9pY54dcnizQderckOuII3FbExiyNeJ/pxeUpRMR6Ztw=; b=fI1TMYVZdU57ucH5hbdVoUM5tNAarC301xIgI4mnd4Oya2qt/ZbrXnnUnLyMjIf5628MzI 4NfwTawnROv8udCg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 14/31] x86/fpu: Replace KVMs homebrewn FPU copy from user References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:19 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Copying a user space buffer to the memory buffer is already available in the FPU core. 
The copy mechanism in KVM lacks sanity checks and needs to use cpuid() to lookup the offset of each component, while the FPU core has this information cached. Make the FPU core variant accessible for KVM and replace the homebrewn mechanism. Signed-off-by: Thomas Gleixner Cc: kvm@vger.kernel.org Cc: Paolo Bonzini Reviewed-by: Paolo Bonzini --- arch/x86/include/asm/fpu/api.h | 3 + arch/x86/kernel/fpu/core.c | 38 ++++++++++++++++++++- arch/x86/kernel/fpu/xstate.c | 3 - arch/x86/kvm/x86.c | 74 +---------------------------------------- 4 files changed, 44 insertions(+), 74 deletions(-) --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -116,4 +116,7 @@ extern void fpu_init_fpstate_user(struct /* KVM specific functions */ extern void fpu_swap_kvm_fpu(struct fpu *save, struct fpu *rstor, u64 restore_mask); +struct kvm_vcpu; +extern int fpu_copy_kvm_uabi_to_vcpu(struct fpu *fpu, const void *buf, u64 xcr0, u32 *pkru); + #endif /* _ASM_X86_FPU_API_H */ --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -174,7 +174,43 @@ void fpu_swap_kvm_fpu(struct fpu *save, fpregs_unlock(); } EXPORT_SYMBOL_GPL(fpu_swap_kvm_fpu); -#endif + +int fpu_copy_kvm_uabi_to_vcpu(struct fpu *fpu, const void *buf, u64 xcr0, + u32 *vpkru) +{ + union fpregs_state *kstate = &fpu->state; + const union fpregs_state *ustate = buf; + struct pkru_state *xpkru; + int ret; + + if (!cpu_feature_enabled(X86_FEATURE_XSAVE)) { + if (ustate->xsave.header.xfeatures & ~XFEATURE_MASK_FPSSE) + return -EINVAL; + if (ustate->fxsave.mxcsr & ~mxcsr_feature_mask) + return -EINVAL; + memcpy(&kstate->fxsave, &ustate->fxsave, sizeof(ustate->fxsave)); + return 0; + } + + if (ustate->xsave.header.xfeatures & ~xcr0) + return -EINVAL; + + ret = copy_uabi_from_kernel_to_xstate(&kstate->xsave, ustate); + if (ret) + return ret; + + /* Retrieve PKRU if not in init state */ + if (kstate->xsave.header.xfeatures & XFEATURE_MASK_PKRU) { + xpkru = get_xsave_addr(&kstate->xsave, XFEATURE_PKRU); + *vpkru = xpkru->pkru; + } + + /* Ensure that XCOMP_BV is set up for XSAVES */ + xstate_init_xcomp_bv(&kstate->xsave, xfeatures_mask_uabi()); + return 0; +} +EXPORT_SYMBOL_GPL(fpu_copy_kvm_uabi_to_vcpu); +#endif /* CONFIG_KVM */ void kernel_fpu_begin_mask(unsigned int kfpu_mask) { --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -1134,8 +1134,7 @@ static int copy_uabi_to_xstate(struct xr /* * Convert from a ptrace standard-format kernel buffer to kernel XSAVE[S] - * format and copy to the target thread. This is called from - * xstateregs_set(). + * format and copy to the target thread. Used by ptrace and KVM. */ int copy_uabi_from_kernel_to_xstate(struct xregs_state *xsave, const void *kbuf) { --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4695,8 +4695,6 @@ static int kvm_vcpu_ioctl_x86_set_debugr return 0; } -#define XSTATE_COMPACTION_ENABLED (1ULL << 63) - static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu) { struct xregs_state *xsave = &vcpu->arch.guest_fpu->state.xsave; @@ -4740,50 +4738,6 @@ static void fill_xsave(u8 *dest, struct } } -static void load_xsave(struct kvm_vcpu *vcpu, u8 *src) -{ - struct xregs_state *xsave = &vcpu->arch.guest_fpu->state.xsave; - u64 xstate_bv = *(u64 *)(src + XSAVE_HDR_OFFSET); - u64 valid; - - /* - * Copy legacy XSAVE area, to avoid complications with CPUID - * leaves 0 and 1 in the loop below. - */ - memcpy(xsave, src, XSAVE_HDR_OFFSET); - - /* Set XSTATE_BV and possibly XCOMP_BV. 
*/ - xsave->header.xfeatures = xstate_bv; - if (boot_cpu_has(X86_FEATURE_XSAVES)) - xsave->header.xcomp_bv = host_xcr0 | XSTATE_COMPACTION_ENABLED; - - /* - * Copy each region from the non-compacted offset to the - * possibly compacted offset. - */ - valid = xstate_bv & ~XFEATURE_MASK_FPSSE; - while (valid) { - u32 size, offset, ecx, edx; - u64 xfeature_mask = valid & -valid; - int xfeature_nr = fls64(xfeature_mask) - 1; - - cpuid_count(XSTATE_CPUID, xfeature_nr, - &size, &offset, &ecx, &edx); - - if (xfeature_nr == XFEATURE_PKRU) { - memcpy(&vcpu->arch.pkru, src + offset, - sizeof(vcpu->arch.pkru)); - } else { - void *dest = get_xsave_addr(xsave, xfeature_nr); - - if (dest) - memcpy(dest, src + offset, size); - } - - valid -= xfeature_mask; - } -} - static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu, struct kvm_xsave *guest_xsave) { @@ -4802,37 +4756,15 @@ static void kvm_vcpu_ioctl_x86_get_xsave } } -#define XSAVE_MXCSR_OFFSET 24 - static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu, struct kvm_xsave *guest_xsave) { - u64 xstate_bv; - u32 mxcsr; - if (!vcpu->arch.guest_fpu) return 0; - xstate_bv = *(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)]; - mxcsr = *(u32 *)&guest_xsave->region[XSAVE_MXCSR_OFFSET / sizeof(u32)]; - - if (boot_cpu_has(X86_FEATURE_XSAVE)) { - /* - * Here we allow setting states that are not present in - * CPUID leaf 0xD, index 0, EDX:EAX. This is for compatibility - * with old userspace. - */ - if (xstate_bv & ~supported_xcr0 || mxcsr & ~mxcsr_feature_mask) - return -EINVAL; - load_xsave(vcpu, (u8 *)guest_xsave->region); - } else { - if (xstate_bv & ~XFEATURE_MASK_FPSSE || - mxcsr & ~mxcsr_feature_mask) - return -EINVAL; - memcpy(&vcpu->arch.guest_fpu->state.fxsave, - guest_xsave->region, sizeof(struct fxregs_state)); - } - return 0; + return fpu_copy_kvm_uabi_to_vcpu(vcpu->arch.guest_fpu, + guest_xsave->region, + supported_xcr0, &vcpu->arch.pkru); } static void kvm_vcpu_ioctl_x86_get_xcrs(struct kvm_vcpu *vcpu, From patchwork Tue Oct 12 00:00:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551191 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 98190C433EF for ; Tue, 12 Oct 2021 00:06:52 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 84C0561027 for ; Tue, 12 Oct 2021 00:06:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235949AbhJLAIv (ORCPT ); Mon, 11 Oct 2021 20:08:51 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51536 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233200AbhJLAI3 (ORCPT ); Mon, 11 Oct 2021 20:08:29 -0400 Message-ID: <20211011223611.189084655@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997183; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=Ij7f9mOfp0QLjWmvwZM0nmVubL30A1buOGY1JPWbLHc=; b=HdmBHMq+Qld3lFMMgIHxAJf1YjxeVa4uuQdZBF2XMyj5DEqPoJAluQKDLe5b6i8dhqgF8k mVgiAOwNB84VyDiiugsoeD7dXNG2RVo/5eKTq0rDmxX1aAmhCCG4uD3ovfKbPZ6dUSHcvs doRzj6+x9fAATMWwMkeV6yXhL3Q89yhfD1mInMb6iGW9ik0mM/l/BoYh+njdeS9CzuWG0S 
eqGkq+QG2m0YZahEXQGqC1waRGm5U6LqYU3HkTRTpRL9NzRGBa5uQIP1OXu0/aaYuABotA ljVnJ2sJ8tvn26+8I8759rZqTSvD7DDHy0MNECYfp0v7Um6GMCBVa5qa+1TVUg== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997183; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=Ij7f9mOfp0QLjWmvwZM0nmVubL30A1buOGY1JPWbLHc=; b=z7o+25g0twDCPNgb/TE0ysFVK9/S06gvKLTOUgK+08Ho+WZ0RJjIHYNXmVEm1dHY4ulSDI BHd0iT9WifulFGDw== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 15/31] x86/fpu: Rework copy_xstate_to_uabi_buf() References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:20 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Prepare for replacing the KVM copy xstate to user function by extending copy_xstate_to_uabi_buf() with a pkru argument which allows the caller to hand in the pkru value, which is required for KVM because the guest PKRU is not accessible via current. Fixup all callsites accordingly. Signed-off-by: Thomas Gleixner Reviewed-by: Paolo Bonzini --- arch/x86/kernel/fpu/xstate.c | 34 ++++++++++++++++++++++++++-------- arch/x86/kernel/fpu/xstate.h | 3 +++ 2 files changed, 29 insertions(+), 8 deletions(-) --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -940,9 +940,10 @@ static void copy_feature(bool from_xstat } /** - * copy_xstate_to_uabi_buf - Copy kernel saved xstate to a UABI buffer + * __copy_xstate_to_uabi_buf - Copy kernel saved xstate to a UABI buffer * @to: membuf descriptor - * @tsk: The task from which to copy the saved xstate + * @xsave: The xsave from which to copy + * @pkru_val: The PKRU value to store in the PKRU component * @copy_mode: The requested copy mode * * Converts from kernel XSAVE or XSAVES compacted format to UABI conforming @@ -951,11 +952,10 @@ static void copy_feature(bool from_xstat * * It supports partial copy but @to.pos always starts from zero. */ -void copy_xstate_to_uabi_buf(struct membuf to, struct task_struct *tsk, - enum xstate_copy_mode copy_mode) +void __copy_xstate_to_uabi_buf(struct membuf to, struct xregs_state *xsave, + u32 pkru_val, enum xstate_copy_mode copy_mode) { const unsigned int off_mxcsr = offsetof(struct fxregs_state, mxcsr); - struct xregs_state *xsave = &tsk->thread.fpu.state.xsave; struct xregs_state *xinit = &init_fpstate.xsave; struct xstate_header header; unsigned int zerofrom; @@ -1033,10 +1033,9 @@ void copy_xstate_to_uabi_buf(struct memb struct pkru_state pkru = {0}; /* * PKRU is not necessarily up to date in the - * thread's XSAVE buffer. Fill this part from the - * per-thread storage. + * XSAVE buffer. Use the provided value. */ - pkru.pkru = tsk->thread.pkru; + pkru.pkru = pkru_val; membuf_write(&to, &pkru, sizeof(pkru)); } else { copy_feature(header.xfeatures & BIT_ULL(i), &to, @@ -1056,6 +1055,25 @@ void copy_xstate_to_uabi_buf(struct memb membuf_zero(&to, to.left); } +/** + * copy_xstate_to_uabi_buf - Copy kernel saved xstate to a UABI buffer + * @to: membuf descriptor + * @tsk: The task from which to copy the saved xstate + * @copy_mode: The requested copy mode + * + * Converts from kernel XSAVE or XSAVES compacted format to UABI conforming + * format, i.e. from the kernel internal hardware dependent storage format + * to the requested @mode. UABI XSTATE is always uncompacted! 
+ * + * It supports partial copy but @to.pos always starts from zero. + */ +void copy_xstate_to_uabi_buf(struct membuf to, struct task_struct *tsk, + enum xstate_copy_mode copy_mode) +{ + __copy_xstate_to_uabi_buf(to, &tsk->thread.fpu.state.xsave, + tsk->thread.pkru, copy_mode); +} + static int copy_from_buffer(void *dst, unsigned int offset, unsigned int size, const void *kbuf, const void __user *ubuf) { --- a/arch/x86/kernel/fpu/xstate.h +++ b/arch/x86/kernel/fpu/xstate.h @@ -15,4 +15,7 @@ static inline void xstate_init_xcomp_bv( xsave->header.xcomp_bv = mask | XCOMP_BV_COMPACTED_FORMAT; } +extern void __copy_xstate_to_uabi_buf(struct membuf to, struct xregs_state *xsave, + u32 pkru_val, enum xstate_copy_mode copy_mode); + #endif From patchwork Tue Oct 12 00:00:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551187 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B9EE0C433F5 for ; Tue, 12 Oct 2021 00:06:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A1FCD60F92 for ; Tue, 12 Oct 2021 00:06:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235808AbhJLAIl (ORCPT ); Mon, 11 Oct 2021 20:08:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34038 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234350AbhJLAI3 (ORCPT ); Mon, 11 Oct 2021 20:08:29 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E7082C06161C; Mon, 11 Oct 2021 17:06:28 -0700 (PDT) Message-ID: <20211011223611.249593446@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997184; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=iciMW1uu9UaVhmocVfg8whl/1bZEWCnc4y/4+Y5pRtI=; b=0yE8IE4txsO2ziTfo6JNPxjA3Ue+vI+O0bBQ16BUnvTaXQQHU9Ib3RPnjwQT93USu1FWOL eFTW0tpo3Qz89eH7EXnCtuY/YU8xt+U+koaoPzcKD0uzP6Dsd502UFhtYGizmFIH3/z8kz y94bmEXz95NWpKO5e5rF1Rkxs82r3mNGCjb5V1eQ8LEfBLG0FZdFQLYNdDNx9bp+LN7Y0O qcxAdlrQXKCrdnPjfxwdgEt8hI845Rjm+T30mAxQKIbJqGP2WaVIf2RjA389ziRGcnApj8 nzo1XYfNDLmBOkdYdHAR1L9Bb/BKDVmLfF/CUQdFvQPnRtRSC+VYe7ZU9g6uUQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997184; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=iciMW1uu9UaVhmocVfg8whl/1bZEWCnc4y/4+Y5pRtI=; b=gyXnSgXT/8DA/Byg7GJDyZ8DZWS9KJmHzc3UmnhveJZlkVrTWgLNJUmK150LCj3LxqgH9T ZJWpO+n1DyCKe0Ag== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 16/31] x86/fpu: Replace KVMs homebrewn FPU copy to user References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:22 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Similar to the copy from user function the FPU core has this already implemented with all bells and whistels. 
Get rid of the duplicated code and use the core functionality. Signed-off-by: Thomas Gleixner Cc: kvm@vger.kernel.org Cc: Paolo Bonzini --- arch/x86/include/asm/fpu/api.h | 2 - arch/x86/kernel/fpu/core.c | 16 +++++++++++ arch/x86/kvm/x86.c | 56 ++--------------------------------------- 3 files changed, 20 insertions(+), 54 deletions(-) --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -116,7 +116,7 @@ extern void fpu_init_fpstate_user(struct /* KVM specific functions */ extern void fpu_swap_kvm_fpu(struct fpu *save, struct fpu *rstor, u64 restore_mask); -struct kvm_vcpu; extern int fpu_copy_kvm_uabi_to_vcpu(struct fpu *fpu, const void *buf, u64 xcr0, u32 *pkru); +extern void fpu_copy_vcpu_to_kvm_uabi(struct fpu *fpu, void *buf, unsigned int size, u32 pkru); #endif /* _ASM_X86_FPU_API_H */ --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -175,6 +175,22 @@ void fpu_swap_kvm_fpu(struct fpu *save, } EXPORT_SYMBOL_GPL(fpu_swap_kvm_fpu); +void fpu_copy_vcpu_to_kvm_uabi(struct fpu *fpu, void *buf, + unsigned int size, u32 pkru) +{ + union fpregs_state *kstate = &fpu->state; + union fpregs_state *ustate = buf; + struct membuf mb = { .p = buf, .left = size }; + + if (cpu_feature_enabled(X86_FEATURE_XSAVE)) { + __copy_xstate_to_uabi_buf(mb, &kstate->xsave, pkru, + XSTATE_COPY_XSAVE); + } else { + memcpy(&ustate->fxsave, &kstate->fxsave, sizeof(ustate->fxsave)); + } +} +EXPORT_SYMBOL_GPL(fpu_copy_vcpu_to_kvm_uabi); + int fpu_copy_kvm_uabi_to_vcpu(struct fpu *fpu, const void *buf, u64 xcr0, u32 *vpkru) { --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4695,65 +4695,15 @@ static int kvm_vcpu_ioctl_x86_set_debugr return 0; } -static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu) -{ - struct xregs_state *xsave = &vcpu->arch.guest_fpu->state.xsave; - u64 xstate_bv = xsave->header.xfeatures; - u64 valid; - - /* - * Copy legacy XSAVE area, to avoid complications with CPUID - * leaves 0 and 1 in the loop below. - */ - memcpy(dest, xsave, XSAVE_HDR_OFFSET); - - /* Set XSTATE_BV */ - xstate_bv &= vcpu->arch.guest_supported_xcr0 | XFEATURE_MASK_FPSSE; - *(u64 *)(dest + XSAVE_HDR_OFFSET) = xstate_bv; - - /* - * Copy each region from the possibly compacted offset to the - * non-compacted offset. 
- */ - valid = xstate_bv & ~XFEATURE_MASK_FPSSE; - while (valid) { - u32 size, offset, ecx, edx; - u64 xfeature_mask = valid & -valid; - int xfeature_nr = fls64(xfeature_mask) - 1; - void *src; - - cpuid_count(XSTATE_CPUID, xfeature_nr, - &size, &offset, &ecx, &edx); - - if (xfeature_nr == XFEATURE_PKRU) { - memcpy(dest + offset, &vcpu->arch.pkru, - sizeof(vcpu->arch.pkru)); - } else { - src = get_xsave_addr(xsave, xfeature_nr); - if (src) - memcpy(dest + offset, src, size); - } - - valid -= xfeature_mask; - } -} - static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu, struct kvm_xsave *guest_xsave) { if (!vcpu->arch.guest_fpu) return; - if (boot_cpu_has(X86_FEATURE_XSAVE)) { - memset(guest_xsave, 0, sizeof(struct kvm_xsave)); - fill_xsave((u8 *) guest_xsave->region, vcpu); - } else { - memcpy(guest_xsave->region, - &vcpu->arch.guest_fpu->state.fxsave, - sizeof(struct fxregs_state)); - *(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)] = - XFEATURE_MASK_FPSSE; - } + fpu_copy_vcpu_to_kvm_uabi(vcpu->arch.guest_fpu, guest_xsave->region, + sizeof(guest_xsave->region), + vcpu->arch.pkru); } static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu, From patchwork Tue Oct 12 00:00:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551217 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 19694C433FE for ; Tue, 12 Oct 2021 00:07:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F07FE6008E for ; Tue, 12 Oct 2021 00:07:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235877AbhJLAJe (ORCPT ); Mon, 11 Oct 2021 20:09:34 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51394 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231808AbhJLAIZ (ORCPT ); Mon, 11 Oct 2021 20:08:25 -0400 Message-ID: <20211011223611.308125747@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997182; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=Y6cv7iHtxYeZTvU/dVyQg/lgf+UNGIdtJn3H7eNlZtY=; b=ef7fnCdf63KiDZcX0JRHCgUo5+55U5FKcZ3vqIMeAlRBWYnXcj/QD8MmamMoXeQxFF89yx Fz+mFaUuNj/WXyidcob2ZFRmsg5xnG7Xl2E6EO9Rmgv/j4KGlhXUpXeeAqq115Tldxol+Q o6333tNbhPLETAXFu76mgQES87540jdoboLcoZLRdNY8F2mgGt39EDB8iRRaPzH5eL65S7 A67uHcvkD/vnq0ySm7HFX2lmP4JAEyBL3RP+17jIzi1i8+5Z1nUwWIpILnlQiZTTdes9aY A/R7EbXQLlJo0s54kTNa4kSmhYH3LnKQ3SuDuZavHTpAahQ/hSyLB7lAdt2JYA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997182; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=Y6cv7iHtxYeZTvU/dVyQg/lgf+UNGIdtJn3H7eNlZtY=; b=nwQPs3ElAl8xMV73iw8DvTAh0NBXJaGCySLYAiVkkOsejDYk4k9PznNCJSk4gtM0VHjwum gOGKCOG1ce8Oj7DA== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. 
Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 17/31] x86/fpu: Mark fpu__init_prepare_fx_sw_frame() as __init References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:24 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org No need to keep it around. Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/signal.h | 2 -- arch/x86/kernel/fpu/internal.h | 8 ++++++++ arch/x86/kernel/fpu/signal.c | 4 +++- arch/x86/kernel/fpu/xstate.c | 1 + 4 files changed, 12 insertions(+), 3 deletions(-) --- a/arch/x86/include/asm/fpu/signal.h +++ b/arch/x86/include/asm/fpu/signal.h @@ -31,6 +31,4 @@ fpu__alloc_mathframe(unsigned long sp, i unsigned long fpu__get_fpstate_size(void); -extern void fpu__init_prepare_fx_sw_frame(void); - #endif /* _ASM_X86_FPU_SIGNAL_H */ --- /dev/null +++ b/arch/x86/kernel/fpu/internal.h @@ -0,0 +1,8 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __X86_KERNEL_FPU_INTERNAL_H +#define __X86_KERNEL_FPU_INTERNAL_H + +/* Init functions */ +extern void fpu__init_prepare_fx_sw_frame(void); + +#endif --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c @@ -16,6 +16,8 @@ #include #include +#include "internal.h" + static struct _fpx_sw_bytes fx_sw_reserved __ro_after_init; static struct _fpx_sw_bytes fx_sw_reserved_ia32 __ro_after_init; @@ -514,7 +516,7 @@ unsigned long fpu__get_fpstate_size(void * This will be saved when ever the FP and extended state context is * saved on the user stack during the signal handler delivery to the user. */ -void fpu__init_prepare_fx_sw_frame(void) +void __init fpu__init_prepare_fx_sw_frame(void) { int size = fpu_user_xstate_size + FP_XSTATE_MAGIC2_SIZE; --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -19,6 +19,7 @@ #include +#include "internal.h" #include "xstate.h" #define for_each_extended_xfeature(bit, mask) \ From patchwork Tue Oct 12 00:00:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551185 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 04546C4332F for ; Tue, 12 Oct 2021 00:06:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E1EF961038 for ; Tue, 12 Oct 2021 00:06:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235843AbhJLAIo (ORCPT ); Mon, 11 Oct 2021 20:08:44 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51550 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233607AbhJLAI3 (ORCPT ); Mon, 11 Oct 2021 20:08:29 -0400 Message-ID: <20211011223611.366238313@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997185; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=f3VP54IkLXbEb26hhi82VGdpjCIjZgFZjEVfCyYt2Fw=; b=NzSzQ0ztq7fLv5bhCnK5br8bRZnbTr1pd1adGQRiul7L+GeY3ySgupn+Cd4RnuIreIiUOf lD/imwH3v2y21mTGv97PAxTZILkcFOwctzZ5lhQnAQAK7jyEDFQV80BdFGmXYcMMYMK4ov mTjy23xuvvHXuMxraSlbWoX2c8RPl4QlM+EIranW/LuoqK+yD5ezuiaaiRyhx84tefkDfs wV2bmtXktlcD2cAnIoQdkOkjKyN1Y+yfKSA6d7qYjLOxGdeaWYaG7OIc8esWmYkiKgaYEF 
FRABdsUd36lOtbLLWN967cwegmsPfid1dkTq6t4JKckQKTQ5wsp1DONL9A1hhg== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997185; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=f3VP54IkLXbEb26hhi82VGdpjCIjZgFZjEVfCyYt2Fw=; b=08jkdat4JviMl8stWrrF9MCYeaupfI5h/oj87mC/PyTAnzKEUxobZoQfk08OHxgNVGLAUo qJqKoqmuTNQPB4DQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 18/31] x86/fpu: Move context switch and exit to user inlines into sched.h References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:25 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org internal.h is a kitchen sink which needs to get out of the way to prepare for the upcoming changes. Move the context switch and exit to user inlines into a separate header, which is all that code needs. Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 60 ------------------------------- arch/x86/include/asm/fpu/sched.h | 68 ++++++++++++++++++++++++++++++++++++ arch/x86/kernel/fpu/core.c | 1 arch/x86/kernel/process.c | 2 - arch/x86/kernel/process_32.c | 2 - arch/x86/kernel/process_64.c | 2 - 6 files changed, 72 insertions(+), 63 deletions(-) --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -27,16 +27,11 @@ * High level FPU state handling functions: */ extern bool fpu__restore_sig(void __user *buf, int ia32_frame); -extern void fpu__drop(struct fpu *fpu); extern void fpu__clear_user_states(struct fpu *fpu); extern int fpu__exception_code(struct fpu *fpu, int trap_nr); extern void fpu_sync_fpstate(struct fpu *fpu); -/* Clone and exit operations */ -extern int fpu_clone(struct task_struct *dst, unsigned long clone_flags); -extern void fpu_flush_thread(void); - /* * Boot time FPU initialization functions: */ @@ -82,7 +77,6 @@ extern void fpstate_init_soft(struct swr #else static inline void fpstate_init_soft(struct swregs_state *soft) {} #endif -extern void save_fpregs_to_fpstate(struct fpu *fpu); /* * Returns 0 on success or the trap number when the operation raises an @@ -464,58 +458,4 @@ static inline void fpregs_restore_userre clear_thread_flag(TIF_NEED_FPU_LOAD); } -/* - * FPU state switching for scheduling. - * - * This is a two-stage process: - * - * - switch_fpu_prepare() saves the old state. - * This is done within the context of the old process. - * - * - switch_fpu_finish() sets TIF_NEED_FPU_LOAD; the floating point state - * will get loaded on return to userspace, or when the kernel needs it. - * - * If TIF_NEED_FPU_LOAD is cleared then the CPU's FPU registers - * are saved in the current thread's FPU register state. - * - * If TIF_NEED_FPU_LOAD is set then CPU's FPU registers may not - * hold current()'s FPU registers. It is required to load the - * registers before returning to userland or using the content - * otherwise. - * - * The FPU context is only stored/restored for a user task and - * PF_KTHREAD is used to distinguish between kernel and user threads. - */ -static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu) -{ - if (static_cpu_has(X86_FEATURE_FPU) && !(current->flags & PF_KTHREAD)) { - save_fpregs_to_fpstate(old_fpu); - /* - * The save operation preserved register state, so the - * fpu_fpregs_owner_ctx is still @old_fpu. 
Store the - * current CPU number in @old_fpu, so the next return - * to user space can avoid the FPU register restore - * when is returns on the same CPU and still owns the - * context. - */ - old_fpu->last_cpu = cpu; - - trace_x86_fpu_regs_deactivated(old_fpu); - } -} - -/* - * Misc helper functions: - */ - -/* - * Delay loading of the complete FPU state until the return to userland. - * PKRU is handled separately. - */ -static inline void switch_fpu_finish(void) -{ - if (cpu_feature_enabled(X86_FEATURE_FPU)) - set_thread_flag(TIF_NEED_FPU_LOAD); -} - #endif /* _ASM_X86_FPU_INTERNAL_H */ --- /dev/null +++ b/arch/x86/include/asm/fpu/sched.h @@ -0,0 +1,68 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_X86_FPU_SCHED_H +#define _ASM_X86_FPU_SCHED_H + +#include + +#include +#include + +#include + +extern void save_fpregs_to_fpstate(struct fpu *fpu); +extern void fpu__drop(struct fpu *fpu); +extern int fpu_clone(struct task_struct *dst, unsigned long clone_flags); +extern void fpu_flush_thread(void); + +/* + * FPU state switching for scheduling. + * + * This is a two-stage process: + * + * - switch_fpu_prepare() saves the old state. + * This is done within the context of the old process. + * + * - switch_fpu_finish() sets TIF_NEED_FPU_LOAD; the floating point state + * will get loaded on return to userspace, or when the kernel needs it. + * + * If TIF_NEED_FPU_LOAD is cleared then the CPU's FPU registers + * are saved in the current thread's FPU register state. + * + * If TIF_NEED_FPU_LOAD is set then CPU's FPU registers may not + * hold current()'s FPU registers. It is required to load the + * registers before returning to userland or using the content + * otherwise. + * + * The FPU context is only stored/restored for a user task and + * PF_KTHREAD is used to distinguish between kernel and user threads. + */ +static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu) +{ + if (cpu_feature_enabled(X86_FEATURE_FPU) && + !(current->flags & PF_KTHREAD)) { + save_fpregs_to_fpstate(old_fpu); + /* + * The save operation preserved register state, so the + * fpu_fpregs_owner_ctx is still @old_fpu. Store the + * current CPU number in @old_fpu, so the next return + * to user space can avoid the FPU register restore + * when is returns on the same CPU and still owns the + * context. + */ + old_fpu->last_cpu = cpu; + + trace_x86_fpu_regs_deactivated(old_fpu); + } +} + +/* + * Delay loading of the complete FPU state until the return to userland. + * PKRU is handled separately. 
+ */ +static inline void switch_fpu_finish(void) +{ + if (cpu_feature_enabled(X86_FEATURE_FPU)) + set_thread_flag(TIF_NEED_FPU_LOAD); +} + +#endif /* _ASM_X86_FPU_SCHED_H */ --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -8,6 +8,7 @@ */ #include #include +#include #include #include #include --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -30,7 +30,7 @@ #include #include #include -#include +#include #include #include #include --- a/arch/x86/kernel/process_32.c +++ b/arch/x86/kernel/process_32.c @@ -41,7 +41,7 @@ #include #include -#include +#include #include #include --- a/arch/x86/kernel/process_64.c +++ b/arch/x86/kernel/process_64.c @@ -42,7 +42,7 @@ #include #include -#include +#include #include #include #include From patchwork Tue Oct 12 00:00:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551183 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DCE24C433EF for ; Tue, 12 Oct 2021 00:06:43 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C69BD61027 for ; Tue, 12 Oct 2021 00:06:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235831AbhJLAIm (ORCPT ); Mon, 11 Oct 2021 20:08:42 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51392 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233868AbhJLAI3 (ORCPT ); Mon, 11 Oct 2021 20:08:29 -0400 Message-ID: <20211011223611.428058983@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997187; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=FrTsmTxLY4zfOsBVawKDcdgr8GHMDRr6FsbCRBLv7c8=; b=AJZj8LGJV9V7rypaeLzso38sXinKcIbf0lvzDsOzeQ1su5WdTa9lHkEvNK5lPNsTMhNN1i nLCGWjaU4CdrZ4+HN7hgRMHTZLbMYngky3jxl0v2xCedNLlSvAoEzmLMSHkqLwb27ooWzi d5yGi6O5/CuP/k9PqsxriS3Vb+cF76UMUGk8/DRduLLf13B0xyO6kuLsLtJo5hPQQPzaVV SHmUryfs+k1EGIhyjFUXF7/v145TcyX7yeawb6UG+v7iSmpMFXkX0UK2x75LlJb2IVdhwr I3PiMmXBEHxUXQIOEqbUlqP7SewAy3abqlMd9ctHf7mla4ezOJYZvbEfKPoW+Q== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997187; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=FrTsmTxLY4zfOsBVawKDcdgr8GHMDRr6FsbCRBLv7c8=; b=YLJojTzGPqnBX0Tcse7hEcrjuBuStsuS5+1Ngrr8mubTkRtX//sBhs4qRhkbs9BRbtPBDV ByNdNzCbLAv7RuCQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 19/31] x86/fpu: Clean up cpu feature tests References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:27 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Further disintegration of internal.h: Move the cpu feature tests to a core header and remove the unused one. 
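After the move the helpers are only visible inside the FPU core via the
local "internal.h" include. A minimal usage sketch, not part of this
patch, assuming a hypothetical file under arch/x86/kernel/fpu/ and the
helper signatures shown in the diff below:

	/* arch/x86/kernel/fpu/example.c -- hypothetical, illustration only */
	#include "internal.h"	/* provides use_xsave() and use_fxsr() */

	static void example_save_path(void)
	{
		if (use_xsave()) {
			/* XSAVE feature set available: take the xstate path */
		} else if (use_fxsr()) {
			/* fall back to FXSAVE/FXRSTOR */
		} else {
			/* legacy FNSAVE/FRSTOR only */
		}
	}

Code outside arch/x86/kernel/fpu/ can no longer reach these wrappers and
has to test the feature bits with cpu_feature_enabled() directly.
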
Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 18 ------------------ arch/x86/kernel/fpu/core.c | 1 + arch/x86/kernel/fpu/internal.h | 11 +++++++++++ arch/x86/kernel/fpu/regset.c | 2 ++ 4 files changed, 14 insertions(+), 18 deletions(-) --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -51,24 +51,6 @@ extern void fpu__resume_cpu(void); # define WARN_ON_FPU(x) ({ (void)(x); 0; }) #endif -/* - * FPU related CPU feature flag helper routines: - */ -static __always_inline __pure bool use_xsaveopt(void) -{ - return static_cpu_has(X86_FEATURE_XSAVEOPT); -} - -static __always_inline __pure bool use_xsave(void) -{ - return static_cpu_has(X86_FEATURE_XSAVE); -} - -static __always_inline __pure bool use_fxsr(void) -{ - return static_cpu_has(X86_FEATURE_FXSR); -} - extern union fpregs_state init_fpstate; extern void fpstate_init_user(union fpregs_state *state); --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -17,6 +17,7 @@ #include #include +#include "internal.h" #include "xstate.h" #define CREATE_TRACE_POINTS --- a/arch/x86/kernel/fpu/internal.h +++ b/arch/x86/kernel/fpu/internal.h @@ -2,6 +2,17 @@ #ifndef __X86_KERNEL_FPU_INTERNAL_H #define __X86_KERNEL_FPU_INTERNAL_H +/* CPU feature check wrappers */ +static __always_inline __pure bool use_xsave(void) +{ + return cpu_feature_enabled(X86_FEATURE_XSAVE); +} + +static __always_inline __pure bool use_fxsr(void) +{ + return cpu_feature_enabled(X86_FEATURE_FXSR); +} + /* Init functions */ extern void fpu__init_prepare_fx_sw_frame(void); --- a/arch/x86/kernel/fpu/regset.c +++ b/arch/x86/kernel/fpu/regset.c @@ -10,6 +10,8 @@ #include #include +#include "internal.h" + /* * The xstateregs_active() routine is the same as the regset_fpregs_active() routine, * as the "regset->n" for the xstate regset will be updated based on the feature From patchwork Tue Oct 12 00:00:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551195 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F348EC433F5 for ; Tue, 12 Oct 2021 00:06:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DE21B61027 for ; Tue, 12 Oct 2021 00:06:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235991AbhJLAIy (ORCPT ); Mon, 11 Oct 2021 20:08:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34040 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234869AbhJLAI3 (ORCPT ); Mon, 11 Oct 2021 20:08:29 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 06ABFC061745; Mon, 11 Oct 2021 17:06:28 -0700 (PDT) Message-ID: <20211011223611.488735966@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997185; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=RM4WwqDVFzMFQNaYplJ49a0++6qEuhrKPfh9LJhm0gY=; b=qfbNJ3nzl/9uZyg7e5mJlDMz7ka6lqSRKQPQtvnR5Z2bij+P/GJEOMW5RuUPCNjkla/xtt 
cFXWLXr+fioKa2TOzIupl4wGSLDguiNrDzXJSOyFzRy4CoWpA5z5bSNjruTzn0OC3LVR1v JepmA/INy/YuKP9FsjFAfq4gvQg2Hsq4hs4NZLEs04yUCRTTQEVZv0FQvO0Ws94ikKVAI8 ldadKtQVHf9CAiNNHrJlI1dGCcXQmy3m0sykVYCoc1a6xDSxGgMr5OEN27zNNFuPlz+PcK gTJZnbRhB57KgTta4IHNZ6S7omtARUUnpxV1H8JncpNtKOcBPUwtwDM+hsr78Q== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997185; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=RM4WwqDVFzMFQNaYplJ49a0++6qEuhrKPfh9LJhm0gY=; b=ret41enOBEKuHMRhOFxn80vdf5Ai1A5t7Bo/p+q6JsBdmBGivrkeFFIdHKaID5Q/6LAg7c GY/4pXn3eqNve1AQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 20/31] x86/fpu: Make os_xrstor_booting() private References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:28 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org It's only required in the xstate init code. Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 25 ------------------------- arch/x86/kernel/fpu/xstate.c | 23 +++++++++++++++++++++++ 2 files changed, 23 insertions(+), 25 deletions(-) --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -226,31 +226,6 @@ static inline void fxsave(struct fxregs_ : "memory") /* - * This function is called only during boot time when x86 caps are not set - * up and alternative can not be used yet. - */ -static inline void os_xrstor_booting(struct xregs_state *xstate) -{ - u64 mask = xfeatures_mask_fpstate(); - u32 lmask = mask; - u32 hmask = mask >> 32; - int err; - - WARN_ON(system_state != SYSTEM_BOOTING); - - if (boot_cpu_has(X86_FEATURE_XSAVES)) - XSTATE_OP(XRSTORS, xstate, lmask, hmask, err); - else - XSTATE_OP(XRSTOR, xstate, lmask, hmask, err); - - /* - * We should never fault when copying from a kernel buffer, and the FPU - * state we set at boot time should be valid. - */ - WARN_ON_FPU(err); -} - -/* * Save processor xstate to xsave area. * * Uses either XSAVE or XSAVEOPT or XSAVES depending on the CPU features --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -351,6 +351,29 @@ static void __init print_xstate_offset_s } /* + * This function is called only during boot time when x86 caps are not set + * up and alternative can not be used yet. + */ +static __init void os_xrstor_booting(struct xregs_state *xstate) +{ + u64 mask = xfeatures_mask_fpstate(); + u32 lmask = mask; + u32 hmask = mask >> 32; + int err; + + if (cpu_feature_enabled(X86_FEATURE_XSAVES)) + XSTATE_OP(XRSTORS, xstate, lmask, hmask, err); + else + XSTATE_OP(XRSTOR, xstate, lmask, hmask, err); + + /* + * We should never fault when copying from a kernel buffer, and the FPU + * state we set at boot time should be valid. + */ + WARN_ON_FPU(err); +} + +/* * All supported features have either init state all zeros or are * handled in setup_init_fpu() individually. 
This is an explicit * feature list and does not use XFEATURE_MASK*SUPPORTED to catch From patchwork Tue Oct 12 00:00:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551193 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 81B5BC433EF for ; Tue, 12 Oct 2021 00:06:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4BD8B61074 for ; Tue, 12 Oct 2021 00:06:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235960AbhJLAIx (ORCPT ); Mon, 11 Oct 2021 20:08:53 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51510 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233082AbhJLAI3 (ORCPT ); Mon, 11 Oct 2021 20:08:29 -0400 Message-ID: <20211011223611.547448596@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997184; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=9w0NWhL6SdTxpUmLrspNNpCJt4o6+e7f2hTCoWcjDwY=; b=hnsTmwCHS/2H/bKEpnAGHmWnaTwZuG6d+qDCjs6ocnFRUNal/MXv/n9QUYpYNgi/hJ0EaE 3TYZeNWftAIkTXGjAHVQkaJuH3PQDRroYBqoNSRImAqdA8/pxUWYYggKWWFL01PC4vlV7H YD6LaE8L3syn0zt0ogsREW07ch+5UOyrXOUKGkzgCDDPD+prkm2oaCyfWCicbjWp4wT5zl GojPKndbwN3hGexWUKuX+s+9PwyU72QEu+/2EznACC3R7jzTJhoaMhg9FsWUG1uC//Pwsv LB32RVddEw01m3kKY1BeQeWB7TfJ91wYMXKcHKN9j/+bKWop7MrLD58x5FAzrw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997184; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=9w0NWhL6SdTxpUmLrspNNpCJt4o6+e7f2hTCoWcjDwY=; b=s+h/JTnh9FWBJfE+9eD55AvvbkB4jXZgjdIKDoLFDPzkDAqL9sZQYLy4H7R4a/YKbw3b6j 2fExf71+axguT2CQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 21/31] x86/fpu: Move os_xsave() and os_xrstor() to core References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:30 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Nothing outside the core code needs these. Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 165 ---------------------------------- arch/x86/include/asm/fpu/xstate.h | 6 - arch/x86/kernel/fpu/signal.c | 1 arch/x86/kernel/fpu/xstate.h | 174 ++++++++++++++++++++++++++++++++++++ 4 files changed, 175 insertions(+), 171 deletions(-) --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -161,171 +161,6 @@ static inline void fxsave(struct fxregs_ asm volatile("fxsaveq %[fx]" : [fx] "=m" (*fx)); } -/* These macros all use (%edi)/(%rdi) as the single memory argument. */ -#define XSAVE ".byte " REX_PREFIX "0x0f,0xae,0x27" -#define XSAVEOPT ".byte " REX_PREFIX "0x0f,0xae,0x37" -#define XSAVES ".byte " REX_PREFIX "0x0f,0xc7,0x2f" -#define XRSTOR ".byte " REX_PREFIX "0x0f,0xae,0x2f" -#define XRSTORS ".byte " REX_PREFIX "0x0f,0xc7,0x1f" - -/* - * After this @err contains 0 on success or the trap number when the - * operation raises an exception. 
- */ -#define XSTATE_OP(op, st, lmask, hmask, err) \ - asm volatile("1:" op "\n\t" \ - "xor %[err], %[err]\n" \ - "2:\n\t" \ - _ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_FAULT_MCE_SAFE) \ - : [err] "=a" (err) \ - : "D" (st), "m" (*st), "a" (lmask), "d" (hmask) \ - : "memory") - -/* - * If XSAVES is enabled, it replaces XSAVEOPT because it supports a compact - * format and supervisor states in addition to modified optimization in - * XSAVEOPT. - * - * Otherwise, if XSAVEOPT is enabled, XSAVEOPT replaces XSAVE because XSAVEOPT - * supports modified optimization which is not supported by XSAVE. - * - * We use XSAVE as a fallback. - * - * The 661 label is defined in the ALTERNATIVE* macros as the address of the - * original instruction which gets replaced. We need to use it here as the - * address of the instruction where we might get an exception at. - */ -#define XSTATE_XSAVE(st, lmask, hmask, err) \ - asm volatile(ALTERNATIVE_2(XSAVE, \ - XSAVEOPT, X86_FEATURE_XSAVEOPT, \ - XSAVES, X86_FEATURE_XSAVES) \ - "\n" \ - "xor %[err], %[err]\n" \ - "3:\n" \ - ".pushsection .fixup,\"ax\"\n" \ - "4: movl $-2, %[err]\n" \ - "jmp 3b\n" \ - ".popsection\n" \ - _ASM_EXTABLE(661b, 4b) \ - : [err] "=r" (err) \ - : "D" (st), "m" (*st), "a" (lmask), "d" (hmask) \ - : "memory") - -/* - * Use XRSTORS to restore context if it is enabled. XRSTORS supports compact - * XSAVE area format. - */ -#define XSTATE_XRESTORE(st, lmask, hmask) \ - asm volatile(ALTERNATIVE(XRSTOR, \ - XRSTORS, X86_FEATURE_XSAVES) \ - "\n" \ - "3:\n" \ - _ASM_EXTABLE_TYPE(661b, 3b, EX_TYPE_FPU_RESTORE) \ - : \ - : "D" (st), "m" (*st), "a" (lmask), "d" (hmask) \ - : "memory") - -/* - * Save processor xstate to xsave area. - * - * Uses either XSAVE or XSAVEOPT or XSAVES depending on the CPU features - * and command line options. The choice is permanent until the next reboot. - */ -static inline void os_xsave(struct xregs_state *xstate) -{ - u64 mask = xfeatures_mask_all; - u32 lmask = mask; - u32 hmask = mask >> 32; - int err; - - WARN_ON_FPU(!alternatives_patched); - - XSTATE_XSAVE(xstate, lmask, hmask, err); - - /* We should never fault when copying to a kernel buffer: */ - WARN_ON_FPU(err); -} - -/* - * Restore processor xstate from xsave area. - * - * Uses XRSTORS when XSAVES is used, XRSTOR otherwise. - */ -static inline void os_xrstor(struct xregs_state *xstate, u64 mask) -{ - u32 lmask = mask; - u32 hmask = mask >> 32; - - XSTATE_XRESTORE(xstate, lmask, hmask); -} - -/* - * Save xstate to user space xsave area. - * - * We don't use modified optimization because xrstor/xrstors might track - * a different application. - * - * We don't use compacted format xsave area for backward compatibility for - * old applications which don't understand the compacted format of the - * xsave area. - * - * The caller has to zero buf::header before calling this because XSAVE* - * does not touch the reserved fields in the header. - */ -static inline int xsave_to_user_sigframe(struct xregs_state __user *buf) -{ - /* - * Include the features which are not xsaved/rstored by the kernel - * internally, e.g. PKRU. That's user space ABI and also required - * to allow the signal handler to modify PKRU. - */ - u64 mask = xfeatures_mask_uabi(); - u32 lmask = mask; - u32 hmask = mask >> 32; - int err; - - stac(); - XSTATE_OP(XSAVE, buf, lmask, hmask, err); - clac(); - - return err; -} - -/* - * Restore xstate from user space xsave area. 
- */ -static inline int xrstor_from_user_sigframe(struct xregs_state __user *buf, u64 mask) -{ - struct xregs_state *xstate = ((__force struct xregs_state *)buf); - u32 lmask = mask; - u32 hmask = mask >> 32; - int err; - - stac(); - XSTATE_OP(XRSTOR, xstate, lmask, hmask, err); - clac(); - - return err; -} - -/* - * Restore xstate from kernel space xsave area, return an error code instead of - * an exception. - */ -static inline int os_xrstor_safe(struct xregs_state *xstate, u64 mask) -{ - u32 lmask = mask; - u32 hmask = mask >> 32; - int err; - - if (cpu_feature_enabled(X86_FEATURE_XSAVES)) - XSTATE_OP(XRSTORS, xstate, lmask, hmask, err); - else - XSTATE_OP(XRSTOR, xstate, lmask, hmask, err); - - return err; -} - extern void restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask); extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size); --- a/arch/x86/include/asm/fpu/xstate.h +++ b/arch/x86/include/asm/fpu/xstate.h @@ -78,12 +78,6 @@ XFEATURE_MASK_INDEPENDENT | \ XFEATURE_MASK_SUPERVISOR_UNSUPPORTED) -#ifdef CONFIG_X86_64 -#define REX_PREFIX "0x48, " -#else -#define REX_PREFIX -#endif - extern u64 xfeatures_mask_all; static inline u64 xfeatures_mask_supervisor(void) --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c @@ -17,6 +17,7 @@ #include #include "internal.h" +#include "xstate.h" static struct _fpx_sw_bytes fx_sw_reserved __ro_after_init; static struct _fpx_sw_bytes fx_sw_reserved_ia32 __ro_after_init; --- a/arch/x86/kernel/fpu/xstate.h +++ b/arch/x86/kernel/fpu/xstate.h @@ -18,4 +18,178 @@ static inline void xstate_init_xcomp_bv( extern void __copy_xstate_to_uabi_buf(struct membuf to, struct xregs_state *xsave, u32 pkru_val, enum xstate_copy_mode copy_mode); +/* XSAVE/XRSTOR wrapper functions */ + +#ifdef CONFIG_X86_64 +#define REX_PREFIX "0x48, " +#else +#define REX_PREFIX +#endif + +/* These macros all use (%edi)/(%rdi) as the single memory argument. */ +#define XSAVE ".byte " REX_PREFIX "0x0f,0xae,0x27" +#define XSAVEOPT ".byte " REX_PREFIX "0x0f,0xae,0x37" +#define XSAVES ".byte " REX_PREFIX "0x0f,0xc7,0x2f" +#define XRSTOR ".byte " REX_PREFIX "0x0f,0xae,0x2f" +#define XRSTORS ".byte " REX_PREFIX "0x0f,0xc7,0x1f" + +/* + * After this @err contains 0 on success or the trap number when the + * operation raises an exception. + */ +#define XSTATE_OP(op, st, lmask, hmask, err) \ + asm volatile("1:" op "\n\t" \ + "xor %[err], %[err]\n" \ + "2:\n\t" \ + _ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_FAULT_MCE_SAFE) \ + : [err] "=a" (err) \ + : "D" (st), "m" (*st), "a" (lmask), "d" (hmask) \ + : "memory") + +/* + * If XSAVES is enabled, it replaces XSAVEOPT because it supports a compact + * format and supervisor states in addition to modified optimization in + * XSAVEOPT. + * + * Otherwise, if XSAVEOPT is enabled, XSAVEOPT replaces XSAVE because XSAVEOPT + * supports modified optimization which is not supported by XSAVE. + * + * We use XSAVE as a fallback. + * + * The 661 label is defined in the ALTERNATIVE* macros as the address of the + * original instruction which gets replaced. We need to use it here as the + * address of the instruction where we might get an exception at. 
+ */ +#define XSTATE_XSAVE(st, lmask, hmask, err) \ + asm volatile(ALTERNATIVE_2(XSAVE, \ + XSAVEOPT, X86_FEATURE_XSAVEOPT, \ + XSAVES, X86_FEATURE_XSAVES) \ + "\n" \ + "xor %[err], %[err]\n" \ + "3:\n" \ + ".pushsection .fixup,\"ax\"\n" \ + "4: movl $-2, %[err]\n" \ + "jmp 3b\n" \ + ".popsection\n" \ + _ASM_EXTABLE(661b, 4b) \ + : [err] "=r" (err) \ + : "D" (st), "m" (*st), "a" (lmask), "d" (hmask) \ + : "memory") + +/* + * Use XRSTORS to restore context if it is enabled. XRSTORS supports compact + * XSAVE area format. + */ +#define XSTATE_XRESTORE(st, lmask, hmask) \ + asm volatile(ALTERNATIVE(XRSTOR, \ + XRSTORS, X86_FEATURE_XSAVES) \ + "\n" \ + "3:\n" \ + _ASM_EXTABLE_TYPE(661b, 3b, EX_TYPE_FPU_RESTORE) \ + : \ + : "D" (st), "m" (*st), "a" (lmask), "d" (hmask) \ + : "memory") + +/* + * Save processor xstate to xsave area. + * + * Uses either XSAVE or XSAVEOPT or XSAVES depending on the CPU features + * and command line options. The choice is permanent until the next reboot. + */ +static inline void os_xsave(struct xregs_state *xstate) +{ + u64 mask = xfeatures_mask_all; + u32 lmask = mask; + u32 hmask = mask >> 32; + int err; + + WARN_ON_FPU(!alternatives_patched); + + XSTATE_XSAVE(xstate, lmask, hmask, err); + + /* We should never fault when copying to a kernel buffer: */ + WARN_ON_FPU(err); +} + +/* + * Restore processor xstate from xsave area. + * + * Uses XRSTORS when XSAVES is used, XRSTOR otherwise. + */ +static inline void os_xrstor(struct xregs_state *xstate, u64 mask) +{ + u32 lmask = mask; + u32 hmask = mask >> 32; + + XSTATE_XRESTORE(xstate, lmask, hmask); +} + +/* + * Save xstate to user space xsave area. + * + * We don't use modified optimization because xrstor/xrstors might track + * a different application. + * + * We don't use compacted format xsave area for backward compatibility for + * old applications which don't understand the compacted format of the + * xsave area. + * + * The caller has to zero buf::header before calling this because XSAVE* + * does not touch the reserved fields in the header. + */ +static inline int xsave_to_user_sigframe(struct xregs_state __user *buf) +{ + /* + * Include the features which are not xsaved/rstored by the kernel + * internally, e.g. PKRU. That's user space ABI and also required + * to allow the signal handler to modify PKRU. + */ + u64 mask = xfeatures_mask_uabi(); + u32 lmask = mask; + u32 hmask = mask >> 32; + int err; + + stac(); + XSTATE_OP(XSAVE, buf, lmask, hmask, err); + clac(); + + return err; +} + +/* + * Restore xstate from user space xsave area. + */ +static inline int xrstor_from_user_sigframe(struct xregs_state __user *buf, u64 mask) +{ + struct xregs_state *xstate = ((__force struct xregs_state *)buf); + u32 lmask = mask; + u32 hmask = mask >> 32; + int err; + + stac(); + XSTATE_OP(XRSTOR, xstate, lmask, hmask, err); + clac(); + + return err; +} + +/* + * Restore xstate from kernel space xsave area, return an error code instead of + * an exception. 
+ */ +static inline int os_xrstor_safe(struct xregs_state *xstate, u64 mask) +{ + u32 lmask = mask; + u32 hmask = mask >> 32; + int err; + + if (cpu_feature_enabled(X86_FEATURE_XSAVES)) + XSTATE_OP(XRSTORS, xstate, lmask, hmask, err); + else + XSTATE_OP(XRSTOR, xstate, lmask, hmask, err); + + return err; +} + + #endif From patchwork Tue Oct 12 00:00:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551169 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 53C72C433EF for ; Tue, 12 Oct 2021 00:06:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3119261038 for ; Tue, 12 Oct 2021 00:06:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235249AbhJLAId (ORCPT ); Mon, 11 Oct 2021 20:08:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34010 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231671AbhJLAIY (ORCPT ); Mon, 11 Oct 2021 20:08:24 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B9458C061749; Mon, 11 Oct 2021 17:06:23 -0700 (PDT) Message-ID: <20211011223611.607783558@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997181; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=7L/7euo9n/CNBdDDKwpcc2W3IjYL9EGRwYmm+UwXFxU=; b=isjfMHPFjDEa6xhr0K+gQrJX1Gwoow/eYK+kLazePctS3q+ANPbSHopk3BDaVW0hUXTcrF OloG8mV3nJ6YX9JGTGzCHYkzzL8U3hfy5gQXBVFXsW/RTisF/WWct1UXQ1m6AE9aNKFyRt JDhux3qBz7xKtkY629LCLgM1dYtHJfYPDDQW99YUMnzqw1R4MHJyr188L4/divhn5eDxby p+6pLVismcmo0iLFkjytBNi0lClTIPVa5W1qANdxMw1fsie6TIT5FxL+9WqYRzwhy/AQgG UFy7bDV2JKfSHLMpa6mScUjcAMO5Z2gyfilMZ9vABl525G9A5UucK2KxVihphQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997181; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=7L/7euo9n/CNBdDDKwpcc2W3IjYL9EGRwYmm+UwXFxU=; b=Strfvt3v+WsA35ZLLcwQQ8moVj1Ub/IGyGOQ8SPNILKYLF/upykI0ahriTLPG5X8yU3Jwu pVTGRs/spe4jwwAQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 22/31] x86/fpu: Move legacy ASM wrappers to core References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:31 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Nothing outside the core code requires them. 
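The XSAVE family wrappers shown above (XSTATE_OP(), os_xsave(), xsave_to_user_sigframe(), os_xrstor_safe()) all split the requested feature bitmap into lmask/hmask because the hardware instructions consume it in EDX:EAX. A minimal user-space sketch of that split, with a hypothetical feature selection and not taken from the kernel sources:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical selection: x87 (bit 0), SSE (bit 1), PKRU (bit 9) */
	uint64_t mask  = (1ULL << 0) | (1ULL << 1) | (1ULL << 9);
	uint32_t lmask = (uint32_t)mask;         /* low half, goes into EAX */
	uint32_t hmask = (uint32_t)(mask >> 32); /* high half, goes into EDX */

	printf("mask=%#llx lmask=%#x hmask=%#x\n",
	       (unsigned long long)mask, lmask, hmask);
	return 0;
}

The same EDX:EAX convention is why os_xrstor_safe() above hands lmask/hmask to the XSTATE_OP() invocation whether it ends up executing XRSTORS or XRSTOR.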
Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 101 --------------------------------- arch/x86/kernel/fpu/core.c | 1 arch/x86/kernel/fpu/legacy.h | 108 ++++++++++++++++++++++++++++++++++++ arch/x86/kernel/fpu/signal.c | 1 arch/x86/kernel/fpu/xstate.c | 1 5 files changed, 111 insertions(+), 101 deletions(-) --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -60,107 +60,6 @@ extern void fpstate_init_soft(struct swr static inline void fpstate_init_soft(struct swregs_state *soft) {} #endif -/* - * Returns 0 on success or the trap number when the operation raises an - * exception. - */ -#define user_insn(insn, output, input...) \ -({ \ - int err; \ - \ - might_fault(); \ - \ - asm volatile(ASM_STAC "\n" \ - "1: " #insn "\n" \ - "2: " ASM_CLAC "\n" \ - _ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_FAULT_MCE_SAFE) \ - : [err] "=a" (err), output \ - : "0"(0), input); \ - err; \ -}) - -#define kernel_insn_err(insn, output, input...) \ -({ \ - int err; \ - asm volatile("1:" #insn "\n\t" \ - "2:\n" \ - ".section .fixup,\"ax\"\n" \ - "3: movl $-1,%[err]\n" \ - " jmp 2b\n" \ - ".previous\n" \ - _ASM_EXTABLE(1b, 3b) \ - : [err] "=r" (err), output \ - : "0"(0), input); \ - err; \ -}) - -#define kernel_insn(insn, output, input...) \ - asm volatile("1:" #insn "\n\t" \ - "2:\n" \ - _ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_FPU_RESTORE) \ - : output : input) - -static inline int fnsave_to_user_sigframe(struct fregs_state __user *fx) -{ - return user_insn(fnsave %[fx]; fwait, [fx] "=m" (*fx), "m" (*fx)); -} - -static inline int fxsave_to_user_sigframe(struct fxregs_state __user *fx) -{ - if (IS_ENABLED(CONFIG_X86_32)) - return user_insn(fxsave %[fx], [fx] "=m" (*fx), "m" (*fx)); - else - return user_insn(fxsaveq %[fx], [fx] "=m" (*fx), "m" (*fx)); - -} - -static inline void fxrstor(struct fxregs_state *fx) -{ - if (IS_ENABLED(CONFIG_X86_32)) - kernel_insn(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx)); - else - kernel_insn(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx)); -} - -static inline int fxrstor_safe(struct fxregs_state *fx) -{ - if (IS_ENABLED(CONFIG_X86_32)) - return kernel_insn_err(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx)); - else - return kernel_insn_err(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx)); -} - -static inline int fxrstor_from_user_sigframe(struct fxregs_state __user *fx) -{ - if (IS_ENABLED(CONFIG_X86_32)) - return user_insn(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx)); - else - return user_insn(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx)); -} - -static inline void frstor(struct fregs_state *fx) -{ - kernel_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx)); -} - -static inline int frstor_safe(struct fregs_state *fx) -{ - return kernel_insn_err(frstor %[fx], "=m" (*fx), [fx] "m" (*fx)); -} - -static inline int frstor_from_user_sigframe(struct fregs_state __user *fx) -{ - return user_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx)); -} - -static inline void fxsave(struct fxregs_state *fx) -{ - if (IS_ENABLED(CONFIG_X86_32)) - asm volatile( "fxsave %[fx]" : [fx] "=m" (*fx)); - else - asm volatile("fxsaveq %[fx]" : [fx] "=m" (*fx)); -} - extern void restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask); extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size); --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -18,6 +18,7 @@ #include #include "internal.h" +#include "legacy.h" #include "xstate.h" #define CREATE_TRACE_POINTS --- /dev/null +++ b/arch/x86/kernel/fpu/legacy.h @@ -0,0 +1,108 @@ +/* 
SPDX-License-Identifier: GPL-2.0 */ +#ifndef __X86_KERNEL_FPU_LEGACY_H +#define __X86_KERNEL_FPU_LEGACY_H + +#include + +/* + * Returns 0 on success or the trap number when the operation raises an + * exception. + */ +#define user_insn(insn, output, input...) \ +({ \ + int err; \ + \ + might_fault(); \ + \ + asm volatile(ASM_STAC "\n" \ + "1: " #insn "\n" \ + "2: " ASM_CLAC "\n" \ + _ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_FAULT_MCE_SAFE) \ + : [err] "=a" (err), output \ + : "0"(0), input); \ + err; \ +}) + +#define kernel_insn_err(insn, output, input...) \ +({ \ + int err; \ + asm volatile("1:" #insn "\n\t" \ + "2:\n" \ + ".section .fixup,\"ax\"\n" \ + "3: movl $-1,%[err]\n" \ + " jmp 2b\n" \ + ".previous\n" \ + _ASM_EXTABLE(1b, 3b) \ + : [err] "=r" (err), output \ + : "0"(0), input); \ + err; \ +}) + +#define kernel_insn(insn, output, input...) \ + asm volatile("1:" #insn "\n\t" \ + "2:\n" \ + _ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_FPU_RESTORE) \ + : output : input) + +static inline int fnsave_to_user_sigframe(struct fregs_state __user *fx) +{ + return user_insn(fnsave %[fx]; fwait, [fx] "=m" (*fx), "m" (*fx)); +} + +static inline int fxsave_to_user_sigframe(struct fxregs_state __user *fx) +{ + if (IS_ENABLED(CONFIG_X86_32)) + return user_insn(fxsave %[fx], [fx] "=m" (*fx), "m" (*fx)); + else + return user_insn(fxsaveq %[fx], [fx] "=m" (*fx), "m" (*fx)); + +} + +static inline void fxrstor(struct fxregs_state *fx) +{ + if (IS_ENABLED(CONFIG_X86_32)) + kernel_insn(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx)); + else + kernel_insn(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx)); +} + +static inline int fxrstor_safe(struct fxregs_state *fx) +{ + if (IS_ENABLED(CONFIG_X86_32)) + return kernel_insn_err(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx)); + else + return kernel_insn_err(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx)); +} + +static inline int fxrstor_from_user_sigframe(struct fxregs_state __user *fx) +{ + if (IS_ENABLED(CONFIG_X86_32)) + return user_insn(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx)); + else + return user_insn(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx)); +} + +static inline void frstor(struct fregs_state *fx) +{ + kernel_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx)); +} + +static inline int frstor_safe(struct fregs_state *fx) +{ + return kernel_insn_err(frstor %[fx], "=m" (*fx), [fx] "m" (*fx)); +} + +static inline int frstor_from_user_sigframe(struct fregs_state __user *fx) +{ + return user_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx)); +} + +static inline void fxsave(struct fxregs_state *fx) +{ + if (IS_ENABLED(CONFIG_X86_32)) + asm volatile( "fxsave %[fx]" : [fx] "=m" (*fx)); + else + asm volatile("fxsaveq %[fx]" : [fx] "=m" (*fx)); +} + +#endif --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c @@ -17,6 +17,7 @@ #include #include "internal.h" +#include "legacy.h" #include "xstate.h" static struct _fpx_sw_bytes fx_sw_reserved __ro_after_init; --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -20,6 +20,7 @@ #include #include "internal.h" +#include "legacy.h" #include "xstate.h" #define for_each_extended_xfeature(bit, mask) \ From patchwork Tue Oct 12 00:00:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551167 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 
A7DABC433F5 for ; Tue, 12 Oct 2021 00:06:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7DA4660F92 for ; Tue, 12 Oct 2021 00:06:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231851AbhJLAIc (ORCPT ); Mon, 11 Oct 2021 20:08:32 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51276 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231292AbhJLAIY (ORCPT ); Mon, 11 Oct 2021 20:08:24 -0400 Message-ID: <20211011223611.667144933@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997180; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=cfXC2sFV/i3NzxBTHk1PxgvXSNyPHXcEHXmZOp7DfuY=; b=RpYyauXsM/S2hlkcwjRgzawm+NOAPXepP47Y3DeAbXPdMOlar6ETJdAf03TtKwxCOaB1gx ulbK/glAZR6JF8eQ/oTG8YSnCDFY11+Uyqki43/BHWR00qXLfVK6NuM4D+EAYVovTk2M5n 6bJy2DeVx+kQHx7PD7xZcq8hcK5U0ORjiJvMZHtr+XLbPLz869uS+hNbthZpU7xdwyc6xy AaGUvT3RmtvyM63FLf3seqebIDr1edClIafsfknGYxDlZfBqNu4uxvZrfjjwSfemWuc5JY 9ED2Yu2EphnOO2EMO1CkE6/DInDXC/eaxaZ7Qpv18Ms+uNj4SIZ9ZX5g40ndlw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997180; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=cfXC2sFV/i3NzxBTHk1PxgvXSNyPHXcEHXmZOp7DfuY=; b=OzkSw4mVDMi/ZWB4KFETgypYK1Nz5fX886jnFrmpT19CrNFhFXdb3Yl32RXF53eh6PMX1M apMDKQOro9xnVoDQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 23/31] x86/fpu: Make WARN_ON_FPU() private References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:33 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org No point in being in global headers. 
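WARN_ON_FPU() expands to WARN_ON_ONCE() when CONFIG_X86_DEBUG_FPU is enabled, and otherwise to a statement expression that merely evaluates its argument, so moving it into the core-private header changes visibility, not behaviour. A rough user-space approximation of the pattern, assuming a GCC/Clang statement-expression extension; DEBUG_FPU and the fprintf() call stand in for the kernel's config option and WARN machinery and are not real kernel interfaces:

#include <stdio.h>

#define DEBUG_FPU 1	/* stand-in for CONFIG_X86_DEBUG_FPU */

#if DEBUG_FPU
# define WARN_ON_FPU(x)						\
({								\
	int __ret = !!(x);					\
	if (__ret)						\
		fprintf(stderr, "FPU warning: %s\n", #x);	\
	__ret;							\
})
#else
/* Still evaluates x, so side effects are preserved in non-debug builds. */
# define WARN_ON_FPU(x) ({ (void)(x); 0; })
#endif

int main(void)
{
	int err = -1;		/* pretend a restore reported an error */

	WARN_ON_FPU(err);	/* prints only in debug builds */
	return 0;
}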
Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 9 --------- arch/x86/kernel/fpu/init.c | 2 ++ arch/x86/kernel/fpu/internal.h | 6 ++++++ 3 files changed, 8 insertions(+), 9 deletions(-) --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -42,15 +42,6 @@ extern void fpu__init_system(struct cpui extern void fpu__init_check_bugs(void); extern void fpu__resume_cpu(void); -/* - * Debugging facility: - */ -#ifdef CONFIG_X86_DEBUG_FPU -# define WARN_ON_FPU(x) WARN_ON_ONCE(x) -#else -# define WARN_ON_FPU(x) ({ (void)(x); 0; }) -#endif - extern union fpregs_state init_fpstate; extern void fpstate_init_user(union fpregs_state *state); --- a/arch/x86/kernel/fpu/init.c +++ b/arch/x86/kernel/fpu/init.c @@ -10,6 +10,8 @@ #include #include +#include "internal.h" + /* * Initialize the registers found in all CPUs, CR0 and CR4: */ --- a/arch/x86/kernel/fpu/internal.h +++ b/arch/x86/kernel/fpu/internal.h @@ -13,6 +13,12 @@ static __always_inline __pure bool use_f return cpu_feature_enabled(X86_FEATURE_FXSR); } +#ifdef CONFIG_X86_DEBUG_FPU +# define WARN_ON_FPU(x) WARN_ON_ONCE(x) +#else +# define WARN_ON_FPU(x) ({ (void)(x); 0; }) +#endif + /* Init functions */ extern void fpu__init_prepare_fx_sw_frame(void); From patchwork Tue Oct 12 00:00:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551173 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A6B41C4332F for ; Tue, 12 Oct 2021 00:06:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8ECCB610A4 for ; Tue, 12 Oct 2021 00:06:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233936AbhJLAI3 (ORCPT ); Mon, 11 Oct 2021 20:08:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33996 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230351AbhJLAIX (ORCPT ); Mon, 11 Oct 2021 20:08:23 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 12F03C061570; Mon, 11 Oct 2021 17:06:22 -0700 (PDT) Message-ID: <20211011223611.727493295@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997180; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=1r7yu5vPXy5SyZnbjR2SneOXObpzizgNxt6TRa0bb3Y=; b=EsJpGlx7k28MLM28soBIOU/nF1Pxbxe/6ynmcrKN6410Gy4yVh4a6npib47T8BHXIFvpU9 dKPXFUBpxxL/r130VnY6cRgFm3grdNXYihQlWqUXD81VNWe1K4hP6kMefy3wQdviICExXV iKovkRazh3xRi+nhSdW1Bi0wzNjPOS6COKxBmxOdHdoYWRne/nEbaNuhl6icTuv1yIwv1k bt24pSCJNvfBQlcdY5eLCYCoJPnNkl7smgH3QlX4SO0z/PpyIgMDbmZDaZLTiMNAgUrT3k CqMbykhWN041EiOsT4EI2rnaUiGnFQDvm+AZIu+dd7Eiqh0QVa4U18+h2x7wrQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997180; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=1r7yu5vPXy5SyZnbjR2SneOXObpzizgNxt6TRa0bb3Y=; b=xZJFmxLS5rl/XeCZBMM8bDZ/qHCiB6opt8STgu04uh8Hnb8uu7y1qbwXeJITxaOoTIFdhM dHGMcHQOXo46YkCQ== 
From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 24/31] x86/fpu: Move fpregs_restore_userregs() to core References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:34 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Only used core internaly. Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 83 ----------------------------------- arch/x86/kernel/fpu/context.h | 85 ++++++++++++++++++++++++++++++++++++ arch/x86/kernel/fpu/core.c | 1 arch/x86/kernel/fpu/regset.c | 1 arch/x86/kernel/fpu/signal.c | 1 5 files changed, 88 insertions(+), 83 deletions(-) --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -55,89 +55,6 @@ extern void restore_fpregs_from_fpstate( extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size); -/* - * FPU context switch related helper methods: - */ - DECLARE_PER_CPU(struct fpu *, fpu_fpregs_owner_ctx); -/* - * The in-register FPU state for an FPU context on a CPU is assumed to be - * valid if the fpu->last_cpu matches the CPU, and the fpu_fpregs_owner_ctx - * matches the FPU. - * - * If the FPU register state is valid, the kernel can skip restoring the - * FPU state from memory. - * - * Any code that clobbers the FPU registers or updates the in-memory - * FPU state for a task MUST let the rest of the kernel know that the - * FPU registers are no longer valid for this task. - * - * Either one of these invalidation functions is enough. Invalidate - * a resource you control: CPU if using the CPU for something else - * (with preemption disabled), FPU for the current task, or a task that - * is prevented from running by the current task. - */ -static inline void __cpu_invalidate_fpregs_state(void) -{ - __this_cpu_write(fpu_fpregs_owner_ctx, NULL); -} - -static inline void __fpu_invalidate_fpregs_state(struct fpu *fpu) -{ - fpu->last_cpu = -1; -} - -static inline int fpregs_state_valid(struct fpu *fpu, unsigned int cpu) -{ - return fpu == this_cpu_read(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu; -} - -/* - * These generally need preemption protection to work, - * do try to avoid using these on their own: - */ -static inline void fpregs_deactivate(struct fpu *fpu) -{ - this_cpu_write(fpu_fpregs_owner_ctx, NULL); - trace_x86_fpu_regs_deactivated(fpu); -} - -static inline void fpregs_activate(struct fpu *fpu) -{ - this_cpu_write(fpu_fpregs_owner_ctx, fpu); - trace_x86_fpu_regs_activated(fpu); -} - -/* Internal helper for switch_fpu_return() and signal frame setup */ -static inline void fpregs_restore_userregs(void) -{ - struct fpu *fpu = ¤t->thread.fpu; - int cpu = smp_processor_id(); - - if (WARN_ON_ONCE(current->flags & PF_KTHREAD)) - return; - - if (!fpregs_state_valid(fpu, cpu)) { - u64 mask; - - /* - * This restores _all_ xstate which has not been - * established yet. - * - * If PKRU is enabled, then the PKRU value is already - * correct because it was either set in switch_to() or in - * flush_thread(). So it is excluded because it might be - * not up to date in current->thread.fpu.xsave state. 
- */ - mask = xfeatures_mask_restore_user() | - xfeatures_mask_supervisor(); - restore_fpregs_from_fpstate(&fpu->state, mask); - - fpregs_activate(fpu); - fpu->last_cpu = cpu; - } - clear_thread_flag(TIF_NEED_FPU_LOAD); -} - #endif /* _ASM_X86_FPU_INTERNAL_H */ --- /dev/null +++ b/arch/x86/kernel/fpu/context.h @@ -0,0 +1,85 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __X86_KERNEL_FPU_CONTEXT_H +#define __X86_KERNEL_FPU_CONTEXT_H + +#include +#include + +/* Functions related to FPU context tracking */ + +/* + * The in-register FPU state for an FPU context on a CPU is assumed to be + * valid if the fpu->last_cpu matches the CPU, and the fpu_fpregs_owner_ctx + * matches the FPU. + * + * If the FPU register state is valid, the kernel can skip restoring the + * FPU state from memory. + * + * Any code that clobbers the FPU registers or updates the in-memory + * FPU state for a task MUST let the rest of the kernel know that the + * FPU registers are no longer valid for this task. + * + * Either one of these invalidation functions is enough. Invalidate + * a resource you control: CPU if using the CPU for something else + * (with preemption disabled), FPU for the current task, or a task that + * is prevented from running by the current task. + */ +static inline void __cpu_invalidate_fpregs_state(void) +{ + __this_cpu_write(fpu_fpregs_owner_ctx, NULL); +} + +static inline void __fpu_invalidate_fpregs_state(struct fpu *fpu) +{ + fpu->last_cpu = -1; +} + +static inline int fpregs_state_valid(struct fpu *fpu, unsigned int cpu) +{ + return fpu == this_cpu_read(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu; +} + +static inline void fpregs_deactivate(struct fpu *fpu) +{ + __this_cpu_write(fpu_fpregs_owner_ctx, NULL); + trace_x86_fpu_regs_deactivated(fpu); +} + +static inline void fpregs_activate(struct fpu *fpu) +{ + __this_cpu_write(fpu_fpregs_owner_ctx, fpu); + trace_x86_fpu_regs_activated(fpu); +} + +/* Internal helper for switch_fpu_return() and signal frame setup */ +static inline void fpregs_restore_userregs(void) +{ + struct fpu *fpu = ¤t->thread.fpu; + int cpu = smp_processor_id(); + + if (WARN_ON_ONCE(current->flags & PF_KTHREAD)) + return; + + if (!fpregs_state_valid(fpu, cpu)) { + u64 mask; + + /* + * This restores _all_ xstate which has not been + * established yet. + * + * If PKRU is enabled, then the PKRU value is already + * correct because it was either set in switch_to() or in + * flush_thread(). So it is excluded because it might be + * not up to date in current->thread.fpu.xsave state. 
+ */ + mask = xfeatures_mask_restore_user() | + xfeatures_mask_supervisor(); + restore_fpregs_from_fpstate(&fpu->state, mask); + + fpregs_activate(fpu); + fpu->last_cpu = cpu; + } + clear_thread_flag(TIF_NEED_FPU_LOAD); +} + +#endif --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -17,6 +17,7 @@ #include #include +#include "context.h" #include "internal.h" #include "legacy.h" #include "xstate.h" --- a/arch/x86/kernel/fpu/regset.c +++ b/arch/x86/kernel/fpu/regset.c @@ -10,6 +10,7 @@ #include #include +#include "context.h" #include "internal.h" /* --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c @@ -16,6 +16,7 @@ #include #include +#include "context.h" #include "internal.h" #include "legacy.h" #include "xstate.h" From patchwork Tue Oct 12 00:00:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551179 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9C996C4332F for ; Tue, 12 Oct 2021 00:06:40 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 87DBC61166 for ; Tue, 12 Oct 2021 00:06:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235697AbhJLAIj (ORCPT ); Mon, 11 Oct 2021 20:08:39 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51368 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231565AbhJLAIY (ORCPT ); Mon, 11 Oct 2021 20:08:24 -0400 Message-ID: <20211011223611.787198449@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997182; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=bRtRVFO/Qk9SnYYppFE5b04AKhvpImwNUBtl9GhSMVg=; b=gHrmNgi5IG/m3Y5i56g570joBTJp8ga7dJ9Rt8KNv/hw5z7AfF+cs/CyUHRTxVOtOMSEJ6 EP+yqjeaozNiX5pStTjY92XhXNc0lHvwSUGhjwJMBYYG2Y2Acnlof2stV1g/80KZCTkTcd hz7OwWOrwkCotUYh89RuVMzvV4XCd5n/fq6e4+qn6NZVMOFePwE7h8pfrwaF65bA+LNEYg CvHN2GQQ9NdQu52Hxd2VgVxBVig5+PAdX4o0dQckK0b3e95gvKmztbk/P1TLDxPnhgL0oZ O4EwoMnLa7i4x+2HNnhkaeD8jfyx2lJzKpa+M/ekPleSAv2fYBlS28J2aF25sQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997182; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=bRtRVFO/Qk9SnYYppFE5b04AKhvpImwNUBtl9GhSMVg=; b=m7v2hdlyCl1AbGvAAySMBN34qNqhT9bb5FU3tczmOj4rYmmj4+00X1Sx5MEW5+ubmemQQn 1YZNmm2kviAXDEDw== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 25/31] x86/fpu: Move mxcsr related code to core References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:36 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org No need to expose that to code which only needs the XCR0 accessors. 
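MXCSR is the SSE rounding/exception control register loaded with LDMXCSR, while XCR0 is the extended control register that records which xstate features the OS has enabled; the two are unrelated, which is why ldmxcsr() and mxcsr_feature_mask move out of the XCR header. A small user-space check of the distinction, assuming GCC or Clang with -mxsave on a CPU and OS with OSXSAVE enabled, for illustration only:

/* build: gcc -O2 -mxsave demo.c */
#include <stdint.h>
#include <stdio.h>
#include <immintrin.h>

int main(void)
{
	uint64_t xcr0  = _xgetbv(0);	/* XCR0: which xfeatures the OS enabled */
	uint32_t mxcsr = _mm_getcsr();	/* MXCSR: SSE rounding/exception control */

	printf("XCR0  = %#llx\n", (unsigned long long)xcr0);
	printf("MXCSR = %#x\n", mxcsr);
	return 0;
}

The SEV code only needs the XCR0 accessors, which is why its include is switched in the diff below and again for arch/x86/kernel/sev.c in patch 28.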
Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/xcr.h | 11 ----------- arch/x86/kernel/fpu/init.c | 1 + arch/x86/kernel/fpu/legacy.h | 7 +++++++ arch/x86/kernel/fpu/regset.c | 1 + arch/x86/kernel/fpu/xstate.c | 3 ++- arch/x86/kvm/svm/sev.c | 2 +- 6 files changed, 12 insertions(+), 13 deletions(-) --- a/arch/x86/include/asm/fpu/xcr.h +++ b/arch/x86/include/asm/fpu/xcr.h @@ -2,17 +2,6 @@ #ifndef _ASM_X86_FPU_XCR_H #define _ASM_X86_FPU_XCR_H -/* - * MXCSR and XCR definitions: - */ - -static inline void ldmxcsr(u32 mxcsr) -{ - asm volatile("ldmxcsr %0" :: "m" (mxcsr)); -} - -extern unsigned int mxcsr_feature_mask; - #define XCR_XFEATURE_ENABLED_MASK 0x00000000 static inline u64 xgetbv(u32 index) --- a/arch/x86/kernel/fpu/init.c +++ b/arch/x86/kernel/fpu/init.c @@ -11,6 +11,7 @@ #include #include "internal.h" +#include "legacy.h" /* * Initialize the registers found in all CPUs, CR0 and CR4: --- a/arch/x86/kernel/fpu/legacy.h +++ b/arch/x86/kernel/fpu/legacy.h @@ -4,6 +4,13 @@ #include +extern unsigned int mxcsr_feature_mask; + +static inline void ldmxcsr(u32 mxcsr) +{ + asm volatile("ldmxcsr %0" :: "m" (mxcsr)); +} + /* * Returns 0 on success or the trap number when the operation raises an * exception. --- a/arch/x86/kernel/fpu/regset.c +++ b/arch/x86/kernel/fpu/regset.c @@ -12,6 +12,7 @@ #include "context.h" #include "internal.h" +#include "legacy.h" /* * The xstateregs_active() routine is the same as the regset_fpregs_active() routine, --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -14,8 +14,9 @@ #include #include -#include #include +#include +#include #include --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -17,10 +17,10 @@ #include #include #include -#include #include #include +#include #include "x86.h" #include "svm.h" From patchwork Tue Oct 12 00:00:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551205 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 146ACC433FE for ; Tue, 12 Oct 2021 00:07:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 00618610C9 for ; Tue, 12 Oct 2021 00:07:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234085AbhJLAJS (ORCPT ); Mon, 11 Oct 2021 20:09:18 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51508 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232938AbhJLAI3 (ORCPT ); Mon, 11 Oct 2021 20:08:29 -0400 Message-ID: <20211011223611.846280577@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997183; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=oEva5V/97ct5UQuKFK5IGfaw1DZ+FmPxcrD1r2aiDpI=; b=QZjdLY8tNuG5aFmEWSAS8f3n3JjHZn38ppgbMPIZQmyYPeGQI6K81JqY6DRVVZgh/7ywTn MliNBa8jWkllkYmFfoFsNdZgLLz5tdD2pBagevl1Xux5fNBkg6xRxl65UPXv8caSetPE/8 SD0Xqwf9Nk99C0NfLapt9ofWT2IVqJN0JszLGlv8UVheUQKNa2v0k5TUCrMiTWGtjwrp7O c7S0dU2Jp1lVFTzgI5/OX2rq1nggyntW5m1uFSK0ENguq6GmPedJ3N5C5fR4EQz2xHm2uX GAdCeQvnrx5rUqbut+sS5Hf29if3lIGeU9lHiCfQLYuFVGz7TIxM48/gj+IroA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; 
d=linutronix.de; s=2020e; t=1633997183; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=oEva5V/97ct5UQuKFK5IGfaw1DZ+FmPxcrD1r2aiDpI=; b=+UsevrxoWqKsvWh7t443jgIl5arzLF/LCHtZx7iHX1w2+Tzax2kKgnLwzRRHxmYfc6Oaww O5PalNHNi1nxpkDQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 26/31] x86/fpu: Move fpstate functions to api.h References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:37 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Move function declarations which need to be globaly available to api.h where they belong. Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/api.h | 9 +++++++++ arch/x86/include/asm/fpu/internal.h | 9 --------- arch/x86/kernel/fpu/internal.h | 3 +++ arch/x86/math-emu/fpu_entry.c | 2 +- 4 files changed, 13 insertions(+), 10 deletions(-) --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -110,6 +110,15 @@ extern int cpu_has_xfeatures(u64 xfeatur static inline void update_pasid(void) { } +#ifdef CONFIG_MATH_EMULATION +extern void fpstate_init_soft(struct swregs_state *soft); +#else +static inline void fpstate_init_soft(struct swregs_state *soft) {} +#endif + +/* FPSTATE */ +extern union fpregs_state init_fpstate; + /* FPSTATE related functions which are exported to KVM */ extern void fpu_init_fpstate_user(struct fpu *fpu); --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -42,15 +42,6 @@ extern void fpu__init_system(struct cpui extern void fpu__init_check_bugs(void); extern void fpu__resume_cpu(void); -extern union fpregs_state init_fpstate; -extern void fpstate_init_user(union fpregs_state *state); - -#ifdef CONFIG_MATH_EMULATION -extern void fpstate_init_soft(struct swregs_state *soft); -#else -static inline void fpstate_init_soft(struct swregs_state *soft) {} -#endif - extern void restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask); extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size); --- a/arch/x86/kernel/fpu/internal.h +++ b/arch/x86/kernel/fpu/internal.h @@ -22,4 +22,7 @@ static __always_inline __pure bool use_f /* Init functions */ extern void fpu__init_prepare_fx_sw_frame(void); +/* Used in init.c */ +extern void fpstate_init_user(union fpregs_state *state); + #endif --- a/arch/x86/math-emu/fpu_entry.c +++ b/arch/x86/math-emu/fpu_entry.c @@ -31,7 +31,7 @@ #include #include #include -#include +#include #include "fpu_system.h" #include "fpu_emu.h" From patchwork Tue Oct 12 00:00:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551197 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BE4D9C433FE for ; Tue, 12 Oct 2021 00:06:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AAFDF60F92 for ; Tue, 12 Oct 2021 00:06:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236009AbhJLAIz (ORCPT ); Mon, 11 Oct 2021 20:08:55 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51544 
"EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233455AbhJLAI3 (ORCPT ); Mon, 11 Oct 2021 20:08:29 -0400 Message-ID: <20211011223611.905227254@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997185; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=BHEwndCCeICn4y0xsJCnPp3PZLShkZDVpkgf+lOaHZA=; b=HczKWxcGHiQq8dfAk6o4PbxVoeIJiNwxZ4+jKfQRQdSgFGkQ+9IPG/rFPlkjX8HPKQDAU/ 61Rl5jJnmnPTNXWZVNpWk1PmZ6L/rBPx1rnTXlfHx27V1pD8h4hdLrIGbZo+qUsyZX+sqA wscTITRXXa+ft9ML7roxkFSY7R0FMilIfKI9z+IgkLRybAY3WcjZ0tgEkxB40s7sUTypvw NrPgl1d61XBThdTXJqyZ4SbIPLzi8/7jpPROtDkrRrM+OycNUdoDXtB8xlxIzojl6nCZyz 90cnR6j7B0G6uBRylpNo/+EoG9924kNVqaIZKyaM+HlaFhO3PO5JfmiJ1jKQFw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997185; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=BHEwndCCeICn4y0xsJCnPp3PZLShkZDVpkgf+lOaHZA=; b=P+AuLL4f6EreRiFb22KLo5UeviCaAisyFupMC3Q4HMvTwPiKwbKAEwlc1gH/jES+jGAWT9 FJAvKEq/bJBg6kBQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 27/31] x86/fpu: Remove internal.h dependency from fpu/signal.h References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:39 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org In order to remove internal.h make signal.h independent of it. Signed-off-by: Thomas Gleixner --- arch/x86/ia32/ia32_signal.c | 1 - arch/x86/include/asm/fpu/api.h | 3 +++ arch/x86/include/asm/fpu/internal.h | 7 ------- arch/x86/include/asm/fpu/signal.h | 13 +++++++++++++ arch/x86/kernel/fpu/signal.c | 1 - arch/x86/kernel/ptrace.c | 1 - arch/x86/kernel/signal.c | 1 - arch/x86/mm/extable.c | 3 ++- 8 files changed, 18 insertions(+), 12 deletions(-) --- a/arch/x86/ia32/ia32_signal.c +++ b/arch/x86/ia32/ia32_signal.c @@ -24,7 +24,6 @@ #include #include #include -#include #include #include #include --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -116,6 +116,9 @@ extern void fpstate_init_soft(struct swr static inline void fpstate_init_soft(struct swregs_state *soft) {} #endif +/* State tracking */ +DECLARE_PER_CPU(struct fpu *, fpu_fpregs_owner_ctx); + /* FPSTATE */ extern union fpregs_state init_fpstate; --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -26,7 +26,6 @@ /* * High level FPU state handling functions: */ -extern bool fpu__restore_sig(void __user *buf, int ia32_frame); extern void fpu__clear_user_states(struct fpu *fpu); extern int fpu__exception_code(struct fpu *fpu, int trap_nr); @@ -42,10 +41,4 @@ extern void fpu__init_system(struct cpui extern void fpu__init_check_bugs(void); extern void fpu__resume_cpu(void); -extern void restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask); - -extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size); - -DECLARE_PER_CPU(struct fpu *, fpu_fpregs_owner_ctx); - #endif /* _ASM_X86_FPU_INTERNAL_H */ --- a/arch/x86/include/asm/fpu/signal.h +++ b/arch/x86/include/asm/fpu/signal.h @@ -5,6 +5,11 @@ #ifndef _ASM_X86_FPU_SIGNAL_H #define _ASM_X86_FPU_SIGNAL_H +#include +#include + +#include + #ifdef CONFIG_X86_64 # include # include @@ -31,4 +36,12 
@@ fpu__alloc_mathframe(unsigned long sp, i unsigned long fpu__get_fpstate_size(void); +extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size); +extern void fpu__clear_user_states(struct fpu *fpu); +extern bool fpu__restore_sig(void __user *buf, int ia32_frame); + +extern void restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask); + +extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size); + #endif /* _ASM_X86_FPU_SIGNAL_H */ --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c @@ -7,7 +7,6 @@ #include #include -#include #include #include #include --- a/arch/x86/kernel/ptrace.c +++ b/arch/x86/kernel/ptrace.c @@ -29,7 +29,6 @@ #include #include -#include #include #include #include --- a/arch/x86/kernel/signal.c +++ b/arch/x86/kernel/signal.c @@ -30,7 +30,6 @@ #include #include -#include #include #include #include --- a/arch/x86/mm/extable.c +++ b/arch/x86/mm/extable.c @@ -4,7 +4,8 @@ #include #include -#include +#include +#include #include #include #include From patchwork Tue Oct 12 00:00:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551175 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B44D5C433EF for ; Tue, 12 Oct 2021 00:06:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9926660F92 for ; Tue, 12 Oct 2021 00:06:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235433AbhJLAIh (ORCPT ); Mon, 11 Oct 2021 20:08:37 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51392 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231747AbhJLAIZ (ORCPT ); Mon, 11 Oct 2021 20:08:25 -0400 Message-ID: <20211011223611.964445769@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997182; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=y4+0Q5/F0AE4WZ3PGBK1rdQuc9zuwv3SlxUifq1uq3o=; b=gL/gE4YAnBkOWsWtPdfJ3DaBnmv8141V7E1hQjPWdPZhdcgK/7S9vAaaHvBo22XWZzKP0o 4ImcBIKM/UsSJCdqyhZY9rVFcmCULAAjO9v0kqdt2eivdx5pz8DrwArr581jQqoa5XIqaQ JkEZuuiYqYTXitHVjDxeaRqwaa3ffJU2R3lRNdO2QazVBjdkGnqAKvaCmpqdKp010qTAAy 86EIXNmpQ6k3kOeI8OvBbiwBcP0mb1JnHFHDacgBoM0eYV9s50TCnwLCSqn6mysIfHlrkM bIRev8lrhbd6EhWx7wIiAjqhHHLjyerPFs+x63GG+CgB3VTgQbLrkhtFX8/HQw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997182; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=y4+0Q5/F0AE4WZ3PGBK1rdQuc9zuwv3SlxUifq1uq3o=; b=Vfc+fqaR41EyJ0Xi1smC7n+B7aYqVUBvpFoMuSxcWD1pA01J2cB1J6kbGsbebUA0ux409F MkP3srusPcdUIRBg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. 
Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 28/31] x86/sev: Include fpu/xcr.h References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:41 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Include the header which only provides the XRC accessors. That's all what is needed here. Signed-off-by: Thomas Gleixner --- arch/x86/kernel/sev.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -23,7 +23,7 @@ #include #include #include -#include +#include #include #include #include From patchwork Tue Oct 12 00:00:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551199 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DCE16C433F5 for ; Tue, 12 Oct 2021 00:06:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C4E8D60F92 for ; Tue, 12 Oct 2021 00:06:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236027AbhJLAI4 (ORCPT ); Mon, 11 Oct 2021 20:08:56 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51368 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231368AbhJLAI3 (ORCPT ); Mon, 11 Oct 2021 20:08:29 -0400 Message-ID: <20211011223612.026348843@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997186; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=F7eBvI4wTZKF2FFjAfkIvrpU/tpV4FMMwx2h4uht6Zc=; b=3eE9DPHOI4bp04QMFmVoutVBNMKOWdxSBAZIUnagLsSIMcUHw0zov41ft5Q7P+/N8tI+KH ph+o0+DSnGjyTMx+QWbbUD+3ccla4Fj+qjgY8O+WBPZP/NNByS1xYC8gl8c7EPMz+Icgmu OpOsGbQv7q4THZVORHL97dIz+yVxp4F8R//pnv0Z2taSsg8c2RtFtVLyDMx2QlzSva5n7B Ky+hWGH4MfWizbvHYU5gAd3A/DfI91mUf2GMWSbq/Ua1BXE5guKPcTU1rdVJ/JLXF19fft ziz3CKbgfCridTbTywHZpOBSSu6IShAU+NFhO+tndlwq/0ysd5/jwcfNDDwXFw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997186; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=F7eBvI4wTZKF2FFjAfkIvrpU/tpV4FMMwx2h4uht6Zc=; b=iAwcRI4xeQdK58PpCVHvfYfPI4Ig/nJKd4/K/DDTCa1eA+/fvOP9IeDFHm0v0UIj7lOzBq 95W5zrvTL97M3zCg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 29/31] x86/fpu: Mop up the internal.h leftovers References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:42 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Move the global interfaces to api.h and the rest into the core. 
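The end state is the usual public/private header split: api.h carries the handful of prototypes the rest of the tree needs (trap handling, boot, hotplug, resume), and everything else lives in headers under arch/x86/kernel/fpu/. A schematic sketch of that shape with made-up names, written as a single compilable unit purely for brevity and not reflecting the actual kernel headers:

/* All names below are illustrative; only the layout matters. */

/* ---- fpu/api.h: what the rest of the tree may include ---- */
#ifndef DEMO_FPU_API_H
#define DEMO_FPU_API_H
extern void demo_fpu_init_cpu(void);              /* boot, hotplug, resume */
extern int  demo_fpu_exception_code(int trap_nr); /* trap handling */
#endif

/* ---- kernel/fpu/internal.h: visible only inside kernel/fpu/ ---- */
#ifndef DEMO_FPU_INTERNAL_H
#define DEMO_FPU_INTERNAL_H
extern void demo_fpstate_init_user(void);         /* implementation detail */
#endif

int main(void) { return 0; } /* nothing to execute; the layout is the point */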
Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/api.h | 10 ++++++++++ arch/x86/include/asm/fpu/internal.h | 18 ------------------ arch/x86/kernel/fpu/init.c | 1 + arch/x86/kernel/fpu/xstate.h | 3 +++ 4 files changed, 14 insertions(+), 18 deletions(-) --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -110,6 +110,16 @@ extern int cpu_has_xfeatures(u64 xfeatur static inline void update_pasid(void) { } +/* Trap handling */ +extern int fpu__exception_code(struct fpu *fpu, int trap_nr); +extern void fpu_sync_fpstate(struct fpu *fpu); + +/* Boot, hotplug and resume */ +extern void fpu__init_cpu(void); +extern void fpu__init_system(struct cpuinfo_x86 *c); +extern void fpu__init_check_bugs(void); +extern void fpu__resume_cpu(void); + #ifdef CONFIG_MATH_EMULATION extern void fpstate_init_soft(struct swregs_state *soft); #else --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -23,22 +23,4 @@ #include #include -/* - * High level FPU state handling functions: - */ -extern void fpu__clear_user_states(struct fpu *fpu); -extern int fpu__exception_code(struct fpu *fpu, int trap_nr); - -extern void fpu_sync_fpstate(struct fpu *fpu); - -/* - * Boot time FPU initialization functions: - */ -extern void fpu__init_cpu(void); -extern void fpu__init_system_xstate(void); -extern void fpu__init_cpu_xstate(void); -extern void fpu__init_system(struct cpuinfo_x86 *c); -extern void fpu__init_check_bugs(void); -extern void fpu__resume_cpu(void); - #endif /* _ASM_X86_FPU_INTERNAL_H */ --- a/arch/x86/kernel/fpu/init.c +++ b/arch/x86/kernel/fpu/init.c @@ -12,6 +12,7 @@ #include "internal.h" #include "legacy.h" +#include "xstate.h" /* * Initialize the registers found in all CPUs, CR0 and CR4: --- a/arch/x86/kernel/fpu/xstate.h +++ b/arch/x86/kernel/fpu/xstate.h @@ -18,6 +18,9 @@ static inline void xstate_init_xcomp_bv( extern void __copy_xstate_to_uabi_buf(struct membuf to, struct xregs_state *xsave, u32 pkru_val, enum xstate_copy_mode copy_mode); +extern void fpu__init_cpu_xstate(void); +extern void fpu__init_system_xstate(void); + /* XSAVE/XRSTOR wrapper functions */ #ifdef CONFIG_X86_64 From patchwork Tue Oct 12 00:00:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551201 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3A827C433F5 for ; Tue, 12 Oct 2021 00:07:13 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 20CE360D42 for ; Tue, 12 Oct 2021 00:07:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236086AbhJLAJA (ORCPT ); Mon, 11 Oct 2021 20:09:00 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:51552 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233973AbhJLAI3 (ORCPT ); Mon, 11 Oct 2021 20:08:29 -0400 Message-ID: <20211011223612.087345195@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997187; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=TiGpirlAJA6MaUuc3ToQrsUucBGkUhNodaDfeSM0MTo=; 
b=PhDlrmff52Dw9LGMebVnt8SswA9VzSykG8BW7tp7ofPTqgPA11TUUPN2EKd4fAGf7mRLpr aLFtun55Fhu5PqBFeBbnvbmyQc6a+ymHKfvq1Hirkvo/JMJMjTTmnVujfm/jejexcqxkA1 h3ZOu0h56A4r6nwSnHfX0ByjF6sdySSH66NACNI8ayQRNRazRu8WT30m9hpf+eMK6rS3bX wW6ZNf4UxP1X7WOWa3YxLqRZdAX+mlpBJLebCZ1UsR+TviQtFcLg/RphNa5+qePggxc5fk 9zeo/q0di2ac0eVJv7YrjNeLlbHqm2Yl+wlyFw1c7j4aOiUmKPCPaEbKo1lgEQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997187; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=TiGpirlAJA6MaUuc3ToQrsUucBGkUhNodaDfeSM0MTo=; b=4OH/xQP4AcDYVc3zliMEb5ZloeSOAxNG3aXZXRa+bFtaSDYMDs0UAbBI/vML9VDZBk+c5q 5yo7pb7zF2eoY+Cg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 30/31] x86/fpu: Replace the includes of fpu/internal.h References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:44 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Now that the file is empty, fixup all references with the proper includes and delete the former kitchen sink. Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 26 -------------------------- arch/x86/kernel/cpu/bugs.c | 2 +- arch/x86/kernel/cpu/common.c | 2 +- arch/x86/kernel/fpu/bugs.c | 2 +- arch/x86/kernel/fpu/core.c | 2 +- arch/x86/kernel/fpu/init.c | 2 +- arch/x86/kernel/fpu/regset.c | 2 +- arch/x86/kernel/fpu/xstate.c | 1 - arch/x86/kernel/smpboot.c | 2 +- arch/x86/kernel/traps.c | 2 +- arch/x86/kvm/vmx/vmx.c | 2 +- arch/x86/power/cpu.c | 2 +- 12 files changed, 10 insertions(+), 37 deletions(-) --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -1,26 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* - * Copyright (C) 1994 Linus Torvalds - * - * Pentium III FXSR, SSE support - * General FPU state handling cleanups - * Gareth Hughes , May 2000 - * x86-64 work by Andi Kleen 2002 - */ - -#ifndef _ASM_X86_FPU_INTERNAL_H -#define _ASM_X86_FPU_INTERNAL_H - -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include - -#endif /* _ASM_X86_FPU_INTERNAL_H */ --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -22,7 +22,7 @@ #include #include #include -#include +#include #include #include #include --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -42,7 +42,7 @@ #include #include #include -#include +#include #include #include #include --- a/arch/x86/kernel/fpu/bugs.c +++ b/arch/x86/kernel/fpu/bugs.c @@ -2,7 +2,7 @@ /* * x86 FPU bug checks: */ -#include +#include /* * Boot time CPU/FPU FDIV bug detection code: --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -6,7 +6,7 @@ * General FPU state handling cleanups * Gareth Hughes , May 2000 */ -#include +#include #include #include #include --- a/arch/x86/kernel/fpu/init.c +++ b/arch/x86/kernel/fpu/init.c @@ -2,7 +2,7 @@ /* * x86 FPU boot time init code: */ -#include +#include #include #include --- a/arch/x86/kernel/fpu/regset.c +++ b/arch/x86/kernel/fpu/regset.c @@ -5,7 +5,7 @@ #include #include -#include +#include #include #include #include --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -13,7 +13,6 @@ #include #include -#include #include #include #include --- a/arch/x86/kernel/smpboot.c +++ b/arch/x86/kernel/smpboot.c @@ -70,7 +70,7 @@ #include #include 
#include -#include +#include #include #include #include --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -48,7 +48,7 @@ #include #include #include -#include +#include #include #include #include --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -35,7 +35,7 @@ #include #include #include -#include +#include #include #include #include --- a/arch/x86/power/cpu.c +++ b/arch/x86/power/cpu.c @@ -20,7 +20,7 @@ #include #include #include -#include +#include #include #include #include From patchwork Tue Oct 12 00:00:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12551203 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 352F2C433EF for ; Tue, 12 Oct 2021 00:07:18 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2014B6008E for ; Tue, 12 Oct 2021 00:07:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236216AbhJLAJP (ORCPT ); Mon, 11 Oct 2021 20:09:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34042 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234873AbhJLAI3 (ORCPT ); Mon, 11 Oct 2021 20:08:29 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0E2A0C061749; Mon, 11 Oct 2021 17:06:29 -0700 (PDT) Message-ID: <20211011223612.145363780@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1633997185; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=+DDPznsqgrt6mGetm00zwJ7IWc7hobGjiUCJd9lybJk=; b=fS7P9nLP5NEzaJTFb/loVeS8dt+CcQRX4IyxpRXrPJhbIK7WJPbkYsPGrfB8faiGevEHg3 /3dt2XlavVuoO2ADZPvyyzepeBRS9tCwzbmD/KdZJSVuJFf/t9LAfBZkl4ATzk+puTKij4 zXjKBClMHCC346oXHU2agFmB+fti3HiSRwg488YwfjD3d7jP5+bLHh6R7jj5dpg0GsC+VS Ma95grsxmSmlEI59eHdU1BvUI/JoqDFnq339ZtUEyJC2cN5vBV3NQrD2nJQeihxZQTuErK CJnnpcy+qI4bderRSIIaKAssymZUFrwIz+zJ4bINOmENeo8GuK46wmSByMCWSw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1633997185; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=+DDPznsqgrt6mGetm00zwJ7IWc7hobGjiUCJd9lybJk=; b=TfTcEaElMUW4s8tDhEa+hGdo/bV/oRUEEmqy/t+jcwY8ZEuKAkbo1PvAmMLFd+2eZbWrlW 24tx2i2YnhmaOCBQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini Subject: [patch 31/31] x86/fpu: Provide a proper function for ex_handler_fprestore() References: <20211011215813.558681373@linutronix.de> MIME-Version: 1.0 Date: Tue, 12 Oct 2021 02:00:45 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org To make upcoming changes for support of dynamically enabled features simpler, provide a proper function for the exception handler which removes exposure of FPU internals. 
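The fixup path itself is simple: if restoring register state from memory faults, warn and fall back to a known-good init state instead of crashing. A stand-alone analogue of that flow, with struct state, restore_state() and reset_from_fixup() invented for the example; only the last of these corresponds to the new fpu_reset_from_exception_fixup():

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct state { double regs[8]; };

static const struct state init_state;	/* zero-filled "init_fpstate" analogue */

static bool restore_state(struct state *dst, const struct state *src)
{
	if (!src)			/* stand-in for a faulting XRSTOR */
		return false;
	memcpy(dst, src, sizeof(*dst));
	return true;
}

static void reset_from_fixup(struct state *dst)
{
	/* one call, no internals exposed to the caller */
	memcpy(dst, &init_state, sizeof(*dst));
}

int main(void)
{
	struct state cur;

	if (!restore_state(&cur, NULL)) {
		fprintf(stderr, "bad saved state, reinitializing\n");
		reset_from_fixup(&cur);
	}
	printf("regs[0] = %g\n", cur.regs[0]);
	return 0;
}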
Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/api.h | 4 +--- arch/x86/kernel/fpu/core.c | 5 +++++ arch/x86/kernel/fpu/internal.h | 2 ++ arch/x86/mm/extable.c | 5 ++--- 4 files changed, 10 insertions(+), 6 deletions(-) --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -113,6 +113,7 @@ static inline void update_pasid(void) { /* Trap handling */ extern int fpu__exception_code(struct fpu *fpu, int trap_nr); extern void fpu_sync_fpstate(struct fpu *fpu); +extern void fpu_reset_from_exception_fixup(void); /* Boot, hotplug and resume */ extern void fpu__init_cpu(void); @@ -129,9 +130,6 @@ static inline void fpstate_init_soft(str /* State tracking */ DECLARE_PER_CPU(struct fpu *, fpu_fpregs_owner_ctx); -/* FPSTATE */ -extern union fpregs_state init_fpstate; - /* FPSTATE related functions which are exported to KVM */ extern void fpu_init_fpstate_user(struct fpu *fpu); --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -155,6 +155,11 @@ void restore_fpregs_from_fpstate(union f } } +void fpu_reset_from_exception_fixup(void) +{ + restore_fpregs_from_fpstate(&init_fpstate, xfeatures_mask_fpstate()); +} + #if IS_ENABLED(CONFIG_KVM) void fpu_swap_kvm_fpu(struct fpu *save, struct fpu *rstor, u64 restore_mask) { --- a/arch/x86/kernel/fpu/internal.h +++ b/arch/x86/kernel/fpu/internal.h @@ -2,6 +2,8 @@ #ifndef __X86_KERNEL_FPU_INTERNAL_H #define __X86_KERNEL_FPU_INTERNAL_H +extern union fpregs_state init_fpstate; + /* CPU feature check wrappers */ static __always_inline __pure bool use_xsave(void) { --- a/arch/x86/mm/extable.c +++ b/arch/x86/mm/extable.c @@ -4,8 +4,7 @@ #include #include -#include -#include +#include #include #include #include @@ -48,7 +47,7 @@ static bool ex_handler_fprestore(const s WARN_ONCE(1, "Bad FPU state detected at %pB, reinitializing FPU registers.", (void *)instruction_pointer(regs)); - restore_fpregs_from_fpstate(&init_fpstate, xfeatures_mask_fpstate()); + fpu_reset_from_exception_fixup(); return true; }
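For context, ex_handler_fprestore() is reached through the exception table: the address of the faulting XRSTOR is looked up and the registered handler recovers instead of letting the kernel oops. A conceptual user-space sketch of that lookup, with a made-up table and address; the real mechanism in arch/x86/mm/extable.c uses relative offsets and handler types rather than raw pointers:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef bool (*fixup_fn)(void);

struct extable_entry {
	unsigned long fault_ip;	/* instruction allowed to fault */
	fixup_fn fixup;		/* recovery handler */
};

static bool handler_fprestore(void)
{
	fprintf(stderr, "Bad FPU state detected, reinitializing FPU registers.\n");
	/* here the kernel now calls fpu_reset_from_exception_fixup() */
	return true;
}

static const struct extable_entry table[] = {
	{ 0x1000, handler_fprestore },	/* address is made up */
};

static bool fixup_exception(unsigned long fault_ip)
{
	for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
		if (table[i].fault_ip == fault_ip)
			return table[i].fixup();
	}
	return false;			/* no fixup: would be a real oops */
}

int main(void)
{
	printf("handled: %d\n", fixup_exception(0x1000));
	return 0;
}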