From patchwork Fri Oct 15 01:15:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559641 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89061C4332F for ; Fri, 15 Oct 2021 01:16:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6B835611F0 for ; Fri, 15 Oct 2021 01:16:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232242AbhJOBSD (ORCPT ); Thu, 14 Oct 2021 21:18:03 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:46300 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229922AbhJOBSC (ORCPT ); Thu, 14 Oct 2021 21:18:02 -0400 Message-ID: <20211015011538.433135710@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260555; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=ZZTNJd7cxz4mCtW2ftlTrqZfGzeoazdJ+Z32mroPb/E=; b=Kk8CHxQwHxh789V1y3SUoXQ15ByBEdE7Z0cW9BxJ9eLy0FyPftpiHlySKJbeglrmg6VWPc f3YT3LK9iFU0fwidUUbbfQPkpXhMTNmdVos+AB6faTftNgkid2GakZM3MZ0q1xF/8KPaEZ h6D3ljd+U/Cy3rRcgUbTRJHtDlFyut7wCGD1RJC4yGdFLe0tZrBLxQUyfu77t3aGuhu9zv dJKEBABv6w+5hGHJ3bSOE6QX2Mfft2UzmhAlncltKuEWknY4yXc7OKfUSgiM3sX1wSMyCn kQ0GDjQiN6+iIF6bcgJufHmGdsjOjHdBkGFe3RU5zDJqJlqa5tZp9XFwFMaGww== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260555; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=ZZTNJd7cxz4mCtW2ftlTrqZfGzeoazdJ+Z32mroPb/E=; b=7lnjxBbFH3aV2E6qvzwtaUe0/xrJD/KFOJtLKV8ET+AYxa7245bDFiQ8URtx7k+LF5OXGR ynmlZ4swuVqWd3Cw== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 01/30] x86/fpu: Remove pointless argument from switch_fpu_finish() References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:15:54 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Unused since the FPU switching rework. Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 2 +- arch/x86/kernel/process_32.c | 3 +-- arch/x86/kernel/process_64.c | 3 +-- 3 files changed, 3 insertions(+), 5 deletions(-) --- diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index 89960e479f87..1503750534f7 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -521,7 +521,7 @@ static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu) * Delay loading of the complete FPU state until the return to userland. * PKRU is handled separately. 
*/ -static inline void switch_fpu_finish(struct fpu *new_fpu) +static inline void switch_fpu_finish(void) { if (cpu_feature_enabled(X86_FEATURE_FPU)) set_thread_flag(TIF_NEED_FPU_LOAD); diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c index 4f2f54e1281c..d008e222a302 100644 --- a/arch/x86/kernel/process_32.c +++ b/arch/x86/kernel/process_32.c @@ -160,7 +160,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p) struct thread_struct *prev = &prev_p->thread, *next = &next_p->thread; struct fpu *prev_fpu = &prev->fpu; - struct fpu *next_fpu = &next->fpu; int cpu = smp_processor_id(); /* never put a printk in __switch_to... printk() calls wake_up*() indirectly */ @@ -213,7 +212,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p) this_cpu_write(current_task, next_p); - switch_fpu_finish(next_fpu); + switch_fpu_finish(); /* Load the Intel cache allocation PQR MSR. */ resctrl_sched_in(); diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c index ec0d836a13b1..39f12ef1c85c 100644 --- a/arch/x86/kernel/process_64.c +++ b/arch/x86/kernel/process_64.c @@ -559,7 +559,6 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p) struct thread_struct *prev = &prev_p->thread; struct thread_struct *next = &next_p->thread; struct fpu *prev_fpu = &prev->fpu; - struct fpu *next_fpu = &next->fpu; int cpu = smp_processor_id(); WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ENTRY) && @@ -620,7 +619,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p) this_cpu_write(current_task, next_p); this_cpu_write(cpu_current_top_of_stack, task_top_of_stack(next_p)); - switch_fpu_finish(next_fpu); + switch_fpu_finish(); /* Reload sp0. */ update_task_stack(next_p); From patchwork Fri Oct 15 01:15:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559643 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 12A49C433EF for ; Fri, 15 Oct 2021 01:16:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E6FCE611EE for ; Fri, 15 Oct 2021 01:16:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232328AbhJOBSF (ORCPT ); Thu, 14 Oct 2021 21:18:05 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:46316 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232107AbhJOBSD (ORCPT ); Thu, 14 Oct 2021 21:18:03 -0400 Message-ID: <20211015011538.493570236@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260556; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=820daWr24eERPD7yq1vPcOq80bu7Gqh2gnJkboAilY8=; b=z53ZadhVh8vsDDvHU5A/i/6fuZtxC7hf31elMkx2NVRaAhIjJw2q67Mnhzs6olSug/YRL5 FQ3+cIofFgLm9hx4FcEDAjX6kD+ECW4cf2tdyw1osGpw8QIjBy5PNCYmf0xaC3gP9K2K0Q S7C3xhtlf4DQeBCO35jEu/WGLIBqAqgan+aOzSh75BsdIBrITAN8KMFPSgK5HO7vHAXfPu /L4TzBHDG4lI9v0ek22wdfekcT+iAFa9okU7GzA4iyK7Cyk/UUQhbndl1kXaU6qTv+g+x6 atf13KrdqHRIrO3FoB7GKfBtbZHwW05oTqUsEGcE00jB8kVT+ixPfabl+XxkOg== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260556; 
h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=820daWr24eERPD7yq1vPcOq80bu7Gqh2gnJkboAilY8=; b=AEPRVCARSRQzAN3pwy/nwh0mp2LFcKPazkvR5+9t3z2g5d5RMpUPFY2HmCOwHNyh2retB9 bJFuvfjZ0pq4AtCg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 02/30] x86/fpu: Update stale comments References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:15:56 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org copy_fpstate_to_sigframe() does not have a slow path anymore. Neither does the !ia32 restore in __fpu_restore_sig(). Update the comments accordingly. Signed-off-by: Thomas Gleixner --- arch/x86/kernel/fpu/signal.c | 13 +++---------- 1 file changed, 3 insertions(+), 10 deletions(-) --- diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c index 37dbfed29d75..64f0d4eda0b0 100644 --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c @@ -155,10 +155,8 @@ static inline int copy_fpregs_to_sigframe(struct xregs_state __user *buf) * buf == buf_fx for 64-bit frames and 32-bit fsave frame. * buf != buf_fx for 32-bit frames with fxstate. * - * Try to save it directly to the user frame with disabled page fault handler. - * If this fails then do the slow path where the FPU state is first saved to - * task's fpu->state and then copy it to the user frame pointed to by the - * aligned pointer 'buf_fx'. + * Save it directly to the user frame with disabled page fault handler. If + * that faults, try to clear the frame which handles the page fault. * * If this is a 32-bit frame with fxstate, put a fsave header before * the aligned state at 'buf_fx'. @@ -334,12 +332,7 @@ static bool __fpu_restore_sig(void __user *buf, void __user *buf_fx, } if (likely(!ia32_fxstate)) { - /* - * Attempt to restore the FPU registers directly from user - * memory. For that to succeed, the user access cannot cause page - * faults. If it does, fall back to the slow path below, going - * through the kernel buffer with the enabled pagefault handler. - */ + /* Restore the FPU registers directly from user memory. 
*/ return restore_fpregs_from_user(buf_fx, user_xfeatures, fx_only, state_size); } From patchwork Fri Oct 15 01:15:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559645 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id ECC52C433FE for ; Fri, 15 Oct 2021 01:16:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D4A88611EE for ; Fri, 15 Oct 2021 01:16:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232444AbhJOBSH (ORCPT ); Thu, 14 Oct 2021 21:18:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45096 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232345AbhJOBSF (ORCPT ); Thu, 14 Oct 2021 21:18:05 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9AB1FC061570; Thu, 14 Oct 2021 18:15:59 -0700 (PDT) Message-ID: <20211015011538.551522694@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260558; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=+MrVSk/2pciJ3iChH2fVF59RQ2MRpjf0vm2riZZbUpw=; b=MXB83Z3K+K9pR4xdFQsZTDqe17dya/qIry7s151HSJRy/+9yNIFDQEq3c2GK4NkbPtofVl IlEHLAbBFl9+qyfWf/512xLCp4/ChoN4eRrmvaBc0+pSiYiEpybNdyU8kZklxkbMdss6yx WZkrfJMQ1tDgH5byLctRaRw7hPmd9vhwocqeLYS18D9OKyNqV3RUZFH4xZpv3Zz3zr2UX/ sq7AD+OyAWkOQjZvYgRq9FvLLVecjEojizqXlugh63Bz/KOsaZ9Y4WjmgpvYpqQJ5PwVjU TnaGrZpVqDdRJQD9AeQqU+zB9YXAuARK4OSKI5lmnuk+w6fW7dE8uaFhbQai7Q== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260558; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=+MrVSk/2pciJ3iChH2fVF59RQ2MRpjf0vm2riZZbUpw=; b=5k3nH9qdwbC9lGFKIKwV45Ges8DtCgWK+z265NNrxBRdamTyjw05NqOg9j3PFcI5xQVh3J iyv9KRHi17gGCjCg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 03/30] x86/pkru: Remove useless include References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:15:57 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org PKRU code does not need anything from fpu headers. Include cpufeature.h instead and fixup the resulting fallout in perf. This is a preparation for FPU changes in order to prevent recursive include hell. 
Signed-off-by: Thomas Gleixner --- arch/x86/events/perf_event.h | 1 + arch/x86/include/asm/pkru.h | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) --- diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h index e3ac05c97b5e..134c08df7340 100644 --- a/arch/x86/events/perf_event.h +++ b/arch/x86/events/perf_event.h @@ -14,6 +14,7 @@ #include +#include #include #include diff --git a/arch/x86/include/asm/pkru.h b/arch/x86/include/asm/pkru.h index ccc539faa5bb..4cd49afa0ca4 100644 --- a/arch/x86/include/asm/pkru.h +++ b/arch/x86/include/asm/pkru.h @@ -2,7 +2,7 @@ #ifndef _ASM_X86_PKRU_H #define _ASM_X86_PKRU_H -#include +#include #define PKRU_AD_BIT 0x1 #define PKRU_WD_BIT 0x2 From patchwork Fri Oct 15 01:15:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559647 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9E81C4332F for ; Fri, 15 Oct 2021 01:16:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AEEE0611C2 for ; Fri, 15 Oct 2021 01:16:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232406AbhJOBSI (ORCPT ); Thu, 14 Oct 2021 21:18:08 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:46334 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232412AbhJOBSG (ORCPT ); Thu, 14 Oct 2021 21:18:06 -0400 Message-ID: <20211015011538.608492174@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260559; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=NxL389GvCjYUIYxtWyYLkVLuEIfE/oYZ1co+1NzD5AQ=; b=SIdlpy7xGtg2Q2b5SjYmIzJmYx7vxcpHbYR6g6uTIYcsPEcPH2dTfX++2KoXYujqrK1oSw JxLCZe+Q6BPLg9dtJBY1cx+rG+PGab5nCYDhTcfGRSlvCebXf9bbPHI3N35ItQYv9EUU4j 5HtUdL9HUaRx+CYEh/fu4UDvlyTZlmAvCkwAhPVa/38cNXKIFsNldcPXi8BrDsKuzSyeTS Kc8+H3/G5wxP5NZ6pGam/4tmEB5BrO9cOuBGy9mKKeRGRb4Fz7/N5P7HUW8Dmxdf0npsOO 9vHbHY7fX3THVx4zf1xoAqm3tYI3qoh7aHSr4qjFh2gNyBGr++s+1rL3DDkDBQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260559; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=NxL389GvCjYUIYxtWyYLkVLuEIfE/oYZ1co+1NzD5AQ=; b=UYnpa4gWI8zoffFomg/WiH12Ss6FrpLvBf2eSZ0OtMkanP0lNkt8MMtPv4+uhjM9zaqBZ5 76FP248mvBUYlUAg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 04/30] x86/fpu: Restrict xsaves()/xrstors() to independent states References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:15:59 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org These interfaces are really only valid for features which are independently managed and not part of the task context state for various reasons. Tighten the checks and adjust the misleading comments. 
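[ Illustration only, not part of the patch: a minimal sketch of the one usage
  pattern that remains valid after this change - saving and restoring an
  independently managed component such as the arch LBR state. The wrapper
  functions are hypothetical; xsaves(), xrstors() and XFEATURE_MASK_LBR are
  assumed to exist as in the current tree. Any mask outside
  xfeatures_mask_independent() now trips the WARN_ON_ONCE() in
  validate_independent_components() and the operation is skipped. ]

static void save_independent_lbr_state(struct xregs_state *buf)
{
	/* XFEATURE_MASK_LBR is an independently managed component */
	xsaves(buf, XFEATURE_MASK_LBR);
}

static void restore_independent_lbr_state(struct xregs_state *buf)
{
	/* Restores exactly what the xsaves() above wrote into @buf */
	xrstors(buf, XFEATURE_MASK_LBR);
}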
Signed-off-by: Thomas Gleixner --- V2: Rename functions and address the comment related review - Boris --- arch/x86/kernel/fpu/xstate.c | 22 +++++++--------------- 1 file changed, 7 insertions(+), 15 deletions(-) --- diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c index c8def1b7f8fb..5a76df965337 100644 --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -1175,20 +1175,14 @@ int copy_sigframe_from_user_to_xstate(struct xregs_state *xsave, return copy_uabi_to_xstate(xsave, NULL, ubuf); } -static bool validate_xsaves_xrstors(u64 mask) +static bool validate_independent_components(u64 mask) { u64 xchk; if (WARN_ON_FPU(!cpu_feature_enabled(X86_FEATURE_XSAVES))) return false; - /* - * Validate that this is either a task->fpstate related component - * subset or an independent one. - */ - if (mask & xfeatures_mask_independent()) - xchk = ~xfeatures_mask_independent(); - else - xchk = ~xfeatures_mask_all; + + xchk = ~xfeatures_mask_independent(); if (WARN_ON_ONCE(!mask || mask & xchk)) return false; @@ -1206,14 +1200,13 @@ static bool validate_xsaves_xrstors(u64 mask) * buffer should be zeroed otherwise a consecutive XRSTORS from that buffer * can #GP. * - * The feature mask must either be a subset of the independent features or - * a subset of the task->fpstate related features. + * The feature mask must be a subset of the independent features. */ void xsaves(struct xregs_state *xstate, u64 mask) { int err; - if (!validate_xsaves_xrstors(mask)) + if (!validate_independent_components(mask)) return; XSTATE_OP(XSAVES, xstate, (u32)mask, (u32)(mask >> 32), err); @@ -1231,14 +1224,13 @@ void xsaves(struct xregs_state *xstate, u64 mask) * Proper usage is to restore the state which was saved with * xsaves() into @xstate. * - * The feature mask must either be a subset of the independent features or - * a subset of the task->fpstate related features. + * The feature mask must be a subset of the independent features. 
*/ void xrstors(struct xregs_state *xstate, u64 mask) { int err; - if (!validate_xsaves_xrstors(mask)) + if (!validate_independent_components(mask)) return; XSTATE_OP(XRSTORS, xstate, (u32)mask, (u32)(mask >> 32), err); From patchwork Fri Oct 15 01:16:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559649 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1DCB1C433EF for ; Fri, 15 Oct 2021 01:16:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 00004611C2 for ; Fri, 15 Oct 2021 01:16:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233014AbhJOBSJ (ORCPT ); Thu, 14 Oct 2021 21:18:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45118 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232904AbhJOBSI (ORCPT ); Thu, 14 Oct 2021 21:18:08 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 09F01C061755; Thu, 14 Oct 2021 18:16:03 -0700 (PDT) Message-ID: <20211015011538.665080855@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260561; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=1+fzVJkFpn2tADRe4ODW/jh023vO5Spky2/2mWXGWdU=; b=tN2J9E9/vTbOYn5GnVZzeQOevlmw/wTDV0aUUGqu1IhPy/fHahLxzD7nG4FRc5VwI6G2jm fXyPZN+tNKHqZhbeyWxON5MgkyZnu8kpoTRlVHj6qmpFI/0HrTvkLhnGhmWkYDm2jPKOME +bAPuOU0Xs9sf93IdOOJtVNj9qGjPBCpBf3MXj35RxqeL7LwOnQGsLhcjCSXX+Zf66JmMQ AmbEPEpIbgkBfvz0iZlyVOLzS7VAWBhLIgJwp6NCpOXW41F8C5vtNiNk+/U3O2MRegi8XM cq4rXJ4qjsWJhucyd/ICmKMDpokSigEUnipQ7k4VmSXlY4b4BuTjp1/6vFmGEQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260561; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=1+fzVJkFpn2tADRe4ODW/jh023vO5Spky2/2mWXGWdU=; b=7H0l4GAJaGv+L+6trXIqifMpzwwjs5VCMMEFRCMs3/vJXwzIz++LT9D7SABEtBziwXQHjo lxHhcpR1iKff+iCQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 05/30] x86/fpu: Cleanup the on_boot_cpu clutter References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:01 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Defensive programming is useful, but this on_boot_cpu debug is really silly. 
Signed-off-by: Thomas Gleixner --- arch/x86/kernel/fpu/init.c | 16 ---------------- arch/x86/kernel/fpu/xstate.c | 9 --------- 2 files changed, 25 deletions(-) --- diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c index 64e29927cc32..86bc9759fc8b 100644 --- a/arch/x86/kernel/fpu/init.c +++ b/arch/x86/kernel/fpu/init.c @@ -192,11 +192,6 @@ static void __init fpu__init_task_struct_size(void) */ static void __init fpu__init_system_xstate_size_legacy(void) { - static int on_boot_cpu __initdata = 1; - - WARN_ON_FPU(!on_boot_cpu); - on_boot_cpu = 0; - /* * Note that xstate sizes might be overwritten later during * fpu__init_system_xstate(). @@ -216,15 +211,6 @@ static void __init fpu__init_system_xstate_size_legacy(void) fpu_user_xstate_size = fpu_kernel_xstate_size; } -/* Legacy code to initialize eager fpu mode. */ -static void __init fpu__init_system_ctx_switch(void) -{ - static bool on_boot_cpu __initdata = 1; - - WARN_ON_FPU(!on_boot_cpu); - on_boot_cpu = 0; -} - /* * Called on the boot CPU once per system bootup, to set up the initial * FPU state that is later cloned into all processes: @@ -243,6 +229,4 @@ void __init fpu__init_system(struct cpuinfo_x86 *c) fpu__init_system_xstate_size_legacy(); fpu__init_system_xstate(); fpu__init_task_struct_size(); - - fpu__init_system_ctx_switch(); } diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c index 5a76df965337..d6b5f2266143 100644 --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -379,15 +379,10 @@ static void __init print_xstate_offset_size(void) */ static void __init setup_init_fpu_buf(void) { - static int on_boot_cpu __initdata = 1; - BUILD_BUG_ON((XFEATURE_MASK_USER_SUPPORTED | XFEATURE_MASK_SUPERVISOR_SUPPORTED) != XFEATURES_INIT_FPSTATE_HANDLED); - WARN_ON_FPU(!on_boot_cpu); - on_boot_cpu = 0; - if (!boot_cpu_has(X86_FEATURE_XSAVE)) return; @@ -721,14 +716,10 @@ static void fpu__init_disable_system_xstate(void) void __init fpu__init_system_xstate(void) { unsigned int eax, ebx, ecx, edx; - static int on_boot_cpu __initdata = 1; u64 xfeatures; int err; int i; - WARN_ON_FPU(!on_boot_cpu); - on_boot_cpu = 0; - if (!boot_cpu_has(X86_FEATURE_FPU)) { pr_info("x86/fpu: No FPU detected\n"); return; From patchwork Fri Oct 15 01:16:02 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559651 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CD388C433FE for ; Fri, 15 Oct 2021 01:16:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B008D611C8 for ; Fri, 15 Oct 2021 01:16:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232412AbhJOBSN (ORCPT ); Thu, 14 Oct 2021 21:18:13 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45128 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233020AbhJOBSK (ORCPT ); Thu, 14 Oct 2021 21:18:10 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DF805C061570; Thu, 14 Oct 2021 18:16:04 -0700 (PDT) Message-ID: <20211015011538.722854569@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; 
s=2020; t=1634260563; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=i1kz6jZJsBK77znZm+uKwkWxRf7TFLfVlxwf5q30Gk0=; b=DOTmXuXGVblCRPywIpGoR1TjViFut3fHZ+Hmro8oejUAcm0XpiiJN6/5AG86SOuYZILw66 80LF0dR8qbnXhkExOxFYOA8S45RUNdSTH0S4YgXiJuJej8Kl2OfnWnjgmEGBK2APTeRu3+ zFMrA9XF0ODUCAus8V8sDluh492p788FOvGcYhDXPg+EyctlwvJJBQZOLKUzBnF+ULd6Pt ja3gJH8sCVXMe60xZ8bPOYG4NUu6790VOC0msAJ/jBVIjIYsRgQhYoKWgvy5XHtifahyoq XMbAcivpjBX+qBIGKJNJUp+0S/I0KEmLpppVL+ATL+1IEpyrcmoo/3Xwtl4fSw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260563; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=i1kz6jZJsBK77znZm+uKwkWxRf7TFLfVlxwf5q30Gk0=; b=vJchbI6TWMY8QbTB0y9qZ6i4C1kU/HpVMArljnrkGYRNtmcaUn7AOnvoBcWqtUIxR5unhU GuOd6XQ08xpPBCCg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 06/30] x86/fpu: Remove pointless memset in fpu_clone() References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:02 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Zeroing the forked task's FPU register buffer to avoid leaking init optimized stale data into the clone is a pointless exercise for the case where the current task has TIF_NEED_FPU_LOAD set. In that case the FPU register state is copied from current's FPU register buffer which can contain stale init optimized data as well. The alleged information leak is non-existent because this stale init optimized data is never used and cannot leak anywhere. Signed-off-by: Thomas Gleixner --- arch/x86/kernel/fpu/core.c | 6 ------ 1 file changed, 6 deletions(-) --- diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c index 7ada7bd03a32..191269edac97 100644 --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -260,12 +260,6 @@ int fpu_clone(struct task_struct *dst) return 0; /* - * Don't let 'init optimized' areas of the XSAVE area - * leak into the child task: - */ - memset(&dst_fpu->state.xsave, 0, fpu_kernel_xstate_size); - - /* * If the FPU registers are not owned by current just memcpy() the * state. Otherwise save the FPU registers directly into the * child's FPU context, without any memory-to-memory copying.
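[ Editorial sketch, not part of the patch: the copy paths which remain in
  fpu_clone() for user space forks, assuming the locking and helpers stay as
  they are in this series. Both sources can legitimately contain init
  optimized areas, which is why pre-zeroing the child's buffer gained
  nothing. ]

	fpregs_lock();
	if (test_thread_flag(TIF_NEED_FPU_LOAD)) {
		/* Registers are not live: copy current's buffer as is */
		memcpy(&dst_fpu->state, &src_fpu->state, fpu_kernel_xstate_size);
	} else {
		/* Registers are live: save them directly into the child */
		save_fpregs_to_fpstate(dst_fpu);
	}
	fpregs_unlock();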
From patchwork Fri Oct 15 01:16:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559653 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5C86BC433FE for ; Fri, 15 Oct 2021 01:16:13 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 45D4D611ED for ; Fri, 15 Oct 2021 01:16:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234079AbhJOBSQ (ORCPT ); Thu, 14 Oct 2021 21:18:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45140 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232279AbhJOBSM (ORCPT ); Thu, 14 Oct 2021 21:18:12 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 46514C061762; Thu, 14 Oct 2021 18:16:06 -0700 (PDT) Message-ID: <20211015011538.780714235@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260565; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=zDkaoeyhehBlXxI4Eb2zSF3/rxi3Kg/Q69UFd71ACTg=; b=Dgjh1w6GxEJJmwBfRKd3sAM8pdafHeV4V5+vMAzySOBMjhQ31NfvEGpRzcjO3X8ZSqKex7 VbHWu0l5wFvi45smpiP3nKVOARqAoWdPVNoISrySz1i//CDcE//AVx5L9js6OmOXdxqjVf qSsWhtUGAoHWxK2WCNvP5/0O2RaE8vUVfTTBzrww+MiYlKx/Ymw3jOmbpN6ztsm7Xp4fFn BoeREMqF3bFu0CJPBjPAIJVWKeI1t7l+nb8Ee4u9WzdruSgeWtbHez7HPd9/IHiT1EzAFf lyW3ynn27UUKZhwdtbHWdR/Do36H5zTp2W/ZCmvJKblatkHyBg4ULZ9NMTgkTw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260565; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=zDkaoeyhehBlXxI4Eb2zSF3/rxi3Kg/Q69UFd71ACTg=; b=l1kbDKe2XSVlNihUT7kTDoWV/NCXhfdnVQbYYRCULwyW9vrAmH/UiJAnMI1ugAzykrq/8Y /H66hkGdvxYnVaAw== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 07/30] x86/process: Clone FPU in copy_thread() References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:04 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org There is no reason to clone FPU in arch_dup_task_struct(). Quite the contrary it prevents optimizations. Move it to copy_thread() Signed-off-by: Thomas Gleixner --- arch/x86/kernel/process.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) --- diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index 1d9463e3096b..d2227c55e683 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -87,7 +87,7 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) #ifdef CONFIG_VM86 dst->thread.vm86 = NULL; #endif - return fpu_clone(dst); + return 0; } /* @@ -154,6 +154,8 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg, frame->flags = X86_EFLAGS_FIXED; #endif + fpu_clone(p); + /* Kernel thread ? 
*/ if (unlikely(p->flags & PF_KTHREAD)) { p->thread.pkru = pkru_get_init_value(); From patchwork Fri Oct 15 01:16:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559655 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B8360C433EF for ; Fri, 15 Oct 2021 01:16:14 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A408961208 for ; Fri, 15 Oct 2021 01:16:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234779AbhJOBST (ORCPT ); Thu, 14 Oct 2021 21:18:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45154 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233454AbhJOBSN (ORCPT ); Thu, 14 Oct 2021 21:18:13 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D5B6DC061753; Thu, 14 Oct 2021 18:16:07 -0700 (PDT) Message-ID: <20211015011538.839822981@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260566; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=p/XFR0aVQMKKxVzOYz6MXsECkHIm28ch1vdabn5duRg=; b=ikhPjIVeyC3IOe8gjkJYB46r2rQjjJiCfZdPDICGuMX5n7Z5IBAUvkDO82g/ZWV4oVSfB0 r2AwyyiwCflbr/JpV8KGpB+7Ih80Yxzf0UbUk3yftB38zcKlPVwxFoY329+xo5GPJicZH2 orJ5D5Uy9tHbvLesk3LhaLrRMc8c/2sUC8Q6lCfPHjYNDUdMKZYlrjs4AslcH6Y5Hpgkx/ FFdX1VdSQy09pD8O4vYLc0WnU29x5NnIx/y+Y/pjVXxhD4Wxmn+u9YV+sMsfNtYmtz+q7f rLnRJVC52i1RGzzha31CIISxyOIKdXJNfofWerH6fKU54XIBfUGj0xBLx8HLPA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260566; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=p/XFR0aVQMKKxVzOYz6MXsECkHIm28ch1vdabn5duRg=; b=y2iPbKYe3fOsGW0Vw6X8btssEiOozCvl7wx3IRjYAvUR7iiwFtTrXysaEi2ks6PxY/j8+e ilZ8DqpeRi8OdJCg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 08/30] x86/fpu: Do not inherit FPU context for kernel and IO worker threads References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:06 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org There is no reason why kernel and IO worker threads need a full clone of the parent's FPU state. Both are kernel threads which are not supposed to use the FPU. So copying a large state or doing XSAVE() is pointless. Just clean out the minimally required state for those tasks.
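[ Hedged aside, not derived from this patch: kernel threads which do need the
  FPU use kernel_fpu_begin()/kernel_fpu_end() sections, which never load from
  the thread's own fpu->state, so inheriting the parent's register image at
  clone time buys such threads nothing. The thread function below is purely
  illustrative. ]

static int example_kthread_fn(void *unused)
{
	/*
	 * kernel_fpu_begin() saves whatever user state is currently live
	 * and grants temporary FPU use; nothing is read from this
	 * thread's own fpu->state.
	 */
	kernel_fpu_begin();
	/* ... SIMD-using code runs here ... */
	kernel_fpu_end();
	return 0;
}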
Signed-off-by: Thomas Gleixner --- arch/x86/kernel/fpu/core.c | 37 ++++++++++++++++++++++++++----------- 1 file changed, 26 insertions(+), 11 deletions(-) --- diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c index 191269edac97..9a6b195a8a00 100644 --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -212,6 +212,15 @@ static inline void fpstate_init_xstate(struct xregs_state *xsave) xsave->header.xcomp_bv = XCOMP_BV_COMPACTED_FORMAT | xfeatures_mask_all; } +static inline unsigned int init_fpstate_copy_size(void) +{ + if (!use_xsave()) + return fpu_kernel_xstate_size; + + /* XSAVE(S) just needs the legacy and the xstate header part */ + return sizeof(init_fpstate.xsave); +} + static inline void fpstate_init_fxstate(struct fxregs_state *fx) { fx->cwd = 0x37f; @@ -260,6 +269,23 @@ int fpu_clone(struct task_struct *dst) return 0; /* + * Enforce reload for user space tasks and prevent kernel threads + * from trying to save the FPU registers on context switch. + */ + set_tsk_thread_flag(dst, TIF_NEED_FPU_LOAD); + + /* + * No FPU state inheritance for kernel threads and IO + * worker threads. + */ + if (dst->flags & (PF_KTHREAD | PF_IO_WORKER)) { + /* Clear out the minimal state */ + memcpy(&dst_fpu->state, &init_fpstate, + init_fpstate_copy_size()); + return 0; + } + + /* * If the FPU registers are not owned by current just memcpy() the * state. Otherwise save the FPU registers directly into the * child's FPU context, without any memory-to-memory copying. @@ -272,8 +298,6 @@ int fpu_clone(struct task_struct *dst) save_fpregs_to_fpstate(dst_fpu); fpregs_unlock(); - set_tsk_thread_flag(dst, TIF_NEED_FPU_LOAD); - trace_x86_fpu_copy_src(src_fpu); trace_x86_fpu_copy_dst(dst_fpu); @@ -322,15 +346,6 @@ static inline void restore_fpregs_from_init_fpstate(u64 features_mask) pkru_write_default(); } -static inline unsigned int init_fpstate_copy_size(void) -{ - if (!use_xsave()) - return fpu_kernel_xstate_size; - - /* XSAVE(S) just needs the legacy and the xstate header part */ - return sizeof(init_fpstate.xsave); -} - /* * Reset current->fpu memory state to the init values. 
*/ From patchwork Fri Oct 15 01:16:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559657 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 57A3FC433FE for ; Fri, 15 Oct 2021 01:16:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3F50E611CC for ; Fri, 15 Oct 2021 01:16:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234739AbhJOBS0 (ORCPT ); Thu, 14 Oct 2021 21:18:26 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:46394 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234717AbhJOBSP (ORCPT ); Thu, 14 Oct 2021 21:18:15 -0400 Message-ID: <20211015011538.897664678@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260568; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=9mP6eQkXKYOSl75/3hJBuuCyR+HyUFavq+clkqMZqqQ=; b=Z9S7hBzZjC1ZW6j2qBaNa/11Ee0Kvjra2L43wxgCIPqGvUZlboKMz3gYkx6mNeS7KIuaKs E1yBrkmpVcgaLaM/yC0TxjYSck5AK7HPbcMg7brGFY4HueI4n61M28cIQFbBgdVx4ICCoi iE4Ukqw3OP8pZxjM/nQ0KtJMcZNC4w49WCsDl4De8vwlmqxJ5MCFj9P89WzpuQeopIHRcA L9RQJ+TAVLcNeCeyonn9t5MLgIA/t28SzPj5cz58TPSnbZzaTPtzsN2VIlEp9Q0xY1Tra6 ekJ9C9Mfaa930xtQXExlTwNB6XKLJ2cqZC5kHUZDBlJ9gaTJ/YStxN3D6IT0nw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260568; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=9mP6eQkXKYOSl75/3hJBuuCyR+HyUFavq+clkqMZqqQ=; b=XU4b8NrsvZnO7ohwIWu8kEBzsl+iFS0csTDXAHNt3EwsS6KfmuEhyAzI224RJbRlXBtEmB xv1Lw1YjXpbFsmBg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 09/30] x86/fpu: Cleanup xstate xcomp_bv initialization References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:07 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org No point in having this duplicated all over the place with needlessly different defines. Provide a proper initialization function which initializes user buffers properly and make KVM use it. 
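[ For context, a recap of the duplication the changelog refers to, taken from
  the lines removed in the diff below. Both spellings set the same compaction
  bit and are now funneled through xstate_init_xcomp_bv(). ]

	/* FPU core, before: */
	xsave->header.xcomp_bv = XCOMP_BV_COMPACTED_FORMAT | xfeatures_mask_all;

	/* KVM, before: */
	vcpu->arch.guest_fpu->state.xsave.header.xcomp_bv =
		host_xcr0 | XSTATE_COMPACTION_ENABLED;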
Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 4 +++- arch/x86/kernel/fpu/core.c | 35 +++++++++++++++++++---------------- arch/x86/kernel/fpu/init.c | 6 +++--- arch/x86/kernel/fpu/xstate.c | 8 +++----- arch/x86/kernel/fpu/xstate.h | 18 ++++++++++++++++++ arch/x86/kvm/x86.c | 11 +++-------- 6 files changed, 49 insertions(+), 33 deletions(-) create mode 100644 arch/x86/kernel/fpu/xstate.h --- diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index 1503750534f7..df57f1af3a4c 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -80,7 +80,9 @@ static __always_inline __pure bool use_fxsr(void) extern union fpregs_state init_fpstate; -extern void fpstate_init(union fpregs_state *state); +extern void fpstate_init_user(union fpregs_state *state); +extern void fpu_init_fpstate_user(struct fpu *fpu); + #ifdef CONFIG_MATH_EMULATION extern void fpstate_init_soft(struct swregs_state *soft); #else diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c index 9a6b195a8a00..0789f0c3dca9 100644 --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -16,6 +16,8 @@ #include #include +#include "xstate.h" + #define CREATE_TRACE_POINTS #include @@ -203,15 +205,6 @@ void fpu_sync_fpstate(struct fpu *fpu) fpregs_unlock(); } -static inline void fpstate_init_xstate(struct xregs_state *xsave) -{ - /* - * XRSTORS requires these bits set in xcomp_bv, or it will - * trigger #GP: - */ - xsave->header.xcomp_bv = XCOMP_BV_COMPACTED_FORMAT | xfeatures_mask_all; -} - static inline unsigned int init_fpstate_copy_size(void) { if (!use_xsave()) @@ -238,23 +231,33 @@ static inline void fpstate_init_fstate(struct fregs_state *fp) fp->fos = 0xffff0000u; } -void fpstate_init(union fpregs_state *state) +/* + * Used in two places: + * 1) Early boot to setup init_fpstate for non XSAVE systems + * 2) fpu_init_fpstate_user() which is invoked from KVM + */ +void fpstate_init_user(union fpregs_state *state) { - if (!static_cpu_has(X86_FEATURE_FPU)) { + if (!cpu_feature_enabled(X86_FEATURE_FPU)) { fpstate_init_soft(&state->soft); return; } - memset(state, 0, fpu_kernel_xstate_size); + xstate_init_xcomp_bv(&state->xsave, xfeatures_mask_uabi()); - if (static_cpu_has(X86_FEATURE_XSAVES)) - fpstate_init_xstate(&state->xsave); - if (static_cpu_has(X86_FEATURE_FXSR)) + if (cpu_feature_enabled(X86_FEATURE_FXSR)) fpstate_init_fxstate(&state->fxsave); else fpstate_init_fstate(&state->fsave); } -EXPORT_SYMBOL_GPL(fpstate_init); + +#if IS_ENABLED(CONFIG_KVM) +void fpu_init_fpstate_user(struct fpu *fpu) +{ + fpstate_init_user(&fpu->state); +} +EXPORT_SYMBOL_GPL(fpu_init_fpstate_user); +#endif /* Clone current's FPU state on fork */ int fpu_clone(struct task_struct *dst) diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c index 86bc9759fc8b..37f872630a0e 100644 --- a/arch/x86/kernel/fpu/init.c +++ b/arch/x86/kernel/fpu/init.c @@ -121,10 +121,10 @@ static void __init fpu__init_system_mxcsr(void) static void __init fpu__init_system_generic(void) { /* - * Set up the legacy init FPU context. (xstate init might overwrite this - * with a more modern format, if the CPU supports it.) + * Set up the legacy init FPU context. Will be updated when the + * CPU supports XSAVE[S]. 
*/ - fpstate_init(&init_fpstate); + fpstate_init_user(&init_fpstate); fpu__init_system_mxcsr(); } diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c index d6b5f2266143..259951d1eec5 100644 --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -15,10 +15,10 @@ #include #include #include -#include #include -#include + +#include "xstate.h" /* * Although we spell it out in here, the Processor Trace @@ -389,9 +389,7 @@ static void __init setup_init_fpu_buf(void) setup_xstate_features(); print_xstate_features(); - if (boot_cpu_has(X86_FEATURE_XSAVES)) - init_fpstate.xsave.header.xcomp_bv = XCOMP_BV_COMPACTED_FORMAT | - xfeatures_mask_all; + xstate_init_xcomp_bv(&init_fpstate.xsave, xfeatures_mask_all); /* * Init all the features state with header.xfeatures being 0x0 diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h new file mode 100644 index 000000000000..0789a04ee705 --- /dev/null +++ b/arch/x86/kernel/fpu/xstate.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __X86_KERNEL_FPU_XSTATE_H +#define __X86_KERNEL_FPU_XSTATE_H + +#include +#include + +static inline void xstate_init_xcomp_bv(struct xregs_state *xsave, u64 mask) +{ + /* + * XRSTORS requires these bits set in xcomp_bv, or it will + * trigger #GP: + */ + if (cpu_feature_enabled(X86_FEATURE_XSAVES)) + xsave->header.xcomp_bv = mask | XCOMP_BV_COMPACTED_FORMAT; +} + +#endif diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 28ef14155726..743f522ed293 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -10612,14 +10612,6 @@ static int sync_regs(struct kvm_vcpu *vcpu) static void fx_init(struct kvm_vcpu *vcpu) { - if (!vcpu->arch.guest_fpu) - return; - - fpstate_init(&vcpu->arch.guest_fpu->state); - if (boot_cpu_has(X86_FEATURE_XSAVES)) - vcpu->arch.guest_fpu->state.xsave.header.xcomp_bv = - host_xcr0 | XSTATE_COMPACTION_ENABLED; - /* * Ensure guest xcr0 is valid for loading */ @@ -10704,6 +10696,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) pr_err("kvm: failed to allocate vcpu's fpu\n"); goto free_user_fpu; } + + fpu_init_fpstate_user(vcpu->arch.user_fpu); + fpu_init_fpstate_user(vcpu->arch.guest_fpu); fx_init(vcpu); vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu); From patchwork Fri Oct 15 01:16:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559659 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CCE15C433EF for ; Fri, 15 Oct 2021 01:16:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B76F4611ED for ; Fri, 15 Oct 2021 01:16:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234856AbhJOBS3 (ORCPT ); Thu, 14 Oct 2021 21:18:29 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:46408 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233665AbhJOBSQ (ORCPT ); Thu, 14 Oct 2021 21:18:16 -0400 Message-ID: <20211015011538.958107505@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260569; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; 
bh=XXUejelorOMNKR3ap34rldZ7qd2OZMr2QmuOlk+A2D4=; b=BFrN+4JCDCbOhtdf1+IGbi9Mm/Gm5oN9UdBMIJP2KEJ/Jt3T2pFZ6I6FM1qIRa5lezUs64 +hHk2XYjiHNmJT1GXUr+B2LXkc32eKMi6+IDOKc99HXUoKj0AnTcuM94tx+29HqodJw8a5 1YLJseyfZlVbBYQlwgXUDFnqiEaILKyQrwROhE9bwXdmw06V1sM3YQG/LA9UolZEat52WC VePLh625I8UTjPb0rv1JFN96jJuq/h1GL/mLvSVjKaZFF17+WcBX3bHl8Ge8oD4z/HBW1q vLqAqBFBYwQ5074O9BVXA1yaR1nAKdsgNyRHD+BTWElKLHk+ONGCxp15g3MFtw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260569; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=XXUejelorOMNKR3ap34rldZ7qd2OZMr2QmuOlk+A2D4=; b=eYAFVa3kJuo/hMbF9eQ3EvpXXtxLKZtvX2YnD2N/W84AKOagRn/FdBX17bsyARqxj4bdZZ OZP1uDDBwTMLgUAw== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 10/30] x86/fpu/xstate: Provide and use for_each_xfeature() References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:09 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org These loops evaluating xfeature bits are really hard to read. Create an iterator and use for_each_set_bit_from() inside which already does the right thing. No functional changes. Signed-off-by: Thomas Gleixner --- V2: Update changelog - Boris --- arch/x86/kernel/fpu/xstate.c | 56 ++++++++++++++++++--------------------------- 1 file changed, 23 insertions(+), 33 deletions(-) --- diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c index 259951d1eec5..a2bdc0cf8687 100644 --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -4,6 +4,7 @@ * * Author: Suresh Siddha */ +#include #include #include #include @@ -20,6 +21,10 @@ #include "xstate.h" +#define for_each_extended_xfeature(bit, mask) \ + (bit) = FIRST_EXTENDED_XFEATURE; \ + for_each_set_bit_from(bit, (unsigned long *)&(mask), 8 * sizeof(mask)) + /* * Although we spell it out in here, the Processor Trace * xfeature is completely unused. 
We use other mechanisms @@ -184,10 +189,7 @@ static void __init setup_xstate_features(void) xstate_sizes[XFEATURE_SSE] = sizeof_field(struct fxregs_state, xmm_space); - for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) { - if (!xfeature_enabled(i)) - continue; - + for_each_extended_xfeature(i, xfeatures_mask_all) { cpuid_count(XSTATE_CPUID, i, &eax, &ebx, &ecx, &edx); xstate_sizes[i] = eax; @@ -291,20 +293,15 @@ static void __init setup_xstate_comp_offsets(void) xstate_comp_offsets[XFEATURE_SSE] = offsetof(struct fxregs_state, xmm_space); - if (!boot_cpu_has(X86_FEATURE_XSAVES)) { - for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) { - if (xfeature_enabled(i)) - xstate_comp_offsets[i] = xstate_offsets[i]; - } + if (!cpu_feature_enabled(X86_FEATURE_XSAVES)) { + for_each_extended_xfeature(i, xfeatures_mask_all) + xstate_comp_offsets[i] = xstate_offsets[i]; return; } next_offset = FXSAVE_SIZE + XSAVE_HDR_SIZE; - for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) { - if (!xfeature_enabled(i)) - continue; - + for_each_extended_xfeature(i, xfeatures_mask_all) { if (xfeature_is_aligned(i)) next_offset = ALIGN(next_offset, 64); @@ -328,8 +325,8 @@ static void __init setup_supervisor_only_offsets(void) next_offset = FXSAVE_SIZE + XSAVE_HDR_SIZE; - for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) { - if (!xfeature_enabled(i) || !xfeature_is_supervisor(i)) + for_each_extended_xfeature(i, xfeatures_mask_all) { + if (!xfeature_is_supervisor(i)) continue; if (xfeature_is_aligned(i)) @@ -347,9 +344,7 @@ static void __init print_xstate_offset_size(void) { int i; - for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) { - if (!xfeature_enabled(i)) - continue; + for_each_extended_xfeature(i, xfeatures_mask_all) { pr_info("x86/fpu: xstate_offset[%d]: %4d, xstate_sizes[%d]: %4d\n", i, xstate_comp_offsets[i], i, xstate_sizes[i]); } @@ -554,10 +549,7 @@ static void do_extra_xstate_size_checks(void) int paranoid_xstate_size = FXSAVE_SIZE + XSAVE_HDR_SIZE; int i; - for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) { - if (!xfeature_enabled(i)) - continue; - + for_each_extended_xfeature(i, xfeatures_mask_all) { check_xstate_against_struct(i); /* * Supervisor state components can be managed only by @@ -586,7 +578,6 @@ static void do_extra_xstate_size_checks(void) XSTATE_WARN_ON(paranoid_xstate_size != fpu_kernel_xstate_size); } - /* * Get total size of enabled xstates in XCR0 | IA32_XSS. * @@ -969,6 +960,7 @@ void copy_xstate_to_uabi_buf(struct membuf to, struct task_struct *tsk, struct xregs_state *xinit = &init_fpstate.xsave; struct xstate_header header; unsigned int zerofrom; + u64 mask; int i; memset(&header, 0, sizeof(header)); @@ -1022,17 +1014,15 @@ void copy_xstate_to_uabi_buf(struct membuf to, struct task_struct *tsk, zerofrom = offsetof(struct xregs_state, extended_state_area); - for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) { - /* - * The ptrace buffer is in non-compacted XSAVE format. - * In non-compacted format disabled features still occupy - * state space, but there is no state to copy from in the - * compacted init_fpstate. The gap tracking will zero this - * later. - */ - if (!(xfeatures_mask_uabi() & BIT_ULL(i))) - continue; + /* + * The ptrace buffer is in non-compacted XSAVE format. In + * non-compacted format disabled features still occupy state space, + * but there is no state to copy from in the compacted + * init_fpstate. The gap tracking will zero these states. 
+ */ + mask = xfeatures_mask_uabi(); + for_each_extended_xfeature(i, mask) { /* * If there was a feature or alignment gap, zero the space * in the destination buffer. From patchwork Fri Oct 15 01:16:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559661 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BA46FC433F5 for ; Fri, 15 Oct 2021 01:16:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A2357611EE for ; Fri, 15 Oct 2021 01:16:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234761AbhJOBSc (ORCPT ); Thu, 14 Oct 2021 21:18:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45186 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233617AbhJOBSS (ORCPT ); Thu, 14 Oct 2021 21:18:18 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AEF48C061755; Thu, 14 Oct 2021 18:16:12 -0700 (PDT) Message-ID: <20211015011539.017919252@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260571; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=KIumDjyjiVF2o609MDPzkt6ytEuSTGly13lmlNZ3VVg=; b=Ibb4Oz4X1Rre/dyRwgz7HNwHLJgsGi0Bf7sFq56bz6F9w4MY9b3mr6uQlszVoH6GOtqFbU qmjeZeoqBsm9gOSSfQq71OF6DGVo3gQ7UbNeq5izzxS0lWalNRBV5nsLMyXwRj7QjwRrcx LCnw4drHfsatNOkGEYoALrZfhFQ1zv5AqUuo7vvSpTNM8d5uTEkQltBghRV9mKVkqc0KLo UA3TH9s0BW1Sd328qhnHV8/INQjnVkiEMHz87pkDswRraZzSBqW5NdMtFEJSv7T2iuDW+Q 0u4BILIMfljrG5+YnGxAS3jKr8LrW4HvctxMhqQSuRNQleKW/bruVTQA7GG5aw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260571; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=KIumDjyjiVF2o609MDPzkt6ytEuSTGly13lmlNZ3VVg=; b=Y/ueOgmP7SS/CY0eHEgZ1xIFWnaflpBQpmithNc33jnhY4MKKpmYtkgPTmv19SUM2qMpsA l7KQu2UMEfddPECA== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 11/30] x86/fpu/xstate: Mark all init only functions __init References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:10 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org No point to keep them around after boot. 
Signed-off-by: Thomas Gleixner --- arch/x86/kernel/fpu/xstate.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) --- diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c index a2bdc0cf8687..b35ecfa8d450 100644 --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -462,7 +462,7 @@ static int validate_user_xstate_header(const struct xstate_header *hdr) return 0; } -static void __xstate_dump_leaves(void) +static void __init __xstate_dump_leaves(void) { int i; u32 eax, ebx, ecx, edx; @@ -502,7 +502,7 @@ static void __xstate_dump_leaves(void) * that our software representation matches what the CPU * tells us about the state's size. */ -static void check_xstate_against_struct(int nr) +static void __init check_xstate_against_struct(int nr) { /* * Ask the CPU for the size of the state. @@ -544,7 +544,7 @@ static void check_xstate_against_struct(int nr) * covered by these checks. Only the size of the buffer for task->fpu * is checked here. */ -static void do_extra_xstate_size_checks(void) +static void __init do_extra_xstate_size_checks(void) { int paranoid_xstate_size = FXSAVE_SIZE + XSAVE_HDR_SIZE; int i; @@ -646,7 +646,7 @@ static unsigned int __init get_xsave_size(void) * Will the runtime-enumerated 'xstate_size' fit in the init * task's statically-allocated buffer? */ -static bool is_supported_xstate_size(unsigned int test_xstate_size) +static bool __init is_supported_xstate_size(unsigned int test_xstate_size) { if (test_xstate_size <= sizeof(union fpregs_state)) return true; @@ -691,7 +691,7 @@ static int __init init_xstate_size(void) * We enabled the XSAVE hardware, but something went wrong and * we can not use it. Disable it. */ -static void fpu__init_disable_system_xstate(void) +static void __init fpu__init_disable_system_xstate(void) { xfeatures_mask_all = 0; cr4_clear_bits(X86_CR4_OSXSAVE); From patchwork Fri Oct 15 01:16:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559663 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E037BC433F5 for ; Fri, 15 Oct 2021 01:16:43 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C8272611C8 for ; Fri, 15 Oct 2021 01:16:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234782AbhJOBSr (ORCPT ); Thu, 14 Oct 2021 21:18:47 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:46438 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234103AbhJOBST (ORCPT ); Thu, 14 Oct 2021 21:18:19 -0400 Message-ID: <20211015011539.076072399@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260572; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=z0ay49acq2DOw6ZhSUwwocc5xvCRCyVdfe0cuef4++4=; b=ijpWP5+VjJ7p+Nata20L00uTLKtB+ei1A8k+bao0Vq/m9rnB7yOZJDDNK9SGQJDGePn5RB JMGTPnMoa5NXLrq6cp0tCFkejIpjIuqLDCm9+tA8xho7FNoTfa0VTxduMMoiCJG+b3VfR0 xFto74TOm/nieN5e/mhchiUqSkwjQcIsOKoZ0zOkcdZG0cc9cMm9i9TuC8NiumXXf0bRgP N3HcbPuL9piVQ5OJmeVZF84ay9R90Ut34PDyIwp4Eqeoof8pCRyrPhkNKMd/ih4NAkJm8Y 
/GdvOcXVMtzjS2xYw8/zkOT21A0JOwPexAfRmpf0pBci+15ZSjAqu9YCqHOB8g== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260572; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=z0ay49acq2DOw6ZhSUwwocc5xvCRCyVdfe0cuef4++4=; b=tYroHFdbMnAsS/edjBdktbrl47kcNbEEgylG4Eo6SRpX0TrX9lf1DoHNxCpe5pcx/yG5qq ncsROVD4xkxSnwBA== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 12/30] x86/fpu: Move KVMs FPU swapping to FPU core References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:12 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Swapping the host/guest FPU is directly fiddling with FPU internals which requires 5 exports. The upcoming support of dynamically enabled states would even need more. Implement a swap function in the FPU core code and export that instead. Signed-off-by: Thomas Gleixner Reviewed-by: Paolo Bonzini Cc: kvm@vger.kernel.org --- V2: Fix changelog and comments - Boris --- arch/x86/include/asm/fpu/api.h | 8 ++++++- arch/x86/include/asm/fpu/internal.h | 15 +---------- arch/x86/kernel/fpu/core.c | 30 +++++++++++++++++++--- arch/x86/kernel/fpu/init.c | 1 +- arch/x86/kernel/fpu/xstate.c | 1 +- arch/x86/kvm/x86.c | 51 ++++++++------------------------------ arch/x86/mm/extable.c | 2 +- 7 files changed, 48 insertions(+), 60 deletions(-) --- diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h index 23bef08a8388..d2b8603a9c7e 100644 --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -12,6 +12,8 @@ #define _ASM_X86_FPU_API_H #include +#include + /* * Use kernel_fpu_begin/end() if you intend to use FPU in kernel context. 
It * disables preemption so be careful if you intend to use it for long periods @@ -108,4 +110,10 @@ extern int cpu_has_xfeatures(u64 xfeatures_mask, const char **feature_name); static inline void update_pasid(void) { } +/* fpstate-related functions which are exported to KVM */ +extern void fpu_init_fpstate_user(struct fpu *fpu); + +/* KVM specific functions */ +extern void fpu_swap_kvm_fpu(struct fpu *save, struct fpu *rstor, u64 restore_mask); + #endif /* _ASM_X86_FPU_API_H */ diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index df57f1af3a4c..3ac55ba55782 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -74,14 +74,8 @@ static __always_inline __pure bool use_fxsr(void) return static_cpu_has(X86_FEATURE_FXSR); } -/* - * fpstate handling functions: - */ - extern union fpregs_state init_fpstate; - extern void fpstate_init_user(union fpregs_state *state); -extern void fpu_init_fpstate_user(struct fpu *fpu); #ifdef CONFIG_MATH_EMULATION extern void fpstate_init_soft(struct swregs_state *soft); @@ -381,12 +375,7 @@ static inline int os_xrstor_safe(struct xregs_state *xstate, u64 mask) return err; } -extern void __restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask); - -static inline void restore_fpregs_from_fpstate(union fpregs_state *fpstate) -{ - __restore_fpregs_from_fpstate(fpstate, xfeatures_mask_fpstate()); -} +extern void restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask); extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size); @@ -467,7 +456,7 @@ static inline void fpregs_restore_userregs(void) */ mask = xfeatures_mask_restore_user() | xfeatures_mask_supervisor(); - __restore_fpregs_from_fpstate(&fpu->state, mask); + restore_fpregs_from_fpstate(&fpu->state, mask); fpregs_activate(fpu); fpu->last_cpu = cpu; diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c index 0789f0c3dca9..023bfe857907 100644 --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -124,9 +124,8 @@ void save_fpregs_to_fpstate(struct fpu *fpu) asm volatile("fnsave %[fp]; fwait" : [fp] "=m" (fpu->state.fsave)); frstor(&fpu->state.fsave); } -EXPORT_SYMBOL(save_fpregs_to_fpstate); -void __restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask) +void restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask) { /* * AMD K7/K8 and later CPUs up to Zen don't save/restore @@ -151,7 +150,31 @@ void __restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask) frstor(&fpstate->fsave); } } -EXPORT_SYMBOL_GPL(__restore_fpregs_from_fpstate); + +#if IS_ENABLED(CONFIG_KVM) +void fpu_swap_kvm_fpu(struct fpu *save, struct fpu *rstor, u64 restore_mask) +{ + fpregs_lock(); + + if (save) { + if (test_thread_flag(TIF_NEED_FPU_LOAD)) { + memcpy(&save->state, ¤t->thread.fpu.state, + fpu_kernel_xstate_size); + } else { + save_fpregs_to_fpstate(save); + } + } + + if (rstor) { + restore_mask &= xfeatures_mask_fpstate(); + restore_fpregs_from_fpstate(&rstor->state, restore_mask); + } + + fpregs_mark_activate(); + fpregs_unlock(); +} +EXPORT_SYMBOL_GPL(fpu_swap_kvm_fpu); +#endif void kernel_fpu_begin_mask(unsigned int kfpu_mask) { @@ -457,7 +480,6 @@ void fpregs_mark_activate(void) fpu->last_cpu = smp_processor_id(); clear_thread_flag(TIF_NEED_FPU_LOAD); } -EXPORT_SYMBOL_GPL(fpregs_mark_activate); /* * x87 math exception handling: diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c index 37f872630a0e..545c91c723b8 100644 --- 
a/arch/x86/kernel/fpu/init.c +++ b/arch/x86/kernel/fpu/init.c @@ -136,7 +136,6 @@ static void __init fpu__init_system_generic(void) * components into a single, continuous memory block: */ unsigned int fpu_kernel_xstate_size __ro_after_init; -EXPORT_SYMBOL_GPL(fpu_kernel_xstate_size); /* Get alignment of the TYPE. */ #define TYPE_ALIGN(TYPE) offsetof(struct { char x; TYPE test; }, test) diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c index b35ecfa8d450..68355605ca75 100644 --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -65,7 +65,6 @@ static short xsave_cpuid_features[] __initdata = { * XSAVE buffer, both supervisor and user xstates. */ u64 xfeatures_mask_all __ro_after_init; -EXPORT_SYMBOL_GPL(xfeatures_mask_all); static unsigned int xstate_offsets[XFEATURE_MAX] __ro_after_init = { [ 0 ... XFEATURE_MAX - 1] = -1}; diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 743f522ed293..b9fda03162bd 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -68,7 +68,9 @@ #include #include #include -#include /* Ugh! */ +#include +#include +#include #include #include #include @@ -9899,58 +9901,27 @@ static int complete_emulated_mmio(struct kvm_vcpu *vcpu) return 0; } -static void kvm_save_current_fpu(struct fpu *fpu) -{ - /* - * If the target FPU state is not resident in the CPU registers, just - * memcpy() from current, else save CPU state directly to the target. - */ - if (test_thread_flag(TIF_NEED_FPU_LOAD)) - memcpy(&fpu->state, ¤t->thread.fpu.state, - fpu_kernel_xstate_size); - else - save_fpregs_to_fpstate(fpu); -} - /* Swap (qemu) user FPU context for the guest FPU context. */ static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu) { - fpregs_lock(); - - kvm_save_current_fpu(vcpu->arch.user_fpu); - /* - * Guests with protected state can't have it set by the hypervisor, - * so skip trying to set it. + * Guests with protected state have guest_fpu == NULL which makes + * the swap only save the host state. Exclude PKRU from restore as + * it is restored separately in kvm_x86_ops.run(). */ - if (vcpu->arch.guest_fpu) - /* PKRU is separately restored in kvm_x86_ops.run. */ - __restore_fpregs_from_fpstate(&vcpu->arch.guest_fpu->state, - ~XFEATURE_MASK_PKRU); - - fpregs_mark_activate(); - fpregs_unlock(); - + fpu_swap_kvm_fpu(vcpu->arch.user_fpu, vcpu->arch.guest_fpu, + ~XFEATURE_MASK_PKRU); trace_kvm_fpu(1); } /* When vcpu_run ends, restore user space FPU context. */ static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu) { - fpregs_lock(); - /* - * Guests with protected state can't have it read by the hypervisor, - * so skip trying to save it. + * Guests with protected state have guest_fpu == NULL which makes + * swap only restore the host state. 
*/ - if (vcpu->arch.guest_fpu) - kvm_save_current_fpu(vcpu->arch.guest_fpu); - - restore_fpregs_from_fpstate(&vcpu->arch.user_fpu->state); - - fpregs_mark_activate(); - fpregs_unlock(); - + fpu_swap_kvm_fpu(vcpu->arch.guest_fpu, vcpu->arch.user_fpu, ~0ULL); ++vcpu->stat.fpu_reload; trace_kvm_fpu(0); } diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c index f37e290e6d0a..043ec385af45 100644 --- a/arch/x86/mm/extable.c +++ b/arch/x86/mm/extable.c @@ -47,7 +47,7 @@ static bool ex_handler_fprestore(const struct exception_table_entry *fixup, WARN_ONCE(1, "Bad FPU state detected at %pB, reinitializing FPU registers.", (void *)instruction_pointer(regs)); - __restore_fpregs_from_fpstate(&init_fpstate, xfeatures_mask_fpstate()); + restore_fpregs_from_fpstate(&init_fpstate, xfeatures_mask_fpstate()); return true; } From patchwork Fri Oct 15 01:16:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559665 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7312CC433F5 for ; Fri, 15 Oct 2021 01:16:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5A964611C2 for ; Fri, 15 Oct 2021 01:16:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234799AbhJOBSu (ORCPT ); Thu, 14 Oct 2021 21:18:50 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:46450 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234793AbhJOBSU (ORCPT ); Thu, 14 Oct 2021 21:18:20 -0400 Message-ID: <20211015011539.134065207@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260574; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=rBWYhc5n/8kp/duF9LKTPxmhxOYdYSqco7mavwH2s5o=; b=TEBo2C9VZaONZ6EkPt4AxZ3OnxYEpT00DEQiMJiVi6EMKc2EdnDHxozYyf9CS6Fc64JTFq MYCTInBR08CD4ROk2alkX6/tb/8r+1OiOUfswk6T8ZSKSwH407MSgZ6+Hn0fc0F3KlqGH9 bbOCwtBM0MtAukvN5ewrslCUDhXDGlGgK4ZA7lBj6/5Ctf5D4bPCX54Z0tHoc81/2Gx+RD GhKLj8ZbEMmqV63mttoRFAHoTG3k/Y/TUp+Zr1P4m1MqmP69q1AdiBf8GSfzUVzAEsABeF dAO0gMx+DVP43UDonx9nVnhPUTSpVz6PRtqXpx7Ren5oLOEGsdtIfNVZWZtiMw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260574; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=rBWYhc5n/8kp/duF9LKTPxmhxOYdYSqco7mavwH2s5o=; b=KbOBAChsm0nbJC5Dms91A5WVjjDKzplgjsvhKsLPgOwFHu5rjJLrdmtBkIwg1Ge2gWW3Xr PoZp+jxHxUWOXcBg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 13/30] x86/fpu: Replace KVMs home brewed FPU copy from user References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:13 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Copying a user space buffer to the memory buffer is already available in the FPU core. 
The copy mechanism in KVM lacks sanity checks and needs to use cpuid() to lookup the offset of each component, while the FPU core has this information cached. Make the FPU core variant accessible for KVM and replace the home brewed mechanism. Signed-off-by: Thomas Gleixner Cc: kvm@vger.kernel.org Cc: Paolo Bonzini --- arch/x86/include/asm/fpu/api.h | 2 +- arch/x86/kernel/fpu/core.c | 38 +++++++++++++++++++++- arch/x86/kernel/fpu/xstate.c | 3 +-- arch/x86/kvm/x86.c | 74 +------------------------------------------ 4 files changed, 43 insertions(+), 74 deletions(-) --- diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h index d2b8603a9c7e..77a732ea4cda 100644 --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -116,4 +116,6 @@ extern void fpu_init_fpstate_user(struct fpu *fpu); /* KVM specific functions */ extern void fpu_swap_kvm_fpu(struct fpu *save, struct fpu *rstor, u64 restore_mask); +extern int fpu_copy_kvm_uabi_to_fpstate(struct fpu *fpu, const void *buf, u64 xcr0, u32 *pkru); + #endif /* _ASM_X86_FPU_API_H */ diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c index 023bfe857907..65fc87760011 100644 --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -174,7 +174,43 @@ void fpu_swap_kvm_fpu(struct fpu *save, struct fpu *rstor, u64 restore_mask) fpregs_unlock(); } EXPORT_SYMBOL_GPL(fpu_swap_kvm_fpu); -#endif + +int fpu_copy_kvm_uabi_to_fpstate(struct fpu *fpu, const void *buf, u64 xcr0, + u32 *vpkru) +{ + union fpregs_state *kstate = &fpu->state; + const union fpregs_state *ustate = buf; + struct pkru_state *xpkru; + int ret; + + if (!cpu_feature_enabled(X86_FEATURE_XSAVE)) { + if (ustate->xsave.header.xfeatures & ~XFEATURE_MASK_FPSSE) + return -EINVAL; + if (ustate->fxsave.mxcsr & ~mxcsr_feature_mask) + return -EINVAL; + memcpy(&kstate->fxsave, &ustate->fxsave, sizeof(ustate->fxsave)); + return 0; + } + + if (ustate->xsave.header.xfeatures & ~xcr0) + return -EINVAL; + + ret = copy_uabi_from_kernel_to_xstate(&kstate->xsave, ustate); + if (ret) + return ret; + + /* Retrieve PKRU if not in init state */ + if (kstate->xsave.header.xfeatures & XFEATURE_MASK_PKRU) { + xpkru = get_xsave_addr(&kstate->xsave, XFEATURE_PKRU); + *vpkru = xpkru->pkru; + } + + /* Ensure that XCOMP_BV is set up for XSAVES */ + xstate_init_xcomp_bv(&kstate->xsave, xfeatures_mask_uabi()); + return 0; +} +EXPORT_SYMBOL_GPL(fpu_copy_kvm_uabi_to_fpstate); +#endif /* CONFIG_KVM */ void kernel_fpu_begin_mask(unsigned int kfpu_mask) { diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c index 68355605ca75..eeeb807b9717 100644 --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -1134,8 +1134,7 @@ static int copy_uabi_to_xstate(struct xregs_state *xsave, const void *kbuf, /* * Convert from a ptrace standard-format kernel buffer to kernel XSAVE[S] - * format and copy to the target thread. This is called from - * xstateregs_set(). + * format and copy to the target thread. Used by ptrace and KVM. 
*/ int copy_uabi_from_kernel_to_xstate(struct xregs_state *xsave, const void *kbuf) { diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index b9fda03162bd..81dd70be631f 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4695,8 +4695,6 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu, return 0; } -#define XSTATE_COMPACTION_ENABLED (1ULL << 63) - static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu) { struct xregs_state *xsave = &vcpu->arch.guest_fpu->state.xsave; @@ -4740,50 +4738,6 @@ static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu) } } -static void load_xsave(struct kvm_vcpu *vcpu, u8 *src) -{ - struct xregs_state *xsave = &vcpu->arch.guest_fpu->state.xsave; - u64 xstate_bv = *(u64 *)(src + XSAVE_HDR_OFFSET); - u64 valid; - - /* - * Copy legacy XSAVE area, to avoid complications with CPUID - * leaves 0 and 1 in the loop below. - */ - memcpy(xsave, src, XSAVE_HDR_OFFSET); - - /* Set XSTATE_BV and possibly XCOMP_BV. */ - xsave->header.xfeatures = xstate_bv; - if (boot_cpu_has(X86_FEATURE_XSAVES)) - xsave->header.xcomp_bv = host_xcr0 | XSTATE_COMPACTION_ENABLED; - - /* - * Copy each region from the non-compacted offset to the - * possibly compacted offset. - */ - valid = xstate_bv & ~XFEATURE_MASK_FPSSE; - while (valid) { - u32 size, offset, ecx, edx; - u64 xfeature_mask = valid & -valid; - int xfeature_nr = fls64(xfeature_mask) - 1; - - cpuid_count(XSTATE_CPUID, xfeature_nr, - &size, &offset, &ecx, &edx); - - if (xfeature_nr == XFEATURE_PKRU) { - memcpy(&vcpu->arch.pkru, src + offset, - sizeof(vcpu->arch.pkru)); - } else { - void *dest = get_xsave_addr(xsave, xfeature_nr); - - if (dest) - memcpy(dest, src + offset, size); - } - - valid -= xfeature_mask; - } -} - static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu, struct kvm_xsave *guest_xsave) { @@ -4802,37 +4756,15 @@ static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu, } } -#define XSAVE_MXCSR_OFFSET 24 - static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu, struct kvm_xsave *guest_xsave) { - u64 xstate_bv; - u32 mxcsr; - if (!vcpu->arch.guest_fpu) return 0; - xstate_bv = *(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)]; - mxcsr = *(u32 *)&guest_xsave->region[XSAVE_MXCSR_OFFSET / sizeof(u32)]; - - if (boot_cpu_has(X86_FEATURE_XSAVE)) { - /* - * Here we allow setting states that are not present in - * CPUID leaf 0xD, index 0, EDX:EAX. This is for compatibility - * with old userspace. 
- */ - if (xstate_bv & ~supported_xcr0 || mxcsr & ~mxcsr_feature_mask) - return -EINVAL; - load_xsave(vcpu, (u8 *)guest_xsave->region); - } else { - if (xstate_bv & ~XFEATURE_MASK_FPSSE || - mxcsr & ~mxcsr_feature_mask) - return -EINVAL; - memcpy(&vcpu->arch.guest_fpu->state.fxsave, - guest_xsave->region, sizeof(struct fxregs_state)); - } - return 0; + return fpu_copy_kvm_uabi_to_fpstate(vcpu->arch.guest_fpu, + guest_xsave->region, + supported_xcr0, &vcpu->arch.pkru); } static void kvm_vcpu_ioctl_x86_get_xcrs(struct kvm_vcpu *vcpu, From patchwork Fri Oct 15 01:16:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559669 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4B1FBC433F5 for ; Fri, 15 Oct 2021 01:16:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 31525611C2 for ; Fri, 15 Oct 2021 01:16:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234733AbhJOBS6 (ORCPT ); Thu, 14 Oct 2021 21:18:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45230 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234738AbhJOBSZ (ORCPT ); Thu, 14 Oct 2021 21:18:25 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 43D1BC061779; Thu, 14 Oct 2021 18:16:17 -0700 (PDT) Message-ID: <20211015011539.191902137@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260575; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=LZ4s65C2Wf05ubve26y4ecylgzNISNCMKFTBqqOPuVM=; b=jBLoYWz0Kyuf9IQvDQi7QlQNLA4f/z1vivHCdA1xMoNCqXnUE/s5H3RtU7zXcQx4JpLtYk njY8yfWKNieAhOxdoHazuFgAtBGKwdXqEXxBfJQ9W8yGrXIFeL9IVSi44LnHS+MVACWoa3 QSa+LOTYF9DcjYMti1GdABavOxGHq9IC/SSODsLFJ/V5tZOySpb8F4/Qk1d4HsHhNDHi+N X9KUQXhbVntKTRMOgZWtH2CdDsi2a/JtdwvXhTXtstfnE1Txo1KMqqws520k5sgVk7qtSD 7u/6ElE6Ay+CmWhQEW+Sq66XHwUbja61j8t0Xap5sOUsiZot4cmOeL8+3ltZgA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260575; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=LZ4s65C2Wf05ubve26y4ecylgzNISNCMKFTBqqOPuVM=; b=Uou+9QaIU4ceAj51/cc5oSHG4o5f3dBbT1Fc3obyXD10jBQMC22MYVUnCc877HU1+5K6g+ x81Fng+eLbVDECAA== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 14/30] x86/fpu: Rework copy_xstate_to_uabi_buf() References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:15 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Prepare for replacing the KVM copy xstate to user function by extending copy_xstate_to_uabi_buf() with a pkru argument which allows the caller to hand in the pkru value, which is required for KVM because the guest PKRU is not accessible via current. Fixup all callsites accordingly. 
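Purely as an illustration of the refactoring pattern used here (invented names, plain C, not kernel code): the existing entry point keeps its signature and becomes a thin wrapper around a new worker that takes the extra value as an explicit argument, so callers which can derive the value from the task stay untouched while a new caller can hand in its own value.

#include <stdint.h>
#include <stdio.h>

struct ctx {
        int id;
        uint32_t pkru;          /* value normally derived from the context */
};

/* New worker: the caller supplies the value explicitly. */
static void __dump_ctx(FILE *out, const struct ctx *c, uint32_t pkru_val)
{
        fprintf(out, "ctx %d: pkru=%#010x\n", c->id, (unsigned int)pkru_val);
}

/* Old entry point, unchanged for existing callers: derives the value itself. */
static void dump_ctx(FILE *out, const struct ctx *c)
{
        __dump_ctx(out, c, c->pkru);
}

int main(void)
{
        struct ctx c = { .id = 1, .pkru = 0x55555554 };

        dump_ctx(stdout, &c);           /* existing call sites */
        __dump_ctx(stdout, &c, 0);      /* new caller passing its own value */
        return 0;
}

The patch below introduces the worker; the following patch then lets KVM call it and hand in the guest's PKRU value instead of the one reachable via current.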
Signed-off-by: Thomas Gleixner --- arch/x86/kernel/fpu/xstate.c | 34 ++++++++++++++++++++++++++-------- arch/x86/kernel/fpu/xstate.h | 3 +++ 2 files changed, 29 insertions(+), 8 deletions(-) --- diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c index eeeb807b9717..b2537a8203ee 100644 --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -940,9 +940,10 @@ static void copy_feature(bool from_xstate, struct membuf *to, void *xstate, } /** - * copy_xstate_to_uabi_buf - Copy kernel saved xstate to a UABI buffer + * __copy_xstate_to_uabi_buf - Copy kernel saved xstate to a UABI buffer * @to: membuf descriptor - * @tsk: The task from which to copy the saved xstate + * @xsave: The xsave from which to copy + * @pkru_val: The PKRU value to store in the PKRU component * @copy_mode: The requested copy mode * * Converts from kernel XSAVE or XSAVES compacted format to UABI conforming @@ -951,11 +952,10 @@ static void copy_feature(bool from_xstate, struct membuf *to, void *xstate, * * It supports partial copy but @to.pos always starts from zero. */ -void copy_xstate_to_uabi_buf(struct membuf to, struct task_struct *tsk, - enum xstate_copy_mode copy_mode) +void __copy_xstate_to_uabi_buf(struct membuf to, struct xregs_state *xsave, + u32 pkru_val, enum xstate_copy_mode copy_mode) { const unsigned int off_mxcsr = offsetof(struct fxregs_state, mxcsr); - struct xregs_state *xsave = &tsk->thread.fpu.state.xsave; struct xregs_state *xinit = &init_fpstate.xsave; struct xstate_header header; unsigned int zerofrom; @@ -1033,10 +1033,9 @@ void copy_xstate_to_uabi_buf(struct membuf to, struct task_struct *tsk, struct pkru_state pkru = {0}; /* * PKRU is not necessarily up to date in the - * thread's XSAVE buffer. Fill this part from the - * per-thread storage. + * XSAVE buffer. Use the provided value. */ - pkru.pkru = tsk->thread.pkru; + pkru.pkru = pkru_val; membuf_write(&to, &pkru, sizeof(pkru)); } else { copy_feature(header.xfeatures & BIT_ULL(i), &to, @@ -1056,6 +1055,25 @@ void copy_xstate_to_uabi_buf(struct membuf to, struct task_struct *tsk, membuf_zero(&to, to.left); } +/** + * copy_xstate_to_uabi_buf - Copy kernel saved xstate to a UABI buffer + * @to: membuf descriptor + * @tsk: The task from which to copy the saved xstate + * @copy_mode: The requested copy mode + * + * Converts from kernel XSAVE or XSAVES compacted format to UABI conforming + * format, i.e. from the kernel internal hardware dependent storage format + * to the requested @mode. UABI XSTATE is always uncompacted! + * + * It supports partial copy but @to.pos always starts from zero. 
+ */ +void copy_xstate_to_uabi_buf(struct membuf to, struct task_struct *tsk, + enum xstate_copy_mode copy_mode) +{ + __copy_xstate_to_uabi_buf(to, &tsk->thread.fpu.state.xsave, + tsk->thread.pkru, copy_mode); +} + static int copy_from_buffer(void *dst, unsigned int offset, unsigned int size, const void *kbuf, const void __user *ubuf) { diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h index 0789a04ee705..81f4202781ac 100644 --- a/arch/x86/kernel/fpu/xstate.h +++ b/arch/x86/kernel/fpu/xstate.h @@ -15,4 +15,7 @@ static inline void xstate_init_xcomp_bv(struct xregs_state *xsave, u64 mask) xsave->header.xcomp_bv = mask | XCOMP_BV_COMPACTED_FORMAT; } +extern void __copy_xstate_to_uabi_buf(struct membuf to, struct xregs_state *xsave, + u32 pkru_val, enum xstate_copy_mode copy_mode); + #endif From patchwork Fri Oct 15 01:16:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559671 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EE190C433F5 for ; Fri, 15 Oct 2021 01:16:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D154A611CE for ; Fri, 15 Oct 2021 01:16:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234744AbhJOBTC (ORCPT ); Thu, 14 Oct 2021 21:19:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45146 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234870AbhJOBSa (ORCPT ); Thu, 14 Oct 2021 21:18:30 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D52BEC061760; Thu, 14 Oct 2021 18:16:18 -0700 (PDT) Message-ID: <20211015011539.244101845@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260577; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=DlvKqi0jmCE/KH+PUdpmqLIW5TpRgR26D2pqirU4Qm4=; b=goG7LKJk/nOsHyLLiWtn/5g2lKZZzKdP4EWYg9YgCZs8Bu7cEnJ3+Ys9gDVMMBlgoc7Ud9 rE5CNO7TUtCzbHUv4o9HWh5hArj+01pTi/FGodhmCrg2X/Cnrru0QzxxaCMYJAD17B6vid g09iOUtHgzG6MfCjp7bJoA8uNLH1aIZaLi8jLPRe++ZnPlGJAMlH0dwm/kaJKsm2gEQ/KO ME51MK/gGqJL9BEEQ8B0hNd7DjVzNxyhmGCjKqM2RSfPyKSEsPe2UyX2h+77kICCp83n8P 3SDduJ3nUhRo46ceP7/sJeaHui40CbkWsZEJwQv9IbkbDdJY1tfUxZJD0608yQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260577; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=DlvKqi0jmCE/KH+PUdpmqLIW5TpRgR26D2pqirU4Qm4=; b=TmJDR9JpvyidIUNlECYuVs5IZtWTaWZRIlAqMbB8M4litWm7ED8Ev9jlkb+uRoi5Ch88hF AIfOaEltVG4q2sDA== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. 
Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 15/30] x86/fpu: Replace KVMs home brewed FPU copy to user References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:17 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Similar to the copy from user function the FPU core has this already implemented with all bells and whistles. Get rid of the duplicated code and use the core functionality. Signed-off-by: Thomas Gleixner Cc: Paolo Bonzini Cc: kvm@vger.kernel.org --- V2: Fix the non xsave path - Paolo Fix changelog, comments and function name - Boris, Paolo, Sean --- arch/x86/include/asm/fpu/api.h | 1 arch/x86/kernel/fpu/core.c | 18 +++++++++++++ arch/x86/kvm/x86.c | 56 ++--------------------------------------- 3 files changed, 22 insertions(+), 53 deletions(-) --- --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -117,5 +117,6 @@ extern void fpu_init_fpstate_user(struct extern void fpu_swap_kvm_fpu(struct fpu *save, struct fpu *rstor, u64 restore_mask); extern int fpu_copy_kvm_uabi_to_fpstate(struct fpu *fpu, const void *buf, u64 xcr0, u32 *pkru); +extern void fpu_copy_fpstate_to_kvm_uabi(struct fpu *fpu, void *buf, unsigned int size, u32 pkru); #endif /* _ASM_X86_FPU_API_H */ --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -175,6 +175,24 @@ void fpu_swap_kvm_fpu(struct fpu *save, } EXPORT_SYMBOL_GPL(fpu_swap_kvm_fpu); +void fpu_copy_fpstate_to_kvm_uabi(struct fpu *fpu, void *buf, + unsigned int size, u32 pkru) +{ + union fpregs_state *kstate = &fpu->state; + union fpregs_state *ustate = buf; + struct membuf mb = { .p = buf, .left = size }; + + if (cpu_feature_enabled(X86_FEATURE_XSAVE)) { + __copy_xstate_to_uabi_buf(mb, &kstate->xsave, pkru, + XSTATE_COPY_XSAVE); + } else { + memcpy(&ustate->fxsave, &kstate->fxsave, sizeof(ustate->fxsave)); + /* Make it restorable on a XSAVE enabled host */ + ustate->xsave.header.xfeatures = XFEATURE_MASK_FPSSE; + } +} +EXPORT_SYMBOL_GPL(fpu_copy_fpstate_to_kvm_uabi); + int fpu_copy_kvm_uabi_to_fpstate(struct fpu *fpu, const void *buf, u64 xcr0, u32 *vpkru) { --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4695,65 +4695,15 @@ static int kvm_vcpu_ioctl_x86_set_debugr return 0; } -static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu) -{ - struct xregs_state *xsave = &vcpu->arch.guest_fpu->state.xsave; - u64 xstate_bv = xsave->header.xfeatures; - u64 valid; - - /* - * Copy legacy XSAVE area, to avoid complications with CPUID - * leaves 0 and 1 in the loop below. - */ - memcpy(dest, xsave, XSAVE_HDR_OFFSET); - - /* Set XSTATE_BV */ - xstate_bv &= vcpu->arch.guest_supported_xcr0 | XFEATURE_MASK_FPSSE; - *(u64 *)(dest + XSAVE_HDR_OFFSET) = xstate_bv; - - /* - * Copy each region from the possibly compacted offset to the - * non-compacted offset. 
- */ - valid = xstate_bv & ~XFEATURE_MASK_FPSSE; - while (valid) { - u32 size, offset, ecx, edx; - u64 xfeature_mask = valid & -valid; - int xfeature_nr = fls64(xfeature_mask) - 1; - void *src; - - cpuid_count(XSTATE_CPUID, xfeature_nr, - &size, &offset, &ecx, &edx); - - if (xfeature_nr == XFEATURE_PKRU) { - memcpy(dest + offset, &vcpu->arch.pkru, - sizeof(vcpu->arch.pkru)); - } else { - src = get_xsave_addr(xsave, xfeature_nr); - if (src) - memcpy(dest + offset, src, size); - } - - valid -= xfeature_mask; - } -} - static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu, struct kvm_xsave *guest_xsave) { if (!vcpu->arch.guest_fpu) return; - if (boot_cpu_has(X86_FEATURE_XSAVE)) { - memset(guest_xsave, 0, sizeof(struct kvm_xsave)); - fill_xsave((u8 *) guest_xsave->region, vcpu); - } else { - memcpy(guest_xsave->region, - &vcpu->arch.guest_fpu->state.fxsave, - sizeof(struct fxregs_state)); - *(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)] = - XFEATURE_MASK_FPSSE; - } + fpu_copy_fpstate_to_kvm_uabi(vcpu->arch.guest_fpu, guest_xsave->region, + sizeof(guest_xsave->region), + vcpu->arch.pkru); } static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu, From patchwork Fri Oct 15 01:16:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559667 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 17725C433EF for ; Fri, 15 Oct 2021 01:16:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F3287611C2 for ; Fri, 15 Oct 2021 01:16:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234679AbhJOBS5 (ORCPT ); Thu, 14 Oct 2021 21:18:57 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:46486 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234828AbhJOBSZ (ORCPT ); Thu, 14 Oct 2021 21:18:25 -0400 Message-ID: <20211015011539.296435736@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260579; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=zdW2877c8QrzBJVc70/FSm1yd0CzzH/G41JEp9X+xGc=; b=f96R+daJK0xsP8Msl6ECe0LcEVSRjlCOnATpoMhB1P+AoDbxwAr0rxxs67XoKVG8BjrGiE BXuGRxVzptTJbP+bw+6AHxdfoTkVboDY5+myu1zAa5y5/Pe5V+AoCRMxYpKIlj1FpHSlbZ zXuPva3jwn6JGRpHaVFVOXwuSeGSLs1pByzjSI23wnGVU/rrMfoMg9CMvfR43hX27obnGc G4AM7g3x+9cZ/GRlqzVYJrL2ZQ6sU6G+L3Dm2W4BKEIrpdAsO6Nfy1HvVAMBQUqXqE3RwE K7u7qwF9upm4waZuTE2XZfHrgs7jxvHNv+1LAZcCkoKdCLQYkFD3y8yYcEFO2A== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260579; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=zdW2877c8QrzBJVc70/FSm1yd0CzzH/G41JEp9X+xGc=; b=m0N8F+OJ9dTeTSj74mAzfHXQVgRw8dFqWbmVjE2BP+l7W1hCXjxhVyZWLXITvlYki6Mszt v8Q1KjZDTJGXcRBg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. 
Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 16/30] x86/fpu: Mark fpu__init_prepare_fx_sw_frame() as __init References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:18 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org No need to keep it around. Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/signal.h | 2 -- arch/x86/kernel/fpu/internal.h | 8 ++++++++ arch/x86/kernel/fpu/signal.c | 4 +++- arch/x86/kernel/fpu/xstate.c | 1 + 4 files changed, 12 insertions(+), 3 deletions(-) create mode 100644 arch/x86/kernel/fpu/internal.h --- diff --git a/arch/x86/include/asm/fpu/signal.h b/arch/x86/include/asm/fpu/signal.h index 8b6631dffefd..04868a76239a 100644 --- a/arch/x86/include/asm/fpu/signal.h +++ b/arch/x86/include/asm/fpu/signal.h @@ -31,6 +31,4 @@ fpu__alloc_mathframe(unsigned long sp, int ia32_frame, unsigned long fpu__get_fpstate_size(void); -extern void fpu__init_prepare_fx_sw_frame(void); - #endif /* _ASM_X86_FPU_SIGNAL_H */ diff --git a/arch/x86/kernel/fpu/internal.h b/arch/x86/kernel/fpu/internal.h new file mode 100644 index 000000000000..036f84c236dd --- /dev/null +++ b/arch/x86/kernel/fpu/internal.h @@ -0,0 +1,8 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __X86_KERNEL_FPU_INTERNAL_H +#define __X86_KERNEL_FPU_INTERNAL_H + +/* Init functions */ +extern void fpu__init_prepare_fx_sw_frame(void); + +#endif diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c index 64f0d4eda0b0..404bbb4a0f60 100644 --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c @@ -16,6 +16,8 @@ #include #include +#include "internal.h" + static struct _fpx_sw_bytes fx_sw_reserved __ro_after_init; static struct _fpx_sw_bytes fx_sw_reserved_ia32 __ro_after_init; @@ -514,7 +516,7 @@ unsigned long fpu__get_fpstate_size(void) * This will be saved when ever the FP and extended state context is * saved on the user stack during the signal handler delivery to the user. 
*/ -void fpu__init_prepare_fx_sw_frame(void) +void __init fpu__init_prepare_fx_sw_frame(void) { int size = fpu_user_xstate_size + FP_XSTATE_MAGIC2_SIZE; diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c index b2537a8203ee..1f5a66a38671 100644 --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -19,6 +19,7 @@ #include +#include "internal.h" #include "xstate.h" #define for_each_extended_xfeature(bit, mask) \ From patchwork Fri Oct 15 01:16:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559677 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A7812C433F5 for ; Fri, 15 Oct 2021 01:17:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8CCAF60E09 for ; Fri, 15 Oct 2021 01:17:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234888AbhJOBTQ (ORCPT ); Thu, 14 Oct 2021 21:19:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45194 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234873AbhJOBSj (ORCPT ); Thu, 14 Oct 2021 21:18:39 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0DF4AC061764; Thu, 14 Oct 2021 18:16:22 -0700 (PDT) Message-ID: <20211015011539.349132461@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260580; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=Yuqjn+FZ4rFAoVIpNWFlNZmrURRaY784kqZDrND8iCM=; b=AnW2PD1tiDYlkIdPWNY5n58jPT+kJIY8HQ2vXXVMD/VFncas/C/DJIhMK/QV8zAdcsgTA9 RO+vvOr0iV5IGpvOrZ9sETYZGSfgNLQEPL8d4b6YwL/HE7OzpR8Z8GJGxhEksIkCc04V6K wJ1VI54VPF64jMcPc9HpIYyUxzWkxRaF7W5P3Bt+52ceLGBLXkwasbo13rCQvGi2PWZKad BjmGJtAELctNfY5JLkebcdGZORJVu+R4W096V1TqT60tcUcWTUrqxNgStNsu/D3Vv1to2R h64vDj7/EufJw9LkHyYxeNpybKA5aydc0Z6VSKEW0CAeYExDVNPGitf7OEKOPA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260580; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=Yuqjn+FZ4rFAoVIpNWFlNZmrURRaY784kqZDrND8iCM=; b=B0luzzmX5T6fpBcOhp8U41C8TBTpw3X3kCREv/ZQGk1fHNGgDTUjw+CP2l7BjhHMI456tv 810lwyFGD5LuK1BA== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 17/30] x86/fpu: Move context switch and exit to user inlines into sched.h References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:20 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org internal.h is a kitchen sink which needs to get out of the way to prepare for the upcoming changes. Move the context switch and exit to user inlines into a separate header, which is all that code needs. 
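As a rough, self-contained sketch of the two-stage switch that these inlines implement (plain C with made-up stand-ins for the hardware and the thread flag, not the kernel code): the outgoing task's registers are saved eagerly during the context switch, the incoming task only gets a "restore pending" mark, and the actual register load is deferred until it heads back to user space.

#include <string.h>

struct fpu_state { unsigned char regs[512]; };

static struct fpu_state hw_fpu;         /* stand-in for the CPU's FPU registers */

struct task {
        struct fpu_state fpstate;       /* in-memory copy of the registers */
        int need_fpu_load;              /* stand-in for TIF_NEED_FPU_LOAD */
};

/* Stage 1, outgoing task: save the live registers into its buffer. */
static void fpu_switch_prepare(struct task *prev)
{
        memcpy(&prev->fpstate, &hw_fpu, sizeof(hw_fpu));
}

/* Stage 2, incoming task: touch no hardware, only mark the restore as pending. */
static void fpu_switch_finish(struct task *next)
{
        next->need_fpu_load = 1;
}

/* Exit to user space: restore lazily, and only while still pending. */
static void exit_to_user(struct task *t)
{
        if (t->need_fpu_load) {
                memcpy(&hw_fpu, &t->fpstate, sizeof(hw_fpu));
                t->need_fpu_load = 0;
        }
}

int main(void)
{
        struct task a = {0}, b = {0};

        fpu_switch_prepare(&a);         /* 'a' is scheduled out */
        fpu_switch_finish(&b);          /* 'b' is scheduled in  */
        exit_to_user(&b);               /* the real restore happens here */
        return 0;
}

A kernel thread never takes that exit path, which is why the real switch_fpu_prepare() below skips tasks with PF_KTHREAD set.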
Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 60 +---------------------------------- arch/x86/include/asm/fpu/sched.h | 68 ++++++++++++++++++++++++++++++++++++++- arch/x86/kernel/fpu/core.c | 1 +- arch/x86/kernel/process.c | 2 +- arch/x86/kernel/process_32.c | 2 +- arch/x86/kernel/process_64.c | 2 +- 6 files changed, 72 insertions(+), 63 deletions(-) create mode 100644 arch/x86/include/asm/fpu/sched.h --- diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index 3ac55ba55782..398c87c8e199 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -27,16 +27,11 @@ * High level FPU state handling functions: */ extern bool fpu__restore_sig(void __user *buf, int ia32_frame); -extern void fpu__drop(struct fpu *fpu); extern void fpu__clear_user_states(struct fpu *fpu); extern int fpu__exception_code(struct fpu *fpu, int trap_nr); extern void fpu_sync_fpstate(struct fpu *fpu); -/* Clone and exit operations */ -extern int fpu_clone(struct task_struct *dst); -extern void fpu_flush_thread(void); - /* * Boot time FPU initialization functions: */ @@ -82,7 +77,6 @@ extern void fpstate_init_soft(struct swregs_state *soft); #else static inline void fpstate_init_soft(struct swregs_state *soft) {} #endif -extern void save_fpregs_to_fpstate(struct fpu *fpu); /* * Returns 0 on success or the trap number when the operation raises an @@ -464,58 +458,4 @@ static inline void fpregs_restore_userregs(void) clear_thread_flag(TIF_NEED_FPU_LOAD); } -/* - * FPU state switching for scheduling. - * - * This is a two-stage process: - * - * - switch_fpu_prepare() saves the old state. - * This is done within the context of the old process. - * - * - switch_fpu_finish() sets TIF_NEED_FPU_LOAD; the floating point state - * will get loaded on return to userspace, or when the kernel needs it. - * - * If TIF_NEED_FPU_LOAD is cleared then the CPU's FPU registers - * are saved in the current thread's FPU register state. - * - * If TIF_NEED_FPU_LOAD is set then CPU's FPU registers may not - * hold current()'s FPU registers. It is required to load the - * registers before returning to userland or using the content - * otherwise. - * - * The FPU context is only stored/restored for a user task and - * PF_KTHREAD is used to distinguish between kernel and user threads. - */ -static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu) -{ - if (static_cpu_has(X86_FEATURE_FPU) && !(current->flags & PF_KTHREAD)) { - save_fpregs_to_fpstate(old_fpu); - /* - * The save operation preserved register state, so the - * fpu_fpregs_owner_ctx is still @old_fpu. Store the - * current CPU number in @old_fpu, so the next return - * to user space can avoid the FPU register restore - * when is returns on the same CPU and still owns the - * context. - */ - old_fpu->last_cpu = cpu; - - trace_x86_fpu_regs_deactivated(old_fpu); - } -} - -/* - * Misc helper functions: - */ - -/* - * Delay loading of the complete FPU state until the return to userland. - * PKRU is handled separately. 
- */ -static inline void switch_fpu_finish(void) -{ - if (cpu_feature_enabled(X86_FEATURE_FPU)) - set_thread_flag(TIF_NEED_FPU_LOAD); -} - #endif /* _ASM_X86_FPU_INTERNAL_H */ diff --git a/arch/x86/include/asm/fpu/sched.h b/arch/x86/include/asm/fpu/sched.h new file mode 100644 index 000000000000..cdb78d590c86 --- /dev/null +++ b/arch/x86/include/asm/fpu/sched.h @@ -0,0 +1,68 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_X86_FPU_SCHED_H +#define _ASM_X86_FPU_SCHED_H + +#include + +#include +#include + +#include + +extern void save_fpregs_to_fpstate(struct fpu *fpu); +extern void fpu__drop(struct fpu *fpu); +extern int fpu_clone(struct task_struct *dst); +extern void fpu_flush_thread(void); + +/* + * FPU state switching for scheduling. + * + * This is a two-stage process: + * + * - switch_fpu_prepare() saves the old state. + * This is done within the context of the old process. + * + * - switch_fpu_finish() sets TIF_NEED_FPU_LOAD; the floating point state + * will get loaded on return to userspace, or when the kernel needs it. + * + * If TIF_NEED_FPU_LOAD is cleared then the CPU's FPU registers + * are saved in the current thread's FPU register state. + * + * If TIF_NEED_FPU_LOAD is set then CPU's FPU registers may not + * hold current()'s FPU registers. It is required to load the + * registers before returning to userland or using the content + * otherwise. + * + * The FPU context is only stored/restored for a user task and + * PF_KTHREAD is used to distinguish between kernel and user threads. + */ +static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu) +{ + if (cpu_feature_enabled(X86_FEATURE_FPU) && + !(current->flags & PF_KTHREAD)) { + save_fpregs_to_fpstate(old_fpu); + /* + * The save operation preserved register state, so the + * fpu_fpregs_owner_ctx is still @old_fpu. Store the + * current CPU number in @old_fpu, so the next return + * to user space can avoid the FPU register restore + * when is returns on the same CPU and still owns the + * context. + */ + old_fpu->last_cpu = cpu; + + trace_x86_fpu_regs_deactivated(old_fpu); + } +} + +/* + * Delay loading of the complete FPU state until the return to userland. + * PKRU is handled separately. 
+ */ +static inline void switch_fpu_finish(void) +{ + if (cpu_feature_enabled(X86_FEATURE_FPU)) + set_thread_flag(TIF_NEED_FPU_LOAD); +} + +#endif /* _ASM_X86_FPU_SCHED_H */ diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c index 5ed58b22a75b..4fb63a0cb4ef 100644 --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -8,6 +8,7 @@ */ #include #include +#include #include #include #include diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index d2227c55e683..5cd82082353e 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -30,7 +30,7 @@ #include #include #include -#include +#include #include #include #include diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c index d008e222a302..26edb1cd07a4 100644 --- a/arch/x86/kernel/process_32.c +++ b/arch/x86/kernel/process_32.c @@ -41,7 +41,7 @@ #include #include -#include +#include #include #include diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c index 39f12ef1c85c..3402edec236c 100644 --- a/arch/x86/kernel/process_64.c +++ b/arch/x86/kernel/process_64.c @@ -42,7 +42,7 @@ #include #include -#include +#include #include #include #include From patchwork Fri Oct 15 01:16:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559673 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5D6C5C433FE for ; Fri, 15 Oct 2021 01:16:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4457D611C8 for ; Fri, 15 Oct 2021 01:16:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234953AbhJOBTD (ORCPT ); Thu, 14 Oct 2021 21:19:03 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:46502 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233864AbhJOBS2 (ORCPT ); Thu, 14 Oct 2021 21:18:28 -0400 Message-ID: <20211015011539.401510559@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260582; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=a0QlIBSwa8Sfkm3ncNnZ66ijbSmBxDzHL00Gh1+jgG8=; b=YDjjTJxr2ss7qy7lo8vvqjbv9xjUaysd0vgtrniH8KxrwdZwT0r4fXuS8qQqoJcbvCui2/ jDlIuUXYnnj/DtRgPZivQq59ppY+j1HoIpF973tqQWmuVKk3HqceyZoBENJurdwUGbLNMf HSv3bvdNV+HqknCOPsaaLN4tzTU6TpsxqoEAkN2fGZiB5htGSdh3clnf36zFRLBTiiKDRV wiza3WdOHAWS/C3g+laQFT+QnG5qwhZV5vQwdBEwPqsBIN3s3iM2drEnxoGKeOB0Qd6hX2 qfpsMeP7+5C3BuPVoUpoNaC7GGQHd6aCMq7B7HAdJLNK6G0ShwP5E5HgxFovXg== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260582; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=a0QlIBSwa8Sfkm3ncNnZ66ijbSmBxDzHL00Gh1+jgG8=; b=HBGw1Y5LebYxp8ouulS1s0IpM5DHTqSMKydMf10sIB+ByhoMSd86cX3I/reWLbxHMYanhe QnS1aeUA02MkA7Cg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. 
Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 18/30] x86/fpu: Clean up cpu feature tests References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:21 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Further disintegration of internal.h: Move the cpu feature tests to a core header and remove the unused one. Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 18 ------------------ arch/x86/kernel/fpu/core.c | 1 + arch/x86/kernel/fpu/internal.h | 11 +++++++++++ arch/x86/kernel/fpu/regset.c | 2 ++ 4 files changed, 14 insertions(+), 18 deletions(-) --- diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index 398c87c8e199..5da7528b3b2f 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -51,24 +51,6 @@ extern void fpu__resume_cpu(void); # define WARN_ON_FPU(x) ({ (void)(x); 0; }) #endif -/* - * FPU related CPU feature flag helper routines: - */ -static __always_inline __pure bool use_xsaveopt(void) -{ - return static_cpu_has(X86_FEATURE_XSAVEOPT); -} - -static __always_inline __pure bool use_xsave(void) -{ - return static_cpu_has(X86_FEATURE_XSAVE); -} - -static __always_inline __pure bool use_fxsr(void) -{ - return static_cpu_has(X86_FEATURE_FXSR); -} - extern union fpregs_state init_fpstate; extern void fpstate_init_user(union fpregs_state *state); diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c index 4fb63a0cb4ef..058df2643999 100644 --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -17,6 +17,7 @@ #include #include +#include "internal.h" #include "xstate.h" #define CREATE_TRACE_POINTS diff --git a/arch/x86/kernel/fpu/internal.h b/arch/x86/kernel/fpu/internal.h index 036f84c236dd..a8aac21ba364 100644 --- a/arch/x86/kernel/fpu/internal.h +++ b/arch/x86/kernel/fpu/internal.h @@ -2,6 +2,17 @@ #ifndef __X86_KERNEL_FPU_INTERNAL_H #define __X86_KERNEL_FPU_INTERNAL_H +/* CPU feature check wrappers */ +static __always_inline __pure bool use_xsave(void) +{ + return cpu_feature_enabled(X86_FEATURE_XSAVE); +} + +static __always_inline __pure bool use_fxsr(void) +{ + return cpu_feature_enabled(X86_FEATURE_FXSR); +} + /* Init functions */ extern void fpu__init_prepare_fx_sw_frame(void); diff --git a/arch/x86/kernel/fpu/regset.c b/arch/x86/kernel/fpu/regset.c index 66ed317ebc0d..ccf0c59955f1 100644 --- a/arch/x86/kernel/fpu/regset.c +++ b/arch/x86/kernel/fpu/regset.c @@ -10,6 +10,8 @@ #include #include +#include "internal.h" + /* * The xstateregs_active() routine is the same as the regset_fpregs_active() routine, * as the "regset->n" for the xstate regset will be updated based on the feature From patchwork Fri Oct 15 01:16:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559689 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B5A1CC433F5 for ; Fri, 15 Oct 2021 01:17:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9DE0D611C8 for ; Fri, 15 Oct 2021 01:17:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id 
S235163AbhJOBTl (ORCPT ); Thu, 14 Oct 2021 21:19:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45162 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234818AbhJOBSw (ORCPT ); Thu, 14 Oct 2021 21:18:52 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 119C2C0613DF; Thu, 14 Oct 2021 18:16:25 -0700 (PDT) Message-ID: <20211015011539.455836597@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260583; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=wJvwI4BBDt6Cu2A9JXCnL4YOdGbXc9WWklcRk7yFEcs=; b=djEf3dzpPDWlJIyINEdUxOtyf9wyx7r2m6ijWqcdjlIx3JY5ooQ2HVj3o6wlMZgETA1Mwo oLo9tFCPtuGu/7e8gKrmdITlVbjOAqGBMXYv/4tAX6Ipl9tj1vUEoDN708NCALua8BgU3O dFTpl0XRibNHOHEiPhi1c/rTgGwzRoP2hw1IdTZ5Y/io+uT+DnuGhUgPBoWFW5rDS1bNYc UX7AMn2lWV6NyZy/kyMZllpsIONC784jwcAFiS9I83Vimi3OzXb8riJrEfOFHaSCxmFdmE n0ybNoBEnujOahDHHdXjlxZUZGBRHp7RViKp1eDrHvydVGHNsHBQifRuESqmTA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260583; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=wJvwI4BBDt6Cu2A9JXCnL4YOdGbXc9WWklcRk7yFEcs=; b=yLESv1KV5SSVoWAQKMcbQMC/QU0QKqrczYcoCh+XX4XntrEPRBVM/LBKV89ayyv3/JMuwF 3O6SQ+SXNYlWCdDg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 19/30] x86/fpu: Make os_xrstor_booting() private References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:23 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org It's only required in the xstate init code. Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 25 ------------------------- arch/x86/kernel/fpu/xstate.c | 23 +++++++++++++++++++++++ 2 files changed, 23 insertions(+), 25 deletions(-) --- diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index 5da7528b3b2f..3ad2ae73efa5 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -226,31 +226,6 @@ static inline void fxsave(struct fxregs_state *fx) : "memory") /* - * This function is called only during boot time when x86 caps are not set - * up and alternative can not be used yet. - */ -static inline void os_xrstor_booting(struct xregs_state *xstate) -{ - u64 mask = xfeatures_mask_fpstate(); - u32 lmask = mask; - u32 hmask = mask >> 32; - int err; - - WARN_ON(system_state != SYSTEM_BOOTING); - - if (boot_cpu_has(X86_FEATURE_XSAVES)) - XSTATE_OP(XRSTORS, xstate, lmask, hmask, err); - else - XSTATE_OP(XRSTOR, xstate, lmask, hmask, err); - - /* - * We should never fault when copying from a kernel buffer, and the FPU - * state we set at boot time should be valid. - */ - WARN_ON_FPU(err); -} - -/* * Save processor xstate to xsave area. 
* * Uses either XSAVE or XSAVEOPT or XSAVES depending on the CPU features diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c index 1f5a66a38671..b712c06cbbfb 100644 --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -351,6 +351,29 @@ static void __init print_xstate_offset_size(void) } /* + * This function is called only during boot time when x86 caps are not set + * up and alternative can not be used yet. + */ +static __init void os_xrstor_booting(struct xregs_state *xstate) +{ + u64 mask = xfeatures_mask_fpstate(); + u32 lmask = mask; + u32 hmask = mask >> 32; + int err; + + if (cpu_feature_enabled(X86_FEATURE_XSAVES)) + XSTATE_OP(XRSTORS, xstate, lmask, hmask, err); + else + XSTATE_OP(XRSTOR, xstate, lmask, hmask, err); + + /* + * We should never fault when copying from a kernel buffer, and the FPU + * state we set at boot time should be valid. + */ + WARN_ON_FPU(err); +} + +/* * All supported features have either init state all zeros or are * handled in setup_init_fpu() individually. This is an explicit * feature list and does not use XFEATURE_MASK*SUPPORTED to catch From patchwork Fri Oct 15 01:16:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559691 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1CBB6C433F5 for ; Fri, 15 Oct 2021 01:17:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 062FC611C8 for ; Fri, 15 Oct 2021 01:17:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233355AbhJOBTt (ORCPT ); Thu, 14 Oct 2021 21:19:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45230 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234923AbhJOBS5 (ORCPT ); Thu, 14 Oct 2021 21:18:57 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9CD7CC061753; Thu, 14 Oct 2021 18:16:26 -0700 (PDT) Message-ID: <20211015011539.513368075@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260585; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=YOsuXSwjrU2Xe+XUzIe+v3+7OEw3Rf70pPuAaXgwdN0=; b=iluDAH7IL4qbK2xWxif4Zy62SmPZHPRCduuk69SWswKfqQa4rTlUlqZrb/9o4rHHjylrkq 40njIXRsxjGHDq8C4AUcgKU9vCCB/+3vuTFZsUc8u18z/9YwJofDxArM9UgRq+JcKHHXVr nOg2x1MNAhhr8+HuvDKI0cmTDFJ+esCIr1CZkH1O4PHGVyA8xGXveZAvyvSmsOPrEiwgxT 5ClwTvcfEJLNRf2aeDO34AhpipOEZHnU20aa3m6jkfJVS3tRUi/gGC0N3lUvs6O/PXrFu7 sn3vpzT5X1zGlDR2uF1THGBd/LuvH6EauLQ9RVKWJOj3j1L1TwFuvphRtNv5Gw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260585; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=YOsuXSwjrU2Xe+XUzIe+v3+7OEw3Rf70pPuAaXgwdN0=; b=h+2GJgZfnlMIzE2om1wsOxe3mdH3yQhYiZsk7HPcx9H1g9sE2TtOkKoX6WQxbXIo2sNGFv 4SSFBBNAlQdzN+Bg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. 
Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 20/30] x86/fpu: Move os_xsave() and os_xrstor() to core References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:24 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Nothing outside the core code needs these. Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 165 +----------------------------------- arch/x86/include/asm/fpu/xstate.h | 6 +- arch/x86/kernel/fpu/signal.c | 1 +- arch/x86/kernel/fpu/xstate.h | 174 +++++++++++++++++++++++++++++++++++++- 4 files changed, 175 insertions(+), 171 deletions(-) --- diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index 3ad2ae73efa5..b68f9940489f 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -161,171 +161,6 @@ static inline void fxsave(struct fxregs_state *fx) asm volatile("fxsaveq %[fx]" : [fx] "=m" (*fx)); } -/* These macros all use (%edi)/(%rdi) as the single memory argument. */ -#define XSAVE ".byte " REX_PREFIX "0x0f,0xae,0x27" -#define XSAVEOPT ".byte " REX_PREFIX "0x0f,0xae,0x37" -#define XSAVES ".byte " REX_PREFIX "0x0f,0xc7,0x2f" -#define XRSTOR ".byte " REX_PREFIX "0x0f,0xae,0x2f" -#define XRSTORS ".byte " REX_PREFIX "0x0f,0xc7,0x1f" - -/* - * After this @err contains 0 on success or the trap number when the - * operation raises an exception. - */ -#define XSTATE_OP(op, st, lmask, hmask, err) \ - asm volatile("1:" op "\n\t" \ - "xor %[err], %[err]\n" \ - "2:\n\t" \ - _ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_FAULT_MCE_SAFE) \ - : [err] "=a" (err) \ - : "D" (st), "m" (*st), "a" (lmask), "d" (hmask) \ - : "memory") - -/* - * If XSAVES is enabled, it replaces XSAVEOPT because it supports a compact - * format and supervisor states in addition to modified optimization in - * XSAVEOPT. - * - * Otherwise, if XSAVEOPT is enabled, XSAVEOPT replaces XSAVE because XSAVEOPT - * supports modified optimization which is not supported by XSAVE. - * - * We use XSAVE as a fallback. - * - * The 661 label is defined in the ALTERNATIVE* macros as the address of the - * original instruction which gets replaced. We need to use it here as the - * address of the instruction where we might get an exception at. - */ -#define XSTATE_XSAVE(st, lmask, hmask, err) \ - asm volatile(ALTERNATIVE_2(XSAVE, \ - XSAVEOPT, X86_FEATURE_XSAVEOPT, \ - XSAVES, X86_FEATURE_XSAVES) \ - "\n" \ - "xor %[err], %[err]\n" \ - "3:\n" \ - ".pushsection .fixup,\"ax\"\n" \ - "4: movl $-2, %[err]\n" \ - "jmp 3b\n" \ - ".popsection\n" \ - _ASM_EXTABLE(661b, 4b) \ - : [err] "=r" (err) \ - : "D" (st), "m" (*st), "a" (lmask), "d" (hmask) \ - : "memory") - -/* - * Use XRSTORS to restore context if it is enabled. XRSTORS supports compact - * XSAVE area format. - */ -#define XSTATE_XRESTORE(st, lmask, hmask) \ - asm volatile(ALTERNATIVE(XRSTOR, \ - XRSTORS, X86_FEATURE_XSAVES) \ - "\n" \ - "3:\n" \ - _ASM_EXTABLE_TYPE(661b, 3b, EX_TYPE_FPU_RESTORE) \ - : \ - : "D" (st), "m" (*st), "a" (lmask), "d" (hmask) \ - : "memory") - -/* - * Save processor xstate to xsave area. - * - * Uses either XSAVE or XSAVEOPT or XSAVES depending on the CPU features - * and command line options. The choice is permanent until the next reboot. 
- */ -static inline void os_xsave(struct xregs_state *xstate) -{ - u64 mask = xfeatures_mask_all; - u32 lmask = mask; - u32 hmask = mask >> 32; - int err; - - WARN_ON_FPU(!alternatives_patched); - - XSTATE_XSAVE(xstate, lmask, hmask, err); - - /* We should never fault when copying to a kernel buffer: */ - WARN_ON_FPU(err); -} - -/* - * Restore processor xstate from xsave area. - * - * Uses XRSTORS when XSAVES is used, XRSTOR otherwise. - */ -static inline void os_xrstor(struct xregs_state *xstate, u64 mask) -{ - u32 lmask = mask; - u32 hmask = mask >> 32; - - XSTATE_XRESTORE(xstate, lmask, hmask); -} - -/* - * Save xstate to user space xsave area. - * - * We don't use modified optimization because xrstor/xrstors might track - * a different application. - * - * We don't use compacted format xsave area for backward compatibility for - * old applications which don't understand the compacted format of the - * xsave area. - * - * The caller has to zero buf::header before calling this because XSAVE* - * does not touch the reserved fields in the header. - */ -static inline int xsave_to_user_sigframe(struct xregs_state __user *buf) -{ - /* - * Include the features which are not xsaved/rstored by the kernel - * internally, e.g. PKRU. That's user space ABI and also required - * to allow the signal handler to modify PKRU. - */ - u64 mask = xfeatures_mask_uabi(); - u32 lmask = mask; - u32 hmask = mask >> 32; - int err; - - stac(); - XSTATE_OP(XSAVE, buf, lmask, hmask, err); - clac(); - - return err; -} - -/* - * Restore xstate from user space xsave area. - */ -static inline int xrstor_from_user_sigframe(struct xregs_state __user *buf, u64 mask) -{ - struct xregs_state *xstate = ((__force struct xregs_state *)buf); - u32 lmask = mask; - u32 hmask = mask >> 32; - int err; - - stac(); - XSTATE_OP(XRSTOR, xstate, lmask, hmask, err); - clac(); - - return err; -} - -/* - * Restore xstate from kernel space xsave area, return an error code instead of - * an exception. 
- */ -static inline int os_xrstor_safe(struct xregs_state *xstate, u64 mask) -{ - u32 lmask = mask; - u32 hmask = mask >> 32; - int err; - - if (cpu_feature_enabled(X86_FEATURE_XSAVES)) - XSTATE_OP(XRSTORS, xstate, lmask, hmask, err); - else - XSTATE_OP(XRSTOR, xstate, lmask, hmask, err); - - return err; -} - extern void restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask); extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size); diff --git a/arch/x86/include/asm/fpu/xstate.h b/arch/x86/include/asm/fpu/xstate.h index 109dfcc75299..b8cebc0ee420 100644 --- a/arch/x86/include/asm/fpu/xstate.h +++ b/arch/x86/include/asm/fpu/xstate.h @@ -78,12 +78,6 @@ XFEATURE_MASK_INDEPENDENT | \ XFEATURE_MASK_SUPERVISOR_UNSUPPORTED) -#ifdef CONFIG_X86_64 -#define REX_PREFIX "0x48, " -#else -#define REX_PREFIX -#endif - extern u64 xfeatures_mask_all; static inline u64 xfeatures_mask_supervisor(void) diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c index 404bbb4a0f60..1a409328aead 100644 --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c @@ -17,6 +17,7 @@ #include #include "internal.h" +#include "xstate.h" static struct _fpx_sw_bytes fx_sw_reserved __ro_after_init; static struct _fpx_sw_bytes fx_sw_reserved_ia32 __ro_after_init; diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h index 81f4202781ac..ae61baa97682 100644 --- a/arch/x86/kernel/fpu/xstate.h +++ b/arch/x86/kernel/fpu/xstate.h @@ -18,4 +18,178 @@ static inline void xstate_init_xcomp_bv(struct xregs_state *xsave, u64 mask) extern void __copy_xstate_to_uabi_buf(struct membuf to, struct xregs_state *xsave, u32 pkru_val, enum xstate_copy_mode copy_mode); +/* XSAVE/XRSTOR wrapper functions */ + +#ifdef CONFIG_X86_64 +#define REX_PREFIX "0x48, " +#else +#define REX_PREFIX +#endif + +/* These macros all use (%edi)/(%rdi) as the single memory argument. */ +#define XSAVE ".byte " REX_PREFIX "0x0f,0xae,0x27" +#define XSAVEOPT ".byte " REX_PREFIX "0x0f,0xae,0x37" +#define XSAVES ".byte " REX_PREFIX "0x0f,0xc7,0x2f" +#define XRSTOR ".byte " REX_PREFIX "0x0f,0xae,0x2f" +#define XRSTORS ".byte " REX_PREFIX "0x0f,0xc7,0x1f" + +/* + * After this @err contains 0 on success or the trap number when the + * operation raises an exception. + */ +#define XSTATE_OP(op, st, lmask, hmask, err) \ + asm volatile("1:" op "\n\t" \ + "xor %[err], %[err]\n" \ + "2:\n\t" \ + _ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_FAULT_MCE_SAFE) \ + : [err] "=a" (err) \ + : "D" (st), "m" (*st), "a" (lmask), "d" (hmask) \ + : "memory") + +/* + * If XSAVES is enabled, it replaces XSAVEOPT because it supports a compact + * format and supervisor states in addition to modified optimization in + * XSAVEOPT. + * + * Otherwise, if XSAVEOPT is enabled, XSAVEOPT replaces XSAVE because XSAVEOPT + * supports modified optimization which is not supported by XSAVE. + * + * We use XSAVE as a fallback. + * + * The 661 label is defined in the ALTERNATIVE* macros as the address of the + * original instruction which gets replaced. We need to use it here as the + * address of the instruction where we might get an exception at. 
+ */ +#define XSTATE_XSAVE(st, lmask, hmask, err) \ + asm volatile(ALTERNATIVE_2(XSAVE, \ + XSAVEOPT, X86_FEATURE_XSAVEOPT, \ + XSAVES, X86_FEATURE_XSAVES) \ + "\n" \ + "xor %[err], %[err]\n" \ + "3:\n" \ + ".pushsection .fixup,\"ax\"\n" \ + "4: movl $-2, %[err]\n" \ + "jmp 3b\n" \ + ".popsection\n" \ + _ASM_EXTABLE(661b, 4b) \ + : [err] "=r" (err) \ + : "D" (st), "m" (*st), "a" (lmask), "d" (hmask) \ + : "memory") + +/* + * Use XRSTORS to restore context if it is enabled. XRSTORS supports compact + * XSAVE area format. + */ +#define XSTATE_XRESTORE(st, lmask, hmask) \ + asm volatile(ALTERNATIVE(XRSTOR, \ + XRSTORS, X86_FEATURE_XSAVES) \ + "\n" \ + "3:\n" \ + _ASM_EXTABLE_TYPE(661b, 3b, EX_TYPE_FPU_RESTORE) \ + : \ + : "D" (st), "m" (*st), "a" (lmask), "d" (hmask) \ + : "memory") + +/* + * Save processor xstate to xsave area. + * + * Uses either XSAVE or XSAVEOPT or XSAVES depending on the CPU features + * and command line options. The choice is permanent until the next reboot. + */ +static inline void os_xsave(struct xregs_state *xstate) +{ + u64 mask = xfeatures_mask_all; + u32 lmask = mask; + u32 hmask = mask >> 32; + int err; + + WARN_ON_FPU(!alternatives_patched); + + XSTATE_XSAVE(xstate, lmask, hmask, err); + + /* We should never fault when copying to a kernel buffer: */ + WARN_ON_FPU(err); +} + +/* + * Restore processor xstate from xsave area. + * + * Uses XRSTORS when XSAVES is used, XRSTOR otherwise. + */ +static inline void os_xrstor(struct xregs_state *xstate, u64 mask) +{ + u32 lmask = mask; + u32 hmask = mask >> 32; + + XSTATE_XRESTORE(xstate, lmask, hmask); +} + +/* + * Save xstate to user space xsave area. + * + * We don't use modified optimization because xrstor/xrstors might track + * a different application. + * + * We don't use compacted format xsave area for backward compatibility for + * old applications which don't understand the compacted format of the + * xsave area. + * + * The caller has to zero buf::header before calling this because XSAVE* + * does not touch the reserved fields in the header. + */ +static inline int xsave_to_user_sigframe(struct xregs_state __user *buf) +{ + /* + * Include the features which are not xsaved/rstored by the kernel + * internally, e.g. PKRU. That's user space ABI and also required + * to allow the signal handler to modify PKRU. + */ + u64 mask = xfeatures_mask_uabi(); + u32 lmask = mask; + u32 hmask = mask >> 32; + int err; + + stac(); + XSTATE_OP(XSAVE, buf, lmask, hmask, err); + clac(); + + return err; +} + +/* + * Restore xstate from user space xsave area. + */ +static inline int xrstor_from_user_sigframe(struct xregs_state __user *buf, u64 mask) +{ + struct xregs_state *xstate = ((__force struct xregs_state *)buf); + u32 lmask = mask; + u32 hmask = mask >> 32; + int err; + + stac(); + XSTATE_OP(XRSTOR, xstate, lmask, hmask, err); + clac(); + + return err; +} + +/* + * Restore xstate from kernel space xsave area, return an error code instead of + * an exception. 
+ */ +static inline int os_xrstor_safe(struct xregs_state *xstate, u64 mask) +{ + u32 lmask = mask; + u32 hmask = mask >> 32; + int err; + + if (cpu_feature_enabled(X86_FEATURE_XSAVES)) + XSTATE_OP(XRSTORS, xstate, lmask, hmask, err); + else + XSTATE_OP(XRSTOR, xstate, lmask, hmask, err); + + return err; +} + + #endif From patchwork Fri Oct 15 01:16:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559675 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 94A5DC433F5 for ; Fri, 15 Oct 2021 01:17:08 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7523A60E09 for ; Fri, 15 Oct 2021 01:17:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233771AbhJOBTM (ORCPT ); Thu, 14 Oct 2021 21:19:12 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:46540 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234091AbhJOBSd (ORCPT ); Thu, 14 Oct 2021 21:18:33 -0400 Message-ID: <20211015011539.572439164@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260586; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=wXFplKFyHNdFHd3/9/YWUtmQO4eBbjH9eqMp16wPlfU=; b=A/b7ncCMGUYi5yC9LWyL4KO8atI9nwwtWtvuRn4gWw8b0HfjYYob5iX2xpz3zMZ83d3anp Mll/SHVJeGZ+o6EkqxE7BN/9+vhzpGT8yd2fzqLhdhVL7VL0H0WHAWqoaz3yvtPGKG9ylC pFUre5CjMEoYsBk/wP9i2vOVFKL/7K2VGtS8HOtUYaoROx3kYYa+w7uVzJYoBCBVVeoLKF o0pc66/f4SWPO3xacTcZOazu9LCjzrA1JK59XnGiqDknH5GhCHYinEBWMbLjkwHqG4m1BS ZManKdtx5AqltOCSAZVr5qFgyHdu1yaTijKaAipIevHU76rvSISDXLHTXESXTA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260586; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=wXFplKFyHNdFHd3/9/YWUtmQO4eBbjH9eqMp16wPlfU=; b=VXXCdriXRD3Yrw1bLFf66Jd7PAskNQk55Ii8lq13nZlAcq3E34Rm/X/Vta0s0L+rlwdk3k dCLSigv9qouRYoDg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 21/30] x86/fpu: Move legacy ASM wrappers to core References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:26 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Nothing outside the core code requires them. 
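For readers following along, a minimal sketch of the calling convention of the wrappers being moved here (the function name and body below are illustrative, not part of the patch): the user_insn()-based helpers return 0 on success or the trap number of the faulting instruction, so a caller only has to check for non-zero.

#include "legacy.h"	/* the new private header added by this patch */

/* Illustrative caller: write the legacy FX state to a user buffer. A fault
 * (#PF or #MC) inside the fixed-up FXSAVE shows up as a non-zero trap
 * number instead of an exception. */
static bool fx_to_user_example(struct fxregs_state __user *buf)
{
	return !fxsave_to_user_sigframe(buf);
}
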
Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 101 +----------------------------------- arch/x86/kernel/fpu/core.c | 1 +- arch/x86/kernel/fpu/legacy.h | 108 +++++++++++++++++++++++++++++++++++++- arch/x86/kernel/fpu/signal.c | 1 +- arch/x86/kernel/fpu/xstate.c | 1 +- 5 files changed, 111 insertions(+), 101 deletions(-) create mode 100644 arch/x86/kernel/fpu/legacy.h --- diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index b68f9940489f..7722aadc3278 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -60,107 +60,6 @@ extern void fpstate_init_soft(struct swregs_state *soft); static inline void fpstate_init_soft(struct swregs_state *soft) {} #endif -/* - * Returns 0 on success or the trap number when the operation raises an - * exception. - */ -#define user_insn(insn, output, input...) \ -({ \ - int err; \ - \ - might_fault(); \ - \ - asm volatile(ASM_STAC "\n" \ - "1: " #insn "\n" \ - "2: " ASM_CLAC "\n" \ - _ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_FAULT_MCE_SAFE) \ - : [err] "=a" (err), output \ - : "0"(0), input); \ - err; \ -}) - -#define kernel_insn_err(insn, output, input...) \ -({ \ - int err; \ - asm volatile("1:" #insn "\n\t" \ - "2:\n" \ - ".section .fixup,\"ax\"\n" \ - "3: movl $-1,%[err]\n" \ - " jmp 2b\n" \ - ".previous\n" \ - _ASM_EXTABLE(1b, 3b) \ - : [err] "=r" (err), output \ - : "0"(0), input); \ - err; \ -}) - -#define kernel_insn(insn, output, input...) \ - asm volatile("1:" #insn "\n\t" \ - "2:\n" \ - _ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_FPU_RESTORE) \ - : output : input) - -static inline int fnsave_to_user_sigframe(struct fregs_state __user *fx) -{ - return user_insn(fnsave %[fx]; fwait, [fx] "=m" (*fx), "m" (*fx)); -} - -static inline int fxsave_to_user_sigframe(struct fxregs_state __user *fx) -{ - if (IS_ENABLED(CONFIG_X86_32)) - return user_insn(fxsave %[fx], [fx] "=m" (*fx), "m" (*fx)); - else - return user_insn(fxsaveq %[fx], [fx] "=m" (*fx), "m" (*fx)); - -} - -static inline void fxrstor(struct fxregs_state *fx) -{ - if (IS_ENABLED(CONFIG_X86_32)) - kernel_insn(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx)); - else - kernel_insn(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx)); -} - -static inline int fxrstor_safe(struct fxregs_state *fx) -{ - if (IS_ENABLED(CONFIG_X86_32)) - return kernel_insn_err(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx)); - else - return kernel_insn_err(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx)); -} - -static inline int fxrstor_from_user_sigframe(struct fxregs_state __user *fx) -{ - if (IS_ENABLED(CONFIG_X86_32)) - return user_insn(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx)); - else - return user_insn(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx)); -} - -static inline void frstor(struct fregs_state *fx) -{ - kernel_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx)); -} - -static inline int frstor_safe(struct fregs_state *fx) -{ - return kernel_insn_err(frstor %[fx], "=m" (*fx), [fx] "m" (*fx)); -} - -static inline int frstor_from_user_sigframe(struct fregs_state __user *fx) -{ - return user_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx)); -} - -static inline void fxsave(struct fxregs_state *fx) -{ - if (IS_ENABLED(CONFIG_X86_32)) - asm volatile( "fxsave %[fx]" : [fx] "=m" (*fx)); - else - asm volatile("fxsaveq %[fx]" : [fx] "=m" (*fx)); -} - extern void restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask); extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size); diff --git a/arch/x86/kernel/fpu/core.c 
b/arch/x86/kernel/fpu/core.c index 058df2643999..45ad0baa0128 100644 --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -18,6 +18,7 @@ #include #include "internal.h" +#include "legacy.h" #include "xstate.h" #define CREATE_TRACE_POINTS diff --git a/arch/x86/kernel/fpu/legacy.h b/arch/x86/kernel/fpu/legacy.h new file mode 100644 index 000000000000..2ff36b0f79e9 --- /dev/null +++ b/arch/x86/kernel/fpu/legacy.h @@ -0,0 +1,108 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __X86_KERNEL_FPU_LEGACY_H +#define __X86_KERNEL_FPU_LEGACY_H + +#include + +/* + * Returns 0 on success or the trap number when the operation raises an + * exception. + */ +#define user_insn(insn, output, input...) \ +({ \ + int err; \ + \ + might_fault(); \ + \ + asm volatile(ASM_STAC "\n" \ + "1: " #insn "\n" \ + "2: " ASM_CLAC "\n" \ + _ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_FAULT_MCE_SAFE) \ + : [err] "=a" (err), output \ + : "0"(0), input); \ + err; \ +}) + +#define kernel_insn_err(insn, output, input...) \ +({ \ + int err; \ + asm volatile("1:" #insn "\n\t" \ + "2:\n" \ + ".section .fixup,\"ax\"\n" \ + "3: movl $-1,%[err]\n" \ + " jmp 2b\n" \ + ".previous\n" \ + _ASM_EXTABLE(1b, 3b) \ + : [err] "=r" (err), output \ + : "0"(0), input); \ + err; \ +}) + +#define kernel_insn(insn, output, input...) \ + asm volatile("1:" #insn "\n\t" \ + "2:\n" \ + _ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_FPU_RESTORE) \ + : output : input) + +static inline int fnsave_to_user_sigframe(struct fregs_state __user *fx) +{ + return user_insn(fnsave %[fx]; fwait, [fx] "=m" (*fx), "m" (*fx)); +} + +static inline int fxsave_to_user_sigframe(struct fxregs_state __user *fx) +{ + if (IS_ENABLED(CONFIG_X86_32)) + return user_insn(fxsave %[fx], [fx] "=m" (*fx), "m" (*fx)); + else + return user_insn(fxsaveq %[fx], [fx] "=m" (*fx), "m" (*fx)); + +} + +static inline void fxrstor(struct fxregs_state *fx) +{ + if (IS_ENABLED(CONFIG_X86_32)) + kernel_insn(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx)); + else + kernel_insn(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx)); +} + +static inline int fxrstor_safe(struct fxregs_state *fx) +{ + if (IS_ENABLED(CONFIG_X86_32)) + return kernel_insn_err(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx)); + else + return kernel_insn_err(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx)); +} + +static inline int fxrstor_from_user_sigframe(struct fxregs_state __user *fx) +{ + if (IS_ENABLED(CONFIG_X86_32)) + return user_insn(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx)); + else + return user_insn(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx)); +} + +static inline void frstor(struct fregs_state *fx) +{ + kernel_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx)); +} + +static inline int frstor_safe(struct fregs_state *fx) +{ + return kernel_insn_err(frstor %[fx], "=m" (*fx), [fx] "m" (*fx)); +} + +static inline int frstor_from_user_sigframe(struct fregs_state __user *fx) +{ + return user_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx)); +} + +static inline void fxsave(struct fxregs_state *fx) +{ + if (IS_ENABLED(CONFIG_X86_32)) + asm volatile( "fxsave %[fx]" : [fx] "=m" (*fx)); + else + asm volatile("fxsaveq %[fx]" : [fx] "=m" (*fx)); +} + +#endif diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c index 1a409328aead..873b65ca22ed 100644 --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c @@ -17,6 +17,7 @@ #include #include "internal.h" +#include "legacy.h" #include "xstate.h" static struct _fpx_sw_bytes fx_sw_reserved __ro_after_init; diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c index 
b712c06cbbfb..246a7fea06b1 100644 --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -20,6 +20,7 @@ #include #include "internal.h" +#include "legacy.h" #include "xstate.h" #define for_each_extended_xfeature(bit, mask) \ From patchwork Fri Oct 15 01:16:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559703 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 824ADC433FE for ; Fri, 15 Oct 2021 01:17:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 688DA60E09 for ; Fri, 15 Oct 2021 01:17:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234773AbhJOBUD (ORCPT ); Thu, 14 Oct 2021 21:20:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45194 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234880AbhJOBTP (ORCPT ); Thu, 14 Oct 2021 21:19:15 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 86B65C0613E9; Thu, 14 Oct 2021 18:16:30 -0700 (PDT) Message-ID: <20211015011539.628516182@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260589; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=k0x+7z+1KYAdgY8we3nsR4L/sClon/Ac7bDr44uQ7wo=; b=rO6vOAixtFynqmqo5OyawLf57C7yuXIqN1jmQWhJtaYJ675UccbJJEUsgwFn9TxnqeQodB vkxHFx0q55+0mjgJjsCRUOiyGQ2A6/P7fxO0ZPBXOZJocCgOIqctu4Cm/Cope9c8jnzapd NFNGA6mQ0McA7+7P/XmUb8ZNOD0i/rLXUPQfZ8NGEWsfakcGkvBlbdvS8K9BfWMhOGl9xf vuNu2JtHgNYc9yLZpe8+anC7CfDmduElg9YNw6fQuQC4xdGGu55LldkyrrYXX+bZLaqJhZ 6XDq5dpEtZ4gmGxp5HKNrD2O/xh+aTbbtGXCD1DD9ejDRHunVH/I47PEr1LNmA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260589; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=k0x+7z+1KYAdgY8we3nsR4L/sClon/Ac7bDr44uQ7wo=; b=plMA5KPR/+QYQBEnA/Pi03H+Msyx097R+bap2IN8RtwYDcp4iC7fd0YIFwbsUTVM/DjfTQ jnk5V0YrW0zhFWAQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 22/30] x86/fpu: Make WARN_ON_FPU() private References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:28 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org No point in being in global headers. 
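A small usage sketch (the helper below is made up for illustration): after this change, any file inside arch/x86/kernel/fpu/ that wants the debug-only warning has to pull in the private "internal.h", and the macro still degrades to a plain evaluation of its argument when CONFIG_X86_DEBUG_FPU is disabled.

#include "internal.h"	/* now the only place providing WARN_ON_FPU() */

/* Hypothetical helper inside the FPU core */
static void check_restore_result(int err)
{
	/* warns once with CONFIG_X86_DEBUG_FPU=y, otherwise just (void)err */
	WARN_ON_FPU(err);
}
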
Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/internal.h | 9 --------- arch/x86/kernel/fpu/init.c | 2 ++ arch/x86/kernel/fpu/internal.h | 6 ++++++ 3 files changed, 8 insertions(+), 9 deletions(-) --- diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index 7722aadc3278..f8413a509ba5 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -42,15 +42,6 @@ extern void fpu__init_system(struct cpuinfo_x86 *c); extern void fpu__init_check_bugs(void); extern void fpu__resume_cpu(void); -/* - * Debugging facility: - */ -#ifdef CONFIG_X86_DEBUG_FPU -# define WARN_ON_FPU(x) WARN_ON_ONCE(x) -#else -# define WARN_ON_FPU(x) ({ (void)(x); 0; }) -#endif - extern union fpregs_state init_fpstate; extern void fpstate_init_user(union fpregs_state *state); diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c index 545c91c723b8..24873dfe2dba 100644 --- a/arch/x86/kernel/fpu/init.c +++ b/arch/x86/kernel/fpu/init.c @@ -10,6 +10,8 @@ #include #include +#include "internal.h" + /* * Initialize the registers found in all CPUs, CR0 and CR4: */ diff --git a/arch/x86/kernel/fpu/internal.h b/arch/x86/kernel/fpu/internal.h index a8aac21ba364..5ddc09e03c2a 100644 --- a/arch/x86/kernel/fpu/internal.h +++ b/arch/x86/kernel/fpu/internal.h @@ -13,6 +13,12 @@ static __always_inline __pure bool use_fxsr(void) return cpu_feature_enabled(X86_FEATURE_FXSR); } +#ifdef CONFIG_X86_DEBUG_FPU +# define WARN_ON_FPU(x) WARN_ON_ONCE(x) +#else +# define WARN_ON_FPU(x) ({ (void)(x); 0; }) +#endif + /* Init functions */ extern void fpu__init_prepare_fx_sw_frame(void); From patchwork Fri Oct 15 01:16:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559679 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 167E0C433EF for ; Fri, 15 Oct 2021 01:17:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EBF7F611C2 for ; Fri, 15 Oct 2021 01:17:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235026AbhJOBTV (ORCPT ); Thu, 14 Oct 2021 21:19:21 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:46568 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234890AbhJOBSj (ORCPT ); Thu, 14 Oct 2021 21:18:39 -0400 Message-ID: <20211015011539.686806639@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260591; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=HPmsI8Eo9P3rBuDPDWMzg9zb6tyYtZovtLirQSc3828=; b=JFHB7etDaLVe2OjomSvUdWgtSvyzw7KAvmTFe/1kJZnYUpcYNiDRXUZ/vxfd6cSsonOIIJ n+/KwuRy3qXlzJL13HKVuyNvQtMTp9Q6yGsthlq5i893i53H+JkHHAiTVw1qYISjTf1akN ePujTTT+TYvtuTkSvIvKEdLg5owyOBMqEUJKurQsLyh/5ygAKrtGX2+JlgukWdtOuUFxB/ kISLIQnRl+0k5/AE46nl3z6oEmpt/4QJee+/0Nt7VYBa9iDMiHANdEaSmw38K1DteiQgu8 nh+FeQkp8KpLdSJAE+J6TNUZV1S6BtaEXOg/7K71JOab2fSSf09ijTpVJR8MzA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260591; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: 
to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=HPmsI8Eo9P3rBuDPDWMzg9zb6tyYtZovtLirQSc3828=; b=c83JqMIaEVgo1+BHbGbRBsrjB7Hyxgy/jBjn0thqZ1wXNEj1PkAuaxFdedhf33M+2M7N7W vh0AyACrg4QVBzBA== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 23/30] x86/fpu: Move fpregs_restore_userregs() to core References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:30 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Only used internally in the FPU core code. Signed-off-by: Thomas Gleixner --- V2: Fixed changelog - Boris --- arch/x86/include/asm/fpu/internal.h | 83 +------------------------------------- arch/x86/kernel/fpu/context.h | 85 ++++++++++++++++++++++++++++++++++++++- arch/x86/kernel/fpu/core.c | 1 +- arch/x86/kernel/fpu/regset.c | 1 +- arch/x86/kernel/fpu/signal.c | 1 +- 5 files changed, 88 insertions(+), 83 deletions(-) create mode 100644 arch/x86/kernel/fpu/context.h --- diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index f8413a509ba5..74b7cc3d2e77 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -55,89 +55,6 @@ extern void restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask); extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size); -/* - * FPU context switch related helper methods: - */ - DECLARE_PER_CPU(struct fpu *, fpu_fpregs_owner_ctx); -/* - * The in-register FPU state for an FPU context on a CPU is assumed to be - * valid if the fpu->last_cpu matches the CPU, and the fpu_fpregs_owner_ctx - * matches the FPU. - * - * If the FPU register state is valid, the kernel can skip restoring the - * FPU state from memory. - * - * Any code that clobbers the FPU registers or updates the in-memory - * FPU state for a task MUST let the rest of the kernel know that the - * FPU registers are no longer valid for this task. - * - * Either one of these invalidation functions is enough. Invalidate - * a resource you control: CPU if using the CPU for something else - * (with preemption disabled), FPU for the current task, or a task that - * is prevented from running by the current task. - */ -static inline void __cpu_invalidate_fpregs_state(void) -{ - __this_cpu_write(fpu_fpregs_owner_ctx, NULL); -} - -static inline void __fpu_invalidate_fpregs_state(struct fpu *fpu) -{ - fpu->last_cpu = -1; -} - -static inline int fpregs_state_valid(struct fpu *fpu, unsigned int cpu) -{ - return fpu == this_cpu_read(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu; -} - -/* - * These generally need preemption protection to work, - * do try to avoid using these on their own: - */ -static inline void fpregs_deactivate(struct fpu *fpu) -{ - this_cpu_write(fpu_fpregs_owner_ctx, NULL); - trace_x86_fpu_regs_deactivated(fpu); -} - -static inline void fpregs_activate(struct fpu *fpu) -{ - this_cpu_write(fpu_fpregs_owner_ctx, fpu); - trace_x86_fpu_regs_activated(fpu); -} - -/* Internal helper for switch_fpu_return() and signal frame setup */ -static inline void fpregs_restore_userregs(void) -{ - struct fpu *fpu = ¤t->thread.fpu; - int cpu = smp_processor_id(); - - if (WARN_ON_ONCE(current->flags & PF_KTHREAD)) - return; - - if (!fpregs_state_valid(fpu, cpu)) { - u64 mask; - - /* - * This restores _all_ xstate which has not been - * established yet. 
- * - * If PKRU is enabled, then the PKRU value is already - * correct because it was either set in switch_to() or in - * flush_thread(). So it is excluded because it might be - * not up to date in current->thread.fpu.xsave state. - */ - mask = xfeatures_mask_restore_user() | - xfeatures_mask_supervisor(); - restore_fpregs_from_fpstate(&fpu->state, mask); - - fpregs_activate(fpu); - fpu->last_cpu = cpu; - } - clear_thread_flag(TIF_NEED_FPU_LOAD); -} - #endif /* _ASM_X86_FPU_INTERNAL_H */ diff --git a/arch/x86/kernel/fpu/context.h b/arch/x86/kernel/fpu/context.h new file mode 100644 index 000000000000..e652282842c8 --- /dev/null +++ b/arch/x86/kernel/fpu/context.h @@ -0,0 +1,85 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __X86_KERNEL_FPU_CONTEXT_H +#define __X86_KERNEL_FPU_CONTEXT_H + +#include +#include + +/* Functions related to FPU context tracking */ + +/* + * The in-register FPU state for an FPU context on a CPU is assumed to be + * valid if the fpu->last_cpu matches the CPU, and the fpu_fpregs_owner_ctx + * matches the FPU. + * + * If the FPU register state is valid, the kernel can skip restoring the + * FPU state from memory. + * + * Any code that clobbers the FPU registers or updates the in-memory + * FPU state for a task MUST let the rest of the kernel know that the + * FPU registers are no longer valid for this task. + * + * Either one of these invalidation functions is enough. Invalidate + * a resource you control: CPU if using the CPU for something else + * (with preemption disabled), FPU for the current task, or a task that + * is prevented from running by the current task. + */ +static inline void __cpu_invalidate_fpregs_state(void) +{ + __this_cpu_write(fpu_fpregs_owner_ctx, NULL); +} + +static inline void __fpu_invalidate_fpregs_state(struct fpu *fpu) +{ + fpu->last_cpu = -1; +} + +static inline int fpregs_state_valid(struct fpu *fpu, unsigned int cpu) +{ + return fpu == this_cpu_read(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu; +} + +static inline void fpregs_deactivate(struct fpu *fpu) +{ + __this_cpu_write(fpu_fpregs_owner_ctx, NULL); + trace_x86_fpu_regs_deactivated(fpu); +} + +static inline void fpregs_activate(struct fpu *fpu) +{ + __this_cpu_write(fpu_fpregs_owner_ctx, fpu); + trace_x86_fpu_regs_activated(fpu); +} + +/* Internal helper for switch_fpu_return() and signal frame setup */ +static inline void fpregs_restore_userregs(void) +{ + struct fpu *fpu = ¤t->thread.fpu; + int cpu = smp_processor_id(); + + if (WARN_ON_ONCE(current->flags & PF_KTHREAD)) + return; + + if (!fpregs_state_valid(fpu, cpu)) { + u64 mask; + + /* + * This restores _all_ xstate which has not been + * established yet. + * + * If PKRU is enabled, then the PKRU value is already + * correct because it was either set in switch_to() or in + * flush_thread(). So it is excluded because it might be + * not up to date in current->thread.fpu.xsave state. 
+ */ + mask = xfeatures_mask_restore_user() | + xfeatures_mask_supervisor(); + restore_fpregs_from_fpstate(&fpu->state, mask); + + fpregs_activate(fpu); + fpu->last_cpu = cpu; + } + clear_thread_flag(TIF_NEED_FPU_LOAD); +} + +#endif diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c index 45ad0baa0128..4b407215abef 100644 --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -17,6 +17,7 @@ #include #include +#include "context.h" #include "internal.h" #include "legacy.h" #include "xstate.h" diff --git a/arch/x86/kernel/fpu/regset.c b/arch/x86/kernel/fpu/regset.c index ccf0c59955f1..a40150e350b6 100644 --- a/arch/x86/kernel/fpu/regset.c +++ b/arch/x86/kernel/fpu/regset.c @@ -10,6 +10,7 @@ #include #include +#include "context.h" #include "internal.h" /* diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c index 873b65ca22ed..e4df905e59b0 100644 --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c @@ -16,6 +16,7 @@ #include #include +#include "context.h" #include "internal.h" #include "legacy.h" #include "xstate.h" From patchwork Fri Oct 15 01:16:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559681 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B3E3AC433FE for ; Fri, 15 Oct 2021 01:17:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 93A50611C2 for ; Fri, 15 Oct 2021 01:17:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231394AbhJOBTZ (ORCPT ); Thu, 14 Oct 2021 21:19:25 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:46584 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234898AbhJOBSj (ORCPT ); Thu, 14 Oct 2021 21:18:39 -0400 Message-ID: <20211015011539.740012411@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260592; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=K1kVt48y5cwbSkcpNYxYQMCbOR59BkxrewWCF+G1Xvo=; b=g2CpoeImdKTInzgSVFX8GXWXSh0upwKRKOGP0Uyb/z7q9GJaapEOpWNGJ++VtEFtqNUIrr 5Lw1A1XHNo54/secSjRw9PvVkndcn3w0GwLrOYDGlXuP4rB8QCPTnFHFfDNOSGPzWXqisG tv7X/K7uTo+cGgCLb/+Cg4E0080ONqvPq5L5EVguUzn6xZGOojYq+PHzEKzipJ44pWPZJT kxfZhDRv82GUDMeclK/cW4XIB+414ZCzhbyIIlHpC+NQHDiE1aMQuPuW9GOWt7VDtJflQv oncyV3hmwMsYnGwEy/nqpKQ9txioIi7G0fWXHsk+Q3SB7VlfeyPgO6/pEnoFAQ== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260592; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=K1kVt48y5cwbSkcpNYxYQMCbOR59BkxrewWCF+G1Xvo=; b=5bNYA6weSFa+S84SalQOf+L8nvkJkmZkhPo5mz5b+vNaM9EfvXeujcMcccWGnVQdx2ZzrV 9l/zWCZ34dUy1mCg== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. 
Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 24/30] x86/fpu: Move mxcsr related code to core References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:31 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org No need to expose that to code which only needs the XCR0 accessors. Signed-off-by: Thomas Gleixner --- arch/x86/include/asm/fpu/xcr.h | 11 ----------- arch/x86/kernel/fpu/init.c | 1 + arch/x86/kernel/fpu/legacy.h | 7 +++++++ arch/x86/kernel/fpu/regset.c | 1 + arch/x86/kernel/fpu/xstate.c | 3 ++- arch/x86/kvm/svm/sev.c | 2 +- 6 files changed, 12 insertions(+), 13 deletions(-) --- diff --git a/arch/x86/include/asm/fpu/xcr.h b/arch/x86/include/asm/fpu/xcr.h index 1c7ab8d95da5..79f95d3787e2 100644 --- a/arch/x86/include/asm/fpu/xcr.h +++ b/arch/x86/include/asm/fpu/xcr.h @@ -2,17 +2,6 @@ #ifndef _ASM_X86_FPU_XCR_H #define _ASM_X86_FPU_XCR_H -/* - * MXCSR and XCR definitions: - */ - -static inline void ldmxcsr(u32 mxcsr) -{ - asm volatile("ldmxcsr %0" :: "m" (mxcsr)); -} - -extern unsigned int mxcsr_feature_mask; - #define XCR_XFEATURE_ENABLED_MASK 0x00000000 static inline u64 xgetbv(u32 index) diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c index 24873dfe2dba..e77084a6ae7c 100644 --- a/arch/x86/kernel/fpu/init.c +++ b/arch/x86/kernel/fpu/init.c @@ -11,6 +11,7 @@ #include #include "internal.h" +#include "legacy.h" /* * Initialize the registers found in all CPUs, CR0 and CR4: diff --git a/arch/x86/kernel/fpu/legacy.h b/arch/x86/kernel/fpu/legacy.h index 2ff36b0f79e9..17c26b164c63 100644 --- a/arch/x86/kernel/fpu/legacy.h +++ b/arch/x86/kernel/fpu/legacy.h @@ -4,6 +4,13 @@ #include +extern unsigned int mxcsr_feature_mask; + +static inline void ldmxcsr(u32 mxcsr) +{ + asm volatile("ldmxcsr %0" :: "m" (mxcsr)); +} + /* * Returns 0 on success or the trap number when the operation raises an * exception. 
diff --git a/arch/x86/kernel/fpu/regset.c b/arch/x86/kernel/fpu/regset.c index a40150e350b6..3d8ed45da166 100644 --- a/arch/x86/kernel/fpu/regset.c +++ b/arch/x86/kernel/fpu/regset.c @@ -12,6 +12,7 @@ #include "context.h" #include "internal.h" +#include "legacy.h" /* * The xstateregs_active() routine is the same as the regset_fpregs_active() routine, diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c index 246a7fea06b1..f0305b2b227f 100644 --- a/arch/x86/kernel/fpu/xstate.c +++ b/arch/x86/kernel/fpu/xstate.c @@ -14,8 +14,9 @@ #include #include -#include #include +#include +#include #include diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 75e0b21ad07c..e153f3907507 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -17,10 +17,10 @@ #include #include #include -#include #include #include +#include #include "x86.h" #include "svm.h" From patchwork Fri Oct 15 01:16:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559683 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 14D43C433EF for ; Fri, 15 Oct 2021 01:17:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EBEA9611C2 for ; Fri, 15 Oct 2021 01:17:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234898AbhJOBT2 (ORCPT ); Thu, 14 Oct 2021 21:19:28 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:46590 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234900AbhJOBSk (ORCPT ); Thu, 14 Oct 2021 21:18:40 -0400 Message-ID: <20211015011539.792363754@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260594; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=naosO9hQ1o99O1ACUu2/9fKYBwND95hze68DMEXuWms=; b=fNam2hCirHA1yhViuZ6D7YsmS6ZY5LGS/eDPXatTdZ6bX18PAF6QRteae6QVhugkadgKAP D+zOpXkeUlqFxOIEafQzdwPCGR4cdMx8sGmlRnC/+Bw0tJMFLPJrbQnkteGbPVT5s+prt2 DVRxC64nIGHn/iKc82Gpz4EYHfCSFgZYCWZqYINSdKByRsEQPWyD8dkywUeq3d0MJ81r+1 8dQBKdfitS9eFb6Ar67/AVS2whygkR2ndo3iXbWdVVftOxRs30dmLMY4miKo6dK8cd30Ox MPUM2B8KQAwBAdI4jL+VCXS4TSNhXb67EBA/EWJQP3owGTy8gvLNkvZKw/rPaw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260594; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=naosO9hQ1o99O1ACUu2/9fKYBwND95hze68DMEXuWms=; b=RGUhHCaqqM5uU3xWk+CYjmkQaIddYbUphS+AXwTiNK0r6J1J371dlgdVb9Mfyjgoz8v4iS kvj3Md9ckwDtuiBw== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 25/30] x86/fpu: Move fpstate functions to api.h References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:33 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Move function declarations which need to be globally available to api.h where they belong. 
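As a usage sketch (the function below is hypothetical), the point of the move is that a consumer outside the FPU core, such as the math emulator converted below, only needs the public header to reach these fpstate helpers:

#include <asm/fpu/api.h>	/* now declares init_fpstate and fpstate_init_soft() */

/* Hypothetical consumer: initialize a soft-FPU register image. With
 * CONFIG_MATH_EMULATION=n this compiles away, since api.h provides an
 * empty inline stub. */
static void prepare_soft_fpu(struct swregs_state *soft)
{
	fpstate_init_soft(soft);
}
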
Signed-off-by: Thomas Gleixner --- V2: Fix changelog typo - Boris --- arch/x86/include/asm/fpu/api.h | 9 +++++++++ arch/x86/include/asm/fpu/internal.h | 9 --------- arch/x86/kernel/fpu/internal.h | 3 +++ arch/x86/math-emu/fpu_entry.c | 2 +- 4 files changed, 13 insertions(+), 10 deletions(-) --- diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h index c6f3d1add32f..ed0e2baa3f4b 100644 --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -110,6 +110,15 @@ extern int cpu_has_xfeatures(u64 xfeatures_mask, const char **feature_name); static inline void update_pasid(void) { } +#ifdef CONFIG_MATH_EMULATION +extern void fpstate_init_soft(struct swregs_state *soft); +#else +static inline void fpstate_init_soft(struct swregs_state *soft) {} +#endif + +/* fpstate */ +extern union fpregs_state init_fpstate; + /* fpstate-related functions which are exported to KVM */ extern void fpu_init_fpstate_user(struct fpu *fpu); diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index 74b7cc3d2e77..d8bb49134ebb 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -42,15 +42,6 @@ extern void fpu__init_system(struct cpuinfo_x86 *c); extern void fpu__init_check_bugs(void); extern void fpu__resume_cpu(void); -extern union fpregs_state init_fpstate; -extern void fpstate_init_user(union fpregs_state *state); - -#ifdef CONFIG_MATH_EMULATION -extern void fpstate_init_soft(struct swregs_state *soft); -#else -static inline void fpstate_init_soft(struct swregs_state *soft) {} -#endif - extern void restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask); extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size); diff --git a/arch/x86/kernel/fpu/internal.h b/arch/x86/kernel/fpu/internal.h index 5ddc09e03c2a..bd7f813242dd 100644 --- a/arch/x86/kernel/fpu/internal.h +++ b/arch/x86/kernel/fpu/internal.h @@ -22,4 +22,7 @@ static __always_inline __pure bool use_fxsr(void) /* Init functions */ extern void fpu__init_prepare_fx_sw_frame(void); +/* Used in init.c */ +extern void fpstate_init_user(union fpregs_state *state); + #endif diff --git a/arch/x86/math-emu/fpu_entry.c b/arch/x86/math-emu/fpu_entry.c index 8679a9d6c47f..50195e249753 100644 --- a/arch/x86/math-emu/fpu_entry.c +++ b/arch/x86/math-emu/fpu_entry.c @@ -31,7 +31,7 @@ #include #include #include -#include +#include #include "fpu_system.h" #include "fpu_emu.h" From patchwork Fri Oct 15 01:16:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559705 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C8398C433EF for ; Fri, 15 Oct 2021 01:18:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id ADB1B60E09 for ; Fri, 15 Oct 2021 01:18:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234819AbhJOBU3 (ORCPT ); Thu, 14 Oct 2021 21:20:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45230 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235189AbhJOBTs (ORCPT ); Thu, 14 Oct 2021 21:19:48 -0400 Received: from galois.linutronix.de (Galois.linutronix.de 
[IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D75D2C06178A; Thu, 14 Oct 2021 18:16:36 -0700 (PDT) Message-ID: <20211015011539.844565975@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260595; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=oLjS3bh1ZdpLwkMG093WGxLYcrvexFUXTWx+VBYZ9fg=; b=FCtSS7iHv41GYQlWrVG3OHSMxrcXz0sWV4o0PY1UiUyZprua26YqhZAPpHzIX07+9k7tuP 8OYcHc2qYcHnjyJOhOcsPbYExVTkxQO2LddPZ+gx+7ynxL3GIgKcOiqD94SPm+kLQANZVV J9id8VKw8QTliokmZixFigzZIwzfZUbnaMeotw7Bqf4dQZurswx5hsgtT+D9D3stoqp3Uq EqMPFivGOsn1uLIMqx0g2ths0vzwG8H7RmmGzGw0++TMoymTl+3I4RXmN+N2kIqVwvRiS7 OLAYx1NEwnU67duNEZD28rzsWdEr4ZV3ctkVU35fnP0Gp8mYupcvpF0lb17aAw== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260595; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=oLjS3bh1ZdpLwkMG093WGxLYcrvexFUXTWx+VBYZ9fg=; b=z9DJdedrcaXA9hCfgbRwgFNdahmRHYwfe4dUw86qjt/ozkJG/nsQhHXNxj2V1YX0UcWf56 OyftwTgEu9yznZAQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 26/30] x86/fpu: Remove internal.h dependency from fpu/signal.h References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:35 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org In order to remove internal.h make signal.h independent of it. Signed-off-by: Thomas Gleixner --- arch/x86/ia32/ia32_signal.c | 1 - arch/x86/include/asm/fpu/api.h | 3 +++ arch/x86/include/asm/fpu/internal.h | 7 ------- arch/x86/include/asm/fpu/signal.h | 13 +++++++++++++ arch/x86/kernel/fpu/signal.c | 1 - arch/x86/kernel/ptrace.c | 1 - arch/x86/kernel/signal.c | 1 - arch/x86/mm/extable.c | 3 ++- 8 files changed, 18 insertions(+), 12 deletions(-) --- diff --git a/arch/x86/ia32/ia32_signal.c b/arch/x86/ia32/ia32_signal.c index 828ab0a9239b..c9c3859322fa 100644 --- a/arch/x86/ia32/ia32_signal.c +++ b/arch/x86/ia32/ia32_signal.c @@ -24,7 +24,6 @@ #include #include #include -#include #include #include #include diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h index ed0e2baa3f4b..764f3ae028c1 100644 --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -116,6 +116,9 @@ extern void fpstate_init_soft(struct swregs_state *soft); static inline void fpstate_init_soft(struct swregs_state *soft) {} #endif +/* State tracking */ +DECLARE_PER_CPU(struct fpu *, fpu_fpregs_owner_ctx); + /* fpstate */ extern union fpregs_state init_fpstate; diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index d8bb49134ebb..8f97d3e375de 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -26,7 +26,6 @@ /* * High level FPU state handling functions: */ -extern bool fpu__restore_sig(void __user *buf, int ia32_frame); extern void fpu__clear_user_states(struct fpu *fpu); extern int fpu__exception_code(struct fpu *fpu, int trap_nr); @@ -42,10 +41,4 @@ extern void fpu__init_system(struct cpuinfo_x86 *c); extern void fpu__init_check_bugs(void); extern void fpu__resume_cpu(void); -extern void restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 
mask); - -extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size); - -DECLARE_PER_CPU(struct fpu *, fpu_fpregs_owner_ctx); - #endif /* _ASM_X86_FPU_INTERNAL_H */ diff --git a/arch/x86/include/asm/fpu/signal.h b/arch/x86/include/asm/fpu/signal.h index 04868a76239a..9a63a21c219d 100644 --- a/arch/x86/include/asm/fpu/signal.h +++ b/arch/x86/include/asm/fpu/signal.h @@ -5,6 +5,11 @@ #ifndef _ASM_X86_FPU_SIGNAL_H #define _ASM_X86_FPU_SIGNAL_H +#include +#include + +#include + #ifdef CONFIG_X86_64 # include # include @@ -31,4 +36,12 @@ fpu__alloc_mathframe(unsigned long sp, int ia32_frame, unsigned long fpu__get_fpstate_size(void); +extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size); +extern void fpu__clear_user_states(struct fpu *fpu); +extern bool fpu__restore_sig(void __user *buf, int ia32_frame); + +extern void restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask); + +extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size); + #endif /* _ASM_X86_FPU_SIGNAL_H */ diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c index e4df905e59b0..5f5fb443b2e2 100644 --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c @@ -7,7 +7,6 @@ #include #include -#include #include #include #include diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c index 4c208ea3bd9f..1a789c8a3c3f 100644 --- a/arch/x86/kernel/ptrace.c +++ b/arch/x86/kernel/ptrace.c @@ -29,7 +29,6 @@ #include #include -#include #include #include #include diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c index 02ee68e68184..58bd07071d14 100644 --- a/arch/x86/kernel/signal.c +++ b/arch/x86/kernel/signal.c @@ -30,7 +30,6 @@ #include #include -#include #include #include #include diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c index 043ec385af45..79c2e30d93ae 100644 --- a/arch/x86/mm/extable.c +++ b/arch/x86/mm/extable.c @@ -4,7 +4,8 @@ #include #include -#include +#include +#include #include #include #include From patchwork Fri Oct 15 01:16:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559707 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6EA9AC433EF for ; Fri, 15 Oct 2021 01:18:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 53096611C2 for ; Fri, 15 Oct 2021 01:18:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235053AbhJOBUh (ORCPT ); Thu, 14 Oct 2021 21:20:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45146 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234837AbhJOBTt (ORCPT ); Thu, 14 Oct 2021 21:19:49 -0400 Received: from galois.linutronix.de (Galois.linutronix.de [IPv6:2a0a:51c0:0:12e:550::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5D0EFC061793; Thu, 14 Oct 2021 18:16:38 -0700 (PDT) Message-ID: <20211015011539.896573039@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260597; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; 
bh=qXPN8X8kmxRIkmpqRqhxyp/xhryH6JXN4VJULuCHRLE=; b=mr/S+AxYZDBrtj0qvbRAefEmbCaT4Tx1tvTEq2ZQyydXwFUd4ORLmKxyKs5G+wX50G4Lje qIrWDT18h+SRkNAT2/7hfo3/4JAgt5MjiHZ2z0onDxl+YuFhjFWdWXKJLYMfX6wbCyDq3n ZrzJE4UY5AkBkeYJgfJs1YSf9fa9mqth4+lNhYMmZosOzgN/jzrnvUp/n5p1wo4KynLXh9 f7e86ltBsopYjMDHlklKkrNkaCGcxjxS+m70F9HbAIULYoLfPEvTocLPaLDg26PpLr7hKI /T0BKAdgnvqJJwzkbN0KFjJJ43MXI+quNkJywXejitR0uOh2wPfnFcuPm07gJA== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260597; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=qXPN8X8kmxRIkmpqRqhxyp/xhryH6JXN4VJULuCHRLE=; b=tu43GJXm/jdi/JGF+7Ss/F8W74AY7cHLI4theZomCOkKne1d8B0WG2ukdNrWmSj8yg9y3T eq1dEvFzjlDEl+AQ== From: Thomas Gleixner To: LKML Cc: x86@kernel.org, "Chang S. Bae" , Dave Hansen , Arjan van de Ven , kvm@vger.kernel.org, Paolo Bonzini , "Liu, Jing2" , Sean Christopherson , Xiaoyao Li Subject: [patch V2 27/30] x86/sev: Include fpu/xcr.h References: <20211015011411.304289784@linutronix.de> MIME-Version: 1.0 Date: Fri, 15 Oct 2021 03:16:36 +0200 (CEST) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Include the header which only provides the XCR accessors. That's all what is needed here. Signed-off-by: Thomas Gleixner --- V2: s/XRC/XCR/ - Xiaoyao Li --- arch/x86/kernel/sev.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index a6895e440bc3..50c773c3384c 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -23,7 +23,7 @@ #include #include #include -#include +#include #include #include #include From patchwork Fri Oct 15 01:16:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 12559685 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C8994C433FE for ; Fri, 15 Oct 2021 01:17:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AC5BA606A5 for ; Fri, 15 Oct 2021 01:17:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235125AbhJOBTe (ORCPT ); Thu, 14 Oct 2021 21:19:34 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:46628 "EHLO galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230080AbhJOBSp (ORCPT ); Thu, 14 Oct 2021 21:18:45 -0400 Message-ID: <20211015011539.948837194@linutronix.de> DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1634260598; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: references:references; bh=hx/pIt3bXp8CbN6mt/mxogSML+lH+MiBhjTh9wH9cQs=; b=SscN3BvHGQfZMD3X2uDjNaaZKCjeIbOn5kV7SigkTf0a6v3nFjliDSekMvuSQRNAlaAKfz EYamYkrCrhRPJ+7DnisXL/IIUu/3txP8S5/cT+Kd1xuG+PtJDG/+cJDGT8XrZhF6/LepA4 zxy5uUAewVP8IXB5FVELG7arGjWCkYF39EV9c33HDVCC8enFn/3ahbH04Se6itqXyRXICU +89mX4bjMXCWr1hIZGw4f+6doyof1iE/juZsEjG+QFxVz3ty2iHHsFegR43JWjQ+5cwYC8 ieJtzT5OkN4OMFeiYF7UIV2U6XZcCMooB1F1XqZL9MxpvLzSr+McxZQzrd/LOg== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1634260598; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: 
From patchwork Fri Oct 15 01:16:38 2021
Message-ID: <20211015011539.948837194@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, "Chang S. Bae", Dave Hansen, Arjan van de Ven,
    kvm@vger.kernel.org, Paolo Bonzini, "Liu, Jing2", Sean Christopherson,
    Xiaoyao Li
Subject: [patch V2 28/30] x86/fpu: Mop up the internal.h leftovers
References: <20211015011411.304289784@linutronix.de>
Date: Fri, 15 Oct 2021 03:16:38 +0200 (CEST)

Move the global interfaces to api.h and the rest into the core.

Signed-off-by: Thomas Gleixner
---
 arch/x86/include/asm/fpu/api.h      | 10 ++++++++++
 arch/x86/include/asm/fpu/internal.h | 18 ------------------
 arch/x86/kernel/fpu/init.c          |  1 +
 arch/x86/kernel/fpu/xstate.h        |  3 +++
 4 files changed, 14 insertions(+), 18 deletions(-)
---
diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h
index 764f3ae028c1..b68d8ce599e4 100644
--- a/arch/x86/include/asm/fpu/api.h
+++ b/arch/x86/include/asm/fpu/api.h
@@ -110,6 +110,16 @@ extern int cpu_has_xfeatures(u64 xfeatures_mask, const char **feature_name);
 
 static inline void update_pasid(void) { }
 
+/* Trap handling */
+extern int fpu__exception_code(struct fpu *fpu, int trap_nr);
+extern void fpu_sync_fpstate(struct fpu *fpu);
+
+/* Boot, hotplug and resume */
+extern void fpu__init_cpu(void);
+extern void fpu__init_system(struct cpuinfo_x86 *c);
+extern void fpu__init_check_bugs(void);
+extern void fpu__resume_cpu(void);
+
 #ifdef CONFIG_MATH_EMULATION
 extern void fpstate_init_soft(struct swregs_state *soft);
 #else
diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
index 8f97d3e375de..8df83e887ff6 100644
--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -23,22 +23,4 @@
 #include
 #include
 
-/*
- * High level FPU state handling functions:
- */
-extern void fpu__clear_user_states(struct fpu *fpu);
-extern int fpu__exception_code(struct fpu *fpu, int trap_nr);
-
-extern void fpu_sync_fpstate(struct fpu *fpu);
-
-/*
- * Boot time FPU initialization functions:
- */
-extern void fpu__init_cpu(void);
-extern void fpu__init_system_xstate(void);
-extern void fpu__init_cpu_xstate(void);
-extern void fpu__init_system(struct cpuinfo_x86 *c);
-extern void fpu__init_check_bugs(void);
-extern void fpu__resume_cpu(void);
-
 #endif /* _ASM_X86_FPU_INTERNAL_H */
diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c
index e77084a6ae7c..d420d29e58be 100644
--- a/arch/x86/kernel/fpu/init.c
+++ b/arch/x86/kernel/fpu/init.c
@@ -12,6 +12,7 @@
 #include "internal.h"
 #include "legacy.h"
+#include "xstate.h"
 
 /*
  * Initialize the registers found in all CPUs, CR0 and CR4:
diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h
index ae61baa97682..bb6d7d298d2a 100644
--- a/arch/x86/kernel/fpu/xstate.h
+++ b/arch/x86/kernel/fpu/xstate.h
@@ -18,6 +18,9 @@ static inline void xstate_init_xcomp_bv(struct xregs_state *xsave, u64 mask)
 extern void __copy_xstate_to_uabi_buf(struct membuf to, struct xregs_state *xsave,
 				      u32 pkru_val, enum xstate_copy_mode copy_mode);
 
+extern void fpu__init_cpu_xstate(void);
+extern void fpu__init_system_xstate(void);
+
 /* XSAVE/XRSTOR wrapper functions */
 
 #ifdef CONFIG_X86_64
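As context for the api.h/core split: <asm/fpu/api.h> is the header the rest of
the kernel is expected to consume. A minimal, illustrative consumer follows;
the function name is made up, while kernel_fpu_begin()/kernel_fpu_end() are the
long-standing api.h entry points:

#include <asm/fpu/api.h>

/* Hypothetical example of a driver-style user of the public FPU API. */
static void example_simd_section(void)
{
	kernel_fpu_begin();	/* enter a kernel FPU section, preemption off */
	/* ... SSE/AVX work goes here ... */
	kernel_fpu_end();	/* leave the section; user state is restored lazily */
}

Code like this never needs to see the FPU core internals, which is what allows
the internal.h leftovers to be split between api.h and the private core headers.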
From patchwork Fri Oct 15 01:16:39 2021
Message-ID: <20211015011540.001197214@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, "Chang S. Bae", Dave Hansen, Arjan van de Ven,
    kvm@vger.kernel.org, Paolo Bonzini, "Liu, Jing2", Sean Christopherson,
    Xiaoyao Li
Subject: [patch V2 29/30] x86/fpu: Replace the includes of fpu/internal.h
References: <20211015011411.304289784@linutronix.de>
Date: Fri, 15 Oct 2021 03:16:39 +0200 (CEST)

Now that the file is empty, fixup all references with the proper includes
and delete the former kitchen sink.
Signed-off-by: Thomas Gleixner
---
 arch/x86/include/asm/fpu/internal.h | 26 --------------------------
 arch/x86/kernel/cpu/bugs.c          |  2 +-
 arch/x86/kernel/cpu/common.c        |  2 +-
 arch/x86/kernel/fpu/bugs.c          |  2 +-
 arch/x86/kernel/fpu/core.c          |  2 +-
 arch/x86/kernel/fpu/init.c          |  2 +-
 arch/x86/kernel/fpu/regset.c        |  2 +-
 arch/x86/kernel/fpu/xstate.c        |  1 -
 arch/x86/kernel/smpboot.c           |  2 +-
 arch/x86/kernel/traps.c             |  2 +-
 arch/x86/kvm/vmx/vmx.c              |  2 +-
 arch/x86/power/cpu.c                |  2 +-
 12 files changed, 10 insertions(+), 37 deletions(-)
---
diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
index 8df83e887ff6..e69de29bb2d1 100644
--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -1,26 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Copyright (C) 1994 Linus Torvalds
- *
- * Pentium III FXSR, SSE support
- * General FPU state handling cleanups
- * Gareth Hughes, May 2000
- * x86-64 work by Andi Kleen 2002
- */
-
-#ifndef _ASM_X86_FPU_INTERNAL_H
-#define _ASM_X86_FPU_INTERNAL_H
-
-#include
-#include
-#include
-#include
-
-#include
-#include
-#include
-#include
-#include
-#include
-
-#endif /* _ASM_X86_FPU_INTERNAL_H */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ecfca3bbcd96..6c8a86b58020 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -22,7 +22,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index b3410f1ac217..486dc8c1d388 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -42,7 +42,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
diff --git a/arch/x86/kernel/fpu/bugs.c b/arch/x86/kernel/fpu/bugs.c
index 2954fab15e51..794e70151203 100644
--- a/arch/x86/kernel/fpu/bugs.c
+++ b/arch/x86/kernel/fpu/bugs.c
@@ -2,7 +2,7 @@
 /*
  * x86 FPU bug checks:
  */
-#include
+#include
 
 /*
  * Boot time CPU/FPU FDIV bug detection code:
diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
index 4b407215abef..c6d7a47b1b26 100644
--- a/arch/x86/kernel/fpu/core.c
+++ b/arch/x86/kernel/fpu/core.c
@@ -6,7 +6,7 @@
  * General FPU state handling cleanups
  * Gareth Hughes, May 2000
  */
-#include
+#include
 #include
 #include
 #include
diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c
index d420d29e58be..23791355ca67 100644
--- a/arch/x86/kernel/fpu/init.c
+++ b/arch/x86/kernel/fpu/init.c
@@ -2,7 +2,7 @@
 /*
  * x86 FPU boot time init code:
  */
-#include
+#include
 #include
 #include
diff --git a/arch/x86/kernel/fpu/regset.c b/arch/x86/kernel/fpu/regset.c
index 3d8ed45da166..01a1d97c3cb6 100644
--- a/arch/x86/kernel/fpu/regset.c
+++ b/arch/x86/kernel/fpu/regset.c
@@ -5,7 +5,7 @@
 #include
 #include
 
-#include
+#include
 #include
 #include
 #include
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index f0305b2b227f..b022df95a302 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -13,7 +13,6 @@
 #include
 #include
-#include
 #include
 #include
 #include
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 85f6e242b6b4..2577ed3b5e6b 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -70,7 +70,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index a58800973aed..bae7582c58f5 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -48,7 +48,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0c2c0d5ae873..e8d5f7963401 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -35,7 +35,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
index 6665f8802098..9f2b251e83c5 100644
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -20,7 +20,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
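The replacement applied in each hunk above follows one pattern: a file that
used to pull in the FPU kitchen-sink header now includes only the interface it
actually consumes. A hedged sketch for a generic consumer outside the FPU core
(which specific header each individual file switched to is not spelled out
here):

/* Before: the file relied on the FPU kitchen-sink header. */
#include <asm/fpu/internal.h>

/* After: only the public interface this file really uses. */
#include <asm/fpu/api.h>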
From patchwork Fri Oct 15 01:16:41 2021
Message-ID: <20211015011540.053515012@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, "Chang S. Bae", Dave Hansen, Arjan van de Ven,
    kvm@vger.kernel.org, Paolo Bonzini, "Liu, Jing2", Sean Christopherson,
    Xiaoyao Li
Subject: [patch V2 30/30] x86/fpu: Provide a proper function for ex_handler_fprestore()
References: <20211015011411.304289784@linutronix.de>
Date: Fri, 15 Oct 2021 03:16:41 +0200 (CEST)

To make upcoming changes for support of dynamically enabled features
simpler, provide a proper function for the exception handler which removes
exposure of FPU internals.

Signed-off-by: Thomas Gleixner
---
 arch/x86/include/asm/fpu/api.h | 4 +---
 arch/x86/kernel/fpu/core.c     | 5 +++++
 arch/x86/kernel/fpu/internal.h | 2 ++
 arch/x86/mm/extable.c          | 5 ++---
 4 files changed, 10 insertions(+), 6 deletions(-)
---
diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h
index b68d8ce599e4..5ac5e4596b53 100644
--- a/arch/x86/include/asm/fpu/api.h
+++ b/arch/x86/include/asm/fpu/api.h
@@ -113,6 +113,7 @@ static inline void update_pasid(void) { }
 /* Trap handling */
 extern int fpu__exception_code(struct fpu *fpu, int trap_nr);
 extern void fpu_sync_fpstate(struct fpu *fpu);
+extern void fpu_reset_from_exception_fixup(void);
 
 /* Boot, hotplug and resume */
 extern void fpu__init_cpu(void);
@@ -129,9 +130,6 @@ static inline void fpstate_init_soft(struct swregs_state *soft) {}
 /* State tracking */
 DECLARE_PER_CPU(struct fpu *, fpu_fpregs_owner_ctx);
 
-/* fpstate */
-extern union fpregs_state init_fpstate;
-
 /* fpstate-related functions which are exported to KVM */
 extern void fpu_init_fpstate_user(struct fpu *fpu);
diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
index c6d7a47b1b26..ac540a7d410e 100644
--- a/arch/x86/kernel/fpu/core.c
+++ b/arch/x86/kernel/fpu/core.c
@@ -155,6 +155,11 @@ void restore_fpregs_from_fpstate(union fpregs_state *fpstate, u64 mask)
 	}
 }
 
+void fpu_reset_from_exception_fixup(void)
+{
+	restore_fpregs_from_fpstate(&init_fpstate, xfeatures_mask_fpstate());
+}
+
 #if IS_ENABLED(CONFIG_KVM)
 void fpu_swap_kvm_fpu(struct fpu *save, struct fpu *rstor, u64 restore_mask)
 {
diff --git a/arch/x86/kernel/fpu/internal.h b/arch/x86/kernel/fpu/internal.h
index bd7f813242dd..479f2db6e160 100644
--- a/arch/x86/kernel/fpu/internal.h
+++ b/arch/x86/kernel/fpu/internal.h
@@ -2,6 +2,8 @@
 #ifndef __X86_KERNEL_FPU_INTERNAL_H
 #define __X86_KERNEL_FPU_INTERNAL_H
 
+extern union fpregs_state init_fpstate;
+
 /* CPU feature check wrappers */
 static __always_inline __pure bool use_xsave(void)
 {
diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index 79c2e30d93ae..5cd2a88930a9 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -4,8 +4,7 @@
 #include
 #include
-#include
-#include
+#include
 #include
 #include
 #include
@@ -48,7 +47,7 @@ static bool ex_handler_fprestore(const struct exception_table_entry *fixup,
 	WARN_ONCE(1, "Bad FPU state detected at %pB, reinitializing FPU registers.",
 		  (void *)instruction_pointer(regs));
 
-	restore_fpregs_from_fpstate(&init_fpstate, xfeatures_mask_fpstate());
+	fpu_reset_from_exception_fixup();
 	return true;
 }
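A short note on the design: after this change the only FPU symbol the
exception fixup needs is the api.h prototype, while init_fpstate and
xfeatures_mask_fpstate() stay private to arch/x86/kernel/fpu/. A condensed,
hedged sketch of how the fixup path then reads end to end; the body of the
handler outside the shown hunks is elided and paraphrased here, not quoted:

/* arch/x86/mm/extable.c -- sees only the public API */
#include <asm/fpu/api.h>

static bool ex_handler_fprestore(const struct exception_table_entry *fixup,
				 struct pt_regs *regs)
{
	/* ... redirect regs->ip to the fixup target (elided) ... */

	WARN_ONCE(1, "Bad FPU state detected at %pB, reinitializing FPU registers.",
		  (void *)instruction_pointer(regs));

	/* Reinitialize the FPU registers through the narrow API. */
	fpu_reset_from_exception_fixup();
	return true;
}

/* arch/x86/kernel/fpu/core.c -- keeps the reset policy and internals private */
void fpu_reset_from_exception_fixup(void)
{
	restore_fpregs_from_fpstate(&init_fpstate, xfeatures_mask_fpstate());
}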