Message ID: 20230119212317.8324-7-rick.p.edgecombe@intel.com (mailing list archive)
State: New
Series: Shadow stacks for userspace
On Thu, Jan 19, 2023 at 01:22:44PM -0800, Rick Edgecombe wrote:
> Just like user xfeatures, supervisor xfeatures can be active in the
> registers or present in the task FPU buffer. If the registers are
> active, the registers can be modified directly. If the registers are
> not active, the modification must be performed on the task FPU buffer.
>
> When the state is not active, the kernel could perform modifications
> directly to the buffer. But in order for it to do that, it needs
> to know where in the buffer the specific state it wants to modify is
> located. Doing this is not robust against optimizations that compact
> the FPU buffer, as each access would require computing where in the
> buffer it is.
>
> The easiest way to modify supervisor xfeature data is to force restore
> the registers and write directly to the MSRs. Often times this is just fine
> anyway as the registers need to be restored before returning to userspace.
> Do this for now, leaving buffer writing optimizations for the future.
>
> Add a new function fpregs_lock_and_load() that can simultaneously call
> fpregs_lock() and do this restore. Also perform some extra sanity
> checks in this function since this will be used in non-fpu focused code.
>
> Tested-by: Pengfei Xu <pengfei.xu@intel.com>
> Tested-by: John Allen <john.allen@amd.com>
> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>

Reviewed-by: Kees Cook <keescook@chromium.org>
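For context, the pattern the commit message describes ends up looking roughly like this at call sites later in the series (a sketch; MSR_IA32_PL3_SSP is the CET shadow stack pointer MSR from the wider series, not this patch, and new_ssp is a hypothetical value):

	fpregs_lock_and_load();

	/* current's xstate is now live in the registers; write the MSR directly */
	wrmsrl(MSR_IA32_PL3_SSP, new_ssp);

	fpregs_unlock();

The modified value then travels back to the task's FPU buffer through the normal xsave machinery, instead of the caller having to compute the feature's offset in a possibly compacted buffer.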
On Thu, Jan 19, 2023 at 01:22:44PM -0800, Rick Edgecombe wrote:
> +void fpregs_lock_and_load(void)
> +{
> +	/*
> +	 * fpregs_lock() only disables preemption (mostly). So modifying state
> +	 * in an interrupt could screw up some in progress fpregs operation,
> +	 * but appear to work. Warn about it.

I don't like comments where it sounds like we don't know what we're doing. "Appear to work"?
On Wed, 2023-02-01 at 12:01 +0100, Borislav Petkov wrote:
> On Thu, Jan 19, 2023 at 01:22:44PM -0800, Rick Edgecombe wrote:
> > +void fpregs_lock_and_load(void)
> > +{
> > +	/*
> > +	 * fpregs_lock() only disables preemption (mostly). So modifying state
> > +	 * in an interrupt could screw up some in progress fpregs operation,
> > +	 * but appear to work. Warn about it.
>
> I don't like comments where it sounds like we don't know what we're
> doing. "Appear to work"?

I can change it. This patch started with the observation that modifying xstate from the kernel had been gotten wrong a couple times in the past, so that is what this is referencing. Since then, the fancy automatic solution got boiled down to this helper and a couple warnings.
On Wed, Feb 01, 2023 at 05:31:50PM +0000, Edgecombe, Rick P wrote:
> I can change it. This patch started with the observation that modifying
> xstate from the kernel had been gotten wrong a couple times in the
> past, so that is what this is referencing. Since then, the fancy
> automatic solution got boiled down to this helper and a couple
> warnings.

Yeah, but that comment right now reads like:

  modifying in interrupt context can corrupt fpregs and you should not
  do it but it kinda works, by chance.

Thus encouraging people to keep doing that.

I guess "but appear to work" can go and then it is fine.

Thx.
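With that phrase dropped, the comment would read along these lines (a sketch of the agreed wording, not a quote from a posted revision):

	/*
	 * fpregs_lock() only disables preemption (mostly). So modifying state
	 * in an interrupt could screw up some in progress fpregs operation.
	 * Warn about it.
	 */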
diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h
index 503a577814b2..aadc6893dcaa 100644
--- a/arch/x86/include/asm/fpu/api.h
+++ b/arch/x86/include/asm/fpu/api.h
@@ -82,6 +82,15 @@ static inline void fpregs_unlock(void)
 	preempt_enable();
 }
 
+/*
+ * FPU state gets lazily restored before returning to userspace. So when in the
+ * kernel, the valid FPU state may be kept in the buffer. This function will force
+ * restore all the fpu state to the registers early if needed, and lock them from
+ * being automatically saved/restored. Then FPU state can be modified safely in the
+ * registers, before unlocking with fpregs_unlock().
+ */
+void fpregs_lock_and_load(void);
+
 #ifdef CONFIG_X86_DEBUG_FPU
 extern void fpregs_assert_state_consistent(void);
 #else
diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
index dccce58201b7..7317bfd5ea36 100644
--- a/arch/x86/kernel/fpu/core.c
+++ b/arch/x86/kernel/fpu/core.c
@@ -753,6 +753,24 @@ void switch_fpu_return(void)
 }
 EXPORT_SYMBOL_GPL(switch_fpu_return);
 
+void fpregs_lock_and_load(void)
+{
+	/*
+	 * fpregs_lock() only disables preemption (mostly). So modifying state
+	 * in an interrupt could screw up some in progress fpregs operation,
+	 * but appear to work. Warn about it.
+	 */
+	WARN_ON_ONCE(!irq_fpu_usable());
+	WARN_ON_ONCE(current->flags & PF_KTHREAD);
+
+	fpregs_lock();
+
+	fpregs_assert_state_consistent();
+
+	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+		fpregs_restore_userregs();
+}
+
 #ifdef CONFIG_X86_DEBUG_FPU
 /*
  * If current FPU state according to its tracking (loaded FPU context on this
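The "(mostly)" hedge in that comment refers to fpregs_lock() itself: on non-PREEMPT_RT kernels it disables softirqs rather than just preemption, since softirq handlers may also use the FPU. Its definition in <asm/fpu/api.h> is roughly this (paraphrased from kernels of this era):

	static inline void fpregs_lock(void)
	{
		/*
		 * On !RT kernels softirqs can use the FPU too, so block them.
		 * On RT, softirqs run in task context, so disabling preemption
		 * is sufficient.
		 */
		if (!IS_ENABLED(CONFIG_PREEMPT_RT))
			local_bh_disable();
		else
			preempt_disable();
	}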