
[02/23] x86/fpu: Remove fpu->initialized usage in __fpu__restore_sig()

Message ID 20181107194858.9380-3-bigeasy@linutronix.de (mailing list archive)
State New, archived
Series [01/23] x86/fpu: Use ULL for shift in xfeature_uncompacted_offset()

Commit Message

Sebastian Andrzej Siewior Nov. 7, 2018, 7:48 p.m. UTC
This is a preparation for the removal of the ->initialized member in the
fpu struct.
__fpu__restore_sig() is deactivating the FPU via fpu__drop() and then
setting manually ->initialized followed by fpu__restore(). The result is
that it is possible to manipulate fpu->state and the state of registers
won't be saved/restore on a context switch which would overwrite state.

Don't access the fpu->state while the content is read from user space
and examined / sanitized. Use a temporary buffer kmalloc() buffer for
the preparation of the FPU registers and once the state is considered
okay, load it. Should something go wrong, return with an error and
without altering the original FPU registers.

The removal of "fpu__initialize()" is a nop because fpu->initialized is
already set for the user task.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 arch/x86/include/asm/fpu/signal.h |  2 +-
 arch/x86/kernel/fpu/regset.c      |  5 ++--
 arch/x86/kernel/fpu/signal.c      | 41 ++++++++++++-------------------
 3 files changed, 19 insertions(+), 29 deletions(-)
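
In outline, the new ia32_fxstate path reads the user-supplied state into a
temporary, 64-byte-aligned kmalloc() buffer and touches the live FPU
registers only after the state has been validated. A simplified sketch
distilled from the patch below (declarations and error handling abbreviated,
not the verbatim hunk):

	struct user_i387_ia32_struct env;
	union fpregs_state *state;
	void *tmp;
	int err;

	/* The XSAVE area needs 64-byte alignment: over-allocate and align. */
	tmp = kmalloc(sizeof(*state) + fpu_kernel_xstate_size + 64, GFP_KERNEL);
	if (!tmp)
		return -ENOMEM;
	state = PTR_ALIGN(tmp, 64);

	/* Copy and validate the user buffer without touching fpu->state. */
	err = copy_user_to_xstate(&state->xsave, buf_fx);
	if (err || __copy_from_user(&env, buf, sizeof(env))) {
		err = -1;
	} else {
		sanitize_restored_xstate(state, &env, xfeatures, fx_only);
		/* The state is sane: only now load it into the registers. */
		copy_kernel_to_fpregs(state);
	}
	kfree(tmp);
	return err;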

Comments

Borislav Petkov Nov. 8, 2018, 2:57 p.m. UTC | #1
On Wed, Nov 07, 2018 at 08:48:37PM +0100, Sebastian Andrzej Siewior wrote:
> This is a preparation for the removal of the ->initialized member in the
> fpu struct.
> __fpu__restore_sig() is deactivating the FPU via fpu__drop() and then
> setting manually ->initialized followed by fpu__restore(). The result is
> that it is possible to manipulate fpu->state and the state of registers
> won't be saved/restore on a context switch which would overwrite state.

		restored

> 
> Don't access the fpu->state while the content is read from user space
> and examined / sanitized. Use a temporary buffer kmalloc() buffer for

one "buffer" too many.

More importantly, what I'm missing here is more detailed explanation
about how that manipulation can happen. Especially since the comment
over fpu__drop() you're removing below is claiming the exact opposite.
AFAICT.

Yeah, FPU code has always been nasty and tricky to follow so I think
we'd need to have this stuff explained in much more detail.

Thx.
Sebastian Andrzej Siewior Nov. 9, 2018, 5:35 p.m. UTC | #2
On 2018-11-08 15:57:21 [+0100], Borislav Petkov wrote:
> On Wed, Nov 07, 2018 at 08:48:37PM +0100, Sebastian Andrzej Siewior wrote:
> > This is a preparation for the removal of the ->initialized member in the
> > fpu struct.
> > __fpu__restore_sig() is deactivating the FPU via fpu__drop() and then
> > setting manually ->initialized followed by fpu__restore(). The result is
> > that it is possible to manipulate fpu->state and the state of registers
> > won't be saved/restore on a context switch which would overwrite state.
> 
> 		restored
> 
> > 
> > Don't access the fpu->state while the content is read from user space
> > and examined / sanitized. Use a temporary buffer kmalloc() buffer for
> 
> one "buffer" too many.

corrected.

> More importantly, what I'm missing here is more detailed explanation
> about how that manipulation can happen. Especially since the comment
> over fpu__drop() you're removing below is claiming the exact opposite.
> AFAICT.

fpu__drop() sets ->initialized to 0. As a result the context switch
will not save current FPU registers and so _not_ write to fpu->state.
This also means that CPU's FPU register will be random (inherited from
the last context) after the context switch. This is also true for usage
in softirq via kernel_fpu_begin().
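
For reference, the switch-out path keys off ->initialized; roughly
(simplified, trimmed to the relevant check):

	/* context switch: old task is scheduled out (simplified sketch) */
	static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu)
	{
		if (static_cpu_has(X86_FEATURE_FPU) && old_fpu->initialized) {
			/* stash the live registers into old_fpu->state */
			if (!copy_fpregs_to_fpstate(old_fpu))
				old_fpu->last_cpu = -1;
			else
				old_fpu->last_cpu = cpu;
		} else {
			/* ->initialized == 0: fpu->state is left untouched */
			old_fpu->last_cpu = -1;
		}
	}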

The "new" FPU state is prepared in fpu->state and once it is done, it
gets loaded via
  fpu->initialized = 1; // make sure fpu__initialize() in fpu__restore()
                        // is a nop
  fpu__restore();	// Load the registers.

Since I plan to remove the ->initialized member, I don't have the luxury
to play with fpu->state because the "new" content obtained by
copy_from_user() will be overwritten with CPU's current FPU state during
a context switch.
Now with that information, what do you plan to alter? :)

> Yeah, FPU code has always been nasty and tricky to follow so I think
> we'd need to have this stuff explained in much more detail.

Yeah, tell me about it. Now that you made me look into this again, I
spotted this gem:

| __fpu__restore_sig()
…
|        if (ia32_fxstate) {
…
|                 fpu__drop(fpu);
…
|	/* prepare new FPU state in fpu->state */
| 
|                 fpu->initialized = 1;

*BOOM* context switch. ->initialized == 1 is seen, so the switch-out code
stashes the current CPU's FPU state into fpu->state and overwrites what
has been prepared before.

On the switch back to this task, fpu__restore() becomes a "nop": the
registers it loads are the ones that were just saved, not what was
expected / prepared before.

|                 preempt_disable();
|                 fpu__restore(fpu);
|                 preempt_enable();
|

So. The fix would be:
@@ -344,10 +344,10 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
                        sanitize_restored_xstate(tsk, &env, xfeatures, fx_only);
                }
 
+               local_bh_disable();
                fpu->initialized = 1;
-               preempt_disable();
                fpu__restore(fpu);
-               preempt_enable();
+               local_bh_enable();
 
                return err;
        } else {

local_bh_disable() due to possible kernel_fpu_begin() usage in softirq.
How much do we care here about a theoretical race on 32bit anyway? I
don't think anyone complained :) I would have to rebase my queue…
otherwise…
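
To make the softirq part concrete: __kernel_fpu_begin() keys off
->initialized the same way, so a softirq hitting that window clobbers the
prepared state just like a context switch would (simplified):

	void __kernel_fpu_begin(void)
	{
		struct fpu *fpu = &current->thread.fpu;

		kernel_fpu_disable();

		if (fpu->initialized) {
			/* overwrites whatever was prepared in fpu->state */
			copy_fpregs_to_fpstate(fpu);
		} else {
			__cpu_invalidate_fpregs_state();
		}
	}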

> Thx.

Sebastian
Sebastian Andrzej Siewior Nov. 9, 2018, 11:25 p.m. UTC | #3
On 2018-11-09 19:52:02 [+0100], Borislav Petkov wrote:
> On Fri, Nov 09, 2018 at 06:35:21PM +0100, Sebastian Andrzej Siewior wrote:
> > fpu__drop() sets ->initialized to 0. As a result the context switch
> 
> "... the context switch path landing in switch_fpu_prepare()... " is what you
> mean, right?
I mean both. switch_fpu_prepare() while the task is leaving and then
switch_fpu_finish() while the task is coming back. But yes.

> > will not save current FPU registers and so _not_ write to fpu->state.
> > This also means that CPU's FPU register will be random (inherited from
> > the last context)
> 
> You mean, the FPU regs will have random values, yes.
Correct. Same as for kernel threads.

> > after the context switch. This is also true for usage
> > in softirq via kernel_fpu_begin().
> 
> So far so good.
> 
> Except maybe because I'm dense about FPU, I still am missing something.
> 
> You have this path:
> 
> __fpu__restore_sig
> |-> fpu__clear
>  |-> fpu__drop
> 
> and that happens on the sigreturn() path.
> 
> Now, the context switch happens ... when exactly?
> 
> After the sigreturn is done?

Is fpu__clear() correct here? If so, a context switch after
->initialized has been set to 1 wouldn't matter because in the end the
register state is restored from init_fpstate and not from the task's FPU
struct.

> 
> It must be because then you'd get that ->state corruption after
> ->initialized has been cleared.
> 
> Right?

I might have gotten your question wrong. Quote the code and try again,
and I'll do so, too :)

> <snip a bunch of stuff, we'll get back to it later>
> 
> > So. The fix would be:
> > @@ -344,10 +344,10 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
> >                         sanitize_restored_xstate(tsk, &env, xfeatures, fx_only);
> >                 }
> >  
> > +               local_bh_disable();
> >                 fpu->initialized = 1;
> > -               preempt_disable();
> >                 fpu__restore(fpu);
> > -               preempt_enable();
> > +               local_bh_enable();
> >  
> >                 return err;
> >         } else {
> > 
> > local_bh_disable() due to possible kernel_fpu_begin() usage in softirq.
> > How much do we care here about a theoretical race on 32bit anyway? I
> > don't think someone complained :) I would have to rebase my queue…
> > otherwise…
> 
> Funny, you should mention that.
> 
> But this very much rings a bell about a very elusive bug we had on
> 32-bit at the time. See attached mbox (yeah, the web archives were crap
> and couldn't find the links so I'm sending you the whole thread).
> 
> And at the time Ingo said that there's something still missing about
> *why* it would happen.
> 
> And I think it is this context switch happening right after the
> sigreturn - *AFAICT* - which would cause this.
> 
> I could very well be off but this smells very similar to your thing.

So, checking out v4.5-rc3-15-g58122bf1d856a, __fpu__restore_sig() looks
something like this:

|	fpu__drop(fpu);
…
|	fpu->fpstate_active = 1;
X
|	if (use_eager_fpu()) {
|		preempt_disable();
|		fpu__restore(fpu);
|		preempt_enable();
|	}

fpu__drop() sets fpstate_active & fpregs_active to 0[¹]. A context switch
at X would _not_ save the current FPU registers and thus not overwrite
what was prepared, because fpregs_active is still zero.
Now, on the switch back to the task, fpstate_active is set, which means
fpu.preload may be true. If so, the switch-in code loads the FPU registers
and sets fpregs_active to 1. Later, fpu__restore() tries the same, and
fpregs_activate() triggers the warning because fpregs_active was already
set to 1.
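
To spell out where the warning fires, roughly as in that tree (simplified):

	static inline void fpregs_activate(struct fpu *fpu)
	{
		WARN_ON_FPU(fpu->fpregs_active);  /* the second activation fires this */
		fpu->fpregs_active = 1;
	}

	/*
	 * switch-in, fpu.preload set:  fpregs_active = 1
	 * fpu__restore() -> fpregs_activate():  fpregs_active already 1 -> *WARN*
	 */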

> Hmmm.
So I just came up with a possible hard-to-trigger case, and a robot
already triggered it a while back. Well, CONFIG_PREEMPT=y is also set
there, so it matches this part of the story. But you connected the dots.

[¹] side note: in my early research it took a while to notice that
    fpstate_active and fpregs_active were two different things. My brain
    used fp.*_active for matching. It also added to my confusion that
    those were later renamed and removed…

Sebastian

Patch

diff --git a/arch/x86/include/asm/fpu/signal.h b/arch/x86/include/asm/fpu/signal.h
index 44bbc39a57b30..7fb516b6893a8 100644
--- a/arch/x86/include/asm/fpu/signal.h
+++ b/arch/x86/include/asm/fpu/signal.h
@@ -22,7 +22,7 @@  int ia32_setup_frame(int sig, struct ksignal *ksig,
 
 extern void convert_from_fxsr(struct user_i387_ia32_struct *env,
 			      struct task_struct *tsk);
-extern void convert_to_fxsr(struct task_struct *tsk,
+extern void convert_to_fxsr(struct fxregs_state *fxsave,
 			    const struct user_i387_ia32_struct *env);
 
 unsigned long
diff --git a/arch/x86/kernel/fpu/regset.c b/arch/x86/kernel/fpu/regset.c
index bc02f5144b958..5dbc099178a88 100644
--- a/arch/x86/kernel/fpu/regset.c
+++ b/arch/x86/kernel/fpu/regset.c
@@ -269,11 +269,10 @@  convert_from_fxsr(struct user_i387_ia32_struct *env, struct task_struct *tsk)
 		memcpy(&to[i], &from[i], sizeof(to[0]));
 }
 
-void convert_to_fxsr(struct task_struct *tsk,
+void convert_to_fxsr(struct fxregs_state *fxsave,
 		     const struct user_i387_ia32_struct *env)
 
 {
-	struct fxregs_state *fxsave = &tsk->thread.fpu.state.fxsave;
 	struct _fpreg *from = (struct _fpreg *) &env->st_space[0];
 	struct _fpxreg *to = (struct _fpxreg *) &fxsave->st_space[0];
 	int i;
@@ -350,7 +349,7 @@  int fpregs_set(struct task_struct *target, const struct user_regset *regset,
 
 	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &env, 0, -1);
 	if (!ret)
-		convert_to_fxsr(target, &env);
+		convert_to_fxsr(&target->thread.fpu.state.fxsave, &env);
 
 	/*
 	 * update the header bit in the xsave header, indicating the
diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
index 61a949d84dfa5..5777ee0c32fed 100644
--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/kernel/fpu/signal.c
@@ -207,11 +207,11 @@  int copy_fpstate_to_sigframe(void __user *buf, void __user *buf_fx, int size)
 }
 
 static inline void
-sanitize_restored_xstate(struct task_struct *tsk,
+sanitize_restored_xstate(union fpregs_state *state,
 			 struct user_i387_ia32_struct *ia32_env,
 			 u64 xfeatures, int fx_only)
 {
-	struct xregs_state *xsave = &tsk->thread.fpu.state.xsave;
+	struct xregs_state *xsave = &state->xsave;
 	struct xstate_header *header = &xsave->header;
 
 	if (use_xsave()) {
@@ -238,7 +238,7 @@  sanitize_restored_xstate(struct task_struct *tsk,
 		 */
 		xsave->i387.mxcsr &= mxcsr_feature_mask;
 
-		convert_to_fxsr(tsk, ia32_env);
+		convert_to_fxsr(&state->fxsave, ia32_env);
 	}
 }
 
@@ -284,8 +284,6 @@  static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 	if (!access_ok(VERIFY_READ, buf, size))
 		return -EACCES;
 
-	fpu__initialize(fpu);
-
 	if (!static_cpu_has(X86_FEATURE_FPU))
 		return fpregs_soft_set(current, NULL,
 				       0, sizeof(struct user_i387_ia32_struct),
@@ -314,41 +312,34 @@  static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 		 * thread's fpu state, reconstruct fxstate from the fsave
 		 * header. Validate and sanitize the copied state.
 		 */
+		union fpregs_state *state;
+		void *tmp;
 		struct user_i387_ia32_struct env;
 		int err = 0;
 
-		/*
-		 * Drop the current fpu which clears fpu->initialized. This ensures
-		 * that any context-switch during the copy of the new state,
-		 * avoids the intermediate state from getting restored/saved.
-		 * Thus avoiding the new restored state from getting corrupted.
-		 * We will be ready to restore/save the state only after
-		 * fpu->initialized is again set.
-		 */
-		fpu__drop(fpu);
+		tmp = kmalloc(sizeof(*state) + fpu_kernel_xstate_size + 64, GFP_KERNEL);
+		if (!tmp)
+			return -ENOMEM;
+		state = PTR_ALIGN(tmp, 64);
 
 		if (using_compacted_format()) {
-			err = copy_user_to_xstate(&fpu->state.xsave, buf_fx);
+			err = copy_user_to_xstate(&state->xsave, buf_fx);
 		} else {
-			err = __copy_from_user(&fpu->state.xsave, buf_fx, state_size);
+			err = __copy_from_user(&state->xsave, buf_fx, state_size);
 
 			if (!err && state_size > offsetof(struct xregs_state, header))
-				err = validate_xstate_header(&fpu->state.xsave.header);
+				err = validate_xstate_header(&state->xsave.header);
 		}
 
 		if (err || __copy_from_user(&env, buf, sizeof(env))) {
-			fpstate_init(&fpu->state);
-			trace_x86_fpu_init_state(fpu);
 			err = -1;
 		} else {
-			sanitize_restored_xstate(tsk, &env, xfeatures, fx_only);
+			sanitize_restored_xstate(state, &env,
+						 xfeatures, fx_only);
+			copy_kernel_to_fpregs(state);
 		}
 
-		fpu->initialized = 1;
-		preempt_disable();
-		fpu__restore(fpu);
-		preempt_enable();
-
+		kfree(tmp);
 		return err;
 	} else {
 		/*