KVM: selftests: Double check on the current CPU in rseq_test

Message ID 20220714080642.3376618-1-gshan@redhat.com (mailing list archive)
State New, archived
Series KVM: selftests: Double check on the current CPU in rseq_test

Commit Message

Gavin Shan July 14, 2022, 8:06 a.m. UTC
In rseq_test, two threads are created: 'main' and 'migration_worker'.
The test assumes that if the 'migration_worker' thread observes no
migration, then the 'main' thread wasn't migrated either.
Unfortunately, the assumption isn't true: the 'main' thread can be
migrated from one CPU to another between the calls to sched_getcpu()
and READ_ONCE(__rseq.cpu_id). The following assert is eventually
raised because of the mismatched CPU numbers.

The issue can be reproduced occasionally on arm64 systems.

  host# uname -r
  5.19.0-rc6-gavin+
  host# cat /proc/cpuinfo | grep processor | tail -n 1
  processor    : 223
  host# pwd
  /home/gavin/sandbox/linux.main/tools/testing/selftests/kvm
  host# for i in `seq 1 100`;   \
        do echo "--------> $i"; \
        ./rseq_test; sleep 3;   \
        done
  --------> 1
  --------> 2
  --------> 3
  --------> 4
  --------> 5
  --------> 6
  ==== Test Assertion Failure ====
    rseq_test.c:265: rseq_cpu == cpu
    pid=3925 tid=3925 errno=4 - Interrupted system call
       1  0x0000000000401963: main at rseq_test.c:265 (discriminator 2)
       2  0x0000ffffb044affb: ?? ??:0
       3  0x0000ffffb044b0c7: ?? ??:0
       4  0x0000000000401a6f: _start at ??:?
    rseq CPU = 4, sched CPU = 27

This fixes the issue by double-checking the current CPU after the
call to READ_ONCE(__rseq.cpu_id) and restarting the check if the two
consecutive CPU numbers aren't equal.

Fixes: 61e52f1630f5 ("KVM: selftests: Add a test for KVM_RUN+rseq to detect task migration bugs")
Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 tools/testing/selftests/kvm/rseq_test.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

Comments

Paolo Bonzini July 14, 2022, 1:52 p.m. UTC | #1
On 7/14/22 10:06, Gavin Shan wrote:
> In rseq_test, two threads are created: 'main' and 'migration_worker'.
> The test assumes that if the 'migration_worker' thread observes no
> migration, then the 'main' thread wasn't migrated either.
> Unfortunately, the assumption isn't true: the 'main' thread can be
> migrated from one CPU to another between the calls to sched_getcpu()
> and READ_ONCE(__rseq.cpu_id). The following assert is eventually
> raised because of the mismatched CPU numbers.
> 
> The issue can be reproduced occasionally on arm64 systems.
> 
>    host# uname -r
>    5.19.0-rc6-gavin+
>    host# cat /proc/cpuinfo | grep processor | tail -n 1
>    processor    : 223
>    host# pwd
>    /home/gavin/sandbox/linux.main/tools/testing/selftests/kvm
>    host# for i in `seq 1 100`;   \
>          do echo "--------> $i"; \
>          ./rseq_test; sleep 3;   \
>          done
>    --------> 1
>    --------> 2
>    --------> 3
>    --------> 4
>    --------> 5
>    --------> 6
>    ==== Test Assertion Failure ====
>      rseq_test.c:265: rseq_cpu == cpu
>      pid=3925 tid=3925 errno=4 - Interrupted system call
>         1  0x0000000000401963: main at rseq_test.c:265 (discriminator 2)
>         2  0x0000ffffb044affb: ?? ??:0
>         3  0x0000ffffb044b0c7: ?? ??:0
>         4  0x0000000000401a6f: _start at ??:?
>      rseq CPU = 4, sched CPU = 27
> 
> This fixes the issue by double-checking the current CPU after the
> call to READ_ONCE(__rseq.cpu_id) and restarting the check if the two
> consecutive CPU numbers aren't equal.
> 
> Fixes: 61e52f1630f5 ("KVM: selftests: Add a test for KVM_RUN+rseq to detect task migration bugs")
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>   tools/testing/selftests/kvm/rseq_test.c | 5 +++--
>   1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/rseq_test.c b/tools/testing/selftests/kvm/rseq_test.c
> index 4158da0da2bb..74709dd9f5b2 100644
> --- a/tools/testing/selftests/kvm/rseq_test.c
> +++ b/tools/testing/selftests/kvm/rseq_test.c
> @@ -207,7 +207,7 @@ int main(int argc, char *argv[])
>   {
>   	int r, i, snapshot;
>   	struct kvm_vm *vm;
> -	u32 cpu, rseq_cpu;
> +	u32 cpu, rseq_cpu, last_cpu;
>   
>   	/* Tell stdout not to buffer its content */
>   	setbuf(stdout, NULL);
> @@ -259,8 +259,9 @@ int main(int argc, char *argv[])
>   			smp_rmb();
>   			cpu = sched_getcpu();
>   			rseq_cpu = READ_ONCE(__rseq.cpu_id);
> +			last_cpu = sched_getcpu();
>   			smp_rmb();
> -		} while (snapshot != atomic_read(&seq_cnt));
> +		} while (snapshot != atomic_read(&seq_cnt) || cpu != last_cpu);
>   
>   		TEST_ASSERT(rseq_cpu == cpu,
>   			    "rseq CPU = %d, sched CPU = %d\n", rseq_cpu, cpu);

Queued for -rc, thanks.

Paolo
Paolo Bonzini July 14, 2022, 2:03 p.m. UTC | #2
On 7/14/22 10:06, Gavin Shan wrote:
> In rseq_test, two threads are created: 'main' and 'migration_worker'.
> The test assumes that if the 'migration_worker' thread observes no
> migration, then the 'main' thread wasn't migrated either.
> Unfortunately, the assumption isn't true: the 'main' thread can be
> migrated from one CPU to another between the calls to sched_getcpu()
> and READ_ONCE(__rseq.cpu_id). The following assert is eventually
> raised because of the mismatched CPU numbers.
> 
> The issue can be reproduced occasionally on arm64 systems.

Hmm, this does not seem to be the correct fix - the threads are already
synchronizing using seq_cnt, like this:

	migration			main
	----------------------		--------------------------------
	seq_cnt = 1
	smp_wmb()
					snapshot = 0
					smp_rmb()
					cpu = sched_getcpu() reads 23
	sched_setaffinity()
					rseq_cpu = __rseq.cpu_id reads 35
					smp_rmb()
					snapshot != seq_cnt -> retry
	smp_wmb()
	seq_cnt = 2

sched_setaffinity() is guaranteed to block until the task is enqueued on 
an allowed CPU.
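
In code form, the worker side of that handshake is roughly the
following (paraphrasing migration_worker() in rseq_test.c; treat the
exact lines as a sketch rather than a verbatim quote):

	atomic_inc(&seq_cnt);	/* odd: a migration window is open */
	smp_wmb();
	/* blocks until the task is enqueued on an allowed CPU */
	r = sched_setaffinity(0, sizeof(allowed_mask), &allowed_mask);
	smp_wmb();
	atomic_inc(&seq_cnt);	/* even: the window is closed again */

The reader's snapshot drops bit 0 of seq_cnt, so an odd value (window
open) always forces a retry.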

Can you check that smp_rmb() and smp_wmb() generate correct instructions 
on arm64?

Paolo

>    host# uname -r
>    5.19.0-rc6-gavin+
>    host# cat /proc/cpuinfo | grep processor | tail -n 1
>    processor    : 223
>    host# pwd
>    /home/gavin/sandbox/linux.main/tools/testing/selftests/kvm
>    host# for i in `seq 1 100`;   \
>          do echo "--------> $i"; \
>          ./rseq_test; sleep 3;   \
>          done
>    --------> 1
>    --------> 2
>    --------> 3
>    --------> 4
>    --------> 5
>    --------> 6
>    ==== Test Assertion Failure ====
>      rseq_test.c:265: rseq_cpu == cpu
>      pid=3925 tid=3925 errno=4 - Interrupted system call
>         1  0x0000000000401963: main at rseq_test.c:265 (discriminator 2)
>         2  0x0000ffffb044affb: ?? ??:0
>         3  0x0000ffffb044b0c7: ?? ??:0
>         4  0x0000000000401a6f: _start at ??:?
>      rseq CPU = 4, sched CPU = 27
> 
> This fixes the issue by double-checking the current CPU after the
> call to READ_ONCE(__rseq.cpu_id) and restarting the check if the two
> consecutive CPU numbers aren't equal.
> 
> Fixes: 61e52f1630f5 ("KVM: selftests: Add a test for KVM_RUN+rseq to detect task migration bugs")
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>   tools/testing/selftests/kvm/rseq_test.c | 5 +++--
>   1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/rseq_test.c b/tools/testing/selftests/kvm/rseq_test.c
> index 4158da0da2bb..74709dd9f5b2 100644
> --- a/tools/testing/selftests/kvm/rseq_test.c
> +++ b/tools/testing/selftests/kvm/rseq_test.c
> @@ -207,7 +207,7 @@ int main(int argc, char *argv[])
>   {
>   	int r, i, snapshot;
>   	struct kvm_vm *vm;
> -	u32 cpu, rseq_cpu;
> +	u32 cpu, rseq_cpu, last_cpu;
>   
>   	/* Tell stdout not to buffer its content */
>   	setbuf(stdout, NULL);
> @@ -259,8 +259,9 @@ int main(int argc, char *argv[])
>   			smp_rmb();
>   			cpu = sched_getcpu();
>   			rseq_cpu = READ_ONCE(__rseq.cpu_id);
> +			last_cpu = sched_getcpu();
>   			smp_rmb();
> -		} while (snapshot != atomic_read(&seq_cnt));
> +		} while (snapshot != atomic_read(&seq_cnt) || cpu != last_cpu);
>   
>   		TEST_ASSERT(rseq_cpu == cpu,
>   			    "rseq CPU = %d, sched CPU = %d\n", rseq_cpu, cpu);
Sean Christopherson July 14, 2022, 3:35 p.m. UTC | #3
On Thu, Jul 14, 2022, Paolo Bonzini wrote:
> On 7/14/22 10:06, Gavin Shan wrote:
> > In rseq_test, two threads are created: 'main' and 'migration_worker'.
> > The test assumes that if the 'migration_worker' thread observes no
> > migration, then the 'main' thread wasn't migrated either.
> > Unfortunately, the assumption isn't true: the 'main' thread can be
> > migrated from one CPU to another between the calls to sched_getcpu()
> > and READ_ONCE(__rseq.cpu_id). The following assert is eventually
> > raised because of the mismatched CPU numbers.
> > 
> > The issue can be reproduced occasionally on arm64 systems.
> 
> Hmm, this does not seem to be the correct fix - the threads are already
> synchronizing using seq_cnt, like this:
> 
> 	migration			main
> 	----------------------		--------------------------------
> 	seq_cnt = 1
> 	smp_wmb()
> 					snapshot = 0
> 					smp_rmb()
> 					cpu = sched_getcpu() reads 23
> 	sched_setaffinity()
> 					rseq_cpu = __rseq.cpu_id reads 35
> 					smp_rmb()
> 					snapshot != seq_cnt -> retry
> 	smp_wmb()
> 	seq_cnt = 2
> 
> sched_setaffinity() is guaranteed to block until the task is enqueued on an
> allowed CPU.

Yes, and retrying could suppress detection of kernel bugs that this test is intended
to catch.

> Can you check that smp_rmb() and smp_wmb() generate correct instructions on
> arm64?

That seems like the most likely scenario (or a kernel bug); I distinctly
remember the barriers provided by tools/ being rather bizarre.
Paolo Bonzini July 14, 2022, 3:42 p.m. UTC | #4
On 7/14/22 17:35, Sean Christopherson wrote:
>> Can you check that smp_rmb() and smp_wmb() generate correct instructions on
>> arm64?
> 
> That seems like the most likely scenario (or a kernel bug); I distinctly
> remember the barriers provided by tools/ being rather bizarre.

Maybe we should bite the bullet and use C11 atomics in tools/.  I've long
planned an article "C11 atomics for kernel programmers", especially
because this will also be an issue when Rust gets into the mix...
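
For instance, the reader side of this test might become something like
the sketch below (an illustration of one possible mapping, not a
proposed patch; seq_cnt and rseq_cpu_id are stand-ins for the test's
globals):

#define _GNU_SOURCE
#include <stdatomic.h>
#include <sched.h>

extern atomic_uint seq_cnt;                 /* bumped around migrations */
extern volatile unsigned int rseq_cpu_id;   /* models __rseq.cpu_id */

static void snapshot_cpus(unsigned int *cpu, unsigned int *rseq_cpu)
{
	unsigned int snapshot;

	do {
		/*
		 * Acquire-load the counter; drop bit 0 so an odd value
		 * (migration in flight) forces a mismatch and a retry.
		 */
		snapshot = atomic_load_explicit(&seq_cnt,
						memory_order_acquire) & ~1u;
		*cpu = sched_getcpu();
		*rseq_cpu = rseq_cpu_id;
		/* order the two reads above before the re-check below */
		atomic_thread_fence(memory_order_acquire);
	} while (snapshot != atomic_load_explicit(&seq_cnt,
						  memory_order_relaxed));
}

The writer would pair with this by wrapping its two seq_cnt increments
in memory_order_release operations (or release fences).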

Paolo
Gavin Shan July 15, 2022, 2:21 a.m. UTC | #5
Hi Paolo and Sean,

On 7/15/22 1:35 AM, Sean Christopherson wrote:
> On Thu, Jul 14, 2022, Paolo Bonzini wrote:
>> On 7/14/22 10:06, Gavin Shan wrote:
>>> In rseq_test, two threads are created: 'main' and 'migration_worker'.
>>> The test assumes that if the 'migration_worker' thread observes no
>>> migration, then the 'main' thread wasn't migrated either.
>>> Unfortunately, the assumption isn't true: the 'main' thread can be
>>> migrated from one CPU to another between the calls to sched_getcpu()
>>> and READ_ONCE(__rseq.cpu_id). The following assert is eventually
>>> raised because of the mismatched CPU numbers.
>>>
>>> The issue can be reproduced occasionally on arm64 systems.
>>
>> Hmm, this does not seem to be the correct fix - the threads are already
>> synchronizing using seq_cnt, like this:
>>
>> 	migration			main
>> 	----------------------		--------------------------------
>> 	seq_cnt = 1
>> 	smp_wmb()
>> 					snapshot = 0
>> 					smp_rmb()
>> 					cpu = sched_getcpu() reads 23
>> 	sched_setaffinity()
>> 					rseq_cpu = __rseq.cpu_id reads 35
>> 					smp_rmb()
>> 					snapshot != seq_cnt -> retry
>> 	smp_wmb()
>> 	seq_cnt = 2
>>
>> sched_setaffinity() is guaranteed to block until the task is enqueued on an
>> allowed CPU.
> 
> Yes, and retrying could suppress detection of kernel bugs that this test is intended
> to catch.
> 

Well, I don't think migration_worker() does the correct thing, if I'm
understanding correctly. The intention seems to be for the 'migration' thread
to force migration of the 'main' thread?  If that is the case, I don't think
the following function call has the correct parameters.

     r = sched_setaffinity(0, sizeof(allowed_mask), &allowed_mask);

     it should be something like:

     r = sched_setaffinity(getpid(), sizeof(allowed_mask), &allowed_mask);

If we're using sched_setaffinity(0, ...) in the 'migration' thread, the CPU
affinity of the 'main' thread won't be affected. That means the 'main' thread
can be migrated from one CPU to another at any time, even at the following
point:

     int main(...)
     {
           :
           /*
            * Migration can happen immediately after sched_getcpu(). If
            * the CPU affinity of the 'main' thread were pinned to one
            * particular CPU, as the 'migration' thread is supposed to
            * ensure, there would be no migration here.
            */
           cpu = sched_getcpu();
           rseq_cpu = READ_ONCE(__rseq.cpu_id);
           :
     }

So I think the correct fix is to use sched_setaffinity(getpid(), ...)?
Please refer to the manpage.

    https://man7.org/linux/man-pages/man2/sched_setaffinity.2.html
    'If pid is zero, then the calling thread is used'

>> Can you check that smp_rmb() and smp_wmb() generate correct instructions on
>> arm64?
> 
> That seems like the most likely scenario (or a kernel bug); I distinctly
> remember the barriers provided by tools/ being rather bizarre.
> 

I don't see any problems with smp_rmb() and smp_wmb() in my case. They are
compiled to the correct instructions, as expected.

#define smp_mb()        asm volatile("dmb ish" ::: "memory")
#define smp_wmb()       asm volatile("dmb ishst" ::: "memory")
#define smp_rmb()       asm volatile("dmb ishld" ::: "memory")

--------------

One more experiment with sched_setaffinity(): when I run the following
program, the CPU affinity of the 'main' thread isn't changed until the
correct parameter is used, as in sched_setaffinity(getpid(), ...).

sched_setaffinity(0, ...)
-------------------------
[root@virtlab-arm01 tmp]# ./a
thread_func: cpu=0
main: mask=0x000000ff
main: mask=0x000000ff
main: mask=0x000000ff
main: mask=0x000000ff
main: mask=0x000000ff
main: mask=0x000000ff
main: mask=0x000000ff
main: mask=0x000000ff
main: mask=0x000000ff
thread_func: cpu=1
main: mask=0x000000ff
main: mask=0x000000ff
main: mask=0x000000ff
main: mask=0x000000ff
main: mask=0x000000ff
main: mask=0x000000ff
main: mask=0x000000ff
main: mask=0x000000ff
main: mask=0x000000ff
main: mask=0x000000ff
   :

sched_setaffinity(getpid(), ...)
--------------------------------
thread_func: cpu=198
main: mask=0x00000001
main: mask=0x00000001
main: mask=0x00000001
main: mask=0x00000001
main: mask=0x00000001
main: mask=0x00000001
main: mask=0x00000001
main: mask=0x00000001
main: mask=0x00000001
thread_func: cpu=198
main: mask=0x00000002
main: mask=0x00000002
main: mask=0x00000002
main: mask=0x00000002
main: mask=0x00000002
main: mask=0x00000002
main: mask=0x00000002
main: mask=0x00000002
main: mask=0x00000002
main: mask=0x00000002
   :

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <pthread.h>
#include <sched.h>

#define NR_CPUS	8
static volatile int thread_exit = 0;	/* polled by main()'s loop */

static void *thread_func(void *data)
{
	cpu_set_t allowed_mask;
	int ret, i;

	for (i = 0; i < NR_CPUS; i++) {
		CPU_ZERO(&allowed_mask);
		CPU_SET(i, &allowed_mask);
#if 1
		sched_setaffinity(0, sizeof(allowed_mask), &allowed_mask);
#else
		sched_setaffinity(getpid(), sizeof(allowed_mask), &allowed_mask);
#endif
		fprintf(stdout, "%s: cpu=%d\n", __func__, sched_getcpu());

		sleep(1);
	}

	thread_exit = 1;
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t thread;
	cpu_set_t allowed_mask;
	int mask, i;

	pthread_create(&thread, NULL, thread_func, NULL);

	while (!thread_exit) {
		usleep(100000);

		mask = 0;
		sched_getaffinity(0, sizeof(allowed_mask), &allowed_mask);
		for (i = 0; i < NR_CPUS; i++) {
			if (CPU_ISSET(i, &allowed_mask))
				mask |= (1 << i);
		}

		fprintf(stdout, "%s: mask=0x%08x\n", __func__, mask);
	}

	return 0;
}


Thanks,
Gavin
Sean Christopherson July 15, 2022, 2:32 p.m. UTC | #6
On Fri, Jul 15, 2022, Gavin Shan wrote:
> Hi Paolo and Sean,
> 
> On 7/15/22 1:35 AM, Sean Christopherson wrote:
> > On Thu, Jul 14, 2022, Paolo Bonzini wrote:
> Well, I don't think migration_worker() does the correct thing, if I'm
> understanding correctly. The intention seems to be for the 'migration' thread
> to force migration of the 'main' thread?  If that is the case, I don't think
> the following function call has the correct parameters.
> 
>     r = sched_setaffinity(0, sizeof(allowed_mask), &allowed_mask);
> 
>     it should be something like:
> 
>     r = sched_setaffinity(getpid(), sizeof(allowed_mask), &allowed_mask);
> 
> If we're using sched_setaffinity(0, ...) in the 'migration' thread, the CPU
> affinity of the 'main' thread won't be affected. That means the 'main' thread
> can be migrated from one CPU to another at any time, even at the following
> point:
> 
>     int main(...)
>     {
>           :
>           /*
>            * Migration can happen immediately after sched_getcpu(). If
>            * the CPU affinity of the 'main' thread were pinned to one
>            * particular CPU, as the 'migration' thread is supposed to
>            * ensure, there would be no migration here.
>            */
>           cpu = sched_getcpu();
>           rseq_cpu = READ_ONCE(__rseq.cpu_id);
>           :
>     }
> 
> So I think the correct fix is to use sched_setaffinity(getpid(), ...)?
> Please refer to the manpage.
> 
>    https://man7.org/linux/man-pages/man2/sched_setaffinity.2.html
>    'If pid is zero, then the calling thread is used'

Oof, and more explicitly the rest of that sentence clarifies that the result of
getpid() will target the main thread (I assume "main" means thread group leader).

   Specifying pid as 0 will set the attribute for the calling thread, and passing
   the value returned from a call to getpid(2) will set the attribute for the main
   thread of the thread group.

I'm guessing my test worked (in that it reproduced the bug) by virtue of the
scheduler trying to colocate all threads in the process.

In my defense, the die.net copy of the manpages quite clearly uses "process"[1],
but that was fixed in the manpages in 2013[2]!?!!?  So I guess the takeaway is
to use only the official manpages.

Anyways, for the code, my preference would be to snapshot gettid() in main() before
spawning the migration worker.  Same result, but I would rather the test explicitly
target the thread doing rseq instead of relying on (a) getpid() targeting only the
main thread and (b) the main thread always being the rseq thread.  E.g. if for some
reason a future patch moves the rseq code to its own worker thread, then getpid()
would be incorrect.

Thanks for figuring this out!

[1] https://linux.die.net/man/2/sched_setaffinity
[2] 6a7fcf3cc ("sched_setaffinity.2: Clarify that these system calls affect a per-thread attribute")
Gavin Shan July 16, 2022, 2:46 p.m. UTC | #7
Hi Sean,

On 7/16/22 12:32 AM, Sean Christopherson wrote:
> On Fri, Jul 15, 2022, Gavin Shan wrote:
>> On 7/15/22 1:35 AM, Sean Christopherson wrote:
>>> On Thu, Jul 14, 2022, Paolo Bonzini wrote:
>> Well, I don't think migration_worker() does the correct thing, if I'm
>> understanding correctly. The intention seems to be for the 'migration' thread
>> to force migration of the 'main' thread?  If that is the case, I don't think
>> the following function call has the correct parameters.
>>
>>      r = sched_setaffinity(0, sizeof(allowed_mask), &allowed_mask);
>>
>>      it should be something like:
>>
>>      r = sched_setaffinity(getpid(), sizeof(allowed_mask), &allowed_mask);
>>
>> If we're using sched_setaffinity(0, ...) in the 'migration' thread, the CPU
>> affinity of the 'main' thread won't be affected. That means the 'main' thread
>> can be migrated from one CPU to another at any time, even at the following
>> point:
>>
>>      int main(...)
>>      {
>>            :
>>            /*
>>             * Migration can happen immediately after sched_getcpu(). If
>>             * the CPU affinity of the 'main' thread were pinned to one
>>             * particular CPU, as the 'migration' thread is supposed to
>>             * ensure, there would be no migration here.
>>             */
>>            cpu = sched_getcpu();
>>            rseq_cpu = READ_ONCE(__rseq.cpu_id);
>>            :
>>      }
>>
>> So I think the correct fix is to use sched_setaffinity(getpid(), ...)?
>> Please refer to the manpage.
>>
>>     https://man7.org/linux/man-pages/man2/sched_setaffinity.2.html
>>     'If pid is zero, then the calling thread is used'
> 
> Oof, and more explicitly the rest of that sentence clarifies that the result of
> getpid() will target the main thread (I assume "main" means thread group leader).
> 
>     Specifying pid as 0 will set the attribute for the calling thread, and passing
>     the value returned from a call to getpid(2) will set the attribute for the main
>     thread of the thread group.
> 
> I'm guessing my test worked (in that it reproduced the bug) by virtue of the
> scheduler trying to colocate all threads in the process.
> 
> In my defense, the die.net copy of the manpages quite clearly uses "process"[1],
> but that was fixed in the manpages in 2013[2]!?!!?  So I guess the takeaway is
> to use only the official manpages.
> 
> Anyways, for the code, my preference would be to snapshot gettid() in main() before
> spawning the migration worker.  Same result, but I would rather the test explicitly
> target the thread doing rseq instead of relying on (a) getpid() targeting only the
> main thread and (b) the main thread always being the rseq thread.  E.g. if for some
> reason a future patch moves the rseq code to its own worker thread, then getpid()
> would be incorrect.
> 
> Thanks for figuring this out!
> 
> [1] https://linux.die.net/man/2/sched_setaffinity
> [2] 6a7fcf3cc ("sched_setaffinity.2: Clarify that these system calls affect a per-thread attribute")
> 

Thanks for your confirmation. The suggested approach, caching the tid of the
thread group leader in advance, makes sense to me. The code has been modified
accordingly in the v2 patch, which was just posted. Please help review it
when you get a chance.

[PATCH v2] KVM: selftests: Fix target thread to be migrated in rseq_test
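
The v2 approach looks roughly like the following standalone demo, in
the style of the test program above (a sketch only: names like
rseq_tid and thread_func are illustrative, and the actual v2 patch may
differ):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/types.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <pthread.h>
#include <sched.h>

#define NR_CPUS	8

static pid_t rseq_tid;	/* tid of the thread running the rseq checks */

static void *thread_func(void *data)
{
	cpu_set_t allowed_mask;
	int i;

	for (i = 0; i < NR_CPUS; i++) {
		CPU_ZERO(&allowed_mask);
		CPU_SET(i, &allowed_mask);
		/* explicitly target the rseq thread, not the caller */
		sched_setaffinity(rseq_tid, sizeof(allowed_mask),
				  &allowed_mask);
		sleep(1);
	}

	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t thread;
	int i;

	/* snapshot the tid before spawning the migration worker */
	rseq_tid = syscall(SYS_gettid);

	pthread_create(&thread, NULL, thread_func, NULL);
	for (i = 0; i < NR_CPUS * 10; i++) {
		fprintf(stdout, "%s: cpu=%d\n", __func__, sched_getcpu());
		usleep(100000);
	}

	pthread_join(thread, NULL);
	return 0;
}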

Thanks,
Gavin

Patch

diff --git a/tools/testing/selftests/kvm/rseq_test.c b/tools/testing/selftests/kvm/rseq_test.c
index 4158da0da2bb..74709dd9f5b2 100644
--- a/tools/testing/selftests/kvm/rseq_test.c
+++ b/tools/testing/selftests/kvm/rseq_test.c
@@ -207,7 +207,7 @@ int main(int argc, char *argv[])
 {
 	int r, i, snapshot;
 	struct kvm_vm *vm;
-	u32 cpu, rseq_cpu;
+	u32 cpu, rseq_cpu, last_cpu;
 
 	/* Tell stdout not to buffer its content */
 	setbuf(stdout, NULL);
@@ -259,8 +259,9 @@ int main(int argc, char *argv[])
 			smp_rmb();
 			cpu = sched_getcpu();
 			rseq_cpu = READ_ONCE(__rseq.cpu_id);
+			last_cpu = sched_getcpu();
 			smp_rmb();
-		} while (snapshot != atomic_read(&seq_cnt));
+		} while (snapshot != atomic_read(&seq_cnt) || cpu != last_cpu);
 
 		TEST_ASSERT(rseq_cpu == cpu,
 			    "rseq CPU = %d, sched CPU = %d\n", rseq_cpu, cpu);