
[v2,2/2] mm/hmm: add a test for cross device private faults

Message ID: 20220725183615.4118795-3-rcampbell@nvidia.com
State: New
Series: mm/hmm: fault non-owner device private entries

Commit Message

Ralph Campbell July 25, 2022, 6:36 p.m. UTC
Add a simple test case for when hmm_range_fault() is called with the
HMM_PFN_REQ_FAULT flag and a device private PTE is found for a device
other than the hmm_range::dev_private_owner. This should cause the
page to be faulted back to system memory from the other device and the
PFN returned in the output array.
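
[Editor's context, not part of the patch: the caller-side scenario the test exercises looks roughly like the sketch below. It uses the real hmm_range fields and hmm_range_fault() signature, but the function name fault_range and the owner cookie my_owner are placeholders; mmu_interval_notifier setup and mmap locking are omitted for brevity, and this only builds in-tree.]

	/* Hedged sketch of an hmm_range_fault() call where the faulting
	 * device is NOT the owner of a device private page it encounters.
	 * "my_owner" stands in for the cookie this driver stored in its
	 * own pagemap->owner; notifier setup and mmap_read_lock() are
	 * omitted here.
	 */
	#include <linux/hmm.h>

	static int fault_range(unsigned long start, unsigned long npages,
			       unsigned long *pfns, void *my_owner)
	{
		struct hmm_range range = {
			.start = start,
			.end = start + npages * PAGE_SIZE,
			.hmm_pfns = pfns,
			.default_flags = HMM_PFN_REQ_FAULT,
			.dev_private_owner = my_owner,
		};

		/* With patch 1 applied, hitting a device private PTE owned
		 * by a different device no longer fails: the page is
		 * migrated back to system memory and its PFN is written
		 * into pfns[].
		 */
		return hmm_range_fault(&range);
	}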

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
 tools/testing/selftests/vm/hmm-tests.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

Comments

Alistair Popple July 26, 2022, 1:38 a.m. UTC | #1
Reviewed-by: Alistair Popple <apopple@nvidia.com>

Ralph Campbell <rcampbell@nvidia.com> writes:

> Add a simple test case for when hmm_range_fault() is called with the
> HMM_PFN_REQ_FAULT flag and a device private PTE is found for a device
> other than the hmm_range::dev_private_owner. This should cause the
> page to be faulted back to system memory from the other device and the
> PFN returned in the output array.
>
> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> ---
>  tools/testing/selftests/vm/hmm-tests.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
> index 203323967b50..a5ce7cc2e7aa 100644
> --- a/tools/testing/selftests/vm/hmm-tests.c
> +++ b/tools/testing/selftests/vm/hmm-tests.c
> @@ -1520,9 +1520,19 @@ TEST_F(hmm2, double_map)
>  	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
>  		ASSERT_EQ(ptr[i], i);
>
> -	/* Punch a hole after the first page address. */
> -	ret = munmap(buffer->ptr + self->page_size, self->page_size);
> +	/* Migrate pages to device 1 and try to read from device 0. */
> +	ret = hmm_dmirror_cmd(self->fd1, HMM_DMIRROR_MIGRATE, buffer, npages);
> +	ASSERT_EQ(ret, 0);
> +	ASSERT_EQ(buffer->cpages, npages);
> +
> +	ret = hmm_dmirror_cmd(self->fd0, HMM_DMIRROR_READ, buffer, npages);
>  	ASSERT_EQ(ret, 0);
> +	ASSERT_EQ(buffer->cpages, npages);
> +	ASSERT_EQ(buffer->faults, 1);
> +
> +	/* Check what device 0 read. */
> +	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
> +		ASSERT_EQ(ptr[i], i);
>
>  	hmm_buffer_free(buffer);
>  }
John Hubbard July 26, 2022, 9:03 p.m. UTC | #2
On 7/25/22 11:36, Ralph Campbell wrote:
> Add a simple test case for when hmm_range_fault() is called with the
> HMM_PFN_REQ_FAULT flag and a device private PTE is found for a device
> other than the hmm_range::dev_private_owner. This should cause the
> page to be faulted back to system memory from the other device and the
> PFN returned in the output array.
> 
> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> ---
>   tools/testing/selftests/vm/hmm-tests.c | 14 ++++++++++++--
>   1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
> index 203323967b50..a5ce7cc2e7aa 100644
> --- a/tools/testing/selftests/vm/hmm-tests.c
> +++ b/tools/testing/selftests/vm/hmm-tests.c
> @@ -1520,9 +1520,19 @@ TEST_F(hmm2, double_map)
>   	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
>   		ASSERT_EQ(ptr[i], i);
>   
> -	/* Punch a hole after the first page address. */
> -	ret = munmap(buffer->ptr + self->page_size, self->page_size);

If this removal was intentional, then it should be mentioned in the
commit log.

> +	/* Migrate pages to device 1 and try to read from device 0. */
> +	ret = hmm_dmirror_cmd(self->fd1, HMM_DMIRROR_MIGRATE, buffer, npages);
> +	ASSERT_EQ(ret, 0);
> +	ASSERT_EQ(buffer->cpages, npages);
> +
> +	ret = hmm_dmirror_cmd(self->fd0, HMM_DMIRROR_READ, buffer, npages);
>   	ASSERT_EQ(ret, 0);
> +	ASSERT_EQ(buffer->cpages, npages);
> +	ASSERT_EQ(buffer->faults, 1);
> +
> +	/* Check what device 0 read. */
> +	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
> +		ASSERT_EQ(ptr[i], i);

I'm assuming that your testing shows that this fails without patch 1,
and succeeds with patch 1 applied? Apologies for such an obvious
question... :)

>   
>   	hmm_buffer_free(buffer);
>   }

thanks,
Ralph Campbell July 26, 2022, 9:14 p.m. UTC | #3
On 7/26/22 14:03, John Hubbard wrote:
> On 7/25/22 11:36, Ralph Campbell wrote:
>> Add a simple test case for when hmm_range_fault() is called with the
>> HMM_PFN_REQ_FAULT flag and a device private PTE is found for a device
>> other than the hmm_range::dev_private_owner. This should cause the
>> page to be faulted back to system memory from the other device and the
>> PFN returned in the output array.
>>
>> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
>> ---
>>   tools/testing/selftests/vm/hmm-tests.c | 14 ++++++++++++--
>>   1 file changed, 12 insertions(+), 2 deletions(-)
>>
>> diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
>> index 203323967b50..a5ce7cc2e7aa 100644
>> --- a/tools/testing/selftests/vm/hmm-tests.c
>> +++ b/tools/testing/selftests/vm/hmm-tests.c
>> @@ -1520,9 +1520,19 @@ TEST_F(hmm2, double_map)
>>       for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
>>           ASSERT_EQ(ptr[i], i);
>>
>> -    /* Punch a hole after the first page address. */
>> -    ret = munmap(buffer->ptr + self->page_size, self->page_size);
>
> If this removal was intentional, then it should be mentioned in the
> commit log.

Yes, it does nothing; it was probably a copy & paste error.
I'll update the description and send a v3.

>
>> +    /* Migrate pages to device 1 and try to read from device 0. */
>> +    ret = hmm_dmirror_cmd(self->fd1, HMM_DMIRROR_MIGRATE, buffer, npages);
>> +    ASSERT_EQ(ret, 0);
>> +    ASSERT_EQ(buffer->cpages, npages);
>> +
>> +    ret = hmm_dmirror_cmd(self->fd0, HMM_DMIRROR_READ, buffer, npages);
>>       ASSERT_EQ(ret, 0);
>> +    ASSERT_EQ(buffer->cpages, npages);
>> +    ASSERT_EQ(buffer->faults, 1);
>> +
>> +    /* Check what device 0 read. */
>> +    for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
>> +        ASSERT_EQ(ptr[i], i);
>
> I'm assuming that your testing shows that this fails without patch 1,
> and succeeds with patch 1 applied? Apologies for such an obvious
> question... :)

Yes. Without the patch, the ASSERT_EQ(ret, 0) would trigger.
With the patch, ASSERT_EQ(buffer->faults, 1) verifies that the pages
were faulted in from device 1 when device 0 tries to read them.

Patch

diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
index 203323967b50..a5ce7cc2e7aa 100644
--- a/tools/testing/selftests/vm/hmm-tests.c
+++ b/tools/testing/selftests/vm/hmm-tests.c
@@ -1520,9 +1520,19 @@ TEST_F(hmm2, double_map)
 	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
 		ASSERT_EQ(ptr[i], i);
 
-	/* Punch a hole after the first page address. */
-	ret = munmap(buffer->ptr + self->page_size, self->page_size);
+	/* Migrate pages to device 1 and try to read from device 0. */
+	ret = hmm_dmirror_cmd(self->fd1, HMM_DMIRROR_MIGRATE, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	ret = hmm_dmirror_cmd(self->fd0, HMM_DMIRROR_READ, buffer, npages);
 	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+	ASSERT_EQ(buffer->faults, 1);
+
+	/* Check what device 0 read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
 
 	hmm_buffer_free(buffer);
 }