
[v2] mm: add access/dirty bit on numa page fault

Message ID 20220317065033.2635123-1-maobibo@loongson.cn (mailing list archive)
State New
Series [v2] mm: add access/dirty bit on numa page fault

Commit Message

bibo mao March 17, 2022, 6:50 a.m. UTC
On platforms like x86/arm that support hardware page-table walking, the
access and dirty bits are set by hardware; on platforms without such
hardware support, they are set by software in the next trap.

During a NUMA page fault, the dirty bit can be added to the old pte if
migration fails on a write fault. If migration succeeds, the access bit
can be added to the migrated new pte, and the dirty bit can also be
added on a write fault.

Signed-off-by: Bibo Mao <maobibo@loongson.cn>
---
 mm/memory.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)
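
(For context, on architectures with software-managed access/dirty bits the
follow-up trap typically ends up doing something along the lines of the
sketch below. This is only an illustration of the generic helpers involved,
not code from this patch; the wrapper name fixup_access_dirty() is made up,
and real fault handlers are architecture specific.)

#include <linux/mm.h>
#include <linux/pgtable.h>

/* Sketch: mark the pte young (and dirty on a write) in the next trap. */
static void fixup_access_dirty(struct vm_area_struct *vma,
			       unsigned long address, pte_t *ptep,
			       bool write)
{
	pte_t entry = *ptep;

	entry = pte_mkyoung(entry);		/* record the access */
	if (write)
		entry = pte_mkdirty(entry);	/* record the store */

	/* Only update the MMU state if the bits actually changed. */
	if (ptep_set_access_flags(vma, address, ptep, entry, write))
		update_mmu_cache(vma, address, ptep);
}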

Comments

Matthew Wilcox (Oracle) March 17, 2022, 12:23 p.m. UTC | #1
On Thu, Mar 17, 2022 at 02:50:33AM -0400, Bibo Mao wrote:
> On platforms like x86/arm which supports hw page walking, access
> and dirty bit is set by hw, however on some platforms without
> such hw functions, access and dirty bit is set by software in
> next trap.
> 
> During numa page fault, dirty bit can be added for old pte if
> fail to migrate on write fault. And if it succeeds to migrate,
> access bit can be added for migrated new pte, also dirty bit
> can be added for write fault.

Is this a correctness problem, in which case this will need to be
backported, or is this a performance problem, in which case can you
share some numbers?

> Signed-off-by: Bibo Mao <maobibo@loongson.cn>
> ---
>  mm/memory.c | 21 ++++++++++++++++++++-
>  1 file changed, 20 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index c125c4969913..65813bec9c06 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4404,6 +4404,22 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  	if (migrate_misplaced_page(page, vma, target_nid)) {
>  		page_nid = target_nid;
>  		flags |= TNF_MIGRATED;
> +
> +		/*
> +		 * update pte entry with access bit, and dirty bit for
> +		 * write fault
> +		 */
> +		spin_lock(vmf->ptl);
> +		pte = *vmf->pte;
> +		pte = pte_mkyoung(pte);
> +		if (was_writable) {
> +			pte = pte_mkwrite(pte);
> +			if (vmf->flags & FAULT_FLAG_WRITE)
> +				pte = pte_mkdirty(pte);
> +		}
> +		set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
> +		update_mmu_cache(vma, vmf->address, vmf->pte);
> +		pte_unmap_unlock(vmf->pte, vmf->ptl);
>  	} else {
>  		flags |= TNF_MIGRATE_FAIL;
>  		vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
> @@ -4427,8 +4443,11 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
>  	pte = pte_modify(old_pte, vma->vm_page_prot);
>  	pte = pte_mkyoung(pte);
> -	if (was_writable)
> +	if (was_writable) {
>  		pte = pte_mkwrite(pte);
> +		if (vmf->flags & FAULT_FLAG_WRITE)
> +			pte = pte_mkdirty(pte);
> +	}
>  	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
>  	update_mmu_cache(vma, vmf->address, vmf->pte);
>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
> -- 
> 2.31.1
> 
>
David Hildenbrand March 17, 2022, 12:32 p.m. UTC | #2
On 17.03.22 07:50, Bibo Mao wrote:
> On platforms like x86/arm which supports hw page walking, access
> and dirty bit is set by hw, however on some platforms without
> such hw functions, access and dirty bit is set by software in
> next trap.
> 
> During numa page fault, dirty bit can be added for old pte if
> fail to migrate on write fault. And if it succeeds to migrate,
> access bit can be added for migrated new pte, also dirty bit
> can be added for write fault.
> 
> Signed-off-by: Bibo Mao <maobibo@loongson.cn>
> ---
>  mm/memory.c | 21 ++++++++++++++++++++-
>  1 file changed, 20 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index c125c4969913..65813bec9c06 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4404,6 +4404,22 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  	if (migrate_misplaced_page(page, vma, target_nid)) {
>  		page_nid = target_nid;
>  		flags |= TNF_MIGRATED;
> +
> +		/*
> +		 * update pte entry with access bit, and dirty bit for
> +		 * write fault
> +		 */
> +		spin_lock(vmf->ptl);

Ehm, are you sure? We did a pte_unmap_unlock(), so you most certainly need a

vmf->pte = pte_offset_map(vmf->pmd, vmf->address);


Also, don't we need pte_same() checks before we do anything after
dropping the PT lock?
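
(For illustration, the re-check pattern would look roughly like the lines
do_numa_page() already uses at its start -- sketched below, assuming
vmf->orig_pte still holds the entry seen before the earlier unlock and that
"out" is the function's existing exit label:)

	vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
	spin_lock(vmf->ptl);
	/* Bail out if the entry changed while the pte was unmapped. */
	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
		pte_unmap_unlock(vmf->pte, vmf->ptl);
		goto out;
	}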

> +		pte = *vmf->pte;
> +		pte = pte_mkyoung(pte);
> +		if (was_writable) {
> +			pte = pte_mkwrite(pte);
> +			if (vmf->flags & FAULT_FLAG_WRITE)
> +				pte = pte_mkdirty(pte);
> +		}
> +		set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
> +		update_mmu_cache(vma, vmf->address, vmf->pte);
> +		pte_unmap_unlock(vmf->pte, vmf->ptl);
>  	} else {
>  		flags |= TNF_MIGRATE_FAIL;
>  		vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
> @@ -4427,8 +4443,11 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
>  	pte = pte_modify(old_pte, vma->vm_page_prot);
>  	pte = pte_mkyoung(pte);
> -	if (was_writable)
> +	if (was_writable) {
>  		pte = pte_mkwrite(pte);
> +		if (vmf->flags & FAULT_FLAG_WRITE)
> +			pte = pte_mkdirty(pte);
> +	}
>  	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
>  	update_mmu_cache(vma, vmf->address, vmf->pte);
>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
bibo mao March 18, 2022, 1:01 a.m. UTC | #3
On 03/17/2022 08:23 PM, Matthew Wilcox wrote:
> On Thu, Mar 17, 2022 at 02:50:33AM -0400, Bibo Mao wrote:
>> On platforms like x86/arm which supports hw page walking, access
>> and dirty bit is set by hw, however on some platforms without
>> such hw functions, access and dirty bit is set by software in
>> next trap.
>>
>> During numa page fault, dirty bit can be added for old pte if
>> fail to migrate on write fault. And if it succeeds to migrate,
>> access bit can be added for migrated new pte, also dirty bit
>> can be added for write fault.
> 
> Is this a correctness problem, in which case this will need to be
> backported, or is this a performance problem, in which case can you
> share some numbers?
It is only a performance issue, and there is no obvious performance
improvement for general workloads on my side; I have not tested it
with a microbenchmark.
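
(If numbers were wanted, a minimal microbenchmark could be a sketch like
the one below: it just writes to every page of a large anonymous mapping in
a loop so that NUMA-balancing faults, and on software-managed architectures
the follow-up access/dirty faults, dominate the runtime. This is purely
hypothetical test code, not part of the patch; run it with NUMA balancing
enabled and compare the loop time with and without the change.)

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define SZ (1UL << 30)	/* 1 GiB of anonymous memory */

int main(void)
{
	char *buf = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	struct timespec t0, t1;

	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 1, SZ);			/* populate all pages */

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int pass = 0; pass < 100; pass++)
		for (size_t i = 0; i < SZ; i += 4096)
			buf[i] = (char)pass;	/* exercise the write-fault path */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.3f s\n", (t1.tv_sec - t0.tv_sec) +
			   (t1.tv_nsec - t0.tv_nsec) / 1e9);
	return 0;
}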

> 
>> Signed-off-by: Bibo Mao <maobibo@loongson.cn>
>> ---
>>  mm/memory.c | 21 ++++++++++++++++++++-
>>  1 file changed, 20 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index c125c4969913..65813bec9c06 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -4404,6 +4404,22 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>>  	if (migrate_misplaced_page(page, vma, target_nid)) {
>>  		page_nid = target_nid;
>>  		flags |= TNF_MIGRATED;
>> +
>> +		/*
>> +		 * update pte entry with access bit, and dirty bit for
>> +		 * write fault
>> +		 */
>> +		spin_lock(vmf->ptl);
>> +		pte = *vmf->pte;
>> +		pte = pte_mkyoung(pte);
>> +		if (was_writable) {
>> +			pte = pte_mkwrite(pte);
>> +			if (vmf->flags & FAULT_FLAG_WRITE)
>> +				pte = pte_mkdirty(pte);
>> +		}
>> +		set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
>> +		update_mmu_cache(vma, vmf->address, vmf->pte);
>> +		pte_unmap_unlock(vmf->pte, vmf->ptl);
>>  	} else {
>>  		flags |= TNF_MIGRATE_FAIL;
>>  		vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
>> @@ -4427,8 +4443,11 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>>  	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
>>  	pte = pte_modify(old_pte, vma->vm_page_prot);
>>  	pte = pte_mkyoung(pte);
>> -	if (was_writable)
>> +	if (was_writable) {
>>  		pte = pte_mkwrite(pte);
>> +		if (vmf->flags & FAULT_FLAG_WRITE)
>> +			pte = pte_mkdirty(pte);
>> +	}
>>  	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
>>  	update_mmu_cache(vma, vmf->address, vmf->pte);
>>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
>> -- 
>> 2.31.1
>>
>>
bibo mao March 18, 2022, 1:17 a.m. UTC | #4
On 03/17/2022 08:32 PM, David Hildenbrand wrote:
> On 17.03.22 07:50, Bibo Mao wrote:
>> On platforms like x86/arm which supports hw page walking, access
>> and dirty bit is set by hw, however on some platforms without
>> such hw functions, access and dirty bit is set by software in
>> next trap.
>>
>> During numa page fault, dirty bit can be added for old pte if
>> fail to migrate on write fault. And if it succeeds to migrate,
>> access bit can be added for migrated new pte, also dirty bit
>> can be added for write fault.
>>
>> Signed-off-by: Bibo Mao <maobibo@loongson.cn>
>> ---
>>  mm/memory.c | 21 ++++++++++++++++++++-
>>  1 file changed, 20 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index c125c4969913..65813bec9c06 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -4404,6 +4404,22 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>>  	if (migrate_misplaced_page(page, vma, target_nid)) {
>>  		page_nid = target_nid;
>>  		flags |= TNF_MIGRATED;
>> +
>> +		/*
>> +		 * update pte entry with access bit, and dirty bit for
>> +		 * write fault
>> +		 */
>> +		spin_lock(vmf->ptl);
> 
> Ehm, are you sure? We did a pte_unmap_unlock(), so you most certainly need a
> 
> vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
yes, we need to probe the pte entry again after pte_unmap_unlock().
> 
> 
> Also, don't we need pte_same() checks before we do anything after
> dropping the PT lock?
I do not think so. If the page migration succeeds, the pte entry will
have been changed as well, so it will be different.

regards
bibo,mao

> 
>> +		pte = *vmf->pte;
>> +		pte = pte_mkyoung(pte);
>> +		if (was_writable) {
>> +			pte = pte_mkwrite(pte);
>> +			if (vmf->flags & FAULT_FLAG_WRITE)
>> +				pte = pte_mkdirty(pte);
>> +		}
>> +		set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
>> +		update_mmu_cache(vma, vmf->address, vmf->pte);
>> +		pte_unmap_unlock(vmf->pte, vmf->ptl);
>>  	} else {
>>  		flags |= TNF_MIGRATE_FAIL;
>>  		vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
>> @@ -4427,8 +4443,11 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>>  	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
>>  	pte = pte_modify(old_pte, vma->vm_page_prot);
>>  	pte = pte_mkyoung(pte);
>> -	if (was_writable)
>> +	if (was_writable) {
>>  		pte = pte_mkwrite(pte);
>> +		if (vmf->flags & FAULT_FLAG_WRITE)
>> +			pte = pte_mkdirty(pte);
>> +	}
>>  	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
>>  	update_mmu_cache(vma, vmf->address, vmf->pte);
>>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
> 
>
Matthew Wilcox (Oracle) March 18, 2022, 1:46 a.m. UTC | #5
On Fri, Mar 18, 2022 at 09:01:32AM +0800, maobibo wrote:
> > Is this a correctness problem, in which case this will need to be
> > backported, or is this a performance problem, in which case can you
> > share some numbers?
> It is only performance issue, and there is no obvious performance
> improvement for general workloads on my hand, but I do not test
> it on microbenchmark.

... if there's no performance improvement, why should we apply this
patch?  Confused.
bibo mao March 18, 2022, 2:17 a.m. UTC | #6
On 03/18/2022 09:46 AM, Matthew Wilcox wrote:
> On Fri, Mar 18, 2022 at 09:01:32AM +0800, maobibo wrote:
>>> Is this a correctness problem, in which case this will need to be
>>> backported, or is this a performance problem, in which case can you
>>> share some numbers?
>> It is only performance issue, and there is no obvious performance
>> improvement for general workloads on my hand, but I do not test
>> it on microbenchmark.
> 
> ... if there's no performance improvement, why should we apply this
> patch?  Confused.
> 
It is not obvious from the workload's point of view, but it actually
saves one TLB miss on platforms without hardware page-table walking.
David Hildenbrand March 18, 2022, 8:21 a.m. UTC | #7
On 18.03.22 02:17, maobibo wrote:
> 
> 
> On 03/17/2022 08:32 PM, David Hildenbrand wrote:
>> On 17.03.22 07:50, Bibo Mao wrote:
>>> On platforms like x86/arm which supports hw page walking, access
>>> and dirty bit is set by hw, however on some platforms without
>>> such hw functions, access and dirty bit is set by software in
>>> next trap.
>>>
>>> During numa page fault, dirty bit can be added for old pte if
>>> fail to migrate on write fault. And if it succeeds to migrate,
>>> access bit can be added for migrated new pte, also dirty bit
>>> can be added for write fault.
>>>
>>> Signed-off-by: Bibo Mao <maobibo@loongson.cn>
>>> ---
>>>  mm/memory.c | 21 ++++++++++++++++++++-
>>>  1 file changed, 20 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index c125c4969913..65813bec9c06 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -4404,6 +4404,22 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>>>  	if (migrate_misplaced_page(page, vma, target_nid)) {
>>>  		page_nid = target_nid;
>>>  		flags |= TNF_MIGRATED;
>>> +
>>> +		/*
>>> +		 * update pte entry with access bit, and dirty bit for
>>> +		 * write fault
>>> +		 */
>>> +		spin_lock(vmf->ptl);
>>
>> Ehm, are you sure? We did a pte_unmap_unlock(), so you most certainly need a
>>
>> vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
> yes, we need probe pte entry again after function pte_unmap_unlock().
>>
>>
>> Also, don't we need pte_same() checks before we do anything after
>> dropping the PT lock?
> I do not think so. If page succeeds in migration, pte entry should be changed
> also, it should be different.
> 

We have to be very careful here. Page migration succeeded, so I do
wonder if you have to do anything on this branch *at all*. I'd assume
that page migration took care of that already.

See, when only holding the mmap lock in read mode, there are absolutely
no guarantees what will happen after dropping the PT lock. The page
could get unmapped and another page could get mapped. The page could
have been mapped R/O in the meantime.

So I'm pretty sure that unconditionally modifying the PTE here is wrong.
bibo mao March 19, 2022, 2:58 a.m. UTC | #8
On 03/18/2022 04:21 PM, David Hildenbrand wrote:
> On 18.03.22 02:17, maobibo wrote:
>>
>>
>> On 03/17/2022 08:32 PM, David Hildenbrand wrote:
>>> On 17.03.22 07:50, Bibo Mao wrote:
>>>> On platforms like x86/arm which supports hw page walking, access
>>>> and dirty bit is set by hw, however on some platforms without
>>>> such hw functions, access and dirty bit is set by software in
>>>> next trap.
>>>>
>>>> During numa page fault, dirty bit can be added for old pte if
>>>> fail to migrate on write fault. And if it succeeds to migrate,
>>>> access bit can be added for migrated new pte, also dirty bit
>>>> can be added for write fault.
>>>>
>>>> Signed-off-by: Bibo Mao <maobibo@loongson.cn>
>>>> ---
>>>>  mm/memory.c | 21 ++++++++++++++++++++-
>>>>  1 file changed, 20 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index c125c4969913..65813bec9c06 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -4404,6 +4404,22 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>>>>  	if (migrate_misplaced_page(page, vma, target_nid)) {
>>>>  		page_nid = target_nid;
>>>>  		flags |= TNF_MIGRATED;
>>>> +
>>>> +		/*
>>>> +		 * update pte entry with access bit, and dirty bit for
>>>> +		 * write fault
>>>> +		 */
>>>> +		spin_lock(vmf->ptl);
>>>
>>> Ehm, are you sure? We did a pte_unmap_unlock(), so you most certainly need a
>>>
>>> vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
>> yes, we need probe pte entry again after function pte_unmap_unlock().
>>>
>>>
>>> Also, don't we need pte_same() checks before we do anything after
>>> dropping the PT lock?
>> I do not think so. If page succeeds in migration, pte entry should be changed
>> also, it should be different.
>>
> 
> We have to be very careful here. Page migration succeeded, so I do
> wonder if you have to do anything on this branch *at all*. I'd assume
> that page migration took care of that already.
> 
> See, when only holding the mmap lock in read mode, there are absolutely
> no guarantees what will happen after dropping the PT lock. The page
> could get unmapped and another page could get mapped. The page could
> have been mapped R/O in the meantime.
> 
> So I'm pretty sure that unconditionally modifying the PTE here is wrong.
yes, there will be a problem with changing the pte directly here, thanks
for your guidance :)
It should be done in the page migration flow; I will check the page
migration code.

Patch

diff --git a/mm/memory.c b/mm/memory.c
index c125c4969913..65813bec9c06 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4404,6 +4404,22 @@  static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	if (migrate_misplaced_page(page, vma, target_nid)) {
 		page_nid = target_nid;
 		flags |= TNF_MIGRATED;
+
+		/*
+		 * update pte entry with access bit, and dirty bit for
+		 * write fault
+		 */
+		spin_lock(vmf->ptl);
+		pte = *vmf->pte;
+		pte = pte_mkyoung(pte);
+		if (was_writable) {
+			pte = pte_mkwrite(pte);
+			if (vmf->flags & FAULT_FLAG_WRITE)
+				pte = pte_mkdirty(pte);
+		}
+		set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
+		update_mmu_cache(vma, vmf->address, vmf->pte);
+		pte_unmap_unlock(vmf->pte, vmf->ptl);
 	} else {
 		flags |= TNF_MIGRATE_FAIL;
 		vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
@@ -4427,8 +4443,11 @@  static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
 	pte = pte_modify(old_pte, vma->vm_page_prot);
 	pte = pte_mkyoung(pte);
-	if (was_writable)
+	if (was_writable) {
 		pte = pte_mkwrite(pte);
+		if (vmf->flags & FAULT_FLAG_WRITE)
+			pte = pte_mkdirty(pte);
+	}
 	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
 	update_mmu_cache(vma, vmf->address, vmf->pte);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);