diff mbox series

[02/16] mm/huge_memory: access vm_page_prot with READ_ONCE in remove_migration_pmd

Message ID 20220622170627.19786-3-linmiaohe@huawei.com (mailing list archive)
State New
Headers show
Series A few cleanup patches for huge_memory | expand

Commit Message

Miaohe Lin June 22, 2022, 5:06 p.m. UTC
vma->vm_page_prot is read locklessly from the rmap_walk; it may be updated
concurrently. Use READ_ONCE() to prevent the risk of reading intermediate
values.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Kirill A . Shutemov June 23, 2022, 3:14 a.m. UTC | #1
On Thu, Jun 23, 2022 at 01:06:13AM +0800, Miaohe Lin wrote:
> vma->vm_page_prot is read lockless from the rmap_walk, it may be updated
> concurrently. Using READ_ONCE to prevent the risk of reading intermediate
> values.

Have you checked all other vm_page_prot reads that they hold mmap_lock?

I think the right fix would be to provide a helper to read vm_page_prot
which does READ_ONCE() and use it everywhere. This seems more sustainable.
Miaohe Lin June 23, 2022, 12:03 p.m. UTC | #2
On 2022/6/23 11:14, Kirill A. Shutemov wrote:
> On Thu, Jun 23, 2022 at 01:06:13AM +0800, Miaohe Lin wrote:
>> vma->vm_page_prot is read lockless from the rmap_walk, it may be updated
>> concurrently. Using READ_ONCE to prevent the risk of reading intermediate
>> values.
> 
> Have you checked all other vm_page_prot reads that they hold mmap_lock?

I took a glance at them when I made this patch.

> 
> I think the right fix would be to provide a helper to read vm_page_prot
> which does READ_ONCE() and use it everywhere. This seems more sustainable.
> 

This patch is inspired by the below commit:
  6d2329f8872f ("mm: vm_page_prot: update with WRITE_ONCE/READ_ONCE")

It changed all the places that needed READ_ONCE, but remove_migration_pmd
was missed because it was introduced later. It looks fine to add a helper to
read vm_page_prot which does READ_ONCE(), but READ_ONCE is unneeded while the
mmap_lock is held. So might it be a little overkill to add a helper, given
that the helper is needed only when mmap_lock is not held?

Thanks.
Zach O'Keefe June 24, 2022, 6:40 p.m. UTC | #3
On 23 Jun 20:03, Miaohe Lin wrote:
> On 2022/6/23 11:14, Kirill A. Shutemov wrote:
> > On Thu, Jun 23, 2022 at 01:06:13AM +0800, Miaohe Lin wrote:
> >> vma->vm_page_prot is read lockless from the rmap_walk, it may be updated
> >> concurrently. Using READ_ONCE to prevent the risk of reading intermediate
> >> values.
> > 
> > Have you checked all other vm_page_prot reads that they hold mmap_lock?
> 
> I took a glance when I made this patch.
> 
> > 
> > I think the right fix would be to provide a helper to read vm_page_prot
> > which does READ_ONCE() and use it everywhere. This seems more sustainable.
> > 
> 
> This patch is inspired from the below commit
>   6d2329f8872f ("mm: vm_page_prot: update with WRITE_ONCE/READ_ONCE")
> 
> It changed all the places that need to use READ_ONCE. But remove_migration_pmd
> is missed due to it's introduced later. It looks fine to add a helper to read
> vm_page_prot which does READ_ONCE() but READ_ONCE is unneeded while under the
> mmap_lock, so might it be a little overkill to add a helper because the helper
> is used iff mmap_lock is not held?
> 
> Thanks.

IMO adding the READ_ONCE() as proposed is fine. Adding a helper to be called
depending on locking context still requires the caller / dev to know what the
locking context is - so I don't think it provides much benefit.
Miaohe Lin June 25, 2022, 3:17 a.m. UTC | #4
On 2022/6/25 2:40, Zach O'Keefe wrote:
> On 23 Jun 20:03, Miaohe Lin wrote:
>> On 2022/6/23 11:14, Kirill A. Shutemov wrote:
>>> On Thu, Jun 23, 2022 at 01:06:13AM +0800, Miaohe Lin wrote:
>>>> vma->vm_page_prot is read lockless from the rmap_walk, it may be updated
>>>> concurrently. Using READ_ONCE to prevent the risk of reading intermediate
>>>> values.
>>>
>>> Have you checked all other vm_page_prot reads that they hold mmap_lock?
>>
>> I took a glance when I made this patch.
>>
>>>
>>> I think the right fix would be to provide a helper to read vm_page_prot
>>> which does READ_ONCE() and use it everywhere. This seems more sustainable.
>>>
>>
>> This patch is inspired from the below commit
>>   6d2329f8872f ("mm: vm_page_prot: update with WRITE_ONCE/READ_ONCE")
>>
>> It changed all the places that need to use READ_ONCE. But remove_migration_pmd
>> is missed due to it's introduced later. It looks fine to add a helper to read
>> vm_page_prot which does READ_ONCE() but READ_ONCE is unneeded while under the
>> mmap_lock, so might it be a little overkill to add a helper because the helper
>> is used iff mmap_lock is not held?
>>
>> Thanks.
> 
> IMO adding the READ_ONCE() as proposed in fine. Adding a helper to be called
> dependent on locking context still requires the caller / dev to know what the
> locking context is - so I don't think it provides much benefit.

I tend to agree with Zach. Thanks!


Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fd6da053a13e..83fb6c3442ff 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3202,7 +3202,7 @@  void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 
 	entry = pmd_to_swp_entry(*pvmw->pmd);
 	get_page(new);
-	pmde = pmd_mkold(mk_huge_pmd(new, vma->vm_page_prot));
+	pmde = pmd_mkold(mk_huge_pmd(new, READ_ONCE(vma->vm_page_prot)));
 	if (pmd_swp_soft_dirty(*pvmw->pmd))
 		pmde = pmd_mksoft_dirty(pmde);
 	if (is_writable_migration_entry(entry))