
[v2] mm/memremap: Save a few cycles in get_dev_pagemap()

Message ID: 9ef1562a1975371360f3e263856e9f1c5749b656.1662136782.git.christophe.jaillet@wanadoo.fr
State: New

Commit Message

Christophe JAILLET Sept. 2, 2022, 4:39 p.m. UTC
Use 'percpu_ref_tryget_live_rcu()' instead of 'percpu_ref_tryget_live()' to
save a few cycles when the RCU read lock is known to be already held, as it
is here: the call sits between rcu_read_lock() and rcu_read_unlock().

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
---
Matthew Wilcox <willy@infradead.org> commented on v1 that it is just a slow
path... but it is also just an easy patch :)

If it is considered useless, let me know and I'll drop it from my WIP list.
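
For reference, a condensed sketch of the two helpers (paraphrased from
include/linux/percpu-refcount.h, not quoted verbatim; see the tree for the
exact code):

static inline bool percpu_ref_tryget_live_rcu(struct percpu_ref *ref)
{
	unsigned long __percpu *percpu_count;
	bool ret = false;

	/* The caller must already hold the RCU read lock. */
	WARN_ON_ONCE(!rcu_read_lock_held());

	if (likely(__ref_is_percpu(ref, &percpu_count))) {
		/* Per-CPU fast path. */
		this_cpu_inc(*percpu_count);
		ret = true;
	} else if (!(ref->percpu_count_ptr & __PERCPU_REF_DEAD)) {
		/* Atomic-mode fallback while the ref is still live. */
		ret = atomic_long_inc_not_zero(&ref->data->count);
	}

	return ret;
}

static inline bool percpu_ref_tryget_live(struct percpu_ref *ref)
{
	bool ret = false;

	rcu_read_lock();
	ret = percpu_ref_tryget_live_rcu(ref);
	rcu_read_unlock();

	return ret;
}

Since the slow path in get_dev_pagemap() already runs inside its own
rcu_read_lock()/rcu_read_unlock() pair, calling percpu_ref_tryget_live()
there nests a second, redundant critical section; the _rcu variant skips
it, which is where the few cycles come from.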

Changes in v2:
  * (no code change)
  * sync with latest -next

v1:
  https://lore.kernel.org/all/b4a47154877853cc64be3a35dcfd594d40cc2bce.1635975283.git.christophe.jaillet@wanadoo.fr/
---
 mm/memremap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Patch

diff --git a/mm/memremap.c b/mm/memremap.c
index 58b20c3c300b..25029a474d30 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -454,7 +454,7 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
 	/* fall back to slow path lookup */
 	rcu_read_lock();
 	pgmap = xa_load(&pgmap_array, PHYS_PFN(phys));
-	if (pgmap && !percpu_ref_tryget_live(&pgmap->ref))
+	if (pgmap && !percpu_ref_tryget_live_rcu(&pgmap->ref))
 		pgmap = NULL;
 	rcu_read_unlock();