mm: convert partially_mapped set/clear operations to be atomic

Message ID 20241212135447.3530047-1-usamaarif642@gmail.com (mailing list archive)
State New
Series mm: convert partially_mapped set/clear operations to be atomic

Commit Message

Usama Arif Dec. 12, 2024, 1:54 p.m. UTC
Other page flags in the 2nd page, like PG_hwpoison and
PG_anon_exclusive, can get modified concurrently. Hence, the
partially_mapped flag needs to be changed atomically.

Fixes: 8422acdc97ed ("mm: introduce a pageflag for partially mapped folios")
Reported-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
 include/linux/page-flags.h | 12 ++----------
 mm/huge_memory.c           |  8 ++++----
 2 files changed, 6 insertions(+), 14 deletions(-)
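
To make the hazard concrete, here is a minimal userspace sketch (not kernel code;
the bit names, thread bodies and iteration count are made up for the demo) of how
a plain read-modify-write of a shared flags word can wipe out a bit that another
thread sets atomically at the same time. This is the situation the non-atomic
__folio_set_partially_mapped()/__folio_clear_partially_mapped() could get into
against concurrent PG_hwpoison or PG_anon_exclusive updates on the second page.
Build with something like: gcc -O2 -pthread demo.c

/*
 * Illustrative userspace model, NOT kernel code. "flags" stands in for the
 * second page's flags word; the FAKE_* bit names are invented for the demo.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define FAKE_PG_PARTIALLY_MAPPED	(1UL << 0)
#define FAKE_PG_HWPOISON		(1UL << 1)
#define ITERATIONS			1000000

static atomic_ulong flags;
static atomic_long lost;

/* Models __folio_set/__folio_clear_partially_mapped(): each operation is a
 * plain (non-atomic) load + store of the shared word. */
static void *nonatomic_flipper(void *arg)
{
	for (int i = 0; i < ITERATIONS; i++) {
		unsigned long old;

		old = atomic_load_explicit(&flags, memory_order_relaxed);
		atomic_store_explicit(&flags, old | FAKE_PG_PARTIALLY_MAPPED,
				      memory_order_relaxed);

		old = atomic_load_explicit(&flags, memory_order_relaxed);
		atomic_store_explicit(&flags, old & ~FAKE_PG_PARTIALLY_MAPPED,
				      memory_order_relaxed);
	}
	return NULL;
}

/* Models a concurrent atomic setter of a different bit in the same word. */
static void *atomic_setter(void *arg)
{
	for (int i = 0; i < ITERATIONS; i++) {
		atomic_fetch_or_explicit(&flags, FAKE_PG_HWPOISON,
					 memory_order_relaxed);
		/* If the non-atomic RMW raced with us, our bit is gone. */
		if (!(atomic_load_explicit(&flags, memory_order_relaxed) &
		      FAKE_PG_HWPOISON))
			atomic_fetch_add_explicit(&lost, 1,
						  memory_order_relaxed);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, nonatomic_flipper, NULL);
	pthread_create(&b, NULL, atomic_setter, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* Typically non-zero: atomically set bits were wiped by the racy RMW. */
	printf("lost hwpoison updates: %ld\n", atomic_load(&lost));
	return 0;
}

With the patch applied, the partially_mapped side uses atomic set_bit()/clear_bit()
semantics as well, so the lost-update window above no longer exists.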

Comments

David Hildenbrand Dec. 12, 2024, 2:39 p.m. UTC | #1
On 12.12.24 14:54, Usama Arif wrote:
> Other page flags in the 2nd page, like PG_hwpoison and
> PG_anon_exclusive, can get modified concurrently. Hence, the
> partially_mapped flag needs to be changed atomically.
> 
> Fixes: 8422acdc97ed ("mm: introduce a pageflag for partially mapped folios")
> Reported-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>

Fortunately we have the test-before-set checks already in place.

Acked-by: David Hildenbrand <david@redhat.com>
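
For context, the "test-before-set" checks referred to are the guarded updates
visible in the hunks below; condensed from deferred_split_folio() (set side),
the flag is only flipped under split_queue_lock and only when its state
actually changes, so switching to atomic set/clear does not disturb the stat
accounting:

	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
	if (partially_mapped) {
		if (!folio_test_partially_mapped(folio)) {
			folio_set_partially_mapped(folio);
			count_mthp_stat(folio_order(folio),
					MTHP_STAT_SPLIT_DEFERRED);
		}
	}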

> ---
>   include/linux/page-flags.h | 12 ++----------
>   mm/huge_memory.c           |  8 ++++----
>   2 files changed, 6 insertions(+), 14 deletions(-)
> 
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index cf46ac720802..691506bdf2c5 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -862,18 +862,10 @@ static inline void ClearPageCompound(struct page *page)
>   	ClearPageHead(page);
>   }
>   FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
> -FOLIO_TEST_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> -/*
> - * PG_partially_mapped is protected by deferred_split split_queue_lock,
> - * so its safe to use non-atomic set/clear.
> - */
> -__FOLIO_SET_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> -__FOLIO_CLEAR_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> +FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
>   #else
>   FOLIO_FLAG_FALSE(large_rmappable)
> -FOLIO_TEST_FLAG_FALSE(partially_mapped)
> -__FOLIO_SET_FLAG_NOOP(partially_mapped)
> -__FOLIO_CLEAR_FLAG_NOOP(partially_mapped)
> +FOLIO_FLAG_FALSE(partially_mapped)
>   #endif
>   
>   #define PG_head_mask ((1UL << PG_head))
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 2da5520bfe24..120cd2cdc614 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3583,7 +3583,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>   		    !list_empty(&folio->_deferred_list)) {
>   			ds_queue->split_queue_len--;
>   			if (folio_test_partially_mapped(folio)) {
> -				__folio_clear_partially_mapped(folio);
> +				folio_clear_partially_mapped(folio);
>   				mod_mthp_stat(folio_order(folio),
>   					      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
>   			}
> @@ -3695,7 +3695,7 @@ bool __folio_unqueue_deferred_split(struct folio *folio)
>   	if (!list_empty(&folio->_deferred_list)) {
>   		ds_queue->split_queue_len--;
>   		if (folio_test_partially_mapped(folio)) {
> -			__folio_clear_partially_mapped(folio);
> +			folio_clear_partially_mapped(folio);
>   			mod_mthp_stat(folio_order(folio),
>   				      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
>   		}
> @@ -3739,7 +3739,7 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
>   	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
>   	if (partially_mapped) {
>   		if (!folio_test_partially_mapped(folio)) {
> -			__folio_set_partially_mapped(folio);
> +			folio_set_partially_mapped(folio);
>   			if (folio_test_pmd_mappable(folio))
>   				count_vm_event(THP_DEFERRED_SPLIT_PAGE);
>   			count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
> @@ -3832,7 +3832,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
>   		} else {
>   			/* We lost race with folio_put() */
>   			if (folio_test_partially_mapped(folio)) {
> -				__folio_clear_partially_mapped(folio);
> +				folio_clear_partially_mapped(folio);
>   				mod_mthp_stat(folio_order(folio),
>   					      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
>   			}

Johannes Weiner Dec. 12, 2024, 5:22 p.m. UTC | #2
On Thu, Dec 12, 2024 at 01:54:47PM +0000, Usama Arif wrote:
> Other page flags in the 2nd page, like PG_hwpoison and
> PG_anon_exclusive, can get modified concurrently. Hence, the
> partially_mapped flag needs to be changed atomically.

Can you please include a description of possible consequences if this
happens? I realize this was spotted through review, but it's still
important to break down how this might impact users.

> Fixes: 8422acdc97ed ("mm: introduce a pageflag for partially mapped folios")

This is in an already released kernel. Do we need this:

Cc: stable@vger.kernel.org

> Reported-by: David Hildenbrand <david@redhat.com>

Link: https://lore.kernel.org/all/e53b04ad-1827-43a2-a1ab-864c7efecf6e@redhat.com/

> Signed-off-by: Usama Arif <usamaarif642@gmail.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

Roman Gushchin Dec. 12, 2024, 6:07 p.m. UTC | #3
On Thu, Dec 12, 2024 at 01:54:47PM +0000, Usama Arif wrote:
> Other page flags in the 2nd page, like PG_hwpoison and
> PG_anon_exclusive, can get modified concurrently. Hence, the
> partially_mapped flag needs to be changed atomically.
> 
> Fixes: 8422acdc97ed ("mm: introduce a pageflag for partially mapped folios")
> Reported-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>

Acked-by: Roman Gushchin <roman.gushchin@linux.dev>

Thanks!

Patch

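For background on what the one-line macro change below does: FOLIO_FLAG()
generates atomic folio_test_/folio_set_/folio_clear_ helpers, while the removed
__FOLIO_SET_FLAG()/__FOLIO_CLEAR_FLAG() generated non-atomic ones. A rough,
simplified sketch of the generated set helpers (not the verbatim expansion from
include/linux/page-flags.h, and not standalone code; it relies on kernel
internals such as struct folio, folio_flags() and set_bit()):

/* Simplified sketch; the real macros add const-folio handling and
 * sanity checks. FOLIO_SECOND_PAGE (== 1) selects the flags word of
 * the folio's second page. */
static inline void folio_set_partially_mapped(struct folio *folio)
{
	set_bit(PG_partially_mapped, folio_flags(folio, 1));	/* atomic RMW */
}

static inline void __folio_set_partially_mapped(struct folio *folio)
{
	__set_bit(PG_partially_mapped, folio_flags(folio, 1));	/* plain RMW */
}

set_bit() is a locked read-modify-write, so concurrent atomic updates of other
bits in the same word (e.g. PG_hwpoison, PG_anon_exclusive) cannot be lost;
__set_bit() is a plain load/store sequence and can drop them.
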
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index cf46ac720802..691506bdf2c5 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -862,18 +862,10 @@  static inline void ClearPageCompound(struct page *page)
 	ClearPageHead(page);
 }
 FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
-FOLIO_TEST_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
-/*
- * PG_partially_mapped is protected by deferred_split split_queue_lock,
- * so its safe to use non-atomic set/clear.
- */
-__FOLIO_SET_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
-__FOLIO_CLEAR_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
+FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
 #else
 FOLIO_FLAG_FALSE(large_rmappable)
-FOLIO_TEST_FLAG_FALSE(partially_mapped)
-__FOLIO_SET_FLAG_NOOP(partially_mapped)
-__FOLIO_CLEAR_FLAG_NOOP(partially_mapped)
+FOLIO_FLAG_FALSE(partially_mapped)
 #endif
 
 #define PG_head_mask ((1UL << PG_head))
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2da5520bfe24..120cd2cdc614 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3583,7 +3583,7 @@  int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 		    !list_empty(&folio->_deferred_list)) {
 			ds_queue->split_queue_len--;
 			if (folio_test_partially_mapped(folio)) {
-				__folio_clear_partially_mapped(folio);
+				folio_clear_partially_mapped(folio);
 				mod_mthp_stat(folio_order(folio),
 					      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
 			}
@@ -3695,7 +3695,7 @@  bool __folio_unqueue_deferred_split(struct folio *folio)
 	if (!list_empty(&folio->_deferred_list)) {
 		ds_queue->split_queue_len--;
 		if (folio_test_partially_mapped(folio)) {
-			__folio_clear_partially_mapped(folio);
+			folio_clear_partially_mapped(folio);
 			mod_mthp_stat(folio_order(folio),
 				      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
 		}
@@ -3739,7 +3739,7 @@  void deferred_split_folio(struct folio *folio, bool partially_mapped)
 	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
 	if (partially_mapped) {
 		if (!folio_test_partially_mapped(folio)) {
-			__folio_set_partially_mapped(folio);
+			folio_set_partially_mapped(folio);
 			if (folio_test_pmd_mappable(folio))
 				count_vm_event(THP_DEFERRED_SPLIT_PAGE);
 			count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
@@ -3832,7 +3832,7 @@  static unsigned long deferred_split_scan(struct shrinker *shrink,
 		} else {
 			/* We lost race with folio_put() */
 			if (folio_test_partially_mapped(folio)) {
-				__folio_clear_partially_mapped(folio);
+				folio_clear_partially_mapped(folio);
 				mod_mthp_stat(folio_order(folio),
 					      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
 			}