mm, trivial: Simplify swap related code in try_to_unmap_one()

Message ID 20200331084613.1258555-1-ying.huang@intel.com (mailing list archive)
State New, archived
Series mm, trivial: Simplify swap related code in try_to_unmap_one()

Commit Message

Huang, Ying March 31, 2020, 8:46 a.m. UTC
From: Huang Ying <ying.huang@intel.com>

PageSwapCache() always returns false if PageSwapBacked() returns false,
and PageSwapBacked() is already checked for MADV_FREE pages in
try_to_unmap_one().  The swap related code in try_to_unmap_one() can
therefore be simplified to improve readability.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@surriel.com>
---
 mm/rmap.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

Comments

Michal Hocko March 31, 2020, 9:41 a.m. UTC | #1
On Tue 31-03-20 16:46:13, Huang, Ying wrote:
> From: Huang Ying <ying.huang@intel.com>
> 
> PageSwapCache() always returns false if PageSwapBacked() returns false,
> and PageSwapBacked() is already checked for MADV_FREE pages in
> try_to_unmap_one().  The swap related code in try_to_unmap_one() can
> therefore be simplified to improve readability.

My understanding is that this is a sanity check to let us know if
something breaks. Do we really want to get rid of it? Maybe it is not
really useful but if that is the case then the changelog should reflect
this fact.

> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Rik van Riel <riel@surriel.com>
> ---
>  mm/rmap.c | 27 ++++++++++++++-------------
>  1 file changed, 14 insertions(+), 13 deletions(-)
> 
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 2126fd4a254b..cd3c406aeac7 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1613,19 +1613,6 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>  		} else if (PageAnon(page)) {
>  			swp_entry_t entry = { .val = page_private(subpage) };
>  			pte_t swp_pte;
> -			/*
> -			 * Store the swap location in the pte.
> -			 * See handle_pte_fault() ...
> -			 */
> -			if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {
> -				WARN_ON_ONCE(1);
> -				ret = false;
> -				/* We have to invalidate as we cleared the pte */
> -				mmu_notifier_invalidate_range(mm, address,
> -							address + PAGE_SIZE);
> -				page_vma_mapped_walk_done(&pvmw);
> -				break;
> -			}
>  
>  			/* MADV_FREE page check */
>  			if (!PageSwapBacked(page)) {
> @@ -1648,6 +1635,20 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>  				break;
>  			}
>  
> +			/*
> +			 * Store the swap location in the pte.
> +			 * See handle_pte_fault() ...
> +			 */
> +			if (unlikely(!PageSwapCache(page))) {
> +				WARN_ON_ONCE(1);
> +				ret = false;
> +				/* We have to invalidate as we cleared the pte */
> +				mmu_notifier_invalidate_range(mm, address,
> +							address + PAGE_SIZE);
> +				page_vma_mapped_walk_done(&pvmw);
> +				break;
> +			}
> +
>  			if (swap_duplicate(entry) < 0) {
>  				set_pte_at(mm, address, pvmw.pte, pteval);
>  				ret = false;
> -- 
> 2.25.0
Huang, Ying April 1, 2020, 1:11 a.m. UTC | #2
Michal Hocko <mhocko@kernel.org> writes:

> On Tue 31-03-20 16:46:13, Huang, Ying wrote:
>> From: Huang Ying <ying.huang@intel.com>
>> 
>> PageSwapCache() always returns false if PageSwapBacked() returns false,
>> and PageSwapBacked() is already checked for MADV_FREE pages in
>> try_to_unmap_one().  The swap related code in try_to_unmap_one() can
>> therefore be simplified to improve readability.
>
> My understanding is that this is a sanity check to let us know if
> something breaks. Do we really want to get rid of it? Maybe it is not
> really useful but if that is the case then the changelog should reflect
> this fact.

Now the definition of PageSwapCache() is,

static __always_inline int PageSwapCache(struct page *page)
{
#ifdef CONFIG_THP_SWAP
	page = compound_head(page);
#endif
	return PageSwapBacked(page) && test_bit(PG_swapcache, &page->flags);
}

So, if PageSwapBacked() returns false, PageSwapCache() will always
return false.  The original checking,

-			if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {

is equivalent to

-			if (unlikely(PageSwapBacked(page) && !PageSwapCache(page))) {

Then what is the !PageSwapBacked() && PageSwapCache() check for?  To
prevent someone from changing the definition of PageSwapCache() in the
future and breaking this?

Best Regards,
Huang, Ying
Michal Hocko April 1, 2020, 8:31 a.m. UTC | #3
On Wed 01-04-20 09:11:13, Huang, Ying wrote:
[...]
> Then what is the !PageSwapBacked() && PageSwapCache() check for?  To
> prevent someone from changing the definition of PageSwapCache() in the
> future and breaking this?

Yes, this is my understanding.  It is essentially an assert that enforces
the assumption that swap cache and swap backed are coupled.
Huang, Ying April 1, 2020, 8:48 a.m. UTC | #4
Michal Hocko <mhocko@kernel.org> writes:

> On Wed 01-04-20 09:11:13, Huang, Ying wrote:
> [...]
>> Then what is the !PageSwapBacked() && PageSwapCache() check for?  To
>> prevent someone from changing the definition of PageSwapCache() in the
>> future and breaking this?
>
> Yes, this is my understanding.  It is essentially an assert that enforces
> the assumption that swap cache and swap backed are coupled.

OK.  Then we can just keep the current code.

Best Regards,
Huang, Ying

Patch

diff --git a/mm/rmap.c b/mm/rmap.c
index 2126fd4a254b..cd3c406aeac7 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1613,19 +1613,6 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		} else if (PageAnon(page)) {
 			swp_entry_t entry = { .val = page_private(subpage) };
 			pte_t swp_pte;
-			/*
-			 * Store the swap location in the pte.
-			 * See handle_pte_fault() ...
-			 */
-			if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {
-				WARN_ON_ONCE(1);
-				ret = false;
-				/* We have to invalidate as we cleared the pte */
-				mmu_notifier_invalidate_range(mm, address,
-							address + PAGE_SIZE);
-				page_vma_mapped_walk_done(&pvmw);
-				break;
-			}
 
 			/* MADV_FREE page check */
 			if (!PageSwapBacked(page)) {
@@ -1648,6 +1635,20 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				break;
 			}
 
+			/*
+			 * Store the swap location in the pte.
+			 * See handle_pte_fault() ...
+			 */
+			if (unlikely(!PageSwapCache(page))) {
+				WARN_ON_ONCE(1);
+				ret = false;
+				/* We have to invalidate as we cleared the pte */
+				mmu_notifier_invalidate_range(mm, address,
+							address + PAGE_SIZE);
+				page_vma_mapped_walk_done(&pvmw);
+				break;
+			}
+
 			if (swap_duplicate(entry) < 0) {
 				set_pte_at(mm, address, pvmw.pte, pteval);
 				ret = false;