
Hitshield : Something new eviction process for MGLRU

Message ID 20240802000546.322036-1-chminoo@g.cbnu.ac.kr (mailing list archive)
State New
Series Hitshield : Something new eviction process for MGLRU

Commit Message

Minwoo Jo Aug. 2, 2024, 12:05 a.m. UTC
Signed-off-by: Minwoo Jo <chminoo@g.cbnu.ac.kr>

This commit addresses the page-space occupancy issue that arose
with the previous HitShield commit.

The HitShield technology is based on the observation that MGLRU
does not take into account the state of the folio during eviction.

I assume that if a folio has been updated only one to three times
by the time its generation reaches the eviction point,
it is likely to be referenced less in the future.

Therefore, I implemented this technology to protect frequently updated folios.

I added the hitshield_0, hitshield_1, and hitshield flags to the page flags.

Upon review, I confirmed that the existing page flags occupy bit positions 0 to 28.

Thus, I believe there is room to allocate 3 additional flags (3 bits).

The hitshield_0 and hitshield_1 flags form a two-bit counter, with each flag representing one binary digit.

The hitshield flag is a boolean flag used to verify whether the HitShield is actually enabled.

Each time a folio is added to lrugen, the hitshield_0 and hitshield_1 flags are cleared to reset them.

Subsequently, in the folio_update_gen function, I added the following code:

if (!folio_test_hitshield(folio))
    if (folio_test_change_hitshield_0(folio))
        if (folio_test_change_hitshield_1(folio))
            folio_set_hitshield(folio);

This code advances the two-bit counter; once the counter wraps (on the fourth update), it sets the hitshield flag.
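To make the counter semantics concrete, here is a userspace model (my own sketch, not the kernel code: `test_and_change` stands in for bitops' `test_and_change_bit`, and `struct folio_model` is invented for illustration). In this model the shield is granted when the two-bit counter wraps, i.e. on the fourth update:

```c
#include <assert.h>
#include <stdbool.h>

/* Invented stand-in for a folio's three HitShield bits. */
struct folio_model {
	bool hitshield_0;	/* low counter bit */
	bool hitshield_1;	/* high counter bit */
	bool hitshield;		/* shield granted */
};

/* Toggle a bit and return its previous value, like test_and_change_bit(). */
static bool test_and_change(bool *bit)
{
	bool old = *bit;

	*bit = !*bit;
	return old;
}

/* Mirrors the snippet added to folio_update_gen(). */
static void hitshield_update(struct folio_model *f)
{
	if (!f->hitshield)
		if (test_and_change(&f->hitshield_0))
			if (test_and_change(&f->hitshield_1))
				f->hitshield = true;
}
```

Tracing the model: the counter steps 00, 01, 10, 11, and on the fourth call both toggles return the old value 1, so the shield is set and the counter bits are left at zero.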

In the sort_folio function, which runs during eviction, if the folio's hitshield flag is set,
the folio is promoted to the youngest generation (max_gen) and the flag is cleared.
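The eviction-side behavior can be sketched as a second-chance check (again a userspace model under my own assumptions; `struct folio_sketch` and `sort_folio_shield` are invented names standing in for the real sort_folio() hunk):

```c
#include <assert.h>
#include <stdbool.h>

/* Invented minimal stand-in for the folio state sort_folio() inspects. */
struct folio_sketch {
	bool hitshield;
	int gen;
};

/*
 * Model of the added sort_folio() branch: a shielded folio consumes its
 * shield (test_and_clear), is promoted to the youngest generation, and is
 * reported as "sorted" so this eviction pass skips it.
 */
static bool sort_folio_shield(struct folio_sketch *f, int max_gen)
{
	if (f->hitshield) {
		f->hitshield = false;	/* one-shot: shield is consumed */
		f->gen = max_gen;	/* like folio_inc_gen() + list_move */
		return true;		/* protected, skip eviction */
	}
	return false;			/* fall through to normal eviction */
}
```

Because the shield is cleared when consumed, a protected folio must earn it again through further updates before it is protected a second time, much like the classic second-chance algorithm.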

The testing was conducted on Ubuntu 24.04 LTS (kernel 6.8.0)
on an AMD Ryzen 9 5950X (16 cores / 32 threads), with DRAM restricted to 750 MiB through Docker.

In this environment, the YCSB benchmark was run with the following command:

./bin/ycsb load mongodb -s -P workloads/workloadf -p recordcount=100000000 -p mongodb.batchsize=1024
-threads 32 -p mongodb.url="mongodb://localhost:27017/ycsb"

During testing, a reduction of 78.9% in pswpin and 75.6% in pswpout was measured.

Additionally, the runtime for the Load command decreased by 18.3%,
and the runtime for the Run command decreased by 9.3%.

However, when running a single-threaded program with the current implementation,
swap counts decreased but the runtime increased compared to vanilla MGLRU.

A thorough analysis is still needed to understand why this occurs, but the overhead
appears to come from the extra flag operations.

Therefore, gating this functionality behind an #ifdef so that it is active
only in certain configurations could preserve the performance benefits.

As an undergraduate student still self-studying the kernel, I am not very familiar with #ifdef usage,
so I did not write that part.
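For what it is worth, the usual kernel pattern is a Kconfig symbol plus static-inline stubs so the feature compiles away entirely. The sketch below is only illustrative: CONFIG_LRU_GEN_HITSHIELD is an invented symbol, and the helper merely wraps the flag resets this patch adds to lru_gen_add_folio():

```c
/*
 * Hypothetical gating sketch. CONFIG_LRU_GEN_HITSHIELD does not exist;
 * it is shown only to illustrate the compile-away stub pattern.
 */
#ifdef CONFIG_LRU_GEN_HITSHIELD
static inline void folio_hitshield_reset(struct folio *folio)
{
	folio_clear_hitshield_0(folio);
	folio_clear_hitshield_1(folio);
}
#else
static inline void folio_hitshield_reset(struct folio *folio)
{
	/* no-op: the flags do not exist in this configuration */
}
#endif
```

Callers then invoke folio_hitshield_reset() unconditionally, and configurations without the symbol pay no runtime or page-flag cost.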

I apologize for not being able to test it with large memory swap workloads,
as I was unsure what would be appropriate.

I would appreciate your review and feedback on this approach.

Thank you.
---
 include/linux/mm_inline.h      |  4 ++++
 include/linux/page-flags.h     | 26 ++++++++++++++++++++++++++
 include/trace/events/mmflags.h |  5 ++++-
 mm/vmscan.c                    | 22 ++++++++++++++++++++++
 4 files changed, 56 insertions(+), 1 deletion(-)

Comments

Vlastimil Babka Aug. 5, 2024, 11:29 a.m. UTC | #1
+Cc Yu Zhao

On 8/2/24 02:05, Minwoo Jo wrote:
> [...]
David Hildenbrand Aug. 5, 2024, 11:56 a.m. UTC | #2
On 02.08.24 02:05, Minwoo Jo wrote:
> [...]
> 
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 5769fe6e4950..70951c6fe4ce 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -130,6 +130,16 @@ enum pageflags {
>   	PG_arch_2,
>   	PG_arch_3,
>   #endif
> +	PG_hitshield_0,
> +	PG_hitshield_1,
> +	PG_hitshield,
> +
> +	/*
> +	 * The flags consist of two types: one for counting the update occurrences
> +	 * of the folio and another boolean flag to check whether the HitShield is activated.
> +	 */

Note that adding new pageflags is frowned upon -- we're actually trying 
to get rid of some of them, like PG_error. Getting another 3 added seems 
very .. unlikely :)

Especially when done unconditionally on 32bit as well this might just 
not work at all, and as you state, would require #ifdefery.
SeongJae Park Oct. 8, 2024, 6:50 p.m. UTC | #3
Hello Minwoo,

On Fri, 2 Aug 2024 00:05:46 +0000 Minwoo Jo <chminoo@g.cbnu.ac.kr> wrote:

> Signed-off-by: Minwoo Jo <chminoo@g.cbnu.ac.kr>
> 
> This commit addresses the page space occupancy issue that arose
> in the previous commit with the HitShield technology.
> 
> The HitShield technology is based on the observation that MGLRU
> does not take into account the state of the folio during eviction.
> 
> I assumed that if a folio, which has been updated only 1 to 3 times,
> has increased in generation until the point of eviction,
> it is likely to be referenced less in the future.

So, my humble understanding of the point is that you want to improve the
kernel's reclamation logic by making it aware of not only recency but also
frequency?

Given existence of several previous frequency based page placement algorithm
optimization researches including LRFU, CAR and CART, I think the overall idea
might make sense at least in some use cases.  I'm not sure whether the
threshold value and the mechanism to measure the frequency make sense and/or
can be applied to general cases, though.

> 
> Therefore, I implemented this technology to protect frequently updated folios.
> 
> I added the hitshield_0, hitshield_1, and hitshield flags to the page flag.
> 
> Upon my review, I confirmed that the flags occupy positions 0 to 28.
> 
> Thus, I believe that allocating 3 flags, or 3 bits, is sufficient.

As others also mentioned, I think even three bits could be challenging to add.

In my humble opinion, frequency monitoring could be done using data structures
and access check mechanisms other than folio/LRU and reclamation logic.  Then,
the monitoring mechanism could manipulate LRU lists based on the frequency
data.  Modifying reclamation code to refer to the frequency data could also be
another way.

DAMON_LRU_SORT [1,2] could be an example of such approaches (driven by the
monitoring mechanism without reclamation code modification).  Of course,
DAMON_LRU_SORT would have many limitations and room for improvement.  I'm
curious whether you had time to consider that kind of approach.  If you did,
but found critical problems that led to this patch, it would be great if you
could share them.

[1] https://lwn.net/Articles/905370/
[2] https://www.kernel.org/doc/html/latest/admin-guide/mm/damon/lru_sort.html


Thanks,
SJ

[...]
Minwoo Jo Oct. 28, 2024, 7:32 a.m. UTC | #4
On Wed, Oct 9, 2024 at 3:50 AM, SeongJae Park <sj@kernel.org> wrote:
> [...]

I apologize for the late response to your message.

Based on my limited understanding, DAMON's LRU manipulation assists the LRU
by tracking data accesses to identify hot and cold pages,
and then lowering the priority of the cold ones.

Additionally, this approach goes beyond simply measuring frequency:
it analyzes access patterns, which I believe could yield greater performance.

By appropriately using the lru_prio and lru_deprio mechanisms of DAMON's
LRU_SORT, I think the same idea could also be implemented for MGLRU.

While HitShield differs from DAMON_LRU_SORT in that it checks counters
during internal operations rather than monitoring from outside,
I believe that with suitable modifications we could implement
the HitShield mechanism without altering the folio's internal
structure or using page flags.

Thank you for your insights; I will definitely consider them.

Thank you.
Minwoo, Jo

Patch

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index f4fe593c1400..dea613b2785c 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -266,6 +266,10 @@  static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio,
 	else
 		list_add(&folio->lru, &lrugen->folios[gen][type][zone]);
 
+	folio_clear_hitshield_0(folio);
+	folio_clear_hitshield_1(folio);
+	/* Reset the hit_shield counter when the folio is added to a generation */
+
 	return true;
 }
 
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 5769fe6e4950..70951c6fe4ce 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -130,6 +130,16 @@  enum pageflags {
 	PG_arch_2,
 	PG_arch_3,
 #endif
+	PG_hitshield_0,
+	PG_hitshield_1,
+	PG_hitshield,
+
+	/*
+	 * Two kinds of flags: two bits counting how often the folio has been
+	 * updated, and one boolean recording whether the HitShield is active.
+	 */
+
+
 	__NR_PAGEFLAGS,
 
 	PG_readahead = PG_reclaim,
@@ -504,6 +514,14 @@  static inline int TestClearPage##uname(struct page *page) { return 0; }
 #define TESTSCFLAG_FALSE(uname, lname)					\
 	TESTSETFLAG_FALSE(uname, lname) TESTCLEARFLAG_FALSE(uname, lname)
 
+#define TESTCHANGEPAGEFLAG(uname, lname, policy)                        \
+static __always_inline                                                  \
+bool folio_test_change_##lname(struct folio *folio)                     \
+{ return test_and_change_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \
+static __always_inline int TestChangePage##uname(struct page *page)             \
+{ return test_and_change_bit(PG_##lname, &policy(page, 0)->flags); }
+/* This macro function allows the use of the change operation defined in bitops. */
+
 __PAGEFLAG(Locked, locked, PF_NO_TAIL)
 FOLIO_FLAG(waiters, FOLIO_HEAD_PAGE)
 PAGEFLAG(Error, error, PF_NO_TAIL) TESTCLEARFLAG(Error, error, PF_NO_TAIL)
@@ -559,6 +577,14 @@  PAGEFLAG(Reclaim, reclaim, PF_NO_TAIL)
 PAGEFLAG(Readahead, readahead, PF_NO_COMPOUND)
 	TESTCLEARFLAG(Readahead, readahead, PF_NO_COMPOUND)
 
+/*
+ * hitshield_0 and hitshield_1 use only the clear and test_change helpers;
+ * the hitshield flag uses only set, test, and test_clear.
+ */
+PAGEFLAG(Hitshield0, hitshield_0, PF_HEAD) TESTCHANGEPAGEFLAG(Hitshield0, hitshield_0, PF_HEAD)
+PAGEFLAG(Hitshield1, hitshield_1, PF_HEAD) TESTCHANGEPAGEFLAG(Hitshield1, hitshield_1, PF_HEAD)
+PAGEFLAG(Hitshield, hitshield, PF_HEAD) TESTSCFLAG(Hitshield, hitshield, PF_HEAD)
+
 #ifdef CONFIG_HIGHMEM
 /*
  * Must use a macro here due to header dependency issues. page_zone() is not
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index b63d211bd141..d7ef9c503876 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -117,7 +117,10 @@ 
 	DEF_PAGEFLAG_NAME(mappedtodisk),				\
 	DEF_PAGEFLAG_NAME(reclaim),					\
 	DEF_PAGEFLAG_NAME(swapbacked),					\
-	DEF_PAGEFLAG_NAME(unevictable)					\
+	DEF_PAGEFLAG_NAME(unevictable),					\
+	DEF_PAGEFLAG_NAME(hitshield_0),					\
+	DEF_PAGEFLAG_NAME(hitshield_1),					\
+	DEF_PAGEFLAG_NAME(hitshield)					\
 IF_HAVE_PG_MLOCK(mlocked)						\
 IF_HAVE_PG_UNCACHED(uncached)						\
 IF_HAVE_PG_HWPOISON(hwpoison)						\
diff --git a/mm/vmscan.c b/mm/vmscan.c
index cfa839284b92..b44371dddb33 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3148,6 +3148,18 @@  static int folio_update_gen(struct folio *folio, int gen)
 		new_flags |= (gen + 1UL) << LRU_GEN_PGOFF;
 	} while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
 
+	/*
+	 * Core of hit_shield: has this folio been updated frequently?
+	 * The shield is granted once the two-bit counter wraps, matching
+	 * MAX_NR_GENS = 4: a folio updated for more than a generation's
+	 * length gains extra survivability of one generation's length.
+	 */
+
+	if (!folio_test_hitshield(folio))
+		if (folio_test_change_hitshield_0(folio))
+			if (folio_test_change_hitshield_1(folio))
+				folio_set_hitshield(folio);
+
 	return ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
 }
 
@@ -4334,6 +4346,16 @@  static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 		return true;
 	}
 
+	/*
+	 * hit_shield is set: reset it and protect this folio, second-chance style.
+	 */
+
+	if (folio_test_clear_hitshield(folio)) {
+		gen = folio_inc_gen(lruvec, folio, true);
+		list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
+		return true;
+	}
+
 	return false;
 }