
[linux-next] kernel/power: Use kmap_local_page() in snapshot.c

Message ID b2fd8c52429b51fc0c060753e6b616f1edf81d92.1702020689.git.chen.haonan2@zte.com.cn (mailing list archive)
State Changes Requested, archived
Series [linux-next] kernel/power: Use kmap_local_page() in snapshot.c

Commit Message

chenguanxi11234@163.com Dec. 11, 2023, 2:52 p.m. UTC
From: Chen Haonan <chen.haonan2@zte.com.cn>

kmap_atomic() has been deprecated in favor of kmap_local_page().

Each call to kmap_atomic() creates a non-preemptible section and
disables page faults. This can be a source of unexpected latency,
so callers should prefer kmap_local_page() over kmap_atomic().

Signed-off-by: Chen Haonan <chen.haonan2@zte.com.cn>
---
 kernel/power/snapshot.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
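
The conversion the patch performs follows the general pattern sketched
below. This is a minimal, illustrative example (the helper name is made
up and it is not code from snapshot.c): kmap_local_page() still returns
a CPU-local mapping usable in any context, but unlike kmap_atomic() it
does not disable page faults or preemption, and local mappings must be
released in reverse order of creation.

#include <linux/highmem.h>

/* Illustrative helper, not part of the patch. */
static void copy_highmem_page(struct page *d_page, struct page *s_page)
{
	void *src, *dst;

	src = kmap_local_page(s_page);
	dst = kmap_local_page(d_page);
	copy_page(dst, src);
	/* Local mappings are released in reverse (stack) order. */
	kunmap_local(dst);
	kunmap_local(src);
}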

Comments

Rafael J. Wysocki Dec. 11, 2023, 4:04 p.m. UTC | #1
On Mon, Dec 11, 2023 at 3:52 PM <chenguanxi11234@163.com> wrote:
>
> From: Chen Haonan <chen.haonan2@zte.com.cn>
>
> kmap_atomic() has been deprecated in favor of kmap_local_page().
>
> Each call to kmap_atomic() creates a non-preemptible section and
> disables page faults. This can be a source of unexpected latency,
> so callers should prefer kmap_local_page() over kmap_atomic().

Do you realize that the code being modified runs with only one CPU
online and with interrupts off on that CPU?

> Signed-off-by: Chen Haonan <chen.haonan2@zte.com.cn>
> ---
>  kernel/power/snapshot.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index 50a15408c3fc..feef0d4d3288 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -1487,11 +1487,11 @@ static bool copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
>         s_page = pfn_to_page(src_pfn);
>         d_page = pfn_to_page(dst_pfn);
>         if (PageHighMem(s_page)) {
> -               src = kmap_atomic(s_page);
> -               dst = kmap_atomic(d_page);
> +               src = kmap_local_page(s_page);
> +               dst = kmap_local_page(d_page);
>                 zeros_only = do_copy_page(dst, src);
> -               kunmap_atomic(dst);
> -               kunmap_atomic(src);
> +               kunmap_local(dst);
> +               kunmap_local(src);
>         } else {
>                 if (PageHighMem(d_page)) {
>                         /*
> @@ -1499,9 +1499,9 @@ static bool copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
>                          * data modified by kmap_atomic()
>                          */
>                         zeros_only = safe_copy_page(buffer, s_page);
> -                       dst = kmap_atomic(d_page);
> +                       dst = kmap_local_page(d_page);
>                         copy_page(dst, buffer);
> -                       kunmap_atomic(dst);
> +                       kunmap_local(dst);
>                 } else {
>                         zeros_only = safe_copy_page(page_address(d_page), s_page);
>                 }
> --
> 2.25.1
>
chenguanxi11234@163.com Dec. 12, 2023, 2:38 p.m. UTC | #2
What I've learned is that kmap_atomic() disables page faults and
preemption (the latter only on !PREEMPT_RT kernels). In my opinion,
the code between the mapping and unmapping in this patch does not
depend on those side effects, so I simply replaced kmap_atomic()
with kmap_local_page(). If I'm wrong, please explain it to me.
Thank you.
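
For completeness: if the code between the mapping and unmapping had
relied on those side effects, the usual conversion would be to keep
kmap_local_page() and disable page faults/preemption explicitly. A
minimal sketch of that pattern follows, assuming a made-up helper and
a plain memcpy() of one page (not code from this patch):

#include <linux/highmem.h>
#include <linux/preempt.h>
#include <linux/string.h>
#include <linux/uaccess.h>	/* pagefault_disable()/pagefault_enable() */

/*
 * Illustrative only: a caller that really needs the old kmap_atomic()
 * side effects pairs kmap_local_page() with explicit disables.
 */
static void copy_page_no_faults(void *dst, struct page *s_page)
{
	void *src;

	preempt_disable();
	pagefault_disable();

	src = kmap_local_page(s_page);
	memcpy(dst, src, PAGE_SIZE);
	kunmap_local(src);

	pagefault_enable();
	preempt_enable();
}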

>
>> Signed-off-by: Chen Haonan <chen.haonan2@zte.com.cn>
>> ---
>>  kernel/power/snapshot.c | 12 ++++++------
>>  1 file changed, 6 insertions(+), 6 deletions(-)
>> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
>> index 50a15408c3fc..feef0d4d3288 100644
>> --- a/kernel/power/snapshot.c
>> +++ b/kernel/power/snapshot.c
>> @@ -1487,11 +1487,11 @@ static bool copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
>>         s_page = pfn_to_page(src_pfn);
>>         d_page = pfn_to_page(dst_pfn);
>>         if (PageHighMem(s_page)) {
>> -               src = kmap_atomic(s_page);
>> -               dst = kmap_atomic(d_page);
>> +               src = kmap_local_page(s_page);
>> +               dst = kmap_local_page(d_page);
>>                 zeros_only = do_copy_page(dst, src);
>> -               kunmap_atomic(dst);
>> -               kunmap_atomic(src);
>> +               kunmap_local(dst);
>> +               kunmap_local(src);
>>         } else {
>>                 if (PageHighMem(d_page)) {
>>                         /*
>> @@ -1499,9 +1499,9 @@ static bool copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
>>                          * data modified by kmap_atomic()
>>                          */
>>                         zeros_only = safe_copy_page(buffer, s_page);
>> -                       dst = kmap_atomic(d_page);
>> +                       dst = kmap_local_page(d_page);
>>                         copy_page(dst, buffer);
>> -                       kunmap_atomic(dst);
>> +                       kunmap_local(dst);
>>                 } else {
>>                         zeros_only = safe_copy_page(page_address(d_page), s_page);
>>                 }
>> --
>> 2.25.1
>>
Rafael J. Wysocki Dec. 12, 2023, 3:01 p.m. UTC | #3
On Tue, Dec 12, 2023 at 3:38 PM <chenguanxi11234@163.com> wrote:
>
> What I've learned is that kmap_atomic() disables page faults and
> preemption (the latter only on !PREEMPT_RT kernels). In my opinion,
> the code between the mapping and unmapping in this patch does not
> depend on those side effects, so I simply replaced kmap_atomic()
> with kmap_local_page(). If I'm wrong, please explain it to me.

You are right, but why don't you say the above in the patch changelog
instead of the irrelevant information that is there now?
chenguanxi11234@163.com Dec. 12, 2023, 3:47 p.m. UTC | #4
Thanks for your explanation. I have revised the changelog accordingly
and submitted an updated version of the patch.

Patch

diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 50a15408c3fc..feef0d4d3288 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1487,11 +1487,11 @@  static bool copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
 	s_page = pfn_to_page(src_pfn);
 	d_page = pfn_to_page(dst_pfn);
 	if (PageHighMem(s_page)) {
-		src = kmap_atomic(s_page);
-		dst = kmap_atomic(d_page);
+		src = kmap_local_page(s_page);
+		dst = kmap_local_page(d_page);
 		zeros_only = do_copy_page(dst, src);
-		kunmap_atomic(dst);
-		kunmap_atomic(src);
+		kunmap_local(dst);
+		kunmap_local(src);
 	} else {
 		if (PageHighMem(d_page)) {
 			/*
@@ -1499,9 +1499,9 @@  static bool copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
 			 * data modified by kmap_atomic()
 			 */
 			zeros_only = safe_copy_page(buffer, s_page);
-			dst = kmap_atomic(d_page);
+			dst = kmap_local_page(d_page);
 			copy_page(dst, buffer);
-			kunmap_atomic(dst);
+			kunmap_local(dst);
 		} else {
 			zeros_only = safe_copy_page(page_address(d_page), s_page);
 		}