
mm: proc: smaps_rollup: Fix pss_locked calculation

Message ID 20190121011049.160505-1-sspatil@android.com (mailing list archive)
State New, archived
Series mm: proc: smaps_rollup: Fix pss_locked calculation

Commit Message

Sandeep Patil Jan. 21, 2019, 1:10 a.m. UTC
The 'pss_locked' field of smaps_rollup was being calculated incorrectly:
it added the already-accumulated pss total every time a locked VMA was
found.

Fix that by recording the current pss value before each VMA is walked,
so that only the delta is added when the VMA turns out to be VM_LOCKED.

Fixes: 493b0e9d945f ("mm: add /proc/pid/smaps_rollup")
Cc: stable@vger.kernel.org # 4.14.y 4.19.y
Signed-off-by: Sandeep Patil <sspatil@android.com>
---
 fs/proc/task_mmu.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

Comments

Sasha Levin Jan. 23, 2019, 10:57 p.m. UTC | #1
Hi,

[This is an automated email]

This commit has been processed because it contains a "Fixes:" tag,
fixing commit: 493b0e9d945f mm: add /proc/pid/smaps_rollup.

The bot has tested the following trees: v4.20.3, v4.19.16, v4.14.94.

v4.20.3: Build OK!
v4.19.16: Build OK!
v4.14.94: Failed to apply! Possible dependencies:
    8526d84f8171 ("fs/proc/task_mmu.c: do not show VmExe bigger than total executable virtual memory")
    8e68d689afe3 ("mm: /proc/pid/smaps: factor out mem stats gathering")
    af5b0f6a09e4 ("mm: consolidate page table accounting")
    b4e98d9ac775 ("mm: account pud page tables")
    c4812909f5d5 ("mm: introduce wrappers to access mm->nr_ptes")
    d1be35cb6f96 ("proc: add seq_put_decimal_ull_width to speed up /proc/pid/smaps")


How should we proceed with this patch?

--
Thanks,
Sasha
Sandeep Patil Jan. 24, 2019, 9:39 p.m. UTC | #2
On Wed, Jan 23, 2019 at 10:57:45PM +0000, Sasha Levin wrote:
> Hi,
> 
> [This is an automated email]
> 
> This commit has been processed because it contains a "Fixes:" tag,
> fixing commit: 493b0e9d945f mm: add /proc/pid/smaps_rollup.
> 
> The bot has tested the following trees: v4.20.3, v4.19.16, v4.14.94.
> 
> v4.20.3: Build OK!
> v4.19.16: Build OK!
> v4.14.94: Failed to apply! Possible dependencies:
>     8526d84f8171 ("fs/proc/task_mmu.c: do not show VmExe bigger than total executable virtual memory")
>     8e68d689afe3 ("mm: /proc/pid/smaps: factor out mem stats gathering")
>     af5b0f6a09e4 ("mm: consolidate page table accounting")
>     b4e98d9ac775 ("mm: account pud page tables")
>     c4812909f5d5 ("mm: introduce wrappers to access mm->nr_ptes")
>     d1be35cb6f96 ("proc: add seq_put_decimal_ull_width to speed up /proc/pid/smaps")
> 
> 
> How should we proceed with this patch?

I will send 4.14 / 4.9 backports to -stable if / when the patch gets
accepted.

- ssp
Greg KH Jan. 25, 2019, 6:21 a.m. UTC | #3
On Thu, Jan 24, 2019 at 01:39:40PM -0800, Sandeep Patil wrote:
> On Wed, Jan 23, 2019 at 10:57:45PM +0000, Sasha Levin wrote:
> > Hi,
> > 
> > [This is an automated email]
> > 
> > This commit has been processed because it contains a "Fixes:" tag,
> > fixing commit: 493b0e9d945f mm: add /proc/pid/smaps_rollup.
> > 
> > The bot has tested the following trees: v4.20.3, v4.19.16, v4.14.94.
> > 
> > v4.20.3: Build OK!
> > v4.19.16: Build OK!
> > v4.14.94: Failed to apply! Possible dependencies:
> >     8526d84f8171 ("fs/proc/task_mmu.c: do not show VmExe bigger than total executable virtual memory")
> >     8e68d689afe3 ("mm: /proc/pid/smaps: factor out mem stats gathering")
> >     af5b0f6a09e4 ("mm: consolidate page table accounting")
> >     b4e98d9ac775 ("mm: account pud page tables")
> >     c4812909f5d5 ("mm: introduce wrappers to access mm->nr_ptes")
> >     d1be35cb6f96 ("proc: add seq_put_decimal_ull_width to speed up /proc/pid/smaps")
> > 
> > 
> > How should we proceed with this patch?
> 
> I will send 4.14 / 4.9 backports to -stable if / when the patch gets
> accepted.

That's fine, you will get the automated "FAILED:" emails when I try to
apply it to the tree at that time, and you can send an updated version
then if you want.

thanks,

greg k-h
Andrew Morton Jan. 29, 2019, 12:15 a.m. UTC | #4
On Sun, 20 Jan 2019 17:10:49 -0800 Sandeep Patil <sspatil@android.com> wrote:

> The 'pss_locked' field of smaps_rollup was being calculated incorrectly:
> it added the already-accumulated pss total every time a locked VMA was
> found.
> 
> Fix that by recording the current pss value before each VMA is walked,
> so that only the delta is added when the VMA turns out to be VM_LOCKED.
> 
> ...
>
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>  #endif
>  		.mm = vma->vm_mm,
>  	};
> +	unsigned long pss;
>  
>  	smaps_walk.private = mss;
>  
> @@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>  		}
>  	}
>  #endif
> -
> +	/* record current pss so we can calculate the delta after page walk */
> +	pss = mss->pss;
>  	/* mmap_sem is held in m_start */
>  	walk_page_vma(vma, &smaps_walk);
>  	if (vma->vm_flags & VM_LOCKED)
> -		mss->pss_locked += mss->pss;
> +		mss->pss_locked += mss->pss - pss;
>  }

This seems to be a rather obscure way of accumulating
mem_size_stats.pss_locked.  Wouldn't it make more sense to do this in
smaps_account(), wherever we increment mem_size_stats.pss?

It would be a tiny bit less efficient but I think that the code cleanup
justifies such a cost?
Vlastimil Babka Jan. 29, 2019, 3:52 p.m. UTC | #5
On 1/29/19 1:15 AM, Andrew Morton wrote:
> On Sun, 20 Jan 2019 17:10:49 -0800 Sandeep Patil <sspatil@android.com> wrote:
> 
>> The 'pss_locked' field of smaps_rollup was being calculated incorrectly:
>> it added the already-accumulated pss total every time a locked VMA was
>> found.
>> 
>> Fix that by recording the current pss value before each VMA is walked,
>> so that only the delta is added when the VMA turns out to be VM_LOCKED.
>> 
>> ...
>>
>> --- a/fs/proc/task_mmu.c
>> +++ b/fs/proc/task_mmu.c
>> @@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>>  #endif
>>  		.mm = vma->vm_mm,
>>  	};
>> +	unsigned long pss;
>>  
>>  	smaps_walk.private = mss;
>>  
>> @@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
>>  		}
>>  	}
>>  #endif
>> -
>> +	/* record current pss so we can calculate the delta after page walk */
>> +	pss = mss->pss;
>>  	/* mmap_sem is held in m_start */
>>  	walk_page_vma(vma, &smaps_walk);
>>  	if (vma->vm_flags & VM_LOCKED)
>> -		mss->pss_locked += mss->pss;
>> +		mss->pss_locked += mss->pss - pss;
>>  }
> 
> This seems to be a rather obscure way of accumulating
> mem_size_stats.pss_locked.  Wouldn't it make more sense to do this in
> smaps_account(), wherever we increment mem_size_stats.pss?
> 
> It would be a tiny bit less efficient but I think that the code cleanup
> justifies such a cost?

Yeah, Sandeep, could you add a 'bool locked' param to smaps_account() and
check it there? We probably don't need the whole vma param yet.

Thanks.
Sandeep Patil Feb. 3, 2019, 6:21 a.m. UTC | #6
On Tue, Jan 29, 2019 at 04:52:21PM +0100, Vlastimil Babka wrote:
> On 1/29/19 1:15 AM, Andrew Morton wrote:
> > On Sun, 20 Jan 2019 17:10:49 -0800 Sandeep Patil <sspatil@android.com> wrote:
> > 
> >> The 'pss_locked' field of smaps_rollup was being calculated incorrectly:
> >> it added the already-accumulated pss total every time a locked VMA was
> >> found.
> >> 
> >> Fix that by recording the current pss value before each VMA is walked,
> >> so that only the delta is added when the VMA turns out to be VM_LOCKED.
> >> 
> >> ...
> >>
> >> --- a/fs/proc/task_mmu.c
> >> +++ b/fs/proc/task_mmu.c
> >> @@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
> >>  #endif
> >>  		.mm = vma->vm_mm,
> >>  	};
> >> +	unsigned long pss;
> >>  
> >>  	smaps_walk.private = mss;
> >>  
> >> @@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
> >>  		}
> >>  	}
> >>  #endif
> >> -
> >> +	/* record current pss so we can calculate the delta after page walk */
> >> +	pss = mss->pss;
> >>  	/* mmap_sem is held in m_start */
> >>  	walk_page_vma(vma, &smaps_walk);
> >>  	if (vma->vm_flags & VM_LOCKED)
> >> -		mss->pss_locked += mss->pss;
> >> +		mss->pss_locked += mss->pss - pss;
> >>  }
> > 
> > This seems to be a rather obscure way of accumulating
> > mem_size_stats.pss_locked.  Wouldn't it make more sense to do this in
> > smaps_account(), wherever we increment mem_size_stats.pss?
> > 
> > It would be a tiny bit less efficient but I think that the code cleanup
> > justifies such a cost?
> 
> Yeah, Sandeep, could you add a 'bool locked' param to smaps_account() and
> check it there? We probably don't need the whole vma param yet.

Agree, I will send -v2 shortly.

- ssp

Patch

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index f0ec9edab2f3..51a00a2b4733 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -709,6 +709,7 @@ static void smap_gather_stats(struct vm_area_struct *vma,
 #endif
 		.mm = vma->vm_mm,
 	};
+	unsigned long pss;
 
 	smaps_walk.private = mss;
 
@@ -737,11 +738,12 @@ static void smap_gather_stats(struct vm_area_struct *vma,
 		}
 	}
 #endif
-
+	/* record current pss so we can calculate the delta after page walk */
+	pss = mss->pss;
 	/* mmap_sem is held in m_start */
 	walk_page_vma(vma, &smaps_walk);
 	if (vma->vm_flags & VM_LOCKED)
-		mss->pss_locked += mss->pss;
+		mss->pss_locked += mss->pss - pss;
 }
 
 #define SEQ_PUT_DEC(str, val) \