mm: /proc/pid/smaps_rollup: avoid skipping vma after getting mmap_lock again

Message ID 20240523183531.2535436-1-yzhong@purestorage.com
State New
Series mm: /proc/pid/smaps_rollup: avoid skipping vma after getting mmap_lock again

Commit Message

Yuanyuan Zhong May 23, 2024, 6:35 p.m. UTC
After switching smaps_rollup to use the VMA iterator, searching for the
next entry became part of the condition expression of the do-while loop,
so continue jumps straight to that condition and advances past the
current VMA. The current VMA's stats therefore need to be gathered
before the continue statement.

Fixes: c4c84f06285e ("fs/proc/task_mmu: stop using linked list and highest_vm_end")
Signed-off-by: Yuanyuan Zhong <yzhong@purestorage.com>
Reviewed-by: Mohamed Khalfella <mkhalfella@purestorage.com>
---
 fs/proc/task_mmu.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
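
To make the failure mode concrete: for_each_vma() used as the trailer of
a do-while loop puts the iterator advance inside the loop condition, so
continue both abandons the rest of the body and moves on to the next
element. Below is a minimal userspace analogue of that loop shape; the
node list and next_entry() are illustrative stand-ins for the VMA
iterator, not kernel code.

#include <stdio.h>

struct node { int val; struct node *next; };

static struct node *next_entry(struct node *n)
{
	return n->next;
}

int main(void)
{
	struct node c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
	struct node *n = &a;
	int sum = 0;

	do {
		if (n->val == 2)
			continue;	/* jumps to the while condition,
					 * which advances n: node 2 is
					 * never added, mirroring the
					 * skipped VMA */
		sum += n->val;
	} while ((n = next_entry(n)) != NULL);

	/* prints 4, not the expected 6: the "continue"d entry was lost */
	printf("sum = %d\n", sum);
	return 0;
}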

Comments

Andrew Morton May 23, 2024, 6:56 p.m. UTC | #1
On Thu, 23 May 2024 12:35:31 -0600 Yuanyuan Zhong <yzhong@purestorage.com> wrote:

> After switching smaps_rollup to use the VMA iterator, searching for the
> next entry became part of the condition expression of the do-while loop,
> so continue jumps straight to that condition and advances past the
> current VMA. The current VMA's stats therefore need to be gathered
> before the continue statement.

Please describe the userspace-visible runtime effects of this bug. 
This aids others in deciding which kernel version(s) need the patch.

Thanks.
Yuanyuan Zhong May 23, 2024, 7:16 p.m. UTC | #2
On Thu, May 23, 2024 at 11:56 AM Andrew Morton
<akpm@linux-foundation.org> wrote:
>
> Please describe the userspace-visible runtime effects of this bug.
> This aids others in deciding which kernel version(s) need the patch.
>
Without the fix, with some VMAs skipped, the memory consumption
userspace observes from /proc/pid/smaps_rollup will be smaller than the
sum of the corresponding fields from /proc/pid/smaps.
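
A hypothetical way to check for such a shortfall from userspace is to
sum the per-VMA Rss lines from smaps and compare the total against the
single Rss line in smaps_rollup. The sketch below assumes the standard
"Rss:  <value> kB" field format; the two files are read at different
times, so transient differences are normal for a busy process, and only
a persistent shortfall would point at skipped VMAs.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sum every "<field> <value> kB" line in a /proc file. */
static long sum_field(const char *path, const char *field)
{
	FILE *f = fopen(path, "r");
	char line[256];
	size_t flen = strlen(field);
	long total = 0, val;

	if (!f) {
		perror(path);
		exit(1);
	}
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, field, flen) &&
		    sscanf(line + flen, " %ld", &val) == 1)
			total += val;
	fclose(f);
	return total;
}

int main(int argc, char **argv)
{
	const char *pid = argc > 1 ? argv[1] : "self";
	char path[64];
	long smaps_rss, rollup_rss;

	snprintf(path, sizeof(path), "/proc/%s/smaps", pid);
	smaps_rss = sum_field(path, "Rss:");
	snprintf(path, sizeof(path), "/proc/%s/smaps_rollup", pid);
	rollup_rss = sum_field(path, "Rss:");

	printf("smaps Rss sum: %ld kB, smaps_rollup Rss: %ld kB\n",
	       smaps_rss, rollup_rss);
	return rollup_rss < smaps_rss;
}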

Please let me know if a separate v2 is needed. Thanks.
Andrew Morton May 24, 2024, 3:02 a.m. UTC | #3
On Thu, 23 May 2024 12:16:57 -0700 Yuanyuan Zhong <yzhong@purestorage.com> wrote:

> On Thu, May 23, 2024 at 11:56 AM Andrew Morton
> <akpm@linux-foundation.org> wrote:
> >
> > Please describe the userspace-visible runtime effects of this bug.
> > This aids others in deciding which kernel version(s) need the patch.
> >
> Without the fix, with some VMAs skipped, the memory consumption
> userspace observes from /proc/pid/smaps_rollup will be smaller than the
> sum of the corresponding fields from /proc/pid/smaps.
> 
> Please let me know if a separate v2 is needed. Thanks.

All is good, thanks.  I added the above text and the cc:stable tag.
David Hildenbrand May 27, 2024, 7:33 a.m. UTC | #4
On 23.05.24 at 20:35, Yuanyuan Zhong wrote:
> After switching smaps_rollup to use the VMA iterator, searching for the
> next entry became part of the condition expression of the do-while loop,
> so continue jumps straight to that condition and advances past the
> current VMA. The current VMA's stats therefore need to be gathered
> before the continue statement.
> 
> Fixes: c4c84f06285e ("fs/proc/task_mmu: stop using linked list and highest_vm_end")
> Signed-off-by: Yuanyuan Zhong <yzhong@purestorage.com>
> Reviewed-by: Mohamed Khalfella <mkhalfella@purestorage.com>
> ---
>   fs/proc/task_mmu.c | 9 +++++++--
>   1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index e5a5f015ff03..f8d35f993fe5 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -970,12 +970,17 @@ static int show_smaps_rollup(struct seq_file *m, void *v)
>   				break;
>   
>   			/* Case 1 and 2 above */
> -			if (vma->vm_start >= last_vma_end)
> +			if (vma->vm_start >= last_vma_end) {
> +				smap_gather_stats(vma, &mss, 0);
> +				last_vma_end = vma->vm_end;
>   				continue;
> +			}
>   
>   			/* Case 4 above */
> -			if (vma->vm_end > last_vma_end)
> +			if (vma->vm_end > last_vma_end) {
>   				smap_gather_stats(vma, &mss, last_vma_end);
> +				last_vma_end = vma->vm_end;
> +			}
>   		}
>   	} for_each_vma(vmi, vma);
>   

Looks correct to me. I guess getting a reproducer is rather tricky.

Acked-by: David Hildenbrand <david@redhat.com>
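
For anyone attempting a reproducer: per the patch context, the skipped
VMA can only occur when show_smaps_rollup() sees mmap_lock contention
and drops the lock mid-walk, so a test would need one thread churning
the address space while another rereads smaps_rollup. A hypothetical,
timing-dependent sketch follows (compile with -pthread); hitting the
race window is not guaranteed.

#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>

static volatile int stop;

/* Take mmap_lock for write in a tight loop to create contention. */
static void *churn(void *arg)
{
	(void)arg;
	while (!stop) {
		void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p != MAP_FAILED)
			munmap(p, 4096);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;
	char buf[4096];
	int i;

	if (pthread_create(&t, NULL, churn, NULL))
		return 1;
	for (i = 0; i < 100000; i++) {
		FILE *f = fopen("/proc/self/smaps_rollup", "r");

		if (!f)
			break;
		while (fgets(buf, sizeof(buf), f))
			;	/* a real test would parse and compare Rss */
		fclose(f);
	}
	stop = 1;
	pthread_join(t, NULL);
	return 0;
}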

Patch

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index e5a5f015ff03..f8d35f993fe5 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -970,12 +970,17 @@ static int show_smaps_rollup(struct seq_file *m, void *v)
 				break;
 
 			/* Case 1 and 2 above */
-			if (vma->vm_start >= last_vma_end)
+			if (vma->vm_start >= last_vma_end) {
+				smap_gather_stats(vma, &mss, 0);
+				last_vma_end = vma->vm_end;
 				continue;
+			}
 
 			/* Case 4 above */
-			if (vma->vm_end > last_vma_end)
+			if (vma->vm_end > last_vma_end) {
 				smap_gather_stats(vma, &mss, last_vma_end);
+				last_vma_end = vma->vm_end;
+			}
 		}
 	} for_each_vma(vmi, vma);