| Message ID | 20210203003134.2422308-2-surenb@google.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [1/2] mm: replace BUG_ON in vm_insert_page with a return of an error |
On Tue, Feb 02, 2021 at 04:31:34PM -0800, Suren Baghdasaryan wrote:
> Currently system heap maps its buffers with VM_PFNMAP flag using
> remap_pfn_range. This results in such buffers not being accounted
> for in PSS calculations because vm treats this memory as having no
> page structs. Without page structs there are no counters representing
> how many processes are mapping a page and therefore PSS calculation
> is impossible.
> Historically, ION driver used to map its buffers as VM_PFNMAP areas
> due to memory carveouts that did not have page structs [1]. That
> is not the case anymore and it seems there was desire to move away
> from remap_pfn_range [2].
> Dmabuf system heap design inherits this ION behavior and maps its
> pages using remap_pfn_range even though allocated pages are backed
> by page structs.
> Replace remap_pfn_range with vm_insert_page, following Laura's suggestion
> in [1]. This would allow correct PSS calculation for dmabufs.
>
> [1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io
> [2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html
> (sorry, could not find lore links for these discussions)
>
> Suggested-by: Laura Abbott <labbott@kernel.org>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Reviewed-by: Minchan Kim <minchan@kernel.org>

A note: This patch makes the dmabuf system heap accounted in PSS, so
if someone relies on the size, they will see the bloat. IIRC, there
was some debate whether PSS accounting for these buffers is correct
or not. If it'd be a problem, we need to discuss how to solve it
(maybe a vma->vm_flags hint and reintroducing remap_pfn_range so it
can be respected).
On Tue, Feb 2, 2021 at 5:39 PM Minchan Kim <minchan@kernel.org> wrote:
>
> On Tue, Feb 02, 2021 at 04:31:34PM -0800, Suren Baghdasaryan wrote:
> > Currently system heap maps its buffers with VM_PFNMAP flag using
> > remap_pfn_range. This results in such buffers not being accounted
> > for in PSS calculations because vm treats this memory as having no
> > page structs. Without page structs there are no counters representing
> > how many processes are mapping a page and therefore PSS calculation
> > is impossible.
> > Historically, ION driver used to map its buffers as VM_PFNMAP areas
> > due to memory carveouts that did not have page structs [1]. That
> > is not the case anymore and it seems there was desire to move away
> > from remap_pfn_range [2].
> > Dmabuf system heap design inherits this ION behavior and maps its
> > pages using remap_pfn_range even though allocated pages are backed
> > by page structs.
> > Replace remap_pfn_range with vm_insert_page, following Laura's suggestion
> > in [1]. This would allow correct PSS calculation for dmabufs.
> >
> > [1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io
> > [2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html
> > (sorry, could not find lore links for these discussions)
> >
> > Suggested-by: Laura Abbott <labbott@kernel.org>
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
>
> Reviewed-by: Minchan Kim <minchan@kernel.org>
>
> A note: This patch makes the dmabuf system heap accounted in PSS, so
> if someone relies on the size, they will see the bloat. IIRC, there
> was some debate whether PSS accounting for these buffers is correct
> or not. If it'd be a problem, we need to discuss how to solve it
> (maybe a vma->vm_flags hint and reintroducing remap_pfn_range so it
> can be respected).

I did not see debates about not including *mapped* dmabufs into PSS
calculation. I remember people were discussing how to account dmabufs
referred only by the FD, but that is a different discussion. If the
buffer is mapped into the address space of a process then IMHO
including it into PSS of that process is not controversial.
On Tue, Feb 2, 2021 at 4:31 PM Suren Baghdasaryan <surenb@google.com> wrote:
> Currently system heap maps its buffers with VM_PFNMAP flag using
> remap_pfn_range. This results in such buffers not being accounted
> for in PSS calculations because vm treats this memory as having no
> page structs. Without page structs there are no counters representing
> how many processes are mapping a page and therefore PSS calculation
> is impossible.
> Historically, ION driver used to map its buffers as VM_PFNMAP areas
> due to memory carveouts that did not have page structs [1]. That
> is not the case anymore and it seems there was desire to move away
> from remap_pfn_range [2].
> Dmabuf system heap design inherits this ION behavior and maps its
> pages using remap_pfn_range even though allocated pages are backed
> by page structs.
> Replace remap_pfn_range with vm_insert_page, following Laura's suggestion
> in [1]. This would allow correct PSS calculation for dmabufs.
>
> [1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io
> [2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html
> (sorry, could not find lore links for these discussions)
>
> Suggested-by: Laura Abbott <labbott@kernel.org>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

For consistency, do we need something similar for the cma heap as well?

thanks
-john
On Tue, Feb 2, 2021 at 6:07 PM John Stultz <john.stultz@linaro.org> wrote:
>
> On Tue, Feb 2, 2021 at 4:31 PM Suren Baghdasaryan <surenb@google.com> wrote:
> > Currently system heap maps its buffers with VM_PFNMAP flag using
> > remap_pfn_range. This results in such buffers not being accounted
> > for in PSS calculations because vm treats this memory as having no
> > page structs. Without page structs there are no counters representing
> > how many processes are mapping a page and therefore PSS calculation
> > is impossible.
> > Historically, ION driver used to map its buffers as VM_PFNMAP areas
> > due to memory carveouts that did not have page structs [1]. That
> > is not the case anymore and it seems there was desire to move away
> > from remap_pfn_range [2].
> > Dmabuf system heap design inherits this ION behavior and maps its
> > pages using remap_pfn_range even though allocated pages are backed
> > by page structs.
> > Replace remap_pfn_range with vm_insert_page, following Laura's suggestion
> > in [1]. This would allow correct PSS calculation for dmabufs.
> >
> > [1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io
> > [2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html
> > (sorry, could not find lore links for these discussions)
> >
> > Suggested-by: Laura Abbott <labbott@kernel.org>
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
>
> For consistency, do we need something similar for the cma heap as well?

Good question. Let me look closer into it.

> thanks
> -john
On 03.02.21 at 03:02, Suren Baghdasaryan wrote:
> On Tue, Feb 2, 2021 at 5:39 PM Minchan Kim <minchan@kernel.org> wrote:
>> On Tue, Feb 02, 2021 at 04:31:34PM -0800, Suren Baghdasaryan wrote:
>>> Currently system heap maps its buffers with VM_PFNMAP flag using
>>> remap_pfn_range. This results in such buffers not being accounted
>>> for in PSS calculations because vm treats this memory as having no
>>> page structs. Without page structs there are no counters representing
>>> how many processes are mapping a page and therefore PSS calculation
>>> is impossible.
>>> Historically, ION driver used to map its buffers as VM_PFNMAP areas
>>> due to memory carveouts that did not have page structs [1]. That
>>> is not the case anymore and it seems there was desire to move away
>>> from remap_pfn_range [2].
>>> Dmabuf system heap design inherits this ION behavior and maps its
>>> pages using remap_pfn_range even though allocated pages are backed
>>> by page structs.
>>> Replace remap_pfn_range with vm_insert_page, following Laura's suggestion
>>> in [1]. This would allow correct PSS calculation for dmabufs.
>>>
>>> [1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io
>>> [2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html
>>> (sorry, could not find lore links for these discussions)
>>>
>>> Suggested-by: Laura Abbott <labbott@kernel.org>
>>> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
>> Reviewed-by: Minchan Kim <minchan@kernel.org>
>>
>> A note: This patch makes the dmabuf system heap accounted in PSS, so
>> if someone relies on the size, they will see the bloat. IIRC, there
>> was some debate whether PSS accounting for these buffers is correct
>> or not. If it'd be a problem, we need to discuss how to solve it
>> (maybe a vma->vm_flags hint and reintroducing remap_pfn_range so it
>> can be respected).
> I did not see debates about not including *mapped* dmabufs into PSS
> calculation. I remember people were discussing how to account dmabufs
> referred only by the FD, but that is a different discussion. If the
> buffer is mapped into the address space of a process then IMHO
> including it into PSS of that process is not controversial.

Well, I think it is. And to be honest this doesn't look like a good
idea to me since it will eventually lead to double accounting of
system heap DMA-bufs.

As discussed multiple times it is illegal to use the struct page of a
DMA-buf. This case here is a bit special since it is the owner of the
pages which does that, but I'm not sure if this won't cause problems
elsewhere as well.

A more appropriate solution would be to hold processes accountable for
resources they have allocated through device drivers.

Regards,
Christian.
On Wed, Feb 3, 2021 at 12:06 AM Christian König <christian.koenig@amd.com> wrote:
>
> On 03.02.21 at 03:02, Suren Baghdasaryan wrote:
> > On Tue, Feb 2, 2021 at 5:39 PM Minchan Kim <minchan@kernel.org> wrote:
> >> On Tue, Feb 02, 2021 at 04:31:34PM -0800, Suren Baghdasaryan wrote:
> >>> Currently system heap maps its buffers with VM_PFNMAP flag using
> >>> remap_pfn_range. This results in such buffers not being accounted
> >>> for in PSS calculations because vm treats this memory as having no
> >>> page structs. Without page structs there are no counters representing
> >>> how many processes are mapping a page and therefore PSS calculation
> >>> is impossible.
> >>> Historically, ION driver used to map its buffers as VM_PFNMAP areas
> >>> due to memory carveouts that did not have page structs [1]. That
> >>> is not the case anymore and it seems there was desire to move away
> >>> from remap_pfn_range [2].
> >>> Dmabuf system heap design inherits this ION behavior and maps its
> >>> pages using remap_pfn_range even though allocated pages are backed
> >>> by page structs.
> >>> Replace remap_pfn_range with vm_insert_page, following Laura's suggestion
> >>> in [1]. This would allow correct PSS calculation for dmabufs.
> >>>
> >>> [1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io
> >>> [2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html
> >>> (sorry, could not find lore links for these discussions)
> >>>
> >>> Suggested-by: Laura Abbott <labbott@kernel.org>
> >>> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> >> Reviewed-by: Minchan Kim <minchan@kernel.org>
> >>
> >> A note: This patch makes the dmabuf system heap accounted in PSS, so
> >> if someone relies on the size, they will see the bloat. IIRC, there
> >> was some debate whether PSS accounting for these buffers is correct
> >> or not. If it'd be a problem, we need to discuss how to solve it
> >> (maybe a vma->vm_flags hint and reintroducing remap_pfn_range so it
> >> can be respected).
> > I did not see debates about not including *mapped* dmabufs into PSS
> > calculation. I remember people were discussing how to account dmabufs
> > referred only by the FD, but that is a different discussion. If the
> > buffer is mapped into the address space of a process then IMHO
> > including it into PSS of that process is not controversial.
>
> Well, I think it is. And to be honest this doesn't look like a good
> idea to me since it will eventually lead to double accounting of
> system heap DMA-bufs.

Thanks for the comment! Could you please expand on this double
accounting issue? Do you mean userspace could double account dmabufs
because it expects dmabufs not to be part of PSS, or is there some
in-kernel accounting mechanism that would be broken by this?

> As discussed multiple times it is illegal to use the struct page of a
> DMA-buf. This case here is a bit special since it is the owner of the
> pages which does that, but I'm not sure if this won't cause problems
> elsewhere as well.

I would be happy to keep things as they are, but calculating dmabuf
contribution to PSS without struct pages is extremely inefficient and
becomes a real pain when we consider the possibility of partial
mappings, when not the entire dmabuf is being mapped. Calculating this
would require parsing /proc/pid/maps for the process, finding dmabuf
mappings and the size of each one, then parsing /proc/pid/maps for ALL
processes in the system to see if the same dmabufs are used by other
processes, and only then calculating the PSS. I hope that explains the
desire to use the already existing struct pages to obtain PSS in a much
more efficient way.

> A more appropriate solution would be to hold processes accountable for
> resources they have allocated through device drivers.

Are you suggesting some new kernel mechanism to account resources
allocated by a process via a driver? If so, any details?

> Regards,
> Christian.
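To make the cost Suren describes concrete, here is a rough, hypothetical userspace sketch of that whole-system scan. It only sums mapping sizes; a true PSS would additionally need per-page sharing counts, which is exactly what struct pages provide. The "dmabuf" substring match is an assumption about how such mappings are labeled in the path column of /proc/<pid>/maps, not something established in this thread.

```c
/*
 * Hypothetical sketch of the whole-system scan described above.  The
 * "dmabuf" substring match is an assumption about how such mappings
 * appear in /proc/<pid>/maps; this also only sums mapping sizes --
 * true PSS would need per-page sharing counts across all processes.
 */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

static unsigned long long dmabuf_mapped_bytes(const char *pid)
{
	unsigned long long start, end, total = 0;
	char path[64], line[512];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%s/maps", pid);
	f = fopen(path, "r");
	if (!f)
		return 0;
	/* Each line: start-end perms offset dev inode path */
	while (fgets(line, sizeof(line), f))
		if (strstr(line, "dmabuf") &&	/* assumed label */
		    sscanf(line, "%llx-%llx", &start, &end) == 2)
			total += end - start;
	fclose(f);
	return total;
}

int main(void)
{
	DIR *proc = opendir("/proc");
	struct dirent *de;

	if (!proc)
		return 1;
	while ((de = readdir(proc))) {
		if (de->d_name[0] < '1' || de->d_name[0] > '9')
			continue;	/* skip non-PID entries */
		printf("%s: %llu bytes of dma-buf mappings\n",
		       de->d_name, dmabuf_mapped_bytes(de->d_name));
	}
	closedir(proc);
	return 0;
}
```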
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 17e0e9a68baf..4983f18cc2ce 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -203,8 +203,7 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
 		struct page *page = sg_page_iter_page(&piter);
 
-		ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
-				      vma->vm_page_prot);
+		ret = vm_insert_page(vma, addr, page);
 		if (ret)
 			return ret;
 		addr += PAGE_SIZE;
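As an illustration of what the change means for userspace (not part of the patch), one could allocate a system heap buffer, map it, and look for its contribution in /proc/self/smaps; before this patch the VMA is VM_PFNMAP and shows no Pss. The sketch below uses the standard dma-heap UAPI from linux/dma-heap.h; the device path follows kselftest conventions and error handling is abbreviated.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/dma-heap.h>

int main(void)
{
	struct dma_heap_allocation_data alloc = {
		.len = 4 * 4096,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	char line[256];
	FILE *f;
	void *p;
	int heap;

	heap = open("/dev/dma_heap/system", O_RDWR | O_CLOEXEC);
	if (heap < 0 || ioctl(heap, DMA_HEAP_IOCTL_ALLOC, &alloc) < 0)
		return 1;

	p = mmap(NULL, alloc.len, PROT_READ | PROT_WRITE, MAP_SHARED,
		 alloc.fd, 0);
	if (p == MAP_FAILED)
		return 1;
	memset(p, 0, alloc.len);	/* touch every page */

	/*
	 * With vm_insert_page the buffer's VMA is no longer VM_PFNMAP,
	 * so its pages now contribute to the Pss fields printed below.
	 */
	f = fopen("/proc/self/smaps", "r");
	while (f && fgets(line, sizeof(line), f))
		if (!strncmp(line, "Pss:", 4))
			fputs(line, stdout);
	return 0;
}
```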
Currently system heap maps its buffers with VM_PFNMAP flag using
remap_pfn_range. This results in such buffers not being accounted
for in PSS calculations because vm treats this memory as having no
page structs. Without page structs there are no counters representing
how many processes are mapping a page and therefore PSS calculation
is impossible.
Historically, ION driver used to map its buffers as VM_PFNMAP areas
due to memory carveouts that did not have page structs [1]. That
is not the case anymore and it seems there was desire to move away
from remap_pfn_range [2].
Dmabuf system heap design inherits this ION behavior and maps its
pages using remap_pfn_range even though allocated pages are backed
by page structs.
Replace remap_pfn_range with vm_insert_page, following Laura's suggestion
in [1]. This would allow correct PSS calculation for dmabufs.

[1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io
[2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html
(sorry, could not find lore links for these discussions)

Suggested-by: Laura Abbott <labbott@kernel.org>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
v1 posted at: https://lore.kernel.org/patchwork/patch/1372409/

changes in v2:
- removed VM_PFNMAP clearing part of the patch, per Minchan and Christoph
- created prerequisite patch to replace BUG_ON with WARN_ON_ONCE, per Christoph

 drivers/dma-buf/heaps/system_heap.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
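For background on the arithmetic the commit message leans on: PSS charges each mapped page proportionally to how many processes map it, which requires the per-page map count that only struct pages carry. A simplified model (loosely following the smaps accounting in fs/proc/task_mmu.c, not code from this patch):

```c
/*
 * Simplified model of the PSS arithmetic: a page mapped by N processes
 * contributes PAGE_SIZE / N to each of their PSS totals.  VM_PFNMAP
 * memory has no struct page and hence no map count, so no such
 * contribution can be computed for it.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL

static unsigned long pss_for_page(unsigned int mapcount)
{
	return mapcount ? PAGE_SIZE / mapcount : 0;
}

int main(void)
{
	/* A page shared by 4 processes adds 1024 bytes to each PSS. */
	printf("%lu\n", pss_for_page(4));
	return 0;
}
```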