
[RFC,00/19] mm: Introduce a cgroup to limit the amount of locked and pinned memory

Message ID cover.f52b9eb2792bccb8a9ecd6bc95055705cfe2ae03.1674538665.git-series.apopple@nvidia.com (mailing list archive)

Message

Alistair Popple Jan. 24, 2023, 5:42 a.m. UTC
Having large amounts of unmovable or unreclaimable memory in a system
can lead to system instability because it increases the likelihood of
encountering out-of-memory conditions. It is therefore desirable to
limit the amount of memory users can lock or pin.

From userspace such limits can be enforced by setting
RLIMIT_MEMLOCK. However, there is no standard method for drivers and
other in-kernel users to check and enforce this limit.

This has led to a large number of inconsistencies in how the limit is
enforced. For example, some drivers account against mm->locked_vm while
others use mm->pinned_vm or user->locked_vm. It is therefore possible
to have up to three times RLIMIT_MEMLOCK pinned.
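
To make the inconsistency concrete, here is a condensed illustration
(not copied from any single driver) of three accounting patterns that
exist in-tree today, all checked against the same RLIMIT_MEMLOCK:

#include <linux/atomic.h>
#include <linux/cred.h>
#include <linux/mm_types.h>
#include <linux/sched.h>
#include <linux/sched/user.h>

static void demo_inconsistent_accounting(unsigned long npages)
{
	/* Pattern 1: mm->locked_vm (e.g. some VFIO paths); real code
	 * updates this under mmap_lock. */
	current->mm->locked_vm += npages;

	/* Pattern 2: mm->pinned_vm (e.g. RDMA umem) */
	atomic64_add(npages, &current->mm->pinned_vm);

	/* Pattern 3: the per-user counter (e.g. io_uring, perf) */
	atomic_long_add(npages, &current_user()->locked_vm);
}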

Limiting pinned memory per-task also makes it easy for users to exceed
the limit. For example, memory pinned by a driver with pin_user_pages()
tends to remain pinned after fork. To deal with this and other issues
this series introduces a cgroup for tracking and limiting the number of
pages pinned or locked by tasks in the group.

However, the existing behaviour with regard to the rlimit needs to be
maintained, so the lesser of the two limits is enforced. Furthermore,
CAP_IPC_LOCK usually bypasses the rlimit, but this bypass is not
allowed for the cgroup limit.
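
As a rough sketch of how that combined check could look (illustrative
only -- pins_try_charge(), pins_uncharge() and current_pins() are
placeholder names, not the series' actual API):

#include <linux/capability.h>
#include <linux/mm.h>
#include <linux/sched/signal.h>

static int demo_account_pinned(struct mm_struct *mm, unsigned long npages)
{
	unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;

	/* The cgroup is always charged; CAP_IPC_LOCK does not bypass it. */
	if (pins_try_charge(current_pins(), npages))
		return -ENOMEM;

	/* The rlimit check keeps its existing CAP_IPC_LOCK escape hatch. */
	if (atomic64_add_return(npages, &mm->pinned_vm) > limit &&
	    !capable(CAP_IPC_LOCK)) {
		atomic64_sub(npages, &mm->pinned_vm);
		pins_uncharge(current_pins(), npages);
		return -ENOMEM;
	}

	return 0;
}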

The first part of this series converts existing drivers which
open-code the use of locked_vm/pinned_vm over to a common interface
which manages the refcounts of the associated task/mm/user
structs. This ensures page accounting is consistent and makes it
easier to add charging of the cgroup.
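
For reference, the rough shape such an interface might take is shown
below -- struct vm_account and vm_account_pinned() are names taken from
the patch titles, while the fields and the remaining helper names are
assumptions rather than the exact code in the series:

struct vm_account {
	struct task_struct *task;	/* whose cgroup gets charged */
	struct mm_struct *mm;		/* whose pinned_vm/locked_vm is updated */
	struct user_struct *user;	/* for drivers that charge per-user */
	unsigned int flags;		/* e.g. select locked_vm vs pinned_vm */
};

/* Take references on the task/mm/user so accounting can outlive the caller. */
void vm_account_init(struct vm_account *vm_account, struct task_struct *task,
		     struct user_struct *user, unsigned int flags);

/* Charge/uncharge npages, enforcing RLIMIT_MEMLOCK (and later the cgroup). */
int vm_account_pinned(struct vm_account *vm_account, unsigned long npages);
void vm_unaccount_pinned(struct vm_account *vm_account, unsigned long npages);

/* Drop the references taken by vm_account_init(). */
void vm_account_release(struct vm_account *vm_account);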

The second part of the series adds the cgroup and converts core mm
code such as mlock over to charging the cgroup before finally
introducing some selftests.
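
To illustrate the intended userspace-visible behaviour, here is a
minimal example in the spirit of the selftest. It assumes the calling
process has already been moved into /sys/fs/cgroup/test and that the
controller exposes a "pins.max" file -- both assumptions based on the
controller name; see the selftest patch for the real interface:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/sys/fs/cgroup/test/pins.max", O_WRONLY);
	void *mem;

	if (fd < 0)
		return 1;
	if (write(fd, "16", 2) != 2)	/* allow at most 16 pinned/locked pages */
		return 1;
	close(fd);

	/* 32 pages (assuming 4KiB pages), twice the limit set above */
	mem = mmap(NULL, 32 * 4096, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (mem == MAP_FAILED)
		return 1;

	/* mlock() now charges the cgroup, so this should fail with ENOMEM. */
	if (mlock(mem, 32 * 4096) == 0)
		printf("unexpected: mlock succeeded despite the limit\n");
	else
		perror("mlock");
	return 0;
}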

As I don't have access to systems with all the various devices I
haven't been able to test all driver changes. Any help there would be
appreciated.

Alistair Popple (19):
  mm: Introduce vm_account
  drivers/vhost: Convert to use vm_account
  drivers/vdpa: Convert vdpa to use the new vm_structure
  infiniband/umem: Convert to use vm_account
  RDMA/siw: Convert to use vm_account
  RDMA/usnic: convert to use vm_account
  vfio/type1: Charge pinned pages to pinned_vm instead of locked_vm
  vfio/spapr_tce: Convert accounting to pinned_vm
  io_uring: convert to use vm_account
  net: skb: Switch to using vm_account
  xdp: convert to use vm_account
  kvm/book3s_64_vio: Convert account_locked_vm() to vm_account_pinned()
  fpga: dfl: afu: convert to use vm_account
  mm: Introduce a cgroup for pinned memory
  mm/util: Extend vm_account to charge pages against the pin cgroup
  mm/util: Refactor account_locked_vm
  mm: Convert mmap and mlock to use account_locked_vm
  mm/mmap: Charge locked memory to pins cgroup
  selftests/vm: Add pins-cgroup selftest for mlock/mmap

 MAINTAINERS                              |   8 +-
 arch/powerpc/kvm/book3s_64_vio.c         |  10 +-
 arch/powerpc/mm/book3s64/iommu_api.c     |  29 +--
 drivers/fpga/dfl-afu-dma-region.c        |  11 +-
 drivers/fpga/dfl-afu.h                   |   1 +-
 drivers/infiniband/core/umem.c           |  16 +-
 drivers/infiniband/core/umem_odp.c       |   6 +-
 drivers/infiniband/hw/usnic/usnic_uiom.c |  13 +-
 drivers/infiniband/hw/usnic/usnic_uiom.h |   1 +-
 drivers/infiniband/sw/siw/siw.h          |   2 +-
 drivers/infiniband/sw/siw/siw_mem.c      |  20 +--
 drivers/infiniband/sw/siw/siw_verbs.c    |  15 +-
 drivers/vdpa/vdpa_user/vduse_dev.c       |  20 +--
 drivers/vfio/vfio_iommu_spapr_tce.c      |  15 +-
 drivers/vfio/vfio_iommu_type1.c          |  59 +----
 drivers/vhost/vdpa.c                     |   9 +-
 drivers/vhost/vhost.c                    |   2 +-
 drivers/vhost/vhost.h                    |   1 +-
 include/linux/cgroup.h                   |  20 ++-
 include/linux/cgroup_subsys.h            |   4 +-
 include/linux/io_uring_types.h           |   3 +-
 include/linux/kvm_host.h                 |   1 +-
 include/linux/mm.h                       |   5 +-
 include/linux/mm_types.h                 |  88 ++++++++-
 include/linux/skbuff.h                   |   6 +-
 include/net/sock.h                       |   2 +-
 include/net/xdp_sock.h                   |   2 +-
 include/rdma/ib_umem.h                   |   1 +-
 io_uring/io_uring.c                      |  20 +--
 io_uring/notif.c                         |   4 +-
 io_uring/notif.h                         |  10 +-
 io_uring/rsrc.c                          |  38 +---
 io_uring/rsrc.h                          |   9 +-
 mm/Kconfig                               |  11 +-
 mm/Makefile                              |   1 +-
 mm/internal.h                            |   2 +-
 mm/mlock.c                               |  76 +------
 mm/mmap.c                                |  76 +++----
 mm/mremap.c                              |  54 +++--
 mm/pins_cgroup.c                         | 273 ++++++++++++++++++++++++-
 mm/secretmem.c                           |   6 +-
 mm/util.c                                | 196 +++++++++++++++--
 net/core/skbuff.c                        |  47 +---
 net/rds/message.c                        |   9 +-
 net/xdp/xdp_umem.c                       |  38 +--
 tools/testing/selftests/vm/Makefile      |   1 +-
 tools/testing/selftests/vm/pins-cgroup.c | 271 ++++++++++++++++++++++++-
 virt/kvm/kvm_main.c                      |   3 +-
 48 files changed, 1114 insertions(+), 401 deletions(-)
 create mode 100644 mm/pins_cgroup.c
 create mode 100644 tools/testing/selftests/vm/pins-cgroup.c

base-commit: 2241ab53cbb5cdb08a6b2d4688feb13971058f65

Comments

Yosry Ahmed Jan. 24, 2023, 6:26 p.m. UTC | #1
On Mon, Jan 23, 2023 at 9:43 PM Alistair Popple <apopple@nvidia.com> wrote:
>
> Having large amounts of unmovable or unreclaimable memory in a system
> can lead to system instability due to increasing the likelihood of
> encountering out-of-memory conditions. Therefore it is desirable to
> limit the amount of memory users can lock or pin.
>
> From userspace such limits can be enforced by setting
> RLIMIT_MEMLOCK. However there is no standard method that drivers and
> other in-kernel users can use to check and enforce this limit.
>
> This has lead to a large number of inconsistencies in how limits are
> enforced. For example some drivers will use mm->locked_mm while others
> will use mm->pinned_mm or user->locked_mm. It is therefore possible to
> have up to three times RLIMIT_MEMLOCKED pinned.
>
> Having pinned memory limited per-task also makes it easy for users to
> exceed the limit. For example drivers that pin memory with
> pin_user_pages() it tends to remain pinned after fork. To deal with
> this and other issues this series introduces a cgroup for tracking and
> limiting the number of pages pinned or locked by tasks in the group.
>
> However the existing behaviour with regards to the rlimit needs to be
> maintained. Therefore the lesser of the two limits is
> enforced. Furthermore having CAP_IPC_LOCK usually bypasses the rlimit,
> but this bypass is not allowed for the cgroup.
>
> The first part of this series converts existing drivers which
> open-code the use of locked_mm/pinned_mm over to a common interface
> which manages the refcounts of the associated task/mm/user
> structs. This ensures accounting of pages is consistent and makes it
> easier to add charging of the cgroup.
>
> The second part of the series adds the cgroup and converts core mm
> code such as mlock over to charging the cgroup before finally
> introducing some selftests.


I didn't go through the entire series, so apologies if this was
mentioned somewhere, but do you mind elaborating on why this is added
as a separate cgroup controller rather than an extension of the memory
cgroup controller?

>
>
> As I don't have access to systems with all the various devices I
> haven't been able to test all driver changes. Any help there would be
> appreciated.
>
> Alistair Popple (19):
>   mm: Introduce vm_account
>   drivers/vhost: Convert to use vm_account
>   drivers/vdpa: Convert vdpa to use the new vm_structure
>   infiniband/umem: Convert to use vm_account
>   RMDA/siw: Convert to use vm_account
>   RDMA/usnic: convert to use vm_account
>   vfio/type1: Charge pinned pages to pinned_vm instead of locked_vm
>   vfio/spapr_tce: Convert accounting to pinned_vm
>   io_uring: convert to use vm_account
>   net: skb: Switch to using vm_account
>   xdp: convert to use vm_account
>   kvm/book3s_64_vio: Convert account_locked_vm() to vm_account_pinned()
>   fpga: dfl: afu: convert to use vm_account
>   mm: Introduce a cgroup for pinned memory
>   mm/util: Extend vm_account to charge pages against the pin cgroup
>   mm/util: Refactor account_locked_vm
>   mm: Convert mmap and mlock to use account_locked_vm
>   mm/mmap: Charge locked memory to pins cgroup
>   selftests/vm: Add pins-cgroup selftest for mlock/mmap
>
>  MAINTAINERS                              |   8 +-
>  arch/powerpc/kvm/book3s_64_vio.c         |  10 +-
>  arch/powerpc/mm/book3s64/iommu_api.c     |  29 +--
>  drivers/fpga/dfl-afu-dma-region.c        |  11 +-
>  drivers/fpga/dfl-afu.h                   |   1 +-
>  drivers/infiniband/core/umem.c           |  16 +-
>  drivers/infiniband/core/umem_odp.c       |   6 +-
>  drivers/infiniband/hw/usnic/usnic_uiom.c |  13 +-
>  drivers/infiniband/hw/usnic/usnic_uiom.h |   1 +-
>  drivers/infiniband/sw/siw/siw.h          |   2 +-
>  drivers/infiniband/sw/siw/siw_mem.c      |  20 +--
>  drivers/infiniband/sw/siw/siw_verbs.c    |  15 +-
>  drivers/vdpa/vdpa_user/vduse_dev.c       |  20 +--
>  drivers/vfio/vfio_iommu_spapr_tce.c      |  15 +-
>  drivers/vfio/vfio_iommu_type1.c          |  59 +----
>  drivers/vhost/vdpa.c                     |   9 +-
>  drivers/vhost/vhost.c                    |   2 +-
>  drivers/vhost/vhost.h                    |   1 +-
>  include/linux/cgroup.h                   |  20 ++-
>  include/linux/cgroup_subsys.h            |   4 +-
>  include/linux/io_uring_types.h           |   3 +-
>  include/linux/kvm_host.h                 |   1 +-
>  include/linux/mm.h                       |   5 +-
>  include/linux/mm_types.h                 |  88 ++++++++-
>  include/linux/skbuff.h                   |   6 +-
>  include/net/sock.h                       |   2 +-
>  include/net/xdp_sock.h                   |   2 +-
>  include/rdma/ib_umem.h                   |   1 +-
>  io_uring/io_uring.c                      |  20 +--
>  io_uring/notif.c                         |   4 +-
>  io_uring/notif.h                         |  10 +-
>  io_uring/rsrc.c                          |  38 +---
>  io_uring/rsrc.h                          |   9 +-
>  mm/Kconfig                               |  11 +-
>  mm/Makefile                              |   1 +-
>  mm/internal.h                            |   2 +-
>  mm/mlock.c                               |  76 +------
>  mm/mmap.c                                |  76 +++----
>  mm/mremap.c                              |  54 +++--
>  mm/pins_cgroup.c                         | 273 ++++++++++++++++++++++++-
>  mm/secretmem.c                           |   6 +-
>  mm/util.c                                | 196 +++++++++++++++--
>  net/core/skbuff.c                        |  47 +---
>  net/rds/message.c                        |   9 +-
>  net/xdp/xdp_umem.c                       |  38 +--
>  tools/testing/selftests/vm/Makefile      |   1 +-
>  tools/testing/selftests/vm/pins-cgroup.c | 271 ++++++++++++++++++++++++-
>  virt/kvm/kvm_main.c                      |   3 +-
>  48 files changed, 1114 insertions(+), 401 deletions(-)
>  create mode 100644 mm/pins_cgroup.c
>  create mode 100644 tools/testing/selftests/vm/pins-cgroup.c
>
> base-commit: 2241ab53cbb5cdb08a6b2d4688feb13971058f65
> --
> git-series 0.9.1
>
Jason Gunthorpe Jan. 24, 2023, 8:12 p.m. UTC | #2
On Tue, Jan 24, 2023 at 04:42:29PM +1100, Alistair Popple wrote:
> Having large amounts of unmovable or unreclaimable memory in a system
> can lead to system instability due to increasing the likelihood of
> encountering out-of-memory conditions. Therefore it is desirable to
> limit the amount of memory users can lock or pin.
> 
> From userspace such limits can be enforced by setting
> RLIMIT_MEMLOCK. However there is no standard method that drivers and
> other in-kernel users can use to check and enforce this limit.
> 
> This has lead to a large number of inconsistencies in how limits are
> enforced. For example some drivers will use mm->locked_mm while others
> will use mm->pinned_mm or user->locked_mm. It is therefore possible to
> have up to three times RLIMIT_MEMLOCKED pinned.
> 
> Having pinned memory limited per-task also makes it easy for users to
> exceed the limit. For example drivers that pin memory with
> pin_user_pages() it tends to remain pinned after fork. To deal with
> this and other issues this series introduces a cgroup for tracking and
> limiting the number of pages pinned or locked by tasks in the group.
> 
> However the existing behaviour with regards to the rlimit needs to be
> maintained. Therefore the lesser of the two limits is
> enforced. Furthermore having CAP_IPC_LOCK usually bypasses the rlimit,
> but this bypass is not allowed for the cgroup.
> 
> The first part of this series converts existing drivers which
> open-code the use of locked_mm/pinned_mm over to a common interface
> which manages the refcounts of the associated task/mm/user
> structs. This ensures accounting of pages is consistent and makes it
> easier to add charging of the cgroup.
> 
> The second part of the series adds the cgroup and converts core mm
> code such as mlock over to charging the cgroup before finally
> introducing some selftests.
>
> As I don't have access to systems with all the various devices I
> haven't been able to test all driver changes. Any help there would be
> appreciated.

I'm excited by this series, thanks for making it.

The pin accounting has been a long standing problem and cgroups will
really help!

Jason
Alistair Popple Jan. 31, 2023, 12:54 a.m. UTC | #3
Yosry Ahmed <yosryahmed@google.com> writes:

> On Mon, Jan 23, 2023 at 9:43 PM Alistair Popple <apopple@nvidia.com> wrote:
>>
>> Having large amounts of unmovable or unreclaimable memory in a system
>> can lead to system instability due to increasing the likelihood of
>> encountering out-of-memory conditions. Therefore it is desirable to
>> limit the amount of memory users can lock or pin.
>>
>> From userspace such limits can be enforced by setting
>> RLIMIT_MEMLOCK. However there is no standard method that drivers and
>> other in-kernel users can use to check and enforce this limit.
>>
>> This has lead to a large number of inconsistencies in how limits are
>> enforced. For example some drivers will use mm->locked_mm while others
>> will use mm->pinned_mm or user->locked_mm. It is therefore possible to
>> have up to three times RLIMIT_MEMLOCKED pinned.
>>
>> Having pinned memory limited per-task also makes it easy for users to
>> exceed the limit. For example drivers that pin memory with
>> pin_user_pages() it tends to remain pinned after fork. To deal with
>> this and other issues this series introduces a cgroup for tracking and
>> limiting the number of pages pinned or locked by tasks in the group.
>>
>> However the existing behaviour with regards to the rlimit needs to be
>> maintained. Therefore the lesser of the two limits is
>> enforced. Furthermore having CAP_IPC_LOCK usually bypasses the rlimit,
>> but this bypass is not allowed for the cgroup.
>>
>> The first part of this series converts existing drivers which
>> open-code the use of locked_mm/pinned_mm over to a common interface
>> which manages the refcounts of the associated task/mm/user
>> structs. This ensures accounting of pages is consistent and makes it
>> easier to add charging of the cgroup.
>>
>> The second part of the series adds the cgroup and converts core mm
>> code such as mlock over to charging the cgroup before finally
>> introducing some selftests.
>
>
> I didn't go through the entire series, so apologies if this was
> mentioned somewhere, but do you mind elaborating on why this is added
> as a separate cgroup controller rather than an extension of the memory
> cgroup controller?

One of my early prototypes actually did add this to the memcg
controller. However, pinned pages fall under their own limit, and we
wanted to always account pages to the cgroup of the task using the
driver rather than, say, folio_memcg(). So adding it to memcg didn't seem
to have much benefit as we didn't end up using any of the infrastructure
provided by memcg. Hence I thought it was clearer to just add it as its
own controller.

 - Alistair
 
>>
>>
>> As I don't have access to systems with all the various devices I
>> haven't been able to test all driver changes. Any help there would be
>> appreciated.
>>
>> Alistair Popple (19):
>>   mm: Introduce vm_account
>>   drivers/vhost: Convert to use vm_account
>>   drivers/vdpa: Convert vdpa to use the new vm_structure
>>   infiniband/umem: Convert to use vm_account
>>   RMDA/siw: Convert to use vm_account
>>   RDMA/usnic: convert to use vm_account
>>   vfio/type1: Charge pinned pages to pinned_vm instead of locked_vm
>>   vfio/spapr_tce: Convert accounting to pinned_vm
>>   io_uring: convert to use vm_account
>>   net: skb: Switch to using vm_account
>>   xdp: convert to use vm_account
>>   kvm/book3s_64_vio: Convert account_locked_vm() to vm_account_pinned()
>>   fpga: dfl: afu: convert to use vm_account
>>   mm: Introduce a cgroup for pinned memory
>>   mm/util: Extend vm_account to charge pages against the pin cgroup
>>   mm/util: Refactor account_locked_vm
>>   mm: Convert mmap and mlock to use account_locked_vm
>>   mm/mmap: Charge locked memory to pins cgroup
>>   selftests/vm: Add pins-cgroup selftest for mlock/mmap
>>
>>  MAINTAINERS                              |   8 +-
>>  arch/powerpc/kvm/book3s_64_vio.c         |  10 +-
>>  arch/powerpc/mm/book3s64/iommu_api.c     |  29 +--
>>  drivers/fpga/dfl-afu-dma-region.c        |  11 +-
>>  drivers/fpga/dfl-afu.h                   |   1 +-
>>  drivers/infiniband/core/umem.c           |  16 +-
>>  drivers/infiniband/core/umem_odp.c       |   6 +-
>>  drivers/infiniband/hw/usnic/usnic_uiom.c |  13 +-
>>  drivers/infiniband/hw/usnic/usnic_uiom.h |   1 +-
>>  drivers/infiniband/sw/siw/siw.h          |   2 +-
>>  drivers/infiniband/sw/siw/siw_mem.c      |  20 +--
>>  drivers/infiniband/sw/siw/siw_verbs.c    |  15 +-
>>  drivers/vdpa/vdpa_user/vduse_dev.c       |  20 +--
>>  drivers/vfio/vfio_iommu_spapr_tce.c      |  15 +-
>>  drivers/vfio/vfio_iommu_type1.c          |  59 +----
>>  drivers/vhost/vdpa.c                     |   9 +-
>>  drivers/vhost/vhost.c                    |   2 +-
>>  drivers/vhost/vhost.h                    |   1 +-
>>  include/linux/cgroup.h                   |  20 ++-
>>  include/linux/cgroup_subsys.h            |   4 +-
>>  include/linux/io_uring_types.h           |   3 +-
>>  include/linux/kvm_host.h                 |   1 +-
>>  include/linux/mm.h                       |   5 +-
>>  include/linux/mm_types.h                 |  88 ++++++++-
>>  include/linux/skbuff.h                   |   6 +-
>>  include/net/sock.h                       |   2 +-
>>  include/net/xdp_sock.h                   |   2 +-
>>  include/rdma/ib_umem.h                   |   1 +-
>>  io_uring/io_uring.c                      |  20 +--
>>  io_uring/notif.c                         |   4 +-
>>  io_uring/notif.h                         |  10 +-
>>  io_uring/rsrc.c                          |  38 +---
>>  io_uring/rsrc.h                          |   9 +-
>>  mm/Kconfig                               |  11 +-
>>  mm/Makefile                              |   1 +-
>>  mm/internal.h                            |   2 +-
>>  mm/mlock.c                               |  76 +------
>>  mm/mmap.c                                |  76 +++----
>>  mm/mremap.c                              |  54 +++--
>>  mm/pins_cgroup.c                         | 273 ++++++++++++++++++++++++-
>>  mm/secretmem.c                           |   6 +-
>>  mm/util.c                                | 196 +++++++++++++++--
>>  net/core/skbuff.c                        |  47 +---
>>  net/rds/message.c                        |   9 +-
>>  net/xdp/xdp_umem.c                       |  38 +--
>>  tools/testing/selftests/vm/Makefile      |   1 +-
>>  tools/testing/selftests/vm/pins-cgroup.c | 271 ++++++++++++++++++++++++-
>>  virt/kvm/kvm_main.c                      |   3 +-
>>  48 files changed, 1114 insertions(+), 401 deletions(-)
>>  create mode 100644 mm/pins_cgroup.c
>>  create mode 100644 tools/testing/selftests/vm/pins-cgroup.c
>>
>> base-commit: 2241ab53cbb5cdb08a6b2d4688feb13971058f65
>> --
>> git-series 0.9.1
>>
Yosry Ahmed Jan. 31, 2023, 5:14 a.m. UTC | #4
On Mon, Jan 30, 2023 at 5:07 PM Alistair Popple <apopple@nvidia.com> wrote:
>
>
> Yosry Ahmed <yosryahmed@google.com> writes:
>
> > On Mon, Jan 23, 2023 at 9:43 PM Alistair Popple <apopple@nvidia.com> wrote:
> >>
> >> Having large amounts of unmovable or unreclaimable memory in a system
> >> can lead to system instability due to increasing the likelihood of
> >> encountering out-of-memory conditions. Therefore it is desirable to
> >> limit the amount of memory users can lock or pin.
> >>
> >> From userspace such limits can be enforced by setting
> >> RLIMIT_MEMLOCK. However there is no standard method that drivers and
> >> other in-kernel users can use to check and enforce this limit.
> >>
> >> This has lead to a large number of inconsistencies in how limits are
> >> enforced. For example some drivers will use mm->locked_mm while others
> >> will use mm->pinned_mm or user->locked_mm. It is therefore possible to
> >> have up to three times RLIMIT_MEMLOCKED pinned.
> >>
> >> Having pinned memory limited per-task also makes it easy for users to
> >> exceed the limit. For example drivers that pin memory with
> >> pin_user_pages() it tends to remain pinned after fork. To deal with
> >> this and other issues this series introduces a cgroup for tracking and
> >> limiting the number of pages pinned or locked by tasks in the group.
> >>
> >> However the existing behaviour with regards to the rlimit needs to be
> >> maintained. Therefore the lesser of the two limits is
> >> enforced. Furthermore having CAP_IPC_LOCK usually bypasses the rlimit,
> >> but this bypass is not allowed for the cgroup.
> >>
> >> The first part of this series converts existing drivers which
> >> open-code the use of locked_mm/pinned_mm over to a common interface
> >> which manages the refcounts of the associated task/mm/user
> >> structs. This ensures accounting of pages is consistent and makes it
> >> easier to add charging of the cgroup.
> >>
> >> The second part of the series adds the cgroup and converts core mm
> >> code such as mlock over to charging the cgroup before finally
> >> introducing some selftests.
> >
> >
> > I didn't go through the entire series, so apologies if this was
> > mentioned somewhere, but do you mind elaborating on why this is added
> > as a separate cgroup controller rather than an extension of the memory
> > cgroup controller?
>
> One of my early prototypes actually did add this to the memcg
> controller. However pinned pages fall under their own limit, and we
> wanted to always account pages to the cgroup of the task using the
> driver rather than say folio_memcg(). So adding it to memcg didn't seem
> to have much benefit as we didn't end up using any of the infrastructure
> provided by memcg. Hence I thought it was clearer to just add it as it's
> own controller.

To clarify, you account and limit pinned memory based on the cgroup of
the process pinning the pages, not based on the cgroup that the pages
are actually charged to? Is my understanding correct?

IOW, you limit the amount of memory that processes in a cgroup can
pin, not the amount of memory charged to a cgroup that can be pinned?

>
>  - Alistair
>
> >>
> >>
> >> As I don't have access to systems with all the various devices I
> >> haven't been able to test all driver changes. Any help there would be
> >> appreciated.
> >>
> >> Alistair Popple (19):
> >>   mm: Introduce vm_account
> >>   drivers/vhost: Convert to use vm_account
> >>   drivers/vdpa: Convert vdpa to use the new vm_structure
> >>   infiniband/umem: Convert to use vm_account
> >>   RMDA/siw: Convert to use vm_account
> >>   RDMA/usnic: convert to use vm_account
> >>   vfio/type1: Charge pinned pages to pinned_vm instead of locked_vm
> >>   vfio/spapr_tce: Convert accounting to pinned_vm
> >>   io_uring: convert to use vm_account
> >>   net: skb: Switch to using vm_account
> >>   xdp: convert to use vm_account
> >>   kvm/book3s_64_vio: Convert account_locked_vm() to vm_account_pinned()
> >>   fpga: dfl: afu: convert to use vm_account
> >>   mm: Introduce a cgroup for pinned memory
> >>   mm/util: Extend vm_account to charge pages against the pin cgroup
> >>   mm/util: Refactor account_locked_vm
> >>   mm: Convert mmap and mlock to use account_locked_vm
> >>   mm/mmap: Charge locked memory to pins cgroup
> >>   selftests/vm: Add pins-cgroup selftest for mlock/mmap
> >>
> >>  MAINTAINERS                              |   8 +-
> >>  arch/powerpc/kvm/book3s_64_vio.c         |  10 +-
> >>  arch/powerpc/mm/book3s64/iommu_api.c     |  29 +--
> >>  drivers/fpga/dfl-afu-dma-region.c        |  11 +-
> >>  drivers/fpga/dfl-afu.h                   |   1 +-
> >>  drivers/infiniband/core/umem.c           |  16 +-
> >>  drivers/infiniband/core/umem_odp.c       |   6 +-
> >>  drivers/infiniband/hw/usnic/usnic_uiom.c |  13 +-
> >>  drivers/infiniband/hw/usnic/usnic_uiom.h |   1 +-
> >>  drivers/infiniband/sw/siw/siw.h          |   2 +-
> >>  drivers/infiniband/sw/siw/siw_mem.c      |  20 +--
> >>  drivers/infiniband/sw/siw/siw_verbs.c    |  15 +-
> >>  drivers/vdpa/vdpa_user/vduse_dev.c       |  20 +--
> >>  drivers/vfio/vfio_iommu_spapr_tce.c      |  15 +-
> >>  drivers/vfio/vfio_iommu_type1.c          |  59 +----
> >>  drivers/vhost/vdpa.c                     |   9 +-
> >>  drivers/vhost/vhost.c                    |   2 +-
> >>  drivers/vhost/vhost.h                    |   1 +-
> >>  include/linux/cgroup.h                   |  20 ++-
> >>  include/linux/cgroup_subsys.h            |   4 +-
> >>  include/linux/io_uring_types.h           |   3 +-
> >>  include/linux/kvm_host.h                 |   1 +-
> >>  include/linux/mm.h                       |   5 +-
> >>  include/linux/mm_types.h                 |  88 ++++++++-
> >>  include/linux/skbuff.h                   |   6 +-
> >>  include/net/sock.h                       |   2 +-
> >>  include/net/xdp_sock.h                   |   2 +-
> >>  include/rdma/ib_umem.h                   |   1 +-
> >>  io_uring/io_uring.c                      |  20 +--
> >>  io_uring/notif.c                         |   4 +-
> >>  io_uring/notif.h                         |  10 +-
> >>  io_uring/rsrc.c                          |  38 +---
> >>  io_uring/rsrc.h                          |   9 +-
> >>  mm/Kconfig                               |  11 +-
> >>  mm/Makefile                              |   1 +-
> >>  mm/internal.h                            |   2 +-
> >>  mm/mlock.c                               |  76 +------
> >>  mm/mmap.c                                |  76 +++----
> >>  mm/mremap.c                              |  54 +++--
> >>  mm/pins_cgroup.c                         | 273 ++++++++++++++++++++++++-
> >>  mm/secretmem.c                           |   6 +-
> >>  mm/util.c                                | 196 +++++++++++++++--
> >>  net/core/skbuff.c                        |  47 +---
> >>  net/rds/message.c                        |   9 +-
> >>  net/xdp/xdp_umem.c                       |  38 +--
> >>  tools/testing/selftests/vm/Makefile      |   1 +-
> >>  tools/testing/selftests/vm/pins-cgroup.c | 271 ++++++++++++++++++++++++-
> >>  virt/kvm/kvm_main.c                      |   3 +-
> >>  48 files changed, 1114 insertions(+), 401 deletions(-)
> >>  create mode 100644 mm/pins_cgroup.c
> >>  create mode 100644 tools/testing/selftests/vm/pins-cgroup.c
> >>
> >> base-commit: 2241ab53cbb5cdb08a6b2d4688feb13971058f65
> >> --
> >> git-series 0.9.1
> >>
>
Alistair Popple Jan. 31, 2023, 11:22 a.m. UTC | #5
Yosry Ahmed <yosryahmed@google.com> writes:

> On Mon, Jan 30, 2023 at 5:07 PM Alistair Popple <apopple@nvidia.com> wrote:
>>
>>
>> Yosry Ahmed <yosryahmed@google.com> writes:
>>
>> > On Mon, Jan 23, 2023 at 9:43 PM Alistair Popple <apopple@nvidia.com> wrote:
>> >>
>> >> Having large amounts of unmovable or unreclaimable memory in a system
>> >> can lead to system instability due to increasing the likelihood of
>> >> encountering out-of-memory conditions. Therefore it is desirable to
>> >> limit the amount of memory users can lock or pin.
>> >>
>> >> From userspace such limits can be enforced by setting
>> >> RLIMIT_MEMLOCK. However there is no standard method that drivers and
>> >> other in-kernel users can use to check and enforce this limit.
>> >>
>> >> This has lead to a large number of inconsistencies in how limits are
>> >> enforced. For example some drivers will use mm->locked_mm while others
>> >> will use mm->pinned_mm or user->locked_mm. It is therefore possible to
>> >> have up to three times RLIMIT_MEMLOCKED pinned.
>> >>
>> >> Having pinned memory limited per-task also makes it easy for users to
>> >> exceed the limit. For example drivers that pin memory with
>> >> pin_user_pages() it tends to remain pinned after fork. To deal with
>> >> this and other issues this series introduces a cgroup for tracking and
>> >> limiting the number of pages pinned or locked by tasks in the group.
>> >>
>> >> However the existing behaviour with regards to the rlimit needs to be
>> >> maintained. Therefore the lesser of the two limits is
>> >> enforced. Furthermore having CAP_IPC_LOCK usually bypasses the rlimit,
>> >> but this bypass is not allowed for the cgroup.
>> >>
>> >> The first part of this series converts existing drivers which
>> >> open-code the use of locked_mm/pinned_mm over to a common interface
>> >> which manages the refcounts of the associated task/mm/user
>> >> structs. This ensures accounting of pages is consistent and makes it
>> >> easier to add charging of the cgroup.
>> >>
>> >> The second part of the series adds the cgroup and converts core mm
>> >> code such as mlock over to charging the cgroup before finally
>> >> introducing some selftests.
>> >
>> >
>> > I didn't go through the entire series, so apologies if this was
>> > mentioned somewhere, but do you mind elaborating on why this is added
>> > as a separate cgroup controller rather than an extension of the memory
>> > cgroup controller?
>>
>> One of my early prototypes actually did add this to the memcg
>> controller. However pinned pages fall under their own limit, and we
>> wanted to always account pages to the cgroup of the task using the
>> driver rather than say folio_memcg(). So adding it to memcg didn't seem
>> to have much benefit as we didn't end up using any of the infrastructure
>> provided by memcg. Hence I thought it was clearer to just add it as it's
>> own controller.
>
> To clarify, you account and limit pinned memory based on the cgroup of
> the process pinning the pages, not based on the cgroup that the pages
> are actually charged to? Is my understanding correct?

That's correct.

> IOW, you limit the amount of memory that processes in a cgroup can
> pin, not the amount of memory charged to a cgroup that can be pinned?

Right, that's a good clarification which I might steal and add to the
cover letter.

>>
>>  - Alistair
>>
>> >>
>> >>
>> >> As I don't have access to systems with all the various devices I
>> >> haven't been able to test all driver changes. Any help there would be
>> >> appreciated.
>> >>
>> >> Alistair Popple (19):
>> >>   mm: Introduce vm_account
>> >>   drivers/vhost: Convert to use vm_account
>> >>   drivers/vdpa: Convert vdpa to use the new vm_structure
>> >>   infiniband/umem: Convert to use vm_account
>> >>   RMDA/siw: Convert to use vm_account
>> >>   RDMA/usnic: convert to use vm_account
>> >>   vfio/type1: Charge pinned pages to pinned_vm instead of locked_vm
>> >>   vfio/spapr_tce: Convert accounting to pinned_vm
>> >>   io_uring: convert to use vm_account
>> >>   net: skb: Switch to using vm_account
>> >>   xdp: convert to use vm_account
>> >>   kvm/book3s_64_vio: Convert account_locked_vm() to vm_account_pinned()
>> >>   fpga: dfl: afu: convert to use vm_account
>> >>   mm: Introduce a cgroup for pinned memory
>> >>   mm/util: Extend vm_account to charge pages against the pin cgroup
>> >>   mm/util: Refactor account_locked_vm
>> >>   mm: Convert mmap and mlock to use account_locked_vm
>> >>   mm/mmap: Charge locked memory to pins cgroup
>> >>   selftests/vm: Add pins-cgroup selftest for mlock/mmap
>> >>
>> >>  MAINTAINERS                              |   8 +-
>> >>  arch/powerpc/kvm/book3s_64_vio.c         |  10 +-
>> >>  arch/powerpc/mm/book3s64/iommu_api.c     |  29 +--
>> >>  drivers/fpga/dfl-afu-dma-region.c        |  11 +-
>> >>  drivers/fpga/dfl-afu.h                   |   1 +-
>> >>  drivers/infiniband/core/umem.c           |  16 +-
>> >>  drivers/infiniband/core/umem_odp.c       |   6 +-
>> >>  drivers/infiniband/hw/usnic/usnic_uiom.c |  13 +-
>> >>  drivers/infiniband/hw/usnic/usnic_uiom.h |   1 +-
>> >>  drivers/infiniband/sw/siw/siw.h          |   2 +-
>> >>  drivers/infiniband/sw/siw/siw_mem.c      |  20 +--
>> >>  drivers/infiniband/sw/siw/siw_verbs.c    |  15 +-
>> >>  drivers/vdpa/vdpa_user/vduse_dev.c       |  20 +--
>> >>  drivers/vfio/vfio_iommu_spapr_tce.c      |  15 +-
>> >>  drivers/vfio/vfio_iommu_type1.c          |  59 +----
>> >>  drivers/vhost/vdpa.c                     |   9 +-
>> >>  drivers/vhost/vhost.c                    |   2 +-
>> >>  drivers/vhost/vhost.h                    |   1 +-
>> >>  include/linux/cgroup.h                   |  20 ++-
>> >>  include/linux/cgroup_subsys.h            |   4 +-
>> >>  include/linux/io_uring_types.h           |   3 +-
>> >>  include/linux/kvm_host.h                 |   1 +-
>> >>  include/linux/mm.h                       |   5 +-
>> >>  include/linux/mm_types.h                 |  88 ++++++++-
>> >>  include/linux/skbuff.h                   |   6 +-
>> >>  include/net/sock.h                       |   2 +-
>> >>  include/net/xdp_sock.h                   |   2 +-
>> >>  include/rdma/ib_umem.h                   |   1 +-
>> >>  io_uring/io_uring.c                      |  20 +--
>> >>  io_uring/notif.c                         |   4 +-
>> >>  io_uring/notif.h                         |  10 +-
>> >>  io_uring/rsrc.c                          |  38 +---
>> >>  io_uring/rsrc.h                          |   9 +-
>> >>  mm/Kconfig                               |  11 +-
>> >>  mm/Makefile                              |   1 +-
>> >>  mm/internal.h                            |   2 +-
>> >>  mm/mlock.c                               |  76 +------
>> >>  mm/mmap.c                                |  76 +++----
>> >>  mm/mremap.c                              |  54 +++--
>> >>  mm/pins_cgroup.c                         | 273 ++++++++++++++++++++++++-
>> >>  mm/secretmem.c                           |   6 +-
>> >>  mm/util.c                                | 196 +++++++++++++++--
>> >>  net/core/skbuff.c                        |  47 +---
>> >>  net/rds/message.c                        |   9 +-
>> >>  net/xdp/xdp_umem.c                       |  38 +--
>> >>  tools/testing/selftests/vm/Makefile      |   1 +-
>> >>  tools/testing/selftests/vm/pins-cgroup.c | 271 ++++++++++++++++++++++++-
>> >>  virt/kvm/kvm_main.c                      |   3 +-
>> >>  48 files changed, 1114 insertions(+), 401 deletions(-)
>> >>  create mode 100644 mm/pins_cgroup.c
>> >>  create mode 100644 tools/testing/selftests/vm/pins-cgroup.c
>> >>
>> >> base-commit: 2241ab53cbb5cdb08a6b2d4688feb13971058f65
>> >> --
>> >> git-series 0.9.1
>> >>
>>
David Hildenbrand Jan. 31, 2023, 1:57 p.m. UTC | #6
On 24.01.23 21:12, Jason Gunthorpe wrote:
> On Tue, Jan 24, 2023 at 04:42:29PM +1100, Alistair Popple wrote:
>> Having large amounts of unmovable or unreclaimable memory in a system
>> can lead to system instability due to increasing the likelihood of
>> encountering out-of-memory conditions. Therefore it is desirable to
>> limit the amount of memory users can lock or pin.
>>
>>  From userspace such limits can be enforced by setting
>> RLIMIT_MEMLOCK. However there is no standard method that drivers and
>> other in-kernel users can use to check and enforce this limit.
>>
>> This has lead to a large number of inconsistencies in how limits are
>> enforced. For example some drivers will use mm->locked_mm while others
>> will use mm->pinned_mm or user->locked_mm. It is therefore possible to
>> have up to three times RLIMIT_MEMLOCKED pinned.
>>
>> Having pinned memory limited per-task also makes it easy for users to
>> exceed the limit. For example drivers that pin memory with
>> pin_user_pages() it tends to remain pinned after fork. To deal with
>> this and other issues this series introduces a cgroup for tracking and
>> limiting the number of pages pinned or locked by tasks in the group.
>>
>> However the existing behaviour with regards to the rlimit needs to be
>> maintained. Therefore the lesser of the two limits is
>> enforced. Furthermore having CAP_IPC_LOCK usually bypasses the rlimit,
>> but this bypass is not allowed for the cgroup.
>>
>> The first part of this series converts existing drivers which
>> open-code the use of locked_mm/pinned_mm over to a common interface
>> which manages the refcounts of the associated task/mm/user
>> structs. This ensures accounting of pages is consistent and makes it
>> easier to add charging of the cgroup.
>>
>> The second part of the series adds the cgroup and converts core mm
>> code such as mlock over to charging the cgroup before finally
>> introducing some selftests.
>>
>> As I don't have access to systems with all the various devices I
>> haven't been able to test all driver changes. Any help there would be
>> appreciated.
> 
> I'm excited by this series, thanks for making it.
> 
> The pin accounting has been a long standing problem and cgroups will
> really help!

Indeed. I'm curious how GUP-fast, pinning the same page multiple times, 
and pinning subpages of larger folios are handled :)
Jason Gunthorpe Jan. 31, 2023, 2:03 p.m. UTC | #7
On Tue, Jan 31, 2023 at 02:57:20PM +0100, David Hildenbrand wrote:

> > I'm excited by this series, thanks for making it.
> > 
> > The pin accounting has been a long standing problem and cgroups will
> > really help!
> 
> Indeed. I'm curious how GUP-fast, pinning the same page multiple times, and
> pinning subpages of larger folios are handled :)

The same as today. The pinning is done based on the result from GUP,
and we charge every returned struct page.

So duplicates are counted multiple times, folios are ignored.

Removing duplicate charges would be costly; it would require storage
to keep track of how many times individual pages have been charged to
each cgroup (e.g. an xarray indexed by PFN of integers in each cgroup).

It doesn't seem worth the cost, IMHO.
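
(Roughly something like the following, purely as an illustration of the
bookkeeping involved -- nobody is proposing to merge this:)

#include <linux/gfp.h>
#include <linux/xarray.h>

struct pins_demo {
	struct xarray pinned_pfns;	/* pfn -> number of times charged */
};

/* Returns true if this is the first time @pfn is charged to this cgroup.
 * Locking and error handling omitted; note the extra xarray walk (and
 * possible allocation) on every single pinned page. */
static bool demo_note_pin(struct pins_demo *pins, unsigned long pfn)
{
	void *entry = xa_load(&pins->pinned_pfns, pfn);
	unsigned long count = entry ? xa_to_value(entry) : 0;

	xa_store(&pins->pinned_pfns, pfn, xa_mk_value(count + 1), GFP_KERNEL);
	return count == 0;
}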

We've made a lot of investment now with iommufd to remove the most
annoying sources of duplicated pins, so it is much less of a problem in
the qemu context at least.

Jason
David Hildenbrand Jan. 31, 2023, 2:06 p.m. UTC | #8
On 31.01.23 15:03, Jason Gunthorpe wrote:
> On Tue, Jan 31, 2023 at 02:57:20PM +0100, David Hildenbrand wrote:
> 
>>> I'm excited by this series, thanks for making it.
>>>
>>> The pin accounting has been a long standing problem and cgroups will
>>> really help!
>>
>> Indeed. I'm curious how GUP-fast, pinning the same page multiple times, and
>> pinning subpages of larger folios are handled :)
> 
> The same as today. The pinning is done based on the result from GUP,
> and we charge every returned struct page.
> 
> So duplicates are counted multiple times, folios are ignored.
> 
> Removing duplicate charges would be costly, it would require storage
> to keep track of how many times individual pages have been charged to
> each cgroup (eg an xarray indexed by PFN of integers in each cgroup).
> 
> It doesn't seem worth the cost, IMHO.
> 
> We've made alot of investment now with iommufd to remove the most
> annoying sources of duplicated pins so it is much less of a problem in
> the qemu context at least.

Wasn't there the discussion regarding using vfio+io_uring+rdma+$whatever 
on a VM and requiring multiple times the VM size as memlock limit? Would 
it be the same now, just that we need multiple times the pin limit?
Jason Gunthorpe Jan. 31, 2023, 2:10 p.m. UTC | #9
On Tue, Jan 31, 2023 at 03:06:10PM +0100, David Hildenbrand wrote:
> On 31.01.23 15:03, Jason Gunthorpe wrote:
> > On Tue, Jan 31, 2023 at 02:57:20PM +0100, David Hildenbrand wrote:
> > 
> > > > I'm excited by this series, thanks for making it.
> > > > 
> > > > The pin accounting has been a long standing problem and cgroups will
> > > > really help!
> > > 
> > > Indeed. I'm curious how GUP-fast, pinning the same page multiple times, and
> > > pinning subpages of larger folios are handled :)
> > 
> > The same as today. The pinning is done based on the result from GUP,
> > and we charge every returned struct page.
> > 
> > So duplicates are counted multiple times, folios are ignored.
> > 
> > Removing duplicate charges would be costly, it would require storage
> > to keep track of how many times individual pages have been charged to
> > each cgroup (eg an xarray indexed by PFN of integers in each cgroup).
> > 
> > It doesn't seem worth the cost, IMHO.
> > 
> > We've made alot of investment now with iommufd to remove the most
> > annoying sources of duplicated pins so it is much less of a problem in
> > the qemu context at least.
> 
> Wasn't there the discussion regarding using vfio+io_uring+rdma+$whatever on
> a VM and requiring multiple times the VM size as memlock limit?

Yes, but iommufd gives us some more options to mitigate this.

e.g. it makes some logical sense to point RDMA at the iommufd page
table that is already pinned when trying to DMA from guest memory; in
this case it could ride on the existing pin.

> Would it be the same now, just that we need multiple times the pin
> limit?

Yes

Jason
David Hildenbrand Jan. 31, 2023, 2:15 p.m. UTC | #10
On 31.01.23 15:10, Jason Gunthorpe wrote:
> On Tue, Jan 31, 2023 at 03:06:10PM +0100, David Hildenbrand wrote:
>> On 31.01.23 15:03, Jason Gunthorpe wrote:
>>> On Tue, Jan 31, 2023 at 02:57:20PM +0100, David Hildenbrand wrote:
>>>
>>>>> I'm excited by this series, thanks for making it.
>>>>>
>>>>> The pin accounting has been a long standing problem and cgroups will
>>>>> really help!
>>>>
>>>> Indeed. I'm curious how GUP-fast, pinning the same page multiple times, and
>>>> pinning subpages of larger folios are handled :)
>>>
>>> The same as today. The pinning is done based on the result from GUP,
>>> and we charge every returned struct page.
>>>
>>> So duplicates are counted multiple times, folios are ignored.
>>>
>>> Removing duplicate charges would be costly, it would require storage
>>> to keep track of how many times individual pages have been charged to
>>> each cgroup (eg an xarray indexed by PFN of integers in each cgroup).
>>>
>>> It doesn't seem worth the cost, IMHO.
>>>
>>> We've made alot of investment now with iommufd to remove the most
>>> annoying sources of duplicated pins so it is much less of a problem in
>>> the qemu context at least.
>>
>> Wasn't there the discussion regarding using vfio+io_uring+rdma+$whatever on
>> a VM and requiring multiple times the VM size as memlock limit?
> 
> Yes, but iommufd gives us some more options to mitigate this.
> 
> eg it makes some of logical sense to point RDMA at the iommufd page
> table that is already pinned when trying to DMA from guest memory, in
> this case it could ride on the existing pin.

Right, I suspect some issue is that the address space layout for the 
RDMA device might be completely different. But I'm no expert on IOMMUs 
at all :)

I do understand that at least multiple VFIO containers could benefit by 
only pinning once (IIUC that might have been an issue?).

> 
>> Would it be the same now, just that we need multiple times the pin
>> limit?
> 
> Yes

Okay, thanks.


It's all still a big improvement, because I also asked for TDX 
restrictedmem to be accounted somehow as unmovable.
Jason Gunthorpe Jan. 31, 2023, 2:21 p.m. UTC | #11
On Tue, Jan 31, 2023 at 03:15:49PM +0100, David Hildenbrand wrote:
> On 31.01.23 15:10, Jason Gunthorpe wrote:
> > On Tue, Jan 31, 2023 at 03:06:10PM +0100, David Hildenbrand wrote:
> > > On 31.01.23 15:03, Jason Gunthorpe wrote:
> > > > On Tue, Jan 31, 2023 at 02:57:20PM +0100, David Hildenbrand wrote:
> > > > 
> > > > > > I'm excited by this series, thanks for making it.
> > > > > > 
> > > > > > The pin accounting has been a long standing problem and cgroups will
> > > > > > really help!
> > > > > 
> > > > > Indeed. I'm curious how GUP-fast, pinning the same page multiple times, and
> > > > > pinning subpages of larger folios are handled :)
> > > > 
> > > > The same as today. The pinning is done based on the result from GUP,
> > > > and we charge every returned struct page.
> > > > 
> > > > So duplicates are counted multiple times, folios are ignored.
> > > > 
> > > > Removing duplicate charges would be costly, it would require storage
> > > > to keep track of how many times individual pages have been charged to
> > > > each cgroup (eg an xarray indexed by PFN of integers in each cgroup).
> > > > 
> > > > It doesn't seem worth the cost, IMHO.
> > > > 
> > > > We've made alot of investment now with iommufd to remove the most
> > > > annoying sources of duplicated pins so it is much less of a problem in
> > > > the qemu context at least.
> > > 
> > > Wasn't there the discussion regarding using vfio+io_uring+rdma+$whatever on
> > > a VM and requiring multiple times the VM size as memlock limit?
> > 
> > Yes, but iommufd gives us some more options to mitigate this.
> > 
> > eg it makes some of logical sense to point RDMA at the iommufd page
> > table that is already pinned when trying to DMA from guest memory, in
> > this case it could ride on the existing pin.
> 
> Right, I suspect some issue is that the address space layout for the RDMA
> device might be completely different. But I'm no expert on IOMMUs at all :)

Oh it doesn't matter, it is all virtualized so many times..

> I do understand that at least multiple VFIO containers could benefit by only
> pinning once (IIUC that mgiht have been an issue?).

iommufd has fixed this completely.

> It's all still a big improvement, because I also asked for TDX restrictedmem
> to be accounted somehow as unmovable.

Yeah, it is sort of reasonable to think of the CC "secret memory" as
memory that is no different from memory being DMA'd to. The DMA is
just some other vCPU.

I still don't have a clear idea how all this CC memory is going to
actually work. Eventually it has to get into iommufd as well, somehow.

Jason
Yosry Ahmed Jan. 31, 2023, 7:49 p.m. UTC | #12
On Tue, Jan 31, 2023 at 3:24 AM Alistair Popple <apopple@nvidia.com> wrote:
>
>
> Yosry Ahmed <yosryahmed@google.com> writes:
>
> > On Mon, Jan 30, 2023 at 5:07 PM Alistair Popple <apopple@nvidia.com> wrote:
> >>
> >>
> >> Yosry Ahmed <yosryahmed@google.com> writes:
> >>
> >> > On Mon, Jan 23, 2023 at 9:43 PM Alistair Popple <apopple@nvidia.com> wrote:
> >> >>
> >> >> Having large amounts of unmovable or unreclaimable memory in a system
> >> >> can lead to system instability due to increasing the likelihood of
> >> >> encountering out-of-memory conditions. Therefore it is desirable to
> >> >> limit the amount of memory users can lock or pin.
> >> >>
> >> >> From userspace such limits can be enforced by setting
> >> >> RLIMIT_MEMLOCK. However there is no standard method that drivers and
> >> >> other in-kernel users can use to check and enforce this limit.
> >> >>
> >> >> This has lead to a large number of inconsistencies in how limits are
> >> >> enforced. For example some drivers will use mm->locked_mm while others
> >> >> will use mm->pinned_mm or user->locked_mm. It is therefore possible to
> >> >> have up to three times RLIMIT_MEMLOCKED pinned.
> >> >>
> >> >> Having pinned memory limited per-task also makes it easy for users to
> >> >> exceed the limit. For example drivers that pin memory with
> >> >> pin_user_pages() it tends to remain pinned after fork. To deal with
> >> >> this and other issues this series introduces a cgroup for tracking and
> >> >> limiting the number of pages pinned or locked by tasks in the group.
> >> >>
> >> >> However the existing behaviour with regards to the rlimit needs to be
> >> >> maintained. Therefore the lesser of the two limits is
> >> >> enforced. Furthermore having CAP_IPC_LOCK usually bypasses the rlimit,
> >> >> but this bypass is not allowed for the cgroup.
> >> >>
> >> >> The first part of this series converts existing drivers which
> >> >> open-code the use of locked_mm/pinned_mm over to a common interface
> >> >> which manages the refcounts of the associated task/mm/user
> >> >> structs. This ensures accounting of pages is consistent and makes it
> >> >> easier to add charging of the cgroup.
> >> >>
> >> >> The second part of the series adds the cgroup and converts core mm
> >> >> code such as mlock over to charging the cgroup before finally
> >> >> introducing some selftests.
> >> >
> >> >
> >> > I didn't go through the entire series, so apologies if this was
> >> > mentioned somewhere, but do you mind elaborating on why this is added
> >> > as a separate cgroup controller rather than an extension of the memory
> >> > cgroup controller?
> >>
> >> One of my early prototypes actually did add this to the memcg
> >> controller. However pinned pages fall under their own limit, and we
> >> wanted to always account pages to the cgroup of the task using the
> >> driver rather than say folio_memcg(). So adding it to memcg didn't seem
> >> to have much benefit as we didn't end up using any of the infrastructure
> >> provided by memcg. Hence I thought it was clearer to just add it as it's
> >> own controller.
> >
> > To clarify, you account and limit pinned memory based on the cgroup of
> > the process pinning the pages, not based on the cgroup that the pages
> > are actually charged to? Is my understanding correct?
>
> That's correct.

Interesting.

>
> > IOW, you limit the amount of memory that processes in a cgroup can
> > pin, not the amount of memory charged to a cgroup that can be pinned?
>
> Right, that's a good clarification which I might steal and add to the
> cover letter.

Feel free to :)

Please also clarify this in the code/docs. Glancing through the
patches I was asking myself multiple times why this is not
"memory.pinned.[current/max]" or similar.

>
> >>
> >>  - Alistair
> >>
> >> >>
> >> >>
> >> >> As I don't have access to systems with all the various devices I
> >> >> haven't been able to test all driver changes. Any help there would be
> >> >> appreciated.
> >> >>
> >> >> Alistair Popple (19):
> >> >>   mm: Introduce vm_account
> >> >>   drivers/vhost: Convert to use vm_account
> >> >>   drivers/vdpa: Convert vdpa to use the new vm_structure
> >> >>   infiniband/umem: Convert to use vm_account
> >> >>   RMDA/siw: Convert to use vm_account
> >> >>   RDMA/usnic: convert to use vm_account
> >> >>   vfio/type1: Charge pinned pages to pinned_vm instead of locked_vm
> >> >>   vfio/spapr_tce: Convert accounting to pinned_vm
> >> >>   io_uring: convert to use vm_account
> >> >>   net: skb: Switch to using vm_account
> >> >>   xdp: convert to use vm_account
> >> >>   kvm/book3s_64_vio: Convert account_locked_vm() to vm_account_pinned()
> >> >>   fpga: dfl: afu: convert to use vm_account
> >> >>   mm: Introduce a cgroup for pinned memory
> >> >>   mm/util: Extend vm_account to charge pages against the pin cgroup
> >> >>   mm/util: Refactor account_locked_vm
> >> >>   mm: Convert mmap and mlock to use account_locked_vm
> >> >>   mm/mmap: Charge locked memory to pins cgroup
> >> >>   selftests/vm: Add pins-cgroup selftest for mlock/mmap
> >> >>
> >> >>  MAINTAINERS                              |   8 +-
> >> >>  arch/powerpc/kvm/book3s_64_vio.c         |  10 +-
> >> >>  arch/powerpc/mm/book3s64/iommu_api.c     |  29 +--
> >> >>  drivers/fpga/dfl-afu-dma-region.c        |  11 +-
> >> >>  drivers/fpga/dfl-afu.h                   |   1 +-
> >> >>  drivers/infiniband/core/umem.c           |  16 +-
> >> >>  drivers/infiniband/core/umem_odp.c       |   6 +-
> >> >>  drivers/infiniband/hw/usnic/usnic_uiom.c |  13 +-
> >> >>  drivers/infiniband/hw/usnic/usnic_uiom.h |   1 +-
> >> >>  drivers/infiniband/sw/siw/siw.h          |   2 +-
> >> >>  drivers/infiniband/sw/siw/siw_mem.c      |  20 +--
> >> >>  drivers/infiniband/sw/siw/siw_verbs.c    |  15 +-
> >> >>  drivers/vdpa/vdpa_user/vduse_dev.c       |  20 +--
> >> >>  drivers/vfio/vfio_iommu_spapr_tce.c      |  15 +-
> >> >>  drivers/vfio/vfio_iommu_type1.c          |  59 +----
> >> >>  drivers/vhost/vdpa.c                     |   9 +-
> >> >>  drivers/vhost/vhost.c                    |   2 +-
> >> >>  drivers/vhost/vhost.h                    |   1 +-
> >> >>  include/linux/cgroup.h                   |  20 ++-
> >> >>  include/linux/cgroup_subsys.h            |   4 +-
> >> >>  include/linux/io_uring_types.h           |   3 +-
> >> >>  include/linux/kvm_host.h                 |   1 +-
> >> >>  include/linux/mm.h                       |   5 +-
> >> >>  include/linux/mm_types.h                 |  88 ++++++++-
> >> >>  include/linux/skbuff.h                   |   6 +-
> >> >>  include/net/sock.h                       |   2 +-
> >> >>  include/net/xdp_sock.h                   |   2 +-
> >> >>  include/rdma/ib_umem.h                   |   1 +-
> >> >>  io_uring/io_uring.c                      |  20 +--
> >> >>  io_uring/notif.c                         |   4 +-
> >> >>  io_uring/notif.h                         |  10 +-
> >> >>  io_uring/rsrc.c                          |  38 +---
> >> >>  io_uring/rsrc.h                          |   9 +-
> >> >>  mm/Kconfig                               |  11 +-
> >> >>  mm/Makefile                              |   1 +-
> >> >>  mm/internal.h                            |   2 +-
> >> >>  mm/mlock.c                               |  76 +------
> >> >>  mm/mmap.c                                |  76 +++----
> >> >>  mm/mremap.c                              |  54 +++--
> >> >>  mm/pins_cgroup.c                         | 273 ++++++++++++++++++++++++-
> >> >>  mm/secretmem.c                           |   6 +-
> >> >>  mm/util.c                                | 196 +++++++++++++++--
> >> >>  net/core/skbuff.c                        |  47 +---
> >> >>  net/rds/message.c                        |   9 +-
> >> >>  net/xdp/xdp_umem.c                       |  38 +--
> >> >>  tools/testing/selftests/vm/Makefile      |   1 +-
> >> >>  tools/testing/selftests/vm/pins-cgroup.c | 271 ++++++++++++++++++++++++-
> >> >>  virt/kvm/kvm_main.c                      |   3 +-
> >> >>  48 files changed, 1114 insertions(+), 401 deletions(-)
> >> >>  create mode 100644 mm/pins_cgroup.c
> >> >>  create mode 100644 tools/testing/selftests/vm/pins-cgroup.c
> >> >>
> >> >> base-commit: 2241ab53cbb5cdb08a6b2d4688feb13971058f65
> >> >> --
> >> >> git-series 0.9.1
> >> >>
> >>
>