[v3,0/6] hugetlb_cgroup: Add hugetlb_cgroup reservation limits

Message ID 20190826233240.11524-1-almasrymina@google.com (mailing list archive)

Message

Mina Almasry Aug. 26, 2019, 11:32 p.m. UTC
Problem:
Currently tasks attempting to allocate more hugetlb memory than is available get
a failure at mmap/shmget time. This is thanks to Hugetlbfs Reservations [1].
However, if a task attempts to allocate hugetlb memory that exceeds only its
hugetlb_cgroup limit (but is otherwise available), the kernel will allow the
mmap/shmget call, but will SIGBUS the task when it attempts to fault the
memory in.

We have developers interested in using hugetlb_cgroups, and they have expressed
dissatisfaction regarding this behavior. We'd like to improve this
behavior such that tasks violating the hugetlb_cgroup limits get an error at
mmap/shmget time, rather than getting SIGBUS'd when they try to fault
the excess memory in.

The underlying problem is that today's hugetlb_cgroup accounting happens
at hugetlb memory *fault* time, rather than at *reservation* time.
Thus, enforcing the hugetlb_cgroup limit only happens at fault time, and
the offending task gets SIGBUS'd.

Proposed Solution:
A new page counter named hugetlb.xMB.reservation_[limit|usage]_in_bytes. This
counter has slightly different semantics than
hugetlb.xMB.[limit|usage]_in_bytes:

- While usage_in_bytes tracks all *faulted* hugetlb memory,
reservation_usage_in_bytes tracks all *reserved* hugetlb memory.

- If a task attempts to reserve more memory than limit_in_bytes allows,
the kernel will allow it to do so. But if a task attempts to reserve
more memory than reservation_limit_in_bytes allows, the kernel will fail
this reservation.

This proposal is implemented in this patch series, with tests to verify
functionality and show the usage.
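
For illustration, here is a minimal sketch of the intended semantics. It
assumes 2MB huge pages are available and that the calling task runs in a
cgroup whose hugetlb.2MB.reservation_limit_in_bytes has been set to 2MB (one
page); neither assumption is part of the interface itself:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 2UL * 1024 * 1024; /* one 2MB huge page */

	/* First reservation fits under reservation_limit_in_bytes. */
	void *a = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (a == MAP_FAILED)
		perror("first mmap");

	/*
	 * The second reservation exceeds the limit, so with this series the
	 * mmap() itself is expected to fail, rather than a later fault
	 * SIGBUS'ing the task.
	 */
	void *b = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (b == MAP_FAILED)
		perror("second mmap");

	return 0;
}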

Alternatives considered:
1. A new cgroup, instead of only a new page_counter attached to
   the existing hugetlb_cgroup. Adding a new cgroup seemed like a lot of code
   duplication with hugetlb_cgroup. Keeping hugetlb related page counters under
   hugetlb_cgroup seemed cleaner as well.

2. Instead of adding a new counter, we considered adding a sysctl that modifies
   the behavior of hugetlb.xMB.[limit|usage]_in_bytes, to do accounting at
   reservation time rather than fault time. Adding a new page_counter seems
   better as userspace could, if it wants, choose to enforce different cgroups
   differently: one via limit_in_bytes, and another via
   reservation_limit_in_bytes. This could be very useful if you're
   transitioning how hugetlb memory is partitioned on your system one
   cgroup at a time, for example. Also, someone may find a use for both
   limit_in_bytes and reservation_limit_in_bytes concurrently, and this
   approach gives them the option to do so.

Caveats:
1. This support is implemented for cgroups-v1. I have not tried
   hugetlb_cgroups with cgroups v2, and AFAICT it's not supported yet.
   This is largely because we use cgroups-v1 for now. If required, I
   can add hugetlb_cgroup support to cgroups v2 in this patch or
   a follow up.
2. The most complicated bit of this patch, I believe, is where to store the
   pointer to the hugetlb_cgroup to uncharge at unreservation time.
   Normally the cgroup pointers hang off the struct page. But, with
   hugetlb_cgroup reservations, one task can reserve a specific page and another
   task may fault it in (I believe), so storing the pointer in struct
   page is not appropriate. The proposed approach here is to store the pointer
   in the resv_map; a rough sketch follows, and see the patch for details.
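
As a rough sketch of that idea (the reservation_counter field below is an
illustrative assumption, not necessarily the patch's exact layout), each
file_region entry in the resv_map would remember which cgroup to uncharge
when its range is unreserved:

/*
 * Sketch only: hang the uncharge information off the reservation map
 * rather than struct page.
 */
struct file_region {
	struct list_head link;
	long from;
	long to;
	/* cgroup to uncharge when [from, to) is unreserved */
	struct hugetlb_cgroup *reservation_counter;
};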

Signed-off-by: Mina Almasry <almasrymina@google.com>

[1]: https://www.kernel.org/doc/html/latest/vm/hugetlbfs_reserv.html

Changes in v3:
- Addressed comments of Hillf Danton:
  - Added docs.
  - cgroup_files now uses enum.
  - Various readability improvements.
- Addressed comments of Mike Kravetz:
  - region_* functions no longer coalesce file_region entries in the resv_map.
  - region_add() and region_chg() refactored to make them much easier to
    understand and remove duplicated code so this patch doesn't add too much
    complexity.
  - Refactored common functionality into helpers.

Changes in v2:
- Split the patch into a 5 patch series.
- Fixed patch subject.

Mina Almasry (6):
  hugetlb_cgroup: Add hugetlb_cgroup reservation counter
  hugetlb_cgroup: add interface for charge/uncharge hugetlb reservations
  hugetlb_cgroup: add reservation accounting for private mappings
  hugetlb_cgroup: add accounting for shared mappings
  hugetlb_cgroup: Add hugetlb_cgroup reservation tests
  hugetlb_cgroup: Add hugetlb_cgroup reservation docs

 .../admin-guide/cgroup-v1/hugetlb.rst         |  84 ++-
 include/linux/hugetlb.h                       |  24 +-
 include/linux/hugetlb_cgroup.h                |  19 +-
 mm/hugetlb.c                                  | 493 ++++++++++++------
 mm/hugetlb_cgroup.c                           | 187 +++++--
 tools/testing/selftests/vm/.gitignore         |   1 +
 tools/testing/selftests/vm/Makefile           |   4 +
 .../selftests/vm/charge_reserved_hugetlb.sh   | 438 ++++++++++++++++
 .../selftests/vm/write_hugetlb_memory.sh      |  22 +
 .../testing/selftests/vm/write_to_hugetlbfs.c | 252 +++++++++
 10 files changed, 1300 insertions(+), 224 deletions(-)
 create mode 100755 tools/testing/selftests/vm/charge_reserved_hugetlb.sh
 create mode 100644 tools/testing/selftests/vm/write_hugetlb_memory.sh
 create mode 100644 tools/testing/selftests/vm/write_to_hugetlbfs.c

--
2.23.0.187.g17f5b7556c-goog

Comments

Hillf Danton Aug. 27, 2019, 8 a.m. UTC | #1
On Mon, 26 Aug 2019 16:32:35 -0700 Mina Almasry wrote:
>
> These counters will track hugetlb reservations rather than hugetlb
> memory faulted in. This patch only adds the counter, following patches
> add the charging and uncharging of the counter.

Acked-by: Hillf Danton <hdanton@sina.com>
Hillf Danton Aug. 27, 2019, 9:18 a.m. UTC | #2
On Mon, 26 Aug 2019 16:32:40 -0700 Mina Almasry wrote:
>
> Add docs for how to use hugetlb_cgroup reservations, and their behavior.

Acked-by: Hillf Danton <hdanton@sina.com>
Michal Hocko Aug. 28, 2019, 11:23 a.m. UTC | #3
On Mon 26-08-19 16:32:34, Mina Almasry wrote:
>  mm/hugetlb.c                                  | 493 ++++++++++++------
>  mm/hugetlb_cgroup.c                           | 187 +++++--

This is a lot of changes to already subtle code, which hugetlb
reservations undoubtedly are. Moreover cgroupv1 is feature frozen and I am
not aware of any plans to port the controller to v2. That all doesn't
sound in favor of this change. Mike is the maintainer of the hugetlb
code so I will defer to him to make a decision but I wouldn't recommend
that.
Mina Almasry Aug. 28, 2019, 5:58 p.m. UTC | #4
On Wed, Aug 28, 2019 at 4:23 AM Michal Hocko <mhocko@kernel.org> wrote:
>
> On Mon 26-08-19 16:32:34, Mina Almasry wrote:
> >  mm/hugetlb.c                                  | 493 ++++++++++++------
> >  mm/hugetlb_cgroup.c                           | 187 +++++--
>
> This is a lot of changes to already subtle code, which hugetlb
> reservations undoubtedly are.

For what it's worth, I think this patch series is a net decrease in
the complexity of the reservation code, especially the region_*
functions, which is where a lot of the complexity lies. I removed the
race between region_del and region_{add|chg}, refactored the main
logic into smaller code, moved common code to helpers and deleted the
duplicates, and finally added lots of comments to the hard to
understand pieces. I hope that when folks review the changes they will
see that! :)

> Moreover cgroupv1 is feature frozen and I am
> not aware of any plans to port the controller to v2.

Also for what it's worth, if porting the controller to v2 is a
requisite to take this, I'm happy to do that. As far as I understand
there is no reason hugetlb_cgroups shouldn't be in cgroups v2, and we
see value in them.

> That all doesn't
> sound in favor of this change. Mike is the maintainer of the hugetlb
> code so I will defer to him to make a decision but I wouldn't recommend
> that.
> --
> Michal Hocko
> SUSE Labs
Shakeel Butt Aug. 29, 2019, 12:42 a.m. UTC | #5
On Wed, Aug 28, 2019 at 4:23 AM Michal Hocko <mhocko@kernel.org> wrote:
>
> On Mon 26-08-19 16:32:34, Mina Almasry wrote:
> >  mm/hugetlb.c                                  | 493 ++++++++++++------
> >  mm/hugetlb_cgroup.c                           | 187 +++++--
>
> This is a lot of changes to already subtle code, which hugetlb
> reservations undoubtedly are. Moreover cgroupv1 is feature frozen and I am
> not aware of any plans to port the controller to v2. That all doesn't
> sound in favor of this change.

Actually "no plan to port the controller to v2" makes the case strong
for these changes (and other new features) to be done in v1. If there
is an alternative solution in v2 then I can understand the push-back
on changes in v1 but that is not the case here.

Shakeel
Michal Hocko Aug. 29, 2019, 7:18 a.m. UTC | #6
[Cc cgroups maintainers]

On Wed 28-08-19 10:58:00, Mina Almasry wrote:
> On Wed, Aug 28, 2019 at 4:23 AM Michal Hocko <mhocko@kernel.org> wrote:
> >
> > On Mon 26-08-19 16:32:34, Mina Almasry wrote:
> > >  mm/hugetlb.c                                  | 493 ++++++++++++------
> > >  mm/hugetlb_cgroup.c                           | 187 +++++--
> >
> > This is a lot of changes to already subtle code, which hugetlb
> > reservations undoubtedly are.
> 
> For what it's worth, I think this patch series is a net decrease in
> the complexity of the reservation code, especially the region_*
> functions, which is where a lot of the complexity lies. I removed the
> race between region_del and region_{add|chg}, refactored the main
> logic into smaller code, moved common code to helpers and deleted the
> duplicates, and finally added lots of comments to the hard to
> understand pieces. I hope that when folks review the changes they will
> see that! :)

Post those improvements as standalone patches and sell them as
improvements. We can talk about the net additional complexity of the
controller much easier then.

> > Moreover cgroupv1 is feature frozen and I am
> > not aware of any plans to port the controller to v2.
> 
> Also for what it's worth, if porting the controller to v2 is a
> requisite to take this, I'm happy to do that. As far as I understand
> there is no reason hugetlb_cgroups shouldn't be in cgroups v2, and we
> see value in them.

Talk to the cgroups maintainers about why the hugetlb controller hasn't been
enabled in v2. All I am saying is that v1-only features are really a hard
sell, even without adding a lot of code to hugetlb, which is quite
complex on its own.
Mike Kravetz Sept. 3, 2019, 5:57 p.m. UTC | #7
On 8/29/19 12:18 AM, Michal Hocko wrote:
> [Cc cgroups maintainers]
> 
> On Wed 28-08-19 10:58:00, Mina Almasry wrote:
>> On Wed, Aug 28, 2019 at 4:23 AM Michal Hocko <mhocko@kernel.org> wrote:
>>>
>>> On Mon 26-08-19 16:32:34, Mina Almasry wrote:
>>>>  mm/hugetlb.c                                  | 493 ++++++++++++------
>>>>  mm/hugetlb_cgroup.c                           | 187 +++++--
>>>
>>> This is a lot of changes to already subtle code, which hugetlb
>>> reservations undoubtedly are.
>>
>> For what it's worth, I think this patch series is a net decrease in
>> the complexity of the reservation code, especially the region_*
>> functions, which is where a lot of the complexity lies. I removed the
>> race between region_del and region_{add|chg}, refactored the main
>> logic into smaller code, moved common code to helpers and deleted the
>> duplicates, and finally added lots of comments to the hard to
>> understand pieces. I hope that when folks review the changes they will
>> see that! :)
> 
> Post those improvements as standalone patches and sell them as
> improvements. We can talk about the net additional complexity of the
> controller much easier then.

All such changes appear to be in patch 4 of this series.  The commit message
says "region_add() and region_chg() are heavily refactored to in this commit
to make the code easier to understand and remove duplication.".  However, the
modifications were also added to accommodate the new cgroup reservation
accounting.  I think it would be helpful to explain why the existing code does
not work with the new accounting.  For example, one change is because
"existing code coalesces resv_map entries for shared mappings.  new cgroup
accounting requires that resv_map entries be kept separate for proper
uncharging."

I am starting to review the changes, but it would help if there was a high
level description.  I also like Michal's idea of calling out the region_*
changes separately.  If not a standalone patch, at least the first patch of
the series.  This new code will be exercised even if cgroup reservation
accounting is not enabled, so it is very important that no subtle regressions
be introduced.
Mike Kravetz Sept. 3, 2019, 11:44 p.m. UTC | #8
On 9/3/19 10:57 AM, Mike Kravetz wrote:
> On 8/29/19 12:18 AM, Michal Hocko wrote:
>> [Cc cgroups maintainers]
>>
>> On Wed 28-08-19 10:58:00, Mina Almasry wrote:
>>> On Wed, Aug 28, 2019 at 4:23 AM Michal Hocko <mhocko@kernel.org> wrote:
>>>>
>>>> On Mon 26-08-19 16:32:34, Mina Almasry wrote:
>>>>>  mm/hugetlb.c                                  | 493 ++++++++++++------
>>>>>  mm/hugetlb_cgroup.c                           | 187 +++++--
>>>>
>>>> This is a lot of changes to already subtle code, which hugetlb
>>>> reservations undoubtedly are.
>>>
>>> For what it's worth, I think this patch series is a net decrease in
>>> the complexity of the reservation code, especially the region_*
>>> functions, which is where a lot of the complexity lies. I removed the
>>> race between region_del and region_{add|chg}, refactored the main
>>> logic into smaller code, moved common code to helpers and deleted the
>>> duplicates, and finally added lots of comments to the hard to
>>> understand pieces. I hope that when folks review the changes they will
>>> see that! :)
>>
>> Post those improvements as standalone patches and sell them as
>> improvements. We can talk about the net additional complexity of the
>> controller much easier then.
> 
> All such changes appear to be in patch 4 of this series.  The commit message
> says "region_add() and region_chg() are heavily refactored to in this commit
> to make the code easier to understand and remove duplication.".  However, the
> modifications were also added to accommodate the new cgroup reservation
> accounting.  I think it would be helpful to explain why the existing code does
> not work with the new accounting.  For example, one change is because
> "existing code coalesces resv_map entries for shared mappings.  new cgroup
> accounting requires that resv_map entries be kept separate for proper
> uncharging."
> 
> I am starting to review the changes, but it would help if there was a high
> level description.  I also like Michal's idea of calling out the region_*
> changes separately.  If not a standalone patch, at least the first patch of
> the series.  This new code will be exercised even if cgroup reservation
> accounting is not enabled, so it is very important that no subtle regressions
> be introduced.

While looking at the region_* changes, I started thinking about this no
coalesce change for shared mappings which I think is necessary.  Am I
mistaken, or is this a requirement?

If it is a requirement, then think about some of the possible scenarios
such as:
- There is a hugetlbfs file of size 10 huge pages.
- Task A has reservations for pages at offset 1 3 5 7 and 9
- Task B then mmaps the entire file which should result in reservations
  at 0 2 4 6 and 8.
- region_chg will return 5, but will also need to allocate 5 resv_map
  entries for the subsequent region_add which can not fail.  Correct?
  The code does not appear to handle this.

BTW, this series will BUG when running libhugetlbfs test suite.  It will
hit this in resv_map_release().

	VM_BUG_ON(resv_map->adds_in_progress);
Mina Almasry Sept. 5, 2019, 7:55 p.m. UTC | #9
On Tue, Sep 3, 2019 at 10:58 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 8/29/19 12:18 AM, Michal Hocko wrote:
> > [Cc cgroups maintainers]
> >
> > On Wed 28-08-19 10:58:00, Mina Almasry wrote:
> >> On Wed, Aug 28, 2019 at 4:23 AM Michal Hocko <mhocko@kernel.org> wrote:
> >>>
> >>> On Mon 26-08-19 16:32:34, Mina Almasry wrote:
> >>>>  mm/hugetlb.c                                  | 493 ++++++++++++------
> >>>>  mm/hugetlb_cgroup.c                           | 187 +++++--
> >>>
> >>> This is a lot of changes to already subtle code, which hugetlb
> >>> reservations undoubtedly are.
> >>
> >> For what it's worth, I think this patch series is a net decrease in
> >> the complexity of the reservation code, especially the region_*
> >> functions, which is where a lot of the complexity lies. I removed the
> >> race between region_del and region_{add|chg}, refactored the main
> >> logic into smaller code, moved common code to helpers and deleted the
> >> duplicates, and finally added lots of comments to the hard to
> >> understand pieces. I hope that when folks review the changes they will
> >> see that! :)
> >
> > Post those improvements as standalone patches and sell them as
> > improvements. We can talk about the net additional complexity of the
> > controller much easier then.
>
> All such changes appear to be in patch 4 of this series.  The commit message
> says "region_add() and region_chg() are heavily refactored to in this commit
> to make the code easier to understand and remove duplication.".  However, the
> modifications were also added to accommodate the new cgroup reservation
> accounting.  I think it would be helpful to explain why the existing code does
> not work with the new accounting.  For example, one change is because
> "existing code coalesces resv_map entries for shared mappings.  new cgroup
> accounting requires that resv_map entries be kept separate for proper
> uncharging."
>
> I am starting to review the changes, but it would help if there was a high
> level description.  I also like Michal's idea of calling out the region_*
> changes separately.  If not a standalone patch, at least the first patch of
> the series.  This new code will be exercised even if cgroup reservation
> accounting is not enabled, so it is very important that no subtle regressions
> be introduced.
>

Yep, it seems I'm not calling out these changes as clearly as I should.
I'll look into breaking them into separate patches. I'll probably put
them in a standalone patch, or right behind patch 4 of the current series,
since they are really done to make removing the coalescing a bit easier.
Let me look into that.

> --
> Mike Kravetz
Mina Almasry Sept. 5, 2019, 8:07 p.m. UTC | #10
On Tue, Sep 3, 2019 at 4:46 PM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 9/3/19 10:57 AM, Mike Kravetz wrote:
> > On 8/29/19 12:18 AM, Michal Hocko wrote:
> >> [Cc cgroups maintainers]
> >>
> >> On Wed 28-08-19 10:58:00, Mina Almasry wrote:
> >>> On Wed, Aug 28, 2019 at 4:23 AM Michal Hocko <mhocko@kernel.org> wrote:
> >>>>
> >>>> On Mon 26-08-19 16:32:34, Mina Almasry wrote:
> >>>>>  mm/hugetlb.c                                  | 493 ++++++++++++------
> >>>>>  mm/hugetlb_cgroup.c                           | 187 +++++--
> >>>>
> >>>> This is a lot of changes to already subtle code, which hugetlb
> >>>> reservations undoubtedly are.
> >>>
> >>> For what it's worth, I think this patch series is a net decrease in
> >>> the complexity of the reservation code, especially the region_*
> >>> functions, which is where a lot of the complexity lies. I removed the
> >>> race between region_del and region_{add|chg}, refactored the main
> >>> logic into smaller code, moved common code to helpers and deleted the
> >>> duplicates, and finally added lots of comments to the hard to
> >>> understand pieces. I hope that when folks review the changes they will
> >>> see that! :)
> >>
> >> Post those improvements as standalone patches and sell them as
> >> improvements. We can talk about the net additional complexity of the
> >> controller much easier then.
> >
> > All such changes appear to be in patch 4 of this series.  The commit message
> > says "region_add() and region_chg() are heavily refactored to in this commit
> > to make the code easier to understand and remove duplication.".  However, the
> > modifications were also added to accommodate the new cgroup reservation
> > accounting.  I think it would be helpful to explain why the existing code does
> > not work with the new accounting.  For example, one change is because
> > "existing code coalesces resv_map entries for shared mappings.  new cgroup
> > accounting requires that resv_map entries be kept separate for proper
> > uncharging."
> >
> > I am starting to review the changes, but it would help if there was a high
> > level description.  I also like Michal's idea of calling out the region_*
> > changes separately.  If not a standalone patch, at least the first patch of
> > the series.  This new code will be exercised even if cgroup reservation
> > accounting is not enabled, so it is very important that no subtle regressions
> > be introduced.
>
> While looking at the region_* changes, I started thinking about this no
> coalesce change for shared mappings which I think is necessary.  Am I
> mistaken, or is this a requirement?
>

No coalescing is a requirement, yes. The idea is that task A can reserve
range [0-1], and task B can reserve range [1-2]. We want the code to
put in 2 regions:

1. [0-1], with cgroup information that points to task A's cgroup.
2. [1-2], with cgroup information that points to task B's cgroup.

If coalescing happens, you end up with one region [0-2] whose cgroup
information points to only one of those cgroups, and someone gets
uncharged incorrectly when the reservation is freed.

Technically we could still coalesce when the cgroup information is the
same, and I can do that, but the region_* code becomes more
complicated, and you mentioned on an earlier patchset that you were
concerned with how complicated the region_* functions already are.

> If it is a requirement, then think about some of the possible scenarios
> such as:
> - There is a hugetlbfs file of size 10 huge pages.
> - Task A has reservations for pages at offset 1 3 5 7 and 9
> - Task B then mmaps the entire file which should result in reservations
>   at 0 2 4 6 and 8.
> - region_chg will return 5, but will also need to allocate 5 resv_map
>   entries for the subsequent region_add which can not fail.  Correct?
>   The code does not appear to handle this.
>

I thought the code did handle this. region_chg calls
allocate_enough_cache_for_range_and_lock(), which in this scenario
will put 5 entries in resv_map->region_cache. region_add will use
these 5 region_cache entries to do its business.
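
A minimal sketch of that pre-allocation pattern, using the existing
resv_map and file_region definitions (simplified, no locking, and the
helper name is just for illustration rather than the patch's actual code):

static int ensure_region_cache(struct resv_map *resv, long needed)
{
	/*
	 * Pre-populate resv_map->region_cache with enough spare
	 * file_region entries that the later region_add() never has to
	 * allocate, and therefore cannot fail.
	 */
	while (resv->region_cache_count < needed) {
		struct file_region *spare;

		spare = kmalloc(sizeof(*spare), GFP_KERNEL);
		if (!spare)
			return -ENOMEM;
		list_add(&spare->link, &resv->region_cache);
		resv->region_cache_count++;
	}
	return 0;
}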

I'll add a test in my suite to test this case to make sure.

> BTW, this series will BUG when running libhugetlbfs test suite.  It will
> hit this in resv_map_release().
>
>         VM_BUG_ON(resv_map->adds_in_progress);
>

Sorry about that, I've been having trouble running the libhugetlbfs
tests, but I'm still on it. I'll get to the bottom of this by the next
patch series.

> --
> Mike Kravetz