
[v1,0/6] mm / virtio: Provide support for paravirtual waste page treatment

Message ID 20190619222922.1231.27432.stgit@localhost.localdomain

Message

Alexander H Duyck June 19, 2019, 10:32 p.m. UTC
This series provides an asynchronous means of hinting to a hypervisor
that a guest page is no longer in use and can have the data associated
with it dropped. To do this I have implemented functionality that allows
for what I am referring to as waste page treatment.

I have based many of the terms and functionality on waste water
treatment. The idea for the analogy occurred to me after I had reached
the point of referring to the hints as "bubbles": the hints use the
same approach as the balloon functionality but disappear if they are
touched, so I started to think of the virtio device as an aerator. The
general idea with all of this is that the guest should be treating the
unused pages so that when they end up heading "downstream" to either
another guest, or back to the host, they will not need to be written
to swap.

When the number of "dirty" pages in a given free_area exceeds our high
water mark, which is currently 32, we will schedule the aeration task to
start going through and scrubbing the zone. While the scrubbing is
taking place, a boundary will be defined that we use to separate the
"aerated" pages from the "dirty" ones. We use the ZONE_AERATION_ACTIVE
bit to flag when these boundaries are in place.

I am leaving a number of things hard-coded, such as limiting the lowest
order processed to PAGEBLOCK_ORDER, and have left it up to the guest to
determine what batch size it wants to allocate to process the hints.
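
To make the trigger logic concrete, here is a minimal sketch of the
check described above. It is illustrative only, not the code from the
patches: zone_nr_raw_pages() and aerator_schedule_work() are made-up
helper names, while the watermark of 32, the pageblock-order floor, and
ZONE_AERATION_ACTIVE come from the description above.

#include <linux/mmzone.h>
#include <linux/workqueue.h>

/* Hypothetical names; the values match the description above. */
#define AERATION_HWM		32		/* "dirty" pages before a pass is scheduled */
#define AERATOR_MIN_ORDER	pageblock_order	/* lowest order that gets treated */

static void aeration_check(struct zone *zone, unsigned int order)
{
	if (order < AERATOR_MIN_ORDER)
		return;

	/* Only one scrubbing pass per zone at a time. */
	if (test_bit(ZONE_AERATION_ACTIVE, &zone->flags))
		return;

	/*
	 * Once enough untreated ("dirty") pages accumulate in the
	 * free_area, flag the zone and defer the actual scrubbing to a
	 * workqueue so the freeing path stays cheap.
	 */
	if (zone_nr_raw_pages(zone, order) > AERATION_HWM) {
		set_bit(ZONE_AERATION_ACTIVE, &zone->flags);
		aerator_schedule_work(zone);
	}
}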

My primary testing has just been to verify that the memory is being
freed after allocation by running memhog 32g in the guest and watching
the total free memory via /proc/meminfo on the host. With this I have
verified that most of the memory is freed after each iteration. As far
as performance goes, I have been mainly focusing on the
will-it-scale/page_fault1 test running with 16 vcpus. With that I have
seen a less than 1% difference between the base kernel without these
patches, with the patches and virtio-balloon disabled, and with the
patches and virtio-balloon enabled with hinting.

Changes from the RFC:
Moved aeration requested flag out of aerator and into zone->flags.
Moved boundary out of free_area and into local variables for aeration.
Moved aeration cycle out of interrupt and into workqueue.
Left nr_free as total pages instead of splitting it between raw and aerated.
Combined size and physical address values in virtio ring into one 64b value (see the sketch below).
Restructured the patch set to reduce patches from 11 to 6.
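
Regarding the last item, here is one plausible way such a combined
value could be encoded. This is an assumed layout for illustration, not
necessarily what the series actually puts on the ring: because the
hinted ranges are at least pageblock sized and aligned, the low bits of
the physical address are zero and can carry the order (log2 of the size
in pages) instead.

#include <linux/types.h>

#define HINT_ORDER_MASK		((1ULL << PAGE_SHIFT) - 1)

/* Fold a physical address and an order into a single 64-bit value. */
static inline __u64 hint_encode(phys_addr_t pa, unsigned int order)
{
	/* pa is assumed to be aligned to at least PAGE_SIZE << order */
	return (__u64)pa | order;
}

static inline phys_addr_t hint_decode_addr(__u64 val)
{
	return val & ~HINT_ORDER_MASK;
}

static inline unsigned int hint_decode_order(__u64 val)
{
	return val & HINT_ORDER_MASK;
}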

---

Alexander Duyck (6):
      mm: Adjust shuffle code to allow for future coalescing
      mm: Move set/get_pcppage_migratetype to mmzone.h
      mm: Use zone and order instead of free area in free_list manipulators
      mm: Introduce "aerated" pages
      mm: Add logic for separating "aerated" pages from "raw" pages
      virtio-balloon: Add support for aerating memory via hinting


 drivers/virtio/Kconfig              |    1 
 drivers/virtio/virtio_balloon.c     |  110 ++++++++++++++
 include/linux/memory_aeration.h     |  118 +++++++++++++++
 include/linux/mmzone.h              |  113 +++++++++------
 include/linux/page-flags.h          |    8 +
 include/uapi/linux/virtio_balloon.h |    1 
 mm/Kconfig                          |    5 +
 mm/Makefile                         |    1 
 mm/aeration.c                       |  270 +++++++++++++++++++++++++++++++++++
 mm/page_alloc.c                     |  203 ++++++++++++++++++--------
 mm/shuffle.c                        |   24 ---
 mm/shuffle.h                        |   35 +++++
 12 files changed, 753 insertions(+), 136 deletions(-)
 create mode 100644 include/linux/memory_aeration.h
 create mode 100644 mm/aeration.c

--

Comments

David Hildenbrand June 25, 2019, 7:42 a.m. UTC | #1
On 20.06.19 00:32, Alexander Duyck wrote:
> This series provides an asynchronous means of hinting to a hypervisor
> that a guest page is no longer in use and can have the data associated
> with it dropped. To do this I have implemented functionality that allows
> for what I am referring to as waste page treatment.
> 
> I have based many of the terms and functionality off of waste water
> treatment, the idea for the similarity occurred to me after I had reached
> the point of referring to the hints as "bubbles", as the hints used the
> same approach as the balloon functionality but would disappear if they
> were touched, as a result I started to think of the virtio device as an
> aerator. The general idea with all of this is that the guest should be
> treating the unused pages so that when they end up heading "downstream"
> to either another guest, or back at the host they will not need to be
> written to swap.
> 
> When the number of "dirty" pages in a given free_area exceeds our high
> water mark, which is currently 32, we will schedule the aeration task to
> start going through and scrubbing the zone. While the scrubbing is taking
> place a boundary will be defined that we use to separate the "aerated"
> pages from the "dirty" ones. We use the ZONE_AERATION_ACTIVE bit to flag
> when these boundaries are in place.

I still *detest* the terminology, sorry. Can't you come up with a
simpler terminology that makes more sense in the context of operating
systems and pages we want to hint to the hypervisor? (that is the only
use case you are using it for so far)

> 
> I am leaving a number of things hard-coded such as limiting the lowest
> order processed to PAGEBLOCK_ORDER, and have left it up to the guest to
> determine what batch size it wants to allocate to process the hints.
> 
> My primary testing has just been to verify the memory is being freed after
> allocation by running memhog 32g in the guest and watching the total free
> memory via /proc/meminfo on the host. With this I have verified most of
> the memory is freed after each iteration. As far as performance I have
> been mainly focusing on the will-it-scale/page_fault1 test running with
> 16 vcpus. With that I have seen a less than 1% difference between the

1% throughout all benchmarks? Guess that is quite good.

> base kernel without these patches, with the patches and virtio-balloon
> disabled, and with the patches and virtio-balloon enabled with hinting.
> 
> Changes from the RFC:
> Moved aeration requested flag out of aerator and into zone->flags.
> Moved boundary out of free_area and into local variables for aeration.
> Moved aeration cycle out of interrupt and into workqueue.
> Left nr_free as total pages instead of splitting it between raw and aerated.
> Combined size and physical address values in virtio ring into one 64b value.
> Restructured the patch set to reduce patches from 11 to 6.
> 

I'm planning to look into the details, but will be on PTO for two weeks
starting this Saturday (and still have other things to finish first :/ ).

> ---
> 
> Alexander Duyck (6):
>       mm: Adjust shuffle code to allow for future coalescing
>       mm: Move set/get_pcppage_migratetype to mmzone.h
>       mm: Use zone and order instead of free area in free_list manipulators
>       mm: Introduce "aerated" pages
>       mm: Add logic for separating "aerated" pages from "raw" pages
>       virtio-balloon: Add support for aerating memory via hinting
> 
> 
>  drivers/virtio/Kconfig              |    1 
>  drivers/virtio/virtio_balloon.c     |  110 ++++++++++++++
>  include/linux/memory_aeration.h     |  118 +++++++++++++++
>  include/linux/mmzone.h              |  113 +++++++++------
>  include/linux/page-flags.h          |    8 +
>  include/uapi/linux/virtio_balloon.h |    1 
>  mm/Kconfig                          |    5 +
>  mm/Makefile                         |    1 
>  mm/aeration.c                       |  270 +++++++++++++++++++++++++++++++++++
>  mm/page_alloc.c                     |  203 ++++++++++++++++++--------
>  mm/shuffle.c                        |   24 ---
>  mm/shuffle.h                        |   35 +++++
>  12 files changed, 753 insertions(+), 136 deletions(-)
>  create mode 100644 include/linux/memory_aeration.h
>  create mode 100644 mm/aeration.c

Compared to

 17 files changed, 838 insertions(+), 86 deletions(-)
 create mode 100644 include/linux/memory_aeration.h
 create mode 100644 mm/aeration.c

this looks like a good improvement :)
Dave Hansen June 25, 2019, 2:10 p.m. UTC | #2
On 6/25/19 12:42 AM, David Hildenbrand wrote:
> On 20.06.19 00:32, Alexander Duyck wrote:
> I still *detest* the terminology, sorry. Can't you come up with a
> simpler terminology that makes more sense in the context of operating
> systems and pages we want to hint to the hypervisor? (that is the only
> use case you are using it for so far)

It's a wee bit too cute for my taste as well.  I could probably live
with it in the data structures, but having it show up out in places like
Kconfig and filenames goes too far.

For instance, someone seeing memory_aeration.c will have no concept
what's in the file.  Could we call it something like memory_paravirt.c?
 Or even mm/paravirt.c.

Could you talk for a minute about why the straightforward naming like
"hinted/unhinted" wasn't used?  Is there something else we could ever
use this infrastructure for that is not related to paravirtualized free
page hinting?
Alexander H Duyck June 25, 2019, 4:09 p.m. UTC | #3
On Tue, Jun 25, 2019 at 12:42 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 20.06.19 00:32, Alexander Duyck wrote:
> > This series provides an asynchronous means of hinting to a hypervisor
> > that a guest page is no longer in use and can have the data associated
> > with it dropped. To do this I have implemented functionality that allows
> > for what I am referring to as waste page treatment.
> >
> > I have based many of the terms and functionality off of waste water
> > treatment, the idea for the similarity occurred to me after I had reached
> > the point of referring to the hints as "bubbles", as the hints used the
> > same approach as the balloon functionality but would disappear if they
> > were touched, as a result I started to think of the virtio device as an
> > aerator. The general idea with all of this is that the guest should be
> > treating the unused pages so that when they end up heading "downstream"
> > to either another guest, or back at the host they will not need to be
> > written to swap.
> >
> > When the number of "dirty" pages in a given free_area exceeds our high
> > water mark, which is currently 32, we will schedule the aeration task to
> > start going through and scrubbing the zone. While the scrubbing is taking
> > place a boundary will be defined that we use to separate the "aerated"
> > pages from the "dirty" ones. We use the ZONE_AERATION_ACTIVE bit to flag
> > when these boundaries are in place.
>
> I still *detest* the terminology, sorry. Can't you come up with a
> simpler terminology that makes more sense in the context of operating
> systems and pages we want to hint to the hypervisor? (that is the only
> use case you are using it for so far)

I'm open to suggestions. The terminology is just what I went with, as I
had gone from balloon to thinking of this as a bubble, since it was a
balloon without the deflate logic. From there I got to aeration, since
it is filling the buddy allocator with those bubbles.

> >
> > I am leaving a number of things hard-coded such as limiting the lowest
> > order processed to PAGEBLOCK_ORDER, and have left it up to the guest to
> > determine what batch size it wants to allocate to process the hints.
> >
> > My primary testing has just been to verify the memory is being freed after
> > allocation by running memhog 32g in the guest and watching the total free
> > memory via /proc/meminfo on the host. With this I have verified most of
> > the memory is freed after each iteration. As far as performance I have
> > been mainly focusing on the will-it-scale/page_fault1 test running with
> > 16 vcpus. With that I have seen a less than 1% difference between the
>
> 1% throughout all benchmarks? Guess that is quite good.

That is the general idea. What I wanted to avoid was this introducing
any significant slowdown, especially in the case where we weren't
using it.

> > base kernel without these patches, with the patches and virtio-balloon
> > disabled, and with the patches and virtio-balloon enabled with hinting.
> >
> > Changes from the RFC:
> > Moved aeration requested flag out of aerator and into zone->flags.
> > Moved boundary out of free_area and into local variables for aeration.
> > Moved aeration cycle out of interrupt and into workqueue.
> > Left nr_free as total pages instead of splitting it between raw and aerated.
> > Combined size and physical address values in virtio ring into one 64b value.
> > Restructured the patch set to reduce patches from 11 to 6.
> >
>
> I'm planning to look into the details, but will be on PTO for two weeks
> starting this Saturday (and still have other things to finish first :/ ).

Thanks. No rush. I will be on PTO for the next couple of weeks myself.

> > ---
> >
> > Alexander Duyck (6):
> >       mm: Adjust shuffle code to allow for future coalescing
> >       mm: Move set/get_pcppage_migratetype to mmzone.h
> >       mm: Use zone and order instead of free area in free_list manipulators
> >       mm: Introduce "aerated" pages
> >       mm: Add logic for separating "aerated" pages from "raw" pages
> >       virtio-balloon: Add support for aerating memory via hinting
> >
> >
> >  drivers/virtio/Kconfig              |    1
> >  drivers/virtio/virtio_balloon.c     |  110 ++++++++++++++
> >  include/linux/memory_aeration.h     |  118 +++++++++++++++
> >  include/linux/mmzone.h              |  113 +++++++++------
> >  include/linux/page-flags.h          |    8 +
> >  include/uapi/linux/virtio_balloon.h |    1
> >  mm/Kconfig                          |    5 +
> >  mm/Makefile                         |    1
> >  mm/aeration.c                       |  270 +++++++++++++++++++++++++++++++++++
> >  mm/page_alloc.c                     |  203 ++++++++++++++++++--------
> >  mm/shuffle.c                        |   24 ---
> >  mm/shuffle.h                        |   35 +++++
> >  12 files changed, 753 insertions(+), 136 deletions(-)
> >  create mode 100644 include/linux/memory_aeration.h
> >  create mode 100644 mm/aeration.c
>
> Compared to
>
>  17 files changed, 838 insertions(+), 86 deletions(-)
>  create mode 100644 include/linux/memory_aeration.h
>  create mode 100644 mm/aeration.c
>
> this looks like a good improvement :)

Thanks.

- Alex
Alexander H Duyck June 25, 2019, 5 p.m. UTC | #4
On Tue, Jun 25, 2019 at 7:10 AM Dave Hansen <dave.hansen@intel.com> wrote:
>
> On 6/25/19 12:42 AM, David Hildenbrand wrote:
> > On 20.06.19 00:32, Alexander Duyck wrote:
> > I still *detest* the terminology, sorry. Can't you come up with a
> > simpler terminology that makes more sense in the context of operating
> > systems and pages we want to hint to the hypervisor? (that is the only
> > use case you are using it for so far)
>
> It's a wee bit too cute for my taste as well.  I could probably live
> with it in the data structures, but having it show up out in places like
> Kconfig and filenames goes too far.
>
> For instance, someone seeing memory_aeration.c will have no concept
> what's in the file.  Could we call it something like memory_paravirt.c?
>  Or even mm/paravirt.c.

Well, I couldn't come up with a better explanation of what this was
doing. I also wanted to avoid mentioning hinting specifically, because
a few series that reference it for slightly different purposes have
already been committed upstream, such as the one by Wei Wang that was
doing free memory tracking for migration purposes:
https://lkml.org/lkml/2018/7/10/211.

Basically what we are doing is inflating the memory size we can report
by inserting voids into the free memory areas. In my mind that matches
up very well with what "aeration" is. It is similar to the balloon in
functionality; however, instead of inflating the balloon we are
inflating the free_list for higher-order free areas by creating voids
where the madvised pages were.
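
As a rough illustration of what that treatment amounts to on the host
side (a simplified userspace sketch, not code from QEMU or from this
series), the hypervisor can simply drop the backing for the hinted
range once the guest reports it:

#include <sys/mman.h>
#include <stdio.h>

/* hva/len describe the hinted guest range mapped into the host process. */
static int treat_hinted_range(void *hva, size_t len)
{
	/*
	 * Drop the backing pages; the guest's next access to the range is
	 * satisfied with fresh zero-filled memory rather than data that
	 * had to be written out to swap.
	 */
	if (madvise(hva, len, MADV_DONTNEED) < 0) {
		perror("madvise");
		return -1;
	}
	return 0;
}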

> Could you talk for a minute about why the straightforward naming like
> "hinted/unhinted" wasn't used?  Is there something else we could ever
> use this infrastructure for that is not related to paravirtualized free
> page hinting?

I was hoping there might be something in the future that could use the
infrastructure if it needed to go through and sort out used versus
unused memory. The way things are designed right now, for instance,
there is really only a define that limits the lowest order of pages
that are processed. So if we wanted to use this for another purpose, we
could replace the AERATOR_MIN_ORDER define with something that is
specific to that use case.
David Hildenbrand June 25, 2019, 6:12 p.m. UTC | #5
On 25.06.19 19:00, Alexander Duyck wrote:
> On Tue, Jun 25, 2019 at 7:10 AM Dave Hansen <dave.hansen@intel.com> wrote:
>>
>> On 6/25/19 12:42 AM, David Hildenbrand wrote:
>>> On 20.06.19 00:32, Alexander Duyck wrote:
>>> I still *detest* the terminology, sorry. Can't you come up with a
>>> simpler terminology that makes more sense in the context of operating
>>> systems and pages we want to hint to the hypervisor? (that is the only
>>> use case you are using it for so far)
>>
>> It's a wee bit too cute for my taste as well.  I could probably live
>> with it in the data structures, but having it show up out in places like
>> Kconfig and filenames goes too far.
>>
>> For instance, someone seeing memory_aeration.c will have no concept
>> what's in the file.  Could we call it something like memory_paravirt.c?
>>  Or even mm/paravirt.c.
> 
> Well I couldn't come up with a better explanation of what this was
> doing, also I wanted to avoid mentioning hinting specifically because
> there have already been a few series that have been committed upstream
> that reference this for slightly different purposes such as the one by
> Wei Wang that was doing free memory tracking for migration purposes,
> https://lkml.org/lkml/2018/7/10/211.

That one we referred to as "free page reporting" instead.

> 
> Basically what we are doing is inflating the memory size we can report
> by inserting voids into the free memory areas. In my mind that matches
> up very well with what "aeration" is. It is similar to balloon in
> functionality, however instead of inflating the balloon we are
> inflating the free_list for higher order free areas by creating voids
> where the madvised pages were.
> 
>> Could you talk for a minute about why the straightforward naming like
>> "hinted/unhinted" wasn't used?  Is there something else we could ever
>> use this infrastructure for that is not related to paravirtualized free
>> page hinting?
> 
> I was hoping there might be something in the future that could use the
> infrastructure if it needed to go through and sort out used versus
> unused memory. The way things are designed right now for instance
> there is only really a define that is limiting the lowest order pages
> that are processed. So if we wanted to use this for another purpose we
> could replace the AERATOR_MIN_ORDER define with something that is
> specific to that use case.


I'd still vote to call this "hinting" in some form. Whenever a new use
case eventually pops up, we could generalize this approach. But well,
that's just my opinion :)
Dave Hansen June 25, 2019, 6:22 p.m. UTC | #6
On 6/25/19 10:00 AM, Alexander Duyck wrote:
> Basically what we are doing is inflating the memory size we can report
> by inserting voids into the free memory areas. In my mind that matches
> up very well with what "aeration" is. It is similar to balloon in
> functionality, however instead of inflating the balloon we are
> inflating the free_list for higher order free areas by creating voids
> where the madvised pages were.

OK, then call it "free page auto ballooning" or "auto ballooning" or
"allocator ballooning".  s390 calls them "unused pages".

Any of those things are clearer and more meaningful than "page aeration"
to me.
Christophe de Dinechin June 26, 2019, 9:01 a.m. UTC | #7
David Hildenbrand writes:

> On 20.06.19 00:32, Alexander Duyck wrote:
>> This series provides an asynchronous means of hinting to a hypervisor
>> that a guest page is no longer in use and can have the data associated
>> with it dropped. To do this I have implemented functionality that allows
>> for what I am referring to as waste page treatment.
>> 
>> I have based many of the terms and functionality off of waste water
>> treatment, the idea for the similarity occurred to me after I had reached
>> the point of referring to the hints as "bubbles", as the hints used the
>> same approach as the balloon functionality but would disappear if they
>> were touched, as a result I started to think of the virtio device as an
>> aerator. The general idea with all of this is that the guest should be
>> treating the unused pages so that when they end up heading "downstream"
>> to either another guest, or back at the host they will not need to be
>> written to swap.
>> 
>> When the number of "dirty" pages in a given free_area exceeds our high
>> water mark, which is currently 32, we will schedule the aeration task to
>> start going through and scrubbing the zone. While the scrubbing is taking
>> place a boundary will be defined that we use to separate the "aerated"
>> pages from the "dirty" ones. We use the ZONE_AERATION_ACTIVE bit to flag
>> when these boundaries are in place.
>
> I still *detest* the terminology, sorry. Can't you come up with a
> simpler terminology that makes more sense in the context of operating
> systems and pages we want to hint to the hypervisor? (that is the only
> use case you are using it for so far)

FWIW, I thought the terminology made sense, in particular given the analogy
with the balloon driver. Operating systems in general, and Linux in
particular, already use tons of analogy-supported terminology. In
particular, a "waste page treatment" terminology is not very far from
the very common "garbage collection" or "scrubbing" wordings. I would find
"hinting" much less specific. for example.

Usually, the phrases that stick are somewhat unique while providing a
useful analogy to serve as a reminder of what the thing actually
does. IMHO, it's the case here on both fronts, so I like it.

>
>> 
>> I am leaving a number of things hard-coded such as limiting the lowest
>> order processed to PAGEBLOCK_ORDER, and have left it up to the guest to
>> determine what batch size it wants to allocate to process the hints.
>> 
>> My primary testing has just been to verify the memory is being freed after
>> allocation by running memhog 32g in the guest and watching the total free
>> memory via /proc/meminfo on the host. With this I have verified most of
>> the memory is freed after each iteration. As far as performance I have
>> been mainly focusing on the will-it-scale/page_fault1 test running with
>> 16 vcpus. With that I have seen a less than 1% difference between the
>
> 1% throughout all benchmarks? Guess that is quite good.
>
>> base kernel without these patches, with the patches and virtio-balloon
>> disabled, and with the patches and virtio-balloon enabled with hinting.
>> 
>> Changes from the RFC:
>> Moved aeration requested flag out of aerator and into zone->flags.
>> Moved boundary out of free_area and into local variables for aeration.
>> Moved aeration cycle out of interrupt and into workqueue.
>> Left nr_free as total pages instead of splitting it between raw and aerated.
>> Combined size and physical address values in virtio ring into one 64b value.
>> Restructured the patch set to reduce patches from 11 to 6.
>> 
>
> I'm planning to look into the details, but will be on PTO for two weeks
> starting this Saturday (and still have other things to finish first :/ ).
>
>> ---
>> 
>> Alexander Duyck (6):
>>       mm: Adjust shuffle code to allow for future coalescing
>>       mm: Move set/get_pcppage_migratetype to mmzone.h
>>       mm: Use zone and order instead of free area in free_list manipulators
>>       mm: Introduce "aerated" pages
>>       mm: Add logic for separating "aerated" pages from "raw" pages
>>       virtio-balloon: Add support for aerating memory via hinting
>> 
>> 
>>  drivers/virtio/Kconfig              |    1 
>>  drivers/virtio/virtio_balloon.c     |  110 ++++++++++++++
>>  include/linux/memory_aeration.h     |  118 +++++++++++++++
>>  include/linux/mmzone.h              |  113 +++++++++------
>>  include/linux/page-flags.h          |    8 +
>>  include/uapi/linux/virtio_balloon.h |    1 
>>  mm/Kconfig                          |    5 +
>>  mm/Makefile                         |    1 
>>  mm/aeration.c                       |  270 +++++++++++++++++++++++++++++++++++
>>  mm/page_alloc.c                     |  203 ++++++++++++++++++--------
>>  mm/shuffle.c                        |   24 ---
>>  mm/shuffle.h                        |   35 +++++
>>  12 files changed, 753 insertions(+), 136 deletions(-)
>>  create mode 100644 include/linux/memory_aeration.h
>>  create mode 100644 mm/aeration.c
>
> Compared to
>
>  17 files changed, 838 insertions(+), 86 deletions(-)
>  create mode 100644 include/linux/memory_aeration.h
>  create mode 100644 mm/aeration.c
>
> this looks like a good improvement :)
David Hildenbrand June 26, 2019, 9:12 a.m. UTC | #8
On 26.06.19 11:01, Christophe de Dinechin wrote:
> 
> David Hildenbrand writes:
> 
>> On 20.06.19 00:32, Alexander Duyck wrote:
>>> This series provides an asynchronous means of hinting to a hypervisor
>>> that a guest page is no longer in use and can have the data associated
>>> with it dropped. To do this I have implemented functionality that allows
>>> for what I am referring to as waste page treatment.
>>>
>>> I have based many of the terms and functionality off of waste water
>>> treatment, the idea for the similarity occurred to me after I had reached
>>> the point of referring to the hints as "bubbles", as the hints used the
>>> same approach as the balloon functionality but would disappear if they
>>> were touched, as a result I started to think of the virtio device as an
>>> aerator. The general idea with all of this is that the guest should be
>>> treating the unused pages so that when they end up heading "downstream"
>>> to either another guest, or back at the host they will not need to be
>>> written to swap.
>>>
>>> When the number of "dirty" pages in a given free_area exceeds our high
>>> water mark, which is currently 32, we will schedule the aeration task to
>>> start going through and scrubbing the zone. While the scrubbing is taking
>>> place a boundary will be defined that we use to separate the "aerated"
>>> pages from the "dirty" ones. We use the ZONE_AERATION_ACTIVE bit to flag
>>> when these boundaries are in place.
>>
>> I still *detest* the terminology, sorry. Can't you come up with a
>> simpler terminology that makes more sense in the context of operating
>> systems and pages we want to hint to the hypervisor? (that is the only
>> use case you are using it for so far)
> 
> FWIW, I thought the terminology made sense, in particular given the analogy
> with the balloon driver. Operating systems in general, and Linux in
> particular, already use tons of analogy-supported terminology. In
> particular, a "waste page treatment" terminology is not very far from
> the very common "garbage collection" or "scrubbing" wordings. I would find
> "hinting" much less specific. for example.
> 
> Usually, the phrases that stick are somewhat unique while providing a
> useful analogy to server as a reminder of what the thing actually
> does. IMHO, it's the case here on both fronts, so I like it.

While something like "waste pages" makes sense, "aeration" is far out of
my comfort zone.

An analogy is like a joke. If you have to explain it, it's not that
good. (see, that was a good analogy ;) ).
David Hildenbrand July 15, 2019, 9:41 a.m. UTC | #9
On 25.06.19 20:22, Dave Hansen wrote:
> On 6/25/19 10:00 AM, Alexander Duyck wrote:
>> Basically what we are doing is inflating the memory size we can report
>> by inserting voids into the free memory areas. In my mind that matches
>> up very well with what "aeration" is. It is similar to balloon in
>> functionality, however instead of inflating the balloon we are
>> inflating the free_list for higher order free areas by creating voids
>> where the madvised pages were.
> 
> OK, then call it "free page auto ballooning" or "auto ballooning" or
> "allocator ballooning".  s390 calls them "unused pages".
> 
> Any of those things are clearer and more meaningful than "page aeration"
> to me.
> 

Alex, if you want to generalize the approach and not call it "hinting",
what about something similar to "page recycling"?

It would also fit the "waste" example and would be clearer - at least to
me. Well, "bubble" does not apply anymore ...
Alexander Duyck July 15, 2019, 2:57 p.m. UTC | #10
On Mon, 2019-07-15 at 11:41 +0200, David Hildenbrand wrote:
> On 25.06.19 20:22, Dave Hansen wrote:
> > On 6/25/19 10:00 AM, Alexander Duyck wrote:
> > > Basically what we are doing is inflating the memory size we can report
> > > by inserting voids into the free memory areas. In my mind that matches
> > > up very well with what "aeration" is. It is similar to balloon in
> > > functionality, however instead of inflating the balloon we are
> > > inflating the free_list for higher order free areas by creating voids
> > > where the madvised pages were.
> > 
> > OK, then call it "free page auto ballooning" or "auto ballooning" or
> > "allocator ballooning".  s390 calls them "unused pages".
> > 
> > Any of those things are clearer and more meaningful than "page aeration"
> > to me.
> > 
> 
> Alex, if you want to generalize the approach, and not call it "hinting",
> what about something similar to "page recycling".
> 
> Would also fit the "waste" example and would be clearer - at least to
> me. Well, "bubble" does not apply anymore ...
> 

I am fine with "page hinting". I have already gone through and started the
rename. The problem with "page recycling" is that is actually pretty
similar to the name we had in the networking space for how the NICs will
recycle the Rx buffers.

For now I am going through and replacing instances of Aerated with Hinted,
and aeration with page_hinting. I should have a new patch set ready in a
couple days assuming no unforeseen issues.

Thanks.

- Alex