[0/2] overcommit: introduce mem-lock-onfault

Message ID: 20241205231909.1161950-1-d-tatianin@yandex-team.ru

Message

Daniil Tatianin Dec. 5, 2024, 11:19 p.m. UTC
Currently, passing mem-lock=on to QEMU causes memory usage to grow by
huge amounts:

no memlock:
    $ qemu-system-x86_64 -overcommit mem-lock=off
    $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
    45652

    $ ./qemu-system-x86_64 -overcommit mem-lock=off -enable-kvm
    $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
    39756

memlock:
    $ qemu-system-x86_64 -overcommit mem-lock=on
    $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
    1309876

    $ ./qemu-system-x86_64 -overcommit mem-lock=on -enable-kvm
    $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
    259956

This is caused by the fact that mlockall(2) immediately write-faults
every existing and future anonymous mapping in the process.

One of the reasons to enable mem-lock is to protect the QEMU process's
pages from being compacted and migrated by kcompactd, which does so by
modifying the live process's page tables and causing thousands of TLB
flush IPIs per second, basically stealing all guest time while it's
active.

mem-lock=on helps against this (given compact_unevictable_allowed is 0),
but the memory overhead it introduces is an undesirable side effect. We
can avoid it entirely by passing MCL_ONFAULT to mlockall(2), which is
what this series makes possible with a new command line option called
mem-lock-onfault.
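
For illustration, below is a minimal sketch of what the difference looks
like at the mlockall(2) level; the helper name and its boolean parameter
are made up for this example and are not taken from the actual patches:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Hypothetical helper: lock all current and future mappings,
     * optionally deferring the locking until each page is first
     * touched (MCL_ONFAULT requires Linux >= 4.4). */
    static int lock_all_memory(bool on_fault)
    {
        int flags = MCL_CURRENT | MCL_FUTURE;

        if (on_fault) {
            /* Lock pages as they are faulted in instead of
             * write-faulting every anonymous mapping up front. */
            flags |= MCL_ONFAULT;
        }

        if (mlockall(flags) < 0) {
            int err = errno;
            fprintf(stderr, "mlockall: %s\n", strerror(err));
            return -err;
        }
        return 0;
    }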

memlock-onfault:
    $ qemu-system-x86_64 -overcommit mem-lock-onfault=on
    $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
    54004

    $ ./qemu-system-x86_64 -overcommit mem-lock-onfault=on -enable-kvm
    $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
    47772

You may notice the memory usage is still slightly higher, in this case
by a few megabytes over the mem-lock=off case. I was able to trace this
down to a bug in the Linux kernel where MCL_ONFAULT is not honored for
the early process heap (allocated with brk(2) etc.), so that memory is
still write-faulted; even so, this is far less than with plain
mem-lock=on.

Daniil Tatianin (2):
  os: add an ability to lock memory on_fault
  overcommit: introduce mem-lock-onfault

 include/sysemu/os-posix.h |  2 +-
 include/sysemu/os-win32.h |  3 ++-
 include/sysemu/sysemu.h   |  1 +
 migration/postcopy-ram.c  |  4 ++--
 os-posix.c                | 10 ++++++++--
 qemu-options.hx           | 13 ++++++++++---
 system/globals.c          |  1 +
 system/vl.c               | 18 ++++++++++++++++--
 8 files changed, 41 insertions(+), 11 deletions(-)

Comments

Peter Xu Dec. 6, 2024, 1:08 a.m. UTC | #1
On Fri, Dec 06, 2024 at 02:19:06AM +0300, Daniil Tatianin wrote:
> Currently, passing mem-lock=on to QEMU causes memory usage to grow by
> huge amounts:
> 
> no memlock:
>     $ qemu-system-x86_64 -overcommit mem-lock=off
>     $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
>     45652
> 
>     $ ./qemu-system-x86_64 -overcommit mem-lock=off -enable-kvm
>     $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
>     39756
> 
> memlock:
>     $ qemu-system-x86_64 -overcommit mem-lock=on
>     $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
>     1309876
> 
>     $ ./qemu-system-x86_64 -overcommit mem-lock=on -enable-kvm
>     $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
>     259956
> 
> This is caused by the fact that mlockall(2) automatically
> write-faults every existing and future anonymous mappings in the
> process right away.
> 
> One of the reasons to enable mem-lock is to protect a QEMU process'
> pages from being compacted and migrated by kcompactd (which does so
> by messing with a live process page tables causing thousands of TLB
> flush IPIs per second) basically stealing all guest time while it's
> active.
> 
> mem-lock=on helps against this (given compact_unevictable_allowed is 0),
> but the memory overhead it introduces is an undesirable side effect,
> which we can completely avoid by passing MCL_ONFAULT to mlockall, which
> is what this series allows to do with a new command line option called
> mem-lock-onfault.

IMHO it'll always be helpful to dig into and provide information on why
such a difference exists.  E.g. guest mem should normally be the major
mem sink, and that definitely won't be affected by ON_FAULT either way.

I had a quick look specifically at tcg (as that really surprised me a
bit..).  When you look at the mappings there's a constant 1G shmem map
that always gets locked and populated.

It turns out to be tcg's jit buffer, alloc_code_gen_buffer_splitwx_memfd:

    buf_rw = qemu_memfd_alloc("tcg-jit", size, 0, &fd, errp);
    if (buf_rw == NULL) {
        goto fail;
    }

    buf_rx = mmap(NULL, size, host_prot_read_exec(), MAP_SHARED, fd, 0);
    if (buf_rx == MAP_FAILED) {
        error_setg_errno(errp, errno,
                         "failed to map shared memory for execute");
        goto fail;
    }

Looks like that's the major reason why tcg gets constantly bloated by
roughly 1G under mlockall - that buffer seems to come from
tcg_init_machine().  I didn't check kvm.
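
As a side note (this helper is just an illustration of the inspection,
not part of this series): one quick way to see how much of a process is
actually locked, and confirm it's the jit buffer, is to sum the
"Locked:" fields in /proc/<pid>/smaps:

    #include <stdio.h>

    /* Sum the "Locked:" fields of /proc/<pid>/smaps (defaults to
     * "self" when no pid is given on the command line). */
    int main(int argc, char **argv)
    {
        char path[64], line[256];
        long total_kb = 0, kb;
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%s/smaps",
                 argc > 1 ? argv[1] : "self");
        f = fopen(path, "r");
        if (!f) {
            perror(path);
            return 1;
        }

        while (fgets(line, sizeof(line), f)) {
            if (sscanf(line, "Locked: %ld kB", &kb) == 1) {
                total_kb += kb;
            }
        }
        fclose(f);

        printf("total locked: %ld kB\n", total_kb);
        return 0;
    }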

Logically, having an on-fault option won't ever hurt, so it's probably
not an issue to have it anyway.  Still, sharing my finding above, as
IIUC that's mostly why it was bloated for tcg, so maybe there are other
options too.

> 
> memlock-onfault:
>     $ qemu-system-x86_64 -overcommit mem-lock-onfault=on
>     $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
>     54004
> 
>     $ ./qemu-system-x86_64 -overcommit mem-lock-onfault=on -enable-kvm
>     $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
>     47772
> 
> You may notice the memory usage is still slightly higher, in this case
> by a few megabytes over the mem-lock=off case. I was able to trace this
> down to a bug in the linux kernel with MCL_ONFAULT not being honored for
> the early process heap (with brk(2) etc.) so it is still write-faulted in
> this case, but it's still way less than it was with just the mem-lock=on.
> 
> Daniil Tatianin (2):
>   os: add an ability to lock memory on_fault
>   overcommit: introduce mem-lock-onfault
> 
>  include/sysemu/os-posix.h |  2 +-
>  include/sysemu/os-win32.h |  3 ++-
>  include/sysemu/sysemu.h   |  1 +
>  migration/postcopy-ram.c  |  4 ++--
>  os-posix.c                | 10 ++++++++--
>  qemu-options.hx           | 13 ++++++++++---
>  system/globals.c          |  1 +
>  system/vl.c               | 18 ++++++++++++++++--
>  8 files changed, 41 insertions(+), 11 deletions(-)
> 
> -- 
> 2.34.1
> 
>
Daniil Tatianin Dec. 9, 2024, 7:40 a.m. UTC | #2
On 12/6/24 4:08 AM, Peter Xu wrote:

> On Fri, Dec 06, 2024 at 02:19:06AM +0300, Daniil Tatianin wrote:
>> Currently, passing mem-lock=on to QEMU causes memory usage to grow by
>> huge amounts:
>>
>> no memlock:
>>      $ qemu-system-x86_64 -overcommit mem-lock=off
>>      $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
>>      45652
>>
>>      $ ./qemu-system-x86_64 -overcommit mem-lock=off -enable-kvm
>>      $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
>>      39756
>>
>> memlock:
>>      $ qemu-system-x86_64 -overcommit mem-lock=on
>>      $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
>>      1309876
>>
>>      $ ./qemu-system-x86_64 -overcommit mem-lock=on -enable-kvm
>>      $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
>>      259956
>>
>> This is caused by the fact that mlockall(2) automatically
>> write-faults every existing and future anonymous mappings in the
>> process right away.
>>
>> One of the reasons to enable mem-lock is to protect a QEMU process'
>> pages from being compacted and migrated by kcompactd (which does so
>> by messing with a live process page tables causing thousands of TLB
>> flush IPIs per second) basically stealing all guest time while it's
>> active.
>>
>> mem-lock=on helps against this (given compact_unevictable_allowed is 0),
>> but the memory overhead it introduces is an undesirable side effect,
>> which we can completely avoid by passing MCL_ONFAULT to mlockall, which
>> is what this series allows to do with a new command line option called
>> mem-lock-onfault.
> IMHO it'll be always helpful to dig and provide information on why such
> difference existed.  E.g. guest mem should normally be the major mem sink
> and that definitely won't be affected by either ON_FAULT or not.
>
> I had a quick look explicitly on tcg (as that really surprised me a bit..).
> When you look at the mappings there's 1G constant shmem map that always got
> locked and populated.
>
> It turns out to be tcg's jit buffer, alloc_code_gen_buffer_splitwx_memfd:

Thanks for looking into this! I'd guessed it was something to do with
the JIT; makes sense.

>      buf_rw = qemu_memfd_alloc("tcg-jit", size, 0, &fd, errp);
>      if (buf_rw == NULL) {
>          goto fail;
>      }
>
>      buf_rx = mmap(NULL, size, host_prot_read_exec(), MAP_SHARED, fd, 0);
>      if (buf_rx == MAP_FAILED) {
>          error_setg_errno(errp, errno,
>                           "failed to map shared memory for execute");
>          goto fail;
>      }
>
> Looks like that's the major reason why tcg has mlockall bloated constantly
> with roughly 1G size - that seems to be from tcg_init_machine().  I didn't
> check kvm.
>
> Logically having a on-fault option won't ever hurt, so probably not an
> issue to have it anyway.  Still, share my finding above, as IIUC that's
> mostly why it was bloated for tcg, so maybe there're other options too.

Yeah, the situation with KVM is slightly better, although it's still a 
~200MiB overhead with default Q35 and no extra devices (I haven't 
measured the difference with various devices).

I think it's definitely nice to have an on-fault option for this, as 
optimizing every possible mmap caller for the rare mem-lock=on case 
might be too ambitious.

Thanks!

>
>> memlock-onfault:
>>      $ qemu-system-x86_64 -overcommit mem-lock-onfault=on
>>      $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
>>      54004
>>
>>      $ ./qemu-system-x86_64 -overcommit mem-lock-onfault=on -enable-kvm
>>      $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
>>      47772
>>
>> You may notice the memory usage is still slightly higher, in this case
>> by a few megabytes over the mem-lock=off case. I was able to trace this
>> down to a bug in the linux kernel with MCL_ONFAULT not being honored for
>> the early process heap (with brk(2) etc.) so it is still write-faulted in
>> this case, but it's still way less than it was with just the mem-lock=on.
>>
>> Daniil Tatianin (2):
>>    os: add an ability to lock memory on_fault
>>    overcommit: introduce mem-lock-onfault
>>
>>   include/sysemu/os-posix.h |  2 +-
>>   include/sysemu/os-win32.h |  3 ++-
>>   include/sysemu/sysemu.h   |  1 +
>>   migration/postcopy-ram.c  |  4 ++--
>>   os-posix.c                | 10 ++++++++--
>>   qemu-options.hx           | 13 ++++++++++---
>>   system/globals.c          |  1 +
>>   system/vl.c               | 18 ++++++++++++++++--
>>   8 files changed, 41 insertions(+), 11 deletions(-)
>>
>> -- 
>> 2.34.1
>>
>>
Vladimir Sementsov-Ogievskiy Dec. 10, 2024, 2:48 p.m. UTC | #3
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Peter Xu Dec. 10, 2024, 4:48 p.m. UTC | #4
On Mon, Dec 09, 2024 at 10:40:51AM +0300, Daniil Tatianin wrote:
> On 12/6/24 4:08 AM, Peter Xu wrote:
> 
> > On Fri, Dec 06, 2024 at 02:19:06AM +0300, Daniil Tatianin wrote:
> > > Currently, passing mem-lock=on to QEMU causes memory usage to grow by
> > > huge amounts:
> > > 
> > > no memlock:
> > >      $ qemu-system-x86_64 -overcommit mem-lock=off
> > >      $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
> > >      45652
> > > 
> > >      $ ./qemu-system-x86_64 -overcommit mem-lock=off -enable-kvm
> > >      $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
> > >      39756
> > > 
> > > memlock:
> > >      $ qemu-system-x86_64 -overcommit mem-lock=on
> > >      $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
> > >      1309876
> > > 
> > >      $ ./qemu-system-x86_64 -overcommit mem-lock=on -enable-kvm
> > >      $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
> > >      259956
> > > 
> > > This is caused by the fact that mlockall(2) automatically
> > > write-faults every existing and future anonymous mappings in the
> > > process right away.
> > > 
> > > One of the reasons to enable mem-lock is to protect a QEMU process'
> > > pages from being compacted and migrated by kcompactd (which does so
> > > by messing with a live process page tables causing thousands of TLB
> > > flush IPIs per second) basically stealing all guest time while it's
> > > active.
> > > 
> > > mem-lock=on helps against this (given compact_unevictable_allowed is 0),
> > > but the memory overhead it introduces is an undesirable side effect,
> > > which we can completely avoid by passing MCL_ONFAULT to mlockall, which
> > > is what this series allows to do with a new command line option called
> > > mem-lock-onfault.
> > IMHO it'll be always helpful to dig and provide information on why such
> > difference existed.  E.g. guest mem should normally be the major mem sink
> > and that definitely won't be affected by either ON_FAULT or not.
> > 
> > I had a quick look explicitly on tcg (as that really surprised me a bit..).
> > When you look at the mappings there's 1G constant shmem map that always got
> > locked and populated.
> > 
> > It turns out to be tcg's jit buffer, alloc_code_gen_buffer_splitwx_memfd:
> 
> Thanks for looking into this! I'd guessed it was something to do with JIT,
> makes sense.
> 
> >      buf_rw = qemu_memfd_alloc("tcg-jit", size, 0, &fd, errp);
> >      if (buf_rw == NULL) {
> >          goto fail;
> >      }
> > 
> >      buf_rx = mmap(NULL, size, host_prot_read_exec(), MAP_SHARED, fd, 0);
> >      if (buf_rx == MAP_FAILED) {
> >          error_setg_errno(errp, errno,
> >                           "failed to map shared memory for execute");
> >          goto fail;
> >      }
> > 
> > Looks like that's the major reason why tcg has mlockall bloated constantly
> > with roughly 1G size - that seems to be from tcg_init_machine().  I didn't
> > check kvm.
> > 
> > Logically having a on-fault option won't ever hurt, so probably not an
> > issue to have it anyway.  Still, share my finding above, as IIUC that's
> > mostly why it was bloated for tcg, so maybe there're other options too.
> 
> Yeah, the situation with KVM is slightly better, although it's still a
> ~200MiB overhead with default Q35 and no extra devices (I haven't measured
> the difference with various devices).
> 
> I think it's definitely nice to have an on-fault option for this, as
> optimizing every possible mmap caller for the rare mem-lock=on case might be
> too ambitious.

It really depends, IMHO, and that's why I didn't already ack the series.

It may be relevant to the trade-off here of allowing faults to happen
later even with mem-lock=on.  The question is why, for example in your
use case, you would like to lock the memory.

Take kvm-rt as an example: I believe locking is needed there because RT
apps (running in the guest) would like to avoid page faults throughout
the stack, so that the guest workload, especially on the latency side of
things, is predictable.

Here, if on-fault is enabled, it could already defeat that purpose.

Or perhaps the current use case is making sure that after QEMU boots the
memory will always be present, so that even if the host later faces
memory stress it won't affect anything running in the VM, as everything
was pre-allocated (so that's beyond memory-backend-*,prealloc=on,
because it covers QEMU/KVM memory too).  Meanwhile, locked pages won't
be swapped out, so they're always there.

But then with on-fault, the pages will only be locked upon access, which
means the guarantee that "QEMU secures the memory on boot" is gone too.

That's why I was wondering whether your specific use case really wants
on-fault, or whether you'd rather have e.g. a limit on the tcg-jit
buffer instead (or on whatever kvm was consuming), so that the buffer
isn't that large while everything is still locked up front.  That can be
relevant to why your use case started to use mem-lock=on before this
on-fault flag existed.

OTOH, I believe on-fault cannot work with kvm-rt at all, because of the
faults that may happen later on - even if a fault happens in KVM and
isn't about accessing guest mem, it can still add overhead later when
running the rt application in the guest, hence it can start to break RT
determinism.

Thanks,
Daniil Tatianin Dec. 10, 2024, 5:01 p.m. UTC | #5
On 12/10/24 7:48 PM, Peter Xu wrote:

> On Mon, Dec 09, 2024 at 10:40:51AM +0300, Daniil Tatianin wrote:
>> On 12/6/24 4:08 AM, Peter Xu wrote:
>>
>>> On Fri, Dec 06, 2024 at 02:19:06AM +0300, Daniil Tatianin wrote:
>>>> Currently, passing mem-lock=on to QEMU causes memory usage to grow by
>>>> huge amounts:
>>>>
>>>> no memlock:
>>>>       $ qemu-system-x86_64 -overcommit mem-lock=off
>>>>       $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
>>>>       45652
>>>>
>>>>       $ ./qemu-system-x86_64 -overcommit mem-lock=off -enable-kvm
>>>>       $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
>>>>       39756
>>>>
>>>> memlock:
>>>>       $ qemu-system-x86_64 -overcommit mem-lock=on
>>>>       $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
>>>>       1309876
>>>>
>>>>       $ ./qemu-system-x86_64 -overcommit mem-lock=on -enable-kvm
>>>>       $ ps -p $(pidof ./qemu-system-x86_64) -o rss=
>>>>       259956
>>>>
>>>> This is caused by the fact that mlockall(2) automatically
>>>> write-faults every existing and future anonymous mappings in the
>>>> process right away.
>>>>
>>>> One of the reasons to enable mem-lock is to protect a QEMU process'
>>>> pages from being compacted and migrated by kcompactd (which does so
>>>> by messing with a live process page tables causing thousands of TLB
>>>> flush IPIs per second) basically stealing all guest time while it's
>>>> active.
>>>>
>>>> mem-lock=on helps against this (given compact_unevictable_allowed is 0),
>>>> but the memory overhead it introduces is an undesirable side effect,
>>>> which we can completely avoid by passing MCL_ONFAULT to mlockall, which
>>>> is what this series allows to do with a new command line option called
>>>> mem-lock-onfault.
>>> IMHO it'll be always helpful to dig and provide information on why such
>>> difference existed.  E.g. guest mem should normally be the major mem sink
>>> and that definitely won't be affected by either ON_FAULT or not.
>>>
>>> I had a quick look explicitly on tcg (as that really surprised me a bit..).
>>> When you look at the mappings there's 1G constant shmem map that always got
>>> locked and populated.
>>>
>>> It turns out to be tcg's jit buffer, alloc_code_gen_buffer_splitwx_memfd:
>> Thanks for looking into this! I'd guessed it was something to do with JIT,
>> makes sense.
>>
>>>       buf_rw = qemu_memfd_alloc("tcg-jit", size, 0, &fd, errp);
>>>       if (buf_rw == NULL) {
>>>           goto fail;
>>>       }
>>>
>>>       buf_rx = mmap(NULL, size, host_prot_read_exec(), MAP_SHARED, fd, 0);
>>>       if (buf_rx == MAP_FAILED) {
>>>           error_setg_errno(errp, errno,
>>>                            "failed to map shared memory for execute");
>>>           goto fail;
>>>       }
>>>
>>> Looks like that's the major reason why tcg has mlockall bloated constantly
>>> with roughly 1G size - that seems to be from tcg_init_machine().  I didn't
>>> check kvm.
>>>
>>> Logically having a on-fault option won't ever hurt, so probably not an
>>> issue to have it anyway.  Still, share my finding above, as IIUC that's
>>> mostly why it was bloated for tcg, so maybe there're other options too.
>> Yeah, the situation with KVM is slightly better, although it's still a
>> ~200MiB overhead with default Q35 and no extra devices (I haven't measured
>> the difference with various devices).
>>
>> I think it's definitely nice to have an on-fault option for this, as
>> optimizing every possible mmap caller for the rare mem-lock=on case might be
>> too ambitious.
> It really depends, IMHO, and that's why I didn't already ack the series.
>
> It may be relevant to the trade-off here on allowing faults to happen later
> even if mem-lock=on.  The question is why, for example in your use case,
> would like to lock the memory.
>
> Take kvm-rt as example, I believe that's needed because RT apps (running in
> the guest) would like to avoid page faults throughout the stack, so that
> guest workload, especially on the latency part of things, is predictable.
>
> Here if on-fault is enabled it could beat that purpose already.
>
> Or if the current use case is making sure after QEMU boots the memory will
> always present so that even if later the host faces memory stress it won't
> affect anything running the VM as it pre-allocated everything (so that's
> beyond memory-backend-*,prealloc=on, because it covers QEMU/KVM memory
> too).  Meanwhile locked pages won't swap out, so it's always there.
>
> But then with on-fault, it means the pages will only be locked upon access.
> Then it means the guarantee on "QEMU secures the memory on boot" is gone
> too.
>
> That's why I was thinking whether your specific use case really wants
> on-fault, or you do want e.g. to have a limit on the tcg-jit buffer instead
> (or same to whatever kvm was consuming), so you don't want that large a
> buffer, however you still want to have all things locked up upfront.  It
> can be relevant to why your use case started to use mem-lock=on before this
> on-fault flag.

I mentioned my use case in the cover letter. Basically we want to
protect QEMU's pages from being migrated and compacted by kcompactd,
which does so by modifying live page tables and spamming the process
with TLB invalidate IPIs, which kills guest performance for the duration
of the compaction operation.

Memory locking allows us to protect a process from kcompactd page
compaction and, more importantly, migration (that is, taking a PTE and
replacing it with one that is closer in memory, to reduce
fragmentation), as long as /proc/sys/vm/compact_unevictable_allowed
is 0.

For this use case we don't mind page faults, as they take more or less
constant time, and we could also avoid them if we wanted by
preallocating guest memory. We do, however, want the PTEs to be left
untouched by kcompactd, which MCL_ONFAULT accomplishes just fine without
the extra memory overhead that comes from various anonymous mappings
getting write-faulted with the currently available mem-lock=on option.

In our case we use KVM of course; TCG was just an experiment where I
noticed anonymous memory usage jump way too much.

I don't think it's feasible in our case to look for the origin of every
anonymous mapping that grew compared to the no-mem-lock case (there are
~30 of them with default Q35 + KVM, without any extra devices) and try
to optimize each one to map anonymous memory less eagerly.

Thanks!

>
> OTOH, I believe on-fault cannot work with kvm-rt at all already, because of
> its possible faults happening later on - even if the fault can happen in
> KVM and even if it's not about accessing guest mem, it can still be part of
> overhead later when running the rt application in the guest, hence it can
> start to break RT deterministics.
>
> Thanks,
>
Peter Xu Dec. 10, 2024, 5:20 p.m. UTC | #6
On Tue, Dec 10, 2024 at 08:01:08PM +0300, Daniil Tatianin wrote:
> I mentioned my use case in the cover letter. Basically we want to protect
> QEMU's pages from being migrated and compacted by kcompactd, which it
> accomplishes by modifying live page tables and spamming the process with TLB
> invalidate IPIs while it does that, which kills guest performance for the
> duration of the compaction operation.

Ah right, I did read it initially, but just now when I scanned the cover
letter I missed that.  My fault.

> 
> Memory locking allows to protect a process from kcompactd page compaction
> and more importantly, migration (that is taking a PTE and replacing it with
> one, which is closer in memory to reduce fragmentation). (As long as
> /proc/sys/vm/compact_unevictable_allowed is 0)
> 
> For this use case we don't mind page faults as they take more or less
> constant time, which we can also avoid if we wanted by preallocating guest
> memory. We do, however, want PTEs to be untouched by kcompactd, which
> MCL_ONFAULT accomplishes just fine without the extra memory overhead that
> comes from various anonymous mappings getting write-faulted with the
> currently available mem-lock=on option.
> 
> In our case we use KVM of course, TCG was just an experiment where I noticed
> anonymous memory
> jump way too much.
> 
> I don't think it's feasible in our case to look for the origin of every
> anonymous mapping that grew compared to the no mem-lock case (which there's
> about ~30 with default Q35 + KVM, without any extra devices), and try to
> optimize it to map anonymous memory less eagerly.

Would it be better then to use mem-lock=on|off|onfault?  That turns it
into a string and avoids the "exclusiveness" handling otherwise needed
(meanwhile, having two separate knobs for related things looks odd too).
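
For illustration, a tri-state like that could be parsed roughly as
below; the enum and function names are made up for this sketch and are
not taken from the series:

    #include <string.h>

    typedef enum {
        MEM_LOCK_OFF,
        MEM_LOCK_ON,
        MEM_LOCK_ON_FAULT,
    } MemLockMode;

    /* Map the -overcommit mem-lock=... string onto a single tri-state
     * knob instead of two mutually exclusive booleans. */
    static int parse_mem_lock(const char *value, MemLockMode *mode)
    {
        if (!strcmp(value, "off")) {
            *mode = MEM_LOCK_OFF;
        } else if (!strcmp(value, "on")) {
            *mode = MEM_LOCK_ON;
        } else if (!strcmp(value, "onfault")) {
            *mode = MEM_LOCK_ON_FAULT;
        } else {
            return -1; /* unrecognized value */
        }
        return 0;
    }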

Thanks,
Daniil Tatianin Dec. 10, 2024, 5:23 p.m. UTC | #7
On 12/10/24 8:20 PM, Peter Xu wrote:
> On Tue, Dec 10, 2024 at 08:01:08PM +0300, Daniil Tatianin wrote:
>> I mentioned my use case in the cover letter. Basically we want to protect
>> QEMU's pages from being migrated and compacted by kcompactd, which it
>> accomplishes by modifying live page tables and spamming the process with TLB
>> invalidate IPIs while it does that, which kills guest performance for the
>> duration of the compaction operation.
> Ah right, I read it initially but just now when I scanned the cover letter
> I missed that.  My fault.

No worries!

>> Memory locking allows to protect a process from kcompactd page compaction
>> and more importantly, migration (that is taking a PTE and replacing it with
>> one, which is closer in memory to reduce fragmentation). (As long as
>> /proc/sys/vm/compact_unevictable_allowed is 0)
>>
>> For this use case we don't mind page faults as they take more or less
>> constant time, which we can also avoid if we wanted by preallocating guest
>> memory. We do, however, want PTEs to be untouched by kcompactd, which
>> MCL_ONFAULT accomplishes just fine without the extra memory overhead that
>> comes from various anonymous mappings getting write-faulted with the
>> currently available mem-lock=on option.
>>
>> In our case we use KVM of course, TCG was just an experiment where I noticed
>> anonymous memory
>> jump way too much.
>>
>> I don't think it's feasible in our case to look for the origin of every
>> anonymous mapping that grew compared to the no mem-lock case (which there's
>> about ~30 with default Q35 + KVM, without any extra devices), and try to
>> optimize it to map anonymous memory less eagerly.
> Would it be better then to use mem-lock=on|off|onfault?  So turns it into a
> string to avoid the "exclusiveness" needed (meanwhile having two separate
> knobs for relevant things looks odd too).

How did I not think of that.. Sounds much better IMO.

Thank you!

> Thanks,
>