[v1,0/2] s390/mm: shared zeropage + KVM fix and optimization

Message ID 20240321215954.177730-1-david@redhat.com (mailing list archive)

Message

David Hildenbrand March 21, 2024, 9:59 p.m. UTC
This series fixes one issue with uffd + shared zeropages on s390x and
optimizes "ordinary" KVM guests to make use of shared zeropages again.

userfaultfd could currently end up mapping shared zeropages into processes
that forbid shared zeropages. This only applies to s390x, which must
disallow shared zeropages to handle PV guests and guests that use storage
keys correctly. Fix it by placing a zeroed folio instead of the shared
zeropage during UFFDIO_ZEROPAGE.
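
In kernel-style pseudocode, the idea of the fix is roughly the following
(the fallback helper name is illustrative, not necessarily what patch #1
actually uses):

```c
/* mm/userfaultfd.c, sketch: on UFFDIO_ZEROPAGE, fall back to a zeroed
 * anonymous folio when the mm forbids the shared zeropage (e.g. s390x
 * with storage keys or PV enabled). */
static int mfill_atomic_pte_zeropage(pmd_t *dst_pmd,
				     struct vm_area_struct *dst_vma,
				     unsigned long dst_addr)
{
	pte_t _dst_pte, *dst_pte;
	...
	if (mm_forbids_zeropage(dst_vma->vm_mm))
		/* Allocate and map a zero-filled anonymous folio instead
		 * of mapping the shared zeropage. */
		return mfill_atomic_pte_zeroed_folio(dst_pmd, dst_vma,
						     dst_addr);

	/* Otherwise, map the shared zeropage as before. */
	_dst_pte = pte_mkspecial(pfn_pte(my_zero_pfn(dst_addr),
					 dst_vma->vm_page_prot));
	...
}
```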

I stumbled over this issue while looking into a customer scenario that
is using:

(1) Memory ballooning for dynamic resizing. Start a VM with, say, 100 GiB
    and inflate the balloon during boot to 60 GiB. The VM has ~40 GiB
    available and additional memory can be "fake hotplugged" to the VM
    later on demand by deflating the balloon. Actual memory overcommit is
    not desired, so physical memory would only be moved between VMs.

(2) Live migration of VMs between sites to evacuate servers in case of
    emergency.

Without the shared zeropage, during (2), the VM would suddenly consume
100 GiB on the migration source and destination. On the migration source,
where we don't expect memory overcommit, we could easily end up crashing
the VM during migration.

Independent of that, memory handed back to the hypervisor using "free page
reporting" would end up consuming actual memory after the migration on the
destination, not getting freed up until reused+freed again.

While there might be ways to optimize parts of this in QEMU, we really
should just support the shared zeropage again for ordinary VMs.

We only expect legacy guests to make use of storage keys, so let's handle
zeropages again when enabling storage keys or when enabling PV. To not
break userfaultfd like we did in the past, don't zap the shared zeropages,
but instead trigger unsharing faults, just like we do for unsharing
KSM pages in break_ksm().

Unsharing faults will simply replace the shared zeropage by a zeroed
anonymous folio. We can already trigger the same fault path using GUP,
when trying to long-term pin a shared zeropage, but also when unmerging
KSM-placed zeropages, so this is nothing new.
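
The unsharing walk could be sketched roughly like this (kernel-style
pseudocode; the function name is illustrative and details differ in the
actual gmap.c changes):

```c
/* arch/s390/mm/gmap.c, sketch: when storage keys or PV are enabled,
 * walk the address space and trigger an unsharing fault on every
 * shared-zeropage PTE, replacing it with a zeroed anonymous folio,
 * mirroring what break_ksm() does for KSM pages. */
static int __s390_unshare_zeropage_pte(pte_t *pte, unsigned long addr,
				       unsigned long end,
				       struct mm_walk *walk)
{
	struct vm_area_struct *vma = walk->vma;

	if (is_zero_pfn(pte_pfn(ptep_get(pte)))) {
		vm_fault_t ret = handle_mm_fault(vma, addr,
						 FAULT_FLAG_UNSHARE, NULL);
		if (ret & VM_FAULT_ERROR)
			return -EFAULT;
	}
	return 0;
}
```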

Patch #1 tested on x86-64 by forcing mm_forbids_zeropage() to be 1, and
running the uffd selftests.

Patch #2 tested on s390x: the live migration scenario now works as
expected, and kvm-unit-tests that trigger usage of skeys work well; I can
see detection and unsharing of shared zeropages.

Based on current mm-unstable. Maybe at least the second patch should
go via the s390x tree, I think patch #1 could go that route as well.

Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Janosch Frank <frankja@linux.ibm.com>
Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: kvm@vger.kernel.org
Cc: linux-s390@vger.kernel.org

David Hildenbrand (2):
  mm/userfaultfd: don't place zeropages when zeropages are disallowed
  s390/mm: re-enable the shared zeropage for !PV and !skeys KVM guests

 arch/s390/include/asm/gmap.h        |   2 +-
 arch/s390/include/asm/mmu.h         |   5 +
 arch/s390/include/asm/mmu_context.h |   1 +
 arch/s390/include/asm/pgtable.h     |  15 ++-
 arch/s390/kvm/kvm-s390.c            |   4 +-
 arch/s390/mm/gmap.c                 | 163 +++++++++++++++++++++-------
 mm/userfaultfd.c                    |  35 ++++++
 7 files changed, 178 insertions(+), 47 deletions(-)

Comments

Andrew Morton March 21, 2024, 10:13 p.m. UTC | #1
On Thu, 21 Mar 2024 22:59:52 +0100 David Hildenbrand <david@redhat.com> wrote:

> Based on current mm-unstable. Maybe at least the second patch should
> go via the s390x tree, I think patch #1 could go that route as well.

Taking both via the s390 tree is OK by me.  I'll drop the mm.git copies
if/when these turn up in the linux-next feed.
Heiko Carstens March 26, 2024, 7:38 a.m. UTC | #2
On Thu, Mar 21, 2024 at 03:13:53PM -0700, Andrew Morton wrote:
> On Thu, 21 Mar 2024 22:59:52 +0100 David Hildenbrand <david@redhat.com> wrote:
> 
> > Based on current mm-unstable. Maybe at least the second patch should
> > go via the s390x tree, I think patch #1 could go that route as well.
> 
> Taking both via the s390 tree is OK by me.  I'll drop the mm.git copies
> if/when these turn up in the linux-next feed.

Considering the comments I would expect a v2 of this series at some
time in the future.
David Hildenbrand March 26, 2024, 8:28 a.m. UTC | #3
On 26.03.24 08:38, Heiko Carstens wrote:
> On Thu, Mar 21, 2024 at 03:13:53PM -0700, Andrew Morton wrote:
>> On Thu, 21 Mar 2024 22:59:52 +0100 David Hildenbrand <david@redhat.com> wrote:
>>
>>> Based on current mm-unstable. Maybe at least the second patch should
>>> go via the s390x tree, I think patch #1 could go that route as well.
>>
>> Taking both via the s390 tree is OK by me.  I'll drop the mm.git copies
>> if/when these turn up in the linux-next feed.
> 
> Considering the comments I would expect a v2 of this series at some
> time in the future.

Yes, I'm still waiting for more feedback. I'll likely resend tomorrow.