Message ID | 20250404154352.23078-1-kalyazin@amazon.com
---|---
Series | KVM: guest_memfd: support for uffd minor
On Fri, Apr 04, 2025 at 03:43:46PM +0000, Nikita Kalyazin wrote: > This series is built on top of the Fuad's v7 "mapping guest_memfd backed > memory at the host" [1]. Hm if this is based on an unmerged series this seems quite speculative and should maybe be an RFC? I mean that series at least still seems quite under discussion/experiencing issues? Maybe worth RFC'ing until that one settles down first to avoid complexity in review/application to tree? Thanks! > > With James's KVM userfault [2], it is possible to handle stage-2 faults > in guest_memfd in userspace. However, KVM itself also triggers faults > in guest_memfd in some cases, for example: PV interfaces like kvmclock, > PV EOI and page table walking code when fetching the MMIO instruction on > x86. It was agreed in the guest_memfd upstream call on 23 Jan 2025 [3] > that KVM would be accessing those pages via userspace page tables. In > order for such faults to be handled in userspace, guest_memfd needs to > support userfaultfd. > > Changes since v2 [4]: > - James: Fix sgp type when calling shmem_get_folio_gfp > - James: Improved vm_ops->fault() error handling > - James: Add and make use of the can_userfault() VMA operation > - James: Add UFFD_FEATURE_MINOR_GUEST_MEMFD feature flag > - James: Fix typos and add more checks in the test > > Nikita > > [1] https://lore.kernel.org/kvm/20250318161823.4005529-1-tabba@google.com/T/ > [2] https://lore.kernel.org/kvm/20250109204929.1106563-1-jthoughton@google.com/T/ > [3] https://docs.google.com/document/d/1M6766BzdY1Lhk7LiR5IqVR8B8mG3cr-cxTxOrAosPOk/edit?tab=t.0#heading=h.w1126rgli5e3 > [4] https://lore.kernel.org/kvm/20250402160721.97596-1-kalyazin@amazon.com/T/ > > Nikita Kalyazin (6): > mm: userfaultfd: generic continue for non hugetlbfs > mm: provide can_userfault vma operation > mm: userfaultfd: use can_userfault vma operation > KVM: guest_memfd: add support for userfaultfd minor > mm: userfaultfd: add UFFD_FEATURE_MINOR_GUEST_MEMFD > KVM: selftests: test userfaultfd minor for guest_memfd > > fs/userfaultfd.c | 3 +- > include/linux/mm.h | 5 + > include/linux/mm_types.h | 4 + > include/linux/userfaultfd_k.h | 10 +- > include/uapi/linux/userfaultfd.h | 8 +- > mm/hugetlb.c | 9 +- > mm/shmem.c | 17 +++- > mm/userfaultfd.c | 47 ++++++--- > .../testing/selftests/kvm/guest_memfd_test.c | 99 +++++++++++++++++++ > virt/kvm/guest_memfd.c | 10 ++ > 10 files changed, 188 insertions(+), 24 deletions(-) > > > base-commit: 3cc51efc17a2c41a480eed36b31c1773936717e0 > -- > 2.47.1 >
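From the VMM side, the flow the cover letter describes would look roughly like the sketch below. This is only an illustration, not code from the series: mmap()ing a guest_memfd at the host depends on the in-flight series referenced as [1], the UFFD_FEATURE_MINOR_GUEST_MEMFD bit value is a placeholder for whatever the series' uapi patch actually defines, serve_minor_faults() and PAGE_SZ (4K pages assumed) are invented for the example, and error handling is omitted.

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <poll.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#define PAGE_SZ 4096UL

/* Placeholder value: the real bit is defined by this series' uapi change. */
#ifndef UFFD_FEATURE_MINOR_GUEST_MEMFD
#define UFFD_FEATURE_MINOR_GUEST_MEMFD (1ULL << 17)
#endif

/* gmem_fd is a guest_memfd; mapping it at the host relies on the series in [1]. */
static void serve_minor_faults(int gmem_fd, size_t len)
{
	void *host = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
			  gmem_fd, 0);
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);

	struct uffdio_api api = {
		.api = UFFD_API,
		.features = UFFD_FEATURE_MINOR_GUEST_MEMFD,
	};
	ioctl(uffd, UFFDIO_API, &api);

	/* Register the host mapping of guest_memfd for minor faults. */
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)host, .len = len },
		.mode = UFFDIO_REGISTER_MODE_MINOR,
	};
	ioctl(uffd, UFFDIO_REGISTER, &reg);

	for (;;) {
		struct pollfd pfd = { .fd = uffd, .events = POLLIN };
		struct uffd_msg msg;

		poll(&pfd, 1, -1);
		if (read(uffd, &msg, sizeof(msg)) != sizeof(msg) ||
		    msg.event != UFFD_EVENT_PAGEFAULT)
			continue;

		/*
		 * Populate the page first (e.g. pwrite() migrated contents
		 * into gmem_fd at the matching offset), then tell the kernel
		 * to install the now-present page-cache page into the page
		 * tables of the faulting mapping.
		 */
		struct uffdio_continue cont = {
			.range = {
				.start = msg.arg.pagefault.address & ~(PAGE_SZ - 1),
				.len = PAGE_SZ,
			},
		};
		ioctl(uffd, UFFDIO_CONTINUE, &cont);
	}
}

The selftest added in the last patch of the series presumably exercises a similar register/fault/UFFDIO_CONTINUE sequence and is the authoritative reference once applied.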
On 04/04/2025 17:33, Lorenzo Stoakes wrote: > On Fri, Apr 04, 2025 at 03:43:46PM +0000, Nikita Kalyazin wrote: >> This series is built on top of the Fuad's v7 "mapping guest_memfd backed >> memory at the host" [1]. > > Hm if this is based on an unmerged series this seems quite speculative and > should maybe be an RFC? I mean that series at least still seems quite under > discussion/experiencing issues? > > Maybe worth RFC'ing until that one settles down first to avoid complexity > in review/application to tree? Hi, I dropped the RFC tag because I saw similar examples before, but I'm happy to bring it back next time if the dependency is not merged until then. > > Thanks! Thanks! > >> >> With James's KVM userfault [2], it is possible to handle stage-2 faults >> in guest_memfd in userspace. However, KVM itself also triggers faults >> in guest_memfd in some cases, for example: PV interfaces like kvmclock, >> PV EOI and page table walking code when fetching the MMIO instruction on >> x86. It was agreed in the guest_memfd upstream call on 23 Jan 2025 [3] >> that KVM would be accessing those pages via userspace page tables. In >> order for such faults to be handled in userspace, guest_memfd needs to >> support userfaultfd. >> >> Changes since v2 [4]: >> - James: Fix sgp type when calling shmem_get_folio_gfp >> - James: Improved vm_ops->fault() error handling >> - James: Add and make use of the can_userfault() VMA operation >> - James: Add UFFD_FEATURE_MINOR_GUEST_MEMFD feature flag >> - James: Fix typos and add more checks in the test >> >> Nikita >> >> [1] https://lore.kernel.org/kvm/20250318161823.4005529-1-tabba@google.com/T/ >> [2] https://lore.kernel.org/kvm/20250109204929.1106563-1-jthoughton@google.com/T/ >> [3] https://docs.google.com/document/d/1M6766BzdY1Lhk7LiR5IqVR8B8mG3cr-cxTxOrAosPOk/edit?tab=t.0#heading=h.w1126rgli5e3 >> [4] https://lore.kernel.org/kvm/20250402160721.97596-1-kalyazin@amazon.com/T/ >> >> Nikita Kalyazin (6): >> mm: userfaultfd: generic continue for non hugetlbfs >> mm: provide can_userfault vma operation >> mm: userfaultfd: use can_userfault vma operation >> KVM: guest_memfd: add support for userfaultfd minor >> mm: userfaultfd: add UFFD_FEATURE_MINOR_GUEST_MEMFD >> KVM: selftests: test userfaultfd minor for guest_memfd >> >> fs/userfaultfd.c | 3 +- >> include/linux/mm.h | 5 + >> include/linux/mm_types.h | 4 + >> include/linux/userfaultfd_k.h | 10 +- >> include/uapi/linux/userfaultfd.h | 8 +- >> mm/hugetlb.c | 9 +- >> mm/shmem.c | 17 +++- >> mm/userfaultfd.c | 47 ++++++--- >> .../testing/selftests/kvm/guest_memfd_test.c | 99 +++++++++++++++++++ >> virt/kvm/guest_memfd.c | 10 ++ >> 10 files changed, 188 insertions(+), 24 deletions(-) >> >> >> base-commit: 3cc51efc17a2c41a480eed36b31c1773936717e0 >> -- >> 2.47.1 >>
On Fri, Apr 04, 2025 at 05:56:58PM +0100, Nikita Kalyazin wrote: > > > On 04/04/2025 17:33, Lorenzo Stoakes wrote: > > On Fri, Apr 04, 2025 at 03:43:46PM +0000, Nikita Kalyazin wrote: > > > This series is built on top of the Fuad's v7 "mapping guest_memfd backed > > > memory at the host" [1]. > > > > Hm if this is based on an unmerged series this seems quite speculative and > > should maybe be an RFC? I mean that series at least still seems quite under > > discussion/experiencing issues? > > > > Maybe worth RFC'ing until that one settles down first to avoid complexity > > in review/application to tree? > > Hi, > > I dropped the RFC tag because I saw similar examples before, but I'm happy > to bring it back next time if the dependency is not merged until then. Yeah really sorry to be a pain haha, I realise this particular situation is a bit unclear, but I think just for the sake of getting our ducks in a row and ensuring things are settled on the baseline (and it's sort of a fairly big baseline), it'd be best to bring it back! > > > > > Thanks! > > Thanks! Cheers! > > > > > > > > > With James's KVM userfault [2], it is possible to handle stage-2 faults > > > in guest_memfd in userspace. However, KVM itself also triggers faults > > > in guest_memfd in some cases, for example: PV interfaces like kvmclock, > > > PV EOI and page table walking code when fetching the MMIO instruction on > > > x86. It was agreed in the guest_memfd upstream call on 23 Jan 2025 [3] > > > that KVM would be accessing those pages via userspace page tables. In > > > order for such faults to be handled in userspace, guest_memfd needs to > > > support userfaultfd. > > > > > > Changes since v2 [4]: > > > - James: Fix sgp type when calling shmem_get_folio_gfp > > > - James: Improved vm_ops->fault() error handling > > > - James: Add and make use of the can_userfault() VMA operation > > > - James: Add UFFD_FEATURE_MINOR_GUEST_MEMFD feature flag > > > - James: Fix typos and add more checks in the test > > > > > > Nikita > > > > > > [1] https://lore.kernel.org/kvm/20250318161823.4005529-1-tabba@google.com/T/ > > > [2] https://lore.kernel.org/kvm/20250109204929.1106563-1-jthoughton@google.com/T/ > > > [3] https://docs.google.com/document/d/1M6766BzdY1Lhk7LiR5IqVR8B8mG3cr-cxTxOrAosPOk/edit?tab=t.0#heading=h.w1126rgli5e3 > > > [4] https://lore.kernel.org/kvm/20250402160721.97596-1-kalyazin@amazon.com/T/ > > > > > > Nikita Kalyazin (6): > > > mm: userfaultfd: generic continue for non hugetlbfs > > > mm: provide can_userfault vma operation > > > mm: userfaultfd: use can_userfault vma operation > > > KVM: guest_memfd: add support for userfaultfd minor > > > mm: userfaultfd: add UFFD_FEATURE_MINOR_GUEST_MEMFD > > > KVM: selftests: test userfaultfd minor for guest_memfd > > > > > > fs/userfaultfd.c | 3 +- > > > include/linux/mm.h | 5 + > > > include/linux/mm_types.h | 4 + > > > include/linux/userfaultfd_k.h | 10 +- > > > include/uapi/linux/userfaultfd.h | 8 +- > > > mm/hugetlb.c | 9 +- > > > mm/shmem.c | 17 +++- > > > mm/userfaultfd.c | 47 ++++++--- > > > .../testing/selftests/kvm/guest_memfd_test.c | 99 +++++++++++++++++++ > > > virt/kvm/guest_memfd.c | 10 ++ > > > 10 files changed, 188 insertions(+), 24 deletions(-) > > > > > > > > > base-commit: 3cc51efc17a2c41a480eed36b31c1773936717e0 > > > -- > > > 2.47.1 > > > >
+To authors of v7 series referenced in [1] * Nikita Kalyazin <kalyazin@amazon.com> [250404 11:44]: > This series is built on top of the Fuad's v7 "mapping guest_memfd backed > memory at the host" [1]. I didn't see their addresses in the to/cc, so I added them to my response as I reference the v7 patch set below. > > With James's KVM userfault [2], it is possible to handle stage-2 faults > in guest_memfd in userspace. However, KVM itself also triggers faults > in guest_memfd in some cases, for example: PV interfaces like kvmclock, > PV EOI and page table walking code when fetching the MMIO instruction on > x86. It was agreed in the guest_memfd upstream call on 23 Jan 2025 [3] > that KVM would be accessing those pages via userspace page tables. Thanks for being open about the technical call, but it would be better to capture the reasons and not the call date. I explain why in the linking section as well. >In > order for such faults to be handled in userspace, guest_memfd needs to > support userfaultfd. > > Changes since v2 [4]: > - James: Fix sgp type when calling shmem_get_folio_gfp > - James: Improved vm_ops->fault() error handling > - James: Add and make use of the can_userfault() VMA operation > - James: Add UFFD_FEATURE_MINOR_GUEST_MEMFD feature flag > - James: Fix typos and add more checks in the test > > Nikita Please slow down... This patch is at v3, the v7 patch that you are building off has lockdep issues [1] reported by one of the authors, and (sorry for sounding harsh about the v7 of that patch) the cover letter reads a bit more like an RFC than a set ready to go into linux-mm. Maybe the lockdep issue is just a patch ordering thing or removed in a later patch set, but that's not mentioned in the discovery email? What exactly is the goal here and the path forward for the rest of us trying to build on this once it's in mm-new/mm-unstable? Note that mm-unstable is shared with a lot of other people through linux-next, and we are really trying to stop breaking stuff on them. Obviously v7 cannot go in until it works with lockdep - otherwise none of us can use lockdep which is not okay. Also, I am concerned about the amount of testing in the v7 and v3 patch sets that did not bring up a lockdep issue.. > > [1] https://lore.kernel.org/kvm/20250318161823.4005529-1-tabba@google.com/T/ > [2] https://lore.kernel.org/kvm/20250109204929.1106563-1-jthoughton@google.com/T/ > [3] https://docs.google.com/document/d/1M6766BzdY1Lhk7LiR5IqVR8B8mG3cr-cxTxOrAosPOk/edit?tab=t.0#heading=h.w1126rgli5e3 If there is anything we need to know about the decisions in the call and that document, can you please pull it into this change log? I don't think anyone can ensure google will not rename docs to some other office theme tomorrow - as they famously ditch basically every name and application. Also, most of the community does not want to go to a 17 page (and growing) spreadsheet to hunt down the facts when there is an acceptable and ideal place to document them in git. It's another barrier of entry on reviewing your code as well. But please, don't take this suggestion as carte blanche for copying a conversation from the doc, just give us the technical reasons for your decisions as briefly as possible. > [4] https://lore.kernel.org/kvm/20250402160721.97596-1-kalyazin@amazon.com/T/ [1]. https://lore.kernel.org/all/diqz1puanquh.fsf@ackerleytng-ctop.c.googlers.com/ Thanks, Liam
On 04/04/2025 18:12, Liam R. Howlett wrote: > +To authors of v7 series referenced in [1] > > * Nikita Kalyazin <kalyazin@amazon.com> [250404 11:44]: >> This series is built on top of the Fuad's v7 "mapping guest_memfd backed >> memory at the host" [1]. > > I didn't see their addresses in the to/cc, so I added them to my > response as I reference the v7 patch set below. Hi Liam, Thanks for the feedback and for extending the list. > >> >> With James's KVM userfault [2], it is possible to handle stage-2 faults >> in guest_memfd in userspace. However, KVM itself also triggers faults >> in guest_memfd in some cases, for example: PV interfaces like kvmclock, >> PV EOI and page table walking code when fetching the MMIO instruction on >> x86. It was agreed in the guest_memfd upstream call on 23 Jan 2025 [3] >> that KVM would be accessing those pages via userspace page tables. > > Thanks for being open about the technical call, but it would be better > to capture the reasons and not the call date. I explain why in the > linking section as well. Thanks for bringing that up. The document mostly contains the decision itself. The main alternative considered previously was a temporary reintroduction of the pages to the direct map whenever a KVM-internal access is required. It was coming with a significant complexity of guaranteeing correctness in all cases [1]. Since the memslot structure already contains a guest memory pointer supplied by the userspace, KVM can use it directly when in the VMM or vCPU context. I will add this in the cover for the next version. [1] https://lore.kernel.org/kvm/20240709132041.3625501-1-roypat@amazon.co.uk/T/#m4f367c52bbad0f0ba7fb07ca347c7b37258a73e5 > >> In >> order for such faults to be handled in userspace, guest_memfd needs to >> support userfaultfd. >> >> Changes since v2 [4]: >> - James: Fix sgp type when calling shmem_get_folio_gfp >> - James: Improved vm_ops->fault() error handling >> - James: Add and make use of the can_userfault() VMA operation >> - James: Add UFFD_FEATURE_MINOR_GUEST_MEMFD feature flag >> - James: Fix typos and add more checks in the test >> >> Nikita > > Please slow down... > > This patch is at v3, the v7 patch that you are building off has lockdep > issues [1] reported by one of the authors, and (sorry for sounding harsh > about the v7 of that patch) the cover letter reads a bit more like an > RFC than a set ready to go into linux-mm. AFAIK the lockdep issue was reported on a v7 of a different change. I'm basing my series on [2] ("KVM: Mapping guest_memfd backed memory at the host for software protected VMs"), while the issue was reported on [2] ("KVM: Restricted mapping of guest_memfd at the host and arm64 support"), which is also built on top of [2]. Please correct me if I'm missing something. The key feature that is required by my series is the ability to mmap guest_memfd when the VM type allows. My understanding is no-one is opposed to that as of now, that's why I assumed it's safe to build on top of that. [2] https://lore.kernel.org/kvm/20250318161823.4005529-1-tabba@google.com/T/ [3] https://lore.kernel.org/all/diqz1puanquh.fsf@ackerleytng-ctop.c.googlers.com/T/ > > Maybe the lockdep issue is just a patch ordering thing or removed in a > later patch set, but that's not mentioned in the discovery email? > > What exactly is the goal here and the path forward for the rest of us > trying to build on this once it's in mm-new/mm-unstable? 
> > Note that mm-unstable is shared with a lot of other people through > linux-next, and we are really trying to stop breaking stuff on them. > > Obviously v7 cannot go in until it works with lockdep - otherwise none > of us can use lockdep which is not okay. > > Also, I am concerned about the amount of testing in the v7 and v3 patch > sets that did not bring up a lockdep issue.. > >> >> [1] https://lore.kernel.org/kvm/20250318161823.4005529-1-tabba@google.com/T/ >> [2] https://lore.kernel.org/kvm/20250109204929.1106563-1-jthoughton@google.com/T/ >> [3] https://docs.google.com/document/d/1M6766BzdY1Lhk7LiR5IqVR8B8mG3cr-cxTxOrAosPOk/edit?tab=t.0#heading=h.w1126rgli5e3 > > If there is anything we need to know about the decisions in the call and > that document, can you please pull it into this change log? > > I don't think anyone can ensure google will not rename docs to some > other office theme tomorrow - as they famously ditch basically every > name and application. > > Also, most of the community does not want to go to a 17 page (and > growing) spreadsheet to hunt down the facts when there is an acceptable > and ideal place to document them in git. It's another barrier of entry > on reviewing your code as well. > > But please, don't take this suggestion as carte blanche for copying a > conversation from the doc, just give us the technical reasons for your > decisions as briefly as possible. > > >> [4] https://lore.kernel.org/kvm/20250402160721.97596-1-kalyazin@amazon.com/T/ > > [1]. https://lore.kernel.org/all/diqz1puanquh.fsf@ackerleytng-ctop.c.googlers.com/ > > Thanks, > Liam
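To make the point above about KVM going through userspace page tables concrete: in the vCPU ioctl path KVM already resolves a gfn to the host virtual address recorded in the memslot and copies through it (see kvm_vcpu_read_guest()). Below is a conceptual sketch of that direction only, not code from this series; the function name is invented, a single-page access is assumed, and it must run in a context that has the VMM's mm.

#include <linux/kvm_host.h>
#include <linux/mm.h>
#include <linux/uaccess.h>

/*
 * Conceptual sketch: a KVM-internal read that goes through the userspace
 * mapping recorded in the memslot, roughly what kvm_vcpu_read_guest()
 * does today.
 */
static int read_guest_via_user_mapping(struct kvm_vcpu *vcpu, gpa_t gpa,
				       void *data, unsigned long len)
{
	unsigned long hva = kvm_vcpu_gfn_to_hva(vcpu, gpa_to_gfn(gpa));

	if (kvm_is_error_hva(hva))
		return -EFAULT;

	/*
	 * Any fault here is taken against the VMM's page tables; with the
	 * host mapping of guest_memfd registered for userfaultfd minor mode,
	 * this is the kind of access that would be reported to the VMM and
	 * resolved with UFFDIO_CONTINUE before the copy completes.
	 */
	if (copy_from_user(data, (void __user *)hva + offset_in_page(gpa), len))
		return -EFAULT;

	return 0;
}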
* Nikita Kalyazin <kalyazin@amazon.com> [250407 07:04]: > > > On 04/04/2025 18:12, Liam R. Howlett wrote: > > +To authors of v7 series referenced in [1] > > > > * Nikita Kalyazin <kalyazin@amazon.com> [250404 11:44]: > > > This series is built on top of the Fuad's v7 "mapping guest_memfd backed > > > memory at the host" [1]. > > > > I didn't see their addresses in the to/cc, so I added them to my > > response as I reference the v7 patch set below. > > Hi Liam, > > Thanks for the feedback and for extending the list. > > > > > > > > > With James's KVM userfault [2], it is possible to handle stage-2 faults > > > in guest_memfd in userspace. However, KVM itself also triggers faults > > > in guest_memfd in some cases, for example: PV interfaces like kvmclock, > > > PV EOI and page table walking code when fetching the MMIO instruction on > > > x86. It was agreed in the guest_memfd upstream call on 23 Jan 2025 [3] > > > that KVM would be accessing those pages via userspace page tables. > > > > Thanks for being open about the technical call, but it would be better > > to capture the reasons and not the call date. I explain why in the > > linking section as well. > > Thanks for bringing that up. The document mostly contains the decision > itself. The main alternative considered previously was a temporary > reintroduction of the pages to the direct map whenever a KVM-internal access > is required. It was coming with a significant complexity of guaranteeing > correctness in all cases [1]. Since the memslot structure already contains > a guest memory pointer supplied by the userspace, KVM can use it directly > when in the VMM or vCPU context. I will add this in the cover for the next > version. Thank you. > > [1] https://lore.kernel.org/kvm/20240709132041.3625501-1-roypat@amazon.co.uk/T/#m4f367c52bbad0f0ba7fb07ca347c7b37258a73e5 > > > > > > In > > > order for such faults to be handled in userspace, guest_memfd needs to > > > support userfaultfd. > > > > > > Changes since v2 [4]: > > > - James: Fix sgp type when calling shmem_get_folio_gfp > > > - James: Improved vm_ops->fault() error handling > > > - James: Add and make use of the can_userfault() VMA operation > > > - James: Add UFFD_FEATURE_MINOR_GUEST_MEMFD feature flag > > > - James: Fix typos and add more checks in the test > > > > > > Nikita > > > > Please slow down... > > > > This patch is at v3, the v7 patch that you are building off has lockdep > > issues [1] reported by one of the authors, and (sorry for sounding harsh > > about the v7 of that patch) the cover letter reads a bit more like an > > RFC than a set ready to go into linux-mm. > > AFAIK the lockdep issue was reported on a v7 of a different change. > I'm basing my series on [2] ("KVM: Mapping guest_memfd backed memory at the > host for software protected VMs"), while the issue was reported on [2] > ("KVM: Restricted mapping of guest_memfd at the host and arm64 support"), > which is also built on top of [2]. Please correct me if I'm missing > something. I think you messed up the numbering in your statement above. I believe you are making the point that I messed up which patches depend on what and your code does not depend on faulty locking, which appears to be the case. There are a few issues with the required patch set? > > The key feature that is required by my series is the ability to mmap > guest_memfd when the VM type allows. My understanding is no-one is opposed > to that as of now, that's why I assumed it's safe to build on top of that. 
> > [2] https://lore.kernel.org/kvm/20250318161823.4005529-1-tabba@google.com/T/ > [3] https://lore.kernel.org/all/diqz1puanquh.fsf@ackerleytng-ctop.c.googlers.com/T/ All of this is extremely confusing because the onus of figuring out what the final code will look like is put on the reviewer. As it is, we have issues with people not doing enough review of the code (due to limited time). One way to get reviews is to make the barrier of entry as low as possible. I spent Friday going down a rabbit hole of patches referring to each other as dependencies and I gave up. It looks like I mistook one set of patches as required vs them requiring the same in-flight ones as your patches. I am struggling to see how we can adequately support all of you given the way the patches are sent out in batches with dependencies - it is just too time consuming to sort out. Thank you, Liam
On 07/04/2025 14:40, Liam R. Howlett wrote: > * Nikita Kalyazin <kalyazin@amazon.com> [250407 07:04]: >> >> >> On 04/04/2025 18:12, Liam R. Howlett wrote: >>> +To authors of v7 series referenced in [1] >>> >>> * Nikita Kalyazin <kalyazin@amazon.com> [250404 11:44]: >>>> This series is built on top of the Fuad's v7 "mapping guest_memfd backed >>>> memory at the host" [1]. >>> >>> I didn't see their addresses in the to/cc, so I added them to my >>> response as I reference the v7 patch set below. >> >> Hi Liam, >> >> Thanks for the feedback and for extending the list. >> >>> >>>> >>>> With James's KVM userfault [2], it is possible to handle stage-2 faults >>>> in guest_memfd in userspace. However, KVM itself also triggers faults >>>> in guest_memfd in some cases, for example: PV interfaces like kvmclock, >>>> PV EOI and page table walking code when fetching the MMIO instruction on >>>> x86. It was agreed in the guest_memfd upstream call on 23 Jan 2025 [3] >>>> that KVM would be accessing those pages via userspace page tables. >>> >>> Thanks for being open about the technical call, but it would be better >>> to capture the reasons and not the call date. I explain why in the >>> linking section as well. >> >> Thanks for bringing that up. The document mostly contains the decision >> itself. The main alternative considered previously was a temporary >> reintroduction of the pages to the direct map whenever a KVM-internal access >> is required. It was coming with a significant complexity of guaranteeing >> correctness in all cases [1]. Since the memslot structure already contains >> a guest memory pointer supplied by the userspace, KVM can use it directly >> when in the VMM or vCPU context. I will add this in the cover for the next >> version. > > Thank you. > >> >> [1] https://lore.kernel.org/kvm/20240709132041.3625501-1-roypat@amazon.co.uk/T/#m4f367c52bbad0f0ba7fb07ca347c7b37258a73e5 >> >>> >>>> In >>>> order for such faults to be handled in userspace, guest_memfd needs to >>>> support userfaultfd. >>>> >>>> Changes since v2 [4]: >>>> - James: Fix sgp type when calling shmem_get_folio_gfp >>>> - James: Improved vm_ops->fault() error handling >>>> - James: Add and make use of the can_userfault() VMA operation >>>> - James: Add UFFD_FEATURE_MINOR_GUEST_MEMFD feature flag >>>> - James: Fix typos and add more checks in the test >>>> >>>> Nikita >>> >>> Please slow down... >>> >>> This patch is at v3, the v7 patch that you are building off has lockdep >>> issues [1] reported by one of the authors, and (sorry for sounding harsh >>> about the v7 of that patch) the cover letter reads a bit more like an >>> RFC than a set ready to go into linux-mm. >> >> AFAIK the lockdep issue was reported on a v7 of a different change. >> I'm basing my series on [2] ("KVM: Mapping guest_memfd backed memory at the >> host for software protected VMs"), while the issue was reported on [2] >> ("KVM: Restricted mapping of guest_memfd at the host and arm64 support"), >> which is also built on top of [2]. Please correct me if I'm missing >> something. > > I think you messed up the numbering in your statement above. I did, in an attempt to make it "even more clear" :) Sorry about that, glad you got the intention. > > I believe you are making the point that I messed up which patches depend > on what and your code does not depend on faulty locking, which appears > to be the case. > > There are a few issues with the required patch set? There are indeed, but not in the part this series depends on, as far as I can see. 
> >> >> The key feature that is required by my series is the ability to mmap >> guest_memfd when the VM type allows. My understanding is no-one is opposed >> to that as of now, that's why I assumed it's safe to build on top of that. >> >> [2] https://lore.kernel.org/kvm/20250318161823.4005529-1-tabba@google.com/T/ >> [3] https://lore.kernel.org/all/diqz1puanquh.fsf@ackerleytng-ctop.c.googlers.com/T/ > > All of this is extremely confusing because the onus of figuring out what > the final code will look like is put on the reviewer. As it is, we have > issues with people not doing enough review of the code (due to limited > time). One way to get reviews is to make the barrier of entry as low as > possible. > > I spent Friday going down a rabbit hole of patches referring to each > other as dependencies and I gave up. It looks like I mistook one set of > patches as required vs them requiring the same in-flight ones as your > patches. > > I am struggling to see how we can adequately support all of you given > the way the patches are sent out in batches with dependencies - it is > just too time consuming to sort out. I'm happy to do whatever I can to make the review easier. I suppose the extreme case is to wait for the dependencies to get accepted, effectively serialising submissions, but that slows the process down significantly. For example, I received very good feedback on v1 and v2 of this series and was able to address it instead of waiting for the dependency. Would including the required patches directly in the series help? My only concern is in that case the same patch will be submitted multiple times (as a part of every depending series), but if it's better, I'll be doing that instead. > > Thank you, > Liam >
* Nikita Kalyazin <kalyazin@amazon.com> [250407 10:05]: > ... > > > > All of this is extremely confusing because the onus of figuring out what > > the final code will look like is put on the reviewer. As it is, we have > > issues with people not doing enough review of the code (due to limited > > time). One way to get reviews is to make the barrier of entry as low as > > possible. > > > > I spent Friday going down a rabbit hole of patches referring to each > > other as dependencies and I gave up. It looks like I mistook one set of > > patches as required vs them requiring the same in-flight ones as your > > patches. > > > > I am struggling to see how we can adequately support all of you given > > the way the patches are sent out in batches with dependencies - it is > > just too time consuming to sort out. > > I'm happy to do whatever I can to make the review easier. I suppose the > extreme case is to wait for the dependencies to get accepted, effectively > serialising submissions, but that slows the process down significantly. For > example, I received very good feedback on v1 and v2 of this series and was > able to address it instead of waiting for the dependency. Would including > the required patches directly in the series help? My only concern is in > that case the same patch will be submitted multiple times (as a part of > every depending series), but if it's better, I'll be doing that instead. Don't resend patches that someone else is upstreaming, that'll cause other problems. Three methods come to mind: 1. As you stated, wait for the dependencies to land. This is will mean what you are working against is well tested and won't change (and you won't have to re-spin due to an unstable base). 2. Combine them into a bigger patch set. I can then pull one patch set and look at the parts of interest to the mm side. 3. Provide a git repo with the necessary changes together. I think 2 and 3 together should be used for the guest_memfd patches. Someone needs to be managing these to send upstream. See the discussion in another patch set on guest_memfd here [1]. As this is not based on fully upstream patches, this should be marked as RFC, imo. Thanks, Liam [1]. https://lore.kernel.org/all/aizia2elwspxcmfrjote5h7k5wdw2stp42slytkl5visrjvzwi@jj3lwuudiyjk/
On 07.04.25 16:24, Liam R. Howlett wrote: > * Nikita Kalyazin <kalyazin@amazon.com> [250407 10:05]: >> > > ... > >>> >>> All of this is extremely confusing because the onus of figuring out what >>> the final code will look like is put on the reviewer. As it is, we have >>> issues with people not doing enough review of the code (due to limited >>> time). One way to get reviews is to make the barrier of entry as low as >>> possible. >>> >>> I spent Friday going down a rabbit hole of patches referring to each >>> other as dependencies and I gave up. It looks like I mistook one set of >>> patches as required vs them requiring the same in-flight ones as your >>> patches. >>> >>> I am struggling to see how we can adequately support all of you given >>> the way the patches are sent out in batches with dependencies - it is >>> just too time consuming to sort out. >> >> I'm happy to do whatever I can to make the review easier. I suppose the >> extreme case is to wait for the dependencies to get accepted, effectively >> serialising submissions, but that slows the process down significantly. For >> example, I received very good feedback on v1 and v2 of this series and was >> able to address it instead of waiting for the dependency. Would including >> the required patches directly in the series help? My only concern is in >> that case the same patch will be submitted multiple times (as a part of >> every depending series), but if it's better, I'll be doing that instead. > > Don't resend patches that someone else is upstreaming, that'll cause > other problems. > > Three methods come to mind: > > 1. As you stated, wait for the dependencies to land. This is will mean > what you are working against is well tested and won't change (and you > won't have to re-spin due to an unstable base). > > 2. Combine them into a bigger patch set. I can then pull one patch set > and look at the parts of interest to the mm side. > > 3. Provide a git repo with the necessary changes together. > > I think 2 and 3 together should be used for the guest_memfd patches. > Someone needs to be managing these to send upstream. See the discussion > in another patch set on guest_memfd here [1]. The issue is that most extensions are fairly independent from each other, except that they built up on Fuad's mmap support, Sending all together as one thing might not be the best option. Once basic mmap support is upstream, some of the extensions (e.g., directmap removal) can go in next. So until that is upstream, I agree that tagging the stuff that builds up on that is the right thing to do, and providing git trees is another very good idea. I'll prioritize getting Fuad's mmap stuff reviewed. (I keep saying that, I know)
On Mon, Apr 07, 2025 at 04:46:48PM +0200, David Hildenbrand wrote: > On 07.04.25 16:24, Liam R. Howlett wrote: > > * Nikita Kalyazin <kalyazin@amazon.com> [250407 10:05]: > > > > > > > ... > > > > > > > > > > All of this is extremely confusing because the onus of figuring out what > > > > the final code will look like is put on the reviewer. As it is, we have > > > > issues with people not doing enough review of the code (due to limited > > > > time). One way to get reviews is to make the barrier of entry as low as > > > > possible. > > > > > > > > I spent Friday going down a rabbit hole of patches referring to each > > > > other as dependencies and I gave up. It looks like I mistook one set of > > > > patches as required vs them requiring the same in-flight ones as your > > > > patches. > > > > > > > > I am struggling to see how we can adequately support all of you given > > > > the way the patches are sent out in batches with dependencies - it is > > > > just too time consuming to sort out. > > > > > > I'm happy to do whatever I can to make the review easier. I suppose the > > > extreme case is to wait for the dependencies to get accepted, effectively > > > serialising submissions, but that slows the process down significantly. For > > > example, I received very good feedback on v1 and v2 of this series and was > > > able to address it instead of waiting for the dependency. Would including > > > the required patches directly in the series help? My only concern is in > > > that case the same patch will be submitted multiple times (as a part of > > > every depending series), but if it's better, I'll be doing that instead. > > > > Don't resend patches that someone else is upstreaming, that'll cause > > other problems. > > > > Three methods come to mind: > > > > 1. As you stated, wait for the dependencies to land. This is will mean > > what you are working against is well tested and won't change (and you > > won't have to re-spin due to an unstable base). > > > > 2. Combine them into a bigger patch set. I can then pull one patch set > > and look at the parts of interest to the mm side. > > > > 3. Provide a git repo with the necessary changes together. > > > > I think 2 and 3 together should be used for the guest_memfd patches. > > Someone needs to be managing these to send upstream. See the discussion > > in another patch set on guest_memfd here [1]. > > The issue is that most extensions are fairly independent from each other, > except that they built up on Fuad's mmap support, > > Sending all together as one thing might not be the best option. > > Once basic mmap support is upstream, some of the extensions (e.g., directmap > removal) can go in next. > > So until that is upstream, I agree that tagging the stuff that builds up on > that is the right thing to do, and providing git trees is another very good > idea. > > I'll prioritize getting Fuad's mmap stuff reviewed. (I keep saying that, I > know) Which series is this? Sorry maybe lost track of this one. > > -- > Cheers, > > David / dhildenb >
On 07.04.25 17:14, Lorenzo Stoakes wrote: > On Mon, Apr 07, 2025 at 04:46:48PM +0200, David Hildenbrand wrote: >> On 07.04.25 16:24, Liam R. Howlett wrote: >>> * Nikita Kalyazin <kalyazin@amazon.com> [250407 10:05]: >>>> >>> >>> ... >>> >>>>> >>>>> All of this is extremely confusing because the onus of figuring out what >>>>> the final code will look like is put on the reviewer. As it is, we have >>>>> issues with people not doing enough review of the code (due to limited >>>>> time). One way to get reviews is to make the barrier of entry as low as >>>>> possible. >>>>> >>>>> I spent Friday going down a rabbit hole of patches referring to each >>>>> other as dependencies and I gave up. It looks like I mistook one set of >>>>> patches as required vs them requiring the same in-flight ones as your >>>>> patches. >>>>> >>>>> I am struggling to see how we can adequately support all of you given >>>>> the way the patches are sent out in batches with dependencies - it is >>>>> just too time consuming to sort out. >>>> >>>> I'm happy to do whatever I can to make the review easier. I suppose the >>>> extreme case is to wait for the dependencies to get accepted, effectively >>>> serialising submissions, but that slows the process down significantly. For >>>> example, I received very good feedback on v1 and v2 of this series and was >>>> able to address it instead of waiting for the dependency. Would including >>>> the required patches directly in the series help? My only concern is in >>>> that case the same patch will be submitted multiple times (as a part of >>>> every depending series), but if it's better, I'll be doing that instead. >>> >>> Don't resend patches that someone else is upstreaming, that'll cause >>> other problems. >>> >>> Three methods come to mind: >>> >>> 1. As you stated, wait for the dependencies to land. This is will mean >>> what you are working against is well tested and won't change (and you >>> won't have to re-spin due to an unstable base). >>> >>> 2. Combine them into a bigger patch set. I can then pull one patch set >>> and look at the parts of interest to the mm side. >>> >>> 3. Provide a git repo with the necessary changes together. >>> >>> I think 2 and 3 together should be used for the guest_memfd patches. >>> Someone needs to be managing these to send upstream. See the discussion >>> in another patch set on guest_memfd here [1]. >> >> The issue is that most extensions are fairly independent from each other, >> except that they built up on Fuad's mmap support, >> >> Sending all together as one thing might not be the best option. >> >> Once basic mmap support is upstream, some of the extensions (e.g., directmap >> removal) can go in next. >> >> So until that is upstream, I agree that tagging the stuff that builds up on >> that is the right thing to do, and providing git trees is another very good >> idea. >> >> I'll prioritize getting Fuad's mmap stuff reviewed. (I keep saying that, I >> know) > > Which series is this? Sorry maybe lost track of this one. Heh, not your fault :) The most important one for basic mmap support is "KVM: Mapping guest_memfd backed memory at the host for software protected VMs" [1]. Some stuff (e.g., direct map removal) should be able to make progress once that landed. I do expect the MM-specific patch in there ("mm: Consolidate freeing of typed folios on final folio_put()") to not be included as part of that work. 
[I shared the feedback from the LSF/MM session in the upstream guest_memfd call, and we decided to minimize the usage of the folio_put() callback to where absolutely required; that will simplify things and avoid issues as pointed out by Willy, which is great] The next important one will be "[PATCH v7 0/7] KVM: Restricted mapping of guest_memfd at the host and arm64 support" [2], but I similarly expect a simplification as we try moving away from folio_put() for the "shared <-> private" page conversion case. So I expect a v8 of [1] (and [2] will also need to be updated). @Fuad, please let me know if I am wrong. [1] https://lore.kernel.org/all/20250318161823.4005529-1-tabba@google.com/T/#u [2] https://lore.kernel.org/all/20250328153133.3504118-1-tabba@google.com/
On Mon, Apr 07, 2025 at 04:46:48PM +0200, David Hildenbrand wrote: > On 07.04.25 16:24, Liam R. Howlett wrote: > > * Nikita Kalyazin <kalyazin@amazon.com> [250407 10:05]: > > > > > > > ... > > > > > > > > > > All of this is extremely confusing because the onus of figuring out what > > > > the final code will look like is put on the reviewer. As it is, we have > > > > issues with people not doing enough review of the code (due to limited > > > > time). One way to get reviews is to make the barrier of entry as low as > > > > possible. > > > > > > > > I spent Friday going down a rabbit hole of patches referring to each > > > > other as dependencies and I gave up. It looks like I mistook one set of > > > > patches as required vs them requiring the same in-flight ones as your > > > > patches. > > > > > > > > I am struggling to see how we can adequately support all of you given > > > > the way the patches are sent out in batches with dependencies - it is > > > > just too time consuming to sort out. > > > > > > I'm happy to do whatever I can to make the review easier. I suppose the > > > extreme case is to wait for the dependencies to get accepted, effectively > > > serialising submissions, but that slows the process down significantly. For > > > example, I received very good feedback on v1 and v2 of this series and was > > > able to address it instead of waiting for the dependency. Would including > > > the required patches directly in the series help? My only concern is in > > > that case the same patch will be submitted multiple times (as a part of > > > every depending series), but if it's better, I'll be doing that instead. > > > > Don't resend patches that someone else is upstreaming, that'll cause > > other problems. > > > > Three methods come to mind: > > > > 1. As you stated, wait for the dependencies to land. This is will mean > > what you are working against is well tested and won't change (and you > > won't have to re-spin due to an unstable base). > > > > 2. Combine them into a bigger patch set. I can then pull one patch set > > and look at the parts of interest to the mm side. > > > > 3. Provide a git repo with the necessary changes together. > > > > I think 2 and 3 together should be used for the guest_memfd patches. > > Someone needs to be managing these to send upstream. See the discussion > > in another patch set on guest_memfd here [1]. > > The issue is that most extensions are fairly independent from each other, > except that they built up on Fuad's mmap support, > > Sending all together as one thing might not be the best option. > > Once basic mmap support is upstream, some of the extensions (e.g., directmap > removal) can go in next. > > So until that is upstream, I agree that tagging the stuff that builds up on > that is the right thing to do, and providing git trees is another very good > idea. > > I'll prioritize getting Fuad's mmap stuff reviewed. (I keep saying that, I > know) Fwiw, b4 allows to specify dependencies so you can b4 shazam/am and it will pull in all prerequisite patches: b4 prep --edit-deps Edit the series dependencies in your defined $EDITOR (or core.editor)
Christian Brauner <brauner@kernel.org> writes: > On Mon, Apr 07, 2025 at 04:46:48PM +0200, David Hildenbrand wrote: > > <snip> > > Fwiw, b4 allows to specify dependencies so you can b4 shazam/am and it > will pull in all prerequisite patches: > > b4 prep --edit-deps Edit the series dependencies in your defined $EDITOR (or core.editor) Thank you for this tip! On this note, what are some good CONFIGs people always enable during development?