| Message ID | 20240726235234.228822-2-seanjc@google.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | KVM: Stop grabbing references to PFNMAP'd pages |
Sean Christopherson <seanjc@google.com> writes:

> Put the page reference acquired by gfn_to_pfn_prot() if
> kvm_vm_ioctl_mte_copy_tags() runs into ZONE_DEVICE memory. KVM's less-
> than-stellar heuristics for dealing with pfn-mapped memory means that KVM
> can get a page reference to ZONE_DEVICE memory.
>
> Fixes: f0376edb1ddc ("KVM: arm64: Add ioctl to fetch/store tags in a guest")
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/arm64/kvm/guest.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 11098eb7eb44..e1f0ff08836a 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -1059,6 +1059,7 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
>  		page = pfn_to_online_page(pfn);
>  		if (!page) {
>  			/* Reject ZONE_DEVICE memory */
> +			kvm_release_pfn_clean(pfn);

I guess this gets renamed later in the series.

However my main comment is: does lack of a page always mean ZONE_DEVICE?
Looking at pfn_to_online_page() I see a bunch of other checks first. Why
isn't it that function's responsibility to clean up after itself if it's
returning NULLs?

>  			ret = -EFAULT;
>  			goto out;
>  		}
On Wed, Jul 31, 2024, Alex Bennée wrote:
> Sean Christopherson <seanjc@google.com> writes:
>
> > Put the page reference acquired by gfn_to_pfn_prot() if
> > kvm_vm_ioctl_mte_copy_tags() runs into ZONE_DEVICE memory. KVM's less-
> > than-stellar heuristics for dealing with pfn-mapped memory means that KVM
> > can get a page reference to ZONE_DEVICE memory.
> >
> > Fixes: f0376edb1ddc ("KVM: arm64: Add ioctl to fetch/store tags in a guest")
> > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > ---
> >  arch/arm64/kvm/guest.c | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> > index 11098eb7eb44..e1f0ff08836a 100644
> > --- a/arch/arm64/kvm/guest.c
> > +++ b/arch/arm64/kvm/guest.c
> > @@ -1059,6 +1059,7 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
> >  		page = pfn_to_online_page(pfn);
> >  		if (!page) {
> >  			/* Reject ZONE_DEVICE memory */
> > +			kvm_release_pfn_clean(pfn);
>
> I guess this gets renamed later in the series.
>
> However my main comment is: does lack of a page always mean ZONE_DEVICE?

Nope.

> Looking at pfn_to_online_page() I see a bunch of other checks first. Why
> isn't it that function's responsibility to clean up after itself if it's
> returning NULLs?

pfn_to_online_page() is more strict than gfn_to_pfn_prot(). At least in
theory, gfn_to_pfn_prot() could return a pfn that has an associated
"struct page", with a reference held to said page. But for that same pfn,
pfn_to_online_page() could return NULL, in which case KVM needs to put the
reference it acquired via gfn_to_pfn_prot().
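For context, a simplified sketch of the loop in kvm_vm_ioctl_mte_copy_tags()
with the fix applied is shown below. Only the part the diff touches is quoted
from this thread; the is_error_noslot_pfn() check, the variable declarations,
and the elided tag-copy step are reconstructed assumptions, not taken from the
patch itself:

	/*
	 * Abridged sketch; the real function also validates the guest address
	 * range and copies MTE tags via the kernel linear map.
	 */
	while (length > 0) {
		/* May take a reference to the page backing this pfn. */
		kvm_pfn_t pfn = gfn_to_pfn_prot(kvm, gfn, write, NULL);
		struct page *page;

		if (is_error_noslot_pfn(pfn)) {
			ret = -EFAULT;
			goto out;
		}

		/*
		 * pfn_to_online_page() is stricter than gfn_to_pfn_prot(): it
		 * can return NULL even though a reference was acquired above,
		 * so that reference must be dropped before bailing out.
		 */
		page = pfn_to_online_page(pfn);
		if (!page) {
			/* Reject ZONE_DEVICE memory */
			kvm_release_pfn_clean(pfn);
			ret = -EFAULT;
			goto out;
		}

		/* ... copy the tags for this page, release it, advance gfn ... */
	}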
+ Steven Price for this patch (and the following one), as this really is
his turf.

On Sat, 27 Jul 2024 00:51:10 +0100,
Sean Christopherson <seanjc@google.com> wrote:
>
> Put the page reference acquired by gfn_to_pfn_prot() if
> kvm_vm_ioctl_mte_copy_tags() runs into ZONE_DEVICE memory. KVM's less-
> than-stellar heuristics for dealing with pfn-mapped memory means that KVM
> can get a page reference to ZONE_DEVICE memory.
>
> Fixes: f0376edb1ddc ("KVM: arm64: Add ioctl to fetch/store tags in a guest")
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/arm64/kvm/guest.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 11098eb7eb44..e1f0ff08836a 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -1059,6 +1059,7 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
>  		page = pfn_to_online_page(pfn);
>  		if (!page) {
>  			/* Reject ZONE_DEVICE memory */
> +			kvm_release_pfn_clean(pfn);
>  			ret = -EFAULT;
>  			goto out;
>  		}
> --
> 2.46.0.rc1.232.g9752f9e123-goog
>
>
On Fri, Jul 26, 2024 at 04:51:10PM -0700, Sean Christopherson wrote:
> Put the page reference acquired by gfn_to_pfn_prot() if
> kvm_vm_ioctl_mte_copy_tags() runs into ZONE_DEVICE memory. KVM's less-
> than-stellar heuristics for dealing with pfn-mapped memory means that KVM
> can get a page reference to ZONE_DEVICE memory.
>
> Fixes: f0376edb1ddc ("KVM: arm64: Add ioctl to fetch/store tags in a guest")
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/arm64/kvm/guest.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 11098eb7eb44..e1f0ff08836a 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -1059,6 +1059,7 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
>  		page = pfn_to_online_page(pfn);
>  		if (!page) {
>  			/* Reject ZONE_DEVICE memory */
> +			kvm_release_pfn_clean(pfn);
>  			ret = -EFAULT;
>  			goto out;
>  		}

This patch makes sense irrespective of whether the above pfn is a
ZONE_DEVICE or not. gfn_to_pfn_prot() increased the page refcount via
GUP, so it must be released before bailing out of this loop.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
On 07/08/2024 15:15, Catalin Marinas wrote:
> On Fri, Jul 26, 2024 at 04:51:10PM -0700, Sean Christopherson wrote:
>> Put the page reference acquired by gfn_to_pfn_prot() if
>> kvm_vm_ioctl_mte_copy_tags() runs into ZONE_DEVICE memory. KVM's less-
>> than-stellar heuristics for dealing with pfn-mapped memory means that KVM
>> can get a page reference to ZONE_DEVICE memory.
>>
>> Fixes: f0376edb1ddc ("KVM: arm64: Add ioctl to fetch/store tags in a guest")
>> Signed-off-by: Sean Christopherson <seanjc@google.com>
>> ---
>>  arch/arm64/kvm/guest.c | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
>> index 11098eb7eb44..e1f0ff08836a 100644
>> --- a/arch/arm64/kvm/guest.c
>> +++ b/arch/arm64/kvm/guest.c
>> @@ -1059,6 +1059,7 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
>>  		page = pfn_to_online_page(pfn);
>>  		if (!page) {
>>  			/* Reject ZONE_DEVICE memory */
>> +			kvm_release_pfn_clean(pfn);
>>  			ret = -EFAULT;
>>  			goto out;
>>  		}
>
> This patch makes sense irrespective of whether the above pfn is a
> ZONE_DEVICE or not. gfn_to_pfn_prot() increased the page refcount via
> GUP, so it must be released before bailing out of this loop.
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
>

Yep, as Catalin says, this is an 'obviously' correct fix - the reference
needs releasing before bailing out.

The comment there is perhaps misleading - it's not just ZONE_DEVICE memory
that will be rejected, but this is the case that was in my mind when I
wrote it. Although clearly I wasn't thinking hard enough when writing the
code in the first place... ;)

Reviewed-by: Steven Price <steven.price@arm.com>

Thanks,
Steve
On Fri, 26 Jul 2024 16:51:10 -0700, Sean Christopherson wrote:
> Put the page reference acquired by gfn_to_pfn_prot() if
> kvm_vm_ioctl_mte_copy_tags() runs into ZONE_DEVICE memory. KVM's less-
> than-stellar heuristics for dealing with pfn-mapped memory means that KVM
> can get a page reference to ZONE_DEVICE memory.
>
>

Applied to next, thanks!

[01/84] KVM: arm64: Release pfn, i.e. put page, if copying MTE tags hits ZONE_DEVICE
        commit: ae41d7dbaeb4f79134136cd65ad7015cf9ccf78a

Cheers,

M.
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 11098eb7eb44..e1f0ff08836a 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -1059,6 +1059,7 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 		page = pfn_to_online_page(pfn);
 		if (!page) {
 			/* Reject ZONE_DEVICE memory */
+			kvm_release_pfn_clean(pfn);
 			ret = -EFAULT;
 			goto out;
 		}
Put the page reference acquired by gfn_to_pfn_prot() if
kvm_vm_ioctl_mte_copy_tags() runs into ZONE_DEVICE memory. KVM's less-
than-stellar heuristics for dealing with pfn-mapped memory means that KVM
can get a page reference to ZONE_DEVICE memory.

Fixes: f0376edb1ddc ("KVM: arm64: Add ioctl to fetch/store tags in a guest")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/arm64/kvm/guest.c | 1 +
 1 file changed, 1 insertion(+)