Message ID | 5da96b29-7f80-4bfd-eb30-5547f415d2b8@suse.com (mailing list archive)
State      | New, archived
Series     | x86/PoD: correct ordering of checks in p2m_pod_zero_check()
> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 07 April 2020 12:08
> To: xen-devel@lists.xenproject.org
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei Liu
> <wl@xen.org>; Paul Durrant <paul@xen.org>
> Subject: [PATCH] x86/PoD: correct ordering of checks in p2m_pod_zero_check()
>
> Commit 0537d246f8db ("mm: add 'is_special_page' inline function...")
> moved the is_special_page() checks first in its respective changes to
> PoD code. While this is fine for p2m_pod_zero_check_superpage(), the
> validity of the MFN is inferred in both cases from the p2m_is_ram()
> check, which therefore also needs to come first in this 2nd instance.
>
> Take the opportunity and address latent UB here as well - transform
> the MFN into struct page_info * only after having established that
> this is a valid page.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Paul Durrant <paul@xen.org>

...with a suggestion below

> ---
> I will admit that this was build tested only. I did observe the crash
> late yesterday while in the office, but got around to analyzing it only
> today, where I'm again restricted in what I can reasonably test.
>
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -877,23 +877,25 @@ p2m_pod_zero_check(struct p2m_domain *p2
>      for ( i = 0; i < count; i++ )
>      {
>          p2m_access_t a;
> -        struct page_info *pg;
>
>          mfns[i] = p2m->get_entry(p2m, gfns[i], types + i, &a,
>                                   0, NULL, NULL);
> -        pg = mfn_to_page(mfns[i]);
>
>          /*
>           * If this is ram, and not a pagetable or a special page, and
>           * probably not mapped elsewhere, map it; otherwise, skip.
>           */
> -        if ( !is_special_page(pg) && p2m_is_ram(types[i]) &&
> -             (pg->count_info & PGC_allocated) &&
> -             !(pg->count_info & PGC_page_table) &&
> -             ((pg->count_info & PGC_count_mask) <= max_ref) )
> -            map[i] = map_domain_page(mfns[i]);
> -        else
> -            map[i] = NULL;
> +        map[i] = NULL;
> +        if ( p2m_is_ram(types[i]) )
> +        {
> +            const struct page_info *pg = mfn_to_page(mfns[i]);

Perhaps have local scope stack variable for count_info too?

> +
> +            if ( !is_special_page(pg) &&
> +                 (pg->count_info & PGC_allocated) &&
> +                 !(pg->count_info & PGC_page_table) &&
> +                 ((pg->count_info & PGC_count_mask) <= max_ref) )
> +                map[i] = map_domain_page(mfns[i]);
> +        }
>      }
>
>      /*
On 07.04.2020 14:39, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 07 April 2020 12:08
>> To: xen-devel@lists.xenproject.org
>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei Liu
>> <wl@xen.org>; Paul Durrant <paul@xen.org>
>> Subject: [PATCH] x86/PoD: correct ordering of checks in p2m_pod_zero_check()
>>
>> Commit 0537d246f8db ("mm: add 'is_special_page' inline function...")
>> moved the is_special_page() checks first in its respective changes to
>> PoD code. While this is fine for p2m_pod_zero_check_superpage(), the
>> validity of the MFN is inferred in both cases from the p2m_is_ram()
>> check, which therefore also needs to come first in this 2nd instance.
>>
>> Take the opportunity and address latent UB here as well - transform
>> the MFN into struct page_info * only after having established that
>> this is a valid page.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> Reviewed-by: Paul Durrant <paul@xen.org>

Thanks.

> ...with a suggestion below
>
>> ---
>> I will admit that this was build tested only. I did observe the crash
>> late yesterday while in the office, but got around to analyzing it only
>> today, where I'm again restricted in what I can reasonably test.
>>
>> --- a/xen/arch/x86/mm/p2m-pod.c
>> +++ b/xen/arch/x86/mm/p2m-pod.c
>> @@ -877,23 +877,25 @@ p2m_pod_zero_check(struct p2m_domain *p2
>>      for ( i = 0; i < count; i++ )
>>      {
>>          p2m_access_t a;
>> -        struct page_info *pg;
>>
>>          mfns[i] = p2m->get_entry(p2m, gfns[i], types + i, &a,
>>                                   0, NULL, NULL);
>> -        pg = mfn_to_page(mfns[i]);
>>
>>          /*
>>           * If this is ram, and not a pagetable or a special page, and
>>           * probably not mapped elsewhere, map it; otherwise, skip.
>>           */
>> -        if ( !is_special_page(pg) && p2m_is_ram(types[i]) &&
>> -             (pg->count_info & PGC_allocated) &&
>> -             !(pg->count_info & PGC_page_table) &&
>> -             ((pg->count_info & PGC_count_mask) <= max_ref) )
>> -            map[i] = map_domain_page(mfns[i]);
>> -        else
>> -            map[i] = NULL;
>> +        map[i] = NULL;
>> +        if ( p2m_is_ram(types[i]) )
>> +        {
>> +            const struct page_info *pg = mfn_to_page(mfns[i]);
>
> Perhaps have local scope stack variable for count_info too?

I'd view this as useful only if ...

>> +
>> +            if ( !is_special_page(pg) &&

... this could then also be made use of it.

Jan
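For reference, the variant Paul suggests - caching count_info in a local variable - would look roughly like the sketch below. This is a hypothetical rendering of the suggestion, not code from the committed patch, and as Jan notes it only pays off if is_special_page() could consume the cached value too:

```
/* Hypothetical sketch of the suggested local count_info variable;
 * not what was committed. */
map[i] = NULL;
if ( p2m_is_ram(types[i]) )
{
    const struct page_info *pg = mfn_to_page(mfns[i]);
    unsigned long count_info = pg->count_info;

    if ( !is_special_page(pg) &&
         (count_info & PGC_allocated) &&
         !(count_info & PGC_page_table) &&
         ((count_info & PGC_count_mask) <= max_ref) )
        map[i] = map_domain_page(mfns[i]);
}
```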
On 07/04/2020 12:07, Jan Beulich wrote:
> Commit 0537d246f8db ("mm: add 'is_special_page' inline function...")
> moved the is_special_page() checks first in its respective changes to
> PoD code. While this is fine for p2m_pod_zero_check_superpage(), the
> validity of the MFN is inferred in both cases from the p2m_is_ram()
> check, which therefore also needs to come first in this 2nd instance.
>
> Take the opportunity and address latent UB here as well - transform
> the MFN into struct page_info * only after having established that
> this is a valid page.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -877,23 +877,25 @@ p2m_pod_zero_check(struct p2m_domain *p2
     for ( i = 0; i < count; i++ )
     {
         p2m_access_t a;
-        struct page_info *pg;

         mfns[i] = p2m->get_entry(p2m, gfns[i], types + i, &a,
                                  0, NULL, NULL);
-        pg = mfn_to_page(mfns[i]);

         /*
          * If this is ram, and not a pagetable or a special page, and
          * probably not mapped elsewhere, map it; otherwise, skip.
          */
-        if ( !is_special_page(pg) && p2m_is_ram(types[i]) &&
-             (pg->count_info & PGC_allocated) &&
-             !(pg->count_info & PGC_page_table) &&
-             ((pg->count_info & PGC_count_mask) <= max_ref) )
-            map[i] = map_domain_page(mfns[i]);
-        else
-            map[i] = NULL;
+        map[i] = NULL;
+        if ( p2m_is_ram(types[i]) )
+        {
+            const struct page_info *pg = mfn_to_page(mfns[i]);
+
+            if ( !is_special_page(pg) &&
+                 (pg->count_info & PGC_allocated) &&
+                 !(pg->count_info & PGC_page_table) &&
+                 ((pg->count_info & PGC_count_mask) <= max_ref) )
+                map[i] = map_domain_page(mfns[i]);
+        }
     }

     /*
Commit 0537d246f8db ("mm: add 'is_special_page' inline function...")
moved the is_special_page() checks first in its respective changes to
PoD code. While this is fine for p2m_pod_zero_check_superpage(), the
validity of the MFN is inferred in both cases from the p2m_is_ram()
check, which therefore also needs to come first in this 2nd instance.

Take the opportunity and address latent UB here as well - transform
the MFN into struct page_info * only after having established that
this is a valid page.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I will admit that this was build tested only. I did observe the crash
late yesterday while in the office, but got around to analyzing it only
today, where I'm again restricted in what I can reasonably test.
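The ordering point the patch makes can be illustrated with a stand-alone sketch. The types, constants, and helpers below are mock stand-ins for Xen's real definitions (where mfn_to_page() on an invalid MFN is undefined behaviour, modelled here by returning NULL); the function shows the corrected shape, where the page_info pointer is only formed after p2m_is_ram() has established the MFN is valid:

```c
#include <assert.h>
#include <stddef.h>

/* Mock stand-ins for Xen's page flags and types (illustrative only). */
#define PGC_allocated  (1u << 9)
#define PGC_page_table (1u << 10)
#define PGC_count_mask 0xffu

struct page_info { unsigned int count_info; };

enum p2m_type { p2m_invalid, p2m_ram_rw };

static int p2m_is_ram(enum p2m_type t) { return t == p2m_ram_rw; }

/* In Xen, calling this on an invalid MFN is undefined behaviour;
 * the mock returns NULL, which would crash if dereferenced. */
static struct page_info *mfn_to_page(struct page_info *frametable, long mfn)
{
    return mfn < 0 ? NULL : &frametable[mfn];
}

static int is_special_page(const struct page_info *pg)
{
    (void)pg;
    return 0; /* no special pages in this mock */
}

/* Corrected ordering from the patch: bail out on non-RAM types first,
 * so the page is only inspected once the MFN is known to be valid. */
static int zero_check_one(struct page_info *frametable, long mfn,
                          enum p2m_type t, unsigned int max_ref)
{
    if ( !p2m_is_ram(t) )
        return 0; /* map[i] stays NULL */

    const struct page_info *pg = mfn_to_page(frametable, mfn);

    return !is_special_page(pg) &&
           (pg->count_info & PGC_allocated) &&
           !(pg->count_info & PGC_page_table) &&
           ((pg->count_info & PGC_count_mask) <= max_ref);
}
```

With the pre-patch ordering, the p2m_invalid case below would have dereferenced an invalid page pointer before p2m_is_ram() ever ran; with the fix it returns 0 without touching the page.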