Message ID | 1469717505-8026-15-git-send-email-julien.grall@arm.com (mailing list archive) |
---|---|
State | New, archived |
On Thu, 28 Jul 2016, Julien Grall wrote:
> The function p2m_cache_flush can be re-implemented using the generic
> function p2m_get_entry by iterating over the range and using the mapping
> order given by the callee.
>
> As the current implementation, no preemption is implemented, although
> the comment in the current code claimed it. As the function is called by
> a DOMCTL with a region of 1GB maximum, I think the preemption can be
> left unimplemented for now.
>
> Finally drop the operation CACHEFLUSH in apply_one_level as nobody is
> using it anymore. Note that the function could have been dropped in one
> go at the end, however I find easier to drop the operations one by one
> avoiding a big deletion in the patch that convert the last operation.
>
> Signed-off-by: Julien Grall <julien.grall@arm.com>
>
> ---
> The loop pattern will be very for the reliquish function. It might
> be possible to extract it in a separate function.
> ---
>  xen/arch/arm/p2m.c | 67 +++++++++++++++++++++++++++---------------------------
>  1 file changed, 34 insertions(+), 33 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 9a9c85c..e7697bb 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -722,7 +722,6 @@ enum p2m_operation {
>      INSERT,
>      REMOVE,
>      RELINQUISH,
> -    CACHEFLUSH,
>      MEMACCESS,
>  };
>
> @@ -978,36 +977,6 @@ static int apply_one_level(struct domain *d,
>           */
>          return P2M_ONE_PROGRESS;
>
> -    case CACHEFLUSH:
> -        if ( !p2m_valid(orig_pte) )
> -        {
> -            *addr = (*addr + level_size) & level_mask;
> -            return P2M_ONE_PROGRESS_NOP;
> -        }
> -
> -        if ( level < 3 && p2m_table(orig_pte) )
> -            return P2M_ONE_DESCEND;
> -
> -        /*
> -         * could flush up to the next superpage boundary, but would
> -         * need to be careful about preemption, so just do one 4K page
> -         * now and return P2M_ONE_PROGRESS{,_NOP} so that the caller will
> -         * continue to loop over the rest of the range.
> -         */
> -        if ( p2m_is_ram(orig_pte.p2m.type) )
> -        {
> -            unsigned long offset = paddr_to_pfn(*addr & ~level_mask);
> -            flush_page_to_ram(orig_pte.p2m.base + offset);
> -
> -            *addr += PAGE_SIZE;
> -            return P2M_ONE_PROGRESS;
> -        }
> -        else
> -        {
> -            *addr += PAGE_SIZE;
> -            return P2M_ONE_PROGRESS_NOP;
> -        }
> -
>      case MEMACCESS:
>          if ( level < 3 )
>          {
> @@ -1555,12 +1524,44 @@ int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
>  {
>      struct p2m_domain *p2m = &d->arch.p2m;
>      gfn_t end = gfn_add(start, nr);
> +    p2m_type_t t;
> +    unsigned int order;
>
>      start = gfn_max(start, p2m->lowest_mapped_gfn);
>      end = gfn_min(end, p2m->max_mapped_gfn);
>
> -    return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
> -                             0, p2m_invalid, d->arch.p2m.default_access);
> +    /* XXX: Should we use write lock here? */

Good question. As the p2m is left unchanged by this function, I think
that the read lock is sufficient.


> +    p2m_read_lock(p2m);
> +
> +    for ( ; gfn_x(start) < gfn_x(end); start = gfn_add(start, 1UL << order) )
> +    {
> +        mfn_t mfn = p2m_get_entry(p2m, start, &t, NULL, &order);
> +
> +        /* Skip hole and non-RAM page */
> +        if ( mfn_eq(mfn, INVALID_MFN) || !p2m_is_ram(t) )
> +        {
> +            /*
> +             * the order corresponds to the order of the mapping in the
> +             * page table. so we need to align the gfn before
> +             * incrementing.
> +             */
> +            start = _gfn(gfn_x(start) & ~((1UL << order) - 1));
> +            continue;
> +        }
> +
> +        /*
> +         * Could flush up to the next superpage boundary, but we would
> +         * need to be careful about preemption, so just do one 4K page
> +         * now.

I think that even without preemption you should implement flushing up to
the next superpage boundary (but not beyond "end"). You can still do it
4K at a time, but only call p2m_get_entry once per "order". Could be a
decent performance improvement as cacheflush is a performance critical
hypercall.

> +         * XXX: Implement preemption.
> +         */
> +        flush_page_to_ram(mfn_x(mfn));
> +        order = 0;
> +    }
> +
> +    p2m_read_unlock(p2m);
> +
> +    return 0;
>  }
>
>  mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
> --
> 1.9.1
>
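As a reference point, a minimal sketch of the loop Stefano is suggesting could look like the following. It reuses the names from the patch (p2m_get_entry, flush_page_to_ram, gfn_add, gfn_min, _gfn) and assumes the surrounding p2m_cache_flush() context (p2m, start, end, t, order); the next_block clamping and the inner 4K loop are illustrative only, not code from this series, and it assumes p2m_get_entry() returns the MFN for the exact GFN passed in:

    unsigned long i;

    for ( ; gfn_x(start) < gfn_x(end); )
    {
        mfn_t mfn = p2m_get_entry(p2m, start, &t, NULL, &order);
        /* First GFN after the mapping covering "start", clamped to "end". */
        gfn_t next_block = gfn_add(_gfn(gfn_x(start) & ~((1UL << order) - 1)),
                                   1UL << order);

        next_block = gfn_min(next_block, end);

        /* Skip a hole or a non-RAM mapping in a single step. */
        if ( mfn_eq(mfn, INVALID_MFN) || !p2m_is_ram(t) )
        {
            start = next_block;
            continue;
        }

        /* Flush the mapping 4K at a time without re-walking the p2m. */
        for ( i = 0; gfn_x(start) < gfn_x(next_block);
              i++, start = gfn_add(start, 1) )
            flush_page_to_ram(mfn_x(mfn) + i);
    }

This only calls p2m_get_entry() once per mapping, so a 2MB or 1GB block is walked once and then flushed page by page.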
Hi Stefano,

On 05/09/16 22:13, Stefano Stabellini wrote:
> On Thu, 28 Jul 2016, Julien Grall wrote:
>> The function p2m_cache_flush can be re-implemented using the generic
>> function p2m_get_entry by iterating over the range and using the mapping
>> order given by the callee.
>>
>> As the current implementation, no preemption is implemented, although
>> the comment in the current code claimed it. As the function is called by
>> a DOMCTL with a region of 1GB maximum, I think the preemption can be
>> left unimplemented for now.
>>
>> Finally drop the operation CACHEFLUSH in apply_one_level as nobody is
>> using it anymore. Note that the function could have been dropped in one
>> go at the end, however I find easier to drop the operations one by one
>> avoiding a big deletion in the patch that convert the last operation.
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>
>> ---
>> The loop pattern will be very for the reliquish function. It might
>> be possible to extract it in a separate function.
>> ---
>>  xen/arch/arm/p2m.c | 67 +++++++++++++++++++++++++++---------------------------
>>  1 file changed, 34 insertions(+), 33 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 9a9c85c..e7697bb 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -722,7 +722,6 @@ enum p2m_operation {
>>      INSERT,
>>      REMOVE,
>>      RELINQUISH,
>> -    CACHEFLUSH,
>>      MEMACCESS,
>>  };
>>
>> @@ -978,36 +977,6 @@ static int apply_one_level(struct domain *d,
>>           */
>>          return P2M_ONE_PROGRESS;
>>
>> -    case CACHEFLUSH:
>> -        if ( !p2m_valid(orig_pte) )
>> -        {
>> -            *addr = (*addr + level_size) & level_mask;
>> -            return P2M_ONE_PROGRESS_NOP;
>> -        }
>> -
>> -        if ( level < 3 && p2m_table(orig_pte) )
>> -            return P2M_ONE_DESCEND;
>> -
>> -        /*
>> -         * could flush up to the next superpage boundary, but would
>> -         * need to be careful about preemption, so just do one 4K page
>> -         * now and return P2M_ONE_PROGRESS{,_NOP} so that the caller will
>> -         * continue to loop over the rest of the range.
>> -         */
>> -        if ( p2m_is_ram(orig_pte.p2m.type) )
>> -        {
>> -            unsigned long offset = paddr_to_pfn(*addr & ~level_mask);
>> -            flush_page_to_ram(orig_pte.p2m.base + offset);
>> -
>> -            *addr += PAGE_SIZE;
>> -            return P2M_ONE_PROGRESS;
>> -        }
>> -        else
>> -        {
>> -            *addr += PAGE_SIZE;
>> -            return P2M_ONE_PROGRESS_NOP;
>> -        }
>> -
>>      case MEMACCESS:
>>          if ( level < 3 )
>>          {
>> @@ -1555,12 +1524,44 @@ int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
>>  {
>>      struct p2m_domain *p2m = &d->arch.p2m;
>>      gfn_t end = gfn_add(start, nr);
>> +    p2m_type_t t;
>> +    unsigned int order;
>>
>>      start = gfn_max(start, p2m->lowest_mapped_gfn);
>>      end = gfn_min(end, p2m->max_mapped_gfn);
>>
>> -    return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
>> -                             0, p2m_invalid, d->arch.p2m.default_access);
>> +    /* XXX: Should we use write lock here? */
>
> Good question. As the p2m is left unchanged by this function, I think
> that the read lock is sufficient.

It is what I thought. I will replace the todo by a comment explaining why
the read-lock is used here.

>
>
>> +    p2m_read_lock(p2m);
>> +
>> +    for ( ; gfn_x(start) < gfn_x(end); start = gfn_add(start, 1UL << order) )
>> +    {
>> +        mfn_t mfn = p2m_get_entry(p2m, start, &t, NULL, &order);
>> +
>> +        /* Skip hole and non-RAM page */
>> +        if ( mfn_eq(mfn, INVALID_MFN) || !p2m_is_ram(t) )
>> +        {
>> +            /*
>> +             * the order corresponds to the order of the mapping in the
>> +             * page table. so we need to align the gfn before
>> +             * incrementing.
>> +             */
>> +            start = _gfn(gfn_x(start) & ~((1UL << order) - 1));
>> +            continue;
>> +        }
>> +
>> +        /*
>> +         * Could flush up to the next superpage boundary, but we would
>> +         * need to be careful about preemption, so just do one 4K page
>> +         * now.
>
> I think that even without preemption you should implement flushing up to
> the next superpage boundary (but not beyond "end"). You can still do it
> 4K at a time, but only call p2m_get_entry once per "order". Could be a
> decent performance improvement as cacheflush is a performance critical
> hypercall.

Good point. I will give it a look for the next version.

Regards,
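For what it's worth, the comment Julien promises above might end up looking something like this (the wording is purely illustrative and not taken from a later revision of the series):

    /*
     * The operation only reads the page tables and cleans the data cache
     * for the RAM mapped by the range; the p2m itself is not modified, so
     * taking the p2m lock for read is sufficient.
     */
    p2m_read_lock(p2m);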
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 9a9c85c..e7697bb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -722,7 +722,6 @@ enum p2m_operation {
     INSERT,
     REMOVE,
     RELINQUISH,
-    CACHEFLUSH,
     MEMACCESS,
 };
 
@@ -978,36 +977,6 @@ static int apply_one_level(struct domain *d,
          */
         return P2M_ONE_PROGRESS;
 
-    case CACHEFLUSH:
-        if ( !p2m_valid(orig_pte) )
-        {
-            *addr = (*addr + level_size) & level_mask;
-            return P2M_ONE_PROGRESS_NOP;
-        }
-
-        if ( level < 3 && p2m_table(orig_pte) )
-            return P2M_ONE_DESCEND;
-
-        /*
-         * could flush up to the next superpage boundary, but would
-         * need to be careful about preemption, so just do one 4K page
-         * now and return P2M_ONE_PROGRESS{,_NOP} so that the caller will
-         * continue to loop over the rest of the range.
-         */
-        if ( p2m_is_ram(orig_pte.p2m.type) )
-        {
-            unsigned long offset = paddr_to_pfn(*addr & ~level_mask);
-            flush_page_to_ram(orig_pte.p2m.base + offset);
-
-            *addr += PAGE_SIZE;
-            return P2M_ONE_PROGRESS;
-        }
-        else
-        {
-            *addr += PAGE_SIZE;
-            return P2M_ONE_PROGRESS_NOP;
-        }
-
     case MEMACCESS:
         if ( level < 3 )
         {
@@ -1555,12 +1524,44 @@ int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
     gfn_t end = gfn_add(start, nr);
+    p2m_type_t t;
+    unsigned int order;
 
     start = gfn_max(start, p2m->lowest_mapped_gfn);
     end = gfn_min(end, p2m->max_mapped_gfn);
 
-    return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
-                             0, p2m_invalid, d->arch.p2m.default_access);
+    /* XXX: Should we use write lock here? */
+    p2m_read_lock(p2m);
+
+    for ( ; gfn_x(start) < gfn_x(end); start = gfn_add(start, 1UL << order) )
+    {
+        mfn_t mfn = p2m_get_entry(p2m, start, &t, NULL, &order);
+
+        /* Skip hole and non-RAM page */
+        if ( mfn_eq(mfn, INVALID_MFN) || !p2m_is_ram(t) )
+        {
+            /*
+             * the order corresponds to the order of the mapping in the
+             * page table. so we need to align the gfn before
+             * incrementing.
+             */
+            start = _gfn(gfn_x(start) & ~((1UL << order) - 1));
+            continue;
+        }
+
+        /*
+         * Could flush up to the next superpage boundary, but we would
+         * need to be careful about preemption, so just do one 4K page
+         * now.
+         * XXX: Implement preemption.
+         */
+        flush_page_to_ram(mfn_x(mfn));
+        order = 0;
+    }
+
+    p2m_read_unlock(p2m);
+
+    return 0;
 }
 
 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
The function p2m_cache_flush can be re-implemented using the generic
function p2m_get_entry by iterating over the range and using the mapping
order given by the callee.

As in the current implementation, no preemption is implemented, although
the comment in the current code claimed it. As the function is called by
a DOMCTL with a region of 1GB maximum, I think the preemption can be
left unimplemented for now.

Finally drop the operation CACHEFLUSH in apply_one_level as nobody is
using it anymore. Note that the function could have been dropped in one
go at the end, however I find it easier to drop the operations one by one,
avoiding a big deletion in the patch that converts the last operation.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
The loop pattern will be very similar for the relinquish function. It might
be possible to extract it into a separate function.
---
 xen/arch/arm/p2m.c | 67 +++++++++++++++++++++++++++---------------------------
 1 file changed, 34 insertions(+), 33 deletions(-)
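For context, p2m_cache_flush() is reached from the Arm XEN_DOMCTL_cacheflush handler. A simplified sketch of that call site is below, assuming the usual arch_do_domctl() parameters d and domctl; the error handling shown, and where the 1GB cap mentioned above is enforced, are assumptions rather than something shown in this patch:

    case XEN_DOMCTL_cacheflush:
    {
        gfn_t s = _gfn(domctl->u.cacheflush.start_pfn);
        gfn_t e = gfn_add(s, domctl->u.cacheflush.nr_pfns);

        /* Reject a wrapping range; the size cap is assumed to be checked elsewhere. */
        if ( gfn_x(e) < gfn_x(s) )
            return -EINVAL;

        return p2m_cache_flush(d, s, domctl->u.cacheflush.nr_pfns);
    }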