Message ID | 20160704114605.10086-7-proskurin@sec.in.tum.de (mailing list archive)
---|---
State | New, archived
ARM allows the use of concatenated root (first-level) page tables: there are P2M_ROOT_PAGES consecutive pages that are used for the root-level page table. We need to prevent freeing one of these concatenated pages during the process of flushing in p2m_flush_table (simply because new pages might be re-inserted into the page table at a later point in time).

On 07/04/2016 01:45 PM, Sergej Proskurin wrote:
> The current implementation differentiates between flushing and
> destroying altp2m views. This commit adds the functions
> p2m_flush_altp2m, and p2m_flush_table, which allow to flush all or
> individual altp2m views without destroying the entire table. In this
> way, altp2m views can be reused at a later point in time.
>
> In addition, the implementation clears all altp2m entries during the
> process of flushing. The same applies to hostp2m entries, when it is
> destroyed. In this way, further domain and p2m allocations will not
> unintentionally reuse old p2m mappings.
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/p2m.c        | 67 +++++++++++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/p2m.h | 15 ++++++++---
>  2 files changed, 78 insertions(+), 4 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 4a745fd..ae789e6 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -2110,6 +2110,73 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
>      return rc;
>  }
>
> +/* Reset this p2m table to be empty */
> +static void p2m_flush_table(struct p2m_domain *p2m)
> +{
> +    struct page_info *top, *pg;
> +    mfn_t mfn;
> +    unsigned int i;
> +
> +    /* Check whether the p2m table has already been flushed before. */
> +    if ( p2m->root == NULL)
> +        return;
> +
> +    spin_lock(&p2m->lock);
> +
> +    /*
> +     * "Host" p2m tables can have shared entries &c that need a bit more care
> +     * when discarding them
> +     */
> +    ASSERT(!p2m_is_hostp2m(p2m));
> +
> +    /* Zap the top level of the trie */
> +    top = p2m->root;
> +
> +    /* Clear all concatenated first level pages */
> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
> +    {
> +        mfn = _mfn(page_to_mfn(top + i));
> +        clear_domain_page(mfn);
> +    }
> +
> +    /* Free the rest of the trie pages back to the paging pool */
> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
> +        if ( pg != top )
> +        {
> +            /*
> +             * Before freeing the individual pages, we clear them to prevent
> +             * reusing old table entries in future p2m allocations.
> +             */
> +            mfn = _mfn(page_to_mfn(pg));
> +            clear_domain_page(mfn);
> +            free_domheap_page(pg);
> +        }

At this point, we prevent only the first root level page from being freed. In case there are multiple consecutive first level pages, one of them will be freed in the upper loop (and potentially crash the guest if the table is reused at a later point in time). However, testing for every concatenated page in the if clause of the while loop would further decrease the flushing performance. Thus, my question is whether there is a good way to solve this issue?

> +
> +    page_list_add(top, &p2m->pages);
> +
> +    /* Invalidate VTTBR */
> +    p2m->vttbr.vttbr = 0;
> +    p2m->vttbr.vttbr_baddr = INVALID_MFN;
> +
> +    spin_unlock(&p2m->lock);
> +}
> +
> +void p2m_flush_altp2m(struct domain *d)
> +{
> +    unsigned int i;
> +
> +    altp2m_lock(d);
> +
> +    for ( i = 0; i < MAX_ALTP2M; i++ )
> +    {
> +        p2m_flush_table(d->arch.altp2m_p2m[i]);
> +        flush_tlb();
> +        d->arch.altp2m_vttbr[i] = INVALID_MFN;
> +    }
> +
> +    altp2m_unlock(d);
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 8ee78e0..51d784f 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -132,10 +132,7 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
>  struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
>
>  /* Flush all the alternate p2m's for a domain */
> -static inline void p2m_flush_altp2m(struct domain *d)
> -{
> -    /* Not supported on ARM. */
> -}
> +void p2m_flush_altp2m(struct domain *d);
>
>  /* Make a specific alternate p2m valid */
>  int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
> @@ -289,6 +286,16 @@ static inline int get_page_and_type(struct page_info *page,
>  /* get host p2m table */
>  #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
>
> +static inline bool_t p2m_is_hostp2m(const struct p2m_domain *p2m)
> +{
> +    return p2m->p2m_class == p2m_host;
> +}
> +
> +static inline bool_t p2m_is_altp2m(const struct p2m_domain *p2m)
> +{
> +    return p2m->p2m_class == p2m_alternate;
> +}
> +
>  /* vm_event and mem_access are supported on any ARM guest */
>  static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
>  {

Cheers,
Sergej
Hello Sergej,

On 04/07/16 13:12, Sergej Proskurin wrote:
>> +/* Reset this p2m table to be empty */
>> +static void p2m_flush_table(struct p2m_domain *p2m)
>> +{
>> +    struct page_info *top, *pg;
>> +    mfn_t mfn;
>> +    unsigned int i;
>> +
>> +    /* Check whether the p2m table has already been flushed before. */
>> +    if ( p2m->root == NULL)
>> +        return;
>> +
>> +    spin_lock(&p2m->lock);
>> +
>> +    /*
>> +     * "Host" p2m tables can have shared entries &c that need a bit more care
>> +     * when discarding them
>> +     */
>> +    ASSERT(!p2m_is_hostp2m(p2m));
>> +
>> +    /* Zap the top level of the trie */
>> +    top = p2m->root;
>> +
>> +    /* Clear all concatenated first level pages */
>> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
>> +    {
>> +        mfn = _mfn(page_to_mfn(top + i));
>> +        clear_domain_page(mfn);
>> +    }
>> +
>> +    /* Free the rest of the trie pages back to the paging pool */
>> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
>> +        if ( pg != top )
>> +        {
>> +            /*
>> +             * Before freeing the individual pages, we clear them to prevent
>> +             * reusing old table entries in future p2m allocations.
>> +             */
>> +            mfn = _mfn(page_to_mfn(pg));
>> +            clear_domain_page(mfn);
>> +            free_domheap_page(pg);
>> +        }
>
> At this point, we prevent only the first root level page from being
> freed. In case there are multiple consecutive first level pages, one of
> them will be freed in the upper loop (and potentially crash the guest if
> the table is reused at a later point in time). However, testing for
> every concatenated page in the if clause of the while loop would further
> decrease the flushing performance. Thus, my question is, whether there
> is a good way to solve this issue?

The root pages are not part of p2m->pages, so there is no issue.

Regards,
Hello Sergej,

On 04/07/16 12:45, Sergej Proskurin wrote:
> The current implementation differentiates between flushing and
> destroying altp2m views. This commit adds the functions
> p2m_flush_altp2m, and p2m_flush_table, which allow to flush all or
> individual altp2m views without destroying the entire table. In this
> way, altp2m views can be reused at a later point in time.
>
> In addition, the implementation clears all altp2m entries during the
> process of flushing. The same applies to hostp2m entries, when it is
> destroyed. In this way, further domain and p2m allocations will not
> unintentionally reuse old p2m mappings.
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/p2m.c        | 67 +++++++++++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/p2m.h | 15 ++++++++---
>  2 files changed, 78 insertions(+), 4 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 4a745fd..ae789e6 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -2110,6 +2110,73 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
>      return rc;
>  }
>
> +/* Reset this p2m table to be empty */
> +static void p2m_flush_table(struct p2m_domain *p2m)
> +{
> +    struct page_info *top, *pg;
> +    mfn_t mfn;
> +    unsigned int i;
> +
> +    /* Check whether the p2m table has already been flushed before. */
> +    if ( p2m->root == NULL)

This check looks invalid. p2m->root is never reset to NULL by p2m_flush_table, so you will always flush.

> +        return;
> +
> +    spin_lock(&p2m->lock);
> +
> +    /*
> +     * "Host" p2m tables can have shared entries &c that need a bit more care
> +     * when discarding them

I don't understand this comment. Can you explain it?

> +     */
> +    ASSERT(!p2m_is_hostp2m(p2m));
> +
> +    /* Zap the top level of the trie */
> +    top = p2m->root;
> +
> +    /* Clear all concatenated first level pages */
> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
> +    {
> +        mfn = _mfn(page_to_mfn(top + i));
> +        clear_domain_page(mfn);
> +    }
> +
> +    /* Free the rest of the trie pages back to the paging pool */
> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
> +        if ( pg != top )
> +        {
> +            /*
> +             * Before freeing the individual pages, we clear them to prevent
> +             * reusing old table entries in future p2m allocations.
> +             */
> +            mfn = _mfn(page_to_mfn(pg));
> +            clear_domain_page(mfn);
> +            free_domheap_page(pg);
> +        }
> +
> +    page_list_add(top, &p2m->pages);

This code is very similar to p2m_free_one. Can we share some code?

> +
> +    /* Invalidate VTTBR */
> +    p2m->vttbr.vttbr = 0;
> +    p2m->vttbr.vttbr_baddr = INVALID_MFN;
> +
> +    spin_unlock(&p2m->lock);
> +}

Regards,
On 04/07/16 12:45, Sergej Proskurin wrote:
> +void p2m_flush_altp2m(struct domain *d)
> +{
> +    unsigned int i;
> +
> +    altp2m_lock(d);
> +
> +    for ( i = 0; i < MAX_ALTP2M; i++ )
> +    {
> +        p2m_flush_table(d->arch.altp2m_p2m[i]);
> +        flush_tlb();

I forgot to comment on this line.

Can you explain this call? flush_tlb is flushing TLBs for the current VMID only. However, d may not be equal to current when calling from HVM op.

> +        d->arch.altp2m_vttbr[i] = INVALID_MFN;
> +    }

Regards,
Hi Julien,

On 07/04/2016 05:42 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 04/07/16 13:12, Sergej Proskurin wrote:
>>> +/* Reset this p2m table to be empty */
>>> +static void p2m_flush_table(struct p2m_domain *p2m)
>>> +{
>>> +    struct page_info *top, *pg;
>>> +    mfn_t mfn;
>>> +    unsigned int i;
>>> +
>>> +    /* Check whether the p2m table has already been flushed before. */
>>> +    if ( p2m->root == NULL)
>>> +        return;
>>> +
>>> +    spin_lock(&p2m->lock);
>>> +
>>> +    /*
>>> +     * "Host" p2m tables can have shared entries &c that need a bit more care
>>> +     * when discarding them
>>> +     */
>>> +    ASSERT(!p2m_is_hostp2m(p2m));
>>> +
>>> +    /* Zap the top level of the trie */
>>> +    top = p2m->root;
>>> +
>>> +    /* Clear all concatenated first level pages */
>>> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
>>> +    {
>>> +        mfn = _mfn(page_to_mfn(top + i));
>>> +        clear_domain_page(mfn);
>>> +    }
>>> +
>>> +    /* Free the rest of the trie pages back to the paging pool */
>>> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
>>> +        if ( pg != top )
>>> +        {
>>> +            /*
>>> +             * Before freeing the individual pages, we clear them to prevent
>>> +             * reusing old table entries in future p2m allocations.
>>> +             */
>>> +            mfn = _mfn(page_to_mfn(pg));
>>> +            clear_domain_page(mfn);
>>> +            free_domheap_page(pg);
>>> +        }
>>
>> At this point, we prevent only the first root level page from being
>> freed. In case there are multiple consecutive first level pages, one of
>> them will be freed in the upper loop (and potentially crash the guest if
>> the table is reused at a later point in time). However, testing for
>> every concatenated page in the if clause of the while loop would further
>> decrease the flushing performance. Thus, my question is, whether there
>> is a good way to solve this issue?
>
> The root pages are not part of p2m->pages, so there is no issue.

Thanks for clearing that up. We have already discussed this in patch #04.

> Regards,

Thank you.

Best regards,
Sergej
Hello Julien,

On 07/04/2016 05:55 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> The current implementation differentiates between flushing and
>> destroying altp2m views. This commit adds the functions
>> p2m_flush_altp2m, and p2m_flush_table, which allow to flush all or
>> individual altp2m views without destroying the entire table. In this
>> way, altp2m views can be reused at a later point in time.
>>
>> In addition, the implementation clears all altp2m entries during the
>> process of flushing. The same applies to hostp2m entries, when it is
>> destroyed. In this way, further domain and p2m allocations will not
>> unintentionally reuse old p2m mappings.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/arm/p2m.c        | 67 +++++++++++++++++++++++++++++++++++++++++++++++
>>  xen/include/asm-arm/p2m.h | 15 ++++++++---
>>  2 files changed, 78 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 4a745fd..ae789e6 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -2110,6 +2110,73 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
>>      return rc;
>>  }
>>
>> +/* Reset this p2m table to be empty */
>> +static void p2m_flush_table(struct p2m_domain *p2m)
>> +{
>> +    struct page_info *top, *pg;
>> +    mfn_t mfn;
>> +    unsigned int i;
>> +
>> +    /* Check whether the p2m table has already been flushed before. */
>> +    if ( p2m->root == NULL)
>
> This check looks invalid. p2m->root is never reset to NULL by
> p2m_flush_table, so you will always flush.

All right. Here, I just wanted to be sure that we don't flush an invalid page. I will remove this check.

>> +        return;
>> +
>> +    spin_lock(&p2m->lock);
>> +
>> +    /*
>> +     * "Host" p2m tables can have shared entries &c that need a bit more care
>> +     * when discarding them
>
> I don't understand this comment. Can you explain it?

This is a lost comment. However, there are slight differences in freeing a hostp2m as opposed to an altp2m (see the additional freeing of VMIDs in hostp2ms). Since we agreed to use unique VMIDs for altp2m views as well, I will try to merge the different flush/free functions. Thank you.

>> +     */
>> +    ASSERT(!p2m_is_hostp2m(p2m));
>> +
>> +    /* Zap the top level of the trie */
>> +    top = p2m->root;
>> +
>> +    /* Clear all concatenated first level pages */
>> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
>> +    {
>> +        mfn = _mfn(page_to_mfn(top + i));
>> +        clear_domain_page(mfn);
>> +    }
>> +
>> +    /* Free the rest of the trie pages back to the paging pool */
>> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
>> +        if ( pg != top )
>> +        {
>> +            /*
>> +             * Before freeing the individual pages, we clear them to prevent
>> +             * reusing old table entries in future p2m allocations.
>> +             */
>> +            mfn = _mfn(page_to_mfn(pg));
>> +            clear_domain_page(mfn);
>> +            free_domheap_page(pg);
>> +        }
>> +
>> +    page_list_add(top, &p2m->pages);
>
> This code is very similar to p2m_free_one. Can we share some code?

Yes, I will merge both functions and extract the parts that differ into a separate function. The same applies to p2m_teardown_hostp2m. Thank you.

>> +
>> +    /* Invalidate VTTBR */
>> +    p2m->vttbr.vttbr = 0;
>> +    p2m->vttbr.vttbr_baddr = INVALID_MFN;
>> +
>> +    spin_unlock(&p2m->lock);
>> +}
>
> Regards,

Thank you.

Cheers,
Sergej
Hi Julien,

On 07/04/2016 06:20 PM, Julien Grall wrote:
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> +void p2m_flush_altp2m(struct domain *d)
>> +{
>> +    unsigned int i;
>> +
>> +    altp2m_lock(d);
>> +
>> +    for ( i = 0; i < MAX_ALTP2M; i++ )
>> +    {
>> +        p2m_flush_table(d->arch.altp2m_p2m[i]);
>> +        flush_tlb();
>
> I forgot to comment on this line.
>
> Can you explain this call? flush_tlb is flushing TLBs for the current
> VMID only. However, d may not be equal to current when calling from
> HVM op.

Since we are flushing one of the altp2m views, I wanted to explicitly get rid of potentially stale entries in the TLBs. Until now, VMIDs were associated with a specific domain -- as opposed to a specific view. That is, after flushing one of the altp2m views, the TLBs might be holding invalid entries. Since we agreed to use unique VMIDs per altp2m view, this flush can be left out. Thank you.

>> +        d->arch.altp2m_vttbr[i] = INVALID_MFN;
>> +    }
>
> Regards,

Thank you.

Cheers,
Sergej
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4a745fd..ae789e6 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2110,6 +2110,73 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
     return rc;
 }
 
+/* Reset this p2m table to be empty */
+static void p2m_flush_table(struct p2m_domain *p2m)
+{
+    struct page_info *top, *pg;
+    mfn_t mfn;
+    unsigned int i;
+
+    /* Check whether the p2m table has already been flushed before. */
+    if ( p2m->root == NULL)
+        return;
+
+    spin_lock(&p2m->lock);
+
+    /*
+     * "Host" p2m tables can have shared entries &c that need a bit more care
+     * when discarding them
+     */
+    ASSERT(!p2m_is_hostp2m(p2m));
+
+    /* Zap the top level of the trie */
+    top = p2m->root;
+
+    /* Clear all concatenated first level pages */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+    {
+        mfn = _mfn(page_to_mfn(top + i));
+        clear_domain_page(mfn);
+    }
+
+    /* Free the rest of the trie pages back to the paging pool */
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+        if ( pg != top )
+        {
+            /*
+             * Before freeing the individual pages, we clear them to prevent
+             * reusing old table entries in future p2m allocations.
+             */
+            mfn = _mfn(page_to_mfn(pg));
+            clear_domain_page(mfn);
+            free_domheap_page(pg);
+        }
+
+    page_list_add(top, &p2m->pages);
+
+    /* Invalidate VTTBR */
+    p2m->vttbr.vttbr = 0;
+    p2m->vttbr.vttbr_baddr = INVALID_MFN;
+
+    spin_unlock(&p2m->lock);
+}
+
+void p2m_flush_altp2m(struct domain *d)
+{
+    unsigned int i;
+
+    altp2m_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        p2m_flush_table(d->arch.altp2m_p2m[i]);
+        flush_tlb();
+        d->arch.altp2m_vttbr[i] = INVALID_MFN;
+    }
+
+    altp2m_unlock(d);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 8ee78e0..51d784f 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -132,10 +132,7 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
 struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
 
 /* Flush all the alternate p2m's for a domain */
-static inline void p2m_flush_altp2m(struct domain *d)
-{
-    /* Not supported on ARM. */
-}
+void p2m_flush_altp2m(struct domain *d);
 
 /* Make a specific alternate p2m valid */
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
@@ -289,6 +286,16 @@ static inline int get_page_and_type(struct page_info *page,
 /* get host p2m table */
 #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
 
+static inline bool_t p2m_is_hostp2m(const struct p2m_domain *p2m)
+{
+    return p2m->p2m_class == p2m_host;
+}
+
+static inline bool_t p2m_is_altp2m(const struct p2m_domain *p2m)
+{
+    return p2m->p2m_class == p2m_alternate;
+}
+
 /* vm_event and mem_access are supported on any ARM guest */
 static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
 {
The current implementation differentiates between flushing and
destroying altp2m views. This commit adds the functions
p2m_flush_altp2m, and p2m_flush_table, which allow to flush all or
individual altp2m views without destroying the entire table. In this
way, altp2m views can be reused at a later point in time.

In addition, the implementation clears all altp2m entries during the
process of flushing. The same applies to hostp2m entries, when it is
destroyed. In this way, further domain and p2m allocations will not
unintentionally reuse old p2m mappings.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c        | 67 +++++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/p2m.h | 15 ++++++++---
 2 files changed, 78 insertions(+), 4 deletions(-)