Message ID: 1454499464-7278-1-git-send-email-czuzu@bitdefender.com (mailing list archive)
State: New, archived
On Wed, 2016-02-03 at 13:37 +0200, Corneliu ZUZU wrote:

I just now applied a previous v2 which was already in my queue. Was this
just an accidental resend of v2 or is there some important change and this
is really a v3?

> When __p2m_get_mem_access gets called, the p2m lock is already taken
> by either get_page_from_gva or p2m_get_mem_access.
> Possible code paths:
> 1) -> get_page_from_gva
>        -> p2m_mem_access_check_and_get_page
>            -> __p2m_get_mem_access
> 2) -> p2m_get_mem_access
>        -> __p2m_get_mem_access
>
> In both cases if __p2m_get_mem_access subsequently gets to
> call p2m_lookup (happens if !radix_tree_lookup(...)), a hypervisor
> hang will occur, since p2m_lookup also spin-locks on the p2m lock.
>
> This bug-fix simply replaces the p2m_lookup call from __p2m_get_mem_access
> with a call to __p2m_lookup and also adds an ASSERT to ensure that the
> p2m lock is already taken upon __p2m_get_mem_access entry.
>
> Signed-off-by: Corneliu ZUZU <czuzu@bitdefender.com>
>
> ---
> Changed since v1:
>  * added p2m-lock ASSERT
> ---
>  xen/arch/arm/p2m.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 2190908..e8e6db4 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -468,6 +468,8 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
>  #undef ACCESS
>      };
>
> +    ASSERT(spin_is_locked(&p2m->lock));
> +
>      /* If no setting was ever set, just return rwx. */
>      if ( !p2m->mem_access_enabled )
>      {
> @@ -490,7 +492,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
>       * No setting was found in the Radix tree. Check if the
>       * entry exists in the page-tables.
>       */
> -    paddr_t maddr = p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
> +    paddr_t maddr = __p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
>      if ( INVALID_PADDR == maddr )
>          return -ESRCH;
On 2/3/2016 1:52 PM, Ian Campbell wrote:
> On Wed, 2016-02-03 at 13:37 +0200, Corneliu ZUZU wrote:
>
> I just now applied a previous v2 which was already in my queue. Was this
> just an accidental resend of v2 or is there some important change and this
> is really a v3?
>
> [...]

No, sorry, this is just a duplicate of the 1st v2, I thought the first one
was not sent properly (after waiting a few days and noticing I was no
longer finding it on the web). Ignore this one. And thanks.

Corneliu.
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 2190908..e8e6db4 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -468,6 +468,8 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
 #undef ACCESS
     };

+    ASSERT(spin_is_locked(&p2m->lock));
+
     /* If no setting was ever set, just return rwx. */
     if ( !p2m->mem_access_enabled )
     {
@@ -490,7 +492,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
      * No setting was found in the Radix tree. Check if the
      * entry exists in the page-tables.
      */
-    paddr_t maddr = p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
+    paddr_t maddr = __p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
     if ( INVALID_PADDR == maddr )
         return -ESRCH;
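As an aside for readers unfamiliar with the naming convention this fix depends on: the double-underscore-prefixed function is the "lock already held" worker, while the plain-named function is the wrapper that takes the p2m lock and delegates to it. The patch does not show p2m_lookup's body, so the following is only a minimal standalone sketch of that convention, using a C11-atomics stand-in for the spinlock and invented toy_* names rather than the actual Xen code:

```c
/*
 * Minimal standalone sketch of the locking convention the patch relies
 * on: the plain-named lookup takes the lock and delegates to the
 * underscore-prefixed worker, which requires the lock to already be
 * held.  The spinlock is a C11-atomics stand-in, not Xen's; all toy_*
 * names are invented for this sketch.
 */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct { atomic_bool held; } toy_spinlock_t;

static void toy_spin_lock(toy_spinlock_t *l)
{
    bool expected = false;
    /* Busy-wait until we flip 'held' from false to true. */
    while ( !atomic_compare_exchange_weak(&l->held, &expected, true) )
        expected = false;
}

static void toy_spin_unlock(toy_spinlock_t *l)
{
    atomic_store(&l->held, false);
}

static bool toy_spin_is_locked(toy_spinlock_t *l)
{
    return atomic_load(&l->held);
}

struct toy_p2m {
    toy_spinlock_t lock;
    long entries[8];
};

/* Worker: caller must already hold p2m->lock (mirrors __p2m_lookup). */
static long __toy_lookup(struct toy_p2m *p2m, unsigned int gfn)
{
    /* Same role as the ASSERT(spin_is_locked(...)) added by the patch. */
    assert(toy_spin_is_locked(&p2m->lock));
    return gfn < 8 ? p2m->entries[gfn] : -1;
}

/* Public wrapper: takes the lock itself (mirrors p2m_lookup). */
static long toy_lookup(struct toy_p2m *p2m, unsigned int gfn)
{
    long ret;

    toy_spin_lock(&p2m->lock);
    ret = __toy_lookup(p2m, gfn);
    toy_spin_unlock(&p2m->lock);

    return ret;
}

/*
 * A caller that already holds the lock (as __p2m_get_mem_access does)
 * must use the worker; calling toy_lookup() here instead would spin
 * forever on the non-recursive lock -- the bug the patch fixes.
 */
static long toy_get_access(struct toy_p2m *p2m, unsigned int gfn)
{
    long ret;

    toy_spin_lock(&p2m->lock);
    ret = __toy_lookup(p2m, gfn);
    toy_spin_unlock(&p2m->lock);

    return ret;
}

int main(void)
{
    struct toy_p2m p2m = { .lock = { false }, .entries = { 10, 20, 30 } };

    printf("%ld %ld\n", toy_lookup(&p2m, 1), toy_get_access(&p2m, 2));
    return 0;
}
```

The ASSERT added by the patch plays the same role as the assert in the worker above: it turns the implicit "caller holds the lock" precondition into a check that fires immediately in debug builds instead of a silent deadlock later.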
When __p2m_get_mem_access gets called, the p2m lock is already taken
by either get_page_from_gva or p2m_get_mem_access.

Possible code paths:
1) -> get_page_from_gva
       -> p2m_mem_access_check_and_get_page
           -> __p2m_get_mem_access
2) -> p2m_get_mem_access
       -> __p2m_get_mem_access

In both cases if __p2m_get_mem_access subsequently gets to call
p2m_lookup (happens if !radix_tree_lookup(...)), a hypervisor hang will
occur, since p2m_lookup also spin-locks on the p2m lock.

This bug-fix simply replaces the p2m_lookup call from __p2m_get_mem_access
with a call to __p2m_lookup and also adds an ASSERT to ensure that the
p2m lock is already taken upon __p2m_get_mem_access entry.

Signed-off-by: Corneliu ZUZU <czuzu@bitdefender.com>

---
Changed since v1:
 * added p2m-lock ASSERT
---
 xen/arch/arm/p2m.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
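The hang described above is the usual self-deadlock of a non-recursive lock: code entered with the lock held calls a helper that tries to acquire the same lock again. The sketch below only illustrates that shape and is not Xen code; it substitutes an error-checking pthread mutex for the p2m spinlock so the second acquisition is reported as an error instead of spinning forever, and every name in it is hypothetical.

```c
/*
 * Standalone illustration of the bug's shape: a routine that already
 * holds a non-recursive lock calls a helper that takes the same lock
 * again.  Xen's p2m lock is a plain spinlock, so the second acquisition
 * would spin forever; an error-checking pthread mutex is used here as a
 * stand-in so the self-deadlock is reported instead.  All names are
 * invented for this sketch.
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t table_lock;

/* Models p2m_lookup(): a public helper that takes the lock itself. */
static int locking_lookup(void)
{
    int rc = pthread_mutex_lock(&table_lock);

    if ( rc != 0 )
    {
        /* With a real spinlock this would be a silent hang, not an error. */
        fprintf(stderr, "second acquisition failed: %s\n", strerror(rc));
        return -1;
    }

    pthread_mutex_unlock(&table_lock);
    return 0;
}

/* Models __p2m_get_mem_access(): runs with the lock already held. */
static int get_access_with_lock_held(void)
{
    int ret;

    pthread_mutex_lock(&table_lock);   /* lock taken by the outer path */
    ret = locking_lookup();            /* BUG: re-takes table_lock */
    pthread_mutex_unlock(&table_lock);

    return ret;
}

int main(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* ERRORCHECK makes the re-lock fail with EDEADLK instead of hanging. */
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&table_lock, &attr);

    return get_access_with_lock_held() ? 1 : 0;
}
```

Built with `cc -pthread`, the run reports a resource-deadlock error at the second lock attempt; a plain spinlock has no such diagnostic, which is why the already-locked path in __p2m_get_mem_access has to call __p2m_lookup directly.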