
arm: p2m.c bug-fix: hypervisor hang on __p2m_get_mem_access

Message ID 1453808778-4292-1-git-send-email-czuzu@bitdefender.com (mailing list archive)
State New, archived

Commit Message

Corneliu ZUZU Jan. 26, 2016, 11:46 a.m. UTC
When __p2m_get_mem_access gets called, the p2m lock is already taken
by either get_page_from_gva or p2m_get_mem_access.

Possible code paths:
1)	-> get_page_from_gva
		-> p2m_mem_access_check_and_get_page
			-> __p2m_get_mem_access
2)	-> p2m_get_mem_access
		-> __p2m_get_mem_access

In both cases, if __p2m_get_mem_access then goes on to call p2m_lookup
(which happens when !radix_tree_lookup(...)), the hypervisor hangs,
since p2m_lookup also spin-locks on the (non-recursive) p2m lock.

This bug-fix simply replaces the p2m_lookup call in __p2m_get_mem_access
with a call to the unlocked variant, __p2m_lookup.

Signed-off-by: Corneliu ZUZU <czuzu@bitdefender.com>
---
 xen/arch/arm/p2m.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
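
For context, the arm p2m_lookup in this era of the tree is essentially a
locking wrapper around __p2m_lookup, which is why calling it with p2m->lock
already held spins forever. The following is a sketch of that wrapper, not a
verbatim copy:

/* Sketch of the Xen 4.6-era arm p2m_lookup wrapper (not verbatim). */
paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
{
    paddr_t ret;
    struct p2m_domain *p2m = &d->arch.p2m;

    spin_lock(&p2m->lock);            /* nested acquisition here => hang   */
    ret = __p2m_lookup(d, paddr, t);  /* unlocked worker used by the patch */
    spin_unlock(&p2m->lock);

    return ret;
}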

Comments

Ian Campbell Jan. 26, 2016, 4:14 p.m. UTC | #1
On Tue, 2016-01-26 at 13:46 +0200, Corneliu ZUZU wrote:
> When __p2m_get_mem_access gets called, the p2m lock is already taken
> by either get_page_from_gva or p2m_get_mem_access.
> 
> Possible code paths:
> 1)	-> get_page_from_gva
> 		-> p2m_mem_access_check_and_get_page
> 			-> __p2m_get_mem_access
> 2)	-> p2m_get_mem_access
> 		-> __p2m_get_mem_access

What about:
	-> p2m_mem_access_check
		-> p2m_get_mem_access
I can't see the lock being taken in that path.

As well as fixing that, I think it would be wise to add an assert that the
lock is held before the call to p2m_lookup, which you are changing into the
unlocked variant.

> 
> In both cases if __p2m_get_mem_access subsequently gets to
> call p2m_lookup (happens if !radix_tree_lookup(...)), a hypervisor
> hang will occur, since p2m_lookup also spin-locks on the p2m lock.
> 
> This bug-fix simply replaces the p2m_lookup call from
> __p2m_get_mem_access
> with a call to __p2m_lookup.
> 
> Signed-off-by: Corneliu ZUZU <czuzu@bitdefender.com>
> ---
>  xen/arch/arm/p2m.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 2190908..a9157e5 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -490,7 +490,7 @@ static int __p2m_get_mem_access(struct domain *d,
> gfn_t gfn,
>           * No setting was found in the Radix tree. Check if the
>           * entry exists in the page-tables.
>           */
> -        paddr_t maddr = p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
> +        paddr_t maddr = __p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
>          if ( INVALID_PADDR == maddr )
>              return -ESRCH;
>
Ian Campbell Jan. 27, 2016, 11:42 a.m. UTC | #2
(we went off-list by mistake without noticing; resending my last reply,
which I think has sufficient context/quoting to make sense)

On Wed, 2016-01-27 at 11:51 +0200, CORNELIU ZUZU wrote:
> On 1/26/2016 6:14 PM, Ian Campbell wrote:
> > On Tue, 2016-01-26 at 13:46 +0200, Corneliu ZUZU wrote:
> > > When __p2m_get_mem_access gets called, the p2m lock is already taken
> > > by either get_page_from_gva or p2m_get_mem_access.
> > > 
> > > Possible code paths:
> > > 1)	-> get_page_from_gva
> > > 		-> p2m_mem_access_check_and_get_page
> > > 			-> __p2m_get_mem_access
> > > 2)	-> p2m_get_mem_access
> > > 		-> __p2m_get_mem_access
> > What about:
> > 	-> p2m_mem_access_check
> > 		-> p2m_get_mem_access
> > I can't see the lock being taken in that path.
> 
> Yes, that's code path #2 I previously mentioned: the lock is first
> taken in p2m_get_mem_access (i.e. the line 'spin_lock(&p2m->lock)')
> before __p2m_get_mem_access is called.

Oh yes, not sure how I got myself confused there.
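
(For reference, path #2's lock acquisition looks roughly like this -- a
paraphrased sketch; the exact signature and field names in the tree may
differ slightly:)

int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
{
    int ret;
    struct p2m_domain *p2m = &d->arch.p2m;

    spin_lock(&p2m->lock);                       /* lock taken here ...    */
    ret = __p2m_get_mem_access(d, gfn, access);  /* ... before entering the
                                                    unlocked worker        */
    spin_unlock(&p2m->lock);

    return ret;
}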

> I'm looking at the master branch of git://xenbits.xenproject.org/xen.git
> (although we've hit this bug on the 4.6 stable repo).
> 
> > 
> > As well as fixing that, I think it would be wise to add an assert that
> > the lock is held before the call to p2m_lookup, which you are changing
> > into the unlocked variant.
> 
> Good idea, didn't know one could do that.
> Concretely, I suppose this would be done by
> 
> ASSERT(spin_is_locked(&p2m->lock));
> 
> ?

Right.

> 
> One question though. It seems that every time __p2m_get_mem_access is
> called, the lock is taken *before the call*, not within the
> function itself.

Right. It's a common (but by no means universal) convention within Xen that
a __foo function is a version of foo which expects the relevant lock(s) to
already be held.
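
(As a generic illustration of that convention, with made-up names foo/__foo
and a made-up struct bar rather than anything from the tree:)

struct bar {
    spinlock_t lock;
    int state;
};

/* __foo: unlocked worker; the caller must already hold b->lock. */
static int __foo(struct bar *b)
{
    ASSERT(spin_is_locked(&b->lock));
    return b->state;
}

/* foo: public variant; takes and releases the lock around __foo. */
int foo(struct bar *b)
{
    int ret;

    spin_lock(&b->lock);
    ret = __foo(b);
    spin_unlock(&b->lock);

    return ret;
}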

> Since __p2m_get_mem_access also accesses other fields of the p2m pointer
> (e.g. 'p2m->mem_access_enabled'), including performing a radix tree lookup
> before calling p2m_lookup, doesn't this mean that the p2m lock is also
> needed for these accesses (especially for the radix tree lookup)? And if
> so, shouldn't I position the assertion line at the *beginning* of
> __p2m_get_mem_access, e.g. just before 'if ( !p2m->mem_access_enabled )'?

Yes, since __p2m_get_mem_access requires the lock to be held on entry, it
makes sense to put the assert as near the top of the function as possible.

This is true whether or not the initial bit of the function does anything
which requires the lock.

Ian.
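
With both changes folded in -- the unlocked lookup from this patch plus the
assert discussed above -- the relevant part of __p2m_get_mem_access would end
up looking roughly as below. This is a condensed sketch, not the full
function body; 'p2m->mem_access_settings' is assumed to be the radix tree
being consulted, following the surrounding discussion.

static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
                                xenmem_access_t *access)
{
    struct p2m_domain *p2m = &d->arch.p2m;
    void *radix_entry;

    /*
     * Both callers (p2m_get_mem_access and
     * p2m_mem_access_check_and_get_page) take p2m->lock before calling in,
     * so assert that right at the top: the radix tree lookup below relies
     * on the lock too, not just the page-table walk.
     */
    ASSERT(spin_is_locked(&p2m->lock));

    /* ... mem_access_enabled / default-permission handling elided ... */

    radix_entry = radix_tree_lookup(&p2m->mem_access_settings, gfn_x(gfn));
    if ( !radix_entry )
    {
        /*
         * No setting was found in the Radix tree.  Check if the entry
         * exists in the page-tables, using the unlocked lookup: the locked
         * p2m_lookup would spin on the lock we already hold.
         */
        paddr_t maddr = __p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);

        if ( INVALID_PADDR == maddr )
            return -ESRCH;

        /* ... entry exists but has no explicit setting: report default ... */
    }

    /* ... otherwise translate the stored setting into *access ... */

    return 0;
}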

Patch

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 2190908..a9157e5 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -490,7 +490,7 @@  static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
          * No setting was found in the Radix tree. Check if the
          * entry exists in the page-tables.
          */
-        paddr_t maddr = p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
+        paddr_t maddr = __p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
         if ( INVALID_PADDR == maddr )
             return -ESRCH;