[v3] altp2m: Allow the hostp2m to be shared

Message ID 1461951776-25956-1-git-send-email-tamas@tklengyel.com (mailing list archive)
State New, archived

Commit Message

Tamas K Lengyel April 29, 2016, 5:42 p.m. UTC
Don't propagate altp2m changes from ept_set_entry for memshare, as memshare
already has the lock. We propagate the altp2m changes once memshare
successfully finishes. Allow the hostp2m entries to be of type
p2m_ram_shared when applying mem_access. Also, do not trigger PoD for the
hostp2m when setting altp2m mem_access, in line with the non-altp2m
mem_access path.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>

v3: Add comment to the unshare routine pointing out where altp2m gets updated
v2: Cosmetic fixes and addressing PoD in the commit message
---
 xen/arch/x86/mm/mem_sharing.c | 11 ++++++++++-
 xen/arch/x86/mm/p2m-ept.c     |  2 +-
 xen/arch/x86/mm/p2m.c         |  5 ++---
 3 files changed, 13 insertions(+), 5 deletions(-)

Comments

George Dunlap May 25, 2016, 11:27 a.m. UTC | #1
On Fri, Apr 29, 2016 at 6:42 PM, Tamas K Lengyel <tamas@tklengyel.com> wrote:
> Don't propagate altp2m changes from ept_set_entry for memshare as memshare
> already has the lock. We call altp2m propagate changes once memshare
> successfully finishes. Allow the hostp2m entries to be of type
> p2m_ram_shared when applying mem_access. Also, do not trigger PoD for hostp2m
> when setting altp2m mem_access to be in-line with non-altp2m mem_access path.

Hey Tamas,

Sorry for the long delay in getting back to you on this.

So the main issue here (correct me if I'm wrong) is the locking
discipline: namely, mem_sharing_share_pages():
- Grabs the hostp2m lock
- Grabs the appropriate domain memsharing locks
- Calls set_shared_p2m_entry(), which ends up calling ept_set_entry(),
which (when altp2m is active) grabs the altp2mlist and altp2m locks.

This causes an ASSERT(), since the altp2mlist lock is ahead of the
memsharing locks in the list.
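
Roughly, as a call chain (simplified and from memory, so double-check
the details before relying on it):

  mem_sharing_share_pages(sd, sgfn, ..., cd, cgfn, ...)
      get_two_gfns(...)            /* takes the (host)p2m locks */
      mem_sharing_page_lock(...)   /* takes the per-page sharing locks */
      set_shared_p2m_entry(...)
          -> p2m_set_entry() -> ept_set_entry()
              -> p2m_altp2m_propagate_change()
                 /* wants the altp2mlist lock => ordering ASSERT() */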

But having taken a closer look at the code, I'm not sure the change is
quite correct.  Please correct me if I've misread something:

mem_sharing_share_pages() is passed two <domain,gfn> pairs -- the
<sd,sgfn> (which I assume stands for "shared gfn") and <cd,cgfn>
(which I assume stands for "copy"); and it
1) Looks up smfn and cmfn, which back sgfn and cgfn respectively
2) Looks up cmfn, which backs cgfn, then replaces all gfn entries which
point to cmfn with smfn (updating accounting as appropriate)

But this change will only call p2m_altp2m_propagate_change() for the
original cgfn -- any other gfns which are backed by cmfn will not have
the corresponding altp2m entries propagated properly.

This sort of mistake is easy to make, which is why I think we should
try to always update the altp2ms in ept_set_entry() if we can, to
minimize the opportunity for making this sort of mistake.

Is there ever a reason to grab the altp2m lock and *then* grab the
sharing lock?  Could we just move the sharing lock up between the p2m
lock and the altp2mlist lock instead?

 -George
Tamas K Lengyel May 25, 2016, 3:31 p.m. UTC | #2
On May 25, 2016 05:27, "George Dunlap" <george.dunlap@citrix.com> wrote:
>
> On Fri, Apr 29, 2016 at 6:42 PM, Tamas K Lengyel <tamas@tklengyel.com> wrote:
> > Don't propagate altp2m changes from ept_set_entry for memshare as memshare
> > already has the lock. We call altp2m propagate changes once memshare
> > successfully finishes. Allow the hostp2m entries to be of type
> > p2m_ram_shared when applying mem_access. Also, do not trigger PoD for hostp2m
> > when setting altp2m mem_access to be in-line with non-altp2m mem_access path.
>
> Hey Tamas,
>
> Sorry for the long delay in getting back to you on this.

No problem, thanks for taking a closer look!

>
> So the main issue here (correct me if I'm wrong) is the locking
> discipline: namely, mem_sharing_share_pages():
> - Grabs the hostp2m lock
> - Grabs the appropriate domain memsharing locks
> - Calls set_shared_p2m_entry(), which ends up calling ept_set_entry(),
> which (when altp2m is active) grabs the altp2mlist and altp2m locks.
>
> This causes an ASSERT(), since the altp2mlist lock is ahead of the
> memsharing locks in the list.
>
> But having taken a closer look at the code, I'm not sure the change is
> quite correct.  Please correct me if I've misread something:
>
> mem_sharing_share_pages() is passed two <domain,gfn> pairs -- the
> <sd,sgfn> (which I assume stands for "shared gfn") and <cd,cgfn>
> (which I assume stands for "copy"); and it

Here s/c stands for source/client.

> 1) Looks up smfn and cmfn, which back sgfn and cgfn respectively
> 2) Looks up cmfn, which backs cgfn, then replaces all gfn entries which
> point to cmfn with smfn (updating accounting as appropriate)

Hm, I might have missed that. Where does it do the lookup for all other
cgfns backed by this cmfn?

> But this change will only call p2m_altp2m_propagate_change() for the
> original cgfn -- any other gfns which are backed by cmfn will not have
> the corresponding altp2m entries propagated properly.

Right, if there is some other place where it does sharing in the back we
would have to propagate that change.

> This sort of mistake is easy to make, which is why I think we should
> try to always update the altp2ms in ept_set_entry() if we can, to
> minimize the opportunity for making this sort of mistake.
>
> Is there ever a reason to grab the altp2m lock and *then* grab the
> sharing lock?  Could we just move the sharing lock up between the p2m
> lock and the altp2mlist lock instead?
>

I can't think of a scenario where we would get to sharing from altp2m with
altp2m locking first. Not sure what you mean by moving the sharing lock up
though. The problem is that sharing already has the lock by the time altp2m
tries to lock, so we could pass that info down to make altp2m aware it
needs no locking. It would require extending a bunch of functions though
with an extra input that is barely ever used..

Thanks,
Tamas
George Dunlap May 25, 2016, 4:08 p.m. UTC | #3
On Wed, May 25, 2016 at 4:31 PM, Tamas K Lengyel <tamas@tklengyel.com> wrote:
>
> On May 25, 2016 05:27, "George Dunlap" <george.dunlap@citrix.com> wrote:
>>
>> On Fri, Apr 29, 2016 at 6:42 PM, Tamas K Lengyel <tamas@tklengyel.com>
>> wrote:
>> > Don't propagate altp2m changes from ept_set_entry for memshare as memshare
>> > already has the lock. We call altp2m propagate changes once memshare
>> > successfully finishes. Allow the hostp2m entries to be of type
>> > p2m_ram_shared when applying mem_access. Also, do not trigger PoD for hostp2m
>> > when setting altp2m mem_access to be in-line with non-altp2m mem_access path.
>>
>> Hey Tamas,
>>
>> Sorry for the long delay in getting back to you on this.
>
> No problem, thanks for taking a closer look!
>
>>
>> So the main issue here (correct me if I'm wrong) is the locking
>> discipline: namely, mem_sharing_share_pages():
>> - Grabs the hostp2m lock
>> - Grabs the appropriate domain memsharing locks
>> - Calls set_shared_p2m_entry(), which ends up calling ept_set_entry(),
>> which (when altp2m is active) grabs the altp2mlist and altp2m locks.
>>
>> This causes an ASSERT(), since the altp2mlist lock is ahead of the
>> memsharing locks in the list.
>>
>> But having taken a closer look at the code, I'm not sure the change is
>> quite correct.  Please correct me if I've misread something:
>>
>> mem_sharing_share_pages() is passed two <domain,gfn> pairs -- the
>> <sd,sgfn> (which I assume stands for "shared gfn") and <cd,cgfn>
>> (which I assume stands for "copy"); and it
>
> Here s/c stands for source/client.
>
>> 1) Looks up smfn and cmfn, which back sgfn and cgfn respectively
>> 2) Looks up cmfn, which backs cgfn, then replaces all gfn entries which
>> point to cmfn with smfn (updating accounting as appropriate)
>
> Hm, I might have missed that. Where does it do the lookup for all other
> cgfns backed by this cmfn?

I was looking at the loop in the middle of the function:

while ( (gfn = rmap_iterate(cpage, &ri)) != NULL) {
 ...
}

I haven't chased it down, but it looks like this walks the reverse map
of all gfns which map cpage; and for each such gfn it finds, it:
* removes the cpage -> gfn rmap
* Adds an spage -> gfn map
* Reduces the type count of cpage
* Sets the p2m entry for that gfn to the smfn (rather than cmfn).

Obviously the common case is that the number of mappings is exactly 1;
but we need to either ensure that this is always true, or we need to
handle the case where it's not true. :-)
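
If the propagation stays out of ept_set_entry(), I'd expect it to have
to happen per-entry inside that loop instead -- something like this
untested sketch (names guessed from the surrounding code, error
handling and refcounting details elided):

  gfn_info_t *gfn;
  struct domain *d;

  while ( (gfn = rmap_iterate(cpage, &ri)) != NULL )
  {
      /* ... existing rmap / typecount / p2m updates ... */

      /* each rmap entry records which <domain,gfn> maps cpage */
      d = get_domain_by_id(gfn->domain);
      if ( d && altp2m_active(d) )
          p2m_altp2m_propagate_change(d, _gfn(gfn->gfn), smfn,
                                      PAGE_ORDER_4K, p2m_ram_shared,
                                      cmfn_a /* from get_two_gfns() */);
      if ( d )
          put_domain(d);
  }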

>> But this change will only call p2m_altp2m_propagate_change() for the
>> original cgfn -- any other gfns which are backed by cmfn will not have
>> the corresponding altp2m entries propagated properly.
>
> Right, if there is some other place where it does sharing in the back we
> would have to propagate that change.
>
>> This sort of mistake is easy to make, which is why I think we should
>> try to always update the altp2ms in ept_set_entry() if we can, to
>> minimize the opportunity for making this sort of mistake.
>>
>> Is there ever a reason to grab the altp2m lock and *then* grab the
>> sharing lock?  Could we just move the sharing lock up between the p2m
>> lock and the altp2mlist lock instead?
>>
>
> I can't think of a scenario where we would get to sharing from altp2m with
> altp2m locking first. Not sure what you mean by moving the sharing lock up
> though. The problem is that sharing already has the lock by the time altp2m
> tries to lock, so we could pass that info down to make altp2m aware it needs
> no locking. It would require extending a bunch of functions though with an
> extra input that is barely ever used..

If you have altp2m there are three locks.  There's one p2m lock for
the "host" p2m (that is, Xen's idea of what the mapping should look
like).  Then there's the altp2mlist lock, which protects the *list* of
altp2ms; then each altp2m itself has its own lock.  These are defined
in mm-locks.h and must be grabbed in that order: p2m before
altp2mlist, altp2mlist before altp2m.

I assume that the memsharing code is grabbing the hostp2m lock (it
should be anyway), then grabbing the memsharing locks. This is allowed
because the memsharing locks are defined after the p2m lock in
mm-locks.h.  But then when updating the p2m entry, if you have an
altp2m active, it then tries to grab the altp2mlist lock so it can
iterate over the altp2ms.  Since the altp2mlist lock is *before* the
sharing lock in mm-locks.h, this triggers an assert.
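
Schematically (position in mm-locks.h == required order; abbreviated
and from memory, so the exact neighbours may differ):

  p2m lock           /* hostp2m */
  altp2mlist lock    /* protects the list of altp2ms */
  altp2m lock        /* one per altp2m table */
  ...
  page-sharing locks

Sharing takes its locks first and then wants the altp2mlist lock --
i.e. it goes backwards through the file, which is exactly what the
ASSERT() catches.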

Is that not what your issue is?

 -George
Tamas K Lengyel May 25, 2016, 4:58 p.m. UTC | #4
On Wed, May 25, 2016 at 10:08 AM, George Dunlap
<george.dunlap@citrix.com> wrote:
> On Wed, May 25, 2016 at 4:31 PM, Tamas K Lengyel <tamas@tklengyel.com> wrote:
>>
>> On May 25, 2016 05:27, "George Dunlap" <george.dunlap@citrix.com> wrote:
>>>
>>> On Fri, Apr 29, 2016 at 6:42 PM, Tamas K Lengyel <tamas@tklengyel.com>
>>> wrote:
>>> > Don't propagate altp2m changes from ept_set_entry for memshare as memshare
>>> > already has the lock. We call altp2m propagate changes once memshare
>>> > successfully finishes. Allow the hostp2m entries to be of type
>>> > p2m_ram_shared when applying mem_access. Also, do not trigger PoD for hostp2m
>>> > when setting altp2m mem_access to be in-line with non-altp2m mem_access path.
>>>
>>> Hey Tamas,
>>>
>>> Sorry for the long delay in getting back to you on this.
>>
>> No problem, thanks for taking a closer look!
>>
>>>
>>> So the main issue here (correct me if I'm wrong) is the locking
>>> discipline: namely, mem_sharing_share_pages():
>>> - Grabs the hostp2m lock
>>> - Grabs the appropriate domain memsharing locks
>>> - Calls set_shared_p2m_entry(), which ends up calling ept_set_entry(),
>>> which (when altp2m is active) grabs the altp2mlist and altp2m locks.
>>>
>>> This causes an ASSERT(), since the altp2mlist lock is ahead of the
>>> memsharing locks in the list.
>>>
>>> But having taken a closer look at the code, I'm not sure the change is
>>> quite correct.  Please correct me if I've misread something:
>>>
>>> mem_sharing_share_pages() is passed two <domain,gfn> pairs -- the
>>> <sd,sgfn> (which I assume stands for "shared gfn") and <cd,cgfn>
>>> (which I assume stands for "copy"); and it
>>
>> Here s/c stands for source/client.
>>
>>> 1) Looks up smfn and cmfn, which back sgfn and cgfn respectively
>>> 2) Looks up cmfn, which backs cgfn, then replaces all gfn entries which
>>> point to cmfn with smfn (updating accounting as appropriate)
>>
>> Hm, I might have missed that. Where does it do the lookup for all other
>> cgfns backed by this cmfn?
>
> I was looking at the loop in the middle of the function:
>
> while ( (gfn = rmap_iterate(cpage, &ri)) != NULL) {
>  ...
> }
>
> I haven't chased it down, but it looks like this walks the reverse map
> of all gfns which map cpage; and for each such gfn it finds, it:
> * removes the cpage -> gfn rmap
> * Adds an spage -> gfn map
> * Reduces the type count of cpage
> * Sets the p2m entry for that gfn to the smfn (rather than cmfn).
>
> Obviously the common case is that the number of mappings is exactly 1;
> but we need to either ensure that this is always true, or we need to
> handle the case where it's not true. :-)
>
>>> But this change will only call p2m_altp2m_propagate_change() for the
>>> original cgfn -- any other gfns which are backed by cmfn will not have
>>> the corresponding altp2m entries propagated properly.
>>
>> Right, if there is some other place where it does sharing in the back we
>> would have to propagate that change.
>>
>>> This sort of mistake is easy to make, which is why I think we should
>>> try to always update the altp2ms in ept_set_entry() if we can, to
>>> minimize the opportunity for making this sort of mistake.
>>>
>>> Is there ever a reason to grab the altp2m lock and *then* grab the
>>> sharing lock?  Could we just move the sharing lock up between the p2m
>>> lock and the altp2mlist lock instead?
>>>
>>
>> I can't think of a scenario where we would get to sharing from altp2m with
>> altp2m locking first. Not sure what you mean by moving the sharing lock up
>> though. The problem is that sharing already has the lock by the time altp2m
>> tries to lock, so we could pass that info down to make altp2m aware it needs
>> no locking. It would require extending a bunch of functions though with an
>> extra input that is barely ever used..
>
> If you have altp2m there are three locks.  There's one p2m lock for
> the "host" p2m (that is, Xen's idea of what the mapping should look
> like).  Then there's the altp2mlist lock, which protects the *list* of
> altp2ms; then each altp2m itself has its own lock.  These are defined
> in mm-locks.h and must be grabbed in that order: p2m before
> altp2mlist, altp2mlist before altp2m.
>
> I assume that the memsharing code is grabbing the hostp2m lock (it
> should be anyway), then grabbing the memsharing locks. This is allowed
> because the memsharing locks are defined after the p2m lock in
> mm-locks.h.  But then when updating the p2m entry, if you have an
> altp2m active, it then tries to grab the altp2mlist lock so it can
> iterate over the altp2ms.  Since the altp2mlist lock is *before* the
> sharing lock in mm-locks.h, this triggers an assert.
>
> Is that not what your issue is?

Ahh, I see! Let me give that a try - TBH this locking order
enforcement based on position in mm-locks.h was not entirely clear to
me =)

Tamas
George Dunlap May 25, 2016, 6:25 p.m. UTC | #5
On Wed, May 25, 2016 at 5:58 PM, Tamas K Lengyel <tamas@tklengyel.com> wrote:
> On Wed, May 25, 2016 at 10:08 AM, George Dunlap
> <george.dunlap@citrix.com> wrote:
>> On Wed, May 25, 2016 at 4:31 PM, Tamas K Lengyel <tamas@tklengyel.com> wrote:
>>>
>>> On May 25, 2016 05:27, "George Dunlap" <george.dunlap@citrix.com> wrote:
>>>>
>>>> On Fri, Apr 29, 2016 at 6:42 PM, Tamas K Lengyel <tamas@tklengyel.com>
>>>> wrote:
>>>> > Don't propagate altp2m changes from ept_set_entry for memshare as memshare
>>>> > already has the lock. We call altp2m propagate changes once memshare
>>>> > successfully finishes. Allow the hostp2m entries to be of type
>>>> > p2m_ram_shared when applying mem_access. Also, do not trigger PoD for hostp2m
>>>> > when setting altp2m mem_access to be in-line with non-altp2m mem_access path.
>>>>
>>>> Hey Tamas,
>>>>
>>>> Sorry for the long delay in getting back to you on this.
>>>
>>> No problem, thanks for taking a closer look!
>>>
>>>>
>>>> So the main issue here (correct me if I'm wrong) is the locking
>>>> discipline: namely, mem_sharing_share_pages():
>>>> - Grabs the hostp2m lock
>>>> - Grabs the appropriate domain memsharing locks
>>>> - Calls set_shared_p2m_entry(), which ends up calling ept_set_entry(),
>>>> which (when altp2m is active) grabs the altp2mlist and altp2m locks.
>>>>
>>>> This causes an ASSERT(), since the altp2mlist lock is ahead of the
>>>> memsharing locks in the list.
>>>>
>>>> But having taken a closer look at the code, I'm not sure the change is
>>>> quite correct.  Please correct me if I've misread something:
>>>>
>>>> mem_sharing_share_pages() is passed two <domain,gfn> pairs -- the
>>>> <sd,sgfn> (which I assume stands for "shared gfn") and <cd,cgfn>
>>>> (which I assume stands for "copy"); and it
>>>
>>> Here s/c stands for source/client.
>>>
>>>> 1) Looks up smfn and cmfn, which back sgfn and cgfn respectively
>>>> 2) Looks up cmfn, which backs cgfn, then replaces all gfn entries which
>>>> point to cmfn with smfn (updating accounting as appropriate)
>>>
>>> Hm, I might have missed that. Where does it do the lookup for all other
>>> cgfns backed by this cmfn?
>>
>> I was looking at the loop in the middle of the function:
>>
>> while ( (gfn = rmap_iterate(cpage, &ri)) != NULL) {
>>  ...
>> }
>>
>> I haven't chased it down, but it looks like this walks the reverse map
>> of all gfns which map cpage; and for each such gfn it finds, it:
>> * removes the cpage -> gfn rmap
>> * Adds an spage -> gfn map
>> * Reduces the type count of cpage
>> * Sets the p2m entry for that gfn to the smfn (rather than cmfn).
>>
>> Obviously the common case is that the number of mappings is exactly 1;
>> but we need to either ensure that this is always true, or we need to
>> handle the case where it's not true. :-)
>>
>>>> But this change will only call p2m_altp2m_propagate_change() for the
>>>> original cgfn -- any other gfns which are backed by cmfn will not have
>>>> the corresponding altp2m entries propagated properly.
>>>
>>> Right, if there is some other place where it does sharing in the back we
>>> would have to propagate that change.
>>>
>>>> This sort of mistake is easy to make, which is why I think we should
>>>> try to always update the altp2ms in ept_set_entry() if we can, to
>>>> minimize the opportunity for making this sort of mistake.
>>>>
>>>> Is there ever a reason to grab the altp2m lock and *then* grab the
>>>> sharing lock?  Could we just move the sharing lock up between the p2m
>>>> lock and the altp2mlist lock instead?
>>>>
>>>
>>> I can't think of a scenario where we would get to sharing from altp2m with
>>> altp2m locking first. Not sure what you mean by moving the sharing lock up
>>> though. The problem is that sharing already has the lock by the time altp2m
>>> tries to lock, so we could pass that info down to make altp2m aware it needs
>>> no locking. It would require extending a bunch of functions though with an
>>> extra input that is barely ever used..
>>
>> If you have altp2m there are three locks.  There's one p2m lock for
>> the "host" p2m (that is, Xen's idea of what the mapping should look
>> like).  Then there's the altp2mlist lock, which protects the *list* of
>> altp2ms; then each altp2m itself has its own lock.  These are defined
>> in mm-locks.h and must be grabbed in that order: p2m before
>> altp2mlist, altp2mlist before altp2m.
>>
>> I assume that the memsharing code is grabbing the hostp2m lock (it
>> should be anyway), then grabbing the memsharing locks. This is allowed
>> because the memsharing locks are defined after the p2m lock in
>> mm-locks.h.  But then when updating the p2m entry, if you have an
>> altp2m active, it then tries to grab the altp2mlist lock so it can
>> iterate over the altp2ms.  Since the altp2mlist lock is *before* the
>> sharing lock in mm-locks.h, this triggers an assert.
>>
>> Is that not what your issue is?
>
> Ahh, I see! Let me give that a try - TBH this locking order
> enforcement based on position in mm-locks.h was not entirely clear to
> me =)

Indeed, it is a bit strange, but given the number of locks that
must be ordered properly to avoid deadlock (what, 8 or so?) it's
really the only sane way to make sure things are kept straight.

The original implementation actually uses line numbers from mm-locks.h
to declare the order, but I *think* a patch recently went in to
change those to explicit enumerations, to make xsplice patches easier.
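
From memory, the line-number trick looks roughly like this (a sketch,
not the exact code):

  /* mm-locks.h: a lock's level is literally the line number of its
   * declaration, so declaration order *is* the required locking order */
  static inline void _mm_lock(mm_lock_t *l, const char *func,
                              int level, int rec)
  {
      if ( !((mm_locked_by_me(l)) && rec) )
          __check_lock_level(level);  /* ASSERT(per-cpu level < level) */
      spin_lock_recursive(&l->lock);
      __set_lock_level(level);        /* record the new innermost level */
  }

  #define declare_mm_lock(name)                                         \
      static inline void mm_lock_##name(mm_lock_t *l, const char *func, \
                                        int rec)                        \
      { _mm_lock(l, func, __LINE__, rec); }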

 -George
Tamas K Lengyel May 25, 2016, 7:29 p.m. UTC | #6
On Wed, May 25, 2016 at 12:25 PM, George Dunlap
<george.dunlap@citrix.com> wrote:
> On Wed, May 25, 2016 at 5:58 PM, Tamas K Lengyel <tamas@tklengyel.com> wrote:
>> On Wed, May 25, 2016 at 10:08 AM, George Dunlap
>> <george.dunlap@citrix.com> wrote:
>>> On Wed, May 25, 2016 at 4:31 PM, Tamas K Lengyel <tamas@tklengyel.com> wrote:
>>>>
>>>> On May 25, 2016 05:27, "George Dunlap" <george.dunlap@citrix.com> wrote:
>>>>>
>>>>> On Fri, Apr 29, 2016 at 6:42 PM, Tamas K Lengyel <tamas@tklengyel.com>
>>>>> wrote:
>>>>> > Don't propagate altp2m changes from ept_set_entry for memshare as memshare
>>>>> > already has the lock. We call altp2m propagate changes once memshare
>>>>> > successfully finishes. Allow the hostp2m entries to be of type
>>>>> > p2m_ram_shared when applying mem_access. Also, do not trigger PoD for hostp2m
>>>>> > when setting altp2m mem_access to be in-line with non-altp2m mem_access path.
>>>>>
>>>>> Hey Tamas,
>>>>>
>>>>> Sorry for the long delay in getting back to you on this.
>>>>
>>>> No problem, thanks for taking a closer look!
>>>>
>>>>>
>>>>> So the main issue here (correct me if I'm wrong) is the locking
>>>>> discipline: namely, mem_sharing_share_pages():
>>>>> - Grabs the hostp2m lock
>>>>> - Grabs the appropriate domain memsharing locks
>>>>> - Calls set_shared_p2m_entry(), which ends up calling ept_set_entry(),
>>>>> which (when altp2m is active) grabs the altp2mlist and altp2m locks.
>>>>>
>>>>> This causes an ASSERT(), since the altp2mlist lock is ahead of the
>>>>> memsharing locks in the list.
>>>>>
>>>>> But having taken a closer look at the code, I'm not sure the change is
>>>>> quite correct.  Please correct me if I've misread something:
>>>>>
>>>>> mem_sharing_share_pages() is passed two <domain,gfn> pairs -- the
>>>>> <sd,sgfn> (which I assume stands for "shared gfn") and <cd,cgfn>
>>>>> (which I assume stands for "copy"); and it
>>>>
>>>> Here s/c stands for source/client.
>>>>
>>>>> 1) Looks up smfn and cmfn, which back sgfn and cgfn respectively
>>>>> 2) Looks up cmfn, which backs cgfn, then replaces all gfn entries which
>>>>> point to cmfn with smfn (updating accounting as appropriate)
>>>>
>>>> Hm, I might have missed that. Where does it do the lookup for all other
>>>> cgfns backed by this cmfn?
>>>
>>> I was looking at the loop in the middle of the function:
>>>
>>> while ( (gfn = rmap_iterate(cpage, &ri)) != NULL) {
>>>  ...
>>> }
>>>
>>> I haven't chased it down, but it looks like this walks the reverse map
>>> of all gfns which map cpage; and for each such gfn it finds, it:
>>> * removes the cpage -> gfn rmap
>>> * Adds an spage -> gfn map
>>> * Reduces the type count of cpage
>>> * Sets the p2m entry for that gfn to the smfn (rather than cmfn).
>>>
>>> Obviously the common case is that the number of mappings is exactly 1;
>>> but we need to either ensure that this is always true, or we need to
>>> handle the case where it's not true. :-)
>>>
>>>>> But this change will only call p2m_altp2m_propagate_change() for the
>>>>> original cgfn -- any other gfns which are backed by cmfn will not have
>>>>> the corresponding altp2m entries propagated properly.
>>>>
>>>> Right, if there is some other place where it does sharing in the back we
>>>> would have to propagate that change.
>>>>
>>>>> This sort of mistake is easy to make, which is why I think we should
>>>>> try to always update the altp2ms in ept_set_entry() if we can, to
>>>>> minimize the opportunity for making this sort of mistake.
>>>>>
>>>>> Is there ever a reason to grab the altp2m lock and *then* grab the
>>>>> sharing lock?  Could we just move the sharing lock up between the p2m
>>>>> lock and the altp2mlist lock instead?
>>>>>
>>>>
>>>> I can't think of a scenario where we would get to sharing from altp2m with
>>>> altp2m locking first. Not sure what you mean by moving the sharing lock up
>>>> though. The problem is that sharing already has the lock by the time altp2m
>>>> tries to lock, so we could pass that info down to make altp2m aware it needs
>>>> no locking. It would require extending a bunch of functions though with an
>>>> extra input that is barely ever used..
>>>
>>> If you have altp2m there are three locks.  There's one p2m lock for
>>> the "host" p2m (that is, Xen's idea of what the mapping should look
>>> like).  Then there's the altp2mlist lock, which protects the *list* of
>>> altp2ms; then each altp2m itself has its own lock.  These are defined
>>> in mm-locks.h and must be grabbed in that order: p2m before
>>> altp2mlist, altp2mlist before altp2m.
>>>
>>> I assume that the memsharing code is grabbing the hostp2m lock (it
>>> should be anyway), then grabbing the memsharing locks. This is allowed
>>> because the memsharing locks are defined after the p2m lock in
>>> mm-locks.h.  But then when updating the p2m entry, if you have an
>>> altp2m active, it then tries to grab the altp2mlist lock so it can
>>> iterate over the altp2ms.  Since the altp2mlist lock is *before* the
>>> sharing lock in mm-locks.h, this triggers an assert.
>>>
>>> Is that not what your issue is?
>>
>> Ahh, I see! Let me give that a try - TBH this locking order
>> enforcement based on position in mm-locks.h was not entirely clear to
>> me =)
>
> Indeed, it is a bit strange, but given the number of locks that
> must be ordered properly to avoid deadlock (what, 8 or so?) it's
> really the only sane way to make sure things are kept straight.
>
> The original implementation actually uses line numbers from mm-locks.h
> to declare the order, but I *think* a patch recently went in to
> change those to explicit enumerations, to make xsplice patches easier.
>

Alright, moving up the locks in mm-locks.h does resolve the problem,
and we can keep the altp2m propagation in ept_set_entry as well! The
xen-access altp2m tests also pass on a fully memshared domain. Will
send the new patch shortly.
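
Concretely the fix is just moving the per-page sharing lock
declaration earlier in mm-locks.h, so the implied discipline becomes
(sketch, not the actual diff):

  /* mm-locks.h, after the move (position == order):
   *   p2m (hostp2m) lock
   *   per-page sharing locks   <- moved up from below the altp2m locks
   *   altp2mlist lock
   *   altp2m lock
   */

That way ept_set_entry() can still take the altp2mlist lock after
sharing has taken its locks.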

Thanks,
Tamas

Patch

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index a522423..b6c7e33 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -35,6 +35,7 @@ 
 #include <asm/p2m.h>
 #include <asm/atomic.h>
 #include <asm/event.h>
+#include <asm/altp2m.h>
 #include <xsm/xsm.h>
 
 #include "mm-locks.h"
@@ -926,11 +927,12 @@  int mem_sharing_share_pages(struct domain *sd, unsigned long sgfn, shr_handle_t
     int ret = -EINVAL;
     mfn_t smfn, cmfn;
     p2m_type_t smfn_type, cmfn_type;
+    p2m_access_t cmfn_a;
     struct two_gfns tg;
     struct rmap_iterator ri;
 
     get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
-                 cd, cgfn, &cmfn_type, NULL, &cmfn,
+                 cd, cgfn, &cmfn_type, &cmfn_a, &cmfn,
                  0, &tg);
 
     /* This tricky business is to avoid two callers deadlocking if 
@@ -1026,6 +1028,12 @@  int mem_sharing_share_pages(struct domain *sd, unsigned long sgfn, shr_handle_t
     /* We managed to free a domain page. */
     atomic_dec(&nr_shared_mfns);
     atomic_inc(&nr_saved_mfns);
+
+    /* Save the change to the altp2m tables as well if active */
+    if ( altp2m_active(cd) )
+        p2m_altp2m_propagate_change(cd, _gfn(cgfn), smfn, PAGE_ORDER_4K,
+                                    p2m_ram_shared, cmfn_a);
+
     ret = 0;
     
 err_out:
@@ -1222,6 +1230,7 @@  int __mem_sharing_unshare_page(struct domain *d,
     put_page_and_type(old_page);
 
 private_page_found:    
+    /* The following will also update the altp2m tables (if any) */
     if ( p2m_change_type_one(d, gfn, p2m_ram_shared, p2m_ram_rw) )
     {
         gdprintk(XENLOG_ERR, "Could not change p2m type d %hu gfn %lx.\n", 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 1ed5b47..ff84242 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -847,7 +847,7 @@  out:
     if ( is_epte_present(&old_entry) )
         ept_free_entry(p2m, &old_entry, target);
 
-    if ( rc == 0 && p2m_is_hostp2m(p2m) )
+    if ( rc == 0 && p2m_is_hostp2m(p2m) && p2mt != p2m_ram_shared )
         p2m_altp2m_propagate_change(d, _gfn(gfn), mfn, order, p2mt, p2ma);
 
     return rc;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6eef2f3..2a42b91 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1765,11 +1765,10 @@  int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
     /* Check host p2m if no valid entry in alternate */
     if ( !mfn_valid(mfn) )
     {
-        mfn = hp2m->get_entry(hp2m, gfn_l, &t, &old_a,
-                              P2M_ALLOC | P2M_UNSHARE, &page_order, NULL);
+        mfn = hp2m->get_entry(hp2m, gfn_l, &t, &old_a, 0, &page_order, NULL);
 
         rc = -ESRCH;
-        if ( !mfn_valid(mfn) || t != p2m_ram_rw )
+        if ( !mfn_valid(mfn) || (t != p2m_ram_rw && t != p2m_ram_shared) )
             return rc;
 
         /* If this is a superpage, copy that first */