diff mbox series

[v4] x86/PoD: tie together P2M update and increment of entry count

Message ID 3daef84c-47dd-4a6b-9984-402e997598dc@suse.com
State Superseded
Series [v4] x86/PoD: tie together P2M update and increment of entry count

Commit Message

Jan Beulich March 13, 2024, 2 p.m. UTC
When not holding the PoD lock across the entire region covering the P2M
update and the stats update, the entry count - if it is to be incorrect
at all - should indicate too large a value in preference to too small a
one, to avoid functions bailing early when they find the count is zero.
However, instead of moving the increment ahead (and adjusting it back
upon failure), extend the PoD-locked region.

Fixes: 99af3cd40b6e ("x86/mm: Rework locking in the PoD layer")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v4: Shrink locked region a little again, where possible.
v3: Extend locked region instead. Add Fixes: tag.
v2: Add comments.

Comments

George Dunlap March 13, 2024, 4:31 p.m. UTC | #1
On Wed, Mar 13, 2024 at 2:00 PM Jan Beulich <jbeulich@suse.com> wrote:
>
> When not holding the PoD lock across the entire region covering P2M
> update and stats update, the entry count - if to be incorrect at all -
> should indicate too large a value in preference to a too small one, to
> avoid functions bailing early when they find the count is zero. However,
> instead of moving the increment ahead (and adjust back upon failure),
> extend the PoD-locked region.
>
> Fixes: 99af3cd40b6e ("x86/mm: Rework locking in the PoD layer")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Would you mind commenting on why you went with multiple unlocks,
rather than multiple if statements?

e.g.,

```
rc = p2m_set_entry(...);

/* Do the pod entry adjustment while holding the lock on success */
if ( rc == 0 ) {
 /* adjust pod entries */
}

pod_unlock(p2m);

/* Do the rest of the clean-up and error handling */
if ( rc == 0 ) {
```

Just right now the multiple unlocks make me worry that we may forget
one at some point.

 -George
Jan Beulich March 14, 2024, 6:59 a.m. UTC | #2
On 13.03.2024 17:31, George Dunlap wrote:
> On Wed, Mar 13, 2024 at 2:00 PM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> When not holding the PoD lock across the entire region covering P2M
>> update and stats update, the entry count - if to be incorrect at all -
>> should indicate too large a value in preference to a too small one, to
>> avoid functions bailing early when they find the count is zero. However,
>> instead of moving the increment ahead (and adjust back upon failure),
>> extend the PoD-locked region.
>>
>> Fixes: 99af3cd40b6e ("x86/mm: Rework locking in the PoD layer")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Would you mind commenting on why you went with multiple unlocks,
> rather than multiple if statements?

Simply because I view what I did as a more logical code structure
than ...

> e.g.,
> 
> ```
> rc = p2m_set_entry(...);
> 
> /* Do the pod entry adjustment while holding the lock on success */
> if ( rc == 0 ) {
>  /* adjust pod entries */
> }
> 
> pod_unlock(p2m);
> 
> /* Do the rest of the clean-up and error handling */
> if (rc == 0 ) {

... this, ...

> Just right now the multiple unlocks makes me worry that we may forget
> one at some point.

... despite this possible concern. But well, if going the other route
is what it takes to finally get this in, so be it.

Jan

Patch

--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1348,12 +1348,19 @@  mark_populate_on_demand(struct domain *d
         }
     }
 
+    /*
+     * P2M update and stats increment need to collectively be under PoD lock,
+     * to prevent code elsewhere observing PoD entry count being zero despite
+     * there actually still being PoD entries (created by the p2m_set_entry()
+     * invocation below).
+     */
+    pod_lock(p2m);
+
     /* Now, actually do the two-way mapping */
     rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order,
                        p2m_populate_on_demand, p2m->default_access);
     if ( rc == 0 )
     {
-        pod_lock(p2m);
         p2m->pod.entry_count += 1UL << order;
         p2m->pod.entry_count -= pod_count;
         BUG_ON(p2m->pod.entry_count < 0);
@@ -1363,6 +1370,8 @@  mark_populate_on_demand(struct domain *d
     }
     else if ( order )
     {
+        pod_unlock(p2m);
+
         /*
          * If this failed, we can't tell how much of the range was changed.
          * Best to crash the domain.
@@ -1372,6 +1381,8 @@  mark_populate_on_demand(struct domain *d
                d, gfn_l, order, rc);
         domain_crash(d);
     }
+    else
+        pod_unlock(p2m);
 
 out:
     gfn_unlock(p2m, gfn, order);