| Message ID | 95699d2c-7e2a-40db-875f-907990797317@suse.com (mailing list archive) |
|---|---|
| State | New |
| Series | [v5] x86/PoD: tie together P2M update and increment of entry count |
On Tue, Mar 19, 2024 at 1:22 PM Jan Beulich <jbeulich@suse.com> wrote:
>
> When not holding the PoD lock across the entire region covering P2M
> update and stats update, the entry count - if to be incorrect at all -
> should indicate too large a value in preference to a too small one, to
> avoid functions bailing early when they find the count is zero. However,
> instead of moving the increment ahead (and adjust back upon failure),
> extend the PoD-locked region.
>
> Fixes: 99af3cd40b6e ("x86/mm: Rework locking in the PoD layer")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Oh! Thanks for doing this -- I hadn't responded because I wasn't sure
whether I was bikeshedding, and then it sort of fell off my radar.

At any rate:

Reviewed-by: George Dunlap <george.dunlap@cloud.com>
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1348,19 +1348,28 @@ mark_populate_on_demand(struct domain *d
         }
     }
 
+    /*
+     * P2M update and stats increment need to collectively be under PoD lock,
+     * to prevent code elsewhere observing PoD entry count being zero despite
+     * there actually still being PoD entries (created by the p2m_set_entry()
+     * invocation below).
+     */
+    pod_lock(p2m);
+
     /* Now, actually do the two-way mapping */
     rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order,
                        p2m_populate_on_demand, p2m->default_access);
     if ( rc == 0 )
     {
-        pod_lock(p2m);
         p2m->pod.entry_count += 1UL << order;
         p2m->pod.entry_count -= pod_count;
         BUG_ON(p2m->pod.entry_count < 0);
-        pod_unlock(p2m);
+    }
+
+    pod_unlock(p2m);
+
+    if ( rc == 0 )
         ioreq_request_mapcache_invalidate(d);
-    }
     else if ( order )
     {
         /*
When not holding the PoD lock across the entire region covering P2M
update and stats update, the entry count - if to be incorrect at all -
should indicate too large a value in preference to a too small one, to
avoid functions bailing early when they find the count is zero. However,
instead of moving the increment ahead (and adjust back upon failure),
extend the PoD-locked region.

Fixes: 99af3cd40b6e ("x86/mm: Rework locking in the PoD layer")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v5: Re-arrange conditionals to have just a single unlock.
v4: Shrink locked region a little again, where possible.
v3: Extend locked region instead. Add Fixes: tag.
v2: Add comments.