
[v10,5/6] x86/ioreq server: Asynchronously reset outstanding p2m_ioreq_server entries.

Message ID 58E5FC0A.4010809@linux.intel.com (mailing list archive)

Commit Message

Yu Zhang April 6, 2017, 8:27 a.m. UTC
On 4/6/2017 3:48 PM, Jan Beulich wrote:
>>>> On 05.04.17 at 20:04, <yu.c.zhang@linux.intel.com> wrote:
>> --- a/xen/common/memory.c
>> +++ b/xen/common/memory.c
>> @@ -288,6 +288,10 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
>>            put_gfn(d, gmfn);
>>            return 1;
>>        }
>> +    if ( unlikely(p2mt == p2m_ioreq_server) )
>> +        p2m_change_type_one(d, gmfn,
>> +                            p2m_ioreq_server, p2m_ram_rw);
>> +
>>    #else
>>        mfn = gfn_to_mfn(d, _gfn(gmfn));
>>    #endif
> To be honest, at first glance this looks more like a quick hack than
> a proper solution. To me it would seem preferable if the count was

Yeah, right. :)

> adjusted at the point the P2M entry is being replaced (i.e. down the
> call stack from guest_physmap_remove_page()). The closer to the
> actual changing of the P2M entry, the less likely you are to miss any
> call paths (as George already explained while suggesting to put the
> accounting into the leaf-most EPT/NPT functions).

Well, I thought I had explained why I have always been hesitant to do 
the count in atomic_write_ept_entry(), but it seems I did not make it 
clear:
1> atomic_write_ept_entry() is used each time a p2m entry is written, 
but sweeping p2m_ioreq_server entries is only supposed to happen when 
an ioreq server unmaps. Checking the p2m type here would penalize every 
p2m write, and I do not think that is worthwhile, considering how 
limited the use of p2m_ioreq_server is likely to be;
2> atomic_write_ept_entry() does not take a p2m parameter, and some of 
its callers do not have one either;
3> the lazy p2m type change is triggered at the p2m level, by 
p2m_change_entry_type_global(), which is not specific to Intel EPT. 
Supporting the count at the lowest level on non-EPT platforms would be 
more complex than the changes needed in atomic_write_ept_entry();

4> Fortunately, we have both resolve_misconfig() and do_recalc(), so I 
thought it would be enough to do the count in these two routines, 
together with the count in p2m_change_type_one() - a sketch of what I 
mean follows below.
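
Something like this (just an illustrative sketch, not part of the 
posted series - the helper name, the p2m->ioreq.entry_count field and 
the locking assumptions are mine for illustration):

/*
 * Adjust the count of outstanding p2m_ioreq_server entries whenever an
 * entry's type changes.  Assumes the caller holds the p2m lock, as
 * p2m_change_type_one(), resolve_misconfig() and do_recalc() all do.
 */
static void ioreq_entry_count_adjust(struct p2m_domain *p2m,
                                     p2m_type_t ot, p2m_type_t nt)
{
    if ( nt == p2m_ioreq_server && ot != p2m_ioreq_server )
        p2m->ioreq.entry_count++;
    else if ( ot == p2m_ioreq_server && nt != p2m_ioreq_server )
    {
        ASSERT(p2m->ioreq.entry_count > 0);
        p2m->ioreq.entry_count--;
    }
}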

But I have to admit I did not think of the extreme scenarios raised by 
George - I had always assumed a p2m_ioreq_server page would not be 
handed to the balloon driver while it is in use.

So here is another proposal - we shall not allow a p2m_ioreq_server 
page to be ballooned out. I mean, if some bug in the kernel really 
hands a p2m_ioreq_server page to the balloon driver, or if the driver 
is a malicious one which does not tell the device model that this gfn 
shall no longer be emulated, the hypervisor shall let the ballooning 
fail for this gfn. After all, if such a situation happens, the guest or 
the device model already has a bug, and these last 2 patches are meant 
to make sure that even if there is a bug in the guest/device model, Xen 
will help do the cleanup rather than tolerate the guest bug.

If you think this is reasonable, I have drafted a patch (the full diff 
is in the Patch section below).

The change in p2m_pod_decrease_reservation() is to prevent the balloon 
driver from stealing a p2m_ioreq_server page for the PoD cache; the 
change in guest_remove_page() is to disallow removal of a 
p2m_ioreq_server page.


Thanks
Yu

> Jan
>
>

Comments

Jan Beulich April 6, 2017, 8:44 a.m. UTC | #1
>>> On 06.04.17 at 10:27, <yu.c.zhang@linux.intel.com> wrote:
> On 4/6/2017 3:48 PM, Jan Beulich wrote:
>>>>> On 05.04.17 at 20:04, <yu.c.zhang@linux.intel.com> wrote:
>>> --- a/xen/common/memory.c
>>> +++ b/xen/common/memory.c
>>> @@ -288,6 +288,10 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
>>>            put_gfn(d, gmfn);
>>>            return 1;
>>>        }
>>> +    if ( unlikely(p2mt == p2m_ioreq_server) )
>>> +        p2m_change_type_one(d, gmfn,
>>> +                            p2m_ioreq_server, p2m_ram_rw);
>>> +
>>>    #else
>>>        mfn = gfn_to_mfn(d, _gfn(gmfn));
>>>    #endif
>> To be honest, at first glance this looks more like a quick hack than
>> a proper solution. To me it would seem preferable if the count was
> 
> Yeah, right. :)
> 
>> adjusted at the point the P2M entry is being replaced (i.e. down the
>> call stack from guest_physmap_remove_page()). The closer to the
>> actual changing of the P2M entry, the less likely you are to miss any
>> call paths (as George already explained while suggesting to put the
>> accounting into the leaf-most EPT/NPT functions).
> 
> Well, I thought I had explained why I have always been hesitant to
> do the count in atomic_write_ept_entry(), but it seems I did not make
> it clear:

Well, there was no reason to re-explain. I understand your
reasoning, and I understand George's. Hence my request to move
it _closer_ to the leaf function, not specifically to move it _into_
that function itself.
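
For instance (a fragment of my own to illustrate the idea, not code
from your series - the placement in p2m_remove_page() and the
entry_count field are illustrative), the adjustment could sit in the
common removal path, where the old type has just been looked up:

    /*
     * In p2m_remove_page(), after the old entry's type "t" has been
     * read and before the entry is cleared - the per-vendor leaf
     * writers stay untouched.
     */
    if ( t == p2m_ioreq_server )
    {
        ASSERT(p2m->ioreq.entry_count > 0);
        p2m->ioreq.entry_count--;
    }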

> But I have to admit I did not think of the extreme scenarios raised
> by George - I had always assumed a p2m_ioreq_server page would not be
> handed to the balloon driver while it is in use.
> 
> So here is another proposal - we shall not allow a p2m_ioreq_server
> page to be ballooned out. I mean, if some bug in the kernel really
> hands a p2m_ioreq_server page to the balloon driver, or if the driver
> is a malicious one which does not tell the device model that this gfn
> shall no longer be emulated, the hypervisor shall let the ballooning
> fail for this gfn. After all, if such a situation happens, the guest
> or the device model already has a bug, and these last 2 patches are
> meant to make sure that even if there is a bug in the guest/device
> model, Xen will help do the cleanup rather than tolerate the guest
> bug.
> 
> If you think this is reasonable, I have drafted a patch (the full
> diff is in the Patch section below).

Well, that's still the same hack as before (merely extended to the
PoD code), isn't it? I'd still prefer the accounting to be got right
instead of making the page-removal attempt fail.

Jan

Patch

diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index d5fea72..ff726ad 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -606,7 +606,8 @@  p2m_pod_decrease_reservation(struct domain *d,
              BUG_ON(p2m->pod.entry_count < 0);
              pod -= n;
          }
-        else if ( steal_for_cache && p2m_is_ram(t) )
+        else if ( steal_for_cache && p2m_is_ram(t) &&
+                  (t != p2m_ioreq_server) )
          {
              /*
               * If we need less than 1 << cur_order, we may end up stealing
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 7dbddda..40d5545 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -288,6 +288,14 @@  int guest_remove_page(struct domain *d, unsigned long gmfn)
          put_gfn(d, gmfn);
          return 1;
      }
+    if ( unlikely(p2mt == p2m_ioreq_server) )
+    {
+        put_gfn(d, gmfn);
+        gdprintk(XENLOG_INFO, "Domain %u page %lx cannot be removed.\n",
+                d->domain_id, gmfn);
+        return 0;
+    }
+
  #else
      mfn = gfn_to_mfn(d, _gfn(gmfn));
  #endif