Message ID: 1453925201-15926-2-git-send-email-tlengyel@novetta.com (mailing list archive)
State: New, archived
On 01/27/2016 10:06 PM, Tamas K Lengyel wrote:
> Extend the existing get_mem_access memop to allow querying permissions in
> altp2m views as well.
>
> Signed-off-by: Tamas K Lengyel <tlengyel@novetta.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Cc: Stefano Stabellini <stefano.stabellini@citrix.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Keir Fraser <keir@xen.org>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
>  tools/libxc/include/xenctrl.h       |  3 ++-
>  tools/libxc/xc_mem_access.c         |  8 +++++---
>  tools/tests/xen-access/xen-access.c |  5 ++++-
>  xen/arch/arm/p2m.c                  |  4 ++--
>  xen/arch/x86/mm/p2m.c               | 23 +++++++++++++++++++----
>  xen/common/mem_access.c             |  2 +-
>  xen/include/xen/p2m-common.h        |  3 ++-
>  7 files changed, 35 insertions(+), 13 deletions(-)

For the part of the code in the files we're maintaining:

Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>

Thanks,
Razvan
On Wed, Jan 27, 2016 at 01:06:41PM -0700, Tamas K Lengyel wrote:
> Extend the existing get_mem_access memop to allow querying permissions in
> altp2m views as well.
>
> Signed-off-by: Tamas K Lengyel <tlengyel@novetta.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Cc: Stefano Stabellini <stefano.stabellini@citrix.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Keir Fraser <keir@xen.org>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
>  tools/libxc/include/xenctrl.h       |  3 ++-
>  tools/libxc/xc_mem_access.c         |  8 +++++---
>  tools/tests/xen-access/xen-access.c |  5 ++++-

The code looks sensible. Some nits and questions below.

>  xen/arch/arm/p2m.c                  |  4 ++--
>  xen/arch/x86/mm/p2m.c               | 23 +++++++++++++++++++----
>  xen/common/mem_access.c             |  2 +-
>  xen/include/xen/p2m-common.h        |  3 ++-
>  7 files changed, 35 insertions(+), 13 deletions(-)
>
> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
> index b4e57d8..09d9f62 100644
> --- a/tools/libxc/include/xenctrl.h
> +++ b/tools/libxc/include/xenctrl.h
> @@ -2065,7 +2065,8 @@ int xc_set_mem_access(xc_interface *xch, domid_t domain_id,
>   * Gets the mem access for the given page (returned in access on success)
>   */
>  int xc_get_mem_access(xc_interface *xch, domid_t domain_id,
> -                      uint64_t pfn, xenmem_access_t *access);
> +                      uint64_t pfn, uint16_t altp2m_idx,
> +                      xenmem_access_t *access);

The same questions regarding stability of these functions apply here as well.

>
>  /*
>   * Instructions causing a mem_access violation can be emulated by Xen
> diff --git a/tools/libxc/xc_mem_access.c b/tools/libxc/xc_mem_access.c
> index d6fb409..a44865d 100644
> --- a/tools/libxc/xc_mem_access.c
> +++ b/tools/libxc/xc_mem_access.c
> @@ -46,14 +46,16 @@ int xc_set_mem_access(xc_interface *xch,
>  int xc_get_mem_access(xc_interface *xch,
>                        domid_t domain_id,
>                        uint64_t pfn,
> +                      uint16_t altp2m_idx,
>                        xenmem_access_t *access)
>  {
>      int rc;
>      xen_mem_access_op_t mao =
>      {
> -        .op    = XENMEM_access_op_get_access,
> -        .domid = domain_id,
> -        .pfn   = pfn
> +        .op         = XENMEM_access_op_get_access,
> +        .domid      = domain_id,
> +        .pfn        = pfn,
> +        .altp2m_idx = altp2m_idx
>      };
>
>      rc = do_memory_op(xch, XENMEM_access_op, &mao, sizeof(mao));
> diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
> index 2300e9a..d9dda62 100644
> --- a/tools/tests/xen-access/xen-access.c
> +++ b/tools/tests/xen-access/xen-access.c
> @@ -571,7 +571,10 @@ int main(int argc, char *argv[])
>
>          switch (req.reason) {
>          case VM_EVENT_REASON_MEM_ACCESS:
> -            rc = xc_get_mem_access(xch, domain_id, req.u.mem_access.gfn, &access);
> +            rc = xc_get_mem_access(xch, domain_id, req.u.mem_access.gfn,
> +                                   ((req.flags & VM_EVENT_FLAG_ALTERNATE_P2M) ? req.altp2m_idx : 0),

Line too long.

> +                                   &access
> +                                   );

And fold this ")" into the previous line?

>              if (rc < 0)
>              {
>                  ERROR("Error %d getting mem_access event\n", rc);
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 8e9b4be..932c6e2 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1666,7 +1666,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
>      if ( !p2m->mem_access_enabled )
>          return true;
>
> -    rc = p2m_get_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), &xma);
> +    rc = p2m_get_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 0, &xma);
>      if ( rc )
>          return true;
>
> @@ -1847,7 +1847,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
>          return 0;
>      }
>
> -int p2m_get_mem_access(struct domain *d, gfn_t gfn,
> +int p2m_get_mem_access(struct domain *d, gfn_t gfn, unsigned long altp2m_idx,
>                         xenmem_access_t *access)
>  {
>      int ret;
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 95bf7ce..18068e8 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1572,7 +1572,9 @@ void p2m_mem_access_emulate_check(struct vcpu *v,
>      bool_t violation = 1;
>      const struct vm_event_mem_access *data = &rsp->u.mem_access;
>
> -    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
> +    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn),
> +                            altp2m_active(v->domain) ? vcpu_altp2m(v).p2midx : 0,

Line too long.

Wei.
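[Editor's note] The index-selection expression Wei comments on in the xen-access.c hunk can be sketched as a tiny standalone helper. This is an illustrative model only: the flag's bit position below is a made-up placeholder, not the value defined in Xen's vm_event.h.

```c
#include <stdint.h>

/* Placeholder bit; the real value comes from xen/vm_event.h. */
#define VM_EVENT_FLAG_ALTERNATE_P2M (1u << 7)

/*
 * Mirror of the xen-access.c hunk: when the event was delivered while an
 * alternate p2m view was active, query that view; otherwise query view 0,
 * which the hypervisor treats as the host p2m.
 */
static uint16_t view_to_query(uint32_t event_flags, uint16_t event_altp2m_idx)
{
    return (event_flags & VM_EVENT_FLAG_ALTERNATE_P2M) ? event_altp2m_idx : 0;
}
```

Pulling the ternary into a named helper like this would also sidestep the "line too long" nit, since the call site shrinks to a single short argument.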
>>> On 27.01.16 at 21:06, <tlengyel@novetta.com> wrote:
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1572,7 +1572,9 @@ void p2m_mem_access_emulate_check(struct vcpu *v,
>      bool_t violation = 1;
>      const struct vm_event_mem_access *data = &rsp->u.mem_access;
>
> -    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
> +    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn),
> +                            altp2m_active(v->domain) ? vcpu_altp2m(v).p2midx : 0,
> +                            &access) == 0 )

This looks to be a behavioral change beyond what title and
description say, and it's not clear whether that's actually the
behavior everyone wants.

> @@ -1918,9 +1920,10 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
>   * Get access type for a gfn.
>   * If gfn == INVALID_GFN, gets the default access type.
>   */
> -int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
> +int p2m_get_mem_access(struct domain *d, gfn_t gfn, unsigned long altp2m_idx,
> +                       xenmem_access_t *access)
>  {
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    struct p2m_domain *hp2m = p2m_get_hostp2m(d), *p2m = NULL;
>      p2m_type_t t;
>      p2m_access_t a;
>      mfn_t mfn;
> @@ -1943,10 +1946,22 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
>      /* If request to get default access. */
>      if ( gfn_x(gfn) == INVALID_GFN )
>      {
> -        *access = memaccess[p2m->default_access];
> +        *access = memaccess[hp2m->default_access];
>          return 0;
>      }

And following the above - why would this not use the altp2m's
default access?

> +    /* altp2m view 0 is treated as the hostp2m */
> +    if ( altp2m_idx )
> +    {
> +        if ( altp2m_idx >= MAX_ALTP2M ||
> +             d->arch.altp2m_eptp[altp2m_idx] == INVALID_MFN )
> +            return -EINVAL;
> +
> +        p2m = d->arch.altp2m_p2m[altp2m_idx];
> +    }
> +    else
> +        p2m = hp2m;

And I don't see why you need separate p2m and hp2m local
variables.

> --- a/xen/include/xen/p2m-common.h
> +++ b/xen/include/xen/p2m-common.h
> @@ -56,6 +56,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
>   * Get access type for a gfn.
>   * If gfn == INVALID_GFN, gets the default access type.
>   */
> -int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access);
> +int p2m_get_mem_access(struct domain *d, gfn_t gfn, unsigned long altp2m_idx,
> +                       xenmem_access_t *access);

Same question as for patch 1 regarding the "unsigned long" here.

Jan
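[Editor's note] The validation rules in the `/* altp2m view 0 is treated as the hostp2m */` hunk that Jan quotes can be modeled as a self-contained sketch. The MAX_ALTP2M and INVALID_MFN values and the demo EPTP table below are illustrative placeholders, and the p2m pointers are reduced to an integer view id.

```c
#include <errno.h>
#include <stdint.h>

#define MAX_ALTP2M  10            /* placeholder; Xen defines its own limit */
#define INVALID_MFN (~0UL)

/*
 * Mirror of the hunk under review: index 0 selects the host p2m; any other
 * index must be below MAX_ALTP2M and refer to an initialized view (its EPTP
 * slot is not INVALID_MFN).  On success returns 0 and stores the chosen view
 * in *view (-1 meaning "host p2m"); a bad index yields -EINVAL.
 */
static int select_view(unsigned long altp2m_idx,
                       const unsigned long eptp[MAX_ALTP2M], int *view)
{
    if ( altp2m_idx )
    {
        if ( altp2m_idx >= MAX_ALTP2M || eptp[altp2m_idx] == INVALID_MFN )
            return -EINVAL;
        *view = (int)altp2m_idx;
    }
    else
        *view = -1;

    return 0;
}

/* Demo wrapper: a table where only view 2 is initialized.  Returns the
 * selected view on success, or the negative errno value on failure. */
static int check_idx(unsigned long idx)
{
    unsigned long eptp[MAX_ALTP2M];
    int i, view = 0, rc;

    for ( i = 0; i < MAX_ALTP2M; i++ )
        eptp[i] = INVALID_MFN;
    eptp[2] = 0x1000UL;           /* pretend view 2 has a valid EPTP */

    rc = select_view(idx, eptp, &view);
    return rc ? rc : view;
}
```

The sketch also makes Jan's point concrete: nothing in the selection itself needs two p2m variables; one pointer (here one out-parameter) suffices.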
On Jan 28, 2016 6:38 AM, "Jan Beulich" <JBeulich@suse.com> wrote:
> >>> On 27.01.16 at 21:06, <tlengyel@novetta.com> wrote:
> > --- a/xen/arch/x86/mm/p2m.c
> > +++ b/xen/arch/x86/mm/p2m.c
> > @@ -1572,7 +1572,9 @@ void p2m_mem_access_emulate_check(struct vcpu *v,
> >      bool_t violation = 1;
> >      const struct vm_event_mem_access *data = &rsp->u.mem_access;
> >
> > -    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
> > +    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn),
> > +                            altp2m_active(v->domain) ? vcpu_altp2m(v).p2midx : 0,
> > +                            &access) == 0 )
>
> This looks to be a behavioral change beyond what title and
> description say, and it's not clear whether that's actually the
> behavior everyone wants.

I'm fairly confident it's exactly the expected behavior when one uses
mem_access in altp2m tables together with emulation. Right now, because
this is missing, AFAIK emulation would not work correctly with altp2m.
But Razvan can probably chime in, as he uses this path actively.

> > @@ -1918,9 +1920,10 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> >   * Get access type for a gfn.
> >   * If gfn == INVALID_GFN, gets the default access type.
> >   */
> > -int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
> > +int p2m_get_mem_access(struct domain *d, gfn_t gfn, unsigned long altp2m_idx,
> > +                       xenmem_access_t *access)
> >  {
> > -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> > +    struct p2m_domain *hp2m = p2m_get_hostp2m(d), *p2m = NULL;
> >      p2m_type_t t;
> >      p2m_access_t a;
> >      mfn_t mfn;
> > @@ -1943,10 +1946,22 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
> >      /* If request to get default access. */
> >      if ( gfn_x(gfn) == INVALID_GFN )
> >      {
> > -        *access = memaccess[p2m->default_access];
> > +        *access = memaccess[hp2m->default_access];
> >          return 0;
> >      }
>
> And following the above - why would this not use the altp2m's
> default access?

The altp2m default access is currently ignored. You can only set the
default access for the hostp2m.

> > +    /* altp2m view 0 is treated as the hostp2m */
> > +    if ( altp2m_idx )
> > +    {
> > +        if ( altp2m_idx >= MAX_ALTP2M ||
> > +             d->arch.altp2m_eptp[altp2m_idx] == INVALID_MFN )
> > +            return -EINVAL;
> > +
> > +        p2m = d->arch.altp2m_p2m[altp2m_idx];
> > +    }
> > +    else
> > +        p2m = hp2m;
>
> And I don't see why you need separate p2m and hp2m local
> variables.

hp2m was also used above for returning the default access, while p2m is
the one actually active here.

> > --- a/xen/include/xen/p2m-common.h
> > +++ b/xen/include/xen/p2m-common.h
> > @@ -56,6 +56,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
> >   * Get access type for a gfn.
> >   * If gfn == INVALID_GFN, gets the default access type.
> >   */
> > -int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access);
> > +int p2m_get_mem_access(struct domain *d, gfn_t gfn, unsigned long altp2m_idx,
> > +                       xenmem_access_t *access);
>
> Same question as for patch 1 regarding the "unsigned long" here.

As stated on the other patch, it could be either for our purposes. Is
there any benefit in using int rather than long?

Tamas
>>> On 28.01.16 at 15:42, <tlengyel@novetta.com> wrote:
> On Jan 28, 2016 6:38 AM, "Jan Beulich" <JBeulich@suse.com> wrote:
>> >>> On 27.01.16 at 21:06, <tlengyel@novetta.com> wrote:
>> > --- a/xen/arch/x86/mm/p2m.c
>> > +++ b/xen/arch/x86/mm/p2m.c
>> > @@ -1572,7 +1572,9 @@ void p2m_mem_access_emulate_check(struct vcpu *v,
>> >      bool_t violation = 1;
>> >      const struct vm_event_mem_access *data = &rsp->u.mem_access;
>> >
>> > -    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
>> > +    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn),
>> > +                            altp2m_active(v->domain) ? vcpu_altp2m(v).p2midx : 0,
>> > +                            &access) == 0 )
>>
>> This looks to be a behavioral change beyond what title and
>> description say, and it's not clear whether that's actually the
>> behavior everyone wants.
>
> I'm fairly confident it's exactly the expected behavior when one uses
> mem_access in altp2m tables together with emulation. Right now, because
> this is missing, AFAIK emulation would not work correctly with altp2m.
> But Razvan can probably chime in, as he uses this path actively.

But this should then be mentioned in the description.

>> > +    /* altp2m view 0 is treated as the hostp2m */
>> > +    if ( altp2m_idx )
>> > +    {
>> > +        if ( altp2m_idx >= MAX_ALTP2M ||
>> > +             d->arch.altp2m_eptp[altp2m_idx] == INVALID_MFN )
>> > +            return -EINVAL;
>> > +
>> > +        p2m = d->arch.altp2m_p2m[altp2m_idx];
>> > +    }
>> > +    else
>> > +        p2m = hp2m;
>>
>> And I don't see why you need separate p2m and hp2m local
>> variables.
>
> It was also used above for returning the default access, while p2m is
> the one actually active here.

But all that could be done with just the one variable that was already
there.

Jan
On Jan 28, 2016 7:56 AM, "Jan Beulich" <JBeulich@suse.com> wrote:
> [...]
> > It was also used above for returning the default access, while p2m is
> > the one actually active here.
>
> But all that could be done with just the one variable that was already
> there.
>
> Jan

Sure, SGTM.

Tamas
On 01/28/2016 04:42 PM, Lengyel, Tamas wrote:
> On Jan 28, 2016 6:38 AM, "Jan Beulich" <JBeulich@suse.com> wrote:
>>
>> >>> On 27.01.16 at 21:06, <tlengyel@novetta.com> wrote:
>> > --- a/xen/arch/x86/mm/p2m.c
>> > +++ b/xen/arch/x86/mm/p2m.c
>> > @@ -1572,7 +1572,9 @@ void p2m_mem_access_emulate_check(struct vcpu *v,
>> >      bool_t violation = 1;
>> >      const struct vm_event_mem_access *data = &rsp->u.mem_access;
>> >
>> > -    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
>> > +    if ( p2m_get_mem_access(v->domain, _gfn(data->gfn),
>> > +                            altp2m_active(v->domain) ? vcpu_altp2m(v).p2midx : 0,
>> > +                            &access) == 0 )
>>
>> This looks to be a behavioral change beyond what title and
>> description say, and it's not clear whether that's actually the
>> behavior everyone wants.
>
> I'm fairly confident it's exactly the expected behavior when one uses
> mem_access in altp2m tables together with emulation. Right now, because
> this is missing, AFAIK emulation would not work correctly with altp2m.
> But Razvan can probably chime in, as he uses this path actively.

I've done an experiment to see how much slower using altp2m would be as
compared to emulation - so I'm not a big user of the feature, but I did
find it cumbersome to have to work with two sets of APIs (one for what
could arguably be called the default altp2m view, i.e. the regular
xc_set_mem_access(), and one for altp2m, i.e. xc_altp2m_set_mem_access()).
Furthermore, the APIs do not currently offer the same features (most
notably, xc_altp2m_get_mem_access() is completely missing). I've
mentioned this to Tamas while initially trying to get it to work.

Now, whether the behaviour I expect is what everyone expects is, of
course, wide open to debate. But I think we can all agree that the
altp2m interface can, and probably should, be improved.

Thanks,
Razvan
On Jan 28, 2016 8:02 AM, "Razvan Cojocaru" <rcojocaru@bitdefender.com> wrote:
> On 01/28/2016 04:42 PM, Lengyel, Tamas wrote:
>> [...]
>> I'm fairly confident it's exactly the expected behavior when one uses
>> mem_access in altp2m tables together with emulation. Right now, because
>> this is missing, AFAIK emulation would not work correctly with altp2m.
>> But Razvan can probably chime in, as he uses this path actively.
>
> I've done an experiment to see how much slower using altp2m would be as
> compared to emulation - so I'm not a big user of the feature, but I did
> find it cumbersome to have to work with two sets of APIs (one for what
> could arguably be called the default altp2m view, i.e. the regular
> xc_set_mem_access(), and one for altp2m, i.e. xc_altp2m_set_mem_access()).
> Furthermore, the APIs do not currently offer the same features (most
> notably, xc_altp2m_get_mem_access() is completely missing). I've
> mentioned this to Tamas while initially trying to get it to work.
>
> Now, whether the behaviour I expect is what everyone expects is, of
> course, wide open to debate. But I think we can all agree that the
> altp2m interface can, and probably should, be improved.

There is that, but also, what is the exact logic behind doing this check
before emulation? AFAIU emulation happens in response to a vm_event, so
we should be fairly certain that this check succeeds, as it just verifies
that the permissions are indeed restricted by mem_access in the p2m (and
with altp2m this should be the active one). But when is this check
normally expected to fail?

Tamas
On 01/28/2016 05:12 PM, Lengyel, Tamas wrote:
> On Jan 28, 2016 8:02 AM, "Razvan Cojocaru" <rcojocaru@bitdefender.com> wrote:
>> [...]
>> Now, whether the behaviour I expect is what everyone expects is, of
>> course, wide open to debate. But I think we can all agree that the
>> altp2m interface can, and probably should, be improved.
>
> There is that, but also, what is the exact logic behind doing this check
> before emulation? AFAIU emulation happens in response to a vm_event, so
> we should be fairly certain that this check succeeds, as it just verifies
> that the permissions are indeed restricted by mem_access in the p2m (and
> with altp2m this should be the active one). But when is this check
> normally expected to fail?

That check is important, please do not remove it. A vm_event is sent
into userspace to our monitoring application, but the monitoring
application can actually remove the page restrictions before replying,
so in that case emulation is pointless - there will be no more page
faults for that instruction.

Thanks,
Razvan
On Thu, Jan 28, 2016 at 8:20 AM, Razvan Cojocaru <rcojocaru@bitdefender.com> wrote:
> On 01/28/2016 05:12 PM, Lengyel, Tamas wrote:
>> [...]
>> There is that, but also, what is the exact logic behind doing this check
>> before emulation? AFAIU emulation happens in response to a vm_event, so
>> we should be fairly certain that this check succeeds, as it just verifies
>> that the permissions are indeed restricted by mem_access in the p2m (and
>> with altp2m this should be the active one). But when is this check
>> normally expected to fail?
>
> That check is important, please do not remove it. A vm_event is sent
> into userspace to our monitoring application, but the monitoring
> application can actually remove the page restrictions before replying,
> so in that case emulation is pointless - there will be no more page
> faults for that instruction.

I see, but then why would you reply with VM_EVENT_FLAG_EMULATE? You know
you removed the permission before sending the reply, so this sounds like
something specific to your application.

Tamas
On 01/28/2016 05:58 PM, Lengyel, Tamas wrote:
> On Thu, Jan 28, 2016 at 8:20 AM, Razvan Cojocaru <rcojocaru@bitdefender.com> wrote:
>> [...]
>> That check is important, please do not remove it. A vm_event is sent
>> into userspace to our monitoring application, but the monitoring
>> application can actually remove the page restrictions before replying,
>> so in that case emulation is pointless - there will be no more page
>> faults for that instruction.
>
> I see, but then why would you reply with VM_EVENT_FLAG_EMULATE? You know
> you removed the permission before sending the reply, so this sounds like
> something specific to your application.

It's cheap insurance that things go right. If there's some issue with
page rights, or some external tool somehow does an xc_set_mem_access(),
things won't go wrong. And they will go wrong if Xen thinks it should
emulate the next instruction and the next instruction is not the one
that has caused the original fault. I would think that benefits any
application.

Thanks,
Razvan
On Thu, Jan 28, 2016 at 9:32 AM, Razvan Cojocaru <rcojocaru@bitdefender.com> wrote: > On 01/28/2016 05:58 PM, Lengyel, Tamas wrote: > > > > > > On Thu, Jan 28, 2016 at 8:20 AM, Razvan Cojocaru > > <rcojocaru@bitdefender.com <mailto:rcojocaru@bitdefender.com>> wrote: > > > > On 01/28/2016 05:12 PM, Lengyel, Tamas wrote: > > > > > > On Jan 28, 2016 8:02 AM, "Razvan Cojocaru" < > rcojocaru@bitdefender.com <mailto:rcojocaru@bitdefender.com> > > > <mailto:rcojocaru@bitdefender.com <mailto: > rcojocaru@bitdefender.com>>> wrote: > > >> > > >> On 01/28/2016 04:42 PM, Lengyel, Tamas wrote: > > >> > > > >> > On Jan 28, 2016 6:38 AM, "Jan Beulich" <JBeulich@suse.com > <mailto:JBeulich@suse.com> > > > <mailto:JBeulich@suse.com <mailto:JBeulich@suse.com>> > > >> > <mailto:JBeulich@suse.com <mailto:JBeulich@suse.com> > > <mailto:JBeulich@suse.com <mailto:JBeulich@suse.com>>>> wrote: > > >> >> > > >> >> >>> On 27.01.16 at 21:06, <tlengyel@novetta.com <mailto: > tlengyel@novetta.com> > > > <mailto:tlengyel@novetta.com <mailto:tlengyel@novetta.com>> > > >> > <mailto:tlengyel@novetta.com <mailto:tlengyel@novetta.com> > > <mailto:tlengyel@novetta.com <mailto:tlengyel@novetta.com>>>> wrote: > > >> >> > --- a/xen/arch/x86/mm/p2m.c > > >> >> > +++ b/xen/arch/x86/mm/p2m.c > > >> >> > @@ -1572,7 +1572,9 @@ void > p2m_mem_access_emulate_check(struct > > > vcpu *v, > > >> >> > bool_t violation = 1; > > >> >> > const struct vm_event_mem_access *data = > > &rsp->u.mem_access; > > >> >> > > > >> >> > - if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), > > >> > &access) == 0 ) > > >> >> > + if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), > > >> >> > + altp2m_active(v->domain) ? > > >> > vcpu_altp2m(v).p2midx : 0, > > >> >> > + &access) == 0 ) > > >> >> > > >> >> This looks to be a behavioral change beyond what title and > > >> >> description say, and it's not clear whether that's actually the > > >> >> behavior everyone wants. 
> > >> > > > >> > I'm fairly comfident its exactly the expected behavior when one > > uses > > >> > mem_access in altp2m tables and emulation. Right now because > > the lack of > > >> > this AFAIK emulation would not work correctly with altp2m. But > > Razvan > > >> > probably can chime in as he uses this path actively. > > >> > > >> I've done an experiment to see how much slower using altp2m would > > be as > > >> compared to emulation - so I'm not a big user of the feature, but > > I did > > >> find it cumbersome to have to work with two sets of APIs (one for > > what > > >> could arguably be called the default altp2m view, i.e. the regular > > >> xc_set_mem_access(), and one for altp2m, i.e. > > >> xc_altp2m_set_mem_access()). Furthermore, the APIs do not > currently > > >> offer the same features (most notably, xc_altp2m_get_mem_access() > is > > >> completely missing). I've mentioned this to Tamas while initially > > trying > > >> to get it to work. > > >> > > >> Now, whether the behaviour I expect is what everyone expects is, > of > > >> course, wide open to debate. But I think we can all agree that the > > >> altp2m interface can, and probably should, be improved. > > >> > > > > > > There is that, but also, what is the exact logic behind doing this > > check > > > before emulation? AFAIU emulation happens in response to a > vm_event so > > > we should be fairly certain that this check succeeds as it just > > verifies > > > that indeed the permissions are restricted by mem_access in the > > p2m (and > > > with altp2m this should be the active one). But when is this check > > > normally expected to fail? > > > > That check is important, please do not remove it. A vm_event is sent > > into userspace to our monitoring application, but the monitoring > > application can actually remove the page restrictions before > replying, > > so in that case emulation is pointless - there will be no more page > > faults for that instruction. 
> > I see, but then why would you reply with VM_EVENT_FLAG_EMULATE? You
> > know you removed the permission before sending the reply, so this
> > sounds like something specific to your application.
>
> It's cheap insurance that things go right. If there's some issue with
> page rights, or some external tool somehow does an xc_set_mem_access(),
> things won't go wrong.

I can see this working for your application if you don't cache the
mem_access permissions locally and you don't want to query for them
before deciding whether to send the emulate flag in the response.
Although, I think that would be the best way to go here.

> And they will go wrong if Xen thinks it should
> emulate the next instruction and the next instruction is not the one
> that has caused the original fault.

How could that happen? When the vCPU is resumed after the fault, isn't
the same instruction guaranteed to be retried?

> I would think that benefits any
> application.

It's just a bit of an obscure exception. From an API perspective I would
rather have Xen do what I tell it to do - in this case emulate - rather
than have it do something else silently behind the scenes that you really
only find out about if you read the code.

Tamas
On 01/28/2016 06:40 PM, Lengyel, Tamas wrote:
> On Thu, Jan 28, 2016 at 9:32 AM, Razvan Cojocaru
> <rcojocaru@bitdefender.com> wrote:
> > [snip]
> >
> > It's cheap insurance that things go right. If there's some issue with
> > page rights, or some external tool somehow does an xc_set_mem_access(),
> > things won't go wrong.
>
> I can see this working for your application if you don't cache the
> mem_access permissions locally and you don't want to query for them
> before deciding whether to send the emulate flag in the response.
> Although, I think that would be the best way to go here.

Querying is out of the question, for obvious performance reasons. That's
why we've cached the registers in the vm_event request - we could have
not done that and instead just queried them via libxc. But one small
decision like that and the monitored guest is running twice as slow.
This way, you can just set the emulate flag and have the hypervisor do
the right thing anyway, with no extra userspace <-> hypervisor roundtrips.

Caching might work, but then again that's extra work, and memory used in
the application (in _each_ application, not just ours).
So on one hand, we have the current scenario, where things can't go wrong
and the solution is in one place, vs. the other scenario, where each
application needs to solve the problem by doing the tracking / caching /
querying that the HV does anyway in p2m, and pay with a possible guest
crash or freeze for failure.

> > And they will go wrong if Xen thinks it should
> > emulate the next instruction and the next instruction is not the one
> > that has caused the original fault.
>
> How could that happen? When the vCPU is resumed after the fault, isn't
> the same instruction guaranteed to be retried?

The instruction is the same, but if the page restrictions have been
lifted (somehow) and the EMULATE flag is still set, the original
instruction will run normally (because it won't trigger another page
fault). But the HV will still think that it needs to emulate the next
page fault, and so it will emulate whatever instruction causes the next
page fault (if it matches the emulate conditions).

> > I would think that benefits any
> > application.
>
> It's just a bit of an obscure exception. From an API perspective I would
> rather have Xen do what I tell it to do - in this case emulate - rather
> than have it do something else silently behind the scenes that you really
> only find out about if you read the code.

But the way the emulation code works now, it _can't_ emulate (see above
explanation). Emulation currently only happens as a result of a page
fault, and there will be no page fault if the page restrictions are
lifted. I am thinking about a better way to achieve this, but until then
I think it's a good idea to keep the check in.

I hope I've been able to shed more light on this.

Thanks,
Razvan
On Thu, Jan 28, 2016 at 10:04 AM, Razvan Cojocaru <rcojocaru@bitdefender.com> wrote:
> On 01/28/2016 06:40 PM, Lengyel, Tamas wrote:
> > [snip]
>
> The instruction is the same, but if the page restrictions have been
> lifted (somehow) and the EMULATE flag is still set, the original
> instruction will run normally (because it won't trigger another page
> fault). But the HV will still think that it needs to emulate the next
> page fault, and so it will emulate whatever instruction causes the next
> page fault (if it matches the emulate conditions).
>
> But the way the emulation code works now, it _can't_ emulate (see above
> explanation). Emulation currently only happens as a result of a page
> fault, and there will be no page fault if the page restrictions are
> lifted. I am thinking about a better way to achieve this, but until then
> I think it's a good idea to keep the check in.
>
> I hope I've been able to shed more light on this.

Sure, makes sense. Since AFAIK you guys are the only ones really using
this path, I'm cool with keeping it as it is; I was really just wondering
about the logic behind it. Without a reference implementation using this
path it's not exactly trivial to figure out why things are the way they
are.

Jan, with the explanation above by Razvan: when using emulation with
altp2m, the correct check here is to see whether the altp2m permissions
are still restricted; otherwise there is no need to emulate. So this
patch actually makes the two systems work together correctly. Without
this patch only the hostp2m permissions are checked, which may not have
the restrictions that actually caused the fault, leading to infinite
faults and hanging the VM.

Tamas
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index b4e57d8..09d9f62 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2065,7 +2065,8 @@ int xc_set_mem_access(xc_interface *xch, domid_t domain_id,
  * Gets the mem access for the given page (returned in access on success)
  */
 int xc_get_mem_access(xc_interface *xch, domid_t domain_id,
-                      uint64_t pfn, xenmem_access_t *access);
+                      uint64_t pfn, uint16_t altp2m_idx,
+                      xenmem_access_t *access);
 
 /*
  * Instructions causing a mem_access violation can be emulated by Xen
diff --git a/tools/libxc/xc_mem_access.c b/tools/libxc/xc_mem_access.c
index d6fb409..a44865d 100644
--- a/tools/libxc/xc_mem_access.c
+++ b/tools/libxc/xc_mem_access.c
@@ -46,14 +46,16 @@ int xc_set_mem_access(xc_interface *xch,
 int xc_get_mem_access(xc_interface *xch,
                       domid_t domain_id,
                       uint64_t pfn,
+                      uint16_t altp2m_idx,
                       xenmem_access_t *access)
 {
     int rc;
     xen_mem_access_op_t mao =
     {
-        .op    = XENMEM_access_op_get_access,
-        .domid = domain_id,
-        .pfn   = pfn
+        .op         = XENMEM_access_op_get_access,
+        .domid      = domain_id,
+        .pfn        = pfn,
+        .altp2m_idx = altp2m_idx
     };
 
     rc = do_memory_op(xch, XENMEM_access_op, &mao, sizeof(mao));
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 2300e9a..d9dda62 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -571,7 +571,10 @@ int main(int argc, char *argv[])
             switch (req.reason) {
             case VM_EVENT_REASON_MEM_ACCESS:
-                rc = xc_get_mem_access(xch, domain_id, req.u.mem_access.gfn, &access);
+                rc = xc_get_mem_access(xch, domain_id, req.u.mem_access.gfn,
+                                       ((req.flags & VM_EVENT_FLAG_ALTERNATE_P2M) ?
+                                        req.altp2m_idx : 0),
+                                       &access
+                                       );
                 if (rc < 0)
                 {
                     ERROR("Error %d getting mem_access event\n", rc);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8e9b4be..932c6e2 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1666,7 +1666,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
     if ( !p2m->mem_access_enabled )
         return true;
 
-    rc = p2m_get_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), &xma);
+    rc = p2m_get_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), 0, &xma);
     if ( rc )
         return true;
 
@@ -1847,7 +1847,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
     return 0;
 }
 
-int p2m_get_mem_access(struct domain *d, gfn_t gfn,
+int p2m_get_mem_access(struct domain *d, gfn_t gfn, unsigned long altp2m_idx,
                        xenmem_access_t *access)
 {
     int ret;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 95bf7ce..18068e8 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1572,7 +1572,9 @@ void p2m_mem_access_emulate_check(struct vcpu *v,
         bool_t violation = 1;
         const struct vm_event_mem_access *data = &rsp->u.mem_access;
 
-        if ( p2m_get_mem_access(v->domain, _gfn(data->gfn), &access) == 0 )
+        if ( p2m_get_mem_access(v->domain, _gfn(data->gfn),
+                                altp2m_active(v->domain) ? vcpu_altp2m(v).p2midx : 0,
+                                &access) == 0 )
         {
             switch ( access )
             {
@@ -1918,9 +1920,10 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
  * Get access type for a gfn.
  * If gfn == INVALID_GFN, gets the default access type.
  */
-int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
+int p2m_get_mem_access(struct domain *d, gfn_t gfn, unsigned long altp2m_idx,
+                       xenmem_access_t *access)
 {
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct p2m_domain *hp2m = p2m_get_hostp2m(d), *p2m = NULL;
     p2m_type_t t;
     p2m_access_t a;
     mfn_t mfn;
@@ -1943,10 +1946,22 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access)
     /* If request to get default access. */
     if ( gfn_x(gfn) == INVALID_GFN )
     {
-        *access = memaccess[p2m->default_access];
+        *access = memaccess[hp2m->default_access];
         return 0;
     }
 
+    /* altp2m view 0 is treated as the hostp2m */
+    if ( altp2m_idx )
+    {
+        if ( altp2m_idx >= MAX_ALTP2M ||
+             d->arch.altp2m_eptp[altp2m_idx] == INVALID_MFN )
+            return -EINVAL;
+
+        p2m = d->arch.altp2m_p2m[altp2m_idx];
+    }
+    else
+        p2m = hp2m;
+
     gfn_lock(p2m, gfn, 0);
     mfn = p2m->get_entry(p2m, gfn_x(gfn), &t, &a, 0, NULL, NULL);
     gfn_unlock(p2m, gfn, 0);
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 0411443..a0345a6 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -88,7 +88,7 @@ int mem_access_memop(unsigned long cmd,
         if ( (mao.pfn > domain_get_maximum_gpfn(d)) && mao.pfn != ~0ull )
             break;
 
-        rc = p2m_get_mem_access(d, _gfn(mao.pfn), &access);
+        rc = p2m_get_mem_access(d, _gfn(mao.pfn), mao.altp2m_idx, &access);
         if ( rc != 0 )
             break;
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 0643ad5..2835f24 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -56,6 +56,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
  * Get access type for a gfn.
  * If gfn == INVALID_GFN, gets the default access type.
  */
-int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access);
+int p2m_get_mem_access(struct domain *d, gfn_t gfn, unsigned long altp2m_idx,
+                       xenmem_access_t *access);
 
 #endif /* _XEN_P2M_COMMON_H */
Extend the existing get_mem_access memop to allow querying permissions in
altp2m views as well.

Signed-off-by: Tamas K Lengyel <tlengyel@novetta.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
 tools/libxc/include/xenctrl.h       |  3 ++-
 tools/libxc/xc_mem_access.c         |  8 +++++---
 tools/tests/xen-access/xen-access.c |  5 ++++-
 xen/arch/arm/p2m.c                  |  4 ++--
 xen/arch/x86/mm/p2m.c               | 23 +++++++++++++++++++----
 xen/common/mem_access.c             |  2 +-
 xen/include/xen/p2m-common.h        |  3 ++-
 7 files changed, 35 insertions(+), 13 deletions(-)