| Message ID | 1473785165-11137-2-git-send-email-paul.c.lai@intel.com (mailing list archive) |
|---|---|
| State | New, archived |
>>> On 13.09.16 at 18:46, <paul.c.lai@intel.com> wrote:
> Indent goto labels by one space.
> Inline (header) altp2m functions.
> In do_altp2m_op(), during the sanity check of the passed command,
> return -ENOSYS if not a valid command.
> In do_altp2m_op(), when evaluating a command, ASSERT_UNREACHABLE()
> if the command is not recognizable. The sanity check above should
> have triggered the return of -ENOSYS.
> Make altp2m_vcpu_emulate_ve() return actual bool_t (rather than return
> void()).
>
> Signed-off-by: Paul Lai <paul.c.lai@intel.com>
> ---

It is very disappointing to find that there is still no information here
on what changed from the previous version.

> @@ -5349,6 +5362,8 @@ static int do_altp2m_op(
>          rc = p2m_change_altp2m_gfn(d, a.u.change_gfn.view,
>                                     _gfn(a.u.change_gfn.old_gfn),
>                                     _gfn(a.u.change_gfn.new_gfn));
> +    default:
> +        ASSERT_UNREACHABLE();
>      }

And it is even worse that the bug pointed out here is still present.

> /* emulates #VE */
> -bool_t altp2m_vcpu_emulate_ve(struct vcpu *v);
> +static inline bool_t altp2m_vcpu_emulate_ve(struct vcpu *v)

Nor did you switch to plain bool here, as was requested during both
v3 and v4 review.

Please do not resubmit this series until you have taken care of
_all_ review comments you've received so far. As with the
previous version I'm not going to spend time looking at the other
two patches due to this fundamental requirement, despite
having been pointed out before, not being met.

Jan
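For context, the bug Jan is pointing at is a missing `break;`: as posted, the final `HVMOP_altp2m_change_gfn` arm falls straight through into the new `default:` label, so a perfectly valid request would trip `ASSERT_UNREACHABLE()`. A minimal sketch of how that arm would read with the fallthrough closed (illustrative only; the `break;` is not part of the posted patch):

```c
        rc = p2m_change_altp2m_gfn(d, a.u.change_gfn.view,
                                   _gfn(a.u.change_gfn.old_gfn),
                                   _gfn(a.u.change_gfn.new_gfn));
        break;   /* without this, control falls through into default: */

    default:
        ASSERT_UNREACHABLE();
    }
```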
On Mon, Sep 26, 2016 at 10:01:50AM -0600, Jan Beulich wrote:
> >>> On 13.09.16 at 18:46, <paul.c.lai@intel.com> wrote:
> > Indent goto labels by one space.
> > Inline (header) altp2m functions.
> > In do_altp2m_op(), during the sanity check of the passed command,
> > return -ENOSYS if not a valid command.
> > In do_altp2m_op(), when evaluating a command, ASSERT_UNREACHABLE()
> > if the command is not recognizable. The sanity check above should
> > have triggered the return of -ENOSYS.
> > Make altp2m_vcpu_emulate_ve() return actual bool_t (rather than return
> > void()).
> >
> > Signed-off-by: Paul Lai <paul.c.lai@intel.com>
> > ---
>
> It is very disappointing to find that there is still no information here
> on what changed from the previous version.
>
> > @@ -5349,6 +5362,8 @@ static int do_altp2m_op(
> >          rc = p2m_change_altp2m_gfn(d, a.u.change_gfn.view,
> >                                     _gfn(a.u.change_gfn.old_gfn),
> >                                     _gfn(a.u.change_gfn.new_gfn));
> > +    default:
> > +        ASSERT_UNREACHABLE();
> >      }
>
> And it is even worse that the bug pointed out here is still present.

I reread the comment in
https://lists.xenproject.org/archives/html/xen-devel/2016-09/msg00937.html
and missed the "unintended fallthrough". Will correct.

> > /* emulates #VE */
> > -bool_t altp2m_vcpu_emulate_ve(struct vcpu *v);
> > +static inline bool_t altp2m_vcpu_emulate_ve(struct vcpu *v)
>
> Nor did you switch to plain bool here, as was requested during both
> v3 and v4 review.

I did not know "bool" existed, and thought "plain bool" was "bool_t".
I have looked around and see that "bool" is better defined ({0,1}
instead of {0, !0}) and hopefully understand your motivation for
using/requiring it.

> Please do not resubmit this series until you have taken care of
> _all_ review comments you've received so far. As with the
> previous version I'm not going to spend time looking at the other
> two patches due to this fundamental requirement, despite
> having been pointed out before, not being met.
>
> Jan
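For reference, a minimal sketch of the header helper after the requested switch to plain `bool`, assuming the underlying `hvm_funcs.altp2m_vcpu_emulate_ve` hook is left unchanged and its return value is propagated the way the removed hvm.c implementation did:

```c
/* emulates #VE */
static inline bool altp2m_vcpu_emulate_ve(struct vcpu *v)
{
    if ( hvm_funcs.altp2m_vcpu_emulate_ve )
        return hvm_funcs.altp2m_vcpu_emulate_ve(v);

    return false;
}
```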
```diff
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 2c89984..1bf2d01 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1932,11 +1932,11 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
      * Otherwise, this is an error condition. */
     rc = fall_through;
-out_put_gfn:
+ out_put_gfn:
     __put_gfn(p2m, gfn);
     if ( ap2m_active )
         __put_gfn(hostp2m, gfn);
-out:
+ out:
     /* All of these are delayed until we exit, since we might
      * sleep on event ring wait queues, and we must not hold
      * locks in such circumstance */
@@ -5227,12 +5227,25 @@ static int do_altp2m_op(
         return -EFAULT;
 
     if ( a.pad1 || a.pad2 ||
-         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
-         (a.cmd < HVMOP_altp2m_get_domain_state) ||
-         (a.cmd > HVMOP_altp2m_change_gfn) )
+         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) )
         return -EINVAL;
 
-    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
+    switch ( a.cmd )
+    {
+    case HVMOP_altp2m_get_domain_state:
+    case HVMOP_altp2m_set_domain_state:
+    case HVMOP_altp2m_vcpu_enable_notify:
+    case HVMOP_altp2m_create_p2m:
+    case HVMOP_altp2m_destroy_p2m:
+    case HVMOP_altp2m_switch_p2m:
+    case HVMOP_altp2m_set_mem_access:
+    case HVMOP_altp2m_change_gfn:
+        break;
+    default:
+        return -ENOSYS;
+    }
+
+    d = ( a.cmd != HVMOP_altp2m_vcpu_enable_notify ) ?
         rcu_lock_domain_by_any_id(a.domain) : rcu_lock_current_domain();
 
     if ( d == NULL )
@@ -5349,6 +5362,8 @@ static int do_altp2m_op(
         rc = p2m_change_altp2m_gfn(d, a.u.change_gfn.view,
                                    _gfn(a.u.change_gfn.old_gfn),
                                    _gfn(a.u.change_gfn.new_gfn));
+    default:
+        ASSERT_UNREACHABLE();
     }
 
  out:
@@ -5873,25 +5888,6 @@ void hvm_toggle_singlestep(struct vcpu *v)
     v->arch.hvm_vcpu.single_step = !v->arch.hvm_vcpu.single_step;
 }
 
-void altp2m_vcpu_update_p2m(struct vcpu *v)
-{
-    if ( hvm_funcs.altp2m_vcpu_update_p2m )
-        hvm_funcs.altp2m_vcpu_update_p2m(v);
-}
-
-void altp2m_vcpu_update_vmfunc_ve(struct vcpu *v)
-{
-    if ( hvm_funcs.altp2m_vcpu_update_vmfunc_ve )
-        hvm_funcs.altp2m_vcpu_update_vmfunc_ve(v);
-}
-
-bool_t altp2m_vcpu_emulate_ve(struct vcpu *v)
-{
-    if ( hvm_funcs.altp2m_vcpu_emulate_ve )
-        return hvm_funcs.altp2m_vcpu_emulate_ve(v);
-    return 0;
-}
-
 int hvm_set_mode(struct vcpu *v, int mode)
 {
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 5d463e0..f3c996d 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -589,13 +589,29 @@ static inline bool_t hvm_altp2m_supported(void)
 }
 
 /* updates the current hardware p2m */
-void altp2m_vcpu_update_p2m(struct vcpu *v);
+static inline void altp2m_vcpu_update_p2m(struct vcpu *v)
+{
+    if ( hvm_funcs.altp2m_vcpu_update_p2m )
+        hvm_funcs.altp2m_vcpu_update_p2m(v);
+}
 
 /* updates VMCS fields related to VMFUNC and #VE */
-void altp2m_vcpu_update_vmfunc_ve(struct vcpu *v);
+static inline void altp2m_vcpu_update_vmfunc_ve(struct vcpu *v)
+{
+    if ( hvm_funcs.altp2m_vcpu_update_vmfunc_ve )
+        hvm_funcs.altp2m_vcpu_update_vmfunc_ve(v);
+}
 
 /* emulates #VE */
-bool_t altp2m_vcpu_emulate_ve(struct vcpu *v);
+static inline bool_t altp2m_vcpu_emulate_ve(struct vcpu *v)
+{
+    if ( hvm_funcs.altp2m_vcpu_emulate_ve )
+    {
+        hvm_funcs.altp2m_vcpu_emulate_ve(v);
+        return true;
+    }
+    return false;
+}
 
 /* Check CR4/EFER values */
 const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
```
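One practical effect of the header change is that the NULL-hook guard now lives in exactly one place, so call sites can invoke the wrappers unconditionally. A hypothetical, simplified call site (the function name and fallback policy below are illustrative assumptions, not code from this patch):

```c
/* Illustrative only: prefer injecting a virtualized #VE, else fall back. */
static void example_handle_altp2m_violation(struct vcpu *v)
{
    /* The inline wrapper already tests hvm_funcs.altp2m_vcpu_emulate_ve,
     * so no separate capability check is needed here. */
    if ( altp2m_vcpu_emulate_ve(v) )
        return;   /* the guest will receive #VE */

    /* Otherwise apply whatever fallback the caller's policy dictates. */
}
```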
```
Indent goto labels by one space.
Inline (header) altp2m functions.
In do_altp2m_op(), during the sanity check of the passed command,
return -ENOSYS if not a valid command.
In do_altp2m_op(), when evaluating a command, ASSERT_UNREACHABLE()
if the command is not recognizable. The sanity check above should
have triggered the return of -ENOSYS.
Make altp2m_vcpu_emulate_ve() return an actual bool_t (rather than
return void).

Signed-off-by: Paul Lai <paul.c.lai@intel.com>
---
 xen/arch/x86/hvm/hvm.c        | 46 ++++++++++++++++++++-----------------------
 xen/include/asm-x86/hvm/hvm.h | 22 ++++++++++++++++++---
 2 files changed, 40 insertions(+), 28 deletions(-)
```