Message ID | c1ddb1fb-70c3-4ca4-a2cc-acdba9c9a035@suse.com (mailing list archive) |
---|---|
State | New |
Series | x86/HVM: drop stdvga caching mode |
On 11/09/2024 1:27 pm, Jan Beulich wrote:
> While ->count will only be different from 1 for "indirect" (data in
> guest memory) accesses, it being 1 does not exclude the request being an
> "indirect" one. Check both to be on the safe side, and bring the ->count
> part also in line with what ioreq_send_buffered() actually refuses to
> handle.
>
> Fixes: 3bbaaec09b1b ("x86/hvm: unify stdvga mmio intercept with standard mmio intercept")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -498,13 +498,13 @@ static bool cf_check stdvga_mem_accept(
 
     spin_lock(&s->lock);
 
-    if ( p->dir == IOREQ_WRITE && p->count > 1 )
+    if ( p->dir == IOREQ_WRITE && (p->data_is_ptr || p->count != 1) )
     {
         /*
          * We cannot return X86EMUL_UNHANDLEABLE on anything other then the
          * first cycle of an I/O. So, since we cannot guarantee to always be
          * able to send buffered writes, we have to reject any multi-cycle
-         * I/O.
+         * or "indirect" I/O.
          */
         goto reject;
     }
While ->count will only be different from 1 for "indirect" (data in guest memory) accesses, it being 1 does not exclude the request being an "indirect" one. Check both to be on the safe side, and bring the ->count part also in line with what ioreq_send_buffered() actually refuses to handle.

Fixes: 3bbaaec09b1b ("x86/hvm: unify stdvga mmio intercept with standard mmio intercept")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.
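For illustration only, here is a minimal standalone sketch (not the Xen code) of the point the patch makes: a request with count == 1 can still be "indirect", so an accept check that only looks at the count can let through writes that cannot later be buffered. The struct and helper names (demo_ioreq, can_buffer, accept_write) are hypothetical; the fields are modeled on Xen's ioreq_t layout.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the relevant ioreq_t fields. */
struct demo_ioreq {
    uint8_t  dir;          /* 0 = read, 1 = write */
    uint8_t  data_is_ptr;  /* 1 = "indirect": data lives in guest memory */
    uint32_t count;        /* number of repetitions (rep prefix) */
};

#define IOREQ_WRITE 1

/*
 * Models the constraint the commit message attributes to
 * ioreq_send_buffered(): only single, direct data cycles can be buffered.
 */
static bool can_buffer(const struct demo_ioreq *p)
{
    return !p->data_is_ptr && p->count == 1;
}

/*
 * Models the accept hook: it may only refuse an access on its first
 * cycle, so it must reject anything it might later fail to buffer.
 */
static bool accept_write(const struct demo_ioreq *p)
{
    if ( p->dir == IOREQ_WRITE && (p->data_is_ptr || p->count != 1) )
        return false;  /* reject */
    return true;
}

int main(void)
{
    /* count == 1 but indirect: a "count > 1" check alone would accept this. */
    struct demo_ioreq indirect = { IOREQ_WRITE, 1, 1 };

    printf("bufferable: %d, accepted: %d\n",
           can_buffer(&indirect), accept_write(&indirect));
    return 0;
}
```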