xen-hvm: Allow disabling buffer_io_timer

Message ID 20211210193434.75566-1-jandryuk@gmail.com
State New, archived
Series xen-hvm: Allow disabling buffer_io_timer

Commit Message

Jason Andryuk Dec. 10, 2021, 7:34 p.m. UTC
commit f37f29d31488 "xen: slightly simplify bufioreq handling"
hard-coded req.count = 1 during initial field setup, before the main
loop.  This missed a subtlety: an early exit from the loop, taken when
there are no ioreqs to process, used to leave req.count == 0 as the
return value, and handle_buffered_io() would then remove
state->buffered_io_timer.  Instead, handle_buffered_iopage() now
effectively always returns true and handle_buffered_io() always
re-arms the timer.

Restore the disabling of the timer by introducing a new handled_ioreq
boolean and using it as the return value.  The named variable shows
the intent of the code more clearly.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 hw/i386/xen/xen-hvm.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
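
For context, the pre-patch shape of handle_buffered_iopage() is
roughly as follows (an abridged sketch, not verbatim source; field
setup and the loop body are elided).  Because req.count is assigned
once, before the loop, the early-exit path can no longer return 0:

static int handle_buffered_iopage(XenIOState *state)
{
    ioreq_t req;
    ...
    req.count = 1;  /* hard-coded during initial field setup */

    for (;;) {
        ...
        if (rdptr == wrptr) {
            break;  /* no buffered ioreqs pending; req.count is still 1 */
        }
        ...         /* decode and handle one buffered ioreq */
    }

    return req.count;  /* never 0, so the caller never disables the timer */
}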

Comments

Paul Durrant Dec. 14, 2021, 1:40 p.m. UTC | #1
On 10/12/2021 11:34, Jason Andryuk wrote:
> [snip]
>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Paul Durrant <paul@xen.org>
Jason Andryuk Jan. 26, 2022, 1:43 p.m. UTC | #2
On Tue, Dec 14, 2021 at 8:40 AM Durrant, Paul <xadimgnik@gmail.com> wrote:
>
> On 10/12/2021 11:34, Jason Andryuk wrote:
> > [snip]
> >
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
>
> Reviewed-by: Paul Durrant <paul@xen.org>

Thanks, Paul.

What is the next step for getting this into QEMU?

To restate more plainly: this patch fixes a bug, letting QEMU go idle
for longer stretches of time.  Without it, buffer_io_timer continues
to re-arm and fire every 100ms even when there is nothing to do.
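
For reference, the 100ms period comes from BUFFER_IO_MAX_DELAY in
hw/i386/xen/xen-hvm.c, which (in the tree this patch targets) is
defined as:

#define BUFFER_IO_MAX_DELAY  100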

Regards,
Jason
Paul Durrant Jan. 26, 2022, 1:47 p.m. UTC | #3
On 26/01/2022 13:43, Jason Andryuk wrote:
> On Tue, Dec 14, 2021 at 8:40 AM Durrant, Paul <xadimgnik@gmail.com> wrote:
>>
>> On 10/12/2021 11:34, Jason Andryuk wrote:
>>> [snip]
>>>
>>> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
>>
>> Reviewed-by: Paul Durrant <paul@xen.org>
> 
> Thanks, Paul.
> 
> What is the next step for getting this into QEMU?
> 

Anthony, can you queue this?

   Paul

> To restate more plainly: this patch fixes a bug, letting QEMU go idle
> for longer stretches of time.  Without it, buffer_io_timer continues
> to re-arm and fire every 100ms even when there is nothing to do.
> 
> Regards,
> Jason
Anthony PERARD Jan. 26, 2022, 4:43 p.m. UTC | #4
On Wed, Jan 26, 2022 at 01:47:20PM +0000, Durrant, Paul wrote:
> On 26/01/2022 13:43, Jason Andryuk wrote:
> > On Tue, Dec 14, 2021 at 8:40 AM Durrant, Paul <xadimgnik@gmail.com> wrote:
> > > 
> > > On 10/12/2021 11:34, Jason Andryuk wrote:
> > > > [snip]
> > > >
> > > > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> > > 
> > > Reviewed-by: Paul Durrant <paul@xen.org>
> > 
> > Thanks, Paul.
> > 
> > What is the next step for getting this into QEMU?
> > 
> 
> Anthony, can you queue this?

Yes, I'll send a pull request soon.

Sorry, I tend to wait a while before preparing pull requests,
especially when there's only one patch.  But there's another one now.

Cheers,

Patch

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 482be95415..cf8e500514 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -1087,10 +1087,11 @@ static void handle_ioreq(XenIOState *state, ioreq_t *req)
     }
 }
 
-static int handle_buffered_iopage(XenIOState *state)
+static bool handle_buffered_iopage(XenIOState *state)
 {
     buffered_iopage_t *buf_page = state->buffered_io_page;
     buf_ioreq_t *buf_req = NULL;
+    bool handled_ioreq = false;
     ioreq_t req;
     int qw;
 
@@ -1144,9 +1145,10 @@ static int handle_buffered_iopage(XenIOState *state)
         assert(!req.data_is_ptr);
 
         qatomic_add(&buf_page->read_pointer, qw + 1);
+        handled_ioreq = true;
     }
 
-    return req.count;
+    return handled_ioreq;
 }
 
 static void handle_buffered_io(void *opaque)
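
The hunk above ends at the caller that consumes this return value.
handle_buffered_io() itself is unchanged by the patch; a rough sketch
of its body (as found in the tree this patch targets) shows why
returning false matters: it lets the timer be deleted, so QEMU can sit
idle until the buffered-ioreq event channel fires again.

static void handle_buffered_io(void *opaque)
{
    XenIOState *state = opaque;

    if (handle_buffered_iopage(state)) {
        /* Work was done: poll again in BUFFER_IO_MAX_DELAY (100) ms. */
        timer_mod(state->buffered_io_timer,
                  BUFFER_IO_MAX_DELAY + qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
    } else {
        /* Nothing pending: stop the timer and wait for the event channel. */
        timer_del(state->buffered_io_timer);
        xenevtchn_unmask(state->xce_handle, state->bufioreq_local_port);
    }
}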