
[2/3] xen: slightly simplify bufioreq handling

Message ID 58356E680200007800121299@prv-mh.provo.novell.com (mailing list archive)
State New, archived

Commit Message

Jan Beulich Nov. 23, 2016, 9:24 a.m. UTC
There's no point setting fields that receive the same value on each
iteration, as handle_ioreq() doesn't alter them anyway. Set state and
count once ahead of the loop, drop the redundant clearing of
data_is_ptr, and avoid the meaningless setting of df altogether.

Also avoid doing the size calculation in unsigned long when the field
being initialized is only 32 bits wide (and the shift value is in the
range 0...3).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
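
[A standalone illustration of the second paragraph, not part of the
patch: with the shift limited to the range 0...3, the 32-bit 1U
computation yields the same value as the unsigned long 1UL one, so
nothing is lost by narrowing it.]

#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* Per the commit message the shift value stays in the range 0...3,
     * so the result always fits the 32-bit req.size field. */
    for (unsigned shift = 0; shift <= 3; shift++) {
        uint32_t narrow = 1U << shift;       /* what the patch switches to */
        unsigned long wide = 1UL << shift;   /* what it replaces           */
        assert(narrow == wide);              /* identical for shifts 0..3  */
    }
    return 0;
}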

Comments

Paul Durrant Nov. 23, 2016, 9:51 a.m. UTC | #1
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 23 November 2016 09:25
> To: qemu-devel@nongnu.org
> Cc: Anthony Perard <anthony.perard@citrix.com>; Paul Durrant
> <Paul.Durrant@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; xen-
> devel <xen-devel@lists.xenproject.org>
> Subject: [PATCH 2/3] xen: slightly simplify bufioreq handling
> 
> There's no point setting fields that receive the same value on each
> iteration, as handle_ioreq() doesn't alter them anyway. Set state and
> count once ahead of the loop, drop the redundant clearing of
> data_is_ptr, and avoid the meaningless setting of df altogether.
> 
> Also avoid doing the size calculation in unsigned long when the field
> being initialized is only 32 bits wide (and the shift value is in the
> range 0...3).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> 
> --- a/xen-hvm.c
> +++ b/xen-hvm.c
> @@ -995,6 +995,8 @@ static int handle_buffered_iopage(XenIOS
>      }
> 
>      memset(&req, 0x00, sizeof(req));
> +    req.state = STATE_IOREQ_READY;
> +    req.count = 1;
> 
>      for (;;) {
>          uint32_t rdptr = buf_page->read_pointer, wrptr;
> @@ -1009,15 +1011,11 @@ static int handle_buffered_iopage(XenIOS
>              break;
>          }
>          buf_req = &buf_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
> -        req.size = 1UL << buf_req->size;
> -        req.count = 1;
> +        req.size = 1U << buf_req->size;
>          req.addr = buf_req->addr;
>          req.data = buf_req->data;
> -        req.state = STATE_IOREQ_READY;
>          req.dir = buf_req->dir;
> -        req.df = 1;
>          req.type = buf_req->type;
> -        req.data_is_ptr = 0;
>          xen_rmb();
>          qw = (req.size == 8);
>          if (qw) {
> @@ -1032,6 +1030,13 @@ static int handle_buffered_iopage(XenIOS
> 
>          handle_ioreq(state, &req);
> 
> +        /* Only req.data may get updated by handle_ioreq(), albeit even that
> +         * should not happen as such data would never make it to the guest.
> +         */
> +        assert(req.state == STATE_IOREQ_READY);
> +        assert(req.count == 1);
> +        assert(!req.data_is_ptr);
> +
>          atomic_add(&buf_page->read_pointer, qw + 1);
>      }
> 
> 
>
Stefano Stabellini Nov. 23, 2016, 6:13 p.m. UTC | #2
On Wed, 23 Nov 2016, Jan Beulich wrote:
> There's no point setting fields that receive the same value on each
> iteration, as handle_ioreq() doesn't alter them anyway. Set state and
> count once ahead of the loop, drop the redundant clearing of
> data_is_ptr, and avoid the meaningless setting of df altogether.

Why is setting df meaningless?


> Also avoid doing the size calculation in unsigned long when the field
> being initialized is only 32 bits wide (and the shift value is in the
> range 0...3).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen-hvm.c
> +++ b/xen-hvm.c
> @@ -995,6 +995,8 @@ static int handle_buffered_iopage(XenIOS
>      }
>  
>      memset(&req, 0x00, sizeof(req));
> +    req.state = STATE_IOREQ_READY;
> +    req.count = 1;
>  
>      for (;;) {
>          uint32_t rdptr = buf_page->read_pointer, wrptr;
> @@ -1009,15 +1011,11 @@ static int handle_buffered_iopage(XenIOS
>              break;
>          }
>          buf_req = &buf_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
> -        req.size = 1UL << buf_req->size;
> -        req.count = 1;
> +        req.size = 1U << buf_req->size;
>          req.addr = buf_req->addr;
>          req.data = buf_req->data;
> -        req.state = STATE_IOREQ_READY;
>          req.dir = buf_req->dir;
> -        req.df = 1;
>          req.type = buf_req->type;
> -        req.data_is_ptr = 0;
>          xen_rmb();
>          qw = (req.size == 8);
>          if (qw) {
> @@ -1032,6 +1030,13 @@ static int handle_buffered_iopage(XenIOS
>  
>          handle_ioreq(state, &req);
>  
> +        /* Only req.data may get updated by handle_ioreq(), albeit even that
> +         * should not happen as such data would never make it to the guest.
> +         */
> +        assert(req.state == STATE_IOREQ_READY);
> +        assert(req.count == 1);
> +        assert(!req.data_is_ptr);
> +
>          atomic_add(&buf_page->read_pointer, qw + 1);
>      }
>  
> 
> 
>
Jan Beulich Nov. 24, 2016, 10:31 a.m. UTC | #3
>>> On 23.11.16 at 19:13, <sstabellini@kernel.org> wrote:
> On Wed, 23 Nov 2016, Jan Beulich wrote:
>> There's no point setting fields that receive the same value on each
>> iteration, as handle_ioreq() doesn't alter them anyway. Set state and
>> count once ahead of the loop, drop the redundant clearing of
>> data_is_ptr, and avoid the meaningless setting of df altogether.
> 
> Why is setting df meaningless?

With count fixed at one, there's no need to update addresses, and
hence no use for knowing which direction the updates should go.

Jan
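
[To make the answer concrete, here is a hypothetical sketch of a
rep-style dispatch loop in which df would matter; the struct and
helper names are illustrative stand-ins, not handle_ioreq()'s actual
internals. The direction flag only selects the stride between
iterations, so with count pinned to 1 it is never consulted.]

#include <stdint.h>

/* Illustrative request type; field names mirror the patch, nothing else. */
struct req_sketch {
    uint64_t addr;
    uint32_t size;
    uint32_t count;
    uint8_t  dir;   /* read or write */
    uint8_t  df;    /* "direction flag": 1 = addresses decrease */
};

void access_once(uint64_t addr, uint32_t size, uint8_t dir)
{
    (void)addr; (void)size; (void)dir;   /* stand-in for the real access */
}

void do_rep_io(const struct req_sketch *req)
{
    /* df only chooses the stride between successive iterations ... */
    int64_t stride = req->df ? -(int64_t)req->size : (int64_t)req->size;

    for (uint32_t i = 0; i < req->count; i++) {
        /* ... so with count == 1 the stride (and hence df) never has
         * any effect on the address actually accessed. */
        access_once(req->addr + (uint64_t)((int64_t)i * stride),
                    req->size, req->dir);
    }
}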

Patch

--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -995,6 +995,8 @@  static int handle_buffered_iopage(XenIOS
     }
 
     memset(&req, 0x00, sizeof(req));
+    req.state = STATE_IOREQ_READY;
+    req.count = 1;
 
     for (;;) {
         uint32_t rdptr = buf_page->read_pointer, wrptr;
@@ -1009,15 +1011,11 @@  static int handle_buffered_iopage(XenIOS
             break;
         }
         buf_req = &buf_page->buf_ioreq[rdptr % IOREQ_BUFFER_SLOT_NUM];
-        req.size = 1UL << buf_req->size;
-        req.count = 1;
+        req.size = 1U << buf_req->size;
         req.addr = buf_req->addr;
         req.data = buf_req->data;
-        req.state = STATE_IOREQ_READY;
         req.dir = buf_req->dir;
-        req.df = 1;
         req.type = buf_req->type;
-        req.data_is_ptr = 0;
         xen_rmb();
         qw = (req.size == 8);
         if (qw) {
@@ -1032,6 +1030,13 @@  static int handle_buffered_iopage(XenIOS
 
         handle_ioreq(state, &req);
 
+        /* Only req.data may get updated by handle_ioreq(), albeit even that
+         * should not happen as such data would never make it to the guest.
+         */
+        assert(req.state == STATE_IOREQ_READY);
+        assert(req.count == 1);
+        assert(!req.data_is_ptr);
+
         atomic_add(&buf_page->read_pointer, qw + 1);
     }
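
[For context on the loop the patch touches: below is a minimal sketch
of the single-consumer ring pattern handle_buffered_iopage()
implements, using C11 atomics in place of QEMU's xen_rmb() and
atomic_add() primitives. The type, field, and constant names are
illustrative stand-ins, not QEMU's or Xen's definitions.]

#include <stdatomic.h>
#include <stdint.h>

#define SLOT_NUM 511    /* stand-in for IOREQ_BUFFER_SLOT_NUM */

struct slot { uint64_t payload; };

struct buf_ring {
    _Atomic uint32_t read_pointer;   /* advanced only by this consumer */
    _Atomic uint32_t write_pointer;  /* advanced only by the producer  */
    struct slot buf[SLOT_NUM];
};

/* Drain every request the producer has published so far. */
void drain_ring(struct buf_ring *ring,
                void (*handle)(const struct slot *req, int qw))
{
    for (;;) {
        uint32_t rdptr = atomic_load_explicit(&ring->read_pointer,
                                              memory_order_relaxed);
        /* The acquire load stands in for xen_rmb(): slot contents the
         * producer wrote before bumping write_pointer are visible below. */
        uint32_t wrptr = atomic_load_explicit(&ring->write_pointer,
                                              memory_order_acquire);
        if (rdptr == wrptr) {
            break;                   /* ring is empty */
        }

        /* In the real loop qw is 1 when req.size == 8, i.e. the request
         * spans this slot and the next; here it is fixed at 0. */
        int qw = 0;
        handle(&ring->buf[rdptr % SLOT_NUM], qw);

        /* Publish consumption of qw + 1 slots back to the producer. */
        atomic_fetch_add_explicit(&ring->read_pointer, qw + 1,
                                  memory_order_release);
    }
}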