
[3/4] tcm_vhost: Fix vs->vs_endpoint checking in vhost_scsi_handle_vq()

Message ID 1363056171-5854-4-git-send-email-asias@redhat.com (mailing list archive)
State New, archived

Commit Message

Asias He March 12, 2013, 2:42 a.m. UTC
vs->vs_endpoint is protected by the vs->dev.mutex. Use
tcm_vhost_check_endpoint() to do the check. The helper does the needed
locking for us.

Signed-off-by: Asias He <asias@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
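
For reference, a helper like the one used here would look roughly as follows
(a sketch based on the commit message above, not necessarily the exact code
in tcm_vhost.c):

static bool tcm_vhost_check_endpoint(struct vhost_scsi *vs)
{
	bool ret = false;

	/* vs->vs_endpoint may only be read under vs->dev.mutex */
	mutex_lock(&vs->dev.mutex);
	if (vs->vs_endpoint)
		ret = true;
	mutex_unlock(&vs->dev.mutex);

	return ret;
}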

Comments

Michael S. Tsirkin March 12, 2013, 11:11 a.m. UTC | #1
On Tue, Mar 12, 2013 at 10:42:50AM +0800, Asias He wrote:
> vs->vs_endpoint is protected by the vs->dev.mutex. Use
> tcm_vhost_check_endpoint() to do the check. The helper does the needed
> locking for us.
> 
> Signed-off-by: Asias He <asias@redhat.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

This takes dev mutex on data path which will introduce
contention esp for multiqueue.
How about storing the endpoint as part of vq
private data and protecting with vq mutex?
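
A minimal sketch of that suggestion, assuming the endpoint is published
through vq->private_data and read under the vq mutex the handler already
takes (names and details are illustrative, rcu/sparse annotations omitted;
this is not code from this series):

static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
				 struct vhost_virtqueue *vq)
{
	void *endpoint;

	mutex_lock(&vq->mutex);
	/* Published by VHOST_SCSI_SET_ENDPOINT under this same vq mutex */
	endpoint = vq->private_data;
	if (!endpoint) {
		mutex_unlock(&vq->mutex);
		return;
	}

	/* ... process the available descriptors using endpoint ... */

	mutex_unlock(&vq->mutex);
}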

> ---
>  drivers/vhost/tcm_vhost.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> index 29612bc..61093d1 100644
> --- a/drivers/vhost/tcm_vhost.c
> +++ b/drivers/vhost/tcm_vhost.c
> @@ -594,7 +594,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
>  	u8 target;
>  
>  	/* Must use ioctl VHOST_SCSI_SET_ENDPOINT */
> -	if (unlikely(!vs->vs_endpoint))
> +	if (!tcm_vhost_check_endpoint(vs))
>  		return;
>  
>  	mutex_lock(&vq->mutex);
> -- 
> 1.8.1.4
Asias He March 13, 2013, 3:13 a.m. UTC | #2
On Tue, Mar 12, 2013 at 01:11:19PM +0200, Michael S. Tsirkin wrote:
> On Tue, Mar 12, 2013 at 10:42:50AM +0800, Asias He wrote:
> > vs->vs_endpoint is protected by the vs->dev.mutex. Use
> > tcm_vhost_check_endpoint() to do the check. The helper does the needed
> > locking for us.
> > 
> > Signed-off-by: Asias He <asias@redhat.com>
> > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> 
> This takes dev mutex on data path which will introduce
> contention esp for multiqueue.

Yes, for now it is okay, but for concurrent execution of multiqueue it is
really bad.

By the way, what is the overhead of taking and releasing the
vs->dev.mutex even if no one contends for it? Is this overhead ignorable?

> How about storing the endpoint as part of vq
> private data and protecting with vq mutex?

Hmm, this makes sense, let's see how well it works.

> > ---
> >  drivers/vhost/tcm_vhost.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> > index 29612bc..61093d1 100644
> > --- a/drivers/vhost/tcm_vhost.c
> > +++ b/drivers/vhost/tcm_vhost.c
> > @@ -594,7 +594,7 @@ static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
> >  	u8 target;
> >  
> >  	/* Must use ioctl VHOST_SCSI_SET_ENDPOINT */
> > -	if (unlikely(!vs->vs_endpoint))
> > +	if (!tcm_vhost_check_endpoint(vs))
> >  		return;
> >  
> >  	mutex_lock(&vq->mutex);
> > -- 
> > 1.8.1.4
Paolo Bonzini March 13, 2013, 8:02 a.m. UTC | #3
On 13/03/2013 04:13, Asias He wrote:
>> > This takes dev mutex on data path which will introduce
>> > contention esp for multiqueue.
> Yes, for now it is okay, but for concurrent execution of multiqueue it is
> really bad.
> 
> By the way, what is the overhead of taking and releasing the
> vs->dev.mutex even if no one contends for it? Is this overhead ignorable?

There is a possibility of cacheline ping-pong, but apart from that it's
ignorable.

>> > How about storing the endpoint as part of vq
>> > private data and protecting with vq mutex?
> 
> Hmm, this makes sense, let's see how well it works.

Then VHOST_SCSI_SET_ENDPOINT would have to go through all vqs, no?  A
rwlock seems simpler.

Paolo
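
A sketch of the rwlock alternative (the lock and helper names below are
hypothetical, added only to illustrate the idea):

static bool vhost_scsi_endpoint_set(struct vhost_scsi *vs)
{
	bool ret;

	/* vs_endpoint_lock would be an rwlock_t added to struct vhost_scsi */
	read_lock(&vs->vs_endpoint_lock);
	ret = vs->vs_endpoint;
	read_unlock(&vs->vs_endpoint_lock);

	return ret;
}

static void vhost_scsi_set_endpoint_flag(struct vhost_scsi *vs, bool set)
{
	/* writer side, called from the SET_ENDPOINT/CLEAR_ENDPOINT ioctls */
	write_lock(&vs->vs_endpoint_lock);
	vs->vs_endpoint = set;
	write_unlock(&vs->vs_endpoint_lock);
}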
Asias He March 14, 2013, 2:12 a.m. UTC | #4
On Wed, Mar 13, 2013 at 09:02:38AM +0100, Paolo Bonzini wrote:
> On 13/03/2013 04:13, Asias He wrote:
> >> > This takes dev mutex on data path which will introduce
> >> > contention esp for multiqueue.
> > Yes, for now it is okay, but for concurrent execution of multiqueue it is
> > really bad.
> > 
> > By the way, what is the overhead of taking and releasing the
> > vs->dev.mutex even if no one contends for it? Is this overhead ignorable?
> 
> There is a possibility of cacheline ping-pong, but apart from that it's
> ignorable.

Ah, thanks!

> >> > How about storing the endpoint as part of vq
> >> > private data and protecting with vq mutex?
> > 
> > Hmm, this makes sense, let's see how well it works.
> 
> Then VHOST_SCSI_SET_ENDPOINT would have to go through all vqs, no?  A
> rwlock seems simpler.

The VHOST_SCSI_SET_ENDPOINT operation is not on the data path, so it is
fine to go through all vqs; it's just a loop.

For the rwlock thing, let's discuss it on the other thread.

> Paolo
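
For reference, the "just a loop" part could look roughly like this
(VHOST_SCSI_MAX_VQ and storing the endpoint in private_data are
assumptions for illustration, not code from this patch):

static void vhost_scsi_publish_endpoint(struct vhost_scsi *vs, void *endpoint)
{
	int i;

	/* Publish the new endpoint to every vq, each under its own mutex */
	for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
		struct vhost_virtqueue *vq = &vs->vqs[i];

		mutex_lock(&vq->mutex);
		vq->private_data = endpoint;
		mutex_unlock(&vq->mutex);
	}
}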

Patch

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index 29612bc..61093d1 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -594,7 +594,7 @@  static void vhost_scsi_handle_vq(struct vhost_scsi *vs,
 	u8 target;
 
 	/* Must use ioctl VHOST_SCSI_SET_ENDPOINT */
-	if (unlikely(!vs->vs_endpoint))
+	if (!tcm_vhost_check_endpoint(vs))
 		return;
 
 	mutex_lock(&vq->mutex);