
[2/4] tcm_vhost: Introduce tcm_vhost_check_endpoint()

Message ID 1363056171-5854-3-git-send-email-asias@redhat.com (mailing list archive)
State New, archived

Commit Message

Asias He March 12, 2013, 2:42 a.m. UTC
This helper is useful to check if vs->vs_endpoint is set up by
vhost_scsi_set_endpoint().

Signed-off-by: Asias He <asias@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 drivers/vhost/tcm_vhost.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

Comments

Paolo Bonzini March 12, 2013, 8:26 a.m. UTC | #1
On 12/03/2013 03:42, Asias He wrote:
> This helper is useful to check if vs->vs_endpoint is setup by
> vhost_scsi_set_endpoint()
> 
> Signed-off-by: Asias He <asias@redhat.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  drivers/vhost/tcm_vhost.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> index b3e50d7..29612bc 100644
> --- a/drivers/vhost/tcm_vhost.c
> +++ b/drivers/vhost/tcm_vhost.c
> @@ -91,6 +91,18 @@ static int iov_num_pages(struct iovec *iov)
>  	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
>  }
>  
> +static bool tcm_vhost_check_endpoint(struct vhost_scsi *vs)
> +{
> +	bool ret = false;
> +
> +	mutex_lock(&vs->dev.mutex);
> +	if (vs->vs_endpoint)
> +		ret = true;
> +	mutex_unlock(&vs->dev.mutex);

The return value is invalid as soon as mutex_unlock is called, i.e.
before tcm_vhost_check_endpoint returns.  Instead, check vs->vs_endpoint
in the caller while the mutex is taken.

Paolo

> +	return ret;
> +}
> +
>  static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
>  {
>  	return 1;
> 
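
To make the window concrete, here is a minimal sketch (not code from the thread) of the pattern Paolo is objecting to, assuming the helper ends up being called at the top of vhost_scsi_handle_vq() as in the rest of this series:

    static void vhost_scsi_handle_vq(struct vhost_scsi *vs)
    {
            /*
             * tcm_vhost_check_endpoint() takes and drops vs->dev.mutex
             * internally, so its result is only a snapshot: between the
             * check below and the actual request handling, the ioctl path
             * (VHOST_SCSI_CLEAR_ENDPOINT) can take the mutex and clear
             * vs->vs_endpoint again.
             */
            if (!tcm_vhost_check_endpoint(vs))
                    return;

            /* ... handle the virtqueue with no guarantee that the
             * endpoint is still configured ... */
    }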

Asias He March 13, 2013, 3:02 a.m. UTC | #2
On Tue, Mar 12, 2013 at 09:26:18AM +0100, Paolo Bonzini wrote:
> On 12/03/2013 03:42, Asias He wrote:
> > This helper is useful to check if vs->vs_endpoint is setup by
> > vhost_scsi_set_endpoint()
> > 
> > Signed-off-by: Asias He <asias@redhat.com>
> > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >  drivers/vhost/tcm_vhost.c | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> > 
> > diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> > index b3e50d7..29612bc 100644
> > --- a/drivers/vhost/tcm_vhost.c
> > +++ b/drivers/vhost/tcm_vhost.c
> > @@ -91,6 +91,18 @@ static int iov_num_pages(struct iovec *iov)
> >  	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
> >  }
> >  
> > +static bool tcm_vhost_check_endpoint(struct vhost_scsi *vs)
> > +{
> > +	bool ret = false;
> > +
> > +	mutex_lock(&vs->dev.mutex);
> > +	if (vs->vs_endpoint)
> > +		ret = true;
> > +	mutex_unlock(&vs->dev.mutex);
> 
> The return value is invalid as soon as mutex_unlock is called, i.e.
> before tcm_vhost_check_endpoint returns.  Instead, check vs->vs_endpoint
> in the caller while the mutex is taken.

Do you mean 1) or 2)?

   1)
   vhost_scsi_handle_vq()
   {
   
      mutex_lock(&vs->dev.mutex);
      check vs->vs_endpoint
      mutex_unlock(&vs->dev.mutex);
   
      handle vq
   }
   
   2)
   vhost_scsi_handle_vq()
   {
   
      lock vs->dev.mutex
      check vs->vs_endpoint
      handle vq
      unlock vs->dev.mutex
   }

1) does not make any difference compared with the original one, right?

2) would be too heavy. This might not be a problem with the current
one-thread-per-vhost model, but if we want concurrent multiqueue, it
will kill us.

Anyway, the current approach is not good. I need to think about it.
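
For reference, a minimal sketch of option (2) as sketched above, with the request-handling body elided; the exact shape of vhost_scsi_handle_vq() here is an assumption, not code from this series:

    static void vhost_scsi_handle_vq(struct vhost_scsi *vs)
    {
            /*
             * Option (2): hold vs->dev.mutex across both the endpoint
             * check and the request handling, so that
             * vhost_scsi_clear_endpoint() cannot tear the endpoint
             * down in between.
             */
            mutex_lock(&vs->dev.mutex);
            if (!vs->vs_endpoint) {
                    mutex_unlock(&vs->dev.mutex);
                    return;
            }

            /* ... process the virtqueue while the endpoint is pinned ... */

            mutex_unlock(&vs->dev.mutex);
    }

This is safe but heavy in exactly the sense described above: with one vhost thread it only serializes against the set/clear-endpoint ioctls, but with concurrent multiqueue every handler would contend on the same mutex.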

> Paolo
> 
> > +	return ret;
> > +}
> > +
> >  static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
> >  {
> >  	return 1;
> > 
>
Paolo Bonzini March 13, 2013, 8 a.m. UTC | #3
On 13/03/2013 04:02, Asias He wrote:
> On Tue, Mar 12, 2013 at 09:26:18AM +0100, Paolo Bonzini wrote:
>> On 12/03/2013 03:42, Asias He wrote:
>>> This helper is useful to check if vs->vs_endpoint is setup by
>>> vhost_scsi_set_endpoint()
>>>
>>> Signed-off-by: Asias He <asias@redhat.com>
>>> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>>> ---
>>>  drivers/vhost/tcm_vhost.c | 12 ++++++++++++
>>>  1 file changed, 12 insertions(+)
>>>
>>> diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
>>> index b3e50d7..29612bc 100644
>>> --- a/drivers/vhost/tcm_vhost.c
>>> +++ b/drivers/vhost/tcm_vhost.c
>>> @@ -91,6 +91,18 @@ static int iov_num_pages(struct iovec *iov)
>>>  	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
>>>  }
>>>  
>>> +static bool tcm_vhost_check_endpoint(struct vhost_scsi *vs)
>>> +{
>>> +	bool ret = false;
>>> +
>>> +	mutex_lock(&vs->dev.mutex);
>>> +	if (vs->vs_endpoint)
>>> +		ret = true;
>>> +	mutex_unlock(&vs->dev.mutex);
>>
>> The return value is invalid as soon as mutex_unlock is called, i.e.
>> before tcm_vhost_check_endpoint returns.  Instead, check vs->vs_endpoint
>> in the caller while the mutex is taken.
> 
> Do you mean 1) or 2)?
> 
>    1)
>    vhost_scsi_handle_vq()
>    {
>    
>       mutex_lock(&vs->dev.mutex);
>       check vs->vs_endpoint
>       mutex_unlock(&vs->dev.mutex);
>    
>       handle vq
>    }
>    
>    2)
>    vhost_scsi_handle_vq()
>    {
>    
>       lock vs->dev.mutex
>       check vs->vs_endpoint
>       handle vq
>       unlock vs->dev.mutex
>    }
> 
> 1) does not make any difference with the original
> one right?

Yes, it's just what you have with tcm_vhost_check_endpoint inlined.

> 2) would be too heavy. This might not be a problem in current 1 thread
> per vhost model. But if we want concurrent multiqueue, this will be
> killing us.

I mean (2).  You could use an rwlock to enable more concurrency.

Paolo
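
A minimal sketch of one way to read that suggestion follows. Because the virtqueue handler can sleep, a sleeping reader/writer lock (struct rw_semaphore) is used instead of a spinning rwlock_t; the vs_endpoint_rwsem field and the simplified teardown helper are hypothetical, not code from this series:

    #include <linux/rwsem.h>

    /*
     * Hypothetical field added to struct vhost_scsi, initialized with
     * init_rwsem() when the device is opened:
     *
     *     struct rw_semaphore vs_endpoint_rwsem;
     */

    static void vhost_scsi_handle_vq(struct vhost_scsi *vs)
    {
            /* Readers: any number of virtqueue handlers may run here
             * concurrently as long as no writer holds the semaphore. */
            down_read(&vs->vs_endpoint_rwsem);
            if (!vs->vs_endpoint) {
                    up_read(&vs->vs_endpoint_rwsem);
                    return;
            }

            /* ... handle requests; the endpoint cannot be cleared here ... */

            up_read(&vs->vs_endpoint_rwsem);
    }

    static void vhost_scsi_drop_endpoint(struct vhost_scsi *vs)
    {
            /* Writer: excludes all virtqueue handlers during teardown. */
            down_write(&vs->vs_endpoint_rwsem);
            vs->vs_endpoint = false;
            up_write(&vs->vs_endpoint_rwsem);
    }

Readers do not block one another, so concurrent multiqueue handlers would scale, while setting and clearing the endpoint remains fully exclusive.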
Asias He March 14, 2013, 2:14 a.m. UTC | #4
On Wed, Mar 13, 2013 at 09:00:43AM +0100, Paolo Bonzini wrote:
> On 13/03/2013 04:02, Asias He wrote:
> > On Tue, Mar 12, 2013 at 09:26:18AM +0100, Paolo Bonzini wrote:
> >> On 12/03/2013 03:42, Asias He wrote:
> >>> This helper is useful to check if vs->vs_endpoint is setup by
> >>> vhost_scsi_set_endpoint()
> >>>
> >>> Signed-off-by: Asias He <asias@redhat.com>
> >>> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> >>> ---
> >>>  drivers/vhost/tcm_vhost.c | 12 ++++++++++++
> >>>  1 file changed, 12 insertions(+)
> >>>
> >>> diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
> >>> index b3e50d7..29612bc 100644
> >>> --- a/drivers/vhost/tcm_vhost.c
> >>> +++ b/drivers/vhost/tcm_vhost.c
> >>> @@ -91,6 +91,18 @@ static int iov_num_pages(struct iovec *iov)
> >>>  	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
> >>>  }
> >>>  
> >>> +static bool tcm_vhost_check_endpoint(struct vhost_scsi *vs)
> >>> +{
> >>> +	bool ret = false;
> >>> +
> >>> +	mutex_lock(&vs->dev.mutex);
> >>> +	if (vs->vs_endpoint)
> >>> +		ret = true;
> >>> +	mutex_unlock(&vs->dev.mutex);
> >>
> >> The return value is invalid as soon as mutex_unlock is called, i.e.
> >> before tcm_vhost_check_endpoint returns.  Instead, check vs->vs_endpoint
> >> in the caller while the mutex is taken.
> > 
> > Do you mean 1) or 2)?
> > 
> >    1)
> >    vhost_scsi_handle_vq()
> >    {
> >    
> >       mutex_lock(&vs->dev.mutex);
> >       check vs->vs_endpoint
> >       mutex_unlock(&vs->dev.mutex);
> >    
> >       handle vq
> >    }
> >    
> >    2)
> >    vhost_scsi_handle_vq()
> >    {
> >    
> >       lock vs->dev.mutex
> >       check vs->vs_endpoint
> >       handle vq
> >       unlock vs->dev.mutex
> >    }
> > 
> > 1) does not make any difference with the original
> > one right?
> 
> Yes, it's just what you have with tcm_vhost_check_endpoint inlined.

okay.

> > 2) would be too heavy. This might not be a problem in current 1 thread
> > per vhost model. But if we want concurrent multiqueue, this will be
> > killing us.
> 
> I mean (2).  You could use an rwlock to enable more concurrency.

Patch

diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index b3e50d7..29612bc 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -91,6 +91,18 @@  static int iov_num_pages(struct iovec *iov)
 	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
 }
 
+static bool tcm_vhost_check_endpoint(struct vhost_scsi *vs)
+{
+	bool ret = false;
+
+	mutex_lock(&vs->dev.mutex);
+	if (vs->vs_endpoint)
+		ret = true;
+	mutex_unlock(&vs->dev.mutex);
+
+	return ret;
+}
+
 static int tcm_vhost_check_true(struct se_portal_group *se_tpg)
 {
 	return 1;