| Message ID   | 1451572003-2440-33-git-send-email-mst@redhat.com (mailing list archive) |
|--------------|--------------------------------------------------------------------------|
| State        | Awaiting Upstream                                                        |
| Delegated to | Geert Uytterhoeven                                                       |
Hello.

On 12/31/2015 10:09 PM, Michael S. Tsirkin wrote:
> We need a full barrier after writing out event index, using
> virt_store_mb there seems better than open-coding. As usual, we need a
> wrapper to account for strong barriers.
>
> It's tempting to use this in vhost as well, for that, we'll
> need a variant of smp_store_mb that works on __user pointers.
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> ---
>  include/linux/virtio_ring.h  | 12 ++++++++++++
>  drivers/virtio/virtio_ring.c | 15 +++++++++------
>  2 files changed, 21 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
> index f3fa55b..3a74d91 100644
> --- a/include/linux/virtio_ring.h
> +++ b/include/linux/virtio_ring.h
> @@ -45,6 +45,18 @@ static inline void virtio_wmb(bool weak_barriers)
>  		wmb();
>  }
>
> +static inline void virtio_store_mb(bool weak_barriers,
> +				   __virtio16 *p, __virtio16 v)
> +{
> +	if (weak_barriers)
> +		virt_store_mb(*p, v);
> +	else
> +	{

   The kernel coding style dictates:

	if (weak_barriers) {
		virt_store_mb(*p, v);
	} else {

> +		WRITE_ONCE(*p, v);
> +		mb();
> +	}
> +}
> +
[...]

MBR, Sergei
On Fri, Jan 01, 2016 at 08:23:46PM +0300, Sergei Shtylyov wrote:
> Hello.
>
> On 12/31/2015 10:09 PM, Michael S. Tsirkin wrote:
>
> >We need a full barrier after writing out event index, using
> >virt_store_mb there seems better than open-coding. As usual, we need a
> >wrapper to account for strong barriers.
> >
> >It's tempting to use this in vhost as well, for that, we'll
> >need a variant of smp_store_mb that works on __user pointers.
> >
> >Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> >---
> > include/linux/virtio_ring.h  | 12 ++++++++++++
> > drivers/virtio/virtio_ring.c | 15 +++++++++------
> > 2 files changed, 21 insertions(+), 6 deletions(-)
> >
> >diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
> >index f3fa55b..3a74d91 100644
> >--- a/include/linux/virtio_ring.h
> >+++ b/include/linux/virtio_ring.h
> >@@ -45,6 +45,18 @@ static inline void virtio_wmb(bool weak_barriers)
> > 		wmb();
> > }
> >
> >+static inline void virtio_store_mb(bool weak_barriers,
> >+				   __virtio16 *p, __virtio16 v)
> >+{
> >+	if (weak_barriers)
> >+		virt_store_mb(*p, v);
> >+	else
> >+	{
>
>    The kernel coding style dictates:
>
> 	if (weak_barriers) {
> 		virt_store_mb(*p, v);
> 	} else {
>
> >+		WRITE_ONCE(*p, v);
> >+		mb();
> >+	}
> >+}
> >+
> [...]
>
> MBR, Sergei

Will fix, thanks!
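For readers following the thread, a minimal sketch of how the helper reads once the brace-style comment is folded in; this is simply the quoted v1 hunk with Sergei's fix applied, not the actual respin:

```c
/* Sketch only: the v1 hunk above with the brace-style comment applied. */
static inline void virtio_store_mb(bool weak_barriers,
				   __virtio16 *p, __virtio16 v)
{
	if (weak_barriers) {
		/* Weak-barrier transport: store plus a full virt barrier. */
		virt_store_mb(*p, v);
	} else {
		/* Strong-barrier transport: store, then a mandatory mb(). */
		WRITE_ONCE(*p, v);
		mb();
	}
}
```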
diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
index f3fa55b..3a74d91 100644
--- a/include/linux/virtio_ring.h
+++ b/include/linux/virtio_ring.h
@@ -45,6 +45,18 @@ static inline void virtio_wmb(bool weak_barriers)
 		wmb();
 }
 
+static inline void virtio_store_mb(bool weak_barriers,
+				   __virtio16 *p, __virtio16 v)
+{
+	if (weak_barriers)
+		virt_store_mb(*p, v);
+	else
+	{
+		WRITE_ONCE(*p, v);
+		mb();
+	}
+}
+
 struct virtio_device;
 struct virtqueue;
 
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index ee663c4..e12e385 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -517,10 +517,10 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len)
 	/* If we expect an interrupt for the next entry, tell host
 	 * by writing event index and flush out the write before
 	 * the read in the next get_buf call. */
-	if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) {
-		vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, vq->last_used_idx);
-		virtio_mb(vq->weak_barriers);
-	}
+	if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT))
+		virtio_store_mb(vq->weak_barriers,
+				&vring_used_event(&vq->vring),
+				cpu_to_virtio16(_vq->vdev, vq->last_used_idx));
 
 #ifdef DEBUG
 	vq->last_add_time_valid = false;
@@ -653,8 +653,11 @@ bool virtqueue_enable_cb_delayed(struct virtqueue *_vq)
 	}
 	/* TODO: tune this threshold */
 	bufs = (u16)(vq->avail_idx_shadow - vq->last_used_idx) * 3 / 4;
-	vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, vq->last_used_idx + bufs);
-	virtio_mb(vq->weak_barriers);
+
+	virtio_store_mb(vq->weak_barriers,
+			&vring_used_event(&vq->vring),
+			cpu_to_virtio16(_vq->vdev, vq->last_used_idx + bufs));
+
 	if (unlikely((u16)(virtio16_to_cpu(_vq->vdev, vq->vring.used->idx) - vq->last_used_idx) > bufs)) {
 		END_USE(vq);
 		return false;
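Both call sites follow the same pattern the helper now encapsulates: publish an event index, then immediately read used->idx to check whether the device raced past it. That store-then-load sequence is exactly what needs a full barrier (a write barrier only orders stores against other stores). An annotated view of the second call site, for illustration only; the comments are not part of the patch:

```c
	/* Tell the device at which used index we next want a notification. */
	virtio_store_mb(vq->weak_barriers,
			&vring_used_event(&vq->vring),
			cpu_to_virtio16(_vq->vdev, vq->last_used_idx + bufs));

	/* The full barrier inside virtio_store_mb() keeps the store above
	 * ordered before this load.  Without it the CPU could read a stale
	 * used->idx, conclude nothing was completed in the meantime, and
	 * then wait for an interrupt the device never sends because it
	 * still sees the old event index. */
	if (unlikely((u16)(virtio16_to_cpu(_vq->vdev, vq->vring.used->idx) -
			   vq->last_used_idx) > bufs)) {
		END_USE(vq);
		return false;	/* raced with the device; caller re-polls */
	}
```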
We need a full barrier after writing out event index, using
virt_store_mb there seems better than open-coding. As usual, we need a
wrapper to account for strong barriers.

It's tempting to use this in vhost as well, for that, we'll
need a variant of smp_store_mb that works on __user pointers.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/linux/virtio_ring.h  | 12 ++++++++++++
 drivers/virtio/virtio_ring.c | 15 +++++++++------
 2 files changed, 21 insertions(+), 6 deletions(-)
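The cover text notes that reusing this in vhost would require a variant of smp_store_mb() that accepts a __user pointer. No such helper is part of this patch; purely as a hypothetical sketch (the name vhost_store_mb_user and its exact shape are made up here), it could be built from existing primitives roughly like this:

```c
/*
 * Hypothetical sketch only -- not part of this patch.  A __user-pointer
 * analogue of smp_store_mb(), assembled from __put_user() and smp_mb().
 */
static inline int vhost_store_mb_user(__virtio16 __user *p, __virtio16 v)
{
	int ret = __put_user(v, p);	/* store into guest-visible memory */

	smp_mb();			/* order the store before later loads */
	return ret;			/* 0 on success, -EFAULT on bad pointer */
}
```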