From patchwork Tue Feb 8 15:25:17 2022
X-Patchwork-Submitter: Cristian Marussi
X-Patchwork-Id: 12738878
From: Cristian Marussi
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 virtualization@lists.linux-foundation.org
Cc: sudeep.holla@arm.com, mst@redhat.com, vincent.guittot@linaro.org,
 souvik.chakravarty@arm.com, peter.hilber@opensynergy.com,
 igor.skalkin@opensynergy.com, cristian.marussi@arm.com
Subject: [PATCH 1/4] tools/virtio: Miscellaneous build fixes
Date: Tue, 8 Feb 2022 15:25:17 +0000
Message-Id: <20220208152520.52983-2-cristian.marussi@arm.com>
In-Reply-To: <20220208152520.52983-1-cristian.marussi@arm.com>
References: <20220208152520.52983-1-cristian.marussi@arm.com>

- Fix pthread LDFLAGS in the Makefile: use the -pthread compile and link
  switch, as per the man page; the current -lpthread emits a number of
  undefined references:

  virtio_ring.o: In function `spin_lock':
  tools/virtio/./linux/spinlock.h:16: undefined reference to `pthread_spin_lock'
  virtio_ring.o: In function `spin_unlock':
  tools/virtio/./linux/spinlock.h:22: undefined reference to `pthread_spin_unlock'
  virtio_ring.o: In function `spin_lock':
  tools/virtio/./linux/spinlock.h:16: undefined reference to `pthread_spin_lock'
  virtio_ring.o: In function `spin_unlock':
  tools/virtio/./linux/spinlock.h:22: undefined reference to `pthread_spin_unlock'
  virtio_ring.o: In function `spin_lock':
  tools/virtio/./linux/spinlock.h:16: undefined reference to `pthread_spin_lock'
  virtio_ring.o: In function `spin_unlock':
  tools/virtio/./linux/spinlock.h:22: undefined reference to `pthread_spin_unlock'
  virtio_ring.o: In function `spin_lock':
  tools/virtio/./linux/spinlock.h:16: undefined reference to `pthread_spin_lock'
  virtio_ring.o: In function `spin_unlock':
  tools/virtio/./linux/spinlock.h:22: undefined reference to `pthread_spin_unlock'

- Add the missing drv_to_virtio definition and related includes: a copy
  of uio.h, with the one helper that used a struct folio reference
  removed.

- Add an empty mm_types.h.

- Add mocked moduleparam() definitions.

- Fix virtio_test and vringh_test initialization so that the
  drv_to_virtio inline works properly when invoked from test code.

Signed-off-by: Cristian Marussi
---
 tools/virtio/Makefile            |   2 +-
 tools/virtio/linux/mm_types.h    |   3 +
 tools/virtio/linux/module.h      |   1 +
 tools/virtio/linux/moduleparam.h |   7 +
 tools/virtio/linux/uio.h         | 299 ++++++++++++++++++++++++++++++-
 tools/virtio/linux/virtio.h      |  63 +++++++
 tools/virtio/virtio_test.c       |   3 +
 tools/virtio/vringh_test.c       |   3 +
 8 files changed, 379 insertions(+), 2 deletions(-)
 create mode 100644 tools/virtio/linux/mm_types.h
 create mode 100644 tools/virtio/linux/moduleparam.h

diff --git a/tools/virtio/Makefile b/tools/virtio/Makefile
index 0d7bbe49359d..83b6a522d0d2 100644
--- a/tools/virtio/Makefile
+++ b/tools/virtio/Makefile
@@ -5,7 +5,7 @@ virtio_test: virtio_ring.o virtio_test.o
 vringh_test: vringh_test.o vringh.o virtio_ring.o
 
 CFLAGS += -g -O2 -Werror -Wno-maybe-uninitialized -Wall -I. -I../include/ -I ../../usr/include/ -Wno-pointer-sign -fno-strict-overflow -fno-strict-aliasing -fno-common -MMD -U_FORTIFY_SOURCE -include ../../include/linux/kconfig.h
-LDFLAGS += -lpthread
+LDFLAGS += -pthread
 vpath %.c ../../drivers/virtio ../../drivers/vhost
 mod:
	${MAKE} -C `pwd`/../..
M=`pwd`/vhost_test V=${V}

diff --git a/tools/virtio/linux/mm_types.h b/tools/virtio/linux/mm_types.h
new file mode 100644
index 000000000000..3f3b5a471a1e
--- /dev/null
+++ b/tools/virtio/linux/mm_types.h
@@ -0,0 +1,3 @@
+#ifndef LINUX_MM_TYPES_H
+#endif
+
diff --git a/tools/virtio/linux/module.h b/tools/virtio/linux/module.h
index 9dfa96fea2b2..c81dcfe4bbfe 100644
--- a/tools/virtio/linux/module.h
+++ b/tools/virtio/linux/module.h
@@ -1,5 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #include
+#include
 
 #define MODULE_LICENSE(__MODULE_LICENSE_value) \
	static __attribute__((unused)) const char *__MODULE_LICENSE_name = \
diff --git a/tools/virtio/linux/moduleparam.h b/tools/virtio/linux/moduleparam.h
new file mode 100644
index 000000000000..e1b14a39ddd3
--- /dev/null
+++ b/tools/virtio/linux/moduleparam.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_MODULE_PARAMS_H
+#define _LINUX_MODULE_PARAMS_H
+
+#define module_param(name, type, perm)
+
+#endif /* _LINUX_MODULE_PARAMS_H */
diff --git a/tools/virtio/linux/uio.h b/tools/virtio/linux/uio.h
index cd20f0ba3081..b2fba0288409 100644
--- a/tools/virtio/linux/uio.h
+++ b/tools/virtio/linux/uio.h
@@ -1,3 +1,300 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef __LINUX_UIO_H
+#define __LINUX_UIO_H
+
 #include
+#include
+#include
+#include
+
+struct page;
+struct pipe_inode_info;
+
+struct kvec {
+	void *iov_base; /* and that should *never* hold a userland pointer */
+	size_t iov_len;
+};
+
+enum iter_type {
+	/* iter types */
+	ITER_IOVEC,
+	ITER_KVEC,
+	ITER_BVEC,
+	ITER_PIPE,
+	ITER_XARRAY,
+	ITER_DISCARD,
+};
+
+struct iov_iter_state {
+	size_t iov_offset;
+	size_t count;
+	unsigned long nr_segs;
+};
+
+struct iov_iter {
+	u8 iter_type;
+	bool nofault;
+	bool data_source;
+	size_t iov_offset;
+	size_t count;
+	union {
+		const struct iovec *iov;
+		const struct kvec *kvec;
+		const struct bio_vec *bvec;
+		struct xarray *xarray;
+		struct pipe_inode_info *pipe;
+	};
+	union {
+		unsigned long nr_segs;
+		struct {
+			unsigned int head;
+			unsigned int start_head;
+		};
+		loff_t xarray_start;
+	};
+};
+
+static inline enum iter_type iov_iter_type(const struct iov_iter *i)
+{
+	return i->iter_type;
+}
+
+static inline void iov_iter_save_state(struct iov_iter *iter,
+				       struct iov_iter_state *state)
+{
+	state->iov_offset = iter->iov_offset;
+	state->count = iter->count;
+	state->nr_segs = iter->nr_segs;
+}
+
+static inline bool iter_is_iovec(const struct iov_iter *i)
+{
+	return iov_iter_type(i) == ITER_IOVEC;
+}
+
+static inline bool iov_iter_is_kvec(const struct iov_iter *i)
+{
+	return iov_iter_type(i) == ITER_KVEC;
+}
+
+static inline bool iov_iter_is_bvec(const struct iov_iter *i)
+{
+	return iov_iter_type(i) == ITER_BVEC;
+}
+
+static inline bool iov_iter_is_pipe(const struct iov_iter *i)
+{
+	return iov_iter_type(i) == ITER_PIPE;
+}
+
+static inline bool iov_iter_is_discard(const struct iov_iter *i)
+{
+	return iov_iter_type(i) == ITER_DISCARD;
+}
+
+static inline bool iov_iter_is_xarray(const struct iov_iter *i)
+{
+	return iov_iter_type(i) == ITER_XARRAY;
+}
+
+static inline unsigned char iov_iter_rw(const struct iov_iter *i)
+{
+	return i->data_source ? WRITE : READ;
+}
+
+/*
+ * Total number of bytes covered by an iovec.
+ *
+ * NOTE that it is not safe to use this function until all the iovec's
+ * segment lengths have been validated. Because the individual lengths can
+ * overflow a size_t when added together.
+ */
+static inline size_t iov_length(const struct iovec *iov, unsigned long nr_segs)
+{
+	unsigned long seg;
+	size_t ret = 0;
+
+	for (seg = 0; seg < nr_segs; seg++)
+		ret += iov[seg].iov_len;
+	return ret;
+}
+
+static inline struct iovec iov_iter_iovec(const struct iov_iter *iter)
+{
+	return (struct iovec) {
+		.iov_base = iter->iov->iov_base + iter->iov_offset,
+		.iov_len = min(iter->count,
+			       iter->iov->iov_len - iter->iov_offset),
+	};
+}
+
+size_t copy_page_from_iter_atomic(struct page *page, unsigned offset,
+				  size_t bytes, struct iov_iter *i);
+void iov_iter_advance(struct iov_iter *i, size_t bytes);
+void iov_iter_revert(struct iov_iter *i, size_t bytes);
+size_t fault_in_iov_iter_readable(const struct iov_iter *i, size_t bytes);
+size_t fault_in_iov_iter_writeable(const struct iov_iter *i, size_t bytes);
+size_t iov_iter_single_seg_count(const struct iov_iter *i);
+size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
+			 struct iov_iter *i);
+size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes,
+			   struct iov_iter *i);
+
+size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i);
+size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i);
+size_t _copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i);
+
+static __always_inline __must_check
+size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
+{
+	if (unlikely(!check_copy_size(addr, bytes, true)))
+		return 0;
+	else
+		return _copy_to_iter(addr, bytes, i);
+}
+
+static __always_inline __must_check
+size_t copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
+{
+	if (unlikely(!check_copy_size(addr, bytes, false)))
+		return 0;
+	else
+		return _copy_from_iter(addr, bytes, i);
+}
+
+static __always_inline __must_check
+bool copy_from_iter_full(void *addr, size_t bytes, struct iov_iter *i)
+{
+	size_t copied = copy_from_iter(addr, bytes, i);
+	if (likely(copied == bytes))
+		return true;
+	iov_iter_revert(i, copied);
+	return false;
+}
+
+static __always_inline __must_check
+size_t copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i)
+{
+	if (unlikely(!check_copy_size(addr, bytes, false)))
+		return 0;
+	else
+		return _copy_from_iter_nocache(addr, bytes, i);
+}
+
+static __always_inline __must_check
+bool copy_from_iter_full_nocache(void *addr, size_t bytes, struct iov_iter *i)
+{
+	size_t copied = copy_from_iter_nocache(addr, bytes, i);
+	if (likely(copied == bytes))
+		return true;
+	iov_iter_revert(i, copied);
+	return false;
+}
+
+#ifdef CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE
+/*
+ * Note, users like pmem that depend on the stricter semantics of
+ * _copy_from_iter_flushcache() than _copy_from_iter_nocache() must check for
+ * IS_ENABLED(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) before assuming that the
+ * destination is flushed from the cache on return.
+ */
+size_t _copy_from_iter_flushcache(void *addr, size_t bytes, struct iov_iter *i);
+#else
+#define _copy_from_iter_flushcache _copy_from_iter_nocache
+#endif
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i);
+#else
+#define _copy_mc_to_iter _copy_to_iter
+#endif
+
+size_t iov_iter_zero(size_t bytes, struct iov_iter *);
+unsigned long iov_iter_alignment(const struct iov_iter *i);
+unsigned long iov_iter_gap_alignment(const struct iov_iter *i);
+void iov_iter_init(struct iov_iter *i, unsigned int direction, const struct iovec *iov,
+		   unsigned long nr_segs, size_t count);
+void iov_iter_kvec(struct iov_iter *i, unsigned int direction, const struct kvec *kvec,
+		   unsigned long nr_segs, size_t count);
+void iov_iter_bvec(struct iov_iter *i, unsigned int direction, const struct bio_vec *bvec,
+		   unsigned long nr_segs, size_t count);
+void iov_iter_pipe(struct iov_iter *i, unsigned int direction, struct pipe_inode_info *pipe,
+		   size_t count);
+void iov_iter_discard(struct iov_iter *i, unsigned int direction, size_t count);
+void iov_iter_xarray(struct iov_iter *i, unsigned int direction, struct xarray *xarray,
+		     loff_t start, size_t count);
+ssize_t iov_iter_get_pages(struct iov_iter *i, struct page **pages,
+			   size_t maxsize, unsigned maxpages, size_t *start);
+ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, struct page ***pages,
+				 size_t maxsize, size_t *start);
+int iov_iter_npages(const struct iov_iter *i, int maxpages);
+void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state);
+
+const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags);
+
+static inline size_t iov_iter_count(const struct iov_iter *i)
+{
+	return i->count;
+}
+
+/*
+ * Cap the iov_iter by given limit; note that the second argument is
+ * *not* the new size - it's upper limit for such. Passing it a value
+ * greater than the amount of data in iov_iter is fine - it'll just do
+ * nothing in that case.
+ */
+static inline void iov_iter_truncate(struct iov_iter *i, u64 count)
+{
+	/*
+	 * count doesn't have to fit in size_t - comparison extends both
+	 * operands to u64 here and any value that would be truncated by
+	 * conversion in assignement is by definition greater than all
+	 * values of size_t, including old i->count.
+	 */
+	if (i->count > count)
+		i->count = count;
+}
+
+/*
+ * reexpand a previously truncated iterator; count must be no more than how much
+ * we had shrunk it.
+ */
+static inline void iov_iter_reexpand(struct iov_iter *i, size_t count)
+{
+	i->count = count;
+}
+
+struct csum_state {
+	__wsum csum;
+	size_t off;
+};
+
+size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *csstate, struct iov_iter *i);
+size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum, struct iov_iter *i);
+
+static __always_inline __must_check
+bool csum_and_copy_from_iter_full(void *addr, size_t bytes,
+				  __wsum *csum, struct iov_iter *i)
+{
+	size_t copied = csum_and_copy_from_iter(addr, bytes, csum, i);
+	if (likely(copied == bytes))
+		return true;
+	iov_iter_revert(i, copied);
+	return false;
+}
+size_t hash_and_copy_to_iter(const void *addr, size_t bytes, void *hashp,
+			     struct iov_iter *i);
+
+struct iovec *iovec_from_user(const struct iovec __user *uvector,
+			      unsigned long nr_segs, unsigned long fast_segs,
+			      struct iovec *fast_iov, bool compat);
+ssize_t import_iovec(int type, const struct iovec __user *uvec,
+		     unsigned nr_segs, unsigned fast_segs, struct iovec **iovp,
+		     struct iov_iter *i);
+ssize_t __import_iovec(int type, const struct iovec __user *uvec,
+		       unsigned nr_segs, unsigned fast_segs, struct iovec **iovp,
+		       struct iov_iter *i, bool compat);
+int import_single_range(int type, void __user *buf, size_t len,
+			struct iovec *iov, struct iov_iter *i);
-#include "../../../include/linux/uio.h"
+#endif
diff --git a/tools/virtio/linux/virtio.h b/tools/virtio/linux/virtio.h
index 363b98228301..9021187a66d9 100644
--- a/tools/virtio/linux/virtio.h
+++ b/tools/virtio/linux/virtio.h
@@ -5,8 +5,47 @@
 #include
 #include
 
+typedef struct pm_message {
+	int event;
+} pm_message_t;
+
+enum probe_type {
+	PROBE_DEFAULT_STRATEGY,
+	PROBE_PREFER_ASYNCHRONOUS,
+	PROBE_FORCE_SYNCHRONOUS,
+};
+
+struct device_driver {
+	const char *name;
+	void *bus;
+
+	void *owner;
+	const char *mod_name;	/* used for built-in modules */
+
+	bool suppress_bind_attrs;	/* disables bind/unbind via sysfs */
+	enum probe_type probe_type;
+
+	const void *of_match_table;
+	const void *acpi_match_table;
+
+	int (*probe)(void *dev);
+	void (*sync_state)(void *dev);
+	int (*remove)(void *dev);
+	void (*shutdown)(void *dev);
+	int (*suspend)(void *dev, pm_message_t state);
+	int (*resume)(void *dev);
+	const void **groups;
+	const void **dev_groups;
+
+	const void *pm;
+	void (*coredump)(void *dev);
+
+	struct driver_private *p;
+};
+
 struct device {
 	void *parent;
+	struct device_driver *driver;
 };
 
 struct virtio_device {
@@ -26,6 +65,25 @@ struct virtqueue {
 	void *priv;
 };
 
+struct virtio_driver {
+	struct device_driver driver;
+	const struct virtio_device_id *id_table;
+	const unsigned int *feature_table;
+	unsigned int feature_table_size;
+	const unsigned int *feature_table_legacy;
+	unsigned int feature_table_size_legacy;
+	bool suppress_used_validation;
+	int (*validate)(struct virtio_device *dev);
+	int (*probe)(struct virtio_device *dev);
+	void (*scan)(struct virtio_device *dev);
+	void (*remove)(struct virtio_device *dev);
+	void (*config_changed)(struct virtio_device *dev);
+#ifdef CONFIG_PM
+	int (*freeze)(struct virtio_device *dev);
+	int (*restore)(struct virtio_device *dev);
+#endif
+};
+
 /* Interfaces exported by virtio_ring.
 */
int virtqueue_add_sgs(struct virtqueue *vq,
		      struct scatterlist *sgs[],
@@ -66,4 +124,9 @@ struct virtqueue *vring_new_virtqueue(unsigned int index,
			 const char *name);
 void vring_del_virtqueue(struct virtqueue *vq);
 
+static inline struct virtio_driver *drv_to_virtio(struct device_driver *drv)
+{
+	return container_of(drv, struct virtio_driver, driver);
+}
+
 #endif
diff --git a/tools/virtio/virtio_test.c b/tools/virtio/virtio_test.c
index cb3f29c09aff..d882075c097f 100644
--- a/tools/virtio/virtio_test.c
+++ b/tools/virtio/virtio_test.c
@@ -36,6 +36,7 @@ struct vq_info {
 };
 
 struct vdev_info {
+	struct virtio_driver vdriver;
	struct virtio_device vdev;
	int control;
	struct pollfd fds[1];
@@ -128,6 +129,8 @@ static void vdev_info_init(struct vdev_info* dev, unsigned long long features)
 {
	int r;
	memset(dev, 0, sizeof *dev);
+	dev->vdriver.suppress_used_validation = false;
+	dev->vdev.dev.driver = &dev->vdriver.driver;
	dev->vdev.features = features;
	INIT_LIST_HEAD(&dev->vdev.vqs);
	dev->buf_size = 1024;
diff --git a/tools/virtio/vringh_test.c b/tools/virtio/vringh_test.c
index fa87b58bd5fa..dbecfbbcbe54 100644
--- a/tools/virtio/vringh_test.c
+++ b/tools/virtio/vringh_test.c
@@ -437,6 +437,7 @@ static int parallel_test(u64 features,
 
 int main(int argc, char *argv[])
 {
+	struct virtio_driver vdriver;
	struct virtio_device vdev;
	struct virtqueue *vq;
	struct vringh vrh;
@@ -453,6 +454,8 @@ int main(int argc, char *argv[])
	bool fast_vringh = false, parallel = false;
 
	getrange = getrange_iov;
+	vdriver.suppress_used_validation = false;
+	vdev.dev.driver = &vdriver.driver;
	vdev.features = 0;
	INIT_LIST_HEAD(&vdev.vqs);

From patchwork Tue Feb 8 15:25:18 2022
X-Patchwork-Submitter: Cristian Marussi
X-Patchwork-Id: 12738877
From: Cristian Marussi
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 virtualization@lists.linux-foundation.org
Cc: sudeep.holla@arm.com, mst@redhat.com, vincent.guittot@linaro.org,
 souvik.chakravarty@arm.com, peter.hilber@opensynergy.com,
 igor.skalkin@opensynergy.com, cristian.marussi@arm.com
Subject: [PATCH 2/4] tools/virtio: Add missing spin_lock_init on virtio_test
Date: Tue, 8 Feb 2022 15:25:18 +0000
Message-Id: <20220208152520.52983-3-cristian.marussi@arm.com>
In-Reply-To: <20220208152520.52983-1-cristian.marussi@arm.com>
References: <20220208152520.52983-1-cristian.marussi@arm.com>

A missing spin_lock_init() call can cause the test to hang indefinitely
on spin_lock() at virtqueue creation time.
Signed-off-by: Cristian Marussi
---
 tools/virtio/virtio_test.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/virtio/virtio_test.c b/tools/virtio/virtio_test.c
index d882075c097f..d9cb3d22d52f 100644
--- a/tools/virtio/virtio_test.c
+++ b/tools/virtio/virtio_test.c
@@ -133,6 +133,7 @@ static void vdev_info_init(struct vdev_info* dev, unsigned long long features)
	dev->vdev.dev.driver = &dev->vdriver.driver;
	dev->vdev.features = features;
	INIT_LIST_HEAD(&dev->vdev.vqs);
+	spin_lock_init(&dev->vdev.vqs_list_lock);
	dev->buf_size = 1024;
	dev->buf = malloc(dev->buf_size);
	assert(dev->buf);

From patchwork Tue Feb 8 15:25:19 2022
X-Patchwork-Submitter: Cristian Marussi
X-Patchwork-Id: 12738879
From: Cristian Marussi
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 virtualization@lists.linux-foundation.org
Cc: sudeep.holla@arm.com, mst@redhat.com, vincent.guittot@linaro.org,
 souvik.chakravarty@arm.com, peter.hilber@opensynergy.com,
 igor.skalkin@opensynergy.com, cristian.marussi@arm.com
Subject: [PATCH 3/4] virtio_ring: Embed a wrap counter in opaque poll index value
Date: Tue, 8 Feb 2022 15:25:19 +0000
Message-Id: <20220208152520.52983-4-cristian.marussi@arm.com>
In-Reply-To: <20220208152520.52983-1-cristian.marussi@arm.com>
References: <20220208152520.52983-1-cristian.marussi@arm.com>

Exported API
virtqueue_poll() can be used to support polling-mode operation on top of
the virtio layer, if needed; currently the parameter last_used_idx is the
opaque value that needs to be passed to virtqueue_poll() to check whether
there are new pending used buffers in the queue: such an opaque value
would have been previously obtained by a call to the API function
virtqueue_enable_cb_prepare().

Since this opaque value simply contains a snapshot in time of the
internal 16-bit last_used_index (roughly), it is possible that, if
another thread calls virtqueue_add_*() at the same time (which existing
drivers don't do, but does not seem to be documented as prohibited
anywhere), and if exactly 2**16 buffers are marked as used between two
successive calls to virtqueue_poll(), the caller is fooled into thinking
that nothing is pending (ABA problem).

Keep a 16-bit internal wraps counter per virtqueue and embed it into the
upper 16 bits of the returned opaque value, so that the above scenario
can be detected transparently by virtqueue_poll(): this way each possible
last_used_idx value really belongs to a different wrap.

The ABA problem is still theoretically possible, but this mitigation
makes it possible only after 2^32 requests, which seems sufficient in
practice.

Since most drivers do not use the virtqueue_poll() API in this way to
access the virtqueues in polling mode, make the above behaviour optional
using an internal flag that a driver can set, if desired, by calling the
new API virtqueue_use_wrap_counter().

Cc: "Michael S.
Tsirkin" Cc: Igor Skalkin Cc: Peter Hilber Cc: virtualization@lists.linux-foundation.org Signed-off-by: Cristian Marussi --- --- drivers/virtio/virtio_ring.c | 88 ++++++++++++++++++++++++++++++++++-- include/linux/virtio.h | 2 + 2 files changed, 86 insertions(+), 4 deletions(-) diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index 962f1477b1fa..00a84b93f4a7 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -12,6 +12,8 @@ #include #include #include +#include +#include #include #ifdef DEBUG @@ -66,6 +68,34 @@ #define LAST_ADD_TIME_INVALID(vq) #endif +/* + * The provided last_used_idx, as returned by virtqueue_enable_cb_prepare(), + * is an opaque value representing the queue state and it is built as follows: + * + * --------------------------------------------------------- + * | vq->wraps | vq->last_used_idx | + * 31------------------------------------------------------0 + * + * The MSB 16bits embedding the wraps counter for the underlying virtqueue + * is stripped out, using these macros, before reaching into the lower layer + * helpers. + * + * This structure of the opaque value mitigates the scenario in which, when + * exactly 2**16 messages are marked as used between two successive calls to + * virtqueue_poll(), the caller is fooled into thinking nothing new has arrived + * since the pure last_used_idx is exactly the same. + */ +#define VRING_IDX_MASK GENMASK(15, 0) +#define VRING_POLL_GET_IDX(opaque) \ + ((u16)FIELD_GET(VRING_IDX_MASK, (opaque))) + +#define VRING_WRAPS_MASK GENMASK(31, 16) +#define VRING_POLL_GET_WRAPS(opaque) \ + ((u16)FIELD_GET(VRING_WRAPS_MASK, (opaque))) + +#define VRING_POLL_BUILD_OPAQUE(idx, wraps) \ + (FIELD_PREP(VRING_WRAPS_MASK, (wraps)) | ((idx) & VRING_IDX_MASK)) + struct vring_desc_state_split { void *data; /* Data for callback. */ struct vring_desc *indir_desc; /* Indirect descriptor, if any. */ @@ -114,6 +144,11 @@ struct vring_virtqueue { /* Last used index we've seen. 
*/ u16 last_used_idx; + /* Should we count wraparounds? */ + bool use_wrap_counter; + /* Number of wraparounds */ + u16 wraps; + /* Hint for event idx: already triggered no need to disable. */ bool event_triggered; @@ -789,6 +824,8 @@ static void *virtqueue_get_buf_ctx_split(struct virtqueue *_vq, ret = vq->split.desc_state[i].data; detach_buf_split(vq, i, ctx); vq->last_used_idx++; + if (unlikely(vq->use_wrap_counter)) + vq->wraps += !vq->last_used_idx; /* If we expect an interrupt for the next entry, tell host * by writing event index and flush out the write before * the read in the next get_buf call. */ @@ -1474,6 +1511,8 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq, if (unlikely(vq->last_used_idx >= vq->packed.vring.num)) { vq->last_used_idx -= vq->packed.vring.num; vq->packed.used_wrap_counter ^= 1; + if (unlikely(vq->use_wrap_counter)) + vq->wraps++; } /* @@ -1709,6 +1748,8 @@ static struct virtqueue *vring_create_virtqueue_packed( vq->weak_barriers = weak_barriers; vq->broken = false; vq->last_used_idx = 0; + vq->use_wrap_counter = false; + vq->wraps = 0; vq->event_triggered = false; vq->num_added = 0; vq->packed_ring = true; @@ -2046,16 +2087,49 @@ EXPORT_SYMBOL_GPL(virtqueue_disable_cb); */ unsigned virtqueue_enable_cb_prepare(struct virtqueue *_vq) { + unsigned int last_used_idx; struct vring_virtqueue *vq = to_vvq(_vq); if (vq->event_triggered) vq->event_triggered = false; - return vq->packed_ring ? virtqueue_enable_cb_prepare_packed(_vq) : - virtqueue_enable_cb_prepare_split(_vq); + last_used_idx = vq->packed_ring ? + virtqueue_enable_cb_prepare_packed(_vq) : + virtqueue_enable_cb_prepare_split(_vq); + + return VRING_POLL_BUILD_OPAQUE(last_used_idx, vq->wraps); } EXPORT_SYMBOL_GPL(virtqueue_enable_cb_prepare); +/** + * virtqueue_use_wrap_counter - Enable the virtqueue use_wrap_counter flag + * @_vq: the struct virtqueue we're talking about. 
+ *
+ * After calling this, the core keeps track of the wraparounds of the
+ * virtqueue's last_used_idx in a dedicated counter, and that value is
+ * reported embedded in the upper 16 bits of the opaque value returned by
+ * virtqueue_enable_cb_prepare().
+ *
+ * Return: 0 on success
+ */
+int virtqueue_use_wrap_counter(struct virtqueue *_vq)
+{
+	u8 status;
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	if (unlikely(vq->broken))
+		return -EINVAL;
+
+	status = _vq->vdev->config->get_status(_vq->vdev);
+	if (!status || status >= VIRTIO_CONFIG_S_DRIVER_OK)
+		return -EBUSY;
+
+	vq->use_wrap_counter = true;
+	virtio_mb(vq->weak_barriers);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(virtqueue_use_wrap_counter);
+
 /**
 * virtqueue_poll - query pending used buffers
 * @_vq: the struct virtqueue we're talking about.
@@ -2072,9 +2146,13 @@ bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx)
 	if (unlikely(vq->broken))
 		return false;

+	if (unlikely(vq->wraps != VRING_POLL_GET_WRAPS(last_used_idx)))
+		return true;
+
 	virtio_mb(vq->weak_barriers);

-	return vq->packed_ring ? virtqueue_poll_packed(_vq, last_used_idx) :
-			virtqueue_poll_split(_vq, last_used_idx);
+	return vq->packed_ring ?
+		virtqueue_poll_packed(_vq, VRING_POLL_GET_IDX(last_used_idx)) :
+		virtqueue_poll_split(_vq, VRING_POLL_GET_IDX(last_used_idx));
 }
 EXPORT_SYMBOL_GPL(virtqueue_poll);

@@ -2198,6 +2276,8 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
 	vq->weak_barriers = weak_barriers;
 	vq->broken = false;
 	vq->last_used_idx = 0;
+	vq->use_wrap_counter = false;
+	vq->wraps = 0;
 	vq->event_triggered = false;
 	vq->num_added = 0;
 	vq->use_dma_api = vring_use_dma_api(vdev);

diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 72292a62cd90..3cd428c09f54 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -74,6 +74,8 @@ bool virtqueue_enable_cb(struct virtqueue *vq);

 unsigned virtqueue_enable_cb_prepare(struct virtqueue *vq);

+int virtqueue_use_wrap_counter(struct virtqueue *vq);
+
 bool virtqueue_poll(struct virtqueue *vq, unsigned);

 bool virtqueue_enable_cb_delayed(struct virtqueue *vq);

From patchwork Tue Feb 8 15:25:20 2022
X-Patchwork-Submitter: Cristian Marussi
X-Patchwork-Id: 12738880
From: Cristian Marussi
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    virtualization@lists.linux-foundation.org
Cc: sudeep.holla@arm.com, mst@redhat.com, vincent.guittot@linaro.org,
    souvik.chakravarty@arm.com, peter.hilber@opensynergy.com,
    igor.skalkin@opensynergy.com, cristian.marussi@arm.com
Subject: [PATCH 4/4] tools/virtio: Add virtqueue_use_wrap_counter support
Date: Tue, 8 Feb 2022 15:25:20 +0000
Message-Id: <20220208152520.52983-5-cristian.marussi@arm.com>
In-Reply-To: <20220208152520.52983-1-cristian.marussi@arm.com>
References: <20220208152520.52983-1-cristian.marussi@arm.com>
Add simplified versions of the missing headers needed by the VRING_POLL_*
macros, and a definition of the virtqueue_use_wrap_counter() API.

Add a --wrap-counters option to virtio_test to be able to test the new
virtqueue_use_wrap_counter() API.

Signed-off-by: Cristian Marussi
---
 tools/virtio/linux/bitfield.h | 88 +++++++++++++++++++++++++++++++++++
 tools/virtio/linux/bits.h     | 36 ++++++++++++++
 tools/virtio/linux/virtio.h   | 33 +++++++++++++
 tools/virtio/virtio_test.c    | 35 ++++++++++++--
 4 files changed, 189 insertions(+), 3 deletions(-)
 create mode 100644 tools/virtio/linux/bitfield.h
 create mode 100644 tools/virtio/linux/bits.h

diff --git a/tools/virtio/linux/bitfield.h b/tools/virtio/linux/bitfield.h
new file mode 100644
index 000000000000..43421844eecc
--- /dev/null
+++ b/tools/virtio/linux/bitfield.h
@@ -0,0 +1,88 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _LINUX_BITFIELD_H
+#define _LINUX_BITFIELD_H
+
+#include
+
+/*
+ * Bitfield access macros
+ *
+ * FIELD_{GET,PREP} macros take as first parameter a shifted mask
+ * from which they extract the base mask and shift amount.
+ * Mask must be a compile-time constant.
+ *
+ * Example:
+ *
+ *  #define REG_FIELD_A  GENMASK(6, 0)
+ *  #define REG_FIELD_B  BIT(7)
+ *  #define REG_FIELD_C  GENMASK(15, 8)
+ *  #define REG_FIELD_D  GENMASK(31, 16)
+ *
+ * Get:
+ *  a = FIELD_GET(REG_FIELD_A, reg);
+ *  b = FIELD_GET(REG_FIELD_B, reg);
+ *
+ * Set:
+ *  reg = FIELD_PREP(REG_FIELD_A, 1) |
+ *	  FIELD_PREP(REG_FIELD_B, 0) |
+ *	  FIELD_PREP(REG_FIELD_C, c) |
+ *	  FIELD_PREP(REG_FIELD_D, 0x40);
+ *
+ * Modify:
+ *  reg &= ~REG_FIELD_C;
+ *  reg |= FIELD_PREP(REG_FIELD_C, c);
+ */
+
+#define __bf_shf(x) (__builtin_ffsll(x) - 1)
+
+/**
+ * FIELD_MAX() - produce the maximum value representable by a field
+ * @_mask: shifted mask defining the field's length and position
+ *
+ * FIELD_MAX() returns the maximum value that can be held in the field
+ * specified by @_mask.
+ */
+#define FIELD_MAX(_mask)						\
+	({								\
+		(typeof(_mask))((_mask) >> __bf_shf(_mask));		\
+	})
+
+/**
+ * FIELD_FIT() - check if value fits in the field
+ * @_mask: shifted mask defining the field's length and position
+ * @_val: value to test against the field
+ *
+ * Return: true if @_val can fit inside @_mask, false if @_val is too big.
+ */
+#define FIELD_FIT(_mask, _val)						\
+	({								\
+		!((((typeof(_mask))_val) << __bf_shf(_mask)) & ~(_mask)); \
+	})
+
+/**
+ * FIELD_PREP() - prepare a bitfield element
+ * @_mask: shifted mask defining the field's length and position
+ * @_val: value to put in the field
+ *
+ * FIELD_PREP() masks and shifts up the value. The result should
+ * be combined with other fields of the bitfield using logical OR.
+ */
+#define FIELD_PREP(_mask, _val)						\
+	({								\
+		((typeof(_mask))(_val) << __bf_shf(_mask)) & (_mask);	\
+	})
+
+/**
+ * FIELD_GET() - extract a bitfield element
+ * @_mask: shifted mask defining the field's length and position
+ * @_reg: value of entire bitfield
+ *
+ * FIELD_GET() extracts the field specified by @_mask from the
+ * bitfield passed in as @_reg by masking and shifting it down.
+ */
+#define FIELD_GET(_mask, _reg)						\
+	({								\
+		(typeof(_mask))(((_reg) & (_mask)) >> __bf_shf(_mask));	\
+	})
+
+#endif

diff --git a/tools/virtio/linux/bits.h b/tools/virtio/linux/bits.h
new file mode 100644
index 000000000000..16f8fe32b0b3
--- /dev/null
+++ b/tools/virtio/linux/bits.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_BITS_H
+#define __LINUX_BITS_H
+
+#include
+#include
+#include
+
+#define BIT_ULL(nr)		(ULL(1) << (nr))
+#define BIT_MASK(nr)		(UL(1) << ((nr) % BITS_PER_LONG))
+#define BIT_WORD(nr)		((nr) / BITS_PER_LONG)
+#define BIT_ULL_MASK(nr)	(ULL(1) << ((nr) % BITS_PER_LONG_LONG))
+#define BIT_ULL_WORD(nr)	((nr) / BITS_PER_LONG_LONG)
+#define BITS_PER_BYTE		8
+
+/*
+ * Create a contiguous bitmask starting at bit position @l and ending at
+ * position @h. For example
+ * GENMASK_ULL(39, 21) gives us the 64bit vector 0x000000ffffe00000.
+ */
+
+#define GENMASK_INPUT_CHECK(h, l) 0
+
+#define __GENMASK(h, l) \
+	(((~UL(0)) - (UL(1) << (l)) + 1) & \
+	 (~UL(0) >> (BITS_PER_LONG - 1 - (h))))
+#define GENMASK(h, l) \
+	(GENMASK_INPUT_CHECK(h, l) + __GENMASK(h, l))
+
+#define __GENMASK_ULL(h, l) \
+	(((~ULL(0)) - (ULL(1) << (l)) + 1) & \
+	 (~ULL(0) >> (BITS_PER_LONG_LONG - 1 - (h))))
+#define GENMASK_ULL(h, l) \
+	(GENMASK_INPUT_CHECK(h, l) + __GENMASK_ULL(h, l))
+
+#endif /* __LINUX_BITS_H */

diff --git a/tools/virtio/linux/virtio.h b/tools/virtio/linux/virtio.h
index 9021187a66d9..0556e093f223 100644
--- a/tools/virtio/linux/virtio.h
+++ b/tools/virtio/linux/virtio.h
@@ -5,6 +5,8 @@
 #include
 #include

+#define VIRTIO_CONFIG_S_DRIVER 2
+
 typedef struct pm_message {
 	int event;
 } pm_message_t;
@@ -48,9 +50,12 @@ struct device {
 	struct device_driver *driver;
 };

+struct virtio_config_ops;
+
 struct virtio_device {
 	struct device dev;
 	u64 features;
+	const struct virtio_config_ops *config;
 	struct list_head vqs;
 	spinlock_t vqs_list_lock;
 };
@@ -65,6 +70,32 @@ struct virtqueue {
 	void *priv;
 };

+struct virtio_config_ops {
+	void (*enable_cbs)(struct virtio_device *vdev);
+	void (*get)(struct virtio_device *vdev, unsigned int offset,
+		    void *buf, unsigned int len);
+	void (*set)(struct virtio_device *vdev, unsigned int offset,
+		    const void *buf, unsigned int len);
+	u32 (*generation)(struct virtio_device *vdev);
+	u8 (*get_status)(struct virtio_device *vdev);
+	void (*set_status)(struct virtio_device *vdev, u8 status);
+	void (*reset)(struct virtio_device *vdev);
+	int (*find_vqs)(struct virtio_device *vdev, unsigned int nvqs,
+			struct virtqueue *vqs[], void *callbacks[],
+			const char * const names[], const bool *ctx,
+			void *desc);
+	void (*del_vqs)(struct virtio_device *vdev);
+	u64 (*get_features)(struct virtio_device *vdev);
+	int (*finalize_features)(struct virtio_device *vdev);
+	const char *(*bus_name)(struct virtio_device *vdev);
+	int (*set_vq_affinity)(struct virtqueue *vq,
+			       const void *cpu_mask);
+	const void *(*get_vq_affinity)(struct virtio_device *vdev,
+				       int index);
+	bool (*get_shm_region)(struct virtio_device *vdev,
+			       void *region, u8 id);
+};
+
 struct virtio_driver {
 	struct device_driver driver;
 	const struct virtio_device_id *id_table;
@@ -111,6 +142,8 @@ void virtqueue_disable_cb(struct virtqueue *vq);
 bool virtqueue_enable_cb(struct virtqueue *vq);
 bool virtqueue_enable_cb_delayed(struct virtqueue *vq);

+int virtqueue_use_wrap_counter(struct virtqueue *vq);
+
 void *virtqueue_detach_unused_buf(struct virtqueue *vq);

 struct virtqueue *vring_new_virtqueue(unsigned int index,
				      unsigned int num,

diff --git a/tools/virtio/virtio_test.c b/tools/virtio/virtio_test.c
index d9cb3d22d52f..deccd44e4739 100644
--- a/tools/virtio/virtio_test.c
+++ b/tools/virtio/virtio_test.c
@@ -21,6 +21,8 @@

 #define RANDOM_BATCH -1

+static u8 virtio_test_device_status;
+
 /* Unused */
 void *__kmalloc_fake, *__kfree_ignore_start, *__kfree_ignore_end;

@@ -109,7 +111,7 @@ static void vq_reset(struct vq_info *info, int num, struct virtio_device *vdev)
 	info->vq->priv = info;
 }

-static void vq_info_add(struct vdev_info *dev, int num)
+static void vq_info_add(struct vdev_info *dev, int num, bool use_wrap_counters)
 {
 	struct vq_info *info = &dev->vqs[dev->nvqs];
 	int r;
@@ -119,12 +121,25 @@ static void vq_info_add(struct vdev_info *dev, int num)
 	r = posix_memalign(&info->ring, 4096, vring_size(num, 4096));
 	assert(r >= 0);
 	vq_reset(info, num, &dev->vdev);
+	if (use_wrap_counters) {
+		r = virtqueue_use_wrap_counter(info->vq);
+		assert(r == 0);
+	}
 	vhost_vq_setup(dev, info);
 	dev->fds[info->idx].fd = info->call;
 	dev->fds[info->idx].events = POLLIN;
 	dev->nvqs++;
 }

+static u8 virtio_test_get_status(struct virtio_device *vdev)
+{
+	return virtio_test_device_status;
+}
+
+struct virtio_config_ops virtio_config_ops = {
+	.get_status = virtio_test_get_status,
+};
+
 static void vdev_info_init(struct vdev_info* dev, unsigned long long features)
 {
 	int r;
@@ -132,6 +147,8 @@ static void vdev_info_init(struct vdev_info* dev, unsigned long long features)
 	dev->vdriver.suppress_used_validation = false;
 	dev->vdev.dev.driver = &dev->vdriver.driver;
 	dev->vdev.features = features;
+	virtio_test_device_status = VIRTIO_CONFIG_S_DRIVER;
+	dev->vdev.config = &virtio_config_ops;
 	INIT_LIST_HEAD(&dev->vdev.vqs);
 	spin_lock_init(&dev->vdev.vqs_list_lock);
 	dev->buf_size = 1024;
@@ -316,6 +333,14 @@ const struct option longopts[] = {
 		.name = "no-delayed-interrupt",
 		.val = 'd',
 	},
+	{
+		.name = "wrap-counters",
+		.val = 'W',
+	},
+	{
+		.name = "no-wrap-counters",
+		.val = 'w',
+	},
 	{
 		.name = "batch",
 		.val = 'b',
@@ -337,6 +362,7 @@ static void help(void)
 	     " [--no-event-idx]"
 	     " [--no-virtio-1]"
 	     " [--delayed-interrupt]"
+	     " [--wrap-counters]"
 	     " [--batch=random/N]"
 	     " [--reset=N]"
 	     "\n");
@@ -349,7 +375,7 @@ int main(int argc, char **argv)
 		(1ULL << VIRTIO_RING_F_EVENT_IDX) | (1ULL << VIRTIO_F_VERSION_1);
 	long batch = 1, reset = 0;
 	int o;
-	bool delayed = false;
+	bool delayed = false, use_wrap_counters = false;

 	for (;;) {
 		o = getopt_long(argc, argv, optstring, longopts, NULL);
@@ -374,6 +400,9 @@ int main(int argc, char **argv)
 		case 'D':
 			delayed = true;
 			break;
+		case 'W':
+			use_wrap_counters = true;
+			break;
 		case 'b':
 			if (0 == strcmp(optarg, "random")) {
				batch = RANDOM_BATCH;
@@ -400,7 +429,7 @@ int main(int argc, char **argv)
 done:
 	vdev_info_init(&dev, features);
-	vq_info_add(&dev, 256);
+	vq_info_add(&dev, 256, use_wrap_counters);
 	run_test(&dev, &dev.vqs[0], delayed, batch, reset, 0x100000);
 	return 0;
 }