From: Stefan Hajnoczi
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini, Ilya Maximets, Philippe Mathieu-Daudé, Kevin Wolf,
    xen-devel@lists.xenproject.org, Anthony Perard, Paolo Bonzini,
    Stefan Hajnoczi, Julia Suvorova, Aarushi Mehta, Paul Durrant,
    "Michael S. Tsirkin", Fam Zheng, Stefano Garzarella, Hanna Reitz,
    Eric Blake
Subject: [PATCH v3 3/4] virtio: use defer_call() in virtio_irqfd_notify()
Date: Wed, 13 Sep 2023 16:00:44 -0400
Message-ID: <20230913200045.1024233-4-stefanha@redhat.com>
In-Reply-To: <20230913200045.1024233-1-stefanha@redhat.com>
References: <20230913200045.1024233-1-stefanha@redhat.com>

virtio-blk and virtio-scsi invoke virtio_irqfd_notify() to send Used
Buffer Notifications from an IOThread. This involves an eventfd
write(2) syscall. Calling this repeatedly when completing multiple I/O
requests in a row is wasteful.

Use the defer_call() API to batch together virtio_irqfd_notify() calls
made during thread pool (aio=threads), Linux AIO (aio=native), and
io_uring (aio=io_uring) completion processing.

Behavior is unchanged for emulated devices that do not use
defer_call_begin()/defer_call_end() since defer_call() immediately
invokes the callback when called outside a
defer_call_begin()/defer_call_end() region.

fio rw=randread bs=4k iodepth=64 numjobs=8 IOPS increases by ~9% with a
single IOThread and 8 vCPUs. iodepth=1 decreases by ~1%, but this could
be noise. Detailed performance data and configuration specifics are
available here:
https://gitlab.com/stefanha/virt-playbooks/-/tree/blk_io_plug-irqfd

This duplicates the BH that virtio-blk uses for batching. The next
commit will remove it.

Reviewed-by: Eric Blake
Signed-off-by: Stefan Hajnoczi
---
 block/io_uring.c       |  6 ++++++
 block/linux-aio.c      |  4 ++++
 hw/virtio/virtio.c     | 13 ++++++++++++-
 util/thread-pool.c     |  5 +++++
 hw/virtio/trace-events |  1 +
 5 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/block/io_uring.c b/block/io_uring.c
index 3a1e1f45b3..7cdd00e9f1 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -125,6 +125,9 @@ static void luring_process_completions(LuringState *s)
 {
     struct io_uring_cqe *cqes;
     int total_bytes;
+
+    defer_call_begin();
+
     /*
      * Request completion callbacks can run the nested event loop.
      * Schedule ourselves so the nested event loop will "see" remaining
@@ -217,7 +220,10 @@ end:
             aio_co_wake(luringcb->co);
         }
     }
+
     qemu_bh_cancel(s->completion_bh);
+
+    defer_call_end();
 }
 
 static int ioq_submit(LuringState *s)
diff --git a/block/linux-aio.c b/block/linux-aio.c
index a2670b3e46..ec05d946f3 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -205,6 +205,8 @@ static void qemu_laio_process_completions(LinuxAioState *s)
 {
     struct io_event *events;
 
+    defer_call_begin();
+
     /* Reschedule so nested event loops see currently pending completions */
     qemu_bh_schedule(s->completion_bh);
 
@@ -231,6 +233,8 @@ static void qemu_laio_process_completions(LinuxAioState *s)
      * own `for` loop.  If we are the last all counters dropped to zero. */
     s->event_max = 0;
     s->event_idx = 0;
+
+    defer_call_end();
 }
 
 static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 969c25f4cf..d9aeed7012 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -15,6 +15,7 @@
 #include "qapi/error.h"
 #include "qapi/qapi-commands-virtio.h"
 #include "trace.h"
+#include "qemu/defer-call.h"
 #include "qemu/error-report.h"
 #include "qemu/log.h"
 #include "qemu/main-loop.h"
@@ -2426,6 +2427,16 @@ static bool virtio_should_notify(VirtIODevice *vdev, VirtQueue *vq)
     }
 }
 
+/* Batch irqs while inside a defer_call_begin()/defer_call_end() section */
+static void virtio_notify_irqfd_deferred_fn(void *opaque)
+{
+    EventNotifier *notifier = opaque;
+    VirtQueue *vq = container_of(notifier, VirtQueue, guest_notifier);
+
+    trace_virtio_notify_irqfd_deferred_fn(vq->vdev, vq);
+    event_notifier_set(notifier);
+}
+
 void virtio_notify_irqfd(VirtIODevice *vdev, VirtQueue *vq)
 {
     WITH_RCU_READ_LOCK_GUARD() {
@@ -2452,7 +2463,7 @@ void virtio_notify_irqfd(VirtIODevice *vdev, VirtQueue *vq)
      * to an atomic operation.
      */
     virtio_set_isr(vq->vdev, 0x1);
-    event_notifier_set(&vq->guest_notifier);
+    defer_call(virtio_notify_irqfd_deferred_fn, &vq->guest_notifier);
 }
 
 static void virtio_irq(VirtQueue *vq)
diff --git a/util/thread-pool.c b/util/thread-pool.c
index e3d8292d14..d84961779a 100644
--- a/util/thread-pool.c
+++ b/util/thread-pool.c
@@ -15,6 +15,7 @@
  * GNU GPL, version 2 or (at your option) any later version.
  */
 #include "qemu/osdep.h"
+#include "qemu/defer-call.h"
 #include "qemu/queue.h"
 #include "qemu/thread.h"
 #include "qemu/coroutine.h"
@@ -175,6 +176,8 @@ static void thread_pool_completion_bh(void *opaque)
     ThreadPool *pool = opaque;
     ThreadPoolElement *elem, *next;
 
+    defer_call_begin(); /* cb() may use defer_call() to coalesce work */
+
 restart:
     QLIST_FOREACH_SAFE(elem, &pool->head, all, next) {
         if (elem->state != THREAD_DONE) {
@@ -208,6 +211,8 @@ restart:
             qemu_aio_unref(elem);
         }
     }
+
+    defer_call_end();
 }
 
 static void thread_pool_cancel(BlockAIOCB *acb)
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 7109cf1a3b..29f4f543ad 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -73,6 +73,7 @@ virtqueue_fill(void *vq, const void *elem, unsigned int len, unsigned int idx) "
 virtqueue_flush(void *vq, unsigned int count) "vq %p count %u"
 virtqueue_pop(void *vq, void *elem, unsigned int in_num, unsigned int out_num) "vq %p elem %p in_num %u out_num %u"
 virtio_queue_notify(void *vdev, int n, void *vq) "vdev %p n %d vq %p"
+virtio_notify_irqfd_deferred_fn(void *vdev, void *vq) "vdev %p vq %p"
 virtio_notify_irqfd(void *vdev, void *vq) "vdev %p vq %p"
 virtio_notify(void *vdev, void *vq) "vdev %p vq %p"
 virtio_set_status(void *vdev, uint8_t val) "vdev %p val %u"
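
Editor's note, not part of the patch: for readers who have not seen the
defer_call() API (declared in "qemu/defer-call.h", which the diff above
includes), the following self-contained toy model illustrates the semantics
the commit message relies on. Calls made inside a
defer_call_begin()/defer_call_end() section are coalesced and run once when
the section ends; calls made outside a section run immediately. The names,
the fixed-size queue, and the deduplication by (fn, opaque) pair below are
assumptions of this sketch, not QEMU's actual implementation.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

typedef void (*DeferCallFn)(void *opaque);

#define MAX_DEFERRED 16

static struct {
    DeferCallFn fn;
    void *opaque;
} deferred[MAX_DEFERRED];
static int num_deferred;
static bool in_section;

static void defer_call_begin(void)
{
    in_section = true;
}

static void defer_call(DeferCallFn fn, void *opaque)
{
    if (!in_section) {
        fn(opaque);                     /* outside a section: run immediately */
        return;
    }
    for (int i = 0; i < num_deferred; i++) {
        if (deferred[i].fn == fn && deferred[i].opaque == opaque) {
            return;                     /* duplicate call: coalesce */
        }
    }
    assert(num_deferred < MAX_DEFERRED);
    deferred[num_deferred].fn = fn;
    deferred[num_deferred].opaque = opaque;
    num_deferred++;
}

static void defer_call_end(void)
{
    in_section = false;
    for (int i = 0; i < num_deferred; i++) {
        deferred[i].fn(deferred[i].opaque); /* flush each coalesced call once */
    }
    num_deferred = 0;
}

/* Stand-in for virtio_notify_irqfd_deferred_fn(): one guest notification */
static void notify(void *opaque)
{
    printf("notify %s\n", (const char *)opaque);
}

int main(void)
{
    /*
     * Completion loop: three requests finish on one virtqueue, but only a
     * single notification is sent when the section ends.
     */
    defer_call_begin();
    defer_call(notify, "vq0");
    defer_call(notify, "vq0");
    defer_call(notify, "vq0");
    defer_call_end();

    /* Outside a section, defer_call() invokes the callback immediately. */
    defer_call(notify, "vq0");
    return 0;
}

Compiling and running this prints "notify vq0" twice: once for the whole
batched section and once for the immediate call outside it.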