From patchwork Mon Nov 9 14:35:48 2020
X-Patchwork-Submitter: Vivek Goyal <vgoyal@redhat.com>
X-Patchwork-Id: 11891643
Date: Mon, 9 Nov 2020 09:35:48 -0500
From: Vivek Goyal <vgoyal@redhat.com>
To: qemu-devel@nongnu.org, virtio-fs-list
Subject: [PATCH V2] virtiofsd: Use --thread-pool-size=0 to mean no thread pool
Message-ID: <20201109143548.GA1479853@redhat.com>
Cc: jose.carlos.venegas.munoz@intel.com, "Dr. David Alan Gilbert",
    Stefan Hajnoczi

Right now we create a thread pool, and the main thread hands each request
over to a worker thread in the pool for processing. The number of threads
in the pool can be controlled with the --thread-pool-size option.

In tests we have noticed that many workloads perform better if we do not
use a thread pool at all and instead process all requests in the context
of the thread receiving them. Hence, give the user an option to run
virtiofsd without a thread pool.

To implement this, I have reused the existing --thread-pool-size option,
which defines the maximum number of threads in the pool. A thread pool
size of zero currently freezes the thread pool, and I can't see why anyone
would start virtiofsd with a frozen thread pool (and hence a frozen file
system). So I am redefining --thread-pool-size=0 to mean: don't use a
thread pool; instead, process each request in the context of the thread
receiving it from the queue.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Dr. David Alan Gilbert
Reviewed-by: Stefan Hajnoczi
---
 tools/virtiofsd/fuse_virtio.c | 36 ++++++++++++++++++++++++++---------
 1 file changed, 27 insertions(+), 9 deletions(-)

diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index 83ba07c6cd..944b9a577c 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -588,13 +588,18 @@ static void *fv_queue_thread(void *opaque)
     struct VuDev *dev = &qi->virtio_dev->dev;
     struct VuVirtq *q = vu_get_queue(dev, qi->qidx);
     struct fuse_session *se = qi->virtio_dev->se;
-    GThreadPool *pool;
-
-    pool = g_thread_pool_new(fv_queue_worker, qi, se->thread_pool_size, FALSE,
-                             NULL);
-    if (!pool) {
-        fuse_log(FUSE_LOG_ERR, "%s: g_thread_pool_new failed\n", __func__);
-        return NULL;
+    GThreadPool *pool = NULL;
+    GList *req_list = NULL;
+
+    if (se->thread_pool_size) {
+        fuse_log(FUSE_LOG_DEBUG, "%s: Creating thread pool for Queue %d\n",
+                 __func__, qi->qidx);
+        pool = g_thread_pool_new(fv_queue_worker, qi, se->thread_pool_size,
+                                 FALSE, NULL);
+        if (!pool) {
+            fuse_log(FUSE_LOG_ERR, "%s: g_thread_pool_new failed\n", __func__);
+            return NULL;
+        }
     }
 
     fuse_log(FUSE_LOG_INFO, "%s: Start for queue %d kick_fd %d\n", __func__,
@@ -669,14 +674,27 @@ static void *fv_queue_thread(void *opaque)
 
             req->reply_sent = false;
 
-            g_thread_pool_push(pool, req, NULL);
+            if (!se->thread_pool_size) {
+                req_list = g_list_prepend(req_list, req);
+            } else {
+                g_thread_pool_push(pool, req, NULL);
+            }
         }
 
         pthread_mutex_unlock(&qi->vq_lock);
         pthread_rwlock_unlock(&qi->virtio_dev->vu_dispatch_rwlock);
+
+        /* Process all the requests. */
+        if (!se->thread_pool_size && req_list != NULL) {
+            g_list_foreach(req_list, fv_queue_worker, qi);
+            g_list_free(req_list);
+            req_list = NULL;
+        }
     }
 
-    g_thread_pool_free(pool, FALSE, TRUE);
+    if (pool) {
+        g_thread_pool_free(pool, FALSE, TRUE);
+    }
 
     return NULL;
 }
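
Not part of the patch: a minimal, standalone GLib sketch of the dispatch
pattern the change implements, in case it helps review. With a non-zero
pool size, requests are pushed to a GThreadPool; with size zero, they are
collected on a GList and processed inline in the receiving thread. The
Request type, process_request() and handle_queue() below are made up for
illustration; only the GLib calls mirror the patch.

#include <glib.h>
#include <stdio.h>

typedef struct {
    int id;                         /* hypothetical request payload */
} Request;

/* GFunc callback: used both by the thread pool and by g_list_foreach() */
static void process_request(gpointer data, gpointer user_data)
{
    Request *req = data;
    printf("processing request %d\n", req->id);
    g_free(req);
}

static void handle_queue(unsigned thread_pool_size)
{
    GThreadPool *pool = NULL;
    GList *req_list = NULL;

    if (thread_pool_size) {
        pool = g_thread_pool_new(process_request, NULL, thread_pool_size,
                                 FALSE, NULL);
    }

    for (int i = 0; i < 4; i++) {   /* pretend we popped 4 requests */
        Request *req = g_new0(Request, 1);
        req->id = i;
        if (!thread_pool_size) {
            req_list = g_list_prepend(req_list, req);   /* defer */
        } else {
            g_thread_pool_push(pool, req, NULL);        /* hand off */
        }
    }

    if (!thread_pool_size && req_list != NULL) {
        /* No pool: process everything in this thread, as the patch does
         * after releasing the vq/dispatch locks. */
        g_list_foreach(req_list, process_request, NULL);
        g_list_free(req_list);
    }

    if (pool) {
        g_thread_pool_free(pool, FALSE, TRUE);  /* wait for queued work */
    }
}

int main(void)
{
    handle_queue(0);    /* --thread-pool-size=0 behaviour: inline */
    handle_queue(2);    /* small thread pool */
    return 0;
}

With the patch applied, starting virtiofsd with --thread-pool-size=0 takes
the inline path; any non-zero value keeps the existing thread-pool
behaviour. (The exact virtiofsd command line depends on your setup; only
the --thread-pool-size option itself is what this patch redefines.)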