From patchwork Tue Nov 17 19:41:31 2020
X-Patchwork-Submitter: Vivek Goyal
X-Patchwork-Id: 11913335
Date: Tue, 17 Nov 2020 14:41:31 -0500
From: Vivek Goyal <vgoyal@redhat.com>
To: qemu-devel@nongnu.org, virtio-fs-list
Subject: [PATCH v3] virtiofsd: Use --thread-pool-size=0 to mean no thread pool
Message-ID: <20201117194131.GA96587@redhat.com>
Cc: jose.carlos.venegas.munoz@intel.com, dgilbert@redhat.com,
    stefanha@redhat.com

This is v3 of the patch. A minor change since v2 is that the list is
now reversed before requests are executed. We prepend elements to the
list, so without the reversal we would process requests in the reverse
of their arrival order. We don't guarantee any ordering, but it does
not hurt to process requests in the order received where possible.

Right now we create a thread pool, and the main thread hands each
request over to a thread in the pool for processing. The number of
threads in the pool is controlled by the --thread-pool-size option.

In tests we have noticed that many workloads get better performance if
we don't use a thread pool at all and instead process every request in
the context of the thread receiving it. Hence, give the user an option
to run virtiofsd without a thread pool.

To implement this, I have reused the existing --thread-pool-size
option, which sets the maximum number of threads in the pool. A thread
pool size of zero currently freezes the thread pool, and I can't see
why one would start virtiofsd with a frozen thread pool (and hence a
frozen file system). So I am redefining --thread-pool-size=0 to mean:
don't use a thread pool. Instead, process each request in the context
of the thread receiving it from the queue.
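To illustrate the prepend-then-reverse pattern described above, here is
a standalone sketch using plain GLib (not virtiofsd code; process_req
is a hypothetical stand-in for fv_queue_worker):

    #include <glib.h>
    #include <stdio.h>

    /* GFunc callback; stands in for fv_queue_worker. */
    static void process_req(gpointer data, gpointer user_data)
    {
        printf("processing request %d\n", GPOINTER_TO_INT(data));
    }

    int main(void)
    {
        GList *req_list = NULL;

        /* Requests arrive as 1, 2, 3; g_list_prepend() is O(1) but
         * stores them as 3, 2, 1. */
        for (int i = 1; i <= 3; i++) {
            req_list = g_list_prepend(req_list, GINT_TO_POINTER(i));
        }

        /* One O(n) reversal restores arrival order before processing. */
        req_list = g_list_reverse(req_list);
        g_list_foreach(req_list, process_req, NULL);
        g_list_free(req_list);
        return 0;
    }

With this patch, the no-thread-pool mode would be selected on the
command line roughly as follows (an illustrative invocation; the other
options shown are just the usual virtiofsd ones and not part of this
patch):

    virtiofsd --socket-path=/tmp/vhostqemu -o source=/srv/share \
        --thread-pool-size=0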
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tools/virtiofsd/fuse_virtio.c | 37 ++++++++++++++++++++++++++---------
 1 file changed, 28 insertions(+), 9 deletions(-)

diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index 83ba07c6cd..6c56e606ef 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -588,13 +588,18 @@ static void *fv_queue_thread(void *opaque)
     struct VuDev *dev = &qi->virtio_dev->dev;
     struct VuVirtq *q = vu_get_queue(dev, qi->qidx);
     struct fuse_session *se = qi->virtio_dev->se;
-    GThreadPool *pool;
-
-    pool = g_thread_pool_new(fv_queue_worker, qi, se->thread_pool_size, FALSE,
-                             NULL);
-    if (!pool) {
-        fuse_log(FUSE_LOG_ERR, "%s: g_thread_pool_new failed\n", __func__);
-        return NULL;
+    GThreadPool *pool = NULL;
+    GList *req_list = NULL;
+
+    if (se->thread_pool_size) {
+        fuse_log(FUSE_LOG_DEBUG, "%s: Creating thread pool for Queue %d\n",
+                 __func__, qi->qidx);
+        pool = g_thread_pool_new(fv_queue_worker, qi, se->thread_pool_size,
+                                 FALSE, NULL);
+        if (!pool) {
+            fuse_log(FUSE_LOG_ERR, "%s: g_thread_pool_new failed\n", __func__);
+            return NULL;
+        }
     }
 
     fuse_log(FUSE_LOG_INFO, "%s: Start for queue %d kick_fd %d\n", __func__,
@@ -669,14 +674,28 @@ static void *fv_queue_thread(void *opaque)
 
             req->reply_sent = false;
 
-            g_thread_pool_push(pool, req, NULL);
+            if (!se->thread_pool_size) {
+                req_list = g_list_prepend(req_list, req);
+            } else {
+                g_thread_pool_push(pool, req, NULL);
+            }
         }
 
         pthread_mutex_unlock(&qi->vq_lock);
         pthread_rwlock_unlock(&qi->virtio_dev->vu_dispatch_rwlock);
+
+        /* Process all the requests. */
+        if (!se->thread_pool_size && req_list != NULL) {
+            req_list = g_list_reverse(req_list);
+            g_list_foreach(req_list, fv_queue_worker, qi);
+            g_list_free(req_list);
+            req_list = NULL;
+        }
     }
 
-    g_thread_pool_free(pool, FALSE, TRUE);
+    if (pool) {
+        g_thread_pool_free(pool, FALSE, TRUE);
+    }
 
     return NULL;
 }
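A side note on why the same worker function can serve both dispatch
paths: g_thread_pool_new() and g_list_foreach() both take a GFunc,
i.e. void (*)(gpointer data, gpointer user_data), so fv_queue_worker
can be pushed to the pool or invoked inline unchanged. A minimal
sketch (a hypothetical harness, not code from this patch):

    #include <glib.h>
    #include <stdio.h>

    /* Same GFunc signature used by both dispatch paths. */
    static void worker(gpointer data, gpointer user_data)
    {
        printf("req %d handled via %s\n", GPOINTER_TO_INT(data),
               (const char *)user_data);
    }

    int main(void)
    {
        /* Pooled path: up to 4 worker threads. */
        GThreadPool *pool = g_thread_pool_new(worker, "pool", 4, FALSE, NULL);
        g_thread_pool_push(pool, GINT_TO_POINTER(1), NULL);
        g_thread_pool_free(pool, FALSE, TRUE); /* wait for queued work */

        /* Inline path: same callback, run on the current thread. */
        GList *reqs = g_list_prepend(NULL, GINT_TO_POINTER(2));
        g_list_foreach(reqs, worker, "inline");
        g_list_free(reqs);
        return 0;
    }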