From patchwork Wed Aug 10 10:53:33 2022
X-Patchwork-Submitter: Carlos Maiolino
X-Patchwork-Id: 12940436
Subject: [PATCH 2/2] Rename worker threads from xfsdump's documentation
From: Carlos Maiolino
To: linux-xfs@vger.kernel.org
Date: Wed, 10 Aug 2022 12:53:33 +0200
Message-ID: <166012881358.10085.7894829376842264679.stgit@orion>
In-Reply-To: <166012867440.10085.15666482309699207253.stgit@orion>
References: <166012867440.10085.15666482309699207253.stgit@orion>
User-Agent: StGit/1.4
X-Mailing-List: linux-xfs@vger.kernel.org

While we've already removed the word 'slave' from the code, the
documentation still needs to be updated.

Signed-off-by: Carlos Maiolino
Reviewed-by: Darrick J. Wong
---
 doc/xfsdump.html | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/doc/xfsdump.html b/doc/xfsdump.html
index e37e362..2faa65e 100644
--- a/doc/xfsdump.html
+++ b/doc/xfsdump.html
@@ -286,7 +286,7 @@ of a dump/restore session to multiple drives.
      |            |            |
 4.   O            O            O         ring buffers common/ring.[ch]
      |            |            |
-5. slave        slave        slave       ring_create(... ring_slave_entry ...)
+5. worker       worker       worker      ring_create(... ring_worker_entry ...)
    thread       thread       thread
      |            |            |
 6.  drive        drive        drive      physical drives

@@ -306,7 +306,7 @@ The process hierachy is shown above.
 main() first initialises the drive managers with calls to
 the drive_init functions.
 In addition to choosing and assigning drive strategies and ops for
 each drive object, the drive managers intialise a ring buffer and (for
-devices other than simple UNIX files) sproc off a slave thread that
+devices other than simple UNIX files) sproc off a worker thread that
 that handles IO to the tape device. This initialisation happens in
 the drive_manager code and is not directly visible from main().

@@ -316,31 +316,31 @@ sprocs.
 Each child begins execution in childmain(), runs either
 content_stream_dump or content_stream_restore and exits with the
 return code from these functions.

-Both the stream manager processes and the drive manager slaves
+Both the stream manager processes and the drive manager workers
 set their signal disposition to ignore HUP, INT, QUIT, PIPE, ALRM,
 CLD (and for the stream manager TERM as well).

-The drive manager slave processes are much simpler, and are
+The drive manager worker processes are much simpler, and are
 initialised with a call to ring_create, and begin execution in
-ring_slave_func. The ring structure must also be initialised with
+ring_worker_func. The ring structure must also be initialised with
 two ops that are called by the spawned thread: a ring read op,
 and a write op. The stream manager communicates with the tape
 manager across this ring structure using Ring_put's and Ring_get's.
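As a rough illustration of the interface described above, ring_create can be modelled as below. Only the names ring_create, ring_worker_entry and the read/write ops come from the text; the struct layouts, field names and demo callbacks are assumptions for illustration, not xfsdump's actual definitions in common/ring.[ch]:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical message and ring layouts; xfsdump's real ones differ. */
typedef enum { RING_OP_READ, RING_OP_WRITE, RING_OP_DIE } ring_op_t;

typedef struct ring_msg {
    ring_op_t rm_op;
    int       rm_stat;
} ring_msg_t;

typedef struct ring {
    int (*r_readfunc)(ring_msg_t *);   /* ring read op, run by the worker */
    int (*r_writefunc)(ring_msg_t *);  /* ring write op, run by the worker */
} ring_t;

/* ring_create: record the two ops the spawned worker thread will call. */
ring_t *ring_create(int (*readfunc)(ring_msg_t *),
                    int (*writefunc)(ring_msg_t *))
{
    ring_t *ringp = calloc(1, sizeof(*ringp));
    ringp->r_readfunc = readfunc;
    ringp->r_writefunc = writefunc;
    return ringp;
}

/* Trivial stand-ins for the drive manager's real tape I/O routines. */
static int demo_read(ring_msg_t *m)  { m->rm_stat = 0; return 0; }
static int demo_write(ring_msg_t *m) { m->rm_stat = 0; return 0; }
```

The stream manager then exchanges such messages with the worker thread through Ring_put and Ring_get rather than calling the ops directly.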

-The slave thread sits in a loop processing messages that come across
+The worker thread sits in a loop processing messages that come across
 the ring buffer. It ignores signals and does not terminate until it
 receives a RING_OP_DIE message. It then exits 0.

 The main process sleeps waiting for any of its children to die (ie.
 waiting for a SIGCLD). All children that it cares about (stream
-managers and ring buffer slaves) are registered through the child
+managers and ring buffer workers) are registered through the child
 manager abstraction. When a child dies wait status and other info
 is stored with its entry in the child manager. main() ignores the
 deaths of children (and grandchildren) that are not registered
 through the child manager. The return status of these subprocesses
 is checked and in the case of an error is used to determine the
 overall exit code.
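The reaping logic above can be sketched as follows. reap_child is a hypothetical helper, not the actual child manager API; the real code stores the wait status in the child's registry entry rather than returning it:

```c
#include <assert.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Reap one registered child and derive an exit code from its wait
 * status, in the spirit of the child manager described above. */
int reap_child(pid_t registered)
{
    int status;
    if (waitpid(registered, &status, 0) != registered)
        return -1;                      /* not a child we registered */
    if (WIFEXITED(status))
        return WEXITSTATUS(status);     /* feeds the overall exit code */
    return 1;                           /* killed by a signal: an error */
}
```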

-We do not expect slave threads to ever die unexpectedly: they ignore
+We do not expect worker threads to ever die unexpectedly: they ignore
 most signals and only exit when they receive a RING_OP_DIE at which
 point they drop out of the message processing loop and always signal
 success.

@@ -1680,35 +1680,35 @@ If xfsdump/xfsrestore is running single-threaded (-Z option)
 or is running on Linux (which is not multi-threaded) then
 records are read/written straight to the tape. If it is
 running multi-threaded then a circular buffer is used as an intermediary
-between the client and slave threads.
+between the client and worker threads.

 Initially drive_init1() calls ds_instantiate() which if dump/restore
 is running multi-threaded, creates the ring buffer with ring_create
 which initialises
-the state to RING_STAT_INIT and sets up the slave thread with
-ring_slave_entry.
+the state to RING_STAT_INIT and sets up the worker thread with
+ring_worker_entry.

 ds_instantiate()
   ring_create(...,ring_read, ring_write,...)
     - allocate and init buffers
     - set rm_stat = RING_STAT_INIT
-    start up slave thread with ring_slave_entry
+    start up worker thread with ring_worker_entry
 
-The slave spends its time in a loop getting items from the
+The worker spends its time in a loop getting items from the
 active queue, doing the read or write operation and placing
 the result back on the ready queue.
-slave
+worker
 ======
-ring_slave_entry()
+ring_worker_entry()
   loop
-    ring_slave_get() - get from active queue
+    ring_worker_get() - get from active queue
     case rm_op
       RING_OP_READ -> ringp->r_readfunc
       RING_OP_WRITE -> ringp->r_writefunc
       ..
     endcase
-    ring_slave_put() - puts on ready queue
+    ring_worker_put() - puts on ready queue
   endloop
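The pseudocode above can be turned into a compilable sketch. For brevity the queues are reduced to plain index arrays and everything runs on one thread; in xfsdump proper, ring_worker_entry runs on its own thread and the struct and field names beyond the RING_OP_* values and function names shown above are assumptions:

```c
#include <assert.h>
#include <stddef.h>

typedef enum { RING_OP_READ, RING_OP_WRITE, RING_OP_DIE } ring_op_t;
typedef struct { ring_op_t rm_op; int rm_stat; } ring_msg_t;

#define NMSG 8

typedef struct {
    ring_msg_t msg[NMSG];
    int active[NMSG]; int a_head, a_tail;  /* work queued for the worker */
    int ready[NMSG];  int r_head, r_tail;  /* results back to the client */
    int (*r_readfunc)(ring_msg_t *);
    int (*r_writefunc)(ring_msg_t *);
} ring_t;

/* ring_worker_get: next message index off the active queue. */
static int ring_worker_get(ring_t *r)         { return r->active[r->a_head++ % NMSG]; }
/* ring_worker_put: finished message back onto the ready queue. */
static void ring_worker_put(ring_t *r, int i) { r->ready[r->r_tail++ % NMSG] = i; }

/* The loop from the pseudocode: dispatch on rm_op until RING_OP_DIE. */
void ring_worker_entry(ring_t *r)
{
    for (;;) {
        int i = ring_worker_get(r);
        ring_msg_t *m = &r->msg[i];
        switch (m->rm_op) {
        case RING_OP_READ:  m->rm_stat = r->r_readfunc(m);  break;
        case RING_OP_WRITE: m->rm_stat = r->r_writefunc(m); break;
        case RING_OP_DIE:   return;  /* drop out of the loop */
        }
        ring_worker_put(r, i);
    }
}

/* Stand-ins for the real read/write ops, returning distinct statuses. */
static int demo_read(ring_msg_t *m)  { (void)m; return 1; }
static int demo_write(ring_msg_t *m) { (void)m; return 2; }
```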
 
@@ -1778,7 +1778,7 @@ prepare_drive()

 For each do_read call in the multi-threaded
 case, we have two sides to the story: the client which is coming
-from code in content.c and the slave which is a simple
+from code in content.c and the worker which is a simple
 thread just satisfying I/O requests. From the point of
 view of the ring buffer, these are the steps
 which happen for reading:
@@ -1786,7 +1786,7 @@ which happen for reading:

 • client removes msg from ready queue
 • client wants to read, so sets op field to READ (RING_OP_READ)
   and puts on active queue
-• slave removes msg from active queue,
+• worker removes msg from active queue,
   invokes client read function, sets status field: OK/ERROR,
   puts msg on ready queue
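Those three steps can be traced in code. This is a single-threaded sketch in which queue membership is collapsed into a single `on` field; all names except the READ/OK/ERROR vocabulary above are illustrative assumptions:

```c
#include <assert.h>

typedef enum { RING_OP_NONE, RING_OP_READ } ring_op_t;
typedef enum { RING_STAT_OK, RING_STAT_ERROR } ring_stat_t;
typedef enum { Q_READY, Q_ACTIVE } queue_t;

typedef struct {
    queue_t     on;       /* which queue the message currently sits on */
    ring_op_t   rm_op;
    ring_stat_t rm_stat;
} ring_msg_t;

/* Stand-in for the client read function the worker invokes. */
static int demo_readfunc(ring_msg_t *m) { (void)m; return 0; }

/* One read request, walked through the three steps above. */
void do_one_read(ring_msg_t *m)
{
    /* client removes msg from the ready queue... */
    assert(m->on == Q_READY);
    /* ...sets the op field to READ and puts it on the active queue */
    m->rm_op = RING_OP_READ;
    m->on = Q_ACTIVE;
    /* worker removes it from the active queue, invokes the client read
     * function, sets the status field, and puts it back on ready */
    m->rm_stat = demo_readfunc(m) == 0 ? RING_STAT_OK : RING_STAT_ERROR;
    m->on = Q_READY;
}
```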