@@ -286,7 +286,7 @@ of a dump/restore session to multiple drives.
| | |
4. O O O ring buffers common/ring.[ch]
| | |
-5. slave slave slave ring_create(... ring_slave_entry ...)
+5. worker worker worker ring_create(... ring_worker_entry ...)
thread thread thread
| | |
6. drive drive drive physical drives
@@ -306,7 +306,7 @@ The process hierarchy is shown above. main() first initialises
the drive managers with calls to the drive_init functions. In
addition to choosing and assigning drive strategies and ops for each
drive object, the drive managers initialise a ring buffer and (for
-devices other than simple UNIX files) sproc off a slave thread that
+devices other than simple UNIX files) sproc off a worker thread
that handles IO to the tape device. This initialisation happens in the
drive_manager code and is not directly visible from main().
<p>
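+Since sproc() is IRIX-specific, the equivalent spawn can be sketched
+with POSIX threads; the pthread-style signature of ring_worker_entry
+and the helper name below are assumptions made for illustration:
+<pre>
+#include <pthread.h>
+
+void *ring_worker_entry(void *ringctxp);    /* thread body in common/ring.c */
+
+/* hypothetical helper: spawn the per-drive IO worker thread */
+static int
+spawn_drive_worker(void *ringp)
+{
+    pthread_t tid;
+
+    return pthread_create(&tid, NULL, ring_worker_entry, ringp);
+}
+</pre>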
@@ -316,31 +316,31 @@ sprocs. Each child begins execution in childmain(), runs either
content_stream_dump or content_stream_restore and exits with the
return code from these functions.
<p>
-Both the stream manager processes and the drive manager slaves
+Both the stream manager processes and the drive manager workers
set their signal disposition to ignore HUP, INT, QUIT, PIPE,
ALRM, CLD (and for the stream manager TERM as well).
<p>
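+A minimal sketch of that disposition, using the standard signal()
+interface; ignore_signals() is a hypothetical helper, not an actual
+xfsdump routine:
+<pre>
+#include <signal.h>
+
+/* run early in stream managers and ring workers */
+static void
+ignore_signals(int is_stream_manager)
+{
+    (void)signal(SIGHUP, SIG_IGN);
+    (void)signal(SIGINT, SIG_IGN);
+    (void)signal(SIGQUIT, SIG_IGN);
+    (void)signal(SIGPIPE, SIG_IGN);
+    (void)signal(SIGALRM, SIG_IGN);
+    (void)signal(SIGCHLD, SIG_IGN);    /* SIGCLD is its System V name */
+    if (is_stream_manager)
+        (void)signal(SIGTERM, SIG_IGN);
+}
+</pre>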
-The drive manager slave processes are much simpler, and are
+The drive manager worker processes are much simpler, and are
initialised with a call to ring_create, and begin execution in
-ring_slave_func. The ring structure must also be initialised with
+ring_worker_entry. The ring structure must also be initialised with
two ops that are called by the spawned thread: a ring read op, and a write op.
The stream manager communicates with the tape manager across this ring
structure using Ring_put's and Ring_get's.
<p>
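+For illustration, the client side of queueing one write could look
+like the sketch below; the Ring_get/Ring_put signatures and the msg
+field names are simplified assumptions:
+<pre>
+#include <string.h>
+#include "ring.h"    /* ring_t, ring_msg_t, RING_OP_WRITE */
+
+/* queue one record for the worker to write */
+static void
+queue_write(ring_t *ringp, char *recp, size_t bufsz)
+{
+    ring_msg_t *msgp;
+
+    msgp = Ring_get(ringp);             /* free msg off the ready queue */
+    memcpy(msgp->rm_bufp, recp, bufsz); /* fill in the record */
+    msgp->rm_op = RING_OP_WRITE;        /* ask the worker to write it */
+    Ring_put(ringp, msgp);              /* hand it to the active queue */
+}
+</pre>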
-The slave thread sits in a loop processing messages that come across
+The worker thread sits in a loop processing messages that come across
the ring buffer. It ignores signals and does not terminate until it
receives a RING_OP_DIE message. It then exits 0.
<p>
The main process sleeps waiting for any of its children to die
(i.e. waiting for a SIGCLD). All children that it cares about (stream
-managers and ring buffer slaves) are registered through the child
+managers and ring buffer workers) are registered through the child
manager abstraction. When a child dies, its wait status and other info are
stored with its entry in the child manager. main() ignores the deaths
of children (and grandchildren) that are not registered through the child
manager. The return status of the registered subprocesses is checked
and, in the case of an error, is used to determine the overall exit code.
<p>
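+In outline the reaping step might look like this minimal sketch,
+assuming POSIX wait semantics; the two child-manager hooks are
+hypothetical stand-ins for the real abstraction:
+<pre>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+int  child_is_registered(pid_t pid);            /* hypothetical */
+void record_child_status(pid_t pid, int status); /* hypothetical */
+
+/* main()'s reaping step: sleep until any child dies, then record it */
+static void
+reap_one_child(void)
+{
+    int status;
+    pid_t pid = waitpid(-1, &status, 0);
+
+    if (pid > 0 && child_is_registered(pid))
+        record_child_status(pid, status);
+    /* deaths of unregistered children and grandchildren are ignored */
+}
+</pre>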
-We do not expect slave threads to ever die unexpectedly: they ignore
+We do not expect worker threads to ever die unexpectedly: they ignore
most signals and only exit when they receive a RING_OP_DIE, at which
point they drop out of the message processing loop and always signal success.
<p>
@@ -1680,35 +1680,35 @@ If xfsdump/xfsrestore is running single-threaded (-Z option)
or is running on Linux (which is not multi-threaded) then
records are read/written straight to the tape. If it is running
multi-threaded then a circular buffer is used as an intermediary
-between the client and slave threads.
+between the client and worker threads.
<p>
Initially <i>drive_init1()</i> calls <i>ds_instantiate()</i> which,
if dump/restore is running multi-threaded,
creates the ring buffer with <i>ring_create</i>, which initialises
-the state to RING_STAT_INIT and sets up the slave thread with
-ring_slave_entry.
+the state to RING_STAT_INIT and sets up the worker thread with
+ring_worker_entry.
<pre>
ds_instantiate()
ring_create(...,ring_read, ring_write,...)
- allocate and init buffers
- set rm_stat = RING_STAT_INIT
- start up slave thread with ring_slave_entry
+ start up worker thread with ring_worker_entry
</pre>
-The slave spends its time in a loop getting items from the
+The worker spends its time in a loop getting items from the
active queue, doing the read or write operation and placing the result
back on the ready queue.
<pre>
-slave
+worker
======
-ring_slave_entry()
+ring_worker_entry()
loop
- ring_slave_get() - get from active queue
+ ring_worker_get() - get from active queue
case rm_op
RING_OP_READ -> ringp->r_readfunc
RING_OP_WRITE -> ringp->r_writefunc
..
endcase
- ring_slave_put() - puts on ready queue
+ ring_worker_put() - puts on ready queue
endloop
</pre>
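+In C terms the loop is roughly the following sketch; the names follow
+common/ring.[ch] but the signatures and status handling are simplified:
+<pre>
+#include <stdlib.h>
+#include "ring.h"    /* ring_t, ring_msg_t, RING_OP_*, RING_STAT_* */
+
+static void
+worker_loop(ring_t *ringp)
+{
+    for (;;) {
+        /* block until the client puts a msg on the active queue */
+        ring_msg_t *msgp = ring_worker_get(ringp);
+
+        switch (msgp->rm_op) {
+        case RING_OP_READ:
+            msgp->rm_stat = ringp->r_readfunc(msgp->rm_bufp)
+                            ? RING_STAT_ERROR : RING_STAT_OK;
+            break;
+        case RING_OP_WRITE:
+            msgp->rm_stat = ringp->r_writefunc(msgp->rm_bufp)
+                            ? RING_STAT_ERROR : RING_STAT_OK;
+            break;
+        case RING_OP_DIE:
+            exit(0);    /* the only exit; always signals success */
+        }
+        ring_worker_put(ringp, msgp);    /* result onto the ready queue */
+    }
+}
+</pre>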
@@ -1778,7 +1778,7 @@ prepare_drive()
<p>
For each <i>do_read</i> call in the multi-threaded case,
we have two sides to the story: the client, which is coming
-from code in <i>content.c</i> and the slave which is a simple
+from code in <i>content.c</i> and the worker, which is a simple
thread just satisfying I/O requests.
From the point of view of the ring buffer, these are the steps
which happen for reading (see the sketch after this list):
@@ -1786,7 +1786,7 @@ which happen for reading:
<li>client removes msg from ready queue
<li>client wants to read, so sets op field to READ (RING_OP_READ)
and puts on active queue
-<li>slave removes msg from active queue,
+<li>worker removes msg from active queue,
invokes client read function,
sets status field: OK/ERROR,
puts msg on ready queue
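+A sketch of the client's side of that exchange; signatures are
+simplified and consume_record() is a hypothetical stand-in for the
+content-layer code that uses the data:
+<pre>
+#include "ring.h"
+
+void consume_record(char *bufp);    /* hypothetical content-layer hook */
+
+/* one read, collapsed to a single msg in flight for clarity */
+static int
+client_read(ring_t *ringp)
+{
+    ring_msg_t *msgp;
+
+    msgp = Ring_get(ringp);      /* 1. msg off the ready queue */
+    msgp->rm_op = RING_OP_READ;  /* 2. ask the worker to read */
+    Ring_put(ringp, msgp);       /*    ... via the active queue */
+
+    msgp = Ring_get(ringp);      /* 3. collect the worker's result */
+    if (msgp->rm_stat != RING_STAT_OK)
+        return -1;               /* propagate the IO error */
+    consume_record(msgp->rm_bufp);
+    return 0;
+}
+</pre>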
@@ -446,8 +446,8 @@ msgstr ""
"zurück, Fehlernummer %d (%s)\n"
#: .././common/drive_minrmt.c:3823
-msgid "slave"
-msgstr "Slave"
+msgid "worker"
+msgstr "worker"
#: .././common/drive_minrmt.c:3891 .././common/drive_minrmt.c:3899
msgid "KB"
@@ -1327,8 +1327,8 @@ msgid "lock ordinal violation: tid %lu ord %d map %x\n"
msgstr "naruszenie porządku blokad: tid %lu ord %d map %x\n"
#: .././common/ring.c:127
-msgid "slave"
-msgstr "podrzędnego"
+msgid "worker"
+msgstr ""
#: .././common/util.c:188 .././dump/content.c:2867
#, c-format