generic/558: limit the number of spawned subprocesses

Message ID 2bb7705c-9a1a-6185-4554-9121e5cda710@redhat.com (mailing list archive)
State New, archived
Series generic/558: limit the number of spawned subprocesses

Commit Message

Mikulas Patocka July 11, 2023, 3:51 p.m. UTC
When I run the test 558 on bcachefs, it works like a fork-bomb and kills
the machine. The reason is that the "while" loop spawns "create_file"
subprocesses faster than they are able to complete.

This patch fixes the crash by limiting the number of subprocesses to 128.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>

---
 tests/generic/558 |    1 +
 1 file changed, 1 insertion(+)

Comments

Darrick J. Wong July 11, 2023, 11:44 p.m. UTC | #1
On Tue, Jul 11, 2023 at 05:51:42PM +0200, Mikulas Patocka wrote:
> When I run the test 558 on bcachefs, it works like a fork-bomb and kills
> the machine. The reason is that the "while" loop spawns "create_file"
> subprocesses faster than they are able to complete.
> 
> This patch fixes the crash by limiting the number of subprocesses to 128.
> 
> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> 
> ---
>  tests/generic/558 |    1 +
>  1 file changed, 1 insertion(+)
> 
> Index: xfstests-dev/tests/generic/558
> ===================================================================
> --- xfstests-dev.orig/tests/generic/558
> +++ xfstests-dev/tests/generic/558
> @@ -48,6 +48,7 @@ echo "Create $((loop * file_per_dir)) fi
>  while [ $i -lt $loop ]; do
>  	create_file $SCRATCH_MNT/testdir $file_per_dir $i >>$seqres.full 2>&1 &
>  	let i=$i+1
> +	if [ $((i % 128)) = 0 ]; then wait; fi

Hm.  $loop is (roughly) the number of free inodes divided by 1000.  This
test completes nearly instantly on XFS; how many free inodes does
bcachefs report after _scratch_mount?

XFS reports ~570k inodes, so it's "only" starting 570 processes.

I think it's probably wise to clamp $loop to something sane, but let's
get to the bottom of how the math went wrong and we got a forkbomb.

--D

>  done
>  wait
>  
>
Kent Overstreet July 12, 2023, 1:09 a.m. UTC | #2
On Tue, Jul 11, 2023 at 04:44:39PM -0700, Darrick J. Wong wrote:
> On Tue, Jul 11, 2023 at 05:51:42PM +0200, Mikulas Patocka wrote:
> > When I run the test 558 on bcachefs, it works like a fork-bomb and kills
> > the machine. The reason is that the "while" loop spawns "create_file"
> > subprocesses faster than they are able to complete.
> > 
> > This patch fixes the crash by limiting the number of subprocesses to 128.
> > 
> > Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> > 
> > ---
> >  tests/generic/558 |    1 +
> >  1 file changed, 1 insertion(+)
> > 
> > Index: xfstests-dev/tests/generic/558
> > ===================================================================
> > --- xfstests-dev.orig/tests/generic/558
> > +++ xfstests-dev/tests/generic/558
> > @@ -48,6 +48,7 @@ echo "Create $((loop * file_per_dir)) fi
> >  while [ $i -lt $loop ]; do
> >  	create_file $SCRATCH_MNT/testdir $file_per_dir $i >>$seqres.full 2>&1 &
> >  	let i=$i+1
> > +	if [ $((i % 128)) = 0 ]; then wait; fi
> 
> Hm.  $loop is (roughly) the number of free inodes divided by 1000.  This
> test completes nearly instantly on XFS; how many free inodes does
> bcachefs report after _scratch_mount?
> 
> XFS reports ~570k inodes, so it's "only" starting 570 processes.
> 
> I think it's probably wise to clamp $loop to something sane, but let's
> get to the bottom of how the math went wrong and we got a forkbomb.

It's because:
 - bcachefs doesn't even report a maximum number of inodes (IIRC);
   inodes are small and variable size (most fields are varints, typical
   inode size is 50-100 bytes).

 - and the kernel has a sysctl to limit the maximum number of open
   files, and it's got a sane default (this is what's supposed to save
   us from pinned inodes eating up all RAM), but systemd conveniently
   overwrites it to some absurd value...

I'd prefer to see this fixed properly, rather than just "fixing" the
test; userspace being able to pin all kernel memory this easily is a
real bug.

We could put a second hard cap on the maximum number of open files, and
base that on a percentage of total memory; a VFS inode is somewhere in
the ballpark of a kilobyte, so easy enough to calculate. And we could
make that percentage itself a sysctl, for the people who are really
crazy...
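
As a rough sketch of the numbers (assuming ~1 KiB per cached VFS inode
and an illustrative 10%-of-RAM budget; neither figure is kernel policy):

# Sketch: derive a memory-based open-file cap (illustrative only).
total_kb=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo)
cap=$((total_kb / 10))	# 10% of RAM in KiB ~= that many ~1 KiB inodes
echo "memory-derived cap: $cap (current fs.file-max: $(sysctl -n fs.file-max))"
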
Amir Goldstein July 12, 2023, 5:15 a.m. UTC | #3
On Wed, Jul 12, 2023 at 4:20 AM Kent Overstreet
<kent.overstreet@linux.dev> wrote:
>
> On Tue, Jul 11, 2023 at 04:44:39PM -0700, Darrick J. Wong wrote:
> > On Tue, Jul 11, 2023 at 05:51:42PM +0200, Mikulas Patocka wrote:
> > > When I run the test 558 on bcachefs, it works like a fork-bomb and kills
> > > the machine. The reason is that the "while" loop spawns "create_file"
> > > subprocesses faster than they are able to complete.
> > >

Please take a step back and ask why the test is spawning processes at all?
If all the test needs to do is "use up all inodes", a C helper would be
a much better way to accomplish that.

> > > This patch fixes the crash by limiting the number of subprocesses to 128.
> > >
> > > Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> > >
> > > ---
> > >  tests/generic/558 |    1 +
> > >  1 file changed, 1 insertion(+)
> > >
> > > Index: xfstests-dev/tests/generic/558
> > > ===================================================================
> > > --- xfstests-dev.orig/tests/generic/558
> > > +++ xfstests-dev/tests/generic/558
> > > @@ -48,6 +48,7 @@ echo "Create $((loop * file_per_dir)) fi
> > >  while [ $i -lt $loop ]; do
> > >     create_file $SCRATCH_MNT/testdir $file_per_dir $i >>$seqres.full 2>&1 &
> > >     let i=$i+1
> > > +   if [ $((i % 128)) = 0 ]; then wait; fi
> >
> > Hm.  $loop is (roughly) the number of free inodes divided by 1000.  This
> > test completes nearly instantly on XFS; how many free inodes does
> > bcachefs report after _scratch_mount?
> >
> > XFS reports ~570k inodes, so it's "only" starting 570 processes.
> >
> > I think it's probably wise to clamp $loop to something sane, but let's
> > get to the bottom of how the math went wrong and we got a forkbomb.
>
> It's because:
>  - bcachefs doesn't even report a maximum number of inodes (IIRC);
>    inodes are small and variable size (most fields are varints, typical
>    inode size is 50-100 bytes).
>

The purpose of this test is to "Stress test fs by using up all inodes",
so it seems that the test may not be appropriate for bcachefs?
_require_inode_limits should have detected that, but the expectation
of "no inode limit" to be reported as 0 free inodes is obviously not
a standard. I am not sure which fs, if any, meets this rule?

And yet, it is important for fstests to be able to tell if FSTYP should run
this test or not, so either we make _require_inode_limits aware
of specific FSTYP values, like _fstyp_has_non_default_seek_data_hole,
where there was no standard way of detecting the expected behavior,
or we agree on a generic heuristic that is good enough for
_require_inode_limits.
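
For illustration, a generic heuristic might look something like this
sketch (the 10M threshold is an arbitrary placeholder, not a tested
value):

# Hypothetical _require_inode_limits heuristic: treat "0 free inodes
# reported", or an enormous free count on a small scratch fs, as
# "no usable inode limit" and skip the test.
_require_inode_limits()
{
	local free_inodes=$(_get_free_inode $SCRATCH_MNT)
	if [ "$free_inodes" -eq 0 ] || [ "$free_inodes" -gt $((10 * 1024 * 1024)) ]; then
		_notrun "$FSTYP does not enforce a usable inode limit"
	fi
}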

>  - and the kernel has a sysctl to limit the maximum number of open
>    files, and it's got a sane default (this is what's supposed to save
>    us from pinned inodes eating up all RAM), but systemd conveniently
>    overwrites it to some absurd value...
>
> I'd prefer to see this fixed properly, rather than just "fixing" the
> test; userspace being able to pin all kernel memory this easily is a
> real bug.
>
> We could put a second hard cap on the maximum number of open files, and
> base that on a percentage of total memory; a VFS inode is somewhere in
>    the ballpark of a kilobyte, so easy enough to calculate. And we could
> make that percentage itself a sysctl, for the people who are really
> crazy...

Maybe it's not a bad idea, but I don't think it would have solved the reported
fork bomb issue, so the test needs to be fixed anyway.

Specifically regarding the memory overuse issue,
the t_open_tmpfiles tests do have this problem.

These were two attempts to fix the tests:

dd2c7266 fstests: don't oom the box opening tmpfiles
c42c8e6c fstests: don't oom the box opening tmpfiles (take 2)

But for me, generic/531 still OOMs.

Kent, on a personal note, I've been following the heated discussion
lately on fsdevel; one specific argument was also ignited over the
"fixing the test" vs. "fixing the kernel bug" question, as you raised above.

One of the good points raised also by yourself was to stick to technical
arguments.

I would simply like to point out that you use the terminology "sane" and "crazy"
quite often in arguments, as you just did in this reply.
Those are obviously not objective technical arguments and yes, I know that
you use them as a figure of speech and I use them often myself, but from
the outside, know that they may sound dismissive of any opinion
other than the "sane" one.

Personally, I think that the suggestion to fix the kernel to enforce
the resource limit automatically, "_rather_ than just "fixing" the test", is, well,
for lack of better words, crazy ;)

I am all for trying to protect the kernel from DoS, but this is far from
being a trivial project, so it does not make sense to ask for that from
the contributor of this simple test patch.

You did say "I'd prefer to see this.." so perhaps you did not mean
that the fix for the test should be blocked, but again, to the outside
(to me) this is how it sounds.

Bottom line - this test does not even need to open a single file
to create inodes (it could use mknod()), and it certainly does not need
to spawn more than one process, so I think the test should be fixed.
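
A minimal sketch of that single-process, mknod()-based variant
(illustrative only; error handling elided):

# One process, mknod(1) FIFOs, no open file descriptors.
i=0
while mknod $SCRATCH_MNT/testdir/f$i p 2>>$seqres.full; do
	i=$((i + 1))
done
echo "created $i inodes before mknod failed" >>$seqres.full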

Thanks,
Amir.
Mikulas Patocka July 12, 2023, 9:57 a.m. UTC | #4
On Tue, 11 Jul 2023, Darrick J. Wong wrote:

> On Tue, Jul 11, 2023 at 05:51:42PM +0200, Mikulas Patocka wrote:
> > When I run the test 558 on bcachefs, it works like a fork-bomb and kills
> > the machine. The reason is that the "while" loop spawns "create_file"
> > subprocesses faster than they are able to complete.
> > 
> > This patch fixes the crash by limiting the number of subprocesses to 128.
> > 
> > Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> > 
> > ---
> >  tests/generic/558 |    1 +
> >  1 file changed, 1 insertion(+)
> > 
> > Index: xfstests-dev/tests/generic/558
> > ===================================================================
> > --- xfstests-dev.orig/tests/generic/558
> > +++ xfstests-dev/tests/generic/558
> > @@ -48,6 +48,7 @@ echo "Create $((loop * file_per_dir)) fi
> >  while [ $i -lt $loop ]; do
> >  	create_file $SCRATCH_MNT/testdir $file_per_dir $i >>$seqres.full 2>&1 &
> >  	let i=$i+1
> > +	if [ $((i % 128)) = 0 ]; then wait; fi
> 
> Hm.  $loop is (roughly) the number of free inodes divided by 1000.  This
> test completes nearly instantly on XFS; how many free inodes does
> bcachefs report after _scratch_mount?
> 
> XFS reports ~570k inodes, so it's "only" starting 570 processes.
> 
> I think it's probably wise to clamp $loop to something sane, but let's
> get to the bottom of how the math went wrong and we got a forkbomb.
> 
> --D

bcachefs reports 14509106 total inodes (for a 1GB filesystem)

As the test proceeds, the number of total inodes (as well as the number of 
free inodes) decreases.
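
For reference, a quick way to watch those counters during the run:

df -i $SCRATCH_MNT	# total/used/free inode counts, as reported by statfs()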

Mikulas
Mikulas Patocka July 12, 2023, 10:10 a.m. UTC | #5
On Tue, 11 Jul 2023, Kent Overstreet wrote:

> On Tue, Jul 11, 2023 at 04:44:39PM -0700, Darrick J. Wong wrote:
> > On Tue, Jul 11, 2023 at 05:51:42PM +0200, Mikulas Patocka wrote:
> > > When I run the test 558 on bcachefs, it works like a fork-bomb and kills
> > > the machine. The reason is that the "while" loop spawns "create_file"
> > > subprocesses faster than they are able to complete.
> > > 
> > > This patch fixes the crash by limiting the number of subprocesses to 128.
> > > 
> > > Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> > > 
> > > ---
> > >  tests/generic/558 |    1 +
> > >  1 file changed, 1 insertion(+)
> > > 
> > > Index: xfstests-dev/tests/generic/558
> > > ===================================================================
> > > --- xfstests-dev.orig/tests/generic/558
> > > +++ xfstests-dev/tests/generic/558
> > > @@ -48,6 +48,7 @@ echo "Create $((loop * file_per_dir)) fi
> > >  while [ $i -lt $loop ]; do
> > >  	create_file $SCRATCH_MNT/testdir $file_per_dir $i >>$seqres.full 2>&1 &
> > >  	let i=$i+1
> > > +	if [ $((i % 128)) = 0 ]; then wait; fi
> > 
> > Hm.  $loop is (roughly) the number of free inodes divided by 1000.  This
> > test completes nearly instantly on XFS; how many free inodes does
> > bcachefs report after _scratch_mount?
> > 
> > XFS reports ~570k inodes, so it's "only" starting 570 processes.
> > 
> > I think it's probably wise to clamp $loop to something sane, but let's
> > get to the bottom of how the math went wrong and we got a forkbomb.
> 
> It's because:
>  - bcachefs doesn't even report a maximum number of inodes (IIRC);
>    inodes are small and variable size (most fields are varints, typical
>    inode size is 50-100 bytes).
> 
>  - and the kernel has a sysctl to limit the maximum number of open
>    files, and it's got a sane default (this is what's supposed to save
>    us from pinned inodes eating up all RAM), but systemd conveniently
>    overwrites it to some absurd value...
> 
> I'd prefer to see this fixed properly, rather than just "fixing" the
> test; userspace being able to pin all kernel memory this easily is a
> real bug.
> 
> We could put a second hard cap on the maximum number of open files, and
> base that on a percentage of total memory; a VFS inode is somewhere in
>    the ballpark of a kilobyte, so easy enough to calculate. And we could
> make that percentage itself a sysctl, for the people who are really
> crazy...

If we hit the limit of total open files, we have already killed the system.
At this point the user can't execute any program, because executing a
program requires opening files.

I think that it is possible to set up cgroups so that a process inside a
cgroup can't kill the machine by exhausting resources. But distributions
don't do it. And they don't do it for the root user (the test runs as
root).
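
As an illustration, with cgroup v2 something like this sketch would
contain the damage (run as root; assumes /sys/fs/cgroup is the v2
mount and the pids controller is enabled):

# Cap the number of tasks a shell and its children may create.
mkdir /sys/fs/cgroup/fstests
echo 256 > /sys/fs/cgroup/fstests/pids.max
echo $$ > /sys/fs/cgroup/fstests/cgroup.procs
# beyond 256 tasks, fork()/clone() in this group fails with EAGAIN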

Mikulas
Kent Overstreet July 12, 2023, 2:52 p.m. UTC | #6
On Wed, Jul 12, 2023 at 12:10:05PM +0200, Mikulas Patocka wrote:
> If we hit the limit of total open files, we have already killed the system.
> At this point the user can't execute any program, because executing a
> program requires opening files.
> 
> I think that it is possible to set up cgroups so that a process inside a
> cgroup can't kill the machine by exhausting resources. But distributions
> don't do it. And they don't do it for the root user (the test runs as
> root).

When I looked at this test before I missed the fork bomb aspect - I was
just looking at the crazy number of pinned inodes (which is still a
significant fraction of system memory, looking again...)

If we change bcachefs to not report a maximum number of inodes, might
that be more in line with other filesystems? Or is it really just
because bcachefs inodes are tiny?
Mikulas Patocka July 12, 2023, 5:59 p.m. UTC | #7
On Wed, 12 Jul 2023, Kent Overstreet wrote:

> On Wed, Jul 12, 2023 at 12:10:05PM +0200, Mikulas Patocka wrote:
> > If we hit the limit of total open files, we have already killed the system.
> > At this point the user can't execute any program, because executing a
> > program requires opening files.
> > 
> > I think that it is possible to set up cgroups so that a process inside a
> > cgroup can't kill the machine by exhausting resources. But distributions
> > don't do it. And they don't do it for the root user (the test runs as
> > root).
> 
> When I looked at this test before I missed the fork bomb aspect - I was
> just looking at the crazy number of pinned inodes (which is still a
> significant fraction of system memory, looking again...)
> 
> If we change bcachefs to not report a maximum number of inodes, might
> that be more in line with other filesystems? Or is it really just
> because bcachefs inodes are tiny?

I think that it's OK to report as many free inodes as fit on the
filesystem. I think it is not a bug - we should fix the test, not lie to
make the test pass.

There is one misbehavior though. As the test allocates inodes on
bcachefs, the total number of inodes decreases. The other filesystems
don't behave this way, and I think that bcachefs shouldn't change the
total number of inodes either.

Mikulas
Kent Overstreet July 12, 2023, 6:35 p.m. UTC | #8
On Wed, Jul 12, 2023 at 07:59:07PM +0200, Mikulas Patocka wrote:
> 
> 
> On Wed, 12 Jul 2023, Kent Overstreet wrote:
> 
> > On Wed, Jul 12, 2023 at 12:10:05PM +0200, Mikulas Patocka wrote:
> > > If we hit the limit of total open files, we have already killed the system.
> > > At this point the user can't execute any program, because executing a
> > > program requires opening files.
> > > 
> > > I think that it is possible to set up cgroups so that a process inside a
> > > cgroup can't kill the machine by exhausting resources. But distributions
> > > don't do it. And they don't do it for the root user (the test runs as
> > > root).
> > 
> > When I looked at this test before I missed the fork bomb aspect - I was
> > just looking at the crazy number of pinned inodes (which is still a
> > significant fraction of system memory, looking again...)
> > 
> > If we change bcachefs to not report a maximum number of inodes, might
> > that be more in line with other filesystems? Or is it really just
> > because bcachefs inodes are tiny?
> 
> I think that it's OK to report as many free inodes as fit on the
> filesystem. I think it is not a bug - we should fix the test, not lie to
> make the test pass.
> 
> There is one misbehavior though. As the test allocates inodes on
> bcachefs, the total number of inodes decreases. The other filesystems
> don't behave this way, and I think that bcachefs shouldn't change the
> total number of inodes either.

I don't think that's avoidable: bcachefs inodes are variable size (we
use varints for most fields), so the total number of inodes is only a
guess - it'll also decrease by just writing normal data.
Zorro Lang July 12, 2023, 6:40 p.m. UTC | #9
On Tue, Jul 11, 2023 at 05:51:42PM +0200, Mikulas Patocka wrote:
> When I run the test 558 on bcachefs, it works like a fork-bomb and kills
> the machine. The reason is that the "while" loop spawns "create_file"
> subprocesses faster than they are able to complete.
> 
> This patch fixes the crash by limiting the number of subprocesses to 128.
> 
> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> 
> ---
>  tests/generic/558 |    1 +
>  1 file changed, 1 insertion(+)
> 
> Index: xfstests-dev/tests/generic/558
> ===================================================================
> --- xfstests-dev.orig/tests/generic/558
> +++ xfstests-dev/tests/generic/558

generic/558 was originally shared/006; it was written for a specific fs (e.g. xfs),
then shared with other similar local filesystems.

After we changed it to a generic test case, `_scratch_mkfs_sized` helped to
avoid running this case on nfs/cifs and any other fs which can't be mkfs'ed
to a given size. Originally we thought a fs with a small fixed size generally
has limited free inodes. That worked for a long time, but now bcachefs looks
like an exception :)

I think we must limit the number of processes and let each process create more
files if it needs more inodes; that avoids the forkbomb problem and helps
this case work with bcachefs and other fs that have lots of free inodes in 1G of space.

But we'd better limit the number of free inodes too; we don't want this
case to run for too long. If a fs shows too many free inodes, _notrun "The 1G $FSTYP has
too many free inodes!".
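
Something like this sketch, with an illustrative threshold:

free_inodes=$(_get_free_inode $SCRATCH_MNT)
if [ "$free_inodes" -gt $((10 * 1024 * 1024)) ]; then
	_notrun "The 1G $FSTYP has too many free inodes!"
fi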

Thanks,
Zorro

> @@ -48,6 +48,7 @@ echo "Create $((loop * file_per_dir)) fi
>  while [ $i -lt $loop ]; do
>  	create_file $SCRATCH_MNT/testdir $file_per_dir $i >>$seqres.full 2>&1 &
>  	let i=$i+1
> +	if [ $((i % 128)) = 0 ]; then wait; fi
>  done
>  wait
>  
>
Darrick J. Wong July 12, 2023, 10:05 p.m. UTC | #10
On Wed, Jul 12, 2023 at 11:57:49AM +0200, Mikulas Patocka wrote:
> 
> 
> On Tue, 11 Jul 2023, Darrick J. Wong wrote:
> 
> > On Tue, Jul 11, 2023 at 05:51:42PM +0200, Mikulas Patocka wrote:
> > > When I run the test 558 on bcachefs, it works like a fork-bomb and kills
> > > the machine. The reason is that the "while" loop spawns "create_file"
> > > subprocesses faster than they are able to complete.
> > > 
> > > This patch fixes the crash by limiting the number of subprocesses to 128.
> > > 
> > > Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
> > > 
> > > ---
> > >  tests/generic/558 |    1 +
> > >  1 file changed, 1 insertion(+)
> > > 
> > > Index: xfstests-dev/tests/generic/558
> > > ===================================================================
> > > --- xfstests-dev.orig/tests/generic/558
> > > +++ xfstests-dev/tests/generic/558
> > > @@ -48,6 +48,7 @@ echo "Create $((loop * file_per_dir)) fi
> > >  while [ $i -lt $loop ]; do
> > >  	create_file $SCRATCH_MNT/testdir $file_per_dir $i >>$seqres.full 2>&1 &
> > >  	let i=$i+1
> > > +	if [ $((i % 128)) = 0 ]; then wait; fi
> > 
> > Hm.  $loop is (roughly) the number of free inodes divided by 1000.  This
> > test completes nearly instantly on XFS; how many free inodes does
> > bcachefs report after _scratch_mount?
> > 
> > XFS reports ~570k inodes, so it's "only" starting 570 processes.
> > 
> > I think it's probably wise to clamp $loop to something sane, but let's
> > get to the bottom of how the math went wrong and we got a forkbomb.
> > 
> > --D
> 
> bcachefs reports 14509106 total inodes (for a 1GB filesystem)
> 
> As the test proceeds, the number of total inodes (as well as the number of 
> free inodes) decreases.

Aha, ok.  So XFS does a similar thing (includes free space in the free
inodes count), but XFS inodes are 256-2048 bytes, whereas bcachefs
inodes can be as small as a few dozen bytes.  That's why the number of
processes is big enough to forkbomb the system.

How about we restrict the number of subshells to something resembling
the CPU count?

free_inodes=$(_get_free_inode $SCRATCH_MNT)
nr_cpus=$(( $($here/src/feature -o) * LOAD_FACTOR ))

if ((free_inodes <= nr_cpus)); then
	nr_cpus=1
	files_per_dir=$free_inodes
else
	files_per_dir=$(( (free_inodes + nr_cpus - 1) / nr_cpus ))
fi
mkdir -p $SCRATCH_MNT/testdir

echo "Create $((loop * file_per_dir)) files in $SCRATCH_MNT/testdir" >>$seqres.full
for ((i = 0; i < nr_cpus; i++)); do
	create_file $SCRATCH_MNT/testdir $files_per_dir $i >>$seqres.full 2>&1 &
done
wait


--D

> Mikulas
>
Theodore Ts'o July 13, 2023, 1:44 a.m. UTC | #11
On Thu, Jul 13, 2023 at 02:40:51AM +0800, Zorro Lang wrote:
> generic/558 was originally shared/006; it was written for a specific fs (e.g. xfs),
> then shared with other similar local filesystems.
> 
> After we changed it to a generic test case, `_scratch_mkfs_sized` helped to
> avoid running this case on nfs/cifs and any other fs which can't be mkfs'ed
> to a given size. Originally we thought a fs with a small fixed size generally
> has limited free inodes. That worked for a long time, but now bcachefs looks
> like an exception :)
> 
> I think we must limit the number of processes and let each process create more
> files if it needs more inodes; that avoids the forkbomb problem and helps
> this case work with bcachefs and other fs that have lots of free inodes in 1G of space.
> 
> But we'd better limit the number of free inodes too; we don't want this
> case to run for too long. If a fs shows too many free inodes, _notrun "The 1G $FSTYP has
> too many free inodes!".

Alternatively we could have some fs-specific code which uses a
differently sized scratch file system depending on the file system.  So
for bcachefs, maybe it should only be, say, 1 MiB (or whatever the
smallest file system bcachefs can support).

This test is designed to test what happens when the file system is
filled 100% with inodes, to make sure the file system passes fsck at
100%, and then after deleting all of the inodes, the file system
should be self-consistent afterwards as well.  That's a pretty
straightforward test, and for a file system where inodes can be
anywhere and are variably sized, it's certainly a completely valid
thing to do.  And if it turns out that there are "too many free inodes",
then maybe the test should have some special-case sizes for different file
systems.
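
For illustration, a sketch of such per-fs sizing (the sizes are
placeholders, not tuned values):

case "$FSTYP" in
bcachefs)
	# inodes are tiny; keep the scratch fs small
	fssize=$((32 * 1024 * 1024));;
*)
	fssize=$((1024 * 1024 * 1024));;
esac
_scratch_mkfs_sized $fssize >>$seqres.full 2>&1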

				- Ted
Darrick J. Wong July 13, 2023, 1:48 a.m. UTC | #12
Mikulas, does this solve the forkbomb problem?

--D

diff --git a/tests/generic/558 b/tests/generic/558
index 4e22ce656b..de5c28d00d 100755
--- a/tests/generic/558
+++ b/tests/generic/558
@@ -39,15 +39,21 @@ _scratch_mkfs_sized $((1024 * 1024 * 1024)) >>$seqres.full 2>&1
 _scratch_mount
 
 i=0
-free_inode=`_get_free_inode $SCRATCH_MNT`
-file_per_dir=1000
-loop=$((free_inode / file_per_dir + 1))
+free_inodes=$(_get_free_inode $SCRATCH_MNT)
+nr_cpus=$(( $($here/src/feature -o) * 4 * LOAD_FACTOR ))
+echo "free inodes: $free_inodes nr_cpus: $nr_cpus" >> $seqres.full
+
+if ((free_inodes <= nr_cpus)); then
+	nr_cpus=1
+	files_per_dir=$free_inodes
+else
+	files_per_dir=$(( (free_inodes + nr_cpus - 1) / nr_cpus ))
+fi
 mkdir -p $SCRATCH_MNT/testdir
 
 echo "Create $((loop * file_per_dir)) files in $SCRATCH_MNT/testdir" >>$seqres.full
-while [ $i -lt $loop ]; do
-	create_file $SCRATCH_MNT/testdir $file_per_dir $i >>$seqres.full 2>&1 &
-	let i=$i+1
+for ((i = 0; i < nr_cpus; i++)); do
+	create_file $SCRATCH_MNT/testdir $files_per_dir $i >>$seqres.full 2>&1 &
 done
 wait
Mikulas Patocka July 13, 2023, 3:08 p.m. UTC | #13
On Wed, 12 Jul 2023, Darrick J. Wong wrote:

> Mikulas, does this solve the forkbomb problem?
> 
> --D

Yes, this patch works.

Tested-by: Mikulas Patocka <mpatocka@redhat.com>

Mikulas


> diff --git a/tests/generic/558 b/tests/generic/558
> index 4e22ce656b..de5c28d00d 100755
> --- a/tests/generic/558
> +++ b/tests/generic/558
> @@ -39,15 +39,21 @@ _scratch_mkfs_sized $((1024 * 1024 * 1024)) >>$seqres.full 2>&1
>  _scratch_mount
>  
>  i=0
> -free_inode=`_get_free_inode $SCRATCH_MNT`
> -file_per_dir=1000
> -loop=$((free_inode / file_per_dir + 1))
> +free_inodes=$(_get_free_inode $SCRATCH_MNT)
> +nr_cpus=$(( $($here/src/feature -o) * 4 * LOAD_FACTOR ))
> +echo "free inodes: $free_inodes nr_cpus: $nr_cpus" >> $seqres.full
> +
> +if ((free_inodes <= nr_cpus)); then
> +	nr_cpus=1
> +	files_per_dir=$free_inodes
> +else
> +	files_per_dir=$(( (free_inodes + nr_cpus - 1) / nr_cpus ))
> +fi
>  mkdir -p $SCRATCH_MNT/testdir
>  
>  echo "Create $((loop * file_per_dir)) files in $SCRATCH_MNT/testdir" >>$seqres.full
> -while [ $i -lt $loop ]; do
> -	create_file $SCRATCH_MNT/testdir $file_per_dir $i >>$seqres.full 2>&1 &
> -	let i=$i+1
> +for ((i = 0; i < nr_cpus; i++)); do
> +	create_file $SCRATCH_MNT/testdir $files_per_dir $i >>$seqres.full 2>&1 &
>  done
>  wait
>  
>

Patch

Index: xfstests-dev/tests/generic/558
===================================================================
--- xfstests-dev.orig/tests/generic/558
+++ xfstests-dev/tests/generic/558
@@ -48,6 +48,7 @@ echo "Create $((loop * file_per_dir)) fi
 while [ $i -lt $loop ]; do
 	create_file $SCRATCH_MNT/testdir $file_per_dir $i >>$seqres.full 2>&1 &
 	let i=$i+1
+	if [ $((i % 128)) = 0 ]; then wait; fi
 done
 wait