
[v6,12/12] selftests/sgx: Add scripts for EPC cgroup testing

Message ID 20231030182013.40086-13-haitao.huang@linux.intel.com (mailing list archive)
State New, archived
Series Add Cgroup support for SGX EPC memory

Commit Message

Haitao Huang Oct. 30, 2023, 6:20 p.m. UTC
The scripts rely on the cgroup-tools package from libcgroup [1].
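
On Debian/Ubuntu the tools are typically available as the cgroup-tools
package (the install command below is an illustration; the package name
and installation method may differ on other distributions):

sudo apt-get install cgroup-tools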

To run selftests for epc cgroup:

sudo ./run_epc_cg_selftests.sh

With different cgroups, the script starts one or multiple concurrent SGX
selftests, each running a single unclobbered_vdso_oversubscribed test.
Each such test tries to load an enclave whose EPC size equals the EPC
capacity available on the platform. The script checks the results against
the expectation set for each cgroup and reports success or failure.
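
Each concurrent run follows roughly the pattern below (a minimal sketch of
the mechanism used in the script; the cgroup path and log names are
illustrative):

# Launch N tests in a misc cgroup and count how many succeed.
test_cmd="./test_sgx -t unclobbered_vdso_oversubscribed"
pids=()
for i in {1..4}; do
    cgexec -g misc:selftest/test2 $test_cmd >test_$i.log 2>&1 &
    pids+=($!)
done
passed=0
for pid in "${pids[@]}"; do
    wait "$pid" && passed=$((passed + 1))
done
echo "$passed of ${#pids[@]} tests passed"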

At the beginning, the script creates 3 different cgroups with the
following expectations:

1) SMALL - intentionally small enough that loading an enclave of size
equal to the capacity fails.
2) LARGE - large enough to run up to 4 concurrent tests, but some should
fail if more than 4 concurrent tests are run. The script starts 4
expecting at least one test to pass, and then starts 5 expecting at least
one test to fail.
3) LARGER - the limit equals the capacity, large enough to run many
concurrent tests. The script starts 10 of them and expects all to pass.
It then reruns the same test with one process killed at random and checks
that usage drops to zero after all processes exit.
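
The limits above are derived from the platform EPC capacity and written to
the cgroups' misc.max files, roughly as in this sketch taken from the
script (paths assume the hierarchy the script creates; use
/sys/fs/cgroup/misc as the root on cgroup v1):

CG_ROOT=/sys/fs/cgroup
CAPACITY=$(grep "sgx_epc" "$CG_ROOT/misc.capacity" | awk '{print $2}')
SMALL=$(( CAPACITY / 512 ))   # below the VA pages needed for a capacity-sized enclave
LARGE=$(( SMALL * 4 ))        # up to 4 concurrent capacity-sized enclaves
LARGER=$CAPACITY              # enough for many concurrent tests
echo "sgx_epc $SMALL"  | tee $CG_ROOT/selftest/test1/misc.max
echo "sgx_epc $LARGE"  | tee $CG_ROOT/selftest/test2/misc.max
echo "sgx_epc $LARGER" | tee $CG_ROOT/selftest/test4/misc.max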

To watch misc cgroup 'current' changes during testing, run this in a
separate terminal:

./watch_misc_for_tests.sh current

[1] https://github.com/libcgroup/libcgroup/blob/main/README

Signed-off-by: Haitao Huang <haitao.huang@linux.intel.com>
---
V5:

- Added a script with automatic result checking; removed the interactive
script.
- The script can run independently of the series below.
---
 .../selftests/sgx/run_epc_cg_selftests.sh     | 196 ++++++++++++++++++
 .../selftests/sgx/watch_misc_for_tests.sh     |  13 ++
 2 files changed, 209 insertions(+)
 create mode 100755 tools/testing/selftests/sgx/run_epc_cg_selftests.sh
 create mode 100755 tools/testing/selftests/sgx/watch_misc_for_tests.sh

Comments

Jarkko Sakkinen Nov. 15, 2023, 9 p.m. UTC | #1
On Mon Oct 30, 2023 at 8:20 PM EET, Haitao Huang wrote:
> The scripts rely on the cgroup-tools package from libcgroup [1].
>
> To run selftests for epc cgroup:
>
> sudo ./run_epc_cg_selftests.sh
>
> With different cgroups, the script starts one or multiple concurrent SGX
> selftests, each running a single unclobbered_vdso_oversubscribed test.
> Each such test tries to load an enclave whose EPC size equals the EPC
> capacity available on the platform. The script checks the results against
> the expectation set for each cgroup and reports success or failure.
>
> At the beginning, the script creates 3 different cgroups with the
> following expectations:
>
> 1) SMALL - intentionally small enough that loading an enclave of size
> equal to the capacity fails.
> 2) LARGE - large enough to run up to 4 concurrent tests, but some should
> fail if more than 4 concurrent tests are run. The script starts 4
> expecting at least one test to pass, and then starts 5 expecting at least
> one test to fail.
> 3) LARGER - the limit equals the capacity, large enough to run many
> concurrent tests. The script starts 10 of them and expects all to pass.
> It then reruns the same test with one process killed at random and checks
> that usage drops to zero after all processes exit.
>
> To watch misc cgroup 'current' changes during testing, run this in a
> separate terminal:
>
> ./watch_misc_for_tests.sh current
>
> [1] https://github.com/libcgroup/libcgroup/blob/main/README
>
> Signed-off-by: Haitao Huang <haitao.huang@linux.intel.com>
> ---
> V5:
>
> - Added a script with automatic result checking; removed the interactive
> script.
> - The script can run independently of the series below.
> ---
>  .../selftests/sgx/run_epc_cg_selftests.sh     | 196 ++++++++++++++++++
>  .../selftests/sgx/watch_misc_for_tests.sh     |  13 ++
>  2 files changed, 209 insertions(+)
>  create mode 100755 tools/testing/selftests/sgx/run_epc_cg_selftests.sh
>  create mode 100755 tools/testing/selftests/sgx/watch_misc_for_tests.sh
>
> diff --git a/tools/testing/selftests/sgx/run_epc_cg_selftests.sh b/tools/testing/selftests/sgx/run_epc_cg_selftests.sh
> new file mode 100755
> index 000000000000..72b93f694753
> --- /dev/null
> +++ b/tools/testing/selftests/sgx/run_epc_cg_selftests.sh
> @@ -0,0 +1,196 @@
> +#!/bin/bash
> +# SPDX-License-Identifier: GPL-2.0
> +# Copyright(c) 2023 Intel Corporation.
> +
> +TEST_ROOT_CG=selftest
> +cgcreate -g misc:$TEST_ROOT_CG
> +if [ $? -ne 0 ]; then
> +    echo "# Please make sure cgroup-tools is installed, and misc cgroup is mounted."
> +    exit 1
> +fi
> +TEST_CG_SUB1=$TEST_ROOT_CG/test1
> +TEST_CG_SUB2=$TEST_ROOT_CG/test2
> +TEST_CG_SUB3=$TEST_ROOT_CG/test1/test3
> +TEST_CG_SUB4=$TEST_ROOT_CG/test4
> +
> +cgcreate -g misc:$TEST_CG_SUB1
> +cgcreate -g misc:$TEST_CG_SUB2
> +cgcreate -g misc:$TEST_CG_SUB3
> +cgcreate -g misc:$TEST_CG_SUB4
> +
> +# Default to V2
> +CG_ROOT=/sys/fs/cgroup
> +if [ ! -d "/sys/fs/cgroup/misc" ]; then
> +    echo "# cgroup V2 is in use."
> +else
> +    echo "# cgroup V1 is in use."
> +    CG_ROOT=/sys/fs/cgroup/misc
> +fi

Does the test need to support v1 cgroups?

> +
> +CAPACITY=$(grep "sgx_epc" "$CG_ROOT/misc.capacity" | awk '{print $2}')
> +# This is below number of VA pages needed for enclave of capacity size. So
> +# should fail oversubscribed cases
> +SMALL=$(( CAPACITY / 512 ))
> +
> +# At least load one enclave of capacity size successfully, maybe up to 4.
> +# But some may fail if we run more than 4 concurrent enclaves of capacity size.
> +LARGE=$(( SMALL * 4 ))
> +
> +# Load lots of enclaves
> +LARGER=$CAPACITY
> +echo "# Setting up limits."
> +echo "sgx_epc $SMALL" | tee $CG_ROOT/$TEST_CG_SUB1/misc.max
> +echo "sgx_epc $LARGE" | tee $CG_ROOT/$TEST_CG_SUB2/misc.max
> +echo "sgx_epc $LARGER" | tee $CG_ROOT/$TEST_CG_SUB4/misc.max
> +
> +timestamp=$(date +%Y%m%d_%H%M%S)
> +
> +test_cmd="./test_sgx -t unclobbered_vdso_oversubscribed"
> +
> +echo "# Start unclobbered_vdso_oversubscribed with SMALL limit, expecting failure..."
> +# Always use leaf node of misc cgroups so it works for both v1 and v2
> +# these may fail on OOM
> +cgexec -g misc:$TEST_CG_SUB3 $test_cmd >cgtest_small_$timestamp.log 2>&1
> +if [[ $? -eq 0 ]]; then
> +    echo "# Fail on SMALL limit, not expecting any test passes."
> +    cgdelete -r -g misc:$TEST_ROOT_CG
> +    exit 1
> +else
> +    echo "# Test failed as expected."
> +fi
> +
> +echo "# PASSED SMALL limit."
> +
> +echo "# Start 4 concurrent unclobbered_vdso_oversubscribed tests with LARGE limit,
> +        expecting at least one success...."
> +pids=()
> +for i in {1..4}; do
> +    (
> +        cgexec -g misc:$TEST_CG_SUB2 $test_cmd >cgtest_large_positive_$timestamp.$i.log 2>&1
> +    ) &
> +    pids+=($!)
> +done
> +
> +any_success=0
> +for pid in "${pids[@]}"; do
> +    wait "$pid"
> +    status=$?
> +    if [[ $status -eq 0 ]]; then
> +        any_success=1
> +	echo "# Process $pid returned successfully."
> +    fi
> +done
> +
> +if [[ $any_success -eq 0 ]]; then
> +    echo "# Failed on LARGE limit positive testing, no test passes."
> +    cgdelete -r -g misc:$TEST_ROOT_CG
> +    exit 1
> +fi
> +
> +echo "# PASSED LARGE limit positive testing."
> +
> +echo "# Start 5 concurrent unclobbered_vdso_oversubscribed tests with LARGE limit,
> +        expecting at least one failure...."
> +pids=()
> +for i in {1..5}; do
> +    (
> +        cgexec -g misc:$TEST_CG_SUB2 $test_cmd >cgtest_large_negative_$timestamp.$i.log 2>&1
> +    ) &
> +    pids+=($!)
> +done
> +
> +any_failure=0
> +for pid in "${pids[@]}"; do
> +    wait "$pid"
> +    status=$?
> +    if [[ $status -ne 0 ]]; then
> +	echo "# Process $pid returned failure."
> +        any_failure=1
> +    fi
> +done
> +
> +if [[ $any_failure -eq 0 ]]; then
> +    echo "# Failed on LARGE limit negative testing, no test fails."
> +    cgdelete -r -g misc:$TEST_ROOT_CG
> +    exit 1
> +fi
> +
> +echo "# PASSED LARGE limit negative testing."
> +
> +echo "# Start 10 concurrent unclobbered_vdso_oversubscribed tests with LARGER limit,
> +        expecting no failure...."
> +pids=()
> +for i in {1..10}; do
> +    (
> +        cgexec -g misc:$TEST_CG_SUB4 $test_cmd >cgtest_larger_$timestamp.$i.log 2>&1
> +    ) &
> +    pids+=($!)
> +done
> +
> +any_failure=0
> +for pid in "${pids[@]}"; do
> +    wait "$pid"
> +    status=$?
> +    if [[ $status -ne 0 ]]; then
> +	echo "# Process $pid returned failure."
> +        any_failure=1
> +    fi
> +done
> +
> +if [[ $any_failure -ne 0 ]]; then
> +    echo "# Failed on LARGER limit, at least one test fails."
> +    cgdelete -r -g misc:$TEST_ROOT_CG
> +    exit 1
> +fi
> +
> +echo "# PASSED LARGER limit tests."
> +
> +
> +echo "# Start 10 concurrent unclobbered_vdso_oversubscribed tests with LARGER limit,
> +      randomly kill one, expecting no failure...."
> +pids=()
> +for i in {1..10}; do
> +    (
> +        cgexec -g misc:$TEST_CG_SUB4 $test_cmd >cgtest_larger_$timestamp.$i.log 2>&1
> +    ) &
> +    pids+=($!)
> +done
> +
> +sleep $((RANDOM % 10 + 5))
> +
> +# Randomly select a PID to kill
> +RANDOM_INDEX=$((RANDOM % 10))
> +PID_TO_KILL=${pids[RANDOM_INDEX]}
> +
> +kill $PID_TO_KILL
> +echo "# Killed process with PID: $PID_TO_KILL"
> +
> +any_failure=0
> +for pid in "${pids[@]}"; do
> +    wait "$pid"
> +    status=$?
> +    if [ "$pid" != "$PID_TO_KILL" ]; then
> +        if [[ $status -ne 0 ]]; then
> +	    echo "# Process $pid returned failure."
> +            any_failure=1
> +        fi
> +    fi
> +done
> +
> +if [[ $any_failure -ne 0 ]]; then
> +    echo "# Failed on random killing, at least one test fails."
> +    cgdelete -r -g misc:$TEST_ROOT_CG
> +    exit 1
> +fi
> +
> +sleep 1
> +
> +USAGE=$(grep '^sgx_epc' "$CG_ROOT/$TEST_ROOT_CG/misc.current" | awk '{print $2}')
> +if [ "$USAGE" -ne 0 ]; then
> +    echo "# Failed: Final usage is $USAGE, not 0."
> +else
> +    echo "# PASSED leakage check."
> +    echo "# PASSED ALL cgroup limit tests, cleanup cgroups..."
> +fi
> +cgdelete -r -g misc:$TEST_ROOT_CG
> +echo "# done."
> diff --git a/tools/testing/selftests/sgx/watch_misc_for_tests.sh b/tools/testing/selftests/sgx/watch_misc_for_tests.sh
> new file mode 100755
> index 000000000000..dbd38f346e7b
> --- /dev/null
> +++ b/tools/testing/selftests/sgx/watch_misc_for_tests.sh
> @@ -0,0 +1,13 @@
> +#!/bin/bash
> +# SPDX-License-Identifier: GPL-2.0
> +# Copyright(c) 2023 Intel Corporation.
> +
> +if [ -z "$1" ]
> +  then
> +    echo "No argument supplied, please provide 'max', 'current' or 'events'"
> +    exit 1
> +fi
> +
> +watch -n 1 "find /sys/fs/cgroup -wholename */test*/misc.$1 -exec sh -c \
> +    'echo \"\$1:\"; cat \"\$1\"' _ {} \;"
> +

BR, Jarkko
Haitao Huang Nov. 15, 2023, 9:22 p.m. UTC | #2
>> +CG_ROOT=/sys/fs/cgroup
>> +if [ ! -d "/sys/fs/cgroup/misc" ]; then
>> +    echo "# cgroup V2 is in use."
>> +else
>> +    echo "# cgroup V1 is in use."
>> +    CG_ROOT=/sys/fs/cgroup/misc
>> +fi
>
> Does the test need to support v1 cgroups?
>
I thought some distros may still only support v1. I do most of my work on
Ubuntu 22.04, which defaults to v1, so it's convenient for me to test. But
no strong opinion.
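
For reference, one way to check which cgroup version is mounted,
independent of the directory check the script already does:

stat -fc %T /sys/fs/cgroup   # "cgroup2fs" for v2, "tmpfs" for v1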

Thanks
Haitao

Patch

diff --git a/tools/testing/selftests/sgx/run_epc_cg_selftests.sh b/tools/testing/selftests/sgx/run_epc_cg_selftests.sh
new file mode 100755
index 000000000000..72b93f694753
--- /dev/null
+++ b/tools/testing/selftests/sgx/run_epc_cg_selftests.sh
@@ -0,0 +1,196 @@ 
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright(c) 2023 Intel Corporation.
+
+TEST_ROOT_CG=selftest
+cgcreate -g misc:$TEST_ROOT_CG
+if [ $? -ne 0 ]; then
+    echo "# Please make sure cgroup-tools is installed, and misc cgroup is mounted."
+    exit 1
+fi
+TEST_CG_SUB1=$TEST_ROOT_CG/test1
+TEST_CG_SUB2=$TEST_ROOT_CG/test2
+TEST_CG_SUB3=$TEST_ROOT_CG/test1/test3
+TEST_CG_SUB4=$TEST_ROOT_CG/test4
+
+cgcreate -g misc:$TEST_CG_SUB1
+cgcreate -g misc:$TEST_CG_SUB2
+cgcreate -g misc:$TEST_CG_SUB3
+cgcreate -g misc:$TEST_CG_SUB4
+
+# Default to V2
+CG_ROOT=/sys/fs/cgroup
+if [ ! -d "/sys/fs/cgroup/misc" ]; then
+    echo "# cgroup V2 is in use."
+else
+    echo "# cgroup V1 is in use."
+    CG_ROOT=/sys/fs/cgroup/misc
+fi
+
+CAPACITY=$(grep "sgx_epc" "$CG_ROOT/misc.capacity" | awk '{print $2}')
+# This is below number of VA pages needed for enclave of capacity size. So
+# should fail oversubscribed cases
+SMALL=$(( CAPACITY / 512 ))
+
+# At least load one enclave of capacity size successfully, maybe up to 4.
+# But some may fail if we run more than 4 concurrent enclaves of capacity size.
+LARGE=$(( SMALL * 4 ))
+
+# Load lots of enclaves
+LARGER=$CAPACITY
+echo "# Setting up limits."
+echo "sgx_epc $SMALL" | tee $CG_ROOT/$TEST_CG_SUB1/misc.max
+echo "sgx_epc $LARGE" | tee $CG_ROOT/$TEST_CG_SUB2/misc.max
+echo "sgx_epc $LARGER" | tee $CG_ROOT/$TEST_CG_SUB4/misc.max
+
+timestamp=$(date +%Y%m%d_%H%M%S)
+
+test_cmd="./test_sgx -t unclobbered_vdso_oversubscribed"
+
+echo "# Start unclobbered_vdso_oversubscribed with SMALL limit, expecting failure..."
+# Always use leaf node of misc cgroups so it works for both v1 and v2
+# these may fail on OOM
+cgexec -g misc:$TEST_CG_SUB3 $test_cmd >cgtest_small_$timestamp.log 2>&1
+if [[ $? -eq 0 ]]; then
+    echo "# Fail on SMALL limit, not expecting any test passes."
+    cgdelete -r -g misc:$TEST_ROOT_CG
+    exit 1
+else
+    echo "# Test failed as expected."
+fi
+
+echo "# PASSED SMALL limit."
+
+echo "# Start 4 concurrent unclobbered_vdso_oversubscribed tests with LARGE limit,
+        expecting at least one success...."
+pids=()
+for i in {1..4}; do
+    (
+        cgexec -g misc:$TEST_CG_SUB2 $test_cmd >cgtest_large_positive_$timestamp.$i.log 2>&1
+    ) &
+    pids+=($!)
+done
+
+any_success=0
+for pid in "${pids[@]}"; do
+    wait "$pid"
+    status=$?
+    if [[ $status -eq 0 ]]; then
+        any_success=1
+	echo "# Process $pid returned successfully."
+    fi
+done
+
+if [[ $any_success -eq 0 ]]; then
+    echo "# Failed on LARGE limit positive testing, no test passes."
+    cgdelete -r -g misc:$TEST_ROOT_CG
+    exit 1
+fi
+
+echo "# PASSED LARGE limit positive testing."
+
+echo "# Start 5 concurrent unclobbered_vdso_oversubscribed tests with LARGE limit,
+        expecting at least one failure...."
+pids=()
+for i in {1..5}; do
+    (
+        cgexec -g misc:$TEST_CG_SUB2 $test_cmd >cgtest_large_negative_$timestamp.$i.log 2>&1
+    ) &
+    pids+=($!)
+done
+
+any_failure=0
+for pid in "${pids[@]}"; do
+    wait "$pid"
+    status=$?
+    if [[ $status -ne 0 ]]; then
+	echo "# Process $pid returned failure."
+        any_failure=1
+    fi
+done
+
+if [[ $any_failure -eq 0 ]]; then
+    echo "# Failed on LARGE limit negative testing, no test fails."
+    cgdelete -r -g misc:$TEST_ROOT_CG
+    exit 1
+fi
+
+echo "# PASSED LARGE limit negative testing."
+
+echo "# Start 10 concurrent unclobbered_vdso_oversubscribed tests with LARGER limit,
+        expecting no failure...."
+pids=()
+for i in {1..10}; do
+    (
+        cgexec -g misc:$TEST_CG_SUB4 $test_cmd >cgtest_larger_$timestamp.$i.log 2>&1
+    ) &
+    pids+=($!)
+done
+
+any_failure=0
+for pid in "${pids[@]}"; do
+    wait "$pid"
+    status=$?
+    if [[ $status -ne 0 ]]; then
+	echo "# Process $pid returned failure."
+        any_failure=1
+    fi
+done
+
+if [[ $any_failure -ne 0 ]]; then
+    echo "# Failed on LARGER limit, at least one test fails."
+    cgdelete -r -g misc:$TEST_ROOT_CG
+    exit 1
+fi
+
+echo "# PASSED LARGER limit tests."
+
+
+echo "# Start 10 concurrent unclobbered_vdso_oversubscribed tests with LARGER limit,
+      randomly kill one, expecting no failure...."
+pids=()
+for i in {1..10}; do
+    (
+        cgexec -g misc:$TEST_CG_SUB4 $test_cmd >cgtest_larger_$timestamp.$i.log 2>&1
+    ) &
+    pids+=($!)
+done
+
+sleep $((RANDOM % 10 + 5))
+
+# Randomly select a PID to kill
+RANDOM_INDEX=$((RANDOM % 10))
+PID_TO_KILL=${pids[RANDOM_INDEX]}
+
+kill $PID_TO_KILL
+echo "# Killed process with PID: $PID_TO_KILL"
+
+any_failure=0
+for pid in "${pids[@]}"; do
+    wait "$pid"
+    status=$?
+    if [ "$pid" != "$PID_TO_KILL" ]; then
+        if [[ $status -ne 0 ]]; then
+	    echo "# Process $pid returned failure."
+            any_failure=1
+        fi
+    fi
+done
+
+if [[ $any_failure -ne 0 ]]; then
+    echo "# Failed on random killing, at least one test fails."
+    cgdelete -r -g misc:$TEST_ROOT_CG
+    exit 1
+fi
+
+sleep 1
+
+USAGE=$(grep '^sgx_epc' "$CG_ROOT/$TEST_ROOT_CG/misc.current" | awk '{print $2}')
+if [ "$USAGE" -ne 0 ]; then
+    echo "# Failed: Final usage is $USAGE, not 0."
+else
+    echo "# PASSED leakage check."
+    echo "# PASSED ALL cgroup limit tests, cleanup cgroups..."
+fi
+cgdelete -r -g misc:$TEST_ROOT_CG
+echo "# done."
diff --git a/tools/testing/selftests/sgx/watch_misc_for_tests.sh b/tools/testing/selftests/sgx/watch_misc_for_tests.sh
new file mode 100755
index 000000000000..dbd38f346e7b
--- /dev/null
+++ b/tools/testing/selftests/sgx/watch_misc_for_tests.sh
@@ -0,0 +1,13 @@ 
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright(c) 2023 Intel Corporation.
+
+if [ -z "$1" ]
+  then
+    echo "No argument supplied, please provide 'max', 'current' or 'events'"
+    exit 1
+fi
+
+watch -n 1 "find /sys/fs/cgroup -wholename */test*/misc.$1 -exec sh -c \
+    'echo \"\$1:\"; cat \"\$1\"' _ {} \;"
+