Message ID | 20230406083050.19246-3-dwagner@suse.de (mailing list archive) |
---|---|
State | New, archived |
Series | test queue count changes on reconnect |
On Thu, Apr 06, 2023 at 10:30:50AM +0200, Daniel Wagner wrote:
> +test() {
> +	local subsys_name="blktests-subsystem-1"
> +	local cfs_path="${NVMET_CFS}/subsystems/${subsys_name}"
> +	local file_path="${TMPDIR}/img"
> +	local skipped=false
> +	local hostnqn
> +	local hostid
> +	local port
> +
> +	echo "Running ${TEST_NAME}"

I think I'm missing a _nvmet_setup call here.
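For illustration, a minimal sketch of the placement being suggested. The helper name is an assumption: other nvme tests in blktests initialize the target via _setup_nvmet from tests/nvme/rc, and the _nvmet_setup spelling above is read here as referring to that helper.

test() {
	echo "Running ${TEST_NAME}"

	# assumed fix: initialize the nvmet environment (modules, fcloop for
	# the fc transport) before any subsystem or port is created
	_setup_nvmet

	# ... rest of the test body unchanged ...
}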
On Apr 06, 2023 / 10:30, Daniel Wagner wrote:

[...]

> +	start_time=$(date +%s)
> +	while ! grep -q "${state}" "${state_file}"; do
> +		sleep 1
> +		end_time=$(date +%s)
> +                if (( end_time - start_time > timeout )); then

Nit: the line above has spaces instead of tabs.

[...]
diff --git a/tests/nvme/048 b/tests/nvme/048
new file mode 100755
index 000000000000..926f9f3de955
--- /dev/null
+++ b/tests/nvme/048
@@ -0,0 +1,125 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-3.0+
+# Copyright (C) 2023 Daniel Wagner, SUSE Labs
+#
+# Test queue count changes on reconnect
+
+. tests/nvme/rc
+
+DESCRIPTION="Test queue count changes on reconnect"
+
+requires() {
+	_nvme_requires
+	_have_loop
+	_require_nvme_trtype tcp rdma fc
+	_require_min_cpus 2
+}
+
+nvmf_wait_for_state() {
+	local def_state_timeout=5
+	local subsys_name="$1"
+	local state="$2"
+	local timeout="${3:-$def_state_timeout}"
+	local nvmedev
+	local state_file
+	local start_time
+	local end_time
+
+	nvmedev=$(_find_nvme_dev "${subsys_name}")
+	state_file="/sys/class/nvme-fabrics/ctl/${nvmedev}/state"
+
+	start_time=$(date +%s)
+	while ! grep -q "${state}" "${state_file}"; do
+		sleep 1
+		end_time=$(date +%s)
+                if (( end_time - start_time > timeout )); then
+			echo "expected state \"${state}\" not " \
+				"reached within ${timeout} seconds"
+			return 1
+		fi
+	done
+
+	return 0
+}
+
+set_nvmet_attr_qid_max() {
+	local nvmet_subsystem="$1"
+	local qid_max="$2"
+	local cfs_path="${NVMET_CFS}/subsystems/${nvmet_subsystem}"
+
+	echo "${qid_max}" > "${cfs_path}/attr_qid_max"
+}
+
+set_qid_max() {
+	local port="$1"
+	local subsys_name="$2"
+	local qid_max="$3"
+
+	set_nvmet_attr_qid_max "${subsys_name}" "${qid_max}"
+
+	# Setting qid_max forces a disconnect and the reconnect attempt starts
+	nvmf_wait_for_state "${subsys_name}" "connecting" || return 1
+	nvmf_wait_for_state "${subsys_name}" "live" || return 1
+
+	return 0
+}
+
+test() {
+	local subsys_name="blktests-subsystem-1"
+	local cfs_path="${NVMET_CFS}/subsystems/${subsys_name}"
+	local file_path="${TMPDIR}/img"
+	local skipped=false
+	local hostnqn
+	local hostid
+	local port
+
+	echo "Running ${TEST_NAME}"
+
+	hostid="$(uuidgen)"
+	if [ -z "$hostid" ] ; then
+		echo "uuidgen failed"
+		return 1
+	fi
+	hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"
+
+	truncate -s 512M "${file_path}"
+
+	_create_nvmet_subsystem "${subsys_name}" "${file_path}" \
+		"b92842df-a394-44b1-84a4-92ae7d112861"
+	port="$(_create_nvmet_port "${nvme_trtype}")"
+	_add_nvmet_subsys_to_port "${port}" "${subsys_name}"
+	_create_nvmet_host "${subsys_name}" "${hostnqn}"
+
+	if [[ -f "${cfs_path}/attr_qid_max" ]] ; then
+		_nvme_connect_subsys "${nvme_trtype}" "${subsys_name}" \
+			--hostnqn "${hostnqn}" \
+			--hostid "${hostid}" \
+			--keep-alive-tmo 1 \
+			--reconnect-delay 2
+
+		if ! nvmf_wait_for_state "${subsys_name}" "live" ; then
+			echo FAIL
+		else
+			set_qid_max "${port}" "${subsys_name}" 1 || echo FAIL
+			set_qid_max "${port}" "${subsys_name}" 128 || echo FAIL
+		fi
+
+		_nvme_disconnect_subsys "${subsys_name}"
+	else
+		SKIP_REASONS+=("missing attr_qid_max feature")
+		skipped=true
+	fi
+
+	_remove_nvmet_subsystem_from_port "${port}" "${subsys_name}"
+	_remove_nvmet_subsystem "${subsys_name}"
+	_remove_nvmet_port "${port}"
+	_remove_nvmet_host "${hostnqn}"
+
+	rm "${file_path}"
+
+	if [[ "${skipped}" = true ]] ; then
+		return 1
+	fi
+
+	echo "Test complete"
+}
diff --git a/tests/nvme/048.out b/tests/nvme/048.out
new file mode 100644
index 000000000000..7f986ef9637d
--- /dev/null
+++ b/tests/nvme/048.out
@@ -0,0 +1,3 @@
+Running nvme/048
+NQN:blktests-subsystem-1 disconnected 1 controller(s)
+Test complete
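As a usage note, once applied the new case runs like any other blktests test via the ./check runner; a rough invocation sketch, assuming a TCP setup and that nvme_trtype is passed through the environment (it can equally be set in the blktests config file):

# from the blktests checkout; the test skips itself unless the target
# kernel exposes attr_qid_max in nvmet configfs
nvme_trtype=tcp ./check nvme/048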
The target is allowed to change the number of I/O queues. Test if the
host is able to reconnect in this scenario.

Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
 tests/nvme/048     | 125 +++++++++++++++++++++++++++++++++++++++++++++
 tests/nvme/048.out |   3 ++
 2 files changed, 128 insertions(+)
 create mode 100755 tests/nvme/048
 create mode 100644 tests/nvme/048.out
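For illustration, the queue-count change described above can also be exercised by hand through the same nvmet configfs attribute the test writes to; a rough sketch, assuming configfs is mounted at /sys/kernel/config, the subsystem name used by the test, and a host controller that happens to show up as nvme0 (both names are placeholders):

# target side: lower the maximum I/O queue count; the controller is
# torn down and the host starts its reconnect attempts
echo 1 > /sys/kernel/config/nvmet/subsystems/blktests-subsystem-1/attr_qid_max

# host side: the controller state should pass through "connecting"
# and settle back to "live" with the reduced queue count
cat /sys/class/nvme-fabrics/ctl/nvme0/state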