Message ID | 20220831153506.28234-1-dwagner@suse.de (mailing list archive)
---|---
State | New, archived
Series | [blktests,v2] nvme/045: test queue count changes on reconnect
Thanks for the patch. First of all, the commit title says nvme/045, but it adds
nvme/046. Also I made a couple of comments on this patch inline.

I ran the added test case and observed test case failure [1] and KASAN
slab-out-of-bounds [2]. To run this test case, I applied this patch on top of
the PR #100 [3] and used kernel at nvme-6.1 branch tip 500e781dc0b0 at
"git://git.infradead.org/nvme.git". Is this failure expected? Or am I missing
any required setup to pass the test case?

On Aug 31, 2022 / 17:35, Daniel Wagner wrote:
> The target is allowed to change the number of I/O queues. Test if the
> host is able to reconnect in this scenario.
>
> Signed-off-by: Daniel Wagner <dwagner@suse.de>
> ---
>
> changes
> v2:
>  - detect if attr_qid_max is available
> v1:
>  - https://lore.kernel.org/linux-block/20220831120900.13129-1-dwagner@suse.de/
>
>  tests/nvme/046     | 133 +++++++++++++++++++++++++++++++++++++++++++++
>  tests/nvme/046.out |   3 +
>  tests/nvme/rc      |  10 ++++
>  3 files changed, 146 insertions(+)
>  create mode 100755 tests/nvme/046
>  create mode 100644 tests/nvme/046.out
>
> diff --git a/tests/nvme/046 b/tests/nvme/046
> new file mode 100755
> index 000000000000..428d596c93b9
> --- /dev/null
> +++ b/tests/nvme/046
> @@ -0,0 +1,133 @@
> +#!/bin/bash
> +# SPDX-License-Identifier: GPL-3.0+
> +# Copyright (C) 2022 Daniel Wagner, SUSE Labs
> +#
> +# Test queue count changes on reconnect
> +
> +. tests/nvme/rc
> +
> +DESCRIPTION="Test queue count changes on reconnect"
> +QUICK=1
> +
> +requires() {
> +	_nvme_requires
> +	_have_loop
> +	_require_nvme_trtype_is_fabrics
> +	_require_min_cpus 2

Out of curiosity, what's the reason to require 2 cpus?

> +}
> +
> +_detect_subsys_attr() {
> +	local attr="$1"
> +	local file_path="${TMPDIR}/img"
> +	local subsys_name="blktests-feature-detect"
> +	local cfs_path="${NVMET_CFS}/subsystems/${subsys_name}"
> +	local port
> +
> +	truncate -s 1M "${file_path}"
> +
> +	_create_nvmet_subsystem "${subsys_name}" "${file_path}" \
> +		"b92842df-a394-44b1-84a4-92ae7d112332"
> +	port="$(_create_nvmet_port "${nvme_trtype}")"
> +
> +	local val=1
> +	[[ -f "${cfs_path}/${attr}" ]] && val=0
> +
> +	_remove_nvmet_subsystem "${subsys_name}"
> +
> +	_remove_nvmet_port "${port}"
> +
> +	rm "${file_path}"
> +
> +	return "${val}"
> +}
> +
> +def_state_timeout=20
> +
> +nvmf_wait_for_state() {
> +	local subsys_name="$1"
> +	local state="$2"
> +	local timeout="${3:-$def_state_timeout}"
> +
> +	local nvmedev=$(_find_nvme_dev "${subsys_name}")
> +	local state_file="/sys/class/nvme-fabrics/ctl/${nvmedev}/state"
> +
> +	local start_time=$(date +%s)
> +	local end_time
> +
> +	while ! grep -q "${state}" "${state_file}"; do
> +		sleep 1
> +		end_time=$(date +%s)
> +		if (( end_time - start_time > timeout )); then
> +			echo "expected state \"${state}\" not " \
> +				"reached within ${timeout} seconds"
> +			break
> +		fi
> +	done
> +}
> +
> +nvmet_set_max_qid() {
> +	local port="$1"
> +	local subsys_name="$2"
> +	local max_qid="$3"
> +
> +	_remove_nvmet_subsystem_from_port "${port}" "${subsys_name}"
> +	nvmf_wait_for_state "${subsys_name}" "connecting"
> +
> +	_set_nvmet_attr_qid_max "${subsys_name}" "${max_qid}"
> +
> +	_add_nvmet_subsys_to_port "${port}" "${subsys_name}"
> +	nvmf_wait_for_state "${subsys_name}" "live"
> +}
> +
> +test() {
> +	local port
> +	local subsys_name="blktests-subsystem-1"
> +	local hostid
> +	local hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"
> +	local file_path="${TMPDIR}/img"
> +
> +	echo "Running ${TEST_NAME}"
> +
> +	hostid="$(uuidgen)"
> +	if [ -z "$hostid" ] ; then
> +		echo "uuidgen failed"
> +		return 1
> +	fi
> +
> +	_setup_nvmet
> +
> +	if ! _detect_subsys_attr "attr_qid_max"; then
> +		SKIP_REASONS+=("missing attr_qid_max feature")
> +		return 1
> +	fi
> +
> +	truncate -s 512M "${file_path}"
> +
> +	_create_nvmet_subsystem "${subsys_name}" "${file_path}" \
> +		"b92842df-a394-44b1-84a4-92ae7d112861"
> +	port="$(_create_nvmet_port "${nvme_trtype}")"
> +	_add_nvmet_subsys_to_port "${port}" "${subsys_name}"
> +	_create_nvmet_host "${subsys_name}" "${hostnqn}"
> +
> +	_nvme_connect_subsys "${nvme_trtype}" "${subsys_name}" \
> +		"" "" \
> +		"${hostnqn}" "${hostid}"
> +
> +	nvmf_wait_for_state "${subsys_name}" "live"
> +
> +	nvmet_set_max_qid "${port}" "${subsys_name}" 1
> +	nvmet_set_max_qid "${port}" "${subsys_name}" 128
> +
> +	_nvme_disconnect_subsys "${subsys_name}"
> +
> +	_remove_nvmet_subsystem_from_port "${port}" "${subsys_name}"
> +	_remove_nvmet_subsystem "${subsys_name}"
> +
> +	_remove_nvmet_port "${port}"
> +
> +	_remove_nvmet_host "${hostnqn}"
> +
> +	rm "${file_path}"
> +
> +	echo "Test complete"
> +}
> diff --git a/tests/nvme/046.out b/tests/nvme/046.out
> new file mode 100644
> index 000000000000..f1a967d540b7
> --- /dev/null
> +++ b/tests/nvme/046.out
> @@ -0,0 +1,3 @@
> +Running nvme/046
> +NQN:blktests-subsystem-1 disconnected 1 controller(s)
> +Test complete
> diff --git a/tests/nvme/rc b/tests/nvme/rc
> index 6d4397a7f043..9e4fe9c8ba6c 100644
> --- a/tests/nvme/rc
> +++ b/tests/nvme/rc
> @@ -544,6 +544,16 @@ _set_nvmet_dhgroup() {
>  		"${cfs_path}/dhchap_dhgroup"
>  }
>
> +_set_nvmet_attr_qid_max() {
> +	local nvmet_subsystem="$1"
> +	local qid_max="$2"
> +	local cfs_path="${NVMET_CFS}/subsystems/${nvmet_subsystem}"
> +
> +	if [[ -f "${cfs_path}/attr_qid_max" ]]; then
> +		echo $qid_max > "${cfs_path}/attr_qid_max"

I ran 'make check' and noticed the line above triggers a shellcheck warning:

    tests/nvme/rc:553:8: note: Double quote to prevent globbing and
    word splitting. [SC2086]

> +	fi
> +}
> +
>  _find_nvme_dev() {
>  	local subsys=$1
>  	local subsysnqn
> --
> 2.37.2

[1] test case failure messages

$ sudo ./check nvme/046
nvme/046 (Test queue count changes on reconnect)             [failed]
    runtime  88.104s  ...  87.687s
    --- tests/nvme/046.out  2022-09-08 08:35:02.063595059 +0900
    +++ /home/shin/kts/kernel-test-suite/src/blktests/results/nodev/nvme/046.out.bad  2022-09-08 08:43:54.524174409 +0900
    @@ -1,3 +1,86 @@
     Running nvme/046
    -NQN:blktests-subsystem-1 disconnected 1 controller(s)
    +grep: /sys/class/nvme-fabrics/ctl//state: No such file or directory
    +grep: /sys/class/nvme-fabrics/ctl//state: No such file or directory
    +grep: /sys/class/nvme-fabrics/ctl//state: No such file or directory
    +grep: /sys/class/nvme-fabrics/ctl//state: No such file or directory
    +grep: /sys/class/nvme-fabrics/ctl//state: No such file or directory
    ...
    (Run 'diff -u tests/nvme/046.out /home/shin/kts/kernel-test-suite/src/blktests/results/nodev/nvme/046.out.bad' to see the entire diff)

[2] KASAN: slab-out-of-bounds

[ 151.315742] run blktests nvme/046 at 2022-09-08 08:42:26
[ 151.834816] nvmet: adding nsid 1 to subsystem blktests-feature-detect
[ 152.170966] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
[ 152.514592] nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:.
[ 152.522907] nvme nvme6: Please enable CONFIG_NVME_MULTIPATH for full support of multi-port devices.
[ 152.527164] nvme nvme6: creating 4 I/O queues.
[ 152.533543] nvme nvme6: new ctrl: "blktests-subsystem-1"
[ 154.339129] nvme nvme6: Removing ctrl: NQN "blktests-subsystem-1"
[ 175.599995] ==================================================================
[ 175.601755] BUG: KASAN: slab-out-of-bounds in nvmet_subsys_attr_qid_max_store+0x13d/0x160 [nvmet]
[ 175.603816] Read of size 1 at addr ffff8881138dc450 by task check/946

[ 175.605801] CPU: 1 PID: 946 Comm: check Not tainted 6.0.0-rc2+ #3
[ 175.607232] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
[ 175.609735] Call Trace:
[ 175.610412]  <TASK>
[ 175.611039]  dump_stack_lvl+0x5b/0x77
[ 175.612016]  print_report.cold+0x5e/0x602
[ 175.612999]  ? nvmet_subsys_attr_qid_max_store+0x13d/0x160 [nvmet]
[ 175.614465]  kasan_report+0xb1/0xf0
[ 175.615392]  ? lock_downgrade+0x5c0/0x6b0
[ 175.616414]  ? nvmet_subsys_attr_qid_max_store+0x13d/0x160 [nvmet]
[ 175.617845]  nvmet_subsys_attr_qid_max_store+0x13d/0x160 [nvmet]
[ 175.619154]  ? nvmet_addr_adrfam_store+0x140/0x140 [nvmet]
[ 175.620382]  configfs_write_iter+0x2a5/0x460
[ 175.621273]  vfs_write+0x519/0xc50
[ 175.622021]  ? __ia32_sys_pread64+0x1c0/0x1c0
[ 175.622897]  ? find_held_lock+0x2d/0x110
[ 175.623747]  ? __fget_light+0x51/0x230
[ 175.624579]  ksys_write+0xe7/0x1b0
[ 175.625286]  ? __ia32_sys_read+0xa0/0xa0
[ 175.626121]  ? lockdep_hardirqs_on_prepare+0x17b/0x410
[ 175.627091]  ? syscall_enter_from_user_mode+0x22/0xc0
[ 175.628015]  ? lockdep_hardirqs_on+0x7d/0x100
[ 175.628844]  do_syscall_64+0x37/0x90
[ 175.629480]  entry_SYSCALL_64_after_hwframe+0x63/0xcd
[ 175.630374] RIP: 0033:0x7f072b5018f7
[ 175.631005] Code: 0f 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 48 89 54 24 18 48 89 74 24
[ 175.633806] RSP: 002b:00007ffc4143b2e8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[ 175.634954] RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007f072b5018f7
[ 175.636041] RDX: 0000000000000002 RSI: 0000556f9fd4d5b0 RDI: 0000000000000001
[ 175.637130] RBP: 0000556f9fd4d5b0 R08: 0000000000000000 R09: 0000000000000073
[ 175.638142] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000002
[ 175.639178] R13: 00007f072b5f9780 R14: 0000000000000002 R15: 00007f072b5f49e0
[ 175.640214]  </TASK>

[ 175.640893] Allocated by task 986:
[ 175.641480]  kasan_save_stack+0x1c/0x40
[ 175.642030]  __kasan_kmalloc+0xa7/0xe0
[ 175.642627]  nvmet_subsys_alloc+0x90/0x540 [nvmet]
[ 175.643296]  nvmet_subsys_make+0x36/0x400 [nvmet]
[ 175.643995]  configfs_mkdir+0x3f4/0xa60
[ 175.644546]  vfs_mkdir+0x1cf/0x400
[ 175.645045]  do_mkdirat+0x1fb/0x260
[ 175.645699]  __x64_sys_mkdir+0xd3/0x120
[ 175.646232]  do_syscall_64+0x37/0x90
[ 175.646765]  entry_SYSCALL_64_after_hwframe+0x63/0xcd

[ 175.647682] The buggy address belongs to the object at ffff8881138dc000
               which belongs to the cache kmalloc-1k of size 1024
[ 175.649252] The buggy address is located 80 bytes to the right of
               1024-byte region [ffff8881138dc000, ffff8881138dc400)

[ 175.651101] The buggy address belongs to the physical page:
[ 175.651811] page:00000000ab656d52 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1138d8
[ 175.653008] head:00000000ab656d52 order:3 compound_mapcount:0 compound_pincount:0
[ 175.653958] flags: 0x17ffffc0010200(slab|head|node=0|zone=2|lastcpupid=0x1fffff)
[ 175.654911] raw: 0017ffffc0010200 ffffea0004627200 dead000000000002 ffff888100042dc0
[ 175.655851] raw: 0000000000000000 0000000000100010 00000001ffffffff 0000000000000000
[ 175.656830] page dumped because: kasan: bad access detected

[ 175.657845] Memory state around the buggy address:
[ 175.658470]  ffff8881138dc300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 175.659391]  ffff8881138dc380: 00 00 00 00 fc fc fc fc fc fc fc fc fc fc fc fc
[ 175.660271] >ffff8881138dc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 175.661203]                                                  ^
[ 175.661977]  ffff8881138dc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 175.662853]  ffff8881138dc500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 175.807152] ==================================================================
[ 175.996544] Disabling lock debugging due to kernel taint

[3] https://github.com/osandov/blktests/pull/100
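(For reference, the sysfs file the test polls can be inspected by hand. The repeated "ctl//state" grep errors in [1] show an empty controller name in the path; with a real name, taken here from the "nvme6" dmesg lines above as an assumption, the check looks like this sketch:)

    # Sketch: inspect the controller state that nvmf_wait_for_state()
    # greps for. "nvme6" is taken from the dmesg above; on a healthy
    # fabrics connection this prints "live".
    $ cat /sys/class/nvme-fabrics/ctl/nvme6/state
    live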
Hi,

On Thu, Sep 08, 2022 at 12:02:24AM +0000, Shinichiro Kawasaki wrote:
> Thanks for the patch. First of all, the commit title says nvme/045, but it adds
> nvme/046. Also I made a couple of comments on this patch inline.

Ah, it's a stupid copy&paste error. Will update it accordingly.

> I ran the added test case and observed test case failure [1] and KASAN
> slab-out-of-bounds [2]. To run this test case, I applied this patch on top of
> the PR #100 [3] and used kernel at nvme-6.1 branch tip 500e781dc0b0 at
> "git://git.infradead.org/nvme.git". Is this failure expected? Or am I missing
> any required setup to pass the test case?

No, the crash you observed is not expected. I'll look into it. I suspect
it has something to do with '/sys/class/nvme-fabrics/ctl//state: No such
file or directory'.

> > +requires() {
> > +	_nvme_requires
> > +	_have_loop
> > +	_require_nvme_trtype_is_fabrics
> > +	_require_min_cpus 2
>
> Out of curiosity, what's the reason to require 2 cpus?

The number of CPUs defines how many queues we will set up or request from
the target. As this test starts with the default queue count requested by
the host and then limits the queue count to 1 on the target side, we need
more than one queue requested initially. I think it's worthwhile to add
this as a comment to the test.

> > +_set_nvmet_attr_qid_max() {
> > +	local nvmet_subsystem="$1"
> > +	local qid_max="$2"
> > +	local cfs_path="${NVMET_CFS}/subsystems/${nvmet_subsystem}"
> > +
> > +	if [[ -f "${cfs_path}/attr_qid_max" ]]; then
> > +		echo $qid_max > "${cfs_path}/attr_qid_max"
>
> I ran 'make check' and noticed the line above triggers a shellcheck warning:
>
>     tests/nvme/rc:553:8: note: Double quote to prevent globbing and
>     word splitting. [SC2086]

Will fix it.

> [1] test case failure messages
>
> $ sudo ./check nvme/046
> nvme/046 (Test queue count changes on reconnect)             [failed]
>     runtime  88.104s  ...  87.687s
>     --- tests/nvme/046.out  2022-09-08 08:35:02.063595059 +0900
>     +++ /home/shin/kts/kernel-test-suite/src/blktests/results/nodev/nvme/046.out.bad  2022-09-08 08:43:54.524174409 +0900
>     @@ -1,3 +1,86 @@
>      Running nvme/046
>     -NQN:blktests-subsystem-1 disconnected 1 controller(s)
>     +grep: /sys/class/nvme-fabrics/ctl//state: No such file or directory
>     +grep: /sys/class/nvme-fabrics/ctl//state: No such file or directory
>     +grep: /sys/class/nvme-fabrics/ctl//state: No such file or directory
>     +grep: /sys/class/nvme-fabrics/ctl//state: No such file or directory
>     +grep: /sys/class/nvme-fabrics/ctl//state: No such file or directory
>     ...
>     (Run 'diff -u tests/nvme/046.out /home/shin/kts/kernel-test-suite/src/blktests/results/nodev/nvme/046.out.bad' to see the entire diff)
>
> [2] KASAN: slab-out-of-bounds
>
> [ 151.315742] run blktests nvme/046 at 2022-09-08 08:42:26
> [ 151.834816] nvmet: adding nsid 1 to subsystem blktests-feature-detect
> [ 152.170966] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
> [ 152.514592] nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:.
> [ 152.522907] nvme nvme6: Please enable CONFIG_NVME_MULTIPATH for full support of multi-port devices.
> [ 152.527164] nvme nvme6: creating 4 I/O queues.
> [ 152.533543] nvme nvme6: new ctrl: "blktests-subsystem-1"
> [ 154.339129] nvme nvme6: Removing ctrl: NQN "blktests-subsystem-1"
> [ 175.599995] ==================================================================
> [ 175.601755] BUG: KASAN: slab-out-of-bounds in nvmet_subsys_attr_qid_max_store+0x13d/0x160 [nvmet]
> [ 175.603816] Read of size 1 at addr ffff8881138dc450 by task check/946
>
> [ 175.605801] CPU: 1 PID: 946 Comm: check Not tainted 6.0.0-rc2+ #3
> [ 175.607232] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
> [ 175.609735] Call Trace:
> [ 175.610412]  <TASK>
> [ 175.611039]  dump_stack_lvl+0x5b/0x77
> [ 175.612016]  print_report.cold+0x5e/0x602
> [ 175.612999]  ? nvmet_subsys_attr_qid_max_store+0x13d/0x160 [nvmet]

Hmm, as qid_max is more or less a copy of existing attributes, it might
not be the only attribute store operation which has this problem.

Thanks for the review!

Daniel
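(A minimal sketch of the two fixes discussed above: bailing out when the controller lookup comes back empty, and quoting the configfs write to silence SC2086. This is illustrative only, written against the helper as posted, not the follow-up Daniel later sent:)

    nvmf_wait_for_state() {
        local subsys_name="$1"
        local state="$2"
        local timeout="${3:-$def_state_timeout}"

        local nvmedev state_file start_time end_time

        nvmedev="$(_find_nvme_dev "${subsys_name}")"
        # Fail fast instead of grepping the nonexistent "ctl//state"
        # path forever when no controller is found (the failure in [1]).
        if [[ -z "${nvmedev}" ]]; then
            echo "no controller found for subsystem ${subsys_name}"
            return 1
        fi
        state_file="/sys/class/nvme-fabrics/ctl/${nvmedev}/state"

        start_time="$(date +%s)"
        while ! grep -q "${state}" "${state_file}"; do
            sleep 1
            end_time="$(date +%s)"
            if (( end_time - start_time > timeout )); then
                echo "expected state \"${state}\" not" \
                    "reached within ${timeout} seconds"
                return 1
            fi
        done
    }

    # The SC2086 fix is just quoting the expansion:
    echo "${qid_max}" > "${cfs_path}/attr_qid_max"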
On Thu, Sep 08, 2022 at 09:33:23AM +0200, Daniel Wagner wrote:
> > [ 175.612999]  ? nvmet_subsys_attr_qid_max_store+0x13d/0x160 [nvmet]
>
> Hmm, as qid_max is more or less a copy of existing attributes, it might
> not be the only attribute store operation which has this problem.

D'oh, the port online check is obviously wrong... working on a fix.
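(Going by the allocation and access stacks in the splat, where the subsystem is allocated via a configfs mkdir and the bad read happens in the attr_qid_max store handler, a hypothetical minimal reproducer outside blktests would be a plain configfs write. Untested sketch; the directory name is made up:)

    # Hypothetical reproducer distilled from the KASAN traces above;
    # assumes the nvmet module is loaded and configfs is mounted at
    # /sys/kernel/config. The write lands in
    # nvmet_subsys_attr_qid_max_store(), where the misplaced port
    # check reads past the subsystem allocation.
    modprobe nvmet
    mkdir /sys/kernel/config/nvmet/subsystems/kasan-repro
    echo 128 > /sys/kernel/config/nvmet/subsystems/kasan-repro/attr_qid_max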
diff --git a/tests/nvme/046 b/tests/nvme/046
new file mode 100755
index 000000000000..428d596c93b9
--- /dev/null
+++ b/tests/nvme/046
@@ -0,0 +1,133 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-3.0+
+# Copyright (C) 2022 Daniel Wagner, SUSE Labs
+#
+# Test queue count changes on reconnect
+
+. tests/nvme/rc
+
+DESCRIPTION="Test queue count changes on reconnect"
+QUICK=1
+
+requires() {
+	_nvme_requires
+	_have_loop
+	_require_nvme_trtype_is_fabrics
+	_require_min_cpus 2
+}
+
+_detect_subsys_attr() {
+	local attr="$1"
+	local file_path="${TMPDIR}/img"
+	local subsys_name="blktests-feature-detect"
+	local cfs_path="${NVMET_CFS}/subsystems/${subsys_name}"
+	local port
+
+	truncate -s 1M "${file_path}"
+
+	_create_nvmet_subsystem "${subsys_name}" "${file_path}" \
+		"b92842df-a394-44b1-84a4-92ae7d112332"
+	port="$(_create_nvmet_port "${nvme_trtype}")"
+
+	local val=1
+	[[ -f "${cfs_path}/${attr}" ]] && val=0
+
+	_remove_nvmet_subsystem "${subsys_name}"
+
+	_remove_nvmet_port "${port}"
+
+	rm "${file_path}"
+
+	return "${val}"
+}
+
+def_state_timeout=20
+
+nvmf_wait_for_state() {
+	local subsys_name="$1"
+	local state="$2"
+	local timeout="${3:-$def_state_timeout}"
+
+	local nvmedev=$(_find_nvme_dev "${subsys_name}")
+	local state_file="/sys/class/nvme-fabrics/ctl/${nvmedev}/state"
+
+	local start_time=$(date +%s)
+	local end_time
+
+	while ! grep -q "${state}" "${state_file}"; do
+		sleep 1
+		end_time=$(date +%s)
+		if (( end_time - start_time > timeout )); then
+			echo "expected state \"${state}\" not " \
+				"reached within ${timeout} seconds"
+			break
+		fi
+	done
+}
+
+nvmet_set_max_qid() {
+	local port="$1"
+	local subsys_name="$2"
+	local max_qid="$3"
+
+	_remove_nvmet_subsystem_from_port "${port}" "${subsys_name}"
+	nvmf_wait_for_state "${subsys_name}" "connecting"
+
+	_set_nvmet_attr_qid_max "${subsys_name}" "${max_qid}"
+
+	_add_nvmet_subsys_to_port "${port}" "${subsys_name}"
+	nvmf_wait_for_state "${subsys_name}" "live"
+}
+
+test() {
+	local port
+	local subsys_name="blktests-subsystem-1"
+	local hostid
+	local hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"
+	local file_path="${TMPDIR}/img"
+
+	echo "Running ${TEST_NAME}"
+
+	hostid="$(uuidgen)"
+	if [ -z "$hostid" ] ; then
+		echo "uuidgen failed"
+		return 1
+	fi
+
+	_setup_nvmet
+
+	if ! _detect_subsys_attr "attr_qid_max"; then
+		SKIP_REASONS+=("missing attr_qid_max feature")
+		return 1
+	fi
+
+	truncate -s 512M "${file_path}"
+
+	_create_nvmet_subsystem "${subsys_name}" "${file_path}" \
+		"b92842df-a394-44b1-84a4-92ae7d112861"
+	port="$(_create_nvmet_port "${nvme_trtype}")"
+	_add_nvmet_subsys_to_port "${port}" "${subsys_name}"
+	_create_nvmet_host "${subsys_name}" "${hostnqn}"
+
+	_nvme_connect_subsys "${nvme_trtype}" "${subsys_name}" \
+		"" "" \
+		"${hostnqn}" "${hostid}"
+
+	nvmf_wait_for_state "${subsys_name}" "live"
+
+	nvmet_set_max_qid "${port}" "${subsys_name}" 1
+	nvmet_set_max_qid "${port}" "${subsys_name}" 128
+
+	_nvme_disconnect_subsys "${subsys_name}"
+
+	_remove_nvmet_subsystem_from_port "${port}" "${subsys_name}"
+	_remove_nvmet_subsystem "${subsys_name}"
+
+	_remove_nvmet_port "${port}"
+
+	_remove_nvmet_host "${hostnqn}"
+
+	rm "${file_path}"
+
+	echo "Test complete"
+}
diff --git a/tests/nvme/046.out b/tests/nvme/046.out
new file mode 100644
index 000000000000..f1a967d540b7
--- /dev/null
+++ b/tests/nvme/046.out
@@ -0,0 +1,3 @@
+Running nvme/046
+NQN:blktests-subsystem-1 disconnected 1 controller(s)
+Test complete
diff --git a/tests/nvme/rc b/tests/nvme/rc
index 6d4397a7f043..9e4fe9c8ba6c 100644
--- a/tests/nvme/rc
+++ b/tests/nvme/rc
@@ -544,6 +544,16 @@ _set_nvmet_dhgroup() {
 		"${cfs_path}/dhchap_dhgroup"
 }
 
+_set_nvmet_attr_qid_max() {
+	local nvmet_subsystem="$1"
+	local qid_max="$2"
+	local cfs_path="${NVMET_CFS}/subsystems/${nvmet_subsystem}"
+
+	if [[ -f "${cfs_path}/attr_qid_max" ]]; then
+		echo $qid_max > "${cfs_path}/attr_qid_max"
+	fi
+}
+
 _find_nvme_dev() {
 	local subsys=$1
 	local subsysnqn
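(To try the patch as posted: blktests selects the fabrics transport via the nvme_trtype variable, so after applying the series something like the following should exercise the test. Loop is the self-contained default; tcp/rdma/fc need a matching transport setup:)

    # Run the new test case; nvme_trtype is the standard blktests knob
    # for fabrics tests.
    nvme_trtype=loop ./check nvme/046
    nvme_trtype=tcp ./check nvme/046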
The target is allowed to change the number of I/O queues. Test if the
host is able to reconnect in this scenario.

Signed-off-by: Daniel Wagner <dwagner@suse.de>
---

changes
v2:
 - detect if attr_qid_max is available
v1:
 - https://lore.kernel.org/linux-block/20220831120900.13129-1-dwagner@suse.de/

 tests/nvme/046     | 133 +++++++++++++++++++++++++++++++++++++++++++++
 tests/nvme/046.out |   3 +
 tests/nvme/rc      |  10 ++++
 3 files changed, 146 insertions(+)
 create mode 100755 tests/nvme/046
 create mode 100644 tests/nvme/046.out
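(The new _set_nvmet_attr_qid_max helper boils down to a guarded configfs write; done by hand it would look like this sketch, assuming the usual /sys/kernel/config/nvmet mount point for NVMET_CFS and the subsystem name used by the test:)

    # Manual equivalent of: _set_nvmet_attr_qid_max blktests-subsystem-1 1
    subsys="/sys/kernel/config/nvmet/subsystems/blktests-subsystem-1"
    if [[ -f "${subsys}/attr_qid_max" ]]; then
        echo 1 > "${subsys}/attr_qid_max"
    fi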