From patchwork Fri Nov 22 00:54:55 2024
X-Patchwork-Submitter: Shuah Khan
X-Patchwork-Id: 13882530
Message-ID: <9ac83205-add4-4971-8cf3-70be10282e1c@linuxfoundation.org>
Date: Thu, 21 Nov 2024 17:54:55 -0700
X-Mailing-List: linux-kselftest@vger.kernel.org
To: Linus Torvalds
Cc: Shuah Khan, shuah, Brendan Higgins, David Gow,
 linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
From: Shuah Khan
Subject: [GIT PULL] KUnit update for Linux 6.13-rc1-fixed

Hi Linus,

Please pull the following kunit update for Linux 6.13-rc1. This pull
request is fixed up with the right fix for the UAF bug and includes
other fixes.

linux_kselftest-kunit-6.13-rc1-fixed

kunit update for Linux 6.13-rc1

-- fixes use-after-free (UAF) bug in kunit_init_suite()
-- adds option to kunit tool to print just the summary of test results
-- adds option to kunit tool to print just the failed test results
-- fixes kunit_zalloc_skb() to use the gfp value passed in by the user
   instead of hardcoding GFP_KERNEL
-- fixes kunit_zalloc_skb() kernel doc to include the allocation flags
   variable
-- updates KUnit email address for Brendan Higgins
-- adds LoongArch config to qemu_configs
-- changes tool to allow overriding the shutdown mode from qemu config
-- enables shutdown in loongarch qemu_config
-- fixes potential null dereference in kunit_device_driver_test()
-- fixes debugfs to use IS_ERR() for alloc_string_stream() error check

diff is attached.
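As a usage note (an illustration of the options added by the attached
diff, not part of the series itself; kunit_output.log is a placeholder
file name), the two new tool flags can be exercised like this:

  # print only the summary line after a run
  ./tools/testing/kunit/kunit.py run --summary

  # print only failed/crashed tests plus the summary line when
  # re-parsing previously captured output
  ./tools/testing/kunit/kunit.py parse --failed kunit_output.log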
Tests passed on my kunit repo & linux-next:
- Build make allmodconfig
- ./tools/testing/kunit/kunit.py run
- ./tools/testing/kunit/kunit.py run --alltests
- ./tools/testing/kunit/kunit.py run --arch x86_64
- ./tools/testing/kunit/kunit.py run --alltests --arch x86_64

thanks,
-- Shuah

----------------------------------------------------------------

The following changes since commit 2d5404caa8c7bb5c4e0435f94b28834ae5456623:

  Linux 6.12-rc7 (2024-11-10 14:19:35 -0800)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest tags/linux_kselftest-kunit-6.13-rc1-fixed

for you to fetch changes up to 62adcae479fe5bc04fa3b6c3f93bd340441f8b25:

  kunit: qemu_configs: loongarch: Enable shutdown (2024-11-19 15:26:30 -0700)

----------------------------------------------------------------
linux_kselftest-kunit-6.13-rc1-fixed

kunit update for Linux 6.13-rc1

-- fixes use-after-free (UAF) bug in kunit_init_suite()
-- adds option to kunit tool to print just the summary of test results
-- adds option to kunit tool to print just the failed test results
-- fixes kunit_zalloc_skb() to use the gfp value passed in by the user
   instead of hardcoding GFP_KERNEL
-- fixes kunit_zalloc_skb() kernel doc to include the allocation flags
   variable
-- updates KUnit email address for Brendan Higgins
-- adds LoongArch config to qemu_configs
-- changes tool to allow overriding the shutdown mode from qemu config
-- enables shutdown in loongarch qemu_config
-- fixes potential null dereference in kunit_device_driver_test()
-- fixes debugfs to use IS_ERR() for alloc_string_stream() error check

----------------------------------------------------------------
Brendan Higgins (1):
      MAINTAINERS: Update KUnit email address for Brendan Higgins

Dan Carpenter (2):
      kunit: skb: use "gfp" variable instead of hardcoding GFP_KERNEL
      kunit: skb: add gfp to kernel doc for kunit_zalloc_skb()

David Gow (1):
      kunit: tool: Only print the summary

Jinjie Ruan (1):
      kunit: string-stream: Fix a UAF bug in kunit_init_suite()

Kuan-Wei Chiu (1):
      kunit: debugfs: Use IS_ERR() for alloc_string_stream() error check

Rae Moar (1):
      kunit: tool: print failed tests only

Thomas Weißschuh (3):
      kunit: qemu_configs: Add LoongArch config
      kunit: tool: Allow overriding the shutdown mode from qemu config
      kunit: qemu_configs: loongarch: Enable shutdown

Zichen Xie (1):
      kunit: Fix potential null dereference in kunit_device_driver_test()

 MAINTAINERS                                   |   2 +-
 include/kunit/skbuff.h                        |   5 +-
 lib/kunit/debugfs.c                           |   9 +-
 lib/kunit/kunit-test.c                        |   2 +
 tools/testing/kunit/kunit.py                  |  28 +++++-
 tools/testing/kunit/kunit_kernel.py           |   4 +-
 tools/testing/kunit/kunit_parser.py           | 134 ++++++++++++++++----------
 tools/testing/kunit/kunit_printer.py          |  14 ++-
 tools/testing/kunit/kunit_tool_test.py        |  55 +++++------
 tools/testing/kunit/qemu_configs/loongarch.py |  21 ++++
 10 files changed, 183 insertions(+), 91 deletions(-)
 create mode 100644 tools/testing/kunit/qemu_configs/loongarch.py

----------------------------------------------------------------
diff --git a/MAINTAINERS b/MAINTAINERS
index 21fdaa19229a..398518c5e861 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12405,7 +12405,7 @@ F: fs/smb/common/
 F: fs/smb/server/
 
 KERNEL UNIT TESTING FRAMEWORK (KUnit)
-M: Brendan Higgins
+M: Brendan Higgins
 M: David Gow
 R: Rae Moar
 L: linux-kselftest@vger.kernel.org
diff --git a/include/kunit/skbuff.h b/include/kunit/skbuff.h
index 44d12370939a..07784694357c 100644
--- a/include/kunit/skbuff.h
+++ b/include/kunit/skbuff.h
@@ -20,8 +20,9 @@ static void kunit_action_kfree_skb(void *p)
  * kunit_zalloc_skb() - Allocate and initialize a resource managed skb.
  * @test: The test case to which the skb belongs
  * @len: size to allocate
+ * @gfp: allocation flags
  *
- * Allocate a new struct sk_buff with GFP_KERNEL, zero fill the give length
+ * Allocate a new struct sk_buff with gfp flags, zero fill the given length
  * and add it as a resource to the kunit test for automatic cleanup.
  *
  * Returns: newly allocated SKB, or %NULL on error
@@ -29,7 +30,7 @@ static void kunit_action_kfree_skb(void *p)
 static inline struct sk_buff *kunit_zalloc_skb(struct kunit *test, int len,
                                                gfp_t gfp)
 {
-        struct sk_buff *res = alloc_skb(len, GFP_KERNEL);
+        struct sk_buff *res = alloc_skb(len, gfp);
 
         if (!res || skb_pad(res, len))
                 return NULL;
diff --git a/lib/kunit/debugfs.c b/lib/kunit/debugfs.c
index d548750a325a..af71911f4a07 100644
--- a/lib/kunit/debugfs.c
+++ b/lib/kunit/debugfs.c
@@ -181,7 +181,7 @@ void kunit_debugfs_create_suite(struct kunit_suite *suite)
          * successfully.
          */
         stream = alloc_string_stream(GFP_KERNEL);
-        if (IS_ERR_OR_NULL(stream))
+        if (IS_ERR(stream))
                 return;
 
         string_stream_set_append_newlines(stream, true);
@@ -189,7 +189,7 @@ void kunit_debugfs_create_suite(struct kunit_suite *suite)
 
         kunit_suite_for_each_test_case(suite, test_case) {
                 stream = alloc_string_stream(GFP_KERNEL);
-                if (IS_ERR_OR_NULL(stream))
+                if (IS_ERR(stream))
                         goto err;
 
                 string_stream_set_append_newlines(stream, true);
@@ -212,8 +212,11 @@ void kunit_debugfs_create_suite(struct kunit_suite *suite)
 
 err:
         string_stream_destroy(suite->log);
-        kunit_suite_for_each_test_case(suite, test_case)
+        suite->log = NULL;
+        kunit_suite_for_each_test_case(suite, test_case) {
                 string_stream_destroy(test_case->log);
+                test_case->log = NULL;
+        }
 }
 
 void kunit_debugfs_destroy_suite(struct kunit_suite *suite)
diff --git a/lib/kunit/kunit-test.c b/lib/kunit/kunit-test.c
index 37e02be1e710..d9c781c859fd 100644
--- a/lib/kunit/kunit-test.c
+++ b/lib/kunit/kunit-test.c
@@ -805,6 +805,8 @@ static void kunit_device_driver_test(struct kunit *test)
         struct device *test_device;
         struct driver_test_state *test_state = kunit_kzalloc(test, sizeof(*test_state), GFP_KERNEL);
 
+        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_state);
+
         test->priv = test_state;
 
         test_driver = kunit_driver_create(test, "my_driver");
diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
index bc74088c458a..676fa99a8b19 100755
--- a/tools/testing/kunit/kunit.py
+++ b/tools/testing/kunit/kunit.py
@@ -23,7 +23,7 @@ from typing import Iterable, List, Optional, Sequence, Tuple
 import kunit_json
 import kunit_kernel
 import kunit_parser
-from kunit_printer import stdout
+from kunit_printer import stdout, null_printer
 
 class KunitStatus(Enum):
         SUCCESS = auto()
@@ -49,6 +49,8 @@ class KunitBuildRequest(KunitConfigRequest):
 class KunitParseRequest:
         raw_output: Optional[str]
         json: Optional[str]
+        summary: bool
+        failed: bool
 
 @dataclass
 class KunitExecRequest(KunitParseRequest):
@@ -235,11 +237,18 @@ def parse_tests(request: KunitParseRequest, metadata: kunit_json.Metadata, input
                 parse_time = time.time() - parse_start
                 return KunitResult(KunitStatus.SUCCESS, parse_time), fake_test
 
+        default_printer = stdout
+        if request.summary or request.failed:
+                default_printer = null_printer
+
         # Actually parse the test results.
-        test = kunit_parser.parse_run_tests(input_data)
+        test = kunit_parser.parse_run_tests(input_data, default_printer)
         parse_time = time.time() - parse_start
 
+        if request.failed:
+                kunit_parser.print_test(test, request.failed, stdout)
+        kunit_parser.print_summary_line(test, stdout)
+
         if request.json:
                 json_str = kunit_json.get_json_result(
                                         test=test,
@@ -413,6 +422,14 @@ def add_parse_opts(parser: argparse.ArgumentParser) -> None:
                             help='Prints parsed test results as JSON to stdout or a file if '
                             'a filename is specified. Does nothing if --raw_output is set.',
                             type=str, const='stdout', default=None, metavar='FILE')
+        parser.add_argument('--summary',
+                            help='Prints only the summary line for parsed test results.'
+                            'Does nothing if --raw_output is set.',
+                            action='store_true')
+        parser.add_argument('--failed',
+                            help='Prints only the failed parsed test results and summary line.'
+                            'Does nothing if --raw_output is set.',
+                            action='store_true')
 
 def tree_from_args(cli_args: argparse.Namespace) -> kunit_kernel.LinuxSourceTree:
@@ -448,6 +465,8 @@ def run_handler(cli_args: argparse.Namespace) -> None:
                 jobs=cli_args.jobs,
                 raw_output=cli_args.raw_output,
                 json=cli_args.json,
+                summary=cli_args.summary,
+                failed=cli_args.failed,
                 timeout=cli_args.timeout,
                 filter_glob=cli_args.filter_glob,
                 filter=cli_args.filter,
@@ -495,6 +514,8 @@ def exec_handler(cli_args: argparse.Namespace) -> None:
         exec_request = KunitExecRequest(raw_output=cli_args.raw_output,
                                         build_dir=cli_args.build_dir,
                                         json=cli_args.json,
+                                        summary=cli_args.summary,
+                                        failed=cli_args.failed,
                                         timeout=cli_args.timeout,
                                         filter_glob=cli_args.filter_glob,
                                         filter=cli_args.filter,
@@ -520,7 +541,8 @@ def parse_handler(cli_args: argparse.Namespace) -> None:
         # We know nothing about how the result was created!
         metadata = kunit_json.Metadata()
         request = KunitParseRequest(raw_output=cli_args.raw_output,
-                                    json=cli_args.json)
+                                    json=cli_args.json, summary=cli_args.summary,
+                                    failed=cli_args.failed)
         result, _ = parse_tests(request, metadata, kunit_output)
         if result.status != KunitStatus.SUCCESS:
                 sys.exit(1)
diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
index 61931c4926fd..e76d7894b6c5 100644
--- a/tools/testing/kunit/kunit_kernel.py
+++ b/tools/testing/kunit/kunit_kernel.py
@@ -105,7 +105,9 @@ class LinuxSourceTreeOperationsQemu(LinuxSourceTreeOperations):
                 self._kconfig = qemu_arch_params.kconfig
                 self._qemu_arch = qemu_arch_params.qemu_arch
                 self._kernel_path = qemu_arch_params.kernel_path
-                self._kernel_command_line = qemu_arch_params.kernel_command_line + ' kunit_shutdown=reboot'
+                self._kernel_command_line = qemu_arch_params.kernel_command_line
+                if 'kunit_shutdown=' not in self._kernel_command_line:
+                        self._kernel_command_line += ' kunit_shutdown=reboot'
                 self._extra_qemu_params = qemu_arch_params.extra_qemu_params
                 self._serial = qemu_arch_params.serial
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index ce34be15c929..29fc27e8949b 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -17,7 +17,7 @@ import textwrap
 from enum import Enum, auto
 from typing import Iterable, Iterator, List, Optional, Tuple
 
-from kunit_printer import stdout
+from kunit_printer import Printer, stdout
 
 class Test:
         """
@@ -54,10 +54,10 @@ class Test:
                 """Returns string representation of a Test class object."""
                 return str(self)
 
-        def add_error(self, error_message: str) -> None:
+        def add_error(self, printer: Printer, error_message: str) -> None:
                 """Records an error that occurred while parsing this test."""
                 self.counts.errors += 1
-                stdout.print_with_timestamp(stdout.red('[ERROR]') + f' Test: {self.name}: {error_message}')
+                printer.print_with_timestamp(stdout.red('[ERROR]') + f' Test: {self.name}: {error_message}')
 
         def ok_status(self) -> bool:
                 """Returns true if the status was ok, i.e. passed or skipped."""
@@ -251,7 +251,7 @@ KTAP_VERSIONS = [1]
 TAP_VERSIONS = [13, 14]
 
 def check_version(version_num: int, accepted_versions: List[int],
-                version_type: str, test: Test) -> None:
+                version_type: str, test: Test, printer: Printer) -> None:
         """
         Adds error to test object if version number is too high or too
         low.
@@ -263,13 +263,14 @@ def check_version(version_num: int, accepted_versions: List[int],
         version_type - 'KTAP' or 'TAP' depending on the type of
                 version line.
         test - Test object for current test being parsed
+        printer - Printer object to output error
         """
         if version_num < min(accepted_versions):
-                test.add_error(f'{version_type} version lower than expected!')
+                test.add_error(printer, f'{version_type} version lower than expected!')
         elif version_num > max(accepted_versions):
-                test.add_error(f'{version_type} version higer than expected!')
+                test.add_error(printer, f'{version_type} version higer than expected!')
 
-def parse_ktap_header(lines: LineStream, test: Test) -> bool:
+def parse_ktap_header(lines: LineStream, test: Test, printer: Printer) -> bool:
         """
         Parses KTAP/TAP header line and checks version number.
         Returns False if fails to parse KTAP/TAP header line.
@@ -281,6 +282,7 @@ def parse_ktap_header(lines: LineStream, test: Test) -> bool:
         Parameters:
         lines - LineStream of KTAP output to parse
         test - Test object for current test being parsed
+        printer - Printer object to output results
 
         Return:
         True if successfully parsed KTAP/TAP header line
         """
         ktap_match = KTAP_START.match(lines.peek())
         tap_match = TAP_START.match(lines.peek())
         if ktap_match:
                 version_num = int(ktap_match.group(1))
-                check_version(version_num, KTAP_VERSIONS, 'KTAP', test)
+                check_version(version_num, KTAP_VERSIONS, 'KTAP', test, printer)
         elif tap_match:
                 version_num = int(tap_match.group(1))
-                check_version(version_num, TAP_VERSIONS, 'TAP', test)
+                check_version(version_num, TAP_VERSIONS, 'TAP', test, printer)
         else:
                 return False
         lines.pop()
@@ -380,7 +382,7 @@ def peek_test_name_match(lines: LineStream, test: Test) -> bool:
         return name == test.name
 
 def parse_test_result(lines: LineStream, test: Test,
-                expected_num: int) -> bool:
+                expected_num: int, printer: Printer) -> bool:
         """
         Parses test result line and stores the status and name in the test
         object. Reports an error if the test number does not match expected
@@ -398,6 +400,7 @@ def parse_test_result(lines: LineStream, test: Test,
         lines - LineStream of KTAP output to parse
         test - Test object for current test being parsed
         expected_num - expected test number for current test
+        printer - Printer object to output results
 
         Return:
         True if successfully parsed a test result line.
@@ -420,7 +423,7 @@ def parse_test_result(lines: LineStream, test: Test,
         # Check test num
         num = int(match.group(2))
         if num != expected_num:
-                test.add_error(f'Expected test number {expected_num} but found {num}')
+                test.add_error(printer, f'Expected test number {expected_num} but found {num}')
 
         # Set status of test object
         status = match.group(1)
@@ -486,7 +489,7 @@ def format_test_divider(message: str, len_message: int) -> str:
         len_2 = difference - len_1
         return ('=' * len_1) + f' {message} ' + ('=' * len_2)
 
-def print_test_header(test: Test) -> None:
+def print_test_header(test: Test, printer: Printer) -> None:
         """
         Prints test header with test name and optionally the expected number
         of subtests.
@@ -496,6 +499,7 @@ def print_test_header(test: Test) -> None:
 
         Parameters:
         test - Test object representing current test being printed
+        printer - Printer object to output results
         """
         message = test.name
         if message != "":
@@ -507,15 +511,15 @@ def print_test_header(test: Test) -> None:
                         message += '(1 subtest)'
                 else:
                         message += f'({test.expected_count} subtests)'
-        stdout.print_with_timestamp(format_test_divider(message, len(message)))
+        printer.print_with_timestamp(format_test_divider(message, len(message)))
 
-def print_log(log: Iterable[str]) -> None:
+def print_log(log: Iterable[str], printer: Printer) -> None:
         """Prints all strings in saved log for test in yellow."""
         formatted = textwrap.dedent('\n'.join(log))
         for line in formatted.splitlines():
-                stdout.print_with_timestamp(stdout.yellow(line))
+                printer.print_with_timestamp(printer.yellow(line))
 
-def format_test_result(test: Test) -> str:
+def format_test_result(test: Test, printer: Printer) -> str:
         """
         Returns string with formatted test result with colored
         status and test name.
@@ -525,23 +529,24 @@ def format_test_result(test: Test) -> str:
 
         Parameters:
         test - Test object representing current test being printed
+        printer - Printer object to output results
 
         Return:
         String containing formatted test result
         """
         if test.status == TestStatus.SUCCESS:
-                return stdout.green('[PASSED] ') + test.name
+                return printer.green('[PASSED] ') + test.name
         if test.status == TestStatus.SKIPPED:
-                return stdout.yellow('[SKIPPED] ') + test.name
+                return printer.yellow('[SKIPPED] ') + test.name
         if test.status == TestStatus.NO_TESTS:
-                return stdout.yellow('[NO TESTS RUN] ') + test.name
+                return printer.yellow('[NO TESTS RUN] ') + test.name
         if test.status == TestStatus.TEST_CRASHED:
-                print_log(test.log)
+                print_log(test.log, printer)
                 return stdout.red('[CRASHED] ') + test.name
-        print_log(test.log)
-        return stdout.red('[FAILED] ') + test.name
+        print_log(test.log, printer)
+        return printer.red('[FAILED] ') + test.name
 
-def print_test_result(test: Test) -> None:
+def print_test_result(test: Test, printer: Printer) -> None:
         """
         Prints result line with status of test.
 
@@ -550,10 +555,11 @@ def print_test_result(test: Test) -> None:
 
         Parameters:
         test - Test object representing current test being printed
+        printer - Printer object
         """
-        stdout.print_with_timestamp(format_test_result(test))
+        printer.print_with_timestamp(format_test_result(test, printer))
 
-def print_test_footer(test: Test) -> None:
+def print_test_footer(test: Test, printer: Printer) -> None:
         """
         Prints test footer with status of test.
@@ -562,12 +568,38 @@ def print_test_footer(test: Test) -> None:
 
         Parameters:
         test - Test object representing current test being printed
+        printer - Printer object to output results
         """
-        message = format_test_result(test)
-        stdout.print_with_timestamp(format_test_divider(message,
-                len(message) - stdout.color_len()))
+        message = format_test_result(test, printer)
+        printer.print_with_timestamp(format_test_divider(message,
+                len(message) - printer.color_len()))
 
+def print_test(test: Test, failed_only: bool, printer: Printer) -> None:
+        """
+        Prints Test object to given printer. For a child test, the result line is
+        printed. For a parent test, the test header, all child test results, and
+        the test footer are all printed. If failed_only is true, only failed/crashed
+        tests will be printed.
+
+        Parameters:
+        test - Test object to print
+        failed_only - True if only failed/crashed tests should be printed.
+        printer - Printer object to output results
+        """
+        if test.name == "main":
+                printer.print_with_timestamp(DIVIDER)
+                for subtest in test.subtests:
+                        print_test(subtest, failed_only, printer)
+                printer.print_with_timestamp(DIVIDER)
+        elif test.subtests != []:
+                if not failed_only or not test.ok_status():
+                        print_test_header(test, printer)
+                        for subtest in test.subtests:
+                                print_test(subtest, failed_only, printer)
+                        print_test_footer(test, printer)
+        else:
+                if not failed_only or not test.ok_status():
+                        print_test_result(test, printer)
 
 def _summarize_failed_tests(test: Test) -> str:
         """Tries to summarize all the failing subtests in `test`."""
@@ -601,7 +633,7 @@ def _summarize_failed_tests(test: Test) -> str:
 
         return 'Failures: ' + ', '.join(failures)
 
-def print_summary_line(test: Test) -> None:
+def print_summary_line(test: Test, printer: Printer) -> None:
         """
         Prints summary line of test object. Color of line is dependent on
         status of test. Color is green if test passes, yellow if test is
@@ -614,6 +646,7 @@ def print_summary_line(test: Test) -> None:
         Errors: 0"
 
         test - Test object representing current test being printed
+        printer - Printer object to output results
         """
         if test.status == TestStatus.SUCCESS:
                 color = stdout.green
@@ -621,7 +654,7 @@ def print_summary_line(test: Test) -> None:
                 color = stdout.yellow
         else:
                 color = stdout.red
-        stdout.print_with_timestamp(color(f'Testing complete. {test.counts}'))
+        printer.print_with_timestamp(color(f'Testing complete. {test.counts}'))
 
         # Summarize failures that might have gone off-screen since we had a lot
         # of tests (arbitrarily defined as >=100 for now).
@@ -630,7 +663,7 @@ def print_summary_line(test: Test) -> None:
                 summarized = _summarize_failed_tests(test)
                 if not summarized:
                         return
-                stdout.print_with_timestamp(color(summarized))
+                printer.print_with_timestamp(color(summarized))
 
 # Other methods:
 
@@ -654,7 +687,7 @@ def bubble_up_test_results(test: Test) -> None:
         elif test.counts.get_status() == TestStatus.TEST_CRASHED:
                 test.status = TestStatus.TEST_CRASHED
 
-def parse_test(lines: LineStream, expected_num: int, log: List[str], is_subtest: bool) -> Test:
+def parse_test(lines: LineStream, expected_num: int, log: List[str], is_subtest: bool, printer: Printer) -> Test:
         """
         Finds next test to parse in LineStream, creates new Test object,
         parses any subtests of the test, populates Test object with all
@@ -710,6 +743,7 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str], is_subtest:
         log - list of strings containing any preceding diagnostic lines
                 corresponding to the current test
         is_subtest - boolean indicating whether test is a subtest
+        printer - Printer object to output results
 
         Return:
         Test object populated with characteristics and any subtests
@@ -725,14 +759,14 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str], is_subtest:
                 # If parsing the main/top-level test, parse KTAP version line and
                 # test plan
                 test.name = "main"
-                ktap_line = parse_ktap_header(lines, test)
+                ktap_line = parse_ktap_header(lines, test, printer)
                 test.log.extend(parse_diagnostic(lines))
                 parse_test_plan(lines, test)
                 parent_test = True
         else:
                 # If not the main test, attempt to parse a test header containing
                 # the KTAP version line and/or subtest header line
-                ktap_line = parse_ktap_header(lines, test)
+                ktap_line = parse_ktap_header(lines, test, printer)
                 subtest_line = parse_test_header(lines, test)
                 parent_test = (ktap_line or subtest_line)
                 if parent_test:
@@ -740,7 +774,7 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str], is_subtest:
                         # If KTAP version line and/or subtest header is found, attempt
                         # to parse test plan and print test header
                         test.log.extend(parse_diagnostic(lines))
                         parse_test_plan(lines, test)
-                        print_test_header(test)
+                        print_test_header(test, printer)
         expected_count = test.expected_count
         subtests = []
         test_num = 1
@@ -758,16 +792,16 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str], is_subtest:
                                 # If parser reaches end of test before
                                 # parsing expected number of subtests, print
                                 # crashed subtest and record error
-                                test.add_error('missing expected subtest!')
+                                test.add_error(printer, 'missing expected subtest!')
                                 sub_test.log.extend(sub_log)
                                 test.counts.add_status(
                                         TestStatus.TEST_CRASHED)
-                                print_test_result(sub_test)
+                                print_test_result(sub_test, printer)
                         else:
                                 test.log.extend(sub_log)
                                 break
                 else:
-                        sub_test = parse_test(lines, test_num, sub_log, True)
+                        sub_test = parse_test(lines, test_num, sub_log, True, printer)
                 subtests.append(sub_test)
                 test_num += 1
         test.subtests = subtests
@@ -775,51 +809,51 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str], is_subtest:
                 # If not main test, look for test result line
                 test.log.extend(parse_diagnostic(lines))
                 if test.name != "" and not peek_test_name_match(lines, test):
-                        test.add_error('missing subtest result line!')
+                        test.add_error(printer, 'missing subtest result line!')
                 else:
-                        parse_test_result(lines, test, expected_num)
+                        parse_test_result(lines, test, expected_num, printer)
 
         # Check for there being no subtests within parent test
         if parent_test and len(subtests) == 0:
                 # Don't override a bad status if this test had one reported.
                 # Assumption: no subtests means CRASHED is from Test.__init__()
                 if test.status in (TestStatus.TEST_CRASHED, TestStatus.SUCCESS):
-                        print_log(test.log)
+                        print_log(test.log, printer)
                         test.status = TestStatus.NO_TESTS
-                        test.add_error('0 tests run!')
+                        test.add_error(printer, '0 tests run!')
 
         # Add statuses to TestCounts attribute in Test object
         bubble_up_test_results(test)
         if parent_test and is_subtest:
                 # If test has subtests and is not the main test object, print
                 # footer.
-                print_test_footer(test)
+                print_test_footer(test, printer)
         elif is_subtest:
-                print_test_result(test)
+                print_test_result(test, printer)
         return test
 
-def parse_run_tests(kernel_output: Iterable[str]) -> Test:
+def parse_run_tests(kernel_output: Iterable[str], printer: Printer) -> Test:
         """
         Using kernel output, extract KTAP lines, parse the lines for test
         results and print condensed test results and summary line.
 
         Parameters:
         kernel_output - Iterable object contains lines of kernel output
+        printer - Printer object to output results
 
         Return:
         Test - the main test object with all subtests.
         """
-        stdout.print_with_timestamp(DIVIDER)
+        printer.print_with_timestamp(DIVIDER)
         lines = extract_tap_lines(kernel_output)
         test = Test()
         if not lines:
                 test.name = ''
-                test.add_error('Could not find any KTAP output. Did any KUnit tests run?')
+                test.add_error(printer, 'Could not find any KTAP output. Did any KUnit tests run?')
                 test.status = TestStatus.FAILURE_TO_PARSE_TESTS
         else:
-                test = parse_test(lines, 0, [], False)
+                test = parse_test(lines, 0, [], False, printer)
                 if test.status != TestStatus.NO_TESTS:
                         test.status = test.counts.get_status()
-        stdout.print_with_timestamp(DIVIDER)
-        print_summary_line(test)
+        printer.print_with_timestamp(DIVIDER)
         return test
diff --git a/tools/testing/kunit/kunit_printer.py b/tools/testing/kunit/kunit_printer.py
index 015adf87dc2c..ca119f61fe79 100644
--- a/tools/testing/kunit/kunit_printer.py
+++ b/tools/testing/kunit/kunit_printer.py
@@ -15,12 +15,17 @@ _RESET = '\033[0;0m'
 class Printer:
         """Wraps a file object, providing utilities for coloring output, etc."""
 
-        def __init__(self, output: typing.IO[str]):
+        def __init__(self, print: bool=True, output: typing.IO[str]=sys.stdout):
                 self._output = output
-                self._use_color = output.isatty()
+                self._print = print
+                if print:
+                        self._use_color = output.isatty()
+                else:
+                        self._use_color = False
 
         def print(self, message: str) -> None:
-                print(message, file=self._output)
+                if self._print:
+                        print(message, file=self._output)
 
         def print_with_timestamp(self, message: str) -> None:
                 ts = datetime.datetime.now().strftime('%H:%M:%S')
@@ -45,4 +50,5 @@ class Printer:
                 return len(self.red(''))
 
 # Provides a default instance that prints to stdout
-stdout = Printer(sys.stdout)
+stdout = Printer()
+null_printer = Printer(print=False)
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index 2beb7327e53f..0bcb0cc002f8 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -23,6 +23,7 @@ import kunit_parser
 import kunit_kernel
 import kunit_json
 import kunit
+from kunit_printer import stdout
 
 test_tmpdir = ''
 abs_test_data_dir = ''
@@ -139,28 +140,28 @@ class KUnitParserTest(unittest.TestCase):
 
         def test_parse_successful_test_log(self):
                 all_passed_log = test_data_path('test_is_test_passed-all_passed.log')
                 with open(all_passed_log) as file:
-                        result = kunit_parser.parse_run_tests(file.readlines())
+                        result = kunit_parser.parse_run_tests(file.readlines(), stdout)
                         self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
                 self.assertEqual(result.counts.errors, 0)
 
         def test_parse_successful_nested_tests_log(self):
                 all_passed_log = test_data_path('test_is_test_passed-all_passed_nested.log')
                 with open(all_passed_log) as file:
-                        result = kunit_parser.parse_run_tests(file.readlines())
+                        result = kunit_parser.parse_run_tests(file.readlines(), stdout)
                 self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
                 self.assertEqual(result.counts.errors, 0)
 
         def test_kselftest_nested(self):
                 kselftest_log = test_data_path('test_is_test_passed-kselftest.log')
                 with open(kselftest_log) as file:
-                        result = kunit_parser.parse_run_tests(file.readlines())
+                        result = kunit_parser.parse_run_tests(file.readlines(), stdout)
                         self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
                 self.assertEqual(result.counts.errors, 0)
 
         def test_parse_failed_test_log(self):
                 failed_log = test_data_path('test_is_test_passed-failure.log')
                 with open(failed_log) as file:
-                        result = kunit_parser.parse_run_tests(file.readlines())
+                        result = kunit_parser.parse_run_tests(file.readlines(), stdout)
                 self.assertEqual(kunit_parser.TestStatus.FAILURE, result.status)
                 self.assertEqual(result.counts.errors, 0)
 
@@ -168,7 +169,7 @@ class KUnitParserTest(unittest.TestCase):
                 empty_log = test_data_path('test_is_test_passed-no_tests_run_no_header.log')
                 with open(empty_log) as file:
                         result = kunit_parser.parse_run_tests(
-                                kunit_parser.extract_tap_lines(file.readlines()))
+                                kunit_parser.extract_tap_lines(file.readlines()), stdout)
                 self.assertEqual(0, len(result.subtests))
                 self.assertEqual(kunit_parser.TestStatus.FAILURE_TO_PARSE_TESTS, result.status)
                 self.assertEqual(result.counts.errors, 1)
@@ -179,7 +180,7 @@ class KUnitParserTest(unittest.TestCase):
                 with open(missing_plan_log) as file:
                         result = kunit_parser.parse_run_tests(
                                 kunit_parser.extract_tap_lines(
-                                        file.readlines()))
+                                        file.readlines()), stdout)
                 # A missing test plan is not an error.
                 self.assertEqual(result.counts, kunit_parser.TestCounts(passed=10, errors=0))
                 self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
 
@@ -188,7 +189,7 @@ class KUnitParserTest(unittest.TestCase):
                 header_log = test_data_path('test_is_test_passed-no_tests_run_with_header.log')
                 with open(header_log) as file:
                         result = kunit_parser.parse_run_tests(
-                                kunit_parser.extract_tap_lines(file.readlines()))
+                                kunit_parser.extract_tap_lines(file.readlines()), stdout)
                 self.assertEqual(0, len(result.subtests))
                 self.assertEqual(kunit_parser.TestStatus.NO_TESTS, result.status)
                 self.assertEqual(result.counts.errors, 1)
@@ -197,7 +198,7 @@ class KUnitParserTest(unittest.TestCase):
                 no_plan_log = test_data_path('test_is_test_passed-no_tests_no_plan.log')
                 with open(no_plan_log) as file:
                         result = kunit_parser.parse_run_tests(
-                                kunit_parser.extract_tap_lines(file.readlines()))
+                                kunit_parser.extract_tap_lines(file.readlines()), stdout)
                 self.assertEqual(0, len(result.subtests[0].subtests[0].subtests))
                 self.assertEqual(
                         kunit_parser.TestStatus.NO_TESTS,
@@ -210,7 +211,7 @@ class KUnitParserTest(unittest.TestCase):
                 print_mock = mock.patch('kunit_printer.Printer.print').start()
                 with open(crash_log) as file:
                         result = kunit_parser.parse_run_tests(
-                                kunit_parser.extract_tap_lines(file.readlines()))
+                                kunit_parser.extract_tap_lines(file.readlines()), stdout)
                 print_mock.assert_any_call(StrContains('Could not find any KTAP output.'))
                 print_mock.stop()
                 self.assertEqual(0, len(result.subtests))
 
         def test_skipped_test(self):
                 skipped_log = test_data_path('test_skip_tests.log')
                 with open(skipped_log) as file:
-                        result = kunit_parser.parse_run_tests(file.readlines())
+                        result = kunit_parser.parse_run_tests(file.readlines(), stdout)
 
                 # A skipped test does not fail the whole suite.
                 self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
@@ -228,7 +229,7 @@ class KUnitParserTest(unittest.TestCase):
         def test_skipped_all_tests(self):
                 skipped_log = test_data_path('test_skip_all_tests.log')
                 with open(skipped_log) as file:
-                        result = kunit_parser.parse_run_tests(file.readlines())
+                        result = kunit_parser.parse_run_tests(file.readlines(), stdout)
                 self.assertEqual(kunit_parser.TestStatus.SKIPPED, result.status)
                 self.assertEqual(result.counts, kunit_parser.TestCounts(skipped=5))
 
@@ -236,7 +237,7 @@ class KUnitParserTest(unittest.TestCase):
         def test_ignores_hyphen(self):
                 hyphen_log = test_data_path('test_strip_hyphen.log')
                 with open(hyphen_log) as file:
-                        result = kunit_parser.parse_run_tests(file.readlines())
+                        result = kunit_parser.parse_run_tests(file.readlines(), stdout)
                 # A skipped test does not fail the whole suite.
                 self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
 
@@ -250,7 +251,7 @@ class KUnitParserTest(unittest.TestCase):
         def test_ignores_prefix_printk_time(self):
                 prefix_log = test_data_path('test_config_printk_time.log')
                 with open(prefix_log) as file:
-                        result = kunit_parser.parse_run_tests(file.readlines())
+                        result = kunit_parser.parse_run_tests(file.readlines(), stdout)
                         self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
                         self.assertEqual('kunit-resource-test', result.subtests[0].name)
                 self.assertEqual(result.counts.errors, 0)
@@ -258,7 +259,7 @@ class KUnitParserTest(unittest.TestCase):
         def test_ignores_multiple_prefixes(self):
                 prefix_log = test_data_path('test_multiple_prefixes.log')
                 with open(prefix_log) as file:
-                        result = kunit_parser.parse_run_tests(file.readlines())
+                        result = kunit_parser.parse_run_tests(file.readlines(), stdout)
                         self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
                         self.assertEqual('kunit-resource-test', result.subtests[0].name)
                 self.assertEqual(result.counts.errors, 0)
@@ -266,7 +267,7 @@ class KUnitParserTest(unittest.TestCase):
         def test_prefix_mixed_kernel_output(self):
                 mixed_prefix_log = test_data_path('test_interrupted_tap_output.log')
                 with open(mixed_prefix_log) as file:
-                        result = kunit_parser.parse_run_tests(file.readlines())
+                        result = kunit_parser.parse_run_tests(file.readlines(), stdout)
                         self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
                         self.assertEqual('kunit-resource-test', result.subtests[0].name)
                 self.assertEqual(result.counts.errors, 0)
@@ -274,7 +275,7 @@ class KUnitParserTest(unittest.TestCase):
         def test_prefix_poundsign(self):
                 pound_log = test_data_path('test_pound_sign.log')
                 with open(pound_log) as file:
-                        result = kunit_parser.parse_run_tests(file.readlines())
+                        result = kunit_parser.parse_run_tests(file.readlines(), stdout)
                         self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
                         self.assertEqual('kunit-resource-test', result.subtests[0].name)
                 self.assertEqual(result.counts.errors, 0)
@@ -282,7 +283,7 @@ class KUnitParserTest(unittest.TestCase):
         def test_kernel_panic_end(self):
                 panic_log = test_data_path('test_kernel_panic_interrupt.log')
                 with open(panic_log) as file:
-                        result = kunit_parser.parse_run_tests(file.readlines())
+                        result = kunit_parser.parse_run_tests(file.readlines(), stdout)
                         self.assertEqual(kunit_parser.TestStatus.TEST_CRASHED, result.status)
                         self.assertEqual('kunit-resource-test', result.subtests[0].name)
                 self.assertGreaterEqual(result.counts.errors, 1)
@@ -290,7 +291,7 @@ class KUnitParserTest(unittest.TestCase):
         def test_pound_no_prefix(self):
                 pound_log = test_data_path('test_pound_no_prefix.log')
                 with open(pound_log) as file:
-                        result = kunit_parser.parse_run_tests(file.readlines())
+                        result = kunit_parser.parse_run_tests(file.readlines(), stdout)
                         self.assertEqual(kunit_parser.TestStatus.SUCCESS, result.status)
                         self.assertEqual('kunit-resource-test', result.subtests[0].name)
                 self.assertEqual(result.counts.errors, 0)
@@ -310,7 +311,7 @@ class KUnitParserTest(unittest.TestCase):
                 not ok 2 - test2
                 not ok 1 - some_failed_suite
                 """
-                result = kunit_parser.parse_run_tests(output.splitlines())
+                result = kunit_parser.parse_run_tests(output.splitlines(), stdout)
                 self.assertEqual(kunit_parser.TestStatus.FAILURE, result.status)
 
                 self.assertEqual(kunit_parser._summarize_failed_tests(result),
@@ -319,7 +320,7 @@ class KUnitParserTest(unittest.TestCase):
         def test_ktap_format(self):
                 ktap_log = test_data_path('test_parse_ktap_output.log')
                 with open(ktap_log) as file:
-                        result = kunit_parser.parse_run_tests(file.readlines())
+                        result = kunit_parser.parse_run_tests(file.readlines(), stdout)
                 self.assertEqual(result.counts, kunit_parser.TestCounts(passed=3))
                 self.assertEqual('suite', result.subtests[0].name)
                 self.assertEqual('case_1', result.subtests[0].subtests[0].name)
@@ -328,13 +329,13 @@ class KUnitParserTest(unittest.TestCase):
         def test_parse_subtest_header(self):
                 ktap_log = test_data_path('test_parse_subtest_header.log')
                 with open(ktap_log) as file:
-                        kunit_parser.parse_run_tests(file.readlines())
+                        kunit_parser.parse_run_tests(file.readlines(), stdout)
                 self.print_mock.assert_any_call(StrContains('suite (1 subtest)'))
 
         def test_parse_attributes(self):
                 ktap_log = test_data_path('test_parse_attributes.log')
                 with open(ktap_log) as file:
-                        result = kunit_parser.parse_run_tests(file.readlines())
+                        result = kunit_parser.parse_run_tests(file.readlines(), stdout)
 
                 # Test should pass with no errors
                 self.assertEqual(result.counts, kunit_parser.TestCounts(passed=1, errors=0))
@@ -355,7 +356,7 @@ class KUnitParserTest(unittest.TestCase):
                 Indented more.
                 not ok 1 test1
                 """
-                result = kunit_parser.parse_run_tests(output.splitlines())
+                result = kunit_parser.parse_run_tests(output.splitlines(), stdout)
                 self.assertEqual(kunit_parser.TestStatus.FAILURE, result.status)
                 self.print_mock.assert_any_call(StrContains('Test output.'))
 
@@ -544,7 +545,7 @@ class KUnitJsonTest(unittest.TestCase):
 
         def _json_for(self, log_file):
                 with open(test_data_path(log_file)) as file:
-                        test_result = kunit_parser.parse_run_tests(file)
+                        test_result = kunit_parser.parse_run_tests(file, stdout)
                         json_obj = kunit_json.get_json_result(
                                 test=test_result,
                                 metadata=kunit_json.Metadata())
@@ -810,7 +811,7 @@ class KUnitMainTest(unittest.TestCase):
                 self.linux_source_mock.run_kernel.return_value = ['TAP version 14', 'init: random output'] + want
 
                 got = kunit._list_tests(self.linux_source_mock,
-                                     kunit.KunitExecRequest(None, None, '.kunit', 300, 'suite*', '', None, None, 'suite', False, False))
+                                     kunit.KunitExecRequest(None, None, False, False, '.kunit', 300, 'suite*', '', None, None, 'suite', False, False))
                 self.assertEqual(got, want)
                 # Should respect the user's filter glob when listing tests.
                 self.linux_source_mock.run_kernel.assert_called_once_with(
@@ -823,7 +824,7 @@ class KUnitMainTest(unittest.TestCase):
 
                 # Should respect the user's filter glob when listing tests.
                 mock_tests.assert_called_once_with(mock.ANY,
-                                     kunit.KunitExecRequest(None, None, '.kunit', 300, 'suite*.test*', '', None, None, 'suite', False, False))
+                                     kunit.KunitExecRequest(None, None, False, False, '.kunit', 300, 'suite*.test*', '', None, None, 'suite', False, False))
                 self.linux_source_mock.run_kernel.assert_has_calls([
                         mock.call(args=None, build_dir='.kunit', filter_glob='suite.test*', filter='', filter_action=None, timeout=300),
                         mock.call(args=None, build_dir='.kunit', filter_glob='suite2.test*', filter='', filter_action=None, timeout=300),
@@ -836,7 +837,7 @@ class KUnitMainTest(unittest.TestCase):
 
                 # Should respect the user's filter glob when listing tests.
                 mock_tests.assert_called_once_with(mock.ANY,
-                                     kunit.KunitExecRequest(None, None, '.kunit', 300, 'suite*', '', None, None, 'test', False, False))
+                                     kunit.KunitExecRequest(None, None, False, False, '.kunit', 300, 'suite*', '', None, None, 'test', False, False))
                 self.linux_source_mock.run_kernel.assert_has_calls([
                         mock.call(args=None, build_dir='.kunit', filter_glob='suite.test1', filter='', filter_action=None, timeout=300),
                         mock.call(args=None, build_dir='.kunit', filter_glob='suite.test2', filter='', filter_action=None, timeout=300),
diff --git a/tools/testing/kunit/qemu_configs/loongarch.py b/tools/testing/kunit/qemu_configs/loongarch.py
new file mode 100644
index 000000000000..a92422967d1d
--- /dev/null
+++ b/tools/testing/kunit/qemu_configs/loongarch.py
@@ -0,0 +1,21 @@
+# SPDX-License-Identifier: GPL-2.0
+
+from ..qemu_config import QemuArchParams
+
+QEMU_ARCH = QemuArchParams(linux_arch='loongarch',
+                           kconfig='''
+CONFIG_EFI_STUB=n
+CONFIG_PCI_HOST_GENERIC=y
+CONFIG_PVPANIC=y
+CONFIG_PVPANIC_PCI=y
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_OF_PLATFORM=y
+''',
+                           qemu_arch='loongarch64',
+                           kernel_path='arch/loongarch/boot/vmlinux.elf',
+                           kernel_command_line='console=ttyS0 kunit_shutdown=poweroff',
+                           extra_qemu_params=[
+                                   '-machine', 'virt',
+                                   '-device', 'pvpanic-pci',
+                                   '-cpu', 'max',])
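For anyone who wants to try the new LoongArch support, the config above
should be selectable through the tool's --arch flag, which matches the
qemu_configs file name. A LoongArch cross-compiler is assumed to be
installed; the toolchain prefix below is illustrative:

  ./tools/testing/kunit/kunit.py run --arch loongarch \
          --cross_compile loongarch64-linux-gnu-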