
kunit: tool: fix pre-existing python type annotation errors

Message ID 20201021220752.418832-1-dlatypov@google.com
State Accepted
Delegated to: Shuah Khan
Series kunit: tool: fix pre-existing python type annotation errors

Commit Message

Daniel Latypov Oct. 21, 2020, 10:07 p.m. UTC
The code uses annotations, but they aren't accurate.
Note that type checking in Python is a separate process; running
`kunit.py run` will not check or complain about invalid types at
runtime.

Fix pre-existing issues found by running a type checker:
$ mypy *.py

All but one of these were returning `None` without denoting this
properly (via `Optional[Type]`).

Signed-off-by: Daniel Latypov <dlatypov@google.com>
---
 tools/testing/kunit/kunit_parser.py | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)


base-commit: c4d6fe7311762f2e03b3c27ad38df7c40c80cc93
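The class of fix in this patch can be sketched with a minimal example. The function name mirrors one of the real parser functions, but the body is a simplified stand-in, not the actual kunit_parser.py code:

```python
from typing import List, Optional

# Simplified stand-in for the parser functions touched by this patch:
# any function that can fall through to `return None` must advertise
# that in its annotation via Optional[...], or mypy reports an error
# such as: error: Incompatible return value type (got "None", expected "int")
def parse_test_plan(lines: List[str]) -> Optional[int]:
    if lines and lines[0].startswith('1..'):
        return int(lines[0].split('..')[1])
    return None  # with a bare `-> int` annotation, mypy flags this line


print(parse_test_plan(['1..4']))  # 4
print(parse_test_plan([]))        # None
```

`Optional[int]` is shorthand for `Union[int, None]`; callers are then forced (by the checker, not the runtime) to handle the `None` case.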

Comments

David Gow Oct. 30, 2020, 2:56 a.m. UTC | #1
On Thu, Oct 22, 2020 at 6:08 AM Daniel Latypov <dlatypov@google.com> wrote:
>
> The code uses annotations, but they aren't accurate.
> Note that type checking in python is a separate process, running
> `kunit.py run` will not check and complain about invalid types at
> runtime.
>
> Fix pre-existing issues found by running a type checker
> $ mypy *.py
>
> All but one of these were returning `None` without denoting this
> properly (via `Optional[Type]`).
>
> Signed-off-by: Daniel Latypov <dlatypov@google.com>
> ---

I'm not going to pretend to really understand python annotations
completely, but this all seems correct from what I know of the code,
and I was able to install mypy and verify the issues were fixed.

Clearly, if we're going to have type annotations here, we should be
verifying the code against them. Is there a way we could get python
itself to verify this code when the script runs, rather than have to
use mypy as a tool to verify it separately? Otherwise, maybe we can
run it automatically from the kunit_tool_test.py unit tests or
something similar?

Regardless, this is

Reviewed-by: David Gow <davidgow@google.com>

Cheers,
-- David
Daniel Latypov Oct. 30, 2020, 5:24 a.m. UTC | #2
On Thu, Oct 29, 2020 at 7:56 PM David Gow <davidgow@google.com> wrote:
>
> On Thu, Oct 22, 2020 at 6:08 AM Daniel Latypov <dlatypov@google.com> wrote:
> >
> > The code uses annotations, but they aren't accurate.
> > Note that type checking in python is a separate process, running
> > `kunit.py run` will not check and complain about invalid types at
> > runtime.
> >
> > Fix pre-existing issues found by running a type checker
> > $ mypy *.py
> >
> > All but one of these were returning `None` without denoting this
> > properly (via `Optional[Type]`).
> >
> > Signed-off-by: Daniel Latypov <dlatypov@google.com>
> > ---
>
> I'm not going to pretend to really understand python annotations
> completely, but this all seems correct from what I know of the code,
> and I was able to install mypy and verify the issues were fixed.
>
> Clearly, if we're going to have type annotations here, we should be
> verifying the code against them. Is there a way we could get python
> itself to verify this code when the script runs, rather than have to
> use mypy as a tool to verify it separately? Otherwise, maybe we can

Type annotations are defined by PEP 484:
https://www.python.org/dev/peps/pep-0484/
Python itself has no support for type checking them, and the PEP calls
out (only) mypy by name as a type checker.
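The point that Python itself never enforces annotations can be shown in a few lines; annotations are stored as metadata on the function object and ignored when the function is called:

```python
def add(x: int, y: int) -> int:
    return x + y

# Annotations are just metadata on the function object; CPython does not
# check them at call time.
print(add.__annotations__)  # {'x': <class 'int'>, 'y': <class 'int'>, 'return': <class 'int'>}
print(add('kunit', '.py'))  # kunit.py -- wrong argument types, no runtime error
```

This is why a separate tool like mypy is needed to catch the kind of errors this patch fixes.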

I don't have a good answer for how we prevent them from bitrotting :/

> run it automatically from the kunit_tool_test.py unit tests or
> something similar?

I don't think it's possible to do so cleanly.

E.g. I don't know python that well, but here's my guess at what it'd
have to look like:
* assume mypy is installed
* dynamically load the module inside a `try` (so we don't break
users who don't have it)
* figure out which function is the entry point to mypy and call it on
"./kunit.py" somehow
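A rough sketch of that idea, assuming mypy is installed and using its documented programmatic entry point `mypy.api.run` (the `./kunit.py` path comes from the discussion above; this is a guess at the shape, not a tested integration with kunit_tool_test.py):

```python
import unittest

try:
    from mypy import api as mypy_api  # mypy's programmatic entry point
except ImportError:
    mypy_api = None  # don't break users who don't have mypy installed


class TypeAnnotationTest(unittest.TestCase):
    @unittest.skipIf(mypy_api is None, 'mypy is not installed')
    def test_mypy_is_clean(self):
        # api.run returns (normal_report, error_report, exit_status);
        # a nonzero exit status means mypy found type errors.
        report, errors, status = mypy_api.run(['./kunit.py'])
        self.assertEqual(status, 0, msg=report)
```

The `skipIf` keeps the suite green for developers without mypy, at the cost of silently not checking anything for them, which is the bitrot concern raised above.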

>
> Regardless, this is
>
> Reviewed-by: David Gow <davidgow@google.com>
>
> Cheers,
> -- David

Patch

diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index 8019e3dd4c32..d3bc0f197682 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -12,7 +12,7 @@  from collections import namedtuple
 from datetime import datetime
 from enum import Enum, auto
 from functools import reduce
-from typing import List
+from typing import List, Optional, Tuple
 
 TestResult = namedtuple('TestResult', ['status','suites','log'])
 
@@ -152,7 +152,7 @@  def parse_diagnostic(lines: List[str], test_case: TestCase) -> bool:
 	else:
 		return False
 
-def parse_test_case(lines: List[str]) -> TestCase:
+def parse_test_case(lines: List[str]) -> Optional[TestCase]:
 	test_case = TestCase()
 	save_non_diagnositic(lines, test_case)
 	while parse_diagnostic(lines, test_case):
@@ -164,7 +164,7 @@  def parse_test_case(lines: List[str]) -> TestCase:
 
 SUBTEST_HEADER = re.compile(r'^[\s]+# Subtest: (.*)$')
 
-def parse_subtest_header(lines: List[str]) -> str:
+def parse_subtest_header(lines: List[str]) -> Optional[str]:
 	consume_non_diagnositic(lines)
 	if not lines:
 		return None
@@ -177,7 +177,7 @@  def parse_subtest_header(lines: List[str]) -> str:
 
 SUBTEST_PLAN = re.compile(r'[\s]+[0-9]+\.\.([0-9]+)')
 
-def parse_subtest_plan(lines: List[str]) -> int:
+def parse_subtest_plan(lines: List[str]) -> Optional[int]:
 	consume_non_diagnositic(lines)
 	match = SUBTEST_PLAN.match(lines[0])
 	if match:
@@ -231,7 +231,7 @@  def bubble_up_test_case_errors(test_suite: TestSuite) -> TestStatus:
 	max_test_case_status = bubble_up_errors(lambda x: x.status, test_suite.cases)
 	return max_status(max_test_case_status, test_suite.status)
 
-def parse_test_suite(lines: List[str], expected_suite_index: int) -> TestSuite:
+def parse_test_suite(lines: List[str], expected_suite_index: int) -> Optional[TestSuite]:
 	if not lines:
 		return None
 	consume_non_diagnositic(lines)
@@ -272,7 +272,7 @@  def parse_tap_header(lines: List[str]) -> bool:
 
 TEST_PLAN = re.compile(r'[0-9]+\.\.([0-9]+)')
 
-def parse_test_plan(lines: List[str]) -> int:
+def parse_test_plan(lines: List[str]) -> Optional[int]:
 	consume_non_diagnositic(lines)
 	match = TEST_PLAN.match(lines[0])
 	if match:
@@ -311,7 +311,7 @@  def parse_test_result(lines: List[str]) -> TestResult:
 	else:
 		return TestResult(TestStatus.NO_TESTS, [], lines)
 
-def print_and_count_results(test_result: TestResult) -> None:
+def print_and_count_results(test_result: TestResult) -> Tuple[int, int, int]:
 	total_tests = 0
 	failed_tests = 0
 	crashed_tests = 0