
[v2] lib: Convert test_printf.c to KUnit

Message ID 20201022151349.47436-1-98.arpi@gmail.com (mailing list archive)
State New
Series [v2] lib: Convert test_printf.c to KUnit

Commit Message

Arpitha Raghunandan Oct. 22, 2020, 3:13 p.m. UTC
Convert test lib/test_printf.c to KUnit. More information about
KUnit can be found at:
https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html.
KUnit provides a common framework for unit tests in the kernel.
KUnit and kselftest are standardizing around KTAP; converting this
test to KUnit makes it emit KTAP, which we are trying to
make the standard test result format for the kernel. More about
the KTAP format can be found at:
https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD830@CY4PR13MB1175.namprd13.prod.outlook.com/.
I ran both the original and converted tests as-is to produce the
success output in the two cases. I also ran these
tests with a small modification to show the difference in the
failure output in both cases. The modification I made is:
- test("127.000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
+ test("127-000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);

Original test success:
[    0.591262] test_printf: loaded.
[    0.591409] test_printf: all 388 tests passed

Original test failure:
[    0.619345] test_printf: loaded.
[    0.619394] test_printf: vsnprintf(buf, 256, "%piS|%pIS", ...)
wrote '127.000.000.001|127.0.0.1', expected
'127-000.000.001|127.0.0.1'
[    0.619395] test_printf: vsnprintf(buf, 25, "%piS|%pIS", ...) wrote
'127.000.000.001|127.0.0.', expected '127-000.000.001|127.0.0.'
[    0.619396] test_printf: kvasprintf(..., "%piS|%pIS", ...) returned
'127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
[    0.619495] test_printf: failed 3 out of 388 tests

Converted test success:
    # Subtest: printf-kunit-test
    1..1
    ok 1 - selftest
ok 1 - printf-kunit-test

Converted test failure:
    # Subtest: printf-kunit-test
    1..1
    # selftest: EXPECTATION FAILED at lib/printf_kunit.c:82
vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote
'127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
    # selftest: EXPECTATION FAILED at lib/printf_kunit.c:82
vsnprintf(buf, 5, "%pi4|%pI4", ...) wrote '127.', expected '127-'
    # selftest: EXPECTATION FAILED at lib/printf_kunit.c:118
kvasprintf(..., "%pi4|%pI4", ...) returned
'127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
    not ok 1 - selftest
not ok 1 - printf-kunit-test

Signed-off-by: Arpitha Raghunandan <98.arpi@gmail.com>
---
Changes v1->v2:
- Retain the control flow (early returns) in do_test()
- Display less irrelevant information on test failure
- A more detailed commit message

 lib/Kconfig.debug                     |  7 +--
 lib/Makefile                          |  2 +-
 lib/{test_printf.c => printf_kunit.c} | 76 +++++++++++++--------------
 3 files changed, 43 insertions(+), 42 deletions(-)
 rename lib/{test_printf.c => printf_kunit.c} (91%)

Comments

Andy Shevchenko Oct. 22, 2020, 7:16 p.m. UTC | #1
On Thu, Oct 22, 2020 at 08:43:49PM +0530, Arpitha Raghunandan wrote:
> Convert test lib/test_printf.c to KUnit. More information about
> KUnit can be found at:
> https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html.
> KUnit provides a common framework for unit tests in the kernel.
> KUnit and kselftest are standardizing around KTAP, converting this
> test to KUnit makes this test output in KTAP which we are trying to
> make the standard test result format for the kernel. More about
> the KTAP format can be found at:
> https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD830@CY4PR13MB1175.namprd13.prod.outlook.com/.
> I ran both the original and converted tests as is to produce the
> output for success of the test in the two cases. I also ran these
> tests with a small modification to show the difference in the output
> for failure of the test in both cases. The modification I made is:
> - test("127.000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
> + test("127-000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
> 
> Original test success:
> [    0.591262] test_printf: loaded.
> [    0.591409] test_printf: all 388 tests passed
> 
> Original test failure:
> [    0.619345] test_printf: loaded.
> [    0.619394] test_printf: vsnprintf(buf, 256, "%piS|%pIS", ...)
> wrote '127.000.000.001|127.0.0.1', expected
> '127-000.000.001|127.0.0.1'
> [    0.619395] test_printf: vsnprintf(buf, 25, "%piS|%pIS", ...) wrote
> '127.000.000.001|127.0.0.', expected '127-000.000.001|127.0.0.'
> [    0.619396] test_printf: kvasprintf(..., "%piS|%pIS", ...) returned
> '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
> [    0.619495] test_printf: failed 3 out of 388 tests
> 
> Converted test success:
>     # Subtest: printf-kunit-test
>     1..1
>     ok 1 - selftest
> ok 1 - printf-kunit-test
> 
> Converted test failure:
>     # Subtest: printf-kunit-test
>     1..1
>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:82
> vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote
> '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:82
> vsnprintf(buf, 5, "%pi4|%pI4", ...) wrote '127.', expected '127-'
>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:118
> kvasprintf(..., "%pi4|%pI4", ...) returned
> '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
>     not ok 1 - selftest
> not ok 1 - printf-kunit-test

Not bad. Rasmus, what do you think?

> Signed-off-by: Arpitha Raghunandan <98.arpi@gmail.com>
> ---
> Changes v1->v2:
> - Retain the control flow (early returns) in do_test()
> - Display less irrelevant information on test failure
> - A more detailed commit message
> 
>  lib/Kconfig.debug                     |  7 +--
>  lib/Makefile                          |  2 +-
>  lib/{test_printf.c => printf_kunit.c} | 76 +++++++++++++--------------
>  3 files changed, 43 insertions(+), 42 deletions(-)
>  rename lib/{test_printf.c => printf_kunit.c} (91%)

I would base this on top of my series that renames the existing KUnit-based
tests in lib/.

> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index 66d44d35cc97..e82ca893ed5b 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -2053,9 +2053,6 @@ config TEST_STRSCPY
>  config TEST_KSTRTOX
>  	tristate "Test kstrto*() family of functions at runtime"
>  
> -config TEST_PRINTF
> -	tristate "Test printf() family of functions at runtime"
> -
>  config TEST_BITMAP
>  	tristate "Test bitmap_*() family of functions at runtime"
>  	help
> @@ -2282,6 +2279,10 @@ config BITS_TEST
>  
>  	  If unsure, say N.
>  
> +config PRINTF_KUNIT_TEST
> +	tristate "KUnit test for printf() family of functions at runtime"
> +	depends on KUNIT
> +
>  config TEST_UDELAY
>  	tristate "udelay test driver"
>  	help
> diff --git a/lib/Makefile b/lib/Makefile
> index ce45af50983a..c323447022b7 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -84,7 +84,6 @@ obj-$(CONFIG_TEST_SORT) += test_sort.o
>  obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
>  obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_keys.o
>  obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_key_base.o
> -obj-$(CONFIG_TEST_PRINTF) += test_printf.o
>  obj-$(CONFIG_TEST_BITMAP) += test_bitmap.o
>  obj-$(CONFIG_TEST_STRSCPY) += test_strscpy.o
>  obj-$(CONFIG_TEST_UUID) += test_uuid.o
> @@ -352,3 +351,4 @@ obj-$(CONFIG_BITFIELD_KUNIT) += bitfield_kunit.o
>  obj-$(CONFIG_LIST_KUNIT_TEST) += list-test.o
>  obj-$(CONFIG_LINEAR_RANGES_TEST) += test_linear_ranges.o
>  obj-$(CONFIG_BITS_TEST) += test_bits.o
> +obj-$(CONFIG_PRINTF_KUNIT_TEST) += printf_kunit.o
> diff --git a/lib/test_printf.c b/lib/printf_kunit.c
> similarity index 91%
> rename from lib/test_printf.c
> rename to lib/printf_kunit.c
> index 7ac87f18a10f..e4dc1b1c8972 100644
> --- a/lib/test_printf.c
> +++ b/lib/printf_kunit.c
> @@ -5,6 +5,7 @@
>  
>  #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>  
> +#include <kunit/test.h>
>  #include <linux/init.h>
>  #include <linux/kernel.h>
>  #include <linux/module.h>
> @@ -30,64 +31,57 @@
>  #define PAD_SIZE 16
>  #define FILL_CHAR '$'
>  
> -static unsigned total_tests __initdata;
> -static unsigned failed_tests __initdata;
>  static char *test_buffer __initdata;
>  static char *alloced_buffer __initdata;
> +struct kunit *kunittest;
>  
> -static int __printf(4, 0) __init
> +static void __printf(4, 0) __init
>  do_test(int bufsize, const char *expect, int elen,
>  	const char *fmt, va_list ap)
>  {
>  	va_list aq;
>  	int ret, written;
>  
> -	total_tests++;
> -
>  	memset(alloced_buffer, FILL_CHAR, BUF_SIZE + 2*PAD_SIZE);
>  	va_copy(aq, ap);
>  	ret = vsnprintf(test_buffer, bufsize, fmt, aq);
>  	va_end(aq);
>  
>  	if (ret != elen) {
> -		pr_warn("vsnprintf(buf, %d, \"%s\", ...) returned %d, expected %d\n",
> +		KUNIT_FAIL(kunittest, "vsnprintf(buf, %d, \"%s\", ...) returned %d, expected %d\n",
>  			bufsize, fmt, ret, elen);
> -		return 1;
> +		return;
>  	}
>  
>  	if (memchr_inv(alloced_buffer, FILL_CHAR, PAD_SIZE)) {
> -		pr_warn("vsnprintf(buf, %d, \"%s\", ...) wrote before buffer\n", bufsize, fmt);
> -		return 1;
> +		KUNIT_FAIL(kunittest, "vsnprintf(buf, %d, \"%s\", ...) wrote before buffer\n", bufsize, fmt);
> +		return;
>  	}
>  
>  	if (!bufsize) {
> -		if (memchr_inv(test_buffer, FILL_CHAR, BUF_SIZE + PAD_SIZE)) {
> -			pr_warn("vsnprintf(buf, 0, \"%s\", ...) wrote to buffer\n",
> -				fmt);
> -			return 1;
> -		}
> -		return 0;
> +		if (memchr_inv(test_buffer, FILL_CHAR, BUF_SIZE + PAD_SIZE))
> +			KUNIT_FAIL(kunittest, "vsnprintf(buf, 0, \"%s\", ...) wrote to buffer\n", fmt);
> +		return;
>  	}
>  
>  	written = min(bufsize-1, elen);
>  	if (test_buffer[written]) {
> -		pr_warn("vsnprintf(buf, %d, \"%s\", ...) did not nul-terminate buffer\n",
> +		KUNIT_FAIL(kunittest, "vsnprintf(buf, %d, \"%s\", ...) did not nul-terminate buffer\n",
>  			bufsize, fmt);
> -		return 1;
> +		return;
>  	}
>  
>  	if (memchr_inv(test_buffer + written + 1, FILL_CHAR, BUF_SIZE + PAD_SIZE - (written + 1))) {
> -		pr_warn("vsnprintf(buf, %d, \"%s\", ...) wrote beyond the nul-terminator\n",
> +		KUNIT_FAIL(kunittest, "vsnprintf(buf, %d, \"%s\", ...) wrote beyond the nul-terminator\n",
>  			bufsize, fmt);
> -		return 1;
> +		return;
>  	}
>  
>  	if (memcmp(test_buffer, expect, written)) {
> -		pr_warn("vsnprintf(buf, %d, \"%s\", ...) wrote '%s', expected '%.*s'\n",
> +		KUNIT_FAIL(kunittest, "vsnprintf(buf, %d, \"%s\", ...) wrote '%s', expected '%.*s'\n",
>  			bufsize, fmt, test_buffer, written, expect);
> -		return 1;
> +		return;
>  	}
> -	return 0;
>  }
>  
>  static void __printf(3, 4) __init
> @@ -98,9 +92,8 @@ __test(const char *expect, int elen, const char *fmt, ...)
>  	char *p;
>  
>  	if (elen >= BUF_SIZE) {
> -		pr_err("error in test suite: expected output length %d too long. Format was '%s'.\n",
> +		KUNIT_FAIL(kunittest, "error in test suite: expected output length %d too long. Format was '%s'.\n",
>  		       elen, fmt);
> -		failed_tests++;
>  		return;
>  	}
>  
> @@ -112,19 +105,17 @@ __test(const char *expect, int elen, const char *fmt, ...)
>  	 * enough and 0), and then we also test that kvasprintf would
>  	 * be able to print it as expected.
>  	 */
> -	failed_tests += do_test(BUF_SIZE, expect, elen, fmt, ap);
> +	do_test(BUF_SIZE, expect, elen, fmt, ap);
>  	rand = 1 + prandom_u32_max(elen+1);
>  	/* Since elen < BUF_SIZE, we have 1 <= rand <= BUF_SIZE. */
> -	failed_tests += do_test(rand, expect, elen, fmt, ap);
> -	failed_tests += do_test(0, expect, elen, fmt, ap);
> +	do_test(rand, expect, elen, fmt, ap);
> +	do_test(0, expect, elen, fmt, ap);
>  
>  	p = kvasprintf(GFP_KERNEL, fmt, ap);
>  	if (p) {
> -		total_tests++;
>  		if (memcmp(p, expect, elen+1)) {
> -			pr_warn("kvasprintf(..., \"%s\", ...) returned '%s', expected '%s'\n",
> +			KUNIT_FAIL(kunittest, "kvasprintf(..., \"%s\", ...) returned '%s', expected '%s'\n",
>  				fmt, p, expect);
> -			failed_tests++;
>  		}
>  		kfree(p);
>  	}
> @@ -303,16 +294,13 @@ plain(void)
>  
>  	err = plain_hash();
>  	if (err) {
> -		pr_warn("plain 'p' does not appear to be hashed\n");
> -		failed_tests++;
> +		KUNIT_FAIL(kunittest, "plain 'p' does not appear to be hashed\n");
>  		return;
>  	}
>  
>  	err = plain_format();
> -	if (err) {
> -		pr_warn("hashing plain 'p' has unexpected format\n");
> -		failed_tests++;
> -	}
> +	if (err)
> +		KUNIT_FAIL(kunittest, "hashing plain 'p' has unexpected format\n");
>  }
>  
>  static void __init
> @@ -696,8 +684,9 @@ test_pointer(void)
>  	fwnode_pointer();
>  }
>  
> -static void __init selftest(void)
> +static void __init selftest(struct kunit *ktest)
>  {
> +	kunittest = ktest;
>  	alloced_buffer = kmalloc(BUF_SIZE + 2*PAD_SIZE, GFP_KERNEL);
>  	if (!alloced_buffer)
>  		return;
> @@ -711,6 +700,17 @@ static void __init selftest(void)
>  	kfree(alloced_buffer);
>  }
>  
> -KSTM_MODULE_LOADERS(test_printf);
> +static struct kunit_case printf_test_cases[] = {
> +	KUNIT_CASE(selftest),
> +	{}
> +};
> +
> +static struct kunit_suite printf_test_suite = {
> +	.name = "printf-kunit-test",
> +	.test_cases = printf_test_cases,
> +};
> +
> +kunit_test_suite(printf_test_suite);
> +
>  MODULE_AUTHOR("Rasmus Villemoes <linux@rasmusvillemoes.dk>");
>  MODULE_LICENSE("GPL");
> -- 
> 2.25.1
>
Rasmus Villemoes Oct. 23, 2020, 11:06 a.m. UTC | #2
On 22/10/2020 21.16, Andy Shevchenko wrote:
> On Thu, Oct 22, 2020 at 08:43:49PM +0530, Arpitha Raghunandan wrote:
>> Convert test lib/test_printf.c to KUnit. More information about
>> KUnit can be found at:
>> https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html.
>> KUnit provides a common framework for unit tests in the kernel.
>> KUnit and kselftest are standardizing around KTAP, converting this
>> test to KUnit makes this test output in KTAP which we are trying to
>> make the standard test result format for the kernel. More about
>> the KTAP format can be found at:
>> https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD830@CY4PR13MB1175.namprd13.prod.outlook.com/.
>> I ran both the original and converted tests as is to produce the
>> output for success of the test in the two cases. I also ran these
>> tests with a small modification to show the difference in the output
>> for failure of the test in both cases. The modification I made is:
>> - test("127.000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
>> + test("127-000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
>>
>> Original test success:
>> [    0.591262] test_printf: loaded.
>> [    0.591409] test_printf: all 388 tests passed
>>
>> Original test failure:
>> [    0.619345] test_printf: loaded.
>> [    0.619394] test_printf: vsnprintf(buf, 256, "%piS|%pIS", ...)
>> wrote '127.000.000.001|127.0.0.1', expected
>> '127-000.000.001|127.0.0.1'
>> [    0.619395] test_printf: vsnprintf(buf, 25, "%piS|%pIS", ...) wrote
>> '127.000.000.001|127.0.0.', expected '127-000.000.001|127.0.0.'
>> [    0.619396] test_printf: kvasprintf(..., "%piS|%pIS", ...) returned
>> '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
>> [    0.619495] test_printf: failed 3 out of 388 tests
>>
>> Converted test success:
>>     # Subtest: printf-kunit-test
>>     1..1
>>     ok 1 - selftest
>> ok 1 - printf-kunit-test
>>
>> Converted test failure:
>>     # Subtest: printf-kunit-test
>>     1..1
>>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:82
>> vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote
>> '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
>>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:82
>> vsnprintf(buf, 5, "%pi4|%pI4", ...) wrote '127.', expected '127-'
>>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:118
>> kvasprintf(..., "%pi4|%pI4", ...) returned
>> '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
>>     not ok 1 - selftest
>> not ok 1 - printf-kunit-test
> 
> Not bad. Rasmus, what do you think?

Much better, but that '1..1' and reporting the entire test suite as 1
single (failing or passing) test is (also) a regression. Look at the
original

>> [    0.591409] test_printf: all 388 tests passed

or

>> [    0.619495] test_printf: failed 3 out of 388 tests

That's far more informative, and I'd prefer if the summary information
(whether in the all-good case or some-failing) included something like
this. In particular, I have at some point spotted that I failed to
properly hook up a new test case (or perhaps failed to re-compile, or
somehow still ran the old kernel binary, don't remember which it was) by
noticing that the total number of tests hadn't increased. The new output
would not help catch such PEBKACs.

Rasmus
Arpitha Raghunandan Oct. 23, 2020, 1:43 p.m. UTC | #3
On 23/10/20 4:36 pm, Rasmus Villemoes wrote:
> On 22/10/2020 21.16, Andy Shevchenko wrote:
>> On Thu, Oct 22, 2020 at 08:43:49PM +0530, Arpitha Raghunandan wrote:
>>> Convert test lib/test_printf.c to KUnit. More information about
>>> KUnit can be found at:
>>> https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html.
>>> KUnit provides a common framework for unit tests in the kernel.
>>> KUnit and kselftest are standardizing around KTAP, converting this
>>> test to KUnit makes this test output in KTAP which we are trying to
>>> make the standard test result format for the kernel. More about
>>> the KTAP format can be found at:
>>> https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD830@CY4PR13MB1175.namprd13.prod.outlook.com/.
>>> I ran both the original and converted tests as is to produce the
>>> output for success of the test in the two cases. I also ran these
>>> tests with a small modification to show the difference in the output
>>> for failure of the test in both cases. The modification I made is:
>>> - test("127.000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
>>> + test("127-000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
>>>
>>> Original test success:
>>> [    0.591262] test_printf: loaded.
>>> [    0.591409] test_printf: all 388 tests passed
>>>
>>> Original test failure:
>>> [    0.619345] test_printf: loaded.
>>> [    0.619394] test_printf: vsnprintf(buf, 256, "%piS|%pIS", ...)
>>> wrote '127.000.000.001|127.0.0.1', expected
>>> '127-000.000.001|127.0.0.1'
>>> [    0.619395] test_printf: vsnprintf(buf, 25, "%piS|%pIS", ...) wrote
>>> '127.000.000.001|127.0.0.', expected '127-000.000.001|127.0.0.'
>>> [    0.619396] test_printf: kvasprintf(..., "%piS|%pIS", ...) returned
>>> '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
>>> [    0.619495] test_printf: failed 3 out of 388 tests
>>>
>>> Converted test success:
>>>     # Subtest: printf-kunit-test
>>>     1..1
>>>     ok 1 - selftest
>>> ok 1 - printf-kunit-test
>>>
>>> Converted test failure:
>>>     # Subtest: printf-kunit-test
>>>     1..1
>>>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:82
>>> vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote
>>> '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
>>>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:82
>>> vsnprintf(buf, 5, "%pi4|%pI4", ...) wrote '127.', expected '127-'
>>>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:118
>>> kvasprintf(..., "%pi4|%pI4", ...) returned
>>> '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
>>>     not ok 1 - selftest
>>> not ok 1 - printf-kunit-test
>>
>> Not bad. Rasmus, what do you think?
> 
> Much better, but that '1..1' and reporting the entire test suite as 1
> single (failing or passing) test is (also) a regression. Look at the
> original
> 
>>> [    0.591409] test_printf: all 388 tests passed
> 
> or
> 
>>> [    0.619495] test_printf: failed 3 out of 388 tests
> 
> That's far more informative, and I'd prefer if the summary information
> (whether in the all-good case or some-failing) included something like
> this. In particular, I have at some point spotted that I failed to
> properly hook up a new test case (or perhaps failed to re-compile, or
> somehow still ran the old kernel binary, don't remember which it was) by
> noticing that the total number of tests hadn't increased. The new output
> would not help catch such PEBKACs.
> 
> Rasmus
> 

Splitting the test into multiple test cases in KUnit will display the number and name of tests that pass or fail. This will be similar to the lib/list-test.c test as can be seen here: https://elixir.bootlin.com/linux/latest/source/lib/list-test.c. I will work on this for the next version of this patch.
Petr Mladek Oct. 23, 2020, 5:31 p.m. UTC | #4
On Thu 2020-10-22 20:43:49, Arpitha Raghunandan wrote:
> Convert test lib/test_printf.c to KUnit. More information about

> Converted test success:
>     # Subtest: printf-kunit-test
>     1..1
>     ok 1 - selftest
> ok 1 - printf-kunit-test
> 
> Converted test failure:
>     # Subtest: printf-kunit-test
>     1..1
>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:82
> vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote
> '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:82
> vsnprintf(buf, 5, "%pi4|%pI4", ...) wrote '127.', expected '127-'
>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:118
> kvasprintf(..., "%pi4|%pI4", ...) returned
> '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
>     not ok 1 - selftest

I agree with others that there should be more KUNIT_CASEs.

> not ok 1 - printf-kunit-test

> --- a/lib/test_printf.c
> +++ b/lib/printf_kunit.c

There is no standard at the moment. I see struct kunit_suite defined,
for example, in the following files:

*test*.c:

      drivers/base/power/qos-test.c:
      drivers/base/test/property-entry-test.c:
      drivers/thunderbolt/test.c:
      fs/ext4/inode-test.c:
      kernel/kcsan/kcsan-test.c:
      kernel/sysctl-test.c:
      lib/kunit/string-stream-test.c:
      lib/list-test.c:
      lib/test_bits.c:
      lib/test_kasan.c:
      lib/test_linear_ranges.c:
      net/mptcp/crypto_test.c:
      net/mptcp/token_test.c:
      security/apparmor/policy_unpack_test.c:

kunit-*-test.c:

       lib/kunit/kunit-example-test.c:
       lib/kunit/kunit-test.c:

*_kunit.c

      lib/bitfield_kunit.c:

Please, either unify names of all the above modules or keep test_printf.c



> @@ -5,6 +5,7 @@
>  
>  #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>  
> +#include <kunit/test.h>
>  #include <linux/init.h>
>  #include <linux/kernel.h>
>  #include <linux/module.h>
> @@ -30,64 +31,57 @@
>  #define PAD_SIZE 16
>  #define FILL_CHAR '$'
>  
> -static unsigned total_tests __initdata;
> -static unsigned failed_tests __initdata;
>  static char *test_buffer __initdata;
>  static char *alloced_buffer __initdata;
> +struct kunit *kunittest;

This should be static variable.

>  
> -static int __printf(4, 0) __init
> +static void __printf(4, 0) __init
>  do_test(int bufsize, const char *expect, int elen,
>  	const char *fmt, va_list ap)
>  {
>  	va_list aq;
>  	int ret, written;
>  

> @@ -696,8 +684,9 @@ test_pointer(void)
>  	fwnode_pointer();
>  }
>  
> -static void __init selftest(void)
> +static void __init selftest(struct kunit *ktest)

>  {
> +	kunittest = ktest;
>  	alloced_buffer = kmalloc(BUF_SIZE + 2*PAD_SIZE, GFP_KERNEL);

The allocation and freeing should be done by the init/exit
callbacks in struct kunit_suite. For example, see
lib/kunit/kunit-test.c

This function can then be removed. The particular tests will
be called via more KUNIT_CASE() entries.

>  	if (!alloced_buffer)
>  		return;
> @@ -711,6 +700,17 @@ static void __init selftest(void)
>  	kfree(alloced_buffer);
>  }
>  
> -KSTM_MODULE_LOADERS(test_printf);
> +static struct kunit_case printf_test_cases[] = {
> +	KUNIT_CASE(selftest),
> +	{}
> +};
> +
> +static struct kunit_suite printf_test_suite = {
> +	.name = "printf-kunit-test",

Please, use:

	.name = "printf"

The fact that it is kunit-test should be clear from the context.

> +	.test_cases = printf_test_cases,
> +};
> +
> +kunit_test_suite(printf_test_suite);
> +

Best Regards,
Petr
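
[Editor's note: a minimal sketch of Petr's suggestion, assuming the KUnit API of this era (a suite-level `.init` callback in struct kunit_suite, and kunit_kmalloc() for test-managed allocations). This is illustrative only, not the actual follow-up patch:]

```c
#include <kunit/test.h>

/* Hypothetical sketch: a per-test init callback owns the buffer setup,
 * so selftest() no longer needs to allocate or free anything. */
static int printf_test_init(struct kunit *test)
{
	/* kunit_kmalloc() ties the allocation to the test's lifetime;
	 * KUnit frees it automatically, so no .exit callback is needed. */
	alloced_buffer = kunit_kmalloc(test, BUF_SIZE + 2 * PAD_SIZE, GFP_KERNEL);
	if (!alloced_buffer)
		return -ENOMEM;
	test_buffer = alloced_buffer + PAD_SIZE;
	return 0;
}

static struct kunit_suite printf_test_suite = {
	.name = "printf",
	.init = printf_test_init,
	.test_cases = printf_test_cases,
};
```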
Petr Mladek Oct. 23, 2020, 6:01 p.m. UTC | #5
On Fri 2020-10-23 19:13:00, Arpitha Raghunandan wrote:
> On 23/10/20 4:36 pm, Rasmus Villemoes wrote:
> > On 22/10/2020 21.16, Andy Shevchenko wrote:
> >> On Thu, Oct 22, 2020 at 08:43:49PM +0530, Arpitha Raghunandan wrote:
> >>> Converted test failure:
> >>>     # Subtest: printf-kunit-test
> >>>     1..1
> >>>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:82
> >>> vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote
> >>> '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
> >>>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:82
> >>> vsnprintf(buf, 5, "%pi4|%pI4", ...) wrote '127.', expected '127-'
> >>>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:118
> >>> kvasprintf(..., "%pi4|%pI4", ...) returned
> >>> '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
> >>>     not ok 1 - selftest
> >>> not ok 1 - printf-kunit-test
> >>
> >> Not bad. Rasmus, what do you think?
> > 
> > Much better, but that '1..1' and reporting the entire test suite as 1
> > single (failing or passing) test is (also) a regression. Look at the
> > original
> > 
> >>> [    0.591409] test_printf: all 388 tests passed
> > 
> > or
> > 
> >>> [    0.619495] test_printf: failed 3 out of 388 tests
> > 
> > That's far more informative, and I'd prefer if the summary information
> > (whether in the all-good case or some-failing) included something like
> > this. In particular, I have at some point spotted that I failed to
> > properly hook up a new test case (or perhaps failed to re-compile, or
> > somehow still ran the old kernel binary, don't remember which it was) by
> > noticing that the total number of tests hadn't increased. The new output
> > would not help catch such PEBKACs.
> > 
> > Rasmus
> > 
> 
> Splitting the test into multiple test cases in KUnit will display
> the number and name of tests that pass or fail. This will be similar
> to the lib/list-test.c test as can be seen here:
> https://elixir.bootlin.com/linux/latest/source/lib/list-test.c.
> I will work on this for the next version of this patch.

We should probably agree on the granularity first.

It looks like overkill to split the tests into 388 functions
and define 388 KUNIT_CASE() lines. It might be possible to hide
this into macros but macros are hell for debugging.

I suggest splitting it by the current functions that make multiple
test() calls inside. I mean defining something like:

static struct kunit_case printf_test_cases[] = {
	KUNIT_CASE(basic),
	KUNIT_CASE(number),
	KUNIT_CASE(string),
	KUNIT_CASE(plain_pointer),
	KUNIT_CASE(null_pointer),
	KUNIT_CASE(error_pointer),
	KUNIT_CASE(addr),
	KUNIT_CASE(struct_resource),
	KUNIT_CASE(dentry),
	KUNIT_CASE(pointer_addr),
	 ...,
	{}
};

Best Regards,
Petr
Arpitha Raghunandan Oct. 24, 2020, 5:08 a.m. UTC | #6
On 23/10/20 11:31 pm, Petr Mladek wrote:
> On Fri 2020-10-23 19:13:00, Arpitha Raghunandan wrote:
>> On 23/10/20 4:36 pm, Rasmus Villemoes wrote:
>>> On 22/10/2020 21.16, Andy Shevchenko wrote:
>>>> On Thu, Oct 22, 2020 at 08:43:49PM +0530, Arpitha Raghunandan wrote:
>>>>> Converted test failure:
>>>>>     # Subtest: printf-kunit-test
>>>>>     1..1
>>>>>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:82
>>>>> vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote
>>>>> '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
>>>>>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:82
>>>>> vsnprintf(buf, 5, "%pi4|%pI4", ...) wrote '127.', expected '127-'
>>>>>     # selftest: EXPECTATION FAILED at lib/printf_kunit.c:118
>>>>> kvasprintf(..., "%pi4|%pI4", ...) returned
>>>>> '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
>>>>>     not ok 1 - selftest
>>>>> not ok 1 - printf-kunit-test
>>>>
>>>> Not bad. Rasmus, what do you think?
>>>
>>> Much better, but that '1..1' and reporting the entire test suite as 1
>>> single (failing or passing) test is (also) a regression. Look at the
>>> original
>>>
>>>>> [    0.591409] test_printf: all 388 tests passed
>>>
>>> or
>>>
>>>>> [    0.619495] test_printf: failed 3 out of 388 tests
>>>
>>> That's far more informative, and I'd prefer if the summary information
>>> (whether in the all-good case or some-failing) included something like
>>> this. In particular, I have at some point spotted that I failed to
>>> properly hook up a new test case (or perhaps failed to re-compile, or
>>> somehow still ran the old kernel binary, don't remember which it was) by
>>> noticing that the total number of tests hadn't increased. The new output
>>> would not help catch such PEBKACs.
>>>
>>> Rasmus
>>>
>>
>> Splitting the test into multiple test cases in KUnit will display
>> the number and name of tests that pass or fail. This will be similar
>> to the lib/list-test.c test as can be seen here:
>> https://elixir.bootlin.com/linux/latest/source/lib/list-test.c.
>> I will work on this for the next version of this patch.
> 
> We should probably agree on the granularity first.
> 
> It looks like an overkill to split the tests into 388 functions
> and define KUNIT_CASE() lines. It might be possible to hide
> this into macros but macros are hell for debugging.
> 
> I suggest to split it by the current functions that do more test()
> call inside. I mean to define something like:
> 
> static struct kunit_case printf_test_cases[] = {
> 	KUNIT_CASE(basic),
> 	KUNIT_CASE(number),
> 	KUNIT_CASE(string),
> 	KUNIT_CASE(plain_pointer),
> 	KUNIT_CASE(null_pointer),
> 	KUNIT_CASE(error_pointer),
> 	KUNIT_CASE(addr),
> 	KUNIT_CASE(struct_resource),
> 	KUNIT_CASE(dentry),
> 	KUNIT_CASE(pointer_addr),
> 	 ...,
> 	{}
> };
> 
> Best Regards,
> Petr
> 

Okay, I will split it by the current functions that make multiple test() calls inside, as suggested.
I will also make changes as per your other suggestions for the next version.
Thanks!
Andy Shevchenko Oct. 25, 2020, 12:38 p.m. UTC | #7
On Sat, Oct 24, 2020 at 2:07 AM Petr Mladek <pmladek@suse.com> wrote:
> On Thu 2020-10-22 20:43:49, Arpitha Raghunandan wrote:
> > Convert test lib/test_printf.c to KUnit. More information about

...

> > not ok 1 - printf-kunit-test
>
> > --- a/lib/test_printf.c
> > +++ b/lib/printf_kunit.c
>
> There is no standard at the moment.

JFYI: from v5.10-rc1, documentation clarifying the naming scheme is
expected to be included. There is also a pending series [1] to move
KUnit-based test cases to the defined scheme.

> Please, either unify names of all the above modules or keep test_printf.c

[1]: https://lore.kernel.org/linux-kselftest/20201016110836.52613-1-andriy.shevchenko@linux.intel.com/
Petr Mladek Oct. 26, 2020, 9:48 a.m. UTC | #8
On Sun 2020-10-25 14:38:13, Andy Shevchenko wrote:
> On Sat, Oct 24, 2020 at 2:07 AM Petr Mladek <pmladek@suse.com> wrote:
> > On Thu 2020-10-22 20:43:49, Arpitha Raghunandan wrote:
> > > Convert test lib/test_printf.c to KUnit. More information about
> 
> ...
> 
> > > not ok 1 - printf-kunit-test
> >
> > > --- a/lib/test_printf.c
> > > +++ b/lib/printf_kunit.c
> >
> > There is no standard at the moment.
> 
> JFYI: from v5.10-rc1 it is expected to have documentation clarifying
> the naming scheme. Also there is a pending series [1] to move KUnit
> based test cases to the defined schema.
>
> > Please, either unify names of all the above modules or keep test_printf.c
> 
> [1]: https://lore.kernel.org/linux-kselftest/20201016110836.52613-1-andriy.shevchenko@linux.intel.com/

Great, thanks for the pointer. It seems that this patch actually
follows the proposed naming scheme. I am fine with it then.

Best Regards,
Petr

Patch

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 66d44d35cc97..e82ca893ed5b 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2053,9 +2053,6 @@  config TEST_STRSCPY
 config TEST_KSTRTOX
 	tristate "Test kstrto*() family of functions at runtime"
 
-config TEST_PRINTF
-	tristate "Test printf() family of functions at runtime"
-
 config TEST_BITMAP
 	tristate "Test bitmap_*() family of functions at runtime"
 	help
@@ -2282,6 +2279,10 @@  config BITS_TEST
 
 	  If unsure, say N.
 
+config PRINTF_KUNIT_TEST
+	tristate "KUnit test for printf() family of functions at runtime"
+	depends on KUNIT
+
 config TEST_UDELAY
 	tristate "udelay test driver"
 	help
diff --git a/lib/Makefile b/lib/Makefile
index ce45af50983a..c323447022b7 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -84,7 +84,6 @@  obj-$(CONFIG_TEST_SORT) += test_sort.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_keys.o
 obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_key_base.o
-obj-$(CONFIG_TEST_PRINTF) += test_printf.o
 obj-$(CONFIG_TEST_BITMAP) += test_bitmap.o
 obj-$(CONFIG_TEST_STRSCPY) += test_strscpy.o
 obj-$(CONFIG_TEST_UUID) += test_uuid.o
@@ -352,3 +351,4 @@  obj-$(CONFIG_BITFIELD_KUNIT) += bitfield_kunit.o
 obj-$(CONFIG_LIST_KUNIT_TEST) += list-test.o
 obj-$(CONFIG_LINEAR_RANGES_TEST) += test_linear_ranges.o
 obj-$(CONFIG_BITS_TEST) += test_bits.o
+obj-$(CONFIG_PRINTF_KUNIT_TEST) += printf_kunit.o
diff --git a/lib/test_printf.c b/lib/printf_kunit.c
similarity index 91%
rename from lib/test_printf.c
rename to lib/printf_kunit.c
index 7ac87f18a10f..e4dc1b1c8972 100644
--- a/lib/test_printf.c
+++ b/lib/printf_kunit.c
@@ -5,6 +5,7 @@ 
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
+#include <kunit/test.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
@@ -30,64 +31,57 @@ 
 #define PAD_SIZE 16
 #define FILL_CHAR '$'
 
-static unsigned total_tests __initdata;
-static unsigned failed_tests __initdata;
 static char *test_buffer __initdata;
 static char *alloced_buffer __initdata;
+struct kunit *kunittest;
 
-static int __printf(4, 0) __init
+static void __printf(4, 0) __init
 do_test(int bufsize, const char *expect, int elen,
 	const char *fmt, va_list ap)
 {
 	va_list aq;
 	int ret, written;
 
-	total_tests++;
-
 	memset(alloced_buffer, FILL_CHAR, BUF_SIZE + 2*PAD_SIZE);
 	va_copy(aq, ap);
 	ret = vsnprintf(test_buffer, bufsize, fmt, aq);
 	va_end(aq);
 
 	if (ret != elen) {
-		pr_warn("vsnprintf(buf, %d, \"%s\", ...) returned %d, expected %d\n",
+		KUNIT_FAIL(kunittest, "vsnprintf(buf, %d, \"%s\", ...) returned %d, expected %d\n",
 			bufsize, fmt, ret, elen);
-		return 1;
+		return;
 	}
 
 	if (memchr_inv(alloced_buffer, FILL_CHAR, PAD_SIZE)) {
-		pr_warn("vsnprintf(buf, %d, \"%s\", ...) wrote before buffer\n", bufsize, fmt);
-		return 1;
+		KUNIT_FAIL(kunittest, "vsnprintf(buf, %d, \"%s\", ...) wrote before buffer\n", bufsize, fmt);
+		return;
 	}
 
 	if (!bufsize) {
-		if (memchr_inv(test_buffer, FILL_CHAR, BUF_SIZE + PAD_SIZE)) {
-			pr_warn("vsnprintf(buf, 0, \"%s\", ...) wrote to buffer\n",
-				fmt);
-			return 1;
-		}
-		return 0;
+		if (memchr_inv(test_buffer, FILL_CHAR, BUF_SIZE + PAD_SIZE))
+			KUNIT_FAIL(kunittest, "vsnprintf(buf, 0, \"%s\", ...) wrote to buffer\n", fmt);
+		return;
 	}
 
 	written = min(bufsize-1, elen);
 	if (test_buffer[written]) {
-		pr_warn("vsnprintf(buf, %d, \"%s\", ...) did not nul-terminate buffer\n",
+		KUNIT_FAIL(kunittest, "vsnprintf(buf, %d, \"%s\", ...) did not nul-terminate buffer\n",
 			bufsize, fmt);
-		return 1;
+		return;
 	}
 
 	if (memchr_inv(test_buffer + written + 1, FILL_CHAR, BUF_SIZE + PAD_SIZE - (written + 1))) {
-		pr_warn("vsnprintf(buf, %d, \"%s\", ...) wrote beyond the nul-terminator\n",
+		KUNIT_FAIL(kunittest, "vsnprintf(buf, %d, \"%s\", ...) wrote beyond the nul-terminator\n",
 			bufsize, fmt);
-		return 1;
+		return;
 	}
 
 	if (memcmp(test_buffer, expect, written)) {
-		pr_warn("vsnprintf(buf, %d, \"%s\", ...) wrote '%s', expected '%.*s'\n",
+		KUNIT_FAIL(kunittest, "vsnprintf(buf, %d, \"%s\", ...) wrote '%s', expected '%.*s'\n",
 			bufsize, fmt, test_buffer, written, expect);
-		return 1;
+		return;
 	}
-	return 0;
 }
 
 static void __printf(3, 4) __init
@@ -98,9 +92,8 @@  __test(const char *expect, int elen, const char *fmt, ...)
 	char *p;
 
 	if (elen >= BUF_SIZE) {
-		pr_err("error in test suite: expected output length %d too long. Format was '%s'.\n",
+		KUNIT_FAIL(kunittest, "error in test suite: expected output length %d too long. Format was '%s'.\n",
 		       elen, fmt);
-		failed_tests++;
 		return;
 	}
 
@@ -112,19 +105,17 @@  __test(const char *expect, int elen, const char *fmt, ...)
 	 * enough and 0), and then we also test that kvasprintf would
 	 * be able to print it as expected.
 	 */
-	failed_tests += do_test(BUF_SIZE, expect, elen, fmt, ap);
+	do_test(BUF_SIZE, expect, elen, fmt, ap);
 	rand = 1 + prandom_u32_max(elen+1);
 	/* Since elen < BUF_SIZE, we have 1 <= rand <= BUF_SIZE. */
-	failed_tests += do_test(rand, expect, elen, fmt, ap);
-	failed_tests += do_test(0, expect, elen, fmt, ap);
+	do_test(rand, expect, elen, fmt, ap);
+	do_test(0, expect, elen, fmt, ap);
 
 	p = kvasprintf(GFP_KERNEL, fmt, ap);
 	if (p) {
-		total_tests++;
 		if (memcmp(p, expect, elen+1)) {
-			pr_warn("kvasprintf(..., \"%s\", ...) returned '%s', expected '%s'\n",
+			KUNIT_FAIL(kunittest, "kvasprintf(..., \"%s\", ...) returned '%s', expected '%s'\n",
 				fmt, p, expect);
-			failed_tests++;
 		}
 		kfree(p);
 	}
@@ -303,16 +294,13 @@  plain(void)
 
 	err = plain_hash();
 	if (err) {
-		pr_warn("plain 'p' does not appear to be hashed\n");
-		failed_tests++;
+		KUNIT_FAIL(kunittest, "plain 'p' does not appear to be hashed\n");
 		return;
 	}
 
 	err = plain_format();
-	if (err) {
-		pr_warn("hashing plain 'p' has unexpected format\n");
-		failed_tests++;
-	}
+	if (err)
+		KUNIT_FAIL(kunittest, "hashing plain 'p' has unexpected format\n");
 }
 
 static void __init
@@ -696,8 +684,9 @@  test_pointer(void)
 	fwnode_pointer();
 }
 
-static void __init selftest(void)
+static void __init selftest(struct kunit *ktest)
 {
+	kunittest = ktest;
 	alloced_buffer = kmalloc(BUF_SIZE + 2*PAD_SIZE, GFP_KERNEL);
 	if (!alloced_buffer)
 		return;
@@ -711,6 +700,17 @@  static void __init selftest(void)
 	kfree(alloced_buffer);
 }
 
-KSTM_MODULE_LOADERS(test_printf);
+static struct kunit_case printf_test_cases[] = {
+	KUNIT_CASE(selftest),
+	{}
+};
+
+static struct kunit_suite printf_test_suite = {
+	.name = "printf-kunit-test",
+	.test_cases = printf_test_cases,
+};
+
+kunit_test_suite(printf_test_suite);
+
 MODULE_AUTHOR("Rasmus Villemoes <linux@rasmusvillemoes.dk>");
 MODULE_LICENSE("GPL");