[RFC,v3,18/19] of: unittest: split out a couple of test cases from unittest

Message ID 20181128193636.254378-19-brendanhiggins@google.com (mailing list archive)
State Superseded, archived
Headers show
Series kunit: introduce KUnit, the Linux kernel unit testing framework | expand

Commit Message

Brendan Higgins Nov. 28, 2018, 7:36 p.m. UTC
Split out a couple of test cases that test features in base.c from the
unittest.c monolith. The intention is that we will eventually split out
all test cases and group them together based on what portion of the
device tree they test.

Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
---
 drivers/of/Makefile      |   2 +-
 drivers/of/base-test.c   | 214 ++++++++++++++++++++++++++
 drivers/of/test-common.c | 149 ++++++++++++++++++
 drivers/of/test-common.h |  16 ++
 drivers/of/unittest.c    | 316 +--------------------------------------
 5 files changed, 381 insertions(+), 316 deletions(-)
 create mode 100644 drivers/of/base-test.c
 create mode 100644 drivers/of/test-common.c
 create mode 100644 drivers/of/test-common.h

Comments

Frank Rowand Dec. 4, 2018, 10:58 a.m. UTC | #1
Hi Brendan,

On 11/28/18 11:36 AM, Brendan Higgins wrote:
> Split out a couple of test cases that these features in base.c from the
> unittest.c monolith. The intention is that we will eventually split out
> all test cases and group them together based on what portion of device
> tree they test.

Why does splitting this file apart improve the implementation?


> 
> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> ---
>  drivers/of/Makefile      |   2 +-
>  drivers/of/base-test.c   | 214 ++++++++++++++++++++++++++
>  drivers/of/test-common.c | 149 ++++++++++++++++++
>  drivers/of/test-common.h |  16 ++
>  drivers/of/unittest.c    | 316 +--------------------------------------
>  5 files changed, 381 insertions(+), 316 deletions(-)
>  create mode 100644 drivers/of/base-test.c
>  create mode 100644 drivers/of/test-common.c
>  create mode 100644 drivers/of/test-common.h
>
< snip >

-Frank
Brendan Higgins Dec. 5, 2018, 11:54 p.m. UTC | #2
On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
>
> Hi Brendan,
>
> On 11/28/18 11:36 AM, Brendan Higgins wrote:
> > Split out a couple of test cases that these features in base.c from the
> > unittest.c monolith. The intention is that we will eventually split out
> > all test cases and group them together based on what portion of device
> > tree they test.
>
> Why does splitting this file apart improve the implementation?

This is in preparation for patch 19/19 and other hypothetical future
patches where test cases are split up and grouped together by what
portion of DT they test (for example, the parsing tests and the
platform/device tests would probably go in separate files as well). This
patch by itself does not do anything useful, but I figured it made
patch 19/19 (and, if you like what I am doing, subsequent patches)
easier to review.
Frank Rowand Feb. 14, 2019, 11:57 p.m. UTC | #3
On 12/5/18 3:54 PM, Brendan Higgins wrote:
> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
>>
>> Hi Brendan,
>>
>> On 11/28/18 11:36 AM, Brendan Higgins wrote:
>>> Split out a couple of test cases that these features in base.c from the
>>> unittest.c monolith. The intention is that we will eventually split out
>>> all test cases and group them together based on what portion of device
>>> tree they test.
>>
>> Why does splitting this file apart improve the implementation?
> 
> This is in preparation for patch 19/19 and other hypothetical future
> patches where test cases are split up and grouped together by what
> portion of DT they test (for example the parsing tests and the
> platform/device tests would probably go separate files as well). This
> patch by itself does not do anything useful, but I figured it made
> patch 19/19 (and, if you like what I am doing, subsequent patches)
> easier to review.

I do not see any value in splitting the devicetree tests into
multiple files.

Please help me understand what the benefits of such a split are.

Thanks,

Frank
Brendan Higgins Feb. 15, 2019, 12:56 a.m. UTC | #4
On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 12/5/18 3:54 PM, Brendan Higgins wrote:
> > On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
> >>
> >> Hi Brendan,
> >>
> >> On 11/28/18 11:36 AM, Brendan Higgins wrote:
> >>> Split out a couple of test cases that these features in base.c from the
> >>> unittest.c monolith. The intention is that we will eventually split out
> >>> all test cases and group them together based on what portion of device
> >>> tree they test.
> >>
> >> Why does splitting this file apart improve the implementation?
> >
> > This is in preparation for patch 19/19 and other hypothetical future
> > patches where test cases are split up and grouped together by what
> > portion of DT they test (for example the parsing tests and the
> > platform/device tests would probably go separate files as well). This
> > patch by itself does not do anything useful, but I figured it made
> > patch 19/19 (and, if you like what I am doing, subsequent patches)
> > easier to review.
>
> I do not see any value in splitting the devicetree tests into
> multiple files.
>
> Please help me understand what the benefits of such a split are.

Sorry, I thought it made sense in the context of what I am doing in the
following patch. All I am trying to do is to provide an effective way
of grouping test cases. To be clear, the idea, assuming you agree, is
that we would follow up with several other patches like this one and
the subsequent patch: one that would pull out a couple of test
functions, as I have done here, and another that splits those
functions up into a bunch of proper test cases.

I thought that having that many unrelated test cases in a single file
would just be a pain to sort through, deal with, review, whatever.

This is not something I feel particularly strongly about; it is just
pretty atypical, from my experience, to have so many unrelated test
cases in a single file.

Maybe you would prefer that I break up the test cases first, and then
we split up the file as appropriate?

I just assumed that we would agree it would be way too much stuff for
a single file, so I went ahead and broke it up first, because I
thought it would make it easier to review in that order rather than
the other way around.

Cheers
Frank Rowand Feb. 15, 2019, 2:05 a.m. UTC | #5
On 2/14/19 4:56 PM, Brendan Higgins wrote:
> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>
>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>
>>>> Hi Brendan,
>>>>
>>>> On 11/28/18 11:36 AM, Brendan Higgins wrote:
>>>>> Split out a couple of test cases that these features in base.c from the
>>>>> unittest.c monolith. The intention is that we will eventually split out
>>>>> all test cases and group them together based on what portion of device
>>>>> tree they test.
>>>>
>>>> Why does splitting this file apart improve the implementation?
>>>
>>> This is in preparation for patch 19/19 and other hypothetical future
>>> patches where test cases are split up and grouped together by what
>>> portion of DT they test (for example the parsing tests and the
>>> platform/device tests would probably go separate files as well). This
>>> patch by itself does not do anything useful, but I figured it made
>>> patch 19/19 (and, if you like what I am doing, subsequent patches)
>>> easier to review.
>>
>> I do not see any value in splitting the devicetree tests into
>> multiple files.
>>
>> Please help me understand what the benefits of such a split are.

Note that my following comments are specific to the current devicetree
unittests, and may not apply to the general case of unit tests in other
subsystems.


> Sorry, I thought it made sense in context of what I am doing in the
> following patch. All I am trying to do is to provide an effective way
> of grouping test cases. To be clear, the idea, assuming you agree, is

Looking at _just_ the first few fragments of the following patch, the
change is to break down a moderate size function of related tests,
of_unittest_find_node_by_name(), into a lot of extremely small functions.
Then, to find the execution order of the many small functions, I have to
find the array of_test_find_node_by_name_cases[].  Then I have to
chase off into the kunit test runner core, where I find that the set
of tests in of_test_find_node_by_name_cases[] is processed by a
late_initcall().  So now the order of the various test groupings,
declared via module_test(), is subject to the fragile orderings
of initcalls.

There are ordering dependencies within the devicetree unittests.

I do not like breaking the test cases down into such small atoms.

I do not see any value __for devicetree unittests__ of having
such small atoms.

It makes it harder for me to read the source of the tests and
understand the order they will execute.  It also makes it harder
for me to read through the actual tests (in this example the
tests that are currently grouped in of_unittest_find_node_by_name())
because of all the extra function headers injected into the
existing single function to break it apart into many smaller
functions.

Breaking the tests into separate chunks, each chunk invoked
independently as the result of module_test() of each chunk,
loses the summary result for the devicetree unittests of
how many tests are run and how many passed.  This is the
only statistic that I need to determine whether the
unittests have detected a new fault caused by a specific
patch or commit.  I don't need to look at any individual
test result unless the overall result reports a failure.


> that we would follow up with several other patches like this one and
> the subsequent patch, one which would pull out a couple test
> functions, as I have done here, and another that splits those
> functions up into a bunch of proper test cases.
> 
> I thought that having that many unrelated test cases in a single file
> would just be a pain to sort through deal with, review, whatever.

Having all the test cases in a single file makes it easier for me to
read, understand, modify, and maintain the tests.


> This is not something I feel particularly strongly about, it is just
> pretty atypical from my experience to have so many unrelated test
> cases in a single file.
> 
> Maybe you would prefer that I break up the test cases first, and then
> we split up the file as appropriate?

I prefer that the test cases not be broken up arbitrarily.  There _may_
be cases where the devicetree unittests are currently not well grouped
and may benefit from change, but if so that should be handled independently
of any transformation into a KUnit framework.


> I just assumed that we would agree it would be way too much stuff for
> a single file, so I went ahead and broke it up first, because I
> thought it would make it easier to review in that order rather than
> the other way around.
Brendan Higgins Feb. 15, 2019, 10:56 a.m. UTC | #6
On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 2/14/19 4:56 PM, Brendan Higgins wrote:
> > On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >>
> >> On 12/5/18 3:54 PM, Brendan Higgins wrote:
> >>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
> >>>>
> >>>> Hi Brendan,
> >>>>
> >>>> On 11/28/18 11:36 AM, Brendan Higgins wrote:
> >>>>> Split out a couple of test cases that these features in base.c from the
> >>>>> unittest.c monolith. The intention is that we will eventually split out
> >>>>> all test cases and group them together based on what portion of device
> >>>>> tree they test.
> >>>>
> >>>> Why does splitting this file apart improve the implementation?
> >>>
> >>> This is in preparation for patch 19/19 and other hypothetical future
> >>> patches where test cases are split up and grouped together by what
> >>> portion of DT they test (for example the parsing tests and the
> >>> platform/device tests would probably go separate files as well). This
> >>> patch by itself does not do anything useful, but I figured it made
> >>> patch 19/19 (and, if you like what I am doing, subsequent patches)
> >>> easier to review.
> >>
> >> I do not see any value in splitting the devicetree tests into
> >> multiple files.
> >>
> >> Please help me understand what the benefits of such a split are.
>
> Note that my following comments are specific to the current devicetree
> unittests, and may not apply to the general case of unit tests in other
> subsystems.
>
Note taken.
>
> > Sorry, I thought it made sense in context of what I am doing in the
> > following patch. All I am trying to do is to provide an effective way
> > of grouping test cases. To be clear, the idea, assuming you agree, is
>
> Looking at _just_ the first few fragments of the following patch, the
> change is to break down a moderate size function of related tests,
> of_unittest_find_node_by_name(), into a lot of extremely small functions.

Hmm...I wouldn't call that a moderate function. By my standards those
functions are pretty large. In any case, I want to limit the
discussion to specifically what a test case should look like, and the
general consensus outside of the kernel is that unit test cases should
be very very small. The reason is that each test case is supposed to
test one specific property; it should be obvious what that property
is; and it should be obvious what is needed to exercise that property.

> Then to find the execution order of the many small functions requires
> finding the array of_test_find_node_by_name_cases[].  Then I have to

Execution order shouldn't matter. Each test case should be totally
hermetic. Obviously in this case we depend on the preceding test case
to clean up properly, but that is something I am working on.

> chase off into the kunit test runner core, where I find that the set
> of tests in of_test_find_node_by_name_cases[] is processed by a
> late_initcall().  So now the order of the various test groupings,

That's fair. You are not the only one to complain about that. The
late_initcall is a hack which I plan on replacing shortly (and yes I
know that me planning on doing something doesn't mean much in this
discussion, but that's what I got); regardless, order shouldn't
matter.

> declared via module_test(), are subject to the fragile orderings
> of initcalls.
>
> There are ordering dependencies within the devicetree unittests.

There are ordering dependencies now in the current devicetree unittests,
but, if I may be so bold, that is something that I would like to fix.

>
> I do not like breaking the test cases down into such small atoms.
>
> I do not see any value __for devicetree unittests__ of having
> such small atoms.

I imagine it probably makes less sense in the context of a strict
dependency order, but that is something that I want to do away with.
Ideally, when you look at a test case you shouldn't need to think
about anything other than the code under test and the test case
itself; so in my universe, a smaller test case should mean less that
you need to think about.

I don't want to get hung up on size too much because I don't think
this is what it is really about. I think you and I can agree that a
test should be as simple and complete as possible. The ideal test
should cover all behavior, and should be obviously correct (since
otherwise we would have to test the test too). Obviously, these two
goals are at odds, so the compromise I attempt to make is to make a
bunch of test cases which are separately simple enough to be obviously
correct at first glance, and the sum total of all the tests provides
the necessary coverage. Additionally, because each test case is
independent of every other test case, they can be reasoned about
individually, and it is not necessary to reason about them as a group.
Hypothetically, this should give you the best of both worlds.

So even if I failed in execution, I think the principle is good.

>
> It makes it harder for me to read the source of the tests and
> understand the order they will execute.  It also makes it harder
> for me to read through the actual tests (in this example the
> tests that are currently grouped in of_unittest_find_node_by_name())
> because of all the extra function headers injected into the
> existing single function to break it apart into many smaller
> functions.

Well, now the same groups are expressed as test modules: each is just a
collection of closely related test cases, grouped together for exactly
that reason. Nevertheless, I argue this is superior to grouping them
together in a function, because a test module (elsewhere called a test
suite) relates test cases together while making it clear that they are
still logically independent; two test cases in a suite should run
completely independently of each other.

>
> Breaking the tests into separate chunks, each chunk invoked
> independently as the result of module_test() of each chunk,
> loses the summary result for the devicetree unittests of
> how many tests are run and how many passed.  This is the

We still provide that. Well, we provide a total result of all tests
run, and they are already grouped by test module; we could also provide
module-level summaries, which would be pretty trivial.

> only statistic that I need to determine whether the
> unittests have detected a new fault caused by a specific
> patch or commit.  I don't need to look at any individual
> test result unless the overall result reports a failure.

Yep, we do that too.

>
>
> > that we would follow up with several other patches like this one and
> > the subsequent patch, one which would pull out a couple test
> > functions, as I have done here, and another that splits those
> > functions up into a bunch of proper test cases.
> >
> > I thought that having that many unrelated test cases in a single file
> > would just be a pain to sort through deal with, review, whatever.
>
> Having all the test cases in a single file makes it easier for me to
> read, understand, modify, and maintain the tests.

Alright, well that's a much harder thing to make a strong statement
about. From my experience, I have usually seen one or two *maybe
three* test suites in a single file, and you have a lot more than that
in the file right now, but this sounds like a discussion for later
anyway.

>
> > This is not something I feel particularly strongly about, it is just
> > pretty atypical from my experience to have so many unrelated test
> > cases in a single file.
> >
> > Maybe you would prefer that I break up the test cases first, and then
> > we split up the file as appropriate?
>
> I prefer that the test cases not be broken up arbitrarily.  There _may_

I wasn't trying to break them up arbitrarily. I thought I was doing it
according to a pattern (breaking up the file, that is), but maybe I
just hadn't looked at enough examples.

> be cases where the devicetree unittests are currently not well grouped
> and may benefit from change, but if so that should be handled independently
> of any transformation into a KUnit framework.

I agree. I did this because I wanted to illustrate what I thought real
world KUnit unit tests should look like (I also wanted to be able to
show off KUnit test features that help you write these kinds of
tests); I was not necessarily intending that all the of: unittest
patches would get merged in with the whole RFC. I was mostly trying to
create cause for discussion (which it seems like I succeeded at ;-) ).

So fair enough, I will propose these patches separately and later
(except of course this one that splits up the file). Do you want the
initial transformation to the KUnit framework in the main KUnit
patchset, or do you want that to be done separately? If I recall, Rob
suggested this as a good initial example that other people could refer
to, and some people seemed to think that I needed one to help guide
the discussion and provide direction for early users. I don't
necessarily think that means the initial real world example needs to
be a part of the initial patchset though.

Cheers
Frank Rowand Feb. 18, 2019, 10:25 p.m. UTC | #7
On 2/15/19 2:56 AM, Brendan Higgins wrote:
> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>
>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>
>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>>
>>>>>> Hi Brendan,
>>>>>>
>>>>>> On 11/28/18 11:36 AM, Brendan Higgins wrote:
>>>>>>> Split out a couple of test cases that these features in base.c from the
>>>>>>> unittest.c monolith. The intention is that we will eventually split out
>>>>>>> all test cases and group them together based on what portion of device
>>>>>>> tree they test.
>>>>>>
>>>>>> Why does splitting this file apart improve the implementation?
>>>>>
>>>>> This is in preparation for patch 19/19 and other hypothetical future
>>>>> patches where test cases are split up and grouped together by what
>>>>> portion of DT they test (for example the parsing tests and the
>>>>> platform/device tests would probably go separate files as well). This
>>>>> patch by itself does not do anything useful, but I figured it made
>>>>> patch 19/19 (and, if you like what I am doing, subsequent patches)
>>>>> easier to review.
>>>>
>>>> I do not see any value in splitting the devicetree tests into
>>>> multiple files.
>>>>
>>>> Please help me understand what the benefits of such a split are.
>>
>> Note that my following comments are specific to the current devicetree
>> unittests, and may not apply to the general case of unit tests in other
>> subsystems.
>>
> Note taken.
>>
>>> Sorry, I thought it made sense in context of what I am doing in the
>>> following patch. All I am trying to do is to provide an effective way
>>> of grouping test cases. To be clear, the idea, assuming you agree, is
>>
>> Looking at _just_ the first few fragments of the following patch, the
>> change is to break down a moderate size function of related tests,
>> of_unittest_find_node_by_name(), into a lot of extremely small functions.
> 
> Hmm...I wouldn't call that a moderate function. By my standards those
> functions are pretty large. In any case, I want to limit the
> discussion to specifically what a test case should look like, and the
> general consensus outside of the kernel is that unit test cases should
> be very very small. The reason is that each test case is supposed to
> test one specific property; it should be obvious what that property
> is; and it should be obvious what is needed to exercise that property.

That is a valid model and philosophy of unit test design.

It is not a model that the devicetree unit tests can be shoehorned
into.  Sort of...  In a sense, the existing devicetree unit tests
already do that, if you consider each unittest() (and sometimes a few
lines of code that create the result that unittest() checks) to be a
separate unit test.  But the kunit model does not consider the sort-of
equivalent KUNIT_EXPECT_EQ(), etc, to be a unit test; the unit test
in kunit would be KUNIT_CASE().  Although it is a little confusing to
me that the initialization and clean up on exit occur one level
higher than KUNIT_CASE(), in struct kunit_module.  I think the
confusion is just a matter of slight conflict in the documentation
(btw, the documents were very helpful for me to understand the
overall concepts and model).


>> Then to find the execution order of the many small functions requires
>> finding the array of_test_find_node_by_name_cases[].  Then I have to
> 
> Execution order shouldn't matter. Each test case should be totally
> hermetic. Obviously in this case we depend on the preceeding test case
> to clean up properly, but that is something I am working on.

But the order _does_ matter for the devicetree unit tests.

That is one of the problems.  The devicetree unit tests are not small,
independent tests.  Some of the tests change state in a way that
following tests depend upon.

The design documents also mention that each unit test should have
a pre-test initialization, and a post-test cleanup to remove the
results of the initialization.

The devicetree unit tests have a large, intrusive initialization.
Once again, not a good fit for this model.

The devicetree unit tests also have an undocumented (and not at all
obvious) need to leave state changed in some cases after the test
completes.  There are cases where the way I fully validate the
success of the tests is to examine the state of the live
devicetree via /proc/devicetree/.  Ideally, this would be done by
a script or a program, but creating that is not near the top of
my todo list.


>> chase off into the kunit test runner core, where I find that the set
>> of tests in of_test_find_node_by_name_cases[] is processed by a
>> late_initcall().  So now the order of the various test groupings,
> 
> That's fair. You are not the only one to complain about that. The
> late_initcall is a hack which I plan on replacing shortly (and yes I
> know that me planning on doing something doesn't mean much in this
> discussion, but that's what I got); regardless, order shouldn't
> matter.

But again, it does.


>> declared via module_test(), are subject to the fragile orderings
>> of initcalls.
>>
>> There are ordering dependencies within the devicetree unittests.
> 
> There is now in the current devicetree unittests, but, if I may be so
> bold, that is something that I would like to fix.
> 
>>
>> I do not like breaking the test cases down into such small atoms.
>>
>> I do not see any value __for devicetree unittests__ of having
>> such small atoms.
> 
> I imagine it probably makes less sense in the context of a strict
> dependency order, but that is something that I want to do away with.
> Ideally, when you look at a test case you shouldn't need to think
> about anything other than the code under test and the test case
> itself; so in my universe, a smaller test case should mean less you
> need to think about.

For the general case, I think that is an excellent model.


> I don't want to get hung up on size too much because I don't think
> this is what it is really about. I think you and I can agree that a
> test should be as simple and complete as possible. The ideal test
> should cover all behavior, and should be obviously correct (since
> otherwise we would have to test the test too). Obviously, these two
> goals are at odds, so the compromise I attempt to make is to make a
> bunch of test cases which are separately simple enough to be obviously
> correct at first glance, and the sum total of all the tests provides
> the necessary coverage. Additionally, because each test case is
> independent of every other test case, they can be reasoned about
> individually, and it is not necessary to reason about them as a group.
> Hypothetically, this should give you the best of both worlds.
> 
> So even if I failed in execution, I think the principle is good.
> 
>>
>> It makes it harder for me to read the source of the tests and
>> understand the order they will execute.  It also makes it harder
>> for me to read through the actual tests (in this example the
>> tests that are currently grouped in of_unittest_find_node_by_name())
>> because of all the extra function headers injected into the
>> existing single function to break it apart into many smaller
>> functions.
> 
> Well now the same groups are expressed as test modules, it's just a
> collection of closely related test cases, but they are grouped
> together for just that reason. Nevertheless, I argue this is superior
> to grouping them together in a function, because a test module
> (elsewhere called a test suite) relates test cases together, but makes
> it clear that they are still logically independent, two test cases in
> a suite should run completely independently of each other.

That is missing my point.  Converting to the kunit format adds a
lot of boilerplate function declarations.  Compare that extra
boilerplate to a one-line comment.  This is a clarity-of-source-code
argument that I am making.

It may be a little hard to see my point given the current state of
unittest.c.  I could definitely make that much more readable using
the current model.


>>
>> Breaking the tests into separate chunks, each chunk invoked
>> independently as the result of module_test() of each chunk,
>> loses the summary result for the devicetree unittests of
>> how many tests are run and how many passed.  This is the
> 
> We still provide that. Well, we provide a total result of all tests
> run, but they are already grouped by test module, and we could provide
> module level summaries, that would be pretty trivial.

Providing the module level summary (assuming that all of the devicetree
tests were in a single module) would meet this need.


>> only statistic that I need to determine whether the
>> unittests have detected a new fault caused by a specific
>> patch or commit.  I don't need to look at any individual
>> test result unless the overall result reports a failure.
> 
> Yep, we do that too.

Well, when you add the module level summary...


>>
>>
>>> that we would follow up with several other patches like this one and
>>> the subsequent patch, one which would pull out a couple test
>>> functions, as I have done here, and another that splits those
>>> functions up into a bunch of proper test cases.
>>>
>>> I thought that having that many unrelated test cases in a single file
>>> would just be a pain to sort through deal with, review, whatever.
>>
>> Having all the test cases in a single file makes it easier for me to
>> read, understand, modify, and maintain the tests.
> 
> Alright, well that's a much harder thing to make a strong statement
> about. From my experience, I have usually seen one or two *maybe
> three* test suites in a single file, and you have a lot more than that
> in the file right now, but this sounds like a discussion for later
> anyway.

drivers/of/test-common.c is already split out by the patch series.


>>
>>> This is not something I feel particularly strongly about, it is just
>>> pretty atypical from my experience to have so many unrelated test
>>> cases in a single file.
>>>
>>> Maybe you would prefer that I break up the test cases first, and then
>>> we split up the file as appropriate?
>>
>> I prefer that the test cases not be broken up arbitrarily.  There _may_
> 
> I wasn't trying to break them up arbitrarily. I thought I was doing it
> according to a pattern (breaking up the file, that is), but maybe I
> just hadn't looked at enough examples.

This goes back to the kunit model of putting each test into a separate
function that can be a KUNIT_CASE().  That is a model that I do not agree
with for devicetree.


>> be cases where the devicetree unittests are currently not well grouped
>> and may benefit from change, but if so that should be handled independently
>> of any transformation into a KUnit framework.
> 
> I agree. I did this because I wanted to illustrate what I thought real
> world KUnit unit tests should look like (I also wanted to be able to
> show off KUnit test features that help you write these kinds of
> tests); I was not necessarily intending that all the of: unittest
> patches would get merged in with the whole RFC. I was mostly trying to
> create cause for discussion (which it seems like I succeeded at ;-) ).
> 
> So fair enough, I will propose these patches separately and later
> (except of course this one that splits up the file). Do you want the
> initial transformation to the KUnit framework in the main KUnit
> patchset, or do you want that to be done separately? If I recall, Rob
> suggested this as a good initial example that other people could refer
> to, and some people seemed to think that I needed one to help guide
> the discussion and provide direction for early users. I don't
> necessarily think that means the initial real world example needs to
> be a part of the initial patchset though.
> 
> Cheers
>
Frank Rowand Feb. 20, 2019, 8:44 p.m. UTC | #8
On 2/18/19 2:25 PM, Frank Rowand wrote:
> On 2/15/19 2:56 AM, Brendan Higgins wrote:
>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>
>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>
>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>>>

< snip >

>
> It makes it harder for me to read the source of the tests and
> understand the order they will execute.  It also makes it harder
> for me to read through the actual tests (in this example the
> tests that are currently grouped in of_unittest_find_node_by_name())
> because of all the extra function headers injected into the
> existing single function to break it apart into many smaller
> functions.

< snip >

>>>> This is not something I feel particularly strongly about, it is just
>>>> pretty atypical from my experience to have so many unrelated test
>>>> cases in a single file.
>>>>
>>>> Maybe you would prefer that I break up the test cases first, and then
>>>> we split up the file as appropriate?
>>>
>>> I prefer that the test cases not be broken up arbitrarily.  There _may_

I expect that I created confusion by putting this in a reply to patch 18/19.
It is actually a comment about patch 19/19.  Sorry about that.


>>
>> I wasn't trying to break them up arbitrarily. I thought I was doing it
>> according to a pattern (breaking up the file, that is), but maybe I
>> just hadn't looked at enough examples.
> 
> This goes back to the kunit model of putting each test into a separate
> function that can be a KUNIT_CASE().  That is a model that I do not agree
> with for devicetree.

So now that I am actually talking about patch 19/19, let me give a concrete
example.  I will cut and paste (after my comments) the beginning portion
of base-test.c, after applying patch 19/19 (the "base version").  Then I
will cut and paste my alternative version which does not break the tests
down into individual functions (the "frank version").

I will also reply to this email with the base version and the frank version
as attachments, which will make it easier to save as separate versions
for easier viewing.  I'm not sure if an email with attachments will make
it through the list servers, but I am cautiously optimistic.

I am using v4 of the patch series because I never got v3 to cleanly apply
and it is not a constructive use of my time to do so since I have v4 applied.

One of the points I was trying to make is that readability suffers from the
approach taken by patches 18/19 and 19/19.

The base version contains the extra text of a function header for each
unit test.  This is visual noise and makes the file larger.  It is also
one more possible location of an error (although not likely).

The frank version has converted each of the new function headers into
a comment, using the function name with '_' converted to ' '.  The
comments are more readable than the function headers.  Note that I added
an extra blank line before each comment, which violates the kernel
coding standards, but I feel this makes the code more readable.

The base version needs to declare each of the individual test functions
in of_test_find_node_by_name_cases[]. It is possible that a test function
could be left out of of_test_find_node_by_name_cases[], in error.  This
will result in a compile warning (I think warning instead of error, but
I have not verified that) so the error might be caught or it might be
overlooked.

In the base version, the order of execution of the test code requires
bouncing back and forth between the test functions and the coding of
of_test_find_node_by_name_cases[].

In the frank version the order of execution of the test code is obvious.

The base version is 265 lines.  The frank version is 208 lines, 57 lines
less.  Less is better.


## ==========  base version  ====================================

// SPDX-License-Identifier: GPL-2.0
/*
 * Unit tests for functions defined in base.c.
 */
#include <linux/of.h>

#include <kunit/test.h>

#include "test-common.h"

static void of_test_find_node_by_name_basic(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("/testcase-data");
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find /testcase-data failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
{
	/* Test if trailing '/' works */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
			    "trailing '/' on /testcase-data/ should fail\n");

}

static void of_test_find_node_by_name_multiple_components(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find /testcase-data/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_with_alias(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("testcase-alias");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find testcase-alias failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
{
	/* Test if trailing '/' works on aliases */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
			   "trailing '/' on testcase-alias/ should fail\n");
}

/*
 * TODO(brendanhiggins@google.com): This looks like a duplicate of
 * of_test_find_node_by_name_multiple_components
 */
static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find testcase-alias/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_missing_path(struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
		"non-existent path returned node %pOF\n", np);
	of_node_put(np);
}

static void of_test_find_node_by_name_missing_alias(struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test, np = of_find_node_by_path("missing-alias"), NULL,
		"non-existent alias returned node %pOF\n", np);
	of_node_put(np);
}

static void of_test_find_node_by_name_missing_alias_with_relative_path(
		struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
		"non-existent alias with relative path returned node %pOF\n",
		np);
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
			       "option path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #1 failed\n");
	of_node_put(np);

	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #2 failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_null_option(struct kunit *test)
{
	struct device_node *np;

	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
					 "NULL option path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
			       "option alias path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_alias_and_slash(
		struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
			       "option alias path test, subcase #1 failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
{
	struct device_node *np;

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
			test, np, "NULL option alias path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_option_clearing(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	options = "testoption";
	np = of_find_node_opts_by_path("testcase-alias", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	options = "testoption";
	np = of_find_node_opts_by_path("/", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing root node test failed\n");
	of_node_put(np);
}

static int of_test_find_node_by_name_init(struct kunit *test)
{
	/* adding data for unittest */
	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());

	if (!of_aliases)
		of_aliases = of_find_node_by_path("/aliases");

	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
			"/testcase-data/phandle-tests/consumer-a"));

	return 0;
}

static struct kunit_case of_test_find_node_by_name_cases[] = {
	KUNIT_CASE(of_test_find_node_by_name_basic),
	KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
	KUNIT_CASE(of_test_find_node_by_name_multiple_components),
	KUNIT_CASE(of_test_find_node_by_name_with_alias),
	KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
	KUNIT_CASE(of_test_find_node_by_name_missing_path),
	KUNIT_CASE(of_test_find_node_by_name_missing_alias),
	KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
	KUNIT_CASE(of_test_find_node_by_name_with_option),
	KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_with_null_option),
	KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
	KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
	KUNIT_CASE(of_test_find_node_by_name_option_clearing),
	KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
	{},
};

static struct kunit_module of_test_find_node_by_name_module = {
	.name = "of-test-find-node-by-name",
	.init = of_test_find_node_by_name_init,
	.test_cases = of_test_find_node_by_name_cases,
};
module_test(of_test_find_node_by_name_module);


## ==========  frank version  ===================================

// SPDX-License-Identifier: GPL-2.0
/*
 * Unit tests for functions defined in base.c.
 */
#include <linux/of.h>

#include <kunit/test.h>

#include "test-common.h"

static void of_unittest_find_node_by_name(struct kunit *test)
{
	struct device_node *np;
	const char *options, *name;


	// find node by name basic

	np = of_find_node_by_path("/testcase-data");
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find /testcase-data failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name trailing slash

	/* Test if trailing '/' works */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
			    "trailing '/' on /testcase-data/ should fail\n");


	// find node by name multiple components

	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find /testcase-data/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name with alias

	np = of_find_node_by_path("testcase-alias");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find testcase-alias failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name with alias and slash

	/* Test if trailing '/' works on aliases */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
			    "trailing '/' on testcase-alias/ should fail\n");


	// find node by name multiple components 2

	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find testcase-alias/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name missing path

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
		"non-existent path returned node %pOF\n", np);
	of_node_put(np);


	// find node by name missing alias

	KUNIT_EXPECT_EQ_MSG(
		test, np = of_find_node_by_path("missing-alias"), NULL,
		"non-existent alias returned node %pOF\n", np);
	of_node_put(np);


	//  find node by name missing alias with relative path

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
		"non-existent alias with relative path returned node %pOF\n",
		np);
	of_node_put(np);


	// find node by name with option

	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
			       "option path test failed\n");
	of_node_put(np);


	// find node by name with option and slash

	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #1 failed\n");
	of_node_put(np);

	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #2 failed\n");
	of_node_put(np);


	// find node by name with null option

	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
					 "NULL option path test failed\n");
	of_node_put(np);


	// find node by name with option alias

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
			       "option alias path test failed\n");
	of_node_put(np);


	// find node by name with option alias and slash

	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
			       "option alias path test, subcase #1 failed\n");
	of_node_put(np);


	// find node by name with null option alias

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
			test, np, "NULL option alias path test failed\n");
	of_node_put(np);


	// find node by name option clearing

	options = "testoption";
	np = of_find_node_opts_by_path("testcase-alias", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing test failed\n");
	of_node_put(np);


	// find node by name option clearing root

	options = "testoption";
	np = of_find_node_opts_by_path("/", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing root node test failed\n");
	of_node_put(np);
}

static int of_test_init(struct kunit *test)
{
	/* adding data for unittest */
	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());

	if (!of_aliases)
		of_aliases = of_find_node_by_path("/aliases");

	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
			"/testcase-data/phandle-tests/consumer-a"));

	return 0;
}

static struct kunit_case of_test_cases[] = {
	KUNIT_CASE(of_unittest_find_node_by_name),
	{},
};

static struct kunit_module of_test_module = {
	.name = "of-base-test",
	.init = of_test_init,
	.test_cases = of_test_cases,
};
module_test(of_test_module);


> 
> 
>>> be cases where the devicetree unittests are currently not well grouped
>>> and may benefit from change, but if so that should be handled independently
>>> of any transformation into a KUnit framework.
>>
>> I agree. I did this because I wanted to illustrate what I thought real
>> world KUnit unit tests should look like (I also wanted to be able to
>> show off KUnit test features that help you write these kinds of
>> tests); I was not necessarily intending that all the of: unittest
>> patches would get merged in with the whole RFC. I was mostly trying to
>> create cause for discussion (which it seems like I succeeded at ;-) ).
>>
>> So fair enough, I will propose these patches separately and later
>> (except of course this one that splits up the file). Do you want the
>> initial transformation to the KUnit framework in the main KUnit
>> patchset, or do you want that to be done separately? If I recall, Rob
>> suggested this as a good initial example that other people could refer
>> to, and some people seemed to think that I needed one to help guide
>> the discussion and provide direction for early users. I don't
>> necessarily think that means the initial real world example needs to
>> be a part of the initial patchset though.
>>
>> Cheers
>>
> 
>
Frank Rowand Feb. 20, 2019, 8:47 p.m. UTC | #9
On 2/20/19 12:44 PM, Frank Rowand wrote:
> On 2/18/19 2:25 PM, Frank Rowand wrote:
>> On 2/15/19 2:56 AM, Brendan Higgins wrote:
>>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>
>>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>>
>>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>>>>
> 
> < snip >
> 
>>
>> It makes it harder for me to read the source of the tests and
>> understand the order they will execute.  It also makes it harder
>> for me to read through the actual tests (in this example the
>> tests that are currently grouped in of_unittest_find_node_by_name())
>> because of all the extra function headers injected into the
>> existing single function to break it apart into many smaller
>> functions.
> 
> < snip >
> 
>>>>> This is not something I feel particularly strongly about, it is just
>>>>> pretty atypical from my experience to have so many unrelated test
>>>>> cases in a single file.
>>>>>
>>>>> Maybe you would prefer that I break up the test cases first, and then
>>>>> we split up the file as appropriate?
>>>>
>>>> I prefer that the test cases not be broken up arbitrarily.  There _may_
> 
> I expect that I created confusion by putting this in a reply to patch 18/19.
> It is actually a comment about patch 19/19.  Sorry about that.
> 
> 
>>>
>>> I wasn't trying to break them up arbitrarily. I thought I was doing it
>>> according to a pattern (breaking up the file, that is), but maybe I
>>> just hadn't looked at enough examples.
>>
>> This goes back to the kunit model of putting each test into a separate
>> function that can be a KUNIT_CASE().  That is a model that I do not agree
>> with for devicetree.
> 
> So now that I am actually talking about patch 19/19, let me give a concrete
> example.  I will cut and paste (after my comments) the beginning portion
> of base-test.c, after applying patch 19/19 (the "base version").  Then I
> will cut and paste my alternative version which does not break the tests
> down into individual functions (the "frank version").
> 
> I will also reply to this email with the base version and the frank version
> as attachments, which will make it easier to save as separate versions
> for easier viewing.  I'm not sure if an email with attachments will make
> it through the list servers, but I am cautiously optimistic.

base_version and frank_version attached.

-Frank


> 
> I am using v4 of the patch series because I never got v3 to cleanly apply
> and it is not a constructive use of my time to do so since I have v4 applied.
> 
> One of the points I was trying to make is that readability suffers from the
> approach taken by patches 18/19 and 19/19.
> 
> The base version contains the extra text of a function header for each
> unit test.  This is visual noise and makes the file larger.  It is also
> one more possible location of an error (although not likely).
> 
> The frank version has converted each of the new function headers into
> a comment, using the function name with '_' converted to ' '.  The
> comments are more readable than the function headers.  Note that I added
> an extra blank line before each comment, which violates the kernel
> coding standards, but I feel this makes the code more readable.
> 
> The base version needs to declare each of the individual test functions
> in of_test_find_node_by_name_cases[]. It is possible that a test function
> could be left out of of_test_find_node_by_name_cases[], in error.  This
> will result in a compile warning (I think warning instead of error, but
> I have not verified that) so the error might be caught or it might be
> overlooked.
> 
> In the base version, the order of execution of the test code requires
> bouncing back and forth between the test functions and the coding of
> of_test_find_node_by_name_cases[].
> 
> In the frank version the order of execution of the test code is obvious.
> 
> The base version is 265 lines.  The frank version is 208 lines, 57 lines
> less.  Less is better.
> 
> 
> ## ==========  base version  ====================================
> 
> // SPDX-License-Identifier: GPL-2.0
> /*
>  * Unit tests for functions defined in base.c.
>  */
> #include <linux/of.h>
> 
> #include <kunit/test.h>
> 
> #include "test-common.h"
> 
> static void of_test_find_node_by_name_basic(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *name;
> 
> 	np = of_find_node_by_path("/testcase-data");
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> 			       "find /testcase-data failed\n");
> 	of_node_put(np);
> 	kfree(name);
> }
> 
> static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
> {
> 	/* Test if trailing '/' works */
> 	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> 			    "trailing '/' on /testcase-data/ should fail\n");
> 
> }
> 
> static void of_test_find_node_by_name_multiple_components(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *name;
> 
> 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(
> 		test, "/testcase-data/phandle-tests/consumer-a", name,
> 		"find /testcase-data/phandle-tests/consumer-a failed\n");
> 	of_node_put(np);
> 	kfree(name);
> }
> 
> static void of_test_find_node_by_name_with_alias(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *name;
> 
> 	np = of_find_node_by_path("testcase-alias");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> 			       "find testcase-alias failed\n");
> 	of_node_put(np);
> 	kfree(name);
> }
> 
> static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
> {
> 	/* Test if trailing '/' works on aliases */
> 	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> 			   "trailing '/' on testcase-alias/ should fail\n");
> }
> 
> /*
>  * TODO(brendanhiggins@google.com): This looks like a duplicate of
>  * of_test_find_node_by_name_multiple_components
>  */
> static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *name;
> 
> 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(
> 		test, "/testcase-data/phandle-tests/consumer-a", name,
> 		"find testcase-alias/phandle-tests/consumer-a failed\n");
> 	of_node_put(np);
> 	kfree(name);
> }
> 
> static void of_test_find_node_by_name_missing_path(struct kunit *test)
> {
> 	struct device_node *np;
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test,
> 		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> 		"non-existent path returned node %pOF\n", np);
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_missing_alias(struct kunit *test)
> {
> 	struct device_node *np;
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test, np = of_find_node_by_path("missing-alias"), NULL,
> 		"non-existent alias returned node %pOF\n", np);
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_missing_alias_with_relative_path(
> 		struct kunit *test)
> {
> 	struct device_node *np;
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test,
> 		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> 		"non-existent alias with relative path returned node %pOF\n",
> 		np);
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_option(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> 			       "option path test failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> 			       "option path test, subcase #1 failed\n");
> 	of_node_put(np);
> 
> 	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> 			       "option path test, subcase #2 failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_null_option(struct kunit *test)
> {
> 	struct device_node *np;
> 
> 	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> 	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> 					 "NULL option path test failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
> 				       &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> 			       "option alias path test failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_option_alias_and_slash(
> 		struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
> 				       &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> 			       "option alias path test, subcase #1 failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
> {
> 	struct device_node *np;
> 
> 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> 	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> 			test, np, "NULL option alias path test failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_option_clearing(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	options = "testoption";
> 	np = of_find_node_opts_by_path("testcase-alias", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> 			    "option clearing test failed\n");
> 	of_node_put(np);
> }
> 
> static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options;
> 
> 	options = "testoption";
> 	np = of_find_node_opts_by_path("/", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> 			    "option clearing root node test failed\n");
> 	of_node_put(np);
> }
> 
> static int of_test_find_node_by_name_init(struct kunit *test)
> {
> 	/* adding data for unittest */
> 	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> 
> 	if (!of_aliases)
> 		of_aliases = of_find_node_by_path("/aliases");
> 
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> 			"/testcase-data/phandle-tests/consumer-a"));
> 
> 	return 0;
> }
> 
> static struct kunit_case of_test_find_node_by_name_cases[] = {
> 	KUNIT_CASE(of_test_find_node_by_name_basic),
> 	KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
> 	KUNIT_CASE(of_test_find_node_by_name_multiple_components),
> 	KUNIT_CASE(of_test_find_node_by_name_with_alias),
> 	KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
> 	KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
> 	KUNIT_CASE(of_test_find_node_by_name_missing_path),
> 	KUNIT_CASE(of_test_find_node_by_name_missing_alias),
> 	KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
> 	KUNIT_CASE(of_test_find_node_by_name_with_option),
> 	KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
> 	KUNIT_CASE(of_test_find_node_by_name_with_null_option),
> 	KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
> 	KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
> 	KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
> 	KUNIT_CASE(of_test_find_node_by_name_option_clearing),
> 	KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
> 	{},
> };
> 
> static struct kunit_module of_test_find_node_by_name_module = {
> 	.name = "of-test-find-node-by-name",
> 	.init = of_test_find_node_by_name_init,
> 	.test_cases = of_test_find_node_by_name_cases,
> };
> module_test(of_test_find_node_by_name_module);
> 
> 
> ## ==========  frank version  ===================================
> 
> // SPDX-License-Identifier: GPL-2.0
> /*
>  * Unit tests for functions defined in base.c.
>  */
> #include <linux/of.h>
> 
> #include <kunit/test.h>
> 
> #include "test-common.h"
> 
> static void of_unittest_find_node_by_name(struct kunit *test)
> {
> 	struct device_node *np;
> 	const char *options, *name;
> 
> 
> 	// find node by name basic
> 
> 	np = of_find_node_by_path("/testcase-data");
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> 			       "find /testcase-data failed\n");
> 	of_node_put(np);
> 	kfree(name);
> 
> 
> 	// find node by name trailing slash
> 
> 	/* Test if trailing '/' works */
> 	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
> 			    "trailing '/' on /testcase-data/ should fail\n");
> 
> 
> 	// find node by name multiple components
> 
> 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(
> 		test, "/testcase-data/phandle-tests/consumer-a", name,
> 		"find /testcase-data/phandle-tests/consumer-a failed\n");
> 	of_node_put(np);
> 	kfree(name);
> 
> 
> 	// find node by name with alias
> 
> 	np = of_find_node_by_path("testcase-alias");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
> 			       "find testcase-alias failed\n");
> 	of_node_put(np);
> 	kfree(name);
> 
> 
> 	// find node by name with alias and slash
> 
> 	/* Test if trailing '/' works on aliases */
> 	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
> 			    "trailing '/' on testcase-alias/ should fail\n");
> 
> 
> 	// find node by name multiple components 2
> 
> 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	name = kasprintf(GFP_KERNEL, "%pOF", np);
> 	KUNIT_EXPECT_STREQ_MSG(
> 		test, "/testcase-data/phandle-tests/consumer-a", name,
> 		"find testcase-alias/phandle-tests/consumer-a failed\n");
> 	of_node_put(np);
> 	kfree(name);
> 
> 
> 	// find node by name missing path
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test,
> 		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
> 		"non-existent path returned node %pOF\n", np);
> 	of_node_put(np);
> 
> 
> 	// find node by name missing alias
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test, np = of_find_node_by_path("missing-alias"), NULL,
> 		"non-existent alias returned node %pOF\n", np);
> 	of_node_put(np);
> 
> 
> 	//  find node by name missing alias with relative path
> 
> 	KUNIT_EXPECT_EQ_MSG(
> 		test,
> 		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
> 		"non-existent alias with relative path returned node %pOF\n",
> 		np);
> 	of_node_put(np);
> 
> 
> 	// find node by name with option
> 
> 	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
> 			       "option path test failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name with option and slash
> 
> 	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> 			       "option path test, subcase #1 failed\n");
> 	of_node_put(np);
> 
> 	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
> 			       "option path test, subcase #2 failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name with null option
> 
> 	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
> 	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
> 					 "NULL option path test failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name with option alias
> 
> 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
> 				       &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
> 			       "option alias path test failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name with option alias and slash
> 
> 	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
> 				       &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
> 			       "option alias path test, subcase #1 failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name with null option alias
> 
> 	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
> 	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
> 			test, np, "NULL option alias path test failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name option clearing
> 
> 	options = "testoption";
> 	np = of_find_node_opts_by_path("testcase-alias", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> 			    "option clearing test failed\n");
> 	of_node_put(np);
> 
> 
> 	// find node by name option clearing root
> 
> 	options = "testoption";
> 	np = of_find_node_opts_by_path("/", &options);
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
> 			    "option clearing root node test failed\n");
> 	of_node_put(np);
> }
> 
> static int of_test_init(struct kunit *test)
> {
> 	/* adding data for unittest */
> 	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> 
> 	if (!of_aliases)
> 		of_aliases = of_find_node_by_path("/aliases");
> 
> 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
> 			"/testcase-data/phandle-tests/consumer-a"));
> 
> 	return 0;
> }
> 
> static struct kunit_case of_test_cases[] = {
> 	KUNIT_CASE(of_unittest_find_node_by_name),
> 	{},
> };
> 
> static struct kunit_module of_test_module = {
> 	.name = "of-base-test",
> 	.init = of_test_init,
> 	.test_cases = of_test_cases,
> };
> module_test(of_test_module);
> 
> 
>>
>>
>>>> be cases where the devicetree unittests are currently not well grouped
>>>> and may benefit from change, but if so that should be handled independently
>>>> of any transformation into a KUnit framework.
>>>
>>> I agree. I did this because I wanted to illustrate what I thought real
>>> world KUnit unit tests should look like (I also wanted to be able to
>>> show off KUnit test features that help you write these kinds of
>>> tests); I was not necessarily intending that all the of: unittest
>>> patches would get merged in with the whole RFC. I was mostly trying to
>>> create cause for discussion (which it seems like I succeeded at ;-) ).
>>>
>>> So fair enough, I will propose these patches separately and later
>>> (except of course this one that splits up the file). Do you want the
>>> initial transformation to the KUnit framework in the main KUnit
>>> patchset, or do you want that to be done separately? If I recall, Rob
>>> suggested this as a good initial example that other people could refer
>>> to, and some people seemed to think that I needed one to help guide
>>> the discussion and provide direction for early users. I don't
>>> necessarily think that means the initial real world example needs to
>>> be a part of the initial patchset though.
>>>
>>> Cheers
>>>
>>
>>
> 
>
// SPDX-License-Identifier: GPL-2.0
/*
 * Unit tests for functions defined in base.c.
 */
#include <linux/of.h>

#include <kunit/test.h>

#include "test-common.h"

static void of_test_find_node_by_name_basic(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("/testcase-data");
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find /testcase-data failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
{
	/* Test if trailing '/' works */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
			    "trailing '/' on /testcase-data/ should fail\n");

}

static void of_test_find_node_by_name_multiple_components(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find /testcase-data/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_with_alias(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("testcase-alias");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find testcase-alias failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
{
	/* Test if trailing '/' works on aliases */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
			   "trailing '/' on testcase-alias/ should fail\n");
}

/*
 * TODO(brendanhiggins@google.com): This looks like a duplicate of
 * of_test_find_node_by_name_multiple_components
 */
static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
{
	struct device_node *np;
	const char *name;

	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find testcase-alias/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);
}

static void of_test_find_node_by_name_missing_path(struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
		"non-existent path returned node %pOF\n", np);
	of_node_put(np);
}

static void of_test_find_node_by_name_missing_alias(struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test, np = of_find_node_by_path("missing-alias"), NULL,
		"non-existent alias returned node %pOF\n", np);
	of_node_put(np);
}

static void of_test_find_node_by_name_missing_alias_with_relative_path(
		struct kunit *test)
{
	struct device_node *np;

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
		"non-existent alias with relative path returned node %pOF\n",
		np);
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
			       "option path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #1 failed\n");
	of_node_put(np);

	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #2 failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_null_option(struct kunit *test)
{
	struct device_node *np;

	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
					 "NULL option path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
			       "option alias path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_option_alias_and_slash(
		struct kunit *test)
{
	struct device_node *np;
	const char *options;

	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
			       "option alias path test, subcase #1 failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
{
	struct device_node *np;

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
			test, np, "NULL option alias path test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_option_clearing(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	options = "testoption";
	np = of_find_node_opts_by_path("testcase-alias", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing test failed\n");
	of_node_put(np);
}

static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
{
	struct device_node *np;
	const char *options;

	options = "testoption";
	np = of_find_node_opts_by_path("/", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing root node test failed\n");
	of_node_put(np);
}

static int of_test_find_node_by_name_init(struct kunit *test)
{
	/* adding data for unittest */
	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());

	if (!of_aliases)
		of_aliases = of_find_node_by_path("/aliases");

	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
			"/testcase-data/phandle-tests/consumer-a"));

	return 0;
}

static struct kunit_case of_test_find_node_by_name_cases[] = {
	KUNIT_CASE(of_test_find_node_by_name_basic),
	KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
	KUNIT_CASE(of_test_find_node_by_name_multiple_components),
	KUNIT_CASE(of_test_find_node_by_name_with_alias),
	KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
	KUNIT_CASE(of_test_find_node_by_name_missing_path),
	KUNIT_CASE(of_test_find_node_by_name_missing_alias),
	KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
	KUNIT_CASE(of_test_find_node_by_name_with_option),
	KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_with_null_option),
	KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
	KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
	KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
	KUNIT_CASE(of_test_find_node_by_name_option_clearing),
	KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
	{},
};

static struct kunit_module of_test_find_node_by_name_module = {
	.name = "of-test-find-node-by-name",
	.init = of_test_find_node_by_name_init,
	.test_cases = of_test_find_node_by_name_cases,
};
module_test(of_test_find_node_by_name_module);
// SPDX-License-Identifier: GPL-2.0
/*
 * Unit tests for functions defined in base.c.
 */
#include <linux/of.h>

#include <kunit/test.h>

#include "test-common.h"

static void of_unittest_find_node_by_name(struct kunit *test)
{
	struct device_node *np;
	const char *options, *name;


	// find node by name basic

	np = of_find_node_by_path("/testcase-data");
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find /testcase-data failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name trailing slash

	/* Test if trailing '/' works */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
			    "trailing '/' on /testcase-data/ should fail\n");


	// find node by name multiple components

	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find /testcase-data/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name with alias

	np = of_find_node_by_path("testcase-alias");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
			       "find testcase-alias failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name with alias and slash

	/* Test if trailing '/' works on aliases */
	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
			    "trailing '/' on testcase-alias/ should fail\n");


	// find node by name multiple components 2

	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	name = kasprintf(GFP_KERNEL, "%pOF", np);
	KUNIT_EXPECT_STREQ_MSG(
		test, "/testcase-data/phandle-tests/consumer-a", name,
		"find testcase-alias/phandle-tests/consumer-a failed\n");
	of_node_put(np);
	kfree(name);


	// find node by name missing path

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
		"non-existent path returned node %pOF\n", np);
	of_node_put(np);


	// find node by name missing alias

	KUNIT_EXPECT_EQ_MSG(
		test, np = of_find_node_by_path("missing-alias"), NULL,
		"non-existent alias returned node %pOF\n", np);
	of_node_put(np);


	//  find node by name missing alias with relative path

	KUNIT_EXPECT_EQ_MSG(
		test,
		np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
		"non-existent alias with relative path returned node %pOF\n",
		np);
	of_node_put(np);


	// find node by name with option

	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
			       "option path test failed\n");
	of_node_put(np);


	// find node by name with option and slash

	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #1 failed\n");
	of_node_put(np);

	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
			       "option path test, subcase #2 failed\n");
	of_node_put(np);


	// find node by name with null option

	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
					 "NULL option path test failed\n");
	of_node_put(np);


	// find node by name with option alias

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
			       "option alias path test failed\n");
	of_node_put(np);


	// find node by name with option alias and slash

	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
				       &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
			       "option alias path test, subcase #1 failed\n");
	of_node_put(np);


	// find node by name with null option alias

	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
			test, np, "NULL option alias path test failed\n");
	of_node_put(np);


	// find node by name option clearing

	options = "testoption";
	np = of_find_node_opts_by_path("testcase-alias", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing test failed\n");
	of_node_put(np);


	// find node by name option clearing root

	options = "testoption";
	np = of_find_node_opts_by_path("/", &options);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
			    "option clearing root node test failed\n");
	of_node_put(np);
}

static int of_test_init(struct kunit *test)
{
	/* adding data for unittest */
	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());

	if (!of_aliases)
		of_aliases = of_find_node_by_path("/aliases");

	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
			"/testcase-data/phandle-tests/consumer-a"));

	return 0;
}

static struct kunit_case of_test_cases[] = {
	KUNIT_CASE(of_unittest_find_node_by_name),
	{},
};

static struct kunit_module of_test_module = {
	.name = "of-base-test",
	.init = of_test_init,
	.test_cases = of_test_cases,
};
module_test(of_test_module);
Brendan Higgins Feb. 28, 2019, 3:52 a.m. UTC | #10
On Wed, Feb 20, 2019 at 12:45 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 2/18/19 2:25 PM, Frank Rowand wrote:
> > On 2/15/19 2:56 AM, Brendan Higgins wrote:
> >> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >>>
> >>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
> >>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >>>>>
> >>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
> >>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
> >>>>>>>
>
> < snip >
>
> >
> > It makes it harder for me to read the source of the tests and
> > understand the order they will execute.  It also makes it harder
> > for me to read through the actual tests (in this example the
> > tests that are currently grouped in of_unittest_find_node_by_name())
> > because of all the extra function headers injected into the
> > existing single function to break it apart into many smaller
> > functions.
>
> < snip >
>
> >>>> This is not something I feel particularly strongly about; it is just
> >>>> pretty atypical from my experience to have so many unrelated test
> >>>> cases in a single file.
> >>>>
> >>>> Maybe you would prefer that I break up the test cases first, and then
> >>>> we split up the file as appropriate?
> >>>
> >>> I prefer that the test cases not be broken up arbitrarily.  There _may_
>
> I expect that I created confusion by putting this in a reply to patch 18/19.
> It is actually a comment about patch 19/19.  Sorry about that.
>

No worries.

>
> >>
> >> I wasn't trying to break them up arbitrarily. I thought I was doing it
> >> according to a pattern (breaking up the file, that is), but maybe I
> >> just hadn't looked at enough examples.
> >
> > This goes back to the kunit model of putting each test into a separate
> > function that can be a KUNIT_CASE().  That is a model that I do not agree
> > with for devicetree.
>
> So now that I am actually talking about patch 19/19, let me give a concrete
> example.  I will cut and paste (after my comments) the beginning portion
> of base-test.c, after applying patch 19/19 (the "base version").  Then I
> will cut and paste my alternative version, which does not break the tests
> down into individual functions (the "frank version").

Awesome, thanks for putting the comparison together!

>
> I will also reply to this email with the base version and the frank version
> as attachments, which will make it easier to save as separate versions
> for easier viewing.  I'm not sure if an email with attachments will make
> it through the list servers, but I am cautiously optimistic.
>
> I am using v4 of the patch series because I never got v3 to cleanly apply
> and it is not a constructive use of my time to do so since I have v4 applied.
>
> One of the points I was trying to make is that readability suffers from the
> approach taken by patches 18/19 and 19/19.

I understood that point.

>
> The base version contains the extra text of a function header for each
> unit test.  This is visual noise and makes the file larger.  It is also
> one more possible location of an error (although not likely).

I don't see how it is much more visual noise than a comment.
Admittedly, a space versus an underscore might be nice, but I think a
function name is also more likely to be kept up to date than a comment,
even if they are both informational. It also
forces the user to actually name all the tests. Even then, I am not
married to doing it this exact way. The thing I really care about is
isolating the code in each test case so that it can be executed
separately.

A side thought: when I was proofreading this, it occurred to me that
you might not like a function name over a comment partly because you
think about them differently. You aren't used to seeing a function
used to frame things or communicate information in this way. Is that
true? Admittedly, I have gotten used to a lot of unit test frameworks
that break up test cases by function, so I wonder whether part of the
difference in comfort with this framing comes from the fact that I am
really used to seeing it this way and you are not. If this is the
case, maybe it would be better if we had something like:

KUNIT_DECLARE_CASE(case_id, "Test case description")
{
        KUNIT_EXPECT_EQ(kunit, ...);
        ...
}

Just a thought.

>
> The frank version has converted each of the new function headers into
> a comment, using the function name with '_' converted to ' '.  The
> comments are more readable than the function headers.  Note that I added
> an extra blank line before each comment, which violates the kernel
> coding standards, but I feel this makes the code more readable.

I agree that the extra space is an improvement, but I think any
sufficient visual separation would work.

>
> The base version needs to declare each of the individual test functions
> in of_test_find_node_by_name_cases[]. It is possible that a test function
> could be left out of of_test_find_node_by_name_cases[], in error.  This
> will result in a compile warning (I think warning instead of error, but
> I have not verified that) so the error might be caught or it might be
> overlooked.

It's a warning, but that can be fixed.

>
> In the base version, following the order of execution of the test code
> requires bouncing back and forth between the test functions and the
> coding of of_test_find_node_by_name_cases[].

You shouldn't need to bounce back and forth because the order in which
the tests run shouldn't matter.

>
> In the frank version the order of execution of the test code is obvious.

So I know we were arguing before over whether order *does* matter in
some of the other test cases (none in the example that you or I
posted), but wouldn't it be better if the order of execution didn't
matter? If you don't allow a user to depend on the execution of test
cases, then arguably these test case dependencies would never form and
the order wouldn't matter.
>
> The base version is 265 lines.  The frank version is 208 lines, 57 lines
> less.  Less is better.

I agree that less is better, but there are different kinds of less to
consider. I prefer less logic in a function to fewer lines overall.

It seems we are in agreement that test cases should be small and
simple, so I won't dwell on that point any longer. I agree that the
test cases themselves when taken in isolation in base version and
frank version are equally simple (obviously, they are the same).

If I am correct, we are only debating whether it is best to put each
test case in its own function or not. That being said, I honestly
still think my version (base version) is easier to understand. The
reason I think mine is easier to read is entirely because of the code
isolation provided by each test case running in its own function. I
can look at a test case by itself and know that it doesn't depend on
anything that happened in a preceding test case. It is true that I
have to look in different places in the file, but I think that is more
than made up for by the fact that in order to understand a test case,
I only have to look at two functions: init, and the test case itself
(well, also exit if you care about how things are cleaned up). I don't
have to look through every single test case that precedes it.

It might not be immediately obvious what isolation my version provides
over your version at first glance, and that is exactly the point. We
know that they are the same because you pulled the test cases out of
my version, but what about the other test suite in 19/19,
of_test_dynamic? If you notice, I did not just break each test case by
wrapping it in a function; that didn't work because there was a
dependency between some of the test cases. I removed that dependency,
so that each test case is actually isolated:

## ============== single function (18/19) version ===========
static void of_unittest_dynamic(struct kunit *test)
{
        struct device_node *np;
        struct property *prop;

        np = of_find_node_by_path("/testcase-data");
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);

        /* Array of 4 properties for the purpose of testing */
        prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);

        /* Add a new property - should pass*/
        prop->name = "new-property";
        prop->value = "new-property-data";
        prop->length = strlen(prop->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
                            "Adding a new property failed\n");

        /* Try to add an existing property - should fail */
        prop++;
        prop->name = "new-property";
        prop->value = "new-property-data-should-fail";
        prop->length = strlen(prop->value) + 1;
        KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
                            "Adding an existing property should have failed\n");

        /* Try to modify an existing property - should pass */
        prop->value = "modify-property-data-should-pass";
        prop->length = strlen(prop->value) + 1;
        KUNIT_EXPECT_EQ_MSG(
                test, of_update_property(np, prop), 0,
                "Updating an existing property should have passed\n");

        /* Try to modify non-existent property - should pass*/
        prop++;
        prop->name = "modify-property";
        prop->value = "modify-missing-property-data-should-pass";
        prop->length = strlen(prop->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
                            "Updating a missing property should have passed\n");

        /* Remove property - should pass */
        KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
                            "Removing a property should have passed\n");

        /* Adding very large property - should pass */
        prop++;
        prop->name = "large-property-PAGE_SIZEx8";
        prop->length = PAGE_SIZE * 8;
        prop->value = kzalloc(prop->length, GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
                            "Adding a large property should have passed\n");
}

## ============== multi function (19/19) version ===========
struct of_test_dynamic_context {
        struct device_node *np;
        struct property *prop0;
        struct property *prop1;
};

static void of_test_dynamic_basic(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;
        struct property *prop0 = ctx->prop0;

        /* Add a new property - should pass*/
        prop0->name = "new-property";
        prop0->value = "new-property-data";
        prop0->length = strlen(prop0->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
                            "Adding a new property failed\n");

        /* Test that we can remove a property */
        KUNIT_EXPECT_EQ(test, of_remove_property(np, prop0), 0);
}

static void of_test_dynamic_add_existing_property(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;
        struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;

        /* Add a new property - should pass*/
        prop0->name = "new-property";
        prop0->value = "new-property-data";
        prop0->length = strlen(prop0->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
                            "Adding a new property failed\n");

        /* Try to add an existing property - should fail */
        prop1->name = "new-property";
        prop1->value = "new-property-data-should-fail";
        prop1->length = strlen(prop1->value) + 1;
        KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop1), 0,
                            "Adding an existing property should have failed\n");
}

static void of_test_dynamic_modify_existing_property(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;
        struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;

        /* Add a new property - should pass*/
        prop0->name = "new-property";
        prop0->value = "new-property-data";
        prop0->length = strlen(prop0->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
                            "Adding a new property failed\n");

        /* Try to modify an existing property - should pass */
        prop1->name = "new-property";
        prop1->value = "modify-property-data-should-pass";
        prop1->length = strlen(prop1->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop1), 0,
                            "Updating an existing property should have passed\n");
}

static void of_test_dynamic_modify_non_existent_property(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;
        struct property *prop0 = ctx->prop0;

        /* Try to modify non-existent property - should pass*/
        prop0->name = "modify-property";
        prop0->value = "modify-missing-property-data-should-pass";
        prop0->length = strlen(prop0->value) + 1;
        KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop0), 0,
                            "Updating a missing property should have passed\n");
}

static void of_test_dynamic_large_property(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;
        struct property *prop0 = ctx->prop0;

        /* Adding very large property - should pass */
        prop0->name = "large-property-PAGE_SIZEx8";
        prop0->length = PAGE_SIZE * 8;
        prop0->value = kunit_kzalloc(test, prop0->length, GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop0->value);

        KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
                            "Adding a large property should have passed\n");
}

static int of_test_dynamic_init(struct kunit *test)
{
        struct of_test_dynamic_context *ctx;

        KUNIT_ASSERT_EQ(test, 0, unittest_data_add());

        if (!of_aliases)
                of_aliases = of_find_node_by_path("/aliases");

        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
                        "/testcase-data/phandle-tests/consumer-a"));

        ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
        test->priv = ctx;

        ctx->np = of_find_node_by_path("/testcase-data");
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->np);

        ctx->prop0 = kunit_kzalloc(test, sizeof(*ctx->prop0), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop0);

        ctx->prop1 = kunit_kzalloc(test, sizeof(*ctx->prop1), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop1);

        return 0;
}

static void of_test_dynamic_exit(struct kunit *test)
{
        struct of_test_dynamic_context *ctx = test->priv;
        struct device_node *np = ctx->np;

        of_remove_property(np, ctx->prop0);
        of_remove_property(np, ctx->prop1);
        of_node_put(np);
}

static struct kunit_case of_test_dynamic_cases[] = {
        KUNIT_CASE(of_test_dynamic_basic),
        KUNIT_CASE(of_test_dynamic_add_existing_property),
        KUNIT_CASE(of_test_dynamic_modify_existing_property),
        KUNIT_CASE(of_test_dynamic_modify_non_existent_property),
        KUNIT_CASE(of_test_dynamic_large_property),
        {},
};

static struct kunit_module of_test_dynamic_module = {
        .name = "of-dynamic-test",
        .init = of_test_dynamic_init,
        .exit = of_test_dynamic_exit,
        .test_cases = of_test_dynamic_cases,
};
module_test(of_test_dynamic_module);

Compare the test cases of_test_dynamic_basic,
of_test_dynamic_add_existing_property,
of_test_dynamic_modify_existing_property, and
of_test_dynamic_modify_non_existent_property to the originals. My
version is much longer overall, but I think it is still much easier to
understand. I can say from when I was trying to split this up in the
first place that it was not obvious which properties were expected to
be populated as a precondition for a given test case (except the first
one, of course). Whereas, in my version, it is immediately obvious what
the preconditions are for a test case. I think you can apply this same
logic to the examples you provided: in the frank version, I don't
immediately know whether one test case does something that is a
precondition for another test case.

My version also makes it easier to run a test case entirely by itself,
which is really valuable for debugging purposes. A common thing that
happens when you have lots of unit tests is that something breaks and
lots of tests fail. If the test cases are good, there should be just a
couple (ideally one) that directly assert the violated property; those
are the test cases you actually want to focus on, and the rest are
noise for the purposes of that breakage. In my version, it is much
easier to turn off the test cases that you don't care about and then
focus on the ones that exercise the violated property.

Now I know that some of these features, hermeticity especially, but
others as well (test suite summary, error on unused test case
functions, etc.), are not actually in KUnit as it is under
consideration here. Maybe it would be best to save these last two
patches (18/19 and 19/19) until I have those other features checked in
and reconsider them then?

>
> ## ==========  base version  ====================================
>
> // SPDX-License-Identifier: GPL-2.0
> /*
>  * Unit tests for functions defined in base.c.
>  */
> #include <linux/of.h>
>
> #include <kunit/test.h>
>
> #include "test-common.h"
>
> static void of_test_find_node_by_name_basic(struct kunit *test)
> {
>         struct device_node *np;
>         const char *name;
>
>         np = of_find_node_by_path("/testcase-data");
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
>                                "find /testcase-data failed\n");
>         of_node_put(np);
>         kfree(name);
> }
>
> static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
> {
>         /* Test if trailing '/' works */
>         KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
>                             "trailing '/' on /testcase-data/ should fail\n");
>
> }
>
> static void of_test_find_node_by_name_multiple_components(struct kunit *test)
> {
>         struct device_node *np;
>         const char *name;
>
>         np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(
>                 test, "/testcase-data/phandle-tests/consumer-a", name,
>                 "find /testcase-data/phandle-tests/consumer-a failed\n");
>         of_node_put(np);
>         kfree(name);
> }
>
> static void of_test_find_node_by_name_with_alias(struct kunit *test)
> {
>         struct device_node *np;
>         const char *name;
>
>         np = of_find_node_by_path("testcase-alias");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
>                                "find testcase-alias failed\n");
>         of_node_put(np);
>         kfree(name);
> }
>
> static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
> {
>         /* Test if trailing '/' works on aliases */
>         KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
>                            "trailing '/' on testcase-alias/ should fail\n");
> }
>
> /*
>  * TODO(brendanhiggins@google.com): This looks like a duplicate of
>  * of_test_find_node_by_name_multiple_components
>  */
> static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
> {
>         struct device_node *np;
>         const char *name;
>
>         np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(
>                 test, "/testcase-data/phandle-tests/consumer-a", name,
>                 "find testcase-alias/phandle-tests/consumer-a failed\n");
>         of_node_put(np);
>         kfree(name);
> }
>
> static void of_test_find_node_by_name_missing_path(struct kunit *test)
> {
>         struct device_node *np;
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
>                 "non-existent path returned node %pOF\n", np);
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_missing_alias(struct kunit *test)
> {
>         struct device_node *np;
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test, np = of_find_node_by_path("missing-alias"), NULL,
>                 "non-existent alias returned node %pOF\n", np);
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_missing_alias_with_relative_path(
>                 struct kunit *test)
> {
>         struct device_node *np;
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
>                 "non-existent alias with relative path returned node %pOF\n",
>                 np);
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_option(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
>                                "option path test failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>                                "option path test, subcase #1 failed\n");
>         of_node_put(np);
>
>         np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>                                "option path test, subcase #2 failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_null_option(struct kunit *test)
> {
>         struct device_node *np;
>
>         np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
>         KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
>                                          "NULL option path test failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
>                                        &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
>                                "option alias path test failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_option_alias_and_slash(
>                 struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
>                                        &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
>                                "option alias path test, subcase #1 failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
> {
>         struct device_node *np;
>
>         np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
>         KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
>                         test, np, "NULL option alias path test failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_option_clearing(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         options = "testoption";
>         np = of_find_node_opts_by_path("testcase-alias", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>                             "option clearing test failed\n");
>         of_node_put(np);
> }
>
> static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options;
>
>         options = "testoption";
>         np = of_find_node_opts_by_path("/", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>                             "option clearing root node test failed\n");
>         of_node_put(np);
> }
>
> static int of_test_find_node_by_name_init(struct kunit *test)
> {
>         /* adding data for unittest */
>         KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
>
>         if (!of_aliases)
>                 of_aliases = of_find_node_by_path("/aliases");
>
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
>                         "/testcase-data/phandle-tests/consumer-a"));
>
>         return 0;
> }
>
> static struct kunit_case of_test_find_node_by_name_cases[] = {
>         KUNIT_CASE(of_test_find_node_by_name_basic),
>         KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
>         KUNIT_CASE(of_test_find_node_by_name_multiple_components),
>         KUNIT_CASE(of_test_find_node_by_name_with_alias),
>         KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
>         KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
>         KUNIT_CASE(of_test_find_node_by_name_missing_path),
>         KUNIT_CASE(of_test_find_node_by_name_missing_alias),
>         KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
>         KUNIT_CASE(of_test_find_node_by_name_with_option),
>         KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
>         KUNIT_CASE(of_test_find_node_by_name_with_null_option),
>         KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
>         KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
>         KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
>         KUNIT_CASE(of_test_find_node_by_name_option_clearing),
>         KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
>         {},
> };
>
> static struct kunit_module of_test_find_node_by_name_module = {
>         .name = "of-test-find-node-by-name",
>         .init = of_test_find_node_by_name_init,
>         .test_cases = of_test_find_node_by_name_cases,
> };
> module_test(of_test_find_node_by_name_module);
>
>
> ## ==========  frank version  ===================================
>
>         // SPDX-License-Identifier: GPL-2.0
> /*
>  * Unit tests for functions defined in base.c.
>  */
> #include <linux/of.h>
>
> #include <kunit/test.h>
>
> #include "test-common.h"
>
> static void of_unittest_find_node_by_name(struct kunit *test)
> {
>         struct device_node *np;
>         const char *options, *name;
>
>
>         // find node by name basic
>
>         np = of_find_node_by_path("/testcase-data");
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
>                                "find /testcase-data failed\n");
>         of_node_put(np);
>         kfree(name);
>
>
>         // find node by name trailing slash
>
>         /* Test if trailing '/' works */
>         KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
>                             "trailing '/' on /testcase-data/ should fail\n");
>
>
>         // find node by name multiple components
>
>         np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(
>                 test, "/testcase-data/phandle-tests/consumer-a", name,
>                 "find /testcase-data/phandle-tests/consumer-a failed\n");
>         of_node_put(np);
>         kfree(name);
>
>
>         // find node by name with alias
>
>         np = of_find_node_by_path("testcase-alias");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
>                                "find testcase-alias failed\n");
>         of_node_put(np);
>         kfree(name);
>
>
>         // find node by name with alias and slash
>
>         /* Test if trailing '/' works on aliases */
>         KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
>                             "trailing '/' on testcase-alias/ should fail\n");
>
>
>         // find node by name multiple components 2
>
>         np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>         KUNIT_EXPECT_STREQ_MSG(
>                 test, "/testcase-data/phandle-tests/consumer-a", name,
>                 "find testcase-alias/phandle-tests/consumer-a failed\n");
>         of_node_put(np);
>         kfree(name);
>
>
>         // find node by name missing path
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
>                 "non-existent path returned node %pOF\n", np);
>         of_node_put(np);
>
>
>         // find node by name missing alias
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test, np = of_find_node_by_path("missing-alias"), NULL,
>                 "non-existent alias returned node %pOF\n", np);
>         of_node_put(np);
>
>
>         //  find node by name missing alias with relative path
>
>         KUNIT_EXPECT_EQ_MSG(
>                 test,
>                 np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
>                 "non-existent alias with relative path returned node %pOF\n",
>                 np);
>         of_node_put(np);
>
>
>         // find node by name with option
>
>         np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
>                                "option path test failed\n");
>         of_node_put(np);
>
>
>         // find node by name with option and slash
>
>         np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>                                "option path test, subcase #1 failed\n");
>         of_node_put(np);
>
>         np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>                                "option path test, subcase #2 failed\n");
>         of_node_put(np);
>
>
>         // find node by name with null option
>
>         np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
>         KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
>                                          "NULL option path test failed\n");
>         of_node_put(np);
>
>
>         // find node by name with option alias
>
>         np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
>                                        &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
>                                "option alias path test failed\n");
>         of_node_put(np);
>
>
>         // find node by name with option alias and slash
>
>         np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
>                                        &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
>                                "option alias path test, subcase #1 failed\n");
>         of_node_put(np);
>
>
>         // find node by name with null option alias
>
>         np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
>         KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
>                         test, np, "NULL option alias path test failed\n");
>         of_node_put(np);
>
>
>         // find node by name option clearing
>
>         options = "testoption";
>         np = of_find_node_opts_by_path("testcase-alias", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>                             "option clearing test failed\n");
>         of_node_put(np);
>
>
>         // find node by name option clearing root
>
>         options = "testoption";
>         np = of_find_node_opts_by_path("/", &options);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>         KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>                             "option clearing root node test failed\n");
>         of_node_put(np);
> }
>
> static int of_test_init(struct kunit *test)
> {
>         /* adding data for unittest */
>         KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
>
>         if (!of_aliases)
>                 of_aliases = of_find_node_by_path("/aliases");
>
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
>                         "/testcase-data/phandle-tests/consumer-a"));
>
>         return 0;
> }
>
> static struct kunit_case of_test_cases[] = {
>         KUNIT_CASE(of_unittest_find_node_by_name),
>         {},
> };
>
> static struct kunit_module of_test_module = {
>         .name = "of-base-test",
>         .init = of_test_init,
>         .test_cases = of_test_cases,
> };
> module_test(of_test_module);
>
>
> >
> >
> >>> be cases where the devicetree unittests are currently not well grouped
> >>> and may benefit from change, but if so that should be handled independently
> >>> of any transformation into a KUnit framework.
> >>
> >> I agree. I did this because I wanted to illustrate what I thought real
> >> world KUnit unit tests should look like (I also wanted to be able to
> >> show off KUnit test features that help you write these kinds of
> >> tests); I was not necessarily intending that all the of: unittest
> >> patches would get merged in with the whole RFC. I was mostly trying to
> >> create cause for discussion (which it seems like I succeeded at ;-) ).
> >>
> >> So fair enough, I will propose these patches separately and later
> >> (except of course this one that splits up the file). Do you want the
> >> initial transformation to the KUnit framework in the main KUnit
> >> patchset, or do you want that to be done separately? If I recall, Rob
> >> suggested this as a good initial example that other people could refer
> >> to, and some people seemed to think that I needed one to help guide
> >> the discussion and provide direction for early users. I don't
> >> necessarily think that means the initial real world example needs to
> >> be a part of the initial patchset though.

I really appreciate you taking the time to discuss these difficult points :-)

If the way I want to express test cases here is really that difficult
to read, then it means that I have some work to do to make it better,
because I plan on constructing other test cases in a very similar way.
So, if you think that these test cases have real readability issues,
then there is something I need to improve, either in the framework or
in the documentation.

So if you would rather discuss these patches later, once I have added
those features that would make the notion of hermeticity stronger, or
would make summaries better, or anything else I mentioned, that's fine
with me; but if you think there is something fundamentally wrong with
my approach, I would rather figure out the right way to handle it
sooner rather than later.

Looking forward to hearing what you think!

Cheers
Frank Rowand March 22, 2019, 12:22 a.m. UTC | #11
On 2/27/19 7:52 PM, Brendan Higgins wrote:
> On Wed, Feb 20, 2019 at 12:45 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>
>> On 2/18/19 2:25 PM, Frank Rowand wrote:
>>> On 2/15/19 2:56 AM, Brendan Higgins wrote:
>>>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>
>>>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>>>
>>>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>>>>>
>>
>> < snip >
>>
>>>
>>> It makes it harder for me to read the source of the tests and
>>> understand the order they will execute.  It also makes it harder
>>> for me to read through the actual tests (in this example the
>>> tests that are currently grouped in of_unittest_find_node_by_name())
>>> because of all the extra function headers injected into the
>>> existing single function to break it apart into many smaller
>>> functions.
>>
>> < snip >
>>
>>>>>> This is not something I feel particularly strongly about, it is just
>>>>>> pretty atypical from my experience to have so many unrelated test
>>>>>> cases in a single file.
>>>>>>
>>>>>> Maybe you would prefer that I break up the test cases first, and then
>>>>>> we split up the file as appropriate?
>>>>>
>>>>> I prefer that the test cases not be broken up arbitrarily.  There _may_
>>
>> I expect that I created confusion by putting this in a reply to patch 18/19.
>> It is actually a comment about patch 19/19.  Sorry about that.
>>
> 
> No worries.
> 
>>
>>>>
>>>> I wasn't trying to break them up arbitrarily. I thought I was doing it
>>>> according to a pattern (breaking up the file, that is), but maybe I
>>>> just hadn't looked at enough examples.
>>>
>>> This goes back to the kunit model of putting each test into a separate
>>> function that can be a KUNIT_CASE().  That is a model that I do not agree
>>> with for devicetree.
>>
>> So now that I am actually talking about patch 19/19, let me give a concrete
>> example.  I will cut and paste (after my comments), the beginning portion
>> of base-test.c, after applying patch 19/19 (the "base version".  Then I
>> will cut and paste my alternative version which does not break the tests
>> down into individual functions (the "frank version").
> 
> Awesome, thanks for putting the comparison together!
> 
>>
>> I will also reply to this email with the base version and the frank version
>> as attachments, which will make it easier to save as separate versions
>> for easier viewing.  I'm not sure if an email with attachments will make
>> it through the list servers, but I am cautiously optimistic.
>>
>> I am using v4 of the patch series because I never got v3 to cleanly apply
>> and it is not a constructive use of my time to do so since I have v4 applied.
>>
>> One of the points I was trying to make is that readability suffers from the
>> approach taken by patches 18/19 and 19/19.
> 
> I understood that point.
> 
>>
>> The base version contains the extra text of a function header for each
>> unit test.  This is visual noise and makes the file larger.  It is also
>> one more possible location of an error (although not likely).
> 
> I don't see how it is much more visual noise than a comment.
> Admittedly, a space versus an underscore might be nice, but I also
> think that a function name is more likely to be kept up
> to date than a comment, even if they are both informational. It also
> forces the user to actually name all the tests. Even then, I am not
> married to doing it this exact way. The thing I really care about is
> isolating the code in each test case so that it can be executed
> separately.
> 
> A side thought, when I was proofreading this, it occurred to me that
> you might not like the function name over comment partly because you
> think about them differently. You aren't used to seeing a function
> used to frame things or communicate information in this way. Is this

No.  It is more visual clutter and it is more functional clutter that
potentially has to be validated.


> true? Admittedly, I have gotten used to a lot of unit test frameworks
> that break up test cases by function, so I was wondering if part of the
> difference in comfortability with this framing might come from the
> fact that I am really used to seeing it this way and you are not? If
> this is the case, maybe it would be better if we had something like:
> 
> KUNIT_DECLARE_CASE(case_id, "Test case description")
> {
>         KUNIT_EXPECT_EQ(kunit, ...);
>         ...
> }
> 
> Just a thought.
> 
>>
>> The frank version has converted each of the new function headers into
>> a comment, using the function name with '_' converted to ' '.  The
>> comments are more readable than the function headers.  Note that I added
>> an extra blank line before each comment, which violates the kernel
>> coding standards, but I feel this makes the code more readable.
> 
> I agree that the extra space is an improvement, but I think any
> sufficient visual separation would work.
> 
>>
>> The base version needs to declare each of the individual test functions
>> in of_test_find_node_by_name_cases[]. It is possible that a test function
>> could be left out of of_test_find_node_by_name_cases[], in error.  This
>> will result in a compile warning (I think warning instead of error, but
>> I have not verified that) so the error might be caught or it might be
>> overlooked.
> 
> It's a warning, but that can be fixed.
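For reference, a quick userspace sketch of the warning behavior under discussion (the file and function names here are made up for illustration; a kernel build would use its own warning flags): a static test function accidentally left out of the case table is only a warning by default, but `-Werror=unused-function` promotes it to a hard error.

```shell
# Create a file with a static function that is never referenced,
# mimicking a test case left out of the KUNIT_CASE() table.
cat > leftout.c <<'EOF'
static void forgotten_test_case(void) { }
int main(void) { return 0; }
EOF

# Default: the object file is produced, with only a warning.
gcc -Wall -c leftout.c

# Promote just this warning to an error so the omission cannot be missed.
gcc -Wall -Werror=unused-function -c leftout.c || echo "build failed"
```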
> 
>>
>> In the base version, the order of execution of the test code requires
>> bouncing back and forth between the test functions and the coding of
>> of_test_find_node_by_name_cases[].
> 
> You shouldn't need to bounce back and forth because the order in which
> the tests run shouldn't matter.

If one can't guarantee total independence of all of the tests, with no
side effects, then yes.  But that is not my world.  To make that
guarantee, I would need to be able to run just a single test in an
entire test run.

I actually want to make side effects possible.  Whether from other
tests or from live kernel code that is accessing the live devicetree.
Any extra stress makes me happier.

I forget the exact term that has been tossed around, but to me the
devicetree unittests are more like system validation, release tests,
acceptance tests, and stress tests.  Not unit tests in the philosophy
of KUnit.

I do see the value of pure unit tests, and there are rare times that
my devicetree use case might be better served by that approach.  But
if so, it is very easy for me to add a simple pure test when debugging.
My general use case does not map onto this model.


>>
>> In the frank version the order of execution of the test code is obvious.
> 
> So I know we were arguing before over whether order *does* matter in
> some of the other test cases (none in the example that you or I
> posted), but wouldn't it be better if the order of execution didn't
> matter? If you don't allow a user to depend on the execution order of
> test cases, then arguably these test case dependencies would never form and
> the order wouldn't matter.

Reality intrudes.  Order does matter.


>>
>> The base version is 265 lines.  The frank version is 208 lines, 57 lines
>> less.  Less is better.
> 
> I agree that less is better, but there are different kinds of less to
> consider. I prefer less logic in a function to fewer lines overall.
> 
> It seems we are in agreement that test cases should be small and
> simple, so I won't dwell on that point any longer. I agree that the

As a general guide for simple unit tests, sure.

For my case, no.  Reality intrudes.

KUnit has a nice architectural view of what a unit test should be.

The existing devicetree "unittests" are not such unit tests.  They
simply share the same name.

The devicetree unittests do not fit into a clean:
  - initialize
  - do one test
  - clean up
model.

Trying to force them into that model will not work.  The initialize
is not a simple, easy to decompose thing.  And trying to decompose
it can actually make the code more complex and messier.

Clean up can NOT occur, because part of my test validation is looking
at the state of the device tree after the tests complete, viewed
through the /proc/device-tree/ interface.


> test cases themselves when taken in isolation in base version and
> frank version are equally simple (obviously, they are the same).
> 
> If I am correct, we are only debating whether it is best to put each
> test case in its own function or not. That being said, I honestly
> still think my version (base version) is easier to understand. The
> reason I think mine is easier to read is entirely because of the code
> isolation provided by each test case running in its own function. I
> can look at a test case by itself and know that it doesn't depend on
> anything that happened in a preceding test case. It is true that I
> have to look in different places in the file, but I think that is more
> than made up for by the fact that in order to understand a test case,
> I only have to look at two functions: init, and the test case itself
> (well, also exit if you care about how things are cleaned up). I don't
> have to look through every single test case that precedes it.
> 
> It might not be immediately obvious what isolation my version provides
> over your version at first glance, and that is exactly the point. We
> know that they are the same because you pulled the test cases out of
> my version, but what about the other test suite in 19/19,
> of_test_dynamic? If you notice, I did not just break each test case out
> by wrapping it in a function; that didn't work because there was a
> dependency between some of the test cases. I removed that dependency,
> so that each test case is actually isolated:
> 
> ## ============== single function (18/19) version ===========
> static void of_unittest_dynamic(struct kunit *test)
> {
>         struct device_node *np;
>         struct property *prop;
> 
>         np = of_find_node_by_path("/testcase-data");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
> 
>         /* Array of 4 properties for the purpose of testing */
>         prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
> 
>         /* Add a new property - should pass*/
>         prop->name = "new-property";
>         prop->value = "new-property-data";
>         prop->length = strlen(prop->value) + 1;
>         KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
>                             "Adding a new property failed\n");
> 
>         /* Try to add an existing property - should fail */
>         prop++;
>         prop->name = "new-property";
>         prop->value = "new-property-data-should-fail";
>         prop->length = strlen(prop->value) + 1;
>         KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
>                             "Adding an existing property should have failed\n");
> 
>         /* Try to modify an existing property - should pass */
>         prop->value = "modify-property-data-should-pass";
>         prop->length = strlen(prop->value) + 1;
>         KUNIT_EXPECT_EQ_MSG(
>                 test, of_update_property(np, prop), 0,
>                 "Updating an existing property should have passed\n");
> 
>         /* Try to modify non-existent property - should pass*/
>         prop++;
>         prop->name = "modify-property";
>         prop->value = "modify-missing-property-data-should-pass";
>         prop->length = strlen(prop->value) + 1;
>         KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
>                             "Updating a missing property should have passed\n");
> 
>         /* Remove property - should pass */
>         KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
>                             "Removing a property should have passed\n");
> 
>         /* Adding very large property - should pass */
>         prop++;
>         prop->name = "large-property-PAGE_SIZEx8";
>         prop->length = PAGE_SIZE * 8;
>         prop->value = kzalloc(prop->length, GFP_KERNEL);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
>         KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
>                             "Adding a large property should have passed\n");
> }
> 
> ## ============== multi function (19/19) version ===========
> struct of_test_dynamic_context {
>         struct device_node *np;
>         struct property *prop0;
>         struct property *prop1;
> };
> 
> static void of_test_dynamic_basic(struct kunit *test)
> {
>         struct of_test_dynamic_context *ctx = test->priv;
>         struct device_node *np = ctx->np;
>         struct property *prop0 = ctx->prop0;
> 
>         /* Add a new property - should pass*/
>         prop0->name = "new-property";
>         prop0->value = "new-property-data";
>         prop0->length = strlen(prop0->value) + 1;
>         KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
>                             "Adding a new property failed\n");
> 
>         /* Test that we can remove a property */
>         KUNIT_EXPECT_EQ(test, of_remove_property(np, prop0), 0);
> }
> 
> static void of_test_dynamic_add_existing_property(struct kunit *test)
> {
>         struct of_test_dynamic_context *ctx = test->priv;
>         struct device_node *np = ctx->np;
>         struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;
> 
>         /* Add a new property - should pass*/
>         prop0->name = "new-property";
>         prop0->value = "new-property-data";
>         prop0->length = strlen(prop0->value) + 1;
>         KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
>                             "Adding a new property failed\n");
> 
>         /* Try to add an existing property - should fail */
>         prop1->name = "new-property";
>         prop1->value = "new-property-data-should-fail";
>         prop1->length = strlen(prop1->value) + 1;
>         KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop1), 0,
>                             "Adding an existing property should have failed\n");
> }
> 
> static void of_test_dynamic_modify_existing_property(struct kunit *test)
> {
>         struct of_test_dynamic_context *ctx = test->priv;
>         struct device_node *np = ctx->np;
>         struct property *prop0 = ctx->prop0, *prop1 = ctx->prop1;
> 
>         /* Add a new property - should pass*/
>         prop0->name = "new-property";
>         prop0->value = "new-property-data";
>         prop0->length = strlen(prop0->value) + 1;
>         KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
>                             "Adding a new property failed\n");
> 
>         /* Try to modify an existing property - should pass */
>         prop1->name = "new-property";
>         prop1->value = "modify-property-data-should-pass";
>         prop1->length = strlen(prop1->value) + 1;
>         KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop1), 0,
>                             "Updating an existing property should have passed\n");
> }
> 
> static void of_test_dynamic_modify_non_existent_property(struct kunit *test)
> {
>         struct of_test_dynamic_context *ctx = test->priv;
>         struct device_node *np = ctx->np;
>         struct property *prop0 = ctx->prop0;
> 
>         /* Try to modify non-existent property - should pass*/
>         prop0->name = "modify-property";
>         prop0->value = "modify-missing-property-data-should-pass";
>         prop0->length = strlen(prop0->value) + 1;
>         KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop0), 0,
>                             "Updating a missing property should have passed\n");
> }
> 
> static void of_test_dynamic_large_property(struct kunit *test)
> {
>         struct of_test_dynamic_context *ctx = test->priv;
>         struct device_node *np = ctx->np;
>         struct property *prop0 = ctx->prop0;
> 
>         /* Adding very large property - should pass */
>         prop0->name = "large-property-PAGE_SIZEx8";
>         prop0->length = PAGE_SIZE * 8;
>         prop0->value = kunit_kzalloc(test, prop0->length, GFP_KERNEL);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop0->value);
> 
>         KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop0), 0,
>                             "Adding a large property should have passed\n");
> }
> 
> static int of_test_dynamic_init(struct kunit *test)
> {
>         struct of_test_dynamic_context *ctx;
> 
>         KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
> 
>         if (!of_aliases)
>                 of_aliases = of_find_node_by_path("/aliases");
> 
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
>                         "/testcase-data/phandle-tests/consumer-a"));
> 
>         ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
>         test->priv = ctx;
> 
>         ctx->np = of_find_node_by_path("/testcase-data");
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->np);
> 
>         ctx->prop0 = kunit_kzalloc(test, sizeof(*ctx->prop0), GFP_KERNEL);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop0);
> 
>         ctx->prop1 = kunit_kzalloc(test, sizeof(*ctx->prop1), GFP_KERNEL);
>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->prop1);
> 
>         return 0;
> }
> 
> static void of_test_dynamic_exit(struct kunit *test)
> {
>         struct of_test_dynamic_context *ctx = test->priv;
>         struct device_node *np = ctx->np;
> 
>         of_remove_property(np, ctx->prop0);
>         of_remove_property(np, ctx->prop1);
>         of_node_put(np);
> }
> 
> static struct kunit_case of_test_dynamic_cases[] = {
>         KUNIT_CASE(of_test_dynamic_basic),
>         KUNIT_CASE(of_test_dynamic_add_existing_property),
>         KUNIT_CASE(of_test_dynamic_modify_existing_property),
>         KUNIT_CASE(of_test_dynamic_modify_non_existent_property),
>         KUNIT_CASE(of_test_dynamic_large_property),
>         {},
> };
> 
> static struct kunit_module of_test_dynamic_module = {
>         .name = "of-dynamic-test",
>         .init = of_test_dynamic_init,
>         .exit = of_test_dynamic_exit,
>         .test_cases = of_test_dynamic_cases,
> };
> module_test(of_test_dynamic_module);
> 
> Compare the test cases for adding of_test_dynamic_basic,
> of_test_dynamic_add_existing_property,
> of_test_dynamic_modify_existing_property, and
> of_test_dynamic_modify_non_existent_property to the originals. My
> version is much longer overall, but I think it is still much easier to
> understand. I can say, from when I was trying to split this up in the
> first place, that it was not obvious what properties were expected to be
> populated as a precondition for a given test case (except the first
> one of course). Whereas, in my version, it is immediately obvious what
> the preconditions are for a test case. I think you can apply this same
> logic to the examples you provided: in the frank version, I don't
> immediately know if one test case does something that is a
> precondition for another test case.

Yes, that is a real problem in the current code, but easily fixed
with comments.


> My version also makes it easier to run a test case entirely by itself
> which is really valuable for debugging purposes. A common thing that
> happens when you have lots of unit tests is something breaks and lots
> of tests fail. If the test cases are good, there should be just a
> couple (ideally one) test cases that directly assert the violated
> property; those are the test cases you actually want to focus on, the
> rest are noise for the purposes of that breakage. In my version, it is
> much easier to turn off the test cases that you don't care about and
> then focus in on the ones that exercise the violated property.
> 
> Now I know that some features, hermeticity especially, but others as
> well (test suite summary, error on unused test case function, etc.) are not
> actually in KUnit as it is under consideration here. Maybe it would be
> best to save these last two patches (18/19, and 19/19) until I have
> these other features checked in and reconsider them then?

Thanks for leaving 18/19 and 19/19 off in v4.

-Frank

> 
>>
>> ## ==========  base version  ====================================
>>
>> // SPDX-License-Identifier: GPL-2.0
>> /*
>>  * Unit tests for functions defined in base.c.
>>  */
>> #include <linux/of.h>
>>
>> #include <kunit/test.h>
>>
>> #include "test-common.h"
>>
>> static void of_test_find_node_by_name_basic(struct kunit *test)
>> {
>>         struct device_node *np;
>>         const char *name;
>>
>>         np = of_find_node_by_path("/testcase-data");
>>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
>>                                "find /testcase-data failed\n");
>>         of_node_put(np);
>>         kfree(name);
>> }
>>
>> static void of_test_find_node_by_name_trailing_slash(struct kunit *test)
>> {
>>         /* Test if trailing '/' works */
>>         KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
>>                             "trailing '/' on /testcase-data/ should fail\n");
>>
>> }
>>
>> static void of_test_find_node_by_name_multiple_components(struct kunit *test)
>> {
>>         struct device_node *np;
>>         const char *name;
>>
>>         np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>>         KUNIT_EXPECT_STREQ_MSG(
>>                 test, "/testcase-data/phandle-tests/consumer-a", name,
>>                 "find /testcase-data/phandle-tests/consumer-a failed\n");
>>         of_node_put(np);
>>         kfree(name);
>> }
>>
>> static void of_test_find_node_by_name_with_alias(struct kunit *test)
>> {
>>         struct device_node *np;
>>         const char *name;
>>
>>         np = of_find_node_by_path("testcase-alias");
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>>         KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
>>                                "find testcase-alias failed\n");
>>         of_node_put(np);
>>         kfree(name);
>> }
>>
>> static void of_test_find_node_by_name_with_alias_and_slash(struct kunit *test)
>> {
>>         /* Test if trailing '/' works on aliases */
>>         KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
>>                            "trailing '/' on testcase-alias/ should fail\n");
>> }
>>
>> /*
>>  * TODO(brendanhiggins@google.com): This looks like a duplicate of
>>  * of_test_find_node_by_name_multiple_components
>>  */
>> static void of_test_find_node_by_name_multiple_components_2(struct kunit *test)
>> {
>>         struct device_node *np;
>>         const char *name;
>>
>>         np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>>         KUNIT_EXPECT_STREQ_MSG(
>>                 test, "/testcase-data/phandle-tests/consumer-a", name,
>>                 "find testcase-alias/phandle-tests/consumer-a failed\n");
>>         of_node_put(np);
>>         kfree(name);
>> }
>>
>> static void of_test_find_node_by_name_missing_path(struct kunit *test)
>> {
>>         struct device_node *np;
>>
>>         KUNIT_EXPECT_EQ_MSG(
>>                 test,
>>                 np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
>>                 "non-existent path returned node %pOF\n", np);
>>         of_node_put(np);
>> }
>>
>> static void of_test_find_node_by_name_missing_alias(struct kunit *test)
>> {
>>         struct device_node *np;
>>
>>         KUNIT_EXPECT_EQ_MSG(
>>                 test, np = of_find_node_by_path("missing-alias"), NULL,
>>                 "non-existent alias returned node %pOF\n", np);
>>         of_node_put(np);
>> }
>>
>> static void of_test_find_node_by_name_missing_alias_with_relative_path(
>>                 struct kunit *test)
>> {
>>         struct device_node *np;
>>
>>         KUNIT_EXPECT_EQ_MSG(
>>                 test,
>>                 np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
>>                 "non-existent alias with relative path returned node %pOF\n",
>>                 np);
>>         of_node_put(np);
>> }
>>
>> static void of_test_find_node_by_name_with_option(struct kunit *test)
>> {
>>         struct device_node *np;
>>         const char *options;
>>
>>         np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
>>                                "option path test failed\n");
>>         of_node_put(np);
>> }
>>
>> static void of_test_find_node_by_name_with_option_and_slash(struct kunit *test)
>> {
>>         struct device_node *np;
>>         const char *options;
>>
>>         np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>>                                "option path test, subcase #1 failed\n");
>>         of_node_put(np);
>>
>>         np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>>                                "option path test, subcase #2 failed\n");
>>         of_node_put(np);
>> }
>>
>> static void of_test_find_node_by_name_with_null_option(struct kunit *test)
>> {
>>         struct device_node *np;
>>
>>         np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
>>         KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
>>                                          "NULL option path test failed\n");
>>         of_node_put(np);
>> }
>>
>> static void of_test_find_node_by_name_with_option_alias(struct kunit *test)
>> {
>>         struct device_node *np;
>>         const char *options;
>>
>>         np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
>>                                        &options);
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
>>                                "option alias path test failed\n");
>>         of_node_put(np);
>> }
>>
>> static void of_test_find_node_by_name_with_option_alias_and_slash(
>>                 struct kunit *test)
>> {
>>         struct device_node *np;
>>         const char *options;
>>
>>         np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
>>                                        &options);
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
>>                                "option alias path test, subcase #1 failed\n");
>>         of_node_put(np);
>> }
>>
>> static void of_test_find_node_by_name_with_null_option_alias(struct kunit *test)
>> {
>>         struct device_node *np;
>>
>>         np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
>>         KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
>>                         test, np, "NULL option alias path test failed\n");
>>         of_node_put(np);
>> }
>>
>> static void of_test_find_node_by_name_option_clearing(struct kunit *test)
>> {
>>         struct device_node *np;
>>         const char *options;
>>
>>         options = "testoption";
>>         np = of_find_node_opts_by_path("testcase-alias", &options);
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>>                             "option clearing test failed\n");
>>         of_node_put(np);
>> }
>>
>> static void of_test_find_node_by_name_option_clearing_root(struct kunit *test)
>> {
>>         struct device_node *np;
>>         const char *options;
>>
>>         options = "testoption";
>>         np = of_find_node_opts_by_path("/", &options);
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>>                             "option clearing root node test failed\n");
>>         of_node_put(np);
>> }
>>
>> static int of_test_find_node_by_name_init(struct kunit *test)
>> {
>>         /* adding data for unittest */
>>         KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
>>
>>         if (!of_aliases)
>>                 of_aliases = of_find_node_by_path("/aliases");
>>
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
>>                         "/testcase-data/phandle-tests/consumer-a"));
>>
>>         return 0;
>> }
>>
>> static struct kunit_case of_test_find_node_by_name_cases[] = {
>>         KUNIT_CASE(of_test_find_node_by_name_basic),
>>         KUNIT_CASE(of_test_find_node_by_name_trailing_slash),
>>         KUNIT_CASE(of_test_find_node_by_name_multiple_components),
>>         KUNIT_CASE(of_test_find_node_by_name_with_alias),
>>         KUNIT_CASE(of_test_find_node_by_name_with_alias_and_slash),
>>         KUNIT_CASE(of_test_find_node_by_name_multiple_components_2),
>>         KUNIT_CASE(of_test_find_node_by_name_missing_path),
>>         KUNIT_CASE(of_test_find_node_by_name_missing_alias),
>>         KUNIT_CASE(of_test_find_node_by_name_missing_alias_with_relative_path),
>>         KUNIT_CASE(of_test_find_node_by_name_with_option),
>>         KUNIT_CASE(of_test_find_node_by_name_with_option_and_slash),
>>         KUNIT_CASE(of_test_find_node_by_name_with_null_option),
>>         KUNIT_CASE(of_test_find_node_by_name_with_option_alias),
>>         KUNIT_CASE(of_test_find_node_by_name_with_option_alias_and_slash),
>>         KUNIT_CASE(of_test_find_node_by_name_with_null_option_alias),
>>         KUNIT_CASE(of_test_find_node_by_name_option_clearing),
>>         KUNIT_CASE(of_test_find_node_by_name_option_clearing_root),
>>         {},
>> };
>>
>> static struct kunit_module of_test_find_node_by_name_module = {
>>         .name = "of-test-find-node-by-name",
>>         .init = of_test_find_node_by_name_init,
>>         .test_cases = of_test_find_node_by_name_cases,
>> };
>> module_test(of_test_find_node_by_name_module);
>>
>>
>> ## ==========  frank version  ===================================
>>
>> // SPDX-License-Identifier: GPL-2.0
>> /*
>>  * Unit tests for functions defined in base.c.
>>  */
>> #include <linux/of.h>
>>
>> #include <kunit/test.h>
>>
>> #include "test-common.h"
>>
>> static void of_unittest_find_node_by_name(struct kunit *test)
>> {
>>         struct device_node *np;
>>         const char *options, *name;
>>
>>
>>         // find node by name basic
>>
>>         np = of_find_node_by_path("/testcase-data");
>>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
>>                                "find /testcase-data failed\n");
>>         of_node_put(np);
>>         kfree(name);
>>
>>
>>         // find node by name trailing slash
>>
>>         /* Test if trailing '/' works */
>>         KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
>>                             "trailing '/' on /testcase-data/ should fail\n");
>>
>>
>>         // find node by name multiple components
>>
>>         np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>>         KUNIT_EXPECT_STREQ_MSG(
>>                 test, "/testcase-data/phandle-tests/consumer-a", name,
>>                 "find /testcase-data/phandle-tests/consumer-a failed\n");
>>         of_node_put(np);
>>         kfree(name);
>>
>>
>>         // find node by name with alias
>>
>>         np = of_find_node_by_path("testcase-alias");
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>>         KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
>>                                "find testcase-alias failed\n");
>>         of_node_put(np);
>>         kfree(name);
>>
>>
>>         // find node by name with alias and slash
>>
>>         /* Test if trailing '/' works on aliases */
>>         KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
>>                             "trailing '/' on testcase-alias/ should fail\n");
>>
>>
>>         // find node by name multiple components 2
>>
>>         np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         name = kasprintf(GFP_KERNEL, "%pOF", np);
>>         KUNIT_EXPECT_STREQ_MSG(
>>                 test, "/testcase-data/phandle-tests/consumer-a", name,
>>                 "find testcase-alias/phandle-tests/consumer-a failed\n");
>>         of_node_put(np);
>>         kfree(name);
>>
>>
>>         // find node by name missing path
>>
>>         KUNIT_EXPECT_EQ_MSG(
>>                 test,
>>                 np = of_find_node_by_path("/testcase-data/missing-path"), NULL,
>>                 "non-existent path returned node %pOF\n", np);
>>         of_node_put(np);
>>
>>
>>         // find node by name missing alias
>>
>>         KUNIT_EXPECT_EQ_MSG(
>>                 test, np = of_find_node_by_path("missing-alias"), NULL,
>>                 "non-existent alias returned node %pOF\n", np);
>>         of_node_put(np);
>>
>>
>>         //  find node by name missing alias with relative path
>>
>>         KUNIT_EXPECT_EQ_MSG(
>>                 test,
>>                 np = of_find_node_by_path("testcase-alias/missing-path"), NULL,
>>                 "non-existent alias with relative path returned node %pOF\n",
>>                 np);
>>         of_node_put(np);
>>
>>
>>         // find node by name with option
>>
>>         np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
>>                                "option path test failed\n");
>>         of_node_put(np);
>>
>>
>>         // find node by name with option and slash
>>
>>         np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>>                                "option path test, subcase #1 failed\n");
>>         of_node_put(np);
>>
>>         np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
>>                                "option path test, subcase #2 failed\n");
>>         of_node_put(np);
>>
>>
>>         // find node by name with null option
>>
>>         np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
>>         KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
>>                                          "NULL option path test failed\n");
>>         of_node_put(np);
>>
>>
>>         // find node by name with option alias
>>
>>         np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
>>                                        &options);
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
>>                                "option alias path test failed\n");
>>         of_node_put(np);
>>
>>
>>         // find node by name with option alias and slash
>>
>>         np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
>>                                        &options);
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
>>                                "option alias path test, subcase #1 failed\n");
>>         of_node_put(np);
>>
>>
>>         // find node by name with null option alias
>>
>>         np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
>>         KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(
>>                         test, np, "NULL option alias path test failed\n");
>>         of_node_put(np);
>>
>>
>>         // find node by name option clearing
>>
>>         options = "testoption";
>>         np = of_find_node_opts_by_path("testcase-alias", &options);
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>>                             "option clearing test failed\n");
>>         of_node_put(np);
>>
>>
>>         // find node by name option clearing root
>>
>>         options = "testoption";
>>         np = of_find_node_opts_by_path("/", &options);
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
>>         KUNIT_EXPECT_EQ_MSG(test, options, NULL,
>>                             "option clearing root node test failed\n");
>>         of_node_put(np);
>> }
>>
>> static int of_test_init(struct kunit *test)
>> {
>>         /* adding data for unittest */
>>         KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
>>
>>         if (!of_aliases)
>>                 of_aliases = of_find_node_by_path("/aliases");
>>
>>         KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
>>                         "/testcase-data/phandle-tests/consumer-a"));
>>
>>         return 0;
>> }
>>
>> static struct kunit_case of_test_cases[] = {
>>         KUNIT_CASE(of_unittest_find_node_by_name),
>>         {},
>> };
>>
>> static struct kunit_module of_test_module = {
>>         .name = "of-base-test",
>>         .init = of_test_init,
>>         .test_cases = of_test_cases,
>> };
>> module_test(of_test_module);
>>
>>
>>>
>>>
>>>>> be cases where the devicetree unittests are currently not well grouped
>>>>> and may benefit from change, but if so that should be handled independently
>>>>> of any transformation into a KUnit framework.
>>>>
>>>> I agree. I did this because I wanted to illustrate what I thought real
>>>> world KUnit unit tests should look like (I also wanted to be able to
>>>> show off KUnit test features that help you write these kinds of
>>>> tests); I was not necessarily intending that all the of: unittest
>>>> patches would get merged in with the whole RFC. I was mostly trying to
>>>> create cause for discussion (which it seems like I succeeded at ;-) ).
>>>>
>>>> So fair enough, I will propose these patches separately and later
>>>> (except of course this one that splits up the file). Do you want the
>>>> initial transformation to the KUnit framework in the main KUnit
>>>> patchset, or do you want that to be done separately? If I recall, Rob
>>>> suggested this as a good initial example that other people could refer
>>>> to, and some people seemed to think that I needed one to help guide
>>>> the discussion and provide direction for early users. I don't
>>>> necessarily think that means the initial real world example needs to
>>>> be a part of the initial patchset though.
> 
> I really appreciate you taking the time to discuss these difficult points :-)
> 
> If the way I want to express test cases here is really that difficult
> to read, then it means that I have some work to do to make it better,
> because I plan on constructing other test cases in a very similar way.
> So, if you think that these test cases have real readability issues,
> then there is something I either need to improve with the framework,
> or the documentation.
> 
> So if you would rather discuss these patches later once I added those
> features that would make the notion of hermeticity stronger, or would
> make summaries better, or anything else I mentioned, that's fine with
> me, but if you think there is something fundamentally wrong with my
> approach, I would rather figure out the right way to handle it sooner
> rather than later.
> 
> Looking forward to hearing what you think!
> 
> Cheers
>
Brendan Higgins March 22, 2019, 1:30 a.m. UTC | #12
On Thu, Mar 21, 2019 at 5:22 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 2/27/19 7:52 PM, Brendan Higgins wrote:
> > On Wed, Feb 20, 2019 at 12:45 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >>
> >> On 2/18/19 2:25 PM, Frank Rowand wrote:
> >>> On 2/15/19 2:56 AM, Brendan Higgins wrote:
> >>>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >>>>>
> >>>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
> >>>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >>>>>>>
> >>>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
> >>>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
< snip >
> >>
> >> In the base version, the order of execution of the test code requires
> >> bouncing back and forth between the test functions and the coding of
> >> of_test_find_node_by_name_cases[].
> >
> > You shouldn't need to bounce back and forth because the order in which
> > the tests run shouldn't matter.
>
> If one can't guarantee total independence of all of the tests, with no
> side effects, then yes.  But that is not my world.  To make that
> guarantee, I would need to be able to run just a single test in an
> entire test run.
>
> I actually want to make side effects possible.  Whether from other
> tests or from live kernel code that is accessing the live devicetree.
> Any extra stress makes me happier.
>
> I forget the exact term that has been tossed around, but to me the
> devicetree unittests are more like system validation, release tests,
> acceptance tests, and stress tests.  Not unit tests in the philosophy
> of KUnit.

Ah, I understand. I thought that they were actually trying to be unit
tests; that pretty much voids this discussion then. Integration tests
and end to end tests are valuable as long as that is actually what you
are trying to do.

>
> I do see the value of pure unit tests, and there are rare times that
> my devicetree use case might be better served by that approach.  But
> if so, it is very easy for me to add a simple pure test when debugging.
> My general use case does not map onto this model.

Why do you think it is rare that you would actually want unit tests?

I mean, if you don't get much code churn, then maybe it's not going to
provide you a ton of value to immediately go and write a bunch of unit
tests right now, but I can't think of a single time where it's hurt.
Unit tests, from my experience, are usually the easiest tests to
maintain, and the most helpful when I am developing.

Maybe I need to understand your use case better.

>
>
> >>
> >> In the frank version the order of execution of the test code is obvious.
> >
> > So I know we were arguing before over whether order *does* matter in
> > some of the other test cases (none in the example that you or I
> > posted), but wouldn't it be better if the order of execution didn't
> > matter? If you don't allow a user to depend on the execution of test
> > cases, then arguably these test case dependencies would never form and
> > the order wouldn't matter.
>
> Reality intrudes.  Order does matter.
>
>
> >>
> >> It is possible that a test function could be left out of
> >> of_test_find_node_by_name_cases[], in error.  This will result in a compile
> >> warning (I think warning instead of error, but I have not verified that)
> >> so it might be caught or it might be overlooked.
> >>
> >> The base version is 265 lines.  The frank version is 208 lines, 57 lines
> >> less.  Less is better.
> >
> > I agree that less is better, but there are different kinds of less to
> > consider. I prefer less logic in a function to fewer lines overall.
> >
> > It seems we are in agreement that test cases should be small and
> > simple, so I won't dwell on that point any longer. I agree that the
>
> As a general guide for simple unit tests, sure.
>
> For my case, no.  Reality intrudes.
>
> KUnit has a nice architectural view of what a unit test should be.

Cool, I am glad you think so! That actually means a lot to me. I was
afraid I wasn't conveying the idea properly and that was the root of
this debate.

>
> The existing devicetree "unittests" are not such unit tests.  They
> simply share the same name.
>
> The devicetree unittests do not fit into a clean:
>   - initialize
>   - do one test
>   - clean up
> model.
>
> Trying to force them into that model will not work.  The initialize
> is not a simple, easy to decompose thing.  And trying to decompose
> it can actually make the code more complex and messier.
>
> Clean up can NOT occur, because part of my test validation is looking
> at the state of the device tree after the tests complete, viewed
> through the /proc/device-tree/ interface.
>

Again, if they are not actually intended to be unit tests, then I
think that is fine.

< snip >

> > Compare the test cases for adding of_test_dynamic_basic,
> > of_test_dynamic_add_existing_property,
> > of_test_dynamic_modify_existing_property, and
> > of_test_dynamic_modify_non_existent_property to the originals. My
> > version is much longer overall, but I think is still much easier to
> > understand. I can say from when I was trying to split this up in the
> > first place, it was not obvious what properties were expected to be
> > populated as a precondition for a given test case (except the first
> > one of course). Whereas, in my version, it is immediately obvious what
> > the preconditions are for a test case. I think you can apply this same
> > logic to the examples you provided, in frank version, I don't
> > immediately know if one test cases does something that is a
> > precondition for another test case.
>
> Yes, that is a real problem in the current code, but easily fixed
> with comments.

I think it is best when you don't need comments, but in this case, I
think I have to agree with you.

>
>
> > My version also makes it easier to run a test case entirely by itself
> > which is really valuable for debugging purposes. A common thing that
> > happens when you have lots of unit tests is something breaks and lots
> > of tests fail. If the test cases are good, there should be just a
> > couple (ideally one) test cases that directly assert the violated
> > property; those are the test cases you actually want to focus on, the
> > rest are noise for the purposes of that breakage. In my version, it is
> > much easier to turn off the test cases that you don't care about and
> > then focus in on the ones that exercise the violated property.
> >
> > Now I know that, hermeticity especially, but other features as well
> > (test suite summary, error on unused test case function, etc) are not
> > actually in KUnit as it is under consideration here. Maybe it would be
> > best to save these last two patches (18/19, and 19/19) until I have
> > these other features checked in and reconsider them then?
>
> Thanks for leaving 18/19 and 19/19 off in v4.

Sure, no problem. It was pretty clear that it was a waste of both of
our times to continue discussing those at this juncture. :-)

Do you still want me to try to convert the DT not-exactly-unittest to
KUnit? I would kind of prefer (I don't feel *super* strongly about the
matter) we don't call it that since I was intending for it to be the
flagship initial example, but I certainly don't mind trying to clean
this patch up to get it up to snuff. It's really just a question of
whether it is worth it to you.

< snip >

Cheers!
Frank Rowand March 22, 2019, 1:34 a.m. UTC | #13
On 3/21/19 5:22 PM, Frank Rowand wrote:
> On 2/27/19 7:52 PM, Brendan Higgins wrote:

< snip >

>> Now I know that, hermeticity especially, but other features as well
>> (test suite summary, error on unused test case function, etc) are not
>> actually in KUnit as it is under consideration here. Maybe it would be
>> best to save these last two patches (18/19, and 19/19) until I have
>> these other features checked in and reconsider them then?
> 
> Thanks for leaving 18/19 and 19/19 off in v4.

Oops, they got into v4 but as 16/17 and 17/17, I think.  But it sounds
like you are planning to leave them out of v5.

> 
> -Frank
Frank Rowand March 22, 2019, 1:47 a.m. UTC | #14
On 3/21/19 6:30 PM, Brendan Higgins wrote:
> On Thu, Mar 21, 2019 at 5:22 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>
>> On 2/27/19 7:52 PM, Brendan Higgins wrote:

< snip >  but thanks for the comments in the snipped section.


>>
>> Thanks for leaving 18/19 and 19/19 off in v4.
> 
> Sure, no problem. It was pretty clear that it was a waste of both of
> our times to continue discussing those at this juncture. :-)
> 
> Do you still want me to try to convert the DT not-exactly-unittest to
> KUnit? I would kind of prefer (I don't feel *super* strongly about the
> matter) we don't call it that since I was intending for it to be the
> flagship initial example, but I certainly don't mind trying to clean
> this patch up to get it up to snuff. It's really just a question of
> whether it is worth it to you.

In the long term, if KUnit is adopted by the kernel, then I think it
probably makes sense for devicetree unittest to convert from using
our own unittest() function to report an individual test pass/fail
to instead use something like KUNIT_EXPECT_*() to provide more
consistent test messages to test frameworks.  That is assuming
KUNIT_EXPECT_*() provides comparable functionality.  I still have
not looked into that question since the converted tests (patch 15/17
in v4) still does not execute without throwing internal errors.

If that conversion occurred, I would also avoid the ASSERTs.

> 
> < snip >
> 
> Cheers!
>
Brendan Higgins March 25, 2019, 10:15 p.m. UTC | #15
On Thu, Mar 21, 2019 at 6:47 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 3/21/19 6:30 PM, Brendan Higgins wrote:
> > On Thu, Mar 21, 2019 at 5:22 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >>
> >> On 2/27/19 7:52 PM, Brendan Higgins wrote:
>
> < snip >  but thanks for the comments in the snipped section.
>
>
> >>
> >> Thanks for leaving 18/19 and 19/19 off in v4.
> >
> > Sure, no problem. It was pretty clear that it was a waste of both of
> > our times to continue discussing those at this juncture. :-)
> >
> > Do you still want me to try to convert the DT not-exactly-unittest to
> > KUnit? I would kind of prefer (I don't feel *super* strongly about the
> > matter) we don't call it that since I was intending for it to be the
> > flagship initial example, but I certainly don't mind trying to clean
> > this patch up to get it up to snuff. It's really just a question of
> > whether it is worth it to you.
>
> In the long term, if KUnit is adopted by the kernel, then I think it
> probably makes sense for devicetree unittest to convert from using
> our own unittest() function to report an individual test pass/fail
> to instead use something like KUNIT_EXPECT_*() to provide more
> consistent test messages to test frameworks.  That is assuming
> KUNIT_EXPECT_*() provides comparable functionality.  I still have
> not looked into that question since the converted tests (patch 15/17
> in v4) still does not execute without throwing internal errors.

Sounds good.

>
> If that conversion occurred, I would also avoid the ASSERTs.

Noted.
Brendan Higgins March 25, 2019, 10:18 p.m. UTC | #16
On Thu, Mar 21, 2019 at 6:34 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 3/21/19 5:22 PM, Frank Rowand wrote:
> > On 2/27/19 7:52 PM, Brendan Higgins wrote:
>
> < snip >
>
> >> Now I know that, hermeticity especially, but other features as well
> >> (test suite summary, error on unused test case function, etc) are not
> >> actually in KUnit as it is under consideration here. Maybe it would be
> >> best to save these last two patches (18/19, and 19/19) until I have
> >> these other features checked in and reconsider them then?
> >
> > Thanks for leaving 18/19 and 19/19 off in v4.
>
> Oops, they got into v4 but as 16/17 and 17/17, I think.  But it sounds
> like you are planning to leave them out of v5.

Oh, I thought you meant v5 when you were thanking me. In any case, to
confirm, I will be leaving off 16/17, and 17/17 in the next version.
Rob Herring Sept. 20, 2019, 4:57 p.m. UTC | #17
Following up from LPC discussions...

On Thu, Mar 21, 2019 at 8:30 PM Brendan Higgins
<brendanhiggins@google.com> wrote:
>
> On Thu, Mar 21, 2019 at 5:22 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >
> > On 2/27/19 7:52 PM, Brendan Higgins wrote:
> > > On Wed, Feb 20, 2019 at 12:45 PM Frank Rowand <frowand.list@gmail.com> wrote:
> > >>
> > >> On 2/18/19 2:25 PM, Frank Rowand wrote:
> > >>> On 2/15/19 2:56 AM, Brendan Higgins wrote:
> > >>>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list@gmail.com> wrote:
> > >>>>>
> > >>>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
> > >>>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
> > >>>>>>>
> > >>>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
> > >>>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
> < snip >
> > >>
> > >> In the base version, the order of execution of the test code requires
> > >> bouncing back and forth between the test functions and the coding of
> > >> of_test_find_node_by_name_cases[].
> > >
> > > You shouldn't need to bounce back and forth because the order in which
> > > the tests run shouldn't matter.
> >
> > If one can't guarantee total independence of all of the tests, with no
> > side effects, then yes.  But that is not my world.  To make that
> > guarantee, I would need to be able to run just a single test in an
> > entire test run.
> >
> > I actually want to make side effects possible.  Whether from other
> > tests or from live kernel code that is accessing the live devicetree.
> > Any extra stress makes me happier.
> >
> > I forget the exact term that has been tossed around, but to me the
> > devicetree unittests are more like system validation, release tests,
> > acceptance tests, and stress tests.  Not unit tests in the philosophy
> > of KUnit.
>
> Ah, I understand. I thought that they were actually trying to be unit
> tests; that pretty much voids this discussion then. Integration tests
> and end to end tests are valuable as long as that is actually what you
> are trying to do.

There's a mixture. There's a whole bunch of tests that are basically
just testing various DT APIs and use a static DT. Those are all unit
tests IMO.

Then there's all the overlay tests Frank has added. I guess some of
those are not unittests in the strictest sense. Regardless, if we're
reporting test results, we should align our reporting with what will
become the rest of the kernel.

> > I do see the value of pure unit tests, and there are rare times that
> > my devicetree use case might be better served by that approach.  But
> > if so, it is very easy for me to add a simple pure test when debugging.
> > My general use case does not map onto this model.
>
> Why do you think it is rare that you would actually want unit tests?

I don't. We should have a unittest (or multiple) for every single DT
API call and that should be a requirement to add any new APIs.

> I mean, if you don't get much code churn, then maybe it's not going to
> provide you a ton of value to immediately go and write a bunch of unit
> tests right now, but I can't think of a single time where it's hurt.
> Unit tests, from my experience, are usually the easiest tests to
> maintain, and the most helpful when I am developing.
>
> Maybe I need to understand your use case better.
>
> >
> >
> > >>
> > >> In the frank version the order of execution of the test code is obvious.
> > >
> > > So I know we were arguing before over whether order *does* matter in
> > > some of the other test cases (none in the example that you or I
> > > posted), but wouldn't it be better if the order of execution didn't
> > > matter? If you don't allow a user to depend on the execution of test
> > > cases, then arguably these test case dependencies would never form and
> > > the order wouldn't matter.
> >
> > Reality intrudes.  Order does matter.
> >
> >
> > >>
> > >> It is possible that a test function could be left out of
> > >> of_test_find_node_by_name_cases[], in error.  This will result in a compile
> > >> warning (I think warning instead of error, but I have not verified that)
> > >> so it might be caught or it might be overlooked.
> > >>
> > >> The base version is 265 lines.  The frank version is 208 lines, 57 lines
> > >> less.  Less is better.
> > >
> > > I agree that less is better, but there are different kinds of less to
> > > consider. I prefer less logic in a function to fewer lines overall.
> > >
> > > It seems we are in agreement that test cases should be small and
> > > simple, so I won't dwell on that point any longer. I agree that the
> >
> > As a general guide for simple unit tests, sure.
> >
> > For my case, no.  Reality intrudes.
> >
> > KUnit has a nice architectural view of what a unit test should be.
>
> Cool, I am glad you think so! That actually means a lot to me. I was
> afraid I wasn't conveying the idea properly and that was the root of
> this debate.
>
> >
> > The existing devicetree "unittests" are not such unit tests.  They
> > simply share the same name.
> >
> > The devicetree unittests do not fit into a clean:
> >   - initialize
> >   - do one test
> >   - clean up
> > model.

Initialize being static and clean-up being NULL still fits into this model.

> > Trying to force them into that model will not work.  The initialize
> > is not a simple, easy to decompose thing.  And trying to decompose
> > it can actually make the code more complex and messier.
> >
> > Clean up can NOT occur, because part of my test validation is looking
> > at the state of the device tree after the tests complete, viewed
> > through the /proc/device-tree/ interface.

Well, that's pretty ugly to have the test in the kernel and the
validation in userspace. I can see why you do, but that seems like a
problem in how those tests are defined and run.

> Again, if they are not actually intended to be unit tests, then I
> think that is fine.
>
> < snip >
>
> > > Compare the test cases for adding of_test_dynamic_basic,
> > > of_test_dynamic_add_existing_property,
> > > of_test_dynamic_modify_existing_property, and
> > > of_test_dynamic_modify_non_existent_property to the originals. My
> > > version is much longer overall, but I think is still much easier to
> > > understand. I can say from when I was trying to split this up in the
> > > first place, it was not obvious what properties were expected to be
> > > populated as a precondition for a given test case (except the first
> > > one of course). Whereas, in my version, it is immediately obvious what
> > > the preconditions are for a test case. I think you can apply this same
> > > logic to the examples you provided, in frank version, I don't
> > > immediately know if one test cases does something that is a
> > > precondition for another test case.
> >
> > Yes, that is a real problem in the current code, but easily fixed
> > with comments.
>
> I think it is best when you don't need comments, but in this case, I
> think I have to agree with you.
>
> >
> >
> > > My version also makes it easier to run a test case entirely by itself
> > > which is really valuable for debugging purposes. A common thing that
> > > happens when you have lots of unit tests is something breaks and lots
> > > of tests fail. If the test cases are good, there should be just a
> > > couple (ideally one) test cases that directly assert the violated
> > > property; those are the test cases you actually want to focus on, the
> > > rest are noise for the purposes of that breakage. In my version, it is
> > > much easier to turn off the test cases that you don't care about and
> > > then focus in on the ones that exercise the violated property.
> > >
> > > Now I know that, hermeticity especially, but other features as well
> > > (test suite summary, error on unused test case function, etc) are not
> > > actually in KUnit as it is under consideration here. Maybe it would be
> > > best to save these last two patches (18/19, and 19/19) until I have
> > > these other features checked in and reconsider them then?
> >
> > Thanks for leaving 18/19 and 19/19 off in v4.
>
> Sure, no problem. It was pretty clear that it was a waste of both of
> our times to continue discussing those at this juncture. :-)
>
> Do you still want me to try to convert the DT not-exactly-unittest to
> KUnit? I would kind of prefer (I don't feel *super* strongly about the
> matter) we don't call it that since I was intending for it to be the
> flagship initial example, but I certainly don't mind trying to clean
> this patch up to get it up to snuff. It's really just a question of
> whether it is worth it to you.

I still want to see this happen at least for the parts that are
clearly unit tests. And for the parts that aren't, Frank should move
them out of of/unittest.c.

So how to move forward? Convert tests one by one? Take a first swag at
which are unit tests and which aren't?

Brendan, do you still have DT unittest patches that work with current kunit?

Rob
Frank Rowand Sept. 21, 2019, 11:57 p.m. UTC | #18
On 9/20/19 9:57 AM, Rob Herring wrote:
> Following up from LPC discussions...
> 
> On Thu, Mar 21, 2019 at 8:30 PM Brendan Higgins
> <brendanhiggins@google.com> wrote:
>>
>> On Thu, Mar 21, 2019 at 5:22 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>
>>> On 2/27/19 7:52 PM, Brendan Higgins wrote:
>>>> On Wed, Feb 20, 2019 at 12:45 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>
>>>>> On 2/18/19 2:25 PM, Frank Rowand wrote:
>>>>>> On 2/15/19 2:56 AM, Brendan Higgins wrote:
>>>>>>> On Thu, Feb 14, 2019 at 6:05 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>>>>
>>>>>>>> On 2/14/19 4:56 PM, Brendan Higgins wrote:
>>>>>>>>> On Thu, Feb 14, 2019 at 3:57 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> On 12/5/18 3:54 PM, Brendan Higgins wrote:
>>>>>>>>>>> On Tue, Dec 4, 2018 at 2:58 AM Frank Rowand <frowand.list@gmail.com> wrote:
>> < snip >
>>>>>
>>>>> In the base version, the order of execution of the test code requires
>>>>> bouncing back and forth between the test functions and the coding of
>>>>> of_test_find_node_by_name_cases[].
>>>>
>>>> You shouldn't need to bounce back and forth because the order in which
>>>> the tests run shouldn't matter.
>>>
>>> If one can't guarantee total independence of all of the tests, with no
>>> side effects, then yes.  But that is not my world.  To make that
>>> guarantee, I would need to be able to run just a single test in an
>>> entire test run.
>>>
>>> I actually want to make side effects possible.  Whether from other
>>> tests or from live kernel code that is accessing the live devicetree.
>>> Any extra stress makes me happier.
>>>
>>> I forget the exact term that has been tossed around, but to me the
>>> devicetree unittests are more like system validation, release tests,
>>> acceptance tests, and stress tests.  Not unit tests in the philosophy
>>> of KUnit.
>>
>> Ah, I understand. I thought that they were actually trying to be unit
>> tests; that pretty much voids this discussion then. Integration tests
>> and end to end tests are valuable as long as that is actually what you
>> are trying to do.
> 
> There's a mixture. There's a whole bunch of tests that are basically
> just testing various DT APIs and use a static DT. Those are all unit
> tests IMO.
> 
> Then there's all the overlay tests Frank has added. I guess some of
> those are not unittests in the strictest sense. Regardless, if we're
> reporting test results, we should align our reporting with what will
> become the rest of the kernel.

The last time I talked to you at LPC, I was still resisting moving the
DT unittests to the kunit framework.  But I think I am on board now.

Brendan agreed to accept a kunit patch from me (when I write it) to enable
the DT unittests to report # of tests run and # of tests passed/failed,
as is currently the case.

Brendan also agreed that null initialization and clean up would be ok
for the DT unittests.  But that he does not want that model to be
frequently used.  (You mention this idea later in the email I am
replying to.)


> 
>>> I do see the value of pure unit tests, and there are rare times that
>>> my devicetree use case might be better served by that approach.  But
>>> if so, it is very easy for me to add a simple pure test when debugging.
>>> My general use case does not map onto this model.
>>
>> Why do you think it is rare that you would actually want unit tests?
> 
> I don't. We should have a unittest (or multiple) for every single DT
> API call and that should be a requirement to add any new APIs.
> 
>> I mean, if you don't get much code churn, then maybe it's not going to
>> provide you a ton of value to immediately go and write a bunch of unit
>> tests right now, but I can't think of a single time where it's hurt.
>> Unit tests, from my experience, are usually the easiest tests to
>> maintain, and the most helpful when I am developing.
>>
>> Maybe I need to understand your use case better.
>>
>>>
>>>
>>>>>
>>>>> In the frank version the order of execution of the test code is obvious.
>>>>
>>>> So I know we were arguing before over whether order *does* matter in
>>>> some of the other test cases (none in the example that you or I
>>>> posted), but wouldn't it be better if the order of execution didn't
>>>> matter? If you don't allow a user to depend on the execution of test
>>>> cases, then arguably these test case dependencies would never form and
>>>> the order wouldn't matter.
>>>
>>> Reality intrudes.  Order does matter.
>>>
>>>
>>>>>
>>>>> It is possible that a test function could be left out of
>>>>> of_test_find_node_by_name_cases[], in error.  This will result in a compile
>>>>> warning (I think warning instead of error, but I have not verified that)
>>>>> so it might be caught or it might be overlooked.
>>>>>
>>>>> The base version is 265 lines.  The frank version is 208 lines, 57 lines
>>>>> less.  Less is better.
>>>>
>>>> I agree that less is better, but there are different kinds of less to
>>>> consider. I prefer less logic in a function to fewer lines overall.
>>>>
>>>> It seems we are in agreement that test cases should be small and
>>>> simple, so I won't dwell on that point any longer. I agree that the
>>>
>>> As a general guide for simple unit tests, sure.
>>>
>>> For my case, no.  Reality intrudes.
>>>
>>> KUnit has a nice architectural view of what a unit test should be.
>>
>> Cool, I am glad you think so! That actually means a lot to me. I was
>> afraid I wasn't conveying the idea properly and that was the root of
>> this debate.
>>
>>>
>>> The existing devicetree "unittests" are not such unit tests.  They
>>> simply share the same name.
>>>
>>> The devicetree unittests do not fit into a clean:
>>>   - initialize
>>>   - do one test
>>>   - clean up
>>> model.
> 
> Initialize being static and clean-up being NULL still fits into this model.
> 
>>> Trying to force them into that model will not work.  The initialize
>>> is not a simple, easy to decompose thing.  And trying to decompose
>>> it can actually make the code more complex and messier.
>>>
>>> Clean up can NOT occur, because part of my test validation is looking
>>> at the state of the device tree after the tests complete, viewed
>>> through the /proc/device-tree/ interface.
> 
> Well, that's pretty ugly to have the test in the kernel and the
> validation in userspace. I can see why you do, but that seems like a
> problem in how those tests are defined and run.

Yes it is ugly.  Any good suggestions on a better solution?


> 
>> Again, if they are not actually intended to be unit tests, then I
>> think that is fine.
>>
>> < snip >
>>
>>>> Compare the test cases for adding of_test_dynamic_basic,
>>>> of_test_dynamic_add_existing_property,
>>>> of_test_dynamic_modify_existing_property, and
>>>> of_test_dynamic_modify_non_existent_property to the originals. My
>>>> version is much longer overall, but I think is still much easier to
>>>> understand. I can say from when I was trying to split this up in the
>>>> first place, it was not obvious what properties were expected to be
>>>> populated as a precondition for a given test case (except the first
>>>> one of course). Whereas, in my version, it is immediately obvious what
>>>> the preconditions are for a test case. I think you can apply this same
>>>> logic to the examples you provided; in Frank's version, I don't
>>>> immediately know if one test case does something that is a
>>>> precondition for another test case.
>>>
>>> Yes, that is a real problem in the current code, but easily fixed
>>> with comments.
>>
>> I think it is best when you don't need comments, but in this case, I
>> think I have to agree with you.
>>
>>>
>>>
>>>> My version also makes it easier to run a test case entirely by itself
>>>> which is really valuable for debugging purposes. A common thing that
>>>> happens when you have lots of unit tests is something breaks and lots
>>>> of tests fail. If the test cases are good, there should be just a
>>>> couple (ideally one) test cases that directly assert the violated
>>>> property; those are the test cases you actually want to focus on, the
>>>> rest are noise for the purposes of that breakage. In my version, it is
>>>> much easier to turn off the test cases that you don't care about and
>>>> then focus in on the ones that exercise the violated property.
>>>>
>>>> Now I know that, hermeticity especially, but other features as well
>>>> (test suite summary, error on unused test case function, etc) are not
>>>> actually in KUnit as it is under consideration here. Maybe it would be
>>>> best to save these last two patches (18/19, and 19/19) until I have
>>>> these other features checked in and reconsider them then?
>>>
>>> Thanks for leaving 18/19 and 19/19 off in v4.
>>
>> Sure, no problem. It was pretty clear that it was a waste of both of
>> our times to continue discussing those at this juncture. :-)
>>
>> Do you still want me to try to convert the DT not-exactly-unittest to
>> KUnit? I would kind of prefer (I don't feel *super* strongly about the
>> matter) we don't call it that since I was intending for it to be the
>> flagship initial example, but I certainly don't mind trying to clean
>> this patch up to get it up to snuff. It's really just a question of
>> whether it is worth it to you.
> 
> I still want to see this happen at least for the parts that are
> clearly unit tests. And for the parts that aren't, Frank should move
> them out of of/unittest.c.
> 
> So how to move forward? Convert tests one by one? Take a first swag at
> what are unit tests and aren't?

By the end of lpc I decided that I would go ahead and take a shot at
modifying the DT unittests to use the kunit framework.  So I am
volunteering me to do the work.  This has an additional benefit of
testing whether someone like me can successfully follow the
documentation.

It looked like there was a good chance that kunit would get accepted in
the merge window, so I was going to do the DT unittest work based on
-rc1.  I see that Brendan has submitted a couple more versions after
the merge window opened.  I think that kunit will be stable enough
after -rc1 (even if not yet merged by Linus) that I will be comfortable
doing the work on top of the latest patch series at the close of the
merge window.


> 
> Brendan, do you still have DT unittest patches that work with current kunit?

I never saw a version of the DT unittest patches that worked.  I was watching
and waiting before doing a serious review of the DT unittest portion of the
kunit patch series.

-Frank

> 
> Rob
>
Patch

diff --git a/drivers/of/Makefile b/drivers/of/Makefile
index 663a4af0cccd5..4a4bd527d586c 100644
--- a/drivers/of/Makefile
+++ b/drivers/of/Makefile
@@ -8,7 +8,7 @@  obj-$(CONFIG_OF_PROMTREE) += pdt.o
 obj-$(CONFIG_OF_ADDRESS)  += address.o
 obj-$(CONFIG_OF_IRQ)    += irq.o
 obj-$(CONFIG_OF_NET)	+= of_net.o
-obj-$(CONFIG_OF_UNITTEST) += unittest.o
+obj-$(CONFIG_OF_UNITTEST) += unittest.o base-test.o test-common.o
 obj-$(CONFIG_OF_MDIO)	+= of_mdio.o
 obj-$(CONFIG_OF_RESERVED_MEM) += of_reserved_mem.o
 obj-$(CONFIG_OF_RESOLVE)  += resolver.o
diff --git a/drivers/of/base-test.c b/drivers/of/base-test.c
new file mode 100644
index 0000000000000..5731787a3fca8
--- /dev/null
+++ b/drivers/of/base-test.c
@@ -0,0 +1,214 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Unit tests for functions defined in base.c.
+ */
+#include <linux/of.h>
+
+#include <kunit/test.h>
+
+#include "test-common.h"
+
+static void of_unittest_find_node_by_name(struct kunit *test)
+{
+	struct device_node *np;
+	const char *options, *name;
+
+	np = of_find_node_by_path("/testcase-data");
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find /testcase-data failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	/* Test if trailing '/' works */
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
+			    "trailing '/' on /testcase-data/ should fail\n");
+
+	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_EXPECT_STREQ_MSG(test,
+			       "/testcase-data/phandle-tests/consumer-a", name,
+			       "find /testcase-data/phandle-tests/consumer-a failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	np = of_find_node_by_path("testcase-alias");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
+			       "find testcase-alias failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	/* Test if trailing '/' works on aliases */
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
+			    "trailing '/' on testcase-alias/ should fail\n");
+
+	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	name = kasprintf(GFP_KERNEL, "%pOF", np);
+	KUNIT_EXPECT_STREQ_MSG(test,
+			       "/testcase-data/phandle-tests/consumer-a", name,
+			       "find testcase-alias/phandle-tests/consumer-a failed\n");
+	of_node_put(np);
+	kfree(name);
+
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_find_node_by_path("/testcase-data/missing-path"),
+			    NULL,
+			    "non-existent path returned node %pOF\n", np);
+	of_node_put(np);
+
+	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("missing-alias"), NULL,
+			    "non-existent alias returned node %pOF\n", np);
+	of_node_put(np);
+
+	KUNIT_EXPECT_EQ_MSG(test,
+			    of_find_node_by_path("testcase-alias/missing-path"),
+			    NULL,
+			    "non-existent alias with relative path returned node %pOF\n",
+			    np);
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
+			       "option path test failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #1 failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path(
+			"/testcase-data/testcase-device1:test/option",
+			&options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
+			       "option path test, subcase #2 failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
+					 "NULL option path test failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
+				       &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
+			       "option alias path test failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
+				       &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
+			       "option alias path test, subcase #1 failed\n");
+	of_node_put(np);
+
+	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
+	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
+					 "NULL option alias path test failed\n");
+	of_node_put(np);
+
+	options = "testoption";
+	np = of_find_node_opts_by_path("testcase-alias", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing test failed\n");
+	of_node_put(np);
+
+	options = "testoption";
+	np = of_find_node_opts_by_path("/", &options);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
+			    "option clearing root node test failed\n");
+	of_node_put(np);
+}
+
+static void of_unittest_dynamic(struct kunit *test)
+{
+	struct device_node *np;
+	struct property *prop;
+
+	np = of_find_node_by_path("/testcase-data");
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
+
+	/* Array of 4 properties for the purpose of testing */
+	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
+
+	/* Add a new property - should pass*/
+	prop->name = "new-property";
+	prop->value = "new-property-data";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a new property failed\n");
+
+	/* Try to add an existing property - should fail */
+	prop++;
+	prop->name = "new-property";
+	prop->value = "new-property-data-should-fail";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
+			    "Adding an existing property should have failed\n");
+
+	/* Try to modify an existing property - should pass */
+	prop->value = "modify-property-data-should-pass";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+			    "Updating an existing property should have passed\n");
+
+	/* Try to modify non-existent property - should pass*/
+	prop++;
+	prop->name = "modify-property";
+	prop->value = "modify-missing-property-data-should-pass";
+	prop->length = strlen(prop->value) + 1;
+	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
+			    "Updating a missing property should have passed\n");
+
+	/* Remove property - should pass */
+	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
+			    "Removing a property should have passed\n");
+
+	/* Adding very large property - should pass */
+	prop++;
+	prop->name = "large-property-PAGE_SIZEx8";
+	prop->length = PAGE_SIZE * 8;
+	prop->value = kzalloc(prop->length, GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
+	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
+			    "Adding a large property should have passed\n");
+}
+
+static int of_test_init(struct kunit *test)
+{
+	/* adding data for unittest */
+	KUNIT_ASSERT_EQ(test, 0, unittest_data_add());
+
+	if (!of_aliases)
+		of_aliases = of_find_node_by_path("/aliases");
+
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, of_find_node_by_path(
+			"/testcase-data/phandle-tests/consumer-a"));
+
+	return 0;
+}
+
+static struct kunit_case of_test_cases[] = {
+	KUNIT_CASE(of_unittest_find_node_by_name),
+	KUNIT_CASE(of_unittest_dynamic),
+	{},
+};
+
+static struct kunit_module of_test_module = {
+	.name = "of-base-test",
+	.init = of_test_init,
+	.test_cases = of_test_cases,
+};
+module_test(of_test_module);
diff --git a/drivers/of/test-common.c b/drivers/of/test-common.c
new file mode 100644
index 0000000000000..0b2319fde3b3e
--- /dev/null
+++ b/drivers/of/test-common.c
@@ -0,0 +1,149 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Common code to be used by unit tests.
+ */
+#include "test-common.h"
+
+#include <linux/of_fdt.h>
+#include <linux/slab.h>
+
+#include "of_private.h"
+
+/**
+ *	update_node_properties - adds the properties
+ *	of np into dup node (present in live tree) and
+ *	updates parent of children of np to dup.
+ *
+ *	@np:	node already present in live tree
+ *	@dup:	node present in live tree to be updated
+ */
+static void update_node_properties(struct device_node *np,
+					struct device_node *dup)
+{
+	struct property *prop;
+	struct device_node *child;
+
+	for_each_property_of_node(np, prop)
+		of_add_property(dup, prop);
+
+	for_each_child_of_node(np, child)
+		child->parent = dup;
+}
+
+/**
+ *	attach_node_and_children - attaches nodes
+ *	and its children to live tree
+ *
+ *	@np:	Node to attach to live tree
+ */
+static int attach_node_and_children(struct device_node *np)
+{
+	struct device_node *next, *dup, *child;
+	unsigned long flags;
+	const char *full_name;
+
+	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
+	dup = of_find_node_by_path(full_name);
+	kfree(full_name);
+	if (dup) {
+		update_node_properties(np, dup);
+		return 0;
+	}
+
+	child = np->child;
+	np->child = NULL;
+
+	mutex_lock(&of_mutex);
+	raw_spin_lock_irqsave(&devtree_lock, flags);
+	np->sibling = np->parent->child;
+	np->parent->child = np;
+	of_node_clear_flag(np, OF_DETACHED);
+	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+
+	__of_attach_node_sysfs(np);
+	mutex_unlock(&of_mutex);
+
+	while (child) {
+		next = child->sibling;
+		attach_node_and_children(child);
+		child = next;
+	}
+
+	return 0;
+}
+
+/**
+ *	unittest_data_add - Reads, copies data from
+ *	linked tree and attaches it to the live tree
+ */
+int unittest_data_add(void)
+{
+	void *unittest_data;
+	struct device_node *unittest_data_node, *np;
+	/*
+	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
+	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
+	 */
+	extern uint8_t __dtb_testcases_begin[];
+	extern uint8_t __dtb_testcases_end[];
+	const int size = __dtb_testcases_end - __dtb_testcases_begin;
+	int rc;
+
+	if (!size) {
+		pr_warn("%s: No testcase data to attach; not running tests\n",
+			__func__);
+		return -ENODATA;
+	}
+
+	/* creating copy */
+	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
+
+	if (!unittest_data) {
+		pr_warn("%s: Failed to allocate memory for unittest_data; "
+			"not running tests\n", __func__);
+		return -ENOMEM;
+	}
+	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
+	if (!unittest_data_node) {
+		pr_warn("%s: No tree to attach; not running tests\n", __func__);
+		return -ENODATA;
+	}
+
+	/*
+	 * This lock normally encloses of_resolve_phandles()
+	 */
+	of_overlay_mutex_lock();
+
+	rc = of_resolve_phandles(unittest_data_node);
+	if (rc) {
+		pr_err("%s: Failed to resolve phandles (rc=%i)\n",
+		       __func__, rc);
+		of_overlay_mutex_unlock();
+		return -EINVAL;
+	}
+
+	if (!of_root) {
+		of_root = unittest_data_node;
+		for_each_of_allnodes(np)
+			__of_attach_node_sysfs(np);
+		of_aliases = of_find_node_by_path("/aliases");
+		of_chosen = of_find_node_by_path("/chosen");
+		of_overlay_mutex_unlock();
+		return 0;
+	}
+
+	/* attach the sub-tree to live tree */
+	np = unittest_data_node->child;
+	while (np) {
+		struct device_node *next = np->sibling;
+
+		np->parent = of_root;
+		attach_node_and_children(np);
+		np = next;
+	}
+
+	of_overlay_mutex_unlock();
+
+	return 0;
+}
+
diff --git a/drivers/of/test-common.h b/drivers/of/test-common.h
new file mode 100644
index 0000000000000..a35484406bbf1
--- /dev/null
+++ b/drivers/of/test-common.h
@@ -0,0 +1,16 @@ 
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Common code to be used by unit tests.
+ */
+#ifndef _LINUX_OF_TEST_COMMON_H
+#define _LINUX_OF_TEST_COMMON_H
+
+#include <linux/of.h>
+
+/**
+ *	unittest_data_add - Reads, copies data from
+ *	linked tree and attaches it to the live tree
+ */
+int unittest_data_add(void);
+
+#endif /* _LINUX_OF_TEST_COMMON_H */
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index a5ef44730ffdb..b8c220d330f03 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -29,182 +29,7 @@ 
 #include <kunit/test.h>
 
 #include "of_private.h"
-
-static void of_unittest_find_node_by_name(struct kunit *test)
-{
-	struct device_node *np;
-	const char *options, *name;
-
-	np = of_find_node_by_path("/testcase-data");
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
-			       "find /testcase-data failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	/* Test if trailing '/' works */
-	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("/testcase-data/"), NULL,
-			    "trailing '/' on /testcase-data/ should fail\n");
-
-	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_EXPECT_STREQ_MSG(test,
-			       "/testcase-data/phandle-tests/consumer-a", name,
-			       "find /testcase-data/phandle-tests/consumer-a failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	np = of_find_node_by_path("testcase-alias");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_EXPECT_STREQ_MSG(test, "/testcase-data", name,
-			       "find testcase-alias failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	/* Test if trailing '/' works on aliases */
-	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("testcase-alias/"), NULL,
-			    "trailing '/' on testcase-alias/ should fail\n");
-
-	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	name = kasprintf(GFP_KERNEL, "%pOF", np);
-	KUNIT_EXPECT_STREQ_MSG(test,
-			       "/testcase-data/phandle-tests/consumer-a", name,
-			       "find testcase-alias/phandle-tests/consumer-a failed\n");
-	of_node_put(np);
-	kfree(name);
-
-	KUNIT_EXPECT_EQ_MSG(test,
-			    of_find_node_by_path("/testcase-data/missing-path"),
-			    NULL,
-			    "non-existent path returned node %pOF\n", np);
-	of_node_put(np);
-
-	KUNIT_EXPECT_EQ_MSG(test, of_find_node_by_path("missing-alias"), NULL,
-			    "non-existent alias returned node %pOF\n", np);
-	of_node_put(np);
-
-	KUNIT_EXPECT_EQ_MSG(test,
-			    of_find_node_by_path("testcase-alias/missing-path"),
-			    NULL,
-			    "non-existent alias with relative path returned node %pOF\n",
-			    np);
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data:testoption", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "testoption", options,
-			       "option path test failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data:test/option", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
-			       "option path test, subcase #1 failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data/testcase-device1:test/option", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "test/option", options,
-			       "option path test, subcase #2 failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("/testcase-data:testoption", NULL);
-	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
-					 "NULL option path test failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("testcase-alias:testaliasoption",
-				       &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "testaliasoption", options,
-			       "option alias path test failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("testcase-alias:test/alias/option",
-				       &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_STREQ_MSG(test, "test/alias/option", options,
-			       "option alias path test, subcase #1 failed\n");
-	of_node_put(np);
-
-	np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL);
-	KUNIT_EXPECT_NOT_ERR_OR_NULL_MSG(test, np,
-					 "NULL option alias path test failed\n");
-	of_node_put(np);
-
-	options = "testoption";
-	np = of_find_node_opts_by_path("testcase-alias", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
-			    "option clearing test failed\n");
-	of_node_put(np);
-
-	options = "testoption";
-	np = of_find_node_opts_by_path("/", &options);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-	KUNIT_EXPECT_EQ_MSG(test, options, NULL,
-			    "option clearing root node test failed\n");
-	of_node_put(np);
-}
-
-static void of_unittest_dynamic(struct kunit *test)
-{
-	struct device_node *np;
-	struct property *prop;
-
-	np = of_find_node_by_path("/testcase-data");
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
-
-	/* Array of 4 properties for the purpose of testing */
-	prop = kcalloc(4, sizeof(*prop), GFP_KERNEL);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop);
-
-	/* Add a new property - should pass*/
-	prop->name = "new-property";
-	prop->value = "new-property-data";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
-			    "Adding a new property failed\n");
-
-	/* Try to add an existing property - should fail */
-	prop++;
-	prop->name = "new-property";
-	prop->value = "new-property-data-should-fail";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_NE_MSG(test, of_add_property(np, prop), 0,
-			    "Adding an existing property should have failed\n");
-
-	/* Try to modify an existing property - should pass */
-	prop->value = "modify-property-data-should-pass";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
-			    "Updating an existing property should have passed\n");
-
-	/* Try to modify non-existent property - should pass*/
-	prop++;
-	prop->name = "modify-property";
-	prop->value = "modify-missing-property-data-should-pass";
-	prop->length = strlen(prop->value) + 1;
-	KUNIT_EXPECT_EQ_MSG(test, of_update_property(np, prop), 0,
-			    "Updating a missing property should have passed\n");
-
-	/* Remove property - should pass */
-	KUNIT_EXPECT_EQ_MSG(test, of_remove_property(np, prop), 0,
-			    "Removing a property should have passed\n");
-
-	/* Adding very large property - should pass */
-	prop++;
-	prop->name = "large-property-PAGE_SIZEx8";
-	prop->length = PAGE_SIZE * 8;
-	prop->value = kzalloc(prop->length, GFP_KERNEL);
-	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, prop->value);
-	KUNIT_EXPECT_EQ_MSG(test, of_add_property(np, prop), 0,
-			    "Adding a large property should have passed\n");
-}
+#include "test-common.h"
 
 static int of_unittest_check_node_linkage(struct device_node *np)
 {
@@ -1200,143 +1025,6 @@  static void of_unittest_platform_populate(struct kunit *test)
 	of_node_put(np);
 }
 
-/**
- *	update_node_properties - adds the properties
- *	of np into dup node (present in live tree) and
- *	updates parent of children of np to dup.
- *
- *	@np:	node already present in live tree
- *	@dup:	node present in live tree to be updated
- */
-static void update_node_properties(struct device_node *np,
-					struct device_node *dup)
-{
-	struct property *prop;
-	struct device_node *child;
-
-	for_each_property_of_node(np, prop)
-		of_add_property(dup, prop);
-
-	for_each_child_of_node(np, child)
-		child->parent = dup;
-}
-
-/**
- *	attach_node_and_children - attaches nodes
- *	and its children to live tree
- *
- *	@np:	Node to attach to live tree
- */
-static int attach_node_and_children(struct device_node *np)
-{
-	struct device_node *next, *dup, *child;
-	unsigned long flags;
-	const char *full_name;
-
-	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
-	dup = of_find_node_by_path(full_name);
-	kfree(full_name);
-	if (dup) {
-		update_node_properties(np, dup);
-		return 0;
-	}
-
-	child = np->child;
-	np->child = NULL;
-
-	mutex_lock(&of_mutex);
-	raw_spin_lock_irqsave(&devtree_lock, flags);
-	np->sibling = np->parent->child;
-	np->parent->child = np;
-	of_node_clear_flag(np, OF_DETACHED);
-	raw_spin_unlock_irqrestore(&devtree_lock, flags);
-
-	__of_attach_node_sysfs(np);
-	mutex_unlock(&of_mutex);
-
-	while (child) {
-		next = child->sibling;
-		attach_node_and_children(child);
-		child = next;
-	}
-
-	return 0;
-}
-
-/**
- *	unittest_data_add - Reads, copies data from
- *	linked tree and attaches it to the live tree
- */
-static int unittest_data_add(void)
-{
-	void *unittest_data;
-	struct device_node *unittest_data_node, *np;
-	/*
-	 * __dtb_testcases_begin[] and __dtb_testcases_end[] are magically
-	 * created by cmd_dt_S_dtb in scripts/Makefile.lib
-	 */
-	extern uint8_t __dtb_testcases_begin[];
-	extern uint8_t __dtb_testcases_end[];
-	const int size = __dtb_testcases_end - __dtb_testcases_begin;
-	int rc;
-
-	if (!size) {
-		pr_warn("%s: No testcase data to attach; not running tests\n",
-			__func__);
-		return -ENODATA;
-	}
-
-	/* creating copy */
-	unittest_data = kmemdup(__dtb_testcases_begin, size, GFP_KERNEL);
-
-	if (!unittest_data) {
-		pr_warn("%s: Failed to allocate memory for unittest_data; "
-			"not running tests\n", __func__);
-		return -ENOMEM;
-	}
-	of_fdt_unflatten_tree(unittest_data, NULL, &unittest_data_node);
-	if (!unittest_data_node) {
-		pr_warn("%s: No tree to attach; not running tests\n", __func__);
-		return -ENODATA;
-	}
-
-	/*
-	 * This lock normally encloses of_resolve_phandles()
-	 */
-	of_overlay_mutex_lock();
-
-	rc = of_resolve_phandles(unittest_data_node);
-	if (rc) {
-		pr_err("%s: Failed to resolve phandles (rc=%i)\n", __func__, rc);
-		of_overlay_mutex_unlock();
-		return -EINVAL;
-	}
-
-	if (!of_root) {
-		of_root = unittest_data_node;
-		for_each_of_allnodes(np)
-			__of_attach_node_sysfs(np);
-		of_aliases = of_find_node_by_path("/aliases");
-		of_chosen = of_find_node_by_path("/chosen");
-		of_overlay_mutex_unlock();
-		return 0;
-	}
-
-	/* attach the sub-tree to live tree */
-	np = unittest_data_node->child;
-	while (np) {
-		struct device_node *next = np->sibling;
-
-		np->parent = of_root;
-		attach_node_and_children(np);
-		np = next;
-	}
-
-	of_overlay_mutex_unlock();
-
-	return 0;
-}
-
 #ifdef CONFIG_OF_OVERLAY
 static int overlay_data_apply(const char *overlay_name, int *overlay_id);
 
@@ -2560,8 +2248,6 @@  static int of_test_init(struct kunit *test)
 static struct kunit_case of_test_cases[] = {
 	KUNIT_CASE(of_unittest_check_tree_linkage),
 	KUNIT_CASE(of_unittest_check_phandles),
-	KUNIT_CASE(of_unittest_find_node_by_name),
-	KUNIT_CASE(of_unittest_dynamic),
 	KUNIT_CASE(of_unittest_parse_phandle_with_args),
 	KUNIT_CASE(of_unittest_parse_phandle_with_args_map),
 	KUNIT_CASE(of_unittest_printf),