[RFC,v4,00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

Message ID 20190214213729.21702-1-brendanhiggins@google.com (mailing list archive)

Brendan Higgins Feb. 14, 2019, 9:37 p.m. UTC
This patch set proposes KUnit, a lightweight unit testing and mocking
framework for the Linux kernel.

Unlike Autotest and kselftest, KUnit is a true unit testing framework;
it does not require installing the kernel on a test machine or in a VM
and does not require tests to be written in userspace running on a host
kernel. Additionally, KUnit is fast: from invocation to completion it
can run several dozen tests in under a second. Currently, KUnit's own
test suite runs in under a second from the initial invocation (build
time excluded).

KUnit is heavily inspired by JUnit, Python's unittest.mock, and
Googletest/Googlemock for C++. KUnit provides facilities for defining
unit test cases, grouping related test cases into test suites, providing
common infrastructure for running tests, mocking, spying, and much more.

## What's so special about unit testing?

A unit test is supposed to test a single unit of code in isolation,
hence the name. There should be no dependencies outside the control of
the test; this means no external dependencies, which makes tests orders
of magnitude faster. Likewise, since there are no external dependencies,
there are no hoops to jump through to run the tests. Additionally, this
makes unit tests deterministic: a failing unit test always indicates a
problem. Finally, because unit tests necessarily have finer granularity,
they are able to test all code paths easily, solving the classic problem
of difficulty in exercising error-handling code.

## Is KUnit trying to replace other testing frameworks for the kernel?

No. Most existing tests for the Linux kernel are end-to-end tests, which
have their place. A well tested system has lots of unit tests, a
reasonable number of integration tests, and some end-to-end tests. KUnit
is just trying to address the unit test space which is currently not
being addressed.

## More information on KUnit

There is a bunch of documentation near the end of this patch set that
describes how to use KUnit and best practices for writing unit tests.
For convenience I am hosting the compiled docs here:
https://google.github.io/kunit-docs/third_party/kernel/docs/
Additionally, for convenience, I have applied these patches to a branch:
https://kunit.googlesource.com/linux/+/kunit/rfc/5.0-rc5/v4
The repo may be cloned with:
git clone https://kunit.googlesource.com/linux
This patchset is on the kunit/rfc/5.0-rc5/v4 branch.

## Changes Since Last Version

 - Got KUnit working on (hypothetically) all architectures (tested on
   x86), as per Rob's (and others') request.
 - Punted all KUnit features/patches depending on UML for now.
 - Broke out UML specific support into arch/um/* as per "[RFC v3 01/19]
   kunit: test: add KUnit test runner core", as requested by Luis.
 - Added support to kunit_tool to allow it to build kernels in external
   directories, as suggested by Kieran.
 - Added a UML defconfig, and a config fragment for KUnit as suggested
   by Kieran and Luis.
 - Cleaned up and reformatted a bunch of stuff.

Comments

Frank Rowand Feb. 18, 2019, 8:02 p.m. UTC | #1
On 2/14/19 1:37 PM, Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
> 
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
> 
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.
> 
> ## What's so special about unit testing?
> 
> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitudes faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this
> makes unit tests deterministic: a failing unit test always indicates a
> problem. Finally, because unit tests necessarily have finer granularity,
> they are able to test all code paths easily solving the classic problem
> of difficulty in exercising error handling code.
> 
> ## Is KUnit trying to replace other testing frameworks for the kernel?
> 
> No. Most existing tests for the Linux kernel are end-to-end tests, which
> have their place. A well tested system has lots of unit tests, a
> reasonable number of integration tests, and some end-to-end tests. KUnit
> is just trying to address the unit test space which is currently not
> being addressed.
> 
> ## More information on KUnit
> 
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/
> Additionally for convenience, I have applied these patches to a branch:
> https://kunit.googlesource.com/linux/+/kunit/rfc/5.0-rc5/v4
> The repo may be cloned with:
> git clone https://kunit.googlesource.com/linux
> This patchset is on the kunit/rfc/5.0-rc5/v4 branch.
> 
> ## Changes Since Last Version
> 
>  - Got KUnit working on (hypothetically) all architectures (tested on
>    x86), as per Rob's (and other's) request
>  - Punting all KUnit features/patches depending on UML for now.
>  - Broke out UML specific support into arch/um/* as per "[RFC v3 01/19]
>    kunit: test: add KUnit test runner core", as requested by Luis.
>  - Added support to kunit_tool to allow it to build kernels in external
>    directories, as suggested by Kieran.
>  - Added a UML defconfig, and a config fragment for KUnit as suggested
>    by Kieran and Luis.
>  - Cleaned up, and reformatted a bunch of stuff.
> 

I have not read through the patches in any detail.  I have read some of
the code to try to understand the patches to the devicetree unit tests.
So that may limit how valid my comments below are.

I found the code difficult to read in places where it should have been
much simpler to read.  Structuring the code in a pseudo object oriented
style meant that everywhere in a code path that I encountered a dynamic
function call, I had to go find where that dynamic function call was
initialized (and, being the cautious person that I am, verify that the
value of that dynamic function call was not changed anywhere else).
With primitive vi and tags, that search would instead have been a
simple key press (or at worst a few keys) if hard-coded function
calls had been used instead of dynamic function calls.  In the code
paths that I looked at, I did not see any case of a dynamic function
being anything other than the value it was originally initialized as.
There may be such cases; I did not read the entire patch set.  There
may also be cases, envisioned in the architect's mind, where this
flexibility may be of future value.  Dunno.

-Frank
Brendan Higgins Feb. 20, 2019, 6:34 a.m. UTC | #2
On Mon, Feb 18, 2019 at 12:02 PM Frank Rowand <frowand.list@gmail.com> wrote:
<snip>
> I have not read through the patches in any detail.  I have read some of
> the code to try to understand the patches to the devicetree unit tests.
> So that may limit how valid my comments below are.

No problem.

>
> I found the code difficult to read in places where it should have been
> much simpler to read.  Structuring the code in a pseudo object oriented
> style meant that everywhere in a code path that I encountered a dynamic
> function call, I had to go find where that dynamic function call was
> initialized (and being the cautious person that I am, verify that
> no where else was the value of that dynamic function call).  With
> primitive vi and tags, that search would have instead just been a
> simple key press (or at worst a few keys) if hard coded function
> calls were done instead of dynamic function calls.  In the code paths
> that I looked at, I did not see any case of a dynamic function being
> anything other than the value it was originally initialized as.
> There may be such cases, I did not read the entire patch set.  There
> may also be cases envisioned in the architects mind of how this
> flexibility may be of future value.  Dunno.

Yeah, a lot of it is intended to make architecture-specific
implementations and some other future work easier. Some of it is also
for testing purposes. Admittedly some is for neither reason, but given
the heavy usage elsewhere, I figured there was no harm since it was
all private internal usage anyway.
Frank Rowand Feb. 20, 2019, 6:46 a.m. UTC | #3
On 2/19/19 10:34 PM, Brendan Higgins wrote:
> On Mon, Feb 18, 2019 at 12:02 PM Frank Rowand <frowand.list@gmail.com> wrote:
> <snip>
>> I have not read through the patches in any detail.  I have read some of
>> the code to try to understand the patches to the devicetree unit tests.
>> So that may limit how valid my comments below are.
> 
> No problem.
> 
>>
>> I found the code difficult to read in places where it should have been
>> much simpler to read.  Structuring the code in a pseudo object oriented
>> style meant that everywhere in a code path that I encountered a dynamic
>> function call, I had to go find where that dynamic function call was
>> initialized (and being the cautious person that I am, verify that
>> no where else was the value of that dynamic function call).  With
>> primitive vi and tags, that search would have instead just been a
>> simple key press (or at worst a few keys) if hard coded function
>> calls were done instead of dynamic function calls.  In the code paths
>> that I looked at, I did not see any case of a dynamic function being
>> anything other than the value it was originally initialized as.
>> There may be such cases, I did not read the entire patch set.  There
>> may also be cases envisioned in the architects mind of how this
>> flexibility may be of future value.  Dunno.
> 
> Yeah, a lot of it is intended to make architecture specific
> implementations and some other future work easier. Some of it is also
> for testing purposes. Admittedly some is for neither reason, but given
> the heavy usage elsewhere, I figured there was no harm since it was
> all private internal usage anyway.
> 

Increasing the cost for me (and all the other potential code readers)
to read the code is harm.
Thiago Jung Bauermann Feb. 22, 2019, 8:52 p.m. UTC | #4
Frank Rowand <frowand.list@gmail.com> writes:

> On 2/19/19 10:34 PM, Brendan Higgins wrote:
>> On Mon, Feb 18, 2019 at 12:02 PM Frank Rowand <frowand.list@gmail.com> wrote:
>> <snip>
>>> I have not read through the patches in any detail.  I have read some of
>>> the code to try to understand the patches to the devicetree unit tests.
>>> So that may limit how valid my comments below are.
>>
>> No problem.
>>
>>>
>>> I found the code difficult to read in places where it should have been
>>> much simpler to read.  Structuring the code in a pseudo object oriented
>>> style meant that everywhere in a code path that I encountered a dynamic
>>> function call, I had to go find where that dynamic function call was
>>> initialized (and being the cautious person that I am, verify that
>>> no where else was the value of that dynamic function call).  With
>>> primitive vi and tags, that search would have instead just been a
>>> simple key press (or at worst a few keys) if hard coded function
>>> calls were done instead of dynamic function calls.  In the code paths
>>> that I looked at, I did not see any case of a dynamic function being
>>> anything other than the value it was originally initialized as.
>>> There may be such cases, I did not read the entire patch set.  There
>>> may also be cases envisioned in the architects mind of how this
>>> flexibility may be of future value.  Dunno.
>>
>> Yeah, a lot of it is intended to make architecture specific
>> implementations and some other future work easier. Some of it is also
>> for testing purposes. Admittedly some is for neither reason, but given
>> the heavy usage elsewhere, I figured there was no harm since it was
>> all private internal usage anyway.
>>
>
> Increasing the cost for me (and all the other potential code readers)
> to read the code is harm.

Dynamic function calls aren't necessary for arch-specific
implementations either. See for example arch_kexec_image_load() in
kernel/kexec_file.c, which uses a weak symbol that is overridden by
arch-specific code. Not everybody likes weak symbols, so another
alternative (which admittedly not everybody likes either) is to use a
macro with the name of the arch-specific function, as used by
arch_kexec_post_alloc_pages() in <linux/kexec.h> for instance.

--
Thiago Jung Bauermann
IBM Linux Technology Center
Brendan Higgins Feb. 28, 2019, 4:15 a.m. UTC | #5
On Tue, Feb 19, 2019 at 10:46 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 2/19/19 10:34 PM, Brendan Higgins wrote:
> > On Mon, Feb 18, 2019 at 12:02 PM Frank Rowand <frowand.list@gmail.com> wrote:
> > <snip>
> >> I have not read through the patches in any detail.  I have read some of
> >> the code to try to understand the patches to the devicetree unit tests.
> >> So that may limit how valid my comments below are.
> >
> > No problem.
> >
> >>
> >> I found the code difficult to read in places where it should have been
> >> much simpler to read.  Structuring the code in a pseudo object oriented
> >> style meant that everywhere in a code path that I encountered a dynamic
> >> function call, I had to go find where that dynamic function call was
> >> initialized (and being the cautious person that I am, verify that
> >> no where else was the value of that dynamic function call).  With
> >> primitive vi and tags, that search would have instead just been a
> >> simple key press (or at worst a few keys) if hard coded function
> >> calls were done instead of dynamic function calls.  In the code paths
> >> that I looked at, I did not see any case of a dynamic function being
> >> anything other than the value it was originally initialized as.
> >> There may be such cases, I did not read the entire patch set.  There
> >> may also be cases envisioned in the architects mind of how this
> >> flexibility may be of future value.  Dunno.
> >
> > Yeah, a lot of it is intended to make architecture specific
> > implementations and some other future work easier. Some of it is also
> > for testing purposes. Admittedly some is for neither reason, but given
> > the heavy usage elsewhere, I figured there was no harm since it was
> > all private internal usage anyway.
> >
>
> Increasing the cost for me (and all the other potential code readers)
> to read the code is harm.

You are right. I like the object oriented C style; I didn't think it
hurt readability.

In any case, I will go through and replace instances where I am not
using it for one of the above reasons.
Brendan Higgins Feb. 28, 2019, 4:18 a.m. UTC | #6
On Fri, Feb 22, 2019 at 12:53 PM Thiago Jung Bauermann
<bauerman@linux.ibm.com> wrote:
>
>
> Frank Rowand <frowand.list@gmail.com> writes:
>
> > On 2/19/19 10:34 PM, Brendan Higgins wrote:
> >> On Mon, Feb 18, 2019 at 12:02 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >> <snip>
> >>> I have not read through the patches in any detail.  I have read some of
> >>> the code to try to understand the patches to the devicetree unit tests.
> >>> So that may limit how valid my comments below are.
> >>
> >> No problem.
> >>
> >>>
> >>> I found the code difficult to read in places where it should have been
> >>> much simpler to read.  Structuring the code in a pseudo object oriented
> >>> style meant that everywhere in a code path that I encountered a dynamic
> >>> function call, I had to go find where that dynamic function call was
> >>> initialized (and being the cautious person that I am, verify that
> >>> no where else was the value of that dynamic function call).  With
> >>> primitive vi and tags, that search would have instead just been a
> >>> simple key press (or at worst a few keys) if hard coded function
> >>> calls were done instead of dynamic function calls.  In the code paths
> >>> that I looked at, I did not see any case of a dynamic function being
> >>> anything other than the value it was originally initialized as.
> >>> There may be such cases, I did not read the entire patch set.  There
> >>> may also be cases envisioned in the architects mind of how this
> >>> flexibility may be of future value.  Dunno.
> >>
> >> Yeah, a lot of it is intended to make architecture specific
> >> implementations and some other future work easier. Some of it is also
> >> for testing purposes. Admittedly some is for neither reason, but given
> >> the heavy usage elsewhere, I figured there was no harm since it was
> >> all private internal usage anyway.
> >>
> >
> > Increasing the cost for me (and all the other potential code readers)
> > to read the code is harm.
>
> Dynamic function calls aren't necessary for arch-specific
> implementations either. See for example arch_kexec_image_load() in
> kernel/kexec_file.c, which uses a weak symbol that is overriden by
> arch-specific code. Not everybody likes weak symbols, so another
> alternative (which admitedly not everybody likes either) is to use a
> macro with the name of the arch-specific function, as used by
> arch_kexec_post_alloc_pages() in <linux/kexec.h> for instance.

I personally have a strong preference for dynamic function calls over
weak symbols or macros, but I can change it if it really makes
anyone's eyes bleed.
Brendan Higgins March 4, 2019, 11:01 p.m. UTC | #7
On Thu, Feb 14, 2019 at 1:38 PM Brendan Higgins
<brendanhiggins@google.com> wrote:
>
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
>

<snip>

> ## More information on KUnit
>
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/
> Additionally for convenience, I have applied these patches to a branch:
> https://kunit.googlesource.com/linux/+/kunit/rfc/5.0-rc5/v4
> The repo may be cloned with:
> git clone https://kunit.googlesource.com/linux
> This patchset is on the kunit/rfc/5.0-rc5/v4 branch.
>
> ## Changes Since Last Version
>
>  - Got KUnit working on (hypothetically) all architectures (tested on
>    x86), as per Rob's (and other's) request
>  - Punting all KUnit features/patches depending on UML for now.
>  - Broke out UML specific support into arch/um/* as per "[RFC v3 01/19]
>    kunit: test: add KUnit test runner core", as requested by Luis.
>  - Added support to kunit_tool to allow it to build kernels in external
>    directories, as suggested by Kieran.
>  - Added a UML defconfig, and a config fragment for KUnit as suggested
>    by Kieran and Luis.
>  - Cleaned up, and reformatted a bunch of stuff.
>
> --
> 2.21.0.rc0.258.g878e2cd30e-goog
>

Someone suggested I should send the next revision out as "PATCH"
instead of "RFC" since there seems to be general consensus about
everything at a high level, with a couple of exceptions.

At this time I am planning on sending the next revision out as "[PATCH
v1 00/NN] kunit: introduce KUnit, the Linux kernel unit testing
framework". Initially I wasn't sure if the next revision should be
"[PATCH v1 ...]" or "[PATCH v5 ...]". Please let me know if you have a
strong objection to the former.

In the next revision, I will be dropping the last two of three patches
for the DT unit tests, as there don't seem to be enough features
currently available to justify the heavy refactoring I did; however, I
will still include the patch that just converts everything over to
KUnit without restructuring the test cases:
https://lkml.org/lkml/2019/2/14/1133

I should have the next revision out in a week or so.
Logan Gunthorpe March 21, 2019, 1:07 a.m. UTC | #8
Hi,

On 2019-02-14 2:37 p.m., Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.

I haven't followed the entire conversation but I saw the KUnit write-up
on LWN and ended up, as an exercise, giving it a try.

I really like the idea of having a fast unit testing infrastructure in
the kernel. Occasionally, I write userspace tests for tricky functions
by copying the code over to a throwaway C file and exercising it as
needed. I think it would be great to be able to keep these tests
around so that they can be run by anyone who wants to touch the code.

I was just dealing with some functions that required some mocked up
tests, so I thought I'd give KUnit a try. I found writing the code
very easy, and it was quite simple to mock out the hardware for the
infrastructure I was testing.

However, I got a bit hung up by one issue: I was writing unit tests for
code in the NTB tree which itself depends on CONFIG_PCI which cannot be
enabled in UML (for what should be obvious reasons). I managed to work
around this because, as luck would have it, all the functions I cared
about testing were actually static inline functions in headers. So I
placed my test code in the kunit folder (so it would compile) and hacked
around a couple of functions I didn't care about that would not be
compiled.

In the end I got it to work acceptably, but I get the impression that
KUnit will not be usable for wide swaths of kernel code that can't be
compiled in UML. Have there been any discussions or ideas on how to work
around this so it can be more generally useful? Or will this feature be
restricted roughly to non-drivers and functions in headers that don't
have #ifdefs around them?

If you're interested in seeing the unit tests I ended up writing you can
find the commits here[1].

Thanks,

Logan

[1] https://github.com/sbates130272/linux-p2pmem/ ntb_kunit
Knut Omang March 21, 2019, 5:23 a.m. UTC | #9
Hi Logan,

On Wed, 2019-03-20 at 19:07 -0600, Logan Gunthorpe wrote:
> Hi,
> 
> On 2019-02-14 2:37 p.m., Brendan Higgins wrote:
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
> 
> I haven't followed the entire conversation but I saw the KUnit write-up
> on LWN and ended up, as an exercise, giving it a try.
> 
> I really like the idea of having a fast unit testing infrastructure in
> the kernel. Occasionally, I write userspace tests for tricky functions
> that I essentially write by copying the code over to a throw away C file
> and exercise them as I need. I think it would be great to be able to
> keep these tests around in a way that they can be run by anyone who
> wants to touch the code.
> 
> I was just dealing with some functions that required some mocked up
> tests so I thought I'd give KUnit a try. I found writing the code very
> easy and the infrastructure I was testing was quite simple to mock out
> the hardware.
> 
> However, I got a bit hung up by one issue: I was writing unit tests for
> code in the NTB tree which itself depends on CONFIG_PCI which cannot be
> enabled in UML (for what should be obvious reasons). I managed to work
> around this because, as luck would have it, all the functions I cared
> about testing were actually static inline functions in headers. So I
> placed my test code in the kunit folder (so it would compile) and hacked
> around a couple a of functions I didn't care about that would not be
> compiled.
> 
> In the end I got it to work acceptably, but I get the impression that
> KUnit will not be usable for wide swaths of kernel code that can't be
> compiled in UML. Has there been any discussion or ideas on how to work
> around this so it can be more generally useful? Or will this feature be
> restricted roughly to non-drivers and functions in headers that don't
> have #ifdefs around them?

Testing drivers, hardware and firmware within production kernels was the use
case that inspired KTF (Kernel Test Framework). Currently KTF is available as a
standalone git repository. That's been the most efficient form for us so far, 
as we typically want tests to be developed once but deployed on many different
kernel versions simultaneously, as part of continuous integration.

But we're also working towards a suitable proposal for how it can be
smoothly integrated into the kernel, while still keeping the benefits
and tools that allow cross-kernel development of tests. As part of this,
I have a master's student who has been looking at converting some of
the existing kernel test suites to KTF, and we have more examples coming
from our internal usage, as we get more mileage and more users.
See for instance this recent blog entry testing skbuff, on the
Oracle kernel development blog:

https://blogs.oracle.com/linux/writing-kernel-tests-with-the-new-kernel-test-framework-ktf

Other relevant links:

Git repo: https://github.com/oracle/ktf
Formatted docs: http://heim.ifi.uio.no/~knuto/ktf/
LWN mention from my presentation at LPC'17: https://lwn.net/Articles/735034/
Earlier Oracle blog post: https://blogs.oracle.com/linux/oracles-new-kernel-test-framework-for-linux-v2
OSS'18 presentation slides: https://events.linuxfoundation.org/wp-content/uploads/2017/12/Test-Driven-Kernel-Development-Knut-Omang-Oracle.pdf

> If you're interested in seeing the unit tests I ended up writing you can
> find the commits here[1].

It would certainly be interesting to see how the use cases you struggled with
would work with KTF ;-)

Thanks,
Knut

>
> Thanks,
> 
> Logan
> 
> [1] https://github.com/sbates130272/linux-p2pmem/ ntb_kunit
Logan Gunthorpe March 21, 2019, 3:56 p.m. UTC | #10
On 2019-03-20 11:23 p.m., Knut Omang wrote:
> Testing drivers, hardware and firmware within production kernels was the use
> case that inspired KTF (Kernel Test Framework). Currently KTF is available as a
> standalone git repository. That's been the most efficient form for us so far, 
> as we typically want tests to be developed once but deployed on many different
> kernel versions simultaneously, as part of continuous integration.

Interesting. It seems like it's really in direct competition with KUnit.
I didn't really go into it in too much detail but these are my thoughts:

From a developer perspective I think KTF not being in the kernel tree is
a huge negative. I want minimal effort to include my tests in a patch
series and minimal effort for other developers to be able to use them.
Needing to submit these tests to another project or use another project
to run them is too much friction.

Also I think the goal of having tests that run on any kernel version is
a pipe dream. You'd absolutely need a way to encode which kernel
versions a test is expected to pass on because the tests might not make
sense until a feature is finished in upstream. And this makes it even
harder to develop these tests because, when we write them, we might not
even know which kernel version the feature will be added to. Similarly,
if a feature is removed or substantially changed, someone will have to
do a patch to disable the test for subsequent kernel versions and create
a new test for changed features. So, IMO, tests absolutely have to be
part of the kernel tree so they can be changed with the respective
features they test.

KUnit's ability to run without having to build and run the entire kernel
is also a huge plus (assuming there's a way to get around the build
dependency issues). Because of this, it can be very quick to run these
tests, which makes development a *lot* easier since you don't have to
reboot a machine every time you want to test a fix.

Logan
Brendan Higgins March 21, 2019, 4:55 p.m. UTC | #11
On Thu, Mar 21, 2019 at 8:56 AM Logan Gunthorpe <logang@deltatee.com> wrote:
>
>
>
> On 2019-03-20 11:23 p.m., Knut Omang wrote:
> > Testing drivers, hardware and firmware within production kernels was the use
> > case that inspired KTF (Kernel Test Framework). Currently KTF is available as a
> > standalone git repository. That's been the most efficient form for us so far,
> > as we typically want tests to be developed once but deployed on many different
> > kernel versions simultaneously, as part of continuous integration.
>
> Interesting. It seems like it's really in direct competition with KUnit.

I won't speak for Knut, but I don't think we are in competition. I see
KTF as a novel way to do a kind of white box end-to-end testing for
the Linux kernel, which is a valuable thing, especially in some
circumstances. I could see KTF having a lot of value for someone who
wants to maintain out of tree drivers, in particular.

Nevertheless, I don't really see KTF as a real unit testing framework
for a number of different reasons; you pointed out some below, but I
think the main one is that it requires booting a real kernel on
actual hardware. I imagine it could be made to work on a VM, but that
isn't really the point; it fundamentally depends on having part of the
test, or at least driving the test from userspace, on top of the kernel
under test. Knut, myself, and others had a previous discussion to
this effect here: https://lkml.org/lkml/2018/11/24/170

> I didn't really go into it in too much detail but these are my thoughts:
>
> From a developer perspective I think KTF not being in the kernel tree is
> a huge negative. I want minimal effort to include my tests in a patch
> series and minimal effort for other developers to be able to use them.
> Needing to submit these tests to another project or use another project
> to run them is too much friction.
>
> Also I think the goal of having tests that run on any kernel version is
> a pipe dream. You'd absolutely need a way to encode which kernel
> versions a test is expected to pass on because the tests might not make
> sense until a feature is finished in upstream. And this makes it even
> harder to develop these tests because, when we write them, we might not
> even know which kernel version the feature will be added to. Similarly,
> if a feature is removed or substantially changed, someone will have to
> do a patch to disable the test for subsequent kernel versions and create
> a new test for changed features. So, IMO, tests absolutely have to be
> part of the kernel tree so they can be changed with the respective
> features they test.
>
> Kunit's ability to run without having to build and run the entire kernel
>  is also a huge plus. (Assuming there's a way to get around the build
> dependency issues). Because of this, it can be very quick to run these
> tests which makes development a *lot* easier seeing you don't have to
> reboot a machine every time you want to test a fix.

I will reply to your comments directly on your original email. I don't
want to hijack this thread, in case we want to discuss the topic of
KUnit vs. KTF further.
Knut Omang March 21, 2019, 7:13 p.m. UTC | #12
On Thu, 2019-03-21 at 09:55 -0700, Brendan Higgins wrote:
> On Thu, Mar 21, 2019 at 8:56 AM Logan Gunthorpe <logang@deltatee.com> wrote:
> > 
> > 
> > On 2019-03-20 11:23 p.m., Knut Omang wrote:
> > > Testing drivers, hardware and firmware within production kernels was the
> > > use
> > > case that inspired KTF (Kernel Test Framework). Currently KTF is available
> > > as a
> > > standalone git repository. That's been the most efficient form for us so
> > > far,
> > > as we typically want tests to be developed once but deployed on many
> > > different
> > > kernel versions simultaneously, as part of continuous integration.
> > 
> > Interesting. It seems like it's really in direct competition with KUnit.
> 
> I won't speak for Knut, but I don't think we are in competition. 

I would rather say we have some overlap in functionality :)
My understanding is that we have a common goal of providing better
infrastructure for testing, but that we have approached this whole complex of
problems from somewhat different perspectives.

> I see
> KTF as a novel way to do a kind of white box end-to-end testing for
> the Linux kernel, which is a valuable thing, especially in some
> circumstances. I could see KTF having a lot of value for someone who
> wants to maintain out of tree drivers, in particular.

The best argument here is really good examples.
I'm not sure the distinction between "black box" and "white box" testing is
useful here; there are always underlying assumptions behind specific,
deterministic tests. Writing a test is a very good way to get to 
understand a piece of code. Just look at the flow of the example at
https://blogs.oracle.com/linux/writing-kernel-tests-with-the-new-kernel-test-framework-ktf: these are clearly unit tests, but the knowledge gathering is an important
part of the motivation!

> Nevertheless, I don't really see KTF as a real unit testing framework
> for a number of different reasons; you pointed out some below, but I
> think the main one being that it requires booting a real kernel on
> actual hardware; 

That depends on what you want to test. If you need hardware (or simulated or
emulated hardware) for the test, of course you would need to have that hardware;
but if, let's say, you just wanted to run tests like the skbuff example tests
(see link above), you wouldn't need anything more than what you need to run
KUnit tests.

> I imagine it could be made to work on a VM, but that
> isn't really the point; it fundamentally depends on having part of the
> test, or at least driving the test from userspace on top of the kernel
> under test. Knut, myself, and others, had a previous discussion to
> this effect here: https://lkml.org/lkml/2018/11/24/170

> > I didn't really go into it in too much detail but these are my thoughts:
> > 
> > From a developer perspective I think KTF not being in the kernel tree is
> > a huge negative. I want minimal effort to include my tests in a patch
> > series and minimal effort for other developers to be able to use them.
> > Needing to submit these tests to another project or use another project
> > to run them is too much friction.

As I said, I recognize the need to upstream KTF, and we are working to do that;
that's why I bother to write this :)

> > Also I think the goal of having tests that run on any kernel version is
> > a pipe dream. 

I have fulfilled that dream, so I know it is possible (Infiniband driver,
kernels from 2.6.39 to 4.8.x or so...). I know a lot of projects would benefit
from support for such workflows, but that's not really the point here - we want
to achieve both goals!

> > You'd absolutely need a way to encode which kernel
> > versions a test is expected to pass on because the tests might not make
> > sense until a feature is finished in upstream. And this makes it even
> > harder to develop these tests because, when we write them, we might not
> > even know which kernel version the feature will be added to. Similarly,
> > if a feature is removed or substantially changed, someone will have to
> > do a patch to disable the test for subsequent kernel versions and create
> > a new test for changed features. So, IMO, tests absolutely have to be
> > part of the kernel tree so they can be changed with the respective
> > features they test.

Of course a feature that is not part of a kernel cannot easily pass for that
kernel. And yes, testing for the kernel version might be necessary in some cases,
and it might even be necessary to write a section of extra code to handle
differences; still, that's worth the benefit.
And that's also a use case: "Can I use kernel v.X.Y.Z if I need feature w?"
Let's assume we had a set of tests covering a particular feature, and someone
needed that feature, then they could just run the latest set of tests for that
feature on an older kernel to determine if they had enough support for what they
needed. If necessary, they could then backport the feature, and run the tests to
verify that they actually implemented it correctly.

One example of this I recall from the Infiniband driver days was 
the need to have a predictable way to efficiently use huge scatterlists across
kernels. We relied upon scatterlist chaining in a particular way, and the API
descriptions did not really specify in enough detail what the
guaranteed semantics were supposed to be.
I wrote a few simple KTF tests that tested the driver code for the semantics we
expected, and ran them against older and newer kernels and used them to make
sure we would have a driver that worked across a few subtle changes to
scatterlists and their use. 

> > Kunit's ability to run without having to build and run the entire kernel
> >  is also a huge plus. 

IMHO the UML kernel is still a kernel running inside a user land program,
and so is a QEMU/KVM VM, which is my favourite KTF test environment.
 
Also, with UML it is more difficult/less useful to deploy user space tools such
as valgrind, which IMHO would be my main reason for getting kernel code out of
the kernel. I recognize that there's a need for 
doing just that (e.g. compiling complicated data structures entirely in user
space with mocked interfaces), but I think it would be much more useful 
to be able to do that without the additional complexity of UML (or QEMU).

> > (Assuming there's a way to get around the build
> > dependency issues). Because of this, it can be very quick to run these
> > tests which makes development a *lot* easier seeing you don't have to
> > reboot a machine every time you want to test a fix.

If your target component under test can be built as a kernel module, or set of
modules, with KTF your workflow would not involve booting at all (unless you
happened to crash the system with one of your tests, that is :) )

You would just unload your module under test and the test module, recompile the
two, and insmod again. My current work cycle on this is just a few seconds.

> I will reply to your comments directly on your original email. I don't
> want to hijack this thread, in case we want to discuss the topic of
> KUnit vs. KTF further.

Good idea!
We can at least agree that a matter as important as this is 
worth a good, detailed discussion! :)

Thanks!
Knut
Logan Gunthorpe March 21, 2019, 7:29 p.m. UTC | #13
On 2019-03-21 1:13 p.m., Knut Omang wrote:
>> Nevertheless, I don't really see KTF as a real unit testing framework
>> for a number of different reasons; you pointed out some below, but I
>> think the main one being that it requires booting a real kernel on
>> actual hardware; 
> 
> That depends on what you want to test. If you need hardware (or simulated or
> emulated hardware) for the test, of course you would need to have that hardware,
> but if, lets say, you just wanted to run tests like the skbuff example tests
> (see link above) you wouldn't need anything more than what you need to run KUnit
> tests.

I'm starting to get the same impression: KTF isn't unit testing. When we
are saying "unit tests" we are specifying exactly what we want to test:
small sections of code in isolation. So by definition you should not
need hardware for this.

> I have fulfilled that dream, so I know it is possible (Infiniband driver,
> kernels from 2.6.39 to 4.8.x or so..) I know a lot of projects would benefit
> from support for such workflows, but that's not really the point here - we want
> to achieve both goals!

This is what makes me think we are not talking about testing the same
things. We are not talking about end-to-end testing of entire drivers
but about smaller sections of code. A unit test is far more granular, and
despite an Infiniband driver existing for 2.6.39 through 4.8, the
internal implementation could be drastically different. But unit tests
would be testing internal details, which could be very different from version
to version and have to evolve with the implementation.

> If your target component under test can be built as a kernel module, or set of
> modules, with KTF your workflow would not involve booting at all (unless you
> happened to crash the system with one of your tests, that is :) )

> You would just unload your module under test and the test module, recompile the
> two, and insmod again. My current work cycle on this is just a few seconds.

Yes, I'm sure we've all done that many a time, but it's really beside the
point. KUnit offers a much nicer method for running a lot of unit tests
on existing code.

Logan
Knut Omang March 21, 2019, 8:14 p.m. UTC | #14
On Thu, 2019-03-21 at 13:29 -0600, Logan Gunthorpe wrote:
> 
> On 2019-03-21 1:13 p.m., Knut Omang wrote:
> > > Nevertheless, I don't really see KTF as a real unit testing framework
> > > for a number of different reasons; you pointed out some below, but I
> > > think the main one being that it requires booting a real kernel on
> > > actual hardware; 
> > 
> > That depends on what you want to test. If you need hardware (or simulated or
> > emulated hardware) for the test, of course you would need to have that
> > hardware,
> > but if, lets say, you just wanted to run tests like the skbuff example tests
> > (see link above) you wouldn't need anything more than what you need to run
> > KUnit
> > tests.
> 
> I'm starting to get the same impression: KTF isn't unit testing. When we
> are saying "unit tests" we are specifying exactly what we want to test:
> small sections of code in isolation. So by definition you should not
> need hardware for this.

In my world hardware is just that: a piece of code. It can be in many forms, but
it is still code to be tested, and code that can be changed (but sometimes at a
slightly higher cost than a recompile ;-)

But that's not the point here: KTF can be used for your narrower definition of
unit tests, and it can be used for small, precise tests (for particular bugs, for
instance) that you would not characterize as unit tests; still, it serves the
same purpose, and I believe in a pragmatic approach to this. We want to maximize
the value of our time. I believe there's a sweet spot w.r.t. return on investment
on the scale from purist unit testing to just writing code and testing with
existing applications. We're targeting that ;-)

> > I have fulfilled that dream, so I know it is possible (Infiniband driver,
> > kernels from 2.6.39 to 4.8.x or so..) I know a lot of projects would benefit
> > from support for such workflows, but that's not really the point here - we
> > want
> > to achieve both goals!
> 
> This is what makes me think we are not talking about testing the same
> things. We are not talking about end to end testing of entire drivers
> but smaller sections of code.

No! I am talking about testing units within a driver, or within any kernel
component. I am sure you agree that what constitutes a unit depends on what 
level of abstraction you are looking at.

>  A unit test is far more granular and
> despite an infinband driver existing for 2.6.39 through 4.8, the
> internal implementation could be drastically different. But unit tests
> would be testing internal details which could be very different version
> to version and has to evolve with the implementation.

> > If your target component under test can be built as a kernel module, or set
> > of
> > modules, with KTF your workflow would not involve booting at all (unless you
> > happened to crash the system with one of your tests, that is :) )
> > You would just unload your module under test and the test module, recompile
> > the
> > two and insmod again. My work current work cycle on this is just a few
> > seconds.
> 
> Yes, I'm sure we've all done that many a time but it's really beside the
> point. Kunit offers a much nicer method for running a lot of unit tests
> on existing code.

Again, use cases and examples are the key here...

Knut
Brendan Higgins March 21, 2019, 10:07 p.m. UTC | #15
On Wed, Mar 20, 2019 at 6:08 PM Logan Gunthorpe <logang@deltatee.com> wrote:
>
> Hi,
>
> On 2019-02-14 2:37 p.m., Brendan Higgins wrote:
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
>
> I haven't followed the entire conversation but I saw the KUnit write-up
> on LWN and ended up, as an exercise, giving it a try.

Awesome! Thanks for taking the time to try it out and to give me feedback!

>
> I really like the idea of having a fast unit testing infrastructure in
> the kernel. Occasionally, I write userspace tests for tricky functions
> that I essentially write by copying the code over to a throw away C file
> and exercise them as I need. I think it would be great to be able to
> keep these tests around in a way that they can be run by anyone who
> wants to touch the code.
>
> I was just dealing with some functions that required some mocked up
> tests so I thought I'd give KUnit a try. I found writing the code very
> easy and the infrastructure I was testing was quite simple to mock out
> the hardware.
>
> However, I got a bit hung up by one issue: I was writing unit tests for
> code in the NTB tree which itself depends on CONFIG_PCI which cannot be
> enabled in UML (for what should be obvious reasons). I managed to work
> around this because, as luck would have it, all the functions I cared
> about testing were actually static inline functions in headers. So I
> placed my test code in the kunit folder (so it would compile) and hacked
> around a couple of functions I didn't care about that would not be
> compiled.

A couple of points, as for needing CONFIG_PCI; my plan to deal with
that type of thing has been that we would add support for a KUnit/UML
version that is just for KUnit. It would mock out the necessary bits
to provide a fake hardware implementation for anything that might
depend on it. I wrote a prototype for mocking/faking MMIO that I
presented to the list here[1]; it is not part of the current patchset
because we decided it would be best to focus on getting an MVP in, but
I plan on bringing it back up at some point. Anyway, what do you
generally think of this approach?

>
> In the end I got it to work acceptably, but I get the impression that

Awesome, I looked at the code you posted and it doesn't look like you
have had too many troubles. One thing that stood out to me, why did
you need to put it in the kunit/ dir?

> KUnit will not be usable for wide swaths of kernel code that can't be
> compiled in UML. Has there been any discussion or ideas on how to work

For the most part, I was planning on relying on mocking or faking the
hardware out as you have done. I have found this has worked pretty well in
other instances. I know there are some edge cases for which even this
won't work, like code in arch/; for that you *can* compile KUnit for
non-UML, but that is not really unit testing any more; I included
that more as a last-ditch effort, a debugging aid, or a way to allow a
user to write integration tests (also very much because other people
asked me to do it ;-) ). So the main plan here is mocking and faking.

There has been some additional discussion here[2] (see the replies);
this thread is more about the future of trying to pull out kernel
dependencies which I discussed further with Luis (off list) here[3]
(in reality the discussion has been spread across a number of
different threads, but I think those are places you could at least get
context to jump in and talk about mocking and faking hardware and
Linux kernel resources).

In short, I think the idea is we are using UML as scaffolding and we
will gradually try to make it more independent by either reducing
dependencies on kernel resources (by providing high quality fakes) or
by somehow providing a better view of the dependencies so that you can
specify more directly what piece of code you need for your test. This
part is obviously very long term, but I think that's what we need to
do if we want to do really high quality unit testing, regardless of
whether we use KUnit or not. In any case, right now, we are just
working on the basic tools to write *a* unit test for the kernel.

> around this so it can be more generally useful? Or will this feature be
> restricted roughly to non-drivers and functions in headers that don't
> have #ifdefs around them?

I hope not.

The #ifdef thing is something that I don't have a good answer for at
this time. Luis and I talked about it, but haven't come up with any
good ideas, sorry.

As for drivers, I found that testing drivers is quite doable. You can
see an example of a test I wrote for a i2c bus driver here[4]. I know
the i2c subsystem is pretty simple, but the general principle should
apply elsewhere.

>
> If you're interested in seeing the unit tests I ended up writing you can
> find the commits here[1].
>
> Thanks,
>
> Logan
>
> [1] https://github.com/sbates130272/linux-p2pmem/ ntb_kunit

I am looking forward to seeing what you think!

Cheers

[1] https://lkml.org/lkml/2018/10/17/122
[2] https://lkml.org/lkml/2018/11/29/93
[3] https://groups.google.com/forum/#!topic/kunit-dev/EQ1x0SzrUus
[4] https://kunit.googlesource.com/linux/+/e10484ad2f9fc7926412ec84739fe105981b4771/drivers/i2c/busses/i2c-aspeed-test.c
Logan Gunthorpe March 21, 2019, 10:26 p.m. UTC | #16
On 2019-03-21 4:07 p.m., Brendan Higgins wrote:
> A couple of points, as for needing CONFIG_PCI; my plan to deal with
> that type of thing has been that we would add support for a KUnit/UML
> version that is just for KUnit. It would mock out the necessary bits
> to provide a fake hardware implementation for anything that might
> depend on it. I wrote a prototype for mocking/faking MMIO that I
> presented to the list here[1]; it is not part of the current patchset
> because we decided it would be best to focus on getting an MVP in, but
> I plan on bringing it back up at some point. Anyway, what do you
> generally think of this approach?

Yes, I was wondering if that might be possible. I think that's a great
approach but it will unfortunately take a lot of work before larger
swaths of the kernel are testable in KUnit with UML. Having more common
mocked infrastructure will be a great by-product of it, though.

> Awesome, I looked at the code you posted and it doesn't look like you
> have had too many troubles. One thing that stood out to me, why did
> you need to put it in the kunit/ dir?

Yeah, writing the code was super easy. Only after did I realize I
couldn't easily get it to build.

Putting it in the kunit directory was necessary because nothing in the
NTB tree builds unless CONFIG_NTB is set (see drivers/Makefile) and
CONFIG_NTB depends on CONFIG_PCI. I didn't experiment to see how hard it
would be to set CONFIG_NTB without CONFIG_PCI; I assumed it would be tricky.
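The dependency chain described above looks roughly like this in Kconfig terms (an abridged sketch, not the verbatim contents of drivers/ntb/Kconfig and drivers/Makefile):

```
# drivers/ntb/Kconfig (abridged sketch)
menuconfig NTB
	tristate "Non-Transparent Bridge support"
	depends on PCI

# drivers/Makefile (sketch): nothing under drivers/ntb/ builds
# unless CONFIG_NTB is set, and NTB itself requires PCI.
obj-$(CONFIG_NTB)	+= ntb/
```

Since UML cannot enable CONFIG_PCI, the `depends on PCI` line transitively blocks every NTB object file, which is why the test code had to live outside the NTB tree.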

> I am looking forward to see what you think!

Generally, I'm impressed and want to see this work in upstream as soon
as possible so I can start to make use of it!

Logan
Brendan Higgins March 21, 2019, 11:33 p.m. UTC | #17
On Thu, Mar 21, 2019 at 3:27 PM Logan Gunthorpe <logang@deltatee.com> wrote:
>
>
>
> On 2019-03-21 4:07 p.m., Brendan Higgins wrote:
> > A couple of points, as for needing CONFIG_PCI; my plan to deal with
> > that type of thing has been that we would add support for a KUnit/UML
> > version that is just for KUnit. It would mock out the necessary bits
> > to provide a fake hardware implementation for anything that might
> > depend on it. I wrote a prototype for mocking/faking MMIO that I
> > presented to the list here[1]; it is not part of the current patchset
> > because we decided it would be best to focus on getting an MVP in, but
> > I plan on bringing it back up at some point. Anyway, what do you
> > generally think of this approach?
>
> Yes, I was wondering if that might be possible. I think that's a great
> approach but it will unfortunately take a lot of work before larger
> swaths of the kernel are testable in Kunit with UML. Having more common
> mocked infrastructure will be great by-product of it though.

Yeah, it's unfortunate that the best way to do something often takes
so much longer.

>
> > Awesome, I looked at the code you posted and it doesn't look like you
> > have had too many troubles. One thing that stood out to me, why did
> > you need to put it in the kunit/ dir?
>
> Yeah, writing the code was super easy. Only after, did I realized I
> couldn't get it to easily build.

Yeah, we really need to fix that; unfortunately, broadly addressing
that problem is really hard and will most likely take a long time.

>
> Putting it in the kunit directory was necessary because nothing in the
> NTB tree builds unless CONFIG_NTB is set (see drivers/Makefile) and
> CONFIG_NTB depends on CONFIG_PCI. I didn't experiment to see how hard it
> would be to set CONFIG_NTB without CONFIG_PCI; I assumed it would be tricky.
>
> > I am looking forward to see what you think!
>
> Generally, I'm impressed and want to see this work in upstream as soon
> as possible so I can start to make use of it!

Great to hear! I was trying to get the next revision out this week,
but addressing some of the comments is taking a little longer than
expected. I should have something together fairly soon though
(hopefully next week). Good news is that next revision will be
non-RFC; most of the feedback has settled down and I think we are
ready to start figuring out how to merge it. Fingers crossed :-)

Cheers
Frank Rowand March 22, 2019, 1:12 a.m. UTC | #18
On 3/21/19 4:33 PM, Brendan Higgins wrote:
> On Thu, Mar 21, 2019 at 3:27 PM Logan Gunthorpe <logang@deltatee.com> wrote:
>>
>>
>>
>> On 2019-03-21 4:07 p.m., Brendan Higgins wrote:
>>> A couple of points, as for needing CONFIG_PCI; my plan to deal with
>>> that type of thing has been that we would add support for a KUnit/UML
>>> version that is just for KUnit. It would mock out the necessary bits
>>> to provide a fake hardware implementation for anything that might
>>> depend on it. I wrote a prototype for mocking/faking MMIO that I
>>> presented to the list here[1]; it is not part of the current patchset
>>> because we decided it would be best to focus on getting an MVP in, but
>>> I plan on bringing it back up at some point. Anyway, what do you
>>> generally think of this approach?
>>
>> Yes, I was wondering if that might be possible. I think that's a great
>> approach but it will unfortunately take a lot of work before larger
>> swaths of the kernel are testable in Kunit with UML. Having more common
>> mocked infrastructure will be great by-product of it though.
> 
> Yeah, it's unfortunate that the best way to do something often takes
> so much longer.
> 
>>
>>> Awesome, I looked at the code you posted and it doesn't look like you
>>> have had too many troubles. One thing that stood out to me, why did
>>> you need to put it in the kunit/ dir?
>>
>> Yeah, writing the code was super easy. Only after, did I realized I
>> couldn't get it to easily build.
> 
> Yeah, we really need to fix that; unfortunately, broadly addressing
> that problem is really hard and will most likely take a long time.
> 
>>
>> Putting it in the kunit directory was necessary because nothing in the
>> NTB tree builds unless CONFIG_NTB is set (see drivers/Makefile) and
>> CONFIG_NTB depends on CONFIG_PCI. I didn't experiment to see how hard it
>> would be to set CONFIG_NTB without CONFIG_PCI; I assumed it would be tricky.
>>
>>> I am looking forward to see what you think!
>>
>> Generally, I'm impressed and want to see this work in upstream as soon
>> as possible so I can start to make use of it!
> 
> Great to hear! I was trying to get the next revision out this week,
> but addressing some of the comments is taking a little longer than
> expected. I should have something together fairly soon though
> (hopefully next week). Good news is that next revision will be
> non-RFC; most of the feedback has settled down and I think we are
> ready to start figuring out how to merge it. Fingers crossed :-)
> 
> Cheers

I'll be out of the office next week and will not be able to review.
Please hold off on any devicetree related files until after I review.

Thanks,

Frank
Frank Rowand March 22, 2019, 1:23 a.m. UTC | #19
On 3/4/19 3:01 PM, Brendan Higgins wrote:
> On Thu, Feb 14, 2019 at 1:38 PM Brendan Higgins
> <brendanhiggins@google.com> wrote:
>>
>> This patch set proposes KUnit, a lightweight unit testing and mocking
>> framework for the Linux kernel.
>>
> 
> <snip>
> 
>> ## More information on KUnit
>>
>> There is a bunch of documentation near the end of this patch set that
>> describes how to use KUnit and best practices for writing unit tests.
>> For convenience I am hosting the compiled docs here:
>> https://google.github.io/kunit-docs/third_party/kernel/docs/
>> Additionally for convenience, I have applied these patches to a branch:
>> https://kunit.googlesource.com/linux/+/kunit/rfc/5.0-rc5/v4
>> The repo may be cloned with:
>> git clone https://kunit.googlesource.com/linux
>> This patchset is on the kunit/rfc/5.0-rc5/v4 branch.
>>
>> ## Changes Since Last Version
>>
>>  - Got KUnit working on (hypothetically) all architectures (tested on
>>    x86), as per Rob's (and other's) request
>>  - Punting all KUnit features/patches depending on UML for now.
>>  - Broke out UML specific support into arch/um/* as per "[RFC v3 01/19]
>>    kunit: test: add KUnit test runner core", as requested by Luis.
>>  - Added support to kunit_tool to allow it to build kernels in external
>>    directories, as suggested by Kieran.
>>  - Added a UML defconfig, and a config fragment for KUnit as suggested
>>    by Kieran and Luis.
>>  - Cleaned up, and reformatted a bunch of stuff.
>>
>> --
>> 2.21.0.rc0.258.g878e2cd30e-goog
>>
> 
> Someone suggested I should send the next revision out as "PATCH"
> instead of "RFC" since there seems to be general consensus about
> everything at a high level, with a couple exceptions.
> 
> At this time I am planning on sending the next revision out as "[PATCH
> v1 00/NN] kunit: introduce KUnit, the Linux kernel unit testing
> framework". Initially I wasn't sure if the next revision should be
> "[PATCH v1 ...]" or "[PATCH v5 ...]". Please let me know if you have a
> strong objection to the former.
> 
> In the next revision, I will be dropping the last two of three patches
> for the DT unit tests as there doesn't seem to be enough features
> currently available to justify the heavy refactoring I did; however, I

Thank you.


> will still include the patch that just converts everything over to
> KUnit without restructuring the test cases:
> https://lkml.org/lkml/2019/2/14/1133

The link doesn't work for me (don't worry about that), so I'm assuming
this is:

   [RFC v4 15/17] of: unittest: migrate tests to run on KUnit

The conversation on that patch ended after:

   >> After adding patch 15, there are a lot of "unittest internal error" messages.
   > 
   > Yeah, I meant to ask you about that. I thought it was due to a change
   > you made, but after further examination, just now, I found it was my
   > fault. Sorry for not mentioning that anywhere. I will fix it in v5.

It is not worth my time to look at patch 15 when it is that broken.  So I
have not done any review of it.

So no, I think you are still in the RFC stage unless you drop patch 15.

> 
> I should have the next revision out in a week or so.
>
Brendan Higgins March 25, 2019, 10:11 p.m. UTC | #20
On Thu, Mar 21, 2019 at 6:23 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 3/4/19 3:01 PM, Brendan Higgins wrote:
> > On Thu, Feb 14, 2019 at 1:38 PM Brendan Higgins
< snip >
> > Someone suggested I should send the next revision out as "PATCH"
> > instead of "RFC" since there seems to be general consensus about
> > everything at a high level, with a couple exceptions.
> >
> > At this time I am planning on sending the next revision out as "[PATCH
> > v1 00/NN] kunit: introduce KUnit, the Linux kernel unit testing
> > framework". Initially I wasn't sure if the next revision should be
> > "[PATCH v1 ...]" or "[PATCH v5 ...]". Please let me know if you have a
> > strong objection to the former.
> >
> > In the next revision, I will be dropping the last two of three patches
> > for the DT unit tests as there doesn't seem to be enough features
> > currently available to justify the heavy refactoring I did; however, I
>
> Thank you.
>
>
> > will still include the patch that just converts everything over to
> > KUnit without restructuring the test cases:
> > https://lkml.org/lkml/2019/2/14/1133
>
> The link doesn't work for me (don't worry about that), so I'm assuming
> this is:
>
>    [RFC v4 15/17] of: unittest: migrate tests to run on KUnit

That's correct.

>
> The conversation on that patch ended after:
>
>    >> After adding patch 15, there are a lot of "unittest internal error" messages.
>    >
>    > Yeah, I meant to ask you about that. I thought it was due to a change
>    > you made, but after further examination, just now, I found it was my
>    > fault. Sorry for not mentioning that anywhere. I will fix it in v5.
>
> It is not worth my time to look at patch 15 when it is that broken.  So I
> have not done any review of it.

Right, I didn't expect you to; we were still discussing things on RFC
v3 at the time. I think I got your comments on v3 in a very short time
frame around sending out v4; hence your comments were not
addressed.

>
> So no, I think you are still in the RFC stage unless you drop patch 15.

Noted. I might split that out into a separate RFC then.

>
> >
> > I should have the next revision out in a week or so.
> >
>

Cheers!
Brendan Higgins March 25, 2019, 10:12 p.m. UTC | #21
On Thu, Mar 21, 2019 at 6:12 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 3/21/19 4:33 PM, Brendan Higgins wrote:
> > On Thu, Mar 21, 2019 at 3:27 PM Logan Gunthorpe <logang@deltatee.com> wrote:
> >>
> >>
> >>
> >> On 2019-03-21 4:07 p.m., Brendan Higgins wrote:
> >>> A couple of points, as for needing CONFIG_PCI; my plan to deal with
> >>> that type of thing has been that we would add support for a KUnit/UML
> >>> version that is just for KUnit. It would mock out the necessary bits
> >>> to provide a fake hardware implementation for anything that might
> >>> depend on it. I wrote a prototype for mocking/faking MMIO that I
> >>> presented to the list here[1]; it is not part of the current patchset
> >>> because we decided it would be best to focus on getting an MVP in, but
> >>> I plan on bringing it back up at some point. Anyway, what do you
> >>> generally think of this approach?
> >>
> >> Yes, I was wondering if that might be possible. I think that's a great
> >> approach but it will unfortunately take a lot of work before larger
> >> swaths of the kernel are testable in Kunit with UML. Having more common
> >> mocked infrastructure will be great by-product of it though.
> >
> > Yeah, it's unfortunate that the best way to do something often takes
> > so much longer.
> >
> >>
> >>> Awesome, I looked at the code you posted and it doesn't look like you
> >>> have had too many troubles. One thing that stood out to me, why did
> >>> you need to put it in the kunit/ dir?
> >>
> >> Yeah, writing the code was super easy. Only after, did I realized I
> >> couldn't get it to easily build.
> >
> > Yeah, we really need to fix that; unfortunately, broadly addressing
> > that problem is really hard and will most likely take a long time.
> >
> >>
> >> Putting it in the kunit directory was necessary because nothing in the
> >> NTB tree builds unless CONFIG_NTB is set (see drivers/Makefile) and
> >> CONFIG_NTB depends on CONFIG_PCI. I didn't experiment to see how hard it
> >> would be to set CONFIG_NTB without CONFIG_PCI; I assumed it would be tricky.
> >>
> >>> I am looking forward to see what you think!
> >>
> >> Generally, I'm impressed and want to see this work in upstream as soon
> >> as possible so I can start to make use of it!
> >
> > Great to hear! I was trying to get the next revision out this week,
> > but addressing some of the comments is taking a little longer than
> > expected. I should have something together fairly soon though
> > (hopefully next week). Good news is that next revision will be
> > non-RFC; most of the feedback has settled down and I think we are
> > ready to start figuring out how to merge it. Fingers crossed :-)
> >
> > Cheers
>
> I'll be out of the office next week and will not be able to review.
> Please hold off on any devicetree related files until after I review.

Will do.