[RFC,v3,00/19] kunit: introduce KUnit, the Linux kernel unit testing framework

Message ID: 20181128193636.254378-1-brendanhiggins@google.com

Message

Brendan Higgins Nov. 28, 2018, 7:36 p.m. UTC
This patch set proposes KUnit, a lightweight unit testing and mocking
framework for the Linux kernel.

Unlike Autotest and kselftest, KUnit is a true unit testing framework;
it does not require installing the kernel on a test machine or in a VM
and does not require tests to be written in userspace running on a host
kernel. Additionally, KUnit is fast: from invocation to completion it
can run several dozen tests in under a second. Currently, KUnit's own
test suite runs in under a second from initial invocation (build time
excluded).

KUnit is heavily inspired by JUnit, Python's unittest.mock, and
Googletest/Googlemock for C++. KUnit provides facilities for defining
unit test cases, grouping related test cases into test suites, running
tests on common infrastructure, mocking, spying, and much more.
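
To give a flavor of the API, here is a minimal sketch of a test case and
a suite (illustrative only: the exact struct and macro names may differ
from what is in this series, and add() is a stand-in for code under test):

```c
#include <kunit/test.h>

/* Hypothetical code under test. */
static int add(int a, int b)
{
        return a + b;
}

/* A test case is a function that receives a test context. */
static void add_test_basic(struct kunit *test)
{
        KUNIT_EXPECT_EQ(test, 3, add(1, 2));
        KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
}

/* Related test cases are grouped into a test suite. */
static struct kunit_case add_test_cases[] = {
        KUNIT_CASE(add_test_basic),
        {}
};

static struct kunit_suite add_test_suite = {
        .name = "add-test",
        .test_cases = add_test_cases,
};
kunit_test_suite(add_test_suite);
```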

## What's so special about unit testing?

A unit test is supposed to test a single unit of code in isolation,
hence the name. There should be no dependencies outside the control of
the test; this means no external dependencies, which makes tests orders
of magnitude faster. Likewise, since there are no external dependencies,
there are no hoops to jump through to run the tests. Additionally, this
makes unit tests deterministic: a failing unit test always indicates a
problem. Finally, because unit tests necessarily have finer granularity,
they are able to test all code paths easily, solving the classic problem
of how difficult it is to exercise error handling code.
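
For example, take an error path that is painful to trigger on a running
system; in a unit test it is a one-liner. (A sketch only: foo_init() is
hypothetical, not from any patch in this series.)

```c
#include <kunit/test.h>
#include <linux/errno.h>

/* Hypothetical code under test: rejects a bad argument up front. */
static int foo_init(void *cfg)
{
        if (!cfg)
                return -EINVAL;
        return 0;
}

static void foo_init_rejects_null_test(struct kunit *test)
{
        /* The error path is exercised directly; no fault injection needed. */
        KUNIT_EXPECT_EQ(test, -EINVAL, foo_init(NULL));
}
```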

## Is KUnit trying to replace other testing frameworks for the kernel?

No. Most existing tests for the Linux kernel are end-to-end tests, which
have their place. A well-tested system has lots of unit tests, a
reasonable number of integration tests, and some end-to-end tests. KUnit
is just trying to fill the unit test space, which is currently not
being addressed.

## More information on KUnit

There is a bunch of documentation near the end of this patch set that
describes how to use KUnit and best practices for writing unit tests.
For convenience, I am hosting the compiled docs here:
https://google.github.io/kunit-docs/third_party/kernel/docs/
I have also applied these patches to a branch for convenience:
https://kunit.googlesource.com/linux/+/kunit/rfc/4.19/v3
The repo may be cloned with:
git clone https://kunit.googlesource.com/linux
This patchset is on the kunit/rfc/4.19/v3 branch.

## Changes Since Last Version

 - Changed namespace prefix from `test_*` to `kunit_*` as requested by
   Shuah.
 - Started converting/cleaning up the device tree unittest to use KUnit.
 - Started adding KUnit expectations with custom messages.

Comments

Frank Rowand Dec. 4, 2018, 10:52 a.m. UTC | #1
On 11/28/18 11:36 AM, Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
> 
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: from invocation to completion it
> can run several dozen tests in under a second. Currently, KUnit's own
> test suite runs in under a second from initial invocation (build time
> excluded).
> 
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, running
> tests on common infrastructure, mocking, spying, and much more.
> 
> ## What's so special about unit testing?
> 

> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitude faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this

This question might be a misunderstanding of the intent of some of the
terminology in the above paragraph, so this is mostly a request for
clarification.

With my preconception of what unit tests are, I read "test a single unit
of code" to mean a relatively narrow piece of a subsystem.  So if I
understand correctly, taking examples from patch 17 "of: unittest:
migrate tests to run on KUnit", each function call like
KUNIT_ASSERT_NOT_ERR_OR_NULL(), KUNIT_EXPECT_STREQ_MSG(), and
KUNIT_EXPECT_EQ_MSG() is a separate unit test, and thus the
paragraph says that each of these function calls should have no
dependencies outside the test.  Do I understand that correctly?
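
To put my question in code terms, with a hypothetical sketch along the
lines of patch 17 (the node path and property name are made up): is each
macro invocation below a separate "unit test" that must have no outside
dependencies, or is the containing function the unit?

```c
#include <kunit/test.h>
#include <linux/of.h>

static void of_unittest_example(struct kunit *test)
{
        struct device_node *np = of_find_node_by_path("/testcase-data");
        u32 val;

        /* Is this assertion one unit test... */
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
        /* ...and this expectation a second, separate unit test? */
        KUNIT_EXPECT_EQ_MSG(test, 0,
                            of_property_read_u32(np, "some-property", &val),
                            "some-property should exist and hold a u32");
        of_node_put(np);
}
```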

< snip >

-Frank
Frank Rowand Dec. 4, 2018, 11:40 a.m. UTC | #2
Hi Brendan, Rob,

On 11/28/18 11:36 AM, Brendan Higgins wrote:
> < snip >
>
> ## Changes Since Last Version
> 
>  - Changed namespace prefix from `test_*` to `kunit_*` as requested by
>    Shuah.

>  - Started converting/cleaning up the device tree unittest to use KUnit.
>  - Started adding KUnit expectations with custom messages.
> 

Sorry I missed your reply to me in the v1 patch thread.  I've been
traveling a lot the last few weeks.  I'm starting to read messages
that occurred late in the v1 patch thread and the v2 patch thread,
so I'm just coming up to speed on this.

My comments below are motivated by adding the devicetree unittest to
this version of the patch series.

Pulling a comment from way back in the v1 patch thread:

On 10/17/18 3:22 PM, Brendan Higgins wrote:
> On Wed, Oct 17, 2018 at 10:49 AM <Tim.Bird@sony.com> wrote:

< snip >

> The test and the code under test are linked together in the same
> binary and are compiled under Kbuild. Right now I am linking
> everything into a UML kernel, but I would ultimately like to make
> tests compile into completely independent test binaries. So each test
> file would get compiled into its own test binary and would link
> against only the code needed to run the test, but we are a bit of a
> ways off from that.

I have never used UML, so you should expect naive questions from me,
exhibiting my lack of understanding.

Does this mean that I have to build a UML architecture kernel to run
the KUnit tests?

*** Rob, if the answer is yes, then it seems like for my workflow,
which is to build for real ARM hardware, my work is doubled (or
worse), because for every patch/commit that I apply, I not only have
to build the ARM kernel and boot on the real hardware to test, I also
have to build the UML kernel and boot in UML.  If that is correct
then I see this as a major problem for me.

Brendan, in the above quote you said that in the future you would
like to make the "tests compile into completely independent test
binaries".  I am assuming those are intended to run as standalone
user space programs instead of inside UML.  Is that correct?  If
so, how will KUnit tests be able to test code that uses locking
mechanisms that require instructions that are not available to
user space execution?  (I _think_ that such instructions may be
present, depending on which locking mechanism, but I might be
mistaken.)

Another possible concern that I have with removing the devicetree
unit tests from my normal kernel build process is that I think it
removes the ability to use sparse to analyze the unit test source.
Please correct me if I misunderstand that.

Another issue is that the devicetree unit tests will no longer
be cross compiled with my ARM compiler, so I lose a small
amount of testing for compiler related issues.

Overall, I'm still trying to learn enough to determine whether
the gains from moving to KUnit outweigh the losses.

-Frank
Rob Herring (Arm) Dec. 4, 2018, 1:49 p.m. UTC | #3
On Tue, Dec 4, 2018 at 5:40 AM Frank Rowand <frowand.list@gmail.com> wrote:
>
> Hi Brendan, Rob,
>
> On 11/28/18 11:36 AM, Brendan Higgins wrote:
> > < snip >
>
> Sorry I missed your reply to me in the v1 patch thread.  I've been
> traveling a lot the last few weeks.  I'm starting to read messages
> that occurred late in the v1 patch thread and the v2 patch thread,
> so I'm just coming up to speed on this.
>
> My comments below are motivated by adding the devicetree unittest to
> this version of the patch series.
>
> Pulling a comment from way back in the v1 patch thread:
>
> On 10/17/18 3:22 PM, Brendan Higgins wrote:
> > On Wed, Oct 17, 2018 at 10:49 AM <Tim.Bird@sony.com> wrote:
>
> < snip >
>
> > The test and the code under test are linked together in the same
> > binary and are compiled under Kbuild. Right now I am linking
> > everything into a UML kernel, but I would ultimately like to make
> > tests compile into completely independent test binaries. So each test
> > file would get compiled into its own test binary and would link
> > against only the code needed to run the test, but we are a bit of a
> > ways off from that.
>
> I have never used UML, so you should expect naive questions from me,
> exhibiting my lack of understanding.
>
> Does this mean that I have to build a UML architecture kernel to run
> the KUnit tests?

In this version of the patch series, yes.

> *** Rob, if the answer is yes, then it seems like for my workflow,
> which is to build for real ARM hardware, my work is doubled (or
> worse), because for every patch/commit that I apply, I not only have
> to build the ARM kernel and boot on the real hardware to test, I also
> have to build the UML kernel and boot in UML.  If that is correct
> then I see this as a major problem for me.

I've already raised this issue elsewhere in the series. Restricting
the DT tests to UML is a non-starter.

> Brendan, in the above quote you said that in the future you would
> like to make the "tests compile into completely independent test
> binaries".  I am assuming those are intended to run as standalone
> user space programs instead of inside UML.  Is that correct?  If
> so, how will KUnit tests be able to test code that uses locking
> mechanisms that require instructions that are not available to
> user space execution?  (I _think_ that such instructions may be
> present, depending on which locking mechanism, but I might be
> mistaken.)

I think he means as kernel modules, since KUnit is for testing internal
kernel interfaces; kselftest is for userspace-level tests.

If this were true about locking, then UML itself would not be viable.

> Another possible concern that I have with removing the devicetree
> unit tests from my normal kernel build process is that I think it
> removes the ability to use sparse to analyze the unit test source.
> Please correct me if I misunderstand that.
>
> Another issue is that the devicetree unit tests will no longer
> be cross compiled with my ARM compiler, so I lose a small
> amount of testing for compiler related issues.

0-day does that for you. :)

> Overall, I'm still trying to learn enough to determine whether
> the gains from moving to KUnit outweigh the losses.
>
> -Frank
Brendan Higgins Dec. 5, 2018, 11:10 p.m. UTC | #4
On Tue, Dec 4, 2018 at 5:49 AM Rob Herring <robh@kernel.org> wrote:
>
> On Tue, Dec 4, 2018 at 5:40 AM Frank Rowand <frowand.list@gmail.com> wrote:
> >
> > Hi Brendan, Rob,
> >
> > Pulling a comment from way back in the v1 patch thread:
> >
> > On 10/17/18 3:22 PM, Brendan Higgins wrote:
> > > On Wed, Oct 17, 2018 at 10:49 AM <Tim.Bird@sony.com> wrote:
> >
> > < snip >
> >
> > > The test and the code under test are linked together in the same
> > > binary and are compiled under Kbuild. Right now I am linking
> > > everything into a UML kernel, but I would ultimately like to make
> > > tests compile into completely independent test binaries. So each test
> > > file would get compiled into its own test binary and would link
> > > against only the code needed to run the test, but we are a bit of a
> > > ways off from that.
> >
> > I have never used UML, so you should expect naive questions from me,
> > exhibiting my lack of understanding.
> >
> > Does this mean that I have to build a UML architecture kernel to run
> > the KUnit tests?
>
> In this version of the patch series, yes.
>
> > *** Rob, if the answer is yes, then it seems like for my workflow,
> > which is to build for real ARM hardware, my work is doubled (or
> > worse), because for every patch/commit that I apply, I not only have
> > to build the ARM kernel and boot on the real hardware to test, I also
> > have to build the UML kernel and boot in UML.  If that is correct
> > then I see this as a major problem for me.
>
> I've already raised this issue elsewhere in the series. Restricting
> the DT tests to UML is a non-starter.

I have already stated my position elsewhere on the matter, but in
summary: Ensuring most tests can run without external dependencies
(hardware, VM, etc.) has a lot of benefits and should be supported in
nearly all cases, but such tests should also work when compiled to run
on real hardware/VM; the tooling might not be as good in the latter
case, but I understand that there are good reasons to support it
nonetheless.

So I am going to try to add basic support for running tests on other
architectures in the next version or two.

>
> > Brendan, in the above quote you said that in the future you would
> > like to make the "tests compile into completely independent test
> > binaries".  I am assuming those are intended to run as standalone
> > user space programs instead of inside UML.  Is that correct?  If
> > so, how will KUnit tests be able to test code that uses locking
> > mechanisms that require instructions that are not available to
> > user space execution?  (I _think_ that such instructions may be
> > present, depending on which locking mechanism, but I might be
> > mistaken.)
>
> I think he means as kernel modules, since KUnit is for testing internal
> kernel interfaces; kselftest is for userspace-level tests.

Frank is right: my long-term goal is to make it so unit tests can run
as standalone user space programs.

>
> If this were true about locking, then UML itself would not be viable.
>
> > Another possible concern that I have with removing the devicetree
> > unit tests from my normal kernel build process is that I think it
> > removes the ability to use sparse to analyze the unit test source.
> > Please correct me if I misunderstand that.
> >
> > Another issue is that the devicetree unit tests will no longer
> > be cross compiled with my ARM compiler, so I lose a small
> > amount of testing for compiler related issues.
>
> 0-day does that for you. :)
>
> > Overall, I'm still trying to learn enough to determine whether
> > the gains from moving to KUnit outweigh the losses.

Of course.

From what I have seen so far, the DT unittests seem like a pretty good
use case for KUnit. If you don't mind, what frustrates you most about
the tests you have now?

What are the most common breakages you see?

When do they get caught?

My initial reaction when I looked at the tests was that it seemed like
it would be hard to understand what caused a failure, and it seemed
non-obvious where a test for a new feature should go.

To me, the thing that seemed like it needed the most work was
refactoring the tests to make them easier to understand. For example,
when I started breaking the tests apart, I found some cases that I
really had to stare at (or run diff on them) to figure out what they
did differently.
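
To make that concrete, here is a rough sketch of the direction I mean,
using the *_MSG variants so that a failure explains itself (the node
path, property name, and value are illustrative, not lifted from the
actual patches):

```c
#include <kunit/test.h>
#include <linux/of.h>

static void of_unittest_string_property(struct kunit *test)
{
        struct device_node *np = of_find_node_by_path("/testcase-data");
        const char *strval;

        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, np);
        KUNIT_ASSERT_EQ(test, 0,
                        of_property_read_string(np, "example-prop", &strval));
        /* On failure this prints an explanation, not just two raw strings. */
        KUNIT_EXPECT_STREQ_MSG(test, "example value", strval,
                               "example-prop did not contain the expected string");
        of_node_put(np);
}
```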

Looking forward to getting your thoughts.
Frank Rowand March 22, 2019, 12:27 a.m. UTC | #5
On 12/5/18 3:10 PM, Brendan Higgins wrote:
> On Tue, Dec 4, 2018 at 5:49 AM Rob Herring <robh@kernel.org> wrote:
>>
>> On Tue, Dec 4, 2018 at 5:40 AM Frank Rowand <frowand.list@gmail.com> wrote:
>> < snip >
> 

> I have already stated my position elsewhere on the matter, but in
> summary: Ensuring most tests can run without external dependencies
> (hardware, VM, etc.) has a lot of benefits and should be supported in
> nearly all cases, but such tests should also work when compiled to run
> on real hardware/VM; the tooling might not be as good in the latter
> case, but I understand that there are good reasons to support it
> nonetheless.

And my needs are the exact opposite.  My tests must run on real hardware,
in the context of the real operating system, with subsystems and drivers
potentially causing issues.

It is useful if the tests can also run without that dependency.

-Frank


> 
> So I am going to try to add basic support for running tests on other
> architectures in the next version or two.

< snip >
Brendan Higgins March 25, 2019, 10:04 p.m. UTC | #6
On Thu, Mar 21, 2019 at 5:28 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 12/5/18 3:10 PM, Brendan Higgins wrote:
> > < snip >
>
> > I have already stated my position elsewhere on the matter, but in
> > summary: Ensuring most tests can run without external dependencies
> > (hardware, VM, etc.) has a lot of benefits and should be supported in
> > nearly all cases, but such tests should also work when compiled to run
> > on real hardware/VM; the tooling might not be as good in the latter
> > case, but I understand that there are good reasons to support it
> > nonetheless.
>
> And my needs are the exact opposite.  My tests must run on real hardware,
> in the context of the real operating system, with subsystems and drivers
> potentially causing issues.

Right, Rob pointed this out, and I fixed it in v4. To be clear, as
of RFC v4 you can run KUnit tests on non-UML architectures; we tested
it on x86 and ARM.

>
> It is useful if the tests can also run without that dependency.

This, of course, is still the main intended use case, but there is
nothing to stop you from using it on real hardware.