mbox series

[v2,00/17] kunit: introduce KUnit, the Linux kernel unit testing framework

Message ID 20190501230126.229218-1-brendanhiggins@google.com (mailing list archive)
Headers show
Series kunit: introduce KUnit, the Linux kernel unit testing framework | expand

Message

Brendan Higgins May 1, 2019, 11:01 p.m. UTC
## TLDR

I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
5.2.

Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
we would merge through your tree when the time came? Am I remembering
correctly?

## Background

This patch set proposes KUnit, a lightweight unit testing and mocking
framework for the Linux kernel.

Unlike Autotest and kselftest, KUnit is a true unit testing framework;
it does not require installing the kernel on a test machine or in a VM
and does not require tests to be written in userspace running on a host
kernel. Additionally, KUnit is fast: From invocation to completion KUnit
can run several dozen tests in under a second. Currently, the entire
KUnit test suite for KUnit runs in under a second from the initial
invocation (build time excluded).

KUnit is heavily inspired by JUnit, Python's unittest.mock, and
Googletest/Googlemock for C++. KUnit provides facilities for defining
unit test cases, grouping related test cases into test suites, providing
common infrastructure for running tests, mocking, spying, and much more.

## What's so special about unit testing?

A unit test is supposed to test a single unit of code in isolation,
hence the name. There should be no dependencies outside the control of
the test; this means no external dependencies, which makes tests orders
of magnitudes faster. Likewise, since there are no external dependencies,
there are no hoops to jump through to run the tests. Additionally, this
makes unit tests deterministic: a failing unit test always indicates a
problem. Finally, because unit tests necessarily have finer granularity,
they are able to test all code paths easily solving the classic problem
of difficulty in exercising error handling code.

## Is KUnit trying to replace other testing frameworks for the kernel?

No. Most existing tests for the Linux kernel are end-to-end tests, which
have their place. A well tested system has lots of unit tests, a
reasonable number of integration tests, and some end-to-end tests. KUnit
is just trying to address the unit test space which is currently not
being addressed.

## More information on KUnit

There is a bunch of documentation near the end of this patch set that
describes how to use KUnit and best practices for writing unit tests.
For convenience I am hosting the compiled docs here:
https://google.github.io/kunit-docs/third_party/kernel/docs/
Additionally for convenience, I have applied these patches to a branch:
https://kunit.googlesource.com/linux/+/kunit/rfc/v5.1-rc7/v1
The repo may be cloned with:
git clone https://kunit.googlesource.com/linux
This patchset is on the kunit/rfc/v5.1-rc7/v1 branch.

## Changes Since Last Version

None. I just rebased the last patchset on v5.1-rc7.

Comments

Greg Kroah-Hartman May 2, 2019, 10:50 a.m. UTC | #1
On Wed, May 01, 2019 at 04:01:09PM -0700, Brendan Higgins wrote:
> ## TLDR
> 
> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> 5.2.

That might be rushing it, normally trees are already closed now for
5.2-rc1 if 5.1-final comes out this Sunday.

> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> we would merge through your tree when the time came? Am I remembering
> correctly?

No objection from me.

Let me go review the latest round of patches now.

thanks,

greg k-h
Greg Kroah-Hartman May 2, 2019, 11:05 a.m. UTC | #2
On Thu, May 02, 2019 at 12:50:53PM +0200, Greg KH wrote:
> On Wed, May 01, 2019 at 04:01:09PM -0700, Brendan Higgins wrote:
> > ## TLDR
> > 
> > I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> > 5.2.
> 
> That might be rushing it, normally trees are already closed now for
> 5.2-rc1 if 5.1-final comes out this Sunday.
> 
> > Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> > we would merge through your tree when the time came? Am I remembering
> > correctly?
> 
> No objection from me.
> 
> Let me go review the latest round of patches now.

Overall, looks good to me, and provides a framework we can build on.
I'm a bit annoyed at the reliance on uml at the moment, but we can work
on that in the future :)

Thanks for sticking with this, now the real work begins...

Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Shuah May 2, 2019, 2:04 p.m. UTC | #3
On 5/2/19 4:50 AM, Greg KH wrote:
> On Wed, May 01, 2019 at 04:01:09PM -0700, Brendan Higgins wrote:
>> ## TLDR
>>
>> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
>> 5.2.
> 
> That might be rushing it, normally trees are already closed now for
> 5.2-rc1 if 5.1-final comes out this Sunday.
> 
>> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
>> we would merge through your tree when the time came? Am I remembering
>> correctly?
> 
> No objection from me.
> 

Yes. I can take these through kselftest tree when the time comes.
Agree with Greg that 5.2 might be rushing it. 5.3 would be a good
target.

thanks,
-- Shuah
Brendan Higgins May 3, 2019, 12:41 a.m. UTC | #4
On Thu, May 2, 2019 at 4:05 AM Greg KH <gregkh@linuxfoundation.org> wrote:
>
> On Thu, May 02, 2019 at 12:50:53PM +0200, Greg KH wrote:
> > On Wed, May 01, 2019 at 04:01:09PM -0700, Brendan Higgins wrote:
> > > ## TLDR
> > >
> > > I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> > > 5.2.
> >
> > That might be rushing it, normally trees are already closed now for
> > 5.2-rc1 if 5.1-final comes out this Sunday.
> >
> > > Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> > > we would merge through your tree when the time came? Am I remembering
> > > correctly?
> >
> > No objection from me.
> >
> > Let me go review the latest round of patches now.
>
> Overall, looks good to me, and provides a framework we can build on.
> I'm a bit annoyed at the reliance on uml at the moment, but we can work
> on that in the future :)

Eh, I mostly fixed that.

I removed the KUnit framework's reliance on UML i.e. the actual tests
now run on any architecture.

The only UML dependent bit is the KUnit wrapper scripts, which could
be made to work to support other architectures pretty trivially. The
only limitation here is that it would be dependent on the actual
workflow you are using.

In anycase, if you are comfortable reading the results in the kernel
logs, then there is no dependence on UML. (I should probably provide
some documentation on that...)

>
> Thanks for sticking with this, now the real work begins...

I don't doubt it.

>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Does this cover all the patches in this set?

Thanks!
Brendan Higgins May 3, 2019, 12:44 a.m. UTC | #5
On Thu, May 2, 2019 at 7:04 AM shuah <shuah@kernel.org> wrote:
>
> On 5/2/19 4:50 AM, Greg KH wrote:
> > On Wed, May 01, 2019 at 04:01:09PM -0700, Brendan Higgins wrote:
> >> ## TLDR
> >>
> >> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> >> 5.2.
> >
> > That might be rushing it, normally trees are already closed now for
> > 5.2-rc1 if 5.1-final comes out this Sunday.
> >
> >> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> >> we would merge through your tree when the time came? Am I remembering
> >> correctly?
> >
> > No objection from me.
> >
>
> Yes. I can take these through kselftest tree when the time comes.

Awesome.

> Agree with Greg that 5.2 might be rushing it. 5.3 would be a good
> target.

Whoops. I guess I should have sent this out a bit earlier. Oh well, as
long as we are on our way!
Logan Gunthorpe May 3, 2019, 3:18 a.m. UTC | #6
On 2019-05-01 5:01 p.m., Brendan Higgins wrote:
> ## TLDR
> 
> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> 5.2.
> 

As I said on the last posting, I like this and would like to see it move
forward. I still have the same concerns over the downsides of using UML
(ie. not being able to compile large swaths of the tree due to features
that don't exist in that arch) but these are concerns for later.

I'd prefer to see the unnecessary indirection that I pointed out in
patch 8 cleaned up but, besides that, the code looks good to me.

Reviewed-by: Logan Gunthorpe <logang@deltatee.com>

Thanks!

Logan
Frank Rowand May 7, 2019, 3:14 a.m. UTC | #7
On 5/1/19 4:01 PM, Brendan Higgins wrote:
> ## TLDR
> 
> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> 5.2.
> 
> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> we would merge through your tree when the time came? Am I remembering
> correctly?
> 
> ## Background
> 
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
> 
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
> 
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.

As a result of the emails replying to this patch thread, I am now
starting to look at kselftest.  My level of understanding is based
on some slide presentations, an LWN article, https://kselftest.wiki.kernel.org/
and a _tiny_ bit of looking at kselftest code.

tl;dr; I don't really understand kselftest yet.


(1) why KUnit exists

> ## What's so special about unit testing?
> 
> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitudes faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this
> makes unit tests deterministic: a failing unit test always indicates a
> problem. Finally, because unit tests necessarily have finer granularity,
> they are able to test all code paths easily solving the classic problem
> of difficulty in exercising error handling code.

(2) KUnit is not meant to replace kselftest

> ## Is KUnit trying to replace other testing frameworks for the kernel?
> 
> No. Most existing tests for the Linux kernel are end-to-end tests, which
> have their place. A well tested system has lots of unit tests, a
> reasonable number of integration tests, and some end-to-end tests. KUnit
> is just trying to address the unit test space which is currently not
> being addressed.

My understanding is that the intent of KUnit is to avoid booting a kernel on
real hardware or in a virtual machine.  That seems to be a matter of semantics
to me because isn't invoking a UML Linux just running the Linux kernel in
a different form of virtualization?

So I do not understand why KUnit is an improvement over kselftest.

It seems to me that KUnit is just another piece of infrastructure that I
am going to have to be familiar with as a kernel developer.  More overhead,
more information to stuff into my tiny little brain.

I would guess that some developers will focus on just one of the two test
environments (and some will focus on both), splitting the development
resources instead of pooling them on a common infrastructure.

What am I missing?

-Frank


> 
> ## More information on KUnit
> 
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/
> Additionally for convenience, I have applied these patches to a branch:
> https://kunit.googlesource.com/linux/+/kunit/rfc/v5.1-rc7/v1
> The repo may be cloned with:
> git clone https://kunit.googlesource.com/linux
> This patchset is on the kunit/rfc/v5.1-rc7/v1 branch.
> 
> ## Changes Since Last Version
> 
> None. I just rebased the last patchset on v5.1-rc7.
>
Greg Kroah-Hartman May 7, 2019, 8:01 a.m. UTC | #8
On Mon, May 06, 2019 at 08:14:12PM -0700, Frank Rowand wrote:
> On 5/1/19 4:01 PM, Brendan Higgins wrote:
> > ## TLDR
> > 
> > I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> > 5.2.
> > 
> > Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> > we would merge through your tree when the time came? Am I remembering
> > correctly?
> > 
> > ## Background
> > 
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
> > 
> > Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> > it does not require installing the kernel on a test machine or in a VM
> > and does not require tests to be written in userspace running on a host
> > kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> > can run several dozen tests in under a second. Currently, the entire
> > KUnit test suite for KUnit runs in under a second from the initial
> > invocation (build time excluded).
> > 
> > KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> > Googletest/Googlemock for C++. KUnit provides facilities for defining
> > unit test cases, grouping related test cases into test suites, providing
> > common infrastructure for running tests, mocking, spying, and much more.
> 
> As a result of the emails replying to this patch thread, I am now
> starting to look at kselftest.  My level of understanding is based
> on some slide presentations, an LWN article, https://kselftest.wiki.kernel.org/
> and a _tiny_ bit of looking at kselftest code.
> 
> tl;dr; I don't really understand kselftest yet.
> 
> 
> (1) why KUnit exists
> 
> > ## What's so special about unit testing?
> > 
> > A unit test is supposed to test a single unit of code in isolation,
> > hence the name. There should be no dependencies outside the control of
> > the test; this means no external dependencies, which makes tests orders
> > of magnitudes faster. Likewise, since there are no external dependencies,
> > there are no hoops to jump through to run the tests. Additionally, this
> > makes unit tests deterministic: a failing unit test always indicates a
> > problem. Finally, because unit tests necessarily have finer granularity,
> > they are able to test all code paths easily solving the classic problem
> > of difficulty in exercising error handling code.
> 
> (2) KUnit is not meant to replace kselftest
> 
> > ## Is KUnit trying to replace other testing frameworks for the kernel?
> > 
> > No. Most existing tests for the Linux kernel are end-to-end tests, which
> > have their place. A well tested system has lots of unit tests, a
> > reasonable number of integration tests, and some end-to-end tests. KUnit
> > is just trying to address the unit test space which is currently not
> > being addressed.
> 
> My understanding is that the intent of KUnit is to avoid booting a kernel on
> real hardware or in a virtual machine.  That seems to be a matter of semantics
> to me because isn't invoking a UML Linux just running the Linux kernel in
> a different form of virtualization?
> 
> So I do not understand why KUnit is an improvement over kselftest.
> 
> It seems to me that KUnit is just another piece of infrastructure that I
> am going to have to be familiar with as a kernel developer.  More overhead,
> more information to stuff into my tiny little brain.
> 
> I would guess that some developers will focus on just one of the two test
> environments (and some will focus on both), splitting the development
> resources instead of pooling them on a common infrastructure.
> 
> What am I missing?

kselftest provides no in-kernel framework for testing kernel code
specifically.  That should be what kunit provides, an "easy" way to
write in-kernel tests for things.

Brendan, did I get it right?

thanks,

greg k-h
Shuah May 7, 2019, 3:23 p.m. UTC | #9
On 5/7/19 2:01 AM, Greg KH wrote:
> On Mon, May 06, 2019 at 08:14:12PM -0700, Frank Rowand wrote:
>> On 5/1/19 4:01 PM, Brendan Higgins wrote:
>>> ## TLDR
>>>
>>> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
>>> 5.2.
>>>
>>> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
>>> we would merge through your tree when the time came? Am I remembering
>>> correctly?
>>>
>>> ## Background
>>>
>>> This patch set proposes KUnit, a lightweight unit testing and mocking
>>> framework for the Linux kernel.
>>>
>>> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
>>> it does not require installing the kernel on a test machine or in a VM
>>> and does not require tests to be written in userspace running on a host
>>> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
>>> can run several dozen tests in under a second. Currently, the entire
>>> KUnit test suite for KUnit runs in under a second from the initial
>>> invocation (build time excluded).
>>>
>>> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
>>> Googletest/Googlemock for C++. KUnit provides facilities for defining
>>> unit test cases, grouping related test cases into test suites, providing
>>> common infrastructure for running tests, mocking, spying, and much more.
>>
>> As a result of the emails replying to this patch thread, I am now
>> starting to look at kselftest.  My level of understanding is based
>> on some slide presentations, an LWN article, https://kselftest.wiki.kernel.org/
>> and a _tiny_ bit of looking at kselftest code.
>>
>> tl;dr; I don't really understand kselftest yet.
>>
>>
>> (1) why KUnit exists
>>
>>> ## What's so special about unit testing?
>>>
>>> A unit test is supposed to test a single unit of code in isolation,
>>> hence the name. There should be no dependencies outside the control of
>>> the test; this means no external dependencies, which makes tests orders
>>> of magnitudes faster. Likewise, since there are no external dependencies,
>>> there are no hoops to jump through to run the tests. Additionally, this
>>> makes unit tests deterministic: a failing unit test always indicates a
>>> problem. Finally, because unit tests necessarily have finer granularity,
>>> they are able to test all code paths easily solving the classic problem
>>> of difficulty in exercising error handling code.
>>
>> (2) KUnit is not meant to replace kselftest
>>
>>> ## Is KUnit trying to replace other testing frameworks for the kernel?
>>>
>>> No. Most existing tests for the Linux kernel are end-to-end tests, which
>>> have their place. A well tested system has lots of unit tests, a
>>> reasonable number of integration tests, and some end-to-end tests. KUnit
>>> is just trying to address the unit test space which is currently not
>>> being addressed.
>>
>> My understanding is that the intent of KUnit is to avoid booting a kernel on
>> real hardware or in a virtual machine.  That seems to be a matter of semantics
>> to me because isn't invoking a UML Linux just running the Linux kernel in
>> a different form of virtualization?
>>
>> So I do not understand why KUnit is an improvement over kselftest.

They are in two different categories. Kselftest falls into black box
regression test suite which is a collection of user-space tests with a
few kernel test modules back-ending the tests in some cases.

Kselftest can be used by both kernel developers and users and provides
a good way to regression test releases in test rings.

KUnit is a white box category and is a better fit as unit test framework
for development and provides a in-kernel testing. I wouldn't view them
one replacing the other. They just provide coverage for different areas
of testing.

I wouldn't view KUnit as something that would be easily run in test 
rings for example.

Brendan, does that sound about right?

>>
>> It seems to me that KUnit is just another piece of infrastructure that I
>> am going to have to be familiar with as a kernel developer.  More overhead,
>> more information to stuff into my tiny little brain.
>>
>> I would guess that some developers will focus on just one of the two test
>> environments (and some will focus on both), splitting the development
>> resources instead of pooling them on a common infrastructure.

>> What am I missing?
> 
> kselftest provides no in-kernel framework for testing kernel code
> specifically.  That should be what kunit provides, an "easy" way to
> write in-kernel tests for things.
> 
> Brendan, did I get it right?
thanks,
-- Shuah
Theodore Ts'o May 7, 2019, 5:22 p.m. UTC | #10
On Tue, May 07, 2019 at 10:01:19AM +0200, Greg KH wrote:
> > My understanding is that the intent of KUnit is to avoid booting a kernel on
> > real hardware or in a virtual machine.  That seems to be a matter of semantics
> > to me because isn't invoking a UML Linux just running the Linux kernel in
> > a different form of virtualization?
> > 
> > So I do not understand why KUnit is an improvement over kselftest.
> > 
> > It seems to me that KUnit is just another piece of infrastructure that I
> > am going to have to be familiar with as a kernel developer.  More overhead,
> > more information to stuff into my tiny little brain.
> > 
> > I would guess that some developers will focus on just one of the two test
> > environments (and some will focus on both), splitting the development
> > resources instead of pooling them on a common infrastructure.
> > 
> > What am I missing?
> 
> kselftest provides no in-kernel framework for testing kernel code
> specifically.  That should be what kunit provides, an "easy" way to
> write in-kernel tests for things.
> 
> Brendan, did I get it right?

Yes, that's basically right.  You don't *have* to use KUnit.  It's
supposed to be a simple way to run a large number of small tests that
for specific small components in a system.

For example, I currently use xfstests using KVM and GCE to test all of
ext4.  These tests require using multiple 5 GB and 20GB virtual disks,
and it works by mounting ext4 file systems and exercising ext4 through
the system call interfaces, using userspace tools such as fsstress,
fsx, fio, etc.  It requires time overhead to start the VM, create and
allocate virtual disks, etc.  For example, to run a single 3 seconds
xfstest (generic/001), it requires full 10 seconds to run it via
kvm-xfstests.

KUnit is something else; it's specifically intended to allow you to
create lightweight tests quickly and easily, and by reducing the
effort needed to write and run unit tests, hopefully we'll have a lot
more of them and thus improve kernel quality.

As an example, I have a volunteer working on developing KUinit tests
for ext4.  We're going to start by testing the ext4 extent status
tree.  The source code is at fs/ext4/extent_status.c; it's
approximately 1800 LOC.  The Kunit tests for the extent status tree
will exercise all of the corner cases for the various extent status
tree functions --- e.g., ext4_es_insert_delayed_block(),
ext4_es_remove_extent(), ext4_es_cache_extent(), etc.  And it will do
this in isolation without our needing to create a test file system or
using a test block device.

Next we'll test the ext4 block allocator, again in isolation.  To test
the block allocator we will have to write "mock functions" which
simulate reading allocation bitmaps from disk.  Again, this will allow
the test writer to explicitly construct corner cases and validate that
the block allocator works as expected without having to reverese
engineer file system data structures which will force a particular
code path to be executed.

So this is why it's largely irrelevant to me that KUinit uses UML.  In
fact, it's a feature.  We're not testing device drivers, or the
scheduler, or anything else architecture-specific.  UML is not about
virtualization.  What it's about in this context is allowing us to
start running test code as quickly as possible.  Booting KVM takes
about 3-4 seconds, and this includes initializing virtio_scsi and
other device drivers.  If by using UML we can hold the amount of
unnecessary kernel subsystem initialization down to the absolute
minimum, and if it means that we can communicating to the test
framework via a userspace "printf" from UML/KUnit code, as opposed to
via a virtual serial port to KVM's virtual console, it all makes for
lighter weight testing.

Why did I go looking for a volunteer to write KUnit tests for ext4?
Well, I have a plan to make some changes in restructing how ext4's
write path works, in order to support things like copy-on-write, a
more efficient delayed allocation system, etc.  This will require
making changes to the extent status tree, and by having unit tests for
the extent status tree, we'll be able to detect any bugs that we might
accidentally introduce in the es tree far more quickly than if we
didn't have those tests available.  Google has long found that having
these sorts of unit tests is a real win for developer velocity for any
non-trivial code module (or C++ class), even when you take into
account the time it takes to create the unit tests.

					- Ted

P.S.  Many thanks to Brendan for finding such a volunteer for me; the
person in question is a SRE from Switzerland who is interested in
getting involved with kernel testing, and this is going to be their
20% project.  :-)
Brendan Higgins May 8, 2019, 7:17 p.m. UTC | #11
> On Tue, May 07, 2019 at 10:01:19AM +0200, Greg KH wrote:
> > > My understanding is that the intent of KUnit is to avoid booting a kernel on
> > > real hardware or in a virtual machine.  That seems to be a matter of semantics
> > > to me because isn't invoking a UML Linux just running the Linux kernel in
> > > a different form of virtualization?
> > >
> > > So I do not understand why KUnit is an improvement over kselftest.
> > >
> > > It seems to me that KUnit is just another piece of infrastructure that I
> > > am going to have to be familiar with as a kernel developer.  More overhead,
> > > more information to stuff into my tiny little brain.
> > >
> > > I would guess that some developers will focus on just one of the two test
> > > environments (and some will focus on both), splitting the development
> > > resources instead of pooling them on a common infrastructure.
> > >
> > > What am I missing?
> >
> > kselftest provides no in-kernel framework for testing kernel code
> > specifically.  That should be what kunit provides, an "easy" way to
> > write in-kernel tests for things.
> >
> > Brendan, did I get it right?
>
> Yes, that's basically right.  You don't *have* to use KUnit.  It's
> supposed to be a simple way to run a large number of small tests that
> for specific small components in a system.
>
> For example, I currently use xfstests using KVM and GCE to test all of
> ext4.  These tests require using multiple 5 GB and 20GB virtual disks,
> and it works by mounting ext4 file systems and exercising ext4 through
> the system call interfaces, using userspace tools such as fsstress,
> fsx, fio, etc.  It requires time overhead to start the VM, create and
> allocate virtual disks, etc.  For example, to run a single 3 seconds
> xfstest (generic/001), it requires full 10 seconds to run it via
> kvm-xfstests.
>
> KUnit is something else; it's specifically intended to allow you to
> create lightweight tests quickly and easily, and by reducing the
> effort needed to write and run unit tests, hopefully we'll have a lot
> more of them and thus improve kernel quality.
>
> As an example, I have a volunteer working on developing KUinit tests
> for ext4.  We're going to start by testing the ext4 extent status
> tree.  The source code is at fs/ext4/extent_status.c; it's
> approximately 1800 LOC.  The Kunit tests for the extent status tree
> will exercise all of the corner cases for the various extent status
> tree functions --- e.g., ext4_es_insert_delayed_block(),
> ext4_es_remove_extent(), ext4_es_cache_extent(), etc.  And it will do
> this in isolation without our needing to create a test file system or
> using a test block device.
>
> Next we'll test the ext4 block allocator, again in isolation.  To test
> the block allocator we will have to write "mock functions" which
> simulate reading allocation bitmaps from disk.  Again, this will allow
> the test writer to explicitly construct corner cases and validate that
> the block allocator works as expected without having to reverese
> engineer file system data structures which will force a particular
> code path to be executed.
>
> So this is why it's largely irrelevant to me that KUinit uses UML.  In
> fact, it's a feature.  We're not testing device drivers, or the
> scheduler, or anything else architecture-specific.  UML is not about
> virtualization.  What it's about in this context is allowing us to
> start running test code as quickly as possible.  Booting KVM takes
> about 3-4 seconds, and this includes initializing virtio_scsi and
> other device drivers.  If by using UML we can hold the amount of
> unnecessary kernel subsystem initialization down to the absolute
> minimum, and if it means that we can communicating to the test
> framework via a userspace "printf" from UML/KUnit code, as opposed to
> via a virtual serial port to KVM's virtual console, it all makes for
> lighter weight testing.
>
> Why did I go looking for a volunteer to write KUnit tests for ext4?
> Well, I have a plan to make some changes in restructing how ext4's
> write path works, in order to support things like copy-on-write, a
> more efficient delayed allocation system, etc.  This will require
> making changes to the extent status tree, and by having unit tests for
> the extent status tree, we'll be able to detect any bugs that we might
> accidentally introduce in the es tree far more quickly than if we
> didn't have those tests available.  Google has long found that having
> these sorts of unit tests is a real win for developer velocity for any
> non-trivial code module (or C++ class), even when you take into
> account the time it takes to create the unit tests.
>
>                                         - Ted
>
> P.S.  Many thanks to Brendan for finding such a volunteer for me; the
> person in question is a SRE from Switzerland who is interested in
> getting involved with kernel testing, and this is going to be their
> 20% project.  :-)

Thanks Ted, I really appreciate it!

Since Ted provided such an awesome detailed response, I don't think I
really need to go into any detail; nevertheless, I think that Greg and
Shuah have the right idea; in particular, Shuah provides a good
summary.

Thanks everyone!
Frank Rowand May 9, 2019, 12:43 a.m. UTC | #12
On 5/7/19 1:01 AM, Greg KH wrote:
> On Mon, May 06, 2019 at 08:14:12PM -0700, Frank Rowand wrote:
>> On 5/1/19 4:01 PM, Brendan Higgins wrote:
>>> ## TLDR
>>>
>>> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
>>> 5.2.
>>>
>>> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
>>> we would merge through your tree when the time came? Am I remembering
>>> correctly?
>>>
>>> ## Background
>>>
>>> This patch set proposes KUnit, a lightweight unit testing and mocking
>>> framework for the Linux kernel.
>>>
>>> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
>>> it does not require installing the kernel on a test machine or in a VM
>>> and does not require tests to be written in userspace running on a host
>>> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
>>> can run several dozen tests in under a second. Currently, the entire
>>> KUnit test suite for KUnit runs in under a second from the initial
>>> invocation (build time excluded).
>>>
>>> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
>>> Googletest/Googlemock for C++. KUnit provides facilities for defining
>>> unit test cases, grouping related test cases into test suites, providing
>>> common infrastructure for running tests, mocking, spying, and much more.
>>
>> As a result of the emails replying to this patch thread, I am now
>> starting to look at kselftest.  My level of understanding is based
>> on some slide presentations, an LWN article, https://kselftest.wiki.kernel.org/
>> and a _tiny_ bit of looking at kselftest code.
>>
>> tl;dr; I don't really understand kselftest yet.
>>
>>
>> (1) why KUnit exists
>>
>>> ## What's so special about unit testing?
>>>
>>> A unit test is supposed to test a single unit of code in isolation,
>>> hence the name. There should be no dependencies outside the control of
>>> the test; this means no external dependencies, which makes tests orders
>>> of magnitudes faster. Likewise, since there are no external dependencies,
>>> there are no hoops to jump through to run the tests. Additionally, this
>>> makes unit tests deterministic: a failing unit test always indicates a
>>> problem. Finally, because unit tests necessarily have finer granularity,
>>> they are able to test all code paths easily solving the classic problem
>>> of difficulty in exercising error handling code.
>>
>> (2) KUnit is not meant to replace kselftest
>>
>>> ## Is KUnit trying to replace other testing frameworks for the kernel?
>>>
>>> No. Most existing tests for the Linux kernel are end-to-end tests, which
>>> have their place. A well tested system has lots of unit tests, a
>>> reasonable number of integration tests, and some end-to-end tests. KUnit
>>> is just trying to address the unit test space which is currently not
>>> being addressed.
>>
>> My understanding is that the intent of KUnit is to avoid booting a kernel on
>> real hardware or in a virtual machine.  That seems to be a matter of semantics
>> to me because isn't invoking a UML Linux just running the Linux kernel in
>> a different form of virtualization?
>>
>> So I do not understand why KUnit is an improvement over kselftest.
>>
>> It seems to me that KUnit is just another piece of infrastructure that I
>> am going to have to be familiar with as a kernel developer.  More overhead,
>> more information to stuff into my tiny little brain.
>>
>> I would guess that some developers will focus on just one of the two test
>> environments (and some will focus on both), splitting the development
>> resources instead of pooling them on a common infrastructure.
>>
>> What am I missing?
> 
> kselftest provides no in-kernel framework for testing kernel code
> specifically.  That should be what kunit provides, an "easy" way to
> write in-kernel tests for things.

kselftest provides a mechanism for in-kernel tests via modules.  For
example, see:

  tools/testing/selftests/vm/run_vmtests invokes:
    tools/testing/selftests/vm/test_vmalloc.sh
      loads module:
        test_vmalloc
        (which is built from lib/test_vmalloc.c if CONFIG_TEST_VMALLOC)

A very quick and dirty search (likely to miss some tests) finds modules:

  test_bitmap
  test_bpf
  test_firmware
  test_printf
  test_static_key_base
  test_static_keys
  test_user_copy
  test_vmalloc

-Frank

> 
> Brendan, did I get it right?
> 
> thanks,
> 
> greg k-h
> .
>
Frank Rowand May 9, 2019, 12:58 a.m. UTC | #13
Hi Ted,

On 5/7/19 10:22 AM, Theodore Ts'o wrote:
> On Tue, May 07, 2019 at 10:01:19AM +0200, Greg KH wrote:
Not very helpful to cut the text here, plus not explicitly indicating that
text was cut (yes, I know the ">>>" will be a clue for the careful reader),
losing the set up for my question.


>>> My understanding is that the intent of KUnit is to avoid booting a kernel on
>>> real hardware or in a virtual machine.  That seems to be a matter of semantics
>>> to me because isn't invoking a UML Linux just running the Linux kernel in
>>> a different form of virtualization?
>>>
>>> So I do not understand why KUnit is an improvement over kselftest.
>>>
>>> It seems to me that KUnit is just another piece of infrastructure that I
>>> am going to have to be familiar with as a kernel developer.  More overhead,
>>> more information to stuff into my tiny little brain.
>>>
>>> I would guess that some developers will focus on just one of the two test
>>> environments (and some will focus on both), splitting the development
>>> resources instead of pooling them on a common infrastructure.
>>>
>>> What am I missing?
>>
>> kselftest provides no in-kernel framework for testing kernel code
>> specifically.  That should be what kunit provides, an "easy" way to
>> write in-kernel tests for things.
>>
>> Brendan, did I get it right?
> 
> Yes, that's basically right.  You don't *have* to use KUnit.  It's

If KUnit is added to the kernel, and a subsystem that I am submitting
code for has chosen to use KUnit instead of kselftest, then yes, I do
*have* to use KUnit if my submission needs to contain a test for the
code unless I want to convince the maintainer that somehow my case
is special and I prefer to use kselftest instead of KUnittest.


> supposed to be a simple way to run a large number of small tests that
> for specific small components in a system.

kselftest also supports running a subset of tests.  That subset of tests
can also be a large number of small tests.  There is nothing inherent
in KUnit vs kselftest in this regard, as far as I am aware.


> For example, I currently use xfstests using KVM and GCE to test all of
> ext4.  These tests require using multiple 5 GB and 20GB virtual disks,
> and it works by mounting ext4 file systems and exercising ext4 through
> the system call interfaces, using userspace tools such as fsstress,
> fsx, fio, etc.  It requires time overhead to start the VM, create and
> allocate virtual disks, etc.  For example, to run a single 3 seconds
> xfstest (generic/001), it requires full 10 seconds to run it via
> kvm-xfstests.
> 


> KUnit is something else; it's specifically intended to allow you to
> create lightweight tests quickly and easily, and by reducing the
> effort needed to write and run unit tests, hopefully we'll have a lot
> more of them and thus improve kernel quality.

The same is true of kselftest.  You can create lightweight tests in
kselftest.


> As an example, I have a volunteer working on developing KUinit tests
> for ext4.  We're going to start by testing the ext4 extent status
> tree.  The source code is at fs/ext4/extent_status.c; it's
> approximately 1800 LOC.  The Kunit tests for the extent status tree
> will exercise all of the corner cases for the various extent status
> tree functions --- e.g., ext4_es_insert_delayed_block(),
> ext4_es_remove_extent(), ext4_es_cache_extent(), etc.  And it will do
> this in isolation without our needing to create a test file system or
> using a test block device.
> 

> Next we'll test the ext4 block allocator, again in isolation.  To test
> the block allocator we will have to write "mock functions" which
> simulate reading allocation bitmaps from disk.  Again, this will allow
> the test writer to explicitly construct corner cases and validate that
> the block allocator works as expected without having to reverese
> engineer file system data structures which will force a particular
> code path to be executed.

This would be a difference, but mock functions do not exist in KUnit.
The KUnit test will call the real kernel function in the UML kernel.

I think Brendan has indicated a desire to have mock functions in the
future.

Brendan, do I understand that correctly?

-Frank

> So this is why it's largely irrelevant to me that KUinit uses UML.  In
> fact, it's a feature.  We're not testing device drivers, or the
> scheduler, or anything else architecture-specific.  UML is not about
> virtualization.  What it's about in this context is allowing us to
> start running test code as quickly as possible.  Booting KVM takes
> about 3-4 seconds, and this includes initializing virtio_scsi and
> other device drivers.  If by using UML we can hold the amount of
> unnecessary kernel subsystem initialization down to the absolute
> minimum, and if it means that we can communicating to the test
> framework via a userspace "printf" from UML/KUnit code, as opposed to
> via a virtual serial port to KVM's virtual console, it all makes for
> lighter weight testing.
> 
> Why did I go looking for a volunteer to write KUnit tests for ext4?
> Well, I have a plan to make some changes in restructing how ext4's
> write path works, in order to support things like copy-on-write, a
> more efficient delayed allocation system, etc.  This will require
> making changes to the extent status tree, and by having unit tests for
> the extent status tree, we'll be able to detect any bugs that we might
> accidentally introduce in the es tree far more quickly than if we
> didn't have those tests available.  Google has long found that having
> these sorts of unit tests is a real win for developer velocity for any
> non-trivial code module (or C++ class), even when you take into
> account the time it takes to create the unit tests.
> 
> 					- Ted>
> P.S.  Many thanks to Brendan for finding such a volunteer for me; the
> person in question is a SRE from Switzerland who is interested in
> getting involved with kernel testing, and this is going to be their
> 20% project.  :-)
> 
>
Frank Rowand May 9, 2019, 1:01 a.m. UTC | #14
On 5/7/19 8:23 AM, shuah wrote:
> On 5/7/19 2:01 AM, Greg KH wrote:
>> On Mon, May 06, 2019 at 08:14:12PM -0700, Frank Rowand wrote:
>>> On 5/1/19 4:01 PM, Brendan Higgins wrote:
>>>> ## TLDR
>>>>
>>>> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
>>>> 5.2.
>>>>
>>>> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
>>>> we would merge through your tree when the time came? Am I remembering
>>>> correctly?
>>>>
>>>> ## Background
>>>>
>>>> This patch set proposes KUnit, a lightweight unit testing and mocking
>>>> framework for the Linux kernel.
>>>>
>>>> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
>>>> it does not require installing the kernel on a test machine or in a VM
>>>> and does not require tests to be written in userspace running on a host
>>>> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
>>>> can run several dozen tests in under a second. Currently, the entire
>>>> KUnit test suite for KUnit runs in under a second from the initial
>>>> invocation (build time excluded).
>>>>
>>>> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
>>>> Googletest/Googlemock for C++. KUnit provides facilities for defining
>>>> unit test cases, grouping related test cases into test suites, providing
>>>> common infrastructure for running tests, mocking, spying, and much more.
>>>
>>> As a result of the emails replying to this patch thread, I am now
>>> starting to look at kselftest.  My level of understanding is based
>>> on some slide presentations, an LWN article, https://kselftest.wiki.kernel.org/
>>> and a _tiny_ bit of looking at kselftest code.
>>>
>>> tl;dr; I don't really understand kselftest yet.
>>>
>>>
>>> (1) why KUnit exists
>>>
>>>> ## What's so special about unit testing?
>>>>
>>>> A unit test is supposed to test a single unit of code in isolation,
>>>> hence the name. There should be no dependencies outside the control of
>>>> the test; this means no external dependencies, which makes tests orders
>>>> of magnitudes faster. Likewise, since there are no external dependencies,
>>>> there are no hoops to jump through to run the tests. Additionally, this
>>>> makes unit tests deterministic: a failing unit test always indicates a
>>>> problem. Finally, because unit tests necessarily have finer granularity,
>>>> they are able to test all code paths easily solving the classic problem
>>>> of difficulty in exercising error handling code.
>>>
>>> (2) KUnit is not meant to replace kselftest
>>>
>>>> ## Is KUnit trying to replace other testing frameworks for the kernel?
>>>>
>>>> No. Most existing tests for the Linux kernel are end-to-end tests, which
>>>> have their place. A well tested system has lots of unit tests, a
>>>> reasonable number of integration tests, and some end-to-end tests. KUnit
>>>> is just trying to address the unit test space which is currently not
>>>> being addressed.
>>>
>>> My understanding is that the intent of KUnit is to avoid booting a kernel on
>>> real hardware or in a virtual machine.  That seems to be a matter of semantics
>>> to me because isn't invoking a UML Linux just running the Linux kernel in
>>> a different form of virtualization?
>>>
>>> So I do not understand why KUnit is an improvement over kselftest.
> 
> They are in two different categories. Kselftest falls into black box
> regression test suite which is a collection of user-space tests with a
> few kernel test modules back-ending the tests in some cases.
> 
> Kselftest can be used by both kernel developers and users and provides
> a good way to regression test releases in test rings.
> 
> KUnit is a white box category and is a better fit as unit test framework
> for development and provides a in-kernel testing. I wouldn't view them
> one replacing the other. They just provide coverage for different areas
> of testing.

I don't see what about kselftest or KUnit is inherently black box or
white box.  I would expect both frameworks to be used for either type
of testing.


> I wouldn't view KUnit as something that would be easily run in test rings for example.

I don't see why not.

-Frank

> 
> Brendan, does that sound about right?
> 
>>>
>>> It seems to me that KUnit is just another piece of infrastructure that I
>>> am going to have to be familiar with as a kernel developer.  More overhead,
>>> more information to stuff into my tiny little brain.
>>>
>>> I would guess that some developers will focus on just one of the two test
>>> environments (and some will focus on both), splitting the development
>>> resources instead of pooling them on a common infrastructure.
> 
>>> What am I missing?
>>
>> kselftest provides no in-kernel framework for testing kernel code
>> specifically.  That should be what kunit provides, an "easy" way to
>> write in-kernel tests for things.
>>
>> Brendan, did I get it right?
> thanks,
> -- Shuah
> .
>
Theodore Ts'o May 9, 2019, 1:44 a.m. UTC | #15
On Wed, May 08, 2019 at 05:58:49PM -0700, Frank Rowand wrote:
> 
> If KUnit is added to the kernel, and a subsystem that I am submitting
> code for has chosen to use KUnit instead of kselftest, then yes, I do
> *have* to use KUnit if my submission needs to contain a test for the
> code unless I want to convince the maintainer that somehow my case
> is special and I prefer to use kselftest instead of KUnittest.

That's going to be between you and the maintainer.  Today, if you want
to submit a substantive change to xfs or ext4, you're going to be
asked to create test for that new feature using xfstests.  It doesn't
matter that xfstests isn't in the kernel --- if that's what is
required by the maintainer.

> > supposed to be a simple way to run a large number of small tests that
> > for specific small components in a system.
> 
> kselftest also supports running a subset of tests.  That subset of tests
> can also be a large number of small tests.  There is nothing inherent
> in KUnit vs kselftest in this regard, as far as I am aware.

The big difference is that kselftests are driven by a C program that
runs in userspace.  Take a look at tools/testing/selftests/filesystem/dnotify_test.c
it has a main(int argc, char *argv) function.

In contrast, KUnit are fragments of C code which run in the kernel;
not in userspace.  This allows us to test internal functions inside
complex file system (such as the block allocator in ext4) directly.
This makes it *fundamentally* different from kselftest.

Cheers,

						- Ted
Theodore Ts'o May 9, 2019, 1:58 a.m. UTC | #16
On Wed, May 08, 2019 at 05:43:35PM -0700, Frank Rowand wrote:
> kselftest provides a mechanism for in-kernel tests via modules.  For
> example, see:
> 
>   tools/testing/selftests/vm/run_vmtests invokes:
>     tools/testing/selftests/vm/test_vmalloc.sh
>       loads module:
>         test_vmalloc
>         (which is built from lib/test_vmalloc.c if CONFIG_TEST_VMALLOC)

The majority of the kselftests are implemented as userspace programs.

You *can* run in-kernel test using modules; but there is no framework
for the in-kernel code found in the test modules, which means each of
the in-kernel code has to create their own in-kernel test
infrastructure.  

That's much like saying you can use vice grips to turn a nut or
bolt-head.  You *can*, but it might be that using a monkey wrench
would be a much better tool that is much easier.

What would you say to a wood worker objecting that a toolbox should
contain a monkey wrench because he already knows how to use vise
grips, and his tiny brain shouldn't be forced to learn how to use a
wrench when he knows how to use a vise grip, which is a perfectly good
tool?

If you want to use vice grips as a hammer, screwdriver, monkey wrench,
etc.  there's nothing stopping you from doing that.  But it's not fair
to object to other people who might want to use better tools.

The reality is that we have a lot of testing tools.  It's not just
kselftests.  There is xfstests for file system code, blktests for
block layer tests, etc.   We use the right tool for the right job.

						- Ted
Frank Rowand May 9, 2019, 2:13 a.m. UTC | #17
On 5/8/19 6:58 PM, Theodore Ts'o wrote:
> On Wed, May 08, 2019 at 05:43:35PM -0700, Frank Rowand wrote:
>> kselftest provides a mechanism for in-kernel tests via modules.  For
>> example, see:
>>
>>   tools/testing/selftests/vm/run_vmtests invokes:
>>     tools/testing/selftests/vm/test_vmalloc.sh
>>       loads module:
>>         test_vmalloc
>>         (which is built from lib/test_vmalloc.c if CONFIG_TEST_VMALLOC)
> 
> The majority of the kselftests are implemented as userspace programs.

Non-argument.


> You *can* run in-kernel test using modules; but there is no framework
> for the in-kernel code found in the test modules, which means each of
> the in-kernel code has to create their own in-kernel test
> infrastructure.

Why create an entire new subsystem (KUnit) when you can add a header
file (and .c code as appropriate) that outputs the proper TAP formatted
results from kselftest kernel test modules?

There are already a multitude of in kernel test modules used by
kselftest.  It would be good if they all used a common TAP compliant
mechanism to report results.

 
> That's much like saying you can use vice grips to turn a nut or
> bolt-head.  You *can*, but it might be that using a monkey wrench
> would be a much better tool that is much easier.
> 
> What would you say to a wood worker objecting that a toolbox should
> contain a monkey wrench because he already knows how to use vise
> grips, and his tiny brain shouldn't be forced to learn how to use a
> wrench when he knows how to use a vise grip, which is a perfectly good
> tool?
> 
> If you want to use vice grips as a hammer, screwdriver, monkey wrench,
> etc.  there's nothing stopping you from doing that.  But it's not fair
> to object to other people who might want to use better tools.
> 
> The reality is that we have a lot of testing tools.  It's not just
> kselftests.  There is xfstests for file system code, blktests for
> block layer tests, etc.   We use the right tool for the right job.

More specious arguments.

-Frank

> 
> 						- Ted
>
Frank Rowand May 9, 2019, 2:18 a.m. UTC | #18
On 5/8/19 6:44 PM, Theodore Ts'o wrote:
> On Wed, May 08, 2019 at 05:58:49PM -0700, Frank Rowand wrote:
>>
>> If KUnit is added to the kernel, and a subsystem that I am submitting
>> code for has chosen to use KUnit instead of kselftest, then yes, I do
>> *have* to use KUnit if my submission needs to contain a test for the
>> code unless I want to convince the maintainer that somehow my case
>> is special and I prefer to use kselftest instead of KUnittest.
> 
> That's going to be between you and the maintainer.  Today, if you want
> to submit a substantive change to xfs or ext4, you're going to be
> asked to create test for that new feature using xfstests.  It doesn't
> matter that xfstests isn't in the kernel --- if that's what is
> required by the maintainer.

Yes, that is exactly what I was saying.

Please do not cut the pertinent parts of context that I am replying to.


>>> supposed to be a simple way to run a large number of small tests that
>>> for specific small components in a system.
>>
>> kselftest also supports running a subset of tests.  That subset of tests
>> can also be a large number of small tests.  There is nothing inherent
>> in KUnit vs kselftest in this regard, as far as I am aware.


> The big difference is that kselftests are driven by a C program that
> runs in userspace.  Take a look at tools/testing/selftests/filesystem/dnotify_test.c
> it has a main(int argc, char *argv) function.
> 
> In contrast, KUnit are fragments of C code which run in the kernel;
> not in userspace.  This allows us to test internal functions inside
> complex file system (such as the block allocator in ext4) directly.
> This makes it *fundamentally* different from kselftest.

No, totally incorrect.  kselftests also supports in kernel modules as
I mention in another reply to this patch.

This is talking past each other a little bit, because your next reply
is a reply to my email about modules.

-Frank

> 
> Cheers,
> 
> 						- Ted
>
Theodore Ts'o May 9, 2019, 3:20 a.m. UTC | #19
On Wed, May 08, 2019 at 07:13:59PM -0700, Frank Rowand wrote:
> > If you want to use vice grips as a hammer, screwdriver, monkey wrench,
> > etc.  there's nothing stopping you from doing that.  But it's not fair
> > to object to other people who might want to use better tools.
> > 
> > The reality is that we have a lot of testing tools.  It's not just
> > kselftests.  There is xfstests for file system code, blktests for
> > block layer tests, etc.   We use the right tool for the right job.
> 
> More specious arguments.

Well, *I* don't think they are specious; so I think we're going to
have to agree to disagree.

Cheers,

						- Ted
Knut Omang May 9, 2019, 11:52 a.m. UTC | #20
On Wed, 2019-05-08 at 23:20 -0400, Theodore Ts'o wrote:
> On Wed, May 08, 2019 at 07:13:59PM -0700, Frank Rowand wrote:
> > > If you want to use vice grips as a hammer, screwdriver, monkey wrench,
> > > etc.  there's nothing stopping you from doing that.  But it's not fair
> > > to object to other people who might want to use better tools.
> > > 
> > > The reality is that we have a lot of testing tools.  It's not just
> > > kselftests.  There is xfstests for file system code, blktests for
> > > block layer tests, etc.   We use the right tool for the right job.
> > 
> > More specious arguments.
> 
> Well, *I* don't think they are specious; so I think we're going to
> have to agree to disagree.

Looking at both Frank's and Ted's arguments here, I don't think you 
really disagree, I just think you are having different classes of tests in mind.

In my view it's useful to think in terms of two main categories of 
interesting unit tests for kernel code (using the term "unit test" pragmatically):

1) Tests that exercises typically algorithmic or intricate, complex
   code with relatively few outside dependencies, or where the dependencies 
   are considered worth mocking, such as the basics of container data 
   structures or page table code. If I get you right, Ted, the tests 
   you refer to in this thread are such tests. I believe covering this space 
   is the goal Brendan has in mind for KUnit.

2) Tests that exercises interaction between a module under test and other 
   parts of the kernel, such as testing intricacies of the interaction of 
   a driver or file system with the rest of the kernel, and with hardware, 
   whether that is real hardware or a model/emulation. 
   Using your testing needs as example again, Ted, from my shallow understanding,
   you have such needs within the context of xfstests (https://github.com/tytso/xfstests)

To 1) I agree with Frank in that the problem with using UML is that you still have to
relate to the complexity of a kernel run time system, while what you really want for these
types of tests is just to compile a couple of kernel source files in a normal user land
context, to allow the use of Valgrind and other user space tools on the code. The
challenge is to get the code compiled in such an environment as it usually relies on
subtle kernel macros and definitions, which is why UML seems like such an attractive
solution. Like Frank I really see no big difference from a testing and debugging 
perspective of UML versus running inside a Qemu/KVM process, and I think I have an idea 
for a better solution: 

In the early phases of the SIF project which mention below, I did a lot of experimentation around this. My biggest challenge then was to test the driver
implementation of the pages table handling of an Intel page table compatible on-device 
MMU, using a mix of page sizes, but with a few subtle limitations in the hardware. With some efforts of code generation and heavy automated use of
compiler feedback, I was able 
to do that to great satisfaction, as it probably saved the project a lot of time in 
debugging, and myself a lot of pain :)

To 2) most of the current xfstests (if not all?) are user space tests that do not use 
extra test specific kernel code, or test specific changes to the modules under test (am I 
right, Ted?) and I believe that's just as it should be: if something can be exercised well enough from user space, then that's the easier approach. 

However sometimes the test cannot be made easily without interacting directly 
with internal kernel interfaces, or having such interaction would greatly simplify or
increase the precision of the test. This need was the initial motivation for us to make 
KTF (https://github.com/oracle/ktf, http://heim.ifi.uio.no/~knuto/ktf/index.html) which we are working on to adapt to fit naturally and in the right way
as a kernel patch set.

We developed the SIF infiniband HCA driver
(https://github.com/oracle/linux-uek/tree/uek4/qu7/drivers/infiniband/hw/sif)
and associated user level libraries in what I like to call a "pragmatically test driven" 
way. At the end of the project we had quite a few unit tests, but only a small fraction of them were KTF tests, most of the testing needs were covered
by user land unit tests, 
and higher level application testing.

To you Frank, and your concern about having to learn yet another tool with it's own set of syntax, I completely agree with you. We definitely would want
to minimize the need to 
learn new ways, which is why I think it is important to see the whole complex of unit
testing together, and at least make sure it works in a unified and efficient way from a
syntax and operational way. 

With KTF we focus on trying to make kernel testing as similar and integrated with user
space tests as possible, using similar test macros, and also to not reinvent more wheels than necessary by basing reporting and test execution on
existing user land tools.
KTF integrates with Googletest for this functionality. This also makes the reporting format discussion here irrelevant for KTF, as KTF supports whatever
reporting format the user land tool supports - Googletest for instance naturally supports pluggable reporting implementations, and there already seems
to be a TAP reporting extension out there (I haven't tried it yet though)

Using and relating to an existing user land framework allows us to have a set of 
tests that works the same way from a user/developer perspective, 
but some of them are kernel only tests, some are ordinary user land 
tests, exercising system call boundaries and other kernel
interfaces, and some are what we call "hybrid", where parts of 
the test run in user mode and parts in kernel mode.

I hope we can discuss this complex in more detail, for instance at the testing 
and fuzzing workshop at LPC later this year, where I have proposed a topic for it.

Thanks,
Knut
Theodore Ts'o May 9, 2019, 1:35 p.m. UTC | #21
On Thu, May 09, 2019 at 01:52:15PM +0200, Knut Omang wrote:
> 1) Tests that exercises typically algorithmic or intricate, complex
>    code with relatively few outside dependencies, or where the dependencies 
>    are considered worth mocking, such as the basics of container data 
>    structures or page table code. If I get you right, Ted, the tests 
>    you refer to in this thread are such tests. I believe covering this space 
>    is the goal Brendan has in mind for KUnit.

Yes, that's correct.  I'd also add that one of the key differences is
that it sounds like Frank and you are coming from the perspective of
testing *device drivers* where in general there aren't a lot of
complex code which is hardware independent.  After all, the vast
majority of device drivers are primarily interface code to hardware,
with as much as possible abstracted away to common code.  (Take, for
example, the model of the SCSI layer; or all of the kobject code.)

> 2) Tests that exercises interaction between a module under test and other 
>    parts of the kernel, such as testing intricacies of the interaction of 
>    a driver or file system with the rest of the kernel, and with hardware, 
>    whether that is real hardware or a model/emulation. 
>    Using your testing needs as example again, Ted, from my shallow understanding,
>    you have such needs within the context of xfstests (https://github.com/tytso/xfstests)

Well, upstream is for xfstests is git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git

The test framework where I can run 20 hours worth of xfstests
(multiple file system features enabled, multiple mount options, etc.)
in 3 hours of wall clock time using multiple cloud VM is something
called gce-xfstests.

I also have kvm-xfstests, which optimizes low test latency, where I
want to run a one or a small number of tests with a minimum of
overhead --- gce startup and shutdown is around 2 minutes, where as
kvm startup and shutdown is about 7 seconds.  As far as I'm concerned,
7 seconds is still too slow, but that's the best I've been able to do
given all of the other things I want a test framework to do, including
archiving test results, parsing the test results so it's easy to
interpret, etc.  Both kvm-xfstests and gce-xfstests are located at:

	git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git

So if Frank's primary argument is "too many frameworks", it's already
too late.  The block layer has blktests has a seprate framework,
called blktests --- and yeah, it's a bit painful to launch or learn
how to set things up.

That's why I added support to run blktests using gce-xfstests and
kvm-xfstests, so that "gce-xfstests --blktests" or "kvm-xfstests
--xfstests" will pluck a kernel from your build tree, and launch at
test appliance VM using that kernel and run the block layer tests.

The point is we *already* have multiple test frameworks, which are
optimized for testing different parts of the kernel.  And if you plan
to do a lot of work in these parts of the kernel, you're going to have
to learn how to use some other test framework other than kselftest.
Sorry, that's just the way it goes.

Of course, I'll accept trivial patches that haven't been tested using
xfstests --- but that's because I can trivially run the smoke test for
you.  Of course, if I get a lot of patches from a contributor which
cause test regressions, I'll treat them much like someone who
contribute patches which fail to build.  I'll apply pressure to the
contributor to actually build test, or run a ten minute kvm-xfstests
smoke test.  Part of the reason why I feel comfortable to do this is
it's really easy to run the smoke test.  There are pre-compiled test
appliances, and a lot of documentation:

https://github.com/tytso/xfstests-bld/blob/master/Documentation/kvm-quickstart.md

This is why I have close to zero sympathy to Frank's complaint that
extra test frameworks are a bad thing.  To me, that's whining.  I've
done a huge amount of work to meet contributors more than half-way.
The insistence that "There Must Be One", ala the Highlander movie, is
IMHO so wrong that it's not even close.  Is it really that hard to do
a "git pull", download a test appliance, set up a config file to tell
kvm-xfstests where to find your build tree, and then run "kvm-xfstests
--smoke" or "gce-xfstests --smoke"?  Cry me a river.

There are already multiple test frameworks, and if you expect to do a
lot of work in a particular subsystem, you'll be expected to use the
Maintainer's choice of tests.  Deal with it.  We do this so we can
scale to the number of contributors we have in our subsystem.

> To 1) I agree with Frank in that the problem with using UML is that you still have to
> relate to the complexity of a kernel run time system, while what you really want for these
> types of tests is just to compile a couple of kernel source files in a normal user land
> context, to allow the use of Valgrind and other user space tools on the code.

"Just compiling a couple of kernel source files in a normal user land"
is much harder than you think.  It requires writing vast numbers of
mocking functions --- for a file system I would have to simulate the
block device layer, large portions of the VFS layer, the scheduler and
the locking layer if I want to test locking bugs, etc., etc.  In
practice, UML itself is serving as mocking layer, by its mere
existence.  So when Frank says that KUnit doesn't provide any mocking
functions, I don't at all agree.  Using KUnit and UML makes testing
internal interfaces *far* simpler, especially if the comparison is
"just compile some kernel source files as part of a userspace test
program".

Perhaps your and Frank's experience is different --- perhaps that can
be explained by your past experience and interest in testing device
drivers as opposed to file systems.

The other thing I'd add is that at least for me, a really important
consideration is how quickly we can run tests.  I consider
minimization of developer friction (e.g., all you need to do is
running "make ; kvm-xfstests --smoke" to run tests), and maximizing
developer velocity to be high priority goals.  Developer velocity is
how quickly can you run the tests; ideally, less than 5-10 seconds.

And that's the other reason why I consider unit tests to be a
complement to integration tests.  "gce-xfstests --smoke" takes 10-15
minutes.  If I can have unit tests which takes 5-15 seconds for a
smoke test of the specific part of ext4 that I am modifying (and often
with much better coverage than integration tests from userspace),
that's at really big deal.  I can do this for e2fsprogs; but if I have
to launch a VM, the VM overhead pretty much eats all or most of that
time budget right there.

From looking at your documentation of KTF, you are targetting the use
case of continuous testing.  That's a different testing scenario than
what I'm describing; with continuous testing, overhead measured in
minutes or even tens of minutes is not a big deal.  But if you are
trying to do real-time testing as part of your development process ---
*real* Test Driven Development, then test latency is a really big
deal.

I'll grant that for people who are working on device drivers where
architecture dependencies are a big deal, building for an architecture
where you can run in a virtual environment or using test hardware is
going to be a better way to go.  And Brendan has said he's willing to
look at adapting KUnit so it can be built for use in a virtual
environment to accomodate your requirements.

As far as I'm concerned, however, I would *not* be interested in KTF
unless you could demonstrate to me that launching at test VM, somehow
getting the kernel modules copied into the VM, and running the tests
as kernel modules, has zero overhead compared to using UML.

Ultimately, I'm a pragmatist.  If KTF serves your needs best, good for
you.  If other approaches are better for other parts of the kernel,
let's not try to impose a strict "There Must Be Only One" religion.
That's already not true today, and for good reason.  There are many
different kinds of kernel code, and many different types of test
philosophies.  Trying to force all kernel testing into a single
Procrustean Bed is simply not productive.

Regards,

						- Ted
Knut Omang May 9, 2019, 2:48 p.m. UTC | #22
On Thu, 2019-05-09 at 09:35 -0400, Theodore Ts'o wrote:
> On Thu, May 09, 2019 at 01:52:15PM +0200, Knut Omang wrote:
> > 1) Tests that exercises typically algorithmic or intricate, complex
> >    code with relatively few outside dependencies, or where the dependencies 
> >    are considered worth mocking, such as the basics of container data 
> >    structures or page table code. If I get you right, Ted, the tests 
> >    you refer to in this thread are such tests. I believe covering this space 
> >    is the goal Brendan has in mind for KUnit.
> 
> Yes, that's correct.  I'd also add that one of the key differences is
> that it sounds like Frank and you are coming from the perspective of
> testing *device drivers* where in general there aren't a lot of
> complex code which is hardware independent.  After all, the vast
> majority of device drivers are primarily interface code to hardware,
> with as much as possible abstracted away to common code.  (Take, for
> example, the model of the SCSI layer; or all of the kobject code.)
>
> > 2) Tests that exercises interaction between a module under test and other 
> >    parts of the kernel, such as testing intricacies of the interaction of 
> >    a driver or file system with the rest of the kernel, and with hardware, 
> >    whether that is real hardware or a model/emulation. 
> >    Using your testing needs as example again, Ted, from my shallow understanding,
> >    you have such needs within the context of xfstests (
> https://github.com/tytso/xfstests)
> 
> Well, upstream is for xfstests is git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git

Thanks for the correction!

> The test framework where I can run 20 hours worth of xfstests
> (multiple file system features enabled, multiple mount options, etc.)
> in 3 hours of wall clock time using multiple cloud VM is something
> called gce-xfstests.
> 
> I also have kvm-xfstests, which optimizes low test latency, where I
> want to run a one or a small number of tests with a minimum of
> overhead --- gce startup and shutdown is around 2 minutes, where as
> kvm startup and shutdown is about 7 seconds.  As far as I'm concerned,
> 7 seconds is still too slow, but that's the best I've been able to do
> given all of the other things I want a test framework to do, including
> archiving test results, parsing the test results so it's easy to
> interpret, etc.  Both kvm-xfstests and gce-xfstests are located at:
> 
> 	git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
> 
> So if Frank's primary argument is "too many frameworks", it's already
> too late.  The block layer has blktests has a seprate framework,
> called blktests --- and yeah, it's a bit painful to launch or learn
> how to set things up.

I agree at that level - and the good thing is that there are a lot to learn 
from looking at other people's ways - but working towards unification rather than even
more similar, but subtly different ways I think is a good thing anyway!

> That's why I added support to run blktests using gce-xfstests and
> kvm-xfstests, so that "gce-xfstests --blktests" or "kvm-xfstests
> --xfstests" will pluck a kernel from your build tree, and launch at
> test appliance VM using that kernel and run the block layer tests.
> 
> The point is we *already* have multiple test frameworks, which are
> optimized for testing different parts of the kernel.  And if you plan
> to do a lot of work in these parts of the kernel, you're going to have
> to learn how to use some other test framework other than kselftest.
> Sorry, that's just the way it goes.
> 
> Of course, I'll accept trivial patches that haven't been tested using
> xfstests --- but that's because I can trivially run the smoke test for
> you.  Of course, if I get a lot of patches from a contributor which
> cause test regressions, I'll treat them much like someone who
> contribute patches which fail to build.  I'll apply pressure to the
> contributor to actually build test, or run a ten minute kvm-xfstests
> smoke test.  Part of the reason why I feel comfortable to do this is
> it's really easy to run the smoke test.  There are pre-compiled test
> appliances, and a lot of documentation:
> 
> https://github.com/tytso/xfstests-bld/blob/master/Documentation/kvm-quickstart.md
> 
> This is why I have close to zero sympathy to Frank's complaint that
> extra test frameworks are a bad thing.  To me, that's whining.  I've
> done a huge amount of work to meet contributors more than half-way.
> The insistence that "There Must Be One", ala the Highlander movie, is
> IMHO so wrong that it's not even close.  Is it really that hard to do
> a "git pull", download a test appliance, set up a config file to tell
> kvm-xfstests where to find your build tree, and then run "kvm-xfstests
> --smoke" or "gce-xfstests --smoke"?  Cry me a river.
> 
> There are already multiple test frameworks, and if you expect to do a
> lot of work in a particular subsystem, you'll be expected to use the
> Maintainer's choice of tests.  Deal with it.  We do this so we can
> scale to the number of contributors we have in our subsystem.
> 
> > To 1) I agree with Frank in that the problem with using UML is that you still have to
> > relate to the complexity of a kernel run time system, while what you really want for
> these
> > types of tests is just to compile a couple of kernel source files in a normal user
> land
> > context, to allow the use of Valgrind and other user space tools on the code.
> 
> "Just compiling a couple of kernel source files in a normal user land"
> is much harder than you think.  It requires writing vast numbers of
> mocking functions --- for a file system I would have to simulate the
> block device layer, large portions of the VFS layer, the scheduler and
> the locking layer if I want to test locking bugs, etc., etc.  In
> practice, UML itself is serving as mocking layer, by its mere
> existence.  

I might not see the full picture here wrt file system testing, 
but I do know exactly how difficult it is to do it for that ~29,000 
lines of code Infiniband driver I was working on, since actually I did it
with several versions of the kernel. Anyway that's probably more of a 
topic for a talk than an email thread :-)

> So when Frank says that KUnit doesn't provide any mocking
> functions, I don't at all agree.  Using KUnit and UML makes testing
> internal interfaces *far* simpler, especially if the comparison is
> "just compile some kernel source files as part of a userspace test
> program".
> 
> Perhaps your and Frank's experience is different --- perhaps that can
> be explained by your past experience and interest in testing device
> drivers as opposed to file systems.
> 
> The other thing I'd add is that at least for me, a really important
> consideration is how quickly we can run tests.  I consider
> minimization of developer friction (e.g., all you need to do is
> running "make ; kvm-xfstests --smoke" to run tests), and maximizing
> developer velocity to be high priority goals.  Developer velocity is
> how quickly can you run the tests; ideally, less than 5-10 seconds.

I completely agree on that one. I think a fundamental feature of any 
framework at this level is that it should be usable as developer tests as part
of an efficient work cycle.

> And that's the other reason why I consider unit tests to be a
> complement to integration tests.  "gce-xfstests --smoke" takes 10-15
> minutes.  If I can have unit tests which takes 5-15 seconds for a
> smoke test of the specific part of ext4 that I am modifying (and often
> with much better coverage than integration tests from userspace),
> that's at really big deal.  I can do this for e2fsprogs; but if I have
> to launch a VM, the VM overhead pretty much eats all or most of that
> time budget right there.

This is exactly the way we work with KTF as well: 
Change the kernel module under test, and/or the test code, 
compile, unload/load modules and run the tests again, all within seconds.
The overhead of rebooting one or more VMs (network tests sometimes 
require more than one node) or even a physical system, 
if the issue cannot be reproduced without running non-virtualized, 
would only be necessary if the test causes an oops or other crash 
that prevents the unload/load path.

> From looking at your documentation of KTF, you are targetting the use
> case of continuous testing.  That's a different testing scenario than
> what I'm describing; with continuous testing, overhead measured in
> minutes or even tens of minutes is not a big deal.  But if you are
> trying to do real-time testing as part of your development process ---
> *real* Test Driven Development, then test latency is a really big
> deal.

My experience is that unless one can enforce tests to be run on 
everyone else's changes as well, one often ends up having to pursue the bugs 
introduced by others but caught by the tests, so I believe automation and 
unit/low level testing really should go hand in hand. This is the reason for 
the emphasis on automation in the KTF docs, but the primary driver for me has 
always been as a developer toolkit.

> I'll grant that for people who are working on device drivers where
> architecture dependencies are a big deal, building for an architecture
> where you can run in a virtual environment or using test hardware is
> going to be a better way to go.  And Brendan has said he's willing to
> look at adapting KUnit so it can be built for use in a virtual
> environment to accomodate your requirements.
> 
> As far as I'm concerned, however, I would *not* be interested in KTF
> unless you could demonstrate to me that launching at test VM, somehow
> getting the kernel modules copied into the VM, and running the tests
> as kernel modules, has zero overhead compared to using UML.

As you alluded to above, the development cycle time is really the crucial 
thing here - if what you get has more value to you, then you'd be willing 
to wait just a few more seconds for it. And IMHO the real interesting notion of 
time is the time from saving the changes until you can verify that 
the test either passes, or even more important: Why it failed..

> Ultimately, I'm a pragmatist.  If KTF serves your needs best, good for
> you.  If other approaches are better for other parts of the kernel,
> let's not try to impose a strict "There Must Be Only One" religion.
> That's already not true today, and for good reason.  There are many
> different kinds of kernel code, and many different types of test
> philosophies.  Trying to force all kernel testing into a single
> Procrustean Bed is simply not productive.

I definitely pragmatically prefer evolution over revolution myself, 
no doubt about that, and I definitely appreciate this detailed view seen from the 
filesystem side,

Thanks!
Knut

> 
> Regards,
> 
> 						- Ted
Masahiro Yamada May 9, 2019, 3:19 p.m. UTC | #23
On Thu, May 2, 2019 at 8:02 AM Brendan Higgins
<brendanhiggins@google.com> wrote:
>
> ## TLDR
>
> I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> 5.2.
>
> Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> we would merge through your tree when the time came? Am I remembering
> correctly?
>
> ## Background
>
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
>
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
>
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.
>
> ## What's so special about unit testing?
>
> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitudes faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this
> makes unit tests deterministic: a failing unit test always indicates a
> problem. Finally, because unit tests necessarily have finer granularity,
> they are able to test all code paths easily solving the classic problem
> of difficulty in exercising error handling code.
>
> ## Is KUnit trying to replace other testing frameworks for the kernel?
>
> No. Most existing tests for the Linux kernel are end-to-end tests, which
> have their place. A well tested system has lots of unit tests, a
> reasonable number of integration tests, and some end-to-end tests. KUnit
> is just trying to address the unit test space which is currently not
> being addressed.
>
> ## More information on KUnit
>
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/
> Additionally for convenience, I have applied these patches to a branch:
> https://kunit.googlesource.com/linux/+/kunit/rfc/v5.1-rc7/v1
> The repo may be cloned with:
> git clone https://kunit.googlesource.com/linux
> This patchset is on the kunit/rfc/v5.1-rc7/v1 branch.
>
> ## Changes Since Last Version
>
> None. I just rebased the last patchset on v5.1-rc7.
>
> --
> 2.21.0.593.g511ec345e18-goog
>

The following is the log of 'git am' of this series.
I see several 'new blank line at EOF' warnings.



masahiro@pug:~/workspace/bsp/linux$ git am ~/Downloads/*.patch
Applying: kunit: test: add KUnit test runner core
Applying: kunit: test: add test resource management API
Applying: kunit: test: add string_stream a std::stream like string builder
.git/rebase-apply/patch:223: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kunit: test: add kunit_stream a std::stream like logger
Applying: kunit: test: add the concept of expectations
.git/rebase-apply/patch:475: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kbuild: enable building KUnit
Applying: kunit: test: add initial tests
.git/rebase-apply/patch:203: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kunit: test: add support for test abort
.git/rebase-apply/patch:453: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kunit: test: add tests for kunit test abort
Applying: kunit: test: add the concept of assertions
.git/rebase-apply/patch:518: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kunit: test: add test managed resource tests
Applying: kunit: tool: add Python wrappers for running KUnit tests
.git/rebase-apply/patch:457: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: kunit: defconfig: add defconfigs for building KUnit tests
Applying: Documentation: kunit: add documentation for KUnit
.git/rebase-apply/patch:71: new blank line at EOF.
+
.git/rebase-apply/patch:209: new blank line at EOF.
+
.git/rebase-apply/patch:848: new blank line at EOF.
+
warning: 3 lines add whitespace errors.
Applying: MAINTAINERS: add entry for KUnit the unit testing framework
Applying: kernel/sysctl-test: Add null pointer test for sysctl.c:proc_dointvec()
Applying: MAINTAINERS: add proc sysctl KUnit test to PROC SYSCTL section






--
Best Regards
Masahiro Yamada
Bird, Tim May 9, 2019, 5 p.m. UTC | #24
> -----Original Message-----
> From: Theodore Ts'o 
> 
> On Thu, May 09, 2019 at 01:52:15PM +0200, Knut Omang wrote:
> > 1) Tests that exercises typically algorithmic or intricate, complex
> >    code with relatively few outside dependencies, or where the
> dependencies
> >    are considered worth mocking, such as the basics of container data
> >    structures or page table code. If I get you right, Ted, the tests
> >    you refer to in this thread are such tests. I believe covering this space
> >    is the goal Brendan has in mind for KUnit.
> 
> Yes, that's correct.  I'd also add that one of the key differences is
> that it sounds like Frank and you are coming from the perspective of
> testing *device drivers* where in general there aren't a lot of
> complex code which is hardware independent.

Ummm.  Not to speak for Frank, but he's representing the device tree
layer, which I'd argue sits exactly at the intersection of testing device drivers
AND lots of complex code which is hardware independent.  So maybe his
case is special.

> After all, the vast
> majority of device drivers are primarily interface code to hardware,
> with as much as possible abstracted away to common code.  (Take, for
> example, the model of the SCSI layer; or all of the kobject code.)
> 
> > 2) Tests that exercises interaction between a module under test and other
> >    parts of the kernel, such as testing intricacies of the interaction of
> >    a driver or file system with the rest of the kernel, and with hardware,
> >    whether that is real hardware or a model/emulation.
> >    Using your testing needs as example again, Ted, from my shallow
> understanding,
> >    you have such needs within the context of xfstests
> (https://github.com/tytso/xfstests)
> 
> Well, upstream is for xfstests is git://git.kernel.org/pub/scm/fs/xfs/xfstests-
> dev.git
> 
> The test framework where I can run 20 hours worth of xfstests
> (multiple file system features enabled, multiple mount options, etc.)
> in 3 hours of wall clock time using multiple cloud VM is something
> called gce-xfstests.
> 
> I also have kvm-xfstests, which optimizes low test latency, where I
> want to run a one or a small number of tests with a minimum of
> overhead --- gce startup and shutdown is around 2 minutes, where as
> kvm startup and shutdown is about 7 seconds.  As far as I'm concerned,
> 7 seconds is still too slow, but that's the best I've been able to do
> given all of the other things I want a test framework to do, including
> archiving test results, parsing the test results so it's easy to
> interpret, etc.  Both kvm-xfstests and gce-xfstests are located at:
> 
> 	git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
> 
> So if Frank's primary argument is "too many frameworks", it's already
> too late.  The block layer has blktests has a seprate framework,
> called blktests --- and yeah, it's a bit painful to launch or learn
> how to set things up.
> 
> That's why I added support to run blktests using gce-xfstests and
> kvm-xfstests, so that "gce-xfstests --blktests" or "kvm-xfstests
> --xfstests" will pluck a kernel from your build tree, and launch at
> test appliance VM using that kernel and run the block layer tests.
> 
> The point is we *already* have multiple test frameworks, which are
> optimized for testing different parts of the kernel.  And if you plan
> to do a lot of work in these parts of the kernel, you're going to have
> to learn how to use some other test framework other than kselftest.
> Sorry, that's just the way it goes.
> 
> Of course, I'll accept trivial patches that haven't been tested using
> xfstests --- but that's because I can trivially run the smoke test for
> you.  Of course, if I get a lot of patches from a contributor which
> cause test regressions, I'll treat them much like someone who
> contribute patches which fail to build.  I'll apply pressure to the
> contributor to actually build test, or run a ten minute kvm-xfstests
> smoke test.  Part of the reason why I feel comfortable to do this is
> it's really easy to run the smoke test.  There are pre-compiled test
> appliances, and a lot of documentation:
> 
> https://github.com/tytso/xfstests-bld/blob/master/Documentation/kvm-
> quickstart.md
> 
> This is why I have close to zero sympathy to Frank's complaint that
> extra test frameworks are a bad thing.  To me, that's whining.  I've
> done a huge amount of work to meet contributors more than half-way.
> The insistence that "There Must Be One", ala the Highlander movie, is
> IMHO so wrong that it's not even close.  Is it really that hard to do
> a "git pull", download a test appliance, set up a config file to tell
> kvm-xfstests where to find your build tree, and then run "kvm-xfstests
> --smoke" or "gce-xfstests --smoke"?  Cry me a river.

Handling these types of things that are not "really that hard to do" is
exactly what meta-frameworks like KCI, Fuego, and LKFT are for.
For a core developer in a sub-system, having them learn a particular
specialized framework is OK.  However, for someone doing integration
testing of the kernel (not a core developer
in a particular subsystem), having lots of different frameworks turns
into death by a thousand cuts.  But we're working to fix that.
(Which reminds me that I have an outstanding action item to add an xfstest
test definition to Fuego. :-) )

> 
> There are already multiple test frameworks, and if you expect to do a
> lot of work in a particular subsystem, you'll be expected to use the
> Maintainer's choice of tests.  Deal with it.  We do this so we can
> scale to the number of contributors we have in our subsystem.

This seems to me to be exactly backwards.  You scale your contributors
by making it easier for them, which means adopting something already
well-know or established - not by being different.

I understand your vise grip metaphor, and agree with you.  In my opinion
kselftest and kunit are optimized for different things, and are different tools
in the Linux kernel testing toolbox.  But if you start having too many tools, or
the tools are too specialized, there are less people familiar with them and
ready to use them to help contribute.

> 
> > To 1) I agree with Frank in that the problem with using UML is that you still
> have to
> > relate to the complexity of a kernel run time system, while what you really
> want for these
> > types of tests is just to compile a couple of kernel source files in a normal
> user land
> > context, to allow the use of Valgrind and other user space tools on the
> code.
> 
> "Just compiling a couple of kernel source files in a normal user land"
> is much harder than you think.  It requires writing vast numbers of
> mocking functions --- for a file system I would have to simulate the
> block device layer, large portions of the VFS layer, the scheduler and
> the locking layer if I want to test locking bugs, etc., etc.  In
> practice, UML itself is serving as mocking layer, by its mere
> existence.  So when Frank says that KUnit doesn't provide any mocking
> functions, I don't at all agree.  Using KUnit and UML makes testing
> internal interfaces *far* simpler, especially if the comparison is
> "just compile some kernel source files as part of a userspace test
> program".

I had one thing I wanted to ask about here.  You said previously that
you plan to use KUnit to test a complicated but hardware independent
part of the filesystem code.  If you test only via UML, will that give you
coverage for non-x86 platforms? More specifically, will you get coverage
for 32-bit, for big-endian as well as little-endian, for weird architectures?
It seems like the software for these complicated sections of code is
subject to regressions due to toolchain issues as much as from coding errors.
That's why I was initially turned off when I  heard that KUnit only planned
to support UML and not cross-compilation.

I'm not sure what the status is of UML for all the weird embedded processors
that get only cross-compiled and not natively-compiled, but there are multiple
reasons why UML is less commonly used in the embedded space.
 
> Perhaps your and Frank's experience is different --- perhaps that can
> be explained by your past experience and interest in testing device
> drivers as opposed to file systems.
> 
> The other thing I'd add is that at least for me, a really important
> consideration is how quickly we can run tests.  I consider
> minimization of developer friction (e.g., all you need to do is
> running "make ; kvm-xfstests --smoke" to run tests), and maximizing
> developer velocity to be high priority goals.  Developer velocity is
> how quickly can you run the tests; ideally, less than 5-10 seconds.
> 
> And that's the other reason why I consider unit tests to be a
> complement to integration tests.  "gce-xfstests --smoke" takes 10-15
> minutes.  If I can have unit tests which takes 5-15 seconds for a
> smoke test of the specific part of ext4 that I am modifying (and often
> with much better coverage than integration tests from userspace),
> that's at really big deal.  I can do this for e2fsprogs; but if I have
> to launch a VM, the VM overhead pretty much eats all or most of that
> time budget right there.
> 
> From looking at your documentation of KTF, you are targetting the use
> case of continuous testing.  That's a different testing scenario than
> what I'm describing; with continuous testing, overhead measured in
> minutes or even tens of minutes is not a big deal.  But if you are
> trying to do real-time testing as part of your development process ---
> *real* Test Driven Development, then test latency is a really big
> deal.
> 
> I'll grant that for people who are working on device drivers where
> architecture dependencies are a big deal, building for an architecture
> where you can run in a virtual environment or using test hardware is
> going to be a better way to go.  And Brendan has said he's willing to
> look at adapting KUnit so it can be built for use in a virtual
> environment to accomodate your requirements.

This might solve my cross-compile needs, so that's good.

> 
> As far as I'm concerned, however, I would *not* be interested in KTF
> unless you could demonstrate to me that launching at test VM, somehow
> getting the kernel modules copied into the VM, and running the tests
> as kernel modules, has zero overhead compared to using UML.
> 
> Ultimately, I'm a pragmatist.  If KTF serves your needs best, good for
> you.  If other approaches are better for other parts of the kernel,
> let's not try to impose a strict "There Must Be Only One" religion.
> That's already not true today, and for good reason.  There are many
> different kinds of kernel code, and many different types of test
> philosophies.  Trying to force all kernel testing into a single
> Procrustean Bed is simply not productive.

Had to look up "Procrustean Bed" - great phrase.  :-)

I'm not of the opinion that there must only be one test framework
in the kernel. But we should avoid unnecessary multiplication. Every
person is going to have a different idea for where the line of necessity
is drawn.  My own opinion is that what KUnit is adding is different enough
from kselftest, that it's a valuable addition.  

 -- Tim
Daniel Vetter May 9, 2019, 5:42 p.m. UTC | #25
On Thu, May 9, 2019 at 7:00 PM <Tim.Bird@sony.com> wrote:
> > -----Original Message-----
> > From: Theodore Ts'o
> >
> > On Thu, May 09, 2019 at 01:52:15PM +0200, Knut Omang wrote:
> > > 1) Tests that exercises typically algorithmic or intricate, complex
> > >    code with relatively few outside dependencies, or where the
> > dependencies
> > >    are considered worth mocking, such as the basics of container data
> > >    structures or page table code. If I get you right, Ted, the tests
> > >    you refer to in this thread are such tests. I believe covering this space
> > >    is the goal Brendan has in mind for KUnit.
> >
> > Yes, that's correct.  I'd also add that one of the key differences is
> > that it sounds like Frank and you are coming from the perspective of
> > testing *device drivers* where in general there aren't a lot of
> > complex code which is hardware independent.
>
> Ummm.  Not to speak for Frank, but he's representing the device tree
> layer, which I'd argue sits exactly at the intersection of testing device drivers
> AND lots of complex code which is hardware independent.  So maybe his
> case is special.

Jumping in with a pure device driver hat: We already have add-hoc unit
tests in drivers/gpu, which somewhat shoddily integrate into
kselftests and our own gpu test suite from userspace. We'd like to do
a lot more in this area (there's enormous amounts of code in a gpu
driver that's worth testing on its own, or against a mocked model of a
part of the real hw), and I think a unit test framework for the entire
kernel would be great. Plus gpu/drm isn't the only subsystem by far
that already has a home-grown solution. So it's actually worse than
what Ted said: We don't just have a multitude of test frameworks
already, we have a multitude of ad-hoc unit test frameworks, each with
their own way to run tests, write tests and mock parts of the system.
Kunit hopefully helps us to standardize more in this area. I do plan
to look into converting all the drm selftest we have already as soon
as this lands (and as soon as I find some time ...).

Cheers, Daniel

>
> > After all, the vast
> > majority of device drivers are primarily interface code to hardware,
> > with as much as possible abstracted away to common code.  (Take, for
> > example, the model of the SCSI layer; or all of the kobject code.)
> >
> > > 2) Tests that exercises interaction between a module under test and other
> > >    parts of the kernel, such as testing intricacies of the interaction of
> > >    a driver or file system with the rest of the kernel, and with hardware,
> > >    whether that is real hardware or a model/emulation.
> > >    Using your testing needs as example again, Ted, from my shallow
> > understanding,
> > >    you have such needs within the context of xfstests
> > (https://github.com/tytso/xfstests)
> >
> > Well, upstream is for xfstests is git://git.kernel.org/pub/scm/fs/xfs/xfstests-
> > dev.git
> >
> > The test framework where I can run 20 hours worth of xfstests
> > (multiple file system features enabled, multiple mount options, etc.)
> > in 3 hours of wall clock time using multiple cloud VM is something
> > called gce-xfstests.
> >
> > I also have kvm-xfstests, which optimizes low test latency, where I
> > want to run a one or a small number of tests with a minimum of
> > overhead --- gce startup and shutdown is around 2 minutes, where as
> > kvm startup and shutdown is about 7 seconds.  As far as I'm concerned,
> > 7 seconds is still too slow, but that's the best I've been able to do
> > given all of the other things I want a test framework to do, including
> > archiving test results, parsing the test results so it's easy to
> > interpret, etc.  Both kvm-xfstests and gce-xfstests are located at:
> >
> >       git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
> >
> > So if Frank's primary argument is "too many frameworks", it's already
> > too late.  The block layer has blktests has a seprate framework,
> > called blktests --- and yeah, it's a bit painful to launch or learn
> > how to set things up.
> >
> > That's why I added support to run blktests using gce-xfstests and
> > kvm-xfstests, so that "gce-xfstests --blktests" or "kvm-xfstests
> > --xfstests" will pluck a kernel from your build tree, and launch at
> > test appliance VM using that kernel and run the block layer tests.
> >
> > The point is we *already* have multiple test frameworks, which are
> > optimized for testing different parts of the kernel.  And if you plan
> > to do a lot of work in these parts of the kernel, you're going to have
> > to learn how to use some other test framework other than kselftest.
> > Sorry, that's just the way it goes.
> >
> > Of course, I'll accept trivial patches that haven't been tested using
> > xfstests --- but that's because I can trivially run the smoke test for
> > you.  Of course, if I get a lot of patches from a contributor which
> > cause test regressions, I'll treat them much like someone who
> > contribute patches which fail to build.  I'll apply pressure to the
> > contributor to actually build test, or run a ten minute kvm-xfstests
> > smoke test.  Part of the reason why I feel comfortable to do this is
> > it's really easy to run the smoke test.  There are pre-compiled test
> > appliances, and a lot of documentation:
> >
> > https://github.com/tytso/xfstests-bld/blob/master/Documentation/kvm-
> > quickstart.md
> >
> > This is why I have close to zero sympathy to Frank's complaint that
> > extra test frameworks are a bad thing.  To me, that's whining.  I've
> > done a huge amount of work to meet contributors more than half-way.
> > The insistence that "There Must Be One", ala the Highlander movie, is
> > IMHO so wrong that it's not even close.  Is it really that hard to do
> > a "git pull", download a test appliance, set up a config file to tell
> > kvm-xfstests where to find your build tree, and then run "kvm-xfstests
> > --smoke" or "gce-xfstests --smoke"?  Cry me a river.
>
> Handling these types of things that are not "really that hard to do" is
> exactly what meta-frameworks like KCI, Fuego, and LKFT are for.
> For a core developer in a sub-system, having them learn a particular
> specialized framework is OK.  However, for someone doing integration
> testing of the kernel (not a core developer
> in a particular subsystem), having lots of different frameworks turns
> into death by a thousand cuts.  But we're working to fix that.
> (Which reminds me that I have an outstanding action item to add an xfstest
> test definition to Fuego. :-) )
>
> >
> > There are already multiple test frameworks, and if you expect to do a
> > lot of work in a particular subsystem, you'll be expected to use the
> > Maintainer's choice of tests.  Deal with it.  We do this so we can
> > scale to the number of contributors we have in our subsystem.
>
> This seems to me to be exactly backwards.  You scale your contributors
> by making it easier for them, which means adopting something already
> well-know or established - not by being different.
>
> I understand your vise grip metaphor, and agree with you.  In my opinion
> kselftest and kunit are optimized for different things, and are different tools
> in the Linux kernel testing toolbox.  But if you start having too many tools, or
> the tools are too specialized, there are less people familiar with them and
> ready to use them to help contribute.
>
> >
> > > To 1) I agree with Frank in that the problem with using UML is that you still
> > have to
> > > relate to the complexity of a kernel run time system, while what you really
> > want for these
> > > types of tests is just to compile a couple of kernel source files in a normal
> > user land
> > > context, to allow the use of Valgrind and other user space tools on the
> > code.
> >
> > "Just compiling a couple of kernel source files in a normal user land"
> > is much harder than you think.  It requires writing vast numbers of
> > mocking functions --- for a file system I would have to simulate the
> > block device layer, large portions of the VFS layer, the scheduler and
> > the locking layer if I want to test locking bugs, etc., etc.  In
> > practice, UML itself is serving as mocking layer, by its mere
> > existence.  So when Frank says that KUnit doesn't provide any mocking
> > functions, I don't at all agree.  Using KUnit and UML makes testing
> > internal interfaces *far* simpler, especially if the comparison is
> > "just compile some kernel source files as part of a userspace test
> > program".
>
> I had one thing I wanted to ask about here.  You said previously that
> you plan to use KUnit to test a complicated but hardware independent
> part of the filesystem code.  If you test only via UML, will that give you
> coverage for non-x86 platforms? More specifically, will you get coverage
> for 32-bit, for big-endian as well as little-endian, for weird architectures?
> It seems like the software for these complicated sections of code is
> subject to regressions due to toolchain issues as much as from coding errors.
> That's why I was initially turned off when I  heard that KUnit only planned
> to support UML and not cross-compilation.
>
> I'm not sure what the status is of UML for all the weird embedded processors
> that get only cross-compiled and not natively-compiled, but there are multiple
> reasons why UML is less commonly used in the embedded space.
>
> > Perhaps your and Frank's experience is different --- perhaps that can
> > be explained by your past experience and interest in testing device
> > drivers as opposed to file systems.
> >
> > The other thing I'd add is that at least for me, a really important
> > consideration is how quickly we can run tests.  I consider
> > minimization of developer friction (e.g., all you need to do is
> > running "make ; kvm-xfstests --smoke" to run tests), and maximizing
> > developer velocity to be high priority goals.  Developer velocity is
> > how quickly can you run the tests; ideally, less than 5-10 seconds.
> >
> > And that's the other reason why I consider unit tests to be a
> > complement to integration tests.  "gce-xfstests --smoke" takes 10-15
> > minutes.  If I can have unit tests which takes 5-15 seconds for a
> > smoke test of the specific part of ext4 that I am modifying (and often
> > with much better coverage than integration tests from userspace),
> > that's at really big deal.  I can do this for e2fsprogs; but if I have
> > to launch a VM, the VM overhead pretty much eats all or most of that
> > time budget right there.
> >
> > From looking at your documentation of KTF, you are targetting the use
> > case of continuous testing.  That's a different testing scenario than
> > what I'm describing; with continuous testing, overhead measured in
> > minutes or even tens of minutes is not a big deal.  But if you are
> > trying to do real-time testing as part of your development process ---
> > *real* Test Driven Development, then test latency is a really big
> > deal.
> >
> > I'll grant that for people who are working on device drivers where
> > architecture dependencies are a big deal, building for an architecture
> > where you can run in a virtual environment or using test hardware is
> > going to be a better way to go.  And Brendan has said he's willing to
> > look at adapting KUnit so it can be built for use in a virtual
> > environment to accomodate your requirements.
>
> This might solve my cross-compile needs, so that's good.
>
> >
> > As far as I'm concerned, however, I would *not* be interested in KTF
> > unless you could demonstrate to me that launching at test VM, somehow
> > getting the kernel modules copied into the VM, and running the tests
> > as kernel modules, has zero overhead compared to using UML.
> >
> > Ultimately, I'm a pragmatist.  If KTF serves your needs best, good for
> > you.  If other approaches are better for other parts of the kernel,
> > let's not try to impose a strict "There Must Be Only One" religion.
> > That's already not true today, and for good reason.  There are many
> > different kinds of kernel code, and many different types of test
> > philosophies.  Trying to force all kernel testing into a single
> > Procrustean Bed is simply not productive.
>
> Had to look up "Procrustean Bed" - great phrase.  :-)
>
> I'm not of the opinion that there must only be one test framework
> in the kernel. But we should avoid unnecessary multiplication. Every
> person is going to have a different idea for where the line of necessity
> is drawn.  My own opinion is that what KUnit is adding is different enough
> from kselftest, that it's a valuable addition.
>
>  -- Tim
>
>
Frank Rowand May 9, 2019, 6:12 p.m. UTC | #26
On 5/9/19 10:00 AM, Tim.Bird@sony.com wrote:
>> -----Original Message-----
>> From: Theodore Ts'o 
>>

< massive snip >

I'll reply in more detail to some other earlier messages in this thread
later.

This reply is an attempt to return to the intent of my original reply to
patch 0 of this series.


>> Ultimately, I'm a pragmatist.  If KTF serves your needs best, good for
>> you.  If other approaches are better for other parts of the kernel,
>> let's not try to impose a strict "There Must Be Only One" religion.
>> That's already not true today, and for good reason.  There are many
>> different kinds of kernel code, and many different types of test
>> philosophies.  Trying to force all kernel testing into a single
>> Procrustean Bed is simply not productive.
> 
> Had to look up "Procrustean Bed" - great phrase.  :-)
> 
> I'm not of the opinion that there must only be one test framework
> in the kernel. But we should avoid unnecessary multiplication. Every
> person is going to have a different idea for where the line of necessity
> is drawn.  My own opinion is that what KUnit is adding is different enough
> from kselftest, that it's a valuable addition.  
> 
>  -- Tim

My first reply to patch 0 was in the context of knowing next to nothing
about kselftest.  My level of understanding was essentially from slideware.
(As the thread progressed, I dug a little deeper into kselftest, so now have
a slightly better, though still fairly surface understanding of kselftest).

Maybe I did not explain myself clearly enough in that reply.  I will try
again here.

Patch 0 provided a one paragraph explanation of why KUnit exists, titled

   "## What's so special about unit testing?"

Patch 0 also provided a statement that it is not meant to replace
kselftest, in a paragraph titled

   "## Is KUnit trying to replace other testing frameworks for the kernel?"

I stated:

   "My understanding is that the intent of KUnit is to avoid booting a kernel on
   real hardware or in a virtual machine.  That seems to be a matter of semantics
   to me because isn't invoking a UML Linux just running the Linux kernel in
   a different form of virtualization?

   So I do not understand why KUnit is an improvement over kselftest.

   ...

   What am I missing?"


I was looking for a fuller, better explanation than was given in patch 0
of how KUnit provides something that is different than what kselftest
provides for creating unit tests for kernel code.

New question "(2)":

(2) If KUnit provides something unique, then the obvious follow on
question would be: does it make sense to (a) integrate that feature into
kselftest, (b) integrate the current kselftest in kernel test functionality
into KUnit and convert the in kernel test portion of kselftest to use
KUnit, or (c) KUnit and kselftest are so different that they need to
remain separate features.

*****  Please do not reply to this email with a discussion of "(2)".
*****  Such a discussion is premature if there is not an answer
*****  to my first question.

Observation (3), rephrased:

(3) I also brought up the issue that if option (c) was the answer to
question "(2)" (instead of either option (a) or option (b)) then this
is extra overhead for any developer or maintainer involved in different
subsystems that use the different frameworks.  I intended this as one
possible motivation for why my first question mattered.

Ted grabbed hold of this issue and basically ignored what I intended
to be my core question.  Nobody else has answered my core question,
though Knut's first reply did manage to get somewhere near the
intent of my core question.

*****  Please do not reply to this email with a discussion of "(3)".
*****  Such a discussion is premature if there is not an answer
*****  to my first question.
*****
*****  Also that discussion is already occurring in this thread,
*****  feel free to discuss it there if you want, even though I
*****  feel such discussion is premature.

I still do not have an answer to my original question.

-Frank
Theodore Ts'o May 9, 2019, 9:42 p.m. UTC | #27
On Thu, May 09, 2019 at 11:12:12AM -0700, Frank Rowand wrote:
> 
>    "My understanding is that the intent of KUnit is to avoid booting a kernel on
>    real hardware or in a virtual machine.  That seems to be a matter of semantics
>    to me because isn't invoking a UML Linux just running the Linux kernel in
>    a different form of virtualization?
> 
>    So I do not understand why KUnit is an improvement over kselftest.
> 
>    ...
> 
>    What am I missing?"

One major difference: kselftest requires a userspace environment; it
starts systemd, requires a root file system from which you can load
modules, etc.  Kunit doesn't require a root file system; doesn't
require that you start systemd; doesn't allow you to run arbitrary
perl, python, bash, etc. scripts.  As such, it's much lighter weight
than kselftest, and will have much less overhead before you can start
running tests.  So it's not really the same kind of virtualization.

Does this help?

					- Ted
Logan Gunthorpe May 9, 2019, 10:20 p.m. UTC | #28
On 2019-05-09 3:42 p.m., Theodore Ts'o wrote:
> On Thu, May 09, 2019 at 11:12:12AM -0700, Frank Rowand wrote:
>>
>>     "My understanding is that the intent of KUnit is to avoid booting a kernel on
>>     real hardware or in a virtual machine.  That seems to be a matter of semantics
>>     to me because isn't invoking a UML Linux just running the Linux kernel in
>>     a different form of virtualization?
>>
>>     So I do not understand why KUnit is an improvement over kselftest.
>>
>>     ...
>>
>>     What am I missing?"
> 
> One major difference: kselftest requires a userspace environment; it
> starts systemd, requires a root file system from which you can load
> modules, etc.  Kunit doesn't require a root file system; doesn't
> require that you start systemd; doesn't allow you to run arbitrary
> perl, python, bash, etc. scripts.  As such, it's much lighter weight
> than kselftest, and will have much less overhead before you can start
> running tests.  So it's not really the same kind of virtualization.

I largely agree with everything Ted has said in this thread, but I 
wonder if we are conflating two different ideas that is causing an 
impasse. From what I see, Kunit actually provides two different things:

1) An execution environment that can be run very quickly in userspace on 
tests in the kernel source. This speeds up the tests and gives a lot of 
benefit to developers using those tests because they can get feedback on 
their code changes a *lot* quicker.

2) A framework to write unit tests that provides a lot of the same 
facilities as other common unit testing frameworks from userspace (ie. a 
runner that runs a list of tests and a bunch of helpers such as 
KUNIT_EXPECT_* to simplify test passes and failures).

The first item from Kunit is novel and I see absolutely no overlap with 
anything kselftest does. It's also the valuable thing I'd like to see 
merged and grow.

The second item, arguably, does have significant overlap with kselftest. 
Whether you are running short tests in a light weight UML environment or 
higher level tests in an heavier VM the two could be using the same 
framework for writing or defining in-kernel tests. It *may* also be 
valuable for some people to be able to run all the UML tests in the 
heavy VM environment along side other higher level tests.

Looking at the selftests tree in the repo, we already have similar items 
to what Kunit is adding as I described in point (2) above. 
kselftest_harness.h contains macros like EXPECT_* and ASSERT_* with very 
similar intentions to the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.

However, the number of users of this harness appears to be quite small. 
Most of the code in the selftests tree seems to be a random mismash of 
scripts and userspace code so it's not hard to see it as something 
completely different from the new Kunit:

$ git grep --files-with-matches kselftest_harness.h *
Documentation/dev-tools/kselftest.rst
MAINTAINERS
tools/testing/selftests/kselftest_harness.h
tools/testing/selftests/net/tls.c
tools/testing/selftests/rtc/rtctest.c
tools/testing/selftests/seccomp/Makefile
tools/testing/selftests/seccomp/seccomp_bpf.c
tools/testing/selftests/uevent/Makefile
tools/testing/selftests/uevent/uevent_filtering.c

Thus, I can personally see a lot of value in integrating the kunit test 
framework with this kselftest harness. There's only a small number of 
users of the kselftest harness today, so one way or another it seems 
like getting this integrated early would be a good idea. Letting Kunit 
and Kselftests progress independently for a few years will only make 
this worse and may become something we end up regretting.

Logan
Theodore Ts'o May 9, 2019, 11:30 p.m. UTC | #29
On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> 
> The second item, arguably, does have significant overlap with kselftest.
> Whether you are running short tests in a light weight UML environment or
> higher level tests in an heavier VM the two could be using the same
> framework for writing or defining in-kernel tests. It *may* also be valuable
> for some people to be able to run all the UML tests in the heavy VM
> environment along side other higher level tests.
> 
> Looking at the selftests tree in the repo, we already have similar items to
> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> 
> However, the number of users of this harness appears to be quite small. Most
> of the code in the selftests tree seems to be a random mismash of scripts
> and userspace code so it's not hard to see it as something completely
> different from the new Kunit:
> 
> $ git grep --files-with-matches kselftest_harness.h *

To the extent that we can unify how tests are written, I agree that
this would be a good thing.  However, you should note that
kselftest_harness.h is currently assums that it will be included in
userspace programs.  This is most obviously seen if you look closely
at the functions defined in the header files which makes calls to
fork(), abort() and fprintf().

So Kunit can't reuse kselftest_harness.h unmodified.  And whether or
not the actual implementation of the header file can be reused or
refactored, making the unit tests use the same or similar syntax would
be a good thing.

Cheers,

						- Ted
Logan Gunthorpe May 9, 2019, 11:40 p.m. UTC | #30
On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
>>
>> The second item, arguably, does have significant overlap with kselftest.
>> Whether you are running short tests in a light weight UML environment or
>> higher level tests in an heavier VM the two could be using the same
>> framework for writing or defining in-kernel tests. It *may* also be valuable
>> for some people to be able to run all the UML tests in the heavy VM
>> environment along side other higher level tests.
>>
>> Looking at the selftests tree in the repo, we already have similar items to
>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
>>
>> However, the number of users of this harness appears to be quite small. Most
>> of the code in the selftests tree seems to be a random mismash of scripts
>> and userspace code so it's not hard to see it as something completely
>> different from the new Kunit:
>>
>> $ git grep --files-with-matches kselftest_harness.h *
> 
> To the extent that we can unify how tests are written, I agree that
> this would be a good thing.  However, you should note that
> kselftest_harness.h is currently assums that it will be included in
> userspace programs.  This is most obviously seen if you look closely
> at the functions defined in the header files which makes calls to
> fork(), abort() and fprintf().

Ah, yes. I obviously did not dig deep enough. Using kunit for in-kernel 
tests and kselftest_harness for userspace tests seems like a sensible 
line to draw to me. Trying to unify kernel and userspace here sounds 
like it could be difficult so it's probably not worth forcing the issue 
unless someone wants to do some really fancy work to get it done.

Based on some of the other commenters, I was under the impression that 
kselftests had in-kernel tests but I'm not sure where or if they exist. 
If they do exists, it seems like it would make sense to convert those to 
kunit and have Kunit tests run-able in a VM or baremetal instance.

Logan
Theodore Ts'o May 10, 2019, 4:47 a.m. UTC | #31
On Thu, May 09, 2019 at 05:40:48PM -0600, Logan Gunthorpe wrote:
> 
> Based on some of the other commenters, I was under the impression that
> kselftests had in-kernel tests but I'm not sure where or if they exist. If
> they do exists, it seems like it would make sense to convert those to kunit
> and have Kunit tests run-able in a VM or baremetal instance.

There are kselftests tests which are shell scripts which load a
module, and the module runs the in-kernel code.  However, I didn't see
much infrastructure for the in-kernel test code; the one or two test
modules called from kselftests looked pretty ad hoc to me.

That's why I used the "vise grips" analogy.  You can use a pair of
vise grips like a monkey wrench; but it's not really a monkey wrench,
and might not be the best tool to loosen or tighten nuts and bolts.

       	   	     	       	   - Ted
Frank Rowand May 10, 2019, 5:11 a.m. UTC | #32
Hi Ted,

I'll try answering this again.

The first time I was a little flippant in part of my answer because I
thought your comments somewhat flippant.  This time I'll provide a
more complete answer.


On 5/8/19 7:13 PM, Frank Rowand wrote:
> On 5/8/19 6:58 PM, Theodore Ts'o wrote:
>> On Wed, May 08, 2019 at 05:43:35PM -0700, Frank Rowand wrote:

*****  begin context from Greg KH that you snipped  *****

On 5/7/19 1:01 AM, Greg KH wrote:

<< note that I have snipped my original question above this point >>

>
> kselftest provides no in-kernel framework for testing kernel code
> specifically.  That should be what kunit provides, an "easy" way to
> write in-kernel tests for things.

*****  end   context from Greg KH that you snipped  *****

>>> kselftest provides a mechanism for in-kernel tests via modules.  For
>>> example, see:
>>>
>>>   tools/testing/selftests/vm/run_vmtests invokes:
>>>     tools/testing/selftests/vm/test_vmalloc.sh
>>>       loads module:
>>>         test_vmalloc
>>>         (which is built from lib/test_vmalloc.c if CONFIG_TEST_VMALLOC)
>>
>> The majority of the kselftests are implemented as userspace programs.

My flippant answer:

> Non-argument.

This time:

My reply to Greg was pointing out that in-kernel tests do exist in
kselftest.

Your comment that the majority of kselftests are implemented as userspace
programs has no bearing on whether kselftest support in-kernel tests.
It does not counter the fact the kselftest supports in-kernel tests.


>> You *can* run in-kernel test using modules; but there is no framework
>> for the in-kernel code found in the test modules, which means each of
>> the in-kernel code has to create their own in-kernel test
>> infrastructure.

The kselftest in-kernel tests follow a common pattern.  As such, there
is a framework.

This next two paragraphs you ignored entirely in your reply:

> Why create an entire new subsystem (KUnit) when you can add a header
> file (and .c code as appropriate) that outputs the proper TAP formatted
> results from kselftest kernel test modules?
> 
> There are already a multitude of in kernel test modules used by
> kselftest.  It would be good if they all used a common TAP compliant
> mechanism to report results.


 
>> That's much like saying you can use vice grips to turn a nut or
>> bolt-head.  You *can*, but it might be that using a monkey wrench
>> would be a much better tool that is much easier.
>>
>> What would you say to a wood worker objecting that a toolbox should
>> contain a monkey wrench because he already knows how to use vise
>> grips, and his tiny brain shouldn't be forced to learn how to use a
>> wrench when he knows how to use a vise grip, which is a perfectly good
>> tool?
>>
>> If you want to use vice grips as a hammer, screwdriver, monkey wrench,
>> etc.  there's nothing stopping you from doing that.  But it's not fair
>> to object to other people who might want to use better tools.
>>
>> The reality is that we have a lot of testing tools.  It's not just
>> kselftests.  There is xfstests for file system code, blktests for
>> block layer tests, etc.   We use the right tool for the right job.

My flippant answer:

> More specious arguments.

This time:

I took your answer as a straw man, and had no desire to spend time
countering a straw man.

I am not proposing using a vice grips (to use your analogy).  I
am saying that maybe the monkey wrench already exists.

My point of this whole thread has been to try to get the submitter
to provide a better, more complete explanation of how and why KUnit 
is a better tool.

I have not yet objected to the number (and differences among) the
many sub-system tests.  I am questioning whether there is a need
for another _test framework_ for in-kernel testing.  If there is
something important that KUnit provides that does not exist in
existing frameworks then the next question would be to ask how
to implement that important thing (improve the existing
framework, replace the existing framework, or have two
frameworks).  I still think it is premature to ask this
question until we first know the answer to what unique features
KUnit adds (and apparently until we know what the existing
framework provides).

-Frank

> 
> -Frank
> 
>>
>> 						- Ted
>>
> 
>
Frank Rowand May 10, 2019, 5:18 a.m. UTC | #33
On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> 
> 
> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
>>>
>>> The second item, arguably, does have significant overlap with kselftest.
>>> Whether you are running short tests in a light weight UML environment or
>>> higher level tests in an heavier VM the two could be using the same
>>> framework for writing or defining in-kernel tests. It *may* also be valuable
>>> for some people to be able to run all the UML tests in the heavy VM
>>> environment along side other higher level tests.
>>>
>>> Looking at the selftests tree in the repo, we already have similar items to
>>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
>>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
>>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
>>>
>>> However, the number of users of this harness appears to be quite small. Most
>>> of the code in the selftests tree seems to be a random mismash of scripts
>>> and userspace code so it's not hard to see it as something completely
>>> different from the new Kunit:
>>>
>>> $ git grep --files-with-matches kselftest_harness.h *
>>
>> To the extent that we can unify how tests are written, I agree that
>> this would be a good thing.  However, you should note that
>> kselftest_harness.h is currently assums that it will be included in
>> userspace programs.  This is most obviously seen if you look closely
>> at the functions defined in the header files which makes calls to
>> fork(), abort() and fprintf().
> 
> Ah, yes. I obviously did not dig deep enough. Using kunit for
> in-kernel tests and kselftest_harness for userspace tests seems like
> a sensible line to draw to me. Trying to unify kernel and userspace
> here sounds like it could be difficult so it's probably not worth
> forcing the issue unless someone wants to do some really fancy work
> to get it done.
> 
> Based on some of the other commenters, I was under the impression
> that kselftests had in-kernel tests but I'm not sure where or if they
> exist.

YES, kselftest has in-kernel tests.  (Excuse the shouting...)

Here is a likely list of them in the kernel source tree:

$ grep module_init lib/test_*.c
lib/test_bitfield.c:module_init(test_bitfields)
lib/test_bitmap.c:module_init(test_bitmap_init);
lib/test_bpf.c:module_init(test_bpf_init);
lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
lib/test_firmware.c:module_init(test_firmware_init);
lib/test_hash.c:module_init(test_hash_init);	/* Does everything */
lib/test_hexdump.c:module_init(test_hexdump_init);
lib/test_ida.c:module_init(ida_checks);
lib/test_kasan.c:module_init(kmalloc_tests_init);
lib/test_list_sort.c:module_init(list_sort_test);
lib/test_memcat_p.c:module_init(test_memcat_p_init);
lib/test_module.c:static int __init test_module_init(void)
lib/test_module.c:module_init(test_module_init);
lib/test_objagg.c:module_init(test_objagg_init);
lib/test_overflow.c:static int __init test_module_init(void)
lib/test_overflow.c:module_init(test_module_init);
lib/test_parman.c:module_init(test_parman_init);
lib/test_printf.c:module_init(test_printf_init);
lib/test_rhashtable.c:module_init(test_rht_init);
lib/test_siphash.c:module_init(siphash_test_init);
lib/test_sort.c:module_init(test_sort_init);
lib/test_stackinit.c:module_init(test_stackinit_init);
lib/test_static_key_base.c:module_init(test_static_key_base_init);
lib/test_static_keys.c:module_init(test_static_key_init);
lib/test_string.c:module_init(string_selftest_init);
lib/test_ubsan.c:module_init(test_ubsan_init);
lib/test_user_copy.c:module_init(test_user_copy_init);
lib/test_uuid.c:module_init(test_uuid_init);
lib/test_vmalloc.c:module_init(vmalloc_test_init)
lib/test_xarray.c:module_init(xarray_checks);


> If they do exists, it seems like it would make sense to
> convert those to kunit and have Kunit tests run-able in a VM or
> baremetal instance.

They already run in a VM.

They already run on bare metal.

They already run in UML.

This is not to say that KUnit does not make sense.  But I'm still trying
to get a better description of the KUnit features (and there are
some).

-Frank
Knut Omang May 10, 2019, 5:48 a.m. UTC | #34
On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > 
> > 
> > On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> >> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> >>>
> >>> The second item, arguably, does have significant overlap with kselftest.
> >>> Whether you are running short tests in a light weight UML environment or
> >>> higher level tests in an heavier VM the two could be using the same
> >>> framework for writing or defining in-kernel tests. It *may* also be valuable
> >>> for some people to be able to run all the UML tests in the heavy VM
> >>> environment along side other higher level tests.
> >>>
> >>> Looking at the selftests tree in the repo, we already have similar items to
> >>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> >>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> >>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> >>>
> >>> However, the number of users of this harness appears to be quite small. Most
> >>> of the code in the selftests tree seems to be a random mismash of scripts
> >>> and userspace code so it's not hard to see it as something completely
> >>> different from the new Kunit:
> >>>
> >>> $ git grep --files-with-matches kselftest_harness.h *
> >>
> >> To the extent that we can unify how tests are written, I agree that
> >> this would be a good thing.  However, you should note that
> >> kselftest_harness.h is currently assums that it will be included in
> >> userspace programs.  This is most obviously seen if you look closely
> >> at the functions defined in the header files which makes calls to
> >> fork(), abort() and fprintf().
> > 
> > Ah, yes. I obviously did not dig deep enough. Using kunit for
> > in-kernel tests and kselftest_harness for userspace tests seems like
> > a sensible line to draw to me. Trying to unify kernel and userspace
> > here sounds like it could be difficult so it's probably not worth
> > forcing the issue unless someone wants to do some really fancy work
> > to get it done.
> > 
> > Based on some of the other commenters, I was under the impression
> > that kselftests had in-kernel tests but I'm not sure where or if they
> > exist.
> 
> YES, kselftest has in-kernel tests.  (Excuse the shouting...)
> 
> Here is a likely list of them in the kernel source tree:
> 
> $ grep module_init lib/test_*.c
> lib/test_bitfield.c:module_init(test_bitfields)
> lib/test_bitmap.c:module_init(test_bitmap_init);
> lib/test_bpf.c:module_init(test_bpf_init);
> lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> lib/test_firmware.c:module_init(test_firmware_init);
> lib/test_hash.c:module_init(test_hash_init);	/* Does everything */
> lib/test_hexdump.c:module_init(test_hexdump_init);
> lib/test_ida.c:module_init(ida_checks);
> lib/test_kasan.c:module_init(kmalloc_tests_init);
> lib/test_list_sort.c:module_init(list_sort_test);
> lib/test_memcat_p.c:module_init(test_memcat_p_init);
> lib/test_module.c:static int __init test_module_init(void)
> lib/test_module.c:module_init(test_module_init);
> lib/test_objagg.c:module_init(test_objagg_init);
> lib/test_overflow.c:static int __init test_module_init(void)
> lib/test_overflow.c:module_init(test_module_init);
> lib/test_parman.c:module_init(test_parman_init);
> lib/test_printf.c:module_init(test_printf_init);
> lib/test_rhashtable.c:module_init(test_rht_init);
> lib/test_siphash.c:module_init(siphash_test_init);
> lib/test_sort.c:module_init(test_sort_init);
> lib/test_stackinit.c:module_init(test_stackinit_init);
> lib/test_static_key_base.c:module_init(test_static_key_base_init);
> lib/test_static_keys.c:module_init(test_static_key_init);
> lib/test_string.c:module_init(string_selftest_init);
> lib/test_ubsan.c:module_init(test_ubsan_init);
> lib/test_user_copy.c:module_init(test_user_copy_init);
> lib/test_uuid.c:module_init(test_uuid_init);
> lib/test_vmalloc.c:module_init(vmalloc_test_init)
> lib/test_xarray.c:module_init(xarray_checks);
> 
> 
> > If they do exists, it seems like it would make sense to
> > convert those to kunit and have Kunit tests run-able in a VM or
> > baremetal instance.
> 
> They already run in a VM.
> 
> They already run on bare metal.
> 
> They already run in UML.
> 
> This is not to say that KUnit does not make sense.  But I'm still trying
> to get a better description of the KUnit features (and there are
> some).

FYI, I have a master student who looks at converting some of these to KTF, such as for
instance the XArray tests, which lended themselves quite good to a semi-automated
conversion. 

The result is also a somewhat more compact code as well as the flexibility 
provided by the Googletest executor and the KTF frameworks, such as running selected
tests, output formatting, debugging features etc.

Knut

> -Frank
Daniel Vetter May 10, 2019, 8:12 a.m. UTC | #35
On Fri, May 10, 2019 at 7:49 AM Knut Omang <knut.omang@oracle.com> wrote:
>
> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> > On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > >
> > >
> > > On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> > >> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> > >>>
> > >>> The second item, arguably, does have significant overlap with kselftest.
> > >>> Whether you are running short tests in a light weight UML environment or
> > >>> higher level tests in an heavier VM the two could be using the same
> > >>> framework for writing or defining in-kernel tests. It *may* also be valuable
> > >>> for some people to be able to run all the UML tests in the heavy VM
> > >>> environment along side other higher level tests.
> > >>>
> > >>> Looking at the selftests tree in the repo, we already have similar items to
> > >>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> > >>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> > >>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> > >>>
> > >>> However, the number of users of this harness appears to be quite small. Most
> > >>> of the code in the selftests tree seems to be a random mismash of scripts
> > >>> and userspace code so it's not hard to see it as something completely
> > >>> different from the new Kunit:
> > >>>
> > >>> $ git grep --files-with-matches kselftest_harness.h *
> > >>
> > >> To the extent that we can unify how tests are written, I agree that
> > >> this would be a good thing.  However, you should note that
> > >> kselftest_harness.h is currently assums that it will be included in
> > >> userspace programs.  This is most obviously seen if you look closely
> > >> at the functions defined in the header files which makes calls to
> > >> fork(), abort() and fprintf().
> > >
> > > Ah, yes. I obviously did not dig deep enough. Using kunit for
> > > in-kernel tests and kselftest_harness for userspace tests seems like
> > > a sensible line to draw to me. Trying to unify kernel and userspace
> > > here sounds like it could be difficult so it's probably not worth
> > > forcing the issue unless someone wants to do some really fancy work
> > > to get it done.
> > >
> > > Based on some of the other commenters, I was under the impression
> > > that kselftests had in-kernel tests but I'm not sure where or if they
> > > exist.
> >
> > YES, kselftest has in-kernel tests.  (Excuse the shouting...)
> >
> > Here is a likely list of them in the kernel source tree:
> >
> > $ grep module_init lib/test_*.c
> > lib/test_bitfield.c:module_init(test_bitfields)
> > lib/test_bitmap.c:module_init(test_bitmap_init);
> > lib/test_bpf.c:module_init(test_bpf_init);
> > lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> > lib/test_firmware.c:module_init(test_firmware_init);
> > lib/test_hash.c:module_init(test_hash_init);  /* Does everything */
> > lib/test_hexdump.c:module_init(test_hexdump_init);
> > lib/test_ida.c:module_init(ida_checks);
> > lib/test_kasan.c:module_init(kmalloc_tests_init);
> > lib/test_list_sort.c:module_init(list_sort_test);
> > lib/test_memcat_p.c:module_init(test_memcat_p_init);
> > lib/test_module.c:static int __init test_module_init(void)
> > lib/test_module.c:module_init(test_module_init);
> > lib/test_objagg.c:module_init(test_objagg_init);
> > lib/test_overflow.c:static int __init test_module_init(void)
> > lib/test_overflow.c:module_init(test_module_init);
> > lib/test_parman.c:module_init(test_parman_init);
> > lib/test_printf.c:module_init(test_printf_init);
> > lib/test_rhashtable.c:module_init(test_rht_init);
> > lib/test_siphash.c:module_init(siphash_test_init);
> > lib/test_sort.c:module_init(test_sort_init);
> > lib/test_stackinit.c:module_init(test_stackinit_init);
> > lib/test_static_key_base.c:module_init(test_static_key_base_init);
> > lib/test_static_keys.c:module_init(test_static_key_init);
> > lib/test_string.c:module_init(string_selftest_init);
> > lib/test_ubsan.c:module_init(test_ubsan_init);
> > lib/test_user_copy.c:module_init(test_user_copy_init);
> > lib/test_uuid.c:module_init(test_uuid_init);
> > lib/test_vmalloc.c:module_init(vmalloc_test_init)
> > lib/test_xarray.c:module_init(xarray_checks);
> >
> >
> > > If they do exists, it seems like it would make sense to
> > > convert those to kunit and have Kunit tests run-able in a VM or
> > > baremetal instance.
> >
> > They already run in a VM.
> >
> > They already run on bare metal.
> >
> > They already run in UML.
> >
> > This is not to say that KUnit does not make sense.  But I'm still trying
> > to get a better description of the KUnit features (and there are
> > some).
>
> FYI, I have a master student who looks at converting some of these to KTF, such as for
> instance the XArray tests, which lended themselves quite good to a semi-automated
> conversion.
>
> The result is also a somewhat more compact code as well as the flexibility
> provided by the Googletest executor and the KTF frameworks, such as running selected
> tests, output formatting, debugging features etc.

So is KTF already in upstream? Or is the plan to unify the KTF and
Kunit in-kernel test harnesses? Because there's tons of these
in-kernel unit tests already, and every merge we get more (Frank's
list didn't even look into drivers or anywhere else, e.g. it's missing
the locking self tests I worked on in the past), and a more structured
approach would really be good.
-Daniel
Brendan Higgins May 10, 2019, 10:23 a.m. UTC | #36
> On Fri, May 10, 2019 at 7:49 AM Knut Omang <knut.omang@oracle.com> wrote:
> >
> > On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> > > On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > > >
> > > >
> > > > On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> > > >> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> > > >>>
> > > >>> The second item, arguably, does have significant overlap with kselftest.
> > > >>> Whether you are running short tests in a light weight UML environment or
> > > >>> higher level tests in an heavier VM the two could be using the same
> > > >>> framework for writing or defining in-kernel tests. It *may* also be valuable
> > > >>> for some people to be able to run all the UML tests in the heavy VM
> > > >>> environment along side other higher level tests.
> > > >>>
> > > >>> Looking at the selftests tree in the repo, we already have similar items to
> > > >>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> > > >>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> > > >>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> > > >>>
> > > >>> However, the number of users of this harness appears to be quite small. Most
> > > >>> of the code in the selftests tree seems to be a random mismash of scripts
> > > >>> and userspace code so it's not hard to see it as something completely
> > > >>> different from the new Kunit:
> > > >>>
> > > >>> $ git grep --files-with-matches kselftest_harness.h *
> > > >>
> > > >> To the extent that we can unify how tests are written, I agree that
> > > >> this would be a good thing.  However, you should note that
> > > >> kselftest_harness.h is currently assums that it will be included in
> > > >> userspace programs.  This is most obviously seen if you look closely
> > > >> at the functions defined in the header files which makes calls to
> > > >> fork(), abort() and fprintf().
> > > >
> > > > Ah, yes. I obviously did not dig deep enough. Using kunit for
> > > > in-kernel tests and kselftest_harness for userspace tests seems like
> > > > a sensible line to draw to me. Trying to unify kernel and userspace
> > > > here sounds like it could be difficult so it's probably not worth
> > > > forcing the issue unless someone wants to do some really fancy work
> > > > to get it done.
> > > >
> > > > Based on some of the other commenters, I was under the impression
> > > > that kselftests had in-kernel tests but I'm not sure where or if they
> > > > exist.
> > >
> > > YES, kselftest has in-kernel tests.  (Excuse the shouting...)
> > >
> > > Here is a likely list of them in the kernel source tree:
> > >
> > > $ grep module_init lib/test_*.c
> > > lib/test_bitfield.c:module_init(test_bitfields)
> > > lib/test_bitmap.c:module_init(test_bitmap_init);
> > > lib/test_bpf.c:module_init(test_bpf_init);
> > > lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> > > lib/test_firmware.c:module_init(test_firmware_init);
> > > lib/test_hash.c:module_init(test_hash_init);  /* Does everything */
> > > lib/test_hexdump.c:module_init(test_hexdump_init);
> > > lib/test_ida.c:module_init(ida_checks);
> > > lib/test_kasan.c:module_init(kmalloc_tests_init);
> > > lib/test_list_sort.c:module_init(list_sort_test);
> > > lib/test_memcat_p.c:module_init(test_memcat_p_init);
> > > lib/test_module.c:static int __init test_module_init(void)
> > > lib/test_module.c:module_init(test_module_init);
> > > lib/test_objagg.c:module_init(test_objagg_init);
> > > lib/test_overflow.c:static int __init test_module_init(void)
> > > lib/test_overflow.c:module_init(test_module_init);
> > > lib/test_parman.c:module_init(test_parman_init);
> > > lib/test_printf.c:module_init(test_printf_init);
> > > lib/test_rhashtable.c:module_init(test_rht_init);
> > > lib/test_siphash.c:module_init(siphash_test_init);
> > > lib/test_sort.c:module_init(test_sort_init);
> > > lib/test_stackinit.c:module_init(test_stackinit_init);
> > > lib/test_static_key_base.c:module_init(test_static_key_base_init);
> > > lib/test_static_keys.c:module_init(test_static_key_init);
> > > lib/test_string.c:module_init(string_selftest_init);
> > > lib/test_ubsan.c:module_init(test_ubsan_init);
> > > lib/test_user_copy.c:module_init(test_user_copy_init);
> > > lib/test_uuid.c:module_init(test_uuid_init);
> > > lib/test_vmalloc.c:module_init(vmalloc_test_init)
> > > lib/test_xarray.c:module_init(xarray_checks);
> > >
> > >
> > > > If they do exists, it seems like it would make sense to
> > > > convert those to kunit and have Kunit tests run-able in a VM or
> > > > baremetal instance.
> > >
> > > They already run in a VM.
> > >
> > > They already run on bare metal.
> > >
> > > They already run in UML.
> > >
> > > This is not to say that KUnit does not make sense.  But I'm still trying
> > > to get a better description of the KUnit features (and there are
> > > some).
> >
> > FYI, I have a master student who looks at converting some of these to KTF, such as for
> > instance the XArray tests, which lended themselves quite good to a semi-automated
> > conversion.
> >
> > The result is also a somewhat more compact code as well as the flexibility
> > provided by the Googletest executor and the KTF frameworks, such as running selected
> > tests, output formatting, debugging features etc.
>
> So is KTF already in upstream? Or is the plan to unify the KTF and

I am not certain about KTF's upstream plans, but I assume that Knut
would have CC'ed me on the thread if he had started working on it.

> Kunit in-kernel test harnesses? Because there's tons of these

No, no plan. Knut and I talked about this a good while ago and it
seemed that we had pretty fundamentally different approaches both in
terms of implementation and end goal. Combining them seemed pretty
infeasible, at least from a technical perspective. Anyway, I am sure
Knut would like to give him perspective on the matter and I don't want
to say too much without first giving him a chance to chime in on the
matter.

Nevertheless, I hope you don't see resolving this as a condition for
accepting this patchset. I had several rounds of RFC on KUnit, and no
one had previously brought this up.

> in-kernel unit tests already, and every merge we get more (Frank's
> list didn't even look into drivers or anywhere else, e.g. it's missing
> the locking self tests I worked on in the past), and a more structured
> approach would really be good.

Well, that's what I am trying to do. I hope you like it!

Cheers!
Brendan Higgins May 10, 2019, 10:25 a.m. UTC | #37
> On Thu, May 2, 2019 at 8:02 AM Brendan Higgins
> <brendanhiggins@google.com> wrote:
> >
> > ## TLDR
> >
> > I rebased the last patchset on 5.1-rc7 in hopes that we can get this in
> > 5.2.
> >
> > Shuah, I think you, Greg KH, and myself talked off thread, and we agreed
> > we would merge through your tree when the time came? Am I remembering
> > correctly?
> >
> > ## Background
> >
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
> >
> > Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> > it does not require installing the kernel on a test machine or in a VM
> > and does not require tests to be written in userspace running on a host
> > kernel. Additionally, KUnit is fast: From invocation to completion KUnit
> > can run several dozen tests in under a second. Currently, the entire
> > KUnit test suite for KUnit runs in under a second from the initial
> > invocation (build time excluded).
> >
> > KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> > Googletest/Googlemock for C++. KUnit provides facilities for defining
> > unit test cases, grouping related test cases into test suites, providing
> > common infrastructure for running tests, mocking, spying, and much more.
> >
> > ## What's so special about unit testing?
> >
> > A unit test is supposed to test a single unit of code in isolation,
> > hence the name. There should be no dependencies outside the control of
> > the test; this means no external dependencies, which makes tests orders
> > of magnitudes faster. Likewise, since there are no external dependencies,
> > there are no hoops to jump through to run the tests. Additionally, this
> > makes unit tests deterministic: a failing unit test always indicates a
> > problem. Finally, because unit tests necessarily have finer granularity,
> > they are able to test all code paths easily solving the classic problem
> > of difficulty in exercising error handling code.
> >
> > ## Is KUnit trying to replace other testing frameworks for the kernel?
> >
> > No. Most existing tests for the Linux kernel are end-to-end tests, which
> > have their place. A well tested system has lots of unit tests, a
> > reasonable number of integration tests, and some end-to-end tests. KUnit
> > is just trying to address the unit test space which is currently not
> > being addressed.
> >
> > ## More information on KUnit
> >
> > There is a bunch of documentation near the end of this patch set that
> > describes how to use KUnit and best practices for writing unit tests.
> > For convenience I am hosting the compiled docs here:
> > https://google.github.io/kunit-docs/third_party/kernel/docs/
> > Additionally for convenience, I have applied these patches to a branch:
> > https://kunit.googlesource.com/linux/+/kunit/rfc/v5.1-rc7/v1
> > The repo may be cloned with:
> > git clone https://kunit.googlesource.com/linux
> > This patchset is on the kunit/rfc/v5.1-rc7/v1 branch.
> >
> > ## Changes Since Last Version
> >
> > None. I just rebased the last patchset on v5.1-rc7.
> >
> > --
> > 2.21.0.593.g511ec345e18-goog
> >
>
> The following is the log of 'git am' of this series.
> I see several 'new blank line at EOF' warnings.
>
>
>
> masahiro@pug:~/workspace/bsp/linux$ git am ~/Downloads/*.patch
> Applying: kunit: test: add KUnit test runner core
> Applying: kunit: test: add test resource management API
> Applying: kunit: test: add string_stream a std::stream like string builder
> .git/rebase-apply/patch:223: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kunit: test: add kunit_stream a std::stream like logger
> Applying: kunit: test: add the concept of expectations
> .git/rebase-apply/patch:475: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kbuild: enable building KUnit
> Applying: kunit: test: add initial tests
> .git/rebase-apply/patch:203: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kunit: test: add support for test abort
> .git/rebase-apply/patch:453: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kunit: test: add tests for kunit test abort
> Applying: kunit: test: add the concept of assertions
> .git/rebase-apply/patch:518: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kunit: test: add test managed resource tests
> Applying: kunit: tool: add Python wrappers for running KUnit tests
> .git/rebase-apply/patch:457: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> Applying: kunit: defconfig: add defconfigs for building KUnit tests
> Applying: Documentation: kunit: add documentation for KUnit
> .git/rebase-apply/patch:71: new blank line at EOF.
> +
> .git/rebase-apply/patch:209: new blank line at EOF.
> +
> .git/rebase-apply/patch:848: new blank line at EOF.
> +
> warning: 3 lines add whitespace errors.
> Applying: MAINTAINERS: add entry for KUnit the unit testing framework
> Applying: kernel/sysctl-test: Add null pointer test for sysctl.c:proc_dointvec()
> Applying: MAINTAINERS: add proc sysctl KUnit test to PROC SYSCTL section

Sorry about this! I will have it fixed on the next revision.
Theodore Ts'o May 10, 2019, 10:43 a.m. UTC | #38
On Thu, May 09, 2019 at 10:11:01PM -0700, Frank Rowand wrote:
> >> You *can* run in-kernel test using modules; but there is no framework
> >> for the in-kernel code found in the test modules, which means each of
> >> the in-kernel code has to create their own in-kernel test
> >> infrastructure.
> 
> The kselftest in-kernel tests follow a common pattern.  As such, there
> is a framework.

So we may have different definitions of "framework".  In my book, code
reuse by "cut and paste" does not make a framework.  Could they be
rewritten to *use* a framework, whether it be KTF or KUnit?  Sure!
But they are not using a framework *today*.

> This next two paragraphs you ignored entirely in your reply:
> 
> > Why create an entire new subsystem (KUnit) when you can add a header
> > file (and .c code as appropriate) that outputs the proper TAP formatted
> > results from kselftest kernel test modules?

And you keep ignoring my main observation, which is that spinning up a
VM, letting systemd start, mounting a root file system, etc., is all
unnecessary overhead which takes time.  This is important to me,
because developer velocity is extremely important if you are doing
test driven development.

Yes, you can manually unload a module, recompile the module, somehow
get the module back into the VM (perhaps by using virtio-9p), and then
reloading the module with the in-kernel test code, and the restart the
test.  BUT: (a) even if it is faster, it requires a lot of manual
steps, and would be very hard to automate, and (b) if the test code
ever OOPS or triggers a lockdep warning, you will need to restart the
VM, and so this involves all of the VM restart overhead, plus trying
to automate determining when you actually do need to restart the VM
versus unloading and reloading the module.   It's clunky.

Being able to do the equivalent of "make && make check" is a really
big deal.  And "make check" needs to go fast.

You keep ignore this point, perhaps because you don't care about this
issue?  Which is fine, and why we may just need to agree to disagree.

Cheers,

						- Ted

P.S.  Running scripts is Turing-equivalent, so it's self-evident that
*anything* you can do with other test frameworks you can somehow do in
kselftests.  That argument doesn't impress me, and why I do consider
it quite flippant.  (Heck, /bin/vi is Turing equivalent so we could
use vi to as a kernel test framework.  Or we could use emacs.  Let's
not.  :-)

The question is whether it is the most best and most efficient way to
do that testing.  And developer velocity is a really big part of my
evaluation function when judging whether or a test framework is fit
for that purpose.
Knut Omang May 10, 2019, 11:36 a.m. UTC | #39
On Fri, 2019-05-10 at 10:12 +0200, Daniel Vetter wrote:
> On Fri, May 10, 2019 at 7:49 AM Knut Omang <knut.omang@oracle.com> wrote:
> >
> > On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> > > On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > > >
> > > >
> > > > On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> > > >> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> > > >>>
> > > >>> The second item, arguably, does have significant overlap with kselftest.
> > > >>> Whether you are running short tests in a light weight UML environment or
> > > >>> higher level tests in an heavier VM the two could be using the same
> > > >>> framework for writing or defining in-kernel tests. It *may* also be valuable
> > > >>> for some people to be able to run all the UML tests in the heavy VM
> > > >>> environment along side other higher level tests.
> > > >>>
> > > >>> Looking at the selftests tree in the repo, we already have similar items to
> > > >>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> > > >>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> > > >>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> > > >>>
> > > >>> However, the number of users of this harness appears to be quite small. Most
> > > >>> of the code in the selftests tree seems to be a random mismash of scripts
> > > >>> and userspace code so it's not hard to see it as something completely
> > > >>> different from the new Kunit:
> > > >>>
> > > >>> $ git grep --files-with-matches kselftest_harness.h *
> > > >>
> > > >> To the extent that we can unify how tests are written, I agree that
> > > >> this would be a good thing.  However, you should note that
> > > >> kselftest_harness.h is currently assums that it will be included in
> > > >> userspace programs.  This is most obviously seen if you look closely
> > > >> at the functions defined in the header files which makes calls to
> > > >> fork(), abort() and fprintf().
> > > >
> > > > Ah, yes. I obviously did not dig deep enough. Using kunit for
> > > > in-kernel tests and kselftest_harness for userspace tests seems like
> > > > a sensible line to draw to me. Trying to unify kernel and userspace
> > > > here sounds like it could be difficult so it's probably not worth
> > > > forcing the issue unless someone wants to do some really fancy work
> > > > to get it done.
> > > >
> > > > Based on some of the other commenters, I was under the impression
> > > > that kselftests had in-kernel tests but I'm not sure where or if they
> > > > exist.
> > >
> > > YES, kselftest has in-kernel tests.  (Excuse the shouting...)
> > >
> > > Here is a likely list of them in the kernel source tree:
> > >
> > > $ grep module_init lib/test_*.c
> > > lib/test_bitfield.c:module_init(test_bitfields)
> > > lib/test_bitmap.c:module_init(test_bitmap_init);
> > > lib/test_bpf.c:module_init(test_bpf_init);
> > > lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> > > lib/test_firmware.c:module_init(test_firmware_init);
> > > lib/test_hash.c:module_init(test_hash_init);  /* Does everything */
> > > lib/test_hexdump.c:module_init(test_hexdump_init);
> > > lib/test_ida.c:module_init(ida_checks);
> > > lib/test_kasan.c:module_init(kmalloc_tests_init);
> > > lib/test_list_sort.c:module_init(list_sort_test);
> > > lib/test_memcat_p.c:module_init(test_memcat_p_init);
> > > lib/test_module.c:static int __init test_module_init(void)
> > > lib/test_module.c:module_init(test_module_init);
> > > lib/test_objagg.c:module_init(test_objagg_init);
> > > lib/test_overflow.c:static int __init test_module_init(void)
> > > lib/test_overflow.c:module_init(test_module_init);
> > > lib/test_parman.c:module_init(test_parman_init);
> > > lib/test_printf.c:module_init(test_printf_init);
> > > lib/test_rhashtable.c:module_init(test_rht_init);
> > > lib/test_siphash.c:module_init(siphash_test_init);
> > > lib/test_sort.c:module_init(test_sort_init);
> > > lib/test_stackinit.c:module_init(test_stackinit_init);
> > > lib/test_static_key_base.c:module_init(test_static_key_base_init);
> > > lib/test_static_keys.c:module_init(test_static_key_init);
> > > lib/test_string.c:module_init(string_selftest_init);
> > > lib/test_ubsan.c:module_init(test_ubsan_init);
> > > lib/test_user_copy.c:module_init(test_user_copy_init);
> > > lib/test_uuid.c:module_init(test_uuid_init);
> > > lib/test_vmalloc.c:module_init(vmalloc_test_init)
> > > lib/test_xarray.c:module_init(xarray_checks);
> > >
> > >
> > > > If they do exists, it seems like it would make sense to
> > > > convert those to kunit and have Kunit tests run-able in a VM or
> > > > baremetal instance.
> > >
> > > They already run in a VM.
> > >
> > > They already run on bare metal.
> > >
> > > They already run in UML.
> > >
> > > This is not to say that KUnit does not make sense.  But I'm still trying
> > > to get a better description of the KUnit features (and there are
> > > some).
> >
> > FYI, I have a master student who looks at converting some of these to KTF, such as for
> > instance the XArray tests, which lended themselves quite good to a semi-automated
> > conversion.
> >
> > The result is also a somewhat more compact code as well as the flexibility
> > provided by the Googletest executor and the KTF frameworks, such as running selected
> > tests, output formatting, debugging features etc.
> 
> So is KTF already in upstream? 

No..

> Or is the plan to unify the KTF and
> Kunit in-kernel test harnesses? 

Since KTF delegates reporting and test running to user space - reporting is based on
netlink communication with the user land frontend (Googletest based, but can in principle
be ported to any user land framework if need be) - little specific test harness features
are needed for KTF, so there's little if no direct overlap with the infrastructure in
kselftests (as far as I understand, I'd like to spend more time on the details there..)

> Because there's tons of these
> in-kernel unit tests already, and every merge we get more (Frank's
> list didn't even look into drivers or anywhere else, e.g. it's missing
> the locking self tests I worked on in the past), and a more structured
> approach would really be good.

That's been my thinking too. Having a unified way to gradually move to would benefit
anyone who needs to relate to these tests in that there will be less to learn.
But that would be a long term evolutionary goal of course.

Also I think each of these test suites/sets would benefit from being more available for
running even in kernels not made specifically for testing, and I think using the module
structure and a lean, common module (ktf.ko) as a library for the tests, but no
"polluting" of the "production" kernel code with anything else, is a kind of a policy that
comes implicitly with KTF that can make it easier to maintain a sort of standard
"language" for writing kernel tests. 

Wrt KTF, we're working on making a suitable patch set for the kernel,
but also have projects running internally that uses KTF as a standalone git repository,
and that inspires new features and other changes, as well as a growing set of tests. 
I want to make sure we find a good candidate kernel integration that can coexist nicely
with the separate git repo version without becoming too much of a back/forward porting
challenge.

Knut

> -Daniel
Knut Omang May 10, 2019, 12:12 p.m. UTC | #40
On Fri, 2019-05-10 at 03:23 -0700, Brendan Higgins wrote:
> > On Fri, May 10, 2019 at 7:49 AM Knut Omang <knut.omang@oracle.com> wrote:
> > >
> > > On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> > > > On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > > > >
> > > > >
> > > > > On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> > > > >> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> > > > >>>
> > > > >>> The second item, arguably, does have significant overlap with kselftest.
> > > > >>> Whether you are running short tests in a light weight UML environment or
> > > > >>> higher level tests in an heavier VM the two could be using the same
> > > > >>> framework for writing or defining in-kernel tests. It *may* also be valuable
> > > > >>> for some people to be able to run all the UML tests in the heavy VM
> > > > >>> environment along side other higher level tests.
> > > > >>>
> > > > >>> Looking at the selftests tree in the repo, we already have similar items to
> > > > >>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> > > > >>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> > > > >>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> > > > >>>
> > > > >>> However, the number of users of this harness appears to be quite small. Most
> > > > >>> of the code in the selftests tree seems to be a random mismash of scripts
> > > > >>> and userspace code so it's not hard to see it as something completely
> > > > >>> different from the new Kunit:
> > > > >>>
> > > > >>> $ git grep --files-with-matches kselftest_harness.h *
> > > > >>
> > > > >> To the extent that we can unify how tests are written, I agree that
> > > > >> this would be a good thing.  However, you should note that
> > > > >> kselftest_harness.h is currently assums that it will be included in
> > > > >> userspace programs.  This is most obviously seen if you look closely
> > > > >> at the functions defined in the header files which makes calls to
> > > > >> fork(), abort() and fprintf().
> > > > >
> > > > > Ah, yes. I obviously did not dig deep enough. Using kunit for
> > > > > in-kernel tests and kselftest_harness for userspace tests seems like
> > > > > a sensible line to draw to me. Trying to unify kernel and userspace
> > > > > here sounds like it could be difficult so it's probably not worth
> > > > > forcing the issue unless someone wants to do some really fancy work
> > > > > to get it done.
> > > > >
> > > > > Based on some of the other commenters, I was under the impression
> > > > > that kselftests had in-kernel tests but I'm not sure where or if they
> > > > > exist.
> > > >
> > > > YES, kselftest has in-kernel tests.  (Excuse the shouting...)
> > > >
> > > > Here is a likely list of them in the kernel source tree:
> > > >
> > > > $ grep module_init lib/test_*.c
> > > > lib/test_bitfield.c:module_init(test_bitfields)
> > > > lib/test_bitmap.c:module_init(test_bitmap_init);
> > > > lib/test_bpf.c:module_init(test_bpf_init);
> > > > lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> > > > lib/test_firmware.c:module_init(test_firmware_init);
> > > > lib/test_hash.c:module_init(test_hash_init);  /* Does everything */
> > > > lib/test_hexdump.c:module_init(test_hexdump_init);
> > > > lib/test_ida.c:module_init(ida_checks);
> > > > lib/test_kasan.c:module_init(kmalloc_tests_init);
> > > > lib/test_list_sort.c:module_init(list_sort_test);
> > > > lib/test_memcat_p.c:module_init(test_memcat_p_init);
> > > > lib/test_module.c:static int __init test_module_init(void)
> > > > lib/test_module.c:module_init(test_module_init);
> > > > lib/test_objagg.c:module_init(test_objagg_init);
> > > > lib/test_overflow.c:static int __init test_module_init(void)
> > > > lib/test_overflow.c:module_init(test_module_init);
> > > > lib/test_parman.c:module_init(test_parman_init);
> > > > lib/test_printf.c:module_init(test_printf_init);
> > > > lib/test_rhashtable.c:module_init(test_rht_init);
> > > > lib/test_siphash.c:module_init(siphash_test_init);
> > > > lib/test_sort.c:module_init(test_sort_init);
> > > > lib/test_stackinit.c:module_init(test_stackinit_init);
> > > > lib/test_static_key_base.c:module_init(test_static_key_base_init);
> > > > lib/test_static_keys.c:module_init(test_static_key_init);
> > > > lib/test_string.c:module_init(string_selftest_init);
> > > > lib/test_ubsan.c:module_init(test_ubsan_init);
> > > > lib/test_user_copy.c:module_init(test_user_copy_init);
> > > > lib/test_uuid.c:module_init(test_uuid_init);
> > > > lib/test_vmalloc.c:module_init(vmalloc_test_init)
> > > > lib/test_xarray.c:module_init(xarray_checks);
> > > >
> > > >
> > > > > If they do exists, it seems like it would make sense to
> > > > > convert those to kunit and have Kunit tests run-able in a VM or
> > > > > baremetal instance.
> > > >
> > > > They already run in a VM.
> > > >
> > > > They already run on bare metal.
> > > >
> > > > They already run in UML.
> > > >
> > > > This is not to say that KUnit does not make sense.  But I'm still trying
> > > > to get a better description of the KUnit features (and there are
> > > > some).
> > >
> > > FYI, I have a master student who looks at converting some of these to KTF, such as
> for
> > > instance the XArray tests, which lended themselves quite good to a semi-automated
> > > conversion.
> > >
> > > The result is also a somewhat more compact code as well as the flexibility
> > > provided by the Googletest executor and the KTF frameworks, such as running selected
> > > tests, output formatting, debugging features etc.
> >
> > So is KTF already in upstream? Or is the plan to unify the KTF and
> 
> I am not certain about KTF's upstream plans, but I assume that Knut
> would have CC'ed me on the thread if he had started working on it.

You are on the Github watcher list for KTF?
Quite a few of the commits there are preparatory for a forthcoming kernel patch set.
I'll of course CC: you on the patch set when we send it to the list.

> > Kunit in-kernel test harnesses? Because there's tons of these
> 
> No, no plan. Knut and I talked about this a good while ago and it
> seemed that we had pretty fundamentally different approaches both in
> terms of implementation and end goal. Combining them seemed pretty
> infeasible, at least from a technical perspective. Anyway, I am sure
> Knut would like to give him perspective on the matter and I don't want
> to say too much without first giving him a chance to chime in on the
> matter.

I need more time to study KUnit details to say, but from a 10k feet perspective:
I think at least there's a potential for some API unification, in using the same macro
names. How about removing the KUNIT_ prefix to the test macros ;-) ? 
That would make the names shorter, saving typing when writing tests, and storage ;-) 
and also make the names more similar to KTF's, and those of user land unit test
frameworks? Also it will make it possible to have functions compiling both with KTF and
KUnit, facilitating moving code between the two.

Also the string stream facilities of KUnit looks interesting to share.
But as said, more work needed on my side to be sure about this..

Knut

> Nevertheless, I hope you don't see resolving this as a condition for
> accepting this patchset. I had several rounds of RFC on KUnit, and no
> one had previously brought this up.
> 
> > in-kernel unit tests already, and every merge we get more (Frank's
> > list didn't even look into drivers or anywhere else, e.g. it's missing
> > the locking self tests I worked on in the past), and a more structured
> > approach would really be good.
> 
> Well, that's what I am trying to do. I hope you like it!
> 
> Cheers!
Logan Gunthorpe May 10, 2019, 4:17 p.m. UTC | #41
On 2019-05-09 11:18 p.m., Frank Rowand wrote:

> YES, kselftest has in-kernel tests.  (Excuse the shouting...)

Cool. From my cursory look, in my opinion, these would be greatly
improved by converting them to the framework Brendan is proposing for Kunit.

>> If they do exists, it seems like it would make sense to
>> convert those to kunit and have Kunit tests run-able in a VM or
>> baremetal instance.
> 
> They already run in a VM.
> 
> They already run on bare metal.
> 
> They already run in UML.

Simply being able to run in UML is not the only thing here. Kunit
provides the infrastructure to quickly build, run and report results for
all the tests from userspace without needing to worry about the details
of building and running a UML kernel, then parsing dmesg to figure out
what tests were run or not.

> This is not to say that KUnit does not make sense.  But I'm still trying
> to get a better description of the KUnit features (and there are
> some).

So read the patches, or the documentation[1] or the LWN article[2]. It's
pretty well described in a lot of places -- that's one of the big
advantages of it. In contrast, few people seems to have any concept of
what kselftests are or where they are or how to run them (I was
surprised to find the in-kernel tests in the lib tree).

Logan

[1] https://google.github.io/kunit-docs/third_party/kernel/docs/
[2] https://lwn.net/Articles/780985/
Brendan Higgins May 10, 2019, 8:54 p.m. UTC | #42
On Fri, May 10, 2019 at 5:13 AM Knut Omang <knut.omang@oracle.com> wrote:
> On Fri, 2019-05-10 at 03:23 -0700, Brendan Higgins wrote:
> > > On Fri, May 10, 2019 at 7:49 AM Knut Omang <knut.omang@oracle.com> wrote:
> > > >
> > > > On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> > > > > On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > > > > >
> > > > > >
> > > > > > On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> > > > > >> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> > > > > >>>
> > > > > >>> The second item, arguably, does have significant overlap with kselftest.
> > > > > >>> Whether you are running short tests in a light weight UML environment or
> > > > > >>> higher level tests in an heavier VM the two could be using the same
> > > > > >>> framework for writing or defining in-kernel tests. It *may* also be valuable
> > > > > >>> for some people to be able to run all the UML tests in the heavy VM
> > > > > >>> environment along side other higher level tests.
> > > > > >>>
> > > > > >>> Looking at the selftests tree in the repo, we already have similar items to
> > > > > >>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> > > > > >>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> > > > > >>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> > > > > >>>
> > > > > >>> However, the number of users of this harness appears to be quite small. Most
> > > > > >>> of the code in the selftests tree seems to be a random mismash of scripts
> > > > > >>> and userspace code so it's not hard to see it as something completely
> > > > > >>> different from the new Kunit:
> > > > > >>>
> > > > > >>> $ git grep --files-with-matches kselftest_harness.h *
> > > > > >>
> > > > > >> To the extent that we can unify how tests are written, I agree that
> > > > > >> this would be a good thing.  However, you should note that
> > > > > >> kselftest_harness.h is currently assums that it will be included in
> > > > > >> userspace programs.  This is most obviously seen if you look closely
> > > > > >> at the functions defined in the header files which makes calls to
> > > > > >> fork(), abort() and fprintf().
> > > > > >
> > > > > > Ah, yes. I obviously did not dig deep enough. Using kunit for
> > > > > > in-kernel tests and kselftest_harness for userspace tests seems like
> > > > > > a sensible line to draw to me. Trying to unify kernel and userspace
> > > > > > here sounds like it could be difficult so it's probably not worth
> > > > > > forcing the issue unless someone wants to do some really fancy work
> > > > > > to get it done.
> > > > > >
> > > > > > Based on some of the other commenters, I was under the impression
> > > > > > that kselftests had in-kernel tests but I'm not sure where or if they
> > > > > > exist.
> > > > >
> > > > > YES, kselftest has in-kernel tests.  (Excuse the shouting...)
> > > > >
> > > > > Here is a likely list of them in the kernel source tree:
> > > > >
> > > > > $ grep module_init lib/test_*.c
> > > > > lib/test_bitfield.c:module_init(test_bitfields)
> > > > > lib/test_bitmap.c:module_init(test_bitmap_init);
> > > > > lib/test_bpf.c:module_init(test_bpf_init);
> > > > > lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> > > > > lib/test_firmware.c:module_init(test_firmware_init);
> > > > > lib/test_hash.c:module_init(test_hash_init);  /* Does everything */
> > > > > lib/test_hexdump.c:module_init(test_hexdump_init);
> > > > > lib/test_ida.c:module_init(ida_checks);
> > > > > lib/test_kasan.c:module_init(kmalloc_tests_init);
> > > > > lib/test_list_sort.c:module_init(list_sort_test);
> > > > > lib/test_memcat_p.c:module_init(test_memcat_p_init);
> > > > > lib/test_module.c:static int __init test_module_init(void)
> > > > > lib/test_module.c:module_init(test_module_init);
> > > > > lib/test_objagg.c:module_init(test_objagg_init);
> > > > > lib/test_overflow.c:static int __init test_module_init(void)
> > > > > lib/test_overflow.c:module_init(test_module_init);
> > > > > lib/test_parman.c:module_init(test_parman_init);
> > > > > lib/test_printf.c:module_init(test_printf_init);
> > > > > lib/test_rhashtable.c:module_init(test_rht_init);
> > > > > lib/test_siphash.c:module_init(siphash_test_init);
> > > > > lib/test_sort.c:module_init(test_sort_init);
> > > > > lib/test_stackinit.c:module_init(test_stackinit_init);
> > > > > lib/test_static_key_base.c:module_init(test_static_key_base_init);
> > > > > lib/test_static_keys.c:module_init(test_static_key_init);
> > > > > lib/test_string.c:module_init(string_selftest_init);
> > > > > lib/test_ubsan.c:module_init(test_ubsan_init);
> > > > > lib/test_user_copy.c:module_init(test_user_copy_init);
> > > > > lib/test_uuid.c:module_init(test_uuid_init);
> > > > > lib/test_vmalloc.c:module_init(vmalloc_test_init)
> > > > > lib/test_xarray.c:module_init(xarray_checks);
> > > > >
> > > > >
> > > > > > If they do exists, it seems like it would make sense to
> > > > > > convert those to kunit and have Kunit tests run-able in a VM or
> > > > > > baremetal instance.
> > > > >
> > > > > They already run in a VM.
> > > > >
> > > > > They already run on bare metal.
> > > > >
> > > > > They already run in UML.
> > > > >
> > > > > This is not to say that KUnit does not make sense.  But I'm still trying
> > > > > to get a better description of the KUnit features (and there are
> > > > > some).
> > > >
> > > > FYI, I have a master student who looks at converting some of these to KTF, such as
> > for
> > > > instance the XArray tests, which lended themselves quite good to a semi-automated
> > > > conversion.
> > > >
> > > > The result is also a somewhat more compact code as well as the flexibility
> > > > provided by the Googletest executor and the KTF frameworks, such as running selected
> > > > tests, output formatting, debugging features etc.
> > >
> > > So is KTF already in upstream? Or is the plan to unify the KTF and
> >
> > I am not certain about KTF's upstream plans, but I assume that Knut
> > would have CC'ed me on the thread if he had started working on it.
>
> You are on the Github watcher list for KTF?

Yep! I have been since LPC in 2017.

> Quite a few of the commits there are preparatory for a forthcoming kernel patch set.
> I'll of course CC: you on the patch set when we send it to the list.

Awesome! I appreciate it.

>
> > > Kunit in-kernel test harnesses? Because there's tons of these
> >
> > No, no plan. Knut and I talked about this a good while ago and it
> > seemed that we had pretty fundamentally different approaches both in
> > terms of implementation and end goal. Combining them seemed pretty
> > infeasible, at least from a technical perspective. Anyway, I am sure
> > Knut would like to give him perspective on the matter and I don't want
> > to say too much without first giving him a chance to chime in on the
> > matter.
>
> I need more time to study KUnit details to say, but from a 10k feet perspective:
> I think at least there's a potential for some API unification, in using the same macro
> names. How about removing the KUNIT_ prefix to the test macros ;-) ?

Heh, heh. That's actually the way I had it in the earliest versions of
KUnit! But that was pretty much the very first thing everyone
complained about. I think I went from no prefix (like you are
suggesting) to TEST_* before the first version of the RFC at the
request of several people I was kicking the idea around with, and then
I think I was asked to go from TEST_* to KUNIT_* in the very first
revision of the RFC.

In short, I am sympathetic to your suggestion, but I think that is
non-negotiable at this point. The community has a clear policy in
place on the matter, and at this point I would really prefer not to
change all the symbol names again.

> That would make the names shorter, saving typing when writing tests, and storage ;-)
> and also make the names more similar to KTF's, and those of user land unit test

You mean the Googletest/Googlemock expectations/assertions?

It's a great library (with not so great a name), but unfortunately it
is written in C++, which I think pretty much counts it out here.

> frameworks? Also it will make it possible to have functions compiling both with KTF and
> KUnit, facilitating moving code between the two.

I think that would be cool, but again, I don't think this will be
possible with Googletest/Googlemock.

>
> Also the string stream facilities of KUnit looks interesting to share.

I am glad you think so!

If your biggest concern on my side is test macro names (which I think
is a no-go as I mentioned above), I think we should be in pretty good
shape once you are ready to move forward. Besides, I have a lot more
KUnit patches coming after this: landing this patchset is just the
beginning. So how about we keep moving forward on this patchset?
Frank Rowand May 10, 2019, 9:05 p.m. UTC | #43
On 5/10/19 3:43 AM, Theodore Ts'o wrote:
> On Thu, May 09, 2019 at 10:11:01PM -0700, Frank Rowand wrote:
>>>> You *can* run in-kernel test using modules; but there is no framework
>>>> for the in-kernel code found in the test modules, which means each of
>>>> the in-kernel code has to create their own in-kernel test
>>>> infrastructure.
>>
>> The kselftest in-kernel tests follow a common pattern.  As such, there
>> is a framework.
> 
> So we may have different definitions of "framework".  In my book, code
> reuse by "cut and paste" does not make a framework.  Could they be
> rewritten to *use* a framework, whether it be KTF or KUnit?  Sure!
> But they are not using a framework *today*.
> 
>> This next two paragraphs you ignored entirely in your reply:
>>
>>> Why create an entire new subsystem (KUnit) when you can add a header
>>> file (and .c code as appropriate) that outputs the proper TAP formatted
>>> results from kselftest kernel test modules?
> 
> And you keep ignoring my main observation, which is that spinning up a
> VM, letting systemd start, mounting a root file system, etc., is all
> unnecessary overhead which takes time.  This is important to me,
> because developer velocity is extremely important if you are doing
> test driven development.

No, I do not "keep ignoring my main observation".  You made that
observation in an email of Thu, 9 May 2019 09:35:51 -0400.  In my
reply to Tim's reply to your email, I wrote:

   "< massive snip >

   I'll reply in more detail to some other earlier messages in this thread
   later.

   This reply is an attempt to return to the intent of my original reply to
   patch 0 of this series."


I have not directly replied to any of your other emails that have made
that observation (I may have replied to other emails that were replies
to such an email of yours, but not in the context of the overhead).
After this email, my next reply will be my delayed response to your
original email about overhead.

And the "mommy, he hit me first" argument does not contribute to a
constructive conversation about a kernel patch submittal.


> Yes, you can manually unload a module, recompile the module, somehow
> get the module back into the VM (perhaps by using virtio-9p), and then
> reloading the module with the in-kernel test code, and the restart the
> test.  BUT: (a) even if it is faster, it requires a lot of manual
> steps, and would be very hard to automate, and (b) if the test code
> ever OOPS or triggers a lockdep warning, you will need to restart the
> VM, and so this involves all of the VM restart overhead, plus trying
> to automate determining when you actually do need to restart the VM
> versus unloading and reloading the module.   It's clunky.

I have mentioned before that the in-kernel kselftest tests can be
run in UML.  You simply select the configure options to build them
into the kernel instead of building them as modules.  Then build
a UML kernel and execute ("boot") the UML kernel.

This is exactly the same as for KUnit.  No more overhead.  No less
overhead.  No more steps.  No fewer steps.


> 
> Being able to do the equivalent of "make && make check" is a really
> big deal.  And "make check" needs to go fast.
> 
> You keep ignore this point, perhaps because you don't care about this
> issue?  Which is fine, and why we may just need to agree to disagree.

No, I agree that fast test execution is useful.


> 
> Cheers,
> 
> 						- Ted
> 
> P.S.  Running scripts is Turing-equivalent, so it's self-evident that
> *anything* you can do with other test frameworks you can somehow do in
> kselftests.  That argument doesn't impress me, and why I do consider
> it quite flippant.  (Heck, /bin/vi is Turing equivalent so we could
> use vi to as a kernel test framework.  Or we could use emacs.  Let's
> not.  :-)

I have not been talking about running scripts, other than to the extent
that _one of the ways_ the in-kernel kselftests can be invoked is via
a script that loads the test module.  The same exact in-kernel test can
instead be built into a UML kernel, as mentioned above.


> The question is whether it is the most best and most efficient way to
> do that testing.  And developer velocity is a really big part of my
> evaluation function when judging whether or a test framework is fit
> for that purpose.
> .
>
Frank Rowand May 10, 2019, 9:12 p.m. UTC | #44
On 5/9/19 2:42 PM, Theodore Ts'o wrote:
> On Thu, May 09, 2019 at 11:12:12AM -0700, Frank Rowand wrote:
>>
>>    "My understanding is that the intent of KUnit is to avoid booting a kernel on
>>    real hardware or in a virtual machine.  That seems to be a matter of semantics
>>    to me because isn't invoking a UML Linux just running the Linux kernel in
>>    a different form of virtualization?
>>
>>    So I do not understand why KUnit is an improvement over kselftest.
>>
>>    ...
>>
>>    What am I missing?"
> 
> One major difference: kselftest requires a userspace environment; it
> starts systemd, requires a root file system from which you can load
> modules, etc.  Kunit doesn't require a root file system; doesn't
> require that you start systemd; doesn't allow you to run arbitrary
> perl, python, bash, etc. scripts.  As such, it's much lighter weight
> than kselftest, and will have much less overhead before you can start
> running tests.  So it's not really the same kind of virtualization.
> 
> Does this help?
> 
> 					- Ted
> 

I'm back to reply to this subthread, after a delay, as promised.

That is the type of information that I was looking for, so
thank you for the reply.

However, the reply is incorrect.  Kselftest in-kernel tests (which
is the context here) can be configured as built in instead of as
a module, and built in a UML kernel.  The UML kernel can boot,
running the in-kernel tests before UML attempts to invoke the
init process.

No userspace environment needed.  So exactly the same overhead
as KUnit when invoked in that manner.
Frank Rowand May 10, 2019, 9:52 p.m. UTC | #45
On 5/9/19 3:20 PM, Logan Gunthorpe wrote:
> 
> 
> On 2019-05-09 3:42 p.m., Theodore Ts'o wrote:
>> On Thu, May 09, 2019 at 11:12:12AM -0700, Frank Rowand wrote:
>>>
>>>     "My understanding is that the intent of KUnit is to avoid booting a kernel on
>>>     real hardware or in a virtual machine.  That seems to be a matter of semantics
>>>     to me because isn't invoking a UML Linux just running the Linux kernel in
>>>     a different form of virtualization?
>>>
>>>     So I do not understand why KUnit is an improvement over kselftest.
>>>
>>>     ...
>>> 
>>> What am I missing?"
>> 
>> One major difference: kselftest requires a userspace environment;
>> it starts systemd, requires a root file system from which you can
>> load modules, etc.  Kunit doesn't require a root file system;
>> doesn't require that you start systemd; doesn't allow you to run
>> arbitrary perl, python, bash, etc. scripts.  As such, it's much
>> lighter weight than kselftest, and will have much less overhead
>> before you can start running tests.  So it's not really the same
>> kind of virtualization.

I'm back to reply to this subthread, after a delay, as promised.


> I largely agree with everything Ted has said in this thread, but I
> wonder if we are conflating two different ideas that is causing an
> impasse. From what I see, Kunit actually provides two different
> things:

> 1) An execution environment that can be run very quickly in userspace
> on tests in the kernel source. This speeds up the tests and gives a
> lot of benefit to developers using those tests because they can get
> feedback on their code changes a *lot* quicker.

kselftest in-kernel tests provide exactly the same when the tests are
configured as "built-in" code instead of as modules.


> 2) A framework to write unit tests that provides a lot of the same
> facilities as other common unit testing frameworks from userspace
> (ie. a runner that runs a list of tests and a bunch of helpers such
> as KUNIT_EXPECT_* to simplify test passes and failures).

> The first item from Kunit is novel and I see absolutely no overlap
> with anything kselftest does. It's also the valuable thing I'd like
> to see merged and grow.

The first item exists in kselftest.


> The second item, arguably, does have significant overlap with
> kselftest. Whether you are running short tests in a light weight UML
> environment or higher level tests in an heavier VM the two could be
> using the same framework for writing or defining in-kernel tests. It
> *may* also be valuable for some people to be able to run all the UML
> tests in the heavy VM environment along side other higher level
> tests.
> 
> Looking at the selftests tree in the repo, we already have similar
> items to what Kunit is adding as I described in point (2) above.
> kselftest_harness.h contains macros like EXPECT_* and ASSERT_* with
> very similar intentions to the new KUNIT_EXECPT_* and KUNIT_ASSERT_*
> macros.

I might be wrong here because I have not dug deeply enough into the
code!!!  Does this framework apply to the userspace tests, the
in-kernel tests, or both?  My "not having dug enough GUESS" is that
these are for the user space tests (although if so, they could be
extended for in-kernel use also).

So I think this one maybe does not have an overlap between KUnit
and kselftest.


> However, the number of users of this harness appears to be quite
> small. Most of the code in the selftests tree seems to be a random
> mismash of scripts and userspace code so it's not hard to see it as
> something completely different from the new Kunit:
> $ git grep --files-with-matches kselftest_harness.h *
> Documentation/dev-tools/kselftest.rst
> MAINTAINERS
> tools/testing/selftests/kselftest_harness.h
> tools/testing/selftests/net/tls.c
> tools/testing/selftests/rtc/rtctest.c
> tools/testing/selftests/seccomp/Makefile
> tools/testing/selftests/seccomp/seccomp_bpf.c
> tools/testing/selftests/uevent/Makefile
> tools/testing/selftests/uevent/uevent_filtering.c


> Thus, I can personally see a lot of value in integrating the kunit
> test framework with this kselftest harness. There's only a small
> number of users of the kselftest harness today, so one way or another
> it seems like getting this integrated early would be a good idea.
> Letting Kunit and Kselftests progress independently for a few years
> will only make this worse and may become something we end up
> regretting.

Yes, this I agree with.

-Frank

> 
> Logan
Frank Rowand May 10, 2019, 9:59 p.m. UTC | #46
On 5/10/19 3:23 AM, Brendan Higgins wrote:
>> On Fri, May 10, 2019 at 7:49 AM Knut Omang <knut.omang@oracle.com> wrote:
>>>
>>> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
>>>> On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
>>>>>
>>>>>
>>>>> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
>>>>>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
>>>>>>>
>>>>>>> The second item, arguably, does have significant overlap with kselftest.
>>>>>>> Whether you are running short tests in a light weight UML environment or
>>>>>>> higher level tests in an heavier VM the two could be using the same
>>>>>>> framework for writing or defining in-kernel tests. It *may* also be valuable
>>>>>>> for some people to be able to run all the UML tests in the heavy VM
>>>>>>> environment along side other higher level tests.
>>>>>>>
>>>>>>> Looking at the selftests tree in the repo, we already have similar items to
>>>>>>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
>>>>>>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
>>>>>>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
>>>>>>>
>>>>>>> However, the number of users of this harness appears to be quite small. Most
>>>>>>> of the code in the selftests tree seems to be a random mismash of scripts
>>>>>>> and userspace code so it's not hard to see it as something completely
>>>>>>> different from the new Kunit:
>>>>>>>
>>>>>>> $ git grep --files-with-matches kselftest_harness.h *
>>>>>>
>>>>>> To the extent that we can unify how tests are written, I agree that
>>>>>> this would be a good thing.  However, you should note that
>>>>>> kselftest_harness.h is currently assums that it will be included in
>>>>>> userspace programs.  This is most obviously seen if you look closely
>>>>>> at the functions defined in the header files which makes calls to
>>>>>> fork(), abort() and fprintf().
>>>>>
>>>>> Ah, yes. I obviously did not dig deep enough. Using kunit for
>>>>> in-kernel tests and kselftest_harness for userspace tests seems like
>>>>> a sensible line to draw to me. Trying to unify kernel and userspace
>>>>> here sounds like it could be difficult so it's probably not worth
>>>>> forcing the issue unless someone wants to do some really fancy work
>>>>> to get it done.
>>>>>
>>>>> Based on some of the other commenters, I was under the impression
>>>>> that kselftests had in-kernel tests but I'm not sure where or if they
>>>>> exist.
>>>>
>>>> YES, kselftest has in-kernel tests.  (Excuse the shouting...)
>>>>
>>>> Here is a likely list of them in the kernel source tree:
>>>>
>>>> $ grep module_init lib/test_*.c
>>>> lib/test_bitfield.c:module_init(test_bitfields)
>>>> lib/test_bitmap.c:module_init(test_bitmap_init);
>>>> lib/test_bpf.c:module_init(test_bpf_init);
>>>> lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
>>>> lib/test_firmware.c:module_init(test_firmware_init);
>>>> lib/test_hash.c:module_init(test_hash_init);  /* Does everything */
>>>> lib/test_hexdump.c:module_init(test_hexdump_init);
>>>> lib/test_ida.c:module_init(ida_checks);
>>>> lib/test_kasan.c:module_init(kmalloc_tests_init);
>>>> lib/test_list_sort.c:module_init(list_sort_test);
>>>> lib/test_memcat_p.c:module_init(test_memcat_p_init);
>>>> lib/test_module.c:static int __init test_module_init(void)
>>>> lib/test_module.c:module_init(test_module_init);
>>>> lib/test_objagg.c:module_init(test_objagg_init);
>>>> lib/test_overflow.c:static int __init test_module_init(void)
>>>> lib/test_overflow.c:module_init(test_module_init);
>>>> lib/test_parman.c:module_init(test_parman_init);
>>>> lib/test_printf.c:module_init(test_printf_init);
>>>> lib/test_rhashtable.c:module_init(test_rht_init);
>>>> lib/test_siphash.c:module_init(siphash_test_init);
>>>> lib/test_sort.c:module_init(test_sort_init);
>>>> lib/test_stackinit.c:module_init(test_stackinit_init);
>>>> lib/test_static_key_base.c:module_init(test_static_key_base_init);
>>>> lib/test_static_keys.c:module_init(test_static_key_init);
>>>> lib/test_string.c:module_init(string_selftest_init);
>>>> lib/test_ubsan.c:module_init(test_ubsan_init);
>>>> lib/test_user_copy.c:module_init(test_user_copy_init);
>>>> lib/test_uuid.c:module_init(test_uuid_init);
>>>> lib/test_vmalloc.c:module_init(vmalloc_test_init)
>>>> lib/test_xarray.c:module_init(xarray_checks);
>>>>
>>>>
>>>>> If they do exists, it seems like it would make sense to
>>>>> convert those to kunit and have Kunit tests run-able in a VM or
>>>>> baremetal instance.
>>>>
>>>> They already run in a VM.
>>>>
>>>> They already run on bare metal.
>>>>
>>>> They already run in UML.
>>>>
>>>> This is not to say that KUnit does not make sense.  But I'm still trying
>>>> to get a better description of the KUnit features (and there are
>>>> some).
>>>
>>> FYI, I have a master student who looks at converting some of these to KTF, such as for
>>> instance the XArray tests, which lended themselves quite good to a semi-automated
>>> conversion.
>>>
>>> The result is also a somewhat more compact code as well as the flexibility
>>> provided by the Googletest executor and the KTF frameworks, such as running selected
>>> tests, output formatting, debugging features etc.
>>
>> So is KTF already in upstream? Or is the plan to unify the KTF and
> 
> I am not certain about KTF's upstream plans, but I assume that Knut
> would have CC'ed me on the thread if he had started working on it.
> 
>> Kunit in-kernel test harnesses? Because there's tons of these
> 
> No, no plan. Knut and I talked about this a good while ago and it
> seemed that we had pretty fundamentally different approaches both in
> terms of implementation and end goal. Combining them seemed pretty
> infeasible, at least from a technical perspective. Anyway, I am sure
> Knut would like to give him perspective on the matter and I don't want
> to say too much without first giving him a chance to chime in on the
> matter.
> 
> Nevertheless, I hope you don't see resolving this as a condition for
> accepting this patchset. I had several rounds of RFC on KUnit, and no
> one had previously brought this up.

I seem to recall a request in reply to the KUnit RFC email threads to
work together.

However whether that impacts acceptance of this patch set is up to
the maintainer and how she wants to resolve the potential collision
of KUnit and KTF (if there is indeed any sort of collision).


>> in-kernel unit tests already, and every merge we get more (Frank's
>> list didn't even look into drivers or anywhere else, e.g. it's missing
>> the locking self tests I worked on in the past), and a more structured
>> approach would really be good.
> 
> Well, that's what I am trying to do. I hope you like it!
> 
> Cheers!
> .
>
Frank Rowand May 10, 2019, 10:13 p.m. UTC | #47
On 5/10/19 9:17 AM, Logan Gunthorpe wrote:
> 
> 
> On 2019-05-09 11:18 p.m., Frank Rowand wrote:
> 
>> YES, kselftest has in-kernel tests.  (Excuse the shouting...)
> 
> Cool. From my cursory look, in my opinion, these would be greatly
> improved by converting them to the framework Brendan is proposing for Kunit.
> 
>>> If they do exists, it seems like it would make sense to
>>> convert those to kunit and have Kunit tests run-able in a VM or
>>> baremetal instance.
>>
>> They already run in a VM.
>>
>> They already run on bare metal.
>>
>> They already run in UML.
> 
> Simply being able to run in UML is not the only thing here. Kunit
> provides the infrastructure to quickly build, run and report results for
> all the tests from userspace without needing to worry about the details
> of building and running a UML kernel, then parsing dmesg to figure out
> what tests were run or not.

Yes.  But that is not the only environment that KUnit must support to be
of use to me for devicetree unittests (this is not new, Brendan is quite
aware of my needs and is not ignoring them).


>> This is not to say that KUnit does not make sense.  But I'm still trying
>> to get a better description of the KUnit features (and there are
>> some).
> 
> So read the patches, or the documentation[1] or the LWN article[2]. It's
> pretty well described in a lot of places -- that's one of the big
> advantages of it. In contrast, few people seems to have any concept of
> what kselftests are or where they are or how to run them (I was
> surprised to find the in-kernel tests in the lib tree).
> 
> Logan
> 
> [1] https://google.github.io/kunit-docs/third_party/kernel/docs/
> [2] https://lwn.net/Articles/780985/

I have been following the RFC versions.  I have installed the RFC patches
and run them to the extent that they worked (devicetree unittests were
a guinea pig for test conversion in the RFC series, but the converted
tests did not work).  I read portions of the code while trying to
understand the unittests conversion.  I made review comments based on
the portion of the code that I did read.  I have read the documentation
(very nice btw, as I have said before, but should be expanded).

My comment is that the description to submit the patch series should
be fuller -- KUnit potentially has a lot of nice attributes, and I
still think I have only scratched the surface.  The average reviewer
may have even less in-depth knowledge than I do.  And as I have
commented before, I keep diving into areas that I had no previous
experience with (such as kselftest) to be able to properly judge this
patch series.
Frank Rowand May 10, 2019, 10:18 p.m. UTC | #48
On 5/10/19 1:54 PM, Brendan Higgins wrote:
> On Fri, May 10, 2019 at 5:13 AM Knut Omang <knut.omang@oracle.com> wrote:
>> On Fri, 2019-05-10 at 03:23 -0700, Brendan Higgins wrote:
>>>> On Fri, May 10, 2019 at 7:49 AM Knut Omang <knut.omang@oracle.com> wrote:
>>>>>
>>>>> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
>>>>>> On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
>>>>>>>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
>>>>>>>>>
>>>>>>>>> The second item, arguably, does have significant overlap with kselftest.
>>>>>>>>> Whether you are running short tests in a light weight UML environment or
>>>>>>>>> higher level tests in an heavier VM the two could be using the same
>>>>>>>>> framework for writing or defining in-kernel tests. It *may* also be valuable
>>>>>>>>> for some people to be able to run all the UML tests in the heavy VM
>>>>>>>>> environment along side other higher level tests.
>>>>>>>>>
>>>>>>>>> Looking at the selftests tree in the repo, we already have similar items to
>>>>>>>>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
>>>>>>>>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
>>>>>>>>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
>>>>>>>>>
>>>>>>>>> However, the number of users of this harness appears to be quite small. Most
>>>>>>>>> of the code in the selftests tree seems to be a random mismash of scripts
>>>>>>>>> and userspace code so it's not hard to see it as something completely
>>>>>>>>> different from the new Kunit:
>>>>>>>>>
>>>>>>>>> $ git grep --files-with-matches kselftest_harness.h *
>>>>>>>>
>>>>>>>> To the extent that we can unify how tests are written, I agree that
>>>>>>>> this would be a good thing.  However, you should note that
>>>>>>>> kselftest_harness.h is currently assums that it will be included in
>>>>>>>> userspace programs.  This is most obviously seen if you look closely
>>>>>>>> at the functions defined in the header files which makes calls to
>>>>>>>> fork(), abort() and fprintf().
>>>>>>>
>>>>>>> Ah, yes. I obviously did not dig deep enough. Using kunit for
>>>>>>> in-kernel tests and kselftest_harness for userspace tests seems like
>>>>>>> a sensible line to draw to me. Trying to unify kernel and userspace
>>>>>>> here sounds like it could be difficult so it's probably not worth
>>>>>>> forcing the issue unless someone wants to do some really fancy work
>>>>>>> to get it done.
>>>>>>>
>>>>>>> Based on some of the other commenters, I was under the impression
>>>>>>> that kselftests had in-kernel tests but I'm not sure where or if they
>>>>>>> exist.
>>>>>>
>>>>>> YES, kselftest has in-kernel tests.  (Excuse the shouting...)
>>>>>>
>>>>>> Here is a likely list of them in the kernel source tree:
>>>>>>
>>>>>> $ grep module_init lib/test_*.c
>>>>>> lib/test_bitfield.c:module_init(test_bitfields)
>>>>>> lib/test_bitmap.c:module_init(test_bitmap_init);
>>>>>> lib/test_bpf.c:module_init(test_bpf_init);
>>>>>> lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
>>>>>> lib/test_firmware.c:module_init(test_firmware_init);
>>>>>> lib/test_hash.c:module_init(test_hash_init);  /* Does everything */
>>>>>> lib/test_hexdump.c:module_init(test_hexdump_init);
>>>>>> lib/test_ida.c:module_init(ida_checks);
>>>>>> lib/test_kasan.c:module_init(kmalloc_tests_init);
>>>>>> lib/test_list_sort.c:module_init(list_sort_test);
>>>>>> lib/test_memcat_p.c:module_init(test_memcat_p_init);
>>>>>> lib/test_module.c:static int __init test_module_init(void)
>>>>>> lib/test_module.c:module_init(test_module_init);
>>>>>> lib/test_objagg.c:module_init(test_objagg_init);
>>>>>> lib/test_overflow.c:static int __init test_module_init(void)
>>>>>> lib/test_overflow.c:module_init(test_module_init);
>>>>>> lib/test_parman.c:module_init(test_parman_init);
>>>>>> lib/test_printf.c:module_init(test_printf_init);
>>>>>> lib/test_rhashtable.c:module_init(test_rht_init);
>>>>>> lib/test_siphash.c:module_init(siphash_test_init);
>>>>>> lib/test_sort.c:module_init(test_sort_init);
>>>>>> lib/test_stackinit.c:module_init(test_stackinit_init);
>>>>>> lib/test_static_key_base.c:module_init(test_static_key_base_init);
>>>>>> lib/test_static_keys.c:module_init(test_static_key_init);
>>>>>> lib/test_string.c:module_init(string_selftest_init);
>>>>>> lib/test_ubsan.c:module_init(test_ubsan_init);
>>>>>> lib/test_user_copy.c:module_init(test_user_copy_init);
>>>>>> lib/test_uuid.c:module_init(test_uuid_init);
>>>>>> lib/test_vmalloc.c:module_init(vmalloc_test_init)
>>>>>> lib/test_xarray.c:module_init(xarray_checks);
>>>>>>
>>>>>>
>>>>>>> If they do exists, it seems like it would make sense to
>>>>>>> convert those to kunit and have Kunit tests run-able in a VM or
>>>>>>> baremetal instance.
>>>>>>
>>>>>> They already run in a VM.
>>>>>>
>>>>>> They already run on bare metal.
>>>>>>
>>>>>> They already run in UML.
>>>>>>
>>>>>> This is not to say that KUnit does not make sense.  But I'm still trying
>>>>>> to get a better description of the KUnit features (and there are
>>>>>> some).
>>>>>
>>>>> FYI, I have a master student who looks at converting some of these to KTF, such as
>>> for
>>>>> instance the XArray tests, which lended themselves quite good to a semi-automated
>>>>> conversion.
>>>>>
>>>>> The result is also a somewhat more compact code as well as the flexibility
>>>>> provided by the Googletest executor and the KTF frameworks, such as running selected
>>>>> tests, output formatting, debugging features etc.
>>>>
>>>> So is KTF already in upstream? Or is the plan to unify the KTF and
>>>
>>> I am not certain about KTF's upstream plans, but I assume that Knut
>>> would have CC'ed me on the thread if he had started working on it.
>>
>> You are on the Github watcher list for KTF?
> 
> Yep! I have been since LPC in 2017.
> 
>> Quite a few of the commits there are preparatory for a forthcoming kernel patch set.
>> I'll of course CC: you on the patch set when we send it to the list.
> 
> Awesome! I appreciate it.
> 
>>
>>>> Kunit in-kernel test harnesses? Because there's tons of these
>>>
>>> No, no plan. Knut and I talked about this a good while ago and it
>>> seemed that we had pretty fundamentally different approaches both in
>>> terms of implementation and end goal. Combining them seemed pretty
>>> infeasible, at least from a technical perspective. Anyway, I am sure
>>> Knut would like to give him perspective on the matter and I don't want
>>> to say too much without first giving him a chance to chime in on the
>>> matter.
>>
>> I need more time to study KUnit details to say, but from a 10k feet perspective:
>> I think at least there's a potential for some API unification, in using the same macro
>> names. How about removing the KUNIT_ prefix to the test macros ;-) ?
> 
> Heh, heh. That's actually the way I had it in the earliest versions of
> KUnit! But that was pretty much the very first thing everyone
> complained about. I think I went from no prefix (like you are
> suggesting) to TEST_* before the first version of the RFC at the
> request of several people I was kicking the idea around with, and then
> I think I was asked to go from TEST_* to KUNIT_* in the very first
> revision of the RFC.
> 
> In short, I am sympathetic to your suggestion, but I think that is
> non-negotiable at this point. The community has a clear policy in
> place on the matter, and at this point I would really prefer not to
> change all the symbol names again.

This would not be the first time that a patch submitter has been
told "do B instead of A" for version 1, then told "do C instead of
B" for version 2, then told "do A instead of C" for the final version.

It sucks, but it happens.

As an aside, can you point to where the "clear policy in place" is
documented, and what the policy is?

-Frank


>> That would make the names shorter, saving typing when writing tests, and storage ;-)
>> and also make the names more similar to KTF's, and those of user land unit test
> 
> You mean the Googletest/Googlemock expectations/assertions?
> 
> It's a great library (with not so great a name), but unfortunately it
> is written in C++, which I think pretty much counts it out here.
> 
>> frameworks? Also it will make it possible to have functions compiling both with KTF and
>> KUnit, facilitating moving code between the two.
> 
> I think that would be cool, but again, I don't think this will be
> possible with Googletest/Googlemock.
> 
>>
>> Also the string stream facilities of KUnit looks interesting to share.
> 
> I am glad you think so!
> 
> If your biggest concern on my side is test macro names (which I think
> is a no-go as I mentioned above), I think we should be in pretty good
> shape once you are ready to move forward. Besides, I have a lot more
> KUnit patches coming after this: landing this patchset is just the
> beginning. So how about we keep moving forward on this patchset?
>
Knut Omang May 11, 2019, 6:17 a.m. UTC | #49
On Fri, 2019-05-10 at 15:18 -0700, Frank Rowand wrote:
> On 5/10/19 1:54 PM, Brendan Higgins wrote:
> > On Fri, May 10, 2019 at 5:13 AM Knut Omang <knut.omang@oracle.com> wrote:
> >> On Fri, 2019-05-10 at 03:23 -0700, Brendan Higgins wrote:
> >>>> On Fri, May 10, 2019 at 7:49 AM Knut Omang <knut.omang@oracle.com> wrote:
> >>>>>
> >>>>> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> >>>>>> On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> >>>>>>>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> >>>>>>>>>
> >>>>>>>>> The second item, arguably, does have significant overlap with kselftest.
> >>>>>>>>> Whether you are running short tests in a light weight UML environment or
> >>>>>>>>> higher level tests in an heavier VM the two could be using the same
> >>>>>>>>> framework for writing or defining in-kernel tests. It *may* also be valuable
> >>>>>>>>> for some people to be able to run all the UML tests in the heavy VM
> >>>>>>>>> environment along side other higher level tests.
> >>>>>>>>>
> >>>>>>>>> Looking at the selftests tree in the repo, we already have similar items to
> >>>>>>>>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> >>>>>>>>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> >>>>>>>>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> >>>>>>>>>
> >>>>>>>>> However, the number of users of this harness appears to be quite small. Most
> >>>>>>>>> of the code in the selftests tree seems to be a random mismash of scripts
> >>>>>>>>> and userspace code so it's not hard to see it as something completely
> >>>>>>>>> different from the new Kunit:
> >>>>>>>>>
> >>>>>>>>> $ git grep --files-with-matches kselftest_harness.h *
> >>>>>>>>
> >>>>>>>> To the extent that we can unify how tests are written, I agree that
> >>>>>>>> this would be a good thing.  However, you should note that
> >>>>>>>> kselftest_harness.h is currently assums that it will be included in
> >>>>>>>> userspace programs.  This is most obviously seen if you look closely
> >>>>>>>> at the functions defined in the header files which makes calls to
> >>>>>>>> fork(), abort() and fprintf().
> >>>>>>>
> >>>>>>> Ah, yes. I obviously did not dig deep enough. Using kunit for
> >>>>>>> in-kernel tests and kselftest_harness for userspace tests seems like
> >>>>>>> a sensible line to draw to me. Trying to unify kernel and userspace
> >>>>>>> here sounds like it could be difficult so it's probably not worth
> >>>>>>> forcing the issue unless someone wants to do some really fancy work
> >>>>>>> to get it done.
> >>>>>>>
> >>>>>>> Based on some of the other commenters, I was under the impression
> >>>>>>> that kselftests had in-kernel tests but I'm not sure where or if they
> >>>>>>> exist.
> >>>>>>
> >>>>>> YES, kselftest has in-kernel tests.  (Excuse the shouting...)
> >>>>>>
> >>>>>> Here is a likely list of them in the kernel source tree:
> >>>>>>
> >>>>>> $ grep module_init lib/test_*.c
> >>>>>> lib/test_bitfield.c:module_init(test_bitfields)
> >>>>>> lib/test_bitmap.c:module_init(test_bitmap_init);
> >>>>>> lib/test_bpf.c:module_init(test_bpf_init);
> >>>>>> lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> >>>>>> lib/test_firmware.c:module_init(test_firmware_init);
> >>>>>> lib/test_hash.c:module_init(test_hash_init);  /* Does everything */
> >>>>>> lib/test_hexdump.c:module_init(test_hexdump_init);
> >>>>>> lib/test_ida.c:module_init(ida_checks);
> >>>>>> lib/test_kasan.c:module_init(kmalloc_tests_init);
> >>>>>> lib/test_list_sort.c:module_init(list_sort_test);
> >>>>>> lib/test_memcat_p.c:module_init(test_memcat_p_init);
> >>>>>> lib/test_module.c:static int __init test_module_init(void)
> >>>>>> lib/test_module.c:module_init(test_module_init);
> >>>>>> lib/test_objagg.c:module_init(test_objagg_init);
> >>>>>> lib/test_overflow.c:static int __init test_module_init(void)
> >>>>>> lib/test_overflow.c:module_init(test_module_init);
> >>>>>> lib/test_parman.c:module_init(test_parman_init);
> >>>>>> lib/test_printf.c:module_init(test_printf_init);
> >>>>>> lib/test_rhashtable.c:module_init(test_rht_init);
> >>>>>> lib/test_siphash.c:module_init(siphash_test_init);
> >>>>>> lib/test_sort.c:module_init(test_sort_init);
> >>>>>> lib/test_stackinit.c:module_init(test_stackinit_init);
> >>>>>> lib/test_static_key_base.c:module_init(test_static_key_base_init);
> >>>>>> lib/test_static_keys.c:module_init(test_static_key_init);
> >>>>>> lib/test_string.c:module_init(string_selftest_init);
> >>>>>> lib/test_ubsan.c:module_init(test_ubsan_init);
> >>>>>> lib/test_user_copy.c:module_init(test_user_copy_init);
> >>>>>> lib/test_uuid.c:module_init(test_uuid_init);
> >>>>>> lib/test_vmalloc.c:module_init(vmalloc_test_init)
> >>>>>> lib/test_xarray.c:module_init(xarray_checks);
> >>>>>>
> >>>>>>
> >>>>>>> If they do exists, it seems like it would make sense to
> >>>>>>> convert those to kunit and have Kunit tests run-able in a VM or
> >>>>>>> baremetal instance.
> >>>>>>
> >>>>>> They already run in a VM.
> >>>>>>
> >>>>>> They already run on bare metal.
> >>>>>>
> >>>>>> They already run in UML.
> >>>>>>
> >>>>>> This is not to say that KUnit does not make sense.  But I'm still trying
> >>>>>> to get a better description of the KUnit features (and there are
> >>>>>> some).
> >>>>>
> >>>>> FYI, I have a master student who looks at converting some of these to KTF, such as
> >>> for
> >>>>> instance the XArray tests, which lended themselves quite good to a semi-automated
> >>>>> conversion.
> >>>>>
> >>>>> The result is also a somewhat more compact code as well as the flexibility
> >>>>> provided by the Googletest executor and the KTF frameworks, such as running
> selected
> >>>>> tests, output formatting, debugging features etc.
> >>>>
> >>>> So is KTF already in upstream? Or is the plan to unify the KTF and
> >>>
> >>> I am not certain about KTF's upstream plans, but I assume that Knut
> >>> would have CC'ed me on the thread if he had started working on it.
> >>
> >> You are on the Github watcher list for KTF?
> > 
> > Yep! I have been since LPC in 2017.
> > 
> >> Quite a few of the commits there are preparatory for a forthcoming kernel patch set.
> >> I'll of course CC: you on the patch set when we send it to the list.
> > 
> > Awesome! I appreciate it.
> > 
> >>
> >>>> Kunit in-kernel test harnesses? Because there's tons of these
> >>>
> >>> No, no plan. Knut and I talked about this a good while ago and it
> >>> seemed that we had pretty fundamentally different approaches both in
> >>> terms of implementation and end goal. Combining them seemed pretty
> >>> infeasible, at least from a technical perspective. Anyway, I am sure
> >>> Knut would like to give him perspective on the matter and I don't want
> >>> to say too much without first giving him a chance to chime in on the
> >>> matter.
> >>
> >> I need more time to study KUnit details to say, but from a 10k feet perspective:
> >> I think at least there's a potential for some API unification, in using the same
> macro
> >> names. How about removing the KUNIT_ prefix to the test macros ;-) ?
> > 
> > Heh, heh. That's actually the way I had it in the earliest versions of
> > KUnit! But that was pretty much the very first thing everyone
> > complained about. I think I went from no prefix (like you are
> > suggesting) to TEST_* before the first version of the RFC at the
> > request of several people I was kicking the idea around with, and then
> > I think I was asked to go from TEST_* to KUNIT_* in the very first
> > revision of the RFC.
> > 
> > In short, I am sympathetic to your suggestion, but I think that is
> > non-negotiable at this point. The community has a clear policy in
> > place on the matter, and at this point I would really prefer not to
> > change all the symbol names again.
> 
> This would not be the first time that a patch submitter has been
> told "do B instead of A" for version 1, then told "do C instead of
> B" for version 2, then told "do A instead of C" for the final version.
> 
> It sucks, but it happens.

Sorry, I must have overlooked the B instead of A instance - otherwise 
I would have objected against it - in addition to the recognizability 
and portability issue I think it is important that these primitives are
not unnecessary long since they will be written a *lot* of times if things go 
our way.

And the reason for using unique names elsewhere is to be able to co-exist with
other components with the same needs. In the case of tests, I believe they are 
in a different category, they are not supposed to be part of production kernels,
and we want them to be used all over, that should warrant having the ASSERT_ and EXPECT_
prefixes "reserved" for the purpose, so I would urge some pragmatism here!

> As an aside, can you point to where the "clear policy in place" is
> documented, and what the policy is?
> 
> -Frank
> 
> 
> >> That would make the names shorter, saving typing when writing tests, and storage ;-)
> >> and also make the names more similar to KTF's, and those of user land unit test
> > 
> > You mean the Googletest/Googlemock expectations/assertions?
> > 
> > It's a great library (with not so great a name), but unfortunately it
> > is written in C++, which I think pretty much counts it out here.

Using a similar syntax is a good thing, since it makes it easier for people
who write tests in user land frameworks to contribute to the kernel tests.
And if, lets say, someone later comes up with a way to run the KUnit tests in "real" 
user land within Googletest ;-) then less editing would be needed..

> >> frameworks? Also it will make it possible to have functions compiling both with KTF
> and
> >> KUnit, facilitating moving code between the two.
> > 
> > I think that would be cool, but again, I don't think this will be
> > possible with Googletest/Googlemock.

I was thinking of moves between KUnit tests and KTF tests, kernel code only.
Some test functions may easily be usable both in a "pure" mocking environment
and in an integrated setting with hardware/driver/userspace dependencies.

> >>
> >> Also the string stream facilities of KUnit looks interesting to share.
> > 
> > I am glad you think so!
> > 
> > If your biggest concern on my side is test macro names (which I think
> > is a no-go as I mentioned above), I think we should be in pretty good
> > shape once you are ready to move forward. Besides, I have a lot more
> > KUnit patches coming after this: landing this patchset is just the
> > beginning. So how about we keep moving forward on this patchset?

I think the importance of a well thought through 
test API definition is not to be underestimated 
- the sooner we can unify and establish a common base, the better, 
I think.

My other concern is, as mentioned earlier, whether UML is really that 
different from just running in a VM wrt debugging support, and that 
there exists a better approach based on automatically generating 
an environment where the test code and the source under test 
can be compiled in a normal user land program. But it seems 
there are enough clients of the UML approach to justify it 
as a lightweight entry. We want to make it easy and inexcusable 
not to test the code, having a low bar of entry is certainly good. 
Other than that, I really would need to spend some more time with 
the details on KUnit to verify my so far fairly 
shallow observations!

Thanks,
Knut
Knut Omang May 11, 2019, 6:43 a.m. UTC | #50
On Fri, 2019-05-10 at 14:59 -0700, Frank Rowand wrote:
> On 5/10/19 3:23 AM, Brendan Higgins wrote:
> >> On Fri, May 10, 2019 at 7:49 AM Knut Omang <knut.omang@oracle.com> wrote:
> >>>
> >>> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> >>>> On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> >>>>>
> >>>>>
> >>>>> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> >>>>>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> >>>>>>>
> >>>>>>> The second item, arguably, does have significant overlap with kselftest.
> >>>>>>> Whether you are running short tests in a light weight UML environment or
> >>>>>>> higher level tests in an heavier VM the two could be using the same
> >>>>>>> framework for writing or defining in-kernel tests. It *may* also be valuable
> >>>>>>> for some people to be able to run all the UML tests in the heavy VM
> >>>>>>> environment along side other higher level tests.
> >>>>>>>
> >>>>>>> Looking at the selftests tree in the repo, we already have similar items to
> >>>>>>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> >>>>>>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> >>>>>>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> >>>>>>>
> >>>>>>> However, the number of users of this harness appears to be quite small. Most
> >>>>>>> of the code in the selftests tree seems to be a random mismash of scripts
> >>>>>>> and userspace code so it's not hard to see it as something completely
> >>>>>>> different from the new Kunit:
> >>>>>>>
> >>>>>>> $ git grep --files-with-matches kselftest_harness.h *
> >>>>>>
> >>>>>> To the extent that we can unify how tests are written, I agree that
> >>>>>> this would be a good thing.  However, you should note that
> >>>>>> kselftest_harness.h is currently assums that it will be included in
> >>>>>> userspace programs.  This is most obviously seen if you look closely
> >>>>>> at the functions defined in the header files which makes calls to
> >>>>>> fork(), abort() and fprintf().
> >>>>>
> >>>>> Ah, yes. I obviously did not dig deep enough. Using kunit for
> >>>>> in-kernel tests and kselftest_harness for userspace tests seems like
> >>>>> a sensible line to draw to me. Trying to unify kernel and userspace
> >>>>> here sounds like it could be difficult so it's probably not worth
> >>>>> forcing the issue unless someone wants to do some really fancy work
> >>>>> to get it done.
> >>>>>
> >>>>> Based on some of the other commenters, I was under the impression
> >>>>> that kselftests had in-kernel tests but I'm not sure where or if they
> >>>>> exist.
> >>>>
> >>>> YES, kselftest has in-kernel tests.  (Excuse the shouting...)
> >>>>
> >>>> Here is a likely list of them in the kernel source tree:
> >>>>
> >>>> $ grep module_init lib/test_*.c
> >>>> lib/test_bitfield.c:module_init(test_bitfields)
> >>>> lib/test_bitmap.c:module_init(test_bitmap_init);
> >>>> lib/test_bpf.c:module_init(test_bpf_init);
> >>>> lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> >>>> lib/test_firmware.c:module_init(test_firmware_init);
> >>>> lib/test_hash.c:module_init(test_hash_init);  /* Does everything */
> >>>> lib/test_hexdump.c:module_init(test_hexdump_init);
> >>>> lib/test_ida.c:module_init(ida_checks);
> >>>> lib/test_kasan.c:module_init(kmalloc_tests_init);
> >>>> lib/test_list_sort.c:module_init(list_sort_test);
> >>>> lib/test_memcat_p.c:module_init(test_memcat_p_init);
> >>>> lib/test_module.c:static int __init test_module_init(void)
> >>>> lib/test_module.c:module_init(test_module_init);
> >>>> lib/test_objagg.c:module_init(test_objagg_init);
> >>>> lib/test_overflow.c:static int __init test_module_init(void)
> >>>> lib/test_overflow.c:module_init(test_module_init);
> >>>> lib/test_parman.c:module_init(test_parman_init);
> >>>> lib/test_printf.c:module_init(test_printf_init);
> >>>> lib/test_rhashtable.c:module_init(test_rht_init);
> >>>> lib/test_siphash.c:module_init(siphash_test_init);
> >>>> lib/test_sort.c:module_init(test_sort_init);
> >>>> lib/test_stackinit.c:module_init(test_stackinit_init);
> >>>> lib/test_static_key_base.c:module_init(test_static_key_base_init);
> >>>> lib/test_static_keys.c:module_init(test_static_key_init);
> >>>> lib/test_string.c:module_init(string_selftest_init);
> >>>> lib/test_ubsan.c:module_init(test_ubsan_init);
> >>>> lib/test_user_copy.c:module_init(test_user_copy_init);
> >>>> lib/test_uuid.c:module_init(test_uuid_init);
> >>>> lib/test_vmalloc.c:module_init(vmalloc_test_init)
> >>>> lib/test_xarray.c:module_init(xarray_checks);
> >>>>
> >>>>
> >>>>> If they do exists, it seems like it would make sense to
> >>>>> convert those to kunit and have Kunit tests run-able in a VM or
> >>>>> baremetal instance.
> >>>>
> >>>> They already run in a VM.
> >>>>
> >>>> They already run on bare metal.
> >>>>
> >>>> They already run in UML.
> >>>>
> >>>> This is not to say that KUnit does not make sense.  But I'm still trying
> >>>> to get a better description of the KUnit features (and there are
> >>>> some).
> >>>
> >>> FYI, I have a master student who looks at converting some of these to KTF, such as
> for
> >>> instance the XArray tests, which lended themselves quite good to a semi-automated
> >>> conversion.
> >>>
> >>> The result is also a somewhat more compact code as well as the flexibility
> >>> provided by the Googletest executor and the KTF frameworks, such as running selected
> >>> tests, output formatting, debugging features etc.
> >>
> >> So is KTF already in upstream? Or is the plan to unify the KTF and
> > 
> > I am not certain about KTF's upstream plans, but I assume that Knut
> > would have CC'ed me on the thread if he had started working on it.
> > 
> >> Kunit in-kernel test harnesses? Because there's tons of these
> > 
> > No, no plan. Knut and I talked about this a good while ago and it
> > seemed that we had pretty fundamentally different approaches both in
> > terms of implementation and end goal. Combining them seemed pretty
> > infeasible, at least from a technical perspective. Anyway, I am sure
> > Knut would like to give him perspective on the matter and I don't want
> > to say too much without first giving him a chance to chime in on the
> > matter.
> > 
> > Nevertheless, I hope you don't see resolving this as a condition for
> > accepting this patchset. I had several rounds of RFC on KUnit, and no
> > one had previously brought this up.
> 
> I seem to recall a request in reply to the KUnit RFC email threads to
> work together.

You recall right.
I wanted us to work together to refine a common approach.
I still think that's possible.

> However whether that impacts acceptance of this patch set is up to
> the maintainer and how she wants to resolve the potential collision
> of KUnit and KTF (if there is indeed any sort of collision).

I believe there's overlap and potential for unification and code sharing.
My concern is to make sure that that can happen without disrupting too 
many test users. I'd really like to get some more time to explore that.

It strikes me that the main difference in the two approaches 
lies in the way reporting is done by default. Since KUnit handles all the
reporting itself, while KTF relies on Googletest for that, a lot more code 
in KUnit revolves around that part, while with KTF we have focused more on 
features to enable writing powerful and effective tests.

The reporting part can possibly be made configurable: Reporting with printk or reporting 
via netlink to user space. In fact, as a KTF debugging option KTF already 
supports printk reporting which can be enabled via sysfs.

If macros can have the same syntax, then there's 
likely features in KTF that KUnit users would benefit from too. 
But this of course have to be tried out.

Knut

> >> in-kernel unit tests already, and every merge we get more (Frank's
> >> list didn't even look into drivers or anywhere else, e.g. it's missing
> >> the locking self tests I worked on in the past), and a more structured
> >> approach would really be good.
> > 
> > Well, that's what I am trying to do. I hope you like it!
> > 
> > Cheers!
> > .
> > 
>
Theodore Ts'o May 11, 2019, 5:33 p.m. UTC | #51
On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> However, the reply is incorrect.  Kselftest in-kernel tests (which
> is the context here) can be configured as built in instead of as
> a module, and built in a UML kernel.  The UML kernel can boot,
> running the in-kernel tests before UML attempts to invoke the
> init process.

Um, Citation needed?

I don't see any evidence for this in the kselftest documentation, nor
do I see any evidence of this in the kselftest Makefiles.

There exists test modules in the kernel that run before the init
scripts run --- but that's not strictly speaking part of kselftests,
and do not have any kind of infrastructure.  As noted, the
kselftests_harness header file fundamentally assumes that you are
running test code in userspace.

				- Ted
Daniel Vetter May 13, 2019, 2:44 p.m. UTC | #52
On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > However, the reply is incorrect.  Kselftest in-kernel tests (which
> > is the context here) can be configured as built in instead of as
> > a module, and built in a UML kernel.  The UML kernel can boot,
> > running the in-kernel tests before UML attempts to invoke the
> > init process.
> 
> Um, Citation needed?
> 
> I don't see any evidence for this in the kselftest documentation, nor
> do I see any evidence of this in the kselftest Makefiles.
> 
> There exists test modules in the kernel that run before the init
> scripts run --- but that's not strictly speaking part of kselftests,
> and do not have any kind of infrastructure.  As noted, the
> kselftests_harness header file fundamentally assumes that you are
> running test code in userspace.

Yeah I really like the "no userspace required at all" design of kunit,
while still collecting results in a well-defined way (unless the current
self-test that just run when you load the module, with maybe some
kselftest ad-hoc wrapper around to collect the results).

What I want to do long-term is to run these kernel unit tests as part of
the build-testing, most likely in gitlab (sooner or later, for drm.git
only ofc). So that people get their pull requests (and patch series, we
have some ideas to tie this into patchwork) automatically tested for this
super basic stuff.
-Daniel
Brendan Higgins May 14, 2019, 6:04 a.m. UTC | #53
On Mon, May 13, 2019 at 04:44:51PM +0200, Daniel Vetter wrote:
> On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> > On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > > However, the reply is incorrect.  Kselftest in-kernel tests (which
> > > is the context here) can be configured as built in instead of as
> > > a module, and built in a UML kernel.  The UML kernel can boot,
> > > running the in-kernel tests before UML attempts to invoke the
> > > init process.
> > 
> > Um, Citation needed?
> > 
> > I don't see any evidence for this in the kselftest documentation, nor
> > do I see any evidence of this in the kselftest Makefiles.
> > 
> > There exists test modules in the kernel that run before the init
> > scripts run --- but that's not strictly speaking part of kselftests,
> > and do not have any kind of infrastructure.  As noted, the
> > kselftests_harness header file fundamentally assumes that you are
> > running test code in userspace.
> 
> Yeah I really like the "no userspace required at all" design of kunit,
> while still collecting results in a well-defined way (unless the current
> self-test that just run when you load the module, with maybe some
> kselftest ad-hoc wrapper around to collect the results).
> 
> What I want to do long-term is to run these kernel unit tests as part of
> the build-testing, most likely in gitlab (sooner or later, for drm.git

Totally! This is part of the reason I have been insisting on a minimum
of UML compatibility for all unit tests. If you can suffiently constrain
the environment that is required for tests to run in, it makes it much
easier not only for a human to run your tests, but it also makes it a
lot easier for an automated service to be able to run your tests.

I actually have a prototype presubmit already working on my
"stable/non-upstream" branch. You can checkout what presubmit results
look like here[1][2].

> only ofc). So that people get their pull requests (and patch series, we
> have some ideas to tie this into patchwork) automatically tested for this

Might that be Snowpatch[3]? I talked to Russell, the creator of Snowpatch,
and he seemed pretty open to collaboration.

Before I heard about Snowpatch, I had an intern write a translation
layer that made Prow (the presubmit service that I used in the prototype
above) work with LKML[4].

I am not married to either approach, but I think between the two of
them, most of the initial legwork has been done to make presubmit on
LKML a reality.

> super basic stuff.

I am really excited to hear back on what you think!

Cheers!

[1] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-7bfa40efb132e15c8388755c273837559911425c
[2] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-a6784496eafff442ac98fb068bf1a0f36ee73509
[3] https://developer.ibm.com/open/projects/snowpatch/
[4] https://kunit.googlesource.com/prow-lkml/
Brendan Higgins May 14, 2019, 6:39 a.m. UTC | #54
On Sat, May 11, 2019 at 08:17:47AM +0200, Knut Omang wrote:
> On Fri, 2019-05-10 at 15:18 -0700, Frank Rowand wrote:
> > On 5/10/19 1:54 PM, Brendan Higgins wrote:
> > > On Fri, May 10, 2019 at 5:13 AM Knut Omang <knut.omang@oracle.com> wrote:
> > >> On Fri, 2019-05-10 at 03:23 -0700, Brendan Higgins wrote:
> > >>>> On Fri, May 10, 2019 at 7:49 AM Knut Omang <knut.omang@oracle.com> wrote:
> > >>>>>
> > >>>>> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> > >>>>>> On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > >>>>>>>
> > >>>>>>>
> > >>>>>>> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> > >>>>>>>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> > >>>>>>>>>
> > >>>>>>>>> The second item, arguably, does have significant overlap with kselftest.
> > >>>>>>>>> Whether you are running short tests in a light weight UML environment or
> > >>>>>>>>> higher level tests in an heavier VM the two could be using the same
> > >>>>>>>>> framework for writing or defining in-kernel tests. It *may* also be valuable
> > >>>>>>>>> for some people to be able to run all the UML tests in the heavy VM
> > >>>>>>>>> environment along side other higher level tests.
> > >>>>>>>>>
> > >>>>>>>>> Looking at the selftests tree in the repo, we already have similar items to
> > >>>>>>>>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> > >>>>>>>>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> > >>>>>>>>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> > >>>>>>>>>
> > >>>>>>>>> However, the number of users of this harness appears to be quite small. Most
> > >>>>>>>>> of the code in the selftests tree seems to be a random mismash of scripts
> > >>>>>>>>> and userspace code so it's not hard to see it as something completely
> > >>>>>>>>> different from the new Kunit:
> > >>>>>>>>>
> > >>>>>>>>> $ git grep --files-with-matches kselftest_harness.h *
> > >>>>>>>>
> > >>>>>>>> To the extent that we can unify how tests are written, I agree that
> > >>>>>>>> this would be a good thing.  However, you should note that
> > >>>>>>>> kselftest_harness.h is currently assums that it will be included in
> > >>>>>>>> userspace programs.  This is most obviously seen if you look closely
> > >>>>>>>> at the functions defined in the header files which makes calls to
> > >>>>>>>> fork(), abort() and fprintf().
> > >>>>>>>
> > >>>>>>> Ah, yes. I obviously did not dig deep enough. Using kunit for
> > >>>>>>> in-kernel tests and kselftest_harness for userspace tests seems like
> > >>>>>>> a sensible line to draw to me. Trying to unify kernel and userspace
> > >>>>>>> here sounds like it could be difficult so it's probably not worth
> > >>>>>>> forcing the issue unless someone wants to do some really fancy work
> > >>>>>>> to get it done.
> > >>>>>>>
> > >>>>>>> Based on some of the other commenters, I was under the impression
> > >>>>>>> that kselftests had in-kernel tests but I'm not sure where or if they
> > >>>>>>> exist.
> > >>>>>>
> > >>>>>> YES, kselftest has in-kernel tests.  (Excuse the shouting...)
> > >>>>>>
> > >>>>>> Here is a likely list of them in the kernel source tree:
> > >>>>>>
> > >>>>>> $ grep module_init lib/test_*.c
> > >>>>>> lib/test_bitfield.c:module_init(test_bitfields)
> > >>>>>> lib/test_bitmap.c:module_init(test_bitmap_init);
> > >>>>>> lib/test_bpf.c:module_init(test_bpf_init);
> > >>>>>> lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> > >>>>>> lib/test_firmware.c:module_init(test_firmware_init);
> > >>>>>> lib/test_hash.c:module_init(test_hash_init);  /* Does everything */
> > >>>>>> lib/test_hexdump.c:module_init(test_hexdump_init);
> > >>>>>> lib/test_ida.c:module_init(ida_checks);
> > >>>>>> lib/test_kasan.c:module_init(kmalloc_tests_init);
> > >>>>>> lib/test_list_sort.c:module_init(list_sort_test);
> > >>>>>> lib/test_memcat_p.c:module_init(test_memcat_p_init);
> > >>>>>> lib/test_module.c:static int __init test_module_init(void)
> > >>>>>> lib/test_module.c:module_init(test_module_init);
> > >>>>>> lib/test_objagg.c:module_init(test_objagg_init);
> > >>>>>> lib/test_overflow.c:static int __init test_module_init(void)
> > >>>>>> lib/test_overflow.c:module_init(test_module_init);
> > >>>>>> lib/test_parman.c:module_init(test_parman_init);
> > >>>>>> lib/test_printf.c:module_init(test_printf_init);
> > >>>>>> lib/test_rhashtable.c:module_init(test_rht_init);
> > >>>>>> lib/test_siphash.c:module_init(siphash_test_init);
> > >>>>>> lib/test_sort.c:module_init(test_sort_init);
> > >>>>>> lib/test_stackinit.c:module_init(test_stackinit_init);
> > >>>>>> lib/test_static_key_base.c:module_init(test_static_key_base_init);
> > >>>>>> lib/test_static_keys.c:module_init(test_static_key_init);
> > >>>>>> lib/test_string.c:module_init(string_selftest_init);
> > >>>>>> lib/test_ubsan.c:module_init(test_ubsan_init);
> > >>>>>> lib/test_user_copy.c:module_init(test_user_copy_init);
> > >>>>>> lib/test_uuid.c:module_init(test_uuid_init);
> > >>>>>> lib/test_vmalloc.c:module_init(vmalloc_test_init)
> > >>>>>> lib/test_xarray.c:module_init(xarray_checks);
> > >>>>>>
> > >>>>>>
> > >>>>>>> If they do exists, it seems like it would make sense to
> > >>>>>>> convert those to kunit and have Kunit tests run-able in a VM or
> > >>>>>>> baremetal instance.
> > >>>>>>
> > >>>>>> They already run in a VM.
> > >>>>>>
> > >>>>>> They already run on bare metal.
> > >>>>>>
> > >>>>>> They already run in UML.
> > >>>>>>
> > >>>>>> This is not to say that KUnit does not make sense.  But I'm still trying
> > >>>>>> to get a better description of the KUnit features (and there are
> > >>>>>> some).
> > >>>>>
> > >>>>> FYI, I have a master student who looks at converting some of these to KTF, such as
> > >>> for
> > >>>>> instance the XArray tests, which lended themselves quite good to a semi-automated
> > >>>>> conversion.
> > >>>>>
> > >>>>> The result is also a somewhat more compact code as well as the flexibility
> > >>>>> provided by the Googletest executor and the KTF frameworks, such as running
> > selected
> > >>>>> tests, output formatting, debugging features etc.
> > >>>>
> > >>>> So is KTF already in upstream? Or is the plan to unify the KTF and
> > >>>
> > >>> I am not certain about KTF's upstream plans, but I assume that Knut
> > >>> would have CC'ed me on the thread if he had started working on it.
> > >>
> > >> You are on the Github watcher list for KTF?
> > > 
> > > Yep! I have been since LPC in 2017.
> > > 
> > >> Quite a few of the commits there are preparatory for a forthcoming kernel patch set.
> > >> I'll of course CC: you on the patch set when we send it to the list.
> > > 
> > > Awesome! I appreciate it.
> > > 
> > >>
> > >>>> Kunit in-kernel test harnesses? Because there's tons of these
> > >>>
> > >>> No, no plan. Knut and I talked about this a good while ago and it
> > >>> seemed that we had pretty fundamentally different approaches both in
> > >>> terms of implementation and end goal. Combining them seemed pretty
> > >>> infeasible, at least from a technical perspective. Anyway, I am sure
> > >>> Knut would like to give him perspective on the matter and I don't want
> > >>> to say too much without first giving him a chance to chime in on the
> > >>> matter.
> > >>
> > >> I need more time to study KUnit details to say, but from a 10k feet perspective:
> > >> I think at least there's a potential for some API unification, in using the same
> > macro
> > >> names. How about removing the KUNIT_ prefix to the test macros ;-) ?
> > > 
> > > Heh, heh. That's actually the way I had it in the earliest versions of
> > > KUnit! But that was pretty much the very first thing everyone
> > > complained about. I think I went from no prefix (like you are
> > > suggesting) to TEST_* before the first version of the RFC at the
> > > request of several people I was kicking the idea around with, and then
> > > I think I was asked to go from TEST_* to KUNIT_* in the very first
> > > revision of the RFC.
> > > 
> > > In short, I am sympathetic to your suggestion, but I think that is
> > > non-negotiable at this point. The community has a clear policy in
> > > place on the matter, and at this point I would really prefer not to
> > > change all the symbol names again.
> > 
> > This would not be the first time that a patch submitter has been
> > told "do B instead of A" for version 1, then told "do C instead of
> > B" for version 2, then told "do A instead of C" for the final version.
> > 
> > It sucks, but it happens.

Sure, I have been there before, but I thought those original opinions
were based on a pretty well established convention. Also, I don't think
those original opinions have changed. If they have, please chime in and
correct me.

> Sorry, I must have overlooked the B instead of A instance - otherwise 
> I would have objected against it - in addition to the recognizability 
> and portability issue I think it is important that these primitives are
> not unnecessary long since they will be written a *lot* of times if things go 
> our way.

I get that. That is why I thought I might have been worthy of an
exception. It is definitely easier to write EXPECT_EQ(...) than
KUNIT_EXPECT_EQ(...) or TEST_EXPECT_EQ(...), but no one else seemed to
agree in the past.

Even now, it is only you (and maybe Frank?) telling me to change it; I
would like to maybe hear Shuah, Kees, or Greg chime in on this before I
go about actually chaning it back, as I distinctly remember each of them
telling me that I should go with KUNIT_*.

> And the reason for using unique names elsewhere is to be able to co-exist with
> other components with the same needs. In the case of tests, I believe they are 

That's the policy I was talking about.

> in a different category, they are not supposed to be part of production kernels,
> and we want them to be used all over, that should warrant having the ASSERT_ and EXPECT_
> prefixes "reserved" for the purpose, so I would urge some pragmatism here!

I don't disagree. Again, this is what I initially proposed, but no one
agreed with me on this point.

I am not saying no. Nevertheless, this is a pretty consistently applied
pattern for new stuff, and I would really prefer not to make waves on
something that really doesn't matter all that much.

So like I said, if we can get some more discussion on this and it seems
like broad consensus says we can reserve ASSERT_* and EXPECT_*, then I
will go along with it.

> > As an aside, can you point to where the "clear policy in place" is
> > documented, and what the policy is?

Global namespacing policy that Knut mentioned above.

> > 
> > -Frank
> > 
> > 
> > >> That would make the names shorter, saving typing when writing tests, and storage ;-)
> > >> and also make the names more similar to KTF's, and those of user land unit test
> > > 
> > > You mean the Googletest/Googlemock expectations/assertions?
> > > 
> > > It's a great library (with not so great a name), but unfortunately it
> > > is written in C++, which I think pretty much counts it out here.
> 
> Using a similar syntax is a good thing, since it makes it easier for people
> who write tests in user land frameworks to contribute to the kernel tests.
> And if, lets say, someone later comes up with a way to run the KUnit tests in "real" 
> user land within Googletest ;-) then less editing would be needed..
> 
> > >> frameworks? Also it will make it possible to have functions compiling both with KTF
> > and
> > >> KUnit, facilitating moving code between the two.
> > > 
> > > I think that would be cool, but again, I don't think this will be
> > > possible with Googletest/Googlemock.
> 
> I was thinking of moves between KUnit tests and KTF tests, kernel code only.
> Some test functions may easily be usable both in a "pure" mocking environment
> and in an integrated setting with hardware/driver/userspace dependencies.

Speaking for KUnit, you are right. I got KUnit working on other
architectures a couple revisions ago (apparently I didn't advertise that
well enough).

> > >>
> > >> Also the string stream facilities of KUnit looks interesting to share.
> > > 
> > > I am glad you think so!
> > > 
> > > If your biggest concern on my side is test macro names (which I think
> > > is a no-go as I mentioned above), I think we should be in pretty good
> > > shape once you are ready to move forward. Besides, I have a lot more
> > > KUnit patches coming after this: landing this patchset is just the
> > > beginning. So how about we keep moving forward on this patchset?
> 
> I think the importance of a well thought through 
> test API definition is not to be underestimated 

I agree. That is why I am saying that I think we are in good shape if we
are only arguing about the name.

> - the sooner we can unify and establish a common base, the better, 
> I think.

Again, I agree. That is what I am trying to do here.

> 
> My other concern is, as mentioned earlier, whether UML is really that 
> different from just running in a VM wrt debugging support, and that 
> there exists a better approach based on automatically generating 
> an environment where the test code and the source under test 
> can be compiled in a normal user land program. But it seems 
> there are enough clients of the UML approach to justify it 
> as a lightweight entry. We want to make it easy and inexcusable 
> not to test the code, having a low bar of entry is certainly good.

Yep.

> Other than that, I really would need to spend some more time with 
> the details on KUnit to verify my so far fairly 
> shallow observations!

Are we arguing about anything other than naming schemes here? If that's
it, we can refactor the under the hood stuff later; if we even need to
at all.

Cheers!
Brendan Higgins May 14, 2019, 8 a.m. UTC | #55
On Sat, May 11, 2019 at 08:43:23AM +0200, Knut Omang wrote:
> On Fri, 2019-05-10 at 14:59 -0700, Frank Rowand wrote:
> > On 5/10/19 3:23 AM, Brendan Higgins wrote:
> > >> On Fri, May 10, 2019 at 7:49 AM Knut Omang <knut.omang@oracle.com> wrote:
> > >>>
> > >>> On Thu, 2019-05-09 at 22:18 -0700, Frank Rowand wrote:
> > >>>> On 5/9/19 4:40 PM, Logan Gunthorpe wrote:
> > >>>>>
> > >>>>>
> > >>>>> On 2019-05-09 5:30 p.m., Theodore Ts'o wrote:
> > >>>>>> On Thu, May 09, 2019 at 04:20:05PM -0600, Logan Gunthorpe wrote:
> > >>>>>>>
> > >>>>>>> The second item, arguably, does have significant overlap with kselftest.
> > >>>>>>> Whether you are running short tests in a light weight UML environment or
> > >>>>>>> higher level tests in an heavier VM the two could be using the same
> > >>>>>>> framework for writing or defining in-kernel tests. It *may* also be valuable
> > >>>>>>> for some people to be able to run all the UML tests in the heavy VM
> > >>>>>>> environment along side other higher level tests.
> > >>>>>>>
> > >>>>>>> Looking at the selftests tree in the repo, we already have similar items to
> > >>>>>>> what Kunit is adding as I described in point (2) above. kselftest_harness.h
> > >>>>>>> contains macros like EXPECT_* and ASSERT_* with very similar intentions to
> > >>>>>>> the new KUNIT_EXECPT_* and KUNIT_ASSERT_* macros.
> > >>>>>>>
> > >>>>>>> However, the number of users of this harness appears to be quite small. Most
> > >>>>>>> of the code in the selftests tree seems to be a random mismash of scripts
> > >>>>>>> and userspace code so it's not hard to see it as something completely
> > >>>>>>> different from the new Kunit:
> > >>>>>>>
> > >>>>>>> $ git grep --files-with-matches kselftest_harness.h *
> > >>>>>>
> > >>>>>> To the extent that we can unify how tests are written, I agree that
> > >>>>>> this would be a good thing.  However, you should note that
> > >>>>>> kselftest_harness.h is currently assums that it will be included in
> > >>>>>> userspace programs.  This is most obviously seen if you look closely
> > >>>>>> at the functions defined in the header files which makes calls to
> > >>>>>> fork(), abort() and fprintf().
> > >>>>>
> > >>>>> Ah, yes. I obviously did not dig deep enough. Using kunit for
> > >>>>> in-kernel tests and kselftest_harness for userspace tests seems like
> > >>>>> a sensible line to draw to me. Trying to unify kernel and userspace
> > >>>>> here sounds like it could be difficult so it's probably not worth
> > >>>>> forcing the issue unless someone wants to do some really fancy work
> > >>>>> to get it done.
> > >>>>>
> > >>>>> Based on some of the other commenters, I was under the impression
> > >>>>> that kselftests had in-kernel tests but I'm not sure where or if they
> > >>>>> exist.
> > >>>>
> > >>>> YES, kselftest has in-kernel tests.  (Excuse the shouting...)
> > >>>>
> > >>>> Here is a likely list of them in the kernel source tree:
> > >>>>
> > >>>> $ grep module_init lib/test_*.c
> > >>>> lib/test_bitfield.c:module_init(test_bitfields)
> > >>>> lib/test_bitmap.c:module_init(test_bitmap_init);
> > >>>> lib/test_bpf.c:module_init(test_bpf_init);
> > >>>> lib/test_debug_virtual.c:module_init(test_debug_virtual_init);
> > >>>> lib/test_firmware.c:module_init(test_firmware_init);
> > >>>> lib/test_hash.c:module_init(test_hash_init);  /* Does everything */
> > >>>> lib/test_hexdump.c:module_init(test_hexdump_init);
> > >>>> lib/test_ida.c:module_init(ida_checks);
> > >>>> lib/test_kasan.c:module_init(kmalloc_tests_init);
> > >>>> lib/test_list_sort.c:module_init(list_sort_test);
> > >>>> lib/test_memcat_p.c:module_init(test_memcat_p_init);
> > >>>> lib/test_module.c:static int __init test_module_init(void)
> > >>>> lib/test_module.c:module_init(test_module_init);
> > >>>> lib/test_objagg.c:module_init(test_objagg_init);
> > >>>> lib/test_overflow.c:static int __init test_module_init(void)
> > >>>> lib/test_overflow.c:module_init(test_module_init);
> > >>>> lib/test_parman.c:module_init(test_parman_init);
> > >>>> lib/test_printf.c:module_init(test_printf_init);
> > >>>> lib/test_rhashtable.c:module_init(test_rht_init);
> > >>>> lib/test_siphash.c:module_init(siphash_test_init);
> > >>>> lib/test_sort.c:module_init(test_sort_init);
> > >>>> lib/test_stackinit.c:module_init(test_stackinit_init);
> > >>>> lib/test_static_key_base.c:module_init(test_static_key_base_init);
> > >>>> lib/test_static_keys.c:module_init(test_static_key_init);
> > >>>> lib/test_string.c:module_init(string_selftest_init);
> > >>>> lib/test_ubsan.c:module_init(test_ubsan_init);
> > >>>> lib/test_user_copy.c:module_init(test_user_copy_init);
> > >>>> lib/test_uuid.c:module_init(test_uuid_init);
> > >>>> lib/test_vmalloc.c:module_init(vmalloc_test_init)
> > >>>> lib/test_xarray.c:module_init(xarray_checks);
> > >>>>
> > >>>>
> > >>>>> If they do exists, it seems like it would make sense to
> > >>>>> convert those to kunit and have Kunit tests run-able in a VM or
> > >>>>> baremetal instance.
> > >>>>
> > >>>> They already run in a VM.
> > >>>>
> > >>>> They already run on bare metal.
> > >>>>
> > >>>> They already run in UML.
> > >>>>
> > >>>> This is not to say that KUnit does not make sense.  But I'm still trying
> > >>>> to get a better description of the KUnit features (and there are
> > >>>> some).
> > >>>
> > >>> FYI, I have a master student who looks at converting some of these to KTF, such as
> > for
> > >>> instance the XArray tests, which lended themselves quite good to a semi-automated
> > >>> conversion.
> > >>>
> > >>> The result is also a somewhat more compact code as well as the flexibility
> > >>> provided by the Googletest executor and the KTF frameworks, such as running selected
> > >>> tests, output formatting, debugging features etc.
> > >>
> > >> So is KTF already in upstream? Or is the plan to unify the KTF and
> > > 
> > > I am not certain about KTF's upstream plans, but I assume that Knut
> > > would have CC'ed me on the thread if he had started working on it.
> > > 
> > >> Kunit in-kernel test harnesses? Because there's tons of these
> > > 
> > > No, no plan. Knut and I talked about this a good while ago and it
> > > seemed that we had pretty fundamentally different approaches both in
> > > terms of implementation and end goal. Combining them seemed pretty
> > > infeasible, at least from a technical perspective. Anyway, I am sure
> > > Knut would like to give him perspective on the matter and I don't want
> > > to say too much without first giving him a chance to chime in on the
> > > matter.
> > > 
> > > Nevertheless, I hope you don't see resolving this as a condition for
> > > accepting this patchset. I had several rounds of RFC on KUnit, and no
> > > one had previously brought this up.
> > 
> > I seem to recall a request in reply to the KUnit RFC email threads to
> > work together.
> 
> You recall right.
> I wanted us to work together to refine a common approach.

Are you talking about right after we met at LPC in 2017? We talked about
working together, but I thought you didn't really see much value in what
I was doing with KUnit.

Or are you talking about our fist discussion on the RFC[1]. In this
case, you seemed to assert that KTF's approach was fundamentally
different and superior. You also didn't really seem interested in trying
to merge KTF at the time. The discussion concluded with Luis suggesting
that we should just keep working on both separately and let individual
users decide.

What changed since then?

> I still think that's possible.

I hope you aren't asserting that I have been unwilling to work with you.
At the outset (before I sent out the RFC), I was willing to let you take
the lead on things, but you didn't seem very interested in anything that
I brought to the table and most importantly were not interested in
sending out any code for discussion on the mailing lists.

After I started the RFC, most of your comments were very high level and,
at least to me, seemed like pretty fundamental philosophical differences
in our approaches, which is fine! I think Luis recognized that and I
think that is part of why he suggested for us to continue to work on
them separately.

I am not trying to be mean, but I don't really know what you expected me
to do. I don't recall any specific changes you suggested me to make in
my code (I think you only ever made a single comment on a thread on
anything other than the cover letter.) And you never proposed any of
your own code demonstrating an alternative way of doing things.

Nevertheless, you are of course fully welcome to critique anything I
have proposed, or to propose your own way of doing things, but we need
to do so here on the mailing lists.

> > However whether that impacts acceptance of this patch set is up to
> > the maintainer and how she wants to resolve the potential collision
> > of KUnit and KTF (if there is indeed any sort of collision).
> 
> I believe there's overlap and potential for unification and code sharing.
> My concern is to make sure that that can happen without disrupting too 
> many test users. I'd really like to get some more time to explore that.

It's possible. Like I said before, our approaches are pretty
fundamentally different. It sounds like there might be some overlap
between our expectaion/assertion macros, but we probably cannot use the
ones from Googletest without introducing C++ into the kernel which is a
no-no.

> It strikes me that the main difference in the two approaches 
> lies in the way reporting is done by default. Since KUnit handles all the
> reporting itself, while KTF relies on Googletest for that, a lot more code 
> in KUnit revolves around that part, while with KTF we have focused more on 
> features to enable writing powerful and effective tests.

I have a lot more features on the way, but what is in this initial
patchset are the absolutely core things needed to produce an MVP, and
yes, most of that code revolves around reporting and the fundamental
structure of tests.

Developing cool features is great, but you need to start off on a basis
that the community will accept. Sometimes you need to play around
downstream a bit to develop your ideas, but you always want to get
upstream feedback as early as possible; it's always possible that
someone might propose something, or point out something that breaks a
fundamental assumption that all your later features depend on.

> The reporting part can possibly be made configurable: Reporting with printk or reporting 
> via netlink to user space. In fact, as a KTF debugging option KTF already 
> supports printk reporting which can be enabled via sysfs.

Yep, I intentionally left an interface so printk (well actually
vprintk_emit) can be swapped out with another mechanism.

> If macros can have the same syntax, then there's 
> likely features in KTF that KUnit users would benefit from too. 
> But this of course have to be tried out.

Cool. Looking forward to hearing about it.

Cheers!

[1] https://lkml.org/lkml/2018/11/24/170

> > >> in-kernel unit tests already, and every merge we get more (Frank's
> > >> list didn't even look into drivers or anywhere else, e.g. it's missing
> > >> the locking self tests I worked on in the past), and a more structured
> > >> approach would really be good.
> > > 
> > > Well, that's what I am trying to do. I hope you like it!
> > > 
> > > Cheers!
> > > .
> > > 
> > 
>
Brendan Higgins May 14, 2019, 8:22 a.m. UTC | #56
On Wed, May 08, 2019 at 05:58:49PM -0700, Frank Rowand wrote:
> Hi Ted,
> 
> On 5/7/19 10:22 AM, Theodore Ts'o wrote:
> > On Tue, May 07, 2019 at 10:01:19AM +0200, Greg KH wrote:
> Not very helpful to cut the text here, plus not explicitly indicating that
> text was cut (yes, I know the ">>>" will be a clue for the careful reader),
> losing the set up for my question.
> 
> 
> >>> My understanding is that the intent of KUnit is to avoid booting a kernel on
> >>> real hardware or in a virtual machine.  That seems to be a matter of semantics
> >>> to me because isn't invoking a UML Linux just running the Linux kernel in
> >>> a different form of virtualization?
> >>>
> >>> So I do not understand why KUnit is an improvement over kselftest.
> >>>
> >>> It seems to me that KUnit is just another piece of infrastructure that I
> >>> am going to have to be familiar with as a kernel developer.  More overhead,
> >>> more information to stuff into my tiny little brain.
> >>>
> >>> I would guess that some developers will focus on just one of the two test
> >>> environments (and some will focus on both), splitting the development
> >>> resources instead of pooling them on a common infrastructure.
> >>>
> >>> What am I missing?
> >>
> >> kselftest provides no in-kernel framework for testing kernel code
> >> specifically.  That should be what kunit provides, an "easy" way to
> >> write in-kernel tests for things.
> >>
> >> Brendan, did I get it right?
> > 
> > Yes, that's basically right.  You don't *have* to use KUnit.  It's
> 
> If KUnit is added to the kernel, and a subsystem that I am submitting
> code for has chosen to use KUnit instead of kselftest, then yes, I do
> *have* to use KUnit if my submission needs to contain a test for the
> code unless I want to convince the maintainer that somehow my case
> is special and I prefer to use kselftest instead of KUnittest.
> 
> 
> > supposed to be a simple way to run a large number of small tests that
> > for specific small components in a system.
> 
> kselftest also supports running a subset of tests.  That subset of tests
> can also be a large number of small tests.  There is nothing inherent
> in KUnit vs kselftest in this regard, as far as I am aware.
> 
> 
> > For example, I currently use xfstests using KVM and GCE to test all of
> > ext4.  These tests require using multiple 5 GB and 20GB virtual disks,
> > and it works by mounting ext4 file systems and exercising ext4 through
> > the system call interfaces, using userspace tools such as fsstress,
> > fsx, fio, etc.  It requires time overhead to start the VM, create and
> > allocate virtual disks, etc.  For example, to run a single 3 seconds
> > xfstest (generic/001), it requires full 10 seconds to run it via
> > kvm-xfstests.
> > 
> 
> 
> > KUnit is something else; it's specifically intended to allow you to
> > create lightweight tests quickly and easily, and by reducing the
> > effort needed to write and run unit tests, hopefully we'll have a lot
> > more of them and thus improve kernel quality.
> 
> The same is true of kselftest.  You can create lightweight tests in
> kselftest.
> 
> 
> > As an example, I have a volunteer working on developing KUinit tests
> > for ext4.  We're going to start by testing the ext4 extent status
> > tree.  The source code is at fs/ext4/extent_status.c; it's
> > approximately 1800 LOC.  The Kunit tests for the extent status tree
> > will exercise all of the corner cases for the various extent status
> > tree functions --- e.g., ext4_es_insert_delayed_block(),
> > ext4_es_remove_extent(), ext4_es_cache_extent(), etc.  And it will do
> > this in isolation without our needing to create a test file system or
> > using a test block device.
> > 
> 
> > Next we'll test the ext4 block allocator, again in isolation.  To test
> > the block allocator we will have to write "mock functions" which
> > simulate reading allocation bitmaps from disk.  Again, this will allow
> > the test writer to explicitly construct corner cases and validate that
> > the block allocator works as expected without having to reverese
> > engineer file system data structures which will force a particular
> > code path to be executed.
> 
> This would be a difference, but mock functions do not exist in KUnit.
> The KUnit test will call the real kernel function in the UML kernel.
> 
> I think Brendan has indicated a desire to have mock functions in the
> future.
> 
> Brendan, do I understand that correctly?

Oh, sorry, I missed this comment from earlier.

Yes, you are correct. Function mocking is a feature I will be
introducing in a follow up patchset (assuming this one gets merged of
course ;-) ).

Cheers!

> -Frank
> 
> > So this is why it's largely irrelevant to me that KUinit uses UML.  In
> > fact, it's a feature.  We're not testing device drivers, or the
> > scheduler, or anything else architecture-specific.  UML is not about
> > virtualization.  What it's about in this context is allowing us to
> > start running test code as quickly as possible.  Booting KVM takes
> > about 3-4 seconds, and this includes initializing virtio_scsi and
> > other device drivers.  If by using UML we can hold the amount of
> > unnecessary kernel subsystem initialization down to the absolute
> > minimum, and if it means that we can communicating to the test
> > framework via a userspace "printf" from UML/KUnit code, as opposed to
> > via a virtual serial port to KVM's virtual console, it all makes for
> > lighter weight testing.
> > 
> > Why did I go looking for a volunteer to write KUnit tests for ext4?
> > Well, I have a plan to make some changes in restructing how ext4's
> > write path works, in order to support things like copy-on-write, a
> > more efficient delayed allocation system, etc.  This will require
> > making changes to the extent status tree, and by having unit tests for
> > the extent status tree, we'll be able to detect any bugs that we might
> > accidentally introduce in the es tree far more quickly than if we
> > didn't have those tests available.  Google has long found that having
> > these sorts of unit tests is a real win for developer velocity for any
> > non-trivial code module (or C++ class), even when you take into
> > account the time it takes to create the unit tests.
> > 
> > 					- Ted>
> > P.S.  Many thanks to Brendan for finding such a volunteer for me; the
> > person in question is a SRE from Switzerland who is interested in
> > getting involved with kernel testing, and this is going to be their
> > 20% project.  :-)
> > 
> >
Brendan Higgins May 14, 2019, 8:38 a.m. UTC | #57
On Fri, May 10, 2019 at 03:13:40PM -0700, Frank Rowand wrote:
> On 5/10/19 9:17 AM, Logan Gunthorpe wrote:
> > 
> > 
> > On 2019-05-09 11:18 p.m., Frank Rowand wrote:
> > 
> >> YES, kselftest has in-kernel tests.  (Excuse the shouting...)
> > 
> > Cool. From my cursory look, in my opinion, these would be greatly
> > improved by converting them to the framework Brendan is proposing for Kunit.
> > 
> >>> If they do exists, it seems like it would make sense to
> >>> convert those to kunit and have Kunit tests run-able in a VM or
> >>> baremetal instance.
> >>
> >> They already run in a VM.
> >>
> >> They already run on bare metal.
> >>
> >> They already run in UML.
> > 
> > Simply being able to run in UML is not the only thing here. Kunit
> > provides the infrastructure to quickly build, run and report results for
> > all the tests from userspace without needing to worry about the details
> > of building and running a UML kernel, then parsing dmesg to figure out
> > what tests were run or not.
> 
> Yes.  But that is not the only environment that KUnit must support to be
> of use to me for devicetree unittests (this is not new, Brendan is quite
> aware of my needs and is not ignoring them).
> 
> 
> >> This is not to say that KUnit does not make sense.  But I'm still trying
> >> to get a better description of the KUnit features (and there are
> >> some).
> > 
> > So read the patches, or the documentation[1] or the LWN article[2]. It's
> > pretty well described in a lot of places -- that's one of the big
> > advantages of it. In contrast, few people seems to have any concept of
> > what kselftests are or where they are or how to run them (I was
> > surprised to find the in-kernel tests in the lib tree).
> > 
> > Logan
> > 
> > [1] https://google.github.io/kunit-docs/third_party/kernel/docs/
> > [2] https://lwn.net/Articles/780985/
> 
> I have been following the RFC versions.  I have installed the RFC patches
> and run them to the extent that they worked (devicetree unittests were
> a guinea pig for test conversion in the RFC series, but the converted
> tests did not work).  I read portions of the code while trying to
> understand the unittests conversion.  I made review comments based on
> the portion of the code that I did read.  I have read the documentation
> (very nice btw, as I have said before, but should be expanded).
> 
> My comment is that the description to submit the patch series should
> be fuller -- KUnit potentially has a lot of nice attributes, and I
> still think I have only scratched the surface.  The average reviewer
> may have even less in-depth knowledge than I do.  And as I have
> commented before, I keep diving into areas that I had no previous
> experience with (such as kselftest) to be able to properly judge this
> patch series.

Thanks for the praise! That means a lot coming from you!

I really cannot disagree that I could use more documentation. You can
pretty much always use more documentation. Nevertheless, is there a
particular part of the documentation that you think it lacking?

It sounds like there was a pretty long discussion here about, a number
of different things.

Do you want a better description of what unit testing is and how KUnit
helps make it possible?

Do you want more of an explanation distinguishing KUnit from kselftest?
How so?

Do you just want better documentation on how to test the kernel? What
tools we have at our disposal and when to use what tools?

Thanks!
Daniel Vetter May 14, 2019, 12:05 p.m. UTC | #58
On Tue, May 14, 2019 at 8:04 AM Brendan Higgins
<brendanhiggins@google.com> wrote:
>
> On Mon, May 13, 2019 at 04:44:51PM +0200, Daniel Vetter wrote:
> > On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> > > On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > > > However, the reply is incorrect.  Kselftest in-kernel tests (which
> > > > is the context here) can be configured as built in instead of as
> > > > a module, and built in a UML kernel.  The UML kernel can boot,
> > > > running the in-kernel tests before UML attempts to invoke the
> > > > init process.
> > >
> > > Um, Citation needed?
> > >
> > > I don't see any evidence for this in the kselftest documentation, nor
> > > do I see any evidence of this in the kselftest Makefiles.
> > >
> > > There exists test modules in the kernel that run before the init
> > > scripts run --- but that's not strictly speaking part of kselftests,
> > > and do not have any kind of infrastructure.  As noted, the
> > > kselftests_harness header file fundamentally assumes that you are
> > > running test code in userspace.
> >
> > Yeah I really like the "no userspace required at all" design of kunit,
> > while still collecting results in a well-defined way (unless the current
> > self-test that just run when you load the module, with maybe some
> > kselftest ad-hoc wrapper around to collect the results).
> >
> > What I want to do long-term is to run these kernel unit tests as part of
> > the build-testing, most likely in gitlab (sooner or later, for drm.git
>
> Totally! This is part of the reason I have been insisting on a minimum
> of UML compatibility for all unit tests. If you can suffiently constrain
> the environment that is required for tests to run in, it makes it much
> easier not only for a human to run your tests, but it also makes it a
> lot easier for an automated service to be able to run your tests.
>
> I actually have a prototype presubmit already working on my
> "stable/non-upstream" branch. You can checkout what presubmit results
> look like here[1][2].

ug gerrit :-)

> > only ofc). So that people get their pull requests (and patch series, we
> > have some ideas to tie this into patchwork) automatically tested for this
>
> Might that be Snowpatch[3]? I talked to Russell, the creator of Snowpatch,
> and he seemed pretty open to collaboration.
>
> Before I heard about Snowpatch, I had an intern write a translation
> layer that made Prow (the presubmit service that I used in the prototype
> above) work with LKML[4].

There's about 3-4 forks/clones of patchwork. snowpatch is one, we have
a different one on freedesktop.org. It's a bit a mess :-/

> I am not married to either approach, but I think between the two of
> them, most of the initial legwork has been done to make presubmit on
> LKML a reality.

We do have presubmit CI working already with our freedesktop.org
patchwork. The missing glue is just tying that into gitlab CI somehow
(since we want to unify build testing more and make it easier for
contributors to adjust things).
-Daniel

> > super basic stuff.
>
> I am really excited to hear back on what you think!
>
> Cheers!
>
> [1] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-7bfa40efb132e15c8388755c273837559911425c
> [2] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-a6784496eafff442ac98fb068bf1a0f36ee73509
> [3] https://developer.ibm.com/open/projects/snowpatch/
> [4] https://kunit.googlesource.com/prow-lkml/
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
Brendan Higgins May 14, 2019, 6:36 p.m. UTC | #59
On Tue, May 14, 2019 at 02:05:05PM +0200, Daniel Vetter wrote:
> On Tue, May 14, 2019 at 8:04 AM Brendan Higgins
> <brendanhiggins@google.com> wrote:
> >
> > On Mon, May 13, 2019 at 04:44:51PM +0200, Daniel Vetter wrote:
> > > On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> > > > On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > > > > However, the reply is incorrect.  Kselftest in-kernel tests (which
> > > > > is the context here) can be configured as built in instead of as
> > > > > a module, and built in a UML kernel.  The UML kernel can boot,
> > > > > running the in-kernel tests before UML attempts to invoke the
> > > > > init process.
> > > >
> > > > Um, Citation needed?
> > > >
> > > > I don't see any evidence for this in the kselftest documentation, nor
> > > > do I see any evidence of this in the kselftest Makefiles.
> > > >
> > > > There exists test modules in the kernel that run before the init
> > > > scripts run --- but that's not strictly speaking part of kselftests,
> > > > and do not have any kind of infrastructure.  As noted, the
> > > > kselftests_harness header file fundamentally assumes that you are
> > > > running test code in userspace.
> > >
> > > Yeah I really like the "no userspace required at all" design of kunit,
> > > while still collecting results in a well-defined way (unless the current
> > > self-test that just run when you load the module, with maybe some
> > > kselftest ad-hoc wrapper around to collect the results).
> > >
> > > What I want to do long-term is to run these kernel unit tests as part of
> > > the build-testing, most likely in gitlab (sooner or later, for drm.git
> >
> > Totally! This is part of the reason I have been insisting on a minimum
> > of UML compatibility for all unit tests. If you can suffiently constrain
> > the environment that is required for tests to run in, it makes it much
> > easier not only for a human to run your tests, but it also makes it a
> > lot easier for an automated service to be able to run your tests.
> >
> > I actually have a prototype presubmit already working on my
> > "stable/non-upstream" branch. You can checkout what presubmit results
> > look like here[1][2].
> 
> ug gerrit :-)

Yeah, yeah, I know, but it is a lot easier for me to get a project set
up here using Gerrit, when we already use that for a lot of other
projects.

Also, Gerrit has gotten a lot better over the last two years or so. Two
years ago, I wouldn't touch it with a ten foot pole. It's not so bad
anymore, at least if you are used to using a web UI to review code.

> > > only ofc). So that people get their pull requests (and patch series, we
> > > have some ideas to tie this into patchwork) automatically tested for this
> >
> > Might that be Snowpatch[3]? I talked to Russell, the creator of Snowpatch,
> > and he seemed pretty open to collaboration.
> >
> > Before I heard about Snowpatch, I had an intern write a translation
> > layer that made Prow (the presubmit service that I used in the prototype
> > above) work with LKML[4].
> 
> There's about 3-4 forks/clones of patchwork. snowpatch is one, we have
> a different one on freedesktop.org. It's a bit a mess :-/

Oh, I didn't realize that. I found your patchwork instance here[5], but
do you have a place where I can see the changes you have added to
support presubmit?

> > I am not married to either approach, but I think between the two of
> > them, most of the initial legwork has been done to make presubmit on
> > LKML a reality.
> 
> We do have presubmit CI working already with our freedesktop.org
> patchwork. The missing glue is just tying that into gitlab CI somehow
> (since we want to unify build testing more and make it easier for
> contributors to adjust things).

I checked out a couple of your projects on your patchwork instance: AMD
X.Org drivers, DRI devel, and Wayland. I saw the tab you added for
tests, but none of them actually had any test results. Can you point me
at one that does?

Cheers!

[5] https://patchwork.freedesktop.org/

> > > super basic stuff.
> >
> > I am really excited to hear back on what you think!
> >
> > Cheers!
> >
> > [1] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-7bfa40efb132e15c8388755c273837559911425c
> > [2] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-a6784496eafff442ac98fb068bf1a0f36ee73509
> > [3] https://developer.ibm.com/open/projects/snowpatch/
> > [4] https://kunit.googlesource.com/prow-lkml/
> > _______________________________________________
> > dri-devel mailing list
> > dri-devel@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/dri-devel
Brendan Higgins May 14, 2019, 8:54 p.m. UTC | #60
On Fri, May 10, 2019 at 02:52:59PM -0700, Frank Rowand wrote:

Sorry, I forgot to get back to this thread.

> On 5/9/19 3:20 PM, Logan Gunthorpe wrote:
> > 
> > 
> > On 2019-05-09 3:42 p.m., Theodore Ts'o wrote:
> >> On Thu, May 09, 2019 at 11:12:12AM -0700, Frank Rowand wrote:
> >>>
> >>>     "My understanding is that the intent of KUnit is to avoid booting a kernel on
> >>>     real hardware or in a virtual machine.  That seems to be a matter of semantics
> >>>     to me because isn't invoking a UML Linux just running the Linux kernel in
> >>>     a different form of virtualization?
> >>>
> >>>     So I do not understand why KUnit is an improvement over kselftest.
> >>>
> >>>     ...
> >>> 
> >>> What am I missing?"
> >> 
> >> One major difference: kselftest requires a userspace environment;
> >> it starts systemd, requires a root file system from which you can
> >> load modules, etc.  Kunit doesn't require a root file system;
> >> doesn't require that you start systemd; doesn't allow you to run
> >> arbitrary perl, python, bash, etc. scripts.  As such, it's much
> >> lighter weight than kselftest, and will have much less overhead
> >> before you can start running tests.  So it's not really the same
> >> kind of virtualization.
> 
> I'm back to reply to this subthread, after a delay, as promised.
> 
> 
> > I largely agree with everything Ted has said in this thread, but I
> > wonder if we are conflating two different ideas that is causing an
> > impasse. From what I see, Kunit actually provides two different
> > things:
> 
> > 1) An execution environment that can be run very quickly in userspace
> > on tests in the kernel source. This speeds up the tests and gives a
> > lot of benefit to developers using those tests because they can get
> > feedback on their code changes a *lot* quicker.
> 
> kselftest in-kernel tests provide exactly the same when the tests are
> configured as "built-in" code instead of as modules.
> 
> 
> > 2) A framework to write unit tests that provides a lot of the same
> > facilities as other common unit testing frameworks from userspace
> > (ie. a runner that runs a list of tests and a bunch of helpers such
> > as KUNIT_EXPECT_* to simplify test passes and failures).
> 
> > The first item from Kunit is novel and I see absolutely no overlap
> > with anything kselftest does. It's also the valuable thing I'd like
> > to see merged and grow.
> 
> The first item exists in kselftest.
> 
> 
> > The second item, arguably, does have significant overlap with
> > kselftest. Whether you are running short tests in a light weight UML
> > environment or higher level tests in an heavier VM the two could be
> > using the same framework for writing or defining in-kernel tests. It
> > *may* also be valuable for some people to be able to run all the UML
> > tests in the heavy VM environment along side other higher level
> > tests.
> > 
> > Looking at the selftests tree in the repo, we already have similar
> > items to what Kunit is adding as I described in point (2) above.
> > kselftest_harness.h contains macros like EXPECT_* and ASSERT_* with
> > very similar intentions to the new KUNIT_EXECPT_* and KUNIT_ASSERT_*
> > macros.
> 
> I might be wrong here because I have not dug deeply enough into the
> code!!!  Does this framework apply to the userspace tests, the
> in-kernel tests, or both?  My "not having dug enough GUESS" is that
> these are for the user space tests (although if so, they could be
> extended for in-kernel use also).
> 
> So I think this one maybe does not have an overlap between KUnit
> and kselftest.

You are right, Frank: the EXPECT_* and ASSERT_* in kselftest_harness.h
is for userspace only. kselftest_harness.h provides it's own main method
for running the tests[1]. It also makes assumptions around having access
to this main method[2].

There actually isn't that much infrastructure that that I can reuse
there. I can't even reuse the API definitions because they only pass the
context object (for me it is struct kunit, for them it is their fixture)
that they use to their test cases.

> > However, the number of users of this harness appears to be quite
> > small. Most of the code in the selftests tree seems to be a random
> > mismash of scripts and userspace code so it's not hard to see it as
> > something completely different from the new Kunit:
> > $ git grep --files-with-matches kselftest_harness.h *
> > Documentation/dev-tools/kselftest.rst
> > MAINTAINERS
> > tools/testing/selftests/kselftest_harness.h
> > tools/testing/selftests/net/tls.c
> > tools/testing/selftests/rtc/rtctest.c
> > tools/testing/selftests/seccomp/Makefile
> > tools/testing/selftests/seccomp/seccomp_bpf.c
> > tools/testing/selftests/uevent/Makefile
> > tools/testing/selftests/uevent/uevent_filtering.c
> 
> 
> > Thus, I can personally see a lot of value in integrating the kunit
> > test framework with this kselftest harness. There's only a small
> > number of users of the kselftest harness today, so one way or another
> > it seems like getting this integrated early would be a good idea.
> > Letting Kunit and Kselftests progress independently for a few years
> > will only make this worse and may become something we end up
> > regretting.
> 
> Yes, this I agree with.

I think I agree with this point. I cannot see any reason not to have
KUnit tests able to be run from the kselftest harness.

Conceptually, I think we are mostly in agreement that kselftest and
KUnit are distinct things. Like Shuah said, kselftest is a black box
regression test framework, KUnit is a white box unit testing framework.
So making kselftest the only interface to use KUnit would be a mistake
in my opinion (and I think others on this thread would agree).

That being said, when you go to run kselftest, I think there is an
expectation that you run all your tests. Or at least that kselftest
should make that possible. From my experience, usually when someone
wants to run all the end-to-end tests, *they really just want to run all
the tests*. This would imply that all your KUnit tests get run too.

Another added benefit of making it possible for the kselftest harness to
run KUnit tests would be that it would somewhat guarantee that the
interfaces between the two would remain compatible meaning that test
automation tools like CI and presubmit systems are more likely to be
easy to integrate in each and less likely to break for either.

Would anyone object if I explore this in a follow-up patchset? I have an
idea of how I might start, but I think it would be easiest to explore in
it's own patchset. I don't expect it to be a trivial amount of work.

Cheers!

[1] https://elixir.bootlin.com/linux/v5.1.2/source/tools/testing/selftests/kselftest_harness.h#L329
[2] https://elixir.bootlin.com/linux/v5.1.2/source/tools/testing/selftests/kselftest_harness.h#L681
Frank Rowand May 15, 2019, 12:14 a.m. UTC | #61
On 5/14/19 1:38 AM, Brendan Higgins wrote:
> On Fri, May 10, 2019 at 03:13:40PM -0700, Frank Rowand wrote:
>> On 5/10/19 9:17 AM, Logan Gunthorpe wrote:
>>>
>>>
>>> On 2019-05-09 11:18 p.m., Frank Rowand wrote:
>>>
>>>> YES, kselftest has in-kernel tests.  (Excuse the shouting...)
>>>
>>> Cool. From my cursory look, in my opinion, these would be greatly
>>> improved by converting them to the framework Brendan is proposing for Kunit.
>>>
>>>>> If they do exists, it seems like it would make sense to
>>>>> convert those to kunit and have Kunit tests run-able in a VM or
>>>>> baremetal instance.
>>>>
>>>> They already run in a VM.
>>>>
>>>> They already run on bare metal.
>>>>
>>>> They already run in UML.
>>>
>>> Simply being able to run in UML is not the only thing here. Kunit
>>> provides the infrastructure to quickly build, run and report results for
>>> all the tests from userspace without needing to worry about the details
>>> of building and running a UML kernel, then parsing dmesg to figure out
>>> what tests were run or not.
>>
>> Yes.  But that is not the only environment that KUnit must support to be
>> of use to me for devicetree unittests (this is not new, Brendan is quite
>> aware of my needs and is not ignoring them).
>>
>>
>>>> This is not to say that KUnit does not make sense.  But I'm still trying
>>>> to get a better description of the KUnit features (and there are
>>>> some).
>>>
>>> So read the patches, or the documentation[1] or the LWN article[2]. It's
>>> pretty well described in a lot of places -- that's one of the big
>>> advantages of it. In contrast, few people seems to have any concept of
>>> what kselftests are or where they are or how to run them (I was
>>> surprised to find the in-kernel tests in the lib tree).
>>>
>>> Logan
>>>
>>> [1] https://google.github.io/kunit-docs/third_party/kernel/docs/
>>> [2] https://lwn.net/Articles/780985/
>>
>> I have been following the RFC versions.  I have installed the RFC patches
>> and run them to the extent that they worked (devicetree unittests were
>> a guinea pig for test conversion in the RFC series, but the converted
>> tests did not work).  I read portions of the code while trying to
>> understand the unittests conversion.  I made review comments based on
>> the portion of the code that I did read.  I have read the documentation
>> (very nice btw, as I have said before, but should be expanded).
>>
>> My comment is that the description to submit the patch series should
>> be fuller -- KUnit potentially has a lot of nice attributes, and I
>> still think I have only scratched the surface.  The average reviewer
>> may have even less in-depth knowledge than I do.  And as I have
>> commented before, I keep diving into areas that I had no previous
>> experience with (such as kselftest) to be able to properly judge this
>> patch series.
> 
> Thanks for the praise! That means a lot coming from you!
> 
> I really cannot disagree that I could use more documentation. You can
> pretty much always use more documentation. Nevertheless, is there a
> particular part of the documentation that you think it lacking?

I wasn't talking about the documentation that is part of KUnit.  I was
targeting patch 0.


> It sounds like there was a pretty long discussion here about, a number
> of different things.
> 
> Do you want a better description of what unit testing is and how KUnit
> helps make it possible?
> 
> Do you want more of an explanation distinguishing KUnit from kselftest?
> How so?

The high level issue is to provide reviewers with enough context to be
able to evaluate the patch series.  That is probably not very obvious
at this point in the thread.  At this point I was responding to Logan's
response to me that I should be reading Documentation to get a better
description of KUnit features.  I _think_ that Logan thought that I
did not understand KUnit features and was trying to be helpful by
pointing out where I could get more information.  If so, he was missing
my intended point had been that patch 0 should provide more information
to justify adding this feature.

One thing that has become very apparent in the discussion of this patch
series is that some people do not understand that kselftest includes
in-kernel tests, not just userspace tests.  As such, KUnit is an
additional implementation of "the same feature".  (One can debate
exactly which in-kernel test features kselftest and KUnit provide,
and how much overlap exists or does not exist.  So don't take "the
same feature" as my final opinion of how much overlap exists.)  So
that is a key element to be noted and explained.

I don't have a goal of finding all the things to include in patch 0,
that is really your job as the patch submitter.  But as a reviewer,
it is easy for me to point out an obvious hole, even if I don't find
all of the holes.  kselftest vs KUnit overlap was an obvious hole to
me.

So, yes, more of an explanation about the in-kernel testing available
via kselftest vs the in-kernel testing available via KUnit, and how
they provide the same or different functionality.  Should both exist?
Should one replace the other?  If one provides additional features,
should the additional features be merged into a common base?

> Do you just want better documentation on how to test the kernel? What
> tools we have at our disposal and when to use what tools?
> 
> Thanks!
> .
>
Logan Gunthorpe May 15, 2019, 12:26 a.m. UTC | #62
On 2019-05-14 6:14 p.m., Frank Rowand wrote:
> The high level issue is to provide reviewers with enough context to be
> able to evaluate the patch series.  That is probably not very obvious
> at this point in the thread.  At this point I was responding to Logan's
> response to me that I should be reading Documentation to get a better
> description of KUnit features.  I _think_ that Logan thought that I
> did not understand KUnit features and was trying to be helpful by
> pointing out where I could get more information.  If so, he was missing
> my intended point had been that patch 0 should provide more information
> to justify adding this feature.

Honestly, I lost track of wait exactly your point was. And, in my
opinion, Brendan has provided over and above the information required to
justify Kunit's inclusion.

> One thing that has become very apparent in the discussion of this patch
> series is that some people do not understand that kselftest includes
> in-kernel tests, not just userspace tests.  As such, KUnit is an
> additional implementation of "the same feature".  (One can debate
> exactly which in-kernel test features kselftest and KUnit provide,
> and how much overlap exists or does not exist.  So don't take "the
> same feature" as my final opinion of how much overlap exists.)  So
> that is a key element to be noted and explained.

From my perspective, once we were actually pointed to the in-kernel
kselftest code and took a look at it, it was clear there was no
over-arching framework to them and that Kunit could be used to
significantly improve those tests with a common structure. Based on my
reading of the thread, Ted came to the same conclusion.

I don't think we should block this feature from being merged, and for
future work, someone can update the in-kernel kselftests to use the new
framework.

Logan
Frank Rowand May 15, 2019, 12:26 a.m. UTC | #63
On 5/11/19 10:33 AM, Theodore Ts'o wrote:
> On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
>> However, the reply is incorrect.  Kselftest in-kernel tests (which
>> is the context here) can be configured as built in instead of as
>> a module, and built in a UML kernel.  The UML kernel can boot,
>> running the in-kernel tests before UML attempts to invoke the
>> init process.
> 
> Um, Citation needed?

The paragraph that you quoted tells you exactly how to run a kselftest
in-kernel test in a UML kernel.  Just to what that paragraph says.


> 
> I don't see any evidence for this in the kselftest documentation, nor
> do I see any evidence of this in the kselftest Makefiles.
> 
> There exists test modules in the kernel that run before the init
> scripts run --- but that's not strictly speaking part of kselftests,
> and do not have any kind of infrastructure.  As noted, the
> kselftests_harness header file fundamentally assumes that you are
> running test code in userspace.

You are ignoring the kselftest in-kernel tests.

We are talking in circles.  I'm done with this thread.

-Frank

> 
> 				- Ted
> .
>
Theodore Ts'o May 15, 2019, 4:28 a.m. UTC | #64
On Tue, May 14, 2019 at 05:26:47PM -0700, Frank Rowand wrote:
> On 5/11/19 10:33 AM, Theodore Ts'o wrote:
> > On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> >> However, the reply is incorrect.  Kselftest in-kernel tests (which
> >> is the context here) can be configured as built in instead of as
> >> a module, and built in a UML kernel.  The UML kernel can boot,
> >> running the in-kernel tests before UML attempts to invoke the
> >> init process.
> > 
> > Um, Citation needed?
> 
> The paragraph that you quoted tells you exactly how to run a kselftest
> in-kernel test in a UML kernel.  Just to what that paragraph says.

I didn't quote a paragraph.  But I'll quote from it now:

  $ make -C tools/testing/selftests run_tests

This runs the kselftest harness, *in userspace*.  That means you have
to have a root file system, and it's run after init has started, by
default.  You asserted that kselftests allows you to run modules
before init has started.  There is absolutely zero, cero, nada, zilch
mentions of any of anything like that in Documentation/dev-tools/kselftests.rst

> > There exists test modules in the kernel that run before the init
> > scripts run --- but that's not strictly speaking part of kselftests,
> > and do not have any kind of infrastructure.  As noted, the
> > kselftests_harness header file fundamentally assumes that you are
> > running test code in userspace.
> 
> You are ignoring the kselftest in-kernel tests.

I'm talking specifically about what you have been *claiming* to be
kselftest in-kernel tests above.  And I'm asserting they are really
not kselftests.  They are just ad hoc tests that are run in kernel
space, which, when compiled as modules, can be loaded by a kselftest
shell script.  You can certainly hook in these ad hoc in-kernel tests
via kselftests --- but then they aren't run before init starts,
because kselftests is inherently a userspace-driven system.

If you build these tests (many of which existed before kselftests was
merged) into the kernel such that they are run before init starts,
without the kselftest harness, then they are not kselftests, by
definition.  Both in how they are run, and since many of these
in-kernel tests predate the introduction of kselftests --- in some
cases, by many years.

> We are talking in circles.  I'm done with this thread.

Yes, that sounds like it would be best.

						- Ted
Daniel Vetter May 15, 2019, 7:41 a.m. UTC | #65
On Tue, May 14, 2019 at 11:36:18AM -0700, Brendan Higgins wrote:
> On Tue, May 14, 2019 at 02:05:05PM +0200, Daniel Vetter wrote:
> > On Tue, May 14, 2019 at 8:04 AM Brendan Higgins
> > <brendanhiggins@google.com> wrote:
> > >
> > > On Mon, May 13, 2019 at 04:44:51PM +0200, Daniel Vetter wrote:
> > > > On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> > > > > On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > > > > > However, the reply is incorrect.  Kselftest in-kernel tests (which
> > > > > > is the context here) can be configured as built in instead of as
> > > > > > a module, and built in a UML kernel.  The UML kernel can boot,
> > > > > > running the in-kernel tests before UML attempts to invoke the
> > > > > > init process.
> > > > >
> > > > > Um, Citation needed?
> > > > >
> > > > > I don't see any evidence for this in the kselftest documentation, nor
> > > > > do I see any evidence of this in the kselftest Makefiles.
> > > > >
> > > > > There exists test modules in the kernel that run before the init
> > > > > scripts run --- but that's not strictly speaking part of kselftests,
> > > > > and do not have any kind of infrastructure.  As noted, the
> > > > > kselftests_harness header file fundamentally assumes that you are
> > > > > running test code in userspace.
> > > >
> > > > Yeah I really like the "no userspace required at all" design of kunit,
> > > > while still collecting results in a well-defined way (unless the current
> > > > self-test that just run when you load the module, with maybe some
> > > > kselftest ad-hoc wrapper around to collect the results).
> > > >
> > > > What I want to do long-term is to run these kernel unit tests as part of
> > > > the build-testing, most likely in gitlab (sooner or later, for drm.git
> > >
> > > Totally! This is part of the reason I have been insisting on a minimum
> > > of UML compatibility for all unit tests. If you can suffiently constrain
> > > the environment that is required for tests to run in, it makes it much
> > > easier not only for a human to run your tests, but it also makes it a
> > > lot easier for an automated service to be able to run your tests.
> > >
> > > I actually have a prototype presubmit already working on my
> > > "stable/non-upstream" branch. You can checkout what presubmit results
> > > look like here[1][2].
> > 
> > ug gerrit :-)
> 
> Yeah, yeah, I know, but it is a lot easier for me to get a project set
> up here using Gerrit, when we already use that for a lot of other
> projects.
> 
> Also, Gerrit has gotten a lot better over the last two years or so. Two
> years ago, I wouldn't touch it with a ten foot pole. It's not so bad
> anymore, at least if you are used to using a web UI to review code.

I was somewhat joking, I'm just not used to gerrit ... And seems to indeed
be a lot more polished than last time I looked at it seriously.

> > > > only ofc). So that people get their pull requests (and patch series, we
> > > > have some ideas to tie this into patchwork) automatically tested for this
> > >
> > > Might that be Snowpatch[3]? I talked to Russell, the creator of Snowpatch,
> > > and he seemed pretty open to collaboration.
> > >
> > > Before I heard about Snowpatch, I had an intern write a translation
> > > layer that made Prow (the presubmit service that I used in the prototype
> > > above) work with LKML[4].
> > 
> > There's about 3-4 forks/clones of patchwork. snowpatch is one, we have
> > a different one on freedesktop.org. It's a bit a mess :-/
> 
> Oh, I didn't realize that. I found your patchwork instance here[5], but
> do you have a place where I can see the changes you have added to
> support presubmit?

Ok here's a few links. Aside from the usual patch view we've also added a
series view:

https://patchwork.freedesktop.org/project/intel-gfx/series/?ordering=-last_updated

This ties the patches + cover letter together, and it even (tries to at
least) track revisions. Here's an example which is currently at revision
9:

https://patchwork.freedesktop.org/series/57232/

Below the patch list for each revision we also have the test result list.
If you click on the grey bar it'll expand with the summary from CI, the
"See full logs" is link to the full results from our CI. This is driven
with some REST api from our jenkins.

Patchwork also sends out mails for these results.

Source is on gitlab: https://gitlab.freedesktop.org/patchwork-fdo
 
> > > I am not married to either approach, but I think between the two of
> > > them, most of the initial legwork has been done to make presubmit on
> > > LKML a reality.
> > 
> > We do have presubmit CI working already with our freedesktop.org
> > patchwork. The missing glue is just tying that into gitlab CI somehow
> > (since we want to unify build testing more and make it easier for
> > contributors to adjust things).
> 
> I checked out a couple of your projects on your patchwork instance: AMD
> X.Org drivers, DRI devel, and Wayland. I saw the tab you added for
> tests, but none of them actually had any test results. Can you point me
> at one that does?

Atm we use the CI stuff only on intel-gfx, with the our gpu CI farm, see
links above.

Cheers, Daniel

> 
> Cheers!
> 
> [5] https://patchwork.freedesktop.org/
> 
> > > > super basic stuff.
> > >
> > > I am really excited to hear back on what you think!
> > >
> > > Cheers!
> > >
> > > [1] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-7bfa40efb132e15c8388755c273837559911425c
> > > [2] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-a6784496eafff442ac98fb068bf1a0f36ee73509
> > > [3] https://developer.ibm.com/open/projects/snowpatch/
> > > [4] https://kunit.googlesource.com/prow-lkml/
> > > _______________________________________________
> > > dri-devel mailing list
> > > dri-devel@lists.freedesktop.org
> > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
Brendan Higgins May 22, 2019, 9:38 p.m. UTC | #66
+Bjorn Helgaas

On Wed, May 15, 2019 at 12:41 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Tue, May 14, 2019 at 11:36:18AM -0700, Brendan Higgins wrote:
> > On Tue, May 14, 2019 at 02:05:05PM +0200, Daniel Vetter wrote:
> > > On Tue, May 14, 2019 at 8:04 AM Brendan Higgins
> > > <brendanhiggins@google.com> wrote:
> > > >
> > > > On Mon, May 13, 2019 at 04:44:51PM +0200, Daniel Vetter wrote:
> > > > > On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> > > > > > On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > > > > > > However, the reply is incorrect.  Kselftest in-kernel tests (which
> > > > > > > is the context here) can be configured as built in instead of as
> > > > > > > a module, and built in a UML kernel.  The UML kernel can boot,
> > > > > > > running the in-kernel tests before UML attempts to invoke the
> > > > > > > init process.
> > > > > >
> > > > > > Um, Citation needed?
> > > > > >
> > > > > > I don't see any evidence for this in the kselftest documentation, nor
> > > > > > do I see any evidence of this in the kselftest Makefiles.
> > > > > >
> > > > > > There exists test modules in the kernel that run before the init
> > > > > > scripts run --- but that's not strictly speaking part of kselftests,
> > > > > > and do not have any kind of infrastructure.  As noted, the
> > > > > > kselftests_harness header file fundamentally assumes that you are
> > > > > > running test code in userspace.
> > > > >
> > > > > Yeah I really like the "no userspace required at all" design of kunit,
> > > > > while still collecting results in a well-defined way (unless the current
> > > > > self-test that just run when you load the module, with maybe some
> > > > > kselftest ad-hoc wrapper around to collect the results).
> > > > >
> > > > > What I want to do long-term is to run these kernel unit tests as part of
> > > > > the build-testing, most likely in gitlab (sooner or later, for drm.git
> > > >
> > > > Totally! This is part of the reason I have been insisting on a minimum
> > > > of UML compatibility for all unit tests. If you can suffiently constrain
> > > > the environment that is required for tests to run in, it makes it much
> > > > easier not only for a human to run your tests, but it also makes it a
> > > > lot easier for an automated service to be able to run your tests.
> > > >
> > > > I actually have a prototype presubmit already working on my
> > > > "stable/non-upstream" branch. You can checkout what presubmit results
> > > > look like here[1][2].
> > >
> > > ug gerrit :-)
> >
> > Yeah, yeah, I know, but it is a lot easier for me to get a project set
> > up here using Gerrit, when we already use that for a lot of other
> > projects.
> >
> > Also, Gerrit has gotten a lot better over the last two years or so. Two
> > years ago, I wouldn't touch it with a ten foot pole. It's not so bad
> > anymore, at least if you are used to using a web UI to review code.
>
> I was somewhat joking, I'm just not used to gerrit ... And seems to indeed
> be a lot more polished than last time I looked at it seriously.

I mean, it is still not perfect, but I think it has finally gotten to
the point where I prefer it over reviewing by email for high context
patches where you don't expect a lot of deep discussion.

Still not great for patches where you want to have a lot of discussion.

> > > > > only ofc). So that people get their pull requests (and patch series, we
> > > > > have some ideas to tie this into patchwork) automatically tested for this
> > > >
> > > > Might that be Snowpatch[3]? I talked to Russell, the creator of Snowpatch,
> > > > and he seemed pretty open to collaboration.
> > > >
> > > > Before I heard about Snowpatch, I had an intern write a translation
> > > > layer that made Prow (the presubmit service that I used in the prototype
> > > > above) work with LKML[4].
> > >
> > > There's about 3-4 forks/clones of patchwork. snowpatch is one, we have
> > > a different one on freedesktop.org. It's a bit a mess :-/

I think Snowpatch is an ozlabs project; at least the maintainer works at IBM.

Patchwork originally was a ozlabs project, right?

Has any discussion taken place trying to consolidate some of the forks?

Presubmit clearly seems like a feature that a number of people want.

> > Oh, I didn't realize that. I found your patchwork instance here[5], but
> > do you have a place where I can see the changes you have added to
> > support presubmit?
>
> Ok here's a few links. Aside from the usual patch view we've also added a
> series view:
>
> https://patchwork.freedesktop.org/project/intel-gfx/series/?ordering=-last_updated
>
> This ties the patches + cover letter together, and it even (tries to at
> least) track revisions. Here's an example which is currently at revision
> 9:
>
> https://patchwork.freedesktop.org/series/57232/

Oooh, nice! That looks awesome! Looks like you have a number of presubmits too.

> Below the patch list for each revision we also have the test result list.
> If you click on the grey bar it'll expand with the summary from CI, the
> "See full logs" is link to the full results from our CI. This is driven
> with some REST api from our jenkins.
>
> Patchwork also sends out mails for these results.

Nice! There are obviously a lot of other bots on various kernel
mailing lists. Do you think people would object to sending presubmit
results to the mailing lists by default?

> Source is on gitlab: https://gitlab.freedesktop.org/patchwork-fdo

Err, looks like you forked from the ozlab's repo a good while ago.

Still, this all looks great!

> > > > I am not married to either approach, but I think between the two of
> > > > them, most of the initial legwork has been done to make presubmit on
> > > > LKML a reality.
> > >
> > > We do have presubmit CI working already with our freedesktop.org
> > > patchwork. The missing glue is just tying that into gitlab CI somehow
> > > (since we want to unify build testing more and make it easier for
> > > contributors to adjust things).
> >
> > I checked out a couple of your projects on your patchwork instance: AMD
> > X.Org drivers, DRI devel, and Wayland. I saw the tab you added for
> > tests, but none of them actually had any test results. Can you point me
> > at one that does?
>
> Atm we use the CI stuff only on intel-gfx, with the our gpu CI farm, see
> links above.
>
> Cheers, Daniel
>
> >
> > Cheers!
> >
> > [5] https://patchwork.freedesktop.org/
> >
> > > > > super basic stuff.
> > > >
> > > > I am really excited to hear back on what you think!
> > > >
> > > > Cheers!
> > > >
> > > > [1] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-7bfa40efb132e15c8388755c273837559911425c
> > > > [2] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-a6784496eafff442ac98fb068bf1a0f36ee73509
> > > > [3] https://developer.ibm.com/open/projects/snowpatch/
> > > > [4] https://kunit.googlesource.com/prow-lkml/
> > > > _______________________________________________
> > > > dri-devel mailing list
> > > > dri-devel@lists.freedesktop.org
> > > > https://lists.freedesktop.org/mailman/listinfo/dri-devel

Cheers!
Daniel Vetter May 23, 2019, 8:40 a.m. UTC | #67
On Wed, May 22, 2019 at 02:38:48PM -0700, Brendan Higgins wrote:
> +Bjorn Helgaas
> 
> On Wed, May 15, 2019 at 12:41 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> >
> > On Tue, May 14, 2019 at 11:36:18AM -0700, Brendan Higgins wrote:
> > > On Tue, May 14, 2019 at 02:05:05PM +0200, Daniel Vetter wrote:
> > > > On Tue, May 14, 2019 at 8:04 AM Brendan Higgins
> > > > <brendanhiggins@google.com> wrote:
> > > > >
> > > > > On Mon, May 13, 2019 at 04:44:51PM +0200, Daniel Vetter wrote:
> > > > > > On Sat, May 11, 2019 at 01:33:44PM -0400, Theodore Ts'o wrote:
> > > > > > > On Fri, May 10, 2019 at 02:12:40PM -0700, Frank Rowand wrote:
> > > > > > > > However, the reply is incorrect.  Kselftest in-kernel tests (which
> > > > > > > > is the context here) can be configured as built in instead of as
> > > > > > > > a module, and built in a UML kernel.  The UML kernel can boot,
> > > > > > > > running the in-kernel tests before UML attempts to invoke the
> > > > > > > > init process.
> > > > > > >
> > > > > > > Um, Citation needed?
> > > > > > >
> > > > > > > I don't see any evidence for this in the kselftest documentation, nor
> > > > > > > do I see any evidence of this in the kselftest Makefiles.
> > > > > > >
> > > > > > > There exists test modules in the kernel that run before the init
> > > > > > > scripts run --- but that's not strictly speaking part of kselftests,
> > > > > > > and do not have any kind of infrastructure.  As noted, the
> > > > > > > kselftests_harness header file fundamentally assumes that you are
> > > > > > > running test code in userspace.
> > > > > >
> > > > > > Yeah I really like the "no userspace required at all" design of kunit,
> > > > > > while still collecting results in a well-defined way (unless the current
> > > > > > self-test that just run when you load the module, with maybe some
> > > > > > kselftest ad-hoc wrapper around to collect the results).
> > > > > >
> > > > > > What I want to do long-term is to run these kernel unit tests as part of
> > > > > > the build-testing, most likely in gitlab (sooner or later, for drm.git
> > > > >
> > > > > Totally! This is part of the reason I have been insisting on a minimum
> > > > > of UML compatibility for all unit tests. If you can suffiently constrain
> > > > > the environment that is required for tests to run in, it makes it much
> > > > > easier not only for a human to run your tests, but it also makes it a
> > > > > lot easier for an automated service to be able to run your tests.
> > > > >
> > > > > I actually have a prototype presubmit already working on my
> > > > > "stable/non-upstream" branch. You can checkout what presubmit results
> > > > > look like here[1][2].
> > > >
> > > > ug gerrit :-)
> > >
> > > Yeah, yeah, I know, but it is a lot easier for me to get a project set
> > > up here using Gerrit, when we already use that for a lot of other
> > > projects.
> > >
> > > Also, Gerrit has gotten a lot better over the last two years or so. Two
> > > years ago, I wouldn't touch it with a ten foot pole. It's not so bad
> > > anymore, at least if you are used to using a web UI to review code.
> >
> > I was somewhat joking, I'm just not used to gerrit ... And seems to indeed
> > be a lot more polished than last time I looked at it seriously.
> 
> I mean, it is still not perfect, but I think it has finally gotten to
> the point where I prefer it over reviewing by email for high context
> patches where you don't expect a lot of deep discussion.
> 
> Still not great for patches where you want to have a lot of discussion.
> 
> > > > > > only ofc). So that people get their pull requests (and patch series, we
> > > > > > have some ideas to tie this into patchwork) automatically tested for this
> > > > >
> > > > > Might that be Snowpatch[3]? I talked to Russell, the creator of Snowpatch,
> > > > > and he seemed pretty open to collaboration.
> > > > >
> > > > > Before I heard about Snowpatch, I had an intern write a translation
> > > > > layer that made Prow (the presubmit service that I used in the prototype
> > > > > above) work with LKML[4].
> > > >
> > > > There's about 3-4 forks/clones of patchwork. snowpatch is one, we have
> > > > a different one on freedesktop.org. It's a bit a mess :-/
> 
> I think Snowpatch is an ozlabs project; at least the maintainer works at IBM.
> 
> Patchwork originally was a ozlabs project, right?

So there's two patchworks (snowpatch makes the 3rd): the one on
freedesktop is another fork.

> Has any discussion taken place trying to consolidate some of the forks?

Yup, but didn't lead anywhere unfortunately :-/ At least between patchwork
and patchwork-fdo, I think snowpatch happened in parallel and once it was
done it's kinda too late. The trouble is that they all have slightly
different REST api and functionality, so for CI integration you can't just
switch one for the other.

> Presubmit clearly seems like a feature that a number of people want.
> 
> > > Oh, I didn't realize that. I found your patchwork instance here[5], but
> > > do you have a place where I can see the changes you have added to
> > > support presubmit?
> >
> > Ok here's a few links. Aside from the usual patch view we've also added a
> > series view:
> >
> > https://patchwork.freedesktop.org/project/intel-gfx/series/?ordering=-last_updated
> >
> > This ties the patches + cover letter together, and it even (tries to at
> > least) track revisions. Here's an example which is currently at revision
> > 9:
> >
> > https://patchwork.freedesktop.org/series/57232/
> 
> Oooh, nice! That looks awesome! Looks like you have a number of presubmits too.

We have a pretty big farm of machines that mostly just crunch through
premerge patch series. postmerge is mostly just for statistics (so we can
find the sporadic failures and better characterize them).

> > Below the patch list for each revision we also have the test result list.
> > If you click on the grey bar it'll expand with the summary from CI, the
> > "See full logs" is link to the full results from our CI. This is driven
> > with some REST api from our jenkins.
> >
> > Patchwork also sends out mails for these results.
> 
> Nice! There are obviously a lot of other bots on various kernel
> mailing lists. Do you think people would object to sending presubmit
> results to the mailing lists by default?
> 
> > Source is on gitlab: https://gitlab.freedesktop.org/patchwork-fdo
> 
> Err, looks like you forked from the ozlab's repo a good while ago.

Yeah see above, it's a bit a complicated story. I think there's a total of
3 patchworks now :-/

> Still, this all looks great!

Cheers, Daniel

> 
> > > > > I am not married to either approach, but I think between the two of
> > > > > them, most of the initial legwork has been done to make presubmit on
> > > > > LKML a reality.
> > > >
> > > > We do have presubmit CI working already with our freedesktop.org
> > > > patchwork. The missing glue is just tying that into gitlab CI somehow
> > > > (since we want to unify build testing more and make it easier for
> > > > contributors to adjust things).
> > >
> > > I checked out a couple of your projects on your patchwork instance: AMD
> > > X.Org drivers, DRI devel, and Wayland. I saw the tab you added for
> > > tests, but none of them actually had any test results. Can you point me
> > > at one that does?
> >
> > Atm we use the CI stuff only on intel-gfx, with the our gpu CI farm, see
> > links above.
> >
> > Cheers, Daniel
> >
> > >
> > > Cheers!
> > >
> > > [5] https://patchwork.freedesktop.org/
> > >
> > > > > > super basic stuff.
> > > > >
> > > > > I am really excited to hear back on what you think!
> > > > >
> > > > > Cheers!
> > > > >
> > > > > [1] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-7bfa40efb132e15c8388755c273837559911425c
> > > > > [2] https://kunit-review.googlesource.com/c/linux/+/1509/10#message-a6784496eafff442ac98fb068bf1a0f36ee73509
> > > > > [3] https://developer.ibm.com/open/projects/snowpatch/
> > > > > [4] https://kunit.googlesource.com/prow-lkml/
> > > > > _______________________________________________
> > > > > dri-devel mailing list
> > > > > dri-devel@lists.freedesktop.org
> > > > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 
> Cheers!