Message ID: 20181016235120.138227-1-brendanhiggins@google.com
Series: kunit: Introducing KUnit, the Linux kernel unit testing framework
On Tue, Oct 16, 2018 at 04:50:49PM -0700, Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
>
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: from invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
>
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.
>
> ## What's so special about unit testing?
>
> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitude faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this
> makes unit tests deterministic: a failing unit test always indicates a
> problem. Finally, because unit tests necessarily have finer granularity,
> they are able to test all code paths easily, solving the classic problem
> of difficulty in exercising error handling code.
>
> ## Is KUnit trying to replace other testing frameworks for the kernel?
>
> No. Most existing tests for the Linux kernel are end-to-end tests, which
> have their place. A well-tested system has lots of unit tests, a
> reasonable number of integration tests, and some end-to-end tests. KUnit
> is just trying to address the unit test space, which is currently not
> being addressed.
>
> ## More information on KUnit
>
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/

We've started our own unit test framework (very early, and without any real
infrastructure yet) under drivers/gpu/drm/selftests. The tests are all meant
to exercise functions and small pieces of functionality of our libraries in
isolation, without any need for a real (or virtual) GPU driver. It integrates
with both kselftest and with our graphics test suite, but directly running
tests using UML as part of the build process sounds much, much better. Having
proper and standardized infrastructure for kernel unit tests sounds terrific.

In other words: I want. Please keep dri-devel@lists.freedesktop.org in the
loop on future versions.

Cheers, Daniel
> -----Original Message-----
> From: Brendan Higgins
>
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.

I'm interested in this, and think the kernel might benefit from this, but I
have lots of questions.

> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel.

This is stated here and in a few places in the documentation. Just to
clarify: KUnit works by compiling the unit under test, along with the test
code itself, and then running it on the machine where the compilation took
place? Is this right? How does cross-compiling enter into the equation? If
not what I described, then what exactly is happening? Sorry - I haven't had
time to look through the patches in detail.

Another issue is, what requirements does this place on the tested code? Is
extra instrumentation required? I didn't see any, but I didn't look
exhaustively at the code.

Are all unit tests stored separately from the unit-under-test, or are they
expected to be in the same directory? Who is expected to maintain the unit
tests? How often are they expected to change? (Would it be every time the
unit-under-test changed?)

Does the test code require the same level of expertise to write and maintain
as the unit-under-test code? That is, could this be a new opportunity for
additional developers (especially relative newcomers) to add value to the
kernel by writing and maintaining test code, or does this add to the already
large burden of code maintenance for our existing maintainers?

Thanks,
 -- Tim
On Tue, Oct 16, 2018 at 6:53 PM Brendan Higgins
<brendanhiggins@google.com> wrote:
>
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
>
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: from invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
>
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.

I very much like this. The DT code has unit tests too, with our own simple
infrastructure. They too can run under UML (and every other arch).

Rob
On Wed, Oct 17, 2018 at 10:49 AM <Tim.Bird@sony.com> wrote:
>
> > -----Original Message-----
> > From: Brendan Higgins
> >
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
>
> I'm interested in this, and think the kernel might benefit from this,
> but I have lots of questions.

Awesome!

> > Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> > it does not require installing the kernel on a test machine or in a VM
> > and does not require tests to be written in userspace running on a host
> > kernel.
>
> This is stated here and in a few places in the documentation. Just to
> clarify, KUnit works by compiling the unit under test, along with the
> test code itself, and then runs it on the machine where the compilation
> took place? Is this right? How does cross-compiling enter into the
> equation? If not what I described, then what exactly is happening?

Yep, that's exactly right! The test and the code under test are linked
together in the same binary and are compiled under Kbuild. Right now I am
linking everything into a UML kernel, but I would ultimately like to make
tests compile into completely independent test binaries. So each test file
would get compiled into its own test binary and would link against only the
code needed to run the test, but we are still a ways off from that.

For now, tests compile as part of a UML kernel, and a test script boots the
UML kernel; tests run as part of the boot process, and the script extracts
test results and reports them.

I intentionally made it so the KUnit test libraries could be relatively
easily ported to other architectures, but in the long term, tests that
depend on being built into a real kernel that boots on real hardware would
be a lot more difficult to maintain, and we would never be able to provide
the kind of resources and infrastructure that we could for tests that run
as normal user space binaries.

Does that answer your question?
> Sorry - I haven't had time to look through the patches in detail.
>
> Another issue is, what requirements does this place on the tested
> code? Is extra instrumentation required? I didn't see any, but I
> didn't look exhaustively at the code.

Nope, no special instrumentation. As long as the code under test can be
compiled under COMPILE_TEST for the host architecture, you should be able
to use KUnit.

> Are all unit tests stored separately from the unit-under-test, or are
> they expected to be in the same directory? Who is expected to
> maintain the unit tests? How often are they expected to change?
> (Would it be every time the unit-under-test changed?)

Tests are in the same directory as the code under test. For example, if I
have a driver drivers/i2c/busses/i2c-aspeed.c, I would write a test
drivers/i2c/busses/i2c-aspeed-test.c (that's my opinion anyway).

Unit tests should be the responsibility of the person who is responsible
for the code. So one way to do this would be that unit tests are the
responsibility of the maintainer, who would in turn require that new tests
be written for any new code added, and that all tests pass for every patch
sent for review.

A well-written unit test tests public interfaces (by public I just mean
functions exported outside of a .c file, so non-static functions and
functions which are shared as a member of a struct), so a unit test should
change at a slower rate than the code under test; but you would likely have
to change the test any time the public interface changes (intended behavior
changes, function signature changes, new public feature added, etc). More
succinctly: if the contract that your code provides changes, your test
should probably change; if the contract doesn't change, your test probably
shouldn't change. Does that make sense?

> Does the test code require the same level of expertise to write
> and maintain as the unit-under-test code? That is, could this be
> a new opportunity for additional developers (especially relative
> newcomers) to add value to the kernel by writing and maintaining
> test code, or does this add to the already large burden of code
> maintenance for our existing maintainers?

A couple of things. In order to write a unit test, the person who writes
the test must understand what the code they are testing is supposed to do.
To some extent that will probably require someone with some expertise to
ensure that the test makes sense, and indeed a change that breaks a test
should be accompanied by an update to the test. On the other hand, I think
understanding what pre-existing code does and is supposed to do is much
easier than writing new code from scratch, and probably doesn't require
too much expertise. I actually ran a bit of an experiment internally on
this: I had some people with no prior knowledge of the kernel write some
tests for existing kernel code, and they were able to do it with only
minimal guidance. I was so happy with the result that I was already
thinking it might have some potential for onboarding newcomers.

Now, how much burden does this add to maintainers? As someone who pretty
regularly reviews code that comes in with unit tests and code that comes
in without, I find it much easier to review code that comes in with unit
tests. I would actually say that, from the standpoint of being an owner of
a code base, unit tests reduce the amount of work I have to do overall.
Code with unit tests is usually cleaner; the tests tell me exactly what
the code is supposed to do; and I can run the tests (or ideally have an
automated service run them) to confirm that the code actually does what
the tests say it should. Even when it comes to writing code, I find that
writing code with unit tests ends up saving me time overall.
On 10/16/18 4:50 PM, Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.

Hi,

Just a general comment:

Documentation/process/submitting-patches.rst says:
<<Describe your changes in imperative mood, e.g. "make xyzzy do frotz"
instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy to do
frotz", as if you are giving orders to the codebase to change its
behaviour.>>

That also means saying things like:
  ... test: add
instead of:
  ... test: added
and "enable" instead of "enabled", "improve" instead of "improved", and
"implement" instead of "implemented".

thanks.
On Wed, Oct 17, 2018 at 4:12 PM Randy Dunlap <rdunlap@infradead.org> wrote:
>
> On 10/16/18 4:50 PM, Brendan Higgins wrote:
> > This patch set proposes KUnit, a lightweight unit testing and mocking
> > framework for the Linux kernel.
>
> Just a general comment:
>
> Documentation/process/submitting-patches.rst says:
> <<Describe your changes in imperative mood, e.g. "make xyzzy do frotz"
> instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy
> to do frotz", as if you are giving orders to the codebase to change
> its behaviour.>>

Thanks! I will fix this in the next revision.
On Tue, Oct 16, 2018 at 4:54 PM Brendan Higgins
<brendanhiggins@google.com> wrote:
>
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
>
> Unlike Autotest and kselftest, KUnit is a true unit testing framework;
> it does not require installing the kernel on a test machine or in a VM
> and does not require tests to be written in userspace running on a host
> kernel. Additionally, KUnit is fast: from invocation to completion KUnit
> can run several dozen tests in under a second. Currently, the entire
> KUnit test suite for KUnit runs in under a second from the initial
> invocation (build time excluded).
>
> KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> Googletest/Googlemock for C++. KUnit provides facilities for defining
> unit test cases, grouping related test cases into test suites, providing
> common infrastructure for running tests, mocking, spying, and much more.
>
> ## What's so special about unit testing?
>
> A unit test is supposed to test a single unit of code in isolation,
> hence the name. There should be no dependencies outside the control of
> the test; this means no external dependencies, which makes tests orders
> of magnitude faster. Likewise, since there are no external dependencies,
> there are no hoops to jump through to run the tests. Additionally, this
> makes unit tests deterministic: a failing unit test always indicates a
> problem. Finally, because unit tests necessarily have finer granularity,
> they are able to test all code paths easily, solving the classic problem
> of difficulty in exercising error handling code.
>
> ## Is KUnit trying to replace other testing frameworks for the kernel?
>
> No. Most existing tests for the Linux kernel are end-to-end tests, which
> have their place. A well-tested system has lots of unit tests, a
> reasonable number of integration tests, and some end-to-end tests. KUnit
> is just trying to address the unit test space, which is currently not
> being addressed.
>
> ## More information on KUnit
>
> There is a bunch of documentation near the end of this patch set that
> describes how to use KUnit and best practices for writing unit tests.
> For convenience I am hosting the compiled docs here:
> https://google.github.io/kunit-docs/third_party/kernel/docs/

Nice! I've been using mocking techniques in kernel code for the libnvdimm
test infrastructure in tools/testing/nvdimm/. It's part unit test
infrastructure, part emulation, and I've always had the feeling it's all a
bit too ad hoc. I'm going to take a look and see what can be converted to
KUnit. Please include linux-nvdimm@lists.01.org on future postings.

I'll shamelessly plug my LWN article about unit testing,
https://lwn.net/Articles/654071/, because it's always good to find fellow
co-travelers to compare notes and advocate for more test-oriented kernel
development.
On Wed, Oct 17, 2018 at 8:55 PM Dan Williams <dan.j.williams@intel.com> wrote:
<snip>
> > ## More information on KUnit
> >
> > There is a bunch of documentation near the end of this patch set that
> > describes how to use KUnit and best practices for writing unit tests.
> > For convenience I am hosting the compiled docs here:
> > https://google.github.io/kunit-docs/third_party/kernel/docs/
>
> Nice! I've been using mocking techniques in kernel code for the
> libnvdimm test infrastructure in tools/testing/nvdimm/. It's part unit
> test infrastructure, part emulation, and I've always had the feeling
> it's all a bit too ad hoc. I'm going to take a look and see what can be
> converted to KUnit. Please include linux-nvdimm@lists.01.org on future
> postings.

Great to hear! Interesting, is this kind of like the nfsim stuff?

> I'll shamelessly plug my LWN article about unit testing,
> https://lwn.net/Articles/654071/, because it's always good to find
> fellow co-travelers to compare notes and advocate for more
> test-oriented kernel development.

Most definitely! I will take a look and be in touch. Cheers!