
[AUTOTEST,1/2] Add latest LTP test in autotest

Message ID a50cf5ab0907052240xf22b2b2rf7b90cc819d0ee87@mail.gmail.com (mailing list archive)
State New, archived

Commit Message

sudhir kumar July 6, 2009, 5:40 a.m. UTC
This patch updates the LTP wrapper in autotest to run the latest LTP.
At present autotest ships an LTP release that is more than a year old, and
a lot of test cases have been added to LTP in that period. This patch
therefore updates the wrapper to run the June 2009 release of LTP, available at
http://prdownloads.sourceforge.net/ltp/ltp-full-20090630.tgz

Issues: LTP has a history of some of its test cases being broken. That in
itself is nothing to worry about with respect to autotest. One known issue
is that the memory controller tests are broken on recent kernels (kernels
with cgroups and the memory resource controller enabled). The workaround I
use is to disable or delete those tests from the LTP source and re-tar it
under the same name, though people might use different workarounds.

I have added an option that generates a fancy HTML results file. The run
itself is left as a default run, as expected.

For autotest users: please untar the results file I am sending, run
cd results/default; firefox results.html, and click ltp_results.html,
which is a symlink to the ltp_results.html generated by LTP.
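For reference, a minimal control-file sketch for driving this wrapper,
assuming the standard autotest client API and the run_once() parameters
visible in the diff below (the scenario file name in the commented line
is illustrative, not a recommendation):

    # Client control file (sketch): default LTP run; the HTML report
    # now lands in the results directory as described above.
    job.run_test('ltp')
    # Or pass extra runltp arguments (scenario file name illustrative):
    # job.run_test('ltp', script='runltp', args='-f syscalls')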

Please provide your comments, concerns and issues.

Signed-off-by: Sudhir Kumar <skumar@linux.vnet.ibm.com>



Comments

Lucas Meneghel Rodrigues July 6, 2009, 6:58 a.m. UTC | #1
On Mon, 2009-07-06 at 11:10 +0530, sudhir kumar wrote:
> This patch updates the ltp wrapper in autotest to execute the latest ltp.
> At present autotest contains ltp which is more than 1 year old. There have
> been added lots of testcases in ltp within this period. So this patch updates
> the wrapper to run the June2009 release of ltp which is available at
> http://prdownloads.sourceforge.net/ltp/ltp-full-20090630.tgz

Indeed, it would be a good time to update the LTP version being used in
autotest.

> Issues: LTP has a history of some of the testcases getting broken. Anyways
> that has nothing to worry about with respect to autotest. One of the known issue
> is broken memory controller issue with latest kernels(cgroups and memory
> resource controller enabled kernels). The workaround for them I use is to
> disable or delete those tests from ltp source and tar it again with the same
> name. Though people might use different workarounds for it.

A good start would be making the LTP build more failure-resilient (i.e. if
one particular test fails to compile, don't fail the whole build). I will
raise this question on the LTP mailing list.

> I have added an option which generates a fancy html results file. Also the
> run is left to be a default run as expected.
> 
> For autotest users, please untar the results file I am sending, run
> cd results/default; firefox results.html, click ltp_results.html
> This is a symlink to the ltp_results.html which is generated by ltp.
> 
> Please provide your comments, concerns and issues.
> 
> Signed-off-by: Sudhir Kumar <skumar@linux.vnet.ibm.com>
> 
> Index: autotest/client/tests/ltp/ltp.py
> ===================================================================
> --- autotest.orig/client/tests/ltp/ltp.py
> +++ autotest/client/tests/ltp/ltp.py
> @@ -23,8 +23,8 @@ class ltp(test.test):
>          self.job.require_gcc()
> 
> 
> -    # http://prdownloads.sourceforge.net/ltp/ltp-full-20080229.tgz
> -    def setup(self, tarball = 'ltp-full-20080229.tar.bz2'):
> +    # http://prdownloads.sourceforge.net/ltp/ltp-full-20090630.tgz
> +    def setup(self, tarball = 'ltp-full-20090630.tgz'):
>          tarball = utils.unmap_url(self.bindir, tarball, self.tmpdir)
>          utils.extract_tarball_to_dir(tarball, self.srcdir)
>          os.chdir(self.srcdir)
> @@ -52,8 +52,9 @@ class ltp(test.test):
>          # In case the user wants to run another test script
>          if script == 'runltp':
>              logfile = os.path.join(self.resultsdir, 'ltp.log')
> +            htmlfile = os.path.join(self.resultsdir, 'ltp_results.html')
>              failcmdfile = os.path.join(self.debugdir, 'failcmdfile')
> -            args2 = '-q -l %s -C %s -d %s' % (logfile, failcmdfile, self.tmpdir)
> +            args2 = '-l %s -g %s -C %s -d %s' % (logfile, htmlfile, failcmdfile, self.tmpdir)
>              args = args + ' ' + args2
> 
>          cmd = os.path.join(self.srcdir, script) + ' ' + args

Patch looks good to me.


Martin Bligh July 6, 2009, 6:37 p.m. UTC | #2
>> Issues: LTP has a history of some of the testcases getting broken.

Right, that's always the concern with doing this.

>> Anyways
>> that has nothing to worry about with respect to autotest. One of the known issue
>> is broken memory controller issue with latest kernels(cgroups and memory
>> resource controller enabled kernels). The workaround for them I use is to
>> disable or delete those tests from ltp source and tar it again with the same
>> name. Though people might use different workarounds for it.

OK, can we encapsulate this in the wrapper, though, rather than making
people do it manually? In the existing ltp.patch or something?
sudhir kumar July 7, 2009, 7:24 a.m. UTC | #3
On Tue, Jul 7, 2009 at 12:07 AM, Martin Bligh<mbligh@google.com> wrote:
>>> Issues: LTP has a history of some of the testcases getting broken.
>
> Right, that's always the concern with doing this.
>
>>> Anyways
>>> that has nothing to worry about with respect to autotest. One of the known issue
>>> is broken memory controller issue with latest kernels(cgroups and memory
>>> resource controller enabled kernels). The workaround for them I use is to
>>> disable or delete those tests from ltp source and tar it again with the same
>>> name. Though people might use different workarounds for it.
>
> OK, Can we encapsulate this into the wrapper though, rather than making
> people do it manually? in the existing ltp.patch or something?
>
Definitely we can do that, but it requires knowing all the corner
cases of failure. So maybe we can keep enhancing the patch as per
the failure reports on different OSes.

One more thing I wanted to start a discussion about on the LTP mailing
list is making a test case aware of whether it is running on a physical
host or in a guest (say a KVM guest). Test cases like power management
and group scheduling fairness do not make much sense to run in a guest
(they will fail or break). So it is better for a test to recognise the
environment and not execute if it is under virtualization and is expected
to fail or break in that environment. Does that make sense to you as
well?
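For what it's worth, a minimal sketch of such a check, assuming the
'hypervisor' flag in /proc/cpuinfo (set by KVM and most hypervisors for
x86 guests; its absence does not prove bare metal, and other
architectures would need a different probe):

    import re

    def running_under_hypervisor():
        # Best-effort guess: the kernel exposes the 'hypervisor' CPU
        # flag when the CPUID hypervisor bit is set by the host.
        try:
            cpuinfo = open('/proc/cpuinfo').read()
        except IOError:
            return False
        return bool(re.search(r'^flags\s*:.*\bhypervisor\b', cpuinfo, re.M))

A wrapper could consult this helper and skip guest-unfriendly groups
such as the power management tests when it returns True.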
Lucas Meneghel Rodrigues July 7, 2009, 3:31 p.m. UTC | #4
On Tue, Jul 7, 2009 at 4:24 AM, sudhir kumar<smalikphy@gmail.com> wrote:
>> OK, Can we encapsulate this into the wrapper though, rather than making
>> people do it manually? in the existing ltp.patch or something?
>>
> definitely we can do that, but that needs to know about all the corner
> cases of failure. So may be we can continue enhancing the patch as per
> the failure reports on different OSes.

For the most immediate needs, we could try building LTP with make -k.
Plain re-packaging of LTP kind of goes against our own rules: the
preferred way to do test suite modifications is patching before the
execution. So let's strive to use the approach 'upstream package
unmodified, patch if needed'. That's how distro packages do it, and it
makes sense for us too.
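A rough sketch of what a more tolerant build could look like in the
wrapper's setup(), reusing the os/utils imports already present in
ltp.py; whether LTP's top-level Makefile behaves sensibly under -k for
both targets would need verifying against the 20090630 tree:

    def setup(self, tarball='ltp-full-20090630.tgz'):
        tarball = utils.unmap_url(self.bindir, tarball, self.tmpdir)
        utils.extract_tarball_to_dir(tarball, self.srcdir)
        os.chdir(self.srcdir)
        # -k: keep going past test cases that fail to compile rather
        # than failing the whole LTP build.
        utils.system('make -k')
        utils.system('make -k install')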

> 1 more thing I wanted to start a discussion on LTP mailing list is to
> make aware the testcase if it is running on a physical host or on a
> guest(say KVM guest). Testcases like power management, group
> scheduling fairness etc do not make much sense to run on a guest(as
> they will fail or break). So It is better for the test to recognise
> the environment and not execute if it is under virtualization and it
> is supposed to fail or break under that environment. Does that make
> sense to you also ?

We need to make an assessment of what we would expect to see failing
under a guest. LTP has a fairly large codebase, so it will be a fair
amount of work.

Lucas
Martin Bligh July 7, 2009, 5:45 p.m. UTC | #5
On Tue, Jul 7, 2009 at 12:24 AM, sudhir kumar<smalikphy@gmail.com> wrote:
> On Tue, Jul 7, 2009 at 12:07 AM, Martin Bligh<mbligh@google.com> wrote:
>>>> Issues: LTP has a history of some of the testcases getting broken.
>>
>> Right, that's always the concern with doing this.
>>
>>>> Anyways
>>>> that has nothing to worry about with respect to autotest. One of the known issue
>>>> is broken memory controller issue with latest kernels(cgroups and memory
>>>> resource controller enabled kernels). The workaround for them I use is to
>>>> disable or delete those tests from ltp source and tar it again with the same
>>>> name. Though people might use different workarounds for it.
>>
>> OK, Can we encapsulate this into the wrapper though, rather than making
>> people do it manually? in the existing ltp.patch or something?
>>
> definitely we can do that, but that needs to know about all the corner
> cases of failure. So may be we can continue enhancing the patch as per
> the failure reports on different OSes.
>
> 1 more thing I wanted to start a discussion on LTP mailing list is to
> make aware the testcase if it is running on a physical host or on a
> guest(say KVM guest). Testcases like power management, group
> scheduling fairness etc do not make much sense to run on a guest(as
> they will fail or break). So It is better for the test to recognise
> the environment and not execute if it is under virtualization and it
> is supposed to fail or break under that environment. Does that make
> sense to you also ?

Yup, we can pass an excluded test list. I really wish they'd fix their
tests, but I've been saying that for 6 years now, and it hasn't happened
yet ;-(
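
One possible shape for that, sketched against the LTP runtest file
format ('<tag> <command>' per line, where a leading '#' disables an
entry); the tag in EXCLUDED is a placeholder, not a confirmed LTP test
name:

    import os

    EXCLUDED = ['memcg_function_test']  # placeholder tags for broken tests

    def exclude_tests(srcdir, excluded=EXCLUDED):
        # Comment out excluded entries in every runtest scenario file
        # so runltp skips them, instead of re-rolling the tarball.
        runtest_dir = os.path.join(srcdir, 'runtest')
        for name in os.listdir(runtest_dir):
            path = os.path.join(runtest_dir, name)
            if not os.path.isfile(path):
                continue
            lines = open(path).readlines()
            out = open(path, 'w')
            for line in lines:
                fields = line.split()
                if fields and fields[0] in excluded:
                    out.write('#' + line)
                else:
                    out.write(line)
            out.close()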
sudhir kumar July 8, 2009, 4:17 a.m. UTC | #6
OK then. My idea is to include the patch in autotest and let people
report failures (in compilation or execution), and we can then patch
autotest to apply the fix, build, and run LTP. I do not think we can
find all the cases until and unless we start executing.

However, I will start the discussion on the LTP list and see the
response from people. At least we can get the new test cases to be
aware of virtualization.

On Tue, Jul 7, 2009 at 11:15 PM, Martin Bligh<mbligh@google.com> wrote:
> On Tue, Jul 7, 2009 at 12:24 AM, sudhir kumar<smalikphy@gmail.com> wrote:
>> On Tue, Jul 7, 2009 at 12:07 AM, Martin Bligh<mbligh@google.com> wrote:
>>>>> Issues: LTP has a history of some of the testcases getting broken.
>>>
>>> Right, that's always the concern with doing this.
>>>
>>>>> Anyways
>>>>> that has nothing to worry about with respect to autotest. One of the known issue
>>>>> is broken memory controller issue with latest kernels(cgroups and memory
>>>>> resource controller enabled kernels). The workaround for them I use is to
>>>>> disable or delete those tests from ltp source and tar it again with the same
>>>>> name. Though people might use different workarounds for it.
>>>
>>> OK, Can we encapsulate this into the wrapper though, rather than making
>>> people do it manually? in the existing ltp.patch or something?
>>>
>> definitely we can do that, but that needs to know about all the corner
>> cases of failure. So may be we can continue enhancing the patch as per
>> the failure reports on different OSes.
>>
>> 1 more thing I wanted to start a discussion on LTP mailing list is to
>> make aware the testcase if it is running on a physical host or on a
>> guest(say KVM guest). Testcases like power management, group
>> scheduling fairness etc do not make much sense to run on a guest(as
>> they will fail or break). So It is better for the test to recognise
>> the environment and not execute if it is under virtualization and it
>> is supposed to fail or break under that environment. Does that make
>> sense to you also ?
>
> Yup, we can pass an excluded test list. I really wish they'd fix their
> tests, but I've been saying that for 6 years now, and it hasn't happened
> yet ;-(
>
sudhir kumar July 8, 2009, 4:22 a.m. UTC | #7
On Tue, Jul 7, 2009 at 9:01 PM, Lucas Meneghel Rodrigues<lmr@redhat.com> wrote:
> On Tue, Jul 7, 2009 at 4:24 AM, sudhir kumar<smalikphy@gmail.com> wrote:
>>> OK, Can we encapsulate this into the wrapper though, rather than making
>>> people do it manually? in the existing ltp.patch or something?
>>>
>> definitely we can do that, but that needs to know about all the corner
>> cases of failure. So may be we can continue enhancing the patch as per
>> the failure reports on different OSes.
>
> For the most immediate needs, we could try  building LTP with make -k.
> Plain re-package of LTP kinda goes against our own rules. The
> preferred way to do testsuite modifications is patching before the
> execution. So let's strive to use the approach 'upstream package
> unmodified, patch if needed'. That's how distro package does it, makes
> sense for us too.
I would request you to merge the patches if they do not appear to need
any major changes at this early stage. Let people see the failures, and
we can quickly patch autotest to fix them. I can volunteer to look into
the LTP issues reported by people or found by me.
>
>> 1 more thing I wanted to start a discussion on LTP mailing list is to
>> make aware the testcase if it is running on a physical host or on a
>> guest(say KVM guest). Testcases like power management, group
>> scheduling fairness etc do not make much sense to run on a guest(as
>> they will fail or break). So It is better for the test to recognise
>> the environment and not execute if it is under virtualization and it
>> is supposed to fail or break under that environment. Does that make
>> sense to you also ?
>
> We need to make an assessment of what we would expect to see failing
> under a guest. LTP has a fairly large codebase, so it will be a fair
> amount of work.
Yeah, Martin points out the same thing. At the very least we can expect
the new cases to be virtualization-aware. For the existing ones we can
take it forward gradually, maybe catching the test developers
individually :)

ATM I would suggest merging the patches in and letting them get tested,
so that we can collect failures/breakages, if any.
>
> Lucas
>
Martin Bligh July 8, 2009, 4:40 a.m. UTC | #8
> ATM I will suggest to merge the patches in and let get tested so that
> we can collect failures/breakages if any.

I am not keen on causing regressions, which we've risked doing every
time we change LTP. I think we at least need to get a run on a non-virtualized
machine with some recent kernel, and exclude the tests that fail every time.
Dor Laor July 8, 2009, 9:25 a.m. UTC | #9
On 07/08/2009 07:40 AM, Martin Bligh wrote:
>> ATM I will suggest to merge the patches in and let get tested so that
>> we can collect failures/breakages if any.
>
> I am not keen on causing regressions, which we've risked doing every
> time we change LTP. I think we at least need to get a run on a non-virtualized
> machine with some recent kernel, and exclude the tests that fail every time.

We can use the reported results (impressive) as a baseline.
When more regressions are introduced, we can chop more tests.
sudhir kumar July 8, 2009, 9:30 a.m. UTC | #10
On Wed, Jul 8, 2009 at 2:55 PM, Dor Laor<dlaor@redhat.com> wrote:
> On 07/08/2009 07:40 AM, Martin Bligh wrote:
>>>
>>> ATM I will suggest to merge the patches in and let get tested so that
>>> we can collect failures/breakages if any.
>>
>> I am not keen on causing regressions, which we've risked doing every
>> time we change LTP. I think we at least need to get a run on a
>> non-virtualized
>> machine with some recent kernel, and exclude the tests that fail every
>> time.
>
> We can use the reported results (impressive) as a base.
> When more regressions are introduced, we can chop more tests

Sure, I will send the revised patch soon. Thanks everyone for your thoughts!!
>
Subrata Modak July 8, 2009, 10:19 a.m. UTC | #11
On Tue, 2009-07-07 at 10:45 -0700, Martin Bligh wrote:
> On Tue, Jul 7, 2009 at 12:24 AM, sudhir kumar<smalikphy@gmail.com> wrote:
> > On Tue, Jul 7, 2009 at 12:07 AM, Martin Bligh<mbligh@google.com> wrote:
> >>>> Issues: LTP has a history of some of the testcases getting broken.
> >>
> >> Right, that's always the concern with doing this.
> >>
> >>>> Anyways
> >>>> that has nothing to worry about with respect to autotest. One of the known issue
> >>>> is broken memory controller issue with latest kernels(cgroups and memory
> >>>> resource controller enabled kernels). The workaround for them I use is to
> >>>> disable or delete those tests from ltp source and tar it again with the same
> >>>> name. Though people might use different workarounds for it.
> >>
> >> OK, Can we encapsulate this into the wrapper though, rather than making
> >> people do it manually? in the existing ltp.patch or something?
> >>
> > definitely we can do that, but that needs to know about all the corner
> > cases of failure. So may be we can continue enhancing the patch as per
> > the failure reports on different OSes.
> >
> > 1 more thing I wanted to start a discussion on LTP mailing list is to
> > make aware the testcase if it is running on a physical host or on a
> > guest(say KVM guest). Testcases like power management, group
> > scheduling fairness etc do not make much sense to run on a guest(as
> > they will fail or break). So It is better for the test to recognise
> > the environment and not execute if it is under virtualization and it
> > is supposed to fail or break under that environment. Does that make
> > sense to you also ?
> 
> Yup, we can pass an excluded test list. I really wish they'd fix their
> tests, but I've been saying that for 6 years now, and it hasn't happened
> yet ;-(

I would slightly disagree with that. Six years is history. But have you
checked LTP recently?

Yes, there were tests in LTP which were broken by design, and many of
them have been fixed over the course of the last few years. This is
probably the first time in LTP's history that people have fixed all
build/broken issues within a month. Recently people stopped complaining
about the so-called broken issues, but I am hearing it again. Could you
please point out to this mailing list the issues that you still face? I
am sure there are now very active members who would help you fix them.

A few more observations. If a test case's design is broken, and that is
the reason it reports BROKEN, then this is a genuine issue and we need
to fix it. But when a test case reports BROKEN, is it justified to put
the whole blame on the test case? Did we verify whether the issue could
also be with:

     1. the libraries,
     2. the kernel (the very thing the test case is designed to exercise)?

What is the point if we just want the test case to PASS? Do we want to
assert that the kernel is always OK, and that we just want the test
cases to report PASS for it, or else the test case is broken?

Fixing build issues:
LTP cannot stop taking in new test cases with each passing day. But how
do we guarantee that the test suite build will succeed on:
     1. every distro,
     2. every arch,
     3. every kernel?

Unlike the Linux kernel, which carries all the stuff needed to build
itself on any arch, a userland project like LTP is completely dependent
on the system headers/libraries to complete its build. When these are
not consistent across architectures and distros, how do we solve the
problem? In most instances we find that 50% of our effort/code goes into
fixing these dependencies rather than into the actual kernel test code.
And yes, to cater to this need, LTP has embraced the autoconf framework.

Fixing real test case BROKEN issues:
Here we run into one of the unfortunate drawbacks of open source
projects: nothing gets funded for eternity. When test cases were checked
in for some kernel feature, the project behind them was presumably
funded. Now, when the ABIs/functionality in the kernel change, the
original author of those test cases is no longer funded to make the
corresponding changes to the affected tests. This can only be solved by
somebody reporting such issues along with a patch, rather than sending
repeated reminders that such-and-such test cases are broken. I do not
have any other idea of how this can be solved.

I am not sure if I have been able to resolve all your doubts/concerns
completely. Mike and others from the LTP mailing list may also be of
some help.

Mike,

What do you say ?

Regards--
Subrata

Subrata Modak July 8, 2009, 10:19 a.m. UTC | #12
On Wed, 2009-07-08 at 09:47 +0530, sudhir kumar wrote: 
> Ok Then. So my idea is to include the patch in autotest and let the
> people report failures(in compilation or execution), and we can patch
> autotest to apply the fix patch and build and run ltp. I do not think
> we can find all cases untill and unless we start execution.
> 
> However I will start the discussion on the ltp list and see the
> response from people. At least we can get the new testcases to be
> aware of virtualization.

Great. Such a discussion would be welcome, provided you also propose a
way to do it, and show how it would not affect the existing result
analysis.

Regards--
Subrata

> 
> On Tue, Jul 7, 2009 at 11:15 PM, Martin Bligh<mbligh@google.com> wrote:
> > On Tue, Jul 7, 2009 at 12:24 AM, sudhir kumar<smalikphy@gmail.com> wrote:
> >> On Tue, Jul 7, 2009 at 12:07 AM, Martin Bligh<mbligh@google.com> wrote:
> >>>>> Issues: LTP has a history of some of the testcases getting broken.
> >>>
> >>> Right, that's always the concern with doing this.
> >>>
> >>>>> Anyways
> >>>>> that has nothing to worry about with respect to autotest. One of the known issue
> >>>>> is broken memory controller issue with latest kernels(cgroups and memory
> >>>>> resource controller enabled kernels). The workaround for them I use is to
> >>>>> disable or delete those tests from ltp source and tar it again with the same
> >>>>> name. Though people might use different workarounds for it.
> >>>
> >>> OK, Can we encapsulate this into the wrapper though, rather than making
> >>> people do it manually? in the existing ltp.patch or something?
> >>>
> >> definitely we can do that, but that needs to know about all the corner
> >> cases of failure. So may be we can continue enhancing the patch as per
> >> the failure reports on different OSes.
> >>
> >> 1 more thing I wanted to start a discussion on LTP mailing list is to
> >> make aware the testcase if it is running on a physical host or on a
> >> guest(say KVM guest). Testcases like power management, group
> >> scheduling fairness etc do not make much sense to run on a guest(as
> >> they will fail or break). So It is better for the test to recognise
> >> the environment and not execute if it is under virtualization and it
> >> is supposed to fail or break under that environment. Does that make
> >> sense to you also ?
> >
> > Yup, we can pass an excluded test list. I really wish they'd fix their
> > tests, but I've been saying that for 6 years now, and it hasn't happened
> > yet ;-(
> >
> 
> 
> 

Martin Bligh July 8, 2009, 3:05 p.m. UTC | #13
>> Yup, we can pass an excluded test list. I really wish they'd fix their
>> tests, but I've been saying that for 6 years now, and it hasn't happened
>> yet ;-(
>
> I would slightly disagree to that. 6 years is history. But, have you
> recently checked with LTP ?

I hate to be completely cynical about this, but that's exactly the same
message I get every year.

Yes, absolutely, the best thing would be for someone to run all the tests,
work through all the problems, categorize them as kernel / library / distro,
and get each of them fixed. However, it's a fair chunk of work that I don't
have time to do.

So all I'm saying is that I know which of the current tests we have issues
with, and I don't want to upgrade LTP without a new set of data, and that
work being done. From previous experience, I would be extremely
surprised if there's not at least one new problem, and I'm not just going
to dump that on users.

Does the LTP project do this itself on a regular basis ... ie are you running
LTP against the latest kernel (or even some known stable kernel) and
seeing which tests are broken? If you can point me to that, I'd have much
more faith about picking this up ...

Up until this point we've not even managed to agree that PASS means
"ran as expected" and FAIL means "something is wrong". LTP has always
had "expected failures", which seems like a completely broken model
to me.

M.
Mike Frysinger July 8, 2009, 11:05 p.m. UTC | #14
On Wednesday 08 July 2009 06:19:27 Subrata Modak wrote:
> On Tue, 2009-07-07 at 10:45 -0700, Martin Bligh wrote:
> > On Tue, Jul 7, 2009 at 12:24 AM, sudhir kumar<smalikphy@gmail.com> wrote:
> > > On Tue, Jul 7, 2009 at 12:07 AM, Martin Bligh<mbligh@google.com> wrote:
> > >>>> Issues: LTP has a history of some of the testcases getting broken.
> > >>
> > >> Right, that's always the concern with doing this.
> > >>
> > >>>> Anyways
> > >>>> that has nothing to worry about with respect to autotest. One of the
> > >>>> known issue is broken memory controller issue with latest
> > >>>> kernels(cgroups and memory resource controller enabled kernels). The
> > >>>> workaround for them I use is to disable or delete those tests from
> > >>>> ltp source and tar it again with the same name. Though people might
> > >>>> use different workarounds for it.
> > >>
> > >> OK, Can we encapsulate this into the wrapper though, rather than
> > >> making people do it manually? in the existing ltp.patch or something?
> > >
> > > definitely we can do that, but that needs to know about all the corner
> > > cases of failure. So may be we can continue enhancing the patch as per
> > > the failure reports on different OSes.
> > >
> > > 1 more thing I wanted to start a discussion on LTP mailing list is to
> > > make aware the testcase if it is running on a physical host or on a
> > > guest(say KVM guest). Testcases like power management, group
> > > scheduling fairness etc do not make much sense to run on a guest(as
> > > they will fail or break). So It is better for the test to recognise
> > > the environment and not execute if it is under virtualization and it
> > > is supposed to fail or break under that environment. Does that make
> > > sense to you also ?
> >
> > Yup, we can pass an excluded test list. I really wish they'd fix their
> > tests, but I've been saying that for 6 years now, and it hasn't happened
> > yet ;-(
>
> I would slightly disagree to that. 6 years is history. But, have you
> recently checked with LTP ?
>
> Yes, there were tests in LTP which were broken by design. And many of
> them were fixed in last course of years. And probably this is the first
> time in LTP's history when people fixed all build/broken issues in a
> month. Recently people stopped complaining about the so-called broken
> issues. But i am hearing it again. Could you please point to this
> mailing list the issues that you still face. I am sure there are now
> very active members who would help you fix them.
>
> Few more observations. If the test cases design is broken, the reason
> for it reporting BROKEN, then this is a genuine issues. We would need to
> fix them. But, when a test case reports BROKEN, is it justified to put
> the whole blame on the test case. Did we verify whether the issue can
> also be with:
>
>      1. Libraries,
>      2. Kernel (the very purpose for which test case is designed),
>
> What will be the point if we just want the test case to PASS ?? So, do
> we want to point that the kernel is always OK, and we just want the test
> cases to report PASS for it, else the test case is broken ?
>
> Fixing build issues:
> LTP cannot stop inheriting new test cases with each passing day. But,
> how do we guarantee that the test suite build will succeed on:
>      1. every Distro,
>      2. every arch,
>      3. every kernel
>
> Unlike the Linux Kernel, which carries all the stuff needed to build
> itself on any arch, an user land project like LTP is completely
> dependant on the system headers/libraries to complete its build. So,
> when these stuff are not consistent across the architecture and Distro
> geography, so, how do we solve the problem. And in most of the instance
> we find that 50% of our effort/code goes in fixing this dependencies,
> rather than into the actual kernel test code. And, yes, to cater to the
> need, LTP has embraced the AUTOCONF framework.
>
> Fixing Real test case BROKEN issues:
> Here plays one of the unfortunate drawbacks of the Open Source Projects.
> Everything does not get funded for eternity. When test cases were
> checked in for some feature in the kernel, the project should have been
> funded. Now, when the ABIs/functionality in the kernel changes, the
> original author of those test cases is no more funded to make the
> corresponding changes in the concerned tests. So, this can be solved
> only when somebody reports such issues with a patch rather than sending
> repeated reminders that such-and-such test cases are broken. I do not
> have any other idea of how this can be solved.
>
> I am not sure, if i am able to resolve all your doubts/concerns
> completely. Mike/others from LTP mailing list will may also be of some
> help.

Not much else to say. It's an open source project, and if you aren't
willing to contribute to fixing it but rather just sit back and complain,
then you can't really be surprised when things don't move.
-mike
sudhir kumar July 13, 2009, 4:45 a.m. UTC | #15
On Tue, Jul 7, 2009 at 12:07 AM, Martin Bligh<mbligh@google.com> wrote:
>>> Issues: LTP has a history of some of the testcases getting broken.
>
> Right, that's always the concern with doing this.
>
>>> Anyways
>>> that has nothing to worry about with respect to autotest. One of the known issue
>>> is broken memory controller issue with latest kernels(cgroups and memory
>>> resource controller enabled kernels). The workaround for them I use is to
>>> disable or delete those tests from ltp source and tar it again with the same
>>> name. Though people might use different workarounds for it.
>
> OK, Can we encapsulate this into the wrapper though, rather than making
> people do it manually? in the existing ltp.patch or something?
>

I have rebased the patches and updated the existing ltp.patch; I will
be sending them soon.
Also, for running LTP under KVM I have generated a patch, kvm_ltp.patch,
whose purpose is the same as ltp.patch's but only for KVM guests. I will
be sending the results of execution on the guest as well as on bare
metal. Thanks everyone for your comments!!
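
Illustrative only: one way the guest-specific patch could be wired into
setup(), reusing the hypervisor check sketched earlier in this thread
and assuming kvm_ltp.patch sits next to ltp.patch in the test's bindir:

    # Inside the wrapper's setup(), after extracting the tarball:
    patch_name = 'ltp.patch'
    if running_under_hypervisor():      # helper sketched earlier
        patch_name = 'kvm_ltp.patch'
    # Apply fixes on top of the unmodified upstream tarball.
    utils.system('patch -p1 < %s' % os.path.join(self.bindir, patch_name))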

Patch

Index: autotest/client/tests/ltp/ltp.py
===================================================================
--- autotest.orig/client/tests/ltp/ltp.py
+++ autotest/client/tests/ltp/ltp.py
@@ -23,8 +23,8 @@  class ltp(test.test):
         self.job.require_gcc()


-    # http://prdownloads.sourceforge.net/ltp/ltp-full-20080229.tgz
-    def setup(self, tarball = 'ltp-full-20080229.tar.bz2'):
+    # http://prdownloads.sourceforge.net/ltp/ltp-full-20090630.tgz
+    def setup(self, tarball = 'ltp-full-20090630.tgz'):
         tarball = utils.unmap_url(self.bindir, tarball, self.tmpdir)
         utils.extract_tarball_to_dir(tarball, self.srcdir)
         os.chdir(self.srcdir)
@@ -52,8 +52,9 @@  class ltp(test.test):
         # In case the user wants to run another test script
         if script == 'runltp':
             logfile = os.path.join(self.resultsdir, 'ltp.log')
+            htmlfile = os.path.join(self.resultsdir, 'ltp_results.html')
             failcmdfile = os.path.join(self.debugdir, 'failcmdfile')
-            args2 = '-q -l %s -C %s -d %s' % (logfile, failcmdfile, self.tmpdir)
+            args2 = '-l %s -g %s -C %s -d %s' % (logfile, htmlfile, failcmdfile, self.tmpdir)
             args = args + ' ' + args2