
usb: host: ehci-sched: add comment about find_tt() not returning error

Message ID 20201011205008.24369-1-sudipm.mukherjee@gmail.com (mailing list archive)
State Accepted
Commit 23eac8531acdec3bdbd920d5ec0d7747508eaaab
Series usb: host: ehci-sched: add comment about find_tt() not returning error

Commit Message

Sudip Mukherjee Oct. 11, 2020, 8:50 p.m. UTC
Add a comment explaining why find_tt() will not return an error even though
find_tt() checks for NULL and other errors.

Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
---
 drivers/usb/host/ehci-sched.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

Comments

Alan Stern Oct. 12, 2020, 12:27 a.m. UTC | #1
On Sun, Oct 11, 2020 at 09:50:08PM +0100, Sudip Mukherjee wrote:
> Add a comment explaining why find_tt() will not return error even though
> find_tt() is checking for NULL and other errors.
> 
> Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
> ---
>  drivers/usb/host/ehci-sched.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/drivers/usb/host/ehci-sched.c b/drivers/usb/host/ehci-sched.c
> index 6dfb242f9a4b..0f85aa9b2fb1 100644
> --- a/drivers/usb/host/ehci-sched.c
> +++ b/drivers/usb/host/ehci-sched.c
> @@ -244,6 +244,12 @@ static void reserve_release_intr_bandwidth(struct ehci_hcd *ehci,
>  
>  	/* FS/LS bus bandwidth */
>  	if (tt_usecs) {
> +		/*
> +		 * find_tt() will not return any error here as we have
> +		 * already called find_tt() before calling this function
> +		 * and checked for any error return. The previous call
> +		 * would have created the data structure.
> +		 */
>  		tt = find_tt(qh->ps.udev);
>  		if (sign > 0)
>  			list_add_tail(&qh->ps.ps_list, &tt->ps_list);
> @@ -1337,6 +1343,12 @@ static void reserve_release_iso_bandwidth(struct ehci_hcd *ehci,
>  			}
>  		}
>  
> +		/*
> +		 * find_tt() will not return any error here as we have
> +		 * already called find_tt() before calling this function
> +		 * and checked for any error return. The previous call
> +		 * would have created the data structure.
> +		 */
>  		tt = find_tt(stream->ps.udev);
>  		if (sign > 0)
>  			list_add_tail(&stream->ps.ps_list, &tt->ps_list);

Acked-by: Alan Stern <stern@rowland.harvard.edu>
Lukas Bulwahn Oct. 12, 2020, 2:11 p.m. UTC | #2
On Sun, 11 Oct 2020, Sudip Mukherjee wrote:

> Add a comment explaining why find_tt() will not return error even though
> find_tt() is checking for NULL and other errors.
> 
> Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>

I get the intent of the comment, but there is a large risk that somebody 
could remove the find_tt() call before the call to this function, and there 
would then be no way to see later that the comment is actually wrong.

I guess you want to link this comment to the code line above and
request anyone touching that line to reconsider the comment.

But there is no 'concept' for such requests tied to changes and 
comments.

So the comment is true now, but might become wrong later.

I am wondering whether such a comment deserves to be included if we cannot 
check its validity later...

I would prefer a simple tool that could check that your basic assumption 
about the code is valid and, if the basic assumption is still valid, 
just shut up any stupid tool that simply does not get that these calls 
here cannot return any error.

We encountered this issue because the clang analyzer complained, but it is 
clearly a false positive of that (smart but) incomplete tool.

Do you intend to add comments for all false positives from all tools, or are 
we going to find a better solution than that?

I am happy to contribute to the smarter solution...
e.g., a cocci script that checks that the call is still there and then
always marks the related tool finding as a false positive.

Lukas

> ---
>  drivers/usb/host/ehci-sched.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/drivers/usb/host/ehci-sched.c b/drivers/usb/host/ehci-sched.c
> index 6dfb242f9a4b..0f85aa9b2fb1 100644
> --- a/drivers/usb/host/ehci-sched.c
> +++ b/drivers/usb/host/ehci-sched.c
> @@ -244,6 +244,12 @@ static void reserve_release_intr_bandwidth(struct ehci_hcd *ehci,
>  
>  	/* FS/LS bus bandwidth */
>  	if (tt_usecs) {
> +		/*
> +		 * find_tt() will not return any error here as we have
> +		 * already called find_tt() before calling this function
> +		 * and checked for any error return. The previous call
> +		 * would have created the data structure.
> +		 */
>  		tt = find_tt(qh->ps.udev);
>  		if (sign > 0)
>  			list_add_tail(&qh->ps.ps_list, &tt->ps_list);
> @@ -1337,6 +1343,12 @@ static void reserve_release_iso_bandwidth(struct ehci_hcd *ehci,
>  			}
>  		}
>  
> +		/*
> +		 * find_tt() will not return any error here as we have
> +		 * already called find_tt() before calling this function
> +		 * and checked for any error return. The previous call
> +		 * would have created the data structure.
> +		 */
>  		tt = find_tt(stream->ps.udev);
>  		if (sign > 0)
>  			list_add_tail(&stream->ps.ps_list, &tt->ps_list);
> -- 
> 2.11.0
Alan Stern Oct. 12, 2020, 2:57 p.m. UTC | #3
On Mon, Oct 12, 2020 at 04:11:38PM +0200, Lukas Bulwahn wrote:
> 
> 
> On Sun, 11 Oct 2020, Sudip Mukherjee wrote:
> 
> > Add a comment explaining why find_tt() will not return error even though
> > find_tt() is checking for NULL and other errors.
> > 
> > Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
> 
> I get the intent of the comment but there is a large risk that somebody 
> could remove the find_tt() call before the calling the function and there 
> is no chance to then see later that the comment is actually wrong.

Why would anybody do that?  Who would deliberately go change a driver in 
a way that is obviously wrong and would break it?  Especially when such 
changes are likely to cause compile errors?

There are tons of changes one might make to any program that will leave 
it syntactically valid but will cause it to fail.  What's special about 
removing the early calls to find_tt()?

> I guess you want to link this comment here to a code line above and
> request anyone touching the line above to reconsider the comment then.

That really seems unnecessary.

> But there is no 'concept' for such kind of requests to changes and 
> comments.
> 
> So, the comment is true now, but might be true or wrong later.
> 
> I am wondering if such comment deserves to be included if we cannot check 
> its validity later...
> 
> I would prefer a simple tool that could check that your basic assumption 
> on the code is valid and if it the basic assumption is still valid, 
> just shut up any stupid tool that simply does not get that these calls 
> here cannot return any error.

Real code contains so many assumptions, especially if you include ones 
which are obvious to everybody, that such a tool seems impractical.

Alan Stern
Lukas Bulwahn Oct. 12, 2020, 3:10 p.m. UTC | #4
On Mon, 12 Oct 2020, Alan Stern wrote:

> On Mon, Oct 12, 2020 at 04:11:38PM +0200, Lukas Bulwahn wrote:
> > 
> > 
> > On Sun, 11 Oct 2020, Sudip Mukherjee wrote:
> > 
> > > Add a comment explaining why find_tt() will not return error even though
> > > find_tt() is checking for NULL and other errors.
> > > 
> > > Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
> > 
> > I get the intent of the comment but there is a large risk that somebody 
> > could remove the find_tt() call before the calling the function and there 
> > is no chance to then see later that the comment is actually wrong.
> 
> Why would anybody do that?  Who would deliberately go change a driver in 
> a way that is obviously wrong and would break it?  Especially when such 
> changes are likely to cause compile errors?
> 
> There are tons of changes one might make to any program that will leave 
> it syntactically valid but will cause it to fail.  What's special about 
> removing the early calls to find_tt()?
> 
> > I guess you want to link this comment here to a code line above and
> > request anyone touching the line above to reconsider the comment then.
> 
> That really seems unnecessary.
> 
> > But there is no 'concept' for such kind of requests to changes and 
> > comments.
> > 
> > So, the comment is true now, but might be true or wrong later.
> > 
> > I am wondering if such comment deserves to be included if we cannot check 
> > its validity later...
> > 
> > I would prefer a simple tool that could check that your basic assumption 
> > on the code is valid and if it the basic assumption is still valid, 
> > just shut up any stupid tool that simply does not get that these calls 
> > here cannot return any error.
> 
> Real code contains so many assumptions, especially if you include ones 
> which are obvious to everybody, that such a tool seems impractical.
>

I fear that problem applies to all static code analysis tools I have seen; 
at some point, the remaining findings are simply obviously wrong to 
everybody but the tool does not get those assumptions and continues 
complaining, making the tool seem impractical.

Alan, so would you be willing to take patches where _anyone_ simply adds 
comments on what functions returns, depending on what this person might 
consider just not obvious enough?

Or are you going to simply reject this 'added a comment' patch here?

I am not arguing either way, it is just that it is unclear to me what the 
added value of the comment really is here.

And for the static analysis finding, we need to find a way to ignore this 
finding without simply ignoring all findings or new findings that just 
look very similar to the original finding, but which are valid.

Lukas
Greg Kroah-Hartman Oct. 12, 2020, 3:18 p.m. UTC | #5
On Mon, Oct 12, 2020 at 05:10:21PM +0200, Lukas Bulwahn wrote:
> And for the static analysis finding, we need to find a way to ignore this 
> finding without simply ignoring all findings or new findings that just 
> look very similar to the original finding, but which are valid.

Then I suggest you fix the tool that "flagged" this, surely this is not
the only thing it detected with a test like this, right?

What tool reported this?

thanks,

greg k-h
Sudip Mukherjee Oct. 12, 2020, 3:24 p.m. UTC | #6
Hi Lukas,

On Mon, Oct 12, 2020 at 3:11 PM Lukas Bulwahn <lukas.bulwahn@gmail.com> wrote:
>
>
>
> On Sun, 11 Oct 2020, Sudip Mukherjee wrote:
>
> > Add a comment explaining why find_tt() will not return error even though
> > find_tt() is checking for NULL and other errors.
> >
> > Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
>
> I get the intent of the comment but there is a large risk that somebody
> could remove the find_tt() call before the calling the function and there
> is no chance to then see later that the comment is actually wrong.

Removing the find_tt() call would mean a major rework of the code.

>
> I guess you want to link this comment here to a code line above and
> request anyone touching the line above to reconsider the comment then.
>
> But there is no 'concept' for such kind of requests to changes and
> comments.
>
> So, the comment is true now, but might be true or wrong later.

If it becomes wrong later due to some code change, then I guess someone will
need to send a patch to correct the comment.

>
> I am wondering if such comment deserves to be included if we cannot check
> its validity later...

I am failing to understand why you will not be able to check its
validity later. You just need to read the code to check it.

>
> I would prefer a simple tool that could check that your basic assumption
> on the code is valid and if it the basic assumption is still valid,
> just shut up any stupid tool that simply does not get that these calls
> here cannot return any error.
>
> We encountered this issue because of clang analyzer complaining, but it is
> clear that it is a false positive of that (smart but) incomplete tool.

I don't think it is a false positive. The error return value is not
checked, and that is true. It is only that the check is not applicable
because of the way the code is structured.

>
> Do you intend to add comment for all false positives from all tools or are
> we going to find a better solution than that?

I think tools will always give you some false positives, and you will
need to read the code to know whether a finding is a false positive or
not. I don't think there is any tool yet which gives only true positives.
Alan Stern Oct. 12, 2020, 4 p.m. UTC | #7
On Mon, Oct 12, 2020 at 05:10:21PM +0200, Lukas Bulwahn wrote:
> 
> 
> On Mon, 12 Oct 2020, Alan Stern wrote:
> > Real code contains so many assumptions, especially if you include ones 
> > which are obvious to everybody, that such a tool seems impractical.
> >
> 
> I fear that problem applies to all static code analysis tools I have seen; 
> at some point, the remaining findings are simply obviously wrong to 
> everybody but the tool does not get those assumptions and continues 
> complaining, making the tool seem impractical.

Indeed, it is well known that the problem of finding all errors or bugs 
by static code analysis is undecidable.

> Alan, so would you be willing to take patches where _anyone_ simply adds 
> comments on what functions returns, depending on what this person might 
> consider just not obvious enough?

No.  I would take such patches from anyone, but depending on what _I_ 
consider not obvious enough.

> Or are you going to simply reject this 'added a comment' patch here?

I have already accepted it.  In fact, the patch was my suggestion in the 
first place.

When I originally wrote this code, I was aware that it was somewhat 
subtle, but at the time it didn't seem to warrant a comment or 
explanation.  Sudip's patch has changed my mind.

> I am not arguing either way, it is just that it is unclear to me what the 
> added value of the comment really is here.

As with many other comments, its purpose is to explain a somewhat 
obscure aspect of the code -- something which is there by design but 
isn't immediately obvious to the reader.  That is the added value.

> And for the static analysis finding, we need to find a way to ignore this 
> finding without simply ignoring all findings or new findings that just 
> look very similar to the original finding, but which are valid.

Agreed.  In this case, the new comment does a pretty good job of telling 
people using the tool that the finding is unjustified.

If you are suggesting some sort of special code annotation that the tool 
would understand, I am open to that.  But I'm not aware of any even 
vaguely standard way of marking up a particular function call to 
indicate it will not return an error.

Alan Stern
Lukas Bulwahn Oct. 12, 2020, 6:17 p.m. UTC | #8
On Mon, 12 Oct 2020, Alan Stern wrote:

> On Mon, Oct 12, 2020 at 05:10:21PM +0200, Lukas Bulwahn wrote:
> > 
> > 
> > On Mon, 12 Oct 2020, Alan Stern wrote:
> > > Real code contains so many assumptions, especially if you include ones 
> > > which are obvious to everybody, that such a tool seems impractical.
> > >
> > 
> > I fear that problem applies to all static code analysis tools I have seen; 
> > at some point, the remaining findings are simply obviously wrong to 
> > everybody but the tool does not get those assumptions and continues 
> > complaining, making the tool seem impractical.
> 
> Indeed, it is well known that the problem of finding all errors or bugs 
> by static code analysis is Turing complete.
> 
> > Alan, so would you be willing to take patches where _anyone_ simply adds 
> > comments on what functions returns, depending on what this person might 
> > consider just not obvious enough?
> 
> No.  I would take such patches from anyone, but depending on what _I_ 
> consider not obvious enough.
> 
> > Or are you going to simply reject this 'added a comment' patch here?
> 
> I have already accepted it.  In fact, the patch was my suggestion in the 
> first place.
> 
> When I originally wrote this code, I was aware that it was somewhat 
> subtle, but at the time it didn't seem to warrant a comment or 
> explanation.  Sudip's patch has changed my mind.
> 
> > I am not arguing either way, it is just that it is unclear to me what the 
> > added value of the comment really is here.
> 
> As with many other comments, its purpose is to explain a somewhat 
> obscure aspect of the code -- something which is there by design but 
> isn't immediately obvious to the reader.  That is the added value.
>

Fine, then I was more conservative on adding comments than you; we will 
see if other maintainers accept adding such comments as well for further 
findings we will encounter.
 
> > And for the static analysis finding, we need to find a way to ignore this 
> > finding without simply ignoring all findings or new findings that just 
> > look very similar to the original finding, but which are valid.
> 
> Agreed.  In this case, the new comment does a pretty good job of telling 
> people using the tool that the finding is unjustified.
> 
> If you are suggesting some sort of special code annotation that the tool 
> would understand, I am open to that.  But I'm not aware of any even 
> vaguely standard way of marking up a particular function call to 
> indicate it will not return an error.
>

I cannot say yet whether some annotation would work; we, Sudip and I, need 
to investigate. It could be that something like assert(!IS_ERR(tt)) is 
sufficient to let the tools know that they can safely assume that the 
path they are complaining about is not possible.

We could make the assert() a no-op, so it would not affect the resulting 
object code in any way.

We have not tried that; we are still experimenting with the clang analyzer 
and are still learning.

Lukas
Lukas Bulwahn Oct. 12, 2020, 6:25 p.m. UTC | #9
On Mon, 12 Oct 2020, Greg Kroah-Hartman wrote:

> On Mon, Oct 12, 2020 at 05:10:21PM +0200, Lukas Bulwahn wrote:
> > And for the static analysis finding, we need to find a way to ignore this 
> > finding without simply ignoring all findings or new findings that just 
> > look very similar to the original finding, but which are valid.
> 
> Then I suggest you fix the tool that "flagged" this, surely this is not
> the only thing it detected with a test like this, right?
> 
> What tool reported this?
>

Sudip and I are following up on clang analyzer findings.

On linux-next, there is a new build target, 'make clang-analyzer', that 
outputs a bunch of warnings, just as you would expect from such static 
analysis tools.

We will fix the tool if we can.

Lukas
Lukas Bulwahn Oct. 12, 2020, 6:49 p.m. UTC | #10
On Mon, 12 Oct 2020, Sudip Mukherjee wrote:

> 
> >
> > I am wondering if such comment deserves to be included if we cannot check
> > its validity later...
> 
> I am failing to understand why will you not be able to check its
> validity later. You just need to read the code to check it.
>

Well, I meant automatically checking the validity with some tool, like a 
code analyzer or model checker, that can check whether a certain 
annotation holds.

> >
> > I would prefer a simple tool that could check that your basic assumption
> > on the code is valid and if it the basic assumption is still valid,
> > just shut up any stupid tool that simply does not get that these calls
> > here cannot return any error.
> >
> > We encountered this issue because of clang analyzer complaining, but it is
> > clear that it is a false positive of that (smart but) incomplete tool.
> 
> I dont think it is a false positive. The error return value is not
> checked and that is true. Only that it is not applicable because of
> the way the coding is done.
>

Maybe we have different understandings of what a false positive reported 
by a tool is...

My definitions are in the LPC 2020 presentation, 'Maintaining results from 
static analysis collaboratively', slide 4:
 
https://linuxplumbersconf.org/event/7/contributions/700/attachments/606/1088/LPC2020-Static-Analysis.pdf

So, for me, what you wrote above is exactly the definition of a
"False Tool Finding (False Positive, Type A)".

Given that Alan will accept adding a comment in the code, it is a "True 
Tool-Induced Change (True Positive, Type B)", because we can actually 
provide a reasonable patch based on what the tool reported (even though it 
is just supportive documentation, with no change in the code).
 
> >
> > Do you intend to add comment for all false positives from all tools or are
> > we going to find a better solution than that?
> 
> I think tools will always give you some false positives and you will
> need to read the code to know if its false positive or not. I dont
> think there is any tool yet which will only give true positives.
>

Well, given that humans provide some annotations for the initial false 
positives, there is a fair chance that a tool gives only true positives 
after the annotations.

With 'just comments', the tool will continue to complain, and we will need 
to read the code once again every time someone does not know that the case 
was a false positive. That is going to happen often, namely every time a 
new developer looks at the tool report.

Let us check whether an annotation can help beyond the current comment. 
This annotation can of course be provided later, independently, with 
another patch.

Lukas
Greg Kroah-Hartman Oct. 13, 2020, 5:21 a.m. UTC | #11
On Mon, Oct 12, 2020 at 08:17:34PM +0200, Lukas Bulwahn wrote:
> > If you are suggesting some sort of special code annotation that the tool 
> > would understand, I am open to that.  But I'm not aware of any even 
> > vaguely standard way of marking up a particular function call to 
> > indicate it will not return an error.
> 
> I cannot yet say if some annotation would work, we, Sudip and me, need to 
> investigate. It could be that something like, assert(!IS_ERR(tt)), is 
> sufficient to let the tools know that they can safely assume that the 
> path they are complaining about is not possible.
> 
> We could make the assert() a nop, so it would not effect the resulting 
> object code in any way.

Things like assert() have been rejected numerous times in the past in
the kernel, good luck with that :)

greg k-h
Greg Kroah-Hartman Oct. 13, 2020, 5:23 a.m. UTC | #12
On Mon, Oct 12, 2020 at 08:25:30PM +0200, Lukas Bulwahn wrote:
> 
> 
> On Mon, 12 Oct 2020, Greg Kroah-Hartman wrote:
> 
> > On Mon, Oct 12, 2020 at 05:10:21PM +0200, Lukas Bulwahn wrote:
> > > And for the static analysis finding, we need to find a way to ignore this 
> > > finding without simply ignoring all findings or new findings that just 
> > > look very similar to the original finding, but which are valid.
> > 
> > Then I suggest you fix the tool that "flagged" this, surely this is not
> > the only thing it detected with a test like this, right?
> > 
> > What tool reported this?
> >
> 
> Sudip and I are following on clang analyzer findings.
> 
> On linux-next, there is new build target 'make clang-analyzer' that 
> outputs a bunch of warnings, just as you would expect from such static 
> analysis tools.

Why not fix the things that it finds that are actually issues?  If there
are no actual issues found, then perhaps you should use a better tool?  :)

thanks,

greg k-h
Lukas Bulwahn Oct. 13, 2020, 5:37 a.m. UTC | #13
On Tue, 13 Oct 2020, Greg Kroah-Hartman wrote:

> On Mon, Oct 12, 2020 at 08:25:30PM +0200, Lukas Bulwahn wrote:
> > 
> > 
> > On Mon, 12 Oct 2020, Greg Kroah-Hartman wrote:
> > 
> > > On Mon, Oct 12, 2020 at 05:10:21PM +0200, Lukas Bulwahn wrote:
> > > > And for the static analysis finding, we need to find a way to ignore this 
> > > > finding without simply ignoring all findings or new findings that just 
> > > > look very similar to the original finding, but which are valid.
> > > 
> > > Then I suggest you fix the tool that "flagged" this, surely this is not
> > > the only thing it detected with a test like this, right?
> > > 
> > > What tool reported this?
> > >
> > 
> > Sudip and I are following on clang analyzer findings.
> > 
> > On linux-next, there is new build target 'make clang-analyzer' that 
> > outputs a bunch of warnings, just as you would expect from such static 
> > analysis tools.
> 
> Why not fix the things that it finds that are actually issues?  If there
> are no actual issues found, then perhaps you should use a better tool?  :)
>

Completely agree. That is why I was against adding comments here and 
elsewhere just to have the "good feeling of doing something" after the 
tool reported a warning and we spent some time understanding the code, 
only to conclude that we now understand the code better than the tool.

If you know a better tool, we will use it :) Unfortunately, there is no 
easy way of finding out that a tool just reports false positives and not a 
single true positive among 1000 reports...


Lukas
Lukas Bulwahn Oct. 13, 2020, 5:41 a.m. UTC | #14
On Tue, 13 Oct 2020, Greg Kroah-Hartman wrote:

> On Mon, Oct 12, 2020 at 08:17:34PM +0200, Lukas Bulwahn wrote:
> > > If you are suggesting some sort of special code annotation that the tool 
> > > would understand, I am open to that.  But I'm not aware of any even 
> > > vaguely standard way of marking up a particular function call to 
> > > indicate it will not return an error.
> > 
> > I cannot yet say if some annotation would work, we, Sudip and me, need to 
> > investigate. It could be that something like, assert(!IS_ERR(tt)), is 
> > sufficient to let the tools know that they can safely assume that the 
> > path they are complaining about is not possible.
> > 
> > We could make the assert() a nop, so it would not effect the resulting 
> > object code in any way.
> 
> Things like assert() have been rejected numberous times in the past in
> the kernel, good luck with that :)
>

Greg, you have warned us now; so we are well aware of what could 
await us, just as it did numerous others before.

Lukas
Greg Kroah-Hartman Oct. 13, 2020, 6:36 a.m. UTC | #15
On Tue, Oct 13, 2020 at 07:37:34AM +0200, Lukas Bulwahn wrote:
> 
> 
> On Tue, 13 Oct 2020, Greg Kroah-Hartman wrote:
> 
> > On Mon, Oct 12, 2020 at 08:25:30PM +0200, Lukas Bulwahn wrote:
> > > 
> > > 
> > > On Mon, 12 Oct 2020, Greg Kroah-Hartman wrote:
> > > 
> > > > On Mon, Oct 12, 2020 at 05:10:21PM +0200, Lukas Bulwahn wrote:
> > > > > And for the static analysis finding, we need to find a way to ignore this 
> > > > > finding without simply ignoring all findings or new findings that just 
> > > > > look very similar to the original finding, but which are valid.
> > > > 
> > > > Then I suggest you fix the tool that "flagged" this, surely this is not
> > > > the only thing it detected with a test like this, right?
> > > > 
> > > > What tool reported this?
> > > >
> > > 
> > > Sudip and I are following on clang analyzer findings.
> > > 
> > > On linux-next, there is new build target 'make clang-analyzer' that 
> > > outputs a bunch of warnings, just as you would expect from such static 
> > > analysis tools.
> > 
> > Why not fix the things that it finds that are actually issues?  If there
> > are no actual issues found, then perhaps you should use a better tool?  :)
> >
> 
> Completely agree. That is why I was against adding comments here and 
> elsewhere just to have the "good feeling of doing something" after the 
> tool reported a warning and we spend some time understanding the code to 
> conclude that we now understand the code better than the tool.
> 
> If you know a better tool, we will use it :) unfortunately, there is no 
> easy way of finding out that a tool just reports false positives and not a 
> single true positive among 1000 reports...

Who is "forcing" you to use any tool?  What is your goal here?

greg k-h
Lukas Bulwahn Oct. 13, 2020, 7:16 a.m. UTC | #16
On Tue, 13 Oct 2020, Greg Kroah-Hartman wrote:

> On Tue, Oct 13, 2020 at 07:37:34AM +0200, Lukas Bulwahn wrote:
> > 
> > 
> > On Tue, 13 Oct 2020, Greg Kroah-Hartman wrote:
> > 
> > > On Mon, Oct 12, 2020 at 08:25:30PM +0200, Lukas Bulwahn wrote:
> > > > 
> > > > 
> > > > On Mon, 12 Oct 2020, Greg Kroah-Hartman wrote:
> > > > 
> > > > > On Mon, Oct 12, 2020 at 05:10:21PM +0200, Lukas Bulwahn wrote:
> > > > > > And for the static analysis finding, we need to find a way to ignore this 
> > > > > > finding without simply ignoring all findings or new findings that just 
> > > > > > look very similar to the original finding, but which are valid.
> > > > > 
> > > > > Then I suggest you fix the tool that "flagged" this, surely this is not
> > > > > the only thing it detected with a test like this, right?
> > > > > 
> > > > > What tool reported this?
> > > > >
> > > > 
> > > > Sudip and I are following on clang analyzer findings.
> > > > 
> > > > On linux-next, there is new build target 'make clang-analyzer' that 
> > > > outputs a bunch of warnings, just as you would expect from such static 
> > > > analysis tools.
> > > 
> > > Why not fix the things that it finds that are actually issues?  If there
> > > are no actual issues found, then perhaps you should use a better tool?  :)
> > >
> > 
> > Completely agree. That is why I was against adding comments here and 
> > elsewhere just to have the "good feeling of doing something" after the 
> > tool reported a warning and we spend some time understanding the code to 
> > conclude that we now understand the code better than the tool.
> > 
> > If you know a better tool, we will use it :) unfortunately, there is no 
> > easy way of finding out that a tool just reports false positives and not a 
> > single true positive among 1000 reports...
> 
> Who is "forcing" you to use any tool?  What is your goal here?
>

No force involved.

For some of us, it is 'just for fun' and out of interest in understanding 
the capabilities of the existing static analysis tools. To understand their 
capabilities and limits, we simply go through the warnings and try to 
determine whether they are true positives (and deserve a patch) or false 
positives (which we at least try to document reasonably, for later 
statistics and for learning about systematic tool weaknesses).

Some others actually believe that the use of static analysis tools 
increases software quality, and that a specific level of software quality 
is achieved ONLY IF a static analysis tool is used, and they want to prove 
that the software reaches a certain level that way. (I do not 
understand that argument, but some have been repeating it quite often 
around me. This argument seems to come from a specific interpretation of 
safety standards that claim to have methods to predict the absence of 
bugs up to a certain confidence.)

I am doing it for fun and to learn about the tools, and I am not such a 
believer, but those others would be forced by their beliefs until they 
understand what static analysis tools and their janitors already really 
contribute to kernel development and where the real gaps might be.

I hope that helps to convey a bit of the motivation. Consider us 
kernel newbies :)

Lukas
Greg Kroah-Hartman Oct. 13, 2020, 7:35 a.m. UTC | #17
On Tue, Oct 13, 2020 at 09:16:27AM +0200, Lukas Bulwahn wrote:
> Some others actually believe that the use of static analysis tools 
> increase software quality and ONLY IF a static analysis tool is used, a 
> specific level of software quality is achieved and they want to prove 
> that the software reaches a certain level that way. (I do not 
> understand that argument but some have been repeating it quite often 
> around me. This argument seems to come from a specific interpretation of 
> safety standards that claim to have methods to predict the absense of 
> bugs up to a certain confidence.)

So do those others also audit the static analysis tools to ensure that
they actually work as they "think" they do?  If not, then their
requirement is pretty pointless :)

> I am doing it for the fun and to learn about the tools, and I am not 
> such a believer, but those others will remain driven by their beliefs 
> until they understand what static analysis tools and their janitors 
> really already contribute to the kernel development and where the real 
> gaps might be.
> 
> I hope that helps to convey a bit of the motivation. Consider us 
> kernel newbies :)

Watch out, sending patches to subsystems to "fix" issues that really
are not real problems is a sure way to get your patches rejected and
make maintainers grumpy.

I recommend starting out with code that we all "know" needs help, in
drivers/staging/ for stuff like this, so you can learn the process
better, as well as start to understand the limitations of your tools
better.

good luck!

greg k-h
Lukas Bulwahn Oct. 13, 2020, 8:02 a.m. UTC | #18
On Tue, 13 Oct 2020, Greg Kroah-Hartman wrote:

> On Tue, Oct 13, 2020 at 09:16:27AM +0200, Lukas Bulwahn wrote:
> > Some others actually believe that the use of static analysis tools 
> > increases software quality, and that a specific level of software 
> > quality is achieved ONLY IF a static analysis tool is used; they want 
> > to prove that the software reaches a certain level that way. (I do not 
> > understand that argument, but some have been repeating it quite often 
> > around me. This argument seems to come from a specific interpretation 
> > of safety standards that claim to have methods to predict the absence 
> > of bugs up to a certain confidence.)
> 
> So do those others also audit the static analysis tools to ensure that
> they actually work as they "think" they do?  If not, then their
> requirement is pretty pointless :)
>

Yes, they do audit the tools, but those audits, and why they prove the 
absence of a bug class, are yet a completely different story...

> > I am doing it for the fun and to learn about the tools, and I am not 
> > such a believer, but those others will remain driven by their beliefs 
> > until they understand what static analysis tools and their janitors 
> > really already contribute to the kernel development and where the real 
> > gaps might be.
> > 
> > I hope that helps to convey a bit of the motivation. Consider us 
> > kernel newbies :)
> 
> Watch out, sending patches to subsystems to "fix" issues that really
> are not real problems is a sure way to get your patches rejected and
> make maintainers grumpy.
> 
> I recommend starting out with code that we all "know" needs help, in
> drivers/staging/ for stuff like this, so you can learn the process
> better, as well as start to understand the limitations of your tools
> better.
> 
> good luck!
>

Thanks for the advice. We will need to learn about the limitations, and 
about what is worth a patch and what is not, and we will need some luck 
along the way while learning that.

Lukas
Sudip Mukherjee Oct. 13, 2020, 8:24 a.m. UTC | #19
Hi Lukas,

On Tue, Oct 13, 2020 at 6:37 AM Lukas Bulwahn <lukas.bulwahn@gmail.com> wrote:
>
>
>
> On Tue, 13 Oct 2020, Greg Kroah-Hartman wrote:
>
> > On Mon, Oct 12, 2020 at 08:25:30PM +0200, Lukas Bulwahn wrote:
> > >
> > >
> > > On Mon, 12 Oct 2020, Greg Kroah-Hartman wrote:
> > >
> > > > On Mon, Oct 12, 2020 at 05:10:21PM +0200, Lukas Bulwahn wrote:
> > > > > And for the static analysis finding, we need to find a way to ignore this
> > > > > finding without simply ignoring all findings or new findings that just
> > > > > look very similar to the original finding, but which are valid.
> > > >
<snip>
> >
> > Why not fix the things that it finds that are actually issues?  If there
> > are no actual issues found, then perhaps you should use a better tool?  :)
> >
>
> Completely agree. That is why I was against adding comments here and
> elsewhere just to have the "good feeling of doing something" after the
> tool reported a warning and we spend some time understanding the code to
> conclude that we now understand the code better than the tool.

I think you are missing the point here. I sent the comment not because
of any tool; I sent it because the code there is not as simple as in
other drivers, and at first glance it's not apparent why there are no
error checks. And, afaik, the only purpose of comments is to make the
source code easier to understand.
Lukas Bulwahn Oct. 13, 2020, 8:36 a.m. UTC | #20
On Tue, 13 Oct 2020, Sudip Mukherjee wrote:

> Hi Lukas,
> 
> On Tue, Oct 13, 2020 at 6:37 AM Lukas Bulwahn <lukas.bulwahn@gmail.com> wrote:
> >
> >
> >
> > On Tue, 13 Oct 2020, Greg Kroah-Hartman wrote:
> >
> > > On Mon, Oct 12, 2020 at 08:25:30PM +0200, Lukas Bulwahn wrote:
> > > >
> > > >
> > > > On Mon, 12 Oct 2020, Greg Kroah-Hartman wrote:
> > > >
> > > > > On Mon, Oct 12, 2020 at 05:10:21PM +0200, Lukas Bulwahn wrote:
> > > > > > And for the static analysis finding, we need to find a way to ignore this
> > > > > > finding without simply ignoring all findings or new findings that just
> > > > > > look very similar to the original finding, but which are valid.
> > > > >
> <snip>
> > >
> > > Why not fix the things that it finds that are actually issues?  If there
> > > are no actual issues found, then perhaps you should use a better tool?  :)
> > >
> >
> > Completely agree. That is why I was against adding comments here and
> > elsewhere just to have the "good feeling of doing something" after the
> > tool reported a warning and we spend some time understanding the code to
> > conclude that we now understand the code better than the tool.
> 
> I think you are missing the point here. I sent the comment not because
> of any tool; I sent it because the code there is not as simple as in
> other drivers, and at first glance it's not apparent why there are no
> error checks. And, afaik, the only purpose of comments is to make the
> source code easier to understand.
>

That is fine. I think it is good to add comments to make the code more 
understandable.

Lukas
diff mbox series

Patch

diff --git a/drivers/usb/host/ehci-sched.c b/drivers/usb/host/ehci-sched.c
index 6dfb242f9a4b..0f85aa9b2fb1 100644
--- a/drivers/usb/host/ehci-sched.c
+++ b/drivers/usb/host/ehci-sched.c
@@ -244,6 +244,12 @@  static void reserve_release_intr_bandwidth(struct ehci_hcd *ehci,
 
 	/* FS/LS bus bandwidth */
 	if (tt_usecs) {
+		/*
+		 * find_tt() will not return any error here as we have
+		 * already called find_tt() before calling this function
+		 * and checked for any error return. The previous call
+		 * would have created the data structure.
+		 */
 		tt = find_tt(qh->ps.udev);
 		if (sign > 0)
 			list_add_tail(&qh->ps.ps_list, &tt->ps_list);
@@ -1337,6 +1343,12 @@  static void reserve_release_iso_bandwidth(struct ehci_hcd *ehci,
 			}
 		}
 
+		/*
+		 * find_tt() will not return any error here as we have
+		 * already called find_tt() before calling this function
+		 * and checked for any error return. The previous call
+		 * would have created the data structure.
+		 */
 		tt = find_tt(stream->ps.udev);
 		if (sign > 0)
 			list_add_tail(&stream->ps.ps_list, &tt->ps_list);
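
The pattern the added comments document can be sketched in miniature. 
The following is a hypothetical stand-alone find-or-create function in 
the style of find_tt(), not the actual ehci-sched.c code: the first 
call for a given key may allocate and can therefore fail, while any 
later call for the same key is a pure lookup and cannot.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical miniature of the find_tt() pattern discussed above:
 * the first lookup for a key may allocate (and can fail with NULL),
 * but every later lookup for the same key only walks the list and
 * therefore cannot return an error. */

struct tt {
	int key;
	struct tt *next;
};

static struct tt *tt_list;

static struct tt *find_tt(int key)
{
	struct tt *t;

	for (t = tt_list; t; t = t->next)
		if (t->key == key)
			return t;	/* already created: cannot fail */

	t = malloc(sizeof(*t));
	if (!t)
		return NULL;		/* only the creating call can fail */
	t->key = key;
	t->next = tt_list;
	tt_list = t;
	return t;
}
```

In ehci-sched.c, the creating call is made and error-checked before 
reserve_release_intr_bandwidth() or reserve_release_iso_bandwidth() 
runs; the calls inside those functions correspond to the second lookup 
in this sketch, which is why no error check is needed there.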