diff mbox series

ASoC: core: Fix access to uninitialized list heads

Message ID 20191204151454.21643-1-tiwai@suse.de (mailing list archive)
State Accepted
Commit 07d22a9bb623714dc3199099c5cce3df6aef496c
Series: ASoC: core: Fix access to uninitialized list heads

Commit Message

Takashi Iwai Dec. 4, 2019, 3:14 p.m. UTC
The error path of soc_new_pcm_runtime() invokes soc_free_pcm_runtime()
that may cause a few problems.  First off, it calls list_del() for
rtd->list that may not be initialized.  Similarly,
snd_soc_pcm_component_free() traverses over the component list that
may not be initialized, either.  Such access to the uninitialized list
head would lead to either a BUG_ON() or a memory corruption.

This patch fixes the access to uninitialized list heads by
initializing the list heads properly at the beginning before those
error paths.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
---
 sound/soc/soc-core.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

Comments

Pierre-Louis Bossart Dec. 4, 2019, 7:23 p.m. UTC | #1
On 12/4/19 9:14 AM, Takashi Iwai wrote:
> The error path of soc_new_pcm_runtime() invokes soc_free_pcm_runtime()
> that may cause a few problems.  First off, it calls list_del() for
> rtd->list that may not be initialized.  Similarly,
> snd_soc_pcm_component_free() traverses over the component list that
> may not be initialized, either.  Such access to the uninitialized list
> head would lead to either a BUG_ON() or a memory corruption.
> 
> This patch fixes the access to uninitialized list heads by
> initializing the list heads properly at the beginning before those
> error paths.
> 
> Signed-off-by: Takashi Iwai <tiwai@suse.de>

Reviewed-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>

> ---
>   sound/soc/soc-core.c | 12 ++++++------
>   1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
> index 0e2e628302f1..726e5de850c4 100644
> --- a/sound/soc/soc-core.c
> +++ b/sound/soc/soc-core.c
> @@ -478,6 +478,12 @@ static struct snd_soc_pcm_runtime *soc_new_pcm_runtime(
>   		goto free_rtd;
>   
>   	rtd->dev = dev;
> +	INIT_LIST_HEAD(&rtd->list);
> +	INIT_LIST_HEAD(&rtd->component_list);
> +	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].be_clients);
> +	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_CAPTURE].be_clients);
> +	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].fe_clients);
> +	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_CAPTURE].fe_clients);
>   	dev_set_drvdata(dev, rtd);
>   	INIT_DELAYED_WORK(&rtd->delayed_work, close_delayed_work);
>   
> @@ -493,12 +499,6 @@ static struct snd_soc_pcm_runtime *soc_new_pcm_runtime(
>   	/*
>   	 * rtd remaining settings
>   	 */
> -	INIT_LIST_HEAD(&rtd->component_list);
> -	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].be_clients);
> -	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_CAPTURE].be_clients);
> -	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].fe_clients);
> -	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_CAPTURE].fe_clients);
> -
>   	rtd->card = card;
>   	rtd->dai_link = dai_link;
>   	if (!rtd->dai_link->ops)
>
Takashi Iwai Dec. 26, 2019, 8:50 a.m. UTC | #2
On Wed, 04 Dec 2019 16:14:54 +0100,
Takashi Iwai wrote:
> 
> The error path of soc_new_pcm_runtime() invokes soc_free_pcm_runtime()
> that may cause a few problems.  First off, it calls list_del() for
> rtd->list that may not be initialized.  Similarly,
> snd_soc_pcm_component_free() traverses over the component list that
> may not be initialized, either.  Such access to the uninitialized list
> head would lead to either a BUG_ON() or a memory corruption.
> 
> This patch fixes the access to uninitialized list heads by
> initializing the list heads properly at the beginning before those
> error paths.
> 
> Signed-off-by: Takashi Iwai <tiwai@suse.de>

This patch seems forgotten?  5.5 still suffers from the mentioned
bug.


thanks,

Takashi

> ---
>  sound/soc/soc-core.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
> index 0e2e628302f1..726e5de850c4 100644
> --- a/sound/soc/soc-core.c
> +++ b/sound/soc/soc-core.c
> @@ -478,6 +478,12 @@ static struct snd_soc_pcm_runtime *soc_new_pcm_runtime(
>  		goto free_rtd;
>  
>  	rtd->dev = dev;
> +	INIT_LIST_HEAD(&rtd->list);
> +	INIT_LIST_HEAD(&rtd->component_list);
> +	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].be_clients);
> +	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_CAPTURE].be_clients);
> +	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].fe_clients);
> +	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_CAPTURE].fe_clients);
>  	dev_set_drvdata(dev, rtd);
>  	INIT_DELAYED_WORK(&rtd->delayed_work, close_delayed_work);
>  
> @@ -493,12 +499,6 @@ static struct snd_soc_pcm_runtime *soc_new_pcm_runtime(
>  	/*
>  	 * rtd remaining settings
>  	 */
> -	INIT_LIST_HEAD(&rtd->component_list);
> -	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].be_clients);
> -	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_CAPTURE].be_clients);
> -	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].fe_clients);
> -	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_CAPTURE].fe_clients);
> -
>  	rtd->card = card;
>  	rtd->dai_link = dai_link;
>  	if (!rtd->dai_link->ops)
> -- 
> 2.16.4
>
Mark Brown Dec. 27, 2019, 12:08 a.m. UTC | #3
On Thu, Dec 26, 2019 at 09:50:15AM +0100, Takashi Iwai wrote:

> This patch seems forgotten?  5.5 still suffers from the mentioned
> bug.

It's in my fixes branch.

Please don't send content free pings and please allow a reasonable time
for review.  People get busy, go on holiday, attend conferences and so 
on so unless there is some reason for urgency (like critical bug fixes)
please allow at least a couple of weeks for review.  If there have been
review comments then people may be waiting for those to be addressed.

Sending content free pings adds to the mail volume (if they are seen at
all) which is often the problem and since they can't be reviewed
directly if something has gone wrong you'll have to resend the patches
anyway, so sending again is generally a better approach though there are
some other maintainers who like them - if in doubt look at how patches
for the subsystem are normally handled.
Takashi Iwai Dec. 27, 2019, 7:28 a.m. UTC | #4
On Fri, 27 Dec 2019 01:08:18 +0100,
Mark Brown wrote:
> 
> On Thu, Dec 26, 2019 at 09:50:15AM +0100, Takashi Iwai wrote:
> 
> > This patch seems forgotten?  5.5 still suffers from the mentioned
> > bug.
> 
> It's in my fixes branch.

OK, now found a reply post of a couple of days ago.  Unfortunately the
commit still doesn't appear on linux-next because the update has been
stopped for holidays, so I overlooked it.

But...

> Please don't send content free pings and please allow a reasonable time
> for review.  People get busy, go on holiday, attend conferences and so 
> on so unless there is some reason for urgency (like critical bug fixes)
> please allow at least a couple of weeks for review.  If there have been
> review comments then people may be waiting for those to be addressed.
>
> Sending content free pings adds to the mail volume (if they are seen at
> all) which is often the problem and since they can't be reviewed
> directly if something has gone wrong you'll have to resend the patches
> anyway, so sending again is generally a better approach though there are
> some other maintainers who like them - if in doubt look at how patches
> for the subsystem are normally handled.

This warning doesn't seem fit at all.  The patch was submitted three
weeks ago.


thanks,

Takashi
Mark Brown Dec. 27, 2019, 10:41 p.m. UTC | #5
On Fri, Dec 27, 2019 at 08:28:05AM +0100, Takashi Iwai wrote:
> Mark Brown wrote:

> > Please don't send content free pings and please allow a reasonable time
> > for review.  People get busy, go on holiday, attend conferences and so 
> > on so unless there is some reason for urgency (like critical bug fixes)
> > please allow at least a couple of weeks for review.  If there have been
> > review comments then people may be waiting for those to be addressed.

> > Sending content free pings adds to the mail volume (if they are seen at
> > all) which is often the problem and since they can't be reviewed
> > directly if something has gone wrong you'll have to resend the patches
> > anyway, so sending again is generally a better approach though there are
> > some other maintainers who like them - if in doubt look at how patches
> > for the subsystem are normally handled.

> This warning doesn't seem fit at all.  The patch was submitted three
> weeks ago.

There's two bits there - one is that it's adding to the mail
volume when people chase up, the other is that if things have
been lost then almost always the answer is that I don't have the
patch any more (or it'll be error prone to find) and it'll need a
resend so it's better to chase up by resending the patch since
that can be acted on directly.
Takashi Iwai Dec. 28, 2019, 8:25 a.m. UTC | #6
On Fri, 27 Dec 2019 23:41:33 +0100,
Mark Brown wrote:
> 
> On Fri, Dec 27, 2019 at 08:28:05AM +0100, Takashi Iwai wrote:
> > Mark Brown wrote:
> 
> > > Please don't send content free pings and please allow a reasonable time
> > > for review.  People get busy, go on holiday, attend conferences and so 
> > > on so unless there is some reason for urgency (like critical bug fixes)
> > > please allow at least a couple of weeks for review.  If there have been
> > > review comments then people may be waiting for those to be addressed.
> 
> > > Sending content free pings adds to the mail volume (if they are seen at
> > > all) which is often the problem and since they can't be reviewed
> > > directly if something has gone wrong you'll have to resend the patches
> > > anyway, so sending again is generally a better approach though there are
> > > some other maintainers who like them - if in doubt look at how patches
> > > for the subsystem are normally handled.
> 
> > This warning doesn't seem fit at all.  The patch was submitted three
> > weeks ago.
> 
> There's two bits there - one is that it's adding to the mail
> volume when people chase up, the other is that if things have
> been lost then almost always the answer is that I don't have the
> patch any more (or it'll be error prone to find) and it'll need a
> resend so it's better to chase up by resending the patch since
> that can be acted on directly.

Well, I see a few points to be revised in this policy:

- If it were actually your oversight, then resending the patch makes
  sense.  But if it's not merged by other reasons?  Silently resending
  a patch can be worse.

  For example, suppose the submitter either overlooked or didn't
  receive a reply or a followup in the thread.  You didn't apply the
  patch because of the reply/followup pointing some problem.  Now, the
  submitter tries to resend the original patch again without asking
  the situation, just because you suggested so in the past.  You'll
  get the problematic patch again, and there is a risk that the patch
  gets merged mistakenly at this time.

- The mail archive (lore.kernel.org) nowadays catches all posted mails
  in a proper manner, and it's pretty easy to fetch.  And, resending
  the whole patch would be even higher volume than replying,
  disconnecting the discussions in the previous thread.

So, I find it's OK to give this kind of warning for educational
purposes to the people who don't know the common practice and send the
patches too frequently.  But for other cases, such a warning doesn't
fit.


thanks,

Takashi
Mark Brown Dec. 31, 2019, 12:16 a.m. UTC | #7
On Sat, Dec 28, 2019 at 09:25:18AM +0100, Takashi Iwai wrote:
> Mark Brown wrote:

> > There's two bits there - one is that it's adding to the mail
> > volume when people chase up, the other is that if things have
> > been lost then almost always the answer is that I don't have the
> > patch any more (or it'll be error prone to find) and it'll need a
> > resend so it's better to chase up by resending the patch since
> > that can be acted on directly.

> Well, I see a few points to be revised in this policy:

> - If it were actually your oversight, then resending the patch makes
>   sense.  But if it's not merged by other reasons?  Silently resending
>   a patch can be worse.

>   For example, suppose the submitter either overlooked or didn't
>   receive a reply or a followup in the thread.  You didn't apply the
>   patch because of the reply/followup pointing some problem.  Now, the
>   submitter tries to resend the original patch again without asking
>   the situation, just because you suggested so in the past.  You'll
>   get the problematic patch again, and there is a risk that the patch
>   gets merged mistakenly at this time.

The thing there is that if I don't remember the state of the
patch I'm likely to just say "send it again" and if I do remember
I'll remember either way (the form does say stuff about
addressing feedback, though obviously people can ignore that too).

> - The mail archive (lore.kernel.org) nowadays catches all posted mails
>   in a proper manner, and it's pretty easy to fetch.  And, resending
>   the whole patch would be even higher volume than replying,
>   disconnecting the discussions in the previous thread.

That requires me to be on the internet and fire up my web
browser!  I do actually work offline routinely, when I'm
travelling or somewhere like a coffee shop with poor internet
access so that's not always a thing I can do.  I do postpone
things but that's usually for longer periods (waiting for other
reviewers and so on) which tends to mean people don't get an
answer for their ping promptly which doesn't help either, I
haven't managed to come up with a workflow for that which is
effective.

> So, I find it's OK to give this kind of warning for educational
> purposes to the people who don't know the common practice and send the
> patches too frequently.  But for other cases, such a warning doesn't
> fit.

I deliberately try to be consistent in sending this stuff out
because I don't want to be unfair.  Which has its own downsides
as you say.

Patch

diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
index 0e2e628302f1..726e5de850c4 100644
--- a/sound/soc/soc-core.c
+++ b/sound/soc/soc-core.c
@@ -478,6 +478,12 @@ static struct snd_soc_pcm_runtime *soc_new_pcm_runtime(
 		goto free_rtd;
 
 	rtd->dev = dev;
+	INIT_LIST_HEAD(&rtd->list);
+	INIT_LIST_HEAD(&rtd->component_list);
+	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].be_clients);
+	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_CAPTURE].be_clients);
+	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].fe_clients);
+	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_CAPTURE].fe_clients);
 	dev_set_drvdata(dev, rtd);
 	INIT_DELAYED_WORK(&rtd->delayed_work, close_delayed_work);
 
@@ -493,12 +499,6 @@ static struct snd_soc_pcm_runtime *soc_new_pcm_runtime(
 	/*
 	 * rtd remaining settings
 	 */
-	INIT_LIST_HEAD(&rtd->component_list);
-	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].be_clients);
-	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_CAPTURE].be_clients);
-	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_PLAYBACK].fe_clients);
-	INIT_LIST_HEAD(&rtd->dpcm[SNDRV_PCM_STREAM_CAPTURE].fe_clients);
-
 	rtd->card = card;
 	rtd->dai_link = dai_link;
 	if (!rtd->dai_link->ops)