
mm: hwpoison: use do_send_sig_info() instead of force_sig() (Re: PMEM error-handling forces SIGKILL causes kernel panic)

Message ID 20190116093046.GA29835@hori1.linux.bs1.fc.nec.co.jp (mailing list archive)
State New, archived
Series mm: hwpoison: use do_send_sig_info() instead of force_sig() (Re: PMEM error-handling forces SIGKILL causes kernel panic)

Commit Message

Naoya Horiguchi Jan. 16, 2019, 9:30 a.m. UTC
[ CCed Andrew and linux-mm ]

On Fri, Jan 11, 2019 at 08:14:02AM +0000, Horiguchi Naoya(堀口 直也) wrote:
> Hi Dan, Jane,
> 
> Thanks for the report.
> 
> On Wed, Jan 09, 2019 at 03:49:32PM -0800, Dan Williams wrote:
> > [ switch to text mail, add lkml and Naoya ]
> > 
> > On Wed, Jan 9, 2019 at 12:19 PM Jane Chu <jane.chu@oracle.com> wrote:
> ...
> > > 3. The hardware consists the latest revision CPU and Intel NVDIMM, we suspected
> > >    the CPU faulty because it generated MCE over PMEM UE in a unlikely high
> > >    rate for any reasonable NVDIMM (like a few per 24hours).
> > >
> > > After swapping the CPU, the problem stopped reproducing.
> > >
> > > But one could argue that perhaps the faulty CPU exposed a small race window
> > > from collect_procs() to unmap_mapping_range() and to kill_procs(), hence
> > > caught the kernel  PMEM error handler off guard.
> > 
> > There's definitely a race, and the implementation is buggy as can be
> > seen in __exit_signal:
> > 
> >         sighand = rcu_dereference_check(tsk->sighand,
> >                                         lockdep_tasklist_lock_is_held());
> >         spin_lock(&sighand->siglock);
> > 
> > ...the memory-failure path needs to hold the proper locks before it
> > can assume that de-referencing tsk->sighand is valid.
> > 
> > > Also note, the same workload on the same faulty CPU were run on Linux prior to
> > > the 4.19 PMEM error handling and did not encounter kernel crash, probably because
> > > the prior HWPOISON handler did not force SIGKILL?
> > 
> > Before 4.19 this test should result in a machine-check reboot, not
> > much better than a kernel crash.
> > 
> > > Should we not to force the SIGKILL, or find a way to close the race window?
> > 
> > The race should be closed by holding the proper tasklist and rcu read lock(s).
> 
> This reasoning and proposal sound right to me. I'm trying to reproduce
> this race (for non-pmem case,) but no luck for now. I'll investigate more.

I wrote/tested a patch for this issue.
I think that switching signal API effectively does proper locking.

Thanks,
Naoya Horiguchi
---
From 16dbf6105ff4831f73276d79d5df238ab467de76 Mon Sep 17 00:00:00 2001
From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Date: Wed, 16 Jan 2019 16:59:27 +0900
Subject: [PATCH] mm: hwpoison: use do_send_sig_info() instead of force_sig()

Currently memory_failure() races with the exit of the target process,
which can result in a kernel crash due to a NULL pointer dereference.

The root cause is that memory_failure() uses force_sig() to forcibly
kill asynchronous processes (i.e. processes other than the current
context).  As discussed years ago for the OOM fixes in the thread
https://lkml.org/lkml/2010/6/8/236, this is not the right thing to do.
The OOM killer solved the same problem by switching to
do_send_sig_info() in commit d2d393099de2 ("signal: oom_kill_task: use
SEND_SIG_FORCED instead of force_sig()"), so this patch does the same
for hwpoison.  do_send_sig_info() takes siglock properly via
lock_task_sighand(), so it is free from the reported race.

I confirmed that the reported bug reproduces after inserting some
delay in kill_procs(), and that it no longer reproduces with this
patch applied.

Note that memory_failure() can also send another type of signal via
force_sig_mceerr(); the reported race should not happen there because
force_sig_mceerr() is called only for synchronous processes (i.e.
BUS_MCEERR_AR is raised only when a process itself accesses the
corrupted memory).

Reported-by: Jane Chu <jane.chu@oracle.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
 mm/memory-failure.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
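
For readers following the locking argument in the changelog above, the point
of the API switch is that do_send_sig_info() takes siglock through
lock_task_sighand(), which revalidates tsk->sighand under RCU and bails out
cleanly if the task has already gone through __exit_signal().  A minimal
sketch of that pattern is below; it is illustrative only, and
sketch_sigkill_pending() is a made-up helper, not code from the patch or
from the kernel tree:

	#include <linux/sched/signal.h>

	/*
	 * Illustrative sketch only; not part of the patch and not verbatim
	 * kernel source.  lock_task_sighand() revalidates tsk->sighand under
	 * RCU and returns NULL once the task has passed __exit_signal(),
	 * which is exactly the case where an unconditional dereference (as
	 * in the force_sig() path) blows up.
	 */
	static bool sketch_sigkill_pending(struct task_struct *tsk)
	{
		unsigned long flags;
		bool pending;

		if (!lock_task_sighand(tsk, &flags))
			return false;	/* task is exiting; nothing to look at */
		pending = sigismember(&tsk->pending.signal, SIGKILL);
		unlock_task_sighand(tsk, &flags);
		return pending;
	}

The patched kill_procs() simply delegates to do_send_sig_info(), which, as
the changelog notes, applies the same check internally before touching
siglock.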

Comments

Dan Williams Jan. 16, 2019, 4:55 p.m. UTC | #1
On Wed, Jan 16, 2019 at 1:33 AM Naoya Horiguchi
<n-horiguchi@ah.jp.nec.com> wrote:
>
> [ CCed Andrew and linux-mm ]
>
> On Fri, Jan 11, 2019 at 08:14:02AM +0000, Horiguchi Naoya(堀口 直也) wrote:
> > Hi Dan, Jane,
> >
> > Thanks for the report.
> >
> > On Wed, Jan 09, 2019 at 03:49:32PM -0800, Dan Williams wrote:
> > > [ switch to text mail, add lkml and Naoya ]
> > >
> > > On Wed, Jan 9, 2019 at 12:19 PM Jane Chu <jane.chu@oracle.com> wrote:
> > ...
> > > > 3. The hardware consists the latest revision CPU and Intel NVDIMM, we suspected
> > > >    the CPU faulty because it generated MCE over PMEM UE in a unlikely high
> > > >    rate for any reasonable NVDIMM (like a few per 24hours).
> > > >
> > > > After swapping the CPU, the problem stopped reproducing.
> > > >
> > > > But one could argue that perhaps the faulty CPU exposed a small race window
> > > > from collect_procs() to unmap_mapping_range() and to kill_procs(), hence
> > > > caught the kernel  PMEM error handler off guard.
> > >
> > > There's definitely a race, and the implementation is buggy as can be
> > > seen in __exit_signal:
> > >
> > >         sighand = rcu_dereference_check(tsk->sighand,
> > >                                         lockdep_tasklist_lock_is_held());
> > >         spin_lock(&sighand->siglock);
> > >
> > > ...the memory-failure path needs to hold the proper locks before it
> > > can assume that de-referencing tsk->sighand is valid.
> > >
> > > > Also note, the same workload on the same faulty CPU were run on Linux prior to
> > > > the 4.19 PMEM error handling and did not encounter kernel crash, probably because
> > > > the prior HWPOISON handler did not force SIGKILL?
> > >
> > > Before 4.19 this test should result in a machine-check reboot, not
> > > much better than a kernel crash.
> > >
> > > > Should we not to force the SIGKILL, or find a way to close the race window?
> > >
> > > The race should be closed by holding the proper tasklist and rcu read lock(s).
> >
> > This reasoning and proposal sound right to me. I'm trying to reproduce
> > this race (for non-pmem case,) but no luck for now. I'll investigate more.
>
> I wrote/tested a patch for this issue.
> I think that switching signal API effectively does proper locking.
>
> Thanks,
> Naoya Horiguchi
> ---
> From 16dbf6105ff4831f73276d79d5df238ab467de76 Mon Sep 17 00:00:00 2001
> From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Date: Wed, 16 Jan 2019 16:59:27 +0900
> Subject: [PATCH] mm: hwpoison: use do_send_sig_info() instead of force_sig()
>
> Currently memory_failure() is racy against process's exiting,
> which results in kernel crash by null pointer dereference.
>
> The root cause is that memory_failure() uses force_sig() to forcibly
> kill asynchronous (meaning not in the current context) processes.  As
> discussed in thread https://lkml.org/lkml/2010/6/8/236 years ago for
> OOM fixes, this is not a right thing to do.  OOM solves this issue by
> using do_send_sig_info() as done in commit d2d393099de2 ("signal:
> oom_kill_task: use SEND_SIG_FORCED instead of force_sig()"), so this
> patch is suggesting to do the same for hwpoison.  do_send_sig_info()
> properly accesses to siglock with lock_task_sighand(), so is free from
> the reported race.
>
> I confirmed that the reported bug reproduces with inserting some delay
> in kill_procs(), and it never reproduces with this patch.
>
> Note that memory_failure() can send another type of signal using
> force_sig_mceerr(), and the reported race shouldn't happen on it
> because force_sig_mceerr() is called only for synchronous processes
> (i.e. BUS_MCEERR_AR happens only when some process accesses to the
> corrupted memory.)
>
> Reported-by: Jane Chu <jane.chu@oracle.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: stable@vger.kernel.org
> Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> ---

Looks good to me.

Reviewed-by: Dan Williams <dan.j.williams@intel.com>

...but it would still be good to get a Tested-by from Jane.
Jane Chu Jan. 16, 2019, 5:20 p.m. UTC | #2
On 1/16/2019 8:55 AM, Dan Williams wrote:
> On Wed, Jan 16, 2019 at 1:33 AM Naoya Horiguchi
> <n-horiguchi@ah.jp.nec.com> wrote:
>> [ CCed Andrew and linux-mm ]
>>
>> On Fri, Jan 11, 2019 at 08:14:02AM +0000, Horiguchi Naoya(堀口 直也) wrote:
>>> Hi Dan, Jane,
>>>
>>> Thanks for the report.
>>>
>>> On Wed, Jan 09, 2019 at 03:49:32PM -0800, Dan Williams wrote:
>>>> [ switch to text mail, add lkml and Naoya ]
>>>>
>>>> On Wed, Jan 9, 2019 at 12:19 PM Jane Chu <jane.chu@oracle.com> wrote:
>>> ...
>>>>> 3. The hardware consists the latest revision CPU and Intel NVDIMM, we suspected
>>>>>     the CPU faulty because it generated MCE over PMEM UE in a unlikely high
>>>>>     rate for any reasonable NVDIMM (like a few per 24hours).
>>>>>
>>>>> After swapping the CPU, the problem stopped reproducing.
>>>>>
>>>>> But one could argue that perhaps the faulty CPU exposed a small race window
>>>>> from collect_procs() to unmap_mapping_range() and to kill_procs(), hence
>>>>> caught the kernel  PMEM error handler off guard.
>>>> There's definitely a race, and the implementation is buggy as can be
>>>> seen in __exit_signal:
>>>>
>>>>          sighand = rcu_dereference_check(tsk->sighand,
>>>>                                          lockdep_tasklist_lock_is_held());
>>>>          spin_lock(&sighand->siglock);
>>>>
>>>> ...the memory-failure path needs to hold the proper locks before it
>>>> can assume that de-referencing tsk->sighand is valid.
>>>>
>>>>> Also note, the same workload on the same faulty CPU were run on Linux prior to
>>>>> the 4.19 PMEM error handling and did not encounter kernel crash, probably because
>>>>> the prior HWPOISON handler did not force SIGKILL?
>>>> Before 4.19 this test should result in a machine-check reboot, not
>>>> much better than a kernel crash.
>>>>
>>>>> Should we not to force the SIGKILL, or find a way to close the race window?
>>>> The race should be closed by holding the proper tasklist and rcu read lock(s).
>>> This reasoning and proposal sound right to me. I'm trying to reproduce
>>> this race (for non-pmem case,) but no luck for now. I'll investigate more.
>> I wrote/tested a patch for this issue.
>> I think that switching signal API effectively does proper locking.
>>
>> Thanks,
>> Naoya Horiguchi
>> ---
>>  From 16dbf6105ff4831f73276d79d5df238ab467de76 Mon Sep 17 00:00:00 2001
>> From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
>> Date: Wed, 16 Jan 2019 16:59:27 +0900
>> Subject: [PATCH] mm: hwpoison: use do_send_sig_info() instead of force_sig()
>>
>> Currently memory_failure() is racy against process's exiting,
>> which results in kernel crash by null pointer dereference.
>>
>> The root cause is that memory_failure() uses force_sig() to forcibly
>> kill asynchronous (meaning not in the current context) processes.  As
>> discussed in thread https://lkml.org/lkml/2010/6/8/236 years ago for
>> OOM fixes, this is not a right thing to do.  OOM solves this issue by
>> using do_send_sig_info() as done in commit d2d393099de2 ("signal:
>> oom_kill_task: use SEND_SIG_FORCED instead of force_sig()"), so this
>> patch is suggesting to do the same for hwpoison.  do_send_sig_info()
>> properly accesses to siglock with lock_task_sighand(), so is free from
>> the reported race.
>>
>> I confirmed that the reported bug reproduces with inserting some delay
>> in kill_procs(), and it never reproduces with this patch.
>>
>> Note that memory_failure() can send another type of signal using
>> force_sig_mceerr(), and the reported race shouldn't happen on it
>> because force_sig_mceerr() is called only for synchronous processes
>> (i.e. BUS_MCEERR_AR happens only when some process accesses to the
>> corrupted memory.)
>>
>> Reported-by: Jane Chu <jane.chu@oracle.com>
>> Cc: Dan Williams <dan.j.williams@intel.com>
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
>> ---
> Looks good to me.
>
> Reviewed-by: Dan Williams <dan.j.williams@intel.com>
>
> ...but it would still be good to get a Tested-by from Jane.

Sure, will let you know how the test goes.

Thanks!
-jane
Jane Chu Jan. 16, 2019, 5:56 p.m. UTC | #3
Hi, Naoya,

On 1/16/2019 1:30 AM, Naoya Horiguchi wrote:
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 7c72f2a95785..831be5ff5f4d 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -372,7 +372,8 @@ static void kill_procs(struct list_head *to_kill, int forcekill, bool fail,
>   			if (fail || tk->addr_valid == 0) {
>   				pr_err("Memory failure: %#lx: forcibly killing %s:%d because of failure to unmap corrupted page\n",
>   				       pfn, tk->tsk->comm, tk->tsk->pid);
> -				force_sig(SIGKILL, tk->tsk);
> +				do_send_sig_info(SIGKILL, SEND_SIG_PRIV,
> +						 tk->tsk, PIDTYPE_PID);
>   			}
>   

Since we don't care the return from do_send_sig_info(), would you mind to
prefix it with (void) ?

thanks!
-jane
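
Concretely, the style suggested here would make the new call in kill_procs()
read as below (illustrative only; the patch as posted does not include the
cast):

				(void) do_send_sig_info(SIGKILL, SEND_SIG_PRIV,
							 tk->tsk, PIDTYPE_PID);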
Naoya Horiguchi Jan. 16, 2019, 11:32 p.m. UTC | #4
Hi Jane,

On Wed, Jan 16, 2019 at 09:56:02AM -0800, Jane Chu wrote:
> Hi, Naoya,
> 
> On 1/16/2019 1:30 AM, Naoya Horiguchi wrote:
> 
>     diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>     index 7c72f2a95785..831be5ff5f4d 100644
>     --- a/mm/memory-failure.c
>     +++ b/mm/memory-failure.c
>     @@ -372,7 +372,8 @@ static void kill_procs(struct list_head *to_kill, int forcekill, bool fail,
>                             if (fail || tk->addr_valid == 0) {
>                                     pr_err("Memory failure: %#lx: forcibly killing %s:%d because of failure to unmap corrupted page\n",
>                                            pfn, tk->tsk->comm, tk->tsk->pid);
>     -                               force_sig(SIGKILL, tk->tsk);
>     +                               do_send_sig_info(SIGKILL, SEND_SIG_PRIV,
>     +                                                tk->tsk, PIDTYPE_PID);
>                             }
> 
> 
> Since we don't care the return from do_send_sig_info(), would you mind to
> prefix it with (void) ?

Sorry, I'm not sure about the benefit to do casting the return value
just being ignored, so personally I'd like keeping the code simple.
Do you have some in mind?

- Naoya
Jane Chu Jan. 17, 2019, 1:07 a.m. UTC | #5
On 1/16/2019 3:32 PM, Naoya Horiguchi wrote:
> Hi Jane,
> 
> On Wed, Jan 16, 2019 at 09:56:02AM -0800, Jane Chu wrote:
>> Hi, Naoya,
>>
>> On 1/16/2019 1:30 AM, Naoya Horiguchi wrote:
>>
>>      diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>>      index 7c72f2a95785..831be5ff5f4d 100644
>>      --- a/mm/memory-failure.c
>>      +++ b/mm/memory-failure.c
>>      @@ -372,7 +372,8 @@ static void kill_procs(struct list_head *to_kill, int forcekill, bool fail,
>>                              if (fail || tk->addr_valid == 0) {
>>                                      pr_err("Memory failure: %#lx: forcibly killing %s:%d because of failure to unmap corrupted page\n",
>>                                             pfn, tk->tsk->comm, tk->tsk->pid);
>>      -                               force_sig(SIGKILL, tk->tsk);
>>      +                               do_send_sig_info(SIGKILL, SEND_SIG_PRIV,
>>      +                                                tk->tsk, PIDTYPE_PID);
>>                              }
>>
>>
>> Since we don't care the return from do_send_sig_info(), would you mind to
>> prefix it with (void) ?
> 
> Sorry, I'm not sure about the benefit to do casting the return value
> just being ignored, so personally I'd like keeping the code simple.
> Do you have some in mind?

It's just coding style I'm used to, no big deal.
Up to you to decide. :)

thanks,
-jane

> 
> - Naoya
>
William Kucharski Jan. 17, 2019, 9:44 a.m. UTC | #6
> On Jan 16, 2019, at 6:07 PM, Jane Chu <jane.chu@oracle.com> wrote:
> 
> It's just coding style I'm used to, no big deal.
> Up to you to decide. :)

Personally I like a (void) cast as it's pretty long-standing syntactic sugar to cast a call that returns a value we don't care about to (void) to show we know it returns a value and we don't care.

Without it, it may suggest we either didn't know it returned a value or that we neglected to check the return value.

However, in current use elsewhere (e.g. in send_sig_all() and __oom_kill_process()), no such (void) cast is added, so it seems better to match current usage elsewhere in the kernel.

Reviewed-by: William Kucharski <william.kucharski@oracle.com>

Patch

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 7c72f2a95785..831be5ff5f4d 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -372,7 +372,8 @@  static void kill_procs(struct list_head *to_kill, int forcekill, bool fail,
 			if (fail || tk->addr_valid == 0) {
 				pr_err("Memory failure: %#lx: forcibly killing %s:%d because of failure to unmap corrupted page\n",
 				       pfn, tk->tsk->comm, tk->tsk->pid);
-				force_sig(SIGKILL, tk->tsk);
+				do_send_sig_info(SIGKILL, SEND_SIG_PRIV,
+						 tk->tsk, PIDTYPE_PID);
 			}
 
 			/*
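
For reference, the kill_procs() branch as it reads with the hunk above
applied (reconstructed from the diff; surrounding context unchanged):

			if (fail || tk->addr_valid == 0) {
				pr_err("Memory failure: %#lx: forcibly killing %s:%d because of failure to unmap corrupted page\n",
				       pfn, tk->tsk->comm, tk->tsk->pid);
				do_send_sig_info(SIGKILL, SEND_SIG_PRIV,
						 tk->tsk, PIDTYPE_PID);
			}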