
[RFC,v2,3/4] acpi: apei: Do not panic() when correctable errors are marked as fatal.

Message ID 20180416215903.7318-4-mr.nuke.me@gmail.com (mailing list archive)
State RFC, archived

Commit Message

Alex G. April 16, 2018, 9:59 p.m. UTC
Firmware is evil:
 - ACPI was created to "try and make the 'ACPI' extensions somehow
 Windows specific" in order to "work well with NT and not the others
 even if they are open"
 - EFI was created to hide "secret" registers from the OS.
 - UEFI was created to allow compromising an otherwise secure OS.

Never has firmware been created to solve a problem or simplify an
otherwise cumbersome process. It is of no surprise then, that
firmware nowadays intentionally crashes an OS.

One simple way to do that is to mark GHES errors as fatal. Firmware
knows and even expects that an OS will crash in this case. And most
OSes do.

PCIe errors are notorious for having different definitions of "fatal".
In ACPI, and other firmware standards, 'fatal' means the machine is
about to explode and needs to be reset. In PCIe, on the other hand,
fatal means that the link to a device has died. In the hotplug world
of PCIe, this is akin to a USB disconnect. From that view, the "fatal"
loss of a link is a normal event. To allow a machine to crash in this
case is downright idiotic.

To solve this, implement an IRQ safe handler for AER. This makes sure
we have enough information to invoke the full AER handler later down
the road, and tells ghes_notify_nmi that "It's all cool".
ghes_notify_nmi() then gets calmed down a little, and doesn't panic().

Signed-off-by: Alexandru Gagniuc <mr.nuke.me@gmail.com>
---
 drivers/acpi/apei/ghes.c | 44 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 42 insertions(+), 2 deletions(-)

Comments

Borislav Petkov April 18, 2018, 5:54 p.m. UTC | #1
On Mon, Apr 16, 2018 at 04:59:02PM -0500, Alexandru Gagniuc wrote:
> Firmware is evil:
>  - ACPI was created to "try and make the 'ACPI' extensions somehow
>  Windows specific" in order to "work well with NT and not the others
>  even if they are open"
>  - EFI was created to hide "secret" registers from the OS.
>  - UEFI was created to allow compromising an otherwise secure OS.
> 
> Never has firmware been created to solve a problem or simplify an
> otherwise cumbersome process. It is of no surprise then, that
> firmware nowadays intentionally crashes an OS.

I don't believe I'm saying this but, get rid of that rant. Even though I
agree, it doesn't belong in a commit message.

> 
> One simple way to do that is to mark GHES errors as fatal. Firmware
> knows and even expects that an OS will crash in this case. And most
> OSes do.
> 
> PCIe errors are notorious for having different definitions of "fatal".
> In ACPI, and other firmware standards, 'fatal' means the machine is
> about to explode and needs to be reset. In PCIe, on the other hand,
> fatal means that the link to a device has died. In the hotplug world
> of PCIe, this is akin to a USB disconnect. From that view, the "fatal"
> loss of a link is a normal event. To allow a machine to crash in this
> case is downright idiotic.
> 
> To solve this, implement an IRQ safe handler for AER. This makes sure
> we have enough information to invoke the full AER handler later down
> the road, and tells ghes_notify_nmi that "It's all cool".
> ghes_notify_nmi() then gets calmed down a little, and doesn't panic().
> 
> Signed-off-by: Alexandru Gagniuc <mr.nuke.me@gmail.com>
> ---
>  drivers/acpi/apei/ghes.c | 44 ++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 42 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
> index 2119c51b4a9e..e0528da4e8f8 100644
> --- a/drivers/acpi/apei/ghes.c
> +++ b/drivers/acpi/apei/ghes.c
> @@ -481,12 +481,26 @@ static int ghes_handle_aer(struct acpi_hest_generic_data *gdata, int sev)
>  	return ghes_severity(gdata->error_severity);
>  }
>  
> +static int ghes_handle_aer_irqsafe(struct acpi_hest_generic_data *gdata,
> +				   int sev)
> +{
> +	struct cper_sec_pcie *pcie_err = acpi_hest_get_payload(gdata);
> +
> +	/* The system can always recover from AER errors. */
> +	if (pcie_err->validation_bits & CPER_PCIE_VALID_DEVICE_ID &&
> +		pcie_err->validation_bits & CPER_PCIE_VALID_AER_INFO)
> +		return CPER_SEV_RECOVERABLE;
> +
> +	return ghes_severity(gdata->error_severity);
> +}

Well, Tyler touched that AER error severity handling recently and we had
it all nicely documented in the comment above ghes_handle_aer().

Your ghes_handle_aer_irqsafe() graft basically bypasses
ghes_handle_aer() instead of incorporating in it.

If all you wanna say is, the severity computation should go through all
the sections and look at each error's severity before making a decision,
then add that to ghes_severity() instead of doing that "deferrable"
severity dance.

And add the changes to the policy to the comment above
ghes_handle_aer(). I don't want any changes from people coming and going
and leaving us scratching heads why we did it this way.

And no need for those handlers and so on - make it simple first - then we
can talk more complex handling.
Alex G. April 19, 2018, 2:57 p.m. UTC | #2
On 04/18/2018 12:54 PM, Borislav Petkov wrote:
> On Mon, Apr 16, 2018 at 04:59:02PM -0500, Alexandru Gagniuc wrote:
>> Firmware is evil:
>>  - ACPI was created to "try and make the 'ACPI' extensions somehow
>>  Windows specific" in order to "work well with NT and not the others
>>  even if they are open"
>>  - EFI was created to hide "secret" registers from the OS.
>>  - UEFI was created to allow compromising an otherwise secure OS.
>>
>> Never has firmware been created to solve a problem or simplify an
>> otherwise cumbersome process. It is of no surprise then, that
>> firmware nowadays intentionally crashes an OS.
> 
> I don't believe I'm saying this but, get rid of that rant. Even though I
> agree, it doesn't belong in a commit message.

Of course.

(snip)
> Well, Tyler touched that AER error severity handling recently and we had
> it all nicely documented in the comment above ghes_handle_aer().
> 
> Your ghes_handle_aer_irqsafe() graft basically bypasses
> ghes_handle_aer() instead of incorporating in it.
> 
> If all you wanna say is, the severity computation should go through all
> the sections and look at each error's severity before making a decision,
> then add that to ghes_severity() instead of doing that "deferrable"
> severity dance.

ghes_severity() is a one-to-one mapping from a set of unsorted
severities to monotonically increasing numbers. The "one-to-one" mapping
part of the sentence is obvious from the function name. To change it to
parse the entire GHES would completely destroy this, and I think it
would apply policy in the wrong place.
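
(For reference, ghes_severity() today is essentially the following mapping
from CPER severities to the kernel's GHES_SEV_* scale -- quoted from
memory, so treat it as a sketch rather than the exact tree state:)

static int ghes_severity(int severity)
{
	switch (severity) {
	case CPER_SEV_INFORMATIONAL:
		return GHES_SEV_NO;
	case CPER_SEV_CORRECTED:
		return GHES_SEV_CORRECTED;
	case CPER_SEV_RECOVERABLE:
		return GHES_SEV_RECOVERABLE;
	case CPER_SEV_FATAL:
		return GHES_SEV_PANIC;
	default:
		/* Unknown severity: assume the worst. */
		return GHES_SEV_PANIC;
	}
}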

Should I do that, I might have to call it something like
ghes_parse_and_apply_policy_to_severity(). But that misses the whole
point of these changes.

I would like to get to the handlers first, and then decide if things are
okay or not, but the ARM guys didn't exactly like this approach. It
seems there are quite a few per-error-type considerations.
The logical step is to associate these considerations with the specific
error type they apply to, rather than hide them as a decision under an
innocent ghes_severity().
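
(Concretely, that association lives in the per-section-type handler table
from patch 2/4 of this series; get_handler() is not visible in the hunk
below, but it is essentially a GUID lookup over that table. A rough sketch
only -- the array name and exact shape are whatever patch 2/4 defines:)

static const struct ghes_handler *get_handler(const guid_t *section_type)
{
	size_t i;

	/* Linear search of the ghes_handlers[] table by section GUID. */
	for (i = 0; i < ARRAY_SIZE(ghes_handlers); i++) {
		if (guid_equal(section_type, ghes_handlers[i].error_uuid))
			return &ghes_handlers[i];
	}

	return NULL;	/* Unknown section type: no specific handler. */
}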

> And add the changes to the policy to the comment above
> ghes_handle_aer(). I don't want any changes from people coming and going
> and leaving us scratching heads why we did it this way.
>
> And no need for those handlers and so on - make it simple first - then we
> can talk more complex handling.

I don't want to leave people scratching their heads, but I also don't
want to make AER a special case without having a generic way to handle
these cases. People are just as likely to scratch their heads
wondering why AER is a special case and everything else crashes.

Maybe it's better to move the AER handling to NMI/IRQ context, since
ghes_handle_aer() is only scheduling the real AER handler, and is IRQ
safe. I'm scratching my head about why we're messing with IRQ work from
NMI context, instead of just scheduling a regular handler to take care
of things.

Alex

James Morse April 19, 2018, 3:35 p.m. UTC | #3
Hi Alex,

(I haven't read through all this yet, just on this one:)

On 04/19/2018 03:57 PM, Alex G. wrote:
 > Maybe it's better to move the AER handling to NMI/IRQ context, since
 > ghes_handle_aer() is only scheduling the real AER handler, and is IRQ
 > safe. I'm scratching my head about why we're messing with IRQ work from
 > NMI context, instead of just scheduling a regular handler to take care
 > of things.

We can't touch schedule_work_on() from NMI context as it takes spinlocks and
disables interrupts (see __queue_work()). The NMI may have interrupted
IRQ-context code that was already holding the same locks.

IRQ-work behaves differently: it uses an llist for the work and an arch
code hook to raise a self-IPI.
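
(The pattern, in a self-contained sketch with illustrative names rather
than the exact ghes.c symbols: the NMI handler only touches an NMI-safe
llist and queues an irq_work; the irq_work callback runs later in IRQ
context via a self-IPI and may take locks or schedule regular work.)

#include <linux/init.h>
#include <linux/irq_work.h>
#include <linux/llist.h>
#include <linux/slab.h>

struct err_node {
	struct llist_node llnode;
	/* copy of the error record would live here */
};

static LLIST_HEAD(pending_errors);
static struct irq_work err_irq_work;

static void err_irq_work_cb(struct irq_work *work)
{
	struct llist_node *list = llist_del_all(&pending_errors);
	struct err_node *node, *tmp;

	/* IRQ context: spinlocks and schedule_work() are fine from here. */
	llist_for_each_entry_safe(node, tmp, list, llnode)
		kfree(node);	/* placeholder for the real processing */
}

/* Called from the NMI handler: lock-free operations only. */
static void err_queue_from_nmi(struct err_node *node)
{
	llist_add(&node->llnode, &pending_errors);
	irq_work_queue(&err_irq_work);
}

static int __init err_example_init(void)
{
	init_irq_work(&err_irq_work, err_irq_work_cb);
	return 0;
}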


Thanks,

James
Borislav Petkov April 19, 2018, 3:40 p.m. UTC | #4
On Thu, Apr 19, 2018 at 09:57:07AM -0500, Alex G. wrote:
> ghes_severity() is a one-to-one mapping from a set of unsorted
> severities to monotonically increasing numbers. The "one-to-one" mapping
> part of the sentence is obvious from the function name. To change it to
> parse the entire GHES would completely destroy this, and I think it
> would apply policy in the wrong place.

So do a wrapper or whatever. Do a ghes_compute_severity() or however you
would wanna call it and do the iteration there.

> Should I do that, I might have to call it something like
> ghes_parse_and_apply_policy_to_severity(). But that misses the whole
> point of these changes.

What policy? You simply compute the severity like we do in the mce code.

> I would like to get to the handlers first, and then decide if things are
> okay or not,

Why? Give me an example why you'd handle an error first and then decide
whether we're ok or not?

Usually, the error handler decides that in one place. So what exactly
are you trying to do differently that doesn't fit that flow?

> I don't want to leave people scratching their heads, but I also don't
> want to make AER a special case without having a generic way to handle
> these cases. People are just as susceptible to scratch their heads
> wondering why AER is a special case and everything else crashes.

Not if it is properly done *and* documented why we're applying the
respective policy for the error type.

> Maybe it's better to move the AER handling to NMI/IRQ context, since
> ghes_handle_aer() is only scheduling the real AER handler, and is IRQ
> safe. I'm scratching my head about why we're messing with IRQ work from
> NMI context, instead of just scheduling a regular handler to take care
> of things.

No, first pls explain what exactly you're trying to do and then we can
talk about how to do it. Btw, a real-life example to accompany that
intention goes a long way.

Thx.
Alex G. April 19, 2018, 4:26 p.m. UTC | #5
On 04/19/2018 10:40 AM, Borislav Petkov wrote:
> On Thu, Apr 19, 2018 at 09:57:07AM -0500, Alex G. wrote:
>> ghes_severity() is a one-to-one mapping from a set of unsorted
>> severities to monotonically increasing numbers. The "one-to-one" mapping
>> part of the sentence is obvious from the function name. To change it to
>> parse the entire GHES would completely destroy this, and I think it
>> would apply policy in the wrong place.
> 
> So do a wrapper or whatever. Do a ghes_compute_severity() or however you
> would wanna call it and do the iteration there.

That doesn't sound right. There isn't a formula to compute. What we're
doing is looking at individual error sources and deciding which errors we
can handle, based both on the error and on our ability to handle it.

>> Should I do that, I might have to call it something like
>> ghes_parse_and_apply_policy_to_severity(). But that misses the whole
>> point of these changes.
> 
> What policy? You simply compute the severity like we do in the mce code.

As explained above, our ability to resolve an error depends on the
interaction between the error and error handler. This is very closely
tied to the capabilities of each individual handler. I'll do it your
way, but I don't think ignoring this tight coupling is the right thing
to do.

> 
>> I would like to get to the handlers first, and then decide if things are
>> okay or not,
> 
> Why? Give me an example why you'd handle an error first and then decide
> whether we're ok or not?
> 
> Usually, the error handler decides that in one place. So what exactly
> are you trying to do differently that doesn't fit that flow?

In the NMI case you don't make it to the error handler. James and I beat
this subject to the afterlife in v1.

>> I don't want to leave people scratching their heads, but I also don't
>> want to make AER a special case without having a generic way to handle
>> these cases. People are just as susceptible to scratch their heads
>> wondering why AER is a special case and everything else crashes.
> 
> Not if it is properly done *and* documented why we're applying the
> respective policy for the error type.
> 
>> Maybe it's better to move the AER handling to NMI/IRQ context, since
>> ghes_handle_aer() is only scheduling the real AER handler, and is IRQ
>> safe. I'm scratching my head about why we're messing with IRQ work from
>> NMI context, instead of just scheduling a regular handler to take care
>> of things.
> 
> No, first pls explain what exactly you're trying to do

I realize v1 was quite a while back, so I'll take this opportunity to
restate:

At a very high level, I'm working with Dell on improving server
reliability, with a focus on NVME hotplug and surprise removal. One of
the features we don't support is surprise removal of NVME drives;
hotplug is supported with 'prepare to remove'. This is one of the
reasons NVME is not on feature parity with SAS and SATA.

My role is to solve this issue on linux, and to not worry about other
OSes. This puts me in a position to have a linux-centric view of the
problem, as opposed to the more common firmware-centric view.

Part of solving the surprise removal issue involves improving FFS error
handling. This is required because the servers which are shipping use
FFS instead of native error notifications. As part of extensive testing,
I have found the NMI handler to be the most common cause of crashes, and
hence this series.

> and then we can talk about how to do it.

Your move.

> Btw, a real-life example to accompany that intention goes a long way.

I'm not sure if this is the example you're looking for, but
take an r740xd server, and slowly unplug an Intel NVMe drive at an
angle. You're likely to crash the machine.

Alex
Alex G. April 19, 2018, 4:27 p.m. UTC | #6
On 04/19/2018 10:35 AM, James Morse wrote:
> Hi Alex,
> 
> (I haven't read through all this yet, just on this one:)
> 
> On 04/19/2018 03:57 PM, Alex G. wrote:
>> Maybe it's better to move the AER handling to NMI/IRQ context, since
>> ghes_handle_aer() is only scheduling the real AER handler, and is IRQ
>> safe. I'm scratching my head about why we're messing with IRQ work from
>> NMI context, instead of just scheduling a regular handler to take care
>> of things.
> 
> We can't touch schedule_work_on() from NMI context as it takes spinlocks
> and
> disables interrupts. (see __queue_work()) The NMI may have interrupted
> IRQ-context code
> that was already holding the same locks.
> 
> IRQ-work behaves differently, it uses an llist for the work and an arch
> code hook
> to raise a self-IPI.

That makes sense. Thank you!

Alex

> 
> Thanks,
> 
> James
Borislav Petkov April 19, 2018, 4:45 p.m. UTC | #7
On Thu, Apr 19, 2018 at 11:26:57AM -0500, Alex G. wrote:
> At a very high level, I'm working with Dell on improving server
> reliability, with a focus on NVME hotplug and surprise removal. One of
> the features we don't support is surprise removal of NVME drives;
> hotplug is supported with 'prepare to remove'. This is one of the
> reasons NVME is not on feature parity with SAS and SATA.

Ok, first question: is surprise removal something purely mechanical or
do you need firmware support for it? In the sense that you need to tell
the firmware that you will be removing the drive.

I'm sceptical, though, as it has "surprise" in the name so I'm guessing
the firmware doesn't know about it, the drive physically disappears and
the FW starts spewing PCIe errors...

> I'm not sure if this is the example you're looking for, but
> take an r740xd server, and slowly unplug an Intel NVMe drive at an
> angle. You're likely to crash the machine.

No no, that's actually a great example!

Thx.
Alex G. April 19, 2018, 5:40 p.m. UTC | #8
SURPRISE!!!

On 04/19/2018 11:45 AM, Borislav Petkov wrote:
> On Thu, Apr 19, 2018 at 11:26:57AM -0500, Alex G. wrote:
>> At a very high level, I'm working with Dell on improving server
>> reliability, with a focus on NVME hotplug and surprise removal. One of
>> the features we don't support is surprise removal of NVME drives;
>> hotplug is supported with 'prepare to remove'. This is one of the
>> reasons NVME is not on feature parity with SAS and SATA.
> 
> Ok, first question: is surprise removal something purely mechanical or
> do you need firmware support for it? In the sense that you need to tell
> the firmware that you will be removing the drive.

SURPRISE!!! removal only means that the system was not expecting the
drive to be yanked. An example is removing a USB flash drive without
first unmounting it and removing the usb device (echo 0 >
/sys/bus/usb/.../authorized).

PCIe removal and hotplug is fairly well spec'd, and NVMe rides on that
without issue. It's much easier and faster for an OS to just follow the
spec and handle things on its own.

Interference from firmware only comes in with EFI/ACPI and FFS. From a
purely technical point of view, firmware has nothing to do with this.
From a firmware-centric view, unfortunately, firmware wants the ability
to log errors to the BMC... and hotplug events.

Does firmware need to know that a drive will be removed? I'm not aware
of any such requirement. I think the main purpose of 'prepare to remove'
is to shut down any traffic on the link. This way, link removal does not
generate PCIe errors which may otherwise end up crashing the OS.


> I'm sceptical, though, as it has "surprise" in the name so I'm guessing
> the firmware doesn't know about it, the drive physically disappears and
> the FW starts spewing PCIe errors...

It's not the FW that spews out errors. It's the hardware. It's very
likely that a device which is actively used will have several DMA
transactions already queued up and lots of traffic going through the
link. When the link dies and the traffic can't be delivered, Unsupported
Request errors are very common.

On the r740xd, FW just hides those errors from the OS with no further
notification. On this machine BIOS sets things up such that non-posted
requests report fatal (PCIe) errors. FW still tries very hard to hide
this from the OS, and I think the heuristic is that if the drive
physical presence is gone, don't even report the error.

There are a lot of problems with the approach, but one thing to keep in
mind is that the FW was written at a time when OSes were more than happy
to crash at any PCIe error reported through GHES.

Alex

>> I'm not sure if this is the example you're looking for, but
>> take an r740xd server, and slowly unplug an Intel NVMe drive at an
>> angle. You're likely to crash the machine.
> 
> No no, that's actually a great example!
> 
> Thx.
> 
Borislav Petkov April 19, 2018, 7:03 p.m. UTC | #9
(snip useful explanation).

On Thu, Apr 19, 2018 at 12:40:54PM -0500, Alex G. wrote:
> On the r740xd, FW just hides those errors from the OS with no further
> notification. On this machine BIOS sets things up such that non-posted
> requests report fatal (PCIe) errors. FW still tries very hard to hide
> this from the OS, and I think the heuristic is that if the drive
> physical presence is gone, don't even report the error.

Ok, second question: can you detect from the error signatures alone that
it was a surprise removal? How does such an error look like, in detail?
Got error logs somewhere to dump?

Thx.
Alex G. April 19, 2018, 10:55 p.m. UTC | #10
On 04/19/2018 02:03 PM, Borislav Petkov wrote:
> (snip useful explanation).
> 
> On Thu, Apr 19, 2018 at 12:40:54PM -0500, Alex G. wrote:
>> On the r740xd, FW just hides those errors from the OS with no further
>> notification. On this machine BIOS sets things up such that non-posted
>> requests report fatal (PCIe) errors. FW still tries very hard to hide
>> this from the OS, and I think the heuristic is that if the drive
>> physical presence is gone, don't even report the error.
> 
> Ok, second question: can you detect from the error signatures alone that
> it was a surprise removal? 

I suppose you could make some inference, given the timing of other
events going on around the crash. It's not uncommon to see a "Card
not present" event around drive removal.

Since the presence detect pin breaks last, you might not get that
interrupt for a long while. In that case it's much harder to determine
if you're seeing a SURPRISE!!! removal or some other fault.

I don't think you can use GHES alone to determine the nature of the
event. There is not a 1:1 mapping from the set of things going wrong to
the set of PCIe errors.

> How does such an error look like, in detail?

It's green on the soft side, with lots of red accents, as well as some
textured white shades:

[   51.414616] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Down
[   51.414634] pciehp 0000:b0:05.0:pcie204: Slot(179): Link Down
[   52.703343] FIRMWARE BUG: Firmware sent fatal error that we were able
to correct
[   52.703345] BROKEN FIRMWARE: Complain to your hardware vendor
[   52.703347] {1}[Hardware Error]: Hardware error from APEI Generic
Hardware Error Source: 1
[   52.703358] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Up
[   52.711616] {1}[Hardware Error]: event severity: fatal
[   52.716754] {1}[Hardware Error]:  Error 0, type: fatal
[   52.721891] {1}[Hardware Error]:   section_type: PCIe error
[   52.727463] {1}[Hardware Error]:   port_type: 6, downstream switch port
[   52.734075] {1}[Hardware Error]:   version: 3.0
[   52.738607] {1}[Hardware Error]:   command: 0x0407, status: 0x0010
[   52.744786] {1}[Hardware Error]:   device_id: 0000:b0:06.0
[   52.750271] {1}[Hardware Error]:   slot: 4
[   52.754371] {1}[Hardware Error]:   secondary_bus: 0xb3
[   52.759509] {1}[Hardware Error]:   vendor_id: 0x10b5, device_id: 0x9733
[   52.766123] {1}[Hardware Error]:   class_code: 000406
[   52.771182] {1}[Hardware Error]:   bridge: secondary_status: 0x0000,
control: 0x0003
[   52.779038] pcieport 0000:b0:06.0: aer_status: 0x00100000, aer_mask:
0x01a10000
[   52.782303] nvme0n1: detected capacity change from 3200631791616 to 0
[   52.786348] pcieport 0000:b0:06.0:    [20] Unsupported Request
[   52.786349] pcieport 0000:b0:06.0: aer_layer=Transaction Layer,
aer_agent=Requester ID
[   52.786350] pcieport 0000:b0:06.0: aer_uncor_severity: 0x004eb030
[   52.786352] pcieport 0000:b0:06.0:   TLP Header: 40000001 0000020f
e12023bc 01000000
[   52.786357] pcieport 0000:b0:06.0: broadcast error_detected message
[   52.883895] pci 0000:b3:00.0: device has no driver
[   52.883976] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Down
[   52.884184] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Down event
queued; currently getting powered on
[   52.967175] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Up


> Got error logs somewhere to dump?

Sure [1]. They have the ANSI sequences, so you might want to wget and
grep them in a color terminal.

Alex

[1] http://gtech.myftp.org/~mrnuke/nvme_logs/log-20180416-1919.log
Borislav Petkov April 22, 2018, 10:48 a.m. UTC | #11
On Thu, Apr 19, 2018 at 05:55:08PM -0500, Alex G. wrote:
> > How does such an error look like, in detail?
> 
> It's green on the soft side, with lots of red accents, as well as some
> textured white shades:
> 
> [   51.414616] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Down
> [   51.414634] pciehp 0000:b0:05.0:pcie204: Slot(179): Link Down
> [   52.703343] FIRMWARE BUG: Firmware sent fatal error that we were able
> to correct
> [   52.703345] BROKEN FIRMWARE: Complain to your hardware vendor
> [   52.703347] {1}[Hardware Error]: Hardware error from APEI Generic
> Hardware Error Source: 1
> [   52.703358] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Up
> [   52.711616] {1}[Hardware Error]: event severity: fatal
> [   52.716754] {1}[Hardware Error]:  Error 0, type: fatal
> [   52.721891] {1}[Hardware Error]:   section_type: PCIe error
> [   52.727463] {1}[Hardware Error]:   port_type: 6, downstream switch port
> [   52.734075] {1}[Hardware Error]:   version: 3.0
> [   52.738607] {1}[Hardware Error]:   command: 0x0407, status: 0x0010
> [   52.744786] {1}[Hardware Error]:   device_id: 0000:b0:06.0
> [   52.750271] {1}[Hardware Error]:   slot: 4
> [   52.754371] {1}[Hardware Error]:   secondary_bus: 0xb3
> [   52.759509] {1}[Hardware Error]:   vendor_id: 0x10b5, device_id: 0x9733
> [   52.766123] {1}[Hardware Error]:   class_code: 000406
> [   52.771182] {1}[Hardware Error]:   bridge: secondary_status: 0x0000,
> control: 0x0003
> [   52.779038] pcieport 0000:b0:06.0: aer_status: 0x00100000, aer_mask:
> 0x01a10000
> [   52.782303] nvme0n1: detected capacity change from 3200631791616 to 0
> [   52.786348] pcieport 0000:b0:06.0:    [20] Unsupported Request
> [   52.786349] pcieport 0000:b0:06.0: aer_layer=Transaction Layer,
> aer_agent=Requester ID
> [   52.786350] pcieport 0000:b0:06.0: aer_uncor_severity: 0x004eb030
> [   52.786352] pcieport 0000:b0:06.0:   TLP Header: 40000001 0000020f
> e12023bc 01000000
> [   52.786357] pcieport 0000:b0:06.0: broadcast error_detected message
> [   52.883895] pci 0000:b3:00.0: device has no driver
> [   52.883976] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Down
> [   52.884184] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Down event
> queued; currently getting powered on
> [   52.967175] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Up

Btw, from another discussion we're having with Yazen:

@Yazen, do you see how this error record is worth shit?

 class_code: 000406
 command: 0x0407, status: 0x0010
 bridge: secondary_status: 0x0000, control: 0x0003
 aer_status: 0x00100000, aer_mask: 0x01a10000
 aer_uncor_severity: 0x004eb030

those above are only some of the fields which are purely useless
undecoded. Makes me wonder what's worse for the user: dump the
half-decoded error or not dump an error at all...

Anyway, Alex, I see this in the logs:

[   66.581121] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Down
[   66.591939] pciehp 0000:b0:05.0:pcie204: Slot(179): Card not present
[   66.592102] pciehp 0000:b0:06.0:pcie204: Slot(176): Card not present

and that comes from that pciehp_isr() interrupt handler AFAICT.

So there *is* a way to know that the card is not present anymore. So,
theoretically, and ignoring the code layering for now, we can connect
that error to the card not present event and then ignore the error...

Hmmm.
Alex G. April 24, 2018, 4:19 a.m. UTC | #12
On 04/22/2018 05:48 AM, Borislav Petkov wrote:
> On Thu, Apr 19, 2018 at 05:55:08PM -0500, Alex G. wrote:
>>> How does such an error look like, in detail?
>>
>> It's green on the soft side, with lots of red accents, as well as some
>> textured white shades:
>>
>> [   51.414616] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Down
>> [   51.414634] pciehp 0000:b0:05.0:pcie204: Slot(179): Link Down
>> [   52.703343] FIRMWARE BUG: Firmware sent fatal error that we were able
>> to correct
>> [   52.703345] BROKEN FIRMWARE: Complain to your hardware vendor
>> [   52.703347] {1}[Hardware Error]: Hardware error from APEI Generic
>> Hardware Error Source: 1
>> [   52.703358] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Up
>> [   52.711616] {1}[Hardware Error]: event severity: fatal
>> [   52.716754] {1}[Hardware Error]:  Error 0, type: fatal
>> [   52.721891] {1}[Hardware Error]:   section_type: PCIe error
>> [   52.727463] {1}[Hardware Error]:   port_type: 6, downstream switch port
>> [   52.734075] {1}[Hardware Error]:   version: 3.0
>> [   52.738607] {1}[Hardware Error]:   command: 0x0407, status: 0x0010
>> [   52.744786] {1}[Hardware Error]:   device_id: 0000:b0:06.0
>> [   52.750271] {1}[Hardware Error]:   slot: 4
>> [   52.754371] {1}[Hardware Error]:   secondary_bus: 0xb3
>> [   52.759509] {1}[Hardware Error]:   vendor_id: 0x10b5, device_id: 0x9733
>> [   52.766123] {1}[Hardware Error]:   class_code: 000406
>> [   52.771182] {1}[Hardware Error]:   bridge: secondary_status: 0x0000,
>> control: 0x0003
>> [   52.779038] pcieport 0000:b0:06.0: aer_status: 0x00100000, aer_mask:
>> 0x01a10000
>> [   52.782303] nvme0n1: detected capacity change from 3200631791616 to 0
>> [   52.786348] pcieport 0000:b0:06.0:    [20] Unsupported Request
>> [   52.786349] pcieport 0000:b0:06.0: aer_layer=Transaction Layer,
>> aer_agent=Requester ID
>> [   52.786350] pcieport 0000:b0:06.0: aer_uncor_severity: 0x004eb030
>> [   52.786352] pcieport 0000:b0:06.0:   TLP Header: 40000001 0000020f
>> e12023bc 01000000
>> [   52.786357] pcieport 0000:b0:06.0: broadcast error_detected message
>> [   52.883895] pci 0000:b3:00.0: device has no driver
>> [   52.883976] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Down
>> [   52.884184] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Down event
>> queued; currently getting powered on
>> [   52.967175] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Up
> 
> Btw, from another discussion we're having with Yazen:
> 
> @Yazen, do you see how this error record is worth shit?
> 
>   class_code: 000406
>   command: 0x0407, status: 0x0010
>   bridge: secondary_status: 0x0000, control: 0x0003
>   aer_status: 0x00100000, aer_mask: 0x01a10000
>   aer_uncor_severity: 0x004eb030

That tells you what FFS said about the error. Keep in mind that FFS has 
cleared the hardware error bits, which the AER handler would normally 
read from the PCI device.
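
(Concretely, the firmware-first path never reads the device's AER
capability itself; ghes_handle_aer() hands the register block that FFS
stuffed into the CPER record straight to the AER core. From memory, so a
sketch of the gist rather than the exact code -- this series reshapes the
signature a bit:)

static void ghes_handle_aer(struct acpi_hest_generic_data *gdata)
{
	struct cper_sec_pcie *pcie_err = acpi_hest_get_payload(gdata);

	if (pcie_err->validation_bits & CPER_PCIE_VALID_DEVICE_ID &&
	    pcie_err->validation_bits & CPER_PCIE_VALID_AER_INFO) {
		unsigned int devfn;
		int aer_severity;

		devfn = PCI_DEVFN(pcie_err->device_id.device,
				  pcie_err->device_id.function);
		aer_severity = cper_severity_to_aer(gdata->error_severity);

		/* The AER registers come from the CPER payload (what FFS
		 * captured before clearing the device), not from a config
		 * space read. */
		aer_recover_queue(pcie_err->device_id.segment,
				  pcie_err->device_id.bus, devfn,
				  aer_severity,
				  (struct aer_capability_regs *)
					pcie_err->aer_info);
	}
}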

> those above are only some of the fields which are purely useless
> undecoded. Makes me wonder what's worse for the user: dump the
> half-decoded error or not dump an error at all...

It's immediately obvious if there's a glaring FFS bug and if we get 
bogus data. If you distrust firmware as much as I do, then you will find 
great value in having such info in the logs. It's probably not too 
useful to a casual user, but then neither is a majority of the system log.

> Anyway, Alex, I see this in the logs:
> 
> [   66.581121] pciehp 0000:b0:06.0:pcie204: Slot(176): Link Down
> [   66.591939] pciehp 0000:b0:05.0:pcie204: Slot(179): Card not present
> [   66.592102] pciehp 0000:b0:06.0:pcie204: Slot(176): Card not present
> 
> and that comes from that pciehp_isr() interrupt handler AFAICT.
> 
> So there *is* a way to know that the card is not present anymore. So,
> theoretically, and ignoring the code layering for now, we can connect
> that error to the card not present event and then ignore the error...

You're missing the timing and assuming you will get the hotplug 
interrupt. In this example, you have 22ms between the link down and 
presence detect state change. This is a fairly fast removal.

Hotplug dependencies aside (you can have the kernel run without PCIe 
hotplug support), I don't think you want to just linger in NMI for 
dozens of milliseconds waiting for presence detect confirmation.

For enterprise SFF NVMe drives, the data lanes will disconnect before 
the presence detect. FFS relies on presence detect, and these are two of 
the reasons why slow removal is such a problem. You might not get a 
presence detect interrupt at all.

Presence detect is optional for PCIe. PD is such a reliable heuristic
that it guarantees worse error handling than the crackmonkey firmware. I
don't see how it might be useful in a way which gives us better handling
than firmware.

> Hmmm.

Hmmm

Anyway, heuristics about PCIe error recovery belong in the recovery 
handler. I don't think it's smart to apply policy before we get there.

Alex

Borislav Petkov April 25, 2018, 2:01 p.m. UTC | #13
On Mon, Apr 23, 2018 at 11:19:25PM -0500, Alex G. wrote:
> That tells you what FFS said about the error.

I betcha those status and command values have human-readable counterparts.

Btw, what do you abbreviate with "FFS"?

> It's immediately obvious if there's a glaring FFS bug and if we get bogus
> data. If you distrust firmware as much as I do, then you will find great
> value in having such info in the logs. It's probably not too useful to a
> casual user, but then neither is a majority of the system log.

No no, you're missing the point - I *want* all data in the error log
which helps debug a hardware issue. I just want it humanly readable so
that I don't have to jot down the values and go scour the manuals to map
what it actually means.

> You're missing the timing and assuming you will get the hotplug interrupt.
> In this example, you have 22ms between the link down and presence detect
> state change. This is a fairly fast removal.
> 
> Hotplug dependencies aside (you can have the kernel run without PCIe hotplug
> support), I don't think you want to just linger in NMI for dozens of
> milliseconds waiting for presence detect confirmation.

No, I don't mean that. I mean something like deferred processing: you
get an error, you notice it is a device which supports physical removal
so you exit the NMI handler and process the error in normal, process
context which allows you to query the device and say, "Hey device, are
you still there?"

If it is not, you drop all the hw I/O errors reported for it.
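
(Illustrative only -- the helper name is made up, and the pci_dev lookup
and policy glue are omitted; the point is just that a process-context
handler can safely poke config space, which an NMI handler must not:)

#include <linux/pci.h>

/* A surprise-removed device reads back all-ones from config space;
 * pci_device_is_present() checks exactly that via the vendor ID. */
static bool example_device_gone(struct pci_dev *pdev)
{
	return !pci_device_is_present(pdev);
}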

Hmmm?
Alex G. April 25, 2018, 3 p.m. UTC | #14
On 04/25/2018 09:01 AM, Borislav Petkov wrote:
> On Mon, Apr 23, 2018 at 11:19:25PM -0500, Alex G. wrote:
>> That tells you what FFS said about the error.
> 
> I betcha those status and command values have human-readable counterparts.
> 
> Btw, what do you abbreviate with "FFS"?

Firmware-first.

>> It's immediately obvious if there's a glaring FFS bug and if we get bogus
>> data. If you distrust firmware as much as I do, then you will find great
>> value in having such info in the logs. It's probably not too useful to a
>> casual user, but then neither is a majority of the system log.
> 
> No no, you're missing the point - I *want* all data in the error log
> which helps debug a hardware issue. I just want it humanly readable so
> that I don't have to jot down the values and go scour the manuals to map
> what it actually means.

We could probably use more of the native AER print functions, but that's
beyond the scope of this patch. I tried something like this [1], but
have given up after the PCI maintainer's radio silence. I don't care
_that_ much about the log format.

[1] http://www.spinics.net/lists/linux-pci/msg71422.html

>> You're missing the timing and assuming you will get the hotplug interrupt.
>> In this example, you have 22ms between the link down and presence detect
>> state change. This is a fairly fast removal.
>>
>> Hotplug dependencies aside (you can have the kernel run without PCIe hotplug
>> support), I don't think you want to just linger in NMI for dozens of
>> milliseconds waiting for presence detect confirmation.
> 
> No, I don't mean that. I mean something like deferred processing:

Like the exact thing that this patch series implements? :)

> you
> get an error, you notice it is a device which supports physical removal
> so you exit the NMI handler and process the error in normal, process
> context which allows you to query the device and say, "Hey device, are
> you still there?"

Like the exact way the AER handler works?

> If it is not, you drop all the hw I/O errors reported for it.

Like the PCI error recovery mechanisms that AER invokes?

> Hmmm?
Hmmm
Borislav Petkov April 25, 2018, 5:15 p.m. UTC | #15
On Wed, Apr 25, 2018 at 10:00:53AM -0500, Alex G. wrote:
> Firmware-first.

Ok, my guess was right.

> We could probably use more of the native AER print functions, but that's
> beyond the scope of this patch.

No no, this does not belong in this patchset.

> Like the exact thing that this patch series implements? :)

Exact thing? I don't think so.

No, your patchset is grafting some funky and questionable side-handler
which gets to see the PCIe errors first, out-of-line and then it
practically downgrades their severity outside of the error processing
flow.

What I've been telling you to do is to extend ghes_severity() to
give a lower-than-PANIC severity to CPER_SEC_PCIE errors first
so that the machine doesn't panic from them anymore and those PCIe
errors get processed in the normal error processing path down
through ghes_do_proc() and then land in ghes_handle_aer(). No adhoc
->handle_irqsafe thing - just the normal straightforward error
processing path.

There, in ghes_handle_aer(), you do the check whether the device is
still there - i.e., you try to apply some heuristics to detect the error
type and why the system is complaining - you maybe even check whether
the NVMe device is still there - and *then* you do the proper recovery
action.

And you document for the future people looking at this code *why* you're
doing this.
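
(A minimal sketch of that shape -- the helper name is hypothetical, and
the "is the AER payload usable" test is the same one the RFC already
uses; severity is computed per section, and a PCIe section with a usable
payload never escalates to PANIC:)

static int ghes_sec_severity(struct acpi_hest_generic_data *gdata)
{
	int sev = ghes_severity(gdata->error_severity);

	if (sev == GHES_SEV_PANIC &&
	    guid_equal((guid_t *)gdata->section_type, &CPER_SEC_PCIE)) {
		struct cper_sec_pcie *pcie_err = acpi_hest_get_payload(gdata);

		/* Enough information for ghes_handle_aer() to recover. */
		if (pcie_err->validation_bits & CPER_PCIE_VALID_DEVICE_ID &&
		    pcie_err->validation_bits & CPER_PCIE_VALID_AER_INFO)
			sev = GHES_SEV_RECOVERABLE;
	}

	return sev;
}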
Alex G. April 25, 2018, 5:27 p.m. UTC | #16
On 04/25/2018 12:15 PM, Borislav Petkov wrote:
> On Wed, Apr 25, 2018 at 10:00:53AM -0500, Alex G. wrote:
>> Firmware-first.
> 
> Ok, my guess was right.
> 
>> We could probably use more of the native AER print functions, but that's
>> beyond the scope of this patch.
> 
> No no, this does not belong in this patchset.
> 
>> Like the exact thing that this patch series implements? :)
> 
> Exact thing? I don't think so.
> 
> No, your patchset is grafting some funky and questionable side-handler
> which gets to see the PCIe errors first, out-of-line and then it
> practically downgrades their severity outside of the error processing
> flow.

SURPRISE!!! This is a what vs how issue. I am keeping the what, and
working on the how that you suggested.

> What I've been telling you 

It's coming (eventually). I'm trying to avoid pushing more than one
series per week.

(snip useful email context)

Hmmm.

Alex
Borislav Petkov April 25, 2018, 5:39 p.m. UTC | #17
On Wed, Apr 25, 2018 at 12:27:59PM -0500, Alex G. wrote:
> SURPRISE!!!

What does that mean? You've had too much coffee?

> It's coming (eventually). I'm trying to avoid pushing more than one
> series per week.

You better. Flooding people with patchsets won't get you very far.

Patch

diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 2119c51b4a9e..e0528da4e8f8 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -481,12 +481,26 @@  static int ghes_handle_aer(struct acpi_hest_generic_data *gdata, int sev)
 	return ghes_severity(gdata->error_severity);
 }
 
+static int ghes_handle_aer_irqsafe(struct acpi_hest_generic_data *gdata,
+				   int sev)
+{
+	struct cper_sec_pcie *pcie_err = acpi_hest_get_payload(gdata);
+
+	/* The system can always recover from AER errors. */
+	if (pcie_err->validation_bits & CPER_PCIE_VALID_DEVICE_ID &&
+		pcie_err->validation_bits & CPER_PCIE_VALID_AER_INFO)
+		return CPER_SEV_RECOVERABLE;
+
+	return ghes_severity(gdata->error_severity);
+}
+
 /**
  * ghes_handler - handler for ACPI APEI errors
  * @error_uuid: UUID describing the error entry (See ACPI/EFI CPER for details)
  * @handle: Handler for the GHES entry of type 'error_uuid'. The handler
  *	returns the severity of the error after handling. A handler is allowed
  *	to demote errors to correctable or corrected, as appropriate.
+ * @handle_irqsafe: (optional) Non-blocking handler for GHES entry.
  */
 static const struct ghes_handler {
 	const guid_t *error_uuid;
@@ -498,6 +512,7 @@  static const struct ghes_handler {
 		.handle = ghes_handle_mem,
 	}, {
 		.error_uuid = &CPER_SEC_PCIE,
+		.handle_irqsafe = ghes_handle_aer_irqsafe,
 		.handle = ghes_handle_aer,
 	}, {
 		.error_uuid = &CPER_SEC_PROC_ARM,
@@ -551,6 +566,30 @@  static void ghes_do_proc(struct ghes *ghes,
 	}
 }
 
+/* How severe is the error if handling is deferred outside IRQ/NMI context? */
+static int ghes_deferrable_severity(struct ghes *ghes)
+{
+	int deferrable_sev, sev, sec_sev;
+	struct acpi_hest_generic_data *gdata;
+	const struct ghes_handler *handler;
+	const guid_t *section_type;
+	const struct acpi_hest_generic_status *estatus = ghes->estatus;
+
+	deferrable_sev = GHES_SEV_NO;
+	sev = ghes_severity(estatus->error_severity);
+	apei_estatus_for_each_section(estatus, gdata) {
+		section_type = (guid_t *)gdata->section_type;
+		handler = get_handler(section_type);
+		if (handler && handler->handle_irqsafe)
+			sec_sev = handler->handle_irqsafe(gdata, sev);
+		else
+			sec_sev = ghes_severity(gdata->error_severity);
+		deferrable_sev = max(deferrable_sev, sec_sev);
+	}
+
+	return deferrable_sev;
+}
+
 static void __ghes_print_estatus(const char *pfx,
 				 const struct acpi_hest_generic *generic,
 				 const struct acpi_hest_generic_status *estatus)
@@ -980,7 +1019,7 @@  static void __process_error(struct ghes *ghes)
 static int ghes_notify_nmi(unsigned int cmd, struct pt_regs *regs)
 {
 	struct ghes *ghes;
-	int sev, ret = NMI_DONE;
+	int sev, dsev, ret = NMI_DONE;
 
 	if (!atomic_add_unless(&ghes_in_nmi, 1, 1))
 		return ret;
@@ -993,8 +1032,9 @@  static int ghes_notify_nmi(unsigned int cmd, struct pt_regs *regs)
 			ret = NMI_HANDLED;
 		}
 
+		dsev = ghes_deferrable_severity(ghes);
 		sev = ghes_severity(ghes->estatus->error_severity);
-		if (sev >= GHES_SEV_PANIC) {
+		if ((sev >= GHES_SEV_PANIC) && (dsev >= GHES_SEV_PANIC)) {
 			oops_begin();
 			ghes_print_queued_estatus();
 			__ghes_panic(ghes);