
[v7,3/5] Documentation: Add documentation for the APM X-Gene SoC EDAC DTS binding

Message ID 1430259045-19012-4-git-send-email-lho@apm.com (mailing list archive)
State New, archived

Commit Message

Loc Ho April 28, 2015, 10:10 p.m. UTC
This patch adds documentation for the APM X-Gene SoC EDAC DTS binding.

Signed-off-by: Feng Kan <fkan@apm.com>
Signed-off-by: Loc Ho <lho@apm.com>
---
 .../devicetree/bindings/edac/apm-xgene-edac.txt    |  107 ++++++++++++++++++++
 1 files changed, 107 insertions(+), 0 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/edac/apm-xgene-edac.txt

Comments

Rob Herring April 29, 2015, 4:47 p.m. UTC | #1
On Tue, Apr 28, 2015 at 5:10 PM, Loc Ho <lho@apm.com> wrote:
> This patch adds documentation for the APM X-Gene SoC EDAC DTS binding.
>
> Signed-off-by: Feng Kan <fkan@apm.com>
> Signed-off-by: Loc Ho <lho@apm.com>
> ---
>  .../devicetree/bindings/edac/apm-xgene-edac.txt    |  107 ++++++++++++++++++++
>  1 files changed, 107 insertions(+), 0 deletions(-)
>  create mode 100644 Documentation/devicetree/bindings/edac/apm-xgene-edac.txt
>
> diff --git a/Documentation/devicetree/bindings/edac/apm-xgene-edac.txt b/Documentation/devicetree/bindings/edac/apm-xgene-edac.txt
> new file mode 100644
> index 0000000..548299a
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/edac/apm-xgene-edac.txt
> @@ -0,0 +1,107 @@
> +* APM X-Gene SoC EDAC nodes
> +
> +EDAC nodes are defined to describe on-chip error detection and correction.
> +There are four types of EDAC:
> +
> +  memory controller    - Memory controller
> +  PMD (L1/L2)          - Processor module unit (PMD) L1/L2 cache
> +  L3                   - CPU L3 cache
> +  SoC                  - SoC IP such as SATA, Ethernet, and etc
> +
> +The following section describes the memory controller DT node binding.
> +
> +Required properties:
> +- compatible           : Shall be "apm,xgene-edac-mc".

You are still describing what you want for the EDAC driver, not what the
h/w looks like, AFAICT.

> +- regmap-pcp           : Regmap of the PCP resource.
> +- regmap-csw           : Regmap of the CSW resource.
> +- regmap-mcba          : Regmap of the MCB-A resource.
> +- regmap-mcbb          : Regmap of the MCB-B resource.

What are these blocks? Regardless, I doubt the linkage here belongs in DT.

> +- reg                  : First resource shall be the MCU resource.
> +- interrupts            : Interrupt-specifier for MCU error IRQ(s).

What is MCU?

> +
> +The following section describes the L1/L2 DT node binding.

Similar comments for the rest. I would define memory controller
bindings and EDAC driver, then worry about the rest.

Rob

> +
> +- compatible           : Shall be "apm,xgene-edac-pmd".
> +- regmap-pcp           : Regmap of the PCP resource.
> +- regmap-efuse         : Regmap of the PMD efuse resource.
> +- reg                  : First resource shall be the PMD resource.
> +- interrupts            : Interrupt-specifier for PMD error IRQ(s).
> +
> +The following section describes the L3 DT node binding.
> +
> +- compatible           : Shall be "apm,xgene-edac-l3".
> +- regmap-pcp           : Regmap of the PCP resource.
> +- reg                  : First resource shall be the L3 resource.
> +- interrupts            : Interrupt-specifier for L3 error IRQ(s).
> +
> +The following section describes the SoC DT node binding.
> +
> +- compatible           : Shall be "apm,xgene-edac-soc"".
> +- regmap-pcp           : Regmap of the PCP resource.
> +- reg                  : First resource shall be the SoC resource.
> +                         Second resource shall be the register bus resource.
> +- interrupts           : Interrupt-specifier for SoC error IRQ(s).
> +
> +Example:
> +       efuse: efuse@1054a000 {
> +               compatible = "syscon";
> +               reg = <0x0 0x1054a000 0x0 0x20>;
> +       };
> +
> +       pcperror: pcperror@78800000 {
> +               compatible = "syscon";
> +               reg = <0x0 0x78800000 0x0 0x100>;
> +       };
> +
> +       csw: csw@7e200000 {
> +               compatible = "syscon";
> +               reg = <0x0 0x7e200000 0x0 0x1000>;
> +       };
> +
> +       mcba: mcba@7e700000 {
> +               compatible = "syscon";
> +               reg = <0x0 0x7e700000 0x0 0x1000>;
> +       };
> +
> +       mcbb: mcbb@7e720000 {
> +               compatible = "syscon";
> +               reg = <0x0 0x7e720000 0x0 0x1000>;
> +       };
> +
> +       edacmc0: edacmc0@7e800000 {
> +               compatible = "apm,xgene-edac-mc";
> +               regmap-pcp = <&pcperror>;
> +               regmap-csw = <&csw>;
> +               regmap-mcba = <&mcba>;
> +               regmap-mcbb = <&mcbb>;
> +               reg = <0x0 0x7e800000 0x0 0x1000>;
> +               interrupts = <0x0 0x20 0x4>,
> +                            <0x0 0x21 0x4>;
> +       };
> +
> +       edacpmd0: edacpmd0@7c000000 {
> +               compatible = "apm,xgene-edac-pmd";
> +               regmap-pcp = <&pcperror>;
> +               regmap-efuse = <&efuse>;
> +               reg = <0x0 0x7c000000 0x0 0x200000>;
> +               interrupts = <0x0 0x20 0x4>,
> +                            <0x0 0x21 0x4>;
> +       };
> +
> +       edacl3: edacl3@7e600000 {
> +               compatible = "apm,xgene-edac-l3";
> +               regmap-pcp = <&pcperror>;
> +               reg = <0x0 0x7e600000 0x0 0x1000>;
> +               interrupts = <0x0 0x20 0x4>,
> +                            <0x0 0x21 0x4>;
> +       };
> +
> +       edacsoc: edacsoc@7e930000 {
> +               compatible = "apm,xgene-edac-soc";
> +               regmap-pcp = <&pcperror>;
> +               reg = <0x0 0x7e930000 0x0 0x1000>,
> +                     <0x0 0x7e000000 0x0 0x1000>;
> +               interrupts = <0x0 0x20 0x4>,
> +                            <0x0 0x21 0x4>,
> +                            <0x0 0x27 0x4>;
> +       };
> --
> 1.7.1
>
Loc Ho April 29, 2015, 9:33 p.m. UTC | #2
Hi Rob,

>>  .../devicetree/bindings/edac/apm-xgene-edac.txt    |  107 ++++++++++++++++++++
>>  1 files changed, 107 insertions(+), 0 deletions(-)
>>  create mode 100644 Documentation/devicetree/bindings/edac/apm-xgene-edac.txt
>>
>> diff --git a/Documentation/devicetree/bindings/edac/apm-xgene-edac.txt b/Documentation/devicetree/bindings/edac/apm-xgene-edac.txt
>> new file mode 100644
>> index 0000000..548299a
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/edac/apm-xgene-edac.txt
>> @@ -0,0 +1,107 @@
>> +* APM X-Gene SoC EDAC nodes
>> +
>> +EDAC nodes are defined to describe on-chip error detection and correction.
>> +There are four types of EDAC:
>> +
>> +  memory controller    - Memory controller
>> +  PMD (L1/L2)          - Processor module unit (PMD) L1/L2 cache
>> +  L3                   - CPU L3 cache
>> +  SoC                  - SoC IP such as SATA, Ethernet, and etc
>> +
>> +The following section describes the memory controller DT node binding.
>> +
>> +Required properties:
>> +- compatible           : Shall be "apm,xgene-edac-mc".
>
> You are still describing what you want for EDAC driver not what the
> h/w looks like AFAICT.
>

We have 4 memory controllers and two memory bridges. Each memory
bridge can handle two memory controllers.

>> +- regmap-pcp           : Regmap of the PCP resource.
>> +- regmap-csw           : Regmap of the CSW resource.
>> +- regmap-mcba          : Regmap of the MCB-A resource.
>> +- regmap-mcbb          : Regmap of the MCB-B resource.
>
> What are these blocks? Regardless, I doubt the linkage here belongs in DT.

The PCP provides the top level interrupt and status for various errors.
From that, one can determine the type of error based on various status
bits - memory errors, CPU cache parity errors (L1, L2, and L3), and SoC
memory parity errors. The CSW contains the status to indicate whether
a memory bridge is active or not. MCB A and MCB B are used to determine
whether a memory controller is active or not. In summary, we have:

CSW, MCB A, and MCB B are used to determine the active memory controllers
and ignore the inactive ones.
PCP and the memory controller (reg property) are used to determine the
type of memory error and whether it is correctable or uncorrectable.
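
Roughly, a driver could consume those phandles like this (purely a
sketch - the register offset and bit mask below are made-up
placeholders, not the real MCB register layout):

#include <linux/err.h>
#include <linux/mfd/syscon.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>

#define MCB_MCU_ACTIVE_REG	0x0	/* hypothetical offset */
#define MCB_MCU_ACTIVE_MASK	0x1	/* hypothetical bit */

static bool xgene_edac_mcu_active(struct platform_device *pdev)
{
	struct regmap *mcb;
	unsigned int val;

	/* "regmap-mcba" points at the syscon node for memory bridge A */
	mcb = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
					      "regmap-mcba");
	if (IS_ERR(mcb))
		return false;

	if (regmap_read(mcb, MCB_MCU_ACTIVE_REG, &val))
		return false;

	/* skip registration of this memory controller if not active */
	return val & MCB_MCU_ACTIVE_MASK;
}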

If you still need more clarification, let me know.

>
>> +- reg                  : First resource shall be the MCU resource.
>> +- interrupts            : Interrupt-specifier for MCU error IRQ(s).
>
> What is MCU?

MCU represents the memory controller unit.

>
>> +
>> +The following section describes the L1/L2 DT node binding.
>
> Similar comments for the rest. I would define memory controller
> bindings and EDAC driver, then worry about the rest.

Okay.. As comment in following emails, I will break up the driver into
multiple drivers and focus only on the memory controller driver first.

-Loc
Borislav Petkov April 29, 2015, 9:49 p.m. UTC | #3
On Wed, Apr 29, 2015 at 02:33:28PM -0700, Loc Ho wrote:
> > Similar comments for the rest. I would define memory controller
> > bindings and EDAC driver, then worry about the rest.
> 
> Okay.. As comment in following emails, I will break up the driver into
> multiple drivers and focus only on the memory controller driver first.

Please no multiple EDAC drivers. Or do you mean something else here?
Loc Ho April 29, 2015, 9:56 p.m. UTC | #4
Hi,

>> > Similar comments for the rest. I would define memory controller
>> > bindings and EDAC driver, then worry about the rest.
>>
>> Okay.. As comment in following emails, I will break up the driver into
>> multiple drivers and focus only on the memory controller driver first.
>
> Please no multiple EDAC drivers. Or do you mean something else here?

We will have the following:

xgene-edac-mc.c
xgene-edac-pmd.c
xgene-edac-l3.c
xgene-edac-soc.c

Or what would you suggest? The HW is as follows:

1. 4 DDR controllers with one shared top level interrupt and two shared
memory bridges
2. 4 CPU domains with one shared top level interrupt, an L2 shared by
each pair of CPUs, and individual L1s
3. 1 L3 with one shared top level interrupt
4. One SoC memory parity block with one shared top level interrupt

-Loc
Borislav Petkov April 29, 2015, 10:08 p.m. UTC | #5
On Wed, Apr 29, 2015 at 02:56:25PM -0700, Loc Ho wrote:
> Hi,
> 
> >> > Similar comments for the rest. I would define memory controller
> >> > bindings and EDAC driver, then worry about the rest.
> >>
> >> Okay.. As comment in following emails, I will break up the driver into
> >> multiple drivers and focus only on the memory controller driver first.
> >
> > Please no multiple EDAC drivers. Or do you mean something else here?
> 
> We will have the following:
> 
> xgene-edac-mc.c
> xgene-edac-pmd.c
> xgene-edac-l3.c
> xgene-edac-soc.c
> 
> Or what would you suggest.

xgene-edac.c

This granularity is insane. On x86 we have single EDAC drivers for
multiple CPU families and platforms.

You can start carving out stuff when this is absolutely needed, i.e.
when an IP block is in multiple hw incarnations. And even then I'd be
sceptical and would really want to know whether it is even worth it to
have an EDAC module for it.

So let's be conservative please.
Loc Ho April 29, 2015, 10:20 p.m. UTC | #6
Hi Boris,

>> >> > Similar comments for the rest. I would define memory controller
>> >> > bindings and EDAC driver, then worry about the rest.
>> >>
>> >> Okay.. As comment in following emails, I will break up the driver into
>> >> multiple drivers and focus only on the memory controller driver first.
>> >
>> > Please no multiple EDAC drivers. Or do you mean something else here?
>>
>> We will have the following:
>>
>> xgene-edac-mc.c
>> xgene-edac-pmd.c
>> xgene-edac-l3.c
>> xgene-edac-soc.c
>>
>> Or what would you suggest.
>
> xgene-edac.c
>
> This granularity is insane. On x86 we have single EDAC drivers for
> multiple CPU families and platforms.
>
> You can start carving out stuff when this is absolutely needed, i.e.
> when an IP block is in multiple hw incarnations. And even then I'd be
> sceptical and would really want to know whether it is even worth it to
> have an EDAC module for it.
>
> So let's be conservative please.

For now, let me just deal with the DDR module.

Rob,

Did I answer all your questions regarding the EDAC DT binding for the
memory controller? I would like to get this sorted out before I post
another version.

-Loc
Rob Herring April 29, 2015, 10:43 p.m. UTC | #7
On Wed, Apr 29, 2015 at 4:56 PM, Loc Ho <lho@apm.com> wrote:
> Hi,
>
>>> > Similar comments for the rest. I would define memory controller
>>> > bindings and EDAC driver, then worry about the rest.
>>>
>>> Okay.. As comment in following emails, I will break up the driver into
>>> multiple drivers and focus only on the memory controller driver first.
>>
>> Please no multiple EDAC drivers. Or do you mean something else here?
>
> We will have the following:
>
> xgene-edac-mc.c
> xgene-edac-pmd.c
> xgene-edac-l3.c
> xgene-edac-soc.c
>
> Or what would you suggest. There are the following HW:
>
> 1. 4 DDR controller with one shared top level interrupt, two shared
> memory bridges

And ECC registers are in the DDR controllers?

I would expect the DT to have 4 DDR controller nodes and probably 2
bridges. The memory bridges have control registers?

> 2. 4 CPU's domain with one shared top level interrupt, shared L2 for
> two CPU's, and individual L1
> 3. 1 L3 with one shared top level interrupt

Presumably the registers for ECC are wired to some other block and not
CPU registers?

> 4. One SoC memory parity block with one shared top level interupt

Shared interrupts are easily supported, but implicit in the DT.
Otherwise, it seems this block is independent from the rest, correct?

Rob
Rob Herring April 29, 2015, 11:02 p.m. UTC | #8
On Wed, Apr 29, 2015 at 5:08 PM, Borislav Petkov <bp@alien8.de> wrote:
> On Wed, Apr 29, 2015 at 02:56:25PM -0700, Loc Ho wrote:
>> Hi,
>>
>> >> > Similar comments for the rest. I would define memory controller
>> >> > bindings and EDAC driver, then worry about the rest.
>> >>
>> >> Okay.. As comment in following emails, I will break up the driver into
>> >> multiple drivers and focus only on the memory controller driver first.
>> >
>> > Please no multiple EDAC drivers. Or do you mean something else here?
>>
>> We will have the following:
>>
>> xgene-edac-mc.c
>> xgene-edac-pmd.c
>> xgene-edac-l3.c
>> xgene-edac-soc.c
>>
>> Or what would you suggest.
>
> xgene-edac.c
>
> This granularity is insane. On x86 we have single EDAC drivers for
> multiple CPU families and platforms.

It may make sense on x86 as there are only 2 vendors, some level of
architecture definition at least within a vendor, some commonality
among generations, and some amount of abstraction by ACPI. For the
most part none of that applies to ARM based systems (although that is
changing).

> You can start carving out stuff when this is absolutely needed, i.e.
> when an IP block is in multiple hw incarnations. And even then I'd be
> sceptical and would really want to know whether it is even worth it to
> have an EDAC module for it.

One platform device corresponds to 1 DT node which corresponds to 1
h/w block. This is a well defined pattern throughout the kernel. There
are cases of grouping, but those are places where we need coordination
between h/w blocks such as audio and display and that is still
multiple platform devices with a single parent device. If you want
this all in 1 file or 1 module that is fine. I think that it is silly
to group otherwise independent things together and generally not what
we do anywhere else in the kernel. They all likely have different
capabilities and control mechanisms.

Rob
Loc Ho April 30, 2015, 12:47 a.m. UTC | #9
Hi Rob,

>>
>>>> > Similar comments for the rest. I would define memory controller
>>>> > bindings and EDAC driver, then worry about the rest.
>>>>
>>>> Okay.. As comment in following emails, I will break up the driver into
>>>> multiple drivers and focus only on the memory controller driver first.
>>>
>>> Please no multiple EDAC drivers. Or do you mean something else here?
>>
>> We will have the following:
>>
>> xgene-edac-mc.c
>> xgene-edac-pmd.c
>> xgene-edac-l3.c
>> xgene-edac-soc.c
>>
>> Or what would you suggest. There are the following HW:
>>
>> 1. 4 DDR controller with one shared top level interrupt, two shared
>> memory bridges
>
> And ECC registers are in the DDR controllers?
>

Yes... The ECC registers are in each DDR controller. That's why I kept
the reg property.


> I would expect the DT to have 4 DDR controller nodes and probably 2
> bridges. The memory bridges have control registers?

The Mustang DT does have 4 individual DT nodes for the memory controllers.
The 2 bridges are only used to determine whether a memory controller is
active or not. If not, the driver just skips its initialization during probe.

>
>> 2. 4 CPU's domain with one shared top level interrupt, shared L2 for
>> two CPU's, and individual L1
>> 3. 1 L3 with one shared top level interrupt
>
> Presumably the registers for ECC are wired to some other block and not
> CPU registers?

Yes... The bus has a dedicated 64K block each for L1, L2, and L3. That's
why the reg property is still there and not shared.

>
>> 4. One SoC memory parity block with one shared top level interupt
>
> Shared interrupts are easily supported, but implicit in the DT.
> Otherwise, it seems this block is independent from the rest, correct?

Interrupts are shared and the status registers are also shared. Yes...
the actual error module is independent. The shared blocks are:

1. Interrupt status (PCP block) - needed for all subblock IPs
2. CSW, MCB A and MCB B - needed by each individual memory controller
to determine whether it is active
3. efuse block - used to determine if a CPU domain is disabled
(not present in HW)

Anything else need clarification?

-Loc
Borislav Petkov April 30, 2015, 8:20 a.m. UTC | #10
On Wed, Apr 29, 2015 at 06:02:14PM -0500, Rob Herring wrote:
>  I think that it is silly to group otherwise independent things
> together and generally not what we do anywhere else in the kernel.
> They all likely have different capabilities and control mechanisms.

So let's look at the other extremity here: the moment someone releases
yet another "independent" IP block with some RAS functionality, same
someone will create yet another <vendor>_edac_<ip_block> driver. How
many independent IP blocks are there with RAS functionality? 10, 20,
100...? That number is most likely growing, I'd bet.

Oh, and then there'll probably be functionality which is needed by two
IP blocks so it needs to be shared. So we either copy'paste stuff or
create a lib only for that functionality...

Even worse, what if two EDAC drivers for two IP blocks would need to
talk to each other. That'll be fun.

Or there'll be a v2 of the IP block which has almost the same
functionality but not 100% - just a *little* different. So then we go
create <vendor>_edac_<ip_block-v2> driver. Yuck!

What I would prefer is to concentrate all vendor-specific RAS
functionality in one single driver. Shared functionality is then taken
care of automagically with all the synergies (I hate that word!)
involved and no unnecessarily fine-grained splitting.

In that case, we would only have to enable loading more than one EDAC
driver on a system which has different RAS IP blocks. Now *that* is a
much cleaner solution IMO which will keep the sanity in EDAC-land above
0.

Thanks.
Arnd Bergmann April 30, 2015, 8:31 a.m. UTC | #11
On Thursday 30 April 2015 10:20:51 Borislav Petkov wrote:
> On Wed, Apr 29, 2015 at 06:02:14PM -0500, Rob Herring wrote:
> >  I think that it is silly to group otherwise independent things
> > together and generally not what we do anywhere else in the kernel.
> > They all likely have different capabilities and control mechanisms.
> 
> So let's look at the other extremity here: the moment someone releases
> yet another "independent" IP block with some RAS functionality, same
> someone will create yet another <vendor>_edac_<ip_block> driver. How
> many independent IP blocks are there with RAS functionality? 10, 20,
> 100...? That number is most likely growing, I'd bet.

That should be a safe assumption.

> Oh, and then there'll probably be functionality which is needed by two
> IP blocks so it needs to be shared. So we either copy'paste stuff or
> create a lib only for that functionality...
> 
> Even worse, what if two EDAC drivers for two IP blocks would need to
> talk to each other. That'll be fun.
> 
> Or there'll be a v2 of the IP block which has almost the same
> functionality but no 100% - just a *little* different. So then we go
> create <vendor>_edac_<ip_block-v2> driver. Yuck!
> 
> What I would prefer is to concentrate all vendor-specific RAS
> functionality in one single driver. Shared functionality is then taken
> care of automagically with all the synergies (I hate that word!)
> involved and no unnecessary too finer-grained splitting.
> 
> In that case, we would only have to enable loading more than one EDAC
> drivers on a system which has different RAS IP blocks. Now *that* is a
> much cleaner solution IMO which will keep the sanity in EDAC-land above
> 0.

The problem with your approach is that a lot of these blocks are not
vendor specific. You will probably see a server chip that contains a
memory controller from DesignWare, a cache controller from ARM, and
some other device from the chip vendor, and each of them comes with
EDAC support. Then you get three other vendors that have various
combinations of the same memory controller, cache controller and
other EDAC devices. Not all of these chips would have ARM CPU cores,
some may be e.g. MIPS or PowerPC.

This is very much what we see in all other subsystems (timers,
irqchips, gpio, ...) and the answer is always to have one driver
per device type, and have that driver handle all the variations
of that device, but not group devices into one driver just because
they happen to be on the same chip for one vendor who is merely
integrating them.

	Arnd
Borislav Petkov April 30, 2015, 8:45 a.m. UTC | #12
On Thu, Apr 30, 2015 at 10:31:12AM +0200, Arnd Bergmann wrote:
> The problem with your approach is that a lot of these blocks are not
> vendor specific. You will probably see a server chip that contains a
> memory controller from DesignWare, a cache controller from ARM, and
> some other device from the chip vendor, and each of them comes with
> EDAC support. Then you get three other vendors that have various
> combinations of the same memory controller, cache controller and
> other EDAC devices. Not all of these chips would have ARM CPU cores,
> some may be e.g. MIPS or PowerPC.

I don't mean system-vendor specific but IP-block vendor specific. So
doing a designware-edac, arm-edac, apm-edac and so on...

Also, I really want people thinking of writing EDAC drivers to think
hard before doing so - whether it is even worth exposing *every* RAS
functionality in some driver, and to consider that exposing it might
cause more trouble than benefit.

We've had that experience on x86 with reporting innocuous DRAM ECC
errors which were corrected by the hardware. People weren't even reading
the error message and running around wanting to RMA their processors
because something in dmesg screamed "Hardware Error".

And now APEI does the firmware-first thing where it gets to look at the
error first and then decide to report it up to the OS or not.

Which is a good thing because in most cases it unnecessarily upsets the
user.

So to get back on track: I think having the IP-block vendor stuff in one
EDAC module would make this whole heterogeneity much more manageable.

On that system above, we will load - if I can count correctly - 6 EDAC
drivers anyway. But the EDAC pile would remain sane.
Arnd Bergmann April 30, 2015, 9:01 a.m. UTC | #13
On Thursday 30 April 2015 10:45:50 Borislav Petkov wrote:
> On Thu, Apr 30, 2015 at 10:31:12AM +0200, Arnd Bergmann wrote:
> > The problem with your approach is that a lot of these blocks are not
> > vendor specific. You will probably see a server chip that contains a
> > memory controller from DesignWare, a cache controller from ARM, and
> > some other device from the chip vendor, and each of them comes with
> > EDAC support. Then you get three other vendors that have various
> > combinations of the same memory controller, cache controller and
> > other EDAC devices. Not all of these chips would have ARM CPU cores,
> > some may be e.g. MIPS or PowerPC.
> 
> I don't mean system-vendor specific but IP-block vendor specific. So
> doing a designware-edac, arm-edac, apm-edac and so on...

Ok, but I'd also do multiple drivers for one vendor if they have several
blocks that are clearly unrelated. My understanding was that this is the
case here, but I have not looked too closely.

Note that a lot of vendors do not like to say who they licensed IP
blocks from, or they are contractually required not to say it.

> Also, I really want for people thinking of writing EDAC drivers to think
> hard before doing so. Whether it is even worth to expose *every* RAS
> functionality in some driver. And to consider that exposing it might
> cause more trouble than benefit.

Definitely agreed on this point.

> We've had that experience on x86 with reporting innocuous DRAM ECC
> errors which were corrected by the hardware. People weren't even reading
> the error message and running around wanting to RMA their processors
> because something in dmesg screamed "Hardware Error".
> 
> And now APEI does the firmware-first thing where it gets to look at the
> error first and then decide to report it up to the OS or not.
> 
> Which is a good thing because in most cases it unnecessarily upsets the
> user.

Yes, that is helpful feedback, but seems unrelated to the discussion
about how to split this driver or not.

> So to get back on track: I think having the IP-block vendor stuff in one
> EDAC module would make this whole heterogeneity much more manageable.
> 
> On that system above, we will load - if I can count correctly - 6 EDAC
> drivers anyway. But the EDAC pile would remain sane.

Ok, let me try to state what I think we agree on:

- For EDAC devices that are clearly unrelated (in particular, made by
  different vendors, but possibly part of the same chip), there should
  be one module per device type that registers one platform_driver.

- For EDAC drivers that are variants of one another (e.g. different
  generations of the same memory controller), we want one module with
  one platform_driver that can handle all variants.

Let me know if you disagree with the above. If that's fine, I think it
leaves the case where we have one chip that has multiple EDAC blocks
and we can't easily tell if those blocks are all inter-related or not.
For this case, I would like to rely on your judgment as subsystem
maintainer on whether the parts look related enough to have a single
driver (with matching all device variants) or one driver for each
class of device, as well as spotting cases where a part already has
a driver from a different chip vendor that was licensing the same IP
block.

	Arnd
Borislav Petkov April 30, 2015, 9:41 a.m. UTC | #14
On Thu, Apr 30, 2015 at 11:01:43AM +0200, Arnd Bergmann wrote:
> Ok, but I'd also do multiple drivers for one vendor if they have several
> blocks that are clearly unrelated. My understanding was that this is the
> case here, but I have not looked too closely.

AFAIU, this is a driver for a bunch of xgene IP blocks.

Now, if someone integrates xgene memory controller in some other hw
platform, we could still serve it with the same xgene-edac driver -
it'll simply use only the memory controller part of the functionality.

> Note that a lot of vendors do not like to say who they licensed IP
> blocks from, or they are contractually required not to say it.

Then there's that :-\

But the moment they supply an EDAC driver, one can recognize the IP
block from the registers and functionality, no? Or do they change stuff
a little bit so that it looks sufficiently different?

> Yes, that is helpful feedback, but seems unrelated to the discussion
> about how to split this driver or not.

Yeah, I know. I just wanted to mention that in the discussion because
I get the impression that people are throwing out EDAC drivers just
because the hw functionality is there. So I wanted to remind people to
step back and consider the usefulness of such a driver first.

> Ok, let me try to state what I think we agree on:
> 
> - For EDAC devices that are clearly unrelated (in particular, made by
>   different vendors, but possibly part of the same chip), there should
>   be one module per device type that registers one platform_driver.
> 
> - For EDAC drivers that are variants of one another (e.g. different
>   generations of the same memory controller), we want one module with
>   one platform_driver that can handle all variants.
> 
> Let me know if you disagree with the above. If that's fine, I think it
> leaves the case where we have one chip that has multiple EDAC blocks
> and we can't easily tell if those blocks are all inter-related or not.
> For this case, I would like to rely on your judgment as subsystem
> maintainer on whether the parts look related enough to have a single
> driver (with matching all device variants) or one driver for each
> class of device, as well as spotting cases where a part already has
> a driver from a different chip vendor that was licensing the same IP
> block.

Well, the way I see it is that we should map the EDAC drivers to the IP
blocks. I.e., I'd like to have EDAC drivers mapped to the *IP* *block*
vendor *instead* of the platform vendor.

We will have to be able to load multiple EDAC drivers anyway but it'll
be much easier this way.

So let's have an example:

Let's say there's one xgene_edac driver which contains all the RAS
functionality of xgene IP blocks. Memory controller, L3 cache,
whatever...

Now, three *platform* vendors license, say, the xgene memory controller.

With my scheme, xgene_edac will be loaded on all those three vendors'
Linux. Done.

But if those three vendors went and created an EDAC module each for
their system, it would be a lot of repeated copy'pasting and bloat.

Now, the other dimension doesn't look better either:

xgene_edac_mc
xgene_edac_mc-v2
xgene_edac_l3
xgene_edac-l2
...

Also, in both cases, if any of those drivers need to talk to each other
or know of one another in any way, the situation becomes really funny as
they need to create something ad-hoc each time.

And this all can be avoided by having a single IP-blocks RAS driver.

Does that make more sense? Can it even be done with devicetree and all?

Thanks.
Arnd Bergmann April 30, 2015, 10:21 a.m. UTC | #15
On Thursday 30 April 2015 11:41:34 Borislav Petkov wrote:
> On Thu, Apr 30, 2015 at 11:01:43AM +0200, Arnd Bergmann wrote:
> > Ok, but I'd also do multiple drivers for one vendor if they have several
> > blocks that are clearly unrelated. My understanding was that this is the
> > case here, but I have not looked too closely.
> 
> AFAIU, this is a driver for a bunch of xgene IP blocks.
> 
> Now, if someone integrates xgene memory controller in some other hw
> platform, we could still serve it with the same xgene-edac driver -
> it'll simply use only the memory controller part of the functionality.
> 
> > Note that a lot of vendors do not like to say who they licensed IP
> > blocks from, or they are contractually required not to say it.
> 
> Then there's that :-\
> 
> But the moment they supply an EDAC driver, one can recognize the IP
> block from the registers and functionality, no? Or do they change stuff
> a little bit so that it looks sufficiently different?

They don't normally change stuff intentionally, but you can have
drivers for the same hardware that are written so differently that
it becomes hard to spot that they are doing the same thing.

We have a bunch of XGMAC drivers for the same basic licensed IP block,
each one configured a bit differently and extended with some soc-specific
bits, and drivers written from scratch.

I would also not assume that people are trying hard to hide something
from you, it's more that some guy is given the task to write a driver
for an IP block, but is not told who made that, so the binding will
just come with the name of the company he's working for.

> > Ok, let me try to state what I think we agree on:
> > 
> > - For EDAC devices that are clearly unrelated (in particular, made by
> >   different vendors, but possibly part of the same chip), there should
> >   be one module per device type that registers one platform_driver.
> > 
> > - For EDAC drivers that are variants of one another (e.g. different
> >   generations of the same memory controller), we want one module with
> >   one platform_driver that can handle all variants.
> > 
> > Let me know if you disagree with the above. If that's fine, I think it
> > leaves the case where we have one chip that has multiple EDAC blocks
> > and we can't easily tell if those blocks are all inter-related or not.
> > For this case, I would like to rely on your judgment as subsystem
> > maintainer on whether the parts look related enough to have a single
> > driver (with matching all device variants) or one driver for each
> > class of device, as well as spotting cases where a part already has
> > a driver from a different chip vendor that was licensing the same IP
> > block.
> 
> Well, so the way I see it is, if we could map the EDAC drivers to the IP
> blocks. I.e., I'd like to have EDAC drivers mapped to the *IP* *blocks*
> vendor *instead* of the platform vendor.
> 
> We will have to be able to load multiple EDAC drivers anyway but it'll
> be much easier this way.
> 
> So let's have an example:
> 
> Let's say there's one xgene_edac driver which contains all the RAS
> functionality of xgene IP blocks. Memory controller, L3 cache,
> whatever...
> 
> Now, three *platform* vendors license, say, the xgene memory controller.
> 
> With my scheme, xgene_edac will be loaded on all those three vendors'
> Linux. Done.

Right, but they would then also have to load the code for the other
blocks they did not license.

> But if those three vendors went and created an EDAC module each for
> their system, it would be a lot of repeated copy'pasting and bloat.

Yes, and I would expect you to reject any such driver submission.

> Now, the other dimension doesn't look better either:
> 
> xgene_edac_mc
> xgene_edac_mc-v2
> xgene_edac_l3
> xgene_edac-l2
> ...

Not following you here, what is the problem here?

> Also, in both cases, if any of those drivers need to talk to each other
> or know of one another in any way, the situation becomes really funny as
> they need to create something ad-hoc each time.

Agreed. If the drivers need to talk to one another, that is a clear
sign that the devices are not actually independent and that they
really belong into a single driver. That's not the case I'm thinking
of here.

> And this all can be avoided by having a single IP-blocks RAS driver.
> 
> Does that make more sense? Can it even be done with devicetree and all?

The devicetree just describes the available hardware blocks. Typically
you have one node for one block (e.g. a memory controller), and then
have one driver for that block. Things get a little funny if you want
to have multiple drivers for the same device, e.g. you want a driver
in drivers/memory/ to support autorefresh in sleep states and another
driver in drivers/edac to support error handling. If you get something
like this, you need some coordination between the two drivers, but that
is unrelated to devicetree or not.

A platform driver should match against a set of similar devices that are
identified by a "compatible" string, and we try to list the name of the
ip block here, which could be something like "arm,pl12345rev1.2" if it
is licensed from another company, or if that is not known, describe it
by using the product number of the first chip it appeared in, e.g.
"amcc,405ez-memory-controller". APM does not use numbers for their
chips, so "apm,xgene-memory-controller" would refer to the memory
controller used in the "xgene" chip. If they come out with a second
product that uses the same one, they would use the same string, or
use "apm,ygene-memory-controller" for the one that is in "ygene" if
that is slightly different. The driver can then match against both
and handle the differences based on the compatible string.

The point where it gets silly is when you try to handle multiple
unrelated devices in the same driver and you get a probe function
that just does:

apm_edac_probe(struct platform_device *dev)
{
	if (dev_is_xgene_mc(dev))
		return apm_probe_xgene_mc(dev);
	if (dev_is_xgene_l2(dev))
		return apm_probe_xgene_l2(dev);
	if (dev_is_ygene_mc(dev))
		...
}

with each of the called functions implemented in a separate file.
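
For contrast, the conventional pattern - one platform_driver per block,
matching its variants by compatible string - looks roughly like this
(a sketch; the "apm,ygene-edac-mc" compatible and the other names are
hypothetical):

#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static const struct of_device_id xgene_edac_mc_of_match[] = {
	{ .compatible = "apm,xgene-edac-mc" },
	{ .compatible = "apm,ygene-edac-mc" },	/* hypothetical later variant */
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, xgene_edac_mc_of_match);

static int xgene_edac_mc_probe(struct platform_device *pdev)
{
	/* handle variant differences based on the matched compatible */
	return 0;
}

static struct platform_driver xgene_edac_mc_driver = {
	.probe = xgene_edac_mc_probe,
	.driver = {
		.name = "xgene-edac-mc",
		.of_match_table = xgene_edac_mc_of_match,
	},
};
module_platform_driver(xgene_edac_mc_driver);

MODULE_LICENSE("GPL");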

	Arnd
Arnd Bergmann April 30, 2015, 10:42 a.m. UTC | #16
On Thursday 30 April 2015 11:41:34 Borislav Petkov wrote:
> But if those three vendors went and created an EDAC module each for
> their system, it would be a lot of repeated copy'pasting and bloat.
> 
> Now, the other dimension doesn't look better either:
> 
> xgene_edac_mc
> xgene_edac_mc-v2
> xgene_edac_l3
> xgene_edac-l2
> ...
> 
> Also, in both cases, if any of those drivers need to talk to each other
> or know of one another in any way, the situation becomes really funny as
> they need to create something ad-hoc each time.
> 

I've looked at the driver in a little more detail now, and found that
it is a weird conglomerate at the moment. Basically you have four
drivers in one file here, appended to one another, but not sharing
any common functions except the module_init() that registers the four
drivers. This is what Rob was referring to when he suggested splitting
it up into four files, and these would in total be smaller than the
common file (by being able to use module_platform_driver()).

However, there is another dimension to this, which supports your point
about making it one driver for the platform: The split into four drivers
is completely artificial, because all four use the same IRQs and
implement four separate interrupt handlers for it (using IRQF_SHARED),
each handling only the events they are interested in. Similarly, they
all access the same register set ("pcperror"), aside from having their
own separate registers, and they use the "syscon" framework to get to
the registers.  This seems to be an inferior design, as the pcperror
stuff apparently is the real EDAC block on the system and that should
have a device driver for itself, with proper interfaces. syscon in
contrast is designed for the case where you have a badly designed
IP block that has lots of registers for functions all over the place
for which no clear abstraction is possible.

There is also the "efuse" that is listed as "syscon" here, but should
really use a separate framework that has been under discussion for a
while.

	Arnd
Borislav Petkov April 30, 2015, 12:33 p.m. UTC | #17
On Thu, Apr 30, 2015 at 12:21:14PM +0200, Arnd Bergmann wrote:
> They don't normally change stuff intentionally, but you can have
> drivers for the same hardware that are written so differently that
> it becomes hard to spot that they are doing the same thing.
> 
> We have a bunch of XGMAC drivers for the same basic licensed IP block,
> each one configured a bit differently and extended with some soc-specific
> bits, and drivers written from scratch.

So not nice.

> I would also not assume that people are trying hard to hide something
> from you, it's more that some guy is given the task to write a driver
> for an IP block, but is not told who made that, so the binding will
> just come with the name of the company he's working for.

So I'd hope some guy would go grep the sources and hopefully find that
very similar functionality is already present.

> Right, but they would then also have to load the code for the other
> blocks they did not license.

That's a problem? I mean, the other code is inactive anyway and is only
a couple of KBytes. So not a big deal IMO.

If that becomes a problem with embedded, we can always do ugly ifdeffery
and Kconfig games if absolutely necessary.

> > xgene_edac_mc
> > xgene_edac_mc-v2
> > xgene_edac_l3
> > xgene_edac-l2
> > ...
> 
> Not following you here, what is the problem here?

The unnecessary fragmentation of xgene_edac functionality. I would like,
if possible, to concentrate all xgene RAS stuff in one file, like x86
does. And then there's the aspect of the drivers talking to each other, below...

> Agreed. If the drivers need to talk to one another, that is a clear
> sign that the devices are not actually independent and that they
> really belong into a single driver. That's not the case I'm thinking
> of here.

But if we split <vendor>_edac stuff per IP block, that will become a
problem. So we better talk about it now.

> The devicetree just describes the available hardware blocks. Typically
> you have one node for one block (e.g. a memory controller), and then
> have one driver for that block.
>
> Things get a little funny if you want to have multiple drivers for
> the same device, e.g. you want a driver in drivers/memory/ to support
> autorefresh in sleep states and another driver in drivers/edac to
> support error handling. If you get something like this, you need
> some coordination between the two drivers, but that is unrelated to
> devicetree or not.
> 
> A platform driver should match against a set of similar devices that are
> identified by a "compatible" string, and we try to list the name of the
> ip block here, which could be something like "arm,pl12345rev1.2" if it
> is licensed from another company, or if that is not known, describe it
> by using the product number of the first chip it appeared in, e.g.
> "amcc,405ez-memory-controller". APM does not use numbers for their
> chips, so "apm,xgene-memory-controller" would refer to the memory
> controller used in the "xgene" chip. If they come out with a second
> product that uses the same one, they would use the same string, or
> use "apm,ygene-memory-controller" for the one that is in "ygene" if
> that is slightly different. The driver can then match against both
> and handle the differences based on the compatible string.

Thanks for that background info, it helps a lot.

> The point where it gets silly is when you try to handle multiple
> unrelated devices in the same driver and you get a probe function
> that just does:

Ok, what if they're related through RAS?

I.e., xgene_edac contains all RAS-specific functionality of the xgene IP
blocks. This doesn't sound silly to me, frankly, and it concentrates it
all in one place. Communication between blocks is trivial and there's no
code duplication.

Other vendors using those IP blocks would simply add new DT descriptions.

> apm_edac_probe(struct platform_device *dev)
> {
> 	if (dev_is_xgene_mc(dev))
> 		return apm_probe_xgene_mc(dev);
> 	if (dev_is_xgene_l2(dev))
> 		return apm_probe_xgene_l2(dev);
> 	if (dev_is_ygene_mc(dev))
> 		...

So we do that on x86 - amd64_edac does AMD per-family setup and sb_edac
too, for the last 4 Intel uarches. And it works just fine. So if "silly"
is the only problem, I can certainly live with it.
Arnd Bergmann April 30, 2015, 12:52 p.m. UTC | #18
On Thursday 30 April 2015 14:33:00 Borislav Petkov wrote:
> 
> > The point where it gets silly is when you try to handle multiple
> > unrelated devices in the same driver and you get a probe function
> > that just does:
> 
> Ok, what if they're related through RAS?
> 
> I.e., xgene_edac contains all RAS-specific functionality of the xgene IP
> blocks. This doesn't sound silly to me, frankly, and it concentrates it
> all in one place. Communication between blocks is trivial and there's no
> code duplication.

My point is that with the current implementation, there is no communication
at all: you have four separate drivers that just live in the same file.

> Other vendors using those IP blocks would simply add new DT descriptions.
> 
> > apm_edac_probe(struct platform_device *dev)
> > {
> >       if (dev_is_xgene_mc(dev))
> >               return apm_probe_xgene_mc(dev);
> >       if (dev_is_xgene_l2(dev))
> >               return apm_probe_xgene_l2(dev);
> >       if (dev_is_ygene_mc(dev))
> >               ...
> 
> So we do that on x86 - amd64_edac does AMD per-family setup and sb_edac
> too, for the last 4 Intel uarches. And it works just fine. So if "silly"
> is the only problem, I can certainly live with it.

This is different, because in your examples, there is still some shared
code between the CPU families, in fact most of the driver looks like it's
shared, and even the family-specific code is mostly shared. E.g.
f1x_early_channel_count() is used on almost all families.

What I mean with silly is the case where none of the code behind those
is shared at all, and the only common part is the probe() function
that contains the multiplexer.

	Arnd
Borislav Petkov April 30, 2015, 1 p.m. UTC | #19
On Thu, Apr 30, 2015 at 12:42:48PM +0200, Arnd Bergmann wrote:
> I've looked at the driver in a little more detail now, and found that
> it is a weird conglomerate at the moment. Basically you have four
> drivers in one file here, appended to one another, but not sharing
> any common functions except the module_init() that registers the four
> drivers. This is what Rob was referring to when he suggested splitting
> it up into four files, and these would in total be smaller than the
> common file (by being able to use module_platform_driver()).
> 
> However, there is another dimension to this, which supports your point
> about making it one driver for the platform: The split into four drivers
> is completely artificial, because all four use the same IRQs and
> implement four separate interrupt handlers for it (using IRQF_SHARED),
> each handling only the events they are interested in. Similarly, they
> all access the same register set ("pcperror"), aside from having their
> own separate registers, and they use the "syscon" framework to get to
> the registers.  This seems to be an inferior design, as the pcperror

Doh, now that you mention it...

So why isn't this thing registering a single IRQ handler which
multiplexes between the _check routines depending on the bits set in
PCPHPERRINTSTS/MEMERRINTSTS? (L3 alternatively, ctx->dev_csr + L3C_ESR).

This would really make it a single driver which acts according to the
bits set in those error registers. It can't get any simpler than that.
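
Something along these lines, roughly (the offsets, bit masks and the
context struct below are made up for illustration; only the register
names PCPHPERRINTSTS/MEMERRINTSTS come from the driver, and the pr_err()
calls stand in for the real _check routines):

#include <linux/bitops.h>
#include <linux/interrupt.h>
#include <linux/printk.h>
#include <linux/regmap.h>

#define PCPHPERRINTSTS		0x00	/* hypothetical offsets/masks */
#define MEMERRINTSTS		0x10
#define HP_PMD_ERR_MASK		BIT(0)
#define HP_L3_ERR_MASK		BIT(1)
#define HP_SOC_ERR_MASK		BIT(2)
#define MEM_MCU_ERR_MASK	BIT(0)

struct xgene_edac {			/* hypothetical shared driver context */
	struct regmap *pcp_map;
};

static irqreturn_t xgene_edac_isr(int irq, void *dev_id)
{
	struct xgene_edac *ctx = dev_id;
	unsigned int hp_stat, mem_stat;

	regmap_read(ctx->pcp_map, PCPHPERRINTSTS, &hp_stat);
	regmap_read(ctx->pcp_map, MEMERRINTSTS, &mem_stat);

	if (!hp_stat && !mem_stat)
		return IRQ_NONE;

	/* multiplex to the proper decoding routine per status bit */
	if (mem_stat & MEM_MCU_ERR_MASK)
		pr_err("run the memory controller _check routine\n");
	if (hp_stat & HP_PMD_ERR_MASK)
		pr_err("run the L1/L2 (PMD) _check routine\n");
	if (hp_stat & HP_L3_ERR_MASK)
		pr_err("run the L3 _check routine\n");
	if (hp_stat & HP_SOC_ERR_MASK)
		pr_err("run the SoC _check routine\n");

	return IRQ_HANDLED;
}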

Loc?
Loc Ho April 30, 2015, 4:57 p.m. UTC | #20
Hi,

>> I've looked at the driver in a little more detail now, and found that
>> it is a weird conglomerate at the moment. Basically you have four
>> drivers in one file here, appended to one another, but not sharing
>> any common functions except the module_init() that registers the four
>> drivers. This is what Rob was referring to when he suggested splitting
>> it up into four files, and these would in total be smaller than the
>> common file (by being able to use module_platform_driver()).
>>
>> However, there is another dimension to this, which supports your point
>> about making it one driver for the platform: The split into four drivers
>> is completely artificial, because all four use the same IRQs and
>> implement four separate interrupt handlers for it (using IRQF_SHARED),
>> each handling only the events they are interested in. Similarly, they
>> all access the same register set ("pcperror"), aside from having their
>> own separate registers, and they use the "syscon" framework to get to
>> the registers.  This seems to be an inferior design, as the pcperror
>
> Doh, now that you mention it...
>
> So why isn't this thing registering a single IRQ handler which
> multiplexes between the _check routines depending on the bits set in
> PCPHPERRINTSTS/MEMERRINTSTS? (L3 alternatively, ctx->dev_csr + L3C_ESR).
>
> This would really make it a single driver which acts according to the
> bits set in those error registers. It can't get any simpler than that.
>
> Loc?

I have read all the email interactions. Yes, I can write a single EDAC
driver. But I actually have both multiple instances and single instances
of the same IPs. For example, I have 4 DDR controllers, 4 CPU domains,
one L3 and one SoC. If you can suggest to me how this would work with
devicetree, I might be able to make it work. But from a HW point of
view (take the memory controller as an example), there really are 4
hardware blocks plus some shared IP blocks for status. It is a much
simpler representation with multiple instances of the driver. In
addition, should we one day have 6 memory controllers, I would just
add two more instances in the DT.

I think we should first agree on the DT binding and not worry
about APEI. Then, whether we have one file or multiple files. Again,
the HW consists of:

1. One top level interrupt and status registers
2. CSW, MCB A and MCB B for detection of any active memory controllers
3. 4 individual memory controller instances
4. 4 CPU domain instances, each with its own individual L2 registers
5. Each CPU has its own L1 instance
6. One L3 instance
7. One SoC instance

As far as what is useful, we find that the memory controller and the
L1 and L2 are useful as they provide error info - for HW/system
debugging as well as DDR margining.

-Loc
Borislav Petkov April 30, 2015, 5:18 p.m. UTC | #21
On Thu, Apr 30, 2015 at 09:57:46AM -0700, Loc Ho wrote:
> I had read all the emails interaction. Yes I can write a single EDAC
> driver. But I actually have multiple instances and single instance of
> the same IP's. For example, I have 4 DDR controllers, 4 CPU domains,
> one L3 and one SoC. If you can suggest to me how this would work with
> devicetree, I might be able to make it works. But from HW point of
> view (take the memory controller as an example), there is really 4
> hardware blocks and some shared IP's block for status. It is much
> simple representation with multiple instance of the driver.

Your whole error reporting is done roughly through the same registers,
as Arnd pointed out. Why can't you build your single driver around that
and, based on the error type being reported, multiplex inside a *single*
interrupt handler to the proper decoding routines?

> I think we should first agreed on the DT binding and let's not worry
> about APEI. Then, whether we have one file or multiple file. Again,
> the HW consist of:
> 
> 1. One top level interrupt and status registers
> 2. CSW, MCB A and MCB B for detection of any active memory controllers
> 3. 4 individual memory controller instance
> 4. 4 CPU domain instance with its own individual L2 registers
> 5. Each CPU has instance own L1
> 6. One L3 instance
> 7. One SoC instance

And all of those are separate IP blocks and they can be integrated with
other, non APM hardware, or do those things all belong together? IOW,
can we expect such IP blocks to appear in other hw platforms or are
they all custom APM design which need to go together due to proprietary
coherency protocol or whatever, for example?

> As for as what is useful, we find that the memory controller and the
> L1 and L2 are useful as it provides info for errors - for HW/system
> debugging as well as margin of DDR.

Sounds like all is useful.
Loc Ho April 30, 2015, 9:19 p.m. UTC | #22
Hi,

>> I had read all the emails interaction. Yes I can write a single EDAC
>> driver. But I actually have multiple instances and single instance of
>> the same IP's. For example, I have 4 DDR controllers, 4 CPU domains,
>> one L3 and one SoC. If you can suggest to me how this would work with
>> devicetree, I might be able to make it works. But from HW point of
>> view (take the memory controller as an example), there is really 4
>> hardware blocks and some shared IP's block for status. It is much
>> simple representation with multiple instance of the driver.
>
> Your whole error reporting is done roughly through the same registers,
> as Arnd pointed out. Why can't you build your single driver around that
> and based on the error type being reported, multiplex inside a *single*
> interrupt handler to the proper decoding routines.

Yes... I can do this but I don't see how it will scale when some SoC
in the future may have more or fewer instances of the memory controllers,
for example. I guess for now, I will just iterate 4 times with a
single base address, or four resources in a loop (see the rough sketch
after the node below). Let's agree on the DT layout and I will generate
a new patch then. See below...

>
>> I think we should first agreed on the DT binding and let's not worry
>> about APEI. Then, whether we have one file or multiple file. Again,
>> the HW consist of:
>>
>> 1. One top level interrupt and status registers
>> 2. CSW, MCB A and MCB B for detection of any active memory controllers
>> 3. 4 individual memory controller instance
>> 4. 4 CPU domain instance with its own individual L2 registers
>> 5. Each CPU has instance own L1
>> 6. One L3 instance
>> 7. One SoC instance
>
> And all of those are separate IP blocks and they can be integrated with
> other, non APM hardware, or do those things all belong together?

The top level interrupt may be different and APM specific unless other
vendors adopt the same bit definitions. I highly doubt other vendors
will use the same bit definitions. The CSW is APM only. The MCB A, MCB
B, and memory controller are APM only. The L3 and SoC are APM specific
only. For L1 and L2, I will need to check with the CPU designer - but
likely APM specific.

> IOW,
> can we expect such IP blocks to appear in other hw platforms or are
> they all custom APM design which need to go together due to proprietary
> coherency protocol or whatever, for example?

See above.

>
>> As for as what is useful, we find that the memory controller and the
>> L1 and L2 are useful as it provides info for errors - for HW/system
>> debugging as well as margin of DDR.
>
> Sounds like all is useful.
>

Now let us discuss the DT layout if it is a single driver:

xgeneedac: xgeneedac@7e800000 {
        compatible = "apm,xgene-edac";
        regmap-efuse = <&efuse>; /* efuse */
        reg = <0x0 0x78800000 0x0 0x100>,    /* Top level interrupt status resource */
              <0x0 0x7e200000 0x0 0x1000>,   /* CSW for MCB active resource */
              <0x0 0x7e700000 0x0 0x1000>,   /* MCB A resource */
              <0x0 0x7e720000 0x0 0x1000>,   /* MCB B resource */
              <0x0 0x7e800000 0x0 0x1000>,   /* MCU 0 resource */
              <0x0 0x7e840000 0x0 0x1000>,   /* MCU 1 resource */
              <0x0 0x7e880000 0x0 0x1000>,   /* MCU 2 resource */
              <0x0 0x7e8c0000 0x0 0x1000>,   /* MCU 3 resource */
              <0x0 0x7c000000 0x0 0x200000>, /* CPU 0 domain for L1 and L2 */
              <0x0 0x7c200000 0x0 0x200000>, /* CPU 1 domain for L1 and L2 */
              <0x0 0x7c400000 0x0 0x200000>, /* CPU 2 domain for L1 and L2 */
              <0x0 0x7c600000 0x0 0x200000>, /* CPU 3 domain for L1 and L2 */
              <0x0 0x7e600000 0x0 0x1000>,   /* L3 resource */
              <0x0 0x7e930000 0x0 0x1000>,   /* SoC bus resource */
              <0x0 0x7e000000 0x0 0x1000>;   /* SoC device resource */
        interrupts = <0x0 0x20 0x4>,
                     <0x0 0x21 0x4>,
                     <0x0 0x27 0x4>;
};
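
And for the iteration mentioned above, a rough sketch of walking however
many MCU reg entries the node provides (placeholder names; it assumes
the MCU resources can be located by index within the node):

#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static int xgene_edac_mc_probe_all(struct platform_device *pdev)
{
	struct resource *res;
	int i;

	/* iterate over the MCU "reg" resources present in the DT node */
	for (i = 0; (res = platform_get_resource(pdev, IORESOURCE_MEM, i)); i++) {
		void __iomem *mcu_csr = devm_ioremap_resource(&pdev->dev, res);

		if (IS_ERR(mcu_csr))
			return PTR_ERR(mcu_csr);
		/* register one EDAC memory controller instance per resource */
	}
	return 0;
}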

Arnd, Rob, and Borislav, please comment and let me know.

-Loc
Borislav Petkov April 30, 2015, 9:30 p.m. UTC | #23
On Thu, Apr 30, 2015 at 02:19:44PM -0700, Loc Ho wrote:
> The top level interrupt may be different and APM specific unless other
> vendors adapt the same bit definitions. I highly doubt other vendor
> will use the same bit definitions. The CSW is APM only. The MCB A, MCB
> B, and memory controller are APM only. The L3, and SoC are APM specify
> only. For L1 and L2, I will need to check with the CPU designer - but
> likely APM specific.

So it sounds to me like this whole driver will control APM-specific hw
so a single driver should be fine.

Thanks.
Loc Ho April 30, 2015, 9:39 p.m. UTC | #24
Hi,

>> The top level interrupt may be different and APM specific unless other
>> vendors adapt the same bit definitions. I highly doubt other vendor
>> will use the same bit definitions. The CSW is APM only. The MCB A, MCB
>> B, and memory controller are APM only. The L3, and SoC are APM specify
>> only. For L1 and L2, I will need to check with the CPU designer - but
>> likely APM specific.
>
> So it sounds to me like this whole driver will control APM-specific hw
> so a single driver should be fine.

Okay... But I would like Rob to agree on the DT binding before I
generate this new patch.

Rob, can you comment on the DT binding in the previous email thread?

-Loc
Rob Herring April 30, 2015, 10:36 p.m. UTC | #25
On Thu, Apr 30, 2015 at 4:19 PM, Loc Ho <lho@apm.com> wrote:
> Hi,
>
>>> I had read all the emails interaction. Yes I can write a single EDAC
>>> driver. But I actually have multiple instances and single instance of
>>> the same IP's. For example, I have 4 DDR controllers, 4 CPU domains,
>>> one L3 and one SoC. If you can suggest to me how this would work with
>>> devicetree, I might be able to make it works. But from HW point of
>>> view (take the memory controller as an example), there is really 4
>>> hardware blocks and some shared IP's block for status. It is much
>>> simple representation with multiple instance of the driver.
>>
>> Your whole error reporting is done roughly through the same registers,
>> as Arnd pointed out. Why can't you build your single driver around that
>> and based on the error type being reported, multiplex inside a *single*
>> interrupt handler to the proper decoding routines.
>
> Yes... I can do this but I don't see how it will scale when some SoC
> in the future may have more or less instance of the memory controllers
> for example. I guess for now, I will just iteration 4 time with an
> single base address or four resources in a loop. Let's agreed on the
> DT layout and I will generate an new patch then. See below...

You are right, it does not scale.

>>> I think we should first agreed on the DT binding and let's not worry
>>> about APEI. Then, whether we have one file or multiple file. Again,
>>> the HW consist of:
>>>
>>> 1. One top level interrupt and status registers

For these registers, are there ECC-specific functions here or just
normal interrupt control/status bits (mask/unmask/status)? Assuming
the latter, you should make this block an interrupt-controller.
Then this is the interrupt-parent for the rest of the blocks.
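
To make the suggestion concrete, here is a minimal sketch of such a
demuxer (the register offset, bit count, and setup helper are assumptions
for illustration, not the real X-Gene layout; irq_domain creation via
irq_domain_add_linear() is left out, and the chained-handler prototype is
the one used by kernels of this era):

#include <linux/bitops.h>
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>

#define XGENE_PCP_HP_STAT	0x00	/* assumed top-level status offset */
#define XGENE_PCP_NR_SRC	32	/* assumed number of status bits */

struct xgene_pcp_intc {
	void __iomem *base;		/* the 0x78800000 top-level block */
	struct irq_domain *domain;	/* one virq per status bit */
};

/* Chained handler: demux the top-level status bits to per-block virqs. */
static void xgene_pcp_err_demux(unsigned int irq, struct irq_desc *desc)
{
	struct xgene_pcp_intc *intc = irq_desc_get_handler_data(desc);
	struct irq_chip *chip = irq_desc_get_chip(desc);
	unsigned long status;
	unsigned int bit;

	chained_irq_enter(chip, desc);

	/* Each set bit identifies which child block raised an error. */
	status = readl(intc->base + XGENE_PCP_HP_STAT);
	for_each_set_bit(bit, &status, XGENE_PCP_NR_SRC)
		generic_handle_irq(irq_find_mapping(intc->domain, bit));

	chained_irq_exit(chip, desc);
}

/* Hook the demux onto a parent GIC interrupt (0x20, 0x21 or 0x27). */
static void xgene_pcp_intc_setup(struct xgene_pcp_intc *intc, int parent_irq)
{
	irq_set_handler_data(parent_irq, intc);
	irq_set_chained_handler(parent_irq, xgene_pcp_err_demux);
}

With that, the child nodes point their interrupt-parent at this block and
only ever see their own error lines.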

>>> 2. CSW, MCB A and MCB B for detection of any active memory controllers
>>> 3. 4 individual memory controller instance
>>> 4. 4 CPU domain instance with its own individual L2 registers
>>> 5. Each CPU has instance own L1
>>> 6. One L3 instance
>>> 7. One SoC instance
>>
>> And all of those are separate IP blocks and they can be integrated with
>> other, non APM hardware, or do those things all belong together?
>
> The top level interrupt may be different and APM specific unless other
> vendors adapt the same bit definitions. I highly doubt other vendor
> will use the same bit definitions. The CSW is APM only. The MCB A, MCB
> B, and memory controller are APM only. The L3, and SoC are APM specify
> only. For L1 and L2, I will need to check with the CPU designer - but
> likely APM specific.
>
>> IOW,
>> can we expect such IP blocks to appear in other hw platforms or are
>> they all custom APM design which need to go together due to proprietary
>> coherency protocol or whatever, for example?
>
> See above.
>
>>
>>> As for as what is useful, we find that the memory controller and the
>>> L1 and L2 are useful as it provides info for errors - for HW/system
>>> debugging as well as margin of DDR.
>>
>> Sounds like all is useful.
>>
>
> Now let us discuss the DT layout if it is an single driver:
>
> xgeneedac: xgeneedac@7e800000 {
>         compatible = "apm,xgene-edac";
>         regmap-efuse = <&efuse>; /* efuse */
>         reg = <0x0 0x78800000 0x0 0x100>, /* Top level interrupt
> status resource */
>                 <0x0 0x7e200000 0x0 0x1000>, /* CSW for MCB active resource
>                 <0x0 0x7e700000 0x0 0x1000>, /* MCB A resource */
>                 <0x0 0x7e720000 0x0 0x1000>, /* MCB B resource */
>                 <0x0 0x7e800000 0x0 0x1000>, /* MCU 0 resource */
>                 <0x0 0x7e840000 0x0 0x1000>, /* MCU 1 resource */
>                 <0x0 0x7e880000 0x0 0x1000>, /* MCU 2 resource */
>                 <0x0 0x7e8c0000 0x0 0x1000>, /* MCU 3 resource */
>                 <0x0 0x7c000000 0x0 0x200000>, /* CPU 0 domain for L1 and L2 */
>                 <0x0 0x7c200000 0x0 0x200000>, /* CPU 1 domain for L1 and L2 */
>                 <0x0 0x7c400000 0x0 0x200000>, /* CPU 2 domain for L1 and L2 */
>                 <0x0 0x7c600000 0x0 0x200000>, /* CPU 3 domain for L1 and L2 */
>                 <0x0 0x7e600000 0x0 0x1000>, /* L3 resource */
>                 <0x0 0x7e930000 0x0 0x1000>, /* SoC bus resource */
>                 <0x0 0x7e000000 0x0 0x1000>, /* SoC device resource */

Uggg, no. You were on the right track earlier in the thread. One node
per instance of each block.

>       interrupts = <0x0 0x20 0x4>,
>                        <0x0 0x21 0x4>,
>                        <0x0 0x27 0x4>;

Can you describe what each of these interrupts does?

Rob
Arnd Bergmann April 30, 2015, 10:47 p.m. UTC | #26
On Thursday 30 April 2015 17:36:16 Rob Herring wrote:
> > Now let us discuss the DT layout if it is an single driver:
> >
> > xgeneedac: xgeneedac@7e800000 {
> >         compatible = "apm,xgene-edac";
> >         regmap-efuse = <&efuse>; /* efuse */
> >         reg = <0x0 0x78800000 0x0 0x100>, /* Top level interrupt
> > status resource */
> >                 <0x0 0x7e200000 0x0 0x1000>, /* CSW for MCB active resource
> >                 <0x0 0x7e700000 0x0 0x1000>, /* MCB A resource */
> >                 <0x0 0x7e720000 0x0 0x1000>, /* MCB B resource */
> >                 <0x0 0x7e800000 0x0 0x1000>, /* MCU 0 resource */
> >                 <0x0 0x7e840000 0x0 0x1000>, /* MCU 1 resource */
> >                 <0x0 0x7e880000 0x0 0x1000>, /* MCU 2 resource */
> >                 <0x0 0x7e8c0000 0x0 0x1000>, /* MCU 3 resource */
> >                 <0x0 0x7c000000 0x0 0x200000>, /* CPU 0 domain for L1 and L2 */
> >                 <0x0 0x7c200000 0x0 0x200000>, /* CPU 1 domain for L1 and L2 */
> >                 <0x0 0x7c400000 0x0 0x200000>, /* CPU 2 domain for L1 and L2 */
> >                 <0x0 0x7c600000 0x0 0x200000>, /* CPU 3 domain for L1 and L2 */
> >                 <0x0 0x7e600000 0x0 0x1000>, /* L3 resource */
> >                 <0x0 0x7e930000 0x0 0x1000>, /* SoC bus resource */
> >                 <0x0 0x7e000000 0x0 0x1000>, /* SoC device resource */
> 
> Uggg, no. You were on the right track earlier in the thread. One node
> per instance of each block.
> 

I agree these should be separate nodes, but I also think that we want a
separate node for the pcp/pcperror/xgeneedac device. The compatible
string for that should match whatever the datasheet calls that block,
no idea why we now have the third name for that.

The specific parts could either be subnodes of the pcperror device,
or they could be separate device nodes that reference the pcperror
device through a phandle, so the driver can make the connection between
them. This probably depends on what exactly all those nodes are:
if the registers in there are all exclusively related to EDAC handling
of the pcperror, subnodes would be best, but if the registers also
contain functionality that is not related to EDAC handling, we
probably want to have separate top-level nodes that a driver could
bind to, e.g. for doing power management on the memory controller.

	Arnd
Loc Ho April 30, 2015, 10:59 p.m. UTC | #27
Hi,

>>
>>>> I had read all the emails interaction. Yes I can write a single EDAC
>>>> driver. But I actually have multiple instances and single instance of
>>>> the same IP's. For example, I have 4 DDR controllers, 4 CPU domains,
>>>> one L3 and one SoC. If you can suggest to me how this would work with
>>>> devicetree, I might be able to make it works. But from HW point of
>>>> view (take the memory controller as an example), there is really 4
>>>> hardware blocks and some shared IP's block for status. It is much
>>>> simple representation with multiple instance of the driver.
>>>
>>> Your whole error reporting is done roughly through the same registers,
>>> as Arnd pointed out. Why can't you build your single driver around that
>>> and based on the error type being reported, multiplex inside a *single*
>>> interrupt handler to the proper decoding routines.
>>
>> Yes... I can do this but I don't see how it will scale when some SoC
>> in the future may have more or less instance of the memory controllers
>> for example. I guess for now, I will just iteration 4 time with an
>> single base address or four resources in a loop. Let's agreed on the
>> DT layout and I will generate an new patch then. See below...
>
> You are right, it does not scale.

Glad that you agree; I would like to avoid that.

>
>>>> I think we should first agreed on the DT binding and let's not worry
>>>> about APEI. Then, whether we have one file or multiple file. Again,
>>>> the HW consist of:
>>>>
>>>> 1. One top level interrupt and status registers
>
> For these registers, are there ECC specific functions here or just
> normal interrupt control/status bits (mask/unmask/status)?

It is just interrupt and status bits - one bit per type of error.

> Assuming
> the later, then you should make this block an interrupt-controller.
> Then this is the interrupt-parent for the rest of the blocks.

Okay... Let me see if I can find an example.

>
>>>> 2. CSW, MCB A and MCB B for detection of any active memory controllers
>>>> 3. 4 individual memory controller instance
>>>> 4. 4 CPU domain instance with its own individual L2 registers
>>>> 5. Each CPU has instance own L1
>>>> 6. One L3 instance
>>>> 7. One SoC instance
>>>
>>> And all of those are separate IP blocks and they can be integrated with
>>> other, non APM hardware, or do those things all belong together?
>>
>> The top level interrupt may be different and APM specific unless other
>> vendors adapt the same bit definitions. I highly doubt other vendor
>> will use the same bit definitions. The CSW is APM only. The MCB A, MCB
>> B, and memory controller are APM only. The L3, and SoC are APM specify
>> only. For L1 and L2, I will need to check with the CPU designer - but
>> likely APM specific.
>>
>>> IOW,
>>> can we expect such IP blocks to appear in other hw platforms or are
>>> they all custom APM design which need to go together due to proprietary
>>> coherency protocol or whatever, for example?
>>
>> See above.
>>
>>>
>>>> As for as what is useful, we find that the memory controller and the
>>>> L1 and L2 are useful as it provides info for errors - for HW/system
>>>> debugging as well as margin of DDR.
>>>
>>> Sounds like all is useful.
>>>
>>
>> Now let us discuss the DT layout if it is an single driver:
>>
>> xgeneedac: xgeneedac@7e800000 {
>>         compatible = "apm,xgene-edac";
>>         regmap-efuse = <&efuse>; /* efuse */
>>         reg = <0x0 0x78800000 0x0 0x100>, /* Top level interrupt
>> status resource */
>>                 <0x0 0x7e200000 0x0 0x1000>, /* CSW for MCB active resource
>>                 <0x0 0x7e700000 0x0 0x1000>, /* MCB A resource */
>>                 <0x0 0x7e720000 0x0 0x1000>, /* MCB B resource */
>>                 <0x0 0x7e800000 0x0 0x1000>, /* MCU 0 resource */
>>                 <0x0 0x7e840000 0x0 0x1000>, /* MCU 1 resource */
>>                 <0x0 0x7e880000 0x0 0x1000>, /* MCU 2 resource */
>>                 <0x0 0x7e8c0000 0x0 0x1000>, /* MCU 3 resource */
>>                 <0x0 0x7c000000 0x0 0x200000>, /* CPU 0 domain for L1 and L2 */
>>                 <0x0 0x7c200000 0x0 0x200000>, /* CPU 1 domain for L1 and L2 */
>>                 <0x0 0x7c400000 0x0 0x200000>, /* CPU 2 domain for L1 and L2 */
>>                 <0x0 0x7c600000 0x0 0x200000>, /* CPU 3 domain for L1 and L2 */
>>                 <0x0 0x7e600000 0x0 0x1000>, /* L3 resource */
>>                 <0x0 0x7e930000 0x0 0x1000>, /* SoC bus resource */
>>                 <0x0 0x7e000000 0x0 0x1000>, /* SoC device resource */
>
> Uggg, no. You were on the right track earlier in the thread. One node
> per instance of each block.

Let me try just the memory controller. I will re-generate a patch for
just the memory controller.

>
>>       interrupts = <0x0 0x20 0x4>,
>>                        <0x0 0x21 0x4>,
>>                        <0x0 0x27 0x4>;
>
> Can you describe what each of these interrupts do?

IRQ 0x20 is used for uncorrectable errors from the memory controller,
L1, L2, and some SoC IP parity errors.
IRQ 0x21 is used for correctable errors from the memory controller and
some SoC IP parity errors.
IRQ 0x27 is used for SoC parity errors.
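
For reference, this UE/CE split is what maps onto edac_mc_handle_error()
in the EDAC core. A hedged sketch (the mci, the message strings, and the
layer values are placeholders; in-tree EDAC drivers of this era get the
declaration from the private edac_core.h header):

#include "edac_core.h"	/* private header used by in-tree EDAC drivers */

/*
 * Report one memory controller event; 'uncorrectable' reflects which of
 * the two MCU error interrupts (0x20 vs 0x21) fired.
 */
static void xgene_edac_mc_report(struct mem_ctl_info *mci, bool uncorrectable)
{
	edac_mc_handle_error(uncorrectable ? HW_EVENT_ERR_UNCORRECTED
					   : HW_EVENT_ERR_CORRECTED,
			     mci,
			     1,		/* error count */
			     0, 0, 0,	/* page, offset, syndrome (unknown here) */
			     0, -1, -1,	/* csrow/channel layers */
			     uncorrectable ? "MCU uncorrectable error"
					   : "MCU correctable error",
			     "");
}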

-Loc
Loc Ho May 1, 2015, 6:44 a.m. UTC | #28
Hi Arnd,

>> > Now let us discuss the DT layout if it is an single driver:
>> >
>> > xgeneedac: xgeneedac@7e800000 {
>> >         compatible = "apm,xgene-edac";
>> >         regmap-efuse = <&efuse>; /* efuse */
>> >         reg = <0x0 0x78800000 0x0 0x100>, /* Top level interrupt
>> > status resource */
>> >                 <0x0 0x7e200000 0x0 0x1000>, /* CSW for MCB active resource
>> >                 <0x0 0x7e700000 0x0 0x1000>, /* MCB A resource */
>> >                 <0x0 0x7e720000 0x0 0x1000>, /* MCB B resource */
>> >                 <0x0 0x7e800000 0x0 0x1000>, /* MCU 0 resource */
>> >                 <0x0 0x7e840000 0x0 0x1000>, /* MCU 1 resource */
>> >                 <0x0 0x7e880000 0x0 0x1000>, /* MCU 2 resource */
>> >                 <0x0 0x7e8c0000 0x0 0x1000>, /* MCU 3 resource */
>> >                 <0x0 0x7c000000 0x0 0x200000>, /* CPU 0 domain for L1 and L2 */
>> >                 <0x0 0x7c200000 0x0 0x200000>, /* CPU 1 domain for L1 and L2 */
>> >                 <0x0 0x7c400000 0x0 0x200000>, /* CPU 2 domain for L1 and L2 */
>> >                 <0x0 0x7c600000 0x0 0x200000>, /* CPU 3 domain for L1 and L2 */
>> >                 <0x0 0x7e600000 0x0 0x1000>, /* L3 resource */
>> >                 <0x0 0x7e930000 0x0 0x1000>, /* SoC bus resource */
>> >                 <0x0 0x7e000000 0x0 0x1000>, /* SoC device resource */
>>
>> Uggg, no. You were on the right track earlier in the thread. One node
>> per instance of each block.
>>
>
> I agree these should be separate nodes, but I also think that we want a
> separate node for the pcp/pcperror/xgeneedac device. The compatible
> string for that should match whatever the datasheet calls that block,
> no idea why we now have the third name for that.

Let me go back to the regmap in the previous version - one node for the
CPU domain bus (PCP), one node for the CPU switch fabric (CSW), one node
for memory bridge A (MCB A), and one node for memory bridge B (MCB B).
Writing a simple driver whose only job is to provide a mapping of those
register spaces doesn't make sense to me when one can use the regmap
interface. There is really nothing in these IPs that the OS would care
about; the only thing that would care is the FW.
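
A short sketch of how a driver could consume those plain syscon nodes
through the regmap interface (the property names follow the proposed
binding; the register offset and the probe body are illustrative
assumptions, and error handling is simplified):

#include <linux/err.h>
#include <linux/mfd/syscon.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>

#define CSW_MCB_ACTIVE_REG	0x104	/* hypothetical "active MCBs/MCUs" register */

static int xgene_edac_mc_probe(struct platform_device *pdev)
{
	struct device_node *np = pdev->dev.of_node;
	struct regmap *pcp, *csw, *mcba, *mcbb;
	unsigned int val;
	int ret;

	/* Shared blocks are plain "syscon" nodes referenced by phandle. */
	pcp  = syscon_regmap_lookup_by_phandle(np, "regmap-pcp");
	csw  = syscon_regmap_lookup_by_phandle(np, "regmap-csw");
	mcba = syscon_regmap_lookup_by_phandle(np, "regmap-mcba");
	mcbb = syscon_regmap_lookup_by_phandle(np, "regmap-mcbb");
	if (IS_ERR(pcp) || IS_ERR(csw) || IS_ERR(mcba) || IS_ERR(mcbb))
		return -EPROBE_DEFER;	/* simplified: syscon not ready yet */

	/* Read-only peek at a shared block, e.g. which memory bridges are active. */
	ret = regmap_read(csw, CSW_MCB_ACTIVE_REG, &val);
	if (ret)
		return ret;

	dev_info(&pdev->dev, "CSW MCB configuration: 0x%08x\n", val);
	return 0;
}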

>
> The specific parts could either be subnodes of the pcperror device,
> or they could be separate device nodes that reference the pcperror
> device through a phandle, so the driver can make the connection between
> them. This probably depends on what exactly all those nodes are:
> if the registers in there are all exclusively related to EDAC handling
> of the pcperror, subnodes would be best, but if the registers also
> contain functionality that is not related to EDAC handling, we
> probably want to have separate top-level nodes that a driver could
> bind to, e.g. for doing power management on the memory controller.

Again... Each sub-block does have other types of registers besides the
parity/status registers, but they are also divided into blocks of
registers. Power management for the memory controller will be handled by
the FW. The only exception that I can see is the CPU clock divider for
the P-states, which will require a driver, but that region can be mapped
as a special area.

Let's just focus on the memory controller first and not worry about
the others just yet. So here is what I have:

/* CPU bus node */
pcp: pcp@78800000 {
        compatible = "syscon";
        reg = <0x0 0x78800000 0x0 0x100>;
};

/* CPU switch fabric node */
csw: csw@7e200000 {
        compatible = "syscon";
        reg = <0x0 0x7e200000 0x0 0x1000>;
};

/* Memory Bridge A */
mcba: mcba@7e700000 {
        compatible = "syscon";
        reg = <0x0 0x7e700000 0x0 0x1000>;
};

/* Memory Bridge B */
mcbb: mcbb@7e720000 {
        compatible = "syscon";
        reg = <0x0 0x7e720000 0x0 0x1000>;
};

/* Memory Controller 0 */
edacmc0: edacmc0@7e800000 {
        compatible = "apm,xgene-edac-mc";
        regmap-pcp = <&pcp>;
        regmap-csw = <&csw>;
        regmap-mcba = <&mcba>;
        regmap-mcbb = <&mcbb>;
        reg = <0x0 0x7e800000 0x0 0x1000>;
        interrupts = <0x0 0x20 0x4>,
                          <0x0 0x21 0x4>;
};

Any comment on this before I generate another version?

-Loc
Loc Ho May 1, 2015, 7:59 p.m. UTC | #29
Hi Rob,

>>>> I think we should first agreed on the DT binding and let's not worry
>>>> about APEI. Then, whether we have one file or multiple file. Again,
>>>> the HW consist of:
>>>>
>>>> 1. One top level interrupt and status registers
>
> For these registers, are there ECC specific functions here or just
> normal interrupt control/status bits (mask/unmask/status)? Assuming
> the later, then you should make this block an interrupt-controller.
> Then this is the interrupt-parent for the rest of the blocks.
>

This is the only item remaining before I generate a patch with just the
memory controller. Most of the code that I see is actually for interrupt
controller HW. As it is just interrupt mask and status registers, is
there an example in Linux that I can model this after? Also, I am not
quite convinced as to why we can't just share the interrupt and have
each memory controller request it.

-Loc
Rob Herring May 4, 2015, 10:36 p.m. UTC | #30
On Fri, May 1, 2015 at 2:59 PM, Loc Ho <lho@apm.com> wrote:
> Hi Rob,
>
>>>>> I think we should first agreed on the DT binding and let's not worry
>>>>> about APEI. Then, whether we have one file or multiple file. Again,
>>>>> the HW consist of:
>>>>>
>>>>> 1. One top level interrupt and status registers
>>
>> For these registers, are there ECC specific functions here or just
>> normal interrupt control/status bits (mask/unmask/status)? Assuming
>> the later, then you should make this block an interrupt-controller.
>> Then this is the interrupt-parent for the rest of the blocks.
>>
>
> This is the only item remain before I generate an patch with just the
> memory controller. Most of the code that I see are actually an
> interrupt controller HW. As it is just an interrupt mask and status
> registers, is there an example in Linux that I can model after? Also,
> I am not quite convince as to why we can't just share the interrupt
> and request it by each memory controller?

Pretty much anything that calls irq_set_chained_handler. If you don't
need to touch the shared registers and can set them up once at boot
time, then you could just do shared irq handlers. But if you have to
clear the interrupt within the PCP, you need a demuxer.
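
A hedged sketch of that shared-interrupt variant: each instance requests
the same GIC line with IRQF_SHARED and only consults the top-level PCP
status (via regmap) to decide whether the event is its own. PCP_HP_STAT
and MCU_UE_MASK() are placeholders, not the real X-Gene definitions:

#include <linux/bitops.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>

#define PCP_HP_STAT	0x00		/* assumed top-level status offset */
#define MCU_UE_MASK(n)	BIT(8 + (n))	/* assumed per-MCU error status bit */

struct xgene_edac_mc_ctx {
	struct regmap *pcp;	/* from the regmap-pcp phandle */
	int mcu_id;		/* 0..3 */
};

static irqreturn_t xgene_edac_mc_isr(int irq, void *dev_id)
{
	struct xgene_edac_mc_ctx *ctx = dev_id;
	unsigned int status;

	if (regmap_read(ctx->pcp, PCP_HP_STAT, &status))
		return IRQ_NONE;

	/* Not this MCU's bit: let the other sharers have a look. */
	if (!(status & MCU_UE_MASK(ctx->mcu_id)))
		return IRQ_NONE;

	/* Clear and report at the MCU itself (interrupts clear at source). */
	return IRQ_HANDLED;
}

/* Both MC error interrupts from the example node are requested as shared. */
static int xgene_edac_mc_request_irqs(struct platform_device *pdev,
				      struct xgene_edac_mc_ctx *ctx)
{
	int i, irq, rc;

	for (i = 0; i < 2; i++) {
		irq = platform_get_irq(pdev, i);
		if (irq < 0)
			return irq;
		rc = devm_request_irq(&pdev->dev, irq, xgene_edac_mc_isr,
				      IRQF_SHARED, dev_name(&pdev->dev), ctx);
		if (rc)
			return rc;
	}
	return 0;
}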

Rob
Loc Ho May 4, 2015, 11:39 p.m. UTC | #31
Hi,

On Mon, May 4, 2015 at 3:36 PM, Rob Herring <robherring2@gmail.com> wrote:
> On Fri, May 1, 2015 at 2:59 PM, Loc Ho <lho@apm.com> wrote:
>> Hi Rob,
>>
>>>>>> I think we should first agreed on the DT binding and let's not worry
>>>>>> about APEI. Then, whether we have one file or multiple file. Again,
>>>>>> the HW consist of:
>>>>>>
>>>>>> 1. One top level interrupt and status registers
>>>
>>> For these registers, are there ECC specific functions here or just
>>> normal interrupt control/status bits (mask/unmask/status)? Assuming
>>> the later, then you should make this block an interrupt-controller.
>>> Then this is the interrupt-parent for the rest of the blocks.
>>>
>>
>> This is the only item remain before I generate an patch with just the
>> memory controller. Most of the code that I see are actually an
>> interrupt controller HW. As it is just an interrupt mask and status
>> registers, is there an example in Linux that I can model after? Also,
>> I am not quite convince as to why we can't just share the interrupt
>> and request it by each memory controller?
>
> Pretty much anything that calls irq_set_chained_handler. If you don't
> need to touch the shared registers and can set them up once at boot
> time, then you could just do shared irq handlers. But if you have to
> clear the interrupt within the PCP, you need a demuxer.
>

Thanks Rob. I looked through the entire driver. Interrupts are cleared
at the source. Therefore, the next version will just register for the
shared interrupt, as it does currently.

-Loc
diff mbox

Patch

diff --git a/Documentation/devicetree/bindings/edac/apm-xgene-edac.txt b/Documentation/devicetree/bindings/edac/apm-xgene-edac.txt
new file mode 100644
index 0000000..548299a
--- /dev/null
+++ b/Documentation/devicetree/bindings/edac/apm-xgene-edac.txt
@@ -0,0 +1,107 @@ 
+* APM X-Gene SoC EDAC nodes
+
+EDAC nodes are defined to describe on-chip error detection and correction.
+There are four types of EDAC:
+
+  memory controller	- Memory controller
+  PMD (L1/L2)		- Processor module unit (PMD) L1/L2 cache
+  L3			- CPU L3 cache
+  SoC			- SoC IP such as SATA, Ethernet, and etc
+
+The following section describes the memory controller DT node binding.
+
+Required properties:
+- compatible		: Shall be "apm,xgene-edac-mc".
+- regmap-pcp		: Regmap of the PCP resource.
+- regmap-csw		: Regmap of the CSW resource.
+- regmap-mcba		: Regmap of the MCB-A resource.
+- regmap-mcbb		: Regmap of the MCB-B resource.
+- reg			: First resource shall be the MCU resource.
+- interrupts            : Interrupt-specifier for MCU error IRQ(s).
+
+The following section describes the L1/L2 DT node binding.
+
+- compatible		: Shall be "apm,xgene-edac-pmd".
+- regmap-pcp		: Regmap of the PCP resource.
+- regmap-efuse		: Regmap of the PMD efuse resource.
+- reg			: First resource shall be the PMD resource.
+- interrupts            : Interrupt-specifier for PMD error IRQ(s).
+
+The following section describes the L3 DT node binding.
+
+- compatible		: Shall be "apm,xgene-edac-l3".
+- regmap-pcp		: Regmap of the PCP resource.
+- reg			: First resource shall be the L3 resource.
+- interrupts            : Interrupt-specifier for L3 error IRQ(s).
+
+The following section describes the SoC DT node binding.
+
+- compatible		: Shall be "apm,xgene-edac-soc"".
+- regmap-pcp		: Regmap of the PCP resource.
+- reg			: First resource shall be the SoC resource.
+			  Second resource shall be the register bus resource.
+- interrupts		: Interrupt-specifier for SoC error IRQ(s).
+
+Example:
+	efuse: efuse@1054a000 {
+		compatible = "syscon";
+		reg = <0x0 0x1054a000 0x0 0x20>;
+	};
+
+	pcperror: pcperror@78800000 {
+		compatible = "syscon";
+		reg = <0x0 0x78800000 0x0 0x100>;
+	};
+
+	csw: csw@7e200000 {
+		compatible = "syscon";
+		reg = <0x0 0x7e200000 0x0 0x1000>;
+	};
+
+	mcba: mcba@7e700000 {
+		compatible = "syscon";
+		reg = <0x0 0x7e700000 0x0 0x1000>;
+	};
+
+	mcbb: mcbb@7e720000 {
+		compatible = "syscon";
+		reg = <0x0 0x7e720000 0x0 0x1000>;
+	};
+
+	edacmc0: edacmc0@7e800000 {
+		compatible = "apm,xgene-edac-mc";
+		regmap-pcp = <&pcperror>;
+		regmap-csw = <&csw>;
+		regmap-mcba = <&mcba>;
+		regmap-mcbb = <&mcbb>;
+		reg = <0x0 0x7e800000 0x0 0x1000>;
+		interrupts = <0x0 0x20 0x4>,
+			     <0x0 0x21 0x4>;
+	};
+
+	edacpmd0: edacpmd0@7c000000 {
+		compatible = "apm,xgene-edac-pmd";
+		regmap-pcp = <&pcperror>;
+		regmap-efuse = <&efuse>;
+		reg = <0x0 0x7c000000 0x0 0x200000>;
+		interrupts = <0x0 0x20 0x4>,
+			     <0x0 0x21 0x4>;
+	};
+
+	edacl3: edacl3@7e600000 {
+		compatible = "apm,xgene-edac-l3";
+		regmap-pcp = <&pcperror>;
+		reg = <0x0 0x7e600000 0x0 0x1000>;
+		interrupts = <0x0 0x20 0x4>,
+			     <0x0 0x21 0x4>;
+	};
+
+	edacsoc: edacsoc@7e930000 {
+		compatible = "apm,xgene-edac-soc";
+		regmap-pcp = <&pcperror>;
+		reg = <0x0 0x7e930000 0x0 0x1000>,
+		      <0x0 0x7e000000 0x0 0x1000>;
+		interrupts = <0x0 0x20 0x4>,
+			     <0x0 0x21 0x4>,
+			     <0x0 0x27 0x4>;
+	};