[v2,2/3] soc: sifive: Add SiFive specific Cadence DDR controller driver

Message ID: 1599457679-8947-3-git-send-email-yash.shah@sifive.com
State: New, archived
Series: SiFive DDR controller and EDAC support

Commit Message

Yash Shah Sept. 7, 2020, 5:47 a.m. UTC
Add a driver to manage the Cadence DDR controller present on SiFive SoCs.
At present the driver manages the EDAC feature of the DDR controller.
Additional features may be added to the driver in the future to control
other aspects of the DDR controller.

Signed-off-by: Yash Shah <yash.shah@sifive.com>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
---
 drivers/soc/sifive/Kconfig      |   6 ++
 drivers/soc/sifive/Makefile     |   3 +-
 drivers/soc/sifive/sifive_ddr.c | 207 ++++++++++++++++++++++++++++++++++++++++
 include/soc/sifive/sifive_ddr.h |  73 ++++++++++++++
 4 files changed, 288 insertions(+), 1 deletion(-)
 create mode 100644 drivers/soc/sifive/sifive_ddr.c
 create mode 100644 include/soc/sifive/sifive_ddr.h

Comments

Randy Dunlap Sept. 7, 2020, 5:54 a.m. UTC | #1
On 9/6/20 10:47 PM, Yash Shah wrote:
> diff --git a/drivers/soc/sifive/Kconfig b/drivers/soc/sifive/Kconfig
> index 58cf8c4..f41d8fe 100644
> --- a/drivers/soc/sifive/Kconfig
> +++ b/drivers/soc/sifive/Kconfig
> @@ -7,4 +7,10 @@ config SIFIVE_L2
>  	help
>  	  Support for the L2 cache controller on SiFive platforms.
>  
> +config SIFIVE_DDR
> +	bool "Sifive DDR controller driver"
> +	help
> +	  Support for the management of cadence DDR controller on SiFive

	                                Cadence

> +	  platforms.
> +
>  endif
Christoph Hellwig Sept. 7, 2020, 6:11 a.m. UTC | #2
On Mon, Sep 07, 2020 at 11:17:58AM +0530, Yash Shah wrote:
> Add a driver to manage the Cadence DDR controller present on SiFive SoCs
> At present the driver manages the EDAC feature of the DDR controller.
> Additional features may be added to the driver in future to control
> other aspects of the DDR controller.

So if this is a generic(ish) Cadence IP block shouldn't it be named
Cadence and made generic?  Or is the frontend somehow SiFive specific?
Palmer Dabbelt Sept. 9, 2020, 3:12 a.m. UTC | #3
On Sun, 06 Sep 2020 23:11:26 PDT (-0700), Christoph Hellwig wrote:
> On Mon, Sep 07, 2020 at 11:17:58AM +0530, Yash Shah wrote:
>> Add a driver to manage the Cadence DDR controller present on SiFive SoCs
>> At present the driver manages the EDAC feature of the DDR controller.
>> Additional features may be added to the driver in future to control
>> other aspects of the DDR controller.
>
> So if this is a generic(ish) Cadence IP block shouldn't it be named
> Cadence and made generic?  Or is the frontend somehow SiFive specific?

For some reason I thought we had a SiFive-specific interface to this, but I may
have gotten that confused with something else as it's been a while.  Someone
from SiFive would probably have a better idea, but it looks like the person I'd
ask isn't there anymore, so I'm all out of options ;)

It looks like there was a very similar driver posted by Dhananjay Kangude from
Cadence in April: https://lkml.org/lkml/2020/4/6/358 .  Some of the register
definitions seem to be different, but the code I looked at is very similar so
there's at least some bits that could be shared.  I found a v4 of that patch
set, but that was back in May: https://lkml.org/lkml/2020/5/11/912 .  It
alludes to a v5, but I can't find one.  I've added Dhananjay, maybe he knows
what's up?

I don't know enough about the block to know what the subtle difference in
register names/offsets means.  They look properly jumbled up (i.e., not just an
offset), so maybe there's just different versions or that's the SiFive-specific
part I had bouncing around my head?  Either way, it seems like one driver with
some simple configuration could handle both of these -- either sticking the
offsets in the DT (if they're going to be different everywhere) or by coming up
with some version sort of thing (if there's a handful of these).
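
(Illustrative sketch of the second option only; the struct, names, and the
extra match entry are hypothetical, while the two offsets are the ones from the
header in this patch.)

#include <linux/of_device.h>
#include <linux/types.h>

/* Per-implementation register offsets keyed off the compatible string. */
struct ddr_reg_layout {
	u32 ecc_c_addr_l;
	u32 ecc_c_synd;
};

static const struct ddr_reg_layout sifive_fu540_regs = {
	.ecc_c_addr_l	= 0x190,
	.ecc_c_synd	= 0x194,
};

static const struct of_device_id cdns_ddr_of_match[] = {
	{ .compatible = "sifive,fu540-c000-ddr", .data = &sifive_fu540_regs },
	/* other Cadence integrations would add their own layout tables here */
	{ }
};

/* In probe, pick the layout for this SoC: */
/*	const struct ddr_reg_layout *regs = of_device_get_match_data(&pdev->dev); */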

I'm now also a bit worried about the provenance of this code.  The two drivers
are eerily similar -- for example, the variable definitions in handle_ce()

       u64 err_c_addr = 0x0;
       u64 err_c_data = 0x0;
       u32 err_c_synd, err_c_id;
       u32 sig_val_l, sig_val_h;

are exactly the same.
Yash Shah Sept. 9, 2020, 3:56 a.m. UTC | #4
> -----Original Message-----
> From: Palmer Dabbelt <palmer@dabbelt.com>
> Sent: 09 September 2020 08:42
> To: Christoph Hellwig <hch@infradead.org>; dkangude@cadence.com
> Cc: Yash Shah <yash.shah@openfive.com>; robh+dt@kernel.org; Paul
> Walmsley ( Sifive) <paul.walmsley@sifive.com>; bp@alien8.de;
> mchehab@kernel.org; tony.luck@intel.com; devicetree@vger.kernel.org;
> aou@eecs.berkeley.edu; linux-kernel@vger.kernel.org; Sachin Ghadi
> <sachin.ghadi@openfive.com>; rrichter@marvell.com;
> james.morse@arm.com; linux-riscv@lists.infradead.org; linux-
> edac@vger.kernel.org
> Subject: Re: [PATCH v2 2/3] soc: sifive: Add SiFive specific Cadence DDR
> controller driver
> 
> On Sun, 06 Sep 2020 23:11:26 PDT (-0700), Christoph Hellwig wrote:
> > On Mon, Sep 07, 2020 at 11:17:58AM +0530, Yash Shah wrote:
> >> Add a driver to manage the Cadence DDR controller present on SiFive
> >> SoCs At present the driver manages the EDAC feature of the DDR
> controller.
> >> Additional features may be added to the driver in future to control
> >> other aspects of the DDR controller.
> >
> > So if this is a generic(ish) Cadence IP block shouldn't it be named
> > Cadence and made generic?  Or is the frontend somehow SiFive specific?
> 
> For some reason I thought we had a SiFive-specific interface to this, but I may
> have gotten that confused with something else as it's been a while.  Someone
> from SiFive would probably have a better idea, but it looks like the person I'd
> ask isn't thereany more so I'm all out of options ;)
> 
> It looks like there was a very similar driver posted by Dhananjay Kangude
> from Cadence in April: https://lkml.org/lkml/2020/4/6/358 .  Some of the
> register definitions seem to be different, but the code I looked at is very
> similar so there's at least some bits that could be shared.  I found a v4 of that
> patch set, but that was back in May: https://lkml.org/lkml/2020/5/11/912 .  It
> alludes to a v5, but I can't find one.  I've added Dhananjay, maybe he knows
> what's up?
> 

I consulted with Dhananjay before posting this patch. From what I understood, Cadence provides highly configurable and customised DDR IP blocks based on the SoC vendor's needs. This impacts the register configuration and probably the offsets too.
I had also referred to the v4 patch posted by Dhananjay mentioned above and found that the register offsets do not match those of the Cadence DDR IP in the SiFive SoC. Therefore it seems this DDR IP block has SiFive-specific configurations, and hence this SiFive-specific driver.

> I don't know enough about the block to know if the subtle difference in
> register names/offsets means.  They look properly jumbled up (ie, not just an
> offset), so maybe there's just different versions or that's the SiFive-specific
> part I had bouncing around my head?  Either way, it seems like one driver
> with some simple configuration could handle both of these -- either sticking
> the offsets in the DT (if they're going to be different everywhere) or by
> coming up with some version sort of thing (if there's a handful of these).
> 
> I'm now also a bit worried about the provenace of this code.  The two drivers
> are errily similar -- for example, the variable definitions in handle_ce()
> 
>        u64 err_c_addr = 0x0;
>        u64 err_c_data = 0x0;
>        u32 err_c_synd, err_c_id;
>        u32 sig_val_l, sig_val_h;
> 
> are exactly the same.

I apologize, I forgot to mention it in the cover letter. I have based my work on Dhananjay's v4 patch [0].

- Yash

[0]: https://lkml.org/lkml/2020/4/24/183
Christoph Hellwig Sept. 9, 2020, 6 a.m. UTC | #5
On Tue, Sep 08, 2020 at 08:12:16PM -0700, Palmer Dabbelt wrote:
> I don't know enough about the block to know if the subtle difference in
> register names/offsets means.  They look properly jumbled up (ie, not just an
> offset), so maybe there's just different versions or that's the SiFive-specific
> part I had bouncing around my head?  Either way, it seems like one driver with
> some simple configuration could handle both of these -- either sticking the
> offsets in the DT (if they're going to be different everywhere) or by coming up
> with some version sort of thing (if there's a handful of these).

regmap can be used to handle non-uniform register layouts for the same
functionality.
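
For illustration only, a minimal sketch of that approach with regmap fields --
the layout struct and names are hypothetical; the two REG_FIELD() offsets and
bit ranges correspond to the ECC_C_ADDR_L/ECC_C_SYND definitions in the header
of this patch:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/regmap.h>

/* Hypothetical per-implementation register layout. */
struct cdns_ddr_layout {
	struct reg_field ecc_c_addr_l;	/* corrected-error address, low */
	struct reg_field ecc_c_synd;	/* corrected-error syndrome */
};

static const struct cdns_ddr_layout sifive_fu540_layout = {
	.ecc_c_addr_l	= REG_FIELD(0x190, 0, 31),
	.ecc_c_synd	= REG_FIELD(0x194, 8, 15),
};

static int cdns_ddr_read_ce_info(struct device *dev, struct regmap *map,
				 const struct cdns_ddr_layout *l,
				 unsigned int *addr_l, unsigned int *synd)
{
	struct regmap_field *f_addr, *f_synd;
	int ret;

	/* One driver body; only the reg_field tables differ per SoC. */
	f_addr = devm_regmap_field_alloc(dev, map, l->ecc_c_addr_l);
	f_synd = devm_regmap_field_alloc(dev, map, l->ecc_c_synd);
	if (IS_ERR(f_addr) || IS_ERR(f_synd))
		return -ENOMEM;

	ret = regmap_field_read(f_addr, addr_l);
	if (ret)
		return ret;

	return regmap_field_read(f_synd, synd);
}

The regmap itself could come from devm_regmap_init_mmio() over the same MMIO
resource the current driver ioremaps.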
Palmer Dabbelt Sept. 9, 2020, 8:31 p.m. UTC | #6
On Tue, 08 Sep 2020 23:00:45 PDT (-0700), Christoph Hellwig wrote:
> On Tue, Sep 08, 2020 at 08:12:16PM -0700, Palmer Dabbelt wrote:
>> I don't know enough about the block to know if the subtle difference in
>> register names/offsets means.  They look properly jumbled up (ie, not just an
>> offset), so maybe there's just different versions or that's the SiFive-specific
>> part I had bouncing around my head?  Either way, it seems like one driver with
>> some simple configuration could handle both of these -- either sticking the
>> offsets in the DT (if they're going to be different everywhere) or by coming up
>> with some version sort of thing (if there's a handful of these).
>
> regmap can be used to handle non-uniform register layouts for the same
> functionality.

Ah, cool, I hadn't seen that before.  That seems like the way to go if this is
truly an implementation-specific register mapping.  As I was falling asleep last
night I remembered that we did end up with implementation-specific register
maps for some of the IP we integrated.  That was usually the case for IP where
we had some signals that we just didn't know what to do with, and while I know
the DDR integration was a real trip I'm not sure if that's where these
registers came from.

Hopefully someone who has better access to these hardware implementations can
comment.
Dhananjay Vilasrao Kangude Sept. 17, 2020, 9:56 a.m. UTC | #7
> -----Original Message-----
> From: Palmer Dabbelt <palmer@dabbelt.com>
> Sent: Wednesday, September 9, 2020 8:42 AM
> To: Christoph Hellwig <hch@infradead.org>; Dhananjay Vilasrao Kangude
> <dkangude@cadence.com>
> Cc: yash.shah@sifive.com; robh+dt@kernel.org; Paul Walmsley
> <paul.walmsley@sifive.com>; bp@alien8.de; mchehab@kernel.org;
> tony.luck@intel.com; devicetree@vger.kernel.org; aou@eecs.berkeley.edu;
> linux-kernel@vger.kernel.org; sachin.ghadi@sifive.com;
> rrichter@marvell.com; james.morse@arm.com; linux-
> riscv@lists.infradead.org; linux-edac@vger.kernel.org
> Subject: Re: [PATCH v2 2/3] soc: sifive: Add SiFive specific Cadence DDR
> controller driver
> 
> On Sun, 06 Sep 2020 23:11:26 PDT (-0700), Christoph Hellwig wrote:
> > On Mon, Sep 07, 2020 at 11:17:58AM +0530, Yash Shah wrote:
> >> Add a driver to manage the Cadence DDR controller present on SiFive
> >> SoCs At present the driver manages the EDAC feature of the DDR
> controller.
> >> Additional features may be added to the driver in future to control
> >> other aspects of the DDR controller.
> >
> > So if this is a generic(ish) Cadence IP block shouldn't it be named
> > Cadence and made generic?  Or is the frontend somehow SiFive specific?
> 
> For some reason I thought we had a SiFive-specific interface to this, but I may
> have gotten that confused with something else as it's been a while.  Someone
> from SiFive would probably have a better idea, but it looks like the person I'd
> ask isn't thereany more so I'm all out of options ;)
> 
> It looks like there was a very similar driver posted by Dhananjay Kangude
> from Cadence in April: https://lkml.org/lkml/2020/4/6/358 .  Some of the
> register definitions seem to be different, but the code I looked at is very
> similar so there's at least some bits that could be shared.  I found a v4 of
> that patch set, but that was back in May: https://lkml.org/lkml/2020/5/11/912 .
> It alludes to a v5, but I can't find one.  I've added Dhananjay, maybe he knows
> what's up?
> 
> I don't know enough about the block to know if the subtle difference in
> register names/offsets means.  They look properly jumbled up (ie, not just an
> offset), so maybe there's just different versions or that's the SiFive-specific
> part I had bouncing around my head?  Either way, it seems like one driver
> with some simple configuration could handle both of these -- either sticking
> the offsets in the DT (if they're going to be different everywhere) or by
> coming up with some version sort of thing (if there's a handful of these).
> 
> I'm now also a bit worried about the provenace of this code.  The two drivers
> are errily similar -- for example, the variable definitions in handle_ce()
> 
>        u64 err_c_addr = 0x0;
>        u64 err_c_data = 0x0;
>        u32 err_c_synd, err_c_id;
>        u32 sig_val_l, sig_val_h;
> 
> are exactly the same.
[Dhananjay Kangude]
 Hi Palmer,
	Sorry for the delayed reply.
	I have been expecting new changes to the hardware IP over the last couple
of months, which is why I haven't upstreamed the v5 patch yet. The Cadence
driver version is more generic, targeting Cadence DDR controllers that could be
part of other SoCs too. I would suggest that Yash add the SiFive-specific
changes as a follow-up patch once the Cadence DDR controller driver gets
upstreamed. I will send v5 in the coming days.

Thanks,
Dhananjay

Patch

diff --git a/drivers/soc/sifive/Kconfig b/drivers/soc/sifive/Kconfig
index 58cf8c4..f41d8fe 100644
--- a/drivers/soc/sifive/Kconfig
+++ b/drivers/soc/sifive/Kconfig
@@ -7,4 +7,10 @@  config SIFIVE_L2
 	help
 	  Support for the L2 cache controller on SiFive platforms.
 
+config SIFIVE_DDR
+	bool "Sifive DDR controller driver"
+	help
+	  Support for the management of cadence DDR controller on SiFive
+	  platforms.
+
 endif
diff --git a/drivers/soc/sifive/Makefile b/drivers/soc/sifive/Makefile
index b5caff7..b4acb5c 100644
--- a/drivers/soc/sifive/Makefile
+++ b/drivers/soc/sifive/Makefile
@@ -1,3 +1,4 @@ 
 # SPDX-License-Identifier: GPL-2.0
 
-obj-$(CONFIG_SIFIVE_L2)	+= sifive_l2_cache.o
+obj-$(CONFIG_SIFIVE_L2)		+= sifive_l2_cache.o
+obj-$(CONFIG_SIFIVE_DDR)	+= sifive_ddr.o
diff --git a/drivers/soc/sifive/sifive_ddr.c b/drivers/soc/sifive/sifive_ddr.c
new file mode 100644
index 0000000..b1b421c
--- /dev/null
+++ b/drivers/soc/sifive/sifive_ddr.c
@@ -0,0 +1,207 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * SiFive specific cadence DDR controller Driver
+ *
+ * Copyright (C) 2019-2020 SiFive, Inc.
+ *
+ */
+#include <linux/init.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/module.h>
+#include <linux/uaccess.h>
+#include <linux/delay.h>
+#include <soc/sifive/sifive_ddr.h>
+
+static ATOMIC_NOTIFIER_HEAD(ddr_err_chain);
+
+int register_sifive_ddr_error_notifier(struct notifier_block *nb)
+{
+	return atomic_notifier_chain_register(&ddr_err_chain, nb);
+}
+EXPORT_SYMBOL_GPL(register_sifive_ddr_error_notifier);
+
+int unregister_sifive_ddr_error_notifier(struct notifier_block *nb)
+{
+	return atomic_notifier_chain_unregister(&ddr_err_chain, nb);
+}
+EXPORT_SYMBOL_GPL(unregister_sifive_ddr_error_notifier);
+
+static void handle_ce(struct sifive_ddr_priv *pv)
+{
+	u64 err_c_addr = 0x0;
+	u64 err_c_data = 0x0;
+	u32 err_c_synd, err_c_id;
+	u32 sig_val_l, sig_val_h;
+
+	sig_val_l = readl(pv->reg + ECC_C_ADDR_L_REG);
+	sig_val_h = (readl(pv->reg + ECC_C_ADDR_H_REG) &
+		     ECC_ADDR_H_MASK);
+	err_c_addr = (((ulong)sig_val_h << CTL_REG_WIDTH_SHIFT) | sig_val_l);
+
+	sig_val_l = readl(pv->reg + ECC_C_DATA_L_REG);
+	sig_val_h = readl(pv->reg + ECC_C_DATA_H_REG);
+	err_c_data = (((ulong)sig_val_h << CTL_REG_WIDTH_SHIFT) | sig_val_l);
+
+	err_c_id = ((readl(pv->reg + ECC_U_C_ID_REG) &
+		     ECC_C_ID_MASK) >> ECC_C_ID_SHIFT);
+
+	err_c_synd = ((readl(pv->reg + ECC_C_SYND_REG) &
+		      ECC_SYND_MASK) >> ECC_SYND_SHIFT);
+
+	pv->error_count = 1;
+	pv->page_frame_number = err_c_addr >> PAGE_SHIFT;
+	pv->offset_in_page = err_c_addr & ~PAGE_MASK;
+	pv->syndrome = err_c_synd;
+	pv->top_layer = 0;
+	pv->mid_layer = 0;
+	pv->low_layer = -1;
+
+	atomic_notifier_call_chain(&ddr_err_chain, SIFIVE_DDR_ERR_TYPE_CE, pv);
+}
+
+static void handle_ue(struct sifive_ddr_priv *pv)
+{
+	u64 err_u_addr = 0x0;
+	u64 err_u_data = 0x0;
+	u32 err_u_synd, err_u_id;
+	u32 sig_val_l, sig_val_h;
+
+	sig_val_l = readl(pv->reg + ECC_U_ADDR_L_REG);
+	sig_val_h = (readl(pv->reg + ECC_U_ADDR_H_REG) &
+		     ECC_ADDR_H_MASK);
+	err_u_addr = (((ulong)sig_val_h << CTL_REG_WIDTH_SHIFT) | sig_val_l);
+
+	sig_val_l = readl(pv->reg + ECC_U_DATA_L_REG);
+	sig_val_h = readl(pv->reg + ECC_U_DATA_H_REG);
+	err_u_data = (((ulong)sig_val_h << CTL_REG_WIDTH_SHIFT) | sig_val_l);
+
+	err_u_id = ((readl(pv->reg + ECC_U_C_ID_REG) &
+		    ECC_U_ID_MASK) >> ECC_U_ID_SHIFT);
+
+	err_u_synd = ((readl(pv->reg + ECC_U_SYND_REG) &
+		      ECC_SYND_MASK) >> ECC_SYND_SHIFT);
+
+	pv->error_count = 1;
+	pv->page_frame_number = err_u_addr >> PAGE_SHIFT;
+	pv->offset_in_page = err_u_addr & ~PAGE_MASK;
+	pv->syndrome = err_u_synd;
+	pv->top_layer = 0;
+	pv->mid_layer = 0;
+	pv->low_layer = -1;
+
+	atomic_notifier_call_chain(&ddr_err_chain, SIFIVE_DDR_ERR_TYPE_UE, pv);
+}
+
+static irqreturn_t ecc_isr(int irq, void *ptr)
+{
+	struct sifive_ddr_priv *pv = ptr;
+	u32 intr_status;
+	u32 val;
+
+	/* Check the intr status and confirm ECC error intr */
+	intr_status = readl(pv->reg + ECC_CTL_INT_STATUS_REG);
+
+	dev_dbg(pv->dev, "InterruptStatus : 0x%x\n", intr_status);
+	val = intr_status & (ECC_INT_CE_UE_MASK);
+	if (!((val & ECC_CE_INTR_MASK) || (val & ECC_UE_INTR_MASK)))
+		return IRQ_NONE;
+
+	if (val & ECC_CE_INTR_MASK) {
+		handle_ce(pv);
+
+		/* Clear the interrupt source */
+		if (val & ECC_INT_CE_EVENT)
+			writel(ECC_INT_CE_EVENT,
+			       pv->reg + ECC_CTL_INT_ACK_REG);
+		else if (val & ECC_INT_SECOND_CE_EVENT)
+			writel(ECC_INT_SECOND_CE_EVENT,
+			       pv->reg + ECC_CTL_INT_ACK_REG);
+		else
+			dev_err(pv->dev, "Failed to clear IRQ\n");
+	}
+
+	if (val & ECC_UE_INTR_MASK) {
+		handle_ue(pv);
+
+		/* Clear the interrupt source */
+		if (val & ECC_INT_UE_EVENT)
+			writel(ECC_INT_UE_EVENT,
+			       pv->reg + ECC_CTL_INT_ACK_REG);
+		else if (val & ECC_INT_SECOND_UE_EVENT)
+			writel(ECC_INT_SECOND_UE_EVENT,
+			       pv->reg + ECC_CTL_INT_ACK_REG);
+		else
+			dev_err(pv->dev, "Failed to clear IRQ\n");
+	}
+
+	return IRQ_HANDLED;
+}
+
+static int sifive_ddr_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct sifive_ddr_priv *priv;
+	struct resource *res;
+	int ret, irq;
+
+	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	priv->dev = dev;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	priv->reg = devm_ioremap_resource(dev, res);
+	if (IS_ERR(priv->reg))
+		return PTR_ERR(priv->reg);
+
+	irq = platform_get_irq(pdev, 0);
+	ret = devm_request_irq(dev, irq, ecc_isr, 0, "sifive-fu540-ddr", priv);
+	if (ret) {
+		dev_err(dev, "request_irq failed\n");
+		return ret;
+	}
+
+	/* Enable & set CE/UE Interrupts for DDR4 Controller */
+	writel(~(ECC_INT_CE_UE_MASK), priv->reg + ECC_CTL_INT_MASK_REG);
+
+	platform_set_drvdata(pdev, priv);
+	dev_info(dev, "SiFive DDR probe successful\n");
+
+	return 0;
+}
+
+static int sifive_ddr_remove(struct platform_device *pdev)
+{
+	struct sifive_ddr_priv *priv = platform_get_drvdata(pdev);
+
+	/* Disable All ECC Interrupts for DDR4 Controller */
+	writel(ECC_INT_CE_UE_MASK, priv->reg + ECC_CTL_INT_MASK_REG);
+
+	return 0;
+}
+
+static const struct of_device_id sifive_ddr_of_match[] = {
+	{ .compatible = "sifive,fu540-c000-ddr"},
+	{},
+};
+
+MODULE_DEVICE_TABLE(of, sifive_ddr_of_match);
+
+static struct platform_driver sifive_ddr_driver = {
+	.driver = {
+		.name = "sifive-ddr",
+		.of_match_table = sifive_ddr_of_match,
+	},
+	.probe = sifive_ddr_probe,
+	.remove = sifive_ddr_remove,
+};
+
+module_platform_driver(sifive_ddr_driver);
+
+MODULE_AUTHOR("SiFive");
+MODULE_DESCRIPTION("SiFive DDR Controller driver");
+MODULE_LICENSE("GPL v2");
diff --git a/include/soc/sifive/sifive_ddr.h b/include/soc/sifive/sifive_ddr.h
new file mode 100644
index 0000000..2ff8623
--- /dev/null
+++ b/include/soc/sifive/sifive_ddr.h
@@ -0,0 +1,73 @@ 
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * SiFive DDR Controller header file
+ *
+ */
+
+#ifndef __SOC_SIFIVE_DDR_H
+#define __SOC_SIFIVE_DDR_H
+
+int register_sifive_ddr_error_notifier(struct notifier_block *nb);
+int unregister_sifive_ddr_error_notifier(struct notifier_block *nb);
+
+struct sifive_ddr_priv {
+	void __iomem *reg;
+	struct device *dev;
+	u16 error_count;
+	unsigned long page_frame_number;
+	unsigned long offset_in_page;
+	unsigned long syndrome;
+	int top_layer;
+	int mid_layer;
+	int low_layer;
+};
+
+#define SIFIVE_DDR_ERR_TYPE_CE	(0)
+#define SIFIVE_DDR_ERR_TYPE_UE	(1)
+#define SIFIVE_DDR_EDAC_GRAIN	(1)
+#define SIFIVE_MEM_TYPE_DDR4	0xA
+#define SIFIVE_DDR_WIDTH_16	(2)
+#define SIFIVE_DDR_WIDTH_32	(1)
+#define SIFIVE_DDR_WIDTH_64	(0)
+
+#define DDR_CTL_MEM_TYPE_REG	0x000
+#define DDR_CTL_MEM_WIDTH_REG	0x004
+#define ECC_CTL_CONTROL_REG	0x174
+#define ECC_U_ADDR_L_REG	0x180
+#define ECC_U_ADDR_H_REG	0x184
+#define ECC_U_DATA_L_REG	0x188
+#define ECC_U_DATA_H_REG	0x18c
+
+#define ECC_C_ADDR_L_REG	0x190
+#define ECC_C_ADDR_H_REG	0x194
+#define ECC_C_DATA_L_REG	0x198
+#define ECC_C_DATA_H_REG	0x19c
+#define ECC_U_C_ID_REG		0x1A0
+#define ECC_CTL_INT_STATUS_REG	0x210
+#define ECC_CTL_INT_ACK_REG	0x218
+#define ECC_CTL_INT_MASK_REG	0x220
+#define ECC_C_SYND_REG		ECC_C_ADDR_H_REG
+#define ECC_U_SYND_REG		ECC_U_ADDR_H_REG
+
+#define ECC_CTL_MTYPE_MASK	GENMASK(11, 8)
+#define CTL_MEM_MAX_WIDTH_MASK	GENMASK(31, 24)
+#define ECC_ADDR_H_MASK		GENMASK(3, 0)
+#define ECC_INT_CE_UE_MASK	GENMASK(6, 3)
+#define ECC_CE_INTR_MASK	GENMASK(4, 3)
+#define ECC_UE_INTR_MASK	GENMASK(6, 5)
+#define ECC_INT_CE_EVENT	BIT(3)
+#define ECC_INT_SECOND_CE_EVENT	BIT(4)
+#define ECC_INT_UE_EVENT	BIT(5)
+#define ECC_INT_SECOND_UE_EVENT	BIT(6)
+#define ECC_CTL_ECC_ENABLE	BIT(16)
+
+#define ECC_C_ID_MASK		GENMASK(28, 16)
+#define ECC_U_ID_MASK		GENMASK(12, 0)
+#define ECC_C_ID_SHIFT		(16)
+#define ECC_U_ID_SHIFT		(0)
+#define ECC_SYND_MASK		GENMASK(15, 8)
+#define ECC_SYND_SHIFT		(8)
+
+#define CTL_REG_WIDTH_SHIFT	(32)
+
+#endif /* __SOC_SIFIVE_DDR_H */
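
For reference, a minimal consumer sketch (not part of the patch) showing how
the notifier interface declared above might be used, presumably by the EDAC
glue elsewhere in this series; the callback body and logging are illustrative
only:

#include <linux/notifier.h>
#include <linux/printk.h>
#include <soc/sifive/sifive_ddr.h>

static int sifive_ddr_err_event(struct notifier_block *nb,
				unsigned long event, void *data)
{
	struct sifive_ddr_priv *priv = data;

	/* The DDR driver fills in priv before firing the chain from its ISR. */
	if (event == SIFIVE_DDR_ERR_TYPE_CE)
		pr_info("DDR CE: pfn %lx offset %lx syndrome %lx\n",
			priv->page_frame_number, priv->offset_in_page,
			priv->syndrome);
	else if (event == SIFIVE_DDR_ERR_TYPE_UE)
		pr_err("DDR UE: pfn %lx offset %lx syndrome %lx\n",
		       priv->page_frame_number, priv->offset_in_page,
		       priv->syndrome);

	return NOTIFY_OK;
}

static struct notifier_block sifive_ddr_err_nb = {
	.notifier_call = sifive_ddr_err_event,
};

/* At consumer probe/remove time: */
/*	register_sifive_ddr_error_notifier(&sifive_ddr_err_nb);	*/
/*	unregister_sifive_ddr_error_notifier(&sifive_ddr_err_nb);	*/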