From patchwork Fri Oct 22 02:40:17 2021
X-Patchwork-Submitter: Xiangsheng Hou
X-Patchwork-Id: 12576939
From: Xiangsheng Hou
Subject: [RFC,v3 1/5] mtd: ecc: move mediatek HW ECC driver
Date: Fri, 22 Oct 2021 10:40:17 +0800
Message-ID: <20211022024021.14665-2-xiangsheng.hou@mediatek.com>
In-Reply-To: <20211022024021.14665-1-xiangsheng.hou@mediatek.com>
References: <20211022024021.14665-1-xiangsheng.hou@mediatek.com>
Move the Mediatek HW ECC driver source to the mtd/nand folder and its
header file to include/linux/mtd. The HW ECC driver can then be used by
both the Mediatek raw NAND and SPI NAND controller drivers.

Signed-off-by: Xiangsheng Hou
---
 drivers/mtd/nand/Kconfig                              | 9 +++++++++
 drivers/mtd/nand/Makefile                             | 1 +
 drivers/mtd/nand/{raw => }/mtk_ecc.c                  | 2 +-
 drivers/mtd/nand/raw/Kconfig                          | 1 +
 drivers/mtd/nand/raw/Makefile                         | 2 +-
 drivers/mtd/nand/raw/mtk_nand.c                       | 2 +-
 {drivers/mtd/nand/raw => include/linux/mtd}/mtk_ecc.h | 0
 7 files changed, 14 insertions(+), 3 deletions(-)
 rename drivers/mtd/nand/{raw => }/mtk_ecc.c (99%)
 rename {drivers/mtd/nand/raw => include/linux/mtd}/mtk_ecc.h (100%)

diff --git a/drivers/mtd/nand/Kconfig b/drivers/mtd/nand/Kconfig
index b40455234cbd..e657823329d2 100644
--- a/drivers/mtd/nand/Kconfig
+++ b/drivers/mtd/nand/Kconfig
@@ -46,6 +46,15 @@ config MTD_NAND_ECC_SW_BCH
	  ECC codes. They are used with NAND devices requiring more than 1 bit
	  of error correction.

+config MTD_NAND_ECC_MTK
+	bool "Mediatek hardware ECC engine"
+	select MTD_NAND_ECC
+	help
+	  This enables support for the Mediatek hardware ECC engine, which
+	  is used for error correction. The correction strength depends on
+	  the SoC. The ECC engine can be used with the Mediatek raw NAND
+	  and SPI NAND controller drivers.
+ endmenu endmenu diff --git a/drivers/mtd/nand/Makefile b/drivers/mtd/nand/Makefile index 1c0b46960eb1..66d1741fe7ff 100644 --- a/drivers/mtd/nand/Makefile +++ b/drivers/mtd/nand/Makefile @@ -10,3 +10,4 @@ obj-y += spi/ nandcore-$(CONFIG_MTD_NAND_ECC) += ecc.o nandcore-$(CONFIG_MTD_NAND_ECC_SW_HAMMING) += ecc-sw-hamming.o nandcore-$(CONFIG_MTD_NAND_ECC_SW_BCH) += ecc-sw-bch.o +nandcore-$(CONFIG_MTD_NAND_ECC_MTK) += mtk_ecc.o diff --git a/drivers/mtd/nand/raw/mtk_ecc.c b/drivers/mtd/nand/mtk_ecc.c similarity index 99% rename from drivers/mtd/nand/raw/mtk_ecc.c rename to drivers/mtd/nand/mtk_ecc.c index c437d97debb8..ce0f8b491e5d 100644 --- a/drivers/mtd/nand/raw/mtk_ecc.c +++ b/drivers/mtd/nand/mtk_ecc.c @@ -16,7 +16,7 @@ #include #include -#include "mtk_ecc.h" +#include #define ECC_IDLE_MASK BIT(0) #define ECC_IRQ_EN BIT(0) diff --git a/drivers/mtd/nand/raw/Kconfig b/drivers/mtd/nand/raw/Kconfig index 67b7cb67c030..c90bc166034b 100644 --- a/drivers/mtd/nand/raw/Kconfig +++ b/drivers/mtd/nand/raw/Kconfig @@ -362,6 +362,7 @@ config MTD_NAND_MTK tristate "MTK NAND controller" depends on ARCH_MEDIATEK || COMPILE_TEST depends on HAS_IOMEM + select MTD_NAND_ECC_MTK help Enables support for NAND controller on MTK SoCs. This controller is found on mt27xx, mt81xx, mt65xx SoCs. 
diff --git a/drivers/mtd/nand/raw/Makefile b/drivers/mtd/nand/raw/Makefile
index 2f97958c3a33..49d3946c166b 100644
--- a/drivers/mtd/nand/raw/Makefile
+++ b/drivers/mtd/nand/raw/Makefile
@@ -48,7 +48,7 @@ obj-$(CONFIG_MTD_NAND_SUNXI)	+= sunxi_nand.o
 obj-$(CONFIG_MTD_NAND_HISI504)	+= hisi504_nand.o
 obj-$(CONFIG_MTD_NAND_BRCMNAND)	+= brcmnand/
 obj-$(CONFIG_MTD_NAND_QCOM)	+= qcom_nandc.o
-obj-$(CONFIG_MTD_NAND_MTK)	+= mtk_ecc.o mtk_nand.o
+obj-$(CONFIG_MTD_NAND_MTK)	+= mtk_nand.o
 obj-$(CONFIG_MTD_NAND_MXIC)	+= mxic_nand.o
 obj-$(CONFIG_MTD_NAND_TEGRA)	+= tegra_nand.o
 obj-$(CONFIG_MTD_NAND_STM32_FMC2)	+= stm32_fmc2_nand.o

diff --git a/drivers/mtd/nand/raw/mtk_nand.c b/drivers/mtd/nand/raw/mtk_nand.c
index 5c5c92132287..7d8a6b35c102 100644
--- a/drivers/mtd/nand/raw/mtk_nand.c
+++ b/drivers/mtd/nand/raw/mtk_nand.c
@@ -17,7 +17,7 @@
 #include
 #include
 #include
-#include "mtk_ecc.h"
+#include <linux/mtd/mtk_ecc.h>

 /* NAND controller register definition */
 #define NFI_CNFG		(0x00)

diff --git a/drivers/mtd/nand/raw/mtk_ecc.h b/include/linux/mtd/mtk_ecc.h
similarity index 100%
rename from drivers/mtd/nand/raw/mtk_ecc.h
rename to include/linux/mtd/mtk_ecc.h

From patchwork Fri Oct 22 02:40:18 2021
X-Patchwork-Submitter: Xiangsheng Hou
X-Patchwork-Id: 12576943
From: Xiangsheng Hou
Subject: [RFC,v3 2/5] mtd: ecc: realize Mediatek HW ECC driver
Date: Fri, 22 Oct 2021 10:40:18 +0800
Message-ID: <20211022024021.14665-3-xiangsheng.hou@mediatek.com>
In-Reply-To: <20211022024021.14665-1-xiangsheng.hou@mediatek.com>
References: <20211022024021.14665-1-xiangsheng.hou@mediatek.com>

This v3 driver implements the Mediatek HW ECC engine for the pipelined case.
Signed-off-by: Xiangsheng Hou --- drivers/mtd/nand/core.c | 10 +- drivers/mtd/nand/ecc.c | 88 +++++++ drivers/mtd/nand/mtk_ecc.c | 488 ++++++++++++++++++++++++++++++++++++ include/linux/mtd/mtk_ecc.h | 38 +++ include/linux/mtd/nand.h | 11 + 5 files changed, 632 insertions(+), 3 deletions(-) diff --git a/drivers/mtd/nand/core.c b/drivers/mtd/nand/core.c index 5e13a03d2b32..b228b4d13b7a 100644 --- a/drivers/mtd/nand/core.c +++ b/drivers/mtd/nand/core.c @@ -232,7 +232,9 @@ static int nanddev_get_ecc_engine(struct nand_device *nand) nand->ecc.engine = nand_ecc_get_on_die_hw_engine(nand); break; case NAND_ECC_ENGINE_TYPE_ON_HOST: - pr_err("On-host hardware ECC engines not supported yet\n"); + nand->ecc.engine = nand_ecc_get_on_host_hw_engine(nand); + if (PTR_ERR(nand->ecc.engine) == -EPROBE_DEFER) + return -EPROBE_DEFER; break; default: pr_err("Missing ECC engine type\n"); @@ -252,7 +254,7 @@ static int nanddev_put_ecc_engine(struct nand_device *nand) { switch (nand->ecc.ctx.conf.engine_type) { case NAND_ECC_ENGINE_TYPE_ON_HOST: - pr_err("On-host hardware ECC engines not supported yet\n"); + nand_ecc_put_on_host_hw_engine(nand); break; case NAND_ECC_ENGINE_TYPE_NONE: case NAND_ECC_ENGINE_TYPE_SOFT: @@ -297,7 +299,9 @@ int nanddev_ecc_engine_init(struct nand_device *nand) /* Look for the ECC engine to use */ ret = nanddev_get_ecc_engine(nand); if (ret) { - pr_err("No ECC engine found\n"); + if (ret != -EPROBE_DEFER) + pr_err("No ECC engine found\n"); + return ret; } diff --git a/drivers/mtd/nand/ecc.c b/drivers/mtd/nand/ecc.c index 6c43dfda01d4..55d6946da9c3 100644 --- a/drivers/mtd/nand/ecc.c +++ b/drivers/mtd/nand/ecc.c @@ -96,7 +96,12 @@ #include #include #include +#include +#include +#include +static LIST_HEAD(on_host_hw_engines); +static DEFINE_MUTEX(on_host_hw_engines_mutex); /** * nand_ecc_init_ctx - Init the ECC engine context * @nand: the NAND device @@ -611,6 +616,89 @@ struct nand_ecc_engine *nand_ecc_get_on_die_hw_engine(struct nand_device *nand) } 
EXPORT_SYMBOL(nand_ecc_get_on_die_hw_engine);

+int nand_ecc_register_on_host_hw_engine(struct nand_ecc_engine *engine)
+{
+	struct nand_ecc_engine *item;
+
+	if (!engine)
+		return -ENOTSUPP;
+
+	/* Prevent multiple registrations of one engine */
+	list_for_each_entry(item, &on_host_hw_engines, node)
+		if (item == engine)
+			return 0;
+
+	mutex_lock(&on_host_hw_engines_mutex);
+	list_add_tail(&engine->node, &on_host_hw_engines);
+	mutex_unlock(&on_host_hw_engines_mutex);
+
+	return 0;
+}
+EXPORT_SYMBOL(nand_ecc_register_on_host_hw_engine);
+
+int nand_ecc_unregister_on_host_hw_engine(struct nand_ecc_engine *engine)
+{
+	if (!engine)
+		return -ENOTSUPP;
+
+	mutex_lock(&on_host_hw_engines_mutex);
+	list_del(&engine->node);
+	mutex_unlock(&on_host_hw_engines_mutex);
+
+	return 0;
+}
+EXPORT_SYMBOL(nand_ecc_unregister_on_host_hw_engine);
+
+struct nand_ecc_engine *nand_ecc_match_on_host_hw_engine(struct device *dev)
+{
+	struct nand_ecc_engine *item;
+
+	list_for_each_entry(item, &on_host_hw_engines, node)
+		if (item->dev == dev)
+			return item;
+
+	return NULL;
+}
+EXPORT_SYMBOL(nand_ecc_match_on_host_hw_engine);
+
+struct nand_ecc_engine *nand_ecc_get_on_host_hw_engine(struct nand_device *nand)
+{
+	struct nand_ecc_engine *engine = NULL;
+	struct device *dev = &nand->mtd.dev;
+	struct platform_device *pdev;
+	struct device_node *np;
+
+	if (list_empty(&on_host_hw_engines))
+		return NULL;
+
+	/* Check for an explicit nand-ecc-engine property */
+	np = of_parse_phandle(dev->of_node, "nand-ecc-engine", 0);
+	if (np) {
+		pdev = of_find_device_by_node(np);
+		if (!pdev) {
+			of_node_put(np);
+			return ERR_PTR(-EPROBE_DEFER);
+		}
+
+		engine = nand_ecc_match_on_host_hw_engine(&pdev->dev);
+		platform_device_put(pdev);
+		of_node_put(np);
+
+		if (!engine)
+			return ERR_PTR(-EPROBE_DEFER);
+	}
+
+	if (engine)
+		get_device(engine->dev);
+
+	return engine;
+}
+EXPORT_SYMBOL(nand_ecc_get_on_host_hw_engine);
+
+void nand_ecc_put_on_host_hw_engine(struct nand_device *nand)
+{
+	put_device(nand->ecc.engine->dev);
+} +EXPORT_SYMBOL(nand_ecc_put_on_host_hw_engine); + MODULE_LICENSE("GPL"); MODULE_AUTHOR("Miquel Raynal "); MODULE_DESCRIPTION("Generic ECC engine"); diff --git a/drivers/mtd/nand/mtk_ecc.c b/drivers/mtd/nand/mtk_ecc.c index ce0f8b491e5d..e0c971d6a443 100644 --- a/drivers/mtd/nand/mtk_ecc.c +++ b/drivers/mtd/nand/mtk_ecc.c @@ -16,6 +16,7 @@ #include #include +#include #include #define ECC_IDLE_MASK BIT(0) @@ -41,6 +42,9 @@ #define ECC_IDLE_REG(op) ((op) == ECC_ENCODE ? ECC_ENCIDLE : ECC_DECIDLE) #define ECC_CTL_REG(op) ((op) == ECC_ENCODE ? ECC_ENCCON : ECC_DECCON) +#define ECC_FDM_MAX_SIZE 8 +#define ECC_FDM_MIN_SIZE 1 + struct mtk_ecc_caps { u32 err_mask; const u8 *ecc_strength; @@ -79,6 +83,10 @@ static const u8 ecc_strength_mt7622[] = { 4, 6, 8, 10, 12, 14, 16 }; +static const u8 ecc_strength_mt7986[] = { + 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24 +}; + enum mtk_ecc_regs { ECC_ENCPAR00, ECC_ENCIRQ_EN, @@ -115,6 +123,15 @@ static int mt7622_ecc_regs[] = { [ECC_DECIRQ_STA] = 0x144, }; +static int mt7986_ecc_regs[] = { + [ECC_ENCPAR00] = 0x300, + [ECC_ENCIRQ_EN] = 0x80, + [ECC_ENCIRQ_STA] = 0x84, + [ECC_DECDONE] = 0x124, + [ECC_DECIRQ_EN] = 0x200, + [ECC_DECIRQ_STA] = 0x204, +}; + static inline void mtk_ecc_wait_idle(struct mtk_ecc *ecc, enum mtk_ecc_operation op) { @@ -447,6 +464,464 @@ unsigned int mtk_ecc_get_parity_bits(struct mtk_ecc *ecc) } EXPORT_SYMBOL(mtk_ecc_get_parity_bits); +static inline int data_off(struct nand_device *nand, int i) +{ + int eccsize = nand->ecc.ctx.conf.step_size; + + return i * eccsize; +} + +static inline int fdm_off(struct nand_device *nand, int i) +{ + struct mtk_ecc_engine *eng = nand->ecc.ctx.priv; + int poi; + + if (i < eng->bad_mark.sec) + poi = (i + 1) * eng->fdm_size; + else if (i == eng->bad_mark.sec) + poi = 0; + else + poi = i * eng->fdm_size; + + return poi; +} + +static inline int mtk_ecc_data_len(struct nand_device *nand) +{ + struct mtk_ecc_engine *eng = nand->ecc.ctx.priv; + int eccsize = 
nand->ecc.ctx.conf.step_size; + int eccbytes = eng->oob_ecc; + + return eccsize + eng->fdm_size + eccbytes; +} + +static inline u8 *mtk_ecc_sec_ptr(struct nand_device *nand, int i) +{ + struct mtk_ecc_engine *eng = nand->ecc.ctx.priv; + + return eng->page_buf + i * mtk_ecc_data_len(nand); +} + +static inline u8 *mtk_ecc_fdm_ptr(struct nand_device *nand, int i) +{ + struct mtk_ecc_engine *eng = nand->ecc.ctx.priv; + int eccsize = nand->ecc.ctx.conf.step_size; + + return eng->page_buf + i * mtk_ecc_data_len(nand) + eccsize; +} + +static void mtk_ecc_no_bad_mark_swap(struct nand_device *a, u8 *b, + u8 *c) +{ + /* nop */ +} + +static void mtk_ecc_bad_mark_swap(struct nand_device *nand, u8 *databuf, + u8 *oobbuf) +{ + struct mtk_ecc_engine *eng = nand->ecc.ctx.priv; + int step_size = nand->ecc.ctx.conf.step_size; + u32 bad_pos = eng->bad_mark.pos; + + bad_pos += eng->bad_mark.sec * step_size; + + swap(oobbuf[0], databuf[bad_pos]); +} + +static void mtk_ecc_set_bad_mark_ctl(struct mtk_ecc_bad_mark_ctl *bm_ctl, + struct mtd_info *mtd) +{ + struct nand_device *nand = mtd_to_nanddev(mtd); + + if (mtd->writesize == 512) { + bm_ctl->bm_swap = mtk_ecc_no_bad_mark_swap; + } else { + bm_ctl->bm_swap = mtk_ecc_bad_mark_swap; + bm_ctl->sec = mtd->writesize / mtk_ecc_data_len(nand); + bm_ctl->pos = mtd->writesize % mtk_ecc_data_len(nand); + } +} + +static int mtk_ecc_ooblayout_free(struct mtd_info *mtd, int section, + struct mtd_oob_region *oob_region) +{ + struct nand_device *nand = mtd_to_nanddev(mtd); + struct mtk_ecc_engine *eng = nand->ecc.ctx.priv; + struct nand_ecc_props *conf = &nand->ecc.ctx.conf; + u32 eccsteps, bbm_bytes = 0; + + eccsteps = mtd->writesize / conf->step_size; + + if (section >= eccsteps) + return -ERANGE; + + /* reserve 1 byte for bad mark only for section 0 */ + if (section == 0) + bbm_bytes = 1; + + oob_region->length = eng->fdm_size - bbm_bytes; + oob_region->offset = section * eng->fdm_size + bbm_bytes; + + return 0; +} + +static int 
mtk_ecc_ooblayout_ecc(struct mtd_info *mtd, int section,
+			  struct mtd_oob_region *oob_region)
+{
+	struct nand_device *nand = mtd_to_nanddev(mtd);
+	struct mtk_ecc_engine *eng = nand->ecc.ctx.priv;
+
+	if (section)
+		return -ERANGE;
+
+	oob_region->offset = eng->fdm_size * eng->nsteps;
+	oob_region->length = mtd->oobsize - oob_region->offset;
+
+	return 0;
+}
+
+static const struct mtd_ooblayout_ops mtk_ecc_ooblayout_ops = {
+	.free = mtk_ecc_ooblayout_free,
+	.ecc = mtk_ecc_ooblayout_ecc,
+};
+
+const struct mtd_ooblayout_ops *mtk_ecc_get_ooblayout(void)
+{
+	return &mtk_ecc_ooblayout_ops;
+}
+
+static struct device *mtk_ecc_get_engine_dev(struct device *dev)
+{
+	struct platform_device *eccpdev;
+	struct device_node *np;
+
+	/*
+	 * The device node is only the host controller,
+	 * not the actual ECC engine, in the pipelined case.
+	 */
+	np = of_parse_phandle(dev->of_node, "nand-ecc-engine", 0);
+	if (!np)
+		return NULL;
+
+	eccpdev = of_find_device_by_node(np);
+	if (!eccpdev) {
+		of_node_put(np);
+		return NULL;
+	}
+
+	platform_device_put(eccpdev);
+	of_node_put(np);
+
+	return &eccpdev->dev;
+}
+
+/*
+ * mtk_ecc_data_format() - Convert data between the ECC format and the data/oob bufs
+ *
+ * The Mediatek HW ECC engine organizes data, OOB free and OOB ECC bytes
+ * by sector; the data format of one page is as below:
+ * || sector 0 || sector 1 || ...
+ * || data | fdm | oob ecc || data | fdm | oob ecc || ...
+ *
+ * Therefore, it is necessary to convert the data for reads/writes in
+ * MTD_OPS_RAW mode. These data include the bad mark, sector data, fdm data
+ * and oob ecc.
+ */ +static void mtk_ecc_data_format(struct nand_device *nand, + u8 *databuf, u8 *oobbuf, bool write) +{ + struct mtk_ecc_engine *eng = nand->ecc.ctx.priv; + int step_size = nand->ecc.ctx.conf.step_size; + int i; + + if (write) { + for (i = 0; i < eng->nsteps; i++) { + if (i == eng->bad_mark.sec) + eng->bad_mark.bm_swap(nand, + databuf, oobbuf); + memcpy(mtk_ecc_sec_ptr(nand, i), + databuf + data_off(nand, i), step_size); + + memcpy(mtk_ecc_fdm_ptr(nand, i), + oobbuf + fdm_off(nand, i), + eng->fdm_size); + + memcpy(mtk_ecc_fdm_ptr(nand, i) + eng->fdm_size, + oobbuf + eng->fdm_size * eng->nsteps + + i * eng->oob_ecc, + eng->oob_ecc); + + /* swap back when write */ + if (i == eng->bad_mark.sec) + eng->bad_mark.bm_swap(nand, + databuf, oobbuf); + } + } else { + for (i = 0; i < eng->nsteps; i++) { + memcpy(databuf + data_off(nand, i), + mtk_ecc_sec_ptr(nand, i), step_size); + + memcpy(oobbuf + fdm_off(nand, i), + mtk_ecc_sec_ptr(nand, i) + step_size, + eng->fdm_size); + + memcpy(oobbuf + eng->fdm_size * eng->nsteps + + i * eng->oob_ecc, + mtk_ecc_sec_ptr(nand, i) + step_size + + eng->fdm_size, + eng->oob_ecc); + + if (i == eng->bad_mark.sec) + eng->bad_mark.bm_swap(nand, + databuf, oobbuf); + } + } +} + +static void mtk_ecc_fdm_shift(struct nand_device *nand, + u8 *dst_buf, u8 *src_buf) +{ + struct mtk_ecc_engine *eng = nand->ecc.ctx.priv; + u8 *poi; + int i; + + for (i = 0; i < eng->nsteps; i++) { + if (i < eng->bad_mark.sec) + poi = src_buf + (i + 1) * eng->fdm_size; + else if (i == eng->bad_mark.sec) + poi = src_buf; + else + poi = src_buf + i * eng->fdm_size; + + memcpy(dst_buf + i * eng->fdm_size, poi, eng->fdm_size); + } +} + +int mtk_ecc_prepare_io_req_pipelined(struct nand_device *nand, + struct nand_page_io_req *req) +{ + struct mtk_ecc_engine *eng = nand->ecc.ctx.priv; + struct mtd_info *mtd = nanddev_to_mtd(nand); + u8 *buf = eng->page_buf; + + nand_ecc_tweak_req(&eng->req_ctx, req); + + if (req->mode == MTD_OPS_RAW) { + if (req->type == NAND_PAGE_WRITE) { 
+ /* change data/oob buf to MTK HW ECC data format */ + mtk_ecc_data_format(nand, req->databuf.in, + req->oobbuf.in, true); + req->databuf.out = buf; + req->oobbuf.out = buf + nand->memorg.pagesize; + } + } else { + eng->ecc_cfg.op = ECC_DECODE; + if (req->type == NAND_PAGE_WRITE) { + memset(eng->oob_buf, 0xff, nand->memorg.oobsize); + mtd_ooblayout_set_databytes(mtd, req->oobbuf.out, + eng->oob_buf, + req->ooboffs, + mtd->oobavail); + eng->bad_mark.bm_swap(nand, + req->databuf.in, eng->oob_buf); + mtk_ecc_fdm_shift(nand, req->oobbuf.in, + eng->oob_buf); + + eng->ecc_cfg.op = ECC_ENCODE; + } + + eng->ecc_cfg.mode = ECC_NFI_MODE; + eng->ecc_cfg.sectors = eng->nsteps; + return mtk_ecc_enable(eng->ecc, &eng->ecc_cfg); + } + + return 0; +} + +int mtk_ecc_finish_io_req_pipelined(struct nand_device *nand, + struct nand_page_io_req *req) +{ + struct mtk_ecc_engine *eng = nand->ecc.ctx.priv; + struct mtd_info *mtd = nanddev_to_mtd(nand); + struct mtk_ecc_stats stats; + u8 *buf = eng->page_buf; + int ret; + + if (req->type == NAND_PAGE_WRITE) { + if (req->mode != MTD_OPS_RAW) { + mtk_ecc_disable(eng->ecc); + mtk_ecc_fdm_shift(nand, eng->oob_buf, + req->oobbuf.in); + eng->bad_mark.bm_swap(nand, + req->databuf.in, eng->oob_buf); + } + nand_ecc_restore_req(&eng->req_ctx, req); + + return 0; + } + + if (req->mode == MTD_OPS_RAW) { + memcpy(buf, req->databuf.in, + nand->memorg.pagesize); + memcpy(buf + nand->memorg.pagesize, + req->oobbuf.in, nand->memorg.oobsize); + + /* change MTK HW ECC data format to data/oob buf */ + mtk_ecc_data_format(nand, req->databuf.in, + req->oobbuf.in, false); + nand_ecc_restore_req(&eng->req_ctx, req); + + return 0; + } + + ret = mtk_ecc_wait_done(eng->ecc, ECC_DECODE); + if (ret) + return -ETIMEDOUT; + + if (eng->read_empty) { + memset(req->databuf.in, 0xff, nand->memorg.pagesize); + memset(req->oobbuf.in, 0xff, nand->memorg.oobsize); + + ret = 0; + } else { + mtk_ecc_get_stats(eng->ecc, &stats, eng->nsteps); + mtd->ecc_stats.corrected += 
stats.corrected;
+		mtd->ecc_stats.failed += stats.failed;
+
+		/*
+		 * Return -EBADMSG on an uncorrectable ECC error;
+		 * otherwise, return the number of bitflips.
+		 */
+		if (stats.failed)
+			ret = -EBADMSG;
+		else
+			ret = stats.bitflips;
+
+		mtk_ecc_fdm_shift(nand, eng->oob_buf, req->oobbuf.in);
+		eng->bad_mark.bm_swap(nand,
+				      req->databuf.in, eng->oob_buf);
+		mtd_ooblayout_get_databytes(mtd, req->oobbuf.in,
+					    eng->oob_buf,
+					    0, mtd->oobavail);
+	}
+
+	mtk_ecc_disable(eng->ecc);
+	nand_ecc_restore_req(&eng->req_ctx, req);
+
+	return ret;
+}
+
+int mtk_ecc_init_ctx_pipelined(struct nand_device *nand)
+{
+	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
+	struct mtk_ecc_engine *eng = nand->ecc.ctx.priv;
+	struct mtd_info *mtd = nanddev_to_mtd(nand);
+	struct device *dev;
+	int free, ret;
+
+	/*
+	 * In the case of a pipelined engine, the device registering the ECC
+	 * engine is not the actual ECC engine device but the host controller.
+	 */
+	dev = mtk_ecc_get_engine_dev(nand->ecc.engine->dev);
+	if (!dev) {
+		ret = -EINVAL;
+		goto free_engine;
+	}
+
+	eng->ecc = dev_get_drvdata(dev);
+
+	/* calculate the OOB free bytes, excluding the ECC parity data */
+	free = (conf->strength * mtk_ecc_get_parity_bits(eng->ecc) +
+		7) >> 3;
+	free = eng->oob_per_sector - free;
+
+	/*
+	 * Increase the ECC strength if the OOB space left is bigger than the
+	 * max FDM size, or reduce it if the OOB size is not enough for the
+	 * ECC parity data.
+	 */
+	if (free > ECC_FDM_MAX_SIZE)
+		eng->oob_ecc = eng->oob_per_sector - ECC_FDM_MAX_SIZE;
+	else if (free < 0)
+		eng->oob_ecc = eng->oob_per_sector - ECC_FDM_MIN_SIZE;
+
+	/* calculate and adjust the ECC strength based on the OOB ECC bytes */
+	conf->strength = (eng->oob_ecc << 3) /
+			 mtk_ecc_get_parity_bits(eng->ecc);
+	mtk_ecc_adjust_strength(eng->ecc, &conf->strength);
+
+	eng->oob_ecc = DIV_ROUND_UP(conf->strength *
+				    mtk_ecc_get_parity_bits(eng->ecc), 8);
+
+	eng->fdm_size = eng->oob_per_sector - eng->oob_ecc;
+	if (eng->fdm_size > ECC_FDM_MAX_SIZE)
+		eng->fdm_size = ECC_FDM_MAX_SIZE;
+
+	eng->fdm_ecc_size = ECC_FDM_MIN_SIZE;
+
+	eng->oob_ecc = eng->oob_per_sector - eng->fdm_size;
+
+	if (!mtd->ooblayout)
+		mtd_set_ooblayout(mtd, mtk_ecc_get_ooblayout());
+
+	ret = nand_ecc_init_req_tweaking(&eng->req_ctx, nand);
+	if (ret)
+		goto free_engine;
+
+	eng->oob_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
+	eng->page_buf =
+		kzalloc(mtd->writesize + mtd->oobsize, GFP_KERNEL);
+	if (!eng->oob_buf || !eng->page_buf) {
+		ret = -ENOMEM;
+		goto free_bufs;
+	}
+
+	mtk_ecc_set_bad_mark_ctl(&eng->bad_mark, mtd);
+	eng->ecc_cfg.strength = conf->strength;
+	eng->ecc_cfg.len = conf->step_size + eng->fdm_ecc_size;
+
+	return 0;
+
+free_bufs:
+	nand_ecc_cleanup_req_tweaking(&eng->req_ctx);
+	kfree(eng->oob_buf);
+	kfree(eng->page_buf);
+free_engine:
+	kfree(eng);
+
+	return ret;
+}
+
+void mtk_ecc_cleanup_ctx_pipelined(struct nand_device *nand)
+{
+	struct mtk_ecc_engine *eng = nand->ecc.ctx.priv;
+
+	if (eng) {
+		nand_ecc_cleanup_req_tweaking(&eng->req_ctx);
+		kfree(eng->oob_buf);
+		kfree(eng->page_buf);
+		kfree(eng);
+	}
+}
+
+/*
+ * The on-host Mediatek HW ECC engine works in the pipelined case and
+ * will be registered by the drivers that wrap it.
+ */ +static struct nand_ecc_engine_ops mtk_ecc_engine_pipelined_ops = { + .init_ctx = mtk_ecc_init_ctx_pipelined, + .cleanup_ctx = mtk_ecc_cleanup_ctx_pipelined, + .prepare_io_req = mtk_ecc_prepare_io_req_pipelined, + .finish_io_req = mtk_ecc_finish_io_req_pipelined, +}; + +struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void) +{ + return &mtk_ecc_engine_pipelined_ops; +} +EXPORT_SYMBOL(mtk_ecc_get_pipelined_ops); + static const struct mtk_ecc_caps mtk_ecc_caps_mt2701 = { .err_mask = 0x3f, .ecc_strength = ecc_strength_mt2701, @@ -477,6 +952,16 @@ static const struct mtk_ecc_caps mtk_ecc_caps_mt7622 = { .pg_irq_sel = 0, }; +static const struct mtk_ecc_caps mtk_ecc_caps_mt7986 = { + .err_mask = 0x1f, + .ecc_strength = ecc_strength_mt7986, + .ecc_regs = mt7986_ecc_regs, + .num_ecc_strength = 11, + .ecc_mode_shift = 5, + .parity_bits = 14, + .pg_irq_sel = 1, +}; + static const struct of_device_id mtk_ecc_dt_match[] = { { .compatible = "mediatek,mt2701-ecc", @@ -487,6 +972,9 @@ static const struct of_device_id mtk_ecc_dt_match[] = { }, { .compatible = "mediatek,mt7622-ecc", .data = &mtk_ecc_caps_mt7622, + }, { + .compatible = "mediatek,mt7986-ecc", + .data = &mtk_ecc_caps_mt7986, }, {}, }; diff --git a/include/linux/mtd/mtk_ecc.h b/include/linux/mtd/mtk_ecc.h index 0e48c36e6ca0..a286183d16c4 100644 --- a/include/linux/mtd/mtk_ecc.h +++ b/include/linux/mtd/mtk_ecc.h @@ -33,6 +33,31 @@ struct mtk_ecc_config { u32 len; }; +struct mtk_ecc_bad_mark_ctl { + void (*bm_swap)(struct nand_device *, u8 *databuf, u8* oobbuf); + u32 sec; + u32 pos; +}; + +struct mtk_ecc_engine { + struct nand_ecc_req_tweak_ctx req_ctx; + + u32 oob_per_sector; + u32 oob_ecc; + u32 fdm_size; + u32 fdm_ecc_size; + + bool read_empty; + u32 nsteps; + + u8 *page_buf; + u8 *oob_buf; + + struct mtk_ecc *ecc; + struct mtk_ecc_config ecc_cfg; + struct mtk_ecc_bad_mark_ctl bad_mark; +}; + int mtk_ecc_encode(struct mtk_ecc *, struct mtk_ecc_config *, u8 *, u32); void mtk_ecc_get_stats(struct mtk_ecc *, 
struct mtk_ecc_stats *, int);
 int mtk_ecc_wait_done(struct mtk_ecc *, enum mtk_ecc_operation);
@@ -44,4 +69,17 @@ unsigned int mtk_ecc_get_parity_bits(struct mtk_ecc *ecc);
 struct mtk_ecc *of_mtk_ecc_get(struct device_node *);
 void mtk_ecc_release(struct mtk_ecc *);

+#if IS_ENABLED(CONFIG_MTD_NAND_ECC_MTK)
+
+struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void);
+
+#else /* !CONFIG_MTD_NAND_ECC_MTK */
+
+static inline struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void)
+{
+	return NULL;
+}
+
+#endif /* CONFIG_MTD_NAND_ECC_MTK */
+
 #endif

diff --git a/include/linux/mtd/nand.h b/include/linux/mtd/nand.h
index 32fc7edf65b3..5ffd3e359515 100644
--- a/include/linux/mtd/nand.h
+++ b/include/linux/mtd/nand.h
@@ -265,10 +265,16 @@ struct nand_ecc_engine_ops {

 /**
  * struct nand_ecc_engine - ECC engine abstraction for NAND devices
+ * @dev: Host device
+ * @node: Private field used at registration time
  * @ops: ECC engine operations
+ * @priv: Private data
  */
 struct nand_ecc_engine {
+	struct device *dev;
+	struct list_head node;
	struct nand_ecc_engine_ops *ops;
+	void *priv;
 };

 void of_get_nand_ecc_user_config(struct nand_device *nand);
@@ -279,8 +285,13 @@ int nand_ecc_prepare_io_req(struct nand_device *nand,
 int nand_ecc_finish_io_req(struct nand_device *nand,
			   struct nand_page_io_req *req);
 bool nand_ecc_is_strong_enough(struct nand_device *nand);
+int nand_ecc_register_on_host_hw_engine(struct nand_ecc_engine *engine);
+int nand_ecc_unregister_on_host_hw_engine(struct nand_ecc_engine *engine);
 struct nand_ecc_engine *nand_ecc_get_sw_engine(struct nand_device *nand);
 struct nand_ecc_engine *nand_ecc_get_on_die_hw_engine(struct nand_device *nand);
+struct nand_ecc_engine *nand_ecc_get_on_host_hw_engine(struct nand_device *nand);
+struct nand_ecc_engine *nand_ecc_match_on_host_hw_engine(struct device *dev);
+void nand_ecc_put_on_host_hw_engine(struct nand_device *nand);

 #if IS_ENABLED(CONFIG_MTD_NAND_ECC_SW_HAMMING)
 struct nand_ecc_engine *nand_ecc_sw_hamming_get_engine(void);
From patchwork Fri Oct 22 02:40:19 2021 X-Patchwork-Submitter: Xiangsheng Hou X-Patchwork-Id: 12576923 From: Xiangsheng Hou Subject: [RFC,v3 3/5] spi: add Mediatek SPI Nand controller driver Date: Fri, 22 Oct 2021 10:40:19 +0800 Message-ID: <20211022024021.14665-4-xiangsheng.hou@mediatek.com> In-Reply-To: <20211022024021.14665-1-xiangsheng.hou@mediatek.com> References: <20211022024021.14665-1-xiangsheng.hou@mediatek.com> 
In this version, the SPI driver works with the MTK pipelined HW ECC engine. Signed-off-by: Xiangsheng Hou --- drivers/spi/Kconfig | 11 + drivers/spi/Makefile | 1 + drivers/spi/spi-mtk-snfi.c | 1234 ++++++++++++++++++++++++++++++++++++ 3 files changed, 1246 insertions(+) create mode 100644 drivers/spi/spi-mtk-snfi.c diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig index 83e352b0c8f9..6768cd510f77 100644 --- a/drivers/spi/Kconfig +++ b/drivers/spi/Kconfig @@ -514,6 +514,17 @@ config SPI_MT65XX say Y or M here.If you are not sure, say N. SPI drivers for Mediatek MT65XX and MT81XX series ARM SoCs. +config SPI_MTK_SNFI + tristate "MediaTek SPI NAND interface" + depends on MTD + select MTD_SPI_NAND + select MTD_NAND_ECC_MTK + help + This selects the SPI NAND flash interface (SNFI), + which can be found on MediaTek SoCs. + Say Y or M here. If you are not sure, say N. + Note that parallel NAND and SPI NAND are mutually exclusive on MediaTek SoCs. + config SPI_MT7621 tristate "MediaTek MT7621 SPI Controller" depends on RALINK || COMPILE_TEST diff --git a/drivers/spi/Makefile b/drivers/spi/Makefile index 699db95c8441..0435624905d9 100644 --- a/drivers/spi/Makefile +++ b/drivers/spi/Makefile @@ -69,6 +69,7 @@ obj-$(CONFIG_SPI_MPC512x_PSC) += spi-mpc512x-psc.o obj-$(CONFIG_SPI_MPC52xx_PSC) += spi-mpc52xx-psc.o obj-$(CONFIG_SPI_MPC52xx) += spi-mpc52xx.o obj-$(CONFIG_SPI_MT65XX) += spi-mt65xx.o +obj-$(CONFIG_SPI_MTK_SNFI) += spi-mtk-snfi.o obj-$(CONFIG_SPI_MT7621) += spi-mt7621.o obj-$(CONFIG_SPI_MTK_NOR) += spi-mtk-nor.o obj-$(CONFIG_SPI_MXIC) += spi-mxic.o diff --git a/drivers/spi/spi-mtk-snfi.c b/drivers/spi/spi-mtk-snfi.c new file mode 100644 index 000000000000..f4955e64acdc --- /dev/null +++ b/drivers/spi/spi-mtk-snfi.c @@ -0,0 +1,1234 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Driver for MediaTek SPI memory interface + * + * Copyright (C) 2021 MediaTek Inc. 
+ * Authors: Xiangsheng Hou + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* Registers used by the driver */ +#define NFI_CNFG 0x00 +#define CNFG_DMA BIT(0) +#define CNFG_READ_EN BIT(1) +#define CNFG_DMA_BURST_EN BIT(2) +#define CNFG_HW_ECC_EN BIT(8) +#define CNFG_AUTO_FMT_EN BIT(9) +#define CNFG_OP_CUST GENMASK(14, 13) +#define NFI_PAGEFMT (0x04) +#define PAGEFMT_SPARE_MASK GENMASK(21, 16) +#define PAGEFMT_SPARE_SHIFT (16) +#define PAGEFMT_FDM_ECC_SHIFT (12) +#define PAGEFMT_FDM_SHIFT (8) +#define PAGEFMT_FDM_MASK GENMASK(11, 8) +#define PAGEFMT_SEC_SEL_512 BIT(2) +#define PAGEFMT_512_2K (0) +#define PAGEFMT_2K_4K (1) +#define PAGEFMT_4K_8K (2) +#define PAGEFMT_8K_16K (3) +#define PAGEFMT_PAGE_MASK GENMASK(1, 0) +#define NFI_CON 0x08 +#define CON_FIFO_FLUSH BIT(0) +#define CON_NFI_RST BIT(1) +#define CON_BRD BIT(8) +#define CON_BWR BIT(9) +#define CON_SEC_SHIFT 12 +#define CON_SEC_MASK GENMASK(16, 12) +#define NFI_INTR_EN 0x10 +#define INTR_CUS_PROG_EN BIT(7) +#define INTR_CUS_READ_EN BIT(8) +#define INTR_IRQ_EN BIT(31) +#define NFI_INTR_STA 0x14 +#define NFI_CMD 0x20 +#define CMD_DUMMY 0x0 +#define NFI_STRDATA 0x40 +#define STAR_EN BIT(0) +#define NFI_STA 0x60 +#define NFI_FSM_MASK GENMASK(19, 16) +#define STA_EMP_PAGE BIT(12) +#define NFI_ADDRCNTR 0x70 +#define CNTR_MASK GENMASK(16, 12) +#define ADDRCNTR_SEC_SHIFT 12 +#define ADDRCNTR_SEC(val) \ + (((val) & CNTR_MASK) >> ADDRCNTR_SEC_SHIFT) +#define NFI_STRADDR 0x80 +#define NFI_BYTELEN 0x84 +#define NFI_FDML(x) (0xA0 + (x) * sizeof(u32) * 2) +#define NFI_FDMM(x) (0xA4 + (x) * sizeof(u32) * 2) +#define NFI_MASTERSTA 0x224 +#define AHB_BUS_BUSY BIT(1) +#define BUS_BUSY BIT(0) +#define SNFI_MAC_OUTL 0x504 +#define SNFI_MAC_INL 0x508 +#define SNFI_RD_CTL2 0x510 +#define RD_CMD_MASK 0x00ff +#define RD_DUMMY_SHIFT 8 +#define SNFI_RD_CTL3 0x514 +#define RD_ADDR_MASK 0xffff +#define SNFI_MISC_CTL 0x538 +#define 
RD_MODE_MASK GENMASK(18, 16) +#define LATCH_LAT_MASK GENMASK(9, 8) +#define LATCH_LAT_SHIFT 8 +#define RD_MODE_X2 BIT(16) +#define RD_MODE_X4 BIT(17) +#define RD_MODE_DQUAL BIT(18) +#define RD_CUSTOM_EN BIT(6) +#define WR_CUSTOM_EN BIT(7) +#define WR_X4_EN BIT(20) +#define SW_RST BIT(28) +#define SNFI_MISC_CTL2 0x53c +#define WR_LEN_SHIFT 16 +#define SNFI_PG_CTL1 0x524 +#define WR_LOAD_CMD_MASK GENMASK(15, 8) +#define WR_LOAD_CMD_SHIFT 8 +#define SNFI_PG_CTL2 0x528 +#define WR_LOAD_ADDR_MASK GENMASK(15, 0) +#define SNFI_MAC_CTL 0x500 +#define MAC_WIP BIT(0) +#define MAC_WIP_READY BIT(1) +#define MAC_TRIG BIT(2) +#define MAC_EN BIT(3) +#define MAC_SIO_SEL BIT(4) +#define SNFI_DLY_CTL3 0x548 +#define SAM_DLY_MASK GENMASK(5, 0) +#define SNFI_STA_CTL1 0x550 +#define CUS_PROG_DONE BIT(28) +#define CUS_READ_DONE BIT(27) +#define SPI_STATE GENMASK(3, 0) +#define SNFI_CNFG 0x55c +#define SNFI_MODE_EN BIT(0) +#define SNFI_GPRAM_DATA 0x800 +#define SNFI_GPRAM_MAX_LEN 160 + +#define MTK_SNFI_TIMEOUT 500000 +#define MTK_SNFI_RESET_TIMEOUT 1000000 +#define MTK_SNFI_AUTOSUSPEND_DELAY 1000 +#define KB(x) ((x) * 1024UL) + +#define MTK_NFI_MIN_SPARE (16) + +/* supported spare size of each IP */ +static const u8 spare_size_mt7986[] = { + 16, 26, 27, 28, 32, 36, 40, 44, 48, 49, 50, 51, 52, 62, 61, 63, 64, 67, + 74 +}; + +struct mtk_snfi_caps { + const u8 *spare_size; + u8 num_spare_size; + u8 pageformat_spare_shift; + u32 max_sector_size; +}; + +struct mtk_snfi { + struct clk *nfi_clk; + struct clk *snfi_clk; + struct clk *hclk; + struct device *dev; + struct completion done; + + const struct mtk_snfi_caps *caps; + + struct { + u32 page_size; + u32 spare_per_sector; + u32 spare_idx; + struct nand_ecc_engine *engine; + bool enabled; + u32 sectors; + } ecc; + + u32 fdm_size; + u32 fdm_ecc_size; + + u32 sample_delay; + u32 read_latency; + + void *tx_buf; + dma_addr_t dma_addr; + void __iomem *regs; +}; + +static void mtk_snfi_mac_enable(struct mtk_snfi *snfi) +{ + u32 val; + + val = 
readl(snfi->regs + SNFI_MAC_CTL); + val &= ~MAC_SIO_SEL; + val |= MAC_EN; + + writel(val, snfi->regs + SNFI_MAC_CTL); +} + +static int mtk_snfi_mac_trigger(struct mtk_snfi *snfi) +{ + u32 val; + int ret = 0; + + val = readl(snfi->regs + SNFI_MAC_CTL); + val |= MAC_TRIG; + writel(val, snfi->regs + SNFI_MAC_CTL); + + ret = readl_poll_timeout_atomic(snfi->regs + SNFI_MAC_CTL, val, + val & MAC_WIP_READY, 0, + MTK_SNFI_TIMEOUT); + if (ret < 0) { + dev_err(snfi->dev, "wait for wip ready timeout\n"); + return -EIO; + } + + ret = readl_poll_timeout_atomic(snfi->regs + SNFI_MAC_CTL, val, + !(val & MAC_WIP), 0, + MTK_SNFI_TIMEOUT); + if (ret < 0) { + dev_err(snfi->dev, "wait for flash update finish timeout\n"); + return -EIO; + } + + return 0; +} + +static void mtk_snfi_mac_disable(struct mtk_snfi *snfi) +{ + u32 val; + + val = readl(snfi->regs + SNFI_MAC_CTL); + val &= ~(MAC_TRIG | MAC_EN); + writel(val, snfi->regs + SNFI_MAC_CTL); +} + +static int mtk_snfi_mac_op(struct mtk_snfi *snfi) +{ + int ret = 0; + + mtk_snfi_mac_enable(snfi); + ret = mtk_snfi_mac_trigger(snfi); + mtk_snfi_mac_disable(snfi); + + return ret; +} + +static inline void mtk_snfi_read_fdm(struct mtk_snfi *snfi, + const struct spi_mem_op *op) +{ + u32 vall, valm; + u8 *oobptr = op->data.buf.in; + int i, j; + + oobptr += snfi->ecc.page_size; + for (i = 0; i < snfi->ecc.sectors; i++) { + vall = readl(snfi->regs + NFI_FDML(i)); + valm = readl(snfi->regs + NFI_FDMM(i)); + + for (j = 0; j < snfi->fdm_size; j++) + oobptr[j] = (j >= 4 ? valm : vall) >> ((j % 4) * 8); + + oobptr += snfi->fdm_size; + } +} + +static inline void mtk_snfi_write_fdm(struct mtk_snfi *snfi, + const struct spi_mem_op *op) +{ + const u8 *oobptr = op->data.buf.out; + u32 vall, valm; + int i, j; + + oobptr += snfi->ecc.page_size; + for (i = 0; i < snfi->ecc.sectors; i++) { + vall = 0; + valm = 0; + for (j = 0; j < 8; j++) { + if (j < 4) + vall |= (j < snfi->fdm_size ? oobptr[j] : 0xff) + << (j * 8); + else + valm |= (j < snfi->fdm_size ? 
oobptr[j] : 0xff) + << ((j - 4) * 8); + } + writel(vall, snfi->regs + NFI_FDML(i)); + writel(valm, snfi->regs + NFI_FDMM(i)); + + oobptr += snfi->fdm_size; + } +} + +static irqreturn_t mtk_snfi_irq(int irq, void *id) +{ + struct mtk_snfi *snfi = id; + u32 sta, ien; + + sta = readl(snfi->regs + NFI_INTR_STA); + ien = readl(snfi->regs + NFI_INTR_EN); + + if (!(sta & ien)) + return IRQ_NONE; + + writel(0, snfi->regs + NFI_INTR_EN); + complete(&snfi->done); + + return IRQ_HANDLED; +} + +static int mtk_snfi_enable_clk(struct device *dev, struct mtk_snfi *snfi) +{ + int ret; + + ret = clk_prepare_enable(snfi->nfi_clk); + if (ret) { + dev_err(dev, "failed to enable nfi clk\n"); + return ret; + } + + ret = clk_prepare_enable(snfi->snfi_clk); + if (ret) { + dev_err(dev, "failed to enable snfi clk\n"); + clk_disable_unprepare(snfi->nfi_clk); + return ret; + } + + ret = clk_prepare_enable(snfi->hclk); + if (ret) { + dev_err(dev, "failed to enable hclk\n"); + clk_disable_unprepare(snfi->nfi_clk); + clk_disable_unprepare(snfi->snfi_clk); + return ret; + } + + return 0; +} + +static void mtk_snfi_disable_clk(struct mtk_snfi *snfi) +{ + clk_disable_unprepare(snfi->nfi_clk); + clk_disable_unprepare(snfi->snfi_clk); + clk_disable_unprepare(snfi->hclk); +} + +static int mtk_snfi_reset(struct mtk_snfi *snfi) +{ + u32 val; + int ret; + + val = readl(snfi->regs + SNFI_MISC_CTL) | SW_RST; + writel(val, snfi->regs + SNFI_MISC_CTL); + + ret = readw_poll_timeout(snfi->regs + SNFI_STA_CTL1, val, + !(val & SPI_STATE), 0, + MTK_SNFI_RESET_TIMEOUT); + if (ret) { + dev_warn(snfi->dev, "spi state not idle 0x%x\n", val); + return ret; + } + + val = readl(snfi->regs + SNFI_MISC_CTL); + val &= ~SW_RST; + writel(val, snfi->regs + SNFI_MISC_CTL); + + writew(CON_FIFO_FLUSH | CON_NFI_RST, snfi->regs + NFI_CON); + ret = readw_poll_timeout(snfi->regs + NFI_STA, val, + !(val & NFI_FSM_MASK), 0, + MTK_SNFI_RESET_TIMEOUT); + if (ret) { + dev_warn(snfi->dev, "nfi fsm not idle 0x%x\n", val); + return ret; + } 
+ + val = readl(snfi->regs + NFI_STRDATA); + val &= ~STAR_EN; + writew(val, snfi->regs + NFI_STRDATA); + + return 0; +} + +static int mtk_snfi_init(struct mtk_snfi *snfi) +{ + int ret; + u32 val; + + ret = mtk_snfi_reset(snfi); + if (ret) + return ret; + + writel(SNFI_MODE_EN, snfi->regs + SNFI_CNFG); + + if (snfi->sample_delay) { + val = readl(snfi->regs + SNFI_DLY_CTL3); + val &= ~SAM_DLY_MASK; + val |= snfi->sample_delay; + writel(val, snfi->regs + SNFI_DLY_CTL3); + } + + if (snfi->read_latency) { + val = readl(snfi->regs + SNFI_MISC_CTL); + val &= ~LATCH_LAT_MASK; + val |= (snfi->read_latency << LATCH_LAT_SHIFT); + writel(val, snfi->regs + SNFI_MISC_CTL); + } + + return 0; +} + +static void mtk_snfi_prepare_for_tx(struct mtk_snfi *snfi, + const struct spi_mem_op *op) +{ + u32 val; + + val = readl(snfi->regs + SNFI_PG_CTL1); + val &= ~WR_LOAD_CMD_MASK; + val |= op->cmd.opcode << WR_LOAD_CMD_SHIFT; + writel(val, snfi->regs + SNFI_PG_CTL1); + + writel(op->addr.val & WR_LOAD_ADDR_MASK, + snfi->regs + SNFI_PG_CTL2); + + val = readl(snfi->regs + SNFI_MISC_CTL); + val |= WR_CUSTOM_EN; + if (op->data.buswidth == 4) + val |= WR_X4_EN; + writel(val, snfi->regs + SNFI_MISC_CTL); + + val = snfi->ecc.page_size + + snfi->ecc.sectors * snfi->ecc.spare_per_sector; + + writel(val << WR_LEN_SHIFT, + snfi->regs + SNFI_MISC_CTL2); + writel(INTR_CUS_PROG_EN | INTR_IRQ_EN, + snfi->regs + NFI_INTR_EN); +} + +static void mtk_snfi_prepare_for_rx(struct mtk_snfi *snfi, + const struct spi_mem_op *op) +{ + u32 val, dummy_cycle; + + dummy_cycle = (op->dummy.nbytes << 3) >> + (ffs(op->dummy.buswidth) - 1); + val = (op->cmd.opcode & RD_CMD_MASK) | + (dummy_cycle << RD_DUMMY_SHIFT); + writel(val, snfi->regs + SNFI_RD_CTL2); + + writel(op->addr.val & RD_ADDR_MASK, + snfi->regs + SNFI_RD_CTL3); + + val = readl(snfi->regs + SNFI_MISC_CTL); + val |= RD_CUSTOM_EN; + val &= ~RD_MODE_MASK; + if (op->data.buswidth == 4) + val |= RD_MODE_X4 | RD_MODE_DQUAL; + else if (op->data.buswidth == 2) + val |= 
RD_MODE_X2 | RD_MODE_DQUAL; + + writel(val, snfi->regs + SNFI_MISC_CTL); + + val = snfi->ecc.page_size + + snfi->ecc.sectors * snfi->ecc.spare_per_sector; + writel(val, snfi->regs + SNFI_MISC_CTL2); + + writel(INTR_CUS_READ_EN | INTR_IRQ_EN, + snfi->regs + NFI_INTR_EN); +} + +static int mtk_snfi_prepare(struct mtk_snfi *snfi, + const struct spi_mem_op *op, bool rx) +{ + int ret; + dma_addr_t addr; + u32 val; + + addr = dma_map_single(snfi->dev, + op->data.buf.in, op->data.nbytes, + rx ? DMA_FROM_DEVICE : DMA_TO_DEVICE); + ret = dma_mapping_error(snfi->dev, addr); + if (ret) { + dev_err(snfi->dev, "dma mapping error\n"); + return -EINVAL; + } + + snfi->dma_addr = addr; + writel(lower_32_bits(addr), snfi->regs + NFI_STRADDR); + + if (snfi->ecc.enabled && !rx) + mtk_snfi_write_fdm(snfi, op); + + val = readw(snfi->regs + NFI_CNFG); + val |= CNFG_DMA | CNFG_DMA_BURST_EN | CNFG_OP_CUST; + val |= rx ? CNFG_READ_EN : 0; + + if (snfi->ecc.enabled) + val |= CNFG_HW_ECC_EN | CNFG_AUTO_FMT_EN; + + writew(val, snfi->regs + NFI_CNFG); + + writel(snfi->ecc.sectors << CON_SEC_SHIFT, snfi->regs + NFI_CON); + + init_completion(&snfi->done); + + /* trigger state machine to custom op mode */ + writel(CMD_DUMMY, snfi->regs + NFI_CMD); + + if (rx) + mtk_snfi_prepare_for_rx(snfi, op); + else + mtk_snfi_prepare_for_tx(snfi, op); + + return 0; +} + +static void mtk_snfi_trigger(struct mtk_snfi *snfi, + const struct spi_mem_op *op, bool rx) +{ + u32 val; + + val = readl(snfi->regs + NFI_CON); + val |= rx ? 
CON_BRD : CON_BWR; + writew(val, snfi->regs + NFI_CON); + + writew(STAR_EN, snfi->regs + NFI_STRDATA); +} + +static int mtk_snfi_wait_done(struct mtk_snfi *snfi, + const struct spi_mem_op *op, bool rx) +{ + u32 val; + int ret; + + ret = wait_for_completion_timeout(&snfi->done, msecs_to_jiffies(500)); + if (!ret) { + dev_err(snfi->dev, "wait for %d completion done timeout\n", rx); + return -ETIMEDOUT; + } + + if (rx) { + ret = readl_poll_timeout_atomic(snfi->regs + NFI_BYTELEN, val, + ADDRCNTR_SEC(val) >= snfi->ecc.sectors, 0, + MTK_SNFI_TIMEOUT); + if (ret < 0) { + dev_err(snfi->dev, "wait for read sector count timeout\n"); + return -ETIMEDOUT; + } + + ret = readl_poll_timeout_atomic(snfi->regs + NFI_MASTERSTA, val, + !(val & (AHB_BUS_BUSY | BUS_BUSY)), + 0, MTK_SNFI_TIMEOUT); + if (ret) { + dev_err(snfi->dev, "wait for bus busy timeout\n"); + return -ETIMEDOUT; + } + + } else { + ret = readl_poll_timeout_atomic(snfi->regs + NFI_ADDRCNTR, val, + ADDRCNTR_SEC(val) >= snfi->ecc.sectors, + 0, MTK_SNFI_TIMEOUT); + if (ret) { + dev_err(snfi->dev, "wait for program sector count timeout\n"); + return -ETIMEDOUT; + } + } + + return 0; +} + +static void mtk_snfi_complete(struct mtk_snfi *snfi, + const struct spi_mem_op *op, bool rx) +{ + u32 val; + + dma_unmap_single(snfi->dev, + snfi->dma_addr, op->data.nbytes, + rx ? DMA_FROM_DEVICE : DMA_TO_DEVICE); + + if (snfi->ecc.enabled && rx) + mtk_snfi_read_fdm(snfi, op); + + val = readl(snfi->regs + SNFI_MISC_CTL); + val &= rx ? ~RD_CUSTOM_EN : ~WR_CUSTOM_EN; + writel(val, snfi->regs + SNFI_MISC_CTL); + + val = readl(snfi->regs + SNFI_STA_CTL1); + val |= rx ? CUS_READ_DONE : CUS_PROG_DONE; + writew(val, snfi->regs + SNFI_STA_CTL1); + val &= rx ? ~CUS_READ_DONE : ~CUS_PROG_DONE; + writew(val, snfi->regs + SNFI_STA_CTL1); + + /* Disable interrupt */ + val = readl(snfi->regs + NFI_INTR_EN); + val &= rx ? 
~INTR_CUS_READ_EN : ~INTR_CUS_PROG_EN; + writew(val, snfi->regs + NFI_INTR_EN); + + writew(0, snfi->regs + NFI_CNFG); + writew(0, snfi->regs + NFI_CON); +} + +static int mtk_snfi_transfer_dma(struct mtk_snfi *snfi, + const struct spi_mem_op *op, bool rx) +{ + int ret; + + ret = mtk_snfi_prepare(snfi, op, rx); + if (ret) + return ret; + + mtk_snfi_trigger(snfi, op, rx); + + ret = mtk_snfi_wait_done(snfi, op, rx); + + mtk_snfi_complete(snfi, op, rx); + + return ret; +} + +static int mtk_snfi_transfer_mac(struct mtk_snfi *snfi, + const u8 *txbuf, u8 *rxbuf, + const u32 txlen, const u32 rxlen) +{ + u32 i, j, val, tmp; + u8 *p_tmp = (u8 *)(&tmp); + u32 addr_offset = 0; + int ret = 0; + + /* Move tx data to gpram in snfi mac mode */ + for (i = 0; i < txlen; ) { + for (j = 0, tmp = 0; i < txlen && j < 4; i++, j++) + p_tmp[j] = txbuf[i]; + + writel(tmp, snfi->regs + SNFI_GPRAM_DATA + addr_offset); + addr_offset += 4; + } + + writel(txlen, snfi->regs + SNFI_MAC_OUTL); + writel(rxlen, snfi->regs + SNFI_MAC_INL); + + ret = mtk_snfi_mac_op(snfi); + if (ret) { + dev_warn(snfi->dev, "snfi mac operation fail\n"); + return ret; + } + + /* Get tx data from gpram in snfi mac mode */ + if (rxlen) + for (i = 0, addr_offset = rounddown(txlen, 4); i < rxlen; ) { + val = readl(snfi->regs + + SNFI_GPRAM_DATA + addr_offset); + for (j = 0; i < rxlen && j < 4; i++, j++, rxbuf++) { + if (i == 0) + j = txlen % 4; + *rxbuf = (val >> (j * 8)) & 0xff; + } + addr_offset += 4; + } + + return ret; +} + +static int mtk_snfi_exec_op(struct spi_mem *mem, + const struct spi_mem_op *op) +{ + struct mtk_snfi *snfi = spi_controller_get_devdata(mem->spi->master); + u8 *buf, *txbuf = snfi->tx_buf, *rxbuf = NULL; + u32 txlen = 0, rxlen = 0; + int i, ret = 0; + bool rx; + + rx = op->data.dir == SPI_MEM_DATA_IN; + + ret = mtk_snfi_reset(snfi); + if (ret) { + dev_warn(snfi->dev, "reset snfi fail\n"); + return ret; + } + + /* + * If tx/rx data buswidth is not 0/1, use snfi DMA mode. 
+ * Otherwise, use snfi mac mode. + */ + if ((op->data.buswidth != 1) && (op->data.buswidth != 0)) { + ret = mtk_snfi_transfer_dma(snfi, op, rx); + if (ret) + dev_warn(snfi->dev, "snfi dma transfer %d fail %d\n", + rx, ret); + return ret; + } + + txbuf[txlen++] = op->cmd.opcode; + + if (op->addr.nbytes) + for (i = 0; i < op->addr.nbytes; i++) + txbuf[txlen++] = op->addr.val >> + (8 * (op->addr.nbytes - i - 1)); + + txlen += op->dummy.nbytes; + + if (op->data.dir == SPI_MEM_DATA_OUT) { + buf = (u8 *)op->data.buf.out; + for (i = 0; i < op->data.nbytes; i++) + txbuf[txlen++] = buf[i]; + } + + if (op->data.dir == SPI_MEM_DATA_IN) { + rxbuf = (u8 *)op->data.buf.in; + rxlen = op->data.nbytes; + } + + ret = mtk_snfi_transfer_mac(snfi, txbuf, rxbuf, txlen, rxlen); + if (ret) + dev_warn(snfi->dev, "snfi mac transfer %d fail %d\n", + op->data.dir, ret); + + return ret; +} + +static int mtk_snfi_check_buswidth(u8 width) +{ + switch (width) { + case 1: + case 2: + case 4: + return 0; + + default: + break; + } + + return -ENOTSUPP; +} + +static bool mtk_snfi_supports_op(struct spi_mem *mem, + const struct spi_mem_op *op) +{ + struct mtk_snfi *snfi = spi_controller_get_devdata(mem->spi->master); + int ret = 0; + + if (!spi_mem_default_supports_op(mem, op)) + return false; + + if (op->cmd.buswidth != 1) + return false; + + /* + * For one operation will use snfi mac mode when data + * buswidth is 0/1. However, the HW ECC engine can not + * be used in mac mode. 
+ */ + if (snfi->ecc.enabled && op->data.buswidth == 1 && + op->data.nbytes >= SNFI_GPRAM_MAX_LEN) + return false; + + switch (op->data.dir) { + /* For spi mem data in, can support 1/2/4 buswidth */ + case SPI_MEM_DATA_IN: + if (op->addr.nbytes) + ret |= mtk_snfi_check_buswidth(op->addr.buswidth); + + if (op->dummy.nbytes) + ret |= mtk_snfi_check_buswidth(op->dummy.buswidth); + + if (op->data.nbytes) + ret |= mtk_snfi_check_buswidth(op->data.buswidth); + + if (ret) + return false; + + break; + case SPI_MEM_DATA_OUT: + /* + * For spi mem data out, can support 0/1 buswidth + * for addr/dummy and 1/4 buswidth for data. + */ + if ((op->addr.buswidth != 0) && (op->addr.buswidth != 1)) + return false; + + if ((op->dummy.buswidth != 0) && (op->dummy.buswidth != 1)) + return false; + + if ((op->data.buswidth != 1) && (op->data.buswidth != 4)) + return false; + + break; + default: + break; + } + + return true; +} + +static int mtk_snfi_adjust_op_size(struct spi_mem *mem, + struct spi_mem_op *op) +{ + u32 len, max_len; + + /* + * The op size only support SNFI_GPRAM_MAX_LEN which will + * use the snfi mac mode when data buswidth is 0/1. + * Otherwise, the snfi can max support 16KB. 
+ */ + if ((op->data.buswidth == 1) || (op->data.buswidth == 0)) + max_len = SNFI_GPRAM_MAX_LEN; + else + max_len = KB(16); + + len = op->cmd.nbytes + op->addr.nbytes + op->dummy.nbytes; + if (len > max_len) + return -ENOTSUPP; + + if ((len + op->data.nbytes) > max_len) + op->data.nbytes = max_len - len; + + return 0; +} + +static const struct mtk_snfi_caps mtk_snfi_caps_mt7986 = { + .spare_size = spare_size_mt7986, + .num_spare_size = 19, + .pageformat_spare_shift = 16, + .max_sector_size = 1024, +}; + +static const struct spi_controller_mem_ops mtk_snfi_ops = { + .adjust_op_size = mtk_snfi_adjust_op_size, + .supports_op = mtk_snfi_supports_op, + .exec_op = mtk_snfi_exec_op, +}; + +static const struct of_device_id mtk_snfi_id_table[] = { + { .compatible = "mediatek,mt7986-snfi", + .data = &mtk_snfi_caps_mt7986, + }, + { /* sentinel */ } +}; + +/* ECC wrapper */ +static struct mtk_snfi *mtk_nand_to_spi(struct nand_device *nand) +{ + struct device *dev = nand->ecc.engine->dev; + struct spi_master *master = dev_get_drvdata(dev); + struct mtk_snfi *snfi = spi_master_get_devdata(master); + + return snfi; +} + +static int mtk_snfi_set_spare_per_sector(struct nand_device *nand, + struct mtk_snfi *snfi, u32 *sps, u32 *idx) +{ + struct mtk_ecc_engine *eng = nand->ecc.ctx.priv; + struct nand_ecc_props *conf = &nand->ecc.ctx.conf; + const u8 *spare = snfi->caps->spare_size; + u32 i, closest_spare = 0; + + eng->nsteps = nand->memorg.pagesize / conf->step_size; + *sps = nand->memorg.oobsize / eng->nsteps; + + if (conf->step_size == 1024) + *sps >>= 1; + + if ((*sps < snfi->fdm_size) || (*sps < MTK_NFI_MIN_SPARE)) + return -EINVAL; + + for (i = 0; i < snfi->caps->num_spare_size; i++) { + if (*sps >= spare[i] && spare[i] >= spare[closest_spare]) { + closest_spare = i; + if (*sps == spare[i]) + break; + } + } + + *sps = spare[closest_spare]; + *idx = closest_spare; + + if (conf->step_size == 1024) + *sps <<= 1; + + return 0; +} + +static int mtk_snfi_ecc_init(struct nand_device 
*nand) +{ + struct nand_ecc_props *reqs = &nand->ecc.requirements; + struct nand_ecc_props *user = &nand->ecc.user_conf; + struct nand_ecc_props *conf = &nand->ecc.ctx.conf; + struct mtk_snfi *snfi = mtk_nand_to_spi(nand); + struct mtk_ecc_engine *eng; + u32 spare, idx; + int ret; + + eng = kzalloc(sizeof(*eng), GFP_KERNEL); + if (!eng) + return -ENOMEM; + + nand->ecc.ctx.priv = eng; + nand->ecc.engine->priv = eng; + + /* Configure the correction depending on the NAND device topology */ + if (user->step_size && user->strength) { + conf->step_size = user->step_size; + conf->strength = user->strength; + } else if (reqs->step_size && reqs->strength) { + conf->step_size = reqs->step_size; + conf->strength = reqs->strength; + } + + /* + * align eccstrength and eccsize. + * The MTK HW ECC engine only support 512 and 1024 eccsize. + */ + if (conf->step_size < 1024) { + if (nand->memorg.pagesize > 512 && + snfi->caps->max_sector_size > 512) { + conf->step_size = 1024; + conf->strength <<= 1; + } else { + conf->step_size = 512; + } + } else { + conf->step_size = 1024; + } + + ret = mtk_snfi_set_spare_per_sector(nand, snfi, &spare, &idx); + + /* These will be used by the snfi driver */ + snfi->ecc.page_size = nand->memorg.pagesize; + snfi->ecc.spare_per_sector = spare; + snfi->ecc.spare_idx = idx; + snfi->ecc.sectors = nand->memorg.pagesize / conf->step_size; + + /* These will be used by HW ECC engine */ + eng->oob_per_sector = spare; + eng->nsteps = snfi->ecc.sectors; + + return ret; +} + +static int mtk_snfi_ecc_init_ctx(struct nand_device *nand) +{ + struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops(); + int ret; + + ret = mtk_snfi_ecc_init(nand); + if (ret) { + pr_info("mtk snfi ecc init fail!\n"); + return ret; + } + + return ops->init_ctx(nand); +} + +static void mtk_snfi_ecc_cleanup_ctx(struct nand_device *nand) +{ + struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops(); + + ops->cleanup_ctx(nand); +} + +static int mtk_snfi_prepare_for_ecc(struct 
nand_device *nand, + struct mtk_snfi *snfi) +{ + struct nand_ecc_props *conf = &nand->ecc.ctx.conf; + struct mtk_ecc_engine *eng = nand->ecc.ctx.priv; + u32 val; + + switch (nand->memorg.pagesize) { + case 512: + val = PAGEFMT_512_2K | PAGEFMT_SEC_SEL_512; + break; + case KB(2): + if (conf->step_size == 512) + val = PAGEFMT_2K_4K | PAGEFMT_SEC_SEL_512; + else + val = PAGEFMT_512_2K; + break; + case KB(4): + if (conf->step_size == 512) + val = PAGEFMT_4K_8K | PAGEFMT_SEC_SEL_512; + else + val = PAGEFMT_2K_4K; + break; + case KB(8): + if (conf->step_size == 512) + val = PAGEFMT_8K_16K | PAGEFMT_SEC_SEL_512; + else + val = PAGEFMT_4K_8K; + break; + case KB(16): + val = PAGEFMT_8K_16K; + break; + default: + dev_err(snfi->dev, "invalid page len: %d\n", + nand->memorg.pagesize); + return -EINVAL; + } + + snfi->fdm_size = eng->fdm_size; + snfi->fdm_ecc_size = eng->fdm_ecc_size; + + val |= snfi->ecc.spare_idx << PAGEFMT_SPARE_SHIFT; + val |= snfi->fdm_size << PAGEFMT_FDM_SHIFT; + val |= snfi->fdm_ecc_size << PAGEFMT_FDM_ECC_SHIFT; + writel(val, snfi->regs + NFI_PAGEFMT); + + return 0; +} + +static int mtk_snfi_ecc_prepare_io_req(struct nand_device *nand, + struct nand_page_io_req *req) +{ + struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops(); + struct mtk_snfi *snfi = mtk_nand_to_spi(nand); + int ret; + + snfi->ecc.enabled = (req->mode != MTD_OPS_RAW); + ret = mtk_snfi_prepare_for_ecc(nand, snfi); + if (ret) + return ret; + + return ops->prepare_io_req(nand, req); +} + +static int mtk_snfi_ecc_finish_io_req(struct nand_device *nand, + struct nand_page_io_req *req) +{ + struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops(); + struct mtk_ecc_engine *eng = nand->ecc.ctx.priv; + struct mtk_snfi *snfi = mtk_nand_to_spi(nand); + + if (snfi->ecc.enabled) { + eng->read_empty = + readl(snfi->regs + NFI_STA) & STA_EMP_PAGE; + snfi->ecc.enabled = false; + } + + return ops->finish_io_req(nand, req); +} + +static struct nand_ecc_engine_ops 
 mtk_snfi_ecc_engine_pipelined_ops = {
+	.init_ctx = mtk_snfi_ecc_init_ctx,
+	.cleanup_ctx = mtk_snfi_ecc_cleanup_ctx,
+	.prepare_io_req = mtk_snfi_ecc_prepare_io_req,
+	.finish_io_req = mtk_snfi_ecc_finish_io_req,
+};
+
+static int mtk_snfi_ecc_probe(struct platform_device *pdev,
+			      struct mtk_snfi *snfi)
+{
+	struct nand_ecc_engine *ecceng;
+
+	if (!mtk_ecc_get_pipelined_ops())
+		return -EOPNOTSUPP;
+
+	ecceng = devm_kzalloc(&pdev->dev, sizeof(*ecceng), GFP_KERNEL);
+	if (!ecceng)
+		return -ENOMEM;
+
+	ecceng->dev = &pdev->dev;
+	ecceng->ops = &mtk_snfi_ecc_engine_pipelined_ops;
+
+	nand_ecc_register_on_host_hw_engine(ecceng);
+	snfi->ecc.engine = ecceng;
+
+	return 0;
+}
+
+static int mtk_snfi_probe(struct platform_device *pdev)
+{
+	struct device_node *np = pdev->dev.of_node;
+	struct spi_controller *ctlr;
+	struct mtk_snfi *snfi;
+	struct resource *res;
+	int ret, irq;
+	u32 val = 0;
+
+	ctlr = spi_alloc_master(&pdev->dev, sizeof(*snfi));
+	if (!ctlr)
+		return -ENOMEM;
+
+	snfi = spi_controller_get_devdata(ctlr);
+	snfi->dev = &pdev->dev;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	snfi->regs = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(snfi->regs)) {
+		ret = PTR_ERR(snfi->regs);
+		goto err_put_master;
+	}
+
+	ret = of_property_read_u32(np, "sample-delay", &val);
+	if (!ret)
+		snfi->sample_delay = val;
+
+	ret = of_property_read_u32(np, "read-latency", &val);
+	if (!ret)
+		snfi->read_latency = val;
+
+	snfi->nfi_clk = devm_clk_get(&pdev->dev, "nfi_clk");
+	if (IS_ERR(snfi->nfi_clk)) {
+		dev_err(&pdev->dev, "not found nfi clk\n");
+		ret = PTR_ERR(snfi->nfi_clk);
+		goto err_put_master;
+	}
+
+	snfi->snfi_clk = devm_clk_get(&pdev->dev, "snfi_clk");
+	if (IS_ERR(snfi->snfi_clk)) {
+		dev_err(&pdev->dev, "not found snfi clk\n");
+		ret = PTR_ERR(snfi->snfi_clk);
+		goto err_put_master;
+	}
+
+	snfi->hclk = devm_clk_get(&pdev->dev, "hclk");
+	if (IS_ERR(snfi->hclk)) {
+		dev_err(&pdev->dev, "not found hclk\n");
+		ret = PTR_ERR(snfi->hclk);
+		goto err_put_master;
+	}
+
+	ret = mtk_snfi_enable_clk(&pdev->dev, snfi);
+	if (ret)
+		goto err_put_master;
+
+	snfi->caps = of_device_get_match_data(&pdev->dev);
+
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_err(&pdev->dev, "not found snfi irq resource\n");
+		ret = -EINVAL;
+		goto clk_disable;
+	}
+
+	ret = devm_request_irq(&pdev->dev, irq, mtk_snfi_irq,
+			       0, "mtk-snfi", snfi);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to request snfi irq\n");
+		goto clk_disable;
+	}
+
+	ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
+	if (ret) {
+		dev_err(&pdev->dev, "failed to set dma mask\n");
+		goto clk_disable;
+	}
+
+	snfi->tx_buf = kzalloc(SNFI_GPRAM_MAX_LEN, GFP_KERNEL);
+	if (!snfi->tx_buf) {
+		ret = -ENOMEM;
+		goto clk_disable;
+	}
+
+	ctlr->dev.of_node = np;
+	ctlr->mem_ops = &mtk_snfi_ops;
+	ctlr->mode_bits = SPI_RX_DUAL | SPI_RX_QUAD | SPI_TX_QUAD;
+	ctlr->auto_runtime_pm = true;
+
+	dev_set_drvdata(&pdev->dev, ctlr);
+
+	ret = mtk_snfi_init(snfi);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to init snfi\n");
+		goto free_buf;
+	}
+
+	ret = mtk_snfi_ecc_probe(pdev, snfi);
+	if (ret) {
+		dev_warn(&pdev->dev, "SPI-mem ECC engine not available\n");
+		goto free_buf;
+	}
+
+	pm_runtime_enable(&pdev->dev);
+
+	ret = devm_spi_register_master(&pdev->dev, ctlr);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to register spi master\n");
+		goto disable_pm_runtime;
+	}
+
+	return 0;
+
+disable_pm_runtime:
+	pm_runtime_disable(&pdev->dev);
+
+free_buf:
+	kfree(snfi->tx_buf);
+
+clk_disable:
+	mtk_snfi_disable_clk(snfi);
+
+err_put_master:
+	spi_master_put(ctlr);
+
+	return ret;
+}
+
+static int mtk_snfi_remove(struct platform_device *pdev)
+{
+	struct spi_controller *ctlr = dev_get_drvdata(&pdev->dev);
+	struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr);
+
+	pm_runtime_disable(&pdev->dev);
+	kfree(snfi->tx_buf);
+	spi_master_put(ctlr);
+
+	return 0;
+}
+
+#ifdef CONFIG_PM
+static int mtk_snfi_runtime_suspend(struct device *dev)
+{
+	struct spi_controller *ctlr = dev_get_drvdata(dev);
+	struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr);
+
+	mtk_snfi_disable_clk(snfi);
+
+	return 0;
+}
+
+static int mtk_snfi_runtime_resume(struct device *dev)
+{
+	struct spi_controller *ctlr = dev_get_drvdata(dev);
+	struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr);
+	int ret;
+
+	ret = mtk_snfi_enable_clk(dev, snfi);
+	if (ret)
+		return ret;
+
+	ret = mtk_snfi_init(snfi);
+	if (ret)
+		dev_err(dev, "failed to init snfi\n");
+
+	return ret;
+}
+#endif /* CONFIG_PM */
+
+static const struct dev_pm_ops mtk_snfi_pm_ops = {
+	SET_RUNTIME_PM_OPS(mtk_snfi_runtime_suspend,
+			   mtk_snfi_runtime_resume, NULL)
+	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
+				pm_runtime_force_resume)
+};
+
+static struct platform_driver mtk_snfi_driver = {
+	.driver = {
+		.name = "mtk-snfi",
+		.of_match_table = mtk_snfi_id_table,
+		.pm = &mtk_snfi_pm_ops,
+	},
+	.probe = mtk_snfi_probe,
+	.remove = mtk_snfi_remove,
+};
+
+module_platform_driver(mtk_snfi_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Xiangsheng Hou ");
+MODULE_DESCRIPTION("Mediatek SPI memory controller driver");

From patchwork Fri Oct 22 02:40:20 2021
X-Patchwork-Submitter: Xiangsheng Hou
X-Patchwork-Id: 12576941
From: Xiangsheng Hou
Subject: [RFC,v3 4/5] arm64: dts: add snfi node for spi nand
Date: Fri, 22 Oct 2021 10:40:20 +0800
Message-ID: <20211022024021.14665-5-xiangsheng.hou@mediatek.com>
In-Reply-To: <20211022024021.14665-1-xiangsheng.hou@mediatek.com>
References: <20211022024021.14665-1-xiangsheng.hou@mediatek.com>

Signed-off-by: Xiangsheng Hou
---
 arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts | 18 ++++++++++++++++++
 arch/arm64/boot/dts/mediatek/mt7622.dtsi     | 13 +++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts b/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
index f2dc850010f1..f4ba86c451fc 100644
--- a/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
+++ b/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
@@ -223,6 +223,24 @@ &nandc {
 	status = "disabled";
 };
 
+&snfi {
+	pinctrl-names = "default";
+	/* pin shared with spic */
+	pinctrl-0 = <&snfi_pins>;
+	nand-ecc-engine = <&bch>;
+	status = "disabled";
+
+	spi_nand@0 {
+		compatible = "spi-nand";
+		reg = <0>;
+		spi-max-frequency = <104000000>;
+		spi-tx-bus-width = <4>;
+		spi-rx-bus-width = <4>;
+		nand-ecc-engine = <&snfi>;
+		nand-ecc-placement = "interleaved";
+	};
+};
+
 &nor_flash {
 	pinctrl-names = "default";
 	pinctrl-0 = <&spi_nor_pins>;
diff --git a/arch/arm64/boot/dts/mediatek/mt7622.dtsi b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
index 890a942ec608..0525e4de5ec0 100644
--- a/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
@@ -545,6 +545,19 @@ nandc: nfi@1100d000 {
 			status = "disabled";
 		};
 
+		snfi: spi@1100d000 {
+			compatible = "mediatek,mt7622-snfi";
+			reg = <0 0x1100D000 0 0x1000>;
+			interrupts = ;
+			clocks = <&infracfg_ao CK_INFRA_NFI1_CK>,
+				 <&infracfg_ao CK_INFRA_SPINFI1_CK>,
+				 <&infracfg_ao CK_INFRA_NFI_HCK_CK>;
+			clock-names = "nfi_clk", "snfi_clk", "hclk";
+			#address-cells = <1>;
+			#size-cells = <0>;
+			status = "disabled";
+		};
+
 		bch: ecc@1100e000 {
 			compatible = "mediatek,mt7622-ecc";
 			reg = <0 0x1100e000 0 0x1000>;

From patchwork Fri Oct 22 02:40:21 2021
X-Patchwork-Submitter: Xiangsheng Hou
X-Patchwork-Id: 12576937
From: Xiangsheng Hou
Subject: [RFC,v3 5/5] mtd: spinand: skip set/get oob data bytes when interleaved case
Date: Fri, 22 Oct 2021 10:40:21 +0800
Message-ID: <20211022024021.14665-6-xiangsheng.hou@mediatek.com>
In-Reply-To: <20211022024021.14665-1-xiangsheng.hou@mediatek.com>
References: <20211022024021.14665-1-xiangsheng.hou@mediatek.com>

With a syndrome layout, the ECC and free bytes in the OOB area are
interleaved with the main data. In this case it is better to let the
ECC engine set/get the OOB data bytes itself.

Although the MTK ECC engine can automatically place data as
sector + OOB free + OOB ECC for each page in pipelined mode, the
resulting bad block marker position does not match the NAND spec,
so the ECC engine performs a bad mark swap (exchanging the 0xFF BBM
with one byte of main data). This swap consumes one byte more than
oobavailable: setting the OOB data bytes after the bad mark swap
would lose one OOB free byte.

Therefore, modify the SPI NAND framework to skip the OOB helpers in
this case and send it for review; this may be common to all
interleaved layouts.
Signed-off-by: Xiangsheng Hou
---
 drivers/mtd/nand/spi/core.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
index 2c8685f1f2fa..32a4707982c5 100644
--- a/drivers/mtd/nand/spi/core.c
+++ b/drivers/mtd/nand/spi/core.c
@@ -401,7 +401,8 @@ static int spinand_read_from_cache_op(struct spinand_device *spinand,
 			req->datalen);
 
 	if (req->ooblen) {
-		if (req->mode == MTD_OPS_AUTO_OOB)
+		if (req->mode == MTD_OPS_AUTO_OOB &&
+		    nand->ecc.user_conf.placement != NAND_ECC_PLACEMENT_INTERLEAVED)
 			mtd_ooblayout_get_databytes(mtd, req->oobbuf.in,
 						    spinand->oobbuf,
 						    req->ooboffs,
@@ -442,7 +443,8 @@ static int spinand_write_to_cache_op(struct spinand_device *spinand,
 			req->datalen);
 
 	if (req->ooblen) {
-		if (req->mode == MTD_OPS_AUTO_OOB)
+		if (req->mode == MTD_OPS_AUTO_OOB &&
+		    nand->ecc.user_conf.placement != NAND_ECC_PLACEMENT_INTERLEAVED)
 			mtd_ooblayout_set_databytes(mtd, req->oobbuf.out,
 						    spinand->oobbuf,
 						    req->ooboffs,