From patchwork Thu Sep 19 19:31:36 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Miquel Raynal <miquel.raynal@bootlin.com>
X-Patchwork-Id: 11153199
From: Miquel Raynal <miquel.raynal@bootlin.com>
To: Richard Weinberger, David Woodhouse, Brian Norris, Marek Vasut,
 Tudor Ambarus, Vignesh Raghavendra
Subject: [PATCH v3 36/40] mtd: spinand: Use the external ECC engine logic
Date: Thu, 19 Sep 2019 21:31:36 +0200
Message-Id: <20190919193141.7865-37-miquel.raynal@bootlin.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190919193141.7865-1-miquel.raynal@bootlin.com>
References: <20190919193141.7865-1-miquel.raynal@bootlin.com>
Cc: Tudor Ambarus, Julien Su, Schrempf Frieder, Paul Cercueil,
 Boris Brezillon, linux-mtd@lists.infradead.org, Thomas Petazzoni,
 Miquel Raynal, Mason Yang, linux-arm-kernel@lists.infradead.org

Now that all the logic is available in the NAND core, let's use it from
the SPI-NAND core. Right now there is no functional change, as the
default ECC engine for SPI-NANDs is set to 'on-die', but users can now
switch to software correction simply by setting the right properties in
the DT.

Also note that the OOB layout handling is removed from the SPI-NAND
core, as each ECC engine is supposed to handle it on its own; users
should not have to be aware of it.

Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
---

Note: two illustrative sketches of the engine-side callbacks the core
now relies on are appended after the diff.

 drivers/mtd/nand/spi/Kconfig |   1 +
 drivers/mtd/nand/spi/core.c  | 101 ++++++++++++++++++++---------------
 2 files changed, 58 insertions(+), 44 deletions(-)

diff --git a/drivers/mtd/nand/spi/Kconfig b/drivers/mtd/nand/spi/Kconfig
index da89b250df7c..3d7649a2dd72 100644
--- a/drivers/mtd/nand/spi/Kconfig
+++ b/drivers/mtd/nand/spi/Kconfig
@@ -2,6 +2,7 @@
 menuconfig MTD_SPI_NAND
 	tristate "SPI NAND device Support"
 	select MTD_NAND_CORE
+	select MTD_NAND_ECC
 	depends on SPI_MASTER
 	select SPI_MEM
 	help
diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
index c20d57f9c11b..a521eeb0d351 100644
--- a/drivers/mtd/nand/spi/core.c
+++ b/drivers/mtd/nand/spi/core.c
@@ -314,6 +314,15 @@ static struct nand_ecc_engine spinand_ondie_ecc_engine = {
 	.ops = &spinand_ondie_ecc_engine_ops,
 };
 
+static void spinand_ondie_ecc_save_status(struct nand_device *nand, u8 status)
+{
+	struct spinand_ondie_ecc_conf *engine_conf = nand->ecc.ctx.priv;
+
+	if (nand->ecc.ctx.conf.provider == NAND_ON_DIE_ECC_ENGINE &&
+	    engine_conf)
+		engine_conf->status = status;
+}
+
 static int spinand_write_enable_op(struct spinand_device *spinand)
 {
 	struct spi_mem_op op = SPINAND_WR_EN_DIS_OP(true);
@@ -335,7 +344,6 @@ static int spinand_read_from_cache_op(struct spinand_device *spinand,
 				      const struct nand_page_io_req *req)
 {
 	struct nand_device *nand = spinand_to_nand(spinand);
-	struct mtd_info *mtd = nanddev_to_mtd(nand);
 	struct spi_mem_dirmap_desc *rdesc;
 	unsigned int nbytes = 0;
 	void *buf = NULL;
@@ -375,17 +383,6 @@ static int spinand_read_from_cache_op(struct spinand_device *spinand,
 	if (req->datalen)
 		memcpy(req->databuf.in, spinand->databuf + req->dataoffs,
 		       req->datalen);
 
-	if (req->ooblen) {
-		if (req->mode == MTD_OPS_AUTO_OOB)
-			mtd_ooblayout_get_databytes(mtd, req->oobbuf.in,
-						    spinand->oobbuf,
-						    req->ooboffs,
-						    req->ooblen);
-		else
-			memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs,
-			       req->ooblen);
-	}
-
 	return 0;
 }
@@ -393,7 +390,7 @@ static int spinand_write_to_cache_op(struct spinand_device *spinand,
 				      const struct nand_page_io_req *req)
 {
 	struct nand_device *nand = spinand_to_nand(spinand);
-	struct mtd_info *mtd = nanddev_to_mtd(nand);
+	struct mtd_info *mtd = spinand_to_mtd(spinand);
 	struct spi_mem_dirmap_desc *wdesc;
 	unsigned int nbytes, column = 0;
 	void *buf = spinand->databuf;
@@ -405,9 +402,12 @@ static int spinand_write_to_cache_op(struct spinand_device *spinand,
 	 * must fill the page cache entirely even if we only want to program
 	 * the data portion of the page, otherwise we might corrupt the BBM or
 	 * user data previously programmed in OOB area.
+	 *
+	 * Only reset the data buffer manually, the OOB buffer is prepared by
+	 * the ECC engine's ->prepare_io_req() callback.
 	 */
 	nbytes = nanddev_page_size(nand) + nanddev_per_page_oobsize(nand);
-	memset(spinand->databuf, 0xff, nbytes);
+	memset(spinand->databuf, 0xff, nanddev_page_size(nand));
 
 	if (req->datalen)
 		memcpy(spinand->databuf + req->dataoffs, req->databuf.out,
@@ -523,12 +523,17 @@ static int spinand_lock_block(struct spinand_device *spinand, u8 lock)
 }
 
 static int spinand_read_page(struct spinand_device *spinand,
-			     const struct nand_page_io_req *req,
-			     bool ecc_enabled)
+			     const struct nand_page_io_req *req)
 {
+	struct nand_device *nand = spinand_to_nand(spinand);
 	u8 status;
 	int ret;
 
+	ret = nand_ecc_prepare_io_req(nand, (struct nand_page_io_req *)req,
+				      spinand->oobbuf);
+	if (ret)
+		return ret;
+
 	ret = spinand_load_page_op(spinand, req);
 	if (ret)
 		return ret;
@@ -537,22 +542,28 @@ static int spinand_read_page(struct spinand_device *spinand,
 	if (ret < 0)
 		return ret;
 
+	spinand_ondie_ecc_save_status(nand, status);
+
 	ret = spinand_read_from_cache_op(spinand, req);
 	if (ret)
 		return ret;
 
-	if (!ecc_enabled)
-		return 0;
-
-	return spinand_check_ecc_status(spinand, status);
+	return nand_ecc_finish_io_req(nand, (struct nand_page_io_req *)req,
+				      spinand->oobbuf);
 }
 
 static int spinand_write_page(struct spinand_device *spinand,
 			      const struct nand_page_io_req *req)
 {
+	struct nand_device *nand = spinand_to_nand(spinand);
 	u8 status;
 	int ret;
 
+	ret = nand_ecc_prepare_io_req(nand, (struct nand_page_io_req *)req,
+				      spinand->oobbuf);
+	if (ret)
+		return ret;
+
 	ret = spinand_write_enable_op(spinand);
 	if (ret)
 		return ret;
@@ -567,9 +578,10 @@ static int spinand_write_page(struct spinand_device *spinand,
 
 	ret = spinand_wait(spinand, &status);
 	if (!ret && (status & STATUS_PROG_FAILED))
-		ret = -EIO;
+		return -EIO;
 
-	return ret;
+	return nand_ecc_finish_io_req(nand, (struct nand_page_io_req *)req,
+				      spinand->oobbuf);
 }
 
 static int spinand_mtd_read(struct mtd_info *mtd, loff_t from,
@@ -579,25 +591,24 @@ static int spinand_mtd_read(struct mtd_info *mtd, loff_t from,
 	struct spinand_device *spinand = mtd_to_spinand(mtd);
 	struct nand_device *nand = mtd_to_nanddev(mtd);
 	unsigned int max_bitflips = 0;
 	struct nand_io_iter iter;
-	bool enable_ecc = false;
+	bool disable_ecc = false;
 	bool ecc_failed = false;
 	int ret = 0;
 
-	if (ops->mode != MTD_OPS_RAW && spinand->eccinfo.ooblayout)
-		enable_ecc = true;
+	if (ops->mode == MTD_OPS_RAW || !spinand->eccinfo.ooblayout)
+		disable_ecc = true;
 
 	mutex_lock(&spinand->lock);
 
 	nanddev_io_for_each_page(nand, NAND_PAGE_READ, from, ops, &iter) {
+		if (disable_ecc)
+			iter.req.mode = MTD_OPS_RAW;
+
 		ret = spinand_select_target(spinand, iter.req.pos.target);
 		if (ret)
 			break;
 
-		ret = spinand_ecc_enable(spinand, enable_ecc);
-		if (ret)
-			break;
-
-		ret = spinand_read_page(spinand, &iter.req, enable_ecc);
+		ret = spinand_read_page(spinand, &iter.req);
 		if (ret < 0 && ret != -EBADMSG)
 			break;
@@ -628,20 +639,19 @@ static int spinand_mtd_write(struct mtd_info *mtd, loff_t to,
 	struct spinand_device *spinand = mtd_to_spinand(mtd);
 	struct nand_device *nand = mtd_to_nanddev(mtd);
 	struct nand_io_iter iter;
-	bool enable_ecc = false;
+	bool disable_ecc = false;
 	int ret = 0;
 
-	if (ops->mode != MTD_OPS_RAW && mtd->ooblayout)
-		enable_ecc = true;
+	if (ops->mode == MTD_OPS_RAW || !mtd->ooblayout)
+		disable_ecc = true;
 
 	mutex_lock(&spinand->lock);
 
 	nanddev_io_for_each_page(nand,
 				 NAND_PAGE_WRITE, to, ops, &iter) {
-		ret = spinand_select_target(spinand, iter.req.pos.target);
-		if (ret)
-			break;
+		if (disable_ecc)
+			iter.req.mode = MTD_OPS_RAW;
 
-		ret = spinand_ecc_enable(spinand, enable_ecc);
+		ret = spinand_select_target(spinand, iter.req.pos.target);
 		if (ret)
 			break;
@@ -671,7 +681,7 @@ static bool spinand_isbad(struct nand_device *nand, const struct nand_pos *pos)
 
 	memset(spinand->oobbuf, 0, 2);
 	spinand_select_target(spinand, pos->target);
-	spinand_read_page(spinand, &req, false);
+	spinand_read_page(spinand, &req);
 
 	if (spinand->oobbuf[0] != 0xff || spinand->oobbuf[1] != 0xff)
 		return true;
@@ -1102,6 +1112,11 @@ static int spinand_init(struct spinand_device *spinand)
 	nand->ecc.defaults.provider = NAND_ON_DIE_ECC_ENGINE;
 	nand->ecc.ondie_engine = &spinand_ondie_ecc_engine;
 
+	spinand_ecc_enable(spinand, false);
+	ret = nanddev_ecc_engine_init(nand);
+	if (ret)
+		goto err_cleanup_nanddev;
+
 	/*
 	 * Right now, we don't support ECC, so let the whole oob
 	 * area available for the user.
@@ -1114,19 +1129,17 @@ static int spinand_init(struct spinand_device *spinand)
 
 	mtd->_erase = spinand_mtd_erase;
 	mtd->_max_bad_blocks = nanddev_mtd_max_bad_blocks;
 
-	if (spinand->eccinfo.ooblayout)
-		mtd_set_ooblayout(mtd, spinand->eccinfo.ooblayout);
-	else
-		mtd_set_ooblayout(mtd, &spinand_noecc_ooblayout);
-
 	ret = mtd_ooblayout_count_freebytes(mtd);
 	if (ret < 0)
-		goto err_cleanup_nanddev;
+		goto err_cleanup_ecc_engine;
 
 	mtd->oobavail = ret;
 
 	return 0;
 
+err_cleanup_ecc_engine:
+	nanddev_ecc_engine_cleanup(nand);
+
 err_cleanup_nanddev:
 	nanddev_cleanup(nand);
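
For readers who have not followed the whole series: with this patch the
SPI-NAND core no longer corrects anything itself, it only brackets each
page I/O with nand_ecc_prepare_io_req() and nand_ecc_finish_io_req() and
lets whatever engine is attached do the work. Below is a minimal sketch
of what an external engine provides. It is illustrative only: the
my_ecc_* names are invented, and the exact callback signatures (in
particular the oobbuf argument, mirrored from the helper calls visible
in this patch) are assumptions based on this version of the series, not
the final in-tree API.

static int my_ecc_prepare_io_req(struct nand_device *nand,
				 struct nand_page_io_req *req, void *oobbuf)
{
	/* Raw accesses bypass correction entirely. */
	if (req->mode == MTD_OPS_RAW)
		return 0;

	/*
	 * On a write, compute the ECC bytes protecting req->databuf.out
	 * and place them in oobbuf according to the engine's OOB layout.
	 * On a read there is nothing to do yet.
	 */
	return 0;
}

static int my_ecc_finish_io_req(struct nand_device *nand,
				struct nand_page_io_req *req, void *oobbuf)
{
	if (req->mode == MTD_OPS_RAW)
		return 0;

	/*
	 * On a read, verify/correct req->databuf.in against the ECC
	 * bytes found in oobbuf; return the maximum number of bitflips
	 * corrected, or -EBADMSG for an uncorrectable page.
	 */
	return 0;
}

static struct nand_ecc_engine_ops my_ecc_engine_ops = {
	.prepare_io_req = my_ecc_prepare_io_req,
	.finish_io_req = my_ecc_finish_io_req,
};

static struct nand_ecc_engine my_ecc_engine = {
	.ops = &my_ecc_engine_ops,
};

Because all OOB shuffling happens behind these two hooks, the core can
drop its own ooblayout handling, which is exactly what the diff above
does.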
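Similarly, the status byte stashed by spinand_ondie_ecc_save_status() is
meant to be consumed on the finish side of the on-die engine. A
hypothetical ->finish_io_req() could look like the sketch below; apart
from the identifiers visible in this patch (spinand_ondie_ecc_conf,
engine_conf->status, spinand_check_ecc_status()), the signature and the
raw/write checks are assumptions.

static int spinand_ondie_ecc_finish_io_req(struct nand_device *nand,
					   struct nand_page_io_req *req,
					   void *oobbuf)
{
	struct spinand_ondie_ecc_conf *engine_conf = nand->ecc.ctx.priv;
	struct spinand_device *spinand = nand_to_spinand(nand);

	/* Nothing to decode when the access was raw or was a write. */
	if (req->mode == MTD_OPS_RAW || req->type == NAND_PAGE_WRITE)
		return 0;

	/*
	 * Translate the chip status saved right after the page read into
	 * a corrected-bitflips count, or -EBADMSG when uncorrectable.
	 */
	return spinand_check_ecc_status(spinand, engine_conf->status);
}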