From patchwork Tue Mar 25 08:20:00 2014
X-Patchwork-Submitter: Lee Jones
X-Patchwork-Id: 3886231
From: Lee Jones <lee.jones@linaro.org>
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: angus.clark@st.com, kernel@stlinux.com, lee.jones@linaro.org,
	linux-mtd@lists.infradead.org, pekon@ti.com,
	computersforpeace@gmail.com, dwmw2@infradead.org
Subject: [RFC 43/47] mtd: nand: stm_nand_bch: read and write functions (BCH)
Date: Tue, 25 Mar 2014 08:20:00 +0000
Message-Id: <1395735604-26706-44-git-send-email-lee.jones@linaro.org>
In-Reply-To: <1395735604-26706-1-git-send-email-lee.jones@linaro.org>
References: <1395735604-26706-1-git-send-email-lee.jones@linaro.org>
Add helper functions for bch_mtd_read() and bch_mtd_write() to handle
multi-page and non-aligned reads and writes respectively.

Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 drivers/mtd/nand/stm_nand_bch.c | 143 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 143 insertions(+)

diff --git a/drivers/mtd/nand/stm_nand_bch.c b/drivers/mtd/nand/stm_nand_bch.c
index 389ccee..bcaed32 100644
--- a/drivers/mtd/nand/stm_nand_bch.c
+++ b/drivers/mtd/nand/stm_nand_bch.c
@@ -507,6 +507,149 @@ static uint8_t bch_write_page(struct nandi_controller *nandi,
 	return status;
 }
 
+/* Helper function for bch_mtd_read to handle multi-page or non-aligned reads */
+static int bch_read(struct nandi_controller *nandi,
+		    loff_t from, size_t len,
+		    size_t *retlen, u_char *buf)
+{
+	struct mtd_ecc_stats stats;
+	uint32_t page_size = nandi->info.mtd.writesize;
+	uint32_t col_offs;
+	loff_t page_mask;
+	loff_t page_offs;
+	int ecc_errs, max_ecc_errs = 0;
+	int page_num;
+	size_t bytes;
+	uint8_t *p;
+	bool bounce = false;
+
+	dev_dbg(nandi->dev, "%s: %llu @ 0x%012llx\n", __func__,
+		(unsigned long long)len, from);
+
+	stats = nandi->info.mtd.ecc_stats;
+	page_mask = (loff_t)page_size - 1;
+	col_offs = (uint32_t)(from & page_mask);
+	page_offs = from & ~page_mask;
+	page_num = (int)(page_offs >> nandi->page_shift);
+
+	while (len > 0) {
+		bytes = min((page_size - col_offs), len);
+
+		if ((bytes != page_size) ||
+		    ((unsigned long)buf & (NANDI_BCH_DMA_ALIGNMENT - 1)) ||
+		    (!virt_addr_valid(buf))) /* vmalloc'd buffer! */
+			bounce = true;
+
+		if (page_num == nandi->cached_page) {
+			memcpy(buf, nandi->page_buf + col_offs, bytes);
+			goto done;
+		}
+
+		p = bounce ?
+			nandi->page_buf : buf;
+
+		ecc_errs = bch_read_page(nandi, page_offs, p);
+		if (bounce)
+			memcpy(buf, p + col_offs, bytes);
+
+		if (ecc_errs < 0) {
+			dev_err(nandi->dev,
+				"%s: uncorrectable error at 0x%012llx\n",
+				__func__, page_offs);
+			nandi->info.mtd.ecc_stats.failed++;
+
+			/* Do not cache uncorrectable pages */
+			if (bounce)
+				nandi->cached_page = -1;
+
+			goto done;
+		}
+
+		if (ecc_errs) {
+			dev_info(nandi->dev,
+				 "%s: corrected %u error(s) at 0x%012llx\n",
+				 __func__, ecc_errs, page_offs);
+
+			nandi->info.mtd.ecc_stats.corrected += ecc_errs;
+
+			if (ecc_errs > max_ecc_errs)
+				max_ecc_errs = ecc_errs;
+		}
+
+		if (bounce)
+			nandi->cached_page = page_num;
+
+done:
+		buf += bytes;
+		len -= bytes;
+
+		if (retlen)
+			*retlen += bytes;
+
+		/* We are now page-aligned */
+		page_offs += page_size;
+		page_num++;
+		col_offs = 0;
+	}
+
+	/* Return '-EBADMSG' on uncorrectable errors */
+	if (nandi->info.mtd.ecc_stats.failed - stats.failed)
+		return -EBADMSG;
+
+	return max_ecc_errs;
+}
+
+/* Helper function for mtd_write, to handle multi-page and non-aligned writes */
+static int bch_write(struct nandi_controller *nandi,
+		     loff_t to, size_t len,
+		     size_t *retlen, const uint8_t *buf)
+{
+	uint32_t page_size = nandi->info.mtd.writesize;
+	int page_num;
+	bool bounce = false;
+	const uint8_t *p = NULL;
+	uint8_t ret;
+
+	dev_dbg(nandi->dev, "%s: %llu @ 0x%012llx\n", __func__,
+		(unsigned long long)len, to);
+
+	BUG_ON(len & (page_size - 1));
+	BUG_ON(to & (page_size - 1));
+
+	if (((unsigned long)buf & (NANDI_BCH_DMA_ALIGNMENT - 1)) ||
+	    !virt_addr_valid(buf)) { /* vmalloc'd buffer!
+				      */
+		bounce = true;
+	}
+
+	page_num = (int)(to >> nandi->page_shift);
+
+	while (len > 0) {
+		if (bounce) {
+			memcpy(nandi->page_buf, buf, page_size);
+			p = nandi->page_buf;
+			nandi->cached_page = -1;
+		} else {
+			p = buf;
+		}
+
+		if (nandi->cached_page == page_num)
+			nandi->cached_page = -1;
+
+		ret = bch_write_page(nandi, to, p);
+		if (ret & NAND_STATUS_FAIL)
+			return -EIO;
+
+		to += page_size;
+		page_num++;
+		buf += page_size;
+		len -= page_size;
+
+		if (retlen)
+			*retlen += page_size;
+	}
+
+	return 0;
+}
+
 /*
  * Hamming-FLEX operations
  */