From patchwork Thu Sep 20 15:44:10 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Arnout Vandecappelle
X-Patchwork-Id: 1486361
From: "Arnout Vandecappelle (Essensium/Mind)"
To: linux-mtd@lists.infradead.org, linux-omap@vger.kernel.org,
	David Woodhouse, Tony Lindgren
Cc: "Arnout Vandecappelle (Essensium/Mind)", Sven Krauss
Subject: [PATCH] mtd: omap2-nand: avoid unaligned DMA accesses, fall back
	on prefetch method
Date: Thu, 20 Sep 2012 17:44:10 +0200
Message-Id: <1348155850-26174-1-git-send-email-arnout@mind.be>
X-Mailer: git-send-email 1.7.10.4
X-Mailing-List: linux-omap@vger.kernel.org

From: "Arnout Vandecappelle (Essensium/Mind)"

The buffers given to the read_buf and write_buf methods are not
necessarily u32-aligned, while the DMA engine is configured with
32-bit accesses.
As a consequence, the DMA engine gives an error which appears in the
log as follows:

  DMA misaligned error with device 4

After this, no accesses to the NAND are possible anymore because the
access never completes. This usually means the system hangs if the
rootfs is in NAND.

To avoid this, use the prefetch method if the buffer is not aligned.

It's difficult to reproduce the error, because the buffers are aligned
most of the time.

This bug and a patch were originally reported by Sven Krauss in
http://article.gmane.org/gmane.linux.drivers.mtd/34548

Signed-off-by: Arnout Vandecappelle (Essensium/Mind)
Cc: Sven Krauss
---
Perhaps a better method would be to fetch the first few unaligned bytes
with the prefetch method, and then continue with DMA. However, since
it's hard to force an unaligned buffer, it's also hard to test that
this method works.
---
 drivers/mtd/nand/omap2.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/mtd/nand/omap2.c b/drivers/mtd/nand/omap2.c
index c719b86..a313e83 100644
--- a/drivers/mtd/nand/omap2.c
+++ b/drivers/mtd/nand/omap2.c
@@ -441,7 +441,7 @@ out_copy:
  */
 static void omap_read_buf_dma_pref(struct mtd_info *mtd, u_char *buf, int len)
 {
-	if (len <= mtd->oobsize)
+	if (len <= mtd->oobsize || !IS_ALIGNED((unsigned long)buf, 4))
 		omap_read_buf_pref(mtd, buf, len);
 	else
 		/* start transfer in DMA mode */
@@ -457,7 +457,7 @@ static void omap_read_buf_dma_pref(struct mtd_info *mtd, u_char *buf, int len)
 static void omap_write_buf_dma_pref(struct mtd_info *mtd,
 					const u_char *buf, int len)
 {
-	if (len <= mtd->oobsize)
+	if (len <= mtd->oobsize || !IS_ALIGNED((unsigned long)buf, 4))
 		omap_write_buf_pref(mtd, buf, len);
 	else
 		/* start transfer in DMA mode */
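For reference, the fallback condition the patch adds can be sketched in
userspace C. This is only an illustration, not the driver code:
IS_ALIGNED() is redefined locally (matching the kernel's definition for
power-of-two alignments), and use_dma() is a hypothetical helper that
mirrors the patch's test for choosing between the DMA and prefetch paths.

```c
#include <stddef.h>
#include <stdint.h>

/* Local stand-in for the kernel's IS_ALIGNED() macro: true when the
 * value is a multiple of the (power-of-two) alignment 'a'. */
#define IS_ALIGNED(x, a)  (((x) & ((a) - 1)) == 0)

/* Hypothetical helper mirroring the patched condition: return 1 when
 * the 32-bit DMA engine may be used, 0 when the driver must fall back
 * to the CPU-driven prefetch path (small OOB-sized transfers, or a
 * buffer that is not u32-aligned). */
int use_dma(const void *buf, int len, int oobsize)
{
	if (len <= oobsize || !IS_ALIGNED((unsigned long)buf, 4))
		return 0;	/* prefetch (PIO) path */
	return 1;		/* DMA path */
}

/* Backing store whose start address is guaranteed 4-byte aligned. */
static uint32_t backing[(2048 + 8) / 4];

/* Example:
 *   use_dma((char *)backing,     2048, 64)  -> DMA: aligned, large
 *   use_dma((char *)backing + 1, 2048, 64)  -> prefetch: misaligned
 *   use_dma((char *)backing,       64, 64)  -> prefetch: OOB-sized
 */
```

Note that the check keys on the buffer's start address only; the patch
takes the simple route of switching the whole transfer to the prefetch
path rather than splitting off the unaligned head, as the commit's
comment after the `---` discusses.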