From patchwork Mon Dec 9 18:27:10 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ezequiel Garcia
X-Patchwork-Id: 3312301
X-Patchwork-Delegate: dan.j.williams@gmail.com
From: Ezequiel Garcia
To:
Cc: Thomas Petazzoni, Gregory Clement, Lior Amsalem, Dan Williams,
	Vinod Koul, Ezequiel Garcia
Subject: [PATCH 2/4] dma: mv_xor: Use whole pages in the memcpy self-test
Date: Mon, 9 Dec 2013 15:27:10 -0300
Message-Id: <1386613630-30767-1-git-send-email-ezequiel.garcia@free-electrons.com>
X-Mailer: git-send-email 1.8.1.5
Sender: dmaengine-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: dmaengine@vger.kernel.org

Since the introduction of dmaengine_unmap_data, the dmaengine core now
assumes that all mappings are created with dma_map_page. Therefore, as
preparation work to introduce dmaengine_unmap_data in this driver, let's
start by using a page-based allocation and mapping scheme in the memcpy
self-test.

The allocation could be done with alloc_page(), but sticking to kmalloc
results in a less intrusive patch. In addition, the size of the test
buffer is increased, since dma_map_page() seems to fail when the source
and destination buffers fall in the same page.

Signed-off-by: Ezequiel Garcia
---
 drivers/dma/mv_xor.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
index 23bcc91..8de4a24 100644
--- a/drivers/dma/mv_xor.c
+++ b/drivers/dma/mv_xor.c
@@ -781,7 +781,6 @@ static void mv_xor_issue_pending(struct dma_chan *chan)
 /*
  * Perform a transaction to verify the HW works.
  */
-#define MV_XOR_TEST_SIZE 2000
 
 static int mv_xor_memcpy_self_test(struct mv_xor_chan *mv_chan)
 {
@@ -793,18 +792,18 @@ static int mv_xor_memcpy_self_test(struct mv_xor_chan *mv_chan)
 	struct dma_async_tx_descriptor *tx;
 	int err = 0;
 
-	src = kmalloc(sizeof(u8) * MV_XOR_TEST_SIZE, GFP_KERNEL);
+	src = kmalloc(sizeof(u8) * PAGE_SIZE, GFP_KERNEL);
 	if (!src)
 		return -ENOMEM;
 
-	dest = kzalloc(sizeof(u8) * MV_XOR_TEST_SIZE, GFP_KERNEL);
+	dest = kzalloc(sizeof(u8) * PAGE_SIZE, GFP_KERNEL);
 	if (!dest) {
 		kfree(src);
 		return -ENOMEM;
 	}
 
 	/* Fill in src buffer */
-	for (i = 0; i < MV_XOR_TEST_SIZE; i++)
+	for (i = 0; i < PAGE_SIZE; i++)
 		((u8 *) src)[i] = (u8)i;
 
 	dma_chan = &mv_chan->dmachan;
@@ -813,14 +812,14 @@ static int mv_xor_memcpy_self_test(struct mv_xor_chan *mv_chan)
 		goto out;
 	}
 
-	dest_dma = dma_map_single(dma_chan->device->dev, dest,
-				  MV_XOR_TEST_SIZE, DMA_FROM_DEVICE);
+	dest_dma = dma_map_page(dma_chan->device->dev, virt_to_page(dest), 0,
+				PAGE_SIZE, DMA_FROM_DEVICE);
 
-	src_dma = dma_map_single(dma_chan->device->dev, src,
-				 MV_XOR_TEST_SIZE, DMA_TO_DEVICE);
+	src_dma = dma_map_page(dma_chan->device->dev, virt_to_page(src), 0,
+			       PAGE_SIZE, DMA_TO_DEVICE);
 
 	tx = mv_xor_prep_dma_memcpy(dma_chan, dest_dma, src_dma,
-				    MV_XOR_TEST_SIZE, 0);
+				    PAGE_SIZE, 0);
 	cookie = mv_xor_tx_submit(tx);
 	mv_xor_issue_pending(dma_chan);
 	async_tx_ack(tx);
@@ -835,8 +834,8 @@ static int mv_xor_memcpy_self_test(struct mv_xor_chan *mv_chan)
 	}
 
 	dma_sync_single_for_cpu(dma_chan->device->dev, dest_dma,
-				MV_XOR_TEST_SIZE, DMA_FROM_DEVICE);
-	if (memcmp(src, dest, MV_XOR_TEST_SIZE)) {
+				PAGE_SIZE, DMA_FROM_DEVICE);
+	if (memcmp(src, dest, PAGE_SIZE)) {
 		dev_err(dma_chan->device->dev,
 			"Self-test copy failed compare, disabling\n");
 		err = -ENODEV;
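
For readers who are less familiar with the pattern the patch switches to, below
is a minimal, illustrative sketch (not part of the patch; the helper names
map_one_buf and selftest_map_example are made up) of how a kmalloc'ed,
page-sized buffer can be mapped with dma_map_page() and later unmapped:

/*
 * Illustrative sketch only -- not taken from the driver.  It shows the
 * kmalloc() + virt_to_page() + dma_map_page() scheme the self-test now
 * relies on.
 */
#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/slab.h>

static dma_addr_t map_one_buf(struct device *dev, void *buf,
			      enum dma_data_direction dir)
{
	/*
	 * The patch passes an offset of 0 because a PAGE_SIZE kmalloc()
	 * allocation is normally page aligned; offset_in_page() is used
	 * here to keep the helper correct for any buffer.
	 */
	return dma_map_page(dev, virt_to_page(buf), offset_in_page(buf),
			    PAGE_SIZE, dir);
}

static int selftest_map_example(struct device *dev)
{
	void *src = kmalloc(PAGE_SIZE, GFP_KERNEL);
	dma_addr_t src_dma;

	if (!src)
		return -ENOMEM;

	src_dma = map_one_buf(dev, src, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, src_dma)) {
		kfree(src);
		return -ENOMEM;
	}

	/* ... submit the DMA memcpy and wait for completion here ... */

	dma_unmap_page(dev, src_dma, PAGE_SIZE, DMA_TO_DEVICE);
	kfree(src);
	return 0;
}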