From patchwork Mon Apr 29 05:14:25 2019
X-Patchwork-Submitter: Daniel Drake
X-Patchwork-Id: 10921203
From: Daniel Drake
To: ulf.hansson@linaro.org
Cc: linux-mmc@vger.kernel.org, linux@endlessm.com, jin.tsai@alcorlink.com, linux@rempel-privat.de, arnd@arndb.de, jgg@mellanox.com
Subject: [PATCH 1/2] Revert "mmc: alcor: enable DMA transfer of large buffers"
Date: Mon, 29 Apr 2019 13:14:25 +0800
Message-Id: <20190429051426.7558-1-drake@endlessm.com>

This reverts commit 57ebb96c293da9f0ec56aba13c5541269a5c10b1.
Usage of the DMA page iterator was problematic here because we were not
considering the offset and length of entries in the scatterlist. Also, after
further discussion, the suggested revised approach is much more similar to
the driver implementation before this commit was applied, so revert it.

Signed-off-by: Daniel Drake
---
 drivers/mmc/host/alcor.c | 88 ++++++++++++++++++++++++----------------
 1 file changed, 53 insertions(+), 35 deletions(-)

diff --git a/drivers/mmc/host/alcor.c b/drivers/mmc/host/alcor.c
index bb4291d50a5d..dccf68e36d9b 100644
--- a/drivers/mmc/host/alcor.c
+++ b/drivers/mmc/host/alcor.c
@@ -54,9 +54,9 @@ struct alcor_sdmmc_host {
 	struct delayed_work timeout_work;
 
 	struct sg_mapping_iter sg_miter;	/* SG state for PIO */
-	struct sg_dma_page_iter sg_diter;	/* SG state for DMA */
 	struct scatterlist *sg;
 	unsigned int blocks;		/* remaining PIO blocks */
+	int sg_count;
 
 	u32 irq_status_sd;
 	unsigned char cur_power_mode;
@@ -117,19 +117,30 @@ static void alcor_reset(struct alcor_sdmmc_host *host, u8 val)
 		dev_err(host->dev, "%s: timeout\n", __func__);
 }
 
-/*
- * Perform DMA I/O of a single page.
- */
 static void alcor_data_set_dma(struct alcor_sdmmc_host *host)
 {
 	struct alcor_pci_priv *priv = host->alcor_pci;
-	dma_addr_t addr;
+	u32 addr;
+
+	if (!host->sg_count)
+		return;
 
-	if (!__sg_page_iter_dma_next(&host->sg_diter))
+	if (!host->sg) {
+		dev_err(host->dev, "have blocks, but no SG\n");
 		return;
+	}
 
-	addr = sg_page_iter_dma_address(&host->sg_diter);
-	alcor_write32(priv, (u32) addr, AU6601_REG_SDMA_ADDR);
+	if (!sg_dma_len(host->sg)) {
+		dev_err(host->dev, "DMA SG len == 0\n");
+		return;
+	}
+
+
+	addr = (u32)sg_dma_address(host->sg);
+
+	alcor_write32(priv, addr, AU6601_REG_SDMA_ADDR);
+	host->sg = sg_next(host->sg);
+	host->sg_count--;
 }
 
 static void alcor_trigger_data_transfer(struct alcor_sdmmc_host *host)
@@ -142,29 +153,12 @@ static void alcor_trigger_data_transfer(struct alcor_sdmmc_host *host)
 		ctrl |= AU6601_DATA_WRITE;
 
 	if (data->host_cookie == COOKIE_MAPPED) {
-		/*
-		 * For DMA transfers, this function is called just once,
-		 * at the start of the operation. The hardware can only
-		 * perform DMA I/O on a single page at a time, so here
-		 * we kick off the transfer with the first page, and expect
-		 * subsequent pages to be transferred upon IRQ events
-		 * indicating that the single-page DMA was completed.
-		 */
-		__sg_page_iter_start(&host->sg_diter.base, data->sg,
-				     data->sg_len, 0);
-
 		alcor_data_set_dma(host);
 		ctrl |= AU6601_DATA_DMA_MODE;
 		host->dma_on = 1;
-		alcor_write32(priv, data->blksz * data->blocks,
+		alcor_write32(priv, data->sg_count * 0x1000,
 			      AU6601_REG_BLOCK_SIZE);
 	} else {
-		/*
-		 * For PIO transfers, we break down each operation
-		 * into several sector-sized transfers. When one sector has
-		 * complete, the IRQ handler will call this function again
-		 * to kick off the transfer of the next sector.
-		 */
 		alcor_write32(priv, data->blksz, AU6601_REG_BLOCK_SIZE);
 	}
 
@@ -239,8 +233,9 @@ static void alcor_prepare_data(struct alcor_sdmmc_host *host,
 	host->data->bytes_xfered = 0;
 	host->blocks = data->blocks;
 	host->sg = data->sg;
+	host->sg_count = data->sg_count;
 	dev_dbg(host->dev, "prepare DATA: sg %i, blocks: %i\n",
-		data->sg_count, host->blocks);
+		host->sg_count, host->blocks);
 
 	if (data->host_cookie != COOKIE_MAPPED)
 		alcor_prepare_sg_miter(host);
@@ -489,6 +484,9 @@ static int alcor_data_irq_done(struct alcor_sdmmc_host *host, u32 intmask)
 		alcor_trf_block_pio(host, false);
 		return 1;
 	case AU6601_INT_DMA_END:
+		if (!host->sg_count)
+			break;
+
 		alcor_data_set_dma(host);
 		break;
 	default:
@@ -525,7 +523,8 @@ static void alcor_data_irq_thread(struct alcor_sdmmc_host *host, u32 intmask)
 	if (alcor_data_irq_done(host, intmask))
 		return;
 
-	if ((intmask & AU6601_INT_DATA_END) || !host->blocks || host->dma_on)
+	if ((intmask & AU6601_INT_DATA_END) || !host->blocks ||
+	    (host->dma_on && !host->sg_count))
 		alcor_finish_data(host);
 }
 
@@ -763,7 +762,8 @@ static void alcor_pre_req(struct mmc_host *mmc,
 	struct alcor_sdmmc_host *host = mmc_priv(mmc);
 	struct mmc_data *data = mrq->data;
 	struct mmc_command *cmd = mrq->cmd;
-	unsigned int sg_len;
+	struct scatterlist *sg;
+	unsigned int i, sg_len;
 
 	if (!data || !cmd)
 		return;
@@ -785,6 +785,11 @@ static void alcor_pre_req(struct mmc_host *mmc,
 	if (data->blksz & 3)
 		return;
 
+	for_each_sg(data->sg, sg, data->sg_len, i) {
+		if (sg->length != AU6601_MAX_DMA_BLOCK_SIZE)
+			return;
+	}
+
 	/* This data might be unmapped at this time */
 	sg_len = dma_map_sg(host->dev, data->sg, data->sg_len,
@@ -1031,13 +1036,26 @@ static void alcor_init_mmc(struct alcor_sdmmc_host *host)
 	mmc->caps2 = MMC_CAP2_NO_SDIO;
 	mmc->ops = &alcor_sdc_ops;
 
-	/*
-	 * Enable large requests through iteration of scatterlist pages.
-	 * Limit to 240 sectors per request like the original vendor driver.
+	/* The hardware does DMA data transfer of 4096 bytes to/from a single
+	 * buffer address. Scatterlists are not supported, but upon DMA
+	 * completion (signalled via IRQ), the original vendor driver does
+	 * then immediately set up another DMA transfer of the next 4096
+	 * bytes.
+	 *
+	 * This means that we need to handle the I/O in 4096 byte chunks.
+	 * Lacking a way to limit the sglist entries to 4096 bytes, we instead
+	 * impose that only one segment is provided, with maximum size 4096,
+	 * which also happens to be the minimum size. This means that the
+	 * single-entry sglist handled by this driver can be handed directly
+	 * to the hardware, nice and simple.
+	 *
+	 * Unfortunately though, that means we only do 4096 bytes I/O per
+	 * MMC command. A future improvement would be to make the driver
+	 * accept sg lists and entries of any size, and simply iterate
+	 * through them 4096 bytes at a time.
 	 */
-	mmc->max_segs = 64;
-	mmc->max_seg_size = 240 * 512;
-	mmc->max_blk_count = 240;
+	mmc->max_segs = AU6601_MAX_DMA_SEGMENTS;
+	mmc->max_seg_size = AU6601_MAX_DMA_BLOCK_SIZE;
 	mmc->max_req_size = mmc->max_seg_size;
 }

From patchwork Mon Apr 29 05:14:26 2019
X-Patchwork-Submitter: Daniel Drake
X-Patchwork-Id: 10921205
From: Daniel Drake
To: ulf.hansson@linaro.org
Cc: linux-mmc@vger.kernel.org, linux@endlessm.com, jin.tsai@alcorlink.com, linux@rempel-privat.de, arnd@arndb.de, jgg@mellanox.com
Subject: [PATCH 2/2] mmc: alcor: work with multiple-entry sglists
Date: Mon, 29 Apr 2019 13:14:26 +0800
Message-Id: <20190429051426.7558-2-drake@endlessm.com>
In-Reply-To: <20190429051426.7558-1-drake@endlessm.com>
References: <20190429051426.7558-1-drake@endlessm.com>

DMA on this hardware is limited to dealing with 4096 bytes at a time.
Previously, the driver was set up accordingly to request single-page DMA
buffers; however, that had the effect of generating a large number of small
MMC requests for data I/O.

Improve the driver to accept multi-entry scatter-gather lists. The size of
each entry is already capped to 4096 bytes (AU6601_MAX_DMA_BLOCK_SIZE),
matching the hardware requirements. Existing driver code already iterates
through remaining sglist entries after each DMA transfer is complete.

Also add some comments to help clarify the situation, and clear up some of
the confusion I had regarding DMA vs PIO.
Testing with dd, this increases write performance from 2MB/s to 10MB/s,
and increases read performance from 4MB/s to 14MB/s.

Signed-off-by: Daniel Drake
Link: http://lkml.kernel.org/r/CAD8Lp47JYdZzbV9F+asNwvSfLF_po_J7ir6R_Vb-Dab21_=Krw@mail.gmail.com
---
 drivers/mmc/host/alcor.c  | 54 ++++++++++++++++++++++++-------------
 include/linux/alcor_pci.h |  2 +-
 2 files changed, 37 insertions(+), 19 deletions(-)

diff --git a/drivers/mmc/host/alcor.c b/drivers/mmc/host/alcor.c
index dccf68e36d9b..f3d7eabb79f7 100644
--- a/drivers/mmc/host/alcor.c
+++ b/drivers/mmc/host/alcor.c
@@ -117,6 +117,9 @@ static void alcor_reset(struct alcor_sdmmc_host *host, u8 val)
 		dev_err(host->dev, "%s: timeout\n", __func__);
 }
 
+/*
+ * Perform DMA I/O of a single page.
+ */
 static void alcor_data_set_dma(struct alcor_sdmmc_host *host)
 {
 	struct alcor_pci_priv *priv = host->alcor_pci;
@@ -153,12 +156,26 @@ static void alcor_trigger_data_transfer(struct alcor_sdmmc_host *host)
 		ctrl |= AU6601_DATA_WRITE;
 
 	if (data->host_cookie == COOKIE_MAPPED) {
+		/*
+		 * For DMA transfers, this function is called just once,
+		 * at the start of the operation. The hardware can only
+		 * perform DMA I/O on a single page at a time, so here
+		 * we kick off the transfer with the first page, and expect
+		 * subsequent pages to be transferred upon IRQ events
+		 * indicating that the single-page DMA was completed.
+		 */
 		alcor_data_set_dma(host);
 		ctrl |= AU6601_DATA_DMA_MODE;
 		host->dma_on = 1;
 		alcor_write32(priv, data->sg_count * 0x1000,
 			      AU6601_REG_BLOCK_SIZE);
 	} else {
+		/*
+		 * For PIO transfers, we break down each operation
+		 * into several sector-sized transfers. When one sector has
+		 * complete, the IRQ handler will call this function again
+		 * to kick off the transfer of the next sector.
+		 */
 		alcor_write32(priv, data->blksz, AU6601_REG_BLOCK_SIZE);
 	}
 
@@ -776,8 +793,12 @@ static void alcor_pre_req(struct mmc_host *mmc,
 		return;
 
 	/*
 	 * We don't do DMA on "complex" transfers, i.e. with
-	 * non-word-aligned buffers or lengths. Also, we don't bother
-	 * with all the DMA setup overhead for short transfers.
+	 * non-word-aligned buffers or lengths. A future improvement
+	 * could be made to use temporary DMA bounce-buffers when these
+	 * requirements are not met.
+	 *
+	 * Also, we don't bother with all the DMA setup overhead for
+	 * short transfers.
 	 */
 	if (data->blocks * data->blksz < AU6601_MAX_DMA_BLOCK_SIZE)
 		return;
@@ -788,6 +809,8 @@ static void alcor_pre_req(struct mmc_host *mmc,
 	for_each_sg(data->sg, sg, data->sg_len, i) {
 		if (sg->length != AU6601_MAX_DMA_BLOCK_SIZE)
 			return;
+		if (sg->offset != 0)
+			return;
 	}
 
 	/* This data might be unmapped at this time */
@@ -1037,26 +1060,21 @@ static void alcor_init_mmc(struct alcor_sdmmc_host *host)
 	mmc->ops = &alcor_sdc_ops;
 
 	/* The hardware does DMA data transfer of 4096 bytes to/from a single
-	 * buffer address. Scatterlists are not supported, but upon DMA
-	 * completion (signalled via IRQ), the original vendor driver does
-	 * then immediately set up another DMA transfer of the next 4096
-	 * bytes.
-	 *
-	 * This means that we need to handle the I/O in 4096 byte chunks.
-	 * Lacking a way to limit the sglist entries to 4096 bytes, we instead
-	 * impose that only one segment is provided, with maximum size 4096,
-	 * which also happens to be the minimum size. This means that the
-	 * single-entry sglist handled by this driver can be handed directly
-	 * to the hardware, nice and simple.
+	 * buffer address. Scatterlists are not supported at the hardware
+	 * level, however we can work with them at the driver level,
+	 * provided that each segment is exactly 4096 bytes in size.
+	 * Upon DMA completion of a single segment (signalled via IRQ), we
+	 * immediately proceed to transfer the next segment from the
+	 * scatterlist.
 	 *
-	 * Unfortunately though, that means we only do 4096 bytes I/O per
-	 * MMC command. A future improvement would be to make the driver
-	 * accept sg lists and entries of any size, and simply iterate
-	 * through them 4096 bytes at a time.
+	 * The overall request is limited to 240 sectors, matching the
+	 * original vendor driver.
 	 */
 	mmc->max_segs = AU6601_MAX_DMA_SEGMENTS;
 	mmc->max_seg_size = AU6601_MAX_DMA_BLOCK_SIZE;
-	mmc->max_req_size = mmc->max_seg_size;
+	mmc->max_blk_count = 240;
+	mmc->max_req_size = mmc->max_blk_count * mmc->max_blk_size;
+
+	dma_set_max_seg_size(host->dev, mmc->max_seg_size);
 }
 
 static int alcor_pci_sdmmc_drv_probe(struct platform_device *pdev)
diff --git a/include/linux/alcor_pci.h b/include/linux/alcor_pci.h
index da973e8a2da8..4416df597526 100644
--- a/include/linux/alcor_pci.h
+++ b/include/linux/alcor_pci.h
@@ -23,7 +23,7 @@
 #define AU6601_BASE_CLOCK		31000000
 #define AU6601_MIN_CLOCK		150000
 #define AU6601_MAX_CLOCK		208000000
-#define AU6601_MAX_DMA_SEGMENTS	1
+#define AU6601_MAX_DMA_SEGMENTS	64
 #define AU6601_MAX_PIO_SEGMENTS	1
 #define AU6601_MAX_DMA_BLOCK_SIZE	0x1000
 #define AU6601_MAX_PIO_BLOCK_SIZE	0x200